# Quantum Gravity at the Planck Length

## 1 Introduction

For obvious reasons, the SLAC Summer Institute is usually concerned with the three particle interactions. It is very appropriate, though, that the subject of the 1998 SSI is gravity, because the next step in understanding the weak, strong, and electromagnetic interactions will probably require the inclusion of gravity as well. There are many reasons for making this statement, but I will focus on two, one based on supersymmetry and one based on the unification of the couplings.

What is supersymmetry? I will answer this in more detail later, but for now let me give two short answers:

* A lot of new particles.
* A new spacetime symmetry.

Answer A is the pragmatic one for a particle experimentalist or phenomenologist. In answer B, I am distinguishing internal symmetries like flavor and electric charge, which act on the fields at each point of spacetime, from symmetries like Lorentz invariance that move the fields from one point to another. Supersymmetry is of the second type. If the widely anticipated discovery of supersymmetry actually takes place in the next few years, it not only means a lot more particles to discover. It also will be the first new spacetime symmetry since the discovery of relativity, bringing the structure of the particle interactions closer to that of gravity; in a sense, supersymmetry is a partial unification of particle physics and gravity.

The unification of the couplings is depicted in figure 1. This is usually drawn with a rather different vertical scale. Here the scale is compressed so that the three gauge couplings can hardly be distinguished, but this makes room for the fourth coupling, the gravitational coupling. Newton’s constant is dimensionful, so what is actually drawn is the dimensionless coupling $`G_\mathrm{N}E^2`$ with $`E`$ the energy scale and $`\hbar =c=1`$. This dimensionless gravitational coupling depends strongly on energy, in contrast to the slow running of the gauge couplings. It is well known that the three gauge couplings unify to good accuracy (in supersymmetric theories) at an energy around $`2\times 10^{16}`$ GeV. Note however that the fourth coupling does not miss by much, a factor of 20 or 30 in energy scale. This is another way of saying that the grand unification scale is near the Planck scale. In fact, the Planck scale $`M_\mathrm{P}=2\times 10^{19}`$ GeV is deceptively high because of various factors like $`4\pi `$ that must be included. Figure 1 suggests that the grand unification of the three gauge interactions will actually be a very grand unification including gravity as well. The failure of the four couplings to meet exactly could be due to any of several small effects, which I will discuss briefly later. (I will also discuss briefly the idea of low energy string theory, in which figure 1 is drastically changed.)

Figure 1 also shows why the phenomenologies of the gauge interactions and gravity are so different: at accessible energies the coupling strengths are very different. For the same reason, the energy scale where the couplings meet is far removed from experiment. Nevertheless, we believe that we can deduce much of what happens at this scale, and this is the subject of my lectures. At the end I will briefly discuss experimental signatures, and Michael Peskin and Nima Arkani-Hamed will discuss some of these in more detail.
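To make the ‘factor of 20 or 30’ concrete, here is a minimal numerical sketch of the dimensional analysis behind figure 1. The inputs, a reduced Planck mass of $`2.4\times 10^{18}`$ GeV (absorbing the $`4\pi `$-type factors mentioned above) and a unified gauge coupling of about 1/25, are illustrative assumptions rather than numbers quoted in the lecture.

```python
# Where does the dimensionless gravitational coupling G_N E^2 = (E/M_P)^2
# reach gauge-coupling strength?  (Back-of-the-envelope sketch; the inputs
# are illustrative assumptions, not numbers from the lecture.)
M_P_REDUCED_GEV = 2.4e18      # reduced Planck mass, hbar = c = 1
ALPHA_GUT = 1.0 / 25.0        # rough unified gauge coupling
M_GUT_GEV = 2e16              # scale where the three gauge couplings meet

def alpha_grav(energy_gev: float) -> float:
    """Dimensionless gravitational coupling at energy E."""
    return (energy_gev / M_P_REDUCED_GEV) ** 2

e_meet = M_P_REDUCED_GEV * ALPHA_GUT ** 0.5   # alpha_grav(e_meet) = 1/25

print(f"alpha_grav at the GUT scale: {alpha_grav(M_GUT_GEV):.1e}")  # ~7e-5
print(f"gravity reaches 1/25 at E ~ {e_meet:.1e} GeV")              # ~5e17
print(f"miss, in energy scale: factor {e_meet / M_GUT_GEV:.0f}")    # ~24
```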
In section 2 I discuss the idea that spacetime has more than four dimensions: first why this is not such a radical idea, and then why it is actually a good idea. In section 3 I review string theory as it stood a few years ago: the motivations from the short distance problem of gravity, from earlier unifying ideas, and from the search for new mathematical ideas, as well as the main problem, vacuum selection. In section 4 I introduce the idea of duality, including weak–strong and electric–magnetic. I explain how supersymmetry gives information about strongly coupled systems. I then describe the consequences for string theory, including string duality, the eleventh dimension, D-branes, and M-theory. In section 5 I develop an alternative theory of quantum gravity, only to find that ‘all roads lead to string theory.’ In section 6 I explain how the new methods have solved some of the puzzles of black hole quantum mechanics. This in turn leads to the Maldacena dualities, which give detailed new information about supersymmetric gauge field theories. In section 7 I discuss some of the ways that the new ideas might affect particle physics, through the unification of the couplings and the possibility of low energy string theory and large new dimensions. In section 8 I summarize and present the outlook.

## 2 Beyond Four Dimensions

Gravity is the dynamics of spacetime. It is very likely that at lengths near the Planck scale ($`L_\mathrm{P}=10^{-33}`$ cm) it becomes evident that spacetime has more than the four dimensions that are visible to us. That is, spacetime is as shown in figure 2a, with four large dimensions (including time) and some additional number of small and highly curved spatial dimensions. A physicist who probes this spacetime with wavelengths long compared to the size of the small dimensions sees only the large ones, as in figure 2b. I will first give two reasons why this is a natural possibility to consider, and then explain why it is a good idea.

The first argument is cosmological. The universe is expanding, so the dimensions that we see were once smaller and highly curved. It may have been that initially there were more than four small dimensions, and that only the four that are evident to us began to expand. That is, we know of no reason that the initial expansion had to be isotropic.

The second argument is based on symmetry breaking. Most of the symmetry in nature is spontaneously broken or otherwise hidden from us. For example, of the $`SU(3)\times SU(2)\times U(1)`$ gauge symmetries, only a $`U(1)`$ is visible. Similarly the flavor symmetry is partly broken, as are the symmetries in many condensed matter systems. This symmetry breaking is part of what makes physics so rich: if all of the symmetry of the underlying theory were unbroken, it would be much easier to figure out what that theory is! Suppose that this same symmetry breaking principle holds for the spacetime symmetries. The visible spacetime symmetry is $`SO(3,1)`$, the Lorentz invariance of special relativity consisting of the boosts and rotations. A larger symmetry would be $`SO(d,1)`$ for $`d>3`$, the Lorentz invariance of $`d+1`$ spacetime dimensions. Figure 2 shows how this symmetry would be broken by the geometry of spacetime.

So extra dimensions are cosmologically plausible, and are a natural extension of the familiar phenomenon of spontaneous symmetry breaking. In addition, they may be responsible for some of the physics that we see in nature.
To see why this is so, consider first the following cartoon version of grand unification. The traceless $`3\times 3`$ and $`2\times 2`$ matrices for the strong and weak gauge interactions fit into a $`5\times 5`$ matrix, with room for an extra $`U(1)`$ down the diagonal:

$$\left[\begin{array}{cc}3\times 3&X,Y\\ X,Y&2\times 2\end{array}\right]$$ (1)

Now let us try to do something similar, but for gravity and electromagnetism. Gravity is described by a metric $`g_{\mu \nu }`$, which is a $`4\times 4`$ matrix, and electromagnetism by a 4-vector $`A_\mu `$. These fit into a $`5\times 5`$ matrix:

$$\left[\begin{array}{cc}g_{\mu \nu }&A_\mu \\ A_\nu &\varphi \end{array}\right]$$ (2)

In fact, if one takes Einstein’s equations in five dimensions, and writes them out in terms of the components (2), they become Einstein’s equations for the four-dimensional metric $`g_{\mu \nu }`$ plus Maxwell’s equation for the vector potential $`A_\mu `$. This elegant unification of gravity and electromagnetism is known as Kaluza–Klein theory. If one looks at the Dirac equation in the higher-dimensional space, one finds a possible explanation for another of the striking patterns in nature, the existence of quark and lepton generations. That is, a single spinor field in the higher-dimensional space generally reduces to several four-dimensional spinor fields, with repeated copies of the same gauge quantum numbers.

Unification is accompanied by new physics. In the case of grand unification this includes the $`X`$ and $`Y`$ bosons, which mediate proton decay. In Kaluza–Klein theory it includes the dilaton $`\varphi `$, which is the last element in the matrix (2). I will discuss the dilaton further later, but for now let me note that it is likely not to have observable effects. Of course, in Kaluza–Klein theory there is more new physics: the extra dimension(s)!

Finally, let me consider the threshold behavior as one passes from figure 2b to figure 2a. At energies greater than the inverse size of the small dimensions, one can excite particles moving in those directions. The states are quantized because of the finite size, and each state of motion looks, from the lower-dimensional point of view, like a different kind of particle. Thus the signature of passing such a threshold is a whole tower of new particles, with a spectrum characteristic of the shape of the extra dimensions.

## 3 String Theory

### 3.1 The UV Problem

To motivate string theory, I will start with the UV problem of quantum gravity. A very similar problem arose in the early days of the weak interaction. The original Fermi theory was based on an interaction of four fermionic fields at a spacetime point as depicted in figure 3a. The Fermi coupling constant $`G_\mathrm{F}`$ has units of length-squared, or inverse energy-squared. In a process with a characteristic energy $`E`$ the effective dimensionless coupling is then $`G_\mathrm{F}E^2`$. It follows that at sufficiently high energy the coupling becomes arbitrarily strong, and this also implies divergences in the perturbation theory. The second order weak amplitude of figure 3b is dimensionally of the form

$$G_\mathrm{F}^2\int ^{\infty }E^{\prime }\,dE^{\prime },$$ (3)

where $`E^{\prime }`$ is the energy of the virtual state in the second order process, and this diverges at high energy. In position space the divergence comes when the two weak interactions occur at the same spacetime point (high energy = short distance).
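As an aside, the scale at which the pointlike Fermi description must break down follows from this dimensional analysis alone. Here is a minimal sketch using the measured Fermi constant (a number not quoted in the lecture):

```python
# A rough sketch of when the Fermi theory's dimensionless coupling
# G_F E^2 becomes of order one, signalling the breakdown of the
# pointlike four-fermion description.  (Numbers are mine, not the
# lecture's.)
G_F = 1.166e-5          # Fermi constant in GeV^-2

def g_eff(energy_gev: float) -> float:
    """Effective dimensionless four-fermion coupling at energy E."""
    return G_F * energy_gev ** 2

e_breakdown = G_F ** -0.5   # E where G_F E^2 ~ 1
print(f"G_F E^2 = 1 at E ~ {e_breakdown:.0f} GeV")   # ~300 GeV
# The actual resolution, W exchange, appears well below this, at m_W ~ 80 GeV.
```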
The divergences become worse at each higher order of perturbation theory and so cannot be controlled even with renormalization. Such a divergence suggests that the theory one is working with is only valid up to some energy scale, beyond which new physics appears. The new physics should have the effect of smearing out the interaction in spacetime and so softening the high energy behavior. One might imagine that this could be done in many ways, but in fact it is quite difficult to do without spoiling Lorentz invariance or causality; this is because Lorentz invariance requires that if the interaction is spread out in space it is also spread out in time. The solution to the short-distance problem of the weak interaction is not quite unique, but combined with two of the broad features of the weak interaction — its $`V-A`$ structure and its universal coupling to different quarks and leptons — a unique solution emerges. This is depicted in figure 3c, where the four-fermi interaction is resolved into the exchange of a vector boson. Moreover, this vector boson must be of a very specific kind, coming from a spontaneously broken gauge invariance. And indeed, this is the way that nature works. (It could also have been that the divergences are an artifact of perturbation theory but do not appear in the exact amplitudes. This is a logical possibility, a ‘nontrivial UV fixed point.’ Although possible, it seems unlikely, and it is not what happens in the case of the weak interaction.)

For gravity the discussion is much the same. The gravitational interaction is depicted in figure 4a. As we have already noted in discussing figure 1, the gravitational coupling $`G_\mathrm{N}`$ has units of length-squared and so the dimensionless coupling is $`G_\mathrm{N}E^2`$. This grows large at high energy and gives again a nonrenormalizable perturbation theory. (Note that the bad gravitational interaction of figure 4a is the same graph as the smeared-out weak interaction of figure 3c. However, its high energy behavior is worse because gravity couples to energy rather than charge.) Again the natural suspicion is that new short-distance physics smears out the interaction, and again there is only one known way to do this. It involves a bigger step than in the case of the weak interaction: it requires that at the Planck length the graviton and other particles turn out to be not points but one-dimensional objects, loops of ‘string,’ figure 5a. Their spacetime histories are then two-dimensional surfaces as shown in figure 4b.

At first sight this is an odd idea. It is not obvious why it should work and not other possibilities. It may simply be that we have not been imaginative enough, but because UV problems are so hard to solve we should consider carefully this one solution that we have found. And in this case the idea becomes increasingly attractive as we consider it.

### 3.2 All Roads Lead to String Theory

The basic idea is that the string has different states with the properties of different particles. Its internal vibrations are quantized, and depending on which oscillators are excited it can look like a scalar, a gauge boson, a graviton, or a fermion. Thus the full Standard Model plus gravity can be obtained from this one building block. The basic string interaction is as in figure 5c, one string splitting in two or the reverse. This one interaction, depending on the states of the strings involved, can look like any of the interactions in nature: gauge, gravitational, Yukawa.
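To illustrate how different states of one string look like different particles, here is a minimal sketch using the textbook bosonic open-string mass formula $`M_N^2=(N-1)/\alpha ^{\prime }`$. This formula is standard but is not quoted in the lecture, and the superstring spectrum differs (in particular, the tachyon is absent):

```python
# One string, many particles: the textbook bosonic open-string mass
# formula M_N^2 = (N - 1)/alpha', evaluated at the first few oscillator
# levels.  (Standard formula, not quoted in the lecture; the superstring
# spectrum differs.)
ALPHA_PRIME = 1.0   # string scale set to 1

def mass_squared(level: int) -> float:
    """Mass^2 of the open-string state at oscillator level N."""
    return (level - 1) / ALPHA_PRIME

for level, name in [(0, "tachyon (absent in the superstring)"),
                    (1, "massless vector -- a gauge boson"),
                    (2, "first massive level")]:
    print(f"N={level}: M^2 = {mass_squared(level):+.1f}  ({name})")
```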
A promising fact is that string theory is unique: we have known for some time that there are only a small number of string theories, and now have learned that these are actually all the same. (For now, this does not lead to predictive power because the theory has many vacuum states, with different physics.) Further, string theory dovetails very nicely with previous ideas for extending the Standard Model. First, string theory automatically incorporates supersymmetry: it turns out that in order for the theory to be consistent the strings must move in a ‘superspace’ which has ‘fermionic’ dimensions in addition to the ordinary ones. Second, the spacetime symmetry of string theory is $`SO(9,1)`$, meaning that the strings move in ten dimensions. As I have already explained, this is a likely way to explain some of the features of nature, and it is incorporated in string theory. Third, string theory can incorporate ordinary grand unification: some of the simplest string vacua have the same gauge groups and matter that one finds in unifying the Standard Model. From another point of view, if one searches for higher symmetries or new mathematical structures that might be useful in physics, one again finds many connections to string theory.

It is worthwhile to note that these three kinds of motivation — solving the divergence problem, explaining the broad patterns in the Standard Model, and the connection with mathematics — were also present in the weak interaction. Weinberg emphasized the divergence problem as I have done. Salam was more guided by the idea that non-Abelian gauge theory was a beautiful mathematical structure that should be incorporated in physics. Experiment gave no direct indication that the weak interaction was anything but the pointlike interaction of figure 3a, and no direct clue as to the new physics that smears it out, just as today it gives no direct indication of what lies beyond the Standard Model. But it did show certain broad patterns — universality and the $`V-A`$ structure — that were telltale signs that the weak interaction is due to exchange of a gauge boson. It appears that nature is kind to us, in providing many trails to a correct theory.

### 3.3 Vacuum Selection and Dynamics

So how do we go from explaining broad patterns to making precise predictions? The main problem is that string theory has many approximately stable vacua, corresponding to different shapes and sizes for the rolled-up dimensions. The physics that we see depends on which of these vacua we are in. Thus we need to understand the dynamics of the theory in great detail, so as to determine which vacua are truly stable, and how cosmology selects one among the stable vacua.

Until recently our understanding of string theory was based entirely on perturbation theory, the analog of the Feynman graph expansion, describing small numbers of strings interacting weakly. However, we know from quantum field theory that there are many important dynamical effects that arise when we have large numbers of degrees of freedom and/or strong couplings. Some of these effects, such as confinement, the Higgs mechanism, and dynamical symmetry breaking, play an essential role in the Standard Model. If one did not know about them, one could not understand how the Standard Model Hamiltonian actually gives rise to the physics that we see. String theory is seemingly much more complicated than field theory, and so undoubtedly has new dynamical effects of its own.
I am sure that all the experimentalists would like to know, “How do I falsify string theory? How do I make it go away and not come back?” Well, you can’t. Not yet. To understand why, remember that in the ’50s Wolfgang Pauli thought that he had falsified Yang–Mills theory, because it seemed to predict long range forces not seen in nature. The field equations for the weak and strong forces are closely parallel to those for electromagnetism, and so apparently of infinite range. It is the dynamical effects, symmetry breaking and confinement, that make these short range forces. Just as one couldn’t falsify Yang–Mills theory in the ’50s, one cannot falsify string theory today. In particular, because we cannot reach the analog of the parton regime where the stringy physics is directly visible, the physics that we see is filtered through a great deal of complicated dynamics.

There is a deeper problem as well. The Feynman graph expansion does not converge, in field theory or string theory. Thus it does not define the theory at finite nonzero coupling. One needs more, the analog of the path integral and renormalization group of field theory. Happily, since 1994 we have many new methods for understanding both field theories and string theory at strong coupling. These have led to steady progress on the questions that we need to answer, and to many new results and many surprises. This progress is the subject of the rest of my lectures.

## 4 Duality in Field and String Theory

### 4.1 Dualities

One important idea in the recent developments is duality. This refers to the equivalence between seemingly distinct physical systems. One starts with different Hamiltonians, and even with different fields, but after solving the theory one finds that the spectra and the transition amplitudes are identical. Often this occurs because a quantum system has more than one classical limit, so that one gets back to the same quantum theory by ‘quantizing’ either classical theory. This phenomenon is common in quantum field theories in two spacetime dimensions. The duality of the Sine-Gordon and Thirring models is one example; the high-temperature–low-temperature duality of the Ising model is another. The great surprise of the recent developments is that it is also common in quantum field theories in four dimensions, and in string theory.

A particularly important phenomenon is weak–strong duality. I have emphasized that perturbation theory does not converge. It gives the asymptotics as the coupling $`g`$ goes to zero, but it misses important physics at finite coupling, and at large coupling it becomes more and more useless. In some cases, though, when $`g`$ becomes very large there is a simple alternate description, a weakly coupled dual theory with $`g^{\prime }=1/g`$. In one sense, as $`g\to \infty `$ the quantum fluctuations of the original fields become very large (non-Gaussian), but one can find a dual set of fields which become more and more classical.

Another important idea is electric–magnetic duality. A striking feature of Maxwell’s equations is the symmetry of the left-hand side under $`𝐄\to 𝐁`$ and $`𝐁\to -𝐄`$. This symmetry suggests that there should be magnetic as well as electric charges. This idea became more interesting with Dirac’s discovery of the quantization condition

$$q_\mathrm{e}q_\mathrm{m}=2\pi n\hbar ,$$ (4)

which relates the quantization of the electric charge (its equal magnitude for protons and electrons) to the existence of magnetic monopoles.
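It is worth checking the arithmetic consequence of (4): if the electric coupling is weak, the minimal magnetic charge is necessarily strong. This is a standard textbook evaluation, not a calculation from the lecture:

```python
# Dirac's condition q_e q_m = 2*pi*n*hbar forces the minimal magnetic
# charge to couple strongly when the electric coupling is weak.
# (Standard textbook numbers, hbar = c = 1.)
import math

ALPHA_EM = 1.0 / 137.036          # fine-structure constant e^2/(4 pi)
e = math.sqrt(4 * math.pi * ALPHA_EM)

g_min = 2 * math.pi / e           # minimal magnetic charge, n = 1
alpha_mag = g_min ** 2 / (4 * math.pi)   # equals 1/(4 alpha_e)

print(f"electric coupling alpha_e ~ {ALPHA_EM:.4f}")
print(f"magnetic coupling alpha_m ~ {alpha_mag:.1f}")   # ~34: strong
```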
A further key step was the discovery by ’t Hooft and Polyakov that grand unified theories predict magnetic monopoles. These monopoles are solitons, smooth classical field configurations. Thus they look rather different from the electric charges, which are the basic quanta: the latter are light, pointlike, and weakly coupled while monopoles are heavy, ‘fuzzy,’ and (as a consequence of the Dirac quantization) strongly coupled. In 1977 Montonen and Olive proposed that in certain supersymmetric unified theories the situation at strong coupling would be reversed: the electric objects would be big, heavy, and strongly coupled and the magnetic objects small, light, and weakly coupled. The symmetry of the sourceless Maxwell’s equations would then be extended to the interacting theory, with an inversion of the coupling constant. Thus electric–magnetic duality would be a special case of weak–strong duality, with the magnetically charged fields being the dual variables for the strongly coupled theory. The evidence for this conjecture was circumstantial: no one could actually find the dual magnetic variables. For this reason the reaction to this conjecture was skeptical for many years. In fact the evidence remains circumstantial, but in recent years it has become so much stronger that the existence of this duality is in little doubt.

### 4.2 Supersymmetry and Strong Coupling

The key that makes it possible to discuss the strongly coupled theory is supersymmetry. One way to think about supersymmetry is in terms of extra dimensions — but unlike the dimensions that we see, and unlike the small dimensions discussed earlier, these dimensions are ‘fermionic.’ In other words, the coordinates for ordinary dimensions are real numbers and so commute with each other: they are ‘bosonic;’ the fermionic coordinates instead satisfy

$$\theta _i\theta _j=-\theta _j\theta _i.$$ (5)

For $`i=j`$ this implies that $`\theta _i^2=0`$, so in some sense these dimensions have zero size. This may sound rather mysterious, but in practice the effect is the same as having just the bosonic dimensions but with an extra symmetry that relates the masses and couplings of fermions to those of bosons.

To understand how supersymmetry gives new information about strong coupling, let us recall the distinction between symmetry and dynamics. Symmetry tells us that some quantities (masses or amplitudes) vanish, and others are equal to one another. To actually determine the values of the masses or amplitudes is a dynamical question. In fact, supersymmetry gives some information that one would normally consider dynamical. To see this, let us consider in quantum theory the Hamiltonian operator $`H`$, the charge operator $`G`$ associated with an ordinary symmetry like electric charge or baryon number, and the operator $`Q`$ associated with a supersymmetry. The statement that $`G`$ is a symmetry means that it commutes with the Hamiltonian,

$$[H,G]=0.$$ (6)

For supersymmetry one has the same,

$$[H,Q]=0,$$ (7)

but there is an additional relation

$$Q^2=H+G,$$ (8)

in which the Hamiltonian and ordinary symmetries appear on the right. There are usually several $`G`$s and several $`Q`$s, so that there should be additional indices and constants in these equations, but this schematic form is enough to explain the point. It is this second equation that gives the extra information.
To see one example of this, consider a state $`|\psi \rangle `$ having the special property that it is neutral under supersymmetry:

$$Q|\psi \rangle =0.$$ (9)

To be precise, since we have said that there are usually several $`Q`$s, we are interested in states that are neutral under at least one $`Q`$ but usually not all of them. These are known as BPS (Bogomolnyi–Prasad–Sommerfield) states. Now take the expectation value of the second relation (8) in this state:

$$\langle \psi |Q^2|\psi \rangle =\langle \psi |H|\psi \rangle +\langle \psi |G|\psi \rangle .$$ (10)

The left side vanishes by the BPS property, while the two terms on the right are the energy $`E`$ of the state $`|\psi \rangle `$ and its charge $`q`$ under the operator $`G`$. Thus

$$E=q,$$ (11)

and so the energy of the state is determined in terms of its charge. But the energy is a dynamical quantity: even in quantum mechanics we must solve Schrödinger’s equation to obtain it. Here, it is determined entirely by symmetry information. (There is a constant of proportionality missing in (11), because we omitted it from (8) for simplicity, but it is determined by the symmetry.) Since the calculation of $`E`$ uses only symmetry information, it does not depend on any coupling being weak: it is an exact property of the theory. Thus we know something about the spectrum at strong coupling.

Actually, this argument only gives the allowed values of $`E`$, not the ones that actually appear in the spectrum. The latter requires an extra step: we first calculate the spectrum of BPS states at weak coupling, and then adiabatically continue it: the BPS property enables us to follow these states to strong coupling. The BPS states are only a small part of the spectrum, but by using this and similar types of information from supersymmetry, together with general properties of quantum systems, one can usually recognize a distinctive pattern in the strongly coupled theory and so deduce the dual theory. Actually, this argument was already made by Montonen and Olive in 1977, but only in 1994, after this kind of reasoning was applied in a systematic way in many examples starting with Seiberg, did it become clear that it works and that electric–magnetic duality is a real property of supersymmetric gauge theories.

### 4.3 String Duality, D-Branes, M-Theory

Thus far the discussion of duality has focussed on quantum field theory, but the same ideas apply to string theory. Prior to 1994 there were various conjectures about duality in string theory, but after the developments described above, Hull, Townsend, and Witten considered the issue in a systematic way. They found that for each strongly coupled string theory (with enough supersymmetry) there was a unique candidate for a weakly coupled dual. These conjectures fit together in an intricate and consistent way as dimensions are compactified, and evidence for them rapidly mounted. Thus weak–strong duality seems to be a general property in string theory.

Weak–strong duality in field theory interchanged the pointlike quanta of the original fields with smooth solitons constructed from those fields. In string theory, the duality mixes up various kinds of object: the basic quanta (which are now strings), smooth solitons, black holes (which are like solitons, but with horizons and singularities), and new stringy objects known as D-branes. The D-branes play a major role, so I will describe them in more detail. In string theory strings usually move freely.
However, some string theories also predict localized objects, sort of like defects in a crystal, where strings can break open and their endpoints get stuck. These are known as D-branes, short for Dirichlet (a kind of boundary condition — see Jackson) membranes. Depicted in figure 6, they can be points (D0-branes), curves (D1-branes), sheets (D2-branes), or higher-dimensional objects. They are dynamical objects — they can move, and bend — and their properties, at weak coupling, can be determined with the same machinery used elsewhere in string theory. Even before string duality it was found that one could make D-branes starting with just ordinary strings (for string theorists, I am talking about $`T`$-duality). Now we know that they are needed to fill out the duality multiplets. They have many interesting properties. One is that they are smaller than strings; one cannot really see this pictorially, because it includes the quantum fluctuations, but it follows from calculations of the relevant form factors. Since we are used to thinking that smaller means more fundamental, this is intriguing, and we will return to it.

Returning to string duality, figure 7 gives a schematic picture of what was learned in 1995. Before that time there were five known string theories. These differed primarily in the way that supersymmetry acts on the string, and the type I theory also in that it includes open strings. We now know that starting with any one of these theories and going to strong coupling, we can reach any of the others. Again, the idea is that one follows the BPS states and recognizes distinctive patterns in the limits. The parameter space in the figure can be thought of as two coupling constants, or as the radii of two compact dimensions.

In figure 7 there is a sixth limit, labeled M-theory. We have emphasized that the underlying spacetime symmetry of string theory is $`SO(9,1)`$. However, the M-theory point in the figure is in fact a point of $`SO(10,1)`$ symmetry: the spacetime symmetry of string theory is larger than had been suspected. The extra piece is badly spontaneously broken, at weak coupling, and not visible in the perturbation theory, but it is a property of the exact theory. It is interesting that $`SO(10,1)`$ is known to be the largest spacetime symmetry compatible with supersymmetry. Another way to describe this is that in the M-theory limit the theory lives in eleven spacetime dimensions: a new dimension has appeared. This is one of the surprising discoveries of the past few years.

How does one discover a new dimension? It is worthwhile explaining this in some more detail. The D0-brane mass is related to the characteristic string mass scale $`m_\mathrm{s}`$ and the dimensionless string coupling $`g_\mathrm{s}`$, by

$$m_{\mathrm{D0}}=\frac{m_\mathrm{s}}{g_\mathrm{s}}.$$ (12)

When $`g_\mathrm{s}`$ is small this is heavier than the string scale, but when $`g_\mathrm{s}`$ is large it is lighter. Further, the D0-brane is a BPS state and so this result is exact. If one considers now a state with $`N`$ D0-branes, the mass is bounded below by $`Nm_{\mathrm{D0}}`$, and in fact this bound is saturated: there is a BPS bound state with

$$m_{N\mathrm{D0}}=N\frac{m_\mathrm{s}}{g_\mathrm{s}}$$ (13)

exactly. Now observe that for $`g_\mathrm{s}`$ large, all of these masses become small. What can the physics be? In fact, this is the spectrum associated with passing a threshold where a new spacetime dimension becomes visible.
The radius of this dimension is

$$R=\frac{g_\mathrm{s}}{m_\mathrm{s}}.$$ (14)

That is, small $`g_\mathrm{s}`$ is small $`R`$ and large $`g_\mathrm{s}`$ is large $`R`$. In particular, perturbation theory in $`g_\mathrm{s}`$ is an expansion around $`R=0`$: this is why this dimension has always been invisible!

## 5 An Alternative to String Theory?

On Lance Dixon’s tentative outline for my lectures, one of the items was ‘Alternatives to String Theory.’ My first reaction was that this was silly, there are no alternatives, but on reflection I realized that there was an interesting alternative to discuss. So let us try to construct a quantum theory of gravity based on a new principle, not string theory. We will fail, of course, but we will fail in an interesting way.

Let us start as follows. In quantum mechanics we have the usual position-momentum uncertainty relation

$$\delta x\,\delta p\geq \hbar /2.$$ (15)

Quantum gravity seems to imply a breakdown in spacetime at the Planck length, so perhaps there is also a position-position uncertainty relation

$$\delta x\,\delta x\gtrsim L_\mathrm{P}^2.$$ (16)

This has been discussed many times, and there are many ways that one might try to implement it. We will do this as follows. Suppose that we have $`N`$ nonrelativistic particles. In normal quantum mechanics the state would be defined by $`N`$ position vectors

$$𝐗_i,\quad i=1,\ldots ,N.$$ (17)

Let us instead make these into Hermitean matrices in the particle-number index

$$𝐗_{ij},\quad i,j=1,\ldots ,N.$$ (18)

It is not obvious what this means, but we will see that it leads to an interesting result. For the Hamiltonian we take

$$H=\frac{1}{2M}\sum _{m=1}^{D-1}\sum _{i,j=1}^{N}(p_{ij}^m)^2+M^{\prime 5}\sum _{m,n=1}^{D-1}\sum _{i,j=1}^{N}|[X^m,X^n]_{ij}|^2.$$ (19)

The first term is just an ordinary nonrelativistic kinetic term, except that we now have $`N^2`$ coordinate vectors rather than $`N`$ so there is a momentum for each, and we sum the squares of all of them. The indices $`m`$ and $`n`$ run over the $`D-1`$ spatial directions, and $`M`$ and $`M^{\prime }`$ are large masses, of order the Planck scale. The potential term is chosen as follows. We want to recover ordinary quantum mechanics at low energy. The potential is the sum of the squares of all of the components of all of the commutators of the matrices $`𝐗_{ij}`$, with a large coefficient. It is therefore large unless all of these matrices commute. In states with energies below the Planck scale, the matrices will then commute to good approximation, so we do not see the new uncertainty (16) and we recover the usual quantum mechanics. In particular, we can find a basis which diagonalizes all the commuting $`X_{ij}^m`$. Thus the effective coordinates are just the $`N`$ diagonal elements $`X_{ii}^m`$ of each matrix in this basis, which is the right count for $`N`$ particles in ordinary quantum mechanics: the $`X_{ii}^m`$ behave like ordinary coordinates.

The Hamiltonian (19) has interesting connections with other parts of physics. First, the commutator-squared term has the exact same structure as the four-gluon interaction in Yang–Mills theory. This is no accident, as we will see later on. Second, there is a close connection to supersymmetry. In supersymmetric quantum mechanics, one has operators satisfying the algebra (7,8). Again in general there are several supersymmetry charges, and the number $`𝒩`$ of these $`Q`$s is significant. For small values of $`𝒩`$, like 1, 2 or 4, there are many Hamiltonians with the symmetry.
As $`𝒩`$ increases the symmetry becomes more constraining, and $`𝒩=16`$ is the maximum number. For $`𝒩=16`$ there is only one invariant Hamiltonian, and it is none other than our model (19). To be precise, supersymmetry requires that the particles have spin, that the Hamiltonian also has a spin-dependent piece, and that the spacetime dimension $`D`$ be 10. In fact, supersymmetry is necessary for this idea to work. The vanishing of the potential for commuting configurations was needed, but we only considered the classical potential, not the quantum corrections. The latter vanish only if the theory is supersymmetric.

So this model has interesting connections, but let us return to the idea that we want a theory of gravity. The interactions among low energy particles come about as follows. We have argued that the potential forces the $`𝐗_{ij}`$ to be diagonal: the off-diagonal pieces are very massive. Still, virtual off-diagonal excitations induce interactions among the low-energy states. In fact, the leading effect, from one loop of the massive states, produces precisely the (super)gravity interaction among the low energy particles.

So this simple idea seems to be working quite well, but we said that we were going to fail in our attempt to find an alternative to string theory. In fact we have failed because this is not an alternative: it is string theory. It is actually one piece of string theory, namely the Hamiltonian describing the low energy dynamics of $`N`$ D0-branes. This illustrates the following principle: that all good ideas are part of string theory. That sounds arrogant, but with all the recent progress in string theory, and a fuller understanding of the dualities and dynamical possibilities, string theory has extended its reach into more areas of mathematics and has absorbed previous ideas for unification (including $`D=11`$ supergravity).

We have discussed this model not just to introduce this principle, but because the model is important for a number of other reasons. In fact, it is conjectured that it is not just a piece of string theory, but is actually a complete description. The idea is that if we view any state in string theory from a very highly boosted frame, it will be described by the Hamiltonian (19) with $`N`$ large. Particle physicists are familiar with the idea that systems look different as one boosts them: the parton distributions evolve. The idea here is that the D0-branes are the partons for string theory; in effect the string is a necklace of partons. This is the matrix theory idea of Banks, Fischler, Shenker, and Susskind (based on earlier ideas of Thorn), and at this point it seems very likely to be correct or at least a step in the correct direction.

To put this in context, let us return to the illustration in figure 7 of the space of string vacua, and to the point made earlier that the perturbation theory does not define the theory for finite $`g`$. In fact, every indication is that the string description is useful only near the five cusps of the figure in which the string coupling becomes weak. In the center of the parameter space, not only do we not know the Hamiltonian but we do not know what degrees of freedom are supposed to appear in it. It is likely that they are not the one-dimensional objects that one usually thinks of in string theory; it is more likely that they are the coordinate matrices of the D-branes.
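As a toy illustration of how the potential in (19) works, one can evaluate the commutator-squared term numerically: it is large for generic Hermitean matrices and vanishes exactly for commuting (diagonal) ones, whose eigenvalues then behave as ordinary particle positions. The sketch below is my own construction, with illustrative sizes and the overall coefficient set to one:

```python
# Toy evaluation of the bosonic potential in the matrix Hamiltonian (19):
# the commutator-squared term vanishes only when the coordinate matrices
# commute, i.e. when they can be simultaneously diagonalized into N
# ordinary particle positions.  (Illustrative sizes, coefficient = 1.)
import numpy as np

rng = np.random.default_rng(0)
N, D_SPATIAL = 4, 3   # illustrative values, not physical ones

def random_hermitian(n: int) -> np.ndarray:
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

def potential(X: list) -> float:
    """Sum over m < n of Tr |[X^m, X^n]|^2."""
    total = 0.0
    for m in range(len(X)):
        for n in range(m + 1, len(X)):
            c = X[m] @ X[n] - X[n] @ X[m]
            total += np.trace(c @ c.conj().T).real
    return total

X_generic = [random_hermitian(N) for _ in range(D_SPATIAL)]
X_diagonal = [np.diag(rng.normal(size=N)) for _ in range(D_SPATIAL)]

print(f"V(generic matrices)  = {potential(X_generic):.3f}")   # large
print(f"V(diagonal matrices) = {potential(X_diagonal):.3f}")  # exactly 0
```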
## 6 Black Hole Quantum Mechanics

### 6.1 Black Hole Thermodynamics

In the ’70s it was found that there is a close analogy between the laws of black hole mechanics and the laws of thermodynamics. In particular, the event horizon area (in Planck units) is like the entropy. It is nondecreasing in classical gravitational processes, and the sum of this Bekenstein–Hawking entropy and the entropy of radiation is nondecreasing when Hawking radiation is included. For more than 20 years it has been a goal to find the statistical mechanical picture from which this thermodynamics derives — that is, to count the quantum states of a black hole. There have been suggestive ideas over the years, but no systematic framework for addressing the question.

I have described the new ideas we have for understanding strongly interacting strings. A black hole certainly has strong gravitational interactions, so we might hope that the new tools would be useful here. Pursuing this line of thought, Strominger and Vafa were able in early 1996 to count the quantum states of a black hole for the first time. They did this with the following thought experiment. Start with a black hole and imagine adiabatically reducing the gravitational coupling $`G_\mathrm{N}`$. At some point the gravitational binding becomes weak enough that the black hole can no longer stay black, but must turn into ordinary matter. A complete theory of quantum gravity must predict what the final state will look like. The answer depends on what kind of black hole we begin with — in other words, what are its electric, magnetic, and other charges (the no-hair theorem says that this is all that identifies a black hole). For the state counting we want to take a supersymmetric black hole, one that corresponds to a BPS state in the quantum theory. For the simplest such black holes, the charges that they carry determine that at weak coupling they will turn into a gas of weakly coupled D-branes, as depicted in figure 8. For these we know the Hamiltonian, so we can count the states and continue back to strong coupling where the system is a black hole. Indeed, the answer is found to agree precisely with the Bekenstein–Hawking entropy. Our initial motivation was one problem of quantum gravity, the UV divergences. Now, many years later, string theory has solved another, very different, long-standing problem in the subject.

This result led to much further study. Beyond the entropy of BPS states, the general relativity and D-brane calculations were also found to agree for the entropies of near-BPS states, and for various dynamical quantities such as absorption and decay amplitudes. This goes beyond the adiabatic continuation argument used to justify the entropy calculation, and in late 1997 these results were understood as consequences of a new duality, the Maldacena duality. This states that, not only are the weakly coupled D-branes the adiabatic continuation of the black hole, but that the D-brane system at all couplings is dual (physically equivalent) to the black hole. In effect, D-branes are the atoms from which certain black holes are made, but for large black holes they are in a highly quantum state while the dual gravitational field is in a very classical state. A precise statement of the Maldacena duality requires a low energy limit in the D-brane system, while on the gravitational side one takes the limit of the geometry near the horizon.
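For a sense of the numbers being counted, the Bekenstein–Hawking entropy is simply the horizon area in Planck units, $`S=A/4L_\mathrm{P}^2`$. The following small sketch uses standard constants; the numerical example is mine, not the lecture’s:

```python
# The Bekenstein-Hawking entropy S = A / (4 L_P^2) counts horizon area
# in Planck units, giving enormous entropies for astrophysical black
# holes.  (Standard formula and constants; example values are mine.)
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
HBAR = 1.055e-34       # J s
M_SUN = 1.989e30       # kg

def bh_entropy(mass_kg: float) -> float:
    """Entropy (in units of k_B) of a Schwarzschild black hole."""
    r_s = 2 * G * mass_kg / C**2          # Schwarzschild radius
    area = 4 * math.pi * r_s**2           # horizon area
    l_p2 = HBAR * G / C**3                # Planck length squared
    return area / (4 * l_p2)

print(f"S(solar mass) ~ {bh_entropy(M_SUN):.1e} k_B")   # ~1e77
```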
### 6.2 The Information Paradox

This new duality has two important consequences. The first is for another of the nagging problems of quantum gravity, the black hole information paradox. A black hole emits thermal Hawking radiation, and will eventually decay completely. The final state is independent of what went into the black hole, and incoherent. In other words, an initially pure state evolves into a mixed state; this is inconsistent with the usual rules of quantum mechanics. Hawking argued that in quantum gravity the evolution of states must be generalized in this way. This has been a source of great controversy. While most physicists would be pleased to see quantum mechanics replaced by something less weird, the particular modification proposed by Hawking simply makes it uglier, and quite possibly inconsistent. But twenty years of people trying to find Hawking’s ‘mistake,’ to identify the mechanism that preserves the purity of the quantum state, has only served to sharpen the paradox: because the quantum correlations are lost behind the horizon, either quantum mechanics is modified in Hawking’s way, or the locality of physics must break down in a way that is subtle enough not to infect most of physics, yet act over long distances.

The duality conjecture above states that the black hole is equivalent to an ordinary quantum system, so that the laws of quantum evolution are unmodified. However, to resolve fully the paradox one must identify the associated nonlocality in the spacetime physics. This is hard to do because the local properties of spacetime are difficult to extract from the highly quantum D-brane system: this is related to the holographic principle. This term refers to the property of a hologram, that the full picture is contained in any one piece. It also has the further connotation that the quantum state of any system can be encoded in variables living on the boundary of that system, an idea that is suggested by the entropy–area connection of the black hole. This is a key point where our ideas are still in flux.

### 6.3 Black Holes and Gauge Theory

Dualities between two systems give information in each direction: for each system there are some things that can be calculated much more easily in the dual description. In the previous subsection we used the Maldacena duality to make statements about black holes. We can also use it in the other direction, to calculate properties of the D-brane theory. To take full advantage of this we must first make a generalization. We have said that D-branes can be points, strings, sheets, and so on: they can be extended in $`p`$ directions, where here $`p=0,1,2,\ldots `$. Thus we refer to D$`p`$-branes. The same is true of black holes: the usual ones are local objects, but we can also have black strings — strings with event horizons — and so on. A black $`p`$-brane is extended in $`p`$ directions and has a black hole geometry in the orthogonal directions. The full Maldacena duality is between the low energy physics of D$`p`$-branes and strings in the near-horizon geometry of a black $`p`$-brane. Further, for $`p\le 3`$ the low energy physics of $`N`$ D$`p`$-branes is described by $`U(N)`$ Yang–Mills theory with $`𝒩=16`$ supersymmetries. That is, the gauge fields live on the D-branes, so that they constitute a field theory in $`p+1`$ ‘spacetime’ dimensions, where here spacetime is just the world-volume of the brane. For $`p=0`$, this is the connection of matrix quantum mechanics to Yang–Mills theory that we have already mentioned below (19).
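Before using the duality, it helps to see when each side is tractable. The regime map below is standard lore for $`p=3`$ rather than anything stated in the lecture, and the order-one thresholds are only schematic:

```python
# Which description of N D3-branes is tractable?  (Standard lore for
# p = 3, with schematic order-one thresholds; this is my illustration,
# not the lecture's.)
def best_description(g_s: float, n_branes: int) -> str:
    lam = g_s * n_branes   # 't Hooft coupling, up to factors of 4*pi
    if lam < 1.0:
        return "perturbative gauge theory"
    if g_s < 1.0:
        return "classical gravity near the black 3-brane horizon"
    return "strongly coupled strings -- neither description is easy"

for g_s, n in [(0.1, 3), (0.001, 10_000), (3.0, 5)]:
    print(f"g_s = {g_s}, N = {n}: {best_description(g_s, n)}")
```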
The Maldacena duality then implies that various quantities in the gauge theory can be calculated more easily in the dual black $`p`$-brane geometry. This method is only useful for large $`N`$, because this is necessary to get a black hole which is larger than the string scale and so described by ordinary general relativity. Of course we have a particular interest in gauge theories in $`3+1`$ dimensions, so let us focus on $`p=3`$. The Maldacena duality for $`p=3`$ partly solves an old problem in the strong interaction. In the mid-’70s ’t Hooft observed that Yang–Mills theory simplifies when the number of colors is large. This simplification was not enough to allow analytic calculation, but its form led ’t Hooft to conjecture a duality between large-$`N`$ gauge theory and some unknown string theory. The Maldacena duality is a precise realization of this idea, for supersymmetric gauge theories. (For $`p=3`$ the near-horizon geometry is the product of an anti-de Sitter space and a sphere, while the supersymmetric gauge theory is conformally invariant (a conformal field theory), so this is also known as the AdS–CFT correspondence.) For the strong interaction we need of course to understand nonsupersymmetric gauge theories. One can obtain a rough picture of these from the Maldacena duality, but a precise description seems still far off. It is notable, however, that string theory, which began as an attempt to describe the strong interaction, has now returned to its roots, but only by means of an excursion through black hole physics and other strange paths.

### 6.4 Spacetime Topology Change

This subsection is not directly related to black holes, but deals with another exotic question in quantum gravity. Gravity is due to the bending of spacetime. It is an old question, whether spacetime can not only bend but break: does its topology as well as its geometry evolve in time? Again, string theory provides the tools to answer this question. The answer is ‘yes’ — under certain controlled circumstances the geometry can evolve as shown schematically in figure 9. It is interesting to focus on the case that the topology change is taking place in the compactified dimensions, and to contrast the situation as seen by the short-distance and long-distance observers of figures 2a and 2b. The short distance observer sees the actual process of figure 9. The long distance observer cannot see this. Rather, this observer sees a phase transition. At the point where the topology changes, some additional particles become massless and the symmetry breaking pattern changes. Thus the transition can be analyzed with the ordinary methods of field theory; it is this that makes the quantitative analysis of the topology change possible. Incidentally, topology change has often been discussed in the context of spacetime foam, the idea that the topology of spacetime is constantly fluctuating at Planckian distance scales. It is likely that the truth is even more strange, as in matrix theory where spacetime becomes ‘non-Abelian.’

## 7 Unification and Large Dimensions

My talks have been unapologetically theoretical. The Planck length is far removed from experiment, yet we believe we have a great deal of understanding of the very exotic physics that lies there. In this final section I would like to discuss some ways in which the discoveries of the last few years might affect the physics that we see.
Let me return to the unification of the couplings in figure 10a, and to the failure of the gravitational coupling to meet the other three exactly. There are many ideas to explain this. There could be additional particles at the weak, intermediate, or unified scales, which change the running of the gauge couplings so as to raise the unification point. Or it may be that the gauge couplings actually do unify first, so that there is a normal grand unified theory over a small range of scales before the gravitational coupling unifies. These ideas focus on changing the behavior of the gauge couplings. Since these already unify to good approximation, it would be simpler to change the behavior of the gravitational coupling so that it meets the other three at a lower scale — to lower the Planck scale. Unfortunately this is not so easy. The energy-dependence of the gravitational coupling is just dimensional analysis, which is not so easy to change. (It was asked whether the gravitational coupling has additional $`\beta `$-function type running. Although this could occur in principle, it does not do so because of a combination of dimensional analysis and symmetry arguments.)

There is a way to change the dimensional analysis — that is, to change the dimension! We have discussed the possibility that at some scale we pass the threshold to a new dimension. Suppose that this occurred below the unification scale. For both the gauge and the gravitational couplings the units change, so that both turn upward as in figure 10b. This does not help; the couplings meet no sooner.

There is a more interesting possibility, which was first noticed in the strong coupling limit of the $`E_8\times E_8`$ heterotic string. Of the five string theories, this is the one whose weakly coupled physics looks most promising for unification. Its strong-coupling behavior, shown in figure 11, is interesting. A new dimension appears, but it is not simply a circle. Rather, it is bounded by two walls. Moreover, all the gauge fields and the particles that carry gauge charges move only in the walls, while gravity moves in the bulk. Consider now the unification of the couplings. The dynamics of the gauge couplings, and their running, remains as in $`3+1`$ dimensions; however, the gravitational coupling has a kink at the threshold, so the net effect can be as in figure 10c. If the threshold is at the correct scale, the four couplings meet at a point. As it stands this has no more predictive power than any of the other proposed solutions. There is one more unknown parameter, the new threshold scale, and one more prediction. However, it does illustrate that the new understanding of string theory will lead to some very new ideas about the nature of unification.

Figure 11 is only one example of a much more general idea now under study, that the Standard Model lives in a brane and does not move in the full space of the compact dimensions, while gravity does do so. This new idea leads in turn to the possibility of radically changing the scales of new physics in string theory. To see this, imagine lowering the threshold energy (the kink) in figure 10c; this also lowers the string scale, which is where the gravitational coupling meets the other three. From a completely model-independent point of view, how low can we go? The string scale must be at least a TeV, else we would already have seen string physics.
The five-dimensional threshold must correspond to a radius of no more than a millimeter, else Cavendish experiments would already have shown the four-dimensional inverse square law turning into a five-dimensional inverse cube. Remarkably, it is difficult to improve on these extreme model-independent bounds. The large dimension in particular might seem to imply a whole tower of new states at energies above $`10^{-4}`$ eV, but these are very weakly coupled (gravitational strength) and so would not be seen. It may be that construction of a full model, with a sensible cosmology, will raise these scales, but that they will still lie lower than we used to imagine.

I had been somewhat skeptical about this idea, for a reason that is evident in figure 10c. If the threshold is lowered further, the gravitational coupling meets the other three before they unify and one loses the successful prediction of $`\mathrm{sin}^2\theta _\mathrm{w}`$. However, it is wrong to pin so much on this one number; the correct prediction might come out in the end in a more complicated way. One should certainly explore the many new possibilities that arise, to see what other consequences there are and to broaden our perspective on the possible nature of unification.

## 8 Outlook

I will start with the more theoretical problems.

1. The black hole information problem. It seems that the necessary ingredients to solve this are at hand, and that we will soon assemble them correctly. However, it has seemed this way before, and the clock on this problem is at 22 years and counting. Still, our understanding is clearly deeper than it has ever been.

2. The cosmological constant problem. In any quantum theory the vacuum is a busy place, and should gravitate. Why is the cosmological constant, even if nonzero, so much smaller than particle or Planck energies? This is another hard problem, not just in string theory but in any theory of gravity. It has resisted solution for a long time, and seems to require radical new ideas. The new ideas that I have described have not led to a solution, but they have suggested new possibilities. One important ingredient may be supersymmetry. Throughout the discussion of duality this plays a central role in canceling quantum fluctuations, suggesting that it also does so in the vacuum energy. The problem is that supersymmetry is broken in nature; we need a phase with some properties of the broken theory and some of the unbroken. We have learned about many new phases of string theory, but not yet one with just the right properties. Another ingredient may be nonlocality. The cosmological constant affects physics on cosmic scales but is determined by dynamics at short distance: this suggests the need for some nonlocal feedback mechanism. Recall that the black hole information problem also seems to need nonlocality; perhaps these are related.

3. Precise predictions from string theory? Our understanding of string dynamics is much improved, but still very insufficient for solving the vacuum selection/stability problem, especially with nonsupersymmetric vacua. It is hard to see how one could begin to address this before solving the cosmological constant problem, since this tells us that we are missing something important about the vacuum. An optimistic projection is that we soon solve the information problem, that this gives us the needed idea to solve the cosmological constant problem, and then we can address vacuum selection. More likely, we still are missing some key concepts.

4. What is string theory?
We are closer to a nonperturbative formulation than ever before: the things that we have learned in the past few years have completely changed our point of view. It may be that again the ingredients are in place, in that both matrix theory and the Maldacena duality give nonperturbative definitions, and we simply need to extract the essence. 5. Distinct signatures of string theory? Is there any distinctively stringy experimental signature? All of the new physics may lie far beyond accessible energies, but we might be lucky instead. I have discussed the possibility of low energy string theory and large dimensions. I am still inclined to expect the standard picture to hold, but the new ideas are and will remain a serious alternative. Another possibility is a fifth force from the dilaton or other moduli (scalars that are common in string theory). These are massless to first approximation, but quantum effects almost invariably induce masses for all scalars. The resulting mass is likely in the range $$\frac{m_{\mathrm{weak}}^2}{m_\mathrm{P}}<m_{\mathrm{scalar}}<m_{\mathrm{weak}}.$$ (20) The lower limit is interesting for a fifth force, while the whole range is interesting for dark matter. The most interesting hope is for something unexpected, perhaps cosmological and associated with the holographic principle, or perhaps a distinctive form of supersymmetry breaking. 6. Supersymmetry. Supersymmetry has played a role throughout these talks. In string theory it is a symmetry at least at the Planck scale, but is broken somewhere between the Planck and weak scales. The main arguments for breaking at the weak scale are independent of string theory: the Higgs hierarchy problem, the unification of the couplings, the heavy top quark. In addition, the ubiquitous role that supersymmetry plays in suppressing quantum fluctuations in our discussion of strongly coupled physics supports the idea that it suppresses the quantum corrections to the Higgs mass. The one cautionary note is that the cosmological constant suggests a new phase of supersymmetry, whose phenomenology at this point is completely unknown. Still, the discovery and precision study of supersymmetry remains the best bet for testing all of these ideas. In conclusion, the last few years have seen remarkable progress, and there is a real prospect of answering difficult and long-standing problems in the near future. ## References Two texts on string theory: * M. B. Green, J. H. Schwarz, and E. Witten, Superstring Theory, Vols. 1 and 2 (Cambridge University Press, Cambridge, 1987). * J. Polchinski, String Theory, Vols. 1 and 2 (Cambridge University Press, Cambridge, 1998). An earlier version of these lectures, with extensive references: * J. Polchinski, “String Duality,” Rev. Mod. Phys. 68, 1245 (1996), hep-th/9607050. The talk by Michael Peskin at the SSI Topical Conference gives more detail on some of the subjects in my lectures. Other popular accounts: * G. P. Collins, “Quantum Black Holes Are Tied to D-Branes and Strings,” Physics Today 50, 19 (March 1997). * E. Witten, “Duality, Spacetime, and Quantum Mechanics,” Physics Today 50, 28 (May 1997). * B. G. Levi, “Strings May Tie Quantum Gravity to Quantum Chromodynamics,” Physics Today 51, 20 (August 1998). A summer school covering many of the developments up to 1996: * C. Efthimiou and B. Greene, editors, Fields, Strings, and Duality, TASI 1996 (Singapore: World Scientific, 1997). Lectures on matrix theory: * T. Banks, “Matrix Theory,” Nucl. Phys. Proc. Suppl. 67, 180 (1998), hep-th/9710231. * D. 
* D. Bigatti and L. Susskind, "Review of Matrix Theory," hep-th/9712072.

A discussion of the spacetime uncertainty principle:

* M. Li and T. Yoneya, "Short Distance Spacetime Structure and Black Holes in String Theory," hep-th/9806240.

Lectures on developments in black hole quantum mechanics:

* G. T. Horowitz, "Quantum States of Black Holes," gr-qc/9704072.
* A. W. Peet, "The Bekenstein Formula and String Theory," Classical and Quantum Gravity 15, 3291 (1998), hep-th/9712253.

A recent review of nonperturbative string theory:

* A. Sen, "An Introduction to Non-perturbative String Theory," hep-th/9802051.

Two very recent colloquium-style presentations, including the Maldacena conjecture:

* A. Sen, "Developments in Superstring Theory," hep-ph/9810356.
* J. Schwarz, "Beyond Gauge Theories," hep-th/9807195.

For more on low energy string theory and millimeter dimensions see the talk by Nima Arkani-Hamed at the SSI Topical Conference, and references therein.
# Simulating Galaxy Evolution

## Introduction

Understanding how galaxies formed is the key to unraveling the mysteries of the high redshift universe. To interpret the deepest images of distant galaxies one has to simulate galaxy evolution. The prescription for such a simulation seems straightforward. Take a gas cloud, massive enough to be self-gravitating, and add a simple prescription for star formation based on the local free-fall time scale. In practice, this approach has yielded star formation histories that appear to match observations. However the predictive power of this approach is limited. The reason is the following. One has to assume a prescription for star formation. Reasonable guesses can be made, but one has no guarantee that these are valid. There is no way of evaluating the uncertainty in the adopted ansatz for forming stars. This is true for primordial clouds, and equally valid for current star formation. Of course the star formation prescription, once selected, has parameters that can be adjusted, often with little freedom when confronted with the observational data. This approach has been applied to the early universe, commencing with density fluctuations that grow by hierarchical clustering of cold dark matter. One can try to assess the uncertainties by comparing snapshots of the universe at different redshifts. If one matches the data, one can deduce that one has a working model of galaxy formation, but one cannot expect this to be a useful guide to extreme situations that are not included in the simple algorithm. These might include, for example, the role of active galactic nuclei in primordial and current epoch star formation. I conclude that it is useful to consider an alternative to "ab initio" galaxy formation. In this talk I will describe such an approach that is based on nearby examples of star formation in a global context, which one attempts to run backwards in time. Clearly, forwards and backwards evolution are complementary descriptions of the same fundamental issues that describe galaxy formation.

## Galaxy Evolution From Primordial Fluctuations

Inflationary cosmology prescribes the initial spectrum of density fluctuations. The horizon scale at matter-radiation equality imprints a scale on the relic fluctuation spectrum: at $`L\gtrsim L_{eq}\approx 12(\mathrm{\Omega }h^2)^{-1}`$ Mpc, $`\delta \rho /\rho \propto M^{-1/2-n/6}`$ and $`n\approx 1`$, whereas at $`L\lesssim L_{eq}`$, $`n_{eff}`$ approaches $`-3`$, reflecting scale invariance for fluctuations that entered the horizon during radiation domination. On galaxy scales, $`n_{eff}\approx -2`$. This leads to a hierarchical formation sequence of structure. Larger and larger structures merge together. Numerical simulations show that some substructure survives. This is potentially a problem for understanding why galactic disks remain thin if the surrounding dark halos contain even a percent of their mass in massive substructures, characteristic in mass, say, of dwarf galaxies. Dwarf-disk interactions would overheat the disk tot . This can be partially rectified by gas infall, which certainly helps renew thin disks. The discovery of high velocity hydrogen clouds at the periphery of the halo lends some support to the availability of a gas reservoir today bli . The properties of dark halos are accounted for by hierarchical clustering. The abundance, mass function, density profile and rotation curve for a typical galaxy halo all agree with empirical estimates.
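As an aside on where the quoted mass scaling comes from (a standard worked step, not specific to this talk): smoothing a power-law spectrum $`P(k)\propto k^n`$ with a window of radius $`R`$, and assigning mass $`M\propto R^3`$, gives

$$\left(\frac{\delta \rho }{\rho }\right)^2\propto \int dk\,k^2\,P(k)\,W^2(kR)\propto R^{-(n+3)}\quad \Rightarrow \quad \frac{\delta \rho }{\rho }\propto M^{-(n+3)/6}=M^{-1/2-n/6}.$$

For $`n=1`$ the amplitude falls as $`M^{-2/3}`$, so small masses collapse first, while $`n_{eff}\rightarrow -3`$ makes the amplitude independent of mass, which is the scale invariance referred to above.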
The clustering of galaxies is described by the galaxy correlation function, and simulations of clustering provide a fit over several decades of scale. One accounts for the mass function of galaxy clusters and its evolution with redshift eke ; bah by setting $`\mathrm{\Omega }\approx 0.3(\pm 0.1)`$. Interpretation of massive halos as rare peaks accounts for the observed clustering of Lyman break galaxies at $`z\approx 3`$ gia . The properties of the intergalactic medium agree with predictions of the hierarchical model. One has to adopt a metagalactic ionizing radiation field. This is taken from the observed quasar luminosity function. The gas distribution from the simulations is exposed to the ionizing radiation field, and the effects of the peculiar velocity field are found to play an important role in reproducing Lyman alpha cloud absorption profiles. One can explain kat the distribution of observed column densities ranging from damped Lyman alpha clouds with HI column densities in excess of $`10^{21}`$ $`\mathrm{cm}^{-2}`$ down to the Lyman alpha forest below $`10^{14}`$ $`\mathrm{cm}^{-2}`$. The gas overdensities range from $`\delta \rho /\rho `$ of order several hundred for damped clouds to unity for the forest. The structural properties of the Lyman alpha clouds are simply understood. There is some controversy however over the nature of the relatively rare damped clouds. These have been argued to be rotating protodisks wol . However the observed spread of velocities is not simply a thin disk, but can either be interpreted as a thick disk or as a more incoherent, quasi-spherical halo containing many smaller clouds hae . More problematic for disk theory is the failure of simulations to reproduce the sizes of galactic disks. Angular momentum conservation of a uniformly collapsing and dissipating cloud of baryons within a dark halo suggests that the disk size is $`\lambda R_i`$, where $`R_i`$ is the halo virialization radius and $`\lambda `$ is the critical dimensionless angular momentum. One has $`\lambda \approx 0.06`$ and $`R_i`$ is typically about 100 kpc. This argument would actually give the correct disk size. However the clumpy nature of the halo is found to drive efficient angular momentum transfer via dynamical friction. The disk size found in simulations is a factor of five or more smaller than observed disk scale lengths ste . Evidently feedback from star formation is conspiring to limit the collapse of the gas. The galaxy luminosity function also represents a challenge for theoretical models, which more naturally specify the galaxy mass function. There are two difficulties. At the low mass end, the predicted slope of the mass function ($`dN/dm\propto m^{-2}`$) is a poor fit to the power-law tail of the galaxy luminosity function, the slope of which depends on galaxy color selection and varies between $`dN/dL\propto L^{-3/2}`$ in the blue and $`dN/dL\propto L^{-1}`$ in the red. One corrects this problem by introducing inefficient star formation in low mass potential wells. The fraction of gas forming stars is assumed to be $`(\sigma /\sigma _{cr})^\alpha `$, with $`\alpha \approx 2`$, where $`\sigma _{cr}\approx 75\,\mathrm{km}\,\mathrm{s}^{-1}`$ denotes the transition velocity dispersion, below which retention of interstellar gas energised by supernova-driven winds becomes suppressed. This assumes that supernovae are effective at disrupting the interstellar gas in the shallow potential wells characteristic of dwarf galaxies dek .
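To make the faint-end suppression concrete, here is a minimal numerical sketch of the $`(\sigma /\sigma _{cr})^\alpha `$ ansatz just described; the $`\sigma \propto M^{1/3}`$ halo scaling and its normalization are illustrative assumptions of ours, not results from the talk.

```python
import numpy as np

SIGMA_CR = 75.0   # km/s, transition dispersion quoted in the text
ALPHA = 2.0       # suppression exponent quoted in the text

def star_fraction(sigma):
    """Fraction of gas turned into stars: (sigma/sigma_cr)^alpha, capped at 1."""
    return np.minimum(1.0, (sigma / SIGMA_CR) ** ALPHA)

# Illustrative halo scaling (an assumption): sigma ~ M^(1/3),
# normalized so that a 1e11 Msun halo sits at the transition.
masses = np.logspace(8, 12, 5)                       # Msun
sigmas = SIGMA_CR * (masses / 1e11) ** (1.0 / 3.0)   # km/s

for m, s in zip(masses, sigmas):
    print(f"M = {m:8.1e} Msun  sigma = {s:6.1f} km/s  f_* = {star_fraction(s):.3f}")
```

Since luminosity scales roughly as the retained star-forming fraction times the halo mass, this suppression flattens the steep $`dN/dm\propto m^{-2}`$ mass function at the faint end, in the direction the observed luminosity function requires.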
However the efficiency may only be high at masses below $`10^6M_{\odot }`$, according to a recent analysis of starbursts in dwarf galaxies mac . This would only flatten the luminosity function at very low luminosities if one counts all gas-retaining galaxies. One difference between dwarfs ($`\lesssim 10^8M_{\odot }`$) and giants is that the supernova ejecta are expelled, so that the residual gas, if retained, is very metal-poor. This might reduce the efficiency of star formation sufficiently to produce a population of low surface brightness dwarfs. At high luminosities the challenge is to explain why the nonlinear clustering mass at present is $`10^{14}M_{\odot }`$ whereas the value of $`L_{\ast }`$, above which the number of galaxies decreases exponentially, is $`10^{10}h^{-2}L_{\odot }`$. The corresponding dark halo mass is around $`10^{12}M_{\odot }`$. Evidently some physical effect is intervening to limit the luminosity of a galaxy, which does not track the mass of the dark potential wells. The generally accepted resolution is that baryonic cooling is a necessary condition for star formation to occur in a primordial contracting cloud. If the density is too low for gas cooling, the intergalactic gas remains hot and diffuse. Efficient star formation must occur within a dynamical time-scale. This is certainly how monolithic formation of an elliptical must have occurred. In this case, the condition that the gas cooling time be less than a collapse time sets a maximum value on cooled galaxy baryon mass of about $`10^{12}M_{\odot }`$. However gas continues to accrete and cool. The total mass of cooled gas does not provide a distinctive cut-off in the mass function of baryons tho . One has to vary the efficiency of star formation, reducing it on time-scales longer than a dynamical time, in order to account for $`L_{\ast }`$. One can appeal to cluster formation to heat up the intergalactic gas, thereby removing the reservoir of cold gas which would potentially be accreted. This would lead one to expect that cluster ellipticals have a relatively homogeneous distribution of formation times, peaked at the epoch of cluster formation. One has to assume a hot gas environment for field ellipticals, associated with galaxy groups, to restrict cold gas infall. However since clustering in the field develops more recently than for rich clusters one expects field ellipticals, on the other hand, to display a much broader range of ages, and reveal, in some cases, signs of recent or current infall. Indications of this effect can be seen in the enhanced scatter in the fundamental plane for field ellipticals relative to cluster ellipticals for . For disk galaxies, the comparison of mass and luminosity via the predicted mass function challenges interpretations of the Tully-Fisher correlation between luminosity and maximum rotation velocity sten . The observed dispersion of fifteen percent in inferred distance wil may be compared with the dispersion between mass and halo circular velocity in the CDM hierarchy, which is of order 100 percent. Implementation of a prescription for star formation can reduce the dispersion between cooled baryon mass, and hence luminosity, and disk rotational velocity to the observed range by, for example, allowing stars to preferentially form in the more massive disks where the baryons are self-gravitating and dense enough to suppress supernova-driven winds.
However there is a price: the Tully-Fisher normalization yields the disk mass-to-luminosity ratio, and the CDM hierarchy inevitably favors a high value relative to the observed value of $`M/L\approx 10h`$ for the baryon-dominated regions of disks.

## Galaxy Evolution Via Reverse Engineering

A complementary approach to galaxy evolution allows one to circumvent some of these difficulties, although at the risk of introducing other complications. One commences with nearby galaxies, develops a model for star formation, and evolves the galaxies backwards in time. Actual images or idealized models of nearby galaxies are used as the starting point. Suppose one first ignores dynamical evolution. Star formation in disks can be described by an expression of the form
$$\mathrm{SFR}=\epsilon \mu _{\mathrm{gas}}\mathrm{\Omega }(r)f(Q).$$
Here $`\mu _{\mathrm{gas}}`$ is the surface density of atomic and molecular gas, $`\mathrm{\Omega }(r)`$ is the rotation rate, $`Q`$ is the Toomre parameter (approximately given for a self-gravitating disk of gas by $`\frac{\kappa \sigma }{\pi G\mu _{\mathrm{gas}}}`$, where $`\kappa `$ is the epicyclic frequency and $`\sigma `$ is the gas velocity dispersion) that guarantees gravitational instability to axisymmetric perturbations if $`Q<1`$, and $`\epsilon `$ is an efficiency parameter. One needs to generalize the dependence on $`Q`$ to allow for non-axisymmetric instabilities, such as density waves which are responsible for the growth of molecular clouds and for the gravitational contribution of the stellar component. In general, however, one expects there to be a threshold for local instabilities when the surface density drops below a critical value, for typical disks amounting to about $`\mu _{\mathrm{gas}}\approx 10\,M_{\odot }\,\mathrm{pc}^{-2}`$. This empirical expression fits global star formation rates in disks remarkably well ken , and $`\epsilon `$ may be interpreted as the fraction of gas converted into stars per dynamical time. Infall is one remaining ingredient that needs to be added. For individual disks, this model has been exploited to demonstrate that disks form inside out, that disk surface brightness increases by almost a magnitude cay to $`z\approx 1`$, and to account for the chemical evolution of old disk stars and of the interstellar medium at high redshift pra . The model has considerable potential for predicting how galaxies appear to evolve in deep images obtained of the distant universe. In fact, one study boub has already demonstrated that such a scaling in galaxy size is necessary to reconcile faint galaxy sizes with galaxies at low redshift; this study carefully considered changes in the pixelisation, the PSF, and the surface brightness relative to the noise. Of course, a careful consideration of many of the same effects is important for testing models against the observations. One has to add a disk formation epoch, chosen from an analytical prescription for hierarchical CDM cosmology and some evolution in number density. The latter is required to crudely account for merging and is necessary to reproduce the observed deep galaxy counts. Ellipticals and spheroids must also be incorporated into the model. While these systems do not dominate the number counts, which at faint magnitudes are dominated by disks and their irregular precursors, they are important in the cumulative star formation history of the universe. Approximately half of the mass in stars is in the spheroidal component, and hence mostly in E's and S0's.
This is the approximate assessment for the local luminosity function (and is due to the fact that while $`\sim 30`$ percent of galaxies are E's and S0's, the associated $`M/L`$ is about twice as large as for typical spirals). One also reaches an independent verification of this from the cosmic far infrared background. This recently discovered diffuse flux at 100 – 300 $`\mu `$m amounts to $`\lambda i_\nu \approx 20\,\mathrm{nW}\,\mathrm{m}^{-2}\,\mathrm{sr}^{-1}`$, comparable to the diffuse optical light flux when integrated over the HDF and near infrared. Modelling of disk galaxies incorporating dust can reproduce the optical background but only about fifty percent of the FIR background is explained by optically visible systems. The remainder is presumed to be due to dusty ellipticals. Of course if these systems form stars at an early epoch $`z_E`$ relative to spirals at $`z_S`$, then the inferred mass in stars (for the same initial mass function) in dust enshrouded spheroids is equal to $`[(1+z_E)/(1+z_S)]`$ times the contribution from disks. This comparison suggests that $`z_E\gtrsim z_S`$ though in principle one could have $`z_E\gg z_S`$. One might worry that the FIR background could be due to AGN. However modelling of the x-ray background effectively constrains the AGN contribution to diffuse hard photons. Compton self absorption of the x-rays, required to obtain a spectral fit of the XRB, limits the possible contribution to the diffuse FIR background by dust-shrouded, x-ray-emitting AGN to at most ten percent of the observed background. Direct observations by SCUBA find ultraluminous galaxies at $`z=1`$–$`3`$. Perhaps ten percent of these may be AGN-powered according to the previous argument, and this is consistent with direct spectroscopic signatures.

### Disk Parameterisation

There are two major uncertainties in the modelling of the disk star formation rate: infall and efficiency. One can constrain the role of infall by three independent methods that respectively appeal to chemical evolution, disk dynamics, and to the evolution of disk sizes. The best studied is chemical evolution. Infall of metal-poor gas onto the early disk is required to account for the paucity of metal-poor G dwarfs. The sharp decline in supersolar metallicities of disk stars means that recent metal-poor infall is greatly reduced relative to infall in the first 5 Gyr. Infall of gas-rich clumps is predicted in the CDM model, but these interactions must avoid overheating the disk. Less than 4% of the disk can have fallen in over the past 5 Gyr according to one study tot . However recent calculations suggest that infalling satellites preferentially tilt rather than heat the disk car . The implication for high redshift galaxies is that disks are small at $`z>1`$. Without infall, disks would not be sufficiently small, according to one recent analysis, to account for the decrease in faint galaxy angular diameter. One can only decompose disks from bulges to $`z\lesssim 1`$, using HST data. Evolution of disk sizes to this redshift is quite model-dependent. Disk size varying as $`(1+z)^{-\alpha }`$, with $`\alpha \approx 2`$, fits the available data. However selection biases need to be modelled more carefully. One selects earlier type galaxies at high redshift than at low redshift because of surface brightness dimming, and this complicates comparisons.

### Disk Physics

The essence of disk formation lies in inefficiency. Galaxies retain a sufficient gas reservoir so as to still be vigorously forming stars at the present epoch.
The star formation rate increases dramatically with redshift, possibly peaking near $`z\approx 2`$. Hence gas infall drops off dramatically. This also is implicit in models of galactic chemical evolution, where infall of metal-poor gas over the first five or so Gyr helps account for the metal distribution of old disk stars. The inefficiency of star formation must be due not to the availability of a gas supply, but rather to control by disk physics. Feedback of energy and momentum from star formation and death necessarily plays an important role. One needs to include such physics to understand disk sizes. One could simultaneously account for gas longevity. Angular momentum transfer is central to such a model. A general class of theories which can successfully reproduce disk profiles is based on contracting viscous self-gravitating disks. The viscosity arises from cloud-cloud collisions, the cold disk being gravitationally unstable to cloud formation. The disk forms as angular momentum is transferred on a viscosity timescale. Since cloud collisions and mergers are assumed to drive star formation, one naturally relates the star formation and viscosity time scales. An exponential surface density profile is naturally generated lin .

### Bulge Evolution

Bulges are expected to be prominent in observations of high redshift galaxies, both because of disk evolution and the high bulge surface brightness. Yet the sequence of bulge formation is poorly understood, and this makes it difficult to formulate and test ab initio predictions of disk evolution. Consider the following alternatives. Bulges form before disks, either monolithically or in major (i.e. comparable, or at least mass ratio 1:10) mergers. Bulges form simultaneously with disks via satellite mergers. Bulges form after disks, via secular instability of disks, and bar formation followed by dissolution as gas inflow drives bulge formation nor . Any of these scenarios is possible. Two, or even all three, may be operative. For example, secular evolution can form small bulges but not the massive objects of early type galaxies. Observational evidence that bulge and disk scale lengths are correlated favors a secular evolution origin of bulges for late-type spirals courteau . The ubiquity of bars, which are efficient at torquing accreting gas and driving the gas inwards to form a central bulge, also suggests that secular evolution must have played a significant role in bulge formation. Conversely, massive bulges are most likely formed by mergers. Satellite infall of gas-rich dwarf galaxies is expected to be a common occurrence in hierarchical models and provides a natural mechanism for simultaneously forming the bulges, as the dense stellar cores sink into the center of the galaxy by dynamical friction, and feeding disk growth with gas infall. There are hints of monolithic bulge formation from observations of many compact Lyman break galaxies, which have high star formation rates. One can try to address this confusing range of bulge formation possibilities by examining the properties of disk galaxies at $`z\lesssim 1`$, where component separation into bulge and disk is possible at HST resolution. Late-forming bulges are inevitably bluer and smaller than early-forming bulges, at a given redshift. Figure 1 shows a comparison of the model predictions with available data. HST images are shown, in a comparison with the HDF.
Analyses of similar images boucs show that only with larger samples at $`z\approx 1`$ will one be able to distinguish between alternative models of bulge formation.

## Looking to the Future

It will be possible in the not too distant future to greatly refine the observational constraints relevant to galaxy evolution. In Figure 2 we show HST Advanced Camera (2000) and NGST (2007) simulations of the same 85″ × 85″ field using the secular evolution scheme for bulge formation. The Advanced Camera simulations consider a 150,000-s integration, utilise a pixel size of 0.05 arcsec, and probe the $`gri`$ optical bands to $`i_{AB}\approx 30.3`$, whereas the NGST simulations consider a similar 150,000-s integration, utilise a pixel size of 0.029 arcsec, and probe the 1,3,5-$`\mu `$m wavelength bands to $`m_{1\mu m,AB}\approx 31.6`$. For comparison, we also show WFPC2 (pixel size is 0.1 arcsec, probes the $`I_{F814W}`$, $`V_{F606W}`$, and $`B_{F450W}`$ bands to a limiting magnitude $`I_{F814W,AB}\approx 29`$) and NICMOS (pixel size is 0.2 arcsec, probes the $`J_{F110W}`$ and $`H_{F160W}`$ infrared bands to $`H_{F160W,AB}\approx 28.3`$) simulations. Since the fiducial secular model for bulge formation breaks down at high redshift, we have included a variation of the Pozzetti, Bruzual, & Zamorani poz luminosity evolution model at these redshifts. The simulations include both $`K`$ and evolutionary corrections, cosmological angular size relations and volume elements ($`\mathrm{\Omega }=0.15`$, $`h=0.5`$), appropriate pixelisation, PSFs, and noise (see boub for a discussion). Obviously, the principal advantages of the Advanced Camera and NGST over WFPC2 and NICMOS are their increases in limiting magnitude, angular resolution, and field of view. Regarding the differing limiting magnitudes, using the Advanced Camera for similar length exposures to those shown here, one could probe to unobscured star formation rates $`\approx 0.5\,M_{\odot }/\mathrm{yr}`$ at $`z\approx 5`$ whereas with WFPC2, the limiting rate is only $`\approx 2\,M_{\odot }/\mathrm{yr}`$. For higher redshift observations, such as are only possible with NICMOS or NGST, NGST promises to push the sensitivity on unresolved star formation from its current value $`\approx 20\,M_{\odot }/\mathrm{yr}`$ at $`z\approx 10`$ obtainable with NICMOS exposures down to $`\approx 1\,M_{\odot }/\mathrm{yr}`$.
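The limiting star formation rates quoted here can be reproduced with a few lines of arithmetic. The sketch below assumes the observed band samples the rest-frame UV and adopts the common conversion $`\mathrm{SFR}\approx 1.4\times 10^{-28}L_\nu `$ in $`M_{\odot }`$/yr per erg s$`{}^{-1}`$ Hz$`{}^{-1}`$; that conversion, like the code itself, is an illustrative assumption of ours rather than a detail given in the text, while the open $`\mathrm{\Omega }=0.15`$, $`h=0.5`$ cosmology matches the simulations above.

```python
import numpy as np
from scipy.integrate import quad

# Open (no-Lambda) cosmology used in the simulations described above.
H0 = 50.0          # km/s/Mpc (h = 0.5)
OM, OK = 0.15, 0.85
C = 2.998e5        # km/s
MPC_CM = 3.086e24  # cm per Mpc

def E(z):
    return np.sqrt(OM * (1 + z)**3 + OK * (1 + z)**2)

def lum_dist_cm(z):
    """Luminosity distance in an open universe, in cm."""
    dc, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z)              # dimensionless
    dm = (C / H0) / np.sqrt(OK) * np.sinh(np.sqrt(OK) * dc)   # Mpc
    return (1 + z) * dm * MPC_CM

def limiting_sfr(m_ab, z):
    """Unobscured SFR reachable at limiting AB magnitude m_ab and redshift z.

    Uses SFR ~ 1.4e-28 * L_nu(rest UV); the 1/(1+z) bandwidth factor
    converts observed to rest-frame specific luminosity.
    """
    f_nu = 10 ** (-(m_ab + 48.6) / 2.5)                       # erg/s/cm^2/Hz
    L_nu = 4 * np.pi * lum_dist_cm(z)**2 * f_nu / (1 + z)     # erg/s/Hz
    return 1.4e-28 * L_nu                                     # Msun/yr

print(limiting_sfr(30.3, 5.0))   # Advanced Camera depth: ~0.5 Msun/yr
print(limiting_sfr(29.0, 5.0))   # WFPC2 depth: ~2 Msun/yr
```

With these assumptions, $`i_{AB}\approx 30.3`$ at $`z\approx 5`$ indeed corresponds to $`\approx 0.5\,M_{\odot }`$/yr, and $`I_{AB}\approx 29`$ to $`\approx 2\,M_{\odot }`$/yr, matching the figures quoted above.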
# MOND in the Early Universe

## Standard Cosmology

The standard hot big bang cosmology has many successes; too many to list here. The amount of data constraining cosmic parameters has increased rapidly, until only a small region of parameter space remains viable. This has led to talk of a 'concordant' cosmology with $`\mathrm{\Omega }_m\approx 0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }\approx 0.7`$ OS . This is a rather strange place to end up. The data do not favor these parameters so much as they disfavor other combinations more. A skeptic might suspect that concordance is merely the corner we've painted ourselves into prior to the final brush stroke. This is not an idle concern, as there remains one major outstanding problem: dark matter. Something like 90% of the universe is supposedly made of stuff we can not see. There are, to my mind, two ironclad lines of reasoning that require the dark matter to be nonbaryonic, cold dark matter (CDM) like WIMPs or axions.

1. $`\mathrm{\Omega }_m\gg \mathrm{\Omega }_b`$.
2. Structure does not have time to grow from a smooth microwave background to the rich structure observed today without a mass component whose perturbations can grow early without leaving an imprint on the CMBR.

BUT we have yet to detect WIMPs or axions. Their existence remains an unproven, if well motivated, assumption.

## Is there any Dark Matter?

It is often stated that the evidence for dark matter is overwhelming. This is not quite correct: the evidence for mass discrepancies is overwhelming. These might be attributed to either dark matter or a modification of gravity. Rotation curves played a key role in establishing the mass discrepancy problem, and remain the best illustration thereof. There are many fine-tuning problems in using dark matter to explain these data. I had hoped that the resolution of these problems would become clear with the acquisition of new data for low surface brightness galaxies. Instead, the problems have become much worse DM . These recent data are a particular problem for CDM models, which simply do not fit me99 . Tweaking the cosmic parameters can reduce but not eliminate the problems. No model in the concordant range can fit the data unless one invokes some deus ex machina to make it so. There is one theory which not only fits the recent observations, but predicted them MD . This is the modified dynamics (MOND) hypothesized by Milgrom Mb . The basic idea here is that instead of dark matter, the force law is modified on a small acceleration scale, $`a_0\approx 1.2\,\mathrm{\AA }\,\mathrm{s}^{-2}`$. For $`a\gg a_0`$ everything is normal, but for $`a\ll a_0`$ the effective force law is $`a=\sqrt{a_Na_0}`$, where $`a_N`$ is the usual Newtonian acceleration. This hypothesis might seem radical, but it has enormous success in rectifying the mass discrepancy. It works exquisitely well in rotating disks where there are few assumptions BBS ; S96 ; SV ; mondLSB . MOND also seems to work in other places, like dwarf Spheroidals MD ; GS ; 7dw ; MOVK , galaxy groups groups , and filaments filam . The only place in which it does not appear to completely remedy the mass discrepancy is in the cores of rich clusters of galaxies clust2 , a very limited missing mass problem. It is a real possibility that MOND is correct, and CDM does not exist. Let us examine the cosmological consequences of this.

## Simple MOND Cosmology

There exists no complete, relativistic theory encompassing MOND, so a strictly proper cosmology can not be derived.
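For orientation, note the numerical coincidence that connects this acceleration scale to cosmology (standard arithmetic in the MOND literature; the $`H_0`$ value here is illustrative):

$$cH_0\approx (3\times 10^{10}\,\mathrm{cm\,s}^{-1})\times (75\,\mathrm{km\,s}^{-1}\,\mathrm{Mpc}^{-1})\approx 7\times 10^{-8}\,\mathrm{cm\,s}^{-2}\approx 6\,a_0,$$

so the critical acceleration sits within an order of magnitude of the acceleration scale set by the present-day horizon. A coincidence like this is part of what invites asking what MOND does cosmologically.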
However, it is possible to obtain a number of heuristic results in the spirit of MOND. The simplest approach is to assume that $`a_0`$ does not vary with cosmic time. This need not be the case CoA ; AnnPh , but makes a good starting point. I do not have space to derive anything here, and refer the reader to detailed published work CoA ; AnnPh ; Scosm . Making this simple assumption, the first thing we encounter is that it is not trivial to derive the expansion history of the universe in MOND Scosm ; F84 . This might seem unappealing, but does have advantages. For example, a simple MOND universe will eventually recollapse irrespective of the value of $`\mathrm{\Omega }_m`$. There is no special value of $`\mathrm{\Omega }_m`$, so no flatness problem. Conventional estimates of $`\mathrm{\Omega }_m`$ are overly large in MOND. Instead of $`0.2<\mathrm{\Omega }_m<0.4`$, MOND gives $`0.01<\mathrm{\Omega }_m<0.04`$. So a MOND universe is very low density, consistent with being composed purely of baryons in the amount required by big bang nucleosynthesis. This makes some sense. Accelerations in the early universe are too high for MOND to matter. This persists through nucleosynthesis and recombination, so everything is normal then and all the usual results are retained. MOND does not appear to contradict any empirically established cosmological constraint. The universe as a whole transitions into the MOND regime ($`cH_0\sim a_0`$) around $`z\approx 3`$, depending on $`\mathrm{\Omega }_m`$. Sub-horizon scale bubbles could begin to make this phase transition earlier, providing seeds for the growth of structure and setting the mass scale for galaxies Scosm . Nothing can happen until the radiation releases its grip on the baryons ($`z\approx 200`$), by which time the typical acceleration is quite small. As a result, things subsequently behave as if there were a lot of dark matter: perturbations grow fast. This provides a mechanism by which structure grows from a smooth state to a very clumpy one rapidly, without CDM. Now recall the two ironclad reasons why we must have CDM. In the case of MOND:

1. $`\mathrm{\Omega }_m=\mathrm{\Omega }_b\approx 0.02`$.
2. There is no problem growing structure rapidly from a smooth CMBR to the rich amount seen at $`z=0`$ with baryons alone.

## Predictions

The simple MOND scenario makes two predictions which distinguish it from standard CDM models.

1. Structure grows rapidly and to large scales.
2. The universe is made of baryons.

There are indications that at least some galaxies form early, and are already clustered at $`z\approx 3`$ GSADPK . At low redshifts, we are continually surprised by the size of the largest cosmic structures. It makes no sense in the conventional context that fractal analyses should work as well as they do fractal . A MOND universe need not be precisely fractal, but if analyzed in this way it naturally produces the observed dimensionality Scosm . So there are already a number of hints of MOND-induced behavior in cosmic data. A strong test may occur for rich clusters. These are rare at $`z>1`$ in any CDM cosmology ECF . In the simple MOND universe, clusters form at $`z\approx 3`$ Scosm . Upcoming X-ray missions should be able to detect these Mush . The rapid growth of perturbations in MOND overcomes the usual objections to purely baryonic cosmologies. The baryon fraction makes a tremendous difference to the bumps and wiggles in the CMBR power spectrum (Figure 1) EH . CDM smooths out the acoustic oscillations due to baryons in a way which can not happen if $`f_b=1`$.
This should leave some distinctive feature in the CMBR that can be measured by upcoming missions like MAP Sperg . Unfortunately, it is only easy to predict the spectrum as it emerges shortly after recombination. Since the growth of structure is rapid and nonlinear in MOND, there might be a strong integrated Sachs-Wolfe effect. I would expect this to erase any hint of the oscillations in the $`z=0`$ galaxy distribution, but not necessarily in the CMBR. The initial spectrum in the CMBR is sufficiently different in the CDM and MOND cases that there is a good prospect of distinguishing between the two.
# 1620 Geographos and 433 Eros: Shaped by Planetary Tides?

## 1 Introduction to 1620 Geographos

The shapes of several Earth-crossing objects (ECOs) have now been inferred by delay-Doppler radar techniques (Ostro 1993; Ostro et al. 1995a,b; Hudson and Ostro 1994, 1995, 1997). They show that ECOs have irregular shapes, often resembling beat-up potatoes or even contact binaries. It is generally believed that these shapes are by-products of asteroid disruption events in the main belt and/or cratering events occurring after an ECO has been ejected from its immediate precursor. A few of these bodies, however, have such unusual shapes and surface features that we suspect an additional reshaping mechanism has been at work. As we will show, at least one ECO, 1620 Geographos, has the exterior characteristics, orbit, and rotation rate of an object which has been significantly manipulated by planetary tidal forces. 1620 Geographos is an S-class asteroid with a mean diameter slightly over 3 km. It was observed with the Goldstone 2.52-cm (8510-MHz) radar from August 28 through September 2, 1994 when the object was within 0.0333 AU of Earth (Ostro et al. 1995a, 1996). A delay-Doppler image of Geographos's pole-on silhouette (Fig. 1) showed it to have more exact dimensions of $`5.11\times 1.85`$ km ($`2.76\times 1.0`$, normalized), making it the most elongated object yet found in the solar system (Ostro et al. 1995a, 1996). In addition, Geographos's rotation period ($`P=5.22`$ h) is short enough that loose material is scarcely bound near the ends of the body (Burns 1975). For reference, Geographos would begin to shed mass for $`P\lesssim 4`$ h if its bulk density was 2.0 g $`\mathrm{cm}^{-3}`$ (Harris 1996; Richardson et al. 1998). Geographos's elongated axis ratio was unusual enough that Solem and Hills (1996) first hypothesized it may not be a consequence of collisions. Instead, they speculated it could be a by-product of planetary tidal forces, which kneaded the body into a new configuration during an encounter with Earth. To test their hypothesis, they employed a numerical $`N`$-body code to track the evolution of non-rotating strengthless spherical aggregates making close slow passes by the Earth. Some of their test cases showed that tidal forces stretch spherical progenitors into cigar-like figures as long or longer than the actual dimensions of Geographos. Since ECOs undergo close encounters with Earth (and Venus) with some frequency (Bottke et al. 1994), Solem and Hills (1996) postulated that other ECOs may have comparable elongations. Though Geographos's elongation is provocative, it is, by itself, an inadequate means of determining whether the asteroid has been modified by tidal forces. To really make the case that 1620 Geographos is a tidally distorted object, several questions must be answered:

1. Is Geographos's internal structure (or that of any other ECO) weak enough to allow tidal forces to pull it apart?
2. How likely is it that Geographos ever made a close slow encounter with a large terrestrial planet like Earth or Venus?
3. Can tidal forces reshape an ECO into a Geographos-like silhouette (not just an asymmetrical elongated figure) and reproduce its spin rate?
4. If so, how often do such events occur?
5. Is Geographos a singular case, or have other ECOs undergone comparable distortion?

In the following sections, we will address each of these questions in turn.
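Before taking these up, it is worth making the spin constraint quoted above quantitative with a textbook estimate (ours, not the paper's): for a strengthless sphere of bulk density $`\rho `$, loose material at the equator is shed once the centrifugal acceleration exceeds self-gravity, which gives

$$\mathrm{\Omega }^2R\gtrsim \frac{GM}{R^2}\quad \Rightarrow \quad P_{\mathrm{crit}}=\sqrt{\frac{3\pi }{G\rho }}\approx 3.3\,\mathrm{h}\left(\frac{\rho }{1\,\mathrm{g\,cm}^{-3}}\right)^{-1/2},$$

or $`P_{\mathrm{crit}}\approx 2.3`$ h for $`\rho =2`$ g $`\mathrm{cm}^{-3}`$. An elongated body like Geographos, whose ends lie far from the rotation axis, begins shedding mass at somewhat longer periods, consistent with the $`P\lesssim 4`$ h figure above.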
Our primary tool to investigate these issues is the $`N`$-body code of Richardson et al. (1998), more advanced than the code of Solem and Hills (1996) and capable of determining the ultimate shape and rotation of our progenitors. By applying a reasonable set of ECO starting conditions, we will show that Geographos-type shapes and spins are a natural consequence of tidal disruption. The results discussed here are based on the extensive parameter space surveys completed for Richardson et al. (1998).

## 2 Issue A: Evidence that ECOs are "rubble piles"

Planetary tidal forces are, in general, too weak to modify the shapes of solid asteroids or comets unless the bodies are composed of very weak material (Jeffreys 1947; Öpik 1950). Recent evidence, however, supports the view that most km-sized asteroids (and comets) are weak "rubble-piles", aggregates of smaller fragments held together by self-gravity rather than material strength (Chapman 1978; Love and Ahrens 1996). We list a few salient points; additional information can be found in Richardson et al. (1998). (i) Comet Shoemaker-Levy-9 (SL9) tidally disrupted when it passed within 1.6 planetary radii of Jupiter in 1992; numerical modelling suggests this could only have happened if SL9 were virtually strengthless (Asphaug and Benz 1996). (ii) C-class asteroid 253 Mathilde has such a low density (1.3 g $`\mathrm{cm}^{-3}`$; Veverka et al. 1998) compared to the inferred composition of its surface material (i.e., if carbonaceous chondrite-like, it would have a density $`\sim 2`$ g $`\mathrm{cm}^{-3}`$; Wasson 1985) that its interior must contain large void spaces, small fragments with substantial interparticle porosity, or a combination of the two. (iii) A set of 107 near-Earth and main belt asteroids smaller than 10 km shows no object with a rotation period shorter than 2.27 h; this spin rate matches where rubble-pile bodies would begin to fly apart from centrifugal forces (Harris 1996). (iv) Most collisionally evolved bodies larger than $`\sim 1`$ km are highly fractured, according to numerical simulations of asteroid impacts (Asphaug and Melosh 1993; Greenberg et al. 1996; Love and Ahrens 1996). (v) All of the small asteroids (or asteroid-like bodies) imaged so far by spacecraft (e.g., 253 Mathilde, 243 Ida, 951 Gaspra, and Phobos) have large craters on their surface, implying their internal structures are so broken up that they damp the propagation of shock waves and thereby limit the effects of an impact to a localized region (Asphaug 1998). If this were not the case, many of these impacts would instead cause a catastrophic disruption. If the above lines of evidence have been properly interpreted, we can conclude that Geographos (and other ECOs) are probably rubble piles, since ECOs are generally thought to be collisionally evolved fragments derived from catastrophic collisions in the main belt. Thus, we predict that Geographos is weak enough to be susceptible to tidal distortion during a close pass with a planet.

## 3 Issue B: Probable Orbital Evolution of 1620 Geographos

If Geographos is a tidally distorted object, it had to encounter a planet at some time in the past. Not just any encounter will do, however. Tidal forces drop off as the inverse cube of the distance between the bodies, such that distant encounters far outside the planet's Roche limit cause negligible damage to the rubble pile. High velocity trajectories past a planet leave little time for tidal forces to modify the rubble pile's shape.
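A back-of-the-envelope version of this distance criterion (a standard estimate of ours, not taken from the paper): a surface particle on a rubble pile of radius $`r`$, mass $`m`$, and density $`\rho `$ is stripped when the differential (tidal) acceleration from a planet of mass $`M_p`$, radius $`R_p`$, and density $`\rho _p`$ at distance $`d`$ exceeds the pile's self-gravity,

$$\frac{2GM_pr}{d^3}\gtrsim \frac{Gm}{r^2}\quad \Rightarrow \quad d\lesssim R_p\left(\frac{2\rho _p}{\rho }\right)^{1/3}\approx 1.8\,R_{\oplus }$$

for $`\rho _p=5.5`$ and $`\rho =2`$ g $`\mathrm{cm}^{-3}`$. Rotation, elongation, and the finite duration of the encounter all push the effective reach of tides outward, which is why the simulations described below sample perigees out to several Earth radii.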
Thus, we need to estimate the probability that Geographos has made a close slow encounter with Earth or Venus. The orbits of ECOs evolve chaotically. Many of them have orbits which allow them to encounter multiple planets and the terrestrial planet region is crisscrossed with secular and mean-motion resonances (Froeschlé et al. 1995; Michel et al. 1997). For these reasons, it is impossible to accurately track the orbital motion of any ECO more than a few hundred years into the past or future. The only way, therefore, to assess the likelihood that Geographos had a planetary encounter in the past is to numerically integrate its orbit with that of many clones, in the hope that broad evolution patterns can be readily characterized. To this end, following the procedure of Michel et al. (1996), we used a Bulirsch-Stoer variable step-size integration code, optimized for dealing accurately with close encounters, to track the evolution of 8 Geographos-like test clones. We integrated the nominal orbit with $`a=1.246`$ AU, $`e=0.335`$, $`i=13.34^{\circ }`$; the other clones were defined by slightly changing their orbital parameters one at a time. All of the planets were included except Pluto. Orbital parameters were provided by the JPL's Horizons On-line Ephemeris System v2.60 (Giorgini et al. 1998). Each clone was followed for 4 Myr. In general, we determined the orbital evolution of the clones to be controlled by two mechanisms: close encounters with Earth and overlapping secular resonances $`\nu _{13}`$ and $`\nu _{14}`$ involving the mean precession frequencies of the nodal longitudes of Earth and Mars's orbits (Michel and Froeschlé 1997; Michel 1997). We found that 5 of the 8 clones (62.5%) had their inclinations increased by these resonances. This trend opens the possibility that these mechanisms could have affected Geographos's orbit in the past and consequently that its inclination has been pumped up from a lower value. Similarly, 6 of the 8 clones (75%) had their orbital eccentricities increased by the $`\nu _2`$ and $`\nu _5`$ secular resonances with Venus and Jupiter. Lower eccentricities and inclinations in the past imply that close approaches near Earth were even more likely to occur, and to happen at the low velocities conducive for tidal disruption, in agreement with integrations by other groups (Froeschlé et al. 1995). Thus, these integrations moderately increase our confidence that Geographos has been stretched by tides in the past.

## 4 Issue C. Tidal Disruption Model and Results

### 4.1 The model

To investigate the effects of planetary tides on ECOs (cf. Solem and Hills 1996), we have used a sophisticated $`N`$-body code to model Earth flybys of spherical-particle aggregates (Bottke et al. 1997, 1998; Richardson et al. 1998). Our goal in this section is to determine whether Geographos-like shapes are common by-products of tidal disruption. Model details, analysis techniques, and general results are described in Richardson et al. (1998). For brevity, we only review the basics here. The particles' motions are tracked during the encounter using a 4th-order integration scheme that features individual particle timesteps (Aarseth 1985). This method allows us to treat interparticle collisions rigorously, with a coefficient of restitution included to produce energy loss (i.e., friction); previous models usually assumed elastic or perfectly inelastic collisions.
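As an illustration of what a dissipative collision prescription involves, here is a minimal impulse-based sketch for two equal-mass spheres (a toy version of ours; the restitution value and the neglect of surface friction are illustrative choices, not those of the actual code):

```python
import numpy as np

def resolve_collision(x1, v1, x2, v2, eps_n=0.8):
    """Impulsively update velocities of two equal-mass spheres in contact.

    eps_n is the normal coefficient of restitution: 1 = elastic,
    0 = perfectly inelastic. Tangential velocity is left unchanged
    (no surface friction in this toy version).
    """
    n = (x2 - x1) / np.linalg.norm(x2 - x1)   # unit normal at contact
    v_rel = np.dot(v1 - v2, n)                # closing speed along the normal
    if v_rel <= 0.0:                          # already separating: no impulse
        return v1, v2
    j = 0.5 * (1.0 + eps_n) * v_rel           # impulse per unit mass
    return v1 - j * n, v2 + j * n

# Head-on test: spheres approach at +/-1; eps_n = 0.8 leaves them
# separating at 80% of the original closing speed.
v1, v2 = resolve_collision(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                           np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]))
print(v1, v2)   # -> [-0.8 0 0] [0.8 0 0]
```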
Note that if energy dissipation is not included, clumps formed by gravitational instability are noticeably less tightly bound (Asphaug and Benz 1996). The code is capable of modelling tidal disruption over a range of rubble pile shapes, spin rates, spin-axis orientations, and hyperbolic trajectories. To verify the code was accurate enough to realistically model shape changes, we consulted two experts in granular media, J. Jenkins of Cornell University, and C. Thornton of Aston University, UK. Based on their suggestions, we checked our code against some standard diagnostic tests in their field. For our first test, we numerically modeled spherical particles being dropped into a pile along a flat surface. Our results showed that we were able to reproduce an empirically-derived angle of repose. For a second test, we examined the pre- and post-planetary encounter particle configurations of our rubble piles to determine whether their shapes were artifacts of a crystalline lattice structure (i.e., "cannonball stacking"). Our results showed that lattice effects are nearly unavoidable in rubble pile interiors, especially when same-sized spherical particles are used, but that the outer surfaces of our rubble piles had essentially randomized particle distributions. Thus, based on our success with these tests and the positive comments of the granular media experts, we have some confidence that our $`N`$-body code yields reasonable results. Our model rubble piles had dimensions of $`2.8\times 1.7\times 1.5`$ km, our choice for a representative ECO shape (Richardson et al. 1998), and bulk densities of 2 g $`\mathrm{cm}^{-3}`$, similar to the estimated densities for Phobos and Deimos (Thomas et al. 1992). Note that this value may be overly-conservative, given the 1.3 g $`\mathrm{cm}^{-3}`$ density found for Mathilde. Individual particles have densities of 3.6 g $`\mathrm{cm}^{-3}`$, similar to ordinary chondritic meteorites (Wasson 1995). For most test cases, our rubble pile consisted of 247 particles, with each particle having a diameter of 255 m. Same-sized particles were chosen for simplicity; future work will investigate more plausible particle size-distributions. Cases deemed interesting were examined further using rubble piles with 491 same-sized particles. In these instances, particle densities were modified to keep the aggregate's bulk density the same as before. We found that the change in resolution did not significantly modify the degree of mass shedding, the final shape, or the final spin rate of the model asteroid, though it did make some shape features more distinctive. The tidal effects experienced during a rubble pile's close approach to Earth are determined by the rubble pile's trajectory, rotation, and physical properties. To investigate such a large parameter space, Richardson et al. (1998) systematically mapped their outcomes according to the asteroid's perigee distance $`q`$ (between 1.01 and 5.0 Earth radii), approach speed $`v_{\mathrm{\infty }}`$ (between 1.0 and 32 km $`\mathrm{s}^{-1}`$), rotation period $`P`$ (tested at $`P`$ = 4, 6, 8, 10, and 12 h for prograde rotation, $`P=6`$ and 12 h for retrograde rotation, and the no-spin case $`P=\mathrm{\infty }`$), spin axis orientation (obliquity varied between $`0^{\circ }`$ and $`180^{\circ }`$ in steps of $`30^{\circ }`$), and orientation of the asteroid's long axis at perigee (tested over many angles between $`0^{\circ }`$ and $`360^{\circ }`$). We discuss the outcomes, especially those pertaining to Geographos, below.
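One way to see why the low end of this $`v_{\mathrm{\infty }}`$ range is the interesting one: the strong-tide phase lasts only of order $`2q/v_{\mathrm{peri}}`$, which can be estimated in a few lines (a rough timescale argument of ours, not a computation from the paper):

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
M_E = 5.974e24       # kg, Earth mass
R_E = 6.378e6        # m, Earth radius

def perigee_window_hours(q_earth_radii, v_inf_kms):
    """Rough duration ~ 2q / v_perigee of the strong-tide phase."""
    q = q_earth_radii * R_E
    v_inf = v_inf_kms * 1e3
    v_peri = np.sqrt(v_inf**2 + 2 * G * M_E / q)   # vis-viva at perigee
    return 2 * q / v_peri / 3600.0

for v in (1.0, 8.0, 32.0):                          # span of the v_inf grid
    print(f"q = 1.5 R_E, v_inf = {v:4.1f} km/s ->"
          f" {perigee_window_hours(1.5, v):.2f} h near perigee")
```

Even at the slowest approach speeds the window is well under an hour, short compared with the 4 to 12 h spin periods sampled, so the tide acts nearly impulsively; slower and closer passes simply give it more time to do work on the pile.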
### 4.2 Tidal disruption outcomes

Several distinct outcomes for tidal disruption were found by Richardson et al. (1998). The most severely disrupted rubble piles were classified as "S", a "SL9-type" catastrophic disruption forming a line of clumps of roughly equal size (a "string of pearls") with the largest fragment containing less than 50% of the progenitor's original mass. Less severe disruptions were classified as "B", break-up events where 10% to 50% of the rubble pile was shed into clumps (three or more particles) and single particles. Mild disruption events were classified as "M", with the progenitor losing less than 10% of its mass. As we will show below, each outcome class is capable of producing Geographos-like elongations and spin rates.

### 4.3 Reshaping rubble piles with planetary tidal forces

To quantify shape changes, we measured the length of each rubble pile's axes after encounter ($`a_1\geq a_2\geq a_3`$), calculated the axis ratios ($`q_2\equiv a_2/a_1`$ and $`q_3\equiv a_3/a_1`$), and defined a single-value measure of the remnant's "ellipticity" ($`\epsilon _{\mathrm{rem}}\equiv 1-\frac{1}{2}(q_2+q_3)`$). For reference, our progenitor has $`\epsilon _{\mathrm{rem}}=0.43`$ and Geographos has a value of $`\epsilon _{\mathrm{rem}}=0.64`$. Sampling a broad set of parameters to map tidal disruption outcomes, Richardson et al. (1998) identified 195 S, B, or M-class events produced with an $`\epsilon _{\mathrm{rem}}=0.43`$ rubble pile. Fig. 2 shows this set with ellipticity plotted against the fraction of mass shed by the progenitor during tidal disruption. We find that, in general, S-class events tend to yield lower ellipticity values; only 2 of the 79 outcomes are likely to have Geographos-like elongations ($`\epsilon _{\mathrm{rem}}>0.60`$). The mean value of $`\epsilon _{\mathrm{rem}}`$ for the S-class events is 0.22 with standard deviation $`\sigma `$ = 0.14. The near-spherical shapes produced by S-class events are a by-product of gravitational instabilities in the fragment chain which readily agglomerate scattered particles as they recede from the planet. B-class events do not show a simple trend with respect to ellipticity, though these values tend to increase as the degree of mass shedding decreases. We find that 5 of 40 outcomes have Geographos-like $`\epsilon _{\mathrm{rem}}`$ values. The mean value of $`\epsilon _{\mathrm{rem}}`$ for all 40 B-class events is 0.45 ($`\sigma =0.14`$), very close to the starting ellipticity of 0.43. M-class events are most effective at increasing $`\epsilon _{\mathrm{rem}}`$ and creating Geographos-like shapes, probably because tidal torques must first stretch and/or spin-up the rubble pile before particles or clumps can be ejected near the ends of the body. Fig. 2 shows 23 of 76 M-class events with Geographos-like $`\epsilon _{\mathrm{rem}}`$ values. Overall, the 76 outcomes have a mean $`\epsilon _{\mathrm{rem}}=0.54`$ with $`\sigma =0.10`$. Thus, getting a Geographos-like ellipticity from an M-class disruption is less than a 1 $`\sigma `$ event, decent odds if such disruptions (and $`\epsilon _{\mathrm{rem}}=0.43`$ progenitors) are common.

### 4.4 Spin-up and down with planetary tidal forces

Tidal disruption also changes the spin rates of rubble piles. This can be done by applying a torque to the non-spherical mass distribution of the object, redistributing the object's mass (and thereby altering its moment of inertia), removing mass (and angular momentum) from the system, or some combination of the three.
Fig. 3 shows the spin periods of the remnant rubble piles ($`P_{\mathrm{rem}}`$) for the 195 disruption cases described above. Recall that the range of starting $`P`$ values was 4, 6, 8, 10, and 12 h for prograde rotation, $`P=6`$ and 12 h for retrograde rotation, and the no-spin case $`P=\mathrm{\infty }`$. The mean spin period for 79 S-class outcomes is $`5.6\pm 2.2`$ hours, while the comparable value for the 40 B-class and 76 M-class events is $`5.2\pm 1.1`$ and $`4.9\pm 1.1`$ hours, respectively. Note that these last two values are close to the real spin period of Geographos (5.22 h). These similar values indicate that mass shedding only occurs when the km-sized bodies are stretched and spun-up to rotational break-up values. The final rotation rate of the rubble pile is then determined by the extent of the mass loss; in general, more mass shedding (S-class events) means a loss of more rotational angular momentum, which in turn translates into a slower final spin rate. Though the points of Fig. 3 do show some scatter, the 195 disruption events together have a mean $`P_{\mathrm{rem}}`$ value of $`5.2\pm 1.7`$ hours, a good match with Geographos once again.

### 4.5 Matching the shape and spin of 1620 Geographos

Now that we have found tidal disruption outcomes which match Geographos's ellipticity and spin rate, we can take a closer look at the resultant shapes of the rubble piles themselves. Our goal is to find distinctive features which match comparable features on Geographos, and which are possibly antithetical with a collisional origin. To make sure we can resolve these features, we have used a rubble pile containing nearly twice the number of components as before (491 particles). Fig. 4 shows this body going through an M-class event with the following encounter parameters: $`P=6`$ hours prograde, $`q=2.1R_{\oplus }`$, and $`v_{\mathrm{\infty }}=8`$ km $`\mathrm{s}^{-1}`$. Fig. 4a shows the asteroid before encounter. The spin vector is normal to the orbital plane and points directly out of the page. The asteroid's equipotential surface (to which a liquid would conform) is a function of local gravity, tidal, and centrifugal terms. At this stage, it hugs the outer surface of the rubble-pile. Fig. 4b shows the body shortly after perigee passage. Here, the equipotential surface becomes a more elongated ellipsoid with its longest axis oriented towards Earth. Differential tidal forces, greatest at perigee, and centrifugal effects combine to set the particles into relative motion, producing a landslide towards the ends of the body. Particles above the new angle of repose roll or slide downslope to fill the "low spots", and thereby further modify the body's potential. As a consequence, the rubble pile is elongated and, as the planet pulls on the body, its rotation rate altered. The action of the Earth stretches the model asteroid and, by pulling on the distorted mass, spins it up, increasing its total angular momentum. Mass ejection occurs when the total force on a particle near the asteroid's tips is insufficient to provide the centrifugal acceleration needed to maintain rigid-body rotation. Fig. 4c shows the latter stages of the landslide. Particles near the tips are swept backward in the equatorial plane by the asteroid's rotation. The material left behind frequently preserves this spiral signature as cusps pointing away from the rotation direction.
Note that these cusps are easy to create but difficult to retain with identical spherical particles at this resolution; we believe that real rubble piles, with rough or craggy components, would more readily "freeze" in position near the ends. Particle movement along the long axis is not uniform; shape changes, increased angular momentum, and mass shedding cause one side of the body to become bow-like. This effect produces a convex surface along the long axis and a "hump"-like mound of material on the opposite side. Fig. 4d shows the final shape of the object. The spin ($`P_{\mathrm{rem}}=5.03`$ h) and ellipticity ($`\epsilon _{\mathrm{rem}}=0.65`$) are virtually identical to Geographos (Fig. 1). The shapes of the two ends are, surprisingly, not symmetric. We believe this is caused by the starting topography, which can play a decisive role in the effectiveness of tidal deformation. The strength of tidal and centrifugal terms depends on each particle's position (Hamilton and Burns 1996), such that some particles lie further above the local angle of repose than others. Since our model asteroid, like real ECOs, is neither a perfect ellipsoid nor a readily-adaptable viscous fluid, the new distorted shape is influenced by the body's granular nature (i.e., friction and component size affect the strength of the landslide). Hence, particles leak more readily off one end than the other, often accentuated by limited particle movement before the rubble-pile reaches perigee. The end that sheds more mass frequently becomes elongated, tapered, and narrow when compared to the stubbier antipode. The overall final shape of the body is much like that of a "porpoise" or "schmoo". A comparison between Fig. 1 and Fig. 4 shows a good match; all of Geographos's main features have been reproduced.

## 5 Issue D. Production Rate of Geographos-Shaped Objects

As described above, certain S-, B- and M-class disruptions can leave rubble piles with highly elongated shapes and fast spin rates. To estimate the frequency of those particular disruption events near Earth and Venus, we use the technique of Bottke et al. (1998) and combine a "map" of tidal ellipticity results (described in Richardson et al. 1998) with probability distributions based on ECO spins, ECO spin axis orientations, ECO close approaches with Earth and Venus, and ECO encounter velocities with Earth and Venus. Our results show that a typical ECO should undergo an S-, B-, or M-class event once every $`\sim 65`$ Myr, comparable to an ECO's collision rate with Earth and Venus (Richardson et al. 1998). Similarly, this same body should get a Geographos-like ellipticity ($`\epsilon _{\mathrm{rem}}>0.60`$) once every $`\sim 560`$ Myr. The most likely disruption candidates have low $`e`$'s and $`i`$'s, consistent with Geographos's probable orbital history (i.e., Sec. 3). Since the dynamical lifetime of ECOs against planetary collision, comminution, or ejection by Jupiter is thought to be on the order of 10 Myr (Gladman et al. 1997), we predict that $`\sim 15`$% of all ECOs undergo S-, B- or M-class disruptions (i.e., 10 Myr / 65 Myr), and that $`\sim 2`$% of all ECOs (i.e., 10 Myr / 560 Myr) should have shapes (and spins) like Geographos. The implications of this prediction will be discussed below.

## 6 Issue E. Other Geographos-Like Objects
### 6.1 Detecting tidally-distorted ECOs

Our estimate that 2% of all rubble pile ECOs should have Geographos-type shapes and spin periods is, at best, only accurate to a factor of several, given the many unknown quantities we are modeling and the relatively unknown shape distribution of the ECO population. Still, the following thought experiment is useful in providing a crude "reality check". 1620 Geographos has a mean diameter of 3 km and an absolute magnitude of $`H`$ = 15.6 (Giorgini et al. 1998). Morrison (1992) estimates there are roughly 100 ECOs with absolute magnitudes brighter than 15.0 (6 and 3 km diameters, respectively, for the dark C's and bright S's). Since 2% of 100 objects is 2 objects, it is perhaps not surprising we have not noticed more Geographos-like asteroids. Alternatively, one could argue that, given these odds, it was fortunate to have discovered Geographos's shape in the first place, especially when one considers that only 35% of the $`H<15.0`$ ECOs have been discovered, and relatively few of them have had their shapes determined by delay-Doppler radar (Morrison 1992). It is useful to recall, however, that the known ECO population is biased towards objects which pass near the Earth on low inclination orbits (Jedicke 1996), exactly the class of objects which are favored to undergo tidal disruption. Hence, the discovery of Geographos's shape among a limited sample of ECOs may not be a fluke. We predict, though, that more Geographos-like objects are lurking among the undiscovered ECO population. Our investigation of Geographos led us to examine a second asteroid, 433 Eros, which shares many of Geographos's distinguishing characteristics. We believe Eros may also be tidally distorted, as we will discuss further below.

### 6.2 Application to 433 Eros

Our success in suggesting an explanation for Geographos has led us to consider the next most elongated asteroid, S-class asteroid 433 Eros, the target of the NEAR mission. Eros has many of the same distinguishing characteristics as Geographos (and our B- and M-class remnant rubble piles). Visual and radar observations taken during a 0.15 AU pass near Earth in 1975 report that Eros has a short rotation period (5.27 hours) and a highly elongated shape ($`36\times 15\times 13`$ km; $`2.77\times 1.2\times 1.0`$, normalized; ellipticity $`\epsilon _{\mathrm{rem}}=0.61`$) (Zellner 1976; McFadden et al. 1989; Mitchell et al. 1998). Both values are comparable to those recorded for Geographos and with 15% (30 out of 195) of our S-, B-, and M-class disruption cases. Even more intriguing, however, is Eros's pole-on silhouette, which, after modeling the older Goldstone radar data, looks something like a kidney bean (Fig. 5) (Mitchell et al. 1998). One must be careful not to overinterpret this shape, since it is based on data that has a signal to noise ratio of $`\sim 70`$ while the shape has been "fit" to a reference ellipsoid which can eliminate discriminating features. In fact, the concave side of the "kidney bean" shape may not be a single concavity, but several adjacent ones. Still, we believe it plausible that Eros's arched back and tapered ends are analogous to similar features on Geographos, themselves produced by spiral deformations associated with tidal forces. Images from the NEAR spacecraft should readily resolve this issue. The NEAR spacecraft will offer several additional ways to test our hypothesis.
Regardless of whether Eros is covered by regolith or bare rock, spectroscopic measurements will suggest a surface composition which can be directly compared to terrestrial rock samples. If the densities of these samples are substantially larger than Eros’s bulk density, we can infer that Eros is probably a rubble pile. While observations of large craters would support the rubble pile scenario, too many would weigh against the tidal disruption scenario; global landslides caused by a relatively recent tidal disruption event should modify or bury craters. For this reason, we expect most tidally distorted objects to have relatively young and spectroscopically uniform surfaces. As we will describe below, however, the unknown dynamical history of Eros makes any prediction problematic. Landslides also sort debris as it moves downhill; high resolution images near the ends of Eros may show not only cusp-like features but also a prevalence of small fragments. An estimate of the spatial distribution of block sizes inside Eros may come from NEAR’s gravity field maps. Finally, the results of Bottke and Melosh (1996) and Richardson et al. (1998) show that asteroids affected by tides may often have small satellite companions which were torn from the original body. Thus, the presence of a small moon about Eros would be a strong indication that it had undergone tidal fission. A possible problem, dynamically speaking, is that Eros is currently an Amor asteroid on a solely Mars-crossing orbit ($`a=1.46`$ AU, $`e=0.22`$, $`i=10.8^{\circ }`$). Test results show that tidal disruption events occur relatively infrequently near Mars, since it is a weak perturber (Bottke and Melosh 1996). Studies of Eros’s orbital evolution, however, suggest that it may have been on a low-inclination, deeply Earth-crossing orbit in the past (Michel et al. 1998). Numerical integrations of Eros-type clones show that the secular resonances $`\nu _4`$ and $`\nu _{16}`$ probably modified Eros’s orbital parameters, decreasing its eccentricity enough to place it out of reach of the Earth, while increasing Eros’s inclination to its current value (Michel et al. 1996; Michel 1997; Michel et al. 1998). If true, Eros would have been prone to low velocity Earth encounters (and tidal disruption) in some past epoch.

## 7 Conclusions

Current evidence suggests that km-sized asteroids and comets are rubble piles. When these objects, in the form of ECOs, encounter a planet like the Earth, S-, B-, and M-class tidal disruptions frequently produce elongated objects ($`\epsilon _{\mathrm{rem}}>0.6`$) with fast spin rates ($`P\sim 5`$ hours). These values are consistent with at least two objects in near-Earth space, 1620 Geographos and 433 Eros, which may have made close, slow encounters with Earth or Venus in the past. In addition, the shapes of our model asteroids that have been heavily distorted (and disrupted) by Earth’s or Venus’s tidal forces resemble the radar-derived shapes of Geographos and Eros. Estimates of the frequency of tidal disruption events indicate that a small but detectable fraction of the ECO population should have Geographos-like spins and shapes. For these reasons, we believe that planetary tidal forces should be recognized, alongside collisional processes, as an important geological process capable of modifying small bodies. We thank L. Benner, J. A. Burns, P. Farinella, S. Hudson, J. Jenkins, S. Ostro, and C. Thornton for useful discussions and critiques of this work. We also thank E. Asphaug and C.
Chapman for their constructive reviews of this manuscript. PM worked on this paper while holding the External Fellowship of the European Space Agency. DCR was supported by grants from the NASA Innovative Research Program and NASA HPCC/ESS. Preparation of the paper was partly supported by NASA Grant NAGW-310 to J. A. Burns.
# Magnetic precursor effects in Gd based intermetallic compounds

## I Introduction

One of the points of debate in the field of giant magnetoresistance (GMR) is the origin of the negative temperature coefficient of resistivity ($`\rho `$) above the Curie temperature (T<sub>C</sub>) and the resultant large negative magnetoresistance at T<sub>C</sub> [Ref. 1, 2]. With such recent trends in the field of magnetism in mind, we have been carefully investigating the magnetoresistance behaviour of some Gd alloys in the vicinity of their respective magnetic ordering temperatures (T<sub>o</sub>), in order to address the question of whether such features can arise from some other factor. We have indeed noted an extra contribution to $`\rho `$ over a wide temperature range above T<sub>o</sub> in GdPt<sub>2</sub>Si<sub>2</sub>, GdPd<sub>2</sub>In, GdNi<sub>2</sub>Si<sub>2</sub> [Ref. 3], GdNi [Ref. 4], and Gd<sub>2</sub>PdSi<sub>3</sub> [Ref. 5], as a result of which the magnetoresistance is negative just above T<sub>o</sub>, attaining a large value at T<sub>o</sub>, similar to the behavior in manganites. In fact, in one of the Gd compounds, Gd<sub>2</sub>PdSi<sub>3</sub>, the temperature coefficient of $`\rho `$ is even negative just above the Néel temperature (T<sub>N</sub>), with a distinct minimum at a temperature far above T<sub>N</sub>. Such observations suggest the need to explore the role of any other factor at work before long range magnetic order sets in. Similar resistance anomalies have been noted above T<sub>o</sub> even in some Tb and Dy alloys. Since critical spin fluctuations may set in as one approaches T<sub>o</sub>, the natural tendency is to attribute these features to such spin fluctuations extending to an unusually high temperature range. In our opinion, there exists a more subtle effect, e.g., a magnetism-induced electron localisation (magnetic polaronic effect) and a consequent reduction in the mobility of the carriers as one approaches long range magnetic order. Efforts on manganites along these lines are actually underway, and it appears that a decrease in the mobility of the carriers is primarily responsible for the negative temperature coefficient of $`\rho `$ above T<sub>C</sub> and the large magnetoresistance. The results on the Gd alloys mentioned above are also important to various developments in the field of heavy fermions and Kondo lattices, as discussed in Refs. 3-5, 9-11. Thus, the investigation of magnetic precursor effects in relatively simple magnetic systems is relevant to current trends in magnetism in general; the Gd systems are simple in the sense that Gd does not exhibit any complications due to double-exchange, crystal-field, Jahn-Teller or Kondo effects. We therefore consider it worthwhile to gather more experimental information on magnetic precursor effects in Gd systems. In this article, we report the results of electrical resistivity ($`\rho `$) measurements on a number of other Gd alloys crystallizing in the same (or closely related) structure, in order to arrive at an overall picture of the magnetic precursor effects in Gd compounds. Among the Gd alloys investigated, interestingly, many do not exhibit such resistance anomalies; in addition, we find that there is no one-to-one correspondence between the (non)observation of excess $`\rho `$ and a possible enhancement of the heat capacity (C) above T<sub>o</sub> in these Gd alloys.
The compounds under investigation are: GdCu<sub>2</sub>Ge<sub>2</sub> (T<sub>N</sub>= 12 K), GdAg<sub>2</sub>Si<sub>2</sub> (T<sub>N</sub>= 17 K), GdPd<sub>2</sub>Ge<sub>2</sub> (T<sub>N</sub>= 18 K, Ref. 13), GdCo<sub>2</sub>Si<sub>2</sub> (T<sub>N</sub>= 44 K), GdAu<sub>2</sub>Si<sub>2</sub> (T<sub>N</sub>= 12 K), GdNi<sub>2</sub>Sn<sub>2</sub> (T<sub>N</sub>= 7 K) and GdPt<sub>2</sub>Ge<sub>2</sub> (T<sub>N</sub>= 7 K). While the crystallographic and magnetic behaviour of most of these compounds is well known, this article reports, to our knowledge, the first magnetic characterization of GdNi<sub>2</sub>Sn<sub>2</sub> and GdPt<sub>2</sub>Ge<sub>2</sub>. We have chosen this set of compounds since all of them are crystallographically related: most form in the ThCr<sub>2</sub>Si<sub>2</sub>-type tetragonal structure, while GdNi<sub>2</sub>Sn<sub>2</sub> and GdPt<sub>2</sub>Ge<sub>2</sub> appear to form in a related structure, viz., CaBe<sub>2</sub>Ge<sub>2</sub> or its monoclinic modification.

## II Experimental

The samples were prepared by arc melting stoichiometric amounts of the constituent elements in an arc furnace in an atmosphere of argon and were annealed at 800 °C for 7 days. The samples were characterized by x-ray diffraction. The electrical resistivity measurements were performed in zero field as well as in the presence of a magnetic field (H) of 50 kOe in the temperature interval 4.2 - 300 K by a conventional four-probe method, employing silver paint for the electrical contacts of the leads with the samples; in addition, the resistivity was measured as a function of H at selected temperatures. No significance should be attached to the absolute values of $`\rho `$, owing to various uncertainties arising from the brittleness of these samples, voids, and the spread of the silver paint. We also performed the C measurements by a semiadiabatic heat-pulse method in the temperature interval 2 - 70 K in order to look for certain correlations with the behavior of $`\rho `$; the respective non-magnetic Y or La compounds were also measured so as to have an estimate of the lattice contribution, though this estimate is not found to be reliable at high temperatures (far above T<sub>o</sub>). In order to get further information on the magnetic behavior, the magnetic susceptibility ($`\chi `$) was also measured in a magnetic field of 2 kOe (2 - 300 K) employing a superconducting quantum interference device, and the behavior of the isothermal magnetization (M) was obtained at selected temperatures.

## III Results and discussion

The results of the $`\rho `$ measurements in the absence and in the presence of a magnetic field are shown in Fig. 1a below 45 K for GdPt<sub>2</sub>Ge<sub>2</sub>. The C data are shown in Fig. 1b. The $`\chi `$ data in the same temperature interval are shown in Fig. 1c to establish the value of T<sub>N</sub>. The magnetoresistance, defined as $`\mathrm{\Delta }\rho /\rho `$= [$`\rho `$(H)-$`\rho `$(0)]/$`\rho `$(0), is shown as a function of H at selected temperatures in Fig. 1d. From a comparison of the data in Figs. 1a, 1b and 1c, it is clear that this compound undergoes long range magnetic ordering at T<sub>N</sub> = 7 K, presumably of an antiferromagnetic type, considering that the Curie-Weiss temperature ($`\theta `$<sub>p</sub>), obtained from the high temperature Curie-Weiss behavior of $`\chi `$, is negative (-8 K) and that the isothermal magnetization (M) at 4.5 K shows no indication of saturation (and, in fact, varies linearly with H, Fig. 1c, inset).
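For reference (this sketch is ours, not part of the original article), the Curie-Weiss analysis invoked above amounts to a straight-line fit of 1/$`\chi `$ versus T at high temperature; assuming cgs molar susceptibility data and numpy, it returns $`\theta `$<sub>p</sub> and the effective moment, which for a free Gd<sup>3+</sup> ion (S = 7/2, g = 2) should be close to 7.94 μ<sub>B</sub>:

```python
import numpy as np

def curie_weiss_fit(T, chi):
    """Linear fit 1/chi = (T - theta_p)/C at high T (chi in emu/mol, cgs units)."""
    slope, intercept = np.polyfit(T, 1.0/chi, 1)
    C = 1.0/slope                  # Curie constant
    theta_p = -intercept*C         # Curie-Weiss temperature (K); negative -> antiferromagnetic
    mu_eff = np.sqrt(8.0*C)        # effective moment in Bohr magnetons (cgs convention)
    return theta_p, mu_eff
```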
There is an upturn in $`\rho `$ below 7 K, instead of a drop, presumably due to the development of magnetic Brillouin-zone boundary gaps. However, with the application of a magnetic field, say 50 kOe, this low temperature upturn in $`\rho `$ is depressed; the point to be noted is that there is a significant depression of $`\rho `$ with the application of H even above 7 K, the magnitude of which decreases with increasing temperature. Thus, there is a significant negative magnetoresistance not only below T<sub>N</sub>, but also above it over a wide temperature range. This point emerges more clearly when one measures $`\mathrm{\Delta }\rho /\rho `$ as a function of H at various temperatures (Fig. 1d). There is a quadratic variation with H (up to about 50 kOe) at all the temperatures shown in the plots, attaining a large value at higher fields; these are characteristics of spin-fluctuation systems. In order to explore whether any such magnetic precursor effects are present in the C data, we show the magnetic contribution (C<sub>m</sub>) to C in Fig. 1b after subtracting the lattice contribution (derived from the C data of YPt<sub>2</sub>Ge<sub>2</sub>) as described in Refs. 9, 17. This may not be a perfect way of determining C<sub>m</sub> above 30 K, as the derived lattice part does not coincide with the measured data for the sample, even though the magnetic entropy (obtained by extrapolation of C<sub>m</sub> to zero Kelvin) reaches its highest value (R ln 8) around 40 K; a possibly different degree of crystallographic disorder between the Gd and Y alloys may be responsible for this discrepancy. Clearly, the feature is rounded off at the higher temperature side of T<sub>N</sub>, resulting in a tail extending to a higher temperature range, and this feature is free from the error discussed above. The data basically provide evidence that the full magnetic entropy (R ln 8) is attained only in the range 30 - 40 K, and it is exactly up to this temperature range that we see an enhancement of $`\rho `$ that is depressed by the application of H. In short, this compound exhibits magnetic precursor effects in both the C and $`\rho `$ data. As in the case of GdPt<sub>2</sub>Ge<sub>2</sub>, the results obtained from the various measurements for GdNi<sub>2</sub>Sn<sub>2</sub> are shown in Fig. 2 below 35 K. It is clear from the features in $`\rho `$, C and $`\chi `$ that this compound orders magnetically at about 7 K; from the reduced value of the peak C<sub>m</sub> (lattice contribution derived from the values for YNi<sub>2</sub>Sn<sub>2</sub>) [Ref. 17] and the negative $`\theta `$<sub>p</sub>, we infer that the magnetic structure is of an amplitude-modulated antiferromagnetic type. The main point of emphasis is that there is an excess resistivity up to about 15 K, which is highlighted by the depression of $`\rho `$ with the application of H. Though there are problems similar to those for GdPt<sub>2</sub>Ge<sub>2</sub> in deducing a precise lattice contribution at higher temperatures, we are confident that the C<sub>m</sub> data qualitatively exhibit a tail up to about 15 K, and the total magnetic entropy is released around the same temperature. The magnetoresistance appears to vary nearly quadratically with H above T<sub>N</sub>, say, at 10 and 15 K. Thus, the $`\rho `$ and C data show magnetic precursor effects for this alloy as well.
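The entropy bookkeeping used in the last two paragraphs — integrating C<sub>m</sub>/T up from low temperature and checking where the full R ln 8 of the Gd<sup>3+</sup> multiplet is recovered — is a short calculation; a sketch (ours, assuming numpy arrays of C<sub>m</sub> in J mol<sup>-1</sup> K<sup>-1</sup>):

```python
import numpy as np

R = 8.314  # gas constant, J / (mol K)

def magnetic_entropy(T, Cm):
    """Cumulative S_m(T) = integral of C_m/T' dT' from the lowest T, trapezoidal rule."""
    integrand = Cm/T
    steps = np.diff(T)*0.5*(integrand[1:] + integrand[:-1])
    return np.concatenate(([0.0], np.cumsum(steps)))

# Full (2J+1)-multiplet entropy for Gd3+ (J = 7/2):
print(R*np.log(8))  # -> 17.29 J/(mol K), reached only well above T_N in the class I alloys
```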
We now present the results on a series of Gd alloys in which the excess resistance (in the sense described above) is not observable above T<sub>o</sub>. These alloys are GdCo<sub>2</sub>Si<sub>2</sub> (Fig. 3), GdAu<sub>2</sub>Si<sub>2</sub> (Fig. 4) and GdPd<sub>2</sub>Ge<sub>2</sub> (Fig. 5). It is clear from Figs. 3-5 that the resistivities in the presence and in the absence of H are practically the same (within 0.1%) above the respective ordering temperatures, thereby establishing the absence of an additional contribution to $`\rho `$ before long range ordering sets in. In order to look for the ’tail’ in C<sub>m</sub> above T<sub>o</sub>, we attempted to obtain the respective lattice contributions (employing the C values of YCo<sub>2</sub>Si<sub>2</sub>, YAu<sub>2</sub>Si<sub>2</sub> and YPd<sub>2</sub>Ge<sub>2</sub>, respectively). We can safely state that the continuous decrease in C<sub>m</sub> just above T<sub>o</sub>, if it exists, does not proceed beyond 1.2T<sub>o</sub> (see Figs. 3b, 4b and 5b). Thus, it appears that the magnetic precursor effects in C, if present, are negligible, tracking the behavior of the ”excess resistance”. In GdCu<sub>2</sub>Ge<sub>2</sub> and GdAg<sub>2</sub>Si<sub>2</sub> as well, there is clearly no excess resistivity above T<sub>N</sub>, as the application of H does not suppress the value of $`\rho `$ (Figs. 6 and 7). However, in contrast to the cases discussed in the previous paragraph, it appears that there is no correlation between the C and $`\rho `$ behavior prior to long range magnetic order. YCu<sub>2</sub>Ge<sub>2</sub> and LaAg<sub>2</sub>Si<sub>2</sub> have been used as references to obtain the respective lattice contributions to C. The finding of interest is that the magnetic contribution to C appears to exhibit a prominent tail (without any doubt in GdCu<sub>2</sub>Ge<sub>2</sub>), up to at least 10 K above the respective T<sub>N</sub>. This behavior is similar to that noted for GdCu<sub>2</sub>Si<sub>2</sub> earlier. We have also made various other interesting findings: The peak values of C<sub>m</sub> for GdPt<sub>2</sub>Ge<sub>2</sub> and GdNi<sub>2</sub>Sn<sub>2</sub> are much smaller than that expected (20.15 J/mol K, Ref. 17) for equal-moment (simple antiferro-, ferro- or helimagnetic) magnetic structures, and the fact that the value is reduced by at least a factor of about 1/3 shows that the magnetic structure is modulated. The situation is somewhat similar for GdPd<sub>2</sub>Ge<sub>2</sub>. However, for GdCo<sub>2</sub>Si<sub>2</sub> and GdAu<sub>2</sub>Si<sub>2</sub>, the peak values of C<sub>m</sub> are very close to the value expected for commensurate magnetic structures, thus suggesting that the (antiferromagnetic) magnetic structure is not modulated. For GdNi<sub>2</sub>Sn<sub>2</sub> (Fig. 2d), $`\mathrm{\Delta }\rho /\rho `$ as a function of H at 4.5 K exhibits a sharp rise for initial applications of H, with a positive peak near 8 kOe. While the positive sign may be consistent with antiferromagnetism, the corresponding anomaly in the isothermal magnetization at 4.5 K is not very prominent; the plot of M versus H, however, is not perfectly linear at 4.5 K, showing a weak metamagnetic tendency around 30 kOe (Fig. 2c, inset). It appears that the peak in the magnetoresistance is a result of significant changes in the scattering effects arising from a weak metamagnetism. Even in the case of GdAg<sub>2</sub>Si<sub>2</sub>, there is a weak feature in the plot of M vs H at 4.5 K around 40 kOe due to a possible metamagnetic transition (see Fig. 7d), which is pronounced in the magnetoresistance beyond 20 kOe.
In the case of GdPd<sub>2</sub>Ge<sub>2</sub>, at 5 K, $`\mathrm{\Delta }\rho /\rho `$ shows a positive value up to 20 kOe, beyond which the value is negative, exhibiting a non-monotonic variation with H (Fig. 5d); the plot of M versus H shows only a small deviation from linearity around this field. Thus, there are very weak metamagnetic effects which subtly influence the scattering processes in the antiferromagnetically ordered state in these compounds. The plot of magnetoresistance versus H and that of the isothermal magnetization look similar for GdCu<sub>2</sub>Ge<sub>2</sub> (Fig. 6d), with a very weak metamagnetic tendency near 35 kOe, as reflected by the non-linear plots. These results suggest that the magnetoresistance technique is a powerful tool to probe metamagnetism, even weak instances that may not be clearly detectable by magnetization measurements. It is to be noted that, interestingly, the value of the magnetoresistance at 5 K is very large at high fields (see Fig. 7d) for GdAg<sub>2</sub>Si<sub>2</sub> (possibly due to granularity?). The heat capacity data in the magnetically ordered state of GdAg<sub>2</sub>Si<sub>2</sub>, as well as of GdPd<sub>2</sub>Ge<sub>2</sub>, reveal the existence of additional shoulders, which may be the result of the combined influence of spin reorientation and Zeeman effects. In particular, for GdAg<sub>2</sub>Si<sub>2</sub>, the magnetic behavior appears to be complex due to the presence of two prominent magnetic transitions, (interestingly) a discontinuous one near 17 K and another at 11 K (see the features in C and $`\chi `$ in Figs. 7b and 7c). At the 17 K transition in this compound, there is a sudden upward jump in C, and at the same temperature $`\rho `$ shows a sudden upturn instead of a decrease (Fig. 7), possibly due to the formation of antiferromagnetic energy gaps. It would be interesting to probe whether this transition is first-order in nature. There appears to be another magnetic transition in GdCo<sub>2</sub>Si<sub>2</sub> as well, around 20 K, as seen from an upturn in the susceptibility (Fig. 3c).

## IV CONCLUSIONS

To summarise, on the basis of our investigations of Gd alloys, we divide the Gd compounds into two classes. Class I, in which there is an excess contribution to $`\rho `$ prior to long range magnetic order over a wide temperature range, as a result of which the magnetoresistance is large and negative: e.g., GdNi, GdNi<sub>2</sub>Si<sub>2</sub>, GdPt<sub>2</sub>Si<sub>2</sub>, GdPt<sub>2</sub>Ge<sub>2</sub>, GdNi<sub>2</sub>Sn<sub>2</sub>, GdPd<sub>2</sub>In, Gd<sub>2</sub>PdSi<sub>3</sub>. Class II, in which such features are absent: e.g., GdCu<sub>2</sub>Si<sub>2</sub>, GdCu<sub>2</sub>Ge<sub>2</sub>, GdAg<sub>2</sub>Si<sub>2</sub>, GdAu<sub>2</sub>Si<sub>2</sub>, GdCo<sub>2</sub>Si<sub>2</sub>, GdPd<sub>2</sub>Ge<sub>2</sub>. (At this juncture, we would like to add that we have performed similar studies on compounds like GdCu<sub>2</sub>, GdAg<sub>2</sub>, GdAu<sub>2</sub>, GdCoSi<sub>3</sub> and GdNiGa<sub>3</sub>, and we do not find any magnetic precursor effects.) The present study on isostructural compounds establishes that there is no straightforward relationship between the observation of the excess $`\rho `$ on the one hand and the crystal structure or the type of transition metal and s-p ions present in the compound on the other.
The fact that all the compounds studied in this investigation are of a layered type suggests that the possible onset of magnetic correlations within a layer before long range magnetic order sets in cannot be offered as the sole reason for the excess resistivity appearing selectively in some cases. If one is tempted to attribute the observation of excess $`\rho `$ to critical spin fluctuations extending to a higher temperature range, as inferred from the tail in C<sub>m</sub> above T<sub>o</sub>, one does not get a consistent picture, the reason being that, in some of the class II alloys, there is a distinct tail in C<sub>m</sub>. It is therefore clear that there must be a deeper physical origin of the excess $`\rho `$ in class I alloys. As proposed in Refs. 5 and 6, one may have to invoke the idea of a ”magnetic disorder induced localisation of electrons” (and a consequent reduction in the mobility of the charge carriers) before the onset of long range order, as a consequence of the short range magnetic order detected in the form of a tail in the heat capacity. The data presented in this article essentially demand that one explore the various factors determining the presence or absence of the proposed ”magnetic-localisation” effects in the presence of short-range magnetic correlations; possibly, the relative magnitudes of the mean free path, the localisation length and the short range correlation length play a crucial role. It is worthwhile to pursue this question, so as to throw light on several issues in current trends in magnetism.
JC-98-06, cond-mat/9812416

# Critical Exponents near a Random Fractal Boundary

John Cardy, Department of Physics – Theoretical Physics, 1 Keble Road, Oxford OX1 3NP, UK & All Souls College, Oxford

Figure 1 (caption): Geometry in which the simply connected region $`R_1OR_2`$ is bounded by a self-avoiding walk $`\gamma `$ and an arc of the circle $`\rho =R`$. This region contains a critical system of which the correlation function of local operators at points $`r`$ and $`R^{}`$ is of interest. This region, excluding the disc $`\rho <|r|`$, is conformally mapped into a long strip of width $`\pi `$ and length $`\ell _{\mathrm{eff}}(\gamma ,r/R)`$.

## Abstract

The critical behaviour of correlation functions near a boundary is modified from that in the bulk. When the boundary is smooth this is known to be characterised by the surface scaling dimension $`\stackrel{~}{x}`$. We consider the case when the boundary is a random fractal, specifically a self-avoiding walk or the frontier of a Brownian walk, in two dimensions, and show that the boundary scaling behaviour of the correlation function is characterised by a set of multifractal boundary exponents, given exactly by conformal invariance arguments to be $`\lambda _n=\frac{1}{48}(\sqrt{1+24n\stackrel{~}{x}}+11)(\sqrt{1+24n\stackrel{~}{x}}-1)`$. This result may be interpreted in terms of a scale-dependent distribution of opening angles $`\alpha `$ of the fractal boundary: on short distance scales these are sharply peaked around $`\alpha =\pi /3`$. Similar arguments give the multifractal exponents for the case of coupling to a quenched random bulk geometry.

The subject of boundary critical behaviour is by now well understood, particularly in two dimensions. The two-point correlation function $`\langle \varphi (r)\varphi (R)\rangle `$ of a scaling operator $`\varphi `$, which behaves in the bulk at large distances at the critical point as $`|r-R|^{-2x}`$, where $`x`$ is the bulk scaling dimension of $`\varphi `$, is modified when one of the points (say $`r`$) is close to the boundary to the form

$$\langle \varphi (r)\varphi (R)\rangle \sim |r|^{-x}|R|^{-x}|R/r|^{-\stackrel{~}{x}},$$ (1)

where $`\stackrel{~}{x}`$ is the corresponding boundary scaling dimension, and the angular dependence has been suppressed for clarity. In two dimensions, the role played by $`\stackrel{~}{x}`$ is emphasised by making the conformal mapping $`z\mapsto \mathrm{ln}z`$ of the upper half plane to a strip of width $`\pi `$: in that geometry the correlation function decays exponentially along the strip with an inverse correlation length equal to $`\stackrel{~}{x}`$. Eq. 1 refers to the case when the boundary is smooth (at least on scales $`\sim r`$) and it is interesting to ask whether these results are modified when the boundary is a fractal on these scales. The example of an edge (or corner in two dimensions) on the boundary was analysed some time ago, and it was shown that new edge scaling dimensions arise which depend continuously on the opening angle $`\alpha `$. In two dimensions this dependence is given by conformal invariance arguments in the simple form $`x(\alpha )=\pi \stackrel{~}{x}/\alpha `$. This suggests that close to a fractal boundary, which may be thought of as presenting a distribution of opening angles (one which perhaps also depends on the scale at which it is probed), an even more complicated behaviour should obtain. In the case of a random fractal, one also expects to see behaviour characteristic of correlation functions in a quenched random environment.
That is, they may exhibit multiscaling, which means that the average of their $`n`$th power does not scale in the same way as the $`n`$th power of their average. In this letter, we consider two cases where this problem is exactly solvable using conformal invariance methods in two dimensions, namely when the fractal boundary is a self-avoiding walk, and when it is the frontier (exterior boundary) of a Brownian (ordinary) random walk. In fact, both cases turn out to give identical results. Our methods are a simple generalisation of arguments due to Lawler and Werner, who have derived exact relations between multifractal exponents corresponding to self-intersection properties of Brownian walks in two dimensions. This corresponds to the special case when $`\varphi `$ is a free scalar field satisfying Laplace’s equation. This physically interesting example, and its relation to the exponents of star polymers, was in fact discussed some time ago in $`4-ϵ`$ dimensions by Cates and Witten. The results of Lawler and Werner have recently been given an elegant interpretation and derivation by Duplantier in the context of coupling the system to a randomly fluctuating metric. Consider for definiteness self-avoiding walks $`\gamma `$ which are constrained to pass through the origin $`O`$. In order to be able to apply conformal invariance arguments, we work in the fixed fugacity ensemble, in which each walk of length $`L`$ is counted with a weight $`y^L`$, at the critical point where $`y^{-1}=\mu `$, the lattice-dependent connective constant. The properties of the measure on walks on distance scales much larger than the lattice spacing are then supposed to be conformally invariant. Denote the radial coordinate by $`\rho `$. We want to focus on those walks which have a typical linear size $`R`$, and for which the origin $`O`$ is a typical interior point. Without loss of generality for computing scaling dimensions, we may then restrict the walks $`\gamma `$ to have the form of a pair of mutually avoiding self-avoiding walks, starting from $`O`$ and ending on the circle $`\rho =R`$. In the same spirit we may take these points to be the first intersections of the walks with this circle. The region bounded by $`\gamma `$ and an arc of the circle $`\rho =R`$ is thus simply connected. In this region we consider a critical system (for example an Ising model) with a suitable conformally invariant boundary condition on $`\gamma `$ and on $`\rho =R`$ (for example, that the spins are free). Consider the correlation function $`\langle \varphi (r)\varphi (R^{})\rangle `$, where, without loss of generality for computing scaling dimensions, we can choose $`|R^{}|\sim R`$, and we are interested in the limit where $`r\ll R`$. The geometry is illustrated in Fig. 1. Obviously this correlation function depends on $`\gamma `$, but we may hope to be able to compute suitable averages of this quantity over realisations of $`\gamma `$. By analogy with the case of a smooth boundary, we expect that

$$\overline{\langle \varphi (r)\varphi (R^{})\rangle ^n}\sim |r|^{-nx}|R|^{-nx}|R/r|^{-\lambda _n},$$ (2)

where the overline means an average over all the allowed realisations of $`\gamma `$. Note that if the average were over smooth boundaries only, we would expect $`\lambda _n=n\stackrel{~}{x}`$, with only the prefactor modified by the averaging. We now make a conformal transformation which maps the fractal boundary into a smooth one. It is convenient to exclude the disc $`\rho <r`$.
This leaves the simply connected region bounded by two segments of $`\gamma `$ and arcs of the circles $`\rho =r`$ and $`\rho =R`$. By the Riemann mapping theorem, the interior of this region may be mapped conformally by an analytic function $`z\mapsto f(\gamma ,|r|,R;z)`$ onto the interior of a strip of width $`\pi `$, but with a length which is not simply $`\ell =\mathrm{ln}(R/r)`$, but which will also depend on $`\gamma `$. Let us denote this by $`\ell _{\mathrm{eff}}(\gamma ,\ell )`$. The correlation function $`\langle \varphi (r)\varphi (R^{})\rangle `$ will be related by this conformal mapping to one between operators located near the ends of this strip. Taking the $`n`$th power and averaging, we see by comparison with (2) that

$$e^{-\lambda _n\ell }\sim \overline{e^{-n\stackrel{~}{x}\ell _{\mathrm{eff}}(\gamma ,\ell )}}.$$ (3)

At this point, we need further information about averages of quantities of the form $`e^{-p\ell _{\mathrm{eff}}}`$, for arbitrary $`p`$. In particular, let us consider the case where $`n=1`$, and $`\varphi (r)`$ is the $`M`$-leg operator for $`M`$ mutually self-avoiding walks. It is convenient to generalise slightly and take $`\varphi (R^{})`$ to be a product of distinct single-leg operators corresponding to the walks all ending on the arc of the circle $`\rho =R`$. The correlation function then gives the number of such walks which all begin at $`r`$ and end at a distance $`R`$, and, in the critical fugacity ensemble, scales in the bulk like $`|r-R|^{-x_M-Mx_1}`$, and near a smooth boundary according to (1) with a boundary scaling dimension $`\stackrel{~}{x}_M`$. Coulomb gas arguments lead to the conjectures $`x_M=\frac{3}{16}M^2-\frac{1}{12}`$ and $`\stackrel{~}{x}_M=\frac{1}{8}M(3M+2)`$, which have been confirmed by numerical work and various other known exact results. If we now imagine taking an $`M`$-leg star polymer near the fractal boundary $`\gamma `$, which is itself a 2-leg star polymer, and performing the same average over the realisations of $`\gamma `$, the result will be an $`(M+2)`$-leg star polymer in the bulk. We conclude that

$$e^{-(x_{M+2}-x_2)\ell }\sim \overline{e^{-\stackrel{~}{x}_M\ell _{\mathrm{eff}}(\gamma ,\ell )}},$$ (4)

where the factor $`e^{x_2\ell }`$ arises from the normalisation of the probability distribution of $`\gamma `$. Note that although this is initially defined only for $`M`$ a non-negative integer, it may be continued to other real values for which the average exists. Comparing with (3), we may therefore choose $`M`$ such that $`\stackrel{~}{x}_M=n\stackrel{~}{x}`$, solve for $`M`$ using the exact conjecture for $`\stackrel{~}{x}_M`$ given above, and substitute this into the exact form for $`x_{M+2}`$. After some simple algebra, this gives the result for $`\lambda _n`$ quoted in the abstract. A similar argument may be made when $`\gamma `$ is the exterior boundary of a Brownian walk. In this case the measure is rigorously known to be conformally invariant. As in the example considered by Lawler and Werner, one may now invoke the exact conjectures made by Duplantier and Kwon for the relevant dimensions of $`M`$ mutually avoiding ordinary random walks. However, the final result is identical, indicating that there is a strong element of universality between these two fractals, not only with respect to their fractal dimensions.
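As an aside (not part of the original letter), the “simple algebra” above is easily checked symbolically; the following sketch, assuming sympy, solves $`\stackrel{~}{x}_M=n\stackrel{~}{x}`$ for $`M`$ and confirms that $`x_{M+2}-x_2`$ reproduces the expression for $`\lambda _n`$ quoted in the abstract:

```python
import sympy as sp

M, n, xt = sp.symbols('M n xt', positive=True)   # xt stands for x~

x_M  = sp.Rational(3, 16)*M**2 - sp.Rational(1, 12)   # bulk M-leg dimension
xt_M = sp.Rational(1, 8)*M*(3*M + 2)                  # boundary M-leg dimension

# positive root of x~_M = n*x~ :  M = (sqrt(1 + 24*n*xt) - 1)/3
M_pos = [s for s in sp.solve(sp.Eq(xt_M, n*xt), M) if s.subs({n: 1, xt: 1}) > 0][0]

lam = sp.simplify(x_M.subs(M, M_pos + 2) - x_M.subs(M, 2))
target = (sp.sqrt(1 + 24*n*xt) + 11)*(sp.sqrt(1 + 24*n*xt) - 1)/48
print(sp.simplify(lam - target))   # -> 0
```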
The nontrivial dependence of $`\lambda _n`$ shows that correlation functions near the boundary have a broad distribution of values. In particular, their average value, which scales as $`(r/R)^{\lambda _1}`$, may be quite different from a typical value. As argued elsewhere, the typical dependence should be of the form $`(r/R)^{\lambda ^{\prime }}`$, where $`\lambda ^{\prime }=d\lambda _n/dn|_{n=0}=3\stackrel{~}{x}`$. Interestingly enough, this is the behaviour which would obtain in a wedge of interior angle $`\alpha =\pi /3`$. This idea may be made more explicit by interpreting (3) in terms of an average over a scale-dependent distribution $`P(\alpha ,\ell )`$ of interior opening angles:

$$\int _0^{2\pi }d\alpha \,P(\alpha ,\ell )e^{-(\pi /\alpha )n\stackrel{~}{x}\ell }\sim e^{-\lambda _n\ell }.$$ (5)

Requiring that this be valid for all positive real $`n`$ determines the form of $`P`$. First we see that the behaviour $`\lambda _n\to n\stackrel{~}{x}/2`$ as $`n\to \mathrm{\infty }`$ at fixed $`\ell `$ implies that the effective angle in this regime is $`\alpha \to 2\pi `$. This is in agreement with a general argument of Cates and Witten. If we set $`\omega =\frac{\pi }{\alpha }-\frac{1}{2}`$ and $`u=n\stackrel{~}{x}`$, and define $`\stackrel{~}{P}(\omega ,\ell )d\omega =P(\alpha ,\ell )d\alpha `$, the above equation simplifies to

$$\int _0^{\mathrm{\infty }}d\omega \,\stackrel{~}{P}(\omega ,\ell )e^{-\omega u\ell }\sim \mathrm{exp}\left(-(5\ell /24)(\sqrt{1+24u}-1)\right).$$ (6)

Making the ansatz $`\stackrel{~}{P}(\omega ,\ell )\sim e^{5\ell /24}e^{-\ell (a\omega +b/\omega )}`$ and using steepest descents then leads to the solution

$$\stackrel{~}{P}(\omega ,\ell )\sim \mathrm{exp}\left(-\frac{\ell }{24}\left(\sqrt{\omega }-\frac{5}{2\sqrt{\omega }}\right)^2\right),$$ (7)

valid for large $`\ell `$ (i.e. $`r/R\ll 1`$), where we have suppressed more slowly varying prefactors. Note that this result is independent of $`\stackrel{~}{x}`$, consistent with it being an intrinsic property of the fractal<sup>1</sup><sup>1</sup>1However, it should be noted that the method of averaging used here, which sums over all realisations of $`\gamma `$ passing through a given point $`O`$ at a given distance $`r`$ from a fixed point, tends to emphasise those parts of the fractal for which the opening angle is small.. It shows that the effective opening angle has a broad distribution which, however, becomes more and more strongly peaked around the typical value $`\omega =\frac{5}{2}`$ ($`\alpha =\frac{\pi }{3}`$) as $`r/R\to 0`$, with a width of order $`(\mathrm{ln}(R/r))^{-1/2}`$.
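Both the typical-value exponent $`\lambda ^{\prime }=3\stackrel{~}{x}`$ and the steepest-descent step leading from (7) back to (6) are easy to confirm symbolically; a sketch (ours, using sympy; the square in (7) is expanded as $`\omega -5+25/(4\omega )`$):

```python
import sympy as sp

n, xt, u, w = sp.symbols('n xt u w', positive=True)

lam = (sp.sqrt(1 + 24*n*xt) + 11)*(sp.sqrt(1 + 24*n*xt) - 1)/48
print(sp.simplify(sp.diff(lam, n).subs(n, 0)))   # -> 3*xt, the typical exponent
print(sp.limit(lam/(n*xt), n, sp.oo))            # -> 1/2, i.e. alpha -> 2*pi at large n

# Saddle point of the exponent in (6) under the ansatz (7):
f = w*u + (w - 5 + sp.Rational(25, 4)/w)/24      # (sqrt(w) - 5/(2 sqrt(w)))**2 expanded
w_star = [s for s in sp.solve(sp.diff(f, w), w) if s.is_positive][0]
print(sp.simplify(f.subs(w, w_star)))            # -> 5*(sqrt(24*u + 1) - 1)/24, as in (6)
print(w_star.subs(u, 0))                         # -> 5/2, i.e. alpha = pi/3 as u -> 0
```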
A similar calculation may be carried out when the point $`O`$ is the root of an $`N`$-leg star polymer, by replacing $`x_{M+2}-x_2`$ in (4) by $`x_{M+N}-x_N`$. The case $`N=1`$ gives the end multifractal exponents, which, as first pointed out by Cates and Witten, are different from those which arise when $`O`$ is a typical interior point. In the case considered by Cates and Witten, where $`\varphi `$ is a massless scalar field (with bulk scaling dimension $`x=0`$) satisfying Dirichlet conditions $`\varphi =0`$ on the boundary, the appropriate boundary scaling operator is $`\partial _{\perp }\varphi `$ with dimension $`\stackrel{~}{x}=1`$. In that case we find that $`\lambda (1)=\frac{2}{3}=2-D`$, where $`D=\frac{4}{3}`$ is the fractal dimension of the boundary. This is a consequence of the fact that $`\varphi `$ satisfies Laplace’s equation, equivalent to the conservation of particle flux in the Brownian interpretation. Note that we were led to this unique result from making simple assumptions (rigorously grounded in the Brownian case) about the conformal invariance of the measure on $`\gamma `$. This suggests that all such curves which, with probability one, bound a simply connected region when viewed on macroscopic distance scales will fall into this universality class and, in particular, will have $`D=\frac{4}{3}`$. One of the most interesting features of our main result is that the $`\lambda _n`$ are generally, even for integer $`n`$, irrational (but algebraic) numbers. This is not in disagreement with any established results, since even if the bulk critical theory is unitary, there is no reflection positivity in the presence of a fractal boundary, and so the theorem of Friedan, Qiu and Shenker is evaded. However, most examples of exactly calculable critical exponents in two dimensions have, even in nonunitary cases, led to rational values. Recently Duplantier has given an interesting interpretation of this type of result in the case of a general mixture of Brownian and self-avoiding walks, by considering the effects of coupling the system to a fluctuating background metric (quantum gravity). He was able to argue that, just as the scaling dimensions of overlapping objects in flat space should be added to obtain that of the composite, when they are coupled to quantum gravity their dressed scaling dimensions are additive if they avoid each other. In this way, by going back and forth between flat space and quantum gravity, and using the relation between ordinary and dressed scaling dimensions first obtained by Knizhnik, Polyakov and Zamolodchikov (KPZ), specialised to the case $`c=0`$ appropriate to Brownian walks, he was able not only to recover the results of Lawler and Werner, but also to derive the earlier conjecture of Duplantier and Kwon. Thus, for this example, his methods are more powerful than the simple arguments we have used above, since we found it necessary to invoke the conjectured values for the $`M`$-leg scaling dimensions. Since our main result is a simple generalisation to the case when $`\stackrel{~}{x}\ne 1`$, one would expect that similar quantum gravity methods might apply. However, our result is supposed to be valid for a bulk theory with arbitrary central charge $`c`$, and it is therefore not clear why the KPZ relation with $`c=0`$ should appear in this more general case. We have given an exact formula for the multiscaling boundary exponents of an arbitrary conformally invariant two-dimensional critical system close to a random fractal boundary. This is the first example in which such a multifractal spectrum with a non-trivial analytic structure has been found exactly. The basic method was to realise that this kind of geometric quenched disorder may be gauged away by making a suitable conformal transformation, at the cost of modifying the moduli (in this case $`\ell =\mathrm{ln}(R/r)`$). The effective distribution of $`\ell _{\mathrm{eff}}`$ is then probed by replacing the critical system by one with $`c=0`$ (in our case, self-avoiding walks), for which the partition function is unity and therefore the quenched average of a correlation function is the same as its annealed average, which is more simply dealt with. Similar ideas may be applied to a critical system coupled to bulk quenched disorder in the form of a random metric (which may be realised as the continuum limit of a randomly connected lattice).
In this case we may consider an annulus of inner and outer radii $`r`$ and $`R`$ respectively, which in the case of a flat metric may be mapped conformally by the transformation $`z\mapsto \mathrm{ln}z`$ to a flat metric on a cylinder of perimeter $`2\pi `$ and length $`\ell =\mathrm{ln}(R/r)`$. The inverse correlation length along this cylinder is then equal to the bulk scaling dimension $`x`$. An arbitrary metric $`g`$ on the annulus is also conformally equivalent to the cylinder with a flat metric but with a length $`\ell _{\mathrm{eff}}(g,\ell )`$. In analogy with (3), the multifractal bulk exponents $`\lambda _n^b`$, which govern the decay of the quenched average of the $`n`$th power of the bulk correlation function, are given by

$$e^{-\lambda _n^b\ell }\sim \overline{e^{-nx\ell _{\mathrm{eff}}(g,\ell )}}.$$ (8)

We now consider the special case when the critical system has $`c=0`$. In that case the quenched and annealed averages are identical, and the respective scaling dimensions $`X_0`$ and $`X`$ of an operator in flat space and when coupled to a fluctuating metric are related by the $`c=0`$ version of the KPZ relation $`X_0=\frac{1}{3}X(1+X)`$. Thus if we now set $`X_0=nx`$ and solve for $`X`$, this will yield $`\lambda _n^b`$. The result is

$$\lambda _n^b=\frac{1}{2}\left(\sqrt{1+12nx}-1\right).$$ (9)

This result for the scaling dimensions when coupled to a quenched random lattice was derived for the case $`n=1`$ by Baillie, Hawick and Johnston, but in fact these correlation functions exhibit multiscaling, with a whole spectrum of such exponents. Note that in this case the typical decay of a correlation function is determined by $`(\lambda ^b)^{\prime }=3x`$. The author thanks G. Lawler and B. Duplantier for explaining their ideas, and the Fields Institute, Toronto, where this work was started, for its hospitality. This research was partly supported by the Engineering and Physical Sciences Research Council under Grant GR/J78327. After this work was completed, preprints by Duplantier and by Aizenman, Duplantier and Aharony appeared in which, among other things, it is argued that the accessible perimeter of a percolation cluster has fractal dimension $`D=\frac{4}{3}`$, consistent with the general arguments advanced above.
# Renormalizability of Nonrenormalizable Field Theories

## Abstract

We give a simple and elegant proof of the Equivalence Theorem, stating that two field theories related by nonlinear field transformations have the same S matrix. We are thus able to identify a subclass of nonrenormalizable field theories which are actually physically equivalent to renormalizable ones. Our strategy is to show by means of the BRS formalism that the “nonrenormalizable” part of such fake nonrenormalizable theories is a kind of gauge fixing, confined to the cohomologically trivial sector of the theory.

Recently there has been renewed interest in apparently nonrenormalizable theories, triggered by the work of Gomis and Weinberg. The main point was analyzed by Bergère and Lam, who showed that two quantum field theories related by a nonlinear field transformation of the kind

$$\varphi =\widehat{\varphi }+\alpha \widehat{\varphi }^2g(\widehat{\varphi };\alpha )$$ (1)

have the same S-matrix. This statement has been known in the literature as the “equivalence theorem” for more than fifty years, and we propose here an alternative, very simple proof, which is easily adaptable to all situations, since it relies neither on the use of the equations of motion, nor on any particular renormalization scheme. The strategy is to approach the problem with the technology of nilpotent operators, as applied in gauge field theories, and hence to interpret the effect on the action generated by the nonlinear part of the transformation (1) as a “gauge fixing” term. If this is possible, we have immediately at our disposal the standard results of gauge field theories, which ensure that “physics” is independent of the gauge choice. In order to emphasize the relevant features we shall treat only the simplest case, i.e. a scalar field theory with quartic interaction in four dimensions; the method can be straightforwardly extended to the physically interesting cases. We begin with the renormalizable classical action

$$\mathrm{\Gamma }_R^{(0)}[\varphi ]=\int d^4x\left(\frac{1}{2}\partial _\mu \varphi \partial _\mu \varphi +\frac{1}{2}m^2\varphi ^2+\frac{1}{4!}\lambda \varphi ^4\right)$$ (2)

and the related path integral representation for the vertex functional

$$\mathrm{\Gamma }_R=\int D(\varphi )\mathrm{exp}(-\mathrm{\Gamma }_R^{(0)}[\varphi ]).$$ (3)

Now we perform the nonlinear field redefinition (1), where the nonlinear part of the field transformation is identified by the parameter $`\alpha `$, and $`g(\widehat{\varphi };\alpha )`$ is an analytic function of both $`\widehat{\varphi }`$ and $`\alpha `$. The introduction of the $`\alpha `$-parameter, although it could appear to be a computational artifact, is indeed natural: to preserve the dimensionally homogeneous character of (1), $`\alpha `$ has the dimension of $`[\frac{1}{m}]^{d_\varphi }`$. By applying (1) to (2), we obtain a new classical action, but we also have to take into account the Jacobian of the field transformation, which is conveniently exponentiated by means of anticommuting variables $`\overline{c}(x)`$ and $`c(x)`$.
Thus we find a new classical action, nonrenormalizable by power counting,

$$\mathrm{\Gamma }_{NR}^{(0)}[\widehat{\varphi },c,\overline{c};\alpha ]=\mathrm{\Gamma }_R^{(0)}[\widehat{\varphi }]+\alpha \mathrm{\Gamma }^{(1)}[\widehat{\varphi };\alpha ]+\int d^4x\,\overline{c}\left(1+2\alpha \widehat{\varphi }g(\widehat{\varphi };\alpha )+\alpha \widehat{\varphi }^2g^{}(\widehat{\varphi };\alpha )\right)c,$$ (4)

where $`\alpha \mathrm{\Gamma }^{(1)}[\widehat{\varphi };\alpha ]`$ is obtained from

$$\alpha \mathrm{\Gamma }^{(1)}[\widehat{\varphi };\alpha ]=\mathrm{\Gamma }_R^{(0)}[\widehat{\varphi }+\alpha \widehat{\varphi }^2g(\widehat{\varphi };\alpha )]-\mathrm{\Gamma }_R^{(0)}[\widehat{\varphi }].$$ (5)

The corresponding path integral formulation for the proper functional now reads

$$\mathrm{\Gamma }_{NR}=\int D(\widehat{\varphi })D(c)D(\overline{c})\mathrm{exp}(-\mathrm{\Gamma }_{NR}^{(0)}[\widehat{\varphi },c,\overline{c};\alpha ]).$$ (6)

It is precisely the part of $`\mathrm{\Gamma }_{NR}^{(0)}[\widehat{\varphi },c,\overline{c};\alpha ]`$ not coinciding with $`\mathrm{\Gamma }_R^{(0)}[\widehat{\varphi }]`$ which we would like to identify as a “gauge fixing term” with “gauge parameter” $`\alpha `$. With this in mind we introduce two ghosts, $`\beta `$ and $`b(x)`$, of which the first is global while the second is local, and the BRS transformations

$$\begin{array}{c}s\widehat{\varphi }(x)=sc(x)=s\overline{c}(x)=s\alpha =0,\hfill \\ sb(x)=\overline{c}(x),\hfill \\ s\beta =\alpha .\hfill \end{array}$$ (7)

Correspondingly, the classical action (4) reads

$$\mathrm{\Gamma }_{NR}^{(0)}[\widehat{\varphi },c,\overline{c};\alpha ]=\mathrm{\Gamma }_R^{(0)}[\widehat{\varphi }]+sY[\widehat{\varphi },c,\overline{c};\alpha ],$$ (8)

where

$$Y[\widehat{\varphi },c,\overline{c};\alpha ]=\beta \mathrm{\Gamma }^{(1)}+\int d^4x\,b\left(1+2\alpha \widehat{\varphi }g(\widehat{\varphi };\alpha )+\alpha \widehat{\varphi }^2g^{}(\widehat{\varphi };\alpha )\right)c.$$ (9)

The classical action $`\mathrm{\Gamma }_{NR}^{(0)}[\widehat{\varphi },c,\overline{c};\alpha ]`$ satisfies the linear Slavnov–Taylor identity

$$𝒮\mathrm{\Gamma }_{NR}^{(0)}[\widehat{\varphi },c,\overline{c};\alpha ]=\left(\int d^4x\,\overline{c}(x)\frac{\delta }{\delta b(x)}+\alpha \frac{\partial }{\partial \beta }\right)\mathrm{\Gamma }_{NR}^{(0)}[\widehat{\varphi },c,\overline{c};\alpha ]=0.$$ (10)

Moreover, the action is uncharged with respect to the Faddeev-Popov assignments written in the Table

| | $`\widehat{\varphi }`$ | $`c`$ | $`\overline{c}`$ | $`b`$ | $`\alpha `$ | $`\beta `$ |
| --- | --- | --- | --- | --- | --- | --- |
| $`\mathrm{\Phi }\mathrm{\Pi }`$ | $`0`$ | $`1`$ | $`-1`$ | $`-2`$ | $`0`$ | $`-1`$ |

Table: Faddeev–Popov charges.

To make contact with the initial problem, and to identify the physical subspace of our example, we restrict the space to that of analytic functions of the $`\alpha `$-parameter. Within this subspace we can analyze the cohomology of the BRS operator (7) and easily find that it contains only $`\alpha `$-independent local functionals of $`\widehat{\varphi }(x)`$ and $`c(x)`$, since $`\{b(x);\overline{c}(x)\}`$ and $`\{\beta ;\alpha \}`$ appear in (7) as BRS doublets. Thus we have the parametric equation

$$\alpha \frac{\partial }{\partial \alpha }\mathrm{\Gamma }=s\left(\int d^4x\,\widehat{X}\right)\mathrm{\Gamma }$$ (11)

for a suitable local functional $`\widehat{X}`$. Notice that we have to employ the $`\alpha \frac{\partial }{\partial \alpha }`$ operator, which leaves the cohomology invariant, and not simply the $`\frac{\partial }{\partial \alpha }`$ operator, which mixes the cohomological separation of the target space.
Indeed, from the expression (10) we have that

$$\frac{\partial }{\partial \alpha }𝒮\mathcal{F}-𝒮\frac{\partial }{\partial \alpha }\mathcal{F}=\frac{\partial }{\partial \beta }\mathcal{F},$$ (12)

where $`\mathcal{F}`$ is a generic functional. Hence

$$\frac{\partial }{\partial \alpha }\mathrm{\Gamma }_{NR}^{(0)}[\widehat{\varphi },c,\overline{c};\alpha ]=\frac{\partial }{\partial \alpha }𝒮Y[\widehat{\varphi },c,\overline{c};\alpha ]=\left(𝒮\frac{\partial }{\partial \alpha }+\frac{\partial }{\partial \beta }\right)Y[\widehat{\varphi },c,\overline{c};\alpha ]=𝒮\frac{\partial }{\partial \alpha }Y[\widehat{\varphi },c,\overline{c};\alpha ]+\mathrm{\Gamma }^{(1)}.$$ (13)

So $`\frac{\partial }{\partial \alpha }\mathrm{\Gamma }_{NR}^{(0)}[\widehat{\varphi },c,\overline{c};\alpha ]`$ is not cohomologically trivial, but $`\alpha \frac{\partial }{\partial \alpha }\mathrm{\Gamma }_{NR}^{(0)}[\widehat{\varphi },c,\overline{c};\alpha ]`$, on the contrary, is, since

$$\alpha \frac{\partial }{\partial \alpha }\mathrm{\Gamma }_{NR}^{(0)}[\widehat{\varphi },c,\overline{c};\alpha ]=𝒮\left(\alpha \frac{\partial }{\partial \alpha }Y[\widehat{\varphi },c,\overline{c};\alpha ]+\beta \mathrm{\Gamma }^{(1)}\right).$$ (14)

Equation (11) is the statement that only the $`\alpha `$-independent Green functions are “physical”, and these are built with the vertices and the propagator obtained from $`\mathrm{\Gamma }_R^{(0)}[\widehat{\varphi }]`$. One final remark may be in order: to implement the stability of the theory, we may impose on $`\mathrm{\Gamma }`$ the further conditions

$$\frac{\partial \mathrm{\Gamma }}{\partial \beta }=\frac{\delta \mathrm{\Gamma }}{\delta b(x)}=0,$$ (15)

which are trivially true at the classical level. Finally, we would like to provide a simple method to decide whether or not a theory specified by a classical action which appears to be nonrenormalizable by power counting can be obtained from a power counting renormalizable action through a nonlinear field redefinition. To be definite, consider the example treated in this letter, i.e. a scalar field $`\varphi (x)`$ with a certain classical action $`\mathrm{\Gamma }_{NR}(\varphi )`$. First collect in $`\mathrm{\Gamma }_R(\varphi )`$ all terms which are power counting renormalizable. The remaining contributions, being non power counting renormalizable, contain at least a power of a parameter $`\alpha `$ with the dimension of an inverse mass; therefore we can write

$$\mathrm{\Gamma }_{NR}(\varphi )=\mathrm{\Gamma }_R(\varphi )+\alpha \mathrm{\Gamma }^{}(\varphi ).$$ (16)

If $`\mathrm{\Gamma }_{NR}(\varphi )`$ can be obtained from $`\mathrm{\Gamma }_R(\varphi )`$ by a nonlinear field transformation

$$\varphi \to \varphi +\alpha X(\varphi ;\alpha ),$$ (17)

we have to solve for $`X(\varphi ;\alpha )`$ the equation

$$\mathrm{\Gamma }_R(\varphi +\alpha X(\varphi ;\alpha ))-\mathrm{\Gamma }_R(\varphi )=\alpha \mathrm{\Gamma }^{}(\varphi ).$$ (18)

A very simple criterion is to analyze it in a descending way, beginning from the highest order monomials in $`\mathrm{\Gamma }^{}(\varphi )`$ and $`\mathrm{\Gamma }_R(\varphi )`$ and remembering that $`X(\varphi ;\alpha )`$ is at least quadratic in $`\varphi (x)`$ (a short symbolic illustration is given after the acknowledgments below). For example, an action $`\mathrm{\Gamma }_{NR}(\varphi )`$ which contains $`\varphi ^2`$, $`\varphi ^4`$ and $`\varphi ^6`$ terms only is truly non power counting renormalizable since, through a bilinear transformation containing only one parameter, we are bound to obtain a $`\varphi ^8`$ contribution too.

Acknowledgments. We are grateful to the Organizing Committee for the kind hospitality during the conference Quantization, Generalized BRS Cohomology and Anomalies, held at the Erwin Schroedinger International Institute (Vienna), from September 28 to October 7, 1998.
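As announced above, here is a small symbolic illustration of the descending-order criterion (ours, using sympy; it keeps only the potential part of the action and takes $`g=1`$ for simplicity): a one-parameter bilinear redefinition of the renormalizable quartic potential generates $`\varphi ^5`$ through $`\varphi ^8`$ monomials, so a target $`\alpha \mathrm{\Gamma }^{}`$ built from $`\varphi ^6`$ alone cannot be matched.

```python
import sympy as sp

phi, a, m2, lam = sp.symbols('phi a m2 lam')

# Potential part of Gamma_R (derivative terms omitted for brevity)
V = m2*phi**2/2 + lam*phi**4/24

# One-parameter bilinear redefinition phi -> phi + a*phi**2
V_new = sp.expand(V.subs(phi, phi + a*phi**2))
print(sorted(sp.Poly(V_new, phi).monoms()))
# -> degrees 2,3,4,5,6,7,8 all appear: a phi^8 term is unavoidable, as claimed
```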
# The Cole-Hopf and Miura transformations revisited

## 1. Introduction

Our aim in this note is to display the close similarity between the well-known Cole–Hopf transformation relating the Burgers and the heat equation, and the celebrated Miura transformation connecting the Korteweg–de Vries (KdV) and the modified KdV (mKdV) equation. In doing so we will introduce an additional twist in the Cole–Hopf transformation (cf. (1.28), (1.29)), which, to the best of our knowledge, appears to be new. Moreover, we will reveal the history of this transformation and uncover several instances of its rediscovery (including those by Cole and Hopf). We start with a brief introductory account of the KdV and mKdV equations. The KdV equation was derived as an equation modeling the behavior of shallow water waves moving in one direction by Korteweg and his student de Vries in 1895<sup>1</sup><sup>1</sup>1But the equation had been derived earlier by Boussinesq in 1871, see Heyerhoff and Pego.. The landmark discovery of the inverse scattering method by Gardner, Green, Kruskal, and Miura in 1967 brought the KdV equation to the forefront of mathematical physics, and started the phenomenal development involving multiple disciplines of science as well as several branches of mathematics. The KdV equation (in a setting convenient for our purpose) reads

$$\mathrm{KdV}(V)=V_t-6VV_x+V_{xxx}=0,$$ (1.1)

while its modified counterpart, the mKdV equation, equals

$$\mathrm{mKdV}(\varphi )=\varphi _t-6\varphi ^2\varphi _x+\varphi _{xxx}=0.$$ (1.2)

Miura’s fundamental discovery was the realization that if $`\varphi `$ satisfies the mKdV equation (1.2), then

$$V_\pm (x,t)=\varphi (x,t)^2\pm \varphi _x(x,t),(x,t)\in \mathbb{R}^2$$ (1.3)

both satisfy the KdV equation. The transformation (1.3) has since been called the Miura transformation. Furthermore, explicit calculations by Miura showed the validity of the identity

$$\mathrm{KdV}(V_\pm )=(2\varphi \pm \partial _x)\mathrm{mKdV}(\varphi ).$$ (1.4)

The Miura transformation (1.3) was quite prominently used in the construction of an infinite series of conservation laws for the KdV equation. Miura’s identity (1.4) then demonstrates how to transfer solutions of the mKdV equation to solutions of the KdV equation, but due to the nontrivial kernel of $`(2\varphi \pm \partial _x)`$, it is not immediately clear how to reverse the procedure and to transfer solutions of the KdV equation to solutions of the mKdV equation. It has since been shown how to revert the process. Following a similar treatment of the (modified) Kadomtsev-Petviashvili equation, we use here a method that considerably simplifies the earlier proofs. Introduce the first-order differential expression

$$\stackrel{~}{P}(V)=2V\partial _x-V_x.$$ (1.5)

Then one derives

$$\mathrm{mKdV}(\varphi )=\partial _x\left(\frac{1}{\psi }\left(\psi _t-\stackrel{~}{P}(V_\pm )\psi \right)\right),$$ (1.6)

where

$$\varphi =\psi _x/\psi ,\psi >0,V_\pm =\varphi ^2\pm \varphi _x.$$ (1.7)

Next, let $`V=V(x,t)`$ be a solution of the KdV equation, $`\mathrm{KdV}(V)=0`$, and $`\psi >0`$ be a function satisfying

$$\psi _t=\stackrel{~}{P}(V)\psi ,-\psi _{xx}+V\psi =0.$$ (1.8)

Then one immediately deduces that $`\varphi `$ solves the mKdV equation, $`\mathrm{mKdV}(\varphi )=0`$, and hence the Miura transformation has been “inverted”. The KdV equation (1.1) and the mKdV equation (1.2) are just the first (nonlinear) evolution equations in a countably infinite hierarchy of such equations (the (m)KdV hierarchy).
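As an aside (not part of the original note), both Miura’s identity (1.4) and the relation (1.6) are straightforward to verify symbolically; the sketch below, using sympy, checks (1.4) for both signs and (1.6) for the upper sign, with $`\stackrel{~}{P}(V)\psi =2V\psi _x-V_x\psi `$:

```python
import sympy as sp

x, t = sp.symbols('x t')

def mkdv(p): return sp.diff(p, t) - 6*p**2*sp.diff(p, x) + sp.diff(p, x, 3)
def kdv(V):  return sp.diff(V, t) - 6*V*sp.diff(V, x) + sp.diff(V, x, 3)

# Miura's identity (1.4) for a generic phi(x,t), both signs
phi = sp.Function('phi')(x, t)
for s in (1, -1):
    V = phi**2 + s*sp.diff(phi, x)
    print(sp.expand(kdv(V) - (2*phi*mkdv(phi) + s*sp.diff(mkdv(phi), x))))  # -> 0

# Relation (1.6), upper sign, with phi = psi_x/psi
psi = sp.Function('psi')(x, t)
u = sp.diff(psi, x)/psi
Vp = u**2 + sp.diff(u, x)
rhs = sp.diff((sp.diff(psi, t) - (2*Vp*sp.diff(psi, x) - sp.diff(Vp, x)*psi))/psi, x)
print(sp.simplify(mkdv(u) - rhs))   # -> 0
```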
The considerations (1.3)–(1.8) extend to the entire hierarchy of these equations, replacing the first-order differential expression $`\stackrel{~}{P}(V)=\stackrel{~}{P}_1(V)`$ by an appropriate first-order differential expression $`\stackrel{~}{P}_n(V)`$ for $`n\in \mathbb{N}`$. More precisely, denoting the $`n`$th KdV equation in the KdV hierarchy by

$$\mathrm{KdV}_n(V)=0,$$ (1.9)

Lax constructed differential expressions $`P_{2n+1}(V)`$ of order $`2n+1`$, with coefficients that are differential polynomials in $`V`$, such that

$$\frac{d}{dt}L-[P_{2n+1}(V),L]=\mathrm{KdV}_n(V),n\in \mathbb{N}.$$ (1.10)

Here $`L`$ denotes the Schrödinger differential expression

$$L=-\partial _x^2+V.$$ (1.11)

The KdV equation in (1.1) then corresponds to $`n=1`$ and one obtains

$$P_3(V)=-4\partial _x^3+6V\partial _x+3V_x$$ (1.12)

in this case. Restriction of $`P_{2n+1}(V)`$ to the (algebraic) nullspace of $`L`$ then yields the first-order differential expression

$$\stackrel{~}{P}_n(V)=P_{2n+1}(V)|_{\mathrm{ker}(L)},n\in \mathbb{N}.$$ (1.13)

Next we turn to the Cole–Hopf transformation and its history. The classical Cole–Hopf transformation, covered in most textbooks on partial differential equations, states that

$$V(x,t)=-2\frac{\psi _x(x,t)}{\psi (x,t)},(x,t)\in \mathbb{R}\times (0,\mathrm{\infty }),$$ (1.14)

where $`\psi >0`$ is a solution of the heat equation

$$\psi _t=\psi _{xx},$$ (1.15)

satisfies the (viscous) Burgers equation

$$V_t+VV_x=V_{xx}.$$ (1.16)

However, already in 1906, Forsyth, in his multi-volume treatise on differential equations (p. 100), discussed the equation (in his notation)

$$\frac{\partial }{\partial x}\left\{\frac{1}{\beta }\left(\gamma -\frac{\partial \alpha }{\partial x}-\alpha ^2\right)\right\}-2\frac{\partial \alpha }{\partial y}=0,$$ (1.17)

where $`\alpha =\alpha (x,y)`$. Hence there exists a function $`\theta `$ such that

$$\alpha =\frac{\partial \theta }{\partial x},\gamma -\frac{\partial \alpha }{\partial x}-\alpha ^2=2\beta \frac{\partial \theta }{\partial y}.$$ (1.18)

Assuming the function $`z`$ satisfies

$$z_{xx}+2\alpha z_x+2\beta z_y+\gamma z=0,$$ (1.19)

an easy calculation shows that

$$\frac{\partial ^2}{\partial x^2}(ze^\theta )+2\beta \frac{\partial }{\partial y}(ze^\theta )=0.$$ (1.20)

Introducing new variables $`t=y`$ and $`u(x,t)=-2\alpha (x,y)`$, as well as fixing $`\beta =-1/2`$, $`\gamma =0`$, and $`z=1`$, one concludes that (1.17) indeed reduces to the viscous Burgers equation

$$u_t+uu_x=u_{xx},$$ (1.21)

while (1.20) equals

$$(e^\theta )_t=(e^\theta )_{xx},$$ (1.22)

with solutions related by

$$u=-2\theta _x.$$ (1.23)

However, Forsyth did not study the ramifications of this transformation, and no applications are discussed. Shortly thereafter, in 1915, Bateman introduced the model equation

$$u_t+uu_x=\nu u_{xx}.$$ (1.24)

He was interested in the vanishing viscosity limit, that is, when $`\nu \to 0`$. By studying solutions of the form $`u=F(x+Ut)`$, he concluded that “the question of the limiting form of the motion of a viscous fluid when the viscosity tends to zero requires very careful investigation”. Only in 1940 did Burgers introduce<sup>2</sup><sup>2</sup>2Frequently Burgers’ equation is quoted from his 1948 paper, but he had already introduced it in 1940. what has later been called the (viscous) Burgers equation, as a simple model of turbulence, and carried out some preliminary investigations of the properties of its solutions. Taking advantage of the Forsyth transformation, later rediscovered by Cole and Hopf, Burgers continued the investigations of what he called the nonlinear diffusion equation, focusing mainly on statistical aspects of the equation. The results of these investigations were collected in his book.
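Forsyth’s reduction can also be verified symbolically: given (1.18), equation (1.19) — read with a factor $`z`$ on the $`\gamma `$ term, as above — implies (1.20). A sketch (ours, using sympy):

```python
import sympy as sp

x, y = sp.symbols('x y')
beta, gamma = sp.symbols('beta gamma')
theta = sp.Function('theta')(x, y)
z = sp.Function('z')(x, y)

alpha = sp.diff(theta, x)                       # (1.18), first half
lhs = sp.diff(z*sp.exp(theta), x, 2) + 2*beta*sp.diff(z*sp.exp(theta), y)
# impose the second half of (1.18): 2*beta*theta_y = gamma - alpha_x - alpha**2
lhs = lhs.subs(sp.diff(theta, y), (gamma - sp.diff(alpha, x) - alpha**2)/(2*beta))
eq119 = sp.diff(z, x, 2) + 2*alpha*sp.diff(z, x) + 2*beta*sp.diff(z, y) + gamma*z
print(sp.simplify(lhs - sp.exp(theta)*eq119))   # -> 0, i.e. (1.19) implies (1.20)
```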
In 1948, Florin , in the context of applications to water-saturated flow, rediscovered Forsyth’s transformation, which would become well-known under the name Cole-Hopf transformation only some 44 years later. Although the Cole–Hopf transformation had already been published in 1906, it was only with the seminal papers by Hopf <sup>3</sup><sup>3</sup>3With a misprint in the title, writing $`u_t+uu_x=\mu _{xx}`$ rather than $`u_t+uu_x=\mu u_{xx}`$. in 1950<sup>4</sup><sup>4</sup>4Hopf states in a footnote (p. 202) that he had the “Cole–Hopf transformation” already in 1946, but “it was not until 1949 that I became sufficiently acquainted with the recent development of fluid dynamics to be convinced that a theory based on (1.24) could serve as an instructive introduction into some of the mathematical problems involved”. and by Cole in 1951 that the full impact of the simple transformation was seen. In particular the careful study by Hopf concerning the vanishing viscosity limit represented a landmark in the emerging theory of conservation laws. Although the Cole–Hopf transformation is restricted to the Burgers equation, the insight and the motivation from this analysis has been of fundamental importance in the theory of conservation laws. Furthermore, Cole states the generalization of the Cole–Hopf transformation to a particular multi-dimensional system. More precisely, if $`\psi =\psi (x,t)`$, $`(x,t)\in \mathbb{R}^n\times (0,\infty )`$, satisfies the $`n`$-dimensional heat equation $$\psi _t=\nu \mathrm{\Delta }\psi ,\quad \nu >0,$$ (1.25) and one defines $$V=-2\nu \nabla \mathrm{ln}(\psi ),$$ (1.26) then $`V`$ satisfies $$V_t+(V\cdot \nabla )V=\nu \mathrm{\Delta }V,$$ (1.27) and the vector-valued function $`V=V(x,t)\in \mathbb{R}^n`$ has as many components (i.e., $`n`$) as the dimension of the underlying space. Observe, in particular, that $`V`$ is irrotational (i.e., $`V=\nabla W`$ for some $`W,`$ or equivalently, $`\mathrm{curl}V=0`$). The multi-dimensional extension was rediscovered by Kuznetsov and Rozhdestvenskii in 1961. In this note we show the following relations, $`V_t+VV_x-\nu V_{xx}`$ $`=2\nu \left(-{\displaystyle \frac{1}{\psi }}\partial _x+{\displaystyle \frac{\psi _x}{\psi ^2}}\right)\left(\psi _t-\nu \psi _{xx}\right)`$ (1.28) $`=-2\nu \partial _x\left({\displaystyle \frac{1}{\psi }}(\psi _t-\nu \psi _{xx})\right),`$ (1.29) whenever $`V=-2\nu \psi _x/\psi `$ for a positive function $`\psi .`$ This clearly displays the nature of the Cole–Hopf transformation and closely resembles Miura’s identity (1.4) and the relation (1.6). Even though identities (1.28) and (1.29) are elementary observations, they appear, much to our surprise, to have escaped notice in the extensive literature on the Cole-Hopf transformation thus far. While both the KdV and mKdV equations are nonlinear partial differential equations, the case of the Burgers and heat equations just considered is a bit different since it relates a nonlinear and a linear partial differential equation (see also , Sect. 6.4). One can also extend the Cole–Hopf transformation to the case of a potential term $`F`$ in the heat equation, see, for instance, . Here the relation (1.29) reads as follows, $$V_t+VV_x-\nu V_{xx}+2\nu F_x=-2\nu \partial _x\left(\frac{1}{\psi }(\psi _t-\nu \psi _{xx}-F\psi )\right),$$ (1.30) whenever $`V=-2\nu \partial _x\mathrm{ln}(\psi )`$ for a positive function $`\psi `$. The case of Burgers’ equation externally driven by a random potential term has recently generated particular interest, see, for instance, , , , , , and the references therein.
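Since (1.29) and (1.30) involve only differentiation, they too can be confirmed symbolically for an unspecified positive function $`\psi `$; a brief sympy sketch of this check (again our own illustration):

```python
import sympy as sp

x, t = sp.symbols('x t')
nu = sp.symbols('nu', positive=True)
psi = sp.Function('psi', positive=True)(x, t)
F = sp.Function('F')(x, t)

V = -2*nu*sp.diff(psi, x)/psi                           # V = -2 nu psi_x / psi
burgers = sp.diff(V, t) + V*sp.diff(V, x) - nu*sp.diff(V, x, 2)

defect = sp.diff(psi, t) - nu*sp.diff(psi, x, 2)        # psi_t - nu psi_xx
assert sp.simplify(burgers + 2*nu*sp.diff(defect/psi, x)) == 0          # (1.29)

rhs_F = -2*nu*sp.diff((defect - F*psi)/psi, x)          # right-hand side of (1.30)
assert sp.simplify(burgers + 2*nu*sp.diff(F, x) - rhs_F) == 0           # (1.30)
print("identities (1.29) and (1.30) confirmed")
```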
We also mention a very interesting application of the Cole-Hopf transformation to the pair of the telegraph and a nonlinear Boltzmann equation in , generalizing the pair of the heat and Burgers equations considered in this note. Equation (1.30) extends to the multi-dimensional case corresponding to (1.27) and one obtains $$V_t+\alpha (V\cdot \nabla )V-\nu \mathrm{\Delta }V+\frac{2\nu }{\alpha }\nabla F=-\frac{2\nu }{\alpha }\nabla \left(\frac{1}{\psi }\left(\psi _t-\nu \mathrm{\Delta }\psi -F\psi \right)\right),$$ (1.31) whenever $`\alpha \in \mathbb{R}\backslash \{0\}`$ and $`V=-(2\nu /\alpha )\nabla \mathrm{ln}(\psi )`$ for a positive function $`\psi `$. Obviously there is a close similarity between the heat and the Burgers equation expressed by (1.28), and Miura’s identity (1.4) relating the mKdV and the KdV equation. The principal idea underlying these considerations is that one (hierarchy of) evolution equation(s) can be represented as a linear differential expression acting on another (hierarchy of) evolution equation(s). As long as the null space of this linear differential expression can be analyzed in detail, it becomes possible to transfer solutions, in fact, entire classes of solutions (e.g., rational, soliton, algebro-geometric solutions, etc.) between these evolution equations. In concrete applications, however, it turns out to be simpler to rewrite a relationship between two evolution equations, such as (1.4) and (1.28), in a form analogous to (1.6) and (1.29), rather than analyzing the nullspaces of $`(2\varphi \pm \partial _x)`$ and $`(-\partial _x+(\psi _x/\psi ))`$ in detail. These strategies relating (hierarchies of) evolution equations and their modified analogs are not at all restricted to the Burgers and heat equations and the KdV and mKdV hierarchies, respectively, but apply to a large number of evolution equations including the Boussinesq , and more generally, the Gelfand–Dickey hierarchy and its modified counterpart, the Drinfeld–Sokolov hierarchy , the Toda and Kac–van Moerbeke hierarchies , , the Kadomtsev–Petviashvili and modified Kadomtsev–Petviashvili hierarchies , , etc. For simplicity we restrict ourselves to classical solutions throughout this note. The case of distributional solutions of Burgers equation is considered, for instance, in . Throughout this note we abbreviate by $`C^{p,q}(\mathrm{\Omega }\times \mathrm{\Lambda }),`$ $`\mathrm{\Omega }\subseteq \mathbb{R}^n,\mathrm{\Lambda }\subseteq \mathbb{R}`$ open, $`n\in \mathbb{N},`$ $`p,q\in \mathbb{N}_0,`$ the linear space of continuous functions $`f(x,t)`$ with continuous partial derivatives with respect to $`x=(x_1,\mathrm{},x_n)`$ up to order $`p`$ and $`q`$ partial derivatives with respect to $`t.`$ $`C^{p,q}(\mathrm{\Omega }\times \mathrm{\Lambda };\mathbb{R}^n)`$ is then defined analogously for $`f(x,t)\in \mathbb{R}^n.`$ ## 2. The Miura transformation We turn to the precise formulation of the relations between the KdV and the mKdV equation and omit details of a purely calculational nature. ###### Lemma 2.1. Let $`\psi =\psi (x,t)>0`$ be a positive function such that $`\psi \in C^{4,0}(\mathbb{R}\times \mathbb{R}),`$ $`\psi _t\in C^{1,0}(\mathbb{R}\times \mathbb{R}).`$ Define $`\varphi =\pm \psi _x/\psi `$. Then $`\varphi \in C^{3,1}(\mathbb{R}\times \mathbb{R})`$ and $$\mathrm{mKdV}(\varphi )=\pm \partial _x\left(\frac{1}{\psi }\left(\psi _t-\stackrel{~}{P}(V_\pm )\psi \right)\right),$$ (2.1) where $$\stackrel{~}{P}(V)=2V\partial _x-V_x$$ (2.2) and $$V_\pm =\varphi ^2\pm \varphi _x.$$ (2.3) ###### Proof. A straightforward calculation. ∎
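The “straightforward calculation” can also be delegated to a computer algebra system; the following sympy sketch checks the ‘+’ branch of Lemma 2.1 (i.e., $`\varphi =\psi _x/\psi `$, $`V=V_+`$):

```python
import sympy as sp

x, t = sp.symbols('x t')
psi = sp.Function('psi', positive=True)(x, t)

phi = sp.diff(psi, x)/psi                          # phi = psi_x/psi  ('+' branch)
V = phi**2 + sp.diff(phi, x)                       # V_+ = phi^2 + phi_x = psi_xx/psi

mkdv = sp.diff(phi, t) - 6*phi**2*sp.diff(phi, x) + sp.diff(phi, x, 3)
P_of_V_psi = 2*V*sp.diff(psi, x) - sp.diff(V, x)*psi   # P~(V)psi, with P~(V) = 2V d_x - V_x
rhs = sp.diff((sp.diff(psi, t) - P_of_V_psi)/psi, x)   # right-hand side of (2.1)
print(sp.simplify(mkdv - rhs))                         # expected output: 0
```

The application to the KdV equation then reads as follows. ###### Theorem 2.2.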
Let $`V=V(x,t)`$ be a solution of the KdV equation, $`\mathrm{KdV}(V)=0,`$ with $`V\in C^{3,1}(\mathbb{R}\times \mathbb{R})`$, and let $`\psi >0`$ be a positive function satisfying $`\psi \in C^{2,0}(\mathbb{R}\times \mathbb{R}),`$ $`\psi _t\in C^{1,0}(\mathbb{R}\times \mathbb{R})`$ and $$\psi _t=\stackrel{~}{P}(V)\psi ,\quad -\psi _{xx}+V\psi =0,$$ (2.4) with $`\stackrel{~}{P}(V)`$ given by (2.2). Define $`\varphi =\psi _x/\psi `$ and $`\widehat{V}=\varphi ^2-\varphi _x`$. Then $`V=\varphi ^2+\varphi _x`$ and $`\varphi `$ satisfies $`\varphi \in C^{4,1}(\mathbb{R}\times \mathbb{R})`$ and the mKdV equation, $$\mathrm{mKdV}(\varphi )=0.$$ (2.5) Moreover, $`\widehat{V}`$ satisfies $`\widehat{V}\in C^{3,1}(\mathbb{R}\times \mathbb{R})`$ and the KdV equation, $$\mathrm{KdV}(\widehat{V})=0.$$ (2.6) ###### Proof. A computation based on Lemma 2.1. ∎ Originally, Theorem 2.2 was proved in (see also , , , , ) using supersymmetric methods. The above arguments, following in the context of the (modified) Kadomtsev-Petviashvili equation, result in considerably shorter calculations. The “if part” in Theorem 2.2 also follows from prolongation methods developed in . A different approach to Theorem 2.2, assuming rapidly decreasing solutions of the KdV equation, can be found in Sect. 38 of . ###### Remark 2.3. The chain of transformations $$V\to \varphi \to -\varphi \to \widehat{V}$$ (2.7) reveals a Bäcklund transformation between the KdV and mKdV equations ($`V\to \varphi `$) and two auto-Bäcklund transformations for the KdV ($`V\to \widehat{V}`$) and mKdV equations ($`\varphi \to -\varphi `$), respectively. ###### Remark 2.4. For simplicity we assumed $`\psi (x,t)>0`$ for all $`(x,t)\in \mathbb{R}^2`$ in Theorem 2.2. However, as proven by Lax , one can show that $`\psi (x,t_0)>0`$ for some $`t_0\in \mathbb{R}`$ and all $`x\in \mathbb{R}`$ actually implies $`\psi (x,t)>0`$ for all $`(x,t)\in \mathbb{R}^2`$ (see also ). Moreover, in case $`V(x,t_0)`$ is real-valued, we note that $`L(t_0)\psi (x,t_0)=0`$ has a positive solution $`\psi (x,t_0)>0`$ if and only if the Schrödinger differential expression $`L(t_0)=-\partial _x^2+V(x,t_0)`$ is nonoscillatory at $`\pm \infty `$ (cf. ). While the system of equations (2.4) always has a solution $`\psi (x,t)`$ (cf. Lemma 3 in ), it is the additional requirement $`\psi (x,t)>0`$ for all $`(x,t)\in \mathbb{R}^2`$ which renders $`\varphi `$ (and hence $`\widehat{V}`$) nonsingular. Without the condition $`\psi >0,`$ Theorem 2.2 describes (auto-)Bäcklund transformations for the KdV and mKdV equations with characteristic singularities (cf. ). ## 3. The Cole–Hopf transformation Finally we return to relations (1.28), (1.29), and (1.30). Since they are all proved by explicit calculations we may omit these details and focus on a precise formulation of the results instead. ###### Lemma 3.1. Let $`\psi =\psi (x,t)>0`$ be a positive function with $`\psi \in C^{3,0}(\mathbb{R}\times (0,\infty )),`$ $`\psi _t\in C^{1,0}(\mathbb{R}\times (0,\infty )).`$ Define $`V=-2\nu \psi _x/\psi `$ with $`\nu >0`$. Then $`V\in C^{2,1}(\mathbb{R}\times (0,\infty ))`$ and $`V_t+VV_x-\nu V_{xx}`$ $`=2\nu \left(-{\displaystyle \frac{1}{\psi }}\partial _x+{\displaystyle \frac{\psi _x}{\psi ^2}}\right)\left(\psi _t-\nu \psi _{xx}\right)`$ (3.1) $`=-2\nu \partial _x\left({\displaystyle \frac{1}{\psi }}(\psi _t-\nu \psi _{xx})\right).`$ (3.2) The extension to the case with a potential term $`F`$ in the heat equation reads as follows. ###### Lemma 3.2. Let $`F\in C^{1,0}(\mathbb{R}\times (0,\infty ))`$ and assume $`\psi =\psi (x,t)>0`$ to be a positive function such that $`\psi \in C^{3,0}(\mathbb{R}\times (0,\infty )),`$ $`\psi _t\in C^{1,0}(\mathbb{R}\times (0,\infty )).`$ Define $`V=-2\nu \psi _x/\psi `$ with $`\nu >0`$.
Then $`V\in C^{2,1}(\mathbb{R}\times (0,\infty ))`$ and $$V_t+VV_x-\nu V_{xx}+2\nu F_x=-2\nu \partial _x\left(\frac{1}{\psi }(\psi _t-\nu \psi _{xx}-F\psi )\right).$$ (3.3) We can exploit these relations as follows. ###### Theorem 3.3. Let $`F\in C^{1,0}(\mathbb{R}\times (0,\infty ))`$ and $`\nu >0.`$ (i) Suppose $`V`$ satisfies $`V\in C^{2,1}(\mathbb{R}\times (0,\infty ))`$, and $$V_t+VV_x-\nu V_{xx}+2\nu F_x=0$$ (3.4) for some $`\nu >0.`$ Define $$\psi (x,t)=\mathrm{exp}\left(-\frac{1}{2\nu }\int ^x𝑑yV(y,t)\right).$$ (3.5) Then $`\psi `$ satisfies $`0<\psi \in C^{3,1}(\mathbb{R}\times (0,\infty ))`$ and $$\frac{1}{\psi }\left(\psi _t-\nu \psi _{xx}-F\psi \right)=C(t)$$ (3.6) for some $`x`$-independent $`C\in C((0,\infty ))`$. (ii) Let $`\psi >0`$ be a positive function satisfying $`\psi \in C^{3,0}(\mathbb{R}\times (0,\infty )),`$ $`\psi _t\in C^{1,0}(\mathbb{R}\times (0,\infty ))`$ and suppose $$\psi _t=\nu \psi _{xx}+F\psi $$ (3.7) for some $`\nu >0.`$ Define $$V=-2\nu \frac{\psi _x}{\psi }.$$ (3.8) Then $`V\in C^{2,1}(\mathbb{R}\times (0,\infty ))`$ satisfies (3.4). ###### Remark 3.4. One can “scale away” $`C(t)`$ in Theorem 3.3 (i) by introducing a new function $`\stackrel{~}{\psi }`$. In fact, the function $`\stackrel{~}{\psi }(x,t)=\psi (x,t)\mathrm{exp}(-\int _0^t𝑑sC(s))`$ satisfies $$\stackrel{~}{\psi }_t=\nu \stackrel{~}{\psi }_{xx}+F\stackrel{~}{\psi }.$$ (3.9) ###### Remark 3.5. Using the standard representation of solutions of the heat equation initial value problem, $$\psi _t=\nu \psi _{xx},\quad \psi (x,0)=\psi _0(x),$$ (3.10) assuming $$\psi _0\in C(\mathbb{R}),\quad \psi _0(x)\le C_1\mathrm{exp}(C_2|x|^{1+\gamma })\text{ for }|x|\ge R$$ (3.11) for some $`R>0,`$ $`C_j\ge 0,`$ $`j=1,2,`$ and $`0\le \gamma <1,`$ given by (cf. , Ch. 3; , Ch. V) $$\psi (x,t)=\frac{1}{2(\pi \nu t)^{1/2}}\int _{\mathbb{R}}𝑑y\mathrm{exp}(-(x-y)^2/(4\nu t))\psi _0(y)>0,$$ (3.12) the corresponding initial value problem for the Burgers equation $$V_t+VV_x-\nu V_{xx}=0,\quad V(x,0)=V_0(x),$$ (3.13) reads $$V(x,t)=\frac{\int _{\mathbb{R}}𝑑y(x-y)t^{-1}\mathrm{exp}\left(-(2\nu )^{-1}\int _0^y𝑑\eta V_0(\eta )-(x-y)^2(4\nu t)^{-1}\right)}{\int _{\mathbb{R}}𝑑y\mathrm{exp}\left(-(2\nu )^{-1}\int _0^y𝑑\eta V_0(\eta )-(x-y)^2(4\nu t)^{-1}\right)},$$ (3.14) assuming $`V_0\in C(\mathbb{R})`$ and $$V_0(x)\ge 0\text{ or }\left|\int _0^x𝑑yV_0(y)\right|\underset{|x|\to \infty }{=}O(|x|^{1+\gamma })\text{ for some }\gamma <1.$$ (3.15) Without going into further details we mention the existence of a hierarchy of Burgers equations which are related to the linear partial differential equations $$\psi _t=\partial _x^{n+1}\psi ,\quad n\in \mathbb{N}$$ (3.16) by the Cole-Hopf transformation $`V=-2\psi _x/\psi `$ (see ). The multi-dimensional extension of Lemma 3.2 reads as follows. ###### Lemma 3.6. Let $`F\in C^{1,0}(\mathbb{R}^n\times (0,\infty ))`$ and assume $`\psi =\psi (x,t)>0`$ to be a positive function such that $`\psi \in C^{3,0}(\mathbb{R}^n\times (0,\infty )),`$ $`\psi _t\in C^{1,0}(\mathbb{R}^n\times (0,\infty )).`$ Define $`V=-(2\nu /\alpha )\nabla \mathrm{ln}(\psi )`$ with $`\alpha \in \mathbb{R}\backslash \{0\},`$ $`\nu >0.`$ Then $`V\in C^{2,1}(\mathbb{R}^n\times (0,\infty );\mathbb{R}^n)`$ and $$V_t+\alpha (V\cdot \nabla )V-\nu \mathrm{\Delta }V+\frac{2\nu }{\alpha }\nabla F=-\frac{2\nu }{\alpha }\nabla \left(\frac{1}{\psi }\left(\psi _t-\nu \mathrm{\Delta }\psi -F\psi \right)\right).$$ (3.17)
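As with the one-dimensional identities, (3.17) can be confirmed by direct computation; the following sympy sketch checks both components for $`n=2`$ (our own check):

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')
nu, alpha = sp.symbols('nu alpha', positive=True)
psi = sp.Function('psi', positive=True)(x1, x2, t)
F = sp.Function('F')(x1, x2, t)

X = (x1, x2)
grad = lambda f: [sp.diff(f, y) for y in X]
lap = lambda f: sum(sp.diff(f, y, 2) for y in X)

V = [-(2*nu/alpha)*g for g in grad(sp.log(psi))]      # V = -(2 nu/alpha) grad ln(psi)
defect = sp.diff(psi, t) - nu*lap(psi) - F*psi        # psi_t - nu Delta psi - F psi
rhs = [-(2*nu/alpha)*g for g in grad(defect/psi)]     # right-hand side of (3.17)

for i in range(2):
    lhs = (sp.diff(V[i], t)
           + alpha*sum(V[j]*sp.diff(V[i], X[j]) for j in range(2))
           - nu*lap(V[i])
           + (2*nu/alpha)*sp.diff(F, X[i]))
    assert sp.simplify(lhs - rhs[i]) == 0
print("identity (3.17) confirmed componentwise for n = 2")
```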
Our final result shows how to transfer solutions between the multi-dimensional Burgers equation and the heat equation. ###### Theorem 3.7. Let $`F\in C^{1,0}(\mathbb{R}^n\times (0,\infty )),`$ $`\alpha \in \mathbb{R}\backslash \{0\},`$ and $`\nu >0.`$ (i) Assume that $`V\in C^{2,1}(\mathbb{R}^n\times (0,\infty );\mathbb{R}^n)`$ satisfies $$V=\nabla \mathrm{\Phi }$$ (3.18) for some potential $`\mathrm{\Phi }\in C^{3,1}(\mathbb{R}^n\times (0,\infty ))`$ and $$V_t+\alpha (V\cdot \nabla )V-\nu \mathrm{\Delta }V+\frac{2\nu }{\alpha }\nabla F=0.$$ (3.19) Define $$\psi =\mathrm{exp}\left(-\frac{\alpha }{2\nu }\mathrm{\Phi }\right).$$ (3.20) Then $`\psi \in C^{3,1}(\mathbb{R}^n\times (0,\infty ))`$ and $$\frac{1}{\psi }\left(\psi _t-\nu \mathrm{\Delta }\psi -F\psi \right)=C(t),$$ (3.21) for some $`x`$-independent $`C\in C((0,\infty )).`$ (ii) Let $`\psi >0`$ be a positive function satisfying $`\psi \in C^{3,0}(\mathbb{R}^n\times (0,\infty )),`$ $`\psi _t\in C^{1,0}(\mathbb{R}^n\times (0,\infty ))`$ and suppose $$\psi _t=\nu \mathrm{\Delta }\psi +F\psi .$$ (3.22) Define $$V=-\frac{2\nu }{\alpha }\nabla \mathrm{ln}(\psi ).$$ (3.23) Then $`V\in C^{2,1}(\mathbb{R}^n\times (0,\infty );\mathbb{R}^n)`$ satisfies (3.19). Acknowledgments. We thank Mehmet Ünal and Karl Unterkofler for discussions on the Burgers and (m)KdV equations, respectively.
# Relations among QCD corrections beyond leading order <sup>1</sup><sup>1</sup>1Work supported in part by EU contract FMRX-CT98-0194 J. Blümlein, V. Ravindran and W.L. van Neerven <sup>2</sup><sup>2</sup>2On leave of absence from Instituut-Lorentz, University of Leiden, P.O. Box 9506, 2300 HA Leiden, The Netherlands. DESY-Zeuthen, Platanenallee 6, D-15738 Zeuthen, Germany. Symmetries are known to be a very useful guiding tool to understand the dynamics of various physical phenomena. Particularly, continuous symmetries played an important role in particle physics to unravel the structure of dynamics at low as well as high energies. In hadronic physics, such symmetries at low energies were found to be useful to classify various hadrons. At high energy, where the masses of the particles can be neglected, one finds in addition to the above mentioned symmetries new symmetries such as conformal and scale invariance. This for instance happens in deep inelastic lepton-hadron scattering (DIS), where the energy scale is much larger than the hadronic mass scale. At these energies one can in principle ignore the mass scale and the resulting dynamics is purely scale independent . We first discuss the Drell-Levy-Yan (DLY) relation which relates the structure functions $`F(x,Q^2)`$ measured in deep inelastic scattering to the fragmentation functions $`\stackrel{~}{F}(\stackrel{~}{x},Q^2)`$ observed in $`e^+e^{}`$-annihilation. Here $`x`$ denotes the Bjørken scaling variable, which in deep inelastic scattering and $`e^+e^{}`$-annihilation is defined by $`x=Q^2/(2p\cdot q)`$ and $`\stackrel{~}{x}=2p\cdot q/Q^2`$ respectively. Notice that in deep inelastic scattering the virtual photon momentum $`q`$ is spacelike, i.e. $`q^2=-Q^2<0`$, whereas in $`e^+e^{}`$-annihilation it becomes timelike, $`q^2=Q^2>0`$. Further, $`p`$ denotes the incoming or outgoing hadron momentum. The DLY relation looks as follows $`\stackrel{~}{F}_i(\stackrel{~}{x},Q^2)=x𝒜c\left[F_i(1/x,Q^2)\right],`$ (1) where $`𝒜c`$ denotes the analytic continuation from the region $`0<x\le 1`$ (DIS) to $`1<x<\infty `$ (annihilation region). At the level of splitting functions we have $`\stackrel{~}{P}_{ij}(\stackrel{~}{x})=x𝒜c\left[P_{ji}(1/x)\right].`$ (2) At LO, one finds $`\stackrel{~}{P}_{ij}^{(0)}(\stackrel{~}{x})=P_{ji}^{(0)}(x)`$ for $`x<1`$, which is nothing but the Gribov-Lipatov relation . This relation in terms of physical observables is known to be violated when one goes beyond leading order. There is a similar violation of the DLY relation among coefficient functions as well. The DLY (analytical continuation) relation defined above holds at the level of physical quantities provided the analytical continuation is performed in both $`x`$ as well as the scale $`Q^2`$ ($`Q^2\to -Q^2`$). For example the $`\mathrm{\Gamma }_{ij}`$ appearing in the following physical observables $$Q^2\frac{\partial }{\partial Q^2}\left(\begin{array}{c}F_A\\ F_B\end{array}\right)=\left(\begin{array}{cc}\mathrm{\Gamma }_{AA}& \mathrm{\Gamma }_{AB}\\ \mathrm{\Gamma }_{BA}& \mathrm{\Gamma }_{BB}\end{array}\right)\left(\begin{array}{c}F_A\\ F_B\end{array}\right)$$ satisfy the DLY relation, where $`F_A,F_B`$ are any two structure functions and $`Q`$ is the scale involved in the process. The violation of the DLY relation for the splitting functions and the coefficient functions is just an artifact of the adopted regularization method and the subtraction scheme.
When these coefficient functions are combined with the splitting functions in a scheme-invariant way, as for instance happens for the structure functions and fragmentation functions, the DLY relation holds. The reason for the cancellation of the DLY-violating terms among the splitting functions and coefficient functions is that the former are generated by simple scheme transformations. We now discuss supersymmetric relations among the splitting functions which determine the evolution of quark and gluon parton densities. These relations are valid when QCD becomes a supersymmetric $`𝒩=1`$ gauge field theory where both quarks and gluons are put in the adjoint representation with respect to the local gauge symmetry $`SU(N)`$. In this case one gets a simple relation between the colour factors, which become $`C_F=C_A=2T_f=N`$. In the case of spacelike splitting functions, which determine the evolution of the parton densities in deep inelastic lepton-hadron scattering, one has made the claim (see ) that the combination defined by $`\mathrm{\Delta }^{(i)}=P_{qq}^{(i)}-P_{gg}^{(i)}+P_{gq}^{(i)}-P_{qg}^{(i)},`$ (3) is equal to zero, i.e., $`\mathrm{\Delta }^{(i)}=0`$. This relation should follow from an $`𝒩=1`$ supersymmetry although no proof has been given yet. An explicit calculation at leading order (LO) confirms this claim so that we have $`\mathrm{\Delta }^{(0)}=0`$. However at next-to-leading order (NLO), when these splitting functions are computed in the $`\overline{\mathrm{MS}}`$-scheme, it turns out that $`\mathrm{\Delta }_{\overline{\mathrm{MS}}}^{(1)}\ne 0`$. The reason that this relation is violated can be attributed to the regularization method and the renormalization scheme in which these splitting functions are computed. In this case it is $`D`$-dimensional regularization and the $`\overline{\mathrm{MS}}`$-scheme which break the supersymmetry. In fact, the breaking occurs already in the $`ϵ`$-dependent part of the leading order splitting functions. Although this does not affect the leading order splitting functions in the limit $`ϵ\to 0`$, it leads to a finite contribution at the NLO level via the $`1/ϵ^2`$ terms which are characteristic of a two-loop calculation. If one carefully removes such breaking terms at the LO level consistently, one can avoid these terms at the NLO level. They can also be avoided if one uses $`D`$-dimensional reduction, which preserves the supersymmetry. The above observations also apply to the timelike splitting functions, which determine the evolution of fragmentation functions.
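The leading-order statement $`\mathrm{\Delta }^{(0)}=0`$ is easy to verify explicitly. Using the standard LO splitting functions for $`x<1`$ (suppressing the distributional terms at $`x=1`$; the normalizations below follow standard DGLAP conventions and are our assumption, not quoted from this paper) and setting $`C_F=C_A=2T_f=N`$, a short sympy check gives zero:

```python
import sympy as sp

x, N = sp.symbols('x N', positive=True)
CF = CA = N        # supersymmetric limit: C_F = C_A = N
TF2 = N            # and 2*T_f = N

Pqq = CF*(1 + x**2)/(1 - x)                       # x < 1 parts only
Pqg = TF2*(x**2 + (1 - x)**2)
Pgq = CF*(1 + (1 - x)**2)/x
Pgg = 2*CA*(x/(1 - x) + (1 - x)/x + x*(1 - x))

delta0 = Pqq - Pgg + Pgq - Pqg                    # the combination in eq. (3) at LO
print(sp.simplify(delta0))                        # expected output: 0
```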
# (𝑞,ℎ)–analogue of Newton’s binomial formula ## Acknowledgment I’d like to thank the DAAD for its financial support and the referee for his remarks.
# Huge oxygen isotope effect on local lattice fluctuations in La2-xSrxCuO4 superconductor ## Abstract Recently a growing number of experiments have provided indications of the key role of polarons (composite particles formed by a charge strongly coupled with a local lattice deformation) in doped perovskites, hosting colossal magnetoresistance (CMR) and high T<sub>c</sub> superconductivity . While the role of polarons is generally recognized in manganites, due to the large amplitude of the local lattice deformation, the scientific debate remains open for the cuprates, where the lattice deformation is smaller and the anomalous normal metallic phase becomes complex due to the coexistence of polarons with itinerant carriers. Moreover, the segregation of polarons and itinerant charges in different spatial domains forming lattice-charge stripes, as well as slow dynamic 1D spin fluctuations, have been observed. The driving force for the stripe formation remains under discussion, since it could be purely due to electronic interactions and/or due to strong electron-lattice (polaronic) interactions. In order to explore the role of the latter in stripe charge segregation, we have studied isotope effects on the dynamical lattice fluctuations and the polaron ordering temperature. Here we report compelling evidence for a huge oxygen isotope effect on local lattice fluctuations of the La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> high T<sub>c</sub> superconductor by x-ray absorption spectroscopy, a fast (∼10<sup>-15</sup> sec) and local (∼5 Å) probe. Upon replacing <sup>16</sup>O with <sup>18</sup>O, the characteristic temperature T for polaron ordering in La<sub>1.94</sub>Sr<sub>0.06</sub>CuO<sub>4</sub> increases from about 110 K to 170 K. Until now most of the proposed theories for high T<sub>c</sub> superconductivity have considered spin and charge fluctuations in a doped 2D antiferromagnetic plane, disregarding lattice instabilities. However, the isotope-effect experiments do not support this point of view. In particular, the isotope effects in high-temperature superconductors (HTSC) are very unusual and even larger than those for conventional phonon-mediated superconductors when the HTSC materials are underdoped . Zhao and co-workers have studied the oxygen isotope effects on both the penetration depth and the carrier concentration in some cuprate superconductors . These results show that the effective supercarrier mass depends strongly on the oxygen isotope mass in underdoped cuprates, suggesting that polaronic and/or bipolaronic charge carriers exist in HTSC. Although the isotope-effect experiments indicate that lattice excitations are indeed important in the HTSC, some other crucial experiments have to be explained in a consistent way to understand the pairing mechanism. This depends on understanding the complex normal phase, which is a particular phase of condensed matter in which charge segregation occurs in stripes at low temperatures . Thus, if lattice excitations are strongly coupled to the charge carriers, the isotope mass should influence the local lattice deformation, and the charge segregation temperature T should depend on the isotope mass. Here we report studies of how the dynamic local lattice distortions and the charge segregation temperature change with the oxygen isotope mass in La<sub>1.94</sub>Sr<sub>0.06</sub>CuO<sub>4</sub>, from x-ray absorption near edge structure (XANES) measurements.
We chose this compound because the largest oxygen isotope effects on both the superconducting transition temperature and the effective supercarrier mass have been observed in it . The XANES spectroscopy probes the statistical distribution of the conformations of the Cu-O cluster via electron scattering by oxygen atoms neighbouring the Cu, within a measuring time of the order of 10<sup>-15</sup> sec . Therefore it is very sensitive to distortions of the local structure of oxygens around the photoabsorbing atom and probes instantaneous lattice conformations. Our results show that the oxygen isotope substitution modifies the local structure of the CuO<sub>2</sub> plane and shifts the temperature T for polaron ordering upwards by 60 K, i.e., from 110(±10) K in the <sup>16</sup>O sample to 170(±10) K in the <sup>18</sup>O sample. The experiments were performed on the <sup>16</sup>O and <sup>18</sup>O isotope samples at the European Synchrotron Radiation Facility (ESRF), Grenoble. Fig. 1a shows the normalized Cu K-edge x-ray absorption near edge spectra for the <sup>16</sup>O and <sup>18</sup>O samples at 180 K. We denote the well-resolved peak features by A<sub>1</sub>, A<sub>2</sub>, B<sub>1</sub> and B<sub>2</sub>. These features, which extend from about 8 eV to 40 eV above the threshold, have been identified as ”shape resonances” determined by multiple scattering of the excited photoelectrons off neighbouring atoms, as revealed by XANES calculations for La<sub>2</sub>CuO<sub>4</sub> . The resonances B<sub>1</sub>, B<sub>2</sub> (A<sub>1</sub>, A<sub>2</sub>) are determined by electron scattering with in-plane (out-of-plane) atoms; therefore the main peak B<sub>1</sub> probes the multiple scattering from the oxygen atoms in the CuO<sub>2</sub> square planes, while the peaks A<sub>1</sub>, A<sub>2</sub> are mainly due to the multiple scattering of the ejected photoelectron from the apical oxygen and La/Sr atoms. In the presence of local lattice fluctuations with a time scale slower than the XANES measuring time, XANES probes the statistical distribution of the Cu sites. The effect of isotope substitution on the distribution of local Cu-site conformations can be directly seen in Fig. 1, where we report the difference between the spectra of the two isotopes at 200 K. There is a large variation of the XANES spectrum due to isotope substitution, of the order of 2% of the normalized absorption. This is a direct measure of the effect of the isotope mass on the distribution of Cu-site structural conformations spanned during the dynamic lattice fluctuations associated with the polarons. We observe an energy shift of peaks A<sub>1</sub> (and A<sub>2</sub>), and an intensity decrease of peak B<sub>1</sub> (and B<sub>2</sub>) in the <sup>18</sup>O spectrum. These changes are clearly related to the effect of changing the oxygen mass on polaronic lattice fluctuations involving the tilting of apical oxygens (out of plane) and the rhombic distortion of the Cu-O square planes . In order to study the evolution of the polaronic fluctuations as a function of temperature we report the absorption differences (with respect to the spectra at 60 K) for the <sup>16</sup>O (Fig. 2a) and <sup>18</sup>O (Fig. 2b) samples. For both samples, the difference spectra show a sudden change at a characteristic temperature T. This characteristic temperature T is 110(±10) K for the <sup>16</sup>O sample, and 170(±10) K for the <sup>18</sup>O sample.
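A crossover temperature of this kind can be quantified by fitting the temperature dependence of the conformational parameter with two linear branches joined at T; the sketch below indicates one such procedure (the scipy-based fit and the data arrays are hypothetical placeholders, not the measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_branch(T, Tc, a, b1, b2):
    # piecewise-linear R(T), continuous at the crossover temperature Tc
    return a + np.where(T < Tc, b1*(T - Tc), b2*(T - Tc))

# hypothetical (T, R) values, standing in for ratios extracted from the spectra
T = np.array([40., 60., 80., 100., 120., 140., 160., 180., 200., 220.])
R = np.array([0.30, 0.30, 0.29, 0.29, 0.25, 0.24, 0.23, 0.23, 0.22, 0.22])

popt, pcov = curve_fit(two_branch, T, R, p0=[110.0, 0.27, 0.0, -1e-3])
print("estimated crossover: %.0f +- %.0f K" % (popt[0], np.sqrt(pcov[0, 0])))
```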
The oxygen isotope shift of T is quite large (60 K). We have taken as a conformational parameter the intensity ratio R $`=(B_1-A_1)/(B_1+A_1)`$ that is plotted in Fig. 3. Upon decreasing the temperature, the ratio R shows an increase at T, where the local lattice fluctuations slow down and the polarons order into lattice-charge stripes . The temperature dependence of R (Fig. 3) shows that this crossover temperature T increases by about 60 K upon replacing <sup>16</sup>O with <sup>18</sup>O. This result indicates that the isotope substitution leads to a large change in the polaronic lattice fluctuations. Incidentally, a large isotope effect on polaronic lattice modes has recently been predicted theoretically by Mustre de Leon et al. . The T is associated with the temperature below which the coexisting polaronic and itinerant charge carriers, spatially distributed in stripes, can be distinguished within the time scale of the x-ray absorption technique, as revealed by the atomic pair distribution of the CuO<sub>2</sub> plane . When the temperature is lowered, XANES, with its fixed time scale and its sensitivity to higher-order pair distributions, reveals an anomalous change in the local lattice across the temperature T, due to a sudden change in the statistical distribution of the Cu sites at the transition into an ordered phase. Obviously, the value of T could differ when measured by other techniques, depending on their time scales. It is worth noting that T of the virgin samples (i.e., the samples with the same oxygen isotope) is nearly independent of doping (T ∼110 K for $`x`$=0.06, T ∼100 K for $`x`$=0.15, and T ∼150 K for La<sub>2</sub>CuO<sub>4.1</sub>) . This indicates that the observed giant oxygen isotope shift of T is not caused by any possible difference in the carrier concentrations of the two isotope samples. As a matter of fact, oxygen isotope substitution hardly affects the carrier concentrations . From Fig. 3, one can also see that below T the difference between the two isotope samples is much smaller than that above T. This may be explained by the fact that the average local structure deviation in the ordered striped phase due to isotope substitution is less than that in the disordered phase. The large oxygen isotope shift suggests that the stripe formation in high-temperature superconductors is not caused by purely electronic interactions. It appears that the electron-phonon interaction plays an important role in this phenomenon. This is consistent with the large oxygen isotope effects on both T<sub>c</sub> and the effective supercarrier mass observed in this material . The large oxygen isotope effect on the supercarrier mass implies that the nature of charge carriers in this material is of polaron and/or bipolaron type. Since the (bi)polaronic carrier for the <sup>18</sup>O sample is heavier than for the <sup>16</sup>O sample , one expects that the average local structure deviation of the <sup>18</sup>O sample should be larger for the given time scale, in agreement with the results shown in Fig. 1 and Fig. 3. The present experiment places important constraints on the microscopic origin of the stripe phase and on the microscopic mechanism of high-temperature superconductivity. Methods. Sample preparation and properties. Samples of La<sub>1.94</sub>Sr<sub>0.06</sub>CuO<sub>4</sub> were prepared by a conventional solid-state reaction using dried La<sub>2</sub>O<sub>3</sub> (99.99%), SrCO<sub>3</sub> (99.999%) and CuO (99.999%).
The powders were mixed, ground thoroughly, pressed into pellets and fired in air at 1000 °C for ∼96 h with three intermediate grindings. The diffusion was carried out for 40 h at 900 °C and an oxygen pressure of about 1 bar. The cooling time to room temperature was ∼4 h. The oxygen isotope enrichment was determined from the weight changes of the <sup>16</sup>O and <sup>18</sup>O samples. The <sup>18</sup>O samples had ∼90% <sup>18</sup>O and ∼10% <sup>16</sup>O. The <sup>16</sup>O sample has a T<sub>c</sub> of about 8 K, and the <sup>18</sup>O sample has a T<sub>c</sub> lower by about 1 K than the <sup>16</sup>O sample . X-ray absorption measurements. The Cu K-edge absorption measurements were performed on powder samples. The absorption spectra were recorded on the beam line BM29 at ESRF, Grenoble. The Cu Kα fluorescence yield (FY) off the samples was collected using a multi-element Ge X-ray detector array, covering a large solid angle, to measure the absorption signal. The samples were mounted in a closed-cycle He refrigerator and the temperature was monitored with an accuracy of ±1 K. To measure the spectra with a very high signal-to-noise ratio and with a high resolution at the Cu K-edge, we used a Si(311) double-crystal monochromator and collected several absorption scans at the same temperature. The measurements were repeated on another beam line (BM32) after several months, and in spite of the different experimental set-ups for the two runs, the results showed very good reproducibility. Figure captions: Fig. 1: Cu K-edge X-ray absorption near edge structure (XANES) of La<sub>1.94</sub>Sr<sub>0.06</sub>CuO<sub>4</sub> for the <sup>16</sup>O and <sup>18</sup>O isotope samples (panel a) and their difference (panel b) at 200 K. The <sup>16</sup>O → <sup>18</sup>O isotope effect on the Cu-site structure fluctuations induces a decrease of the peak B<sub>1</sub> with an increase of peak A<sub>1</sub>. Fig. 2: Temperature dependence of the absorption difference spectra for the two isotopes with respect to the spectra at 60 K for the <sup>16</sup>O (panel a) and <sup>18</sup>O sample (panel b). In both cases the temperature where the local structure displays a change is marked. Fig. 3: Temperature evolution of the XANES intensity ratio R $`=(B_1-A_1)/(B_1+A_1)`$, where B<sub>1</sub> (A<sub>1</sub>) is the intensity of XANES peak B<sub>1</sub> (A<sub>1</sub>), for the <sup>16</sup>O (upper) and <sup>18</sup>O (lower) substituted samples. There is a sudden change of the intensity ratio at a characteristic temperature T ∼110(±10) K for the <sup>16</sup>O sample, and ∼170(±10) K for the <sup>18</sup>O sample, giving a huge oxygen isotope shift of ∼60 K.
## 1 Introduction It is a pleasure to speak in such a remarkable venue – the old fortress of Corfu City. It is not often that one gets to meet inside a tourist attraction. The last such occasion for me was at a conference that took place inside the Chateau de Blois. Many of the talks in this conference will cover recent developments in superstring theory and M theory – in particular, the recent AdS/CFT conjecture. Since the organizers have chosen to place me first in the schedule, and since not everyone here is an expert in these matters, I have decided (in consultation with the organizers) to give a rather general introduction to some of these recent developments. Hopefully, this will help to provide some of the background that is needed for the more specialized talks that will follow. It should also relieve those speakers of the need to give an extended review of the basics. As I’m sure most of you know, the term M theory was introduced by Witten to describe the 11-dimensional quantum theory whose low energy effective description is 11-dimensional supergravity. However, the usage of this term has been extended by many authors (including myself on occasion) to refer to the underlying theory that reduces to the five different 10-dimensional superstring theories in various special limits, as well as the flat 11-dimensional theory in a sixth special limit. This is somewhat confusing. Therefore, in a recent talk at the Vancouver conference, Sen proposed that the term M theory should be reserved for the 11-dimensional quantum theory and that the (largely unknown) underlying fundamental theory, which is not specific to any particular spacetime dimension, should be called U Theory . He suggested that U could stand for either “unknown” or “unified”. This nomenclature makes a lot of sense to me, so I will try to adhere to it. In the text that follows, I will use the term M theory and not the term U theory. However, its usage will occur only in the sense that Witten originally intended, namely an 11-dimensional quantum theory. In the first half of this talk (in sections 2 – 5) I will survey some of the basic facts about type IIA and type IIB superstrings in 10 dimensions and the dualities that relate them to M theory. Aside from some minor editing, these sections are copied from a review that I wrote earlier this year . Then, in section 6 I will give an introduction to the remarkable duality that has been proposed between superstring theory or M theory in certain anti de Sitter spacetime backgrounds and conformally invariant field theories. To be specific, I will focus on the duality between type IIB superstring theory in an $`AdS_5\times S^5`$ background and $`𝒩=4`$ supersymmetric gauge theory. As I have already indicated, in this talk I will only survey some of the basics, and leave the discussion of more advanced aspects of this subject to other speakers. For a more detailed survey of superstring theory and M theory I recommend the review paper written by Sen . Polchinski’s new textbook is also recommended . ## 2 Perturbative Superstring Theory Superstring theory first achieved widespread acceptance during the first superstring revolution in 1984-85. There were three main developments at this time.
The first was the discovery of an anomaly cancellation mechanism , which showed that supersymmetric gauge theories can be consistent in ten dimensions provided they are coupled to supergravity (as in type I superstring theory) and the gauge group is either SO(32) or $`E_8\times E_8`$. Any other group necessarily would give uncanceled gauge anomalies and hence inconsistency at the quantum level. The second development was the discovery of two new superstring theories—called heterotic string theories—with precisely these gauge groups . The third development was the realization that the $`E_8\times E_8`$ heterotic string theory admits solutions in which six of the space dimensions form a Calabi–Yau space, and that this results in a 4d effective theory at low energies with many qualitatively realistic features . Unfortunately, there are very many Calabi–Yau spaces and a whole range of additional choices that can be made (orbifolds, Wilson loops, etc.). Thus there is an enormous variety of possibilities, none of which stands out as particularly special. In any case, after the first superstring revolution subsided, we had five distinct superstring theories with consistent weak coupling perturbation expansions, each in ten dimensions. Three of them, the type I theory and the two heterotic theories, have $`𝒩=1`$ supersymmetry in the ten-dimensional sense. Since the minimal 10d spinor is simultaneously Majorana and Weyl, this corresponds to 16 conserved supercharges. The other two theories, called type IIA and type IIB, have $`𝒩=2`$ supersymmetry (32 supercharges) . In the IIA case the two spinors have opposite handedness so that the spectrum is left-right symmetric (nonchiral). In the IIB case the two spinors have the same handedness and the spectrum is chiral. The understanding of these five superstring theories was developed in the ensuing years. In each case it became clear, and was largely proved, that there are consistent perturbation expansions of on-shell scattering amplitudes. In four of the five cases (heterotic and type II) the fundamental strings are oriented and unbreakable. As a result, these theories have particularly simple perturbation expansions. Specifically, there is a unique Feynman diagram at each order of the loop expansion. The Feynman diagrams depict string world sheets, and therefore they are two-dimensional surfaces. For these four theories the unique $`L`$-loop diagram is a closed orientable genus-$`L`$ Riemann surface, which can be visualized as a sphere with $`L`$ handles. External (incoming or outgoing) particles are represented by $`N`$ points (or “punctures”) on the Riemann surface. A given diagram represents a well-defined integral of dimension $`6L+2N-6`$. This integral has no ultraviolet divergences, even though the spectrum contains states of arbitrarily high spin (including a massless graviton). From the viewpoint of point-particle contributions, string and supersymmetry properties are responsible for incredible cancellations. Type I superstrings are unoriented and breakable. As a result, the perturbation expansion is more complicated for this theory, and the various world-sheet diagrams at a given order (determined by the Euler number) have to be combined properly to cancel divergences and anomalies . ### 2.1 T Duality An important discovery that was made between the two superstring revolutions is called T duality . This is a property of string theories that can be understood within the context of perturbation theory.
(The discoveries associated with the second superstring revolution are mostly nonperturbative.) T duality shows that spacetime geometry, as probed by strings, has some surprising properties (sometimes referred to as quantum geometry). The basic idea can be illustrated by the simplest example in which one spatial dimension forms a circle (denoted $`S^1`$). Then the ten-dimensional geometry is $`R^9\times S^1`$. T duality identifies this string compactification with one of a second string theory also on $`R^9\times S^1`$. However, if the radii of the circles in the two dual descriptions are denoted $`R_1`$ and $`R_2`$, then $$R_1R_2=\alpha ^{\prime }.$$ (1) Here $`\alpha ^{\prime }=\ell _s^2`$ is the universal Regge slope parameter, and $`\ell _s`$ is the fundamental string length scale (for both string theories). The tension of a fundamental string is given by $$T=2\pi m_s^2=\frac{1}{2\pi \alpha ^{\prime }},$$ where we have introduced a fundamental string mass scale $`m_s=(2\pi \ell _s)^{-1}.`$ Note that T duality implies that shrinking the circle to zero in one theory corresponds to decompactification of the dual theory. Compactification on a circle of radius $`R`$ implies that momenta in that direction are quantized, $`p=n/R`$. (These are called Kaluza–Klein excitations.) These momenta appear as masses for states that are massless from the higher-dimensional viewpoint. String theories also have a second class of excitations, called winding modes. Namely, a string wound $`m`$ times around the circle has energy $$E=2\pi RmT=mR/\alpha ^{\prime }.$$ Equation (1) shows that the winding modes and Kaluza–Klein excitations are interchanged under T duality. What does T duality imply for our five superstring theories? The IIA and IIB theories are T dual . So compactifying the nonchiral IIA theory on a circle of radius $`R`$ and letting $`R\to 0`$ gives the chiral IIB theory in ten dimensions! This means, in particular, that they should not be regarded as distinct theories. The radius $`R`$ is actually a $`vev`$ of a scalar field, which arises as an internal component of the 10d metric tensor. Thus the type IIA and type IIB theories in 10d are two limiting points in a continuous moduli space of quantum vacua. The two heterotic theories are also T dual, though there are technical details involving Wilson loops, which we will not explain here. T duality applied to the type I theory gives a dual description, which is sometimes called I′. The names IA and IB have also been introduced by some authors. For the remainder of this paper, we will restrict attention to theories with maximal supersymmetry (32 conserved supercharges). This is sufficient to describe the basic ideas of M theory. Of course, it suppresses many fascinating and important issues and discoveries. In this way we will keep the presentation from becoming too long or too technical. The main focus will be to ask what happens when we go beyond perturbation theory and allow the coupling strength to become large in the type II theories. The answer in the IIA case, as we will see, is that another spatial dimension appears.
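As a quick illustration (in units where $`\alpha ^{\prime }=1`$, and suppressing string oscillator contributions), the following short Python script checks that the mass spectrum at radius $`R`$ matches the spectrum at radius $`\alpha ^{\prime }/R`$ with momentum and winding interchanged:

```python
import itertools

ALPHA_PRIME = 1.0              # alpha' = l_s**2, set to 1

def mass_sq(n, m, R):
    # (mass)^2 of a state with n momentum units and m winding units on a
    # circle of radius R (oscillator contributions suppressed)
    return (n/R)**2 + (m*R/ALPHA_PRIME)**2

R = 3.0
R_dual = ALPHA_PRIME/R         # the T-dual radius, R1*R2 = alpha'
for n, m in itertools.product(range(4), repeat=2):
    # the (n, m) state at radius R matches the (m, n) state at the dual radius
    assert abs(mass_sq(n, m, R) - mass_sq(m, n, R_dual)) < 1e-12
print("momentum and winding spectra exchanged under R -> alpha'/R")
```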
## 3 M Theory In the 1970s and 1980s various supersymmetry and supergravity theories were constructed. (See , for example.) In particular, supersymmetry representation theory showed that ten is the largest spacetime dimension in which there can be a matter theory (with spins $`\le 1`$) in which supersymmetry is realized linearly. A realization of this is 10d super Yang–Mills theory, which has 16 supercharges . This is a pretty (i.e., very symmetrical) classical field theory, but at the quantum level it is both nonrenormalizable and anomalous for any nonabelian gauge group. However, as we indicated earlier, both problems can be overcome for suitable gauge groups (SO(32) or $`E_8\times E_8`$) when the Yang–Mills theory is embedded in a type I or heterotic string theory. The largest possible spacetime dimension for a supergravity theory (with spins $`\le 2`$), on the other hand, is eleven. Eleven-dimensional supergravity, which has 32 conserved supercharges, was constructed 20 years ago . It has three kinds of fields—the graviton field (with 44 polarizations), the gravitino field (with 128 polarizations), and a three-index antisymmetric tensor gauge field $`C_{\mu \nu \rho }`$ (with 84 polarizations). These massless particles are referred to collectively as the supergraviton. 11d supergravity is also a pretty classical field theory, which has attracted a lot of attention over the years. It is not chiral, and therefore not subject to anomaly problems.<sup>2</sup><sup>2</sup>2Unless the spacetime has boundaries. The anomaly associated to a 10d boundary can be canceled by introducing $`E_8`$ supersymmetric gauge theory on the boundary . It is also nonrenormalizable, and thus it cannot be a fundamental theory. (Though it is difficult to demonstrate explicitly that it is not finite as a result of “miraculous” cancellations, we now know that this is not the case.) However, we now believe that it is a low-energy effective description of M theory, which is a well-defined quantum theory . This means, in particular, that higher dimension terms in the effective action for the supergravity fields have uniquely determined coefficients within the M theory setting, even though they are formally infinite (and hence undetermined) within the supergravity context. ### 3.1 Relation to Type IIA Superstring Theory Intriguing connections between type IIA string theory and 11d supergravity have been known for a long time. If one carries out dimensional reduction of 11d supergravity to 10d, one gets type IIA supergravity . In this case dimensional reduction can be viewed as a compactification on a circle in which one drops all the Kaluza–Klein excitations. It is easy to show that this does not break any of the supersymmetries. The field equations of 11d supergravity admit a solution that describes a supermembrane. This solution has the property that the energy density is concentrated on a two-dimensional surface. A 3d world-volume description of the dynamics of this supermembrane, quite analogous to the 2d world volume actions of superstrings, has been constructed . The authors suggested that a consistent 11d quantum theory might be defined in terms of this membrane, in analogy to string theories in ten dimensions.<sup>3</sup><sup>3</sup>3It is now clear that this cannot be done in any straightforward manner, since there is no weak coupling limit in which the supermembrane describes all the finite-mass excitations. Another striking result was the discovery of double dimensional reduction . This is a dimensional reduction in which one compactifies on a circle, wraps one dimension of the membrane around the circle and drops all Kaluza–Klein excitations for both the spacetime theory and the world-volume theory. The remarkable fact is that this gives the (previously known) type IIA superstring world-volume action . For many years these facts remained unexplained curiosities until they were reconsidered by Townsend and by Witten .
The conclusion is that type IIA superstring theory really does have a circular 11th dimension in addition to the previously known ten spacetime dimensions. This fact was not recognized earlier because the appearance of the 11th dimension is a nonperturbative phenomenon, not visible in perturbation theory. To explain the relation between M theory and type IIA string theory, a good approach is to identify the parameters that characterize each of them and to explain how they are related. Eleven-dimensional supergravity (and hence M theory, too) has no dimensionless parameters. As we have seen, there are no massless scalar fields, whose vevs could give parameters. The only parameter is the 11d Newton constant, which raised to a suitable power ($`-1/9`$), gives the 11d Planck mass $`m_p`$. When M theory is compactified on a circle (so that the spacetime geometry is $`R^{10}\times S^1`$) another parameter is the radius $`R`$ of the circle. Now consider the parameters of type IIA superstring theory. They are the string mass scale $`m_s`$, introduced earlier, and the dimensionless string coupling constant $`g_s`$. An important fact about all five superstring theories is that the coupling constant is not an arbitrary parameter. Rather, it is a dynamically determined vev of a scalar field, the dilaton, which is a supersymmetry partner of the graviton. With the usual conventions, one has $`g_s=e^\varphi `$. We can identify compactified M theory with type IIA superstring theory by making the following correspondences: $$m_s^2=2\pi Rm_p^3$$ (2) $$g_s=2\pi Rm_s.$$ (3) Using these one can derive other equivalent relations, such as $$g_s=(2\pi Rm_p)^{3/2}$$ $$m_s=g_s^{1/3}m_p.$$ The latter implies that the 11d Planck length is shorter than the string length scale at weak coupling by a factor of $`(g_s)^{1/3}`$. Conventional string perturbation theory is an expansion in powers of $`g_s`$ at fixed $`m_s`$. Equation (3) shows that this is equivalent to an expansion about $`R=0`$. In particular, the strong coupling limit of type IIA superstring theory corresponds to decompactification of the eleventh dimension, so in a sense M theory is type IIA string theory at infinite coupling.<sup>4</sup><sup>4</sup>4The $`E_8\times E_8`$ heterotic string theory is also eleven-dimensional at strong coupling . This explains why the eleventh dimension was not discovered in studies of string perturbation theory. These relations encode some interesting facts. The fact relevant to eq. (2) concerns the interpretation of the fundamental type IIA string. Earlier we discussed the old notion of double dimensional reduction, which allowed one to derive the IIA superstring world-sheet action from the 11d supermembrane (or M2-brane) world-volume action. Now we can make a stronger statement: The fundamental IIA string actually is an M2-brane of M theory with one of its dimensions wrapped around the circular spatial dimension. No truncation to zero modes is required. Denoting the string and membrane tensions (energy per unit volume) by $`T_{F1}`$ and $`T_{M2}`$, one deduces that $$T_{F1}=2\pi RT_{M2}.$$ (4) However, $`T_{F1}=2\pi m_s^2`$ and $`T_{M2}=2\pi m_p^3`$. Combining these relations gives eq. (2). It should be emphasized that all the formulas in this section are exact, due to the large amount of unbroken supersymmetry.
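Since all of these relations follow from eqs. (2) and (3) by simple algebra, one can let a computer algebra system keep track of them; a short sympy sketch checking the formulas quoted above (my own consistency check, not part of the original derivations):

```python
import sympy as sp

R, mp = sp.symbols('R m_p', positive=True)

ms = sp.sqrt(2*sp.pi*R*mp**3)      # eq. (2): m_s^2 = 2 pi R m_p^3
gs = 2*sp.pi*R*ms                  # eq. (3): g_s = 2 pi R m_s

assert sp.simplify(gs - (2*sp.pi*R*mp)**sp.Rational(3, 2)) == 0   # g_s = (2 pi R m_p)^(3/2)
assert sp.simplify(ms - gs**sp.Rational(1, 3)*mp) == 0            # m_s = g_s^(1/3) m_p
assert sp.simplify(2*sp.pi*ms**2 - 2*sp.pi*R*(2*sp.pi*mp**3)) == 0  # eq. (4): T_F1 = 2 pi R T_M2
print("relations (2)-(4) are mutually consistent")
```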
### 3.2 p-Branes and D-Branes Type II superstring theories contain a variety of $`p`$-brane solutions that preserve half of the 32 supersymmetries. These are solutions in which the energy is concentrated on a $`p`$-dimensional spatial hypersurface. (Adding the time dimension, the world volume of a $`p`$-brane has $`p+1`$ dimensions.) The corresponding solutions of supergravity theories were constructed by Horowitz and Strominger . A large class of these $`p`$-brane excitations are called D-branes (or D$`p`$-branes when we want to specify the dimension), whose tensions are given by $$T_{Dp}=2\pi m_s^{p+1}/g_s.$$ (5) This dependence on the coupling constant is one of the characteristic features of a D-brane. It is to be contrasted with the more familiar $`g^{-2}`$ dependence of soliton masses (e.g., the ’t Hooft–Polyakov monopole). Another characteristic feature of D-branes is that they carry a charge that couples to a gauge field in the Ramond-Ramond (RR) sector of the theory. (Such fields can be described as bispinors.) The particular RR gauge fields that occur imply that even values of $`p`$ occur in the IIA theory and odd values in the IIB theory. D-branes have a number of special properties, which make them especially interesting. By definition, they are branes on which strings can end—D stands for Dirichlet boundary conditions. The end of a string carries a charge, and the D-brane world-volume theory contains a $`U(1)`$ gauge field that carries the associated flux. When $`n`$ D$`p`$-branes are coincident, or parallel and nearly coincident, the associated $`(p+1)`$-dimensional world-volume theory is a $`U(n)`$ gauge theory. The $`n^2`$ gauge bosons $`A_\mu ^{ij}`$ and their supersymmetry partners arise as the ground states of oriented strings running from the $`i`$th D$`p`$-brane to the $`j`$th D$`p`$-brane. The diagonal elements, belonging to the Cartan subalgebra, are massless. The field $`A_\mu ^{ij}`$ with $`i\ne j`$ has a mass proportional to the separation of the $`i`$th and $`j`$th branes. This separation is described by the vev of a corresponding scalar field in the world-volume theory. In particular, the D2-brane of the type IIA theory corresponds to our friend the supermembrane of M theory, but now in a background geometry in which one of the transverse dimensions is a circle. The tensions check, because (using eqs. (2), (3), and (5)) $$T_{D2}=2\pi m_s^3/g_s=2\pi m_p^3=T_{M2}.$$ The mass of the first Kaluza–Klein excitation of the 11d supergraviton is $`1/R`$. Using eq. (3), we see that this can be identified with the D0-brane. More identifications of this type arise when we consider the magnetic dual of the M theory supermembrane. This turns out to be a five-brane, called the M5-brane.<sup>5</sup><sup>5</sup>5In general, the magnetic dual of a $`p`$-brane in $`d`$ dimensions is a $`(d-p-4)`$-brane. Its tension is $`T_{M5}=2\pi m_p^6`$. Wrapping one of its dimensions around the circle gives the D4-brane, with tension $$T_{D4}=2\pi RT_{M5}=2\pi m_s^5/g_s.$$ If, on the other hand, the M5-brane is not wrapped around the circle, one obtains the so-called NS5-brane of the IIA theory with tension $$T_{NS5}=T_{M5}=2\pi m_s^6/g_s^2.$$ This 5-brane, which is the magnetic dual of the fundamental IIA string, exhibits the conventional $`g^{-2}`$ solitonic dependence. To summarize, type IIA superstring theory is M theory compactified on a circle of radius $`R=g_s\ell _s`$. M theory is believed to be a well-defined quantum theory in 11d, which is approximated at low energy by 11d supergravity.
Its supersymmetric excitations (which are the only ones known when there is no compactification) are the massless supergraviton, the M2-brane, and the M5-brane. These account both for the (perturbative) fundamental string of the IIA theory and for many of its nonperturbative excitations. The identities presented here are exact, because they are protected by supersymmetry. ## 4 Type IIB Superstring Theory In the previous section we discussed type IIA superstring theory and its relationship to eleven-dimensional M theory. In this section we consider type IIB superstring theory, which is the other maximally supersymmetric string theory with 32 conserved supercharges. It is also 10-dimensional, but unlike the IIA theory its two supercharges have the same handedness. Since the spectrum contains massless chiral fields, one should check whether there are anomalies that break the gauge invariances—general coordinate invariance, local Lorentz invariance, and local supersymmetry. In fact, the UV finiteness of the string theory Feynman diagrams (and associated modular invariance) ensures that all anomalies must cancel. This was verified from a field theory viewpoint in ref. . The low-energy effective theory that approximates type IIB superstring theory is type IIB supergravity , just as 11d supergravity approximates M theory. In each case the supergravity theory is only well-defined as a classical field theory, but still it can teach us a lot. For example, it can be used to construct $`p`$-brane solutions and compute their tensions. Even though such solutions themselves are only approximate, supersymmetry ensures that their tensions, which are related to the kinds of charges they carry, are exact. ### 4.1 SL(2,Z) duality Another significant fact about type IIB supergravity is that it possesses a global $`SL(2,R)`$ symmetry. It is instructive to consider the bosonic spectrum and its $`SL(2,R)`$ transformation properties. There are two scalar fields—the dilaton $`\varphi `$ and an axion $`\chi `$, which are conveniently combined in a complex field $$\rho =\chi +ie^{-\varphi }.$$ (6) The $`SL(2,R)`$ symmetry transforms this field nonlinearly: $$\rho \to \frac{a\rho +b}{c\rho +d},$$ (7) where $`a,b,c,d`$ are real numbers satisfying $`ad-bc=1`$. However, in the quantum string theory this symmetry is broken to the discrete subgroup $`SL(2,Z)`$ , which means that $`a,b,c,d`$ are restricted to be integers. Defining the vev of the $`\rho `$ field to be $$\langle \rho \rangle =\frac{\theta }{2\pi }+\frac{i}{g_s},$$ (8) the $`SL(2,Z)`$ symmetry transformation $`\rho \to \rho +1`$ implies that $`\theta `$ is an angular coordinate. More significantly, in the special case $`\theta =0`$, the symmetry transformation $`\rho \to -1/\rho `$ takes $`g_s\to 1/g_s`$. This symmetry, called S duality, implies that the theory with coupling constant $`g_s`$ is equivalent to coupling constant $`1/g_s`$, so that the weak coupling expansion and the strong coupling expansion are identical! The bosonic spectrum also contains a pair of two-form potentials $`B_{\mu \nu }^{(1)}`$ and $`B_{\mu \nu }^{(2)}`$,<sup>6</sup><sup>6</sup>6These are sometimes denoted $`B_{NS}`$ and $`B_{RR}`$. which transform as a doublet under $`SL(2,R)`$ or $`SL(2,Z)`$. In particular, the S duality transformation $`\rho \to -1/\rho `$ interchanges them. The remaining bosonic fields are the graviton and a four-form potential $`C_{\mu \nu \rho \lambda }`$, with a self-dual field strength. They are invariant under $`SL(2,R)`$ or $`SL(2,Z)`$.
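To see the group action concretely, one can apply eq. (7) to the vev (8) numerically; the following short script (with an illustrative value $`g_s=0.3`$) shows that $`\rho \to \rho +1`$ shifts $`\theta `$ by $`2\pi `$ while $`\rho \to -1/\rho `$ at $`\theta =0`$ inverts the coupling:

```python
import math

def sl2z(a, b, c, d, rho):
    # SL(2,Z) action (7); requires a*d - b*c == 1
    assert a*d - b*c == 1
    return (a*rho + b)/(c*rho + d)

g_s, theta = 0.3, 0.0
rho = theta/(2*math.pi) + 1j/g_s        # the vev (8)

print(sl2z(1, 1, 0, 1, rho))            # rho -> rho + 1: theta -> theta + 2*pi, g_s fixed
rho_S = sl2z(0, -1, 1, 0, rho)          # rho -> -1/rho (S duality)
print(1/rho_S.imag)                     # the new coupling: 3.33... = 1/g_s
```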
### 4.2 Duality Between Type IIB Superstring Theory and M Theory
In the introductory section we indicated that the type IIA and type IIB superstring theories are T dual, meaning that if they are compactified on circles of radii $`R_A`$ and $`R_B`$ one obtains equivalent theories for the identification $`R_AR_B=\ell _s^2`$. Moreover, in sect. 2 we saw that the type IIA theory is actually M theory compactified on a circle. The latter fact encodes nonperturbative information. It turns out to be very useful to combine these two facts and to consider the duality between M theory compactified on a torus $`(R^9\times T^2)`$ and type IIB superstring theory compactified on a circle $`(R^9\times S^1)`$. Recall that a torus can be described as the complex plane modded out by the equivalence relations $`z\sim z+w_1`$ and $`z\sim z+w_2`$. Up to conformal equivalence, the periods can be taken to be $`1`$ and $`\tau `$, with Im $`\tau >0`$. However, in this characterization $`\tau `$ and $`\tau ^{\prime }=(a\tau +b)/(c\tau +d)`$, where $`a,b,c,d`$ are integers satisfying $`ad-bc=1`$, describe equivalent tori. Thus a torus is characterized by a modular parameter $`\tau `$ and an $`SL(2,Z)`$ modular group. The natural, and correct, conjecture at this point is that one should identify the modular parameter $`\tau `$ of the M theory torus with the parameter $`\rho `$ that characterizes the type IIB vacuum! Then the duality gives a geometrical explanation of the nonperturbative S duality symmetry of the IIB theory: the transformation $`\rho \to -1/\rho `$, which sends $`g_s\to 1/g_s`$ in the IIB theory, corresponds to interchanging the two cycles of the torus in the M theory description. To complete the story, we should relate the area of the M theory torus $`(A_M)`$ to the radius of the IIB theory circle $`(R_B)`$. The desired formula is a simple consequence of the ones given above:
$$m_p^3A_M=(2\pi R_B)^{-1}.$$ (9)
Thus the limit $`R_B\to 0`$, at fixed $`\rho `$, corresponds to decompactification of the M theory torus, while preserving its shape. Conversely, the limit $`A_M\to 0`$ corresponds to decompactification of the IIB theory circle. The duality can be explored further by matching the various $`p`$-branes in 9 dimensions that can be obtained from either the M theory or the IIB theory viewpoints. When this is done, one finds that everything matches nicely and that one deduces various relations among tensions, such as
$$T_{M5}=\frac{1}{2\pi }(T_{M2})^2.$$ (10)
This relation was used earlier when we asserted that $`T_{M2}=2\pi m_p^3`$ and $`T_{M5}=2\pi m_p^6`$. Even more interesting is the fact that the IIB theory contains an infinite family of strings labeled by pairs of relatively prime integers $`(p,q)`$. These integers correspond to string charges that are sources of the gauge fields $`B_{\mu \nu }^{(1)}`$ and $`B_{\mu \nu }^{(2)}`$. The $`(1,0)`$ string can be identified as the fundamental IIB string, while the $`(0,1)`$ string is the D-string. From this viewpoint, a $`(p,q)`$ string can be regarded as a bound state of $`p`$ fundamental strings and $`q`$ D-strings. These strings have a very simple interpretation in the dual M theory description. They correspond to an M2-brane with one of its cycles wrapped around a $`(p,q)`$ cycle of the torus. The minimal length of such a cycle is proportional to $`|p+q\tau |`$, and thus (using $`\tau =\rho `$) one finds that the tension of a $`(p,q)`$ string is given by
$$T_{p,q}=2\pi |p+q\rho |m_s^2.$$ (11)
The normalization has been chosen to give $`T_{1,0}=2\pi m_s^2`$.
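Before evaluating the special cases below, here is a minimal numeric sketch of eq. (11) (tensions quoted in units of $`2\pi m_s^2`$; the value $`g_s=0.1`$ is an arbitrary illustration):

```python
g_s = 0.1
rho = complex(0.0, 1.0/g_s)            # theta = 0, so rho = i/g_s

def T(p, q):                           # (p,q) string tension / (2*pi*m_s^2), eq. (11)
    return abs(p + q*rho)

print(T(1, 0))                         # 1.0: fundamental string
print(T(0, 1))                         # 10.0 = 1/g_s: D-string
# A (1,1) string is strictly lighter than separated (1,0) and (0,1) strings,
# so it is a true bound state:
print(T(1, 1), T(1, 0) + T(0, 1))      # 10.0499... < 11.0
# For charges with a common divisor the tension is exactly additive:
print(T(2, 0), 2*T(1, 0))              # 2.0 == 2.0 (threshold case)
```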
Then (for $`\theta =0`$) $`T_{0,1}=2\pi m_s^2/g_s`$, as expected. Note that decay is kinematically forbidden by charge conservation when $`p`$ and $`q`$ are relatively prime. When they have a common divisor $`n`$, the tension is the same as that of an $`n`$-string system. Whether or not there are threshold bound states is a nontrivial dynamical question, which has different answers in different settings. In the present case there are no such bound states, which is why $`p`$ and $`q`$ should be relatively prime. Imagine that you lived in the 9-dimensional world that is described equivalently as M theory compactified on a torus or as the type IIB superstring theory compactified on a circle. Suppose, moreover, you had very high energy accelerators with which you were going to determine the “true” dimension of spacetime. Would you conclude that 10 or 11 is the correct answer? If either $`A_M`$ or $`R_B`$ was very large in Planck units there would be a natural choice, of course. But how could you decide otherwise? The answer is that either viewpoint is equally valid. What determines which choice you make is which of the massless fields you regard as “internal” components of the metric tensor and which ones you regard as matter fields. Fields that are metric components in one description correspond to matter fields in the dual one.
## 5 The D3-Brane and $`𝒩=4`$ Gauge Theory
The $`U(n)`$ gauge theory associated with a stack of $`n`$ D$`p`$-branes has maximal supersymmetry (16 supercharges). The low-energy effective theory, when the brane separations are small compared to the string scale, is supersymmetric Yang–Mills theory. These theories can be constructed by dimensional reduction of 10d supersymmetric $`U(n)`$ gauge theory to $`p+1`$ dimensions. In fact, that is how they originally were constructed. For $`p\le 3`$, the low-energy effective theory is renormalizable and defines a consistent quantum theory. For $`p=4,5`$ there is good evidence for the existence of nongravitational quantum theories that reduce to the gauge theory in the infrared. For $`p\ge 6`$, it appears that there is no decoupled nongravitational quantum theory. A case of particular interest, which we shall now focus on, is $`p=3`$. A stack of $`n`$ D3-branes in type IIB superstring theory has a decoupled $`𝒩=4,`$ $`d=4`$ $`U(n)`$ gauge theory associated to it. This gauge theory has a number of special features. For one thing, due to boson–fermion cancellations, there are no UV divergences at any order of perturbation theory. The beta function $`\beta (g)`$ is identically zero, which implies that the theory is scale invariant (aside from scales introduced by vevs of the scalar fields). In fact, $`𝒩=4,`$ $`d=4`$ gauge theories are conformally invariant. The conformal invariance combines with the supersymmetry to give a superconformal symmetry, which contains 32 fermionic generators. Half are the ordinary linearly realized supersymmetries, and half are nonlinearly realized ones associated to the conformal symmetry. The name of the superconformal group in this case is $`SU(2,2|4)`$. Another important property of $`𝒩=4`$, $`d=4`$ gauge theories is electric–magnetic duality. This extends to an $`SL(2,Z)`$ group of dualities.
To understand these it is necessary to include a vacuum angle $`\theta _{YM}`$ and define a complex coupling
$$\tau =\frac{\theta _{YM}}{2\pi }+i\frac{4\pi }{g_{YM}^2}.$$ (12)
Under $`SL(2,Z)`$ transformations this coupling transforms in the usual nonlinear fashion $`\left(\tau \to \frac{a\tau +b}{c\tau +d}\right)`$ and the electric and magnetic fields transform as a doublet. Note that the conformal invariance ensures that $`\tau `$ is a meaningful scale-independent constant. Now consider the $`𝒩=4`$ $`U(n)`$ gauge theory associated to a stack of $`n`$ D3-branes in type IIB superstring theory. There is an obvious identification, which turns out to be correct. Namely, the $`SL(2,Z)`$ duality of the gauge theory is induced from that of the ambient type IIB superstring theory. In particular, the $`\tau `$ parameter of the gauge theory is the value of the complex scalar field $`\rho `$ of the string theory. This makes sense because $`\rho `$ is constant in the field configuration associated to a stack of D3-branes. The D3-branes themselves are invariant under $`SL(2,Z)`$ transformations. Only the parameter $`\tau =\rho `$ changes, but it is transformed to an equivalent value. All other fields, such as $`B_{\mu \nu }^{(i)}`$, which are not invariant, vanish in this case. As we have said, a fundamental $`(1,0)`$ string can end on a D3-brane. But by applying a suitable $`SL(2,Z)`$ transformation, this configuration is transformed to one in which a $`(p,q)`$ string—with $`p`$ and $`q`$ relatively prime—ends on the D3-brane. The charge on the end of this string describes a dyon with electric charge $`p`$ and magnetic charge $`q`$, with respect to the appropriate gauge field. More generally, for a stack of $`n`$ D3-branes, any pair can be connected by a $`(p,q)`$ string. The mass is proportional to the length of the string times its tension, which we saw is proportional to $`|p+q\rho |`$. In this way one sees that the electrically charged particles, described by fundamental fields, belong to infinite $`SL(2,Z)`$ multiplets. The other states are nonperturbative excitations of the gauge theory. The field configurations that describe them preserve half of the supersymmetry. As a result their masses saturate a BPS bound and are given exactly by the considerations described above.
### 5.1 Three String Junctions
An interesting question, whose answer was unknown until recently, is whether $`𝒩=4`$ gauge theories in four dimensions also admit nonperturbative excitations that preserve 1/4 of the supersymmetry. To explain the answer, it is necessary to first make a digression to consider three-string junctions. As we have seen, type IIB superstring theory contains an infinite multiplet of strings labeled by a pair of relatively prime integers $`(p,q)`$. Three strings, with charges $`(p_i,q_i),`$ $`i=1,2,3,`$ can meet at a point provided that charge is conserved. This means that
$$\sum _ip_i=\sum _iq_i=0,$$ (13)
if the three strings are all oriented inwards. (This is like momentum conservation in an ordinary Feynman diagram.) Such a configuration is stable, and preserves 1/4 of the ambient supersymmetry provided that the tensions balance. It is easy to see how this can be achieved. If one regards the plane of the junction as a complex plane and orients the direction of a $`(p,q)`$ string by the phase of $`p+q\tau `$, then eqs. (11) and (13) ensure a force balance. The three-string junction has an interesting dual M theory interpretation.
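Before turning to the M theory interpretation, the force balance just described can be made explicit in a few lines. The key point is that a prong's tension times its unit direction vector is just the complex number $`p+q\tau `$ itself, so eq. (13) makes the vector sum vanish identically (a minimal sketch; the value of $`\tau `$ is arbitrary):

```python
tau = complex(0.2, 1.7)                    # any modulus with Im(tau) > 0

# Three prongs, oriented inwards; charges satisfy eq. (13).
charges = [(1, 0), (0, 1), (-1, -1)]
assert sum(p for p, q in charges) == 0
assert sum(q for p, q in charges) == 0

# Each prong pulls with force T_{p,q} * (unit vector along p + q*tau),
# which is |p+q*tau| * (p+q*tau)/|p+q*tau| = p + q*tau.
net_force = sum(p + q*tau for p, q in charges)
print(abs(net_force))                      # 0.0: the junction is balanced
```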
If one of the directions perpendicular to the plane of the junction is taken to be a circle, then we have a string junction in nine dimensions. This must have a dual interpretation in terms of M theory compactified on a torus. We have already seen that a $`(p,q)`$ string corresponds to an M2-brane with one of its cycles wrapped on a $`(p,q)`$ cycle of the torus. So now we join three such cylindrical membranes together. Altogether we have a single smooth M2-brane forming a $`Y`$, like a junction of pipes. The three arms are wrapped on $`(p_i,q_i)`$ cycles of the torus. This is only possible topologically when eq. (13) is satisfied. We can now describe a pretty construction of 1/4 BPS states in $`𝒩=4`$ gauge theory, due to Bergman. Such a state is described by a 3-string junction, with the three prongs terminating on three different D3-branes. This is only possible for $`n\ge 3`$, which is a necessary condition for 1/4 BPS states. The mass of such a state is given by summing the lengths of each string segment weighted by its tension. This gives a result in agreement with the BPS formula. Clearly this is just the beginning of a long story, since the simple picture we have described can be generalized to arbitrarily complicated string webs. So long as the web is in a plane, charges are conserved at the junctions, and all string segments are oriented in the way we have described, the configuration will be 1/4 BPS. Remarkably, arbitrarily high spins can occur. There are simple rules for determining them. There are also related results in $`𝒩=2`$ (Seiberg–Witten) gauge theory. When the web is nonplanar, supersymmetry is completely broken, and reliable mass calculations become difficult. However, one should still be able to achieve a reliable qualitative understanding of such excitations. In general, there are regions of moduli space in which such nonsupersymmetric states are stable.
## 6 Introductory Remarks on AdS/CFT Duality
Maldacena conjectured a remarkable duality between superstring theory or M theory in a suitable anti de Sitter space background and conformally invariant field theories. (Some relevant prior papers are listed in ref. .) These are dualities in the usual sense: namely, when one description is weakly coupled the corresponding dual one is strongly coupled. Thus, assuming the conjecture is true, it allows the use of perturbative methods to learn nontrivial facts about the strongly coupled dual theory. This subject has developed with breathtaking speed: Maldacena’s paper appeared in November 1997, yet by the Strings 98 conference seven months later,<sup>7</sup> (all the talks, including audio, are available at http://www.itp.ucsb.edu/strings98/), more than half the invited speakers chose to speak on this subject. What I propose to do here is to introduce some of the basic ideas of the subject. I will not attempt to be very detailed or precise. Maldacena arrived at his conjecture by considering the spacetime geometry in the vicinity of a large number ($`N`$) of coincident $`p`$-branes. The three basic examples of AdS/CFT duality with maximal supersymmetry are provided by taking the $`p`$-branes to be either M2-branes, D3-branes, or M5-branes. Then the corresponding world volume theories (in 3, 4, or 6 dimensions) have superconformal symmetry. They are conjectured to be dual to M theory or type IIB superstring theory in a spacetime geometry that is $`AdS_4\times S^7`$, $`AdS_5\times S^5`$, or $`AdS_7\times S^4`$.
The background also has nonvanishing gauge fields with $`N`$ units of flux on the sphere. All three of these solutions to 11d supergravity or type IIB supergravity were studied over a decade ago, but the duality conjecture is new. Since I am not trying to be comprehensive, only the case of coincident D3-branes will be described. However, in order to explain what is special about the case $`p=3`$, we will begin by considering $`N`$ coincident D$`p`$-branes. This is a type IIA configuration if $`p`$ is even and a type IIB configuration if $`p`$ is odd. As we have discussed, the $`(p+1)`$-dimensional world-volume theory (in the infrared) is a maximally supersymmetric $`U(N)`$ gauge theory. The low-energy effective action is given by dimensional reduction of supersymmetric $`U(N)`$ gauge theory in 10 dimensions. The coupling constant $`g_{YM}`$ of such a gauge theory has dimensions (length)<sup>(p-3)/2</sup>. It is related to the dimensionless string coupling constant $`g_s`$ of the ambient 10d theory by
$$g_{YM}^2=g_s(\ell _s)^{p-3}.$$
Of course, the dimensionless effective coupling is scale dependent. At an energy scale $`E`$ it is given by
$$g_{eff}^2(E)=g_{YM}^2NE^{3-p}.$$
Thus perturbation theory applies in the UV for $`p<3`$ and in the IR for $`p>3`$. The special case $`p=3`$ corresponds to $`𝒩=4`$ super Yang–Mills theory in four dimensions, which is known to be a finite, conformally invariant field theory. As solutions of type II supergravity, an extremal system of $`N`$ coincident D$`p`$-branes has a string-frame metric
$$ds^2=f^{-1/2}(ds^2)_d+f^{1/2}(ds^2)_{10-d},$$ (14)
dilaton
$$e^{2\varphi }=g_s^2f^{\frac{3-p}{2}},$$ (15)
and RR gauge field
$$C_{01\mathrm{}p}=f^{-1}-1,$$ (16)
where $`d=p+1`$ and
$$(ds^2)_d=-dt^2+dx_1^2+\mathrm{}+dx_p^2$$
$$(ds^2)_{10-d}=dx_{p+1}^2+\mathrm{}+dx_9^2=dr^2+r^2d\mathrm{\Omega }_{8-p}^2$$
$$f=1+\frac{g_{YM}^2N}{\ell _s^4U^{7-p}}$$
$$U=r/\ell _s^2.$$
The variable $`U\sim Tr`$ is essentially the energy of a string stretched between the D-branes at $`r=0`$ and a point a distance $`r`$ from the D-branes. The surface $`r=0`$ is the horizon of this geometry, which can be regarded as the location of the D-branes. The key step in Maldacena’s analysis is to isolate the behavior of the near-horizon geometry by letting $`\ell _s\to 0`$ while holding $`U`$ fixed. In this limit one finds that
$$ds^2\to \ell _s^2\left\{\frac{1}{\sqrt{\lambda }}U^{\frac{7-p}{2}}(dx^2)_d+\sqrt{\lambda }U^{\frac{p-7}{2}}dU^2+\sqrt{\lambda }U^{\frac{p-3}{2}}d\mathrm{\Omega }_{8-p}^2\right\},$$ (17)
where
$$\lambda =g_{YM}^2N.$$ (18)
Also,
$$e^\varphi \sim \frac{1}{N}\left[g_{eff}(U)\right]^{\frac{7-p}{2}}.$$
From these equations we see that the string is weakly coupled for $`g_{eff}^2(U)\ll N^{4/(7-p)}`$ and that the curvature is of order $`[\ell _s^2g_{eff}(U)]^{-1}`$. Thus the supergravity approximation is good (i.e., stringy effects are negligible) for $`g_{eff}^2(U)\gg 1`$. Taking $`p<6`$ and requiring both of these inequalities gives the requirement $`N\gg 1`$. There is much more that can be said for each specific value of $`p`$. However, we will focus on the special case $`p=3`$ from now on, and refer the reader to ref. for a discussion of the other cases. Taking $`p=3`$, the near-horizon metric in eq.
(17) simplifies to
$$ds^2=\ell _s^2\left\{\frac{1}{\sqrt{\lambda }}U^2(dx^2)_4+\frac{\sqrt{\lambda }}{U^2}dU^2+\sqrt{\lambda }d\mathrm{\Omega }_5^2\right\},$$ (19)
and the dilaton is constant:
$$e^\varphi =g_s.$$
Making the change of variables $`z=\sqrt{\lambda }/U`$, the metric takes the form
$$ds^2=\ell _s^2\sqrt{\lambda }\left\{\frac{(dx^2)_4+dz^2}{z^2}+d\mathrm{\Omega }_5^2\right\}.$$ (20)
This describes the product-space geometry $`AdS_5\times S^5`$, where both factors have radius
$$R=\lambda ^{1/4}\ell _s.$$
We see that stringy effects are suppressed for $`\lambda \gg 1`$. Quantum corrections are small for $`N\gg 1`$, since the Planck length is given by $`\ell _p=g_s^{1/4}\ell _s`$ and $`\lambda =g_sN`$. The value of the RR gauge field corresponds to the self-dual field strength five-form
$$F_5\sim N\left((\mathrm{vol})_{AdS_5}+(\mathrm{vol})_{S^5}\right).$$
In particular the flux $`\int _{S^5}F_5\sim N`$. The $`AdS_5\times S^5`$ metric has isometries
$$SO(4,2)\times SO(6)\approx SU(2,2)\times SU(4).$$
Including the supersymmetries, the complete isometry supergroup is $`SU(2,2|4)`$. This contains 32 supercharges transforming as $`(4,4)+(\overline{4},\overline{4})`$.
### 6.1 The Conjecture
Maldacena’s conjecture is that type IIB superstring theory in the $`AdS_5\times S^5`$ background described above is dual to $`𝒩=4`$, $`d=4`$ super Yang–Mills theory with gauge group $`SU(N)`$. This is plausible because this theory is associated to a system of $`N`$ coincident D3-branes, as we have explained. The passage from $`U(N)`$ to $`SU(N)`$ is a technical detail that I will not attempt to explain.<sup>8</sup> (For a discussion of this point, and how it is reconciled with the SL(2,Z) duality, see ref. .) The duality incorporates the following correspondences:
* The (large) integer $`N`$ gives the rank of the gauge group, which corresponds to the flux of the five-form RR gauge field.
* The YM coupling constant $`g_{YM}`$ is related to the string coupling constant by $`g_{YM}^2=g_s`$. The fact that $`g_{YM}`$ does not depend on energy scale corresponds to the fact that the dilaton is constant.
* The supergroup $`SU(2,2|4)`$ is the isometry group of the superstring theory background and it is the superconformal symmetry group of the $`𝒩=4`$ gauge theory. In the gauge theory, 16 of the fermionic generators are linearly realized supersymmetries and the other 16 generate superconformal transformations.
* The common radius $`R`$ of the $`AdS_5`$ and $`S^5`$ geometries is related to the ’t Hooft parameter $`\lambda =g_{YM}^2N`$ of the gauge theory by $`R/\ell _s=\lambda ^{1/4}`$.
As a side remark, let me point out that the $`AdS_5\times S^5`$ metric is conformally flat, because the two radii are equal. As a result, in addition to its isometries it has additional conformal isometries. From the point of view of the dual gauge theory these correspond to the 24 ten-dimensional Lorentz transformations that are broken by dimensional reduction from $`d=10`$ to $`d=4`$. They have no analogs for the M theory backgrounds $`AdS_4\times S^7`$ and $`AdS_7\times S^4`$, because the radii are unequal in these cases.
### 6.2 The Structure of Anti de Sitter Space
In the preceding we have presented $`AdS_5`$ in Poincaré coordinates. In these coordinates, $`AdS_{d+1}`$ is given by
$$ds^2=\frac{1}{z^2}\left((dx^2)_d+dz^2\right),z\ge 0.$$ (21)
The boundary at spatial infinity ($`r\to \mathrm{\infty }`$) corresponds to $`z=0`$.
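The regimes of validity can be made concrete with a few numbers; a minimal sketch (the parameter values are arbitrary illustrations, not fits). For general $`p`$ the supergravity description needs $`1\ll g_{eff}^2(U)\ll N^{4/(7-p)}`$, while for $`p=3`$ the window is scale independent and requires only $`\lambda \gg 1`$ and $`N\gg 1`$:

```python
# Validity window of the supergravity description for N coincident Dp-branes.
def sugra_ok(p, lam, N, U):
    g2 = lam * U**(p - 3)          # g_eff^2 at scale U (string units)
    return 1.0 < g2 < N**(4.0/(7 - p))

# p = 2 example: validity depends on the scale U
for U in (0.1, 1.0, 10.0, 100.0):
    print(U, sugra_ok(2, lam=1e3, N=1e4, U=U))   # False, True, True, True

# p = 3: AdS5 x S5 radii for illustrative N and g_s
g_s, N = 0.1, 1000
lam = g_s * N                      # 't Hooft parameter = 100
print(lam**0.25)                   # R/l_s ~ 3.2: alpha' corrections controlled by lambda
print(lam**0.25 / g_s**0.25)       # R/l_p = N^(1/4) ~ 5.6: quantum corrections by N
```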
The AdS/CFT duality is holographic in the sense that the physics of the $`(d+1)`$-dimensional bulk is encoded in the $`d`$-dimensional boundary gauge theory. But how does this hologram work? The basic idea is that the $`x^\mu `$ coordinates of a point in the bulk correspond to the $`x^\mu `$ position in the field theory, whereas the $`z=\sqrt{\lambda }/U`$ coordinate corresponds to taking the field theory to have an energy scale (in the Wilsonian sense) $`E=U`$. One fact in support of the identification $`E\sim U`$ is that the bulk isometry $`(x^\mu ,z)\to (ax^\mu ,az)`$ corresponds to the field theory scale transformation $`(x^\mu ,E)\to (ax^\mu ,E/a)`$. This argument does not establish the constant of proportionality, however. An argument that achieves this is the identification of D-instantons in the bulk with YM instantons of the gauge theory. It turns out that the $`z`$ coordinate of the D-instanton corresponds to the scale size of the YM instanton. This suggests the identification $`U=\sqrt{\lambda }E`$. It is a somewhat puzzling fact that this argument gives the factor of $`\sqrt{\lambda }`$, whereas the “stretched string” picture, discussed earlier, does not. Poincaré coordinates do not give a complete description of the Lorentzian $`AdS_{d+1}`$ spacetime. To understand this, it is useful to consider $`AdS_{d+1}`$ as a hypersurface embedded in a flat $`(d+2)`$-dimensional space with two time directions:
$$x_1^2+\mathrm{}+x_d^2-t_1^2-t_2^2=-R^2=-1.$$
In the last step the radius has been set equal to one, for convenience. Next, we pass to spherical coordinates for both the $`x`$’s and the $`t`$’s:
$$(x_1,\mathrm{},x_d)\to (r,\mathrm{\Omega }_p)$$
$$(t_1,t_2)\to (\tau ,\theta ).$$
In these coordinates the hypersurface is $`r^2-\tau ^2=-1`$, and the metric on this surface is
$$ds^2=\sum _idx_i^2-\sum _jdt_j^2=\frac{dr^2}{1+r^2}+r^2d\mathrm{\Omega }_p^2-(1+r^2)d\theta ^2.$$ (22)
Note that the time-like coordinate $`\theta `$ is periodic! This would imply that the conjugate energy eigenvalues are quantized. This is definitely not what type IIB superstring theory on $`AdS_5\times S^5`$ gives, so we must pass to the covering space CAdS. Strictly speaking, one should speak of “CAdS/CFT duality.” So we replace $`\theta \in S^1`$ by $`t\in R`$. This gives a global description of the desired spacetime geometry. Letting $`r=\mathrm{tan}\rho `$, the metric becomes
$$ds^2=\frac{1}{\mathrm{cos}^2\rho }(d\rho ^2+\mathrm{sin}^2\rho d\mathrm{\Omega }_p^2-dt^2).$$ (23)
This has topology $`B_{p+1}\times R`$, which can be visualized as a solid cylinder. The $`R`$ factor corresponds to the global time coordinate $`t`$, and $`B_{p+1}`$ is the ball interior to $`S^p`$. The boundary of the spacetime (at $`\rho =\pi /2`$) is $`S^p\times R`$. Thus the spatial coordinates of the dual gauge theory should be compactified on a sphere $`S^p`$. (The conformal symmetry also requires adjoining the point at infinity.) The $`S^p\times R`$ geometry makes the $`SO(p+1)\times SO(2)`$ subgroup of the $`SO(p+1,2)`$ conformal group manifest. Because of the conformal invariance, the radius of the $`S^p`$ is unphysical — there is no scale. One recovers Minkowski spacetime by letting it become infinite. The relation between the coordinates introduced here and the Poincaré coordinates given earlier is
$$(z,y^\mu )=\left((t_1+x_d)^{-1},t_2z,x_iz\right).$$
For many purposes it is useful to consider the “Euclideanized” AdS geometry. This can be obtained by Wick-rotating the $`t_2`$ coordinate:
$$x_1^2+\mathrm{}+x_d^2-t_1^2+t_2^2=-1.$$ (24)
The symmetry is now $`SO(1,d+1)`$.
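As an aside, the trigonometric substitution $`r=\mathrm{tan}\rho `$ leading to eq. (23) is easy to verify mechanically; a minimal sympy sketch:

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
r = sp.tan(rho)
dr = sp.diff(r, rho)                   # dr = sec^2(rho) d(rho)

# Each piece of metric (22) should become the corresponding piece of (23),
# i.e. pick up an overall 1/cos^2(rho):
print(sp.simplify(dr**2/(1 + r**2) - 1/sp.cos(rho)**2))      # 0 (radial piece)
print(sp.simplify(r**2 - sp.sin(rho)**2/sp.cos(rho)**2))     # 0 (angular piece)
print(sp.simplify((1 + r**2) - 1/sp.cos(rho)**2))            # 0 (time piece)
```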
This Euclideanized manifold should not be confused with Lorentzian-signature de Sitter space, which would have $`+1`$ on the right-hand side of eq. (24). As before, this manifold can be described in Poincaré coordinates by
$$ds^2=\frac{1}{z^2}(dz^2+(dy^2)_d),$$ (25)
where now
$$(dy^2)_d=dy_1^2+\mathrm{}+dy_d^2.$$
Unlike the Lorentzian case, these coordinates describe the space globally. They give a description that is equivalent to the one given by the metric
$$ds^2=dr^2+\mathrm{sinh}^2rd\mathrm{\Omega }_d^2,$$ (26)
which is the analog of eq. (23). Another equivalent metric is
$$ds^2=\frac{4\sum _{i=1}^{d+1}dx_i^2}{\left(1-\sum _ix_i^2\right)^2},$$ (27)
where $`\sum _ix_i^2\le 1`$. The latter form shows that the topology is that of a ball $`B_{d+1}`$ whose boundary is a sphere $`S^d`$. Thus the dual Euclideanized gauge theory should be compactified on a sphere — $`S^4`$ for our main example. In this case the SO(5) subgroup of the SO(5,1) conformal group is manifest. Of course, conformal symmetry allows one to let the radius go to infinity. The AdS/CFT conjecture has been made more precise by Gubser, Klebanov, and Polyakov and by Witten. They give an explicit prescription for relating correlation functions of the Euclideanized conformal field theory to the bulk theory path integral for specified boundary behavior of the bulk fields. I will not spell out the prescription carefully here, but simply remark that it requires a one-to-one correspondence of bulk fields $`\varphi `$ and gauge-invariant boundary operators $`𝒪`$. Denoting boundary values of $`\varphi `$ by $`\varphi _0`$, one computes the bulk theory path integral with these boundary values, $`Z(\varphi _0)`$. Then this is identified with the correlation function $`\langle \mathrm{exp}\int _{S^d}\varphi _0𝒪\rangle _{CFT}`$. (I am ignoring lots of technical details here.) The requisite correspondences have been verified for large classes of examples. The CAdS/CFT duality for Lorentzian signature entails new issues that have been considered in ref. . The boundary value problem in this case no longer has unique solutions, because one can add normalizable (propagating) modes. The conclusion of ref. , if I understand it properly, is that non-normalizable bulk modes correspond to backgrounds that couple to gauge-invariant local operators of the boundary gauge theory, as in the Euclidean case. In addition, the normalizable modes correspond to localized fluctuations of the gauge theory. The latter do not appear in the Euclidean case, so this identification goes beyond those proposed in refs. .
### 6.3 Finite Temperature
The passage to finite temperature is evident. One starts with the Euclideanized theory described above and takes the time coordinate to have period $`\beta `$, the inverse temperature. Fermi fields are required to be antiperiodic, as usual, which breaks supersymmetry. The topology of the boundary CFT in this case is $`S^p\times S^1`$. Witten observes that there are two different choices for the topology of the bulk that would give this boundary. The one suggested by the zero-temperature analysis is $`B_{p+1}\times S^1`$, and an alternative possibility is $`S^p\times B_2`$. The first choice corresponds to $`AdS_{d+1}`$ at finite temperature and the second to a (Euclidean) AdS–Schwarzschild black hole. By comparing the action for the two, Witten argues that there is a phase transition for $`N\to \mathrm{\infty }`$. The low temperature phase exhibits confinement and a mass gap, whereas the high temperature phase has deconfinement. This is roughly the picture one expects for QCD.
## 7 Concluding Comments I have touched on some of the highlights in the remarkable development of superstring theory that has taken place in the past few years. It continues to amaze me that the rapid pace of progress is being maintained over such an extended period. Certainly, the implications of the AdS/CFT duality are still being digested, so it will continue for a while longer. In the brief introduction to this topic in the preceding section, I have not mentioned many of the applications and generalizations that have already been worked out. This work already makes it clear that this duality will teach us a great deal about strongly coupled gauge theories — in particular their large N master fields. It is less clear, but likely, that it will also enhance our understanding of nonperturbative string theory. Even so, I have the feeling that qualitatively new insights are still required to properly address the issue of the cosmological constant and the stabilization of moduli.
# A Model for Luminous and Long Duration Cosmic Gamma Ray Bursts
## Abstract
We present here a simple and generic model for the luminous ($`Q_\gamma \gtrsim 10^{51}`$ erg) and long duration ($`t_\gamma \gtrsim 10`$ s) Gamma Ray Bursts (GRBs) based on the fundamental fact that the General Theory of Relativity (GTR) suggests the existence of Ultra Compact Objects (UCOs) having surface gravitational red-shift $`z_s\approx 0.615`$ even when the most stringent constraint is imposed on the equation of state. This simple model may explain the genesis of an electromagnetic fireball (FB) of energy as high as $`Q_{FB}\approx 5\times 10^{53}`$ erg and an initial bulk Lorentz factor as high as $`\eta \sim 10^3`$. PACS: 98.70.Rz, 97.60.-s

It is now clear that a large number of Gamma Ray Bursts (GRBs) involve emission of $`\gamma `$-rays as large as $`Q_\gamma \sim 10^{52}`$–$`10^{53}`$ erg under the condition of isotropy. Afterglow observations of GRB970228, 970508 and 980703 show that they indeed have quasi-spherical morphology. In fact, if GRB990123 were also isotropic, one would infer a value of $`Q_\gamma \approx 3.4\times 10^{54}`$ erg! However, in this paper we shall not consider the unique case of GRB990123, which might be anisotropic, and focus attention on the (other) most luminous events recorded so far. We explain them below as events related to the formation of UCOs whose existence is suggested by GTR irrespective of the details of the EOS of the collapsing matter. Since this is a spherical model, unlike the irregular non-spherical models, the liberated energy will be in the form of photons and neutrinos alone, and not in gravitational radiation. As was first shown by Schwarzschild in 1916, GTR yields an absolute upper limit on the value of the surface gravitational redshift of a static relativistic spherical star irrespective of the details of the Equation of State (EOS):
$$z_s=\left(1-\frac{2GM(R)}{Rc^2}\right)^{-1/2}-1\le z_c=2$$ (1)
Here the subscript “s” refers to the respective “surface” values, $`R`$ is the invariant circumference radius, $`c`$ is the speed of light, and $`M`$ is the gravitational mass enclosed within $`r=R`$:
$$M(R)=\int _0^R\rho dV=\int _0^RdM$$ (2)
where $`\rho `$ is the total mass-energy density, $`dV=4\pi R^2dR`$ is the coordinate volume element, and the symbol $`dM`$ is self-explanatory. Schwarzschild obtained this limit for homogeneous stars by demanding that the central pressure of the star does not blow up. This result is actually valid even for non-homogeneous stars (see pp. 333 of Weinberg) and is obtained when the EOS is allowed to have a causality violating sound speed $`c_s=(dp/d\rho )^{1/2}>c`$. When the EOS is constrained to obey causality, it follows from Eq. (9.5.19) of Shapiro & Teukolsky (pp. 261) that one would have a tighter limit $`z_c=1.22`$. If one constrains the EOS further so as to have $`c_s\le c/\sqrt{3}`$, it follows that one has an even tighter bound $`z_c=0.615`$. To present a realistic model, in the following we shall work with the tightest GTR and EOS bound, $`z_s=z_c=0.615`$. It may also be noted that this limit on $`z_s`$ is independent of the precise value of $`M`$ itself. Thus, this limit may be applicable both to stellar mass compact objects like Neutron Stars (NSs) and even to supermassive stars, and, hence, it was debated after the discovery of quasars whether their redshifts were of gravitational origin. Note that the presumed canonical NS has a value of $`M\approx 1M_{\odot }`$ and $`R\approx 10`$ km with $`2GM/Rc^2\approx 0.26`$ and $`z_s\approx 0.16`$.
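The compactness–redshift numbers quoted here and below follow directly from eq. (1); a minimal check:

```python
# Surface redshift from compactness, eq. (1).
def z_s(x):                       # x = 2GM/(R c^2)
    return (1.0 - x)**-0.5 - 1.0

print(z_s(8.0/9.0))               # 2.0  : Schwarzschild's absolute bound z_c = 2
print(z_s(0.26))                  # 0.157: canonical NS (M ~ 1 Msun, R ~ 10 km)
print(z_s(0.617))                 # 0.615: the causal c_s <= c/sqrt(3) bound
```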
However, many existing EOSs actually allow a value of $`M=(2\text{–}3)M_{\odot }`$ and $`R\approx 7`$ km quite easily (for degenerate matter, there is an inverse relationship between $`M`$ and $`R`$), and this may result in a value of $`2GM/Rc^2\approx 0.63`$ or $`z_s\approx 1.0`$. Thus, we can very well have an UCO in lieu of a so-called NS. And our UCO is nothing but a NS having a compactness higher than the canonical (i.e., assumed) value, very much allowed not only by GTR but also by all existing EOSs. So, all that GTR tells us here is that for a non-trivial ($`p\ne 0`$) EOS, the gravitational collapse process may end with static objects having $`z\le z_c`$. Beyond, for $`z>z_c`$, there would be no stable static configuration. However, light can escape from the collapsing body and the collapse is, in principle, reversible, until one crosses a deadline of $`2GM/Rc^2=1`$ or $`z=\mathrm{\infty }`$, when the collapse becomes irreversible. However, because of inaccuracies, the present day numerical computations have a blurred vision of $`z_c`$, and further, erroneously, they conclude that the collapse becomes irreversible immediately after $`z=z_c\approx 0.16`$. Recall that the 10 GTR collapse equations form a set of highly complicated coupled non-linear partial differential equations. Even numerically, they can be solved to obtain a unique result only for homogeneous dust ($`p=0`$). Other cases might also be solved up to a certain extent, but there could be hundreds of solutions depending on a number of explicit or implicit assumptions one makes, like self-similarity, adiabaticity, a polytropic EOS, the variation of the polytropic index, the nature of opacities, radiation transport properties, etc. Equally important is the question of the initial conditions one explicitly or implicitly assumes. And then, depending on the expectations, one might get the desired result. The genesis of a high $`z_s`$ object would be marked by emission of an energy flux $`Q\sim Mc^2`$, and then it becomes practically impossible to handle the most complex coupled energy transport problem in a precise manner. By definition, in such cases one is required to work in the strong gravity limit, where most of the inherent assumptions break down. For stellar mass objects, although the high density cold EOS is known with relatively more certainty, our knowledge about the finite temperature EOS of nuclear matter at arbitrarily high $`T`$ is, at present, in its infancy. Also note that, for numerical computations, for a total accumulated uncertainty of a few percent, arising from either present theoretical inputs or intrinsic simplifications, a potential result like $`2GM/Rc^2\approx 8/9`$ (corresponding to $`z_s\approx 2`$, with finite gravitational acceleration) may precipitate to a “$`2GM/Rc^2\to 1`$” ($`z\to \mathrm{\infty }`$ with infinite gravitational acceleration) result, signalling the apparent formation of an early “event horizon” or a “trapped surface”. Even if we consider the infinitely simpler problem of the collapse of an inhomogeneous dust, there could be varied numerical results, and, in particular, there is a raging debate whether such collapse gives rise to a Black Hole or a “naked singularity”. Such gross uncertainties may, at present, obfuscate the signals of the formation of more compact NSs. In any case, as discussed before, both the nuclear EOSs and GTR actually suggest the existence of more compact NSs.
The self-gravitational energy of a static relativistic star is given by
$$E_g=\int \rho c^2dV\left\{1-\left[1-\frac{2GM(R)}{Rc^2}\right]^{1/2}\right\}$$ (3)
Then, recalling the definition of $`z`$ from Eq. (1), we may write
$$E_g=\int z(R)c^2dM=\alpha z_sMc^2\le z_sMc^2$$ (4)
where $`\alpha \le 1`$ is a model dependent parameter. The binding energy, i.e., the energy liberated in the formation of the eventually cold stellar mass compact object, is given by the virial theorem to be $`E_B\approx (1/2)E_g`$. Most of this binding energy is expected to be radiated in the form of $`\nu \overline{\nu }`$ during the final stages of formation of the UCO:
$$Q_\nu \approx E_B\approx \frac{z_sMc^2}{2}$$ (5)
So, given the most restricted limit $`z_c=0.615`$, the maximum value of $`Q_\nu \approx 0.6M_{\odot }c^2M_2\approx 1.2\times 10^{54}M_2`$ erg, where $`M=M_2\times 2M_{\odot }`$. This is in agreement with our similar previous crude estimate. The value of $`Q_\nu `$ measured near the compact object will be higher by a factor $`(1+z_s)`$: $`Q_\nu ^{\prime }=z_s(1+z_s)M/2`$ (now we set $`c=1`$). For the NS-formation case, the neutrinos diffuse out of the hot core in a time $`t_\nu <10`$ s, and we may expect a somewhat longer time scale for the diffusion of neutrinos from the nascent hot UCO. However, note here that the rather long value of $`t_\nu <10`$ s occurs because of coherent scattering of neutrinos by the heavy (Fe) nuclei. If the Fe nuclei are already partially dissociated by an immediately preceding heating, the rise in the value of $`t_\nu `$ for the UCO formation need not be much larger. And the locally measured duration of the burst would be $`t_\nu ^{\prime }=(1+z_s)^{-1}t_\nu `$. Therefore, the mean (local) $`\nu \overline{\nu }`$ luminosity will be
$$L_\nu ^{\prime }=\frac{Q_\nu ^{\prime }}{t_\nu ^{\prime }}=\frac{z_s(1+z_s)^2M}{2t_\nu }\approx 2\times 10^{53}z_s(1+z_s)^2M_2t_{10}^{-1}\mathrm{erg}/\mathrm{s}$$ (6)
where $`t_\nu =t_{10}\times 10`$ s. It may be noted that this value of $`L_\nu ^{\prime }`$ is well below the corresponding $`\nu `$-Eddington luminosity. The luminosity in each flavour will be $`L_i^{\prime }=(1/3)L_\nu ^{\prime }`$. By assuming the radius of the neutrinosphere to be $`R_\nu \approx R`$, the value of the effective local neutrino temperature $`T^{\prime }`$ (assumed to be the same for all the flavors) is obtained from the condition
$$L_\nu ^{\prime }=\frac{21}{8}4\pi R^2\sigma T^4$$ (7)
where $`\sigma `$ is the Stephan–Boltzmann constant. Therefore, we have
$$T^{\prime }=\left(\frac{2z_s(1+z_s)^2Mc^2}{21\pi \sigma R^2t_\nu }\right)^{1/4}\approx 13.3\mathrm{MeV}z_s^{0.25}(1+z_s)^{0.5}M_2^{0.25}R_6^{-0.5}t_{10}^{-0.25}$$ (8)
where $`R=R_6\times 10^6`$ cm. For a Fermi–Dirac distribution, the mean (local) energy of the neutrinos is $`E_\nu ^{\prime }\approx 3.15T^{\prime }\approx 48`$ MeV (for $`z_s=0.6`$). The various neutrinos will collide with their respective antiparticles to produce electromagnetic pairs by the $`\nu +\overline{\nu }\to e^++e^-`$ process. The rate of energy generation by pair production per unit volume per unit time, at a distance $`r`$ from the center of the star, is given by:
$$\dot{q}_\pm (r)=\sum _i\frac{K_{\nu i}G_F^2E_\nu ^{\prime }L_i^{\prime 2}(r)}{12\pi ^2cR_\nu ^4}\varphi (r)$$ (9)
Here, $`L_i^{\prime }(r)`$ is the $`\nu `$-luminosity of a given flavour above the $`\nu `$-sphere, $`G_F^2=5.29\times 10^{-44}\mathrm{cm}^2\mathrm{MeV}^{-2}`$ is the universal Fermi weak coupling constant squared, and $`K_{\nu i}=2.34`$ for electron neutrinos and 0.503 for muon and tau neutrinos.
Here the geometrical factor $`\varphi (r)`$ is
$$\varphi (r)=(1-x)^4(x^2+4x+5);x=[1-(R_\nu /r)^2]^{1/2}$$ (10)
Now, considering all 3 flavours, a simple numerical integration yields the local value of the pair luminosity produced above the neutrinosphere:
$$L_\pm ^{\prime }=\int _R^{\mathrm{\infty }}\dot{q}_\pm 4\pi r^2dr\approx \sum _i\frac{K_{\nu i}G_F^2E_{\nu ,l}L_{\nu ,l}^2}{27\pi cR_\nu }\approx 7\times 10^{51}z_s^{2.25}(1+z_s)^{4.5}M_2^{2.5}t_{10}^{-2.5}R_6^{-2}\mathrm{erg}/\mathrm{s}$$ (11)
This estimate is obtained by assuming rectilinear propagation of neutrinos near the UCO. Actually, in the strong gravitational field near the UCO surface the neutrino orbits will be curved, with a significantly higher effective interaction cross-section. Since most of the interactions take place near the $`\nu `$-sphere, for a modest range of $`z_s`$ we may tentatively try to incorporate this nonlinear effect by inserting a $`(1+z_s)^2`$ factor in the above expression. On the other hand, the value of this electromagnetic luminosity measured by a distant observer will be smaller by a factor of $`(1+z_s)^2`$, so that eventually $`L_\pm =L_\pm ^{\prime }`$ of Eq. (11). And the total energy of the electromagnetic FB at $`\mathrm{\infty }`$ is
$$Q_{FB}=t_\nu L_\pm \approx 7\times 10^{52}z_s^{2.25}(1+z_s)^{4.5}M_2^{2.5}t_{10}^{-1.5}R_6^{-2}\mathrm{erg}$$ (12)
Thus, the efficiency for conversion of $`Q_\nu `$ into $`Q_{FB}`$ is
$$ϵ_\pm =\frac{Q_{FB}}{Q_\nu }\approx 3.3\%z_s^{1.25}(1+z_s)^{4.5}M_2^{1.5}t_{10}^{-1.5}R_6^{-2}$$ (13)
In particular, for $`z_s=0.615`$, $`M_2=1`$, $`R_6=1`$ and $`t_{10}=1`$, we obtain a large $`ϵ_\pm \approx 15.5\%`$, and it may be remembered here that the value of $`ϵ_\pm `$ should saturate to a limiting value of $`\approx 40\%`$, corresponding to a local statistical equilibrium between the 3 flavours of $`\nu ,\overline{\nu }`$ and $`e^+,e^-`$. This highest value of efficiency may be attained, for instance, for $`R=7`$ km and $`M=2.5M_{\odot }`$. Correspondingly, we obtain a highest value of $`Q_{FB}\approx 4.8\times 10^{53}M_2`$ erg in this model. And thus we may explain the energy budget of GRB971214, $`Q_\gamma \approx 3\times 10^{53}`$ erg, without overstretching any theory, making any unusual assumption, or invoking any unconfirmed exotic physics (like “strange stars”). Now we shall address the question of baryonic pollution: $`\eta =Q_{FB}/\mathrm{\Delta }M>10^2`$. In general, all models involving collision and full/partial disruption of compact object(s) will spew out thick and massive debris (ranging from a few $`M_{\odot }`$ down to $`0.1M_{\odot }`$). Part of this debris is likely to settle into a torus, and an uncertain small fraction ($`\mathrm{\Delta }M`$) may hang around the system and get accreted on a long time scale, or may even be unbound. It is practically impossible to simulate the latter fraction dynamically even in a Newtonian theory. On the other hand, spherical implosion models are completely free from the presence of such unaccountable and intractable thick collisional debris. However, in a normal SN event (assumed to be basically a spherical implosion), the ejection of baryonic mass $`\sim 0.1M_{\odot }`$ occurs probably because of a shock mediated hydrodynamic process. Since, by definition, the system is gravitationally bound, any normal hydrodynamic attempt at mass ejection cannot be very successful in a spherical model. But the shock generates additional entropy and heat in its vicinity and might be able to effect the mass ejection.
Yet the shock is constantly depleted of energy and gets stalled because of $`\nu `$-losses and the disintegration of heavy nuclei. Probably, the shock might be rejuvenated by the “shock reheating mechanism”. The energy transfer between neutrinos and matter behind the shock is mediated primarily by the charged current reactions $`\nu _e+n\to p+e^-`$ and $`\overline{\nu }_e+p\to n+e^+`$. When these reactions proceed to the right, the matter heats up, and conversely, the matter cools. To have a successful and sufficient net heating is a critical phenomenon, and present day (realistic) SN codes are unable to find the shock mediated mass-ejection (explosion) even in a relatively weak nascent-NS gravitational field. It is not surprising then that the same numerical calculations, at present, do not find the existence of UCOs, whose study involves strong gravity, a finite temperature EOS, and complex physics. Probably only for a narrow range of initial conditions and a modestly deep gravitational potential well is this mechanism of shock ejection successful. Thus the real issue is not how to explain the non-ejection of mass by direct hydrodynamic processes defying the extremely deep relativistic potential well. On the contrary, the meaningful question is how, for a weak Newtonian potential well and for a certain range of initial conditions, there could be successful hydrodynamic mass ejection. Note that an UCO with a modest value of $`z_s\approx 0.5`$ has a potential well which is $`\approx 300\%`$ deeper than the one associated with a canonical NS, $`z_s\approx 0.16`$. Again, the basic reason that a critical phenomenon like shock heated mass ejection might be successful for the SN case is that, as one moves from a relativistic potential well (high $`z_s`$) to a Newtonian well ($`z_s\lesssim 0.2`$), the local temperature due to $`\nu `$-heating may decrease only slowly ($`T^{\prime }\propto z_s^{0.25}`$) and the $`\nu `$-matter interaction cross-section $`\sigma _{\nu ,m}\propto T^2\propto z_s^{0.5}`$, while the depth of the potential well drops rapidly ($`\propto z_s`$). Even then, it is far from clear how the hydrodynamic mass ejection can really occur. In fact, there are ideas that departures from spherical symmetry induced by rotation and magnetic fields might be important in effecting the SN mass-ejection. On the other hand, there is a genuine possibility that all models of cosmological GRBs, irrespective of whether they explicitly invoke the $`\nu +\overline{\nu }\to e^++e^-`$ process or not, should involve strong direct electromagnetic or $`\nu `$-heated mass loss. Even if an unusual pulsar is assumed to emit $`10^{52}`$ erg/s rather than $`10^{38\text{–}40}`$ erg/s, the superstrong return current impinging back on the pulsar may drive a catastrophic wind, a possibility overlooked so far. On the other hand, for the thin outermost layers of the object (UCO or a hot accretion torus) emitting the neutrinos, well above the $`\nu `$-sphere, the $`\nu `$-flux $`S_\nu `$ may induce a super-Eddington photon flux $`S_{ph}`$. Also, though for a torus with uncertain, dynamically changing geometry it is difficult to make any semi-analytical or numerical estimate of such a process, in general this effect is expected to be much more pronounced because its gravitational self-binding ($`\propto z_s`$) is much weaker than that for a spherical UCO surface. And even if a steady state model calculation yields a high value of $`\eta `$, the eventual value of $`\eta `$ might be very low if the jet is intercepted by this debris.
In fact (by ignoring such unmanageable real life uncertainties and difficulties), detailed Newtonian and crude post-Newtonian calculations for the NS–NS collision case have been presented by several authors; and the conclusion is that it is difficult to understand a value of $`\eta `$ higher than a few. For non-spherical configurations, it is difficult to ensure that a small fraction of the accreted matter itself is not contaminating the FB. And the estimate of $`\mathrm{\Delta }M`$ may be made with much greater confidence only for a spherical model, where, by definition, the entire matter, in general, is moving inward. Here, a certain fraction of the matter lying above the neutrinosphere may be ejected by the $`\nu `$-heating, and it may be possible to crudely estimate the baryonic mass lying above the $`\nu `$-sphere independently of the details of the problem. The mean cross section for $`\nu _e`$-matter interaction is approximately given as
$$\sigma _{\nu ,m}\approx 9\times 10^{-44}(E_\nu ^{\prime }/1\mathrm{MeV})^2\mathrm{cm}^2$$ (14)
so that, given our range of $`E_\nu ^{\prime }\approx 40`$–$`50`$ MeV, the value of $`\sigma _{\nu ,m}\sim 10^{-40}`$ cm<sup>2</sup>. Since the $`\nu `$-optical thickness of the layer above the $`\nu `$-sphere is $`\approx 2/3`$, the surface density of this layer is $`\delta \approx 2m_p/(3\sigma _{\nu ,m})\approx 10^{16}`$ g/cm<sup>2</sup>, where $`m_p`$ is the proton rest mass. Therefore, the mass of the matter above the $`\nu `$-sphere is $`\mathrm{\Delta }M\approx 4\pi R_\nu ^2\delta \approx 10^{29}\mathrm{g}\approx 10^{-4}M_{\odot }R_6^2`$. Probably the most detailed work on this problem of $`\nu `$-driven mass ejection from a hot nascent NS is due to Duncan, Shapiro & Wasserman; Table 5 of that work shows that for $`R\approx 10^6`$ cm and $`M=2M_{\odot }`$ one has $`\mathrm{\Delta }M\approx 10^{-4}M_{\odot }`$ if $`T^{\prime }=20`$ MeV. On the other hand, for $`T^{\prime }=30`$ MeV, one has $`\mathrm{\Delta }M\approx 7\times 10^{-4}M_{\odot }`$. These estimates were made in the framework of Newtonian gravity, and a GTR calculation, if possible, would certainly yield lower values of $`\mathrm{\Delta }M`$. Even considering these Newtonian values of $`\mathrm{\Delta }M`$, we find that the value of $`\eta `$ could easily lie in the range $`10^2<\eta <10^3`$. Now the occurrence of luminous and long GRBs can be understood by using the existing ideas. Previously, accretion induced collapse of a White Dwarf to a NS was suggested as a model for GRBs. The difficulties of this model were that (i) for a canonical NS with $`M=1M_{\odot }`$ and $`z_s\approx 0.16`$, Eq. (12) yields a very low value of $`Q_{FB}\approx 4\times 10^{50}`$ erg, (ii) further, the value of $`\eta `$ is seen to be too low, and (iii) in the corresponding weak gravitational well, it cannot be ensured that a supernova shock is not launched. And it is probable that less luminous and short GRBs may occur by various other scenarios too, including the so-called “collapsar” or “hypernova” type models, which may explain the origin of a value of $`Q_\gamma \sim 10^{49}`$–$`10^{51}`$ erg for a duration of $`t_\gamma <1`$ s (though in extreme cases they go beyond this range).
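The fiducial numbers quoted for eqs. (5)–(14) can all be reproduced in a few lines; a minimal sketch of the arithmetic, using the formulas exactly as printed above with standard CGS constants and $`M_{\odot }c^2\approx 1.8\times 10^{54}`$ erg:

```python
import math

Msun, c = 1.989e33, 3e10                          # g, cm/s
Msun_c2 = 1.8e54                                  # erg
z, M2, R6, t10 = 0.615, 1.0, 1.0, 1.0             # fiducial parameters

Q_nu = 0.5 * z * (2*Msun_c2) * M2                 # eq. (5):  ~1.1e54 erg
L_nu = 2e53 * z * (1+z)**2 * M2 / t10             # eq. (6):  ~3.2e53 erg/s
T = 13.3 * z**0.25 * (1+z)**0.5 * M2**0.25 / (R6**0.5 * t10**0.25)  # eq. (8), MeV
eps = 3.3 * z**1.25 * (1+z)**4.5 * M2**1.5 / (t10**1.5 * R6**2)     # eq. (13), %
Q_FB = 7e52 * z**2.25 * (1+z)**4.5 * M2**2.5 / (t10**1.5 * R6**2)   # eq. (12), erg
print(f"Q_nu ~ {Q_nu:.1e} erg, L_nu' ~ {L_nu:.1e} erg/s, T' ~ {T:.1f} MeV")
print(f"E_nu' ~ {3.15*T:.0f} MeV, eps ~ {eps:.1f}%, Q_FB ~ {Q_FB:.1e} erg")

# Baryon loading above the nu-sphere, eq. (14) and the surrounding text:
E_nu = 45.0                                       # MeV, mid-range mean energy
sigma = 9e-44 * E_nu**2                           # ~1.8e-40 cm^2
delta = 2*1.67e-24 / (3*sigma)                    # column density for optical depth 2/3
dM = 4*math.pi * (R6*1e6)**2 * delta              # ~1e29 g, i.e. of order 1e-4 Msun
print(f"dM ~ {dM:.1e} g, eta ~ {Q_FB/(1e-4*Msun*c**2):.0f}")   # eta ~ 1e3
```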
# X-ray Imaging Using a Hybrid Photon Counting GaAs Pixel Detector
## 1 INTRODUCTION
The most widely used detection medium for medical X-ray imaging is still photographic film. In the last few years, digital X-ray imaging systems have also been playing an increasing role. The main advantages of digital sensors in comparison to film systems are the higher sensitivity due to better absorption (this implies a lower dose for the patient), the avoidance of time and material consuming chemical processing, and the benefits of digital data handling, like the possibility to apply software image processing tools to analyze the image. The digital X-ray systems which have been commercially available for a few years mainly consist of silicon charge coupled devices (CCDs), with or without a scintillator conversion layer. Incident photons create electron hole pairs which are accumulated in potential wells formed by the electrodes of the CCD. These potential wells are located very close to the surface of the CCD. In contrast to visible light, which can be absorbed very well in this thin region, the absorption of X-rays is much less efficient due to the higher photon energy. To increase the absorption, the CCD is often covered with a scintillator layer. This concept has the disadvantage of decreasing image resolution and contrast because of scattering of conversion photons within the scintillator. Another concept is given by hybrid pixel assemblies, which consist of a detector and a readout chip connected together by a flip-chip process. Different developments have been successfully made, especially for high energy physics and, recently, medical applications. A big advantage of the hybrid solution compared to monolithic devices like a CCD is the fact that both chips can be optimized separately. While for the readout circuit the well known silicon CMOS technology is preferred, materials with an enhanced absorption efficiency for X-rays in the energy range of 10-70 keV, such as GaAs or CdTe, can be used for the detector. A new step in the readout electronics is made by using a single photon counting technique instead of an integrating method. This implies a faster readout, low noise and a higher dynamic range. In this work, detectors processed on semi-insulating LEC-GaAs (SI-GaAs) bulk material and bump-bonded to the Photon Counting Chip (PCC) were used.
## 2 READOUT ELECTRONICS
The PCC is a further development of the LHC1/Omega3 chip, used in high energy physics, towards medical applications. It consists of a matrix of 64 x 64 identical square pixels (170 $`\mu `$m x 170 $`\mu `$m) and covers a total sensitive area of 1.2 cm<sup>2</sup>. The electronics in each cell comprises a preamplifier with a leakage current compensation up to 10 nA/pixel, an externally adjustable comparator with a 3-bit fine tuning for each pixel, a short delay line which feeds back to the latched comparator to produce a pulse, and a 15-bit pseudo-random counter. The input of the preamplifier is connected via a bump-bond to one of the detector pixels or can alternatively receive test signals from an external pulse generator via a test capacitance. When the shutter signal is low, the pulse coming from the delay line becomes the clock of the counter. When the shutter is high, the feedback loop is broken and an external clock can be used to shift out the data in a serial fashion. The maximum readout frequency is 10 MHz. There are two more fully static flip-flops to mask noisy pixels and to enable electrical testing.
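The pseudo-random counter deserves a remark: it is normally realized as a linear feedback shift register (LFSR), which is far cheaper in silicon than a binary ripple counter but counts through a scrambled sequence that must be decoded offline. The following is a hypothetical sketch of the idea; the tap positions (a maximal-length 15-bit polynomial) are illustrative, not the actual PCC design:

```python
def lfsr_step(state):
    # 15-bit Fibonacci LFSR; feedback from the two top bits gives a
    # maximal-length sequence of 2**15 - 1 nonzero states.
    bit = ((state >> 14) ^ (state >> 13)) & 1
    return ((state << 1) | bit) & 0x7FFF

# Offline decoding: tabulate scrambled register value -> number of hits.
state, decode = 1, {1: 0}
for n in range(1, 2**15 - 1):
    state = lfsr_step(state)
    decode[state] = n

# A pixel that latched 12 hits reads back a register word; the DAQ
# software maps it back to 12 with the lookup table.
state = 1
for _ in range(12):
    state = lfsr_step(state)
print(hex(state), decode[state])      # 0x1000 12
```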
A summary of electrical measurements of the PCC before bump-bonding is given in table 1.
## 3 DETECTOR - MATERIAL AND FABRICATION
The detectors were fabricated in the Freiburg cleanroom facility on semi-insulating GaAs bulk material from FCM Freiberg, Germany. This material typically has a resistivity of $`10^7\mathrm{\Omega }`$ cm. It has been shown that this type of GaAs has very good properties as a material for radiation detectors in high energy physics and, more recently, medical applications. The wafers were first lapped down from 650 $`\mu `$m to 300 $`\mu `$m and implanted on the backside with oxygen ($`3\times 10^{13}`$ cm<sup>-2</sup> at 190 keV) to avoid backside firing. The Schottky contacts were processed on both sides as layers of Ti, Pt, Au and Ni. The front side is structured by photolithographic processes into a matrix of small pixels (gap: 10 $`\mu `$m or 20 $`\mu `$m) of the same dimensions as the readout electronics. The bond pads have a diameter of 20 $`\mu `$m; the passivation was made with a layer of Si<sub>3</sub>N<sub>4</sub>. The so-called underbump metallization for the flip-chip process was another layer of Au with an overlap of 2 $`\mu `$m. In figure 2 the absorption probability of X-rays in the energy range 10 to 150 keV for Si, GaAs and CdTe, each 300 $`\mu `$m thick, is plotted. It can be seen that the absorption of GaAs and CdTe in the interesting energy range of 20 to 70 keV is much higher than that of Si: for example at 30 keV, which is the peak of the X-ray spectrum for 70 kV tube voltage, the detection efficiency of Si is only 10%, in contrast to GaAs with nearly 90%. CdTe performs even better, but until now there are difficulties in terms of homogeneity and processing. To determine the suitable reverse bias voltage settings of the detector, an IV characteristic was taken with one assembly after the flip-chip bonding. A diode characteristic is expected. In figure 3 the leakage current in $`\mu `$A which flows into the detector is plotted as a function of the reverse bias voltage. The characteristic has three distinct regions:
* a region where the leakage current increases linearly with reverse bias voltage to reach a plateau,
* a saturation area in which the leakage current is approximately independent of the bias voltage,
* a region where the current increases again with the applied voltage (soft breakdown region).
The soft breakdown is obtained due to the implantation of the backside of the wafer. The leakage current density is 27 nA/mm<sup>2</sup>.
## 4 IMAGING PROPERTIES
To determine the imaging properties of the detector assembly we used a standard X-ray tube for dental diagnostics<sup>1</sup> (supplier: Siemens, type: Heliodent MD). Measurements using radioactive sources have been successfully done by other groups. In a first measurement the assembly was exposed to a 200 ms long, 70 kV X-ray pulse, and the mean counts per pixel for increasing reverse bias voltage were determined. We found that the mean counts per pixel reach a plateau at around 200 V (figure 4) and that there are almost no noisy pixels up to 250 V (figure 5). It should be mentioned that the bias settings of the readout chip were determined beforehand, using an external pulse generator. The mean threshold of the pixels after adjustment was calculated to be 3794 e<sup>-</sup>. First images were taken of a 10 mm long M2 steel screw, placed on the back of the detector at a distance of 20 cm from the X-ray tube. The tube voltage was set to 60 kV, the exposure time to 50 ms.
These are nearly the minimum settings of the tube. In figure 6 the raw data (the number of counts for each pixel) are plotted for the whole pixel matrix. The darker a pixel is plotted in the 8-bit greyscale, the higher the count rate and thus the number of photons detected in this pixel. Pixels which are plotted black have counted more than 2500. All pixels are working, so the bump-bonding yield of this assembly seems to be nearly perfect. Nevertheless there are some small inhomogeneities, which can be attributed to variations in the sensitivity of the detector. This non-uniform sensitivity is probably a characteristic of the bulk material used. In semi-insulating LEC-GaAs, the deep donor arsenic antisite defect EL2 is normally used to compensate residual impurities with a shallow energy level, and it is responsible for the semi-insulating behaviour. Otherwise the influence of shallow acceptor concentrations like carbon would leave the material conducting. It has been shown that these deep donors can limit the lifetime of charge carriers by acting as trapping centres. Electron-hole pairs generated by incoming X-ray photons can be trapped on their way to the readout electrodes, so that only a fraction of the generated charge is detected. This implies a reduced charge collection efficiency (CCE). The local inhomogeneities also reduce the signal to noise ratio, which is defined as follows:
$$\mathrm{SNR}=\frac{\mathrm{signal}}{\mathrm{noise}}=\frac{\mathrm{n}}{\sigma }$$
Here $`\mathrm{n}`$ represents the mean number of counts per pixel in the region of interest and $`\sigma `$ is the standard deviation of the signal value. In the case of photon noise the SNR should have a square root dependence on the mean count rate, as expected from Poisson statistics. Depending on the bias voltage of the detector, the exposure time to the X-rays, and the optical density of the object and its spatial frequency, the SNR is not fixed. We obtained an SNR value of $`4.1\pm 0.1`$ by taking a flood image, i.e. a uniform exposure of the whole detector, for a 200 V bias voltage and a 100 ms long X-ray pulse at 70 kV tube voltage, without applying any corrections to the data. It has been shown that in the case of time independent inhomogeneities in detector sensitivity an image correction method can be used to ameliorate the imaging properties. This method also increases the SNR by decreasing $`\sigma `$. Further investigations will show if this method is also suitable for our detector system. Another possibility to obtain a better homogeneity is given by the threshold adjust facility of the PCC. Instead of adjusting the individual pixel thresholds with a pulse on the test capacitance, as is done now, an adjustment using the mean detector response to X-ray exposure could be carried out. To improve the image quality, in a first step the image was inverted and interpolated. This is shown in figure 7. It should be mentioned that the inner structure of the screw (thread, head) can also be recognized. There are many ways to evaluate the quality of an image. The most common and suitable methods are the contrast transfer function (CTF) and the modulation transfer function (MTF). The CTF describes the relative contrast response of an imaging system to a square wave modulation, the MTF the response to a sinusoidal one. They are both dependent on the spatial frequency, whose unit is line pairs per mm (lp/mm).
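As an aside, the SNR definition above and the effect of the sensitivity variations can be illustrated with a toy simulation (synthetic numbers, not detector data; the 20% gain spread is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
counts = 400                                   # mean hits per pixel

# Ideal flood image: pure photon (Poisson) noise, SNR ~ sqrt(n) = 20.
flood = rng.poisson(counts, size=(64, 64))
print(flood.mean() / flood.std())              # ~20

# Fixed-pattern sensitivity variations add in quadrature and pull the SNR down.
gain = np.clip(rng.normal(1.0, 0.2, size=(64, 64)), 0.0, None)
img = rng.poisson(counts * gain)
print(img.mean() / img.std())                  # ~5, of the order of the 4.1 measured here
```

A flat-field style correction removes the fixed-pattern term, which is why the correction method mentioned above can move the SNR back toward the Poisson limit.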
The Nyquist frequency, defined by $`N_y=1/(2\times \mathrm{pitch})`$, is 2.95 lp/mm for our detector system. Images of small slits down to the pixel size were successfully taken, and the determination of the line spread function (LSF) and the corresponding MTF will be done soon. ## 5 CONCLUSION AND FUTURE WORK It has been shown that hybrid GaAs pixel detectors with photon counting electronics offer a promising alternative as digital X-ray imaging sensors. In this work SI-GaAs detectors, fabricated in Freiburg, were flip-chip bonded to 4096-pixel Photon Counting Chips (PCC), developed at CERN. The leakage current density of the detector was determined from an IV characteristic to be 27 nA/mm<sup>2</sup>, which is in accordance with expectations. A detector bias voltage scan showed that a voltage around 200 V is enough to have the detector fully active. There are almost no noisy pixels for voltages below 250 V, the soft breakdown region of the detector. Future work will address the inhomogeneities observed in the X-ray images. If they can be attributed to variations in the sensitivity of the detector and are time independent, an image correction method can be developed and applied to the data. As a next step, characteristic quantities of an imaging system, like the CTF and the MTF, will be determined and compared to other systems. ## 6 ACKNOWLEDGEMENTS This work was supported by the European Community under the Brite/Euram project XIMAGE (BE-1042). The readout chip was developed as part of the Medipix project, carried out by CERN, University of Freiburg, University of Glasgow and INFN-Pisa. We gratefully acknowledge the contributions of G. Humpston of GEC-Marconi Materials Technology Ltd., Caswell, England for the bump-bonding, G. Magistrati of Laben S.p.A., Milano for the VME-based readout system, and M. Conti and collaborators of INFN-Napoli for the readout software.
# Present and future gamma-ray burst experiments ## 1 Introduction The long-awaited breakthrough in our understanding of cosmic gamma-ray bursts (GRBs) has come about because accurate ($`<`$10′) burst positions have become available quickly ($`<`$1 day). Prior to the launch of BeppoSAX, accurate positions were available from the interplanetary networks, but unavoidable delays in the retrieval and processing of data delayed their availability. Similarly, rapidly determined positions were, and still are, available from BATSE, but their utility is limited by the fact that their accuracy is in the several-degree range. The list of things we need to know about bursts is still long. Among the items on it are: * are burst sources in their host galaxies, or outside them? * what is the distribution of GRB distances? * are bursts beamed? * what is the intrinsic luminosity function for bursts? * are there different classes of bursts, e.g. long and short, soft-spectrum and hard-spectrum? * what is the multiwavelength behavior of GRB light curves immediately after the burst? Given that only $`\sim 50\%`$ of the GRBs studied to date have optical counterparts, and that only $`\sim 50\%`$ of the counterparts have measured redshifts, it is clear that answering these questions will require hundreds of GRB detections in the gamma-ray range. But the rate at which rapid, accurate positions become available is still quite small: $`<`$1 burst/month. Thus even minor improvements in the rate can have a major impact on progress in the near-term future. However, major improvements will be needed in the long-term future to make the next big step. ## 2 Current and future missions Figure 1 shows the approximate operating dates of the missions which are capable of providing GRB data to answer some of the questions listed above. Each of these missions will be reviewed briefly. ### 2.1 BeppoSAX BeppoSAX has now observed 14 GRBs in the Wide Field Camera, with location accuracies in the $`<`$10′ range. The resulting detection rate is $`\sim `$8/year. Eleven of these have been followed up with Narrow Field Instrument observations, resulting in many cases in a reduction of the error circle radii to 1′ (Costa 1999). There are delays of the order of hours to obtain, analyze, and distribute the data. The approved lifetime of the mission is through 2001. ### 2.2 BATSE: GCN and Locburst The Global Coordinates Network (GCN: Barthelmy et al. 1999) distributes $`\sim `$300 GRB positions/year with delays of the order of seconds, determined directly onboard the CGRO spacecraft. The error circle radii are $`>`$4°. The Locburst procedure (Kippen et al. 1998) distributes $`\sim `$100 of the stronger bursts/year. As Locburst relies on ground-based processing, the delays are longer, $`\sim `$15 min, but the accuracy is improved: the error circle radii are $`>`$1.6°. These data are useful for follow-up searches with rapidly moving telescopes like LOTIS (Park 1999), and with the RXTE Proportional Counter Array, as well as for triangulation with the 3rd Interplanetary Network. BATSE will remain operational at least through 2002; its lifetime is limited by the available funding, and is reviewed every two years in NASA's "Senior Review" process. ### 2.3 3rd Interplanetary Network The 3rd IPN consists of the Ulysses and Near Earth Asteroid Rendezvous (NEAR) missions in interplanetary space, as well as numerous near-earth missions such as CGRO, RXTE, Wind, and BeppoSAX.
The IPN observes and localizes $`\sim `$70 GRBs/year (Hurley 1999a,b). When a burst is observed by just two spacecraft, such as Ulysses and CGRO, the resulting error box is the intersection of the triangulation annulus with the BATSE error circle, with dimensions typically 5′ by 5° (a sketch of the timing geometry behind these annuli is given below). When Ulysses, NEAR, and, say, BATSE detect the burst, the resulting error box may be as small as 1′ by 5′ (Cline 1999). The delays involved are $`\sim `$1 day, imposed by the receipt of data from interplanetary spacecraft through NASA's Deep Space Network. The lifetime of the 3rd IPN will be through 2001 at least. This is determined by the nominal end of the Ulysses mission, which will be reconsidered in NASA's Senior Review of space physics missions in 1999. ### 2.4 The Rossi X-Ray Timing Explorer The All-Sky Monitor aboard RXTE detects $`\sim `$5 GRBs/year; $`\sim `$2-3 of them can be localized to $`\sim `$arcminute accuracy with delays of only minutes (Bradt 1999). In addition, the PCA performs about one target-of-opportunity observation per month of BATSE Locburst positions to search for fading X-ray counterparts (Takeshima et al. 1998). When successful, the counterpart position can be determined to 10′ with a delay of hours. Like BATSE, RXTE's lifetime, determined by the Senior Review, will extend through 2002 at least. ## 3 The next big step At present we rely on spacecraft instrumentation to provide X-ray positions which are accurate to arcminutes, and on rapid ground-based photometry from small to moderate-sized telescopes to identify optical counterparts to arcsecond accuracy. Only at that point can a large telescope be used to determine the redshift (for example, the spectrometer slits on the Keck Low Resolution Imaging Spectrometer are only 1-8″). The next big step will be to determine GRB positions directly on the spacecraft to arcsecond accuracy, eliminating the delays involved in refining the positions on the ground. Some of the future missions discussed below will be capable of accomplishing this. ### 3.1 HETE-II The High Energy Transient Explorer-II combines a Wide Field X-ray Monitor and a Soft X-ray Camera to localize $`\sim `$50 GRBs/y to accuracies of 10′ to 5″ (Ricker 1999). Locations will be transmitted to the ground in near real-time. The HETE-II mission is planned for a two year lifetime starting in late 1999. ### 3.2 CATSAT The Cooperative Astrophysics and Technology Satellite (Forrest et al. 1995) will contain a soft X-ray spectrometer consisting of 190 cm<sup>2</sup> of Si avalanche photodiodes to measure the 0.5-20 keV spectra of GRBs and their afterglows. From these spectral measurements, the hydrogen column along the line of sight may be determined. CATSAT has only coarse localization capability, but measurements of N<sub>H</sub> will help to answer the question of the locations of GRBs with respect to their host galaxies. $`\sim `$12 GRBs/year should be detected, with data available $`\sim `$5 hours after the bursts. A nominal one year mission in 2000 is planned. ### 3.3 INTEGRAL The International Gamma-Ray Laboratory can detect bursts with its Ge spectrometer array (the SPI), as well as with IBIS (the Imager on-Board the INTEGRAL Satellite), and with the BGO anticoincidence shield around the spectrometer. IBIS, a CdTe array with a coded mask, provides the most accurate, rapid locations.
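Returning to the IPN timing geometry of section 2.3: the difference $`\mathrm{\Delta }t`$ in burst arrival times at two spacecraft separated by a baseline $`D`$ confines the source to an annulus at angle $`\theta `$ from the baseline vector, with $`\mathrm{cos}\theta =c\mathrm{\Delta }t/D`$, and the cross-correlation timing uncertainty $`\sigma _t`$ sets the annulus half-width. The sketch below evaluates this; the baseline, time delay, and timing accuracy are illustrative assumptions, not mission specifications.

```python
import math

C = 2.998e5      # speed of light, km/s
AU_KM = 1.496e8  # astronomical unit, km

def triangulation_annulus(delta_t_s, sigma_t_s, baseline_km):
    """Two-spacecraft timing annulus: returns (theta, half_width) in radians.

    theta is the angle between the source direction and the baseline vector,
    from cos(theta) = c*delta_t/D; the half-width follows by propagating the
    timing uncertainty sigma_t.
    """
    theta = math.acos(C * delta_t_s / baseline_km)
    half_width = C * sigma_t_s / (baseline_km * math.sin(theta))
    return theta, half_width

# Illustrative inputs (assumed values): a 5 AU baseline, a 1000 s
# arrival-time difference, and 100 ms cross-correlation accuracy.
theta, hw = triangulation_annulus(1000.0, 0.1, 5.0 * AU_KM)
print(f"annulus at {math.degrees(theta):.1f} deg from the baseline, "
      f"half-width {math.degrees(hw) * 3600:.0f} arcsec")
```

The long interplanetary baseline is what makes the annulus so narrow; intersecting it with a second annulus or an error circle then yields the small error boxes quoted above.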
It can detect $`\sim `$20 GRBs/year and localize them to arcminute accuracy (Kretschmar et al. 1999). These positions can be distributed to observers within 5-100 s. The nominal INTEGRAL mission is two years long, starting in April 2001. ### 3.4 Future IPN A future Interplanetary Network, consisting of Mars Surveyor Orbiter 2001, the Near Earth Asteroid Prospector, INTEGRAL, and possibly BATSE and Ulysses, may exist around the year 2002. MSO carries two instruments which will detect GRBs with good sensitivity and time resolution, a Ge spectrometer and a neutron detector. The BGO anticoincidence shield of the INTEGRAL SPI is similarly equipped to detect bursts (Hurley 1999). A small GRB detector has been proposed as a MIDEX mission of opportunity for NEAP, a mission to Nereus. With such a network, $`\sim `$70 GRBs/year could be localized to arcminute accuracies, with delays of the order of a day. This IPN might remain in place for one or two years, bridging the gap to a possible dedicated GRB MIDEX. ### 3.5 A dedicated MIDEX Approximately 6 proposals for dedicated GRB missions were submitted to NASA in response to the recent MIDEX announcement. A dedicated MIDEX could localize perhaps 100 GRBs/year to arcsecond accuracy onboard the spacecraft, and transmit the locations to the ground in near real-time. Such a mission may fly in the years 2003-2005. The announcement of the MIDEX selection is expected in January 1999. ## 4 Conclusions Table 1 summarizes the essential characteristics of the current and future GRB missions. Although we have much to learn about gamma-ray bursts, these missions promise to return the data we need to move forward in this exciting field.
# Orbital Magnetism in Small Quantum Dots with Closed Shells ## Abstract It is found that various kinds of shell structure, which occur at specific values of the magnetic field, lead to the disappearance of the orbital magnetization for particular magic numbers of small quantum dots with an electron number $`A<30`$. PACS: 73.20Dx, 73.23Ps The development of semiconductor technology has made possible the confinement of a finite number of electrons in a localized space of a few hundred Angstroms . These mesoscopic systems, called quantum dots, open new avenues in the study of the interplay between quantum and classical behavior at a low-dimensional scale. The smaller the quantum dot, the larger the prevalence of quantum effects upon the static and dynamic properties of the system. Their electronic properties are determined by the interplay of the external confinement and the electron-electron interaction which produces the effective mean field of the "artificial atom" . The quasiparticle concept associated with an effective mean field is well established in many-particle physics. For finite Fermi systems like nuclei or metallic clusters the bunching of single particle levels known as shells is one consequence of this description, if the mean free path of the particles is comparable with the size of the system. A remarkable stability is found in nuclei and metallic clusters at magic numbers which correspond to closed shells in the effective potential. For small quantum dots, where the number of electrons is well defined $`(A<30)`$, the mean free path of the electrons appears to be comparable with the diameter of the dot. Transport phenomena are governed by the physics of the Coulomb blockade regime . In recent experiments shell structure effects have been observed clearly for quantum dots. In particular, the energy needed to place the extra electron (addition energy) into a vertical quantum dot at zero magnetic field has characteristic maxima which correspond to the sequence of magic numbers of a two-dimensional harmonic oscillator . The energy gap between filled shells is $`\hbar \omega _0`$, where $`\hbar \omega _0`$ is the lateral confinement energy. In fact, when the confining energy is comparable to or larger than the interaction energy, these atomic-like features have been predicted in a number of publications . While the electron-electron interaction is important for the explanation of certain ground state properties like special values of angular momenta of quantum dots in a magnetic field , for a small number of electrons the confinement energy becomes prevalent over the Coulomb energy . It has been demonstrated that the magnetoexciton spectrum in small quantum dots closely resembles the spectrum of noninteracting electron-hole pairs. In particular, the gaps in the spectrum, which are typical features of the shell structure, reappear at different values of the magnetic field. Recent calculations using the spin-density-functional approach nicely confirm shell closure for small magic electron numbers in a parabolic quantum dot. Orbital magnetism of an ensemble of quantum dots has been discussed for non-interacting electrons , but little attention was paid to the shell structure of an individual dot.
We demonstrate within a simple model that the disappearance and re-appearance of closed shells in a quantum dot under variation of the magnetic field strength leads to a novel feature: the orbital magnetization disappears for particular values of the magnetic field strength, which are associated with particular magic numbers. Since the electron interaction is crucial only for partially filled electronic shells , we deal in this paper mainly with closed shells. This corresponds to the quantum limit $`\hbar \omega _0\gg e^2/\epsilon l_0`$, where $`e^2/\epsilon l_0`$ is the typical Coulomb energy, $`l_0=(\hbar /m^{*}\omega _0)^{1/2}`$, $`m^{*}`$ is the effective electron mass and $`\epsilon `$ is the dielectric constant. In fact, for small dots, where large gaps between closed shells occur , the electron interaction plays the role of a weak perturbation which can be neglected. But even in the regime $`\hbar \omega _0<e^2/\epsilon l_0`$ a distinctly larger addition energy is needed if an electron is added to a closed shell . We choose the harmonic oscillator potential as the effective mean field for the electrons in an isolated quantum dot. Our discussion here is based upon the 2D version of the Hamiltonian including the spin degree of freedom. The magnetic field acts perpendicular to the plane of motion, i.e. $`H=\sum _{j=1}^{A}h_j`$ with $$h=\frac{1}{2m^{*}}\left(\vec{p}-\frac{e}{c}\vec{A}\right)^2+\frac{m^{*}}{2}(\omega _x^2x^2+\omega _y^2y^2)+\mu ^{*}\sigma _zB.$$ (1) where $`\vec{A}=[\vec{r}\times \vec{B}]/2`$, $`\vec{B}=(0,0,B)`$ and $`\sigma _z`$ is the Pauli matrix. We do not take into account the effect of finite temperature; this is appropriate for experiments which are performed at temperatures $`kT\ll \mathrm{\Delta }`$ with $`\mathrm{\Delta }`$ being the mean level spacing. The units used are meV for the energy and Tesla for the magnetic field strength. The effective mass is $`m^{*}=0.067m_e`$ for GaAs, which yields, for $`A\approx 15`$, the size $`R_0\approx 320\AA `$ and $`\hbar \omega _0=3`$ meV . The effective mass determines the orbital magnetic moment for the electrons and leads to $`\mu _B^{\mathrm{eff}}=(m_e/m^{*})\mu _B\approx 15\mu _B`$. The effective spin magnetic moment is $`\mu ^{*}=g_L\mu _B`$ with the effective Landé factor $`g_L=0.44`$ and $`\mu _B=|e|\hbar /2m_ec`$. The magnetic orbital effect is much enhanced in comparison with the magnetic spin effect, yet the tiny spin splitting does produce signatures as we see below. Shell structure occurs whenever the ratio of the two eigenmodes $`\mathrm{\Omega }_\pm `$ of the Hamiltonian (1) (see Ref. ) $$\mathrm{\Omega }_\pm ^2=\frac{1}{2}\left(\omega _x^2+\omega _y^2+4\omega _L^2\pm \sqrt{(\omega _x^2-\omega _y^2)^2+8\omega _L^2(\omega _x^2+\omega _y^2)+16\omega _L^4}\right)$$ (2) is a rational number with a small numerator and denominator. Here $`\omega _L=|e|B/(2m^{*}c)`$. Closed shells are particularly pronounced if the ratio is equal to one (for $`B=0`$) or two (for $`B\approx 1.23`$) or three (for $`B\approx 2.01`$) and less pronounced if the ratio is 3/2 (for $`B=0.72`$) or 5/2 (for $`B=1.65`$) for the circular case $`\omega _x=\omega _y`$ (see Fig.1a). Note that, for better illustration, we used for the spin splitting the value $`2\mu _B`$ instead of the correct $`\mu ^{*}`$ in all Figures; the discussions and conclusions are based on the correct value. The values given here for $`B`$ depend on $`m^{*}`$ and $`\omega _{x,y}`$.
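These field values are easy to check numerically. For the circular dot, eq. (2) reduces to $`\mathrm{\Omega }_\pm =\sqrt{\omega _0^2+\omega _L^2}\pm \omega _L`$, and the condition $`\mathrm{\Omega }_+/\mathrm{\Omega }_{-}=k`$ can be solved in closed form, $`\omega _L/\omega _0=(k-1)/(2\sqrt{k})`$; for a deformed dot the same condition can be solved by bisection on eq. (2). A minimal sketch, using the GaAs parameters quoted above (the deformed example $`r=0.8`$ is our illustrative choice):

```python
import math

HBAR_OMEGA0 = 3.0    # lateral confinement in meV, as quoted in the text
M_STAR = 0.067       # m*/m_e for GaAs
MU_B = 5.788e-2      # Bohr magneton in meV/T

def omega_pm(wx, wy, wl):
    """Eigenmodes of eq. (2); all frequencies in the same (arbitrary) units."""
    s = wx**2 + wy**2 + 4.0 * wl**2
    r = math.sqrt((wx**2 - wy**2)**2 + 8.0 * wl**2 * (wx**2 + wy**2) + 16.0 * wl**4)
    return math.sqrt((s + r) / 2.0), math.sqrt((s - r) / 2.0)

def b_for_ratio(k):
    """Field (Tesla) where Omega_+/Omega_- = k for the circular dot, using
    omega_L/omega_0 = (k-1)/(2 sqrt(k)) and hbar*omega_L = mu_B (m_e/m*) B."""
    hbar_wl = (k - 1) / (2.0 * math.sqrt(k)) * HBAR_OMEGA0
    return hbar_wl * M_STAR / MU_B

for k in (2, 3):
    print(f"circular dot, k = {k}: B = {b_for_ratio(k):.2f} T")
# reproduces B = 1.23 T (k = 2) and B = 2.01 T (k = 3)

def bprime_for_ratio(r, k):
    """B' = omega_L/omega_x at which Omega_+/Omega_- = k for deformation
    r = omega_x/omega_y (the ratio grows monotonically with the field)."""
    lo, hi = 0.0, 10.0
    for _ in range(60):                       # bisection on omega_L
        mid = 0.5 * (lo + hi)
        op, om = omega_pm(1.0, 1.0 / r, mid)
        lo, hi = (mid, hi) if op / om < k else (lo, mid)
    return 0.5 * (lo + hi)

print(f"r = 1.0: B' = {bprime_for_ratio(1.0, 2):.3f}")  # = (k-1)/(2 sqrt(k))
print(f"r = 0.8: B' = {bprime_for_ratio(0.8, 2):.3f}")  # closure shifted in B'
```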
As a consequence, a material with an even smaller effective mass $`m^{*}`$ would show these effects for a correspondingly smaller magnetic field. For $`B=0`$ the magic numbers (including spin) turn out to be the usual sequence of the two dimensional isotropic oscillator, that is $`2,6,12,20,\dots `$ . For $`B\approx 1.23`$ we find new shells as if the confining potential were a deformed harmonic oscillator without magnetic field. The magic numbers are $`2,4,8,12,18,24,\dots `$ which are just the numbers obtained from the two dimensional oscillator with $`\omega _>=2\omega _<`$ ($`\omega _>`$ and $`\omega _<`$ denote the larger and smaller value of the two frequencies). Similarly, we get for $`B\approx 2.01`$ the magic numbers $`2,4,6,10,14,18,24,\dots `$ which correspond to $`\omega _>=3\omega _<`$. If we start from the outset with a deformed mean field $`\omega _x=(1-\beta )\omega _y`$ with $`\beta >0`$, the degeneracies (closed shells) lifted at $`B=0`$ re-occur at higher values of $`B`$ (see Fig.2 and the discussion relating to it). In Fig.1b we display an example referring to $`\beta =0.2`$. The significance of this finding lies in the restoration of closed shells by the magnetic field in an isolated quantum dot that does not give rise to magic numbers at zero field strength due to deformation. We mention that the choice $`\beta =0.5`$ would shift the pattern found at $`B\approx 1.23`$ in Fig.1a to the value $`B=0`$. The relation between $`B`$ and the deformation is displayed in Fig.2, where, for better illustration, $`B^{\prime }=\omega _L/\omega _x`$ rather than $`B`$ is plotted versus $`r=\omega _x/\omega _y`$. Closed shells are obtained for values of $`B`$ and $`\beta `$ which yield $`\mathrm{\Omega }_+/\mathrm{\Omega }_{-}=k=1,2,3,\dots `$, that is for values on the trajectories of Fig.2. Note also the asymmetry of the curves in Fig.2: while $`\omega _x/\omega _y=r`$ is physically identical with $`\omega _x/\omega _y=1/r`$ without magnetic field, the two deformations become distinct in the presence of a magnetic field as it establishes a direction perpendicular to the $`xy`$-plane. In previous work we obtained various shapes of the quantum dot by energy minimization. In this context it is worth noting that at the particular values of the magnetic field where a closed shell occurs, the energy minimum would be obtained for circular dots, if the particle number is chosen to be equal to the magic numbers. Deviations from those magic numbers usually give rise to deformed shapes at the energy minimum. To what extent these 'spontaneous' deformations actually occur (which is the Jahn-Teller effect ) is subject to more detailed experimental information. Far-infrared spectroscopy of a small isolated quantum dot could be a useful tool to provide pertinent data . The question arises as to what extent our findings depend upon the particular choice of the mean field. The Coulomb interaction lowers the electron levels for increasing magnetic quantum number $`|m|`$ . The addition of the term $`-\lambda \hbar \omega L^2`$ to the Hamiltonian (1), where $`L`$ is the dimensionless $`z`$-component of the angular momentum operator, mimics this effect for $`\lambda >0`$ in the Coulomb blockade regime of deformed quantum dots . In this way, it interpolates the single-particle spectrum between that of the oscillator and the square well .
For $`\omega _x\ne \omega _y`$ and $`\lambda \ne 0`$ the Hamiltonian $`H^{\prime }=H-\lambda \hbar \omega L^2`$ is non-integrable and the level crossings encountered in Fig.1 become avoided level crossings. The shell structure, which prevails for $`\lambda =0`$ throughout the spectrum at $`B\approx 1.23`$ or $`B\approx 2.01`$, is therefore disturbed to an increasing extent with increasing shell number. But even for $`\lambda \approx 0.1`$ the structure is still clearly discernible for about seven shells, that is for particle numbers up to about twenty five. When the magnetic field is changed continuously for a quantum dot of fixed electron number, the ground state will undergo a rearrangement at the values of $`B`$ where level crossings occur . In fact, this leads to strong variations in the magnetization and should be observable also in the magnetic susceptibility, as the latter is proportional to the second derivative of the total energy with respect to the field strength. While details may be modified by electron correlations, we think that the general features discussed below should be preserved. In Fig.3 we discern clearly distinct patterns depending on the electron number; in fact, the susceptibility appears to be a fingerprint of the electron number. Deforming the oscillator does not produce new features except for the fact that all lines in Fig.3 would be shifted in accordance with Fig.2. If there is no level crossing, the second derivative of $`E_{\mathrm{tot}}`$ is a smooth function. The crossing of two occupied levels does not change the smoothness. In contrast, if an unoccupied level crosses the last occupied level, the second derivative of $`E_{\mathrm{tot}}`$ must show a spike. In this way, we understand the even-odd effect when comparing $`A=8`$ with $`A=9`$ in Fig.3. The spin splitting caused by the magnetic field at $`B\approx 2.01`$ for $`A=8`$ is absent for $`A=9`$. This becomes evident when looking at a blow-up of this particular level crossing, which is illustrated in Fig.4, where the last occupied level is indicated as a thick line and the points where a spike occurs are indicated by a dot. Note that the splitting is proportional to the effective spin magnetic moment $`\mu ^{*}`$. Spikes of the susceptibility are associated with a spin flip for even electron numbers. They are brought about by the crossing of the top (bottom) with the bottom (top) line of a double line. Hence, both lines of the double splitting in Fig.3 yield a spin flip ($`A=8`$), but neither of the single lines ($`A=9`$). Strictly speaking, the spikes are $`\delta `$-functions with a factor which is determined by the angle at which the two relevant lines cross. If the level crossings are replaced by avoided crossings (Landau-Zener crossings), the lines would be broadened. This would be the case in the present model for $`\lambda >0`$ and $`\beta >0`$. Finite temperature will also result in line broadening. We now focus on the special cases which give rise to closed shells, that is when the ratio $`\mathrm{\Omega }_+/\mathrm{\Omega }_{-}=k=1,2,3,\dots `$. For the sake of clarity we analyze in detail the circular shape ($`\omega _x=\omega _y=\omega _0`$) for which the eigenmodes (Eq.(2)) become $`\mathrm{\Omega }_\pm =(\mathrm{\Omega }\pm \omega _L)`$ with $`\mathrm{\Omega }=\sqrt{\omega _0^2+\omega _L^2}`$ . We find for the magnetization $$M=\mu _B^{\mathrm{eff}}\left(1-\frac{\omega _L}{\mathrm{\Omega }}\right)\left(\mathrm{\Sigma }_{-}-k\mathrm{\Sigma }_{+}\right)-\mu ^{*}\langle S_z\rangle $$ (3) with $`\mathrm{\Sigma }_\pm =\sum _{j=1}^{A}(n_\pm +1/2)_j`$ .
For completely filled shells $`\langle S_z\rangle =0`$, since, for the magnetic field strengths considered here, the spin orientations cancel each other (see Fig.1). From the orbital motion we obtain for the susceptibility $$\chi =dM/dB=-\frac{(\mu _B^{\mathrm{eff}})^2}{\hbar \mathrm{\Omega }}\left(\frac{\omega _0}{\mathrm{\Omega }}\right)^2\left(\mathrm{\Sigma }_{+}+\mathrm{\Sigma }_{-}\right)$$ (4) It follows from Eq.(4) that, for a completely filled shell, the magnetization owing to the orbital motion leads to diamagnetic behavior. For zero magnetic field ($`k=1`$) the system is paramagnetic and the magnetization vanishes ($`\mathrm{\Sigma }_{-}=\mathrm{\Sigma }_{+}`$). The value $`k=2`$ is attained at $`B\approx 1.23`$. When calculating $`\mathrm{\Sigma }_{-}`$ and $`\mathrm{\Sigma }_{+}`$ we have to distinguish between the cases where the shell number $`N`$ of the last filled shell is even or odd. With all shells filled from the bottom we find (i) for the last filled shell number even: $`\mathrm{\Sigma }_{+}=\frac{1}{12}(N+2)[(N+2)^2+2]`$, $`\mathrm{\Sigma }_{-}=\frac{1}{6}(N+1)(N+2)(N+3)`$, which implies $`M=-\mu _B^{\mathrm{eff}}(1-\omega _L/\mathrm{\Omega })(N+2)/2`$; and (ii) for the last filled shell number odd: $`\mathrm{\Sigma }_{+}=\frac{1}{2}\mathrm{\Sigma }_{-}=\frac{1}{12}(N+1)(N+2)(N+3)`$, which, in turn, implies $`M=0.`$ Therefore, if $`\mathrm{\Omega }_+/\mathrm{\Omega }_{-}=2`$, the orbital magnetization vanishes for the magic numbers $`4,12,24,\dots `$ while it leads to diamagnetism for the magic numbers $`2,8,18,\dots `$ (these closed-shell results are checked numerically in the sketch following the figure captions). A similar picture is obtained for $`\mathrm{\Omega }_+/\mathrm{\Omega }_{-}=3`$, which happens at $`B\approx 2.01`$: for each third filled shell number (magic numbers $`6,18,\dots `$) the magnetization is zero. Since the results presented are due to shell effects, they do not depend on the assumption $`\omega _x/\omega _y=1`$ which was made to facilitate the discussion. The crucial point is the relation $`\mathrm{\Omega }_+/\mathrm{\Omega }_{-}=k=1,2,3,\dots `$ which can be obtained for a variety of combinations of the magnetic field strength and the ratio $`\omega _x/\omega _y`$, as is illustrated in Fig.2. Whenever the appropriate combination of field strength and deformation is chosen to yield, say, $`k=2`$, our findings apply. To summarize: the consequences of shell structure effects for the addition energy of a small isolated quantum dot have been analyzed. At certain values of the magnetic field strength closed shells appear in a quantum dot, also in cases where deformation does not give rise to magic numbers at zero field strength. Measurements of the magnetic susceptibility are expected to reflect the properties of the single-particle spectrum and should display characteristic patterns depending on the particle number. At certain values of the magnetic field and electron numbers the orbital magnetization vanishes due to shell closure in the quantum dot. Figure Captions Fig.1 Single-particle spectra as a function of the magnetic field strength. Spectra are displayed for (a) a plain isotropic and (b) a deformed two dimensional oscillator. Fig.2 Relative magnetic field strength $`B^{\prime }=\omega _L/\omega _x`$ as a function of the ratio $`r=\omega _x/\omega _y=1-\beta `$ for fixed values of the ratio $`k=\mathrm{\Omega }_+/\mathrm{\Omega }_{-}`$. Fig.3 Magnetic susceptibility $`\chi =-\partial ^2E_{\mathrm{tot}}/\partial B^2`$ in arbitrary units as a function of the magnetic field strength for the isotropic oscillator without $`L^2`$-term. $`E_{\mathrm{tot}}`$ is the sum of the single-particle energies filled from the bottom up to the electron number $`A`$. Fig.4 Blow-ups of the relevant level crossings explaining the features in Fig.3.
The left and right hand sides refer to $`A=8`$ and $`A=9`$, respectively.
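As a numerical check on the closed-shell results above, one can enumerate the single-particle levels $`\hbar \mathrm{\Omega }_+(n_++1/2)+\hbar \mathrm{\Omega }_{-}(n_{-}+1/2)`$ with twofold spin degeneracy, fill them from the bottom, and evaluate $`\mathrm{\Sigma }_{-}-k\mathrm{\Sigma }_{+}`$, the factor controlling the orbital magnetization in Eq.(3). A minimal sketch for $`k=2`$:

```python
def orbital_moment_factor(A, k=2, nmax=40):
    """Fill A electrons into 2D oscillator levels with Omega_+ = k*Omega_-
    (two spin states per orbital) and return Sigma_- - k*Sigma_+, cf. eq. (3)."""
    # Orbital level energies in units of hbar*Omega_- (constant offset dropped):
    levels = sorted((k * n_p + n_m, n_p, n_m)
                    for n_p in range(nmax) for n_m in range(nmax))
    sigma_p = sigma_m = 0.0
    filled = 0
    for _, n_p, n_m in levels:
        take = min(2, A - filled)      # spin degeneracy 2 per orbital level
        sigma_p += take * (n_p + 0.5)
        sigma_m += take * (n_m + 0.5)
        filled += take
        if filled == A:
            break
    return sigma_m - k * sigma_p

for A in (2, 4, 8, 12, 18, 24):
    print(A, orbital_moment_factor(A))
# The factor vanishes for A = 4, 12, 24 (zero orbital magnetization) and
# equals -(N+2)/2 = -1, -2, -3 for the diamagnetic shells A = 2, 8, 18.
```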
# Probing Galaxy Formation with TeV Gamma Ray Absorption ## 1 Introduction It has long been appreciated that high energy $`\gamma `$-rays from sources at cosmological distances will be absorbed via electron-positron pair production on the diffuse background of long wavelength photons produced over the history of the universe. Now that several extragalactic TeV sources have been discovered, it is becoming possible to use this process to probe the extragalactic background light (EBL). This technique will become increasingly powerful as many more sources will presumably be discovered at greater distances with the new generation of $`\gamma `$-ray telescopes (GLAST, CELESTE, STACEE, MAGIC, HESS, VERITAS, and Milagro). Broadly speaking, there are three approaches to studying the EBL/$`\gamma `$-absorption connection, represented by the three speakers on this topic here at the VERITAS workshop: * Limits on the EBL, and on models for its production, from $`\gamma `$-absorption data ; * Semi-empirical estimates of the EBL ; and * Prediction of the EBL and $`\gamma `$-absorption from physical theories of galaxy formation and evolution in a cosmological framework. The advantage of the last of these approaches, which we will follow in this talk, is that it permits one to deduce from $`\gamma `$-ray absorption data a great deal about galaxy formation and evolution, including the effects of the stellar initial mass distribution and of dust extinction and reradiation. It is also arguably the best way to estimate the extent of $`\gamma `$-ray absorption at various energies, as shown by the correct prediction from a simplified model of this sort \[4, hereafter MP96\] that there would be rather little absorption of TeV $`\gamma `$-rays from the nearest extragalactic sources, Mrk 421 and Mrk 501, at redshifts of only $`z=0.03`$. The calculations reported here (and described in more detail elsewhere ) are based on state-of-the-art semi-analytic models (SAMs) of galaxy formation, which we summarize briefly below. But it will be useful to start by summarizing our earlier calculations (MP96). ## 2 Simplified Cosmological Modeling of the EBL Although there is a long history of spectral synthesis models leading to predictions for the EBL (e.g., ), such models typically attempted to account only for the star formation history of the galaxies existing today — i.e., they are pure luminosity evolution models. Moreover, in spectral synthesis the star formation history of each galaxy is not determined from its cosmological history, which would take into account the fact that gas is not available to form stars until it has cooled within a dense collapsed structure. But the evidence is certainly increasing that there was a great deal of galaxy formation and merging in the past, plausibly in agreement with the predictions of hierarchical models of galaxy formation of the CDM type. The motivation of the approach used by MP96 was to obtain theoretical predictions for the EBL and the resulting absorption of $`\gamma `$-rays in the context of hierarchical theories of structure formation, specifically within the CDM family of cosmological models. It is relatively straightforward to calculate the evolution of structure in the dark matter component within the CDM paradigm. In MP96, the number density of dark matter halos as a function of mass and redshift and its dependence on cosmology was modeled using Press-Schechter theory, which agrees fairly well with the predictions of N-body simulations.
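For orientation, the Press-Schechter prescription gives the fraction of mass in collapsed halos per unit $`\mathrm{ln}\sigma ^{-1}`$ as $`f(\sigma )=\sqrt{2/\pi }(\delta _c/\sigma )e^{-\delta _c^2/2\sigma ^2}`$, where $`\sigma (M)`$ is the rms linear overdensity on mass scale $`M`$ and $`\delta _c\approx 1.686`$. The sketch below evaluates this with a toy power-law $`\sigma (M)`$; the slope, normalization, and the neglect of the growth factor are illustrative assumptions, not a real CDM spectrum.

```python
import math

DELTA_C = 1.686   # linear collapse threshold for spherical collapse

def ps_multiplicity(sigma):
    """Press-Schechter fraction of mass per unit ln(1/sigma) in halos."""
    nu = DELTA_C / sigma
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-0.5 * nu * nu)

def sigma_of_mass(m_rel, sigma8=0.67, alpha=0.25):
    """Toy power-law sigma(M), with m_rel = M/M8 (illustrative assumption)."""
    return sigma8 * m_rel ** (-alpha)

for m_rel in (0.01, 0.1, 1.0, 10.0):
    f = ps_multiplicity(sigma_of_mass(m_rel))
    print(f"M/M8 = {m_rel:5.2f}: f_PS = {f:.4f}")
```

The exponential cutoff at high masses is what makes halo abundances, and hence the galaxy formation history, so sensitive to the cosmological model and its normalization.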
However, obtaining the corresponding radiation field as a function of time and wavelength involves complicated astrophysics with many unknown parameters. This problem was addressed empirically, using the simple assumption that each dark matter halo hosts one galaxy, with the galaxy luminosity assumed to be a monotonic function of the halo mass. To model the spectrum of each galaxy, the star formation rate (SFR) was assumed to be $`e^{-t/\tau }`$, with ellipticals (assumed to be 28% of the galaxies) having $`\tau =0.5`$ Gyr, and spirals (the remaining 72%) having $`\tau =6`$ Gyr. The actual SFR for each galaxy was determined so that a desired local luminosity function (LLF) was reproduced at redshift $`z=0`$. As there is considerable variation in the observed B-band luminosity function derived from different redshift surveys, MP96 considered three representative choices. Stellar emission was modeled in a simplified way, with each stellar population of a given mass and age treated as a black body with the appropriate temperature. Three different power-law initial mass functions $`N(M)(M/M_{\odot })^{-1+\mathrm{\Gamma }}`$ describing the differential distribution of the initial stellar masses were considered: the standard Salpeter IMF with $`\mathrm{\Gamma }=-1.35`$, and also steeper IMFs with $`\mathrm{\Gamma }=-1.6`$ and $`-2.0`$. The evolution of the gas content and metallicity within each galaxy was treated in the instantaneous recycling approximation. Dust absorption was treated in a standard way , assuming that the mass of dust increases with the galaxy metallicity and gas fraction. The extinction curve was similar to a Galactic extinction curve, but was scaled according to the metallicity (galaxies with lower metallicities have steeper extinction curves in the UV, as indicated by observations of the LMC and SMC). Energy was conserved, so that any energy absorbed by dust was reradiated. The dust emission spectrum was modeled with three components: PAH molecules ($`10`$ to 30 $`\mu `$m), warm dust from active star forming regions (30 to 70 $`\mu `$m), and cold "cirrus" dust (70 to 1000 $`\mu `$m). Two cosmological models were considered: a standard (cluster-normalized, $`\sigma _8=0.67`$) $`\mathrm{\Omega }_{\mathrm{matter}}=1`$ cold dark matter (SCDM) model, and a COBE-normalized cold + hot dark matter (CHDM) model with the then-favored hot dark matter fraction $`\mathrm{\Omega }_\nu =0.3`$. Since both were $`\mathrm{\Omega }_{\mathrm{matter}}=1`$ models, the Hubble parameter was chosen to be $`h=0.5`$ ($`H_0=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>) in order to obtain a Universe with an age of 13 Gyr. Galaxy formation occurs fairly early in the SCDM model (because of the large amount of power on small scales), and considerably more recently in the CHDM model. The main conclusion from this study was that cosmology is the dominant factor influencing the EBL in the range 1-10 $`\mu `$m, which is the range most relevant for absorption of $`\sim `$TeV $`\gamma `$-rays from nearby extragalactic sources. In this wavelength range, the most extreme differences between the three different LLFs and three different IMFs considered were less than the difference between the SCDM and CHDM cosmological models, representing early and late galaxy formation respectively. It is not difficult to understand why this happens: since the optical luminosity at $`z=0`$ was fixed for each assumed LLF, the main factor determining the EBL in the near infrared was the star formation history.
Because galaxies were assumed to trace halos in a simple way, the star formation history was almost entirely determined by the cosmology. As MP96 explained, the SCDM model predicted a larger EBL flux because (1) the stars have put out more light since they have been shining longer than in CHDM, (2) there was more redshifting of their light from the optical to the near infrared, and (3) the SCDM galaxies are older at a given redshift and hence are composed of more evolved stars, producing more flux in the red and near-infrared. As expected, when the optical depth for $`\gamma `$-rays due to $`e^+e^{-}`$ production was calculated, more absorption was predicted for SCDM than for CHDM. But for sources as near as Mrk 421 and 501, the predicted absorption only steepens the spectrum a little in the 300 GeV - 10 TeV range for which results have thus far been published, with curvature noticeable mainly above about 3 TeV. These predictions appear to be consistent with the observations , unlike those from earlier and later calculations based on a semi-empirical approach . The predictions of our new, more complete treatment are qualitatively consistent with this earlier simplified approach. ## 3 Semi-Analytic Modeling of the EBL Our new approach is based on semi-analytic models (SAMs) of galaxy formation, which allow one to model the astrophysical processes involved in galaxy formation in a simplified but physical way within the framework of the hierarchical structure formation paradigm. The semi-analytic models used here are described in detail in , , and . These models are in reasonably good agreement with a broad range of local galaxy observations, including the relation between luminosity and circular velocity (the Tully-Fisher relation), the B-band luminosity function, cold gas contents, metallicities, and colors . Our basic approach is similar in spirit to models presented originally by other groups and subsequently developed by these groups in numerous other papers (reviewed in and ). Significant improvements included in our models are that we assumed a lower stellar mass-to-light ratio (in better agreement with observed values), included the effects of dust extinction, and developed an improved "disk-halo" model for supernova feedback. With these new ingredients, we were able to overcome some of the difficulties of previous models, which did not simultaneously reproduce the Tully-Fisher relation and B-band luminosity function, and produced bright galaxies that were too blue. Instead of assuming a one-to-one relationship between galaxies and dark matter halos, as in MP96, we now determine the galaxy population residing in halos of a given mass by constructing the "merging history" of each halo using an extension of the Press-Schechter technique. Using the method described in , we create Monte-Carlo realizations of the masses of progenitor halos and the redshifts at which they merge to form a larger halo. These "merger trees" (each branch in the tree represents a halo merging event) reflect the collapse and merging of dark matter halos within a specific cosmology and have been shown to agree fairly well with merger trees extracted from N-body simulations . Each halo at the top level of the hierarchy is assumed to be filled with hot gas, which cools radiatively and collapses to form a gaseous disk. The cooling rate is calculated from the density, metallicity, and temperature of the gas.
Cold gas is turned into stars using a simple recipe, depending on the mass of cold gas present and the dynamical time of the disk (a toy illustration of this recipe is sketched below). Supernovae inject energy into the cold gas and may expel it from the disk and/or halo if this energy is larger than the escape velocity of the system. Chemical evolution is traced assuming a constant yield of metals per unit mass of new stars formed. The spectral energy distribution (SED) of each galaxy is then obtained by assuming an IMF and using stellar population models (e.g. ; in the present work we use the updated GISSEL98 models with solar metallicity). When halos merge, the galaxies contained in each progenitor halo retain their separate identities until they either fall to the center of the halo due to dynamical friction and merge with the central galaxy, or until they experience a binding merger with another satellite galaxy orbiting within the same halo. All galaxies are assumed to start out as disks, and major (nearly equal mass) mergers result in the formation of a spheroid. New gas accretion and star formation may later form a new disk, resulting in a variety of bulge-to-disk ratios at late times. This may be used to divide galaxies into rough morphological types, and seems to reproduce observational trends such as the morphology-density relation and the color-morphology trend . The recipes for star formation, feedback, and chemical evolution contain free parameters, which we set by requiring an average fiducial "Milky Way" galaxy to have an I-band magnitude, cold gas mass, and metallicity as dictated by observations of nearby galaxies. The star formation and feedback processes are some of the most uncertain elements of these models, and indeed of any attempt to model galaxy formation. We have investigated several different combinations of recipes for star formation and supernova feedback (sf/fb), discussed in detail in , and . The star formation history for several different scenarios is shown in Figure 1. Note that the three models shown in Figure 1 are for the same SCDM cosmology; the only difference is in the mechanism used to convert cold gas into stars. This illustrates that, unlike in the previous approach of MP96, the star formation history of the Universe is quite sensitive to the assumed astrophysics and not only to the cosmology. Here we will discuss results for a single choice of sf/fb recipe, which corresponds to the fiducial "Santa Cruz" model discussed in , and is similar to the models of Kauffmann et al. (e.g. ). We shall elaborate on the effects of changing the sf/fb recipes on the EBL and gamma ray absorption in and . We included dust extinction using the same approach as MP96 . As shown in , this led to better results for the B-band luminosity function, and improved galaxy colors. A similar approach to modelling dust extinction has confirmed these results and demonstrated that, in addition, the inclusion of dust greatly improves the agreement of the galaxy-galaxy correlation function with observations . The inclusion of dust extinction and the re-radiation of absorbed light at longer wavelengths is of course a crucial ingredient in modeling the EBL. The current dust model, discussed in , is an improved version of the one used in MP96, and is very similar to the approach used by Guiderdoni et al. . As in MP96, all of the absorbed starlight is re-radiated by the dust, assuming a three-component blackbody emission spectrum.
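Returning to the star formation and feedback recipe above, here is a deliberately minimal one-zone sketch of how such a recipe is integrated forward in time: stars form at a rate proportional to the cold gas mass divided by the disk dynamical time, and supernova feedback removes cold gas in proportion to the stars formed. The efficiency and feedback parameters here are illustrative placeholders, not the calibrated values of the actual models.

```python
def evolve_one_zone(m_cold, t_dyn, eps=0.05, beta_fb=1.0, dt=0.01, t_end=5.0):
    """Toy one-zone star formation history (masses in 1e10 Msun, times in Gyr).

    SFR = eps * m_cold / t_dyn; supernova feedback reheats beta_fb units of
    cold gas per unit mass of stars formed. All parameter values here are
    illustrative placeholders, not the calibrated SAM values.
    """
    m_star, t = 0.0, 0.0
    history = []
    while t < t_end:
        sfr = eps * m_cold / t_dyn
        dm_star = sfr * dt
        m_cold -= (1.0 + beta_fb) * dm_star   # consumed by stars + reheated gas
        m_star += dm_star
        history.append((t, sfr))
        t += dt
    return m_star, history

m_star, hist = evolve_one_zone(m_cold=5.0, t_dyn=0.05)
print(f"stellar mass formed after 5 Gyr: {m_star:.2f} x 1e10 Msun")
```

In the full models the cold gas is also replenished by cooling and by mergers, which is what produces the variety of star formation histories shown in Figure 1.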
However, in MP96 the shape of the dust emission spectrum was chosen to match that of the Galaxy, which is inconsistent with the global data for IRAS galaxies. In the current models, the temperatures and relative contributions of the three components are determined by requiring the colors at 12, 25, and 60 $`\mu `$m to match the observed colors of IRAS galaxies . Details will be given in . ## 4 Initial Mass Function The stellar initial mass function determines the wavelength distribution of starlight produced by a stellar population of a given age, and as such it is an important ingredient in the calculation of the EBL. In MP96 we considered only the Salpeter IMF and steeper power-law IMFs, but here we will discuss results both for the Salpeter IMF and the Scalo IMF (see Figure 2). These are two of the most commonly used IMFs, although recent studies (e.g., ) indicate that a better representation of the observed IMF may be a Salpeter-like slope at $`M>M_{\odot }`$, with a flattening at $`M<M_{\odot }`$. The most important difference between the Salpeter and Scalo IMFs for our present purposes is that if both are normalized to the same total mass of stars, the fraction of high-mass stars is higher with the Salpeter IMF. Since only high-mass stars emit significant amounts of ultraviolet light, this results in much more ultraviolet in the spectrum of a typical galaxy with the Salpeter IMF as compared with the Scalo IMF, as shown in Figure 3, both without and with inclusion of the effects of dust. Note that the much greater amount of absorbed ultraviolet light with the Salpeter IMF results in much more reradiated infrared light at long wavelengths. Also note that changing the IMF has a relatively small effect on the predictions in the 1-10 $`\mu `$m range. Previous SAM calculations of properties of local galaxies (e.g. ) assumed a Scalo IMF. However, it has been shown that a more "top-heavy" IMF (more high-mass stars) such as the Salpeter IMF is favored by observations of UV-bright galaxies at very high redshift (Lyman-break galaxies). In , we investigate the effects of using different IMFs (Scalo or Salpeter) on the observable properties of galaxies at $`z=0`$. In particular, we calculate the luminosity function at 2000Å, B, R, and K, the corresponding 2000Å-B, B-V, B-I, and B-K colors predicted by our models, and the Tully-Fisher relation in various bands. We find that because the mass-to-light ratio in the longer wavebands (I to K) is significantly higher for the Salpeter IMF, the luminosity of a galaxy with a given velocity dispersion is smaller. This makes it very difficult to obtain a bright enough Tully-Fisher zero-point in cosmologies with $`\mathrm{\Omega }_{\mathrm{matter}}=1`$, in which the baryon fraction is low (we assume a fixed value of $`\mathrm{\Omega }_bh^2=0.02`$, as suggested by observations ; the baryon fraction $`f_b\equiv \mathrm{\Omega }_b/\mathrm{\Omega }_{\mathrm{matter}}`$ is therefore higher in low-$`\mathrm{\Omega }_{\mathrm{matter}}`$ cosmologies). If the mass-to-light ratios predicted by the current generation of stellar population models are accurate and the Salpeter IMF is really representative of typical galaxies, this may suggest that the baryon fraction in bright galaxies must be 0.15–0.2, similar to the value in groups and clusters. ## 5 Predicted EBL In , we calculate the predicted EBL for several cosmologies and also investigate the effects of the assumed IMF and star formation recipe, and the variations among stellar population models compiled by different groups.
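The IMF effect described in section 4 can be quantified in the simplest terms for a pure power law of the kind used in MP96, $`N(M)(M/M_{\odot })^{-1+\mathrm{\Gamma }}`$: the fraction of the formed stellar mass residing in massive, UV-producing stars drops quickly as $`\mathrm{\Gamma }`$ steepens. A short sketch, with illustrative mass limits of 0.1 and 100 $`M_{\odot }`$ and a 10 $`M_{\odot }`$ dividing line (these limits are our assumption, not values from the models):

```python
def mass_fraction_above(m_split, gamma, m_lo=0.1, m_hi=100.0):
    """Fraction of stellar mass in stars above m_split (solar masses) for an
    IMF with logarithmic slope gamma, i.e. dN/dM ~ M**(gamma - 1); the
    mass-weighted integrand is then M**gamma."""
    integral = lambda a, b: (b**(gamma + 1.0) - a**(gamma + 1.0)) / (gamma + 1.0)
    return integral(m_split, m_hi) / integral(m_lo, m_hi)

for gamma in (-1.35, -1.6, -2.0):
    f = mass_fraction_above(10.0, gamma)
    print(f"Gamma = {gamma}: {100 * f:.0f}% of the stellar mass above 10 Msun")
```

With the Salpeter slope roughly a tenth of the stellar mass goes into stars above 10 $`M_{\odot }`$, and this fraction falls by an order of magnitude for the steepest slope, which is why the UV output, and with it the short-wavelength EBL, is so sensitive to the IMF.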
Here we can only present a subset of preliminary results. The cosmological models considered are summarized in Table 1. The SCDM model is shown for comparison with other work, but is ruled out by many independent observational considerations. The three remaining models represent currently favored variants of the CDM family of models. The shape and normalization of the luminosity function that we obtain from the SAMs depend on the cosmological model, as shown in Figure 4. The K-band luminosity function shown in the figure is relatively insensitive to dust and the star formation history. To put the models on an equal footing, following a similar logic to the B-band local luminosity function normalization of MP96, we renormalize each model to give the same integrated luminosity in the K-band. The common normalization is obtained by integrating the observed local luminosity function from Ref. . The resulting correction factors range from 0.42 for SCDM to 1.7 for LCDM. Note that this renormalization also sidesteps the known inaccuracy of the Press-Schechter model used to estimate the number density of dark matter halos. The factor of 0.42 for SCDM is consistent with the rule-of-thumb factor of 0.5 determined from comparison with N-body simulations . Figure 5 shows the EBL for the LCDM model for the Scalo and Salpeter IMFs. As expected from our previous discussion, the predicted EBL is much higher for the Salpeter IMF at both short and long wavelengths, but the predictions are very similar in the 1-10 $`\mu `$m band that is most relevant for $`\sim `$TeV $`\gamma `$-ray attenuation from relatively nearby sources, in agreement with MP96. Note that both EBL curves are consistent with the lower limits from source counts at ultraviolet, optical, and near-infrared wavelengths (filled symbols), but neither curve is high enough to agree with the new DIRBE EBL detections at 140 and 240 $`\mu `$m . The remaining results that we present are all for the Salpeter IMF. Figure 6 shows the predicted EBL for the four cosmological models discussed above. Note that the three models in which galaxy and star formation are relatively early predict rather similar EBL, while CHDM predicts generally lower EBL. Figure 7 shows the reason for this in more detail. Only about 10% of the EBL in the 1-10 $`\mu `$m band comes from $`z\ge 1`$ for the CHDM model, compared to 20-40% for the LCDM model, in which galaxies form considerably earlier. (An alternative way of looking at the evolution of the EBL is given in Figure 8 of .) Keep in mind, however, that even in a model with an "early formation" cosmology like LCDM but which assumes less efficient star formation at high redshift (for example, the model with the star formation history indicated by the light solid line in Figure 1), the EBL will show a steeper evolution than that shown in Figure 7 for LCDM. So we expect some degeneracy between cosmology and astrophysics. However, to the extent that the background cosmology may soon be determined by other methods, measurements of the EBL will provide useful constraints on the star formation history of the Universe. All of the models fall short of the DIRBE detection at 140 $`\mu `$m by at least a factor of $`\sim 2.5`$. This may be due to a number of effects that we have not yet included in our modeling. Much of the far-infrared and sub-mm light is probably produced by heavily extinguished ultra-luminous starburst galaxies. The phenomenological work of Guiderdoni et al.
suggests that the contribution from this population may increase with redshift. Our independent work suggests a physical reason for this: the galaxy interactions that probably trigger these starburst events are more frequent at higher redshift because of the higher density of the Universe, and galaxies are more gas rich, so the starburst events may be more dramatic . Observations of nearby starburst galaxies (cf. ) suggest that the Galactic/SMC model that we have used here does not provide a good description of dust extinction in actively star forming galaxies. Additional contributions to the far-IR flux that we have not included may come from AGN and from energy that is injected into the gas by supernovae and later radiated at long wavelengths. We will include the contribution of starburst galaxies and address these other effects in improved models that we are developing in collaboration with Bruno Guiderdoni and Julien Devriendt. However, we do not expect that this improved treatment will have much effect on the absorption of $`\gamma `$-rays with energies $`<10`$ TeV, which we now discuss briefly. ## 6 Attenuation of TeV Gamma Rays Here we have space to present only the results for one case, LCDM with Salpeter IMF. Figure 8 shows the optical depth of the universe $`\tau (E_\gamma )`$ as a function of $`\gamma `$-ray energy for varying source redshifts, $`z_s`$. The corresponding attenuation factors, $`\mathrm{exp}(-\tau )`$, are shown in Figure 9. Note that for sources as near as Mrk 421 and 501 ($`z=0.03`$), attenuation is predicted to be rather small between 1-10 TeV, with little curvature in the spectrum. Gamma-ray energies $`E_\gamma >10`$ TeV for local sources, or sources at $`z>0.1`$ for lower energies, will likely be needed in order to see significant attenuation. In we will present results for several cosmological models, and discuss the dependence on IMF and star formation prescriptions. Bullock et al. discuss the difference in predicted $`\gamma `$-attenuation between the Salpeter and Scalo IMFs for this same LCDM model. The early universe is much more transparent to 10-100 GeV $`\gamma `$-rays with the Scalo IMF, since the fraction of high-mass stars is lower, and the ultraviolet flux density is correspondingly reduced (cf. ). ## 7 Conclusions * Semi-analytic models (SAMs) of galaxy formation provide a convenient and powerful theoretical framework to determine how input assumptions — e.g., cosmology, star formation history, and IMF — affect the predicted extragalactic background light (EBL) and the resulting $`\sim `$TeV $`\gamma `$-ray attenuation. * The 1-10 $`\mu `$m EBL and the resulting attenuation of few-TeV $`\gamma `$-rays reflect mainly the history of star formation in the universe, with less attenuation for models such as CHDM in which galaxies form relatively late. * The EBL at $`<1`$ $`\mu `$m and $`>10`$ $`\mu `$m is significantly affected by the IMF and the modeling of the absorption and reradiation by dust. * Gamma-ray energies $`E_\gamma >10`$ TeV and/or sources at $`z>0.1`$ will probably be needed to provide clear evidence of attenuation due to $`\gamma \gamma \rightarrow e^+e^{-}`$. * Therefore, both space- and ground-based $`\gamma `$-ray telescopes will be required to probe the spectra of AGNs at various redshifts, in order to determine both + the unabsorbed spectra, which will help determine how these $`\gamma `$-rays are produced, and + the intergalactic absorption, which as we have shown is affected by cosmology, star formation history, IMF, and dust.
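A kinematic note on why the near-infrared EBL controls the attenuation shown in Figures 8 and 9: a $`\gamma `$-ray of energy $`E`$ can pair-produce on a background photon of energy $`ϵ`$ only if $`s=2ϵE(1-\mathrm{cos}\theta )\ge 4(m_ec^2)^2`$, and the $`\gamma \gamma `$ cross section peaks within roughly a factor of two of threshold. The sketch below evaluates the standard lowest-order cross section and the threshold wavelength for a 1 TeV photon; this is a generic illustration, not the full optical depth integral of the models.

```python
import math

SIGMA_T = 6.652e-25   # Thomson cross section, cm^2
ME_C2 = 0.511e6       # electron rest energy, eV

def sigma_gg(s_ev2):
    """gamma + gamma -> e+ e- cross section (cm^2) for centre-of-mass energy
    squared s in eV^2 (standard lowest-order QED result)."""
    if s_ev2 <= 4.0 * ME_C2**2:
        return 0.0
    beta = math.sqrt(1.0 - 4.0 * ME_C2**2 / s_ev2)
    return (3.0 * SIGMA_T / 16.0) * (1.0 - beta**2) * (
        (3.0 - beta**4) * math.log((1.0 + beta) / (1.0 - beta))
        - 2.0 * beta * (2.0 - beta**2))

e_gamma = 1e12                   # a 1 TeV gamma-ray, in eV
eps_min = ME_C2**2 / e_gamma     # head-on threshold photon energy
print(f"threshold EBL photon: {eps_min:.2f} eV, "
      f"wavelength {1.24 / eps_min:.1f} um")
print(f"sigma near its peak (s = 2 * s_threshold): "
      f"{sigma_gg(8.0 * ME_C2**2):.2e} cm^2")
```

The peak cross section is about a quarter of the Thomson cross section, and for a 1 TeV photon the relevant background photons lie at a few microns, which is why the 1-10 $`\mu `$m EBL matters most for nearby TeV sources.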
## Acknowledgments JRP and JSB were supported by NSF and NASA grants at UCSC, and RSS was supported by a University Fellowship from The Hebrew University. JRP thanks Avishai Dekel for hospitality at Hebrew University. Donn MacMinn contributed significantly to an early stage of this work, especially the dust modeling. His life was tragically cut short by a hit-and-run driver as he was bicycling near Chicago on August 30, 1997.
# SOLAR NEUTRINO OSCILLATION DIAGNOSTICS AT SUPERKAMIOKANDE AND SNO CUPP-98/3, hep-ph/9812249, December 1998 Debasish Majumdar and Amitava Raychaudhuri Department of Physics, University of Calcutta, 92 Acharya Prafulla Chandra Road, Calcutta 700 009, India ABSTRACT Results for solar neutrino detection from the SuperKamiokande collaboration have been presented recently while those from the Sudbury Neutrino Observatory are expected in the near future. These experiments are sensitive to the <sup>8</sup>B neutrinos from the sun, the shape of whose spectrum is well-known but the normalisation is less certain. We propose several variables, insensitive to the absolute flux of the incident beam, which probe the shape of the observed spectrum and can sensitively signal neutrino oscillations. They provide methods to extract the neutrino mixing angle and mass splitting from the data and also to distinguish oscillation to sequential neutrinos from those to a sterile neutrino. PACS Nos.: 26.65.+t, 14.60.Pq The recent evidence of neutrino oscillations in the atmospheric neutrino data presented by SuperKamiokande (SK) has moved neutrino physics to the centre stage of research activity. A non-zero neutrino mass will have an impact on many areas of particle physics, astrophysics, and cosmology, and new results are eagerly awaited. It is widely expected that important information will emerge from the data on solar neutrinos. All earlier experiments have consistently signalled a depletion of the solar neutrino flux, and high statistics results from SK (some already published ) and the other experiment of comparable size, the Sudbury Neutrino Observatory (SNO) , will further sharpen the situation. There are several issues pertaining to the solar neutrino problem which still remain unsettled. The observed flux depletion could be a consequence of vacuum neutrino oscillations or resonant flavour conversion . It is not possible to rule out any of these alternatives on the basis of the available data. Further, the electron neutrino may be mixed with either a sequential or a sterile neutrino. Three neutrinos are expected in association with the three known charged leptons. The inclusion of a fourth neutrino – sterile, in view of the LEP and SLC results – is suggested by the several pieces of evidence indicative of neutrino oscillations, namely the solar neutrino puzzle, the atmospheric neutrino anomaly and the results of the LSND experiment, which cannot all be accommodated together in a three neutrino framework . Finally, it is expected that the mass splitting and mixing angle will be tightly constrained by the new data. The solar neutrinos are produced in standard reactions (the p-p chain, CNO cycle, etc.) responsible for the generation of heat and light. Though the spectrum of neutrinos from each of the processes is well known, their absolute normalisations vary from one solar model to another . The two latest detectors, SNO and SK, are sensitive to neutrinos from only the Boron reaction in the p-p chain, whose normalisation, for example, is known to vary like $`T_c^{18}`$, where $`T_c`$ is the temperature at the solar core. In this paper we examine the vacuum oscillation scenario. We propose several variables relevant for SK and SNO which are insensitive to the absolute normalisation of the <sup>8</sup>B neutrino flux and may be used (a) to distinguish mixing of the electron neutrino with a sequential neutrino from that to a sterile neutrino and (b) to determine the neutrino mass splitting and mixing angle.
Other variables, insensitive to the absolute normalisation of the incident flux, have been explored earlier in refs. , where the focus has been on the energy spectrum of the scattered electron at SNO, the MSW mechanism, etc. The SuperKamiokande detector uses 32 ktons of light water in which electrons scattered by $`\nu _e`$ – through both charged current (CC) and neutral current (NC) interactions – are identified via their Čerenkov radiation. If a sequential neutrino is produced by oscillation, it will contribute to the signal only through the NC interactions (roughly one eighth of the $`\nu _e`$ case) while a sterile neutrino will be entirely missed by the detector. The SNO detector has 1 kton of $`D_2O`$ and neutrinos are primarily detected through the charged and neutral current disintegration of the deuteron: $`\nu +d\rightarrow e^{-}+p+p`$ and $`\nu +d\rightarrow \nu +p+n`$, respectively. While the $`e^{-}`$ in the CC reaction is identified through its Čerenkov radiation and can be used to determine the shape of the incident neutrino spectrum, the NC measurement, signalled by the detection of the neutron, is calorimetric. If oscillations to sequential neutrinos occur then they will not contribute to the CC signal while the NC channel will be unaffected. On the other hand, if the $`\nu _e`$ oscillates to a sterile neutrino, which has no interactions whatsoever, then both the CC and NC signals will suffer depletions. The first class of variables that we propose to probe the shape of the observed neutrino spectra are $`M_n`$, the normalised $`n`$-th moments of the solar neutrino distributions seen at SK and SNO. Specifically, $$M_n=\frac{\int N_i(E)E^ndE}{\int N_i(E)dE}$$ (1) where $`i`$ stands for SK or SNO. It is seen from the definition that the uncertainty in the overall normalisation of the incident neutrino flux cancels out from $`M_n`$. To see how these variables are affected by neutrino oscillations, first consider oscillation of the electron neutrino to a sequential neutrino, say $`\nu _\mu `$. Since the oscillation probability is a function of the energy, the shape of the spectrum will be affected. As noted earlier, at SK the muon neutrino will only undergo NC reactions. Thus, for oscillation to a sequential neutrino, we have $$N_{SK}(E)=ϵ_{SK}f(E)\left\{P_{\nu _e\nu _e}(E,\mathrm{\Delta },\vartheta )\sigma _{SK}^e(E)+P_{\nu _e\nu _\mu }(E,\mathrm{\Delta },\vartheta )\sigma _{SK}^\mu (E)\right\}N_{SK}^0$$ (2) Here, $`f(E)`$ stands for the incident Boron-neutrino fluence, $`ϵ_{SK}`$ for the detection efficiency which, for the sake of simplicity, is assumed to be energy independent, and $`N_{SK}^0`$ for the number of electrons in the SK detector off which the neutrinos may scatter. $`\sigma _{SK}^e(E)`$ is the $`\nu _e`$ scattering cross-section with both NC and CC contributions whereas $`\sigma _{SK}^\mu (E)`$ is the $`\nu _\mu `$ cross-section obtained from the NC interaction alone. Only the CC contributions are relevant at SNO for the determination of the spectrum and we get: $$N_{SNO}^{c.c}(E)=ϵ_{SNO}^{c.c.}f(E)P_{\nu _e\nu _e}(E,\mathrm{\Delta },\vartheta )\sigma _{SNO}^{c.c.}(E)N_{SNO}^0$$ (3) $`N_{SNO}^0`$ is the number of deuteron nuclei in the SNO detector and $`ϵ_{SNO}^{c.c}`$ represents the CC detection efficiency, assumed to be independent of the energy. If the $`\nu _e`$ oscillates to a sterile neutrino, which is decoupled from the weak interactions, it will escape the SK and SNO detectors completely. Thus, for sterile neutrinos, in place of eq.
(2) we have $$N_{SK}(E)=ϵ_{SK}f(E)\left\{P_{\nu _e\rightarrow \nu _e}(E,\mathrm{\Delta },\vartheta )\sigma _{SK}^e(E)\right\}N_{SK}^0$$ (4) while eq. (3) is unchanged. In the two-flavour case, the probability for an electron neutrino of energy $`E_\nu `$ to oscillate to another neutrino (sequential or sterile), $`\nu _x`$, after the traversal of a distance $`L`$ is: $$P_{\nu _e\rightarrow \nu _x}=\mathrm{sin}^2(2\vartheta )\mathrm{sin}^2\left(\frac{\pi L}{\lambda }\right)$$ (5) where $`\vartheta `$ is the mixing angle, and the oscillation length, $`\lambda `$, is given in terms of the mass-squared difference $`\mathrm{\Delta }`$ by: $$\lambda =2.47\left(\frac{E_\nu }{\mathrm{MeV}}\right)\left(\frac{\mathrm{eV}^2}{\mathrm{\Delta }}\right)\mathrm{metre}$$ (6) From probability conservation: $`P_{\nu _e\rightarrow \nu _e}=1-P_{\nu _e\rightarrow \nu _x}`$. In Fig. 1 we present the results for $`M_1`$, $`M_2`$, and $`M_3`$ as a function of the mass splitting $`\mathrm{\Delta }`$ for oscillation to sequential as well as sterile neutrinos. Results for the mixing angles $`\vartheta =45^o`$ and $`15^o`$ are shown. As expected, for the smaller mixing angle the effects of neutrino oscillation are not very prominent. On the other hand, for $`\vartheta =45^o`$, the impact of neutrino oscillation is quite significant, especially for the smaller values of $`\mathrm{\Delta }`$, and it holds promise for distinguishing between the sequential and sterile neutrino alternatives. In order to evaluate the usefulness of these variables in conjunction with the actual data, it needs to be noted first that for both the SNO CC and SK signals, what is experimentally measured via the Čerenkov technique is the energy of the outgoing electron. In the case of SNO, the large mass of the deuteron forces the electron to move in the direction of the incident neutrino. Further, since the recoiling hadrons are heavy, the electron's energy equals the incident neutrino energy less the threshold energy for the CC reaction, 1.44 MeV. For SK, the electron's energy and scattering angle are uniquely correlated with the neutrino energy. Thus the neutrino spectrum can be readily reconstructed from the measured electron energy for both experiments using the well-known cross-sections for the appropriate scattering process. The huge sizes of both detectors ensure that the error in the final results will be dominated by systematic uncertainties, and careful estimates put these down to a few per cent . If the errors on the extracted neutrino spectrum are at the expected few-per-cent level, it is easy to convince oneself from Fig. 1 that $`M_1`$, $`M_2`$, and $`M_3`$ will be useful diagnostic tools. This gives us confidence that, if the mixing angle $`\vartheta `$ is not small (as indicated by the data from the earlier experiments), the experimental results will enable a distinction between the sequential and the sterile neutrino alternatives and help focus on the mixing angle $`\vartheta `$ and mass splitting $`\mathrm{\Delta }`$ involved. We have also considered the ratios of the moments $`r_i=(M_i)_{SK}/(M_i)_{SNO}`$ as variables for the search for neutrino oscillations. We do not discuss these in this preliminary communication; results will be reported elsewhere . The SNO experiment will enable separate measurements of the neutrino flux through charged current and neutral current reactions.
As noted earlier, if $`\nu _\mu `$ or $`\nu _\tau `$ are produced through the oscillation of solar neutrinos then they will register via neutral current interactions with full strength, but their energy will not permit charged current interactions. The ratio, $`R_{SNO}`$, of the calorimetrically measured signal in the NC channel, $`N_{SNO}^{n.c.}`$, to the total (energy-integrated) signal in the CC channel, $`N_{SNO}^{c.c.}`$, is therefore a good probe for oscillations. Thus $$R_{SNO}=\frac{N_{SNO}^{n.c.}}{N_{SNO}^{c.c.}}$$ (7) where $$N_{SNO}^{n.c.}=ϵ_{SNO}^{n.c.}\int f(E)\sigma _{SNO}^{n.c.}(E)N_{SNO}^0𝑑E$$ (8) where $`ϵ_{SNO}^{n.c.}`$ is the efficiency of detection for the NC channel and $$N_{SNO}^{c.c.}=ϵ_{SNO}^{c.c.}\int f(E)P_{\nu _e\rightarrow \nu _e}(E,\mathrm{\Delta },\vartheta )\sigma _{SNO}^{c.c.}(E)N_{SNO}^0𝑑E$$ (9) Clearly, $`R_{SNO}`$ is independent of the absolute normalisation of the incident neutrino flux $`f(E)`$ and depends only on its shape. If oscillations to sterile neutrinos take place then eq. (8) is replaced by: $$N_{SNO}^{n.c.}=ϵ_{SNO}^{n.c.}\int f(E)P_{\nu _e\rightarrow \nu _e}(E,\mathrm{\Delta },\vartheta )\sigma _{SNO}^{n.c.}(E)N_{SNO}^0𝑑E$$ (10) while eq. (9) is unchanged. Results for $`R_{SNO}`$ are presented in Table 1. For simplicity, we have assumed $`ϵ_{SNO}^{n.c.}`$ to be independent of the energy and, further, equal to the efficiency of the CC reaction $`ϵ_{SNO}^{c.c.}`$. If instead $`ϵ_{SNO}^{n.c.}/ϵ_{SNO}^{c.c.}=r_ϵ`$, and it can be taken to be independent of the energy to a good approximation, then our results for $`R_{SNO}`$ will simply be multiplied by this factor. If no oscillations take place then we find $`R_{SNO}`$ = 0.382. Oscillation to sequential neutrinos decreases the denominator of eq. (7) while the numerator is unaffected. Thus $`R_{SNO}`$ increases if such oscillations take place. From Table 1 it is seen that, especially for the larger mixing angles $`\vartheta =30^o`$ or $`45^o`$, $`R_{SNO}`$ is significantly different from the no-oscillation limit for the sequential neutrino case. For the sterile neutrino alternative, the change in $`R_{SNO}`$ is very marginal and it is unlikely that it will be observable. Thus $`R_{SNO}`$ offers a clear method for distinguishing between the sequential and sterile neutrino alternatives, independent of the uncertainty in the overall normalisation of the incident neutrino flux. In Fig. 2 we present contours of constant $`R_{SNO}`$ in the $`\mathrm{\Delta }`$-$`\vartheta `$ plane for oscillation to sequential neutrinos. The symmetry of the contours about $`\vartheta =45^o`$ is expected. At $`\mathrm{\Delta }=0`$ or $`\vartheta =0^o`$ or $`90^o`$ the limit of no oscillations is recovered. Values of $`R_{SNO}`$ as high as 0.99 can only be achieved for smaller values of $`\mathrm{\Delta }`$. In this work, we have considered the oscillation of $`\nu _e`$ to either (a) a sequential neutrino or (b) a sterile neutrino. We have restricted ourselves to vacuum neutrino oscillations. We plan to examine the alternative of matter-enhanced MSW resonant flavour conversion later. We have not extended the analysis to a three- (or four-) neutrino mixing scheme. This would have introduced too many parameters. We have also ignored a small contribution from hep neutrinos. These variables can also be utilised to study oscillation of supernova neutrinos. Some results are presented in ref. .
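To make the construction of these normalisation-free variables concrete, the minimal Python sketch below evaluates the survival probability of eqs. (5)-(6) over a schematic <sup>8</sup>B spectrum and forms a first moment and an NC/CC ratio in the spirit of eqs. (1) and (7). The spectral shape, cross-sections and efficiencies used here are invented stand-ins (the actual analysis uses tabulated values), so the printed numbers are illustrative only and will not reproduce, e.g., the no-oscillation value $`R_{SNO}`$ = 0.382.

```python
import numpy as np

# Two-flavour vacuum-oscillation survival probability, eqs. (5)-(6).
L_SUN_EARTH = 1.496e11                      # Sun-Earth distance in metres

def p_ee(E_MeV, Delta_eV2, theta):
    lam = 2.47 * E_MeV / Delta_eV2          # oscillation length in metres, eq. (6)
    return 1.0 - np.sin(2.0 * theta)**2 * np.sin(np.pi * L_SUN_EARTH / lam)**2

# Schematic stand-ins for the 8B spectrum f(E) and the SNO cross-sections.
E = np.linspace(5.0, 15.0, 400)             # neutrino energy in MeV
f = E**2 * np.exp(-E / 3.0)                 # invented beta-like spectral shape
sig_cc = (E - 1.44)**2                      # schematic CC cross-section above threshold
sig_nc = np.ones_like(E)                    # schematic NC cross-section

Delta, theta = 0.6e-10, np.radians(45.0)
N_cc = f * p_ee(E, Delta, theta) * sig_cc   # CC spectrum, cf. eq. (3)

# Normalised first moment, eq. (1): the overall flux normalisation cancels.
M1 = np.trapz(N_cc * E, E) / np.trapz(N_cc, E)

# NC/CC ratio in the spirit of eq. (7), sequential-neutrino case (NC unaffected).
R = np.trapz(f * sig_nc, E) / np.trapz(N_cc, E)
print(f"M_1 = {M1:.2f} MeV,  R (arbitrary units) = {R:.3f}")
```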
The behaviour of the variables $`M_1`$, $`M_2`$, and $`M_3`$ and that of $`R_{SNO}`$ leads us to believe that, as data from SuperKamiokande and SNO accumulate, these variables, used in conjunction, may prove fruitful not only for detecting oscillations of solar neutrinos but also for zeroing in on the mass splitting and mixing angle involved. Acknowledgements This work is partially supported by the Eastern Centre for Research in Astrophysics. A.R. also acknowledges a research grant from the Council of Scientific and Industrial Research. TABLE CAPTION Table 1: $`R_{SNO}`$ for different values of the mixing angle, $`\vartheta `$, and the mass splitting, $`\mathrm{\Delta }`$. Results are presented for both mixing with sequential and sterile neutrinos. Figure Captions Fig. 1: The variables (a) $`M_1`$, (b) $`M_2`$, and (c) $`M_3`$ as a function of the mass splitting $`\mathrm{\Delta }`$ for the SuperKamiokande and SNO detectors. Results are presented for two values (45<sup>o</sup> and 15<sup>o</sup>) of the mixing angle $`\vartheta `$. Note that the SNO (charged current) signal does not distinguish between the sequential and sterile neutrino scenarios. Fig. 2: Contours of constant $`R_{SNO}`$ – the ratio of the NC signal to the energy-integrated CC signal at SNO – in the $`\mathrm{\Delta }`$-$`\vartheta `$ plane for oscillation to sequential neutrinos. No neutrino oscillation corresponds to $`R_{SNO}`$ = 0.382. | $`\mathrm{\Delta }`$ | $`R_{SNO}`$ | | | | | | | --- | --- | --- | --- | --- | --- | --- | | in | $`\vartheta =15^o`$ | | $`\vartheta =30^o`$ | | $`\vartheta =45^o`$ | | | $`10^{10}`$ eV<sup>2</sup> | Sequential | Sterile | Sequential | Sterile | Sequential | Sterile | | 0.0 | 0.382 | 0.382 | 0.382 | 0.382 | 0.382 | 0.382 | | 0.3 | 0.422 | 0.384 | 0.532 | 0.389 | 0.613 | 0.392 | | 0.6 | 0.480 | 0.383 | 0.991 | 0.387 | 2.117 | 0.396 | | 0.9 | 0.467 | 0.378 | 0.848 | 0.362 | 1.428 | 0.337 | | 1.2 | 0.438 | 0.380 | 0.623 | 0.375 | 0.788 | 0.370 | | 1.5 | 0.422 | 0.383 | 0.537 | 0.387 | 0.620 | 0.390 | | 1.8 | 0.417 | 0.383 | 0.512 | 0.386 | 0.577 | 0.388 | | 2.1 | 0.431 | 0.383 | 0.582 | 0.387 | 0.706 | 0.390 | | 2.4 | 0.444 | 0.382 | 0.660 | 0.383 | 0.873 | 0.384 | | 2.7 | 0.444 | 0.380 | 0.658 | 0.375 | 0.867 | 0.370 | | 3.0 | 0.444 | 0.381 | 0.659 | 0.379 | 0.869 | 0.377 | | 3.5 | 0.431 | 0.382 | 0.582 | 0.381 | 0.705 | 0.380 | | 4.0 | 0.434 | 0.383 | 0.597 | 0.386 | 0.735 | 0.388 | | 4.5 | 0.435 | 0.382 | 0.606 | 0.381 | 0.753 | 0.380 | | 5.0 | 0.440 | 0.382 | 0.634 | 0.381 | 0.813 | 0.381 | | 5.5 | 0.437 | 0.382 | 0.614 | 0.381 | 0.770 | 0.381 | | 6.0 | 0.434 | 0.382 | 0.597 | 0.382 | 0.735 | 0.382 |
## 1 Introduction

The Sagittarius dwarf galaxy, discovered by Ibata et al (1994, 1995) at a distance of only 25 kpc, offers a unique opportunity to study the stellar populations of an external galaxy. Comparison of the colour-magnitude diagram of Sagittarius with those of Galactic globular clusters has suggested a spread in metallicity in the range $`-1.6\le `$ \[Fe/H\] $`\le -0.7`$ (Marconi et al 1998a). This may be the sign of a complex star-formation history characterized by several bursts. Because Sagittarius is some 16 kpc behind the Galactic Center, there is confusion between Sagittarius and Bulge stars. Sagittarius revealed itself as a population with a mean heliocentric radial velocity around 140 kms<sup>-1</sup> and a small velocity dispersion; probable membership may thus be ascribed on the basis of the radial velocity. Intermediate-resolution spectra allow radial velocities to be determined with a precision of around 20 kms<sup>-1</sup>, sufficient to confirm membership. Moreover, such low-resolution spectra may be used to obtain a crude estimate of the metallicity, which may be compared with the estimates based on the colour-magnitude diagrams. In this paper we report on radial velocities and abundances derived from grism spectra obtained with NTT+EMMI/MOS at ESO La Silla.

## 2 Observations

We used the multi-object-spectroscopy (MOS) mode of the EMMI instrument on the NTT 3.5m telescope at ESO La Silla. The resolving power was about 1500, and the usable spectral range was from about 480 nm to about 620 nm. We acquired spectra of 57 stars; the log of the observations and further details may be found in Marconi et al (1998b).

## 3 Radial velocities and abundances

The range 480-530 nm was used to determine heliocentric radial velocities using cross-correlation with synthetic spectra as templates. We considered the stars with heliocentric radial velocities in the range 100-180 kms<sup>-1</sup> to be members of Sagittarius, following Ibata et al (1997); this left us with a sample of 23 stars. In order to estimate abundances we defined six spectral indices which measure the Mgb triplet and some Fe and iron-peak element features. We developed an iterative procedure which makes use of the SYNTHE code (Kurucz 1993) to determine the abundances which best match the observed to the synthetic indices; details may be found in Marconi et al (1998b). The procedure requires that the atmospheric parameters T<sub>eff</sub>, log g and $`\xi `$ be fixed for each star. The colour $`(V-I)_0`$ of Marconi et al (1998a) was used to determine the effective temperatures using the calibration of Alonso et al (1996); this calibration refers to dwarfs only, but it is known theoretically that the $`(V-I)`$ colour depends only weakly on gravity; we expect the error introduced by neglecting the gravity dependence of $`(V-I)`$ to be of the order of 100 K. The isochrones of Straniero, Chieffi & Limongi (1997) for an age of 8 Gyr and \[Fe/H\]$`=-0.5`$ were used to estimate log g. Microturbulence cannot be determined from these intermediate-resolution data, so we performed the computations for both 1 kms<sup>-1</sup> and 2 kms<sup>-1</sup>; these values cover the range usually found in cool giants and allow the effect of microturbulence on the derived abundances to be estimated. This work is still in progress; results for 8 out of the 23 stars with a radial velocity consistent with Sagittarius membership are presented in Table 1 and displayed in Figure 1.
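As a concrete illustration of the membership selection and temperature assignment described above, the short Python sketch below applies the 100-180 kms<sup>-1</sup> velocity window and a linear colour-temperature relation. Both the input values and the linear calibration coefficients are invented placeholders; the actual analysis uses the measured velocities and the Alonso et al (1996) calibration.

```python
import numpy as np

# Invented placeholder measurements (heliocentric velocity, dereddened colour);
# the real values are those of Table 1.
v_helio = np.array([142.0, 55.0, 163.0, 118.0, -20.0, 137.0])   # km/s
VI0     = np.array([1.05, 1.30, 1.12, 0.98, 1.20, 1.08])        # (V-I)_0

# Membership: heliocentric radial velocity inside the Sagittarius window.
members = (v_helio > 100.0) & (v_helio < 180.0)
print(f"{members.sum()} of {v_helio.size} stars pass the 100-180 km/s cut")

# Hypothetical linear colour-temperature relation, standing in for the
# Alonso et al (1996) calibration used in the text.
T_eff = 9100.0 - 3800.0 * VI0[members]
print(np.round(T_eff))
```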
## 4 Discussion

Our stars were selected in order to highlight the spread in metallicity derived from the colour-magnitude diagram, and in fact among the stars analyzed we see a spread of over one dex. However, the metallicities range from super-solar to \[Fe/H\]$`\sim -1.0`$, about 0.5 dex more metal-rich than the range implied by the photometry. A systematic error in the spectroscopic or photometric (or both!) abundance estimates could reconcile the two results. However, closer inspection of figure 1 reveals that, of the 8 analyzed stars, 3 are solar or super-solar while the remaining 5 show little or no dispersion in metallicity. It is legitimate to ask whether the metal-rich stars belong in fact to the Bulge, in spite of their radial velocity. This hypothesis cannot be ruled out; we note, however, that the \[Mg/Fe\] ratios appear to be solar or sub-solar. The large errors associated with our data preclude any firm conclusion; however, taken at face value, the \[Mg/Fe\] in our metal-rich stars appears to be different from the enhanced ratios displayed by the Bulge K giants of McWilliam & Rich (1994), figure 20a. Our findings are in keeping with those of Smecker-Hane, McWilliam & Ibata (1998), who, on the basis of Keck HIRES spectra, found 2 out of 7 stars to be metal-rich, with the $`\alpha `$ elements O and Ca under-abundant with respect to solar.
# IMPROVED DETERMINATION OF THE $`b`$-QUARK MASS AT THE $`Z`$ PEAK<sup>1</sup>

<sup>1</sup>Talk given at the IVth International Symposium on Radiative Corrections (RADCOR98), Barcelona, Catalonia, Spain, 8-12 Sep 1998.

## 1 Introduction

Effects of the bottom-quark mass, $`m_b`$, were already noticed in the early tests of the flavour independence of the strong coupling constant, $`\alpha _s`$, in $`e^+e^{-}`$-annihilation at the $`Z`$-peak. Motivated by the remarkable sensitivity of three-jet observables to the value of the quark mass, the possibility of determining $`m_b`$ at LEP, assuming universality of the strong interactions, was considered . This question was analyzed in detail in , where the necessity of a next-to-leading order (NLO) calculation for the measurement of $`m_b`$ was also emphasized. The NLO calculation for the process $`e^+e^{-}\rightarrow 3\mathrm{jets}`$, with complete quark mass effects, has been performed independently by three groups . These predictions are in agreement with each other and were successfully used in the measurements of the $`b`$-quark mass far above threshold and in the precision tests of the universality of the strong interaction at the $`Z`$-pole. In this talk we give a short review of these calculations. Furthermore, prospects for future improvements are discussed. It is surprising that at high energies the bottom-quark mass could be relevant, since it appears screened by the center-of-mass energy: $`m_b^2/m_Z^2\sim 10^{-3}`$ at LEP. Nevertheless, when processes more exclusive than a total cross section are considered, like an n-jet cross section, mass effects can be enhanced to $`(m_b^2/m_Z^2)/y_c`$, where $`y_c`$ is the parameter that defines the jet multiplicity. Since quarks are not free particles, their mass can be considered like a coupling constant, and one has the freedom to use different quark mass definitions, e.g. the perturbative pole mass $`M_b`$ or the $`\overline{MS}`$ scheme running mass $`m_b(\mu )`$. Physics should be independent of this choice, but at a fixed order in perturbation theory there is a significant dependence on which mass definition is used, as well as on the renormalization scale $`\mu `$. The inclusion of higher orders, to reduce these two uncertainties due to the mass definition and the $`\mu `$ scale, is mandatory for an accurate description of mass effects.

## 2 Three jet observables and the measurement of $`m_b`$

The observable proposed some time ago to measure the bottom-quark mass at the $`Z`$-resonance was the ratio $$R_3^{bd}\equiv \frac{\mathrm{\Gamma }_{3j}^b(y_c)/\mathrm{\Gamma }^b}{\mathrm{\Gamma }_{3j}^d(y_c)/\mathrm{\Gamma }^d},$$ (1) where $`\mathrm{\Gamma }_{3j}^q`$ and $`\mathrm{\Gamma }^q`$ are the three-jet and the total decay widths of the $`Z`$-boson into a quark pair of flavour $`q`$ in a given jet-clustering algorithm. More precisely, the measured quantity is $$R_3^{b\ell }\equiv \frac{\mathrm{\Gamma }_{3j}^b(y_c)/\mathrm{\Gamma }^b}{\mathrm{\Gamma }_{3j}^{\ell }(y_c)/\mathrm{\Gamma }^{\ell }}=1+\frac{\alpha _s(\mu )}{\pi }a_0(y_c)+r_b\left(b_0(r_b,y_c)+\frac{\alpha _s(\mu )}{\pi }b_1(r_b,y_c)\right),$$ (2) where now the sum of the contributions of the three light flavours $`\ell =u,d,s`$ is included in the denominator. The $`R_3^{bd}`$ and $`R_3^{b\ell }`$ observables differ only by the function $`a_0`$ (which is zero for $`R_3^{bd}`$). This contribution originates from the triangle diagrams . It is numerically very small (0.002 for the Durham jet algorithm) and almost independent of the $`b`$-quark mass.
The $`b_0`$ and $`b_1`$ functions give, respectively, the leading-order (LO) and NLO mass corrections, once the leading dependence on $`r_b=M_b^2/m_Z^2`$, where $`M_b`$ is the bottom-quark pole mass, has been factorized out. Ratios of differential two-jet rates, where the two-jet width $`\mathrm{\Gamma }_{2j}`$ is calculated from the three- and the four-jet fractions through the identity $`\mathrm{\Gamma }_{2j}=\mathrm{\Gamma }-\mathrm{\Gamma }_{3j}-\mathrm{\Gamma }_{4j}`$, have been studied in . Ratios of event shape distributions have also been considered . Using the known relationship between the pole mass and the $`\overline{MS}`$ scheme running mass, $$M_b^2=m_b^2(\mu )\left[1+\frac{2\alpha _s(\mu )}{\pi }\left(\frac{4}{3}-\mathrm{log}\frac{m_b^2}{\mu ^2}\right)\right],$$ (3) we can re-express Eq. (2) in terms of the running mass $`m_b(\mu )`$. Then, keeping only terms of order $`𝒪(\alpha _s)`$ we obtain $`R_3^{b\ell }`$ $`=`$ $`1+{\displaystyle \frac{\alpha _s(\mu )}{\pi }}a_0(y_c)+\overline{r}_b(\mu )\left(b_0(\overline{r}_b,y_c)+{\displaystyle \frac{\alpha _s(\mu )}{\pi }}\overline{b}_1(\overline{r}_b,y_c,\mu )\right),`$ (4) where $`\overline{r}_b(\mu )=m_b^2(\mu )/m_Z^2`$ and $`\overline{b}_1=b_1+2b_0(4/3-\mathrm{log}r_b+\mathrm{log}(\mu ^2/m_Z^2))`$. Although at the perturbative level both expressions, Eq. (2) and Eq. (4), are equivalent, they give different answers, since different higher-order contributions have been neglected. The spread of the results gives an estimate of the size of higher-order corrections. In fig. 1 we present our results for $`R_3^{b\ell }`$ in the four clustering algorithms EM, Jade, E and Durham, where EM is a modification of the standard Jade scheme, convenient for massive parton calculations . For all the algorithms we plot the NLO results written either in terms of the pole mass, $`M_b=4.6`$ GeV, or in terms of the running mass at $`m_Z`$, $`m_b(m_Z)=2.83`$ GeV. The renormalization scale is fixed to $`\mu =m_Z`$ and $`\alpha _s(m_Z)=0.118`$. For comparison we also show $`R_3^{b\ell }`$ at LO when the value of the pole mass, $`M_b`$, or the running mass at $`m_Z`$, $`m_b(m_Z)`$, is used for the quark mass. Note the different behaviour of the different algorithms, in particular that of the $`E`$ algorithm. As already discussed in , in this algorithm the shift in the resolution parameter produced by the quark mass makes the mass corrections positive, while on kinematical grounds one would expect a negative effect, since massive quarks radiate fewer gluons than massless quarks. Furthermore, the NLO corrections are very large in the $`E`$ algorithm and strongly dependent on $`y_c`$. All this probably indicates that it is difficult to give an accurate QCD prediction for it. For the Jade algorithm the NLO correction written in terms of the pole mass starts to be large for $`y_c\lesssim 0.02`$. Note, however, that the NLO correction written in terms of the running mass is still kept in a reasonable range in this region. Durham, in contrast, is the algorithm that presents better behaviour for relatively low values of $`y_c`$ while keeping NLO corrections in a reasonable range. The theoretical predictions for the observables studied contain a residual dependence on the renormalization scale $`\mu `$. To give an idea of the uncertainties introduced by this, we plot in fig. 2a the observable $`R_3^{b\ell }`$ as a function of $`\mu `$ for a fixed value of $`y_c`$. Here we only present plots for the Durham algorithm, the one with the better behaviour.
We use the following one-loop evolution equations, $$a(\mu )=\frac{a(m_Z)}{K},m_b(\mu )=m_b(m_Z)K^{-\gamma _0/\beta _0},$$ (5) where $`a(\mu )=\alpha _s(\mu )/\pi `$ and $`K=1+a(m_Z)\beta _0\mathrm{log}(\mu ^2/m_Z^2)`$, with $`\beta _0=(11-2/3N_F)/4`$, $`\gamma _0=1`$ and $`N_F=5`$ the number of active flavours, to connect the running parameters at different scales. Conversely, for a given value of $`R_3^{b\ell }`$ we can solve Eq. (2) (or Eq. (4)) with respect to the quark mass. The result, shown in fig. 2b for $`R_3^{b\ell }(y_c=0.02)=0.973`$, depends on which equation was used and has a residual dependence on the renormalization scale $`\mu `$. The curves in fig. 2b are obtained in the following way: first, from Eq. (4) we directly obtain, for an arbitrary value of $`\mu `$ between $`m_Z`$ and $`m_Z/10`$, a value for the bottom-quark running mass at that scale, $`m_b(\mu )`$, and then using Eq. (5) we get a value for it at the $`Z`$-scale, $`m_b(m_Z)`$. Second, using Eq. (2) we extract, also for an arbitrary value of $`\mu `$ between $`m_Z`$ and $`m_Z/10`$, a value for the pole mass, $`M_b`$. Then we use Eq. (3) at $`\mu =M_b`$ and again Eq. (5) to perform the evolution from $`\mu =M_b`$ to $`\mu =m_Z`$ and finally get a value for $`m_b(m_Z)`$. The two procedures give a different answer since different higher orders have been neglected in the intermediate steps. The maximum spread of the two results, in this case of the order of $`\pm 200`$ MeV, can be interpreted as an estimate of the size of higher-order corrections, that is, of the theoretical error in the extraction of $`m_b(m_Z)`$ from the experimental measurement of $`R_3^{b\ell }`$. These calculations were used by the DELPHI Coll. to extract $`m_b(m_Z)`$ from the experimental measurement of $`R_3^{b\ell }`$, see fig. 3, and the result was interpreted as the first experimental evidence (at 2-3 sigmas) for the running of a fermion mass, since the data are better described by the NLO-$`m_b(m_Z)`$ curve. Also recently, the SLD Coll. has presented results for $`m_b(m_Z)`$ . The SLD analysis is compatible with the DELPHI measurement. Nevertheless, the central values of $`m_b(m_Z)`$ obtained from different clustering algorithms are scattered in the range $`\mathrm{\Delta }m_b(m_Z)=\pm 0.49`$ GeV. This is probably due to the fact that E-like algorithms, which are mainly used in this analysis, have huge NLO corrections, thus making accurate QCD predictions difficult.

## 3 Tests of the flavour independence of the strong interaction

Assuming a given $`b`$-quark mass, the NLO calculation of the heavy-quark three-jet production cross section can be used to perform an improved test of the flavour independence of the strong coupling constant. Such an analysis was done for the first time by the DELPHI Coll. in , using the $`R_3^{b\ell }`$ observable defined in the Durham algorithm. Recently, the OPAL Coll. has presented a similar analysis using different ratios of event shape distributions: $`D_2`$, $`1-T`$, $`M_H`$, $`B_T`$, $`B_W`$ and $`C`$. Instead, SLD has presented results from analyzing the $`R_3^{b\ell }`$ ratio in the E, E0, P, P0, Durham and Geneva algorithms. All these results are consistent with unity: no flavour dependence has been observed. Furthermore, the inclusion of mass effects is mandatory to achieve such agreement.
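The interplay between eqs. (3) and (5) can be made concrete with the short Python sketch below, which implements the one-loop running of $`\alpha _s`$ and $`m_b`$ and the one-loop pole-mass conversion. It is a minimal illustration only: the analyses described above are performed with the full NLO expressions, and the printed numbers are indicative rather than measured values.

```python
import numpy as np

MZ, ALPHAS_MZ, NF = 91.187, 0.118, 5
BETA0, GAMMA0 = (11.0 - 2.0 / 3.0 * NF) / 4.0, 1.0   # conventions of eq. (5)

def K(mu):
    return 1.0 + (ALPHAS_MZ / np.pi) * BETA0 * np.log(mu**2 / MZ**2)

def alpha_s(mu):                # one-loop running coupling, eq. (5)
    return ALPHAS_MZ / K(mu)

def mb_running(mb_mz, mu):      # one-loop running mass, eq. (5)
    return mb_mz * K(mu)**(-GAMMA0 / BETA0)

def pole_mass(mb_mu, mu):       # one-loop pole mass from eq. (3)
    a = alpha_s(mu) / np.pi
    return mb_mu * np.sqrt(1.0 + 2.0 * a * (4.0 / 3.0 - np.log(mb_mu**2 / mu**2)))

mb_mz = 2.83                    # GeV, the value quoted in the text
mb_low = mb_running(mb_mz, 4.2)             # evolve down to mu ~ m_b
print(f"m_b(4.2 GeV) ~ {mb_low:.2f} GeV")   # the mass runs up at low scales
print(f"M_b (pole)   ~ {pole_mass(mb_low, 4.2):.2f} GeV")
```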
## 4 Improving the $`b`$-quark mass measurements: the Cambridge algorithm

The Cambridge algorithm was introduced very recently in order to reduce the formation of the spurious jets, built from low-transverse-momentum particles, that appear in the Durham algorithm at low $`y_c`$. Therefore, compared to Durham, it allows smaller values of $`y_c`$ to be tested while still keeping the NLO corrections relatively small. Both algorithms are defined by the same recombination procedure and the same test variable $$y_{ij}=2min(E_i^2,E_j^2)(1-\mathrm{cos}\theta _{ij})/s,$$ (6) where $`E_i`$ and $`E_j`$ denote the energies of particles $`i`$ and $`j`$ and $`\theta _{ij}`$ is the angle between their momenta. The new ingredient of the Cambridge algorithm is the so-called ordering variable $$v_{ij}=2(1-\mathrm{cos}\theta _{ij}).$$ (7) In the Cambridge algorithm one first finds the minimal $`v_{ij}`$ and then tests $`y_{ij}`$. If $`y_{ij}<y_c`$, the $`i`$ and $`j`$ particles are recombined into a new pseudoparticle of momentum $`p_k=p_i+p_j`$, but if $`y_{ij}>y_c`$, the softer particle is resolved as a jet. The net effect of the new definition is that the NLO corrections to the three-jet fraction become smaller. In fig. 3 we present the preliminary results from the DELPHI Coll. for the $`R_3^{b\ell }`$ ratio defined in the Cambridge algorithm and compare them with our NLO calculation written in terms of the running mass at the $`Z`$ peak, NLO-$`m_b(m_Z)`$, or in terms of the pole mass, NLO-$`M_b`$, for $`\mu =m_Z`$ and $`\alpha _s(m_Z)=0.118`$. As in Durham, the NLO-$`m_b(m_Z)`$ curve gives the best agreement. Furthermore, the data are still compatible with the LO-$`m_b(m_Z)`$ curve, showing that the bulk of the higher-order corrections is described by the running of the $`b`$-quark mass. In contrast, although the data could also be well described by the NLO-$`M_b`$ curve, the NLO corrections become large when the pole mass parametrization is used. Studies of the NLO-$`m_b(m_Z)`$ curve show that it is remarkably stable with respect to the variation of the scale $`\mu `$. For the range $`m_Z/10<\mu <m_Z`$ the estimate of the error in the extracted $`m_b(m_Z)`$ is reduced to $`\pm 50`$ MeV in the Cambridge scheme, compared with $`\pm 200`$ MeV for Durham ($`\pm 125`$ MeV if only NLO-$`m_b(m_Z)`$ is considered). In contrast, when the NLO-$`M_b`$ parametrization is used we get $`\pm 240`$ MeV, strongly dependent on the lowest $`\mu `$ used.

## 5 Conclusions

In the last few years important progress has been made in the description of the $`Z`$-boson decay into three jets with massive quarks. Next-to-leading order calculations have been performed by three groups and have been successfully used in the analysis of the LEP and SLC data, where mass effects have been clearly seen. Further studies of different observables and different jet algorithms are aimed at reducing the theoretical uncertainty. One good candidate might be the Cambridge jet algorithm, where the NLO corrections are particularly small and where the predictions in terms of the running mass, $`m_b(m_Z)`$, are particularly stable with respect to the variation of the renormalization scale.

## Acknowledgments

We are very pleased to thank S. Cabrera, J. Fuster and S. Martí for an enjoyable collaboration. G.R. acknowledges a postdoctoral fellowship from INFN (Italy). Work supported in part by CICYT (Spain), AEN-96-1718 and DGESIC (Spain), PB97-1261.
# Thermoelectric Properties of Anisotropic Systems

## I Introduction

In the search to find systems with large thermoelectric figures of merit $`ZT`$, the emphasis has been more on new materials than on materials structures or crystallographic anisotropy. Typical of new structures are superlattices, quantum wells and quantum wires. It has generally been assumed that the direction of highest conductivity in an anisotropic material yields optimal thermoelectric properties. The correctness of this assumption is not obvious, since directions may exist along which the lattice thermal conductivity is abnormally low and the thermopower is high enough to result in large $`ZT`$ even though the electrical conductivity is less than maximum. This paper develops a more general transport theory of thermoelectricity in anisotropic systems. The "highest conductivity" assumption is found to be correct for materials having simple band structures, typically of the parabolic variety, and essentially isotropic lattice thermal conductivities. However, the formalism developed here suggests more generally that the optimal orientation corresponds to the principal direction along which the ratio of the electrical conductivity to the lattice thermal conductivity, $`\sigma /\kappa _{\ell }`$, is maximum. There are also several surprising results. These include the formation of possibly large induced transverse electric fields and temperature gradients; the fact that $`ZT`$ is strictly isotropic in anisotropic systems having parabolic bands if the lattice thermal conductivity is neglected; and the fact that a nearly isotropic thermopower and Lorentz number result under these conditions even if the bands are extremely anisotropic and non-parabolic. The macroscopic formalism, based on the tensorial form of the usual phenomenological transport equations, is presented in Sec. II. The effects of sample boundaries are included by requiring that the transverse electric and heat currents vanish at the transverse surfaces. Anisotropic and isotropic systems are shown to differ both qualitatively, through the presence of induced transverse fields, and quantitatively, through the magnitude of the transport coefficients. More detailed statements concerning optimal orientations require the use of a microscopic model. Section III introduces a model commonly used to study transport in semiconductors having multi-valley, anisotropic band structures. The transport properties follow from the linearized Boltzmann equation in the effective-mass and relaxation-time approximations. Intervalley scattering is neglected. The thermopower and Lorentz number somewhat surprisingly turn out to be isotropic. The same is true for $`ZT`$ when the lattice thermal conductivity $`\kappa _{\ell }`$ is neglected. Sec. III B shows that similar results hold for two- and one-dimensional systems. The realistic case corresponding to finite $`\kappa _{\ell }`$ is considered in Sec. III C. If $`\kappa _{\ell }`$ is sufficiently isotropic, the maximal $`ZT`$ is shown to occur for samples cut along the direction of highest electrical conductivity. This rather unsurprising result, however, may be modified if the anisotropy of $`\kappa _{\ell }`$ exceeds that of the electrical conductivity, leading to the conjecture concerning $`\sigma /\kappa _{\ell }`$ mentioned above. An explicit expression for the upper bound on $`ZT`$ is also derived, which is a generalization to anisotropic systems of that obtained previously.
This microscopic description is then applied to several materials of current interest. Sec. IV A examines bulk n-type $`\mathrm{Bi}_2\mathrm{Te}_3`$; in particular, the magnitude of the induced fields, their influence on the effective electrical conductivity, and the dependence of $`ZT`$ on sample orientation. HgTe/HgCdTe superlattices (SLs), considered in Sec. IV B, provide an example of systems which have a tunable anisotropy. For sufficiently large anisotropy, the induced fields can be much larger than the external one. Sec. IV C focuses on isolated n-type $`\mathrm{Bi}_2\mathrm{Te}_3`$ quantum wells. Quantum confinement is shown to split the valley degeneracy and modify the effective masses relative to their bulk values. These effects are seen to be important in determining $`ZT`$ quantitatively. The effects of non-parabolicity in superlattices on the thermopower and Lorentz number are investigated in Sec. V. Both quantities are seen to be nearly isotropic. The thermopower is also bounded by its values along the principal axes, supporting the conjecture that optimal $`ZT`$ is obtained along those directions.

## II Electronic Transport Theory

Consider a rectangular parallelepiped sample of an anisotropic material which is placed between a hot contact at $`x=0`$, temperature $`T_h`$, and a cold contact at $`x=L_x`$ of temperature $`T_c`$. An external electric field $`𝓔^{\mathrm{ext}}=(\mathcal{E}_{\parallel }^{\mathrm{ext}},0,0)`$ is applied to drive a current from the hotter to the colder electrode. Because of the anisotropy, the resulting electric current need not be parallel to the $`x`$-direction initially; instead it may have a transverse component that leads to an accumulation of charge on the faces of the sample, resulting in an induced polarization field. Similarly, the initial thermal current through the sample need not be parallel to $`x`$, and this may lead to an induced transverse temperature gradient. In the following, we compute the effect of these induced fields and temperature gradients on the electrical conductivity, thermopower, thermal conductivity, and $`ZT`$ using phenomenological linear-response theory. We assume that the transverse electrical and thermal currents vanish. The electric current is $$𝐉^e=\underset{¯}{\sigma }(𝓔-\underset{¯}{S}\nabla T)\equiv \underset{¯}{\sigma }𝐅$$ (1) where $`\underset{¯}{\sigma }`$ and $`\underset{¯}{S}`$ are the conductivity and Seebeck tensors, respectively. Write $`𝐅=(F_x,F_y,F_z)=(F_{\parallel },𝐅_{\perp })`$ with $`F_{\parallel }`$ the thermoelectric field along $`x`$ and $`𝐅_{\perp }`$ the induced transverse fields. The conductivity tensor takes the form $$\underset{¯}{\sigma }=\left(\begin{array}{cc}\sigma _{\parallel }& \underset{¯}{\sigma }_{od}^T\\ \underset{¯}{\sigma }_{od}& \underset{¯}{\sigma }_{\perp }\end{array}\right),$$ (2) where $`\sigma _{\parallel }`$ is the component of the conductivity along the $`x`$-direction, and $`\underset{¯}{\sigma }_{\perp }`$ is a $`2\times 2`$ tensor for the transverse $`y,z`$-directions. Since the full conductivity tensor is not block-diagonal, there will be a $`2\times 1`$ off-diagonal term $`\underset{¯}{\sigma }_{od}`$ and its transpose. Imposing the boundary condition $`𝐉^e=(J_{\parallel }^e,0,0)`$ on Eq. (1), corresponding to vanishing transverse current, gives $$\left(\begin{array}{c}J_{\parallel }^e\\ \mathrm{𝟎}\end{array}\right)=\underset{¯}{\sigma }\left(\begin{array}{c}F_{\parallel }\\ 𝐅_{\perp }\end{array}\right).$$ (3) Inserting Eq.
(2) into this expression leads to a linear equation for $`𝐅_{\perp }`$ with solution $$𝐅_{\perp }=-\underset{¯}{\sigma }_{\perp }^{-1}\underset{¯}{\sigma }_{od}F_{\parallel }.$$ (4) The off-diagonal elements of $`\underset{¯}{\sigma }`$ induce a current $$J_{\parallel }^{\mathrm{e},\mathrm{ind}}=\underset{¯}{\sigma }_{od}^T𝐅_{\perp }=-\underset{¯}{\sigma }_{od}^T\underset{¯}{\sigma }_{\perp }^{-1}\underset{¯}{\sigma }_{od}F_{\parallel }$$ (5) along the $`x`$-direction. Since $`\underset{¯}{\sigma }_{\perp }`$ and $`\underset{¯}{\sigma }_{\perp }^{-1}`$ have positive eigenvalues, the quadratic form $`\underset{¯}{\sigma }_{od}^T\underset{¯}{\sigma }_{\perp }^{-1}\underset{¯}{\sigma }_{od}`$ is positive definite. Thus, the induced current opposes the external current $`J_{\parallel }^{\mathrm{e},\mathrm{ext}}=\sigma _{\parallel }F_{\parallel }`$. The total current is $$J_{\parallel }^e=J_{\parallel }^{\mathrm{e},\mathrm{ext}}+J_{\parallel }^{\mathrm{e},\mathrm{ind}}=\sigma _1F_{\parallel }$$ (6) with $`\sigma _1`$ $`=`$ $`\sigma _{\parallel }-\underset{¯}{\sigma }_{od}^T\underset{¯}{\sigma }_{\perp }^{-1}\underset{¯}{\sigma }_{od}`$ (7) $`=`$ $`\sigma _{xx}-\sigma _{xy}{\displaystyle \frac{\sigma _{yx}\sigma _{zz}-\sigma _{yz}\sigma _{zx}}{\sigma _{yy}\sigma _{zz}-\sigma _{yz}\sigma _{zy}}}-\sigma _{xz}{\displaystyle \frac{\sigma _{zx}\sigma _{yy}-\sigma _{zy}\sigma _{yx}}{\sigma _{yy}\sigma _{zz}-\sigma _{yz}\sigma _{zy}}}.`$ (8) The induced fields therefore lead to a reduced conductivity: $`\sigma _1\le \sigma _{\parallel }`$. If $`\underset{¯}{S}`$ is diagonal, $`\sigma _1`$ is the effective conductivity; otherwise, the induced temperature gradients give rise to additional terms considered below. In the isotropic case, the induced fields vanish, and $`\sigma _1=\sigma _{\parallel }`$. Note that in general $`\sigma _1\ge 0`$ because $$\sigma _1F_{\parallel }^2=J_{\parallel }^eF_{\parallel }=𝐅^T𝐉^e=𝐅^T\underset{¯}{\sigma }𝐅\ge 0.$$ (9) The heat current $`𝐉^Q`$ is given by $$𝐉^Q=T\underset{¯}{S}𝐉^e-\underset{¯}{\kappa }\nabla T,$$ (10) where $`\underset{¯}{\kappa }`$ is the thermal conductivity and the temperature gradient is $`\nabla T=(\nabla _xT,\nabla _yT,\nabla _zT)=(\nabla _{\parallel }T,\nabla _{\perp }T)`$. Using the same decomposition as in Eq. (2) for $`\underset{¯}{\kappa }`$ and $`\underset{¯}{S}`$ in Eq. (10) and setting the transverse components of the heat current to zero yields $$\left(\begin{array}{c}J_{\parallel }^Q\\ \mathrm{𝟎}\end{array}\right)=\left(\begin{array}{c}TS_{\parallel }J_{\parallel }^e-\kappa _{\parallel }\nabla _{\parallel }T-\underset{¯}{\kappa }_{od}^T\nabla _{\perp }T\\ T\underset{¯}{S}_{od}J_{\parallel }^e-\underset{¯}{\kappa }_{od}\nabla _{\parallel }T-\underset{¯}{\kappa }_{\perp }\nabla _{\perp }T\end{array}\right).$$ (11) Solving for the induced temperature gradients yields $$\nabla _{\perp }T=\underset{¯}{\kappa }_{\perp }^{-1}(T\underset{¯}{S}_{od}J_{\parallel }^e-\underset{¯}{\kappa }_{od}\nabla _{\parallel }T).$$ (12) Substituting back into Eq. (11) gives $$J_{\parallel }^Q=TS_{\mathrm{eff}}J_{\parallel }^e-\kappa _{\mathrm{eff}}\nabla _{\parallel }T$$ (13) with $`\kappa _{\mathrm{eff}}`$ $`=`$ $`\kappa _{\parallel }-\underset{¯}{\kappa }_{od}^T\underset{¯}{\kappa }_{\perp }^{-1}\underset{¯}{\kappa }_{od}`$ (14) $`S_{\mathrm{eff}}`$ $`=`$ $`S_{\parallel }-\underset{¯}{\kappa }_{od}^T\underset{¯}{\kappa }_{\perp }^{-1}\underset{¯}{S}_{od}.`$ (15) Observe that $`\sigma _1`$, Eq. (7), and $`\kappa _{\mathrm{eff}}`$, Eq. (14), have the same form. As in the $`\sigma _1`$ case, the induced temperature gradients produce an induced heat current along $`x`$ which opposes the external heat current $`-\kappa _{\parallel }\nabla _{\parallel }T`$. The net result is a reduced effective thermal conductivity: $`0\le \kappa _{\mathrm{eff}}\le \kappa _{\parallel }`$. Returning to Eq. (6), we have $$J_{\parallel }^e=\sigma _1F_{\parallel }=\sigma _1\left(\mathcal{E}_{\parallel }-S_{\parallel }\nabla _{\parallel }T-\underset{¯}{S}_{od}^T\nabla _{\perp }T\right).$$ (16) Substituting Eq. (12) into Eq.
(6) leads to $$J_{\parallel }^e=\sigma _{\mathrm{eff}}(\mathcal{E}_{\parallel }-S_{\mathrm{eff}}\nabla _{\parallel }T),$$ (17) with $$\sigma _{\mathrm{eff}}=\frac{\sigma _1}{1+\sigma _1T\underset{¯}{S}_{od}^T\underset{¯}{\kappa }_{\perp }^{-1}\underset{¯}{S}_{od}}$$ (18) and $$S_{\mathrm{eff}}=S_{\parallel }-\underset{¯}{S}_{od}^T\underset{¯}{\kappa }_{\perp }^{-1}\underset{¯}{\kappa }_{od}.$$ (19) From the properties of $`\sigma _1`$, $`0\le \sigma _{\mathrm{eff}}\le \sigma _{\parallel }`$. Therefore, induced fields and temperature gradients always reduce the effective conductivity. Also, $`S_{\mathrm{eff}}`$ in Eq. (19) is the same as $`S_{\mathrm{eff}}`$ in Eq. (15) because, by the Onsager relations, $`\underset{¯}{\kappa }_{\perp }`$ and hence $`\underset{¯}{\kappa }_{\perp }^{-1}`$ are symmetric. In the steady state, the energy-conservation equation reads $$\nabla \cdot 𝐉^Q=𝓔\cdot 𝐉^e.$$ (20) Using Eqs. (10) and (1), and assuming that the Thompson heat is negligible, $$\nabla \cdot 𝐉^Q=\nabla \cdot (T\underset{¯}{S}𝐉^e-\underset{¯}{\kappa }\nabla T)=\nabla T\cdot \underset{¯}{S}𝐉^e-\nabla \cdot (\underset{¯}{\kappa }\nabla T)$$ (21) and $$𝓔\cdot 𝐉^e=𝐉^e\cdot (\underset{¯}{\sigma }^{-1}𝐉^e+\underset{¯}{S}\nabla T)=𝐉^e\cdot \underset{¯}{\sigma }^{-1}𝐉^e+𝐉^e\cdot \underset{¯}{S}\nabla T.$$ (22) Thus, Eq. (20) becomes $$𝐉^e\cdot \underset{¯}{\sigma }^{-1}𝐉^e+\nabla \cdot (\underset{¯}{\kappa }\nabla T)=0.$$ (23) Since $`𝐉^e=(J_{\parallel }^e,0,0)`$, the first term is $`(\underset{¯}{\sigma }^{-1})_{11}J_{\parallel }^{e2}=(det\underset{¯}{\sigma }_{\perp }/det\underset{¯}{\sigma })J_{\parallel }^{e2}=\sigma _1^{-1}J_{\parallel }^{e2}`$. The last step is justified by expanding the determinants and comparing to Eq. (8). With $`\nabla T=(\nabla _{\parallel }T,\nabla _{\perp }T)`$ and using Eq. (12), the second term is $$\nabla \cdot \left(\begin{array}{c}\kappa _{\parallel }\nabla _{\parallel }T+\underset{¯}{\kappa }_{od}^T\nabla _{\perp }T\\ \underset{¯}{\kappa }_{od}\nabla _{\parallel }T+\underset{¯}{\kappa }_{\perp }\nabla _{\perp }T\end{array}\right)=\nabla \cdot \left(\begin{array}{c}(\kappa _{\parallel }-\underset{¯}{\kappa }_{od}^T\underset{¯}{\kappa }_{\perp }^{-1}\underset{¯}{\kappa }_{od})\nabla _{\parallel }T+\underset{¯}{\kappa }_{od}^T\underset{¯}{\kappa }_{\perp }^{-1}\underset{¯}{S}_{od}TJ_{\parallel }^e\\ T\underset{¯}{S}_{od}J_{\parallel }^e\end{array}\right)$$ (24) $$=\kappa _{\mathrm{eff}}\nabla _{\parallel }^2T+\underset{¯}{S}_{od}^T\underset{¯}{\kappa }_{\perp }^{-1}\underset{¯}{S}_{od}TJ_{\parallel }^{e2}.$$ (25) Therefore Eq. (20) reduces to $$\sigma _{\mathrm{eff}}^{-1}J_{\parallel }^{e2}+\kappa _{\mathrm{eff}}\nabla _{\parallel }^2T=0.$$ (26) With the help of Eqs. (17), (13) and (26), the thermoelectric figure of merit becomes $$ZT=T\sigma _{\mathrm{eff}}S_{\mathrm{eff}}^2/\kappa _{\mathrm{eff}}.$$ (27) The transport coefficients are simply replaced by their effective versions. These results are derived more succinctly using a general formalism based directly on the Onsager coefficients in Appendix A. Rigorous bounds can be placed on the magnitude and sign of $`\sigma _{\mathrm{eff}}`$ and $`\kappa _{\mathrm{eff}}`$, but not on $`S_{\mathrm{eff}}`$ without introducing a microscopic model that relates these transport coefficients. The next Section considers an example of such a model.
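The block-matrix algebra of Eqs. (7), (14), (18) and (19) is compactly summarized by the following sketch in Python/NumPy, which evaluates the effective coefficients for an arbitrary sample orientation. The material numbers in the example are invented for illustration; for an isotropic Seebeck tensor the off-diagonal Seebeck corrections vanish and $`\sigma _{\mathrm{eff}}`$ reduces to $`\sigma _1`$.

```python
import numpy as np

def effective_coeffs(sigma, S, kappa, T):
    """Effective transport coefficients for current flow along x with open
    transverse boundaries, following Eqs. (7), (14), (18) and (19).
    sigma, S, kappa are the 3x3 conductivity, Seebeck and thermal-conductivity
    tensors of Eqs. (1) and (10); T is the temperature."""
    def blocks(A):              # longitudinal, off-diagonal, transverse parts
        return A[0, 0], A[1:, 0], A[1:, 1:]
    s_par, s_od, s_tt = blocks(sigma)
    S_par, S_od, _    = blocks(S)
    k_par, k_od, k_tt = blocks(kappa)

    k_inv = np.linalg.inv(k_tt)
    sigma1    = s_par - s_od @ np.linalg.inv(s_tt) @ s_od          # Eq. (7)
    kappa_eff = k_par - k_od @ k_inv @ k_od                        # Eq. (14)
    S_eff     = S_par - S_od @ k_inv @ k_od                        # Eq. (19)
    sigma_eff = sigma1 / (1.0 + sigma1 * T * S_od @ k_inv @ S_od)  # Eq. (18)
    return sigma_eff, S_eff, kappa_eff

# Example: a uniaxial material rotated by 45 degrees about z (made-up numbers).
c, s = np.cos(np.pi / 4.0), np.sin(np.pi / 4.0)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
rot = lambda A: R @ A @ R.T
sigma_p = np.diag([4.0e4, 1.0e4, 1.0e4])     # S/m, principal frame
kappa_p = np.diag([2.0, 1.5, 1.5])           # W/m K
S_p     = 2.0e-4 * np.eye(3)                 # V/K, isotropic Seebeck

print(effective_coeffs(rot(sigma_p), rot(S_p), rot(kappa_p), T=300.0))
# With isotropic S the induced-gradient correction vanishes, and the output
# sigma_eff = sigma_1 = 1.6e4 S/m (the harmonic mean) < sigma_xx = 2.5e4 S/m.
```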
## III Microscopic Model: Parabolic Bands

### A Three-Dimensional Structures

According to semiclassical transport theory, the Boltzmann equation in the relaxation-time approximation yields the transport coefficients $`\sigma _{ij}`$ $`=`$ $`e^2\int 𝑑\epsilon \,(-\partial f_0/\partial \epsilon )\mathrm{\Sigma }_{ij}(\epsilon )`$ (28), $`T(\sigma S)_{ij}`$ $`=`$ $`e\int 𝑑\epsilon \,(-\partial f_0/\partial \epsilon )\mathrm{\Sigma }_{ij}(\epsilon )(\epsilon -\mu )`$ (29), $`T\kappa _{0,ij}`$ $`=`$ $`\int 𝑑\epsilon \,(-\partial f_0/\partial \epsilon )\mathrm{\Sigma }_{ij}(\epsilon )(\epsilon -\mu )^2`$ (30), where $`f_0`$ is the Fermi-Dirac distribution $`f_0(\epsilon )=1/(\mathrm{exp}((\epsilon -\mu )/k_BT)+1)`$, $`\mu `$ the chemical potential, and $$\mathrm{\Sigma }_{ij}(\epsilon )=\int \frac{2d^3𝐤}{(2\pi )^3}v_i(𝐤)v_j(𝐤)\tau (𝐤)\delta (\epsilon -\epsilon (𝐤))$$ (31) are the components of the transport distribution tensor, the generalization of the function discussed by Mahan and Sofo. Here $`\epsilon (𝐤)`$ is the electronic dispersion relation, $`v_i(𝐤)=\hbar ^{-1}\partial \epsilon (𝐤)/\partial k_i`$ the electronic group velocity, and $`\tau (𝐤)`$ the relaxation time. Note that $`\underset{¯}{\kappa }_0`$ is the electronic thermal conductivity at zero electrochemical-potential gradient inside the sample; the usual electronic thermal conductivity at zero electric current, $`\underset{¯}{\kappa }_e`$, is given in terms of $`\underset{¯}{\kappa }_0`$ by $`\underset{¯}{\kappa }_e=\underset{¯}{\kappa }_0-T\underset{¯}{\sigma }\underset{¯}{S}^2`$. The microscopic model to be used here assumes the conduction to be taking place in a single parabolic band having $`N`$ degenerate valleys centered at $`𝐤^{(n)},n=1,\mathrm{},N`$, respectively. The dispersion relation for each valley is $$\epsilon ^{(n)}(𝐤)=\epsilon _0+(\hbar ^2/2)\underset{i,j}{}(k_i-k_i^{(n)})M_{ij}^{(n)-1}(k_j-k_j^{(n)})$$ (32) with $`\underset{¯}{M}^{(n)-1}`$ the inverse effective-mass tensor. The corresponding group velocity is $$v_i^{(n)}(𝐤)=\hbar ^{-1}\partial \epsilon (𝐤)/\partial k_i=\hbar \underset{j}{}M_{ij}^{(n)-1}(k_j-k_j^{(n)}).$$ (33) Intervalley scattering will be neglected. Thus the transport distribution tensor involves just a sum over the $`N`$ valleys. Assuming the relaxation time to be a function of energy alone, $`\tau (𝐤)=\tau (\epsilon (𝐤))`$, and independent of crystal orientation, $`\mathrm{\Sigma }_{ij}(\epsilon )`$ $`=`$ $`\tau (\epsilon ){\displaystyle \underset{n=1}{\overset{N}{}}}{\displaystyle \int \frac{2d^3𝐤}{(2\pi )^3}\hbar ^2\underset{i^{},j^{}}{}M_{ii^{}}^{(n)-1}k_i^{}M_{jj^{}}^{(n)-1}k_j^{}\delta (\epsilon -\epsilon (𝐤+𝐤^{(n)}))}`$ (34) $`=`$ $`\tau (\epsilon )\hbar ^2{\displaystyle \underset{n=1}{\overset{N}{}}}{\displaystyle \underset{i^{}j^{}}{}}M_{ii^{}}^{(n)-1}M_{jj^{}}^{(n)-1}{\displaystyle \int k_i^{}k_j^{}\delta (\epsilon -𝐤\underset{¯}{X}^{(n)}𝐤)\frac{2d^3𝐤}{(2\pi )^3}}`$ (35) $`=`$ $`{\displaystyle \frac{2\tau (\epsilon )\hbar ^2}{(2\pi )^3}}{\displaystyle \underset{n=1}{\overset{N}{}}}{\displaystyle \underset{i^{}j^{}}{}}M_{ii^{}}^{(n)-1}M_{jj^{}}^{(n)-1}\left[-{\displaystyle \frac{\partial }{\partial X_{i^{}j^{}}^{(n)}}}{\displaystyle \int \mathrm{\Theta }(\epsilon -𝐤\underset{¯}{X}^{(n)}𝐤)d^3𝐤}\right]`$ (36) $`=`$ $`{\displaystyle \frac{2^{3/2}\tau (\epsilon )\epsilon ^{3/2}}{3\pi ^2\hbar ^3}}{\displaystyle \underset{n=1}{\overset{N}{}}}\sqrt{det\underset{¯}{M}^{(n)}}M_{ij}^{(n)-1},`$ (37) where the $`X_{ij}^{(n)}`$ are the components of $`\underset{¯}{X}^{(n)}=(\hbar ^2/2)\underset{¯}{M}^{(n)-1}`$.
This transformation relies explicitly on the validity of the effective-mass approximation, the neglect of intervalley scattering, and the relaxation-time approximation used here. The square bracket in Eq. (36) is evaluated using the identity $$\frac{\partial }{\partial X_{ij}^{(n)}}det\underset{¯}{X}^{(n)}=(\mathrm{det}\underset{¯}{X}^{(n)})X_{ij}^{(n)-1}$$ (38) and the change of variable $`𝐤^{}=\sqrt{\underset{¯}{X}^{(n)}}𝐤`$. Thus, $$\underset{¯}{\mathrm{\Sigma }}(\epsilon )=\underset{¯}{A}𝒯(\epsilon )$$ (39) with $$\underset{¯}{A}=\underset{n=1}{\overset{N}{}}\left(m_0^{-1/2}\sqrt{\mathrm{det}\underset{¯}{M}^{(n)}}\right)\underset{¯}{M}^{(n)-1}$$ (40) and $$𝒯(\epsilon )=\frac{2^{3/2}m_0^{1/2}}{3\pi ^2\hbar ^3}\epsilon ^{3/2}\tau (\epsilon ).$$ (41) The constant, dimensionless matrix $`\underset{¯}{A}`$ contains the full tensorial structure and separates it from the energy dependence in $`𝒯(\epsilon )`$. The conductivity then becomes $$\underset{¯}{\sigma }=e^2\int 𝑑\epsilon \,(-\partial f_0/\partial \epsilon )𝒯(\epsilon )\underset{¯}{A}\equiv \sigma _0\underset{¯}{A}.$$ (42) Since $$\underset{¯}{(\sigma S)}=(e/T)\int 𝑑\epsilon \,(-\partial f_0/\partial \epsilon )𝒯(\epsilon )(\epsilon -\mu )\underset{¯}{A}\equiv \underset{¯}{A}\sigma _0S_0,$$ (43) $$\underset{¯}{S}=\underset{¯}{\sigma }^{-1}(\underset{¯}{\sigma S})=S_0\underset{¯}{1}.$$ (44) The Seebeck tensor is therefore necessarily isotropic. Further, $$\underset{¯}{\kappa }_0=\underset{¯}{A}\kappa _0$$ (45) with $$\kappa _0=T^{-1}\int 𝑑\epsilon \,(-\partial f_0/\partial \epsilon )𝒯(\epsilon )(\epsilon -\mu )^2.$$ (46) The zero-current electronic thermal conductivity becomes $$\underset{¯}{\kappa }_e=\underset{¯}{\kappa }_0-T\underset{¯}{\sigma }\underset{¯}{S}^2\equiv \underset{¯}{A}\kappa _e$$ (47) for $`\kappa _e=\kappa _0-T\sigma _0S_0^2`$. These results lead to the following surprising conclusion: when the lattice thermal conductivity is neglected, then, within the effective-mass approximation as specified here, the thermoelectric figure of merit $`ZT`$ is independent of the sample orientation. Note that $$\underset{¯}{\kappa }_e\underset{¯}{\sigma }^{-1}=\underset{¯}{A}\kappa _e\underset{¯}{A}^{-1}/\sigma _0=(\kappa _e/\sigma _0)\underset{¯}{1}\equiv L_0T\underset{¯}{1},$$ (48) where the Lorentz number $`L_0=\kappa _e/\sigma _0T`$. Thus, $`\kappa _{e,ij}=L_0T\sigma _{ij}`$ and $$\kappa _{\mathrm{eff}}=L_0T\sigma _{\mathrm{eff}}$$ (49) as expected. Finally, since $`\underset{¯}{S}`$ is isotropic, $`S_{\mathrm{eff}}=S_0`$. Combining these effective transport coefficients yields $$ZT=T\sigma _{\mathrm{eff}}S_{\mathrm{eff}}^2/\kappa _{\mathrm{eff}}=S_0^2/L_0,$$ (50) a constant independent of direction.

### B Lower-dimensional Structures

Dimensionality enters the transport coefficients through the $`𝐤`$-space integrals $$d^3k\rightarrow (2\pi /L_z)d^2k\rightarrow ((2\pi )^2/L_yL_z)dk$$ (51) for three, two or one dimensions, respectively, where $`L_y`$ and $`L_z`$ are the sample sizes in the $`y`$\- and $`z`$-directions. Dimensionality also enters through the confinement energies and effective masses for carriers constrained to move in a lower-dimensional device. For the two-dimensional case with $`𝓔=(\mathcal{E}_x^{\mathrm{ext}},\mathcal{E}_y^{\mathrm{ind}})`$ and $`\nabla T=(\nabla _xT,\nabla _yT)`$, the induced fields are given by Eqs. (4) and (12), and the transport coefficients by Eqs. (14), (15), and (18) with $`\underset{¯}{\sigma }_{\perp }`$ replaced by $`\sigma _{yy}`$ and $`\underset{¯}{\sigma }_{od}`$ replaced by $`\sigma _{yx}`$; similarly for the other transport coefficients. The analogue of Eq. (31) for the components of the transport distribution tensor is obtained within the effective-mass approximation using Eq.
(51): $`\mathrm{\Sigma }_{ij}(\epsilon )`$ $`=`$ $`{\displaystyle \frac{2\tau (\epsilon )\hbar ^2}{4\pi ^2L_z}}{\displaystyle \underset{n=1}{\overset{N}{}}}{\displaystyle \underset{i^{}j^{}}{}}M_{ii^{}}^{(n)-1}M_{jj^{}}^{(n)-1}{\displaystyle \int d^2𝐤\,k_i^{}k_j^{}\delta (\epsilon -𝐤\underset{¯}{X}^{(n)}𝐤)}.`$ (52) This expression has the same form as Eq. (39) with $$\underset{¯}{A}=\underset{n=1}{\overset{N}{}}\sqrt{\mathrm{det}\underset{¯}{M}^{(n)}}\underset{¯}{M}^{(n)-1}$$ (53) and $$𝒯(\epsilon )=\epsilon \tau (\epsilon )/\pi \hbar ^2L_z.$$ (54) These equations are to be contrasted with Eqs. (40) and (41) for the three-dimensional case. Thus, analogously to Eqs. (42), (44), and (47), $`\underset{¯}{\sigma }=\underset{¯}{A}\sigma _0`$, $`\underset{¯}{S}=S_0\underset{¯}{1}`$ and $`\underset{¯}{\kappa }_e=\underset{¯}{A}\kappa _e`$, where the two-dimensional $`\underset{¯}{A}`$ and $`𝒯(\epsilon )`$ must be used in defining $`\sigma _0`$, $`S_0`$ and $`\kappa _0`$. Finally, $`\underset{¯}{\kappa }_e\underset{¯}{\sigma }^{-1}=L_0T\underset{¯}{1}`$ so that $$\kappa _{\mathrm{eff}}=L_0T\sigma _{\mathrm{eff}}.$$ (55) Also, $`S_{\mathrm{eff}}=S_0`$ because $`\underset{¯}{S}`$ is isotropic in two dimensions as well. The corresponding figure of merit $$ZT=T\sigma _{\mathrm{eff}}S_{\mathrm{eff}}^2/\kappa _{\mathrm{eff}}=S_0^2/L_0$$ (56) is again independent of direction, as in three dimensions. It is seen that the two-dimensional and three-dimensional results are entirely analogous and that the former are obtained from the latter by taking the limit as one of the effective masses tends to infinity. In this limit, the ellipsoidal surface in $`𝐤`$-space of constant energy becomes increasingly prolate, until it reaches the edge of the Brillouin zone, after which it assumes a cylindrical shape extending from $`-\pi /L_z`$ to $`\pi /L_z`$ upon further increase in the effective-mass parameter. Furthermore, $`ZT`$ in two dimensions can be enhanced due to the factor of $`1/L_z`$ in the density of states, which, as pointed out in Ref. , becomes large for small thicknesses. In the one-dimensional case there is no transport in the transverse direction. Thus there are no transverse fields and the microscopic and effective transport coefficients are the same. Moreover, all transport coefficients are scalars. The transport distribution function is found to be $$\mathrm{\Sigma }(\epsilon )=\underset{n=1}{\overset{N}{}}\frac{2}{\pi L_yL_z}\sqrt{\frac{2\epsilon }{\hbar ^2m_x^{(n)}}}\tau (\epsilon ).$$ (57) Just as in two dimensions, the $`1/L_yL_z`$ factor leads to an enhancement of the density of states and thus of $`ZT`$ for thin wires. (The wire becomes approximately one-dimensional when it is thin enough that the confinement energy places all states associated with its three-dimensional structure sufficiently high in energy that they do not contribute.)

### C Implications for $`ZT`$

We shall now derive an upper bound for $`ZT`$ of the Sofo and Mahan form and show that the highest-conductivity direction gives optimal values for $`ZT`$. The inclusion of an isotropic lattice thermal conductivity $`\kappa _{\ell }`$ causes $`ZT`$ to lose its isotropy. The figure of merit including an isotropic lattice thermal conductivity $`\underset{¯}{\kappa }_{\ell }=\kappa _{\ell }\underset{¯}{1}`$ may be written in the form $$ZT=T\sigma _{\mathrm{eff}}S_{\mathrm{eff}}^2/\kappa _{\mathrm{eff}}$$ (58) where $`\sigma _{\mathrm{eff}}`$ and $`S_{\mathrm{eff}}`$ are as in Sec. II.
Here $`\kappa _{\mathrm{eff}}=\kappa _e^{}+\kappa _{\ell }`$ with $$\kappa _e^{}=\kappa _{e,xx}-\kappa _{e,xy}\frac{\kappa _{e,yx}(\kappa _{e,zz}+\kappa _{\ell })-\kappa _{e,yz}\kappa _{e,zx}}{(\kappa _{e,yy}+\kappa _{\ell })(\kappa _{e,zz}+\kappa _{\ell })-\kappa _{e,yz}\kappa _{e,zy}}-\kappa _{e,xz}\frac{\kappa _{e,zx}(\kappa _{e,yy}+\kappa _{\ell })-\kappa _{e,zy}\kappa _{e,yx}}{(\kappa _{e,yy}+\kappa _{\ell })(\kappa _{e,zz}+\kappa _{\ell })-\kappa _{e,yz}\kappa _{e,zy}}$$ (60) defining the electronic thermal conductivity in the presence of the non-vanishing $`\kappa _{\ell }`$ and the sample boundaries. As shown in Appendix B, the upper bound on $`ZT`$ is given by $$ZT\le a_0(\kappa _0/\kappa _{\ell }).$$ (61) The dimensionless quantity $`a_0`$ is defined by Eq. (B2). In the isotropic case, $`a_0=1`$; in the present case, $`a_0`$ is of order unity. The maximum value of $`\xi `$, defined in Ref. and in Appendix B \[Eq. (B14)\], of unity is attained if and only if $`𝒯(\epsilon )`$ is proportional to a $`\delta `$-function. In the more physical case $`𝒯(\epsilon )\propto \epsilon ^r`$, with $`r`$ varying between -0.5 and 2, numerical computations show that $`\xi `$ tends to 1 as $`\mu /k_BT\rightarrow -\mathrm{\infty }`$ and to zero as $`\mu /k_BT\rightarrow \mathrm{\infty }`$. For $`\mu /k_BT=0`$, $`\xi `$ ranges from 0.5 to 0.8. Thus, the upper bound can be reached at the cost of going to low carrier concentrations, whereas higher carrier concentrations imply smaller $`\xi `$. The optimum concentration lies somewhere in between. We now show that, in the effective-mass, relaxation-time, no-intervalley-scattering, and isotropic-thermal-conductivity approximations, $`ZT`$ is highest in the direction of maximum electrical conductivity. In the anisotropic case, $`\sigma _{\mathrm{eff}}`$ and $`\kappa _{\mathrm{eff}}`$ have an angular dependence obtained in the frame of the sample by a rotation from their common principal frame. $`S_{\mathrm{eff}}`$ does not, because in our microscopic model $`\underset{¯}{S}`$ is isotropic. $`ZT`$ is therefore also anisotropic. Let now $`\underset{¯}{P}`$ be any symmetric and positive matrix and $`\lambda \ge 0`$ a positive number; then, by the properties of positive matrices, $`\underset{¯}{P}^{-1}\ge (\underset{¯}{P}+\lambda \underset{¯}{1})^{-1}`$. For the application to thermoelectrics let $$\underset{¯}{\kappa }_e=\left(\begin{array}{cc}\kappa _{\parallel }& \underset{¯}{\kappa }_{od}^T\\ \underset{¯}{\kappa }_{od}& \underset{¯}{\kappa }_{\perp }\end{array}\right).$$ (62) In analogy with Eq.
(14) we find $$\kappa _e^{}=\kappa _{\parallel }-\underset{¯}{\kappa }_{od}^T(\underset{¯}{\kappa }_{\perp }+\kappa _{\ell }\underset{¯}{1})^{-1}\underset{¯}{\kappa }_{od}.$$ (63) Taking $`\underset{¯}{P}=\underset{¯}{\kappa }_{\perp }`$ and $`\lambda =\kappa _{\ell }`$, the relation $`\underset{¯}{P}^{-1}\ge (\underset{¯}{P}+\lambda \underset{¯}{1})^{-1}`$ shows that $$\underset{¯}{\kappa }_{od}^T(\underset{¯}{\kappa }_{\perp }+\kappa _{\ell }\underset{¯}{1})^{-1}\underset{¯}{\kappa }_{od}\le \underset{¯}{\kappa }_{od}^T\underset{¯}{\kappa }_{\perp }^{-1}\underset{¯}{\kappa }_{od}.$$ (64) Thus, for any $`\kappa _{\ell }`$, $$\kappa _e^{}(\kappa _{\ell })\ge \kappa _e^{}(\kappa _{\ell }=0)=(\kappa _e/\sigma _0)\sigma _{\mathrm{eff}}.$$ (65) This shows that the ratio $`\kappa _e^{}/\sigma _{\mathrm{eff}}`$ is minimized when the axis of current flow in steady state lies along one of the principal directions, where, whatever the value of $`\kappa _{\ell }`$, the induced-field-related terms vanish and so equality obtains in Eq. (65). Writing $`ZT`$ in the form $$ZT=\frac{S_0^2}{\kappa _e^{}/T\sigma _{\mathrm{eff}}+\kappa _{\ell }/T\sigma _{\mathrm{eff}}},$$ (66) the second term in the denominator is seen to be smallest along the principal direction with the largest electrical conductivity, and therefore $`ZT`$ is maximized for current flow along this direction. On the other hand, for a sufficiently anisotropic lattice thermal conductivity the favored direction might be determined by its minimum rather than by the highest electrical conductivity. For crystals in which two of the principal values of the electrical conductivity are equal, such as $`\mathrm{Bi}_2\mathrm{Te}_3`$ and SLs, numerical results show that as long as the anisotropy of $`\underset{¯}{\kappa }_{\ell }`$ is smaller than that of $`\underset{¯}{\sigma }`$, the optimum $`ZT`$ is still to be found in the direction of greatest electrical conductivity. This suggests that, generally speaking, the figure of merit will be maximized in the principal crystal direction in which the ratio $`\sigma _i/\kappa _{\ell ,i}`$ is greatest, where $`\sigma _i`$ and $`\kappa _{\ell ,i}`$ are the principal values of the electrical and lattice thermal conductivity tensors obtained from summing over valleys.

## IV Applications

In this Section, the formalism of Sec. III is applied to several systems in which the effective-mass approximation is valid and for which, as shown above, the thermopower is isotropic.

### A Bulk $`\mathrm{Bi}_2\mathrm{Te}_3`$

Bismuth telluride is a semiconductor with a room-temperature energy gap of 0.13 eV that crystallizes in a trigonal structure with space group $`\mathrm{D}_{3\mathrm{d}}^5`$ (R$`\overline{3}m`$). An orthogonal coordinate system can be defined in this structure, consisting of the trigonal (three-fold rotation) axis, a bisectrix axis that resides within a reflection plane and is normal to the trigonal axis, and a binary axis which is along the two-fold rotation axis orthogonal to the other two directions. This coordinate system will be used throughout the paper and will be referred to as the crystal frame. As shown in Fig. 1, the constant-energy surfaces of the conduction band at low doping are ellipsoidal. There are six degenerate valleys, each described by a highly anisotropic effective-mass tensor. As illustrated in the figure, two of these valleys are bisected by the trigonal-bisectrix plane and their principal axes are rotated approximately 1.2° from the bisectrix axis.
## IV Applications

In this Section, the formalism of Sec. III is applied to several systems in which the effective-mass approximation is valid and for which, as shown above, the thermopower is isotropic.

### A Bulk $`\mathrm{Bi}_2\mathrm{Te}_3`$

Bismuth telluride is a semiconductor with a room-temperature energy gap of 0.13 eV that crystallizes in a trigonal structure with space group $`\mathrm{D}_{3\mathrm{d}}^5`$ (R$`\overline{3}m`$). An orthogonal coordinate system can be defined in this structure, consisting of the trigonal (three-fold rotation) axis, a bisectrix axis that resides within a reflection plane and is normal to the trigonal axis, and a binary axis along the two-fold rotation axis orthogonal to the other two directions. This coordinate system will be used throughout the paper and will be referred to as the crystal frame. As shown in Fig. 1, the constant-energy surfaces of the conduction band at low doping are ellipsoidal. There are six degenerate valleys, each described by a highly anisotropic effective-mass tensor. As illustrated in the figure, two of these valleys are bisected by the trigonal-bisectrix plane and their principal axes are rotated approximately 1.2 from the bisectrix axis. The light effective masses are $`m_1=0.025m_0`$ near the bisectrix, $`m_2=0.26m_0`$ along the binary axis, and $`m_3=0.19m_0`$ near the trigonal axis. These two surfaces are related to each other by inversion through the origin. The remaining four constant-energy surfaces are obtained from the first two by rotations through $`\pm 2\pi /3`$ about the trigonal axis.

The tensorial transport distribution function, Eq. (39), involves the scalar part $`𝒯(\epsilon )`$ given by Eq. (41). The relaxation time $`\tau (\epsilon )`$ is taken to be independent of energy and is determined from the experimental mobility along the bisectrix, 1200 cm<sup>2</sup> / V s. The matrix part $`\underset{¯}{A}`$ is constructed from the experimentally derived inverse mass matrix for a single conduction band ellipsoid ($`n=1`$). Application of the point group operations of the crystal gives the other inverse mass matrices needed to construct $`\underset{¯}{A}`$ in the crystal frame \[cf. Eq. (40)\], which is then transformed into the frame of the sample. $`ZT`$ is obtained from the resulting transport coefficients by taking the lattice thermal conductivity at 300 K to be 1.5 W / m K (Ref. ) and isotropic. We shall assume a carrier density $`n=5.2\times 10^{18}\mathrm{cm}^{-3}`$, which gives the maximal $`ZT=0.71`$ in our model when the sample is cut along the bisectrix.

The magnitude of the induced fields in a crystal cut at an angle $`\theta `$ with respect to the trigonal axis in the trigonal-bisectrix plane is plotted in Fig. 2(a). The induced field vanishes when the sample is cut along either the bisectrix or the trigonal direction. This field reaches its maximum of 76% of the external field for a sample cut at 0.36$`\pi `$ radians with respect to the trigonal axis. As discussed in Sec. II, an external temperature gradient may induce temperature gradients along the transverse faces. The induced gradients are at most 12% of the external gradient and have little effect on $`ZT`$. The effective electrical conductivity $`\sigma _{\mathrm{eff}}`$ \[Eq. (18)\] is shown in Fig. 2(b). As the direction in which the sample is cut is changed from the bisectrix ($`\theta =\pi /2`$) to the trigonal ($`\theta =0`$), the conductivities computed with ($`\sigma _{\mathrm{eff}}`$) and without ($`\sigma _{xx}`$) sample boundary effects are both reduced by a factor of four due to the intrinsic anisotropy of the material. When the sample is oriented along low-symmetry directions, $`\sigma _{\mathrm{eff}}`$ is further reduced with respect to $`\sigma _{xx}`$ by the induced fields, by as much as 60%. The influence of the sample boundaries is therefore substantial. The combination of intrinsic anisotropy and the effects of the induced fields also affects $`ZT`$, but only if $`\kappa _{\ell }`$ is non-zero. As shown in Fig. 3(a), for a hypothetical material with $`\kappa _{\ell }=0`$, $`ZT=2.6`$ and is isotropic \[Eq. (50)\]. As seen in Fig. 3(b), when $`\kappa _{\ell }`$ assumes its bulk value, $`ZT`$ becomes anisotropic and decreases to a maximum value of 0.71. The maximal $`ZT`$ applies to samples cut along the high-conductivity bisectrix-binary plane, consistent with the results of Sec. III C. The minimal $`ZT=0.22`$ occurs for samples cut along the low-conductivity trigonal axis. Despite the complicated many-valley band structure, $`ZT`$ is independent of the azimuthal angle $`\varphi `$ defined in the inset to Fig. 3(a).
This is consistent with a group-theoretical analysis and is suggested by the relative orientations of the ellipsoids in Fig. 1. Introducing anisotropy into $`\kappa _{\ell }`$ by increasing its value along the bisectrix reduces $`ZT`$ for samples cut along this direction, but not for those cut along the trigonal axis. When the ratio $`\sigma /\kappa _{\ell }`$ along the trigonal axis exceeds that along the bisectrix, the maximal $`ZT`$ shifts to the low-conductivity trigonal direction. $`ZT`$ is never maximal along a low-symmetry direction. These numerical results support the conjecture at the end of Sec. III C.

### B HgTe/HgCdTe Superlattices

We now consider HgTe/HgCdTe SLs, one of many families of SLs whose well and barrier materials have direct $`\mathrm{\Gamma }`$-point band gaps. The constant-energy surfaces of the lowest conduction subband $`C1`$ consist of single ellipsoids of revolution aligned with the growth axis of the SL. The conductivity will typically be a minimum along the growth axis ($`\sigma _{\mathrm{min}}`$) and a maximum within the SL plane ($`\sigma _{\mathrm{max}}`$). If the sample is cut at an angle $`\varphi `$ relative to the SL planes, the induced fields will be non-zero only for the faces intersected by the SL growth axis, as indicated in the inset to Fig. 4. Use of Eq. (4) then yields the relative magnitude of the induced electric field as

$$\frac{\mathcal{E}^{\mathrm{ind}}}{F^{\mathrm{ext}}}=\frac{(\sigma _{\mathrm{max}}-\sigma _{\mathrm{min}})\mathrm{sin}\varphi \mathrm{cos}\varphi }{\sigma _{\mathrm{max}}\mathrm{sin}^2\varphi +\sigma _{\mathrm{min}}\mathrm{cos}^2\varphi }.$$ (67)

This relation is plotted in Fig. 4 for several values of the ratio $`\sigma _{\mathrm{min}}/\sigma _{\mathrm{max}}`$. When the system is isotropic, $`\sigma _{\mathrm{min}}/\sigma _{\mathrm{max}}=1`$ and the induced field vanishes. With increasing anisotropy, $`\mathcal{E}^{\mathrm{ind}}`$ becomes non-zero away from the high-symmetry growth and in-plane orientations and peaks at an angle

$$\varphi _{\mathrm{max}}=\mathrm{sin}^{-1}\sqrt{\sigma _{\mathrm{min}}/(\sigma _{\mathrm{max}}+\sigma _{\mathrm{min}})}.$$ (68)

The maximum induced field is

$$\mathcal{E}^{\mathrm{ind}}/F^{\mathrm{ext}}|_{\mathrm{max}}=(\sigma _{\mathrm{max}}-\sigma _{\mathrm{min}})/2\sqrt{\sigma _{\mathrm{max}}\sigma _{\mathrm{min}}}.$$ (69)

In the limit $`\sigma _{\mathrm{min}}/\sigma _{\mathrm{max}}\ll 1`$, these relations reduce to $`\varphi _{\mathrm{max}}\approx \sqrt{\sigma _{\mathrm{min}}/\sigma _{\mathrm{max}}}`$ and $`\mathcal{E}^{\mathrm{ind}}/F^{\mathrm{ext}}|_{\mathrm{max}}\approx (1/2)\sqrt{\sigma _{\mathrm{max}}/\sigma _{\mathrm{min}}}`$. Thus, as illustrated in Fig. 4, a relative induced field much larger than unity is obtained for sufficiently small $`\sigma _{\mathrm{min}}/\sigma _{\mathrm{max}}`$. From Eq. (69), the maximal induced field exceeds the external one when $`\sigma _{\mathrm{min}}/\sigma _{\mathrm{max}}\le 3-2\sqrt{2}\approx 0.172`$. This geometrical effect may be practically useful.
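Equations (67)-(69) can be checked numerically; a minimal Python sketch (the anisotropy ratios are illustrative, chosen to match the superlattices discussed below) compares the analytic position and height of the maximum with a brute-force angular scan:

```python
import numpy as np

def induced_field_ratio(phi, s_min, s_max):
    # Eq. (67): relative induced electric field for a sample cut at angle phi
    return (s_max - s_min) * np.sin(phi) * np.cos(phi) / (
        s_max * np.sin(phi) ** 2 + s_min * np.cos(phi) ** 2)

s_max = 1.0
for ratio in (0.66, 0.39, 0.172, 0.05):     # illustrative anisotropies
    s_min = ratio * s_max
    phi = np.linspace(1e-4, np.pi / 2 - 1e-4, 20001)
    num_max = induced_field_ratio(phi, s_min, s_max).max()
    # Eqs. (68) and (69): analytic position and height of the maximum
    phi_max = np.arcsin(np.sqrt(s_min / (s_max + s_min)))
    ana_max = (s_max - s_min) / (2.0 * np.sqrt(s_max * s_min))
    print(f"ratio={ratio:5.3f}  phi_max={phi_max:.4f} rad  "
          f"E_ind/F_ext={ana_max:.3f} (numeric {num_max:.3f})")
```

For the ratios 0.66 and 0.39 the printed maxima are about 0.2 and 0.5 of the external field, consistent with the superlattice values quoted below, and the ratio 0.172 marks the threshold at which the induced field reaches the external one.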
As specific examples, consider two superlattices: (1) 30-Å HgTe / 20-Å $`\mathrm{Hg}_{0.15}\mathrm{Cd}_{0.85}\mathrm{Te}`$ and (2) 20-Å HgTe / 40-Å $`\mathrm{Hg}_{0.15}\mathrm{Cd}_{0.85}\mathrm{Te}`$. Realistic band structures for these materials are obtained through an 8-band $`𝐊𝐩`$ theory within the envelope-function approximation based on the parameters in Tab. I. Application of this approach to other II-VI and III-V SLs yields accurate band structures which reproduce experimental optical absorption spectra with no adjustable parameters. The masses of the $`C1`$ subband along the growth and in-plane directions required for the transport calculations are given in Tab. II. The room-temperature energy gaps are seen to satisfy $`E_g\gtrsim 10k_BT`$ in both structures, so hole conduction is negligible. Transport in the next highest conduction subband is also negligible, since it lies more than $`20k_BT`$ higher in energy than $`C1`$ in both SLs. The relaxation time and lattice thermal conductivity enter $`ZT`$ significantly only in the ratio $`\kappa _{\ell }/\sigma _{\mathrm{eff}}T`$ \[Eq. (66)\]. Their values are estimated from those of an alloy of the same composition as the SL (Tab. II). Thus, interface scattering is neglected. The thermoelectric performance of these SLs is shown in Tab. II. Since $`\sigma _{\mathrm{min}}/\sigma _{\mathrm{max}}=m_{\parallel }/m_z`$, the ratio of the in-plane to the growth-direction mass, SL (1)'s 0.66 mass ratio yields a maximum induced field of only $`0.2F^{\mathrm{ext}}`$. SL (2)'s much smaller 0.39 mass ratio, indicating a larger anisotropy, yields a correspondingly larger induced field of $`0.5F^{\mathrm{ext}}`$. For even larger anisotropies, the effective-mass approximation breaks down, but estimates using the band structure of Sec. V suggest that induced fields up to $`17F^{\mathrm{ext}}`$ can be obtained in a 40-Å HgTe / 50-Å CdTe SL. As for bulk $`\mathrm{Bi}_2\mathrm{Te}_3`$, induced temperature gradients are small, at most 2% of the external gradient for SL (1) and SL (2). Note that the maximal $`ZT`$s shown in Tab. II, although small in magnitude, represent a substantial increase over the equivalent bulk alloy.

### C $`\mathrm{Bi}_2\mathrm{Te}_3`$ Quantum Wells

Consider a quantum well formed from a multi-valley semiconductor. Within the single-band envelope-function approximation, the wave function for the $`n`$th valley, $`\psi ^{(n)}(𝐫)`$, satisfies

$$\left[-\underset{ij}{\sum }\alpha _{ij}^{(n)}\partial _i\partial _j+V(𝐫)\right]\psi ^{(n)}(𝐫)=E^{(n)}\psi ^{(n)}(𝐫),$$ (70)

where $`\alpha _{ij}^{(n)}=(\hbar ^2/2)M_{ij}^{(n)-1}`$ is proportional to the bulk inverse effective-mass tensor $`M_{ij}^{(n)-1}`$, $`E^{(n)}`$ is the energy measured from the bottom of the valley in the bulk, and $`i,j=x`$, $`y`$, or $`z`$. $`V(𝐫)`$ is the confining potential, which is taken to be a square well of width $`L_z`$ and infinite height. Solution of Eq. (70) yields the eigenfunctions

$$\psi _m^{(n)}(𝐫)=\sqrt{2/L_z}\mathrm{exp}\left[i\left(k_xx+k_yy-(\alpha _{zx}^{(n)}k_x+\alpha _{zy}^{(n)}k_y)z/\alpha _{zz}^{(n)}\right)\right]\mathrm{sin}(m\pi z/L_z)$$ (71)

and the eigenenergies

$$E_m^{(n)}(k_x,k_y)=\underset{i,j=x,y}{\sum }k_i\alpha _{ij}^{(n)\mathrm{Well}}k_j+E_m^{(n)\mathrm{Conf}}$$ (72)

with

$$\alpha _{ij}^{(n)\mathrm{Well}}=\left(\begin{array}{cc}\alpha _{xx}^{(n)}-\alpha _{xz}^{(n)}\alpha _{zx}^{(n)}/\alpha _{zz}^{(n)}& \alpha _{xy}^{(n)}-\alpha _{xz}^{(n)}\alpha _{zy}^{(n)}/\alpha _{zz}^{(n)}\\ \alpha _{yx}^{(n)}-\alpha _{yz}^{(n)}\alpha _{zx}^{(n)}/\alpha _{zz}^{(n)}& \alpha _{yy}^{(n)}-\alpha _{yz}^{(n)}\alpha _{zy}^{(n)}/\alpha _{zz}^{(n)}\end{array}\right)$$ (73)

and the confinement energy

$$E_m^{(n)\mathrm{Conf}}=\alpha _{zz}^{(n)}\pi ^2m^2/L_z^2,$$ (74)

for $`m=1,2,3,\mathrm{\ldots }`$ Thus, the quantum well band structure differs from that of the bulk in two ways. First, the energy of the bottom of the $`n`$th valley is shifted with respect to the bulk by an amount $`E_1^{(n)\mathrm{Conf}}`$ that is inversely proportional to $`m_{zz}^{(n)}`$. Since $`m_{zz}^{(n)}`$ is generally different for different $`n`$, the confinement splits the valley degeneracy.
If the well is sufficiently narrow, only the valleys having the lowest confinement energy will contribute to the transport. Second, as seen from Eq. (73), the masses in the quantum well differ from those in the bulk by terms second order in the off-diagonal elements of the bulk inverse effective-mass tensor.

Suppose the well material is n-type $`\mathrm{Bi}_2\mathrm{Te}_3`$. The bulk band structure was described in Sec. IV A. The relaxation time and lattice thermal conductivity can be obtained as in Ref. , which neglects interface effects. Eqs. (73), (39), and (53)-(54) then lead directly to the transport coefficients and $`ZT`$. For simplicity, $`\kappa _{\ell }`$ of the barriers is neglected. Thus, $`ZT`$ is an overestimate. We examine two orientations of the quantum well: one in which the growth direction is along the trigonal axis and one in which it is along the binary. In both cases, we take the currents to flow along the bisectrix. These choices are expected to yield the best $`ZT`$s, since the currents are along the bulk high-conductivity direction and the alignment of the well with the crystal axes eliminates detrimental induced fields. The maximal $`ZT`$ for these configurations is plotted as a function of well width in Fig. 5. As seen in this figure, quantum well growth along the trigonal axis yields larger $`ZT`$s. From Fig. 1, the masses of each ellipsoid along the trigonal axis are the same, but they differ along the binary axis. Thus, all six ellipsoids contribute to transport in wells grown along the trigonal axis, but valley splitting allows only two ellipsoids to contribute in wells grown along the binary axis. The resulting larger density of states in the former case leads to higher $`ZT`$s. For a given orientation, the density of states is also larger for smaller well widths, resulting in an increase of $`ZT`$ with decreasing well width in both orientations. For comparison, Fig. 5 also shows $`ZT`$ computed using the simpler $`\mathrm{Bi}_2\mathrm{Te}_3`$ band structure of Ref. . This band structure consists of six equivalent ellipsoids whose principal axes are all aligned with those of the crystal. As in the multi-ellipsoid model, $`ZT`$ is larger for wells grown along the trigonal axis and for smaller well widths. However, the magnitude of $`ZT`$ is larger than in the multi-ellipsoid model. This difference arises because each ellipsoid shares the same orientation, allowing all of the ellipsoids' low-mass directions to lie along the bisectrix and preventing any valley splitting.

## V Microscopic Model: Non-Parabolic Bands

Within the effective-mass approximation, the analysis of Sec. III showed that the thermopower and Lorentz number are isotropic. To see how a non-parabolic band structure affects these conclusions, consider an electronic dispersion relation of the Esaki-Tsu form:

$$\epsilon (𝐤)=\hbar ^2k_{\parallel }^2/2m_{\parallel }+\mathrm{\Delta }(1-\mathrm{cos}k_zd)$$ (75)

with wave vector $`𝐤=(k_{\parallel }\mathrm{cos}\phi ,k_{\parallel }\mathrm{sin}\phi ,k_z)`$. This relation models a superlattice of period $`d`$. The in-plane dispersion is parabolic with mass $`m_{\parallel }`$, and that along the growth direction has a tight-binding form with band width $`2\mathrm{\Delta }=\epsilon (0,0,\pi /d)-\epsilon (0,0,0)`$. Since the mass along the growth direction is $`m_z=\hbar ^2/\mathrm{\Delta }d^2`$, the anisotropy can be increased by reducing $`\mathrm{\Delta }`$. In the principal frame of the SL, the transport distribution tensor \[Eq.
(31)\] is diagonal with components

$$\mathrm{\Sigma }_{\mathrm{Growth}}(\epsilon )=\frac{m_{\parallel }d\tau (\epsilon )}{2\pi ^2\hbar ^4}\{\begin{array}{cc}\mathrm{\Delta }^2\mathrm{cos}^{-1}(1-\epsilon /\mathrm{\Delta })+(\epsilon -\mathrm{\Delta })\sqrt{\epsilon (2\mathrm{\Delta }-\epsilon )},& \epsilon <2\mathrm{\Delta }\\ \pi \mathrm{\Delta }^2,& \epsilon \ge 2\mathrm{\Delta }\end{array}$$ (76)

along the growth direction and

$$\mathrm{\Sigma }_{\mathrm{In}\mathrm{plane}}(\epsilon )=\frac{\tau (\epsilon )}{\pi ^2\hbar ^2d}\{\begin{array}{cc}(\epsilon -\mathrm{\Delta })\mathrm{cos}^{-1}(1-\epsilon /\mathrm{\Delta })+\sqrt{\epsilon (2\mathrm{\Delta }-\epsilon )},& \epsilon <2\mathrm{\Delta }\\ \pi (\epsilon -\mathrm{\Delta }),& \epsilon \ge 2\mathrm{\Delta }\end{array}$$ (77)

along the planes. The transport coefficients are obtained from Eqs. (28)-(30) in the principal frame and then transformed into the sample frame. As in Sec. III, the relaxation time $`\tau (𝐤)`$ is assumed to be a function of energy only. Direct computation with $`\tau \propto \epsilon ^r`$ indicates that the qualitative features discussed below are independent of the choice of $`r`$. Quantitatively, the thermopower is an approximately linear function of $`r`$ at fixed sample orientation and chemical potential $`\mu `$ and increases by approximately 50% as $`r`$ goes from 0 to 1.5. In what follows, $`r=0`$, $`d=`$100 Å, $`m_{\parallel }/m_0=0.021`$, and $`\mathrm{\Delta }=57`$ meV, corresponding to the C1 subband in a 50-Å $`\mathrm{Hg}_{0.75}\mathrm{Cd}_{0.25}\mathrm{Te}`$ / 50-Å $`\mathrm{Hg}_{0.7}\mathrm{Cd}_{0.3}\mathrm{Te}`$ SL.

The anisotropy in the resulting effective thermopower $`S_{\mathrm{eff}}`$ \[Eq. (15)\] and Lorentz number $`L_{0,\mathrm{eff}}=\kappa _{\mathrm{eff}}/\sigma _{\mathrm{eff}}T`$ is shown in Fig. 6. For $`\mu <0`$, the values of $`S_{\mathrm{eff}}`$ along the growth and in-plane directions differ by less than 10%. This near isotropy is expected, since the carriers determining $`S_{\mathrm{eff}}`$ are near the zone center, where the effective-mass approximation is good. The anisotropy increases substantially as $`\mu `$ increases past $`2\mathrm{\Delta }`$, reaching over 6000 at $`\mu =0.4`$ eV. From Fig. 6(b), the anisotropy in $`L_{0,\mathrm{eff}}`$ is at most 30% and goes to zero in the large-$`\mu `$ limit, approaching the metallic value of $`(\pi ^2/3)(k_B/e)^2`$. As $`ZT`$ for this band structure is maximal for $`\mu \approx 0`$, the anisotropies in $`S_{\mathrm{eff}}`$ and $`L_{0,\mathrm{eff}}`$ in the relevant parameter range are small. These numerical results are also consistent with the idea that the induced fields are detrimental to $`ZT`$, as discussed in Appendix A. As shown by the dotted lines in Fig. 6(a), the induced fields present along low-symmetry directions reduce $`S_{\mathrm{eff}}`$ below $`S_{xx}`$. However, $`S_{\mathrm{eff}}`$ is bounded above by $`S_{xx}`$, and the bound is attained ($`S_{\mathrm{eff}}=S_{xx}`$) along the principal directions. Thus, optimal thermoelectric performance is achieved for samples cut along the principal axes.

## VI Summary

This paper developed the transport theory relevant for anisotropic, multi-valleyed materials, taking into account the full tensorial structure of the electronic transport coefficients and including the effects of sample boundaries. Induced transverse fields are associated with samples cut along directions in which the conductivity has off-diagonal elements. These fields can be larger than the applied fields.
Within the effective-mass and relaxation-time approximations and with intervalley scattering neglected, the tensor characterizing the structure appears as a simple multiplicative factor in each of the transport coefficients. This factorization results in an isotropic thermopower and Lorentz number even in the presence of anisotropy. In a hypothetical material having no lattice thermal conductivity, $`ZT`$ would therefore be isotropic. For a non-vanishing and sufficiently isotropic lattice thermal conductivity, $`ZT`$ is maximal for samples cut along the high-conductivity directions. Widely ranging numerical results suggest that the maximal $`ZT`$ generally occurs along the principal direction which maximizes the ratio of the electronic to the lattice thermal conductivity. An upper bound for $`ZT`$ is given which generalizes the result of Mahan and Sofo to anisotropic systems.

Application of these formal results to some systems of interest showed the following. (1) Bulk n-type $`\mathrm{Bi}_2\mathrm{Te}_3`$ exhibits induced transverse fields, reduced effective conductivity, and anisotropic $`ZT`$s. These effects should be easily observable. (2) In $`\mathrm{HgTe}/\mathrm{Hg}_{1-\mathrm{x}}\mathrm{Cd}_\mathrm{x}\mathrm{Te}`$ SLs, whose anisotropy is adjustable by varying the layer widths and composition, increased anisotropy is associated with larger induced fields. (3) The confining potential in isolated $`\mathrm{Bi}_2\mathrm{Te}_3`$ quantum wells splits the valley degeneracy and changes the masses from their bulk values. The latter features suggest that optimal $`ZT`$s are associated with wells grown along the trigonal direction. (4) The effect of non-parabolic dispersion on the thermopower is very small for the doping range over which $`ZT`$ is maximal. The maximal thermopower occurs along principal directions, which supports the suggestion given in the previous paragraph.

###### Acknowledgements.

Discussions with E. Runge are gratefully acknowledged. This work was supported by DARPA through ONR Contract No. N00014-96-1-0887 and the NSF through Che9610501.

## A

In this appendix we present a unified form of the linear-response theory which, because of its explicit use of the Onsager coefficients, has the advantage of treating the electric and heat currents more symmetrically than in Sec. II. Note that, since the Onsager coefficients are assumed to be symmetric matrices, this formalism does not apply to monoclinic, triclinic, or chiral materials. The generalized fluxes $`𝐉`$ are given in terms of the generalized forces $`𝐟`$ by

$$𝐉=\underset{¯}{L}𝐟$$ (A1)

with

$$𝐉=\left(\begin{array}{cccccc}J_x^e& J_x^Q& J_y^e& J_y^Q& J_z^e& J_z^Q\end{array}\right)^T=\left(\begin{array}{c}𝐉_{\parallel }\\ 𝐉_{\perp }\end{array}\right)$$ (A2)

and

$$𝐟=\left(\begin{array}{cccccc}e\mathcal{E}_x& -\partial _xT/T& e\mathcal{E}_y& -\partial _yT/T& e\mathcal{E}_z& -\partial _zT/T\end{array}\right)^T=\left(\begin{array}{c}𝐟_{\parallel }\\ 𝐟_{\perp }\end{array}\right)$$ (A3)

where $`𝐉_{\parallel }`$ and $`𝐟_{\parallel }`$ contain the first two components and $`𝐉_{\perp }`$ and $`𝐟_{\perp }`$ the remaining four components of $`𝐉`$ and $`𝐟`$, respectively.
With respect to these conventions the matrix of Onsager coefficients, $`\underset{¯}{L}`$, assumes the form

$$\underset{¯}{L}=\left(\begin{array}{cc}\underset{¯}{L}_{\parallel }& \underset{¯}{L}_{od}^T\\ \underset{¯}{L}_{od}& \underset{¯}{L}_{\perp }\end{array}\right)$$ (A4)

with

$$\underset{¯}{L}_{\parallel }=\left(\begin{array}{cc}L_{xx}^{11}& L_{xx}^{21}\\ L_{xx}^{12}& L_{xx}^{22}\end{array}\right),$$ (A5)

$$\underset{¯}{L}_{od}=\left(\begin{array}{cccc}L_{xy}^{11}& L_{xy}^{21}& L_{xz}^{11}& L_{xz}^{21}\\ L_{xy}^{12}& L_{xy}^{22}& L_{xz}^{12}& L_{xz}^{22}\end{array}\right)^T$$ (A6)

and

$$\underset{¯}{L}_{\perp }=\left(\begin{array}{cccc}L_{yy}^{11}& L_{yy}^{21}& L_{yz}^{11}& L_{yz}^{21}\\ L_{yy}^{12}& L_{yy}^{22}& L_{yz}^{12}& L_{yz}^{22}\\ L_{zy}^{11}& L_{zy}^{21}& L_{zz}^{11}& L_{zz}^{21}\\ L_{zy}^{12}& L_{zy}^{22}& L_{zz}^{12}& L_{zz}^{22}\end{array}\right).$$ (A7)

Equation (A1) and the condition that $`𝐉_{\perp }=0`$ yield

$$𝐟_{\perp }=-\underset{¯}{L}_{\perp }^{-1}\underset{¯}{L}_{od}𝐟_{\parallel };$$ (A8)

hence,

$$𝐉_{\parallel }=\underset{¯}{L}_{\mathrm{eff}}𝐟_{\parallel }$$ (A9)

with

$$\underset{¯}{L}_{\mathrm{eff}}=\underset{¯}{L}_{\parallel }-\underset{¯}{L}_{od}^T\underset{¯}{L}_{\perp }^{-1}\underset{¯}{L}_{od}\equiv \underset{¯}{L}_{\parallel }-\underset{¯}{N}_{\parallel }.$$ (A10)

It can be shown that $`\underset{¯}{N}_{\parallel }>0`$ and $`\underset{¯}{L}_{\mathrm{eff}}\ge 0`$. We may decompose $`𝐉_{\parallel }`$ into external and induced parts as follows:

$$𝐉_{\parallel }=𝐉^{\mathrm{ext}}+𝐉^{\mathrm{ind}}$$ (A11)

with

$$𝐉^{\mathrm{ext}}=\underset{¯}{L}_{\parallel }𝐟_{\parallel }$$ (A12)

and

$$𝐉^{\mathrm{ind}}=-\underset{¯}{N}_{\parallel }𝐟_{\parallel }.$$ (A13)

In Sec. II it was found that, for an applied electric field only ($`𝐟_{\parallel }=(e\mathcal{E}_x,0)`$), the induced electric current opposes the external electric current because $`\sigma _{\mathrm{eff}}\le \sigma _{xx}`$, and likewise, for an applied thermal gradient only, the induced heat current opposes the external heat current. Physical intuition suggests that these results should generalize to the case where both an applied electric field and an applied thermal gradient are present. We then expect that, if both components of $`𝐟_{\parallel }`$ have the same sign, $`𝐉^{\mathrm{ind}}`$ should oppose $`𝐉^{\mathrm{ext}}`$ in the sense that they lie in opposite quadrants of the plane spanned by $`J_x^e`$ and $`J_x^Q`$. Although we have not proven this conjecture, it is consistent with numerical results for randomly constructed matrices of Onsager coefficients. Undoubtedly a proof requires a general microscopic model. Finally, we note that the entropy generation is given by

$$T\frac{\partial S}{\partial t}|_{\mathrm{field}}=𝐉𝐟=𝐉^{\mathrm{ext}}𝐟_{\parallel }+𝐉^{\mathrm{ind}}𝐟_{\parallel }.$$ (A14)

Since $`\underset{¯}{L}_{\parallel }>0`$ and $`\underset{¯}{N}_{\parallel }>0`$, we have $`𝐉^{\mathrm{ext}}𝐟_{\parallel }>0`$ and $`𝐉^{\mathrm{ind}}𝐟_{\parallel }<0`$. Thus, the external currents increase $`\dot{S}|_{\mathrm{field}}`$ whereas the induced currents decrease $`\dot{S}|_{\mathrm{field}}`$.

## B

We present here a derivation of an upper bound of the Sofo and Mahan form for an anisotropic material. From $`\underset{¯}{\sigma }=\underset{¯}{A}\sigma _0`$ we infer

$$\sigma _{\mathrm{eff}}=\sigma _0a_0$$ (B1)

with

$$a_0=A_{xx}-A_{xy}\frac{A_{yx}A_{zz}-A_{yz}A_{zx}}{A_{yy}A_{zz}-A_{yz}A_{zy}}-A_{xz}\frac{A_{zx}A_{yy}-A_{zy}A_{yx}}{A_{zz}A_{yy}-A_{yz}A_{zy}}.$$ (B2)

When the $`\kappa _{\ell }`$-dependence in Eq.
(60) is taken into account, we find similarly

$$\kappa _e^{\prime }=\kappa _ea(\kappa _{\ell })$$ (B3)

with $`a(\kappa _{\ell }=0)=a_0`$ and

$$a(\kappa _{\ell })=A_{xx}-A_{xy}\frac{A_{yx}(A_{zz}+\kappa _{\ell }/\kappa _e)-A_{yz}A_{zx}}{(A_{yy}+\kappa _{\ell }/\kappa _e)(A_{zz}+\kappa _{\ell }/\kappa _e)-A_{yz}A_{zy}}-A_{xz}\frac{A_{zx}(A_{yy}+\kappa _{\ell }/\kappa _e)-A_{zy}A_{yx}}{(A_{yy}+\kappa _{\ell }/\kappa _e)(A_{zz}+\kappa _{\ell }/\kappa _e)-A_{yz}A_{zy}}.$$ (B5)

Therefore

$$ZT=\frac{T\sigma _0S_0^2a_0}{(\kappa _0-T\sigma _0S_0^2)a(\kappa _{\ell })+\kappa _{\ell }}.$$ (B6)

Following Mahan and Sofo, introduce dimensionless integrals

$$I_n=\int _{-\mu /k_BT}^{\mathrm{\infty }}dx\frac{e^x}{(e^x+1)^2}s(x)x^n,s(x)=\hbar r_0𝒯(\mu +xk_BT),$$ (B7)

where $`r_0`$ is the Bohr radius. In terms of these moments,

$`\sigma _0=\stackrel{~}{\sigma }_0I_0`$ (B8)

$`\sigma _0S_0=(k_B/e)\stackrel{~}{\sigma }_0I_1`$ (B9)

$`\kappa _0=(k_B/e)^2T\stackrel{~}{\sigma }_0I_2`$ (B10)

where $`\stackrel{~}{\sigma }_0=e^2/\hbar r_0`$ has dimensions of conductivity. Then

$`ZT={\displaystyle \frac{T(k_B/e)^2\stackrel{~}{\sigma }_0^2I_1^2/\stackrel{~}{\sigma }_0I_0}{((k_B/e)^2T\stackrel{~}{\sigma }_0I_2-T(k_B/e)^2\stackrel{~}{\sigma }_0I_1^2/I_0)a(\kappa _{\ell })/a_0+\kappa _{\ell }/a_0}}`$ (B11)

$`={\displaystyle \frac{\stackrel{~}{\alpha }I_1^2/I_0}{(\stackrel{~}{\alpha }I_2-\stackrel{~}{\alpha }I_1^2/I_0)a(\kappa _{\ell })/a_0+1/a_0}}`$ (B12)

$`={\displaystyle \frac{\xi }{(1-\xi )a(\kappa _{\ell })/a_0+B}}`$ (B13)

with $`\stackrel{~}{\alpha }=(k_B/e)^2T\stackrel{~}{\sigma }_0/\kappa _{\ell }`$,

$$\xi =I_1^2/I_0I_2,$$ (B14)

and $`B=1/\stackrel{~}{\alpha }I_2a_0=\kappa _{\ell }/\kappa _0a_0`$. By the Cauchy-Schwarz inequality, $`0\le \xi \le 1`$. The limit as $`\xi `$ tends to 1 maximizes the figure of merit by maximizing the numerator and minimizing the denominator in Eq. (B13). From Eq. (B5) it may be seen that for $`\kappa _{\ell }/\kappa _e=\kappa _{\ell }/(k_B/e)^2T\stackrel{~}{\sigma }_0I_2(1-\xi )`$ sufficiently large, $`a(\kappa _{\ell })`$ tends to $`A_{xx}`$, and certainly this condition is met as $`1-\xi `$ tends to zero. Thus as $`\xi \to 1`$ we have that

$$ZT\to \frac{\xi }{(1-\xi )A_{xx}/a_0+B}\to \frac{1}{B}=\frac{\kappa _0a_0}{\kappa _{\ell }}.$$ (B15)
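The behavior of $`\xi `$ quoted in Sec. III — tending to unity as $`\mu /k_BT\to -\mathrm{\infty }`$ and lying between 0.5 and 0.8 at $`\mu =0`$ — can be reproduced from the moments (B7); a minimal Python sketch, assuming $`𝒯(\epsilon )\propto \epsilon ^r`$ with energies measured in units of $`k_BT`$, is:

```python
import numpy as np

def xi(mu, r, x_max=60.0, n=400001):
    # Moments I_n of Eq. (B7) with s(x) ~ (mu + x)**r, i.e. T(eps) ~ eps**r;
    # mu is the chemical potential in units of k_B*T.
    x = np.linspace(-mu + 1e-9, -mu + x_max, n)
    dx = x[1] - x[0]
    w = 0.25 / np.cosh(x / 2.0) ** 2        # e^x/(e^x+1)^2, overflow-safe
    s = (mu + x) ** r
    I0, I1, I2 = ((w * s * x ** k).sum() * dx for k in (0, 1, 2))
    return I1 ** 2 / (I0 * I2)              # Eq. (B14)

for r in (-0.5, 0.0, 2.0):
    print(f"r={r:+.1f}:", {mu: round(xi(mu, r), 3) for mu in (-10, 0, 10)})
```

Consistent with the text, $`\xi `$ approaches unity only in the nondegenerate (low-carrier-concentration) limit $`\mu /k_BT\to -\mathrm{\infty }`$ and falls toward zero in the strongly degenerate limit.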
# Spin current in ferromagnet/insulator/superconductor junctions

## I INTRODUCTION

The transport properties of hybrid structures between ferromagnets and superconductors have received considerable theoretical and experimental attention. Interest in such structures includes spin-dependent spectroscopy of superconductors and possible device applications. Since the Cooper pairs in spin-singlet superconductors are formed between up and down spins, high-density spin injection through a tunneling barrier induces a spin imbalance. This non-equilibrium state is expected to result in a suppression of the critical temperature and of the critical current density in the superconductor. A large number of experimental studies on spin-polarized tunneling were performed using conventional metal superconductors such as Al and Nb about 20 years ago. However, the recent discovery of so-called colossal magneto-resistance (CMR) in Mn-oxide compounds has aroused new interest in this field, because the fabrication of hybrid structures of these spin-polarized ferromagnets with high-$`T_c`$ superconductors is now possible. On the other hand, the properties of ferromagnet/insulator/superconductor (F/I/S) and ferromagnet/ferromagnetic-insulator/superconductor (F/FI/S) junctions have been analyzed based on the assumption that the conductance spectra correspond to the density of states (DOS) of the superconductor weighted by the spin polarization. A theory for F/I/S junctions based on a scattering method has been presented by de Jong and Beenakker; new aspects of Andreev reflection have been revealed, and detailed comparisons between theory and experiments have been accomplished. However, these results are restricted to isotropic $`s`$-wave superconductors. In contrast to the $`s`$-wave case, at the interface of a $`d_{x^2-y^2}`$-wave superconductor, zero-energy states (ZES) are formed due to the interference effect of the internal phase of the pair potential. Tunneling theory for $`d_{x^2-y^2}`$-wave superconductors has already been presented by extending the BTK formula to include the anisotropy of the pair potential. The theory predicts the existence of a zero-bias conductance peak (ZBCP) which reflects the formation of surface bound states on $`d`$-wave superconductors. In this paper, an exchange interaction is introduced on the normal side of the junction and in the insulator in order to analyze spin-polarized tunneling effects. The bound-state condition and the tunneling spectroscopy of ferromagnet/$`d`$-wave superconductor junctions have already been analyzed in two papers, which revealed several important features of the charge transport. Here we will argue that the properties of the Andreev reflection are largely modified due to the presence of the exchange interaction. In particular, the existence of an evanescent type of Andreev reflection, referred to as virtual Andreev reflection (VAR), is explained for the first time (see Ref. 27). This process plays a significant role in the transport, especially for junctions between half-metallic ferromagnets and superconductors. The conductance formulas for the charge and spin currents are presented based on the scattering method, fully taking account of the VAR process. The merit of a formula based on the scattering method is that the conductance spectra can easily be calculated for arbitrary barrier heights, without the restriction to the high-barrier limit.
The spin current is, we believe, the most important physical quantity in spin-injection devices, for the following two reasons: first, the spin current gives a direct criterion for estimating the effect of the spin imbalance induced by the tunneling current; second, the charge and spin conductivities may illuminate the study of electron systems that undergo spin-charge separation, such as Tomonaga-Luttinger liquids and possibly underdoped high-$`T_c`$ superconductors. We will also analyze ferromagnetic-insulator effects, which include the spin-filtering effect, due to the presence of an exchange field in the insulator. It is shown that a spin-dependent energy shift during the tunneling process induces a splitting of the ZBCP. Based on a detailed analysis of the conductance spectra, we propose a simple method to distinguish the inducement of broken time-reversal symmetry (BTRS) states at the surface from spin-dependent tunneling effects. The implications of the ferromagnetic-insulator effects for tunneling experiments on high-$`T_c`$ superconductors and a proposal for possible device applications are also presented.

## II Formulation

For the model of the formulation, a planar $`F/FI/S`$ junction with semi-infinite electrodes in the clean limit is assumed. A flat interface is assumed to be located at $`x=0`$, and the insulator for up \[down\] spin is described by a potential $`V_{\uparrow [\downarrow ]}(𝒙)`$ {$`V_{\uparrow [\downarrow ]}(𝒙)=(\widehat{V}_0-[+]\widehat{U}_B)\delta (x)`$}, where $`\delta (x)`$, $`\widehat{V}_0`$ and $`\widehat{U}_B`$ are the $`\delta `$-function, a genuine barrier amplitude and an exchange amplitude in the barrier, respectively. The effective masses m in the ferromagnet and in the superconductor are assumed to be equal. For the model of the ferromagnet, we adopt the Stoner model, in which the effect of the spin polarization is described by the one-electron Hamiltonian with an exchange interaction, similarly to the case of Ref. . For the description of the $`d_{x^2-y^2}`$-wave superconductor, we apply the quasi-classical approximation, in which the Fermi energy $`E_F`$ in the superconductor is much larger than the pair potential, following the model by Bruder. The effective Hamiltonian (Bogoliubov-de Gennes equation) is given by

$$\left[\begin{array}{cc}H_0(𝒙)-\rho U(𝒙)& \mathrm{\Delta }(𝒙,\theta )\\ \mathrm{\Delta }^{\ast }(𝒙,\theta )& -\{H_0(𝒙)+\rho U(𝒙)\}\end{array}\right]\left[\begin{array}{c}u(𝒙,\theta )\\ v(𝒙,\theta )\end{array}\right]=E\left[\begin{array}{c}u(𝒙,\theta )\\ v(𝒙,\theta )\end{array}\right]$$ (1)

Here, $`E`$ is the energy of the quasiparticle, $`U(𝒙)`$ is the exchange potential given by $`U\mathrm{\Theta }(-x)`$ ($`U\ge 0`$), where $`\mathrm{\Theta }(x)`$ is the Heaviside step function, $`\rho `$ is 1 \[-1\] for up \[down\] spins, $`\mathrm{\Delta }(𝒙,\theta )`$ is the pair potential, and $`H_0(𝒙)\equiv -\hbar ^2\mathrm{\nabla }^2/2m+V(𝒙)-E_F`$. To describe the Fermi-surface difference in F and S, we assume $`E_F=E_{FN}`$ for $`x<0`$ and $`E_F=E_{FS}`$ for $`x>0`$. The pair potential $`\mathrm{\Delta }(𝒙,\theta )`$ is taken as $`\mathrm{\Delta }(\theta )\mathrm{\Theta }(x)`$ for simplicity. The number of up \[down\] spin electrons is denoted by $`N_{\uparrow }`$ \[$`N_{\downarrow }`$\].
The polarization and the wave vector of quasiparticles in the ferromagnet for up \[down\] spin are expressed as $`P_{\uparrow }\equiv \frac{N_{\uparrow }}{N_{\uparrow }+N_{\downarrow }}=\frac{E_{FN}+U}{2E_{FN}}`$ \[$`P_{\downarrow }\equiv \frac{N_{\downarrow }}{N_{\uparrow }+N_{\downarrow }}=\frac{E_{FN}-U}{2E_{FN}}`$\] and $`k_{N,\uparrow }=|𝒌_{N,\uparrow }|=\sqrt{\frac{2m}{\hbar ^2}(E_{FN}+U)}`$ \[$`k_{N,\downarrow }=|𝒌_{N,\downarrow }|=\sqrt{\frac{2m}{\hbar ^2}(E_{FN}-U)}`$\], respectively. We assume the injection of an up-spin electron at an angle $`\theta _N`$ to the interface normal, as shown in Fig. 1. Four possible trajectories exist: Andreev reflection (AR), normal reflection (NR), transmission into the superconductor as an electron-like quasiparticle (ELQ), and transmission as a hole-like quasiparticle (HLQ). The spin direction is conserved for NR but not for AR. When the superconductor has $`d_{x^2-y^2}`$-wave symmetry, the effective pair potentials for ELQ and HLQ are given by $`\mathrm{\Delta }_+\equiv \mathrm{\Delta }_0\mathrm{cos}2(\theta _S-\beta )`$ and $`\mathrm{\Delta }_{-}\equiv \mathrm{\Delta }_0\mathrm{cos}2(\theta _S+\beta )`$, respectively, where $`\beta `$ is the angle between the $`a`$-axis of the crystal and the interface normal. Results for various pairing symmetries are obtained by assigning proper values to $`\mathrm{\Delta }_+`$ and $`\mathrm{\Delta }_{-}`$, similarly to the previous formulas. The wave vectors of ELQ and HLQ are approximated by $`k_S=|𝒌_S|=\sqrt{\frac{2mE_{FS}}{\hbar ^2}}`$ following the model by Andreev. Since translational symmetry holds along the $`y`$-axis direction, the parallel momentum components of all trajectories are conserved ($`k_{N,\uparrow }\mathrm{sin}\theta _N=k_{N,\downarrow }\mathrm{sin}\theta _A=k_S\mathrm{sin}\theta _S`$). Note that $`\theta _N`$ is not equal to $`\theta _A`$ except when $`U=0`$, which means that the retro-reflectivity of AR is broken. Such novel behavior is a consequence of the fact that, in the presence of an exchange field, BCS pairing is formed not strictly between states of equal but opposite $`k`$-vectors, the so-called Fulde-Ferrell effect. The wave function in the ferromagnet ($`x<0`$) for up \[down\] spin with injection angle $`\theta _N`$ is described by

$$\left(\begin{array}{c}u(𝒙,\theta _N)\\ v(𝒙,\theta _N)\end{array}\right)=e^{i𝒌_{N,\uparrow [\downarrow ]}𝒙}\left(\begin{array}{c}1\\ 0\end{array}\right)+a_{\uparrow [\downarrow ]}(E,\theta _N)e^{i𝒌_{N,\downarrow [\uparrow ]}^{\prime \prime }𝒙}\left(\begin{array}{c}0\\ 1\end{array}\right)+b_{\uparrow [\downarrow ]}(E,\theta _N)e^{i𝒌_{N,\uparrow [\downarrow ]}^{\prime }𝒙}\left(\begin{array}{c}1\\ 0\end{array}\right),$$ (2)

where the signs of the $`x`$-components of $`𝒌_{N,\uparrow [\downarrow ]}`$ and $`𝒌_{N,\uparrow [\downarrow ]}^{\prime }`$ are opposite to each other. The reflection probabilities of the two processes are obtained by solving Eq. (1) and by connecting the wave function and its derivative at $`x=0`$. Next, we briefly explain the Fermi-surface effect, assuming up-spin injection. Various kinds of reflection processes are expected depending on the values of $`E_{FN}`$, $`E_{FS}`$ and $`U`$. For example, when $`k_S<k_{N,\uparrow }`$, total reflection ($`|b_{\uparrow }(E,\theta _N)|^2=1`$) occurs for $`\theta _N>\mathrm{sin}^{-1}(k_S/k_{N,\uparrow })\equiv \theta _{c1}`$. In this case, the net currents of spin and charge from the ferromagnet to the superconductor vanish. On the other hand, when $`k_{N,\downarrow }<k_S<k_{N,\uparrow }`$, the $`x`$-component of the wave vector in the AR process ($`\sqrt{k_{N,\downarrow }^2-k_S^2\mathrm{sin}^2\theta _S}`$) becomes purely imaginary for $`\theta _{c1}>\theta _N>\mathrm{sin}^{-1}(k_{N,\downarrow }/k_{N,\uparrow })\equiv \theta _{c2}`$.
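These critical angles are simple to evaluate; a minimal Python sketch (free-electron bands, with an illustrative Fermi-energy ratio $`E_{FS}/E_{FN}`$) tabulates the windows for propagating AR, virtual AR, and total reflection:

```python
import numpy as np

def angle_windows(X, E_ratio=1.0):
    """Critical angles for up-spin injection in the free-electron model.

    X       : exchange splitting U/E_FN in the ferromagnet
    E_ratio : E_FS/E_FN, the Fermi-energy mismatch between S and F
    Wave vectors are in units of sqrt(2*m*E_FN)/hbar.
    """
    k_up = np.sqrt(1.0 + X)            # k_{N,up}
    k_dn = np.sqrt(1.0 - X)            # k_{N,down}
    k_S = np.sqrt(E_ratio)             # k_S
    # theta_c1: beyond this, even the transmitted wave is evanescent
    th_c1 = np.arcsin(min(k_S / k_up, 1.0))
    # theta_c2: beyond this, the Andreev-reflected wave is evanescent (VAR)
    th_c2 = np.arcsin(min(k_dn / k_up, 1.0))
    return th_c1, th_c2

for X in (0.0, 0.5, 0.85, 0.999):
    c1, c2 = angle_windows(X)
    print(f"X={X:5.3f}: propagating AR for |theta|<{c2:.3f} rad, "
          f"VAR window {c2:.3f}..{c1:.3f} rad")
```

As the sketch shows, the VAR window $`\theta _{c2}<\theta _N<\theta _{c1}`$ expands toward $`\theta _N=0`$ as $`X\to 1`$, anticipating the half-metallic limit discussed below.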
In this angular window, although the quasiparticles transmitted from the ferromagnet into the superconductor do propagate, the Andreev-reflected quasiparticles do not propagate (VAR process). A finite amplitude of the evanescent AR process still exists ($`|a_{\uparrow }(E,\theta _N)|^2>0`$), and the net currents of spin and charge from the ferromagnet to the superconductor do not vanish. It is easy to check the conservation laws for the charge, the excitation, and the spin in the VAR process following the method presented in Ref. . The existence of the VAR process has not been treated in one-dimensional models because it is a peculiar feature of a two- or three-dimensional F/S interface. The conductance of the junction is obtained by extending previous formulas to include the effect of spin. In the following, consider a situation where $`k_{N,\downarrow }<k_S<k_{N,\uparrow }`$. To analyze the transport properties of an $`F/I/S`$ junction, two kinds of conductance spectra are introduced. The conductance for the charge current is defined by the charge flow induced by the up \[down\] spin quasiparticle injection and is given by

$$\widehat{\sigma }_{q,\uparrow [\downarrow ]}(E,\theta _N)\equiv \mathrm{Re}\left[1+\frac{\lambda _2}{\lambda _1}|a_{\uparrow [\downarrow ]}(E,\theta _N)|^2-|b_{\uparrow [\downarrow ]}(E,\theta _N)|^2\right]$$ (3)

(for $`0<\theta _N<\theta _{c2}`$)

$$=\frac{4\lambda _1\left[4\lambda _2|\widehat{\mathrm{\Gamma }}_+|^2+(1+\lambda _2)^2+Z_{\uparrow [\downarrow ]}^2-|\widehat{\mathrm{\Gamma }}_+\widehat{\mathrm{\Gamma }}_{-}|^2\{(1-\lambda _2)^2+Z_{\uparrow [\downarrow ]}^2\}\right]}{|(1+\lambda _1+iZ_{\uparrow [\downarrow ]})(1+\lambda _2-iZ_{\uparrow [\downarrow ]})-(1-\lambda _1-iZ_{\uparrow [\downarrow ]})(1-\lambda _2+iZ_{\uparrow [\downarrow ]})\widehat{\mathrm{\Gamma }}_+\widehat{\mathrm{\Gamma }}_{-}|^2},$$ (4)

(for $`\theta _{c2}<\theta _N<\theta _{c1}`$)

$$=\frac{4\lambda _1(1-|\widehat{\mathrm{\Gamma }}_+\widehat{\mathrm{\Gamma }}_{-}|^2)\{1+(\kappa _2+Z_{\uparrow })^2\}}{|(1+\lambda _1+iZ_{\uparrow })\{1-i(\kappa _2+Z_{\uparrow })\}-(1-\lambda _1-iZ_{\uparrow })\{1+i(\kappa _2+Z_{\uparrow })\}\widehat{\mathrm{\Gamma }}_+\widehat{\mathrm{\Gamma }}_{-}|^2},$$ (5)

(for $`\theta _{c1}<\theta _N<\pi /2`$) $`=0`$, where

$`Z_{\uparrow [\downarrow ]}={\displaystyle \frac{Z_{0,\uparrow [\downarrow ]}}{\mathrm{cos}\theta _S}},Z_{0,\uparrow [\downarrow ]}={\displaystyle \frac{2m(\widehat{V}_0-[+]\widehat{U}_B)}{\hbar ^2k_S}},`$

$`\widehat{\mathrm{\Gamma }}_\pm =\mathrm{\Gamma }_\pm \mathrm{exp}(i\varphi _\pm ),\mathrm{exp}(i\varphi _\pm )={\displaystyle \frac{\mathrm{\Delta }_\pm }{|\mathrm{\Delta }_\pm |}},\mathrm{\Gamma }_\pm ={\displaystyle \frac{E-\sqrt{E^2-|\mathrm{\Delta }_\pm |^2}}{|\mathrm{\Delta }_\pm |}},`$

$`\lambda _1={\displaystyle \frac{k_{N,\uparrow [\downarrow ]}\mathrm{cos}\theta _N}{k_S\mathrm{cos}\theta _S}},\lambda _2={\displaystyle \frac{k_{N,\downarrow [\uparrow ]}\mathrm{cos}\theta _A}{k_S\mathrm{cos}\theta _S}},\kappa _2=i\lambda _2={\displaystyle \frac{\sqrt{k_S^2\mathrm{sin}^2\theta _S-k_{N,\downarrow }^2}}{k_S\mathrm{cos}\theta _S}}.`$

The conductance for the spin current is defined by the spin imbalance induced by the up \[down\] spin quasiparticle injection,

$$\widehat{\sigma }_{s,\uparrow [\downarrow ]}(E,\theta _N)\equiv \mathrm{Re}\left[1-\frac{\lambda _2}{\lambda _1}|a_{\uparrow [\downarrow ]}(E,\theta _N)|^2-|b_{\uparrow [\downarrow ]}(E,\theta _N)|^2\right]$$ (6)

(for $`0<\theta _N<\theta _{c2}`$)

$$=\frac{4\lambda _1\left[-4\lambda _2|\mathrm{\Gamma }_+|^2+(1+\lambda _2)^2+Z_{\uparrow [\downarrow ]}^2-|\widehat{\mathrm{\Gamma }}_+\widehat{\mathrm{\Gamma }}_{-}|^2\{(1-\lambda _2)^2+Z_{\uparrow [\downarrow ]}^2\}\right]}{|(1+\lambda _1+iZ_{\uparrow [\downarrow ]})(1+\lambda _2-iZ_{\uparrow [\downarrow ]})-(1-\lambda _1-iZ_{\uparrow [\downarrow ]})(1-\lambda _2+iZ_{\uparrow [\downarrow ]})\widehat{\mathrm{\Gamma }}_+\widehat{\mathrm{\Gamma }}_{-}|^2},$$ (7)

(for $`\theta _{c2}<\theta _N<\theta _{c1}`$)

$$=\frac{4\lambda _1(1-|\widehat{\mathrm{\Gamma }}_+\widehat{\mathrm{\Gamma }}_{-}|^2)\{1+(\kappa _2+Z_{\uparrow })^2\}}{|(1+\lambda _1+iZ_{\uparrow })\{1-i(\kappa _2+Z_{\uparrow })\}-(1-\lambda _1-iZ_{\uparrow })\{1+i(\kappa _2+Z_{\uparrow })\}\widehat{\mathrm{\Gamma }}_+\widehat{\mathrm{\Gamma }}_{-}|^2},$$ (8)
(for $`\theta _{c1}<\theta _N<\pi /2`$) $`=0`$. The Andreev-reflected quasiparticles contribute positively to the charge current, but since their spins are reversed, they contribute negatively to the spin current. The second terms on the r.h.s. of Eqs. (3) and (6) make no net contribution to the current in the VAR process, since the corresponding $`\lambda _2`$ is purely imaginary. The normalized total conductance spectra for the charge current $`\sigma _q(E)`$ and the spin current $`\sigma _s(E)`$ are given by

$$\sigma _q(E)=\sigma _{q,\uparrow }(E)+\sigma _{q,\downarrow }(E),$$ (9)

$$\sigma _{q,\uparrow [\downarrow ]}(E)=\frac{1}{R_N}\int _{-\pi /2}^{\pi /2}𝑑\theta _N\mathrm{cos}\theta _N\widehat{\sigma }_{q,\uparrow [\downarrow ]}(E,\theta _N)P_{\uparrow [\downarrow ]}k_{F,\uparrow [\downarrow ]},$$ (10)

$$\sigma _s(E)=\sigma _{s,\uparrow }(E)-\sigma _{s,\downarrow }(E),$$ (11)

$$\sigma _{s,\uparrow [\downarrow ]}(E)=\frac{1}{R_N}\int _{-\pi /2}^{\pi /2}𝑑\theta _N\mathrm{cos}\theta _N\widehat{\sigma }_{s,\uparrow [\downarrow ]}(E,\theta _N)P_{\uparrow [\downarrow ]}k_{F,\uparrow [\downarrow ]},$$ (12)

where

$$R_N=\int _{-\pi /2}^{\pi /2}𝑑\theta _N\mathrm{cos}\theta _N\left[\widehat{\sigma }_{N,\uparrow }(\theta _N)P_{\uparrow }k_{F,\uparrow }+\widehat{\sigma }_{N,\downarrow }(\theta _N)P_{\downarrow }k_{F,\downarrow }\right],$$ (13)

$`\widehat{\sigma }_{N,\uparrow [\downarrow ]}(\theta _N)={\displaystyle \frac{4\lambda _1}{|1+\lambda _1+iZ_{\uparrow [\downarrow ]}|^2}}.`$

In the above, $`R_N`$, $`\sigma _{q,\uparrow [\downarrow ]}(E)`$ and $`\sigma _{s,\uparrow [\downarrow ]}(E)`$ correspond to the conductance when the superconductor is in the normal state and to the spin-resolved normalized conductance spectra for charge and spin, respectively. The net polarization $`J_p(eV)`$ as a function of the bias voltage $`V`$ is given by

$$J_p(eV)=\frac{\int _{-\mathrm{\infty }}^{\mathrm{\infty }}𝑑E\sigma _s(E)\{f(E-eV)-f(E)\}}{\int _{-\mathrm{\infty }}^{\mathrm{\infty }}𝑑E\sigma _q(E)\{f(E-eV)-f(E)\}},$$ (14)

where $`f(E)`$ is the Fermi distribution function. Since the convolution with $`f(E)`$ gives only a smearing effect in the conductance spectra, the temperature is set to zero in the following discussions. In the above formulation, we have neglected the self-consistency of the pair potential in order to obtain analytical formulas. However, the present formula is easily extended to include this effect simply by replacing $`\widehat{\mathrm{\Gamma }}_\pm `$ with $`\widehat{\mathrm{\Gamma }}_\pm (x)|_{x=0}`$, where $`\widehat{\mathrm{\Gamma }}_\pm (x)`$ obeys the Riccati equations

$$\frac{d}{dx}\widehat{\mathrm{\Gamma }}_+(x)=\frac{1}{i\hbar ^2k_F\mathrm{cos}\theta _S}\left[\mathrm{\Delta }_+(x)-\widehat{\mathrm{\Gamma }}_+^2(x)\mathrm{\Delta }_+^{\ast }(x)+2E\widehat{\mathrm{\Gamma }}_+(x)\right],$$ (15)

$$\frac{d}{dx}\widehat{\mathrm{\Gamma }}_{-}(x)=\frac{1}{i\hbar ^2k_F\mathrm{cos}\theta _S}\left[\mathrm{\Delta }_{-}^{\ast }(x)-\widehat{\mathrm{\Gamma }}_{-}^2(x)\mathrm{\Delta }_{-}(x)+2E\widehat{\mathrm{\Gamma }}_{-}(x)\right],$$ (16)

where the spatial dependence of the pair potential, $`\mathrm{\Delta }_\pm (x)`$, is now an arbitrary function of $`x`$. The most important differences of the present formula from previous ones are: i) a novel formula for the non-linear spin current, ii) the capability to treat ferromagnetic-insulator effects based on the scattering method, and iii) the introduction of the breakdown of the retro-reflectivity of the AR process and, consequently, of the vanishing of the propagating AR (the VAR process). In particular, the concept of the VAR process is a new physical process presented in this paper. If we did not accept the existence of this process, total reflection independent of $`E`$ would naively be expected.
Since finite transmission is possible in this angular region ($`\theta _{c2}<\theta <\theta _{c1}`$) above $`T_c`$, such total reflection would induce a sudden decrease of the conductance just below $`T_c`$ for junctions with highly polarized ferromagnets. As far as we know, no trend toward such an effect has been reported thus far. This fact may be direct evidence for the existence of the VAR process. The VAR process is shown to have an important role in the Josephson current in superconductor/ferromagnet/superconductor junctions, because the evanescent wave carries a net Josephson current in this configuration. Note that the suppression mechanism of the AR process presented here is essentially different from that discussed in the one-dimensional model, where the contribution of the AR to the net current is simply governed by the ratio $`k_{N,\downarrow }/k_{N,\uparrow }`$.

## III Results

### A Effects of polarization

In this subsection, to reveal the influence of the polarization on the tunneling conductance spectra, we assume an F/I/S junction by setting $`\widehat{U}_B=0`$ ($`Z_{0,\uparrow }=Z_{0,\downarrow }\equiv Z_0`$). First, let us discuss several analytical results obtained from the above formulation in order to check the validity of the formula. When $`U=0`$, the ferromagnet reduces to a normal metal and, as expected, $`\sigma _q(E)`$ reproduces the results of Ref. , while $`\sigma _s(E)`$ vanishes. For half-metallic ferromagnets ($`U=E_{FN}`$), the Fermi surface for the down spins has shrunk to zero. In this case, the VAR process occurs for all $`\theta _N`$. Under the condition of VAR, $`\widehat{\sigma }_q(E,\theta _N)=\widehat{\sigma }_s(E,\theta _N)`$ applies, which corresponds to the fact that the tunneling current is completely spin-polarized. Furthermore, the conductance spectra in the energy gap ($`E<|\mathrm{\Delta }_+|`$, $`E<|\mathrm{\Delta }_{-}|`$) become completely zero \[$`\sigma _q(E,\theta _N)=\sigma _s(E,\theta _N)=0`$\]. In the tunneling limit ($`\widehat{V}_0\to \mathrm{\infty }`$) and in the absence of VAR, $`\widehat{\sigma }_{q,\uparrow [\downarrow ]}(E,\theta )`$ gives the angle-resolved surface DOS of an isolated superconductor. Then $`\sigma _q(E)`$ converges to the surface DOS weighted by the tunneling probability distribution. In this limit, we can reproduce the well-known result that the ratio of the peak heights in the spin-resolved spectra directly reflects the polarization in the ferromagnet. On the other hand, $`\sigma _{s,\uparrow [\downarrow ]}(E,\theta )`$ reduces to a function similar to the surface DOS, but one in which the divergence at the energy levels of the surface bound states is missing. Next, calculated results based on the above formulas are presented for $`d_{x^2-y^2}`$-wave superconductors. In the following, we assume $`E_{FN}=E_{FS}`$. Figures 2 and 3 show the conductance spectra of the charge current for the transparent limit ($`Z_0=0`$, $`\beta =0`$) and a high-barrier case ($`Z_0=5`$, $`\beta =\pi /4`$) as functions of the exchange interaction $`X(\equiv U/E_{FN})`$. For $`X=0`$, the results in Ref. are reproduced. However, as $`X`$ increases, the conductance inside the gap ($`E<\mathrm{\Delta }_0`$) is largely reduced in both cases. In particular, the ZBCP disappears in the half-metallic ferromagnet case. Since the spin polarization has such a drastic influence on the ZBCP, the height of the ZBCP can in principle be used as a measure of the magnitude of the spin polarization. Figure 4 shows the difference between the spin current and the charge current for $`X=0.85`$, $`Z_0=5`$ and $`\beta =\pi /4`$. It is clear that the ZBCP is not present in the spin current.
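For readers who wish to reproduce such spectra, Eq. (4) is straightforward to evaluate; a minimal Python sketch of the angle-resolved up-spin charge conductance (assuming $`E_{FN}=E_{FS}`$, energies in units of $`\mathrm{\Delta }_0`$, and a small imaginary part added to $`E`$ for numerical stability) is:

```python
import numpy as np

def sigma_q_up(E, theta_N, X=0.85, Z0=5.0, beta=np.pi / 4):
    """Angle-resolved charge conductance of Eq. (4), up-spin injection.

    Valid in the non-VAR window theta_N < theta_c2; returns 0 outside it.
    """
    k_up, k_dn, k_S = np.sqrt(1 + X), np.sqrt(1 - X), 1.0   # E_FN = E_FS
    sinS = k_up * np.sin(theta_N) / k_S                     # k_y conservation
    if abs(sinS) >= 1.0 or k_dn ** 2 <= (k_S * sinS) ** 2:
        return 0.0
    cosS = np.sqrt(1.0 - sinS ** 2)
    cosA = np.sqrt(k_dn ** 2 - (k_S * sinS) ** 2) / k_dn
    lam1 = k_up * np.cos(theta_N) / (k_S * cosS)
    lam2 = k_dn * cosA / (k_S * cosS)
    Z = Z0 / cosS
    Ec = E + 1e-4j                                          # E -> E + i0
    theta_S = np.arcsin(sinS)
    G = []
    for s in (+1.0, -1.0):                                  # Delta_+, Delta_-
        d = np.cos(2.0 * (theta_S - s * beta))              # units of Delta_0
        # hatted Gamma = (E - sqrt(E^2 - d^2))/d for a real pair potential d;
        # the node direction d = 0 is assigned zero amplitude in this sketch
        G.append((Ec - np.sqrt(Ec ** 2 - d ** 2)) / d if d != 0.0 else 0.0)
    Gp, Gm = G
    num = 4 * lam1 * (4 * lam2 * abs(Gp) ** 2 + (1 + lam2) ** 2 + Z ** 2
                      - abs(Gp * Gm) ** 2 * ((1 - lam2) ** 2 + Z ** 2))
    den = abs((1 + lam1 + 1j * Z) * (1 + lam2 - 1j * Z)
              - (1 - lam1 - 1j * Z) * (1 - lam2 + 1j * Z) * Gp * Gm) ** 2
    return num / den

# at beta = pi/4 the zero-energy value is Z0-independent (the ZBCP):
print(sigma_q_up(0.0, 0.2), sigma_q_up(0.0, 0.2, Z0=20.0))
```

Integrating this kernel over $`\theta _N`$ with the weight $`\mathrm{cos}\theta _N P_{\uparrow }k_{F,\uparrow }`$ of Eq. (10), together with the analogous down-spin and VAR-window expressions, reproduces the normalized spectra discussed here.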
The absence of the ZBCP in the spin current corresponds to the fact that the charge-current components associated with the ZES are carried by condensed Cooper pairs in the superconductor, and therefore they do not contribute to the spin imbalance. As a result, the spin current is relatively insensitive to the orientation of the junction. Figure 5 shows the conductance spectra for the spin current as a function of the spin polarization ($`Z_0=5`$). It is clear that the spin current increases as $`X`$ becomes larger. Note that $`\sigma _s(E)`$ is larger than unity around $`E=\mathrm{\Delta }_0`$ when $`X\approx 1`$. This corresponds to the fact that the peak in the DOS has an influence even on the spin current. Next, the net polarization $`J_p(eV)`$ is calculated for $`d_{x^2-y^2}`$-wave superconductors as a function of the orientation ($`\beta `$) at $`T=0`$. The four lines of Fig. 6 show the results for various values of the barrier parameter at $`eV=2\mathrm{\Delta }_0`$. It is clear that the orientational effect is much smaller than the effect of $`Z_0`$. In the same figure, results for $`s`$-wave superconductors ($`\mathrm{\Delta }_+=\mathrm{\Delta }_{-}=\mathrm{\Delta }_0`$, independent of $`\theta _N`$) are also shown as closed dots. The large deviations of the $`d_{x^2-y^2}`$-wave results from the $`s`$-wave ones for small values of $`Z_0`$ originate from the distribution of the pair amplitude in $`k`$-space. As the barrier parameter becomes larger, the spin-injection efficiency becomes insensitive to the symmetry of the pair potential.

### B Spin filtering effects and the ZBCP splitting

It has been experimentally verified that a ferromagnetic semiconductor used as the insulator in tunneling junctions works as a ferromagnetic barrier. Since the transmission probabilities for up and down spins are not equal, a spin-filtering effect is expected to be realized. It has also been theoretically verified that a ferromagnetic insulator placed in the vicinity of a superconductor induces a spin splitting in the DOS of $`s`$-wave superconductors. In the following, we analyze the influence of the exchange interaction inside the insulator on the transport properties based on the formulation described in Sec. II. Figure 7 shows the response of the conductance spectra $`\sigma _q(E)`$ to the exchange interaction in the insulator when $`X=0`$. ZBCP splittings are obtained for finite exchange amplitudes ($`U_B`$). As $`U_B`$ is increased, and consequently as the difference between $`Z_{0,\uparrow }`$ and $`Z_{0,\downarrow }`$ becomes larger, the amplitude of the splitting becomes larger and the two peaks become broader and smaller. The peaks in the gap disappear when the difference between $`Z_{0,\uparrow }`$ and $`Z_{0,\downarrow }`$ becomes prominent. To see these trends more clearly, the spin-resolved conductance spectra $`\sigma _{q,\uparrow [\downarrow ]}(E)`$ and $`\sigma _s(E)`$ for $`Z_{0,\uparrow }=2.5`$ and $`Z_{0,\downarrow }=7.5`$ are plotted in Fig. 8. The spectra for up \[down\] spins are shifted to lower \[higher\] energies. Furthermore, $`\sigma _s(E)`$ becomes finite even though $`X=0`$ in the ferromagnet. In order to check the effect of the polarization, Fig. 9 shows the response of the charge current as a function of the polarization $`X`$ for a fixed barrier parameter. The spin polarization in the ferromagnet induces an imbalance in the peak heights; thus the ratio of the split peak heights can be used as a criterion for the spin polarization.
These results are interpreted as follows: i) The peaks corresponding to the up \[down\] spin components are shifted because of the energy gain (loss) during the tunneling process. ii) Since this energy gain (loss) has a $`k`$-dependence, the peaks become broader compared to the magnetic-field-induced peak splitting (see below). iii) The amplitude of the peak splitting depends on the genuine barrier amplitude $`\widehat{V}_0`$ as well as on the exchange amplitude $`\widehat{U}_B`$. For example, the split peaks merge into a single peak in the tunneling limit ($`\widehat{V}_0\to \mathrm{\infty }`$) even if $`\widehat{U}_B`$ is kept constant. iv) The current corresponding to the ZBCP is carried by Cooper pairs in the superconductor, as described in the previous subsection. This corresponds to the fact that the AR process is a second-lowest-order tunneling process which requires the tunneling of both up and down spins. Hence, as $`Z_{0,\downarrow }`$ becomes larger and the tunneling probability for down spins is suppressed, the conductance peaks and the AR process are rapidly reduced even if $`Z_{0,\uparrow }`$ is kept zero. v) The spin current increases as $`\widehat{U}_B`$ is raised from zero even if $`X`$ in the ferromagnet is kept at zero (unpolarized). This feature directly corresponds to the spin-filtering effect, in which spin-selective tunneling occurs due to the presence of the exchange field in the insulator.

Next, various types of ZBCP splitting expected for $`d`$-wave superconductors and their polarization effects are analyzed. Mainly two possibilities other than the ferromagnetic-insulator effects have been proposed as origins of the ZBCP splitting in high-$`T_c`$ superconductor junctions. One is the Zeeman effect due to an applied magnetic field, and the other is the inducement of BTRS states such as the $`d_{x^2-y^2}`$+$`is`$-wave state. The conductance spectra in an applied magnetic field are calculated from the above formula by simply using the relation

$$\sigma _{q[s]}(E)=\sigma _{q[s],\uparrow }(E-\mu _BH)+\sigma _{q[s],\downarrow }(E+\mu _BH),$$ (17)

where $`\mu _BH`$ is the Zeeman energy. Calculated charge-conductance spectra for a $`d_{x^2-y^2}`$-wave superconductor as a function of $`X`$ are shown in Fig. 10. The amplitude of the splitting is linear in the applied field, independent of the barrier height. Moreover, since the energy shift induced by the magnetic field has no $`k`$-dependence, no broadening of the peaks is observed. The ratio of the split peak heights simply reflects the polarization in the ferromagnet, which is consistent with the results by Tedrow and Meservey. On the other hand, $`\sigma _q(E)`$ for a $`d_{x^2-y^2}`$+$`is`$-wave superconductor is calculated by setting $`\mathrm{\Delta }_\pm =\mathrm{\Delta }_0\mathrm{cos}2(\theta _S\mp \beta )+i\mathrm{\Delta }_s`$. Calculated charge-conductance spectra for various $`X`$ values are shown in Fig. 11. The amplitude of the splitting is almost equal to the amplitude of the $`s`$-wave component. The shape of the spectrum without polarization ($`X=0`$) is quite similar to that of Fig. 10 ($`X=0`$). As $`X`$ becomes larger, the heights of the two peaks are reduced, consistent with Fig. 3. On the other hand, differently from Figs. 9 and 10, since the peak splitting is not induced by spin-dependent effects in this case, the polarization in the ferromagnet does not yield an imbalance in the peak heights. Thus the heights of the two peaks are reduced symmetrically.
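The qualitative effect of Eq. (17) can be illustrated with toy spin-resolved spectra — Lorentzian ZBCP shapes standing in for the full angle-integrated result of Eq. (10); all numbers in this Python sketch are illustrative:

```python
import numpy as np

E = np.linspace(-2.0, 2.0, 801)            # energy in units of Delta_0
X, width, muBH = 0.4, 0.05, 0.3            # illustrative parameters
# toy spin-resolved ZBCPs with weights proportional to P_up and P_down
s_up = (1 + X) / 2 * width / (E ** 2 + width ** 2)
s_dn = (1 - X) / 2 * width / (E ** 2 + width ** 2)
# Eq. (17): shift the up (down) spectrum by -/+ muBH in energy and add
total = np.interp(E - muBH, E, s_up) + np.interp(E + muBH, E, s_dn)
interior = slice(1, -1)
is_peak = (total[1:-1] > total[:-2]) & (total[1:-1] > total[2:])
print("peak positions:", E[interior][is_peak])      # ~ +/- muBH
print("peak heights:  ", total[interior][is_peak])  # asymmetric for X != 0
```

The two peaks sit at $`\pm \mu _BH`$ and their height ratio tracks $`(1+X)/(1-X)`$, the Zeeman-induced asymmetry discussed above.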
The responses of the ZBCP to the variation of the polarization and the applied magnetic field are summarized as follows: i) The peak splittings due to the ferromagnetic insulator and the Zeeman effect are spin-dependent. Therefore, the polarization in the ferromagnet induces an asymmetrical splitting of the ZBCP. ii) The amplitude of the peak splitting is linear in the applied field in the case of the Zeeman effect. However, it is non-linear in the cases of the ferromagnetic-insulator effects and the BTRS states. In particular, peak splittings are expected even in the absence of an applied field in these two cases. iii) The combination of the BTRS states and the Zeeman effect induces an additional peak splitting, that is, the ZBCP splits into four peaks. However, the combination of the Zeeman and ferromagnetic-insulator effects yields two peaks. Experimental observations of the ZBCP splitting have been reported for normal metal / high-$`T_c`$ superconductor junctions. It would be a really interesting experiment to observe the same features using ferromagnet/high-$`T_c`$ superconductor junctions in order to distinguish the spin-dependent effects from the inducement of BTRS states. Recently, Sawa et al. have detected an asymmetric magnetic-field response in La<sub>0.67</sub>Sr<sub>0.33</sub>MnO<sub>3</sub> / YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> junctions. The qualitative features of the magnetic-field responses of their junctions are consistent with the F/FI/S junction with $`d_{x^2-y^2}`$-wave symmetry explained here. A detailed comparison between the above formulas and their experiments is highly desirable.

Finally, a simple proposal is given for a possible device application utilizing the ferromagnetic-insulator effects. The thickness of the insulator is of the order of 1 nm in usual tunneling junctions. Since controlling the properties of such a thin layer is technologically demanding, as far as we know, not many experimental trials have been accomplished thus far. However, as shown in this paper, a small change in the insulator properties causes a drastic change in the transport properties. Therefore, controlling the barrier properties is one of the most promising methods to create new functional devices. For example, consider an F/FI/S junction with a $`d_{x^2-y^2}`$-wave superconductor ($`\beta =\pi /4`$). The sharp ZBCP is drastically modified as the difference between $`Z_{\uparrow }`$ and $`Z_{\downarrow }`$ becomes larger, as shown in Fig. 7. This means that, for a fixed bias voltage, a large response in the current is expected from a small variation of the exchange interaction in the insulator. This response is applicable to high-sensitivity magnetization measurements of a thin insulating film, by inserting the film into a junction as the tunneling barrier. If the exchange interaction is sensitive to the external field, this effect can be used as a magnetic sensor. Alternatively, if the magnetization of the insulator shows hysteresis under variation of the external field, a memory function can be realized. The current gain of the junction as a function of the external field is largely enhanced by using a superconductor/ferromagnetic insulator/superconductor junction with a $`d`$-wave superconductor, because negative-conductance regions are expected just beside the ZBCP in this configuration. Differently from conventional superconducting memories based on flux-quantum logic, a large-scale integration circuit may be possible based on the present principle.
## IV Summary

In this paper, the conductance spectra for the charge and spin currents under the influence of an exchange interaction have been calculated based on the scattering method. The influence of the spin polarization on the transport properties has been clarified. It is shown that the retro-reflectivity of the standard Andreev reflection process is broken in the presence of an exchange field, and that the surface bound states due to the superconducting pair potentials do not contribute to the spin current. Next, the effects of a ferromagnetic insulator, including the spin-filtering effect, are analyzed. It is shown that the spin polarization gives rise to asymmetric peak splitting. Moreover, the various features of the ZBCP splitting due to the ferromagnetic insulator, the Zeeman effect, and the BTRS states are analyzed in detail. It is shown that spin-polarized tunneling provides important information for identifying the origin of the ZBCP splitting. By comparing the present analysis with experimental data, we expect that the mechanism of the peak splitting in high-$`T_c`$ superconductors will be identified. In the present model, we have neglected the effects of spin-orbit scattering and the non-equilibrium properties of superconductors . Inclusion of these effects would be necessary for a complete theory. The formulation for triplet superconductors will be presented in another publication .

## V Acknowledgments

We would like to thank M. Koyanagi, K. Kajimura, M. Yamashiro, J. Inoue, A. Sawa, and D. Worledge for fruitful discussions. This work has been partially supported by the U.S. Air Force Office of Scientific Research.
1. The problem of particle production by the electric field of a black hole has been discussed repeatedly [1-6]. The probability of this process was estimated in these papers using, in one way or another, the result obtained previously [7-9] for the case of an electric field constant all over space. This approximation might look quite natural with regard to sufficiently large black holes, for which the gravitational radius substantially exceeds the Compton wavelength of the particle $`\lambda =1/m`$. (We use in the present paper units with $`\hbar =1`$, $`c=1`$; the Newton gravitational constant $`k`$ is written down explicitly.) However, in fact, as will be demonstrated below, the constant-field approximation is, generally speaking, inadequate for the present problem and does not reflect a number of its essential peculiarities. It is convenient to start the discussion with the problem of particle creation by a constant electric field. Here and below we restrict ourselves to the production of electrons and positrons, first of all because the probability of emitting these lightest charged particles is the largest. Besides, the picture of the Dirac sea allows one, in the case of fermions, to manage without the second-quantization formalism, thus making the considerations most transparent. To calculate the main, exponential dependence of the effect, it is sufficient to use a simple approach due to (see also the textbook ). In the potential $`eEz`$ of a constant electric field $`E`$ the usual Dirac gap (Fig. 1) tilts (see Fig. 2). As a result, a particle which had a negative energy in the absence of the field can now tunnel through the gap (see the dashed line in Fig. 2) and go to infinity as a usual particle. The hole created in this way is nothing but an antiparticle. An elementary calculation leads to the well-known result for the probability of particle creation: $$W\sim \mathrm{exp}\left(-\frac{\pi m^2}{eE}\right).$$ (1) This simple derivation clearly explains some important properties of the phenomenon. First of all, the action inside the barrier does not change under a shift of the dashed line in Fig. 2 up or down. Just due to this, expression (1) is independent of the energy of the created particles. Then, for the external field to be considered a constant one, it should change weakly along the path inside the barrier. However, the length of this path is not directly related to the Compton wavelength of the particle. In particular, for an arbitrarily weak field the path inside the barrier becomes arbitrarily long. Thus, one may expect that the constant-field approximation is not, generally speaking, applicable to the problem of the radiation of a charged black hole, and that the probability of particle production in this problem is strongly energy-dependent. The explicit form of this dependence will be found below. We restrict ourselves in the present work to the case of a non-rotating black hole. 2. We start the solution of the problem by calculating the action inside the barrier. The metric of a charged black hole is well known: $$ds^2=fdt^2-f^{-1}dr^2-r^2(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2),$$ (2) where $$f=1-\frac{2kM}{r}+\frac{kQ^2}{r^2},$$ (3) $`M`$ and $`Q`$ being the mass and charge of the black hole, respectively. The equation for the particle 4-momentum in these coordinates is $$f^{-1}\left(ϵ-\frac{eQ}{r}\right)^2-fp^2-\frac{l^2}{r^2}=m^2.$$ (4) Here $`ϵ`$ and $`p`$ are the energy and radial momentum of the particle. 
We assume that the particle charge $`e`$ is of the same sign as the charge of the hole $`Q`$, ascribing the charge $`-e`$ to the antiparticle. Clearly, the action inside the barrier is minimal for vanishing orbital angular momentum $`l`$. It is rather evident therefore (and will be demonstrated explicitly in the next section) that after the summation over $`l`$ it is the $`s`$-state that defines the exponential in the total probability of the process. So, we restrict ourselves for the moment to the case of purely radial motion. The equation for the Dirac gap for $`l=0`$ is $$ϵ_\pm (r)=\frac{eQ}{r}\pm m\sqrt{f}.$$ (5) It is presented in Fig. 3. It is known that at the horizon of a black hole, for $`r=r_+=kM+\sqrt{k^2M^2-kQ^2}`$, the gap vanishes. Then, with the increase of $`r`$, the lower boundary of the gap $`ϵ_{-}(r)`$ decreases monotonically, tending asymptotically to $`-m`$. The upper branch $`ϵ_+(r)`$ at first, in general, increases, and then decreases, tending asymptotically to $`m`$. It is clear from Fig. 3 that those particles of the Dirac sea whose coordinate $`r`$ exceeds the gravitational radius $`r_+`$ and whose energy $`ϵ`$ belongs to the interval $`ϵ_{-}(r)>ϵ>m`$ tunnel through the gap to infinity. In other words, a black hole loses its charge due to the discussed effect, by emitting particles with the same sign of the charge $`e`$ as the sign of $`Q`$. Clearly, the phenomenon takes place only under the condition $$\frac{eQ}{r_+}>m.$$ (6) For an extreme black hole, with $`Q^2=kM^2`$, the Dirac gap looks somewhat different (see Fig. 4): when $`Q^2`$ tends to $`kM^2`$, the location of the maximum of the curve $`ϵ_+(r)`$ tends to $`r_+`$, and the value of the maximum tends to $`eQ/r_+`$. It is obvious, however, that the situation does not change qualitatively because of this. Thus, though an extreme black hole has zero Hawking temperature and, correspondingly, gives no thermal radiation, it still creates charged particles due to the discussed effect. In the general case $`Q^2\le kM^2`$ the doubled action inside the barrier entering the exponential of the radiation probability is $$2S=2\int _{r_1}^{r_2}𝑑r|p(r,ϵ)|$$ $$=2\int _{r_1}^{r_2}\frac{dr\,r}{r^2-2kMr+kQ^2}\sqrt{-p_0^2r^2+2(ϵeQ-km^2M)r-(e^2-km^2)Q^2}.$$ (7) Here $`p_0=\sqrt{ϵ^2-m^2}`$ is the momentum of the emitted particle at infinity, and the turning points $`r_{1,2}`$ are, as usual, the roots of the quadratic polynomial under the radical; we are interested in the energy interval $`m\le ϵ\le eQ/r_+`$. Of course, the integral can be found explicitly, though it demands somewhat tedious calculations. However, the result is sufficiently simple: $$2S=2\pi \frac{m^2}{(ϵ+p_0)p_0}\left[eQ-(ϵ-p_0)kM\right].$$ (8) Certainly, this expression, in contrast to the exponent in formula (1), depends quite essentially on the energy. Let us note that the action inside the barrier does not vanish even for the limiting value of the energy $`ϵ_m=eQ/r_+`$. For a nonextreme black hole this is clear already from Fig. 3. For an extreme black hole this fact is not as obvious. However, due to the singularity of $`|p(r,ϵ)|`$, the action inside the barrier is finite for $`ϵ =ϵ_m=eQ/r_+`$ for an extreme black hole as well. In this case the exponential factor in the probability is $$\mathrm{exp}\left(-\pi \frac{\sqrt{k}m}{e}kmM\right).$$ (9) Due to the extreme smallness of the ratio $$\frac{\sqrt{k}m}{e}\sim 10^{-21},$$ (10) the exponent here is large only for a very heavy black hole, with a mass $`M`$ exceeding that of the Sun by more than 5 orders of magnitude. 
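As a quick consistency check on the closed form (8), one can evaluate the barrier integral (7) by direct quadrature. The following sketch uses arbitrary test parameters (units $`\hbar =c=1`$, with $`k`$ kept explicit), chosen only so that the turning points are real and lie outside the horizon:

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary test values (not physical), with eQ/r_+ > m satisfied
k, M, Q, e, m, eps = 1.0, 1.0, 0.5, 10.0, 0.1, 1.0

p0 = np.sqrt(eps**2 - m**2)
rp = k*M + np.sqrt((k*M)**2 - k*Q**2)          # horizon radius r_+

# Quadratic under the radical in Eq. (7):  -p0^2 r^2 + b r - c
b = 2.0*(eps*e*Q - k*m**2*M)
c = (e**2 - k*m**2)*Q**2
r1, r2 = sorted(np.roots([-p0**2, b, -c]).real)  # turning points

def abs_p(r):
    """Radial momentum modulus inside the barrier, integrand of Eq. (7)."""
    return r*np.sqrt(max(-p0**2*r**2 + b*r - c, 0.0)) \
           / (r**2 - 2.0*k*M*r + k*Q**2)

S2_numeric = 2.0*quad(abs_p, r1, r2, limit=200)[0]
S2_closed  = 2.0*np.pi*m**2/((eps + p0)*p0) * (e*Q - (eps - p0)*k*M)
print(S2_numeric, S2_closed)   # should agree up to quadrature error
```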
Since the total probability, integrated over energy, is dominated by the energy region $`ϵ\sim ϵ_m`$, the semiclassical approach is applicable in the case of extreme black holes only for these very heavy objects. Let us note also that for the particles emitted by an extreme black hole the typical values of the ratio $`ϵ/m`$ are very large: $$\frac{ϵ}{m}\sim \frac{ϵ_m}{m}=\frac{eQ}{kmM}=\frac{e}{\sqrt{k}m}\sim 10^{21}.$$ In other words, an extreme black hole in any case radiates mainly highly ultrarelativistic particles. Let us come back to nonextreme holes. In the nonrelativistic limit, when $`eQ/r_+`$ is close to $`m`$ and, correspondingly, the particle velocity $`v\to 0`$, the exponential is of course very small: $$\mathrm{exp}\left(-\frac{2\pi kmM}{v}\right).$$ (11) Therefore, we will consider mainly the opposite, ultrarelativistic limit, where the exponential is $$\mathrm{exp}\left(-\pi \frac{m^2}{ϵ^2}eQ\right).$$ (12) Of course, here also the energies $`ϵ\sim ϵ_m\sim eQ/kM`$ are essential, so that the ultrarelativistic limit corresponds to the condition $$eQ\gg kmM.$$ (13) But then the semiclassical result (12) is applicable (i.e., the action inside the barrier is large) only under the condition $$kmM\gg 1.$$ (14) Let us note that this last condition means that the gravitational radius of the black hole ($`r_+\sim kM`$) is much larger than the Compton wavelength of the electron, $`1/m`$. In other words, the result (12) refers to macroscopic black holes. Combining (13) with (14), we arrive at one more condition for the applicability of formula (12): $$eQ\gg 1.$$ (15) We will come back to this relationship below. Let us note that in  the action inside the barrier was calculated under the same assumptions as formula (12). However, the answer presented there, $`2S=\pi m^2r_+^2/eQ`$, is independent of the energy altogether (and corresponds to formula (1), which refers to the case of a constant electric field). I do not understand how such an answer could be obtained for the discussed integral in the general case $`ϵ\sim ϵ_m`$. 3. The obtained exponential is the probability that a particle approaching the turning point $`r_1`$ (see Figs. 3, 4) from the left will tunnel through the potential barrier. One should recall that in the general case the position of the turning point depends not only on the particle energy $`ϵ`$, but on its orbital angular momentum $`l`$ as well. The total number of particles with given $`ϵ`$ and $`l`$ approaching a spherical surface of radius $`r_1`$ in unit time is equal to the product of the area of this surface $$S=4\pi r_1^2(ϵ,l)$$ (16) and the current density of the particles $$j^r(ϵ,l)=\sum \frac{\rho }{\sqrt{g_{00}}}\frac{dr}{dt}$$ (17) (see, e.g., , §90). The particle velocity is, as usual, $$v^r=\frac{dr}{dt}=\frac{\partial ϵ}{\partial p}$$ (18) (the subscript $`r`$ of the radial momentum $`p`$ is again omitted). To obtain an explicit expression for the particle density $`\rho `$, we will use the semiclassical approximation (the conditions of its applicability in the region $`r_+<r<r_1`$ will be discussed later). Let us note that the volume element of the phase space $$2\frac{dp_xdp_ydp_zdxdydz}{(2\pi )^3}$$ (19) is a scalar. (The factor 2 here is due, as usual, to the two possible orientations of the electron spin.) On the other hand, the number of particles in the elementary cell $`dxdydz`$ equals (see , §90) $$\rho \sqrt{\gamma }dxdydz,$$ (20) where $`\gamma `$ is the determinant of the space metric tensor. 
Since all the states of the Dirac sea are occupied, we obtain by comparing formulae (19) and (20) that the expression $$\frac{\rho }{\sqrt{g_{00}}}=\frac{2}{\sqrt{g_{00}\gamma }}\frac{dp_xdp_ydp_z}{(2\pi )^3}=\frac{2}{\sqrt{-g}}\frac{dp_xdp_ydp_z}{(2\pi )^3}$$ should be plugged into formula (17) for the current density (the summation here and below is performed with fixed $`ϵ`$ and $`l`$; see (17)). In our case the determinant $`g`$ of the four-dimensional metric tensor does not differ from the flat one, so that the radial current density of the particles of the Dirac sea is $$j^r(ϵ,l)=\sum 2\frac{d^3p}{(2\pi )^3}\frac{\partial ϵ}{\partial p}.$$ (21) The summation on the right-hand side reduces in fact to multiplication by the number $`2l+1`$ of possible projections of the orbital angular momentum $`𝐥`$ onto the $`z`$ axis and to integration over the azimuthal angle of the vector $`𝐥`$, which gives $`2\pi `$. Taking into account the identity $$\frac{\partial ϵ}{\partial p_r}dp_r=dϵ,$$ we obtain as a result $$j^r(ϵ,l)=2\frac{2\pi (2l+1)}{(2\pi )^3r_1^2(ϵ,l)}.$$ (22) Finally, the pre-exponential factor in the probability, differential in energy and orbital angular momentum, is $$\frac{2(2l+1)}{\pi }.$$ (23) Correspondingly, the number of particles emitted per unit time is $$\frac{dN}{dt}=\frac{2}{\pi }\int dϵ\underset{l}{\sum }(2l+1)\mathrm{exp}[-2S(ϵ,l)].$$ (24) In the most interesting, ultrarelativistic case, $`dN/dt`$ can be calculated explicitly. Let us consider the expression for the momentum in the region inside the barrier for $`l\ne 0`$: $$|p(ϵ,l,r)|=f^{-1}\sqrt{\left(m^2+\frac{l^2}{r^2}\right)f-\left(ϵ-\frac{eQ}{r}\right)^2}.$$ (25) The main contribution to the integral over energies in formula (24) is given by the region $`ϵ\sim ϵ_m`$. In this region the functions $`f(r)`$ and $`ϵ-eQ/r`$ entering expression (25) are small and change rapidly. As to the quantity $$\mu ^2(r,l)=m^2+\frac{l^2}{r^2},$$ (26) one can substitute for $`r`$ in it its average value, which lies between the turning points $`r_1`$ and $`r_2`$. Obviously, in the discussed limit $`ϵ\to ϵ_m`$ the near turning point coincides with the horizon radius, $`r_1=r_+`$. And the expression for the distant turning point is in this limit $$r_2=r_+\left[1+\frac{2\mu ^2}{ϵ_m^2-\mu ^2}\frac{\sqrt{k^2M^2-kQ^2}}{r_+}\right].$$ (27) Assuming that for estimates one can put $`r\sim r_+`$ in formula (26), one can easily show that the correction to 1 in the square bracket is bounded by the ratio $`l^2/(eQ)^2`$. Assuming that this ratio is small (we will see below that this assumption is self-consistent), we arrive at the conclusion that $`r_2\approx r_+`$, and hence $`\mu ^2`$ can be considered independent of $`r`$: $`\mu ^2(r,l)=m^2+l^2/r_+^2`$. As a result, we obtain $$2S(ϵ,l)\approx \pi eQ\left(\frac{m^2}{ϵ^2}+\frac{l^2}{r_+^2ϵ^2}\right).$$ (28) Now we easily find $$\frac{dN}{dt}=m\left(\frac{eQ}{\pi mr_+}\right)^3\mathrm{exp}\left(-\frac{\pi m^2r_+^2}{eQ}\right).$$ (29) Let us note that the range of orbital angular momenta contributing to the total probability (29) is effectively bounded by the condition $`l^2\lesssim eQ`$. Since $`eQ\gg 1`$, this condition allows one to change from summation over $`l`$ in formula (24) to integration. On the other hand, this condition justifies the approximation $`\mu ^2(r,l)=m^2+l^2/r_+^2`$ used above. However, up to now we have not considered one more condition necessary for the derivation of formula (29). We mean the applicability of the semiclassical approximation to the left of the barrier, for $`r_+<r<r_1`$. 
This condition has the usual form $$\frac{d}{dr}\frac{1}{p(r)}<1.$$ (30) In other words, the minimum size of the initial wave packet should not exceed the distance from the horizon to the turning point. Using the estimate $$p(r)\sim \frac{r_+(eQ-ϵr_+)}{(r-r_+)(r-r_{-})}$$ for the momentum in the most essential region, one can check that for an extreme black hole the condition (30) is valid due to the bound $`eQ\gg 1`$. In the non-extreme case, for $`r_+-r_{-}\sim r_+`$, the situation is different: the condition (30) reduces to $$ϵ<\frac{eQ-1}{r_+}\approx \frac{eQ}{r_+}.$$ (31) Thus, for a non-extreme black hole, in the most essential region $`ϵ\sim ϵ_m`$ the condition of the semiclassical approximation is not valid. Nevertheless, the semiclassical result (24) remains true qualitatively, up to a numerical factor in the pre-exponential. In concluding this section, a few words on the radiation of light charged black holes, for which $`kmM<1`$, i.e., for which the gravitational radius is less than the Compton wavelength of the electron. In this case the first part, $$ϵ<\frac{eQ-1}{r_+},$$ of inequality (31), which guarantees the localization of the initial wave packet in the region of a strong field, means in particular that $$eQ=Z\alpha >1$$ (32) (we have introduced here $`Z=Q/e`$). It is well known (see, e.g., ) that the vacuum of a point-like charge with $`Z\alpha >1`$ is unstable, so that such an object loses its charge by emitting charged particles. It is quite natural that for a black hole whose gravitational radius is smaller than the Compton wavelength of the electron, the condition for emitting charge is the same as in pure quantum electrodynamics. (Let us note that the unity in all these conditions should not be taken too literally: even in quantum electrodynamics, where the instability condition for the vacuum of particles of spin 1/2 is just $`Z\alpha >1`$ for a point-like nucleus, for a finite-size nucleus it changes to $`Z\alpha >1.24`$. On the other hand, for the vacuum of scalar particles in the field of a point-like nucleus the instability condition is $`Z\alpha >1/2`$.) As has been mentioned already, for a light black hole, with $`kmM<1`$, the discussed condition $`eQ>1`$ leads to a small action inside the barrier and to the inapplicability of the semiclassical approximation used in the present article. The problem of the radiation of a charged black hole with $`kmM<1`$ was investigated numerically in . 4. The exponential $$\mathrm{exp}\left(-\frac{\pi m^2r_+^2}{eQ}\right)$$ in our formula (29) coincides with the expression arising from formula (1), which refers to a constant electric field $`E`$, if one plugs in for this field its value $`Q/r_+^2`$ at the black hole horizon. As has been mentioned already, an approach based on the formulae for a constant electric field was used previously in Refs. [1-6]. Thus, our result for the main, exponential dependence of the probability integrated over energies coincides with the corresponding result of these papers. Moreover, our final formula (24) agrees with the corresponding result of Ref.  up to an overall factor 1/2. (This difference is of no interest by itself: as noted above, for a non-extreme black hole the semiclassical approximation cannot guarantee an exact value of the overall numerical factor at all.) Nevertheless, we believe that the analysis of the phenomenon performed in the present work, which demonstrates its essential distinctions from particle production by a constant external field, is useful. 
First of all, it follows from this analysis that the probability of particle production by a charged black hole has an absolutely nontrivial energy spectrum. Then, in no way are real particles produced by a charged black hole all over the whole space: for a given energy $`ϵ`$ they are radiated by a spherical surface of radius $`r_2(ϵ)`$, this surface being close to the horizon for the maximum energy. (It follows from this, for instance, that the derivation of the mentioned result of Ref.  for $`dN/dt`$ has no physical grounds: this derivation reduces to plugging $`E=Q/r^2`$ into the well-known Schwinger formula , obtained for a constant field, with subsequent integration over the whole space outside the horizon.) Let us now compare the radiation intensity $`I`$ due to the discussed effect with the intensity $`I_H`$ of the Hawking thermal radiation. Introducing an additional weight $`ϵ`$ in the integrand of formula (24), we obtain $$I=\pi m^2\left(\frac{eQ}{\pi mr_+}\right)^4\mathrm{exp}\left(-\frac{\pi m^2r_+^2}{eQ}\right).$$ (33) As to the Hawking intensity, the simplest way to estimate it is to use dimensional arguments, just dividing the Hawking temperature $$T_H=\frac{1}{4\pi r_+}$$ by a typical classical time of the problem, $`r_+`$ (in our units $`c=1`$). Thus, $$I_H\sim \frac{1}{4\pi r_+^2}.$$ (34) A more accurate answer for $`I_H`$ differs from this estimate by a small numerical factor $`\sim 2\times 10^{-2}`$, but for qualitative estimates one can neglect this distinction. The intensities (33) and (34) become equal for $$eQ\approx \frac{\pi }{6}\frac{(mr_+)^2}{\mathrm{ln}(mr_+)}\sim \frac{\pi }{6}\frac{(kmM)^2}{\mathrm{ln}(kmM)}.$$ (35) (One cannot agree with the condition $`eQ\sim 1/(4\pi )`$ for the equality of these intensities, derived in Ref.  from a comparison of $`ϵ_m=eQ/r_+`$ with $`T_H=1/(4\pi r_+)`$.) Let us consider in conclusion the change of the horizon area of a black hole, and hence of its entropy, due to the discussed non-thermal radiation. To this end, it is convenient to introduce, following Ref. , the so-called irreducible mass $`M_0`$ of the black hole: $$2M_0=M+\sqrt{M^2-Q^2};$$ (36) here and below we put $`k=1`$. This relationship can also be conveniently rewritten as $$M=M_0+\frac{Q^2}{4M_0}.$$ (37) Obviously, $`r_+=2M_0`$, so that the horizon area and the black hole entropy are proportional to $`M_0^2`$. When a charged particle is emitted, the charge of the black hole changes by $`\mathrm{\Delta }Q=e`$, and its mass by $`\mathrm{\Delta }M=eQ/r_+-\xi `$, where $`\xi `$ is the deviation of the particle energy from the maximum one. Using relationship (37), one can easily see that as a result of the radiation the irreducible mass $`M_0`$, and hence the horizon area and entropy of a non-extreme black hole, do not change if the particle energy is the maximum one, $`eQ/r_+`$. In other words, such a process, which is the most probable one, is adiabatic. For $`\xi >0`$ the irreducible mass, horizon area, and entropy increase. As usual, an extreme black hole, with $`M=Q=2M_0`$, is a special case. Here, for the maximum energy of an emitted particle, $`ϵ_m=e`$, we have $`\mathrm{\Delta }M=\mathrm{\Delta }Q=e`$, so that the black hole remains extreme after the radiation. In this case $`\mathrm{\Delta }M_0=-e/2`$: the irreducible mass and the horizon area decrease. 
In the more general case $`\mathrm{\Delta }M=e-\xi `$, the irreducible mass changes as follows: $$\mathrm{\Delta }M_0=-\frac{e-\xi }{2}+\sqrt{\left(M_0-\frac{e}{2}+\frac{\xi }{4}\right)\xi }.$$ (38) Clearly, in the case of an extreme black hole of large mass, already for a small deviation $`\xi `$ of the emitted energy from the maximum one the square root dominates in this expression, so that the horizon area increases. I am grateful to I.V. Kolokolov, A.I. Milstein, V.V. Sokolov, and O.V. Zhirov for their interest in this work and useful comments. The work was supported by the Russian Foundation for Basic Research through Grant No. 98-02-17797, and by the Federal Program Integration-1998 through Project No. 274.
## Acknowledgments

This research was supported in part by the National Natural Science Foundation of China and the postdoctoral foundation of China. S.H. Zhu gratefully acknowledges the support of the K.C. Wong Education Foundation, Hong Kong.
# Warped discs and the directional stability of jets in Active Galactic Nuclei ## 1 Introduction The angular momentum of gas accreting onto massive black holes in Active Galactic Nuclei (AGN) is likely to change with time, either as a result of mergers funneling fresh gas towards the nucleus (Hernquist & Mihos 1995), or because of radiation or disc wind driven warping instabilities in an existing accretion flow (Pringle 1996, 1997; Maloney, Begelman & Pringle 1996; Maloney, Begelman & Nowak 1998). In either case, the angular momentum of gas at large radius in the disc will be misaligned with the rotation axis of the black hole, while at small radius the combined action of viscosity and differential precession induced by the Lense-Thirring effect (Lense & Thirring 1918) leads to alignment of the disc and hole angular momenta (Bardeen & Petterson 1975; Kumar & Pringle 1985). Achieving this alignment requires that the black hole exert a torque on the disc gas, and implies that an equal and opposite torque act on the black hole itself. Over time, this causes a change in the spin axis of the hole towards alignment with the large angular momentum reservoir provided by the disc at large radius. Whether the timescale for alignment is long or short compared to the lifetime of an AGN has implications for our understanding of the accretion history of these objects. If the black hole is spinning, then jets, regardless of whether they derive power from the aligned inner disc or directly from the hole itself (Blandford & Znajek 1977; see also Ghosh & Abramowicz 1997; Livio, Ogilvie & Pringle 1999), trace the spin of the black hole. If the alignment timescale is long, then we expect that the jet direction reflects the initial spin of the black hole established during (or prior to) the formation of the host galaxy (Rees 1978). Moreover, the jet direction is expected to be stable over time, irrespective of variations in the angular momentum of the accreting gas. Conversely, if the timescale is short then jets define the angular momentum of inflowing material, and the observation of systems where they appear to be stable over long time periods of $`10^7`$–$`10^8`$ yr (Alexander & Leahy 1987; Liu, Pooley & Riley 1992; Scheuer 1995) implies a constancy in the average angular momentum of the accreting gas. The rate of realignment was computed by Scheuer & Feiler (1996), who found an approximate analytic solution to the equations governing the evolution of a warp. The timescale depends on two poorly known viscosities, $`\nu _1`$ and $`\nu _2`$, which correspond to the $`(R,\varphi )`$ and $`(R,z)`$ components of the shear as defined by Papaloizou & Pringle (1983). If $`\nu _1\sim \nu _2`$ (i.e. the timescale for warp evolution is comparable to that for the evolution of the surface density) then the formula of Scheuer & Feiler (1996) is similar to that previously assumed by Rees (1978). Subsequently, Natarajan & Pringle (1998) revisited the problem, and pointed out that for the parameters of AGN discs the assumption that $`\nu _1\sim \nu _2`$ is at variance with the results of detailed analyses of the hydrodynamics of viscous discs, which suggest instead that $`\nu _2\gg \nu _1`$. Recalculating the alignment timescale with this modification, they found that it was short — much less than any reasonable estimate of the AGN lifetime — and used this to argue that the spin of black holes in AGN, and the direction of jets produced in the inner regions of the accretion flow, rapidly adjust to trace the angular momentum of gas at large radius. 
The analysis in Natarajan & Pringle (1998) employed the same torque formula as Scheuer & Feiler (1996), which was derived under the restrictive conditions of small amplitude warps and a disc viscosity that is constant with radius (or, equivalently, that in a steady state the disc surface density is independent of radius). Since AGN may well harbour strongly warped discs, and almost certainly have more complex surface density profiles, our goal in this paper is to generalise their results. Section 2 below sets out the equations for the evolution of a warped disc, which we solve numerically in Section 3 for comparison with the approximate analytic solution. The resulting torque is calculated in Section 4, and the alignment timescale derived for a simple AGN disc model in Section 5. Section 6 discusses the implications of our results. ## 2 Warped disc equations ### 2.1 Governing equations The evolution of a thin, warped accretion disc in a Keplerian potential was studied by Papaloizou & Pringle (1983). For a disc with a surface density profile $`\mathrm{\Sigma }(R,t)`$, with angular momentum at radius $`R`$ parallel to the unit vector $`\widehat{l}`$, the evolution can be expressed most compactly in terms of an equation for the angular momentum density $`𝐋=(GMR)^{1/2}\mathrm{\Sigma }\widehat{l}`$. Adopting the simplest description (Pringle 1992; see also Papaloizou & Pringle 1983), the time evolution is given by $`{\displaystyle \frac{\partial 𝐋}{\partial t}}`$ $`=`$ $`{\displaystyle \frac{3}{R}}{\displaystyle \frac{\partial }{\partial R}}\left[{\displaystyle \frac{R^{1/2}}{\mathrm{\Sigma }}}{\displaystyle \frac{\partial }{\partial R}}(\nu _1\mathrm{\Sigma }R^{1/2})𝐋\right]`$ (1) $`+`$ $`{\displaystyle \frac{1}{R}}{\displaystyle \frac{\partial }{\partial R}}\left[\left(\nu _2R^2\left|{\displaystyle \frac{\partial \widehat{l}}{\partial R}}\right|^2-{\displaystyle \frac{3}{2}}\nu _1\right)𝐋\right]`$ $`+`$ $`{\displaystyle \frac{1}{R}}{\displaystyle \frac{\partial }{\partial R}}\left({\displaystyle \frac{1}{2}}\nu _2R|𝐋|{\displaystyle \frac{\partial \widehat{l}}{\partial R}}\right)+{\displaystyle \frac{\stackrel{}{\omega }_p\times 𝐋}{R^3}}.`$ Here $`\nu _1`$ and $`\nu _2`$ are viscosities acting on the $`(R,\varphi )`$ and $`(R,z)`$ components of the shear, as defined by Papaloizou & Pringle (1983). Neither is equal to the shear viscosity '$`\nu `$' that enters the Navier-Stokes equations, and the general relation between these three quantities is extremely complex (Ogilvie 1999). For the parameters of AGN discs, the simplest assumptions suggest that $`\nu _2\gg \nu _1`$ (Natarajan & Pringle 1998), though for most of this paper we will allow both viscosities to be general power laws in radius, $`\nu _1`$ $`=`$ $`\nu _{10}R^\beta `$ $`\nu _2`$ $`=`$ $`\nu _{20}R^\beta .`$ (2) For thin, vertically averaged accretion disc models, the shear viscosity $`\nu `$ is a power law in radius within a given opacity regime (e.g. Frank, King & Raine 1992). Later, the index $`\beta `$ will be chosen to correspond to the scaling expected at the radii in the disc where the bulk of the realignment torque acts. However, for a general warped disc, the radial run of both $`\nu _1`$ and $`\nu _2`$ will also depend on the shape of the warp (Ogilvie 1999). This effect is not taken into account in this paper. Our assumption that $`\nu _1`$ and $`\nu _2`$ are simple power laws in radius, independent of the warp shape, is therefore strictly valid in the limit of small warps, and is an approximation for larger amplitude warps. The range of behaviour allowed by equation (1) is considerably more diverse than simple diffusion of $`𝐋`$. 
Nonetheless, we will make frequent use of the characteristic timescales for the diffusion of surface density and warp, which we define conventionally as $$t_{\nu _1}=\frac{R^2}{\nu _1},t_{\nu _2}=\frac{R^2}{\nu _2}.$$ (3) The final term on the right-hand side of equation (1) represents the effect of Lense-Thirring precession on the disc. For a black hole with mass $`M`$, we take $`\stackrel{}{\omega }_p=\omega _p(0,0,1)`$, where $$\omega _p=2ac\left(\frac{GM}{c^2}\right)^2,$$ (4) where $`a`$ is the dimensionless angular momentum of the black hole. Several assumptions and simplifications are necessary to derive this equation, and these are set out in detail in Papaloizou & Pringle (1983) (see also Pringle 1999; Ogilvie 1999). Particularly important is the boundary between the diffusive behaviour described by equation (1) and wave-like evolution, which occurs when the usual Shakura & Sunyaev (1973) viscosity parameter $`\alpha \lesssim H/R`$, where $`H`$ is the disc scale height (Papaloizou & Lin 1995). The observational constraints on $`\alpha `$ in AGN discs are weak (Siemiginowska & Czerny (1989) and Siemiginowska, Czerny & Kostyunin (1996) suggest $`\alpha =10^{-1}`$–$`10^{-2}`$), but in the ionized parts of the disc it is plausible to assume that $`\alpha \sim 0.1`$, as in better studied disc systems (eg. Cannizzo 1993). In any case, theoretical estimates of $`H/R`$ at the relevant radii (Collin-Souffrin & Dumont 1990) suggest that in AGN $`\alpha \gg H/R`$, so that we are safely in the diffusive regime. In this respect AGN discs are very different from, for example, protostellar discs, where wave-like behaviour is likely to be important (e.g. Larwood et al. 1996). We also note that although Papaloizou & Pringle (1983) considered only small amplitude warps describable using linear perturbation theory, Ogilvie (1999) has shown that strongly warped discs can also be described using a similar approach. ### 2.2 The Scheuer & Feiler analytic solution Scheuer & Feiler (1996) obtained an approximate steady-state analytic solution to equation (1) for the case $`\beta =0`$, in the limit where the angle of misalignment between the disc and the hole spin is small. In this case, the surface density obeys the usual relation for a planar disc with accretion rate $`\dot{M}`$ and a zero-torque inner boundary condition imposed at $`R_{\mathrm{in}}`$, $$\nu _1\mathrm{\Sigma }=\frac{\dot{M}}{3\pi }\left(1-\sqrt{\frac{R_{\mathrm{in}}}{R}}\right).$$ (5) The shape of the disc is given by $`\widehat{l}=(l_x,l_y,l_z)`$, where $`l_x=Ke^{-\varphi }\mathrm{cos}\varphi `$ $`l_y=Ke^{-\varphi }\mathrm{sin}\varphi `$ (6) and $`\varphi =2(\omega _p/\nu _2R)^{1/2}`$. The constant $`K`$ is set by the disc inclination $`i`$ at large radius: at large radius $`K=\mathrm{sin}i`$. ### 2.3 Numerical solutions The above analytic solution applies only for $`\beta =0`$, and as derived assumes that the disc warp is of small amplitude. Relaxing these assumptions requires a steady-state numerical solution of equation (1). For $`\beta <1/2`$, steady-state solutions can be calculated efficiently by solving directly the ordinary differential equations (ODEs) obtained by setting $`\partial 𝐋/\partial t=0`$ in equation (1). We give a full description of our methods in the Appendix; briefly, we have found it simplest to use equations expressed in terms of the surface density $`\mathrm{\Sigma }(R)`$ and the unit tilt vector $`\widehat{l}=𝐋/|𝐋|`$, which can readily be solved iteratively using a finite-difference technique described by Pereyra (1979). 
We use the Numerical Algorithms Group's implementation, and impose the boundary conditions $`\mathrm{\Sigma }(R_{\mathrm{in}})=0`$, $`\mathrm{\Sigma }(R_{\mathrm{out}})=\mathrm{\Sigma }_{\mathrm{out}}`$, $`\widehat{l}(R=R_{\mathrm{in}})=(0,0,1)`$, $`\widehat{l}(R=R_{\mathrm{out}})=\widehat{l}_{\mathrm{out}}`$. These boundary conditions assume that the inner disc is aligned with the spin axis of the hole; we therefore have to check a posteriori that this is indeed a valid assumption for each model calculated. For $`\beta \ge 1/2`$, we have been unable to obtain a converged numerical solution of the steady-state ODEs using these methods. In this regime, we instead evolve the time-dependent equation (1) from an arbitrary initial condition (a flat, uniformly tilted disc) until a steady state is obtained, using the numerical method described by Pringle (1992). This is straightforward but computationally expensive, since many viscous times at the outer edge of the disc are required to obtain a steady-state solution. Defining the viscous time of the disc by $`t_{\nu _1}=R_{\mathrm{out}}^2/\nu _1`$, we find that more than 10 viscous times of evolution are necessary. Moreover, for either method, capturing the torque accurately requires a large range between the inner and outer disc radii, of at least $`R_{\mathrm{out}}/R_{\mathrm{in}}\gtrsim 10^4`$. For these reasons, we are only able to obtain solutions with the time-dependent code for high values of $`\beta `$ ($`\beta =1.5`$). ## 3 Steady-state disc shape ### 3.1 Comparison with the analytic solution Figure 1 shows the numerical steady-state solution for the $`\beta =0`$ case, and compares it to the approximate analytic solution. We adopt units in which $`R_{\mathrm{in}}=1`$, and take $`\nu _{10}=\nu _{20}=1`$. We choose $`\omega _p=200`$, which gives an inner aligned region extending out to around $`10^2R_{\mathrm{in}}`$. This is roughly appropriate for plausible AGN parameters, though we defer consideration of detailed disc models until later. Convergence to the limiting inclination at infinity is slow, so it is necessary to impose the outer boundary condition of $`i=0.1`$ at a large radius, $`R=10^5R_{\mathrm{in}}`$. At this radius $`l_x`$ and $`l_y`$ are chosen to facilitate comparison with the analytic solution of equation (6). Evidently, the analytic solution provides an excellent description of the disc shape. The inner disc is aligned with the rotation axis of the hole out to close to $`10^2R_{\mathrm{in}}`$, and warps to around half the limiting inclination by $`10^3R_{\mathrm{in}}`$. Moving inward from infinity, the warp is twisted through an angle of $`\pi `$ before being flattened effectively into the aligned plane. ### 3.2 Aligned radius For an annulus in the disc, the local timescales for precession and for the transmission of warp are given by $$t_p=\frac{R^3}{\omega _p},t_{\nu _2}=\frac{R^2}{\nu _{20}R^\beta }.$$ (7) We expect that the disc will be aligned with the spin axis of the hole at radii where the precession timescale is much shorter than the timescale over which the disc can communicate warp inwards. The characteristic radius of the aligned region $`R_{\mathrm{align}}`$ will then scale with the radius where $`t_p=t_{\nu _2}`$, which is given by $$R_{\mathrm{align}}=C_1\left(\frac{\omega _p}{\nu _{20}}\right)^{1/(1+\beta )},$$ (8) where $`C_1`$ is a constant, expected to be of order unity, which we will use later to fit the numerical results. 
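For orientation, equation (8) is trivial to evaluate numerically; a one-function sketch in the dimensionless units of Section 3.1 follows (the comparison with the aligned region of Fig. 1 is only order-of-magnitude, since $`C_1`$ depends on the threshold $`\delta `$):

```python
def r_align(omega_p, nu20, beta, C1=0.165):
    """Eq. (8): radius where the precession time t_p = R^3/omega_p
    equals the warp diffusion time t_nu2 = R^(2-beta)/nu20,
    scaled by the fitted constant C1."""
    return C1 * (omega_p / nu20) ** (1.0 / (1.0 + beta))

# Dimensionless test values matching Section 3.1 (R_in = 1, nu_20 = 1):
print(r_align(omega_p=200.0, nu20=1.0, beta=0.0))
# ~= 33 in these units; cf. the aligned region inside ~1e2 R_in in Fig. 1
```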
Note that equation (8) assumes both that the Lense-Thirring torque is strong enough to ensure that the inner disc is aligned, and that $`\beta >-1`$, so that $`t_p`$ increases with radius more rapidly than $`t_{\nu _2}`$. Figure 2 shows how the disc shape varies with $`\beta `$ for a set of models with fixed $`\omega _p`$ and $`\nu _1=\nu _2=1`$. From the numerical solutions, we compute $`R_{\mathrm{align}}`$ as the radius where $`i`$ first exceeds some small inclination $`\delta `$. With $`C_1=0.165`$ there is excellent agreement between the computed $`R_{\mathrm{align}}`$ and the scaling given by equation (8); i.e., the simple timescale argument given above suffices to give an accurate idea of the scaling of the radius out to which the hole can enforce inner disc alignment. ## 4 Torque The torque exerted on the misaligned disc as a result of the Lense-Thirring precession is given by the integral $$\frac{\mathrm{d}𝐉}{\mathrm{d}t}=\int \frac{\stackrel{}{\omega }_p\times 𝐋}{R^3}2\pi R\,dR.$$ (9) Of course, an oppositely directed torque of the same magnitude is exerted on the black hole. Computing $`\dot{𝐉}`$ for the disc model shown in Fig. 1, we find that the torque agrees with that obtained from the Scheuer & Feiler solution at the level of a few percent (the small discrepancy being primarily due to the inner boundary condition ignored in the analytic solution, which leads to an error in the surface density in the warp region). The torque scales as $`(\omega _p\nu _2)^{1/2}\dot{M}`$, again as found analytically. We have computed additional models with large inclination angles at infinity (up to $`80^{}`$ of misalignment), and for those models there is a larger discrepancy between the numerical and analytic results, but still only at the level of tens of percent. ### 4.1 Radial dependence Figure 3 shows the tilt angle and surface density profile for three fiducial models, all normalised to the same accretion rate. The solid line is computed for the parameters used for Figure 1, namely $`\beta =0`$ and $`\nu _{10}=\nu _{20}`$. The short-dashed line maintains $`\beta =0`$, but has $`\nu _{10}=1`$, $`\nu _{20}=8`$. Natarajan & Pringle (1998) find that consideration of the hydrodynamics of a viscous disc suggests that in AGN $$\frac{\nu _2}{\nu _1}\approx \frac{1}{2\alpha ^2},$$ (10) in which case this ratio of viscosities would be appropriate for a plausible Shakura-Sunyaev $`\alpha `$ of around 0.25. Finally, the long-dashed line illustrates the effect of a declining surface density profile; plotted is a model with $`\beta =1/4`$ and $`\nu _1=\nu _2`$. All three of these models are chosen to have the same viscous timescale (both for the surface density and for the warp) at the (arbitrary) choice of inner edge radius. In conventional thin disc models (Shakura & Sunyaev 1973), fixing the viscous time at a given radius corresponds to a fixed disc thickness $`H/R`$ at that radius. We note that in such models the value of $`\beta `$ is fixed by solving for the vertical disc structure, i.e. it is not a 'parameter' that can be varied, but rather a property of the disc. In Section 5 we discuss the appropriate value of $`\beta `$ for the region of the disc where the dominant torque arises. Also shown for each of the models is where radially the greatest contribution to the torque arises: we plot the components of the integrand in equation (9) (the extra factor of $`R`$ in the plotted quantity takes account of the logarithmic radial scale, to give a true impression of where the strongest torque on the disc is found).
For all the models, strong torques arise from a broad region that extends from just outside the alignment radius out to one or two orders of magnitude larger radius. The large radial range used in these calculations is thus essential to capture the torque accurately. Increasing the ratio of $`\nu _2`$ to $`\nu _1`$ reduces the timescale for the diffusion of warp, $`t_{\nu _2}=R^2/\nu _2`$, and allows the warp to push in closer to the black hole. At fixed accretion rate, increasing $`\nu _2`$ even by this large factor leaves the surface density profile almost unaltered, and thus the net effect is simply to increase the magnitude of the torque and move it to smaller disc radius. Larger values of $`\beta `$ likewise decrease the warp timescale at large radius and lead to a shrinkage of the aligned region. For the parameters used here, the disc shape is in fact rather similar for the two models in which $`\nu _2>1`$ at large radius. However, for a given accretion rate, models with higher $`\beta `$ have lower surface density at large radii, which tends to decrease $`𝐋`$ in equation (9) and reduce the integrated torque. Both these effects have a significant impact on the calculation of the total torque. ### 4.2 Scaling with $`\beta `$ The numerical results show that the Scheuer & Feiler (1996) solution, equation (6), provides an accurate description of the disc shape and the resultant torque on the black hole. For this solution, the torque integral in equation (9) has magnitude $$|\dot{𝐉}|=\frac{\sqrt{2}K\dot{M}}{3\nu _{10}}(GM)^{1/2}(\omega _p\nu _{20})^{1/2}.$$ (11) Guided by the numerical results, a straightforward extension of this result to non-zero $`\beta `$ is possible. We first assume that the radius $`R_w`$ out to which we need to integrate equation (9) to find the torque scales as the aligned radius does, $$R_w=C_2\left(\frac{\omega _p}{\nu _{20}}\right)^{1/(1+\beta )}.$$ (12) If we additionally assume that the disc shape is primarily a function of $`R_w`$, then the form of the integral in equation (9) suggests the following scaling with $`\beta `$: $$|\dot{𝐉}|=\frac{K\dot{M}}{3\sqrt{2}\nu _{10}}(GM)^{1/2}\frac{C_2^\beta \omega _p}{\beta +1/2}\left(\frac{\nu _{20}}{\omega _p}\right)^{\frac{2\beta +1}{2\beta +2}},$$ (13) which reduces to the formula in equation (11) for $`\beta =0`$. We have allowed ourselves a single adjustable parameter in this expression; to best match the numerical results we adopt $`C_2=0.55`$. Figure 4 shows how equation (13) compares to the numerical results for choices of parameters testing the scalings implied by the formula. Excellent agreement (at better than the percent level) is obtained for two models with $`\nu _1=\nu _2`$ and varying $`\omega _p`$, while a single model computed with the time-dependent code at $`\beta =1.5`$ also agrees well with the fitting formula. A model with $`\nu _2>\nu _1`$ displays some systematic error, caused by the influence of the increased ratio $`\nu _2/\nu _1`$ on the surface density profile in the warp region, but equation (13) still provides a reasonable approximation to the numerical values of the torque. The main result is that the torque at a fixed accretion rate (and fixed viscous time at $`R_{\mathrm{in}}`$) drops by roughly an order of magnitude as $`\beta `$ varies between $`\beta =0`$ and $`\beta =1.5`$, while the dependence on the other parameters ($`\omega _p`$, $`\nu _{20}`$) remains close to that found in the $`\beta =0`$ case. 
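A short sketch of equation (13), with a numerical confirmation that it collapses to equation (11) at $`\beta =0`$ (all values are arbitrary dimensionless test numbers):

```python
import numpy as np

def torque_magnitude(mdot, K, GM, omega_p, nu10, nu20, beta, C2=0.55):
    """|dJ/dt| from Eq. (13); reduces to Eq. (11) for beta = 0."""
    return (K * mdot / (3.0*np.sqrt(2.0)*nu10) * np.sqrt(GM)
            * C2**beta * omega_p / (beta + 0.5)
            * (nu20/omega_p) ** ((2.0*beta + 1.0)/(2.0*beta + 2.0)))

# Check the beta -> 0 limit against Eq. (11):
args = dict(mdot=1.0, K=0.1, GM=1.0, omega_p=200.0, nu10=1.0, nu20=1.0)
eq13 = torque_magnitude(beta=0.0, **args)
eq11 = (np.sqrt(2.0)*args['K']*args['mdot']/(3.0*args['nu10'])
        * np.sqrt(args['GM']*args['omega_p']*args['nu20']))
print(eq13, eq11)   # identical by construction
```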
## 5 Alignment timescale The torque on the black hole due to the interaction with the warped disc causes it to precess and become aligned with the outer disc's angular momentum vector on a timescale that, up to factors of order unity, is just $`t_{\mathrm{align}}=|𝐉|K/|\dot{𝐉}|`$ (e.g. Scheuer & Feiler 1996). Here $`|\dot{𝐉}|`$ is specified for a given disc model by equation (13), while the angular momentum of a hole of mass $`M`$ and spin parameter $`a`$ is $`|𝐉|=acM(GM/c^2)`$. The resulting expression for $`t_{\mathrm{align}}`$ is similar, but not equivalent, to the estimate given by Natarajan & Pringle (1998), $`t_{\mathrm{align}}=(|𝐉|/|𝐉_{\mathrm{disk}}|)(R_{\mathrm{warp}}^3/\omega _p)`$, where $`𝐉_{\mathrm{disk}}`$ is the angular momentum of the disc within a warp radius $`R_{\mathrm{warp}}`$ defined similarly to equation (12). Our expression takes more complete account of the need to integrate the torque over all radii less than $`R_w`$. For models of the disc in which most of the angular momentum within the warp radius lies close to that radius, the final inferred timescale for alignment is similar. Obtaining an estimate of $`\beta `$ and $`\nu _{10}`$ requires a model for the vertical structure of the disc, which serves the purpose of relating the unknown central disc temperature, which controls the viscosity, to the effective temperature fixed by the requirement that viscous dissipation balance radiative losses. At the radii of the warp, gas pressure and an electron scattering opacity $`\kappa =0.4\mathrm{cm}^2\mathrm{g}^{-1}`$ are dominant. In this regime, Collin-Souffrin & Dumont (1990) obtain a radial profile of column density $`N_{25}`$ (in units of $`10^{25}\mathrm{cm}^{-2}`$) described by $`N_{25}=98\alpha ^{-4/5}\left({\displaystyle \frac{ϵ}{0.1}}\right)^{-3/5}\left({\displaystyle \frac{L/L_{\mathrm{Edd}}}{0.1}}\right)^{2/5}`$ $`\times \left({\displaystyle \frac{L}{10^{44}\mathrm{erg}/\mathrm{s}}}\right)^{1/5}\left({\displaystyle \frac{R}{10^4R_g}}\right)^{-3/5},`$ (14) where $`ϵ`$ is the radiative efficiency, $`L=ϵ\dot{M}c^2`$ is the bolometric luminosity, $`L_{\mathrm{Edd}}=4\pi c^3R_g/\kappa `$ is the Eddington luminosity, and $`R_g=2GM/c^2`$. For this model, $`\beta =0.6`$. Although this disc model includes the effects of irradiation of the upper layers of the disc by the central source, we note that the scaling $`\mathrm{\Sigma }\propto \alpha ^{-4/5}\dot{M}^{3/5}M^{1/5}R^{-3/5}`$ is identical to that of non-irradiated models (e.g. Sincell & Krolik 1998). The normalisation agrees to within a factor of two. Other uncertainties thus dominate over those arising from different treatments of the vertical disc structure. Figure 5 shows the calculated alignment timescale for a $`10^8M_{}`$ black hole accreting at rates between $`10^{-2}\dot{M}_{\mathrm{Edd}}`$ and $`\dot{M}_{\mathrm{Edd}}`$, where $`\dot{M}_{\mathrm{Edd}}=4\pi cR_g/(ϵ\kappa )`$. We take a radiative efficiency $`ϵ =0.1`$, and assume a maximal Kerr hole, $`a=1`$. For these parameters, $`R_w`$ as defined in equation (12) is $`R_w\sim 10^3R_g`$ for reasonable ratios of $`\nu _2/\nu _1`$, so the main warped region of the disc falls self-consistently within the regime where gas pressure and electron scattering opacity are dominant. The Figure shows results calculated assuming that $`\nu _2=\nu _1`$, along with the expectation if $`\nu _2/\nu _1=1/(2\alpha ^2)`$ for a range of plausible $`\alpha `$. The latter models of course predict more rapid realignment, as noted previously (Natarajan & Pringle 1998). 
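The chain of estimates behind Figure 5 can be sketched numerically. The snippet below follows our reading of Section 5 in cgs units; the Eddington-luminosity convention ($`L_{\mathrm{Edd}}=4\pi GMc/\kappa `$) and the use of the proton mass to convert the column density of equation (14) into a surface density are assumptions of the sketch, so the printed timescale should only be trusted at the order-of-magnitude level:

```python
import numpy as np

# CGS constants
G, c, m_p, kappa = 6.674e-8, 2.998e10, 1.673e-24, 0.4
yr, Msun = 3.156e7, 1.989e33

# Assumed parameters (our reading of Section 5)
M, a, eps, alpha = 1e8 * Msun, 1.0, 0.1, 0.1
beta, C2 = 0.6, 0.55

GM = G * M
R_g = 2.0 * GM / c**2
L_Edd = 4.0 * np.pi * GM * c / kappa   # assumed convention
L = 1.0 * L_Edd                        # Eddington-rate case
mdot = L / (eps * c**2)
omega_p = 2.0 * a * GM**2 / c**3       # Lense-Thirring coefficient

# Surface density from Eq. (14), evaluated at R = 1e4 R_g
N25 = (98.0 * alpha**-0.8 * (eps/0.1)**-0.6
       * ((L/L_Edd)/0.1)**0.4 * (L/1e44)**0.2)
Sigma_ref = 1e25 * N25 * m_p                 # g/cm^2, assumes m_p per H
nu1_ref = mdot / (3.0*np.pi*Sigma_ref)       # Eq. (5), far from R_in
nu10 = nu1_ref / (1e4*R_g)**beta             # nu_1 = nu10 R^beta
nu20 = nu10                                  # the nu_2 = nu_1 case

# Eq. (13) divided by K (K cancels in t_align = |J| K / |Jdot|)
Jdot_over_K = (mdot/(3.0*np.sqrt(2.0)*nu10) * np.sqrt(GM)
               * C2**beta * omega_p/(beta + 0.5)
               * (nu20/omega_p)**((2*beta + 1.0)/(2*beta + 2.0)))
J = a * GM * M / c
print(f"t_align ~ {J/Jdot_over_K/yr:.2g} yr")  # a few times 1e6 yr here
```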
For holes accreting at rates of the order of the Eddington limit, the alignment timescale is found to be $`10^5`$–$`10^6`$ yr. This is short compared to the lifetimes of AGN, irrespective of what one assumes for the ratio $`\nu _2/\nu _1`$. Producing the giant structures observed in radio galaxies may require an active epoch of the order of $`10^8`$ yr, while some estimates of the quasar lifetime are similar (e.g. Haehnelt & Rees 1993, though these estimates are subject to considerable uncertainty). In these systems we expect that there was ample time for a rotating black hole, whatever its initial spin axis, to become aligned with the angular momentum of the disc. Thus jets accelerated from the inner disc should be perpendicular to the plane of the outer disc, and constancy in the direction of such jets implies a corresponding stability of the disc plane over time. Figure 6 plots the alignment timescale as a function of black hole mass, for a range of accretion rates. The alignment timescale is found to be only weakly dependent on the mass of the hole, as expected given the $`M^{1/16}`$ dependence derived by Natarajan & Pringle (1998). For this figure we have assumed that $`\nu _1=\nu _2`$, and that $`\alpha =0.1`$. Shorter alignment timescales are of course predicted if $`\nu _2>\nu _1`$, as shown in Figure 5. For holes accreting at lower rates (relative to the Eddington limit), the timescale for alignment grows. It is not unambiguously shorter than estimates of the active phase of a hole in a galactic nucleus, and the answer depends crucially on differences in the disc's response to shear and warp. It remains true that $`t_{\mathrm{align}}\ll t_{\mathrm{grow}}`$, where $`t_{\mathrm{grow}}=M/\dot{M}`$, for any $`\dot{M}`$, so that alignment will always occur if an accretion event lasts long enough to contribute significantly to the hole mass. Perhaps more probably, however, low luminosity AGN may accrete negligible fractions of the hole mass in many brief episodes. In this scenario, it remains possible that the hole spin accumulated over the whole accretion history will be sufficient to control the jet direction during subsequent low luminosity outbursts. The outer disc would then not be expected to be generally perpendicular to the jet direction. ## 6 Discussion In this paper we have considered the interaction between a misaligned accretion disc and a rotating black hole for parameters appropriate to Active Galactic Nuclei. A combination of forced precession and disc viscosity allows the black hole to force the inner disc into a plane perpendicular to the hole's spin axis, but the resulting warped disc exerts a torque that eventually aligns the rotation of the hole with the angular momentum of the outer disc. We have presented steady-state numerical solutions to the full warped disc equations, verified that the analytic solution of Scheuer & Feiler (1996) provides an accurate representation of the disc shape and the torque on the hole, and generalised the results to a range of disc models with varying viscosity. The range of models we consider leads to almost an order of magnitude variation in the integrated torque on the hole. We have computed the timescale for alignment to occur, and find that for holes accreting at rates of the order of the Eddington limit (for a $`10^8M_{}`$ hole, this implies $`L_{\mathrm{bolometric}}=2.5\times 10^{46}\mathrm{erg}/\mathrm{s}`$, so these are luminous AGN) the timescale is short, $`10^5`$–$`10^6`$ yr. 
This is the case for a range of assumptions about the ratio $`\nu _2/\nu _1`$, which characterises the disc's relative response to azimuthal shear and to warp. The short derived timescale implies that the spin of black holes in such AGN rapidly adjusts to match the angular momentum of the accreting gas, even if the hole gains only a small fraction ($`\sim `$10%) of its mass from accretion. For low luminosity AGN the prediction is more model-dependent, but the timescale for alignment in these systems is sufficiently long that the spin axis of the hole may be able to remain stable if individual accretion episodes are brief. Here the assumed value of $`\nu _2/\nu _1`$ is crucial, and efforts to determine this ratio for specific angular momentum transport mechanisms and disc models would be valuable. Some powerful radio galaxies display jets that appear to have maintained their direction for long periods of time. Our results imply that this cannot be due to any intrinsic stability imparted by the spin of the black hole, but instead must reflect a long-term constancy in the angular momentum of the outer regions of the accretion disc. If the gas derives from a reservoir set up by a single accretion event, this constancy would not be surprising. Alternatively, a preferred axis for gas arriving in the nuclear regions might be a consequence of interactions between the inflowing gas and the galactic potential. We have calculated the torque assuming that the disc is able to relax into and maintain a steady-state configuration as realignment occurs. This assumption is justified in the strongly warped inner region, where $`t_{\nu _1}`$ and $`t_{\nu _2}`$ are indeed smaller than the derived $`t_{\mathrm{align}}`$, but fails at large radii of $`10^4`$–$`10^5R_g`$, where our solutions show that the disc maintains a small but significant warp. This will not increase the alignment timescale (before the disc reaches a steady state, the torque in our time-dependent calculations is larger than the limiting values we solve for), but it does suggest that following alignment of the hole the outer disc might retain a modest warp for a long period. For a $`10^8M_{}`$ hole, a radius of $`10^4R_g`$ corresponds to $`0.1\mathrm{pc}`$, a scale which is observable with VLBI in nearby AGN such as NGC 4258 (e.g. Miyoshi et al. 1995). Some of the warps in maser discs could then simply reflect the slow decay of the initial conditions of a past accretion event, and be devoid of a persistent forcing mechanism. Of course, NGC 4258 itself has an accretion rate much lower than those of the models considered here (Gammie, Narayan & Blandford 1999), and self-gravity is likely to be important in its disc (Papaloizou, Terquem & Lin 1998), but the main point — that $`t_{\nu _2}`$ at a few tenths of a parsec is likely to be long — remains true. A related question is whether the disc at $`R\sim 10^2R_g`$ is able to attain a steady state, even in the absence of a warp. This region of the disc may be thermally unstable (Lin & Shields 1986; Clarke 1989; Siemiginowska, Czerny & Kostyunin 1996) and prone to a limit cycle involving large changes in the accretion rate. The interplay between such a cycle and the dynamics of a warped disc would be complex and intriguing, though a more promising setting for exploring such a scenario would be galactic black hole systems, if those possess warped discs. For AGN with rapidly spinning holes, the main warped region occurs at radii of $`\sim 10^2R_g`$. 
During realignment episodes there is a substantial deposition of energy, originating in the spin energy of the hole, into this region of the disc. This is particularly the case if $`\nu _2\gg \nu _1`$, as we have argued is likely here and previously (Natarajan & Pringle 1998). The disc in the vertical structure models we have considered is quite thin at these radii, and can easily radiate locally a significantly enhanced luminosity, but substantial changes to the vertical structure might be expected (note that we have assumed that the vertical structure of a warped disc is just like that of a planar disc, which is wrong on several counts). Moreover, the enhanced luminosity would increase the disc flux at optical and infrared wavelengths, which would in any event be raised due solely to the greater covering fraction of the warped disc as seen from the central source. Since the alignment timescale is short, systems of this kind would be rare — most AGN would harbour quite flat central discs — but potentially bright. ## Acknowledgements We gratefully acknowledge useful discussions with Jim Pringle, Mitch Begelman, Roger Blandford, Julian Krolik and Martin Rees, and thank the referee for a very helpful report. PJA thanks the Space Telescope Science Institute, where part of this paper was completed, for support and hospitality. ## Appendix The ODEs for the steady-state shape of the disc are most easily solved by expressing equation (1) in terms of separate equations for the tilt vector $`\widehat{l}`$ and the surface density $`\mathrm{\Sigma }`$. For a Keplerian potential these are (Pringle 1992) $`{\displaystyle \frac{\partial \widehat{𝐥}}{\partial t}}=\left[3\nu _1^{\prime }+{\displaystyle \frac{\mathrm{\Sigma }^{\prime }}{\mathrm{\Sigma }}}\left(3\nu _1+{\displaystyle \frac{1}{2}}\nu _2\right)+{\displaystyle \frac{\nu _2}{R}}\left({\displaystyle \frac{3}{4}}+R^2\left|{\displaystyle \frac{\partial \widehat{𝐥}}{\partial R}}\right|^2\right)\right]{\displaystyle \frac{\partial \widehat{𝐥}}{\partial R}}`$ $`+{\displaystyle \frac{\partial }{\partial R}}\left({\displaystyle \frac{1}{2}}\nu _2{\displaystyle \frac{\partial \widehat{𝐥}}{\partial R}}\right)+{\displaystyle \frac{1}{2}}\nu _2\left|{\displaystyle \frac{\partial \widehat{𝐥}}{\partial R}}\right|^2\widehat{𝐥}+{\displaystyle \frac{\stackrel{}{\omega }_p\times \widehat{𝐥}}{R^3}},`$ (15) where the primes denote derivatives with respect to $`R`$, and $`{\displaystyle \frac{\partial \mathrm{\Sigma }}{\partial t}}`$ $`=`$ $`{\displaystyle \frac{3}{R}}{\displaystyle \frac{\partial }{\partial R}}\left[R^{1/2}{\displaystyle \frac{\partial }{\partial R}}\left(\nu _1\mathrm{\Sigma }R^{1/2}\right)\right]`$ (16) $`+`$ $`{\displaystyle \frac{1}{R}}{\displaystyle \frac{\partial }{\partial R}}\left[\nu _2\mathrm{\Sigma }R^2\left|{\displaystyle \frac{\partial \widehat{𝐥}}{\partial R}}\right|^2\right].`$ Setting the time derivatives to zero, these constitute three second-order equations for $`\mathrm{\Sigma }`$ and any two components of $`\widehat{l}`$. We solve these with the boundary conditions described in Section 2.3 using the NAG routines D02GAF and D02RAF, which implement the finite-difference technique with deferred correction described by Pereyra (1979) (for a general introduction to such methods see, e.g., Press et al. 1992). An iterative approach is employed, in which we first solve equation (16) assuming zero tilt, and then use the resulting surface density profile in the solution of equation (15). The solution for the tilt vector $`\widehat{l}`$ is then recycled for use in equation (16), and we loop until convergence is achieved. 
Typically only a small number of iterations are required, since the changes to the surface density profile for even quite strong warps vary smoothly with changing disc shape. A finely spaced finite difference mesh is required to obtain a solution using this scheme; we use between 8000 and 16,000 mesh points evenly spaced in $`\mathrm{log}R`$. Even so, we have found that obtaining solutions with this scheme is still rather difficult, especially when $`\beta >0`$ or $`\nu _1\ne \nu _2`$. For these cases, we start with the easy $`\beta =0`$, $`\nu _1=\nu _2`$ solution, and step towards the desired parameters using the previous solution at each step as the initial guess.

We have been unable to obtain solutions using the above method for large values of $`\beta `$. In this regime, we instead evolve equation (1) directly using the numerical code described by Pringle (1992) until a steady state is obtained. This code uses an explicit first order finite difference technique, which conserves angular momentum to machine precision. We have modified the boundary conditions to be as close as possible to those described in Section 2.3, so that the results are directly comparable between the two solution methods. The time-dependent code is vastly more expensive to run; for the runs described here we are restricted to 100 radial grid points, again spaced evenly in $`\mathrm{log}R`$. Nevertheless this still provides reasonable accuracy for computing the torque.
# Thermopower of atomic-size metallic contacts

## Abstract

The thermopower and conductance of atomic-size metallic contacts have been simultaneously measured using a mechanically controllable break junction. For contacts approaching atomic dimensions, abrupt steps in the thermopower are observed which coincide with jumps in the conductance. The measured thermopower for a large number of atomic size contacts is randomly distributed around the value for large contacts and can be either positive or negative in sign. However, it is suppressed at the quantum value of the conductance $`G_0=2e^2/h`$. We derive an expression that describes these results in terms of quantum interference of electrons backscattered in the banks.

In recent years stable metallic contacts consisting of a single atom have become experimentally accessible . The interesting interplay between quantization of the electron modes and the atomic structure of the contacts has resulted in intensive research in this field. Information obtained from experiments on atomic-size metallic contacts has mainly been limited to the measurement of the conductance. Two exceptions stand out: the simultaneous measurements of force and conductance by Rubio et al. , which prove that the conductance steps produced by contact elongation are due to atomic rearrangements, and the measurement of the subgap structure in atomic size superconducting aluminum contacts, which characterizes the conduction modes, by Scheer et al. .

In this paper we present measurements of the thermopower in atomic-size metallic contacts. The thermopower $`S`$ is the constant of proportionality between an applied temperature difference $`\mathrm{\Delta }\theta `$ and the induced voltage, $`V_{tp}=S\mathrm{\Delta }\theta `$. The relationship between the thermopower and the electrical conductance $`G`$ is given in the linear-response approximation by

$$S=-\frac{\pi ^2k_B^2\theta }{3e}\frac{\partial \mathrm{ln}G}{\partial \mu },$$

with $`\mu `$ the chemical potential. One can view the thermopower as a measure for the difference in conductance between electron and hole quasiparticle excitations, or as the energy dependence of the conductance. We will argue that the dominant contribution to the thermopower in atomic size contacts comes from quantum interference terms as a result of backscattering of electrons on defects near the contact.

We have studied the thermopower of atomic-size gold contacts using a mechanically controllable break junction (MCB) . A schematic diagram of the sample configuration is shown in Fig. 1. By bending the phosphor bronze substrate, the 100 $`\mu `$m gold wire breaks at the notch, allowing atomic-size contacts to be adjusted. This gold wire is attached to long thin (25 $`\mu `$m) gold wires at both ends. They connect the notched wire to the current and voltage leads, anchored at the bath temperature, hence forming an open gold loop. The central gold wire is tightly wound and varnished on each side of the contact around a calibrated 5 k$`\mathrm{\Omega }`$ RuO₂ thermometer and a 500 $`\mathrm{\Omega }`$ RuO₂ heater. Using one heater, a temperature gradient over the contact can be applied. The glass plates and thin gold wires serve as thermal resistances to the substrate and bath temperatures, respectively. The sample is placed in a regular MCB setup in an evacuated can immersed in liquid helium.
The conductance and thermopower were measured in three steps: the voltage over the contact was measured with a nanovoltmeter at −100 nA, +100 nA and at zero dc current bias, while maintaining a constant temperature gradient over the contact by applying about 2 mW heating power to a heater on one side of the constriction. The conductance is then obtained from the voltage difference for the two current polarities, and $`S`$ is obtained from the voltage at zero bias current. Each cycle takes about 4 s and is continuously repeated as we slowly sweep the piezovoltage up in order to decrease the contact size. A curve (Fig. 2) was generally taken from 10 $`G_0`$ to tunneling in 30 min ($`G_0`$ is the quantum conductance unit, $`2e^2/h`$). Every few traces, it was necessary to readjust the contact manually, which inevitably leads to the contact being pushed completely together. Low pass RC filters (10 Hz) were mounted in the circuit near the sample to prevent rectification of ac disturbances by the asymmetry of the voltage dependence of the conductance. Extensive measurements have been performed on two samples (referred to as samples 1 and 2).

The primary limitation of the sample design is the thickness of the glass plates and thus the thermal insulation of the gold sample wire from the phosphor bronze substrate. The thickness is a trade-off between stability and thermal insulation. As a result of thermal currents flowing to the substrate, a thermal gradient is established in the sample wire between the thermometer and the contact. The measured temperature is hence not the actual temperature of the “hot” side of the constriction. We calibrate this thermal gradient by measuring the thermopower for large contacts, with resistances in the range 1 – 10 $`\mathrm{\Omega }`$, as a function of heating power. First, we stress that the thermal resistance of the contact is orders of magnitude larger than that of the wire on either side; therefore the temperature difference over the contact can be taken to be independent of the contact size. This is corroborated by experiment, which shows that the mean value of the thermopower over the contact as a function of contact size in the range 0.1 – 100 $`\mathrm{\Omega }`$ remains constant within an accuracy of 1%. We take advantage of the fact that for conventional point contacts the phonon drag contribution to the thermopower becomes negligible . Since the contact is part of a uniform gold loop, the measured thermopower corresponds to the phonon drag contribution of the leads only. The side that is not heated remains at the substrate temperature, and we assume that the actual temperature difference over the contact is a fixed fraction, $`\alpha `$, of the measured temperature difference. We then determine this fraction by comparing the measured large-contact thermopower as a function of temperature with literature values for the bulk thermopower of pure gold , which is nearly linear between 10 and 25 K, with a slope of −0.05 $`\mu `$V/K². For the two gold samples discussed below, the model provides a good description when the fraction $`\alpha `$ is taken as 0.4 and 0.5, respectively. We estimate an error of about 20% for the temperature difference obtained. To have a reasonable signal level we need to apply a temperature difference of several kelvins. In the case of sample 1 the temperature difference $`\mathrm{\Delta }\theta `$ = 4 K, and the average of the temperatures on both sides of the contact is $`\theta _{av}`$ = 11.5 K.
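The data reduction for one measurement cycle amounts to a few arithmetic steps. The sketch below illustrates it with made-up voltage readings (all numbers are placeholders); the calibration fraction $`\alpha =0.4`$ is the value quoted above for sample 1:

```python
import numpy as np

# One measurement cycle: voltages at -I, +I and zero bias (illustrative values).
I = 100e-9                                              # bias current (A)
V_minus, V_plus, V_zero = -1.30e-3, 1.28e-3, -2.0e-6    # nanovoltmeter readings (V)
dtheta_meas = 10.0                                      # thermometer temperature difference (K)
alpha = 0.4                                             # calibrated fraction for sample 1

G = 2 * I / (V_plus - V_minus)                  # conductance from the two current polarities
G0 = 2 * 1.602176634e-19**2 / 6.62607015e-34    # conductance quantum 2e^2/h (S)
S = V_zero / (alpha * dtheta_meas)              # thermopower from V_tp = S * dtheta

print(f"G = {G / G0:.2f} G0, S = {S * 1e6:.2f} uV/K")
```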
For sample 2, $`\mathrm{\Delta }\theta `$ = 6 K and $`\theta _{av}`$ = 12 K. As it is not obvious that this $`\mathrm{\Delta }\theta `$ can be regarded as a small (linear) perturbation, we will take the full dependence of $`S`$ on $`\mathrm{\Delta }\theta `$ into account in the analysis below. For all the results presented below, the bulk thermopower of the leads has been subtracted.

While breaking the contact by increasing the piezovoltage, the usual plateaus in the conductance are observed . When heating one side of the contact we observe steps in the thermopower which occur simultaneously with conductance jumps from one plateau to the next. Each measured curve produces a different conductance and thermopower trace and a typical example is shown in Fig. 2. Even tiny jumps or changes of slope of the conductance can be accompanied by large steps in the thermopower. On a conductance plateau, even though the conductance hardly changes, smooth variations in the thermopower are usually observed. Note that the thermopower of the contact can have either a positive or a negative sign. When we do not heat, or heat both sides of the contact to the same temperature, we observe no thermopower voltage within the noise level of 300 nV peak to peak.

In order to obtain statistical information about a possible correlation between the measured thermopower and conductance values, a density plot was constructed from the combined data of the 72 and 148 individual curves from samples 1 and 2, respectively (Fig. 3). The conductance axis was divided into 10 partitions per $`G_0`$ and the thermopower axis in partitions of 0.125 $`\mu `$V/K. Then, the number of data points falling in each range of conductance and thermopower was counted and the results are represented in gray scale in Fig. 3. In this figure we observe an increase in the spread of the thermopower values with decreasing contact size, with both positive and negative sign. A conductance histogram of the 220 curves is, within the statistical accuracy, in agreement with other gold histograms taken at low temperatures (e.g. Ref. ). Although the data are not presented here, similar results for the thermopower have also been observed in silver and copper samples, albeit for a more limited number of curves.

The thermopower has a random value and sign, and seems to be much more sensitive to small changes in the atomic geometry of the contact than the conductance. This is not expected from the simple adiabatic models for point contacts, which only predict a positive sign . We propose an interpretation of this behavior in terms of coherent backscattering of the electrons near the contact: as a result of the interference of waves with different path length, the transmission of the contact will show fluctuations as a function of energy . Each atomic rearrangement at a conductance step will alter the interference paths of the backscattered electrons by a significant fraction of $`\lambda _F`$, and hence change the energy dependence of the transmission in an unpredictable way, causing each step in the conductance to result in an unpredictable jump in the thermopower. Along a plateau, the contact gradually changes position with respect to the scattering centers nearby and a gradual change in the interference pattern occurs. We now derive an expression for the thermopower based on these concepts.
The thermopower in quantum point contacts can be written as $`S=L/G`$, with :

$$\frac{L}{G}=\frac{\frac{2e}{h}\int _0^{\infty }(\text{Tr}\,𝐭𝐭^{\dagger })\left[f(\theta +\mathrm{\Delta }\theta ,E)-f(\theta ,E)\right]dE}{\frac{2e^2}{h}\int _0^{\infty }(\text{Tr}\,𝐭𝐭^{\dagger })\frac{\partial f}{\partial E}dE},$$

where $`(\text{Tr}\,𝐭𝐭^{\dagger })`$ is the sum of the transmission probabilities, $`f(\theta ,E)`$ is the temperature- and energy-dependent Fermi function, $`h`$ is Planck’s constant, and $`e=|e|`$ is the electron charge. The thermopower is characterized by the energy dependence of the transmission probabilities. We have approached the problem using the same method as presented in Ref. for the derivation of the conductance fluctuations in atomic size contacts. The point contact is taken as a ballistic central constriction with diffusive regions on both sides. The ballistic section (using the Landauer-Büttiker formalism) is characterized by $`N`$ conductance modes, each with a transmission probability $`T_n`$ and a contribution $`T_nG_0`$ to the conductance. After transmission through the contact, within the dephasing time $`\tau _\varphi `$, the electron scatters elastically in the diffusive region and has a finite probability amplitude, $`a`$, to return to the contact. When the diameter of the contact is small compared to the elastic scattering length $`l_e`$, the return probability, $`|a|^2`$, is small and we need only consider lowest-order processes. To lowest order in $`a`$ we can write the transmission of the three sections combined as

$$\text{Tr}\,𝐭𝐭^{\dagger }=\sum _nT_n[1+2\,\text{Re}(r_na_{l,n}+r_n^{\prime }a_{r,n})].$$ (1)

Here, $`r_n`$ and $`r_n^{\prime }`$ are the reflection coefficients in the transfer matrix describing the central ballistic section of the contact when coming from the left and right, respectively, with $`|r_n|^2=|r_n^{\prime }|^2=1-T_n`$. $`a_{l,n}`$ and $`a_{r,n}`$ are the return amplitudes for mode $`n`$ from the left and right diffusive regions, respectively. The latter are sums over all possible paths of length $`l`$, containing phase factors $`e^{i(E-E_F)l/\hbar v_F}`$. The second term in Eq. (1) describes the interference of the directly transmitted wave with the fraction that, after transmission, is first backscattered to the contact and subsequently reflected at the contact.

Assuming the dominant energy dependence is in the phase factors, the integration over $`E`$ in the expression for $`L`$ can be performed. We consider the square of $`L`$, averaged over an ensemble of scattering configurations,

$$\langle L^2\rangle =\left(\frac{2ek_B}{\hbar }\frac{\theta }{\mathrm{\Delta }\theta }\right)^2\sum _nT_n^2(1-T_n)\int _0^{\infty }|a(\tau )|^2\left(\frac{1}{\mathrm{sinh}(z)}-\frac{1+\mathrm{\Delta }\theta /\theta }{\mathrm{sinh}(z(1+\mathrm{\Delta }\theta /\theta ))}\right)^2d\tau ,$$ (2)

with $`z=\pi k_B\theta \tau /\hbar `$ and $`\tau =l/v_F`$. Here we have assumed that $`a_{l,n}`$ and $`a_{r,n}`$ are uncorrelated and have the same average return probability $`|a|^2`$, independent of the mode number $`n`$. For $`|a(\tau )|^2`$ we substitute the semiclassical probability to return to the contact into any of the $`N`$ modes after a time $`\tau `$, $`|a(\tau )|^2=v_F/[\sqrt{12\pi }k_F^2(D\tau )^{3/2}]`$, with $`D=v_Fl_e/3`$ the diffusion constant. The integral in Eq. (2) can be performed numerically. It only weakly depends on the ratio $`\mathrm{\Delta }\theta /\theta `$.
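The defining ratio at the start of this section is easy to evaluate numerically. The sketch below computes the thermopower from the Fermi-window integrals for a hypothetical smooth transmission function `T_tot` (its oscillatory term merely stands in for the interference contribution of Eq. (1)), and compares it with the Mott formula; the overall sign follows the convention $`V_{tp}=S\mathrm{\Delta }\theta `$, and all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import quad

kB = 8.617e-5                        # Boltzmann constant (eV/K)
mu, theta, dtheta = 0.0, 12.0, 6.0   # chemical potential (eV); temperatures (K), as for sample 2

def fermi(E, T):
    return 1.0 / (np.exp((E - mu) / (kB * T)) + 1.0)

def mdfdE(E, T):                     # -df/dE, the thermal smearing kernel
    x = (E - mu) / (2.0 * kB * T)
    return 1.0 / (4.0 * kB * T * np.cosh(x) ** 2)

def T_tot(E, T0=1.0, a=0.1, Ec=0.01):
    # Hypothetical smooth Tr(t t^dagger); the oscillation on the scale Ec
    # mimics the interference term of Eq. (1).
    return T0 * (1.0 + a * np.sin(E / Ec))

num = quad(lambda E: T_tot(E) * (fermi(E, theta + dtheta) - fermi(E, theta)), -0.05, 0.05)[0]
den = quad(lambda E: T_tot(E) * mdfdE(E, theta), -0.05, 0.05)[0]
S = -num / (dtheta * den)            # thermopower in V/K (sign convention assumed)

dlnT = (np.log(T_tot(mu + 1e-6)) - np.log(T_tot(mu - 1e-6))) / 2e-6
S_mott = -(np.pi ** 2 * kB ** 2 * theta / 3.0) * dlnT
print(f"finite-dtheta: {S * 1e6:+.2f} uV/K, Mott: {S_mott * 1e6:+.2f} uV/K")
```

The two numbers agree at the per cent level when $`k_B\theta `$ is small compared to the energy scale of the transmission fluctuations, and the magnitudes come out in the $`\mu `$V/K range observed experimentally.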
For the standard deviation of the thermopower $`\sigma _S=\sqrt{\langle S^2\rangle -\langle S\rangle ^2}=\sqrt{\langle L^2\rangle /G^2}`$ this finally results in

$$\sigma _S=\frac{ck_B}{ek_Fl_e\sqrt{1-\mathrm{cos}\gamma }}\left(\frac{k_B\theta }{\hbar v_F/l_e}\right)^{1/4}\frac{\sqrt{\sum _{n=1}^NT_n^2(1-T_n)}}{\sum _{n=1}^NT_n}.$$ (3)

Here, $`c`$ is a numerical constant which equals 5.658 in the limit $`\mathrm{\Delta }\theta /\theta \to 0`$, and increases by about 5% for $`\mathrm{\Delta }\theta /\theta =0.5`$. We have also introduced a factor $`(1-\mathrm{cos}\gamma )`$ to account for the finite geometrical opening angle of the contact, where the limit $`\gamma `$ = 90° corresponds to an opening in an infinitely thin insulating layer between two metallic half spaces. Note that $`\sigma _S`$ in Eq. (3) is equal to zero when all $`T_n`$ are equal to either 0 or 1.

In Fig. 4 we plot the standard deviation of the thermopower, determined from the experimental data by sorting all data points as a function of $`G`$ from the combined set of the 220 curves and averaging over 1000 consecutive data points. We compare these data to the theoretical curve calculated from the above expression for the case where the modes contributing to the conductance open one by one. That is, for all conductances with $`N`$ modes contributing to the conductance, only one (i.e., $`T_N`$) differs from 1 and $`\sigma _S\propto T_N\sqrt{1-T_N}/(N-1+T_N)`$. The observed deep minimum at $`G=G_0`$ suggests that this conductance is dominantly carried by a single mode. This is in agreement with measurements of the shot noise on atomic size gold contacts , and the observed suppression of conductance fluctuations in Ref. . For $`G>G_0`$, the limited statistics in combination with the property that the effect in the thermopower scales inversely with conductance prevent the definite identification of minima near quantized values. From the amplitude of the curve we obtain an estimate for the elastic mean free path of $`l_e=5\pm 1`$ nm, using reasonable values for the opening angle of the contact of 35°–50° . All data points should be on or above the full curve in Fig. 4, since contributions by more conductance modes can only increase the variance in the thermopower. Therefore, $`l_e`$ cannot be much smaller than 4 nm. For a much larger value of $`l_e`$ many modes would have to contribute to the conductance, in which case we would not expect to find any minima at quantized values.

Apart from the thermopower effects described here, for a quantum point contact positive peaks in the thermopower, centered at conductance values $`(n+\frac{1}{2})G_0`$, $`n`$ = 0,1,2,…, were predicted due to the structure of the electron density of states . This effect has indeed been observed in two-dimensional electron gas devices , but is much smaller than the fluctuations we observe here, and therefore cannot be resolved in the mean value $`\langle S\rangle `$ for our metallic point contacts.

The mechanism we present to explain the thermopower is the same as the one proposed for the voltage dependence of the conductance . Indeed, when we plot $`\sigma _S`$ and $`\sqrt{\langle (\partial G/\partial V)^2\rangle }/G`$ for gold, the data show very similar behavior. The energy scales with which both measurements have been performed are so different (6 K temperature difference versus 20 mV amplitude) that comparison between the parameters obtained by both methods is a test for the validity of the theoretical derivation.
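The mode-opening scenario used for the theoretical curve amounts to a one-line formula. In the sketch below the material-dependent prefactor of Eq. (3) is set to unity, so only the relative shape of $`\sigma _S(G)`$ is meaningful:

```python
import numpy as np

def sigma_rel(G):
    # sqrt(sum T_n^2 (1 - T_n)) / sum T_n with channels opening one by one:
    # T_1 = ... = T_{N-1} = 1 and T_N = G - (N - 1); prefactor of Eq. (3) omitted.
    G = np.asarray(G, dtype=float)
    N = np.ceil(G)
    TN = G - (N - 1)
    return TN * np.sqrt(1.0 - TN) / G

print(sigma_rel([0.5, 1.0, 1.5, 2.0, 2.5]))  # vanishes at integer G (in units of G0)
```

The output shows the two features discussed in the text: deep minima at quantized conductance values and an overall $`1/G`$ suppression of the fluctuation amplitude.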
The mean free path value obtained from the conductance fluctuations is 5 nm, in good agreement with the estimate obtained here. Hence, the fact that both works are not only in good qualitative but also in good quantitative agreement is strong support for the model used. This work was part of the research program of the “Stichting FOM”, which was financially supported by NWO. We have greatly profited from many discussions with C. Urbina, D. Esteve, M.H. Devoret, and we thank L.J. de Jongh for his stimulating support.
# Lifetimes of agents under external stress

## Abstract

An exact formula for the distribution of lifetimes in coherent-noise models and related models is derived. For certain stress distributions, this formula can be analytically evaluated and yields simple closed expressions. For those types of stress for which a closed expression is not available, a numerical evaluation can be done in a straightforward way. All results obtained are in perfect agreement with numerical experiments. The implications for the coherent-noise models’ application to macroevolution are discussed.

Agents under externally imposed stress have been recently studied in coherent-noise and related models . These models display scale free distributions in a number of quantities, such as event sizes and lifetimes, or in the decay pattern of aftershocks. Coherent-noise models are very different from other models displaying scale free behavior, such as sand pile models , as they do not rely on local interactions or feedback. Hence, they are not self-organized critical. Considering the abundance of power-law distributed quantities in nature , models such as the ones of the coherent-noise type can help us understand to what extent self-organized criticality is the right paradigm for describing driven systems, and to what extent other mechanisms can provoke similar power-law distributions.

Despite the simplicity of the original coherent-noise model—agents have thresholds $`x_i`$; if global stress exceeds a threshold $`x_i`$, agent $`i`$ gets replaced; with probability $`f`$, an agent gets a new threshold—no exact analytical results have been obtained so far. The distributions of event sizes and aftershocks have been studied in detail in (event sizes) and in (aftershocks). Both distributions can be regarded as being well understood. Nevertheless, the theoretical results are only of approximate character in both cases. In the case of the distribution of lifetimes, there are even fewer theoretical results. Sneppen and Newman have given an expression based on their time-averaged approximation. This expression is right for certain stress distributions, as we will show below. However, it breaks down for slowly decaying distributions such as the Lorentzian distribution. Moreover, it is not clear when exactly it can be applied. In a recent paper , a different approach to calculating the distribution of lifetimes has been taken, and the author claimed that the lifetimes obey multiscaling, with a $`L^{-2}`$ decrease for small lifetimes, and a $`L^{-1}`$ decrease for large lifetimes. Here, we will demonstrate that this statement is wrong. We will calculate the distribution of lifetimes exactly, without any approximations, and we will show that our results are in perfect agreement with numerical simulations.

Our calculations are based on the observation that it is not necessary to know the distribution of thresholds $`\rho (x)`$ for calculating the distribution of lifetimes. All we have to know is the distribution according to which agents enter the system, which is called $`p_{\mathrm{thresh}}(x)`$ in the notation of , and the stress distribution $`p_{\mathrm{stress}}(x)`$. Once an agent has entered the system, it has a well-defined life expectancy, which is closely related to the probability that the agent will be hit by stress or mutation. Note that in this picture, we are considering only a single agent.
Therefore, if we talk about lifetimes, it does not matter whether the stress acts coherently on a large number of agents, or whether it is drawn for all agents independently. In this respect, the results we obtain in this work are of a much more general nature than the results found previously for event sizes or aftershocks.

An agent with threshold $`x`$ will survive stress and mutation in one time step with a probability $`p(x)`$ equal to

$$p(x)=(1-f)[1-p_{\mathrm{move}}(x)]$$ (1)
$$=(1-f)\int _0^xp_{\mathrm{stress}}(x^{\prime })dx^{\prime }.$$ (2)

What is the distribution of the survival probabilities $`p`$? We denote the corresponding density function by $`u(p)`$. Clearly, we have

$$u(p)dp=p_{\mathrm{thresh}}(x)dx$$ (3)
$$=dx\quad \text{for }0\le x<1.$$ (4)

In the second step, we have assumed that the threshold distribution is uniform. This can always be achieved after a suitable transformation of variables . Hence, we find

$$u(p)=\frac{dx}{dp}.$$ (5)

The derivative $`dx/dp`$ can be calculated from Eq. (1),

$$\frac{dx}{dp}=\frac{1}{(1-f)p_{\mathrm{stress}}[x(p)]},$$ (6)

and $`x(p)`$ can be obtained from Eq. (1) by inversion. The density function is thus defined for $`p<p_{\mathrm{max}}`$, where

$$p_{\mathrm{max}}=p(1)=(1-f)\int _0^1p_{\mathrm{stress}}(x)dx$$ (7)

stems from the condition that the thresholds are distributed uniformly between 0 and 1. Above $`p_{\mathrm{max}}`$, the density function $`u(p)`$ is equal to zero. All agents with the same survival probability $`p`$ generate a distribution of lifetimes which reads

$$g(L)=p^{L-1}(1-p).$$ (8)

Here, $`g(L)`$ is the probability density function for the lifetimes $`L`$. Note that the model works in discrete time steps, therefore the lifetimes $`L`$ are integers, and $`g(L)`$ is only defined for integral arguments. We can calculate the distribution of lifetimes $`h(L)`$ by averaging over the distributions generated by different survival probabilities $`p`$, weighted with their density function $`u(p)`$:

$$h(L)=\int _0^{p_{\mathrm{max}}}u(p)p^{L-1}(1-p)dp.$$ (9)

Equation (9) can be explicitly evaluated for exponentially distributed stress, $`p_{\mathrm{stress}}=\mathrm{exp}(-x/\sigma )/\sigma `$. We find

$$u(p)=\frac{\sigma }{1-f-p}\quad \text{for }0\le p<p_{\mathrm{max}},$$ (10)

with

$$p_{\mathrm{max}}=(1-f)[1-\mathrm{exp}(-1/\sigma )].$$ (11)

After inserting this into Eq. (9) and doing some basic calculations, we obtain

$$h(L)=\frac{\sigma p_{\mathrm{max}}^L}{L}+f\sigma \int _0^{p_{\mathrm{max}}}\frac{p^{L-1}}{1-f-p}dp.$$ (12)

It is possible to calculate the remaining integral with the aid of the identity (see , 15.3.1)

$$\int _0^1t^{b-1}(1-t)^{c-b-1}(1-tz)^{-a}dt=$$ (13)
$$=\frac{\mathrm{\Gamma }(b)\mathrm{\Gamma }(c-b)}{\mathrm{\Gamma }(c)}F(a,b;c;z),$$ (14)

where $`F(a,b;c;z)`$ is the hypergeometric function. We find

$$h(L)=\sigma \frac{p_{\mathrm{max}}^L}{L}\left[1+\frac{f}{1-f}F(L,1;L+1;\frac{p_{\mathrm{max}}}{1-f})\right].$$ (15)

The leading term $`\sigma p_{\mathrm{max}}^L/L`$ is responsible for a $`L^{-1}`$ decay with cut off at $`L\sim 1/f`$. This behavior has been reported previously, and it corresponds to the approximation derived in . The correcting term vanishes with $`f`$. It is of importance only for extremely long lifetimes of the order $`1/f`$, for which it modifies the detailed cut off behavior. In Fig. 1 we display Eq. (15) together with results from direct numerical simulations, for different values of $`f`$. The theoretical result is in perfect agreement with the measured distributions.
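Equation (15) is also straightforward to check numerically. The following sketch compares the closed expression, evaluated with the hypergeometric function, against a direct quadrature of Eq. (9) with the density (10); the parameter values are arbitrary examples, not fitted to Fig. 1:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

sigma, f = 0.5, 1e-3                      # example parameters
pmax = (1 - f) * (1 - np.exp(-1 / sigma)) # Eq. (11)

def h_closed(L):
    # Eq. (15)
    return sigma * pmax**L / L * (1 + f / (1 - f) * hyp2f1(L, 1, L + 1, pmax / (1 - f)))

def h_direct(L):
    # Eq. (9) with the density u(p) = sigma / (1 - f - p) of Eq. (10)
    return quad(lambda p: sigma / (1 - f - p) * p**(L - 1) * (1 - p), 0.0, pmax)[0]

for L in (1, 10, 100):
    print(L, h_closed(L), h_direct(L))    # the two columns agree to quadrature accuracy
```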
The dependency of the cut off on $`f`$ is clearly visible in Fig. 1.

Another stress distribution for which we can derive a closed analytic form for $`h(L)`$ is the uniform distribution, $`p_{\mathrm{stress}}(x)=1`$ for $`0\le x<1`$. We find

$$u(p)=\frac{1}{1-f}\quad \text{for }0\le p<1-f$$ (16)

and

$$h(L)=\frac{(1-f)^{L-1}}{L(L+1)}(1+fL).$$ (17)

As in the case of Eq. (12), we get a leading and a correcting term. The leading term decays as $`L^{-2}`$ with cut-off at $`L\sim 1/f`$, and the second term modifies the cut-off behavior. Interestingly, the distribution of lifetimes is scale-free, although the distribution of event sizes in a coherent-noise model with uniform stresses is not a power law . A plot of Eq. (17) is given in Fig. 2, together with the corresponding measured distribution.

For most other stress distributions, the integral in Eq. (9) can only be done numerically. This is the case, for example, for the Gaussian distribution, $`p_{\mathrm{stress}}(x)=\sqrt{2/(\pi \sigma ^2)}\mathrm{exp}[-x^2/(2\sigma ^2)]`$. Under Gaussian stress, an agent with threshold $`x`$ will survive a single time step with probability

$$p(x)=(1-f)\mathrm{erf}\left(\frac{x}{\sqrt{2}\sigma }\right),$$ (18)

where $`\mathrm{erf}(x)`$ is the error function

$$\mathrm{erf}(x)=\frac{2}{\sqrt{\pi }}\int _0^x\mathrm{exp}(-t^2)dt.$$ (19)

Inversion of Eq. (18) yields

$$x(p)=\sqrt{2}\sigma \mathrm{erf}^{-1}\left(\frac{p}{1-f}\right).$$ (20)

Here, by $`\mathrm{erf}^{-1}(z)`$ we denote the inverse error function, obtained by solving the equation $`z=\mathrm{erf}(x)`$ for $`x`$. We can calculate the density function of the survival probabilities with the aid of Eqs. (6) and (20). The resulting expression reads

$$u(p)=\sqrt{\frac{\pi }{2}}\frac{\sigma }{1-f}\mathrm{exp}\left(\left[\mathrm{erf}^{-1}\left(\frac{p}{1-f}\right)\right]^2\right).$$ (21)

The numerical integration of $`u(p)p^{L-1}(1-p)`$ is somewhat tricky for choices of $`\sigma `$ such that $`p_{\mathrm{max}}/(1-f)`$ is very close to 1, since the inverse error function has a singularity at 1. However, for moderately small $`\sigma `$, the integration can be carried out without too much trouble. The resulting density function $`h(L)`$ is shown in Fig. 2 for $`\sigma =0.15`$ and $`f=10^{-4}`$. We find that, for $`L\ll 1/f`$, the function $`h(L)`$ is almost linear in the log-log plot. A fit to the linear region of $`h(L)`$ gives an exponent $`\tau =1.177\pm 0.01`$, which means $`h(L)`$ decays slightly steeper than the $`L^{-1}`$ decay predicted by the approximation of Sneppen and Newman. However, if we evaluate $`h(L)`$ for much larger $`L`$ and much smaller $`f`$, we find that the exponent $`\tau `$ decreases slowly towards the value 1 (Fig. 3).

Let us now turn to the Lorentzian distribution $`p_{\mathrm{stress}}(x)=(2a/\pi )/(x^2+a^2)`$. In this case, a calculation along the lines of Eqs. (1)–(7) yields the following distribution of survival probabilities:

$$u(p)=\frac{\pi }{2}\frac{a}{1-f}\left(\mathrm{cos}^2\left[\frac{\pi }{2}\frac{p}{1-f}\right]\right)^{-1}.$$ (22)

Here, $`p_{\mathrm{max}}=(2/\pi )(1-f)\mathrm{arctan}(1/a)`$. The result of the numerical integration is shown in Fig. 4. As in the previous cases, we observe a perfect agreement between the analytic expression for $`h(L)`$ and the distribution measured in computer experiments. In the case of Lorentzian stresses, the distribution of lifetimes is clearly not scale invariant.

In Ref. it has been claimed that the distribution of the agents’ lifetimes under external stress decays as $`L^{-2}`$ for small $`L`$.
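In practice the singularity of the inverse error function can be avoided entirely by substituting $`u(p)dp=dx`$ (Eq. (5)) back into Eq. (9) and integrating over the threshold $`x`$ instead of $`p`$. A minimal sketch for the Gaussian case, with the parameters quoted above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

sigma, f = 0.15, 1e-4                 # Gaussian-stress parameters from the text

def p_surv(x):
    # Eq. (18): single-step survival probability of an agent with threshold x
    return (1 - f) * erf(x / (np.sqrt(2) * sigma))

def h(L):
    # Eq. (9) rewritten with u(p) dp = dx, which sidesteps the erfinv singularity
    return quad(lambda x: p_surv(x) ** (L - 1) * (1 - p_surv(x)), 0.0, 1.0, limit=200)[0]

Ls = np.array([10, 30, 100, 300, 1000])
hs = np.array([h(L) for L in Ls])
tau = -np.polyfit(np.log(Ls), np.log(hs), 1)[0]
print(f"effective exponent tau ~ {tau:.3f}")  # should lie near the ~1.18 quoted above,
                                              # the exact value depending on the fit range
```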
Among the four stress distributions considered in this work, we found a $`L^{-2}`$ decay only for the uniform stress distribution. Hence the statement made there is wrong in general. We could verify the $`L^{-1}`$ decay reported there for exponential or Gaussian stresses. As it was also stated there, the Lorentzian stress distribution does not produce a scale free distribution of lifetimes.

A surprising result of this work is the observation that the properties of the distribution of lifetimes and of the distribution of event sizes in a coherent-noise model are largely independent of each other. We do find power-law distributed lifetimes under uniform stress, under which the distribution of event sizes is not scale free, and we do not find power-law distributed lifetimes under Lorentzian stress, which generates a scale free distribution of event sizes. Consequently, we cannot infer a power-law distribution of lifetimes from one of event sizes, or vice versa. Both distributions have to be investigated independently for every type of stress.

Let us conclude with some remarks on the implications of our results for the application of coherent-noise or related models to large scale evolution. In the context of macroevolution, the agents are regarded as species, or higher taxonomical units, such as genera or families . The distribution of genus lifetimes in the fossil record follows either a power-law decrease with exponent near 2, or an exponential decrease . A $`L^{-2}`$ decay can be observed in coherent-noise models with uniform stress. However, in this case the distribution of extinction events does not follow the $`s^{-2}`$ decay – with $`s`$ denoting the number of families gone extinct in one time step – found in the fossil record . The distribution of lifetimes closest to an exponential decay is, among the stress distributions we studied here, generated by Lorentzian stresses. But also in this case, the distribution of extinction events is significantly different from the needed $`s^{-2}`$ decay. On the other hand, it seems to be typical for distributions generating a $`s^{-2}`$ decay, such as exponential, Gaussian, or Poissonian, that the distribution of lifetimes decays as $`L^{-1}`$. It is arguable whether any type of stress can actually generate the right type of distribution for lifetimes and extinction events simultaneously. Hence, the coherent-noise models in their current formulation probably miss some important ingredient as a model of macroevolution. An effect which is not covered, and which has been shown recently to be of importance for the statistical patterns in the fossil record, is a decline in the extinction rate . For example, Sibani *et al.* have demonstrated that the $`L^{-2}`$ decay in lifetimes might be closely related to the decline in the extinction rate.
# Dynamical properties of the one-dimensional Holstein model

## I Introduction

The Holstein model has been used for many years to study physical problems related to the electron-phonon interaction, such as the formation of polarons and bipolarons by self-trapping of charge carriers, or the existence of a charge-density-wave (CDW) ground state due to the Peierls instability. While our knowledge of the ground state of this model has progressed considerably over the past few years, our understanding of its dynamical properties is still very limited and often disputed. The lack of reliable results is especially important in the non-adiabatic and intermediate electron-phonon coupling regimes, where most of the interesting physics occurs, such as the self-trapping crossover and the metal-insulator transition. Currently, there is no well-controlled analytical method to study these regimes and most reliable results come from numerical simulations, such as exact diagonalizations , quantum Monte Carlo (QMC) simulations , and recent density matrix renormalization group (DMRG) calculations . Among these various numerical methods, only exact diagonalizations can easily be used to compute dynamical properties of the system. Although this technique can only be applied to small clusters due to restrictions on computer resources, it often allows us to gain a useful insight into the physics of the system. Moreover, if the error due to the necessary truncation of the phonon Hilbert space is negligible, this method provides numerically exact results, which can be used to assess the accuracy of other analytical or numerical methods.

In this paper we report our study of the dynamical properties of a six-site one-dimensional Holstein lattice with periodic boundary conditions. We consider three different electron concentrations: a single electron, two electrons of opposite spins and a half-filled band (six electrons with zero total spin). The Holstein model describes non-interacting electrons coupled to dispersionless phonons. Its Hamiltonian is

$$H=\mathrm{\Omega }\sum _ib_i^{\dagger }b_i-\gamma \sum _i\left(b_i^{\dagger }+b_i\right)n_i$$ (1)
$$-t\sum _{i\sigma }\left(c_{i+1\sigma }^{\dagger }c_{i\sigma }+c_{i\sigma }^{\dagger }c_{i+1\sigma }\right),$$ (2)

where $`c_{i\sigma }^{\dagger }`$($`c_{i\sigma }`$) creates (annihilates) an electron with spin $`\sigma `$ on site $`i`$, $`n_i=c_{i\uparrow }^{\dagger }c_{i\uparrow }+c_{i\downarrow }^{\dagger }c_{i\downarrow }`$, and $`b_i^{\dagger }`$ and $`b_i`$ are creation and annihilation operators of the local phonon mode. The model parameters are the hopping integral $`t`$, the electron-phonon coupling constant $`\gamma `$, and the bare phonon frequency $`\mathrm{\Omega }`$. For all the results presented in this paper, the phonon frequency is chosen to be in the non-adiabatic regime $`\mathrm{\Omega }=t`$ and we study the variations of the system properties when the electron-phonon coupling $`\gamma `$ goes from zero to the strong-coupling regime $`\gamma >\mathrm{\Omega },t`$.

We perform exact diagonalizations of the Holstein Hamiltonian using the efficient local phonon Hilbert space reduction method that we have recently introduced. This approach uses the information contained in a reduced density matrix to generate an optimal phonon basis which allows us to truncate the phonon Hilbert space without significant loss of accuracy. In our previous work, this method has been demonstrated on the ground state of the six-site Holstein model at half filling.
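As a point of reference for the method described below, a brute-force exact diagonalization of Hamiltonian (2) in a truncated bare phonon basis is already instructive on two sites. The sketch below treats a single electron on two sites; all parameter values are illustrative and `nph` is the number of bare phonon levels kept per site:

```python
import numpy as np

# Two-site, one-electron Holstein model in a truncated bare phonon basis.
t, Omega, gamma, nph = 1.0, 1.0, 1.0, 12

b = np.diag(np.sqrt(np.arange(1, nph)), k=1)   # phonon annihilation operator
I = np.eye(nph)

hop = -t * np.array([[0.0, 1.0], [1.0, 0.0]])  # electron on site 0 or site 1
n0, n1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

H = (np.kron(hop, np.kron(I, I))
     + Omega * np.kron(np.eye(2), np.kron(b.T @ b, I) + np.kron(I, b.T @ b))
     - gamma * (np.kron(n0, np.kron(b + b.T, I)) + np.kron(n1, np.kron(I, b + b.T))))

E = np.linalg.eigvalsh(H)
print("ground state energy:", E[0])   # variational in nph: increase nph until converged
```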
Here, we extend this approach to different band fillings and to excited state calculations. We also show how to use the optimal phonon basis to dress electrons with local phonons. Using the Lanczos algorithm , single-particle and pair spectral functions are calculated for bare electrons and dressed electrons, and the Drude weight and optical conductivity are computed. This paper is organized as follows: in the next section, we describe our method to obtain an optimized phonon basis and to dress electrons with these phonons, and introduce the dynamical quantities we have calculated. In Sec. III we present our results for the six-site Holstein model. Finally, Sec. IV contains our conclusions.

## II Methods

### A Optimal phonon basis

In order to perform an exact diagonalization of the Hamiltonian (2), one needs to introduce a finite basis to describe the phonon degrees of freedom. If one uses a bare phonon basis (the basis made from the lowest eigenstates of the operators $`b_i^{\dagger }b_i`$), the number of phonon levels needed for an accurate treatment can be quite large in the strong-coupling regime. However, this number can be strongly reduced by using an optimal basis (a basis that minimizes the error due to the truncation of the phonon Hilbert space). In a previous work we have introduced a density matrix approach for generating an optimal phonon basis. The key idea of this approach is identical to the key idea of DMRG: in order to eliminate states from a part of a system without loss of accuracy, one should transform to the basis of eigenvectors of the reduced density matrix, and discard states with low probability.

To be specific, consider any wave function $`|\psi \rangle `$ in the Hilbert space of the Holstein model. Let $`\alpha `$ label the four possible electronic states of a particular site (empty, occupied by a single electron of spin up or down, or occupied by two electrons of opposite spins) and let $`n`$ label the bare phonon levels of this site. Let $`j`$ label the combined states of all of the rest of the sites. Then $`|\psi \rangle `$ can be written as

$$|\psi \rangle =\underset{\alpha ,n,j}{\sum }\psi _{\alpha n,j}|\alpha \rangle |n\rangle |j\rangle .$$ (3)

The reduced density matrix $`\rho `$ of the state $`|\psi \rangle `$ for this site is

$$\rho =\underset{\alpha }{\sum }\left[|\alpha \rangle \langle \alpha |\otimes \left(\underset{n,r}{\sum }\rho _{n,r}^\alpha |n\rangle \langle r|\right)\right],$$ (4)

where $`r`$ is another index labeling the bare phonon levels. This density matrix is always diagonal for the electronic states because of the conservation of the number of electrons. The phonon density matrix for each electronic state $`\alpha `$ of the site is given by

$$\rho _{n,r}^\alpha =\underset{j}{\sum }\psi _{\alpha n,j}\psi _{\alpha r,j}^{}.$$ (5)

Let $`w_{\alpha k}`$ be the eigenvalues and $`\varphi _{\alpha k}(n)`$ the eigenvectors of $`\rho _{n,m}^\alpha `$, where $`k`$ labels the different eigenstates for a given electronic state of the site. The states

$$|\varphi _{\alpha k}\rangle =\underset{n}{\sum }\varphi _{\alpha k}(n)|n\rangle ,\quad k=1,2,\mathrm{}$$ (6)

form a new basis of the phonon Hilbert space for each electronic state $`\alpha `$ of the site. The $`w_{\alpha k}`$ are the probabilities of the site being in the state

$$|\alpha ,k\rangle =|\alpha \rangle |\varphi _{\alpha k}\rangle $$ (7)

if the system is in the state $`|\psi \rangle `$. These states $`|\alpha ,k\rangle `$ form a new basis of the Hilbert space associated with each site. If $`w_{\alpha k}`$ is negligible, then the corresponding state $`|\alpha ,k\rangle `$ can be discarded from the basis for the site, without affecting the state $`|\psi \rangle `$.
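The construction of the optimal basis is a few lines of linear algebra once a target state is available. The sketch below recomputes the two-site ground state of the previous example and then diagonalizes the reduced phonon density matrix of Eq. (5) for each electronic occupation of site 0; the rapidly decaying eigenvalue spectra illustrate why a handful of optimal states suffices:

```python
import numpy as np

t, Omega, gamma, nph = 1.0, 1.0, 1.0, 12
b = np.diag(np.sqrt(np.arange(1, nph)), k=1)
I = np.eye(nph)
hop = -t * np.array([[0.0, 1.0], [1.0, 0.0]])
n0, n1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
H = (np.kron(hop, np.kron(I, I))
     + Omega * np.kron(np.eye(2), np.kron(b.T @ b, I) + np.kron(I, b.T @ b))
     - gamma * (np.kron(n0, np.kron(b + b.T, I)) + np.kron(n1, np.kron(I, b + b.T))))
E, V = np.linalg.eigh(H)
psi = V[:, 0].reshape(2, nph, nph)   # indices: electron position, phonon 0, phonon 1

for alpha, label in [(0, "site 0 occupied"), (1, "site 0 empty")]:
    # rho^alpha_{n,r} = sum_j psi_{alpha n,j} psi*_{alpha r,j}, cf. Eq. (5)
    rho = np.einsum("nj,rj->nr", psi[alpha], psi[alpha].conj())
    w = np.sort(np.linalg.eigvalsh(rho))[::-1]
    r = w / w.sum()                  # relative weights r_{alpha k} of Eq. (9) below
    print(label, "- top relative weights:", np.round(r[:3], 6))
```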
If one wishes to keep a limited number of phonon states $`m`$ for a site, then the best states to keep correspond to the $`m`$ eigenstates of $`\rho `$ with largest eigenvalues for each electronic state of the site. The corresponding phonon states $`|\varphi _{\alpha k}\rangle `$ form an optimal phonon basis. In the Holstein model we have found that keeping $`m`$ = 3–5 optimal states $`|\varphi _{\alpha k}\rangle `$ per phonon mode for each electronic state of the site gives results as accurate as with a hundred or more bare phonon states per site for a wide range of parameters.

Unfortunately, in order to obtain the optimal phonon states, we need the target state (3), for instance, the ground state. Usually we do not know this state – we want the optimal states to help get it. This problem can be circumvented in several ways . Here, we describe the algorithm that we have used to obtain optimal phonon bases for the ground state and low-lying states of the Holstein model. First, we calculate a large optimal phonon basis in a two-site Holstein system with appropriate parameters. In such a small system we can carry out calculations with enough bare phonon levels to render completely negligible errors due to the truncation of the phonon Hilbert space. Thus, target states can be obtained directly by diagonalization in the bare phonon basis. Then, the optimal phonon states of the two-site system are used as the initial basis states for calculations on larger lattices. The simplest of algorithms described in Ref. 13 is used for the six-site system. A single site (called the big site) contains a large number of the optimal phonon states obtained in the two-site systems (up to a few hundred). Each other site of the lattice is allowed to have a much smaller number of optimal phonon states, $`m`$ = 3–5, for each electronic state of the site. Initially these states are also optimal phonon states of the two-site system. The ground state of the Hamiltonian (2) is calculated in this reduced Hilbert space by exact diagonalization. Then, the density matrix (4) of the big site is diagonalized. The most probable $`m`$ eigenstates for each electronic state of the big site form new optimal states which are used on all of the other sites for the next diagonalization. These new phonon states are now optimized for the six-site system and thus are different from the optimal states of the two-site system. After the first diagonalization, the new optimal states of the six-site system are not very accurate. Thus, diagonalizations of the Hamiltonian (2) and of the density matrix are repeated until the optimal states have converged. In each diagonalization, the big site always has a large number of phonon states, so that it can generate improved optimal states for the next iteration. After full convergence of the optimal phonon basis, the error made by using 3–5 optimal states instead of hundreds of bare levels is negligible: typically, the error in the ground state energy is smaller than $`10^{-5}`$ with 3 or more optimal states when only the ground state is targeted.
To study dynamical properties we can extend the above approach by targeting the ground state $`|\psi _0\rangle `$ as well as some low-lying excited states $`|\psi _s\rangle ,s>0`$. In this case, the density matrix of the big site is formed by adding the density matrices $`\rho _s`$ of each state $`|\psi _s\rangle `$, with weighting factors $`a_s`$ (normalized by $`\sum _sa_s=1`$),

$$\rho =\underset{s}{\sum }a_s\rho _s.$$ (8)

Thus, information from several states can be included to select the optimal phonon basis. The weighting factors allow us to vary the influence of each state $`|\psi _s\rangle `$ in the formation of the optimal phonon basis. Not surprisingly, we have found that more optimal states must be kept on each site to reach a given accuracy when several states are targeted than when only the ground state is targeted. Obviously, these additional optimal states are necessary to describe accurately the excited states $`|\psi _s\rangle `$. However, they do not seem to be necessary to obtain a qualitative description of dynamical properties (see the discussion in Sec. II D). Therefore, we usually target only the ground state in our calculations.

### B Dressing of electronic operators

A very interesting feature of our method is that it provides a very simple way to dress electrons with phonons. Let us assume that the density matrix eigenstates $`\varphi _{\alpha k}(n),k=1,2,\mathrm{}`$ are ranked by decreasing weight $`w_{\alpha k}`$ for each electronic state $`\alpha `$. For a given index $`k`$, the weights $`w_{\alpha k}`$ and eigenstates $`\varphi _{\alpha k}(n)`$ of the different electronic states $`\alpha `$ often seem completely unrelated. However, we can consider the relative weight

$$r_{\alpha k}=\frac{w_{\alpha k}}{\sum _qw_{\alpha q}}$$ (9)

of an optimal state $`|\varphi _{\alpha k}\rangle `$ compared to the weight of all states in the corresponding optimal basis. As noted previously, $`w_{\alpha k}`$ (and thus $`r_{\alpha k}`$) decreases rapidly with $`k`$. We have found that, for a given index $`k`$, the variations of $`r_{\alpha k}`$ as a function of the electronic state $`\alpha `$ are much smaller (at least by one order of magnitude) than the variations of $`r_{\alpha k}`$ between successive values of $`k`$. Therefore, there is an unambiguous relation between optimal phonon states $`|\varphi _{\alpha k}\rangle `$ for different electronic occupations of a site $`\alpha `$ given by their relative weights $`r_{\alpha k}`$. This one-to-one mapping between optimal states can be used to dress electronic operators.

All electronic operators can be written as the sum and product of operators acting on a single site, for instance $`c_{i\sigma }^{\dagger }`$ and $`c_{i\sigma }`$. Such a local operator is diagonal in the bare phonon basis and can be written as

$$O=\left(\underset{\alpha ,\beta }{\sum }O_{\alpha ,\beta }|\alpha \rangle \langle \beta |\right)\otimes I_{ph},$$ (10)

where $`\alpha `$ and $`\beta `$ label the four possible electronic states of the site and $`I_{ph}`$ is the identity operator acting in the Hilbert space of the local phonon mode. Now we can define the corresponding dressed operator as

$$\stackrel{~}{O}=\underset{\alpha ,\beta }{\sum }\left(O_{\alpha ,\beta }|\alpha \rangle \langle \beta |\otimes U_{\alpha ,\beta }\right),$$ (11)

where $`U_{\alpha ,\beta }`$ is a unitary operator in the phonon Hilbert space given by

$$U_{\alpha ,\beta }=\underset{k}{\sum }|\varphi _{\alpha k}\rangle \langle \varphi _{\beta k}|.$$ (12)

Obviously, for $`\alpha =\beta `$ we have $`U_{\alpha ,\beta }=I_{ph}`$ because the density matrix eigenstates $`\varphi _{\alpha k}(n)`$ satisfy the orthonormalization condition $`\sum _k\varphi _{\alpha k}(n)\varphi _{\alpha k}(r)=\delta _{n,r}`$. However, for $`\alpha \ne \beta `$, $`U_{\alpha ,\beta }`$ is not trivial because the eigenstates for different electronic states are unrelated, in general. Nevertheless, $`U_{\alpha ,\beta }`$ is unambiguously defined thanks to the one-to-one mapping between eigenstates discussed above. To be rigorous, the operator $`U_{\alpha ,\beta }`$ is unitary only if we sum up the index $`k`$ in Eq.
(12) over the infinite number of states in the basis (6). As we generally know only $`m`$ = 3–5 optimal phonon states, the operator $`U_{\alpha ,\beta }`$ also involves a projection onto the subspace spanned by these few states. However, by construction all wave functions that we calculate have a negligible weight out of this subspace. Thus, $`U_{\alpha ,\beta }`$ can be regarded as a unitary transformation for all practical purposes.

Clearly, the operator $`\stackrel{~}{O}`$ transforms electronic states as the bare operator does, but it also transforms the phonon degrees of freedom accordingly. For instance, the operator $`\stackrel{~}{c}_{i,\sigma }`$ not only removes an electron with spin $`\sigma `$ from the site $`i`$, but it also transforms the phonon mode on this site, changing optimal phonon states for a site with two electrons into optimal states for a site with an electron of spin $`-\sigma `$ and changing optimal states for a site with one electron of spin $`\sigma `$ into the optimal states of an unoccupied site. Therefore, the operator $`\stackrel{~}{O}`$ acts on electrons dressed by the local optimal phonon states as the bare operator acts on bare electrons. For instance, the operator $`\stackrel{~}{c}_{i,\sigma }^{\dagger }`$ creates a dressed electron of spin $`\sigma `$ on the site $`i`$. We note, however, that the dressing of electrons by phonons at a finite distance from the electrons is completely neglected with this method.

In two cases ($`\gamma =0`$ and $`t=0`$) it is possible to calculate the optimal phonon basis analytically and thus to understand the transformation (11). In the weak coupling limit ($`\gamma /t,\gamma /\mathrm{\Omega }\to 0`$) the optimal phonon states resemble the bare phonon levels for each electronic state of the site. Therefore, the unitary transformation $`U_{\alpha ,\beta }`$ is similar to the identity operator for any values of $`\alpha `$ and $`\beta `$ and thus, $`\stackrel{~}{O}\simeq O`$. In the strong-coupling anti-adiabatic limit ($`\gamma ,\mathrm{\Omega }\gg t`$) the optimal phonon states are simply the eigenstates of quantum oscillators with an equilibrium position shifted by $`2\gamma /\mathrm{\Omega }N_\alpha `$, where $`N_\alpha `$ is the number of electrons on the site in the electronic state $`\alpha `$, $`n_i|\alpha \rangle =N_\alpha |\alpha \rangle `$. This corresponds to the states obtained by applying the Lang-Firsov unitary transformation

$$S(g)=e^{g\sum _i(b_i^{\dagger }-b_i)n_i}$$ (13)

with $`g=\gamma /\mathrm{\Omega }`$ to the bare states. Therefore, in this limit we have

$$\stackrel{~}{O}=S(\gamma /\mathrm{\Omega })OS^{-1}(\gamma /\mathrm{\Omega })$$ (14)

and the electronic operators obtained with the transformation (11) are completely equivalent to those defined in other works using the Lang-Firsov transformation. However, in the general case, the transformation (11) is more accurate than the Lang-Firsov transformation. The latter is only an (analytical) approximation of the transformation of a local phonon mode as a function of the electronic occupation of a site, while the former is based on a (numerically) exact transformation of the phonon states.

### C Dynamical quantities

We compute dynamical properties such as spectral weight functions and optical conductivity using the Lanczos algorithm combined with the continued fraction method. This algorithm yields not only the dynamical correlation and response functions of the system, but also the most important eigenstates $`|\psi _s\rangle `$ that contribute to these functions and that we need to build the density matrix (8).
We define the spectral weight function as

$$A(p,\omega )=\frac{1}{\pi }\mathrm{Im}\left[\langle \psi _0|c_{p\sigma }^{\dagger }\frac{1}{H-\omega -E_0-i\epsilon }c_{p\sigma }|\psi _0\rangle \right],$$ (15)

where $`|\psi _0\rangle `$ is the ground state wave function for a given number of electrons, $`E_0`$ is the ground state energy, and the operators $`c_{p\sigma }^{\dagger }`$ and $`c_{p\sigma }`$ create and annihilate an electron with momentum $`p`$ and spin $`\sigma `$, respectively. In all the results presented in this paper, we have used a broadening factor $`\epsilon =0.1`$. The total weight of this spectral function (obtained by integrating over $`\omega `$) is equal to the momentum density distribution

$$n_\sigma (p)=\langle \psi _0|c_{p\sigma }^{\dagger }c_{p\sigma }|\psi _0\rangle .$$ (16)

We also define a spectral function $`\stackrel{~}{A}(p,\omega )`$ and its total weight $`\stackrel{~}{n}_\sigma (p)`$, where we substitute dressed operators $`\stackrel{~}{c}_{p\sigma }^{\dagger }`$ and $`\stackrel{~}{c}_{p\sigma }`$ for the corresponding bare operators in (15) and (16).

When there is more than one electron in the lattice, one expects electrons to form tightly bound pairs at strong electron-phonon coupling. Thus, it is interesting to study the pair spectral function

$$P(p,\omega )=\frac{1}{\pi }\mathrm{Im}\left[\langle \psi _0|\mathrm{\Delta }_p^{\dagger }\frac{1}{H-\omega -E_0-i\epsilon }\mathrm{\Delta }_p|\psi _0\rangle \right],$$ (17)

where

$$\mathrm{\Delta }_p^{\dagger }=\frac{1}{\sqrt{N}}\underset{j}{\sum }e^{ipj}c_{j\uparrow }^{\dagger }c_{j\downarrow }^{\dagger }$$ (18)

and its Hermitian conjugate $`\mathrm{\Delta }_p`$ are pair operators with momentum $`p`$, with $`N`$ being the number of sites. The total weight of this spectral function is given by

$$d(p)=\langle \psi _0|\mathrm{\Delta }_p^{\dagger }\mathrm{\Delta }_p|\psi _0\rangle .$$ (19)

In this case too, we introduce a spectral function $`\stackrel{~}{P}(p,\omega )`$ and its total weight $`\stackrel{~}{d}(p)`$ using dressed electronic operators $`\stackrel{~}{\mathrm{\Delta }}_p^{\dagger }`$ and $`\stackrel{~}{\mathrm{\Delta }}_p`$ in Eqs. (17) and (19).

The real part of the optical conductivity is made up of a Drude peak at $`\omega =0`$ and an incoherent part for $`\omega >0`$, $`\sigma (\omega )=D\delta (\omega )+\sigma ^{\prime }(\omega )`$. The incoherent part of the conductivity is given by

$$\sigma ^{\prime }(\omega )=\frac{e^2}{\omega N}\mathrm{Im}\left[\langle \psi _0|J^{\dagger }\frac{1}{H-E_0-\omega -i\epsilon }J|\psi _0\rangle \right],$$ (20)

where the current operator is defined as

$$J=it\underset{j\sigma }{\sum }(c_{j+1\sigma }^{\dagger }c_{j\sigma }-c_{j\sigma }^{\dagger }c_{j+1\sigma }).$$ (21)

$`\sigma ^{\prime }(\omega )`$ can be calculated using the Lanczos method, then the Drude weight $`D`$ can be evaluated using the well-known sum rule

$$\int _0^{\mathrm{\infty }}\sigma (\omega )d\omega =\frac{\pi e^2}{2}(-T)$$ (22)

relating the total weight of the optical conductivity to the electronic kinetic energy per site

$$T=-\frac{t}{N}\underset{j\sigma }{\sum }\langle \psi _0|c_{j+1\sigma }^{\dagger }c_{j\sigma }+c_{j\sigma }^{\dagger }c_{j+1\sigma }|\psi _0\rangle .$$ (23)

In this paper, the spectral functions and the incoherent part of the conductivity are always shown in arbitrary units. Quantitative results for the Drude weight $`D`$ are expressed in units of $`2\pi e^2t`$.

### D Comparison with other approaches

As a first check of our method, we have compared our exact diagonalization results for the lowest eigenstates of small Holstein clusters (up to six sites) with DMRG calculations. We have always found a good quantitative agreement. For instance, the eigenenergies obtained with both methods agree within $`10^{-3}t`$ for at least the 18 lowest eigenstates in a six-site lattice.
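All of these response functions reduce to matrix elements of a resolvent, which the Lanczos recursion turns into a continued fraction. The sketch below demonstrates the machinery on the small two-site block built earlier, using the phonon displacement $`(b_0+b_0^{\dagger })|\psi _0\rangle `$ as the starting vector (a local phonon spectral function rather than Eq. (15), but with the same pole structure); all parameters are illustrative:

```python
import numpy as np

def lanczos(H, v0, m=60):
    # Plain Lanczos without re-orthogonalization: adequate for broadened spectra,
    # since ghost eigenvalues carry little spectral weight at this system size.
    a, b = [], []
    q_prev, q = np.zeros_like(v0), v0 / np.linalg.norm(v0)
    for _ in range(m):
        w = H @ q - (b[-1] * q_prev if b else 0.0)
        a.append(q @ w)
        w -= a[-1] * q
        beta = np.linalg.norm(w)
        if beta < 1e-10:
            break
        b.append(beta)
        q_prev, q = q, w / beta
    return np.array(a), np.array(b)

def green(a, b, norm2, z):
    # Continued-fraction evaluation of <v| (H - z)^(-1) |v>
    g = 0.0j
    for j in range(len(a) - 1, -1, -1):
        bj2 = b[j] ** 2 if j < len(a) - 1 else 0.0
        g = 1.0 / (a[j] - z - bj2 * g)
    return norm2 * g

t, Omega, gamma, nph = 1.0, 1.0, 1.0, 10
bop = np.diag(np.sqrt(np.arange(1, nph)), k=1); I = np.eye(nph)
hop = -t * np.array([[0.0, 1.0], [1.0, 0.0]])
n0, n1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
H = (np.kron(hop, np.kron(I, I))
     + Omega * np.kron(np.eye(2), np.kron(bop.T @ bop, I) + np.kron(I, bop.T @ bop))
     - gamma * (np.kron(n0, np.kron(bop + bop.T, I)) + np.kron(n1, np.kron(I, bop + bop.T))))
E, V = np.linalg.eigh(H)
v = np.kron(np.eye(2), np.kron(bop + bop.T, I)) @ V[:, 0]   # (b0 + b0^dag)|psi0>

a, b = lanczos(H, v)
omega = np.linspace(0.0, 6.0, 500)
A = np.imag([green(a, b, v @ v, w + E[0] + 0.1j) for w in omega]) / np.pi
print("dominant peak position:", omega[np.argmax(A)])
```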
We have also carried out calculations of dynamical properties in the two-site Holstein model where we can keep enough phonon levels to obtain numerically exact results. In this case, we have calculated the optimal phonon basis in various ways, changing both the number of targeted states $`|\psi _s\rangle `$ and the weighting factors $`a_s`$ used to build the density matrix (8). We have found that overlaps between corresponding optimal phonon states in the different bases are always larger than $`90\%`$. The spectral functions and conductivity calculated using the different bases also agree qualitatively. Therefore, the inclusion of excited states in the density matrix (8) does not seem to be necessary to obtain a qualitative description of dynamical properties. Finally we have also calculated the optical conductivity of the six-site Holstein model of spinless fermions at half filling. We have found a satisfactory agreement with the results obtained recently by Weisse and Fehske using completely different approaches to truncate the phonon Hilbert space and to calculate the conductivity.

The optimal phonon approach used in this work is so efficient that we can easily carry out calculations for dynamical quantities that require powerful parallel computers when a standard phonon Hilbert space truncation method is used. All results presented here have been obtained on a workstation with a 133 MHz processor and 150 MB of RAM.

## III Results

### A Single electron

The case of the Holstein model with a single electron is known as the polaron problem. This case has been extensively studied with both analytical and numerical methods. For weak coupling ($`\gamma /\mathrm{\Omega }<1`$ and $`\gamma ^2/\mathrm{\Omega }<2t`$) the ground state is a quasi-free electron dragging a phonon cloud. For strong coupling ($`\gamma /\mathrm{\Omega }>1`$ and $`\gamma ^2/\mathrm{\Omega }>2t`$) the electron becomes trapped by the lattice distortion that it generates. The quasi-particle composed of this self-trapped electron and the accompanying lattice distortion is called a polaron. The polaron is said to be small when the spatial extension of the self-trapped state is limited to one site. It is known that a smooth crossover occurs from the quasi-free electron ground state to a small polaron ground state as the electron-phonon coupling increases.

First, we examine the optimal phonon states obtained with our method. Figure 1 shows the optimal phonon wave functions $`\varphi (q)`$ as a function of the phonon coordinate $`q=b+b^{\dagger }`$ for different electron-phonon couplings. Only the most important optimal state is shown for each of the two possible electronic occupations of a site ($`N_\alpha =0,1`$). For weak coupling ($`\gamma =0.3t`$) the optimal states are similar to the bare phonon levels and thus, the wave function $`\varphi (q)`$ is just the ground state of a quantum oscillator. As the coupling increases, the optimal states change smoothly and become increasingly distinct. For all couplings, each wave function $`\varphi (q)`$ has a large overlap with the lowest eigenstate of a quantum oscillator with a shifted equilibrium position. This shift is always very small for $`N_\alpha =0`$, but for $`N_\alpha =1`$ it increases with increasing coupling and tends to $`2\gamma /\mathrm{\Omega }`$ at strong coupling. This is in agreement with the strong-coupling theory that predicts optimal phonon states given by the Lang-Firsov transformation (13) with $`g=\gamma /\mathrm{\Omega }`$.
However, in the general case the optimal phonon states differ from the states obtained with this transformation. First, one can see in Fig. 1 that the oscillator shift for $`N_\alpha =1`$ is smaller than $`2\gamma /\mathrm{\Omega }`$ for intermediate couplings ($`\gamma =1.1t`$) and reaches this value only for strong coupling ($`\gamma =3t`$). Of course, this difference could be taken into account simply by using an effective parameter $`g`$ smaller than $`\gamma /\mathrm{\Omega }`$ in Eq. (13). However, there are other features of the optimal states that a simple Lang-Firsov transformation cannot reproduce. For instance, for $`\gamma =2t`$ the optimal wave function $`\varphi (q)`$ for $`N_\alpha =1`$ has a significant tail at low $`q`$. This can be understood as a retardation effect due to the finite value of $`\mathrm{\Omega }/t`$. Most of the time a site is unoccupied and the phonon mode is in the optimal state for $`N_\alpha =0`$. When the electron hops onto this site, the phonon mode cannot adapt instantaneously. Thus, its state for $`N_\alpha =1`$ becomes a combination of the states obtained using the Lang-Firsov transformation for $`N_\alpha =0`$ and $`N_\alpha =1`$. In Fig. 1 we also give the weights $`W_0`$ and $`W_1`$ of the most important optimal state for $`N_\alpha =0`$ and $`N_\alpha =1`$, respectively. $`W_1`$ is much smaller than $`W_0`$ because the probability of finding the electron on a given site is only 1/6, while the probability of a site being empty is 5/6. For $`\gamma =0`$ and $`\gamma \rightarrow \mathrm{\infty }`$ one can show that there is only one optimal state for each electronic state of a site and thus $`W_0=5/6`$ and $`W_1=1/6`$. For intermediate couplings, $`W_0`$ and $`W_1`$ become smaller, showing the increasing importance of the higher optimal phonon states. However, for all cases presented in Fig. 1, the two most important optimal states carry more than $`98\%`$ of the total weight. For weak electron-phonon coupling both spectral functions $`A(p=0,\omega )`$ and $`\stackrel{~}{A}(p=0,\omega )`$ have a single peak at $`\omega =2t`$ because the ground state wave function is simply $$|\psi _0\rangle \approx c_{p=0,\sigma }^{\dagger }|0\rangle \approx \stackrel{~}{c}_{p=0,\sigma }^{\dagger }|0\rangle ,$$ (24) where $`|0\rangle `$ is the vacuum state (with no electrons or phonons), with a ground state energy of about $`-2t`$. Figure 2(a) shows that satellite peaks appear above the dominant peak energy in both spectral functions for larger couplings. The position of the dominant peak shifts to higher energy as the coupling $`\gamma `$ increases. The distance between these peaks is roughly $`\mathrm{\Omega }`$, with some peaks too small to be seen. We can easily understand the structure of these spectral functions. Equation (15) with $`\epsilon \rightarrow 0`$ can be written as $$A(p,\omega )=\underset{n}{\sum }|\langle \varphi _n|c_{p\sigma }|\psi _0\rangle |^2\delta (\omega -(\epsilon _n-E_0)),$$ (25) where $`E_0`$ and $`|\psi _0\rangle `$ are the ground state energy and wave function for a single electron, and $`\epsilon _n`$ and $`|\varphi _n\rangle `$ are a complete set of energies and eigenstates for a lattice containing no electron and thus only non-interacting local phonons. In this case, all eigenenergies are of the type $`\epsilon _n=m\mathrm{\Omega }`$, where $`m`$ is a non-negative integer. Therefore, $`A(p,\omega )`$ and $`\stackrel{~}{A}(p,\omega )`$ contain only peaks with spacing $`\mathrm{\Omega }`$ starting from $`-E_0`$.
In $`A(p=0,\omega )`$ the weight of the dominant peak shifts increasingly to the satellite peaks, and in the strong coupling regime ($`\gamma =2t`$) no dominant peak can be identified \[Fig. 2(a)\]. Moreover, the total weight $`n_\sigma (p=0)`$ of this spectral function decreases continuously as $`\gamma `$ increases \[see Fig. 2(b)\], and in the strong-coupling limit it tends to the value $`1/6`$, signaling a completely localized electron. (More precisely, if the electron is localized on a single site of the six-site lattice then $`n_\sigma (p)=1/N=1/6`$ for all momenta $`p`$.) This decrease and the considerable incoherent contribution to $`A(p=0,\omega )`$ show that the free electron state $`c_{p=0,\sigma }^{\dagger }|0\rangle `$ is no longer a good starting point for a description of the ground state for $`\gamma \gtrsim 1.5t`$. On the other hand, $`\stackrel{~}{A}(p=0,\omega )`$ contains a well defined quasi-particle peak for all couplings \[Fig. 2(a)\], indicating that the motion of the electron is closely accompanied by a local phonon cloud represented by the optimal phonon states. The position of this peak is determined by the ground state energy $`E_0`$ as explained above. For $`\gamma =2t`$ this position gives a polaron energy that approaches the strong-coupling result $`E_0=-\gamma ^2/\mathrm{\Omega }`$. The total weight $`\stackrel{~}{n}_\sigma (p=0)`$ of this spectral function is very interesting \[Fig. 2(b)\]. First, $`\stackrel{~}{n}_\sigma (p=0)`$ is always larger than 0.9, showing that the ground state is very well described by $$|\psi _0\rangle \approx \stackrel{~}{c}_{p=0,\sigma }^{\dagger }|0\rangle $$ (26) for all couplings. Furthermore, we note that $`\stackrel{~}{n}_\sigma (p=0)`$ first decreases slightly as $`\gamma `$ increases and then tends to 1 in the strong-coupling limit. We think that the initial decrease of $`\stackrel{~}{n}_\sigma (p=0)`$ reflects the increasing importance of the extended phonon cloud following the electron as one goes from the non-interacting limit to the intermediate electron-phonon coupling regime. The operator $`\stackrel{~}{c}_{p=0,\sigma }^{\dagger }`$, dressed only by local phonons, does not describe this extended phonon cloud. For larger coupling $`\gamma `$ the phonon cloud collapses to a single site as a small polaron is formed, and thus the dressed operator $`\stackrel{~}{c}_{p=0,\sigma }^{\dagger }`$ again becomes an almost exact description of the ground state. In this regime, the operators $`\stackrel{~}{c}_{p,\sigma }^{\dagger }`$ and $`\stackrel{~}{c}_{p,\sigma }`$ obtained with the transformation (11) are similar to the “small polaron operators” obtained using the Lang-Firsov transformation, as discussed in the previous section. The evolution of the Drude weight $`D`$ and of the kinetic energy per site $`T`$ is shown as a function of $`\gamma `$ in Fig. 3(a). The kinetic energy gives the total weight of the optical conductivity according to Eq. (22), while the Drude weight measures the contribution of coherent motion to the optical conductivity. In Fig. 3(a) the units have been chosen so that both quantities appear equal when there is no contribution from the incoherent part of the conductivity $`\sigma ^{}(\omega )`$, as in a non-interacting system ($`\gamma =0`$). We see that both the Drude weight and the average kinetic energy decrease smoothly as the coupling increases. However, the Drude weight decreases much faster and becomes very small for $`\gamma >2t`$.
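In practice the sum rule (22) is used as follows. The snippet below is our sketch, not the authors' code, and it assumes the convention that the $`\delta `$-function at $`\omega =0`$ contributes its full weight $`D`$ to the integral (with a half-weight convention the bracketed difference would be doubled); the spectrum and kinetic energy shown are made-up placeholders.

```python
import numpy as np

def drude_weight(omega, sigma_reg, T_kin, e=1.0):
    """D = (pi e^2 / 2)(-T) - integral of the incoherent part sigma'(w),
    following Eqs. (22)-(23); T_kin is the (negative) kinetic energy per site."""
    integral = sigma_reg.sum() * (omega[1] - omega[0])   # uniform grid assumed
    return np.pi * e**2 / 2 * (-T_kin) - integral

# illustrative numbers only: a made-up incoherent spectrum and kinetic energy
omega = np.linspace(0.01, 20.0, 2000)
sigma_reg = 0.05 * omega * np.exp(-omega)   # placeholder for a Lanczos result
print(drude_weight(omega, sigma_reg, T_kin=-1.2))
```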
The slow decrease of $`D`$ for small $`\gamma `$ reflects the slight increase of the electron effective mass as it has to drag an increasingly heavy phonon cloud. The small but finite value of $`D`$ for large $`\gamma `$ reflects the fact that a polaron moves coherently, as a free carrier, in the Holstein model, but its effective mass is much larger than the bare mass of the electron. The decrease of the ratio $`D/T`$ implies that incoherent processes become more important in the optical conductivity as $`\gamma `$ increases. As seen in Fig. 3(b), $`\sigma ^{}(\omega )`$ also becomes fairly complex in the polaronic regime ($`\gamma =2.5t`$). The structure of $`\sigma ^{}(\omega )`$ is different from the optical conductivity expected for a small polaron in the adiabatic regime. In particular, no dominant peak is visible at the energy corresponding to the depth of the lattice potential which traps the electron ($`\omega \approx 2\gamma ^2/\mathrm{\Omega }`$). This discrepancy is probably a consequence of the non-adiabatic phonon frequency $`\mathrm{\Omega }=t`$ used in our calculations. In this regime, many-phonon optical excitations become as important as the purely electronic transition at $`\omega \approx 2\gamma ^2/\mathrm{\Omega }`$. For weak electron-phonon coupling ($`\gamma =0.8t`$) the conductivity $`\sigma ^{}(\omega )`$ has very little weight, and its structure is mostly determined by the discrete electronic energy levels of the non-interacting system. Our results agree with the known features of the single-electron Holstein model discussed at the beginning of this section. In particular, they confirm that the ground state is formed by an itinerant quasi-particle (quasi-free electron or polaron) for all electron-phonon couplings, but that the crossover to the polaronic regime is accompanied by a substantial enhancement of the quasi-particle effective mass. In the polaronic regime our results also show that the weight of incoherent processes is much more important than the Drude weight in the optical conductivity. Therefore, small perturbations which are neglected in the Holstein model, such as disorder or thermal phonons, are likely to suppress the coherent motion of the small polaron in more realistic models or actual materials.

### B Two electrons with opposite spins

The case of two electrons with opposite spins is not as well understood as the single-electron case. For strong enough coupling, the electrons are trapped by the lattice distortion that they generate and form an itinerant bound pair called a bipolaron. If both electrons are localized on a single site, the bipolaron is said to be small. It is known that a small bipolaron is formed in the strong-coupling limit of the Holstein model with two electrons. What happens at weaker coupling, before the onset of small bipolaron formation, is still debated. Three different scenarios are possible: both electrons remain free, two independent polarons are formed, or a “large” bipolaron is formed (a bipolaron whose spatial extension is larger than one site). Figure 4 shows the most important optimal phonon wave functions for each possible electronic occupation of a site ($`N_\alpha =0,1,2`$) for different electron-phonon couplings. (In this system the optimal phonon states are similar whether the site is occupied by a spin-up or a spin-down electron, so we do not distinguish between these two cases.) As in the single-electron system, the optimal phonon states are just the ground state of the operator $`b^{\dagger }b`$ in the weak coupling limit.
The weights of these optimal states are determined by the probability of a given site being empty ($`W_0=25/36`$), occupied by one electron ($`W_1=10/36`$), or occupied by both electrons ($`W_2=1/36`$) when there are two independent electrons on a six-site lattice. As $`\gamma `$ increases, $`W_1`$ decreases and tends to zero in the strong coupling limit, while both $`W_0`$ and $`W_2`$ increase and tend to 5/6 and 1/6, respectively. These weights correspond to the occupation probabilities of a particular site when the two electrons form an itinerant, tightly bound pair, as in a small bipolaron. For intermediate couplings, the combined weight of the most probable phonon states in Fig. 4 is smaller than 1, showing the importance of higher optimal phonon states, but it always remains larger than 98% of the total weight. The wave functions for $`N_\alpha =0,2`$ have large overlaps with the ground state of a shifted harmonic oscillator for any coupling. For $`\gamma \gtrsim 1.5t`$ this shift is very close to the strong-coupling theory prediction $`2N_\alpha \gamma /\mathrm{\Omega }`$. Therefore, these optimal states can be obtained with the Lang-Firsov transformation (13) using an appropriate parameter $`g`$, but $`g`$ reaches the theoretical value $`\gamma /\mathrm{\Omega }`$ only at strong coupling, as in the single-electron case. The optimal wave function for $`N_\alpha =1`$, however, changes significantly as the coupling increases. In Fig. 4 one can see that this wave function is approximately the superposition of the wave functions for $`N_\alpha =0`$ and $`N_\alpha =2`$ when $`\gamma =1.5t`$. This wave function cannot be obtained by applying the Lang-Firsov transformation to a bare phonon state. The shape of this optimal state can be understood as a retardation effect. Most of the time a site is empty or occupied by both electrons, and the phonon modes are in the corresponding optimal states. The electronic states with $`N_\alpha =1`$ are essential intermediate states for the coherent motion of the bipolaron, since the Holstein Hamiltonian (2) contains only one-electron hopping terms, but they have very low probability. Electrons do not spend enough time in these states for the phonon modes to adapt. Therefore, the optimal phonon states for $`N_\alpha =1`$ are determined by the optimal phonon states for $`N_\alpha =0`$ and $`N_\alpha =2`$. The single-particle spectral functions $`A(p=0,\omega )`$ and $`\stackrel{~}{A}(p=0,\omega )`$ are similar to those of the single-electron system at weak coupling. A single peak is observed at $`\omega \approx 2t`$ because both electrons occupy the lowest one-particle eigenstate, with energy $`-2t`$, and the ground state wave function is $$|\psi _0\rangle \approx c_{p=0,\uparrow }^{\dagger }c_{p=0,\downarrow }^{\dagger }|0\rangle \approx \stackrel{~}{c}_{p=0,\uparrow }^{\dagger }\stackrel{~}{c}_{p=0,\downarrow }^{\dagger }|0\rangle .$$ (27) In Fig. 5(a) we show both spectral functions for different electron-phonon couplings. The results for bare electrons are qualitatively similar to those observed in the single-electron system. As the coupling strength increases, the weight of the dominant peak shifts to an increasing number of satellite peaks until no well-defined quasi-particle peak can be observed. Dressing the fermion operator simplifies the spectral weight structure somewhat, but important incoherent contributions are still observable. Moreover, the total spectral weight becomes small at strong coupling for both bare and dressed electrons \[Fig. 5(b)\]. Therefore, the ground state can no longer be described by two independent electrons or quasi-particles as was done above for weak coupling.
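The weights quoted at the beginning of this subsection follow from elementary counting; a few lines make the two limits explicit (the uncorrelated weak-coupling limit and the tightly bound strong-coupling limit):

```python
from fractions import Fraction

N = 6                               # lattice sites
p = Fraction(1, N)                  # probability a given electron is on a given site

# weak coupling: two independent electrons (one up, one down)
W0, W1, W2 = (1 - p)**2, 2 * p * (1 - p), p**2
print(W0, W1, W2)                   # 25/36, 5/18 (= 10/36), 1/36

# strong coupling: both electrons bound on the same site (small bipolaron)
print(1 - p, 0, p)                  # 5/6, 0, 1/6
```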
At weak electron-phonon coupling the pair spectral functions $`P(p=0,\omega )`$ and $`\stackrel{~}{P}(p=0,\omega )`$ have a single peak at $`\omega \approx 4t`$ on a finite system, but the weight of this peak vanishes as $`1/N`$ in the thermodynamic limit. Figure 6(a) shows both functions for stronger couplings. The evolution of the structure of $`P(p=0,\omega )`$ with increasing coupling is qualitatively similar to that of the single-particle spectral function. The weight of the dominant peak is progressively shifted to satellite peaks until no dominant peak can be observed. However, the total spectral weight $`d(p=0)`$ remains almost constant (close to $`1/N=1/6`$) for all couplings \[Fig. 6(b)\]. As this value is a finite-size effect for $`\gamma =0`$, we think that the pair spectral function vanishes uniformly in an infinite Holstein lattice. The dressed pair spectral function is much more interesting \[see Fig. 6(a)\]. It shows a single dominant peak for all electron-phonon couplings, though small satellite peaks are present at intermediate coupling. Moreover, the total spectral weight $`\stackrel{~}{d}(p=0)`$ increases with $`\gamma `$ and tends to 1 for strong coupling \[Fig. 6(b)\]. This means that the ground state is given by $$|\psi _0\rangle \approx \stackrel{~}{\mathrm{\Delta }}_{p=0}^{\dagger }|0\rangle $$ (28) in the strong-coupling regime. This state describes an itinerant small bipolaron with momentum $`p=0`$. As the optimal phonon states for $`N_\alpha =0,2`$ are close to the states generated with the Lang-Firsov transformation (13) for strong enough coupling, the dressed pair operator $`\stackrel{~}{\mathrm{\Delta }}_p`$ is similar to the “bipolaron operator” built with this transformation in Ref. 4. The structure of both $`P(p=0,\omega )`$ and $`\stackrel{~}{P}(p=0,\omega )`$ can be understood using the same arguments as for $`A(p=0,\omega )`$ and $`\stackrel{~}{A}(p=0,\omega )`$ in the single-electron case. Therefore, $`P(p=0,\omega )`$ and $`\stackrel{~}{P}(p=0,\omega )`$ contain only peaks with spacing $`\mathrm{\Omega }`$ (with some peaks too small to be seen) starting from $`-E_0`$, where $`E_0`$ is now the ground state energy of the two-electron system. For $`\gamma =1.5t`$, the position of the dominant peak of $`\stackrel{~}{P}(p=0,\omega )`$ shown in Fig. 6(a) gives a bipolaron energy close to the strong-coupling result $`E_0=-4\gamma ^2/\mathrm{\Omega }`$. Figure 7(a) shows the evolution of the Drude weight and of the kinetic energy per site as a function of the electron-phonon coupling. In this case too, the units are chosen so that both quantities are equal in the absence of incoherent contributions to the optical conductivity. Both quantities decrease smoothly as the coupling increases. As in the single-electron case, the small reduction of $`D`$ at weak coupling reflects the slightly renormalized effective mass of the quasi-free electrons, while the small but finite value of $`D`$ at strong coupling shows that the bipolaron is a heavy itinerant quasi-particle. The diminishing ratio $`D/T`$ means that incoherent processes become more important as $`\gamma `$ increases. The incoherent part of the optical conductivity $`\sigma ^{}(\omega )`$ is shown in Fig. 7(b) for both the quasi-free-electron regime and the bipolaronic regime. In the quasi-free-electron regime ($`\gamma =0.3t`$), $`\sigma ^{}(\omega )`$ has very low weight but a simple structure, which is determined by the discrete electronic levels of a non-interacting six-site lattice.
In the bipolaronic regime ($`\gamma =1.5t`$), $`\sigma ^{}(\omega )`$ is fairly complex and the features predicted for a bipolaron in the adiabatic regime are not visible. For instance, there is no clear peak at the energy $`\omega \approx 4\gamma ^2/\mathrm{\Omega }`$ corresponding to the depth of the lattice potential trapping the electrons. As already suggested in the single-electron case, this is probably due to the non-adiabatic phonon frequency ($`\mathrm{\Omega }=t`$) used in this work. Our results show that the ground state of the two-electron system is composed of two independent quasi-free electrons at least up to $`\gamma =0.8t`$, but that it is a small bipolaron at least from $`\gamma =1.3t`$. There is a smooth crossover from one regime to the other as the electron-phonon coupling increases. The nature of the ground state for $`0.8t<\gamma <1.3t`$ is not directly determined by our study. Clearly, there is no sign of a polaron in any of the quantities we have calculated, and the small polaron appears at stronger coupling ($`\gamma \approx 1.5t`$) than the small bipolaron. Thus, we can conclude that a pair of small polarons would be unstable with respect to the formation of a small bipolaron. The only reasonable candidate for the ground state in the intermediate regime is the large bipolaron, but this state is difficult to study on the small lattice considered here. DMRG calculations for long Holstein chains confirm that the ground state evolves smoothly from a pair of quasi-free electrons to a large bipolaron and then to a small bipolaron as the electron-phonon coupling increases. As in the polaron case, the large effective mass of the bipolaron and the dominance of incoherent processes in its optical conductivity mean that coherent motion of a small bipolaron is unlikely in more realistic models.

### C Half filling

Mean-field theory predicts that the ground state of the half-filled Holstein model is a doubly degenerate CDW state with a dimerized lattice and a gap at the Fermi surface for any finite electron-phonon coupling, due to the well-known Peierls instability. This result is exact in the adiabatic limit $`\mathrm{\Omega }\rightarrow 0`$, and some early works suggested that the ground state is also a Peierls insulating state for arbitrary coupling at finite phonon frequency. More recent results suggest that quantum lattice fluctuations can destroy the Peierls insulating state at weak coupling or large phonon frequency. Furthermore, it is known rigorously that the ground state of a finite half-filled Holstein lattice is not degenerate for any finite electron-phonon coupling and any finite phonon frequency. Therefore, the ground state of the Holstein model can be a true Peierls state with CDW order and lattice dimerization only in the thermodynamic limit. We also note that, in the Holstein model of spinless electrons at half filling, it is well established that a metal-insulator transition occurs at finite values of the electron-phonon coupling and phonon frequency. We have carried out a study of the ground state of the one-dimensional Holstein model for electrons with spin-$`\frac{1}{2}`$ at half filling using the DMRG method. Calculations of static correlation functions in long chains (up to 80 sites) clearly show that there is a transition from a metallic ground state to a Peierls insulating state with long-range CDW order in the thermodynamic limit. For $`\mathrm{\Omega }=t`$ this transition occurs between $`\gamma =0.8t`$ and $`\gamma =0.9t`$.
In this paper we concentrate on the dynamical properties of the six-site system; our DMRG results will be reported elsewhere. We had previously noticed that static correlation functions reveal a crossover from a uniform ground state to a Peierls CDW ground state in a half-filled six-site Holstein lattice around $`\gamma =t`$ for $`\mathrm{\Omega }=t`$. In fact, despite the absence of a true broken-symmetry ground state in a finite Holstein lattice, signs of this crossover and of the existence of an “insulating” Peierls CDW phase are clearly seen in static and dynamical properties. This is due to a quasi-degeneracy of the ground state at strong enough coupling. In Fig. 8(a) we show the energy difference between the ground state $`|\psi _0\rangle `$ and the first excited state $`|\psi _1\rangle `$. Above $`\gamma =1.1t`$ the difference is very small and the two states are almost degenerate. The eigenstates $`|\psi _0\rangle `$ and $`|\psi _1\rangle `$ have momentum $`0`$ and $`\pi `$, respectively. We know that they have a constant density $`\langle \psi _{0,1}|n_i|\psi _{0,1}\rangle =1`$ and a uniform lattice structure $`\langle \psi _{0,1}|b_i^{\dagger }+b_i|\psi _{0,1}\rangle =2\gamma /\mathrm{\Omega }`$, because of the translation symmetry of the Holstein Hamiltonian (2) with periodic boundary conditions. If they were exactly degenerate, we could build two broken-symmetry eigenstates $`|\psi _\pm \rangle =|\psi _0\rangle \pm |\psi _1\rangle `$ with a charge modulation $`\langle \psi _\pm |n_i|\psi _\pm \rangle =1\pm (-1)^i\delta `$ and a dimerized lattice $`\langle \psi _\pm |b_i^{\dagger }+b_i|\psi _\pm \rangle =2\gamma /\mathrm{\Omega }(1\pm (-1)^i\delta )`$. These two states would correspond to the two possible phases of the degenerate Peierls CDW state obtained in the mean-field approximation or in the adiabatic limit. As the two lowest eigenstates $`|\psi _0\rangle `$ and $`|\psi _1\rangle `$ are quasi-degenerate for $`\gamma >1.1t`$, the properties of the six-site system are almost indistinguishable from those of a true degenerate Peierls CDW state. The optimal phonon wave functions for the half-filled Holstein model have already been discussed in our previous work. As expected, they are similar to the wave functions obtained in the two-electron system; only the relative weights of the optimal states are quite different, due to the different number of electrons in the system. The total weight of the optimal phonon states associated with zero or two electrons on a site tends to 1, while the weight of those associated with one electron on a site becomes very small for strong coupling. This aspect of the crossover from independent electrons to the Peierls CDW state is also illustrated by Fig. 8(b). The density of doubly occupied electronic sites increases rather sharply from the independent-electron result $`1/4`$ to the maximal value $`1/2`$ possible in a half-filled system. Thus, above the crossover regime the electrons form tightly bound pairs. These electronic pairs are heavily dressed by phonons and thus can be seen as small bipolarons. In the strong-coupling anti-adiabatic limit, it is known exactly that at half filling these small bipolarons form an ordered phase, which is fully equivalent to the Peierls CDW state in this regime. The spectral weight functions of the half-filled system are harder to interpret than those of the single- and two-electron systems. Nevertheless, some of their properties can be understood.
In the non-interacting limit, the spectral functions $`A(p,\omega )`$ and $`\stackrel{~}{A}(p,\omega )`$ show a single peak with total weight 1 for momenta $`|p|\le \pi /3`$, corresponding to occupied single-electron states, and are uniformly zero for momenta $`|p|>\pi /3`$, corresponding to unoccupied single-electron states. As the electron-phonon coupling increases, both $`A(p,\omega )`$ and $`\stackrel{~}{A}(p,\omega )`$ become fairly complex for all values of $`p`$. As an example, Figure 9(a) shows both functions for $`p=\pi /3`$. In the strong-coupling regime ($`\gamma =1.5t`$), one can see that the highest peak of the bare spectral function $`A(p,\omega )`$ is located at an energy $`\omega \approx 4\gamma ^2/\mathrm{\Omega }`$, corresponding to the energy required to remove one electron from the half-filled system without perturbing the dimerized lattice structure of the Peierls CDW ground state. On the other hand, the spectral weight for dressed electrons $`\stackrel{~}{A}(p,\omega )`$ shows a relatively sharp dominant peak, but with only a fraction of the total weight, for all couplings. The position of this peak corresponds to the difference between the ground state energy of the half-filled system and that of a system with one electron removed from a half-filled band. For $`\gamma =1.5t`$ we find that this peak is around $`\omega =3\gamma ^2/\mathrm{\Omega }`$, in agreement with strong-coupling theory predictions. The total spectral weight $`n_\sigma (p)`$ is shown in Fig. 9(b) as a function of $`\gamma `$. Results for $`\stackrel{~}{n}_\sigma (p)`$ are similar. One can see that $`n_\sigma (p)`$ evolves smoothly from the momentum density distribution of free electrons, $`n_\sigma (p)=1`$ for $`|p|\le \pi /3`$ and $`n_\sigma (p)=0`$ for $`|p|>\pi /3`$, to that of completely localized electrons, $`n_\sigma (p)=0.5`$ for all $`p,\sigma `$. At weak coupling, the properties of the pair spectral functions $`P(p,\omega )`$ and $`\stackrel{~}{P}(p,\omega )`$ depend strongly on the momentum $`p`$ but can be easily understood from weak-coupling perturbation theory. In the Peierls CDW regime both functions acquire a simple structure, which is largely similar for all values of the momentum $`p`$. For instance, Figure 10(a) shows both functions for $`p=0`$ from weak to strong coupling. For very weak coupling the bare pair spectral function $`P(p=0,\omega )`$ has only two peaks, at $`\omega =2t`$ and $`4t`$, which correspond to zero-momentum pairs made of two electrons with momenta $`p=\pm \pi /3`$ and $`p=0`$, respectively. As the coupling increases toward the crossover regime, the spectral weight is shifted to an increasing number of satellite peaks. In the Peierls CDW regime we observe a relatively broad cluster of peaks. This cluster seems to be centered around an energy slightly lower than the energy $`8\gamma ^2/\mathrm{\Omega }`$ required to remove two bare electrons from the half-filled ground state without disturbing the dimerized lattice structure. For all couplings $`\stackrel{~}{P}(p=0,\omega )`$ has a well-defined dominant peak, although incoherent contributions are significant in the quasi-free-electron regime. The position of this peak gives the energy difference between the half-filled-band ground state and the ground state of a system with two electrons removed from a half-filled band. For $`\gamma =1.5t`$ one can see that this energy difference is about $`4\gamma ^2/\mathrm{\Omega }`$, in agreement with strong-coupling theory.
As noted above, similar results are obtained for all other momenta in the Peierls CDW phase. The well-defined peaks observed in $`\stackrel{~}{P}(p,\omega )`$ confirm that the ground state is made of small bipolarons in this regime. In such a case, the total weight $`\stackrel{~}{d}(p)`$ can be seen as the bipolaron momentum distribution. The distribution $`\stackrel{~}{d}(p)\approx 0.5`$ for all momenta $`p`$ that we observe at strong coupling \[see Fig. 10(b)\] indicates that the bipolarons are completely localized in the Peierls CDW regime. The study of static correlation functions shows that these localized small bipolarons form an ordered phase even for finite electron-phonon coupling and finite phonon frequency. Therefore, we think that the ground state of the half-filled six-site Holstein lattice can be seen either as a Peierls CDW state with lattice dimerization or as an ordered phase of localized small bipolarons. The first point of view corresponds to the adiabatic limit ($`\mathrm{\Omega }/t\rightarrow 0`$) result, and the second is more appropriate in the strong-coupling or anti-adiabatic limit ($`t/\mathrm{\Omega }\rightarrow 0`$). For the intermediate case $`\mathrm{\Omega }=t`$ discussed here, both pictures appear completely equivalent. Figure 11(a) shows the evolution of the Drude weight and of the kinetic energy per site as the electron-phonon coupling increases. Again the units are chosen so that both quantities are equal in the absence of incoherent processes in the optical conductivity. The kinetic energy decreases rather smoothly, as in the single- and two-electron cases. On the other hand, we can clearly see a sharp decrease of the Drude weight around $`\gamma =t`$. Above $`\gamma =1.5t`$, $`D`$ vanishes within numerical errors. (Of course, $`D`$ is never strictly zero on a finite cluster, because of quantum tunneling.) This negligible value of $`D`$ is consistent with the ground state being made up of localized small bipolarons. Although a six-site cluster cannot be metallic or insulating, the behavior of $`D`$ shown here perfectly illustrates the metal-insulator transition observed in the thermodynamic limit. The incoherent part of the optical conductivity is shown in Fig. 11(b). For weak coupling ($`\gamma =0.3t`$), $`\sigma ^{}(\omega )`$ has very little weight and contains a single dominant peak that can be explained by the discrete electronic energy levels of the non-interacting system. In the Peierls CDW regime ($`\gamma =1.5t`$), the structure of $`\sigma ^{}(\omega )`$ is more complex but clearly shows a significant peak around an energy $`\omega \approx 4\gamma ^2/\mathrm{\Omega }`$, corresponding to the Peierls gap in the strong-coupling limit. In the adiabatic limit the conductivity is zero below the Peierls gap. The significant tail observed below the peak in Fig. 11(b) is due to phonons. It is an excellent illustration of the subgap optical absorption predicted in Peierls systems as a consequence of quantum lattice fluctuations. Our results clearly show that there are two distinct regimes in the half-filled six-site Holstein lattice. The crossover from one regime to the other occurs rather sharply at a critical electron-phonon coupling, which is $`\gamma \approx t`$ for $`\mathrm{\Omega }=t`$. Below this critical coupling, the static and dynamical properties of the system are those of quasi-free electrons on a finite cluster.
Above the critical coupling, the ground state is made of ordered localized small bipolarons, which can also be seen as the Peierls state with CDW order and lattice dimerization predicted by mean-field theory. These two regimes are the finite-system precursors of the metallic and insulating Peierls CDW phases of the infinite system, and the crossover between them signals the quantum phase transition observed at finite electron-phonon coupling in the thermodynamic limit of the half-filled Holstein model.

## IV Conclusion

We have studied the dynamical properties of the six-site Holstein model for three different electron concentrations using an exact diagonalization technique. A density matrix approach allows us to generate an optimal phonon basis and to truncate the phonon Hilbert space without significant loss of accuracy. With this very efficient method we are able to follow the evolution of the system properties as one goes from the weak electron-phonon coupling regime to the strong-coupling regime. For all three electron concentrations studied, a smooth crossover is observed from quasi-free electrons at weak coupling to a strongly correlated state in the strong-coupling regime. This strongly correlated state is a heavy itinerant small polaron in the single-electron case, a heavy itinerant small bipolaron in the two-electron case, and a set of ordered localized small bipolarons similar to a Peierls CDW state in the half-filled-band case. The study of the optimal phonon states reveals that they are often the eigenstates of a quantum oscillator with a shifted equilibrium position. These states can be obtained by applying the Lang-Firsov transformation to the bare phonon states. The amplitude of the shift is generally smaller than the exact result for $`t=0`$, but it tends to this value at strong coupling. However, in some cases we have observed retardation effects due to the electron motion and the slow response of the phonon modes. These effects are small, but they are essential for an accurate description of the electron motion along the lattice, either as independent particles or as part of a composite quasi-particle (polaron or bipolaron). Although we have presented only results for $`\mathrm{\Omega }=t`$, we have checked that these retardation effects become more important for smaller values of $`\mathrm{\Omega }/t`$ (adiabatic limit) but vanish for larger values of $`\mathrm{\Omega }/t`$ (anti-adiabatic limit). We have obtained a wealth of information from the single-particle and pair spectral weight functions. By dressing the electron operators with the optimal phonon states, we are often able to simplify the structure of these spectral functions and to obtain well-defined quasi-particle peaks. For instance, we can identify the formation of a small polaron and of a small bipolaron by the appearance of a single dominant peak in the single-electron spectral function $`\stackrel{~}{A}(\omega )`$ and in the pair spectral function $`\stackrel{~}{P}(\omega )`$, respectively. We have also studied the optical conductivity of these systems. In all cases the Drude weight decreases substantially as the electron-phonon coupling increases from the non-interacting limit to the strong-coupling limit. At half filling, the Drude weight decreases abruptly around $`\gamma =t`$ and is negligibly small for larger $`\gamma `$. This suppression of coherent transport is linked to the appearance of a quasi-degenerate Peierls CDW ground state.
These results support measurements of static correlation functions in larger systems, which show the existence of a metal-insulator transition in the thermodynamic limit of the half-filled one-dimensional Holstein model. One obvious limitation of this study is the complete neglect of the electron-electron interaction when there is more than one electron on the lattice. This interaction is likely to strongly affect the properties of the system, especially in the regime where bipolarons are formed in the absence of electronic repulsion. The approach used in this work can be applied without difficulty to models with an electron-electron interaction, and we have started to investigate the Holstein-Hubbard model, which includes an on-site electron-electron repulsion.

## V Acknowledgments

We would like to thank A. Weisse and H. Fehske for putting some of their results at our disposal before publication. E.J. thanks the Institute for Theoretical Physics of the University of Fribourg, Switzerland, for its kind hospitality during the preparation of this paper. S.R.W. acknowledges support from the NSF under Grant No. DMR-98-70930, and from the University of California through the Campus Laboratory Collaborations Program.
# Magnetic skyrmion lattices in heavy fermion superconductor UPt<sub>3</sub>

## Abstract

Topological analysis of the nearly SO(3)<sub>spin</sub>-symmetric Ginzburg–Landau theory proposed for UPt<sub>3</sub> by Machida et al. shows that there exists a new class of solutions carrying two units of magnetic flux: the magnetic skyrmion. These solutions do not have a singular core like Abrikosov vortices, and at low magnetic fields they become lighter for strongly type II superconductors. Magnetic skyrmions repel each other as $`1/r`$ at distances much larger than the magnetic penetration depth $`\lambda `$, forming a relatively robust triangular lattice. The magnetic induction near $`H_{c1}`$ is found to increase as $`(H-H_{c1})^2`$. This behavior agrees well with experiments.

Heavy fermion superconductors have surprising properties on both the microscopic and the macroscopic level. The charge carriers' pairing mechanism is unconventional. The vortex state is also rather different from that of s-wave superconductors: there exist unusual asymmetric vortices, and phase transitions between numerous vortex lattices take place. For the best studied material, UPt<sub>3</sub>, several phenomenological theories have been put forward which utilize a multicomponent order parameter. In particular, great effort has been made to qualitatively and quantitatively map the intricate $`HT`$ phase diagram. Most attention has been devoted to the region of magnetic fields near $`H_{c2}`$. Magnetization curves of UPt<sub>3</sub> near $`H_{c1}`$ are also rather unusual (see Fig.1). Theoretically, if the magnetization is due to penetration of vortices into a superconducting sample, then one expects $`4\pi M`$ to drop with an infinite derivative at $`H_{c1}`$ (dotted line). Experimentally, however, $`4\pi M`$ continues to increase smoothly (squares and triangles represent rescaled data taken from refs. and respectively). Such behavior was attributed to strong flux pinning or surface effects. However, both experimental curves in Fig.1, as well as the others found in the literature, are close to each other when plotted in units of $`H_{c1}`$. There might be a more fundamental explanation of the universal smooth magnetization curve near $`H_{c1}`$. If one assumes that the fluxons are of an unconventional type with a long-range mutual interaction, then precisely this type of magnetization curve is obtained. The magnetization near $`H_{c1}`$ due to fluxons carrying $`N`$ units of flux $`\mathrm{\Phi }_0\equiv hc/2e`$, with line energy $`\epsilon `$ and mutual interaction $`V(r)`$, is found by minimizing the Gibbs energy of a very sparse triangular lattice: $$G(B)=\frac{B}{N\mathrm{\Phi }_0}\left[\epsilon +3V(a)\right]-\frac{BH}{4\pi },$$ (1) where $`a=(2N\mathrm{\Phi }_0/\sqrt{3}B)^{\frac{1}{2}}`$ is the lattice spacing. When $`V(r)\propto \mathrm{exp}[-r/\lambda ]`$, the magnetic induction has the conventional behavior $`B\propto \left[\mathrm{log}\left(H-H_{c1}\right)\right]^{-2}`$, while if it is long range, $`V(r)\propto 1/r^n`$, then one finds $`B\propto \left(H-H_{c1}\right)^{n+1}`$. The physical reason for this different behavior is very clear. For a short-range repulsion, once one fluxon has penetrated the sample, many more can penetrate at almost no additional cost in energy. This leads to the infinite derivative of the magnetization. For a long-range interaction, on the other hand, making a place for each additional fluxon becomes energy consuming. The derivative of the magnetization thus becomes finite.
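This scaling can be checked directly by minimizing the Gibbs energy (1). The sketch below uses arbitrary units, with made-up values of $`\epsilon `$ and of the interaction constant; only the exponent of the $`B(H)`$ onset is meaningful.

```python
import numpy as np
from scipy.optimize import minimize_scalar

Phi0, N, eps, c = 1.0, 2, 1.0, 0.3          # arbitrary units; V(r) = c / r
Hc1 = 4 * np.pi * eps / (N * Phi0)          # field where the linear term in G changes sign

def G(B, H):                                # Gibbs energy, Eq. (1)
    a = np.sqrt(2 * N * Phi0 / (np.sqrt(3) * B))   # triangular-lattice spacing
    return (B / (N * Phi0)) * (eps + 3 * c / a) - B * H / (4 * np.pi)

for h in (1.01, 1.02, 1.04, 1.08):
    B = minimize_scalar(G, bounds=(1e-14, 1.0), args=(h * Hc1,),
                        method="bounded", options={"xatol": 1e-12}).x
    print(h, B)    # B roughly quadruples when (H - Hc1) doubles: B ~ (H - Hc1)^2
```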
It is generally assumed that, although vortices in UPt<sub>3</sub> differ from the usual Abrikosov vortices in many details, two important characteristics are preserved. First, their size $`\lambda `$ is well defined: the magnetic field and the interactions between vortices vanish exponentially beyond this length. Second, their energy is proportional to $`\mathrm{log}\kappa `$. However, in this note we show, on the basis of a topological analysis of a model by Machida et al., that there exists an additional class of fluxons, which we call magnetic skyrmions. They carry two units of magnetic flux, $`N=2`$, and do not have a singular core, similar to the ATC texture in superfluid <sup>3</sup>He. We show that their line energy $`\epsilon \simeq 2\epsilon _0`$, $`\epsilon _0\equiv \left(\mathrm{\Phi }_0/4\pi \lambda \right)^2`$, is independent of $`\kappa `$ and is smaller than that of Abrikosov vortices for strongly type II superconductors like UPt<sub>3</sub> ($`\kappa \approx 50`$). The magnetic skyrmion lattice becomes the ground state at low magnetic fields, $`(H-H_{c1})/H_{c1}\ll 1`$. We further find that the repulsion between magnetic skyrmions is, in fact, long range: $`V(r)\propto 1/r`$. This allows us to produce a good fit to the magnetization curve (solid line in Fig.1) which is universal (independent of $`\kappa `$). The order parameter in the weak spin–orbit coupling model of UPt<sub>3</sub> is a three dimensional complex vector $`\psi _i(\stackrel{}{r})`$. The Ginzburg–Landau free energy reads: $$F=F_{sym}+\mathrm{\Delta }F,$$ (2) $`F_{sym}=-\alpha \psi _i\psi _i^{\ast }+{\displaystyle \frac{\beta _1}{2}}(\psi _i\psi _i^{\ast })^2+{\displaystyle \frac{\beta _2}{2}}|\psi _i\psi _i|^2`$ (3) $`+K_1\left(|𝒟_x\psi _i|^2+|𝒟_y\psi _i|^2\right)+K_2\left|𝒟_z\psi _i\right|^2+{\displaystyle \frac{1}{8\pi }}B_j^2,`$ (4) $`\mathrm{\Delta }F=\gamma |\psi _x|^2-\lambda |\psi _z|^2+{\displaystyle \frac{\mathrm{\Delta }\chi }{2}}|\psi _iB_i|^2`$ (5) $`+{\displaystyle \underset{i=x,y}{\sum }}\left[k_1^i\left(|𝒟_x\psi _i|^2+|𝒟_y\psi _i|^2\right)+k_2^i\left|𝒟_z\psi _i\right|^2\right],`$ (6) where $`𝒟_j\equiv \partial _j-i(2e/\mathrm{\hbar }c)A_j`$ and $`B_j=(\mathrm{\nabla }\times \stackrel{}{A})_j`$. We have separated eq.(2) into a symmetric part $`F_{sym}`$, which is invariant under the spin rotation group SO(3)<sub>spin</sub> acting on the index $`i`$ of the order parameter, and terms breaking the SO(3)<sub>spin</sub> symmetry (anisotropy, coupling to antiferromagnetic spin fluctuations, and spin-orbit coupling). Although $`\mathrm{\Delta }F`$ is crucial in explaining the double superconducting phase transition in UPt<sub>3</sub> at zero external magnetic field and the shape of the $`H_{c2}(T)`$ curve on the $`HT`$ phase diagram, it can be considered a small perturbation in the low temperature superconducting phase (phase B) well below its critical temperature, $`T\ll T_c^{-}\simeq 0.45`$ K, and at low magnetic fields $`H\sim H_{c1}`$. Indeed, estimation of the coefficients of $`\mathrm{\Delta }F`$ at $`T=T_c^{-}/2`$ yields $`\gamma /\alpha \approx 0.2`$, $`\lambda /\alpha \approx 0.05`$ and $`\left(\frac{\mathrm{\Delta }\chi }{2}H_{c1}^2\right)/\left(\frac{\alpha ^2}{2\beta _1}\right)\approx 10^{-6}`$, and also $`k\ll K`$. Therefore, in a certain range of magnetic fields and temperatures there is an approximate O(3) symmetry, and we first turn to minimizing $`F_{sym}`$.
In the vacuum of phase B the order parameter is $`\stackrel{}{\psi }=\psi _0(\stackrel{}{n}+i\stackrel{}{m})/\sqrt{2}`$, $`\psi _0^2=\alpha /\beta _1`$, $`\stackrel{}{n}\perp \stackrel{}{m}`$, $`\stackrel{}{n}^2=\stackrel{}{m}^2=1`$. Stability requires $`\alpha >0`$, $`\beta _1>0`$ and $`\beta _2>-\beta _1`$. The symmetry breaking pattern of phase B is as follows. Both the spin rotations SO(3)<sub>spin</sub> and the U(1) gauge symmetry are spontaneously broken, but a diagonal subgroup U(1) survives. The subgroup consists of the combined transformations: rotations by angle $`\vartheta `$ around the axis $`\stackrel{}{l}\equiv \stackrel{}{n}\times \stackrel{}{m}`$, accompanied by gauge transformations $`e^{i\vartheta }`$. Each vacuum state is specified by the orientation of a triad of orthonormal vectors $`\stackrel{}{n}`$, $`\stackrel{}{m}`$, $`\stackrel{}{l}`$. The vacuum manifold is therefore isomorphic to SO(3). Topological defects may be of two kinds: regular and “singular”. To find regular topological line defects, it is enough to consider the London approximation, i.e., to assume that the order parameter changes gradually in space from one vacuum to another. The triad $`\stackrel{}{n}`$, $`\stackrel{}{m}`$, $`\stackrel{}{l}`$ then becomes a field. The Abrikosov vortex is a singular defect: it has a core where the modulus of the order parameter vanishes, and its energy diverges logarithmically. Accordingly, a cutoff parameter – the correlation length – should be introduced, and one obtains a $`\mathrm{log}\kappa `$ dependence for the vortex line energy. This fact alone means that if a regular solution exists it is bound to become energetically favorable for large enough $`\kappa `$. Below we consider the situation when the external magnetic field is oriented along the $`z`$ axis and all configurations are translationally invariant in this direction. We use dimensionless units: $`r\rightarrow \lambda \stackrel{~}{r}`$, $`A\rightarrow \left(\mathrm{\Phi }_0/2\pi \lambda \right)\stackrel{~}{A}`$, $`B\rightarrow \left(\mathrm{\Phi }_0/2\pi \lambda ^2\right)\stackrel{~}{B}`$ and $`F=(\epsilon _0/2\pi \lambda ^2)\stackrel{~}{F}`$, where $`\lambda \equiv (\mathrm{\Phi }_0/2\pi )\sqrt{\beta _1/4\pi \alpha K_1}`$ (the tilde will be dropped hereafter). The free energy takes the form $$F_L=\frac{1}{2}\left(\partial _k\stackrel{}{l}\right)^2+\left(n_p\partial _km_p-A_k\right)^2+B_k^2$$ (7) and the field equations are $`n_p\stackrel{}{\mathrm{\nabla }}m_p-\stackrel{}{A}=\stackrel{}{\mathrm{\nabla }}\times \left(\stackrel{}{\mathrm{\nabla }}\times \stackrel{}{A}\right)=\stackrel{}{j},`$ (8) $`\mathrm{\Delta }\stackrel{}{l}-\stackrel{}{l}(\stackrel{}{l}\cdot \mathrm{\Delta }\stackrel{}{l})+2j_k(\stackrel{}{l}\times \partial _k\stackrel{}{l})=0.`$ (9) Eq.(8) shows that the superconducting velocity is given by $`n_p\stackrel{}{\mathrm{\nabla }}m_p=\stackrel{}{\mathrm{\nabla }}\vartheta `$, where the angle $`\vartheta `$ specifies the orientation of the vector $`\stackrel{}{n}`$ or $`\stackrel{}{m}`$ in the plane perpendicular to $`\stackrel{}{l}`$ (see insert in Fig.2). Thus, $`\vartheta `$ is the superconducting phase. Now we proceed to classify the boundary conditions. The magnetic field vanishes at infinity, while the topology of the orientation of the triad $`\stackrel{}{n}`$, $`\stackrel{}{m}`$, $`\stackrel{}{l}`$ at different distant points is described by the first homotopy group of the vacuum manifold: $`\pi _1`$(SO(3))$`=`$Z<sub>2</sub>. This yields a classification of solutions into two topologically distinct classes (“odd” and “even”). This classification is too weak, however, for our purposes, because it does not guarantee a nontrivial flux penetrating the plane.
We will see that configurations of both “parities” are of interest. In the presence of magnetic flux, the possible configurations are further constrained by the flux quantization condition. The vacuum manifold SO(3) is naturally factored into SO(2)$`\otimes `$S<sub>2</sub>, where S<sub>2</sub> is the set of directions of $`\stackrel{}{l}`$ and SO(2) is the superconducting phase $`\vartheta `$. For a given number of flux quanta $`N`$, the phase $`\vartheta `$ makes $`N`$ windings at infinity; see Fig. 2. The first homotopy group of this part is therefore fixed: $`\pi _1`$(SO(2))$`=`$Z. If, in addition, $`\stackrel{}{l}`$ is constant, there is no way to avoid a singularity in the phase $`\vartheta `$, where $`|\stackrel{}{\psi }|=0`$. However, the general requirement that a solution have finite energy is much weaker. It tells us that the direction of $`\stackrel{}{l}`$ should be fixed only at infinity. The relevant homotopy group is nontrivial: $`\pi _2`$(S<sub>2</sub>)$`=`$Z. The second homotopy group appears because fixing $`\stackrel{}{l}`$ at infinity (say, up) effectively “compactifies” the two dimensional physical space into S<sub>2</sub>. The unit vector $`\stackrel{}{l}`$ winds towards the center of the texture. The new topological number is $`Q=(1/8\pi )\int d^2r\epsilon _{ij}\stackrel{}{l}\cdot \left(\partial _i\stackrel{}{l}\times \partial _j\stackrel{}{l}\right)`$. Therefore, all configurations fall into classes characterized by the two integers $`N`$ and $`Q`$. For regular solutions, however, these two numbers are not independent. Upon integrating the supercurrent equation (8) along a remote contour and making use of the identity $`\epsilon _{pqs}l_p(\partial _il_q)(\partial _jl_s)=(\partial _in_p)(\partial _jm_p)-(\partial _im_p)(\partial _jn_p)`$, we obtain $`Q=N/2`$. We call these regular solutions magnetic skyrmions. The lowest energy solution within the London approximation corresponds to $`N/2=Q=-1`$ (or $`N/2=Q=+1`$). We analyze a cylindrically symmetric situation and choose the triad $`\stackrel{}{n}`$, $`\stackrel{}{m}`$, $`\stackrel{}{l}`$ in the form: $`\stackrel{}{l}`$ $`=`$ $`\stackrel{}{e}_z\mathrm{cos}\mathrm{\Theta }(\rho )+\stackrel{}{e}_\rho \mathrm{sin}\mathrm{\Theta }(\rho ),`$ (10) $`\stackrel{}{n}`$ $`=`$ $`\left(\stackrel{}{e}_z\mathrm{sin}\mathrm{\Theta }(\rho )-\stackrel{}{e}_\rho \mathrm{cos}\mathrm{\Theta }(\rho )\right)\mathrm{sin}\varphi +\stackrel{}{e}_\varphi \mathrm{cos}\varphi ,`$ (11) $`\stackrel{}{m}`$ $`=`$ $`\left(\stackrel{}{e}_z\mathrm{sin}\mathrm{\Theta }(\rho )-\stackrel{}{e}_\rho \mathrm{cos}\mathrm{\Theta }(\rho )\right)\mathrm{cos}\varphi -\stackrel{}{e}_\varphi \mathrm{sin}\varphi ,`$ (12) where $`\rho `$ and $`\varphi `$ are polar coordinates and $`\mathrm{\Theta }`$ is the angle between $`\stackrel{}{e}_z`$ and $`\stackrel{}{l}`$. The boundary conditions are $`\mathrm{\Theta }(0)=\pi `$ and $`\mathrm{\Theta }(\mathrm{\infty })=0`$. The vector potential is given by $`\stackrel{}{A}=A(\rho )\stackrel{}{e}_\varphi `$. The general form of such a configuration is shown in Fig.2. The unit vector $`\stackrel{}{l}`$ (solid arrows) flips its direction from up to down as one moves from infinity toward the origin. The phase $`\vartheta `$ (arrow inside small circles in Fig.2) winds twice while completing an “infinitely remote” circle. If in eq. (7) only the first term were present, we would be dealing with a standard SO(3)-invariant nonlinear $`\sigma `$-model.
Being scale invariant, it possesses infinitely many pure skyrmion solutions $`\mathrm{\Theta }_s(\rho ;\delta )=2\mathrm{arctan}(\delta /\rho )`$, which have the same energy, equal to $`2`$ (in units of $`\epsilon _0`$), for any size $`\delta `$ of the skyrmion. However, in the present case the structure of the order parameter is more complex, and the above degeneracy is lifted by the second and third terms of eq. (7). Below we make use of the functions $`\mathrm{\Theta }_s(\rho ;\delta )`$ to explicitly construct variational configurations. We show that as the size of these configurations increases, their energy is reduced to a value arbitrarily close to the absolute minimum of $`\epsilon _{ms}=2`$. Substituting eq.(11) into eq.(7) and integrating over the $`xy`$ plane, we obtain the energy of the magnetic skyrmion in the form $`\epsilon _{ms}=\epsilon _s+\epsilon _{cur}+\epsilon _{mag}`$, where $`\epsilon _s\equiv \int \rho d\rho \left(\mathrm{\Theta }^{\mathrm{\prime }2}/2+\mathrm{sin}^2\mathrm{\Theta }/2\rho ^2\right)`$, $`\epsilon _{cur}\equiv \int \rho d\rho \left[(1+\mathrm{cos}\mathrm{\Theta })/\rho +A\right]^2`$ and $`\epsilon _{mag}\equiv \int \rho d\rho \left(A/\rho +A^{\mathrm{\prime }}\right)^2`$. The first term, $`\epsilon _s`$, is the same as in the nonlinear $`\sigma `$-model without magnetic field. It is bounded from below by $`2`$, the energy of a pure skyrmion. The second term, $`\epsilon _{cur}`$, the “supercurrent” contribution, is positive definite. One can still maintain a zero value of this term when the field $`\mathrm{\Theta }(\rho )`$ is a pure skyrmion $`\mathrm{\Theta }_s(\rho ;\delta )`$ of a given size $`\delta `$. Assuming this, one gets $`A(\rho )=-(1+\mathrm{cos}\mathrm{\Theta })/\rho =-2\rho /(\rho ^2+\delta ^2)`$. The third term, the magnetic field contribution (which is also positive definite), becomes $`\epsilon _{mag}=8/(3\delta ^2)`$. It is clear that when $`\delta \rightarrow \mathrm{\infty }`$ we obtain an energy arbitrarily close to the lower bound: $`\epsilon _{ms}\simeq 2+8/(3\delta ^2)\rightarrow 2`$. A single magnetic skyrmion therefore blows up. If many magnetic skyrmions are present, however, their interactions can stabilize the system. They repel each other, as we will see shortly, and therefore form a lattice. Since they are axially symmetric, the interaction is axially symmetric, and thus a triangular lattice is expected. Assume that the lattice spacing is $`a`$. At the boundaries of the hexagonal unit cells the angle $`\mathrm{\Theta }`$ is zero, while at the centers it is $`\pi `$. The magnetic field $`B`$ is continuous on the boundaries. Therefore, to analyze the magnetic skyrmion lattice we should solve eqs.(8)–(9) on the unit cell with these boundary conditions, demanding that two units of flux pass through the cell (by adjusting the value of the magnetic field on the boundary). We have approximated the hexagonal unit cell by a circle of radius $`R=3^{1/4}a/\sqrt{2\pi }`$ with the same area, and performed a numerical integration of the equations $`A^{\prime \prime }+{\displaystyle \frac{A^{\prime }}{\rho }}-{\displaystyle \frac{A}{\rho ^2}}-A-{\displaystyle \frac{1+\mathrm{cos}\mathrm{\Theta }}{\rho }}`$ $`=`$ $`0,`$ (13) $`\mathrm{\Theta }^{\prime \prime }+{\displaystyle \frac{\mathrm{\Theta }^{\prime }}{\rho }}+{\displaystyle \frac{\mathrm{sin}\mathrm{\Theta }}{\rho }}\left({\displaystyle \frac{2+\mathrm{cos}\mathrm{\Theta }}{\rho }}+2A\right)`$ $`=`$ $`0,`$ (14) which follow from the cylindrically symmetric Ansatz of eq.(11). Calculations for $`R`$ from $`R=5`$ to $`R=600`$ were done by means of a finite element method.
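The finite-element solver used in the paper is not described in detail. As a rough substitute, the boundary value problem (13)–(14) can be set up with an off-the-shelf collocation solver; the sketch below uses the sign conventions reconstructed above, for which the $`N=-2`$, $`Q=-1`$ solution has $`A\rightarrow -2/\rho `$ far from the center, so the boundary condition $`A(R)=-2/R`$ fixes two flux quanta through the cell. The pure-skyrmion profile serves as the initial guess; whether this collocation approach converges as robustly as a dedicated finite-element code is not guaranteed.

```python
import numpy as np
from scipy.integrate import solve_bvp

R = 20.0                                   # cell radius in units of lambda

def rhs(rho, y):
    """y = [Theta, Theta', A, A'];  Eqs. (13)-(14)."""
    Th, dTh, A, dA = y
    d2Th = -dTh / rho - (np.sin(Th) / rho) * ((2 + np.cos(Th)) / rho + 2 * A)
    d2A = -dA / rho + A / rho**2 + A + (1 + np.cos(Th)) / rho
    return np.vstack([dTh, d2Th, dA, d2A])

def bc(ya, yb):
    # Theta(0) = pi, A(0) = 0;  Theta(R) = 0, A(R) = -2/R (two flux quanta)
    return np.array([ya[0] - np.pi, ya[2], yb[0], yb[2] + 2.0 / R])

rho = np.linspace(0.01, R, 400)
Th0 = 2 * np.arctan(5.0 / rho)             # pure-skyrmion initial guess, delta = 5
A0 = -2 * rho / (rho**2 + 25.0)            # A = -(1 + cos Theta)/rho for that guess
y0 = np.vstack([Th0, np.gradient(Th0, rho), A0, np.gradient(A0, rho)])
sol = solve_bvp(rhs, bc, rho, y0, max_nodes=50000, tol=1e-6)
print(sol.status, sol.message)
```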
The energy per unit cell in a wide range of $`R`$ is satisfactorily described (the deviation at $`R=10`$ is $`1\%`$) by the function $`\epsilon _{cell}\simeq 2+5.62/R`$. Note that in the limit $`R\rightarrow \mathrm{\infty }`$ we recover our previous variational estimate: $`\epsilon _{cell}\rightarrow \epsilon _{ms}=2`$. The dominant contribution to the magnetic skyrmion energy at large $`R`$ comes from the first term, $`\epsilon _s`$, similar to the analytical variational state described above. The contribution to $`\epsilon _{cell}`$ from the magnetic field, $`\epsilon _{mag}`$, is small for large $`R`$ but becomes significant in denser lattices. The most interesting feature of the solution is that the supercurrent contribution $`\epsilon _{cur}`$ to the energy of the magnetic skyrmion is negligibly small for all considered values of $`R`$. This is to be compared with the usual Abrikosov vortex, where at high $`\kappa `$ the total energy is dominated by the magnetic and supercurrent contributions, which are of the same order of magnitude. Most of the flux goes through the region where the vector $`\stackrel{}{l}`$ is oriented upwards. In other words, the magnetic field is concentrated close to the center of the magnetic skyrmion. The line energy of Abrikosov vortices $`\epsilon _v`$ for the present model was calculated numerically (beyond the London approximation) in . For $`\kappa =20`$ and $`50`$ we obtain $`2\epsilon _v/\epsilon _{ms}\approx 3.5`$ and $`4.4`$, respectively. Therefore we expect that the lower critical field of UPt<sub>3</sub> is determined by magnetic skyrmions: $`h_{c1}=\epsilon _{ms}/2N`$. Returning to physical units, $$H_{c1}=\mathrm{\Phi }_0/4\pi \lambda ^2.$$ (15) To find the magnetization, we now utilize eq.(1). The interaction between magnetic skyrmions follows easily from the energy of a unit cell of the hexagonal lattice: $`V(r)=2(\epsilon _{cell}-2)/6\approx 1.87/r`$. The resulting averaged magnetic induction, in units of $`\mathrm{\Phi }_0/2\pi \lambda ^2`$, reads $$B\approx 0.25\left(H/H_{c1}-1\right)^2.$$ (16) This agrees very well with the experimental results; see Fig.1. For fields higher than several $`H_{c1}`$ the London approximation is no longer valid, since the magnetic skyrmions start to overlap. In this case one expects ordinary Abrikosov vortices, which carry one unit of magnetic flux, to become energetically favorable. The usual vortex picture has indeed been observed at high fields by Yaron et al. Curiously, our result is similar to the conclusions of Burlachkov et al., who investigated stripe-like (quasi one dimensional) spin textures in triplet superconductors. Having established the magnetic skyrmion solution of $`F_{sym}`$, we next estimated how it is influenced by the various terms of $`\mathrm{\Delta }F`$, eqs.(5)–(6). We found that these perturbations do not destabilize the magnetic skyrmion. In conclusion, we have performed a topological classification of the solutions of the SO(3)<sub>spin</sub> symmetric GL free energy. This model, with the addition of very small symmetry breaking terms, describes the heavy fermion superconductor UPt<sub>3</sub> and possibly other p-wave superconductors. A new class of topological solutions in weak magnetic field was identified. These solutions, magnetic skyrmions, do not have a normal core. At small magnetic fields the magnetic skyrmions are lighter than Abrikosov vortices and therefore dominate the physics. Magnetic skyrmions repel each other as $`1/r`$ at distances much larger than the magnetic penetration depth, forming a relatively robust triangular lattice.
$`H_{c1}`$ is reduced by a factor $`\mathrm{log}\kappa `$ compared to that determined by the usual Abrikosov vortex (see eq.(15)). The following characteristic features, in addition to the slope of the magnetization curve, may allow experimental identification of a magnetic skyrmion lattice. 1. The unit of flux quantization is $`2\mathrm{\Phi }_0`$. 2. The superfluid density $`|\stackrel{}{\psi }|^2`$ is almost constant throughout the mixed state. This can be tested using STM techniques. 3. Since there is no normal core, in which dissipation and pinning usually take place, one expects pinning effects to be reduced. It is interesting to note that our results are actually applicable to another model of UPt<sub>3</sub>, with accidentally degenerate AE representations. Although this model adopts the strong spin–orbit coupling scheme, it has a structure closely related to $`F_{sym}`$ of eqs.(3)–(4) at low temperatures, where both order parameters become of equal importance and can be viewed as a single three dimensional order parameter. The authors are grateful to B. Maple for discussion of the results of Ref. 5, to L. Bulaevskii, T.K. Lee and J. Sauls for discussions, and to A. Balatsky for hospitality in Los Alamos. The work is supported by NSC, Republic of China, through contract #NSC86-2112-M009-034T.
# Pressure Ionization Instability: Connection between Seyferts and GBHCs ## 1. Introduction The X-ray spectra of Seyfert 1 galaxies (S1G) and galactic black hole candidates (GBHCs) indicate that the reflection and reprocessing of incident X-rays into lower frequency radiation is a ubiquitous and important process (Pounds et al. 1990; Nandra & Pounds 1994; Zdziarski et al. 1996). It is generally believed that the universality of the X-ray spectral index in S1G ($`\mathrm{\Gamma }\approx 1.9`$) may be attributed to the fact that the reprocessing of X-rays within the disk-corona of the two-phase model leads to an electron cooling rate that is roughly proportional to the heating rate inside the active regions (AR) where the X-ray continuum originates (Haardt & Maraschi 1991, 1993; Haardt, Maraschi & Ghisellini 1994; Svensson 1996). Although the X-ray spectra of GBHCs are similar to those of Seyfert galaxies, they are considerably harder (most have an intrinsic power-law index of $`\mathrm{\Gamma }\approx 1.5`$–$`1.7`$), and the reprocessing features are less prominent (Zdziarski et al. 1996). Dove et al. (1997) recently showed that a Rossi X-ray Timing Explorer observation of Cygnus X-1 contains no significant evidence of reflection features. The relatively hard power law and the weak reprocessing/reflection features led Dove et al. (1997, 1998), Gierlinski et al. (1997) and Poutanen, Krolik & Ryde (1997) to conclude that the PCD model does not apply to Cygnus X-1. This conclusion is sensitive to the assumption that the accretion disk is relatively cold, such that $`\sim 90`$% of the reprocessed coronal radiation is re-emitted by the disk as thermal radiation (with a temperature $`\sim 150`$ eV). To test the validity of this assumption, we extend the earlier work of Nayakshin & Melia (1997), who investigated the X-ray reflection process in AGNs assuming that the ARs are magnetic flares above the disk, to the case of GBHCs. ## 2. Thermal Instability of the Transition Region The relevant geometry is shown in Figure (1). Since the flux of the ionizing radiation from the active region declines rapidly with distance away from the flare, only the gas near the active regions (within a radial distance of $`\sim `$ a few times the size of the active region) may be highly ionized. To distinguish these important X-ray illuminated regions from the “average” X-ray skin of the accretion disk (i.e., far enough from active magnetic flares), we will refer to them as transition layers (or regions). We will only consider the structure of the cold disk in the transition layer, and only solve the radiation transfer problem for these regions as well, because this is where most of the reprocessing of the coronal radiation takes place. The compactness parameter of the active region, $`l`$, is defined as $$l\equiv \frac{F_\mathrm{x}\sigma _T\mathrm{\Delta }R}{m_ec^3},$$ (1) and is expected to be larger than or of order unity (e.g., Poutanen & Svensson 1996; Poutanen, Svensson & Stern 1997; Chapter 2 in Nayakshin 1998b). Here, $`F_\mathrm{x}`$ is the X-ray illuminating flux from the flare, $`\sigma _T`$ is the Thomson cross section, and $`\mathrm{\Delta }R`$ is the size of the active region, which is thought to be of the order of the accretion disk height scale $`H`$ (e.g., Galeev et al. 1979). Inverting this definition, one gets an estimate of $`F_\mathrm{x}`$ for a given compactness parameter.
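To make the scales concrete, here is a short sketch (ours, not from the paper) of this inversion; the flare size $`\mathrm{\Delta }R\sim 10^6`$ cm used below is an illustrative assumption for a GBHC disk height scale, not a fitted value.

```python
# Sketch: invert the compactness definition of eq. (1),
#   l = F_x * sigma_T * dR / (m_e c^3),
# to estimate the illuminating flux F_x for a given l.
SIGMA_T = 6.652e-25   # Thomson cross section, cm^2
M_E = 9.109e-28       # electron mass, g
C = 2.998e10          # speed of light, cm/s

def flux_from_compactness(l, delta_r_cm):
    """F_x in erg/s/cm^2 implied by compactness l and flare size dR."""
    return l * M_E * C**3 / (SIGMA_T * delta_r_cm)

# Illustrative numbers only: l ~ 1 and dR ~ disk height scale ~ 1e6 cm.
print(f"F_x ~ {flux_from_compactness(1.0, 1e6):.2e} erg/s/cm^2")
```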
Due to space limitations, we just mention that one can show that (1) during the flare, the ionizing X-ray flux $`F_\mathrm{x}`$ substantially exceeds the disk thermal flux $`F_{\mathrm{disk}}`$ in the parameter space appropriate for both S1Gs and GBHCs; and (2) the X-radiation ram pressure, $`F_\mathrm{x}/c`$, is much larger than the gas pressure in the disk atmosphere (Nayakshin 1998b and Nayakshin & Dove 1998, hereafter papers I & II). Under these conditions, the ionizing radiation ram pressure compresses the disk atmosphere (in the transition layer only, of course) so that the gas pressure there matches the ram pressure, i.e., $`P\stackrel{<}{}F_\mathrm{x}/c`$. The thermal instability was discovered by Field (1965) for a general physical system. He introduced the “cooling function” $`\mathrm{\Lambda }_{\mathrm{net}}`$, defined as the difference between the cooling and heating rates per unit volume, divided by the square of the gas density $`n`$. Energy equilibria correspond to $`\mathrm{\Lambda }_{\mathrm{net}}=0`$. He argued that a physical system is usually in pressure equilibrium with its surroundings; thus, any perturbation of the temperature $`T`$ and the density $`n`$ of the system should occur at constant pressure. The system is unstable when $$\left(\frac{\partial \mathrm{\Lambda }_{\mathrm{net}}}{\partial T}\right)_P<0,$$ (2) since then an increase in the temperature leads to the heating increasing faster than the cooling, and so the temperature continues to increase. Similarly, a perturbation to lower $`T`$ will cause the cooling to exceed the heating, and $`T`$ will continue to decrease. In ionization balance studies, it turns out to be convenient to define two parameters. The first is the “density ionization parameter” $`\xi `$, equal to (Krolik, McKee & Tarter 1981) $`\xi =4\pi F_\mathrm{x}/n`$. The second is the “pressure ionization parameter”, defined as $`\mathrm{\Xi }=F_\mathrm{x}/(2cnkT)\simeq P_{\mathrm{rad}}/P`$, where $`P`$ is the gas pressure. This definition of $`\mathrm{\Xi }`$ is the one used in the ionization code XSTAR (see below), and differs by a factor of $`2.3`$ from the original definition of Krolik et al. (1981), who used the hydrogen density instead of the electron density. In papers I & II, we showed that Field's instability criterion is equivalent to the following condition (see also Krolik et al. 1981): $$\left(\frac{d\mathrm{\Xi }}{dT}\right)_{\mathrm{\Lambda }_{\mathrm{net}}=0}<0.$$ (3) We now apply the X-ray ionization code XSTAR, written by T. Kallman and J. Krolik, to the problem of the transition layer. A truly self-consistent treatment would involve solving the radiation transfer in the optically thick transition layer and, in addition, finding the distribution of the gas density in the transition layer that satisfies pressure balance. Since the radiation force acting on the gas depends on the opacity of the gas, this is a difficult non-linear problem. We defer such a detailed study to future work, and simply solve (using XSTAR) the local energy and ionization balance for an optically thin layer of gas in the transition region. We assume that the ionizing spectrum consists of the incident X-ray power law with the photon index typical of GBHCs in the hard state, i.e., $`\mathrm{\Gamma }=1.5`$–$`1.75`$, exponentially cut off at 100 keV, plus a blackbody spectrum at temperature $`T_{\mathrm{min}}`$ with a flux equal to the X-ray flux.
We include the latter component to mimic the spectrum reflected from the cold disk below the transition layer (usually as much as $`80`$–$`90`$% of the reflected flux comes out as cold blackbody emission, see §3). When applying the code, one should be aware that it is not possible for the transition region to have a temperature lower than the effective temperature of the X-radiation, i.e., $`T_{\mathrm{min}}=(F_\mathrm{x}/\sigma )^{1/4}`$. The reason why simulations may give temperatures lower than $`T_{\mathrm{min}}`$ is that in this parameter range XSTAR neglects certain de-excitation processes, which leads to an overestimate of the cooling rate for $`T\stackrel{<}{}T_{\mathrm{min}}`$ (Życki et al. 1994; see their section 2.3). In the spirit of a one-zone approximation for the transition layer, we use the average X-ray flux $`\overline{F}_\mathrm{x}`$ seen by the transition region, which we parameterize as $`\overline{F}_\mathrm{x}=0.1F_\mathrm{x}/q_1`$, where $`q_1=q/10`$, and $`q`$ is a dimensionless number of order 10 (see figure 1; $`F_\mathrm{x}`$ is the X-ray flux at the active region). Nayakshin (1998b) shows that $$T_{\mathrm{min}}\simeq 5.0\times 10^6\,l^{1/4}q_1^{-1/4}\left(\frac{\dot{m}}{0.05}\right)^{-1/20}\alpha ^{1/40}M_1^{-9/40}\left[1-f\right]^{-1/40},$$ (4) where $`l\stackrel{>}{}0.01`$ is the compactness parameter, $`\dot{m}`$ is the dimensionless accretion rate, $`M_1\equiv M/(10M_{\odot })`$, $`f`$ is the fraction of the power supplied from the disk to the corona (see Nayakshin 1998a), and $`\alpha `$ is the viscosity parameter. Figure 2 shows the results of our calculations for several different X-ray ionizing spectra. A stable solution for the transition layer structure must lie on a positive slope of the curve and also satisfy the pressure equilibrium condition. As discussed earlier, $`P\simeq F_\mathrm{x}/c`$ (i.e., $`\mathrm{\Xi }\simeq 1`$). In addition, if the gas is completely ionized, the absorption opacity is negligible compared with the Thomson opacity. Because all the incident X-ray flux is eventually reflected, the net flux is zero, and so the net radiation force is zero. In that case $`P`$ adjusts to the value appropriate for the accretion disk atmosphere in the absence of the ionizing flux (see also Sincell & Krolik 1997), i.e., $`\mathrm{\Xi }\gg 1`$. Thus, the upper branch of the ionization equilibrium curve, where the transition layer is at the Compton equilibrium temperature, is stable, because there the ionization curve has a positive slope and large values of $`\mathrm{\Xi }`$ are physically allowed. In addition to the Compton equilibrium state, there is a smaller stable region for temperatures in the range between $`100`$ and $`200`$ eV. The presence of this region is explained by a decrease in the heating, rather than an increase in the cooling (cf. equation 2, and recall that $`\mathrm{\Lambda }_{\mathrm{net}}=`$ cooling $`-`$ heating). The X-ray heating decreases with increasing $`T`$ in the temperature range $`100`$–$`200`$ eV because of the consequent destruction (ionization) of the ions with ionization energy close to $`kT`$. It is highly unlikely, however, that the transition region will stabilize at a temperature of $`100`$–$`200`$ eV, because the effective temperature $`T_{\mathrm{min}}`$ is at or above this temperature range. Further, Nayakshin (1998b) considered the effects of the radiation pressure in the transition layer more accurately by computing the gas cross sections to the incident and reprocessed fluxes. He shows that the stable state with $`kT\approx 100`$–$`200`$ eV is forbidden on the grounds of the pressure equilibrium.
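A tiny sketch (ours) of the scaling of eq. (4) makes this argument concrete; the fiducial values of $`\dot{m}`$, $`\alpha `$, $`M_1`$, $`f`$ and $`q_1`$ below are illustrative assumptions, chosen only to show that for $`l\stackrel{>}{}0.01`$ the effective temperature indeed sits at or above the 100–200 eV island.

```python
def t_min_gbhc(l, q1=1.0, mdot=0.05, alpha=0.1, m1=1.0, f=0.5):
    """Eq. (4): minimum transition-layer temperature (K) for a GBHC.
    All default parameter values are illustrative assumptions."""
    return (5.0e6 * l**0.25 * q1**-0.25 * (mdot / 0.05)**-0.05
            * alpha**0.025 * m1**-0.225 * (1.0 - f)**-0.025)

for l in (0.01, 0.1, 1.0):
    T = t_min_gbhc(l)
    print(f"l = {l:4.2f} -> T_min ~ {T:9.2e} K  (kT ~ {T / 1.16e4:4.0f} eV)")
```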
To round off this discussion: we believe that the only stable configuration available for the transition layer of GBHCs in the hard state is the one at the local Compton temperature. Future work should concentrate on finding not only the exact value of the Thomson optical depth of the transition layer, $`\tau _{\mathrm{trans}}`$, but also the exact distribution of the gas temperature, density and ionization state in the atmosphere of the accretion disk. For now, however, we will treat $`\tau _{\mathrm{trans}}`$ as a free parameter and numerically investigate the ramifications of the transition layer for the spectrum of the escaping radiation and the physical properties of the corona. ## 3. “Three-Phase” Model for GBHCs To explore how the structure of the ionized transition region affects the X-ray spectrum from magnetic flares, we computed the X-ray spectrum from a magnetic flare above the transition layer for a range of $`\tau _{\mathrm{trans}}`$. The gas in the active region is heated uniformly throughout the region and is cooled by Compton interactions with the radiation re-entering the active region from below. Even though the geometry of the AR is probably closer to a sphere or a hemisphere than to a slab, we shall adopt the latter for numerical convenience, neglecting the boundary effects. Experience has shown that the spectra produced by Comptonization in different geometries are usually qualitatively similar (i.e., a power law plus an exponential roll-over), and it is actually the fraction of soft photons entering the corona that accounts for most of the differences among the various models, because it is this fraction that controls the AR energy balance. To take the geometry into account crudely, we permit only a part of the reprocessed radiation to re-enter the corona, and fix this fraction at $`0.5`$ (cf. Poutanen & Svensson 1996). The Thomson optical depth of the corona is fixed at $`\tau _\mathrm{c}=0.7`$. We employ the Eddington (two-stream) approximation for the radiative transfer in both the AR and the transition layer. The disk below the flare is broken into two regions: (i) the completely ionized transition region, situated on top of (ii) the cold accretion disk, which emits blackbody radiation at a specified temperature. We model the transition layer as one-dimensional. The X-radiation enters the transition region through its top. In this region, the only process taken into account is Compton scattering. After being down-scattered, the X-radiation is “incident” on the cold accretion disk from the bottom of the transition layer. The incident spectrum is reflected in the standard manner (Magdziarz & Zdziarski 1995). The total radiation spectrum re-entering the transition layer from below is the sum of the reflection component and the blackbody component due to the disk thermal emission, which is normalized such that the incident flux from the transition region is equal to the sum of the fluxes in the reflection and blackbody components. The optically thick cold disk is held at a temperature $`T_{\mathrm{bb}}=2.4\times 10^6`$ K. The observed spectrum consists of the direct component, emerging through the top of the AR, and the fraction of the reflected radiation that emerges from the transition layer and does not pass through the corona on its way to us (see Fig. 1). This fraction is chosen to be 0.5 as well. Physically, it accounts for the fact that, as viewed by an observer, a part of the transition region itself is blocked by the active region.
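The bookkeeping of these fractions can be summarized in a few lines of code. The sketch below (ours) tracks luminosity fractions only, not spectra, and additionally assumes for simplicity that half of the AR luminosity is emitted toward the disk; the remainder of the reflected flux that passes through the corona is not tracked here.

```python
# Schematic energy accounting for the flare / transition-layer geometry.
# Each pass: half of the Comptonized power escapes through the AR top,
# half shines on the disk and is fully returned by the layer; of the
# returned flux, f_reenter re-enters the AR and, of the rest, f_seen
# reaches the observer without crossing the corona.

def observed_fractions(L_AR=1.0, f_reenter=0.5, f_seen=0.5, n_iter=50):
    direct, seen_reflected = 0.0, 0.0
    source = L_AR
    for _ in range(n_iter):            # follow successive passes
        direct += 0.5 * source         # escapes through the AR top
        returned = 0.5 * source        # hits and returns from the layer
        seen_reflected += f_seen * (1 - f_reenter) * returned
        source = f_reenter * returned  # re-enters the AR, is re-emitted
    return direct, seen_reflected

d, r = observed_fractions()
print(f"direct: {d:.3f} L_AR, observed reprocessed: {r:.3f} L_AR")
```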
The overall setup of the active region–disk connection is very similar to the one used by Poutanen & Svensson (1996), except for the addition of the transition layer on top of the cold disk. Figure (3a) shows the “observed” spectrum for several values of $`\tau _{\mathrm{trans}}`$: $`0`$, $`0.6`$, $`2.5`$, and $`10`$. It can be seen that the spectrum hardens as $`\tau _{\mathrm{trans}}`$ increases and the fraction of energy in the reprocessed (soft) component below $`\sim 2`$ keV decreases, which can be understood by noting that a larger fraction of the photons from the AR is reflected before they have a chance to penetrate into the cold disk, where the blackbody component is created. In Figure (3b) we show the components that contribute to the overall spectrum for $`\tau _{\mathrm{trans}}=3`$. The solid, dashed and dotted curves show the total spectrum, the intrinsic AR spectrum and the reprocessed spectrum (emerging from the top of the transition layer), respectively. Notice that the reprocessed component has about equal amounts of power below and above 2 keV, whereas the usual division of power in the reflected spectrum (from a neutral reflector) is $`80`$–$`90`$% in the soft and $`20`$–$`10`$% in the hard components, respectively (e.g., Magdziarz & Zdziarski 1995). This is the most profound difference between our calculations and those of previous workers, who assumed that the disk boundary is infinitely sharp, so that there is no transition layer between the AR and the cold disk. Gierlinski et al. (1997) attempted to fit the broad-band spectrum of Cyg X-1 with active regions above a cold accretion disk, and showed that the most difficult issue for the two-phase model is that the observed amount of reprocessed soft X-radiation is too small. For example, Zheng et al. (1997) show that the Cyg X-1 luminosity in the hard state below 1.3 keV is about $`5\times 10^{36}`$ erg/s, whereas the luminosity above 1.3 keV is $`3`$–$`4\times 10^{37}`$ erg/s. This is impossible in the context of the simple PCD model: since about half of the X-radiation impinges on the cold disk and gets reprocessed into blackbody radiation, the luminosity in soft X-rays below 1.3 keV should be about that of the hard component. However, we find that, with the inclusion of the transition layer, the combined power below $`\sim 2`$ keV accounts for only $`\sim 25\%`$ of the total for $`\tau _{\mathrm{trans}}=3`$. Further, notice that the spectra are correspondingly harder in X-rays, which explains why GBHC spectra are harder than those of typical S1G. In paper I we address the other reprocessing features (e.g., the iron line and the anisotropy break) and show that the predictions of the theory are consistent with the Cyg X-1 spectrum. In paper II, we apply the non-linear Monte-Carlo routine (for details of the routine and the geometry, see Dove, Wilms, & Begelman 1997), and demonstrate that the results obtained with the simpler Eddington-approximation code hold true. ## 4. The Pressure Ionization Instability for AGN We now discuss the thermal instability of the surface layer for AGN. The most important distinction from the GBHC case is the much higher mass of an AGN, and thus the ionizing X-ray flux is smaller by $`\sim 7`$ orders of magnitude (since $`F_\mathrm{x}\propto L/R^2\propto \dot{m}L_{\mathrm{Edd}}/R_s^2\propto \dot{m}M^{-1}`$). The minimum X-ray skin temperature is again approximated by setting the blackbody flux equal to the incident flux.
The gas-pressure-dominated solution gives (paper I) $$T_{\mathrm{min}}\simeq 1.5\times 10^5\,l^{1/4}\alpha ^{1/40}M_8^{-9/40}\left[\frac{\dot{m}}{0.005}\right]^{-1/20}\left(1-f\right)^{-1/40}\left(\frac{q}{10}\right)^{-1/4},$$ (5) whereas the radiation-dominated one yields $$T_{\mathrm{min}}\simeq 1.24\times 10^5\,l^{1/4}M_8^{-1/4}\left[\frac{\dot{m}}{0.005}\right]^{-1/4}\left(1-f\right)^{-1/4}\left(\frac{q}{10}\right)^{-1/4}.$$ (6) These estimates show our main point right away: the lower X-ray flux density in AGN may allow the transition layer to saturate in either the cold equilibrium state or the “island” state with $`kT\approx 100`$–$`200`$ eV, whereas that was not possible for GBHCs. To investigate this idea, we ran XSTAR as described in §2, but for parameters appropriate for an AGN transition layer. The X-rays illuminating the transition region are assumed to mimic typical Seyfert hard spectra, i.e., a power law with photon index $`\mathrm{\Gamma }=1.9`$ and an exponential roll-over in the 100–200 keV range. We also add the reflected blackbody component, as described in §3. We show the results of two such simulations in Figure (4). The solid curve corresponds to $`T_{\mathrm{min}}=6`$ eV and a roll-over energy of $`100`$ keV, while for the dotted curve $`T_{\mathrm{min}}=12`$ eV and the roll-over energy is $`200`$ keV. As explained earlier, XSTAR produces inaccurate results for $`T\stackrel{<}{}T_{\mathrm{min}}`$, so these regions of the ionization equilibrium curves should be disregarded. Notice that the “cold” equilibrium branch, i.e., the region with $`T\sim 10^5`$ K, is broader in terms of $`\mathrm{\Xi }`$ than the island state. Further, preliminary, more detailed pressure equilibrium considerations (paper I) show that the island state is unlikely to satisfy the pressure equilibrium, so that the two truly stable solutions for the transition layer in AGN are the cold stable state with $`kT\approx 10`$–$`30`$ eV and the hot Compton equilibrium state, which we already discussed for GBHCs in §3. In addition, the Rosseland mean optical depth to the UV emission is of order one to a few for the temperature range $`kT\approx 10`$–$`30`$ eV (paper I). As one can check using Field's (1965) stability criterion, a transition layer radiating via blackbody or modified blackbody emission is thermally stable. This consideration adds weight to our optically thin calculations, in that the cold state of the transition layer in AGN disks should be unquestionably stable. It is interesting to note that the reflection component and the fluorescent iron line that are always present in the spectra of radio-quiet Seyfert galaxies (e.g., Gondek et al. 1996, Zdziarski et al. 1996, George & Fabian 1991) are best fitted with a neutral or weakly ionized reflector. From the work of Matt, Fabian & Ross (1993, 1996) and Życki et al. (1994), it is known that an ionization parameter $`\xi \stackrel{<}{}100`$ is required to fit the S1G data. The cold stable solution found here corresponds to $`\xi `$ ranging from a few tens to $`\sim 200`$ (paper I), and is thus consistent with the observations of X-ray reflection and iron lines in Seyferts. ## 5. The Origin of the Big Blue Bump (BBB) in Seyferts In recent years, there has been considerable progress in observations of the BBB (e.g., Walter & Fink 1993; Walter et al. 1994; Zhou et al. 1997). It was found that the observed spectral shape of the bump component in Seyfert 1's hardly varies, even though the luminosity $`L`$ (of the bump) ranges over 6 orders of magnitude from source to source.
This fact is difficult to understand from the point of view of any disk emission mechanism (see, e.g., Nayakshin 1998b, Chapter 5, and references therein). We believe that our theory of the pressure ionization instability may offer a plausible explanation for the BBB emission. As our ionization equilibrium calculations show, there is no stable solution for the transition region in the temperature range $`3\times 10^5\stackrel{<}{}T\stackrel{<}{}10^7`$ K. Furthermore, temperatures below the effective temperature of the X-ray radiation are also forbidden. If the PCD model is correct at all, $`l\stackrel{>}{}0.01`$. Preliminary estimates (Nayakshin 1998b) show that $`l\stackrel{>}{}0.1`$ is required to explain the soft X-ray part of the Cyg X-1 spectrum. Since we believe it is the same physics of the PCD model that explains both GBHCs and Seyferts, $`l\stackrel{>}{}0.1`$ yields $`T_{\mathrm{min}}\stackrel{>}{}10^5`$ K (see equations 5 & 6) for AGN. Thus, the only low-temperature solution permitted by the stability analysis for AGN with $`M\sim 10^8M_{\odot }`$ is the one with temperature $`\approx 1`$–$`3\times 10^5`$ K. From our calculations, we also found that the Rosseland mean optical depth to the UV emission is of order one to a few in the cold stable state. The radiation spectrum produced by the transition layer will therefore be either a blackbody spectrum or a modified blackbody (with recombination lines as well, of course). Since a moderately optically thick emission spectrum saturates at a photon energy of $`2`$–$`4\times kT`$, $`T\approx 2\times 10^5`$ K provides an excellent match to the observed roll-over energies of $`40`$–$`80`$ eV (e.g., Walter et al. 1994). The most attractive feature of this suggestion is that the temperature of the BBB is fixed by atomic physics, in particular by the fact that many atomic species have an ionization potential close to 1 Rydberg ($`\approx 1.5\times 10^5`$ K), which may explain why the BBB shape changes so little from source to source. The stable temperature range is independent of the number of magnetic flares, and so it is independent of the X-ray luminosity of the source, as found by Walter & Fink (1993) and Walter et al. (1994). Further, supplemented by our theory of the division of power between the corona and the disk, the pressure ionization instability can explain the disappearance of the bump for AGN more luminous than typical S1G, e.g., the quasars from the Zheng et al. (1997) and Laor et al. (1997) samples (see Nayakshin 1998a and paper I). ## 6. Discussion By considering the irradiated X-ray skin close to an active magnetic flare above a cold accretion disk, we have shown that the skin equilibrium is in general unstable. Two stable states (one cold and one hot) exist. For the case of GBHCs, we showed that the low-temperature equilibrium state is forbidden due to the high value of the ionizing flux. Thus, the X-ray irradiated skin of GBHCs must be in the hot equilibrium configuration, where the gas is at the local Compton temperature ($`kT\sim `$ a few keV). In an attempt to determine the effects of this skin on the spectrum from a magnetic flare, we modeled the ionization structure of the disk by assuming a completely ionized layer with a Thomson optical depth of $`\sim `$ a few to be situated on top of the cold disk (cf. Fig. 1). We found that the transition layer alters the reflected spectrum significantly, and that it leads to GBHC spectra being harder than Seyfert 1 spectra for the same parameters of the magnetic flares. We also found that the highly ionized transition layer can account for the disappearance/weakening of the reprocessing features, such as the iron line.
We thus conclude that the spectrum of GBHCs in their hard state is consistent with the PCD model when one takes into account the pressure ionization instability discussed here. Applying our results to the AGN case, we found that, due to the substantially lower ionizing flux as compared with the GBHC case, there exists a stable solution for the transition layer in the temperature range $`T\approx 1`$–$`3\times 10^5`$ K. Thus, the reprocessed features are expected to be characteristic of a cold, almost neutral reflector, consistent with observations of AGN (e.g., Zdziarski et al. 1996). The narrow range in the temperature of the transition region in S1G may explain the observed roll-over energies in the BBB spectrum. Commenting on the distinction of our work on the ionization structure of the disk from the extensive previous studies of this issue (e.g., Życki et al. 1994, Ross, Fabian & Brandt 1996, and references therein), we note that the difference is caused by two factors: (1) previous workers assumed that the corona is uniform and covers the whole disk, whereas here we treat the case of strongly localized emission from magnetic flares; and (2), more importantly, previous studies fixed the X-ray skin gas density at the disk mid-plane value, which, we note, has little to do with the disk atmosphere density. The usual statement that the density of radiation-dominated disks is approximately constant is only correct as long as one stays deep inside the disk, far from the surface (see, e.g., §2a and Fig. 11 of Shakura & Sunyaev 1973). Further, the pressure ionization instability is not apparent in studies where the gas density is fixed at a constant value, regardless of that value. As shown by Field (1965), the thermal instability for the case with $`n=`$ const is always weaker than for a system in pressure equilibrium (and it actually disappears in the present situation), which is apparently the reason why this instability was not recognized before. Thus, as far as we can see, observations of the hard state of GBHCs do not rule out magnetic flares as the source of the X-rays, and instead support this theory. We preliminarily estimate that the observed X-ray spectrum of Cyg X-1 can be explained with a transition-layer optical depth of $`\sim 3`$, which is physically plausible, and that, apart from the self-consistent difference in the structure of the transition layer, the same parameters for the magnetic flares might be used in both AGN and GBHCs to explain their spectra. ### Acknowledgments. The author is very thankful to the workshop organizers for the travel support, and to F. Melia for support and useful discussions in an early stage of this work. ## References Dove, J. B., Wilms, J., & Begelman, M. C. 1997, ApJ, 487, 747 Dove, J. B., Wilms, J., Nowak, M. A., Vaughan, M. A., & Begelman, M. C. 1998, MNRAS, in press Field, G. B. 1965, ApJ, 142, 531 Galeev, A. A., Rosner, R., & Vaiana, G. S. 1979, ApJ, 229, 318 George, I. M., & Fabian, A. C. 1991, MNRAS, 249, 352 Gierlinski, M., et al. 1997, MNRAS, 288, 958 Gondek, D., et al. 1996, MNRAS, 282, 646 Haardt, F., & Maraschi, L. 1991, ApJ, 380, L51 Haardt, F., & Maraschi, L. 1993, ApJ, 413, 507 Haardt, F., Maraschi, L., & Ghisellini, G. 1994, ApJ, 432, L95 Krolik, J. H., McKee, C. F., & Tarter, C. B. 1981, ApJ, 249, 422 Laor, A., et al. 1997, ApJ, 477, 93 Magdziarz, P., & Zdziarski, A. A. 1995, MNRAS, 273, 837 Matt, G., Fabian, A. C., & Ross, R. R. 1993, MNRAS, 262, 179 Matt, G., Fabian, A. C., & Ross, R. R. 1996, MNRAS, 278, 1111 Nandra, K., & Pounds, K. A. 1994, MNRAS, 268, 405 Nayakshin, S.
& Melia, F. 1997, ApJ, 484, L103 Nayakshin, S. 1998a, these proceedings Nayakshin, S. 1998b, PhD thesis, The University of Arizona, astro-ph/9811061 Nayakshin, S., & Dove, J. B. 1998, submitted to ApJ, astro-ph/9811059 Pounds, K. A., et al. 1990, Nature, 344, 132 Poutanen, J., & Svensson, R. 1996, ApJ, 470, 249 Poutanen, J., Krolik, J. H., & Ryde, F. 1997, in Proceedings of the 4th Compton Symposium, astro-ph/9707244 Ross, R. R., Fabian, A. C., & Brandt, W. N. 1996, MNRAS, 278, 1082 Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337 Sincell, M. W., & Krolik, J. H. 1997, ApJ, 476, 605 Svensson, R. 1996, A&AS, 120, 475 Walter, R., & Fink, H. H. 1993, A&A, 274, 105 Walter, R., et al. 1994, A&A, 285, 119 Zdziarski, A. A., Gierlinski, M., Gondek, D., & Magdziarz, P. 1996, A&AS, 120, 553 Zheng, W., et al. 1997, ApJ, 475, 469 Zhou, Y., et al. 1997, ApJ, 475, L9 Życki, P. T., et al. 1994, ApJ, 437, 597
# Triaxial projected shell model approach ## Abstract The projected shell model analysis is carried out using the triaxial Nilsson+BCS basis. It is demonstrated that, for an accurate description of the moments of inertia in the transitional region, it is necessary to take the triaxiality into account and perform the three-dimensional angular-momentum projection from the triaxial Nilsson+BCS intrinsic wavefunction. A major advance in the study of deformed nuclei was the introduction of the Nilsson potential. It was shown that the rotational properties of deformed nuclei can be described by considering nucleons moving in a deformed potential. The description of deformed nuclei in the medium and heavy mass regions is impossible using the standard (spherical) shell model approach, despite the recent progress in computing power. The Nilsson model has provided a useful nomenclature for the observed rotational bands: it is known that each rotational band is built on an intrinsic Nilsson state. The Nilsson, or deformed, state is defined in the intrinsic frame of reference, in which the rotational symmetry is broken, and in order to calculate observable properties it is necessary to restore the broken symmetry. The rotational symmetry can be restored by using the standard angular momentum projection operator . This method has been used to project out good angular momentum states from the Nilsson+BCS intrinsic state ; see also the review article and references cited therein. In this approach, the angular momentum projection is carried out from a chosen set of Nilsson+BCS states near the Fermi surface. The projected states are then used to diagonalize a shell model Hamiltonian. This approach, referred to as the projected shell model (PSM), follows the basic philosophy of the standard shell model approach. The only difference is that, in the PSM, a deformed basis is employed rather than the spherical basis. This makes the truncation of the many-body basis very efficient, so that shell model calculations can easily be performed even for heavier systems. The PSM approach has been used, with considerable success, to describe a broad range of nuclear phenomena such as backbending , and superdeformed and identical bands . The assumption in the PSM approach has been axial symmetry of the deformed system, made to keep the computation simple. This is in fact a reasonable approximation for well-deformed nuclei. However, for transitional nuclei this assumption is questionable. The inadequacy of the axially symmetric basis has been clearly demonstrated by the moments of inertia (the backbending plots) of the transitional nuclei in the rare-earth region. It has been shown that, in the low spin region, the observed moments of inertia for the lighter rare-earth nuclei (for instance ¹⁵⁶Er, ¹⁵⁸Er, ¹⁵⁸Yb and ¹⁶²Hf) and for the heavier rare-earth nuclei (for instance ¹⁷²W, ¹⁷⁴W and ¹⁷⁶W) increase quite steeply with increasing rotational frequency as compared with the moments of inertia calculated in the axially symmetric PSM approach (see Figs. 14–17 in ). This can be understood by noting that a horizontal (vertical) line in the backbending plot represents the rotational (vibrational) limit, since the energy $`E(I)`$ as a function of spin $`I`$ is proportional to $`I^2`$ ($`I`$). In this sense, the experimental data slant towards the vibrational side in comparison with the existing PSM results.
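To make the diagnostic explicit, the following short sketch (ours, with synthetic level schemes) shows how a backbending plot is constructed from yrast energies, and why the rotational limit $`E(I)I(I+1)`$ gives a flat moment of inertia while a vibrational spectrum $`E(I)I`$ gives a steeply rising one.

```python
import numpy as np

# Sketch: kinematic moment of inertia along an yrast band,
#   2*Theta/hbar^2 = (4I - 2) / [E(I) - E(I-2)]   for even spins I.
def inertia(spins, energies):
    I = np.asarray(spins, dtype=float)
    E = np.asarray(energies, dtype=float)
    return (4.0 * I[1:] - 2.0) / (E[1:] - E[:-1])

I = np.arange(2, 13, 2)           # I = 2, 4, ..., 12
E_rot = 0.02 * I * (I + 1)        # rotational limit, E ~ I(I+1)
E_vib = 0.25 * I                  # vibrational limit, E ~ I

print("rotor (flat):   ", inertia(I, E_rot))   # constant, ~50 MeV^-1
print("vibrator (rise):", inertia(I, E_vib))   # grows linearly with I
```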
On the other hand, the spectrum of a triaxial rotor is known to vary from a rotational spectrum to a vibrational one as the triaxiality parameter $`\gamma `$ increases from 0 to $`30^o`$ and, using this model, it has been demonstrated (see Fig. 18 in ) that the backbending plot indeed slants towards the vibrational limit when $`\gamma `$ increases. It is therefore expected that, by using a triaxial basis in the PSM, the moments of inertia and other properties of the transitional nuclei can be described more appropriately. As pointed out in , the major problem here lies rather in the ground state band. This part of the spectrum is quite insensitive to the configuration mixing, since the energy and spin values are still low ($`I\stackrel{<}{}10`$), so that an improvement of the ground state by allowing some triaxiality is in order. The purpose of the present work is to develop a triaxial projected shell model (referred to as TPSM hereafter) approach for the description of transitional nuclei. This requires a three-dimensional angular momentum projection and has not been attempted so far, except for a short investigation in the early eighties . We have carried out the three-dimensional projection and would like to report our preliminary results. The shell model Hamiltonian employed is identical to the one used in the axially symmetric PSM approach . It consists of $`QQ`$ + monopole pairing + quadrupole pairing forces: $$\widehat{H}=\widehat{H}_0-\frac{\chi }{2}\sum _\mu \widehat{Q}_\mu ^{\dagger }\widehat{Q}_\mu -G_M\widehat{P}^{\dagger }\widehat{P}-G_Q\sum _\mu \widehat{P}_\mu ^{\dagger }\widehat{P}_\mu .$$ (1) Here, $`\widehat{H}_0`$ is the spherical harmonic-oscillator single-particle Hamiltonian with a proper $`ls`$-force, while the operators $`\widehat{Q}`$ and $`\widehat{P}`$ are defined as $$\widehat{Q}_\mu =\sum _{\alpha \beta }Q_{\mu \alpha \beta }c_\alpha ^{\dagger }c_\beta ,\widehat{P}^{\dagger }=\frac{1}{2}\sum _\alpha c_\alpha ^{\dagger }c_{\overline{\alpha }}^{\dagger },\widehat{P}_\mu ^{\dagger }=\frac{1}{2}\sum _{\alpha \beta }Q_{\mu \alpha \beta }c_\alpha ^{\dagger }c_{\overline{\beta }}^{\dagger },$$ (2) where the quadrupole matrix elements are given by $$Q_{\mu \alpha \alpha ^{\prime }}=\delta _{NN^{\prime }}(Njm|Q_\mu |N^{\prime }j^{\prime }m^{\prime }).$$ (3) In Eq. (2), $`\alpha =\{Njm\}`$, while $`\overline{\alpha }`$ represents the time-reversed state of $`\alpha `$. The Hartree-Fock-Bogoliubov (HFB) approximation to the shell model Hamiltonian of Eq. (1) leads to a quadrupole mean field which is similar to the Nilsson potential. Therefore, instead of performing the HFB variational analysis of the Hamiltonian in Eq. (1), the Nilsson potential can be used directly to obtain the deformed basis. In the present work, we use the triaxial Nilsson potential specified by the deformation parameters $`ϵ`$ and $`ϵ^{\prime }`$, $$\widehat{H}_N=\widehat{H}_0-\frac{2}{3}\mathrm{\hbar }\omega \left(ϵ\widehat{Q}_0+ϵ^{\prime }\frac{\widehat{Q}_{+2}+\widehat{Q}_{-2}}{\sqrt{2}}\right),$$ (4) to generate the deformed single-particle wavefunctions. It can easily be seen that the rotation operator $`e^{ı\frac{\pi }{2}\widehat{J}_z}`$ transforms the Nilsson Hamiltonian $`\widehat{H}_N`$ into the one with the opposite triaxiality $`(ϵ^{\prime }\to -ϵ^{\prime })`$, leaving the eigenvalues unchanged. Later, it will be shown that the projected energy is independent of the sign of $`ϵ^{\prime }`$, so that it is sufficient to consider only non-negative $`ϵ^{\prime }`$.
The volume conservation also restricts the range of the $`ϵ`$ and $`ϵ^{\prime }`$ values to $$-3<ϵ<\frac{3}{2},|ϵ^{\prime }|<\sqrt{3}\left(1+\frac{ϵ}{3}\right).$$ (5) The triaxial Nilsson potential has been solved for the rare-earth region with three major shells, $`N=4,5,6`$ ($`3,4,5`$) for neutrons (protons). In the next step, the monopole pairing Hamiltonian is treated in the triaxial Nilsson basis. We use the standard strengths for the pairing interaction, of the form $$G_M=\left(G_1\mp G_2\frac{N-Z}{A}\right)\frac{1}{A},$$ (6) where $``$ $`(+)`$ is for neutrons (protons), while $`G_1`$ and $`G_2`$ are chosen as 21.24 and 13.86 MeV, respectively, in the rare-earth region. The pairing correlations are treated using the usual BCS approximation, which establishes the Nilsson+BCS basis. The three-dimensional angular momentum projection is then carried out on the quasiparticle states obtained in this way. The three-dimensional angular momentum projection operator is given by $$\widehat{P}_{MK}^I=\frac{2I+1}{16\pi ^2}\int 𝑑\mathrm{\Omega }D_{MK}^I(\mathrm{\Omega })\widehat{R}(\mathrm{\Omega }),$$ (7) $`\widehat{R}(\mathrm{\Omega })=e^{ı\alpha \widehat{J}_z}e^{ı\beta \widehat{J}_y}e^{ı\gamma \widehat{J}_z}`$ being the rotation operator and $`D_{MK}^I(\mathrm{\Omega })=<\nu IM|\widehat{R}(\mathrm{\Omega })|\nu IK>^{\ast }`$ its irreducible representation, where $`\{|\nu IM>\}`$ is a complete set of states for the specified angular momentum quantum numbers $`I`$ and $`M`$. Since the projection operator of Eq. (7) has the spectral representation $$\widehat{P}_{MK}^I=\sum _\nu |\nu IM><\nu IK|,$$ (8) it is easy to see that $`|\mathrm{\Phi }^{\prime }>\equiv e^{ı\frac{\pi }{2}\widehat{J}_z}|\mathrm{\Phi }>`$, i.e. the state with the opposite triaxiality to a state $`|\mathrm{\Phi }>`$, is projected to give $$\widehat{P}_{MK}^I|\mathrm{\Phi }^{\prime }>=\widehat{P}_{MK}^Ie^{ı\frac{\pi }{2}\widehat{J}_z}|\mathrm{\Phi }>=e^{ı\frac{\pi }{2}K}\widehat{P}_{MK}^I|\mathrm{\Phi }>.$$ (9) This state differs only by a phase factor from $`\widehat{P}_{MK}^I|\mathrm{\Phi }>`$ and thus represents the same physical state. This proves that the result of the angular momentum projection is independent of the sign of $`ϵ^{\prime }`$. We have used this property to check the programming, since it is a non-trivial relation. Note that this justifies the above-mentioned restriction $`ϵ^{\prime }\ge 0`$. Details of the projection technique and algorithm are discussed in an Appendix of . In the present work, we have diagonalized the Hamiltonian of Eq. (1) within the space spanned by $`\{\widehat{P}_{MK}^I|\mathrm{\Phi }>\}`$, where $`|\mathrm{\Phi }>`$ is the (triaxial) quasiparticle vacuum state. The TPSM eigenvalue equation for a given spin $`I`$, with eigenvalue $`E^I`$, thus becomes $$\sum _{K^{\prime }}\left(H_{KK^{\prime }}^I-E^IN_{KK^{\prime }}^I\right)F_{K^{\prime }}^I=0,$$ (10) where the matrix elements are defined by $$H_{KK^{\prime }}^I=<\mathrm{\Phi }|\widehat{H}\widehat{P}_{KK^{\prime }}^I|\mathrm{\Phi }>,N_{KK^{\prime }}^I=<\mathrm{\Phi }|\widehat{P}_{KK^{\prime }}^I|\mathrm{\Phi }>.$$ (11) This TPSM equation has been solved for a range of nuclei in the rare-earth region, and the results for a selected few are presented in Figs. 1 and 2. The deformation parameters $`ϵ`$ used in Fig. 1 are exactly the same as those used in the earlier calculations with the axially symmetric basis , i.e. $`ϵ`$ = 0.20, 0.20 and 0.225 for ¹⁵⁶Er, ¹⁵⁸Yb and ¹⁷⁶W, respectively. The results with $`ϵ^{\prime }=0.0`$ in Figs. 1 and 2 represent these axially symmetric calculations.
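Because the projected basis states $`\{\widehat{P}_{MK}^I|\mathrm{\Phi }>\}`$ are not orthogonal, Eq. (10) is a generalized eigenvalue problem with the norm matrix $`N^I`$. As an illustration only (ours; the 3×3 matrices below are random placeholders for the projected matrix elements of Eq. (11)), such a problem is typically solved as follows:

```python
import numpy as np
from scipy.linalg import eigh

# Sketch: solve eq. (10), sum_K' (H^I_{KK'} - E^I N^I_{KK'}) F^I_{K'} = 0,
# as a generalized eigenvalue problem H F = E N F.
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
N = M @ M.T + 3.0 * np.eye(3)   # placeholder norm matrix (positive definite)
H = rng.normal(size=(3, 3))
H = 0.5 * (H + H.T)             # placeholder Hamiltonian kernel (symmetric)

E, F = eigh(H, N)               # energies E^I and amplitudes F^I_K
print("projected energies:", E)
print("lowest-state amplitudes F_K:", F[:, 0])
```

In realistic applications the norm matrix can be nearly singular, so one usually diagonalizes $`N^I`$ first and discards eigenvectors with very small norm eigenvalues before solving for $`E^I`$.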
The experimental moments of inertia (represented by circles) increase very steeply. The calculations with $`ϵ^{\prime }=0.0`$, on the other hand, show a very slow increase, typical of an axially deformed rotational band. The moments of inertia in Fig. 1 become steeper with increasing values of $`ϵ^{\prime }`$, and a value close to $`ϵ^{\prime }=0.15`$ reproduces the experimental data. Roughly speaking, this $`ϵ^{\prime }`$ value corresponds to $`\gamma =35^o`$. It should be noted that the experimental moments of inertia shown in Fig. 1 increase slightly at the higher end, in particular for ¹⁷⁶W, whereas the theoretical moments of inertia show a drop. This increase in the observed moment of inertia can be explained by noting that, at around spin $`I=12^+`$, a 2-quasiparticle band (i.e. the s-band) will cross the ground band and depress the energies of the higher spin states, so that the moment of inertia will effectively increase. In the present calculations, the projection has been carried out only from the ground (i.e. the 0-quasiparticle) band, and this effect is not taken into account. The projection from 2- and higher-quasiparticle states requires further work and will be reported elsewhere. Fig. 2 shows the moments of inertia for some Os isotopes. It is known that these isotopes are $`\gamma `$-soft, with very low-lying $`\gamma `$ bands. It is clear from Fig. 2 that, for ¹⁸⁴Os, the moment of inertia is well reproduced with $`ϵ^{\prime }=0.15`$. For ¹⁸⁶Os and ¹⁸⁸Os, the experimental moments of inertia can be explained with $`ϵ^{\prime }`$ between 0.10 and 0.15. In summary, it has been clearly shown in the present work that the three-dimensional angular momentum projection from a triaxial Nilsson+BCS deformed intrinsic wavefunction is essential for an accurate description of the transitional nuclei. The moments of inertia of these transitional nuclei show a steep increase as a function of rotational frequency in the low spin region, and this can only be explained with a triaxial deformation of $`\gamma \sim 30^o`$, since the inclusion of 2- and higher-quasiparticle bands has little effect in this spin region. We would like to mention that the present work is exploratory. For a detailed study, the energy surface needs to be analyzed as a function of $`ϵ`$ and $`ϵ^{\prime }`$ to look for the optimal deformation. In the present work, the deformation parameter $`ϵ`$ was taken from the earlier studies in which axial symmetry was assumed. In a more consistent treatment, both $`ϵ`$ and $`ϵ^{\prime }`$ have to be varied in order to search for the energy minimum for spin $`I=0`$ .
# ISOMETRIC INVARIANCE OF THE POSITIVE-FREQUENCY KERNEL IN GENERALIZED FRW SPACETIMES ## 1 Quantization in curved spacetime ### 1.1 Isometric Invariance Principle Our goal is a theory of quantum free fields in a given spacetime; before undertaking the construction of Fock space we need to define the Hilbert space suitable for describing the motion of a single particle (this paper is dedicated to Lluis Bel on the occasion of the Spanish Relativity Meeting 1998). Therefore we consider here the Klein-Gordon (KG) equation $$(\nabla ^2+m^2)\mathrm{\Psi }=0$$ (1) for a complex-valued wave function $`\mathrm{\Psi }(x)`$, describing the minimal coupling of a scalar particle to gravity. The sesquilinear form $$(\mathrm{\Phi };\mathrm{\Psi })=\int j^\mu (\mathrm{\Phi },\mathrm{\Psi })𝑑\mathrm{\Sigma }_\mu $$ (2) constructed from the Gordon current $`j^\mu (\mathrm{\Phi },\mathrm{\Psi })`$ is conserved under changes of the hypersurface $`\mathrm{\Sigma }`$, provided $`\mathrm{\Phi }`$ and $`\mathrm{\Psi }`$ are solutions of the KG equation. But it is not positive definite. In order to exhibit a candidate for the one-particle Hilbert space, the linear space of solutions must be split into two subspaces. In one of them (later identified as the positive-frequency space) the restriction of $`(\mathrm{\Phi };\mathrm{\Psi })`$ must be positive definite. The trouble is that, in nonstationary spacetimes, such a splitting is not unique. Nevertheless, criteria for its determination were given early on, either in terms of finding a real linear operator $`J`$ with $`J^2=-Id`$, determining a complex structure in the space of real solutions , or alternatively in terms of a projector $`\mathrm{\Pi }^+=\frac{1}{2}(1+iJ)`$ which projects any complex solution onto the positive-frequency subspace. The application of $`J`$ or $`\mathrm{\Pi }^+`$ to a solution $`\mathrm{\Psi }`$ is carried out with the help of a kernel , the existence of which was proved by C. Moreno under very general global assumptions. But in spite of the enormous amount of literature devoted up to now to quantization in a curved background, it seems that the role of spacetime isometries has not received all the attention it deserves , with an exception for de Sitter spacetime . In view of the fundamental role played by the Poincaré group in quantum mechanics in Minkowski spacetime, we propose that quantization in any curved spacetime should satisfy this Principle: Quantum mechanics of free particles must be invariant under all spacetime symmetries continuously connected with the Identity. For simplicity we consider here only continuous isometries, and postpone a discussion of discrete ones. ### 1.2 The positive-frequency kernel The theory of retarded and advanced Green functions has been thoroughly investigated for many years. These objects are unambiguously defined under very general assumptions (global hyperbolicity). More problematic is the kernel $`D^\pm `$ which allows one to define the positive-frequency (resp. negative-frequency) solutions of the KG equation through the formula $$(\mathrm{\Psi }^\pm )(y)=(D_y^\pm ;\mathrm{\Psi })=\int j^\alpha (D_y^\pm ,\mathrm{\Psi })𝑑\mathrm{\Sigma }_\alpha ,$$ (3) where $`\mathrm{\Psi }^\pm =\mathrm{\Pi }^\pm \mathrm{\Psi }`$ is the positive-frequency (resp. negative-frequency) part of $`\mathrm{\Psi }`$, and $`y`$ is an arbitrary point of $`V_4`$. Of course, $`D^\pm `$ must satisfy the KG equation in both arguments; the notation with subscript $`y`$ indicates that the integration is performed over the variable $`x`$.
Kernel $`D^+`$ is of fundamental importance, for the quantum field of the particle must be defined as the annihilation operator associated with the one-particle state $`D^+`$. By isometric invariance we mean $$D^+(x,y)=D^+(Tx,Ty)$$ for any metric-preserving transformation $`T`$ of the connected isometry group. When spacetime has the topology of $`𝐑\times V_3`$, with $`V_3`$ compact and connected, Moreno's theorem ensures that many $`D^+`$ actually exist (whatever the local form of the metric may be). Two such kernels are related by a unitary transformation. But if spacetime admits Killing vectors, we cannot yet be satisfied with an arbitrary choice of $`D^+`$ before we answer the question whether this kernel is invariant under the isometries of $`(V_4,g)`$. This issue remains an open problem for an arbitrary metric, but in the sequel we focus on a class of spacetimes where the isometries are under control and the KG equation is separable. ## 2 Quantization in Generalized FRW spacetimes We assume that the spacetime manifold is $`V_4=𝐑\times V_3`$, with $`V_3`$ connected and compact, and that for some time scale $`t`$ $$ds^2=B^6dt^2-B^2d\sigma ^2,$$ (4) where $`B`$ is a strictly positive function of $`x^0=t`$, and $`d\sigma ^2=\gamma _{ij}(x^k)dx^idx^j`$ defines an elliptic metric. Notice that $`(V_3,\gamma )`$ may have arbitrary curvature. The form (4) provides a generalization of the Friedmann-Robertson-Walker line element. Lorentzian manifolds with such a metric have been systematically studied . Beyond the conventional FRW case, they can represent a universe filled with a non-perfect fluid characterized by anisotropic pressures . They have two nice properties: — In the generic case $`(V_4,g)`$ has no Killing vectors besides the trivial ones corresponding to the isometries of $`(V_3,\gamma )`$. Cases where additional Killing vectors arise require exceptional laws of evolution for the scale factor ; they naturally include the de Sitter manifold. — The KG equation is separable for special solutions associated with “modes”; the time dependence of the mode solutions is determined by an ordinary differential equation of second order. This situation stems from the existence of a certain first integral, proportional to the kinetic energy. Indeed, the three-dimensional Laplacian $`\mathrm{\Delta }_3`$ associated with the metric $`\gamma `$ commutes with $`\nabla ^2`$, and thus also with the operator on the l.h.s. of (1). This fact permits one to reduce the wave equation to a one-dimensional problem for the time dependence, supplemented with an elliptic spectral problem for the space dependence. ### 2.1 Kernel at a given mode By extension of the usual terminology we define Mode $`n`$ as the linear space $`ℋ_n`$ of solutions to (1) that are also eigenfunctions of $`\mathrm{\Delta }_3`$ with eigenvalue $`-\lambda _n`$. We also refer to $`ℋ_n`$ as a “kinetic-energy shell”. In this Section the kinetic energy is kept fixed, and the label $`n`$ referring to a given eigenvalue $`\lambda _n`$ is provisionally dropped. The separation of frequencies is supposed to respect the mode decomposition; therefore in each shell $`ℋ_n`$ we look for projectors $`\mathrm{\Pi }_n^\pm `$ associated with kernels $`D_n^\pm `$. Notice that $`ℋ_n`$ is orthogonal to $`ℋ_l`$ for $`l\ne n`$ in the sense of the sesquilinear form . Let us consider a mode solution, say $`\mathrm{\Phi }`$. For some nonnegative $`\lambda \in \mathrm{Spec}(V_3)`$ we have $`\mathrm{\Delta }_3\mathrm{\Phi }=-\lambda \mathrm{\Phi }`$, so (1) reduces to $$(\partial ^0\partial _0+\lambda B^{-2}+\mu )\mathrm{\Phi }=0$$ (5) with $`\partial ^0=B^{-6}\partial _0`$ and $`\mu =m^2`$.
The space variables $`x^j`$ can be ignored in solving (5) for $`\mathrm{\Phi }`$. This equation is always of second order. It is well known that the eigenspace $`ℰ`$ of $`\mathrm{\Delta }_3`$ in $`C^{\mathrm{\infty }}(V_3)`$ associated with the eigenvalue $`-\lambda `$ has finite dimension, say $`r`$. Let $`𝒮`$ be the two-dimensional space of $`C^{\mathrm{\infty }}`$ complex-valued functions of (the single variable) $`t`$ satisfying the equation $`(\partial ^0\partial _0+\lambda B^{-2}+\mu )f=0`$, and let the functions $`f_1(t)`$ and $`f_2(t)=f_1^{\ast }`$ form a basis of $`𝒮`$. They respectively span the one-dimensional subspaces $`𝒮^{(1)}`$ and $`𝒮^{(2)}`$. We can always choose our notation and normalize the basis of $`𝒮`$ according to the Wronskian condition $`W(f_1,f_2)=i`$, which amounts to associating $`f_1`$ and $`f_2`$ respectively with positive and negative frequencies. Call such a basis admissible. With this convention we proved (Prop. 3 in ) that the restriction of $`(\mathrm{\Phi };\mathrm{\Phi })`$ to $`𝒮^{(1)}`$ is positive definite (resp. negative, in $`𝒮^{(2)}`$). The Hilbertian scalar product is defined as $`<\mathrm{\Phi },\mathrm{\Psi }>=\pm (\mathrm{\Phi };\mathrm{\Psi })`$, respectively, in $`𝒮^{(1)}`$ and $`𝒮^{(2)}`$. It crucially depends on the splitting we have performed (the choice of an admissible basis) in $`𝒮`$. Notice that $`𝒮^{(1)}`$ and $`𝒮^{(2)}`$ are mutually orthogonal in the sense of this scalar product as well as in the sense of the sesquilinear form (2). If $`\mathrm{\Psi }`$ is a positive-frequency solution in mode $`n`$ we must have, restoring now the mode label, $$((D_n^+)_y;\mathrm{\Psi })=\mathrm{\Psi }(y),$$ where the notation $`(D_n^+)_y`$ indicates that $`D_n^+`$ is considered as a function of $`x`$ which additionally depends on $`y`$. Let $`E_{1,n},\mathrm{},E_{r,n}`$ be a real orthonormal basis of $`ℰ_n`$. Use the notation $`x=(t,\xi )`$, $`y=(u,\eta )`$ with $`\xi ,\eta \in V_3`$. The only possibility for $`D_n^\pm `$ takes on the standard form $$D_n^+(y,x)=f_{1,n}^{\ast }(u)f_{1,n}(t)\mathrm{\Gamma }_n(\eta ,\xi ),$$ (6) where the expression $$\mathrm{\Gamma }_n(\eta ,\xi )=\sum _aE_{a,n}(\eta )E_{a,n}(\xi )$$ is real and does not depend on the choice of a real orthonormal basis in $`ℰ_n`$. It is intrinsically determined by the spectral properties of $`V_3`$. It is straightforward to check that expression (6) for $`D_n^+`$ actually satisfies equation (3), as it should. The only arbitrariness in formula (6) lies in the factor $`f_{1,n}^{\ast }(u)f_{1,n}(t)=f_{2,n}(u)f_{1,n}(t)`$, which depends on the choice of an admissible basis in the two-dimensional space $`𝒮_n`$. For the usual FRW spacetimes ($`V_3`$ of constant curvature), a basis of $`ℰ_n`$ can be found in the literature . Let us now turn to isometric invariance. If $`T`$ is an isometry of the spatial metric $`\gamma _{ij}`$, it acts on functions according to $`(TF)(\xi )=F(T\xi )`$. The invariance of $`\mathrm{\Delta }_3`$ entails that each eigenspace $`ℰ_n`$ is globally invariant. Moreover, $`T`$ leaves invariant the three-dimensional scalar product $`((F,G))`$. Thus $`E_{1,n}(T\xi ),\mathrm{},E_{r,n}(T\xi )`$ is another real orthonormal basis of $`ℰ_n`$. Finally $`\mathrm{\Gamma }_n(T\eta ,T\xi )=\mathrm{\Gamma }_n(\eta ,\xi )`$, and we can write $`D_n^+(Ty,Tx)=D_n^+(y,x)`$.
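As a numerical aside (ours, for a toy scale factor), an admissible basis can be constructed by integrating the mode equation, which after multiplying eq. (5) by $`B^6`$ reads $`\ddot{f}+(\lambda B^4+\mu B^6)f=0`$, and normalizing the conserved Wronskian $`W(f_1,f_2)=f_1\dot{f}_2-f_2\dot{f}_1`$ to $`i`$ with $`f_2=f_1^{\ast }`$; one convenient choice is WKB-like positive-frequency initial data:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: integrate f'' + (lam*B^4 + mu*B^6) f = 0 for a toy B(t),
# starting from WKB-like positive-frequency data that enforce W = i.
lam, mu = 4.0, 1.0
B = lambda t: 1.0 + 0.1 * np.sin(t)            # illustrative B(t) > 0
Om2 = lambda t: lam * B(t)**4 + mu * B(t)**6   # Omega(t)^2

def rhs(t, y):                                 # y = [Re f, Im f, Re f', Im f']
    return [y[2], y[3], -Om2(t) * y[0], -Om2(t) * y[1]]

w0 = np.sqrt(Om2(0.0))
y0 = [1 / np.sqrt(2 * w0), 0.0, 0.0, -np.sqrt(w0 / 2)]   # f1(0), f1'(0)
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12)

f1 = sol.y[0] + 1j * sol.y[1]
df1 = sol.y[2] + 1j * sol.y[3]
W = f1 * np.conj(df1) - np.conj(f1) * df1      # W(f1, f2) with f2 = f1*
print("W stays at i:", W[0], "->", W[-1])
```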
In the generic case all the isometries of $`V_4`$ are lifts of spatial isometries, so we summarize: The only kernel $`D_n^+`$ that is a solution of the wave equation (1), an eigenfunction of $`\mathrm{\Delta }_3`$ with eigenvalue $`-\lambda _n`$, and satisfies (3) whenever $`\mathrm{\Psi }`$ is a solution in mode $`n`$, is given by (6); it is defined up to a change of admissible basis in the two-dimensional complex space $`𝒮_n`$ (a Bogoliubov transformation). In the generic case, it is isometrically invariant. ### 2.2 Sum over modes Define $`ℋ^+=\oplus _n𝒮_n^{(1)}\otimes ℰ_n`$. The infinite sum $`\mathrm{\Phi }=\sum _n\mathrm{\Phi }_n`$, where $`\mathrm{\Phi }_n\in ℋ_n`$, always exists in the distributional sense if we define as test functions the sums $`\mathrm{\Psi }=\sum \mathrm{\Psi }_n`$ having an arbitrary but finite number of terms (terminating sums). This definition is invariant under spatial isometries. In this sense we can assert: The only kernel solution of the wave equation, defined mode-wise and satisfying (3), is given by $`D^+=\sum _nD_n^+`$. It is defined up to a unitary transformation $`U=\oplus _nU_n`$, where $`U_n`$ is an arbitrary Bogoliubov transformation in mode $`n`$. In the generic case, $`D^+`$ is invariant under the continuous isometries of $`V_4`$. ## 3 Conclusion As expected from the work of several authors, the positive-frequency kernel is defined up to a unitary transformation. For generalized FRW spacetimes we have additionally proved that this kernel is invariant under all spacetime isometries connected with the identity, except perhaps for very special forms of the scale-factor evolution. The de Sitter manifold is one such exception. This does not contradict the existence of a de Sitter invariant vacuum, but rather indicates that a mode-wise definition of $`D^+`$ is no longer satisfactory when the separation of space from time fails to be unique. Returning to the generic case, it can easily be read off from (6) that the connected group of spacetime isometries acts unitarily in $`ℋ^+`$, whatever the choice of admissible basis made in each $`𝒮_n`$. This fact strongly suggests considering all isometrically invariant definitions of the one-particle sector as equivalent representations of the same physics. When the expansion is statically bounded in the past and future, this point of view amounts to by-passing the customary “in” and “out” vacua in favor of a unique class of observers not subject to asymptotic conditions. This scheme at least provides a reasonable approximation insofar as the curvature gradient in $`V_3`$ remains small and the scale factor does not vary too rapidly. Notice that in some particular cases, among all the equivalent definitions, a distinguished vacuum may arise after all, in agreement with recent works .
# Superconducting Gap Anisotropy and Quasiparticle Interactions: a Doping Dependent ARPES Study ## Abstract Comparing ARPES measurements on Bi2212 with penetration depth data, we show that a description of the nodal excitations of the d-wave superconducting state in terms of non-interacting quasiparticles is inadequate, and we estimate the magnitude and doping dependence of the Landau interaction parameter which renormalizes the linear-$`T`$ contribution to the superfluid density. Furthermore, although consistent with d-wave symmetry, the gap with underdoping cannot be fit by the simple $`\mathrm{cos}k_x-\mathrm{cos}k_y`$ form, which suggests an increasing importance of long range interactions as the insulator is approached. There is little doubt about the fundamental importance of many-body interactions in the high temperature cuprate superconductors . Quantifying these interactions is difficult in the normal state of these materials, given the lack of well-defined single-particle excitations as revealed by various experiments. On the other hand, well-defined quasiparticle excitations do exist in the superconducting state, and it is believed that a description of the low temperature state in terms of superfluid Fermi liquid theory is appropriate. In Fermi liquid theory, the quasiparticles are characterized by a renormalized Fermi velocity $`v_F`$, and their residual interactions are described by Landau parameters, which manifest themselves through a renormalization of various response functions relative to those given by a non-interacting theory. For example, in the cuprates, the Fermi velocity $`v_F`$ has been determined by angle resolved photoemission (ARPES) studies of $`\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_{8+\delta }`$ (Bi2212) to be renormalized by a factor of two to three over that given by band theory. The strong renormalization of the superfluid density $`\rho _s(0)`$ has also been known for some time; there one sees a scaling with the number of doped holes: the Uemura relation. In this paper we examine an issue which is at the heart of the nature of quasiparticles in the superconducting state of the cuprates, namely whether the slope of the superfluid density at low temperatures, $`d\rho _s/dT`$, is affected by interactions or not, and what the relation of its renormalization is to that of $`\rho _s(0)`$, questions of considerable debate in the recent literature. The importance of $`\rho _s(T)`$ for an understanding of cuprate superconductivity derives from the early observation of a linear-$`T`$ suppression of $`\rho _s(T)`$ , since this is explained most naturally by the thermal excitation of quasiparticles near the nodes of a d-wave superconducting gap. Related to this is the interesting question of whether the gap around the node scales with $`T_c`$, as has been suggested by a recent analysis of magnetic penetration depth data. To address these issues we use the unique capability of ARPES to directly measure the Fermi wavevector $`k_F`$, the velocity $`v_F`$, and the superconducting gap anisotropy near the node, from which we can estimate the slope of $`\rho _s(T)`$ assuming non-interacting quasiparticles. Comparing this with the actual value obtained in penetration depth experiments leads to a direct estimate of the renormalization due to quasiparticle interactions.
This is done by exploiting the relation $$\left|\frac{d\rho _s}{dT}(T=0)\right|\equiv \left|\frac{d}{dT}\left(\frac{1}{\lambda ^2}\right)\right|=A\beta ^2\frac{v_Fk_F}{v_\mathrm{\Delta }},$$ (1) where $`\lambda `$ is the penetration depth, and $`A`$ is a doping-independent constant: $`A=4\mathrm{ln}2\alpha k_Bn/(cd)`$ with $`\alpha `$ the fine structure constant, $`k_B`$ the Boltzmann constant, $`c`$ the speed of light, and $`n`$ the number of $`CuO_2`$ layers (4 for Bi2212) per c-axis lattice constant $`d`$ (30.9 Å for Bi2212). ARPES is used to determine the three parameters at the node: the Fermi velocity $`v_F`$, the Fermi wavevector $`k_F`$, and the slope of the superconducting gap $`v_\mathrm{\Delta }=(1/2)|d\mathrm{\Delta }/d\varphi |`$ evaluated at $`\varphi =\pi /4`$, where $`\varphi `$ is the Fermi surface angle. The gap slope is normalized such that $`v_\mathrm{\Delta }=\mathrm{\Delta }_{\mathrm{max}}`$ for the simple d-wave gap $`\mathrm{\Delta }(\varphi )=\mathrm{\Delta }_{\mathrm{max}}\mathrm{cos}(2\varphi )`$. The only unknown in Eq. 1 is the renormalization factor $`\beta `$ due to quasiparticle interactions; in isotropic Fermi liquid theory $`\beta =1+F_{1s}/2`$, where $`F_{1s}`$ is the $`l=1`$ spin-symmetric Landau parameter, which quantifies the backflow of the medium around the quasiparticles. By comparing ARPES and penetration depth data, we estimate $`\beta `$ and its doping dependence. In particular, different assumptions in the recent literature about the doping dependence of $`v_\mathrm{\Delta }`$ have led to different conclusions regarding the value and doping dependence of $`\beta `$ in Eq. 1. Our main results are as follows. (1) We determine the doping dependence of the gap anisotropy from ARPES. Although consistent with a node on the Fermi surface along the zone diagonal ($`\varphi =\pi /4`$) for all doping levels, the shape of the gap changes with underdoping: while its maximum value increases, we find the new result that the gap becomes flatter near the nodes, i.e. $`v_\mathrm{\Delta }`$ decreases. (2) Using our data on the doping dependence of $`v_\mathrm{\Delta }`$, we exploit Eq. 1 and use available values of the penetration depth $`\lambda (T)`$ to estimate the renormalization factor $`\beta `$. We find that $`\beta `$ is considerably smaller than unity and decreases with underdoping, in contrast to previous suggestions in the literature . (3) Our results on the doping dependence of the gap anisotropy and its relation to penetration depth data provide important evidence that the strength of both the pairing interaction and the quasiparticle interactions increases with reduced doping. The ARPES experiments were performed at the Synchrotron Radiation Center, Wisconsin, using both a high-resolution 4-meter normal-incidence monochromator and a plane-grating monochromator, with a resolving power of $`10^4`$ at $`10^{11}`$ photons/sec. We used 22 eV photons, with a 17 meV (FWHM) energy resolution, and a momentum window of radius 0.045$`\pi `$ (in units of $`1/a`$, where $`a`$ is the Cu-Cu separation). The high-quality single-crystal samples were float-zone grown, with the doping changed by varying the oxygen partial pressure during annealing. All samples show sharp x-ray diffraction rocking curves and flat surfaces after cleaving, as determined from specular laser reflection. We label the samples by their doping (UD for underdoped, OD for overdoped) and onset $`T_c`$. Fig. 1 shows ARPES data at $`T`$=15 K for an UD75K sample at different $`𝐤`$-points along the Fermi surface. 
$`k_F`$ was carefully chosen using the criterion that the leading edge of the spectrum has minimum binding energy with the steepest slope, when compared with other spectra along a cut perpendicular to the Fermi surface, as discussed earlier. The zero of binding energy ($`E_F`$) was determined from the spectra (not shown) of a polycrystalline Pt reference in electrical contact with the Bi2212, recorded at regular intervals to ensure an accurate determination of $`E_F`$. From the shift of spectral weight away from $`E_F`$, one clearly sees an anisotropic gap, which is maximal near the $`(\pi ,0)`$ point ($`\varphi =0`$) and zero near the $`(\pi ,\pi )`$ direction ($`\varphi =45^{\circ }`$). For comparison we also plot (dashed line) in Fig. 1 ARPES spectra from an OD87K sample at two points on the Fermi surface. (For more OD data see Ref. .) We immediately see that the UD sample has a larger maximum gap ($`\varphi =0`$) than the OD one, but a smaller gap at the corresponding point ($`\varphi =38^{\circ }`$) near the node. Thus the raw data directly give evidence for an interesting change in gap anisotropy with doping. To quantitatively estimate the gap, we have modeled the low-temperature data by a simple BCS spectral function, taking into account the measured dispersion and the known energy and momentum resolutions. Details of this analysis, and error estimates, have been described earlier in the context of OD samples . The resulting angular dependence of the gap is plotted in Fig. 2 for six samples. To further quantify this change in anisotropy, we have used the following expression to fit the gap: $`\mathrm{\Delta }_𝐤=\mathrm{\Delta }_{\mathrm{max}}[B\mathrm{cos}(2\varphi )+(1-B)\mathrm{cos}(6\varphi )]`$ with $`0\le B\le 1`$, where $`B`$ is determined for each data set. Note that $`\mathrm{cos}(6\varphi )`$ is the next harmonic consistent with $`d`$-wave symmetry. We find that while the overdoped data sets are consistent with $`B\simeq 1`$, the parameter $`B`$ decreases significantly in the underdoped regime. To emphasize the significance of $`B<1`$, we plot in the UD75K panel of Fig. 2 a dashed curve with $`B=1`$ along with the best-fit curve for that sample. From these fits, one easily determines the value of $`v_\mathrm{\Delta }`$ discussed earlier in the context of Eq. 1. In Fig. 3a, we plot $`v_\mathrm{\Delta }/\mathrm{\Delta }_{\mathrm{max}}`$ for seven samples (the six analyzed above plus an UD85K sample from Ref. ). One can clearly see from this figure the trend that underdoping leads to an increase in the maximum gap together with a decrease in the gap slope at the node. Several questions need to be addressed before proceeding further. First, could the flattening at the node be, in fact, evidence for a “Fermi arc” (a line of gapless excitations), especially since such arcs are seen above $`T_c`$ in the underdoped materials ? Given the error bars on the gap estimates in Fig. 2, it is impossible to rule out arcs in all the samples. Nevertheless, it is clear that there are samples (especially OD87K, UD80K and UD75K) where there is clear evidence in favor of a point node rather than an arc at low temperatures. Furthermore, it is very important to note that a linear $`T`$ dependence of $`\rho _s(T)`$ at low temperature, for all doping levels, in clean samples gives independent evidence for point nodes . Second, is the change in gap anisotropy intrinsic, or related to impurity scattering? We can eliminate the latter explanation on two grounds. 
First, the maximum gap increases as the doping is reduced, opposite to what would be expected from pair breaking due to impurities. Second, impurity scattering is expected to lead to a characteristic “tail” in the leading edge , for which there is no evidence in the observed spectra (see Fig. 1). We suggest that the change in the gap function with underdoping is related to an increase in the range of the pairing interaction: the $`\mathrm{cos}(6\varphi )`$ term in the Fermi surface harmonics can be shown to be closely related to the tight-binding function $`\mathrm{cos}(2k_x)-\mathrm{cos}(2k_y)`$, which represents a next-nearest-neighbor interaction, just as $`\mathrm{cos}(2\varphi )`$ is closely related to the nearest-neighbor interaction $`\mathrm{cos}(k_x)-\mathrm{cos}(k_y)`$. On very general grounds, the increasing importance of the $`\mathrm{cos}(6\varphi )`$ term with underdoping could arise from a decrease in screening as one approaches the insulator. Similar effects also arise in specific models. In models of spin-fluctuation-mediated d-wave pairing, an increase in the antiferromagnetic correlation length with underdoping leads to a more sharply peaked pairing interaction in $`𝐤`$-space, causing a flattening of the gap around the node as we find here. In interlayer tunneling models, one also expects changes in the shape of the gap which might be correlated with doping . We note that the ratio of the dispersion normal to the Fermi surface ($`v_F`$) to that along the Fermi surface ($`v_\mathrm{\Delta }`$) is quite large, $`\sim 20`$ in the overdoped case, and becomes even larger as the doping decreases, in contrast to the undoped insulator, which exhibits an isotropic dispersion about the $`(\pi /2,\pi /2)`$ points. This implies that the electronic dispersion of the superconductor in this region of the zone may not be as closely related to the insulator as has recently been suggested. We now return to Eq. 1. It is known from previous ARPES measurements that the band dispersion along $`(0,0)\rightarrow (\pi ,\pi )`$ is rather strong and doping-independent, with an estimated $`v_F=2.5\times 10^7`$ cm/sec . It is also known that $`k_F`$ along this direction is 0.737 $`\AA ^{-1}`$ and relatively doping-independent . Using these inputs, together with the strongly doping-dependent $`v_\mathrm{\Delta }`$, we can estimate the slope $`\left|d\lambda ^{-2}/dT\right|`$ in the case of non-interacting quasiparticles $`(\beta =1)`$. As shown in Fig. 3b (filled circles), we find the resulting slope is of order $`6\times 10^{-9}\AA ^{-2}K^{-1}`$ and is reduced by approximately 30% in going from UD75K to OD87K. Fig. 3b also shows the values of $`\left|d\lambda ^{-2}/dT\right|`$ obtained from London penetration depth measurements . Although there is considerable variation in the measured values of $`\lambda (0)`$ and the low-temperature $`d\lambda /dT`$ from one group to another, probably due to the use of different techniques, we find evidence for the following trend: the slope $`d\rho _s/dT`$ decreases with underdoping. For YBCO this effect is weak in the UBC data , but much stronger in the Cambridge data . The limited data available for Bi2212 are consistent with this trend . The striking feature is that, in all cases, this trend in $`d\rho _s/dT`$ is exactly the opposite of that deduced from a theory with non-interacting quasiparticles $`(\beta =1)`$ using ARPES input. That is, from Fig. 3b, it is clear that the renormalization factor $`\beta `$ is considerably smaller than unity and doping-dependent, a conclusion different from that inferred earlier . 
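As a rough numerical cross-check of Eq. 1 (a sketch, not the paper's actual analysis: the gap parameters `Delta_max` and `B` below are assumed round numbers of the right magnitude, and factors of $`\mathrm{}`$ are restored in the way that makes the prefactor dimensionally consistent), one can verify that the quoted nodal parameters indeed give a non-interacting slope of a few times $`10^{-9}\AA ^{-2}K^{-1}`$:

```python
# Hedged sketch of Eq. 1 with beta = 1. v_F and k_F are the nodal values
# quoted in the text; Delta_max and B are illustrative assumptions.
import numpy as np

alpha_fs = 1 / 137.036      # fine-structure constant
kB_over_hbarc = 4.367e-8    # k_B/(hbar*c) in 1/(Angstrom K)
n_layers, d_c = 4, 30.9     # CuO2 layers per c-axis lattice constant (Angstrom)

hbar_vF = 1.65              # eV*Angstrom, i.e. v_F = 2.5e7 cm/s
kF = 0.737                  # 1/Angstrom

Delta_max, B = 0.030, 1.0   # eV; assumed values (B = 1 is a pure cos(2phi) gap)
# gap slope at the node from the fit Delta_max*[B cos(2phi) + (1-B) cos(6phi)]
v_Delta = abs(3 - 4 * B) * Delta_max

A = 4 * np.log(2) * alpha_fs * kB_over_hbarc * n_layers / d_c
slope = A * 1.0**2 * hbar_vF * kF / v_Delta   # beta = 1
print(f"|d(lambda^-2)/dT| ~ {slope:.1e} Angstrom^-2 K^-1")  # ~ 5e-9
# beta^2 is then estimated as (slope measured by penetration depth) / slope.
# Note how B < 1 flattens the node: B = 0.8 gives v_Delta = 0.2*Delta_max, so
# the beta = 1 slope grows strongly with underdoping, opposite to the data.
```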
To get an estimate of the doping dependence of $`\beta `$, we use the Bi2212 values of Ref. for the OD85K and UD80K samples in comparison to our own values for OD87K and UD80K, obtaining $`\beta ^2`$ values of 0.32 and 0.17, respectively. This is roughly consistent with a $`\beta `$ that varies as $`x`$, the number of doped holes, which would be the expected result from the $`x`$ scaling of $`\rho _s(0)`$. On the other hand, as noted earlier, a weaker doping dependence of $`\beta `$ seems to be implied by the UBC data. Given the difficulties of measuring the superconducting gap in YBCO by ARPES, this points to the need for further penetration depth experiments on Bi2212 samples, so that a more detailed comparison to ARPES data can be made. In conclusion, we find that the gap anisotropy of Bi2212 changes strongly as a function of doping, implying an increase in the range of the pairing interaction with underdoping. Moreover, a comparison of our data to penetration depth measurements indicates that the slope of the superfluid density is renormalized by a doping-dependent factor, implying that a non-interacting picture of the quasiparticle excitations around the nodes of the d-wave order parameter is inappropriate. This has obvious implications for other low-temperature measurements in the high-temperature cuprate superconductors, such as specific heat, NMR, and microwave and thermal conductivity, which are usually quantified by theories that do not take these renormalizations into account. This work was supported by the U. S. Dept. of Energy, Basic Energy Sciences, under contract W-31-109-ENG-38, the National Science Foundation DMR 9624048, and DMR 91-20000 through the Science and Technology Center for Superconductivity, and the CREST of JST. JM is supported by the Swiss National Science Foundation, and MR by the Swarnajayanti fellowship of the Indian DST.
# 1 Dynamical parameters of HI deficient galaxies
# The fractal distribution of galaxies and the transition to homogeneity ## 1 Introduction The analysis of the distribution of large-scale structure in the Universe is done by applying statistical tools. The most basic descriptor is, aside from the average mass density, $`\overline{\rho }=\langle \rho (𝐫)\rangle `$, the reduced two-point correlation function (Peebles (1993)), $$\xi (r)=\frac{\langle \rho (𝐫)\rho (\mathrm{𝟎})\rangle }{\overline{\rho }^2}-1=\frac{\langle \delta \rho (𝐫)\delta \rho (\mathrm{𝟎})\rangle }{\overline{\rho }^2},$$ (1) where we assume statistical homogeneity and isotropy. Higher-order correlation functions of the probability distribution are also of importance, but their determination becomes less reliable as the order increases. However, for the purpose of illustrating our point, it will suffice to concentrate on $`\xi (r)`$. The study of galaxy catalogues leads to fitting $`\xi (r)`$ by a power law on the length scales probed by these catalogues, $$\xi (r)=\left(\frac{r_0}{r}\right)^\gamma ,$$ (2) but the values of $`r_0`$ and $`\gamma `$, and even whether the latter is scale-dependent, are still being debated (Coleman & Pietronero (1992); Sylos Labini, Montuori & Pietronero (1998); Davis (1997); Guzzo (1997); Scaramella et al. (1998); Martínez et al. (1998)). At any rate, it seems that scale invariance holds approximately over a non-negligible range of scales. The distance $`r_0`$, defined by $`\xi (r_0)=1`$ (or by some other qualitatively equivalent condition, e.g. Martínez et al. (1998)), has been called the “correlation length” in the literature. However, it is well known in the theory of critical phenomena (Stanley (1971); Binney et al. (1992); Ma (1994)) that power-law correlated fluctuations let themselves be felt over the entire range of observation. There, the physical meaning of a correlation length corresponds to the length scale where power-law decay crosses over to exponential decay, and has nothing to do with the condition $`\xi \sim 1`$. The distance $`r_0`$ certainly has the meaning of separating a regime of large fluctuations, $`\delta \rho \gg \overline{\rho }`$, from a regime of small fluctuations, $`\delta \rho \ll \overline{\rho }`$. The regime of small fluctuations has been identified with a homogeneous distribution and, because of this, $`r_0`$ has been called the scale of transition to homogeneity. However, as we will see, the sense in which $`\delta \rho \ll \overline{\rho }`$ leads to homogeneity is not equivalent to absence of structure. In fact, fluctuations correlated by a power law, albeit small, may give rise to collective, macroscopic phenomena, such as critical opalescence in fluids (Stanley (1971); Binney et al. (1992)). Absence of structure is not determined just by the condition that correlations vanish at large distances, but rather by how fast they do so. This is measured by the correlation length $`\lambda _0`$, separating slow from fast decay. Fast decay is generally identified with exponential decay, but it could also refer to another type of sufficiently rapid decay. The point, therefore, is to establish a clear distinction between the strength of the correlations (a measure of how large the fluctuations around the mean can be) and the range of the correlations (a measure of how far correlations are felt). These two characteristics of the correlation function, though related, are not equivalent to each other. This subtle difference has usually been overlooked in the cosmological literature or, at least, not clearly stated. 
## 2 Relevant concepts In order to exemplify the main concepts, we consider a random distribution of particles whose average density is $`\overline{\rho }`$. In the next section we will discuss the implications for cosmology. The second moment of the density field is given by $$G(r)=\overline{\rho }^2[1+\xi (r)].$$ (3) Correlations are weak if $`\xi <1`$, and strong if $`\xi >1`$. Thus the scale $`r_0`$ defined by $`\xi (r_0)=1`$, which separates these two regimes, is related to the strength of the correlations. In the parlance of structure formation, $`r_0`$ is the scale separating, at a given time, the linear from the nonlinear dynamical regimes (Sahni & Coles (1995)), and we may call it the scale of nonlinearity. A more intuitive picture can be achieved if one examines the variance of the smoothed density field $`\rho _L`$, the number of particles contained in a volume of size $`L^3`$, per unit volume. This variance is computed as an integral of $`\xi (r)`$. We consider the cosmologically interesting case where the correlation function has a simple scale-invariant form like (2), with given $`r_0`$ and $`\gamma <3`$ (this latter requirement assures convergence of the integral). Then we can write: $$\frac{\langle (\delta \rho _L)^2\rangle }{\overline{\rho }^2}\sim \left(\frac{a}{L}\right)^3+B\left(\frac{r_0}{L}\right)^\gamma ,$$ (4) where we have defined a length scale $`a=\overline{\rho }^{-1/3}`$ ($`a^3`$ is the volume per particle), and $`B`$ is a positive numerical constant of order one. The first term is the shot-noise contribution, and the second is due to correlations. We conclude that if the volume containing the sample is large enough ($`L\gg r_0,a`$), the relative fluctuations of $`\rho _L`$ around its mean are very small, and a good estimate of $`\overline{\rho }`$ can be extracted from the sample. What happens for intermediate values of $`L`$ depends on the relative values of $`a`$ and $`r_0`$. We consider two physically interesting cases: (i) $`a\ll r_0`$, as may be the case with the galaxy distribution, so that for scales $`a\ll L\ll r_0`$ fluctuations are large and the correlation term dominates; and (ii) $`a\gg r_0`$, as in a fluid in thermal equilibrium, so that for scales $`L\lesssim a`$, fluctuations are large but dominated by shot noise. In either case no reliable estimate of $`\overline{\rho }`$ can be extracted. We also see that a Gaussian approximation to the one-point probability distribution of $`\rho _L`$ is certainly excluded in these cases, because $`\rho _L\ge 0`$ by definition and hence the probability distribution becomes prominently skewed if $`\delta \rho _L\gtrsim \overline{\rho }`$. On the other hand, the range of the correlations is related to how fast they decay at infinity, which is measured by the spatial moments of the two-point correlation function: $$\mathcal{M}_n\equiv \int d𝐫\underset{n}{\underbrace{𝐫\cdots 𝐫}}\xi (r)\propto \underset{k\to 0}{lim}\underset{n}{\underbrace{\nabla _𝐤\cdots \nabla _𝐤}}P(k),$$ (5) where we have introduced the Fourier transform of the two-point correlation function, the power spectrum $`P(k)=\int d𝐫e^{i𝐤𝐫}\xi (r)`$. Notice that $`\mathcal{M}_n`$ vanishes for odd $`n`$ due to isotropy of the correlations. Starting from the interpretation of $`\xi (r)`$ as the density contrast of particles at a distance $`r`$ from any one particle, it is straightforward to give a physical meaning to the moments $`\mathcal{M}_n`$: they are the multipoles characterizing the morphology of the typical cluster generated by correlations. 
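To make these moments concrete, here is a small numerical sketch (with made-up illustrative values of $`r_0`$, $`\lambda _0`$ and $`\gamma `$, not fitted ones): it integrates $`\mathcal{M}_0`$ and $`\mathrm{Tr}\mathcal{M}_2`$ for a power law with and without an exponential cutoff, and forms the moment ratio that defines the correlation length in equation (6) below. The ratio converges to $`O(\lambda _0)`$ in the short-ranged case, but grows without bound for the pure power law:

```python
# Numerical sketch (illustrative parameters): the moment ratio of Eq. (6)
# converges for a short-ranged xi(r) but diverges for a pure power law.
import numpy as np
from scipy.integrate import quad

r0, lam0, gamma = 5.0, 200.0, 1.8   # hypothetical values, in h^-1 Mpc

def xi_cut(r):                      # power law with exponential cutoff
    return (r0 / r)**gamma * np.exp(-r / lam0)

def xi_pow(r):                      # pure power law, equation (2)
    return (r0 / r)**gamma

def trace_moment(xi, n, R):
    # Tr M_n ~ int_0^R 4 pi r^(2+n) xi(r) dr  (angular integral already done)
    val, _ = quad(lambda r: 4 * np.pi * r**(2 + n) * xi(r), 1e-3, R, limit=200)
    return val

for R in (1e3, 1e4, 1e5):           # growing sample radius
    est = lambda xi: np.sqrt(0.5 * trace_moment(xi, 2, R) / trace_moment(xi, 0, R))
    print(f"R = {R:8.0f}:  with cutoff {est(xi_cut):8.1f},  pure power law {est(xi_pow):12.1f}")
# The cutoff case settles at O(lam0) (about 1.15*lam0 for this gamma); the
# pure power-law estimate grows as ~0.43*R, i.e. an infinite correlation
# length, even though xi(r) << 1 everywhere beyond r0.
```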
Correlations are short-ranged if all the moments are finite, which implies that they decay faster than any power of the distance and that $`P(k)`$ is analytic at $`𝐤=\mathrm{𝟎}`$. One can then define a typical size of the cluster, $`\lambda _0`$, the correlation length, as $$\lambda _0^2=\frac{1}{2}\left|\frac{\mathrm{Tr}\mathcal{M}_2}{\mathcal{M}_0}\right|=\underset{k\to 0}{lim}\frac{1}{2}\left|\frac{\nabla _𝐤^2P(k)}{P(k)}\right|,$$ (6) in agreement with the standard definition in condensed matter physics (see e.g. Ma (1994)); if $`P(0)`$ or $`\nabla _𝐤^2P(0)`$ vanishes, then the definition of $`\lambda _0`$ must be generalized in terms of a different quotient between higher-order, non-vanishing moments. Therefore, if one observes a realization of the field $`\rho _L`$ with a smoothing length $`L\gg \lambda _0`$, then clusters are unobservable and the distribution is indistinguishable from the case in which there are no correlations at all. Conversely, if $`\xi (r)`$ decays as a power of $`r`$, so that it is long-ranged, then the moments $`\mathcal{M}_n`$ diverge from some $`n`$ onwards (depending on the exponent of the decay), and $`P(k)`$ is not analytic at $`𝐤=\mathrm{𝟎}`$. Hence, no matter how large $`L`$ is, the existence of clusters will always be detectable, so that one associates an infinite correlation length with a power-law decay. In conclusion, it is $`\lambda _0`$, not $`r_0`$, that marks the transition to homogeneity in the sense of absence of structure. Notice that in principle $`\lambda _0`$ and $`r_0`$ are different; in fact, they can even differ by orders of magnitude, as a power-law decay of $`\xi (r)`$ exemplifies. It is clear now that there are in general at least three different length scales ($`a`$, $`r_0`$, $`\lambda _0`$), thus defining four regimes, which we now discuss. Notice that without further knowledge of the physics of the system, one cannot exclude that any two of the scales are of the same order, which is not a rare case in physics, so that some of the regimes below can be difficult to observe. In what follows we will assume the case most appropriate in a cosmological context, that $`a<r_0<\lambda _0`$. We then have: * Homogeneous regime, for scales $`r\gg \lambda _0`$. On these scales, fluctuations are small ($`\xi (r)\ll 1`$) and the system structureless. Typically, correlations are exponentially damped and $`\rho _L`$ has a prominently peaked distribution, so that the mean density $`\overline{\rho }`$ can be reliably estimated. An example is a liquid in thermal equilibrium far from a phase transition, for which $`\xi (r)\simeq (r_0/r)\mathrm{exp}(-r/\lambda _0)`$, where $`a`$, $`r_0`$ and $`\lambda _0`$ are all of the order of the molecular diameter (Landau & Lifshitz (1980); Goodstein (1985)). * Critical regime, for scales $`r_0\ll r\ll \lambda _0`$. Fluctuations are still small, so that the mean density $`\overline{\rho }`$ can be estimated with confidence too; but now, because fluctuations are correlated and therefore behave coherently on these scales, they give rise to spatially extended structures. Moreover, if the correlations follow a power law on these scales, fluctuations about the background have a self-similar, fractal structure. An example is a fluid at the critical point, for which $`\xi (r)`$ takes the form of equation (2) with $`\gamma <3`$. Again $`a`$ and $`r_0`$ are of the order of the molecular diameter, but now $`\lambda _0\to \infty `$. Notice that small fluctuations do not imply uninteresting behavior: since they are macroscopic in spatial extent, one observes phenomena such as critical opalescence (Stanley (1971); Binney et al. 
(1992)). The reader may feel uneasy about the apparent contradiction between the claim that fluctuations are small in a critical fluid and the standard claim that they are in fact large. The point is to identify the quantity with which one is comparing the fluctuations. When compared with $`\overline{\rho }`$, as we do, they are small, which is a prerequisite for a thermodynamic description to be valid. When compared with typical fluctuations far from the critical point, they are extremely large, and in fact the ratio (size of critical fluctuations)/(size of non-critical fluctuations) usually diverges in the thermodynamic limit, as is seen from equation (4). * Fractal regime, for scales $`a\ll r\ll r_0`$. Now fluctuations exhibit structure and at the same time are very large ($`\xi \gg 1`$), so that from equation (3) it follows that $`G(r)\simeq \overline{\rho }^2\xi (r)`$, and $`\overline{\rho }`$ cannot be estimated. For power-law correlations, this means that it is the density itself (and not only its fluctuations around the mean) that appears to have fractal structure (or multifractal structure, depending on how higher-order correlations scale). This manifests itself in the fact that most realizations of the stochastic field $`\rho (𝐫)`$ do not look like a continuum field at all on these scales, but are very irregular and full of voids of every size, separated by very high density condensations. This regime is not encountered in a critical fluid, because there $`r_0`$ is of the order of the molecular diameter, but it may be relevant in cosmology, as we will see. * Shot-noise regime, for scales $`r\lesssim a`$. The presence of correlations is masked by shot noise: fluctuations are very large, but simply because the discrete nature of the underlying point process is dominant. It may be helpful at this point to show pictorially a distribution in the critical regime. Figure 1 is a snapshot of a critical two-dimensional lattice gas (in fact isomorphic to the two-dimensional Ising model; the three-dimensional Ising model as a description of the galaxy distribution was introduced in Hochberg & Pérez-Mercader (1996); Pérez-Mercader et al. (1996)), with $`\lambda _0\to \infty `$, but with $`a`$ and $`r_0`$ both of the order of a pixel (one atom), barely discernible to the naked eye. In a critical fluid the average density is well defined, but one can clearly see in the figure structures (i.e., inhomogeneities) on scales much larger than $`r_0`$. Although one may observe what seem like voids of arbitrary size, a closer examination reveals that the voids contain clusters of atoms. A fractal, however, must contain voids of every size, owing to scale invariance, and the voids must be totally empty (density exactly zero in them), so that the fluctuations of the density over a volume of size $`L`$ are large at any scale. It is now clear that what has scale invariance, or fractal character, in a critical fluid is the fluctuations over the average density, rather than the density field itself. For comparison, Figure 2 shows the same model at very high temperature, with $`\lambda _0`$ of the order of a pixel (comparable to $`a`$ and $`r_0`$), where it exhibits Poissonian fluctuations. Visual inspection shows immediately that fluctuations are less prominent than in the critical regime, and that there is no structure at all. ## 3 Implications for cosmology We now apply the above to cosmology. We have two different scales, $`r_0`$ and $`\lambda _0`$, related to the intensity and to the range of the correlations, respectively. 
This does not preclude the existence of other intermediate length scales, which do not, however, affect the physical meaning of $`r_0`$ and $`\lambda _0`$, nor the discussion to follow. Incidentally, one of these scales, which has also been termed a correlation length, is defined as (Kofman et al. (1993); Sathyaprakash et al. (1995)) $$R_\varphi ^2=6\frac{\langle \varphi ^2\rangle }{\langle (\nabla \varphi )^2\rangle }=6\frac{\int d𝐤k^{-4}P(k)}{\int d𝐤k^{-2}P(k)},$$ (7) where $`\varphi `$ is the peculiar gravitational potential due to the fluctuations in the density field, $`\delta \rho `$. This length can be understood as the characteristic scale of spatial variation of the field $`\varphi `$. But this is not the correlation length either, because a correlation between the values of the field at two separate points does not imply in general that these values must be similar. As discussed in the previous section, the length scale $`\lambda _0`$ is given by the behavior of the power spectrum on the largest scales. Since one cannot observationally gather information about scales larger than the horizon, $`\lambda _0`$ can be estimated only by assuming that the known behavior of $`P(k)`$ on the largest observable scales holds beyond the horizon. The power spectrum on the largest scales can be extracted from the analysis of the microwave background, yielding $`P(k)\propto k^{1.2}`$ (Bennett et al. (1996); Smoot (1999)) at the epoch of decoupling. If this non-analytic behavior were extrapolated to $`k\to 0`$, it would imply $`\lambda _0=\infty `$. The physical interpretation of this conclusion is that the correlation length was certainly larger than the horizon at the epoch of decoupling. Hence, since causal processes can create correlations only up to the scale of the horizon, we must be observing the remaining, frozen imprints of a pre-inflationary epoch which have not yet been erased. Moreover, given that the evolution of the power spectrum by gravitational instability does not alter its shape in the linear regime (Peebles (1993)), one must conclude that $`\lambda _0`$ is still beyond the horizon in the present epoch. Thus, it may seem that the homogeneous regime is beyond reach and that the largest observable scales in the Universe belong to the critical regime. As a matter of fact, notice that the well-known pictures of the microwave background by COBE (Bennett et al. (1996)) resemble figure 1 much more than figure 2. Let us now consider the scale of nonlinearity, $`r_0`$. The smallness of the density fluctuations inferred from observations of the microwave background ($`\langle (\delta \rho _L)^2\rangle /\overline{\rho }_L^2\sim 10^{-7}`$ for $`L\sim 10^3h^{-1}`$ Mpc) yields an upper bound on $`r_0`$. Smaller scales can be probed by means of the galaxy catalogues. The conclusions related to the matter distribution depend on the bias parameter, but, in agreement with theoretical models, this parameter can be assumed to be of order unity. This implies in turn that $`r_0`$ can be directly extracted from the galaxy–galaxy correlations. The standard result therefore is that $`r_0\simeq 5h^{-1}`$ Mpc (Davis & Peebles (1983); Peebles (1993)), so that currently available galaxy catalogues, of size $`L\gtrsim r_0`$, are probing the critical regime. Hence, the galaxy distribution should look very much like a fluid at the critical point: small but spatially extended fluctuations. As already stressed, this does not mean that the catalogues are probing the transition to homogeneity, since $`r_0`$ is not the correlation length. 
There has recently arisen, however, a controversy over the validity of $`r_0\simeq 5h^{-1}`$ Mpc, due to suggestions that $`r_0`$ is in fact larger than the maximum effective size of the current largest galaxy catalogues, and that a simple power-law form like (2) with a fixed exponent holds at least up to this maximum scale (Coleman & Pietronero (1992); Sylos Labini, Montuori & Pietronero (1998)). If so, then the catalogues are probing the fractal regime. As shown in the previous section, this implies that the mean galaxy number density cannot be reliably estimated from these catalogues, and hence neither can $`r_0`$. This means that they do not allow one to discriminate between a distribution that has a nonvanishing $`\overline{\rho }`$ and looks fractal up to scales of the order of $`r_0`$, and a distribution whose $`\overline{\rho }`$ vanishes and which is thus fractal up to arbitrarily large scales. Not surprisingly, the idea of a universe with an average density that vanishes as the size of the sample volume increases is very old, dating back to at least 1908 (Charlier (1908)). This construction was based on a hierarchy of clustering without limit, that is, clusters of galaxies which form clusters of clusters, which in turn also form clusters, and so on, ad infinitum. It is illustrative to read now the old review by G. de Vaucouleurs advocating a hierarchical cosmology (de Vaucouleurs (1970)). As we have seen before, it appears that the correlation length $`\lambda _0`$ is much larger than the scale of nonlinearity, $`r_0`$. This could lead to interesting consequences. As already remarked, and as illustrated by the discussion of figures 1 and 2, there can exist structure in the critical regime, i.e. on intermediate scales $`r_0\ll r\ll \lambda _0`$. Hence, this could be the reason why clustering is observed, as galaxy clusters and superclusters, even on scales much larger than $`r_0`$, which is a “paradox” in the interpretation of galaxy catalogues. Similarly, the increase of $`r_0`$ with bias threshold should not be interpreted in terms of a larger correlation length for the corresponding structure (Gabrielli, Sylos Labini, & Durrer (1999)). Another consequence of a large $`\lambda _0`$ is related to the effect of gravitational lensing on the microwave background (Blanchard & Schneider (1987); Kashlinsky (1988); Cole & Efstathiou (1989); Martínez-González, Sanz, & Silk (1990); Fukushige, Makino, & Ebisuzaki (1994); Martínez-González, Sanz, & Cayón (1997)). The geodesic equation for light propagation is formally identical to the geometric-optics equation for light rays in a medium of non-uniform refractive index, which allows one to identify the gravitational potential with an effective refractive index. Therefore, the fact that the correlation length of the potential diverges means that gravitational lensing of the microwave background could exhibit an effect akin to critical opalescence (a sort of “cosmological opalescence”, Domínguez & Gaite (1999)). In conclusion, we have provided a clear explanation and a precise mathematical formulation of the difference between the intensity of fluctuations and their spatial extent in the context of cosmological structures. These two properties can be characterized by two different length scales: the scale of nonlinearity, $`r_0`$, and the correlation length, $`\lambda _0`$. We have discussed the behavior of the correlations in the different regimes which these two lengths define and the connection with cosmological observations. 
It should be clear now that the statement “transition to homogeneity” as used in the cosmological literature really means “transition to the regime of small-amplitude fluctuations”, which does not imply absence of structure. In fact, we have also argued that this remaining “critical structure” could have non-trivial consequences for the interpretation of observations. For instance, no real transition to homogeneity is likely to be observed in the galaxy distribution. ###### Acknowledgements. We would like to acknowledge useful discussions with M. Kerscher and conversations with M. Carrión, R. García-Pelayo, J.R. Acarreta, J.M. Martín-García and L. Toffolatti, as well as remarks on the manuscript by T. Buchert, and correspondence with L. Guzzo and E. Gawiser. We also thank the anonymous referees for “critical” discussions on the manuscript. J.G. acknowledges support under Grant No. PB96-0887. Fig. 1: Critical fluid. Fig. 2: Non-critical fluid.
# Distortion of Globular Clusters by Galactic Bulges ## 1 Introduction The mechanism by which globular clusters (GCs) are destroyed has received considerable attention \[Tremaine 1975, Chernoff & Shapiro 1987, Aguilar, Hut & Ostriker 1988, Chernoff & Weinberg 1990, Long et al., 1992, Capuzzo-Dolcetta 1993, Capuzzo-Dolcetta & Tesseri 1997\]. These studies include intrinsic processes like cluster evaporation, as well as environmental influences such as dynamical friction and tidal shocking from disks, bulges and supermassive black holes. Recently, Gnedin and Ostriker (1997) considered the effect of these processes on the present and, by extrapolation, on the initial population of GCs in the Milky Way. The conclusion they drew is that between 52% and 86% of the current population of GCs in our galaxy will eventually be destroyed by the combination of evaporation, dynamical friction and tidal shocking over the next Hubble time. They added that the initial population of GCs was likely to be substantially larger than the present one, with the clusters that pass closest to the galactic center subject to destruction by bulge shocking. Moreover, the remnant stars join the bulge and the halo of the galaxy, perhaps making a dominant contribution to their stellar populations. Murali and Weinberg (1997a) reached similar conclusions (i.e., a factor of two decrease from the initial population) considering the global evolution of the populations of Milky Way disk and halo clusters. Observationally, the distributions of GC shapes have been studied in the Milky Way \[White 1987\] and in nearby galaxies such as M31 \[Lupton 1989\], the LMC \[Kontizas 1989\], and the SMC \[Kontizas 1990\]. A comparison of GCs in the Milky Way and M31 \[Han & Ryden 1994\] shows that both galaxies have similar, nearly spherical distributions of clusters, with axial ratios peaked respectively at $`\sim 0.95`$ and $`\sim 0.93`$. On the other hand, GCs in the LMC and SMC differ significantly. Globulars in these galaxies require triaxial parameters to describe their shapes. The distribution of axial ratios of GCs in the LMC peaks at about $`0.85`$ and that of the SMC at about $`0.73`$. Han and Ryden (1994) explain this situation by a gradual decay of velocity anisotropy as a consequence of two-body relaxation, so that the older clusters in the Milky Way and M31 are now supported mainly by rotation. However, Goodwin (1997) argues that there is no age/ellipticity relationship for GCs, and that in fact the stronger tidal field in larger galaxies is the dominant effect in producing the observed differences. Severe observed distortions in the globulars of giant galaxies are rare, but axial ratio values as low as $`0.75`$ have been measured for GCs in the Galaxy. Also, the discovery of populations of young GCs in various interacting galaxies opens a new laboratory for the study of destruction mechanisms \[Holtzman et al., 1992, Whitmore et al., 1993, O'Connell, Gallagher, & Hunter 1994, Whitmore & Schweizer 1995, Meurer et al., 1995, Holtzman et al., 1996, Conti, Leitherer, & Vacca 1996, Watson et al., 1996, Schweizer et al., 1996\]. For external galaxies and for the Milky Way, observations of distortion typically involve measurement of the ellipticity at the half-mass radius of the GC, but it is also known that the shape changes with distance from the cluster center. Theoretical inferences about the destruction of GCs have mostly been drawn from a statistical point of view. 
In a previous paper \[Charlton & Laguna 1995\], we instead considered how the destruction event itself proceeds. There we reported the outcome of a series of $`N`$–body numerical simulations of a single GC under the influence of the gravitational potential of a galactic bulge and/or supermassive black hole. Disk shocking also plays an important role; however, in the inner parts of galaxies it is the central spheroid that dominates \[Aguilar, Hut & Ostriker 1988\]. In our previous paper, we concentrated on computing the mass lost by the GC as a function of the distance of closest approach, the bulge concentration, and the mass of the black hole. A GC venturing within a distance of a few hundred pc of the galactic center was completely destroyed in a single passage. However, it is expected that these ultra-close encounters are relatively rare. More typically, the orbit of a GC would pass through a range of perigalactic distances for which only a small fraction of the stars become unbound. In these types of encounters, the GC would be noticeably distorted by tidal forces, but would not be totally destroyed until at least several passages had taken place. Many of the clusters on orbits that venture close to the bulge were destroyed billions of years ago; however, as the GC system evolves, there will be some clusters that reach closer and closer passages. Several Galactic clusters are presently found within 1–2 kpc of the Galactic center \[Gnedin & Ostriker 1997\]. The main goal of this paper is to characterize the distortions induced in a GC by its interaction with a galactic bulge. For some cases the effect of a supermassive black hole is also considered. Of particular interest is the feasibility of finding direct observational evidence of the destruction of GCs, both in the Milky Way and in external galaxies. Grillmair et al. (1995) have determined by star count analysis that most of the twelve Galactic globular clusters in their sample have some degree of distortion of their surface density contours. They suggest that this is the effect of tidal stripping of loosely bound stars by the gravitational potential of the galaxy. As with our previous work, we base the present study on $`N`$–body simulations that are designed both to quantify the nature of the distortions of GCs produced specifically by bulge shocking and to explore the region of parameter space in which such distortions occur. With our simulations, it is possible to address past and present distortions in the Milky Way and to consider the implications for external galaxies of various types. Specifically, we ask: 1) Under what circumstances do clusters change shape due to bulge shocking? 2) What is the detailed physical mechanism that gives rise to shape distortions, and how can we characterize them? 3) Can these distortions feasibly be observed in the Milky Way, and are the present constraints consistent with the properties of the Milky Way bulge and GC population? 4) Where and when are bulge shocking distortions most easily observed? 5) How long do distortions persist, and what is the fate of these distorted globulars? ## 2 Tracking the Shape of a Globular Cluster The main challenge in a numerical study aimed at characterizing shape distortions of GCs is to achieve a satisfactory level of resolution. GC deformations often involve only the outer layers of the cluster. 
Thus, in order to obtain reasonable observational predictions for the shapes of GCs from $`N`$–body simulations, it is important to model GCs (systems of $`\sim 10^5`$ stars) with at least $`10^4`$ particles. This lower bound on the number of particles is necessary because GCs are highly concentrated objects, requiring a large investment of computational resources to simultaneously resolve their cores as well as their outer layers. These days, $`N`$–body simulations with $`10^4`$–$`10^5`$ particles are not considered extremely demanding; however, it is important to utilize a code fast enough that parameter space searches are feasible. We use an $`N`$–body, parallel, oct–tree code for this purpose \[Salmon & Warren 1994\]. With this code, it is possible to carry out selected runs with each star in the GC represented by its own particle, and, at the same time, the code’s performance allows the exploration of a substantial region of parameter space. The code does not include the effects of two–body relaxation, but this should not be important for single passages through the bulge. Much of the previous work on the evolution of GCs has been based upon solution of the Fokker–Planck equation (see \[Oh & Lin 1992\], \[Murali & Weinberg 1997a\], \[Murali & Weinberg 1997b\], \[Gnedin & Ostriker 1997\], and references therein). Clearly, this technique is convenient for considering the statistical evolution of the initial Milky Way GC population, including both internal (stellar evolution) and external (tidal) effects. However, recent work by Zwart et al. (1998) questions the general applicability of this technique by comparing it to an N–body approach, considering the effects of close encounters. For some cluster parameters and initial conditions the Fokker–Planck technique yields GC lifetimes a factor of ten smaller. This issue remains open to debate; in any case, since our goal in this paper is to focus on the observability of GC distortions due to bulge shocking during a single passage, an N–body approach is more appropriate regardless. A GC experiences forces due to the other stars in the cluster and due to all the other material in its host galaxy. We model the background force of the galaxy by an analytic potential. A crucial question for this study was the choice of bulge potential. There is convincing evidence in favor of triaxial bulges in the Milky Way and in other galaxies \[Zhao, Spergel, & Rich 1994, Bertola, Vietri, & Zeilinger 1991\]. Near–infrared observations from COBE DIRBE support a Galactic bulge resembling a prolate spheroid with 1:0.33:0.23 axis ratios \[Dwek et al., 1995\]. Orbits of GCs in triaxial potentials can certainly be quite different from those in potentials with spherical symmetry (e.g. box orbits). For that reason, we initially considered triaxial potentials such as the Schwarzschild triaxial potential \[de Zeeuw & Merritt 1983\]. However, we found that for a single passage with perigalactic distance $`r_p\gtrsim 200`$ pc, there was no significant difference in the nature and shape of the deformations between triaxial and spherical potentials, as long as both potentials were comparable in depth and in extent (see below). As a consequence, information about the triaxiality of the bulge can only be extracted from the study of GC shape distortions in a statistical sense, and this is beyond the scope of this paper. 
Based on the above observations, and in order to compare directly with our previous work, we have chosen to model the bulge using the Hernquist potential \[Hernquist 1990\] $$\varphi =-\frac{\varphi _o}{1+r/a},$$ (1) where $`\varphi _o=GM_{bg}/a`$, $`M_{bg}`$ is the mass of the bulge, and $`a`$ is a scale length related to the half-mass radius by $`r_{1/2}=(1+\sqrt{2})a`$. We considered values of $`a=400`$ pc and $`a=800`$ pc. This represents a range typical of the bulges of spiral galaxies. In addition, the mass of the bulge was taken to be $`M_{bg}=10^5M_{gc}`$ or $`M_{bg}=10^6M_{gc}`$, where $`M_{gc}`$ is the mass of the GC (results are presented in units of $`M_{gc}`$). These bulge masses are consistent with a typical GC mass of $`10^5M_{\odot }`$ if the bulge has a mass of $`10^{10}M_{\odot }`$ or $`10^{11}M_{\odot }`$, respectively. The tidal force on a star that is on a line joining the centers of the bulge and the GC is $$F_T\approx \frac{GM_{bg}r_{*}}{(r+a)^3},$$ (2) where $`r`$ is the distance between the bulge and the GC, and $`r_{*}`$ is the distance from the star to the center of the GC. Notice that at a fixed separation $`r`$, the strength of this force decreases as $`a`$ increases. One can find an estimate of the tidal radius, $`r_t`$, by equating expression (2) to the restoring force from the GC itself. This yields $$r_t\approx 46.4\,r_{*}\left(\frac{M_{bg}}{10^5M_{gc}}\right)^{1/3}-a.$$ (3) A GC will undergo tidal deformations if $`r_p\lesssim r_t`$. Since observations suggest that $`a\gtrsim 400`$ pc, and since we are not interested in ultra-close encounters (i.e., we consider only $`r_p\gtrsim 200`$ pc), one has from (3) that the region in the GC subject to distortions is $`r_{*}\gtrsim 13`$ pc. Typical core radii of GCs lie between 0.3 and 10 pc \[Spitzer 1987\]; thus it is only the halo of the GC that gets distorted. To understand the role that triaxiality plays in producing tidal deformations, let us extend the Hernquist potential (1) to a triaxial form by defining $$\frac{r}{a}=\left(\frac{x^2}{x_o^2}+\frac{y^2}{y_o^2}+\frac{z^2}{z_o^2}\right)^{1/2},$$ (4) with $`x_o`$, $`y_o`$ and $`z_o`$ as parameters. The tidal force along the $`x`$-axis is $$F_T(x)\approx \frac{GM_{bg}r_{*}}{(x+x_o)^3},$$ (5) with similar expressions holding for the other axes. From Dwek et al. (1995), for the Milky Way bulge, $`x_o\simeq 2`$ kpc and $`y_o\simeq z_o\simeq 0.5`$ kpc. Thus, for most of the halo, tidal deformations along the $`x`$-axis are negligible. This in principle represents an anisotropic tidal field that could yield a clear signature in the cluster deformations. However, one has to remember that GC orbits are in general neither exactly radial nor aligned with the major axis of the bulge. Since the remaining two axes are quite similar, on average the GC sees mostly spherically symmetric tidal forces; thus it suffices to consider spherically symmetric potentials. For our simulations, we represent the GC by a King model with a central potential $`W_o=4`$ and a half-mass radius of 10 pc. This is within the range of observed GCs in the Milky Way, but on the loosely bound end of the distribution. We consider parabolic orbits with perigalactic distances $`r_p=200,\mathrm{\hspace{0.17em}400},\mathrm{\hspace{0.17em}800},`$ and $`1600`$ pc. For bound orbits, the speed at perigalacticon would be smaller, so in this sense our calculations give a lower limit on the amount of distortion. 
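As a quick sanity check of Eq. (3) (a sketch using the run parameters of the text; the pairing of specific $`(a,M_{bg})`$ values with case labels is our inference from the quoted numbers), the tidal radius for stars at twice the half-mass radius can be evaluated directly:

```python
# Evaluate the tidal-radius estimate of Eq. (3) for the run parameters.
# r_star is the star's distance from the GC center (pc); a is the bulge
# scale length (pc); ratio is the bulge-to-cluster mass ratio M_bg/M_gc.
def tidal_radius(r_star, a, ratio):
    return 46.4 * r_star * (ratio / 1e5) ** (1.0 / 3.0) - a

r_star = 20.0  # pc: stars at twice the 10 pc half-mass radius of the King model

for a, ratio in [(400.0, 1e5), (800.0, 1e5), (400.0, 1e6), (800.0, 1e6)]:
    print(f"a = {a:5.0f} pc, Mbg = {ratio:.0e} Mgc -> r_t = "
          f"{tidal_radius(r_star, a, ratio):6.0f} pc")
# The a = 400 pc, Mbg = 1e5 Mgc combination gives r_t ~ 528 pc, consistent
# with the value quoted below for case C3; only orbits with r_p < r_t
# distort the stars at this radius.
```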
The initial location of the globular cluster was sufficiently far from the center of the galaxy that tidal effects at the start of the simulation were negligible. Table 1 summarizes the input parameters for the runs. In this paper we focus on identifying choices of parameters that induce less than 20% GC mass loss. We also conducted runs more appropriate to GC distortion in giant elliptical galaxies, considering the effect of superimposing a black hole of mass $`10^4M_{gc}`$ on the bulge with $`a=800`$ pc. In that case, for the same $`r_p`$, considerably more mass could be lost due to the increase in the depth of the potential. As a test, Case C3 was repeated with $`N=10^5`$ to verify that the results are not sensitive to $`N`$ at this level. At each step during the evolution, we monitored the 3D shape of the GC using the inertia tensor $$I_{ab}=\frac{1}{N}\sum \frac{x_ax_b}{r^2},$$ (6) where the sum is over all the particles in the cluster and $`a,b=x,y,z`$. Although this method provides useful information about the development of the deformations, it does not take into account projection effects. To connect with observations, following \[de Theije et al., 1995\], we also characterize the GC shape by projecting the cluster onto an arbitrary plane on the “sky.” The cluster’s ellipticity, $`ϵ`$, is then obtained from $$ϵ\equiv 1-\frac{\mathrm{\Lambda }_{-}}{\mathrm{\Lambda }_{+}},$$ (7) where $$2\mathrm{\Lambda }_\pm =(I_{11}+I_{22})\pm \sqrt{(I_{11}+I_{22})^2-4(I_{11}I_{22}-I_{12}^2)}$$ (8) and $`I_{ab}`$ ($`a,b=1,\mathrm{\hspace{0.17em}2}`$) are the components of the projected (2D) inertia tensor, whose eigenvalues are $`\mathrm{\Lambda }_\pm `$. This ellipticity, of course, depends on the orientation chosen to perform the projection. Thus we must compute the probability, $`P(ϵ)`$, of viewing the cluster with a given ellipticity. The procedure for estimating $`P(ϵ)`$ is as follows: given a snapshot of a GC from an encounter with the bulge, we first define a coordinate system with its origin at the cluster’s center of mass. At this origin, we select $`51\times 51`$ directions with solid angle $`d\mathrm{\Omega }=\mathrm{sin}\theta d\theta d\varphi `$ that uniformly cover the $`(\theta ,\varphi )`$ plane. The GC is then projected onto planes perpendicular to those $`51\times 51`$ directions, and its ellipticity is computed. Finally, $`P(ϵ)`$ is obtained from a histogram of these ellipticities. ## 3 Distorting the Shape of Globular Clusters A reasonable starting point for understanding the onset of tidal deformations of GCs by galactic bulges is to show the outcome of a simulation for a typical orbit. Figure 1 shows the evolving GC along the orbit for case C3, with blow–up views at selected points. The GC has been projected onto the orbital plane, since distortions in this plane are dominant for these encounters. Distortion begins as soon as the GC reaches the tidal radius; for this case $`r_t\approx 528`$ pc for stars at twice the half-mass radius of the cluster. The mechanism behind the distortion of a GC is basically the same as that for self-gravitating spheres of gas (stars). That is, the energy required to induce deformation or even disruption is provided at the expense of the orbital kinetic energy of the GC. As mentioned before, and clearly shown in Fig. 1, the core of the GC remains almost intact. Only the halo of the GC develops a quadrupole distortion, which can be spun up via gravitational torques \[Rees 1988\]. Stars closest to the center of the bulge move faster than those farthest from it, resulting in a spread in the energies of the stars in the GC. 
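Before turning to the results, here is a compact sketch of the projected-shape diagnostics of Eqs. (6)–(8) (synthetic particle data standing in for simulation output; the prolate axis ratio and the number of viewing directions are arbitrary choices for illustration):

```python
# Minimal sketch of the shape diagnostics of Eqs. (6)-(8): project the
# cluster onto the plane normal to a random line of sight, form the
# normalized 2D inertia tensor, and take eps = 1 - Lambda_-/Lambda_+;
# repeating over many orientations builds up P(eps).
import numpy as np

rng = np.random.default_rng(1)
# stand-in prolate "cluster": Gaussian blob with 2:1:1 axis ratios
pos = rng.normal(size=(20000, 3)) * np.array([2.0, 1.0, 1.0])

def projected_ellipticity(pos, n_hat):
    """Ellipticity of the cluster projected along unit vector n_hat."""
    ref = np.array([0.0, 0.0, 1.0]) if abs(n_hat[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    e1 = np.cross(n_hat, ref); e1 /= np.linalg.norm(e1)
    e2 = np.cross(n_hat, e1)               # (e1, e2) span the "sky" plane
    xy = pos @ np.column_stack([e1, e2])
    r2 = np.sum(xy**2, axis=1)
    I = (xy[:, :, None] * xy[:, None, :] / r2[:, None, None]).mean(axis=0)  # 2D Eq. (6)
    lam_minus, lam_plus = np.linalg.eigvalsh(I)   # Lambda_-, Lambda_+ of Eq. (8)
    return 1.0 - lam_minus / lam_plus             # Eq. (7)

# sample random viewing directions and histogram the result, as for P(eps)
n = rng.normal(size=(500, 3)); n /= np.linalg.norm(n, axis=1, keepdims=True)
eps = np.array([projected_ellipticity(pos, ni) for ni in n])
hist, edges = np.histogram(eps, bins=20, range=(0.0, 1.0), density=True)
print(f"mean eps = {eps.mean():.2f}")  # near-pole views give eps ~ 0, edge-on larger
```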
This energy spread gives rise to the “twisting” isophotes apparent in the distorted clusters shown in Fig. 1. Figure 2 shows the evolution of the eigenvalues of the 3D inertia tensor (6) for each of the cases in Table 1. A sense of the degree of deformation is obtained from the relative sizes of these eigenvalues. Two general types of deformation can be detected, depending on the value of $`r_p`$; however, both types lead to final prolate configurations. In all cases, initially two of the principal axes (or, equivalently, eigenvalues) decrease, and the third one (initially pointing in the direction of the bulge’s center) increases. As the GC continues along its orbit (still inside the tidal radius), the alignment of the major axis with the line joining the bulge and GC centers is no longer possible, in spite of the induced torques on the GC, which act to restore the original alignment. Tidal forces halt the growth of the prolate distortion. In the cases with $`r_p=200`$ pc, the situation is temporarily reversed, leading to a “bounce” with one of the short axes becoming the largest (see C9 and C13 in Fig. 2). The GC eventually settles down into a prolate configuration, with the major axis in the orbital plane. A similar, but not as dramatic, prolate configuration is obtained for larger $`r_p`$. In this case the bounce is absent, and the GC reaches a stable configuration on a shorter time-scale (about 300 time units, or $`\sim 10^7`$ yrs, after perigalacticon). The distortion is minimal outside 1 kpc for $`M_{bg}=10^5M_{gc}`$, but is still substantial outside this distance if $`M_{bg}=10^6M_{gc}`$. The loss of mass during each encounter is displayed in Fig. 3. The mass loss steadily increases as $`r_p`$ decreases, as $`a`$ decreases, and as $`M_{bg}`$ increases. The GC experiences a rapid loss of mass near the perigalactic distance, followed by a gradual loss over the next $`\sim 10^7`$ yrs. In the cases with $`r_p=200`$ pc, some of the material that was lost at closest approach becomes bound to the cluster again during the “bounce” phase, giving rise to an increasing $`M_{gc}`$ in Fig. 3 over a short interval of time. If $`M_{bg}=10^5M_{gc}`$, the GC is destroyed only if it passes within $`\sim 200`$ pc of the galactic center, and distortions could be observed for $`200\lesssim r_p\lesssim 400`$ pc. For a more massive bulge, approaches with $`r_p=800`$ pc lead to destruction, and distorted clusters will be observed for $`1\lesssim r_p\lesssim 2`$ kpc. Even when a cluster is severely distorted, the observability in any one case depends upon the viewing angle. Figure 4 shows four series of realizations of the randomly oriented clusters from C3, C7, C11, and C15, all viewed at the time, after the encounter, when they are 3 kpc from the galactic center. These four cases all have $`r_p=800`$ pc, but varying bulge parameters. Although Case C10 yields the GC with the largest distortion and thus the largest mean ellipticities, the “twisted” isophotes are more apparent for Cases C3 and C7. This is because the GCs in C11 and C15 have not yet settled into their final states at the distance of 3 kpc. The last row in Fig. 4 shows randomly oriented realizations of the cluster from C15 at $`T=1.6\times 10^7`$ yrs after reaching the distance of closest approach, when it is at a distance of $`\sim 8`$ kpc. Figure 5 shows the distributions of ellipticities for each of the cases in Table 1 when the GC is at a distance of 3 kpc from the center of the bulge. Generally, we see that with increasing $`r_p`$ (along a column in Fig. 
5) the eccentricity distribution peaks at smaller values. However, this is not apparent for Cases C9–C11 and Cases C13–C15. In these cases, which experienced the “bounce” described above, the GC has still not reached a stable configuration when it reaches a distance of 3 kpc. Fig. 6 shows the eccentricity distributions for all cases at $`T=500`$ ($`2.3\times 10^7`$ yrs from the start of the simulation, when the GC was at a distance of 2 kpc). These distributions are all narrow, and we see a steady progression with the exception of Case C9. This is also the case with the most severe “bounce”. The mean of the GC eccentricity distribution does not reach the large values we would expect by extrapolating from the $`r_p=1600`$, 800, and 400 pc cases (C12, C11, and C10). This is due to the rapid passage of this GC through the galactic center. As a result, it does not have time to fully respond to the tidal torques and does not reach as large an ellipticity. In giant elliptical galaxies, it seems possible that the effects of “bulge shocking” could be substantially enhanced by the presence of a central supermassive black hole. To address this specific possibility, we added a central $`10^4`$M<sub>gc</sub> black hole to runs C5 and C13 ($`a=800`$ pc and $`r_p=200`$ pc for both, but with $`M_{bg}=10^5`$ and $`10^6`$M<sub>gc</sub>, respectively). The results for the mass loss and for the ellipticity distributions are shown as dotted lines in Figs. 3, 5, and 6. In the case with the less massive bulge (C5), the black hole leads to significantly larger distortions, but the bulge tidal effect dominates in the case with $`M_{bg}=10^6`$M<sub>gc</sub>. We have found that, for a range of bulge properties, there is a fairly large range of perigalactic distances over which a GC is distorted but not destroyed. Figure 7 illustrates the evolution of such a cluster as it continues along a parabolic orbit back out into the galaxy. The Case 2 cluster simulation is shown, but with only 10000 particles, so that its evolution can be traced for more than 1 billion years. The distortion is apparent at 1000 time units, corresponding to $`4.6\times 10^7`$ yrs, but is not pronounced at 3000 time units. If no further shocking occurs, the bound particles remain behind as a globular cluster while, within hundreds of millions of years, the outer layers spread out into a stream extending through the galaxy. Once dispersed, this stream of stars joins the population of halo stars. ## 4 Are Tidal Deformations Observable? We find that GCs can be substantially distorted, without being destroyed, by the process of bulge shocking. Here we address the question of whether the predicted distortions of the GC halo can practically be observed in the Milky Way and in other nearby galaxies. ### 4.1 Globular Cluster Distortion in the Present Day Milky Way As demonstrated by Gnedin and Ostriker (1997), the rate of destruction of GCs and the importance of bulge shocking to their evolution were significantly larger in the past. Although the GC population in the Milky Way is likely to decrease by a factor of two over the next Hubble time, the probability of observable bulge shocking at present in the Milky Way is somewhat difficult to assess. We use our simulation results to consider this issue. The clusters most likely to have observable distortions in the Milky Way are those that have passed near the Galactic center. There are several clusters observed within 2 kpc, and these are the ones with the highest probability of having a bulge shocking effect. 
As examples, we consider the clusters NGC6293 (at $`R_G=1.2`$ kpc) and NGC6440 (at $`R_G=1.4`$ kpc). The bulge of the Milky Way has mass $`10^{10}M_{\odot }`$ and softening parameter $`a\simeq 700`$ pc \[Dwek et al., 1995\], if the distribution of light is fit to a Hernquist potential. With these parameters (closest to the series C5–C8), a GC passing within a few hundred pc of the Galactic center undergoes substantial distortion of its outer layers. The outer distorted regions of such a GC have about 10–100 stars per square arcminute at the distance of NGC6293 or NGC6440. In order to determine whether such distortions are observable, we consider whether it is possible to distinguish the GC stars from the background of bulge and disk field stars in the Galaxy. The expected backgrounds in the directions of these two clusters have been calculated using a Galaxy model constructed by Hunsberger (private communication). For GCs $`\sim 8`$ kpc away from us the range of observed apparent magnitudes of their stars is about 12–20. For NGC6293, at $`l=357.6\mathrm{deg}`$ and $`b=7.8\mathrm{deg}`$, we expect 24 stars per square arcminute from the bulge and 23 stars per square arcminute from the disk in this magnitude range. For NGC6440, at $`l=7.7\mathrm{deg}`$ and $`b=3.8\mathrm{deg}`$, the bulge contribution is only 9 per square arcminute, as compared to 21 per square arcminute from disk stars. In either case the stellar densities in the outer, distorted regions of the cluster are 10–100 stars per square arcminute, comparable to these background levels, and should be observable. Further information could be extracted by considering the colors of the stars, since a different distribution is expected for the cluster and for the Galactic background. We conclude that the main factor in assessing whether bulge shocking should be observable in one or more present-day Milky Way globulars is whether any of the clusters actually has passed within $`\sim 400`$ pc of the bulge center. (The GC that we have chosen is realistic, but fairly loosely bound. The distances at which distortion and destruction become important would be smaller for a more tightly bound cluster.) The key period for observation of the tidal distortion effect is while the cluster is still within a couple of kpc of the Galactic Center. Figure 7 showed that the distortions do not persist once the cluster has passed out into the galaxy. A search for the distortion signature of bulge shocking can be used to place limits on the orbits of present day globulars. It is expected that most clusters that passed regularly within the bulge on their orbits were destroyed many billions of years ago. However, three of the 25 globular clusters for which proper motions were measured were found to have orbits that will bring them within 1 kpc of the Galactic Center, one of them within 300 pc \[Dauphole et al., 1996\]. A careful analysis of the shapes of globulars close to the Galactic Center is therefore of interest. Grillmair et al. (1995) have mapped the surface density distributions of 12 Galactic clusters over a range of Galactocentric distances, using star counts and aided in selection by color-magnitude diagrams. They present a comparison of one of their most distorted observed clusters, NGC 7089, to results from their own N-body simulations observed in a similar manner to the data, and find agreement. We have performed a similar analysis and present in Fig. 8 contours of smoothed pixel maps for six of the clusters illustrated in Fig. 7.
These were constructed by observing each of the oriented clusters from a distance of 8 kpc, and assigning the particles to pixels that were $`1.56\times 1.56`$ square arcminutes, i.e. $`3.6\times 3.6\mathrm{pc}^2`$. Contours were drawn by first applying a Gaussian smoothing filter with $`\sigma =3`$ pixels. Pixels near the center of the image, those with values larger than the threshold of 1000 stars per pixel, were reset to 1000 so that the central structure did not overwhelm the outer contours in the smoothing process. The outer contour is drawn at 2 stars per pixel and the inner contour at 100 stars per pixel. These maps were produced using the imsmooth and contour routines in IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories, which are operated by AURA, Inc., under contract to the NSF.. The parameters used to produce the contours in Fig. 8 are similar to those used in producing the surface density maps for the 12 Galactic globulars presented by Grillmair et al. (1995). The resulting maps of our simulated GCs are somewhat similar to some of the observed GCs, suggesting that perhaps tidal distortions have already been observed in our galaxy. However, we should note that the Grillmair clusters are all at galactocentric distances larger than 8 kpc. If the tidal distortions are in fact due to bulge shocking as we have discussed, this implies that these clusters have passed within at least 1 kpc of the Galactic center, which is unlikely for the majority of the twelve. ### 4.2 Globular Cluster Distortion in Other Galaxies As discussed in the introduction, galaxies of various types at different stages of evolution show different distributions of ellipticities \[Han & Ryden 1994\]. In our Galaxy and M31 there has been time for two-body relaxation to destroy velocity anisotropies. We would predict that distortion should occur in many young galaxies due to bulge shocking. Depending on the initial distribution of orbits of GCs and on their spatial distribution, the early ellipticity distribution of clusters could be skewed to low values due to this process. Some other environments are more likely to currently be active sites of extensive bulge shocking. We repeated runs C5 and C13 to explore the distortions expected for a parabolic orbit passing within 200 pc of the center of an elliptical galaxy containing a supermassive black hole. Depending on the orbital distribution, in a young elliptical with a large GC population this situation may be more likely than it is for a GC to pass within 400 pc of the Galactic Center. In general, the initial distribution of clusters could be spatially concentrated toward the galaxy center, so that many orbits are subject to bulge shocking. In a young galaxy with close to its initial distribution of clusters, we expect many to have the “twisting isophotes” distortions that we describe here. The merger products of recent interacting giant galaxies perhaps provide the most ideal laboratory for direct detection of the destruction of GCs. As mentioned in the introduction, in several of these galaxies more than one thousand young star clusters are found, with masses somewhat larger on average than globulars \[Whitmore & Schweizer 1995\]. These clusters have properties consistent with the predecessors of GCs. We predict that a number of the globulars in merger products should exhibit the unique signature of distortion by bulge shocking.
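Such signatures would be searched for with maps like those of Fig. 8. As an illustration, the map construction described above can be sketched in Python (our own code; `scipy.ndimage` stands in here for the IRAF `imsmooth` and `contour` tasks, and all function and variable names are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_map(x_pc, y_pc, pixel_pc=3.6, sigma_pix=3, clip=1000):
    """Star-count map built as described for Fig. 8: histogram into
    ~3.6 pc pixels, clip the core at 1000 stars per pixel, then
    smooth with a sigma = 3 pixel Gaussian."""
    nx = int(np.ptp(x_pc) / pixel_pc) + 1
    ny = int(np.ptp(y_pc) / pixel_pc) + 1
    img, _, _ = np.histogram2d(x_pc, y_pc, bins=[nx, ny])
    return gaussian_filter(np.minimum(img, clip), sigma_pix)

# contour levels of Fig. 8: 2 (outer) and 100 (inner) stars per pixel
levels = [2, 100]
```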
Globular clusters can just be resolved at the distances of these interacting pairs, and distortions of their outer layers would be impossible to observe. However, the GCs produced by more extreme cases of bulge shocking (which should be more common in this environment than in the Milky Way) will have large eccentricities even at the half mass radius and should have larger effective radii. Both relatively gentle and gradual tidal distortions and more dramatic destructive shocking must play a role in shaping the distribution of new globulars. A statistical study of the globular cluster distribution as a function of the age of the merger product would provide constraints on its evolution. Some of the massive young star clusters in the merger product NGC 1569 seem to have relatively high ellipticities \[O'Connell, Gallagher, & Hunter 1994\]. Furthermore, some of the young inner clusters in NGC 7252 have larger effective radii than any outer clusters in the same galaxy \[Miller et al., 1997\]. Constraints based on studies of the ellipticity distributions of galaxies of various types could lead to a better understanding of the formation and evolution of globulars in the Milky Way and in general. ## Acknowledgments We are grateful to M. Warren for making available his Tree-code. We also thank S. Hunsberger for providing the code used to estimate the Galactic background of disk and halo stars. This work was partially supported by NSF 93-57219 (NYI) grant to PL and by NSF grant 95-29242 awarded to JCC.
# Crossover properties from random percolation to frustrated percolation ## I Introduction The cluster approach was introduced by Kasteleyn and Fortuin (KF) and Coniglio and Klein (CK) in ferromagnetic spin systems. In the Coniglio-Klein approach, one puts a bond between two nearest neighbor (NN) spins with probability $`p=1-e^{-2\beta J}`$ if the spins are parallel, where $`J`$ is the spin interaction and $`\beta =1/k_BT`$, and with probability zero if they are antiparallel. In this way a bond configuration $`C`$ on the lattice has a statistical weight $$W(C)=e^{\mu b(C)}q^{N(C)},$$ (1) where $`\mu =\mathrm{log}\left(\frac{p}{1-p}\right)=\mathrm{log}(e^{2\beta J}-1)`$ is the chemical potential of the bonds, $`b(C)`$ is the number of bonds and $`N(C)`$ the number of clusters of the configuration $`C`$, and $`q`$ is the multiplicity of the spins ($`q=2`$ for Ising spins). This defines a percolation model, in which clusters percolate at the ferromagnetic critical point, with the same critical indices as the original spin model. The KF-CK approach has been extended also to frustrated spin models, as for example spin glasses. In these models the disorder is produced by a quenched distribution of antiferromagnetic ($`-J`$) and ferromagnetic ($`+J`$) interactions on the lattice. Just like in the ferromagnetic case, one puts a bond between two NN spins if they satisfy the interaction, with a probability $`p=1-e^{-2\beta J}`$. The main difference here is that, due to frustration, the spins cannot satisfy simultaneously all the interactions on the lattice. More specifically, if a closed path on the lattice contains an odd number of antiferromagnetic interactions, not all the interactions belonging to the path can be satisfied simultaneously. Such a path is called a “frustrated loop”, and since bonds can be put only between spins that satisfy the interactions, a frustrated loop cannot be completely occupied by bonds. Therefore, the statistical weight of a bond configuration $`C`$ will now be $$W(C)=\{\begin{array}{cc}e^{\mu b(C)}q^{N(C)}\hfill & \text{if }C\text{ is not frustrated,}\hfill \\ 0\hfill & \text{if }C\text{ is frustrated,}\hfill \end{array}$$ (2) where $`C`$ is said to be frustrated if it contains one or more frustrated loops completely occupied by bonds. It has been shown by renormalization group methods on a hierarchical lattice that this model exhibits two phase transitions, for every value of the multiplicity $`q`$ of the spins. The first transition, at a temperature $`T_{SG}(q)`$, is in the universality class of the Ising SG transition, while the other transition, at a temperature $`T_p(q)>T_{SG}(q)`$, is a percolation transition in the universality class of the ferromagnetic $`q/2`$-state Potts model. The frustrated percolation model has proven to be a suitable model for the study of complex systems, such as spin glasses, glasses and in general all those systems in which connectivity and frustration play a fundamental role. The aim of the present paper is the study of the percolation transition of the model with $`q=1`$, the “bond frustrated percolation model”, for a variable density $`\pi `$ of antiferromagnetic interactions in the interval $`0\le \pi \le 0.5`$. We determine the critical probability $`p_c(\pi )`$ and the critical exponents $`\nu (\pi )`$, $`\beta (\pi )`$, and $`\gamma (\pi )`$, by performing Monte Carlo simulations on lattices of different size, and using scaling laws (see Sect. III).
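Before turning to the results, a small illustration of the frustration constraint may help. The Python sketch below (ours; it is not the simulation code) draws quenched $`\pm J`$ couplings with antiferromagnetic density $`\pi `$ and flags the frustrated elementary plaquettes, i.e. those whose four couplings multiply to $`-1`$; note that the model itself forbids completely occupied frustrated loops of any length, of which plaquettes are only the shortest example.

```python
import numpy as np

def quenched_couplings(L, pi_af, seed=0):
    """Jx[i,j]: coupling on the bond (i,j)-(i+1,j); Jy on (i,j)-(i,j+1).
    Each bond is -1 (antiferromagnetic) with probability pi_af."""
    rng = np.random.default_rng(seed)
    Jx = np.where(rng.random((L, L)) < pi_af, -1, 1)
    Jy = np.where(rng.random((L, L)) < pi_af, -1, 1)
    return Jx, Jy

def frustrated_plaquettes(Jx, Jy):
    """True where the product of the four couplings around an elementary
    plaquette is negative, i.e. an odd number of -J bonds (periodic BC)."""
    return (Jx * np.roll(Jx, -1, axis=1)        # the two x-bonds
            * Jy * np.roll(Jy, -1, axis=0)) < 0  # the two y-bonds

Jx, Jy = quenched_couplings(L=64, pi_af=0.5)
print(frustrated_plaquettes(Jx, Jy).mean())      # ~0.5 for pi = 0.5
```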
Finally the results are compared with the theoretical predictions for the two extreme cases: the pure ferromagnetic case $`\pi =0`$, which corresponds to random bond percolation ($`1/\nu =0.75`$, $`\beta /\nu =0.1042`$, $`\gamma /\nu =1.7917`$), and the symmetric case $`\pi =0.5`$, which corresponds to the $`1/2`$-state Potts model ($`1/\nu =0.5611`$, $`\beta /\nu =0.08276`$, $`\gamma /\nu =1.8346`$). ## II Definition of the frustrated percolation model The bond frustrated percolation model can be defined in the following way. Consider a two-dimensional square lattice, with NN interactions between sites. These interactions can be ferromagnetic with probability $`1-\pi `$ and antiferromagnetic with probability $`\pi `$. The distribution of interactions is quenched, so it is set at the beginning and does not evolve with time in the dynamics of the system. Each edge of the lattice, connecting a pair of NN sites, can be occupied by a bond or not, and the state of the system is completely specified by the bond configuration. We give to a bond configuration $`C`$ a statistical weight $$W(C)=\{\begin{array}{cc}e^{\mu b(C)}\hfill & \text{if }C\text{ is not frustrated,}\hfill \\ 0\hfill & \text{if }C\text{ is frustrated,}\hfill \end{array}$$ (3) where $`\mu =\mathrm{log}\left(\frac{p}{1-p}\right)`$, $`p`$ is a probability that is connected to the temperature via the relation $`p=1-e^{-2\beta J}`$, and $`b(C)`$ is the number of bonds of the configuration $`C`$. The presence of frustration induces a complex behavior in both the static and dynamic properties of the system, with the presence of many metastable states, and high free energy barriers separating them. For $`\pi =0`$ the model coincides with random percolation. For $`\pi =1/2`$, that is, equal density of ferromagnetic and antiferromagnetic interactions, the system is expected to have two phase transitions. The first, at lower temperature, is in the same universality class as the Ising spin glass transition, which in two dimensions is at $`T=0`$, that is, at $`p=1`$. Below this temperature the system is frozen, ergodicity is broken and the system remains trapped in a finite region of phase space. The second transition, at a higher temperature (lower probability), is the percolation transition of the clusters of bonds, and belongs to a different universality class, namely that of the ferromagnetic $`1/2`$-state Potts model. We will study the percolation transition as a function of the density of antiferromagnetic interactions $`\pi `$. We will see that a very small amount of antiferromagnetic interactions is already sufficient to change the universality class of the transition. ## III Monte Carlo results We have studied the percolation transition of the model on a 2D square lattice with periodic boundary conditions, for different values of $`\pi `$ ($`0\le \pi \le 0.5`$) and for different lattice sizes $`L`$ ($`L=16,24,32,40`$, occasionally larger). For each size of the lattice a configuration of interactions is first produced by setting randomly on the lattice a number $`N_a`$ of antiferromagnetic interactions and $`N_f=2L^2-N_a`$ of ferromagnetic interactions. The corresponding density of antiferromagnetic interactions is given by $`\pi =N_a/2L^2`$. We have determined the critical probability of percolation $`p_c(\pi )`$ and the critical exponents $`\nu (\pi )`$, $`\gamma (\pi )`$, and $`\beta (\pi )`$, by performing Monte Carlo simulations and using scaling laws.
In order to generate bond configurations with the appropriate weights we used an algorithm devised by Sweeny, suitably modified to handle the occurrence of frustration. For each $`\pi `$ the quantities of interest were averaged over many configurations of interactions. For each value of $`\pi `$ the critical probability of percolation has been determined using the following scaling law for the probability $`P_{\infty }`$ of a spanning cluster being present on the lattice $$P_{\infty }(L,p)=\stackrel{~}{P}_{\infty }[L^{1/\nu }(p-p_c)],\qquad \text{for }L\to \infty ,\ p\to p_c.$$ (4) Therefore $`p_c`$ is found as the point where the curves of $`P_{\infty }`$ as a function of $`p`$, for different lattice sizes, intersect. In Fig. 1 are shown the curves $`P_{\infty }(L,p)`$ for $`\pi =0.5`$, and in Fig. 2 the values obtained for $`p_c(\pi )`$. The intersection point is clearly singled out. This procedure enabled us to determine $`p_c`$ with an uncertainty of $`\pm 0.001`$, or in the worst cases of $`\pm 0.002`$. The simulations were performed on lattices of size $`L=16`$, 24, 32, 40; for each $`L`$ about $`500`$ MC steps were produced to thermalize the system whereas $`20,000`$–$`30,000`$ MC steps were used to average. Moreover for each value of $`\pi `$ we have averaged over a number $`n`$ of configurations of interactions ranging from $`n=80`$ for $`L=16`$ to $`n=30`$ for $`L=40`$. The scaling law in Eq. (4) enables us to get the value of the exponent $`1/\nu `$ as well, once $`p_c`$ has been determined, by choosing the value which gives the best data collapse of the curves (see Fig. 3). In Fig. 4 we give the values of $`1/\nu (\pi )`$ obtained. As $`\pi `$ increases from zero to positive values, a sudden change in the universality class of the transition can be seen. A crossover region extending from $`\pi =0`$ to $`\pi \simeq 0.05`$ occurs. From this point on, the value of $`\nu `$ can be considered in agreement with the predicted value for the $`1/2`$-state Potts model ($`1/\nu =0.56`$, marked by the horizontal straight line). The errors $`\mathrm{\Delta }(1/\nu )`$ and $`\mathrm{\Delta }p_c`$ were computed as the amplitudes of the intervals in the values of $`1/\nu `$ and $`p_c`$ for which a good data collapse was obtained. We remark that these errors do not take into account the finiteness of the system. Thus the values for the critical exponents must be regarded as effective values which would give correct results only in the asymptotic limit ($`L\to \infty `$). The mean cluster size $`\chi `$ is defined as $$\chi =\frac{1}{V}\sum _ss^2n_s,$$ (5) where $`V=L^2`$ is the number of sites on the lattice, $`n_s`$ is the number of clusters of size $`s`$, and the sum extends over the cluster sizes $`s`$. In the thermodynamic limit $`L\to \infty `$ the mean cluster size diverges as $`|p-p_c|^{-\gamma }`$ when the probability $`p`$ approaches its critical value. For finite systems, $`\chi `$ obeys a finite size scaling law $$\chi (L,p)=L^{\gamma /\nu }\stackrel{~}{\chi }[L^{1/\nu }(p-p_c)]\qquad \text{for }L\to \infty ,\ p\to p_c.$$ (6) The density of the largest cluster $`\rho _{\infty }`$ plays the role of the order parameter in the system, being zero for $`p<p_c`$ in the thermodynamic limit, and $`\rho _{\infty }\sim (p-p_c)^\beta `$ for $`p>p_c`$.
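Before turning to the scaling of the order parameter, the crossing and collapse analysis just described can be sketched schematically as follows (our own illustration, not the production code; `curves` is assumed to map each lattice size $`L`$ to $`P_{\infty }`$ sampled on a common grid of $`p`$ values).

```python
import numpy as np

def crossing_point(p, P_small, P_large):
    """p where two spanning-probability curves cross, found by linear
    interpolation at the first sign change of their difference."""
    d = P_large - P_small
    i = int(np.flatnonzero(np.diff(np.sign(d)))[0])
    t = d[i] / (d[i] - d[i + 1])
    return p[i] + t * (p[i + 1] - p[i])

def collapse_cost(curves, p, pc, inv_nu):
    """Roughness of all curves plotted against x = L^{1/nu}(p - pc);
    good (pc, 1/nu) values minimize this, as in the collapse of Fig. 3."""
    xs = np.concatenate([L ** inv_nu * (p - pc) for L in curves])
    ys = np.concatenate([curves[L] for L in curves])
    order = np.argsort(xs)
    return float(np.sum(np.diff(ys[order]) ** 2))
```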
This quantity as well obeys a finite size scaling law $$\rho _{\infty }(L,p)=L^{-\beta /\nu }\stackrel{~}{\rho }_{\infty }[L^{1/\nu }(p-p_c)]\qquad \text{for }L\to \infty ,\ p\to p_c.$$ (7) Therefore simulating the system at the computed value of $`p_c`$ enables us to get $`\gamma `$ and $`\beta `$ from a log-log plot of the relations $`\chi (L,p=p_c)\sim L^{\gamma /\nu }`$, $`\rho _{\infty }(L,p=p_c)\sim L^{-\beta /\nu }`$. In Fig. 5 one such plot is shown, for $`\pi =0`$, while in Fig. 6 and Fig. 7 are shown the values of $`\gamma /\nu `$ and $`\beta /\nu `$ obtained, for different values of $`\pi `$. The horizontal straight lines mark the predicted values of the exponents, $`\gamma /\nu =1.83`$ and $`\beta /\nu =0.083`$, corresponding to the ferromagnetic $`1/2`$-state Potts model. We see that for $`\pi >0.05`$ the computed values of the exponents are in good agreement with the prediction. These results are consistent with the picture that there are two universality classes: random percolation at $`\pi =0`$, and frustrated percolation for $`\pi >0`$. The data at our disposal cannot exclude, however, the possibility that the random percolation behavior extends from $`\pi =0`$ to a value $`\pi ^{*}`$ smaller than $`0.05`$. ## IV Conclusions We have investigated the percolation transition of the asymmetric frustrated percolation model in two dimensions by using Monte Carlo simulation. From the analysis of the critical exponents $`\nu (\pi )`$, $`\gamma (\pi )`$, $`\beta (\pi )`$ in the interval $`0\le \pi \le 0.5`$, it seems reasonable to assume that a very small concentration ($`\pi \simeq 0.05`$) of antiferromagnetic interactions is already sufficient to produce the change in the universality class of the transition. This means that the effects of disorder and frustration are important even for such small values of $`\pi `$. Our results are consistent with the existence of a sharp crossover from random percolation for $`\pi =0`$ to frustrated percolation, characterized by the exponents of the ferromagnetic $`1/2`$-state Potts model, as soon as $`\pi >0`$. However we cannot rule out numerically the presence of a tricritical point, at low values of $`\pi `$, dividing random percolation from frustrated percolation exponents.
# Corrections to scaling at the Anderson transition ## Abstract We report a numerical analysis of corrections to finite size scaling at the Anderson transition due to irrelevant scaling variables and non-linearities of the scaling variables. By taking proper account of these corrections, the universality of the critical exponent for the orthogonal universality class for three different distributions of the random potential is convincingly demonstrated. The possibility of the Anderson localization of electron states as a result of disorder was first suggested four decades ago. Following the proposal of the scaling theory of localization, attention has focused on understanding the critical properties of the Anderson transition (AT), the quantum phase transition which occurs at a critical disorder separating a diffusive metallic phase from an insulating localized phase. Our current understanding of the AT is based on the non-linear $`\sigma `$ model (NL$`\sigma `$M). This has been analyzed using an expansion in powers of $`ϵ`$, where $`d=2+ϵ`$ is the dimension of the system. According to the NL$`\sigma `$M it should be possible to classify the critical behavior using three universality classes: orthogonal, unitary and symplectic, depending on the symmetry of the Hamiltonian with respect to time reversal and spin rotation. Here we focus on the orthogonal universality class corresponding to systems with both time reversal and spin rotation symmetries. Beyond the suggestion of the appropriate universality classes, there has not been much success in making detailed predictions about the critical behavior with the NL$`\sigma `$M. The problems are well illustrated by attempts to estimate the critical exponent $`\nu `$ which describes the divergence of the correlation length $`\xi `$ at the AT. In early work it was found that $`\nu =1/ϵ`$, which gives $`\nu =1`$ when extrapolated to $`d=ϵ+2=3`$. When combined with the Wegner scaling law $`s=\nu ϵ`$ this leads to a conductivity exponent $`s=1`$. Measurements on some, but not all, materials do indeed yield $`s=1`$. However, calculations at higher orders in $`ϵ`$ produced strong corrections to the leading order when extrapolated to $`ϵ=1`$, showing that this agreement is fortuitous. There is now no accepted estimate of the exponent based on the $`ϵ`$ expansion or any other analytic technique. While the above can be regarded as a rather unfortunate technical difficulty, fears have also been expressed that there may be an infinite number of relevant operators in the NL$`\sigma `$M and that the theory may be unsound. While it now seems unlikely that this is actually the case, in this context it is nevertheless important to have independent confirmation that our understanding of the AT is correct. At present numerical simulations offer the only viable alternative. In this paper we demonstrate an important basic principle underlying our understanding of the AT: the universality of the critical properties of the AT. To do this has required us to address the principal uncertainty in previous numerical studies of the critical properties of the AT, the presence of systematic corrections to scaling in the numerical data due to the practical limitations on the sizes of system which can be studied. The computer time required in numerical studies of the AT increases very rapidly with increasing system size (as $`L^7`$ for the method used here). This sets a severe limitation on the system sizes which can be simulated.
However, systematic corrections to scaling are expected in smaller systems and their neglect leaves important question marks over the validity of any conclusions drawn from the analysis of the numerical data. Here we consider two ways in which such corrections can arise: the presence of irrelevant scaling variables and non-linearity of the scaling variables. These effects lead to systematic rather than random deviations from scaling and must be taken into account both when estimating the critical parameters and the likely accuracy of their estimation. Our work has also been inspired by the successful analyses of corrections to scaling in the Quantum Hall Effect (QHE) transition. The present problem is, however, more difficult since, unlike the QHE, the critical point is not known a priori on grounds of symmetry. The universality of the exponent for the box and Gaussian distributions of random potential was demonstrated to a limited extent in previous work, by taking account of corrections to scaling in an ad hoc manner. Here, taking account of corrections systematically, we confirm that result and extend its validity to include the Lloyd model. The Hamiltonian used in this study describes non-interacting electrons on a simple cubic lattice with nearest neighbor interactions only: $$\langle \vec{r}|H|\vec{r}\rangle =V(\vec{r}),\qquad \langle \vec{r}|H|\vec{r}+\widehat{x}\rangle =\langle \vec{r}|H|\vec{r}+\widehat{y}\rangle =\langle \vec{r}|H|\vec{r}+\widehat{z}\rangle =1.$$ Here $`\widehat{x}`$, $`\widehat{y}`$ and $`\widehat{z}`$ are the lattice basis vectors. The potential $`V`$ is independently and identically distributed with probability $`p(V)\text{d}V`$. We studied three models of the potential distribution: the box distribution $$p(V)=\{\begin{array}{cc}1/W\hfill & |V|\le W/2,\hfill \\ 0\hfill & \mathrm{otherwise},\hfill \end{array}$$ the Gaussian distribution $$p(V)=\frac{1}{\sqrt{2\pi \sigma ^2}}\mathrm{exp}\left(-\frac{V^2}{2\sigma ^2}\right),$$ with $`\sigma ^2=W^2/12`$, and the Lloyd model in which $`V`$ has a Lorentz distribution $$p(V)=\frac{W}{\pi \left(W^2+V^2\right)}.$$ For this distribution all moments higher than the mean are divergent and the parameter $`W`$ is proportional to the full width at half maximum of the distribution. For these three models we analyzed the finite size scaling of the localization length $`\lambda `$ for electrons on a quasi-one-dimensional bar of cross section $`L\times L`$. The length $`\lambda `$ was determined to within a specified accuracy using a standard transfer matrix technique. The starting point of our analysis is the renormalization group equation which expresses the dimensionless quantity $`\mathrm{\Lambda }=\lambda /L`$ as a function of the scaling variables $$\mathrm{\Lambda }=f\left(\frac{L}{b},\chi b^{1/\nu },\psi b^y\right).$$ In this equation $`b`$ is the scale factor in the renormalization group, $`\chi `$ the relevant scaling variable and $`\psi `$ the leading irrelevant scaling variable. An appropriate choice of the factor $`b`$ leads to $$\mathrm{\Lambda }=F(\chi L^{1/\nu },\psi L^y),$$ (1) where $`F`$ is a function related to $`f`$. For $`L`$ finite there is no phase transition and $`F`$ is a smooth function of its arguments. We should find $`y<0`$ if $`\psi `$ is irrelevant.
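As an aside, the transfer matrix technique mentioned above can be sketched schematically in Python (our own illustration, not the authors' code): the localization length is estimated from the smallest positive Lyapunov exponent of the transfer-matrix product, here for the box distribution with periodic transverse boundary conditions, QR re-orthogonalization at every slice, and a deliberately short run.

```python
import numpy as np

def slice_hamiltonian(L):
    """Hopping within one L x L slice of the bar (periodic BC)."""
    M = L * L
    H = np.zeros((M, M))
    for i in range(L):
        for j in range(L):
            s = i * L + j
            for t in (((i + 1) % L) * L + j, i * L + (j + 1) % L):
                H[s, t] = H[t, s] = 1.0
    return H

def localization_length(L, W, E=0.0, n_slices=2000, seed=0):
    """lambda from the smallest positive Lyapunov exponent of the
    transfer-matrix product, with box-distributed site energies.
    A short illustrative run: production accuracy needs 1e6-1e7
    slices, as stated in the text."""
    rng = np.random.default_rng(seed)
    M, H0 = L * L, slice_hamiltonian(L)
    Q, log_r = np.eye(2 * M), np.zeros(2 * M)
    for _ in range(n_slices):
        V = rng.uniform(-W / 2, W / 2, size=M)
        T = np.block([[np.diag(E - V) - H0, -np.eye(M)],
                      [np.eye(M), np.zeros((M, M))]])
        Q, R = np.linalg.qr(T @ Q)
        log_r += np.log(np.abs(np.diag(R)))
    gamma_min = np.min(np.abs(log_r / n_slices))
    return 1.0 / gamma_min

print(localization_length(4, W=16.5) / 4)   # Lambda = lambda/L, roughly ~0.6
```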
Assuming the irrelevant scaling variable is not dangerous, we make a Taylor expansion up to order $`n_I`$ $$\mathrm{\Lambda }=\sum _{n=0}^{n_I}\psi ^nL^{ny}F_n\left(\chi L^{1/\nu }\right),$$ (2) and obtain a series of functions $`F_n`$. Each $`F_n`$ is then expanded as a Taylor series up to order $`n_R`$ $$F_n(\chi L^{1/\nu })=\sum _{m=0}^{n_R}\chi ^mL^{m/\nu }F_{nm}.$$ (3) To take account of non-linearities in the scaling variables we expand both in terms of the dimensionless disorder $`w=(W_c-W)/W_c`$, where $`W_c`$ is the critical disorder separating the insulating $`(w<0)`$ and conducting $`(w>0)`$ phases. $$\chi (w)=\sum _{n=1}^{m_R}b_nw^n,\qquad \psi (w)=\sum _{n=0}^{m_I}c_nw^n.$$ (4) The orders of the expansions are $`m_R`$ and $`m_I`$ respectively. Notice that $`\chi (0)=0`$. The absolute scales of the arguments in (1) are undefined; we fix them by setting $`F_{01}=F_{10}=1`$ in (3). The total number of fitting parameters is $`N_p=(n_I+1)(n_R+1)+m_R+m_I+2`$. The qualitative nature of the corrections can be understood by looking at some special cases. First let us suppose that non-linearities are absent ($`m_R=1`$ and $`m_I=0`$) and truncate (2) at $`n_I=1`$ $$\mathrm{\Lambda }=F_0\left(\chi L^{1/\nu }\right)+\psi L^yF_1\left(\chi L^{1/\nu }\right).$$ From this equation we can infer that the estimate of the critical disorder, and possibly also the critical exponent, will appear to shift in a systematic way as the size of the system increases. To exhibit scaling it is necessary to subtract the corrections due to the irrelevant scaling variable. When $`n_I=1`$ we define $$\mathrm{\Lambda }_{\mathrm{corrected}}=\mathrm{\Lambda }-\psi L^yF_1\left(\chi L^{1/\nu }\right),$$ (5) with the obvious generalization when $`n_I>1`$. We then have $$\mathrm{\Lambda }_{\mathrm{corrected}}=F_\pm \left(\frac{L}{\xi }\right).$$ (6) The functions $`F_\pm `$ are defined by $`F_\pm (x)=F_0(\pm (\xi _\pm x)^{1/\nu })`$. In this case the correlation length $`\xi `$ has a simple power law dependence on the dimensionless disorder, $`\xi =\xi _\pm \left|w\right|^{-\nu }`$. The constants $`\xi _\pm `$ are not normally determined in finite size scaling studies. On the other hand, if we neglect the irrelevant variable and consider only non-linearity in the scaling variable we find $`\mathrm{\Lambda }=F_\pm \left(L/\xi \right)`$ without the need to subtract any corrections. No systematic shift of the estimated critical point should occur as the system size is increased. However, the correlation length $`\xi `$ no longer has a simple power law dependence on $`w`$ but behaves as $`\xi =\xi _\pm \left|\chi \right|^{-\nu }`$. The critical exponent $`\nu `$, the irrelevant exponent $`y`$ and the functions $`F_n`$ are expected to be universal, while the coefficients $`\{b_n\}`$ and $`\{c_n\}`$ are not. Though we have explicitly considered corrections due to the leading irrelevant scaling variable only, the analysis can easily be extended to several such variables. In the simulation $`\lambda `$ was evaluated as a function of disorder $`W`$ for a range of system sizes $`L`$. The best fit was determined by minimizing the $`\chi ^2`$ statistic. This is justified if we suppose a uniform prior probability for all parameters, and that the deviations between the model and the simulation data are purely random in origin and distributed following a Gaussian distribution. This last assumption is also important in determining the likely accuracy to which the critical parameters have been estimated.
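To illustrate how the expansions (2)-(4) become a concrete fitting function, here is a schematic Python sketch for one particular truncation ($`n_I=1`$, $`n_R=2`$, $`m_R=2`$, $`m_I=0`$); the parameter names and starting values are ours, and the arrays `L`, `W`, `Lam`, `dLam` are assumed to hold the raw (size, disorder, $`\mathrm{\Lambda }`$, error) data points.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, L, W):
    """Lambda(L, W) from Eqs. (2)-(4) truncated at n_I = 1, n_R = 2,
    m_R = 2, m_I = 0, with F01 = F10 = 1 fixing the scales."""
    Wc, nu, y, b2, c0, F00, F02, F11, F12 = params
    w = (Wc - W) / Wc
    chi = w + b2 * w ** 2               # relevant field, b1 = 1 absorbed
    psi = c0                            # constant irrelevant field (m_I = 0)
    x = chi * L ** (1.0 / nu)
    F0 = F00 + x + F02 * x ** 2         # F01 = 1
    F1 = 1.0 + F11 * x + F12 * x ** 2   # F10 = 1
    return F0 + psi * L ** y * F1

def residuals(params, L, W, Lam, dLam):
    return (model(params, L, W) - Lam) / dLam

# L, W, Lam, dLam: flat arrays of the raw data points (assumed available)
# p0 = [16.5, 1.57, -3.0, 0.0, 0.1, 0.58, 0.0, 0.0, 0.0]
# fit = least_squares(residuals, p0, args=(L, W, Lam, dLam))
```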
Since the inclusion of corrections to scaling allows for systematic rather than just random deviations from scaling in the numerical data, this assumption is more reasonable here than when corrections to scaling are neglected. Therefore we expect the estimates of the accuracy of the critical exponent etc. to be more reliable. The model (2)-(4) is nonlinear in some parameters, so the goodness of fit $`Q`$ has been checked using a Monte Carlo technique and the confidence intervals evaluated by re-sampling. The inclusion of the corrections in (2)-(4) leads to a rapid increase in the number of fitting parameters, and high quality data are essential if meaningful fits are to be obtained. All data used here have an accuracy of either $`0.1\%`$ or $`0.05\%`$. To achieve this accuracy between $`10^6`$ and $`10^7`$ iterations in the transfer matrix method were required. When deciding which correction terms to include we attempted to maximize the goodness of fit $`Q`$ while keeping the number of correction terms to a minimum. The details of the simulations and the types of fit used are listed in Table I. The estimated critical parameters and their confidence intervals are given in Table II. Some typical data are displayed in Fig. 1. To exhibit scaling the data are re-plotted after subtraction of the appropriate corrections in Fig. 2. The corrected data now fall on a single curve, clearly exhibiting scaling in agreement with (6). The magnitude of the corrections needed to obtain the scaling shown in Fig. 2 is of the order of $`2\%`$ or so for the smallest system size, decreasing to around $`0.3\%`$ for the largest system size. The most important point to be drawn from Table II is that the estimates of the exponent $`\nu `$ for the three different disorder distributions are in almost perfect agreement. The same is true for the estimates of the critical parameter $`\mathrm{\Lambda }_c`$. This is strong evidence in favor of the universality of the critical exponent and other critical parameters. How do the results of the present analysis compare with those obtained when corrections to scaling are neglected? In Table III we give the results obtained for the same potentials neglecting corrections. The first thing to notice is that the range of system sizes (and for the box distribution, the range of $`W`$) for which an acceptable fit ($`Q>0.1`$) can be achieved is very limited. After discarding data for the smaller system sizes, reasonable agreement is obtained between Tables II and III. However, given the more limited range of system sizes, the estimates of the accuracy to which the critical parameters have been determined when corrections are neglected are too optimistic. The problem is more evident when looking at less accurate, e.g. $`0.2\%`$, data for the box distribution. Ignoring corrections to scaling, it was found that $`W_c=16.45\pm .01`$, $`\mathrm{\Lambda }_c=0.586\pm .001`$ and $`\nu =1.59\pm 0.03`$. The estimates of $`W_c`$ and $`\mathrm{\Lambda }_c`$ are not consistent with Table II. The effect which gives rise to this inconsistency can also be seen in the data for the Lloyd model displayed in Fig. 1. A systematic shift of the apparent critical disorder to a lower value as the system size increases is evident. For the box and Gaussian distributions the shift was found to be in the opposite sense, to higher disorder.
It seems likely that any analysis which assumes that deviations from scaling are of purely random origin, rather than allowing for systematic corrections such as those considered here, will lead to an over optimistic estimate of the accuracy to which the critical point has been determined, and even to an incorrect determination of the critical point. In contrast the estimate of the critical exponent is quite consistent with that in Table II. Of course, the precise location of the critical point in any particular model is not in itself very important, but any inaccuracy in its estimate also affects the estimate of the critical conductance distribution. We should also mention that surface effects and the influence of boundary conditions may also give rise to corrections to scaling behavior. We have used periodic boundary conditions to minimize surface effects. Even so there may remain some influence of the boundary conditions. To quantify this we have evaluated $`\lambda `$ for some representative values of the parameters for fixed (fbc), periodic (pbc) and anti-periodic boundary conditions (apbc). A large, statistically significant shift in the localization length $`\lambda `$ was found between fbc and pbc. However, even when calculating at a higher accuracy of $`0.02\%`$ no such difference between pbc and apbc was found. We therefore think it reasonable to neglect such corrections here. We have presented a numerical study of the Anderson transition in three dimensions in which systematic corrections to scaling have been explicitly taken into account when estimating the critical disorder and other critical parameters. The universality of the critical exponent with respect to the choice of the distribution of disorder has been accurately verified. While in this paper we concentrated on the scaling of the correlation length in the three dimensional Anderson model, corrections of a similar nature may also be important in finite size scaling analyses of the conductance distribution and energy level statistics. The method we have described here is applicable in these cases and, indeed, to any continuous quantum phase transition. Part of this work has been carried out on supercomputer facilities at the Institute for Solid State Physics, University of Tokyo.
# Study of a microcanonical algorithm on the $`\pm J`$ spin glass model in $`d=3`$ ## 1 Introduction The Ising spin glass is the paradigm of complex systems. It possesses two fundamental characteristics: disorder, because the couplings are random variables, and frustration, since the signs of the couplings can be positive or negative. These two characteristics produce the main property of a complex system: a very slow dynamics. This slow dynamics is due to the existence and competition of a large number of pure and metastable states below the critical temperature. In some cases, a large number of metastable states above the critical temperature can produce this effect even in the paramagnetic phase. During the last two decades, the existence of a low-temperature phase in the Ising spin glass in three dimensions has been investigated, and this has consumed a large amount of CPU resources. At equilibrium it is possible to simulate up to $`L=16`$ lattices, and the signature of the transition is very weak. In previous work a behaviour $`z(T)\simeq 7T_c/T`$ was found for the dynamical critical exponent, where $`T_c\simeq 1`$ is the critical temperature and $`T`$ is the temperature, for the three dimensional Gaussian Ising spin glass using a Metropolis dynamics. This numerical fact has been corroborated in experiments using samples of CuMn (at 6%) and thiospinel. This result for the dynamical critical exponent implies Monte Carlo (MC) thermalization times proportional to $`L^7`$ near the phase transition ($`L`$ is the size of the system). One can compare this behaviour with that of the pure Ising model: there the thermalization time diverges as $`L^2`$ (near its critical temperature). From the previous discussion it follows that traditional approaches using only local algorithms should fail in thermalizing a large system. Moreover, the absence of a non local update algorithm and the high value of the dynamical critical exponent for the local ones make this problem a very challenging computational issue. Recent works using large amounts of computational power with a standard local MC simulation showed some evidence of a cold phase in this model. Other recent approaches in similar models have used more sophisticated update algorithms, based on a combination of a standard local MC run and an innovative update process in the temperature. These algorithms, simulated tempering and parallel tempering, succeed in thermalizing systems at temperatures lower than those reachable by a standard Metropolis simulation. Moreover, in a canonical simulation, the most time-consuming task is the generation of the random numbers, and so one possibility is to use a MC method that does not use random numbers. This calls for microcanonical methods. In particular, Creutz developed the so-called demon algorithm, which does not need random number generation to work. The aim of this paper is to investigate the behavior of this microcanonical local update algorithm on this model. In particular, we will study numerically the ergodicity of the algorithm, its efficiency (i.e. autocorrelation times), the difference with the canonical algorithm when we work at finite volume, and how these differences go to zero with the volume.
It is clear that a study of this kind is essential if we want to use the microcanonical algorithm in extensive numerical simulations. Hence, in this paper we will use only this algorithm, although it is clear that other tools, such as parallel tempering, have to be implemented to improve the simulation in order to try to elucidate the low temperature regime of this model. As a further study we plan to analyze the performance of a combination of microcanonical and canonical algorithms. This combination has worked very well in the simulation of some physical systems, such as Quantum Chromodynamics, and it could be of great interest to check if this combination will work well in spin glasses. Moreover, one of the authors of this paper is finalizing the construction of a dedicated machine to simulate this model; this paper is a preliminary study of the characteristics of the demon algorithm, prior to its hardware implementation. We remark that this algorithm could also be used on general purpose computers, not only in dedicated machines. ## 2 Model, observables and update algorithm The $`\pm J`$ spin glass model is defined by the Hamiltonian $$H\equiv -\sum _{<i,j>}\sigma _iJ_{ij}\sigma _j,$$ (1) where the spins $`\sigma _i`$ take values $`\pm 1`$. The nearest neighbor quenched couplings $`J_{ij}`$ take values $`\pm 1`$ with equal probability. The spins live on a cubic lattice containing $`V=L^3`$ sites. We have used helicoidal boundary conditions in two directions and periodic in the third one. The reason is that we wanted to test the special purpose computer developed for this physical model. As usual, for every realization of the bonds (or sample) two independent copies of the system are studied. The main quantity to be measured is the overlap between the two copies with the same disorder, which acts like an order parameter for this model. The overlap between two spin configurations $`\sigma `$ and $`\tau `$ is given by $$q(\sigma ,\tau )\equiv \frac{1}{V}\sum _i\sigma _i\tau _i,$$ (2) which is usually denoted $`q`$. Using powers of this quantity one can compute different observables. The second and fourth powers are used to build the Binder parameter $$g\equiv \frac{1}{2}\left[3-\frac{\overline{\langle q^4\rangle }}{\overline{\langle q^2\rangle }^2}\right],$$ (3) where $`\langle \mathrm{}\rangle `$ stands for the thermal average for a given realization of the bonds, and $`\overline{(\mathrm{})}`$ means the average over the disorder. Since this quantity is dimensionless, it obeys the finite size scaling law (near the critical point) $$g=\stackrel{~}{g}\left(L^{1/\nu }\left(T-T_c\right)\right),$$ (4) being independent of the volume at the critical temperature, $`T=T_c`$. This property makes it appropriate for investigating the existence of any spin glass phase transition by studying the intersections of the functions $`g(T)`$ for different lattice sizes. In addition, one can compute the spin glass susceptibility, $$\chi \equiv V\overline{\langle q^2\rangle }.$$ (5) The algorithm we want to investigate is the demon algorithm proposed by Creutz. For this microcanonical algorithm, the physical system is the standard lattice plus a demon, which acts like an entity able to store energy. The update algorithm keeps constant the sum of the energy of the lattice and the demon. In order to carry out the MC simulation for a given total energy $`H`$, one can start from a spin configuration with that energy and the demon energy equal to zero. To generate new spin configurations, the spins are updated as follows: first, a spin is selected and a flip of its sign is proposed. If the flip lowers the spin energy, the demon takes that energy and the flip is accepted. On the other hand, if the flip raises the spin energy, the change is only made if the demon has that energy to give to the spin.
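A minimal sketch of the update just described (our own illustration, not the code run on the dedicated machine): one sequential demon sweep for the 3d $`\pm J`$ model, with periodic boundary conditions in all three directions for simplicity, together with the temperature estimator of Eq. (7) below.

```python
import numpy as np

def demon_sweep(spins, Jx, Jy, Jz, E_d):
    """One sequential sweep of Creutz's demon update for the 3d +-J model.
    No random numbers are needed: a flip is accepted whenever the demon
    can pay for (or absorb) the energy change, so E_d >= 0 is preserved."""
    L = spins.shape[0]
    for i in range(L):
        for j in range(L):
            for k in range(L):
                # local field from the six neighbors;
                # Jx[i,j,k] couples site (i,j,k) to (i+1,j,k), etc.
                h = (Jx[i, j, k] * spins[(i + 1) % L, j, k]
                     + Jx[i - 1, j, k] * spins[i - 1, j, k]
                     + Jy[i, j, k] * spins[i, (j + 1) % L, k]
                     + Jy[i, j - 1, k] * spins[i, j - 1, k]
                     + Jz[i, j, k] * spins[i, j, (k + 1) % L]
                     + Jz[i, j, k - 1] * spins[i, j, k - 1])
                dE = 2 * spins[i, j, k] * h   # energy cost of the flip
                if dE <= E_d:                 # demon pays (or absorbs) dE
                    spins[i, j, k] *= -1
                    E_d -= dE
    return E_d

def temperature_from_demon(mean_E_d):
    """T from Eq. (7) below: beta = (1/4) log(1 + 4/<E_d>)."""
    return 4.0 / np.log1p(4.0 / mean_E_d)
```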
At this level, the value of the temperature $`T`$ is unknown and it can be obtained from the demon energy, whose probability distribution $`p(E_d)`$ is given by the expression $$p(E_d)\propto e^{-\beta E_d}.$$ (6) A fit to this function can provide the $`\beta `$ value, although a better estimate can be calculated if the mean energy of the demon on the sample, $`\langle E_d\rangle `$, is computed. Thus, the $`\beta `$ value is obtained as $$\beta =\frac{1}{T}=\frac{1}{4}\mathrm{log}\left(1+\frac{4}{\langle E_d\rangle }\right).$$ (7) In a spin glass model, given the energy of the simulation, a different value of the temperature is obtained for every sample. The average of them all gives the temperature of the simulation corresponding to the fixed energy. ## 3 Numerical results We have simulated several different energies per spin $`e`$ in three lattice sizes ($`L=8,12`$ and $`16`$). For every pair of parameters $`(L,e)`$ we generate a large set of samples. The initial demon energies and spin configurations are chosen to give a total energy $`H=eV`$. In order to generate them, we start with $`E_d=0`$ and all the spins equal to 1. We use the demon algorithm to change the spin configuration, but when $`E_d>4`$ we remove 4 units of energy from the demon with a probability of 50%. In this way, we approach smoothly the energy desired for the simulation. When the required energy is reached, the simulation starts. The spins to be updated are chosen sequentially, so that we have a completely updated spin configuration after every $`V`$ updates. Table 1 shows the parameters of the different runs: lattice size, energy per spin, number of samples and total number of Monte Carlo sweeps are shown in the first columns. The next one shows the thermalization time $`t_0`$ (we will discuss in detail below how we have computed the thermalization time). The last column is the number of measures of the overlap considered in every run to compute the thermodynamical average. The results obtained in these runs are shown in Table 2. We have computed the temperature according to Eq. (7). The second and fourth powers of the overlap have also been calculated to obtain the Binder cumulant. The work has been carried out on the RTNN computer, which holds 32 PentiumPro processors, for a total CPU time of approximately 20 days of the whole machine. The errors in the estimates of the observables have been calculated with the jack-knife method. In order to be sure that the system is thermalized before measuring, we check the symmetry of the probability distribution of the overlap and also the MC evolution of the spin glass susceptibility. For every run reported in this paper the mean value of the overlap is zero (within the statistical error). Moreover, we have checked the symmetry around zero of the probability distributions of the overlap. The MC evolution of the spin glass susceptibility is plotted in figure 1. Every point in the plot has been computed by averaging the values of the overlaps only at its corresponding MC time. One expects to see the susceptibility rising with the Monte Carlo time until a plateau is reached. The beginning of this plateau defines the thermalization time $`t_0`$; we use this criterion for the thermalization. As we said above, the temperature corresponding to the simulation can be computed by using Eq. (7). For every sample we obtain its own temperature. Figure 2 shows the normalized probability distribution of temperatures obtained for the highest and the lowest cases in both $`L=8`$ and $`L=12`$ lattices. Note the width of the histograms. ## 4 Demon vs.
Canonical: a comparison The results presented in Table 2 can be compared with those previously obtained in a canonical simulation running a heat bath update algorithm. Moreover, we have performed canonical numerical simulations (using a Metropolis algorithm and periodic boundary conditions) in order to run just at the temperature given by the demon algorithm (and hence compare at the same temperatures). We report the parameters of these canonical simulations in Table 3 (the columns are: lattice size, temperature, number of samples, number of Monte Carlo sweeps, thermalization time ($`t_0`$) and number of measures) and their results in Table 4. In addition to this, in Table 4 we have computed the difference between the Binder cumulant computed in the demon simulation and the canonical one. Fig. 3 shows the second and fourth moments of the overlap. We take as reference the heat bath data from Kawashima and Young and our own Metropolis data. As a check of our Metropolis simulation, it is clear that our data match very well (within one standard deviation) those from Kawashima and Young. Now we can confront the demon data with the canonical ones. While the squared overlap seems to fit perfectly the data obtained with heat bath and Metropolis, the fourth power of the overlap differs significantly, the microcanonical data being lower than the canonical ones. The discrepancy between the two ensembles decreases with the lattice size. These two quantities are used to calculate the Binder parameter, which is plotted in Fig. 4. The canonical simulation gives a cut point between the two Binder parameters. In our microcanonical simulation both Binder parameters approach each other at low temperature. The data converge to compatible values within the error bars, but no cut point is resolved using $`L=8`$ and $`L=12`$ data (we have simulated $`L=16`$ data with the demon algorithm in the region $`T\simeq T_c=1.11`$). To be sure of the correctness of the algorithm we have carried out some extra runs at $`e=-1.650`$ and $`L=8`$. One of them used periodic boundary conditions (to check the effect of periodic and helicoidal boundary conditions on the observables using the demon algorithm). In addition, to check the ergodicity of the algorithm, we repeated the simulation with the same realizations of the disorder but starting from spin configurations obtained from a thermalized heat bath simulation. In both cases, we obtained compatible results. The canonical and microcanonical ensembles must agree in the thermodynamical limit. The discrepancies between them have to decrease as the volume goes to infinity. To check this, we have simulated $`e=-1.650`$ at $`L=16`$. In this case, we obtained $`g=0.640(4)`$, nearer to the Binder cumulant coming from the canonical simulation than the $`L=8`$ and $`L=12`$ cases. We have seen that the discrepancy of the Binder cumulant (sixth column of Table 4) for $`e=-1.650`$ goes to zero following a power law (with $`\chi ^2/\mathrm{d}.\mathrm{o}.\mathrm{f}.\simeq 1`$, where d.o.f. stands for degrees of freedom): $`\mathrm{\Delta }g(e=-1.65)\propto L^{-0.86(36)}`$. We can repeat this procedure for other energies. For instance, if $`e=-1.700`$ then $`\mathrm{\Delta }g(e=-1.70)\propto L^{-0.49(43)}`$ with $`\chi ^2/\mathrm{d}.\mathrm{o}.\mathrm{f}.=1.56`$, with a confidence level of 21%.<sup>1</sup><sup>1</sup>1The confidence level is the probability that $`\chi ^2`$ would be greater than the observed value, assuming that the statistical model used is correct (in our case the power law behaviour). A very low value of this confidence level (e.g. $`<5\%`$) would imply that our statistical model is incorrect.
In any case, a more detailed study of this issue is needed. We can study the previous issue in more detail by comparing the probability distributions of the overlap, its second power, and its fourth power obtained with the demon algorithm at $`e=-1.650`$. Moreover, we have also measured the previous three probability distributions by carrying out a heat bath simulation at temperature $`T=1.272`$ and $`L=8`$ with the same sets of bonds and number of iterations as the case $`e=-1.650`$. We show these probability distributions in Fig. 5(a). The different shape of the distributions is clarified in the plots of the powers of the overlap, the demon distribution being more peaked than the canonical one. Another interesting issue is to compare the thermalization times needed in the Metropolis simulation and in the demon algorithm. All these thermalization times have been reported in the fifth column of Tables 1 and 3. We remark that these times have been obtained by monitoring the growth of the non linear susceptibility as a function of the Monte Carlo time. We take as $`t_0`$ (in the canonical as well as in the microcanonical simulations) the Monte Carlo time at which the numerical data achieve the equilibrium plateau (see figures 1 and 6). From the values of $`t_0`$ for the lowest temperatures it follows that the demon algorithm thermalizes more slowly (by a factor between 2 and 3) than Metropolis. Obviously the cost of introducing a random number generator (i.e. of running a canonical simulation instead of a microcanonical one) is less than the factor of two to three found in the autocorrelation times, and so one of the conclusions of this paper is that (on a general purpose computer) the efficiency of the canonical algorithm is higher than that of the microcanonical one (which has larger thermalization times). It is clear that a dedicated machine with programmable logic (running the microcanonical algorithm) with a speed 16 times higher than a supercomputer (running a Metropolis algorithm) clearly compensates (for lattices of order $`16`$) for the excess thermalization time of the demon algorithm (which is a factor between 2 and 3). This is the situation if we compare a tower of the APE100 supercomputer (which has a peak performance of 25 GigaFlops and a real performance of 5000 ps per spin) and a machine with programmable logic (312 ps per spin). Of course, a special purpose computer running a Metropolis algorithm (at the same speed) would be even more efficient because of the smaller thermalization times. As a matter of fact, the special purpose computer referred to in this work is able to run both algorithms at the same speed thanks to a fast random number generator implemented in hardware. The study of the efficiency of a combination of the two algorithms is left to a later work. We finally report our last check of the demon algorithm: the Guerra relations, which seem to be fulfilled to within a 0.5% precision in a canonical simulation. One of Guerra's relations reads $$\overline{\langle q^2\rangle ^2}=\frac{1}{3}\overline{\langle q^4\rangle }+\frac{2}{3}\overline{\langle q^2\rangle }^2.$$ (8) This relation has been shown to be exact for the Gaussian model. It can be rigorously demonstrated in the infinite volume limit; however, one would expect finite size corrections in the Gaussian case. Even though there is no proof for the $`\pm J`$ model, the difference between the two sides of the equation has to decrease with the volume.
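Schematically, the check of Eq. (8) can be organized as follows (our own sketch, not the analysis code): given the per-sample thermal averages $`\langle q^2\rangle `$ and $`\langle q^4\rangle `$, one compares the two sides and attaches a jack-knife error to their difference.

```python
import numpy as np

def guerra_sides(q2, q4):
    """q2[i], q4[i]: thermal averages <q^2> and <q^4> in sample i.
    Returns (lhs, rhs) of Eq. (8) as disorder averages."""
    return np.mean(q2 ** 2), np.mean(q4) / 3.0 + 2.0 * np.mean(q2) ** 2 / 3.0

def jackknife_diff(q2, q4):
    """lhs - rhs with a jack-knife error, deleting one sample at a time."""
    n = len(q2)
    d = np.array([np.subtract(*guerra_sides(np.delete(q2, i),
                                            np.delete(q4, i)))
                  for i in range(n)])
    return np.subtract(*guerra_sides(q2, q4)), np.sqrt((n - 1) * d.var())
```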
Table 5 shows our results for the left hand side (lhs) and the right hand side (rhs) of Eq. (8). The errors are calculated by a jack-knife analysis. In the sixth column of Table 5 we report the difference between the lhs and the rhs of Eq. (8). The maximum deviation is 2.5 standard deviations ($`L=8`$ and $`e=-1.650`$). The rest of the differences in Table 5 show fluctuations of less than two standard deviations. Hence, we can conclude that the Guerra relation is satisfied by the demon algorithm. ## 5 Conclusions We have studied a microcanonical algorithm running on the Ising spin glass in three dimensions. We have obtained compatible values (within the statistical errors) between the results of a canonical numerical simulation and the demon algorithm for the second and fourth moments of the overlap, whereas the values of the Binder cumulant are different (but with the discrepancy going to zero following a power law). We remark that microcanonical and canonical algorithms should provide the same numerical results only in the thermodynamic limit. Moreover the microcanonical algorithm satisfies one of the Guerra relations. Finally we have shown that the thermalization times needed for the demon algorithm are two or three times larger than those for the Metropolis one (for the largest simulated lattice, $`L=16`$). We remark that we have checked numerically the ergodicity and the efficiency of the algorithm. From the point of view of efficiency we have shown that the cost of introducing random numbers is less than the excess of thermalization time which the microcanonical simulation needs. Obviously, if we can design a dedicated machine in which only a microcanonical algorithm can be implemented (via hardware), and if this dedicated machine runs at a speed which is more than 5 times the speed of a canonical code in a supercomputer, the use of the microcanonical algorithm will be welcome. This work also shows that if we can implement random number generation at the cost of a factor of two in time, we should use the canonical algorithm instead of the microcanonical one. We wish to thank J. M. Carmona, L. A. Fernández, D. Iñiguez, G. Parisi and A. Tarancón for useful discussions. CLU is a DGA fellow. We also wish to thank P. Young for providing us the numerical data of his reference.
# On the Nature of the Compact Objects in the AGNs

## I Introduction

One of the cornerstones of modern physics and astronomy is the idea of Black Holes (BH), into which everything can enter but from which nothing can come out, at least at the classical level. Black holes are supposed to exist as the compact object in many X-ray binaries, in the centers of core-collapsed star clusters, and in the cores of the so-called Active Galactic Nuclei (AGN) as well as of many normal galaxies, like the Milky Way. They may exist in isolation too, but it is difficult to detect such isolated BHs. In this work, we will focus attention on the supposed galactic-core BHs. Despite such popular notions, it is simultaneously acknowledged that most of the evidence for the existence of BHs is highly circumstantial, and what one actually observes are Massive Compact Dark Objects (MCDO). For instance, it is widely believed that the center of our galaxy harbors a BH of mass $`M\sim 10^6M_{\odot }`$ having a radius $`R_{BH}\approx 3\times 10^{11}`$ cm. But actually the spatial resolution with which we can scan this region is $`\sim 0.1\mathrm{pc}\approx 3\times 10^{17}\mathrm{cm}\sim 10^6R_{BH}`$, and thus, as such, we can not rule out the possibility that the core may actually comprise densely packed X-ray binaries, Wolf-Rayet stars or other massive stellar cores. In the course of time such objects might also form a single self-gravitating entity called a Super Massive Star (SMS, Hoyle and Fowler 1963). The same is true for the AGNs too, although the intraday variabilities observed in many AGNs would yield a tighter limit on the core size, $`R<10^{15}`$ cm (note, however, that the estimates of core masses are in general uncertain by at least a factor of $`10^2`$). Even if such cores are accepted to be massive BHs, they are likely to have been formed by a preceding phase as SMSs (Begelman and Rees 1978). Thus at a given epoch, many such cores must be SMSs. It is believed that the unassailable evidence in favor of BHs and their Event Horizons arises from the fact that the accretion luminosity $`(L_{acc})`$ from many AGNs or galactic cores is insignificant compared to the corresponding Eddington values ($`L_{acc}\ll L_{ed}`$). This could be so because for weak or excessively high accretion rates, the flow could be advection dominated with small radiative efficiency. As a result, for spherical accretion, not mediated by a disk, most of the accreted energy and mass will be lost inside the event horizon. However, without (first) entering into a debate on the possible existence of BHs, we will show below that for SMSs too it is possible to have $`L_{acc}\ll L_{ed}`$. Thus Super Massive Stars too may emulate the mass-energy gobbling properties of a BH.

## II Supermassive Stars

A SMS is supported almost entirely by its radiation pressure $$p_r=\frac{1}{3}aT^4$$ (1) where $`a`$ is the radiation constant. On the other hand, by assuming the plasma to be made of hydrogen only, the matter pressure is given by $$p_m=nkT$$ (2) where $`k`$ is the Boltzmann constant and $`n`$ is the proton number density. The structure of a Newtonian SMS is closely given by a polytrope of index 3, and the ratio of matter pressure to radiation pressure works out to be (Weinberg 1972): $$\beta =\frac{p_m}{p_r}\approx 8.3\left(\frac{M}{M_{\odot }}\right)^{-1/2}=8.3\times 10^{-4}M_8^{-1/2}$$ (3) where $`M=M_8\times 10^8M_{\odot }`$. This shows that a SMS should have a minimum mass $`\sim 7200M_{\odot }`$ (Weinberg 1972) in order to have a value of $`\beta <0.1`$.
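As a quick numerical check of this quoted minimum mass (our own arithmetic), inverting Eq. (3) for the condition $`\beta <0.1`$ gives $$M>\left(\frac{8.3}{0.1}\right)^2M_{\odot }\simeq 6.9\times 10^3M_{\odot },$$ consistent with the value $`\sim 7200M_{\odot }`$ quoted from Weinberg (1972) once the exact numerical constants are retained.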
A Newtonian SMS may be defined as one for which the rest mass energy density of the plasma dominates over the radiation energy density even though $`p_m\ll p_r`$, i.e., $`m_pnc^2\gg 3p_r`$. It then follows that the “compactness” of a Newtonian SMS is very small (Weinberg 1972): $$\frac{2GM}{Rc^2}\ll 0.78$$ (4) In other words, the surface redshift of a Newtonian SMS is very small: $$z=\left(1-\frac{2GM}{Rc^2}\right)^{-1/2}-1\ll 0.39$$ (5) In this limit of small $`z`$, we can approximate $`z\approx GM/Rc^2`$. Most of the stable (Newtonian) SMSs are likely to have $`10^{-4}<z<10^{-2}`$ (Shapiro & Teukolsky 1983). However, it is also possible to have really compact Relativistic SMSs with high values of $`z\sim 1`$ for which radiation dominates even in the energy budget. Such Relativistic SMSs may be described by a relativistic polytrope of degree 3 (Tooper 1964). But in this work we shall discuss the case of Newtonian SMSs only. If a SMS is in hydrostatic equilibrium, its luminosity is close to the corresponding Eddington value: $$L=(1-\beta )\frac{4\pi cGM}{\kappa }\approx 1.26\times 10^{46}M_8\mathrm{erg}/\mathrm{s}$$ (6) where $`\kappa `$ is the Thomson opacity. And as is well known, this fact was one of the basic reasons behind hypothesizing that the quasars could be powered by SMSs (Hoyle and Fowler 1963). Note that the hydrostatic equilibrium must be effected by the release of energy by nuclear fusion at the center. But the efficiency of energy generation by hydrogen fusion is only $`0.7\%`$, and given an Eddington-limited accretion rate, the fusion process can not deliver the necessary luminosity for massive SMSs. In fact, it is difficult to conceive of a nuclear-fuel supported SMS for $`M>6\times 10^4M_{\odot }`$ (Shapiro & Teukolsky 1983). Thus, more massive SMSs are not in hydrostatic equilibrium and must be undergoing slow gravitational contraction.

## III Highly Supermassive Stars (HSMS)

Now we shall estimate the luminosity of a Newtonian HSMS releasing energy by the Kelvin-Helmholtz process, for which nuclear energy generation is insignificant. First note that the effective adiabatic index of a pure-hydrogen SMS is given by (Weinberg 1972) $$\gamma \approx \frac{4}{3}+\frac{\beta }{81}$$ (7) Then the Virial Theorem reads $$3(\gamma -1)E_{in}+E_g=0;\mathrm{or},(1+\beta /27)E_{in}+E_g=0$$ (8) where $`E_{in}`$ is the total internal energy and $`E_g`$ is the self-gravitational energy. On the other hand, the Newtonian total energy of the star (polytrope of index 3) is $$E_N=E_{in}+E_g=\frac{\beta }{27}E_g=-\frac{\beta }{18}\frac{GM^2}{R}$$ (9) The K-H contraction luminosity is then given by $$L_{KH}=-\frac{dE_N}{dt}\approx \frac{\beta }{18}\frac{GM^2}{R^2}v$$ (10) where $`v\ll c`$ is the rate of slow contraction. In the limit of small $`z`$, by using Eqs. (3) and (5), the above expression can be rewritten in a physically significant manner: $$L_{KH}=\frac{\beta }{18}\frac{z^2c^4v}{G}\approx 8.3\times 10^{-4}\frac{z^2c^4v}{18G}M_8^{-1/2}\approx 6.25\times 10^{38}z_3^2vM_8^{-1/2}\mathrm{erg}/\mathrm{s}$$ (11) where $`z=z_3\times 10^{-3}`$ and $`v`$ is in cm/s. By comparing this pure gravitational contraction luminosity with the corresponding Eddington value, we find that $$\frac{L_{KH}}{L_{ed}}\approx 5\times 10^{-8}z_3^2vM_8^{-3/2}$$ (12) Here we emphasize that the fact that the system is out of hydrodynamical equilibrium need not mean the system is undergoing free fall, and the value of $`v`$ could be much smaller than the corresponding free-fall speed $`v_{ff}\approx \sqrt{2z}c`$. The KH contraction may continue for thousands of years, and the value of $`v`$ could be as low as a few km/s in the initial phase.
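To make the smallness of this ratio concrete, the following short script (our own check, in cgs units, with $`v`$ in cm/s) evaluates Eqs. (11) and (12) for the fiducial values $`z_3=1`$, $`v=1`$ km/s, $`M_8=1`$:

```python
G, c = 6.674e-8, 2.998e10              # cgs units

def L_KH(z3=1.0, v=1.0e5, M8=1.0):
    """Kelvin-Helmholtz luminosity of Eq. (11) in erg/s (v in cm/s)."""
    beta = 8.3e-4 / M8**0.5            # Eq. (3)
    z = 1.0e-3 * z3
    return (beta / 18.0) * z**2 * c**4 * v / G

def L_Edd(M8=1.0):
    """Eddington luminosity of Eq. (6) in erg/s."""
    return 1.26e46 * M8

print(L_KH() / L_Edd())                # ~5e-3, cf. Eq. (13) below
```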
Then we have $$\frac{L_{KH}}{L_{ed}}\approx 5\times 10^{-3}z_3^2v_1M_8^{-3/2}$$ (13) where $`v=v_1\times 1\mathrm{km}/\mathrm{s}`$. Thus the intrinsic KH luminosity of a HSMS could be insignificant compared to the expected Eddington luminosity.

## IV Accretion Luminosity

We have recently shown that the GTR expression for the accretion luminosity from a “hard surface” (Mitra 1998b) is $$L_{acc}=\frac{z}{1+z}\dot{M}c^2\approx z\dot{M}c^2\ll L_{ed};\mathrm{if}z\ll 1$$ (14) In the limit of small $`z`$ the above general formula obviously yields the Newtonian formula: $`L_{acc}=\frac{GM\dot{M}}{R}`$. In this case, the accretion efficiency $`z`$ could be much smaller than the corresponding value for disk accretion onto a Schwarzschild BH, $`5.7\%`$. Further, the actual accretion efficiency could be much smaller than what is indicated by Eq. (14), because a HSMS does not really have a “hard surface”.

## V Hard Surface ?

The mean baryonic density of a SMS is $$\overline{\rho }=\frac{M}{(4\pi /3)R^3}=\frac{3z^3c^6}{4\pi G^3M^2}\approx 10^{-13}z_3^3M_8^{-2}\mathrm{g}/\mathrm{cm}^3$$ (15) And for a polytrope of index 3, the density of the external layers is at least one order of magnitude smaller: $$\overline{\rho }_{ex}\lesssim 10^{-14}z_3^3M_8^{-2}\mathrm{g}/\mathrm{cm}^3$$ (16) With the low luminosity inferred above, it may be found that the temperatures of these layers could be $`<1`$ eV, and the gas is actually only partially ionized (heavy elements would be almost completely neutral)! Then an incident electron of energy $`\sim 0.5z`$ MeV or a proton of energy $`\sim z`$ GeV would primarily undergo low-energy ionization/nuclear losses. Even otherwise, at such low densities, the test particle may penetrate deep inside the HSMS and any photon that it might emit may be trapped in the general soup of photons and plasma. Thus it is possible that $`L_{acc}\ll L_{ed}`$. And this may be mistaken as evidence in favor of the existence of an Event Horizon!

## VI Discussion

In this preliminary work we have outlined that Highly Supermassive Stars with low values of compactness, $`z`$, are likely to undergo slow Kelvin-Helmholtz contraction. The resultant luminosity arising from either this contraction process or accretion of surrounding gas clouds is likely to be insignificant compared to the corresponding Eddington value. And given such an observational result, in the absence of a serious consideration of the physics of the HSMSs, one might be tempted to describe the result as an “Experimental Discovery of Black Holes”. On the other hand, in the relativistic regime (not described here), the Supermassive Stars may be very compact, $`z<0.615`$, and have central temperatures $`T_c>10^9`$ K. The gravitational contraction luminosity of such stars may be released in the form of neutrinos. Such compact stars will possess a “hard surface” and their accretion luminosity could be comparable to the corresponding Eddington value. In the context of suspected stellar-mass BHs, it may be recalled that it might be possible to have a novel kind of hadronic star, called a Q-star, whose mass could be as high as $`>100M_{\odot }`$ (Bahcall et al. 1989, Miller et al. 1998). Further, Q-stars may be formed at sub-nuclear densities and, depending on the uncertain model QCD parameters, have a wide range of compactness, up to $`z\sim 1`$. In case $`z\ll 1`$, one would have $`L_{acc}\ll L_{Ed}`$, and again this may be mistaken as evidence for an “event horizon”. Finally, in a very detailed work (Mitra 1998b, gr-qc/9810038), we have shown from all possible angles that BHs neither form in gravitational collapse of baryonic matter, nor can they be assumed to exist, in general.
The only exact solution of the Einstein equations which (apparently) shows the production of BHs in spherical collapse is due to Oppenheimer and Snyder (1939, OS). As is well known, they (apparently) showed that the collapse of a homogeneous dust ball of mass $`M`$ gives rise to a BH of the same mass in a proper time $`\tau \propto M^{-1/2}`$. However, what they overlooked in this historical paper is that the Schwarzschild time $`t`$ is related by a relation of the kind (see Eq. 36 of OS): $$t\propto \mathrm{ln}\frac{y^{1/2}+1}{y^{1/2}-1};y\frac{Rc^2}{2GM}$$ (17) In order that $`t`$ be a real quantity in the above expression, one must have $$y^{1/2}>1;\mathrm{or},\frac{2GM}{Rc^2}<1$$ (18) Then in order to reach the central singularity, one must have $$M0\mathrm{as}R0;\mathrm{so}\mathrm{that}\tau \propto M^{-1/2}\mathrm{}$$ (19) This means that the BH is never produced, and even if it were produced its mass would have to be zero! We have since proved this result in a most general fashion for an inhomogeneous dust as well as for a perfect fluid having an arbitrary EOS and radiative properties (Mitra 1998a, gr-qc/9810038). Thus we may say that whether the cores of galaxies are SMSs or not, they are certainly not BHs.
# Optical and infrared observations of the luminous quasar PDS 456: A radio-quiet analogue of 3C 273?

## 1 Introduction

Torres et al. (1997; hereafter T97) reported the discovery of a new bright ($`V=14.0`$) quasar at relatively low redshift ($`z=0.184`$). This object, called PDS 456, was discovered in the Pico dos Dias survey for young stellar objects, which uses optical magnitude and IRAS colours as selection criteria. Although somewhat fainter than 3C 273, PDS 456 lies close to the Galactic centre and is seen through $`A_V\approx 1.5`$ mag of extinction (T97), and it is therefore intrinsically more luminous. It is, however, radio-quiet, with $`S_{4.85}<42`$ mJy (Griffith et al. 1994) and $`S_{1.4}=22.7`$ mJy (PDS 456 can be identified with NVSS J172819$`-`$141555; Condon et al. 1998), where $`S_\nu `$ is the flux density at $`\nu `$ GHz. In this paper we present optical and infrared observations of PDS 456. We confirm the redshift found by T97, and make a more accurate determination based on the unblended narrow emission line of \[Fe II\] $`\lambda `$1.6435 $`\mu `$m. We also confirm the existence of significant Galactic reddening. We compare the properties of PDS 456 with those of 3C 273, with which it has a number of similarities, and investigate the possibility that it is a radio-quiet analogue of the latter source. We adopt $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.1`$. With this cosmology and our improved redshift determination, the proper distance to PDS 456 is 1.01 Gpc.

## 2 Observations and reduction

### 2.1 Infrared imaging

JHKLM images of PDS 456 were obtained using IRCAM3 on the United Kingdom Infrared Telescope (UKIRT) on UT 1997 August 24. Standard near-infrared jittering techniques were used to allow flatfielding without the need to observe blank regions of sky. The dark-subtracted images were scaled to the same median pixel value and median-filtered to produce a flatfield image, which was divided into the individual frames. These frames were then registered using the centroid of the quasar and averaged with bad-pixel masking. Total integration times were 50 s at JHK, 270 s at $`L^{\prime }`$, and 72 s at $`M`$. Conditions were photometric, and flux calibration was performed using a solution derived from observations of a number of standard stars during the course of the night. Similar observations were also performed with the same instrument on UT 1997 September 16.

### 2.2 Infrared spectroscopy

Near-infrared spectra of PDS 456 in the $`J`$ and $`K`$ windows were taken using UKIRT/CGS4 on UT 1997 August 12 and 10, respectively. Conditions were photometric throughout; both spectra have a resolution $`R\approx 900`$ and total integration times of 640 s. Bad-pixel masking and interleaving of the separate integrations were performed with the Starlink CGS4DR software, and the remaining reduction was undertaken with the IRAF data reduction package. The spectra were taken through a 1.2-arcsec slit and extracted along a single 1.2-arcsec pixel. The $`J`$ spectrum was corrected for atmospheric absorption and flux calibrated using the A2 star HD 161903. The $`K`$ spectrum used the F0 star SAO 121857 as an atmospheric dividing standard and was flux calibrated using HD 225023. The spectra were wavelength calibrated using krypton ($`J`$-band) and argon ($`K`$-band) arc lamps, and the r.m.s. deviation from the adopted fit was $`\lesssim 1`$ Å. An 8–13 $`\mu `$m spectrum of the source was taken with UKIRT/CGS3 on UT 1997 August 30 as a service observation.
A 5.5-arcsec aperture was used with the low-resolution grating, which gives a resolution $`R\approx 50`$. Triple sampling was employed and the total integration time was 3600 s. A spectrum was taken of $`\alpha `$ Aql (Altair) at similar airmass, which was used for flux calibration via its ratio with the spectrum of $`\alpha `$ Lyr (Vega, assumed to be a 9400 K blackbody) from Cohen & Davies (1995).

### 2.3 Optical spectroscopy

Optical spectra were taken with the Intermediate Dispersion Spectrograph (IDS) on the 2.5-m Isaac Newton Telescope (INT) as service observations on the night of UT 1997 August 22. Four grating settings were used to cover the entire optical spectrum from 3320–9150 Å at a dispersion of $`1.6`$ Å pixel<sup>-1</sup>. A 1.5-arcsec slit was used, and the seeing was reported to be 1.0 arcsec. The total integration time at each position was 500 s, split into three separate exposures to facilitate cosmic-ray removal. Wavelength calibration was achieved through the use of argon and neon lamps, and the r.m.s. deviation from the adopted fit was $`\lesssim 0.1`$ Å for all wavelength regions except the range $`5800\mathrm{\AA }\lesssim \lambda \lesssim 6100`$ Å, where the arc lines were saturated and the deviations became as large as 1.5 Å. We determined the flux scale using Bohlin’s (1996) spectrum of BD +33°2642 (which was also observed with the same instrumental setup), available from the Space Telescope Science Institute. The three exposures at each grating position were averaged together after masking out cosmic-ray hits, and the spectra were merged by using the regions of overlap to compute the necessary greyshifts between the different grating positions. Shifts of up to 8% were needed to match up the flux scales, and it was noted that the fluxes in the different exposures taken at a single grating position differed by as much as 25%. This strongly suggests the presence of clouds or significant slit losses, and as such the flux scale must be considered approximate.

## 3 Results and analysis

### 3.1 Photometry

The results of our aperture photometry are presented in Table 1. The flux calibration solutions for the night of 1997 Sep 16 were slightly more uncertain than for the night of 1997 Aug 24, but we find no evidence for variability in any of the five filters at greater than $`2\sigma `$ significance. This supports the lack of variability observed by T97 at optical wavelengths over a period of three weeks. A power-law fit to the observed $`H-K`$ colour (and $`K`$-band continuum from our spectrum) gives $`\alpha =2.4\pm 0.1`$ ($`S_\nu \propto \nu ^{-\alpha }`$). This is very steep for quasars, which have typical spectral indices $`\alpha =1.4\pm 0.3`$ (Neugebauer et al. 1987), and suggests significant reddening is present. The near-infrared colours imply $`A_V=1.5\pm 1.0`$, in line with the estimate of T97.

### 3.2 Spectroscopy

We present our near-infrared spectra in Figs 1 and 2. We confirm the redshift found by T97 on the basis of the \[Fe II\] $`\lambda `$1.6435 $`\mu `$m emission line, the only unblended forbidden line in the entire spectrum. The sharpness of this line ($`\mathrm{FWHM}\approx 1000`$ km s<sup>-1</sup>) allows a more accurate redshift determination than the broad lines, and we measure $`z=0.18375\pm 0.00030`$. The optical spectrum is shown in Fig. 3. There are a number of emission features in our near-infrared spectra which we are unable to identify.
Their rest wavelengths (assuming they are at the redshift of PDS 456) do not correspond to lines seen in the spectra of other quasars or Seyfert galaxies (e.g. Hill, Goodrich & DePoy 1996; Thompson 1995). However, they also do not correspond to features in the dividing standards, whose spectra we also show in Figs 1 and 2. The emission features observed at 1.185 $`\mu `$m and 1.245 $`\mu `$m are the most clearly real, since the emission feature observed at 1.095 $`\mu `$m may be composed partly of Pa$`\gamma `$ from the dividing standard and Pa$`\zeta `$ from the quasar. However, the former line is in a fairly clean part of the atmosphere and was readily removed, and Pa$`\zeta `$ should be weaker than Pa$`ϵ`$. It therefore appears that there may be another unidentified emission line at this wavelength. The weak broad feature near 2 $`\mu `$m is at the wrong wavelength to be caused by imperfectly-subtracted atmospheric CO<sub>2</sub> absorption, and again we suspect it is a real emission feature in the quasar spectrum. We have re-reduced the $`J`$-band spectrum with a dividing standard of completely different spectral type (K0III), and we also re-observed PDS 456 in the $`J`$ window with a different camera/grating combination of CGS4 on UT 1998 August 26. Both times the final spectrum was indistinguishable from Fig. 1. The wavelengths of the lines are inconsistent with their being lines from higher orders (e.g. H$`\alpha `$), and their relative wavelengths do not correspond to any pair of strong lines, so we can rule out their being from a single system at a different redshift (either lower or higher). We obviously cannot conclusively rule out their being more than one additional system along the line of sight, although this is very unlikely, especially given our small spectroscopic aperture. The H I lines have such broad wings ($`\mathrm{FWZI}\gtrsim \mathrm{30\hspace{0.17em}000}`$ km s<sup>-1</sup>) that fluxes are difficult to measure reliably. In addition, many of the lines are blended, further hampering measurements of their fluxes. We have opted to use the Pa$`\alpha `$ line as a template for measuring the fluxes of the blended H I lines. We first subtract a low-order cubic spline from our $`K`$-band spectrum, masking out regions contaminated by emission lines, and interpolating across the Br$`\delta `$ line. We then progressively subtract a scaled version of the Pa$`\alpha `$ line from the locations of the other emission lines until the resultant spectrum shows no evidence of line emission. We determine the strength of the other line in the blend by measuring the flux above an adopted continuum level in the spectrum with the hydrogen line subtracted. In the case of He I $`\lambda `$1.0830 $`\mu `$m (blended with Pa$`\gamma `$), this results in a very broad line ($`\mathrm{FWHM}\approx 7000`$ km s<sup>-1</sup>, compared to $`\mathrm{FWHM}\approx 3500`$ km s<sup>-1</sup> for the H I lines) with a pronounced blue wing, as can be anticipated from Fig. 2. While no strong emission lines have been observed in this wavelength region in other objects, we cannot rule out the possibility that the He I line is further blended because of the presence of the unidentified emission lines in our spectrum. We note, however, that Netzer (1976) suggests that the He I lines should be broader than those of H I.
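Schematically, the Pa$`\alpha `$-template deblending described above amounts to the following (a sketch only, with hypothetical names; in the paper the scale was adjusted by eye, for which we substitute the least-squares analogue):

```python
import numpy as np

def subtract_scaled_template(wave, spec, template, w0, window=0.05):
    """Remove one blended H I line by subtracting a scaled copy of the
    (continuum-subtracted) Pa-alpha profile, recentred at w0 (microns).
    `template` is a callable giving the profile versus wavelength offset.
    The paper adjusts the scale until no line emission is left (by eye);
    here the scale is the least-squares amplitude over a window."""
    t = template(wave - w0)                 # shifted template profile
    m = np.abs(wave - w0) < window          # fitting window around the line
    scale = np.sum(spec[m] * t[m]) / np.sum(t[m] ** 2)
    return scale, spec - scale * t
```

The flux of the other component of each blend is then measured above an adopted continuum in the subtracted spectrum, as described above.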
We removed the Fe II emission from the spectrum by first performing a fit (by eye) to the emission lines so that the continuum was approximately linear over short wavelength intervals. A Gaussian fit to the isolated 4177 Å and 5534 Å lines was used to model the profiles of the lines. We then took each individual line in turn and determined the flux which produced the minimum sum-of-squares residual about a continuum level which was linearly interpolated between points either side of the line. A new spectrum was constructed using these line fluxes and the process repeated until the result converged. The Fe II-subtracted spectrum is shown in Fig. 3. From this analysis, we measure the flux of the Fe II 37,38 multiplets as $`F(\lambda 4570)=(5.1\pm 0.7)\times 10^{-16}`$ W m<sup>-2</sup> and of the H$`\beta `$ line as $`F(\mathrm{H}\beta )=(1.1\pm 0.2)\times 10^{-15}`$ W m<sup>-2</sup>, three times larger than that determined by T97. We believe this discrepancy is due to an incorrect placement of the continuum level by T97 when performing their simple flux determination. The value we obtain from our more detailed method is likely to be far more accurate. The line fluxes are listed in Table 2. Note that for this Table and future discussions, we have brightened our $`K`$-band spectrum by 25% to match our photometry, suspecting poor seeing and slit losses for the difference; no such shift was needed for the $`J`$-band spectrum. The equivalent width of H$`\beta `$ is fairly typical of nearby luminous quasars (Miller et al. 1992) and the ratio Fe II $`\lambda `$4570/H$`\beta \approx 0.5`$ is not unusually strong. We also have reason to suspect T97’s \[O III\] $`\lambda `$5007 flux, which is blended with the $`\lambda `$5018 line of Fe II multiplet 42, and which we fail to detect in our spectrum after removing the Fe II and H$`\beta `$ emission as described above. T97 deblend these two lines and obtain Gaussians of similar, narrow ($`\mathrm{FWHM}\approx 700`$ km s<sup>-1</sup>), widths whose wavelengths are significantly different from those expected. The mean wavelength of T97’s two deblended components is very close to the wavelength expected for the $`\lambda `$5018 line alone (our revised redshift improves the agreement), and their combined flux is in excellent agreement with the flux we determine for this line alone. The Fe II lines are also known to be broad (we measure $`\mathrm{FWHM}\approx 1500`$ km s<sup>-1</sup>), casting doubt on the narrow line produced by T97’s deblending. We also fail to detect \[O III\] $`\lambda `$4959 at its expected flux level. We place an upper limit of $`F([\mathrm{O}\mathrm{III}]\lambda 5007)<2\times 10^{-17}`$ W m<sup>-2</sup>, corresponding to a rest-frame equivalent width of $`<2`$ Å.

### 3.3 Reddening

Whilst the H I line ratios can be used to estimate the reddening, they are too uncertain to provide an accurate value, although they are broadly consistent with the $`A_V=1.5`$ mag determined by T97 from the equivalent width of the Na D 1 line, and the extinction maps of Burstein & Heiles (1982). We make an additional estimate of the extinction by comparing the optical spectrum of PDS 456 to that of the quasar 3C 273. As Fig. 4 shows, there is excellent agreement (at least blueward of H$`\alpha `$; we discuss the disagreement at longer wavelengths in the next section) when PDS 456 is dereddened by $`A_V=1.4`$.
Since 3C 273 is itself reddened by $`A_V\approx 0.1`$ mag (Burstein & Heiles 1982), we infer a total Galactic extinction of 1.5 mag towards PDS 456, and adopt this value in our later analysis.

## 4 Discussion

In the previous section, we noted the similarity between the optical spectra of PDS 456 and 3C 273. In Fig. 5, we present the dereddened optical–infrared spectral energy distributions of the two quasars. We use the data of Rieke & Lebofsky (1985) to correct the data at $`\lambda >3.5\mu `$m, which is beyond the range of Cardelli et al.’s (1993) extinction law. The similarity is once again striking, with the exception of the 10 $`\mu `$m flux, which is nearly a factor of two higher in PDS 456 than 3C 273, relative to the optical–near-infrared data. Note that although these data are measured through different apertures, the quasar is more than 300 times brighter than an $`L^{*}`$ galaxy in the $`K`$-band (Gardner et al. 1997), and so the stellar contribution will be negligible. There is clearly a problem with the continuum level redward of H$`\alpha `$, since it does not match the interpolation between the rest of the optical spectrum and the $`J`$-band spectrum. This is not a simple flux calibration error affecting the reddest of the four INT sub-spectra which can be corrected with a constant shift, since the match is excellent in the region of overlap with the adjoining subspectrum. The dereddened optical continuum is well-described by a power law with spectral index $`\alpha =0.11\pm 0.11`$. This is bluer than the mean optical spectral index for quasars, but still within the observed range (Neugebauer et al. 1987). The inflexion at $`\lambda _{\mathrm{rest}}\approx 1.2\mu `$m is a ubiquitous feature of quasar SEDs (Neugebauer et al. 1987; Elvis et al. 1994). However, the near-infrared bump usually extends to longer wavelengths, a power law often being able to fit the SED throughout the range 1–10 $`\mu `$m (see Neugebauer et al. 1987). Given the similarities between the optical and infrared properties of PDS 456 and 3C 273, it is natural to ask the question: Is PDS 456 a radio-quiet analogue of 3C 273? Certainly the low equivalent widths of the optical forbidden lines and blue optical continuum suggest blazar-like properties. A blazar nature should also be apparent at radio wavelengths, since the core emission should be strongly boosted by Doppler beaming (Falcke, Sherwood & Patnaik 1996), resulting in a flat radio spectrum which is more luminous than the general radio-quiet quasar population. To investigate this possibility, we have obtained VLA A-array data at 1.4 and 4.85 GHz. A detailed discussion of the data will be presented in a future paper, but they confirm the NVSS identification and flux, and reveal a steep radio spectrum. By extrapolating this spectrum to 8.4 GHz, we predict a flux of 4.6 mJy, which would cause PDS 456 to lie on the optical–radio luminosity relation for RQQs of Kukula et al. (1998). The high optical luminosity and low forbidden-line equivalent widths of PDS 456 are characteristic of the ‘Baldwin effect’ (Baldwin 1977). Although several explanations for this effect have been advanced, including Doppler-boosting (Browne & Murphy 1987), radiation from an accretion disc (Netzer 1987), and reddening (Baker 1997), these all incorporate an orientation dependence such that very luminous, very low equivalent width sources like PDS 456 should be seen nearly pole-on. The lack of a dominant, boosted radio core is therefore something of a mystery.
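For reference, the dereddening applied in Figs. 4 and 5 amounts to multiplying each flux point by $`10^{0.4A_\lambda }`$; a minimal sketch follows (the $`A_\lambda /A_V`$ values below are representative of a standard Galactic law and are assumptions for illustration, not the Cardelli et al. coefficients actually used):

```python
import numpy as np

# Assumed, representative extinction curve: wavelength (um) -> A_lambda / A_V
WAVE = np.array([0.44, 0.55, 0.70, 1.25, 1.65, 2.20])
RATIO = np.array([1.32, 1.00, 0.77, 0.28, 0.17, 0.11])

def deredden(wave_um, flux, A_V):
    """Correct observed fluxes for a foreground extinction of A_V magnitudes."""
    A_lambda = A_V * np.interp(wave_um, WAVE, RATIO)
    return flux * 10.0 ** (0.4 * A_lambda)
```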
## 5 Summary

We have presented optical and infrared spectra of the nearby luminous quasar PDS 456. We measure a redshift $`z=0.18375\pm 0.00030`$ based on the forbidden line of \[Fe II\] $`\lambda `$1.6435 $`\mu `$m, but do not detect any other forbidden emission lines. We detect at least three emission lines in the near-infrared which we are unable to identify. The dereddened optical continuum is rather blue and very similar to that of 3C 273. Despite the similarities at optical wavelengths, observations reveal that PDS 456 does not possess a strongly Doppler-boosted radio core. We defer detailed discussion of the radio observations to a later paper, wherein we will also present and discuss an X-ray spectrum of PDS 456.

## Acknowledgments

The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the U. K. Particle Physics and Astronomy Research Council, and was ably piloted by Thor Wold. The Isaac Newton Telescope is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. The Very Large Array is part of the National Radio Astronomy Observatory, which is operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation. The authors thank the staff at UKIRT and the INT for taking the service observations presented in this paper, and Duncan Law-Green for reducing the VLA data.
# Effect of dissipation on the decay-rate phase transition

## I Introduction

It is well known that the decay of metastable states at zero temperature is determined by a pure quantum tunneling process whose dynamics is described by classical configurations, called bounces, in Euclidean space. Since, however, the decay rate is determined at high temperature by thermal activation, which corresponds to the classical configuration called the sphaleron, there exists some temperature $`T_c`$ at which the transition from classical- to quantum-dominated decay occurs. This phase transition problem was first discussed by Affleck within quantum mechanics. He demonstrated that under certain assumptions for the shape of the barrier the transition between the thermal and quantum regimes is dominated by solutions with a finite period in the Euclidean time that smoothly interpolate between the zero-temperature bounce and the static high-temperature sphaleron. The transition is thus a second-order one. Chudnovsky, however, has shown that the type of the phase transition in the crossover from thermal activation to thermally assisted quantum tunneling depends completely on the shape of the potential barrier. He has also shown that the order of the phase transition is easily conjectured from the $`P`$-vs-$`E`$ graph, where $`P`$ and $`E`$ are the Euclidean period and energy, respectively. The sharp crossover between thermal activation and thermally assisted tunneling occurs when the $`P=P(E)`$ curve possesses a minimum at $`E=E_c`$, which is different from the energy of the sphaleron solution. Based on Chudnovsky's observation, sharp first-order transitions have been found in spin tunneling systems with and without an external magnetic field. Recently, a sufficient criterion for the first-order phase transition in the decay problem of a metastable state was obtained by carrying out the nonlinear perturbation near the sphaleron solution in the two-dimensional string model. Inspired by the spin-tunneling problem, the result of Ref. was subsequently extended to the quantum mechanical model with position-dependent mass. The purpose of this paper is to derive a general criterion for the sharp first-order phase transition in quantum mechanical tunneling models when position-dependent mass and dissipation are involved. We first consider a system with Ohmic dissipation, in which the spectral density $`J(\omega )`$ is linearly proportional to frequency, i.e., $`J(\omega )=\alpha \omega `$, where $`\alpha `$ is the dissipation coefficient. In this case Caldeira and Leggett already derived the effective action in their seminal paper. We will use their result for the derivation of the criterion. The extension to the case of super-Ohmic dissipation, where $`J(\omega )=\alpha \omega ^3`$, is also examined. In the super-Ohmic case a counter term corresponding to the deformation of the potential due to the environment is introduced. The absence of the counter term in the Ohmic case is due to our use of the result of Ref. , in which a proper counter term is already included for Ohmic dissipation. Since we work in Euclidean space, we use a prescription for the dissipation term which does not break the time-reversal symmetry in both the Ohmic and super-Ohmic cases. Carrying out the nonlinear perturbation, we will derive the general condition for the first-order phase transition in the dissipative quantum mechanical system with position-dependent mass in Sec. II.
It is found that the effect of dissipation is only a deformation of the eigenvalues of the temporal fluctuation operator $`\widehat{l}`$ defined in this section. To get some physical intuition, we will apply this general criterion to simple quantum mechanical models in Sec. III. A brief conclusion and the direction of future research in this field will be given in the final section.

## II Criterion of first-order phase transition for dissipative quantum systems

The effect of dissipation on quantum tunneling is investigated in Ref. by introducing an infinite number of oscillators as an environment and assuming a linear coupling between the environment and the system. We follow their formalism to obtain the effective action for the dissipative system. The effective action in Euclidean space is given as $$S[q(\tau )]=\int _0^T\left[\frac{1}{2}M(q)\dot{q}^2+V(q)+\delta V(q)\right]d\tau +\frac{1}{2}\int _{-\infty }^{\infty }d\tau ^{\prime }\int _0^Td\tau \gamma (\tau -\tau ^{\prime })\{q(\tau )-q(\tau ^{\prime })\}^2,$$ (1) where $$\gamma (\tau -\tau ^{\prime })\frac{1}{2\pi }\int _0^{\infty }J(\omega )e^{-\omega |\tau -\tau ^{\prime }|}d\omega .$$ (2) Here, the mass of the quantum system is taken to be position-dependent in general, which is motivated by spin tunneling models. The final non-local term in Eq.(1) represents the effect of dissipation. It is worth emphasizing that the counter term $`\delta V(q)`$ is introduced in Eq.(1). In fact, the above effective action without $`\delta V(q)`$ has been derived in consideration of a counter term which cancels the divergence in the case of Ohmic dissipation. However, for the cases of non-Ohmic dissipation a new divergence is expected to appear, so that an appropriate counter term may be needed. Shortly, it will be shown that this is the generic case. Now, we take the spectral density generally as $$J(\omega )=\alpha \omega ^n$$ (3) where $`n`$ is a positive integer. Then, the effective action can be written as $$S[q(\tau )]=\int _0^T\left[\frac{1}{2}M(q)\dot{q}^2+V(q)+\delta V(q)\right]d\tau +\frac{\alpha \mathrm{\Gamma }(n+1)}{4\pi }\int _{-\infty }^{\infty }d\tau ^{\prime }\int _0^Td\tau \frac{\{q(\tau )-q(\tau ^{\prime })\}^2}{|\tau -\tau ^{\prime }|^{n+1}},$$ (4) where $`\mathrm{\Gamma }(n)`$ is the Gamma function. From this effective action, the equation of motion is given by $$M(q)\ddot{q}+\frac{1}{2}M^{\prime }(q)\dot{q}^2-\frac{\alpha \mathrm{\Gamma }(n+1)}{\pi }\int _{-\infty }^{\infty }d\tau ^{\prime }\frac{q(\tau )-q(\tau ^{\prime })}{|\tau -\tau ^{\prime }|^{n+1}}=V^{\prime }(q)+\delta V^{\prime }(q),$$ (5) where the prime denotes the derivative with respect to $`q`$. In the following subsections we will derive the criterion for the first-order phase transition in the cases of Ohmic dissipation and super-Ohmic dissipation ($`J(\omega )=\alpha \omega ^3`$).

### A The Ohmic dissipation

In the case of Ohmic dissipation, the spectral density is linearly proportional to frequency. The equation of motion is, thus, Eq.(5) with $`n=1`$.
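For completeness, the power-law kernel in Eq. (4) follows from Eqs. (2) and (3) via the elementary integral $$\frac{1}{2\pi }\int _0^{\infty }\alpha \omega ^ne^{-\omega |\tau -\tau ^{\prime }|}d\omega =\frac{\alpha \mathrm{\Gamma }(n+1)}{2\pi |\tau -\tau ^{\prime }|^{n+1}},$$ which, together with the overall factor $`1/2`$ of the non-local term in Eq. (1), reproduces the prefactor $`\alpha \mathrm{\Gamma }(n+1)/4\pi `$ of Eq. (4).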
The dissipation term can be rewritten in terms of the Fourier transform partner of $`q(\tau )`$, i.e., $`\stackrel{~}{q}(\omega )`$, which brings the equation of motion to the form $$M(q)\ddot{q}+\frac{1}{2}\frac{\partial M(q)}{\partial q}\dot{q}^2-\alpha \int _{-\infty }^{\infty }d\omega \stackrel{~}{q}(\omega )|\omega |e^{i\omega \tau }=V^{\prime }(q).$$ (6) In the derivation of this equation we take the prescription for the dissipation term $$\frac{\alpha }{\pi }\int _{-\infty }^{\infty }d\tau ^{\prime }\frac{q(\tau )-q(\tau ^{\prime })}{(\tau -\tau ^{\prime })^2}\frac{\alpha }{\pi }\int _{-\infty }^{\infty }d\tau ^{\prime }\frac{q(\tau )-q(\tau ^{\prime })}{(\tau -\tau ^{\prime }+iϵ)(\tau -\tau ^{\prime }-iϵ)},$$ (7) which preserves the time-reversal symmetry, and $`\delta V^{\prime }=0`$ as mentioned before. Since, in fact, the dissipation breaks the time-reversal symmetry in real space-time, another prescription was used in Ref. . However, in Euclidean space the dissipation does not yield damped motion, which justifies our choice of the prescription. Now, following Ref. , let us determine the type of transition by expanding the equation of motion about the sphaleron solution $`q_s`$ as $$q(\tau )=q_s+a\eta _1(\tau )$$ (8) or equivalently $$\stackrel{~}{q}(\omega )=\stackrel{~}{q}_s+a\stackrel{~}{\eta }_1(\omega ),$$ (9) where $`a`$ represents an oscillation amplitude near the sphaleron solution. Since it is sufficient, in determining the order of the phase transition, to consider only the solutions near the sphaleron, we can assume $`a`$ is a very small constant. Substituting these expressions into Eq.(6), the equation of motion within the first order of $`a`$ becomes $$(\widehat{l}-\widehat{h})\eta _1(\tau )=0,$$ (10) where the operators $`\widehat{l}`$ and $`\widehat{h}`$ are defined as $$\widehat{l}=M(q_s)\frac{d^2}{d\tau ^2}-\frac{\alpha }{2\pi }\int d\omega |\omega |e^{i\omega \tau }\int d\tau ^{\prime }e^{-i\omega \tau ^{\prime }},$$ (11) $$\widehat{h}=V^{\prime \prime }(q_s),$$ (12) respectively. In order to solve this equation, we take a trial solution $$\eta _1(\tau )=\mathrm{cos}\mathrm{\Omega }\tau $$ (13) and equivalently $$\stackrel{~}{\eta }_1(\omega )=\frac{1}{2}(\delta (\mathrm{\Omega }-\omega )+\delta (\mathrm{\Omega }+\omega )).$$ (14) Using this trial solution, the frequency $`\mathrm{\Omega }`$ near the sphaleron solution is obtained within the first order of the amplitude $`a`$ as $$\mathrm{\Omega }_O^{(1)}=\pm \frac{1}{2}\left[-\frac{\alpha }{M(q_s)}+\sqrt{\left(\frac{\alpha }{M(q_s)}\right)^2+4\omega _s^2}\right]$$ (15) where $$\omega _s=\sqrt{-\frac{V^{\prime \prime }(q_s)}{M(q_s)}}.$$ (16) Comparing $`\mathrm{\Omega }_O^{(1)}`$ with $`\omega _s`$, which is the $`\alpha \to 0`$ limit of $`\mathrm{\Omega }_O^{(1)}`$, it is easy to see that the Ohmic dissipation reduces the frequency. Since in the case of a second-order phase transition the transition temperature $`T_c`$ is determined by this $`\mathrm{\Omega }_O^{(1)}`$, i.e., $`T_c=\mathrm{\Omega }_O^{(1)}/2\pi `$, this result implies that the Ohmic dissipation decreases the transition temperature. As shown in Fig.1, the decrease of $`T_c`$ means an increase of the action value and hence a suppression of the decay rate near $`T_c`$. We will, however, show that the super-Ohmic dissipation has the opposite effect on the decay rate within the exponential approximation.
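As a consistency check (our own intermediate step, implied by Eqs. (10)-(14)), the trial solution reduces Eq. (10) to the algebraic condition $$M(q_s)\mathrm{\Omega }^2+\alpha \mathrm{\Omega }+V^{\prime \prime }(q_s)=0,$$ whose positive root is precisely Eq. (15); since $`V^{\prime \prime }(q_s)<0`$ at the barrier top, this root is real and positive for any $`\alpha \ge 0`$.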
The next order calculation can be conducted by taking $$q(\tau )=q_s+a\eta _1(\tau )+a^2\eta _2(\tau ).$$ (17) Then we find the equation of motion for $`\eta _2(\tau )`$ to be $$a(\widehat{l}-\widehat{h})\eta _2(\tau )=-(\widehat{l}-\widehat{h})\eta _1(\tau )+aW_1(\tau ),$$ (18) where $$W_1(\tau )=(\mathrm{\Omega }^2M^{\prime }(q_s)+\frac{1}{2}V^{\prime \prime \prime }(q_s))\mathrm{cos}^2\mathrm{\Omega }\tau -\frac{1}{2}\mathrm{\Omega }^2M^{\prime }(q_s)\mathrm{sin}^2\mathrm{\Omega }\tau .$$ (19) Since $`(\widehat{l}-\widehat{h})`$ is an hermitian operator, taking a scalar product with $`|\eta _1\rangle `$ on both sides of the equation of motion yields $$a(l(\mathrm{\Omega })-h(\mathrm{\Omega }))\langle \eta _1|\eta _2\rangle =-\langle \eta _1|\widehat{l}-\widehat{h}|\eta _1\rangle +a\langle \eta _1|W_1\rangle ,$$ (20) where $$l(\mathrm{\Omega })=-\mathrm{\Omega }^2M(q_s)-\alpha \mathrm{\Omega },$$ (21) $$h(\mathrm{\Omega })=V^{\prime \prime }(q_s).$$ (22) As in the usual perturbation theory, $`\langle \eta _1|\eta _2\rangle `$ is zero. Furthermore, the second term on the right-hand side of Eq.(20) is zero because of the $`\tau `$-integration. Then the above equation (Eq.(20)) is identical with Eq.(10). Therefore, within the second order of $`a`$ we cannot find any variation of the frequency from $`\mathrm{\Omega }_O^{(1)}`$, i.e., $$\mathrm{\Omega }_O^{(2)}=\mathrm{\Omega }_O^{(1)}.$$ (23) Before calculating the next order frequency $`\mathrm{\Omega }_O^{(3)}`$, we would like to evaluate $`\eta _2(\tau )`$ explicitly for later use. This is achieved directly from Eq.(18), which yields $$\eta _2(\tau )=(\widehat{l}-\widehat{h})^{-1}W_1(\tau )=g_1+g_2\mathrm{cos}2\mathrm{\Omega }\tau ,$$ (25) where $$g_1=-\frac{\mathrm{\Omega }^2M^{\prime }(q_s)+V^{\prime \prime \prime }(q_s)}{4V^{\prime \prime }(q_s)},$$ (26) $$g_2=-\frac{3\mathrm{\Omega }^2M^{\prime }(q_s)+V^{\prime \prime \prime }(q_s)}{4[4M(q_s)\mathrm{\Omega }^2+2\alpha \mathrm{\Omega }+V^{\prime \prime }(q_s)]}.$$ (27) Now, taking the third order correction in $`q(\tau )`$ as $$q(\tau )=q_s+a\eta _1(\tau )+a^2\eta _2(\tau )+a^3\eta _3(\tau ),$$ (28) and inserting the above expression into Eq.(6), the equation of motion for $`\eta _3(\tau )`$ is straightforwardly obtained as $$a^2(\widehat{l}-\widehat{h})\eta _3=-(\widehat{l}-\widehat{h})\eta _1-a(\widehat{l}-\widehat{h})\eta _2+aW_1(\tau )+a^2W_2(\tau ),$$ (29) where $$W_2(\tau )=(\mathrm{\Omega }^2g_1M^{\prime }(q_s)+g_1V^{\prime \prime \prime }(q_s))\mathrm{cos}\mathrm{\Omega }\tau +(5\mathrm{\Omega }^2g_2M^{\prime }(q_s)+g_2V^{\prime \prime \prime }(q_s))\mathrm{cos}2\mathrm{\Omega }\tau \mathrm{cos}\mathrm{\Omega }\tau +\frac{1}{2}(\mathrm{\Omega }^2M^{\prime \prime }(q_s)+\frac{1}{3}V^{\prime \prime \prime \prime })\mathrm{cos}^3\mathrm{\Omega }\tau -2\mathrm{\Omega }^2g_2M^{\prime }(q_s)\mathrm{sin}\mathrm{\Omega }\tau \mathrm{sin}2\mathrm{\Omega }\tau -\frac{1}{2}\mathrm{\Omega }^2M^{\prime \prime }(q_s)\mathrm{cos}\mathrm{\Omega }\tau \mathrm{sin}^2\mathrm{\Omega }\tau .$$ (34) Taking the scalar product with $`\eta _1`$ in Eq.(29) yields an equation $$(l(\mathrm{\Omega })-h(\mathrm{\Omega }))\langle \eta _1|\eta _1\rangle +a^2\langle \eta _1|W_2\rangle =0$$ (35) from which we can determine $`\mathrm{\Omega }_O^{(3)}`$.
Since the first-order phase transition between the thermal and thermally assisted quantum tunneling regimes occurs when the period of the solution near the sphaleron increases as one approaches the sphaleron solution, we get the condition for a sharp transition from $`\mathrm{\Omega }_O^{(3)}>\mathrm{\Omega }_O^{(1)}`$. This is identical to the condition that the value of the left-hand side of Eq.(35) at $`\mathrm{\Omega }=\mathrm{\Omega }_O^{(1)}`$ be negative. Therefore, we finally obtain the criterion for the first-order phase transition as $$\left(\mathrm{\Omega }_O^{(1)}\right)^2\left[(g_1+\frac{3}{2}g_2)M^{\prime }(q_s)+\frac{1}{4}M^{\prime \prime }(q_s)\right]+(g_1+\frac{1}{2}g_2)V^{\prime \prime \prime }(q_s)+\frac{1}{8}V^{\prime \prime \prime \prime }(q_s)<0,$$ (36) where $`g_1`$ and $`g_2`$ are evaluated at $`\mathrm{\Omega }=\mathrm{\Omega }_O^{(1)}`$. Since the dissipation coefficient enters $`\mathrm{\Omega }^{(1)}`$, $`g_1`$, and $`g_2`$, the criterion for the first-order phase transition is generally different from the non-dissipative case. In particular, when the particle mass is constant, the dissipation coefficient $`\alpha `$ appears only in $`g_2`$, and from the fact that $`V^{\prime \prime }(q_s)<0`$ we can conclude that the Ohmic dissipation enhances the possibility for the occurrence of the sharp first-order transition.

### B The super-Ohmic dissipation

In this subsection we consider the criterion for the first-order phase transition when the dissipation is super-Ohmic, i.e., $`J(\omega )=\alpha \omega ^3`$, which appears in the analysis of macroscopic magnetization tunneling. The dissipation term in Eq.(5) with $`n=3`$ can again be written in terms of $`\stackrel{~}{q}(\omega )`$. After integrating it in the complex $`\tau ^{\prime }`$-plane using the same prescription, one finds the equation of motion (Eq.(5)) to be of the form $$M(q)\ddot{q}+\frac{1}{2}\frac{\partial M(q)}{\partial q}\dot{q}^2+\alpha \int _{-\infty }^{\infty }d\omega \stackrel{~}{q}(\omega )|\omega |^3e^{i\omega \tau }=V^{\prime }(q).$$ (37) In deriving Eq.(37) we used $`\delta V^{\prime }=\frac{3\alpha }{2ϵ}\ddot{q}`$ to cancel a divergence which appears in the course of the integration. The presence and absence of the divergence in the super-Ohmic and Ohmic cases, respectively, are because we start with the effective action (Eq.(1)), as mentioned in the previous section. The remaining calculation for the super-Ohmic case is equivalent to that for the Ohmic case. The differences are the following. The $`\widehat{l}`$ operator is changed in this case into $$\widehat{l}=M(q_s)\frac{d^2}{d\tau ^2}+\frac{\alpha }{2\pi }\int d\omega |\omega |^3e^{i\omega \tau }\int d\tau ^{\prime }e^{-i\omega \tau ^{\prime }},$$ (38) and the frequency near the sphaleron solution within the first order of the amplitude $`a`$, $`\mathrm{\Omega }_S^{(1)}`$, becomes the root of the equation $$M(q_s)\mathrm{\Omega }^2-\alpha \mathrm{\Omega }^3+V^{\prime \prime }=0.$$ (39) This equation tells us that for the second-order phase transition the transition temperature $`T_c=\mathrm{\Omega }_S^{(1)}/2\pi `$ becomes higher than that of the non-dissipative case, $`\omega _s/2\pi `$. This means that, contrary to the Ohmic dissipation case, the tunneling rate near $`T_c`$ is enhanced by the super-Ohmic dissipation.
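Unlike the Ohmic case, Eq. (39) is a cubic in $`\mathrm{\Omega }`$ with no simple closed-form root; a minimal numerical sketch (our own illustration) selects the physical branch, i.e. the root that reduces to $`\omega _s`$ as $`\alpha \to 0`$:

```python
import numpy as np

def omega_super_ohmic(M, alpha, Vpp):
    """Physical root of Eq. (39): M*W**2 - alpha*W**3 + Vpp = 0, Vpp < 0.
    Assumes alpha is small enough for a positive real root to exist; the
    root lies slightly above w_s = sqrt(-Vpp/M), so T_c is raised."""
    roots = np.roots([-alpha, M, 0.0, Vpp])
    real = np.real(roots[np.isreal(roots)])
    return np.min(real[real > 0.0])          # branch continuous in alpha -> 0

# e.g. the double well of Sec. III A (M = 1, V'' = -2):
# omega_super_ohmic(1.0, 0.1, -2.0) is slightly above sqrt(2)
```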
The next order calculation shows that $`\eta _2`$ has the same form as in the previous case (Eq.(25)), where $`g_1`$ is given by Eq.(26) and $`g_2`$ is changed into the form $$g_2=-\frac{3\mathrm{\Omega }^2M^{\prime }(q_s)+V^{\prime \prime \prime }(q_s)}{4[4M(q_s)\mathrm{\Omega }^2-\alpha |2\mathrm{\Omega }|^3+V^{\prime \prime }(q_s)]}.$$ (40) Finally, the criterion for the first-order phase transition is equivalent to Eq.(36), except that $`g_2`$ is replaced by Eq.(40) and Eq.(36) should be evaluated at $`\mathrm{\Omega }=\mathrm{\Omega }_S^{(1)}`$ instead of $`\mathrm{\Omega }_O^{(1)}`$. It is then evident that when the particle mass is constant the super-Ohmic dissipation reduces the possibility for the occurrence of the sharp first-order phase transition within the exponential approximation.

## III Application to quantum mechanical tunneling models

### A Asymmetric double well case

Consider the usual double well potential with a symmetry-breaking term proportional to $`q^3`$, i.e., $$V(q)=\frac{1}{2}(q^2-1)^2-fq^3.$$ (41) This type of potential is frequently used in field theories for applications to cosmology. In this case the sphaleron solution is $$q_s=0.$$ (42) We assume that the particle mass $`M`$ is constant, so that the derivatives of the mass are zero. Substituting the derivatives of the potential at $`q_s`$, $$V^{\prime \prime }(q_s)=-2,V^{\prime \prime \prime }(q_s)=-6f,V^{\prime \prime \prime \prime }(q_s)=12,$$ (43) and Eq.(27) into Eq.(36), it is easy to obtain the following criterion for the first-order phase transition: $$\frac{\alpha }{2}\left[\sqrt{\left(\frac{\alpha }{M}\right)^2+\frac{8}{M}}-\frac{\alpha }{M}\right]>\frac{5}{2}+\frac{1}{2(3f^2+1)}.$$ (44) Unfortunately, for any positive values of $`\alpha `$ and $`M`$ the left-hand side cannot exceed 2, i.e., the transition is always second order. Hence, dissipation does not change the order of the phase transition. This result is maintained even in the super-Ohmic case. We can also obtain the criterion for a double well potential with a term asymmetric in $`q`$, $$V(q)=\frac{1}{2}(q^2-1)^2-Fq.$$ (45) In this case the result is very similar to the $`q^3`$ case: there is no first-order phase transition. This result can be expected, since the two asymmetric potentials, Eqs.(41) and (45), when the dissipation term is ignored, give actions proportional to each other, as can easily be shown by an appropriate translation of $`q`$ and scalings of $`q`$ and $`\tau `$.

### B Case for spin tunneling-inspired model

Consider the Hamiltonian $$H=\frac{p^2}{2M(\varphi )}+V(\varphi ),$$ (46) where $$M(\varphi )=\frac{1}{2K_1(1-\lambda \mathrm{sin}^2\varphi )},$$ (47) and $$V(\varphi )=K_2S(S+1)\mathrm{sin}^2\varphi .$$ (48) Although this Hamiltonian can be derived via the coherent-state representation from the Hamiltonian of the spin tunneling model $$H=K_1S_z^2+K_2S_y^2,$$ (49) where $`K_1`$ and $`K_2`$ are anisotropy constants and $`\lambda =K_2/K_1`$, we will not go through the physics of spin tunneling in this paper. Instead, we will restrict ourselves to a discussion of the effect of dissipation in the Hamiltonian, Eq.(46).
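Before specializing analytically, note that the criterion (36) can be evaluated numerically for any given $`M(q)`$ and $`V(q)`$. The sketch below (our own illustration, with finite-difference derivatives and Eqs. (15), (16), (26), (27) and (36) transcribed as written above) reproduces the analytic results of both subsections: for the double well of Sec. III A it recovers Eq. (44), and for the present model at $`\alpha =0`$ it recovers the $`\lambda >1/2`$ threshold of Eq. (53) below.

```python
import numpy as np

def first_order_lhs(M, V, qs, alpha, h=1e-3):
    """LHS of criterion (36), Ohmic case; negative => sharp first-order
    transition.  M, V are callables; derivatives by central differences."""
    d1 = lambda f: (f(qs + h) - f(qs - h)) / (2 * h)
    d2 = lambda f: (f(qs + h) - 2 * f(qs) + f(qs - h)) / h**2
    d3 = lambda f: (f(qs + 2*h) - 2*f(qs + h) + 2*f(qs - h) - f(qs - 2*h)) / (2 * h**3)
    d4 = lambda f: (f(qs + 2*h) - 4*f(qs + h) + 6*f(qs) - 4*f(qs - h) + f(qs - 2*h)) / h**4

    Ms, M1, M2 = M(qs), d1(M), d2(M)
    V2, V3, V4 = d2(V), d3(V), d4(V)

    ws2 = -V2 / Ms                                                 # Eq. (16)
    W = 0.5 * (-alpha / Ms + np.sqrt((alpha / Ms)**2 + 4 * ws2))   # Eq. (15)
    g1 = -(W**2 * M1 + V3) / (4 * V2)                              # Eq. (26)
    g2 = -(3 * W**2 * M1 + V3) / (4 * (4 * Ms * W**2 + 2 * alpha * W + V2))  # Eq. (27)

    return (W**2 * ((g1 + 1.5 * g2) * M1 + 0.25 * M2)
            + (g1 + 0.5 * g2) * V3 + 0.125 * V4)                   # Eq. (36)

# spin-inspired model, Eqs. (46)-(48), at the sphaleron phi_s = pi/2:
K1, K2, S = 1.0, 0.7, 10.0          # lambda = K2/K1 = 0.7 > 1/2
M = lambda p: 1.0 / (2 * K1 * (1 - (K2 / K1) * np.sin(p)**2))
V = lambda p: K2 * S * (S + 1) * np.sin(p)**2
print(first_order_lhs(M, V, np.pi / 2, alpha=0.0) < 0)             # True
```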
Since the sphaleron solution is simply $$\varphi _s=\frac{\pi }{2},$$ (50) it is easy to obtain $$V^{\prime \prime }(\varphi _s)=-2K_2S(S+1),V^{\prime \prime \prime }(\varphi _s)=0,V^{\prime \prime \prime \prime }(\varphi _s)=8K_2S(S+1),$$ (51) $$M(\varphi _s)=\frac{1}{2K_1(1-\lambda )},M^{\prime }(\varphi _s)=0,M^{\prime \prime }(\varphi _s)=-\frac{\lambda }{K_1}\frac{1}{(1-\lambda )^2}.$$ (52) Hence the use of Eq.(36) for the Ohmic dissipation yields the condition for a first-order phase transition: $$\frac{\alpha }{\sqrt{S(S+1)}}<\frac{2\lambda -1}{1-\lambda }.$$ (53) This result is shown in Fig.2. It shows that, unlike in the case of constant mass, the Ohmic dissipation reduces the range of parameters for the first-order phase transition. Since it is impossible to obtain an analytic expression of the criterion for the super-Ohmic case, one has to resort to numerical calculation. The type of transition in the parameter space is given in Fig.3. The super-Ohmic dissipation enlarges the parameter range for the first-order phase transition, which is the opposite behavior to the Ohmic case. It is worth noting that the above criterion may not be realized in a real spin system. In order to study the effect of dissipation in a real spin system, one has to derive the effective Hamiltonian from the original Hamiltonian (Eq.(49)) with the dissipation term fully through the coherent-state representation or particle mapping. Since this procedure in general produces a complicated dissipation term, our criterion does not hold in this case. Therefore, what we have done in this subsection is to obtain the general criterion for the sharp transition for the model Hamiltonian (Eq.(46)) with Ohmic and super-Ohmic dissipation and to show the possibility of a change in the type of phase transition due to dissipation.

## IV Conclusion

Performing the nonlinear perturbation, we have derived the general condition for the sharp first-order phase transition of the decay rate between the thermal regime and the thermally assisted quantum tunneling regime when position-dependent mass and dissipation are involved. It has been shown that in models with constant mass Ohmic dissipation enhances the possibility for the occurrence of the sharp transition, while super-Ohmic dissipation reduces it. Application to two simple quantum mechanical models was given in the previous section. In the asymmetric double well case the phase transition is always second order, with or without dissipation. In this case, by comparing $`\mathrm{\Omega }_O^{(1)}`$ and $`\mathrm{\Omega }_S^{(1)}`$ with $`\omega _s`$ one can see that the Ohmic and super-Ohmic dissipation respectively suppress and enhance the decay rate near the transition temperature $`T_c`$ within the exponential approximation. A similar feature at zero temperature has been reported in Ref. . For the case of $`J(\omega )=\alpha \omega ^2`$ it is not clear how to give a proper prescription which does not break the time-reversal symmetry. This difficulty does not seem to originate from the properties of Euclidean space. In real time, a retarded Green function is generally chosen in order to break the time-reversal symmetry. However, when $`J(\omega )=\alpha \omega ^2`$, the time-reversal symmetry is not broken in spite of using the retarded Green function.
The generalization of our result, Eq.(36), to nonlinear coupling between the environment and the system and to a general type of dissipation might be highly non-trivial. In this case the dissipation term in the equation of motion, Eq.(5), is proportional to $$\frac{\alpha }{\pi }\int _{-\infty }^{\infty }d\tau ^{\prime }\frac{[q(\tau )-q(\tau ^{\prime })]^{z_2}}{|\tau -\tau ^{\prime }|^{z_1}},$$ where $`z_1`$ and $`z_2`$ are some constants. Hence the integration with respect to $`\tau ^{\prime }`$ through a Fourier transform, as was done in Eq.(6), is impossible. In our opinion, the only way to break through this difficulty is to rely on numerical analysis. This will be discussed elsewhere. It is also interesting to apply the present method to spin tunneling problems. Since dissipation arises from the coupling with an environment which contains quasi-particles like photons or phonons, it is possible to construct a Hamiltonian which contains the spin-environment coupling. Then, by integrating over the environmental degrees of freedom and with the help of the coherent-state representation or particle mapping, one can derive an effective Hamiltonian in which the effect of dissipation is fully contained as a non-local term. From this procedure, one can investigate the effect of dissipation on the phase transition in the spin tunneling problem. This will also be discussed elsewhere.

ACKNOWLEDGMENT S.Y. Lee acknowledges the financial support of KOSEF through a domestic postdoctoral program. D.K. Park is grateful to D.A. Gorokhov for useful discussions.
# Narrow resonances with excitation of finite bandwidth field

## Abstract

The effect of the laser linewidth on the resonance fluorescence spectrum of a two-level atom is revisited. Novel spectral features, such as hole-burning and dispersive profiles at the line centre of the fluorescence spectrum, are predicted when the laser linewidth is much greater than its intensity. These unique features result from quantum interference between different dressed-state transition channels.

The study of the resonance fluorescence spectrum has provided much fundamental insight into the subject of atom-light interactions. It is well known that for weak laser field excitation the spectrum exhibits a Lorentzian lineshape, whereas it develops into the Mollow triplet for strong field excitation . The latter is a direct signature of stimulated emissions and absorptions of the atom during the time interval for one spontaneous decay. Recently, considerable attention has been paid to modifying the standard resonance fluorescence spectrum. Indeed, there are many ways to achieve this. One method is to place the atom inside a cavity; a wide variety of spectral features, such as dynamical suppression and enhancement, and spectral line narrowing of the Mollow triplet, has been predicted and detected . Another method is to bathe the atom in a squeezed vacuum. Swain and co-workers then predicted anomalous fluorescence spectral features for weak excitation: hole-burning and dispersive profiles at line center, which are qualitatively different from any seen previously in resonance fluorescence. Of late, Gawlik et al. have shown that rapid elastic collisions between monochromatically driven atoms can also give rise to these anomalous profiles in resonance fluorescence. In this Letter we report that the anomalous fluorescence spectral features, such as hole-burning and dispersive profiles at line center, can take place even in a system in which a two-level atom is damped by a standard vacuum and driven by a laser field with a finite bandwidth due to phase diffusion. The effect of the linewidth of the driving laser on the spectrum and the intensity fluctuations of the resonance fluorescence has been extensively investigated both theoretically and experimentally. However, most of these studies concentrated on the case in which the laser intensity is much greater than its linewidth, where the spectral components are well resolved. Spectral line broadening, suppression and asymmetry in the Mollow triplet were reported . We are here mainly interested in the regime in which the laser linewidth is larger than its intensity. Novel resonance lineshapes, namely hole-burning and dispersive profiles at the spectral line centre of the resonance fluorescence, are predicted. We consider a single two-level atom with transition frequency $`\omega _A`$ driven by a laser field with amplitude $`\mathcal{E}`$, frequency $`\omega _L`$ and fluctuating phase $`\varphi (t)`$.
The master equation for the atomic density matrix operator $`\rho `$, in a frame rotating at the frequency $`\omega _L`$, is of the form $$\dot{\rho }=-i[H_{AL},\rho ]+\mathcal{L}\rho ,$$ (1) where $$H_{AL}=\frac{\Delta }{2}\sigma _z+\frac{\Omega }{2}\left[e^{-i\varphi (t)}\sigma _++e^{i\varphi (t)}\sigma _-\right],$$ (2) $$\mathcal{L}\rho =\gamma \left(2\sigma _-\rho \sigma _+-\sigma _+\sigma _-\rho -\rho \sigma _+\sigma _-\right),$$ (3) where $`H_{AL}`$ is the Hamiltonian of the coherently driven atom and $`\mathcal{L}\rho `$ describes the atomic spontaneous decay with rate $`\gamma `$; $`\sigma _\pm `$ and $`\sigma _z`$ are the atomic raising (lowering) and population inversion operators, respectively, $`\Omega =2|\mu _{01}|\mathcal{E}/\hbar `$ is the driving Rabi frequency, and $`\Delta =\omega _A-\omega _L`$ is the detuning between the atomic transition and the driving laser. The fluctuating phase, $`\varphi (t)`$, results in a stochastic frequency $`\vartheta (t)=\dot{\varphi }(t)`$, which is assumed to be a Gaussian random process with the properties $$\langle \vartheta (t)\rangle =0,\qquad \langle \vartheta (t)\vartheta (t^{\prime })\rangle =2L\delta (t-t^{\prime }),$$ (4) where $`L`$ is the strength of the frequency fluctuations and physically describes the effective bandwidth of the laser beam due to the phase diffusion. This is the situation most appropriate for describing the radiation from a diode laser, which has a very stable amplitude and a very large phase diffusion when operated far above threshold. After averaging over the stochastic phase, one obtains the optical Bloch equations $$\dot{\langle \sigma _-\rangle }=-(\Gamma +i\Delta )\langle \sigma _-\rangle +\frac{i}{2}\Omega \langle \sigma _z\rangle ,$$ (5) $$\dot{\langle \sigma _z\rangle }=-\gamma _z\langle \sigma _z\rangle +i\Omega \left(\langle \sigma _-\rangle -\langle \sigma _+\rangle \right)-\gamma _z,$$ (6) where $`\Gamma =\gamma +L`$ and $`\gamma _z=2\gamma `$ represent the transverse and longitudinal relaxation rates, respectively. The resonance fluorescence spectrum in the far radiation zone can be expressed, in terms of the steady-state atomic correlation function, as $$\Lambda (\omega )=\text{Re}\int _0^{\infty }\underset{t\to \infty }{\mathrm{lim}}\langle \sigma _+(t+\tau )\sigma _-(t)\rangle e^{-i\omega \tau }d\tau =\text{Re}[𝒟(i\omega )],$$ (7) where $`𝒟(z)`$ is the Laplace transform of the atomic correlation function $`\mathrm{lim}_{t\to \infty }\langle \sigma _+(t+\tau )\sigma _-(t)\rangle `$, obtained by invoking the quantum regression theorem together with the Bloch equations (5)-(6): $$𝒟(z)=\frac{\left[\left(\Gamma +i\Delta +z\right)(\gamma _z+z)+\Omega ^2/2\right](1+\langle \sigma _z\rangle _s)+i\Omega \left(\Gamma +i\Delta +z\right)(1+\gamma _z/z)\langle \sigma _-\rangle _s}{2(\gamma _z+z)\left[(\Gamma +z)^2+\Delta ^2\right]+2\Omega ^2(\Gamma +z)},$$ (8) where $`\langle \sigma _-\rangle _s`$ and $`\langle \sigma _z\rangle _s`$ are the steady-state solutions of the Bloch equations (5)-(6). Our formulae (7)-(8) reproduce the previous predictions of Mollow (when $`L=0`$) and of Kimble et al. (when $`\Omega \gg L`$). Here, however, we explore novel spectral features in the regime $`L\gg \Omega `$, which has received little attention in the past.
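As a concrete cross-check of Eqs. (5)-(8), the spectrum is easy to evaluate numerically. The sketch below (Python/NumPy) uses the closed-form steady state of the Bloch equations and assumes the signs of Eqs. (5)-(8) as reconstructed above; the parameter values mimic Fig. 1 and are otherwise arbitrary.

```python
import numpy as np

def spectrum(omega, Omega, Delta, gamma, L):
    """Resonance fluorescence spectrum Lambda(omega) = Re D(i*omega), Eq. (8);
    omega is measured from the laser frequency."""
    Gamma = gamma + L          # transverse relaxation rate
    gz = 2.0 * gamma           # longitudinal relaxation rate
    # steady-state solutions of the Bloch equations (5)-(6)
    sz = -gz / (gz + Omega**2 * Gamma / (Gamma**2 + Delta**2))
    sm = 1j * Omega * sz / (2.0 * (Gamma + 1j * Delta))
    z = 1j * omega
    num = ((Gamma + 1j*Delta + z) * (gz + z) + Omega**2 / 2) * (1 + sz) \
          + 1j * Omega * (Gamma + 1j*Delta + z) * (1 + gz / z) * sm
    den = 2 * (gz + z) * ((Gamma + z)**2 + Delta**2) + 2 * Omega**2 * (Gamma + z)
    return (num / den).real

gamma = 1.0
# this grid avoids omega = 0 exactly, where the elastic (coherent) 1/z part diverges
omega = np.linspace(-300.0, 300.0, 1200) * gamma
for L in (10.0, 50.0, 100.0, 200.0):        # the laser linewidths of Fig. 1
    S = spectrum(omega, Omega=50.0, Delta=0.0, gamma=gamma, L=L)
    print(L, S[len(S) // 2 - 1])            # value near line centre
```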
Figure 1 presents the resonance fluorescence spectrum of the atom for excitation by a resonant ($`\Delta =0`$), strong laser field ($`\Omega =50\gamma \gg \gamma `$) with various laser linewidths ($`L=10\gamma ,50\gamma ,100\gamma ,200\gamma `$). When the laser linewidth $`L`$ is much less than the Rabi frequency $`\Omega `$, see for example frame (a) for $`L=10\gamma `$, the spectrum still exhibits a three-peak structure, as Kimble et al. predicted, but with a suppressed central peak and narrowed sidebands compared to the standard Mollow triplet. As the laser linewidth widens, the central peak is strongly suppressed while the sideband resonances merge; a dip (i.e., a hole-burning profile) therefore appears at line centre, see Fig. 1(b), where $`L=50\gamma `$. When the laser linewidth is much larger than the Rabi frequency, the dip becomes very narrow, as depicted in Figs. 1(c) and 1(d), where $`L=100\gamma `$ and $`200\gamma `$, respectively. Figure 2 shows a three-dimensional plot of the fluorescence spectrum against the laser linewidth, for $`\gamma =1,\Omega =50\gamma `$ and $`\Delta =0`$, from which one can see how the Mollow triplet is suppressed and the dip develops at line centre as the laser linewidth increases. Figure 3 shows that the resonance fluorescence spectrum may exhibit another narrow resonance feature, a dispersive (Rayleigh-wing) profile, when a laser with a very wide linewidth is appropriately detuned from the atomic transition frequency. We have taken the parameters $`\gamma =1,\Omega =50\gamma ,L=200\gamma `$ in Fig. 3. When the laser-atom detuning is comparable with the laser linewidth, e.g., in Figs. 3(b)-3(c) for $`\Delta =100\gamma ,200\gamma `$, the dispersive profile is most pronounced; otherwise it is less pronounced, see for instance Fig. 3(a) ($`\Delta =50\gamma <L`$) and Fig. 3(d) ($`\Delta =400\gamma >L`$). The latter frame shows a narrow peak at the laser frequency (line centre) and a broad peak at the atomic transition frequency; the two resonances are well separated, which is the case studied by Kimble et al. When the laser bandwidth is much larger than the other parameters, i.e. $`\Gamma \gg \Omega ,\Delta ,\gamma _z`$, the resonance fluorescence spectrum (7) approximately takes the form $$\Lambda (\omega )\simeq \frac{\Gamma }{4(\gamma _z\Gamma +\Omega ^2)}\left[\frac{\Omega ^2-2\Delta \omega }{\Gamma ^2+\omega ^2}+\frac{\Omega ^2+2\Delta \omega }{\left(\Gamma -\Omega ^2/\Gamma \right)^2+\omega ^2}-\left(\frac{\Omega }{\Gamma }\right)^4\frac{\Omega ^2+2\Delta \omega }{\left(\gamma _z+\Omega ^2/\Gamma \right)^2+\omega ^2}\right],$$ (9) which consists of three resonances located at line centre but with different linewidths. The first two resonances have positive weights and linewidths of the order of $`2\Gamma `$ (noting that $`\Omega ^2/\Gamma \ll \Gamma `$), whereas the last one has a very narrow linewidth of $`2\gamma _z`$, compared to $`2\Gamma `$, and a negative weight. The latter gives rise to a typical spectral profile in which a narrow hole is bored into a broad peak.
The approximate expression (9) for the resonance fluorescence spectrum also shows that when the laser is resonant with the atom ($`\Delta =0`$), all three resonances have Lorentzian lineshapes, and the spectrum is therefore symmetric, as shown in Figs. 1-2. Otherwise, these resonances mix Lorentzian and Rayleigh-wing (dispersive) lineshapes when the laser is detuned from the atom. As a result, the spectrum becomes asymmetric; see, for example, Fig. 3. The hole-burning and dispersive profiles at the spectral centre of the resonance fluorescence are attributed to quantum interference. To explain this, we work in the basis of the semiclassical dressed states $`|\pm \rangle `$, defined by the eigenvalue equation $`H_{AL}|\pm \rangle =\pm (\overline{\Omega }/2)|\pm \rangle `$. For simplicity, we consider only the case $`\Delta =0`$. The dressed states are then given by $`|\pm \rangle =(|0\rangle \pm |1\rangle )/\sqrt{2}`$. In the limit $`\Omega \gg \gamma `$, the equations of motion simplify to $`\dot{R}_{++}=-\Gamma R_{++}+\frac{\Gamma }{2},`$ (10) $`\dot{R}_{--}=-\Gamma R_{--}+\frac{\Gamma }{2},`$ (11) $`\dot{R}_{+-}=-\left(\Gamma _+-i\Omega \right)R_{+-}+\Gamma _-R_{-+},`$ (12) $`\dot{R}_{-+}=-\left(\Gamma _++i\Omega \right)R_{-+}+\Gamma _-R_{+-},`$ (13) where $`\Gamma _\pm =(\Gamma \pm \gamma _z)/2`$, and $`R_{lk}=|l\rangle \langle k|`$ ($`l,k=\pm `$) is an atomic downward transition operator between the dressed states $`|l\rangle `$ and $`|k\rangle `$ of two neighbouring dressed doublets. Eqs. (10) and (11) describe atomic downward transitions between like dressed states of two adjacent dressed doublets, whereas Eq. (12) (Eq. (13)) represents transitions from the dressed state $`|+\rangle `$ ($`|-\rangle `$) of one dressed doublet to the dressed state $`|-\rangle `$ ($`|+\rangle `$) of the next dressed doublet. If the Rabi frequency $`\Omega `$ is much greater than $`\Gamma _\pm `$, the terms associated with different resonant frequencies, $`\Gamma _-R_{-+}`$ in Eq. (12) and $`\Gamma _-R_{+-}`$ in Eq. (13), are negligible under the secular approximation. Consequently, the two transitions $`|+\rangle \to |-\rangle `$ and $`|-\rangle \to |+\rangle `$ are independent. Otherwise they are correlated, i.e., as the atom decays from $`|+\rangle `$ to $`|-\rangle `$ it drives the other transition from $`|-\rangle `$ to $`|+\rangle `$, and vice versa. This reflects the fact that fluorescent photons emitted in these transitions are indistinguishable, so that quantum interference between the transition channels dominates. It is well known that resonance fluorescence can be described by spontaneous emission of the atom down the ladder of dressed-state doublets. The atomic decays between like dressed states of two adjacent dressed doublets, governed by Eqs. (10) and (11), give rise to a spectral component $$\Lambda _0(\omega )=\frac{\Gamma }{4\left(\Gamma ^2+\omega ^2\right)},$$ (14) which is centred at the laser frequency and has a linewidth $`2\Gamma `$ and a height $`1/(4\Gamma )`$. This component broadens and is suppressed as the laser linewidth increases. The other transitions, described by Eqs. (12) and (13), result in a spectrum $$\Lambda _1(\omega )=\frac{1}{4}\text{Re}\left[\frac{\gamma _z+i\omega }{(\Gamma _++i\omega )^2+\Omega ^2-\Gamma _-^2}\right],$$ (15) whose position and shape depend on the laser linewidth $`L`$ and intensity $`\Omega `$.
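A minimal numerical rendering of the two components (Python/NumPy; Eqs. (14)-(15) as written above, with parameters chosen to mimic Fig. 4) is:

```python
import numpy as np

def lambda0(omega, gamma, L):
    """Eq. (14): decays between like dressed states -> Lorentzian at line centre."""
    Gamma = gamma + L
    return Gamma / (4.0 * (Gamma**2 + omega**2))

def lambda1(omega, Omega, gamma, L):
    """Eq. (15): correlated |+> <-> |-> transitions between adjacent doublets."""
    Gamma, gz = gamma + L, 2.0 * gamma
    Gp, Gm = (Gamma + gz) / 2.0, (Gamma - gz) / 2.0
    return 0.25 * ((gz + 1j * omega) / ((Gp + 1j * omega)**2 + Omega**2 - Gm**2)).real

omega = np.linspace(-300.0, 300.0, 1201)
for L in (10.0, 200.0):                  # resolved-sideband vs broadband regime
    total = lambda0(omega, 1.0, L) + lambda1(omega, 50.0, 1.0, L)
    print(L, total[600], total.max())    # line-centre value vs global maximum
```

For $`L=200\gamma `$ the line-centre value lies below the maximum, the hole-burning profile of Fig. 1(d).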
The total fluorescence spectrum is the sum of the two components, $`\Lambda (\omega )=\Lambda _0(\omega )+\Lambda _1(\omega )`$, which are shown in Fig. 4 for different laser linewidths. The spectrum $`\Lambda _0(\omega )`$ always has a Lorentzian shape located at line centre, which is independent of the laser intensity but varies with the laser linewidth: as the linewidth increases, the spectral height is suppressed and the spectral width is broadened. $`\Lambda _1(\omega )`$, by contrast, is sensitive to both parameters. When $`L\ll \Omega `$, $`\Lambda _1(\omega )`$ exhibits a well-resolved two-peak structure, as shown in Fig. 4a. This is because the dressed doublet $`|\pm \rangle `$ is well separated, and the resulting transitions $`|+\rangle \to |-\rangle `$ and $`|-\rangle \to |+\rangle `$ have different (distinguishable) resonance frequencies $`\omega =\pm \Omega `$. Correspondingly, the total fluorescence spectrum has the three-peak Mollow structure. When $`L\gg \Omega `$, $`\Lambda _1(\omega )`$ shows a dip bored into a wide bell-shaped spectrum, as depicted in Figs. 4b-4d. The total spectrum thus has a hole-burning profile at line centre. The larger the laser linewidth, the narrower the hole burning. It is obvious that the hole-burning (dip) profile originates from the correlated transitions (quantum interference) between the dressed states $`|\pm \rangle `$ of two adjacent dressed doublets. When the atom-laser detuning is taken into account, the populations of the dressed states $`|\pm \rangle `$ are no longer equal. Hence the transitions $`|+\rangle \to |-\rangle `$ and $`|-\rangle \to |+\rangle `$ have different probability amplitudes, and the resulting fluorescence spectrum is asymmetric; the total spectrum exhibits a dispersive-like profile at line centre. In summary, we have demonstrated that when a two-level atom is excited by a strong laser field with a broad bandwidth due to phase diffusion, the resonance fluorescence spectrum may exhibit anomalous, narrow resonance features, such as hole-burning and dispersive profiles at line centre. The physics behind these anomalous spectral features is quantum interference between different dressed-state transition channels. From the experimental point of view, observing these features in this system is much easier than in a squeezed vacuum. ###### Acknowledgements. This work is supported by the National Natural Science Foundation of China and the United Kingdom EPSRC. P.Z. wishes to thank S. Swain for discussions and W. Gawlik for providing his reprint.
# Baryon/anti-baryon inhomogeneity and big bang nucleosynthesis ## 1 Introduction Big bang nucleosynthesis (BBN) has been one of the most successful enterprises of modern cosmology. Comparison of observationally inferred primordial light element abundances with predictions from theoretical BBN calculations provides strong constraints on conditions in the Early Universe. The Early Universe somehow must either produce the simple and appealing homogeneous conditions of the standard BBN model, or it must produce an exceptional set of inhomogeneous conditions that generate the observed light element abundances. Prompted by work on the QCD phase transition, the effects of high-amplitude, sub-horizon-scale baryon-number fluctuations on BBN were studied. A detailed numerical calculation of the evolution of such inhomogeneities through nucleosynthesis was done by Jedamzik, Fuller, and Mathews. Neutrino-, baryon-, and photon-induced dissipative processes were coupled to nuclear reaction rates in multiple regions in an extension of the standard Wagoner nucleosynthesis code. In Ref. , it was found that any significant inhomogeneities overproduced helium-4 and/or deuterium. The effect of a first-order electroweak transition that produced similar baryon-number inhomogeneities has also been studied. Recently, Giovannini and Shaposhnikov have suggested that electroweak baryogenesis could take place through baryon-number-violating interactions with a primordial (hyper-)magnetic field. The primordial field could fluctuate across regions much larger than the horizon at the electroweak epoch, and could cause both positive and negative baryon-number fluctuations. This could leave baryon/anti-baryon inhomogeneities that conceivably could survive until the BBN epoch. In Ref. , constraints were placed on the fields that guaranteed that they would have no effect on standard BBN. That is, the baryon/anti-baryon regions were constructed to be smaller than the baryon diffusion length at the epoch of BBN, so that such regions were damped out early. These fluctuations were constructed so that the weak interactions reset the resulting neutron-to-proton ratio, $`n/p`$, to its standard BBN value. The $`n/p`$ ratio is extremely important in determining the resulting element abundances. Since almost all neutrons are eventually incorporated into alpha particles, any change in the $`n/p`$ ratio will affect the <sup>4</sup>He abundance. Roughly, the mass fraction, $`Y_p`$, of <sup>4</sup>He will be $$Y_p\simeq \frac{4n_4}{n_n+n_p}=\frac{2(n/p)}{1+(n/p)}\simeq 1/4.$$ (1) Baryon/anti-baryon regions that are larger than the baryon diffusion length could significantly affect the $`n/p`$ ratio. Rehm and Jedamzik have recently studied these issues. ## 2 Nucleosynthesis with baryon/anti-baryon inhomogeneities Baryon and anti-baryon regions that survive until the neutron-proton weak freeze-out epoch subsequently face neutron/antineutron diffusion and, therefore, baryon/anti-baryon annihilation. Ultimately, that which we call “matter” must win out in annihilations and thus must have a greater net number than antimatter. Weak freeze-out occurs at a temperature $`T_{WFO}\sim 1\mathrm{MeV}`$, when neutrons and protons (and their antiparticles) cease to be rapidly interconverted by lepton capture processes. Separate regions of baryons or anti-baryons will be preserved for epochs $`T>T_{WFO}`$ if the length scales associated with these fluctuations are large compared to the baryon (anti-baryon) diffusion length.
However, for $`T<T_{WFO}`$, neutrons and antineutrons diffuse efficiently between these regions, while protons and antiprotons remain relatively fixed in their respective regions because of Coulomb interactions with the $`e^\pm `$ background. Neutrons and antineutrons are free from such interactions, and their diffusion length is several orders of magnitude larger. Neutrons “free stream” into the anti-baryon regions, and antineutrons “free stream” out into the baryon regions. Assuming the baryon regions are significantly larger (in total number of baryons), annihilations between neutrons and anti-baryons in the former anti-baryon regions have two important effects: (1) depletion of neutrons in the baryon regions, and (2) the formation of extremely neutron-rich “bubbles” in place of the anti-baryon regions. Recent work on the observationally inferred primordial abundances of the light elements D, <sup>4</sup>He, and <sup>7</sup>Li has left a slight, yet significant, disparity between the predicted standard BBN abundances and measurements. The observationally inferred abundances and corresponding BBN-inferred baryon-to-photon ratios ($`\eta `$’s) are: $`Y_p=0.234\pm 0.002`$, $`\eta _{\mathrm{He}}=(1.8\pm 0.3)\times 10^{-10}`$; $`\mathrm{D}/\mathrm{H}=(3.4\pm 0.3\pm 0.3)\times 10^{-5}`$, $`\eta _\mathrm{D}=(5.1\pm 0.3)\times 10^{-10}`$; $`{}_{}{}^{7}\mathrm{Li}/\mathrm{H}=(3.2\pm 0.12\pm 0.05)\times 10^{-10}`$, $`\eta _{\mathrm{Li}}=(1.7_{-0.3}^{+0.5})\times 10^{-10}`$ or $`(4.0_{-0.9}^{+0.8})\times 10^{-10}`$. Specifically, the predicted production of <sup>4</sup>He is inconsistent (too high) with the $`\eta `$ inferred from the primordial deuterium abundance. Neutron-depleted baryon regions and the neutron bubbles have interesting consequences for BBN. The depletion of neutrons lowers the $`n/p`$ ratio and, therefore, the predicted <sup>4</sup>He abundance. We can show that the number density, $`n_{\overline{b}}`$, of the anti-baryon regions that occupy a fraction $`f_v`$ of the horizon volume is related to the number density of the baryons, $`n_b`$, in the baryon regions by $$n_{\overline{b}}=\left[\frac{k_{\mathrm{NSE}}(1-f_v)-k_{\mathrm{He}}}{f_v(1-f_v+k_{\mathrm{He}})}\right]n_b.$$ (2) Here, $`k_{\mathrm{NSE}}`$ is the standard BBN $`n/p`$ ratio, and $`k_{\mathrm{He}}`$ is the $`n/p`$ ratio that corresponds to the observed <sup>4</sup>He abundance. The number densities $`n_b`$ and $`n_{\overline{b}}`$ refer to the epoch just prior to $`T_{WFO}`$. If the initial number density of the anti-baryon regions is equal to the number density of the baryon regions, then the fractional volume that can be occupied by anti-baryons for concordance between <sup>4</sup>He and the observationally inferred deuterium abundance is $`f_v\approx 10^{-4}`$. This small fraction reflects the small disparity between the light element abundances. Anti-baryon regions with slightly larger $`f_v`$ will underproduce <sup>4</sup>He; anti-baryon regions with significantly larger $`f_v`$ will not have all of their anti-baryons annihilated prior to nucleosynthesis.
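As an illustration of how Eq. (2) is used, the sketch below (Python) inverts it for $`f_v`$ under the condition $`n_{\overline{b}}=n_b`$. The input values of $`k_{\mathrm{NSE}}`$ and $`k_{\mathrm{He}}`$ are rough placeholders; the $`f_v\approx 10^{-4}`$ quoted above follows from the detailed inputs of the full calculation, not from these round numbers.

```python
def antibaryon_ratio(f_v, k_nse, k_he):
    """Eq. (2): required n_bbar/n_b so that neutron annihilation lowers the
    effective n/p ratio from k_nse to k_he, for anti-regions of volume fraction f_v."""
    return (k_nse * (1.0 - f_v) - k_he) / (f_v * (1.0 - f_v + k_he))

# Rough placeholder inputs: n/p at weak freeze-out, and the n/p value that
# reproduces Y_p = 0.234 through Eq. (1), i.e. k = Y_p / (2 - Y_p).
k_nse = 1.0 / 6.0
k_he = 0.234 / (2.0 - 0.234)

# Solve antibaryon_ratio(f_v) = 1 (n_bbar = n_b) by bisection; the ratio
# decreases monotonically with f_v in the range of interest.
lo, hi = 1e-8, 0.5
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if antibaryon_ratio(mid, k_nse, k_he) > 1.0:
        lo = mid
    else:
        hi = mid
print("f_v =", 0.5 * (lo + hi))
```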
Nucleosynthesis in the neutron-rich regions is also important when considering baryon and anti-baryon inhomogeneities. In order to simulate nucleosynthesis in these regions, we have forced the weak interaction rates to freeze out the $`n/p`$ ratio at 6 and at 10. The “isospin symmetric” nucleosynthesis for $`n/p\simeq 6`$ would be expected to evolve identically to the standard case of $`n/p\simeq 1/6`$, with nucleosynthesis being limited by the lack of protons rather than neutrons. However, the decay of neutrons provides a source of protons in these neutron-rich regions and can lead to unique nucleosynthesis results. In our numerical calculations, the production of beryllium-9 and boron-11 becomes significantly amplified. The recent limits on the abundances of these elements in low-metallicity stars provide a strong upper bound on their primordial production. We have also included reactions leading to heavier-element production in neutron-rich regions: $${}_{}{}^{14}\mathrm{C}(\alpha ,\gamma )^{18}\mathrm{O}(\mathrm{n},\gamma )^{19}\mathrm{O}(\beta ^-)^{19}\mathrm{F}(\mathrm{n},\gamma )^{20}\mathrm{F}(\beta ^-)^{20}\mathrm{Ne}(\mathrm{n},\gamma )^{21}\mathrm{Ne}(\mathrm{n},\gamma )^{22}\mathrm{Ne}.$$ (3) Through further $`(n,\gamma )`$ reactions and $`\beta `$-decays, the products of these reactions may trigger the r-process in the neutron-rich bubbles. Primordial production of heavy elements is also constrained by old Population II stars. However, a limited primordial source of heavy elements cannot be ruled out. Whether or not values of the parameters $`n_b`$, $`n_{\overline{b}}`$, and $`f_v`$ exist for which all of the nucleosynthesis constraints can be met remains an open question. It appears to us, however, that it will be extremely difficult to find such values, especially in light of potential B, Be, and r-process production in these schemes. ## 3 Acknowledgment This work was supported in part by grants from NSF and from NASA.
# Thermodynamics of doped Kondo insulator in one dimension - Finite-temperature DMRG study - ## Abstract The finite-temperature density-matrix renormalization-group method is applied to the one-dimensional Kondo lattice model near half filling to study its thermodynamics. The spin and charge susceptibilities and the entropy are calculated down to $`T=0.03t`$. We find two crossover temperatures near half filling. The higher crossover temperature connects continuously to the spin gap at half filling, and the susceptibilities are suppressed around this temperature. At low temperatures, the susceptibilities increase again with decreasing temperature when the doping is finite. We confirm for several parameter sets that they finally approach the values obtained in the Tomonaga-Luttinger (TL) liquid ground state. The crossover temperature to the TL liquid is a new energy scale determined by the gapless excitations of the TL liquid. The transition from the metallic phase to the insulating phase is accompanied by the vanishing of the lower crossover temperature. The Kondo insulator is a typical strongly correlated insulator and develops a spin gap at low temperatures. The half-filled Kondo lattice (KL) model has been studied as its theoretical model, particularly in one dimension (1D), by both numerical and analytical approaches. The ground state of this model is shown to have both spin and charge gaps, $`\Delta _\text{s}`$ and $`\Delta _\text{c}`$, and strong correlation effects appear in their difference. The spin gap is always smaller than the charge gap, and for small exchange coupling $`J`$ the spin gap is exponentially small, $`\Delta _\text{s}\propto \mathrm{exp}(-1/\alpha \rho J)`$, while the charge gap is linear in $`J`$. At finite temperatures, the spin gap characterizes the unique temperature dependence of the excitation spectrum. With increasing temperature, the structure of the charge gap in the dynamic charge structure factor and in the quasiparticle density of states disappears at $`T\sim \Delta _\text{s}`$, much lower than $`\Delta _\text{c}`$, although the latter is the energy scale of charge excitations at $`T=0`$. This feature is also seen in the temperature dependence of the charge susceptibility, which decreases drastically below $`T\sim \Delta _\text{s}`$. As for the spin susceptibility, it decreases exponentially with the energy scale of $`\Delta _\text{s}`$, as expected. When a finite density of carriers is doped, the 1D KL model belongs to another universality class. The ground state and the low-energy excitations are described as a Tomonaga-Luttinger (TL) liquid. In contrast to the half-filled case, the ground state has gapless excitations in both the spin and charge channels. Consequently the spin and charge susceptibilities are finite at $`T=0`$ and determined by the velocities of the collective excitations, $`\chi _\text{s}=1/(2\pi v_\text{s})`$ and $`\chi _\text{c}=2K_\text{c}/(\pi v_\text{c})`$, where $`K_\text{c}`$ is the Luttinger-liquid parameter, whereas $`\chi _\text{s}=0`$ and $`\chi _\text{c}=0`$ at half filling due to the gaps. In the present paper we study the thermodynamics of the 1D KL model near half filling, i.e. in the vicinity of the metal-insulator transition. Thermodynamic quantities are calculated by the finite-temperature density-matrix renormalization-group (finite-$`T`$ DMRG) method, and we find that the spin gap at half filling appears as a crossover temperature around which the susceptibilities are suppressed.
We also find a new, lower crossover temperature, which is the energy scale of the gapless excitations of the TL-liquid ground state. It is sensitive to the hole doping, and the transition from the metallic phase to the insulating phase corresponds to the vanishing of this lower crossover temperature. The Hamiltonian we use in the present study is the 1D KL model, described by $$\mathcal{H}=-t\underset{i,s}{\sum }(c_{is}^{\dagger }c_{i+1s}+\text{H.c.})+J\underset{i,s,s^{\prime }}{\sum }𝐒_i\cdot \frac{1}{2}𝝈_{ss^{\prime }}c_{is}^{\dagger }c_{is^{\prime }},$$ (1) where $`𝝈_{ss^{\prime }}`$ are the Pauli matrices and $`𝐒_i=\sum _{s,s^{\prime }}\frac{1}{2}𝝈_{ss^{\prime }}f_{is}^{\dagger }f_{is^{\prime }}`$ is the localized spin at site $`i`$. The model has hopping $`t`$ ($`t>0`$) between nearest-neighbor pairs only. The conduction electron density $`n_\text{c}`$ is unity at half filling, and hole doping ($`n_\text{c}=1-\delta `$) is physically equivalent to electron doping ($`n_\text{c}=1+\delta `$) due to particle-hole symmetry. In order to study thermodynamics we employ the finite-$`T`$ DMRG method. By iteratively increasing the Trotter number of the quantum transfer matrix, we can obtain the eigenvector for the largest eigenvalue with the desired accuracy. Thermodynamic quantities are calculated directly from this eigenvector, and extrapolation in the system size is not needed. This method was first applied to quantum spin systems and shown to be reliable down to the low temperature $`T=0.01J`$. The method is free from statistical errors and the negative-sign problem, which are advantages compared with the quantum Monte Carlo method. To obtain thermodynamic quantities at fixed hole density $`\delta =1-n_\text{c}`$, we need the chemical potential $`\mu `$ at each temperature. This requires many DMRG calculations at different fixed chemical potentials: for each $`J`$ we use 36 sets of $`\mu `$ with a typical interval $`\Delta \mu =0.025t`$. The number of states kept in the present study is typically 54, and the corresponding truncation error is $`10^{-3}`$ at the lowest temperature $`T=0.03t`$ with Trotter number 60. The $`T`$-dependence of the chemical potential for several $`\delta `$’s is shown in Fig. 1 for $`J=1.6t`$ and $`1.2t`$. Note that the chemical potential at half filling is always zero due to particle-hole symmetry. At high temperatures, $`\mu `$ is similar to the value for free conduction electrons and indicates metallic behavior. At low temperatures, however, a significant increase in $`|\mu |`$ appears for small $`\delta `$. In the limit $`T\to 0`$, $`\mu `$ approaches its $`T=0`$ value, and this clearly shows the presence of the quasiparticle gap $`\Delta _{\text{qp}}`$, as shown in Fig. 1. Previous $`T=0`$ DMRG calculations show that $`\Delta _{\text{qp}}=0.7t`$ for $`J=1.6t`$ and $`0.47t`$ for $`J=1.2t`$, which is consistent with the present calculation. In the following, we calculate thermodynamic quantities for fixed $`\mu `$’s and convert them to fixed $`\delta `$’s using these data for $`\mu (\delta ,T)`$.
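The fixed-$`\mu `$ to fixed-$`\delta `$ conversion amounts to inverting the measured $`n_\text{c}(\mu )`$ at each temperature. A minimal sketch of this bookkeeping (Python/NumPy; the grids below are placeholders, not the actual DMRG output) is:

```python
import numpy as np

def mu_at_filling(mu_grid, nc_grid, delta):
    """Invert the measured n_c(mu) at fixed T to get mu at hole density
    delta = 1 - n_c (linear interpolation; n_c must increase with mu)."""
    return np.interp(1.0 - delta, nc_grid, mu_grid)

def quantity_at_filling(mu_grid, nc_grid, q_grid, delta):
    """Re-express a quantity measured at fixed mu (e.g. chi_s) at fixed delta."""
    return np.interp(mu_at_filling(mu_grid, nc_grid, delta), mu_grid, q_grid)

# Placeholder grids mimicking the 36 fixed-mu runs at one temperature
mu_grid = np.arange(-1.0, -0.1, 0.025)            # interval 0.025 t, 36 values
nc_grid = np.linspace(0.75, 1.0, mu_grid.size)    # placeholder n_c(mu) data
chi_grid = np.linspace(0.20, 0.05, mu_grid.size)  # placeholder chi(mu) data
print(quantity_at_filling(mu_grid, nc_grid, chi_grid, delta=0.1))
```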
The temperature dependence of the spin susceptibility $`\chi _\text{s}(T)`$ is plotted in Fig. 2 for $`J/t=1.6`$ and $`1.2`$ at $`0\le \delta \le 0.2`$. At high temperatures $`\chi _\text{s}`$ is asymptotically determined by the sum of the Curie term of the localized spins, $`1/(4T)`$, and the Pauli susceptibility, $`\chi _{\text{Pauli}}`$, of the free conduction electrons. This is actually seen in the inset of Fig. 2(a). For $`\delta \le 0.2`$, the change in the density of states of the free conduction band due to hole doping is within 5%, leading to little change in $`\chi _{\text{Pauli}}`$, and it is reasonable that the $`\delta `$-dependence of $`\chi _\text{s}`$ is small at high temperatures. With decreasing temperature the spin susceptibility increases owing to the Curie term of the localized spins, but around $`T\sim \Delta _\text{s}`$ the spin susceptibility starts to be suppressed, as in the $`\delta =0`$ case. Previous $`T=0`$ DMRG calculations show that $`\Delta _\text{s}=0.4t`$ for $`J=1.6t`$, and $`\Delta _\text{s}=0.16t`$ for $`J=1.2t`$. These behaviors suggest that the spin gap at $`\delta =0`$ persists as the crossover temperature characterizing the suppression of $`\chi _\text{s}`$ even away from half filling. With further decreasing temperature, $`\chi _\text{s}`$ sharply increases again when the doping is finite, whereas it decreases exponentially at half filling with the energy scale of $`\Delta _\text{s}`$. The increase in $`\chi _\text{s}`$ appears to be proportional to $`\delta `$ at low temperatures. In order to see this $`\delta `$-dependence in more detail, we plot the difference in $`\chi _\text{s}`$ between $`\delta >0`$ and $`\delta =0`$ divided by $`\delta `$. As shown in Fig. 3 for $`J=1.6t`$, a universal behavior is observed at low temperatures, indicating $`\chi _\text{s}(T)\simeq \chi _\text{s}(T,\delta =0)+\delta /(4T)`$. This means that the doped holes induce almost free spins of $`S=\frac{1}{2}`$ with density $`\delta `$. In the limit $`T\to 0`$, the thermodynamic properties are determined by the gapless collective excitations of the TL liquid, and the spin susceptibility is given by the spin velocity $`v_\text{s}`$ as $`\chi _\text{s}=1/(2\pi v_\text{s})`$. Thus there must be a crossover temperature where $`\chi _\text{s}(T)`$ deviates from $`\chi _\text{s}(T,\delta =0)+\delta /(4T)`$ towards $`1/(2\pi v_\text{s})`$. This crossover temperature is expected to be proportional to $`v_\text{s}`$, because it is the energy scale of the spin excitations. One can estimate this crossover temperature from $`\chi _\text{s}`$ at $`T=0`$ through the relation $`v_\text{s}=1/(2\pi \chi _\text{s})`$. The $`\chi _\text{s}(T=0)`$ are plotted in Fig. 4 for $`J=2.0t`$. They are calculated by the zero-temperature DMRG method with open boundary conditions via the size dependence of the lowest spin excitation energy, $`\Delta E(L)=\pi v_\text{s}/L`$. The $`T=0`$ susceptibility is very sensitive to the hole doping $`\delta `$ and diverges in the limit $`\delta \to 0`$. This reflects the fact that the spin velocity, as the characteristic energy scale, vanishes, so the crossover temperature to the TL liquid phase correspondingly vanishes as $`\delta \to 0`$. The $`\delta `$-dependence of $`\chi _\text{s}`$ appears exponential in the present calculation, but we will discuss this point again later.
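The conversion from finite-size excitation energies to $`\chi _\text{s}(T=0)`$ described above can be sketched as follows (Python/NumPy; the excitation energies below are placeholders consistent with $`v_\text{s}=0.3t`$, not actual DMRG data):

```python
import numpy as np

def spin_velocity(L_list, dE_list):
    """v_s from the finite-size scaling Delta E(L) = pi * v_s / L,
    via a least-squares fit of the excitation energies against pi/L."""
    x = np.pi / np.asarray(L_list, dtype=float)
    y = np.asarray(dE_list, dtype=float)
    return np.sum(x * y) / np.sum(x * x)    # slope of a line through the origin

# Placeholder open-chain excitation energies (consistent with v_s = 0.3 t)
L_list = [16, 24, 32, 48]
dE = [0.059, 0.039, 0.029, 0.020]
v_s = spin_velocity(L_list, dE)
print("v_s =", v_s, " chi_s(T=0) =", 1.0 / (2.0 * np.pi * v_s))
```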
With decreasing temperature, $`\chi _\text{c}`$ for small doping is suppressed around $`T\sim \Delta _\text{s}`$, as is $`\chi _\text{s}`$. The suppression of $`\chi _\text{c}`$ at $`\delta =0`$ is due to the development of the charge gap. Below $`T\sim \Delta _\text{s}`$, the correlation length of the localized spins becomes longer than the charge correlation length, and this induces an internal staggered magnetic field for the conduction electrons through the Kondo coupling. As far as the charge excitations are concerned, their time scale is much shorter than that of the spin excitations ($`\sim \Delta _\text{s}^{-1}`$), and the staggered field may be considered almost static. This staggered field induces a unit-cell doubling for the conduction electrons, and a finite gap appears at the Fermi energy of the otherwise free conduction electrons. Since this behavior is induced by the development of long-range spin correlations, it becomes noticeable only below $`T\sim \Delta _\text{s}`$ rather than $`\Delta _\text{c}`$. The present results show that contributions to $`\chi _\text{c}`$ from the small number of holes are small at temperatures $`T\sim \Delta _\text{s}`$, and the suppression is visible near half filling. With further decreasing temperature, the temperature dependence of $`\chi _\text{c}`$ at $`\delta >0`$ differs from the $`\delta =0`$ case, as for $`\chi _\text{s}(T)`$. $`\chi _\text{c}`$ at half filling becomes exponentially small below $`T\sim \Delta _\text{s}`$ with the energy scale of the quasiparticle gap, $`\chi _\text{c}\propto \mathrm{exp}(-\Delta _{\text{qp}}/T)`$, but for finite $`\delta `$, $`\chi _\text{c}`$ increases sharply. The increase of $`\chi _\text{c}`$ at low temperatures is larger for smaller $`\delta `$. This behavior near half filling is consistent with the doping dependence of $`\chi _\text{c}`$ at $`T=0`$, obtained by the zero-temperature DMRG method. At $`T=0`$, $`\chi _\text{c}`$ is given by the difference in the chemical potential, $`\chi _\text{c}=(2/L)/\left[\mu (L,N_\text{c}+1)-\mu (L,N_\text{c}-1)\right]`$, where $`2\mu (L,N_\text{c}+1)=E_\text{g}(L,N_\text{c}+2)-E_\text{g}(L,N_\text{c})`$ and $`E_\text{g}(L,N)`$ is the ground-state energy of $`N`$ conduction electrons in a system of length $`L`$. The results are shown in Fig. 6, which shows that $`\chi _\text{c}`$ increases with decreasing $`\delta `$ and seems to diverge in the limit $`\delta \to 0`$. The divergent behavior of $`\chi _\text{c}`$ corresponds to the vanishing of the charge velocity, the characteristic energy of the TL liquid, as $`\delta \to 0`$. The $`\delta `$-dependence of $`\chi _\text{c}`$ is close to $`1/\delta `$ and differs from the exponential dependence observed for $`\chi _\text{s}`$ in the present DMRG calculation. $`\chi _\text{s}`$ seems to diverge more strongly than $`\chi _\text{c}`$, but this point is not clear at the moment. The new energy scale characterizing the gapless excitations of the TL liquid may also be seen in the temperature dependence of the entropy $`𝒮(T)`$. The entropy is obtained as the derivative of the free energy with respect to temperature, and the results are shown in Fig. 7 for $`J=1.6t`$. At low temperatures the entropy at half filling becomes exponentially small, reflecting the finite spin and charge gaps. When $`\delta >0`$, the entropy is enhanced at low temperatures. This enhancement is due to the appearance of gapless excitations in the TL liquid.
The entropy of the TL liquid is proportional to $`T`$ at low temperatures, $`𝒮=\pi T(v_\text{s}^{-1}+v_\text{c}^{-1})/3`$. However, for small $`\delta `$, the $`T`$-dependence is weak even at temperatures $`T\sim 0.05t`$. The remaining entropy is consistent with the entropy of free spin-$`\frac{1}{2}`$ carriers with density $`\delta `$. This implies that the characteristic energy scale for the gapless excitations of the TL liquid is small near half filling, and the $`T`$-linear dependence will appear only at still lower temperatures, $`T\ll 0.05t`$. In conclusion, we have applied the finite-$`T`$ DMRG method to the one-dimensional Kondo lattice model away from half filling, and found two crossover temperatures which characterize the thermodynamics near half filling. One is the spin gap at half filling, and the other is the characteristic energy of the collective excitations in the TL-liquid ground state. The energy scale of the TL liquid becomes smaller on approaching half filling and seems to vanish in the limit $`\delta \to 0`$. This work is financially supported by a Grant-in-Aid from the Ministry of Education, Science, Sports and Culture of Japan.
# The Cepheid Distance Scale after Hipparcos ## 1 Introduction The cepheid distance scale is the central link of the cosmic distance ladder. Because even the nearest cepheids are more than 100 pc away from the Sun (Polaris at $`\sim `$130 pc, $`\delta `$ Cep at $`\sim `$300 pc), no reliable parallax measurements were available for cepheids before the Hipparcos mission. The zero-point of the Period-Luminosity (PL) relation was calibrated by secondary methods, using cepheids in open clusters and associations or Baade-Wesselink techniques. The Hipparcos data for cepheids opened the possibility of a geometric determination of the zero-point of the PL relation. Soon after the Hipparcos data release, Feast & Catchpole (1997, FC) announced that the Hipparcos cepheid data indicated a zero-point about 0.2 mag brighter than previously thought (implying $`\mu `$=18.70 for the LMC). Adopting the FC notation for the PL relation: $$M_V=\delta \mathrm{log}P+\rho $$ they find $`\rho =-1.43\pm 0.13`$ (for an assumed slope $`\delta =-2.81`$). This result attracted considerable attention, since it implied a $`\sim `$10% downward revision of the value of $`H_0`$ obtained from galaxy recession velocities. In turn, the higher expansion ages derived could become compatible with the new, lower ages for globular clusters obtained from Hipparcos subdwarfs, resolving the “cosmic age problem” (the fact that globular cluster ages were found to be much older than the expansion age of the Universe). In the past year, however, several authors have reconsidered the Hipparcos cepheid data and voiced criticism of the FC result. All subsequent studies obtain calibrations for the zero-point of the PL relation similar to pre-Hipparcos values, or fainter (see the sketch in Fig. 1). Szabados (1997) pointed out that known or suspected binaries were abundant in the cepheid sample, and that the PL relation found after removing them was compatible with pre-Hipparcos values. Madore & Freedman (1998) repeated the whole analysis with multi-wavelength data (BVIJHK). They also concluded on a fainter zero-point. Oudmaijer et al. (1998), in a study devoted to the effect of the Lutz-Kelker bias on Hipparcos luminosity calibrations, analyse the cepheid data as an illustration and derive $`\rho =-1.29\pm 0.02`$. Finally, Luri et al. (1998) apply the “LM method” (Luri et al. 1996) to the cepheid sample and find a very faint zero-point of $`\rho =-1.05\pm 0.17`$. The situation, one year after the Hipparcos data release, is therefore very perplexing: while before Hipparcos the zero-point of the cepheid PL relation seemed reasonably well determined to within 0.1 mag (e.g. Gieren et al. 1998), values derived from the same Hipparcos parallax data cover a range of 0.4 mag! Fortunately, this distressing situation may be only temporary. As all the studies considered use the same parallax data and similar photometric and reddening values for the Hipparcos cepheids, the differences are almost entirely due to the statistical procedures used in the analyses. Contrarily, for instance, to the case of the very metal-poor globular cluster distance scale, where observational uncertainties may remain the limiting factor (see the review by Cacciari in this volume), it may be hoped that the disagreements about the PL zero-point can be resolved by evaluating the different approaches. This is what we attempt here. From statistical considerations on the one hand, and Monte Carlo simulations of the different procedures on the other, we have tested the robustness and possible biases of the different procedures.
The results are presented below in Par. 3 to 8, study by study, beginning with an imaginary author using the unsophisticated ”direct method”. It appears that the difficulties involved in the treatment of high-relative-error parallax data are at the core of the question. On this subject, the review by Arenou in this volume constitutes an excellent complement to the present contribution, showing the pitfalls easiest to overlook. As far as cepheids are concerned, we shall contend here that the situation is in fact rather clear, and that much of the apparent disagreement is caused by statistical biases that have already been reported in dealing with parallax data. ## 2 Hipparcos cepheids Around 200 cepheids were measured with Hipparcos. All but the nearest ones are so remote that $`\pi _{real}<\sigma _\pi `$<sup>1</sup><sup>1</sup>1Parallaxes are noted $`\pi `$ and their uncertainty $`\sigma _\pi `$. There are 19 of them with $`\sigma _\pi /\pi <50`$%. The closest cepheid, Polaris, was measured at $`\pi =7.56\pm 0.48`$ mas. $`\delta `$ Cep itself has $`\pi =3.32\pm 0.58`$ mas. Despite the low accuracy of individual parallax determinations, it is not unreasonable to attempt an accurate determination of the PL relation zero-point, because relative distances are precisely known through the PL and PLC relations. Each individual parallax can therefore be seen as a measurement of the PL zero-point, and a large number of low-accuracy parallaxes can yield a reliable combined value of the zero-point. ## 3 Dangers of the “direct method” The most natural way to infer absolute magnitudes from parallaxes would be to calculate distances from the inverse of the parallax and use Pogson’s law: $$m_V-M_V=5\mathrm{log}(1/\pi )-5+a_V$$ (1) Then, the zero-point of the PL relation could be fit in the Period-$`M_V`$ plane, by some kind of weighted least squares. In fact, this is an extremely biased way to proceed when the relative errors on the parallax are high ($`\sigma _\pi /\pi >`$20%). Fig. 2 shows graphically why the inverse of the parallax is a biased estimator of the distance, and how the magnitude derived from Equ. 1 has a very asymmetrical and skewed distribution if the statistical distribution of the parallax is gaussian. This effect was pointed out by several authors in the wake of the publication of the Hipparcos catalogue (e.g. Luri & Arenou 1997). An essential condition for the use of a least-squares fit is the symmetry of the error distribution, and if it is not satisfied, first-order biases are to be expected. Moreover, in order to use Equ. 1, negative parallaxes must be ignored and the data selected by some cut in $`\sigma _\pi /\pi `$. Both selection criteria introduce further biases. By discriminating against low $`\pi _{meas}`$ at a given $`\sigma _\pi `$ and $`\pi _{real}`$, they bias the result towards lower distances and fainter magnitudes. Monte Carlo simulations with samples similar to the actual Hipparcos cepheid sample show that this “direct method” leads to a bias of $`\sim `$0.2 mag towards a fainter PL relation zero-point. An important bias indeed, due exclusively to the incorrect statistical treatment of the parallax data. ## 4 Shifting to parallax space : Feast & Catchpole 1997 FC avoid the difficulties of the $`\pi \to M_V`$ transformation with a change of variable that allows the parallaxes to be combined linearly. Instead of deriving the PL relation zero-point from $`M_V=\delta \mathrm{log}P+\rho `$ and Equ.
1, FC compute $`10^{0.2\rho }`$ from the mathematically equivalent relation $$10^{0.2\rho }=0.01\pi 10^{0.2(m_V-a_V-\delta \mathrm{log}P)}$$ (2) The final value of $`\rho `$ is recovered from the average of the values of $`10^{0.2\rho }`$, weighted by the uncertainty of the right-hand side of (2). At first sight the procedure adopted by FC looks unnecessarily complicated, but we saw in the previous section why a more straightforward approach is not preferable. The FC method removes the statistical biases affecting the direct method: since the parallax appears linearly in (2), negative parallaxes can be kept, no $`\sigma _\pi /\pi `$ cut is needed, and the uncertainties are symmetrical. However, the condition for the use of this method is that the uncertainties on the exponent $`0.2(m_V-a_V-\delta \mathrm{log}P)`$ be smaller than the uncertainties on $`\pi `$. For this reason, the method is only reliable for a group of objects with errors on the relative distances much smaller than the parallax errors, such as the Hipparcos cepheids. Any dispersion of the exponent $`0.2(m_V-a_V-\delta \mathrm{log}P)`$ will make the distribution of errors on the right-hand side of (2) asymmetrical again, and result in a bias towards brighter magnitudes. We have tested the FC procedure with Monte Carlo simulations on synthetic samples of various compositions. The effect of modifying several assumptions was tested, such as varying the slope of the PL relation, the width of the instability strip or the spatial distribution of cepheids. Samples were also drawn from a larger volume, so that classic Lutz-Kelker biases would be modeled. Representative results are shown in Tables 1 and 2. The conclusion is that the FC method is sound and robust, and that systematic biases are smaller than 0.03 mag<sup>2</sup><sup>2</sup>2Similar results were obtained by X. Luri (priv. comm.), who has also extensively tested the FC procedure. These small residual biases are caused by the asymmetrical effect of the dispersion in the exponent $`0.2(V_0-\delta \mathrm{log}P)`$ of Equ. 2. However, the dispersion of the results recovered in the simulations is substantially higher than that stated in FC, indicating that the final uncertainty may have been underestimated. We return to this point in Par. 10.
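To make the parallax-space averaging of Par. 4 concrete, the following sketch (Python/NumPy) evaluates Eq. (2) star by star and averages; the small four-star sample is a placeholder, not the actual Hipparcos data, and parallaxes are in mas:

```python
import numpy as np

def fc_zero_point(pi_mas, sigma_pi, V, A_V, logP, slope=-2.81):
    """Average Eq. (2) in parallax space: each star gives
    t_i = 0.01 * pi_i * 10**(0.2*(V_i - A_V_i - slope*logP_i)),
    a linear-in-parallax estimator of 10**(0.2*rho).
    Negative parallaxes are kept; no sigma_pi/pi cut is applied."""
    phot = 10.0 ** (0.2 * (V - A_V - slope * logP))
    t = 0.01 * pi_mas * phot
    w = 1.0 / (0.01 * sigma_pi * phot) ** 2    # weights from sigma_pi only
    t_mean = np.sum(w * t) / np.sum(w)
    return 5.0 * np.log10(t_mean)              # the zero-point rho

# Placeholder sample (not the actual catalogue values)
pi   = np.array([3.32, 1.10, -0.40, 2.00])     # mas; a negative value is kept
spi  = np.array([0.58, 0.80,  1.00, 0.60])
V    = np.array([3.95, 6.20,  7.50, 5.30])
A_V  = np.array([0.23, 0.90,  1.20, 0.50])
logP = np.array([0.73, 0.85,  1.00, 0.65])
print("rho =", fc_zero_point(pi, spi, V, A_V, logP))
```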
## 5 Possible effect of binaries : Szabados 1997 A large fraction of known cepheids are confirmed or suspected binaries. Szabados (1997, SZ) has pointed out that binary cepheids show more scatter in the PL diagram than single cepheids, and suggested that this could be due to the noise induced by binarity on the Hipparcos parallax determinations. As an orbit of $`\sim `$1 AU has the amplitude of the parallax at any distance, unrecognized companions could in principle interfere with the parallax measurement. By fitting a PL relation to single cepheids only (Fig. 3a), SZ recovers a zero-point equivalent to the pre-Hipparcos values (about 0.2 mag fainter than FC). As shown by Fig. 3a, suspected binaries have larger error bars on average than single cepheids. But given these error bars, the scatter of the binary suspects is compatible with the uncertainties (as confirmed by a Kolmogorov-Smirnov test). The question is to understand why binaries have larger error bars, and for this the $`\mathrm{log}P-M_V`$ plane is not a good representation, as explained in Par. 3, because the same parallax uncertainty can result in greatly different $`M_V`$ uncertainties. When the uncertainties are considered in parallax space (Figs. 3b/3c), it appears that at a given magnitude the binary cepheids do not have significantly larger parallax uncertainties than the single cepheids. The high dispersion of the binary group in Fig. 3a is simply due to the fact that, on average, suspected binaries are fainter and more remote. Now, this has to be a chance effect due to low-number statistics (unless some weird mechanism weeds binary cepheids out of the solar neighbourhood). Another aspect of Fig. 3a is that eight single stars look much nearer to the mean relation than their uncertainties would indicate, adding to the visual impression that binary suspects are much more scattered. These stars have $`\sigma _\pi `$=0.5-0.7 mas, and much smaller residuals: $`\langle \pi _{obs}-\pi _{PL}\rangle `$=0.07 mas. Again, this can only be due to chance. The alternative is to suggest that, for some reason, Hipparcos parallaxes are a factor 7-10 more precise for single cepheids than for any other star in the catalogue, a rather unreasonable hypothesis. If this sounds like a strange coincidence, one should keep in mind that the cepheid sample was split into two parts under several criteria to check for systematic effects (single and binary, overtone and fundamental pulsator, low and high period, low and high reddening) and that a slightly strange-looking distribution for 8 points according to one of these criteria should not be over-interpreted. A Kolmogorov-Smirnov test indicates that the normalised parallax residuals $`(\pi _{obs}-\pi _{PL})/\sigma _\pi `$ (see Fig. 3c) for binaries only, for single stars only and for the combined sample are all compatible with a normal distribution. The lowest KS coefficient, that for single stars, is 0.27. Therefore, there is no statistically significant indication that the suspected binary cepheids suffer additional noise in the Hipparcos parallax measurements. Does the exclusion of suspected binaries change the zero-point derived from Hipparcos parallaxes? SZ uses a straightforward fit in magnitude space to calculate the PL relation of single cepheids. We saw in Par. 3 how biased the results can become. In fact, when analysed with the procedure used by FC, the single cepheids only, the binaries only and the whole sample give similar results ($`\rho =-1.51`$, $`-1.36`$ and $`-1.43`$ respectively). The zero-point $`\sim `$0.2 mag fainter found by SZ is not due to the exclusion of suspected binaries, but to the biases caused by the use of magnitudes calculated from parallaxes. The implications are that: 1- There is no statistically significant indication that binarity affects Hipparcos parallaxes for cepheids. 2- Keeping or removing the suspected binaries gives essentially the same result for the PL relation zero-point. ## 6 Multi-wavelength magnitude analysis : Madore & Freedman 1998 Madore & Freedman (1998, MF) reconsider the calibration of Hipparcos cepheids, using data in several visible and infrared wavelengths (BVIJHK) rather than the traditional B and V. This reduces the number of objects available, as only 7 Hipparcos cepheids have been measured in all six wavelengths. MF compute magnitudes from the standard formula $$m_i-M_i=5\mathrm{log}(1/\pi )-5+a_i$$ (3) and calculate the PL zero-point $`\rho `$ by averaging the magnitude residuals with weights $`\omega \propto \pi ^2/\sigma _\pi ^2`$.
This procedure yields values for $`\rho `$ that depend significantly on wavelength (see column two of Table 3), a dependence that MF attribute to reddening problems<sup>3</sup><sup>3</sup>3MF take individual reddenings from the Fernie et al. (1995) catalogue (and not from the reference given in the article, Fernie, Kamper & Seager 1993, which does not contain reddenings). MF state that they use the same unreddening procedure as FC. However, Fernie et al. use multicolour calibrations for reddenings, whereas FC calculate reddenings from a mean Period-Colour (PC) relation. Thus MF do not benefit from the compensating effect of combining PC reddenings with a PL relation (see for instance Pont et al. 1997 or FC). As a consequence, the biases are amplified, which may explain part of the dependence of their zero-point on wavelength. At the longer wavelengths, MF recover the values found by FC, whereas in the infrared fainter values are found, corresponding to the pre-Hipparcos calibration and $`\mu _{LMC}\simeq 18.5`$ mag. It should now be clear that the MF procedure is subject to very large biases, as it corresponds to the type of approach outlined in Par. 3 above. The $`M_i`$ calculated from Equ. 3 are very biased estimators of the absolute magnitude for high values of $`\sigma _\pi /\pi `$, and the introduction of weights depending on the observed $`\sigma _\pi /\pi `$ ratio only makes matters worse. The effect may be illustrated with a simple example: consider 3 cepheids at the same distance, say 500 pc, measured by Hipparcos with $`\sigma _\pi `$ = 1 mas. The real $`\pi `$ is 2 mas (the inverse of 500 pc), and let the measured $`\pi `$ be 1, 2 and 3 mas respectively. Table 4 shows the magnitudes derived for these 3 objects from the measured parallaxes using Equ. 3 and the weights from MF. A weighted average gives a calibration that is 0.46 mag too faint! A better procedure would be to work in parallax space, in this case averaging the parallaxes weighted by $`\sigma _\pi `$, to obtain the parallax of the zero-point, a procedure similar in essence to FC. The biases affecting the MF results were also checked by applying the FC method to the MF multi-wavelength data. The resulting zero-point was compared in V, J and K to the Laney & Stobie (1994) calibrations (Column 4 of Table 3). The resulting zero-point is coherent at all three wavelengths, $`\sim `$0.2 mag brighter than Laney & Stobie, and corresponds to the “bright” FC calibration. Biases are also apparent when the MF procedure is applied to synthetic samples. As expected, the recovered luminosity zero-point is systematically too faint, on average 0.25 mag too faint for synthetic samples similar in distance distribution to the actual sample. Thus the disagreement of MF with FC is not due to the use of multi-wavelength data, but can be entirely attributed to the treatment of the parallax data. This confirms the necessity of carefully considering the subtleties involved in deriving magnitude calibrations from high $`\sigma _\pi /\pi `$ parallaxes (see Brown et al. 1997, Luri & Arenou 1997).
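The arithmetic of this ”3 cepheid” example (Table 4) is easily reproduced; the short sketch below (Python/NumPy) applies Equ. 3 and the MF weights to the three measured parallaxes:

```python
import numpy as np

pi_true, sigma_pi = 2.0, 1.0               # 500 pc, Hipparcos-like error (mas)
pi_meas = np.array([1.0, 2.0, 3.0])        # the three measured parallaxes (mas)

dM = 5.0 * np.log10(pi_meas / pi_true)     # magnitude error of each star, from Equ. 3
w = pi_meas**2 / sigma_pi**2               # MF weights, built from the *observed* pi
print(np.sum(w * dM) / np.sum(w))          # -> +0.46 mag, i.e. too faint
```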
## 7 Presence of the Lutz-Kelker bias : Oudmaijer et al. 1998 The Oudmaijer et al. (1998, OGS) study is devoted to showing the presence of the Lutz-Kelker and Malmquist biases in Hipparcos data, and to calculating statistical corrections on individual measurements to compensate for these biases. OGS consider the plot of the magnitude residual $`\Delta M_V`$ versus $`\sigma _\pi /\pi `$ (see Figure 4a for the cepheids) as evidence for the presence of such biases. The dependence of $`\Delta M_V`$ on $`\sigma _\pi /\pi `$ is indeed impressive. But this interpretation is incorrect, as detailed by Arenou at this meeting: the features in Fig. 4a are primarily due to the fact that the abscissa and ordinate are heavily correlated. Let us suppose that $`\sigma _\pi `$ is a constant, as is nearly the case for Hipparcos parallaxes; then the abscissa is $`\sigma _\pi /\pi \propto 1/\pi `$, while the ordinate is $$\Delta M_V\equiv M_V^{par}-M_V^{true}=m_V-a_V+5-5\mathrm{log}(1/\pi )-M_V^{true}\propto \mathrm{log}(\pi )$$ Both axes strongly depend on the same measured parallax $`\pi `$, and the relation observed in Fig. 4a (and Figs. 2, 3 and 4 of OGS) only reflects this direct correlation; it does not as such reveal any bias. Fig. 4a would be more useful if the abscissa contained the real $`\sigma _\pi /\pi `$, and not the observed $`\sigma _\pi /\pi `$. Unfortunately the real $`\pi `$ is unknown. For a given real $`\pi `$, the variations of the observed $`\pi _{obs}`$ due to parallax uncertainties affect $`\sigma _\pi /\pi _{obs}`$ and $`\Delta M_V`$ in a correlated way and move data points along diagonal lines in the diagram, as illustrated in Fig. 4b. $`\sigma _\pi /\pi _{obs}`$ is a reliable indication of $`\sigma _\pi /\pi _{real}`$ only if $`\sigma _\pi \ll \pi `$, which is not the case for cepheids. OGS propose the following correction on individual data $$\delta M=\left\{1-\left[\frac{\sigma _{M_0}^2}{\sigma _{M_0}^2+4.715(\sigma _\pi /\pi )^2}\right]\right\}(M_0-M_{obs})$$ (4) As this correction depends on the unknown true magnitude $`M_0`$, it is not determined unless a true magnitude $`M_0`$ is assumed. In the case of cepheids, OGS chose the procedure of trying different values of $`\rho `$ until the residuals of the magnitudes corrected by Equ. 4 reach a minimum, and recover a value of $`\rho `$ similar again to the pre-Hipparcos calibrations. Their procedure is illustrated by the arrows in Fig. 4b: a large correction depending on $`M_0`$ and $`\sigma _\pi /\pi `$ is applied (Equ. 4), and different $`\rho `$ are tried until the residuals are minimal. OGS take as proof that their procedure has corrected for the biases the fact that, after correction, the $`\sigma _\pi /\pi `$ vs. $`\Delta M_V`$ plot looks very tidy. A closer look at Equ. 4 shows that this has nothing to do with bias correction: because the correction itself tends to $`(M_0-M_{obs})`$ as $`\sigma _\pi /\pi `$ becomes high, the corrected $`M_{obs}`$ is forced to $`M_0`$ ($`M_{obs}+\delta M\to M_{obs}+(M_0-M_{obs})=M_0`$), so that the residuals artificially tend to zero! If an abscissa independent of the ordinate is chosen, e.g. $`\sigma _\pi /\pi _{true}`$, where $`\pi _{true}`$ is the parallax expected from the PL relation, the largest part of the apparent bias vanishes (Fig. 4c), and correction Equ. 4 becomes unnecessary<sup>4</sup><sup>4</sup>4The remaining part of the bias is the much smaller classical Lutz-Kelker bias, due to the fact that different parallax intervals cover very different space volumes. In the case of Hipparcos cepheids selected as in FC, it amounts to $`\sim `$0.02 mag. We have tested the OGS procedure for cepheids on synthetic samples and find that it gives systematically too faint results, by $`\sim `$0.17 mag. It also fails the ”3 cepheid” test of the previous section (giving a bias of 0.36 mag).
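The forcing effect of correction (4) can be seen in a few lines (Python; the magnitudes and the intrinsic dispersion $`\sigma _{M_0}`$ are placeholders):

```python
def ogs_corrected(M_obs, M0, sigma_M0, sig_over_pi):
    """Apply Eq. (4); for large sigma_pi/pi the bracket tends to 1 and the
    'corrected' magnitude is forced to the assumed true magnitude M0."""
    dM = (1.0 - sigma_M0**2 / (sigma_M0**2 + 4.715 * sig_over_pi**2)) * (M0 - M_obs)
    return M_obs + dM

M_obs, M0 = -2.0, -3.5                # placeholder observed and assumed magnitudes
for r in (0.1, 0.5, 2.0):             # observed sigma_pi/pi
    print(r, ogs_corrected(M_obs, M0, sigma_M0=0.2, sig_over_pi=r))
```

For $`\sigma _\pi /\pi =2`$ the output is already essentially $`M_0`$, whatever $`M_{obs}`$ was.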
The crux of the matter here is that, as the relative error on the parallax increases, the measured $`\sigma _\pi /\pi `$ becomes an increasingly bad estimator of the real $`\sigma _\pi /\pi `$. In our small “3 cepheid” example, the measured $`\sigma _\pi /\pi `$ is 33%, 50% and 100%, while the true $`\sigma _\pi /\pi `$ is 50% in all three cases. As a rule of thumb, $`\sigma _\pi /\pi `$ should no longer be used, even in statistical corrections, for values higher than about 20%.

Due to its indirect nature (a true magnitude must first be assumed, and the corrected residuals are then minimised), the OGS method is also quite unstable. One can get a feeling for this instability by comparing the results of the OGS method with those of the FC method on several cepheid subsamples (Table 5). We conclude that while the OGS method might be justified for estimating the absolute magnitudes of individual objects from a population with high $`\mathrm{\Delta }M_V`$ and low $`\sigma _\pi /\pi `$ about which nothing else is known, it is far from optimal in the case of the cepheids, which have high $`\sigma _\pi /\pi `$ and low $`\mathrm{\Delta }M_V`$. In that case, $`\pi _{true}`$ is better known from the PL relation itself than from $`\pi _{meas}`$, and the use of $`\sigma _\pi /\pi `$ can be avoided altogether by working in parallax space.

## 8 Maximum likelihood method: Luri et al. 1998

Luri et al. (1996) have devised a maximum-likelihood (LM) method for the determination of absolute magnitudes from Hipparcos data that takes all available data into account, including proper motions and radial velocities. Luri et al. (1998, LGT) apply this method to the zero-point of the cepheid PL relation and derive $`\rho =-1.05\pm 0.17`$ mag for a fixed slope of $`\delta =-2.81`$, a value 0.38 mag fainter than FC obtained using exactly the same sample. First, it should be noted that LGT use the global reddening model of Arenou et al. (1992), which gives reddenings on average 0.05 mag higher than those usually adopted for cepheids. If the reddening scale of FC is adopted instead, the result of the LM method becomes $`\rho =-0.89`$ mag, an even fainter value.

In collaboration with X. Luri, we tested the method with synthetic samples, and no significant bias was found. The key to this puzzling problem was pointed out by F. van Leeuwen at this meeting: further tests showed that the LGT solution is much more sensitive to the kinematical data (proper motions and radial velocities) than to the parallaxes. In fact, to first order, the parallaxes have no influence on the solution, so that the LM method becomes similar in principle to a statistical parallax analysis. It cannot be directly compared to the geometrical distance determinations considered above. The large disagreement of the LM method with FC remains a question that deserves detailed study, as it may contain precious hints on cepheid distances or kinematics (see Par. 12 below).

## 9 A note on overtone pulsators

Overtone pulsators, cepheids that pulsate in the first overtone rather than in the fundamental mode, are usually identified by their low amplitude and sinusoidal light curve. Their period is about 30% shorter than that of a fundamental pulsator of the same luminosity. In a luminosity calibration, detected overtones can either be removed or have their period adjusted by 30%, but the presence of undetected overtones may bias the result. The possible presence of undetected overtones was tested by repeating the zero-point determination on subsamples selected by period intervals.
Because overtones usually have short periods, their undetected presence should appear as a period dependence of the solution. No significant trend was found, but the sample is not large enough to exclude such a trend below the $`\sim `$0.1 mag level.

## 10 Value and uncertainty of the Hipparcos cepheid PL zero-point

Fig. 5 graphically summarizes our conclusions: the Hipparcos parallaxes for cepheids do indeed indicate a magnitude calibration brighter than previously accepted ($`\rho =-1.43\pm 0.16`$ for a fixed PL slope $`\delta =-2.81`$), as found by FC. All other subsequent analyses that we considered suffer from strong systematic biases due to the procedures used to infer magnitude calibrations from the parallax data<sup>5</sup><sup>5</sup>5With the interesting exception of LGT, which, as explained in Par. 8, cannot strictly be considered a parallax calibration, but should rather be seen as a kinematical calibration.. The impression one could get by “weight of numbers” from an uncritical list of all calibrations, namely that Hipparcos cepheid data after all confirm previous magnitude calibrations, is therefore misleading. On the contrary, we confirm the conclusion of FC that a brighter cepheid PL calibration is implied by the Hipparcos parallaxes.

Let us now consider the uncertainty on this value. In our simulations, the dispersion in the recovered $`\rho `$ came out 15% to 50% higher than the uncertainty derived in FC. In fact, the error in FC is lower than the uncertainty caused by the propagation of the Hipparcos parallax uncertainties alone. This is due to the fact that FC calculate the uncertainty from the residuals, not from the Hipparcos $`\sigma _\pi `$, and that the residuals are on average lower than the uncertainties:

$$\left<\frac{\pi _{observed}-\pi _{PL}}{\sigma _\pi }\right>=0.87$$

Obtaining normalised residuals lower than unity can have two causes: either the uncertainties were overestimated, or the effect is due to chance and low-number statistics. It is unlikely that the Hipparcos parallax errors were overestimated for cepheids, and we shall rather assume that a statistical fluctuation causes the residuals to be smaller than the uncertainties. In that case, the final error to be used is not the one derived from the residuals, but the one propagated from the uncertainties on the parallaxes. With this consideration, we revise the error in FC upwards to 0.16 mag, noting that this uncertainty is due only to the $`\sigma _\pi `$, and that any other source of uncertainty (e.g. reddening scale, PL slope) would be additional. Our Monte Carlo simulations (Table 1) show typical scatters as high as 0.20 mag in the recovered $`\rho `$ for the sample without $`\alpha `$ UMi. Recalling the discussion of Section 4, we add a possible systematic bias of $`{}_{0}^{+0.03}`$ mag, for a final Hipparcos cepheid PL relation zero-point, modified from FC, of

$$M_V=-2.81\mathrm{log}P\underline{-1.43\pm 0.16[\mathrm{stat}]{}_{0}^{+0.03}[\mathrm{syst}]}$$ (5)

If our rediscussion of the uncertainties of the FC result is correct, the Hipparcos calibration is not incompatible with previous calibrations from cluster cepheids or from surface brightness techniques. The uncertainty on the geometrical calibration remains high and does not force a shift of the distance scale. The Hipparcos parallax data do, however, indicate that the real PL relation is probably situated near the bright end of previous uncertainty intervals.
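To illustrate the size of the propagated error, here is a small Monte Carlo sketch (ours, not the FC code or Table 1) that recovers the zero-point by a linear least-squares fit in parallax space for a Hipparcos-like sample; the sample size, periods and distance distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_star, n_trial = 25, 2000
rho_true, slope = -1.43, -2.81

logP = rng.uniform(0.5, 1.5, n_star)
dist = rng.uniform(300.0, 700.0, n_star)       # pc, assumed distance distribution
pi_true = 1000.0/dist                          # mas
sigma_pi = np.full(n_star, 1.0)                # mas, Hipparcos-like

M_true = slope*logP + rho_true
m_app = M_true + 5*np.log10(dist) - 5          # apparent magnitudes

# The PL-predicted parallax for a trial zero-point rho is A * 10^(0.2 rho):
A = 10.0**(0.2*(slope*logP - m_app) + 2.0)     # mas

rhos = np.empty(n_trial)
for t in range(n_trial):
    pi_obs = pi_true + sigma_pi*rng.standard_normal(n_star)
    x = np.sum(A*pi_obs/sigma_pi**2)/np.sum(A**2/sigma_pi**2)  # LSQ for 10^(0.2 rho)
    rhos[t] = 5.0*np.log10(x)

sig_x = 1.0/np.sqrt(np.sum(A**2/sigma_pi**2))
sig_rho_prop = (5.0/np.log(10.0))*sig_x/10.0**(0.2*rho_true)
print(f'empirical scatter of recovered rho : {rhos.std():.3f} mag')
print(f'error propagated from sigma_pi     : {sig_rho_prop:.3f} mag')
```

The empirical scatter of the recovered $`\rho `$ agrees with the error propagated from $`\sigma _\pi `$, and both are of the same order as the 0.16–0.20 mag discussed above.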
## 11 Distance of the LMC

The Hipparcos cepheid distance scale can be compared to that obtained from other methods, taking the distance of the LMC as a point of comparison. Hipparcos cepheids give:

$$\mu _{LMC}=18.70\pm 0.16[+0.03]\;\;(\mathrm{Hipparcos\;parallaxes,\;modified\;from\;FC})$$ (6)

The other two main calibrations of the cepheid PL zero-point are the surface brightness technique and cepheids in clusters and associations. From the surface brightness method, Gieren, Fouqué & Gomez (1998) obtain:

$$\mu _{LMC}=18.46\pm 0.02[+0.06]$$

where the 0.02 term is the internal statistical uncertainty, and the \[+0.06\] term is a metallicity correction that the authors chose not to apply. In order to put this value on the same scale as (6), we force the slope of the PL relation to $`\delta =-2.81`$ as in FC, which yields $`\mu _{LMC}=18.52`$ (P. Fouqué, priv. comm.). In the absence of an external error budget we increase the uncertainty to 0.1 mag:

$$\mu _{LMC}=18.52\pm 0.1\mathrm{?}[+0.06]\;\;(\mathrm{surface\;brightness,\;modified\;from\;Gieren\;et\;al.\;1998})$$

A recent cluster cepheid calibration is that of Laney & Stobie (1994), who find:

$$\mu _{LMC}=18.49\pm 0.04[+0.04]$$

where the last term is a metallicity correction. We adjust this result to the new Hipparcos Hyades distance modulus of $`\mu =3.33`$:

$$\mu _{LMC}=18.55\pm 0.04[+0.04]\;\;(\mathrm{cluster\;cepheids,\;modified\;from\;Laney\;\&\;Stobie\;1994})$$

It is notable that the final agreement between the three different calibrations is compatible with the statistical uncertainties. A weighted mean, applying half the systematic corrections given in brackets, yields for the distance modulus of the LMC:

$$\mu _{LMC}=18.58\pm 0.05$$

We conclude that while the Hipparcos parallax calibration does indicate that the PL zero-point may lie at the bright end of previous uncertainty intervals, it is not incompatible with other determinations. Or, depending on one's viewpoint: while the Hipparcos PL zero-point is within one sigma of previous calibrations, it does indicate that the PL zero-point is near the bright end of previous uncertainties.

## 12 A note on kinematical determinations

Feast et al. (1998) have calculated the cepheid PL zero-point $`\rho `$ from kinematical data, by comparing Hipparcos proper motions and radial velocities. They adjust $`\rho `$ in order to obtain the same value of the Oort constant A from the proper motions and from the radial velocities. They find that this approach favours a brighter zero-point, similar to FC. A brighter zero-point would also decrease the mismatch between the rotation curves of cepheids and HII regions in the outer disc (see Pont et al. 1997). One could then conclude that the kinematics favour a longer distance scale. It is striking, however, that the “LM method”, also based on kinematics, finds such a faint zero-point for cepheids (see Par. 8). In addition, the RR Lyrae statistical parallax analyses (such as Fernley et al. 1998), also kinematical, yield a much fainter distance modulus for the LMC ($`\mu \approx 18.3`$). This leads us to wonder whether all the assumptions about the cepheid or RR Lyrae velocity fields implicit in the kinematical methods are really fulfilled. If not, this could turn out to be the key to understanding the puzzling disagreement between the Hipparcos cepheid zero-point on the one hand, and the LM method for cepheids and the RR Lyrae statistical parallaxes on the other.

###### Acknowledgements.
This study has benefited greatly from enlightening discussions with Laurent Eyer, Michael Feast, Pascal Fouqué and Xavier Luri. I also thank Martin Groenewegen and Barry Madore for useful exchanges.
no-problem/9812/cond-mat9812306.html
ar5iv
text
# Generalized Fokker-Planck Equation For Multichannel Disordered Quantum Conductors

## Abstract

The Dorokhov-Mello-Pereyra-Kumar (DMPK) equation, which describes the distribution of transmission eigenvalues of multichannel disordered conductors, has been enormously successful in describing a variety of detailed transport properties of mesoscopic wires. However, it is limited to the regime of quasi one dimension. We derive a one-parameter generalization of the DMPK equation, which should broaden the scope of the equation beyond the limit of quasi one dimension.

Quantum transport in a disordered N-channel mesoscopic conductor can be described in the scattering approach, initiated by Landauer, in terms of the joint probability distribution of the transfer matrices. Under very general conditions based on the symmetry properties of the transfer matrices, and within the random matrix theory framework, the evolution of the joint probability density of the transmission eigenvalues with increasing system length is governed by a Fokker-Planck equation known as the Dorokhov-Mello-Pereyra-Kumar (DMPK) equation. Such a random matrix approach has proved very useful in our understanding of universal properties in a wide variety of physical systems in condensed matter as well as nuclear and particle physics. In particular, the DMPK equation has been shown to be equivalent to the description of a disordered conductor in terms of a non-linear sigma model obtained from the microscopic tight-binding Anderson Hamiltonian for non-interacting electrons, and it is consistent with perturbative calculations and experiments. The equation has been solved exactly, and level correlation functions can be obtained using the method of biorthogonal functions. Because it is extremely difficult to evaluate any higher-order correlation function in the sigma model approach, the DMPK equation is more suitable for studying the conductance distribution in mesoscopic systems. In recent years it has been applied to a variety of physical phenomena, including conductance fluctuations, weak localization, Coulomb blockade, and sub-Poissonian shot noise.

One major limitation of the DMPK equation, however, is that it is valid only in the regime of quasi one dimension (1D), where the length of the system is much larger than its width. While the geometry dependence of some transport properties has been obtained perturbatively in the metallic regime, only limited progress has been made on the extension of the DMPK equation to higher dimensions. Currently, there exists no theory for the statistics of the transmission levels at all strengths of disorder beyond quasi 1D. This is a particularly severe shortcoming; the important question of the nature of the expected novel kind of universality of the conductance distribution near the metal-insulator transition cannot be studied within the powerful DMPK framework, because the transition exists only in higher dimensions.

In this work we argue that the generalization of the DMPK equation to higher dimensions requires the relaxation of certain approximations made in its derivation, and we suggest a phenomenological way to implement this within the random matrix framework. This allows us to obtain a simple generalization based on one phenomenological parameter and the conservation of total probability.
We obtain corrections to the mean and variance of the conductance as a function of this parameter using the generalized equation, and discuss the implications of the results. We argue that the generalized equation should be valid beyond quasi one dimension.

In the scattering approach, the conductor of length $`L`$ is placed between two perfect leads of finite width. The scattering states at the Fermi energy define $`N`$ channels. The $`2N\times 2N`$ transfer matrix $`M`$ relates the flux amplitudes on the right of the system to those on the left. Flux conservation and time reversal symmetry (in this paper, for simplicity, we restrict ourselves to the case of unbroken time reversal symmetry) restrict the number of independent parameters of $`M`$ to $`N(2N+1)`$, and $`M`$ can be represented as

$$M=\left(\begin{array}{cc}u& 0\\ 0& u^{*}\end{array}\right)\left(\begin{array}{cc}\sqrt{1+\lambda }& \sqrt{\lambda }\\ \sqrt{\lambda }& \sqrt{1+\lambda }\end{array}\right)\left(\begin{array}{cc}v& 0\\ 0& v^{*}\end{array}\right),$$ (1)

where $`u,v`$ are $`N\times N`$ unitary matrices, and $`\lambda `$ is a diagonal matrix with positive elements $`\lambda _i,i=1,2,\mathrm{},N`$. The physically observable conductance of the system is given by $`g=\sum _i(1+\lambda _i)^{-1}`$. Thus the distribution of the conductance can be obtained from the distribution of the variables $`\lambda _i`$.

In order to understand the nature of the approximations used in DMPK and to motivate our generalization, we briefly review the derivation of DMPK following ref. . In this approach, an ensemble of random conductors of macroscopic length $`L\gg l`$, where $`l`$ is the mean free path, is described by an ensemble of random transfer matrices $`M`$, whose differential probability depends parametrically on $`L`$ and can be written as $`dP_L(M)=p_L(M)d\mu (M)`$. Here $`d\mu (M)`$ is the invariant Haar measure of the group, given in terms of the parameters in (1) by

$$d\mu (M)=J(\lambda )\left[\prod _{i=1}^{N}d\lambda _i\right]d\mu (u)d\mu (v),$$ (2)

where

$$J(\lambda )=\prod _{i<j}|\lambda _i-\lambda _j|$$ (3)

and $`d\mu (u)`$ and $`d\mu (v)`$ are the invariant measures of the unitary group $`U(N)`$. When a conductor of length $`L_0`$ described by a transfer matrix $`M_0`$ is added to a conductor of length $`L_1`$ and transfer matrix $`M_1`$ to form a conductor of length $`L=L_1+L_0`$ and transfer matrix $`M=M_0M_1`$, the probability density $`p_L(M)`$ satisfies the combination rule

$$\left\langle p_{L_1+L_0}(M)\right\rangle _{L_0}=\int p_{L_1}(MM_0^{-1})p_{L_0}(M_0)d\mu (M_0),$$ (4)

where the angular brackets represent an ensemble average. For $`L_0\ll l`$, the small change in the transfer matrix can be expected to lead to a small change in the parameters $`\lambda _i`$, and one can expand the probability density as

$$\left\langle p_{L_1+L_0}(\lambda )\right\rangle _{L_0}=\left\langle p_{L_1}(\lambda +\delta \lambda )\right\rangle _{L_0}=p_{L_1}(\lambda )+\sum _a\frac{\partial p_{L_1}(\lambda )}{\partial \lambda _a}\left\langle \delta \lambda _a\right\rangle _{L_0}+\frac{1}{2}\sum _{ab}\frac{\partial ^2p_{L_1}(\lambda )}{\partial \lambda _a\partial \lambda _b}\left\langle \delta \lambda _a\delta \lambda _b\right\rangle _{L_0}.$$ (5)

Since the changes in $`\lambda _a`$ are small, we can use perturbation theory to evaluate their averages. We can also expand the left-hand side in powers of $`L_0`$.
The resulting equation, keeping only terms of first order in $`L_0`$ on the left-hand side, is given by

$$L_0\frac{\partial p}{\partial L}=\sum _a(1+2\lambda _a)\frac{\partial p}{\partial \lambda _a}\left\langle \sum _c\lambda _c^{\prime }v_{ca}^{\prime *}v_{ca}^{\prime }\right\rangle _{L_0}+\sum _a\lambda _a(1+\lambda _a)\frac{\partial ^2p}{\partial \lambda _a^2}\left\langle \sum _c\lambda _c^{\prime }(1+\lambda _c^{\prime })|v_{ca}^{\prime }|^4\right\rangle _{L_0}$$

$$+\sum _{a\ne b}\frac{\lambda _a+\lambda _b+2\lambda _a\lambda _b}{\lambda _a-\lambda _b}\frac{\partial p}{\partial \lambda _a}\left\langle \sum _c\lambda _c^{\prime }(1+\lambda _c^{\prime })v_{ca}^{\prime *}v_{cb}^{\prime *}v_{cb}^{\prime }v_{ca}^{\prime }\right\rangle _{L_0}.$$ (6)

Here the primed variables correspond to the added small conductor of length $`L_0`$. The above equation (6) is quite general. It is based on the symmetry properties of the transfer matrices and on the combination principle for adding two conductors, and these principles should remain valid beyond quasi one dimension. It is the further approximations made on the averages in equation (6) in deriving DMPK that limit DMPK to quasi one dimension. There are two major approximations involved:

(i) The ‘isotropy’ assumption is used to decouple the averages over the products of the parameters $`\lambda `$ and the unitary matrices $`v`$. Once decoupled, the averages over the products of the unitary matrices alone can be evaluated explicitly, giving

$$\left\langle v_{ca}^{\prime *}v_{ca}^{\prime }\right\rangle =\frac{1}{N};\;\left\langle v_{ca}^{\prime *}v_{cb}^{\prime *}v_{cb}^{\prime }v_{ca}^{\prime }\right\rangle =\frac{1}{N(N+1)}\;(a\ne b);\;\left\langle |v_{ca}^{\prime }|^4\right\rangle =\frac{2}{N(N+1)},$$ (7)

while the average of the trace of $`\lambda _c^{\prime }`$ is taken to be proportional to $`L_0`$. In particular, $`\left\langle \sum _c\lambda _c^{\prime }\right\rangle =NL_0/l`$, where $`l`$ is the mean free path, consistent with the Born approximation for the transmission amplitude, valid for small $`L_0`$.

(ii) The second approximation is based on the expectation that the averages of products of the $`\lambda _c^{\prime }`$ are of higher order in $`L_0`$, and therefore negligible. In particular, this means that the terms proportional to $`\left\langle \sum _c\lambda _c^{\prime 2}\right\rangle `$ are neglected in equation (6).

The above two approximations, together with the identity

$$\sum _{b(\ne a)}\frac{\lambda _a+\lambda _b+2\lambda _a\lambda _b}{\lambda _a-\lambda _b}=-(N-1)(1+2\lambda _a)+2\lambda _a(1+\lambda _a)\sum _{b(\ne a)}\frac{1}{\lambda _a-\lambda _b},$$ (8)

lead to the well-known DMPK equation:

$$\frac{\partial p}{\partial (L/l)}=\frac{2}{N+1}\frac{1}{J(\lambda )}\sum _a\frac{\partial }{\partial \lambda _a}\left[\lambda _a(1+\lambda _a)J(\lambda )\frac{\partial p(\lambda )}{\partial \lambda _a}\right],$$ (9)

where $`J(\lambda )`$ is defined in (3).

We will first show that beyond quasi one dimension the second approximation fails: $`\left\langle \sum _c\lambda _c^{\prime 2}\right\rangle `$ is of the same order in $`L_0`$ as $`\left\langle \sum _c\lambda _c^{\prime }\right\rangle `$ and therefore cannot be neglected. In this case we will show that the total probability cannot be conserved within the decoupling approximation. We will then introduce phenomenological parameters for the averages over the products in (6), and show that the conservation of total probability requires a very specific generalization of the DMPK equation involving a single additional parameter. Finally, we will evaluate the corrections to the mean and variance of the conductance as a function of this parameter using the generalized DMPK equation and interpret the results.
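As a quick sanity check (ours, not from the paper), the unitary-group averages in (7) and the identity (8) can be verified numerically; the sketch below uses scipy's Haar-distributed unitary sampler.

```python
import numpy as np
from scipy.stats import unitary_group

N, n_samp = 4, 20000
v = unitary_group.rvs(N, size=n_samp, random_state=3)   # Haar samples from U(N)

print('<|v_ca|^2>          :', np.mean(np.abs(v[:, 0, 0])**2), 'vs', 1/N)
print('<|v_ca|^2 |v_cb|^2> :', np.mean(np.abs(v[:, 0, 0])**2 *
                                       np.abs(v[:, 0, 1])**2), 'vs', 1/(N*(N+1)))
print('<|v_ca|^4>          :', np.mean(np.abs(v[:, 0, 0])**4), 'vs', 2/(N*(N+1)))

# Identity (8), checked at a random point (here a = 0):
rng = np.random.default_rng(0)
lam = rng.uniform(0.1, 5.0, N)
lhs = sum((lam[0] + lam[b] + 2*lam[0]*lam[b])/(lam[0] - lam[b]) for b in range(1, N))
rhs = (-(N - 1)*(1 + 2*lam[0])
       + 2*lam[0]*(1 + lam[0])*sum(1/(lam[0] - lam[b]) for b in range(1, N)))
print('identity (8) holds  :', np.isclose(lhs, rhs))    # True
```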
To go beyond quasi 1D, we start with a conductor of length $`L_0`$ along $`x`$ and width $`W`$ along $`y`$ and $`z`$, with scattering potential $`V(x,y,z)`$. To see how the second approximation fails, we consider, for simplicity, a square-well potential adequately approximated by a repulsive delta function at $`x=0`$, i.e. $`V(x,y,z)=V_T(y,z)\delta (x)`$. Writing the Schrödinger wavefunction as $`\mathrm{\Psi }(x,y,z)=\sum _i\psi _i(y,z)\varphi _i(x)`$, where the $`\psi _i(y,z)`$ are the transverse eigenfunctions in the perfectly conducting lead, chosen to be real, we obtain the system of coupled equations for the $`N`$ channels

$$\varphi _i^{\prime \prime }(x)+k_i^2\varphi _i(x)=\sum _j\kappa _{ij}(x)\varphi _j(x),$$ (10)

where the prime denotes a derivative with respect to $`x`$, $`k_i`$ is the wavevector in channel $`i`$, and the $`\kappa _{ij}`$ are the coupling constants given by $`\kappa _{ij}(x)=(2m/\mathrm{\hbar }^2)\delta (x)\int dy\,dz\,\psi _j(y,z)V_T(y,z)\psi _i(y,z)`$. We are interested in the transfer matrix $`M`$ that connects the solution $`\varphi `$ on the left side of the conductor with that on the right side. A transfer matrix satisfying flux conservation and time reversal symmetry can be written in the form

$$M=\left(\begin{array}{cc}\mathbf{1}+\mathrm{\Delta }& \mathrm{\Delta }\\ \mathrm{\Delta }^{*}& \mathbf{1}+\mathrm{\Delta }^{*}\end{array}\right),$$ (11)

where $`\mathbf{1}`$ and $`\mathrm{\Delta }`$ are $`N\times N`$ matrices and $`\mathrm{\Delta }_{ij}=\kappa _{ij}/2ik_i`$. Note that $`\mathrm{\Delta }`$ is pure imaginary but not symmetric. The parameters $`\lambda `$ that satisfy the DMPK equation in quasi 1D are the eigenvalues of the matrix $`X=[Q+Q^{-1}-2\cdot \mathbf{1}]/4`$, where $`Q=M^{\dagger }M`$. From flux conservation, $`Q^{-1}=\mathrm{\Sigma }_zQ\mathrm{\Sigma }_z`$, where $`\mathrm{\Sigma }_z`$ is the third Pauli matrix with 1 and 0 replaced by ($`N\times N`$) unit and zero matrices. It is easy to see that $`X`$ is block diagonal, each block given by the sum of the two matrices $`X_1=(\mathrm{\Delta }+\mathrm{\Delta }^{\dagger })/2`$ and $`X_2=\mathrm{\Delta }^{\dagger }\mathrm{\Delta }`$. The important point is that $`X_1`$ is traceless, so tr$`(\lambda _i)`$ is given by tr$`(X_2)`$ = tr$`(\mathrm{\Delta }^{\dagger }\mathrm{\Delta })`$. On the other hand, $`X_1`$ does contribute to tr$`(\lambda _i^2)`$ = tr$`[(X_1+X_2)^2]`$, where tr$`(X_1^2)`$ = tr$`(\mathrm{\Delta }^2+\mathrm{\Delta }^{\dagger 2}+\mathrm{\Delta }^{\dagger }\mathrm{\Delta }+\mathrm{\Delta }\mathrm{\Delta }^{\dagger })/4`$. Clearly this is of the same order as tr$`(\lambda _i)`$, and cannot be neglected.

It is now straightforward to show that keeping the tr$`(\lambda _i^2)`$ terms in (6), while using the decoupling approximation for the averages of $`v`$ and $`\lambda `$, leads to a breakdown of the conservation of total probability. Suppose $`\left\langle \sum _c\lambda _c^{\prime 2}\right\rangle =\alpha L_0/l`$. Then, using (7) for the averages over $`v`$, we obtain a correction term to the DMPK equation equal to

$$\frac{\alpha }{2}\sum _a(1+2\lambda _a)\frac{\partial p}{\partial \lambda _a}.$$ (12)

Clearly this is not a sum of total derivatives, and the resulting equation does not conserve total probability. It is therefore clear that in order to go beyond quasi 1D we need to relax both approximations. We propose a simple phenomenological way to take care of both.
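This scaling is easy to confirm numerically. The following small sketch (ours) builds a random $`\mathrm{\Delta }`$ of the above form, assuming a symmetric coupling matrix $`\kappa `$ and distinct channel momenta, and shows that tr$`(X_1)`$ vanishes while tr$`(X_1^2)`$ and tr$`(X_2)`$ are both of order $`\kappa ^2`$:

```python
import numpy as np

rng = np.random.default_rng(4)
N, eps = 4, 1e-3
A = rng.standard_normal((N, N))
kappa = eps*(A + A.T)                 # real symmetric coupling matrix
k = np.linspace(1.0, 2.0, N)          # distinct channel momenta k_i
Delta = kappa/(2j*k[:, None])         # Delta_ij = kappa_ij / (2 i k_i)

X1 = (Delta + Delta.conj().T)/2
X2 = Delta.conj().T @ Delta
print('tr X1   =', np.trace(X1).real)         # 0: traceless
print('tr X1^2 =', np.trace(X1 @ X1).real)    # O(eps^2)
print('tr X2   =', np.trace(X2).real)         # O(eps^2): same order as tr X1^2
```

So the neglected $`\left\langle \sum _c\lambda _c^{\prime 2}\right\rangle `$ terms are indeed of the same order in $`L_0`$ as $`\left\langle \sum _c\lambda _c^{\prime }\right\rangle `$.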
Instead of computing the three averages in (6) explicitly, we start with the following very general ansatz:

$$\left\langle \sum _c\lambda _c^{\prime }v_{ca}^{\prime *}v_{ca}^{\prime }\right\rangle _{L_0}=\frac{L_0}{l};\;\left\langle \sum _c\lambda _c^{\prime }(1+\lambda _c^{\prime })v_{ca}^{\prime *}v_{cb}^{\prime *}v_{cb}^{\prime }v_{ca}^{\prime }\right\rangle _{L_0}=\frac{L_0}{l}\frac{1}{N+1}\mu _1;$$

$$\left\langle \sum _c\lambda _c^{\prime }(1+\lambda _c^{\prime })|v_{ca}^{\prime }|^4\right\rangle _{L_0}=\frac{L_0}{l}\frac{2}{N+1}\mu _2$$ (13)

where $`\mu _1`$ and $`\mu _2`$ are arbitrary dimensionless parameters, which may be functions of $`N`$. Clearly, $`\mu _1=\mu _2=1`$ gives back the quasi-1D limit. Note that any additional parameter in the first term would only serve to redefine the mean free path, so only two additional parameters are possible. With this ansatz, equation (6) becomes

$$\frac{\partial p}{\partial (L/l)}=\left(1-\mu _1\frac{N-1}{N+1}\right)\sum _a(1+2\lambda _a)\frac{\partial p}{\partial \lambda _a}+\frac{2\mu _2}{N+1}\sum _a\lambda _a(1+\lambda _a)\frac{\partial ^2p}{\partial \lambda _a^2}$$

$$+\frac{2\mu _1}{N+1}\sum _a\lambda _a(1+\lambda _a)\frac{1}{J}\frac{\partial J}{\partial \lambda _a}\frac{\partial p}{\partial \lambda _a}.$$ (14)

We now demand that the parameters $`\mu _1`$ and $`\mu _2`$ be such that the right-hand side can be written as a sum of total derivatives, in order to ensure the conservation of total probability. Note that the special choice $`\mu _1=\mu _2=1`$ makes the coefficients of all three terms on the right-hand side of (14) equal, and the three terms can then be written as a sum of derivatives after multiplying by $`J(\lambda )`$. It may appear at first that with two parameters and three terms no other choice is possible, except for a trivial multiplicative factor common to all three terms, which can be absorbed in a redefinition of the mean free path. However, we note that if we choose

$$\left(1-\mu _1\frac{N-1}{N+1}\right)=\frac{2\mu _2}{N+1},$$ (15)

together with a renormalization of the measure

$$J\rightarrow J^\gamma ;\;\gamma =\frac{\mu _1}{\mu _2},$$ (16)

then (14) can be rewritten as

$$\frac{\partial p}{\partial (L/l^{\prime })}=\frac{2}{N+1}\frac{1}{J^\gamma (\lambda )}\sum _a\frac{\partial }{\partial \lambda _a}\left[\lambda _a(1+\lambda _a)J^\gamma (\lambda )\frac{\partial p(\lambda )}{\partial \lambda _a}\right],$$ (17)

where $`l^{\prime }=l/\mu _2`$ is a renormalized mean free path. Equation (17) is our one-parameter generalization of the DMPK equation (9), in which the parameter $`\gamma `$ enters through the renormalization of the measure in (16). Note that in the absence of time reversal symmetry, or in the presence of spin-orbit scattering, the measure is changed in a similar way, with an exponent $`\beta =2`$ or $`4`$ respectively. However, in our present case with time reversal symmetry, $`\beta =1`$, and the exponent $`\gamma `$ is in general non-integral. Clearly $`\gamma =1`$ is the quasi-1D limit. From the relation between $`\mu _1`$ and $`\mu _2`$, and the condition that both $`\mu _1`$ and $`\mu _2`$ must be positive, we find the following restrictions:

$$0<\mu _1<\frac{N+1}{N-1};\;0<\mu _2<\frac{N+1}{2}.$$ (18)

This means that the only restriction on the parameter $`\gamma `$ is that it be positive. In general, it can be a function of $`N`$.
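The algebra leading from (14) to (17) can be verified symbolically. The following sketch (ours) checks, for $`N=2`$, that under the constraint (15) and with $`\gamma =\mu _1/\mu _2`$ the right-hand side of (14), rewritten in terms of $`s=L/l^{\prime }`$, coincides with the expanded total-derivative form (17):

```python
import sympy as sp

l1, l2, mu1 = sp.symbols('lambda1 lambda2 mu1', positive=True)
N = 2
mu2 = sp.Rational(1, 2)*((N + 1) - mu1*(N - 1))   # constraint (15), solved for mu2
gamma = mu1/mu2                                   # Equ. (16)
p = sp.Function('p')(l1, l2)
J = l1 - l2                                       # J(lambda) for N = 2
lam = (l1, l2)

# Right-hand side of Equ. (14), divided by mu2 (i.e. written in s = L/l')
rhs14 = (((1 - mu1*sp.Rational(N - 1, N + 1))*sum((1 + 2*l)*sp.diff(p, l) for l in lam)
          + sp.Rational(2, N + 1)*mu2*sum(l*(1 + l)*sp.diff(p, l, 2) for l in lam)
          + sp.Rational(2, N + 1)*mu1*sum(l*(1 + l)*sp.diff(J, l)/J*sp.diff(p, l)
                                          for l in lam))/mu2)

# Right-hand side of the total-derivative form, Equ. (17)
rhs17 = sp.Rational(2, N + 1)*J**(-gamma)*sum(
    sp.diff(l*(1 + l)*J**gamma*sp.diff(p, l), l) for l in lam)

residual = sp.simplify(sp.powsimp(sp.expand(rhs14 - rhs17), force=True))
print(residual)   # prints 0: the two forms agree
```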
We can try to interpret the phenomenological parameter $`\gamma `$ by comparing with known results. The expectation value of any function $`F(\lambda )`$, defined as

$$\left\langle F\right\rangle _{(L/l^{\prime })}=\int F(\lambda )p_{(L/l^{\prime })}(\lambda )J^\gamma (\lambda )\prod _{a=1}^{N}d\lambda _a,$$ (19)

follows an evolution equation which can be obtained by multiplying both sides of (17) by $`J^\gamma (\lambda )F(\lambda )`$ and integrating over all $`\lambda _a`$, giving

$$\frac{\partial \left\langle F\right\rangle _s}{\partial s}=\left\langle \sum _a\left[(1+2\lambda _a)\frac{\partial F}{\partial \lambda _a}+\lambda _a(1+\lambda _a)\frac{\partial ^2F}{\partial \lambda _a^2}\right]\right\rangle +\frac{\gamma }{2}\left\langle \sum _{a\ne b}\frac{\lambda _a(1+\lambda _a)\frac{\partial F}{\partial \lambda _a}-\lambda _b(1+\lambda _b)\frac{\partial F}{\partial \lambda _b}}{\lambda _a-\lambda _b}\right\rangle ,$$ (20)

where $`s=L/l^{\prime }`$. If $`\gamma `$ is independent of $`N`$, then we can use the method of moments of ref. to obtain the average and variance of the conductance $`g=\sum _i(1+\lambda _i)^{-1}`$ as a power series in $`s/N\ll 1`$ in the large-$`N`$, large-$`s`$ limit. We find that to leading order $`\left\langle g\right\rangle =Nl^{\prime }/L-(2-\gamma )/3\gamma `$, and var$`(g)=\left\langle g^2\right\rangle -\left\langle g\right\rangle ^2=2/15\gamma `$. As expected, $`\gamma =1`$ gives back the quasi-1D results. In general, however, the variance decreases with increasing $`\gamma `$. Comparing with the result var$`(g)\propto \sqrt{L_yL_z}/L`$ for a rectangular conductor with length $`L_x=L`$ and cross-section $`L_yL_z`$, we see that the parameter $`\gamma `$ can be identified with the aspect ratio $`L/\sqrt{L_yL_z}`$ in this diffusive transport regime.

If $`\gamma \propto 1/N`$, obtained for $`\mu _2=\nu N`$, which is consistent with the restriction (18) if $`\nu <1/2`$, then we need to divide both sides of (20) by $`\gamma `$, so that the renormalized mean free path $`l^{\prime \prime }=l^{\prime }/\gamma =l/\mu _1`$ is independent of $`N`$. Then, assuming $`\nu \ll 1`$, it is possible to obtain corrections to the $`1/N`$ expansion up to linear order in $`\nu `$ using the above method of moments. The result is that the corrections are larger by a factor $`\nu \mu _1s`$, which signals the breakdown of the expansion in the large-$`s`$ limit. It would be interesting to obtain a more rigorous solution of (17) for arbitrary $`\gamma `$.

In summary, by relaxing certain approximations in the derivation of the DMPK equation (9) that limit it to the quasi-1D regime, we have derived a one-parameter generalization, given in eq. (17), based on a phenomenological ansatz and the conservation of total probability. The geometry dependence of the parameter, obtained in the diffusive limit by evaluating the correction to the variance of the conductance beyond its quasi-1D value, suggests that the generalized equation should be applicable beyond the quasi-1D regime. This should broaden the scope of the DMPK approach.

Stimulating discussions with D. Maslov are gratefully acknowledged. KAM acknowledges useful discussions with B. Altshuler and P. Pereyra at earlier stages of this work, and thanks V. Kravtsov and Y. Lu for their hospitality at the ICTP, Trieste, during the extended research workshop on disorder, chaos and interaction in mesoscopic systems.
no-problem/9812/gr-qc9812029.html
ar5iv
text
# From Superstrings to M Theory

## Introduction

Superstring theory first achieved widespread acceptance during the first superstring revolution in 1984-85. There were three main developments at this time. The first was the discovery of an anomaly cancellation mechanism green84 , which showed that supersymmetric gauge theories can be consistent in ten dimensions provided they are coupled to supergravity (as in type I superstring theory) and the gauge group is either SO(32) or $`E_8\times E_8`$. Any other group necessarily would give uncanceled gauge anomalies and hence inconsistency at the quantum level. The second development was the discovery of two new superstring theories—called heterotic string theories—with precisely these gauge groups gross84 . The third development was the realization that the $`E_8\times E_8`$ heterotic string theory admits solutions in which six of the space dimensions form a Calabi–Yau space, and that this results in a 4d effective theory at low energies with many qualitatively realistic features candelas85 . Unfortunately, there are very many Calabi–Yau spaces and a whole range of additional choices that can be made (orbifolds, Wilson loops, etc.). Thus there is an enormous variety of possibilities, none of which stands out as particularly special.

In any case, after the first superstring revolution subsided, we had five distinct superstring theories with consistent weak coupling perturbation expansions, each in ten dimensions. Three of them, the type I theory and the two heterotic theories, have $`𝒩=1`$ supersymmetry in the ten-dimensional sense. Since the minimal 10d spinor is simultaneously Majorana and Weyl, this corresponds to 16 conserved supercharges. The other two theories, called type IIA and type IIB, have $`𝒩=2`$ supersymmetry (32 supercharges) green82 . In the IIA case the two spinors have opposite handedness so that the spectrum is left-right symmetric (nonchiral). In the IIB case the two spinors have the same handedness and the spectrum is chiral.

The understanding of these five superstring theories was developed in the ensuing years. In each case it became clear, and was largely proved, that there are consistent perturbation expansions of on-shell scattering amplitudes. In four of the five cases (heterotic and type II) the fundamental strings are oriented and unbreakable. As a result, these theories have particularly simple perturbation expansions. Specifically, there is a unique Feynman diagram at each order of the loop expansion. The Feynman diagrams depict string world sheets, and therefore they are two-dimensional surfaces. For these four theories the unique $`L`$-loop diagram is a closed orientable genus-$`L`$ Riemann surface, which can be visualized as a sphere with $`L`$ handles. External (incoming or outgoing) particles are represented by $`N`$ points (or “punctures”) on the Riemann surface. A given diagram represents a well-defined integral of dimension $`6L+2N-6`$. This integral has no ultraviolet divergences, even though the spectrum contains states of arbitrarily high spin (including a massless graviton). From the viewpoint of point-particle contributions, string and supersymmetry properties are responsible for incredible cancellations. Type I superstrings are unoriented and breakable. As a result, the perturbation expansion is more complicated for this theory, and the various world-sheet diagrams at a given order (determined by the Euler number) have to be combined properly to cancel divergences and anomalies green85 .
## M Theory

In the 1970s and 1980s various supersymmetry and supergravity theories were constructed. (See salam , for example.) In particular, supersymmetry representation theory showed that ten is the largest spacetime dimension in which there can be a supersymmetric Yang–Mills theory, with spins $`\le 1`$ brink77 . This is a pretty (i.e., very symmetrical) classical field theory, but at the quantum level it is both nonrenormalizable and anomalous for any nonabelian gauge group. However, as we indicated earlier, both problems can be overcome for suitable gauge groups (SO(32) or $`E_8\times E_8`$) when the Yang–Mills theory is embedded in a type I or heterotic string theory.

The largest possible spacetime dimension for a supergravity theory (with spins $`\le 2`$), on the other hand, is eleven. Eleven-dimensional supergravity, which has 32 conserved supercharges, was constructed 20 years ago cremmer78a . It has three kinds of fields—the graviton field (with 44 polarizations), the gravitino field (with 128 polarizations), and a three-index antisymmetric tensor gauge field $`C_{\mu \nu \rho }`$ (with 84 polarizations). These massless particles are referred to collectively as the supergraviton. 11d supergravity is also a pretty classical field theory, which has attracted a lot of attention over the years. It is not chiral, and therefore not subject to anomaly problems.<sup>1</sup><sup>1</sup>1Unless the spacetime has boundaries. The anomaly associated to a 10d boundary can be canceled by introducing $`E_8`$ supersymmetric gauge theory on the boundary horava95 . It is also nonrenormalizable, and thus it cannot be a fundamental theory. However, we now believe that it is a low-energy effective description of M theory, which is a well-defined quantum theory witten95a . This means, in particular, that higher dimension terms in the effective action for the supergravity fields have uniquely determined coefficients within the M theory setting, even though they are formally infinite (and hence undetermined) within the supergravity context.

Intriguing connections between type IIA string theory and 11d supergravity have been known for a long time. If one carries out dimensional reduction of 11d supergravity to 10d, one gets type IIA supergravity campbell84 . In this case dimensional reduction can be viewed as a compactification on a circle in which one drops all the Kaluza–Klein excitations. It is easy to show that this does not break any of the supersymmetries.

The field equations of 11d supergravity admit a solution that describes a supermembrane. This solution has the property that the energy density is concentrated on a two-dimensional surface. A 3d world-volume description of the dynamics of this supermembrane, quite analogous to the 2d world-volume actions of superstrings, has been constructed bergshoeff87 . The authors suggested that a consistent 11d quantum theory might be defined in terms of this membrane, in analogy to string theories in ten dimensions.<sup>2</sup><sup>2</sup>2It is now clear that this cannot be done in any straightforward manner, since there is no weak coupling limit in which the supermembrane describes all the finite-mass excitations. Another striking result was the discovery of double dimensional reduction duff87 . This is a dimensional reduction on a circle, in which one wraps one dimension of the membrane around the circle and drops all Kaluza–Klein excitations for both the spacetime theory and the world-volume theory.
The remarkable fact is that this gives the (previously known) type IIA superstring world-volume action green84b . For many years these facts remained unexplained curiosities, until they were reconsidered by Townsend townsend95a and by Witten witten95a . The conclusion is that type IIA superstring theory really does have a circular 11th dimension in addition to the previously known ten spacetime dimensions. This fact was not recognized earlier because the appearance of the 11th dimension is a nonperturbative phenomenon, not visible in perturbation theory.

To explain the relation between M theory and type IIA string theory, a good approach is to identify the parameters that characterize each of them and to explain how they are related. Eleven-dimensional supergravity (and hence M theory, too) has no dimensionless parameters. As we have seen, there are no massless scalar fields, whose vevs could give parameters. The only parameter is the 11d Newton constant, which raised to a suitable power ($`-1/9`$) gives the 11d Planck mass $`m_p`$. When M theory is compactified on a circle (so that the spacetime geometry is $`R^{10}\times S^1`$) another parameter is the radius $`R`$ of the circle. The parameters of type IIA superstring theory are the string mass scale $`m_s`$, introduced earlier, and the dimensionless string coupling constant $`g_s`$. An important fact about all five superstring theories is that the coupling constant is not an arbitrary parameter. Rather, it is a dynamically determined vev of a scalar field, the dilaton, which is a supersymmetry partner of the graviton. With the usual conventions, one has $`g_s=e^\varphi `$.

We can identify compactified M theory with type IIA superstring theory by making the following correspondences:

$$m_s^2=2\pi Rm_p^3$$ (1)

$$g_s=2\pi Rm_s.$$ (2)

Conventional string perturbation theory is an expansion in powers of $`g_s`$ at fixed $`m_s`$. Equation (2) shows that this is equivalent to an expansion about $`R=0`$. In particular, the strong coupling limit of type IIA superstring theory corresponds to decompactification of the eleventh dimension, so in a sense M theory is type IIA string theory at infinite coupling.<sup>3</sup><sup>3</sup>3The $`E_8\times E_8`$ heterotic string theory is also eleven-dimensional at strong coupling horava95 . This explains why the eleventh dimension was not discovered in studies of string perturbation theory.

These relations encode some interesting facts. The fact relevant to eq. (1) concerns the interpretation of the fundamental type IIA string. Earlier we discussed the old notion of double dimensional reduction, which allowed one to derive the IIA superstring world-sheet action from the 11d supermembrane (or M2-brane) world-volume action. Now we can make a stronger statement: the fundamental IIA string actually is an M2-brane of M theory with one of its dimensions wrapped around the circular spatial dimension. No truncation to zero modes is required. Denoting the string and membrane tensions (energy per unit volume) by $`T_{F1}`$ and $`T_{M2}`$, one deduces that

$$T_{F1}=2\pi RT_{M2}.$$ (3)

However, $`T_{F1}=2\pi m_s^2`$ and $`T_{M2}=2\pi m_p^3`$. Combining these relations gives eq. (1). It should be emphasized that all the formulas in this section are exact, due to the large amount of unbroken supersymmetry.

Type II superstring theories contain a variety of $`p`$-brane solutions that preserve half of the 32 supersymmetries. These are solutions in which the energy is concentrated on a $`p`$-dimensional spatial hypersurface.
(Adding the time dimension, the world volume of a $`p`$-brane has $`p+1`$ dimensions.) The corresponding solutions of supergravity theories were constructed by Horowitz and Strominger horowitz91 . A large class of these $`p`$-brane excitations are called D-branes (or D$`p`$-branes when we want to specify the dimension), whose tensions are given by polchinski95

$$T_{Dp}=2\pi m_s^{p+1}/g_s.$$ (4)

This dependence on the coupling constant is one of the characteristic features of a D-brane. It is to be contrasted with the more familiar $`g^{-2}`$ dependence of soliton masses (e.g., the ’t Hooft–Polyakov monopole). Another characteristic feature of D-branes is that they carry a charge that couples to a gauge field in the Ramond-Ramond (RR) sector of the theory. (Such fields can be described as bispinors.) The particular RR gauge fields that occur imply that even values of $`p`$ occur in the IIA theory and odd values in the IIB theory.

D-branes have a number of special properties, which make them especially interesting. By definition, they are branes on which strings can end—D stands for Dirichlet boundary conditions. The end of a string carries a charge, and the D-brane world-volume theory contains a $`U(1)`$ gauge field that carries the associated flux. When $`n`$ D$`p`$-branes are coincident, or parallel and nearly coincident, the associated $`(p+1)`$-dimensional world-volume theory is a $`U(n)`$ gauge theory. The $`n^2`$ gauge bosons $`A_\mu ^{ij}`$ and their supersymmetry partners arise as the ground states of oriented strings running from the $`i`$th D$`p`$-brane to the $`j`$th D$`p`$-brane. The diagonal elements, belonging to the Cartan subalgebra, are massless. The field $`A_\mu ^{ij}`$ with $`i\ne j`$ has a mass proportional to the separation of the $`i`$th and $`j`$th branes. This separation is described by the vev of a corresponding scalar field in the world-volume theory.

In particular, the D2-brane of the type IIA theory corresponds to our friend the supermembrane of M theory, but now in a background geometry in which one of the transverse dimensions is a circle. The tensions check, because (using eqs. (1), (2), and (4)) $`T_{D2}=2\pi m_s^3/g_s=2\pi m_p^3=T_{M2}`$. The mass of the first Kaluza–Klein excitation of the 11d supergraviton is $`1/R`$. Using eq. (2), we see that this can be identified with the D0-brane. More identifications of this type arise when we consider the magnetic dual of the M theory supermembrane. This turns out to be a five-brane, called the M5-brane.<sup>4</sup><sup>4</sup>4In general, the magnetic dual of a $`p`$-brane in $`d`$ dimensions is a $`(d-p-4)`$-brane. Its tension is $`T_{M5}=2\pi m_p^6`$. Wrapping one of its dimensions around the circle gives the D4-brane, with tension $`T_{D4}=2\pi RT_{M5}=2\pi m_s^5/g_s`$. If, on the other hand, the M5-brane is not wrapped around the circle, one obtains the so-called NS5-brane of the IIA theory with tension

$$T_{NS5}=T_{M5}=2\pi m_s^6/g_s^2.$$

This 5-brane, which is the magnetic dual of the fundamental IIA string, exhibits the conventional $`g^{-2}`$ solitonic dependence.

To summarize, type IIA superstring theory is M theory compactified on a circle of radius $`R=g_s\ell _s`$. M theory is believed to be a well-defined quantum theory in 11d, which is approximated at low energy by 11d supergravity. Its supersymmetric excitations (which are the only ones known when there is no compactification) are the massless supergraviton, the M2-brane, and the M5-brane.
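These tension identities can be cross-checked symbolically. The following sketch (ours, not from the article) encodes eqs. (1), (2) and (4) in sympy and verifies the wrapping relations quoted above:

```python
import sympy as sp

R, m_p = sp.symbols('R m_p', positive=True)

# Compactification dictionary, eqs. (1)-(2)
m_s = sp.sqrt(2*sp.pi*R*m_p**3)
g_s = 2*sp.pi*R*m_s

# M-theory tensions quoted in the text
T_M2 = 2*sp.pi*m_p**3
T_M5 = 2*sp.pi*m_p**6

# IIA tensions: T_F1 = 2*pi*m_s^2 and the D-brane formula, eq. (4)
T_F1 = 2*sp.pi*m_s**2
def T_D(p):
    return 2*sp.pi*m_s**(p + 1)/g_s

checks = {
    'F1 = wrapped M2, eq. (3)': T_F1 - 2*sp.pi*R*T_M2,
    'D2 = unwrapped M2':        T_D(2) - T_M2,
    'D0 = KK mass 1/R':         T_D(0) - 1/R,
    'D4 = wrapped M5':          T_D(4) - 2*sp.pi*R*T_M5,
    'NS5 = M5':                 2*sp.pi*m_s**6/g_s**2 - T_M5,
}
for name, residual in checks.items():
    print(name, '->', sp.simplify(residual))   # every residual simplifies to 0
```

All residuals vanish identically, as expected for relations protected by supersymmetry.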
These excitations account both for the (perturbative) fundamental string of the IIA theory and for many of its nonperturbative excitations. The identities presented here are exact, because they are protected by supersymmetry.

## Important Unresolved Issues

One issue that needs to be settled if superstring theory is to be used for phenomenology is where supersymmetry fits into the story. It is clear that at the string scale ($`10^{18}`$ GeV) or the Planck scale the underlying theory has maximal supersymmetry (32 conserved supercharges). The question that needs to be answered is at what scales these supersymmetries are broken and by what mechanisms. The traditional picture (which looks the most plausible to me) is that at the compactification/GUT scale ($`10^{16}`$ GeV) the symmetry is broken to $`𝒩=1`$ in $`d=4`$ (four conserved supercharges), and this persists to the TeV scale, where the final susy breaking occurs. The TeV scale is indicated by three separate arguments: the gauge hierarchy problem, supersymmetric grand unification, and the requirement that the lightest superparticle (LSP) be a cosmologically significant component of dark matter. It would be astonishing if this coincidence turned out to be a fluke. There is other support for this picture, such as the mass of the top quark and the ease with which it gives electroweak symmetry breaking. Despite all these indications, we cannot be certain that this picture is correct until it is demonstrated experimentally. As I once told a newspaper reporter: discovery of supersymmetry would be more profound than life on Mars.

Another important issue is the problem of vacuum degeneracy and the stabilization of moduli. Let me explain. The underlying theory is completely unique, with no dimensionless parameters. Nevertheless, typical quantum vacua have continuous parameters, called moduli, which arise as the vacuum values of scalar fields. Notable examples are the sizes of extra dimensions and the string coupling constant. For typical string vacua, the effective potential has many flat directions, so there is a continuum, or moduli space, of minima. The fields that correspond to the flat directions describe massless spin-zero particles. These particles typically interact with roughly gravitational strength, which is a problem because the gravitational force is observed to be pure tensor to better than 1% accuracy. So it seems that we should seek a vacuum without moduli, which is very difficult to do. However, if a realistic vacuum of this type is ever found, it will not have any continuously adjustable parameters, and therefore it will be completely predictive (at least in principle).

Perhaps the most challenging unresolved issue of all is the cosmological constant. This is a term in the effective action that describes the energy density of the vacuum, which is observable in a gravitational theory. Observationally, there are indications that it may be nonzero, but it is extremely small. Taking the $`1/4`$ power, the energy scale is $`10^{-11}`$ GeV. In a fundamental theory it receives contributions from many sources, such as vacuum condensates and zero-point energies. Supersymmetry ensures that boson and fermion zero-point energies cancel, so the natural scale would seem to be the TeV susy breaking scale, which is many orders of magnitude too high. This is a fine-tuning problem that is reminiscent of the gauge hierarchy problem. Presumably string theory will provide an elegant solution.
Until we know what the relevant mechanism is, it is hard to be confident that there is not an alternative to supersymmetry for solving the gauge hierarchy problem. I believe that when the correct solution to the problem of the cosmological constant is found, it will spark another revolution in our understanding. Recently, toy models without supersymmetry that seem to have a vanishing cosmological constant have been constructed by Kachru, Kumar, and Silverstein kachru . These models are far from realistic, and they have not yet led to new qualitative understanding. However, they are the best lead we have at the present time.

To conclude, there has been dramatic progress in understanding string theory in the past few years, but crucial issues remain unresolved. Future experimental discoveries will be essential to help guide our thinking. Sooner (Tevatron) or later (LHC) exciting new phenomena are bound to show up. My bet is on Higgs and superparticles. But if I should turn out to be wrong, that would not mean that string theory is wrong.
no-problem/9812/hep-th9812142.html
ar5iv
text
# ILL-(TH)-98-07 hep-th/9812142 String Junctions and Bound States of Intersecting Branes

## 1 Introduction

It is by now well-known that systems of intersecting branes correspond to blackholes, and the entropy of such a system may be accounted for by enumerating string states. At least when sufficient supersymmetry is preserved, the configuration of branes is a bound state at threshold. In many cases, these bound states signal the existence of degrees of freedom localized on the intersection manifold. It will be the aim of this note to understand in more detail the nature of these new states.

We are interested here in an intuitive problem: what is the detailed mechanism for binding together a collection of many (more than two) branes, and in particular, what are the relevant microscopic degrees of freedom? For a bound state of a pair of branes, we can certainly expect that ordinary strings stretching between them are responsible for the binding. However, in intersections of more than two branes, binding by ordinary strings cannot account for the entropy of the configuration, as we will discuss in some detail below.

The system that we will have in mind throughout this paper is the four-dimensional blackhole obtained from an M5-brane wrapped on a divisor of a Calabi–Yau threefold. However, it will be useful to consider directly a collection of three types of M5-branes wrapped on orthogonal cycles of a $`T^6`$. In much of the paper we will discuss directly the case of $`T^6`$, although we explore Calabi–Yau manifolds in the final section. In the case of $`T^6`$, we may take the M5-branes to be arranged as follows:

| Brane | 0 | 10 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`M5_1`$ | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | | | $`\bullet `$ | | | |
| $`M5_2`$ | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | | | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | | | |
| $`M5_3`$ | $`\bullet `$ | | | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | | | |
| $`P_L`$ | | | | | | | | $`\bullet `$ | | | |

The four-dimensional blackhole has an $`E_{7,7}`$ U-duality group; a useful diagonal basis identifies four charges as the number of M5-branes of each of the three types plus the momentum along the eleventh direction. The entropy of this blackhole is given, at least to leading order, by the product of these charges, and may be thought of as counting all of the excitations of the blackhole.

Let us briefly review what is known about this system. There are several points of view. In the limit where the compact manifold is small, one attains an effective description in terms of a $`1+1`$-dimensional field theory on the intersection manifold of the M5-branes. This theory is a superconformal field theory in the infrared, with $`(0,4)`$ supersymmetry. First, there is an important analysis of Ref. (see also Ref. ) which computes the central charge of this theory in terms of the cohomology of the complex divisor upon which the M5-branes are wrapped. Thus the entropy is computed, in leading order, by the triple self-intersection number of the divisor. This number can be thought of as the number of free fields required to describe the entropy of the system. String states stretching between two types of branes would only account for double intersections, and thus fall short. To our knowledge, a concrete proposal for the target space of a $`\sigma `$-model has not been given, although intuitively one expects it to be related to the complexified moduli space of the divisor.
Whatever this spacetime CFT is, it is known that on $`T^6`$ it must have a moduli space of deformations given by $`F_{4(4)}(\mathbb{Z})\backslash F_{4(4)}/Sp(2)\times Sp(6)`$. The low energy physics of the bound states may be understood in terms of deformation theory. Locally, we can discuss the triple intersection in $`\mathbb{C}^3`$, coordinatized by $`z^1,z^2,z^3`$. An equation for the divisor is of the form

$$P_{N_1,N_2,N_3}(z^1,z^2,z^3)=0=P_{N_1}(z^1)P_{N_2}(z^2)P_{N_3}(z^3)$$ (1)

where $`N_i`$ are the degrees of each polynomial. The zeroes of this polynomial correspond to the positions of the M5-branes. The holomorphic deformations of the divisor are of the form

$$P_{N_1,N_2,N_3}(z^1,z^2,z^3)+Q_{N_1-1,N_2-1,N_3-1}(z^1,z^2,z^3)=0$$ (2)

The degrees of the polynomial $`Q`$ have been chosen such that this deformation does not alter the asymptotic form. The deformations are localized at triple intersections. To see this, fix $`z^{1,2}`$ at very large values, away from the zeroes of the polynomials; it is then clear that the deformation of the third variable will be very small. That is, the deformations can only be large when $`z^{1,2}`$ are close to the zeroes of their respective polynomials; this may be verified explicitly. We can choose to write the deformations in the following form:

$$Q_{N_1-1,N_2-1,N_3-1}=\sum _{i,j,k}a_{ijk}\frac{P_{N_1}(z^1)P_{N_2}(z^2)P_{N_3}(z^3)}{(z^1-r_i^1)(z^2-r_j^2)(z^3-r_k^3)}$$ (3)

The $`a_{ijk}`$ are the localized deformations, and they appear as fields in the low energy description. The number of degrees of freedom is then simply counted by the number of triple intersections; because of supersymmetry, these must come in supermultiplets, with $`c=6`$. When we compactify, care must be taken with boundary conditions, and so not all of these deformations are allowed. One expects, however, that these effects are subleading compared to the number of triple intersections. We will see evidence of this below.

Furthermore, the near-horizon limit of this blackhole displays the geometry $`AdS_3\times S^3/\mathbb{Z}_N\times M_4`$; the supergravity spectrum on $`AdS_3`$ has been computed and, recently, the quantization of strings in this background has been considered. In this paper, we are not directly interested in such SCFT descriptions. Instead, we would like to elucidate the microscopic stringy physics responsible for the existence of the bound state. The physics that we are interested in will appear quite different from the point of view of different U-dual frames. We discuss several different U-frames here; perhaps the most intuitively appealing picture is within a Type IIB frame, where the binding of three branes is related to the existence of massless string junctions localized at the triple intersection. The identification of these non-perturbative states is hampered by the absence of BPS states in this background, although we give strong arguments for the existence of the bound states. Another Type IIB frame involves intersecting D3-branes localized at an orbifold singularity; there the bound states are understood in terms of twisted strings. The latter frame leads to a perturbative UV gauge theory description of this system.
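The counting implicit in Equ. (3) is simple enough to make explicit. The short sketch below (ours) just enumerates the coefficients of a polynomial of degrees $`(N_11,N_21,N_31)`$ in three variables and compares with the number of triple intersection points $`(r_i^1,r_j^2,r_k^3)`$; as the text notes, compactification boundary conditions reduce this only at subleading order.

```python
from itertools import product

N1, N2, N3 = 2, 3, 4
# Exponents (a, b, c) of the monomials z1^a z2^b z3^c appearing in Q:
monomials = list(product(range(N1), range(N2), range(N3)))
print('deformation parameters a_ijk:', len(monomials))   # 24
print('triple intersection points  :', N1*N2*N3)         # 24: one field per point
```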
## 2 String Junctions

We begin with a short review of the essential properties of string junctions. In Type IIB string theory, 1-branes are classified by a pair of integers $`(p,q)`$. In this notation, the fundamental string is a $`(1,0)`$-brane, and the D1-brane a $`(0,1)`$-brane. It is known that, subject to some conditions, there is a BPS state consisting of three such branes meeting at a junction. Since $`p`$ and $`q`$ are the charges with respect to the 2-forms $`B_{NS}`$ and $`B_R`$, they must be conserved at the vertex:

$$\sum _ip_i=\sum _iq_i=0.$$ (4)

In addition, there is a condition on the tensions of the branes, and this condition depends on the string coupling.

Now note that there is a U-duality frame in which the 3 M5-branes become an NS5-brane, a D5-brane and a D3-brane in Type IIB string theory. This is attained (referring to the table in Section 1) by compactifying the 10-direction, then performing a T-duality along, say, the 2-direction. These three branes intersect along a string, as did the M5-branes. The low energy theory is then expected to be a $`1+1`$-dimensional CFT with $`(0,4)`$ supersymmetry.

| Brane | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`D3`$ | $`\bullet `$ | $`\bullet `$ | | $`\bullet `$ | | | $`\bullet `$ | | | |
| $`D5`$ | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | | | |
| $`NS5`$ | $`\bullet `$ | | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | $`\bullet `$ | | | |
| $`P_L`$ | | | | | | | $`\bullet `$ | | | |

It is well known that fundamental strings may end on D-branes, and, by S-duality, the D1-brane may end on the NS5-brane. Since the D3-brane is S-invariant, any $`(p,q)`$ 1-brane may end on it. Thus, at least from the point of view of charge conservation, the state shown in Figure 1 exists. Furthermore, the string junction is massless when the three branes intersect; the junction may be made massive by moving the branes away from each other in the 789-directions.

Now, each of the ends of the string junction may terminate on any of the $`N`$ branes of the appropriate type. Thus, we see that there are of order $`N_1N_2N_3`$ states present here. Furthermore, since the junctions must organize themselves into representations of the $`(0,4)`$ supersymmetry, there are $`4N_1N_2N_3`$ bosonic states and their superpartners. String junctions then account for the entropy of this configuration. Note that in this frame, open string states stretching between pairs of branes are not this numerous. Thus, at least to leading order, the entropy is accounted for by non-perturbative states.

There are several potential problems with this picture, however, and we now turn to a discussion of the relevant issues. We have claimed above that the string junctions are massless when the branes intersect. Although this is clearly true geometrically at the classical level, it is not true that the mass of a massive state is protected. To understand the relevant issues, we should consider the details of the $`(0,4)`$ supersymmetry algebra in two dimensions. The algebra takes the form

$$\{Q,Q\}=P_R$$ (5)

In particular, there are no central charges, as central charges require both left- and right-moving supersymmetries. The BPS bound is thus simply $`P_R\ge 0`$; the only states saturating the bound are massless, and they may have $`P_L\ne 0`$. This implies that in any ultraviolet description, only the massless states with $`P_R=0`$ will necessarily survive down to the infrared conformal theory and contribute to the entropy of the configuration we are studying. For this massless state to be present, then, we must argue that the classical moduli space is unmodified quantum mechanically, at least at the origin. Indeed, we do not expect such modifications, because of the $`(0,4)`$ supersymmetry.
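The entropy argument above is just counting, and a toy tally (ours) makes the scaling explicit: junction states, with one endpoint on each of the three stacks, grow like $`N_1N_2N_3`$ (times the fixed multiplicity of the $`(0,4)`$ multiplet), while ordinary two-ended strings only grow like pairwise products.

```python
# Toy tally of leading state counts for equal charges N1 = N2 = N3 = N
for N in (2, 5, 10, 50):
    junctions = N**3           # one endpoint choice on each of the three stacks
    pair_strings = 3*N**2      # two-ended strings between pairs of stacks
    print(f'N = {N:3d}: junction states ~ {junctions:7d}, pairwise strings ~ {pair_strings:6d}')
```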
This is actually more restrictive than $(2,2)$; for example, the metric of the target space manifold must be hyperkähler. Further evidence will be presented below. If we identify the states localized at the intersection as being of non-perturbative origin (at least in this frame), then we must become comfortable with the idea that the conformal field theory of ordinary string states is somehow insufficient. Indeed, we can think of this situation as akin to a conifold singularity: at the origin, there is a new branch of the moduli space, parameterized by vevs of the fields corresponding to string junctions. This is not obviously inconsistent, as near the NS5-branes the string theory is strongly coupled, which invalidates perturbation theory. In the next section, we consider a different U-frame, in which these states appear in the perturbative spectrum.

## 3 The Orbifold Frame

In this section, we discuss another U-frame, in which the theory is perturbative and the states localized at the intersection are twisted strings. To attain this, we may begin with the configuration of the last section and perform a T-duality along $X^{1,2}$:

| Brane | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $D3_1$ | $\bullet$ | | $\bullet$ | $\bullet$ | | | $\bullet$ | | | |
| $D3_2$ | $\bullet$ | | | | $\bullet$ | $\bullet$ | $\bullet$ | | | |
| $KK5$ | $\bullet$ | $\times$ | $\bullet$ | $\bullet$ | $\bullet$ | $\bullet$ | $\bullet$ | | | |

The interpretation of this configuration is that of a pair of D3-branes intersecting along a line ($X^6$), at a $\mathbb{Z}_{N_3}$ orbifold singularity. (In the table, the symbol $\times$ refers to the Taub-NUT direction.) We take $X^{1,7,8,9}$ to be noncompact; this ensures that the singularity is isolated. Here, $N_3$ is the number of $NS5$-branes in the original picture, and there are $N_1$ ($N_2$) D3-branes of each type. Note that in this frame there is no manifest triality between $N_1$, $N_2$ and $N_3$. This is simply a consequence of choosing a definite U-duality frame; triality will be recovered in U-invariant quantities, such as the entropy. This is an interesting configuration in its own right. There have been several appearances of D3-branes at orbifold singularities in the literature, giving rise to interesting $3+1$-dimensional gauge theories. In the present configuration, we find a gauge theory description of the $1+1$-dimensional intersection. This theory is an ultraviolet description in which gravity has been decoupled, and it will flow to the relevant conformal field theory in the infrared. In this theory, we will be able to identify the states that are localized at the intersection and which contribute the predominant amount of entropy. Since the configuration is perturbative, the analysis is reliable. Furthermore, we will be able to map these states to the string junctions of the previous section. The spectrum of this gauge theory may be obtained via a straightforward application of familiar techniques. Note first that if we concentrate on the states of a single D3-brane but dimensionally reduce along a two-torus, we expect to see multiplets of $(4,4)$ supersymmetry. The supersymmetries preserved by the two types of D3-branes are incompatible, and at the end we are left with only $(0,4)$ supersymmetry; the string states connecting $D3_1$ to $D3_2$ do not form full $(4,4)$-multiplets. In fact, we will find that the orbifolding acts to shift the gauge quantum numbers of fermions with respect to those of bosons.
To construct the spectrum, we account for the orbifolding by including $N_3$ images of the collections of $N_1$ ($N_2$) D3-branes. String states that stretch between D3-branes of the same type, as mentioned, give multiplets of $(4,4)$ supersymmetry: the fermions and bosons are in the same gauge multiplets. Those multiplets which correspond to string states between branes at the same image turn out to be hypermultiplets, whereas those stretching horizontally (see Fig. 2) are vector multiplets. (This nomenclature comes from looking at the four-dimensional theory on the intersection of two D5-branes, where the vector directions are along the intersection manifold, and the hypermultiplet directions are orthogonal.) The resulting gauge group is then (we consider the low energy ultraviolet theory, and so do not concern ourselves with the possible decoupling of $U(1)$'s)
$$\prod_{k=1}^{N_3}\left[U(N_1)\times U(N_2)\right].$$ (6)
The string states that stretch between D3-branes of different types, however, are acted upon non-trivially by the orbifold. It should be noted that the $\mathbb{Z}_{N_3}$ acts chirally on the $SU(2)\times SU(2)$ R-symmetry on either of the D3-branes. There are several reasons for this choice. First, this particular orbifold action is important for preserving $(0,4)$ supersymmetry and the resulting hyperkähler structure. More importantly, the corresponding 4-dimensional black hole is, as in Ref. , related to a configuration of NS5-branes and KK monopoles, for which the near-horizon geometry is given by $AdS_3\times S^3/\mathbb{Z}_{N_3}$. In the near-horizon region, the orbifold of the sphere indeed acts chirally. This is no coincidence; in fact both the near-horizon geometry and this gauge theory description share the same geometrical features. The gauge theory, then, is an ultraviolet description of the spacetime conformal field theory which controls the physics of the near-horizon region of the black hole. The detailed form of this CFT, as mentioned, is not known; however, at the very least, the gauge theory discussed here should be capable of reproducing some of the features of the CFT, in particular the chiral ring. (Note that in order to check non-chiral operators, we would need to control the non-perturbative details of the gauge theory in the infrared limit. We do not attempt to demonstrate this here.) Given this orbifold action, bosons and right-moving fermions form supermultiplets, while the left-moving fermions are singlets under supersymmetry. The field content is summarized in Fig. 2: the vertical lines denote supermultiplets, and the diagonal lines left-moving fermions. Each node and edge carries a supermultiplet together with left-moving fermion singlets, as required to complete representations of $(4,4)$ supersymmetry. Note that this portion of the spectrum is an example of the "misaligned supersymmetry" of Ref. , as bosons and fermions are degenerate but sit in different representations of the symmetry groups. Thus much of the structure of a $(4,4)$-supersymmetric theory is present; only the gauge representations are aware of the breaking to $(0,4)$. As the configuration is made only out of D3-branes, the value of the Type IIB coupling constant is not fixed, and we can take a weakly coupled limit, so that the field theory analysis is accurate.
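Before counting modes in detail, it is worth recalling the standard Cardy-type estimate that any such counting must reproduce: for a unitary two-dimensional CFT with central charge $c$ at level $L_0$,

$$S\simeq 2\pi\sqrt{\frac{c\,L_0}{6}}\,.$$

With one $c=6$ supermultiplet per triple intersection, so that $c\propto N_1N_2N_3$, and with $L_0$ identified with the left-moving momentum $P_L$, this gives the $S\propto\sqrt{N_1N_2N_3\,P_L}$ scaling that the explicit mode counting below reproduces; the subleading terms inside the square root are the subject of the discussion that follows.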
Next, we would like to count (gauge-invariant) modes, in order to probe the entropy of the corresponding black hole. To facilitate this, we move on the moduli space to a generic point, where the gauge group is broken as much as possible. To this end, we move all D3-branes apart in the 2345-directions (but not away from the orbifold singularity). The gauge group is Abelian, $U(1)^{N_1}\times U(1)^{N_2}$, and massless charged states are present. While most of the $(4,4)$ vectors and hypermultiplets have been lifted, the twisted states survive. These states are localized at the orbifold singularity and have multiplicity $N_1N_2N_3$ (since they are in $(N_1,\overline{N}_2)$ representations, and there are $N_3$ images). For each of these, we have two complex bosonic modes and two complex fermionic modes (as in Fig. 2). In a sector with fixed $P_L$, these states dominate the entropy, giving by a standard argument $S=2\pi\sqrt{6N_1N_2N_3P_L+\ldots}$. It was found in Ref. that the central charge of the spacetime conformal theory contains no subleading corrections. However, in the present construction, it appears that there is a problem. There are massless fields which are the remnants of the adjoint hypermultiplets. There is one such supermultiplet remaining per vertex of Fig. 2, and thus one would expect these fields to contribute to the entropy at order $(N_1+N_2)N_3$. It is possible that the correct central charge is nevertheless obtained as follows, by canceling this contribution. We have assumed that all triple intersections contribute an independent supersymmetric degree of freedom to the entropy, but this is not really true, as not all of the local deformations produce a smooth manifold. This means that some fraction of the (vertical) fields have a superpotential and therefore do not contribute to the entropy. It is quite possible that this correction to the leading term in the entropy above precisely cancels the effect of the adjoint fields. A similar mechanism is known to occur in the D1-D5 system: the dimension of the moduli space is smaller than the number of fields because of D-term constraints (in our case we have F-terms). It is clear, then, that the present description is far from being a free CFT, at least at finite $N$. The gauge theory description is useful, however, in the long string limit, where these effects are subleading. One application would be the computation of the chiral ring.

### 3.1 Relation to String Junctions

Now note that we expect this discussion of the spectrum to be robust: the entropy is accounted for by twisted string states, as long as the singularity itself is not modified by quantum corrections. Furthermore, this description of the states localized at the intersection is T-dual to the description in the previous section in terms of string junctions; that is, the two descriptions are related by a discrete Fourier transform. We regard this as definitive evidence (if duality is to be believed) for the existence of massless string junctions in that frame, and hence for their contribution to the entropy.

## 4 Other U-frames

It is of interest to consider other U-frames in the same context. We confine ourselves to brief discussions of three such frames; in most cases, an understanding of the localized states is considerably more difficult.

### 4.1 M-theory and 3 M5-branes

First, we consider the original M-theory configuration, and account for the entropy there.
This may be understood by beginning with the string junction; if we lift this to M-theory, we find that the junction becomes an M2-brane "pants section". Each $(p,q)$-leg has one direction wrapped along the vector $(p,q)$ in the $X^{2,10}$ torus. Thus the bound state degrees of freedom are these pants sections; at a triple intersection point of the M5-branes, they have zero area and so should become massless. A smooth point in moduli space is then attained by turning on vevs for these low energy fields.

### 4.2 Type IIA and the $4440$-system

By compactifying the M-theory configuration along $X^6$, we obtain a system of three different types of D4-branes, plus D0-branes from the momentum along $X^6$. This is a system that has been well studied in the black hole context. The pants section of the preceding paragraph descends to a similar D2-brane, while the momentum descends to a constant D0-flux through the D2-brane, $F_2=P_L\,d\mathrm{Vol}$, where the volume is normalized to unity. The localized states of this system are counted as follows. At a given intersection, a pants section is massless, and with an arbitrary D0-flux its energy is just the D0-brane charge. States with fixed D0-flux $P_L$ are then obtained by partitioning that flux over $n$ pants sections (where $1\le n\le P_L$, in the normalization where $P_L$ is an integer). Thus we find the expected rapid growth of states, exactly as in the free field theory calculation, with $S\simeq 2\pi\sqrt{6N_0N_1N_2N_3}$. (This may be obtained by taking the partition function $Z\sim\left(\eta(q)\theta(q)\right)^{4N_1N_2N_3}$, for fixed $E=N_0$.) Notice that the $D2$-branes are in a sense auxiliary to the construction, as the total $D2$-brane charge is zero.

### 4.3 Type IIB and the $3333$-system

If we T-dualize the Type IIA configuration along three directions, such as $X^{1,3,5}$, we find four different types of D3-branes, which now intersect at a point:

| Brane | 0 | 10 | 1 | 2 | 3 | 4 | 5 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $D3_1$ | $\bullet$ | $\bullet$ | | $\bullet$ | | | $\bullet$ | | | |
| $D3_2$ | $\bullet$ | $\bullet$ | | | $\bullet$ | $\bullet$ | | | | |
| $D3_3$ | $\bullet$ | | $\bullet$ | $\bullet$ | | $\bullet$ | | | | |
| $D3_4$ | $\bullet$ | | $\bullet$ | | $\bullet$ | | $\bullet$ | | | |

The fourth D3-brane comes from the D0-branes of the Type IIA frame. Note that each pair of these D3-branes intersects along a line, but the four branes together intersect at most at a point. Thus, the low energy description would be a $0+1$-dimensional quantum mechanical system. In this U-frame, a description of the bound states appears to be very complicated. To see this, consider the transformation of the string junction under the T-duality mentioned above. Depending upon which three directions we T-dualize along, we get a pants section of D5-branes, or of D3-branes, or a mixture of the two. It would seem counterintuitive to attempt an explanation of bound states of D3-branes in terms of D5-branes! However, we note that there are global conditions that must be satisfied to maintain charge conservation. These conditions (vanishing of the total brane charge) imply that the description of the bound state in terms of, say, D5-branes is unstable. Perhaps there is a description of these bound states in terms of some remnant, along the lines of Refs. .

## 5 M-branes on Calabi-Yau Threefolds

It is also of interest to discuss the case of M5-branes on Calabi-Yau 3-folds more directly.
To begin, consider the case of $K3\times T^2$, with M5-branes wrapped on different complex 2-cycles. In particular, there are M5-branes wrapped on the whole $K3$ manifold; in a Type IIB description, these branes give rise to an $A_{N_3}$ singularity times a $K3$ surface. Other M5-branes, which wrap the $T^2$ as well as a 2-cycle of the $K3$, correspond to D3-branes. Again, we can go to a weakly coupled Type IIB picture and repeat the steps above to obtain the open string quiver diagram corresponding to the configuration. The twisted open strings are again the relevant degrees of freedom. This construction can be immediately generalized to an elliptically fibered Calabi-Yau manifold $M$ with a section. Clearly, we should distinguish M5-branes which wrap a cycle on the base plus the elliptic fibre from those which wrap the base completely. The latter appear as KK monopoles, while the former become D3-branes wrapped on 2-cycles of the base, once we pass to the IIB F-theory configuration. Therefore, we expect a local description as D3-branes wrapping cycles of the base at an orbifold singularity. The description in terms of twisted open strings should still be good locally on the D3-branes, yet the choice of which string is light changes as we move along the D3-branes, and the strings certainly become massless at the intersection points.

## 6 Concluding remarks

In this note, we have considered configurations of branes which form bound states at threshold. The entropy of these objects may be understood from the counting of (not necessarily perturbative) states which become massless when the different constituents of the black hole are brought together. The identification of these modes as string junctions is particularly appealing, as all of the degrees of freedom can be seen geometrically, but they are never perturbative in this U-frame. We have also found a perturbative picture in which the microscopic states are twisted string states on the intersection of D3-branes at an orbifold singularity. The ultraviolet theory is then a gauge theory. We have been unable, however, by deforming along the moduli space, to find a description of the spacetime infrared conformal field theory in terms of free fields, either on the torus or for those Calabi-Yau manifolds for which the construction makes sense. It is expected, however, that this construction is capable of reproducing the chiral ring.

Acknowledgments: We wish to thank F. Larsen for discussions. Work supported in part by the United States Department of Energy grant DE-FG02-91ER40677 and an Outstanding Junior Investigator Award.
## 1 Introduction

In the framework of delayed reinforcement learning, a system receives input from its environment, decides on a proper sequence of actions, executes them, and thereafter receives a reinforcement signal, namely a grade for the decisions made. A system at any instant is described by its so-called state variables. The objective of a broad class of reinforcement problems is to learn how to control a system in such a way that its state variables remain at all times within prescribed ranges. However, if at any instant the system violates this requirement, it is penalized by receiving a "bad grade" signal, and hence its policy in making further decisions is influenced accordingly. There are many examples of this kind of problem, such as the pole balancing problem, teaching an autonomous robot to avoid obstacles, the ball-and-beam problem, etc. In general we can distinguish two kinds of approaches that have been developed for delayed reinforcement problems: the critic-based approaches and the direct approaches. There is also the Q-learning approach, which exhibits many similarities with the critic-based ones. The most well-studied critic-based approach is the Adaptive Heuristic Critic (AHC) method, which assumes two separate models: an action model that receives the current system state and selects the action to be taken, and an evaluation model which provides as output a prediction $e(x)$ of the evaluation of the current state $x$. The evaluation model is usually a feedforward neural network trained using the method of temporal differences, i.e., it tries to minimize the error $\delta=e(x)-(r+\gamma e(y))$, where $y$ is the new state, $r$ the received reinforcement and $\gamma$ a discount factor. The action model is also a feedforward network that provides as output a vector of probabilities upon which the action selection is based. Both networks are trained on-line through backpropagation using the same error value $\delta$ described previously. The direct approach to delayed reinforcement learning problems treats reinforcement learning as a general optimization problem with an objective function that has a straightforward formulation but is difficult to optimize. In such a case only the action model is necessary to provide the action policy, and optimization techniques must be employed to adjust the parameters of the action model so that a stochastic integer-valued function is maximized. This function is actually proportional to the number of successful decisions (i.e., actions that do not lead to the receipt of a penalty signal). A previous direct approach to delayed reinforcement problems employs real-valued genetic algorithms to perform the optimization task. In the present study we propose another optimization strategy, based on the polytope method with random restarts. Details concerning this approach are presented in the next section, while Section 3 provides experimental results from the application of the proposed method to the pole balancing problem and compares its performance against that of the AHC method and of the evolutionary approach.

## 2 The Proposed Training Algorithm

As already mentioned, the proposed method belongs to the category of direct approaches to delayed reinforcement problems.
Therefore, only an action model is considered, which in our case has the architecture of a multilayer perceptron with input units accepting the system state at each time instant, and sigmoid output units providing output values $p_i$ in the range $(0,1)$. The decision for the action to be taken from the values of $p_i$ can be made either stochastically or deterministically. For example, in the case of one output unit the value $p$ may represent the probability that the final output will be one or zero, or the final output may be obtained deterministically using the rule: if $p>0.5$ the final output is one, otherwise it is zero. Learning proceeds in cycles, with each cycle starting with the system placed at a random initial position and ending with a failure signal. Since our objective is to train the network so that the system ideally never receives a failure signal, the number of time steps of the cycle (i.e., its length) constitutes the performance measure to be optimized. Consequently, the training problem can be considered a function optimization problem, with the adjustable parameters being the weights and biases of the action network and with the function value being the length of a cycle obtained using the current weight values. In practice, when the length of a cycle exceeds a preset maximum number of steps, we consider that the controller has been adequately trained. This is used as a criterion for terminating the training process. The training also terminates if the number of unsuccessful cycles (i.e., function evaluations without reaching the maximum value) exceeds a preset upper bound. Obviously, the function to be optimized is integer-valued, thus it is not possible to define derivatives. Therefore, traditional gradient-based optimization techniques cannot be employed. Moreover, the function possesses an amount of random noise, since the initial state specification as well as the action selection at the early steps are performed at random. On one hand, the incorporation of this random noise may disrupt the optimization process: for example, if the evaluation of the same network is radically different at different times, the learning process will be misled. On the other hand, the global search certainly benefits from it, and hence the noise should be kept, albeit under control. It is clear that the direct approach has certain advantages, which we summarize in the following list.
* Instead of using an on-line update strategy for the action network, we perform updates only at the end of each cycle. Therefore, the policy of the action network is not affected in the midst of a cycle (during which the network actually performs well). The continuous on-line adjustment of the weights of the action network may lead, due to overfitting, to the corruption of correct policies that the system has acquired so far.
* Several sophisticated, derivative-free, multidimensional optimization techniques may be employed instead of naive stochastic gradient descent.
* Stochastic action selection is not necessary (except at the early steps of each cycle). In fact, stochastic action selection may cause problems, since it may lead to choices that are not suggested by the current policy.
* There is no need for a critic. The absence of a critic and the small number of weight updates contribute to the increase of the training speed.
The main disadvantage of the direct approach is that its performance relies mainly on the effectiveness of the optimization strategy used.
Due to the characteristics of the function to be optimized, one cannot be certain that any given optimization approach will be suitable for training. As already stated, a previous reinforcement learning approach that follows a direct strategy employs optimization techniques based on genetic algorithms and provides very good results in terms of training speed (required number of cycles). In this work, we present a different optimization strategy based on the polytope algorithm, which is described next.

### 2.1 The Polytope Algorithm

The polytope algorithm belongs to the class of direct search methods for non-linear optimization. It is also known by the name Simplex; however, it should not be confused with the well-known Simplex method of linear programming. Originally this algorithm was designed by Spendley et al. and was refined later by Nelder and Mead. A polytope (or simplex) in $R^n$ is a construct with $(n+1)$ vertices (points in $R^n$) defining a volume element. For instance, in two dimensions the simplex is a triangle, in three dimensions it is a tetrahedron, and so forth. In our case each vertex point $w_i=(w_{i1},\ldots,w_{in})$ describes the $n$ parameters (weights and thresholds) of an action network. The input to the algorithm, apart from a few parameters of minor importance, is an initial simplex, i.e., $(n+1)$ points $w_i$. The algorithm brings the simplex into the area of a minimum, adapts it to the local geometry, and finally shrinks it around the minimizer. It is a derivative-free, iterative method and proceeds towards the minimum by manipulating a population of $n+1$ points (the simplex vertices); hence it is expected to be tolerant to noise, in spite of its deterministic nature. The steps taken in each iteration are described below. (We denote by $f$ the objective function and by $w_i$ the simplex vertices.)

1. Examine the termination criteria to decide whether to stop or not.
2. Number the simplex vertices $w_i$ so that the sequence $f_i=f(w_i)$ is sorted in ascending order.
3. Calculate the centroid of the first $n$ vertices: $c=\frac{1}{n}\sum_{i=0}^{n-1}w_i$.
4. Reflect the "worst" vertex $w_n$ as: $r=c+\alpha(c-w_n)$ (usually $\alpha=1$).
5. If $f_0\le f(r)\le f_{n-1}$, then set $w_n=r$, $f_n=f(r)$, and go to step 1.
6. If $f(r)<f_0$, then expand as: $e=c+\gamma(r-c)$ ($\gamma>1$, usually $\gamma=2$). If $f(e)<f(r)$, set $w_n=e$, $f_n=f(e)$; else set $w_n=r$, $f_n=f(r)$. Go to step 1.
7. If $f(r)\ge f_{n-1}$, then: if $f(r)\ge f_n$, contract as $k=c+\beta(w_n-c)$ ($\beta<1$, usually $\beta=\frac{1}{2}$); else contract as $k=c+\beta(r-c)$. If $f(k)<\min(f(r),f_n)$, set $w_n=k$, $f_n=f(k)$; else shrink the whole polytope as: set $w_i=\frac{1}{2}(w_0+w_i)$, $f_i=f(w_i)$ for $i=1,2,\ldots,n$. Go to step 1.

In essence, the polytope algorithm considers at each step a population of $(n+1)$ action networks whose weight vectors $w_i$ are properly adjusted in order to obtain an action network with high evaluation. In this sense the polytope algorithm, although developed earlier, exhibits an analogy with genetic algorithms, which are also based on the recombination of a population of points. The initial simplex may be constructed in various ways. The approach we followed was to pick the first vertex at random; the rest of the vertices were obtained by line searches originating at the first vertex, along each of the $n$ directions.
This initialization scheme proved to be very effective for the pole balancing problem. Other schemes, such as random initial vertices or constrained random vertices along predefined directions, did not work well. The termination criterion relies on comparing a measure of the polytope's "error" to a user-preset small positive number. Specifically, the algorithm returns if $\frac{1}{n+1}\sum_{i=0}^n|f_i-\overline{f}|\le\epsilon$, where $\overline{f}=\frac{1}{n+1}\sum_{i=0}^n f_i$. The use of the polytope algorithm has certain advantages, such as robustness in the presence of noise, simple implementation, and derivative-free operation. These characteristics make this algorithm a suitable candidate for use as an optimization tool in a direct reinforcement learning scheme. Moreover, since the method is deterministic, its effectiveness depends partly on the initial weight values. For this reason, our training strategy employs the polytope algorithm with random restarts, as will become clear in the application presented in the next section. It must also be stressed that the proposed technique does not make any assumption concerning the architecture of the action network (which in the case described here is a multilayer perceptron) and can be used with any kind of parameterized action model (e.g., the fuzzy-neural action model employed in the GARIC architecture).

## 3 Application to the Pole Balancing Problem

The pole balancing problem constitutes the best-studied reinforcement learning application. It consists of a single pole hinged on a cart that may move left or right on a horizontal track of finite length. The pole has only one degree of freedom (rotation about the hinge point). The control objective is to push the cart either left or right with a force so that the pole remains balanced and the cart is kept within the track limits. Four state variables are used to describe the status of the system at each time instant: the horizontal position of the cart ($x$), the cart velocity ($\dot{x}$), the angle of the pole ($\theta$) and the angular velocity ($\dot{\theta}$). At each step the action network must decide the direction and magnitude of the force $F$ to be exerted on the cart. Details concerning the equations of motion of the cart-pole system can be found in Ref. . Through Euler's approximation method we can simulate the cart-pole system using discrete-time equations with time step $\Delta\tau=0.02$ sec. We assume that the system's equations of motion are not known to the controller, which perceives only the state vector at each time step. Moreover, we assume that a failure occurs when $|\theta|>12$ degrees or $|x|>2.4$ m, and that training has been successfully completed if the pole remains balanced for more than 120000 consecutive time steps. Two versions of the problem exist concerning the magnitude of the applied force $F$. We are concerned with the case where the magnitude is fixed and the controller must decide only the direction of the force at each time step. Obviously the control problem is more difficult compared to the case where any value for the magnitude is allowed. Therefore, comparisons will be presented only with fixed-magnitude approaches, and we will not consider architectures like RFALCON, which are very efficient but assume continuous values for the force magnitude. The polytope method is embedded in the MERLIN package for multidimensional optimization.
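For readers without access to MERLIN, the update step of Section 2.1 is straightforward to implement directly. The following is a minimal stand-alone sketch in Python (not the MERLIN implementation used in this work); it follows steps 2-7 above, written as minimization, so maximizing the cycle length amounts to passing the negative of the network's evaluation as `f`.

```python
import numpy as np

def polytope_step(vertices, fvals, f, alpha=1.0, gamma=2.0, beta=0.5):
    """One iteration of the polytope update of Sec. 2.1 (minimization).
    vertices: (n+1, n) array of simplex vertices (flattened weight vectors);
    fvals:    (n+1,) array of the corresponding objective values."""
    order = np.argsort(fvals)                 # step 2: best vertex first
    vertices, fvals = vertices[order], fvals[order]
    n = vertices.shape[1]
    c = vertices[:n].mean(axis=0)             # step 3: centroid of the n best
    r = c + alpha * (c - vertices[n])         # step 4: reflect the worst vertex
    fr = f(r)
    if fvals[0] <= fr <= fvals[n - 1]:        # step 5: accept the reflection
        vertices[n], fvals[n] = r, fr
    elif fr < fvals[0]:                       # step 6: try to expand
        e = c + gamma * (r - c)
        fe = f(e)
        if fe < fr:
            vertices[n], fvals[n] = e, fe
        else:
            vertices[n], fvals[n] = r, fr
    else:                                     # step 7: contract...
        k = c + beta * ((vertices[n] if fr >= fvals[n] else r) - c)
        fk = f(k)
        if fk < min(fr, fvals[n]):
            vertices[n], fvals[n] = k, fk
        else:                                 # ...or shrink towards the best vertex
            vertices[1:] = 0.5 * (vertices[0] + vertices[1:])
            fvals[1:] = np.array([f(w) for w in vertices[1:]])
    return vertices, fvals
```

Note that only the shrink branch requires $n$ new function evaluations; every other branch costs one or two, which is what keeps the method economical in terms of cycles.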
Other derivative-free methods provided by MERLIN (random, roll) have been tested on the pole-balancing example, but the results were not satisfactory. On the contrary, the polytope algorithm was very effective, being able to balance the pole in a relatively small number of cycles (function evaluations), fewer than 1000 in many cases. As mentioned in the previous section, the polytope method is deterministic, thus its effectiveness depends partly on the initial weight values. For this reason we have employed an optimization strategy based on the polytope algorithm with random restarts. Each run starts by randomly specifying an initial point in the weight space and constructing the initial polytope by performing line minimizations along each of the $n$ directions. Next, the polytope algorithm is run for up to 100 function evaluations (cycles) and the optimization progress is monitored. If a cycle has been found lasting more than 100 steps, application of the polytope algorithm continues for an additional 750 cycles; otherwise we consider that the initial polytope was not proper and a random restart takes place. A random restart is also performed when, after the additional 750 function evaluations, the algorithm has not converged, i.e., a cycle has not been encountered lasting for more than 120000 steps (this maximum value is suggested in Ref. ). In the experiments presented in this article a maximum of 15 restarts was allowed. The strategy was considered unsuccessfully terminated if 15 unsuccessful restarts were performed or the total number of function evaluations exceeded 15000. The above strategy was implemented using the MCL programming language that is part of the MERLIN optimization environment. The initial weight values at each restart were randomly selected in the range $(-0.5,0.5)$. Experiments were also conducted with the ranges $(-1.0,1.0)$ and $(-2.0,2.0)$, and the obtained results were similar, showing that the method exhibits robustness as far as the initial weights are concerned. For comparison purposes the action network also had the same architecture as that reported in Ref. : a multilayer perceptron with four input units (accepting the system state), one hidden layer with five sigmoid units and one sigmoid unit in the output layer. There are also direct connections from the input units to the output unit. The specification of the applied force from the output value $y\in(0,1)$ was performed in the following way. At the first ten steps of each cycle the specification was probabilistic, i.e., $F=10$ N with probability equal to $y$ (and $F=-10$ N otherwise). At the remaining steps the specification was deterministic, i.e., if $y>0.5$ then $F=10$ N, otherwise $F=-10$ N. In this way, a degree of randomness is introduced in the function evaluation process that assists in escaping from plateaus and shallow local minima. Experiments have been conducted to assess the performance of the proposed training method both in terms of training speed and generalization capabilities. For comparison purposes we have also implemented the AHC approach, while experimental results concerning the genetic reinforcement approach on the same problem, using the same motion equations and the same network architecture, are reported in Ref. . Training speed is measured in terms of the number of cycles (function evaluations) required to achieve a successful cycle.
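The restart strategy just described can be summarized schematically as follows (a sketch, not the MCL implementation; `evaluate_cycle`, which runs one simulated cart-pole cycle and returns its length, and `build_initial_simplex`, the line-search initialization of Section 2.1, are assumed helpers, and `polytope_step` is the update sketched earlier; for simplicity one function evaluation per step is counted, although a step may make more).

```python
import numpy as np

MAX_RESTARTS, MAX_EVALS = 15, 15000      # limits quoted in the text
PROBE, EXTRA = 100, 750                  # evaluations before/after the 100-step test
PROMISING, SUCCESS = 100, 120000         # cycle lengths (time steps)

def train(n_weights, evaluate_cycle, rng=np.random.default_rng()):
    """Polytope training with random restarts (schematic).
    We minimize f(w) = -cycle_length(w)."""
    def f(w):
        return -float(evaluate_cycle(w))
    evals = 0
    for _ in range(MAX_RESTARTS):
        w0 = rng.uniform(-0.5, 0.5, n_weights)           # random initial vertex
        vertices, fvals = build_initial_simplex(w0, f)   # assumed helper
        for budget in (PROBE, EXTRA):
            for _ in range(budget):
                vertices, fvals = polytope_step(vertices, fvals, f)
                evals += 1
                if -fvals.min() > SUCCESS:               # balanced long enough
                    return vertices[np.argmin(fvals)]
                if evals >= MAX_EVALS:
                    return None                          # strategy failed
            if -fvals.min() <= PROMISING:                # no promising cycle yet
                break                                    # -> random restart
    return None
```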
A series of 50 experiments was conducted using each method, with each cycle starting from random initial state variables. The obtained results are summarized in Table 1, along with results from Ref. concerning the genetic reinforcement case with a population of 100 networks (GA-100), which exhibited the best generalization performance. In accordance with previously published results, the AHC method does not manage to find a solution in 14 of the 50 experiments (28%), so the displayed results concern values obtained considering only the successful experiments. On the contrary, the proposed training strategy was successful in all the experiments and exhibited significantly better performance than the AHC approach in terms of the required training cycles. From the displayed results it is also clear that the polytope method outperforms the genetic approach, which is in turn better than the AHC method. Moreover, we have tested the generalization performance of the obtained action networks. These experiments are useful since a successful cycle starting from an arbitrary initial position does not necessarily imply that the system will exhibit acceptable performance when started with different initial state vectors. The generalization experiments were conducted following the guidelines suggested in Ref. : for each action network obtained in each of the 50 experiments, using either the polytope method or the AHC method, a series of 5000 tests was performed from random initial states, and we counted the percentage of the tests in which the network was able to balance the pole for more than 1000 time steps. The same failure criteria that were used for training were also used for testing. Table 2 displays average results obtained by testing the action networks obtained using the polytope and the AHC method (in the case of successful training experiments). Moreover, it provides the generalization results reported in Ref. concerning the GA-100 algorithm, using the same testing criteria. As the results indicate, the action networks obtained by all methods exhibit comparable generalization performance. As noted in Ref. , it is possible to increase the generalization performance by considering stricter stopping criteria for the training algorithm. It must also be noted that, as far as the polytope method is concerned, there was no connection between training time and generalization performance, i.e., the networks that resulted from longer training times did not necessarily exhibit better generalization capabilities. From the above results it is clear that direct approaches to delayed reinforcement learning problems constitute a serious alternative to the more widely studied critic-based approaches. While critic-based approaches rest on an elegant formulation in terms of temporal differences and stochastic dynamic programming, direct approaches owe their success to the power of the optimization schemes they employ. Such an optimization scheme, based on the polytope algorithm with random restarts, has been presented in this work and proved to be very successful in dealing with the pole balancing problem. Future work will focus on the employment of different kinds of action models (for example, RBF networks) as well as on the exploration of other derivative-free optimization schemes. One of us (I. E. L.) acknowledges partial support from the General Secretariat of Research and Technology under contract PENED 91 ED 959.
# Gamma Ray Burst Afterglows and their Implications

## 1 Introduction: Simple “Standard” Afterglows

One can understand the dynamics of the afterglows of GRB in a fairly simple manner, independently of any uncertainties about the progenitor systems, using a relativistic generalization of the method used to model supernova remnants. The simplest hypothesis is that the afterglow is due to a relativistic expanding blast wave, which decelerates as time goes on (Mészáros & Rees 1997a; earlier simplified discussions were given by Katz 1994b, Paczyński & Rhoads 1993, Rees & Mészáros 1992). The complex time structure of some bursts suggests that the central trigger may continue for up to 100 seconds. However, at much later times all memory of the initial time structure would be lost: essentially all that matters is how much energy and momentum has been injected; the injection can be regarded as instantaneous in the context of the much longer afterglow. Detailed calculations and predictions from such a model (Mészáros & Rees 1997a) preceded the observations of the first afterglow detected, GRB 970228 (Costa et al. 1997; van Paradijs et al. 1997). The simplest spherical afterglow model produces a three-segment power law spectrum with two breaks. At low frequencies there is a steeply rising synchrotron self-absorbed spectrum up to a self-absorption break $\nu_a$, followed by a $+1/3$ energy index spectrum up to the synchrotron break $\nu_m$ corresponding to the minimum energy $\gamma_m$ of the power-law accelerated electrons, and then a $-(p-1)/2$ energy spectrum above this break, for electrons in the adiabatic regime (where $\gamma^{-p}$ is the electron energy distribution above $\gamma_m$). A fourth segment and a third break are expected at energies where the electron cooling time becomes short compared to the expansion time, with a spectral slope $-p/2$ above that. With this third “cooling” break $\nu_b$, first calculated in Mészáros, Rees & Wijers (1998) and more explicitly detailed in Sari, Piran & Narayan (1998), one has what has come to be called the simple “standard” model of GRB afterglows. This assumes spherical symmetry (also valid for a jet whose opening angle $\theta_j\gtrsim\Gamma^{-1}$). As the remnant expands the photon spectrum moves to lower frequencies, and the flux in a given band decays as a power law in time, whose index can change as breaks move through it. The standard model assumes an impulsive energy input lasting much less than the observed $\gamma$-ray pulse, characterized by a single energy and bulk Lorentz factor value (delta or top-hat function). Estimates for the time needed for the expansion to become non-relativistic could then be $\lesssim$ a month (Vietri 1997a), especially if there is an initial radiative regime $\Gamma\propto r^{-3}$. However, even when electron radiative times are shorter than the expansion time, it is unclear whether such a regime occurs, as it would require strong electron-proton coupling (Mészáros, Rees & Wijers 1998). The standard spherical model can be straightforwardly generalized to the case where the energy is assumed to be channeled initially into a solid angle $\Omega_j<4\pi$. In this case (Rhoads 1997a, 1997b) there is a faster decay of $\Gamma$ after sideways expansion sets in, and a decrease in the brightness is expected after the edges of the jet become visible, when $\Gamma$ drops below $\Omega_j^{-1/2}$.
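For reference, the scaling laws in question are the standard ones for an impulsive, adiabatic, spherical blast wave expanding into a uniform external medium (e.g. Mészáros & Rees 1997a; Sari, Piran & Narayan 1998):

$$\Gamma\propto t^{-3/8},\qquad r\propto t^{1/4},\qquad \nu_m\propto t^{-3/2},\qquad F_\nu\propto t^{-3(p-1)/4}\ \ (\nu_m<\nu<\nu_b),$$

with $t$ the observer-frame time; the steepening discussed next follows from inserting the faster post-jet-break decay of $\Gamma$ into these relations.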
A calculation using the usual scaling laws for a single central line of sight then leads to a steepening of the light curve. The simple standard model has been remarkably successful at explaining the gross features of GRB 970228, GRB 970508, etc. (Wijers, Rees & Mészáros 1997, Tavani 1997, Waxman 1997, Reichart 1997). Spectra at different wavebands and times have been extrapolated according to the simple standard model time dependence to get spectral snapshots at a fixed time (Waxman 1997, Wijers & Galama 1998), allowing fits for the different physical parameters of the burst and environment, e.g. the total energy $E$, the magnetic and electron-proton coupling parameters $\epsilon_B$ and $\epsilon_e$, and the external density $n_o$. In GRB 971214 (Ramaprakash et al. 1997), a similar analysis and the lack of a break in the late light curve could be interpreted as indicating that the burst (including its early gamma-ray stage) was isotropic, leading to an (isotropic) energy estimate of $10^{53.5}$ ergs. Such large energy outputs, whether beamed or not, are quite possible in either NS-NS or NS-BH mergers (Mészáros & Rees 1997b) or in hypernova/collapsar models (Paczyński 1998, Popham et al. 1998), using MHD extraction of the spin energy of a disrupted torus and/or a central fast-spinning BH. However, it is worth stressing that what these snapshot fits constrain is only the energy per solid angle (Mészáros, Rees & Wijers 1998b). The expectation of a break after only some weeks or months (e.g., due to $\Gamma$ dropping either below a few, or below $\Omega_j^{-1/2}$) is based upon the simple impulsive (angle-independent delta or top-hat function) energy input approximation. The latter is useful, but departures from it would be quite natural, and certainly not surprising. As discussed below, it would be premature to conclude at present that there are any significant constraints on the anisotropy of the outflow.

## 2 “Post-standard” Afterglow Models

In a realistic situation, one could expect any of several fairly natural departures from the simple standard model to occur. The first one is that departures from a delta top-hat approximation (e.g. having more energy emitted with lower Lorentz factors at later times, still shorter than the gamma-ray pulse duration) would drastically extend the afterglow lifetime in the relativistic regime, by providing a late “energy refreshment” to the blast wave on time scales comparable to the afterglow time scale (Rees & Mészáros 1998). The transition to the $\Gamma<\theta_j^{-1}$ regime, occurring at $\Gamma\sim$ a few, could then occur as late as six months to more than a year after the outburst, depending on details of the brief energy input. Another important effect is that the emitting region seen by the observer resembles a ring (Waxman 1997b, Panaitescu & Mészáros 1998b, Sari 1998). A numerical integration over angles (Panaitescu & Mészáros 1998d) shows that the sideways expansion effects are not so drastic as inferred from the scaling laws for the material along the central-angle line of sight. This is because even though the flux from the head-on part of the remnant decreases faster, this is more than compensated by the increased emission measure from sweeping up external matter over a larger angle, and by the fact that the extra radiation, arising at larger angles, arrives later and re-fills the steeper light curve.
Thus, the sideways expansion (even for a simple impulsive injection) actually mitigates the flux decay, rather than accelerating it. Combined with the possibility of an extended relativistic phase due to nonuniform injection, and the fact that numerical angle integrations show that any steepening would occur over factors of 2–3 in time, one must conclude that we do not yet have significant evidence for whether the outflow is jet-like or not. One expects afterglows to show a significant amount of diversity. This is expected both because of a possible spread in the total energies (or energies per solid angle as seen by a given observer), a possible spread or changes in the injected bulk Lorentz factors, and also from the fact that GRB may be going off in very different environments. The angular dependence of the outflow and the radial dependence of the density of the external environment can have a marked effect on the time dependence of the observable afterglow quantities (Mészáros, Rees & Wijers 1998). So do any changes of the bulk Lorentz factor and energy output during even a brief energy release episode (Rees & Mészáros 1998). Strong evidence for departures from the simple standard model is provided by, e.g., sharp rises or humps in the light curves followed by a renewed decay, as in GRB 970508 (Pedersen et al. 1998; Piro et al. 1998). Detailed time-dependent model fits (Panaitescu, Mészáros & Rees 1998) to the X-ray, optical and radio light curves of GRB 970228 and GRB 970508 show that, in order to explain the humps, a non-uniform injection or an anisotropic outflow is required. These fits indicate that the shock physics may be a function of the shock strength (e.g. the electron index $p$, the injection fraction $\zeta$ and/or $\epsilon_b$, $\epsilon_e$ change in time), and also indicate that dust absorption is needed to simultaneously fit the X-ray and optical fluxes. The effects of beaming (outflow within a limited range of solid angles) can be significant (Panaitescu & Mészáros 1998c), but are coupled with other effects, and a careful analysis is needed to disentangle them. Spectral signatures, such as atomic edges and lines, may be expected both from the outflowing ejecta (Mészáros & Rees 1998a) and from the external medium (Perna & Loeb 1998, Mészáros & Rees 1998b, Bisnovatyi-Kogan & Timokhin 1997) in the X-ray and optical spectra of afterglows. These may be used as diagnostics of the outflow Lorentz factor, or as alternative measures of the GRB redshift. An interesting prediction (Mészáros & Rees 1998b; see also Ghisellini et al. 1998; Böttcher et al. 1998) is that the presence of a measurable Fe K-$\alpha$ emission line could be a diagnostic of a hypernova, since in this case one can expect a massive envelope at a radius comparable to a light-day where $\tau_T\lesssim 1$, capable of reprocessing the X-ray continuum by recombination and fluorescence. The location of the afterglow relative to the host galaxy center can provide clues both to the nature of the progenitor and to the external density encountered by the fireball. A hypernova model would be expected to occur inside a galaxy, in fact inside a high-density ($n_o\sim 10^3$–$10^5$ cm$^{-3}$) environment. Some bursts are definitely inside the projected image of the host galaxy, and some also show evidence for a dense medium at least in front of the afterglow (Owen et al. 1998). On the other hand, for a number of bursts there are strong constraints from the lack of a detectable, even faint, host galaxy (Schaefer 1998).
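To set the scale for the offsets discussed next, a simple kinematic estimate suffices (the systemic velocity adopted here is an illustrative assumption, not a fitted value): a binary recoiling at $v\sim 100\ \mathrm{km\,s^{-1}}$ and merging after $t_{\rm merge}\sim 10^8$ yr travels

$$d\sim v\,t_{\rm merge}\approx 10\ \left(\frac{v}{100\ \mathrm{km\,s^{-1}}}\right)\left(\frac{t_{\rm merge}}{10^8\ \mathrm{yr}}\right)\ \mathrm{kpc},$$

before the binding effect of the host galaxy's gravitational potential, which pulls the slower systems back, is taken into account.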
In NS-NS mergers one would expect a BH plus debris torus system and roughly the same total energy as in a hypernova model, but the mean distance traveled from birth is of order several kpc (Bloom, Sigurdsson & Pols 1998), leading to a burst presumably in a less dense environment. The fits of Wijers & Galama (1998) to the observational data on GRB 970508 and GRB 971214 in fact suggest external densities in the range $n_o=$ 0.04–0.4 cm$^{-3}$, which would be more typical of a tenuous interstellar medium (however, Reichart & Lamb (1998) report a fit for GRB 980329 with $n_o\sim 10^4$ cm$^{-3}$). These could arise within the volume of the galaxy, but on average one would expect as many GRB inside as outside. This is based on an estimate of the mean NS-NS merger time of $10^8$ years; other estimated merger times (e.g. $10^7$ years, van den Heuvel 1992) would give a burst much closer to the birth site. BH-NS mergers would also occur on timescales $\lesssim 10^7$ years, and would be expected to give bursts well inside the host galaxy (Bloom, Sigurdsson & Pols 1998).

## 3 Conclusions

The blast wave model of gamma-ray burst afterglows has proved quite robust in providing a consistent overall interpretation of the major features of these objects at various frequencies. The “standard model” of afterglows, involving four spectral slopes and three breaks, is quite useful in understanding ‘snapshot’ multiwavelength spectra of afterglows. However, the constraints on the angle-integrated energy, especially at $\gamma$-ray energies, are not strong, and beaming effects remain uncertain. Some caution is required in interpreting the observations on the basis of the simple standard model. For instance, if one integrates the flux over all angles visible to the observer, the contributions from different angles lead to a considerable rounding-off of the spectral shoulders, so that breaks cannot be easily located unless the spectral sampling is dense and continuous, both in frequency and in time. Some of the observed light curves with humps, e.g. in GRB 970508, require ‘post-standard’ model features (i.e. beyond those assumed in the standard model), such as either non-uniform injection episodes or anisotropic outflows. Time-dependent multiwavelength fits of this and other bursts also seem to indicate that the parameters characterizing the shock physics change with time. A relatively brief (1–100 s), probably modulated energy input appears the likeliest interpretation for most bursts. This can provide an explanation both for the highly variable $\gamma$-ray light curves and for late glitches in the afterglow decays. There has been significant progress in understanding how gamma-ray bursts can arise in fireballs produced by brief events depositing a large amount of energy in a small volume, and in deriving the generic properties of the ensuing long-wavelength afterglows. There still remain a number of mysteries, especially concerning the identity of the progenitors, the nature of the triggering mechanism, the transport of the energy and the time scales involved. However, independently of the details of the gamma-ray burst central engine, even if beaming reduces their total energy requirements, these objects are the most extreme phenomena that we know about in high energy astrophysics, and may provide useful beacons for probing the universe at $z\gtrsim 5$.
With new experiments coming on-line in the near future, there is every prospect for continued and vigorous development in both the observational and the theoretical understanding of these fascinating objects.

###### Acknowledgements.

I am grateful to Martin Rees for stimulating collaborations on this subject, as well as to Ralph Wijers, Hara Papathanassiou and Alin Panaitescu. This research is supported in part by NASA NAG5-2857.
## 1 Introduction

After four decades of research the transport properties of systems in which both disorder and strong interactions are equally important are still not even qualitatively understood. Based on Anderson's seminal paper, investigations of non-interacting disordered electrons led to the development of the scaling theory of localization. In the absence of external symmetry breaking it predicts that all states are localized in one and two dimensions. In contrast, in three dimensions states are extended for weak disorder while they are localized for sufficiently strong disorder. This gives rise to a disorder-driven metal-insulator transition (MIT) at a certain value of the disorder strength. Later, the influence of electron-electron interactions on the transport properties of disordered electrons was also investigated intensively by means of many-body perturbation theory, field theory, and the renormalization group. This led to a qualitative analysis of the MIT and the identification of the different universality classes. The topic has attracted renewed attention after unexpected experimental and theoretical findings. In order to investigate the localized phase and to check the validity of the perturbative results it seems to be important to study the problem of interacting disordered electrons non-perturbatively. One possible route is numerical simulation, although it is very costly for disordered many-body systems. Recently, we have developed the Hartree-Fock based diagonalization (HFD) method for the simulation of disordered quantum many-particle systems. We have already applied this very efficient method to calculating the transport properties of one-dimensional and two-dimensional disordered interacting electrons. In this paper we extend the investigation to three spatial dimensions.

## 2 Model and calculations

The model, called the quantum Coulomb glass model, is defined on a cubic lattice of $L^3$ sites occupied by $N=KL^3$ electrons ($0<K<1$). To ensure charge neutrality each lattice site carries a compensating positive charge of $Ke$. The Hamiltonian is given by
$$H=-t\sum_{\langle ij\rangle}(c_i^\dagger c_j+c_j^\dagger c_i)+\sum_i\phi_i n_i+\frac{1}{2}\sum_{i\ne j}(n_i-K)(n_j-K)U_{ij}$$ (1)
where $c_i^\dagger$ and $c_i$ are the electron creation and annihilation operators at site $i$, respectively, and $\langle ij\rangle$ denotes all pairs of nearest-neighbor sites. $t$ gives the strength of the hopping term and $n_i$ is the occupation number of site $i$. For a correct description of the insulating phase the Coulomb interaction between the electrons is kept long-ranged, $U_{ij}=U/r_{ij}$ ($r_{ij}$ is measured in units of the lattice constant), since screening breaks down in the insulator. The random potential values $\phi_i$ are chosen independently from a box distribution of width $2W$ and zero mean. (In the following we always set $W=1$.) Two important limiting cases of the quantum Coulomb glass are the Anderson model of localization (for $U_{ij}=0$) and the classical Coulomb glass (for $t=0$). For the present study we have simulated systems with $3^3$ sites and 13 electrons using periodic boundary conditions. The calculations are carried out by means of the HFD method. This method, which is based on the idea of the configuration interaction approach adapted to disordered lattice models, is very efficient in calculating low-energy properties in any spatial dimension and for short-range as well as long-range interactions.
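As a concrete illustration of these ingredients (a minimal sketch, assuming nothing beyond Eq. (1); this is not the code used for the calculations reported here), the disorder potential, the interaction matrix and the hopping bonds on an $L^3$ periodic lattice can be set up as follows:

```python
import numpy as np
from itertools import product

def coulomb_glass(L=3, K=13/27, W=1.0, U=1.0, rng=np.random.default_rng(0)):
    """Ingredients of Eq. (1) on an L^3 periodic lattice:
    site energies phi_i from a box distribution of width 2W and zero mean,
    long-range interactions U_ij = U / r_ij (minimum-image distances),
    and the nearest-neighbour bonds entering the hopping term."""
    sites = np.array(list(product(range(L), repeat=3)))
    n_sites, n_electrons = len(sites), round(K * L**3)   # here: 27 sites, 13 electrons
    phi = rng.uniform(-W, W, n_sites)
    d = np.abs(sites[:, None, :] - sites[None, :, :])
    d = np.minimum(d, L - d)                             # periodic boundary conditions
    r = np.sqrt((d**2).sum(axis=-1))
    Uij = np.where(r > 0.0, U / np.maximum(r, 1e-12), 0.0)
    bonds = [(i, j) for i in range(n_sites) for j in range(i + 1, n_sites)
             if np.isclose(r[i, j], 1.0)]                # nearest neighbours
    return phi, Uij, bonds, n_electrons
```

The HFD calculation itself then operates on the many-particle states built from these one-particle ingredients, as described next.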
The HFD method consists of three steps: (i) solve the Hartree-Fock (HF) approximation of the Hamiltonian, (ii) use a Monte-Carlo algorithm to find the low-energy many-particle HF states, (iii) diagonalize the Hamiltonian in the basis formed by these states. The efficiency of the HFD method is due to the fact that the HF basis states are comparatively close in character to the exact eigenstates in the entire parameter space. Thus it is sufficient to keep only a small fraction of the Hilbert space in order to obtain low-energy quantities with an accuracy comparable to that of exact diagonalization. For the present systems of 13 electrons on 27 sites we found it sufficient to keep 500 to 1000 (out of $2\times 10^8$) basis states. In order to calculate the conductance we employ the Kubo-Greenwood formula, which connects the conductance with the current-current correlation function in the ground state. Using the spectral representation of the correlation function, the real (dissipative) part of the conductance (in units of the conductance quantum $e^2/h$) is obtained as
$$\mathrm{Re}\,G^{xx}(\omega)=\frac{2\pi^2 L}{\omega}\sum_\nu|\langle 0|j^x|\nu\rangle|^2\,\delta(\omega+E_0-E_\nu)$$ (2)
where $j^x$ is the $x$ component of the current operator and $\nu$ denotes the eigenstates of the Hamiltonian. The finite lifetime $\tau$ of the eigenstates in a real d.c. transport experiment (where the system is not isolated but connected to contacts and leads) results in an inhomogeneous broadening $\gamma=1/\tau$ of the $\delta$ functions in the Kubo-Greenwood formula. Here we use $\gamma=0.05$, which is of the order of the single-particle level spacing. Since the resulting conductances vary strongly from sample to sample we logarithmically average all results over several disorder configurations.

## 3 Results

We now present results on the dependence of the conductance on the interaction strength. In Fig. 1 we show the conductance as a function of frequency for two sets of parameters. The data represent logarithmic averages over 400 disorder configurations. In the lower diagram the kinetic energy is very small ($t=0.03$), i.e., the system is in the highly localized regime. Here not too strong Coulomb interactions ($U=0.5,1.0$) lead to an increase of the conductance at low frequencies. If the interaction becomes stronger ($U=2$) the conductance finally decreases again. The behavior is qualitatively different at higher kinetic energy ($t=0.3$), as shown in the upper diagram. Here already a weak interaction ($U=0.5$) leads to a reduction of the low-frequency conductance compared to non-interacting electrons. If the interaction becomes stronger ($U=2$) the conductance decreases further. We have carried out analogous calculations for kinetic energies $t=0.01,\ldots,0.5$ and interaction strengths $U=0,\ldots,2$. The resulting d.c. conductances $G(0)$ are presented in Fig. 2, which contains the main result of this paper. The data show that the influence of weak repulsive electron-electron interactions on the d.c. conductance depends on the ratio between disorder and kinetic energy. The conductance of strongly localized samples ($t=0.01,\ldots,0.03$) becomes considerably enhanced by a weak Coulomb interaction, which can be understood from the suppression of localizing interferences by electron-electron scattering.
With increasing kinetic energy the relative enhancement decreases, as does the range of interaction strengths over which the enhancement occurs. The conductance of samples with high kinetic energies ($`t\ge 0.1`$) is not enhanced by weak interactions. Sufficiently strong interactions always reduce the conductance. In this regime the main effect of the interactions is the reduction of charge fluctuations, which reduces the conductance. In the limit of infinite interaction strength the system approaches a Wigner crystal/glass which is insulating for any disorder. Overall, this is the same qualitative behavior as in the cases of one and two spatial dimensions, although the interaction-induced enhancement of the conductance seems to be weaker in three dimensions. Up to now it is not clear whether this is a true dimensionality effect or due to the different linear system sizes studied. A systematic investigation of the size dependence is in progress. To summarize, we have used the Hartree-Fock based diagonalization (HFD) method to investigate the conductance and localization properties of disordered interacting spinless electrons in three dimensions. We have found that a weak Coulomb interaction can enhance the conductance of strongly localized samples by almost one order of magnitude, while it reduces the conductance of weakly disordered samples. If the interaction becomes stronger it eventually reduces the conductance also in the strongly localized regime. We acknowledge financial support by the Deutsche Forschungsgemeinschaft.
# Substructure in the ENACS clusters ## 1 Introduction In the last two decades considerable attention has been focused on the study of substructure within rich clusters of galaxies. The importance of subclustering lies in the information it conveys on the properties and dynamics of these systems, which has important implications for theories of structure formation. A number of authors have developed and applied a variety of methods to evaluate the clumpiness of galaxy clusters, both in the optical and X-ray domains (e.g. Geller & Beers 1982; Fitchett & Webster 1987; West et al. 1988; Dressler & Shectman 1988a, hereafter DS88; West & Bothun 1990; Rhee et al. 1991; Jones & Forman 1992; Mohr et al. 1993; Salvador-Solé et al. 1993a; Bird 1994; Escalera et al. 1994; Serna & Gerbal 1996; Girardi et al. 1997; Gurzadyan & Mazure 1998). Consensus on the results, however, has frequently been hindered by differences in the definition of substructure adopted, in the methodology applied, in the scale used to examine the spatial distribution of the galaxies, and even in the levels of significance chosen to discriminate between real structure and statistical fluctuations. The debate on the existence of substructure in clusters has also been fueled by the lack of adequate cluster samples with which to address the problem. Optical datasets (we will not discuss X-ray data here) which combine both positional and velocity information are essential to determine cluster membership unambiguously and, hence, to eliminate projection uncertainties in the evaluation of subclustering. On the other hand, meaningful estimates of the amount of substructure within rich clusters of galaxies require large catalogs of these systems, free from sampling biases and representative of the total population. Fortunately, a great deal of progress is now being made in this direction thanks to the rapid development of multi-object spectroscopy, which has made possible the emergence of extensive redshift surveys of galaxies in clusters (e.g. Dressler & Shectman 1988b; Teague et al. 1990; Zabludoff et al. 1990; Beers et al. 1991; Malumuth et al. 1992; Yee et al. 1996). The recently compiled ESO Nearby Abell Cluster Survey (ENACS) catalog (Katgert et al. 1996, 1998) is the result of the latest and, by far, most extensive multi-object spectroscopic survey of nearby rich clusters of galaxies. The survey was specifically designed to provide good kinematical data for the construction, in combination with literature data, of a large statistically complete volume-limited sample of rich ACO (Abell, Corwin, & Olowin 1989) clusters in a region of the sky around the South Galactic Pole (Mazure et al. 1996). The catalog contains positions, isophotal (red) R-magnitudes within the 25 mag arcsec<sup>-2</sup> isophote, and redshifts of more than 5600 galaxies in the directions of 107 southern ACO clusters with richness $`R_{\mathrm{ACO}}\ge 1`$ and mean redshifts $`z\le 0.1`$. More importantly, numerous ENACS systems offer the possibility of extracting extended magnitude-limited galaxy samples with a good level of completeness, which is essential for many aspects of the study of the properties of rich clusters, in particular for detecting substructure. In this paper, we investigate substructure in a large subset of the ENACS cluster catalog formed by 67 well-sampled systems. 
Previous studies of subclustering in cluster samples of comparable size have relied on matching separate datasets and thus could not attain a high degree of homogeneity. We apply to our clusters a variety of well-known and complementary statistical tests for substructure, which analyze information from the projected positions of the galaxies and/or their radial velocities. Our aim is to evaluate the fractions of clumpy ENACS systems implied by the different techniques and to compare them with results from former studies relying on the same substructure diagnostics. We begin by discussing in Sect. 2 the selection of our cluster sample. Subclustering is investigated in Sect. 3 by means of three powerful classical tests which examine the velocity dimension of the cluster data. The moment-based coefficients of skewness and kurtosis are used to detect deviations from Gaussianity in the velocity distributions, which are often correlated with the presence of substructure in galaxy clusters. We also apply the 3D diagnostic for substructure defined in DS88, known as the $`\mathrm{\Delta }`$ test, to search for localized spatial-velocity correlations. These statistics are complemented in Sect. 4 by the two-point correlation formalism (Salvador-Solé et al. 1993b; hereafter Sa93), which is used to look for signs of small-scale subclustering in the two-dimensional galaxy distributions. Section 5 contains a summary and discussion of our results. ## 2 The cluster sample A total of 220 compact redshift systems with at least 4 member galaxies and redshifts up to $`z\simeq 0.1`$ have been identified in the ENACS catalog by Katgert et al. (1996; see their Table 6). These systems were defined by grouping all the galaxies separated by a gap of less than 1000 $`\text{km s}^{-1}`$ from their nearest neighbor in velocity space along the directions of the clusters targeted in the course of the project. Membership for the systems with at least 50 galaxies in the original compilation underwent further refinement through the removal of interlopers (i.e. galaxies that are unlikely system members but that were not excluded by the 1000 $`\text{km s}^{-1}`$ fixed-gap criterion) by means of an iterative procedure that relies on an estimate of the mass profile of the system (see Mazure et al. 1996 for details). The completeness (number of redshifts obtained vs number of galaxies observed) of the ENACS data varies from one sample to another and as a function of apparent magnitude. Katgert et al. (1998) show that the completeness of the entire catalog reaches a maximum of about 80% at intermediate magnitudes and stays approximately constant up to $`R_{25}=17`$. Most of the ENACS clusters indeed have their maximum completeness (which oscillates between 60% and 90%) at about this limit (Katgert et al. 1996). At the bright end, the completeness decreases slightly due to the low central surface brightness of some of the brightest galaxies with sizes larger than the diameter of the Optopus fibers, while for $`R_{25}>17`$ it falls abruptly due to the smaller S/N-ratio of the spectra of the fainter galaxies. According to these results, and in order to deal with galaxy samples with the maximum level of completeness, we have removed from the ENACS systems all the galaxies with an $`R_{25}`$ magnitude larger than 17. Furthermore, to obtain minimally robust results we have excluded from the present analysis those systems with fewer than 20 galaxies left after the trimming in apparent brightness. 
These restrictions lead to a final cluster dataset of 67 compact redshift systems with a good level of completeness in magnitude and containing a minimum of 20 member galaxies each. All but one (Abell 3559) of the 29 clusters for which several Optopus plates were taken (within each plate spectroscopy was attempted only for the 50 brightest galaxies) are included in our cluster sample. These “multiple-plate” clusters identify the richest and most compact (in redshift space) systems surveyed. One of these, the “double” cluster Abell 548, has been separated into its SW and NE components (see e.g. Davis et al. 1995), hereafter referred to as A0548W and A0548E, respectively. Our database also includes 3 large secondary systems seen in projection in the fields of two of the 29 multiple-plate clusters: the systems in the foreground and in the background of Abell 151, designated here as A0151F and A0151B, respectively, and the background galaxy concentration seen in the field of Abell 2819, designated here as A2819B. The remaining 35 systems are “single-plate” clusters for which a unique Optopus field was defined (they all have, then, $`N\le 50`$). These systems are identified in tables and figures by an asterisk. Detailed information about each one of the systems selected, including robust estimates of their main physical properties, can be found along the series of ENACS papers, especially in the articles cited in this section. ## 3 Substructure diagnostics relying on velocity data ### 3.1 Description of the tests To detect deviations from Gaussianity in the clusters' velocity distributions, we use the classical coefficients of skewness and kurtosis, which have been shown to offer greater sensitivity than other techniques based on the order statistics or the gaps of the datasets (see e.g. Bird & Beers 1993). The coefficient of skewness, which is the third moment about the mean, measures the asymmetry of the distribution. It is computed as $$S=\frac{1}{\sigma ^3}\left[\frac{1}{N}\sum_{i=1}^{N}(v_i-\overline{v})^3\right],$$ (1) with $`\overline{v}`$ and $`\sigma `$ the mean velocity and standard deviation determined from the observed line-of-sight velocities $`v_i`$ of the $`N`$ cluster members. A positive (negative) value of $`S`$ implies that the distribution is skewed toward values greater (less) than the mean. The kurtosis is the fourth moment about the mean and measures the relative population of the tails of the distribution compared to its central region. Since the kurtosis of a normal distribution is expected to be equal to 3, the kurtosis coefficient is usually defined so as to be neutrally elongated for a Gaussian, in the form $$K=\frac{1}{\sigma ^4}\left[\frac{1}{N}\sum_{i=1}^{N}(v_i-\overline{v})^4\right]-3.$$ (2) Positive values of $`K`$ indicate distributions peakier than Gaussian and/or with heavier tails, while negative values reflect boxy distributions that are flatter than Gaussian and/or with lighter tails. The significance of the empirical values of the above two coefficients is simply given by the probability that they arise by chance in a normal distribution. Together with the above normality tests, we also apply the $`\mathrm{\Delta }`$ test of DS88, which is a simple and powerful 3D substructure diagnostic designed to look for local correlations between galaxy positions and velocities that differ significantly from the overall distribution within the cluster. 
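A compact implementation of Eqs. (1) and (2), together with a Monte-Carlo estimate of the significance under the Gaussian null hypothesis, might look as follows (our sketch; the significance levels could equally be obtained from analytic formulae):

```python
import numpy as np

def skew_kurt(v):
    """Sample skewness S and excess kurtosis K of the line-of-sight
    velocities, following Eqs. (1) and (2)."""
    v = np.asarray(v, float)
    d = v - v.mean()
    sigma = np.sqrt(np.mean(d**2))
    return np.mean(d**3) / sigma**3, np.mean(d**4) / sigma**4 - 3.0

def gaussian_pvalues(v, n_mc=10000, seed=0):
    """Two-sided Monte-Carlo probabilities of obtaining |S| and |K| at
    least as large as observed from a normal parent of the same size N."""
    rng = np.random.default_rng(seed)
    S0, K0 = skew_kurt(v)
    sims = np.array([skew_kurt(rng.normal(size=len(v))) for _ in range(n_mc)])
    return (np.mean(np.abs(sims[:, 0]) >= abs(S0)),
            np.mean(np.abs(sims[:, 1]) >= abs(K0)))
```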
The $`\mathrm{\Delta }`$ test is based on the comparison of a local estimate of the velocity mean $`\overline{v}_\mathrm{l}`$ and dispersion $`\sigma _\mathrm{l}`$ for each galaxy with measured radial velocity, with the values of these same kinematical parameters for the entire sample. The presence of substructure is quantified by means of a single statistic defined from the sum of the local kinematic deviations $`\delta _i`$ over the $`N`$ cluster members, in the form (Bird 1994) $$\mathrm{\Delta }=\sum_{i=1}^{N}\delta _i=\sum_{i=1}^{N}\left[\frac{\mathrm{nint}(\sqrt{N})+1}{\sigma ^2}\left((\overline{v}_{\mathrm{l},i}-\overline{v})^2+(\sigma _{\mathrm{l},i}-\sigma )^2\right)\right]^{\frac{1}{2}},$$ (3) with $`\mathrm{nint}(x)`$ standing for the integer nearest to $`x`$. To avoid the formulation of any hypothesis on the form of the velocity distribution of the parent population, the $`\mathrm{\Delta }`$ statistic is calibrated by means of Monte-Carlo simulations (we perform 1000 per cluster) that randomly shuffle the velocities of the cluster members while keeping their observed positions fixed. In this way any existing local correlation between velocities and positions is destroyed. The probability of the null hypothesis that there are no such correlations is given in terms of the fraction of simulated clusters for which the cumulative deviation is smaller than the observed value. ### 3.2 Results Table 1 summarizes, for each one of the 67 magnitude-limited galaxy samples defined in Sect. 2, the number of galaxies $`N`$ meeting the selection criteria and the probabilities that the empirical values of the three statistics described above could have arisen by chance (the smaller the quoted value the more significant is the departure from the null hypothesis). At the 5% significance level (in this section all results will be referred to this level of significance) about 30% (20 of 67) of the systems exhibit a non-Gaussian velocity distribution according to at least one of the two normality tests. This is a somewhat smaller fraction than found in previous studies by West & Bothun (1990), Bird & Beers (1993), and Bird (1994), in which $`40\%-50\%`$ of the clusters investigated had radial velocity distributions with non-normal values of the skewness and/or kurtosis. The discrepancies, however, are not statistically significant and point to possible biases towards the inclusion of clumpy systems in former cluster datasets (see, for instance, the selection criteria applied by Dressler & Shectman 1988b). Nor do the normality tests detect significant differences between the single- and multiple-plate subsets, which show frequencies of rejection of the Gaussian hypothesis, 26% (9/35) and 34% (11/32) respectively, fully compatible within the statistical uncertainties. On the other hand, 31% (21/67) of our clusters are found to show substructure according to the $`\mathrm{\Delta }`$ test. In a recent investigation of the kinematics and spatial distribution of the Emission-Line Galaxies (ELG) in clusters, Biviano et al. (1997) have applied this same test to the 25 ENACS systems with $`N\ge 50`$ that contain at least one ELG, finding evidence for substructure in $`\sim 40`$% of the cases. As was to be expected, this value is in excellent agreement with the 38% (12/32) of the multiple-plate systems which demonstrate substructure in our dataset. 
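For concreteness, the $`\mathrm{\Delta }`$ statistic and its Monte-Carlo calibration can be sketched as follows (function names are ours; the local group of each galaxy comprises itself and its $`\mathrm{nint}(\sqrt{N})`$ nearest neighbours in projection, as in Eq. (3)):

```python
import numpy as np

def delta_statistic(x, y, v):
    """Dressler-Shectman cumulative deviation, Eq. (3); x, y are the
    projected positions and v the radial velocities of the N members."""
    N = len(v)
    nn = int(round(np.sqrt(N))) + 1       # nint(sqrt(N)) + 1 galaxies
    vbar, sig = v.mean(), v.std()
    delta = 0.0
    for i in range(N):
        d2 = (x - x[i])**2 + (y - y[i])**2
        idx = np.argsort(d2)[:nn]         # galaxy i plus its neighbours
        vl, sl = v[idx].mean(), v[idx].std()
        delta += np.sqrt(nn / sig**2 * ((vl - vbar)**2 + (sl - sig)**2))
    return delta

def delta_null_probability(x, y, v, n_shuffle=1000, seed=0):
    """Calibrate Delta by shuffling velocities over the fixed positions;
    returns the fraction of shuffles with a deviation at least as large
    as observed, i.e. the probability of the null hypothesis."""
    rng = np.random.default_rng(seed)
    d0 = delta_statistic(x, y, v)
    count = sum(delta_statistic(x, y, rng.permutation(v)) >= d0
                for _ in range(n_shuffle))
    return count / n_shuffle
```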
Previous analyses of subclustering also relying on the $`\mathrm{\Delta }`$ statistic by Escalera et al. (1994) and Bird (1994) claim similar percentages of clumpy systems, 38% (6/16) and 44% (11/25) respectively, while the fraction quoted in the original work by DS88 is somewhat higher, 53% (8/15), but still compatible with the other results within the statistical uncertainties. We emphasize, however, that none of the preceding works paid attention to the completeness in magnitude of the galaxy samples under scrutiny. As in the case of the Gaussianity tests, we do not find significant differences between the fractions of substructured single-plate systems (9/35) and multiple-plate ones (12/32) indicated by the $`\mathrm{\Delta }`$ statistic. ## 4 The average two-point correlation function ### 4.1 Definition and practical implementation The average two-point correlation function $`\overline{\xi }`$ (see Sa93 for details) was introduced for the statistical characterization of subclustering in inhomogeneous systems with isotropy around one single point. Given a circularly symmetric galaxy cluster, this statistic can be calculated exactly as the usual two-point correlation function in the homogeneous and isotropic case through the expression $$\overline{\xi }(s)=\frac{(\rho \rho )(s)-(nn)(s)}{(nn)(s)}+\frac{1}{N},$$ (4) with $`\rho `$ some continuous function approximating the observed number density distribution of galaxies, and $`n`$ the mean radial number density profile estimated from the azimuthal average of $`\rho `$. Notice that, contrary to $`\rho `$, $`n`$ is insensitive to the existence of correlation in galaxy positions. The additive constant $`1/N`$ in Eq. (4) corrects for the negative bias caused by the fact that each cluster galaxy chosen at random has only $`N-1`$ neighbors, one less than the number expected for a fully random process. The autocorrelation products $`\rho \rho `$ and $`nn`$ are computed via the sequence of transformations (see also Salvador-Solé et al. 1993a) $$(\rho \rho )(s)=\mathcal{F}_1\circ \mathcal{A}\left[\mathcal{A}^{-1}\circ \mathcal{F}_1^{-1}\left(2\int_s^{\infty }P(x)\,\text{d}x\right)\right],$$ (5) and $$(nn)(s)=\mathcal{F}_1\circ \mathcal{A}\left[\mathcal{A}^{-1}\circ \mathcal{F}_1^{-1}\left(\int_s^{\infty }\mathrm{\Pi }(x)\,\text{d}x\right)\right]^2,$$ (6) which rely, respectively, on the calculation of the cumulative forms of $`P(s)\,\text{d}s`$, the number of pairs of galaxies with observed separation between $`s`$ and $`s+\text{d}s`$ among the $`N(N-1)/2`$ galaxy pairs obtained from the cluster sample, and of $`\mathrm{\Pi }(s)\,\text{d}s`$, the number of galaxies at projected distances between $`s`$ and $`s+\text{d}s`$ from the center of symmetry of the galaxy distribution. In Eqs. (5) and (6) $`\mathcal{F}_1`$ and $`\mathcal{A}`$ stand, respectively, for the one-dimensional Fourier and Abel transformations, while the symbol “$`\circ `$” denotes the composition of functions. From the latter two equations it is readily apparent that the numerical estimate of $`\overline{\xi }`$ is independent of the bin size used for the integrals $`\int_s^{\infty }P(x)\,\text{d}x`$ and $`\int_s^{\infty }\mathrm{\Pi }(x)\,\text{d}x`$, which merely determines the sampling interval of the solution, so there are no lower limits on the size of the subclumps that can be detected (nor are a priori assumptions on their possible number and shapes required). Nonetheless, it is advisable to attenuate the statistical noise of $`\overline{\xi }(s)`$ at galactic scales (Sa93). Thus, we apply a low-pass Hamming filter leading to a final resolution length of 0.05 $`h^{-1}\text{Mpc}`$. 
Notice also that the use of the cumulative forms of the distributions $`P(s)`$ and $`\mathrm{\Pi }(s)`$ makes this statistic particularly well suited for galaxy samples containing a small number of objects. The statistical significance of substructure for each cluster is obtained by checking the null hypothesis that the observed $`\rho (s)`$ arises from a Poissonian realization of an unknown theoretical density profile, which is approximated by $`n(s)`$. In practice, this translates into a comparison of the empirical function given by Eq. (4) with the mean and one standard deviation of the same function obtained from a large number of Poissonian cluster simulations (i.e. both the radius and the azimuthal angle of each galaxy are chosen at random) that reproduce the profile $`n(s)`$. ### 4.2 Results In order to apply this diagnostic, circularly symmetric galaxy subsamples have been extracted from our dataset by means of a three-step procedure. The first step consists in the determination of the system barycenter through an iterative process that uses only those galaxies located inside the maximum circle, around the centroid obtained in the previous iteration, inscribed within the limits of the surveyed field. This procedure mitigates any incompleteness in position caused by the spatial filters used in the data acquisition and, when several structures are present in the same region, tends to focus on the main subsystem. A second iterative process calculates the system ellipticity $`e`$, which is assumed to be homologous, and the orientation $`\theta `$ of its major axis. Analogously to the barycenter determination, galaxies located in incomplete (elliptical) spatial bins around the barycenter are excluded from the calculations. Finally, circular symmetry is ensured by contracting the galaxy coordinates along the semimajor axis by $`\sqrt{e}`$ and expanding the semiminor-axis coordinates by the inverse of this same factor. In this manner, we take into account elongation effects that might artificially indicate clumpiness, while any true signal of subclustering is preserved. Table 2 lists, system by system, the barycenter coordinates, the values of the parameters $`e`$ and $`\theta `$ (relative to the WE direction; Adami et al. 1998 have also inferred these parameters for a number of clusters in this list from Maximum-Likelihood fits to the COSMOS data, obtaining compatible results), the equivalent radius $`r_{\mathrm{eq}}`$ (i.e. the radius of a circle with an area equal to that of the maximum elliptical isopleth contour) in $`h^{-1}\text{Mpc}`$, and the number of galaxies $`N_\mathrm{c}`$ included in the circularly symmetric subsamples. Physical units have been inferred from the cosmological distances of the clusters, which are calculated by correcting their mean heliocentric redshifts to the Cosmic Microwave Background rest frame according to the dipole measured by Kogut et al. (1993). To minimize small-number effects, the calculation of the average two-point correlation function was restricted to the 59 circularly symmetric galaxy subsamples with 15 or more objects. The results are depicted in Fig. 1, together with the mean solutions and $`1\sigma `$ errors resulting from 200 Poissonian realizations of each cluster. 
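The final, circularization step of this procedure amounts to a simple coordinate transformation; a minimal sketch (ours, taking the contraction factor $`\sqrt{e}`$ literally as stated above) is:

```python
import numpy as np

def circularize(x, y, e, theta):
    """Make an elliptical galaxy distribution circularly symmetric:
    rotate to the principal axes, contract the semimajor-axis coordinates
    by sqrt(e) and expand the semiminor-axis ones by 1/sqrt(e).
    x, y are offsets from the barycenter; theta is the major-axis
    orientation measured from the WE direction."""
    c, s = np.cos(theta), np.sin(theta)
    xm = c * x + s * y        # coordinate along the major axis
    ym = -s * x + c * y       # coordinate along the minor axis
    return xm * np.sqrt(e), ym / np.sqrt(e)
```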
The plots in Fig. 1 show that only 6 systems, A0151, A0548W, A2755\*, A3128, A3223, and A3879, have a strictly positive signal rising above the noise at separations smaller than 0.2 $`h^{-1}\text{Mpc}`$ (as in Sa93, we consider the presence of central maxima reaching at least the $`1\sigma `$ level as indicative of small-scale subclustering). Two other systems, A0118\* and A3691\*, also exhibit a positive departure of more than $`1\sigma `$ at these small scales, but have negative central values of $`\overline{\xi }`$. Indeed, about three fourths (46 of 59) of the clusters in our sample present negative central signals which, in 15 cases, even exceed $`1\sigma `$ in absolute value. These results are in marked contrast with those inferred in Sa93 from the analysis of 14 of the 15 Dressler & Shectman (1988b) clusters (Abell 548 was excluded). In this earlier study all systems gave positive central values of $`\overline{\xi }`$ and nine showed departures between 1 and $`2\sigma `$ at separations below 0.2 $`h^{-1}\text{Mpc}`$. The only cluster in common between the two investigations, Abell 754, is found here to exhibit no evidence for substructure, yet in Sa93 this cluster was seen to produce, with a similar number of objects, a strong positive central signal. One plausible origin of this conflict could be the very different areal coverage of the galaxy samples used in the two studies for this particular cluster (see Fig. 2; but notice that the orientations, ellipticities and barycenter positions are, nevertheless, in very good agreement). This would be the case if the positive detection in Sa93 was produced by small subgroups located outside the cluster core. We also point out the suggestion made in Sa93 that the asymmetry shown by the projected galaxy distribution in the Dressler & Shectman field, not noticeable at short distances from the cluster center, could have caused the observed signal. It is interesting to note that similarly strong discrepancies can be observed with respect to the results of the wavelet analysis of substructure performed by Escalera et al. (1994). These authors found that only three systems among the 16 that they investigated, most of them Dressler & Shectman clusters, did not show significant small-scale subclustering. Furthermore, there is good agreement between the results of this study and of Sa93 for the common clusters. Finally, it is also worth noting that, with the exception of the system A0151, the remaining five clusters with evidence for small-scale structure in the galaxy positional data also show signs of substructure in velocity space (see Table 1). ## 5 Summary and discussion We have evaluated here the frequency of subclustering in 67 well-sampled nearby rich galaxy clusters extracted from the list of 220 compact redshift systems identified in the homogeneous ENACS catalog. Three classical diagnostics sensitive to correlations in velocity space have registered amounts of substructure comparable with those found in earlier studies which applied the same estimators to datasets less representative of the nearby rich cluster population. The average two-point correlation function statistic has allowed us to investigate the clumpiness of the two-dimensional galaxy distributions at small intergalactic separations. In doing so we have found that only about one in every 10 systems studied shows evidence for positive correlation among the projected positions of its member galaxies at scales below $`0.2`$ $`h^{-1}\text{Mpc}`$. 
This result contrasts markedly with the very high fraction of Dressler & Shectman's clusters which demonstrated signs of small-scale substructure in the earlier analysis by Sa93 (see also Escalera et al. 1994). It is possible that the already mentioned factors, such as cluster selection biases, likely affecting some of the existing catalogs, or the restricted coverage of the galaxy distributions of part of the clusters studied here, may be partially responsible for this conflicting result. Nevertheless, there are grounds for believing that it could also be caused by an increase of the incompleteness of the ENACS galaxy samples at small scales. A telling argument in support of this latter viewpoint is that 25 of the 28 magnitude-limited single-plate systems (and 21 of the 31 multiple-plate ones) for which the $`\overline{\xi }`$ statistic has been inferred exhibit negative central values of this function. Since in the absence of correlation among galaxy positions positive and negative values of $`\overline{\xi }(0)`$ are equally probable (see Sa93), we infer that the ENACS clusters do show suggestive evidence of a systematic deficiency of galaxies at very short separations. What, then, could be the origin of this effect? Let us remember that the ENACS project was aimed at obtaining extensive redshift data in the fields of more than 100 rich galaxy clusters. To achieve this goal in a reasonable amount of time, the number of exposures for each targeted cluster was minimized, making it difficult to compensate for the operational restrictions inherent to the fiber-optic system (limited number of fibers available, minimum distance allowed for the positioning of contiguous fibers, etc.) with redundant exposures. This might well have affected the reproduction of the finest details of the cluster galaxy distributions, especially when the coverage was done by means of a single plate. Notice, in this regard, that only one single-plate cluster is among the 6 systems that show signs of small-scale substructure in our dataset. (One may wonder if this latter result could have been produced instead by the relatively small galaxy populations of the single-plate clusters; this possibility is challenged, however, by the fact that in Sa93 seven of the 9 systems which gave a positive detection had fewer than 50 objects.) The reduced success in the redshift measurement for the brightest galaxies (see Sect. 2, and Katgert et al. 1996), which are fair tracers of substructures within clusters (Biviano et al. 1996; Gurzadyan & Mazure 1998), is another factor which may also have contributed to concealing the presence of small subgroups. To sum up, the amount of intermediate and large-scale subclustering detected in the ENACS systems is in fair agreement with, and therefore validates, the results of previous analyses of substructure in nearby rich clusters based on less homogeneous datasets. The present investigation, however, has revealed that the ENACS galaxy samples could suffer from an increasing incompleteness towards small intergalactic separations. In this regard, we caution that the ENACS data by themselves may be insufficient in applications requiring a detailed description of the small-scale substructuring properties of clusters, such as those that investigate the formation of these systems and its consequences for cosmological theories. ###### Acknowledgements. The authors would like to thank Peter Katgert for enlightening and useful discussions. 
GGC acknowledges support by the Universitat Politècnica de Catalunya, through research contract PR97–07. This work has been supported by the Dirección General de Investigación Científica y Técnica, under contract PB96–0173.
# A Note on QCD Corrections to $`A_{FB}^b`$ using Thrust to determine the $`b`$-Quark Direction MPI-PhT/96-14 (extended version), July 1996. Bodo Lampe, Max Planck Institut für Physik, Föhringer Ring 6, D-80805 München. Abstract: I discuss one-loop QCD corrections to the forward-backward asymmetry of the $`b`$-quark in a way appropriate to the present experimental procedure. I try to give insight into the structure of the corrections and elucidate some questions which have been raised by experimental experts. Furthermore, I complete and comment on results given in the literature. The forward-backward asymmetry of the $`b`$ quark is one of the most interesting quantities which has been measured at LEP. It is defined as the ratio $$A_{FB}=\frac{\sigma _{FB}}{\sigma _{F+B}}$$ and in lowest order is given by $`A_{FB}^{\mathrm{Born}}=3\frac{v_ba_b}{v_b^2+a_b^2}\frac{v_ea_e}{v_e^2+a_e^2}`$ and is therefore sensitive to the couplings of the electron and the $`b`$ quark to the $`Z`$. Even more interesting might be the measurement of the combined left-right forward-backward asymmetry $`A_{FB}^{LR}=\frac{(\sigma _{FB})_L-(\sigma _{FB})_R}{\sigma _{\mathrm{total}}}`$ projected by SLC, because in lowest order it involves the $`b`$ quark couplings $`v_b`$ and $`a_b`$ only . For a precision measurement of these quantities the understanding of higher order corrections is very important. One-loop (electroweak and QCD) corrections have been reviewed in the literature, and two-loop QCD corrections have been calculated as well . Usually, these results are presented under the assumption that the $`b`$-quark direction can be determined precisely in experiment. However, with the existing detectors this is not the case. Instead, the LEP experiments apply the following procedure : * Events in which the $`b`$ (or the $`\overline{b}`$) decays semileptonically, $`b\to c\mu ^{-}\overline{\nu }`$ ($`\overline{b}\to \overline{c}\mu ^+\nu `$), are selected. * For these events the thrust axis $`T`$ is determined as the direction $`\stackrel{}{n}`$ maximizing $`\sum_i|\stackrel{}{p}_i\cdot \stackrel{}{n}|`$, where the sum is over all charged momenta $`\stackrel{}{p}_i`$ in the event. * The orientation of the thrust axis is chosen in such a way that $`\stackrel{}{T}\cdot \stackrel{}{\mu }^{-}`$ is positive (resp. $`\stackrel{}{T}\cdot \stackrel{}{\mu }^+`$ is negative). Then the event is counted forward if $`\stackrel{}{T}`$ points in the forward direction ($`\stackrel{}{T}\cdot \stackrel{}{e}^{-}>0`$) and backward if $`\stackrel{}{T}`$ points in the backward direction ($`\stackrel{}{T}\cdot \stackrel{}{e}^{-}<0`$). This procedure will be called the “$`T`$ procedure” in the following (in contrast to the “$`b`$ procedure” where the $`b`$ quark is used to determine the asymmetry). The $`T`$ procedure has several deficiencies: First, due to the missing momentum of the neutrino, $`\stackrel{}{T}`$ is not the “true” thrust axis. Secondly, due to the nature and kinematics of the $`b`$ decay, there are events where the $`\mu ^{-}`$ goes forward while the $`b`$ goes backward (and vice versa). Thirdly, gluon emission may spoil the connection between the thrust axis and the $`b`$-quark direction. These deficiencies must all be corrected for. They can be corrected for separately. In this note we concentrate on point 3. The corrections to items no. 1 and 2 can be made using existing Monte Carlo programs. Note that in this procedure the muon is only used to determine the hemisphere in which its parent quark is to be expected. 
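For concreteness, steps 2 and 3 of the procedure can be sketched as follows (an illustration with hypothetical function names, not the experiments' code; the brute-force axis search is exact but only practical for small multiplicities, since it tries all sign assignments):

```python
import itertools
import numpy as np

def thrust_axis(momenta):
    """Thrust axis of a small set of momenta: the maximum of
    sum_i |p_i . n| over unit vectors n is attained along the normalized
    vector sum_i eps_i p_i for some choice of signs eps_i = +-1."""
    best, axis = -1.0, None
    for eps in itertools.product((1, -1), repeat=len(momenta)):
        v = np.sum([e * p for e, p in zip(eps, momenta)], axis=0)
        norm = np.linalg.norm(v)
        if norm == 0:
            continue
        n = v / norm
        val = np.sum([abs(np.dot(p, n)) for p in momenta])
        if val > best:
            best, axis = val, n
    return axis

def classify_event(charged_momenta, p_mu_minus, e_minus_dir):
    """Orient the thrust axis so that T . mu- > 0, then call the event
    forward if T . e- > 0 (the neutrino momentum is missing, so T is
    only an approximation to the true thrust axis)."""
    T = thrust_axis(charged_momenta)
    if np.dot(T, p_mu_minus) < 0:
        T = -T
    return "forward" if np.dot(T, e_minus_dir) > 0 else "backward"
```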
One may ask why one does not determine the asymmetry of the muon instead (“$`\mu `$ procedure”). The answer is that the muon asymmetry will be notably smaller than the $`b`$ asymmetry, because there is a loss of the original information through the missing neutrino. The $`\mu `$ procedure is therefore worse than the $`T`$ procedure. Before I address item no. 3 I will consider the structure of one-loop QCD corrections to $`A_{FB}`$ in general. For simplicity, I will neglect mass terms $`O(m_b/m_Z)`$ in the following. In addition to $`e^+e^{-}\to b\overline{b}`$ one has processes $`e^+e^{-}\to b\overline{b}g`$. When they are included the QCD effect can be written as an overall correction factor $$A_{FB}=A_{FB}^{\mathrm{Born}}(1+c\frac{\alpha _s}{\pi })$$ which we decompose as $$1+c\frac{\alpha _s}{\pi }=\frac{1+\frac{\alpha _s}{\pi }(p_2+p_3)}{1+\frac{\alpha _s}{\pi }(q_2+q_3)}$$ i.e. we write it as a correction factor $`p`$ to $`\sigma _{FB}`$ divided by a correction factor $`q`$ to $`\sigma _{F+B}=\sigma _{\mathrm{total}}`$. Both $`p`$ and $`q`$ can be split into a 2-jet and a 3-jet piece, in the sense that one can split $`\sigma _{F\pm B}`$ into a 2-jet and a 3-jet piece, $$\sigma _{F\pm B}=\sigma _{F\pm B}^2(y)+\sigma _{F\pm B}^3(y),$$ with an invariant mass cut $`y`$ to define the jets. $`p_{2,3}(y)`$ and $`q_{2,3}(y)`$ are defined by $$\sigma _{FB}^2(y)=\sigma _{FB}^{\mathrm{Born}}(1+\frac{\alpha _s}{\pi }p_2)$$ $$\sigma _{FB}^3(y)=\frac{\alpha _s}{\pi }p_3\sigma _{FB}^{\mathrm{Born}}$$ $$\sigma _{F+B}^2(y)=\sigma _{F+B}^{\mathrm{Born}}(1+\frac{\alpha _s}{\pi }q_2)$$ $$\sigma _{F+B}^3(y)=\frac{\alpha _s}{\pi }q_3\sigma _{F+B}^{\mathrm{Born}}$$ Note that the sums $`p_2+p_3`$ and $`q_2+q_3`$ are independent of $`y`$. One could define a forward-backward asymmetry based on 2-jet resp. 3-jet events only, $$A_{FB}^2(y)=\frac{\sigma _{FB}^2}{\sigma _{F+B}^2}=A_{FB}^{\mathrm{Born}}(1+\frac{\alpha _s}{\pi }(p_2-q_2))$$ $$A_{FB}^3(y)=\frac{\sigma _{FB}^3}{\sigma _{F+B}^3}=\frac{p_3}{q_3}A_{FB}^{\mathrm{Born}}$$ but we shall not consider these quantities in the following. No simple relation holds between $`A_{FB}`$, $`A_{FB}^2`$ and $`A_{FB}^3`$. The functions $`p_{2,3}(y)`$ and $`q_{2,3}(y)`$ have been given in the literature and I do not want to repeat them here, because I am only interested in the inclusive correction factor $`c`$. Within the $`b`$ procedure one has $`c=-1`$ (for $`m_b=0`$) and $`c\simeq -0.8`$ (for $`m_b=4.7`$ GeV). It is a question of some interest to know the value of $`c`$ for the $`T`$ procedure, too. To determine this value we shall work on the parton level and mimic the $`T`$ procedure there. On the parton level the role of the muon direction is played by the $`b`$ quark direction, and the thrust direction $`\stackrel{}{t}`$ is given by the parton with the maximum energy. In lowest order and in the exact 2-jet limit ($`y\to 0`$) the thrust direction and the $`b`$ quark direction are identical, so that no correction needs to be applied (as compared to the $`b`$ procedure). A difference arises, however, in the 3-jet region, where $`\stackrel{}{t}`$ can be either $`\stackrel{}{b}`$, $`\stackrel{}{\overline{b}}`$ or $`\stackrel{}{g}`$. In $`O(\alpha _s)`$ the $`T`$ and $`b`$ procedures are equivalent only in the strict 2-jet limit $`y\to 0`$. An event is forward if either $`\stackrel{}{t}\cdot \stackrel{}{b}>0`$ and $`\stackrel{}{t}\cdot \stackrel{}{e}^{-}>0`$ or $`\stackrel{}{t}\cdot \stackrel{}{b}<0`$ and $`\stackrel{}{t}\cdot \stackrel{}{e}^{-}<0`$, and backward otherwise. 
We have used this procedure and applied it to the QCD matrix element for the process $`e^+e^{-}\to b\overline{b}g`$. One obtains the following results: $$c(T\text{ procedure, }m_b=0)=-0.893$$ This number can be decomposed into a 2-jet and a 3-jet contribution, $$c=(k_2+k_3)\frac{C_F}{2}\quad \text{with}\quad \frac{C_F}{2}k_{2,3}=(p-q)_{2,3}$$ The colour factor $`C_F=4/3`$ has been introduced for convenience. The results for $`k_3`$ are given in the table, both for the $`b`$ and the $`T`$ procedure for several values of $`y`$, assuming $`m_b=0`$. $`k_2=\frac{3}{2}c-k_3`$ vanishes for $`y\to 0`$ because in the $`m_b=0`$ theory there is no contribution from the virtual gluon exchange diagram $`e^+e^{-}\to b\overline{b}`$. Of course it is desirable to have the $`O(m_b)`$ dependence of $`c`$ in the $`T`$ procedure. This is done in a forthcoming publication. In summary one may state: when going from the $`b`$ procedure ($`c=-1`$) to the $`T`$ procedure ($`c=-0.893`$) one gets a correction of about 10% to the correction, i.e. the effect is small and irrelevant on the basis of the present experimental accuracy and only important for some future precision experiment. This statement remains true if $`O(m_b)`$ corrections are included. Acknowledgements: This work was done while I was visiting CERN. I would like to thank Guido Altarelli for his encouragement and Roberto Tenchini and Duccio Abbaneo for several discussions. Appendix: Some Details of the Calculation. We start with 3 different representations of the cross section $$\frac{d\sigma }{d\mathrm{cos}\theta _b}=\sigma _V^b\mathrm{cos}\theta _b+\frac{3}{4}\sigma _L^b\mathrm{sin}^2\theta _b+\frac{3}{8}\sigma _U^b(1+\mathrm{cos}^2\theta _b)$$ \+ azimuthal terms ($`\varphi _b`$); $$\frac{d\sigma }{d\mathrm{cos}\theta _{\overline{b}}}=\sigma _V^{\overline{b}}\mathrm{cos}\theta _{\overline{b}}+\frac{3}{4}\sigma _L^{\overline{b}}\mathrm{sin}^2\theta _{\overline{b}}+\frac{3}{8}\sigma _U^{\overline{b}}(1+\mathrm{cos}^2\theta _{\overline{b}})$$ \+ azimuthal terms ($`\varphi _{\overline{b}}`$); $$\frac{d\sigma }{d\mathrm{cos}\theta _g}=\sigma _V^g\mathrm{cos}\theta _g+\frac{3}{4}\sigma _L^g\mathrm{sin}^2\theta _g+\frac{3}{8}\sigma _U^g(1+\mathrm{cos}^2\theta _g)$$ \+ azimuthal terms ($`\varphi _g`$); where $$\sigma _V^i=\frac{3}{8}J\sigma _o\frac{\alpha _s}{2\pi }C_FB_V^i$$ $$\sigma _{U+L}^i=R\sigma _o\frac{\alpha _s}{2\pi }C_FB_{UL}$$ and $$B_{UL}=\frac{x_1^2+x_2^2}{y_{13}y_{23}}$$ $$B_V^i=\frac{x_1^2\mathrm{cos}\theta _{1i}-x_2^2\mathrm{cos}\theta _{2i}}{y_{13}y_{23}}$$ $`\mathrm{cos}\theta _{ij}=1`$ for $`i=j`$ and $$\mathrm{cos}\theta _{ij}=1+\frac{2}{x_ix_j}-\frac{2}{x_i}-\frac{2}{x_j}$$ for $`i\ne j`$. The $`x_i`$ are the normalized energies of the partons $`i`$, $`x_1+x_2+x_3=2`$, $`y_{13}=1-x_2`$ etc. as usual. $`\sigma _o`$ is the tree level cross section for $`e^+e^{-}\to \mu ^+\mu ^{-}`$ (photon exchange only). Note that at leading order ($`e^+e^{-}\to b\overline{b}`$) $$A_{FB}^0=\frac{3J}{8R}.$$ Expressions for $`J`$ and $`R`$ can be found, for instance, in Nachtmann’s book. It was shown previously that in the $`b`$ procedure the QCD correction factor can be written as $$1+\frac{\alpha _s}{2\pi }C_F(B_V^b-B_{UL})_{\mathrm{all}}=1-\frac{3}{2}\frac{\alpha _s}{2\pi }C_F$$ where $`()_{\mathrm{all}}`$ denotes $`\int_0^1dx_1\int_0^1dx_2\,\theta (x_1+x_2-1)`$ and the singularities of $`B_V^b`$ and $`B_{UL}`$ for $`y_{13}\to 0`$, $`y_{23}\to 0`$ drop out in the difference $`B_V^b-B_{UL}`$. 
In the $`T`$ procedure the QCD correction factor is given by $$1+\frac{\alpha _s}{2\pi }C_F\left\{(B_V^b-B_{UL})_{x_1>}+(-B_V^{\overline{b}}-B_{UL})_{x_2>}+(B_V^g-B_{UL})_{x_3>}\right\}$$ where $`()_{x_1>}`$ denotes $`\int_0^1dx_1\int_0^1dx_2\,\theta (x_1+x_2-1)\theta (x_1-x_2)\theta (x_1-x_3)`$ etc. Numerically one finds $$(B_V^b-B_{UL})_{x_1>}=(-B_V^{\overline{b}}-B_{UL})_{x_2>}=-0.21$$ $$(B_V^g-B_{UL})_{x_3>}=-B_{UL}|_{x_3>}=-0.92$$ For comparison we also give here $$(B_V^b-B_{UL})_{x_2>}=-0.74$$ $$(B_V^b-B_{UL})_{x_3>}=-0.55$$ The difference between the $`b`$ and $`T`$ procedures is given by $$\frac{\alpha _s}{2\pi }C_F[-(B_V^b+B_V^{\overline{b}})_{x_2>}-B_V^b|_{x_3>}]=\frac{\alpha _s}{2\pi }C_F(0.5273-0.3675)$$ $$=(0.160\pm 0.003)\frac{\alpha _s}{2\pi }C_F$$
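The region integrals quoted above are straightforward to reproduce by Monte-Carlo integration over the $`(x_1,x_2)`$ triangle; the following sketch (ours, not from the original calculation) does so for $`B_V^b-B_{UL}`$ and $`B_{UL}`$, using the massless kinematics of the Appendix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000_000

x1, x2 = rng.random(n), rng.random(n)   # uniform in the unit square
x3 = 2.0 - x1 - x2
inside = x1 + x2 > 1.0                  # physical 3-body region

with np.errstate(divide="ignore", invalid="ignore"):
    y13, y23 = 1.0 - x2, 1.0 - x1
    B_UL = (x1**2 + x2**2) / (y13 * y23)
    cth12 = 1.0 + 2.0 / (x1 * x2) - 2.0 / x1 - 2.0 / x2   # cos(theta_12)
    B_V_b = (x1**2 - x2**2 * cth12) / (y13 * y23)          # parton 1 = b

regions = {"x1>": inside & (x1 >= x2) & (x1 >= x3),
           "x2>": inside & (x2 >= x1) & (x2 >= x3),
           "x3>": inside & (x3 >= x1) & (x3 >= x2)}

# the sampling square has unit area, so a masked mean is the integral
print("all :", np.where(inside, B_V_b - B_UL, 0.0).mean())        # -> -1.5
for name, sel in regions.items():
    print(name, np.where(sel, B_V_b - B_UL, 0.0).mean())
    # expected: x1> -0.21, x2> -0.74, x3> -0.55
print("B_UL|x3>:", np.where(regions["x3>"], B_UL, 0.0).mean())    # -> 0.92
```

With enough samples this should reproduce the quoted numbers, and by the $`x_1\leftrightarrow x_2`$ symmetry of the $`x_2>`$ region the same run also fixes $`(-B_V^{\overline{b}}-B_{UL})_{x_2>}`$, consistent with the equality quoted above.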
# Search for Glueballs from Three-body Annihilation of $`\overline{p}p`$ in-Flight ## 1 STRATEGY FOR GLUEBALL HUNTING IN $`p\overline{p}`$ ANNIHILATION Proton-antiproton annihilation is regarded as a favorable process for glueball production. For $`p\overline{p}`$ annihilation, there are two possible ways to produce glueballs, as shown in Fig. 1(a&b): the so-called “production” and “formation” mechanisms, respectively. For the $`0^{++}`$ glueball ground state, lattice QCD predicts its mass to be $`1.45-1.8\ GeV/c^2`$ . For a glueball in such a mass range to be produced from $`p\overline{p}`$ annihilation, it can only come from the “production” mechanism. Indeed, by studying $`p\overline{p}\to 3\pi ^0`$ and $`p\overline{p}\to \pi ^0\eta \eta `$, the Crystal Barrel Collaboration discovered the $`f_0(1500)`$ resonance, which is now regarded as the best $`0^{++}`$ glueball candidate. There is also some new evidence for $`f_0(1770)`$. If the $`f_0(1500)`$ is really a glueball, it sets a mass scale for glueballs of other quantum numbers. The $`0^{-+}`$, $`2^{++}`$ and $`2^{-+}`$ glueballs are predicted to be around $`2.1-2.4`$ GeV by various theoretical models. Resonances in such an energy range should be produced mainly through the “formation” mechanism, and three-body decay modes are expected to be large. Crystal Barrel has taken a lot of data for all-neutral final states in flight. The question is then which three-body channels we should study first. Here we can take some lessons from charmonium decays. The hadronic decays of $`\eta _c(0^{-+})`$, $`\chi _{c0}(0^{++})`$ and $`\chi _{c2}(2^{++})`$, as well as $`J/\mathrm{\Psi }`$ radiative decays, definitely go through two-gluon intermediate states. The $`\eta `$, $`\eta ^{\prime }`$, $`\sigma `$ and $`f_0(1500)`$ seem to be favoured decay products of two-gluon states. The $`\eta \sigma `$, $`\eta ^{\prime }\sigma `$ and $`\eta f_0(1500)`$ modes are expected to be large decay modes of $`0^{-+}`$ glueballs, while $`\eta f_2`$ and $`\eta ^{\prime }f_2`$ are expected to be large for $`2^{-+}`$ and $`2^{++}`$ glueball decays. These decay modes have $`\pi ^0\pi ^0\eta `$, $`\pi ^0\pi ^0\eta ^{\prime }`$ and $`3\eta `$ as their final states. Hence our strategy for hunting $`0^{-+}`$, $`2^{++}`$ and $`2^{-+}`$ glueballs is to study resonances formed by $`p\overline{p}`$ and decaying into $`\pi ^0\pi ^0\eta `$, $`\pi ^0\pi ^0\eta ^{\prime }`$ and $`3\eta `$ final states first. ## 2 STATUS OF THREE-BODY ANNIHILATION IN-FLIGHT Crystal Barrel at LEAR has collected data triggering on neutral final states at beam momenta of 0.6, 0.9, 1.05, 1.2, 1.35, 1.525, 1.642, 1.8 and 1.94 GeV/c, which correspond to center-of-mass energies ranging from 1.96 to 2.41 $`GeV/c^2`$. An average of 8.5 million all-neutral events was taken at each momentum. These data have been processed. Rough numbers of selected events at each beam momentum and the background levels for the reconstructed three-body channels from $`6\gamma `$ and $`7\gamma `$ events are listed in Table 1. Among them, the background level for the $`\pi ^0\pi ^0\omega `$ channel has not been investigated yet; the background level for $`\pi ^0\pi ^0\eta ^{\prime }`$ is too high to allow a partial wave analysis. So we have been analyzing the first four channels. For the $`\pi ^0\pi ^0\pi ^0`$ channel, the most obvious contributions come from $`f_2(1270)\pi ^0`$ and $`f_0(1500)\pi ^0`$ intermediate states. The partial wave analysis is in progress. For the $`\pi ^0\eta \eta `$ channel, the statistics is not enough for a full partial wave analysis including both production and formation amplitudes. 
Some effective formalism was used to concentrate on searching for resonances in the production mechanism. The main results are: $`f_0(1500)\to \eta \eta `$ is clearly seen; $`f_0(1750-1800)`$, $`f_0(2100)`$ and $`a_2(1660)`$ are confirmed; in addition there is a broad $`f_2(1980)\to \eta \eta `$ with mass $`M=1980\pm 50`$ MeV and width $`\mathrm{\Gamma }=500\pm 100`$ MeV. For the $`\eta \eta \eta `$ channel, the statistics is low. But we can still learn something. In Fig. 2, we show the real-data and some Monte Carlo Dalitz plots for $`\overline{p}p\to \eta \eta \eta `$ at 1.8 GeV/c. From the real-data Dalitz plot in Fig. 2, the $`f_0(1500)`$ is obviously there. For the beam momentum of 1.8 GeV/c, the orbital angular momentum between the $`f_0(1500)`$ and the $`\eta `$ is expected to be $`\le 2`$, which corresponds to the initial states $`0^{-}`$ ($`l_f=0`$), $`1^{+}`$ ($`l_f=1`$) and $`2^{-}`$ ($`l_f=2`$). From their Monte Carlo Dalitz plots in Fig. 2, without fitting the data, it is already clear that $`0^{-}\to f_0(1500)\eta `$ is the most obvious contribution to $`\overline{p}p\to \eta \eta \eta `$. Preliminary results for the cross section of $`\overline{p}p\to \eta \eta \eta `$ are shown in Fig. 3. There are two peaks, at about 2.15 GeV and 2.33 GeV. They may be due to statistical fluctuations. But if they are due to two resonances, they are most likely $`0^{-+}`$ resonances with the $`f_0(1500)\eta `$ decay mode. The only channel which is favourable for hunting $`0^{-+}`$, $`2^{++}`$ and $`2^{-+}`$ glueballs and has enough statistics for a full amplitude partial wave analysis is the $`\pi ^0\pi ^0\eta `$ channel. Its cross section is shown in Fig. 4. There are clear enhancements at around 2.05 and 2.3 GeV. Note that for a constant amplitude the cross section should decrease steadily as the energy increases. For the $`\pi ^0\pi ^0\eta `$ channel, projections onto $`M(\pi \pi )`$ and $`M(\pi \eta )`$ at 900, 1200, 1525 and 1800 MeV/c are shown in Fig. 5. The $`f_2(1270)`$, $`a_0(980)`$ and $`a_2(1320)`$ are clearly visible. The $`f_0(980)`$ and $`f_0(1500)`$ are visible in the $`M(\pi \pi )`$ projection, but rather weak. As the beam momentum increases, the $`f_2(1270)`$ peak becomes stronger while the $`a_2(1320)`$ peak gets weaker; this is a natural reflection of the rapidly opening phase space for $`f_2(1270)\eta `$, whose threshold is at a mass of 1820 MeV. Based on these data, a full amplitude analysis describing both production and decay of these resonances is carried out for each momentum. ## 3 AMPLITUDE ANALYSIS OF $`\overline{p}p\to \pi ^0\pi ^0\eta `$ For the $`\pi ^0\pi ^0\eta `$ final state, possible $`\overline{p}p`$ initial states are $`0^{-+}`$, $`2^{-+}`$, $`4^{-+}`$ etc. for $`\overline{p}p`$ spin singlet states, and $`1^{++}`$, $`2^{++}`$, $`3^{++}`$, $`4^{++}`$, $`5^{++}`$ etc. for $`\overline{p}p`$ spin triplet states. In our case, with center-of-mass energies below 2.41 GeV, only $`0^{-+}`$, $`2^{-+}`$, $`1^{++}`$, $`2^{++}`$, $`3^{++}`$ and $`4^{++}`$ are expected to be significant, and this has been confirmed in our analysis; $`4^{-+}`$ has been tried, but is not significant. Their corresponding $`\overline{p}p`$ total angular momentum J, orbital angular momentum L and total spin angular momentum S, in the usual contracted form $`{}^{2S+1}L_J`$, are: $`{}^1S_0`$ for $`0^{-+}`$, $`{}^1D_2`$ for $`2^{-+}`$, $`{}^3P_1`$ for $`1^{++}`$, $`{}^3P_2`$ or $`{}^3F_2`$ for $`2^{++}`$, $`{}^3F_3`$ for $`3^{++}`$, and $`{}^3F_4`$ or $`{}^3H_4`$ for $`4^{++}`$. 
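These assignments follow from the quantum numbers of a fermion-antifermion pair, $`P=(-1)^{L+1}`$ and $`C=(-1)^{L+S}`$; as a quick cross-check (our own illustration, not from the analysis code), a few lines suffice to tabulate them:

```python
LETTER = "SPDFGH"

def jpc(S, L, J):
    """J^PC of a p-pbar state ^{2S+1}L_J: P = (-1)^(L+1), C = (-1)^(L+S)."""
    P = "+" if (L + 1) % 2 == 0 else "-"
    C = "+" if (L + S) % 2 == 0 else "-"
    return f"{2 * S + 1}{LETTER[L]}{J} -> {J}^{{{P}{C}}}"

for S in (0, 1):                 # spin singlet and triplet
    for L in range(5):
        for J in range(abs(L - S), L + S + 1):
            print(jpc(S, L, J))
# e.g. 1S0 -> 0^{-+}, 1D2 -> 2^{-+}, 3P2 -> 2^{++}, 3F4 -> 4^{++}
```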
Let us choose the reaction rest frame with the z axis along the $`\overline{p}`$ beam direction. Then the squared modulus of the total transition amplitude is the following: $$I=|A_{0^{-+}}+A_{2^{-+}}|^2+|A_{1^{++}}^{M=1}+A_{3^{++}}^{M=1}|^2+|A_{1^{++}}^{M=-1}+A_{3^{++}}^{M=-1}|^2+|A_{2^{++}}^{M=0}+A_{4^{++}}^{M=0}|^2+|A_{2^{++}}^{M=1}+A_{4^{++}}^{M=1}|^2+|A_{2^{++}}^{M=-1}+A_{4^{++}}^{M=-1}|^2+2\mathrm{Re}[(A_{2^{++}}^{M=1}+A_{4^{++}}^{M=1})(A_{1^{++}}^{M=1}+A_{3^{++}}^{M=1})^{*}-(A_{2^{++}}^{M=-1}+A_{4^{++}}^{M=-1})(A_{1^{++}}^{M=-1}+A_{3^{++}}^{M=-1})^{*}]$$ where M is the spin projection on the z-axis. Partial wave amplitudes $`A_{J^{PC}}`$ are constructed from relativistic Lorentz covariant tensors, Breit-Wigner functions and Blatt-Weisskopf barrier factors . The barrier factors use a radius of 1 fm. The $`f_2(1270)\eta `$, $`a_2(1320)\pi `$, $`a_0(980)\pi `$, $`\sigma \eta `$, $`f_0(980)\eta `$ and $`f_0(1500)\eta `$ intermediate states are considered. The $`f_0(980)`$ is fitted with a Flatté formula using parameters determined previously . The $`\sigma `$ is fitted with the parameterization A of Ref. . Other resonances are fitted with simple Breit-Wigner amplitudes using constant widths. Full formulae and additional details will be given in Ref. . Based on these formulae, the data at each momentum are fitted by the maximum likelihood method. The fit is shown in Fig. 5 for the mass spectra for beam momenta of 900, 1200, 1525 and 1800 MeV/c. The quality of the fit for other beam momenta is similar. The fit to the projections is obviously not perfect. This may be due to additional components such as $`a_0(1450)\pi `$, $`a_2(1660)\pi `$ and even $`\widehat{\rho }(1405)\pi `$ intermediate states. An angular momentum decomposition of this small effect is not possible, but fits including such states produce little effect on the dominant components. Since we are mainly interested in scanning the larger components from the $`f_2(1270)\eta `$, $`\sigma \eta `$, $`a_2\pi `$ and $`a_0(980)\pi `$ intermediate states, we ignore those smaller contributions in the present study. The data points with error bars shown in Fig. 6 are our fitted results for the partial wave cross sections at each momentum for $`\overline{p}p\to \pi ^0\pi ^0\eta `$ with $`\eta \to \gamma \gamma `$. Only those partial waves with significant contributions are presented. There are rich peak and dip structures. For $`4^{++}`$, a peak around 2100 MeV is clear in all $`4^{++}`$ partial waves. It is fitted by a Breit-Wigner amplitude with the mass and width fixed to the PDG values for the well established $`4^+`$ resonance $`f_4(2050)`$ . The shift of the peak position is due to the centrifugal barrier factors for both initial and final states. It appears in $`f_2\eta `$ and $`a_2\pi `$ with comparable strength. In addition to the $`f_4(2050)`$, there is clearly another $`4^{++}`$ peak around 2320 MeV in $`4^{++}\to f_2\eta `$ in the M=1 partial wave. This may be identified with the $`f_4(2300)`$ listed in the Particle Data Tables . A $`4^{++}`$ resonance around this mass has also been observed by VES in $`\eta \pi ^+\pi ^{-}`$ in the $`\pi A`$ reaction . For $`2^{++}`$, two peaks around 2020 MeV and 2350 MeV have masses and widths compatible with the $`f_2(2010)`$ and $`f_2(2340)`$ listed by the PDG as established particles. In addition, a peak around 2230 MeV shows up clearly in the $`f_2\eta `$ mode. It has a mass compatible with the $`\xi (2230)`$ observed in $`J/\mathrm{\Psi }`$ radiative decays , but has a larger width of $`\sim 150`$ MeV. 
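For illustration, a minimal sketch of the resonance line shape used for most waves, a constant-width Breit-Wigner multiplied by a Blatt-Weisskopf factor with $`R=1`$ fm, is given below; the explicit barrier-factor forms are the standard ones from the literature rather than being spelled out in the text, so treat them as an assumption:

```python
import numpy as np

HBARC = 0.1973  # GeV fm

def blatt_weisskopf(q, l, R_fm=1.0):
    """Standard Blatt-Weisskopf barrier factors F_l(q) for l <= 2,
    with breakup momentum q in GeV and radius R = 1 fm as in the fit."""
    z = (q * R_fm / HBARC) ** 2
    if l == 0:
        return 1.0
    if l == 1:
        return np.sqrt(2 * z / (z + 1))
    if l == 2:
        return np.sqrt(13 * z**2 / ((z - 3) ** 2 + 9 * z))
    raise NotImplementedError("only l <= 2 in this sketch")

def bw_amplitude(m, m0, gamma0, q, l):
    """Constant-width Breit-Wigner times the decay-vertex barrier factor;
    an analogous factor would multiply the production vertex."""
    return blatt_weisskopf(q, l) / (m0**2 - m**2 - 1j * m0 * gamma0)
```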
For the $`2^{-+}`$ and $`3^{++}`$ waves, both show two peaks, around 2050 and 2300 MeV. No corresponding entries exist for them as yet in the Particle Data Tables . For $`1^{++}`$, there is a strong enhancement at the low energy end, decaying dominantly into $`a_2\pi `$. For $`0^{-+}`$, there seems to be a broad component plus a peak in $`\eta \sigma `$ at $`\sim 2140`$ MeV with width $`\sim 150`$ MeV. The broad component may correspond to the broad $`0^{-+}`$ object used in describing $`J/\mathrm{\Psi }`$ radiative decays to $`\rho \rho `$, $`\omega \omega `$, $`K^{*}\overline{K}^{*}`$, $`\varphi \varphi `$ and $`\eta \pi \pi `$ . The peak at 2140 MeV decays dominantly into $`\sigma \eta `$. Using interfering sums of Breit-Wigner amplitudes to fit the partial wave cross sections in Fig. 6 as well as the relative phases between different partial waves, we obtain the masses, widths and branching ratios shown in Table 2. The corresponding Argand plots for all partial waves are shown in Fig. 7. Besides the obvious resonances mentioned in the previous paragraphs, we need another $`1^{++}`$ resonance at about 2340 MeV with width $`\sim 270`$ MeV. Without it, we cannot describe the relative phase between the $`1^{++}`$ and $`4^{++}`$ partial waves; also, we would need the lower $`1^{++}`$ resonance to be very narrow ($`<50`$ MeV) in order to explain the sharp increase in the $`1^{++}`$ partial wave cross section at low mass. In our present fit with two $`1^{++}`$ resonances, the $`f_1(2340)`$ amplitude interferes destructively with the tail of the lower $`1^{++}`$ resonance and causes the sharply decreasing cross section with a broad dip around 2340 MeV. The phase motion caused by this $`f_1(2340)`$ can be seen clearly in the Argand plot for the $`1^{++}\to a_2\pi `$ partial wave in Fig. 7. In Table 2, the branching ratios are calculated at the resonance masses and are corrected for the unseen decay modes, except for the $`a_0(980)`$, where $`\mathrm{\Gamma }_{a_0\pi }=\mathrm{\Gamma }_{a_0\pi \to \eta \pi \pi }`$. Among these resonances, the $`\eta (2140)`$ and $`f_2(2240)`$ look special. Both have relatively narrow decay widths. The $`\eta (2140)`$ decays dominantly into $`\eta \sigma `$; the $`f_2(2240)`$ has the largest $`f_2\eta /a_2\pi `$ ratio. These properties suggest that they may have a larger admixture of glue components than other resonances. ## 4 SUMMARY AND OUTLOOK In summary, from a full amplitude analysis of $`\overline{p}p\to \pi ^0\pi ^0\eta `$ in-flight, we have observed a new decay mode, $`\eta \pi \pi `$, for three established resonances, the $`f_4(2050)`$, $`f_2(2010)`$ and $`f_2(2340)`$. In addition, we have observed 8 new or poorly established resonances in the energy range from 1960 to 2410 MeV, i.e., $`f_4(2320)`$, $`f_3(2000)`$, $`f_3(2280)`$, $`f_2(2240)`$, $`f_1(2340)`$, $`\eta (2140)`$, $`\eta _2(2040)`$ and $`\eta _2(2300)`$. Among them, the $`0^{-+}`$ $`\eta (2140)`$ has a very large branching ratio to $`\eta \sigma `$, and the $`2^{++}`$ $`f_2(2240)`$ has the largest $`f_2\eta /a_2\pi `$ ratio; both have relatively narrow total decay widths. These properties suggest that they may have larger glueball components. For a further study of the $`\eta (2140)`$ and $`f_2(2240)`$, we are going to reconstruct the $`\pi ^0\pi ^0\eta ^{\prime }`$ channel from $`10\gamma `$ events, where we expect less contamination from other channels. The main purpose here is to scan the $`f_2\eta ^{\prime }`$ and $`\sigma \eta ^{\prime }`$ modes, which are also expected to be favourable decay modes of glueballs. 
We are also going to scan $`f_2\pi ^0`$ from the $`\pi ^0\pi ^0\pi ^0`$ final state to study isovector $`q\overline{q}`$ states, and scan $`f_2\omega `$ from the $`\pi ^0\pi ^0\omega `$ final state to study isoscalar $`q\overline{q}`$ states. Both the $`\pi ^0\pi ^0\pi ^0`$ and $`\pi ^0\pi ^0\omega `$ channels have enough statistics for a full amplitude partial wave analysis and have the $`f_2`$ band as the most obvious contribution in their Dalitz plots. From these analyses of three-body annihilation in-flight, combined with information from two-body annihilations, we hope to establish the $`2.0-2.4\ GeV/c^2`$ meson spectroscopy which is crucial for identifying glueballs and understanding quark confinement. Acknowledgement: I thank the organizers of the LEAP98 conference for the invitation to give this talk in such a nice place, Sardinia. I am very grateful to D.V. Bugg and A.V. Sarantsev for the fruitful collaboration on data analysis, and to the Crystal Barrel Collaboration for producing the beautiful data.
# Extension of the bilinear formalism to supersymmetric KdV-type equations ## I Introduction Supersymmetric integrable systems constitute a very interesting subject, and as a consequence a number of well known integrable equations have been generalized into the supersymmetric (SUSY) context. We mention the SUSY versions of sine-Gordon , nonlinear Schrödinger , the KP hierarchy , KdV , Boussinesq , etc. We also point out that there are many generalizations related to the number $`N`$ of fermionic independent variables. In this paper we are dealing with $`N=1`$ SUSY. So far, many of the tools used in the standard theory have been extended to this framework, such as Bäcklund transformations , prolongation theory , the Hamiltonian formalism , the Grassmannian description , $`\tau `$ functions , and Darboux transformations . The physical interest in the study of these systems was launched by the seminal paper of Alvarez-Gaumé et al. about the partition function and super-Virasoro constraints of 2D quantum supergravity. Although the $`\tau `$ function theory in the context of SUSY pseudodifferential operators was given for the SUSY KP hierarchy , the bilinear formalism for SUSY equations has been very little investigated. We mention here the algebraic approach using the representation theory of affine Lie superalgebras in the papers of Kac and van der Leur and Kac and Medina , and the superconformal field theoretic approach of LeClair . In these articles, however, the bilinear hierarchies are not related to the SUSY hierarchies of nonlinear equations. In this paper we consider a direct approach to SUSY equations rather than hierarchies, namely extending the gauge-invariance principle of $`\tau `$ functions for classical Hirota operators. Our result generalizes the Grammaticos-Ramani-Hietarinta theorem to the SUSY case, and we find $`N=1`$ SUSY Hirota bilinear operators. With these operators one can obtain a SUSY-bilinear form for the SUSY KdV equation of Mathieu, and they also allow bilinear forms for certain SUSY extensions of the Sawada-Kotera-Ramani and Hirota-Satsuma (shallow water wave) equations. Also, the gauge-invariance principle allows one to study SUSY multisoliton solutions built from exponentials of linear terms. We want to emphasize that a special super-bilinear identity for the $`N=1`$ SUSY KdV hierarchy was conjectured by McArthur and Yung . Using it they were able to write the SUSY KdV hierarchy in bilinear form. Our approach generalizes the super-bilinear operator conjectured by them, and in the case of SUSY KdV-type equations we obtain the same results. The paper is organized as follows. In section II the standard bilinear formalism is briefly discussed. In section III supersymmetric versions of nonlinear evolution equations are presented, and in section IV we introduce the super-bilinear formalism. In the last section we present the bilinear forms for SUSY KdV-type equations, super-soliton solutions and several comments about the extension to the N=2 SUSY sine-Gordon equation. ## II Standard bilinear formalism The Hirota bilinear operators were introduced as an antisymmetric extension of the usual derivative , because of their usefulness for the computation of multisoliton solutions of nonlinear evolution equations. The bilinear operator $`\mathbf{D}_x=\partial _{x_1}-\partial _{x_2}`$ acts on a pair of functions (the so-called “dot product”) antisymmetrically: $$\mathbf{D}_xf\cdot g=(\partial _{x_1}-\partial _{x_2})f(x_1)g(x_2)|_{x_1=x_2=x}=f^{\prime }g-fg^{\prime }.$$ (1) The Hirota bilinear formalism has been instrumental in the derivation of the multisoliton solutions of (integrable) nonlinear evolution equations. 
The first step in the application is a dependent-variable transformation which converts the nonlinear equation into a quadratic form. This quadratic form turns out to have the same structure as the dispersion relation of the linearized equation, although there is no deep reason for that. This is best understood through an example. Starting from the paradigmatic KdV equation $$u_t+6uu_x+u_{xxx}=0,$$ (2) we introduce the substitution $`u=2\partial _x^2\mathrm{log}F`$ and obtain after one integration: $$F_{xt}F-F_xF_t+F_{xxxx}F-4F_{xxx}F_x+3F_{xx}^2=0,$$ (3) which can be written in the following condensed form: $$(𝐃_x𝐃_t+𝐃_x^4)F\cdot F=0.$$ (4) The power of the bilinear formalism lies in the fact that for multisoliton solutions the $`F`$'s are polynomials of exponentials. Moreover, it also displays the interaction (phase shifts) between solitons. In the case of the KdV equation the multisoliton solution has the following form: $$F=\underset{\mu =0,1}{\sum }\mathrm{exp}(\underset{i=1}{\overset{N}{\sum }}\mu _i\eta _i+\underset{i<j}{\sum }A_{ij}\mu _i\mu _j),$$ (5) where $`\eta _i=k_ix-k_i^3t+\eta _i^{(0)}`$ and $`\mathrm{exp}A_{ij}=(\frac{k_i-k_j}{k_i+k_j})^2`$, which is the phase shift from the interaction of soliton $`i`$ with soliton $`j`$. A very important observation (which motivated the present paper) is the relation of the physical field $`u=2\partial _x^2\mathrm{log}F`$ of the KdV equation to the Hirota function $`F`$: the gauge transformation $`F\to e^{px+\omega t}F`$ leaves $`u`$ invariant. This is a general property of all bilinear equations. Moreover, one can define the Hirota operators using the requirement of gauge invariance. Let us introduce a general bilinear expression, $$A_N(f,g)=\underset{i=0}{\overset{N}{\sum }}c_i(\partial _x^{N-i}f)(\partial _x^ig)$$ (6) and require it to be invariant under the gauge transformation: $$A_N(e^\theta f,e^\theta g)=e^{2\theta }A_N(f,g),\theta =kx+\omega t+\mathrm{\cdots }\text{(linears)}.$$ (7) Then we have the following Theorem: $`A_N(f,g)`$ is gauge-invariant if and only if $`A_N(f,g)=c_0𝐃_x^Nf\cdot g`$, i.e. $$c_i=c_0(-1)^i\left(\begin{array}{c}N\\ i\end{array}\right)$$ where $`c_0`$ is a constant and the brackets represent the binomial coefficient.

## III Supersymmetry

The supersymmetric extension of a nonlinear evolution equation (KdV for instance) refers to a system of coupled equations for a bosonic field $`u(t,x)`$ and a fermionic field $`\xi (t,x)`$ which reduces to the initial equation in the limit where the fermionic field is zero (the bosonic limit). In the classical context, a fermionic field is described by an anticommuting function with values in an infinitely generated Grassmann algebra. However, supersymmetry is not just a coupling of a bosonic field to a fermionic field. It implies a transformation (supersymmetry invariance) relating these two fields which leaves the system invariant. In order to have a mathematical formulation of these concepts we have to extend the classical space $`(x,t)`$ to a larger space (superspace) $`(t,x,\theta )`$, where $`\theta `$ is a Grassmann variable, and also to extend the pair of fields $`(u,\xi )`$ to a larger fermionic or bosonic superfield $`\mathrm{\Phi }(t,x,\theta ).`$ In order to have a nontrivial extension of KdV we choose $`\mathrm{\Phi }`$ to be fermionic, having the expansion $$\mathrm{\Phi }(t,x,\theta )=\xi (t,x)+\theta u(t,x).$$ (8) N=1 SUSY means that we have only one Grassmann variable $`\theta `$, and we consider only space supersymmetry invariance, namely $`x\to x-\lambda \theta `$ and $`\theta \to \theta +\lambda `$ ($`\lambda `$ is an anticommuting parameter).
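As a check on equations (3)-(5), one can verify symbolically that the two-soliton $`F`$ with the quoted phase shift satisfies the bilinear form (4). The sketch below is our own illustrative two-variable implementation of the Hirota operators, not code from the paper; the result should simplify to zero.

```python
import sympy as sp

x, t = sp.symbols('x t')
k1, k2 = sp.symbols('k1 k2', positive=True)

def hirota(f, g, orders):
    # Two-point definition: D_x^a D_t^b f.g =
    # (d/dx1 - d/dx2)^a (d/dt1 - d/dt2)^b f(x1,t1) g(x2,t2) at x1=x2=x, t1=t2=t
    s1 = {v: sp.Dummy(v.name + '1') for v in orders}
    s2 = {v: sp.Dummy(v.name + '2') for v in orders}
    expr = f.subs(s1) * g.subs(s2)
    for v, N in orders.items():
        expr = sum((-1)**i * sp.binomial(N, i)
                   * sp.diff(expr, (s1[v], N - i), (s2[v], i))
                   for i in range(N + 1))
    for v in orders:
        expr = expr.subs({s1[v]: v, s2[v]: v})
    return expr

eta1, eta2 = k1*x - k1**3*t, k2*x - k2**3*t
A12 = ((k1 - k2)/(k1 + k2))**2          # the phase-shift factor exp(A12)
F = 1 + sp.exp(eta1) + sp.exp(eta2) + A12*sp.exp(eta1 + eta2)

res = hirota(F, F, {x: 1, t: 1}) + hirota(F, F, {x: 4})
print(sp.simplify(sp.expand(res)))      # expected: 0
```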
This transformation is generated by the operator $`Q=\partial _\theta -\theta \partial _x,`$ which anticommutes with the covariant derivative $`D=\partial _\theta +\theta \partial _x`$ (notice also that $`D^2=\partial _x`$). Expressions written in terms of the covariant derivative and the superfield $`\mathrm{\Phi }`$ are manifestly supersymmetry invariant. Using the superspace formalism one can construct different supersymmetric extensions of nonlinear equations. Thus the integrable (in the sense of a Lax pair) variant of N=1 SUSY KdV is $$\mathrm{\Phi }_t+D^6\mathrm{\Phi }+3D^2(\mathrm{\Phi }D\mathrm{\Phi })=0,$$ (9) which in components has the form $`u_t`$ $`=`$ $`-u_{xxx}-6uu_x+3\xi \xi _{xx}`$ (10) $`\xi _t`$ $`=`$ $`-\xi _{xxx}-3\xi _xu-3\xi u_x.`$ (11) We shall also discuss the following supersymmetric equations, although we do not know if they are completely integrable in the sense of a Lax pair ($`\mathrm{\Phi }`$ is again a fermionic superfield). * N=1 SUSY Sawada-Kotera-Ramani, $$\mathrm{\Phi }_t+D^{10}\mathrm{\Phi }+D^2(10D\mathrm{\Phi }D^4\mathrm{\Phi }+5D^5\mathrm{\Phi }\mathrm{\Phi }+15(D\mathrm{\Phi })^2\mathrm{\Phi })=0.$$ (12) * N=1 SUSY Hirota-Satsuma (shallow water wave), $$D^4\mathrm{\Phi }_t+\mathrm{\Phi }_tD^3\mathrm{\Phi }+2D^2\mathrm{\Phi }D\mathrm{\Phi }_t-D^2\mathrm{\Phi }\mathrm{\Phi }_t=0.$$ (13) A very important equation from physical considerations is SUSY sine-Gordon. We are going to consider the version studied by Kulish and Tsyplyaev. There are other integrable versions of SUSY sine-Gordon that have emerged from algebraic procedures. In this case one needs two Grassmann variables $`\theta _\alpha `$ with $`\alpha =1,2`$, and the supersymmetry transformation is $$x^{}{}_{}{}^{\mu }=x^\mu -i\overline{\lambda }\gamma ^\mu \theta ,\theta _\alpha ^{}=\theta _\alpha +\lambda _\alpha ,\mu =0,1.$$ Here, $`\lambda _\alpha `$ is the anticommuting spinor parameter of the transformation and $`\overline{\lambda }=(\lambda ^1,\lambda ^2)`$, $`\lambda ^\alpha =\lambda _\beta (i\sigma _2)^{\beta \alpha }`$, $`\gamma ^0=i\sigma _2`$, $`\gamma ^1=\sigma _1`$, $`\gamma ^5=\gamma ^0\gamma ^1=\sigma _3`$. We use the metric $`g^{\mu \nu }=\mathrm{diag}(1,-1)`$, and $`\sigma _i`$ are the Pauli matrices. The superfield has the following expansion: $$\mathrm{\Phi }(x^\mu ,\theta _\alpha )=\varphi (x^\mu )+i\overline{\theta }\psi (x^\mu )+\frac{i}{2}\overline{\theta }\theta F(x^\mu ),$$ (14) where $`\varphi `$ and $`F`$ are real bosonic (even) scalar fields and $`\psi _\alpha `$ is a Majorana spinor field. The SUSY sine-Gordon equation is: $$\overline{D}D\mathrm{\Phi }=2i\mathrm{sin}\mathrm{\Phi },$$ (15) where $`D_\alpha =\partial _{\theta ^\alpha }+i(\gamma ^\mu \theta )_\alpha \partial _\mu `$, and in components it has the form: $`(\gamma ^\mu \partial _\mu +\mathrm{cos}\varphi )\psi =0`$ (16) $`\varphi _{xx}-\varphi _{tt}={\displaystyle \frac{1}{2}}(\mathrm{sin}(2\varphi )-i\overline{\psi }\psi \mathrm{sin}\varphi ).`$ (17) This version of the SUSY sine-Gordon equation has been studied by Kulish and Tsyplyaev using the Inverse Scattering Method. They also found super-kink solutions.

## IV Super-Hirota operators

In order to apply the bilinear formalism to these equations one has to define a SUSY bilinear operator. We are going to consider the following general $`N=1`$ SUSY bilinear expression $$S_N(f,g)=\underset{i=0}{\overset{N}{\sum }}c_i(D^{N-i}f)(D^ig),$$ (18) where $`D`$ is the covariant derivative and $`f`$, $`g`$ are Grassmann-valued functions (odd or even).
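Before stating the main theorem, a minimal component-level sketch (our own notation, not the paper's) makes the superspace calculus tangible: storing a superfield $`\mathrm{\Phi }=a(x)+\theta b(x)`$ as the pair $`(a,b)`$, the covariant derivative acts as $`D:(a,b)\mapsto (b,a_x)`$, from which $`D^2=\partial _x`$ is immediate.

```python
import sympy as sp

x = sp.symbols('x')

def D(phi):
    # Covariant derivative D = d_theta + theta*d_x on a superfield stored
    # as the component pair (a, b), i.e. Phi = a(x) + theta*b(x).
    a, b = phi
    return (b, sp.diff(a, x))

xi = sp.Function('xi')(x)     # fermionic component
u = sp.Function('u')(x)       # bosonic component
Phi = (xi, u)                 # the fermionic superfield of equation (8)

print(D(D(Phi)))              # -> (xi', u'), i.e. D^2 Phi = d/dx Phi
print(D(D(D(D(D(D(Phi)))))))  # -> third x-derivatives: D^6 = d^3/dx^3
```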
We shall prove the following Theorem: The general $`N=1`$ SUSY bilinear expression (18) is super-gauge invariant, i.e. for $`\mathrm{\Theta }=kx+\omega t+\theta \widehat{\zeta }+\mathrm{\cdots }`$ (linears) ($`\widehat{\zeta }`$ is a Grassmann parameter) $$S_N(e^\mathrm{\Theta }f,e^\mathrm{\Theta }g)=e^{2\mathrm{\Theta }}S_N(f,g),$$ if and only if $$c_i=c_0(-1)^{i|f|+\frac{i(i+1)}{2}}\left[\begin{array}{c}N\\ i\end{array}\right],$$ where the super-binomial coefficients are defined by: $$\left[\begin{array}{c}N\\ i\end{array}\right]=\{\begin{array}{cc}\left(\begin{array}{c}\text{[N/2]}\\ \text{[i/2]}\end{array}\right)\hfill & \text{if }(N,i)\not\equiv (0,1)\mathrm{mod}2\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array}$$ $`|f|`$ is the Grassmann parity of the function $`f`$, defined by: $$|f|=\{\begin{array}{cc}1\hfill & \text{if }f\text{ is odd (fermionic)}\hfill \\ 0\hfill & \text{if }f\text{ is even (bosonic)}\hfill \end{array}$$ and $`[k]`$ is the integer part of the real number $`k`$ ($`[k]\le k<[k]+1`$). Proof: First we consider $`N`$ even, and we take it in the form $`N=2P`$. In this case we have: $$S_N(f,g)=\underset{i=0}{\overset{N}{\sum }}c_i(D^{N-i}f)(D^ig)=\underset{i=0}{\overset{P}{\sum }}c_{2i}(\partial ^{P-i}f)(\partial ^ig)+\underset{j=0}{\overset{P-1}{\sum }}c_{2j+1}(\partial ^{P-j-1}Df)(\partial ^jDg)$$ Imposing the super-gauge invariance and expanding the covariant derivatives we obtain: $$\underset{n\ge 0}{\sum }\underset{m\ge 0}{\sum }\left(\underset{i=0}{\overset{P}{\sum }}c_{2i}\left(\begin{array}{c}i\\ n\end{array}\right)\left(\begin{array}{c}P-i\\ m\end{array}\right)k^{P-n-m}\right)(\partial ^mf)(\partial ^ng)+$$ $$+\underset{n^{}\ge 0}{\sum }\underset{m^{}\ge 0}{\sum }\mathrm{\Lambda }\left(\underset{j=0}{\overset{P-1}{\sum }}c_{2j+1}\left(\begin{array}{c}j\\ n^{}\end{array}\right)\left(\begin{array}{c}P-j-1\\ m^{}\end{array}\right)k^{P-n^{}-m^{}-1}\right)(\partial ^{m^{}}f)(\partial ^{n^{}}Dg)+$$ $$+\underset{n^{}\ge 0}{\sum }\underset{m^{}\ge 0}{\sum }\mathrm{\Lambda }(-1)^{|f|+1}\left(\underset{j=0}{\overset{P-1}{\sum }}c_{2j+1}\left(\begin{array}{c}j\\ n^{}\end{array}\right)\left(\begin{array}{c}P-j-1\\ m^{}\end{array}\right)k^{P-n^{}-m^{}-1}\right)(\partial ^{m^{}}Df)(\partial ^{n^{}}g)+$$ $$+\underset{n\ge 0}{\sum }\underset{m\ge 0}{\sum }\left(\underset{j=0}{\overset{P-1}{\sum }}c_{2j+1}\left(\begin{array}{c}j\\ n\end{array}\right)\left(\begin{array}{c}P-j-1\\ m\end{array}\right)k^{P-n-m-1}\right)(\partial ^mDf)(\partial ^nDg)=$$ $$=\underset{i=0}{\overset{P}{\sum }}c_{2i}(\partial ^{P-i}f)(\partial ^ig)+\underset{j=0}{\overset{P-1}{\sum }}c_{2j+1}(\partial ^{P-j-1}Df)(\partial ^jDg)$$ where $`\mathrm{\Lambda }=\widehat{\zeta }+\theta k`$. From this, we must have, for every $`m`$, $`n`$ subject to $`0\le n\le i\le P-m`$ and $`j\le P-m^{}`$: $$\underset{i=0}{\overset{P}{\sum }}c_{2i}\left(\begin{array}{c}i\\ n\end{array}\right)\left(\begin{array}{c}P-i\\ m\end{array}\right)k^{P-n-m}=c_{2n}\delta _{P,n+m}$$ (19) Also, due to the fact that the super-gauge invariance has to be obeyed for every $`f`$ and $`g`$, we must have $`c_{2j+1}=0.`$ The discrete equation (19) has been solved in the literature.
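The closed form in the theorem is easy to tabulate; the sketch below (our own helper functions, not part of the paper) reproduces the even-$`N`$ and odd-$`N`$ solutions given in the proof.

```python
from math import comb

def super_binomial(N, i):
    # [N i] = C([N/2], [i/2]) unless (N,i) = (even, odd), in which case 0
    if N % 2 == 0 and i % 2 == 1:
        return 0
    return comb(N // 2, i // 2)

def c(N, i, parity_f, c0=1):
    # c_i = c0 * (-1)^{i|f| + i(i+1)/2} * [N i], per the theorem above
    return c0 * (-1)**(i*parity_f + i*(i + 1)//2) * super_binomial(N, i)

# For N = 2P the odd coefficients vanish and c_{2i} = c0 (-1)^i C(P, i):
print([c(4, i, parity_f=0) for i in range(5)])   # -> [1, 0, -2, 0, 1]
# For N = 2P+1 all coefficients survive, e.g. N = 3 with bosonic f:
print([c(3, i, parity_f=0) for i in range(4)])   # -> [1, -1, -1, 1]
```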
The general solution of the discrete equation (19) is given by: $`c_{2i}`$ $`=`$ $`c_0(-1)^i\left(\begin{array}{c}P\\ i\end{array}\right)`$ (22) $`c_{2j+1}`$ $`=`$ $`0`$ (23) In the case of $`N=2P+1`$ we proceed in a similar manner and obtain the following system: $$\underset{i=0}{\overset{P}{\sum }}c_{2i}\left(\begin{array}{c}i\\ n\end{array}\right)\left(\begin{array}{c}P-i\\ m\end{array}\right)k^{P-n-m}=c_{2n}\delta _{P,n+m}$$ (24) $$\underset{j=0}{\overset{P}{\sum }}c_{2j+1}\left(\begin{array}{c}j\\ n\end{array}\right)\left(\begin{array}{c}P-j-1\\ m\end{array}\right)k^{P-n-m-1}=c_{2n+1}\delta _{P,n+m+1}$$ (25) $$(-1)^{|f|}c_{2i}+c_{2i+1}=0$$ (26) This system has the following solution: $`c_{2i}`$ $`=`$ $`c_0(-1)^i\left(\begin{array}{c}P\\ i\end{array}\right)`$ (29) $`c_{2i+1}`$ $`=`$ $`c_0(-1)^{i+1+|f|}\left(\begin{array}{c}P\\ i\end{array}\right)`$ (32) The relations (22), (29) can be written in the compact form $$c_i=c_0(-1)^{i|f|+\frac{i(i+1)}{2}}\left[\begin{array}{c}N\\ i\end{array}\right],$$ and the theorem is proved. We mention that the super-bilinear operator proposed by McArthur and Yung is a particular case of the above super-Hirota operator. We shall denote the bilinear operator by $$S_N(f,g):=𝐒_x^Nf\cdot g$$ Also, one can easily obtain the following properties: $$𝐒_x^{2N}f\cdot g=𝐃_x^Nf\cdot g$$ (33) $$𝐒_x^{2N+1}e^{\eta _1}\cdot e^{\eta _2}=[\widehat{\zeta }_1-\widehat{\zeta }_2+\theta (k_1-k_2)](k_1-k_2)^Ne^{\eta _1+\eta _2}$$ (34) $$𝐒_x^{2N+1}1\cdot e^\eta =(-1)^{N+1}(\widehat{\zeta }+\theta k)k^Ne^\eta =(-1)^{N+1}𝐒_x^{2N+1}e^\eta \cdot 1$$ (35) where $`\eta _i=k_ix+\theta \widehat{\zeta }_i`$ and the $`\widehat{\zeta }_i`$ are odd Grassmann numbers.

## V Bilinear SUSY KdV-type equations

In order to use the super-bilinear operators defined above we consider the following nonlinear substitution for the superfield: $$\mathrm{\Phi }(t,x,\theta )=2D^3\mathrm{log}\tau (t,x,\theta )$$ (36) Introducing this into the SUSY KdV equation (9) we obtain the following super-bilinear form: $$(𝐒_x𝐃_t+𝐒_x^7)\tau \cdot \tau =0,$$ (37) which is equivalent to the form found by McArthur and Yung, $$𝐒_x(𝐃_t+𝐃_x^3)\tau \cdot \tau =0.$$ (38) The 1-super-soliton solution has the following structure: $$\tau ^{(1)}=1+e^{kx-k^3t+\theta \widehat{\zeta }+\eta ^{(0)}}$$ (39) In order to find the 2-super-soliton solution we consider the form $$\tau ^{(2)}=1+e^{\eta _1}+e^{\eta _2}+e^{\eta _1+\eta _2+A_{12}}$$ (40) and we have to find the factor $`\mathrm{exp}A_{12}`$, where $`\eta _i=k_ix-k_i^3t+\theta \widehat{\zeta }_i+\eta _i^{(0)}.`$ The equation for $`\mathrm{exp}A_{12}`$ is the following: $$[(\widehat{\zeta }_1-\widehat{\zeta }_2)+\theta (k_1-k_2)](k_1-k_2)=\mathrm{exp}A_{12}[(\widehat{\zeta }_1+\widehat{\zeta }_2)+\theta (k_1+k_2)](k_1+k_2)$$ (41) We assume that $`\mathrm{exp}A_{12}`$ depends only on $`k_i`$ and $`\widehat{\zeta }_i`$, $`i=1,2`$, and that in the bosonic limit ($`\widehat{\zeta }_i=0`$) it has the standard form $`(k_1-k_2)^2/(k_1+k_2)^2`$. Accordingly, in order to solve (41) we consider the ansatz: $$\mathrm{exp}A_{12}=(\frac{k_1-k_2}{k_1+k_2})^2+\widehat{a}(k_1,k_2)\widehat{\zeta }_1+\widehat{b}(k_1,k_2)\widehat{\zeta }_2+\gamma (k_1,k_2)\widehat{\zeta }_1\widehat{\zeta }_2$$ (42) where $`\widehat{a}`$, $`\widehat{b}`$ are odd Grassmann functions depending on $`k_1`$ and $`k_2`$ and $`\gamma `$ is an even Grassmann function.
Introducing (42) into (41) we find that $$\widehat{a}(k_1,k_2)\widehat{\zeta }_1+\widehat{b}(k_1,k_2)\widehat{\zeta }_2+\gamma (k_1,k_2)\widehat{\zeta }_1\widehat{\zeta }_2=0$$ and $$k_1\widehat{\zeta }_2=k_2\widehat{\zeta }_1$$ So the interaction effect remains the same as in the bosonic case. One can easily verify that the N-super-soliton solution is given by $$\tau ^{(N)}=\underset{\mu =0,1}{\sum }\mathrm{exp}(\underset{i=1}{\overset{N}{\sum }}\mu _i\eta _i+\underset{i<j}{\sum }A_{ij}\mu _i\mu _j),$$ (43) where $$\eta _i=k_ix-k_i^3t+\theta \widehat{\zeta }_i+\eta _i^{(0)}$$ $$\mathrm{exp}A_{ij}=\left(\frac{k_i-k_j}{k_i+k_j}\right)^2$$ $$k_i\widehat{\zeta }_j=k_j\widehat{\zeta }_i$$ For N=1 SUSY Sawada-Kotera-Ramani (12), using the same nonlinear substitution, $$\mathrm{\Phi }=2D^3\mathrm{log}\tau (t,x,\theta ),$$ we find the following super-bilinear form: $$(𝐒_x𝐃_t+𝐒_x^{11})\tau \cdot \tau =0$$ (44) In a similar way we find the N-super-soliton solution $$\tau ^{(N)}=\underset{\mu =0,1}{\sum }\mathrm{exp}(\underset{i=1}{\overset{N}{\sum }}\mu _i\eta _i+\underset{i<j}{\sum }A_{ij}\mu _i\mu _j),$$ (45) where $$\eta _i=k_ix-k_i^5t+\theta \widehat{\zeta }_i+\eta _i^{(0)}$$ $$\mathrm{exp}A_{ij}=\left(\frac{k_i-k_j}{k_i+k_j}\right)^2\frac{k_i^2-k_ik_j+k_j^2}{k_i^2+k_ik_j+k_j^2}$$ $$k_i\widehat{\zeta }_j=k_j\widehat{\zeta }_i$$ For the N=1 SUSY Hirota-Satsuma equation (13), using the nonlinear substitution $$\mathrm{\Phi }=2D\mathrm{log}\tau (t,x,\theta )$$ one obtains the super-bilinear form: $$(𝐒_x^5𝐃_t-𝐒_x^3-𝐒_x𝐃_t)\tau \cdot \tau =0$$ (46) The N-super-soliton solution is $$\tau ^{(N)}=\underset{\mu =0,1}{\sum }\mathrm{exp}(\underset{i=1}{\overset{N}{\sum }}\mu _i\eta _i+\underset{i<j}{\sum }A_{ij}\mu _i\mu _j),$$ (47) where $$\eta _i=k_ix-k_it/(k_i^2-1)+\theta \widehat{\zeta }_i+\eta _i^{(0)}$$ $$\mathrm{exp}A_{ij}=\left(\frac{k_i-k_j}{k_i+k_j}\right)^2\frac{(k_i-k_j)^2+k_ik_j[(k_i-k_j)^2-(k_i^2-1)(k_j^2-1)]}{(k_i-k_j)^2-k_ik_j[(k_i-k_j)^2-(k_i^2-1)(k_j^2-1)]}$$ $$k_i\widehat{\zeta }_j=k_j\widehat{\zeta }_i$$ We can ask whether it is possible to obtain super-bilinear forms for SUSY equations of the nonlinear Klein-Gordon type. In fact, the SUSY sine-Gordon equation (15) can be written in the following form: $$[D_T,D_X]\mathrm{\Phi }(T,X,\theta ,\theta _t)=2i\mathrm{sin}\mathrm{\Phi }(T,X,\theta ,\theta _t)$$ (48) where we have introduced the light-cone variables $`X:=i(t-x)/2`$, $`T:=i(t+x)/2`$, and $`\theta :=\theta _1`$, $`\theta _t:=\theta _2.`$ The covariant derivatives are $`D_X:=\partial _\theta +\theta \partial _X`$, $`D_T:=\partial _{\theta _t}+\theta _t\partial _T`$, and the square bracket means the commutator. Using the nonlinear substitution ($`G`$ and $`F`$ are even functions) $$\mathrm{\Phi }=2i\mathrm{log}\left(\frac{G}{F}\right),$$ we find the following quadrilinear expression: $$2i\{F^2(G[D_T,D_X]G-[D_TG,D_XG])-G^2(F[D_T,D_X]F-[D_TF,D_XF])\}=F^4-G^4$$ It is easy to see that the bilinear operator $$𝐒_{XT}\tau \cdot \tau :=\tau [D_T,D_X]\tau -[D_T\tau ,D_X\tau ]$$ is super-gauge invariant with respect to the super-gauge $$e^\mathrm{\Theta }:=e^{(kx+\omega t+\theta \widehat{\zeta }+\theta _t\widehat{\mathrm{\Omega }}+\text{linears})}.$$ Accordingly we can choose the following super-bilinear form, formally the same as the standard sine-Gordon one, $`𝐒_{XT}G\cdot G`$ $`=`$ $`{\displaystyle \frac{1}{2i}}(F^2-G^2)`$ (49) $`𝐒_{XT}F\cdot F`$ $`=`$ $`{\displaystyle \frac{1}{2i}}(G^2-F^2)`$ (50) but it is not clear how to compute the super-kink solutions. From these examples it seems that gauge invariance is a useful concept for the bilinear formalism in the supersymmetric case.
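Since the three N-super-soliton formulas above share the same combinatorial skeleton, a single routine can assemble their bosonic parts. The sketch below is our own illustrative helper (not from the paper), with the dispersion and phase-shift factor supplied as functions.

```python
from itertools import product
import sympy as sp

x, t = sp.symbols('x t')

def tau_N(ks, omega, expA):
    # Bosonic part of equations (43)/(45)/(47): sum over mu in {0,1}^N of
    # exp(sum_i mu_i eta_i) * prod_{i<j} exp(A_ij)^{mu_i mu_j}
    etas = [k*x - omega(k)*t for k in ks]
    tau = 0
    for mu in product((0, 1), repeat=len(ks)):
        term = sp.exp(sum(m*e for m, e in zip(mu, etas)))
        for i in range(len(ks)):
            for j in range(i + 1, len(ks)):
                term *= expA(ks[i], ks[j])**(mu[i]*mu[j])
        tau += term
    return tau

k1, k2 = sp.symbols('k1 k2', positive=True)
# 2-soliton of SUSY KdV (bosonic sector): omega = k^3 and the KdV phase shift
tau2 = tau_N([k1, k2], lambda k: k**3, lambda a, b: ((a - b)/(a + b))**2)
# For SUSY Sawada-Kotera-Ramani one would pass omega = k^5 and the
# phase-shift factor of equation (45) instead.
```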
As a consequence we were able to bilinearize several supersymmetric equations of KdV type. The case of SUSY versions of mKdV, NLS, KP, etc. requires further investigation, because it seems that only certain supersymmetric extensions are super-bilinearizable. Although we do not know if the SUSY extensions of Sawada-Kotera and Hirota-Satsuma proposed above are integrable in the sense of Lax, they admit a super-bilinear form and also an N-super-soliton solution. Accordingly, integrability in the sense of Hirota is satisfied. Probably a singularity analysis implemented on the super-bilinear form will reveal the connection between Hirota integrability and Lax integrability. ACKNOWLEDGEMENTS: This work was supported by grant Nr. B1/18 MCT, 1998 of the Romanian Ministry of Research. Part of this work was done at the Institute of Theoretical Physics, University of Bern, Switzerland. The author wants to express his sincere thanks to Prof. H. Leutwyler for his hospitality.
# The LCO/Palomar 10,000 km s⁻¹ Cluster Survey. II. Constraints on Large-Scale Streaming

## 1. Introduction

For the past decade, establishing the scale of the largest bulk flows has been one of the major goals of observational cosmology. In the late 1980s, the "7-Samurai" (7S) group used the $`D_n`$-$`\sigma `$ relation (footnote: $`D_n`$-$`\sigma `$ is one example of a class of elliptical galaxy scaling relations known as the Fundamental Plane (FP); we adopt the latter term for the remainder of the paper) to show that the peculiar velocity field in the nearby universe, $`cz\lesssim 4000`$ km s⁻¹, is dominated by a coherent, large-amplitude ($`v_B\sim 500`$ km s⁻¹) bulk motion in the direction of the "Great Attractor" (GA) near $`l=310^{\circ }`$, $`b=10^{\circ }`$ (Dressler et al. 1987; Lynden-Bell et al. 1988). Willick (1990) obtained Tully-Fisher (TF) data for a sample of over 300 field spirals, and found that the Perseus-Pisces supercluster, at a distance of ∼50 h⁻¹ Mpc and on the opposite side of the sky at $`l\approx 110^{\circ }`$, $`b\approx -30^{\circ }`$, moves in the same direction as the 7S ellipticals at a velocity of ∼400 km s⁻¹. A similar result was obtained by Han & Mould (1992) from a cluster TF sample in Perseus-Pisces. Mathewson et al. (1992) analyzed over 1300 Southern-sky TF spirals, and found that the flow identified by the 7S continued beyond the GA, because spirals in the GA region itself were moving rapidly away from the Local Group (LG). Courteau et al. (1993) used a preliminary version of the Mark III Catalog of Galaxy Peculiar Velocities (Willick et al. 1997a) to measure the bulk flow for the entire volume within 60 h⁻¹ Mpc, finding $`v_B=360\pm 40`$ km s⁻¹ toward $`l=294^{\circ }`$, $`b=0^{\circ }`$. A recent reanalysis of the Mark III Catalog using the POTENT method by Dekel et al. (1998) finds $`v_B=370\pm 110`$ km s⁻¹ toward $`l=305^{\circ }`$, $`b=14^{\circ }`$ for the volume within 50 h⁻¹ Mpc. Thus, TF and FP data sets acquired through the early 1990s agreed on the reality of coherent bulk flows within ∼50 h⁻¹ Mpc. These motions were measured relative to the reference frame in which the dipole anisotropy of the Cosmic Microwave Background (CMB) vanishes, henceforward the "CMB" frame. The LG itself moves with a velocity of 627 km s⁻¹ toward $`l=276^{\circ }`$, $`b=30^{\circ }`$ in the CMB frame (Kogut et al. 1993). This direction is within 30°–40° of the observed bulk flows, suggesting that the LG motion itself is generated, at least in part, on ≳50 h⁻¹ Mpc scales. The studies cited above were not, however, deep enough to establish whether the bulk flows ended, or converged, beyond 50 h⁻¹ Mpc. Evidence of nonconvergence beyond this distance was first provided by the work of Lauer & Postman (1994, LP94), who used brightest cluster galaxies (BCGs) as a distance indicator. LP94 analyzed a sample of 119 BCGs out to a distance of 15,000 km s⁻¹, and concluded that the entire volume out to that distance was moving coherently at ∼700 km s⁻¹ in the direction $`l=343^{\circ }`$, $`b=52^{\circ }`$. This flow vector was about $`60^{\circ }`$ away from the motions detected in the earlier FP- and TF-based studies, and was on a much larger scale. However, a number of recent studies have challenged the validity of both the LP94 result in particular, and very large-scale (≳100 h⁻¹ Mpc) bulk flows in general. Giovanelli et al.
(1996) analyzed a large sample of TF spirals toward the apex and antapex of the LP94 flow direction, and found zero net peculiar velocity along that axis out to a distance of ∼70 h⁻¹ Mpc. Riess, Press, & Kirshner (1995) used 13 Supernovae of Type Ia (SN Ia), each with a distance error less than 10%, to constrain the magnitude and direction of the bulk flow within 10,000 km s⁻¹ redshift. They showed that their data set was inconsistent with the LP94 flow vector at 99% confidence. More recently, Giovanelli et al. (1998a,b) and Dale et al. (1998) have analyzed I-band field and cluster TF samples to estimate the convergence scale. Giovanelli et al. (1998a,b) found convergence at ≲6000 km s⁻¹ using primarily field spirals. Dale et al. (1998) combined the distant ($`cz>4500`$ km s⁻¹) portion of the Giovanelli et al. (1998a,b) sample with 522 spirals in 52 Abell clusters at distances between ∼50 and ∼200 h⁻¹ Mpc. The effective depth of this combined sample was ∼9500 km s⁻¹. Dale et al. found a bulk velocity consistent with zero, and at most 200 km s⁻¹, for this volume. The EFAR group (Wegner et al. 1996, 1998) has obtained FP data for ∼500 ellipticals in 84 clusters in two patches of the sky. They also find generally small cluster peculiar velocities in the mean, and in particular rule out the LP94 flow at 99% confidence (Saglia et al. 1998). However, the limited sky coverage of the EFAR sample means that it is not sensitive to the full range of possible flow directions. In contrast, the recently completed SMAC survey of elliptical galaxies (Hudson et al. 1998a,b) has found evidence for a large-scale bulk flow, though not in the LP94 direction. The SMAC group measured FP distances for 697 early-type galaxies in 56 clusters with $`cz\le 12,000`$ km s⁻¹, and found a bulk flow of $`640\pm 200`$ km s⁻¹ in the direction $`l=260^{\circ }\pm 15^{\circ }`$, $`b=-1^{\circ }\pm 12^{\circ }`$. This flow vector is within 40°–50° of, and is similar in amplitude to, the motions detected 5–10 years ago by the 7S, Willick (1990), Mathewson et al. (1992), and Courteau et al. (1993). However, the SMAC data set has about twice the effective depth of those earlier surveys, and thus suggests that the convergence scale could be ≳8000 km s⁻¹. In short, while a number of studies agree on the reality of a significant bulk flow within 50 h⁻¹ Mpc, the persistence of such motions to distances beyond ∼80 h⁻¹ Mpc remains controversial. The purpose of this paper is to address this issue using a new data set. The outline of the paper is as follows. In § 2, we describe the new TF data set, known as the LP10K sample. In § 3, we describe the maximum-likelihood method used to constrain the bulk flow vector. In § 4, we apply this method to the LP10K data set. In § 5, we describe Monte-Carlo simulations of the sample, and discuss how these simulations are used to assess the statistical significance of the results. Finally, in § 6 we further discuss and summarize the main results of the paper.

## 2. The LP10K Survey

In early 1992 the author initiated a survey designed specifically to test whether the bulk streaming observed within 50–60 h⁻¹ Mpc persists to distances ≳100 h⁻¹ Mpc.
The survey targeted spiral and elliptical galaxies in 15 Abell clusters with published redshifts in the range $`9000\le cz\le 12,000`$ km s⁻¹, and utilized the TF and FP relations as distance indicators (footnote: as discussed below, the upper end of the redshift range for these clusters turns out to be closer to 13,000 km s⁻¹, the value quoted in the Abstract). To maximize sensitivity to a bulk flow, the 15 clusters were selected to be distributed as isotropically as possible on the sky. Limitations to isotropy were imposed only by the requirement of Galactic latitude $`|b|\ge 20^{\circ }`$, to minimize extinction effects, and by the finite number of clusters observed. Southern-sky clusters were observed from the Las Campanas Observatory (LCO), while Northern-sky clusters were observed from Palomar Observatory. The survey is thus known as the LCO/Palomar 10,000 km s⁻¹ Cluster Survey (LP10K). The LP10K observing runs spanned the period 1992 March–1995 September, and totaled over 100 nights of observations. The observing strategy, data reduction methods, and the modeling of the TF relation were described in detail by Willick (1999a), hereafter Paper I. Although the LP10K TF data set is fully analyzed, observations and reductions of the elliptical galaxy portion of the survey are ongoing, with completion expected in 2–3 years. The results presented here are derived from TF data only. The full TF data set, along with a variety of tests of the accuracy and repeatability of the observations, will be presented in the third paper in this series (Willick 1999b, Paper III). Figure 1 shows the sky positions of the 15 LP10K clusters in Galactic coordinates. (The equatorial coordinates of the clusters are listed in Table 1 of Paper I.) The clusters are seen to be well distributed around the sky above $`|b|=20^{\circ }.`$ There are clusters near both the apex and the antapex of the CMB dipole at $`l=276^{\circ }`$, $`b=30^{\circ }`$, ensuring sensitivity to flows along this axis. The axis corresponding to the LP94 motion is similarly well sampled. An unexpected feature of the survey, discussed in Paper I, was that many TF galaxies turned out to have redshifts well in excess of the published cluster velocities. To appreciate the extent of this effect, Figure 1 indicates the mean redshift $`cz`$ of each cluster TF sample by point type: solid circles indicate mean redshifts in the original target range, $`cz\le 12,000`$ km s⁻¹ (in practice all of these have $`cz>11,000`$ km s⁻¹ as well); open circles show clusters with $`12,000<cz\le 15,000`$; and starred symbols clusters with $`cz>15,000`$ km s⁻¹. It is apparent that a sizable majority of LP10K sample clusters have mean redshifts greater than 12,000 km s⁻¹, the upper limit of the original target range. Indeed, the sample includes a significant number of objects with $`z\sim 0.1.`$ The number of TF galaxies per cluster is denoted by point size. The cluster TF sample sizes range from 8 to 26 objects, with an average of about 16 galaxies per cluster. Table 1 provides additional information on the cluster TF samples. Column 1 gives the cluster name according to the catalogue of Abell, Corwin, & Olowin (1989; ACO).
Columns 2 and 3 give the cluster Galactic coordinates $`l`$ and $`b.`$ There follows a listing of three redshifts for each cluster: column 4 lists an updated value derived from a literature search, typically weighted by early-type galaxies in the cluster core; column 5 lists the mean redshift of all LP10K TF galaxies in the cluster field; and column 6 lists the mean sample redshift when only objects with $`7000\le cz\le 15,000`$ km s⁻¹ are included (the significance of this redshift cut is discussed below). All redshifts are given with respect to the CMB frame. Column 7 lists the total number of TF galaxies in each cluster field; these values correspond to the point sizes in Figure 1 as well as the redshifts in column 5. Column 8 lists the number of galaxies with $`7000\le cz\le 15,000`$ km s⁻¹ and corresponds to the redshift in column 6. Column 9 provides keys to the sources from which the literature-based redshifts are derived.

[Figure 1: Sky positions of the fifteen LP10K clusters. Filled circles represent clusters for which the mean redshift of the LP10K TF sample is $`\le 12,000`$ km s⁻¹. Open circles represent clusters with mean TF sample redshift between 12,000 and 15,000 km s⁻¹, and starred symbols those with mean $`cz>15,000`$ km s⁻¹. The number of galaxies in the cluster TF sample is symbolized by the point size, as indicated by the key at the lower left. The "$`\times `$" enclosed by a circle shows the direction of the LG motion with respect to the CMB.]

The prevalence of relatively high-redshift objects in the LP10K TF data set is a consequence of the way sample galaxies were selected, namely, on the basis of magnitude and morphology from CCD images (cf. Paper I), and proximity to the cluster center, not on prior knowledge of the redshift. Most sample galaxies, in fact, did not have a measured redshift prior to the LP10K observations. The high-redshift galaxies (which we define here, somewhat arbitrarily, as those with $`cz>15,000`$ km s⁻¹) present a problem only insofar as they reduce the number of objects in the 9000–12,000 km s⁻¹ redshift shell at which the survey was originally targeted. On the other hand, the presence of higher-redshift galaxies provides some sensitivity to a bulk flow on scales considerably larger than the targeted shell. To clarify the analysis presented in § 3, it will prove useful to distinguish between the full LP10K TF sample and a subsample that maximizes sensitivity to the originally targeted redshift shell. We construct this subsample by excluding all galaxies with $`cz<7000`$ km s⁻¹ or $`cz>15,000`$ km s⁻¹. This cut yields the cluster redshifts and sample sizes given in columns 6 and 8 of Table 1. We henceforward refer to this subsample as the "extended target range" (ETR). The ETR subsample comprises 172 galaxies, most of which are bona fide cluster members, judging from the fairly good agreement between $`cz_{\mathrm{LIT}}`$ and $`cz_{\mathrm{ETR}}`$ given in Table 1. The notable exceptions to this trend are the clusters A3202 and A3381, for which the ETR redshifts are, respectively, ∼900 km s⁻¹ greater and ∼1300 km s⁻¹ less than the literature values. A3202 is, however, the cluster for which the literature value is the most uncertain, and we are inclined to regard the LP10K ETR redshift as more accurate.
For A3381 the source of the discrepancy is probably significant subclustering within the cluster; the LP10K TF sample simply weights a lower-redshift clump more strongly than do the studies from the literature. In summary, it is reasonable to conclude that the ETR subsample is representative of the fifteen LP10K clusters per se. This is not to say that all ETR galaxies are members of the virialized cluster cores; indeed, as shown in § 4.2, most almost certainly are not, because a Hubble expansion plus bulk flow model turns out to be a much better description of the TF data than a model which assumes all members of a given cluster are equidistant. A second point is that the ETR shell appears to be better described as sampling the redshift range 9000–13,000 km s⁻¹, as compared with the original upper limit of 12,000 km s⁻¹. Distinct from the ETR is the entire LP10K TF data set irrespective of redshift, henceforward the "full sample" (FS). The FS comprises 244 galaxies. Sixty-four of the 72 galaxies not in the ETR have redshifts $`>15,000`$ km s⁻¹, while the remaining eight have redshifts between 5000 and 7000 km s⁻¹.

## 3. Method

As in Paper I, we use a maximum-likelihood method based on the inverse TF relation, but with two key differences. First, we predict distances from redshift and position on the sky using a Hubble expansion plus bulk flow model. (In Paper I bulk flow was omitted.) The distance in Mpc to the $`i`$th galaxy is given by $$d_i=H_0^{-1}\left[cz_i-𝐯_p\cdot \widehat{𝐧}_i\right],$$ (1) where $`\widehat{𝐧}_i`$ is a unit vector in the direction of the galaxy, and $`cz_i`$ is its CMB-frame redshift in km s⁻¹. (We also consider, and then reject, a model in which $`cz_i`$ is replaced by the mean redshift of the cluster to which the galaxy nominally belongs; cf. § 4.2 for details.) One or more components of the bulk flow vector $`𝐯_p`$ may be treated as free parameters in the maximum-likelihood fit, but $`𝐯_p`$ is the same for every galaxy in the sample. As in Paper I we take $`H_0=65`$ km s⁻¹ Mpc⁻¹; the adopted value affects only the zero point of the TF relation, not the derived bulk flow vector. Given $`d_i,`$ the predicted value of the circular velocity parameter is calculated as $$\eta _{i,pred}=e(m_i-5\mathrm{log}d_i-25-D)+\alpha \mu _i+\beta c_i$$ (2) where $`m_i,`$ $`\mu _{e,i},`$ and $`c_i`$ are the observed apparent magnitude, effective central surface brightness, and concentration index of galaxy $`i,`$ respectively (cf. Paper I for details). The quantities $`D`$ and $`e`$ are the zero point and slope of the TF relation, while $`\alpha `$ and $`\beta `$ are additional TF parameters whose significance is discussed in Paper I. The other important quantity that enters the likelihood analysis is the observed value of the circular velocity parameter, $`\eta _i.`$ Recall from Paper I that $`\eta _i`$ is defined in terms of an additional TF parameter, $`f_s,`$ which determines the radius at which the rotation velocity is evaluated from the full RC. This is known as the "$`f_s`$-formulation" of the TF relation (Paper I), and is the approach used here. (An alternative approach discussed in Paper I, the "$`x_t`$-formulation," is essentially equivalent and is not used in this paper.) The second key difference from Paper I is that we now properly take into account the statistical effects of having multiple photometric and/or spectroscopic measurements for a large number of individual galaxies.
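Equations (1) and (2) are simple to express in code. The following is a sketch with our own illustrative names and values (only $`H_0=65`$ km s⁻¹ Mpc⁻¹ comes from the text), meant to show the forward calculation rather than reproduce the authors' pipeline.

```python
import numpy as np

H0 = 65.0   # km/s/Mpc, as adopted in the text

def model_distance(cz, n_hat, v_bulk):
    # Equation (1): d = [cz - v_bulk . n_hat] / H0, with cz, v_bulk in km/s
    return (cz - np.dot(v_bulk, n_hat)) / H0

def eta_pred(m, d, D, e, alpha, beta, mu_e, c_idx):
    # Equation (2): eta_pred = e(m - 5 log10 d - 25 - D) + alpha*mu + beta*c
    return e*(m - 5.0*np.log10(d) - 25.0 - D) + alpha*mu_e + beta*c_idx

# e.g. a galaxy at cz = 12000 km/s lying along a 700 km/s bulk flow:
d = model_distance(12000.0, np.array([1.0, 0.0, 0.0]),
                   np.array([700.0, 0.0, 0.0]))   # -> about 173.8 Mpc
```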
In Paper I we included each photometry/spectroscopy data pair for a particular galaxy as a "data object," and all such objects entered equally into the likelihood computation. In doing so we neglected the correlations among the various data objects derived from a single galaxy, but crudely estimated their effect by scaling the likelihood statistic by the ratio of distinct galaxies to the total number of data objects. This was a conservative approach, as the actual number of degrees of freedom is greater than the number of distinct galaxies, because the repeat measurements are partially independent. Such an approach was acceptable in Paper I because the effects we sought to demonstrate had very strong statistical significance. The bulk flow parameters, however, have relatively large errors, and it is thus important to employ a rigorous likelihood statistic. This is done as follows. Suppose that the $`i`$th distinct galaxy was observed only once spectroscopically, yielding circular velocity parameter $`\eta _i,`$ but has $`n_i`$ photometric measurements (in practice, $`n_i\le 4`$), yielding apparent magnitudes $`m_{ij},`$ $`j=1,\mathrm{},n_i.`$ Then the conditional probability density for the observed velocity width, given the $`n_i`$ photometry measurements and the redshift, is $$P(\eta _i|m_{ij},cz_i)=\frac{1}{\sqrt{2\pi \sigma _{\mathrm{eff}}^2}}\mathrm{exp}\left\{-\frac{\left[\eta _i-\eta _{i,pred}(\overline{m}_i)\right]^2}{2\sigma _{\mathrm{eff}}^2}\right\},$$ (3) where: 1. $`\eta _{i,pred}(\overline{m}_i)`$ is the predicted circular velocity parameter based on the average of the photometric measurements, $`\overline{m}_i=n_i^{-1}\sum _jm_{ij}.`$ Because we use a multiparameter TF relation, equation 2, $`\overline{m}_i`$ symbolizes averages over all relevant photometric quantities, not only apparent magnitude. 2. The observed circular velocity parameter $`\eta _i`$ is also derived by combining the spectroscopic measurement with average values of the needed photometric parameters: the inclination $`i`$ and the effective exponential scale length $`r_e`$ (cf. Paper I). 3. The effective scatter is computed as the quadrature sum of its four contributions: $$\sigma _{\mathrm{eff}}^2=\left(\frac{5e}{\mathrm{ln}10}\frac{\sigma _v}{d_i}\right)^2+\sigma _I^2+\sigma _S^2+\sigma _P^2/n_i$$ (4) where: 1. $`\sigma _v=250`$ km s⁻¹ is, as in Paper I, the assumed velocity noise relative to the bulk flow model; 2. $`\sigma _I`$ is the intrinsic scatter in the inverse TF relation, taken as a free parameter in the model; 3. $`\sigma _S`$ is the portion of the $`\eta `$-error due to spectroscopic measurement errors only. It was modeled as $`\sigma _S=\delta v_{rot}/(v_{\mathrm{TF}}\mathrm{sin}i),`$ where $`\delta v_{rot},`$ treated as a free parameter in the model, is the circular velocity measurement error, assumed constant, and $`v_{\mathrm{TF}}`$ is the predicted value of the circular velocity. The motivation for this error model was discussed in Paper I. (The projection factor $`\mathrm{sin}i`$ was erroneously neglected in the Paper I analysis.) 4. $`\sigma _P`$ is the rms error in measuring and predicting $`\eta `$ due to photometric measurement errors.
Thus, it includes the effects of inclination and effective-radius measurement errors on the measured $`\eta ,`$ as well as those of magnitude, surface brightness, and concentration index errors on the predicted $`\eta .`$ The various photometric errors were assessed by comparing values for multiply observed objects, and will be described fully in Paper III. In general, $`\sigma _P`$ constitutes a very small fraction of the overall error budget. Note that equation 3 reduces to equation 10 of Paper I when $`n_i=1.`$ If galaxy $`i`$ has two spectroscopic measurements yielding circular velocity parameters $`\eta _{i1}`$ and $`\eta _{i2},`$ then another term is present in the likelihood, as follows: $$P(\eta _{i1},\eta _{i2}|m_{ij},cz_i)=\frac{1}{\sqrt{4\pi \sigma _S^2}}\mathrm{exp}\left\{-\frac{(\eta _{i1}-\eta _{i2})^2}{2(\sqrt{2}\sigma _S)^2}\right\}$$ $$\times \frac{1}{\sqrt{2\pi \sigma _{\mathrm{eff}}^2}}\mathrm{exp}\left\{-\frac{\left[\overline{\eta }_i-\eta _{i,pred}(\overline{m}_i)\right]^2}{2\sigma _{\mathrm{eff}}^2}\right\},$$ (5) where $`\overline{\eta }_i=(\eta _{i1}+\eta _{i2})/2,`$ and the effective scatter is now given by $$\sigma _{\mathrm{eff}}^2=\left(\frac{5e}{\mathrm{ln}10}\frac{\sigma _v}{d_i}\right)^2+\sigma _I^2+\sigma _S^2/2+\sigma _P^2/n_i.$$ (6) Note that the two-spectroscopic-measurement likelihood differs from the single-measurement case by the presence of a term measuring the probability that the two measurements will differ by the observed amount. The corresponding term for photometric measurement differences does not occur because we are using conditional probabilities of $`\eta `$ given $`m.`$ The presence of this term gives the likelihood analysis greater leverage on the velocity measurement error term $`\delta v_{rot}`$ than the Paper I approach, which relied solely on TF residuals to estimate $`\delta v_{rot}.`$ In fact, as will be seen, this leads to a ∼10% increase in the estimated $`\delta v_{rot},`$ a relatively small change. The better-determined $`\delta v_{rot}`$ in turn means that the resultant value of $`\sigma _I,`$ the intrinsic TF scatter, is also more reliably determined. Our inclusion of photometric measurement errors, neglected in Paper I, also improves the $`\sigma _I`$ estimate. Thus, our present approach affords one of the most accurate estimates to date of the intrinsic TF scatter, a quantity of considerable interest for galaxy formation theory (e.g., Steinmetz & Navarro 1998). The overall likelihood for the sample is given by $`𝒫=\prod _iP(\eta _i|m_i,cz_i),`$ where the product runs over all distinct galaxies in the sample (as opposed to all data objects as in Paper I), and $`P(\eta _i|m_i,cz_i)`$ is calculated from equation 3 or equation 5, as appropriate. In practice, likelihood maximization is achieved by minimizing the statistic $`\mathcal{L}=-2\mathrm{ln}𝒫.`$ In all cases, $`\mathcal{L}`$ is minimized with respect to the various TF parameters (including those characterizing the TF errors), as well as the bulk flow components. Although the maximum-likelihood method is effective for simultaneously determining the bulk flow and TF parameters, it does not yield a rigorous measure of goodness of fit (cf. the discussion by Willick et al. 1997b). For this purpose, it is useful to define a "cluster-$`\chi ^2`$" statistic sensitive only to the bulk flow parameters, changes in which from one model to the next can be used to gauge improvements in the fit.
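Schematically, the fit amounts to minimizing $`\mathcal{L}`$ over the flow and TF parameters with a generic optimizer. The sketch below is our own simplified code (single-spectrum case only, with $`\sigma _v/d`$ approximated by $`\sigma _v/cz`$, and all data-structure names assumed), intended to convey the structure of equations (3)-(4) rather than reproduce the authors' implementation; galaxies with two spectra would contribute the extra repeat-measurement factor of equation (5).

```python
import numpy as np
from scipy.optimize import minimize

H0, SIGMA_V = 65.0, 250.0   # km/s/Mpc and km/s, as in the text

def neg2lnL(params, gals):
    # L = -2 sum_i ln P(eta_i | m_i, cz_i), per eqs. (3)-(4)
    vx, vy, vz, D, e, sigma_I = params
    total = 0.0
    for g in gals:   # each g: dict with keys cz, n_hat, m, eta, sig_S, sig_P, n_phot
        d = (g['cz'] - np.dot([vx, vy, vz], g['n_hat'])) / H0
        eta_p = e*(g['m'] - 5*np.log10(d) - 25.0 - D)
        sig2 = ((5*e/np.log(10)) * SIGMA_V/g['cz'])**2 \
               + sigma_I**2 + g['sig_S']**2 + g['sig_P']**2/g['n_phot']
        total += np.log(2*np.pi*sig2) + (g['eta'] - eta_p)**2/sig2
    return total

# fit = minimize(neg2lnL, x0, args=(gals,), method='Nelder-Mead')
```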
We define this statistic by computing an average radial peculiar velocity $`v_p`$ and corresponding error $`\delta v_p`$ for each cluster, and comparing with the predicted radial peculiar velocity $`u=𝐯_p\cdot \widehat{𝐧}`$ from the flow model: $$\chi _{\mathrm{clust}}^2=\underset{i=1}{\overset{15}{\sum }}\left(\frac{v_{p,i}-u_i}{\delta v_{p,i}}\right)^2,$$ (7) where the sum runs over the fifteen LP10K clusters. In calculating the $`\delta v_p,`$ we take the scatter contributions $`\sigma _I`$ and $`\sigma _S`$ to have fixed values, rather than the best-fit values from the maximum-likelihood solution. This ensures that $`\chi _{\mathrm{clust}}^2`$ measures only bulk flow errors, not differences in the TF scatter from model to model. The adopted values are $`\sigma _I=0.025`$ and $`\delta v_{rot}=17.5`$ km s⁻¹, close to the values obtained from the best ETR fit (cf. § 4.4).

## 4. Results

Before proceeding to the results of the flow analysis, we address two key technical issues: Galactic extinction and the redshift-distance model.

### 4.1. Burstein-Heiles versus Schlegel-Finkbeiner-Davis extinctions

Two all-sky Galactic extinction maps are presently available: the older Burstein-Heiles (Burstein & Heiles 1978, 1982; BH) maps, which are based on 21 cm column density and faint galaxy counts, and the recently completed Schlegel, Finkbeiner, & Davis (1998; SFD) maps, based on DIRBE/IRAS measurements of diffuse IR emission. The SFD extinctions have been favored in several recent analyses, and indeed were used in Paper I. Unlike BH, the SFD extinctions are based directly on dust emission and have comparatively high spatial resolution. However, it has not been established beyond doubt that they are free of systematic errors, such as could arise from the presence of cold dust invisible to IRAS. The BH extinctions are also vulnerable to possible systematic effects, such as a variable dust-to-gas ratio and galaxy count fluctuations. Thus it seems prudent to use both methods, or linear combinations of them, and see what effect this has on the results. We have run the likelihood and cluster-$`\chi ^2`$ analyses, for both the ETR and FS samples, correcting apparent magnitudes and surface brightnesses using BH, SFD, and their direct average, (BH+SFD)/2. The results are given in Table 2, of which a full description is given below; here we summarize only the conclusions with regard to extinction. First, the derived flow vector is quite insensitive to which extinction method is used. The flow direction shifts about $`+20^{\circ }`$ in longitude and $`+10^{\circ }`$ in latitude, and the flow amplitude increases by a few percent, when SFD extinctions are replaced by BH extinctions. These changes are smaller than the $`1\sigma `$ errors, and thus statistically insignificant. The cluster-$`\chi ^2`$ statistic is also relatively insensitive to the extinction method. For the ETR the SFD extinctions produce a smaller $`\chi _{\mathrm{clust}}^2,`$ whereas for the FS the situation is reversed. For both the ETR and the FS, average extinctions, (BH+SFD)/2, yield likelihood and $`\chi _{\mathrm{clust}}^2`$ statistics as good, or nearly so, as the better of SFD or BH. We conclude that the flow analysis does not provide clear evidence for the superiority of one extinction scheme over the other, and indicates that averaging them may produce the most reliable results overall.
We thus adopt the average extinctions to obtain the final flow vectors for the ETR and the FS, as well as to compare with alternative solutions such as flow along the LP94 direction or no flow (see below).

### 4.2. Distance assignments

The LP10K sample consists, nominally, of cluster galaxies. It is conventional to assume that all members of a given cluster lie at a common distance, regardless of redshift (the "cluster paradigm"). In Paper I, however, we modeled all galaxy distances by the Hubble law, thus assuming that even within a cluster there is a redshift-distance correlation. Such an approach is obviously called for when analyzing the full sample, with its many galaxies well in the background of the targeted clusters. On the other hand, the ETR cluster subsamples have mean redshifts close to published values for the cluster cores, suggesting that the redshift-distance relation for the ETR might be more aptly modeled by the cluster paradigm. However, some previous cluster TF studies have shown that even spirals near cluster cores, as judged by position in redshift space, may exhibit a redshift-distance relation close to pure Hubble expansion (Bernstein et al. 1994; Willick et al. 1995). The LP10K clusters are too distant, given the large TF errors, to judge which model is better for a given cluster. However, we can address this question by carrying out a cluster-paradigm fit to the ETR subsample. To do so we model the distance to the $`j`$th galaxy in the $`i`$th cluster by $$d_{ij}=H_0^{-1}\left[cz_i-𝐯_p\cdot \widehat{𝐧}\right],$$ (8) where $`cz_i`$ is the mean redshift of the $`i`$th cluster. We take this to be the literature-based redshift, column (4) of Table 1, except in the cases of A3202 and A3381, where we adopt the mean LP10K ETR redshift, column (6) of Table 1 (see the discussion in § 2). This distance model implicitly assumes that any redshift dispersion within a given cluster is due to virial velocities, not to Hubble expansion. We applied the maximum-likelihood algorithm to the LP10K ETR subsample using equation 8 to assign distances, and calculated the resultant $`\chi _{\mathrm{clust}}^2.`$ The results are given in row (4) of Table 2. The likelihood statistic is larger, by 11 units, than the best-fit value assuming free expansion. Given that the models have the same numbers of degrees of freedom, this is a highly significant (∼$`3.3\sigma `$) difference. Similarly, $`\chi _{\mathrm{clust}}^2`$ is 7.3 units larger for the cluster-paradigm fit than for the free-expansion fit. These statistics demonstrate that, overall, the redshift-distance relation for the ETR subsample is much better modeled by free expansion than by the cluster paradigm. This point is demonstrated graphically in Figure 4.2, where we plot inverse TF residuals from the ETR cluster-paradigm fit versus logarithmic redshift differences between a galaxy and its parent cluster. If the cluster paradigm held, there should be no trend. On the other hand, if free expansion is a better model, galaxies with redshifts smaller than the cluster mean are closer than those with redshifts larger than the cluster mean. This distance correlation translates into a trend of TF residuals with relative redshift given by $`\delta \eta \simeq 5e\mathrm{log}\left(v/v_{\mathrm{clust}}\right),`$ where $`e\simeq 0.12`$ is the inverse TF slope. This expected trend is indicated as a dashed line in Figure 4.2. The trend in the individual residuals is difficult to discern by eye because of the large TF scatter.
However, a running median of the residuals, shown as connected solid squares, closely follows the expected trend. This confirms that the large improvement in the fit when the cluster paradigm is abandoned does in fact derive from expansion motions of sample galaxies. Like the samples of Bernstein et al. (1994) and Willick et al. (1995), TF cluster samples in the LP10K ETR appear to be clusters in name only; dynamically they more closely approximate field galaxies. Thus, for the remainder of this paper we assume that the Hubble expansion plus bulk flow model, equation 1, is the most accurate representation of the redshift-distance relation for the LP10K galaxies. We note, however, that the bulk flow derived from the cluster-paradigm fit, line 4 in Table 2, differs by less than $`1\sigma `$ in amplitude and direction from the bulk flow derived with the expansion model.

### 4.3. Main results

The main results obtained from the maximum-likelihood fits are presented in Table 2. The bulk flow parameters are expressed in terms of amplitude and direction in columns 1–3. The amplitudes have not been corrected for the bias discussed in § 5. The directions are unbiased, however (§ 5). Columns (4) and (5) list the likelihood and $`\chi _{\mathrm{clust}}^2`$ statistics defined in § 3. Column (6) gives the extinction method used in each fit; for reasons discussed in § 4.1, we now focus only on fits using the mean extinctions, (BH+SFD)/2. Column (7) provides keys to details of the fits given in the table notes. Our final bulk flow vectors are obtained from the likelihood fits in which all three components of the flow velocity were varied, namely, the first lines in the ETR and FS sections of the table. The flow amplitudes, 961 km s⁻¹ for the ETR and 873 km s⁻¹ for the FS, are biased high by 33% and 25% respectively, as determined from the Monte-Carlo analysis of § 5. When corrected for these biases, they yield the flow amplitudes quoted in the Abstract and Summary, § 6.3. The cluster-$`\chi ^2`$ statistic does not necessarily take on its minimum value for these fits, because the flow parameters are computed by maximizing likelihood, not by minimizing $`\chi _{\mathrm{clust}}^2.`$ However, the maximum-likelihood fits produce values of $`\chi _{\mathrm{clust}}^2`$ within ∼1 unit of its minimum value. Other entries in Table 2 test alternative flow directions. The lines in which the direction is given as $`l=276^{\circ },`$ $`b=30^{\circ }`$ give results of fits in which the flow was assumed a priori to be parallel to the LG peculiar velocity, and only the amplitude was varied. The likelihood statistics for these fits differ very little from the best-fit values. Indeed, this choice of flow direction produces a smaller $`\chi _{\mathrm{clust}}^2`$ than does the best fit for the ETR. The LP10K bulk flow may thus be described, to better than $`1\sigma `$ accuracy, as being in the same direction as the LG motion. The lines in Table 2 in which the direction is given as $`l=343^{\circ },`$ $`b=52^{\circ }`$ give results of fits in which the flow was assumed a priori to be parallel to the LP94 bulk flow. The best-fit flow amplitudes along this axis are much smaller, for both the ETR and FS, than the 700 km s⁻¹ reported by LP94. Moreover, the likelihood and $`\chi _{\mathrm{clust}}^2`$ values for this fit indicate that these solutions are poor models in comparison with that in which the flow is held parallel to the LG motion.
As these two models have the same number of free parameters, these differences are highly significant. In short, the LP10K data set is inconsistent with the LP94 bulk flow. Table 2 also lists the fit results when $`V_B\equiv 0,`$ i.e., pure Hubble flow in the CMB frame. This yields the lowest likelihood and largest $`\chi _{\mathrm{clust}}^2`$ values of all fits considered (except the cluster-paradigm fit discussed in § 4.2). We defer to § 5 a quantitative discussion of the confidence level with which the data set enables us to rule out the $`V_B\equiv 0`$ model. However, from the change in the cluster-$`\chi ^2`$ statistic between the no-flow model and the best-fit model for the FS, $`\mathrm{\Delta }\chi _{\mathrm{clust}}^2=8.78`$ with the addition of three degrees of freedom, we can estimate that the $`V_B\equiv 0`$ model is ruled out at about the $`2.4\sigma `$ level. Our Monte-Carlo assessment of significance levels (§ 5) roughly confirms this estimate. In Figures 3 and 4 we plot CMB-frame radial peculiar velocities of the 15 LP10K clusters, for the ETR and FS samples respectively. The velocities plotted are those that went into the $`\chi _{\mathrm{clust}}^2`$ calculations for the $`V_B=0`$ model. They are derived from the mean inverse TF residuals for each cluster, assuming that all cluster galaxies lie at the mean cluster velocity. (In the case of the FS, this is not always a good assumption, but it is the only way to assign a peculiar velocity to a given cluster.) The point sizes are inversely proportional to the peculiar velocity errors, so that large symbols carry proportionally more weight in determining the bulk flow solution. In each figure error bars are drawn for two points to calibrate the size-error mapping. In Figure 3, the velocities are plotted against the cosine of the angle between the cluster and the CMB dipole direction ($`l=276^{\circ },`$ $`b=30^{\circ }`$), the flow direction which yielded the smallest $`\chi _{\mathrm{clust}}^2`$ among the ETR fits. In Figure 4, the velocities are plotted against the cosine of the angle between the cluster and $`l=272^{\circ },`$ $`b=10^{\circ },`$ the flow direction which yielded the smallest $`\chi _{\mathrm{clust}}^2`$ for the FS. The tilted solid lines in each figure represent the predicted radial peculiar velocities from the respective flow models. Figures 3 and 4 provide visual evidence of the bulk flow solutions found from the likelihood analysis. In each case, the clusters near the apex of the flow axis have peculiar velocities which are positive in the mean; clusters nearer the antapex tend to have negative velocities. The notable exceptions to the trend are the clusters A2657 and A260, which lie near the antapex of the flow and have velocities ≳0. However, as indicated in the figures, these clusters have comparatively few members. A2657 is consistent at the $`<1\sigma `$ level with the flow model for the ETR, and A260 is consistent with the flow model for the FS. Thus, although these clusters deviate from the trend, the deviation is not statistically significant. For the FS, the cluster A2247 is a large ($`2\sigma `$) outlier from the flow model, but in a sense which reinforces the flow. While Figures 3 and 4 show why the data favor a large bulk flow in the general direction of the CMB dipole, they also reflect the relatively large scatter in the TF relation. At the distances of the LP10K clusters, even samples of 10–20 TF galaxies give rise to $`1\sigma `$ peculiar velocity errors approaching 1000 km s⁻¹.
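The scale of these errors follows from simple error propagation. The rough sketch below is our own, with an illustrative total inverse-TF scatter of 0.055 dex (intrinsic plus measurement) assumed for concreteness; each galaxy then carries a fractional distance error of (ln 10/5)(σ_η/e).

```python
import numpy as np

def cluster_pv_error(cz, N, sigma_eta=0.055, e=0.12):
    # 1-sigma peculiar-velocity error of an N-galaxy cluster at redshift cz:
    # dv ~ (ln10/5) * (sigma_eta/e) * cz / sqrt(N)
    return (np.log(10)/5.0) * (sigma_eta/e) * cz / np.sqrt(N)

print(cluster_pv_error(12000.0, 10))   # -> roughly 800 km/s
```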
As a result, there is considerable scatter about the mean trend in the diagrams, and the flow solutions themselves have significant errors.

### 4.4. TF parameters

In Paper I we applied a pure Hubble flow model to the LP10K full sample in order to study the properties of the TF relation. Here we have shown that the addition of a bulk flow to the redshift-distance model significantly improves the fit likelihood. It is thus worth asking whether the TF parameters themselves have changed significantly. The TF parameters derived from the best-fit ETR and FS flow models are listed in Table 3. The meaning of the parameters given in the table was described briefly in § 3, and in greater detail in Paper I. Table 3 should be compared with Table 2 (line 4) of Paper I, which lists the same parameters from the Hubble flow fit of that paper. The parameters are seen to have changed very little as a result of the flow model and the adoption of a more rigorous likelihood algorithm. In particular, the values of $`\alpha ,`$ which measures the surface-brightness dependence of the TF relation, and of $`f_s,`$ the number of disk scale lengths at which the rotation curve should be evaluated to optimize the TF relation, and to which particular physical significance was ascribed in Paper I, are seen to be virtually unchanged. The TF slope and zero point have changed very slightly as a result of adopting the flow model. A somewhat more significant change is seen in the value of the rotation velocity measurement error $`\delta v_{rot}`$ (called $`\delta v_{\mathrm{TF}}`$ in Paper I), which has increased by about 10%. This change may be ascribed, as discussed in § 3, to the proper accounting for repeat spectroscopic measurements and photometric errors in the improved likelihood method of this paper. As a result, $`\delta v_{rot}`$ is more accurately determined. This, in turn, means that the intrinsic TF scatter is better constrained than in Paper I. The value of $`\sigma _I`$ for the FS, $`0.030,`$ is the more reliable because of the larger sample size. The Monte-Carlo simulations enable us to test for biases in the TF parameters just as for the bulk flow parameters, and they show that the maximum-likelihood value of $`\sigma _I`$ is biased low by about 10%. They also show that the rms error in the determination of $`\sigma _I`$ is about 25%. Taking these into account, our best estimate of the inverse TF scatter is $`\sigma _I=0.033\pm 0.008`$ dex. The corresponding value of the forward TF scatter is $`\sigma _{I,forw}=\sigma _I/e=0.28\pm 0.07`$ mag.

## 5. Monte-Carlo Simulations

### 5.1. Details of the simulations

The true sky positions, redshifts, apparent magnitudes, and inclinations of all LP10K TF galaxies were used as initial input to the simulations. Distances were assigned to each galaxy according to a bulk flow model, $`d=H_0^{-1}[cz-𝐯_p\cdot \widehat{𝐧}].`$ Absolute magnitudes, $`M=m-5\mathrm{log}(d)-25,`$ were thus derived, and a preliminary circular velocity parameter $`\eta _0=e(M-D)`$ was assigned to each galaxy. The TF parameters were taken to be $`e=0.12`$ and $`D=-21.62,`$ similar to the observed values. The surface-brightness and concentration-index dependences of the TF relation were neglected in the simulations.
A Gaussian random variable of mean zero and dispersion $`\sigma _I=0.032`$ was added to $`\eta _0,`$ yielding a “true” circular velocity parameter $`\eta _1`$ and a corresponding projected rotation velocity $`v_{rot}^{proj}=158.1\mathrm{sin}i\times 10^{\eta _1}\mathrm{km}\mathrm{s}^1,`$ where $`i`$ was the observed galaxy inclination. A second Gaussian random variable of dispersion $`16\mathrm{km}\mathrm{s}^1`$ was added to $`v_{rot}^{proj}`$ to simulate the effect of spectroscopic measurement errors. If there were two spectroscopic observations of an individual galaxy, independent random errors were added to each. The scattered rotation velocities were then divided by $`\mathrm{sin}i`$ to produce the final observational values of $`v_{rot}`$ and $`\eta .`$ Finally, Hubble flow noise was simulated by scattering each redshift by a Gaussian random variable of dispersion $`\sigma _v=250\mathrm{km}\mathrm{s}^1.`$ The above procedure yields a simulated data set that mimics in most respects the statistical properties of the real one. Photometric errors were not included in the simulations, but these have a negligible effect on the overall TF scatter. The simulated data sets were then subjected to the same cuts on absolute magnitude, inclination, etc. (cf. Paper I) as the real one. Consequently, the results of applying the maximum likelihood code to them should faithfully reflect the random errors in the recovered peculiar velocity vector. ### 5.2. Choice of simulated bulk flow vectors Ideally one would like to simulate a suite of universes with a large variety of underlying bulk flow vectors. One could then ask, for any given flow vector, what is the probability of obtaining the result found from the real data? One could then invert this relation using Bayesian statistics and obtain the probability distribution of the true vector given the observed data. However, because of the relatively large observational errors for LP10K, it is not essential to carry out this full-blown analysis here. For our present purposes, two types of simulations, each corresponding to a reasonable “paradigm,” suffice: 1. Paradigm I. The Hubble flow beyond $``$10,000 km s<sup>-1</sup> is isotropic in the CMB frame, i.e., $`V_B=0.`$ The only perturbation to Hubble expansion are random velocities. 2. Paradigm II. The motion of the Local Group relative to the CMB is generated on very large scales, $`\stackrel{>}{}300h^1`$ Mpc. The bulk flow of the LP10K sample would then be similar to the peculiar velocity of the Local Group. For these simulations we adopt $`V_B=625\mathrm{km}\mathrm{s}^1`$ toward $`l=270^{},`$ $`b=30^{}.`$ The random velocities are, of course, still present. ### 5.3. Results of simulations Four Monte-Carlo runs were carried out in total, two based on paradigm I and two on paradigm II. For each paradigm the FS and ETR samples were simulated. Each of the four runs consisted of $`10^4`$ simulated data sets and likelihood anlayses. The results of each simulation were a recovered bulk flow components $`V_x,`$ $`V_y,`$ and $`V_z,`$ as well as the various TF parameters and likelihoods. Ten thousand random realizations of the LP10K full sample in a universe in which the true bulk flow is $`625\mathrm{km}\mathrm{s}^1`$ towards $`l=270^{},`$ $`b=30^{}.`$ The recovered Galactic Cartesian velocity components are plotted in the upper panels and the lower left panel; the lower right panel shows the recovered direction of the flow in Galactic coordinates. 
Figure 5.3 shows the results of simulating the LP10K full sample based on paradigm II. (The plot that results from simulations of the ETR subsample is quite similar in appearance.) Several key features of the plot may be noted. First, the recovered individual velocity components are very nearly unbiased. The input components of the bulk flow are $`V_x=0,`$ $`V_y=541.3,`$ and $`V_z=312.5\mathrm{km}\mathrm{s}^1.`$ The corresponding mean recovered values are $`V_x=7.8\pm 3.2,`$ $`V_y=554.0\pm 3.5,`$ and $`V_z=318\pm 2.8\mathrm{km}\mathrm{s}^1.`$ Thus, the individual components are only very slightly ($`\stackrel{<}{}15\mathrm{km}\mathrm{s}^1`$) biased. This bias is very small in comparison with the rms scatter in the derived components, which is $``$300 km s<sup>-1</sup>. The accuracy of the components similarly translates into a mean flow direction toward $`l=270.3^{},`$ $`b=30.1^{},`$ virtually identical to the input direction. The rms error in the flow direction about the mean is 35. Thus, the simulations demonstrate that the LP10K can recover the direction of the bulk flow, and the value of its individual Cartesian components, in an unbiased fashion. The residual biases are so small in comparison with the scatter of a single measurement that they may be neglected. The amplitude of the recovered flow vector is, on the other hand, biased high. This occurs because the scatter in the individual components can only increase their quadrature sum. To quantify this bias we take the average of all $`10^4`$ recovered velocity amplitudes, which we find to be $`784\mathrm{km}\mathrm{s}^1,`$ or $`25.4\%`$ higher than the input value of $`625\mathrm{km}\mathrm{s}^1.`$ We take this as a measure of the velocity amplitude bias for the LP10K FS. For the ETR, the bias is somwhat larger, $`33.1\%`$ Thus, the final quoted values for the bulk flow derived from the FS and ETR samples are corrected by factors $`(1.254)^1`$ and $`(1.331)^1,`$ respectively, relative to the values which appear in Table 2. We calculate the error in the recovered bulk flow amplitude as the average over $`10^4`$ simulations of $`|V_BV_B|.`$ For FS, this yields $`\mathrm{\Delta }V=250\mathrm{km}\mathrm{s}^1.`$ For the ETR the same procedure yields $`\mathrm{\Delta }V=280\mathrm{km}\mathrm{s}^1.`$ These are the estimates of the $`1\sigma `$ bulk flow amplitude errors given in the absract. The paradigm II simulation enabled us to estimate biases and errors under the assumption that the detected ETR and FS flows are real. In order to measure how well the LP10K data set rules out the hypothesis of convergence, however, we need to consider the paradigm I simulation, in which the true bulk flow vanishes in the CMB frame. The results of $`10^4`$ paradigm I simulations of the LP10K FS sample are shown in Figure 5.3. As before, heavy crosses mark the true values of the Cartesian components of the flow, in this case, $`V_x=V_y=V_z=0.`$ The average values recovered from the simulations are $`V_x=3.8\pm 3.2\mathrm{km}\mathrm{s}^1,`$ $`V_y=5.2\pm 3.5\mathrm{km}\mathrm{s}^1,`$ $`V_z=12.6\pm 2.7\mathrm{km}\mathrm{s}^1.`$ Thus, the likelihood analysis again recovers, on average, the true flow components with negligible bias. The lower right panel of Figure 5.3 demonstrates that there is no preferred direction for the recovered velocity vector. The points are well-distributed around the sky, although a slight preference for the CMB apex and antapex quadrants is apparent. 
(This tendency is also manifested in the non-circular shape of the $`V_y`$ versus $`V_z`$ plot.) This represents a small geometrical bias that results from the imperfect isotropy of the LP10K clusters. To determine the confidence level at which we can rule out convergence, we proceed as follows. We ask the question, what fraction of paradigm I simulations yield derived flow vectors of amplitude $`V_BfV_{data},`$ and in a direction $`\widehat{𝐧}`$ such that $`\widehat{𝐧}\widehat{𝐧}_{data}\mathrm{cos}\theta _{\mathrm{RMS}},`$ where $`\theta _{\mathrm{RMS}}`$ is the $`1\sigma `$ directional error of the flow, and $`V_{data}`$ and $`\widehat{𝐧}_{data}`$ are the derived amplitude and direction of the LP10K observed (ETR and FS) flows. We choose the parameter $`f`$ such that half of all paradigm II simulations satisfy these threshold criteria, with $`V_{data}`$ and $`\widehat{𝐧}_{data}`$ are replaced by the mean paradigm II amplitude and direction; this yields $`f0.8.`$ In other words, we ask, “What fraction of zero-flow simulations produce flow results that occur half the time when the flow is real?” The answer to this question gives the probability that the LP10K results occur by chance in a $`V_B=0`$ universe. When we carry out this exercise, we find that 5.3% (525/10000) of all ETR paradigm I simulations, and 2.9% (290/10000) of the FS paradigm I simulations, meet these criteria. We thus deduce that we can rule out the hypothesis that the Hubble flow has converged to the CMB frame at distances less than $``$100$`h^1`$ Mpc at the 94.7% confidence level from the ETR subsample. Using the full LP10K TF sample, we can rule out the convergence hypothesis at the 97.1% confidence level. These significance levels are roughly consistent with those we deduced in § 4.3 from changes in the $`\chi _{\mathrm{clust}}^2`$ statistic between the no-flow and best-fit models in Table 2. ## 6. Further Discussion and Summary The results presented in this paper suggest that the volume of the local universe within $``$15,000 km s<sup>-1</sup> of the LG possesses a bulk peculiar velocity of $``$450–950 km s<sup>-1</sup> with respect to the CMB frame. In this section, we discuss the significance of this finding. ### 6.1. How reliable are the results? first, we must ask, is the LP10K bulk flow real? There are several reasons for caution. The statistical significance level is not high, somewhat greater than $`2\sigma .`$ A result at such a modest significance level must be confirmed with independent data sets. We reiterate that the LP10K survey includes elliptical galaxy FP data in the same fifteen clusters. In several years the observations and reduction of the elliptical sample will be completed, and will provide an independent check of the flow measured from spiral galaxies using the TF relation. There are other recent flow analyses to compare with the present result. The most encouraging comparison is with the recently completed SMAC survey of Hudson et al. (1998a,b). The SMAC sample is about 75% as deep as LP10K, but has similarly good sky coverage, and involves many more galaxy clusters and individual galaxies. Moreover, SMAC is an FP survey of elliptical galaxies, and thus is completely independent from the LP10K TF data. It is thus noteworthy that the bulk flow vectors obtained from the two programs are in good agreement in both amplitude and direction (cf. §1). The two results together constitute a strong argument for the reality of the flow. 
The LP10K and SMAC findings are, however, at variance with the results of the SFI/SCI survey of Giovanelli and coworkers (Giovanelli et al. 1998a,b; Dale et al. 1998). SFI/SCI is an $`I`$-band TF survey comprising numerous cluster and field galaxies. Its depth is somewhat less than LP10K’s, but quite similar to SMAC’s, at 8000–10,000 km s<sup>-1</sup>. The most consistent result from SFI/SCI is the clear signal of convergence of the Hubble flow to the CMB frame by a distance of 6000 km s<sup>-1</sup>. At 10,000 km s<sup>-1</sup>, the SFI/SCI group finds the bulk flow to be $`200\mathrm{km}\mathrm{s}^1`$ with high confidence, and to be consistent with zero (Dale et al. 1998). There is no simple way at present to resolve the discrepancy between the LP10K and SMAC results, on the one hand, and those of the SFI/SCI project, on the other. What will be required is a systematic comparative analysis of the data sets on a cluster by cluster, and object by object, basis. Such a comparison will be possible when all three groups have published their complete data sets. This will occur in the near future for the LP10K TF sample in Paper III of this series. ### 6.2. Interpretation The discussion above shows that the evidence for a $`\stackrel{>}{}600\mathrm{km}\mathrm{s}^1`$ bulk flow on a $`\stackrel{>}{}150h^1`$ Mpc scale is suggestive but not yet compelling. Bearing this in mind, let us for the moment take the LP10K and SMAC results at face value and ask, what are the implications for cosmology? Peculiar velocities arise naturally within the gravitational instability scenario for structure formation. The amplitude and coherence scale of bulk flows thus reflect both the power spectrum of density fluctuations and the value of the density parameter $`\mathrm{\Omega }_M,`$ and can, in principle, provide constraints on these important cosmological quantities. More specifically, consider the mean square amplitude of the bulk velocity on a scale $`R.`$ According to linear gravitational instability theory it is given by (e.g., Strauss & Willick 1995) $$v^2(R)=\frac{H_0^2\mathrm{\Omega }_M^{1.2}}{2\pi ^2}_0^{\mathrm{}}P(k)\stackrel{~}{W}^2(kR)𝑑k,$$ (9) where $`\mathrm{\Omega }_M`$ is the present value of the matter density parameter, $`P(k)`$ is the mass fluctuation power spectrum, and $`\stackrel{~}{W}(kR)`$ is the Fourier transform of the window function of the sample, $`W(r).`$ Precise determination of the window function is notoriously difficult for real samples. A common approximation, the “tophat” window function, clearly does not apply to the LP10K sample, which attempted to sample a shell rather than a full spherical volume. When restricted to the extended target range, LP10K does indeed approximate a shell, with selection probability roughly constant for between $`R_1=90h^1`$ Mpc and $`R_2=130h^1`$ Mpc. A reasonable representation for the ETR window function is thus $$W_{\mathrm{ETR}}(r)=\frac{3}{4\pi (R_2^3R_1^3)}\times \{\begin{array}{cc}1,\hfill & R_1rR_2;\hfill \\ 0\hfill & \mathrm{otherwise}.\hfill \end{array}$$ (10) The corresponding window function in Fourier space is $$\stackrel{~}{W}_{\mathrm{ETR}}(x_1,x_2)=\frac{3}{f^31}\left[f^3\frac{j_1(x_2)}{x_2}\frac{j_1(x_1)}{x_1}\right],$$ (11) where $`x_1=kR_1,`$ $`x_2=kR_2,`$ $`f=R_2/R_1,`$ and $`j_1(x)`$ is the first spherical Bessel function. 
(The reader may note that this form of the window function tends to the usual tophat form in the limit $`R_2R_1.`$) Substitution of equation 11 into equation 9 yields, for an adopted cosmology and power spectrum, the mean square amplitude of the bulk flow for the LP10K ETR subsample. Before making this calculation, a related issue should be clarified.<sup>3</sup><sup>3</sup>3The author is indebted to Paul Steinhardt for insightful comments on which the following discussion is based. The value of $`v^2`$ in a given window is three times the mean square value of any individual Cartesian velocity component. Consequently, the velocity amplitude has a Maxwellian distribution, $`P(v)dvv^2\mathrm{exp}(3v^2/2v^2).`$ The significance of the observed velocity must be gauged relative to this distribution. In particular, one may choose a significance level $`F,`$ say, and a corresponding velocity amplitude $`v_F,`$ such that $`P(vv_F)=1F.`$ A brief calculation then shows that $$v_F=\sqrt{\frac{2}{3}}V_{\mathrm{RMS}}y_F,$$ (12) where $`V_{\mathrm{RMS}}=\sqrt{v^2},`$ and $`y_F`$ is the solution to the equation $$F=\mathrm{erf}(y_F)\frac{2y_F}{\sqrt{\pi }}e^{y_F^2}.$$ (13) With this procedure we find, for example, that 1% of volumes will exhibit $`v1.945V_{\mathrm{RMS}}`$ ($`F=0.99`$), while 0.1% will exhibit $`v2.329V_{\mathrm{RMS}}`$ ($`F=0.999`$). (This calculation refers only to cosmic variance, and neglects observational error.) Figure 6.2 shows the results of calculating $`V_{\mathrm{RMS}}`$ through the LP10K ETR window, for $`0<\mathrm{\Omega }_M1`$ and COBE-normalized CDM-type power spectra. A Hubble constant of $`65\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$ and a baryon abundance $`\mathrm{\Omega }_bh^2=0.025`$ were adopted for the calculation. It is evident that $`V_{\mathrm{RMS}}`$ is quite small in comparison with the measured value of $`720\mathrm{km}\mathrm{s}^1`$ for the LP10K ETR. If, for example, we consider cosmological parameters favored by recent measurements of Type Ia Supernovae, $`\mathrm{\Omega }_M=0.3,`$ $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ (Riess et al. 1998; Perlmutter et al. 1999), $`V_{\mathrm{RMS}}=150\mathrm{km}\mathrm{s}^1.`$ Thus, 99% of ETR volumes in the universe should exhibit bulk flows less than 292 km s<sup>-1</sup> in such a univers, and 99.9% of such volumes should possess bulk flows less than 349 km s<sup>-1</sup>. The corresponding values for $`\mathrm{\Omega }_M1`$ are only about 17% larger. For low-density, open universes $`V_{\mathrm{RMS}}`$ is several times smaller than for a flat universe of the same $`\mathrm{\Omega }_M.`$ We have also shown that the LP10K full sample, more than 25% of whose members have $`0.05<z0.1`$ and thus lie well beyond the ETR window, yields as strong a signal of large-scale bulk flow as the ETR. It is more difficult to calculate expected theoretical values for the FS, because its window function is poorly defined. However, we gain insight into the effect of the added depth by crudely approximating the FS window function as having the same form as the ETR’s, equation 10, but with $`R_2=180h^1`$ Mpc. (We leave the lower limit at 90$`h^1`$ Mpc, as there are only 8 FS galaxies in the foreground of the ETR.) When $`V_{\mathrm{RMS}}`$ is calculated for this window function, one finds it to be $``$20–30% smaller, for given values of $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda },`$ than it was for the ETR window. 
Thus, the FS bulk flow of $`700\mathrm{km}\mathrm{s}^1,`$ taken at value, exceeds theoretical predictions by a larger margin than the ETR result. A more thorough exploration of parameter space could, of course, yield slightly larger values of $`V_{\mathrm{RMS}}.`$ This is not necessary here, because the errors on the LP10K flow amplitudes are too large to strongly rule out models at present. The important point is simply that bulk flows on $`\stackrel{>}{}100h^1`$ Mpc scales are well-suited as probes of large-scale power. In particular, the results of this paper, if ultimately proven correct, would indicate the need for models with more power on large scales than COBE-normalized CDM models can provide. A similar conclusion was reached for the LP94 bulk flow by Strauss et al. (1995). We can anticipate future data sets that will probe bulk streaming on scales $`\stackrel{>}{}300h^1`$ Mpc. Such measurements will not be possible with methods such as TF and FP, for which peculiar velocity errors become prohibitively large at $`z\stackrel{>}{}0.1.`$ Type 1a supernovae (e.g., Riess et al. 1997) are sufficiently accurate distance indicators for this purpose, although it may prove difficult to assemble the required full-sky samples. Perhaps the most promising approach is measurement of the Sunyaev-Zel’dovich (SZ) effect in rich clusters. When combined with X-ray data, SZ measurements yield peculiar velocity estimates with an accuracy of 500–1000 km s<sup>-1</sup> (Holzapfel et al. 1997). Moreover, SZ measurement errors, although relatively large, do not increase with distance (in contrast to those of methods such as TF, FP and SN Ia), and thus provide a unique probe of bulk flows on scales $`z\stackrel{>}{}0.1.`$ It is notable that the LP10K and SMAC flow vectors are within $``$30 of the CMB dipole. If one subtracts the LG velocity from these vectors, little or no motion is left. Should the same pattern hold on much larger scales, one might legitimately question the special character we assign the CMB frame of reference in analyzing the Hubble expansion. Of course, to do so would also require a non-kinematic explanation of the $`\mathrm{\Delta }T/T10^3`$ dipole anisotropy which is consistent with the fact that higher-order anisotropies are two orders of magnitude smaller. Such scenarios have been proposed in the past (e.g., Paczynski & Piran 1990), but they require substantial modifications of standard Big Bang cosmology. The present data do not compel us to seriously consider such alternatives, but it is worth bearing in mind that future data could push us in that direction. ### 6.3. Summary We have used the LP10K TF sample to measure the bulk peculiar velocity on a $`\stackrel{>}{}150h^1`$ Mpc scale. Both the full sample of 244 galaxies, and a subsample restricted to galaxies with $`7000cz15,000\mathrm{km}\mathrm{s}^1,`$ the “extended target range” or ETR, were considered. The derived bulk flow was the same, within the errors, for both samples: $`v_B=720\pm 280\mathrm{km}\mathrm{s}^1`$ toward $`l=266^{},`$ $`b=19^{}`$ for the ETR, and $`v_B=700\pm 250\mathrm{km}\mathrm{s}^1`$ toward $`l=272^{},`$ $`b=10^{}`$ for the full sample. The overall $`1\sigma `$ directional uncertainty is $`35^{}.`$ These results were obtained using a maximum likelihood algorithm that minimizes selection and Malmquist biases. Residual biases, which have to do with the geometry of the sample and the fact that Cartesian velocity component errors add in quadrature, were calibrated using Monte-Carlo simulations. 
These showed that the raw maximum-likelihood flow amplitude is biased high by $``$25%. The results quoted above are corrected for this bias. The simulations also enabled us to estimate that the probability the survey would yield the derived flow vectors by chance, if in reality the Hubble flow has converged to the CMB frame at distances $`\stackrel{<}{}100h^1`$ Mpc, is 5.3% for the ETR and 2.9% for the full sample. The LP10K flow is similar in amplitude and scale to the bulk motion found by Lauer & Postman (1994) from their analysis of 119 brightest cluster galaxies within 15,000 km s<sup>-1</sup>. However, the directions of the LP10K and LP94 flow vectors differ by $``$70. When the LP10K likelihood analysis is done with the flow required to lie along the LP94 direction, $`l=343^{},`$ $`b=52^{},`$ the best-fit flow amplitude is only 150–200 km s<sup>-1</sup>, much smaller than the 700 km s<sup>-1</sup> found by LP94, and the fit quality is much worse than for the velocity vectors quoted in the previous paragraph. Thus, the LP10K data set is inconsistent with the flow vector measured by LP94. The LP10K flow vector is similar in amplitude and direction to the motion of the Local Group with respect to the CMB, and lies within $``$40 of the flows measured in the local ($`cz\stackrel{<}{}5000\mathrm{km}\mathrm{s}^1`$) universe in the late 1980s and early 1990s by Lynden-Bell et al. (1988), Willick (1990), Mathewson et al. (1992), and Courteau et al. (1993), as well as thos obtained from a recent POTENT analysis of the Mark III and SFI data sets by Dekel et al. (1998). This suggests that the local universe, including the LG, participates in the large-scale flow detected by the LP10K TF sample, which in turn is driven by density inhomegeneities on scales $`\stackrel{>}{}150h^1`$ Mpc. The fact that the LP10K full sample yields the same result as the ETR may hint that convergence to the CMB is not seen even at distances $`\stackrel{>}{}200h^1`$ Mpc. The results presented in this paper do not, by themselves, clinch the case for the reality of very large scale bulk flow, because their significance level is only slightly above $`2\sigma .`$ However, the excellent agreement of the LP10K TF bulk flow with that recently measured by Hudson et al. (1998) from the SMAC survey of elliptical galaxies adds considerable weight to their plausibility. On the other hand, the LP10K and SMAC results differ sharply from those reported in recent months by Giovanelli et al. (1998a,b) and Dale et al. (1998), who argue from the SFI/SCI TF data set that convergence to the CMB frame is clearly detected beyond 60$`h^1`$ Mpc. Resolution of this discrepancy is an important task for the near future. The parameters of the multiparameter TF relation introduced in Paper I were solved for simultaneously with the flow parameters, with the results largely unchanged. In particular, a surface brightness dependence of the TF relation was confirmed, with $`v_{rot}I_e^{0.13}L^{0.29}.`$ The analysis in this paper treated the various sources of TF scatter more carefully than did Paper I, with the result that the instrinsic scatter of the TF relation is more strongly constrained. It is found to be $`\sigma _I=0.28\pm 0.07`$ mag, a value which should be accounted for by successful models of galaxy formation. The author acknowledges the support of NSF grant AST-9617188 and the Research Corporation, and thanks Paul Steinhardt, Mike Hudson, and Keith Thompson for valuable discussions.
no-problem/9812/cond-mat9812181.html
ar5iv
text
# REFERENCES Comment on “Activation Gaps and Mass Enhancement of Composite Fermions” In a recent Letter, Park and Jain presented results of extensive Monte Carlo calculations of gaps of polarized fractional quantum Hall states at filling fractions $`\nu =1/3,2/5,3/7,4/9`$. In particular, they addressed the effects on these excitation energies of the finite width of the electronic wave function (wf) in the direction perpendicular to the interface. Their findings appeared to demonstrate that finite width corrections alone lead to agreement between experimental and theoretical values for these energy gaps. This agreement depends crucially on the choice of the width parameter $`\lambda `$ in the model interaction $$V(r)=\frac{e^2}{ϵ\sqrt{r^2+\lambda ^2}},$$ (1) that was used to take account of the width of the interface wf in an approximate manner. To fix $`\lambda `$, Park and Jain used results based on calculations of interface wf’s by Ortalano et al., by fitting the gap at $`\nu =1/3`$ for a system of 6 electrons . This fit leads to a value $`\lambda =11.7nm`$ for an electronic density $`n_S=2.3\times 10^{11}cm^2`$ of the sample used by Du et al.. This value for $`\lambda `$ leads to the above mentioned agreement between theoretical and experimental gap values (cf. Figure 3 in ). The main problem with this way of fixing $`\lambda `$ is that there are indications that the work of Ortalano et al. is not reliable: Analysis of the tabulated values of the Haldane pseudopotentials reveals serious inconsistencies and the calculation of energy gaps is not explained in enough detail to allow a verification of the results. Furthermore, as will be discussed below, the value of $`\lambda `$ which fits the gap results does not appear reasonable, although Park and Jain claim that it is. To see this we employ as interface wf the Fang-Howard variational wf $`\mathrm{\Psi }(z)\mathrm{exp}(bz/2)`$. The variational parameter $`b`$ is given by $$b=\left(\frac{48\pi m^{}e^2(n_{depl}+\frac{11}{32}n_S)}{ϵ\mathrm{}^2}\right)^{1/3},$$ (2) where $`m^{}`$ is the effective mass in the direction normal to the interface and $`n_{depl}`$ is the charge density in the depletion layer. Using the values for GaAs, a dielectric constant $`ϵ=12.7,m^{}=0.067m_0`$ and neglecting $`n_{depl}`$, we obtain $`1/b=3.1nm`$ at $`n_S=2.3\times 10^{11}cm^2.`$ As gaps at filling factors $`1/3\nu 2/3,`$ are basically determined by the Haldane pseudopotential $`V_m`$ for angular momentum $`m=1`$, the best way to fix $`\lambda `$ is by requiring that $`V_1`$ is reproduced when calculated with the model interaction (1). Using the Fang-Howard wf with $`1/b=3.1nm`$, we obtain $`V_1=0.3712e^2/(ϵ\mathrm{}_0)`$ at $`\nu =1/3`$ (i.e. at a magnetic field $`B=28.5T`$, and a magnetic length $`\mathrm{}_0=4.8nm`$). This $`V_1`$-value is reproduced using the model interaction (1) with $`\lambda /\mathrm{}_0=1.19`$, i.e. $`\lambda =5.7nm`$, about half the value used by Park and Jain . The question of wf width has been explored experimentally by Willett et al. . By measuring the energy difference between the first and second subband, they obtained a more accurate estimate of the subband wf’s with the result $`1/b=3.9nm`$. As that sample had a smaller density $`n_S=1.65\times 10^{11}cm^2`$, and assuming $`b`$ scales as $`n_S^{1/3}`$, we obtain a value $`1/b=3.5nm`$ for the sample considered by Park and Jain, only about 10 percent larger than the value based on equation (2). 
Again requiring that $`V_1`$ be reproduced by the model interaction, at $`\nu =1/3`$, we obtain $`\lambda /\mathrm{}_0=1.34`$, still very much smaller than the value $`\lambda /\mathrm{}_0=2.5`$ used by Park and Jain. On the basis of these results, we feel that the value of $`\lambda =11.7nm`$ used by Park and Jain is too large by about a factor of two. Clearly, for $`\lambda /\mathrm{}_01.3`$ the gap reduction due to the width of the interface wf is only about half of what Park and Jain obtained. Thus, a significant difference between theoretical and experimental gap values remains. However, effects of disorder and Landau level mixing are expected to further reduce the theoretical gap results. A last comment regards the use of the model interaction (1). While it is possible to fix $`\lambda `$ such that the pseudopotential coefficient $`V_m`$ is reproduced exactly for $`m=1`$, those for $`m>1`$ when calculated with the model interaction, are generally too large, especially for large $`\lambda `$ . Consequently, since incompressibility of polarized states in the range $`1/3\nu 2/3`$ depends on the ratio $`V_3/V_1`$ not becoming too large, a phase transition to compressible states may be predicted erroneously if the model interaction (1) is used (cf. Figure 3 in ). Realistic interactions based on appropriate interface wf’s must be used for a reliable study of this question . Rudolf H. Morf Condensed Matter Theory, Paul Scherrer Institute, CH-5232 Villigen, Switzerland
no-problem/9812/astro-ph9812433.html
ar5iv
text
# Supernova remnants in molecular clouds: on cosmic ray electron spectra ## 1 Introduction In a recent paper Chevalier (1999) discusses physical conditions in supernova remnants (SNRs) evolving inside molecular clouds. He considers a clumpy gas distribution in the clouds, where the dominant part of the cloud mass is contained in the compact dense clumps filling only 10% or less of the cloud volume and the rest of the cloud is filled with tenuous gas with a number density $`N10`$ cm<sup>-3</sup>. Thus a SNR exploding inside a cloud evolves mostly in such low density medium. Chevalier discusses a number of consequences of such a model and compares it to observations of three SNRs: W44, IC 433 and 3C391. In his discussion the observed, very flat, cosmic ray electron distributions responsible for the radio synchrotron spectra are interpreted as shock-compressed ambient distributions radiating downstream of the radiative shock. However, the existence of a very flat ambient electron distribution in a molecular cloud is a matter for debate. The uniform magnetic field structures observed in several clouds and efficient damping of short Alfvén waves in a partly neutral medium (Hartquist & Morfill 1984) suggests a possibility of an efficient cosmic ray exchange between clouds and a steep-spectrum galactic cosmic ray population. In the present note we point out the possibility of an acceleration of the flat spectrum electrons at a SNR shock wave, if the second-order Fermi acceleration in the vicinity of the shock is taken into account. In the next section we summarize results of an approximate analytic theory for the particle spectral indices (Ostrowski & Schlickeiser 1993, $``$ ‘OS93’), with a few typing errors of the original paper corrected. Then, in section 3, we demonstrate that the physical conditions considered by Chevalier (199) for shock waves – with the Alfvén velocity, $`V_A`$, non-negligible in comparison to the shock velocity – allow for generation of the required flat particle distributions (c.f., also, Drury 1983, Hartquist & Morfill 1983, Dröge et al. 1987; Schlickeiser & Fürst 1989). In the final section (Section 4) we briefly discuss an application of the present model to actual conditions in astrophysical objects. ## 2 Derivation of the particle spectral index OS93 derived a simplified kinetic equation for the particle distribution formed at the parallel shock wave due to action of the first-order acceleration at the shock and the second-order acceleration in the shock turbulent vicinity. With the phase-space distribution function at the shock, $`f(p)f(p,x=0)`$, an ‘integral’ distribution is defined as $$F(p)=f(p)\left[\frac{\kappa _1(p)}{U_1}+\frac{\kappa _2(p)}{U_2}\right],$$ $`(2.1)`$ where $`U_i`$ and $`\kappa _i`$ are the plasma flow velocity in the shock frame and the spatial diffusion coefficient of cosmic ray particles, respectively ($`i=1`$ upstream and $`2`$ downstream of the shock). Analogously, the momentum diffusion coefficient is indicated as $`D_i`$ ($`i=1`$, $`2`$). For the simple case of $`\kappa _i=const`$ ($`i=1,2`$), and $`D_1=D_2=D`$ the approximate transport equation for the function (2.1) takes the closed form: $$\frac{1}{p^2}\frac{}{p}\left\{p^2D(p)\frac{}{p}F(p)\right\}+\frac{1}{p^2}\frac{}{p}\left\{p^2\frac{\mathrm{\Delta }p}{\mathrm{\Delta }t}F(p)\right\}$$ $$+\frac{F(p)}{\tau (p)}=Q(p),$$ $`(2.2)`$ where we have included the source term $`Q(p)`$. 
The mean acceleration speed due to the first order process at the shock $`\frac{\mathrm{\Delta }p}{\mathrm{\Delta }t}`$ and the mean escape time due to advection downstream of the shock $`\tau (p)`$ can be derived from the spatial diffusion equation (cf. OS93) as: $$<\frac{\mathrm{\Delta }p}{\mathrm{\Delta }t}>=\frac{R1}{3R}\frac{U_1^2}{\kappa _1+R\kappa _2}p,$$ $`(2.3)`$ $$\tau (p)=\frac{R(\kappa _1+R\kappa _2)}{U_1^2},$$ $`(2.4)`$ where the shock compression $`RU_1/U_2`$. In more general conditions with $`\kappa _i=\kappa _i(p)`$ the function $`f(p)`$ must be explicitly present in the kinetic equation: $$\frac{1}{p^2}\frac{}{p}\left\{p^2D_1(p)\frac{}{p}\left[f(p)\frac{\kappa _1(p)}{U_1}\right]\right\}$$ $$\frac{\kappa _2(p)}{U_2}\frac{1}{p^2}\frac{}{p}\left\{p^2D_2(p)\frac{}{p}f(p)\right\}+$$ $$\frac{1}{p_2}\frac{}{p}\left\{p^2\frac{\mathrm{\Delta }p}{\mathrm{\Delta }t}f(p)\frac{\kappa _{ef}(p)}{U_1}\right\}+\frac{f(p)\kappa _{ef}(p)}{U_1\tau (p)}=Q(p).$$ $`(2.5)`$ Let us note in the above equation the asymmetry between the first two terms. If only the first-order acceleration takes place ( $`D_1=0=D_2`$), the known solution $`f(p)p^\sigma `$, with $`\sigma =3R/(R1)`$, is reproduced by Eq.(2.5). Obtaining a solution for a more general situation may be a difficult task. However, if we consider momenta much above (or below) the injection momentum, and the power-law form for the diffusion coefficient $`\kappa p^\eta `$, the solution is also a power-law. In such conditions, from Eq.(2.5) one can derive the spectral index $`\sigma `$. The Skilling (1975) formula is used for the momentum diffusion coefficient, relating it to the spatial diffusion, $`D(p)=V_A^2p^2/(9\kappa (p))`$ . For a given jump condition at the shock for the Alfvèn velocity and a given value of $`\eta `$, the resulting spectral index depends on only two parameters, the shock compression ratio $`R`$ and the velocity ratio $`V_{A,1}/U_1`$. OS93 checked the validity range of the approximate equation (2.5) with the use of numerical simulations. For $`\eta 0`$ the equation can be used for the range of parameters preserving $`V_A<<U`$, while, for $`\kappa =const`$, it provides a quite reasonable description of the particle spectrum at all $`V_A<U`$. In order to find a power-law solution of Eq.(2.5) let us assume the following forms for the diffusion coefficients ($`i`$ = $`1`$, $`2`$): $`\kappa _i(p)=\kappa _{0,i}p^\eta `$, $`D_i(p)=D_{0,i}p^{2\eta }`$, where the constants $`\kappa _{0,i}`$ and $`D_{0,i}`$ are related according to the formula $`D_{0,i}=V_{A,i}^2/(9\kappa _{0,i})`$. With these formulae and the power-law form for the distribution function, $`f(p)=f_0p^\sigma `$, equation (2.5) yields a quadratic equation for $`\sigma `$ : $$a\sigma ^2+b\sigma +c=0,$$ $`(2.6)`$ with coefficients: $$a=\frac{R}{9U_1^2}(V_{A,1}^2+RV_{A,2}^2),$$ $`(2.7)`$ $$b=(3+\eta )a+\frac{R1}{3},$$ $`(2.8)`$ $$c=\eta \frac{RV_{A,1}^2}{3U_1^2}+R.$$ $`(2.9)`$ Equation (2.6) has two solutions valid for, respectively, particle momentum much below ($`\sigma >0`$) and much above ($`\sigma <0`$) the injection momentum. Only the later one is of interest in the present considerations. Let us also note that the energy spectral index often appearing in the literature is $`\mathrm{\Gamma }=\sigma 2`$ and the synchrotron spectral index $`\alpha =(\sigma 3)/2=(\mathrm{\Gamma }1)/2`$. ## 3 Spectral indices in realistic SNRs In the SNR model discussed by Chevalier (1999) the shock wave propagates inside an inhomogeneous cloud. 
In the cloud, the dense clumps occupy less than 10% of the cloud volume and the shock evolution proceeds mostly in the tenuous inter-clump medium with a number density $`N10`$ cm<sup>-3</sup> and a magnetic field $`B210^5`$ G. Much stronger magnetic fields ($`10^4`$ G) may occur further downstream in the radiative shock. The measured shock velocities are in the range $`U_180`$$`150`$ km/s. With the notation $`N_0N/(10\mathrm{c}\mathrm{m}^3)`$, $`B_0B/(210^5\mathrm{G})`$ and $`U_0U_1/(100\mathrm{k}\mathrm{m}/\mathrm{s})`$ we find the ratio $$\frac{V_{A,1}}{U_1}=0.14B_0N_0^{1/2}U_0^1.$$ $`(3.1)`$ It can be of order $`0.1`$ for the parameters considered above. Thus, the second order acceleration process can substantially modify the energy spectrum of particles accelerated at the shock. In Fig-s1,2 we present spectral indices derived from the formulae of the previous section for the ‘canonical’ Kolmogorov value $`\eta =0.67`$. The shock compression ratio $`R=4.0`$ for the high Mach number adiabatic shock is assumed. An efficient particle acceleration requires a high amplitude turbulence near the shock (cf. Ostrowski 1994) and oblique magnetic field configurations may occur there. Thus, the mean magnetic field downstream of the shock is $`B_2>B_1`$ due to shock compression of both the uniform and the turbulent field components. Below we use an effective jump condition for the Alfvén velocity $`V_{A,2}=V_{A,1}`$ (in general $`B_1B_2B_1R`$ and, respectively, $`V_{A,1}/\sqrt{R}V_{A,2}V_{A,1}\sqrt{R}`$). The results for $`V_{A,1}/U_1<0.2`$ are considered, where the analytic formulae provide an accurate approximation for the actual spectra. For mean models for the SNRs discussed by Chevalier (1999) one has: a.) for W44: $`N_0=0.5`$, $`U_0=1.5`$, $`\alpha =0.33`$ ($`\mathrm{\Gamma }=1.72`$) b.) for IC 443 (shell A): $`N_0=1.5`$, $`U_0=0.8`$, $`\alpha =0.36`$ ($`\mathrm{\Gamma }=1.66`$) c.) for 3C391: $`N_0=1.0`$, $`U_0=3.0`$, $`\alpha =0.55`$ ($`\mathrm{\Gamma }=2.1`$) With these parameters one can easily explain flat spectral indices for the cases (a) and (b) if reasonable values of the magnetic field are involved. Fig.2 shows the spectral index $`\alpha `$ versus the magnetic field $`B`$ for the above listed choices of particle density $`N=N_0`$ and the shock velocity $`U_1=U_0`$. The measured indices are indicated on the respective curves. The steep spectrum of 3C391 can not be exactly reproduced with the use of the simple model considered in this paper, as even neglecting the second order acceleration leads, for the assumed shock compression $`R=4.0`$, to a somewhat flatter spectrum with $`\alpha =0.5`$. However, the observed trend in the spectral index variation is well reproduced by the model. ## 4 Discussion A derivation of the particle spectral index at a shock front is presented for a situation involving the second-order acceleration process in the shock vicinity. We prove that even very weak shocks may produce very flat cosmic ray particle spectra in the presence of momentum diffusion. As a consequence, the dependence of the particle spectral index of the shock compression ratio can be weaker than that predicted for the case of pure first-order acceleration. One should note an important feature of the acceleration process in the presence of the second-order Fermi process acting near the shock. Because the same Alfvén waves scatter particle momentum in direction and in magnitude, there exists a strict link between the first- and the second-order acceleration processes. 
The particle spectrum is shaped depending on $`V_{A,1}/U_1`$, by the compression $`R`$, the momentum dependence of $`\kappa `$ and, possibly, by the anisotropy of Alfvén waves determining the ratio of $`\kappa `$ to $`D`$. The last factor is not discussed here since under the present considerations we restrict ourselves to isotropic wave fields, but the role of anisotropy consists in decreasing the importance of momentum diffusion. Only through the above parameters the spectrum can be influenced by other physical characteristics of the shock and the medium in which it propagates. It is important that the Alfvén waves’ amplitude, or the magnitude of the spatial diffusion coefficient related to it, do not play any substantial role in determining the spectral inclination as both, first- and second-order, processes scale with it in the same way. One could argue that an energy transfer from waves to particles can cause quick damping of the waves and will leave us with the pure first-order Fermi acceleration at the shock. From the above discussion one can infer that such an objection is not valid for the isotropic wave field. Only the presence of high amplitude one-directional wave field enables the efficient first-order shock acceleration in the absence of momentum diffusion, but a detailed calculation is not possible without a detailed model for the upstream and downstream wave fields. In comparison to parallel shocks, the magnetic field inclined to the shock normal leads to a higher mean energy gain of particles interacting with the shock and a higher escape probability for downstream particles. Also, due to small cross-field diffusion the normal diffusive length scale near the shock decreases. As long as the shock velocity along the field is non-relativistic, the particle spectrum produced in the first-order process is not influenced by a field inclination. Because of smaller diffusive zones near the shock, the role of momentum diffusion may be of lesser importance as compared to the parallel case. This effect can be partly weakened by the presence of the magnetic field compression at the shock, which leads to higher Alfvén velocity downstream of the shock. Also the presence of high amplitude Alfvén waves (cf. Michałek & Ostrowski 1996) and an admixture of fast mode waves propagating obliquely with respect to the magnetic field (Schlickeiser & Miller 1998, Michałek et al. 1998) may lead to a more efficient acceleration than the one considered by OS93. Finaly, we would like to note a novel approach to the shock acceleration by Vainio & Schlickeiser (1999), who discuss conditions allowing for the shock generated flat spectra without action of the second order process. * I am grateful to the anonymous referee and to Horst Fichtner for valuable remarks and corrections. The present work was supported by the KBN grant PB 179/P03/96/11. ## References Chevalier R.A., 1999, ApJ, in press (ASTRO-PH/ 9805315) Dröge W., Lerche I., Schlickeiser R., 1987, A&A., 178, 252 Drury L.O’C., 1983, Space Sci.Rev., 36, 57 Hartquist T.W., Morfill G.E., 1983, ApJ, 266, 271 Hartquist T.W., Morfill G.E., 1984, ApJ, 287, 194 Michałek G., Ostrowski M., 1996, Nonlinear Proc. 
Geo- phys., 3, 66 Michałek G., Ostrowski M., Schlickeiser R., 1998, Solar Physics, in press Ostrowski M., Schlickeiser R., 1993, A&A, 268, 812 ($``$ OS93) Ostrowski M., 1994, A&A., 283, 344 Schlickeiser R., Fürst E., 1989, A&A, 219, 192 Schlickeiser R., Miller J.A., 1998, ApJ, 492, 352 Skilling J., 1975, MNRAS, 172, 557 Vainio R., Schlickeiser R., 1999, A&A, (in press)
no-problem/9812/cond-mat9812426.html
ar5iv
text
# Giant transverse magnetoresistance in an asymmetric system of three GaAs/AlGaAs quantum wells in a strong magnetic field at room temperature ## Abstract The giant transverse magnetoresistance is observed in the case of photoinduced nonequilibrium carriers in an asymmetric undoped system of three GaAs/AlGaAs quantum wells at room temperature. In a magnetic field of 75 kOe, the resistance of nanostructure being studied increases by a factor of 1.85. The magnetoresistance depends quadratically on the magnetic field in low fields and tends to saturation in high fields. This phenomenon is attributed to the rearrangement of the electron wave function in magnetic field. Using the fact that the incoherent part of the scattering probability for electron scattering on impurities and bulk defects is proportional to the integral of the forth power of the envelope wave function, the calculated field dependence of the magnetoresistance is shown to be similar to that observed experimentally. In our previous paper we studied for the first time in detail the lateral photogalvanic effect (PGE) in an asymmetric system of three GaAs/AlGaAs wells illuminated with white light of various intensities in a strong magnetic field. The spontaneous PGE current $`J^{PGE}`$ was shown to exhibit a maximum as a function of magnetic field $`H`$, the phenomenon earlier predicted theoretically in . We have also found that, at room temperature, the PGE voltage reaches the value of several tenth of Volt per millimeter of the specimen length in the illuminated region, and exhibits only a weak dependence on the light intensity. The temperature dependence of PGE was found to be rather weak: $`J^{PGE}`$ decreases by a factor of two upon cooling from room temperature to $``$ 200 K. The conclusion about the maximum of $`J^{PGE}`$ vs $`H`$ follows from an analysis of the expression for the toroidal moment density $`𝐓`$, to which the spontaneous PGE current is proportional. On the other hand, we have also proposed that such a maximum may be caused by a strong transverse magnetoresistance (TMR) of the nanostructure. Therefore, it was of interest to measure the TMR and its field dependence, particularly at high, room temperature, where the PGE was shown to be the largest. In the present paper, we report the results of investigation of the TMR in the asymmetric system of three quantum wells, whose structure (see Fig. 1) was exactly the same as in our previous paper . Namely, the samples of the $`i`$-Al<sub>x</sub>Ga<sub>1-x</sub>As/$`i`$-GaAs ($`x`$=0.25) nanostructures containing three quantum wells with layer of width $`L_W`$ = 54, 60 and 70 Å separated by barrier layers of width $`L_B`$ = 20 and 30 Å were investigated. This asymmetric system of tunneling-coupled quantum wells was sandwiched between two wide (200 Å) $`i`$-Al<sub>x</sub>Ga<sub>1-x</sub>As ($`x`$=0.25) barriers layers adjacent to an $`i`$-GaAs (1 $`\mu `$m) buffer layer and to an $`i`$-GaAs (200 Å) layer covering the structure. The samples were rectangular with dimensions of the order of 8$`\times 2`$ mm with single pair of in-line contacts (1, 2 in Fig. 1). The contacts were produced by the allowing in of indium. The measurements were carried out at room temperature in a specially designed “warm-field” insert of a superconducting solenoid. Light from a halogen lamp was delivered to the sample along a flexible optical fiber. The maximum power of the radiation delivered to the sample was of the order of 5 mW. 
The contacts and the adjacent parts of the samples were covered with a special shield (3), so that only the central part of the sample was illuminated. The samples were oriented with the plane of the layers parallel to the magnetic field and with the line of the contacts perpendicular to the magnetic field. The measurement circuit shown in Fig. 1 was a simple closed one with the sample connected in series with a source of controllable DC bias voltage $`E_v`$ and a standard measuring resistance $`R_n`$. The current $`J`$ in the circuit was determined from the voltage drop across the $`R_n`$. Note that the measured current was essentially the short-circuit current $`J_{sc}`$, since the resistance of the samples (of order of 100 M$`\mathrm{\Omega }`$ under nominal illumination) was much larger than $`R_n`$ (10 k$`\mathrm{\Omega }`$). During the experiments, the magnetic field dependencies $`J(H)`$ were measured at different $`E_v`$ and nominal fixed illumination. As the magnetic field was scanned bidirectionally from $``$75 kOe to 75 kOe, the measured values of $`J`$ were stored and averaged over a large number of readings. Figure 2 shows the magnetic field dependencies $`J(H)`$ measured at different bias voltages $`6`$ V $`<E_v<6`$ V. The odd curve $`J(H)`$ (at $`E_v=0`$) represents the magnetic field dependence of the spontaneous PGE current $`J^{PGE}(H)`$, that was investigated in details earlier in paper . As positive or negative bias voltage is applied, the $`J(H)`$ curves are shifted up or down. At the same time, a strong decrease of the absolute values of the current with increasing magnetic field is observed, i.e. a high magnetoresistance becomes apparent. The $`J(E_v)`$ values at given $`H`$ enable us to judge to what extent the current–voltage characteristics of the samples are linear and symmetric in so wide range of bias voltage. The data presented in Fig. 2 reveal that the current–voltage characteristics are linear and symmetric with the accuracy to within several percents throughout the studied $`E_v`$ and $`H`$ ranges. Special tests showed that the degree of nonlinearity and asymmetry primarily depends on the quality (symmetry) of the contacts and show a noticeably tend to reduction with increasing magnetic field. Since, contrary to PGE, the magnetoresistance is an even function of $`H`$, the procedure of obtaining its magnetic field dependence and excluding the contribution of PGE consists in subtracting the $`J(H)`$ dependence measured at negative $`E_v`$ from that measured at positive $`E_v`$: $`R(H)=2E_v/(J^+(H)J^{}(H))`$. The magnetic field dependencies of the magnetoresistance obtained using this procedure for both low (1 V) and high (6 V) absolute values of $`E_v`$ are shown in Fig. 3 as $`\mathrm{\Delta }R(H)/R(0)`$. A small difference between these two curves can be related to a weak field-dependent nonlinearity of the current–voltage characteristics. In low magnetic fields ($`H<`$10 kOe), the magnetoresistance is a quadratic function of $`H`$. In high fields, a tendency to saturation is seen. In a maximum field of 75 kOe used in present measurements, the resistance of the nanostructure increased by a factor 1.85. For interpretation of the data obtained, calculation of the energy spectrum and the wave functions for the studied asymmetric nanostructure has been performed by the envelope method in a magnetic field normal to the plane of the sample ($`x`$-axis). 
Figure 4 (curves 1–3) shows the magnetic field dependencies of the probabilities for an electron to be located in the corresponding (according to the number in Fig. 1) quantum well in the minimum of the first conduction subband of the spatial quantization. Note that the nanostructure studied was specially designed in such a way that, in zero magnetic field, the probabilities for electrons to be located in the ground state in the narrower quantum wells (1 and 2) are rather high. As seen from Fig. 4 (curve 3), the magnetic field induced rearrangement of the wave function leads to the localization of electrons in the widest well that certainly affects the conductivity (resistivity) of the nanostructure in the lateral direction. The resistivity of the nanostructure in the lateral direction is determined by the electron scattering on the heterojunction imperfections, residual impurities, and bulk defects. The localization of the wave function in the center of the widest well is likely to reduce the scattering on the heterojunction imperfections and, as a consequence, to decrease the resistivity. As for the scattering on impurities, it can be shown within the strongly localized potential approximation $$U(\stackrel{}{r})=U_0\delta (\stackrel{}{\rho }\stackrel{}{\rho }_i)\delta (xx_i)$$ (here, $`\stackrel{}{\rho }_i`$ is the impurity coordinate in the lateral direction, and $`x`$ – along the $`x`$-axis), that the incoherent part of the probability for the electron scattering on such a potential is proportional to $$\phi N_S(x)|f_i(x)|^4𝑑x$$ ($`N_S`$ is the surface concentration of impurity),i.e., for a homogeneous distribution of scattering centers, the scattering probability and, hence, the resistivity of the nanostructure, are proportional to the integral of the fourth power of the envelope wave function. The magnetic field dependence of $`\phi `$ value is shown by curve 4 in Fig. 4. It is seen that in magnetic field of the order of 160 kOe $`\phi `$ is more than doubled. It is also seen, that the calculated curve $`\phi (H)`$ is similar to the experimental $`R(H)`$ dependence, showing the same tendency to saturation but in higher magnetic field. (A tendency to saturation of the experimental $`R(H)`$ curve is seen even in a field of 50 kOe.) As mentioned above, a decrease in electron scattering on the heterojunction imperfections with increasing magnetic field should result in reduced resistivity. If so, the competition of the two mechanism, when taken into account in the calculations, may give a better agreement between the calculated and experimental values of the magnetic field where the saturation begins. It should be noted in conclusion that the observed transverse magnetoresistance of this particular asymmetric nanostructure is certain to have a significant effect on the field dependence of spontaneous PGE current $`J^{PGE}(H)`$, but does not explain the nonmonotonic behavior of $`J^{PGE}(H)`$ observed in paper . This nonmonotonic behavior of $`J^{PGE}(H)`$ is likely to be due to the PGE nature. This work was partly supported by the Russian Foundation for Basic Research, Project No.95-02-04358-a, and partly by Russian Program “Solid State Nanostructute Physics”, Project No.1-083/4.
no-problem/9812/hep-ex9812028.html
ar5iv
text
# 1 Introduction ## 1 Introduction The transition radiation arises at uniform and rectilinear motion of a charged particle when it intersects a boundary of two different media (in general case, when it moving in a nonuniform medium or near such medium). This phenomenon was actively investigated during a few last decades (see, e.g. reviews , ) and widely used in transition radiation detectors (see, e.g. recent review and references therein). We consider the standard TR radiator of $`N`$ foils of thickness $`l_1`$ separated by distances $`l_2`$ in a gas or the vacuum, so the period length is $`l=l_1+l_2`$. The plasma frequency of the foil and gap material are $`\omega _0`$ and $`\omega _{02}`$, we neglect $`\omega _{02}`$. The basic features of the TR in this radiator depend essentially on the interrelation between values of $`\omega _1={\displaystyle \frac{\omega _0^2l_1}{4\pi }}`$ and $`\overline{\omega _p}=\omega _0\gamma \sqrt{{\displaystyle \frac{l_1}{l}}}`$, $`\gamma `$ is the Lorentz factor. In the TR detectors the inequality $`\omega _1\overline{\omega _p}`$ is fulfilled, in the usual case $`l_1l_2`$ and radiated frequencies $`\omega <\omega _0\gamma `$. The total energy radiated is proportional to $`\gamma `$ and the TR detector are just used to measure this quantity. In the opposite case $`\omega _1\overline{\omega _p}`$ and $`l_1l_2`$ characteristics of the TR are quite different. The total radiated energy is independent of $`\gamma `$. Performing the collimation of the radiation within angle $`\vartheta _c`$ with respect velocity of the initial particle one can obtain the radiation concentrated in a rather narrow spectral band near $`\omega _1`$. The width of this band depend on $`\vartheta _c`$. There are limitations on the number of foils $`N`$ due to absorption of the radiated X-rays and multiple scattering of the projectile. Nevertheless one can pick out parameters which permit obtain number of radiated per one crossing photons $`N_\gamma 0.01÷1`$ (per one projectile). This case is discussed in detail in Sec.2. In Sec.3 the present situation with use of the TR as a X-ray source is discussed. Some specific features of the proposed radiator are analyzed including selection of the parameters. Set of examples of X-ray sources with various $`\omega _1`$ utilizing distinct material of the foils and operating at different energies is collected in Table. ## 2 Transition radiation from the periodic N-foil stack The spectral-angular distribution of emitted from ultrarelativistic electrons energy in the radiator consisting of many ($`N`$) thin foils of the thickness $`l_1`$ separated by equal distances $`l_2`$ in a gas was discussed in many papers, see e.g. 
- $$\frac{d^2\epsilon }{d\omega dy}=\frac{4e^2y}{\pi }\left(\frac{\kappa _0^2}{(1+y)(1+\kappa _0^2+y)}\right)^2\mathrm{sin}^2\frac{\phi _1}{2}\frac{\mathrm{sin}^2(N\phi /2)}{\mathrm{sin}^2(\phi /2)},$$ (1) where $`\omega `$ is the frequency of radiation, $`y=\vartheta ^2\gamma ^2`$, $`\gamma =ϵ/m`$ is the Lorentz factor, $`ϵ(m)`$ is the energy (the mass) of the incident electron, $`\vartheta `$ is the azimuthal angle of emission with respect velocity of the incident electron (we assume normal incidence), $`\kappa _0=\omega _p/\omega `$, here $`\omega _p=\omega _0\gamma `$, $`\omega _0`$ is the plasma frequency $$\omega _0^2=\frac{4\pi e^2n_e}{m},\phi _1=\frac{\omega l_1}{2\gamma ^2}\left(1+\kappa _0^2+y\right),\phi _2=\frac{\omega l_2}{2\gamma ^2}\left(1+y\right),\phi =\phi _1+\phi _2,$$ (2) where $`n_e`$ is the density of electrons in the medium of a foil. The radiated energy is the coherent sum of the TR amplitudes for each interface and in absence of absorption Eq.(1) has the pronounced interference pattern. Although the formula (1) is derived in classical electrodynamics, one can introduce the probability of the TR $$\frac{d^2w}{d\omega dy}=\frac{1}{\omega }\frac{d^2\epsilon }{d\omega dy}$$ (3) In this paper the system $`\mathrm{}=c=1`$ is used, $`e^2=\alpha =1/137`$. Recently authors developed the quantum theory of the TR and of the transition pair creation . At $`N1`$ the main contribution into the integral over $`y`$ in (1), which defines the spectral distribution of the TR, gives the interval of $`\phi `$ for which $$\mathrm{sin}^2\phi /21,\phi =2\pi n+\mathrm{\Delta }\phi ,\mathrm{\Delta }\phi \frac{1}{N},\mathrm{\Delta }y\frac{1}{N}\frac{2\gamma ^2}{\omega l}=\frac{1}{N}\frac{l_c}{l},$$ (4) where $`l_c=2\gamma ^2/\omega `$ is the formation length of radiation in the vacuum, $`l=l_1+l_2`$. The condition $$\phi =\phi _1+\phi _2=\frac{\omega l}{2\gamma ^2}\left(1+y\right)+\frac{\omega l_1}{2\gamma ^2}\kappa _0^2=2\pi n$$ (5) defines the radiated photon energy $`\omega `$ as a function of the emission angle $`\vartheta `$ for fixed $`n`$ (or for the $`n`$-th radiation harmonic $`\omega _n`$). Respectively, the integral over $`y`$ in (1) can be presented as a sum of the harmonic. We present (5) in a form $$1+y=\frac{l_1}{l}\left(\frac{\omega _p}{\omega _n}\right)^2\left(1\frac{\omega _n}{\omega }\right)\frac{\omega _n}{\omega }=\frac{\overline{\omega _p^2}}{\omega _n^2}\left(1\frac{\omega _n}{\omega }\right)\frac{\omega _n}{\omega },$$ (6) where $$\omega _n=\frac{\omega _1}{n},\omega _1=\frac{\omega _0^2l_1}{4\pi },\overline{\omega _p^2}=\gamma ^2\overline{\omega _0^2}=\gamma ^2\omega _0^2\frac{l_1}{l}.$$ In practice it is convenient to use $$\omega _1(eV)=0.40344\omega _0^2(eV)l_1(\mu m),$$ where the values $`\omega _1,\omega _0`$ are expressed in $`eV`$ and the value $`l_1`$ is in $`\mu m`$. Interrelation between values of $`\omega _1`$ and $`\overline{\omega _p}(\overline{\omega _p}=(\overline{\omega _p^2})^{1/2})`$ is very essential for the basic features of the TR. Consider first the case $`\omega _1\overline{\omega _p}`$. For this case the equation (5) has solutions for large $`n>2\omega _1/\overline{\omega _p}`$ only. In this situation the function $$\mathrm{sin}^2\frac{\phi _1}{2}=\mathrm{sin}^2\frac{\phi _2}{2}=\mathrm{sin}^2\left[n\pi \frac{l_2}{l}\left(1\frac{\omega _n}{\omega }\right)\right]$$ (7) oscillates very fast and one can substitute it by the mean value equal to 1/2. 
In the integral over $`y`$ in (1) represented as the sum of harmonic for large $`n`$ one can substitute summation over $`n`$ by integration $$\underset{0}{\overset{\mathrm{}}{}}d(\mathrm{\Delta }n)\frac{\mathrm{sin}^2\left(N\pi (\mathrm{\Delta }n)/2\right)}{\mathrm{sin}^2\left(\pi (\mathrm{\Delta }n)/2\right)}\frac{2}{\pi }\underset{0}{\overset{\mathrm{}}{}}\frac{dx}{x^2}\mathrm{sin}^2(Nx)=N$$ (8) After this operation the variables $`y`$ and $`\omega `$ in (1) become independent. This means together with (8) that in this case the TR is the noncoherent sum of the single-interface contributions (the total number of interfaces is $`2N`$). Actually just this case ($`\omega _1\overline{\omega _p}`$) is used in the TR detectors, where the radiated energy is $$E=\frac{2N}{3}\alpha \omega _0\gamma $$ (9) In the present paper we consider the opposite case $`\omega _1\overline{\omega _p}`$. In this case the characteristic angles of radiation are large comparing with $`1/\gamma `$ (except boundaries of spectra for the given harmonic). These angles are defined by Eq.(6) $$y_n\frac{\overline{\omega _p^2}}{\omega _n^2}\left(1\frac{\omega _n}{\omega }\right)\frac{\omega _n}{\omega },y_n^{max}=\frac{1}{4}n^2a,a=\frac{\overline{\omega _p^2}}{\omega _1^2}1.$$ (10) If one performs collimation of the emitted radiation with the collimation angle $`y_c=\vartheta _c^2\gamma ^2<y_n^{max}`$ the frequency interval for the n-th harmonic is split into two parts: $`\omega _n\omega \omega _n^{(1)},\omega \omega _n^{(2)}`$, where $`\omega _n^{(1,2)}`$ are defined by equations following from (10) $$\frac{\omega _n}{\omega _n^{(1,2)}}=\frac{1}{2}\left(1\pm \sqrt{1z_n}\right),z_n=\frac{y_c}{y_n^{max}}=\frac{4y_c}{n^2a}=\frac{l}{l_1}\left(\frac{2\vartheta _c\omega _1}{n\omega _0}\right)^2,$$ (11) where $`\omega _1`$ is defined in (6). When the collimation is strong ($`y_cy_1^{max}=a/4`$) the radiation is concentrated in a rather narrow frequency band $`\mathrm{\Delta }\omega _n^{(1)}=\omega _n^{(1)}\omega _n\omega _n{\displaystyle \frac{z_n}{4}}=\omega _n{\displaystyle \frac{y_c}{n^2a}}=\omega _1{\displaystyle \frac{y_c}{n^3a}},`$ $`\omega _n^{(2)}={\displaystyle \frac{4\omega _n}{z_n}}=\omega _n{\displaystyle \frac{n^2a}{y_c}}=\omega _1{\displaystyle \frac{na}{y_c}},{\displaystyle \frac{y_c}{a}}={\displaystyle \frac{l}{l_1}}{\displaystyle \frac{\vartheta _c^2\omega _1^2}{\omega _0^2}}.`$ (12) The spectral distribution of the radiated energy on the $`n`$-th harmonic we obtain integrating Eq.(1) over $`y`$ at $`N1`$ in the interval near $`y`$ satisfying the equation $`\phi (y)=2\pi n`$ $$\frac{\mathrm{sin}^2\left(N\mathrm{\Delta }\phi /2\right)}{\mathrm{sin}^2\left(\mathrm{\Delta }\phi /2\right)}𝑑y\frac{4\gamma ^2}{\omega l}\underset{\mathrm{}}{\overset{\mathrm{}}{}}\frac{\mathrm{sin}^2(Nx)}{x^2}𝑑x=\frac{4\gamma ^2}{\omega l}N\pi .$$ (13) As it was indicated above, the relative width of the integration interval $`\mathrm{\Delta }y/y1/N`$ and within this accuracy at $`N1`$ one can use the formula $$\frac{\mathrm{sin}^2\left(N\phi /2\right)}{\mathrm{sin}^2\left(\phi /2\right)}\underset{n=1}{\overset{\mathrm{}}{}}2\pi N\delta (\phi 2\pi n).$$ (14) This formula is exact at $`N\mathrm{}`$. There is some analogy between radiation considered in this paper and the undulator radiation (see, e. g. ). In undulator the deviation of the particle’s velocity $`𝐯`$ from its mean value varies periodically under influence of the periodical (in space) magnetic field. 
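Before drawing out this analogy, it may help to see Eqs.(10)-(12) in action. The short sketch below is ours; the inputs $`a=1500`$ and $`y_c=250`$ are assumptions chosen to mimic the lithium example of Sec.3. For the first harmonic it returns $`\omega _1^{(1)}\approx 12.7`$ keV and $`\omega _1^{(2)}\approx 47`$ keV, close to the values 12.6 keV and 48 keV quoted there.

```python
import math

# Sketch of Eqs. (10)-(12): for collimation parameter y_c = (theta_c*gamma)^2
# the n-th harmonic splits into a narrow band just above omega_n and a hard
# tail above omega_n2.  Here a = (omega_p_bar/omega_1)^2 >> 1; the inputs
# below are illustrative assumptions, not values fixed by the paper.

def band_edges(omega_1, n, a, y_c):
    omega_n = omega_1 / n
    z_n = 4.0 * y_c / (n**2 * a)
    if z_n >= 1.0:                   # y_c above y_n^max: no splitting
        return omega_n, None, None
    s = math.sqrt(1.0 - z_n)
    omega_n1 = omega_n / (0.5 * (1.0 + s))   # upper edge of narrow band
    omega_n2 = omega_n / (0.5 * (1.0 - s))   # lower edge of hard tail
    return omega_n, omega_n1, omega_n2

# omega_1 = 10 keV, a = 1500, y_c = 250, first harmonic:
print(band_edges(10.0, 1, 1500.0, 250.0))   # ~(10, 12.7, 47) keV
```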
In the case considered here, such a deviation occurs through the wave vector $`𝐤`$ of the emitted radiation (via the refractive index), while the particle's velocity remains constant. However, since the velocity $`𝐯`$ and the wave vector $`𝐤`$ enter the expressions describing the radiation in the combination $$1-𝐧𝐯\simeq \frac{1}{2}\left(\frac{1}{\gamma ^2}+\vartheta ^2+v_{\perp }^2+\frac{\omega _0^2}{\omega ^2}\right),𝐧=\frac{𝐤}{\omega },$$ (15) both effects are formally equivalent with respect to the coherence of the radiation from different periods. The essential difference between the TR considered here and undulator radiation is due to the dependence of Eq.(15) on $`\omega `$. This property leads to the following consequences: 1. the characteristic frequencies of undulator radiation are directly proportional to $`n`$ (in contrast to $`\omega _n`$ of Eq.(6)) and inversely proportional to the structure period; 2. in the case considered here, for fixed $`n`$ there is a maximal angle of radiation at $`\omega =2\omega _n`$, and for smaller angles $`\vartheta <\vartheta _m`$ the interval of allowed frequencies ($`\omega \ge \omega _n`$) is divided into two intervals, both lying above $`\omega _n`$, in contrast to the undulator case. Substituting Eqs.(14) and (6) into (3) we obtain for the spectral distribution of the probability of radiation $`{\displaystyle \frac{dw}{d\omega }}(y\le y_c)={\displaystyle \sum _{n=1}^{\infty }}{\displaystyle \frac{dw_n^c}{d\omega }},{\displaystyle \frac{dw_n^c}{d\omega }}={\displaystyle \frac{4\alpha N}{\pi n}}`$ $`\times {\displaystyle \frac{\mathrm{sin}^2\left[n\pi {\displaystyle \frac{l_2}{l}}\left(1-{\displaystyle \frac{\omega _n}{\omega }}\right)\right]}{(\omega -\omega _n)\left[1+{\displaystyle \frac{l_1}{l}}\left({\displaystyle \frac{\omega }{\omega _n}}-1\right)\right]^2}}\vartheta (\omega -\omega _n)\left[\vartheta (\omega _n^{(1)}-\omega )+\vartheta (\omega -\omega _n^{(2)})\right],`$ (16) where $`\vartheta (x)`$ is the Heaviside step function. Here the threshold values $`\omega _n^{(1,2)}`$ are defined in Eq.(11), and the value $`y_c`$ is defined by the collimation angle ($`\vartheta _c=\sqrt{y_c}/\gamma `$). Note that Eq.(16) is independent of the energy of the particle. However, the condition of applicability of this equation depends essentially on the Lorentz factor $`\gamma `$: $$\omega _1=\frac{\omega _0^2l_1}{4\pi }\ll \overline{\omega _p}=\gamma \sqrt{\frac{l_1}{l}}\omega _0,\gamma \gg \frac{\omega _0\sqrt{l_1l}}{4\pi }$$ (17) Formula (16) was obtained in the absence of X-ray absorption, which is especially essential in the low-frequency region. Taking absorption into account, we find a limitation on the effective number of foils, $`N\to N_{ef}(\omega )`$, with $$N_{ef}(\omega )=\frac{1}{\sigma (\omega )l_1}\left(1-\mathrm{exp}(-N\sigma (\omega )l_1)\right),$$ (18) where $`1/\sigma (\omega )`$ is the X-ray attenuation length at the frequency $`\omega `$. We assume that the absorption in a single foil is small.
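A hedged numerical illustration of Eq.(18) follows (our sketch; the absorption per foil $`\sigma l_1`$ used below is a placeholder, not material data). It shows the saturation of the useful stack length once $`N\sigma l_1`$ becomes of order one.

```python
import math

# Effective number of foils, Eq. (18): absorption saturates the useful stack
# once N*sigma*l1 ~ 1.  sigma(omega) is the inverse attenuation length; the
# value of sigma*l1 below is an illustrative placeholder.

def n_effective(N, sigma_l1):
    """N_ef for N foils when each foil absorbs a fraction sigma*l1 << 1."""
    return (1.0 - math.exp(-N * sigma_l1)) / sigma_l1

print(n_effective(100, 0.005))   # ~78.7: absorption already costs ~20%
print(n_effective(1000, 0.005))  # ~198.7: saturates near 1/(sigma*l1) = 200
```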
For strong collimation ($`y_c\ll a/4`$) the spectral probability of radiation in the $`n`$-th harmonic can be written as $`{\displaystyle \frac{dw_{n1}^c}{d\omega }}\simeq {\displaystyle \frac{4\alpha N\pi n^3l_2^2}{\omega _1^2l^2}}\left(\omega -{\displaystyle \frac{\omega _1}{n}}\right),0\le \omega -{\displaystyle \frac{\omega _1}{n}}\le {\displaystyle \frac{\omega _1y_c}{n^3a}};`$ $`{\displaystyle \frac{dw_{n2}^c}{d\omega }}\simeq {\displaystyle \frac{4\alpha N\omega _1^2l^2}{\pi n^3\omega ^3l_1^2}}\mathrm{sin}^2\left(\pi n{\displaystyle \frac{l_1}{l}}\right),\omega \ge n\omega _1{\displaystyle \frac{a}{y_c}}.`$ (19) These probabilities attain their maximal values at the boundaries of the regions. These values are $$\frac{dw_{n1}^c}{d\omega }\left(\omega =\omega _n^{(1)}\right)=\frac{4\alpha N\pi l_2^2y_c}{\omega _1l^2a},\frac{dw_{n2}^c}{d\omega }\left(\omega =\omega _n^{(2)}\right)=\frac{4\alpha Nl^2}{\pi \omega _1n^6l_1^2}\left(\frac{y_c}{a}\right)^3\mathrm{sin}^2\left(\pi n\frac{l_1}{l}\right).$$ (20) Integrating (19) over $`\omega `$ we obtain the total number of photons passing through the collimator: $$w_{n1}^c=2\alpha N\pi \frac{1}{n^3}\left(\frac{l_2}{l}\right)^2\left(\frac{y_c}{a}\right)^2,w_{n2}^c=\frac{2\alpha N}{\pi n^5}\left(\frac{l}{l_1}\right)^2\left(\frac{y_c}{a}\right)^2\mathrm{sin}^2\left(\pi n\frac{l_1}{l}\right).$$ (21) The corresponding expressions for the energy losses due to the collimated X-ray emission are $$E_{n1}^c=2\alpha N\pi \frac{\omega _1}{n^4}\left(\frac{l_2}{l}\right)^2\left(\frac{\mathrm{\Delta }\omega }{\omega _1}\right)^2,E_{n2}^c=\frac{4\alpha N\omega _1}{\pi n^4}\frac{y_c}{a}\left(\frac{l}{l_1}\right)^2\mathrm{sin}^2\left(\pi n\frac{l_1}{l}\right).$$ (22) It is seen from these expressions that one can neglect the contribution of the higher harmonics ($`n\ge 2`$). The contribution of the first harmonic is concentrated in the narrow frequency interval $`\mathrm{\Delta }\omega \sim \omega _1y_c/a\ll \omega _1`$, while in the second allowed interval frequencies much larger than $`\omega _1`$ contribute ($`\omega \ge \omega _1a/y_c\gg \omega _1`$). So the collimated TR in the forward direction has quite good monochromaticity. Note that side by side with absorption there is another process which imposes a limitation on the number of foils $`N`$, especially when the angle of collimation is small. This process is multiple scattering of the projectile, which leads to a "smearing" of the $`\delta `$-function in Eq.(14) and thus reduces the degree of monochromaticity. Below we discuss both of these processes for some particular examples. We now estimate the contribution of the higher harmonics ($`n\gg 1`$) to the spectral distribution of the probability in the absence of collimation. For $`\omega \sim \omega _1`$ this contribution is suppressed as $`1/n^3`$. For $`\omega \ll \omega _1`$ the main contribution comes from values $`n\sim \omega _1/\omega \gg 1`$. In this situation the spectrum becomes quasicontinuous, the factor $`\mathrm{sin}^2(\phi _1/2)`$ in Eq.(1) oscillates very fast and one can replace it by 1/2. The summation over $`n`$ can be replaced by an integration, which is the integration of the $`\delta `$-function in (14). For the integration over $`y`$ the variables $`\omega `$ and $`y`$ can be considered as independent.
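For orientation, Eq.(21) is easy to evaluate numerically. In the sketch below (ours) we take $`l=3l_1`$ and $`y_c/a=1/6`$, an assumption mimicking the lithium example of Sec.3; absorption is ignored, so for a realistic estimate $`N`$ should be replaced by $`N_{ef}`$ of Eq.(18). The $`1/n^3`$ and $`1/n^5`$ suppression of the higher harmonics is evident, and the $`n=1`$ total is consistent with the $`N_\gamma \simeq 0.1`$ quoted in Sec.3.

```python
import math

ALPHA = 1.0 / 137.0

# Number of collimated photons per electron in the two bands of the n-th
# harmonic, Eq. (21).  A rough sketch: absorption is neglected (replace N
# by N_ef of Eq. (18) for a realistic estimate).

def n_photons(n, N, l1_over_l, yc_over_a):
    l2_over_l = 1.0 - l1_over_l
    w_n1 = 2.0 * ALPHA * N * math.pi / n**3 * l2_over_l**2 * yc_over_a**2
    w_n2 = (2.0 * ALPHA * N / (math.pi * n**5)
            * (1.0 / l1_over_l)**2 * yc_over_a**2
            * math.sin(math.pi * n * l1_over_l)**2)
    return w_n1, w_n2

# l = 3*l1 and y_c/a = 1/6 (assumed), N = 100; note the rapid fall-off
# of the higher harmonics.
for n in (1, 2, 3):
    print(n, n_photons(n, 100, 1.0 / 3.0, 1.0 / 6.0))
```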
Taking this into account and integrating (1) over the interval $`y\gtrsim 4\gamma ^2/(\omega l_2)`$ (i.e. $`\phi _2/2\gtrsim 1`$) we find $$\frac{dw}{d\omega }\left(\omega \ll \omega _1\right)\simeq \frac{2\alpha N}{\pi \omega }\left[\mathrm{ln}\left(\pi \frac{\omega _1l_2}{\omega l_1}\right)+const\right]$$ (23) ## 3 Discussion The use of TR as a source of X-rays has been investigated recently in many papers (see and the references cited therein). Starting from the differential spectral-angular distribution of the radiated energy, Eq. (1), the authors analyzed the positions of the maxima in this distribution. The corresponding resonance conditions are $$\frac{\phi _1}{2}=(2m-1)\frac{\pi }{2},\frac{\phi }{2}=m^{}\pi ,m,m^{}=1,2,3\mathrm{}$$ (24) When these conditions are fulfilled there is a connection between the emission angle and the photon energy. Measurements were performed for a variety of energies using different foils and various $`N,l_1,l_2`$. Typically the spectral X-ray intensity was measured as a function of the emission angle for a fixed photon energy, or as a function of the photon energy at a fixed emission angle. The experimental results obtained in are in quite good agreement with theoretical calculations. In , the yield of X-ray photons (with energies $`2÷6`$ keV) from electrons with energy $`ϵ=855`$ MeV was $`N_\gamma \sim 10^{-4}`$ per electron, and the width of the spectral band was $`\mathrm{\Delta }\omega \sim \omega `$, for Kapton foils and $`N=3`$. In , the yield of X-ray photons from electrons with energy $`ϵ=900`$ MeV (at energies $`14.4`$ and $`35.5`$ keV) was $`N_\gamma \sim 2\times 10^{-5}`$ and $`N_\gamma \sim 6\times 10^{-5}`$ per electron respectively, and the width of the spectral band (FWHM) was $`\mathrm{\Delta }\omega =0.5`$ keV and $`\mathrm{\Delta }\omega =0.81`$ keV, for silicon monocrystalline foils and $`N=10`$ and $`N=100`$. Now we turn to some specific features of the proposed approach. The ratio of the thicknesses $`l_1`$ and $`l_2`$ is an important characteristic of the TR radiator. The thickness $`l_1`$ is defined by the radiation frequency (energy) $`\omega _1`$, Eq.(6), which can be written in the form $$\omega _1=\alpha mn_e\lambda _c^2l_1,$$ (25) where $`n_e=Zn_a`$ ($`n_a`$ is the density of atoms in the foil), $`n_e`$ is defined in Eq.(2), and $`\lambda _c=1/m=(\mathrm{}/mc)`$ is the Compton wavelength. It is seen from Eq.(21) that the number of collimated photons increases with $`l_2`$ (the factor $`(l_2/l)^2`$). On the other hand, the inequality $`\overline{\omega _p}\gg \omega _1`$, which has to be fulfilled in our case, becomes stronger if $`l_1/l`$ increases. In this situation the requirement on the collimation angle $`\vartheta _c`$ becomes weaker, and the influence of the multiple scattering diminishes (for a given monochromaticity of the radiation): $$\vartheta _c^2=\frac{l_1}{l}\frac{\omega _0^2}{\omega _1^2}\frac{\mathrm{\Delta }\omega _1}{\omega _1}>\vartheta _s^2=\frac{4\pi Nl_1}{\alpha \gamma ^2L_{rad}},$$ (26) where $`L_{rad}`$ is the radiation length. So the optimal value of $`l_2`$ should be of the same order as $`l_1`$. Note that in the TR detectors, where $`\overline{\omega _p}\ll \omega _1`$, there is no limitation connected with the multiple scattering, and the thickness $`l_2`$ is usually one order of magnitude larger than $`l_1`$.
From (26) we obtain the limitation on the value of $`N`$ due to multiple scattering $$N<N_s=\frac{\alpha }{4\pi }\frac{L_{rad}}{l_1}\frac{\overline{\omega _p^2}}{\omega _1^2}\frac{\mathrm{\Delta }\omega _1}{\omega _1}.$$ (27) Using the definitions of $`\overline{\omega _p^2}`$ and $`\omega _1^2`$ (6) and the explicit formula for $`L_{rad}`$ (valid for large $`Z`$) we rewrite (27) in the form $$N_s=\frac{\gamma ^2l_1}{4l}\frac{n_a}{\omega _1^3\mathrm{ln}\left(183Z^{-1/3}\right)}\frac{\mathrm{\Delta }\omega _1}{\omega _1}.$$ (28) So $`N_s\propto \gamma ^2`$ and $`N_s\propto \omega _1^{-3}`$, i.e. $`N_s`$ increases when the energy $`\omega _1`$ diminishes. For low $`\omega _1`$ the photon absorption (see Eq.(18)) becomes essential. Since for $`\omega \lesssim 10`$ keV the X-ray attenuation length $`1/\sigma (\omega )`$ for the heavy elements is two to three orders of magnitude shorter than for the light elements, in this region of $`\omega `$ one can use the light elements only. In the region $`\omega \gtrsim 30`$ keV the attenuation length is rather long, but for large $`\omega _1`$ the influence of the multiple scattering of the projectile (28) becomes more essential. In this region one can provide a large enough $`N_s`$ only by using a large Lorentz factor. For hard X-rays the difference between using the heavy elements or the light elements is not so significant. We consider a few examples of the X-ray yield based on the results obtained above. In Fig.1 the spectral distribution of the radiated energy $`{\displaystyle \frac{d\epsilon }{d\omega }}`$, Eq.(1), is shown for Li foils ($`ϵ=25`$ GeV, $`l_1=0.13mm,l=3l_1,N=100`$). The important effect of the dependence of the width and form of the first-harmonic peak ($`\omega _1={\displaystyle \frac{\omega _0^2l_1}{4\pi }}=10`$ keV), Eq.(6), on the collimation angle $`\vartheta _c`$ ($`y_c=\vartheta _c^2\gamma ^2`$) is displayed. It is seen that the collimation cuts out different parts of the radiation spectrum. The connection between the frequency and the emission angle is a specific property of undulator-type radiation. When the collimation angle decreases the spectral peak becomes narrower (correspondingly, the total number of emitted photons also decreases), and vice versa. Curve 5 exhibits the situation when the value $`y_c`$ is near the boundary value $`y_{max}`$, Eq.(10). For curve 6 one has $`y_c>y_{max}`$ and the spectral curve is independent of the collimation angle. The position of the right slope of the peak, $`\omega _n^{(1)}`$, is defined by Eq.(11); e.g. for $`y_c=250`$ one has $`\omega _1^{(1)}=12.6`$ keV, or $`\mathrm{\Delta }\omega /\omega \simeq 0.25`$. In Fig.2(a) the dependence of the height of $`{\displaystyle \frac{d\epsilon }{d\omega }}`$, Eq.(1), on the number of foils ($`N=20,100,200`$) for a fixed collimation angle ($`y_c=250`$) is presented. The width of the peak is practically independent of $`N`$, while the height is proportional to $`N`$. The peaks of the higher harmonics are situated at $`\omega _n=\omega _1/n`$ ($`n`$=2, 3, 4). Their characteristics are: the height $`\propto 1/n`$ and the width $`\propto 1/n^3`$ (see Eqs.(19)-(21); we recall that Figs.1 and 2 show the energy-loss spectrum, while these equations are for the probability spectrum). Note that the spectra shown in Fig.2(a) are calculated without absorption. If one takes absorption into account the higher harmonics will be strongly suppressed. There is also the hard part of the emitted spectrum, which is shown in Fig.2(b). This is the contribution of the first harmonic (see Eqs.(20) and (21)).
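To give a feeling for the scale of Eq.(27), the sketch below (ours) evaluates $`N_s`$. The lithium radiation length $`L_{rad}\approx 155`$ cm is an assumption taken from standard tables, and $`a\approx 1500`$ is again the value mimicking the lithium example; the result, of order $`10^3`$ foils, is only indicative.

```python
import math

ALPHA = 1.0 / 137.0

# Multiple-scattering limit on the stack size, Eq. (27): beyond N_s the
# scattering angle exceeds the collimation angle needed for the requested
# monochromaticity.  All inputs are illustrative assumptions.

def n_scattering_limit(L_rad_over_l1, a, dw_over_w):
    """N_s for radiation length L_rad, a = (omega_p_bar/omega_1)^2,
    and target relative bandwidth dw_over_w."""
    return ALPHA / (4.0 * math.pi) * L_rad_over_l1 * a * dw_over_w

# Lithium (assumed): L_rad ~ 155 cm, l1 = 130 um -> L_rad/l1 ~ 1.2e4.
print(n_scattering_limit(1.2e4, 1500.0, 0.1))   # ~1e3 foils
```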
For $`y_c=250`$ the position of the left slope is $`\omega _1^{(2)}\simeq 48`$ keV (see Eq.(11)). As the collimation becomes stronger, the value $`\omega _1^{(2)}`$ increases. Note that the absorption is much weaker for the hard part of the spectrum. Calculating the area of the peaks in Figs.1 and 2 one can estimate the number of radiated photons per electron $`N_\gamma `$; for $`y_c=250`$ one has $`N_\gamma \simeq 0.1`$ for $`N=100`$. A number of foils $`N=500÷1000`$ is typical in many experiments with TR detectors; lithium foils were used in . A quite good estimate of $`N_\gamma `$ can be obtained using (21) if $`N<N_s`$, substituting $`N=N_{ef}`$. Because of the limitations connected with the photon absorption and the multiple scattering of the projectile, one has to use the light elements. Although the results given in Figs.1 and 2 are obtained for a particular substance and definite parameters, these results are quite universal, at least qualitatively. There is an opportunity to install the proposed TR radiator in an electron storage ring. In this case the number $`N`$ will be defined by the storage-ring operational regime. Let us note that at $`N=`$20 the first-harmonic peak is still quite distinct. The approach proposed in this paper permits one to obtain a yield of X-ray photons $`N_\gamma \sim 0.01÷1`$ per electron with a width of the spectral band down to $`\mathrm{\Delta }\omega /\omega \sim 0.1`$. Of course, the yield decreases for a narrower spectral band. This means at least a two-order-of-magnitude increase of the yield compared with that obtained in . Some specific examples are given in the Table, where $`N_1={\displaystyle \frac{1}{\sigma (\omega _1)l_1}}`$ defines the value of $`N_{ef}`$ for fixed $`N`$ and $`\omega _1`$ (18). The values $`N_s`$ are calculated according to Eq.(27). If $`N_s>N_1`$ we choose $`N`$ slightly larger than $`N_1`$. In the opposite case we choose $`N`$ slightly smaller than $`N_s`$. In the first column the material of the foil is given; $`\mathrm{CH}_2`$ stands for polyethylene. The value $`N_\gamma `$ is the number of photons emitted into the collimator for the given values of $`\omega _1`$ and $`\mathrm{\Delta }\omega _1`$. All calculations are performed for $`l=3l_1`$. The results in the Table are in reasonable agreement with Figs.1 and 2. Acknowledgments This work was supported in part by the Russian Fund of Basic Research under Grant 98-02-17866. Figure captions * Fig.1 The spectral distribution of the radiated energy $`E(\omega )\equiv {\displaystyle \frac{d\epsilon }{d\omega }}`$, Eq.(1), in units $`2\alpha /\pi `$ vs photon energy for $`Li`$ foils ($`ϵ=25`$ GeV, $`l_1=0.13mm,l=3l_1,N=100`$). The dependence of the width and form of the first-harmonic peak ($`\omega _1=10`$ keV), Eq.(6), on the collimation angle $`\vartheta _c`$ is shown. This angle is characterized by $`y_c=\vartheta _c^2\gamma ^2`$: $`y_c=150,200,250,300,350,400`$ for curves 1, 2, 3, 4, 5, 6 respectively. * Fig.2 The spectral distribution of the radiated energy $`E(\omega )\equiv {\displaystyle \frac{d\epsilon }{d\omega }}`$, Eq.(1), in units $`2\alpha /\pi `$ vs photon energy for $`Li`$ foils ($`ϵ=25`$ GeV, $`l_1=0.13mm,l=3l_1,y_c=250`$). The dependence of the height of the spectral curve on the number of foils is presented ($`N=20,100,200`$ for curves 1, 2, 3 respectively). + (a) The soft part of the spectral curve. The main (first-harmonic) peak and the peaks of the $`n=2,3,4`$ harmonics ($`\omega _n=\omega _1/n`$) are seen. + (b) The hard part of the spectral curve.
no-problem/9812/physics9812028.html
ar5iv
text
# High-precision calculations of dispersion coefficients, static dipole polarizabilities, and atom-wall interaction constants for alkali-metal atoms ## Abstract The van der Waals coefficients for the alkali-metal atoms from Na to Fr interacting in their ground states are calculated using relativistic ab initio methods. The accuracy of the calculations is estimated by also evaluating atomic static electric dipole polarizabilities and coefficients for the interaction of the atoms with a perfectly conducting wall. The results are in excellent agreement with the latest data from ultra-cold collisions and from studies of magnetic field induced Feshbach resonances in Na and Rb. For Cs we provide critically needed data for ultra-cold collision studies. preprint: ND Atomic Theory Preprint 98/11 The van der Waals interaction plays an important role in characterizing ultra-cold collisions between two ground state alkali-metal atoms. While the calculation of interaction coefficients has been a subject of great interest in atomic, molecular and chemical physics for a very long time, it is only very recently that novel cold collision experiments, photoassociation spectroscopy, and analyses of magnetic field induced Feshbach resonances have yielded strict constraints on magnitudes of the coefficients. Moreover, due to the extreme sensitivity of elastic collisions to the long-range part of the potentials, knowledge of the van der Waals coefficients influences predictions of signs and magnitudes of scattering lengths. Although many theoretical methods have been developed over the years to calculate van der Waals coefficients, persistent discrepancies remain. In this paper, various relativistic ab initio methods are applied to determine the van der Waals coefficients for the alkali-metal dimers of Na to Fr . As a check on our calculations, we also evaluate the atom-wall interaction constants, which have recently been calculated by other methods, and use them as a sensitive test of the quality of our wave functions. Furthermore, we calculate atomic polarizabilities and compare them to experimental data, where available. The dynamic polarizability at imaginary frequency $`\alpha (i\omega )`$ for a valence state $`|v\rangle `$ can be represented as a sum over intermediate states $`|k\rangle `$ $$\alpha \left(i\omega \right)=\frac{2}{3}\sum _k\frac{E_k-E_v}{\left(E_k-E_v\right)^2+\omega ^2}\langle v|𝐑|k\rangle \langle k|𝐑|v\rangle ,$$ (1) where the sum includes an integration over continuum states and $`𝐑=\sum _{j=1}^N𝐫_j`$ is the dipole operator for the $`N`$-electron atomic system. We use atomic units throughout. The dispersion coefficient $`C_6`$ of the van der Waals interaction between two identical atoms is $$C_6=\frac{3}{\pi }\int _0^{\infty }𝑑\omega [\alpha (i\omega )]^2.$$ (2) The coefficient $`C_3`$ of the interaction between an atom and a perfectly conducting wall is $$C_3=\frac{1}{4\pi }\int _0^{\infty }𝑑\omega \alpha (i\omega ),$$ (3) or alternatively $$C_3=\frac{1}{12}\langle v|𝐑\cdot 𝐑|v\rangle .$$ (4) Using the latter relation, we have previously determined the values of $`C_3`$ coefficients for alkali-metal atoms using many-body methods. The dipole operator $`𝐑`$, being a one-particle operator, can have non-vanishing matrix elements for intermediate states represented by two types of Slater determinant. Firstly, the valence electron $`v`$ can be promoted to some other valence state $`w`$. Secondly, one of the core orbitals $`a`$ can be excited to a virtual state $`m`$, leaving the valence state $`v`$ unchanged.
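As an illustration of how Eqs. (1)-(3) are evaluated in practice, the sketch below (ours, not the codes used in this work) builds $`\alpha (i\omega )`$ from a toy two-level set of transition energies and line strengths, and obtains $`C_6`$ and $`C_3`$ by quadrature on a mapped semi-infinite interval. The numbers in `levels` are invented placeholders in atomic units, not the SD/RRPA data of the paper. For this toy model the $`C_3`$ quadrature reproduces the closure identity of Eq. (4) exactly: $`C_3`$ equals one twelfth of the summed line strengths.

```python
import numpy as np

# Sketch of Eqs. (1)-(3): alpha(i*omega) from (energy, line-strength) pairs,
# then C6 and C3 by Gauss-Legendre quadrature mapped onto [0, inf).
# The two levels below are toy numbers (atomic units), not real data.

levels = [(0.051, 20.0), (0.054, 40.0)]   # (E_k - E_v, |<v|R|k>|^2), toy

def alpha(iw):
    return (2.0 / 3.0) * sum(dE * S / (dE**2 + iw**2) for dE, S in levels)

def quad_inf(f, n=200, scale=0.1):
    """Integrate f on [0, inf) via the map x = scale*t/(1-t), t in [0,1)."""
    t, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (t + 1.0); w = 0.5 * w          # map nodes to [0, 1]
    x = scale * t / (1.0 - t)
    jac = scale / (1.0 - t)**2
    return np.sum(w * jac * f(x))

C6 = (3.0 / np.pi) * quad_inf(lambda w: alpha(w)**2)
C3 = (1.0 / (4.0 * np.pi)) * quad_inf(alpha)
print(alpha(0.0), C6, C3)   # C3 should come out ~ (20+40)/12 = 5.0
```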
In the language of second-quantization, the first type of states is represented by $`a_w^{\dagger }|0_c\rangle `$ and the second type by $`a_m^{\dagger }a_aa_v^{\dagger }|0_c\rangle `$, where $`|0_c\rangle `$ describes the core. These states will be referred to as “valence” and “autoionizing” states, respectively. In accordance with such a classification, we break the total polarizability $`\alpha `$ into three parts: the polarizability due to valence states $`\alpha _v`$, the core polarizability $`\alpha _c`$, and the valence-core coupling term $`\alpha _{cv}`$, with $`\alpha =\alpha _v+\alpha _c+\alpha _{cv}`$. The last two terms arise from the summation over autoionizing states. In evaluating the core polarizability we permit excitations into all possible states outside the core. The term $`\alpha _{cv}`$ is a counter term accounting for the consequent violation of the Pauli principle. Various states contribute at drastically different levels to the dynamic polarizability. For example, 96% of the static polarizability of Cs is determined by the two intermediate valence states $`6P_{1/2}`$ and $`6P_{3/2}`$; other valence states contribute less than 1%. The core polarizability accounts for approximately 4% of the total value and the contribution of the core-valence coupling term is about $`0.1`$%. The relative sizes of contributions to the static polarizabilities for the other alkali-metal atoms are similar. The dynamic polarizability $`\alpha (i\omega )`$, given in Eq. (1), behaves as $`\alpha (i\omega )\simeq \sum _kf_{vk}/\omega ^2=N/\omega ^2`$ at large values of $`\omega `$, where we have used the nonrelativistic oscillator strength sum rule $`S\left(0\right)=\sum f_{vk}=N`$. Because the ratio $`\alpha _c/\alpha _v`$ nonrelativistically is close to $`N-1`$, we expect the core polarizability to give the major contribution at large $`\omega `$. Therefore, the core polarizability becomes increasingly important for heavier atoms. Based on the above argument, we use several many-body techniques of varying accuracy to calculate the different contributions to the total polarizability. In particular, we employed the relativistic single-double (SD) all-order method to obtain the leading contribution from valence states . The core polarizability is obtained from the relativistic random-phase approximation (RRPA) . The core-valence coupling term and the non-leading contribution from valence states are estimated in the Dirac-Hartree-Fock approximation by a direct summation over basis set functions . The relativistic single-double (SD) all-order method has been previously used to obtain high-precision atomic properties for the first few excited states in alkali-atom systems . The results of theoretical SD matrix elements and comparison with experimental data are presented elsewhere . Generally, the electric-dipole matrix elements for principal transitions agree with precise experimental data to better than 0.5% for all alkali-metal atoms, the calculations being more accurate for lighter elements. In the present work, for Na, K, Rb, and Cs, we have used SD matrix elements for the six lowest $`P_{1/2}`$ and $`P_{3/2}`$ levels. For Fr, we have used SD matrix elements for the principal transition and matrix elements calculated with third-order many-body perturbation theory (MBPT), described in , for the four other lowest $`P_{1/2}`$ and $`P_{3/2}`$ states. Unless noted otherwise, we have used experimental values of energy levels from Ref. and from the compilation of Dzuba et al. for Fr.
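Continuing the toy sketch above (it reuses `alpha` and `levels` from that block, so it is not self-contained), one can check the quoted large-$`\omega `$ behaviour numerically: $`\omega ^2\alpha (i\omega )`$ should approach the oscillator-strength sum.

```python
# Large-frequency check for the toy model above: omega^2 * alpha(i*omega)
# tends to sum_k f_vk = (2/3) * sum_k dE_k * S_k as omega grows.
f_sum = (2.0 / 3.0) * sum(dE * S for dE, S in levels)
for w in (1.0, 10.0, 100.0):
    print(w, w**2 * alpha(w), f_sum)
```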
The relativistic random-phase approximation (RRPA) was used previously to obtain static core polarizabilities for all alkali-metal atoms except Fr in Ref. . In the present calculations we reformulated the original differential equation method used in in terms of basis sets , in a manner similar to . We reproduce the results of Ref. and, in addition, obtain a value of 20.41 a.u. for the static dipole polarizability of the $`\mathrm{Fr}^+`$ ion. Zhou and Norcross find $`\alpha _c(0)`$ = 15.644(5) for the polarizability of $`\mathrm{Cs}^+`$, by fitting Rydberg-state energies to a model potential for Cs, while the present RRPA calculations yield the value $`\alpha _c(0)`$=15.81. Based on this comparison, we expect the RRPA method to give at least a few per cent accuracy in the calculation of $`\alpha _c(i\omega )`$. To demonstrate the sensitivity of our results to errors in the core polarizability, we present the ratios of values calculated omitting $`\alpha _c`$ to the total values of $`\alpha (0)`$, $`C_3`$, and $`C_6`$ in Table I. We see that while $`\alpha (0)`$ is affected at the level of a few per cent, the core contribution to $`C_6`$ becomes increasingly important for heavier systems. $`\alpha _c(i\omega )`$ contributes 2% to $`C_6`$ for Na and 23% for Fr. The atom-wall interaction constant $`C_3`$, obtained with Eq. (3), is the most sensitive to the core contribution. Indeed, while $`\alpha _c`$ contributes 16% of $`C_3`$ for Na, it accounts for half of the total value of $`C_3`$ for Fr. The tabulation of our results for static dipole polarizabilities, atom-wall interaction constants $`C_3`$, and $`C_6`$ dispersion coefficients is presented in Tables II-IV. In Method I we use high-precision experimental values for dipole matrix elements of the principal transition. We used a weighted average of experimental data if there were several measurements for a particular transition. In Method II we use the theoretical SD matrix elements for the principal transition. We recommend using the values obtained with Method I for $`\alpha (0)`$ and $`C_6`$, since the accuracy of the experimental data for the principal transitions is better than that of the SD predictions. In Table II we compare our calculations with experimental data for static polarizabilities. We find perfect agreement with a high-precision value for Na obtained in recent atom-interferometry experiments . The experimental data for static polarizabilities of K, Rb, and Cs are known with an accuracy of about 2% . While we agree with those experimental values, we believe that our theoretical approach gives more accurate results, mainly due to the overwhelming contribution of the principal transition to the sum over intermediate states. The electric-dipole matrix elements for principal transitions are known typically at the 0.1% level of accuracy for all the alkalis. The theoretical error is estimated from the experimental accuracy of matrix elements , from an estimated 5% error for the core polarizabilities, and from a 10% error for the remaining contributions to $`\alpha (0)`$. A sensitive test of the quality of the present dynamic polarizability functions is obtained by calculating $`C_3`$ coefficients in two different ways: i) by direct integration of $`\alpha (i\omega )`$ using Eq. (3) and ii) by calculating the diagonal expectation value of $`𝐑^2`$ in Eq. (4). In the present work we extend calculations of the expectation value of $`𝐑^2`$ in the SD formalism to obtain $`C_3`$ values for Rb, Cs, and Fr.
In Table III we compare the SD values for $`C_3`$ with those obtained in using MBPT. The difference of 7% for Cs and 10% for Fr between SD and MBPT values is not surprising, since the MBPT underestimates the line strength of principal transitions by a few per cent for Cs and Fr. To make a consistent comparison between the $`C_3`$ values obtained by integrating $`\alpha (i\omega )`$ and by calculating the expectation value, we have used SD energies and matrix elements in the Method II calculations in Table III. These $`C_3`$ values agree to about 0.6% for Na, 1% for K and Rb, 2.5% for Cs, and 3.4% for Fr. At present, it appears no experimental data are available for comparison. We assume that most of the error is due to the RRPA method used to calculate the core polarizability. Therefore, the error estimates in $`C_6`$ are based on the accuracy of experimental matrix elements for the principal transition , and on scaling the error of the core contribution from $`C_3`$ to $`C_6`$, using Table I. The comparison of $`C_6`$ coefficients with other calculations is presented in Table IV. For Na the results are in good agreement with a semi-empirical determination . The integration over $`\alpha (i\omega )`$ as in Eq. (2) has been most recently used by Marinescu, Sadeghpour, and Dalgarno and by Patil and Tang . In contrast to the present ab initio calculations, both works employed model potentials. In addition, Ref. used corrections to multipole operators to account for core polarization effects, with parameters chosen to reproduce the experimental values of static polarizabilities, which for K, Rb, and Cs atoms are known from experimental measurements with an accuracy of approximately 2%. The major contribution to the integration in Eq. (2) arises from the region $`\omega =0`$, and the integrand is quadratic in $`\alpha (i\omega )`$. Therefore, until more accurate experimental values for static polarizabilities are available, the predictions of $`C_6`$ for K, Rb, and Cs have an inherent (experimental) accuracy of about 4%. The theoretical uncertainty of the method used in Ref. is determined, among other factors, by the omitted contribution from core polarizability, as discussed in Refs. . Patil and Tang used model-potential calculations with analytical representations of wave functions and with experimental energies. They used a direct summation method in Eq. (1). The contribution from the core polarizability was not included, as can be seen from Eq. (3.4) of Ref. . In fact, this formula in the limit of large $`\omega `$ results in $`\alpha (i\omega )\to 1/\omega ^2`$ instead of the correct limit $`\alpha (i\omega )\to N/\omega ^2`$, which follows from the oscillator strength sum rule. Therefore, the model-potential calculations generally underestimate the $`C_6`$ coefficients. Indeed, from the comparison in Table IV one can see that the $`C_6`$ values from Ref. and Ref. are systematically lower than our values. Maeder and Kutzellnigg used a method alternative to the integral Eq. (2) to calculate dispersion coefficients, by minimizing a Hylleraas functional providing a lower bound. However, their prediction depended on the quality of the solution of the Schrödinger equation for the ground state. For alkali-metal atoms, model potentials were used to account for correlations. The predicted static polarizabilities are several per cent higher than the experimental values, and are not within the experimental error limits.
However, for the $`C_6`$ coefficients we generally find good agreement with the values of Maeder and Kutzellnigg . Recently Marinescu et al. presented calculations of dispersion coefficients of different molecular symmetries for Fr, using a model-potential method similar to that of Ref. . As shown in Table IV, our result for Fr is significantly larger than the result of Ref. . We believe this may be because the method of Ref. does not completely take into account the contribution of the core polarizability, which accounts for 23% of $`C_6`$ for Fr. Elastic scattering experiments and photoassociation spectroscopy have sensitively constrained the possible values of $`C_6`$ for Na and Rb. Van Abeelen and VerHaar reviewed spectroscopic and cold-collision data for Na, including data from recent observations of magnetic field induced Feshbach resonances . They considered values for Na of $`1539<C_6<1583`$ and concluded that $`C_6=1539`$ gave the best consistency between data sets. Our result for Na using Method I is in particularly good agreement with this value. Photoassociation experiments for Rb limit the $`C_6`$ coefficient to the range 4400-4900 a.u., and even more recently a study of a Feshbach resonance in elastic collisions of $`{}^{85}`$Rb concluded $`C_6=4700(50)`$. Our value $`C_6=4691(23)`$ is in excellent agreement with this experiment. For Cs, knowledge of the value of $`C_6`$ is critical for predictions of the sign of the elastic scattering length , though it has been demonstrated that the resulting cross sections are not particularly sensitive to the value of $`C_6`$ . For Fr, the paucity of other dimer data constrains quantitative theoretical collisional studies for the near future. As photoassociation experiments move beyond the alkali-metal atoms to other atoms with many electrons, such as Sr and Cr , it will be important to have reliable ab initio methods for the calculation of atomic properties. The approaches presented here could, in principle, be applied to Sr, and perhaps with some significant effort to Cr. AD would like to thank H. R. Sadeghpour, B. D. Esry, F. Masnou-Seeuws, and D. J. Heinzen for useful discussions. The work of AD, WRJ, and MSS was supported in part by NSF Grant No. PHY 95-13179 and that of JFB by NSF Grant No. PHY 97-24713. The Institute for Theoretical Atomic and Molecular Physics is supported by a grant from the NSF to the Smithsonian Institution and Harvard University.
no-problem/9812/cond-mat9812061.html
ar5iv
text
# Strongly Correlated Electrons on a Silicon Surface: Theory of a Mott Insulator ## Abstract We demonstrate theoretically that the electronic ground state of the potassium-covered Si(111)-B surface is a Mott insulator, explicitly contradicting band theory but in good agreement with recent experiments. We determine the physical structure by standard density-functional methods, and obtain the electronic ground state by exact diagonalization of a many-body Hamiltonian. The many-body conductivity reveals a Brinkman-Rice metal-insulator transition at a critical interaction strength; the calculated interaction strength is well above this critical value. Transport behavior in crystalline materials is governed by the excitation spectrum: insulators have a finite gap to excitations while metals have zero-energy excitations. Band theory accurately describes this distinction in most materials: systems with only filled or empty bands are insulating while systems with partially occupied bands are metallic. However, the band description may break down under circumstances when, roughly speaking, the energy cost for forming an extended state exceeds the cost for forming a localized state. The resulting ground state, which arises from electron-electron interactions that band theory cannot describe, is known as a Mott insulator . Surfaces provide a potentially fertile environment for Mott insulators. Electrons occupying surface states may localize more readily than in the bulk, due to two significant effects: (1) Atoms at surfaces have lower coordination than in the bulk, raising the energetic cost for electron hopping. (2) Surfaces often undergo reconstructions, yielding much larger inter-orbital spacings than in the bulk. These effects combine to make surfaces natural systems to look for Mott insulating behavior. In a recent series of experiments, Weitering et al. used photoemission and inverse photoemission to demonstrate that the K/Si(111)-$`(\sqrt{3}\times \sqrt{3})`$-B surface has a gap at the Fermi level. Since this system has an odd number of electrons per unit cell it must be metallic in a band description, clearly contradicting the photoemission data. On this basis, Weitering et al. hypothesized that this system (hereafter K/Si-B) is a Mott insulator. In this Letter we explicitly demonstrate, by exact solution of the appropriate many-body Hamiltonian, that the electronic ground state of K/Si-B is indeed a Mott insulator. The calculation is in three parts. First, we use standard density-functional methods to determine the geometrical and electronic structure of this surface within the local-density approximation (LDA). Second, we map the relevant electronic states onto a many-body Hamiltonian, which we then solve on a periodic cluster using exact diagonalization techniques. Third, we use the resulting many-body ground state to compute the zero-frequency conductivity or Drude weight, $`D`$, and then show that in the infinite system $`D\to 0`$, that is, a metal-insulator transition occurs in the thermodynamic limit. Boron induces a well-known $`\sqrt{3}\times \sqrt{3}`$ reconstruction of the clean Si(111) surface . Boron substitutes for every third Si atom in the second subsurface layer, and the displaced Si assumes an adatom position above the boron (see Fig. 1).
By this mechanism, the surface forms a conventional band insulator, leaving each Si adatom with an empty orbital extending away from the surface. These orbitals form a triangular lattice on the surface. In the experiments of Weitering et al., K was then deposited onto this insulating substrate until the saturation coverage was reached. To determine the equilibrium structure of K/Si-B, we have performed extensive LDA calculations. The calculations used a slab geometry with three double layers of Si, terminated by H, and a vacuum region equivalent to three double layers of Si. Total energies and forces were calculated using Hamann and Troullier-Martins pseudopotentials, and a plane-wave basis with a kinetic-energy cutoff of 20 Ry, as implemented in the fhi96md code . Four k-points were used for Brillouin-zone integrations. Full structural relaxation was performed on all atoms, except those in the bottommost double layer, until the rms force was less than 0.05 eV/Å. We began by first fully relaxing the surface without K present, and then proceeded to determine the equilibrium coverage and geometry of the K-saturated surface. Experimentally, coverage is monitored via the electron work function: at the saturation coverage, the low-temperature work function reaches a minimum . The absolute K coverage is not known from experiment, so it must be determined theoretically. We calculated the work function for the lowest-energy arrangement of adsorbates at coverages of 1/6, 1/3, 2/3, and 1 monolayer (ML), and find a minimum at 1/3 ML, in agreement with the conclusions of Weitering et al. At all coverages, the experimental photoemission spectra show that the Si adatom backbond state persists upon K deposition, suggesting the K adsorbates do not break the Si adatom bonds. We therefore assume that at these coverages the adsorbates do not destroy the underlying reconstruction. The resulting minimum energy configuration at the saturation coverage of 1/3 ML is shown in Fig. 1. The K adsorbates are in the $`H_3`$ hollow site, with the Si adatom slightly shifted from its position with no K present. A metastable state with the K adsorbate in the $`T_4`$ hollow site has an energy 0.1 eV higher per unit cell. At coverages below 2/3 ML, one expects the K 4$`s`$ electrons to partially occupy the surface state arising from the empty Si-adatom orbitals. At 1/3 ML there is one K per Si orbital, so in the band description the single surface band is half occupied; this simple picture is confirmed by the calculated LDA band structure shown in Fig. 2. Clearly this system must be metallic within band theory. To investigate the importance of electronic interactions not included in band theory, we derive a single-band Hubbard model for the half-filled surface state. The Hamiltonian is $$H=\sum _{ij\sigma }t_{ij}c_{i\sigma }^{\dagger }c_{j\sigma }+U\sum _in_{i\uparrow }n_{i\downarrow },$$ (1) where $`c_{i\sigma }^{\dagger }`$ creates an electron with spin $`\sigma `$ on site $`i`$, and $`n_{i\sigma }=c_{i\sigma }^{\dagger }c_{i\sigma }`$ is the number operator. The sites correspond to the empty Si orbitals that the K electrons are doping. The amplitude for hopping from orbital $`i`$ to orbital $`j`$ is given by $`t_{ij}`$, and $`t_{ij}=t_{ji}`$. There is a Coulomb energy cost of $`U`$ to occupy an orbital with two electrons. Our approach to determining the parameters of the Hubbard model is similar to other first-principles approaches .
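To make the model concrete, the sketch below (ours) diagonalizes the Hamiltonian (1) on the smallest nontrivial cluster: two sites at half filling in the $`S_z=0`$ sector. We write the hopping with the standard $`-t`$ sign convention; the two-site spectrum is independent of the sign of $`t`$. The $`t`$ and $`U`$ values are the K/Si-B energy scales quoted later in the text; the actual calculation in this work uses a 16-site cluster and the Lanczos algorithm.

```python
import numpy as np

# Two-site Hubbard model at half filling, S_z = 0 sector.  Basis:
# |ud,0>, |0,ud>, |u,d>, |d,u>.  Ground-state energy has the closed form
# E0 = (U - sqrt(U^2 + 16 t^2)) / 2, which we use as a check.

def hubbard_2site(t, U):
    return np.array([[U,    0.0, -t,   t  ],
                     [0.0,  U,   -t,   t  ],
                     [-t,  -t,   0.0,  0.0],
                     [ t,   t,   0.0,  0.0]])

t, U = 0.066, 1.23                 # eV, the K/Si-B scales quoted below
E = np.linalg.eigvalsh(hubbard_2site(t, U))
E0_exact = 0.5 * (U - np.sqrt(U**2 + 16 * t**2))
print(E.min(), E0_exact)           # both ~ -0.0140 eV
```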
We first solve the Hubbard model in the mean-field (MF) approximation, and then require that the resulting single-particle energies optimally reproduce the corresponding LDA spectrum (which is also a mean-field theory) throughout the zone. In the MF approximation, the up electrons move in the average potential generated by the down electrons (and vice-versa), so the MF Hubbard Hamiltonian for the up electrons becomes $$H_{\uparrow }^{\mathrm{MF}}=\sum _{ij}t_{ij}c_{i\uparrow }^{\dagger }c_{j\uparrow }+U\sum _i\langle n_{i\downarrow }\rangle n_{i\uparrow },$$ (2) where $`\langle n_{i\downarrow }\rangle `$ is the average density of the down electrons on site $`i`$. We assume a paramagnetic state in both the LDA and the MF solution to the Hubbard model. To determine the hopping amplitudes, $`t_{ij}`$, we fit the single-particle eigenvalues in the MF Hubbard solution to the LDA eigenvalues of the surface band at 100 special $`k`$-points. Note that although the K adsorbates break the three-fold rotational symmetry of the substrate, the LDA band structure remains nearly isotropic, and so we assume isotropic hopping. We allow hopping between nearest-neighboring and second-nearest-neighboring sites, with amplitudes $`t_1`$ and $`t_2`$ respectively; third-nearest-neighbor hopping was found to be insignificant. Thus the dispersion is given by $`\epsilon (𝐤)`$ $`=`$ $`2t_1\left[\mathrm{cos}(k_y)+2\mathrm{cos}(k_x\sqrt{3}/2)\mathrm{cos}(k_y/2)\right]`$ (3) $`+`$ $`2t_2\left[\mathrm{cos}(k_x\sqrt{3})+2\mathrm{cos}(k_x\sqrt{3}/2)\mathrm{cos}(3k_y/2)\right].`$ (4) The optimized amplitudes are $`t_1=66`$ meV and $`t_2=24`$ meV. A plot of the fit and the LDA eigenvalues along high symmetry directions is shown in Fig. 2; the fit is very good, with a rms error of 42 meV. To determine the intra-orbital Coulomb repulsion $`U`$, we subject the system to a density fluctuation by moving charge from one Si orbital to another. The optimal interaction parameter, $`U_{\text{K/Si-B}}`$, is then determined by requiring the LDA solution and the MF Hubbard calculation to respond identically. In order to maintain overall charge neutrality, a supercell calculation with two Si orbitals is required. At the edge of the Brillouin zone, the paramagnetic single-particle eigenvalues for a shift of $`\delta n`$ of an electron from one orbital to another take the simple form $$\epsilon _\pm =\pm U\delta n/2,$$ (5) up to an overall constant which we take to be zero. The fit of $`\epsilon _+`$ to the LDA eigenvalues is shown in Fig. 3. For small charge shifts, the LDA eigenvalues are nearly linear, and we obtain $`U_{\text{K/Si-B}}=1.23`$ eV. For larger charge shifts, additional bands in the LDA calculation enter that are not present in the single-band Hubbard model, and the LDA eigenvalues drop very slightly below the MF Hubbard eigenvalues. As a check of the reliability of this approach, we also determined $`U_{\text{K/Si-B}}`$ by fitting the change in the total kinetic energy in the MF Hubbard solution to the LDA. We find $`U_{\text{K/Si-B}}\approx 1.2`$ eV, consistent with the above result. Having determined the parameters of the Hubbard model describing K/Si-B, we now solve this model exactly (using a periodic 16-site cluster) to obtain the many-body electronic ground state. The number of states in the Hilbert space grows exponentially with the number of sites in the cluster: in the Hubbard model, each site can have zero, one (either up or down), or two electrons, so the number of basis states in the space of an $`N`$-site system is $`4^N`$.
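The quoted $`4^N`$ growth, and the effect of the conserved quantum numbers discussed next, can be illustrated in a few lines (ours):

```python
from math import comb

# Hilbert-space bookkeeping for an N-site Hubbard cluster at half filling:
# 4^N states in total; fixing N_up = N_down = N/2 selects the sector that
# is actually diagonalized, and translation symmetry reduces it further
# (roughly by a factor of N).

N = 16
total = 4**N
sector = comb(N, N // 2) ** 2            # conserved N_up and N_down
print(total, sector, sector // N)        # ~4.3e9, ~1.7e8, ~1.0e7
```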
The symmetries of the Hamiltonian make the matrix block diagonal, but with 16 sites the size of the largest block is still more than $`10^7\times 10^7`$. Conventional algorithms obviously cannot diagonalize matrices this large, and so we use the Lanczos algorithm to determine the exact ground state. Storing the Hamiltonian and three basis vectors in memory on an IBM SP2 required 64 nodes with 1 Gb memory each; the computation required about 600 CPU hours. To distinguish quantitatively between metallic and insulating behavior, we calculate the zero-frequency conductivity or Drude weight, $`D`$, of the many-body ground state. The Drude weight provides a definitive way to distinguish metals from insulators irrespective of the applicability of band theory: in the thermodynamic limit, metals have non-zero Drude weight, while for insulators $`D=0`$. Kohn first showed that $`D`$ may be calculated from the variation of the ground-state energy with respect to an applied vector potential , $$D=\partial ^2E/\partial \varphi ^2.$$ (6) We use this technique to determine the Drude weight of our Hubbard cluster as a function of the interaction parameter $`U`$. The results are shown in Fig. 4. At small $`U`$, the system is in the metallic regime and so $`D`$ is large, and it decreases monotonically with increasing $`U`$. $`D`$ never becomes zero, because only an infinite system can undergo a true metal-insulator transition. Instead of a critical interaction strength, our 16-site cluster shows a transition region in the vicinity of the physically interesting interaction strengths, near $`U=1`$ eV. In general, there may be level crossings in the ground state as a function of $`U`$. For our hopping amplitudes, this is not the case: the ground state evolves adiabatically with increasing $`U`$, and the Drude weight in Fig. 4 is a continuous function. We see no evidence of the intermediate (semi-metallic) phase that was found in recent slave-boson studies of the Hubbard model on a triangular lattice with $`t_2=0`$ . To show that K/Si-B lies on the insulating side of a Mott-insulator transition, we need the critical interaction strength for the transition in order to compare with our calculated interaction strength, $`U_{\text{K/Si-B}}`$. We can obtain this value by extending our calculated $`D(U)`$ from the exact finite-cluster result to the thermodynamic infinite limit, using the form derived with the Gutzwiller approximation for the infinite system, $$D_{\infty }(U)\simeq \{\begin{array}{cc}1-(U/U_c)^2,\hfill & U<U_c\hfill \\ 0,\hfill & U>U_c\hfill \end{array}$$ (7) where $`U_c`$ is the critical interaction strength for the metal-insulator transition. This functional form fits our 16-site results at small and intermediate values of $`U`$ extremely well, as shown in Fig. 4. From this fit, we estimate the critical interaction strength for the Mott metal-insulator transition in the infinite system occurs at $`U_c=0.95\pm 0.02`$ eV. Our calculated value for the physical system, $`U_{\text{K/Si-B}}=1.23`$ eV, is well above this critical value, and establishes that K/Si-B is indeed a Mott insulator.
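The extrapolation step can be sketched as a simple curve fit (ours; the $`(U,D)`$ pairs below are invented for illustration and merely shaped like Fig. 4, they are not the data of this work):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the Gutzwiller form of Eq. (7), D(U) = 1 - (U/Uc)^2, to finite-cluster
# Drude weights and read off the critical coupling Uc.  Toy inputs only.

U = np.array([0.0, 0.2, 0.4, 0.6, 0.8])        # eV (toy)
D = np.array([1.00, 0.96, 0.83, 0.61, 0.31])   # normalized (toy)

def gutzwiller(u, uc):
    return 1.0 - (u / uc)**2

(uc_fit,), cov = curve_fit(gutzwiller, U, D, p0=[1.0])
print(uc_fit, np.sqrt(cov[0, 0]))              # ~0.95 eV for these inputs
```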
In the Hubbard model for K/Si-B, the second-neighbor hopping is substantial, so the second-neighbor antiferromagnetic Heisenberg coupling in the Mott-insulating limit will be significant. This coupling frustrates the three-sublattice correlations in the nearest-neighbor model, so it is unlikely that three-sublattice order is established. The ground state will likely either establish some other type of collinear order or enter a quantum-disordered regime with no long-range order . To conclude, we have shown that the many-body electronic ground state of the K/Si-B surface is a Mott insulator. Specifically, we have first determined the surface coverage and morphology of K adsorbed on Si(111)-B using first-principles total-energy methods. We then mapped the relevant electronic degrees of freedom onto a Hubbard model, which we solved with exact diagonalization. By calculating the Drude weight of the model, we have demonstrated that for the physical parameters of K/Si-B, the model has a Mott insulating ground state. We thank H. H. Weitering, E. J. Mele, M. J. Rozenberg, and D. W. Hess for enlightening discussions. This work was supported by the National Research Council and was funded by ONR. Computational work was supported by a grant of HPC time from the DoD Major Shared Resource Center ASCWP.
no-problem/9812/chao-dyn9812024.html
ar5iv
text
# The dynamics of a low-order coupled ocean-atmosphere model ## 1 Introduction On a time scale of days or weeks, the atmospheric component of the earth’s climate system is dominant. Therefore, for short range weather forecasts oceanic variables, such as the sea surface temperature, can be considered fixed. On a much longer time scale, say years or decades, the ocean’s dynamics and its coupling to the atmosphere can play an important role. It has to be taken into account when studying for instance decadal climate variability, see e.g. , or anthropogenic influence like the greenhouse effect. For such purposes state-of-the-art climate models are often used, which possess millions of degrees of freedom. Even so-called intermediate models, with a rather coarse resolution by meteorological standards, still have thousands of degrees of freedom. The results of experiments with such models are analysed statistically, as they are out of reach of the ordinary analysis of dynamical systems theory. One important open issue is the interplay of the short time scale of the atmospheric, intrinsically chaotic, components, and the long time scale of the oceanic component. As much understanding of atmosphere models has been gained by looking at extremely low dimensional truncations, our aim is to do the same for coupled models. A proposal for a low order coupled model is due to Roebber . He coupled the Lorenz-84 model, which is a metaphor for the general circulation of the atmosphere , to Stommel’s box model for a single ocean basin . Our model is similar to Roebber’s, but we have simplified the ocean model somewhat. The feature modeled by Stommel in is the thermohaline circulation (THC) in the North Atlantic ocean. This is the large scale circulation driven by the north-south heating gradient on one hand, and the difference in salt content of the sea water on the other. The Lorenz model describes the westerly circulation, i.e. the jet stream, and traveling planetary waves. Experiments with realistic climate models (see, for instance ) indicate that the circulation of the ocean is largely driven by the atmospheric dynamics. In contrast, the feedback to the atmosphere seems to be rather weak, and only notable on long time scales. Therefore, we assume that the coupling terms in the ocean model are of the same order of magnitude as its internal dynamics. The coupling terms in the atmosphere model are taken much smaller than its internal dynamics. The behaviour of the coupled system is then investigated as a function of the coupling parameters in the atmosphere model. When varying these parameters we find stable equilibrium points, as well as periodic solutions and chaotic attractors. By means of numerical algorithms the Kaplan-Yorke dimension and the correlation dimension of the chaotic attractors are calculated. The difference between the typical Kaplan-Yorke dimension found in the coupled system and the value found in the uncoupled Lorenz system is almost two, the dimension of the ocean model. The difference in correlation dimension is only half as big. This is related to the fact that there is little variability in the ocean model as coupled to the atmosphere model. It is basically a relaxation equation driven by a chaotic forcing. The main feature of the ocean box model, in fact the reason for studying it in the first place, is the occurrence of coexisting stable equilibria. 
One of these equilibria describes the temperature driven THC which is currently observed, with warmer water flowing poleward in the upper layer and cooler water flowing back towards the equator in a deeper layer. This circulation is driven by the heating gradient. The other equilibrium describes an inverted THC, driven by the salinity gradient. We show that in the coupled model, for a range of parameter values, there also exists an attracting set in phase space on which the THC is salinity driven. This may be an equilibrium point or a periodic solution. For these parameter values, the model has competing attractors, as there also exists an attracting set on which the THC is temperature driven. This may be a periodic solution or a chaotic attractor. Another property of the coupled model is the intermittent behaviour, which is observed in the transition from periodic to chaotic motion. By means of bifurcation analysis of periodic solutions this behaviour can be studied in detail. It turns out that a periodic solution loses its stability in a Neimark-Sacker bifurcation. Very close to the Neimark-Sacker bifurcation a saddle-node bifurcation occurs, at which the periodic solution disappears. The intermittent behaviour persists beyond this point. This phenomenon might be called ’skeleton dynamics’ after Nishiura and Ueyama . Both the Neimark-Sacker and the saddle-node bifurcation are local, which means that some distance away from the bifurcating structure in phase space, the vector field remains essentially the same. The ’ghost’ of the periodic orbit keeps attracting the phase point, but only for a finite time. The length of the seemingly periodic intervals can be measured, and we can consider its distribution as a function of the bifurcation parameter. This approach was first taken by Pomeau and Manneville . They also made a prediction for the order of the divergence of the average length of a periodic interval, $`l`$, as the bifurcation parameter approaches its critical value. Although analytical arguments suggest a divergence as $`\mathrm{ln}(1/ϵ)`$, where $`ϵ`$ is the distance from the critical value, their own computer simulations showed a power-law scaling, i.e. $`l\propto ϵ^{-\alpha }`$. The exponent they measure, $`\alpha \approx 0.04`$, agrees reasonably well with our result, $`\alpha \approx 0.06`$. The power-law scaling holds only for very small values of $`ϵ`$. The intermittent behaviour is found in a much larger range in parameter space. It is our conjecture that the presence of the slow ocean system enhances the intermittent behaviour. Just beyond the Neimark-Sacker point, during a periodic interval the phase point approaches the unstable periodic solution near its stable manifold. Here, the convergence rate is set by the time scale of the slow system. Thus, the periodic intervals are much longer than the period of the periodic solution, indeed comparable to the relaxation times of the ocean model. Summarising, we can say that, for a broad range of parameter values, the ocean model does not seem to play an important role because of the weak coupling to the atmosphere model. The atmospheric dynamics is dominant. Two notable exceptions are the occurrence of attracting equilibria and periodic solutions with an inverted THC, also present in the uncoupled ocean model, and the intermittent transition to chaos. The intermittency is generic in the sense that, mathematically, the occurrence of chaos and the loss of stability of periodic orbits through a Neimark-Sacker bifurcation are.
Whether it is generic in a hierarchy of increasingly realistic models with increasing dimension remains an open question. ## 2 The Lorenz-84 general circulation model Like the Lorenz-63 model, a famous example of a low-order model showing chaotic behaviour, the Lorenz-84 model is a Galerkin truncation of the Navier-Stokes equations. Where the ’63 model describes convection, the ’84 model gives the simplest approximation to the general atmospheric circulation at midlatitude. The approximation is applicable on a $`\beta `$-plane, which we place over the North Atlantic ocean. With this derivation in mind, we can give a physical interpretation of the variables of the Lorenz-84 model: $`x`$ is the intensity of the westerly circulation, $`y`$ and $`z`$ are the sine and cosine components of a large traveling wave. The time derivatives are given by $`\dot{x}=-y^2-z^2-ax+aF`$ (1) $`\dot{y}=xy-bxz-y+G`$ (2) $`\dot{z}=bxy+xz-z`$ (3) where $`F`$ and $`G`$ are forcing terms due to the average north-south temperature contrast and the land-sea temperature contrast, respectively. Conventionally we take $`a=1/4`$ and $`b=4`$. The behaviour of this model has been studied extensively since its introduction by Lorenz. Numerical and analytical explorations, as well as a bifurcation analysis, can be found in the literature. The bifurcation diagram of this model is quite rich. It brings forth equilibrium points, periodic and quasi-periodic orbits as well as chaotic motion. Qualitatively the behaviour can be sketched by looking at the energy transfer between the westerly circulation and the traveling wave. The energy content of the westerly circulation tends to grow, forced by solar heating. Above a certain value however this circulation becomes unstable and energy is transferred to traveling waves, and then dissipated. The energy content of the westerly circulation decreases rapidly and the cycle repeats itself in a periodic or irregular fashion. In figure (1) one can see that the orbit tends to spiral around the $`x`$-axis towards a critical value of $`x`$, then drops towards the $`y,z`$-plane. At parameter values $`(F,G)=(6,1)`$ two stable periodic solutions coexist. These parameter values are called summer conditions. For $`(F,G)=(8,1)`$ the behaviour is chaotic (see figure (1)). These parameter values are called winter conditions. If we fix these forcing parameters to summer conditions in the coupled model, described below, no complex dynamics arise. When varying the coupling parameters we see only equilibrium points and periodic solutions. In our investigations we will take $`(F,G)=(8,1)`$, i.e. we will stick to perpetual winter conditions. ## 3 The box model for a single ocean basin The ocean-box model was introduced by Stommel in 1961. It is a simple model of a single ocean basin, the North Atlantic. This basin is divided into two boxes, one at the equator and one at the north pole. Within the boxes the water is supposed to be perfectly mixed, so that the temperature and salinity are constant within each box but may differ between them. This drives a circulation between the boxes which represents the thermohaline circulation. Water evaporates from the equatorial box and precipitates into the polar box. Thus the salinity difference between the boxes is enhanced. The temperature difference is maintained by the difference in heat flux from the sun. Thus, the salinity and the temperature difference drive a circulation in opposite directions. 
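As an aside, the Lorenz-84 system (1)-(3) introduced above is simple enough to integrate in a few lines. The sketch below uses the winter forcing $`(F,G)=(8,1)`$ and a classical Runge-Kutta step; the step size and run lengths are illustrative choices, not those of our computations.

```python
import numpy as np

def lorenz84(state, a=0.25, b=4.0, F=8.0, G=1.0):
    """Right-hand side of the Lorenz-84 model, equations (1)-(3)."""
    x, y, z = state
    return np.array([
        -y**2 - z**2 - a*x + a*F,   # westerly circulation intensity
        x*y - b*x*z - y + G,        # sine component of the traveling wave
        b*x*y + x*z - z,            # cosine component of the traveling wave
    ])

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5*dt*k1)
    k3 = f(state + 0.5*dt*k2)
    k4 = f(state + dt*k3)
    return state + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)

state = np.array([1.0, 0.0, 0.0])
dt = 0.01                       # one time unit is roughly 5-10 days
for _ in range(20000):          # discard the transient
    state = rk4_step(lorenz84, state, dt)
orbit = []
for _ in range(50000):          # record the chaotic winter attractor
    state = rk4_step(lorenz84, state, dt)
    orbit.append(state.copy())
orbit = np.array(orbit)
```

Plotting the stored orbit should reproduce the behaviour described around figure (1): slow spiralling around the $`x`$-axis, followed by a rapid drop towards the $`y,z`$-plane.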
For a suitable choice of parameters, both the circulation driven by salinity and the circulation driven by temperature occur as stable solutions in this model. In contrast to the Lorenz model, no complex dynamics arise. Figure (2) shows the setting of the model. The volume of water is kept equal, but its density may differ between the boxes. Using a linearised equation of state and some assumptions on the damping, dynamical equations for the temperature difference $`T=T_e-T_p`$ and the salinity difference $`S=S_e-S_p`$ can be derived. They are $`\dot{T}=k_a(T_a-T)-|f(T,S)|T-k_wT`$ (4) $`\dot{S}=\delta -|f(T,S)|S-k_wS`$ (5) $`f=\omega T-\xi S`$ (6) where $`k_a`$ is the coefficient of heat exchange between ocean and atmosphere, $`k_w`$ is the coefficient of internal diffusion and $`\omega `$ and $`\xi `$ derive from the linearised equation of state. The flow, $`f`$, represents the THC. It is positive when temperature driven and negative when salinity driven. The inhomogeneous forcing by solar heating and atmospheric water transport are given by $`T_a`$ and $`\delta `$, respectively. When coupling the box model to the Lorenz-84 model, we will use Roebber’s estimates for the parameters in (4)-(6). The volume of the deep ocean box, not present in our model, is simply divided between the polar and the equatorial box. The absolute value in (4) and (5) was put there by Stommel, arguing that the mixing of the water should be independent of the direction of the flow. A more straightforward derivation of the equations of motion of a simple ocean model related to the box model indicates that this is indeed the case, although the term comes out quadratic instead of piecewise linear. If we take this term to be quadratic in the coupled model, described below, the average values of $`T`$ and $`S`$ change significantly, but we find qualitatively the same behaviour. ## 4 The coupled equations Having described these simple models for atmospheric and oceanic circulation, and the physical interpretation of their variables, we can now identify three mechanisms by which they interact: 1. The pole-equator temperature contrast is supposed to be in permanent equilibrium with the wind current $`x`$, i.e. we put $`T_a\propto x`$. Also, the forcing by temperature contrast in (1) is adjusted, so we put $`F\to F_0+F_1T`$. This expresses the simplest geostrophic equilibrium: a north-south temperature gradient which drives an east-west atmospheric circulation. 2. The inhomogeneous forcing by land-sea temperature contrast in (2) should decrease with increasing temperature difference $`T`$. It is assumed that in the polar region the sea water temperature is higher than the temperature over land, while in the equatorial region it is lower. A higher temperature difference $`T`$ thus means a lower land-sea temperature contrast. This influence is described as a fluctuation upon a fixed forcing: $`G\to G_0+G_1(T_{av}-T)`$. 3. The water transport through the atmosphere is taken to be linear in the energy content of the traveling wave: $`\delta \to \delta _0+\delta _1(y^2+z^2)`$. Combining (1)-(5) with the proposed coupling terms we obtain $`\dot{x}=-y^2-z^2-ax+a(F_0+F_1T)`$ (7) $`\dot{y}=xy-bxz-y+G_0+G_1(T_{av}-T)`$ (8) $`\dot{z}=bxy+xz-z`$ (9) $`\dot{T}=k_a(\gamma x-T)-|f(T,S)|T-k_wT`$ (10) $`\dot{S}=\delta _0+\delta _1(y^2+z^2)-|f(T,S)|S-k_wS`$ (11) with $`f`$ as in (6). With the coupling some new constants have been introduced. 
They are $`T_{av}`$, the standard temperature contrast between the polar and the equatorial box, $`\gamma `$, the proportionality constant between the westerly wind current and the temperature contrast, and $`\delta _1`$, a measure for the rate of water transport through the atmosphere. When exploring the dynamical behaviour of the model we take $`F_1`$ and $`G_1`$ as free parameters. As motivated in the introduction, we consider small coupling to the atmosphere model. This is the case if we take $`(F_1,G_1)\in [0,0.1]\times [0,0.1]`$. As remarked in the previous section, we follow Roebber in scaling the parameters. In table (1) they are listed. In this scaling, one unit of time in the model corresponds to the typical damping time scale of the planetary waves. This time scale is estimated to be five to ten days. The system of equations (7)-(11) is not conservative. Energy is being added through solar heating, and dissipated in the atmosphere as well as the ocean model. It can be shown that the model is globally stable. The proof consists of defining a trapping region, and is omitted here. ## 5 Bifurcations of equilibrium points In order to find the equilibrium points of the model, we must equate the time derivatives (7)-(11) to zero. By some algebraic manipulations the set of equations can be simplified, and a program like Mathematica can be used to calculate all equilibria for given parameter values, along with their spectra. In addition, the saddle node and Hopf bifurcations of these equilibria can be found using a continuation package like AUTO. On a plane in phase space, defined by $`f=0`$, the vector field is not differentiable. There is an equilibrium point on this plane if $$G_1=\frac{-G_0\pm \sqrt{a(F_0+F_1T_0-x_0)(1-2x_0+(1+b^2)x_0^2)}}{T_{av}-T_0}.$$ (12) with equilibrium values $`x_0=(\delta _0+a\delta _1F_0)(k_a+k_w)\xi /(\omega k_wk_a\gamma +a\delta _1\xi [k_a+k_w-F_1\gamma k_a])`$ and $`T_0=k_a\gamma x_0/(k_a+k_w)`$. On the curve in parameter space, defined by (12), a bifurcation occurs. When crossing it, increasing $`G_1`$, two equilibrium points appear, one with a positive value of $`f`$, and one with a negative value. The latter is stable. In fact, for any $`G_1`$ greater than the right hand side in (12) there is an attracting equilibrium or periodic solution on which $`f`$ is negative, i.e. the THC is inverted. The results of the bifurcation and stability analysis are shown in figure (3). The stability of the equilibrium points is indicated in the diagrams. As can be seen, only in a small window in parameter space does there exist a stable equilibrium with positive flow. The attractors which arise in the regime with positive flow, i.e. a temperature driven THC, are either periodic or chaotic. The behaviour in this regime is more complex than in the regime with a salinity driven THC. A reason for this asymmetry may be that the coupling through $`\delta _1`$ is rather weak. Experiments with more realistic models indicate that the water vapour transport through the atmosphere, represented by this constant, should be made a function of the temperature difference $`T`$, as the air temperature influences the efficiency of the transport. ## 6 Chaotic attractors The bifurcation diagrams given so far only display bifurcations of equilibrium points. It turns out that in a large parameter range, there is an abundance of periodic orbits that undergo saddle node, torus and period doubling bifurcations. 
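The quantities entering (12) are easy to evaluate numerically. The sketch below computes the equilibrium values $`x_0`$, $`T_0`$ and the two branches of the critical $`G_1`$; the ocean parameters are placeholders standing in for the entries of table (1), which is not reproduced here, so the printed numbers are purely illustrative.

```python
import numpy as np

# Atmosphere constants from the text; the ocean parameters below are
# placeholders for the entries of table (1), not Roebber's values.
a, b = 0.25, 4.0
F0, G0 = 8.0, 1.0                  # perpetual winter conditions
ka, kw = 1.0e-2, 1.0e-3            # heat exchange / internal diffusion
omega, xi, gamma = 1.0, 1.0, 1.0   # equation of state, wind coupling
delta0, delta1, Tav = 1.0e-3, 1.0e-3, 1.0

def equilibrium_f0(F1):
    """Equilibrium values x0, T0 on the non-smooth plane f = 0."""
    x0 = ((delta0 + a*delta1*F0) * (ka + kw) * xi
          / (omega*kw*ka*gamma + a*delta1*xi*(ka + kw - F1*gamma*ka)))
    T0 = ka * gamma * x0 / (ka + kw)
    return x0, T0

def critical_G1(F1):
    """Both branches of the bifurcation curve, equation (12)."""
    x0, T0 = equilibrium_f0(F1)
    root = np.sqrt(a*(F0 + F1*T0 - x0) * (1 - 2*x0 + (1 + b**2)*x0**2))
    return (-G0 + root)/(Tav - T0), (-G0 - root)/(Tav - T0)

print(critical_G1(0.05))
```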
In the uncoupled Lorenz model there exists a codimension two point that acts as an organising center for the bifurcation diagram. At such a point, normal form theory can be employed to find the local bifurcation structure. By continuation techniques information can be gained about the global bifurcation structure. Such an analysis is described in the literature. The absence of such a point in the coupled model makes it quite hard, if not impossible, to find and characterise the complete bifurcation structure. Instead we can do brute force integrations in order to classify the behaviour of the model. It is found that for many parameter values the behaviour is chaotic. Using the algorithm described by Wolf et al. we can approximate the Kaplan-Yorke dimension of the chaotic attractors. For several parameter values it is found to be about $`4.3`$, compared to the typical Kaplan-Yorke dimension of about $`2.4`$ for the Lorenz-84 model. This quantity, however, only characterises the geometry of the attractor. Even if there is very little variability in the degrees of freedom we add from the ocean model, the Kaplan-Yorke dimension is increased by nearly one for each degree of freedom. A way to keep track of the dynamics on the attractor is to calculate the correlation dimension. If there is little variability in the degrees of freedom we add, the attractor of the combined system will be dynamically ‘flat’ and the correlation dimension will not increase as much as the Kaplan-Yorke dimension. Indeed, for the uncoupled Lorenz model the correlation dimension is typically about $`2.3`$, compared to $`3.4\pm 0.2`$ for the coupled model. In other words, the attractor of the coupled system is much more inhomogeneous than that of the Lorenz system. A chaotic attractor can coexist with an attracting equilibrium point, or a stable periodic solution, with negative flow. It does not seem to be possible for the system to switch repeatedly between the regime with positive flow and the regime with negative flow. Depending on the initial conditions, one of the regimes is soon entered and never left. Although, as mentioned above, a modification of the coupling terms might alter the behaviour in the regime with negative flow, we suspect that no mechanisms that could force such a transition are represented in the model. The projection of a chaotic attractor of the coupled model onto the subspace of the fast variables looks much like the chaotic attractor of the Lorenz model shown in figure (1). The parameter ranges in which the behaviour is chaotic are bordered by periodic regimes. The transition is through intermittency. The latter is of interest, as the intermittent behaviour seems to be enhanced by the presence of a slow time scale. We will describe in some detail how this behaviour is brought about. ## 7 Intermittency In figure (4) a time series is shown, obtained by simply integrating the model with arbitrary initial conditions and parameter values given by $`(F_1,G_1)=(0.021685,0.01)`$. If the integration is continued, periodic intervals keep appearing. The length of those intervals is randomly distributed, but on average the (near-)periodic behaviour lasts about as long as the chaotic behaviour. This is not a transient effect, in the sense that the system will not settle on a periodic, weakly attracting set, no matter how long we integrate. It has all the properties of intermittent behaviour and is found in a range of parameter values around those used in this integration. 
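Such near-periodic intervals can be tabulated automatically from a series of returns to a Poincaré section. The sketch below is a minimal illustration; the recurrence period, tolerance and minimal run length are illustrative choices, not the exact criterion behind our figures.

```python
import numpy as np

def laminar_lengths(returns, period=1, tol=1e-2, min_len=3):
    """Lengths of near-periodic stretches in a series of Poincare returns.

    A stretch counts as (near-)periodic while each return lies within
    `tol` of the return `period` steps earlier; stretches shorter than
    `min_len` are discarded.  All thresholds are illustrative.
    """
    close = np.abs(returns[period:] - returns[:-period]) < tol
    lengths, run = [], 0
    for flag in close:
        if flag:
            run += 1
        else:
            if run >= min_len:
                lengths.append(run)
            run = 0
    if run >= min_len:
        lengths.append(run)
    return np.array(lengths)

# Synthetic demonstration: a constant return value interrupted by bursts.
rng = np.random.default_rng(1)
r = np.where(rng.random(4000) < 0.8, 1.0, rng.random(4000))
print(laminar_lengths(r).mean())   # average near-periodic length
```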
There seems to be a periodic solution, which is approached during the periodic interval. Looking more closely at the transition to chaotic motion (see figure (5)) there seem to be several periodic solutions that attract the orbit briefly. In order to identify these periodic solutions and calculate their spectra we can define a Poincaré map, e.g. in the section $`𝒮_x=\{(x,y,z,T,S)\in \mathbb{R}^5|x=1\}`$, and numerically look for its fixed points using the Newton-Raphson method. An initial guess can be taken from integrations such as the one shown in figure (4). Proceeding like this we find several fixed points of saddle type. These periodic solutions can be continued in one of the parameters, for instance using the algorithm described in the literature. In figure (6) the results are shown of a continuation in parameter $`F_1`$ of the periodic solution approached in the periodic regime in figure (4). There is quite a large number of saddle node bifurcations in this continuation as well as Neimark-Sacker bifurcations and period doublings, which are not shown in the picture. Doing the continuation for other periodic solutions, found by the method described above, we find qualitatively the same behaviour. Looking at the spectra of the periodic solutions we find that two of the Floquet multipliers are close to, but smaller than, unity. The associated eigenvectors lie almost entirely in the subspace of the slow variables. This turns out to be a generic feature of the periodic solutions of the coupled model. ### 7.1 Skeleton dynamics Following the Floquet multipliers of the Poincaré map closely near the leftmost saddle node bifurcation in figure (6), near which the intermittency takes place, it turns out that both branches are initially unstable. Very close to the saddle node bifurcation a Neimark-Sacker bifurcation occurs, at which two multipliers cross the unit circle as a complex pair. Past the Neimark-Sacker point, the branch with the higher period is stable and periodic behaviour sets in. To the left of this point the behaviour is intermittent, and remains so to the left of the saddle node bifurcation. This is because the saddle node bifurcation is local. Some distance away from the bifurcating orbit in phase space, the vector field remains essentially the same. Thus the ‘ghost’ of the periodic solutions still influences the global dynamics. This effect was labeled ‘skeleton dynamics’ by Nishiura in a recent preprint on transient phenomena in partial differential equations. The farther away from the saddle node point the parameters are chosen, the less the influence of the skeleton structure. This notion can be quantified by measuring the length of the periodic intervals, or rather its distribution, for a number of parameter values. To obtain these data, integrations of $`5\times 10^6`$ units of $`t`$ (about $`6.8\times 10^4`$ years) were done, during which more than $`600`$ periodic intervals were registered. This was done by tracing approximate recurrences of points under the Poincaré map on $`𝒮_x`$. In the chaotic as well as in the periodic phase $`x`$ fluctuates about unity, so that the Poincaré map is always defined. In contrast, the mean value of $`T`$ and $`S`$ differs significantly between the two phases. From the distributions at different parameter values we can calculate the expectation value, denoted by $`l`$, and plot it against the parameter. Thus, we can see the decreasing effect of the skeleton structure as we go farther from the saddle node point. 
Another way to see this is to look at the relative amount of time spent in the periodic regime, denoted by $`\tau _{per}`$. Both these measures are plotted in figure (7). ### 7.2 The theory of intermittency The idea of studying the distribution of the length of the periodic intervals was first phrased by Pomeau and Manneville. They made theoretical predictions of the dependence of the expectation value on the bifurcation parameter near its critical value. Three types of intermittency are distinguished, one for each generic bifurcation of a periodic solution. They are named type I, II and III for saddle node, Neimark-Sacker and period doubling bifurcations, respectively. Therefore, the intermittent behaviour described here is labeled type II. The theoretical prediction for the order of the divergence is $`l\sim \mathrm{ln}1/ϵ`$, where $`l`$ is the expectation value of the length of a periodic interval, and $`ϵ=F_{NS}-F_1`$ the distance from the bifurcation point. There it is remarked that numerical experiments suggest a power law scaling as $`l\propto ϵ^{-\alpha }`$, where $`\alpha \approx 0.04`$. Our experiments yield a power law scaling with $`\alpha \approx 0.06`$, in reasonable agreement. ### 7.3 Chaotic bursts The missing ingredient in this description of the intermittent behaviour is the understanding of the chaotic bursts. During these bursts the motion cannot be distinguished from fully developed chaos, at least not by the eye. Numerically calculated Lyapunov exponents converge to the same values as found at fully developed chaos, two being positive. The conjecture is that during this seemingly chaotic motion, the orbit is trapped by the numerous periodic solutions and their stable and unstable manifolds, that may intersect in a complicated way. However, the stable manifold of one of the periodic orbits that stem from the saddle node bifurcation described above, or, left of the bifurcation point, its ghost, partly lies in this tangle. The phase point moves around more or less randomly (locally, nearby orbits diverge) until it gets trapped by this stable manifold and is attracted toward the weakly unstable periodic solution or its ghost. This is also referred to as the reinjection process. If we continue the Neimark-Sacker bifurcation in two parameters, we see that the reinjection process breaks down outside a certain range. Decreasing $`G_1`$, another periodic solution becomes stable and the behaviour becomes periodic. Increasing $`G_1`$ beyond some threshold, we see a one way transition to chaos, such as described in the literature. In figure (9) the results of a continuation in two parameters are shown, indicating the range in which intermittent behaviour can be observed. There are several such windows in parameter space. Looking at the spectra near the saddle node in figure (6), one can see that the upper branch has a stable manifold of dimension two and the lower branch of dimension three. Both have one rapidly contracting dimension (multiplier close to zero), whereas the other directions are slowly contracting. As mentioned before, the periodic solutions of this system typically have two real eigenvalues close to unity. They are related to the long time scales of the ocean model variables. Thus, the slow dynamics of the ocean model enhances the length of the periodic intervals. Other periodic solutions also attract the orbit briefly; that is why they can be found in the first place. 
But they have at least one multiplier on the order of ten to one thousand, so that the time the orbit can be expected to linger in their neighbourhood is of the same order as their period or smaller. A nice illustration of the above conjecture is given by pictures of Poincaré sections. As these sections are four dimensional, we can only look at some projections. Shown here is the section $`𝒮_x`$ as defined above, projected onto the $`(y,z)`$-plane and the $`(T,S)`$-plane (figure (8)). The periodic solution itself is a fixed point of the third iterate of the Poincaré map. In the pictures its intersections have been marked by a cross. The integration from which the section $`𝒮_x`$ was obtained was started near the periodic solution, with a small perturbation in the unstable direction, so as to get an indication of the shape of the unstable manifold. In figure (8)(bottom) it is clearly visible how the orbit comes to the chaotic region and wanders around until it gets trapped by the stable manifold again and slowly approaches the periodic solution. The integration was stopped after a few approaches in order to get a clear picture. The same data are plotted in another projection in figure (8)(top). Here, the typical shape of the Lorenz attractor is clearly visible. The intersections of the periodic solution form tiny near-periodic islands in a chaotic sea. ## 8 Conclusion When varying the coupling parameters of the model, we find equilibrium points as well as periodic solutions and chaotic attractors. The presence of competing attractors with a different orientation of the THC is inherited from the ocean box model. The chaotic attractor of the coupled model is rather inhomogeneous. In the chaotic regime, the atmospheric dynamics is dominant and there is little variability in the oceanic variables. The transition from periodic to chaotic motion can be intermittent. The intermittent behaviour is found near a Neimark-Sacker bifurcation, in which a periodic solution loses its stability. It persists beyond the point where the unstable periodic solution disappears in a saddle node bifurcation. Following the approach of Pomeau and Manneville, the average length of a periodic interval has been measured as a function of the bifurcation parameter. Its divergence as the bifurcation parameter approaches the Neimark-Sacker point obeys a power law scaling, in agreement with the results of Pomeau and Manneville. There are numerous periodic solutions in phase space, which share the property that two of their Floquet multipliers are smaller than, but close to, unity. These are related to the slow evolution of the oceanic variables. The time scale of the periodic intervals during the intermittent behaviour is set by this slow evolution. Thus, the intermittent behaviour is enhanced by the coupling to the slow ocean model. The bifurcation scenario leading to intermittency is found in several places in parameter space. It involves only generic, codimension one, phenomena. Therefore it might be expected in other chaotic slow-fast systems. Whether or not it plays a role in more realistic climate models, of higher dimension, remains to be investigated. ## 9 Acknowledgments Part of the work by the first author was done at the faculty of applied and analytical mathematics of the University of Barcelona. The hospitality and helpfulness of Carles Simó and his group are gratefully acknowledged. Contributions were also made by F.A. Bakker and G. Zondervan at the KNMI. 
This work is part of the NWO project ‘a conceptual approach to climate variability’ (nr. 61-620-367).
# Thermodynamics of the 𝑡-𝐽 Ladder: A Stable Finite Temperature Density Matrix Renormalization Group Calculation ## Abstract Accurate numerical simulations of a doped $`t`$-$`J`$ model on a two-leg ladder are presented for the particle number, chemical potential, magnetic susceptibility and entropy in the limit of large exchange coupling on the rung using a finite temperature density matrix renormalization group (TDMRG) method. This required an improved algorithm to achieve numerical stability down to low temperatures. The thermal dissociation of hole pairs and of the rung singlets are separately observed and the evolution of the hole pair binding energy and magnon spin gap with hole doping is determined. Standard quantum Monte Carlo methods for the simulation of fermions are limited to relatively high temperatures due to the fermion sign problem. The density matrix renormalization group method (DMRG) allows simulations of large clusters but is limited to groundstate properties. In this Letter we report on a finite temperature DMRG (TDMRG) method using an improved numerically stable algorithm to simulate a strongly interacting fermion system down to low temperatures. The TDMRG method applies the DMRG to the quantum transfer matrix (QTM) in the real space direction. In the TDMRG iterations the QTM is enlarged in the imaginary time direction and iterates to lower temperatures at fixed Trotter time steps $`\mathrm{\Delta }\tau `$. This is in contrast to the DMRG method in which the system grows in the real space direction. The TDMRG has the advantage that the free energy and other thermodynamic quantities for the infinite system can be obtained directly from the largest eigenvalue of the QTM and the corresponding eigenvector. The system we examine is a two-leg $`t`$-$`J`$ ladder model in the limit where the exchange interaction across the rungs ($`J^{\prime }`$) is large compared to the value along the legs ($`J`$) and to the isotropic hopping integral $`t`$. The ground state properties of this model at low hole doping have been analyzed previously by exact diagonalization of small clusters. In this limit $`J^{\prime }\gg J,t`$ the thermal dissociation of hole pairs and the excitation of triplet magnons can be distinguished. These strong coupling processes are a good test for any method. In this Letter we present accurate results for the magnetic susceptibility $`\chi `$, the particle number $`n`$ and the entropy density $`s`$ in the grand canonical ensemble as a function of chemical potential $`\mu `$ and temperature $`T`$ and then remap $`\chi (\mu ,T)\to \chi (n,T)`$ to obtain the $`T`$-dependence at constant density. Previous versions of the TDMRG method for fermions have suffered from numerical instabilities due to the non-Hermiticity of the QTM and the corresponding density matrices which are constructed from the right and left eigenvector of the largest eigenvalue of the QTM. These numerical instabilities grow as the number of states kept is increased or the filling is changed away from half-filled bands. We have identified the loss of biorthonormality between the left and right eigenvectors $`(v_i^{(l)},v_j^{(r)})=\delta _{ij}`$ of the density matrix as the source of the problem. 
The biorthogonal but normalized eigenvectors $`v_i^{(l)}/\|v_i^{(l)}\|_2`$ and $`v_j^{(r)}/\|v_j^{(r)}\|_2`$ have to be multiplied with a factor $`\left[(v_i^{(l)},v_i^{(r)})/(\|v_i^{(l)}\|_2\|v_i^{(r)}\|_2)\right]^{-1/2}`$, to become biorthonormal, which leads to severe loss of precision due to roundoff errors if the overlap between these vectors is small. These near-breakdowns occur especially often in conjunction with the second numerical problem, spurious small imaginary parts of (nearly) degenerate eigenvalue pairs. This latter problem can be solved by using the real and imaginary components of the corresponding complex conjugate eigenvector pairs and discarding the imaginary parts of the eigenvalues, which are artifacts of roundoff errors and are only of the order of the machine precision. To circumvent the loss of precision in the former problem of nearly orthogonal eigenvectors our algorithm uses an iterative re-biorthogonalization step for the eigenvectors kept, which stabilizes the method for all temperatures. Technical details of the algorithms will be presented elsewhere. In Tab. I we show results of numerical stability tests of the original and our improved algorithm for the case of noninteracting spin $`S=1/2`$ fermions in one dimension. For this simple fermionic model the original TDMRG method becomes numerically unstable whenever more than about $`m=10`$ states are kept, thus severely restricting the achievable accuracy. The improved algorithm presented here, on the other hand, is always numerically stable and achieves much higher accuracy. The test example clearly demonstrates the need for numerical stabilization in the simulation of fermionic models. The results of the stabilized TDMRG method are accurate and unbiased, with errors only originating from the finite size of the Trotter time steps and the truncation in the DMRG algorithm. The latter are usually very small if the number of states kept, $`m`$, is large enough, and the former can be eliminated by extrapolating $`\mathrm{\Delta }\tau \to 0`$ by fitting to a polynomial in $`\mathrm{\Delta }\tau ^2`$. We have used Trotter time steps from $`\mathrm{\Delta }\tau t=0.01`$ to $`\mathrm{\Delta }\tau t=0.2`$, and $`m`$ between $`m=40`$ and $`m=60`$. We made use of spin conservation symmetry, the subspace of zero winding number and the reflection symmetry of the ladder along the rungs to optimize the calculations and reduce numerical errors. Thermodynamic quantities such as the internal energy $`U`$, the hole density $`n_h(=1-n)`$ and the magnetic susceptibility $`\chi `$ have been determined directly from the eigenvectors of the transfer matrix. This is preferable to taking numerical derivatives of the free energy density obtained from the largest eigenvalue of the QTM. The low temperature properties of a doped $`t`$-$`J`$ two-leg ladder in the limit $`J^{\prime }\gg J,t`$ are determined solely by the singlet hole pairs (HP). They form a hard core boson gas with a bandwidth of $`4t^{\prime }`$, with $`t^{\prime }=2t^2(J^{\prime }-4t^2/J^{\prime })^{-1}`$ in second order perturbation theory. Neglecting a weak nearest neighbor attraction, the HP fluid can be mapped to an ideal Fermi gas in this one dimensional geometry. As the temperature $`T`$ is increased the HPs dissociate into two quasiparticles (QP), each consisting of a single electron with spin $`S=1/2`$ in a rung bonding state. Each QP propagates with a bandwidth of $`2t`$ so that in the limit of low hole doping $`n_h\ll 1`$ the HP binding energy is $`E_B=J^{\prime }-4t+4t^2(J^{\prime }-4t^2/J^{\prime })^{-1}`$. 
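For the coupling ratio studied below, $`J=t/2=J^{\prime }/10`$ (i.e. $`J^{\prime }=5t`$), these second order expressions, together with the magnon gap formula quoted in the next paragraph, can be evaluated explicitly; the numbers below are our own arithmetic, included as a sanity check: $$t^{\prime }=\frac{2t^2}{J^{\prime }-4t^2/J^{\prime }}=\frac{2t^2}{5t-0.8t}\approx 0.48t,\qquad E_B=J^{\prime }-4t+\frac{4t^2}{J^{\prime }-4t^2/J^{\prime }}\approx 1.95t,$$ $$\mathrm{\Delta }_{QP}=E_B/2\approx 0.98t,\qquad \mathrm{\Delta }_M=J^{\prime }-J+\frac{J^2}{2J^{\prime }}=5t-0.5t+0.025t\approx 4.5t.$$ The quasiparticle gap indeed matches the perturbative value of $`0.98t`$ quoted later, while the perturbative pair bandwidth $`4t^{\prime }\approx 1.9t`$ lies somewhat above the fitted value reported below.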
The gas of QPs with density $`n_{QP}(T)`$ contributes to the spin susceptibility as a nondegenerate gas of $`S=1/2`$ fermions. A second contribution comes from the thermal excitation of singlet rungs to a triplet magnon state. The activation energy for a magnon $`\mathrm{\Delta }_M`$ ($`=J^{\prime }-J+J^2/2J^{\prime }`$ in second order perturbation theory in the limit $`n_h\to 0`$) is larger than that of the QPs ($`\mathrm{\Delta }_{QP}=E_B/2`$) but since the density of QPs is limited by the hole density ($`n_{QP}(T)\le n_h`$), the temperature evolution of $`\chi (n,T)`$ is determined largely by the magnons at low doping. We now turn to the presentation of our finite temperature results obtained using the improved TDMRG algorithm for $`J=t/2=J^{\prime }/10`$ and compare them to expectations based on the above discussion of this strong coupling regime. As the calculations were performed in the grand canonical ensemble we first present results for the hole density $`n_h(\mu ,T)`$. A selection of our results is presented in Fig. 1 including a fit to a hard core boson model for the HPs, whose pairs disperse as $$\epsilon _{HP}^k=\epsilon _{HP}+2t^{\prime }\mathrm{cos}k+2\mu ,$$ (1) from which $`n_h`$ follows through the free-fermion mapping. Fitting the data for $`n_h<0.1`$ at temperatures $`T<0.5t`$ we obtain an estimate for the center of the band for HPs at $`\epsilon _{HP}=4.82(6)t`$ and a bandwidth $`4t^{\prime }=1.5(2)t`$. The minimum energy to add a HP to an undoped ladder is $`\epsilon _{HP}-2t^{\prime }=4.1(1)t`$, in good agreement with values from the finite clusters ($`\epsilon _{HP}\approx 4.71t`$, $`4t^{\prime }\approx 1.494t`$). A further confirmation of the validity of this hard core boson model for the HPs comes from considering the low temperature entropy density per site $`s`$, determined from the free energy density $`f`$ and the energy density $`u`$ as $`s=(u-f)/T`$. As can be seen in Fig. 2 the entropy at $`T<0.3t`$ and low doping ($`n_h<0.1`$) is also well described by the hard core boson model for the HPs. At higher temperatures the thermal dissociation of HPs into two independent QPs and the thermal excitation of magnons from rung singlets govern the thermodynamics. These processes show up in the spin susceptibility $`\chi (T)`$, which is easiest to interpret in the canonical ensemble with fixed hole density $`n_h`$. Therefore we use $`n_h(\mu ,T)`$ to remap $`\chi (\mu ,T)\to \chi (n_h,T)`$. The results for $`\chi (n_h,T)`$ appear in Fig. 3. The values of $`\chi (\mu ,T)`$ were calculated by measuring the magnetization $`S^z(T)`$ in the presence of a small external field $`h/t=5\times 10^{-3}`$. At high temperatures $`T\gg J^{\prime }`$, $`\chi `$ follows a Curie-law for free spins $`\chi =(1-n_h)/4T`$, and it decreases when the temperature is lowered below the magnon gap $`\mathrm{\Delta }_M\approx 4.13t`$. The maximum of the peak is shifted towards lower $`T`$ with increasing doping, indicating a reduction of the magnon gap due to interactions with holes. Simultaneously the magnon bandwidth is enhanced, indicating that the energy of a localized magnon is not much changed by the holes. At very low temperatures of $`T<0.5t`$ we can see a second exponential decrease of $`\chi `$ with a smaller gap, which we attribute to the recombination of QPs into HPs at temperatures below the QP gap, $`\mathrm{\Delta }_{QP}=E_B/2`$. Note that the magnitude of this contribution increases with $`n_h`$. 
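As an aside, the hard core boson fit of Eq. (1) is cheap to evaluate numerically: once the HPs are mapped to free fermions, the hole density is a single Fermi integral over the band. The sketch below uses the fitted numbers quoted above; the $`+2\mu `$ bookkeeping (two holes per pair) and the one-pair-per-rung normalization are our reading of the model, not details spelled out in the text.

```python
import numpy as np

def hole_density(mu, T, eps_hp=4.82, tp=0.375, nk=4096):
    """Hole density n_h(mu, T) from the hard core boson model, Eq. (1).

    The HPs are treated as free fermions in the band
    eps_hp + 2*tp*cos(k) + 2*mu (energies in units of t); eps_hp and
    tp = t'/t are the fitted values quoted in the text.  The +2*mu shift
    and the one-pair-per-rung normalization are assumptions of this
    sketch.
    """
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    band = eps_hp + 2.0 * tp * np.cos(k) + 2.0 * mu
    occupation = 1.0 / (np.exp(band / T) + 1.0)   # Fermi function
    return occupation.mean()                      # = (1/2pi) integral dk

# Example: n_h along a sweep of the chemical potential at T = 0.2t.
for mu in (-2.6, -2.4, -2.2, -2.0):
    print(f"mu = {mu:5.2f}  n_h = {hole_density(mu, 0.2):.4f}")
```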
A quantitative description of $`\chi (n_h,T)`$ can be given by adding separately the contributions of the QPs $`\chi _{QP}`$ and of the magnons $`\chi _M`$, i.e.: $$\chi (n_h,T)=\chi _{QP}(n_h,T)+\chi _M(n_h,T).$$ (2) $`\chi _{QP}`$ is approximated by the value for free spins $`\chi _{QP}=n_{QP}(T)/4T`$ with a temperature dependent density of the QPs determined by the energy dispersion of the QPs $`\epsilon _{QP}^k=\mathrm{\Delta }_{QP}+a_{QP}(1+\mathrm{cos}k)/2`$ with $`\beta =1/T`$: $$n_{QP}=\frac{n_h}{\pi }\int _{-\pi }^{\pi }dk\frac{1}{e^{\beta \epsilon _{QP}^k}+1}.$$ (3) The density of rungs occupied by two spins at low temperatures where all holes are bound in HPs is $`1-n_h`$ but exciting QPs reduces the number of such rungs by one for each QP so that the rung density is then $`1-n_h-n_{QP}`$. Our approach to a model for $`\chi _M`$ is simply to scale the form for undoped ladders proposed by Troyer et al. by this two-spin rung density leading to $$\chi _M=(1-n_h-n_{QP})\beta \frac{z(\beta )}{1+3z(\beta )},$$ (4) where $`z(\beta )=\int _{-\pi }^{\pi }dk(2\pi )^{-1}\mathrm{exp}(-\beta \epsilon _M^k)`$, and $`\epsilon _M^k=[\mathrm{\Delta }_M^2+4a_M(1+\mathrm{cos}k)]^{1/2}`$. The parameters obtained by a fit of this model to the TDMRG data are shown in Tab. II. The main change upon doping is the decrease of the magnon gap $`\mathrm{\Delta }_M`$ due to interactions between the magnons and QPs. Due to hybridization with higher lying bands the QP bandwidth $`a_{QP}`$ is also reduced from the leading order perturbation result $`a_{QP}=2t`$, but the QP gap $`\mathrm{\Delta }_{QP}`$ is in reasonable agreement with the second order perturbative estimate of $`0.98t`$. The increase of $`\mathrm{\Delta }_{QP}`$ (or equivalently the binding energy $`E_B`$) with $`n_h`$ can be attributed to an effective repulsion between the QPs and HPs. A similar increase of the QP gap $`\mathrm{\Delta }_{QP}`$ was found in earlier work. This is an issue which warrants further investigations. Finally, in Fig. 4 we show the entropy density $`s`$, remapped in the same way to the canonical ensemble of fixed $`n_h`$. In the limit of $`T\to \infty `$, $`s_{\infty }=(1-n_h)\mathrm{ln}2-n_h\mathrm{ln}n_h-(1-n_h)\mathrm{ln}(1-n_h)`$. At $`T/t=20`$ the entropy has acquired between $`99.4\%`$ of its maximal value $`s_{\infty }`$ for $`n_h=0.025`$ and $`99.7\%`$ for $`n_h=0.2`$. Below the magnon gap $`\mathrm{\Delta }_M`$, the entropy decreases exponentially, for the undoped Heisenberg ladder down to $`s=0`$. In the presence of hole doping, the exponential decrease shows a crossover to a linear decrease at low temperatures, as is expected for Luther-Emery liquids. This behavior is consistent with the hard core boson model proposed for the magnons. Quantitative fits are however better performed on $`s(\mu ,T)`$, as we did earlier, due to added uncertainties arising from the remapping to constant hole doping. In conclusion, we have developed the first numerically stable TDMRG algorithm for fermionic systems. This has enabled us to calculate accurate results for the magnetic susceptibility $`\chi `$ and the entropy density $`s`$ of the doped $`t`$-$`J`$ ladder with strong exchange on the rungs down to low temperatures in the thermodynamic limit of infinite system size. The system we have studied has two crossovers as the spins bind in singlet pairs and the holes in hole pairs. 
These crossovers can be clearly seen in the numerical data and demonstrate that this form of the TDMRG can be successfully used to reliably simulate strongly interacting fermions over a wide temperature range. Finally, very recently Rommer and Eggert reported TDMRG calculations for a spin chain with impurities which use the same method as we do to overcome the problems caused by complex eigenvalues, but they did not introduce the re-biorthogonalization which, for fermionic models, we find is essential. We wish to thank E. Heeb, M. Imada, and X. Wang for useful discussions. Most of the calculations have been performed on the DEC 8400 5/300’s of the C4-cluster at ETH Zürich.
# Power spectra of extinction in the fossil record ## 1 Introduction In a recent paper, Solé et al. (1997) have studied the power spectra of extinction intensity in the fossil record, using family-level data for the Phanerozoic (approximately the last 550 million years) drawn from the compilation by Benton (1993). Such power spectra measure the degree to which extinction at one time is correlated with extinction at another. Intriguingly, Solé et al. find that for a variety of groups of organisms and extinction metrics, the variation of the power spectrum $`P(f)`$ with frequency $`f`$ appears to follow a power law: $$P(f)\propto f^{-\beta },$$ (1) where the exponent $`\beta `$ is in the vicinity of 1. This result has provoked considerable interest, since it indicates that extinction at different times in the fossil record is correlated on arbitrarily long time-scales—that there is some mechanism by which extinction events at all times throughout the Phanerozoic are linked together. This would be a startling discovery if true, since there are no known processes either biotic or abiotic which act on time-scales of 100 million years (My) or greater. The time-scale on which families in the fossil database become extinct and are replaced by new ones ranges from about 30 My in the Palaeozoic to about 80 My in the Mesozoic and Cenozoic, so one might reasonably expect correlations in the extinction profile to be absent at times longer than this. Solé et al. discuss a number of different possible explanations for their results, particularly the idea that long-time correlations in extinction intensities might arise through so-called “critical” processes in the evolution of species. The results of Solé et al. have however been questioned. In two recent studies it has been shown by numerical simulation (Kirchner and Weil 1998) and by mathematical analysis (Newman and Kirchner 1998) that the $`1/f`$ form is probably an artifact of the particular method used to calculate the power spectrum. The method used was a combination of the standard Blackman–Tukey autocorrelation technique (Davis 1973) with a linear interpolation scheme, and it appears that this combination generates a $`1/f`$ spectrum regardless of any correlations in the data. Solé (1998) has confirmed this result independently. In this paper, therefore, we take a different approach to the power spectrum of fossil extinction, performing a direct Fourier analysis of the fossil data without any intermediate steps. This method should, we believe, be free of the $`1/f`$ artifacts seen in the Blackman–Tukey method. As we will show, although the overall form of the spectrum calculated in this way roughly obeys Equation (1), closer inspection reveals two different regimes, one approximately following an exponential law with no long-time correlations, and one following a steep power law which, we will argue, is a result of the way the power spectrum is calculated rather than an indicator of any real biological effect. The outline of this paper is as follows. In Section 2 we describe how the power spectra are calculated, in Section 3 we give the spectra for a number of different data sets, in Section 4 we offer an explanation of the form of these spectra, and in Section 5 we give our conclusions. ## 2 Calculation of power spectra Extinction intensity can be measured in a variety of different ways. In this paper we use data at the family level, as did Solé et al. (1997). This makes a direct comparison with their results more straightforward. 
Four metrics of extinction are in common use: 1. number of families becoming extinct in each stratigraphic stage; 2. number of families becoming extinct in each stratigraphic stage divided by the length of the stage in My; 3. fraction (or equivalently percentage) of families becoming extinct in each stage; 4. fraction of families becoming extinct in each stage divided by the stage length. Solé et al. looked at data for marine and land-dwelling organisms separately, as well as combined data covering all organisms. Their data were taken from the compilation by Benton (1993). In the present paper we use data both from the Benton compilation, and also from the compilation by Sepkoski (1992). We concentrate however on marine organisms, partly because the marine fossil record is considerably more detailed than the terrestrial one, and partly because Sepkoski’s database does not contain data for terrestrial organisms. In both databases the data extend approximately from the start of the Cambrian to the end of the Pliocene. The data have been culled of all families which appear in only a single stage in order to curb the worst excesses of systematic bias, such as monograph and sampling effects (Raup and Boyajian 1988). The time-scale used for stage boundaries is essentially that of Harland et al. (1990). However, since this time-scale is believed to be in error where some of the earlier stage boundaries are concerned (Bowring et al. 1993) we have updated it with corrections kindly supplied by J. J. Sepkoski, Jr. and D. H. Erwin. (In fact, we have experimented with a number of different time-scales, with and without these corrections, and find that the principal results of this paper do not depend on which one we use.) The power spectrum $`P(f)`$ is defined to be the square of the magnitude of the Fourier transform of the extinction intensity. Denoting extinction intensity as a function of time by $`x(t)`$, we have $$P(f)=\left|\int _{t_0}^{t_1}x(t)\mathrm{e}^{-\mathrm{i}2\pi ft}dt\right|^2,$$ (2) where $`t_0`$ and $`t_1`$ are the limits of time over which our data extend. In the case of data such as extinction records which are sampled at discrete time intervals, we should use the discrete version of this equation: $$P(f)=\left|\sum _{t=t_0}^{t_1}x(t)\mathrm{e}^{-\mathrm{i}2\pi ft}\right|^2.$$ (3) In order to generate valid results however, this equation requires extinction data which are evenly spaced over time. The stratigraphic stages are not evenly spaced, so some interpolation scheme is necessary to generate a suitable set of values of $`x(t)`$. Here we make use of two different schemes. The first is the linear scheme employed by Solé et al., which we have adopted to facilitate comparison with their work. This scheme is a simple linear interpolation to intervals of one million years. In other words, they placed straight lines between the known data points to generate extra points in between at intervals of 1 My. Our other interpolation scheme is, in a sense, the scheme which assumes least about the data. In this scheme we assume that we know the number of families becoming extinct in a particular stage, but that we have no more accurate information than this about when exactly during the stage any particular family became extinct. (This in fact is true; we don’t have any more accurate information.) In this case, the best assumption we can make is that the probability of a family becoming extinct is uniformly distributed throughout the corresponding stage. 
This gives a kind of steplike form to the interpolated extinction data. The two interpolation schemes are illustrated in Figure 2. ## 3 Results for power spectra In Figure 3 we show the power spectra of fossil extinction, calculated using Equation (3), for data taken from the compilation by Sepkoski (1992). In this case we used total extinction as our metric of extinction intensity (number 1 on the list given in the previous section) although the results are similar for other metrics. The lower curve makes use of the linear interpolation scheme of Solé et al. and the upper one our own “flat” interpolation scheme. In both cases the data were interpolated to 1 My intervals, just as in the studies of Solé et al. As we can see, the two curves are similar in appearance; the choice of interpolation scheme makes little difference to the results, except at very high frequencies. For each curve we have marked with a dotted line the slope expected of a $`1/f`$ power spectrum. As the figure shows, the average line of the curves follows the $`1/f`$ form reasonably well but also displays marked systematic deviations from it, being clearly convex: it is shallower than $`1/f`$ at low frequencies and steeper than $`1/f`$ at high frequencies. In fact, at high frequencies the power spectra approximately follow power laws with forms $`1/f^2`$ and $`1/f^4`$ for the two different interpolation schemes. (These forms are also marked on the figure.) In the next section we propose an explanation of these results. In Figure 3 we show a similar power spectrum for data drawn from the compilation by Benton (1993). In fact, the data used to produce this figure were precisely the data used by Solé et al. in their calculations, having been kindly provided to us by Ricard Solé, who also performed the linear interpolation between the stages to eliminate the possibility of any discrepancy in the way the interpolation was carried out. As the figure shows, the spectrum again follows an average $`1/f`$ form, but is in general shallower than $`1/f`$ at low frequencies and approximately $`1/f^4`$ in form at high frequencies. ## 4 Discussion Whilst it is true that on average the power spectra of Figures 3 and 3 follow a $`1/f`$ form, we believe that there are clear deviations from this form visible in the figures, and in fact that the spectra each possess two distinct regimes: a low frequency regime in which the curve falls off approximately exponentially, and a high frequency one in which it falls off as a relatively steep power law. We now discuss the explanation of each of these regimes. First, let us look at the high frequency behaviour of the power spectra. Consider the top spectrum in Figure 3, which was produced using the flat interpolation scheme outlined in Section 2. We now demonstrate that the power spectrum of any function which has the steplike form produced by this interpolation scheme should fall off as $`1/f^2`$ at high frequencies. To do this, we observe that the power spectrum $`P(f)`$ may also be regarded as the Fourier transform of the two-time autocorrelation function $`\chi (t)`$ of the extinction intensity. For very short times, this autocorrelation has only a constant term and a contribution from the boundaries between stages which goes linearly with the time difference $`t`$, so that $`\chi (t)=A+Bt`$, where $`A`$ and $`B`$ are constants. It is straightforward to show that the Fourier transform of such a function varies with frequency $`f`$ as $`1/f^2`$. 
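The same conclusion can be checked numerically in a few lines. The sketch below applies the flat interpolation to entirely synthetic, uncorrelated stage data (the stage lengths and intensities are invented for illustration only) and measures the high-frequency slope of the resulting spectrum:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stratigraphy: stages of uneven length with uncorrelated
# extinction intensities (both invented for this illustration).
lengths = rng.integers(3, 13, size=77)       # stage lengths in My
intensity = rng.random(77)                   # uncorrelated "extinctions"

# Flat interpolation: every 1 My sample inherits its stage's value.
x = np.repeat(intensity, lengths).astype(float)

# Power spectrum via the discrete Fourier transform, as in Eq. (3).
p = np.abs(np.fft.rfft(x))**2
f = np.fft.rfftfreq(len(x), d=1.0)           # frequencies in 1/My

# Least-squares slope of log P against log f at high frequencies.
hi = f > 0.1
print(np.polyfit(np.log(f[hi]), np.log(p[hi]), 1)[0])
```

A single realization is noisy, but averaging the measured slope over many random draws brings it close to $`-2`$, even though the underlying stage values are completely uncorrelated.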
This result is in fact familiar to physicists studying X-ray scattering, who know it as Porod’s law (Guinier and Fournet 1955). The $`1/f^2`$ form of the power spectrum is clearly visible in Figure 3. This result does not apply to spectra generated using the linear interpolation scheme. However in this case we notice that the interpolated function is piecewise linear between the data points at each stage (see Figure 2) and hence that the derivative $`dx/dt`$ of the extinction intensity is a step-like function of the form discussed in the previous paragraph. Thus the power spectrum of the derivative must fall off as $`1/f^2`$ at large frequencies. This being the case, we can then use integration by parts to show that the power spectrum of the intensity itself must fall off as $`1/f^4`$. This behaviour is visible in Figures 3 and 3. Thus we have demonstrated that the high frequency behaviour of the power spectrum is purely a mathematical artifact, and is not associated with any interesting biological phenomena. However, the arguments given above break down when we look at time-scales greater than the typical length of a stage, which means greater than about 10 My. This corresponds to frequencies in the power spectrum of less than about $`0.1\mathrm{My}^{-1}`$. And indeed we can see from the figures that the behaviour of the spectrum does change below this frequency. The behaviour below this point contains all of the interesting biological information to be found in these spectra, and it is on this region that we now concentrate. In Figure 4 we show this low-frequency region of the power spectrum replotted on semi-logarithmic scales, both for the Sepkoski and Benton data. On these scales, the spectrum appears to follow a straight line, apart from statistical fluctuations. This implies that the spectra have an approximately exponential form. The slope of the exponential gives a “correlation time” $`\tau `$, which describes the time-scale on which the extinction data are correlated with one another. The best fits to the data are shown as the dotted lines in the figure and the corresponding correlation times are measured to be $`\tau =39.5\pm 4.9`$ My for the Sepkoski data and $`\tau =45.4\pm 6.3`$ My for the Benton data. Within the errors these two figures are the same. The exponential form of the power spectrum at low frequencies indicates that there is correlation between the extinction intensity at different times in the fossil record, but that it falls off quite quickly, in a way which is not consistent with, for example, the critical processes considered by Solé et al. The mean lifetime of families in the Sepkoski database (again excluding single-stage records) is $`58.5\pm 1.3`$ My. Thus the time-scale $`\tau `$ on which there are correlations is similar to, in fact slightly shorter than, the time-scale on which the families present turn over. It is therefore not surprising that we see correlations on these time-scales. We should note that the data presented in Figure 4 are also consistent with a power-law hypothesis. Using an $`F`$-test, we find that there is no statistically significant advantage of one fit over the other. Thus we have not ruled out the possibility of a power-law power spectrum, but we have shown that there is no statistical evidence in its favour. The form of the power spectrum of fossil extinction data can be explained as the result only of normal, short-time correlations and does not require us to invoke critical phenomena or similar explanations, at least in this case. 
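The fits behind these numbers are ordinary least squares on semi-logarithmic and doubly logarithmic axes. A minimal sketch, with `f` and `p` standing in for the binned low-frequency spectrum (the synthetic arrays below are placeholders, and the factor relating the decay scale to $`\tau `$ is a convention the text leaves implicit):

```python
import numpy as np

def exp_decay_scale(f, p):
    """Exponential hypothesis: ln P = c - f/f0.  Returns f0 (in 1/My);
    the quoted correlation time is its reciprocal up to a convention-
    dependent factor."""
    slope, _ = np.polyfit(f, np.log(p), 1)
    return -1.0 / slope

def powerlaw_exponent(f, p):
    """Power-law hypothesis: ln P = c - beta * ln f.  Returns beta."""
    slope, _ = np.polyfit(np.log(f), np.log(p), 1)
    return -slope

def rss(x, y):
    """Residual sum of squares of a straight-line fit of y on x; this is
    the quantity compared between the two hypotheses in an F-test."""
    coef = np.polyfit(x, y, 1)
    return float(np.sum((y - np.polyval(coef, x)) ** 2))

# Placeholder spectrum: an exponential decay with multiplicative noise.
rng = np.random.default_rng(3)
f = np.linspace(0.005, 0.1, 20)
p = np.exp(-f / 0.02) * np.exp(0.1 * rng.standard_normal(20))
print(exp_decay_scale(f, p), rss(f, np.log(p)), rss(np.log(f), np.log(p)))
```

Since both hypotheses have the same number of free parameters, the F-test reduces to a direct comparison of the two residual sums of squares.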
## 5 Conclusions In this paper we have calculated power spectra of extinction intensity in the Phanerozoic fossil record of marine families, using data from two independent compilations. These spectra show two distinct regimes of behaviour: one at low frequency (below about $`0.1\mathrm{My}^{-1}`$) in which the spectrum is consistent with an exponential form with a time-scale on the order of the typical lifetime of a family, and another for high frequencies which falls off either as $`1/f^2`$ or as $`1/f^4`$ with frequency, depending on the interpolation scheme used in calculating the spectra. The exponential form is typical of most power spectra, and denotes short-time correlations in the extinction data, but no long-time ones such as might be typical of the “critical” systems which Solé et al. (1997) suggested to explain their results. The high-frequency behaviour of the spectrum is the result solely of the fact that the databases used record the time of extinction of families to the nearest stage, and does not reflect any real biological phenomena. ## Acknowledgements We would like to thank Jim Kirchner, Jack Sepkoski, Kim Sneppen and Ricard Solé for useful discussions, and Doug Erwin, Jack Sepkoski and Ricard Solé for providing data used in the calculations and figures. This work was supported by the Santa Fe Institute and DARPA under grant number ONR N00014-95-1-0975. ## References * Benton, M. J. 1993 The Fossil Record 2. Chapman and Hall (London). * Bowring, S. A., Grotzinger, J. P., Isachsen, C. E., Knoll, A. H., Pelechaty, S. M. & Kolosov, P. 1993 Calibrating rates of early Cambrian evolution. Science 261, 1293–1298. * Davis, J. C. 1973 Statistics and Data Analysis in Geology. John Wiley (New York). * Guinier, A. and Fournet, J. 1955 Small Angle Scattering of X-Rays. Wiley Interscience (New York). * Harland, W. B., Armstrong, R., Cox, V. A., Craig, L. E., Smith, A. G. & Smith, D. G. 1990 A Geologic Time Scale 1989. Cambridge University Press (Cambridge). * Kirchner, J. W. and Weil, A. 1998 No fractals in fossil extinction statistics. Nature 395, 337–338. * Newman, M. E. J. and Kirchner, J. W. 1998 Spectral analysis of fossil data. In preparation. * Raup, D. M. and Boyajian, G. E. 1988 Patterns of generic extinction in the fossil record. Paleobiology 14, 109–125. * Sepkoski, J. J., Jr. 1992 A compendium of fossil marine animal families, 2nd edition. Milwaukee Public Museum Contributions in Biology and Geology 83. * Solé, R. V. 1998 Private communication. * Solé, R. V., Manrubia, S. C., Benton, M. & Bak, P. 1997 Self-similarity of extinction statistics in the fossil record. Nature 388, 764–767.
# Kyoto University, Department of Astronomy, 98-38 Osaka University, OUTAP-88 Time scales of relaxation and Lyapunov instabilities in a one-dimensional gravitating sheet system ## I Introduction Relaxation is the most fundamental process in the evolution of a many-body system. The classical statistical theory is based on the ergodic property, which is considered to be established after relaxation. However, not all systems show such an idealized relaxation. A historical example is the FPU (Fermi-Pasta-Ulam) problem, which experiences the induction phenomenon and does not relax to equipartition for a very long time. From nearly thirty years of investigation, one-dimensional self-gravitating sheet systems (OGS) have been known for their strange behavior in evolution. Hohl first asserted that the OGS relaxes to the thermodynamical equilibrium (the isothermal distribution) on a time scale of about $`N^2t_c`$, where $`N`$ is the number of sheets, and $`t_c`$ is the typical time for a sheet to cross the system. Later, more precise numerical experiments determined that Hohl’s result was not right, and arguments about the relaxation time arose in the 1980’s. A Belgian group claimed the OGS relaxed in a time shorter than $`Nt_c`$, whereas a Texas group showed that the system showed long-lived correlations and never relaxed even after $`2N^2t_c`$. Tsuchiya, Gouda and Konishi (1996) suggested that this contradiction can be resolved by distinguishing two different types of relaxation: the microscopic and the macroscopic relaxation. At the time scale of $`Nt_c`$, the cumulative effect of the mean field fluctuation makes the energies of the individual particles change noticeably. Averaging this change gives the equipartition of energies, thus there is a relaxation at this time scale. By this relaxation the system is led not to the thermal equilibrium but only to a quasiequilibrium. The global shape of the one-body distribution remains different from that of the thermal equilibrium. This relaxation appears only in the microscopic dynamics, thus it is called the microscopic relaxation. The global shape of the one-body distribution transforms on a much longer time scale. For example, a quasiequilibrium (the water-bag distribution, which has the longest life time) begins to transform at $`4\times 10^4Nt_c`$ on average. Tsuchiya et al. (1996) called this transformation the macroscopic relaxation, but later, in Tsuchiya, Gouda and Konishi (1998), it was shown that this transformation is the onset of the itinerant stage. In this stage, the one-body distribution stays in a quasiequilibrium for some time and then changes to another quasiequilibrium. This transformation continues forever. The probability density of the life time of the quasiequilibria has a power-law distribution with a long-time cut-off, and the longest life time is $`\sim 10^4Nt_c`$. Only by averaging over a time longer than the longest life time of the quasiequilibria does the one-body distribution become that of the thermal equilibrium, which is defined as the maximum entropy state. Therefore the time $`10^6Nt_c`$ is necessary for relaxation to the thermal equilibrium, and is called the thermal relaxation time. Although there are some attempts to clarify the mechanisms of these relaxations, the reason why the system does not relax for such a long time is still unclear. 
From the viewpoint of the theory of chaotic dynamical systems, relaxation is understood as mixing in phase space, and its time scale is given by the Kolmogorov-Sinai time (KS time), $`\tau _{\mathrm{KS}}=1/h_{\mathrm{KS}}`$, where $`h_{\mathrm{KS}}`$ is the Kolmogorov-Sinai entropy. However, it does not simply correspond to the relaxation of the one-body distribution function, which is of interest in many-body systems. Recently, Dellago and Posch showed that in a hard-sphere gas the KS time equals the mixing time of neighboring orbits in the phase space, whereas the relaxation of the one-body distribution function corresponds to the collision time between particles. It is therefore fruitful to study the relation between relaxation and dynamical quantities, such as the KS entropy and the Lyapunov exponents, in the OGS. Milanović et al. computed the Lyapunov spectrum and the Kolmogorov-Sinai entropy of the OGS for $`10\le N\le 24`$. However, since it is known that the chaotic behavior of the OGS changes around $`N\sim 30`$, it is quite important to extend the analysis to systems larger than $`N\sim 30`$. In this paper, we extend the number of sheets to $`N=256`$ and follow the evolution numerically up to $`T\sim 10^6Nt_c`$, which is long enough for the thermal relaxation.

## II Numerical simulations

The OGS comprises $`N`$ identical plane-parallel mass sheets, each of which has uniform mass density and is infinite in extent in, say, the $`y`$ and $`z`$ directions. The sheets move only in the $`x`$ direction under their mutual gravity, and when two of them intersect they pass through each other. The Hamiltonian of the system has the form

$$H=\frac{m}{2}\sum_{i=1}^{N}v_i^2+(2\pi Gm^2)\sum_{i<j}|x_j-x_i|,$$ (1)

where $`m`$, $`v_i`$, and $`x_i`$ are the mass (surface density), velocity, and position of the $`i`$th sheet, respectively. Since the gravitational field of each sheet is uniform, the individual particles move parabolically until they intersect with their neighbors. Thus the evolution of the system can be followed by solving quadratic equations, a property which allows us to compute very long evolutions with high accuracy. Since length and velocity (and hence energy) can be scaled out of the system, the number of sheets $`N`$ is the only free parameter. The crossing time is defined by

$$t_c=(1/4\pi GM)(4E/M)^{1/2},$$ (2)

where $`M`$ and $`E`$ are the total mass and total energy of the system. Detailed descriptions of the evolution of the OGS can be found in our previous papers. In order to investigate dynamical aspects of the system, we calculated the Lyapunov spectrum. The basic numerical algorithm follows Shimada and Nagashima, and a detailed description of the procedure for the OGS can be found in ref. We performed numerical integrations for $`8\le N\le 128`$ up to $`10^8t_c`$, which is long enough for the system to relax, and up to $`1.8\times 10^7t_c`$ for $`N=256`$ for reference.

## III Results

Figure 1 shows the spectrum of the Lyapunov exponents, $`\{\lambda _i\}`$, in units of $`1/t_c`$. This figure is the same diagram as Fig. 6 in Milanović et al., but the range of $`N`$ is extended to $`8\le N\le 256`$. On the horizontal axis, $`l`$ is the index of the Lyapunov exponents, labeled in order from the maximum to the minimum, so that all the positive Lyapunov exponents $`(l\le N)`$ are scaled between 0 and 1. The vertical axis shows the Lyapunov exponents normalized by the maximum Lyapunov exponent, $`\lambda _1`$.
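For concreteness, the event-driven integration outlined in Sec. II, in which every acceleration is constant between crossings and each crossing time is the positive root of a quadratic, can be sketched in a few lines of Python. This is a minimal illustration rather than the production code used for the runs reported here; the unit choice `TWO_PI_G_M = 1` and the initial state are arbitrary.

```python
import numpy as np

TWO_PI_G_M = 1.0  # the combination 2*pi*G*m in code units (arbitrary here)

def advance_to_next_crossing(x, v):
    """Advance all sheets to the earliest crossing of an adjacent pair.

    x and v must be sorted by position.  Each sheet feels the constant
    field a_i = 2*pi*G*m * [(sheets above) - (sheets below)], so between
    crossings every orbit is an exact parabola.  Returns the elapsed time.
    """
    n = len(x)
    i = np.arange(n)
    a = TWO_PI_G_M * ((n - 1.0 - i) - i)
    s = x[1:] - x[:-1]   # separations of adjacent pairs (>= 0)
    w = v[1:] - v[:-1]   # relative velocities of adjacent pairs
    # Each adjacent pair has constant relative acceleration -2*TWO_PI_G_M,
    # so its crossing time solves s + w*t - TWO_PI_G_M*t**2 = 0.
    t = (w + np.sqrt(w * w + 4.0 * TWO_PI_G_M * s)) / (2.0 * TWO_PI_G_M)
    k = int(np.argmin(t))
    dt = t[k]
    x += v * dt + 0.5 * a * dt * dt
    v += a * dt
    # Sheets pass through each other; swapping the labels of the crossing
    # pair keeps both arrays sorted by position.
    x[k], x[k + 1] = x[k + 1], x[k]
    v[k], v[k + 1] = v[k + 1], v[k]
    return dt

# Example: a small system started from a random (water-bag-like) state.
rng = np.random.default_rng(0)
N = 16
x = np.sort(rng.uniform(-1.0, 1.0, N))
v = rng.uniform(-1.0, 1.0, N)
t_total = sum(advance_to_next_crossing(x, v) for _ in range(10_000))
```

Because only the labels of the crossing pair are exchanged, the arrays remain position-sorted and each step costs O(N); a production code would also reuse the unchanged pair crossing times between steps for speed.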
Milanović et al. stated that the shape of the spectrum approximately converges for large $`N`$. A closer look, however, shows a bending of the spectrum, which is most clearly seen at $`(N-l)/(N-1)\sim 0.9`$. This bending seems to increase with $`N`$ for $`N\gtrsim 32`$. A further investigation is needed to reach a definite conclusion about the convergence of the shape of the spectrum. Figure 2 shows the $`N`$-dependence of the maximum Lyapunov exponent $`(\lambda _1)`$, the minimum positive Lyapunov exponent $`(\lambda _{N-2})`$, and the KS entropy $`h_{KS}`$ per degree of freedom. $`\lambda _1`$ was already shown in Fig. 13 of Tsuchiya et al., and it is proportional to $`N^{-1/5}`$ for $`N\gtrsim 32`$. The decreasing nature of the Lyapunov exponent may indicate that the OGS comes closer to an integrable system for larger $`N`$. As expected from the spectrum, the KS entropy divided by $`N`$ is also proportional to $`N^{-1/5}`$. Therefore the conjecture by Benettin et al. that $`h_{KS}`$ increases linearly with $`N`$ is not correct. It is clear that the inverses of both the maximum Lyapunov exponent and the KS entropy do not give the time scale of any type of relaxation. The $`N`$-dependence of the small positive Lyapunov exponents is quite different from that of the larger ones. In Fig. 2, the minimum positive Lyapunov exponent, $`\lambda _{N-2}`$, is shown by a dash-dotted line. It decreases linearly for $`N\gtrsim 32`$, and its time scale $`1/\lambda _{N-2}`$ is about the same as the microscopic relaxation time ($`Nt_c`$). The eigenvectors of the Lyapunov exponents also give useful information. Figure 3 shows the projection of the eigenvector for $`N=64`$ onto the one-body phase space. Filled circles indicate the positions of the $`N`$ sheets at a given moment, and the arrows give the direction of the Lyapunov eigenvector, which grows at the rate of the Lyapunov exponent. Fig. 3(a) is for the maximum Lyapunov exponent $`\lambda _1`$, and Fig. 3(b) is for the minimum positive one, $`\lambda _{N-2}`$. For $`\lambda _1`$, the instability is carried only by a few particles, which are interacting in a very small region; this instability therefore does not drive a global transformation. On the other hand, the instability associated with $`\lambda _{N-2}`$ makes all particles mix in the phase space. This is the very effect of relaxation. These features are commonly seen for different $`N`$. These results, namely the coincidence of $`1/\lambda _{N-2}`$ with the microscopic relaxation time together with the direction of the eigenvector, suggest that the microscopic relaxation time is determined by the growth time of the weakest instability, which is set by the minimum positive Lyapunov exponent; in other words, this is the time necessary for the phase-space orbit to mix in all directions of the phase space. In our working model of the evolution of the OGS, the phase space is divided by barriers which keep the phase orbit inside for a long time. The microscopic relaxation is considered to be a diffusion process in a barrier-bounded region, and in the time $`Nt_c`$ restricted ergodicity is established within that region. This time may correspond to the diffusion time in the slowest direction.

## IV Conclusions and Discussion

In ergodic theory, the KS time represents the time scale of "mixing" in the phase space. On the other hand, the relaxation of the one-body distribution is of the greatest interest in systems with many degrees of freedom.
We have shown that the time scale of the relaxation of the one-body distribution (both the microscopic and the thermal relaxation) is clearly different from the KS time, and we have found that the growth time of the weakest Lyapunov instability is about the same as the microscopic relaxation time. In addition, taking into account the direction of the eigenvector of the weakest Lyapunov exponent, it is suggested that the microscopic relaxation is driven by the weakest Lyapunov instability. The inverse of the KS entropy is a typical time for the system to generate "information". This definition does not depend on the number of degrees of freedom. In higher dimensions, however, even a very small growth of an instability can increase information quite rapidly. Therefore the KS time does not seem suitable to characterize the relaxation of the one-body distribution function. The relaxation of the one-body distribution function implies ergodicity. To attain ergodicity, the phase-space orbits should diffuse over all of the accessible phase space. For the microscopic relaxation, even though it is not true thermal relaxation, the system shows ergodicity restricted to a part of the phase space. Therefore it seems natural that the microscopic relaxation time is characterized by the slowest time of diffusion. That may be the reason why the inverse of the minimum positive Lyapunov exponent coincides with the microscopic relaxation time. Gurzadyan and Savvidy derived the KS time in ordinary three-dimensional stellar systems, which was found to be proportional to $`N^{1/3}`$, by the method of geodesic deviation. They asserted that the system relaxes on this time scale. This result was supported neither by numerical simulations nor by a semi-analytical study. Our analysis of one-dimensional systems also gives a negative result for the conjecture of Gurzadyan and Savvidy. According to our results, the time scale of the minimum positive Lyapunov exponent would be related to the relaxation also in three-dimensional systems, though it is difficult to demonstrate this numerically. We have found that neither the KS time nor any of the time scales of the Lyapunov instabilities gives the thermal relaxation time of the OGS. In our working model, the thermal relaxation consists of successive transitions of the phase-space orbit among the barrier-bounded regions. Each region has locally restricted ergodicity, hence a long-time average can only give an average of the Lyapunov exponents defined in each region, and it is reasonable that the averaged Lyapunov exponents do not characterize the thermal relaxation. The actual time of the thermal relaxation is that of the transitions among quasiequilibria; it remains necessary to find appropriate dynamical quantities to describe it.

###### Acknowledgements.

The authors are grateful to T. Konishi for many critical suggestions. We also thank B. N. Miller and Y. Aizawa for valuable discussions. This work was supported in part by a JSPS Research Fellowship and in part by a Grant-in-Aid for Scientific Research (No. 10640229) from the Ministry of Education, Science, Sports and Culture of Japan.
# Evolution of Multipolar Magnetic Field in Isolated Neutron Stars

## 1 Introduction

Strong multipole components of the magnetic field have long been thought to play an important role in the radio emission from pulsars. Multipole fields have been invoked for the generation of electron-positron pairs in the pulsar magnetosphere. For example, the Ruderman & Sutherland (1975) model requires that the radius of curvature of the field lines near the stellar surface be of the order of the stellar radius to sustain pair production in long-period pulsars. This is much smaller than the expected radius of curvature of the dipole field. Such a small radius of curvature could be a signature either of an extremely offset dipole (Arons, 1998) or of a multipolar field (Barnard & Arons, 1982). Further, soft X-ray observations of pulsars show non-uniform surface temperatures, which can be attributed to the presence of a quadrupolar field (Page & Sarmiento, 1996). Magnetic multipole structure at and near the polar cap is also thought to be responsible for the unique pulse profile of a pulsar (Vivekanand & Radhakrishnan 1980, Krolik 1991, Rankin & Rathnasree 1995). The recent estimates that there should be several tens of sparks populating the polar cap are also best explained if multipole fields dictate the spark geometry near the surface (Deshpande & Rankin 1998, Rankin & Deshpande 1998, Seiradakis 1998). Significant evolution in the structure of the magnetic field during the lifetime of a pulsar may therefore leave observable signatures. If the multipoles grow progressively weaker in comparison to the dipole, then one can expect pulse profiles to simplify with age, and vice versa. The evolution of the magnetic fields of neutron stars in general is still a relatively open question. During the last decade, two major alternative scenarios for the field evolution have emerged. One of these assumes that the field of the neutron star permeates the whole star at birth, and that its evolution is dictated by the interaction between superfluid vortices (carrying angular momentum) and superconducting fluxoids (carrying magnetic flux) in the stellar interior. As the star spins down, the outgoing vortices may drag and expel the field from the interior, leaving it to decay in the crust (Srinivasan et al. 1990). In a related model, plate-tectonic motions driven by pulsar spindown drag the magnetic poles together, reducing the magnetic moment (Ruderman 1991a,b,c). The other scenario assumes that most of the field is generated in the outer crust after the birth of the neutron star (Blandford, Applegate & Hernquist 1983). The later evolution of this field is governed entirely by the ohmic decay of currents in the crustal layers. The evolution of the dipole field carried by such currents has been investigated in some detail in the recent literature (Sang & Chanmugam 1987, Geppert & Urpin 1994, Urpin & Geppert 1995, 1996, Konar & Bhattacharya 1997, 1998). These studies include field evolution in isolated neutron stars as well as in those accreting from their binary companions. The results show interesting agreement with observations, lending some credence to the crustal picture. In this paper, we explore the ohmic evolution of higher-order multipoles in isolated neutron stars, assuming the currents to be originally confined to the crustal region. Our goal is to find whether there would be any observable effect on the pulse shape of radio emission from isolated pulsars as a result of this evolution.
In section 2 we discuss the details of the computation, and in section 3 we present our results and discuss the implications.

## 2 Computations

The evolution of the magnetic field due to ohmic diffusion is governed by the equation (Jackson 1975):

$$\frac{\partial \mathbf{B}}{\partial t}=-\frac{c^2}{4\pi }\mathbf{\nabla }\times \left(\frac{1}{\sigma }\mathbf{\nabla }\times \mathbf{B}\right),$$ (1)

where $`\sigma (r,t)`$ is the electrical conductivity of the medium. Following Wendell, Van Horn & Sargent (1987), we introduce a vector potential $`\mathbf{A}=(0,0,A_\varphi )`$, assuming the field to be purely poloidal, such that:

$$S(r,\theta ,t)=r\mathrm{sin}\theta A_\varphi (r,\theta ,t),$$

where $`S(r,\theta ,t)`$ is the Stokes stream function. $`S`$ can be separated in $`r`$ and $`\theta `$ in the form:

$$S(r,\theta ,t)=\sum_{l\ge 1}R_l(r,t)\mathrm{sin}\theta P_l^1(\mathrm{cos}\theta ),$$

where $`P_l^1(\mathrm{cos}\theta )`$ is the associated Legendre function of order one and $`R_l`$ is the multipole radial function. From equation (1) we obtain:

$$\frac{\partial ^2R_l}{\partial x^2}-\frac{l(l+1)}{x^2}R_l=\frac{4\pi R_*^2\sigma }{c^2}\frac{\partial R_l}{\partial t}$$ (2)

where $`x\equiv r/R_*`$ is the fractional radius in terms of the stellar radius $`R_*`$. The solution of this equation with the boundary conditions:

$$\frac{\partial R_l}{\partial x}+\frac{l}{x}R_l=0\text{ at }x=1,\qquad R_l=0\text{ at }x=x_c$$ (3)

for a particular value of $`l`$ gives the time evolution of the multipole of order $`l`$. Here, the first condition matches the correct multipole field in vacuum at the stellar surface, and the second condition makes the field vanish at the core-crust boundary (where $`r=r_c`$, the radius of the core) to keep the field confined to the crust. We assume that the field does not penetrate the core in the course of the evolution, as the core is likely to be superconducting.

### 2.1 Crustal Physics

The rate of ohmic diffusion is determined mainly by the electrical conductivity of the crust. The conductivity of the solid crust is given by

$$\frac{1}{\sigma }=\frac{1}{\sigma _{\mathrm{ph}}}+\frac{1}{\sigma _{\mathrm{imp}}},$$

where $`\sigma _{\mathrm{ph}}`$ is the phonon-scattering conductivity, which we obtain from Itoh et al. (1984) as a function of density and temperature, and the impurity-scattering conductivity $`\sigma _{\mathrm{imp}}`$ is obtained from the expressions given by Yakovlev & Urpin (1980). We construct the density profile of the neutron star using the equation of state of Wiringa, Fiks & Fabrocini (1988) matched to Negele & Vautherin (1973) and Baym, Pethick & Sutherland (1971), for an assumed mass of 1.4 M$`_{\odot }`$. Since the conductivity is a steeply increasing function of density, and since the density in the crust spans eight orders of magnitude, the conductivity changes sharply as a function of depth from the neutron star surface. Thus the deeper the location of the current distribution, the slower the decay. Another important factor determining the conductivity is the temperature of the crust. In the absence of impurities, the scattering of crustal electrons comes entirely from the phonons in the lattice (Yakovlev & Urpin 1980), and the number density of phonons increases steeply with temperature. The thermal evolution of the crust therefore plays an important role in the evolution of the magnetic field. The thermal evolution of a neutron star has been computed by many authors, and it is clearly seen that the inner crust ($`\rho >10^{10}\mathrm{g}\mathrm{cm}^{-3}`$) quickly attains an isothermal configuration after birth.
In the outer regions of the crust, the temperature follows the approximate relation

$$T(\rho )=\left(\frac{\rho }{\rho _b}\right)^{1/4}T_i,\qquad \rho \lesssim \rho _b$$ (4)

where $`T_i`$ is the temperature of the isothermal inner crust and $`\rho _b`$ is the density above which the crust is practically isothermal. As the star cools, a larger fraction of the crust becomes isothermal, with $`\rho _b`$ approximately given by

$$\rho _b=10^{10}\left(\frac{T_i}{10^9}\right)^{1.8}$$ (5)

(with $`\rho _b`$ in g cm<sup>-3</sup> and $`T_i`$ in K). Relations (4) and (5) above were obtained by fitting the radial temperature profiles published by Gudmundsson, Pethick & Epstein (1983). For the time evolution of $`T_i`$ we use the results of Urpin & van Riper (1993) for the case of standard cooling (the crustal temperature $`T_m`$ in their notation corresponds to $`T_i`$ above). A third parameter that should be considered in determining the conductivity is the impurity concentration. The effect of impurities on the conductivity is usually parametrized by a quantity $`Q`$, defined as $`Q=\frac{1}{n}\sum_in_i(Z-Z_i)^2`$, where $`n`$ is the total ion density, $`n_i`$ is the density of impurity species $`i`$ with charge $`Z_i`$, and $`Z`$ is the ionic charge in the pure lattice (Yakovlev & Urpin 1980). In the literature $`Q`$ is assumed to lie in the range 0.0 to 0.1. But statistical analyses indicate that the magnetic fields of isolated pulsars do not undergo significant decay during the radio pulsar lifetime (Bhattacharya et al. 1992, Hartman et al. 1997, Mukherjee & Kembhavi 1997). It has been shown (Konar 1997) that, to be consistent with this, impurity values in excess of 0.01 are not allowed in the crustal model.

### 2.2 Numerical Scheme

To solve equation (2) we assume the multipole radial profile used by Bhattacharya & Datta (1996, see also Konar & Bhattacharya 1997). This profile contains the depth and the width of the current configuration as input parameters, and we vary them to check the sensitivity of the results to these. We solve equation (2) numerically using the Crank-Nicolson method of differencing. We have modified the numerical code developed by Konar (1997) and used by Konar & Bhattacharya (1997) to compute the evolution of multipolar magnetic fields satisfying the appropriate boundary conditions given by equation (3).

## 3 Results and Discussion

In the figures we plot the time evolution of the various multipole components of the magnetic field due to pure diffusion in an isolated neutron star, assuming the same initial strength for all. It is evident from the figures that, except for very high multipole orders ($`l\gtrsim 25`$), the reduction in the field strength is very similar to that of the dipole component. For a multipole of order $`l`$ there would be $`2^l`$ reversals across the stellar surface. For typical spin periods, the size of the polar cap bounded by the base of the open field lines is $`\sim 0.01\%`$ of the total surface area. To contribute to the substructure of the pulse, therefore, the required multipoles must have a few reversals within the polar cap, which demands that the multipole order be five or more. On the other hand, if the multipole order is very large ($`l>l_{\mathrm{max}}\sim 20`$), the fine structure would be so small that it would be lost in the finite time resolution of the observations. Therefore, $`l`$ values in the range 5 to $`l_{\mathrm{max}}`$ would be the major contributors to the observed structure of the pulse profile.
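Returning to the numerical scheme of Sec. 2.2, a minimal Crank-Nicolson discretization of equation (2) with the boundary conditions (3) might look like the Python sketch below. It is an illustration only: the conductivity profile, grid, time step, and initial profile are invented placeholders rather than the realistic crustal inputs of Sec. 2.1, and the actual code of Konar (1997) is more elaborate.

```python
import numpy as np

def evolve_multipole(l, sigma, x_c=0.9, M=200, dt=1e-4, steps=1000):
    """Crank-Nicolson evolution of the multipole radial function R_l(x,t)
    of eq. (2), with R_l = 0 at x = x_c and dR_l/dx + (l/x) R_l = 0 at x = 1.

    `sigma(x)` is a dimensionless stand-in for 4*pi*R_*^2*sigma/c^2.
    """
    x = np.linspace(x_c, 1.0, M)
    dx = x[1] - x[0]
    D = 1.0 / sigma(x)                             # local magnetic diffusivity
    R = np.sin(np.pi * (x - x_c) / (1.0 - x_c))    # arbitrary initial profile

    def operator(theta):
        # Matrix I - theta*dt*D*L, where L R = R'' - l(l+1)/x^2 R inside.
        A = np.zeros((M, M))
        for j in range(1, M - 1):
            c = theta * dt * D[j]
            A[j, j - 1] = -c / dx**2
            A[j, j] = 1.0 + c * (2.0 / dx**2 + l * (l + 1) / x[j] ** 2)
            A[j, j + 1] = -c / dx**2
        return A

    A_impl = operator(+0.5)     # implicit (left-hand) side
    A_expl = operator(-0.5)     # explicit (right-hand) side
    # Boundary rows: R(x_c) = 0 at the core-crust interface, and a one-sided
    # difference of the vacuum matching condition, R_M (1 + l*dx) = R_{M-1}.
    A_impl[0, 0] = 1.0
    A_impl[-1, -1] = 1.0 + l * dx
    A_impl[-1, -2] = -1.0
    for _ in range(steps):
        b = A_expl @ R
        b[0] = 0.0
        b[-1] = 0.0
        R = np.linalg.solve(A_impl, b)
    return x, R

# Example: quadrupole (l = 2) with a steeply rising placeholder conductivity.
x, R2 = evolve_multipole(l=2, sigma=lambda x: 1.0 + 1.0e3 * (1.0 - x) ** 4)
```

The Crank-Nicolson average of the explicit and implicit operators keeps the scheme unconditionally stable, which matters here because the steep conductivity gradient makes the diffusivity vary by many orders of magnitude across the grid.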
However, as seen from the figures, multipoles of such orders evolve similarly to the dipole. Therefore no significant evolution is expected in the pulse shape due to the evolution of the multipole structure of the magnetic field. As discussed before, the multipole orders contributing the field-line curvature required for pair production are low, most prominently the quadrupole. As the evolution of these low orders is also very close to that of the dipole, the radii of curvature of the field lines on the polar cap are not expected to change significantly during the lifetime of a radio pulsar. To test the sensitivity of these results to the impurity concentration of the crust and to the density at which the initial current is concentrated, we have evolved models with various values of these parameters. The results are displayed in the figures, where we plot the ratio of the dipole to the higher multipoles at an age of $`10^7`$ years. It is seen that the results are insensitive to these parameters, particularly for the low orders of multipoles of interest. Krolik (1991) and Arons (1993) conjectured that, except for multipoles of order $`l\gtrsim R_*/\mathrm{\Delta }r`$, the decay rates would be similar, owing to the finite thickness $`\mathrm{\Delta }r`$ of the crust over which the current is confined. The evolution plotted in the figure assumes $`\mathrm{\Delta }r=1.2`$ km, for which $`R_*/\mathrm{\Delta }r\sim 8`$. However, it is seen from the figures that significant decay occurs only for $`l\gtrsim 25`$, much greater than $`R_*/\mathrm{\Delta }r`$. This is most likely caused by the steep increase in conductivity towards the interior. In conclusion, our results indicate that for a crustal model of the neutron star magnetic field there would be no significant change in the multipolar structure with age. This fact seems to be corroborated by observations: studies identifying multiple components in pulse profiles (Kramer et al. 1994) show that the number of components does not vary with the age of the pulsar. Thus the evolution of the multipolar structure of the magnetic field is unlikely to leave any observable signature on pulsar emission. This is in contrast with the predictions of the plate-tectonics model of Ruderman (1991a,b,c), which suggests a major change in the field structure with pulsar spin evolution.

## Acknowledgment

We thank A. A. Deshpande, Rajaram Nityananda, N. Rathnasree, V. Radhakrishnan, C. Shukre and M. Vivekanand for helpful discussions. We are grateful to V. Urpin for providing us with computer-readable versions of the cooling curves computed by Urpin and van Riper (1993). We gratefully acknowledge D. Page for bringing the X-ray work to our attention and an anonymous referee for his useful remarks.
# The Orbital Period of HDE226868/Cyg X-1

## 1 Introduction

Cygnus X-1, identified with the bright (V $`\sim `$ 8) star HDE226868 (Bolton 1972; Webster & Murdin 1972), has long been regarded as the best black hole candidate among the high-mass X-ray binaries. As such, it has been an object of extensive observation over the past two and a half decades. Perhaps surprisingly, there remain some important uncertainties and discrepancies in the derived properties of the system. In particular, Ninkov, Walker, & Yang (1987, hereafter NWY) report evidence of possible period variation, discussed below, and additional periodicities on timescales ranging from 39 days to 4.5 years have been suggested by Kemp, Herman, & Barbour (1978), Wilson and Fox (1981), Priedhorsky, Terrell, & Holt (1983), and Walker and Quintanilla (1978). HDE226868 is a single-lined spectroscopic binary. The optical spectrum has been classified O9.7Iab by Walborn (1973), with variable emission at HeII $`\lambda `$4686 and, less prominently, at the Balmer lines. The radial velocity period of 5.6 days has been established for over two decades, and the definitive orbital elements and ephemeris were published by Gies and Bolton (1982, hereafter GB). They give a period of 5.59974 $`\pm `$ 0.00008 days, a precision which translates to an uncertainty of only $`\pm `$0.02 cycle in phase at the present time. However, NWY, in a subsequent radial-velocity study, reported a somewhat longer period of 5.6017 $`\pm `$ 0.0001 days, and suggested the possibility of a period increase with time. The effect of these results is that the orbital phase of the Cyg X-1 system is highly uncertain at the present epoch: the phase calculated using the ephemeris of NWY differs from the GB phase by 0.5 cycle, and if the period is in fact varying, the phase at the present epoch is completely indeterminate. If correct, a time-varying period would have important implications for the mass transfer rate, and hence the evolution, of the binary. Furthermore, with this indeterminacy in phase it is impossible to ascribe and interpret accurately features in the X-ray light curve, such as the dipping behaviour, in particular the distribution of dipping with orbital phase (e.g. Remillard and Canizares 1984). To resolve these ambiguities we therefore undertook a program of observations to re-establish the orbital ephemeris of HDE226868. These data also allowed us to undertake a more detailed investigation of the long-term stability of the orbit, as high quality data now exist over a $`\sim `$30 year baseline. Our new ephemeris has thereby allowed us to complete a survey of the distribution of X-ray dipping with orbital phase (Balucinska-Church et al. 1998).

## 2 Observations

Through the La Palma Service Programme, and with the assistance of regularly scheduled observers, we obtained CCD spectra of HDE226868 using the Intermediate Dispersion Spectrograph on the Isaac Newton Telescope on 17 nights in May and June 1996. These included 11 consecutive nights covering two complete orbital cycles. In all we obtained 37 spectra of the object, each covering a spectral range of 4100 to 4900 Å with a dispersion of approximately 0.8 Å/pixel. Exposures varied from 100 to 200 seconds and yielded a S/N in excess of 100. In addition, we observed 19 Cep, a radial velocity standard of similar spectral type, on two nights, for reference and calibration purposes. The spectra were extracted and reduced using standard IRAF routines.
The extracted spectra were wavelength calibrated using a Cu-Ar arc recorded immediately before and after the program spectra, and then normalized. Our wavelength calibration had rms residuals of $`\sim `$0.10 Å. Typical HDE226868 and 19 Cep spectra are shown in Figure 1.

## 3 Period Determination

We determined radial velocities by cross-correlation of the HDE226868 spectra with the 19 Cep spectra, using IRAF routines. Separate determinations were made for the hydrogen and helium lines. We used the HeI absorption lines at 4388, 4472, 4713 and 4921 Å. The resulting velocities are given in Table 1 and were reduced to heliocentric velocities using the catalogue value (Hoffleit 1982) for the radial velocity of 19 Cep. We performed period searches on both sets of velocities, as well as on our velocities in combination with previously determined velocities from the literature. We used several different period-search techniques, including Scargle periodograms, Fourier searches, and chi-square minimization of a circular orbit fit. All methods yielded similar results, as follows. The best-fit period to the hydrogen-line velocities for our data alone is 5.566 $`\pm `$ 0.012 days, while the best fit to our helium-line data is 5.629 $`\pm `$ 0.015 days. The apparent discrepancy here is startling, as is the amount of apparent change in period from the canonical value in either case. Both of these issues can be readily resolved. It is well established that the hydrogen lines are contaminated by emission whose velocity is in approximate antiphase with the absorption lines (GB; Hutchings, Crampton, & Bolton 1979). This produces variable and irregular line profiles which yield spurious velocities and periods when cross-correlated with standard spectra. Any period determined from the hydrogen-line velocities is therefore highly suspect. For this reason we reject all velocity determinations based on the hydrogen lines, and in working with historical data below, we use only published velocities determined from the helium lines. The fact that the best-fit period to the helium lines seems to be $`2\sigma `$ larger than previously determined values should also be viewed with some scepticism. In fact, with a dense grid of observations over a relatively short span of time, the quality of fit is good over a much broader period range than suggested by the quoted uncertainty. Fitting our data to the NWY period, or to the GB period, produces $`\chi ^2`$ residuals which are not significantly worse than those for our best fit. Indeed, the formal uncertainty calculated by the fitting routines is undoubtedly too small, because the scatter of the data is such that even for the best orbital models the resulting $`\chi ^2`$ is too large for the formal uncertainty calculations to be valid. We shall see below that this is not unique to our results but has in fact been a problem for most, and perhaps all, previously published orbital solutions for Cyg X-1; uncertainties have been systematically underestimated, resulting in apparent discrepancies where none really exist. To test for the possibility of period variation, we combined our observations with all previously published helium-line velocity data (Brucato and Zappala 1974; GB; NWY) and performed a weighted $`\chi ^2`$-minimization fit to a circular orbit, both with a fixed period and with a period which varies linearly with time, using the MINUIT package of function-fitting routines (James 1994). The results are summarized in Table 2 and plotted in figure 2.
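A stripped-down version of such a weighted circular-orbit fit, using scipy instead of MINUIT and synthetic placeholder data, might look like the following; the numerical values are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

def circular_orbit(t, gamma, K, P, T0):
    # Radial velocity of a circular orbit: v = gamma + K*sin(2*pi*(t - T0)/P).
    return gamma + K * np.sin(2.0 * np.pi * (t - T0) / P)

def fit_orbit(t, v, v_err, p0):
    """Weighted chi-square fit of a fixed-period circular orbit."""
    resid = lambda params: (v - circular_orbit(t, *params)) / v_err
    sol = least_squares(resid, p0)
    return sol.x, float(np.sum(sol.fun ** 2))

# Synthetic stand-in data; in practice (t, v, v_err) would be the
# heliocentric HeI velocities of Table 1 plus the published values.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 60.0, 37))           # HJD offsets, days
v_true = circular_orbit(t, -3.0, 75.0, 5.5998, 2.1)
v_obs = v_true + rng.normal(0.0, 5.0, t.size)     # km/s, 5 km/s scatter
v_err = np.full(t.size, 5.0)
params, chi2 = fit_orbit(t, v_obs, v_err, p0=(0.0, 70.0, 5.6, 0.0))
```

The variable-period model adds one parameter by replacing the constant P with P(t) = P0 + Pdot*(t - T0) inside the phase. Note that when the reduced chi-square of the best fit is far above unity, as it is for the real data discussed below, the formal parameter uncertainties returned by any such routine will be underestimates.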
The best fixed period is 5.5998 $`\pm `$ 0.0001 days, with a $`\chi ^2`$ of 4284; the best variable period is 5.5997 $`\pm `$ 0.0001 days, with a $`\dot{P}`$ of 3.8 $`\times `$ 10<sup>-7</sup> and a $`\chi ^2`$ of 4231. But with only 216 data points, this means that $`\chi _\nu ^2\sim 20`$ in both cases. Our new period agrees well with the photometric period derived recently by Voloshina, Lyutyi & Tarasov (1997) and with a solution from HeI $`\lambda `$6678 measurements by Sowers et al. (1998). Our values for $`\gamma `$ and K are within 2$`\sigma `$ of previous results (NWY and GB). The variable-period model produces a slightly better fit to the data but, with a $`\chi ^2`$ difference of less than 1.5%, the improvement in quality of fit between it and the fixed-period model is not statistically significant. We conclude that there is no compelling evidence for period variation in Cygnus X-1. We have therefore adopted the orbital elements of the best-fit constant-period model in all further discussion. We computed O–C residuals for all the helium-line velocities and performed period searches on these. No significant periodicity was found in these residuals. In particular, we find no evidence of the suggested periodicities at 39 d or 78 d (Kemp, Herman, & Barbour 1978) or 4.5 years (Wilson and Fox 1981), or of the 294 day X-ray modulation (Priedhorsky, Terrell, & Holt 1983) or the 91 d photometric period (Walker and Quintanilla 1978). The absence of a 294 d signal was also noted by Gies & Bolton (1984).

## 4 Discussion

Our best-fit period for the combined datasets, 5.5998 $`\pm `$ 0.0001 d, is identical well within the standard errors to the GB period of 5.59974 $`\pm `$ 0.00008 d, and our T<sub>0</sub>, the epoch of inferior conjunction, agrees with the prediction of the GB ephemeris to within 0.014 in phase, or 0.078 d. Since the uncertainty in the periods yields an uncertainty in the GB prediction of T<sub>0</sub> of $`\pm `$ 0.12 days at this epoch, this is excellent agreement. Thus our results confirm the ephemeris of GB and refute the suggestions of period variation by NWY. Why, then, did NWY's result show such a discrepancy and suggest a change in period? They reported that the period determined from their data alone was 5.60172 $`\pm `$ 0.00003 d, differing by 20$`\sigma `$ from the period determined from all previous observations. We suggest several sources for this error. First of all, we performed our own period searches on the data published by NWY; the best fit we find is at 5.6002 d, only 2$`\sigma `$ greater than the GB value, so we may be looking at much ado about a typographical error. In addition, there is excellent phase agreement between the NWY data and the GB ephemeris, again suggesting no real period discrepancy. Finally, it seems that the error estimates for both the NWY period and their period determined from historical data are too small by as much as an order of magnitude. This is primarily because the formal error calculations used to determine these error estimates are based on the assumption of a very good fit of the model to the data, i.e. that $`\chi _\nu ^2\sim 1`$. This condition is not met by any of the data sets and fits used in this work, presumably because the estimates of the uncertainties in the velocities were too small, though perhaps also because of variability in the source. We recalculated the fits, trying larger estimates for the velocity uncertainties until the reduced chi-square criterion was satisfied.
We found that the period uncertainties were then about an order of magnitude larger than reported by the original authors. For example, treating NWY's data in this way yields a best-fit period of 5.6002 $`\pm `$ 0.0003 d, and the discrepancy with GB disappears. An additional source of underestimated uncertainty is the inclusion by both NWY and Bolton (1975) of two velocities obtained by Seyfert and Popper (1941), ostensibly to improve the precision of the period determinations. For example, Bolton (1975) uses these points to decrease his uncertainty estimate by a factor of 10; NWY do not discuss the effect of including these points on their uncertainty. Popper (1996, personal communication) suggests that these velocities are "very weak reeds on which to hang significant conclusions", the velocities having considerable uncertainties and being based on averaging velocities from lines of several different species. Because of the problems with hydrogen-line velocities discussed above, inclusion of these points is unlikely to improve the period uncertainty. GB also introduced "velocity corrections" to many of their velocities, shifting spectra to fit a mean interstellar K-line velocity because of instabilities in their spectrograph. It is not unlikely that this introduced uncertainties larger than the mean errors they report.

## 5 Conclusions

We have carried out a programme of radial velocity determinations for the black hole binary Cyg X-1, and have combined these data with the data used previously to determine the period, to provide a new orbital ephemeris for the source. A critical consideration of the errors associated with previous work has shown these to be underestimated. Based on this, our main conclusion is that there is no evidence for a change in orbital period as suggested by Ninkov et al. Finally, our new ephemeris allows the orbital phase of Cyg X-1 to be calculated with an error much smaller than would result from extrapolating the Gies and Bolton ephemeris, with its quoted accuracy, to the present epoch.

## Acknowledgements

We are very grateful to the La Palma support astronomers, particularly Don Pollacco, who operate the Service Programme, and to those astronomers who allowed the Cyg X-1 observations to be taken during their own time. The Isaac Newton Group of telescopes is operated on the island of La Palma by the Royal Greenwich Observatory in the Spanish Observatorio del Roque de Los Muchachos of the Instituto de Astrofísica de Canarias. JLS wishes gratefully to acknowledge the following sources of partial support for his participation in this work: the Margaret Cullinan Wray Charitable Lead Annuity Trust (through the AAS Small Grants Program); the NASA JOVE program; a Theodore Dunham, Jr., Grant of the Fund for Astrophysical Research; the International Astronomical Union Exchange of Astronomers Program; and the Department of Astrophysics of the University of Oxford.

Figure 1: Typical rectified spectra of HDE226868 (top) and the radial velocity standard 19 Cep (bottom) obtained with the La Palma 2.5m INT. The spectral types are very similar, but note the HeII $`\lambda `$4686 emission in HDE226868.

Figure 2: Best-fitting fixed-period circular orbit to all helium radial velocity data (ours plus all previously published material, see text). The lower panel shows the residuals to the fit.
# Intergalactic Extinction of High Energy Gamma-Rays

## 1 Introduction

Very high energy $`\gamma `$-ray beams from blazars can be used to measure the intergalactic infrared radiation field, since pair-production interactions of $`\gamma `$-rays with intergalactic IR photons will attenuate the high-energy ends of blazar spectra. In recent years, this concept has been used successfully to place upper limits on the intergalactic IR field (IIRF). Determining the IIRF, in turn, allows us to model the evolution of the galaxies which produce it. As energy thresholds are lowered in both existing and planned ground-based air Cherenkov light detectors, cutoffs in the $`\gamma `$-ray spectra of more distant blazars are expected, owing to extinction by the IIRF. These can be used to explore the redshift dependence of the IIRF. There are now 66 "grazars" ($`\gamma `$-ray blazars) which have been detected by the EGRET team. These sources, optically violent variable quasars and BL Lac objects, have been detected out to a redshift greater than 2. Of all the blazars detected by EGRET, only the low-redshift BL Lac, Mrk 421 ($`z=0.031`$), has been seen by the Whipple telescope. The fact that the Whipple team did not detect the much brighter EGRET source 3C279 at TeV energies is consistent with the predictions of a cutoff for a source at its much higher redshift of 0.54. So too are the further detections of three other close BL Lacs ($`z<0.12`$), viz., Mrk 501 ($`z=0.034`$), 1ES2344+514 ($`z=0.044`$), and PKS 2155-304 ($`z=0.117`$), which were too faint at GeV energies to be seen by EGRET<sup>1</sup><sup>1</sup>1PKS 2155-304 was seen in one observing period by EGRET, as reported in the Third EGRET Catalogue.

## 2 The Opacity of Intergalactic Space Owing to the IIRF

The formulae relevant to absorption calculations involving pair production are given and discussed in Ref. For $`\gamma `$-rays in the TeV energy range, the pair-production cross section is maximized when the soft photon energy is in the infrared range:

$$\lambda (E_\gamma )\simeq \lambda _e\frac{E_\gamma }{2m_ec^2}=2.4E_{\gamma ,\mathrm{TeV}}\mu \mathrm{m}$$ (1)

where $`\lambda _e=h/(m_ec)`$ is the Compton wavelength of the electron. For a 1 TeV $`\gamma `$-ray, this corresponds to a soft photon having a wavelength near the K-band (2.2 $`\mu `$m). (Pair-production interactions actually take place with photons over a range of wavelengths around the optimal value, as determined by the energy dependence of the cross section; see eq. (6).) If the emission spectrum of an extragalactic source extends beyond 20 TeV, then the extragalactic infrared field should cut off the observed spectrum between $`\sim 20`$ GeV and $`\sim 20`$ TeV, depending on the redshift of the source.

## 3 Absorption of Gamma-Rays at Low Redshifts

Stecker and De Jager (hereafter SD98) have recalculated the absorption coefficient of intergalactic space using a new, empirically based calculation of the spectral energy distribution (SED) of intergalactic low energy photons by Malkan and Stecker (hereafter MS98), obtained by integrating luminosity-dependent infrared spectra of galaxies over their luminosity and redshift distributions. After giving their results on the $`\gamma `$-ray optical depth as a function of energy and redshift out to a redshift of 0.3, SD98 applied their calculations by comparing their results with the spectral data on Mrk 421 and Mrk 501.
SD98 make the reasonable simplifying assumption that the IIRF is basically in place at redshifts $`z<0.3`$, having been produced primarily at higher redshifts. Therefore SD98 limited their calculations to $`z<0.3`$. (The calculation of $`\gamma `$-ray opacity at higher redshifts will be discussed in the next section.) SD98 assumed for the IIRF two of the SEDs given in MS98 (shown in Figure 1). The lower curve in Figure 1 (adapted from MS98) assumes evolution out to $`z=1`$, whereas the upper curve assumes evolution out to $`z=2`$. Evolution in stellar emissivity is expected to level off or decrease at redshifts greater than $`\sim 1.5`$, so that the two curves in Fig. 1 may be considered to be lower and upper limits bounding the expected IR flux. Using these two SEDs for the IIRF, SD98 obtained parametric expressions for $`\tau (E,z)`$ for $`z<0.3`$, taking a Hubble constant of $`H_o=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. The results of MS98 generally agree well with very recent COBE data.<sup>2</sup><sup>2</sup>2The derived COBE point at 140 $`\mu `$m appears to be inconsistent with all calculated IIRF SEDs. It is also inconsistent with the spectrum of Mrk 501 (Konopelko, these proceedings), since it would imply a $`\gamma `$-ray optical depth $`\sim 6`$ at 20 TeV. They also agree with lower limits from galaxy counts and other considerations. The results of MS98 are also in agreement with upper limits obtained from TeV $`\gamma `$-ray studies. This agreement is illustrated in Figure 2, which shows the upper SED curve from MS98 in comparison with various data and limits. The double-peaked form of the SED of the IIRF requires a third-order polynomial to approximate the opacity $`\tau `$ in parametric form. SD98 give the following approximation:

$$\mathrm{log}_{10}[\tau (E_{\mathrm{TeV}},z)]\simeq \sum_{i=0}^{3}a_i(z)(\mathrm{log}_{10}E_{\mathrm{TeV}})^i\quad \mathrm{for}\ 1.0<E_{\mathrm{TeV}}<50,$$ (2)

where the z-dependent coefficients are given by

$$a_i(z)=\sum_{j=0}^{2}a_{ij}(\mathrm{log}_{10}z)^j.$$ (3)

Table 1 gives the numerical values of $`a_{ij}`$, with $`i=0,1,2,3`$ and $`j=0,1,2`$. The numbers before the brackets are obtained using the lower IIRF SED shown in Figure 1; the numbers in the brackets are obtained using the higher IIRF SED.

| Table 1: Polynomial coefficients $`a_{ij}`$ | | | | |
| --- | --- | --- | --- | --- |
| $`j`$ | $`a_{0j}`$ | $`a_{1j}`$ | $`a_{2j}`$ | $`a_{3j}`$ |
| 0 | 1.11 (1.46) | -0.26 (0.10) | 1.17 (0.42) | -0.24 (0.07) |
| 1 | 1.15 (1.46) | -1.24 (-1.03) | 2.28 (1.66) | -0.88 (-0.56) |
| 2 | 0.00 (0.15) | -0.41 (-0.35) | 0.78 (0.58) | -0.31 (-0.20) |

Equation (2) approximates $`\tau (E,z)`$ to within 10% for all values of z and E considered. Figure 3 shows the results of the SD98 calculations of the optical depth for various energies and redshifts up to 0.3. Figure 4 shows observed spectra for Mrk 421 and Mrk 501 in the flaring phase, compared with best-fit spectra of the form $`KE^{-\mathrm{\Gamma }}\mathrm{exp}[-\tau (E,z=0.03)]`$, with $`\tau (E,z)`$ given by the two appropriate curves shown in Figure 3. Because $`\tau <1`$ for $`E<10`$ TeV, there is no obvious curvature in the differential spectra below this energy; rather, we obtain a slight steepening in the power-law spectra of the sources as a result of the weak absorption. This result implies that the intrinsic spectra of the sources should be harder by $`\delta \mathrm{\Gamma }\sim `$ 0.25 in the lower IIRF case, and $`\sim `$ 0.45 in the higher IIRF case.
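The parametrization of eqs. (2)-(3) is straightforward to evaluate; the sketch below transcribes the Table 1 coefficients into code and applies the resulting opacity to an absorbed power law. The function names and example values are ours.

```python
import numpy as np

# Table 1 coefficients a_ij: rows j = 0,1,2; columns i = 0..3.
A_LOW = np.array([[1.11, -0.26, 1.17, -0.24],      # lower IIRF SED
                  [1.15, -1.24, 2.28, -0.88],
                  [0.00, -0.41, 0.78, -0.31]])
A_HIGH = np.array([[1.46, 0.10, 0.42, 0.07],       # higher IIRF SED
                   [1.46, -1.03, 1.66, -0.56],
                   [0.15, -0.35, 0.58, -0.20]])

def tau(E_TeV, z, coeff=A_LOW):
    """Optical depth from eqs. (2)-(3); valid for 1 < E_TeV < 50 and z < 0.3."""
    lz = np.log10(z)
    # a_i(z) = sum_j a_ij (log10 z)^j ; np.polyval wants highest power first.
    a = [np.polyval(coeff[::-1, i], lz) for i in range(4)]
    return 10.0 ** np.polyval(a[::-1], np.log10(E_TeV))

def observed_spectrum(E_TeV, K, Gamma, z, coeff=A_LOW):
    # Absorbed power law K * E^-Gamma * exp(-tau), as used for Figure 4.
    return K * E_TeV ** (-Gamma) * np.exp(-tau(E_TeV, z, coeff))

# Example: the opacity at 10 TeV for z = 0.03 (Mrk 421/501) is of order unity.
print(tau(10.0, 0.03, A_LOW), tau(10.0, 0.03, A_HIGH))
```

Applied with, say, Gamma = 2 at z = 0.117, the same routine can be used to generate predictions of the kind shown in Figure 5 for PKS 2155-304.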
The SD98 results for the absorption coefficient as a function of energy do not differ dramatically from those obtained previously; however, they are more reliable because they are based on the empirically derived IIRF given by MS98, whereas all previous calculations of TeV $`\gamma `$-ray absorption were based on theoretical modeling of the IIRF. The MS98 calculation was based on data from nearly 3000 IRAS galaxies. These data included (1) the luminosity-dependent infrared SEDs of galaxies, (2) the 60 $`\mu `$m luminosity function of galaxies and (3) the redshift distribution of galaxies. The advantage of using empirical data to construct the SED of the IIRF, as done in MS98, is particularly clear in the mid-IR range. In this region of the spectrum, galaxy observations indicate more flux from warm dust in galaxies than is taken account of in more theoretically oriented models (e.g., Primack et al., these proceedings). As a consequence, the mid-IR "valley" between the cold dust peak in the far IR and the cool star peak in the near IR is filled in more in the MS98 results and is not as pronounced as in previously derived models of the IR background SED. Such other derived SEDs are in conflict with recent lower limits in the mid-IR derived from galaxy counts (see Figure 2). The SD98 calculations predict that intergalactic absorption should only slightly steepen the spectra of Mrk 421 and Mrk 501 below $`\sim `$ 10 TeV, which is consistent with the data already in the published literature (see Figure 4). The SD98 calculations further predict that intergalactic absorption should turn over the spectra of these sources at energies greater than $`\sim `$ 20 TeV (see Figure 4). Observations of these objects at large zenith angles, which give large effective threshold energies, may thus demonstrate the effect of intergalactic absorption. The observed spectrum of Mrk 501 in the flaring phase has newly been extended to an energy of 24 TeV by observations of the HEGRA group. (These new data are not shown in Figure 4 but are given in the paper of Konopelko in these proceedings.) The new HEGRA data are well fitted by a source power-law spectrum of spectral index $`\sim `$ 1.8, steepened at energies above a few TeV by intergalactic absorption with the optical depth calculated by SD98 (Konopelko, private communication). Finally, we consider the source PKS 2155-304, an XBL located at a moderate redshift of 0.117, which has been reported by the Durham group to have a flux above 0.3 TeV of $`\sim 4\times 10^{-11}`$ cm<sup>-2</sup> s<sup>-1</sup>, close to that predicted by a simple SSC model. Using the SD98 absorption results for the higher IR SED in Figure 1 and assuming an $`E^{-2}`$ source spectrum, we predict an absorbed (observed) spectrum as shown in Figure 5. As indicated in the figure, we find that this source should have its spectral index steepened by $`\sim `$ 1 between $`\sim `$ 0.3 and $`\sim `$ 3 TeV and should show an absorption turnover above $`\sim `$ 6 TeV. Observations of the spectrum of this source should provide a further test for intergalactic absorption.

## 4 Absorption of Gamma-Rays at High Redshifts

We now discuss the absorption of 10 to 500 GeV $`\gamma `$-rays at high redshifts. In order to calculate such high-redshift absorption properly, it is necessary to determine the spectral distribution of the intergalactic low energy photon background radiation as a function of redshift as realistically as possible, out to frequencies beyond the Lyman limit.
This calculation, in turn, requires observationally based information on the evolution of the spectral energy distributions (SEDs) of IR through UV starlight from galaxies, particularly at high redshifts. Conversely, observations of high-energy cutoffs in the $`\gamma `$-ray spectra of blazars as a function of redshift, which may enable one to separate intergalactic absorption from redshift-independent cutoff effects, could add to our knowledge of galaxy formation and early galaxy evolution. In this regard, it should be noted that the study of blazar spectra in the 10 to 300 GeV range is one of the primary goals of a next-generation space-based $`\gamma `$-ray telescope, GLAST (Gamma-ray Large Area Space Telescope) (Ref. and Gehrels, these proceedings), as well as of VERITAS and other future ground-based $`\gamma `$-ray telescopes. Salamon and Stecker (hereafter SS98) have calculated the $`\gamma `$-ray opacity as a function of both energy and redshift for redshifts as high as 3, by taking account of the evolution of both the SED and the emissivity of galaxies with redshift (see section 4.2). In order to accomplish this, they adopted the recent analysis of Fall et al. and also included the effects of metallicity evolution on galactic SEDs. They then gave predicted $`\gamma `$-ray spectra for selected blazars and extended the calculations of the extragalactic $`\gamma `$-ray background from blazars to an energy of 500 GeV with absorption effects included (see section 4.3). Their results indicate that the extragalactic $`\gamma `$-ray background spectrum from blazars should steepen significantly above 20 GeV, owing to extragalactic absorption. Future observations of such a steepening would thus provide a test of the blazar origin hypothesis for the $`\gamma `$-ray background radiation. The results of the SS98 absorption calculations can be used to place limits on the redshifts of $`\gamma `$-ray bursts (see section 4.4). We describe and discuss these results in the following subsections.

### 4.1 Redshift Dependence of the Intergalactic Low Energy SED

The opacity of intergalactic space to high energy $`\gamma `$-rays as a function of redshift depends upon the number density of soft target photons (IR to UV) as a function of redshift, photons whose production is dominated by stellar emission. To evaluate the SED of the IR-UV intergalactic radiation field we must integrate the total stellar emissivity over time. This requires an estimate of the dependence of stellar emissivity on redshift. Previous calculations of $`\gamma `$-ray opacity have assumed either that essentially all of the background was in place at high redshifts, corresponding to a burst of star formation at the initial redshift, or strong evolution, or no evolution at all. Pei and Fall have devised a method for calculating stellar emissivity which bypasses the uncertainties associated with estimates of poorly defined luminosity distributions of evolving galaxies. The core idea of their approach is to relate the star formation rate directly to the evolution of the neutral gas density in damped Ly $`\alpha `$ systems, and then to use stellar population synthesis models to estimate the mean co-moving stellar emissivity $`\mathcal{E}_\nu (z)`$ of the universe as a function of frequency $`\nu `$ and redshift $`z`$. The SS98 calculation of stellar emissivity closely follows this elegant analysis, with minor modifications.
Damped Ly $`\alpha `$ systems are high-redshift clouds of gas whose neutral hydrogen surface density is large enough ($`>2\times 10^{20}`$ cm<sup>-2</sup>) to generate saturated Ly $`\alpha `$ absorption lines in the spectra of background quasars that happen to lie along and behind common lines of sight to these clouds. These gas systems are believed to be either precursors to galaxies or young galaxies themselves, since their neutral hydrogen (HI) surface densities are comparable to those of spiral galaxies today, and their co-moving number densities are consistent with those of present-day galaxies. It is in these systems that initial star formation presumably took place, so there is a relationship between the mass content of stars and of gas in these clouds; if there is no infall or outflow of gas in these systems, the systems are "closed", so that the formation of stars must be accompanied by a reduction in the neutral gas content. Such a variation in the HI surface densities of Ly $`\alpha `$ systems with redshift is seen, and is used by Pei and Fall to estimate the mean cosmological rate of star formation back to redshifts as large as $`z=5`$. Pei and Fall estimated the neutral (HI plus HeI) co-moving gas density $`\rho _c\mathrm{\Omega }_g(z)`$ in damped Ly $`\alpha `$ systems from observations of the redshift evolution of these systems by Lanzetta et al. Lanzetta et al. have observed that while the number density of damped Ly $`\alpha `$ systems appears to be relatively constant over redshift, the fraction of higher density absorption systems within this class of objects decreases steadily with decreasing redshift. They attribute this to a reduction in gas density with time, roughly of the form $`\mathrm{\Omega }_g(z)=\mathrm{\Omega }_{g0}e^z`$, where $`\rho _c\mathrm{\Omega }_{g0}`$ is the current gas density in galaxies. Pei and Fall have taken account of self-biasing effects to obtain a corrected value of $`\mathrm{\Omega }_g(z)`$. SS98 have reproduced their calculations to obtain $`\mathrm{\Omega }_g(z)`$ under the assumption that the asymptotic, high-redshift value of the neutral gas mass density is $`\mathrm{\Omega }_{g,i}=1.6\times 10^{-2}h_0^{-1}`$, where $`h_0\equiv H_0`$/(100 km s<sup>-1</sup> Mpc<sup>-1</sup>). In a "closed galaxy" model, the change in co-moving stellar mass density is $`\rho _c\dot{\mathrm{\Omega }}_s(z)=-\rho _c\dot{\mathrm{\Omega }}_g(z)`$, since the gas mass density $`\rho _c\mathrm{\Omega }_g(z)`$ is being converted into stars. This determines the star formation rate and the consequent stellar emissivity. The rate of metal production, $`\dot{Z}`$, is related to the star formation rate by $`\mathrm{\Omega }_g\dot{Z}=\zeta \dot{\mathrm{\Omega }}_s`$, where $`\zeta =0.38Z_{\odot }`$ is the metallicity yield averaged over the initial stellar mass function, with $`Z_{\odot }`$ being the solar metallicity. This gives a metallicity evolution $`Z(z)=-\zeta \mathrm{ln}[\mathrm{\Omega }_g(z)/\mathrm{\Omega }_{g,i}]`$. In order to determine the mean stellar emissivity from the star formation rate, an initial mass function (IMF) $`\varphi (M)`$ must be assumed for the distribution of stellar masses $`M`$ in a freshly synthesized stellar population. To further specify the luminosities of these stars as a function of mass $`M`$ and age $`T`$, Fall, Charlot, and Pei use the Bruzual-Charlot (BC) population synthesis models for the spectral evolution of stellar populations.
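The closed-box bookkeeping just described can be made concrete with a short sketch. Note that the exponential form and, especially, the present-day normalization used below are crude placeholders of ours (SS98 work with the self-bias-corrected $`\mathrm{\Omega }_g(z)`$, which we do not reproduce here), so the numbers are purely illustrative.

```python
import numpy as np

H0 = 0.65                             # assumed h_0, for illustration
OMEGA_G_I = 1.6e-2 / H0               # asymptotic high-z neutral gas density
ZETA = 0.38                           # metal yield in units of solar metallicity
OMEGA_G0 = OMEGA_G_I * np.exp(-4.0)   # placeholder present-day gas density

def omega_gas(z):
    # Rough Lanzetta-type form Omega_g = Omega_g0 * e^z, capped at the
    # assumed initial (high-redshift) value.
    return np.minimum(OMEGA_G0 * np.exp(z), OMEGA_G_I)

def omega_stars(z):
    # Closed-box bookkeeping: stars grow exactly as the gas is consumed.
    return OMEGA_G_I - omega_gas(z)

def metallicity(z):
    # Z(z) = -zeta * ln[Omega_g(z)/Omega_g,i], in solar units.
    return -ZETA * np.log(omega_gas(z) / OMEGA_G_I)

zs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(metallicity(zs))   # metallicity declines toward high redshift
```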
In the Bruzual-Charlot population synthesis models, the specific luminosity $`L_{\mathrm{star}}(\nu ,M,T)`$ of a star of mass $`M`$ and age $`T`$ is integrated over a specified IMF to obtain the total specific luminosity $`S_\nu (T)`$ per unit mass of an entire population in which all stellar members are produced simultaneously ($`T=0`$). Following Fall, Charlot, and Pei, SS98 used the BC model corresponding to a Salpeter IMF, $`\varphi (M)dM\propto M^{-2.35}dM`$, where $`0.1M_{\odot }<M<125M_{\odot }`$. The mean co-moving emissivity $`\mathcal{E}_\nu (t)`$ was then obtained by convolving over time $`t`$ the specific luminosity with the mean co-moving mass rate of star formation. SS98 also obtained metallicity correction factors for stellar radiation at various wavelengths; increased metallicity gives a redder population spectrum. SS98 calculated the stellar emissivity as a function of redshift at 0.28 $`\mu `$m, 0.44 $`\mu `$m, and 1.00 $`\mu `$m, both with and without a metallicity correction. Their results agree well with the emissivity obtained by the Canada-France Redshift Survey over the redshift range of the observations ($`z\lesssim 1`$). The stellar emissivity in the universe is found to peak at $`1\lesssim z\lesssim 2`$, dropping off steeply at lower redshifts and more slowly at higher redshifts. Indeed, Madau et al. have used observational data from the Hubble Deep Field to show that metal production has a similar redshift distribution, such production being a direct measure of the star formation rate. (See also Ref.) The co-moving radiation energy density $`u_\nu (z)`$ is the time integral of the co-moving emissivity $`\mathcal{E}_\nu (z)`$,

$$u_\nu (z)=\int _z^{z_{\mathrm{max}}}dz'\,\mathcal{E}_{\nu '}(z')\frac{dt}{dz}(z')\,e^{-\tau _{\mathrm{eff}}(\nu ,z,z')},$$ (4)

where $`\nu '=\nu (1+z')/(1+z)`$ and $`z_{\mathrm{max}}`$ is the redshift corresponding to initial galaxy formation. The extinction term $`e^{-\tau _{\mathrm{eff}}}`$ accounts for the absorption of ionizing photons by the clumpy intergalactic medium (IGM) that lies between the source and observer. Although the IGM is effectively transparent to non-ionizing photons, the absorption of photons by HI, HeI and HeII can be considerable.

### 4.2 The Gamma-Ray Opacity at High Redshifts

With the co-moving energy density $`u_\nu (z)`$ evaluated (SS98), the optical depth for $`\gamma `$-rays owing to electron-positron pair-production interactions with photons of the stellar radiation background can be determined from the expression

$$\tau (E_0,z_e)=c\int _0^{z_e}dz\,\frac{dt}{dz}\int _0^2dx\,\frac{x}{2}\int _0^{\mathrm{\infty }}d\nu \,(1+z)^3\left[\frac{u_\nu (z)}{h\nu }\right]\sigma _{\gamma \gamma }(s)$$ (5)

where $`s=2E_0h\nu x(1+z)`$, $`E_0`$ is the observed $`\gamma `$-ray energy at redshift zero, $`\nu `$ is the frequency at redshift $`z`$, $`z_e`$ is the redshift of the $`\gamma `$-ray source, $`x=(1-\mathrm{cos}\theta )`$, $`\theta `$ being the angle between the $`\gamma `$-ray and the soft background photon, $`h`$ is Planck's constant, and the pair-production cross section $`\sigma _{\gamma \gamma }`$ is zero for center-of-mass energy $`\sqrt{s}<2m_ec^2`$, $`m_e`$ being the electron mass. Above this threshold,

$$\sigma _{\gamma \gamma }(s)=\frac{3}{16}\sigma _\mathrm{T}(1-\beta ^2)\left[2\beta (\beta ^2-2)+(3-\beta ^4)\mathrm{ln}\left(\frac{1+\beta }{1-\beta }\right)\right],$$ (6)

where $`\beta =(1-4m_e^2c^4/s)^{1/2}`$. Figure 6 shows the opacity $`\tau (E_0,z)`$ for the energy range 10 to 500 GeV, calculated by SS98 both with and without a metallicity correction. Extinction of $`\gamma `$-rays is negligible below 10 GeV.
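The cross section of eq. (6) is simple to code, and doing so makes the threshold behavior explicit; the helper below works in units of the threshold value of s, an arbitrary convention of ours.

```python
import numpy as np

SIGMA_T = 6.6524e-25   # Thomson cross section in cm^2

def sigma_gg(s_norm):
    """Pair-production cross section of eq. (6).

    `s_norm` is s/(2 m_e c^2)^2, so the threshold sqrt(s) = 2 m_e c^2
    sits at s_norm = 1; the cross section vanishes below it.
    """
    s = np.asarray(s_norm, dtype=float)
    # beta^2 = 1 - 4 m_e^2 c^4 / s; clip to avoid the log singularity at beta = 1.
    beta2 = np.clip(1.0 - 1.0 / np.where(s > 0, s, 1.0), 0.0, 1.0 - 1e-15)
    beta = np.sqrt(beta2)
    sig = (3.0 / 16.0) * SIGMA_T * (1.0 - beta2) * (
        2.0 * beta * (beta2 - 2.0)
        + (3.0 - beta2 ** 2) * np.log((1.0 + beta) / (1.0 - beta))
    )
    return np.where(s > 1.0, sig, 0.0)

# The cross section peaks not far above threshold, which is why a gamma-ray
# of energy E interacts mainly with photons near the wavelength of eq. (1).
s = np.linspace(1.0, 10.0, 2001)
print(s[np.argmax(sigma_gg(s))])
```

Folding this cross section with the photon density $`u_\nu (z)/h\nu `$ over angle, frequency, and path length, as in eq. (5), then gives $`\tau (E_0,z_e)`$ by ordinary numerical quadrature.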
The weak redshift dependence of the opacity at the higher redshifts shown in Figure 6 indicates that the opacity is not very sensitive to the initial epoch of galaxy formation, contrary to the speculation of MacMinn and Primack . In fact, the uncertainty in the metallicity correction (see Figure 6) would obscure any dependence on $`z_{max}`$ even further. ### 4.3 The Effect of Absorption on the Spectra of Blazars and the Gamma-Ray Background With the $`\gamma `$-ray opacity $`\tau (E_0,z)`$ calculated out to $`z=3`$, the cutoffs in blazar $`\gamma `$-ray spectra caused by extragalactic pair production interactions with stellar photons can be predicted. The left graph in Figure 7, from Ref. (SS98), shows the effect of the intergalactic radiation background on a few of the grazars observed by EGRET, viz., 1633+382, 3C279, 3C273, and Mrk 421, assuming that the mean spectral indices obtained for these sources by EGRET extrapolate out to higher energies, attenuated only by intergalactic absorption. Observed cutoffs in grazar spectra may be intrinsic cutoffs in $`\gamma `$-ray production in the source, or may be caused by intrinsic $`\gamma `$-ray absorption within the source itself. The right hand graph in Figure 7 shows the background spectrum predicted from unresolved blazars , compared with the EGRET data . Note that the predicted spectrum steepens above 20 GeV, owing to extragalactic absorption by pair-production interactions with radiation from external galaxies, particularly at high redshifts. Above 10 GeV, blazars may have natural cutoffs in their source spectra, and intrinsic absorption may also be important in some sources . Thus, above 10 GeV the calculated background flux from unresolved blazars shown in Figure 7 may actually be an upper limit. Whether cutoffs in grazar spectra are primarily caused by intergalactic absorption can be determined by observing whether the grazar cutoff energies have the type of redshift dependence predicted here. ### 4.4 Constraints on Gamma-ray Bursts The discovery of optical and X-ray afterglows of $`\gamma `$-ray bursts and the identification of host galaxies with measured redshifts, e.g. , , , has led to the accumulation of evidence that these bursts are highly relativistic fireballs originating at cosmological distances and may be associated primarily with early star formation . As indicated in Figure 6, $`\gamma `$-rays above an energy of $`\sim `$ 15 GeV will be attenuated if they are emitted at a redshift of $`\sim `$ 3. On 17 February 1994, the EGRET telescope observed a $`\gamma `$-ray burst which contained a photon of energy $`\sim `$ 20 GeV . As an example, if one adopts the opacity results which include the metallicity correction, the highest energy photon in this burst would probably be constrained to have originated at a redshift less than $`\sim `$ 2. Future detectors such as GLAST (Ref. ; also Gehrels, these proceedings) may be able to place better redshift constraints on bursts observed at higher energies. Such constraints may further help to identify the host galaxies of $`\gamma `$-ray bursts. ## 5 Acknowledgment I wish to acknowledge that the work presented here was a result of extensive collaboration with O.C. De Jager, M.A. Malkan, and M.H. Salamon, as indicated in the references cited. I also wish to thank Okkie De Jager for helping with the manuscript.
# Chaos control in traffic flow models

Elman Mohammed Shahverdiev<sup>1</sup> and Shin-ichi Tadaki, Department of Information Science, Saga University, Saga 840, Japan; e-mail: elman@ai.is.saga-u.ac.jp, tadaki@ai.is.saga-u.ac.jp

<sup>1</sup> e-mail: shahverdiev@lan.ab.az; on leave from Institute of Physics, 33, H.Javid avenue, Baku 370143, Azerbaijan

Nowadays traffic flow problems have acquired interdisciplinary status for the following reasons: first, methods from different branches of science, such as hydrodynamics, the theory of magnetism, cybernetics, etc., are applied to the investigation of traffic flow models; second, the results of investigations of traffic problems with different approaches could be adequate in various scientific directions, see, e.g., refs. 1-9. The mean field approach is one of the most widely used approaches in traffic problems (for some publications see refs. 8-10 and references therein). It is well known that cellular automaton (CA) models nowadays have extensive applications to traffic flow models. In this paper we dwell on just two models from the mean field theory: one- and two-dimensional systems. First we will consider the one-dimensional model. Recently, the authors of ref. 8 presented microscopic derivations of mean field theories for CA models of traffic flow in one dimension. They established the following mapping between the average velocities of cars $`v`$ at times t+1 and t: $$v(t+1)=(2-f)v(t)-(1-p)^{-1}v^2(t)+p(1-p)^{-1}v^3(t),(1)$$ where $`p`$ is the car density and $`f`$ is the quantity responsible for the random delay due to, say, different driving habits and road conditions. So one has a discrete nonlinear dynamical system, which exhibits rich dynamical behavior (see below, after the presentation of the two-dimensional traffic model). Not so long ago Biham et al. (ref. 2) (below simply BML) introduced a simple two-dimensional (2D) CA model with traffic lights and studied the average velocity of cars as a function of their density. In that model, cars moving from west to east attempt to move in odd time steps, and cars moving from south to north in even time steps. There are three possible states on the square lattice: (i) occupied by an eastbound car; (ii) occupied by a northbound car; (iii) vacant. In ref. 8 it was underlined that the division of time into odd and even time steps simulates the effect of traffic lights. The main result of BML is that the average velocity in the long time limit vanishes when the density of cars $`p`$ is higher than a critical value $`p_c`$. Below $`p_c`$ the traffic is in a moving phase, while above $`p_c`$ it is in a jamming phase. Numerous improvements of the basic BML model, taking into account the effects of factors such as overpasses, faulty traffic lights, asymmetric distribution of cars in a homogeneous lattice, and traffic accidents (see ref. 8 and references therein), have been carried out. In a very recent paper (ref. 8) an improved mean field theory for 2D traffic flow with a fraction $`c`$ of overpass sites and with possible asymmetry in the distributions of cars in the two different directions is studied. The model is in fact an improved version of the Nagatani model, ref. 11.
The overpass sites can be occupied simultaneously by an eastbound car and a northbound car, thus modelling the two-level overpasses in modern road systems in cities. The Nagatani model deals with the isotropic distribution of cars with different overpass sites. It has been shown that the addition of overpasses enhances both the average speed of the traffic and the critical density of cars. However, the Nagatani model has some shortcomings, in the sense that the blockage of cars by cars moving in the same direction is not taken into account properly, which led to too low an estimate of the concentration of overpasses for the transition from the jamming to the moving phase at $`p=1`$ and too high an estimate of the critical car density at $`c=0`$. Although in this paper we will deal with isotropic distributions of cars, for the sake of completeness we first write the system of equations with asymmetry. Let $`p_x`$ and $`p_y`$ be the densities of cars in the $`x`$ (eastbound) and $`y`$ (northbound) directions, respectively. Also let $`v_x`$ and $`v_y`$ be the average speeds of cars in the same directions, and let $`c`$ be the fraction of overpass sites. Then according to ref. 8 these quantities are related through the following nonlinear dynamical equations: $$v_x=1-(1-c)(p_yv_y^{-1}+p_x(v_x^{-1}-1))$$ $$v_y=1-(1-c)(p_xv_x^{-1}+p_y(v_y^{-1}-1)),(2)$$ From the mathematical point of view the system (2) is also a nonlinear dynamical system. It is well known that some dynamical systems, depending on the values of the systems' parameters, exhibit unpredictable, chaotic behaviour, refs. 10-15. The seminal papers (refs. 12-13) induced an avalanche of research work on the theory of control of chaos in synergetics. Chaos synchronization in dynamical systems is one such way of controlling chaos. In the spirit of refs. 12-13, by synchronization of two systems we mean that the trajectories of one of the systems will converge to the same values as the other's and they will remain in step with each other. For chaotic systems synchronization is performed by linking the chaotic systems with a common signal or signals (the so-called drivers). According to refs. 12-13, in the above-mentioned way of chaos control one or some of these state variables can be used as an input to drive a subsystem that is a replica of part of the original system. In refs. 12-13 it was shown that if all the Lyapunov exponents (or the largest Lyapunov exponent) or the real parts of these exponents for the subsystem are negative, then the subsystem synchronizes to the chaotic evolution of the original system. If the largest subsystem Lyapunov exponent is not negative, then, as proved in ref. 18, synchronism is also possible. In this case a nonreplica system constructed according to ref. 18 is used instead of the replica subsystem. The interest in chaos synchronization is in part due to the applications of this phenomenon in secure communications, in the modeling of brain activity and recognition processes, etc. (see references in ref. 17). We should also mention that this method of chaos control may result in improved performance (according to some criterion) of chaotic systems (see, e.g., ref. 17). In this paper, for the first time to our knowledge, we report on possible chaos control in traffic flow models. We will act within the algorithm proposed in ref. 18. Our paper is dedicated to the study of the stabilization of unstable behaviors in the one-dimensional model and to the application of both replica and nonreplica approaches to chaos control in the 2D traffic model.
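Before turning to the analysis, it may help to see the one-dimensional mapping (1) in executable form. The following minimal sketch is ours, not taken from ref. 8; the parameter values are illustrative, and the closed form of the fixed point is the one given as Eq. (3) below:

```python
import numpy as np

def v_next(v, p, f):
    # One iteration of the mean-field velocity map, Eq. (1)
    return (2.0 - f) * v - v**2 / (1.0 - p) + p * v**3 / (1.0 - p)

def v2_st(p, f):
    # Nontrivial stationary state, Eq. (3) below
    return (1.0 - np.sqrt(1.0 - 4.0 * (1.0 - f) * p * (1.0 - p))) / (2.0 * p)

p, f = 0.3, 0.5                 # illustrative values (assumed, not from ref. 8)
v = 0.9
for t in range(200):
    v = v_next(v, p, f)

v_st = v2_st(p, f)
eps = 1e-6                      # slope h of the map at the fixed point
h = (v_next(v_st + eps, p, f) - v_next(v_st - eps, p, f)) / (2.0 * eps)
print("iterated v:", v, " v2_st:", v_st, " slope h:", h)
```

For these values the slope has magnitude below unity and the iteration settles onto the nontrivial stationary state; the trivial state $`v_1^{st}=0`$, by contrast, is always unstable, and unstable states are exactly the targets of the stabilization method described next.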
First we will investigate the one-dimensional model. The stationary values of the average velocity can easily be calculated from equation (1). These values are: $$v_1^{st}=0,v_2^{st}=\frac{1-(1-4(1-f)p(1-p))^{\frac{1}{2}}}{2p},(3)$$ The stability analysis of these stationary states shows that $`v_1^{st}`$ is always unstable, except for $`p=1`$. Indeed, for this state $$\left|\frac{v(t+1)}{v(t)}\right|=|2-f|>1,(4)$$ as $`f`$ changes between zero and unity. The instability condition for the second stationary state is: $$\left|(1-p)^{-1}\frac{3(1-4(1-f)p(1-p))-1-2(1-4(1-f)p(1-p))^{\frac{1}{2}}}{4p}+2-f\right|>1,(5)$$ As the stability analysis shows, in general we have stable and unstable states depending on the values of $`p,f`$. As a rule the unstable states are discarded as unphysical ones. But nowadays, due to the success of chaos control theory, it is possible to stabilize unstable fixed points or periodic orbits. Below, in dealing with the control of the instability of fixed points in the one-dimensional map, we will follow the so-called proportional feedback method described in ref. 19, which is a map-based variation of a method proposed in ref. 12. Following the method presented in ref. 19, we first linearize the one-dimensional map (1) in the vicinity of the fixed points (or stationary states $`v^{st}`$): $$v(t+1)=h(v(t)-v^{st})+v^{st},(6)$$ where $`|h|>1`$ is the slope of the map at $`v^{st}`$. In the mapping (1) we have only one parameter, $`f`$, by changing which one can stabilize the unstable fixed points. The positive answer to this problem vindicates the intuition that by improving driving habits and road conditions one can solve some traffic difficulties. Of course, theoretically the regularization of traffic flow could also be achieved by manipulation of $`p`$. Having this in mind, let us denote the parameters as m=(f,p). Now suppose that we change this parameter $`m`$ by a small amount $`\delta m`$ to move the unstable fixed point $`v^{st}`$ without significantly changing the slope $`h`$ of the map (1). In other words, $$v_{t+1}(m+\delta m)=h(v_t-v^{st}(m+\delta m))+v^{st}(m+\delta m),(7)$$ (in order to avoid confusion, where necessary we write $`t`$ as a subscript) where $$v^{st}(m+\delta m)=\delta m\frac{dv^{st}}{dm}+v^{st},(8)$$ Now suppose that $`v_t=v^{st}(m)+\delta _1v`$, where the second term on the right-hand side of this equality is much smaller than the first one. If at this moment $`m`$ is changed to $`m+\delta m`$ such that $`v_{t+1}(m+\delta m)=v^{st}(m)`$, the system state is directed to the original unstable fixed point upon the next iteration. If $`m`$ is then switched back to its original value, the system will remain at $`v^{st}`$ indefinitely. The necessary variation of $`m`$ can easily be determined from the formula $$\delta m=\frac{h}{(h-1)\frac{dv^{st}}{dm}}\delta _1v=\frac{\delta _1v}{g},(9)$$ One can easily see that the change of parameters necessary to stabilize the unstable fixed points is proportional to the deviation of $`v`$ from the fixed point (or stationary state). That is why the method is called the proportional-feedback one. So using the proportional feedback method allows one to stabilize unstable stationary states. Now we will study the two-dimensional case. Below, as was underlined above, we restrict ourselves to the isotropic case. The system of equations (2) can be regarded as a mapping describing the time evolution of the velocity in the moving phase, with $`v_x(t+1)`$ and $`v_y(t+1)`$ on the left-hand side and $`v_x(t)`$, $`v_y(t)`$ on the right-hand side.
In the case of isotropic distribution this mapping can be written as: $$v_x(t+1)=1-(1-c)(\frac{p}{2}v_y^{-1}+\frac{p}{2}(v_x^{-1}-1))=F_1,$$ $$v_y(t+1)=1-(1-c)(\frac{p}{2}v_x^{-1}+\frac{p}{2}(v_y^{-1}-1))=F_2,(10)$$ where $`p_x=p_y=\frac{p}{2}`$. The system of nonlinear mappings has two steady state solutions $$v_\pm =\frac{1}{2}\left(1+\frac{(1-c)p}{2}\pm \left(\left(1+\frac{(1-c)p}{2}\right)^2-4(1-c)p\right)^{\frac{1}{2}}\right),(11)$$ where $`v=v_x=v_y`$. First of all we should find the condition for possible chaoticity in the system (10). The stability of the mapping is determined by the eigenvalues of the Jacobian matrix of the nonlinear mapping (10), $$J=\frac{\partial (F_1,F_2)}{\partial (v_x(t),v_y(t))},(12)$$ It can easily be seen that the eigenvalues of the Jacobian matrix are calculated from the following equation: $$\lambda ^2-\lambda (1-c)\frac{p}{v^2}=0,(13)$$ From here we easily obtain that $$\lambda _1=0,\lambda _2=(1-c)\frac{p}{v^2},(14)$$ In the last expression, while calculating $`\lambda `$ we use the steady state solutions (11). This simplification is justified at least for systems whose chaotic behavior has arisen out of the loss of stability of fixed points; see ref. 15, also refs. 20-21. The mapping will exhibit chaotic behaviour if the absolute value of $`\lambda `$ exceeds unity, $$|\lambda _2|>1,(15)$$ As the initial or original nonlinear mapping is symmetric in $`v_x`$ and $`v_y`$, considering only one of these variables as a driver will be sufficient. Take for definiteness the $`v_x`$ variable as the driver. Then the replica subsystem (with the superscript ”r”) can be written as follows: $$v_y^r(t+1)=1-(1-c)(\frac{p}{2v_x(t)}+\frac{p}{2}(\frac{1}{v_y^r(t)}-1))=H,(16)$$ Then the Lyapunov exponent can be calculated as follows, ref. 18: $$\mathrm{\Lambda }=\mathrm{ln}\left|\frac{\partial H}{\partial v_y^r}\right|=\mathrm{ln}\left[(1-c)\frac{p}{2}\frac{1}{v^2}\right],(17)$$ For chaos control (to be more specific, for the synchronization of the evolution of the response system to the chaotic evolution of the initial nonlinear mapping when time goes to infinity) it is required that $$\mathrm{\Lambda }<0,(18)$$ This will take place if the following condition is satisfied: $$(1-c)\frac{p}{2v^2}<1,(19)$$ Thus we have two inequalities, (19) and (15). If these inequalities do not contradict each other, then the replica approach allows us to perform chaos control. We see that chaos control within the replica approach is realizable if $$1<(1-c)p\frac{1}{v^2}<2,(20)$$ Although the restriction (20) is very severe, in the sense that the range of variation of the traffic flow model parameters $`c`$, $`p`$ could be very narrow, the analysis of the data presented in ref. 8 nevertheless shows that chaos synchronization would be possible within the replica approach. If this approach fails, we can apply the nonreplica one to achieve our goal, as was underlined above. Now suppose that our attempts to perform chaos control within the replica approach have failed. In this case we can try the nonreplica approach. According to ref. 18, within the nonreplica approach we can use the following nonreplica response system (with the superscript ”nr”): $$v_x^{nr}(t+1)=1-(1-c)(\frac{p}{2}(v_y^{nr})^{-1}+\frac{p}{2}(v_x^{-1}-1))+\alpha (v_x^{nr}-v_x)=F_3,$$ $$v_y^{nr}(t+1)=1-(1-c)(\frac{p}{2}v_x^{-1}+\frac{p}{2}((v_y^{nr})^{-1}-1))+\beta (v_x^{nr}-v_x)=F_4,(21)$$ where $`\alpha `$ and $`\beta `$ are arbitrary constants. Here again, as in the previous case, we consider the $`v_x`$ dynamical variable as the driver.
The Lyapunov exponents are the eigenvalues of the Jacobian $$J=\frac{\partial (F_3,F_4)}{\partial (v_x^{nr}(t),v_y^{nr}(t))},(22)$$ From (21) we easily establish that these exponents are solutions of the following equation: $$\lambda ^2-\lambda \left(\alpha +(1-c)\frac{p}{2}v^{-2}\right)-\beta (1-c)\frac{p}{2}v^{-2}=0,(23)$$ Here $`v`$ is the steady state solution of the original mapping. As can be seen from (23), the roots $`\lambda _1`$ and $`\lambda _2`$ of this equation satisfy the relationships: $$\lambda _1+\lambda _2=\alpha +(1-c)\frac{p}{2}v^{-2},$$ $$\lambda _1\lambda _2=-\beta (1-c)\frac{p}{2}v^{-2},(24)$$ Recall that our aim is to satisfy the conditions $`|\lambda _1|<1`$ and $`|\lambda _2|<1`$. Due to the arbitrariness of the constants $`\alpha `$ and $`\beta `$ this can be done easily. Up to now, while performing chaos synchronization, we have taken advantage of the explicit presence of the driving variables. Our calculations show that chaos synchronization is also reachable in the case of the parameter perturbation method, ref. 22. Namely, we have shown that by changing the fraction of overpasses one can make the absolute values of $`\lambda _{1,2}`$ less than unity. Indeed, by assuming the following change for $`c`$: $$c=c_1-\alpha _c(v_y-v_{y_{ap}}),(25)$$ (where $`c_1`$ is the nominal value of $`c`$ in the original two-dimensional model, $`\alpha _c`$ is the control coefficient to be found, and $`v_y`$ and $`v_{y_{ap}}`$ are the response-system and drive-system orbits, respectively), after lengthy calculations we obtain that for the isotropic case the much sought eigenvalues are: $$\lambda _1=0,$$ $$\lambda _2=2(1-c_1)\frac{p}{2}v_{ap}^{-2}+\frac{p}{2}\alpha _c(1-2v_{ap}^{-1}),(26)$$ Thus we have a real possibility to satisfy the condition for chaos control, $`|\lambda _2|<1`$. Now let us make some estimations based on our approach. According to ref. 8, for the fraction of overpasses $`c=0.5`$ the critical value of the car density at which a jam occurs is $$p_{cr}=\frac{6-\sqrt{32}}{1-c}=0.686,(27)$$ Then for $`p>p_{cr}`$ the jam phase takes place, with v close to zero. It means that for these values of $`c,p`$ the instability conditions hold. Now we will use formulae (25) and (26) to resolve the jamming problem by changing $`c`$. Of course, while conducting the calculations we should keep in mind that the maximal values of $`c,v`$ are unity and that the jamming phase could be avoided by increasing $`c`$. From the condition $`|\lambda _2|<1`$ (formula (26)), for the extreme case $`v_{y_{ap}}=1`$ we obtain that $`-1.9<\alpha _c<3.9`$. Take for definiteness $`\alpha _c=1.5`$. Further, having in mind that when synchronization takes place $`v_y`$ is very close to $`v_{y_{ap}}`$, and therefore assuming $`v_y-v_{y_{ap}}=-0.1`$, from equation (25) we derive a new value for the fraction of overpasses capable of resolving the traffic jamming: $`c=0.65`$, which is very close to the value $`c=0.657`$ at which there is no jamming at all, that is, $`p_{cr}=1`$ (ref. 8). Of course much depends on the degree of accuracy of the synchronization between $`v_y`$ and $`v_{y_{ap}}`$. But in any case we can safely say that our results do not contradict the conventional wisdom and give the right direction of action. In conclusion, in this work we pointed out the possibility of the stabilization of the unstable stationary states in a one-dimensional model with random delay.
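As a numerical companion to the formulae above, the sketch below (ours; the values of $`c`$ and $`p`$ are illustrative) evaluates the steady states (11), the nonzero eigenvalue (14) and the replica exponent (17), and reproduces the critical density (27) from the requirement that the steady states remain real:

```python
import numpy as np

def fixed_points(c, p):
    # Steady states of the isotropic mapping, Eq. (11)
    a = 1.0 + (1.0 - c) * p / 2.0
    disc = a**2 - 4.0 * (1.0 - c) * p
    return (a + np.sqrt(disc)) / 2.0, (a - np.sqrt(disc)) / 2.0

c, p = 0.5, 0.6                       # illustrative moving-phase values
v_plus, _ = fixed_points(c, p)
lam2 = (1.0 - c) * p / v_plus**2      # nonzero eigenvalue, Eq. (14)
Lam = np.log((1.0 - c) * p / (2.0 * v_plus**2))  # replica exponent, Eq. (17)
print("v+ =", v_plus, " lambda_2 =", lam2, " Lambda =", Lam)

# The fixed points stay real while the discriminant is non-negative, i.e.
# while x = (1 - c) * p lies below the smaller root of x**2 - 12x + 4 = 0,
# which reproduces Eq. (27):
x_cr = 6.0 - np.sqrt(32.0)
print("p_cr =", x_cr / (1.0 - c))     # ~0.686 for c = 0.5, as quoted above
```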
Also we have investigated the possibility of chaos control in a two-dimensional mapping in traffic flow within the replica, nonreplica and parameter change approaches. As we indicated above, the nonreplica approach has advantages over the replica approach. One of them is the possibility to make the Lyapunov exponents not only negative, but also larger in magnitude. This is very important from the application point of view, as the time required to achieve synchronization depends on the value of the largest Lyapunov exponent, refs. 1-16, 18. Speaking about the study of 2D CA traffic flow models, one has to mention that although 2D CA models are less representative of real traffic flow (in comparison with the 1D rule-184 CA), they may be applicable to abstract traffic problems such as data packets in computer networks, ref. 23. Besides, 2D CA models may be useful from the viewpoint of complex behavior in deterministic dynamics, refs. 24-26. Acknowledgments E.M.Shahverdiev thanks JSPS for the Fellowship.
# Density and expansion effects on pion spectra in relativistic heavy-ion collisions

## Figure Captions

Fig. 1: Systematics obtained by varying $`(𝖺)`$ the fireball’s radius $`R`$, $`(𝖻)`$ the temperature $`T`$ and $`(𝖼)`$ the surface expansion velocity $`\beta `$ for rapidity $`y_{cm}=0`$. $`(𝖽)`$ Distribution for a rapidity $`y_{lab}=3.0`$. Fig. 2: $`(𝖺)`$ Theoretical distribution $`(2\pi m_t)^{-1}d^2N/dm_tdy`$ computed with the parameters $`T=120`$ MeV, $`\beta =0.5`$, $`N_{\pi ^{}}=160`$ and $`y_{cm}=0`$, compared to data from the E-802/866 on mid-rapidity negative pions from central Au+Au reactions at $`11.6A`$ GeV/c. The theoretical curve has been multiplied by the constant $`𝒩=0.56`$ that minimizes the $`\chi ^2`$ when compared to data above $`m_t-m=0.4`$ GeV. $`(𝖻)`$ Distribution computed with the same parameters but with $`N_{\pi ^+}=115`$, compared to data on mid-rapidity positive pions from the same reaction. The theoretical curve has been multiplied by the constant $`𝒩=0.59`$ that minimizes the $`\chi ^2`$ when compared to data above $`m_t-m=0.4`$ GeV. Data are $`(2\pi m_t\sigma _{trig})^{-1}d^2\sigma /dm_tdy`$ in the rapidity interval $`0<\mathrm{\Delta }y<0.2`$. The total measured yield spans the rapidity interval $`|\mathrm{\Delta }y|<1`$ around central rapidity .
# R-Process Abundances and Cosmochronometers in Old Metal-Poor Halo Stars

B. Pfeiffer <sup>1</sup>, K.-L. Kratz <sup>1</sup>, F.-K. Thielemann <sup>2</sup>, J.J. Cowan <sup>3</sup>, C. Sneden <sup>4</sup>, S. Burles <sup>5,6</sup>, D. Tytler <sup>5</sup>, and T.C. Beers <sup>7</sup>

<sup>1</sup> Institut für Kernchemie, Universität Mainz, Mainz, Germany <sup>2</sup> Institut für Theoretische Physik, Universität Basel, Basel, Switzerland <sup>3</sup> Department of Physics and Astronomy, University of Oklahoma, USA <sup>4</sup> Department of Astronomy and McDonald Observatory, University of Texas, Austin, USA <sup>5</sup> Department of Physics and Center for Astrophysics and Space Sciences, University of California, San Diego, USA <sup>6</sup> Department of Astronomy and Astrophysics, University of Chicago, Chicago, USA <sup>7</sup> Department of Physics and Astronomy, Michigan State University, East Lansing, USA

Already 30 years ago, Seeger et al. expressed the idea that the solar-system r-process isotopic abundance distribution (N<sub>r,⊙</sub>) is composed of several components. But only on the basis of new experimental and modern theoretical nuclear-physics input could Kratz et al. demonstrate that, within the so-called waiting-point approximation, a minimum of three components (showing a steady flow of $`\beta `$-decays between magic neutron numbers) can give a reasonable fit to the whole N<sub>r,⊙</sub>. A somewhat better fit can be obtained by a more continuous superposition of exponentially declining neutron number densities . Correspondingly, the s-process shows a steady flow of neutron captures in between magic neutron numbers, and a good fit is achieved when taking an exponentially declining superposition of exposures. Analyzing the s-process with present-day, almost perfectly known nuclear data (such as neutron-capture and $`\beta `$-decay rates), Goriely recently reproduced this exponential exposure with his “multi-event” model. As there is practically no experimental nuclear input in the r-process, we prefer to apply the waiting-point approximation, based on a smooth physical behaviour in an exponential model, rather than obtain spurious results that are driven merely by achieving a better fit with the wrong physics. Deficiencies in calculated N<sub>r,⊙</sub>-abundances (even using the most recent macroscopic-microscopic mass models FRDM and ETFSI) were attributed to an incorrect trend in neutron separation energies when approaching magic neutron numbers far from stability . The weakening of shell strength near the neutron drip line predicted from astrophysical requirements was recently also obtained by Hartree-Fock-Bogolyubov (HFB) mass calculations with the Skyrme-P force . And indeed, new spectroscopic studies of very neutron-rich Cd isotopes at CERN/ISOLDE have revealed first experimental evidence for a quenching of the N=82 major shell below <sup>132</sup>Sn . Applying these HFB masses around the magic neutron numbers resulted in an eradication of the abundance troughs . As large-scale HFB calculations for deformed nuclear shapes are not yet available, Pearson et al. modified their ETFSI mass model to asymptotically approach the HFB masses at the drip lines. The N<sub>r,⊙</sub>-abundances calculated with these ETFSI-Q masses are shown in Fig. 1 to give a good fit over the whole range of stable r-process isotopes. This gives confidence to extrapolate the calculations to the unstable actinide isotopes.
The abundances prior to $`\alpha `$\- and $`\beta `$-decay are displayed in Fig. 1 as a dashed line, and the final abundances after decay as a solid line. The good reproduction of the Tl-Pb region (the end products of the $`\alpha `$-decay chains) leads us to conclude that estimates of the initial abundances of the long-lived isotopes <sup>232</sup>Th and <sup>235,238</sup>U, which are applied as cosmochronometers, can be taken from our r-process model. Recently, stellar abundances of neutron-capture elements (beyond iron) have been determined over a wide Z-range in the very metal-poor Galactic halo star CS22892-052 . After adjustment to solar metallicity, the values are consistent with the global solar-system r-process abundances as well as with our predictions (see Fig. 2). From this agreement, we concluded that the heavy elements in this star are of pure r-origin, and that from the comparison of the observed and calculated Th/Eu abundance ratios an age estimate for the heavy elements of about 13 Gyr can be derived . This indicates that r-synthesis started early in Galactic evolution and that there might be a unique r-process scenario (at least beyond Z $`\geq `$ 50). As, evidently, one single star cannot stand for the whole low-metallicity end of the Galactic halo, further measurements are needed, not only to investigate other stars over a range of metallicities, but essentially to detect additional elements, especially the 3<sup>rd</sup>-peak elements (Os, Pt, Pb) close to Th and U. These elements have absorption lines in the UV, so they are best observed from space. Therefore, spectra for three K giant stars (HD115444, HD122563, HD126238) were measured with the Goddard High Resolution Spectrograph on the Hubble Space Telescope . Additional high-resolution spectra were recorded with the High Resolution Echelle Spectrometer (HIRES) at the Keck I telescope for the star HD115444, in particular to separate the Th absorption line clearly from a blending <sup>13</sup>CH molecular line . The results of these observations are summarized in Fig. 2 as filled squares, together with ground-based results (open squares). After proper renormalization, the observed neutron-capture elements in the four stars displayed overlap perfectly with our theoretical r-process curve (solid line) and the solar-system distribution (dashed line). In addition to the observation of Th in CS22892-052 mentioned above, the new measurements yielded a firm value for HD115444 as well as an upper limit for HD122563. In the case of the second chronometer, U, only an upper limit could be obtained for HD115444. The measured Th/Eu ratios combined with our calculated zero-age value allow us to derive an estimate for the decay age of T=(13 $`\pm `$ 4) Gyr, where the uncertainty takes into account only the counting statistics (a sketch of the underlying decay arithmetic is given below). This value represents a lower limit for the age of the Galaxy and is in line with a variety of recent age estimates for the Universe. To summarize, the reproduction of the r-process component of the solar abundances in the framework of the “waiting-point approximation”, applying nuclear input data calculated from a macroscopic-microscopic mass model with Bogolyubov-enhanced shell “quenching” (ETFSI-Q), gives confidence in extrapolations beyond the stable isotopes to the actinide r-process cosmochronometers. The observation of “solar” neutron-capture element abundance distributions in four metal-poor halo stars points to a unique r-process site in the Galaxy (at least for Z $`\geq `$ 56).
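The decay arithmetic behind such a chronometric age is a one-line application of the exponential decay law. The sketch below is our illustration only: the production and observed Th/Eu ratios are assumed round numbers, not the values derived in the text or in the cited observations.

```python
import numpy as np

# Radioactive-decay age from an observed Th/Eu ratio (Eu is stable, so the
# ratio decays with the 232Th half-life).
t_half_Th232 = 14.05          # Gyr, half-life of 232Th
prod_ratio = 0.48             # assumed zero-age production ratio (Th/Eu)_0
obs_ratio = 0.25              # assumed observed (Th/Eu) in the star

age = t_half_Th232 / np.log(2.0) * np.log(prod_ratio / obs_ratio)
print("decay age: %.1f Gyr" % age)   # ~13 Gyr for these assumed inputs
```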
This further strengthens our objections to the conclusions of Goriely and Arnould that a series of several non-solar isotopic abundance distributions might produce a total elemental abundance pattern that fortuitously matches some of the neutron-capture elements in one low-metallicity star. Although the observation of a solar r-process elemental pattern is not an absolute proof of isotopic solar r-process abundances, it is nevertheless the most reasonable and probable conclusion.
# Critical behavior of a traffic flow model

## I Introduction

Real traffic displays, with increasing car density, a transition from free flow traffic to congested traffic. This breakdown of free flow traffic is accompanied by the occurrence of traffic jams which are not attributable to any external cause but only to distance fluctuations between following vehicles within very dense and unstable traffic (see for instance ). These traffic jams find expression in shock waves, i.e., in backward moving density fluctuations. This property of jams was already found during the 60’s in traffic observations . Beyond these early experimental investigations, recently performed measurements of real highway traffic lead to the conjecture that jams are characterized by several independent parameters, e.g., the jam velocity, mean flux, etc. . Since the seminal work of Lighthill and Whitham in the middle of the 50’s, many attempts have been made to construct more and more sophisticated models which incorporate the various phenomena occurring in real traffic (for an overview see ). A few years ago Nagel and Schreckenberg introduced a cellular automata model which simulates single-lane one-way traffic and which is able to reproduce the main features of traffic flow, backward moving shock waves and the so-called fundamental diagram (see and references therein). Investigations of the Nagel-Schreckenberg traffic flow model show that, crossing a critical point, a transition takes place from a homogeneous regime (free flow phase) to an inhomogeneous regime which is characterized by a coexistence of the free flow phase and the jammed phase . Thereby, the free flow phase is characterized by a low local density and the jammed phase by a high local density, respectively. Due to the particle conservation of the model, the transition is realized by the system separating into a low density region and a high density region . Despite several attempts, it is still an open question whether the transition can be described as a critical phenomenon (see for instance ). An attempt to investigate the occurrence of jams was made by Vilar and Souza, who introduced an order parameter which equals the number of standing cars. Without noise this quantity vanishes continuously below the transition. But including noise, the number of standing cars is finite for any nonzero density, i.e., the average number of standing cars does not vanish below the critical density. The crucial point is that below the transition a particle may stop due to the noisy slowing down rule of the dynamics, but this behavior does not coincide with backward moving density fluctuations and algebraically decaying correlation functions, which would indicate a critical behavior of the system. At present, a convincing definition of an order parameter which tends to zero below the critical density is not known. Again, it is not even clear whether the Nagel-Schreckenberg traffic flow model displays criticality at all. We use the dynamical structure factor both to make predictions about the properties of the different phases observed in the Nagel-Schreckenberg model and to investigate the question whether the occurrence of jams can be described as a phase transition or not. The dynamical structure factor, which is closely related to the correlation function, is an appropriate tool to do this because it naturally distinguishes between the two phases characterized by positive and negative velocities (free traffic flow and backward moving shock waves).
Thus the advantage of our method of analysis is that the properties of both phases can be examined without the necessity of defining a car to be jammed or not. The paper is organized as follows: In Sec. II we briefly describe our analysis of the dynamical structure factor and recall the main results which were published recently . In Sec. III we extend this method of analysis to the so-called velocity-particle space and address the question whether the transition from the free flow regime to the phase coexisting regime (where the system separates into the jammed and free flow phase) can be considered as a phase transition. Basing on the dynamical structure factor, our results suggest that a continuous phase transition takes place.

## II Model and Simulations

The Nagel-Schreckenberg traffic flow model is based on a one-dimensional cellular automaton of linear size $`L`$ and $`N`$ particles. Integer values describing the position $`r_n\in \{1,2,\mathrm{},L\}`$, the velocity $`v_n\in \{0,1,\mathrm{},v_{\mathrm{max}}\}`$ and the gap $`g_n`$ to the forward neighbor are associated with each particle. For each particle, the following update steps representing the acceleration, the slowing down, the noise, and the motion of the particles are done in parallel: (1) if $`v_n<g_n`$ the velocity is increased with respect to the maximal velocity, $`v_n\to \mathrm{Min}\{v_n+1,v_{\mathrm{max}}\}`$, (2) to avoid crashes the velocity is decreased, $`v_n\to g_n`$, if $`v_n>g_n`$, (3) if $`v_n>0`$ the velocity is decreased, $`v_n\to v_n-1`$, with probability $`P`$ in order to allow fluctuations, and finally (4) the motion of the cars is given by $`r_n\to r_n+v_n`$. Thus the behavior of the model is determined by three parameters: the maximal velocity $`v_{\mathrm{max}}`$, the noise parameter $`P`$ and the global density of cars $`\rho =N/L`$, where $`N`$ denotes the total number of cars and $`L`$ the system size, which is chosen to be $`L=32768`$ throughout the whole paper. Our analysis is based on the dynamical structure factor, whose definition is as follows: Consider the occupation function $$\eta _{r,t}=\begin{cases}1&\text{if cell }r\text{ is occupied at time }t\\ 0&\text{otherwise.}\end{cases}$$ (1) The evolution of $`\eta _{r,t}`$ leads directly to the space-time diagram where the propagation of the particles can be visualized (see for instance Fig. 2 in ). The dynamical structure factor $`S(k,\omega )`$ is then given by $$S(k,\omega )=\frac{1}{lT}\left|\sum _{r,t}\eta _{r,t}e^{i(kr-\omega t)}\right|^2,$$ (2) where the Fourier transform is taken over a finite rectangle of the space-time diagram of size $`l\times T`$, i.e., $`r`$ and $`t`$ are integers ranging from $`1`$ to $`l`$ and $`1`$ to $`T`$, respectively. Then $`k`$ and $`\omega `$ are also discrete values, $`k=2\pi m_k/l`$ and $`\omega =2\pi m_\omega /T`$ with $`m_k\in \{0,1,2,\mathrm{},l-1\}`$ and $`m_\omega \in \{0,1,2,\mathrm{},T-1\}`$, respectively. The dynamical structure factor $`S(k,\omega )`$ is related to the Fourier transform of the real space density-density correlation function $`C(r,t)`$, and compared to the analysis of the steady state structure factor and the related steady state correlation function it contains both the spatial and the temporal evolution of the system. Figure 1 shows the dynamical structure factor both below and above the transition. Below the transition $`S(k,\omega )`$ exhibits one mode formed by the ridges.
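Before characterizing these modes, it may help to see the four update rules in executable form. The following is a minimal sketch of one parallel sweep (our illustration, not the authors' code; the array layout, seed and parameter values are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def nasch_step(pos, vel, L, v_max, P):
    """One parallel update of the Nagel-Schreckenberg model.

    pos, vel: arrays of car positions and velocities, indexed in ring order
    on a periodic road of L cells.
    """
    n = len(pos)
    gap = (np.roll(pos, -1) - pos - 1) % L  # empty cells to the car ahead
    vel = np.minimum(vel + 1, v_max)        # (1) acceleration
    vel = np.minimum(vel, gap)              # (2) slowing down, avoids crashes
    brake = (rng.random(n) < P) & (vel > 0) # (3) random braking with prob. P
    vel[brake] -= 1
    pos = (pos + vel) % L                   # (4) motion
    return pos, vel

# small demonstration run
L, N, v_max, P = 1000, 100, 5, 0.3
pos = np.sort(rng.choice(L, size=N, replace=False))
vel = np.zeros(N, dtype=int)
for t in range(500):
    pos, vel = nasch_step(pos, vel, L, v_max, P)
print("mean velocity:", vel.mean())
```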
This mode is characterized by a positive slope $`v=\omega /k`$ corresponding to the positive velocity $`v_\mathrm{f}`$ of the particles in the free flow phase. Increasing the global density, a second mode appears at the transition to the coexistence regime. This second mode exhibits a negative slope ($`v_\mathrm{j}<0`$), indicating that it corresponds to the backward moving density fluctuations in the jammed phase. Due to the sign of their characteristic velocities $`v_\mathrm{f}`$ and $`v_\mathrm{j}`$, both phases can clearly be distinguished. Recently performed investigations showed that the characteristic velocity of the free flow phase equals the velocity ($`v_\mathrm{f}=v_{\mathrm{max}}-P`$) of free flowing cars in the low density limit ($`\rho \to 0`$), i.e., cars of the free flow phase behave as independent particles for all densities . The velocity of the jams depends neither on the global density nor on the maximum velocity, i.e., it is a function of the noise parameter only . This result coincides with those of recently published investigations , which were obtained from a variation analysis of a multi-point-autocorrelation function. The jam velocity is, besides the maximum flow, the fluxes inside and outside of the jam and the average vehicle speeds (see for instance and references therein), a characteristic parameter of real traffic, and its knowledge is therefore necessary to calibrate any traffic model to the conditions observed in real traffic.

## III The jammed phase

In the last section we briefly described that the analysis of the dynamical structure factor allows one to determine the characteristic velocities of the jammed and the free flow phase. In this section we are interested in the correlations occurring within the jammed phase. Therefore it is necessary to analyse the jammed mode. Unfortunately, the superposition of the jammed and the free flow mode makes it difficult to consider the pure jammed mode (see Fig. 1). Thus a refinement of the analysis of the dynamical structure factor that allows one to separate both modes is needed. This can be achieved by changing from the occupation function $`\eta _{r,t}`$, defined in real space, to the velocity-particle space, where the evolution of each particle velocity $`v_{n,t}`$ is considered. A snapshot of the velocity-particle space for a given time is shown in Fig. 2. In the free flow phase, where the cars can be considered as independent particles , the velocities fluctuate according to the noisy slowing down rule between the two values $`v_{\mathrm{max}}`$ and $`v_{\mathrm{max}}-1`$, respectively. Extended regions with small or even zero velocities correspond to traffic jams. Comparable to the space-time diagram, these regions move backward in time. The Fourier transform of the velocity-particle diagram in two dimensions leads to the dynamical structure factor $`S_v(k,\omega )`$, which is given by $$S_v(k,\omega )=\frac{1}{NT}\left|\sum _{n,t}v_{n,t}e^{i(kn-\omega t)}\right|^2,$$ (3) with $`k=2\pi m_k/N`$, $`\omega =2\pi m_\omega /T`$ and the $`m`$’s being integers. Compared to the dynamical structure factor of the ordinary space-time diagram, the dynamical structure factor of the velocity-particle space has the advantage that the free flow phase contributes only white noise ($`S_v(k,\omega )|_{\mathrm{free}\mathrm{flow}}=\mathrm{const}`$), i.e., the analysis of the occurring traffic jams is made easier. Figure 3 shows the structure factor $`S_v(k,\omega )`$ above the transition.
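Before describing Figure 3 in detail, we note that Eq. (3) is conveniently evaluated with a two-dimensional fast Fourier transform. A minimal sketch (ours; the random velocity array merely stands in for recorded simulation data so that the snippet runs on its own):

```python
import numpy as np

# v[t, n]: velocity of particle n at time t, as recorded from a simulation.
rng = np.random.default_rng(1)
T, N = 256, 256
v = rng.integers(0, 6, size=(T, N)).astype(float)

# |2D discrete Fourier transform|^2 / (N T) gives S_v(k, omega) of Eq. (3),
# up to the transform's sign convention, which only relabels k -> -k
# inside |...|^2.
S_v = np.abs(np.fft.fft2(v))**2 / (N * T)

# S_v[m_w, m_k] corresponds to omega = 2*pi*m_w/T and k = 2*pi*m_k/N.
print(S_v.shape, S_v[0, 0])
```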
The structure factor displays one ridge with a negative slope, corresponding to the backward moving jams. The notch parallel to the $`k`$-axis through $`\omega =0`$ is caused by finite-size effects and disappears with increasing system size. The peak in the center of the diagram \[$`S_v(k=0,\omega =0)`$\] describes the velocity fluctuations of the whole system. It is an average over both coexisting phases and therefore leads to uncertain results on the occurrence of jams. In the following we carry out a quantitative analysis of the ridges to obtain information about correlations between jammed particles. We will show that among those particles long range correlations exist only above the transition, whereas below the transition these correlations are restricted to a finite correlation length. In Fig. 4 we plot the values of the dynamical structure factor $`S_v(k,\omega )`$ for $`\omega /k=v_\mathrm{j}`$, i.e., the values along the ridge which correspond to the jam modes. In order to take finite-size effects into account, we use the scaling ansatz $$S_v(k,\omega )|_{\omega /k=v_\mathrm{j}}=N^\theta f(N^\zeta k).$$ (4) For $`\theta =1.22`$ and $`\zeta =0.34`$ we obtain a convincing data collapse of all curves. The dynamical structure factor decays algebraically, $$S_v(k,\omega )|_{\omega /k=v_\mathrm{j}}\sim k^{-\gamma },$$ (5) where the exponent is given by $`\gamma =3.6\pm 0.2`$. This algebraic decay of the dynamical structure factor indicates that the corresponding correlation function is also characterized by an algebraic decay, i.e., the system displays long range correlations above the transition. Next we are interested in the values of $`S_v(k,\omega )`$ for $`\omega /k=v_\mathrm{j}`$ below the critical density. These values are shown in Fig. 5. Due to the finite number of standing cars below the transition, the jam mode does not vanish. Our analysis shows that the jam mode decreases like a Lorentz curve, $$S_v(k,\omega )|_{\omega /k=v_\mathrm{j}}\sim \frac{1}{1+(k\xi )^2}+c.$$ (6) Herein $`\xi `$ denotes a correlation length, defined in the velocity-particle space. The term $`c`$ takes into consideration that, caused by the free flow phase, the jam mode does not tend to zero for large $`k`$ but to a finite value. Besides the jam mode, Fig. 5 shows a Lorentz curve according to Eq. (6) which has been fitted to the data. To estimate the influence of finite-size effects on the correlation length, we first determined $`\xi `$ for different values of $`N`$ (with $`T=N`$) and found that $`\xi `$ is almost unaffected by finite-size effects for the densities shown in Fig. 5. Therefore, these values of the correlation length equal, within negligible errors, those values of the correlation length found in the limit $`N\to \mathrm{\infty }`$ (with $`N=T`$). The dependence of the correlation length on $`\rho _\mathrm{c}-\rho `$ is shown in the inset of Fig. 5. With $`\rho `$ approaching $`\rho _\mathrm{c}`$, the correlation length diverges as $$\xi \sim (\rho _\mathrm{c}-\rho )^{-\nu },$$ (7) with $`\nu =0.92\pm 0.05`$. Since $`\xi `$ describes the correlation of particles of the jammed phase, the occurrence of long range order in the system at $`\rho =\rho _\mathrm{c}`$ indicates that the system displays criticality. As mentioned above, the correlation length $`\xi `$ is infinite above the transition (long range order) and finite below. This coincides with measurements of the life time distribution of jams at the critical point, which displays a power-law behavior, i.e., traffic jams occur on all time scales.
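In practice the correlation length entering Eq. (6) can be extracted by a nonlinear fit to the measured jam mode. A sketch of such a fit (ours, run on synthetic data with an assumed noise level, purely to illustrate the procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(k, A, xi, c):
    # Eq. (6): Lorentz-curve form of the jam mode below the transition
    return A / (1.0 + (k * xi)**2) + c

# Synthetic data standing in for the measured S_v(k, w)|_{w/k = v_j}.
rng = np.random.default_rng(2)
k = np.linspace(0.01, 1.0, 100)
S = lorentzian(k, A=5.0, xi=20.0, c=0.1) * (1 + 0.05 * rng.standard_normal(k.size))

popt, pcov = curve_fit(lorentzian, k, S, p0=(1.0, 10.0, 0.0))
print("fitted correlation length xi =", popt[1])
```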
Below the critical density the life time of the jams is finite and no long range correlations of jams can occur. In the following we address the question how long range order occurs on approaching the transition point. Therefore we consider the smallest positive mode on the ridge of jammed particles, $`S_v(k\to 0,\omega \to 0)`$ with $`\omega /k=v_\mathrm{j}`$. This value contains the information how long range and long time correlations appear when the transition takes place, and it is believed to be closely related to an order parameter (see for instance ). If the transition from the free flow regime to the jammed regime can be described as a critical phenomenon, $`S_v(k\to 0,\omega \to 0)`$ should vanish below the transition ($`\rho <\rho _c`$). For simulations of finite system sizes this means that the smallest jam mode obeys the finite-size scaling ansatz $$S_v(k\to 0,\omega \to 0)=(NT)^{-y/2}f[(NT)^{x/2}(\rho -\rho _\mathrm{c})],$$ (8) with $`\omega /k=v_\mathrm{j}`$. We measured $`S_v(k\to 0,\omega \to 0)`$ for various values of $`N`$ and $`T`$ and for $`P=0.321`$, which corresponds to a jam velocity $`v_j\approx -1/2`$ . The finite-size scaling works, and for $`y\approx 0.2`$ and $`x\approx 0.1`$ we obtained a good data collapse, which is plotted in Fig. 6. In particular, we obtain the data collapse for different ratios of $`N`$ and $`T`$, which shows that the system behaves isotropically in $`N`$ and $`T`$. To investigate the dependence of the exponents on $`P`$, we determined them also for another value of the noise parameter ($`P=0.519`$, corresponding to $`v_j\approx -1/3`$ ). Due to the size of the corresponding error bars, no significant $`P`$ dependence of the exponents could be observed. Further investigations with improved accuracy are needed to clarify this point. Note that it is not justified to identify the scaling exponents \[Eq. (8)\] with the usual critical exponents of second order phase transitions ($`y=\beta /\nu `$ and $`x=1/\nu `$). First, it is not clear whether $`S_v(k\to 0,\omega \to 0)`$ equals the order parameter (see and references therein). The second point is that the usual finite-size scaling ansatz rests on the validity of the hyperscaling relation between the exponents (see for instance ). Despite these restrictions, the finite-size scaling analysis of the smallest jam mode reveals that it vanishes below the transition. In the hydrodynamic limit ($`N\to \mathrm{\infty },T\to \mathrm{\infty }`$) no long range correlations in space and time occur for $`\rho <\rho _c`$. Above the critical value the correlation function displays an algebraic decay. Since the correlations of the jams are finite below and infinite above $`\rho _c`$ , the system displays critical behavior. This agrees with the above mentioned investigations of the life time distribution of jams and with measurements of the relaxation time of the system. It was shown that the relaxation time $`\tau `$ diverges at the critical point with increasing system size, $`\tau \sim L^z`$, with a $`P`$-dependent exponent $`z`$ . But one has to mention that above the transition the measurements of the relaxation time yield unphysical results, in the sense that the relaxation time becomes negative . We think that the origin of this behavior is the inhomogeneous character of the system above the transition, where the system separates into two coexisting phases. The relaxation time measurement does not take this inhomogeneous character into account.

## IV Conclusions

We studied numerically the Nagel-Schreckenberg traffic flow model.
The investigation of the dynamical structure factor allowed us to examine the transition of the system from a free flow regime to a jammed regime. Above the transition the dynamical structure factor exhibits two modes corresponding to the coexisting free flow and jammed phases. Due to the sign of their characteristic velocities $`v_\mathrm{f}`$ and $`v_\mathrm{j}`$, both phases can clearly be distinguished. The analysis of the dynamical structure factor of the velocity-particle space shows that above the transition the system exhibits long range correlations of the jammed particles, while below the transition the correlations are restricted to a finite correlation length that diverges at the critical point, indicating that a continuous phase transition takes place. Using a finite-size scaling analysis we showed that in the hydrodynamic limit the smallest jam mode, which corresponds to the long range correlations of jams, vanishes below the transition. We think that an extended investigation of this quantity could lead to a convincing definition of an order parameter which describes the transition from the free flow to the jammed regime.
## 1 Introduction

The study of the primordial chemistry of molecules addresses a number of interesting questions pertaining to the thermal balance of collapsing molecular protoclouds. In numerous astrophysical cases molecular cooling and heating influence the dynamical evolution of the medium. The existence of a significant abundance of molecules can be crucial for the dynamical evolution of collapsing objects. Because the cloud temperature increases with contraction, a cooling mechanism can be important for structure formation, by lowering the pressure opposing gravity, i.e., by allowing continued collapse of Jeans-unstable protoclouds. This is particularly true for the first generation of objects. It has been suggested that a finite amount of molecules such as $`H_2`$, $`HD`$ and $`LiH`$ can be formed immediately after the recombination of cosmological hydrogen (Lepp and Shull 1984, Puy et al 1993). The cosmological fractional abundance $`HD/H_2`$ is small. Nevertheless, the presence of a non-zero permanent electric dipole moment makes $`HD`$ a potentially more important coolant than $`H_2`$ at modest temperatures, although $`HD`$ is much less abundant than $`H_2`$. Recently Puy and Signore (1997) showed that during the early stages of gravitational collapse, for some collapsing masses, $`HD`$ molecules were the main cooling agent when the collapsing protostructure had a temperature of about 200 Kelvin. Thus in section 2 we analytically estimate the molecular cooling during the collapse of protoclouds. Then, in section 3, the possibility of thermal instability during the early phase of gravitational collapse is discussed.

## 2 Molecular functions of a collapsing protocloud

We consider here only the transition between the ground state and the first excited state. For $`HD`$ this is a dipolar transition, whereas for the $`H_2`$ molecule the transition is quadrupolar (because $`H_2`$ has no permanent dipolar moment). In this study we consider only the molecular cooling, which is the dominant term in the cooling function, as we have shown for a collapsing cloud (Puy & Signore 1996, 1997, 1998a, 1998b). We obtain for the cooling function of $`HD`$ the expression: $$\mathrm{\Lambda }_{HD}\simeq \frac{3C_{1,0}^{HD}\mathrm{exp}(-\frac{T_{1,0}^{HD}}{T_m})h\nu _{1,0}^{HD}n_{HD}A_{1,0}^{HD}}{A_{1,0}^{HD}+C_{1,0}^{HD}\left[1+3\mathrm{exp}(-\frac{T_{1,0}^{HD}}{T_m})\right]}$$ (1) and the molecular cooling for the $`H_2`$ molecule is given by: $$\mathrm{\Lambda }_{H_2}\simeq \frac{5n_{H_2}C_{2,0}^{H_2}A_{2,0}^{H_2}h\nu _{2,0}^{H_2}\mathrm{exp}(-\frac{T_{2,0}^{H_2}}{T_m})}{A_{2,0}^{H_2}+C_{2,0}^{H_2}\left[1+5\mathrm{exp}(-\frac{T_{2,0}^{H_2}}{T_m})\right]}$$ (2) The total cooling function is given by $$\mathrm{\Lambda }_{Total}=\mathrm{\Lambda }_{HD}+\mathrm{\Lambda }_{H_2}=\mathrm{\Lambda }_{HD}(1+\xi _{H_2})$$ (3) where $$\xi _{H_2}=\frac{\mathrm{\Lambda }_{H_2}}{\mathrm{\Lambda }_{HD}}$$ is the ratio between the $`H_2`$ and $`HD`$ cooling functions.
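The two-level expressions (1) and (2) are easy to evaluate numerically once the molecular constants are specified. A minimal sketch (ours; all numerical inputs below are order-of-magnitude placeholders, not the values adopted in this paper) is:

```python
import numpy as np

def two_level_cooling(n, T, A, C_coeff, T_line, g_ratio):
    """Two-level cooling rate per unit volume, with the structure of
    Eqs. (1)-(2).

    A: Einstein coefficient (s^-1); C = C_coeff * n approximates the
    collisional excitation rate; T_line = h*nu/k_B in K; g_ratio is the
    statistical-weight factor (3 for HD's J=1-0, 5 for H2's J=2-0).
    All numerical inputs here are placeholders for illustration only.
    """
    h_nu = 1.38e-16 * T_line          # erg, since T_line = h*nu/k_B
    C = C_coeff * n
    boltz = np.exp(-T_line / T)
    return (g_ratio * C * boltz * h_nu * n * A
            / (A + C * (1.0 + g_ratio * boltz)))

# e.g. an HD-like line with T_line = 128.6 K (the J=1-0 transition):
print(two_level_cooling(n=1e-6, T=200.0, A=5e-8, C_coeff=1e-10,
                        T_line=128.6, g_ratio=3.0))
```

The explicit numerical forms actually used in this paper follow as Eqs. (4)-(6) below.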
Finally we deduce: $$\mathrm{\Lambda }_{HD}\simeq 2.66\times 10^{-21}n_{HD}^2\mathrm{exp}(-\frac{128.6}{T_m})\frac{\sqrt{T_m}}{1+n_{HD}\sqrt{T_m}\left[1+3\mathrm{exp}(-\frac{128.6}{T_m})\right]}$$ (4) $$\mathrm{\Lambda }_{H_2}\simeq 1.23\times 10^{-20}n_{HD}^2\mathrm{exp}(-\frac{512}{T_m})\frac{\sqrt{T_m}}{5\times 10^{-4}+n_{HD}\sqrt{T_m}\left[1+5\mathrm{exp}(-\frac{512}{T_m})\right]}$$ (5) and for the ratio: $$\xi _{H_2}\simeq 4.63e^{-\frac{383.4}{T_m}}\frac{1+\sqrt{T_m}n_{HD}[1+3\mathrm{exp}(-\frac{128.6}{T_m})]}{5\times 10^{-4}+\sqrt{T_m}n_{HD}[1+5\mathrm{exp}(-\frac{512}{T_m})]}$$ (6) We study a homologous model of spherical collapse of mass $`M`$, similar to the model adopted in Lahav (1986) and Puy & Signore (1996), in which we only consider $`H_2`$ and $`HD`$ molecules. The aim of this paper is to analyse, through our approximations (where only the first two excited levels are considered), the evolution of the cooling and the possibility of thermal instability. First, let us recall the equations governing the dynamics of a collapsing protocloud: $$\frac{dT_m}{dt}=-2\frac{T_m}{r}\frac{dr}{dt}-\frac{2}{3nk}\mathrm{\Lambda }_{Total}$$ (7) for the evolution of the matter temperature $`T_m`$, $$\frac{d^2r}{dt^2}=\frac{5kT_m}{2m_Hr}-\frac{GM}{r^2}$$ (8) for the evolution of the radius $`r`$ of the collapsing cloud, and $$\frac{dn}{dt}=-3\frac{n}{r}\frac{dr}{dt}$$ (9) for the evolution of the matter density $`n`$. At the beginning of the gravitational collapse the matter temperature increases; then, due to the very high efficiency of the molecular cooling, the temperature decreases. We consider here this transition regime, i.e., the point where the temperature curve has a horizontal tangent (see Puy & Signore 1997): $$\frac{dT_m}{dt}=0$$ Finally we obtain for the total molecular cooling $$\mathrm{\Lambda }_{Total}=\delta T_m^{7/2}\frac{\mathrm{exp}(5T_o/T_m)}{\left[1+\xi _o\mathrm{exp}(-\mathrm{\Delta }T_o/T_m)\right]^5}$$ (10) with $`\delta =\frac{3kGM^2}{2m_H\kappa ^5}\propto M^2`$, where $`\kappa =1.4\times 10^{-22}`$, $`T_o=128.6`$ K, $`\mathrm{\Delta }T_o=383.4`$ K and $`\xi _o=9260`$.

## 3 Thermal Instability

We have learned much about thermal instability in the 30 years since the appearance of the work of Field (1965). Many studies have focused on the problem of thermal instability in different situations. In general, astronomical objects are formed by self-gravitation. However, some objects cannot be explained by this process: for these objects the gravitational energy is smaller than the internal energy. Thus, if the thermal equilibrium of the medium is a balance between energy gains and radiative losses, instability results if, near equilibrium, the losses increase with decreasing temperature. Then a cooler-than-average region cools more effectively than its surroundings, and its temperature rapidly drops below the initial equilibrium value. Thus, if we introduce a perturbation of density and temperature such that some thermodynamic variable (pressure, temperature…) is held constant, the entropy of the material $`𝒮`$ will change by an amount $`\delta 𝒮`$, and the net heating function $`\mathrm{\Psi }=\mathrm{\Gamma }-\mathrm{\Lambda }`$ by an amount $`\delta \mathrm{\Psi }`$. From the equation $$\delta \mathrm{\Psi }=Td(\delta 𝒮)$$ there is instability only if: $$\left(\frac{\delta \mathrm{\Psi }}{\delta 𝒮}\right)>0$$ We know that $`Td𝒮=C_pdT`$ in an isobaric perturbation.
The corresponding inequality can be written: $$\left(\frac{\delta \mathrm{\Psi }}{\delta T}\right)_p=\left(\frac{\delta \mathrm{\Psi }}{\delta T}\right)_n-\frac{n}{T}\left(\frac{\delta \mathrm{\Psi }}{\delta n}\right)_T>0$$ which is equivalent to Field’s criterion. The criterion involves constant $`P`$ because small blobs tend to maintain pressure equilibrium with their surroundings when heating and cooling get out of balance. In our case (i.e., at the transition regime), a thermal instability could spontaneously develop if Field’s criterion is verified: $$\frac{\partial \mathrm{ln}\mathrm{\Lambda }}{\partial \mathrm{ln}T_m}<0$$ (11) In our case the logarithmic derivative of the total molecular cooling function is given by $$\frac{\partial \mathrm{ln}\mathrm{\Lambda }}{\partial \mathrm{ln}T_m}=\frac{7T_m+7T_m\xi _oe^{-\frac{\mathrm{\Delta }T_o}{T_m}}-10T_o-10T_o\xi _oe^{-\frac{\mathrm{\Delta }T_o}{T_m}}-10\xi _o\mathrm{\Delta }T_oe^{-\frac{\mathrm{\Delta }T_o}{T_m}}}{2T_m\left[1+\xi _oe^{-\frac{\mathrm{\Delta }T_o}{T_m}}\right]}$$ (12) In Figure 1 we have plotted the curve $`y(T_m)=\partial \mathrm{ln}\mathrm{\Lambda }/\partial \mathrm{ln}T_m`$. It shows that Field’s criterion is always verified. We conclude that a thermal instability is possible at the transition regime. Thus, by maintaining the same pressure as its surroundings, such a blob would get cooler and cooler and denser and denser until it could separate into miniblobs. This possibility is very interesting and could provide a scenario for the formation of primordial clouds. However, a quantitative study is necessary to evaluate the order of magnitude of the mass and the size of the clouds. This last point is crucial. Therefore, from these estimations, the calculations of molecular functions could, in principle, be extended to the $`CO`$ molecules for collapsing protoclouds at $`z<5`$ (see Puy & Signore 1998b). But in any case, and in particular for the discussion of thermal instability at the transition regime, the corresponding total cooling function must be numerically calculated, because the excitation of many rotational levels must be taken into account. This extended study is beyond the scope of our paper.

## Acknowledgements

The authors gratefully acknowledge Stéphane Charlot, Philippe Jetzer, Lukas Grenacher and Francesco Melchiorri for valuable discussions on this field. Part of the work of D. Puy has been conducted under the auspices of the Dr. Tomalla Foundation and the Swiss National Science Foundation.

## References

Field G.B., 1965, ApJ 142, 531

Lahav O., 1986, MNRAS 220, 259

Lepp S., Shull M., 1984, ApJ 280, 465

Puy D., Alecian G., Lebourlot J., Léorat J., Pineau des Forets G., 1993, A&A 267, 337

Puy D., Signore M., 1996, A&A 305, 371

Puy D., Signore M., 1997, New Astron. 2, 299

Puy D., Signore M., 1998a, New Astron. 3, 27

Puy D., Signore M., 1998b, New Astron. 3, 247
# Hadronic Vacuum Polarization and the Lamb Shift

LA-UR-98-5728

J.L. Friar (Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA), J. Martorell (Departament d'Estructura i Constituents de la Materia, Facultat Física, Universitat de Barcelona, Barcelona 08028, Spain) and D. W. L. Sprung (Department of Physics and Astronomy, McMaster University, Hamilton, Ontario, L8S 4M1, Canada)

## Abstract

Recent improvements in the determination of the running of the fine-structure constant also allow an update of the hadronic vacuum-polarization contribution to the Lamb shift. We find a shift of −3.40(7) kHz to the 1S level of hydrogen. We also comment on the contribution of this effect to the determination by elastic electron scattering of the r.m.s. radii of nuclei.

Figure 1: Vacuum polarization insertion into a virtual photon propagator.

Hadronic vacuum polarization (VP) contributes to several quantities that have elicited much recent interest: g-2 of the muon, the effective fine-structure constant at the energy scale of the Z-boson mass, the hyperfine splitting in muonium and positronium, and the energy levels of the hydrogen atom. The shaded ellipse in Figure (1) represents the creation and subsequent annihilation of arbitrary hadronic states by virtual photons on the left (with polarization $`\mu `$) and right (with polarization $`\nu `$). For (squared) momentum transfers $`q^2`$ comparable to the masses of various components in the shaded ellipse, the screening of charge that defines the vacuum polarization changes with $`q^2`$ and leads to an effective fine-structure constant that depends on $`q`$: $`\alpha (q^2)`$. If the entire unit in Fig. (1) is inserted as a vertex correction on a lepton, it will also affect g-2 of that lepton via an integral over $`q^2`$. Finally, at very small values of $`q^2`$, inserting the unit between an electron and a nucleus will lead to a shift in hydrogenic energy levels. The S-matrix for the process sketched in Fig. (1) (with one VP insertion) has the form (in a metric where $`p^2=m^2`$)

$$S^{\mu \nu }=\left(\frac{ie_0g^{\mu \alpha }}{q^2}\right)[i\pi ^{\alpha \beta }(q^2)]\left(\frac{ie_0g^{\beta \nu }}{q^2}\right),$$ (1)

and for the process without polarization (just a single photon propagator) it is $`(ie_0^2g^{\mu \nu }/q^2)`$. The strength is determined by the bare electric charge, $`e_0`$, located at the two ends of Fig. (1). The polarization structure function $`\pi ^{\alpha \beta }(q^2)`$ must be gauge invariant, which requires

$$\pi ^{\alpha \beta }(q^2)=(g^{\alpha \beta }q^2-q^\alpha q^\beta )\pi (q^2).$$ (2)

Coupling the $`q^\alpha q^\beta `$ term to any conserved current (e.g., an electron or nucleus) leads to vanishing results and simplifies the tensor structure of $`S^{\mu \nu }`$ ($`g^{\mu \nu }`$). The sequence of 0, 1, 2, … insertions of $`\pi ^{\alpha \beta }`$ into a photon propagator (Fig. (1) shows one insertion) generates the geometric series

$$S^{\mu \nu }=\frac{ig^{\mu \nu }e_0^2}{q^2}(1-\pi +\pi ^2-\mathrm{})=\frac{ig^{\mu \nu }}{q^2}\frac{e_0^2}{1+\pi (q^2)}.$$ (3)

Clearly, at $`q^2=0`$ we expect $`e_0^2/(1+\pi (0))`$ to be $`e^2`$, the renormalized charge (squared). Since $`\pi `$ itself is proportional to $`e_0^2`$, we form $`\Delta \pi (q^2)=\pi (q^2)-\pi (0)`$, rearrange, and find to first order in $`e^2`$

$$S^{\mu \nu }=\frac{ig^{\mu \nu }}{q^2}e^2(q^2),$$ (4)

where

$$e^2(q^2)=\frac{e^2}{1+\Delta \pi (q^2)}$$ (5)

is the (squared) effective charge at the scale $`q^2`$.
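As a small consistency check of the resummation in Eq. (3), the alternating insertion series can be verified symbolically. The sketch below is our own and is not part of the original derivation:

```python
import sympy as sp

pi = sp.symbols('pi')

# Partial sums of 1 - pi + pi^2 - ... approach 1/(1 + pi):
partial = sum((-pi)**n for n in range(8))
print(sp.simplify(partial - 1/(1 + pi)))   # remainder: -pi**8/(pi + 1)

# Conversely, expanding the resummed factor reproduces the series:
print(sp.series(1/(1 + pi), pi, 0, 4))     # 1 - pi + pi**2 - pi**3 + O(pi**4)
```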
Much recent work has been devoted to the numerical determination of $`\frac{e^2(q^2)}{4\pi }\equiv \alpha (q^2)`$. Our purpose is to treat vacuum polarization in hydrogenic ions with nuclear charge $`Ze`$. The momentum-space Coulomb potential generated by Eq. (5) is given by

$$V(q^2)=\frac{Ze^2(q^2)}{q^2}=-\frac{Ze^2(q^2)}{𝐪^2}\simeq -\frac{Ze^2}{𝐪^2}-Ze^2\pi ^{\prime }(0),$$ (6)

where $`q^2=-𝐪^2`$ for this term in Coulomb gauge, and $`\Delta \pi (q^2)\simeq q^2\pi ^{\prime }(0)+\mathrm{}`$; the first term in Eq. (6) is the usual Coulomb potential and the second is the vacuum polarization potential, which we write in configuration space as

$$V_{\mathrm{VP}}(𝐫)=-4\pi (Z\alpha )\pi ^{\prime }(0)\delta ^3(𝐫).$$ (7)

Thus for the n-th S-state the energy shift is given by

$$E_{\mathrm{VP}}\simeq -4\pi (Z\alpha )\pi ^{\prime }(0)|\varphi _n(0)|^2=-\frac{4(Z\alpha )^4\pi ^{\prime }(0)\mu ^3}{n^3},$$ (8)

where $`\mu `$ is the hydrogenic reduced mass. The intrinsic nature of vacuum polarization is attraction, so we expect $`\pi ^{\prime }(0)>0`$. By slicing Fig. (1) through the polarization insertion, we can reexpress $`\pi (q^2)`$ as a dispersion relation, with an imaginary part proportional to $`\sigma _h(q^2)`$, the cross-section for producing all hadron states in $`e^+e^-`$ collisions. A single subtraction then produces $`\Delta \pi (q^2)`$ in the form

$$\Delta \pi (q^2)=\frac{q^2}{\pi }\int _{4m_\pi ^2}^{\mathrm{}}\frac{dt\,Im(\Delta \pi (t))}{t(t-q^2-iϵ)},$$ (9)

where $`Im(\Delta \pi (t))=t\sigma _h(t)/(4\pi \alpha (t))^2`$. Utilizing all available $`e^+e^-`$ collision data (and some theory) permits an accurate interpolation of $`\sigma _h(t)`$, and $`\Delta \pi (q^2)`$ can be constructed numerically. We require only $`\pi ^{\prime }(0)`$ for the hadronic VP (henceforth subscripted with $`h`$), which is given by the parameter $`\ell _1`$ in section 1.5 and the error from Figure (7) of Ref. (note that our $`\Delta \pi =-\Delta \alpha `$ of ):

$$\pi _h^{\prime }(0)=9.3055\,(\pm 2.2\%)\times 10^{-3}\text{ GeV}^{-2}.$$ (10)

Equation (8) also applies to muon-pair vacuum polarization, for which $`\pi _\mu ^{\prime }(0)=\alpha /(15\pi m_\mu ^2)`$, where $`m_\mu `$ is the muon mass. We therefore obtain

$$\pi _h^{\prime }(0)=0.671(15)\,\pi _\mu ^{\prime }(0)\equiv \delta _h\,\pi _\mu ^{\prime }(0),$$ (11)

and thus

$$E_{\mathrm{VP}}^{\mathrm{had}}=0.671(15)\,E_{\mathrm{VP}}^\mu ,$$ (12)

where for S-states Eq. (8) gives

$$E_{\mathrm{VP}}^\mu =-\frac{4\alpha (Z\alpha )^4\mu ^3}{15\pi n^3m_\mu ^2},$$ (13)

and a numerical value of −5.07 kHz for the 1S state of hydrogen. The much heavier $`\tau `$-lepton analogously contributes −0.02 kHz. Previous values obtained for $`\delta _h`$ are displayed together with our value in Table I. Additional values were calculated in Refs. and . The latter estimate used only the $`\rho `$-meson contribution, which is known to give the largest fractional contribution to $`\pi _h^{\prime }(0)`$, and results for g-2 of the muon are consistent with that fraction (∼60%) . All tabulated values are consistent with the more accurate Eq. (12). We also note that a noninteracting pion pair generates only ∼10% of the total hadronic contribution.
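The quoted 1S shifts are easy to reproduce from Eqs. (12) and (13). A minimal numerical sketch (our own check; the physical constants are standard values and are not taken from the paper):

```python
import math

alpha = 1.0 / 137.0359895        # fine-structure constant
m_e   = 0.51099906e6             # electron mass, eV
m_p   = 938.27231e6              # proton mass, eV
m_mu  = 105.658389e6             # muon mass, eV
h     = 4.13566e-15              # Planck constant, eV s

mu = m_e * m_p / (m_e + m_p)     # hydrogen reduced mass, eV
Z, n = 1, 1

# Eq. (13): muon-pair vacuum-polarization shift of the nS level (here 1S), in eV
E_mu  = -4.0 * alpha * (Z * alpha)**4 * mu**3 / (15.0 * math.pi * n**3 * m_mu**2)
E_had = 0.671 * E_mu             # hadronic VP via Eq. (12)

print(f"E_VP^mu  = {E_mu  / h / 1e3:.2f} kHz")   # ~ -5.07 kHz
print(f"E_VP^had = {E_had / h / 1e3:.2f} kHz")   # ~ -3.40 kHz
```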
Finally, we repeat a caveat from Ref. . In elastic-electron-scattering determinations of nuclear form factors (and hence their radii), the radiative-corrections procedure that is used to analyze the data corrects for $`e^+e^-`$ vacuum polarization, sometimes for the muon one, but typically not for the hadronic one. If one type of vacuum polarization is omitted, equation (5) then demonstrates that the effective measured form factor expressed in terms of $`F_0`$ (the true form factor) is

$$F_{\mathrm{eff}}(q^2)=\frac{F_0(q^2)}{1+\Delta \pi (q^2)},$$ (14)

and hence the effective radius is

$$\langle r^2\rangle _{\mathrm{eff}}^{1/2}=[\langle r^2\rangle _0-6\pi ^{\prime }(0)]^{1/2},$$ (15)

where $`6\pi _h^{\prime }(0)=0.0022\text{ fm}^2`$, for example. Although this is a very tiny effect, comparing a measured radius ($`\langle r^2\rangle _{\mathrm{eff}}^{1/2}`$ in Eq. (15)) with one determined from optical measurements corrected for hadronic VP (i.e., $`\langle r^2\rangle _0^{1/2}`$ from $`F_0(q^2)`$) would be inconsistent. In summary, we have updated the calculation of the hadronic vacuum polarization correction in hydrogen using a recently obtained and more accurate value for $`\pi _h^{\prime }(0)`$. This leads to a shift of −3.40(7) kHz in the 1S level of hydrogen. We also noted that elastic electron scattering from nuclei is not corrected for hadronic VP.

Acknowledgements

The work of J. L. F. was performed under the auspices of the United States Department of Energy. D. W. L. S. is grateful to NSERC Canada for continued support under Research Grant No. SAPIN-3198. The work of J. M. is supported under Grant No. PB97-0915 of DGES, Spain. One of us (J. L. F.) would like to thank P. Mohr of NIST for a stimulating series of conversations, and L. Maximon of The George Washington Univ. for information about radiative corrections.
# Superluminal Signal Velocity

## 1 Introduction

Tunneling represents the wave mechanical analogy to the propagation of evanescent modes . Evanescent modes are observed, e.g., in the case of total reflection, in undersized waveguides, and in periodic dielectric heterostructures . Compared with the wave solutions, an evanescent mode is characterized by a purely imaginary wave number, so that, e.g., the wave equation yields for the electric field $`E(x)`$

$$E(x)=e^{i(\omega t-kx)}\;\longrightarrow \;E(x)=e^{i\omega t-\kappa x},$$ (1)

where $`\omega `$ is the angular frequency, $`t`$ the time, $`x`$ the distance, $`k`$ the wave number, and $`\kappa =ik`$ the imaginary wave number of the evanescent mode. Thus evanescent modes are characterized by an exponential attenuation and a lack of phase shift. The latter means that the mode has not spent time in the evanescent region, which in turn results in an infinite velocity in the phase time approximation, neglecting the phase shift at the boundary . Two examples of electromagnetic structures in which evanescent modes exist are shown in Fig. 1 . The dispersion relations of the respective transmission coefficients are displayed in the same figure.

## 2 Signals

Quite often a signal is said to be defined by switching light on or off. It is assumed that the front of the light beam informs my neighbour of my arrival home with the speed of light. Such a signal is sketched in Fig. 2. The inevitable inertia of the light source causes an inclination of the signal's front and tail. Due to the detector's sensitivity level $`D_S`$, the information about the neighbour's arrival (switching on) and departure (switching off) becomes dependent on intensity. In this example the departure time is detected earlier with the attenuated weak signal. This special signal does not transmit reliable information on arrival and departure times. In addition to the dependence on intensity , a light front may be generated by any spontaneous emission process or accidentally by another neighbour. Obviously, a detector needs more than a signal's front to respond properly. The detector needs information about the carrier frequency and the modulation of the signal in order to obtain reliable information about the cause. In general, a signal is detected some time after the arrival of the light's front. Due to the dynamics of detecting systems, several signal oscillations are needed in order to produce an effect . An effect is detected with the energy velocity. In vacuum or in a medium with normal dispersion the signal velocity equals both the energy and the group velocities. For example, a classical signal can be transmitted by the Morse alphabet, in which each letter corresponds to a certain number of dots and dashes. In general, signals are either frequency (FM) or amplitude modulated (AM), and they have in common that the signal does not depend on its magnitude. A modern signal transmission, where the halfwidth corresponds to the number of digits, is displayed in Fig. 3. This AM signal has an infra-red carrier with $`1.5\,\mu `$m wave length and is glass-fiber guided from transmitter to receiver. As mentioned above, the signals are independent of magnitude, as the halfwidth does not depend on the signal's magnitude. The front or very beginning of a signal is only well defined in the theoretical case of an infinite frequency spectrum. However, physical generators only produce signals of finite spectra. This is due to their inherent inertia and due to a signal's finite energy content.
These properties result in a real front which is defined by the measurable beginning of the signal. For example, the signals of Fig. 3 have a detectable frequency band width of $`\Delta \nu =\pm 10^{-4}\nu _C`$, where $`\nu _C`$ is the carrier frequency. Frequency band limitation as a consequence of a finite signal energy reveals one of the fundamental deficiencies of classical physics: a classical detector can detect a deliberately small amount of energy, whereas every physical detector needs at least one quantum of the energy $`\mathrm{}\omega `$ in order to respond.

## 3 An Experimental Result

Superluminal signal velocities have been measured by Enders and Nimtz . The experiments were carried out with AM microwaves in undersized waveguides and in periodic dielectric heterostructures. The measured propagation time of a pulse is shown in Fig. 4. The microwave pulse has travelled either through air or it has crossed an evanescent barrier . The linewidth of the pulse represents the signal. The experimental result is that the tunneled signal has passed the airborne signal at a superluminal velocity of 4.7 c. The measurements of the traversal time are carried out under vacuum-like conditions at the exit of the evanescent region; the reason for this will be discussed later.

## 4 Some Implications of superluminal signal velocity

Measured microwave signals are shown in Fig. 4. The halfwidth (information) of the tunneled signal has traversed the evanescent region at a velocity of 4.7 c. As explained above, signals have a limited frequency spectrum, since their energy content $`W`$ is always finite and detectable frequency components with $`\omega >W/\mathrm{}`$ cannot exist. In this experiment all frequency components of the signal are evanescent and move at a velocity faster than c. The beginning of the evanescent signal overtakes that of the airborne signal, as seen in Fig. 4. The superluminal velocity of evanescent modes has some interesting features differing fundamentally from luminal or subluminal propagation of waves with real wave numbers. This will be discussed in the following subsections.

### 4.1 Change of chronological order

The existence of a superluminal signal velocity ensures the possibility of an interchange of chronological order. Let us assume an inertial system $`\Sigma _{II}`$ moves away from system $`\Sigma _I`$ with a velocity $`v_r`$. Special Relativity (SR) gives the following relationship for the travelling time $`\Delta t`$ and for the distance $`\Delta x`$ of a signal in the system $`\Sigma _I`$ which is watched in $`\Sigma _{II}`$:

$$\Delta t_{II}=\frac{\Delta t_I-v_r\Delta x_I/c^2}{(1-v_r^2/c^2)^{1/2}}=\frac{\Delta t_I(1-v_Sv_r/c^2)}{(1-v_r^2/c^2)^{1/2}}.$$ (2)

$`v_r\geq c^2/v_S`$ is the condition for the change of chronological order, i.e. $`\Delta t_{II}\leq 0`$, between the systems $`\Sigma _I`$ and $`\Sigma _{II}`$. For example, at a signal velocity $`v_S=10`$ c the chronological order changes at $`v_r\geq 0.1`$ c. This result does not violate the SR. The common constraint $`v_S\leq c`$ is posed by the SR on electromagnetic wave propagation in a dispersive medium and not on the propagation of evanescent modes.
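The numbers quoted in this subsection follow directly from Eq. (2). A short sketch (our own illustration, with c = 1 and the measured $`v_S=4.7`$ c from Fig. 4):

```python
import math

v_S = 4.7                        # signal velocity in units of c (Fig. 4)

def dt_II(dt_I, v_r):
    """Eq. (2): signal travel time seen from a frame receding at v_r (c = 1)."""
    return dt_I * (1.0 - v_S * v_r) / math.sqrt(1.0 - v_r**2)

for v_r in (0.10, 1.0 / v_S, 0.30, 0.50):
    print(f"v_r = {v_r:.3f} c  ->  dt_II/dt_I = {dt_II(1.0, v_r):+.3f}")
# The sign changes at v_r = c^2/v_S ~ 0.213 c: beyond this the order is reversed.
```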
### 4.2 Negative electromagnetic energy

The Schrödinger equation yields a negative kinetic energy in the tunneling case, where the potential $`U`$ is larger than the particle's total energy $`W`$:

$$\frac{d^2\mathrm{\Psi }}{dx^2}+\frac{2m}{\mathrm{}^2}(W-U)\mathrm{\Psi }=0$$ (3)

The same happens to evanescent modes: within the mathematical analogy their kinetic electromagnetic energy is negative too. The Helmholtz equation for the electric field $`E`$ in a waveguide is given by the relationship

$$\frac{d^2E}{dx^2}+(k^2-k_c^2)E=0$$ (4)

where $`k_c`$ is the cut-off wave number of the evanescent regime. The quantity ($`k^2-k_c^2`$) plays a role analogous to the energy eigenvalue and is negative in the case of evanescent modes. The dielectric function $`ϵ`$ of evanescent modes is negative and thus the refractive index is imaginary. For the basic mode a rectangular waveguide has the following dispersion of its dielectric function, where $`k_c^2=(\pi /b)^2`$ holds and $`b`$ is the waveguide width:

$$ϵ(\lambda _0)=1-(\lambda _0/2b)^2$$ (5)

$`\lambda _0`$ is the free space wavelength of the electromagnetic wave. In the case of tunneling it is argued that a particle can only be measured in the barrier with a photon having an energy $`\mathrm{}\omega \geq U-W`$ . This means that the total energy of the system is positive. According to Eq. (5) the evanescent mode's electric energy density $`\rho `$ is given by the relationship

$$\rho =\frac{1}{2}ϵϵ_0E^2<0,$$ (6)

where $`ϵ_0`$ is the vacuum permittivity. The analogy between the Schrödinger equation and the Helmholtz equation holds again, and it is not possible to measure an evanescent mode. Achim Enders and I tried hard to measure evanescent modes with probes put into the evanescent region but failed . Obviously evanescent modes are not directly measurable, in analogy to a particle in a tunnel. We might also say this problem is due to impedance mismatch between the evanescent mode and a probe. The impedance $`Z`$ of the basic mode in a rectangular waveguide is given by the relationship

$$Z=Z_0ϵ^{-1/2}$$ (7)

where $`Z_0`$ is the free space impedance. In the evanescent regime $`k<k_c`$ the impedance is imaginary.
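To make Eqs. (5)–(7) concrete, here is a small numerical sketch (our own illustration; the waveguide width b is an assumed, X-band-like value, and the expressions follow the forms reconstructed above):

```python
import cmath

b = 0.0230                       # assumed waveguide width, m
lam_c = 2.0 * b                  # cut-off wavelength of the basic (TE10) mode

for lam0 in (0.030, 0.045, 0.060):            # free-space wavelengths, m
    eps = 1.0 - (lam0 / lam_c)**2             # Eq. (5)
    n = cmath.sqrt(eps)                       # refractive index: imaginary if eps < 0
    Z = 1.0 / n                               # impedance in units of Z0, Eq. (7)
    print(f"lam0 = {lam0*100:.1f} cm: eps = {eps:+.3f}, n = {n:.3f}, Z/Z0 = {Z:.3f}")
```

For $`\lambda _0>2b`$ the dielectric function is negative and both the refractive index and the impedance become purely imaginary, which is the impedance mismatch invoked above.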
### 4.3 The not-causal evanescent region

Evanescent modes do not experience a phase shift inside the evanescent region . They cross this region without consuming time: the predicted and the measured time delays arise at the boundary between the wave and the evanescent mode regimes. For opaque barriers (i.e. $`\kappa x\gg 1`$, where $`\kappa `$ is the imaginary wave number and $`x`$ the length of the evanescent barrier) the phase shift becomes constant at $`2\pi `$, which corresponds to one oscillation time of the mode. In fact, the measured barrier traversal time was roughly equal to the reciprocal frequency in the microwave as well as in the optical experiments, i.e. either in the 100 ps or in the 2 fs time range, independent of the barrier length . The latter behaviour is called the Hartman effect: the tunneling time is independent of barrier length, and it has indeed been measured with microwave pulses thirty years after its prediction .

## 5 Summing up

Evanescent modes show some amazing properties with which we are not familiar. For instance, the evanescent region is not causal, since evanescent modes do not spend time there. This is an experimental result, due to the fact that the traversal time is independent of barrier length. Another experience strange to classical physics is that evanescent fields cannot be measured. This is due to their negative energy or, equivalently, to the impedance mismatch. Amazingly enough, this is in analogy with wave-mechanical tunneling. The energy of signals is always finite, resulting in a limited frequency spectrum, in accordance with Planck's energy quantum $`\mathrm{}\omega `$. This is a fundamental deficiency of classical physics, which assumes the measurability of any small amount of energy. A physical signal never has an ideal front: the latter would need infinitely high frequency components with a correspondingly high energy. Another consequence of the frequency band limitation is that signals which have only evanescent mode components are not Lorentz-invariant; such a signal may travel faster than light. Front, group, signal, and energy velocities all have the same value in vacuum. Bearing in mind the narrow frequency band of signals, the former statement holds also for the velocities of evanescent modes. In a first-order approximation the dispersion relation of a stop band is constant, and a significant pulse reshaping does not take place. These results demonstrate that signals and effects may be transmitted with superluminal velocities, provided that they are carried by evanescent modes.

## 6 Acknowledgments

Stimulating discussions with V. Grunow, D. Kreimer, P. Mittelstaedt, R. Pelster, and H. Toyatt are gratefully acknowledged.
## 1 Introduction

Neutrinos are one of the most abundant components of the universe. Apart from the 3 K black-body electromagnetic radiation, the universe is filled with a sea of relic neutrinos which were created in the early stages and decoupled from the rest of the matter within the first few seconds. These relic neutrinos have played a crucial role in primordial nucleosynthesis, structure formation and the evolution of the universe as a whole. They may also contain clues about the mechanism of baryogenesis. The properties of relic neutrinos, their role in nature and their possible manifestations were the main topics of the workshop on The Physics of Relic Neutrinos. It was organised at The Abdus Salam International Center for Theoretical Physics (ICTP), Trieste, Italy during September 16 – 19, 1998, by ICTP and INFN. The workshop was attended by about 80 participants. Around 40 talks were distributed in the following sessions:

* Neutrino Masses and Mixing
* Leptogenesis and Baryogenesis
* Big Bang Nucleosynthesis
* Structure Formation
* Detection and Manifestations of Relic Neutrinos
* Other Sources of Neutrino Background
* Neutrinos in Extreme Conditions

In what follows, we will describe the main results presented in the talks. We also give as complete as possible a list of references to the original papers where the results have been published.

## 2 Neutrino Masses and Mixing

If neutrinos are massless and there is no significant lepton asymmetry in the universe, the properties of the relic neutrino sea are well known: the neutrinos are uniformly distributed in the universe with a density of 113/cc/species and at a temperature of 1.95 K. The existence of a nonzero neutrino mass can dramatically change the properties of the sea and its role in the evolution of the universe. In this connection, the existing evidence for non-zero neutrino masses and mixing was extensively discussed. Recent SuperKamiokande (SK) results on atmospheric neutrinos give the strongest evidence for a nonzero neutrino mass. E. Lisi (Bari) showed that the best fit to the sub-GeV, multi-GeV and upward-going muon data from SK is obtained at

$$\Delta m_{23}^2=2.5\times 10^{-3}\text{ eV}^2,\quad \mathrm{sin}^2\theta _{23}=0.63\text{ and }\mathrm{sin}^2\theta _{e3}=0.14,$$

though taking the CHOOZ results into account will decrease the values of $`\mathrm{sin}^2\theta _{e3}`$ and $`\Delta m_{23}^2`$. Maximal-depth $`\nu _\mu \to \nu _{sterile}`$ oscillations can also give a good fit of the data (O. Peres, Valencia), with a slightly higher value of $`\Delta m^2`$ . A majority of alternative explanations of the atmospheric neutrino problem, like neutrino decay, reviewed by S. Pakvasa (Hawaii), still imply nonzero neutrino masses. The decay of neutrinos can account for the sub-GeV and multi-GeV atmospheric neutrino data rather well . However, in this case the deficit of the upward-going muon fluxes, as indicated by the data from SK and MACRO, cannot be explained. All the above explanations of the SK results imply that at least one neutrino species has a mass $`m\gtrsim 0.03`$ eV. This means that at least one of the components of the relic neutrino sea is non-relativistic, opening up the possibility of structure formation in the sea. Though the results on solar neutrinos give a strong hint of the existence of a nonzero neutrino mass, we are still far from a final conclusion. L.
Krauss (Case Western) pointed out that if the oscillations are "just-so", certain correlations between the spectral distortions and seasonal variations of the solar neutrino signal may be observable . If both the solar and the atmospheric neutrino anomalies have the oscillation interpretation, neutrinos can contribute significantly to the hot dark matter only if the neutrino mass spectrum is degenerate: all three neutrinos have a mass of about 1 eV. A strong bound on this scenario follows from the negative searches for neutrinoless double beta decay. With a degenerate mass spectrum, the present bound on the effective Majorana neutrino mass,

$$m_{\mathrm{majorana}}<0.45\text{ eV (90\% C.L.)},$$

implies a large mixing of the electron neutrinos and some cancellation of contributions from different mass eigenstates. Alternatively, it means that the neutrino contribution to the energy density in the universe is $`\mathrm{\Omega }_\nu <0.06`$. F. Simkovic (Comenius) showed that new estimations of the nuclear matrix elements using the pn-RQRPA (proton–neutron relativistic quasiparticle random phase approximation) allow the weakening of the present bound on the Majorana neutrino masses by 50% . The reconstruction of the whole neutrino mass spectrum on the basis of the present data is of great importance both for particle physics and for cosmology. Several plausible patterns of neutrino masses and mixing have been elaborated. One possibility which has attracted significant interest recently (especially in connection with the recent measurements of the recoil electron energy spectrum of the solar neutrinos) is the bi-maximal mixing scheme with degenerate neutrinos . As described by F. Vissani (DESY), this scheme reproduces the $`\nu _\mu \to \nu _\tau `$ oscillation solution of the atmospheric neutrino problem, explains the solar neutrino data by "just-so" oscillations of $`\nu _e`$ into $`\nu _\mu `$ and $`\nu _\tau `$, and gives a significant amount of HDM without conflicting with the double beta decay bound. However, this scheme requires a strong fine-tuning. M. Fukugita (Tokyo) reviewed the models of fermion masses based on the $`S_{3L}\times S_{3R}`$ permutation symmetry which lead to "democratic" mass matrices for the charged fermions. The Majorana character of neutrinos admits a diagonal mass matrix with a small mass splitting due to the symmetry violation. In this case, one gets a large lepton mixing and the neutrino mass degeneracy required for HDM. Fukugita presented the embedding of this scheme of mass matrix patterns in SU(5) GUTs . R. Mohapatra (Maryland) showed that the bi-maximal mixing pattern can be derived from the maximal, symmetric, four-neutrino mixing in the limit that one of the neutrinos is made heavy . He also showed that combining the permutation symmetry $`S_3`$ with a $`Z_4\times Z_3\times Z_2`$ symmetry in the left-right symmetric extension of the standard model, the mixing pattern of the democratic mass matrix can be generated . This would account for a large $`\nu _\mu `$–$`\nu _\tau `$ and maximal $`\nu _e`$–$`\nu _\mu `$ mixing, along with small Majorana masses through the double seesaw mechanism. The attempts to accommodate all the existing data and/or to explain the large lepton mixing lead to the introduction of sterile neutrinos (Mohapatra) . Their existence would have enormous implications for astrophysics and cosmology. Z.
Berezhiani (Ferrara) explored the possibility of the sterile neutrinos $`\nu ^{\prime }`$ being from a mirror world which communicates with our world only through gravity or through the exchange of some particles of the Planck-scale mass . In the mirror world, the scale of electroweak symmetry breaking can be higher than our scale: $`v_{EW}^{\prime }=zv_{EW}`$, $`z>1`$. In this case $`\nu _e`$–$`\nu _e^{\prime }`$ mixing can provide the solution of the solar neutrino problem via the MSW effect ($`z\sim 30`$) or "just-so" oscillations ($`z\sim 1`$). The mirror neutrinos (and mirror baryons) can also form the dark matter in the universe. Various aspects of the theory of neutrino oscillations, and in particular the problem of coherence and decoherence in the oscillations, were discussed by L. Stodolsky (MPI, Munich).

## 3 Leptogenesis and Baryogenesis

In the early universe, one of the first processes directly influenced by the neutrinos would have been those of leptogenesis and baryogenesis. One of the favoured mechanisms for the dynamical generation of the observed baryon asymmetry is through the production of a lepton asymmetry, which can then be converted to the baryon asymmetry by $`(B-L)`$-conserving electroweak sphalerons . The leptonic asymmetry can be generated in the CP-violating decays of heavy ($`M>10^{10}`$ GeV) right-handed neutrinos $`N_i`$ to Higgs bosons and usual neutrinos, $`N_i\to \ell ^cH^{*}`$, $`\ell H\to N_i`$. The lifetime of these Majorana neutrinos needs to be long enough so that thermal equilibrium is broken. E. Roulet (La Plata) discussed the finite-temperature effects on these CP-violating asymmetries . E. Akhmedov (ICTP) described a new scenario of baryogenesis via neutrino oscillations . The lepton asymmetry is created in CP-violating oscillations of three right-handed neutrino species with masses 20 – 50 GeV. The neutrinos should have very small ($`10^{-8}`$ – $`10^{-7}`$) and different Yukawa couplings. These Yukawa couplings lead both to the production of the RH neutrinos and to the propagation of the generated asymmetry to the usual leptons. The lepton asymmetry is generated in different neutrino species, but the total lepton number is still zero. At least one of the singlet neutrino species needs to be in equilibrium and at least one out of equilibrium when the sphalerons freeze out. Then only those neutrinos which are in equilibrium will transfer the asymmetry to the light ($`SU(2)`$-doublet) leptons. This asymmetry will then be converted to the baryon asymmetry. Thus, a lepton asymmetry can be produced without total lepton number violation, through the "separation" of charges. A. Pilaftsis (MPI, Munich) talked about a model with two singlet neutrinos per fermion family, which get their masses through an off-diagonal Majorana mass term . The mass splitting between these two neutrinos can be small (in $`E_6`$ theories, for example), as small as 10 – 100 eV for Majorana neutrino masses of 10 TeV. If the splitting is comparable to the decay widths, the CP asymmetries in the neutrino decays are resonantly enhanced. A remarkable consequence is that the scale of leptogenesis may be lowered up to the TeV range. In all the above scenarios, the seesaw mechanism leads to light neutrino masses in the range $`10^{-3}`$ – 1 eV, which are relevant for cosmology and for explaining the solar and atmospheric neutrino data. The leptonic asymmetry can also be produced without right-handed neutrinos, in the decays of two heavy Higgs triplets (U. Sarkar, PRL, Ahmedabad).
The Higgs masses of $`10^{13}`$ GeV lead both to a successful leptogenesis and to the few-eV scale for the masses of the usual neutrinos . In all the above scenarios, the leptonic asymmetry is of the same order as the final baryon asymmetry. In general, a large lepton asymmetry can be produced without a large baryon asymmetry, e.g. through the Affleck-Dine mechanism . In the scenario with the decay of the RH neutrinos, if the hierarchy of the Dirac masses as well as the Majorana masses is similar to that of the up-type quarks, and if the solar neutrino deficit is due to the MSW effect, the temperature for baryogenesis may be as high as $`T_B\sim M_R\sim 10^{10}`$ GeV. At such high temperatures, however, a large number of gravitinos are generated. These gravitinos might overclose the universe and, if they decay late, modify the primordial light element abundances in a way that is incompatible with observations. According to W. Buchmüller (DESY), these problems can be avoided if gravitinos are the LSPs (and therefore stable) and have a mass of 10 – 100 GeV. The relic density of these gravitinos will be cosmologically important and they can play the role of the cold dark matter .

## 4 Big Bang Nucleosynthesis

Properties of the neutrino sea are crucial for the outcome of big bang nucleosynthesis (BBN), i.e. the primordial abundances of the light nuclides: D, ³He, ⁴He and ⁷Li. The implications of the recent data on the primordial abundances for cosmology and particle physics were reviewed by G. Steigman (Ohio) . The data appear to be in rough agreement with the predictions of the standard cosmological model for three species of light neutrinos and a nucleon-to-photon ratio restricted to a narrow range of

$$\eta \equiv n_B/n_\gamma =(3-4)\times 10^{-10}.$$

A closer inspection, however, reveals a tension between the inferred primordial abundances of D and ⁴He. For deuterium, at present there are two different analyses of the data from observations of high-redshift, low-metallicity absorbing regions: the first analysis leads to the primordial abundance of

$$D/H=(1.9\pm 0.5)\times 10^{-4}\text{ (high }D\text{)},$$

while the second gives

$$D/H=(3.40\pm 0.25)\times 10^{-5}\text{ (low }D\text{)}.$$

The primordial abundance of ⁴He is derived from observations of low-metallicity extragalactic H II regions. Here also there are two inconsistent results for the ⁴He mass abundance $`Y_P`$. One of the analyses leads to a high number,

$$Y_P=0.244\pm 0.002\text{ (high }^4\text{He)},$$

and another gives a low number,

$$Y_P=0.234\pm 0.002\text{ (low }^4\text{He)}.$$

The consistency of the D and ⁴He results with the predicted abundances in standard BBN is possible in two cases: (i) low D, high ⁴He and high $`\eta `$, or (ii) high D, low ⁴He and low $`\eta `$. Resolution of this conflict may lie within the statistical uncertainties in the data or with the systematic uncertainties: in the extrapolation from "here and now to there and then". However, if both the D and ⁴He abundances are low, standard BBN is in "crisis". The problem can be resolved if the contribution of some non-standard particle physics leads to an effective number of light neutrino species ($`N_{\mathrm{eff}}^\nu `$) at the time of BBN smaller than three. This can be realized, for example, if the mass of the tau neutrino is in the range of a few MeV and it decays invisibly with $`\tau <5`$ sec (S. Pastor, Valencia).
In fact, $`N_{\mathrm{eff}}^\nu `$ can be as low as 1 if the products of the neutrino decay include electron neutrinos, due to their direct influence on the neutron ↔ proton reactions . A simple statistical method for determining the correlated uncertainties of the light element abundances expected from BBN was presented by F. Villante (Ferrara) . This method, based on linear error propagation, avoids the need for lengthy Monte Carlo simulations and helps to clarify the role of the different nuclear reactions. The results of a detailed calculation of the nucleon weak interactions relevant for the neutron-to-proton ratio at the onset of BBN were presented by G. Mangano (Naples) . The presence of sterile neutrinos in the relic neutrino sea can significantly modify BBN. Though recent conservative bounds on $`N_{\mathrm{eff}}^\nu `$ still admit more than four neutrino species , the question of whether sterile neutrinos can be in equilibrium at BBN is still alive. If sterile neutrinos have masses and mixing which give the solution of the atmospheric neutrino anomaly, then an equilibrium concentration of sterile neutrinos will be generated via $`\nu _\mu \to \nu _s`$ oscillations. This can be avoided if a lepton asymmetry of order $`\gtrsim 10^{-5}`$ exists at the time of the $`\nu _\mu \to \nu _s`$ oscillations . The asymmetry can be produced in the oscillations $`\nu _\tau \to \nu _s`$ and $`\overline{\nu }_\tau \to \overline{\nu }_s`$ at earlier times. The numerical integrations of the corresponding quantum kinetic equations (R. Volkas, Melbourne) show that this requires $`m_{\nu _\tau }>4`$ eV (for $`|\delta m_{atm}^2|=10^{-2.5}`$ eV²) . However, X. Shi (San Diego) concludes from his calculations that a $`\nu _\tau `$ with a larger mass, $`15\text{ eV}<m_{\nu _\tau }<100\text{ eV}`$, is needed. Such a $`\nu _\tau `$ must decay non-radiatively with a lifetime $`<10^3`$ years in order to have successful structure formation at high redshifts . Recently Shi's results have been criticized by Foot and Volkas , who confirmed their previous lower value of $`m_{\nu _\tau }`$. Volkas also presented the general principles of the creation of a lepton asymmetry as a generic outcome of active-to-sterile neutrino oscillations ($`\nu _a\to \nu _s`$ and $`\overline{\nu }_a\to \overline{\nu }_s`$, where $`a=e,\mu ,\tau `$) in the early universe as a medium. It can be studied with a simpler Pauli-Boltzmann approach as well as starting from the exact quantum kinetic equations . If a significant electron-neutrino asymmetry ($`>1\%`$) is generated, $`N_{\mathrm{eff}}^\nu `$ can be less than three . D. Kirilova (Sofia) discussed the oscillations $`\nu _a\to \nu _s`$ with a small mass difference ($`\delta m^2<10^{-7}`$ eV²). These oscillations become effective after the decoupling of the active neutrinos. Using an exact kinetic approach, it is possible to study the evolution of the neutrino number density for each momentum mode. This approach allows one to calculate all the effects of neutrino oscillations on the production of primordial ⁴He: the depletion of the neutrino population, the distortion of the energy spectrum and the generation of a neutrino asymmetry .

## 5 Structure Formation

Neutrinos are a major component of the hot dark matter (HDM) – the particles which were relativistic at $`t\sim 1`$ year, when $`T\lesssim 1`$ keV and the "galaxies" came within the horizon.
The neutrinos with masses in the eV range would contribute significantly to the matter density in the universe:

$$\mathrm{\Omega }_\nu =0.01\,h^{-2}\left(\frac{m_\nu }{\mathrm{eV}}\right),$$

and even smaller masses can be relevant for structure formation. For $`\mathrm{\Omega }_\nu \gtrsim 0.1`$, neutrinos would significantly influence the observable spectrum of density perturbations, giving more strength to supercluster scales and suppressing smaller scales. The primordial density fluctuations in the universe are probed, in particular, by the anisotropies in the cosmic microwave background (CMB) radiation (for scales $`\gtrsim `$ 100 Mpc) and by observations of the large-scale distribution of galaxies. Optical redshift surveys of galaxies can now examine scales up to ∼ 100 Mpc. As described by J. Silk (Berkeley), no current model seems to fit the detailed shape of the power spectrum of the primordial density perturbations and satisfy all the existing constraints, although the Cold + Hot dark matter (CHDM) model with

$$\mathrm{\Omega }_{cold}\approx 0.7,\quad \mathrm{\Omega }_\nu \approx 0.2,\quad \mathrm{\Omega }_b\approx 0.1$$

gives a relatively better fit . This model implies a neutrino mass (or a sum of the neutrino masses) of about 5 eV and describes the nearby universe well; however, it (like the other models with $`\mathrm{\Omega }=1`$ and zero cosmological constant $`\mathrm{\Lambda }`$) is disfavoured by the new data on (i) the early galaxies, (ii) cluster evolution, and (iii) high-redshift type IA supernovae. The models with a cosmological constant, $`\mathrm{\Lambda }`$CDM ($`\mathrm{\Omega }_\mathrm{\Lambda }\approx 0.6`$), seem to be favoured in the light of the new data , but the overall fit is still not satisfactory. The sizes of voids give an important clue to the relative fraction of the HDM. J. Primack (UC Santa Cruz) described the use of the void probability function (VPF) to quantify this distribution . It is found that on intermediate (2 – 8 $`h^{-1}`$ Mpc) scales, the VPF for the standard CHDM model (with $`\mathrm{\Omega }_{cold}/\mathrm{\Omega }_{hot}/\mathrm{\Omega }_{bar}=0.6/0.3/0.1`$) exceeds the observational VPF, indicating that the HDM fraction is lower than what was thought earlier. T. Kahniashvili (Tbilisi) argued that consistency with the current data can be achieved for the (COBE-normalized) models only for

$$\mathrm{\Omega }_{hot}/\mathrm{\Omega }_{matter}\lesssim 0.2,\quad h=0.5\,(0.7),\text{ and }0.45\,(0.3)\lesssim \mathrm{\Omega }_{matter}\lesssim 0.75\,(0.5)$$

at the $`1\sigma `$ level , so that $`\mathrm{\Omega }_\nu <0.1`$. The presence of a non-zero cosmological constant, though theoretically problematic from the point of view of "naturalness", seems to help in understanding the large-scale structure better. In that case, the main conclusion (as emphasized by M. Roos, Helsinki) is that the presence of HDM is no longer necessary (and eV neutrinos are not needed to provide this component), although some amount of HDM is still possible and may be useful for further tuning. The situation can be clarified with the new precision measurements of the CMB anisotropy by MAP and PLANCK, which will be sensitive to $`\mathrm{\Omega }_\nu \simeq 0.01`$ and therefore $`m_\nu \gtrsim 0.2`$ eV .
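These sensitivities follow directly from the density relation quoted at the beginning of this section; a quick check (our own, with h = 0.5 as in the CHDM fits above):

```python
def omega_nu(m_nu_eV, h=0.5):
    """Omega_nu = 0.01 h^-2 (m_nu / eV) for one neutrino species."""
    return 0.01 * m_nu_eV / h**2

for m in (0.2, 1.0, 5.0):        # eV; 5 eV is the CHDM-scale mass quoted above
    print(f"m_nu = {m} eV  ->  Omega_nu = {omega_nu(m):.3f}")
# 0.2 eV -> ~0.01 (the MAP/PLANCK sensitivity); 5 eV -> ~0.2 (the CHDM value)
```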
New galaxy surveys like SDSS will probe neutrino masses as low as 0.1 eV . The CMB anisotropy measurements also allow us to put a limit on the degeneracy of neutrinos. According to S. Sarkar (Oxford), the present CMB data still admit a rather strong degeneracy (the best fit being at $`\mu /T=3.4`$ with the spectral index $`n=0.9`$), and hence a large lepton asymmetry. The existence of such a large lepton asymmetry can modify the history of the universe, leading to symmetry non-restoration at high temperature and thus solving the monopole and domain wall problems . The height of the lowest multipole peak in the CMB spectrum increases with the degeneracy of the neutrinos. (The difference in heights between the $`\mu /T=1`$ and $`\mu /T=0`$ cases is about 10%.) So forthcoming precision measurements of the multipole spectrum will be able to restrict the degeneracy.

## 6 Detection and Manifestations of Relic Neutrinos

The direct detection of relic neutrinos would of course be of fundamental importance. However, it looks practically impossible with the present methods. The situation was summarized several years ago in the review , and some possible schemes have been proposed in . At the same time, it is possible to search for some indirect manifestations of the relic sea even now. D. Fargion (Rome) and T. Weiler (Vanderbilt) have considered a mechanism involving relic neutrinos that may generate the highest energy cosmic rays detected at the earth (see for example ), which have energies above the Greisen-Zatsepin-Kuzmin (GZK) cut-off of $`5\times 10^{19}`$ eV . The process is the annihilation of ultrahigh energy neutrinos on the nonrelativistic neutrinos from the relic sea:

$$\nu _{\text{cosmic}}+\overline{\nu }_{\text{relic}}\to Z\to \text{nucleons and photons}.$$

For a neutrino mass $`m_\nu \sim `$ few eV, the energy of the cosmic ray neutrinos should be about $`E_\nu >10^{21}`$ eV. It is assumed that the production rate is greatly enhanced due to a significant clustering of the relic neutrino density in the halo of our galaxy or the galaxy cluster. The secondary nucleons and photons may propagate to the earth without too much energy attenuation and are the primary candidate particles for inducing super-GZK air showers in the earth's atmosphere. A numerical calculation has been done in which indicates that such cascades could contribute more than 10% to the observed cosmic ray flux above $`10^{19}`$ eV in the case of eV neutrinos. Recently Waxman has shown that for the annihilation to contribute significantly to the detected cosmic-ray events, a new class of high energy neutrino sources, unrelated to the sources of UHE cosmic rays, needs to be invoked.
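The required energy scale follows from the Z-resonance condition, $`s=2m_\nu E_\nu =M_Z^2`$ (standard kinematics for annihilation on a nonrelativistic target, not spelled out above); a quick check:

```python
M_Z = 91.19e9                                  # Z-boson mass, eV

for m_nu in (0.5, 1.0, 2.0, 4.0):              # relic neutrino mass, eV
    E_res = M_Z**2 / (2.0 * m_nu)              # resonant cosmic-neutrino energy, eV
    print(f"m_nu = {m_nu} eV  ->  E_res = {E_res:.1e} eV")
# For m_nu of a few eV, E_res ~ 10^21 eV, above the GZK cut-off, as stated above.
```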
The relic sea could also be detected if neutrinos are massive and undergo a radiative decay. This hypothesis was suggested to explain the high ionization of the interstellar hydrogen. The present status of this hypothesis was summarized by D. Sciama (Trieste) . If this heavy (27.4 eV) neutrino is sterile, it will decouple earlier (at $`T\sim 200`$ MeV) and its contribution to the matter density $`\mathrm{\Omega }`$ will be small, thus avoiding any conflict with structure formation . Direct searches for the expected EM line at $`\lambda \approx 900`$ Å from this radiative decay are being performed by the EURD detector, and the results are expected soon. The decaying neutrino cosmology leaves a particular imprint in the angular power spectrum of temperature fluctuations in the CMB, which will be tested with the forthcoming MAP and PLANCK surveyor missions. The evolution of the relic neutrino sea, the possibility of clustering, the formation of structures, local concentrations etc. are of great importance both for direct and indirect detections of relic neutrinos. A possible scenario of structure formation on galactic scales was discussed by N. Bilic (Zagreb): self-gravitating neutrino clouds can show "gravitational phase transitions" in the process of contraction and form neutrino stars, the scale of whose sizes would depend on the neutrino mass .

## 7 Other Sources of Neutrino Background

Apart from the big bang relic neutrinos, the present universe is filled with relic neutrinos from astrophysical sources: past supernovae, supermassive objects and, probably, primordial black holes. The possibilities of the detection of neutrinos from relic and real-time supernovae with existing and new detectors were discussed by D. Cline (UCLA) and K. Sato (Tokyo). Sato has calculated the expected rate of relic supernova neutrinos at the Super-Kamiokande detector. The rate of supernova explosions is derived from a model of galaxy evolution where the effect of the chemical evolution is appropriately taken into account . Monte-Carlo simulations show that the rate is a few events/year in the observable energy range of 15–40 MeV, which is still about two orders of magnitude smaller than the observational limit at Super-Kamiokande. A similar rate is found for the new experiment ICARUS, described by Cline. A future detection of a supernova neutrino burst by large underground detectors will provide a measurement of neutrino masses and mixing (Cline). New projects for a supernova burst observatory (SNBO/OMNIS) with an operation time of > 20 – 40 years were described, where the neutrinos will be detected through the secondary neutrons emitted by the recoiling nuclei . A new cosmic neutrino source may be provided by Supermassive Objects (SMOs), which may be formed as the final evolutionary stage of dense star clusters (X. Shi, San Diego). Through relativistic instabilities, SMOs will eventually collapse into giant black holes, such as those at the centers of galaxies. A significant fraction of the gravitational binding energy of the collapse of the SMOs may be released by freely escaping neutrinos in a short period of time ($`\sim 1`$ sec) with an average energy of 1–10 MeV. Neutrino bursts from nearby SMOs ($`d\lesssim 750`$ Mpc) may be detectable at ICECUBE, a planned ∼1 km³ neutrino detector in Antarctica (an expanded version of the current AMANDA), with an expected rate of ∼0.1 to 1 burst per year . Some contribution to the relic neutrino sea may also come from the evaporation of Primordial Black Holes (PBHs) through Hawking radiation . E. Bugaev (Moscow) showed that the most favorable energy at which to detect the flux of neutrinos of PBH origin is a few MeV. Comparison of the theoretically expected neutrino flux from PBHs with the Super-Kamiokande data sets an upper bound on the contribution of PBHs to the present energy density of the universe ($`\mathrm{\Omega }_{PBH}<10^{-5}`$). This, however, is much weaker than the bounds from the $`\gamma `$-background data.

## 8 Neutrinos in Extreme Conditions

An important aspect of the physics of the relic neutrinos is the propagation and the interactions of neutrinos in the extreme conditions of a very hot and dense plasma, in strong magnetic fields, etc. R.
Horvat (Zagreb) has used the real-time approach of thermal field theory (TFT) to calculate the finite-temperature and finite-density radiative corrections to the neutrino effective potential in the CP-symmetric early universe (see also ). The $`𝒪(\alpha )`$ photon corrections have been shown to be free of infrared and finite-mass singularities, so that the bare perturbation series is adequate for the calculations. D. Grasso (Valencia) has calculated the radiative decay rate of neutrinos in a medium using a generalisation of the optical theorem . This is a powerful method to handle dispersive and dissipative properties of the medium. The results are applicable to the neutrino evolution in the early universe, where the electron–positron plasma is ultra-relativistic and non-degenerate. A. Ioannisian (Munich) discussed the Čerenkov radiation process $`\nu \to \nu \gamma `$ in the presence of a homogeneous strong magnetic field . Apart from inducing an effective neutrino-photon vertex, the magnetic field also modifies the photon dispersion relations. Even for fields as large as $`B_{\mathrm{crit}}\equiv m_e^2/e\simeq 4\times 10^{13}`$ Gauss (which are encountered around pulsars), the Čerenkov rate is found to be small, which indicates that the magnetosphere of a pulsar is quite transparent to neutrinos.

## 9 Summary and Outlook

1. The SK atmospheric neutrino results imply that neutrinos are massive and at least one component of the relic sea is non-relativistic. This opens up the possibility of clustering of neutrinos and the formation of structures. Forthcoming experiments on atmospheric and solar neutrinos, double beta decay, etc. may shed more light on the neutrino mass spectrum and, therefore, on the relevance of neutrinos for cosmology. The possible discovery of sterile neutrinos (light singlet fermions) that mix with the usual neutrinos would have an enormous impact on astrophysics and cosmology.

2. The simple mechanism of baryon asymmetry generation via leptogenesis seems very plausible. Moreover, in several suggested scenarios, the masses of the light neutrinos are expected to be in the range relevant for cosmology. Further developments in this field will be related to the identification of the mechanism of neutrino mass generation as well as to the studies of alternative scenarios of baryogenesis, like electroweak baryogenesis based on supersymmetry.

3. The neutrino sea has a strong influence on big bang nucleosynthesis. Here the observational situation is not clear. Conservative bounds admit more than four neutrino species in equilibrium at the time of BBN, so that one light sterile neutrino in equilibrium is possible. On the other hand, if the observations imply a smaller number of effective neutrino species, this can be accounted for by scenarios like neutrino decay or oscillations into sterile components. Progress will come from further studies of the systematics in the determination of the abundances, restrictions on $`\eta `$, and searches for sterile neutrino effects in laboratory experiments.

4. Recent cosmological data are changing our understanding of the role of neutrinos as the HDM: it seems that the HDM is not necessary, although some amount is allowed and may be useful for a better fit to the data. Future cosmological observations will give important information about the neutrino masses, the presence of sterile states, neutrino degeneracy, etc.

5. The direct detection of the relic neutrinos is a challenge.
However, indirect observations of the neutrino sea are possible via the studies of the cosmic rays of ultrahigh energies, or through the searches for radiative decays of relic neutrinos. There are deep connections between the physics of relic neutrinos and a variety of fundamental open questions in cosmology, astrophysics and particle physics. Understanding the properties of the relic neutrino sea and its possible detection will be one of the challenges for the physics and astrophysics of the next millennium.

## 10 Epilogue

This report is an attempt to substitute for the "Proceedings", which are, in many cases, a nightmare for the organisers, a waste of time for the speakers and a practically useless showpiece for the readers, due to the time delays. Its objectives were:

* to give general information about the meeting (format, participants, topics, etc.),
* to review the results and discussions,
* to give, as much as possible, a complete reference list to the original papers of the participants in which the results presented during the conference were published. (Indeed, a majority of the results have been published before or within about two months after the meeting.)

We also give some information about other appropriate papers, as well as about further related developments during the short time after the conference. This review has been written (as an experiment) by the organisers of the workshop. Probably a better idea would be to select "reporters" from among the participants in advance, who would review the conference in a short period of time.
# Optimal state estimation for d-dimensional quantum systems

## Abstract

We establish a connection between optimal quantum cloning and optimal state estimation for $`d`$-dimensional quantum systems. In this way we derive an upper limit on the fidelity of state estimation for $`d`$-dimensional pure quantum states and, furthermore, for generalized inputs supported on the symmetric subspace.

PACS: 03.67.-a, 03.65.-w

One of the fundamental problems in quantum physics is the question of how well one can estimate the state $`|\psi \rangle `$ of a quantum system, given that only a finite number of identical copies is available. An appropriate figure of merit in this context is the fidelity, which will be defined below. The optimal fidelity for state estimation of two-level systems was derived in , and an algorithm for constructing an optimal positive operator valued measurement (POVM) for a general quantum system has been given in . The purpose of this letter is to derive the optimal fidelity for state estimation of an ensemble of $`N`$ identical pure $`d`$-dimensional quantum systems by establishing a connection to optimal quantum cloning. In the following we will prove the link between optimal quantum cloning and optimal state estimation, using a similar line of argument as in . We consider both processes to be universal, in the sense that the corresponding fidelity does not depend on the input state $`|\psi \rangle `$. The link we want to show is given by the equality

$$F_{d,est}^{opt}(N)=F_{d,QCM}^{opt}(N,\infty ).$$ (1)

Here $`F_{d,QCM}^{opt}(N,M)`$ is the fidelity of the optimal quantum cloner for $`d`$-dimensional systems, taking $`N`$ identical pure inputs and creating $`M`$ outputs, which was derived in to be

$$F_{d,QCM}^{opt}(N,M)=\frac{M-N+N(M+d)}{M(N+d)}.$$ (2)

(This formula refers to the fidelity between an output one-particle reduced density operator and one of the identical inputs.) In equation (1), $`F_{d,est}^{opt}(N)`$ is the optimal average fidelity of state estimation for $`N`$ identical $`d`$-dimensional inputs, defined as

$$F_{est}=\sum _\mu p_\mu (\psi )\,|\langle \psi |\psi _\mu \rangle |^2,$$ (3)

where $`p_\mu (\psi )`$ is the probability of finding outcome $`\mu `$ (to which we associate the candidate $`|\psi _\mu \rangle `$), given that the inputs were in the state $`|\psi \rangle `$. Let us introduce the generalized Bloch vector $`\vec{\lambda }`$ by expanding a $`d`$-dimensional density matrix in the following way:

$$\rho _d=\frac{1}{d}1𝐥+\frac{1}{2}\sum _{i=1}^{d^2-1}\lambda _i\tau _i,$$ (4)

where the $`\tau _i`$ are the generators of the group $`SU(d)`$ with

$$\text{Tr}\,\tau _i=0;\quad \text{Tr}(\tau _i\tau _j)=2\delta _{ij}.$$ (5)

Note that the length of the generalized Bloch vector for pure states is

$$|\vec{\lambda }|=\sqrt{2\left(1-\frac{1}{d}\right)},$$ (6)

which reduces to the familiar case $`|\vec{\lambda }|=1`$ for qubits, i.e. $`d=2`$. It has been shown in that, as far as optimality of the fidelity for a universal map is concerned, one can restrict oneself to covariant transformations. Furthermore, a covariant map, acting on pure $`d`$-dimensional input states, can only shrink the generalized Bloch vector, namely it transforms equation (4) into the output density operator

$$\rho _d=\frac{1}{d}1𝐥+\frac{1}{2}\eta _d\sum _{i=1}^{d^2-1}\lambda _i\tau _i,$$ (7)

where we call $`\eta _d`$ the shrinking factor.
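Equations (4)–(6) are easy to verify numerically. The sketch below (our own check, using one standard construction of the $`SU(d)`$ generators; the construction is not unique) confirms the pure-state Bloch-vector length for d = 3:

```python
import numpy as np

d = 3
gens = []
# Off-diagonal (symmetric and antisymmetric) generators:
for j in range(d):
    for k in range(j + 1, d):
        s = np.zeros((d, d), complex); s[j, k] = s[k, j] = 1.0; gens.append(s)
        a = np.zeros((d, d), complex); a[j, k] = -1j; a[k, j] = 1j; gens.append(a)
# Diagonal generators, normalized so that Tr(tau_i tau_j) = 2 delta_ij (Eq. (5)):
for l in range(1, d):
    hmat = np.zeros((d, d), complex)
    hmat[:l, :l] = np.eye(l); hmat[l, l] = -l
    gens.append(np.sqrt(2.0 / (l * (l + 1))) * hmat)

rho = np.zeros((d, d), complex); rho[0, 0] = 1.0        # pure state |0><0|
lam = np.array([np.trace(rho @ t).real for t in gens])  # lambda_i = Tr(rho tau_i)
print(np.linalg.norm(lam), np.sqrt(2.0 * (1.0 - 1.0 / d)))   # both ~ 1.1547
```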
Note that for pure input states the fidelity is related to $`\eta _d`$ as follows:

$$F_d=\frac{1}{d}[1+(d-1)\eta _d].$$ (8)

Remember that, as mentioned above, in this paper we consider universal quantum cloning and universal state estimation, and therefore the above considerations apply. In order to clarify the role of the shrinking factor in quantum state estimation we notice that equation (3) can also be interpreted as

$$F_{d,est}(N)=\langle \psi |\varrho _{d,est}|\psi \rangle $$ (9)

where the density operator $`\varrho _{d,est}`$, due to universality, is the shrunk version of the input $`|\psi \rangle \langle \psi |`$, namely

$$\varrho _{d,est}=\eta _{d,est}(N)|\psi \rangle \langle \psi |+(1-\eta _{d,est}(N))\frac{1}{d}1𝐥.$$ (10)

We now start proving the equality (1) by noticing that after performing a universal measurement procedure on $`N`$ identically prepared input copies $`|\psi \rangle `$, we can prepare a state of $`L`$ systems, supported on the symmetric subspace of $`\mathcal{H}_d^{\otimes L}`$, where each system has the same reduced density operator, given by $`\varrho _{d,est}`$. The symmetric subspace is defined as the space spanned by all states which are invariant under any permutation of the constituent subsystems. As shown in , a universal cloning process generates outputs that are supported on the symmetric subspace. Therefore, the above method of performing state estimation followed by the preparation of a symmetric state can be viewed as a universal cloning process, and thus it cannot lead to a higher fidelity than the optimal $`N\to L`$ cloning transformation. Therefore we find the inequality

$$F_{d,est}^{opt}(N)\leq F_{d,QCM}^{opt}(N,L).$$ (11)

The above inequality must hold for any value of $`L`$, in particular for $`L\to \infty `$. In order to derive the opposite inequality, we consider a measurement procedure on $`N`$ copies which is composed of an optimal $`N\to L`$ cloning process and a subsequent universal measurement on the $`L`$ output copies. This total procedure is also a possible state estimation method. As mentioned above, the output $`\varrho _L`$ of the optimal universal $`d`$-dimensional cloner is supported on the symmetric subspace and can therefore be decomposed as

$$\varrho _L=\sum _i\alpha _i|\psi _i\rangle \langle \psi _i|^{\otimes L},\quad \text{with }\sum _i\alpha _i=1,$$ (12)

where the coefficients $`\alpha _i`$ are not necessarily positive. After performing the optimal universal measurement on the $`L`$ cloner outputs we can calculate the average fidelity of the total process, due to the linearity of the measurement procedure, as follows:

$$F_{d,total}(N,L)=\sum _i\alpha _i\langle \psi |\left[\eta _{d,est}^{opt}(L)|\psi _i\rangle \langle \psi _i|+(1-\eta _{d,est}^{opt}(L))\frac{1}{d}1𝐥\right]|\psi \rangle .$$ (13)

(Remember that $`\sum _i\alpha _i|\psi _i\rangle \langle \psi _i|`$ is the one-particle reduced density matrix at the output of the $`N\to L`$ cloner and thus depends on $`N`$ and $`L`$.) In the limit $`L\to \infty `$ we have $`\eta _{d,est}^{opt}(\infty )=1`$ and the average fidelity can be written as

$$\underset{L\to \infty }{lim}F_{d,total}(N,L)=\langle \psi |\left[\sum _i\alpha _i|\psi _i\rangle \langle \psi _i|\right]|\psi \rangle $$ (14)

$$=\langle \psi |\left[\eta _{d,QCM}^{opt}(N,\infty )|\psi \rangle \langle \psi |+(1-\eta _{d,QCM}^{opt}(N,\infty ))\frac{1}{d}1𝐥\right]|\psi \rangle $$ (15)

$$=\frac{1}{d}\left[1+(d-1)\eta _{d,QCM}^{opt}(N,\infty )\right],$$ (16)

where in the second line we have explicitly written down the output of the cloning stage for clarity. This fidelity cannot be higher than that of the optimal state estimation performed directly on $`N`$ pure inputs; thus we conclude

$$F_{d,QCM}^{opt}(N,\infty )\leq F_{d,est}^{opt}(N).$$ (17)

The above inequality, together with equation (11), leads to the equality (1). Thus we have derived the optimal fidelity for state estimation of $`N`$ copies of a $`d`$-dimensional quantum system to be

$$F_{d,est}^{opt}(N)=\frac{N+1}{N+d}.$$ (18)
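The limit in Eq. (1) can be checked directly from Eq. (2); a minimal sketch (our own):

```python
from fractions import Fraction

def F_clone(N, M, d):
    """Optimal universal N -> M cloning fidelity in d dimensions, Eq. (2)."""
    return Fraction(M - N + N * (M + d), M * (N + d))

print(F_clone(1, 2, 2))                  # 5/6, the familiar qubit value
N, d = 3, 4
for M in (10, 100, 10**6):               # approaches (N+1)/(N+d) as M grows
    print(M, float(F_clone(N, M, d)))
print("Eq. (18):", Fraction(N + 1, N + d))
```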
This fidelity cannot be higher than the one for the optimal state estimation performed directly on $`N`$ pure inputs, thus we conclude $$F_{d,QCM}^{opt}(N,\infty )\le F_{d,est}^{opt}(N).$$ (17) The above inequality, together with equation (11), leads to the equality (1). Thus we have derived the optimal fidelity for state estimation of $`N`$ copies of a $`d`$-dimensional quantum system to be $$F_{d,est}^{opt}(N)=\frac{N+1}{N+d}.$$ (18) Note that we can extend this argument for optimal state estimation to more general inputs, namely to inputs supported on the symmetric subspace. Using the decomposition (12), we see immediately that we can always reach at least the same shrinking factor as for pure inputs, due to linearity of the measurement procedure. Moreover, we can prove by contradiction that the shrinking factor cannot be larger than for pure states: let us assume that we could perform better on such an entangled input. We can think of arranging the following procedure. We concatenate an $`N\to M`$ cloning transformation taking $`N`$ pure inputs and creating $`M`$ outputs with a subsequent state estimation. Notice that, generalizing the result of , the shrinking factors of two concatenated universal operations multiply, given that the output of the first is supported on the symmetric subspace. If we could perform better than in the pure case at the second stage of this concatenation, we could, by reconstructing the output state according to the state estimation result, create an $`N\to \infty `$ cloner that is better than the optimal one, thus arriving at a contradiction. In conclusion, we have derived the optimal fidelity for state estimation of an ensemble of identical $`d`$-dimensional quantum states, pointing out the connection to optimal quantum cloning. We have also extended the possible inputs for state estimation in $`d`$ dimensions to those supported on the symmetric subspace. Note that an algorithm to construct the corresponding POVM consisting of a finite set of operators has been given in reference . We would like to thank A. Ekert and P. Zanardi for helpful discussions. We acknowledge support by the European TMR Research Network ERP-4061PL95-1412. Part of this work was completed during the 1998 workshops on quantum information of ISI - Elsag-Bailey and the Benasque Center of Physics.
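As a quick numerical sanity check of the chain of results above, the following minimal sketch evaluates Eq. (2) at large $`M`$ and compares it with Eq. (18); the values of $`N`$ and $`d`$ are arbitrary illustrative choices.

```python
# Numerical sanity check of Eqs. (1), (2) and (18): the M -> infinity
# limit of the cloning fidelity reproduces the estimation fidelity.

def F_qcm(N, M, d):
    return (M - N + N * (M + d)) / (M * (N + d))   # Eq. (2)

def F_est(N, d):
    return (N + 1) / (N + d)                       # Eq. (18)

for N, d in [(1, 2), (3, 2), (2, 5)]:
    approx = F_qcm(N, M=10**7, d=d)
    print(N, d, round(approx, 6), round(F_est(N, d), 6))
# e.g. N = 1, d = 2 gives the familiar optimal estimation fidelity 2/3.
```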
# The mechanism of thickness selection in the Sadler-Gilmer model of polymer crystallization ## I Introduction On crystallization from solution and the melt many polymers form lamellae where the polymer chain traverses the thin dimension of the crystal many times folding back on itself at each surface. Although lamellar crystals were first observed over forty years ago their physical origin is still controversial. It is agreed that the kinetics of crystallization are crucial since extended-chain crystals are thermodynamically more stable than lamellae. However, the explanations for the dependence of the lamellar thickness on temperature offered by the two dominant theoretical approaches appear irreconcilable. The lamellar thickness is always slightly greater than $`l_{\mathrm{min}}`$, the minimum thickness for which the crystal is thermodynamically more stable than the melt; $`l_{\mathrm{min}}`$ is approximately inversely proportional to the degree of supercooling. The first theory, which was formulated by Lauritzen and Hoffman soon after the initial discovery of the chain-folded crystals, invokes surface nucleation of a new layer on the thin side faces of the lamellae as the key process. It assumes that there is an ensemble of crystals of different thickness, each of which grows with constant thickness. The crystals which grow most rapidly dominate this ensemble, and so the average value of the thickness in the ensemble, which is equated with the observed thickness, is close to the thickness for which the crystals have the maximum growth rate. The growth rates are derived by assuming that a new crystalline layer grows by the deposition of a succession of stems (straight portions of the polymer that traverse the crystal once) along the growth face. The two main factors that determine the growth rate are the thermodynamic driving force and the free energy barrier to deposition of the first stem in a layer. The former only favours crystallization when the thickness is greater than $`l_{\mathrm{min}}`$. The latter increases with the thickness of the crystal because of the free energetic cost of creating the two new lateral surfaces on either side of the stem and makes crystallization of thicker crystals increasingly slow. Therefore, the growth rate passes through a maximum at an intermediate value of the thickness which is slightly greater than $`l_{\mathrm{min}}`$. The second approach, which was developed by Sadler and Gilmer and has been termed the entropic barrier model, is based upon the interpretation of kinetic Monte Carlo simulations and rate-theory calculations of a simplified model of polymer crystal growth. The model has since been applied to the crystallization of long-chain paraffins, copolymer crystallization and non-isothermal crystallization. As with the surface nucleation approach, the observed thickness is suggested to result from the competition between a driving force and a free energy barrier contribution to the growth rate. However, a different cause for the free energy barrier is postulated. As the polymer surface in the model can be rough, it is concluded that the details of surface nucleation of new layers are not important. Instead, the outer layer of the crystal is found to be thinner than in the bulk; this rounded crystal profile prevents further crystallization. Therefore, growth of a new layer can only begin once a fluctuation occurs to an entropically unlikely configuration in which the crystal profile is ‘squared-off’. 
As this fluctuation becomes more unlikely with increasing crystal thickness, the entropic barrier to crystallization increases with thickness. Recently, we performed a numerical study of a simple model of polymer crystallization. Our results suggest that some of the assumptions of the Lauritzen-Hoffman (LH) theory do not hold and led us to propose a new picture of the mechanism by which the thickness of polymer crystals is determined. Firstly, we examined the free energy profile for the crystallization of a polymer on a surface. This profile differed from that assumed by the LH theory because the initial nucleus was not a single stem but two stems connected by a fold which grew simultaneously. Secondly, we performed kinetic Monte Carlo simulations on a model where, similar to the LH approach, new crystalline layers grow by the successive deposition of stems across the growth face, but where some of the constraints present in the LH theory are removed—stems could be of any length and grew by the deposition of individual polymer units. We, therefore, refer to this model as the unconstrained Lauritzen-Hoffman (ULH) model. Attempts had previously been made to study similar models but at a time when computational resources were limited, so that only approximate and restricted treatments were possible. These kinetic Monte Carlo simulations confirmed that the initial nucleus was not a single stem. We also found that the average stem length in a layer is not determined by the properties of the initial nucleus, and that the thickness of a layer is, in general, not the same as that of the previous layer. Furthermore, during growth lamellar crystals select the value of the thickness, $`l^*`$, for which growth with constant thickness can occur, and not the thickness for which crystals grow most rapidly if constrained to grow at constant thickness. This thickness selection mechanism can be understood by considering the free energetic costs of the polymer extending beyond the edges of the previous crystalline layer and of a stem being shorter than $`l_{\mathrm{min}}`$, which provide upper and lower constraints on the length of stems in a new layer. Their combined effect is to cause the crystal thickness to converge dynamically to the value $`l^*`$ as new layers are added to the crystal. At $`l^*`$, which is slightly larger than $`l_{\mathrm{min}}`$, growth with constant thickness then occurs. Some similarities were found between the behaviour of our model and that of the Sadler-Gilmer (SG) model. For example, at low supercoolings rounding of the crystal growth front was found to inhibit growth. However, the mechanism of thickness selection that we found appears, at first sight, to be incompatible with Sadler and Gilmer’s interpretation in terms of an entropy barrier. Here, we reexamine the SG model in order to determine whether the fixed-point attractors that we found at large supercooling also occur in the SG model, and in order to understand the relationship between the two viewpoints. ## II Methods In the SG model the growth of a polymer crystal results from the attachment and detachment of polymer units at the growth face. The rules that govern the sites at which these processes can occur are designed to mimic the effects of the chain connectivity. In the original three-dimensional version of the model, kinetic Monte Carlo simulations were performed to obtain many realizations of the polymer crystals that result. 
Averages were then taken over these configurations to get the properties of the model. Under many conditions the growth face is rough and the correlations between stems in the direction parallel to the growth face are weak. Therefore, an even simpler two-dimensional version of the model was developed in which lateral correlations are neglected entirely, and only a slice through the polymer crystal perpendicular to the growth face is considered. One advantage of this model is that it can be formulated in terms of a set of rate equations which can easily be solved. The behaviour of this new model was found to be very similar to the original three-dimensional model. The geometry of the model is shown in Figure 1. Changes in configuration can only occur at the outermost stem and stems behind the growth face are ‘pinned’ because of the chain connectivity. There are three ways that a polymer unit can be added to or removed from the crystal: (1) The outermost stem can increase in length upwards. (2) A new stem can be initiated at the base of the previous stem. (3) A polymer unit can be removed from the top of the outermost stem. The set of rate equations that describes the model can be most easily formulated in a frame of reference that is fixed at the growth face. The position of each stem (which is representative of a layer in the three-dimensional crystal) is then denoted by its distance from the growth face (the outermost stem is at position 1). The rate equations are formulated in terms of three quantities: $`C_n(i)`$, the probability that the stem at position $`n`$ has length $`i`$; $`P_n(i,j)`$, the probability that the stem at position $`n`$ has length $`i`$ and the stem at position $`n+1`$ has length $`j`$; and $`f_n(i,j)`$, the conditional probability that the stem at position $`n+1`$ has length $`j`$ given that the stem at position $`n`$ has length $`i`$. These quantities are related to each other by $$C_n(i)=\underset{j}{\sum }P_n(i,j)$$ (1) and $$f_n(i,j)=P_n(i,j)/C_n(i).$$ (2) The evolution of the system is then described as follows: for $`i>1`$ $$\frac{dP_1(i,j)}{dt}=k^+P_1(i-1,j)+k^{-}(i+1,j)P_1(i+1,j)+k^{-}(1,i)P_1(1,i)f_2(i,j)-2k^+P_1(i,j)-k^{-}(i,j)P_1(i,j),$$ (3) and for $`i=1`$ $$\frac{dP_1(1,j)}{dt}=k^+C_1(j)+k^{-}(2,j)P_1(2,j)+k^{-}(1,1)P_1(1,1)f_2(1,j)-2k^+P_1(1,j)-k^{-}(1,j)P_1(1,j),$$ (4) where $`k^+`$ and $`k^{-}(i,j)`$ are the rate constants for attachment and detachment of a polymer unit, respectively. The first three terms of Equation (3) correspond to the flux into state $`(i,j)`$ by extension of the outermost stem, reduction of the outermost stem and the removal of an outermost stem of length 1, respectively. The fourth term is the flux out of $`(i,j)`$ due to the extension of the outermost stem or the initiation of a new stem, and the fifth term is the flux out due to a reduction of the outermost stem. The only difference for $`i=1`$ is that the first term now corresponds to flux into state $`(1,j)`$ by initiation of a new stem. Sadler and Gilmer found $`f_n`$ to be independent of $`n`$. Therefore, $`f_2`$ can be replaced by $`f_1`$ in Equations (3) and (4) and so the steady-state solution ($`dP(i,j)/dt=0`$) of the above $`c_{\mathrm{max}}\times c_{\mathrm{max}}`$ equations can easily be found ($`c_{\mathrm{max}}`$ is the size of the box in the c-direction). We used a fourth-order Runge-Kutta scheme to integrate the rate equations to convergence starting from an initial configuration that, for convenience, we chose to be a crystal of constant thickness. 
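To make the structure of this calculation concrete, here is a minimal sketch of such an integration. It deliberately simplifies: a crude forward-Euler step is used instead of Runge-Kutta, the $`i=c_{\mathrm{max}}`$ boundary is simply truncated, and all parameter values are illustrative; in any serious use the convergence should be monitored.

```python
import numpy as np

# Forward-Euler integration of the two-dimensional SG rate equations
# (Eqs. 3-4, with f_2 approximated by f_1), as a rough sketch only.

cmax = 40                          # box size in the c-direction
kTm, T_over_Tm, eps_f, kp = 0.5, 0.90, 0.0, 1.0   # kT_m/eps, T/T_m, eps_f/eps, k+
kT = kTm * T_over_Tm

def km(i, j):
    """Detachment rate k-(i,j) of Eqs. (8)-(10), with eps = 1."""
    if i == 1:
        return kp * np.exp(2.0/kTm - 1.0/kT + eps_f/kT)
    return kp * np.exp(2.0/kTm - (2.0/kT if i <= j else 1.0/kT))

KM = np.array([[km(i, j) for j in range(1, cmax + 1)]
               for i in range(1, cmax + 1)])      # KM[i-1, j-1] = k-(i,j)

P = np.zeros((cmax, cmax))         # P[i-1, j-1] = P_1(i,j)
P[11, 11] = 1.0                    # initial crystal of constant thickness 12

dt = 0.01
for step in range(100000):         # in practice, monitor convergence
    C1 = P.sum(axis=1)             # C_1(i)
    with np.errstate(divide='ignore', invalid='ignore'):
        f1 = np.where(C1[:, None] > 0.0, P / C1[:, None], 0.0)  # f_1(i,j)
    dP = np.zeros_like(P)
    dP[0, :] += kp * C1                        # Eq. (4): new-stem initiation
    dP[1:, :] += kp * P[:-1, :]                # Eq. (3): stem extension
    dP[:-1, :] += KM[1:, :] * P[1:, :]         # stem reduction
    dP += (KM[0, :] * P[0, :])[:, None] * f1   # removal of a length-1 stem
    dP -= (2.0 * kp + KM) * P                  # outflux terms
    P += dt * dP
    P /= P.sum()                   # renormalise (crude boundary treatment)

C1 = P.sum(axis=1)
G = kp - km(1, 1) * C1[0]                      # Eq. (11)
lbar1 = np.dot(np.arange(1, cmax + 1), C1)     # Eq. (12), outer layer
print(f"G = {G:.3f} (units of k+), outer-layer thickness = {lbar1:.2f}")
```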
From $`P_1`$ it is simple to calculate all $`P_n`$ by iterative application of the equations $$C_{n+1}(j)=\underset{i}{\sum }P_n(i,j)$$ (5) and $$P_{n+1}(i,j)=f(i,j)C_{n+1}(i).$$ (6) Far enough into the crystal the properties converge, i.e. $`C_{n+1}=C_n`$. We denote the quantities in this limit as $`C_{\mathrm{bulk}}`$ and $`P_{\mathrm{bulk}}`$. The rate constants, $`k^+`$ and $`k^{-}(i,j)`$, are related to the thermodynamics of the model through $$k^+/k^{-}=\mathrm{exp}(-\mathrm{\Delta }F/kT),$$ (7) where $`\mathrm{\Delta }F`$ is the change in free energy on addition of a particular polymer unit. The above equation only defines the relative rates and not how the free energy change is apportioned between the forward and backward rate constants. We follow Sadler and Gilmer and choose $`k^+`$ to be constant. We use $`1/k^+`$ as our unit of time. In the model the energy of interaction between two adjacent crystal units is $`-ϵ`$ and the change in entropy on melting of the crystal is given by $`\mathrm{\Delta }S=\mathrm{\Delta }H/T_m=2ϵ/T_m`$, where $`T_m`$ is the melting temperature (of an infinitely thick crystal) and $`\mathrm{\Delta }H`$ is the change in enthalpy. It is assumed that $`\mathrm{\Delta }S`$ is independent of temperature. We can also include the contribution of chain folds to the thermodynamics by associating the energy of a fold, $`ϵ_f`$, with the initiation of a new stem. Sadler and Gilmer ignored this contribution and so for the sake of comparison most of our results are also for $`ϵ_f=0`$. From the above considerations it follows that the rate constants are given by $`k^{-}(i,j)`$ $`=`$ $`k^+\mathrm{exp}(2ϵ/kT_m-ϵ/kT+ϵ_f/kT)`$ for $`i=1`$ (8) $`k^{-}(i,j)`$ $`=`$ $`k^+\mathrm{exp}(2ϵ/kT_m-2ϵ/kT)`$ for $`i\le j,i\ne 1`$ (9) $`k^{-}(i,j)`$ $`=`$ $`k^+\mathrm{exp}(2ϵ/kT_m-ϵ/kT)`$ for $`i>j.`$ (10) The first term in the exponents is due to the gain in entropy as a result of the removal of a unit from the crystal; the second term is due to the loss of contacts between the removed unit and the rest of the crystal and the third term (only for $`i=1`$) is due to the reduction of the area of the fold surface of the crystal. Quantities such as the growth rate, $`G`$, and the average thickness of the $`n^{\mathrm{th}}`$ layer, $`\overline{l}_n`$, can be easily calculated from the steady-state solution of $`P_1(i,j)`$: $$G=k^+-k^{-}C_1(1),$$ (11) where $`k^{-}`$ is given by Equation (8), and $$\overline{l}_n=\underset{i}{\sum }iC_n(i).$$ (12) We can also estimate the minimum thermodynamically stable thickness, $`l_{\mathrm{min}}`$. If we assume the crystal to be of constant thickness: $$l_{\mathrm{min}}=\frac{(ϵ_f+ϵ)T_m}{2ϵ\mathrm{\Delta }T},$$ (13) where $`\mathrm{\Delta }T=T_m-T`$ is the supercooling. However, this value is a lower bound since the roughness in the fold surface reduces the surface free energy; the resulting surface entropy more than compensates for the increase in surface energy. As well as solving the rate equations of this model, we sometimes use kinetic Monte Carlo. This method is more appropriate when examining the evolution of the system towards the steady state. At each step in the kinetic Monte Carlo simulation we randomly choose a state, $`b`$, from the three states connected to the current state, $`a`$, with a probability given by $$P_{ab}=\frac{k_{ab}}{\underset{b^{}}{\sum }k_{ab^{}}},$$ (14) and update the time by an increment $$\mathrm{\Delta }t=\frac{-\mathrm{log}\rho }{\underset{b}{\sum }k_{ab}},$$ (15) where $`\rho `$ is a random number in the range $`(0,1]`$. 
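The kinetic Monte Carlo scheme is equally compact; a minimal sketch follows, with the same illustrative parameters as above. The guard against removing the very last unit of the seed crystal is our own bookkeeping convenience, not part of the model specification.

```python
import numpy as np

# Kinetic Monte Carlo for the two-dimensional SG model, Eqs. (14)-(15).
# The crystal is stored as a list of stem lengths, outermost stem last.

kTm, T_over_Tm, eps_f, kp = 0.5, 0.90, 0.0, 1.0
kT = kTm * T_over_Tm
rng = np.random.default_rng(0)

def km(i, j):
    """Detachment rate k-(i,j) of Eqs. (8)-(10), with eps = 1."""
    if i == 1:
        return kp * np.exp(2.0/kTm - 1.0/kT + eps_f/kT)
    return kp * np.exp(2.0/kTm - (2.0/kT if i <= j else 1.0/kT))

stems = [12] * 50                     # seed crystal: 50 stems, 12 units thick
t, t_end = 0.0, 200.0
while t < t_end:
    i = stems[-1]
    j = stems[-2] if len(stems) > 1 else i
    rates = [kp, kp, km(i, j)]        # extend stem / initiate stem / remove unit
    r = rng.random() * sum(rates)     # choose a move, Eq. (14)
    if r < rates[0]:
        stems[-1] += 1                # outermost stem grows upwards
    elif r < rates[0] + rates[1]:
        stems.append(1)               # a new stem of length 1 is initiated
    elif len(stems) > 1 or i > 1:     # (never remove the very last unit)
        stems[-1] -= 1
        if stems[-1] == 0:
            stems.pop()               # an outermost stem of length 1 is removed
    t += -np.log(1.0 - rng.random()) / sum(rates)   # advance time, Eq. (15)

print("number of stems:", len(stems), " outermost stem lengths:", stems[-5:])
```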
## III Results ### A Mechanism of thickness selection The current model has three variables: $`kT_m/ϵ`$, $`T/T_m`$ and $`ϵ_f/ϵ`$. In this section we only discuss results for $`ϵ_f=0`$ as we wish to reexamine the same model as that used by Sadler and Gilmer. Throughout the paper we chose to use $`kT_m/ϵ=0.5`$. Sadler and Gilmer found that this parameter did not affect the qualitative behaviour of the model. Figures 2 and 3 show the typical behaviour of the model. The crystal thickness, $`\overline{l}_{\mathrm{bulk}}`$, is a monotonically increasing function of the temperature which goes to $`\infty `$ at $`T_m`$. The approximately linear character of Figure 2b is in agreement with the expected temperature dependence of the growth rate: $`G\propto \mathrm{exp}(-K_G/T\mathrm{\Delta }T)`$, where $`K_G`$ is a constant. The rounding of the edge of the lamellar crystal is clear from the typical crystal profile shown in Figure 3a. This rounding increases with temperature (Figure 3c) suggesting that it is associated with the greater entropy of configurations with a rounded profile. The rounding also increases with an increasing value of $`kT_m/ϵ`$. The probability distributions of the stem length, $`C_n`$, of course, reflect this rounding (Figure 3b). The main difference between $`C_n`$ for different values of $`n`$ occurs at small stem lengths. All the $`C_n`$ seem to have the same approximately exponential decay at large stem lengths. However, the bulk can tolerate few stems shorter than $`l_{\mathrm{min}}`$ for reasons of thermodynamic stability. Near the growth face this thermodynamic constraint is more relaxed. Moreover, at the growth face there is the kinetic effect that each new stem must initially be only one unit long, thus contributing to $`C_1(1)`$. It is also interesting to note that $`C_{\mathrm{bulk}}`$ is approximately symmetrical about its maximum. The rounding has a major effect on the mechanism of growth. Since the majority of the short stems at the surface cannot be incorporated into the bulk, rounding inhibits growth. Therefore, a fluctuation in the outer layer to a thickness similar to $`\overline{l}_{\mathrm{bulk}}`$ is required before the growth front can advance. By this mechanism stems with length less than $`l_{\mathrm{min}}`$ are selected out. From Figure 3a it can be seen that this selection process becomes more complete as one goes further into the crystal. One of the main aims of this paper is to discover whether the mechanism of thickness determination that we found for the ULH model also holds for the SG model. Central to this mechanism was a fixed-point attractor that related the thickness of a layer to the thickness of the previous layer. As layers in that model once grown could not thereafter change their thickness, the relationship between the thickness of a layer and the previous layer was independent of the position of the layer in the crystal. The same is not true of the SG model since there is a zone near to the growth face where the layers have not yet achieved their final thickness (Figure 3a). Therefore, it is appropriate to look for the attractor in the relationship between the thickness of layers in the bulk of the crystal. To do so we define $`f_n^{\prime }(i,j)`$ as the conditional probability that the stem at position $`n`$ has length $`i`$ given that the stem at position $`n+1`$ has length $`j`$. 
$`f_n^{\prime }(i,j)`$ is simply related to quantities that we know: $$f_n^{\prime }(i,j)=P_n(i,j)/C_{n+1}(j).$$ (16) From this we can calculate $`l_n(j)`$, which is the average thickness of layer $`n`$ given that layer $`n+1`$ has thickness $`j`$, using $$l_n(j)=\underset{i}{\sum }f_n^{\prime }(i,j)i.$$ (17) The actual thickness of the $`n^{\mathrm{th}}`$ layer is simply related to this function by $$\overline{l}_n=\underset{j}{\sum }C_{n+1}(j)l_n(j).$$ (18) Figure 4a shows the function $`l_{\mathrm{bulk}}(j)`$ at $`T/T_m=0.95`$. As anticipated, the figure has the form of a fixed-point attractor. The arrowed dotted lines show the thickness of the layer converging to the fixed point from above and below. For example, a layer in the bulk of the crystal which is thirty units thick will be followed by one that is 17.8, which will be followed by one that is 14.3, …, until the thickness at the fixed point, $`l^*`$, is reached. $`l^*`$ is approximately equal to the crystal thickness, $`\overline{l}_{\mathrm{bulk}}`$. Figure 4d shows the crystal profile that results from starting kinetic Monte Carlo simulations from a crystal that is initially 30 units thick. The convergence of the thickness to $`l^*`$ is apparent and the resulting step on the surface is similar to that produced by a temperature jump. Figure 4b shows that similar maps with a fixed point occur at all temperatures. As the temperature increases the slope of the curves becomes closer to 1 and so the convergence to $`l^*`$ becomes slower. To understand why the function $`l_n(j)`$ has the form of a fixed-point attractor we examine probability distributions of the stem length when the previous layer had a specific thickness. In Figure 5 we show $`f_{\mathrm{bulk}}^{\prime }(i,j)`$ for different $`j`$. These probability distributions have a number of features. Firstly, the probability that the stem length is smaller than $`l_{\mathrm{min}}`$ drops off exponentially. This is a simple consequence of the thermodynamics. Secondly, the probability that the stem length is larger than the thickness of the previous layer also drops off exponentially. This is again related to the thermodynamics. The absence of an interaction of the polymer with the surface makes it unfavourable for a stem to extend beyond the edge of the previous layer. This feature also causes a build-up in probability at stem lengths at or just shorter than the thickness of the previous layer. Therefore, as we saw in our previous model, there is a range of stem lengths between $`l_{\mathrm{min}}`$ and the thickness of the previous layer which are thermodynamically viable, and it is the combined effect of the two constraints that causes the thickness to converge to the fixed point. Thirdly, when the previous layer is thick the probability also falls off exponentially with increasing stem length in the range $`l_{\mathrm{min}}<i<j`$. This decay of the probability is much less rapid than for stems that overhang the previous layer and is a kinetic effect. At each step in the growth of a stem there is a certain probability, $`p_{\mathrm{new}}`$, that a new stem will be successfully initiated, thus bringing the growth of the original stem to an end. It therefore follows that the probability will decay with an exponent $`\mathrm{log}(1-p_{\mathrm{new}})`$. It is for this reason that the thickness of a new layer will remain finite even if the previous layer is infinitely thick. Comparison of $`f_{\mathrm{bulk}}^{\prime }`$ at different temperatures shows that this rate of decay increases with decreasing temperature. 
This is because the thermodynamic driving force is greater at lower temperature making successful initiation of a new stem more likely (at higher temperature a new stem is much more likely to be removed). The relationship between the probability distributions and the fixed-point attractor can be most clearly seen from the contour plot of $`\mathrm{log}(f_{\mathrm{bulk}}^{\prime }(i,j))`$ in Figure 5b. It clearly shows the important roles played by $`l_{\mathrm{min}}`$ and $`j`$ in constraining the thickness of the next layer. When the previous layer is larger than $`l^*`$ the average stem length must lie between $`l_{\mathrm{min}}`$ and $`j`$ causing the crystal to become thinner. As $`j`$ approaches $`l_{\mathrm{min}}`$ the probability distributions become narrower and the difference in probability between the stem length being greater than $`j`$ and smaller than $`j`$ decreases until the fixed point $`l^*`$ is reached where the average stem length is the same as the thickness of the previous layer. This explains why $`l^*`$ is slightly larger than $`l_{\mathrm{min}}`$. For $`j<l^*`$ the asymmetry of the probability distribution is reversed. These results confirm that the mechanism for thickness determination in the SG model is very similar to that which we found in the ULH model. We have also shown our results for $`f_1^{\prime }(i,j)`$ in Figure 5. The main difference from $`f_{\mathrm{bulk}}^{\prime }`$ results from the fact that stems shorter than $`l_{\mathrm{min}}`$ can be present in the outermost layer. The lack of this constraint causes the outer layer to be thinner than the bulk, and therefore the crystal profile to be rounded. The two modes of exponential decay at larger stem length that we noted for $`f_{\mathrm{bulk}}^{\prime }`$ are clearly also present in $`f_1^{\prime }`$. Another finding from our work on the ULH model was that the rate of growth actually slowed down as the thickness of a crystal converged to $`l^*`$ from above. To determine whether this behaviour also holds for the SG model we performed kinetic Monte Carlo simulations from initial crystals which had different thicknesses. As can be seen from Figure 6 the initial growth rate depends on the thickness of the initial crystal, but the final growth rate is independent of this initial thickness (on the right-hand side of the figure the lines are all parallel) because the thicknesses of all the crystals have now converged to $`l^*`$. In agreement with our previous results the initial growth rate increases with the initial crystal thickness. Interestingly for a crystal with an initial thickness of 7 (less than $`l_{\mathrm{min}}`$) the crystal initially melts causing the growth face to go backwards. Only once a fluctuation causes the nucleation of some thicker layers at the growth front can the crystal begin to grow again. The thinner the crystal the more difficult is this nucleation event. Although we have only presented results for the two-dimensional version of the Sadler-Gilmer model in this section, we should note that in the three-dimensional version of the model we also found that the thickness of a crystal converged to the same value whatever the starting configuration of the crystal. ### B Sadler and Gilmer’s entropic barrier The description of the mechanism of thickness selection that we give above is in terms of the fixed-point attractor, $`l_{\mathrm{bulk}}(j)`$. 
It makes no mention of the growth rate (or a maximum in that quantity) in contrast to the LH surface nucleation theory and Sadler and Gilmer’s own interpretation of the current model. This of course raises the question, “Which viewpoint is correct?” In this section we examine Sadler and Gilmer’s argument again to determine whether it provides a complementary description to the picture presented above or whether the two pictures are incompatible in some respects. This reexamination is complicated by the fact that two slightly different arguments were presented. In the first the growth rate was decomposed into the terms $$G(i)=k^+C_1(i)-k^{-}(1,i)P_1(1,i),$$ (19) which satisfy $`G=\underset{i}{\sum }G(i)`$. The function $`G(i)`$ has a maximum close to $`l_{\mathrm{bulk}}`$ (Figure 7). It is expected that $$\overline{l}_{\mathrm{bulk}}=\underset{i}{\sum }iG(i)/G.$$ (20) Sadler and Gilmer argue that the maximum in $`G(i)`$ causes the crystal thickness to take the value $`l_{\mathrm{bulk}}`$. Firstly, we should note that this explanation is very different from the maximum growth-rate criterion in the LH theory. In the LH theory the growth rate has a maximum with respect to the thickness in a fictitious ensemble of crystals of different thickness, each of which grows with constant thickness. Rather, $`G(i)`$ is a characterization of that one steady state where growth with constant average thickness occurs. Secondly, as a characterization of the steady state Equation (20) is correct. As $`G(i)`$ is simply the rate of incorporation of stems of length $`i`$ into the crystal, at the steady state it should obey the equation: $$G(i)=GC_{\mathrm{bulk}}(i).$$ (21) Comparison of Figures 3b and 7 provides a simple visual confirmation of the validity of this equation. Equation (20) simply follows from the above equation and Equation (12). The above equation also implies that explanations in terms of $`G(i)`$ or $`C_{\mathrm{bulk}}(i)`$ should be equivalent. Thirdly, although the steady state must be stable against fluctuations (the rate equations would not converge to it if it was not) $`G(i)`$ does not in itself explain how convergence to the steady state is achieved or how fluctuations to larger or smaller thickness decay away. $`G(i)`$ only characterizes the steady state and not the system when it has been perturbed from the steady state. For example, the maximum in $`G(i)`$ does not imply that crystals which are thicker than $`l_{\mathrm{bulk}}`$ grow more slowly. Rather, as we showed earlier, these crystals initially grow more rapidly, but this growth causes the thickness to converge to $`l^*`$. Sadler and Gilmer then continue by examining the form of $`G(i)`$. Rearranging Equation (19) leads to $$G(i)=k^+\left(1-\frac{k^{-}(1,i)P_1(1,i)}{k^+C_1(i)}\right)C_1(i)$$ (22) This equation now has the form of a driving force term multiplied by a barrier term. The barrier term, $`C_1(i)`$, is the probability that the system is at the transition state and the driving-force term ($`k^+`$ multiplied by the term in the brackets) is the rate at which this transition state is crossed. The two terms are shown in Figure 7 and it can be seen that their forms agree with the above interpretation. At small $`i`$, $`G(i)`$ is small because of the small driving force and at large $`i`$ the barrier term leads to the decrease in $`G(i)`$. This leads to a maximum in the growth rate at an intermediate value of $`i`$ when the barrier and driving force terms have intermediate values. 
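The identity (21) is easy to check numerically. The sketch below continues from the rate-equation sketch in Section II (it reuses the steady-state array P, the rate table KM, the constants kp and cmax, and the function km defined there), iterating Eqs. (5) and (6) into the bulk:

```python
import numpy as np

# Numerical check of Eq. (21), G(i) = G C_bulk(i), at the steady state.
# Assumes P, KM, kp, cmax and km from the earlier rate-equation sketch.

C1 = P.sum(axis=1)
with np.errstate(divide='ignore', invalid='ignore'):
    f = np.where(C1[:, None] > 0.0, P / C1[:, None], 0.0)   # f(i,j), n-independent

Pn = P.copy()
for n in range(200):               # march away from the growth face
    Cn1 = Pn.sum(axis=0)           # Eq. (5): C_{n+1}(j)
    Pn = f * Cn1[:, None]          # Eq. (6): P_{n+1}(i,j) = f(i,j) C_{n+1}(i)
    Pn /= Pn.sum()
C_bulk = Pn.sum(axis=1)

G_i = kp * C1 - KM[0, :] * P[0, :]             # Eq. (19)
G = kp - km(1, 1) * C1[0]                      # Eq. (11)
print("max |G(i) - G C_bulk(i)| =", np.abs(G_i - G * C_bulk).max())
print("bulk thickness =", np.dot(np.arange(1, cmax + 1), C_bulk))
```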
It should be noted that the term we have labelled the driving force does not have exactly the behaviour one might expect. In contrast to Sadler and Gilmer’s suggestion, this term is not simply proportional to $`i-l_{\mathrm{min}}`$ near to $`l_{\mathrm{min}}`$. For, as is evident from Equation (21), $`G(i)`$ can never become negative even at $`i<l_{\mathrm{min}}`$. This is because $`G(i)`$ is not the rate of growth of a crystal of thickness $`i`$ but the rate of incorporation of stems of length $`i`$ when the system is at the steady state; there is always a small probability that a stem with length less than $`l_{\mathrm{min}}`$ is incorporated into the crystal. The barrier term, $`C_1`$, has a clear interpretation; it quantifies the effect of the rounding of the crystal growth front in inhibiting growth. The rate-determining step in the growth process is the fluctuation to a more unlikely squared-off configuration that has to occur before growth is likely to proceed. However, Sadler and Gilmer interpret $`C_1`$ in terms of what they call an entropic barrier. Firstly, we note that a decay in $`C_1(i)`$ does not necessarily imply an increase in the free energy with $`i`$ because $`C_1(i)`$ is a steady-state probability distribution under conditions where the crystal is growing, whereas the Landau free energy for the outermost stem, $`A_1(i)`$, is related to the equilibrium probability distribution through $$A_1(i)=A-kT\mathrm{log}C_1^{\mathrm{eq}}(i),$$ (23) where $`A`$ is the Helmholtz free energy. The outermost layer may not have reached a state of thermodynamic equilibrium. Secondly, and more importantly, their explanation makes little mention of the limiting effect of the thickness of the previous layer. However, if we examine the contributions to $`C_1(i)`$ from configurations where the thickness of layer 2 is $`j`$ (Figure 8) it can clearly be seen that these contributions decay sharply for $`i>j`$. This is because it is unfavourable for a stem to extend beyond the previous layer. The combination of this decay in $`f_1^{\prime }(i,j)`$ (Figure 5) with the rounding inherent in $`C_2(j)`$ (Figure 3b) causes $`C_1(i)`$ to decay as $`i`$ increases. Sadler and Gilmer also considered a second approach to account for the thickness selection of lamellar crystals. In this approach a maximum in $`G(l)`$, the growth rate of a crystal of thickness $`l`$, is sought. However, by doing so this approach suffers the same weakness as the LH maximum growth criterion. The implicit assumption that a crystal of any thickness continues to grow at that thickness is contradicted both by the behaviour of the current model (e.g. Figure 4d) and by experiment. As with the previous approach the growth rate is divided into a barrier and a driving force term. The barrier term is $`C(l)`$, the probability that the system had a squared-off configuration (i.e. $`l_n=l`$ for all $`n`$). Sadler and Gilmer argue that $`C(l)`$ is an exponentially decreasing function of $`l`$, because ‘it seems likely the number of tapered configurations will go up approximately exponentially with $`l`$’. Later, Sadler quantified this argument by calculating $`C(l)`$ for the case that $`l_n\le l_{n+1}`$: $$C(l)=\underset{j=1}{\overset{l-1}{\prod }}\frac{1}{1+(k^{-}/k^+)^j}$$ (24) However, this result appears to be incorrect. We find: $$C(l)=\underset{j=1}{\overset{l-1}{\prod }}\left[1-(k^{-}/k^+)^j\right].$$ (25) The original and the present derivations are given in the Appendix. 
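The difference between the two expressions is easy to see numerically; in the sketch below r denotes $`k^{-}/k^+`$, and the chosen values of r and l are illustrative.

```python
import numpy as np

# Evaluating Sadler's expression (Eq. 24) against the corrected one
# (Eq. 25); r = k-/k+ < 1 below T_m.

def C_eq24(l, r):
    return np.prod([1.0 / (1.0 + r**j) for j in range(1, l)])

def C_eq25(l, r):
    return np.prod([1.0 - r**j for j in range(1, l)])

for r in (0.3, 0.6, 0.9):
    print("r =", r,
          [round(C_eq24(l, r), 4) for l in (5, 10, 20, 40)],
          [round(C_eq25(l, r), 4) for l in (5, 10, 20, 40)])
# Both products settle to a finite asymptote as l grows, which is the
# point made in the following paragraph: C(l) does not decay exponentially.
```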
However, the two expressions do have a similar functional dependence on $`l`$ and temperature, and so the error does not affect the substance of the argument. In Figure 9a we show the dependence of $`C(l)`$ on $`l`$ at a number of temperatures. It can be seen that $`C(l)`$ does not decay exponentially, but after an initial decay reaches an asymptote at large $`l`$. The origins of this behaviour in Equation (25) are easy to understand. For $`T<T_m`$, $`k^{-}/k^+<1`$ and therefore $$\underset{l\to \infty }{lim}\frac{C(l+1)}{C(l)}=\underset{l\to \infty }{lim}\left[1-\left(k^{-}/k^+\right)^l\right]=1.$$ (26) The rate of decay to this asymptote depends on the value of $`k^{-}/k^+`$. As the temperature is decreased, the magnitude of $`\mathrm{\Delta }F`$ increases and therefore $`k^{-}/k^+`$ decreases (Equation 7) and $`C(l)`$ decays to its asymptote more rapidly. The curves in Figure 9 show just this behaviour. The physical significance of these results is clear. For thick crystals a squared-off configuration does not become any more unlikely as the thickness is increased. In particular, the free energy barrier to growth caused by rounding of the crystal profile does not grow exponentially with $`l`$ as Sadler and Gilmer have argued. It should also be stressed that the rounding of the crystal growth front that is expressed by $`C(l)`$ is not just a result of entropy. The number of configurations with an outermost stem of length $`i`$ is independent of $`i`$. Rounding of the crystal profile occurs because those fluctuations that lead to a stem near the edge of the crystal being thicker than the previous layer are energetically disfavoured compared to those which lead to a stem that is shorter than the previous layer. Again we see the crucial role played by the free energetic penalty for a stem extending beyond the previous layer. Sadler implicitly included this effect in the calculation of $`C(l)`$ by imposing the condition $`l_n\le l_{n+1}`$. Rounding of the crystal profile results from an interplay of energetic and entropic contributions to the free energy. ### C The free energy of the outermost stem As the free energy function that is experienced by the outermost stem plays such an important role in both the LH and SG mechanisms of thickness determination, here we calculate this quantity for the current model. This also allows us to examine the magnitude of non-equilibrium effects. Firstly, $`A_1(i|j)`$, the free energy of the outermost stem when the thickness of layer 2 is $`j`$, is simply $`A_1(i|j)/ϵ`$ $`=`$ $`2iT/T_m-2i+1`$ for $`i\le j`$ (27) $`A_1(i|j)/ϵ`$ $`=`$ $`2iT/T_m-i-j+1`$ for $`i>j.`$ (28) For $`i\le j`$ the free energy decreases with increasing $`i`$ because the energy gained from crystallization is greater than the loss in entropy. However, for $`i>j`$ the new polymer units added to the crystal no longer gain any energy from interactions with the previous layer, causing the free energy to increase. The example free energy profile, $`A_1(i|10)`$, shown in Figure 10a exhibits these features. From $`A_1(i|j)`$ it is easy to calculate the value of $`f_1^{\prime }(i,j)`$ if the outermost stem is at equilibrium: $$f_1^{\mathrm{eq}}(i,j)=\frac{\mathrm{exp}(-A_1(i|j)/kT)}{\underset{i^{}}{\sum }\mathrm{exp}(-A_1(i^{}|j)/kT)}.$$ (29) In Figure 10b we show $`f_1^{\mathrm{eq}}(i,j)`$ at a number of values of $`j`$. 
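These equilibrium quantities are straightforward to evaluate; a minimal sketch (with $`ϵ=1`$, so that energies are in units of $`ϵ`$, and illustrative values of j and $`T/T_m`$):

```python
import numpy as np

# Sketch of the equilibrium quantities of Eqs. (27)-(29), with eps = 1.

def A1_cond(i, j, T_over_Tm):
    """A_1(i|j)/eps from Eqs. (27)-(28)."""
    if i <= j:
        return 2*i*T_over_Tm - 2*i + 1
    return 2*i*T_over_Tm - i - j + 1

def f1_eq(j, T_over_Tm, kTm=0.5, imax=60):
    """Equilibrium conditional distribution of the outermost stem, Eq. (29)."""
    kT = kTm * T_over_Tm
    A = np.array([A1_cond(i, j, T_over_Tm) for i in range(1, imax + 1)])
    w = np.exp(-A / kT)
    return w / w.sum()

dist = f1_eq(j=10, T_over_Tm=0.9)
print("most probable outer-stem length:", 1 + int(np.argmax(dist)))  # -> 10
print("P(i < j) =", dist[:9].sum(), " P(i > j) =", dist[10:].sum())
```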
$`f_1^{\mathrm{eq}}(i,j)`$ is always peaked at $`j`$, the position of the free energy minimum in $`A_1(i|j)`$, and the asymmetry in the probability distribution (the stem length is more likely to be less than $`j`$ than greater than $`j`$) reflects the asymmetry in the free energy profile. Comparison of the equilibrium and steady-state values of $`f_1^{\prime }(i,j)`$ shows that the two are in reasonable agreement for small $`j`$ but the discrepancy increases with $`j`$. The steady-state values of $`f_1^{\prime }(i,j)`$ are always larger at small values of $`i`$ because of the bias introduced by the successful initiation of a new stem before equilibrium is reached for the outermost stem; this kinetic effect was noted in our earlier discussion of the form of $`f^{\prime }(i,j)`$. As $`j`$ increases it becomes increasingly likely that a new stem is initiated before the current stem has reached a length $`j`$. We can also obtain $`C_1^{\mathrm{eq}}(i)`$ from $`f_1^{\mathrm{eq}}(i,j)`$: $$C_1^{\mathrm{eq}}(i)=\underset{j}{\sum }f_1^{\mathrm{eq}}(i,j)C_2(j).$$ (30) Using the steady-state values of $`C_2`$ in the above equation we found that $`C_1^{\mathrm{eq}}`$ and $`C_1`$ are virtually identical (Figure 10b), reflecting the good agreement between $`f_1^{\prime }(i,j)`$ and $`f_1^{\mathrm{eq}}(i,j)`$ for small $`j`$. As $`C_1^{\mathrm{eq}}(i)`$ is also related to $`A_1(i)`$, the free energy of a stem in the outermost layer of length $`i`$, through Equation (23), it follows that $`A_1(i)=`$ $`-`$ $`kT\mathrm{log}{\displaystyle \underset{j}{\sum }}{\displaystyle \frac{C_2(j)\mathrm{exp}(-A_1(i|j)/kT)}{\underset{i^{}}{\sum }\mathrm{exp}(-A_1(i^{}|j)/kT)}}`$ (31) $`+`$ $`kT\mathrm{log}{\displaystyle \underset{j}{\sum }}{\displaystyle \frac{C_2(j)}{\underset{i^{}}{\sum }\mathrm{exp}(-A_1(i^{}|j)/kT)}},`$ (32) where we have chosen the zero of $`A_1(i)`$ so that $`A_1(1)=A_1(1|j)=2T/T_m-1`$. Figure 10a shows that $`A_1(i)`$ rises as $`i`$ increases. There is a free energy barrier that increases with stem length. As with $`C_1`$, this increase results from the combination of the rounding present in $`C_2`$ and the free energy barrier to extension of a stem beyond the edge of the previous layer. ### D Effect of $`ϵ_f`$ Although we found a similar mechanism of thickness selection in the SG model as in the ULH model, there are a number of differences in the behaviour of the models. Firstly, the crystal profile in the SG model is always rounded whereas in the ULH model rounding does not occur at larger supercoolings. Secondly, $`l_{\mathrm{bulk}}(j)`$ in the SG model is a monotonically increasing function of temperature, whereas in the ULH model it increases at lower temperatures. This increase is because the free energy barrier for the formation of a fold (and hence the initiation of a new stem) becomes increasingly difficult to scale, so on average a stem continues to grow for longer. This effect is probably also partly responsible for the lack of rounding at larger supercoolings. One possible source of some of the differences between the models is the neglect of the energetic cost of forming a fold. In this section we consider the effect of non-zero $`ϵ_f`$ by presenting results for $`ϵ_f=ϵ`$. One effect of a non-zero $`ϵ_f`$ is to increase $`l_{\mathrm{min}}`$ (Equation 13) and so $`\overline{l}_{\mathrm{bulk}}`$ is always larger than for $`ϵ_f=0`$ (Figure 2a). Furthermore, $`\overline{l}_{\mathrm{bulk}}`$ is now a non-monotonic function of temperature; it gradually begins to increase below $`T=0.8T_m`$. We will explain the cause of this increase later in this section. 
The growth rate is slower than for $`ϵ_f=0`$, because the energetic cost of forming a fold makes initiation of a new stem more difficult. However, the temperature dependence of $`G`$ is fairly similar (Figure 2b). The convergence of the thickness to its steady-state value is still described by a fixed-point attractor (Figure 11a) but the slope of the line is closer to 1 and so the convergence is less rapid. Furthermore, the temperature dependence of $`l_{\mathrm{bulk}}(j)`$ is different (Figure 11b). $`l_{\mathrm{bulk}}(j)`$ is now approximately constant at a value near to $`j`$ over a wide range of temperature, whereas for $`ϵ_f=0`$, $`l_{\mathrm{bulk}}(j)`$ is approximately independent of $`j`$ at low temperature with a value close to $`l_{\mathrm{min}}`$ (Figure 4c). This difference is due to the greater difficulty of initiating a new stem when $`ϵ_f=ϵ`$; the length of the stem is therefore much more likely to reach $`j`$. Similarly, comparison of Figure 12 with Figure 5 shows that the kinetic decay of $`f_{\mathrm{bulk}}^{\prime }(i,j)`$ in the range $`l_{\mathrm{min}}<i<j`$ is much reduced. As a result the steady-state and equilibrium probabilities for the outer layer are in closer agreement than for $`ϵ_f=0`$ (Figure 12c) and the deviation of the rounding of the crystal profile from the theoretical prediction is reduced at low temperature (Figure 3c). The slopes of the free energy profile $`A_1(i|j)`$ for $`i<j`$ and $`i>j`$ (Figure 10a) have the same magnitude when $`T=0.75T_m`$. Therefore, below this temperature the probability distribution $`f_1^{\mathrm{eq}}(i,j)`$ is asymmetrical about $`j`$ with $`i>j`$ more likely. This asymmetry can also be seen for $`f_{\mathrm{bulk}}^{\prime }(i,j)`$ in Figure 12a. In the absence of the low-$`i`$ shoulder in $`f_{\mathrm{bulk}}^{\prime }(i,j)`$, which is caused by the initiation of new stems before equilibrium in the outermost layer is reached, this asymmetry would lead to the divergence of the thickness. Instead, it just causes $`l_{\mathrm{bulk}}`$ to rise modestly. By contrast, for $`T<0.5T_m`$ there is no longer any free energy barrier to the extension of a stem beyond the previous layer and $`l_{\mathrm{bulk}}`$ increases rapidly. These effects are not seen at $`ϵ_f=0`$ because of the more rapid kinetic decay of $`f^{\prime }(i,j)`$ as $`i`$ increases. Although using a non-zero $`ϵ_f`$ makes the behaviour of the SG and ULH models in some ways more similar, differences remain. For example, the crystal profile is still always rounded (Figure 3c). The differences probably stem from the difference in the number of growth sites in the two models. In the ULH model the crystalline polymer is explicitly represented and there are only two sites in any layer at which growth can occur—at either end of the crystalline portion of the polymer. In contrast growth can occur at any stem position at the growth front in the SG model. Consequently, in the ULH model most of the stems in the outermost layer must be longer than $`l_{\mathrm{min}}`$ if the new crystalline layer is to be stable, thus making the rounding of the crystal profile less than for the SG model. We can also reach this conclusion if we note that in some ways the ULH model is similar to the two-dimensional SG model but where the slice represents a polymer growing along the surface rather than a cut through the crystal perpendicular to the growth face. 
For this variant of the SG model, Figure 3a would represent the dependence of the average stem length in the outermost layer on the distance from the growing end of the crystalline portion of the polymer. Only near to this growing end could the stem length be less than $`l_{\mathrm{min}}`$. ## IV Conclusion In this paper we have reexamined the mechanism of crystal thickness selection in the Sadler-Gilmer (SG) model of polymer crystallization. We find that, as with the ULH model that we investigated previously, a fixed-point attractor describes the dynamical convergence of the thickness to a value just larger than $`l_{\mathrm{min}}`$ as the crystal grows. This convergence arises from the combined effect of two constraints on the length of stems in a layer: it is unfavourable for a stem to be shorter than $`l_{\mathrm{min}}`$ and for a stem to overhang the edge of the previous layer. There is also a kinetic factor which affects the stem length: whenever a stem is longer than $`l_{\mathrm{min}}`$ the growth of the stem can be terminated by the successful initiation of a new stem. This factor plays an important role in constraining the stem length at larger supercoolings when the free energy of crystallization is larger and the barrier to extension of a stem beyond the previous layer is therefore smaller. This explanation of thickness selection differs from that given by Sadler and Gilmer. They realized that in order to explain the temperature dependence of the growth rate in their model they required a ‘barrier’ term that increased with the crystal thickness at the steady state. They correctly identified that the rounding of the crystal profile has just such an inhibiting effect on the growth rate. However, they then also tried to explain the thickness selection in terms of the growth rate in a manner somewhat similar to the maximum growth rate approach in the LH theory. This approach has a number of problems. Firstly, $`G(i)`$ (Equation 19) merely characterizes the steady state. A maximum in that quantity does not explain how or why convergence to that steady state is achieved. $`G(i)`$ is just the rate of incorporation of stems of length $`i`$ into the crystal at the steady state. Equation (21) trivially shows that it must have a maximum at the most common stem length in the crystal. By contrast, Sadler’s alternative maximum growth argument in terms of $`C(l)`$ has potentially more explanatory power, but it falls into the LH trap of incorrectly assuming that a crystal of any thickness can continue to grow at that thickness. Secondly, it is somewhat misleading to label the effect of the rounding of the crystal profile as due to an entropic barrier because it is due to both the energetic and entropic contributions to the free energy of the possible configurations. In particular, it ignores the important role of the free energy barrier for the extension of a stem beyond the edge of the previous layer. We therefore argue that our interpretation in terms of a fixed-point attractor provides a more coherent description of the thickness selection mechanism in the SG model. The added advantage of this picture is its generality—we have shown it holds for both the SG and ULH models. Therefore, the mechanism is not reliant upon the specific features of these models. There is also experimental evidence that a fixed-point attractor describes the mechanism of thickness selection in lamellar polymer crystals. 
The steps on the lamellae that result from a change in temperature during crystallization show how the thickness converges to the new $`l^*`$. The characterization of the profiles of these steps by atomic-force microscopy could enable the fixed-point attractors that underlie this behaviour to be experimentally determined. As well as testing our theoretical predictions, such results would provide important data that could help provide a firmer understanding of the microscopic processes involved in polymer crystallization. For example, although the mechanism of thickness selection in both the ULH model and the SG model is described by a fixed-point attractor, the form of the temperature dependence of $`l_{\mathrm{bulk}}(j)`$ has significant variations depending on the model used and the parameters of the model (compare for example, Figures 4c and 11b in this paper and Figures 6a and 13a in Ref. ). ###### Acknowledgements. The work of the FOM Institute is part of the research programme of ‘Stichting Fundamenteel Onderzoek der Materie’ (FOM) and is supported by NWO (‘Nederlandse Organisatie voor Wetenschappelijk Onderzoek’). JPKD acknowledges the financial support provided by the Computational Materials Science program of the NWO and by Emmanuel College, Cambridge. We would like to thank James Polson and Wim de Jeu for a critical reading of the manuscript. ## A Calculating C(l) In Ref. Sadler presented an expression for the probability that a crystal of thickness $`l`$ has a squared-off profile (i.e. $`l_n=l`$ for all $`n`$). To do so he assumed that only tapered configurations were possible, i.e. $`l_n\le l_{n+1}`$, and that the system is at equilibrium. The calculation requires that the ratio of the Boltzmann weight of the squared-off configuration to the partition function, $`Z`$, be calculated. If we use the free energy of the squared-off configuration for our zero, $`C(l)=1/Z`$. Sadler argues that $`C(l)`$ can be calculated using a random-walk analogy as in Figure 13a. Each possible configuration is represented by a walk of horizontal and vertical displacements from $`i=1`$ to $`i=l`$. To achieve a squared-off configuration $`l-1`$ successive vertical displacements from $`i=1`$ have to be made. As a horizontal move would involve a loss in free energy of $`(l-i)\mathrm{\Delta }F`$, where $`\mathrm{\Delta }F`$ is the free energy of crystallization per polymer unit, the ratio of the probabilities of horizontal and vertical moves is $$\frac{p_h}{p_v}=\mathrm{exp}((l-i)\mathrm{\Delta }F/kT).$$ (A1) It simply follows from $`p_h+p_v=1`$ that $$p_v=\frac{1}{1+\mathrm{exp}((l-i)\mathrm{\Delta }F/kT)}.$$ (A2) Therefore, the probability of $`l-1`$ successive vertical steps is $$C(l)=\underset{i=1}{\overset{l-1}{\prod }}\frac{1}{1+\mathrm{exp}((l-i)\mathrm{\Delta }F/kT)}.$$ (A3) Substituting the exponential by a ratio of rate constants (Equations 7 and 9) and using $`j=l-i`$ gives $$C(l)=\underset{j=1}{\overset{l-1}{\prod }}\frac{1}{1+(k^{-}/k^+)^j}.$$ (A4) However, the method of calculation is flawed. A proper calculation of the partition function requires that all possible configurations be considered. We can characterize a general configuration by the set of integers $`\{j_1,\dots ,j_{l-1}\}`$ (Figure 13b). The free energy of a configuration with respect to the squared-off configuration can be calculated by summing the shaded rectangles in Figure 13b; this gives $`-\sum _{i=1}^{l-1}j_i(l-i)\mathrm{\Delta }F`$. 
The partition function can then be written as $$Z=\underset{j_1,\dots ,j_{l-1}=0}{\overset{\infty }{\sum }}\mathrm{exp}\left(\underset{i=1}{\overset{l-1}{\sum }}j_i(l-i)\mathrm{\Delta }F/kT\right).$$ (A5) As an exponential of a sum is a product of exponentials $$Z=\underset{i=1}{\overset{l-1}{\prod }}\underset{j_i=0}{\overset{\infty }{\sum }}\mathrm{exp}(j_i(l-i)\mathrm{\Delta }F/kT).$$ (A6) The sum is a geometric series so $$Z=\underset{i=1}{\overset{l-1}{\prod }}\frac{1}{1-\mathrm{exp}((l-i)\mathrm{\Delta }F/kT)}.$$ (A7) Using $`C(l)=1/Z`$ and again substituting for the exponent and for $`i`$ gives $$C(l)=\underset{j=1}{\overset{l-1}{\prod }}\left[1-(k^{-}/k^+)^j\right].$$ (A8) In this approach we can also calculate a number of other quantities. For example, $`C_1(i)=Z_1(i)/Z`$, where $`Z_1(i)`$, the contribution to the partition function from configurations with an outer stem of length $`i`$, is given by $$Z_1(i)=(k^{-}/k^+)^{l-i}\underset{j=1}{\overset{l-i}{\prod }}\frac{1}{1-(k^{-}/k^+)^j}$$ (A9) for $`i<l-1`$ and by $`Z_1(l-1)=\left(1-k^{-}/k^+\right)^{-1}`$ and $`Z_1(l)=1`$. $`\overline{l}_1`$ can then be calculated using Equation (12).
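The step from Eq. (A6) to Eq. (A8) can be checked numerically by truncating each geometric series; in the sketch below r denotes $`k^{-}/k^+`$, and r, l and the cutoff jmax are illustrative choices.

```python
import numpy as np

# Numerical check of the algebra from Eq. (A6) to Eq. (A8): truncate
# each geometric sum at jmax and compare 1/Z with the closed form.

def C_closed(l, r):
    return np.prod([1.0 - r**j for j in range(1, l)])            # Eq. (A8)

def C_from_sums(l, r, jmax=200):
    Z = np.prod([sum(r**(j * (l - i)) for j in range(jmax + 1))  # Eq. (A6)
                 for i in range(1, l)])
    return 1.0 / Z

l, r = 15, 0.7
print(C_closed(l, r), C_from_sums(l, r))   # the two should agree closely
```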
# N-body simulations of interacting disc galaxies ## 1 Introduction Galactic discs are very responsive to external forcing. Thus when a disc galaxy interacts (or merges) with another galaxy a lot of disc sub-structures can either be formed or destroyed. As examples let me mention:
- bridges and tails
- spiral structure
- bars
- off-centerings / asymmetries
- rings
- warps
- lenses
- bulges
- thick discs

In this review I will be selective, neglecting some items, and developing only a few aspects of others. The choice of topic reflects to a large extent my own interests. In many cases I will focus on results obtained by the Marseille group with our GRAPE systems. I will furthermore confine myself to purely stellar models, neglecting gas, star formation and their various consequences. ## 2 Bridges and tails These are often seen in interacting pairs of disc galaxies and are amongst the most spectacular results of interactions. Bridges link the two galaxies, while tails extend in the opposite direction. Amongst the best known examples are the Antennae (NGC 4038/39), the Mice (NGC 4676) and the Atoms-for-Peace galaxy (NGC 7252), where the tails extend several tens of kpc from the main bodies of the galaxies. A yet more extreme case is the Super-antennae, where the tail extent is of the order of 350 kpc (IRAS 19254-7245; Mirabel, Lutz & Maza 1991). Toomre & Toomre (1972) convincingly demonstrated that gravity alone could account for the formation of such structures, so that there was no need to invoke magnetic or other forces. The most impressive examples are formed by direct passages, where the angular velocity of the companion is, temporarily, equal to that of some of the stars in the disc of the other galaxy. In this way there is a broad “near-resonance”, and the effect of the interaction can be strong. Self-gravity can make further substructure in the tails. Bound clumps can form, containing both stars and gas. Such clumps could evolve to dwarf galaxies as discussed by Barnes & Hernquist (1992) and, from the observational point of view, e.g. by Schweizer (1978), by Hibbard et al. (1994) and by Duc et al. (1998). Contrary to the stars and gas, the halo material will not particularly concentrate into these clumps, so that the ratio of halo to luminous material could be quite low in dwarfs created from such clumps. Since tails extend to large distances from the centers of their parent galaxy they could in principle be used as probes for the dark matter halo (Faber & Gallagher 1979). Dubinski, Mihos & Hernquist (1996) considered a series of galaxy models with the same disc and bulge, and halos of very different extents and masses. These models are relatively similar in their inner parts but have very different values of luminous-to-dark mass ratio. Simulations of interactions showed that the galaxies with less massive and less extended halos form longer, more massive and more striking tails than the galaxies with more massive and more extended halos. This can be easily understood since the latter galaxies will result in interactions with higher relative velocity, in which there will be fewer disc particles in near-resonance with the angular velocity of the tidal forcing. Furthermore the disc particles will need more energy to climb out of the deeper potential well. Thus Dubinski, Mihos & Hernquist (1996) set an upper limit to the halo mass in their specific model. 
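The energy argument can be put in toy numbers: for a Plummer halo the escape speed at a given radius grows as the square root of the halo mass. The sketch below is purely illustrative (G = 1 units, arbitrary scale length and radius) and is not taken from Dubinski, Mihos & Hernquist (1996).

```python
import numpy as np

# Toy numbers for the energy argument above: the escape speed from a
# Plummer halo at a radius representative of the disc edge, for a light
# and a heavy halo.  All values are illustrative only.

G, a, r = 1.0, 5.0, 10.0          # Plummer scale length and disc-edge radius
for M in (1.0, 10.0):
    v_esc = np.sqrt(2.0 * G * M / np.sqrt(r**2 + a**2))   # from Phi = -GM/sqrt(r^2+a^2)
    print(f"halo mass {M:4.1f}: escape speed {v_esc:.2f}")
# Tail material launched at a fixed speed climbs much less far out of the
# deeper potential well of the heavier halo.
```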
Nevertheless one should not extrapolate this to set limits on the disc to halo mass ratio in general, since this limit can depend strongly on the distribution of matter within the two components (Barnes 1998, Springel & White 1998). ## 3 Spiral structure A close passage of a sufficiently massive companion can form a grand design spiral in the disc of the target galaxy. The best known example, and one of the most spectacular ones, is the spiral in M51, a galaxy interacting with its close companion NGC 5195. Statistical studies (e.g. Kormendy & Norman 1979, Elmegreen & Elmegreen 1982) have shown that M51 is not a unique case and that discs with grand design spirals very often have close companions, as noted also by Toomre & Toomre (1972), who proposed that the origin of these spirals is tidal. A large number of simulations have since shown that triggering by a close and sufficiently massive companion can indeed lead to the formation of a two armed trailing grand design spiral (e.g. Toomre 1981 and contribution to this meeting, Hernquist 1990, Howard & Byrd 1990, Sundelius 1991, Salo & Laurikainen 1993), while the physical mechanism responsible for it, swing amplification, has been presented by Toomre (1981, and contribution to this meeting). ## 4 Bars Although bars could be the result of an instability of isolated galactic discs, tidal forces can trigger their formation (Noguchi 1987, Guerin, Combes & Athanassoula 1990, Noguchi 1996, Miwa & Noguchi 1998). An interesting question in this context is whether triggered bars have the same basic properties as bars developing in isolated disc galaxies. The references mentioned above suggest that low-amplitude tidal forcing can trigger the formation of bars whose properties are largely determined by the internal structure of the target galaxy. The situation is more complicated for the case of high amplitude forcing. For such cases the work by Miwa & Noguchi (1998) suggests that the properties of the driven bars are quite different from those of bars growing spontaneously in isolated discs. More work is necessary on this very interesting point. Bars can not only be triggered by interactions, but can also be destroyed by them (Pfenniger 1991; Athanassoula 1996b, hereafter A96b). This will be discussed in more detail in section 7. ## 5 Off-centerings / asymmetries Many galaxies are at some level asymmetric and some show strong asymmetries, either in their outer regions, or in their inner parts, or in both. Typical examples are M101, the LMC, NGC 4027 etc. In some cases the center of a given component, e.g. the bar, does not coincide with that of the other components (old disc, dynamical center, etc.). The formation of such asymmetries in the outer parts can be a natural consequence of interactions. The formation of off-centerings or asymmetries in the inner parts could either be due to a mode, or be the direct result of an interaction. For example a compact and sufficiently massive companion hitting the inner parts of a barred disc galaxy can push the bar to one side. Examples of such impacts can be seen in many of my simulations and a few cases have been shown and briefly discussed by Athanassoula, Puerari & Bosma (1997, hereafter APB), Athanassoula (1996a, hereafter A96a) and A96b. All these simulations are fully self-consistent, i.e. not only the disc, but also the halo and the companion are described by particles. The bar, once displaced, sloshes around in the central part of the disc. 
If the galaxy is centrally concentrated, dynamical friction will drive the bar back to the central regions very fast and also strip it of a substantial part of its material. Using a spherical object rather than a bar, Athanassoula, Makino & Bosma (1997) tested how these processes depend on the central concentration of the target. A bar is even more vulnerable than a spherical object, since, if the passages occur at an awkward angle with respect to the bar major axis, it can lose its form. The fact that off-centered bars survive longer in less centrally concentrated galaxies than in more concentrated ones is in good agreement with the observation that such asymmetries are mainly seen in late type systems (e.g. de Vaucouleurs & Freeman 1972, Odewahn 1996). Furthermore, when the bar is pushed off-center in the simulations, a long one-armed spiral is formed (see e.g. figure 3 of A96a), very reminiscent of structures observed in late type off-centered barred galaxies such as NGC 4027 (de Vaucouleurs & Freeman 1972). ## 6 Ringed galaxies Three types of rings can be found in galaxies:

- Polar rings, which are nearly perpendicular to the disc of the galaxy
- Rings in barred galaxies, located at the main resonances
- Ringed galaxies

Here I will only briefly discuss the third type of rings. For the other two types see e.g. the review by Athanassoula & Bosma (1985). Ringed galaxies can be formed from the central impact of a sufficiently massive and compact companion on a target galaxy (Lynds & Toomre 1976; Theys & Spiegel 1976, 1977; Toomre 1978). The temporary extra gravitational attraction due to this companion causes material in the target disc to move inwards. This is followed by a recoil. Thus the material in the target disc starts large epicyclic oscillations, whose period increases with distance from the center of the target. The oscillations drift out of phase and the orbits crowd together and produce an expanding ring. This is a density wave propagating outwards. Often the first ring is followed by a second one and in some cases spokes form between the two. The best known example of a ring galaxy showing all these features simultaneously is the Cartwheel galaxy (A0035-324). Several simulations have followed the first pioneering ones (e.g. Huang & Stuart 1988, Appleton & James 1990) and the results have been reviewed and discussed by Appleton & Struck-Marcell (1996). I will here only briefly present some results obtained by the Marseille group, mainly by APB. They verified that the rings are indeed density waves, as had been predicted theoretically by Lynds & Toomre (1976) and Toomre (1978). They found that the expansion velocity of the ring decreases steadily with time, both for the first and the second ring, and that the first ring expands faster than the second one. The amplitude, the width, the lifetime and the expansion velocity of the first ring increase considerably with the mass of the companion. The same is true for the velocity of the particles in the ring. Rings formed by low-mass encounters are more symmetric, more circular and narrower. On the other hand encounters with high-mass companions increase substantially the extent of the target disc. As expected, slow impacts are more efficient and form first rings of higher relative amplitude, with higher expansion velocities and longer lifetimes. The velocity of the particles in the ring is higher for slower impacts.
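The kinematics behind such expanding rings can be illustrated with a few lines of code. The sketch below is a deliberately minimal, purely kinematic toy—not one of the self-consistent simulations discussed here—and its rotation speed, kick amplitude and output times are illustrative assumptions: all particles start oscillating epicyclically in phase, the phases drift apart because the epicyclic frequency falls with radius, and the resulting orbit crowding shows up as a density crest that moves outwards.

```python
import numpy as np

# Toy kinematic picture of ring formation after a central impact.
# Illustrative assumptions: flat rotation curve v0, so the epicyclic
# frequency kappa(r0) = sqrt(2)*v0/r0 decreases outwards; the impact
# starts every particle oscillating in phase with amplitude eps*r0.
v0, eps = 200.0, 0.12                  # km/s, fractional kick amplitude
r0 = np.linspace(4.0, 20.0, 200_000)   # initial radii (kpc)
kappa = np.sqrt(2.0) * v0 / r0         # epicyclic frequency

for t in (0.12, 0.15, 0.18):           # time in kpc/(km/s) ~ 0.98 Gyr
    r = r0 * (1.0 - eps * np.cos(kappa * t))   # radius of each particle
    hist, edges = np.histogram(r, bins=200)
    i = hist.argmax()
    print(f"t = {t:.2f}: ring (density crest) near r = {edges[i]:.1f} kpc")
```

Because each crest sits at a fixed accumulated epicyclic phase, the toy crest moves steadily outwards; self-gravity and a realistic rotation curve, included in the full simulations, modify the expansion speed and its decline with time.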
In general there is a broad correlation between the expansion velocity of the first ring and the mean radial velocity of the particles that constitute it at a given time. Perpendicular impacts make more symmetric and circular rings than oblique ones, which make more eccentric and broader rings. Finally such impacts make substantial changes to the vertical structure of the disc. ## 7 Can a companion destroy a bar without destroying the disc the bar resides in? To answer this question I first tried simulations where the companion, initially on a rectilinear orbit, was aimed either perpendicularly or at an angle at a barred disc galaxy. In all my trials, however, either the companion was not sufficiently massive, causing only a change in the bar pattern speed and a drop in its amplitude, or, when it was sufficiently massive, it destroyed the disc as well as the bar (APB, A96a). Discouraged by these attempts, I tried a totally different type of trajectory, in which the companion started on a near-circular orbit, and I was immediately rewarded by much more success (cf. also Pfenniger 1991). The simulations that I will discuss here were all done on the GRAPE-3 and GRAPE-4 systems at Marseille Observatory. The former, together with the available software and its performance, is discussed by Athanassoula et al. (1998). For more information on the GRAPE systems in general the reader is referred to Makino & Taiji (1998). I made two series of simulations. In the first series there are 120 000 particles in the target galaxy, while in the second series there are 800 000 particles in the target, of which 280 000 are in the disc and the remainder in the halo. A large fraction of the simulations in the first series were run with a direct-summation GRAPE code, while the rest, as well as the simulations in the second series, were run using the GRAPE treecode. The first series consists of 46 simulations, and they include two different target galaxies (with or without bulge), three different companions and seven different orientations of the initial plane of the companion orbit. Only a few simulations with companions on retrograde orbits were tried. The second series consists so far of twelve simulations, covering two different target discs, three different companions, and both direct and retrograde companion orbits. In all cases of this second series, however, the plane of the companion's orbit coincides with the plane of the target disc. Given the number of particles used, the second series was, to a large extent, aimed at a study of the disc thickening and, more generally, of the changes of the density and velocity distributions of the disc, the halo and the companion. In both series the mass of all particles in a given simulation is the same. Thus the ratio of the masses of two components is equal to the ratio of the numbers of particles in these two components. Initially the halo was a Plummer sphere and the disc a Kuzmin/Toomre disc. I evolved the disc galaxy in isolation until it developed a bar and chose as initial condition for the interaction a time when the bar was well developed. In this way the companion will perturb an already barred galaxy. In other studies (e.g. Pfenniger 1991), the companion is set in the simulation before the bar has fully grown, so that the growth of the bar is partly stimulated by the tidal force. This might cause problems with overshooting and thus complicates the analysis.
In order to measure the difference I started a number of simulations in the first series with a non-barred disc, to allow comparisons between spontaneous and stimulated bars. The three companions considered have respectively a mass equal to 1.0, 0.29, and 0.1 times that of the disc. All three have the same half-mass radius and outer cut-off radius. They start on near-circular orbits somewhat outside the outer edge of the halo. Although quite costly in CPU time, such a start is necessary in order to avoid starting the simulation out of equilibrium (as it would be if the companion started within the halo) and to measure what fraction of the companion's mass, energy and angular momentum stays in the halo. Preliminary results from the first series of simulations are given by A96b, while the tilt of the disc plane was discussed by both A96b and Huang & Carlberg (1997). Here I will briefly present some further results, mainly from the second series of simulations. ### 7.1 The fate of the companion: Bulge or thick disc The most massive companion is sufficiently dense not to be disrupted by its passage through the halo and the disc. It loses only a small percentage of its mass while it spirals in towards the central regions of the target galaxy, and most of it reaches the center. Could it thus become the bulge of the target galaxy? Before asserting this I have to examine the density and velocity distribution in the companion after the merging and compare them with those of observed bulges. The number of particles in the second series of simulations should be sufficient for this task. If these detailed comparisons confirm first impressions, then these simulations will argue that one possible way of forming a bulge is by merging a target galaxy with a sufficiently massive and compact companion. Thus such interactions, or rather mergings, would entail evolution along the Hubble sequence, since they would transform a late type galaxy with either a small bulge, or no bulge at all, into an earlier type disc galaxy with a sizeable bulge component. Companions of the second (intermediate mass) and third type (smallest mass) get disrupted well before reaching the center. They start losing a substantial part of their particles when they reach the main parts of the disc component. This does not happen symmetrically around the surface of the companion. Most of the particles leave from the part of the companion that is farthest from the center of the target, in a tail-like fashion. This “tail” winds more and more tightly around the center of the target. The structure becomes less clear with time, so that, after a sufficient time has elapsed, the companion can be considered as forming a thick disc, where some structure may or may not be visible. In all cases I examined, this disc was considerably thicker than the initial disc of the target galaxy. This is presumably linked to the fact that the initial diameter of the companion was in all simulations bigger than the thickness of the target disc. ### 7.2 The fate of the target disc The target disc suffers considerable changes during the interaction and merger, which, as could be expected, can be seen most clearly in the case of the most massive companion. Indeed the disc suffers considerable thickening, but also considerable extension in the radial direction.
The latter can be easily understood from the conservation of angular momentum in the system, since initially the companion has substantial angular momentum because of its high mass and its large distance from the center of the target. As the target disc expands both vertically and radially, its shape stays that of a disc. An analysis of some of the simulations in the first series shows that the disc suffers some small relative thickening, i.e. that the axial ratio $`c/a`$ is larger after the merging than before. This needs to be confirmed with the help of the second series of simulations. Indeed these measurements are not trivial, owing to the presence of the bar, which is thicker than the outer parts of the disc due to the formation of the peanut (Combes & Sanders 1981, Combes et al. 1990, Raha et al. 1991). The most exciting development, however, is in the case where the orbit of the companion is initially in a plane which does not coincide with that of the target's disc, but is at an angle to it. Then the target suffers a very dramatic tilt. In simulations with the highest mass companion the plane of the target disc after the merging is close to that of the initial orbital plane of the companion. This also can be understood from angular momentum conservation. Such a plane switch has not been considered in the analytical approach of Toth & Ostriker (1992). Since it absorbs a large fraction of the vertical energy initially stored in the orbital motion of the companion, it accounts for the fact that the target disc is not overly heated in the vertical direction. ### 7.3 The fate of the bar In the simulations where the companion was disrupted before reaching the center of the target, the bar suffers some evolution but is not destroyed. On the other hand in the simulations with the massive compact companion, where most of the companion reaches the target's center, the bar does not survive the merging. In the unperturbed barred system the particles sustaining the bar rotate around the center of the galaxy in orbits elongated along the bar (i.e. orbits trapped around the $`x_1`$ family). When the companion approaches the bar these orbits are severely perturbed, since the mass of the companion is equal to that of the disc and therefore larger than that of the bar. These strong perturbations cause the bar to be disrupted, as can be inferred by visualising the particles in the disc at sufficiently frequent intervals during the last stages of the interaction. Furthermore, after the merging is completed, the disc galaxy is much more centrally concentrated than it was at the beginning of the simulation and this can stabilise it against bar instability (A96b). The effect of central concentration on the bar instability has been discussed at length for isolated galaxies e.g. by Hasan & Norman (1990), Hasan, Pfenniger & Norman (1993), Friedli & Benz (1993), Friedli (1994) and Norman, Sellwood & Hasan (1996). ###### Acknowledgements. I would like to thank Albert Bosma for many useful discussions and Jean-Charles Lambert for his invaluable help with the GRAPE software and the administration of the runs. I would also like to thank IGRAP, the INSU/CNRS and the University of Aix-Marseille I for funds to develop the computing facilities used for the calculations in this paper.
# Lessons to be learned from the coherent photoproduction of pseudoscalar mesons ## I Introduction The coherent photoproduction of pseudoscalar mesons has been advertised as one of the cleanest probes for studying how nucleon-resonance formation, propagation, and decay get modified in the many-body environment; for current experimental efforts see Ref. . The reason behind such optimism is the perceived insensitivity of the reaction to nuclear-structure effects. Indeed, many of the earlier nonrelativistic calculations suggest that the full nuclear contribution to the coherent process appears in the form of its matter density—itself believed to be well constrained from electron-scattering experiments and isospin considerations. Recently, however, this simple picture has been put into question. Among the many issues currently addressed—and to a large extent ignored in all earlier analyses—are: background (non-resonant) processes, relativity, off-shell ambiguities, non-localities, and violations to the impulse approximation. We discuss each one of them in the manuscript. For example, background contributions to the resonance-dominated process can contaminate the analysis due to interference effects. We have shown this recently for the $`\eta `$-photoproduction process, where the background contribution (generated by $`\omega `$-meson exchange) is in fact larger than the corresponding contribution from the $`D_{13}(1520)`$ resonance . In that same study, as in a subsequent one , we suggested that—by using a relativistic and model-independent parameterization of the elementary $`\gamma N\to \eta N`$ amplitude—the nuclear-structure information becomes sensitive to off-shell ambiguities. Further, the local assumption implicit in most impulse-approximation calculations, and used to establish that all nuclear-structure effects appear exclusively via the matter density, has been lifted by Peters, Lenske, and Mosel . An interesting result that emerges from their work on coherent $`\eta `$-photoproduction is that the $`S_{11}(1535)`$ resonance—known to be dominant in the elementary process but predicted to be absent from the coherent reaction —appears to make a non-negligible contribution to the coherent process. Finally, to our knowledge, a comprehensive study of possible violations to the impulse approximation, such as the modification to the production, propagation, and decay of nucleon resonances in the nuclear medium, has yet to be done. In this paper we concentrate—in part because of the expected abundance of new, high-quality experimental data—on the coherent photoproduction of neutral pions. The central issue to be addressed here is the off-shell ambiguity that emerges in relativistic descriptions and its impact on extracting reliable resonance parameters; no attempt has been made here to study possible violations to the impulse approximation or to the local assumption. Indeed, we carry out our calculations within the framework of a relativistic impulse approximation model. However, rather than resorting to a nonrelativistic reduction of the elementary $`\gamma N\to \pi ^0N`$ amplitude, we keep intact its full relativistic structure . As a result, the lower components of the in-medium Dirac spinors are evaluated dynamically in the Walecka model . Another important ingredient of the calculation is the final-state interaction of the outgoing pion with the nucleus. We address the pionic distortions via an optical-potential model of the pion-nucleus interaction.
We use earlier models of the pion-nucleus interaction plus isospin symmetry—since these models are constrained mostly by charged-pion data—to construct the neutral-pion optical potential. However, since we are unaware of a realistic optical-potential model that covers the $`\mathrm{\Delta }`$-resonance region, we have extended the low-energy work of Carr, Stricker-Bauer, and McManus to higher energies. In this way we have attempted to keep the uncertainties arising from the optical potential to a minimum, allowing us to concentrate on the impact of the off-shell ambiguities on the coherent process. A paper discussing this extended optical-potential model will be presented shortly . Finally, we use an elementary $`\gamma N\to \pi ^0N`$ amplitude extracted from the most recent phase-shift analysis of Arndt, Strakovsky, and Workman . Our paper has been organized as follows. In Sec. II and in the appendix we discuss in some detail the pion-nucleus interaction and its extension to the $`\mathrm{\Delta }`$-resonance region. Sec. III is devoted to the central topic of the paper: the large impact of the off-shell ambiguity on the coherent cross section. Sec. IV includes a qualitative discussion of several important mechanisms that go beyond the impulse-approximation framework, but that should, nevertheless, be included in any proper treatment of the coherent process. Finally, we summarize in Sec. V. ## II Pionic Distortions Pionic distortions play a critical role in all studies involving pion-nucleus interactions. These distortions are strong and, thus, modify significantly any process relative to its naive plane-wave limit. Indeed, it has been shown in earlier studies of the coherent pion photoproduction process—and verified experimentally —that there is a large modification of the plane-wave cross section once distortions are included. Because of the importance of the pionic distortions, any realistic study of the coherent reaction must invoke them from the outset. However, since a detailed microscopic model for the distortions has yet to be developed, we have resorted to an optical-potential model. This semi-phenomenological choice implies some uncertainties. Thus, pionic distortions represent the first challenge in dealing with the coherent photoproduction processes. We have used earlier optical-potential models of the pion-nucleus interaction, supplemented by isospin symmetry, to construct the $`\pi ^0`$-nucleus optical potential. Moreover, we have extended the low-energy work of Carr, Stricker-Bauer, and McManus to the $`\mathrm{\Delta }`$-resonance region. Most of the formal aspects of the optical potential have been reserved for the appendix and for a forthcoming publication . Here we proceed directly to discuss the impact of the various choices of optical potentials on the coherent cross section. ### A Results The large effect of distortions can be easily seen in Fig. 1. The left panel of the graph (plotted on a linear scale) shows the differential cross section for the coherent photoproduction of neutral pions from $`{}^{40}\mathrm{Ca}`$ at a laboratory energy of $`E_\gamma =168`$ MeV. The solid line displays our results using a relativistic distorted-wave impulse approximation (RDWIA) formalism, while the dashed line displays the corresponding plane-wave result (RPWIA). The calculations have been done using a vector representation for the elementary $`\gamma N\to \pi ^0N`$ amplitude.
Note that this is only one of the many possible representations of the elementary amplitude that are equivalent on-shell. A detailed discussion of these off-shell ambiguities is deferred to Sec. III. At this specific photon energy—one not very far from threshold—the distortions have more than doubled the value of the differential cross section at its maximum. Yet, the shape of the angular distribution seems to be preserved. However, upon closer examination (the right panel of the graph shows the same calculations on a logarithmic scale) we observe that the distortions have caused a substantial back-angle enhancement due to a different sampling of the nuclear density, relative to the plane-wave calculation. This has resulted in a small—but not negligible—shift of about $`10^{\circ }`$ in the position of the minima. The back-angle enhancement, with its corresponding shift in the position of the minimum, has been seen in our calculations also at different incident photon energies. The effect of distortions on the total photoproduction cross section from $`{}^{40}\mathrm{Ca}`$ as a function of the photon energy is displayed in Fig. 2. The behavior of the distorted cross section is explained in terms of a competition between the attractive real (dispersive) part and the absorptive imaginary part of the optical potential. Although the optical potential encompasses very complicated processes, the essence of the physics can be understood in terms of $`\mathrm{\Delta }`$-resonance dominance. Ironically, the behavior of the dispersive and the absorptive parts is caused primarily by the same mechanism: $`\mathrm{\Delta }`$-resonance formation in the nucleus. The mechanism behind the attractive real part is the scattering of the pion from a single nucleon—which is dramatically increased in the $`\mathrm{\Delta }`$-resonance region. In contrast, the absorptive imaginary part is the result of several mechanisms, such as nucleon knock-out, excitation of nuclear states, and two-nucleon processes. At very low energies some of the absorptive channels are not open yet, resulting in a small imaginary part of the potential. This in turn provides a chance for the attractive real part to enhance the coherent cross section. As the energy increases, specifically in the $`\mathrm{\Delta }`$-resonance region, a larger number of absorptive channels become available, leading to a strong damping of the cross section. Although the attractive part also increases around the $`\mathrm{\Delta }`$-resonance region, this increase is more than compensated by the absorptive part, which greatly reduces the probability for the pion to interact elastically with the nucleus. Since understanding pionic distortions constitutes our first step towards a comprehensive study of the coherent process, it is instructive to examine the sensitivity of our results to various theoretical models. To this end, we have calculated the coherent cross section using different optical potentials, all of which fit $`\pi `$-nucleus scattering data as well as the properties of pionic atoms. We have started by calculating the coherent cross section using the optical potential developed by Carr and collaborators . It should be noted that although our optical potential originates from the work of Carr and collaborators, there are still significant differences between the two sets of optical potentials. Some of these differences arise in the manner in which some parameters are determined.
Indeed, in our case parameters that have their origin in pion–single-nucleon physics have been determined from a recent $`\pi N`$ phase-shift analysis , while Carr and collaborators have determined them from fits to pionic-atom data. Moreover, we have included effects that were not explicitly included in their model, such as Coulomb corrections when fitting to charged-pion data. In addition to the above potentials, we have calculated the coherent cross section using a simple 4-parameter Kisslinger potential of the form: $$2\omega U=-4\pi \left[b_{\mathrm{eff}}\rho (r)-c_{\mathrm{eff}}\vec{\nabla }\cdot \rho (r)\vec{\nabla }+c_{\mathrm{eff}}\frac{\omega }{2M_N}\nabla ^2\rho (r)\right].$$ (1) Note that we have used two different sets of parameters for this Kisslinger potential, denoted by K1 and K2 . Both sets of parameters were constrained by $`\pi `$-nucleus scattering data and by the properties of pionic atoms. However, while the K1 fit was constrained to obtain $`b_{\mathrm{eff}}`$ and $`c_{\mathrm{eff}}`$ parameters that did not deviate much from their pionic-atom values, the K2 fit allowed them to vary freely, so as to obtain the best possible fit. Results for the coherent photoproduction cross section from $`{}^{40}\mathrm{Ca}`$ at a photon energy of $`E_\gamma =186`$ MeV (resulting in the emission of a 50 MeV pion) for the various optical-potential models are shown in Fig. 3. In the plot, our results are labeled full-distortions (solid line), while those of Carr, Stricker-Bauer, and McManus are labeled CSM (short-dashed line); those obtained with the 4-parameter Kisslinger potential are labeled K1 (long-dashed line) and K2 (dot-dashed line), respectively. It can be seen from the figure that our calculation differs by at most 30% from the ones using earlier forms of the optical potential. Note that we have only presented results computed using the vector parameterization of the elementary amplitude. Similar calculations done with the tensor amplitude (not shown) display optical-model uncertainties far smaller (of the order of 5%) than the ones reported in Fig. 3.
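To make the structure of Eq. (1) concrete, the following minimal sketch evaluates its purely local pieces for a Fermi density. All numerical values (density geometry, complex effective strengths, kinematics) are illustrative placeholders rather than the fitted K1/K2 parameters, and the gradient term—an operator acting on the pion wave function—is deliberately left symbolic:

```python
import numpy as np

# Minimal sketch of the local pieces of the Kisslinger form, Eq. (1).
# Fermi geometry and effective parameters below are illustrative only;
# the -c_eff grad.rho(r)grad term acts on the pion wave function and is
# therefore omitted from this purely radial illustration.
A, R, a = 40, 3.51, 0.563                  # 40Ca-like Fermi density (fm)
r = np.linspace(1e-3, 10.0, 1000)
rho = 1.0 / (1.0 + np.exp((r - R) / a))
rho *= A / np.trapz(4 * np.pi * r**2 * rho, r)   # normalize to A nucleons

b_eff = -0.03 + 0.01j                      # fm, illustrative s-wave strength
c_eff = 0.60 + 0.15j                       # fm^3, illustrative p-wave strength
omega, M_N = 1.0, 4.76                     # energies in fm^-1 (hbar*c units)

drho = np.gradient(rho, r)
lap_rho = np.gradient(r**2 * drho, r) / r**2     # (1/r^2) d/dr (r^2 drho/dr)

s_wave = -4 * np.pi * b_eff * rho                        # first term of Eq. (1)
angle = -4 * np.pi * c_eff * (omega / (2 * M_N)) * lap_rho   # last term
i = np.argmin(np.abs(r - R))                             # sample at the surface
print(f"r = {r[i]:.2f} fm: s-wave {s_wave[i]:.4f} fm^-2, "
      f"angle-transform {angle[i]:.4f} fm^-2")
```

A sketch of this sort only shows how the density and its derivatives enter; the actual calculations solve the full wave equation with the complete potential.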
In conclusion, although there seems to be a non-negligible uncertainty arising from the optical potential, these uncertainties pale in comparison to the large off-shell ambiguity, to be discussed next. ## III Off-Shell Ambiguity The study of the coherent reaction represents a challenging theoretical task due to the lack of a detailed microscopic model of the process. Indeed, most of the models used to date rely on the impulse approximation: the assumption that the elementary $`\gamma N\to \pi N`$ amplitude remains unchanged as the process is embedded in the nuclear medium. Yet, even a detailed knowledge of the elementary amplitude does not guarantee a good understanding of the coherent process. The main difficulty stems from the fact that there are, literally, an infinite number of equivalent on-shell representations of the elementary amplitude. These different representations of the elementary amplitude—although equivalent on-shell—can give very different results when evaluated off-shell. Of course, this uncertainty is present in many other kinds of nuclear reactions, not just in the coherent photoproduction process. Yet, this off-shell ambiguity comprises one of the biggest hurdles, if not the biggest, in understanding the coherent photoproduction of pseudoscalar mesons. ### A Formalism Before discussing the off-shell ambiguity, let us set the background by introducing some model-independent results for the differential cross section. Using the relativistic formalism developed in our earlier work , the differential cross section in the center-of-momentum frame (c.m.) for the coherent photoproduction of pseudoscalar mesons is given by $$\left(\frac{d\sigma }{d\mathrm{\Omega }}\right)_{\mathrm{c}.\mathrm{m}.}=\left(\frac{M_T}{4\pi W}\right)^2\left(\frac{q_{\mathrm{c}.\mathrm{m}.}}{k_{\mathrm{c}.\mathrm{m}.}}\right)\left(\frac{1}{2}k_{\mathrm{c}.\mathrm{m}.}^2q_{\mathrm{c}.\mathrm{m}.}^2\mathrm{sin}^2\theta _{\mathrm{c}.\mathrm{m}.}\right)|F_0(s,t)|^2,$$ (2) where $`M_T`$ is the mass of the target nucleus. Note that $`W`$, $`\theta _{\mathrm{c}.\mathrm{m}.}`$, $`k_{\mathrm{c}.\mathrm{m}.}`$ and $`q_{\mathrm{c}.\mathrm{m}.}`$ are the total energy, scattering angle, photon and $`\pi `$-meson momenta in the c.m. frame, respectively. Thus, all dynamical information about the coherent process is contained in the single Lorentz-invariant form factor $`F_0(s,t)`$; this form factor depends on the Mandelstam variables $`s`$ and $`t`$. We now proceed to compute the Lorentz-invariant form factor in a relativistic impulse approximation. In order to do so, we need an expression for the amplitude of the elementary process: $`\gamma N\to \pi ^0N`$. We start by using the “standard” model-independent parameterization given in terms of four Lorentz- and gauge-invariant amplitudes . That is, $$T(\gamma N\to \pi ^0N)=\underset{i=1}{\overset{4}{}}A_i(s,t)M_i,$$ (3) where the $`A_i(s,t)`$ are scalar functions of $`s`$ and $`t`$ and for the Lorentz structure of the amplitude we use the standard set: $`M_1`$ $`=`$ $`\gamma ^5\text{/}ϵ\text{/}k,`$ (5) $`M_2`$ $`=`$ $`2\gamma ^5\left[(ϵp)(kp^{\prime })-(ϵp^{\prime })(kp)\right],`$ (6) $`M_3`$ $`=`$ $`\gamma ^5\left[\text{/}ϵ(kp)-\text{/}k(ϵp)\right],`$ (7) $`M_4`$ $`=`$ $`\gamma ^5\left[\text{/}ϵ(kp^{\prime })-\text{/}k(ϵp^{\prime })\right].`$ (8) This form, although standard, is only one particular choice for the elementary amplitude. Many other choices—all of them equivalent on shell—are possible. Indeed, we could have used the relation—valid only on the mass shell, $`M_1`$ $`=`$ $`\gamma ^5\text{/}ϵ\text{/}k=\frac{1}{2}\epsilon ^{\mu \nu \alpha \beta }ϵ_\mu k_\nu \sigma _{\alpha \beta }=\frac{i}{2}\epsilon ^{\mu \nu \alpha \beta }ϵ_\mu k_\nu \frac{Q_\alpha }{M_N}\gamma _\beta `$ (9) $`-\frac{1}{2M_N}\gamma ^5\left[\text{/}ϵ(kp)-\text{/}k(ϵp)\right]-\frac{1}{2M_N}\gamma ^5\left[\text{/}ϵ(kp^{\prime })-\text{/}k(ϵp^{\prime })\right],`$ (10) to obtain the following representation of the elementary amplitude: $$T(\gamma N\to \pi ^0N)=\underset{i=1}{\overset{4}{}}B_i(s,t)N_i.$$ (11) where the new invariant amplitudes and Lorentz structures are now defined as: $`B_1`$ $`=`$ $`A_1;N_1=\frac{i}{2}\epsilon ^{\mu \nu \alpha \beta }ϵ_\mu k_\nu \frac{Q_\alpha }{M_N}\gamma _\beta ,`$ (13) $`B_2`$ $`=`$ $`A_2;N_2=M_2=2\gamma ^5\left[(ϵp)(kp^{\prime })-(ϵp^{\prime })(kp)\right],`$ (14) $`B_3`$ $`=`$ $`A_3-A_1/2M_N;N_3=M_3=\gamma ^5\left[\text{/}ϵ(kp)-\text{/}k(ϵp)\right],`$ (15) $`B_4`$ $`=`$ $`A_4-A_1/2M_N;N_4=M_4=\gamma ^5\left[\text{/}ϵ(kp^{\prime })-\text{/}k(ϵp^{\prime })\right].`$ (16) Note that we have introduced the four-momentum transfer $`Q^\mu \equiv (k-q)^\mu =(p^{\prime }-p)^\mu `$. Although clearly different, Eqs. (3) and (11) are totally equivalent on-shell: no observable measured in the elementary $`\gamma N\to \pi ^0N`$ process could distinguish between these two forms.
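As a quick consistency check—a worked step, not new input—one may substitute the on-shell identity (10) back into Eq. (3) and collect terms: $$\underset{i=1}{\overset{4}{}}A_iM_i=A_1\left(N_1-\frac{M_3+M_4}{2M_N}\right)+A_2M_2+A_3M_3+A_4M_4=A_1N_1+A_2N_2+\left(A_3-\frac{A_1}{2M_N}\right)N_3+\left(A_4-\frac{A_1}{2M_N}\right)N_4,$$ which reproduces the invariant amplitudes of Eqs. (13)–(16).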
We could go on. Indeed, it is well known that a pseudoscalar and a pseudovector representation are equivalent on shell. That is, we could substitute the pseudoscalar vertex in $`N_2`$ and $`M_2`$ by a pseudovector one: $$\gamma ^5=\frac{\text{/}Q}{2M_N}\gamma ^5.$$ (17) The possibilities seem endless. Given the fact that there are many—indeed infinite—equivalent parameterizations of the elementary amplitude on-shell, it becomes ambiguous how to take the amplitude off the mass shell. In this work we have examined this off-shell ambiguity by studying the coherent process using the “tensor” parameterization, as in Eq. (3), and the “vector” parameterization, as in Eq. (11). Denoting these parameterizations as tensor and vector originates from the fact that for the coherent process from spherical nuclei (such as the ones considered here) the respective cross sections become sensitive to only the tensor and vector densities, respectively. Indeed, the tensor parameterization yields a coherent amplitude that depends exclusively on the ground-state tensor density : $$\left[\rho _T(r)\widehat{r}\right]^i=\underset{\alpha }{\overset{\mathrm{occ}}{\sum }}\overline{𝒰}_\alpha (𝐱)\sigma ^{0i}𝒰_\alpha (𝐱);\qquad \rho _T(r)=\underset{a}{\overset{\mathrm{occ}}{\sum }}\left(\frac{2j_a+1}{4\pi r^2}\right)2g_a(r)f_a(r),$$ (18) where $`𝒰_\alpha (𝐱)`$ is an in-medium single-particle Dirac spinor, $`g_a(r)`$ and $`f_a(r)`$ are the radial parts of the upper and lower components of the Dirac spinor, respectively, and the above sums run over all the occupied single-particle states. The vector parameterization, on the other hand, leads to a coherent amplitude that depends on the timelike-vector—or matter—density of the nucleus, which is defined as: $$\rho _V(r)=\underset{\alpha }{\overset{\mathrm{occ}}{\sum }}\overline{𝒰}_\alpha (𝐱)\gamma ^0𝒰_\alpha (𝐱);\qquad \rho _V(r)=\underset{a}{\overset{\mathrm{occ}}{\sum }}\left(\frac{2j_a+1}{4\pi r^2}\right)\left(g_a^2(r)+f_a^2(r)\right).$$ (19) In determining these relativistic ground-state densities, we have used a mean-field approximation to the Walecka model . In doing so, we have maintained the full relativistic structure of the process. In the Walecka model, one obtains three non-vanishing ground-state densities for spherical, spin-saturated nuclei. These are the timelike-vector and tensor densities defined earlier, and the scalar density given by $$\rho _S(r)=\underset{\alpha }{\overset{\mathrm{occ}}{\sum }}\overline{𝒰}_\alpha (𝐱)𝒰_\alpha (𝐱);\qquad \rho _S(r)=\underset{a}{\overset{\mathrm{occ}}{\sum }}\left(\frac{2j_a+1}{4\pi r^2}\right)\left(g_a^2(r)-f_a^2(r)\right).$$ (20) All other ground-state densities—such as the pseudoscalar and axial-vector densities—vanish due to parity conservation. This is one of the appealing features of the coherent reaction; because of the conservation of parity, the coherent process becomes sensitive to only one ($`A_1`$) of the four possible elementary amplitudes. It is important to note that the three non-vanishing relativistic ground-state densities are truly independent and constitute fundamental nuclear-structure quantities. The fact that in the nonrelativistic framework only one density survives (the scalar and vector densities become equal and the tensor density becomes dependent on the vector one) is due to the limitation of the approach. Indeed, in the nonrelativistic framework one employs free Dirac spinors to carry out the nonrelativistic reduction of the elementary amplitude. Hence, any evidence of possible medium modifications to the ratio of lower-to-upper components of the Dirac spinors is lost. Before presenting our results we should mention a “conventional” off-shell ambiguity.
In the vector parameterization of Eq. (11) the amplitude includes the four-momentum transfer $`Q=(k-q)`$. While the photon momentum $`𝐤`$ is well defined, the asymptotic pion three-momentum $`𝐪`$ is different—because of distortions—from the pion momentum immediately after the photoproduction process. Since the “local” pion momentum in the interaction region is the physically relevant quantity, we have replaced the asymptotic pion momentum $`𝐪`$ by the pion-momentum operator ($`-i\mathbf{\nabla }`$). Thus, in evaluating the scattering matrix element $`T_\pm =\langle \pi (q);A(p^{\prime })|J^\mu |A(p);\gamma (k,ϵ_\pm )\rangle `$, we arrive at an integral of the form: $`\epsilon ^{ijm0}ϵ_ik_j{\displaystyle \int d^3x\left[\mathbf{\nabla }\varphi _{q}^{(-)*}(𝐱)\right]_me^{i𝐤𝐱}\frac{\rho _V(r)}{2M_N}}`$ $`=`$ $`\pm (2\pi )^{3/2}{\displaystyle \frac{|𝐤|}{M_N}}{\displaystyle \underset{l=1}{\overset{\mathrm{\infty }}{\sum }}}\sqrt{{\displaystyle \frac{l(l+1)}{2l+1}}}`$ (22) $`Y_{l,\pm 1}(\widehat{q}){\displaystyle \int r^2𝑑r\rho _V(r)R_l(r)},`$ where $$R_l(r)=j_{l+1}(kr)\left[\frac{d}{dr}-\frac{l}{r}\right]\varphi _{l,q}^{(+)}(r)+j_{l-1}(kr)\left[\frac{d}{dr}+\frac{l+1}{r}\right]\varphi _{l,q}^{(+)}(r).$$ (23) Note that we have introduced the distorted pion wave function $`\varphi _q^{(\pm )}(𝐱)`$, the spherical Bessel functions of order $`l\pm 1`$, and the $`\pm `$ sign for positive/negative circular polarization of the incident photon. Moreover, adopting the $`𝐪\to -i\mathbf{\nabla }`$ prescription has resulted, as in the tensor case , in no s-wave ($`l=0`$) contribution to the scattering amplitude. This is also in agreement with the earlier nonrelativistic calculation of Ref. . Finally, we have obtained the four Lorentz- and gauge-invariant amplitudes $`A_i(s,t)`$ for the elementary process from the phase-shift analysis of the VPI group . ### B Results Based on the above formalism, we present in Fig. 4 the differential cross section for the coherent photoproduction of neutral pions from $`{}^{40}\mathrm{Ca}`$ at a photon energy of $`E_\gamma =230`$ MeV using a relativistic impulse approximation approach. Both tensor and vector parameterizations of the elementary amplitude were used. The off-shell ambiguity is immense; factors of two (or more) are observed when comparing the vector and tensor representations. It is important to stress that these calculations were done by using the same nuclear-structure model, the same pionic distortions, and two elementary amplitudes that are identical on-shell. The very large discrepancy between the two theoretical models emerges from the dynamical modification of the Dirac spinors in the nuclear medium. Indeed, in the nuclear medium the tensor density—which is linear in the lower component of the Dirac spinors \[see Eq. (18)\]—is strongly enhanced due to the presence of a large scalar potential (the so-called “$`M^{*}`$-effect”). In contrast, the conserved vector density is insensitive to the $`M^{*}`$-effect. Yet the presence of the large scalar—and vector—potentials in the nuclear medium is essential in accounting for the bulk properties of nuclear matter and finite nuclei . We have compared our theoretical results to preliminary and unpublished data (not shown) provided to us courtesy of B. Krusche . The data follows the same shape as our calculations but the experimental curve seems to straddle between the two calculations, although the vector calculation appears closer to the experimental data. This behavior—a closer agreement of the vector calculation to data—has been observed in all of the comparisons that we have done so far.
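The mechanics of the $`M^{*}`$-enhancement can be isolated in a few lines. The sketch below is deliberately schematic—an oscillator upper component and a toy single-particle energy stand in for our self-consistent mean-field spinors, and all numbers are illustrative—but it shows how lowering the effective mass feeds the lower component and hence the tensor density of Eq. (18):

```python
import numpy as np

# Schematic illustration (not a self-consistent Walecka calculation) of
# how a reduced effective mass M* enhances the lower component f(r) and
# with it the tensor density ~ 2 g f of Eq. (18), while the vector
# density of Eq. (19) is much less affected.
hbarc = 197.327                        # MeV fm
M, E = 939.0, 930.0                    # nucleon mass, toy s.p. energy (MeV)
b = 2.0                                # oscillator length (fm), illustrative
r = np.linspace(1e-3, 10.0, 2000)
G = r * np.exp(-0.5 * (r / b) ** 2)    # toy 1s1/2 upper component, G = r*g
dG = np.gradient(G, r)

def lower(Mstar):
    # Dirac radial relation for kappa = -1 (s1/2): F ~ (G' - G/r)/(E + M*)
    return hbarc * (dG - G / r) / (E + Mstar)

for Mstar in (M, 0.60 * M):            # free-space vs in-medium value
    F = lower(Mstar)
    tensor = np.trapz(np.abs(2.0 * G * F), r)
    vector = np.trapz(G**2 + F**2, r)
    print(f"M* = {Mstar:5.0f} MeV: tensor/vector weight = {tensor/vector:.3f}")
```

Even this crude estimate moves the tensor weight in the right direction; in the full calculation the corresponding enhancement is what drives the large tensor–vector differences discussed above.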
In Fig. 5 we present results for the differential cross section from $`{}^{40}\mathrm{Ca}`$ at a variety of photon energies, while in Fig. 6 we display results for the total cross section. By examining these graphs one can infer that the tensor parameterization always predicts a large enhancement of the cross section—irrespective of the photon incident energy and the scattering angle—relative to the vector predictions. As stated earlier, this large enhancement is inextricably linked to the corresponding in-medium enhancement of the lower components of the nucleon spinors. Moreover, the convolution of the tensor and vector densities with the pionic distortions gives rise to similar qualitative, but quite different quantitative, behavior of the energy dependence of the corresponding coherent cross sections. In order to explore the A-dependence of the coherent process, we have also calculated the cross section from <sup>12</sup>C at various photon energies. This is particularly relevant for our present discussion, as <sup>12</sup>C displays an even larger off-shell ambiguity than <sup>40</sup>Ca. In Fig. 7 we show the differential cross section for the coherent process from <sup>12</sup>C at a photon energy of $`E_\gamma =173`$ MeV. The off-shell ambiguity for this case is striking; at this energy the tensor result is five times larger than the vector prediction. The additional enhancement observed here relative to <sup>40</sup>Ca is easy to understand on the basis of some of our earlier work . Indeed, we have shown in our study of the coherent photoproduction of $`\eta `$-mesons that if one artificially adopts an in-medium ratio of upper-to-lower components identical to the one in free space, then the tensor and vector densities are no longer independent; rather, they become related by: $$\rho _T(Q)=\frac{Q}{2M_N}\rho _V(Q).$$ (24) However, this relation was proven to be valid only for closed-shell nuclei. As <sup>12</sup>C is an open-shell nucleus (closed $`p^{3/2}`$ but open $`p^{1/2}`$ orbitals) an additional enhancement of the tensor density—above and beyond the $`M^{*}`$-effect—was observed. Fig. 7 also shows a comparison of our results with the experimental data of Ref. . It is clear from the figure that the vector representation is closer to the data; note that the tensor calculation has been divided by a factor of five. Even so, the vector calculation also overestimates the data by a considerable amount. For further comparison with experimental data we have calculated the coherent cross section from <sup>12</sup>C at photon energies of $`E_\gamma =235`$, $`250`$, and $`291`$ MeV. In Table I we have collated our calculations with experimental data published by J. Arends and collaborators for $`E_\gamma =235`$ and $`291`$ MeV, and with data presented by Booth and A. Nagl, V. Devanathan, and H. Überall for $`E_{\gamma \mathrm{lab}}=250`$ MeV. The experimental data exhibits patterns similar to our calculations (not shown) but the values of the maxima of the cross section are different. The tensor calculations continue to predict large enhancement factors (of five and more) relative to the vector calculations. More importantly, these enhancement factors are in contradiction with experiment. The experimental data appears to indicate that the maximum in the differential cross section from $`{}^{12}\mathrm{C}`$ is largest at about $`250`$ MeV, while our calculations predict a maximum around $`295`$ MeV.
This energy “shift” is likely the result of the formation and propagation of the $`\mathrm{\Delta }`$-resonance in the nuclear medium. Clearly, in an impulse-approximation framework, medium modifications to the elementary amplitude—arising from changes in resonance properties—cannot be accounted for. Yet, a binding-energy correction of about $`40`$ MeV due to the $`\mathrm{\Delta }`$-nucleus interaction has been suggested before. Indeed, such a shift would also explain the discrepancy in the position of our theoretical cross sections in <sup>40</sup>Ca, relative to the (unpublished) data by Krusche and collaborators . Moreover, such a shift—albeit of only 15 MeV—was invoked by Peters, Lenske, and Mosel in their recent calculation of the coherent pion-photoproduction cross section. Yet, a detailed study of modifications to hadronic properties in the nuclear medium must go beyond the impulse approximation—a topic outside the scope of the present work. However, a brief qualitative discussion of possible violations to the impulse approximation is given in the next section. We conclude this section by presenting in Figs. 8 and 9 a comparison of our plane- and distorted-wave calculations with experimental data for the coherent cross section from <sup>12</sup>C as a function of photon energy for a fixed angle of $`\theta _{\mathrm{lab}}=60^{\circ }`$. The experimental data from MAMI is contained in the doctoral dissertation of M. Schmitz . Perhaps the most interesting feature in these figures is the very good agreement between our RDWIA calculation using the vector representation and the data—if we were to shift our results by +25 MeV. Indeed, this effect is most clearly appreciated in Fig. 9 where the shifted calculation is now represented by the dashed line. In our treatment of the coherent process, the detailed shape of the cross section as a function of energy results from a delicate interplay between several effects arising from: a) the elementary amplitude—which peaks at the position of the delta resonance ($`E_\gamma \simeq 340`$ MeV from a free nucleon and slightly lower here because of the optimal prescription ), b) the nuclear form factor—which peaks at low momentum transfer, and c) the pionic distortions—which strongly quench the cross sections at high energy, as more open channels become available. We believe that the pionic distortions (see Sec. II) as well as the nuclear form factor have been modeled accurately in the present work. The elementary amplitude, although obtained from a recent phase-shift analysis by the VPI group , remains one of the biggest uncertainties, as no microscopic model has been used to estimate possible medium modifications to the on-shell amplitude. Evidently, an important modification might arise from the production, propagation, and decay of the $`\mathrm{\Delta }`$-resonance in the nuclear medium. Indeed, a very general result from hadronic physics, obtained from analyses of quasielastic $`(p,n)`$ and $`({}_{}{}^{3}He,t)`$ experiments , is that the position of the $`\mathrm{\Delta }`$-peak in nuclear targets is lower than the one observed from a free proton target. However, it is also well known that such a shift is not observed when the $`\mathrm{\Delta }`$-resonance is excited electromagnetically . This apparent discrepancy has been attributed to the different dynamic responses that are being probed by the two processes.
In the case of the hadronic process, it is the (pion-like) spin-longitudinal response that is being probed, which is known to get “softened” (shifted to lower excitation energies) in the nuclear medium. Instead, quasielastic electron scattering probes the spin-transverse response—which shows no significant energy shift. Unfortunately, in our present local-impulse-approximation treatment it becomes impossible to assess the effects associated with medium modifications to the $`\mathrm{\Delta }`$-resonance. A detailed study of possible violations to the impulse approximation and to the local assumption remains an important open problem for the future (for a qualitative discussion see Sec. IV). ## IV Violations to the Impulse Approximation In this section we address an additional ambiguity in the formalism, namely, the use of the impulse approximation. The basic assumption behind the impulse approximation is that the interaction in the medium is unchanged relative to its free-space value. The immense simplification that is achieved with this assumption is that the elementary interaction now becomes model independent, as it can be obtained directly from a phase-shift analysis of the experimental data (see, for example, Ref. ). The sole remaining question to be answered is the value of $`s`$ at which the elementary amplitude should be evaluated, as now the target nucleon is not free but rather bound to the nucleus (see Fig. 10). This question is resolved by using the “optimal” prescription of Gurvitz, Dedonder, and Amado , which suggests that the elementary amplitude should be evaluated in the Breit frame. Then, this optimal form of the impulse approximation leads to a factorizable and local scattering amplitude—with the nuclear-structure information contained in a well-determined vector form factor. Moreover, as the final-state interaction between the outgoing meson and the nucleus is well constrained from other data, a parameter-free calculation of the coherent photoproduction process ensues. This form of the impulse approximation has been used with great success in hadronic processes, such as in $`(p,p^{\prime })`$ and $`(p,n)`$ reactions, and in electromagnetic processes, such as in electron scattering. Perhaps the main reason behind this success is that the elementary nucleon-nucleon or electron-nucleon interaction is mediated exclusively by $`t`$-channel exchanges—such as arising from $`\gamma `$-, $`\pi `$-, or $`\sigma `$-exchange. This implies that the local approximation (i.e., the assumption that the nuclear-structure information appears exclusively in the form of a local nuclear form factor) is well justified. For the coherent process this would also be the case if the elementary amplitude were dominated by the exchange of mesons, as in the last Feynman diagram in Fig. 11. However, it is well known—at least for the kinematical region of current interest—that the elementary photoproduction process is dominated by resonance ($`N^{*}`$ or $`\mathrm{\Delta }`$) formation, as in the $`s`$-channel Feynman diagram of Fig. 11. This suggests that the coherent reaction probes, in addition to the nuclear density, the polarization structure of the nucleus (depicted by the “bubbles” in Fig. 11). As the polarization structure of the nucleus is sensitive to the ground- as well as to the excited-state properties of the nucleus, its proper inclusion could lead to important corrections to the local impulse-approximation treatment.
Indeed, Peters, Lenske, and Mosel have lifted the local assumption and have reported—in contrast to all earlier local studies—that the $`S_{11}(1535)`$ resonance does contribute to the coherent photoproduction of $`\eta `$-mesons. Clearly, understanding these additional contributions to the coherent process is an important area for future work. ## V Conclusions We have studied the coherent photoproduction of pseudoscalar mesons in a relativistic-impulse-approximation approach. We have placed special emphasis on the ambiguities underlying most of the current theoretical approaches. Although our conclusions are of a general nature, we have focused our discussions on the photoproduction of neutral pions due to the “abundance” of data relative to the other pseudoscalar channels. We have employed a relativistic formalism for the elementary amplitude as well as for the nuclear structure. We believe that, as current relativistic models of nuclear structure rival some of the most sophisticated nonrelativistic ones, there is no longer a need to resort to a nonrelativistic reduction of the elementary amplitude. Rather, the full relativistic structure of the coherent amplitude should be maintained . We have also extended our treatment of the pion-nucleus interaction to the $`\mathrm{\Delta }`$-resonance region. Although most of the details about the optical potential will be reported shortly , we summarize briefly some of our most important findings. As expected, pionic distortions are of paramount importance. Indeed, we have found factors-of-two enhancements (at low energies) and up to factors-of-five reductions (at high energies) in the coherent cross section relative to the plane-wave values. Yet, ambiguities arising from the various choices of optical-model parameters are relatively small, of at most 30%. By far the largest uncertainty in our results emerges from the ambiguity in extending the many—actually infinite—equivalent representations of the elementary amplitude off the mass shell. While all these choices are guaranteed to give identical results for on-shell observables, they yield vastly different predictions off-shell. In this work we have investigated two such representations: a tensor and a vector one. The tensor representation employs the “standard” form of the elementary amplitude and generates a coherent photoproduction amplitude that is proportional to the isoscalar tensor density. However, this form of the elementary amplitude, although standard, is not unique. Indeed, through a simple manipulation of operators between on-shell Dirac spinors, the tensor representation can be transformed into the vector one, so labeled because the resulting coherent amplitude now becomes proportional to the isoscalar vector density. The tensor and vector densities were computed in a self-consistent, mean-field approximation to the Walecka model . The Walecka model is characterized by the existence of large Lorentz scalar and vector potentials that are responsible for a large enhancement of the lower components of the single-particle wave functions. This so-called “$`M^{*}`$-enhancement” generates a large increase in the tensor density, as compared to a scheme in which the lower component is computed from the free-space relation. No such enhancement is observed in the vector representation, as the vector density is insensitive to the $`M^{*}`$-effect. As a result, the tensor calculation predicts coherent photoproduction cross sections that are up to a factor-of-five larger than the vector results.
These large enhancement factors are not consistent with existing experimental data. Still, it is important to note that the vastly different predictions of the two models have been obtained using the same pionic distortions, the same nuclear-structure model, and two sets of elementary amplitudes that are identical on-shell. Finally, we addressed—in a qualitative fashion—violations to the impulse approximation. In the impulse approximation one assumes that the elementary amplitude may be used without modification in the nuclear medium. Moreover, by adopting the optimal prescription of Ref. , one arrives at a form for the coherent amplitude that is local and factorizable. Indeed, such an optimal form has been used extensively—and with considerable success—in electron and nucleon elastic scattering from nuclei. We suggested here that the reason behind such a success is the $`t`$-channel dominance of these processes. In contrast, the coherent-photoproduction process is dominated by resonance formation in the $`s`$-channel. In the nuclear medium a variety of processes may affect the formation, propagation, and decay of these resonances. Thus, resonance-dominated processes may not be amenable to treatment via the impulse approximation. Further, in $`s`$-channel–dominated processes it is not the local nuclear density that is probed, but rather the (non-local) polarization structure of the nucleus. This can lead to important deviations from the naive local picture. Indeed, by relaxing the local assumption, Peters and collaborators have reported a non-negligible contribution from the $`S_{11}(1535)`$ resonance to the coherent photoproduction of $`\eta `$-mesons , in contrast to all earlier local studies. In summary, we have studied a variety of sources that challenge earlier studies of the coherent photoproduction of pseudoscalar mesons. Without a clear understanding of these issues, erroneous conclusions are likely to be extracted from the wealth of experimental data that will soon become available. What impact these calculations will have on our earlier work on the coherent photoproduction of $`\eta `$-mesons is hard to predict. Yet, based on our present study it is plausible that the large enhancement predicted by the tensor form of the elementary amplitude might not be consistent with the experimental data. In that case, additional calculations using the vector form will have to be reported. Moreover, this should be done within a framework that copes simultaneously with all other theoretical ambiguities. Indeed, many challenging and interesting lessons have yet to be learned before a deep understanding of the coherent-photoproduction process will emerge. ###### Acknowledgements. We are indebted to J.A. Carr for many useful discussions on the topic of the pion-nucleus optical potential, and to R. Beck and B. Krusche for many conversations on experimental issues. One of us (LJA) thanks W. Peters for many illuminating (electronic) conversations. This work was supported in part by the U.S. Department of Energy under Contracts Nos. DE-FC05-85ER250000 (JP), DE-FG05-92ER40750 (JP) and by the U.S. National Science Foundation (AJS). ## A Pion-Nucleus Optical Potential The form of the optical potential is derived using a semi-phenomenological formalism that employs a parameterized form of the elementary $`\pi N\to \pi N`$ amplitude that is assumed to remain unchanged in the nuclear medium (impulse approximation).
However, the elementary amplitude does not encompass the many other processes that can occur in the many-body environment, such as multiple scattering, true pion absorption, Pauli blocking, and Coulomb (in the case of charged-pion scattering) interactions. The corrections resulting from these processes are of second and higher order relative to the strength of the first-order expression given by the impulse approximation. To account for these corrections, the impulse-approximation form of the optical potential is modified to arrive at a pion-nucleus optical potential—applicable from threshold up to the delta-resonance region—of the form: $`2\omega U`$ $`=`$ $`-4\pi \left[p_1b(r)+p_2B(r)-\vec{\nabla }\cdot Q(r)\vec{\nabla }-{\displaystyle \frac{1}{4}}p_1u_1\nabla ^2c(r)-{\displaystyle \frac{1}{4}}p_2u_2\nabla ^2C(r)+p_1y_1\stackrel{~}{K}(r)\right],`$ (A1) where $`b(r)`$ $`=`$ $`\overline{b}_0\rho (r)-ϵ_\pi b_1\delta \rho (r),`$ (A3) $`B(r)`$ $`=`$ $`B_0\rho ^2(r)-ϵ_\pi B_1\rho (r)\delta \rho (r),`$ (A4) $`c(r)`$ $`=`$ $`c_0\rho (r)-ϵ_\pi c_1\delta \rho (r),`$ (A5) $`C(r)`$ $`=`$ $`C_0\rho ^2(r)-ϵ_\pi C_1\rho (r)\delta \rho (r),`$ (A6) $`Q(r)`$ $`=`$ $`{\displaystyle \frac{L(r)}{1+\frac{4\pi }{3}\lambda L(r)}}+p_1x_1\acute{c}\rho (r),`$ (A7) $`L(r)`$ $`=`$ $`p_1x_1c(r)+p_2x_2C(r),`$ (A8) $`\stackrel{~}{K}(r)`$ $`=`$ $`{\displaystyle \frac{3}{5}}\left({\displaystyle \frac{3\pi ^2}{2}}\right)^{2/3}c_0\rho ^{5/3}(r),`$ (A9) and with $`\overline{b}_0`$ $`=`$ $`b_0-p_1{\displaystyle \frac{A-1}{A}}(b_0^2+2b_1^2)I,`$ (A11) $`\acute{c}`$ $`=`$ $`p_1x_1{\displaystyle \frac{1}{3}}k_o^2(c_0^2+2c_1^2)I.`$ (A12) In the above expressions, the set {$`p_1,u_1,x_1,`$ and $`y_1`$} represents various kinematic factors in the effective $`\pi N`$ system (pion-nucleon mechanisms), and the set {$`p_2,u_2,`$ and $`x_2`$} represents the corresponding kinematic factors in the $`\pi 2N`$ system (pion–two-nucleon mechanisms). These kinematic factors have been derived using the relativistic potential model with no recourse to nonrelativistic approximations, and they include nuclear recoil. The set of parameters {$`b_0,b_1,c_0,`$ and $`c_1`$} originates from the elementary $`\pi N\to \pi N`$ amplitudes, while all other parameters—excluding the kinematic factors—have their origin in the second- and higher-order corrections to the optical potential. These first-order parameters have been determined from a recent $`\pi N`$ phase-shift analysis , in contrast to the approach by Carr and collaborators in which they were fit to pionic-atom results. In spite of this difference, the parameters determined by the two methods match nicely. Nuclear effects enter the optical potential through the nuclear density $`\rho (r)`$ and through the neutron-proton density difference $`\delta \rho (r)`$. Moreover, $`A`$ is the mass number, $`\lambda `$ is the Ericson-Ericson effect parameter, $`k_o`$ is the pion lab momentum, $`\omega `$ is the pion energy in the pion-nucleus center-of-mass system, and $`I`$ is the so-called $`1/r`$ correlation function. The $`B`$ and $`C`$ parameters arise from true pion absorption. A detailed account of this optical potential will be the subject of a paper that will be submitted for publication shortly .
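As a bookkeeping aid, the sketch below assembles the density-dependent ingredients of Eqs. (A3)–(A8) for a toy density. Every numerical value is an illustrative placeholder (not one of the fitted parameters), and both the second-order corrections of Eqs. (A11)–(A12) and the gradient structure of Eq. (A1) are omitted:

```python
import numpy as np

# Illustrative assembly of the density-dependent pieces of the optical
# potential, Eqs. (A3)-(A8). All parameter values are placeholders.
r = np.linspace(0.0, 10.0, 500)
R, a = 3.5, 0.55                               # Fermi geometry (fm)
rho = 0.17 / (1.0 + np.exp((r - R) / a))       # total density (fm^-3)
drho = np.zeros_like(rho)                      # delta rho = 0 for N = Z
eps_pi = 0.0                                   # pion charge; 0 for a pi0

b0, b1 = -0.03 + 0.01j, -0.12 + 0.0j           # fm, illustrative
c0, c1 = 0.60 + 0.15j, 0.45 + 0.0j             # fm^3, illustrative
B0, B1 = 0.02 + 0.04j, 0.0                     # fm^4, true absorption
C0, C1 = 0.10 + 0.25j, 0.0                     # fm^6, true absorption
p1, p2, x1, x2, lam = 1.5, 1.9, 1.0, 1.0, 1.4  # kinematic factors, lambda

b = b0 * rho - eps_pi * b1 * drho              # Eq. (A3)
B = B0 * rho**2 - eps_pi * B1 * rho * drho     # Eq. (A4)
c = c0 * rho - eps_pi * c1 * drho              # Eq. (A5)
C = C0 * rho**2 - eps_pi * C1 * rho * drho     # Eq. (A6)
L = p1 * x1 * c + p2 * x2 * C                  # Eq. (A8)
Q = L / (1.0 + (4.0 * np.pi / 3.0) * lam * L)  # Eq. (A7), Lorentz-Lorenz part
print(f"central values: b = {b[0]:.4f}, L = {L[0]:.4f}, Q = {Q[0]:.4f}")
```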
no-problem/9812/math9812108.html
ar5iv
text
# 1 Introduction ## 1 Introduction Green functions play important roles in physics. Field theoretical problems involving boundaries, such as the Casimir interactions, particle pair productions i.e., all employ Green functions. Therefore if one is interested in the investigation of some physical effects on the non–commutative spaces construction of the Green functions in these media is useful. Motivated by these considerations we think it is of interest to study the Green functions on the quantum group spaces which are the natural examples of the non–commutative geometries. Previously we have constructed the Green function on the quantum sphere $`S_q^2`$ . In this paper we study the same problem for the quantum plane $`E_q^2`$ which may be more relevant to physics. In Section 2 we recall main result concerning the quantum group $`E_q(2)`$ and its homogeneous spaces. In Section 3 we construct the Green function on the quantum plane. The Green function we obtain, provides the possibility of the future studies on the new q-functions which are the deformations of the Neumann and Hankel functions. ## 2 Quantum Group $`E_q(2)`$ and its Homogeneous Spaces Let $`A`$ be the set of linear operators in the Hilbert space $`l^2(𝒁\text{ })`$ subject to the condition $$\underset{j=\mathrm{}}{\overset{\mathrm{}}{}}q^{2j}(e_j,F^{}Fe_j)<\mathrm{};FA.$$ (1) Here $`0<q<1`$ and $`\{e_j\}`$ is the orthonormal basis in $`l^2(𝒁\text{ })`$. Explicit form of $`e_j`$ is $$e_j=(0,\mathrm{},0,1,0,\mathrm{}),$$ (2) where either $`j^{th}`$ ( for $`j>0`$ ) or $`(j)^{th}`$ ( for $`j<0`$ ) component is one, all others are zero. Any vector $`x=(x_0,x_1,x_1,\mathrm{},x_n,x_n,\mathrm{})`$ of $`l^2(𝒁\text{ })`$ has representation $$x=\underset{j=\mathrm{}}{\overset{\mathrm{}}{}}x_je_j.$$ (3) $`(,)`$ in (1) is the scalar product in $`l^2(𝒁\text{ })`$: $$(x,y)=\underset{j=\mathrm{}}{\overset{\mathrm{}}{}}\overline{x_j}y_j.$$ (4) $`A`$ is the Hilbert space with the scalar product $$(F,G)_A=(1q^2)\underset{j=\mathrm{}}{\overset{\mathrm{}}{}}q^{2j}(e_j,F^{}Ge_j);F,GA.$$ (5) Let us introduce the linear operators acting in $`l^2(𝒁\text{ })`$ $$ze_j=e^{i\psi }q^je_j,\upsilon e_j=e^{i\varphi }e_{j+1},$$ (6) where $`\psi `$ and $`\varphi `$ are the classical phase variables. $`n`$ is normal and $`\upsilon `$ is unitary operator in $`l^2(𝒁\text{ })`$. It is easy to show that they satisfy the relations : $$z\upsilon =q\upsilon z,z^{}\upsilon =q\upsilon n^{},zz^{}=z^{}z.$$ (7) Any element $`FA`$ can be represented as $$F=\underset{j=\mathrm{}}{\overset{\mathrm{}}{}}f_j(z,z^{})\upsilon ^j$$ (8) by suitable choice of the functions $`f_j`$. The linear operators $`Z`$ and $`V`$ given by $$Z=z\upsilon ^1+\upsilon z,V=\upsilon \upsilon $$ (9) are normal and unitary in $`l^2(𝒁\text{ }\times 𝒁\text{ })`$ They satisfy the relations $$ZV=qVZ,Z^{}V=qVZ^{},ZZ^{}=Z^{}Z.$$ (10) Note that the operators $`N`$ and $`V`$ have the same properties as $`n`$ and $`\upsilon `$. Therefore there exits the linear map $$\mathrm{\Delta }:AA_AA$$ (11) defined as $$\mathrm{\Delta }(f(z,z^{})\upsilon ^j)=f(Z,Z^{})V^j.$$ (12) Here $`_A`$ is the completed tensor product $``$ with respect to the scalar product $$(F_1F_2,F_3F_4)_A=(F_1,F_3)_A(F_2,F_4)_A;F_nA$$ (13) in $`AA`$. $`A`$ is the space of square integrable functions on the quantum group $`E_q(2)`$ and $`\mathrm{\Delta }`$ is the quantum analog of the group multiplication. 
The one parameter groups $`\{\sigma _1\}`$ and $`\{\sigma _2\}`$ of automorphism of $`A`$ given by $$\sigma _1(\upsilon )=e^{it}\upsilon ,\sigma _1(z)=e^{it}z$$ (14) and $$\sigma _2(\upsilon )=\upsilon ,\sigma _2(z)=e^{it}z$$ (15) with $`t𝑹`$ are isomorphic to $`U(1)`$. The subspaces $$B=\{FA:\sigma _1(F)=F,\mathrm{for}\mathrm{all}t𝑹\}$$ (16) and $$H=\{FB:\sigma _2(F)=F\mathrm{for}\mathrm{all}t𝑹\}$$ (17) are the space of square integrable functions on the quantum plane $`E_q^2`$ and two sided coset space $`U(1)\backslash E_q(2)/U(1)`$. Any element of $`H`$ is the function of $`\rho =zz^{}`$. Note that the scalar product (5) on $`H`$ becomes a q-integration $$(f(\rho ),g(\rho ))_A=(1q^2)\underset{j=\mathrm{}}{\overset{\mathrm{}}{}}q^{2j}\overline{f(q^{2j})}g(q^{2j})=_0^{\mathrm{}}\overline{f(\rho )}g(\rho )d_{q^2}\rho $$ (18) Let $`U_q(e(2))`$ be the $``$–algebra generated by $`p`$ and $`\kappa ^{\pm 1}`$ such that $$p^{}p=q^2pp^{},\kappa ^{}=\kappa ,\kappa p=q^2p\kappa .$$ (19) The representation $``$ of $`U_q(e(2))`$ in $`A`$ is given by $`(p)f(z,z^{})\upsilon ^j`$ $`=`$ $`iq^{j+1}D_+^zf(z,z^{})\upsilon ^{j+1}`$ (20) $`(p^{})f(z,z^{})\upsilon ^j`$ $`=`$ $`iq^jD_{}^z^{}f(z,z^{})\upsilon ^{j1}`$ (21) $`(\kappa )f(z,z^{})\upsilon ^j`$ $`=`$ $`q^jf(q^1z,qz^{})\upsilon ^j,`$ (22) where $$D_\pm ^xf(x)=\frac{f(x)f(q^{\pm 2}x)}{(1q^{\pm 2})x}.$$ (23) For the Casimir element $`C=q^1\kappa ^1pp^{}`$ we have $$(C)f(z,z^{})\upsilon ^j=q^jD_{}^z^{}D_+^zf(qz,q^1z^{})\upsilon ^j.$$ (24) The restriction $`\mathrm{}`$ of $`(C)`$ on $`H`$ is $$\mathrm{}=D_{}^\rho \rho D_+^\rho $$ (25) which we call the radial part of $`(C)`$. ## 3 Green Function on the Quantum Plane (i) Green Function on $`U(1)\backslash E_q(2)/U(1)`$ The Green function $`𝒢^p(\rho )`$ on the two sided coset space is defined as $$(\mathrm{}+p)𝒢^p(\rho )=\delta (\rho ),$$ (26) where $`\delta `$ is the delta function which defined with respect to the scalar product (18) as $$(\delta ,f)_A=f(0)$$ (27) for any $`fH_0`$. The equation (26) is understood as $$(𝒢_p,f)_A=\underset{ϵ0}{lim}(\delta (\rho ),\frac{1}{\mathrm{}+p+iϵ}f(\rho ))_A$$ (28) For $`\rho 0`$ the equation (26) is solved by $$𝒥(\sqrt{p\rho })=\underset{k=0}{\overset{\mathrm{}}{}}\frac{(1)^k}{([k]!)^2}(p\rho )^k$$ (29) and $$𝒩(\sqrt{p\rho })=\frac{qq^1}{2q\mathrm{log}(q)}𝒥(\sqrt{p\rho })(\mathrm{log}(p\rho )+2C_q)\frac{1}{q}\underset{k=1}{\overset{\mathrm{}}{}}\frac{(1)^k}{([k]!)^2}(p\rho )^k\underset{m=1}{\overset{k}{}}\frac{q^m+q^m}{[m]},$$ (30) where $$[m]=\frac{q^mq^m}{qq^1},[m]!=[1][2]\mathrm{}[m].$$ (31) The Hahn-Exton q-Bessel function $`𝒥`$ is regular at $`\rho =0`$. It is the zonal spherical function of the unitary irreducible representations of $`E_q(2)`$. $`𝒩`$ can be called q-Neuman function which indeed is reduced to the usual Neuman function in $`q1`$ limit. Here $`C_q`$ is some constant, which in $`q1`$ limit should become the Euler constant . The Green function on $`U(1)\backslash E_q(2)/U(1)`$ is then $$𝒢_p(\rho )=𝒩(\sqrt{p\rho })i𝒥(\sqrt{p\rho }),$$ (32) which in classical limit becomes the Hankel function. Using the Fourier-Bessel integral $$_0^{\mathrm{}}d_{q^2}\rho 𝒥(q^n\sqrt{\rho })𝒥(q^m\sqrt{\rho })=\frac{q^{2m+2}}{1q^2}\delta _{mn}$$ (33) we arrive at the following representation for the Green function $$𝒢_p(\rho )=\underset{ϵ0}{lim}q^2_0^{\mathrm{}}d_{q^2}\lambda \frac{𝒥(\sqrt{\lambda \rho })}{p\lambda +iϵ}$$ (34) from which one can derive the constant $`C_q`$. 
To prove that $`𝒢`$ solves (26) we first have to show that $$\mathrm{}\mathrm{log}\rho =\frac{2q\mathrm{log}(q)}{qq^1}\delta (\rho ).$$ (35) For $`\rho 0`$ we have $$\mathrm{}\mathrm{log}\rho =0.$$ (36) Since the operator $`\mathrm{}`$ is symmetric in $`H`$ we have $`(\mathrm{}\mathrm{log}\rho ,f)_A`$ $`=`$ $`(\mathrm{log}\rho ,\mathrm{}f)_A`$ (37) $`=`$ $`{\displaystyle \frac{2q\mathrm{log}(q)}{qq^1}}\underset{n\mathrm{}}{lim}{\displaystyle \underset{j=\mathrm{}}{\overset{n}{}}}j[2f(q^{2j})f(q^{2(j+1)})f(q^{2(j1)})]`$ $`=`$ $`{\displaystyle \frac{2q\mathrm{log}(q)}{qq^1}}\underset{n\mathrm{}}{lim}[f(q^{2n})n(f(q^{2n+2})f(q^{2n}))].`$ We then employ the q–Taylor expansion at the neighborhood of $`\rho =0`$ $$f(q^2\rho )f(\rho )D_+^\rho f(0)(q^21)\rho .$$ (38) For $`n>>1`$ we get $$n(f(q^{2n+2})f(q^{2n}))nD_+^\rho f(0)(q^21)q^{2n}.$$ (39) Since $`nq^{2n}`$ vanishes as $`n\mathrm{}`$, we arrive at $$(\mathrm{}\mathrm{log}\rho ,f)_A=\frac{2q\mathrm{log}(q)}{qq^1}\underset{n\mathrm{}}{lim}f(q^{2n})=\frac{2q\mathrm{log}(q)}{qq^1}f(0).$$ (40) In a similar fashion one can show that $$((\mathrm{}+p)𝒢^p(\rho ),f)_A=f(0).$$ (41) (ii) Green Function on $`E_q^2`$ We obtain the Green function $`𝒢^p(R)`$ on the quantum plane $`E_q^2`$ from the one $`𝒢^p(\rho )`$ on the two sided coset space by the group multiplication : $$𝒢^p(R)=\mathrm{\Delta }𝒢^p(\rho ).$$ (42) Here $$R=\mathrm{\Delta }(\rho )=\rho 1+1\rho +\upsilon z^{}z\upsilon +z\upsilon ^{}\upsilon ^{}z^{}$$ (43) is self-adjoint operator in $`l^2(𝒁\text{ }\times 𝒁\text{ })`$ and $$Re_{ts}=q^{2t}e_{ts}$$ (44) where the eigenfunctions $`e_{ts}`$ are given by $$e_{ts}=\underset{j=\mathrm{}}{\overset{\mathrm{}}{}}(1)^jq^{tj}𝒥_s(q^{tj})e_{s+j}e_j.$$ (45) They satisfy the orthogonality condition $$(e_{ts},e_{ij})=\delta _{ti}\delta _{sj}.$$ (46) We also have $$e_{s+j}e_j=\underset{t=\mathrm{}}{\overset{\mathrm{}}{}}(1)^jq^{tj}𝒥_s(q^{tj})e_{ts}.$$ (47) Therefore the basis elements $`e_{ts}`$; $`t,s(\mathrm{},\mathrm{})`$ form the complete set in $`l^2(𝒁\text{ }\times 𝒁\text{ })`$. The Green function on the quantum plane is the linear operator in this space defined as $$𝒢^p(R)e_{ts}=𝒢^p(q^{2t})e_{ts}.$$ (48)
no-problem/9812/hep-ph9812240.html
ar5iv
text
# COSMIC–RAY ANTIPROTONS FROM NEUTRALINO ANNIHILATION IN THE HALO ## 1 Introduction Relic neutralinos, if present in the halo of our galaxy as a component of dark matter, would annihilate, and then produce indirect signals of various kinds. Among them, cosmic–ray antiprotons are certainly one of the most interesting and may be detected by means of balloons or of space missions. To discriminate this potential source of primary $`\overline{p}`$’s from the secondary ones, we can use the different features of their low–energy spectra ($`T_{\overline{p}}<`$ 1 GeV, $`T_{\overline{p}}`$ being the antiproton kinetic–energy). In this energy regime the interstellar (IS) secondary $`\overline{p}`$ spectrum is expected to drop off very markedly because of kinematical reasons, while primary antiprotons would show a milder fall off. This discrimination power is somewhat hindered by some effects we try to clarify in the following sections. ## 2 Cosmic–ray proton spectrum We have first to fix the primary IS cosmic–ray proton spectrum, since we need it for the evaluation of the secondary $`\overline{p}`$’s. The IS cosmic–ray proton spectrum is derived by assuming for it appropriate parametrizations and by fitting their corresponding solar–modulated expressions to the TOA (top of the atmosphere) experimental fluxes. In the present paper we use the two most recent high–statistics measurements of the TOA proton spectrum reported by the IMAX Collaboration and by the CAPRICE Collaboration . We fitted these spectra using two different parametrizations: one depending on the total proton energy, $`E_p=T_p+m_p`$, and the other on the momentum, $`p`$ (equivalent to rigidity for protons). The detailed results of our best fits to the proton data can be found in Table I of Ref., in terms of the normalization coefficient, the spectral index and the solar–modulation parameter $`\mathrm{\Delta }`$. We find that even using both the parametric forms for the IS proton spectrum, the data of the two experiments do not lead to a set of central values for the parameters mutually compatible within their uncertainties. In Fig.1 we report the median proton flux, with its uncertainty band, as obtained from the fits to the data of the two experiments. ## 3 Secondaries $`\overline{p}`$’s fluxes Cosmic ray protons interact with hydrogen atoms at rest, lying in the gaseus HI and HII clouds of the galactic ridge, and may produce $`\overline{p}`$’s. This conventional spallation process is actually a background to an hypothetical supersymmetric antiproton signal. The propagation of cosmic rays inside the Galaxy has been considered in the framework of a two–zone diffusion model . We have included energy losses in the diffusion equation, which tend to shift the antiproton spectrum towards lower energies with the effect of replenishing the low–energy tail. The steps of the method we followed to calculate secondary $`\overline{p}`$’s production and diffusion are fully described in Ref. 
## 4 $`\overline{p}`$’s from neutralino annihilation The differential rate per unit volume and unit time for the production of $`\overline{p}`$’s from $`\chi `$$`\chi `$ annihilation as a function of the kinetic energy is defined as $$q_{\overline{p}}^{\mathrm{susy}}(T_{\overline{p}})\frac{dS(T_{\overline{p}})}{dT_{\overline{p}}}=<\sigma _{\mathrm{ann}}v>g(T_{\overline{p}})\left(\frac{\rho _\chi (r,z)}{m_\chi }\right)^2.$$ (1) Here $`<\sigma _{\mathrm{ann}}v>`$ denotes the average over the galactic velocity distribution function of neutralino pair annihilation cross section $`\sigma _{\mathrm{ann}}`$ multiplied by the relative velocity $`v`$ of the annihilating particles, $`m_\chi `$ is the neutralino mass and $`g(T_{\overline{p}})`$ denotes the $`\overline{p}`$ differential spectrum. Note the dependence on the square of the mass distribution function of neutralinos in the galactic halo, $`\rho _\chi (r,z)`$. For all the details of the computation of Eq.(1) and the main features of the Minimal Supersymmetric Standard Model (MSSM) – framework in which we calculated all the neutralino physical properties discussed in this talk – we refer to Sect. IIIB of Ref. . ## 5 Comparison with BESS95 data A recent analysis of the data collected by the BESS spectrometer during its 1995 flight (BESS95) has provided a significant improvement in statistics in the kinetic–energy range $`180\mathrm{MeV}T_{\overline{p}}1.4\mathrm{GeV}`$. From a first look at Fig.2 it is apparent that the experimental data are rather consistent with the flux due to secondary $`\overline{p}`$’s. However, it is interesting to explore which would be the chances for a signal, due to relic neutralino annihilations, of showing up in the low–energy window ($`T_{\overline{p}}<`$ 1 GeV). This point is very challenging, especially in view of the interplay which might occur among low–energy measurements of cosmic–ray $`\overline{p}`$’s and other searches, of quite a different nature, for relic neutralinos in our Galaxy. Since the experimental flux seems to suggest a flatter behaviour, as compared to the one expected for secondaries, we try to explore how much room for neutralino $`\overline{p}`$’s would there be in the BESS95 data. As a quantitative criterion to select the relevant supersymmetric configurations, we choose to pick up only the configurations which meet the following requirements: i) they generate a total theoretical flux $`\mathrm{\Phi }^{\mathrm{th}}`$ which is at least at the level of the experimental value (within 1-$`\sigma `$) in the first energy bin; ii) their $`(\chi ^2)_{\mathrm{red}}`$, in the best fit of the BESS95 data, is bounded by $`(\chi ^2)_{\mathrm{red}}`$ 2.2 (corresponding to 95% C. L. for 5 d.o.f.). On the other hand, supersymmetric configurations with a $`(\chi ^2)_{\mathrm{red}}>`$ 4 have to be considered strongly disfavoured by BESS95 data (actually, they are excluded at 99.9 % C.L. See Ref. for a detailed analyses of their properties). The selected configurations are shown in Fig.3, where $`m_\chi `$ is plotted in terms of the fractional amount of gaugino fields, $`P=a_1^2+a_2^2`$, in the neutralino mass eigenstate. It can be seen that higgsino–like and mixed configurations are much stronger constrained in the neutralino mass range than the gaugino–like ones, because of the requirement on a rather high value of flux. ## 6 Comparison with the DAMA/NaI data on annual modulation effect In Refs. 
we showed that the indication of a possible annual modulation effect in WIMP direct search are interpretable in terms of a relic neutralino which may make up the major part of dark matter in the Universe. We recall that the DAMA/NaI data reported in Ref. single out a very delimited 2–$`\sigma `$ C.L. region in the plane $`\xi \sigma _{\mathrm{scalar}}^{(\mathrm{nucleon})}`$$`m_\chi `$, where $`\sigma _{\mathrm{scalar}}^{(\mathrm{nucleon})}`$ is the WIMP–nucleon scalar elastic cross section and $`\xi =\rho _\chi /\rho _l`$ is the fractional amount of local WIMP density $`\rho _\chi `$ with respect to the total local dark matter density $`\rho _l`$. In the analysis carried out in Ref., we considered all the supersymmetric configurations (set $`S`$) which turned out to be contained in the 2–$`\sigma `$ C.L. region of Ref. , by accounting for the uncertainty in the value of $`\rho _l`$. Fig.4 displays the scatter plots for TOA antiproton fluxes calculated at $`T_{\overline{p}}=0.24`$ GeV, to conform to the energy range of the first bin of the BESS95 data (0.175 GeV $`T_{\overline{p}}`$ 0.3 GeV). We find that, while most of the susy configurations of the appropriate subset of $`S`$ stay inside the experimental band for $`\rho _l`$ = 0.1, 0.3 GeV cm<sup>-3</sup>, at higher values of $`\rho _l`$ a large number of configurations provide $`\overline{p}`$ fluxes in excess of the experimental results. This occurrence is easily understood on the basis of the different dependence on $`\rho _l`$ of the direct detection rate and of the $`\overline{p}`$ flux, linear in the first case and quadratic in the second one. These results show the remarkable property that a number of the supersymmetric configurations singled out by the annual modulation data may indeed produce measurable effects in the low–energy part of the $`\overline{p}`$ spectrum. We stress that the joint use of the annual modulation data in direct detection and of the measurements of cosmic–ray antiprotons is extremely useful in pinning down a number of important properties of relic neutralinos and show the character of complementarity of these two classes of experimental searches for particle dark matter. This shows the great interest for the analyses now under way of new antiproton data, those collected by a recent balloon flight carried out by the BESS Collaboration and those measured by the AMS experiment during the June 1998 Shuttle flight. ## References
no-problem/9812/hep-th9812111.html
ar5iv
text
# References THE SECRET LIFE OF THE DIPOLE Jeeva S. Anandan Department of Physics and Astronomy University of South Carolina Columbia, SC 29208, USA. E-mail: jeeva@sc.edu and Department of Theoretical Physics University of Oxford 1 Keble Road, Oxford OX1 3NP, U.K. ## Abstract A new force on the magnetic dipole, which exists in the presence of both electric and magnetic fields, is described. Its origin due to the ‘hidden momentum’, implications and possible experimental tests are discussed. We are familiar with the acceleration of charged particles, such as protons and electrons, by uniform electromagnetic fields. But neutral particles can be accelerated by uniform fields too, if they possess a magnetic dipole<sup>1,2</sup>. Some consequences and the possibility of observing this force are the focus of a new study<sup>3</sup>. The force on a magnetic dipole in an electromagnetic field has been studied for more than 100 years. The expression found in textbooks depends on the spatial gradient of the applied magnetic field: $`𝐅=(\mu 𝐁)`$ where $`\mu `$ is the magnetic moment and $`𝐁`$ is the magnetic field. If we imagine the magnetic dipole to be an infinitesimal current loop, then $`\mu `$ is proportional to the angular momentum or the spin of the circulating current. Then $`𝐅`$ may be understood as the net force resulting from the different Lorentz forces acting on different parts of the current loop due to the variation of the magnetic field over the loop. It therefore came as a surprise to me to find, nine years ago, that there is an additional force on the dipole given by<sup>1,2</sup> $$𝐟=\tau (𝐁^{}\times \mu )\times 𝐄$$ (1) where $`𝐄`$ is the electric field, $`𝐁^{}=𝐁𝐯\times 𝐄`$ is the magnetic field in the rest frame of the dipole that is moving with velocity $`𝐯`$ relative to the laboratory, and $`\tau `$ is the ratio of $`\mu `$ to the spin $`𝐒`$ of the dipole (using units in which the velocity of light is 1). This force was surprising because it exists even when the fields do not vary in position. It cannot therefore be obtained from the above intuitive picture. It was surprising also because of its dependence on the electric field: as the current loop representing the dipole may have no net electric charge it may be expected not to couple to an electric field. Also, (1) is doubly non linear; it does not reverse in direction if either the electromagnetic field strength or the magnetic moment is reversed (without reversing the spin). I know no other force that is non linear in the field strength (as opposed to the potential). But after four separate derivations<sup>1,2</sup> of (1), in quantum and classical physics, I was convinced that it does exist. A key to understanding the new force is that for this dipole the kinetic momentum (classically $`m𝐯`$) differs from the canonical momentum $`𝐩`$. The canonical momentum is conserved if there is translational symmetry in space, as when the dipole interacts with a uniform electromagnetic field. In fact, $$m𝐯=𝐩\mu \times 𝐄$$ (2) The rate of change of (2) is the total force by Newton’s second law. The time derivative of $`𝐩`$ is $`𝐅`$, while the time derivative of the last term is $`𝐟`$ (assuming that $`𝐄`$ is time independent). The last term in (2), which is due to relativistic effects, is called the hidden momentum. This explains the title of this article. 
The hidden momentum is like the potential for the dipole interacting with an appropriate $`SU(2)`$ Yang-Mills gauge field, because $`\mu `$ is proportional to the spin vector whose components generate the $`SU(2)`$ group. Then (2) corresponds to the non linear term in the Yang-Mills field strength which determines the force. An example of a neutral particle with a magnetic dipole is the neutron. As in the above example of a current loop, the magnetic moment of the neutron is proportional to its spin of magnitude $`s=\mathrm{}/2`$, where $`\mathrm{}`$ is Planck’s constant divided by $`2\pi `$. Hence, $`\tau =\mu /s=2\mu /\mathrm{}`$. It turns out that when $`B=1`$ Tesla and $`E=10^7Vm^1`$, which can be realistically achieved in the laboratory, the acceleration caused by the new force (1) is of the order of $`12`$ cm/sec<sup>2</sup>. On the face of it this is a large acceleration. But the problem in actually detecting it is that the spin precesses about $`𝐁^{}`$, due to the torque it experiences. With a precession frequency of $`1.8\times 10^8`$ radians $`s^1`$, $`𝐟`$ averages to zero very quickly. That is why it has not already been observed in the large number of experiments in which a dipole interacts with an electromagnetic field. I suggested that to make the effect of the force accumulate, instead of cancel out, $`𝐁`$ may be kept constant while $`𝐄`$ alternates in space<sup>1,2</sup>. Recently, in a beautiful and clearly written paper, Wagh and Rakhecha<sup>3</sup> have carefully studied this proposal in detail. The neutron is assumed to pass through a sequence of cells each having length $`l`$ and containing transverse uniform electric fields $`𝐄,𝐄,𝐄,𝐄`$…. such that $`𝐄`$ is parallel to a uniform magnetic field $`𝐁`$. The length $`l`$ is chosen so that the time taken for the neutron to travel each cell is half the Larmor period $`T`$ \- that is, the time for its spin to rotate around the magnetic field $`𝐁^{}`$. Suppose that Cartesian axes are chosen such that the $`x`$ axis is along the direction of motion of the neutron and the $`z`$ axis is along the common direction of $`𝐄`$ and $`𝐁`$. To lowest order, $`𝐁^{}=𝐁`$. Then (1) in this approximation is $$𝐟=\frac{4\mu ^2}{\mathrm{}^2}BE(S_x,S_y,0)$$ (3) If originally $`𝐒`$ was in the $`y`$ direction then in this approximation it will rotate in the $`xy`$ plane with constant frequency $`2\pi /T`$. As the neutron traverses the first cell, $`S_x`$ is negative while $`S_y`$ is both positive and negative for sucessive equal durations and averages to zero. Since $`E`$ is negative in the first cell, $`f_x`$ is therefore positive. In the next cell, $`S_x`$ is positive while $`S_y`$ is negative and positive for successive equal durations. But owing to the reversal of $`𝐄`$ in the second cell, $`f_x`$ is positive in the second cell also. Hence $`mv_x`$ steadily increases, whereas $`mv_y`$ fluctuates. More exactly, because $`𝐒`$ rotates about $`𝐁^{}`$ which slightly tilts from the $`z`$ axis to its two sides in the $`yz`$ plane alternatively in successive cells, $`𝐒`$ actually spirals towards the $`z`$ axis. The spin $`𝐒`$ points in the $`z`$ direction after the neutron traverses through $`2n`$ cells, where $`n`$ is a suitably chosen positive integer. Therefore, $`f_x`$ and $`f_y`$ gradually decrease, according to (3), which is reflected in a corresponding tapering off of the changes in $`v_x`$ and $`v_y`$. In the absence of the new force $`𝐟`$, the change in velocity is entirely due to $`𝐅`$ with $`𝐁^{}`$ substituted for $`𝐁`$. 
As the neutron enters each cell, owing to the change in $`𝐄`$ and therefore $`𝐁^{}`$, there would be a sudden change in its velocity. Only the $`x`$ component of the velocity changes due to $`𝐅`$ because of the translational symmetry in the $`y`$ and $`z`$ directions. This component varies as a step function of the number of cells traversed. (The splitting of the wave packets due to the longitudinal Stern-Gerlach effect, which has been experimentally observed<sup>4</sup>, appears to be negligible in the present case.) The cumulative effect could be detected by allowing the accelerated beam of neutrons to interfere with an unaccelerated beam. Now in a neutron interferometry experiment the fringe contrast is greatest when both interfering beams are in the same spin state. So, Wagh and Rakhecha have considered a cyclic evolution of the spin state by letting the neutron go through another series of $`2n`$ cells which amounts to a time reversal of the evolution that it underwent during its journey through the first series of cells. Then the dynamical phase<sup>5,6</sup> acquired, which can in principle be observed in neutron interferometry, depends on the momentum gained by the neutron as it passed through the first $`2n`$ cells. However, since the phase shift is due to the change in canonical momentum only, we cannot ambiguously conclude that its detection is a verification of the new force. A direct, unambiguous way of observing $`𝐟`$ is from the deviation of the neutron beam due to the $`y`$component of its kinetic momentum which it acquires<sup>7</sup> as it passes through a sequence of these cells. This component would be zero in the absence of the new force $`𝐟`$ because of the translational symmetry in this direction. In the presence of it, $`v_y`$ is negative in each of the cells, and therefore the beam deviates increasingly towards the negative $`y`$ direction as it passes through the cells. The detection of this deviation would constitute definitive evidence of $`𝐟`$. Another way of observing $`𝐟`$ is to have neutrons incident with spin in the $`x`$direction and measure the time of flight through a large number of cells, which would be modified by the change in the $`x`$component of velocity due to $`𝐟`$. The observation $`𝐟`$ would provide direct evidence of the hidden momentum. If that is the only objective, it can be achieved more simply by passing the neutron into a region of uniform $`𝐄`$-field, with no $`𝐁`$, such as the space between two capacitor plates, and measuring the velocity change due to the change in te hidden momentum $`\mu \times 𝐄`$. For example, if $`𝐄`$ is in the $`z`$ direction and the neutron spin is polarized along the x- direction then, according to (2), the change of velocity is in the $`y`$ direction equal to $`\frac{\mu E}{m}`$. But the earlier proposal has the advantage that it would detect $`𝐟`$, which has never been observed before. Also, Yang-Mills fields are used for describing the weak and strong interactions which are short range, whereas, as mentioned, the magnetic dipole sees the electromagnetic field as a long range $`SU(2)`$ Yang-Mills field<sup>1,2</sup>. So, observing $`𝐟`$ would demonstrate this important non abelian gauge field interaction, in addition to revealing a hitherto hidden aspect of the neutron or more generally a magnetic dipole. I thank Apoorva Wagh for useful discussions. A shorter version of this article, but with two figures, appeared in Nature, 387, 558-559 (5 June 1997).
no-problem/9812/astro-ph9812263.html
ar5iv
text
# 1 INTRODUCTION ## 1 INTRODUCTION Several studies in the past suggested that dust can form more easily above cool spots in evolved stars. Frank (1995) conducted a detailed study of dust formation above cool asymptotic giant branch (AGB) starspots, and showed that the mass loss rate above the spots increases, though the terminal wind velocity does not change much. However, Frank does not discuss the source of the cool starspots. Schwarzschild (1975) suggested that cool regions in red giants are formed by very large convective elements. Polyakova (1984) suggests two antipodal active magnetic regions over which dust forms to explain the light and polarization variations in the M supergiant $`\mu `$ Cep. The spots rotate with the star and cause the observed light and polarization variations. She finds that a rotation period of about 20 years and an activity cycle of about 2.5 years fit the observations. Clayton, Whitney, & Mattei (1993) suggest that the intensive dust formation close to the photosphere of R Coronae Borealis (RCB) stars can be facilitated by cool magnetic spots. In a recent paper, Soker (1998) proposed a scenario in which the axisymmetrical mass loss during the high mass loss rate phase at the end of the AGB, which is termed the superwind, results from dust formation above cool magnetic spots. He further argues that this scenario has the advantage that it can operate for very slowly rotating AGB and RCB stars, i.e., only $`10^4`$ times the break up velocity. This rotation velocity is $`23`$ orders of magnitude smaller than what is required by models were rotation or the magnetic field have a dynamical role (Chevalier & Luo 1994; Dorfi & Höfner 1996; Ignace, Cassinelli, & Bjorkman 1996; Garcia-Segura 1997). We refer only to elliptical PNs, since bipolar PNs seem to require the presence of a close stellar binary companion to their progenitor (Soker 1997; Mastrodemos & Morris 1999). The morphology of planetary nebulae (PNs) and proto-PNs suggest that the transition to the highly non-spherical mass loss episode at the end of the AGB is highly nonlinear. By nonlinear we mean that a small change in one or more of the properties of the AGB star leads to a very large change in the mass loss rate and geometry. The mechanism of dust formation via the activity of a dynamo in the envelope of evolved stars is a highly nonlinear process (Soker 1998). This dynamo is not required to form a strong magnetic field. A weak magnetic field is enough, as it will be enhanced inside cool spots by a factor of $`10^4`$ or more, by the convective motion. In the sun, for example, the magnetic field in cool spots is $`10^3`$ stronger than the average magnetic field. It cannot reach higher values near the photosphere, since then it greatly exceeds the ambient thermal pressure. Therefore, it is possible that even if the average magnetic field of the sun were weaker, the intensity of the magnetic field would still reach the same value in cool spots. Convective influences on the magnetic field, dust formation and mass loss rate due to dust, are all non-linear processes. For example, the density above the photosphere decreases exponentially with radius (Bedijn 1988; Bowen & Wilson 1991). Therefore, if the temperature drops a little, dust formation will occur closer to the star where the density is much higher (Frank 1995). It has been suggested that the superwind results from this increase of the density scale height above the photosphere (Bedijn 1988; Bowen & Wilson 1991). 
All the studies described above assume that the photosphere of the cool spot is at the same radius as the photosphere of the rest of the star (hereafter the stellar photosphere). However, we know from the sun that this is not the case. In the sun, the photospheres of the cool spots are $`2l_p`$ deep in the envelope (Priest 1987, $`\mathrm{\S }\mathrm{1.4.2}`$D), where $`l_p`$ is the pressure scale height on the solar photosphere. This results from the lower density and temperature of the spot. Since the opacity decreases as temperature decreases, for conditions appropriate to the solar photosphere, the photosphere is at higher densities in the spot, which occurs deeper in the envelope. As we discuss in $`\mathrm{\S }3`$ below, the behavior of the opacity is just the opposite in the AGB stars. In these stars, the spot will be above the stellar photosphere. In $`\mathrm{\S }4`$ we discuss the formation of dust above magnetic cool spots, taking into account the magnetic field, and suggest observations to detect cool spots in AGB and RCB stars. We summarize in $`\mathrm{\S }5`$. We first turn to examine some observations which support the formation of dust in cool magnetic stellar spots ($`\mathrm{\S }2`$). Before doing that we would like to stress that we do not suggest that magnetic activity is the direct cause of the enhanced mass loss rate near the equatorial plane. We still think that radiation pressure on the dust, the formation of which is facilitated by stellar pulsation, does the job. The magnetic field forms cool spots which further facilitate the formation of dust. As shown by Soker (1998), the overall magnetic activity is much below the level required by models in which the magnetic field has a dynamical role. ## 2 SUPPORTING OBSERVATIONS ### 2.1 AGB stars Soker (1998) reviews several properties of PNs and AGB stars relevant to the cool magnetic spot model. The most relevant property that a theory for the formation of elliptical PNs should explain is the correlation between the onset of the superwind at the end of the AGB, and the transition to a more asymmetrical wind. In many elliptical PNs, the inner shell, which was formed from the superwind, deviates more from sphericity than the outer shell, which was formed from the regular slow wind (prior to the onset of the superwind). In extreme cases, the inner region is elliptical while the outer shell or halo is spherical (e.g., NGC 6826). In addition, most ($`75\%`$) of the 18 spherical PNs (listed in Soker 1997 table 2) do not have superwind, but just an extended spherical halo. The correlation between the onset of the superwind and the onset of a more asymmetrical wind is not perfect, and in some cases both the inner and outer regions have a similar degree of asymmetry (e.g., NGC 7662). Soker (1998) suggests that magnetic activity may explain this correlation by becoming more pronounced at the end of the AGB phase, due to the decrease in the envelope density in the convective region (because of mass loss). Another supporting argument brought by Soker (1998) is the presence of magnetic fields in the atmospheres of some AGB stars. This is inferred from the detection of X-ray emission from a few M giants (Hünsch et al. 1998). Kemball & Diamond (1997) find a magnetic field at the locations of SiO maser emission, which form a ring around TX Cam at a radius of $`4.8\mathrm{AU}2\mathrm{R}`$, and mention the possibility that the mass loss occurs in a preferred plane. 
They also suggest that “The fine-scale features \[of the Maser image\] are consistent with local outflows, flares or prominences, perhaps coincident with regions in which localized mass loss has taken place.” We now present more supporting and motivating observations to those presented by Soker (1998), through a more careful examination of the stellar magnetic activity. (1) From the sun we know that during most of the solar cycle, the cool spots are concentrated between the equator and latitudes $`\pm 35^{}`$ (e.g., Priest 1987; $`\mathrm{\S }\mathrm{1.4.2}`$E). The model presented by Soker (1998) predicts therefore, that during most of the AGB stellar cycle, a higher mass loss rate will occur close to the equatorial plane. However, at the beginning of a new solar cycle, every $`11`$ years, the cool spots are concentrated at two annular regions around latitudes $`\pm 30^{}`$. (2) In the sun there are at most several large spots at any given time. This means, for dust formation in AGB stars, that the mass loss will be enhanced in specific directions, leading to the formation of dense clumps in the descendant PN (if spots survive for a long time). (3) Another property of a stellar magnetic field is that the magnetic axis direction can change. If the magnetic axis and rotation axis are not aligned, then the magnetic axis direction will change during the stellar rotation. Another possibility is that the magnetic axis will change in a sporadic way, as occurred several times for the Earth’s magnetic field. There is no basic dynamo model to predict the length of the stellar cycle in AGB stars, the latitude at which spots appear at the beginning of such a cycle, and the change in the direction of the magnetic axis. In any case, some morphological features in PNs are consistent with an enhanced mass loss rate in two annuli above and below the equator, and with a sporadic mass loss rate. Let us consider a few examples. Some PNs have two annuli, one at each side of the equatorial plane, with somewhat higher density than their surroundings. The PN K 3-26 (PNG 035.7-05.0; Manchado et al. 1996), have such “rings”, with a high density between the rings, but somewhat lower than in the rings themselves. The density in the polar directions is very low. Such a structure could be formed from magnetic activity in two annuli on the surface of the progenitor AGB star. Another, more popular explanation, is that a slow wind with a mass loss rate which increases toward the equatorial plane, was shaped by a fast wind blown by the central star of the PN. In the later case, the magnetic activity (to form cool spots) is only required to enhance the mass loss toward the equator. Active annuli may also form two dense rings, which might appear in projection as radial condensations in symmetrical configuration, as in NGC 6894 (PNG 069.4-02.6; Manchado et al. 1996; Balick 1987). Some PNs show loops, arcs, and long condensations extending from the shell toward the central star. Such features are what we expect from enhanced mass loss rate above magnetic cool spots. Loops might be caused by the change in direction of the magnetic axis and from active annuli on the surface of the progenitor. Examples of PNs which show clear loops are A 72 (68PNG 059.7-18.7) and NGC 7094 (PNG 066.7-28.2) from Manchado et al. (1996), and He2-138 (PNG 320.1-09.6), M1-26 (PNG 358.9-00.7), and He2-131 (315.1-13.0) from Sahai & Trauger (1998). 
Sahai & Trauger (1998) suggest that the change in direction of the symmetry axis, and the complicated structures in the inner regions of many PNs, may result from multiple sub-stellar (mainly planets) companions which interact one after another with the AGB star. Although a substellar companion may be the source of the angular momentum required to operate the dynamo (Soker 1996; 1998), we think that the interaction of several large planets with different equatorial planes is very unlikely. We prefer sporadic behavior of a stellar dynamo to explain these structures in elliptical PNs. (Well defined jets in bipolar PNs cannot be explained by our model, and probably require stellar companions). Large long-lasting sporadic magnetic spots might form dense condensations, as in IC 4593 (PNG 025.3+40.8; Corradi et al. 1997), and A 30 (PNG 208.5+33.2; Manchado et al. 1996; Balick 1987). A30 is an interesting PN. It has a large, almost spherical, halo, with optically bright, hydrogen-deficient, blobs in the inner region (Jacoby & Ford 1983). The blobs, which are arranged in a more or less axisymmetrical shape, are thought to result from a late helium shell flash. Soker (1998) suggests the following explanation according to the magnetic cool spots model. During the formation of the halo dust was forming far from the stellar surface. If after the helium flash the formation of dust occurred closer to the stellar surface, the process became more vulnerable to magnetic activity, resulting in the axisymmetrical mass loss. ### 2.2 RCB stars The RCB stars are rare hydrogen-deficient carbon-rich supergiants which undergo very spectacular declines in brightness of up to 8 magnitudes at irregular intervals as dust forms along the line of sight (Clayton 1996). There are two major evolutionary models for the origin of RCB stars: the double degenerate and the final helium shell flash (Iben, Tutukov, & Yungelson 1996). The former involves the merger of two white dwarfs, and in the latter a white dwarf/evolved PN central star is blown up to supergiant size by a final helium flash. In the final flash model, there is a close relationship between RCB stars and PN such as A30, discussed above. The connection between RCB stars and PN has recently become stronger, since the central stars of three old PN’s (Sakurai’s Object, V605 Aql and FG Sge; Duerbeck & Benneti 1996; Clayton & De Marco 1997; Gonzalez et al. 1998) have had observed outbursts that transformed them from hot evolved central stars into cool giants with the spectral properties of an RCB star. Wdowiak (1975) first suggested the possibility that dust in RCB stars forms over large convection cells which are cooler than the surrounding photosphere. Clayton et al. (1993) suggested that a magnetic activity cycle similar to the Solar Cycle could fit in well with the observed properties of RCB stars. It would provide a mechanism for a semi-periodic variation in dust production, could cause cool spots over which patchy dust clouds might form, and could be related to the chromospheric emission seen in these stars. There is no direct observational evidence for a magnetic field in any RCB star. When in decline, RCB stars do exhibit an emission spectrum that is often referred to as ’chromospheric’ although not all the emission lines typical to a chromosphere are seen. Lines associated with transition regions, such as C II $`\lambda `$1335, C III\] $`\lambda `$1909 and C IV $`\lambda `$1550 are also seen (Clayton 1996, Maldoni et al. 1999). 
These lines indicate temperatures of $`10^5`$ K. Models of the transition regions in other stars indicate that acoustic waves alone cannot provide enough energy to account for the radiation losses and a small magnetic field must be present (Jordan & Linsky 1987). No flares have been observed on an RCB star although Y Mus does exhibit flickering in its lightcurve (Lawson et al. 1990; Lawson & Cottrell 1997). No X-rays have ever been detected. Photometric detection of starspots is difficult due to the presence of pulsations and dust formation events. There is no measurement of the rotation period of an RCB star. The effect of rotation is not measureable in existing high resolution spectroscopic data (e.g. Pollard, Cottrell, & Lawson 1994). Therefore, the rotation period of these stars is one year or longer. The pulsation periods of RCB stars lie in the range 40-100 days (Lawson et al. 1990). These are confirmed as pulsational variations by radial velocity measurements (Lawson & Cottrell 1997). Fourier analysis of RCB light curves do show significant low frequency ($``$ 200 d) contributions but they are attributed to couplings of higher frequency terms or the windowing effect of the observing seasons. RY Sgr has two periods seen in its lightcurve of 38 and 55 d (Lawson et al. 1990). However, only the 38 d period shows up in radial velocity measurements (Lawson & Cottrell 1997). ## 3 MAGNETIC COOL SPOTS ### 3.1 The position of the spot’s photosphere Let us examine the structure of a vertical magnetic flux tube as is done for sunspots, following, e.g., Priest (1987; $`\mathrm{\S }8.4`$). The biggest uncertainty in the model is the temperature of the cool spot, which is also the most important factor for dust formation. Keeping this in mind, we will make several simplifying assumptions in this section. Using the definition of the photosphere as the place where $`\kappa ł\rho _p=2/3`$, where $`\kappa `$ is the opacity, $`l`$ the density scale height and $`\rho _p`$ the density at the photosphere, the pressure at the photosphere is given by (e.g., Kippenhahn & Weigert 1990, $`\mathrm{\S }10.2`$) $$P_p=\frac{2}{3}\frac{GM}{R^2}\frac{1}{\kappa },$$ (1) where $`M`$ is the stellar mass and $`R`$ the photospheric radius. At the level of accuracy of our calculations, we can take the pressure and density scale height at the photosphere to be equal (we do not consider here the density inversion region below the photosphere of AGB stars; Harpaz 1984). The density at the photosphere is given by $$\rho _p=\frac{2}{3}\frac{GM\mu m_H}{k}\frac{1}{R^2\kappa T_p},$$ (2) where $`T_p`$ is the photospheric temperature, $`k`$ the Boltzmann constant, and $`\mu m_H`$ the mean mass per particle. Let the subscript $`i`$ denote quantities in the center of the cool spot, and the subscript $`e`$ quantities outside the spot, where the magnetic field can be neglected. Pressure balance between the spot and its surroundings reads $$P_i+P_B=P_e,$$ (3) where $`P_B`$ is the magnetic pressure inside the cool spot. Derivation with respect to the radial coordinate gives $$\frac{dP_i}{dr}=\frac{dP_e}{dr}\frac{dP_B}{dr}.$$ (4) As in the sun, we assume that the magnetic field lines inside the spot are vertical, and only near the photosphere are they open tangentially in order to reduce the magnetic pressure. The magnetic pressure deep in the envelope is of the same order as the thermal pressure. 
Near the photosphere the magnetic field has to open up in order for the magnetic pressure not to exceed the surrounding’s thermal pressure (e.g., Priest 1987 $`\mathrm{\S }8.4`$). We approximate the magnetic pressure gradient as $$\frac{dP_B}{dr}=\frac{P_B\alpha P_B}{d},$$ (5) where $`P_B`$ is the magnetic pressure on the photosphere of the spot, and $`\alpha P_B`$ is the magnetic pressure at a radius equal to the surrounding photospheric (hereafter just photospheric) radius. $`d`$ is the radial distance between the photosphere and the spot’s photosphere. In the sun, the spot is deep in the envelope. In this case, $`d`$ is negative and $`\alpha <1`$. In AGB stars, we will find below that the spot photosphere is above the photosphere, so that $`d>0`$ is positive and $`\alpha >1`$. Since the vertical magnetic field lines do not exert radial force (e.g., Priest 1987 $`\mathrm{\S }8.4`$), the hydrostatic equilibrium within the spot does not include the magnetic pressure gradient $$\frac{1}{\rho _i}\frac{dP_i}{dr}=\frac{1}{\rho _e}\frac{dP_e}{dr}=g,$$ (6) where $`g=GM/R^2`$ is the gravity in the photosphere. Substituting $`dP_i/dr`$ from equation (4) into equation (6), and using equation (5) for $`dP_B/dr`$, give $$\frac{1}{\rho _i}\left(\frac{dP_e}{dr}\frac{P_B\alpha P_B}{d}\right)=\frac{1}{\rho _e}\frac{dP_e}{dr}.$$ (7) In the photosphere $`\rho _p\kappa _pl_p=2/3`$, while in the spot’s photosphere $`\kappa _il_i\rho _i=2/3`$. From the last two equations we find $$\rho _i=\rho _p\frac{\kappa _pl_p}{\kappa _il_i}.$$ (8) We now use our approximation that the pressure scale height is equal to the density scale height. This is not a bad approximation when there is a steep pressure drop and a shallow temperature drop as in the photosphere. Multiplying and dividing the right hand side of equation (6) by $`P_e`$ and taking quantities at the photosphere, and the same for the left hand side with $`P_i`$, we find $`T_i/l_i=T_p/l_p`$. In the last equality we have assumed that the scale height of the surrounding atmosphere near the photosphere of the spot is the same as that at the photosphere of the envelope $`l_p=l_eP/\left(dP/dr\right)`$. The density at the photosphere is given by $`\rho _p=\rho _ee^{d/l_p}`$, where $`\rho _e`$ is the surrounding envelope density at the radius of the spot’s photosphere. Using the above expressions for $`l_i`$ and $`\rho _p`$ in equation (8), we find $$\rho _i=\rho _e\left(\frac{\kappa _pT_p}{\kappa _iT_i}\right)e^{d/l_p}.$$ (9) Since we are considering the spot’s photosphere, we will use the subscript $`s`$ instead of $`i`$ from now on. Dividing equation (7) by $`P_e`$, using the definition of the scale height $`lP/\left(dP/dr\right)`$, and substituting for $`\rho _i`$ from equation (9), gives after rearranging terms $$\frac{\kappa _sT_s}{\kappa _pT_p}\left[1+\frac{l_p}{d}\frac{P_B}{P_e}\left(1\alpha \right)\right]e^{d/l}=1.$$ (10) ### 3.2 The Sun Let us examine the validity of the last equation for the sun. For the solar photosphere $`\rho _p10^7\mathrm{g}\mathrm{cm}^3`$, $`T_p=5800\mathrm{K}`$, gravity $`g=2.74\times 10^4\mathrm{cm}\mathrm{s}^2`$, and $`P=1.2\times 10^5\mathrm{dyne}\mathrm{cm}^2`$. From Alexander & Ferguson (1994) and the TOPbase data base (Cunto et al. 1993; Seaton et al. 1994) we find $`\kappa 0.25\mathrm{cm}^2\mathrm{g}^1`$. The scale height is $`l_p=280\mathrm{km}`$. For a typical large cool solar spot, we take $`T_s3,700\mathrm{K}`$ (e.g., Priest 1987 $`\mathrm{\S }1.4`$). 
By using these opacity tables, we find the photospheric density and opacity to be $`\rho _s5\times 10^7\mathrm{g}\mathrm{cm}^3`$, and $`\kappa _s0.05`$. With these values, we solve equation (10). When the pressure gradient is neglected, i.e., $`\alpha =1`$, the solution is $`d=2l_p`$, i.e., the spot is $`2l_p560\mathrm{km}`$ deep in the photosphere. With $`\alpha 1`$ and $`P_BP_e`$, the solution is $`d=2.4l_p`$, or a depth of $`670\mathrm{km}`$. These values are within the range of the depth of the spots in the sun, $`d500700\mathrm{km}`$, as inferred from the Wilson effect (e.g., Priest 1987 $`\mathrm{\S }1.4`$). This shows that equation (10) is a good approximation, at least for the sun. The reason for the spots being deeper in the envelope is that for the typical parameters of the solar photosphere, opacity decreases as temperature decreases. ### 3.3 AGB stars In AGB stars, the situation is the opposite of that in the sun. From the data presented by Alexander & Ferguson (1994), we find that the opacity drops slightly to a minimum as the temperature drops to $`2700\mathrm{K}`$ from $`3,000\mathrm{K}`$, but then sharply increases to a value $`50`$ times higher at a temperature of $`T2100\mathrm{K}`$ (all at a constant density). The higher opacity in the cool spots means that the density will be lower than in the rest of the photosphere. Lower density means a somewhat lower opacity, so that the real increase in opacity will be by a factor of $`50`$. Let us consider a specific example. From the definition of the pressure scale height, $`lP/\left(dP/dr\right)=P/\left(\rho g\right)`$, we find (in the photosphere), $$\frac{l_p}{R}0.05\left(\frac{R}{300R_{}}\right)\left(\frac{T_p}{3,000\mathrm{K}}\right)\left(\frac{M}{0.8M_{}}\right)^10.05\left(\frac{T_p}{3,000\mathrm{K}}\right)^1\left(\frac{M}{0.8M_{}}\right)^1\left(\frac{L}{6,500L_{}}\right)^{1/2},$$ (11) where the gravity is $`g=GM/R^2`$. We took the mean mass per particle to be $`\mu m_H=m_H`$, higher than for a fully ionized plasma since gas in an AGB star photosphere is partially recombined, and RCB stars are hydrogen deficient. The stellar mass is taken for a typical star on the AGB tip, with envelope mass of $`0.2M_{}`$ and a core mass of $`0.6M_{}`$. We use the scale height, radius, and temperature as in equation (11), to find the photospheric opacity and density. For the photosphere we get $`\rho _p10^9\mathrm{g}\mathrm{cm}^3`$ and $`\kappa _p5\times 10^4\mathrm{cm}^2\mathrm{g}^1`$. Following the sun, we take the cool spot to be at a temperature of $`T_s=2T_p/3=2,000\mathrm{K}`$. We find the density and opacity to be $`\rho _s5\times 10^{11}\mathrm{g}\mathrm{cm}^3`$ and $`\kappa _s1.3\times 10^2\mathrm{cm}^2\mathrm{g}^1`$, respectively. Solving equation (10) with these values and taking $`\alpha =1`$, we obtain $`d2.85l_p`$. Taking the magnetic pressure gradient into account, i.e., $`\alpha >1`$, will reduce $`d`$. Here, $`P_B<P_e`$, since the pressures are evaluated at the location of the spot photosphere, $`r=R+d`$. It will be more convenient to write equation (10) in terms of the pressure ratio at the stellar photosphere, $`\left(P_B/P_e\right)_{\mathrm{phot}}`$, rather than at the spot photosphere. 
This ratio is $$\left(\frac{P_B}{P_e}\right)=\left(\frac{P_B}{P_e}\right)_{\mathrm{phot}}\frac{e^{d/l_p}}{\alpha }.$$ (12) Rearranging terms in equation (10) gives $$e^{d/l_p}=\frac{\kappa _pT_p}{\kappa _sT_s}+\frac{l_p}{d}\left(\frac{P_B}{P_e}\right)_{\mathrm{phot}}\left(1\frac{1}{\alpha }\right).$$ (13) Under the condition that $`\kappa _s\kappa _p`$, the first term on the r.h.s may become much smaller than the second term. For example, for $`\left(P_B/P_e\right)_{\mathrm{phot}}=0.5`$ and $`\alpha =2`$, and with the other parameters as taken above, the solution of equation (13) is $`d=1.5l_p`$. In this case, the second term on the r.h.s. is $`0.167`$, while the first term is $`0.0577`$. We cannot make $`\alpha `$ much greater, since then $`d`$ becomes smaller, and the pressure cannot drop by a large factor in such a short distance . Going back to neglect the pressure gradient, we examine other temperatures. At $`T_s=1600\mathrm{K}`$, the opacity decreases (relative to $`2,000\mathrm{K}`$) to $`\kappa _s5\times 10^3\mathrm{cm}^2\mathrm{g}^1`$, and from equation (10) (and for $`\alpha =1`$) $`d=1.7l_p`$, while at $`T_s=2500\mathrm{K}`$ we find $`\kappa _s2\times 10^3\mathrm{cm}^2\mathrm{g}^1`$, and from equation (10) $`d1.2l_p`$. We conclude that cool magnetic spots on the surfaces of AGB stars are protruding above the photosphere by $`1.53`$ scale heights. Cool spots at $`2500\mathrm{K}`$ will probably have only a small influence, while at $`T_s1600\mathrm{K}`$ dust is already forming. The relevant temperature is $`2,000`$, where the spots are $`d1.53l_p0.10.15R`$ above the photosphere! In $`\mathrm{\S }4`$ we will discuss the implications of the protruding cool spots on dust formation and observations. ### 3.4 RCB stars Cool RCB stars. Cool magnetic spots in cool RCB stars, ($`T_{\mathrm{eff}}5,0007,000\mathrm{K}`$), will be deeper that the surrounding photosphere as in the sun. Because of the composition of RCB stars, mainly helium, the opacities are lower than for solar composition. From equation (1) we see that the pressure will be higher than that of a solar composition star with the same radius, luminosity and mass. As an example consider an RCB star of surface temperature $`7,000\mathrm{K}`$, radius $`70R_{}`$, hence luminosity of $`L=1.05\times 10^4L_{}`$, and a mass of $`0.6M_{}`$. From the table of the TOPbase opacity project (Cunto et al. 1993), and the scale height $`l_p0.03R`$, by equation (11; the mean weight per particle in RCB stars is larger than that assumed in equation 11, $`>1m_H`$, but to first order we still use equation 11) we find the opacity and density on the photosphere for these hydrogen deficient stars ($`X=0`$, $`Z=0.02`$) to be $`\kappa _p1.3\times 10^3\mathrm{cm}^2\mathrm{g}^1`$, and $`\rho _p4\times 10^9`$ (see also model atmospheres by Asplund et al. 1997). For a cool spot of $`T_s=2T_p/3=4,700\mathrm{K}`$, the opacity and density are $`\kappa _s6\times 10^4\mathrm{cm}^2\mathrm{g}^1`$, and $`\rho _s10^8`$, respectively. From equation 10 we find the depth of the cool spot to be $`d1.2l_p0.04R`$. Taking the pressure gradient into account with $`P_B=P_e`$ and $`\alpha 1`$, we find $`d1.7l_p0.05R`$. This is deeper than in the sun, since in the sun $`\left(l_p/R_{}\right)=4\times 10^4`$, while in cool RCB stars this ratio is two orders of magnitude higher. It will be interesting to conduct numerical simulations, similar to those of Woitke et al. 
(1996), but where the shock waves are traveling inside the “pipe” of the deep, $`d0.05R`$, magnetic spots on cool RCB stars. This, of course, is beyond the scope of the present paper. In $`\mathrm{\S }4.1`$ we suggest that formation of amorphous carbon dust occurs as the shock breaks out of the pipe on the surface of the star. Hot RCB stars. For a temperature of $`T_p=18,000\mathrm{K}`$, luminosity of $`L=10^4L_{}`$, hence $`R=10R_{}`$, and a mass of $`M=0.6M_{}`$, we find from equation (11) $`l_p0.01R`$. The photospheric opacity and density are $`\kappa 0.4\mathrm{cm}^2\mathrm{g}^1`$, and $`\rho _p2\times 10^{10}`$, respectively. Opacity tables for a hydrogen deficient atmosphere at $`T15,000`$ show that the opacity depends very weakly (relative to the range of cool RCB stars) on temperature. In a small range near $`20,000\mathrm{K}`$ the opacity even increases a little as temperature decreases. Taking the opacity to be constant, and $`T_s=\left(2/3\right)\times T_p`$, as in the sun, we find from equation (10) and for a small magnetic pressure gradient (in this case $`d`$ is small, so we can take $`\alpha 1`$) $`d0.4l_p`$. We see that the spot is well inside the photosphere, even when the opacity is taken to be constant. Below $`15,000\mathrm{K}`$, the opacity decreases steeply, and if the spot temperature is in this range, then the spot will be $`12l_p0.01R`$ inside the photosphere, much shallower than in cool RCB stars. ## 4 IMPLICATIONS ### 4.1 Dust formation As a parcel of gas in the wind moves away from the cool spots, it starts to get more and more radiation from the hotter surface of the star surrounding the spot. Therefore, even if initially this parcel is much cooler than the rest of the gas in the wind, at some distance from the surface it will be at only a slightly lower temperature than the surrounding gas. In order to stay much cooler until dust forms, Frank (1995) finds that the cool spots should be very large; having radius of a few$`\times 0.1R`$, where $`R`$ is the stellar radius. There is a problem in forming such large magnetic cool spots. This is because the strong magnetic field in cool spots is formed by concentrating a weak magnetic field. Magnetic flux conservation means that the area from which the weak magnetic field is concentrated to the spot is much larger than the spot’s area. This cannot be the case if the magnetic spot is as large as required by the calculation of Frank. The solution, we think, is that the dust forms very close to the cool spot, so that even small spots (but not too small) can form dust. We should stress again that the formation of dust above cool spots, as suggested here and by Soker (1998), is not intended to replace dust formation around the star at several stellar radii (as occurs in AGB stars). Our idea is that enhanced dust formation above cool spots increases the mass loss rate, and makes the overall mass loss geometry less spherical. AGB stars. The temperature amplitude due to the pulsation of Mira variables can be as high as $`15\%`$ (e.g., Hoffmeister, Richter, & Wenzel 1985). This means, that a cool spot of temperature $`2,000\mathrm{K}`$ can cool to $`1700\mathrm{K}`$. The high density of the spot photosphere, means that dust can already forms at this, or a slightly lower, temperature. Therefore, it is quite possible, that when a large and cool magnetic cool spots forms, large quantities of dust formed during the minimum temperature of each pulsation cycle. RCB stars. 
Such low temperatures are not attainable around cool and hot RCB stars, even in cool spots. However, as described below, Woitke et al. (1996) show that for $\rho _s\approx 10^{-13}$ to $10^{-16}\,\mathrm{g\,cm}^{-3}$ conditions allow the condensation temperature of carbon to be reached as a shock passes through the atmosphere of the star. For higher densities, however, the adiabatic cooling is negligible during the reexpansion following the shock, so the temperature remains near the radiative equilibrium temperature. Therefore, the higher densities present inside the cool spot do not enhance dust formation in the Woitke scenario. The cooler temperatures and the magnetic field, by enhancing the adiabatic expansion (see below), may aid dust formation close to the spot, where the densities are lower than inside the spot. To present our proposed scenario for enhanced dust formation, in RCB and AGB stars, but in particular in hot RCB stars, we must first summarize the effects of shocks as calculated and discussed by Woitke et al. (1996). Woitke et al. study the effect of shock waves, excited by stellar pulsations, on the condensation of dust around cool RCB stars. They consider only the spherically symmetric case, with an effective temperature of $T_p=7,000\,\mathrm{K}$. They examine shocks, which begin to develop somewhere below the photosphere, as they run out to several stellar radii. The shock velocities in their calculations were $20\,\mathrm{km\,s}^{-1}$ and $50\,\mathrm{km\,s}^{-1}$. Somewhere outside the photosphere, at a radius of $\sim 2R$, the density is in the right range for the following cycle to occur. (i) As the shock passes through the gas, it compresses the gas by a factor of $6$–$10$ and heats it by a factor of $3$–$10$. The compression and heating factors depend mainly on the shock velocity. (ii) Due to its higher density, the gas cools very quickly to its radiative equilibrium temperature. This equilibrium is with the radiation from the photosphere. (iii) The compressed gas reexpands and its density drops by more than an order of magnitude. This results in a large adiabatic cooling, which may bring the gas below the dust condensation temperature. The decrease in density, and hence the adiabatic cooling, becomes more pronounced as the shock velocity increases. In the calculations of Woitke et al., a $50\,\mathrm{km\,s}^{-1}$ shock results in dust formation, while for a $20\,\mathrm{km\,s}^{-1}$ shock no dust forms. Let us examine what happens during this three-stage cycle for gas above a cool magnetic spot. The pressure equilibrium above the photosphere is given by equation (3), i.e., the thermal pressure above the spot plus its magnetic pressure equals the thermal pressure of the surroundings (where the magnetic pressure is very small). (i) As a strong shock moving radially outward passes through a region, it compresses the gas by a factor $>4$ and heats it. The thermal pressure in the calculations of Woitke et al. (1996) increases by a factor of $\sim 10^2$. Since the magnetic field lines near the center of the spots are radial (e.g., Priest 1987), the magnetic pressure does not increase behind the shock. Therefore, the surrounding post-shock pressure exceeds that of the region above the spots. The surrounding pressure compresses the region above the spot in the transverse direction, increasing both the thermal and the magnetic pressure there, but the magnetic pressure is still smaller than the thermal pressure.
Therefore, the region above the spot is compressed by a larger factor than the surrounding medium. This increases the efficiency of the mechanism studied by Woitke et al. Both regions, above the spot and the surroundings, reach similar thermal states, since the magnetic pressure is small. (ii) Due to the high density, the gas in the two regions cools very quickly to its radiative equilibrium temperature; above the spot, however, the temperature will be lower. (iii) The compressed gas reexpands and its density drops by more than an order of magnitude. Because of the cooling and the reexpansion, mainly in the radial direction, the thermal pressure drops, and the magnetic pressure above the spots becomes an important, or even the dominant, pressure. The magnetic pressure results in a transverse expansion of the magnetic field lines. Since the gas is partially ionized, it will be practically frozen-in to the magnetic field lines, and the gas above the spots will expand transversely. The net result is that during the adiabatic cooling stage the gas above the spot will reexpand by a larger factor, and hence will reach a lower temperature. To summarize, cool magnetic spots have two factors which ease dust formation. First, the temperature is lower; second, the magnetic field increases the reexpansion, and hence the adiabatic cooling, of the region above the spot. Both lower the temperature and density of the spot and the gas above it, and make the dust formation mechanism studied by Woitke et al. effective closer to the stellar surface.

### 4.2 Observations

Clayton et al. (1997) found that in a deep decline of R CrB, the position angle of the continuum polarization was almost flat from 1 µm to 7000 Å but then changed rapidly, rotating by $\sim 60^{\circ }$ between 7000 and 4000 Å. This behavior is strikingly similar to that produced in post-AGB stars having an obscuring torus and bipolar lobes of dust. These new data strengthen the earlier suggestion that there is a preferred direction to the dust ejections in R CrB (Clayton et al. 1995). Dust ejections seem to occur predominantly along two roughly orthogonal directions, consistent with a bipolar geometry. Another example of asymmetrical mass loss from RCB stars is the apparent bipolar nebulosity observed around UW Cen (Pollacco et al. 1991). However, Clayton et al. (1999) find that the shape of the nebula changes with time due to changes in the illumination from the star. More observations are planned to detect and map the morphology of shells around RCB stars. Starspots have been detected and mapped on a number of stars using techniques which combine photometry and spectroscopy (Vogt & Penrod 1983; Strassmeier 1988 and references therein). The Doppler Imaging technique uses spectra of sufficient resolution to resolve individual stellar lines into several velocity bins. Because RCB stars likely rotate very slowly, extremely high spectral resolution would be required for Doppler Imaging. But accurate long-term photometric observations can be used to test for the presence of spots. The problem of confusion with pulsations and dust formation remains. The predicted RCB starspots will lie below the photosphere of the star, like those on the Sun, and should be distinguishable from spots at the level of, or higher than, the stellar photosphere, as we predict for AGB stars. Due to the Wilson effect, the spots will be vignetted when near the stellar limb, affecting the photometric behavior of the star (Priest 1987).
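To illustrate the kind of photometric signature such a search would target, here is a minimal sketch (our construction, not a published model) of the dimming produced by a single cool spot, treating both the photosphere and the spot as blackbodies. The spot covering fraction, the temperatures, and the bands are illustrative assumptions, and limb darkening and the Wilson effect are ignored:

```python
import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16          # cgs constants

def planck(wl_cm, T):
    """Planck intensity B_lambda(T)."""
    return (2 * h * c**2 / wl_cm**5) / np.expm1(h * c / (wl_cm * k * T))

def spot_dimming(wl_cm, T_phot, T_spot, f_spot):
    """Fractional flux deficit from a spot covering the fraction f_spot
    of the projected stellar disk."""
    return f_spot * (1.0 - planck(wl_cm, T_spot) / planck(wl_cm, T_phot))

for band, wl in [("V", 5500e-8), ("I", 8000e-8)]:   # wavelengths in cm
    d = spot_dimming(wl, T_phot=3000.0, T_spot=2000.0, f_spot=0.1)
    print(f"{band}: Delta m = {-2.5 * np.log10(1 - d):.2f} mag")
```

For these assumed values the dimming is of order 0.1 mag in both bands, which sets the precision scale for the long-term photometry discussed below.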
The main problem in observing cool spots on these evolved stars is that when the spot is large and long-lived, we predict enhanced dust formation, which complicates the observation. In addition, these stars rotate very slowly, so the rotation period is likely to be longer than the lifetime of a magnetic cool spot. This means that photometric variations due to rotation are very hard to detect. Therefore, the detection of cool spots is very tricky. In RCB stars a careful observation should be made before a deep decline, looking for photometric characteristics of a large cool region (of course, we will know we observed at the right time only after the decline). The spot will form on a dynamical time scale, which for RCB stars is $\sim 1$–$2$ months. Pulsation, as stated above, will complicate things considerably. In any case, broadband photometry of an RCB star obtained over a few months before a decline should be carefully compared with observations of the same time span in quiet times. In AGB stars the situation is much more complicated. In addition to dust formation above cool spots, the star forms large quantities of dust further out, due to the large-amplitude stellar pulsations and the cool photosphere. These stars tend to be obscured by dust. We suggest the following type of observations. The target stars should be on the upper AGB, preferentially carbon stars, but before dust obscuration. At this stage magnetic activity starts to become significant (Harpaz & Soker 1999), and the cool spots are expected to live for a few weeks to a few months. Continuous broadband photometry (e.g., VRI) should be made for a complete pulsation cycle, about a year. For both the RCB and the AGB stars, a spot computer model will attempt to fit the light variations in the various bands to multiple spots on the surface of the star (Strassmeier 1988). The different points on the surface, inside and outside of spots, will be assigned different temperatures. The flux is then integrated over the visible hemisphere of the star and compared to the photometric observations in the different bands. Good fits can be obtained using this method, but the results are not unique unless combined with Doppler imaging. In parallel, a search for magnetic fields in, e.g., SiO masers should be made. A protruding cool magnetic spot ($\mathrm{\S }3.3$), when at an angle of $\sim 90^{\circ }$ to the line of sight, will cause the AGB star to appear asymmetrical. Speckle interferometry can be used to study such stars (e.g., Karovska et al. 1991) to detect deviations from symmetry. Karovska et al. (1991) mention several possibilities for the asymmetry they detect in Mira, one of them being a large convective spot (Schwarzschild 1975). We would like to add to their list a large protruding magnetic spot as another possible cause of the deviation from sphericity.

## 5 SUMMARY

Our main goals and results can be summarized as follows. (1) The properties of cool magnetic spots as known from the sun can naturally explain many properties of mass loss from AGB and RCB stars. The assumption that we make here is that magnetic dynamo activity occurs in these evolved stars even when they rotate very slowly, $\sim 10^{-4}$ times their equatorial Keplerian velocity (Soker 1998). (2) We calculate the position of the spots' photosphere. In AGB stars the spots protrude from the photosphere, while they are deeper in the envelope of RCB stars. (3) Using the mechanism proposed by Woitke et al.
(1996), and the results of Frank (1995), we suggest that the lower temperature and the magnetic field above the spot facilitate dust formation closer to the stellar surface, after the passage of a shock wave driven by the stellar pulsation. (4) We propose observations that can be made to look for the presence of cool magnetic spots on AGB and RCB stars. These include long-term photometric monitoring in several broadband filters, with a temporal resolution of days, and speckle interferometry of AGB stars. (5) Future calculations should combine the work of Frank (1995) and Woitke et al. (1996) with a magnetic field above the spot. We need 2D or, even better, 3D simulations of shock waves propagating from the cool spot and around it, with the magnetic pressure included above the cool spot.

ACKNOWLEDGMENTS: This research was supported in part by a grant from the University of Haifa and a grant from the Israel Science Foundation. NS would like to thank the Israeli Students Union for their 6-week-long strike during the winter semester, which allowed the completion of this work in a relatively short time. GC was supported by NASA grant JPL 961526.
# Highly Sensitive Centrality Dependence of Elliptic Flow – A Novel Signature of the Phase Transition in QCD

## Abstract

Elliptic flow of the hot, dense system which has been created in nucleus-nucleus collisions develops as a response to the initial azimuthal asymmetry of the reaction region. Here it is suggested that the magnitude of this response shows a "kinky" dependence on the centrality of collisions if the system passes through a first-order or rapid transition between quark-gluon plasma and hadronic matter. We have studied the system Pb(158 A GeV) on Pb employing a recent version of the transport-theoretical approach RQMD and find the conjecture confirmed. The novel phase transition signature may be observable in present and forthcoming experiments at the CERN-SPS and at RHIC, the BNL collider.

One of the most important goals of the heavy-ion programs in the ultrarelativistic energy domain is the search for the phase transition between hadronic matter and the quark-gluon plasma (QGP). After a decade-long effort based on numerous experiments at fixed-target machines (CERN-SPS, BNL-AGS), heavy-ion physics can be considered a mature field today. Thus it may seem surprising that there is still a shortage of reliable signatures for the elusive QGP state and the transition itself . It may be easier to comprehend the difficulties in identifying the QGP if one takes into account that the properties of the QGP and the transition have to be reconstructed from the final state, which obviously is of hadronic nature. On a more fundamental level, it has become clear only recently that the properties of strongly interacting matter even far above the critical temperature $T_c$ are essentially non-perturbative. This makes many of the "first-generation" QGP signals which are based on perturbation theory unreliable at best. On the other side, information about the QGP and the phase transition region has become available with the advent of more powerful lattice gauge simulations of quantum chromodynamics (QCD) . Most notably, it has been shown that chiral symmetry is restored at rather low temperatures (in the range 140 to 170 MeV). Furthermore, the Equation of State (EOS) varies rather rapidly in the transition region. It is not clear yet whether the transition is of weak first order or just a rapid cross-over between the two phases. The EOS extracted from the lattice clearly displays the transition from hadron to quark-gluon degrees of freedom. Pressure and energy density approach the ideal Stefan-Boltzmann values at temperatures $\sim 3T_c$. A generic feature of the EOS in the transition region is the presence of the so-called "softest point of the EOS", related to the effect that the energy density may jump with increasing temperature but the pressure does not. The collective transverse flow which develops in heavy-ion collisions reflects the properties of the EOS. Usually, one distinguishes various types of transverse flow: the radial (isotropic) component, directed flow (a sideward kick in the reaction plane) and elliptic flow, the latter being a preferential emission either along the impact parameter axis or out of the reaction plane (squeeze-out) . The general idea why a phase transition may show up in flow observables is rather straightforward: at densities around the softest point the force driving the matter expansion gets weakened. A long time ago, van Hove suggested that the multiplicity dependence of average transverse momenta may display a plateau and a second rise .
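To make the notion of the softest point concrete, the following schematic sketch (our construction; all numbers are illustrative assumptions and not the lattice-based EOS used below) locates the minimum of $p/e$ for a piecewise EOS with a constant-pressure mixed phase:

```python
import numpy as np

c_h2 = 0.15                 # (speed of sound)^2 in the hadronic phase, assumed
eps_H, eps_Q = 1.0, 2.5     # energy densities bounding the mixed phase, GeV/fm^3

def pressure(eps):
    p_c = c_h2 * eps_H                   # constant pressure in the mixed phase
    if eps < eps_H:
        return c_h2 * eps                # hadronic phase
    if eps < eps_Q:
        return p_c                       # mixed phase: P stays flat while e jumps
    return p_c + (eps - eps_Q) / 3.0     # QGP: dP/de = 1/3 (ideal massless gas)

eps = np.linspace(0.1, 6.0, 2000)
p_over_e = np.array([pressure(e) / e for e in eps])
i = int(np.argmin(p_over_e))
print(f"softest point: e = {eps[i]:.2f} GeV/fm^3, p/e = {p_over_e[i]:.3f}")
```

The minimum of $p/e$ sits at the QGP edge of the mixed phase, which is why matter born near this energy density expands only sluggishly.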
So far, it has not been possible to deduce the presence of a phase transition from the transverse momentum spectra. Some time ago we suggested that elliptic flow may be a better-suited observable to identify a phase transition of first-order type . Here we make good on this promise and present a novel signature of the QCD phase transition. We predict a rather characteristic centrality dependence of the elliptic flow if the created system passes through the softest region of the EOS in heavy-ion reactions. Elliptic flow in the central region of ultrarelativistic collisions is driven by the almond shape of the participant matter in the transverse plane . It was argued in that elliptic flow may be more sensitive to the early pressure than the isotropic radial flow. "Early" and "late" are defined by the time scale set by the initial transverse size $r_t=\sqrt{\langle x^2+y^2\rangle }$ of the reaction region. Thus early flow appears at times $\lesssim r_t/c$, while we would refer to flow generated at times $>2r_t/c$ as late. One reason for the larger sensitivity of the elliptic flow to early pressure is that the generated flow asymmetry works against its cause and diminishes the spatial asymmetry on a time scale proportional to $\sqrt{\langle y^2\rangle }-\sqrt{\langle x^2\rangle }$. Furthermore, the elliptic asymmetry is proportional to the difference between the flow strengths in the $x$ (parallel to the impact parameter) and $y$ directions. Thus it is more fragile than radial flow. Viscosity-related non-ideal effects tend to wash out the pressure-driven asymmetries. Obviously, these effects will be more pronounced in the later, dilute stages of the reaction. Unfortunately, this could not be demonstrated in the earlier work. The transport model RQMD (version 2.3) employed for those calculations lacked any sizable transverse pressure in the early stages, a combination of softness from pre-equilibrium motion and the absence of a QGP phase, which would generate more pressure than the resonance matter simulated in the model. As a result, the final hadron momentum spectra showed azimuthal asymmetries much smaller than hydrodynamical results which include a phase transition into the QGP. In the meantime, NA49 has analysed data for semi-central Pb(158 A GeV) on Pb collisions . The measured azimuthal asymmetries are roughly equally distant from the closest results based on hydrodynamics and from the RQMD calculations . Both of these calculations show a factor-of-two disagreement, however, in different directions. In this Letter we present results from calculations with a new version of the transport model RQMD (version 3.0) which incorporates an EOS with a first-order phase transition. Comparing to the results obtained in the model without a QGP phase, we may assess the importance of the phase transition. Let us first briefly describe how the phase transition is implemented in the model. A detailed description of the algorithm will be presented elsewhere. In RQMD, nucleus-nucleus collisions are calculated in a Monte Carlo fashion. While the nucleons of the colliding nuclei pass through each other, they are decomposed into constituent quarks. Strings may be excited, and overlapping strings fuse into ropes (with larger chromoelectric field strength). After their decay and fragmentation, secondaries emerge and may interact with each other. Formed resonances are treated as unstable quasi-particles. This leads to a resonance gas EOS if there are no corrections from other interactions.
The QCD dynamics in the phase transition region is not well understood. Even if there is a quasi-particle description, it is not obvious which one of the many possible choices (strings, constituent quarks, partons, either massless or with dynamical masses) is to be preferred. In this situation we have decided to stick to the implemented degrees of freedom and modify the collision term instead. Since we expect that hydrodynamics is a reasonable approach for the transverse dynamics in the ultradense stage, the EOS should be the most relevant ingredient for the expansion dynamics anyway. It is well known that a different treatment of the interactions between quasi-particles may modify the EOS. In general, if particles are free between interaction points, the virial theorem specifies that the pressure of the system in equilibrium is given by $$P=P_{id}+\frac{1}{dV\Delta T}\sum_{(a,b)}\left(\delta \vec{p}_a\cdot \vec{r}_a+\delta \vec{p}_b\cdot \vec{r}_b\right).$$ (1) The first term arises from free streaming. The second term represents the non-ideal contribution $\Delta P$ due to momentum changes $\delta \vec{p}_a$ at discrete collision points $\vec{r}_a$. $d=3$ is the number of space dimensions, $V$ the volume of the system, $\Delta T$ a sufficiently large time interval, and the summation goes over all collisions; $a$ and $b$ specify which quasi-particles collide. The standard collision term in RQMD is manufactured such that it does not contribute to the pressure. Now, we depart from this "ideal" collision term and let each quasi-particle interact elastically with a neighbor after any of the standard collisions. The momentum change is constrained by $$\delta \vec{p}_a\cdot \vec{r}_a+\delta \vec{p}_b\cdot \vec{r}_b\stackrel{!}{=}d\,\frac{\Delta P}{\rho }\left(\Delta t_a^{sc}+\Delta t_b^{sc}\right).$$ (2) $\Delta t_a^{sc}$ refers to the time which has elapsed since the last of the EOS-modulating collisions, and $\rho $ is the density of quasi-particles. Introducing collisions according to eq. (2) changes the pressure of the system to $P_{id}+\Delta P$. Eq. (2) provides a numerically rather efficient method to modify the ideal EOS. The physics content of eq. (2) is that the momentum transfer may be chosen to be either suitably repulsive (QGP at high temperature) or attractive (mixed phase). Fig. 1 displays the ideal EOS based on counting the propagating quasi-particles in RQMD. In addition, an EOS is shown which may be produced by introducing energy-density dependent additional interactions according to eq. (2). This EOS is the one which will be used for the calculations presented in this Letter. It is calculated from a quasi-particle model of quarks and gluons with dynamical thermal masses . We have chosen this EOS because it provides a good fit to lattice data. The EOS contains a first-order transition at $T_c$ = 160 MeV with a latent heat of 467 MeV/fm<sup>3</sup>. For the RQMD calculations of nucleus-nucleus collisions the novel interaction term is introduced in a local density approximation, i.e. all variables in eq. (2) are evaluated in the local rest frame of the energy current. Neither is the modulation of the local pressure tensor restricted to regions of local equilibrium, nor is (the other extreme) any local equilibration enforced, e.g. by randomizing the directions of local momenta.
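The following is a schematic illustration of how a momentum kick satisfying eq. (2) can be constructed for a single pair. It is our simplified reconstruction, not the actual RQMD 3.0 algorithm; in particular, the additional kinematic constraints of a genuine elastic two-body collision are ignored here:

```python
import numpy as np

def eos_kick(r_a, r_b, dt_a, dt_b, delta_P, rho, d=3):
    """Momentum change applied to particle a (b receives the opposite kick).

    Choosing delta_p along the relative separation conserves total momentum,
    and its magnitude is fixed so that the pair contributes
    d * (delta_P / rho) * (dt_a + dt_b) to the virial sum of eq. (1),
    as demanded by eq. (2)."""
    r_ab = r_a - r_b
    dist = np.linalg.norm(r_ab)
    magnitude = d * (delta_P / rho) * (dt_a + dt_b) / dist
    return magnitude * r_ab / dist

# repulsive kick for delta_P > 0 (stiffens the EOS); attractive for delta_P < 0
dp = eos_kick(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0]),
              dt_a=0.5, dt_b=0.5, delta_P=0.1, rho=0.5)
print(dp)   # kick on particle a; particle b gets -dp
```

One easily verifies that $\delta \vec{p}_a\cdot \vec{r}_a+\delta \vec{p}_b\cdot \vec{r}_b$ then equals the right-hand side of eq. (2) for this pair.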
Let us now turn to the results of RQMD calculations which contain a phase transition. We have chosen to do the calculation for the system Pb(158 A GeV) on Pb, i.e. the heavy-ion reaction at the highest beam energy currently available. This may be a good place to look for the phase transition. The time evolution of the azimuthal asymmetry parameter $\alpha $ (momentum flow asymmetry) $$\alpha =\left(\langle p_x^2\rangle -\langle p_y^2\rangle \right)/\left(\langle p_x^2\rangle +\langle p_y^2\rangle \right)$$ (3) for quasi-particles around midrapidity is displayed in fig. 2. It shows a very different behaviour from corresponding calculations based on RQMD without a QGP-type EOS . We see from the figure that in case of QGP formation essentially all of the finally observable asymmetry is created at times smaller than 4 fm/c. The mixed phase leads to a marked dip of the asymmetry for more central collisions. Since the pressure is comparably low, free motion between interactions is able to destroy some of the earlier created flow asymmetry. On the other side, the calculations for semi-peripheral collisions (e.g. b=7.6 fm) show that the softening in the mixed phase cannot stall the expansion of the system. Needless to say, this is a function of the latent heat, which is very moderate for the chosen EOS. The overall effect of the mixed phase and the purely hadronic stage is rather small in a broad impact parameter range. Under the condition of an already well-developed flow asymmetry, diffusive processes and thermal-pressure driven work seem to neutralize each other in the later stages. In the QGP scenario the azimuthal asymmetry is indeed mostly a signature of the early pressure. It is amusing that non-ideal effects from viscosity in the low-density stage may be helpful to infer information about the pressure in the high-density region. In the following we will present the main result of the Letter, the measurable azimuthal asymmetry of final hadrons, which the experimentalists usually take to be the number flow asymmetry $v_2$, $$v_2=\langle \cos (2\varphi )\rangle $$ (4) as a function of centrality. Tight impact parameter cuts can be obtained using a forward-energy trigger like NA49 has. Of course, the spatial asymmetry of the reaction zone, which is correlated with the asymmetry of the participant nucleons in the incoming nuclei, $$\alpha _x=\left(\langle y^2\rangle -\langle x^2\rangle \right)/\left(\langle x^2\rangle +\langle y^2\rangle \right),$$ (5) is itself a function of the impact parameter. Trivially, $v_2$ goes to zero for very small and very large impact parameters. The value of $v_2$ at any given centrality reflects both the strength of the spatial asymmetry and the response of the created system due to the generated pressure. However, we may disentangle the effects of geometry and dynamics. In general, the final flow asymmetry $v_2$ can be viewed as a function of many variables: $\alpha _x$, the average initial energy density $e_0$, the transverse size $r_t$, to name just a few: $$v_2=f(\alpha _x,e_0,r_t,\dots )\approx A_2(\overline{\alpha _x})\,\alpha _x+\mathcal{O}((\alpha _x-\overline{\alpha _x})^2),$$ (6) where we have obtained the second equality from a Taylor expansion around some intermediate value $\overline{\alpha _x}$, taking into account that $v_2$ vanishes for $\alpha _x\to 0$. In Pb on Pb collisions, $\alpha _x$ varies between 0 and 0.50 for impact parameters less than 12 fm. Picking an intermediate value of $\overline{\alpha _x}$ means that the neglected higher-order terms in $(\alpha _x-\overline{\alpha _x})$ are expected to be rather small in practice, on the order of 10 percent.
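For concreteness, the observables of eqs. (3) and (4) can be computed from a set of final-state transverse momenta as in the following minimal sketch (our construction; the reaction plane is taken to be the x-z plane, and the toy momentum sample is purely illustrative):

```python
import numpy as np

def flow_asymmetries(px, py):
    """Momentum flow asymmetry alpha of eq. (3) and v2 of eq. (4)."""
    alpha = (np.mean(px**2) - np.mean(py**2)) / (np.mean(px**2) + np.mean(py**2))
    phi = np.arctan2(py, px)                 # azimuth w.r.t. the reaction plane
    v2 = np.mean(np.cos(2 * phi))
    return alpha, v2

# toy sample: Gaussian momenta, slightly broader in x than in y
rng = np.random.default_rng(0)
px = rng.normal(0.0, 0.42, 10_000)           # GeV/c, widths are illustrative
py = rng.normal(0.0, 0.38, 10_000)
print(flow_asymmetries(px, py))
```

In an experiment the reaction plane is of course not known a priori and must be estimated event by event, which this sketch does not address.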
Defining the scaled flow asymmetry as $$A_2=v_2/\alpha _x$$ (7) will therefore allow one to assess the dynamical response of the created system to the initial spatial asymmetry. We display the scaled flow asymmetry $A_2$ versus impact parameter $b$ in fig. 3. Of course, the asymmetry factor $A_2$ will tend to vanish in the most peripheral collisions ($b\to 2R_{Pb}$): initial energy densities are too small, and the system size does not sustain extended reaction times. Both for pions and for protons, $A_2$ shows a pronounced variation for smaller $b$ values. This is a result of the softness of the EOS at intermediate energy densities. However, non-equilibrium effects, in particular partial thermalization initially and the system-size dependent freeze-out, also play a major role. $A_2$ values extracted from hydrodynamic calculations show essentially no centrality dependence, except for grazing collisions. This feature is in marked contrast to the transport calculation, which includes the non-equilibrium aspects of the dynamics. Without a phase transition, the asymmetry factor $A_2$ calculated from RQMD would simply increase monotonically with centrality, approximately linearly with the initial system size in the reaction plane ($2R_{Pb}-b$). Indeed, the hard QGP stage of the reaction leads to a rapid increase of the asymmetry $A_2$ in collisions with $b\approx 10$ fm, as is visible from fig. 3. In this range of centralities the initial source size $\sqrt{\langle x^2\rangle }$ along the impact parameter axis is small enough that the associated characteristic time for the development of flow falls within the lifetime of the QGP phase. In somewhat more central collisions the further increase of the asymmetry is cut off after the system enters the stage of soft and, later on, viscous expansion. Initial energy densities change less with increasing centrality than the system size does. Therefore, at the characteristic time for flow development, typical energy densities are in the region of the softest point. In these reactions the increasing reaction time, which is helpful for developing the asymmetry, is counteracted by the softness of the matter. In any case, the centrality dependence of the flow asymmetry follows a different slope than in the class of more peripheral collisions. For collisions with $b<$ 5 fm, kinetic equilibration, which takes place on a scale of 3-4 fm/c, may be realized already in the QGP phase. This gives rise to yet another centrality dependence of the flow asymmetry $A_2$ (a second rise). van Hove's original idea to look for a plateau and a second rise in momentum spectra as a signal of the QCD phase transition may turn out to be true after all. Present experience tells us that it will probably not be found in the multiplicity dependence of average transverse momenta: he did not take into account that the dynamics of the hadronic stages may add a late radial-flow component, which spoils the original idea. However, if the presented calculations contain some truth, it is much better justified to neglect the late hadronic stages for the azimuthal asymmetries of particle spectra. The presented calculation contains some uncertainties. The equation of state is not well determined in the transition region. The admixture of baryons in the central region and the strong pre-equilibrium deformation of the local stress tensor add to the uncertainties.
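For orientation, the geometric quantity $\alpha _x$ of eq. (5), which enters the denominator of eq. (7), can be estimated for a sharp-sphere overlap of the two nuclei by a simple Monte Carlo integration over the transverse overlap region. This sketch is our construction, with an assumed hard-sphere radius, and it ignores the surface diffuseness of the nuclear density:

```python
import numpy as np

R_PB = 7.1  # fm, assumed hard-sphere radius of Pb

def alpha_x(b, n=200_000, seed=1):
    rng = np.random.default_rng(seed)
    # sample points uniformly in the disk of nucleus 1 (centered at -b/2)
    phi = rng.uniform(0, 2 * np.pi, n)
    r = R_PB * np.sqrt(rng.uniform(0, 1, n))
    x, y = r * np.cos(phi) - b / 2, r * np.sin(phi)
    # keep only the overlap region (inside nucleus 2, centered at +b/2)
    m = (x - b / 2)**2 + y**2 < R_PB**2
    x2, y2 = np.mean(x[m]**2), np.mean(y[m]**2)
    return (y2 - x2) / (x2 + y2)        # eq. (5)

for b in (2.0, 7.0, 11.0):
    print(f"b = {b:4.1f} fm : alpha_x = {alpha_x(b):.2f}")
```

The asymmetry grows monotonically from zero at $b=0$, consistent with the range 0 to 0.50 quoted above.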
Nevertheless, the potential rewards in terms of insight into the phase transition dynamics should justify a careful search for structure in the centrality dependence of elliptic flow at SPS and future RHIC energies. This work has been supported by DOE grant No. DE-FG02-88ER40388. After completion of the manuscript the author became aware of a recent work in which the influence of a first-order transition on elliptic flow is also discussed.
# 1 INTRODUCTION

Interaction between self-focussing cylindrical beams (spatial solitons) in bulk nonlinear media is a problem of obvious interest, both in itself and for applications. This interaction was studied experimentally and by means of numerical simulations in photorefractive media , and was simulated in detail in an isotropic model with the second-harmonic-generating (SHG) nonlinearity . In the latter model, it was demonstrated that the spiraling bound state of two cylindrical beams is unstable. A general analytical expression for the potential of interaction between far-separated two- and three-dimensional (2D and 3D) solitons was very recently derived in . The potential also predicts an instability of the orbiting bound state of two solitons. A very convenient model for the study of multidimensional solitons and their interactions is the *cubic-quintic* nonlinear Schrödinger (CQNLS) equation, in which the cubic nonlinearity is self-focusing, giving rise to the beams (2D solitons) or "light bullets" (3D solitons), while the quintic term is self-defocusing, precluding wave collapse in the model: $$iu_t+\nabla ^2u+|u|^2u-g|u|^4u=0.$$ (1) The coefficients in (1), except for $g$, can be set equal to $1$ by means of scale transformations, whereas $g$ is the ratio of the quintic to the cubic nonlinear susceptibility. In the application to nonlinear optical media, the temporal variable $t$ in (1) must be replaced by the propagation distance $z$, while the role of the third transverse variable is played by the "reduced time", $t-z/V_{\mathrm{gr}}$, $V_{\mathrm{gr}}$ being the mean group velocity of the carrier wave (this implies that the temporal dispersion in the medium is anomalous) . Finally, the Hamiltonian of this model is $$H_u=\int \left(|\nabla u|^2-\frac{1}{2}|u|^4+\frac{g}{3}|u|^6\right)d\mathbf{r}.$$ (2) *Vortex* beams, with vorticity ("spin") $s=1$, and interactions between them, described by Eq. (1), were simulated by Quiroga-Teixeiro and Michinel . A remarkable result is the numerically discovered robustness of the vortex beams (which were found to be strongly unstable against azimuthal perturbations in the SHG model ). Note that the model (1) is not merely the simplest one that gives rise to stable multidimensional solitons: according to experimental data , the combination of the focusing cubic and defocusing quintic terms adequately represents the nonlinear optical properties of some real materials. The effective potential of the intersoliton interaction derived in applies to a wide class of models, including Eq. (1). However, it does not apply to bimodal systems consisting of *two* equations with *in*coherent nonlinear coupling between them (mediated by cross-phase modulation in nonlinear optical media), in the case when the two solitons (beams) belong to different modes. The simplest bimodal generalization of the model based on the Hamiltonian (2) is furnished by the Hamiltonian $H=H_u+H_v+H_{\mathrm{int}}$, where $H_v$ is the same expression as (2) with the field $u$ substituted by another field $v$, and the interaction part of the Hamiltonian is $$H_{\mathrm{int}}=\int \left[-\beta |u|^2|v|^2+\alpha \left(|u|^4|v|^2+|u|^2|v|^4\right)\right]d\mathbf{r},$$ (3) with, generally speaking, arbitrary positive constants $\alpha $ and $\beta $.
The full Hamiltonian of the bimodal system gives rise to the equations $$iu_t+\nabla ^2u+\left(|u|^2+\beta |v|^2\right)u-\left(g|u|^4+2\alpha |u|^2|v|^2+\alpha |v|^4\right)u=0,$$ (4) $$iv_t+\nabla ^2v+\left(|v|^2+\beta |u|^2\right)v-\left(g|v|^4+2\alpha |u|^2|v|^2+\alpha |u|^4\right)v=0.$$ (5) Commonly known examples of optical bimodal systems are provided by two orthogonal polarizations of light, or by two light waves with different carrier wavelengths . In the latter case, as well as in the former one with circular polarizations, the cubic cross-phase-modulation coefficient is $\beta =2$. In the case of two linear polarizations, $\beta =2/3$ (and the usual assumption is to drop the additional four-wave-mixing terms). The constant $\alpha $ is left here arbitrary, but, in the most interesting cases, the second term in (3) will only produce a small correction to the effective interaction potential. As in the case of the single-mode system, the interaction between solitons in different modes depends on the separation between them, but, unlike in the single-mode case, it is not sensitive to the phase difference between the solitons; hence the interaction is expected to be *simpler* than inside the same mode. The objective of this work is to find an effective potential of the interaction between 2D and 3D solitons in the bimodal system, including the cases when the interacting solitons are identical and when they are different (the interaction between 2D solitons with different vorticities is also included). The interaction between identical 2D beams was recently considered in , but using an artificially introduced Gaussian ansatz for the soliton. As was done in for the single-mode system, in this work we find the interaction potential in a general analytical form. However, the derivation is essentially different from that developed in ; in particular, the derivation proves to be very different for the 2D and 3D cases, while in the single-mode system these two cases were very similar. In section 2, we derive the potential for the interaction between 2D solitons (spatial beams) with unequal masses. We explicitly consider two limit cases, when the masses of the interacting solitons are very different or nearly equal. The latter case develops a singularity in the limit of equal masses; therefore the interaction between identical 2D solitons is considered separately, which is done in section 3. The result, and the way to obtain it, turn out to be drastically different from the case of unequal masses: when the masses are not equal, the dominating contribution to the effective interaction potential is produced by the vicinity of the soliton having the larger mass (which is similar to the situation in the single-mode system ), while, in the equal-mass case, the dominating area is located *between* the solitons, in contrast with the case of the single-mode system. In section 4, the potential is derived for 3D solitons with equal masses. In this case, the derivation is similar to that for the 2D solitons with *un*equal masses, but it gives rise to an additional logarithmic factor. The results are summarized in section 5, where we conclude, in particular, that the obtained potentials give rise to two bound states of the solitons orbiting around each other, one of which is *stable* (as was already concluded in ), contrary to the orbiting states in the single-mode models (including the SHG one ), which are all unstable.
This difference, which is, obviously, very important for applications, is due to the fact that, in the bimodal system, the interaction potential does not depend on the phase difference between the two solitons.

## 2 THE INTERACTION BETWEEN DIFFERENT TWO-DIMENSIONAL SOLITONS

A general 2D stationary solution to Eq. (4), with an internal frequency $\omega \equiv \mu ^2$, is looked for in the form $$u_s=\exp \left(i\mu ^2t+is\theta \right)U(r),$$ (6) where $s$ is an integer spin (vorticity), and the real function $U(r)$ satisfies the equation $U^{\prime \prime }+r^{-1}U^{\prime }-s^2r^{-2}U+U^3-gU^5=\mu ^2U$, which can be easily solved numerically . A soliton solution is defined by its asymptotic form at $r\to \infty $, $$U(r)\approx A_s(\mu )r^{-1/2}\exp (-\mu r),$$ (7) where the amplitude $A_s(\mu )$ is to be found numerically.
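For completeness, the profile $U(r)$ can be obtained by a standard shooting procedure. The following is a minimal sketch (our construction) for the nodeless $s=0$ state, with illustrative values of $\mu $ and $g$; the amplitude $A_s(\mu )$ would then follow from fitting the tail to eq. (7):

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, g = 0.5, 0.1                  # illustrative parameters
R_MAX = 25.0

def rhs(r, y):
    U, V = y                      # V = U'
    return [V, mu**2 * U - U**3 + g * U**5 - V / max(r, 1e-12)]

def crossed_zero(r, y): return y[0]          # overshoot: U dives through zero
crossed_zero.terminal, crossed_zero.direction = True, -1
def blew_up(r, y): return y[0] - 10.0        # undershoot: U turns around and grows
blew_up.terminal, blew_up.direction = True, 1

lo, hi = 0.6, 3.0                 # bracket for U(0): undershoot / overshoot
for _ in range(50):
    U0 = 0.5 * (lo + hi)
    sol = solve_ivp(rhs, (1e-6, R_MAX), [U0, 0.0],
                    events=(crossed_zero, blew_up), rtol=1e-9, atol=1e-11)
    if sol.t_events[0].size:      # U crossed zero: amplitude too large
        hi = U0
    else:                         # U failed to decay to zero: amplitude too small
        lo = U0
print(f"U(0) = {0.5 * (lo + hi):.6f}")
```

For $s\ge 1$ the boundary condition at $r=0$ changes ($U\sim r^s$), so the starting series of the integration would need to be modified accordingly.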
We will consider a situation with solitons of the form (6), (7) in each mode $u$ and $v$ that have, generally, different spins $s_1$ and $s_2$ and different frequency parameters $\mu _1$ and $\mu _2$, which determine the effective masses of the solitons. The size of a soliton can be estimated, pursuant to Eq. (7), as $\mu ^{-1}$. We will consider the case when the separation between the solitons is much larger than their proper sizes, i.e., $R\mu _{1,2}\gg 1$. The interaction Hamiltonian (3) allows one to define an effective interaction potential for two separated solitons, approximating the two-soliton configuration by a linear superposition of two isolated solitons and substituting it into (3) . This approach requires actual calculation of the integrals in (3), which can be done for 2D solitons in an exact form only in exceptional cases (see, e.g., ), another drawback being that the distortion of the "tail" of each soliton due to its interaction with the "body" of the other one is ignored. In the work , the necessary integral was evaluated assuming a Gaussian ansatz for the isolated solitons. However, the corresponding effective interaction potential, decaying $\propto \exp \left(-\mathrm{const}\cdot R^2\right)$, was actually produced by the ansatz rather than by the model. In fact, the potential must decay as $\exp \left(-\mathrm{const}\cdot R\right)$, see below. In the work , another approach to the calculation of the effective potential was developed for the single-mode systems, following the way elaborated earlier for 1D solitons in . This method did not require knowing the internal structure of the soliton, and did not imply neglecting the distortion of each soliton's tail due to its overlap with the other soliton. All that is necessary to know about the individual solitons for the application of this method is the asymptotic amplitudes $A_s(\mu )$ in (7). Here, we will apply a similar method to the bimodal system (4), (5), although the technical details will be essentially different from those in the case of the single-mode systems. We start, still assuming a linear superposition of the two solitons $u_{s_1}$ and $v_{s_2}$ (see Eqs. (6) and (7)) with widely separated centers, and setting, for definiteness, $\mu _1>\mu _2$. Because of the exponential decay of the fields, it is obvious that the dominant contribution to the interaction potential (3) will be produced by the vicinity of the soliton with $\mu =\mu _1$. First, we consider the case $\mu _2\ll \mu _1$, i.e., a light soliton (beam) interacting with a heavy one. Substituting the field $v$ of the light soliton by the asymptotic expression (7), and neglecting its small variation over the size of the narrow heavy soliton, we can easily cast the expression for the interaction potential into the form

$$H_{\mathrm{int}}\approx -A_{s_2}^2(\mu _2)R^{-1}\exp \left(-2\mu _2R\right)\left[\beta \int \left|u_{s_1}(\mathbf{r};\mu _1)\right|^2d\mathbf{r}-\alpha \int \left|u_{s_1}(\mathbf{r};\mu _1)\right|^4d\mathbf{r}\right]=-\left(\beta m_1-\alpha \tilde{m}_1\right)A_{s_2}^2(\mu _2)R^{-1}\exp \left(-2\mu _2R\right),$$ (8)

where $m_1\equiv \int \left|u_{s_1}(\mathbf{r};\mu _1)\right|^2d\mathbf{r}$ and $\tilde{m}_1\equiv \int \left|u_{s_1}(\mathbf{r};\mu _1)\right|^4d\mathbf{r}$ are two integral characteristics of the heavy soliton, $m_1$ being, in fact, its effective mass. Thus, both attraction and repulsion between the light and heavy solitons may take place, depending on the sign in front of the expression (8). Another interesting case is $\mu _1-\mu _2\equiv \Delta \mu \ll \mu _2\equiv \mu $ (i.e., the interaction between nearly identical solitons, provided that $s_1=s_2$). In this case, following , we assume that, in terms of the polar coordinates $(r,\theta )$ with the origin at the center of the heavier (first) soliton, the main contribution to $H_{\mathrm{int}}$ comes from the distances $\mu ^{-1}\ll r\ll R$, where *both* solitons may be approximated by the asymptotic expressions (7). Then, it is straightforward to obtain the following expression corresponding to the first term in (3) (cf. the corresponding expressions and Fig. 1 in ):

$$U_{\mathrm{int}}\approx -\beta A_{s_1}^2(\mu )A_{s_2}^2(\mu )R^{-1}\int_0^{\infty }r\,dr\int_0^{2\pi }d\theta \,r^{-1}\exp \left[-2(\mu +\Delta \mu )r-2\mu \sqrt{(R+r\cos \theta )^2+r^2\sin ^2\theta }\,\right].$$ (9)

Here, the small difference $\Delta \mu $, and the difference between $R$ and the exact distance from an integration point $(r,\theta )$ to the center of the second (lighter) soliton, $\sqrt{(R+r\cos \theta )^2+r^2\sin ^2\theta }$, are neglected everywhere except in the argument of the exponential function. Making use of the expansion $$\sqrt{(R+r\cos \theta )^2+r^2\sin ^2\theta }=R+r\cos \theta +\dots ,$$ (10) and of the formula $\int_0^{2\pi }\exp \left(-2\mu r\cos \theta \right)d\theta \approx \sqrt{\pi /\mu r}\,\exp \left(2\mu r\right)$, valid for $\mu r\gg 1$, we can simplify the integral (9) to the form

$$H_{\mathrm{int}}\approx -\beta A_{s_1}^2(\mu )A_{s_2}^2(\mu )\sqrt{\pi /\mu }\,R^{-1}\exp (-2\mu R)\int_0^{\infty }r^{-1/2}\exp \left[-2(\Delta \mu )r\right]dr=-\pi \beta \left(2\mu \Delta \mu \right)^{-1/2}A_{s_1}^2(\mu )A_{s_2}^2(\mu )R^{-1}\exp (-2\mu R).$$ (11)

For small $\Delta \mu $, the integral in (11) is dominated by the contribution from the region $r\sim 1/\Delta \mu \gg \mu ^{-1}$, which justifies the use of the asymptotic approximation (7) for the field $u(r)$. The contribution from the second term in the full expression (3), evaluated in the same approximation, demonstrates the same dependence on the separation $R$, but without the large multiplier $(\Delta \mu )^{-1/2}$; therefore it is only a small correction to (11).

## 3 ATTRACTION BETWEEN IDENTICAL TWO-DIMENSIONAL SOLITONS

The expression (11) diverges in the most interesting case $\Delta \mu =0$, which corresponds to the interaction between identical solitons (provided that $s_1=s_2\equiv s$).
The divergence suggests that the region dominating the interaction potential is not that around the soliton, as was the case both in the previous section for $\Delta \mu \ne 0$ and, for identical solitons, in the single-mode system , but a wider region *between* the two solitons. To calculate $U_{\mathrm{int}}$ in this case, we use the Cartesian coordinates $(x,y)$ defined so that the centers of the two solitons are placed at the points $(\pm R/2,0)$. Then, using once again the asymptotic expressions (7) for both $u$- and $v$-solitons with $\mu _1=\mu _2\equiv \mu $, the interaction potential corresponding to the first term in Eq. (3) is given, after obvious transformations, by

$$H_{\mathrm{int}}=-\beta A_s^4\int d\xi \int d\eta \left[\left(\xi ^2+\eta ^2+1/4\right)^2-\xi ^2\right]^{-1/2}\exp \left[-2\mu R\left(\sqrt{\left(\xi +1/2\right)^2+\eta ^2}+\sqrt{\left(\xi -1/2\right)^2+\eta ^2}\right)\right],$$ (12)

where $\xi \equiv x/R$ and $\eta \equiv y/R$. Because the parameter $\mu R$ is large according to the underlying assumption, the integral (12) is dominated by the contribution from a vicinity of the points where the argument of the exponential function has a maximum. An elementary analysis shows that the maximum is attained not at isolated points, but rather on the whole segment $|\xi |<1/2$, $\eta =0$. Expanding the integrand in small $\eta ^2$ in a vicinity of this segment, we approximate Eq. (12) by an integral that can be easily calculated:

$$H_{\mathrm{int}}=-\beta A_s^4\int_{-1/2}^{+1/2}d\xi \left(\frac{1}{4}-\xi ^2\right)^{-1}\int_{-\infty }^{+\infty }d\eta \,\exp \left[-2\mu R\left(1+\frac{1}{2}\frac{\eta ^2}{1/4-\xi ^2}\right)\right]=-\beta A_s^4\sqrt{\frac{\pi ^3}{\mu R}}\exp (-2\mu R).$$ (13)

Comparing this result with the result (11) for solitons with different masses, we conclude that the divergence of the latter expression at $\Delta \mu \to 0$ implies the replacement of the pre-exponential factor $R^{-1}$ by a larger one, $R^{-1/2}$. Evaluating the second term in (3) in the same approximation, we conclude that it yields an expression differing from (13) just by a factor $R^{-1}$ instead of $R^{-1/2}$, i.e., only a small correction to (13). Note that, unlike the interaction between heavy and light solitons, which may have either sign, nearly identical or identical solitons always attract each other, cf. Eqs. (8), (11), and (13).

## 4 ATTRACTION BETWEEN THREE-DIMENSIONAL SOLITONS

In the 3D case, we consider only solitons without the internal "spin", i.e., with $s=0$. The 3D soliton has the form $u=\exp (i\mu ^2t)a(r)$, with the asymptotic form $a(r)\approx Ar^{-1}\exp (-\mu r)$ at $r\to \infty $, cf. Eqs. (6) and (7). Substitution of this asymptotic expression for the fields $u$ and $v$ into the integral in the first term of Eq. (3) around each soliton (cf. Eq. (9)), using the expansion (10), yields $$H_{\mathrm{int}}=-4\pi \beta A^4R^{-2}\int_0^{\infty }r^2dr\int_0^{\pi }\sin \theta \,d\theta \,r^{-2}\exp \left[-2\mu r-2\mu \left(R+r\cos \theta \right)\right],$$ where an extra factor $2$ takes into account the fact that, in the case of identical solitons, one has equal contributions from the vicinities of both solitons.
After elementary integration over $\theta $, we arrive at an expression containing a formal logarithmic singularity, $$H_{\mathrm{int}}=-2\pi \beta A^4\mu ^{-1}R^{-2}\exp \left(-2\mu R\right)\int_0^{\infty }r^{-1}dr.$$ (14) Actually, the lower and upper limits of the integration are, respectively, $\mu ^{-1}$ and $R$, so that, with *logarithmic accuracy*, the final expression for the effective interaction potential in the 3D case becomes (cf. (13)) $$H_{\mathrm{int}}=-2\pi \beta A^4\mu ^{-1}R^{-2}\exp \left(-2\mu R\right)\ln (\mu R).$$ (15) As for the contribution from the second term in (3), it has the same dependence on $R$ as (15), but without the large logarithmic factor, so that in this case too it is only a small correction. The above analysis was done for identical solitons. If the solitons have a small mass difference, corresponding to a small difference $\Delta \mu $, the interaction potential is given by essentially the same expression (15), except for the factor of $2$, which is absent if $\Delta \mu R\gtrsim 1$, when the calculation of $H_{\mathrm{int}}$ is dominated by the vicinity of one soliton only.

## 5 CONCLUSION

In this work, we have derived analytical expressions for an effective potential of the interaction between two- and three-dimensional solitons (including the case of two-dimensional vortex solitons) belonging to two different modes which are incoherently coupled through cross-phase modulation, in models of media with self-focusing cubic and self-defocusing quintic nonlinearities. The derivation was based on the calculation of the interaction term in the full Hamiltonian of the system. An essential peculiarity is that, in the 3D case, as well as in the case of 2D solitons with unequal masses, the main contribution to the interaction potential originates from the vicinity of one or both solitons, similarly to what was recently found in the 2D and 3D single-mode systems , while in the case of identical 2D solitons the dominating area covers all the space between the solitons. Except for the case of the interaction between light and heavy solitons, which may have either sign, the solitons always attract each other. The attraction between the solitons may give rise to their orbiting bound states in the 2D and 3D cases (in the latter case, it is assumed that the two solitons move in one plane). Orbiting of incoherently interacting 2D solitons was experimentally observed in a photorefractive medium . Numerical simulations and analytical arguments presented in and demonstrate that the orbiting bound states of the 2D solitons in the single-mode systems, including the SHG model, are unstable. However, it was recently pointed out in that they might be stable in the bimodal system. Indeed, for the orbiting state, the interaction potential (11), (13), or (15) must be supplemented by the centrifugal energy $E_{\mathrm{cf}}=\left(M^2/2m\right)R^{-2}$, where $M$ is the angular momentum of the soliton pair, and $m$ is the soliton's effective mass. A straightforward consideration of the net potential, $H_{\mathrm{int}}+E_{\mathrm{cf}}$, demonstrates that it may have two stationary points, the one corresponding to the smaller value of $R$ being a potential *minimum* that gives rise to a stable orbiting state.
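The two stationary points are easy to exhibit numerically. The sketch below (our construction) evaluates the net potential for the identical-2D case, eq. (13) plus $E_{\mathrm{cf}}$. The parameter values are purely illustrative, and the asymptotic formula is stretched beyond its strict domain of validity ($\mu R\gg 1$) merely to display the structure:

```python
import numpy as np

mu = 1.0
C = 1.0            # shorthand for beta * A_s^4 * sqrt(pi^3 / mu)
L = 0.1            # shorthand for M^2 / (2m)

R = np.linspace(0.2, 8.0, 100_000)
V = -C * np.exp(-2 * mu * R) / np.sqrt(R) + L / R**2
sign_changes = np.where(np.diff(np.sign(np.gradient(V, R))) != 0)[0]
for j in sign_changes:
    kind = "minimum (stable orbit)" if V[j] < V[j + 50] else "maximum"
    print(f"stationary point near R = {R[j]:.2f}: {kind}")
```

As expected, the inner stationary point is the minimum supporting a stable orbit, while the outer one is the barrier through which an orbiting pair could escape.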
The instability of similar states in the single-mode systems is due to the fact that, in those systems, the interaction potential also depends on the phase difference between the solitons, the effective mass corresponding to the phase-difference degree of freedom being *negative* . This, eventually, made the existence of stable stationary points of the effective interaction potential impossible. This work has been supported by INTAS under grant No. 96-0339.
# HADRONS IN THE NUCLEAR MEDIUM
*Work supported by BMBF and DFG.*

## 1 Introduction

The investigation of so-called in-medium properties of hadrons has found widespread interest during the last decade. This interest was triggered by two aspects. The first aspect that triggered an interest in the investigation of in-medium properties of hadrons was a QCD sum-rule based prediction by Hatsuda and Lee in 1992 that the masses of vector mesons should drop dramatically as a function of nuclear density. It was widely felt that an experimental verification of this prediction would establish a long-sought direct link between quark degrees of freedom and nuclear hadronic interactions. In the same category fall the predictions of Brown and Rho , who argued for a general scaling of hadron masses with density. The second aspect is that even in ultrarelativistic heavy-ion reactions, searching for observables of a quark-gluon plasma phase of nuclear matter, inevitably also many relatively low-energy ($\sqrt{s}\approx 2$–$4\,\text{GeV}$) final-state interactions take place. These interactions involve, for example, collisions between many mesons, for which the cross sections and meson self-energies in the nuclear medium are not known but may influence the interpretation of the experimental results. Hadron properties in medium involve the masses, widths and coupling strengths of these hadrons. In lowest order in the nuclear density all of these are linked by the $t\rho $ approximation, which assumes that the interaction of a hadron with many nucleons is simply given by the elementary $t$-matrix of the hadron-nucleon interaction multiplied by the nuclear density $\rho $. For vector mesons this approximation reads $$\Pi _\mathrm{V}=-4\pi f_{\mathrm{VN}}(0)\rho $$ (1) where $f_{\mathrm{VN}}$ is the forward-scattering amplitude of the vector meson (V) nucleon (N) interaction. The approximation (1) is good for low densities ($\Pi _\mathrm{V}$ is linear in $\rho $) and/or large relative momenta, where the vector meson 'sees' only one nucleon at a time. Relation (1) also neglects the Fermi motion of the nucleons, although this could easily be included. Simple collision theory then gives the shift of the mass and width of a meson in nuclear matter as $$\delta m_\mathrm{V}=-\gamma v\sigma _{\mathrm{VN}}\eta \rho ,\qquad \delta \Gamma _\mathrm{V}=\gamma v\sigma _{\mathrm{VN}}\rho .$$ (2) Here, according to the optical theorem, $\eta $ is given by the ratio of the real to the imaginary part of the forward-scattering amplitude $$\eta =\frac{\mathrm{Re}\,f_{\mathrm{VN}}(0)}{\mathrm{Im}\,f_{\mathrm{VN}}(0)}.$$ (3) The expressions (2) are interesting since an experimental observation of these mass and width changes could give valuable information on the free cross sections $\sigma _{\mathrm{VN}}$, which may not be available otherwise. The more fundamental question, however, is whether there is more to in-medium properties than just the simple collisional broadening predictions of (1).
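To get a feeling for the magnitudes involved, eq. (2) can be evaluated with typical numbers; the cross section and the momentum used below are illustrative assumptions, not fitted values:

```python
HBARC = 0.1973     # GeV fm

m_rho = 0.770      # GeV
p = 1.0            # GeV, rho momentum in the nuclear rest frame (assumed)
sigma = 2.5        # fm^2 (= 25 mb), assumed rho-N total cross section
rho0 = 0.16        # fm^-3, nuclear saturation density

gamma_v = p / m_rho                        # gamma * v for a relativistic particle
dGamma = gamma_v * sigma * rho0 * HBARC    # collision width in GeV
print(f"delta Gamma ~ {1000 * dGamma:.0f} MeV at rho_0")   # ~100 MeV
```

Even at saturation density, these assumed values yield a collisional broadening of the same order as the free $\rho $ width, which already hints at the broad in-medium spectral functions discussed below.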
## 2 Fundamentals of Dilepton Production

From QED it is well known that vacuum polarization, i.e. the virtual excitation of electron-positron pairs, can dress the photon. Because the quarks are charged, quark-antiquark loops can also dress the photon. These virtual quark-antiquark pairs have to carry the quantum numbers of the photon, i.e. $J^\pi =1^-$. The $q\overline{q}$ pairs can thus be viewed as vector mesons, which have the same quantum numbers; this is the basis of Vector Meson Dominance (VMD). The vacuum polarization tensor is then, in complete analogy to QED, given by $$\Pi ^{\mu \nu }=\int d^4x\,e^{iqx}\langle 0|T[j^\mu (x)j^\nu (0)]|0\rangle =\left(g^{\mu \nu }-\frac{q^\mu q^\nu }{q^2}\right)\Pi (q^2)$$ (4) where $T$ is the time-ordering operator. Here, in the second equality, the tensor structure has been exhibited explicitly. This so-called current-current correlator contains the currents $j^\mu $ with the correct charges of the vector mesons in question. Simple VMD relates these currents to the vector meson fields $$j^\mu (x)=\frac{(m_{\mathrm{V}}^{0})^2}{g_\mathrm{V}}V^\mu (x).$$ (5) Using this equation one immediately sees that the current-current correlator (4) is nothing else but the vector meson propagator $D_\mathrm{V}$ $$\Pi (q^2)=\left(\frac{(m_{\mathrm{V}}^{0})^2}{g_\mathrm{V}}\right)^2D_\mathrm{V}(q^2).$$ (6) The scalar part of the vector meson propagator is given by $$D_\mathrm{V}(q^2)=\frac{1}{q^2-(m_{\mathrm{V}}^{0})^2-\Pi _\mathrm{V}(q^2)}.$$ (7) Here $\Pi _\mathrm{V}$ is the self-energy of the vector meson. For the free $\rho $ meson, information about $\Pi (q^2)$ can be obtained from hadron production in $e^+e^{-}$ annihilation reactions $$R(s)=\frac{\sigma (e^+e^{-}\to \text{hadrons})}{\sigma (e^+e^{-}\to \mu ^+\mu ^{-})}=\frac{12\pi }{s}\,\mathrm{Im}\,\Pi (s)$$ (8) with $s=q^2$. This determines the imaginary part of $\Pi $ and, invoking vector meson dominance, also of $\Pi _\mathrm{V}$. The data (see, e.g., Fig. 18.8 in , or Fig. 1 in ) clearly show at small $\sqrt{s}$ the vector meson peaks, followed by a flat plateau, starting at $\sqrt{s}\approx 1.5\,\text{GeV}$, that is described by perturbative QCD. In order to get the in-medium properties of the vector mesons, i.e. their self-energy $\Pi _\mathrm{V}$, we now have two ways to proceed: We can, *first*, try to determine the current-current correlator by using QCD sum rules ; from this correlator we can then determine the self-energy of the vector meson following eqs. (6), (7). The *second* approach consists in setting up a hadronic model and calculating the self-energy of the vector meson by simply dressing its propagators with appropriate hadronic loops. In the following sections I will discuss both of these approaches.

### 2.1 QCD sum rules and in-medium masses

The QCD sum rule for the current-current correlator is obtained by evaluating the function $R(s)$, and thus $\mathrm{Im}\,\Pi (s)$ (see (8)), in a hadronic model on one hand and in a QCD-based model on the other. The latter, QCD-based, calculation uses the fact that the current-current correlator (4) can be Taylor expanded in the space-time distance $x$ for small space-like distances between $x$ and $0$; this is nothing else than the Operator Product Expansion (OPE) . In this way we obtain for the free meson $$R^{\mathrm{OPE}}(M^2)=\frac{1}{8\pi ^2}\left(1+\frac{\alpha _\mathrm{S}}{\pi }\right)+\frac{1}{M^4}\langle m_\mathrm{q}\overline{q}q\rangle +\frac{1}{24M^4}\left\langle \frac{\alpha _\mathrm{S}}{\pi }G^2\right\rangle -\frac{56}{81M^6}\pi \alpha _\mathrm{S}\kappa \langle \overline{q}q\rangle ^2.$$ (9) Here $M$ denotes the so-called Borel mass. The expectation values appearing here are the quark and gluon condensates. The last term contains the mean-field approximation $$\langle (\overline{q}q)^2\rangle =\kappa \langle \overline{q}q\rangle ^2.$$ (10)
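For orientation, the relative size of the terms in (9) can be checked numerically. The sketch below (our construction) uses commonly quoted vacuum condensate values, which are assumptions here, not values taken from the analyses cited in this text:

```python
import math

alpha_s = 0.36                             # assumed
mq_qq = -0.5 * 0.093**2 * 0.138**2         # <m_q qbar q> ~ -f_pi^2 m_pi^2 / 2, GeV^4
G2 = 0.012                                 # <(alpha_s/pi) G^2>, GeV^4, assumed
qq = (-0.25)**3                            # <qbar q> = (-250 MeV)^3, GeV^3, assumed
kappa = 2.4                                # assumed

def R_ope(M2):                             # M2 = Borel mass squared, GeV^2
    return (1.0 / (8 * math.pi**2) * (1 + alpha_s / math.pi)
            + mq_qq / M2**2
            + G2 / (24 * M2**2)
            - (56.0 / 81.0) * math.pi * alpha_s * kappa * qq**2 / M2**3)

for M2 in (0.6, 1.0, 2.0):
    print(f"M^2 = {M2:.1f} GeV^2 : R_OPE = {R_ope(M2):.5f}")
```

For Borel masses around 1 GeV the power corrections are at the few-percent to ten-percent level relative to the perturbative term, which is what makes the Borel window analysis meaningful.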
The last term in eq. (9) contains the mean-field approximation

$$\langle (\overline{q}q)^2\rangle =\kappa \langle \overline{q}q\rangle ^2.$$ (10)

The other representation of $`R`$ in the space-like region can be obtained by analytically continuing $`\mathrm{Im}\,\mathrm{\Pi }(s)`$ from the time-like to the space-like region by means of a twice-subtracted dispersion relation. This finally gives

$$R^{\mathrm{HAD}}(M^2)=\frac{\mathrm{Im}\,\mathrm{\Pi }^{\mathrm{HAD}}(0)}{M^2}+\frac{1}{\pi M^2}\int _0^{\infty }ds\,\mathrm{Im}\,\mathrm{\Pi }^{\mathrm{HAD}}(s)\frac{s}{s^2+ϵ^2}e^{-s/M^2}.$$ (11)

Here $`\mathrm{\Pi }^{\mathrm{HAD}}`$ represents a phenomenological hadronic spectral function. Since for the vector mesons this spectral function is dominated by resonances in its low-energy part, it is usually parametrized in terms of a resonance part at low energies, with parameters such as strength, mass and width, connected to the perturbative QCD result for the quark structure of the current-current correlator at higher energies (for details see the contribution by S. Leupold et al. in these proceedings and refs. ). The QCD sum rule is then obtained by setting

$$R^{\mathrm{OPE}}(M^2)=R^{\mathrm{HAD}}(M^2).$$ (12)

Knowing the lhs of this equation then allows one to determine the parameters of the spectral function appearing in $`R^{\mathrm{HAD}}`$ on the rhs. If the vector meson moves in the nuclear medium, then $`R`$ depends also on its momentum. However, detailed studies find only a very weak momentum dependence.

The first applications of the QCDSR have used a simplified spectral function, represented by a $`\delta `$-function at the meson mass and a perturbative QCD continuum starting at about $`s\approx 1.2`$ GeV. Such an analysis gives a value for the free meson mass that agrees with experiment. On this basis the QCDSR has been applied to the prediction of in-medium masses of vector mesons by making the condensates density-dependent (for details see ). This then leads to a lowering of the vector meson mass in nuclear matter. This analysis has recently been repeated with a spectral function that uses a Breit-Wigner parametrization with finite width. In this study it turns out that QCD sum rules are compatible with a wide range of masses and widths (see also ref. ). Only if the width is artificially kept at zero does the mass of the vector meson have to drop with nuclear density . However, the opposite scenario, i.e. a significant broadening of the meson at nearly constant pole position, is also compatible with the QCDSR.

### 2.2 Hadronic models

Hadronic models for the in-medium properties of hadrons start from known interactions between the hadrons and the nucleons. In principle, these then allow one to calculate, for example, the forward scattering amplitude $`f_{\mathrm{VN}}`$ for vector meson-nucleon interactions. Many such models have been developed over the last few years . The model of Friman and Pirner was taken up by Peters et al., who also included $`s`$-wave nucleon resonances. It turned out that in this analysis the $`D_{13}`$ $`N(1520)`$ resonance plays an overwhelming role. This resonance has a significant $`\rho `$ decay branch of about 20%. Since at the pole energy of 1520 MeV the $`\rho `$ decay channel is not yet energetically open, this decay can only take place through the tails of the mass distributions of resonance and meson. The relatively large decay branch then translates into a very strong $`N^{*}N\rho `$ coupling constant (see also ).
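Returning briefly to the sum-rule side: the OPE expression (9) is simple to evaluate numerically, as the sketch below does over a range of Borel masses. The condensate values and the factorization parameter κ are standard vacuum numbers inserted here as assumptions, not values quoted in this talk.

```python
import numpy as np

# Vacuum condensates in GeV units (standard textbook values; assumptions):
ALPHA_S = 0.36          # strong coupling at the rho scale
MQQQ    = -8.2e-5       # m_q <qbar q> (GeV^4), from the GMOR relation
G2      = 0.012         # <(alpha_s/pi) G^2> (GeV^4)
QQ2     = (-0.245)**6   # <qbar q>^2 (GeV^6), with <qbar q> = (-245 MeV)^3
KAPPA   = 2.36          # four-quark factorization parameter (assumption)

def r_ope(m2):
    """OPE side of the sum rule, eq. (9), as a function of Borel mass^2."""
    return (1.0 / (8 * np.pi**2) * (1 + ALPHA_S / np.pi)
            + MQQQ / m2**2
            + G2 / (24 * m2**2)
            - 56.0 / 81.0 * np.pi * ALPHA_S * KAPPA * QQ2 / m2**3)

# The sum rule is trusted in a Borel window where power corrections stay small:
for m2 in (0.6, 0.8, 1.0, 1.5):
    print(f"M^2 = {m2:.1f} GeV^2 : R_OPE = {r_ope(m2):.5f}")
```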
The main result of the $`N^{*}h`$ model just described for the $`\rho `$ spectral function is a considerable broadening of the latter. This is primarily so for the transverse vector mesons (see Fig. 1), whereas the longitudinal degree of freedom gets only a little broader, with a slight downward shift of strength (see Fig. 2). The results shown in Fig. 1 actually go beyond the simple “$`t\rho `$” approximation discussed earlier (see (1)) in that they contain higher-order density effects: a lowering of the $`\rho `$ meson strength leads to a strong increase of the phase-space available for the decay of the $`N(1520)`$ resonance; this causes a corresponding strong increase of the $`N(1520)`$ $`\rho `$-decay width, which in turn affects the spectral function. The result is the very broad, rather featureless spectral function for the transverse $`\rho `$ shown in Fig. 1.

## 3 Experimental Observables

In this section I will now discuss various possibilities to verify experimentally the predicted changes of the $`\rho `$ meson properties in medium.

### 3.1 Heavy-Ion Reactions

Early work on an experimental verification of the predicted broadening of the $`\rho `$ meson spectral function has concentrated on the dilepton spectra measured at relativistic energies (about 1 – 4 A GeV) at the BEVALAC, whereas more recently many analyses have been performed for the CERES and HELIOS data obtained at ultrarelativistic energies (150 – 200 A GeV) at the SPS. In such collisions nuclear densities of about 2–3 $`\rho _0`$ can already be reached in the relativistic domain; in the ultrarelativistic energy range baryon densities of up to $`10\rho _0`$ are predicted (for a recent review see ). Since the self-energies of produced vector mesons are at least proportional to the density $`\rho `$ (see (1)), heavy-ion reactions seem to offer a natural enhancement factor for any in-medium changes.

The CERES data indeed seem to confirm this expectation (see Fig. 3). The calculation shown on the left demonstrates that a large part of the CERES dilepton yield can already be explained by simple secondary reactions, mainly by $`\pi ^+\pi ^{-}\to e^+e^{-}`$, because of the known high pion multiplicity in such high-energy heavy-ion reactions. Even then, however, some yield is still missing. The present situation is – independent of the special model used for the description of the data – that agreement with the measured dilepton mass spectrum in the mass range between about 300 and 700 MeV for the 200 A GeV $`S+Au`$ and $`S+W`$ reactions can only be obtained if $`\rho `$-meson strength is shifted downwards (for a more detailed discussion see ) (for the recently measured 158 A GeV $`Pb+Au`$ reaction the situation is not so clear; here the calculations employing ‘free’ hadron properties lie at the lower end of the experimental error bars ).

However, all the predictions are based on equilibrium models in which the properties of a $`\rho `$ meson embedded in nuclear matter with infinite space-time extensions are calculated. An ultrarelativistic heavy-ion collision is certainly far away from this idealized scenario. In addition, heavy-ion collisions necessarily average over the polarization degrees of freedom. The two physically quite different scenarios, broadening the spectral function or simply shifting the $`\rho `$ meson mass downwards while keeping its free width, thus lead to indistinguishable observable consequences in such collisions.
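A toy calculation illustrates why the two scenarios look alike in the 300–700 MeV window: both a broadened and a mass-shifted ρ move spectral strength into that region. The Breit-Wigner spectral function, the constant widths, the assumed fireball temperature and the specific in-medium parameters below are all illustrative assumptions.

```python
import numpy as np

def spectral(m, pole, width):
    """Schematic Breit-Wigner rho spectral function (constant width)."""
    return m * width / ((m**2 - pole**2)**2 + (m * width)**2)

def lowmass_strength(pole, width, t_fireball=0.15):
    """Spectral strength integrated over 0.3-0.7 GeV, weighted with a
    Boltzmann factor exp(-M/T) as a crude stand-in for the fireball."""
    m = np.arange(0.30, 0.70, 0.001)
    return np.trapz(spectral(m, pole, width) * np.exp(-m / t_fireball), m)

ref = lowmass_strength(0.775, 0.149)                 # vacuum rho
for label, pole, width in (("broadened rho", 0.775, 0.40),
                           ("dropped mass ", 0.60, 0.149)):
    r = lowmass_strength(pole, width) / ref
    print(f"{label}: low-mass dilepton strength x {r:.1f} vs. vacuum")
```

Both modifications enhance the low-mass yield; an inclusive, polarization-averaged spectrum therefore cannot easily separate them.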
That these two scenarios cannot be separated can be understood by observing that even in an ultrarelativistic heavy-ion collision, in which very high baryonic densities are reached, a large part of the observed dileptons is actually produced at rather low densities (see Fig. 3 in ).

## 4 $`\pi +A`$ Reactions

Motivated by this observation we have performed calculations of the dilepton invariant mass spectra in $`\pi ^{-}`$ induced reactions on nuclei ; the experimental study of such reactions will be possible in the near future at GSI. The calculations are based on a semiclassical transport theory, the so-called Coupled Channel BUU method (for details see ), in which the nucleons, their resonances up to 2 GeV mass and the relevant mesons are propagated from the initial contact of projectile and target until the final stage of the collision. This method allows one to describe strongly-coupled, inclusive processes without any a priori assumption on the equilibrium or preequilibrium nature of the process. Results of these calculations are being published in . Since then, we have improved the calculations in several aspects (see also ). First, we now also include processes like $`N^{*}\to \mathrm{\Delta }\rho `$ in the $`\rho `$ production channel. We find that this channel yields sizeable contributions to the low-mass $`\rho `$ yield coming from the decay of rather high-lying ($`\approx 1.9`$ GeV) nucleon resonances. Second, we now use a more sophisticated treatment of broad resonances (i.e. the $`\rho `$) that resembles the one generally used for the treatment, e.g., of the $`\mathrm{\Delta }`$ in transport calculations . Instead of producing the vector mesons only at their pole mass, as in , and folding in the phase-space weighted spectral function, we now produce these particles already with the proper spectral function and thus also propagate mesons on the tail of the mass distribution. This improvement, however, creates new problems: in these calculations very low mass $`\rho `$ and $`\omega `$ mesons sometimes reach the nuclear surface and escape, thus leading to a peak in the dilepton spectrum at very low masses. (The structures in Fig. 4 below the vector meson mass are due to statistical fluctuations and not to this effect.) We cure this problem at present only heuristically, by including a density-dependent self-energy that ensures that all physical particles freeze out with their vacuum spectral distribution. And finally, we now use a VMD-mandated $`1/M^3`$ weighting for the dilepton spectra, which is consistent for calculations employing free hadron properties.

In these reactions the dominant dilepton emission channels are the same as in ultrarelativistic heavy-ion collisions; this can be seen in Fig. 4, where I show the results for the dilepton spectra produced by bombarding Pb nuclei with 1.3 GeV pions. Up to about 400 MeV invariant mass the strongest component is given by the $`\eta `$ Dalitz decay, where the $`\eta `$’s are produced through the experimentally rather well known process $`\pi N\to \eta N`$. Very close in magnitude in this lower mass range is the somewhat model-dependent $`\pi N`$ bremsstrahlung component (not shown here). In the vector meson mass range both the $`\rho `$ and the $`\omega `$ contribute to the dilepton yield. First, still preliminary, calculations indicate that the in-medium changes in the dilepton mass spectrum are of the order of a factor 2 in the mass range between 300 MeV and 700 MeV . There are various interesting problems in this process that ultimately have to be answered by experiment; they are taken up below, after a short illustration of the mass-sampling step just described.
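The mass-sampling improvement can be illustrated in a few lines: a ρ mass is drawn by rejection sampling from a (vacuum, schematic) spectral function, which is one simple way a transport code can populate the tails of the mass distribution. The Breit-Wigner form is again an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
M_RHO, G_RHO, M_PI = 0.775, 0.149, 0.140  # GeV

def spectral(m):
    """Schematic Breit-Wigner rho spectral function (unnormalized)."""
    if m <= 2 * M_PI:
        return 0.0
    return m**2 * G_RHO / ((m**2 - M_RHO**2)**2 + (m * G_RHO)**2)

def sample_mass(m_lo=2 * M_PI, m_hi=1.5):
    """Draw one rho mass by rejection sampling from the spectral function."""
    a_max = spectral(M_RHO)   # for this form the maximum lies at the pole
    while True:
        m = rng.uniform(m_lo, m_hi)
        if rng.uniform(0.0, a_max) < spectral(m):
            return m

masses = np.array([sample_mass() for _ in range(20000)])
print(f"mean sampled mass = {masses.mean():.3f} GeV, "
      f"fraction below 0.6 GeV = {(masses < 0.6).mean():.2%}")
```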
First, the $`\pi N`$ bremsstrahlung is quite uncertain. Parts of it are included in a calculation that includes processes like $`\pi +N\to N^{*}\to Ne^+e^{-}`$ ($`s`$-channel contributions), but $`t`$-channel processes for $`\pi N`$ scattering are not so easy to handle, because the frequently used long-wavelength approximation is known to be quite unreliable . An exclusive measurement of pion-induced dilepton emission from the proton would be highly desirable to investigate this point. A further interesting problem is how to determine the branching ratios of vector mesons into the dilepton channel. Simple VMD relates the coupling strength to the bare (pole) mass $`m_\rho `$ of the meson, so that one obtains $`\mathrm{\Gamma }_{\rho \to e^+e^{-}}(M)\propto 1/M^3`$. However, in medium the vector meson is strongly coupled, e.g. to $`N^{*}h`$ excitations, which may lead to mass-dependent vertex corrections.

## 5 Photonuclear Reactions

Pion induced reactions have the disadvantage that the projectile already experiences strong initial state interactions, so that many produced vector mesons are located in the surface where the densities are low. A projectile that is free of this undesirable behavior is the photon.

### 5.1 Dilepton Production

We have therefore – also in view of a corresponding proposal for such an experiment at CEBAF – performed calculations for the dilepton yield expected from $`\gamma +A`$ collisions. Results of these calculations are shown in Fig. 5. In the top figure the various sources of dilepton radiation are shown. The dominant sources are again the same as those in pion- and heavy-ion induced reactions, but the (uncertain) $`\pi N`$ bremsstrahlung does not contribute in this reaction. The middle part of this figure shows both the Bethe-Heitler (BH) contribution and the contribution from all the hadronic sources. In the lowest (dot-dashed) curve we have chosen a cut on the product of the four-momenta of incoming photon ($`k`$) and lepton ($`p`$) in order to suppress the BH contribution. It is seen that even without BH subtraction the vector meson signal surpasses that of the BH process. The lowest figure, finally, shows the expected in-medium effects: the sensitivity in the region between about 300 and 700 MeV amounts to a factor of about 3 and is thus of the same order of magnitude as in the ultrarelativistic heavy-ion collisions. In addition, the calculated strong differences in the in-medium properties of longitudinal and transverse vector mesons can probably only be verified in photon-induced reactions, where the incoming polarization can be controlled; this is true also for dilepton production with virtual, spacelike photons from inelastic electron scattering. Another approach would be to measure the coherent photoproduction of vector mesons; here the first calculation available so far shows a distinct difference in the production cross sections of transverse and longitudinal vector mesons.

### 5.2 Photoabsorption

Earlier in this paper I have discussed that a strong change of the $`\rho `$ meson properties comes about because of its coupling to $`N^{*}h`$ excitations and that this coupling – through a higher-order effect – in particular leads to a very strong increase of the $`\rho `$ decay width of the $`N(1520)D_{13}`$ resonance. This increase may provide a reason for the observed disappearance of the higher nucleon resonances in the photoabsorption cross sections on nuclei .
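The phase-space argument can be made semi-quantitative with a small numerical experiment: fold a schematic ρ spectral function with the s-wave Nρ phase space open below the N(1520) pole and compare a vacuum-like ρ with a broadened one. The Breit-Wigner forms and the assumed in-medium width of 400 MeV are illustrative choices, not the coupled-channel model referred to in the text.

```python
import numpy as np

M_N, M_NSTAR, M_PI = 0.938, 1.520, 0.140   # GeV

def spectral(m, pole=0.775, width=0.149):
    """Schematic Breit-Wigner rho spectral function."""
    return m * width / ((m**2 - pole**2)**2 + (m * width)**2)

def p_cm(m_tot, m1, m2):
    """c.m. momentum of a two-body state (m1, m2) at total mass m_tot."""
    s = m_tot**2
    lam = (s - (m1 + m2)**2) * (s - (m1 - m2)**2)
    return np.sqrt(max(lam, 0.0)) / (2 * m_tot)

def open_strength(width):
    """Rho strength reachable in N(1520) -> N rho (s-wave): the spectral
    function folded with p_cm below m = M* - M_N, normalized to the
    total strength up to 1.5 GeV."""
    grid = np.arange(2 * M_PI, 1.5, 0.001)
    norm = np.trapz(spectral(grid, width=width), grid)
    sub = grid[grid < M_NSTAR - M_N]
    w = [spectral(m, width=width) * p_cm(M_NSTAR, M_N, m) for m in sub]
    return np.trapz(w, sub) / norm

for w in (0.149, 0.40):   # vacuum vs. assumed in-medium width
    print(f"rho width {w*1e3:.0f} MeV -> relative N(1520) decay "
          f"strength {open_strength(w):.3f}")
```

The broadened spectral function puts markedly more strength below the N ρ threshold, which is the mechanism behind the increased N(1520) width.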
The photoabsorption cross sections on nuclei scale very well with the mass number $`A`$ of the nucleus, which indicates a single-nucleon phenomenon. They also clearly show a complete absence of the resonance structure in the second and third resonance region. The disappearance in the third region is easily explainable as an effect of the Fermi motion. The disappearance of the second resonance region, i.e. in particular of the $`N(1520)`$ resonance, however, presents an interesting problem; it is obviously a typical in-medium effect. First explanations assumed a very strong collisional broadening, but in ref. it has been shown that this broadening is not strong enough to explain the observed disappearance of the $`D_{13}`$ resonance. Since at energies around 1500 MeV the $`2\pi `$ channel also opens, it is natural to look for a possible connection with the $`\rho `$ properties in medium. Fig. 6 shows the results of such an analysis (see also ). It is clear that the opening of the phase space for the $`\rho `$ decay of this resonance provides enough broadening to explain its disappearance.

## 6 Summary

In this talk I have concentrated on a discussion of the in-medium properties of the $`\rho `$ meson. I have shown that the scattering amplitudes of the $`\rho `$ meson on nucleons determine the in-medium spectral function of the $`\rho `$, at least in lowest order in the nuclear density. The dilepton channel can give information on the properties of the $`\rho `$ deep inside nuclear matter, whereas the $`2\pi `$ decay channel – because of its strong final state interaction – can give information only about the vector meson in the low-density region.

The original QCD sum rule predictions of a lowered $`\rho `$ mass have turned out to be too naive, because they were based on the assumption of a sharp resonance state. In contrast, all specific hadronic models yield very broad spectral functions for the $`\rho `$ meson, with a distinctly different behavior of longitudinal and transverse $`\rho `$’s. Recent QCD sum rule analyses indeed do not predict a lowering of the mass, but only yield – rather wide – constraints on the mass and width of the vector mesons.

I have also discussed that hadronic models that include the coupling of the $`\rho `$ meson to nucleon resonances and a corresponding shift of vector meson strength to lower masses give a natural backreaction on the width of these resonances. In particular, the $`N(1520)D_{13}`$ resonance is significantly broadened because of its very large coupling constant to the $`\rho N`$ channel. Since the $`\rho `$ decay of this resonance has never been directly seen in an experiment, but is deduced only from one partial wave analysis of $`2\pi `$ production , it would be essential to have new, better data (and a new analysis of these) for this channel.

A large part of the unexpected surplus of dileptons produced in ultrarelativistic heavy-ion collisions can be understood by simply including secondary interactions, in this case $`\pi +\pi \to \rho `$, in the ‘cocktail plot’ of hadronic sources; this special source is one that is particular to heavy-ion collisions with their very large pion multiplicity. Since the pions are produced rather late during the collision, the gain from the high densities reached in heavy-ion collisions is not as large as one might naively have expected. Motivated by this observation I have then discussed predictions for experiments using pion and photon beams as incoming particles.
In both cases the in-medium sensitivity is nearly as large as it is in the heavy-ion experiments. In addition, such experiments have the great advantage that they take place much closer to equilibrium, an assumption on which all predictions of in-medium properties are based. Furthermore, only in such experiments will it be possible to look for polarization effects in the in-medium properties of the $`\rho `$ meson.

I have finally shown that the in-medium properties of the $`\rho `$ also show up in photonuclear reactions. One intriguing suggestion is that the observed disappearance of the second resonance region in the photoabsorption cross section is due to the broadening of the $`N(1520)`$ resonance caused by the shift of $`\rho `$ strength to lower masses.

At high energies, finally, the in-medium broadening of the $`\rho `$ leads to a mean free path of about 2 fm in nuclear matter. Shadowing will thus only be essential if the coherence length is larger than this mean free path. Increasing the four-momentum transfer $`Q^2`$ at fixed energy transfer $`\nu `$ leads to a smaller coherence length and thus diminishes the initial state interactions of the photon, leading to a larger transparency. It is essential to verify this effect experimentally; it is superimposed on the color-transparency effect that is still being searched for.

## 7 Acknowledgement

This talk is based on results of work with E. Bratkovskaya, W. Cassing, M. Effenberger, H. Lenske, S. Leupold, W. Peters, M. Post and T. Weidmann. I am grateful to all of them for many stimulating discussions. I wish to thank in particular M. Effenberger and E. Bratkovskaya for providing me with the results shown in Figs. 4 and 5 before publication.
# Large Momentum Transfer Measurements of the Deuteron Elastic Structure Function $`A(Q^2)`$ at Jefferson Laboratory

## Abstract

The deuteron elastic structure function $`A(Q^2)`$ has been extracted in the range $`0.7\le Q^2\le 6.0`$ (GeV/c)<sup>2</sup> from cross section measurements of elastic electron-deuteron scattering in coincidence using the Hall A Facility of Jefferson Laboratory. The data are compared to theoretical models based on the impulse approximation with the inclusion of meson-exchange currents, and to predictions of quark dimensional scaling and perturbative quantum chromodynamics.

Electron scattering from the deuteron has long been a crucial tool in understanding the internal structure and dynamics of the nuclear two-body system. In particular, the deuteron electromagnetic form factors, measured in elastic scattering, offer unique opportunities to test models of the short-range nucleon-nucleon interaction, meson-exchange currents and isobaric configurations, as well as the possible influence of explicit quark degrees of freedom .

The cross section for elastic electron-deuteron (e-d) scattering is described by the Rosenbluth formula:

$$\frac{d\sigma }{d\mathrm{\Omega }}=\sigma _M\left[A(Q^2)+B(Q^2)\mathrm{tan}^2(\frac{\theta }{2})\right]$$ (1)

where $`\sigma _M=\alpha ^2E^{\prime }\mathrm{cos}^2(\theta /2)/[4E^3\mathrm{sin}^4(\theta /2)]`$ is the Mott cross section. Here $`E`$ and $`E^{\prime }`$ are the incident and scattered electron energies, $`\theta `$ is the electron scattering angle, $`Q^2=4EE^{\prime }\mathrm{sin}^2(\theta /2)`$ is the four-momentum transfer squared and $`\alpha `$ is the fine structure constant. The elastic electric and magnetic structure functions $`A(Q^2)`$ and $`B(Q^2)`$ are given in terms of the charge, quadrupole and magnetic form factors $`F_C(Q^2)`$, $`F_Q(Q^2)`$, $`F_M(Q^2)`$:

$$A(Q^2)=F_C^2(Q^2)+\frac{8}{9}\tau ^2F_Q^2(Q^2)+\frac{2}{3}\tau F_M^2(Q^2)$$ (2)

$$B(Q^2)=\frac{4}{3}\tau (1+\tau )F_M^2(Q^2)$$ (3)

where $`\tau =Q^2/4M_d^2`$, with $`M_d`$ being the deuteron mass.

The interaction between the electron and the deuteron is mediated by the exchange of a virtual photon. In the non-relativistic impulse approximation (IA), where the photon is assumed to interact with one of the two nucleons in the deuteron, the deuteron form factors are described in terms of the deuteron wave function and the electromagnetic form factors of the nucleons. Theoretical calculations based on the IA approach using various nucleon-nucleon potentials and parametrizations of the nucleon form factors generally underestimate the existing $`A(Q^2)`$ data . Recent relativistic impulse approximation (RIA) calculations improve or worsen the agreement with the data depending on their particular assumptions. There are two RIA approaches: manifestly covariant calculations , and light-front dynamics . It is well known that the form factors of the deuteron are very sensitive to the presence of meson-exchange currents (MEC). There have been numerous extensive studies and calculations augmenting both the IA and RIA approaches with the inclusion of MEC. The principal uncertainties in these calculations are the poorly known value of the $`\rho \pi \gamma `$ coupling constant and the $`\rho \pi \gamma `$ vertex form factor. Some calculations also show sensitivity to the possible presence of isobar configurations in the deuteron. The inclusion of MEC brings the theory into better agreement with the existing data.
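As a practical aside, eq. (1) together with elastic kinematics fits in a few lines of Python; the structure-function values passed in below are placeholders to be supplied by the reader, not measured numbers.

```python
import numpy as np

ALPHA = 1 / 137.036
M_D = 1.8756  # GeV, deuteron mass

def ed_elastic(e, theta, A, B):
    """Elastic e-d cross section from the Rosenbluth formula, eq. (1).

    e     : beam energy in GeV
    theta : electron scattering angle in rad
    A, B  : structure functions A(Q^2), B(Q^2) supplied by the caller
    Returns (Q^2 in GeV^2, dsigma/dOmega in GeV^-2/sr).
    """
    s2 = np.sin(theta / 2)**2
    e_p = e / (1 + 2 * e / M_D * s2)            # scattered energy (recoil)
    q2 = 4 * e * e_p * s2                       # four-momentum transfer^2
    mott = ALPHA**2 * e_p * (1 - s2) / (4 * e**3 * s2**2)
    return q2, mott * (A + B * s2 / (1 - s2))   # tan^2 = s2 / (1 - s2)

# Illustrative call with assumed structure-function values (not data):
q2, xs = ed_elastic(4.0, np.radians(20.0), A=1e-7, B=0.0)
print(f"Q^2 = {q2:.2f} GeV^2, dsigma/dOmega = {xs:.3e} GeV^-2/sr")
```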
It is widely recognized that the underlying quark-gluon dynamics cannot be ignored at distances much less than the nucleon size. This has led to the formulation of so-called hybrid quark models that try to simultaneously incorporate the quark- and gluon-exchange mechanism at short distances and the meson-exchange mechanism at long and intermediate distances. When the internucleon separation is smaller than $`1`$ fm, the deuteron is treated as a six-quark configuration with a certain probability, which results in an additional contribution to the deuteron form factors.

At sufficiently large momentum transfers the form factors are expected to be calculable in terms of only quarks and gluons within the framework of quantum chromodynamics (QCD). The first attempt at a quark-gluon description of the deuteron form factors was based on quark dimensional scaling (QDS). The underlying dynamical mechanism during e-d scattering is the rescattering of the constituent quarks via the exchange of hard gluons. The $`Q^2`$ dependence of this process is then predicted by simply counting the number of hard gluon propagators (five), which implies that $`\sqrt{A(Q^2)}\propto (Q^2)^{-5}`$. This prediction was later substantiated in the framework of perturbative QCD (pQCD), where it was shown that in leading order: $`\sqrt{A(Q^2)}=[\alpha _s(Q^2)/Q^2]^5\sum _{m,n}d_{mn}[\mathrm{ln}(Q^2/\mathrm{\Lambda }^2)]^{-\gamma _n-\gamma _m}`$, where $`\alpha _s(Q^2)`$ and $`\mathrm{\Lambda }`$ are the QCD strong coupling constant and scale parameter, and $`\gamma _{m,n}`$ and $`d_{mn}`$ are QCD anomalous dimensions and constants. The existing SLAC $`A(Q^2)`$ data exhibit some evidence of this asymptotic fall-off for $`Q^2>2`$ (GeV/c)<sup>2</sup>.

The unique features of the Continuous Electron Beam Accelerator and Hall A Facilities of the Jefferson Laboratory (JLab) offered the opportunity to extend the kinematical range of $`A(Q^2)`$ and to resolve inconsistencies in previous data sets from different laboratories by measuring the elastic e-d cross section for $`0.7\le Q^2\le 6.0`$ (GeV/c)<sup>2</sup>. Electron beams of 100% duty factor were scattered off a liquid deuterium target in Hall A. Scattered electrons were detected in the electron High Resolution Spectrometer (HRSE). To suppress backgrounds and separate elastic from inelastic processes, recoil deuterons were detected in coincidence with the scattered electrons in the hadron HRS (HRSH). Elastic electron-proton (e-p) scattering in coincidence was used to calibrate this double-arm system. A schematic of the Hall A Facility as used in this experiment is shown in Figure 1.

The incident beam energy was varied between 3.2 and 4.4 GeV. The beam intensity, 5 to 120 $`\mu `$A, was monitored using two resonant cavity beam current monitors (BCM) upstream of the target system. The two cavities were frequently calibrated against a parametric current transformer monitor (Unser monitor). The beam was rastered on the target in both horizontal and vertical directions at high frequency and its position was monitored with two beam position monitors (BPM). The uncertainties in the incident beam current and energy were estimated to be $`\pm 2\%`$ and $`\pm 0.2\%`$, respectively. The target system contained liquid hydrogen and deuterium cells of length $`T`$=15 cm. Two Al foils separated by 15 cm were used to measure the contribution to the cross section from the Al end-caps of the target cells.
The liquid hydrogen(deuterium) was pressurized to 1.8(1.5) atm and pumped at high velocity ($`\sim 0.5`$ m/s) through the cells to heat exchangers. The hydrogen(deuterium) temperature was 19(22) K. This system provided a record-high luminosity of $`4.0\times 10^{38}`$ cm<sup>-2</sup>s<sup>-1</sup> ($`4.7\times 10^{38}`$ cm<sup>-2</sup>s<sup>-1</sup>) for hydrogen(deuterium). The raster system kept beam-induced density changes at a tolerable level: up to 2.5%(5.0%) at 120 $`\mu `$A for deuterium(hydrogen).

Scattered electrons were detected in the HRSE used in its standard configuration, consisting of two planes of plastic scintillators to form an “electron” trigger, a pair of drift chambers for electron track reconstruction, and a gas threshold Čerenkov counter and a segmented lead-glass calorimeter for electron identification. Recoil nuclei were detected in the HRSH using a subset of its detection system: two planes of scintillators to form a “recoil” trigger and a pair of drift chambers for recoil track reconstruction. The efficiencies of the calorimeter and Čerenkov counter were $`\sim 99.5\%`$, and those of the scintillators and tracking almost 100%, for both spectrometers. Event triggers consisted of electron-recoil coincidences and of a prescaled sample of electron and recoil single-arm triggers. Electron events were identified on the basis of a minimal pulse height in the Čerenkov counter and an energy deposited in the calorimeter consistent with the momentum determined from the drift chamber track. Coincidence events were identified using the relative time-of-flight (TOF) between the electron and recoil triggers. Contributions from the target cell end-caps and random coincidences were negligible. Elastic e-p scattering was measured for each e-d elastic kinematics. The e-p kinematics was chosen to match the electron-recoil solid angle Jacobian for the corresponding e-d kinematics. Data were taken with and without acceptance-defining collimators in front of the spectrometers.

The elastic e-p and e-d cross sections were calculated using:

$$\frac{d\sigma }{d\mathrm{\Omega }}=\frac{N_{ep(d)}C_{eff}}{N_iN_t(\mathrm{\Delta }\mathrm{\Omega })_{MC}F}$$ (4)

where $`N_{ep(d)}`$ is the number of e-p(e-d) elastic events, $`N_i`$ is the number of incident electrons, $`N_t`$ is the number of target nuclei/cm<sup>2</sup>, $`(\mathrm{\Delta }\mathrm{\Omega })_{MC}`$ is the effective double-arm acceptance from a Monte Carlo simulation, $`F`$ is the portion of the radiative corrections that depends only on $`Q^2`$ and $`T`$ (1.088 and 1.092, on average, for e-p and e-d elastic respectively), and $`C_{eff}=C_{det}C_{cdt}C_{rni}`$. Here $`C_{det}`$ is the electron and recoil detector and trigger inefficiency correction (2.6%), $`C_{cdt}`$ is the computer dead-time correction (typically 10% for e-d elastic), and $`C_{rni}`$ is the correction for losses of recoil nuclei due to nuclear interactions in the target (0.7-1.8% for protons and 2.8-5.1% for deuterons).

The effective double-arm acceptance was evaluated with a Monte Carlo computer program that simulated elastic e-p and e-d scattering under identical conditions as our measurements. The program tracked scattered electrons and recoil nuclei from the target to the detectors through the two HRS’s using optical models based on magnetic measurements of the quadrupole and dipole elements and on position surveys of collimation systems, magnets and vacuum apertures.
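The bookkeeping of eq. (4) can be condensed into a short sketch; the event counts, target thickness and acceptance used in the example call are invented, illustrative inputs, while the correction factors echo the typical values quoted above.

```python
def xs_from_counts(n_events, n_electrons, n_targets_cm2, domega_sr,
                   f_rad, c_det=1.026, c_cdt=1.10, c_rni=1.03):
    """Cross section from eq. (4): counts normalized by incident electrons,
    target thickness and acceptance, with the radiative factor F (f_rad)
    and the efficiency/dead-time/nuclear-interaction corrections C_eff."""
    c_eff = c_det * c_cdt * c_rni
    return n_events * c_eff / (n_electrons * n_targets_cm2 * domega_sr * f_rad)

# Illustrative numbers (assumptions): 1e4 elastic events, 1e18 incident
# electrons, a 15 cm LD2 target (~7.6e23 deuterons/cm^2), 5 msr acceptance.
xs = xs_from_counts(1.0e4, 1.0e18, 7.6e23, 5.0e-3, f_rad=1.092)
print(f"dsigma/dOmega = {xs:.2e} cm^2/sr")
```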
In the simulation, the effects from ionization energy losses and multiple scattering in the target and vacuum windows were taken into account for both electrons and recoil nuclei. Bremsstrahlung radiation losses for both incident and scattered electrons in the target and vacuum windows, as well as internal radiative effects, were also taken into account. Details on this simulation method can be found in Ref. . Monte Carlo simulated spectra of scattered electrons and recoil nuclei were found to be in very good agreement with the experimentally measured spectra.

The e-p elastic cross sections measured with the acceptance-defining collimators were found to agree within 0.3%, on average, with values calculated using a recent fit to the world data on the proton form factors. The e-p elastic cross sections measured without the collimators were, on average, 2.6% higher than the ones measured with collimators. All e-d cross section data taken without collimators have been normalized by 2.6%.

Values for $`A(Q^2)`$ were extracted from the measured e-d cross sections under the assumption that $`B(Q^2)`$ does not contribute to the cross section (supported by the existing $`B(Q^2)`$ data). The extracted $`A(Q^2)`$ values are presented in Fig. 2 together with previous SLAC data and theoretical calculations. The error bars represent statistical and systematic uncertainties added in quadrature. The statistical error ranged from $`\pm 1\%`$ to $`\pm 28\%`$. The systematic error has been estimated to be $`\pm 5.9\%`$ and is dominated by the uncertainty in $`(\mathrm{\Delta }\mathrm{\Omega })_{MC}`$ ($`\pm 3.6\%`$). Each of the two highest $`Q^2`$ points represents the average of two measurements with different beam energies (4.0 and 4.4 GeV). Tables of numbers are given in Ref. . It is apparent that our data agree very well with the SLAC data in the range of overlap and exhibit a smooth fall-off with $`Q^2`$.

The double dot-dashed and dot-dashed curves in Fig. 2 represent the RIA calculations of Van Orden, Devine and Gross (VDG) and Hummel and Tjon (HT), respectively. The VDG curve is based on a relativistically covariant calculation that uses the Gross equation and assumes that the virtual photon is absorbed by an off-mass-shell nucleon or a nucleon that is on-mass-shell right before or after the interaction. The HT curve is based on a one-boson-exchange quasipotential approximation of the Bethe-Salpeter equation where the two nucleons are treated symmetrically by putting them equally off their mass-shell with zero relative energy. In both cases the RIA curves lie below the data. Both groups have augmented their models by including the $`\rho \pi \gamma `$ MEC contribution. The magnitude of this contribution depends on the choices of the $`\rho \pi \gamma `$ coupling constant and vertex form factor. The VDG model (dashed curve) uses a $`\rho \pi \gamma `$ form factor from a covariant separable quark model. The HT model (dotted curve) uses a Vector Dominance Model. The difference between the two models is indicative of the size of the theoretical uncertainties. Although our data favor the VDG calculations, a complete test of the RIA+MEC framework will require improved and/or extended measurements of the nucleon form factors and of the deuteron $`B(Q^2)`$, planned at JLab.

Figure 3 shows our data in the “low” $`Q^2`$ range where they overlap with data from other laboratories. The previous measurements tend to show two long-standing diverging trends, one supported by the SLAC data and the other one by the CEA and Bonn data.
Our data agree with the Saclay data and confirm the trend of the SLAC data. It should be noted that another JLab experiment has measured $`A(Q^2)`$ in the $`Q^2`$ range 0.7 to 1.8 (GeV/c)<sup>2</sup>. The two curves are from a recent non-relativistic IA calculation using the Argonne $`v_{18}`$ potential without (dot-dashed curve) and with (dashed curve) MEC, and clearly exhibit the necessity of including MEC also in the non-relativistic IA.

Figure 4 (top) shows values for the “deuteron form factor” $`F_d(Q^2)\equiv \sqrt{A(Q^2)}`$ multiplied by $`(Q^2)^5`$. It is evident that our data exhibit a behavior consistent with the power law of QDS and pQCD. Figure 4 (bottom) shows values for the “reduced” deuteron form factor $`f_d(Q^2)\equiv F_d(Q^2)/F_N^2(Q^2/4)`$, where the two powers of the nucleon form factor $`F_N(Q^2)=(1+Q^2/0.71)^{-2}`$ (with $`Q^2`$ in (GeV/c)<sup>2</sup>) remove in a minimal and approximate way the effects of nucleon compositeness. Our $`f_d(Q^2)`$ data appear to follow, for $`Q^2>2`$ (GeV/c)<sup>2</sup>, the asymptotic $`Q^2`$ prediction of pQCD: $`f_d(Q^2)\propto [\alpha _s(Q^2)/Q^2][\mathrm{ln}(Q^2/\mathrm{\Lambda }^2)]^{-\mathrm{\Gamma }}`$. Here $`\mathrm{\Gamma }=2C_F/5\beta `$, where $`C_F=(n_c^2-1)/2n_c`$ and $`\beta =11-(2/3)n_f`$, with $`n_c=3`$ and $`n_f=2`$ being the numbers of QCD colors and effective flavors. Although several authors have questioned the validity of QDS and pQCD at the momentum transfers of this experiment , similar scaling behavior has been reported in deuteron photodisintegration cross sections at moderate photon energies.

In summary, we have measured the elastic structure function $`A(Q^2)`$ of the deuteron up to large momentum transfers. The results have clarified inconsistencies in previous data sets at low $`Q^2`$. The high luminosity and unique capabilities of the JLab facilities enabled measurements of record-low cross sections (the average cross section for $`Q^2=6`$ (GeV/c)<sup>2</sup> is $`2\times 10^{-41}`$ cm<sup>2</sup>/sr) that allowed the extraction of values of $`A(Q^2)`$ lower by one order of magnitude than those achieved at SLAC. The precision of our data will provide severe constraints on theoretical calculations of the electromagnetic structure of the two-body nuclear system. Calculations based on the relativistic impulse approximation augmented by meson-exchange currents are consistent with the present data. The results are also indicative of a scaling behavior consistent with predictions of quark dimensional scaling and perturbative QCD. Future measurements, at higher $`Q^2`$, of $`A(Q^2)`$ and $`B(Q^2)`$ as well as of the form factors of the helium isotopes would be critical for testing the validity of the apparent scaling behavior.

We acknowledge the outstanding support of the staff of the Accelerator and Physics Divisions of JLab that made this experiment possible. We are grateful to the authors of Refs. , and for kindly providing their theoretical calculations, and to F. Gross for valuable discussions. This work was supported in part by the U.S. Department of Energy and National Science Foundation, the Kent State University Research Council, the Italian Institute for Nuclear Research, the French Atomic Energy Commission and National Center of Scientific Research, the Natural Sciences and Engineering Research Council of Canada and the Fund for Scientific Research-Flanders of Belgium.
# BeppoSAX OBSERVATIONS OF AN ORBITAL CYCLE OF THE X–RAY BINARY PULSAR GX 301–2

M. Orlandini<sup>1</sup>, D. Dal Fiume<sup>1</sup>, F. Frontera<sup>1,2</sup>, T. Oosterbroek<sup>3</sup>, A.N. Parmar<sup>3</sup>, A. Santangelo<sup>4</sup>, A. Segreto<sup>4</sup>

<sup>1</sup> TeSRE/C.N.R., via Gobetti 101, 40129 Bologna, Italy <sup>2</sup> Physics Dept. Ferrara University, via Paradiso 12, 44100 Ferrara, Italy <sup>3</sup> SSD/ESA, ESTEC, Keplerlaan 1, 2200 AG Noordwijk, The Netherlands <sup>4</sup> IFCAI/C.N.R., via La Malfa 153, 90146 Palermo, Italy

ABSTRACT

We present preliminary results on our campaign of observations of the X–ray binary pulsar GX 301–2. BeppoSAX observed this source six times in January/February 1998: at the periastron and apoastron, and at four other, intermediate, orbital phases. We present preliminary results on the GX 301–2 spectral and temporal behaviour as a function of orbital phase.

1 INTRODUCTION

The X–ray binary pulsar GX 301–2 (4U 1223–62) is a $`\sim 700`$ s pulsator orbiting the B2 Iae supergiant Wray 977 every 41.5 days along the most eccentric orbit among X–ray binary pulsars \[Sato et al. 1986, Koh et al. 1997\]. It exhibits a flaring activity that shows its maximum $`\sim 1.4`$ days before the periastron passage. This anticipation with respect to the phase of closest approach to Wray 977 has been explained as due to the crossing of the neutron star through a circumstellar disk around the supergiant \[Pravdo et al. 1995\]. Recent observations by BATSE \[Pravdo et al. 1995\] have revealed the presence of a second flare occurring near the apoastron, at orbital phase 0.45.

The overall power-law X–ray spectrum of GX 301–2 is characterized by a strong soft excess below 4 keV. Because pulsation was not detected in this band, the partial covering model has been ruled out as a possible origin of this excess. ASCA observations revealed the presence of two soft components: a scattering component, between 2 and 4 keV, due to the gas stream from the supergiant to the neutron star, and an ultrasoft component, below 2 keV, described with thermal emission from a plasma at $`\sim 0.8`$ keV \[Saraswat et al. 1996\]. A strong narrow iron emission line and an absorption edge are also present. The emission line is the result of fluorescence in the cooler circumstellar material, while the absorption edge is due to the same material when it crosses the line of sight and absorbs the X–rays coming from the neutron star. At high (E$`>`$10 keV) energies, the power law spectrum is modified by a high energy cutoff. A possible cyclotron resonance feature at $`\sim 40`$ keV has been claimed \[Makishima and Mihara 1992\].

2 OBSERVATIONS

GX 301–2 was the target of a campaign of observations with the Italian-Dutch satellite BeppoSAX \[Boella et al. 1997\]. The source was observed at six different orbital phases (see Table 1), in order to monitor its spectral and timing behaviour along the orbit.

Table 1: Log of the observations of GX 301–2 performed by BeppoSAX. Observations 3503 and 3514 have been summed together.

All four Narrow Field Instruments aboard BeppoSAX worked nominally during all the observations, namely the LECS (0.1–10 keV), MECS (1.5–10 keV), HPGSPC (3–120 keV), and PDS (15–200 keV). The source displayed its maximum intensity not at the periastron observation OP3428/9 but at orbital phase 0.85 (OP3373). The net count rate in this observation was about four times that detected at the periastron. We confirmed the increase in the intensity near the apoastron.
2.1 Spectral Analysis

The spectral analysis of the BeppoSAX observations is very preliminary (we did not use HPGSPC data). The pulse-averaged spectrum is very complex and rich in features. We show in Fig. 1 some of the best-fit parameters from the fit of the pulse-averaged spectra of all six observations with the scattering model by Saraswat et al. (1996), defined in terms of two absorbed power laws, with the same index but different absorptions and normalizations, plus a high energy cutoff to describe the PDS data. The stronger absorption affects the pulsating, hard (above 4 keV) component, while the softer absorption is thought to arise because of scattering. We added an iron line at 6.4 keV, but we found systematic deviations at $`\sim 20`$ and $`\sim 40`$ keV. While the latter might be attributable to a cyclotron resonance, for the former the correspondence of the putative cyclotron energy with the cutoff energy makes an identification with a cyclotron resonance difficult at this stage of the analysis.

2.2 Timing Analysis

We folded the light curves of all the observations with the apparent pulse period obtained from an epoch folding search. In this preliminary analysis arrival times were corrected neither to the solar system barycenter nor for orbital motion. The background-subtracted pulse profiles as a function of energy for the most intense observation are shown in Fig. 2. In this particular observation we were also able to detect pulsation below 2 keV and in the 60–100 keV range. Note the evolution of the four sub-peaks in the main peak. The modulation index, defined as $`1-I_{\mathrm{min}}/I_{\mathrm{max}}`$, where $`I_{\mathrm{min}}`$ and $`I_{\mathrm{max}}`$ are the minimum and maximum count rates observed in the pulse profile, shows a monotonic increase with energy, without any deviation at the suspected cyclotron resonance features \[Frontera and Dal Fiume 1989\].
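For readers unfamiliar with the folding technique, a minimal sketch is given below: a synthetic light curve is folded on a trial period and the modulation index 1 − I<sub>min</sub>/I<sub>max</sub> is computed. The signal parameters are invented for illustration and are not GX 301–2 data.

```python
import numpy as np

rng = np.random.default_rng(1)

def fold(times, rates, period, n_bins=32):
    """Fold a light curve on a trial period and return the binned profile."""
    phase = (times % period) / period
    bins = (phase * n_bins).astype(int)
    return np.array([rates[bins == b].mean() for b in range(n_bins)])

# Synthetic ~700 s pulsar light curve in 1 s bins (illustration only):
t = np.arange(0.0, 2.0e4, 1.0)
rate = 10 + 4 * np.sin(2 * np.pi * t / 700.0) + rng.normal(0, 1, t.size)

profile = fold(t, rate, period=700.0)
mod_index = 1 - profile.min() / profile.max()   # 1 - I_min/I_max
print(f"modulation index = {mod_index:.2f}")
```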
# Effects of Ram-Pressure from Intracluster Medium on the Star Formation Rate of Disk Galaxies in Clusters of Galaxies

## 1 Introduction

Clusters of galaxies in the redshift range $`0.2`$–$`0.5`$ often exhibit an overabundance, relative to present-day clusters, of blue galaxies (Butcher & Oemler 1978). This star formation activity is often called the Butcher-Oemler effect (BOE). Although several mechanisms have been suggested to explain the effect, it is still not known which is most important. This is because the change of the star formation rate (SFR) caused by these mechanisms has not been predicted physically and quantitatively, which makes the comparison between theoretical predictions and observations difficult. In contrast with the usual “black-box” approach of N-body simulations, which has previously dominated the theoretical discussion in this field, Fujita (1998, Paper I) constructs a simple model to evaluate the change of the SFR of a disk galaxy caused by environmental effects in clusters of galaxies. He shows that the tidal force from the potential well of a cluster can induce star formation activity in a galaxy in the central part of the cluster, but that this is inconsistent with observations. He also shows that the increase of external thermal pressure when a galaxy plows into the intracluster medium (ICM) does not induce a significant rise of the SFR. On the contrary, he shows that successive high-speed encounters between galaxies (galaxy harassment; see Moore et al. 1996) can make star formation active. These arguments are based on the idea that the BOE is caused by blue starburst galaxies.

Some recent observations, however, suggest that starbursts do not play a major role in driving cluster galaxy evolution at high redshift. Abraham et al. (1996) observe Abell 2390 and find that the galaxy population in the cluster changes gradually from a red, evolved, early-type population in the inner part of the cluster to a progressively blue, later-type population in the extensive outer envelope of the cluster. The radial change of colors and morphologies of galaxies is also observed in other clusters (e.g. Balogh et al. 1997; Rakos, Odell, & Schombert 1997; Oemler, Dressler, & Butcher 1997; Smail et al. 1998; Couch et al. 1998; van Dokkum et al. 1998). Abraham et al. (1996) speculate that the cluster has been built up gradually by the infall of blue field galaxies and that star formation has been truncated in infalling galaxies during the accretion process. The BOE is due to a high proportion of blue spiral galaxies which later turn into red S0 galaxies during the infall toward the cluster center. Starbursts are not required to explain the galaxy spectra. In fact, high angular resolution images from the Hubble Space Telescope (HST) show that a significant fraction of the blue galaxies in high-redshift clusters are “normal” spiral galaxies (Oemler et al. 1997). Moreover, using HST data, Couch et al. (1998) claim that there are fewer disturbed galaxies than predicted by the galaxy harassment model. They suggest that ram-pressure stripping by the hot ICM truncates star formation in blue disk galaxies and makes them red.

Motivated by these observations, the effects of ram-pressure from the ICM on the SFR of a disk galaxy are investigated in this study. Previous theoretical studies show that in the central part of clusters, where the density of the ICM is large, ram-pressure from the ICM sweeps the interstellar medium (ISM) of galaxies away (e.g.
Gunn & Gott 1972; Takeda, Nulsen, & Fabian 1984; Gaetz, Salpeter, & Shaviv 1987). Thus, the SFR could decrease, as Couch et al. (1998) suggest. On the other hand, ram-pressure could enhance star formation through the compression of the ISM. For example, Bothun & Dressler (1986) suggest that the observational data of the 3C295 cluster and the Coma cluster are consistent with the picture of ram-pressure induced star formation. Therefore, a quantitative estimation of the SFR of a galaxy under pressure from the ICM is required to know whether ram-pressure decreases or increases the SFR.

The plan of this paper is as follows. In §2, the models of molecular gas and ram-pressure are described. In §3, the evolution of the SFR of an infalling galaxy is investigated. In §4, the results of the model calculations are compared with observations. The conclusions are summarized in §5.

## 2 Models

### 2.1 Molecular Clouds

The method for modeling the evolution of the molecular clouds in disk galaxies was extensively discussed in Paper I. In this subsection the main features of the method are briefly reviewed.

Molecular clouds are divided according to their initial masses, that is, $`M_{\mathrm{min}}=M_1<\mathrm{}<M_i<\mathrm{}<M_{\mathrm{max}}`$, where the dots denote the intermediate mass bins, the lower and upper cutoffs are $`M_{\mathrm{min}}=10^2\mathrm{M}_{\odot }`$ and $`M_{\mathrm{max}}=10^{8.5}\mathrm{M}_{\odot }`$, respectively, and the intervals are chosen so that $`\mathrm{log}(M_{i+1}/M_i)=0.01`$. Referring to the total mass of clouds whose initial masses are between $`M_i`$ and $`M_{i+1}`$ as $`\mathrm{\Delta }\stackrel{~}{M}_i(t)`$, the rate of change is

$$\frac{d\mathrm{\Delta }\stackrel{~}{M}_i(t)}{dt}=\stackrel{~}{f}_iS(t)-\frac{\mathrm{\Delta }\stackrel{~}{M}_i(t)}{\tau (M_i,P)},$$ (1)

$$S(t)=S_{*}(t)+S_{\mathrm{mol}}(t),$$ (2)

where $`\stackrel{~}{f}_i`$ is the initial mass fraction of the molecular clouds whose initial masses are between $`M_i`$ and $`M_{i+1}`$, $`S_{*}(t)`$ is the gas ejection rate of stars, $`S_{\mathrm{mol}}(t)`$ is the recycle rate of molecular gas, and $`\tau (M_i,P)`$ is the destruction time of a molecular cloud with mass $`M_i`$ at pressure $`P`$. The initial mass function of molecular clouds is assumed to be $`N(M)\propto M^{-2}`$. Thus $`\stackrel{~}{f}_i`$ can be derived from this relation. The gas ejection rate of stars is divided into two terms, $`S_{*}(t)=S_s(t)+S_l`$. The gas ejection rate of stars with short lifetimes is given by

$$S_s(t)=\int _{m_l}^{m_u}\psi (t-t_m)R(m)\varphi (m)\,dm,$$ (3)

where $`m`$ is the stellar mass, $`\psi (t)`$ is the SFR, $`\varphi (m)`$ is the IMF expressed in the form of the mass fraction, $`R(m)`$ is the returned mass fraction, and $`t_m`$ is the lifetime of stars with mass $`m`$. The slope of the IMF is taken to be 1.35 (Salpeter mass function). The upper and lower mass limits, $`m_u`$ and $`m_l`$, are taken to be $`50\mathrm{M}_{\odot }`$ and $`2.1\mathrm{M}_{\odot }`$, respectively, although the lower limit of the IMF is $`0.08\mathrm{M}_{\odot }`$. The gas ejection rate of stars with mass smaller than $`m_l`$, or $`S_l`$, is assumed to be constant. It is determined by the balance between the formation and destruction rates of molecular clouds (see Paper I). The recycle rate of molecular gas is

$$S_{\mathrm{mol}}(t)=\sum _i[1-ϵ(M_i,P)]\frac{\mathrm{\Delta }\stackrel{~}{M}_i(t)}{\tau (M_i,P)},$$ (4)

where $`ϵ(M_i,P)`$ is the star formation efficiency of a molecular cloud with mass $`M_i`$ and pressure $`P`$.
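Collecting eqs. (1)–(4), a minimal explicit integration of the bin masses could look as follows; the constant gas supply and the power-law destruction time are placeholders for the pressure-dependent Elmegreen & Efremov prescriptions actually used in the paper.

```python
import numpy as np

# Mass grid: log M from 2 to 8.5 in steps of 0.01 (as in the text); M in Msun.
log_m = np.arange(2.0, 8.5, 0.01)
m = 10.0**log_m
# For N(M) ~ M^-2 the mass per logarithmic bin is constant, so the initial
# mass fractions f_i are uniform across the bins.
f_i = np.full(m.size, 1.0 / m.size)

def evolve(dm_i, s_of_t, tau_i, dt, n_steps):
    """Explicit Euler integration of eq. (1) for all mass bins.

    dm_i   : array, mass in each bin (Msun)
    s_of_t : function t -> total gas supply rate S(t) (Msun/yr)
    tau_i  : array, destruction time of clouds in each bin (yr)
    """
    t = 0.0
    for _ in range(n_steps):
        dm_i = dm_i + dt * (f_i * s_of_t(t) - dm_i / tau_i)
        t += dt
    return dm_i

# Assumed, illustrative inputs: constant supply of 6 Msun/yr and a weak
# mass dependence tau ~ M^0.3 (placeholder for the pressure-dependent law).
tau = 1.0e7 * (m / 1.0e5)**0.3
dm = evolve(np.zeros_like(m), lambda t: 6.0, tau, dt=1.0e5, n_steps=5000)
print(f"total molecular mass after 0.5 Gyr: {dm.sum():.2e} Msun")
```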
Note that $`S_{\mathrm{mol}}(t)`$ is identical with the evaporation rate of molecular gas by newborn stars. Under simple assumptions, Elmegreen & Efremov (1997) derive $`ϵ(M_i,P)`$ and $`\tau (M_i,P)`$; their results are adopted hereafter. The SFR is described by

$$\psi (t)=\sum _iϵ(M_i,P)\frac{\mathrm{\Delta }\stackrel{~}{M}_i(t)}{\tau (M_i,P)}.$$ (5)

Equation (1) can be solved if $`P`$ is given. In the next subsection, $`P`$ is determined for ram-pressure. Then the SFR is calculated from equations (1)–(5).

### 2.2 Ram-pressure

As mentioned in §1, ram-pressure could induce more star formation in a disk galaxy through compression of the ISM, while it could also decrease the SFR by stripping the ISM from the galaxy. In this subsection a model is constructed to determine which effect dominates.

In addition to molecular gas, warm HI gas is also considered, although equation (1) does not include it explicitly. It is assumed that the gas ejected from stars and the gas evaporated by young stars temporarily become HI gas before they finally become molecular gas. The time-scale of the HI gas phase is assumed to be small in comparison with that of the galaxy evolution. Thus, the total mass of HI gas can be given by $`M_{\mathrm{HI}}\approx \tau _{\mathrm{mol}}S/ϵ_{\mathrm{mol}}`$, where $`\tau _{\mathrm{mol}}`$ and $`ϵ_{\mathrm{mol}}`$ are the time-scale and efficiency of molecular cloud formation, respectively. If molecular cloud formation is triggered by density waves in the galactic disk, $`\tau _{\mathrm{mol}}`$ is nearly the ratio of the rotation time of the galaxy to the number of the arms. In this case, it can be assumed that $`\tau _{\mathrm{mol}}`$ is constant. If $`ϵ_{\mathrm{mol}}`$ is also constant, $`M_{\mathrm{HI}}\propto S(t)`$. Note that $`S(t)`$ changes by only $`\sim 20`$% in the calculations in §3.

Ram-pressure from the ICM is

$$P_{\mathrm{ram}}=\rho _{\mathrm{ICM}}v_{\mathrm{gal}}^2,$$ (6)

where $`\rho _{\mathrm{ICM}}`$ is the mass density of the ICM, and $`v_{\mathrm{gal}}`$ is the velocity of the galaxy relative to the ICM. The ram-pressure is assumed to be the same both for molecular gas and for HI gas. For the HI gas, the restoring force per unit area due to the gravity of the disk of the galaxy is given by

$$F_{\mathrm{grav},\mathrm{HI}}=2\pi G\mathrm{\Sigma }_{*}\mathrm{\Sigma }_{\mathrm{HI}}$$ (7)

$$=v_{\mathrm{rot}}^2R^{-1}\mathrm{\Sigma }_{\mathrm{HI}}$$ (8)

$$=2.1\times 10^{-11}\mathrm{dyne}\mathrm{cm}^{-2}\left(\frac{v_{\mathrm{rot}}}{220\mathrm{km}\mathrm{s}^{-1}}\right)^2\left(\frac{R}{10\mathrm{kpc}}\right)^{-1}\left(\frac{\mathrm{\Sigma }_{\mathrm{HI}}}{8\times 10^{20}m_\mathrm{H}\mathrm{cm}^{-2}}\right),$$ (9)

where $`G`$ is the gravitational constant, $`\mathrm{\Sigma }_{*}`$ is the gravitational surface mass density, $`\mathrm{\Sigma }_{\mathrm{HI}}`$ is the surface density of the HI gas, $`v_{\mathrm{rot}}`$ is the rotation velocity of the galaxy, $`R`$ is the radius of the disk, and $`m_\mathrm{H}`$ is the mass of hydrogen (Gunn & Gott 1972). Equation (8) is derived from the relation $`\mathrm{\Sigma }_{*}=(2\pi G)^{-1}v_{\mathrm{rot}}^2R^{-1}`$ (Binney & Tremaine 1987). In the following section, $`v_{\mathrm{rot}}=220\mathrm{km}\mathrm{s}^{-1}`$ and $`R=10`$ kpc are used unless otherwise mentioned.
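The stripping criterion (10) below is easy to evaluate; this sketch checks the prefactor of eq. (9) and tests whether a galaxy moving at an assumed speed through an assumed ICM density loses its HI layer.

```python
G = 6.674e-8          # gravitational constant, cgs
M_H = 1.673e-24       # hydrogen mass, g
KPC = 3.086e21        # cm
KM = 1.0e5            # cm

def f_grav_hi(v_rot_kms=220.0, r_kpc=10.0, sigma_hi=8e20 * M_H):
    """Restoring force per unit area on the HI layer, eqs. (7)-(9), cgs."""
    return (v_rot_kms * KM)**2 / (r_kpc * KPC) * sigma_hi

def p_ram(n_icm_cm3, v_gal_kms, mu=0.6):
    """Ram pressure rho_ICM * v^2, eq. (6), in dyne/cm^2; the ICM mass
    density is taken as mu * m_H * n (a simplifying assumption)."""
    return mu * M_H * n_icm_cm3 * (v_gal_kms * KM)**2

# Check of the prefactor in eq. (9): prints ~2.1e-11 dyne/cm^2.
print(f"F_grav = {f_grav_hi():.1e} dyne/cm^2")
# A galaxy falling at 2000 km/s through n_ICM = 1e-3 cm^-3 (assumed values):
print("stripped" if p_ram(1e-3, 2000.0) > f_grav_hi() else "retained")
```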
Assuming $`\mathrm{\Sigma }_{\mathrm{HI}}\propto M_{\mathrm{HI}}`$, the surface density can be given by $`\mathrm{\Sigma }_{\mathrm{HI}}=\mathrm{\Sigma }_{\mathrm{HI},0}(S(t)/S_0)`$, where $`\mathrm{\Sigma }_{\mathrm{HI},0}`$ and $`S_0`$ are the initial surface density and SFR, respectively. The HI gas is stripped when

$$P_{\mathrm{ram}}>F_{\mathrm{grav},\mathrm{HI}}.$$ (10)

After the HI gas is stripped, instead of equation (2), $`S(t)`$ is fixed at zero in equation (1). Molecular clouds are also stripped when

$$P_{\mathrm{ram}}>2\pi G\mathrm{\Sigma }_{*}\mathrm{\Sigma }_{\mathrm{mol}},$$ (11)

where $`\mathrm{\Sigma }_{\mathrm{mol}}`$ is the column density of each molecular cloud and is given by

$$\mathrm{\Sigma }_{\mathrm{mol}}=190\mathrm{M}_{\odot }\mathrm{pc}^{-2}\left(\frac{P_{\mathrm{ram}}}{10^4k_\mathrm{B}\mathrm{cm}^{-3}\mathrm{K}}\right)^{1/2},$$ (12)

regardless of the mass (Elmegreen 1989). Since $`\mathrm{\Sigma }_{\mathrm{mol}}>\mathrm{\Sigma }_{\mathrm{HI}}`$, the molecular clouds are stripped after the HI gas is stripped.

## 3 Evolution of Radially Infalling Galaxy

The distribution of the gravitational matter of a model cluster is

$$\rho (r)=\frac{\rho _0}{(1+r^2/r_c^2)^{3/2}},$$ (13)

where $`r`$ is the distance from the cluster center, $`\rho _0`$ is the mass density at the center, and $`r_c(=400\mathrm{kpc})`$ is the core radius of the cluster. The central mass density is given by

$$\rho _0=\frac{9}{4\pi Gr_c^2}\frac{k_\mathrm{B}T}{\mu m_\mathrm{H}}$$ (14)

$$=9.0\times 10^{-26}\mathrm{g}\mathrm{cm}^{-3}\left(\frac{r_c}{400\mathrm{kpc}}\right)^{-2}\left(\frac{k_\mathrm{B}T}{8\mathrm{keV}}\right),$$ (15)

where $`k_\mathrm{B}`$ is the Boltzmann constant, $`T`$ is the effective temperature of the cluster, and $`\mu (=0.6)`$ is the mean molecular weight. Note that recent numerical simulations support a steeper core profile (e.g. Navarro, Frenk, & White 1997). However, even if this profile is adopted instead of equation (13), the following results do not change significantly (see Figure 2 in Paper I). The gas distribution is

$$\rho _{\mathrm{gas}}(r)=\frac{\rho _{\mathrm{gas},0}}{1+r^2/r_c^2},$$ (16)

where $`\rho _{\mathrm{gas},0}`$ is the central gas density. The pressure of molecular clouds is given by

$$P=\mathrm{max}(P_{\mathrm{ram}}+P_{\mathrm{stat}},P_{\mathrm{\infty }}),$$ (17)

where $`P_{\mathrm{\infty }}=3\times 10^4k_\mathrm{B}\mathrm{cm}^{-3}\mathrm{K}`$ is the pressure from the random motion of the clouds in the galaxy and $`P_{\mathrm{stat}}=\rho _{\mathrm{gas}}k_\mathrm{B}T/\mu m_\mathrm{H}`$ is the static pressure from the ICM.

The initial surface density of HI gas and the initial SFR of the model galaxy are $`\mathrm{\Sigma }_{\mathrm{HI},0}=8\times 10^{20}m_\mathrm{H}\mathrm{cm}^{-2}`$ and $`S_0=6\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$, respectively. The parameters for the gas distribution of model clusters are summarized in Table 1. They are the typical ones obtained by X-ray satellites (e.g. Jones & Forman 1984; White, Jones, & Forman 1997). The initial position, $`r_0`$, and velocity, $`v_{\mathrm{gal},0}`$, of the model galaxy mean that the galaxy begins to fall from where the mass density of the cluster is 100 times the critical density of the Universe (for $`H_0=75\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$).
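A compact numerical version of the cluster model, eqs. (13)–(17), is given below; the central gas density and the galaxy speed in the example call are assumed values, while the other constants follow the text.

```python
import numpy as np

G, M_H, KPC = 6.674e-8, 1.673e-24, 3.086e21   # cgs units
KT = 8.0 * 1.602e-9                            # k_B T = 8 keV in erg
MU, R_C = 0.6, 400 * KPC

rho_0 = 9.0 / (4 * np.pi * G * R_C**2) * KT / (MU * M_H)   # eqs. (14)-(15)
print(f"central density rho_0 = {rho_0:.1e} g/cm^3")        # ~9e-26

def rho_gas(r, rho_gas0=3.0e-27):
    """ICM gas profile, eq. (16); the central value is an assumed input."""
    return rho_gas0 / (1 + (r / R_C)**2)

def cloud_pressure(r, v_gal, p_inf=3.0e4 * 1.381e-16):
    """Molecular cloud pressure P = max(P_ram + P_stat, P_inf), eq. (17)."""
    p_ram = rho_gas(r) * v_gal**2                  # eq. (6)
    p_stat = rho_gas(r) * KT / (MU * M_H)          # static ICM pressure
    return max(p_ram + p_stat, p_inf)

# A galaxy moving at 1500 km/s at r = 400 kpc (illustrative values):
p = cloud_pressure(400 * KPC, 1.5e8)
print(f"P = {p:.1e} dyne/cm^2 = {p / (3.0e4 * 1.381e-16):.0f} P_inf")
```

With these inputs the pressure at 400 kpc comes out near ten times the floor value, consistent with the behavior described in the following section.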
In addition, if $`P(t=0)`$ is given, the initial total mass of molecular clouds, $`M_{\mathrm{mol},0}`$, and the gas ejection rate of stars with long lifetime, $`S_l`$, can be determined so that the formation and destruction rates of molecular clouds are balanced at $`t=0`$ (see Paper I). For the parameters above, $`M_{\mathrm{mol},0}=2.5\times 10^9\mathrm{M}_{\odot }`$ (except for models A1 and B1), $`M_{\mathrm{mol},0}=2.1\times 10^9\mathrm{M}_{\odot }`$ (models A1 and B1), and $`S_l\approx 5\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$. In models B1–B4, the stripping of HI gas and molecular gas is ignored for comparison.

Figure 1 shows the pressure evolution. As is shown, it depends strongly on the gas density and effective temperature of the cluster, because of equation (6) and the dependence of the galaxy velocity on the depth of the potential well. Figure 2 shows the SFR of a model galaxy. The calculations continue even after the galaxy passes the cluster center. The SFR rises to at most twice its initial value. The maximum of the SFR can be explained by the relation between pressure and destruction time of molecular clouds (Figure 4 in Elmegreen & Efremov 1997), which is used in equation (1). Model B2 is taken as an example. As the galaxy gets closer to the cluster center, the ram-pressure increases. Thus, the destruction time of molecular clouds decreases. When $`P\sim 10P_{\mathrm{\infty }}`$ (at $`r\sim 400`$ kpc), the destruction times of all molecular clouds ($`<10^{8.5}\mathrm{M}_{\odot }`$) become less than the time passed since $`P`$ started to rise significantly ($`\sim 2\times 10^8`$ yr; see Figures 1a and 4). Therefore, the pressure increase has affected all the clouds by this time. In fact, the ratio of the initial total mass of molecular clouds, $`2.5\times 10^9\mathrm{M}_{\odot }`$, to this time, $`\sim 2\times 10^8`$ yr, corresponds to the peak of the SFR, $`\sim 13\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$. Note that the total mass of molecular clouds at $`r=400`$ kpc is not zero (Figure 3a). This is because the stars with long lifetime supply additional gas.

The SFR rapidly drops in the central region of a high-temperature and/or gas-rich cluster when ram-pressure stripping is considered (models A1–A3). This is because the HI gas is stripped and the formation of molecular clouds ceases. For example, the condition (10) is satisfied for $`r<1.2`$ Mpc in model A1. In that region, without the supply ($`S(t)=0`$ in equation (1)), part of the molecular gas is consumed to form stars and the rest is stripped soon after being evaporated by newborn stars. The larger the density and temperature of the model cluster, the greater the effect of ram-pressure, so that gas can be stripped and the SFR drops further out in the cluster (Figure 1). If the surface density of the galaxy is smaller, the condition (10) is satisfied at a larger radius. In that case, the SFR begins to decrease earlier. In model A4, the condition (10) is never satisfied, that is, the HI gas is not stripped. In this model, the SFR does not change significantly because neither ram-pressure compression nor stripping is effective. Note that ram-pressure cannot strip molecular clouds in any of the models considered here.

Although the SFR in models A1–A4 is that of the whole galaxy, it is not very different from that of the disk component, because most gas resides in the disk and because the parameter $`R=10`$ kpc is the typical value of the disk radius. The SFR of the bulge component can be considered separately as follows.
The SFR of the bulge component can be considered separately as follows. The normalized SFR, $`\psi /S_0`$, does not depend on $`S_0`$ if the formation and destruction rates of molecular clouds are initially balanced. Thus the normalized SFR in models B1–B4, which do not include the effect of ram-pressure stripping, corresponds to that of the galactic bulge, where the restoring force is large (see equation (10)). For example, at $`R=10`$ kpc the ram-pressure stripping occurs when $`P\gtrsim 10P_{\mathrm{\infty }}`$ (models A1–A3). Thus, for $`R<0.5`$ kpc it occurs only when $`P\gtrsim 200P_{\mathrm{\infty }}`$ if $`v_{\mathrm{rot}}`$ and $`\Sigma _{\mathrm{HI}}`$ are identical. That is, ram-pressure stripping does not occur in the bulge (Figure 1). In models B1–B4, after $`\psi /S_0`$ rises to only twice its initial value, it begins to decrease owing to the shortage of molecular gas (Figure 3). However, star formation does not cease even when the galaxy reaches the center of the cluster. After the galaxy passes the center, the SFR continues to decrease, although slowly (Figure 2).

## 4 Discussion

In this section, the results in §3 are compared with recent observations of distant clusters in order to investigate whether ram-pressure is really responsible for the evolution of galaxies in clusters. Most distant clusters observed in detail are rich ones, and their temperatures are high ($`k_\mathrm{B}T=6`$–$`10`$ keV). Although the gas density distributions of most of these clusters are not known, they are assumed to be almost the same as those of nearby clusters. This is supported by the fact that there is no evidence for evolution of the luminosity–temperature relation among clusters at $`z\lesssim 0.4`$ (Mushotzky & Scharf 1997). Thus, the $`k_\mathrm{B}T=8`$ keV models of §3 can be applied to the comparison. Using the population synthesis code of Kodama & Arimoto (1997), the evolution of the color ($`B-V`$) and of the B-band luminosity in models A1 and B1 is calculated (Figures 5 and 6) on the assumption that $`\psi (t)=S_0`$ for $`-10<t<0`$ Gyr. The evolution in models A2 and B2 is similar to that in models A1 and B1, although the color and luminosity start to change later. Model A1 shows the color and luminosity evolution for an infalling disk galaxy. In that model, ram-pressure sweeps the ISM away at $`t-t_{\mathrm{cent}}=-5\times 10^8`$ yr, where $`t_{\mathrm{cent}}`$ is the time when the galaxy reaches the cluster center ($`t_{\mathrm{cent}}\sim 1.2`$ Gyr). Before ram-pressure stripping occurs, the changes of color and luminosity are very small. This indicates that ram-pressure compression does not induce observable star formation activity, because the intrinsic scatter of color and luminosity among galaxies is larger. After the stripping occurs, the color of the model galaxy rapidly becomes red and approaches that of a passively evolving stellar system, such as an elliptical galaxy (Figure 5). Thus, the model predicts that the fraction of blue disk galaxies is small in the central part of rich clusters if a significant fraction of the galaxies have fallen in from the surrounding field. Recent HST observations indicate that there are proportionally more spiral galaxies and fewer E/S0 galaxies in high-redshift clusters ($`z=0.3`$–$`0.5`$) than in nearby clusters (Oemler et al. 1997; Dressler et al. 1997; Couch et al. 1998). Thus, some spiral galaxies may have been transformed into E/S0 galaxies by $`z=0`$.
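The factor-of-20 jump between the disk and bulge stripping thresholds quoted at the beginning of this section follows from a simple scaling. A minimal sketch, assuming the restoring term $`2\pi G\Sigma _{\ast }\Sigma _{\mathrm{HI}}`$ with $`\Sigma _{\ast }\propto v_{\mathrm{rot}}^2/R`$ (a flat rotation curve; this scaling is an assumption, not a statement from the paper):

```python
# Stripping threshold scaling: with Sigma_star ~ v_rot^2 / R (assumed flat
# rotation curve) and fixed v_rot and Sigma_HI, the threshold pressure ~ 1/R.
P_disk  = 10.0   # threshold at R = 10 kpc, in units of P_infinity (from the text)
R_disk  = 10.0   # kpc
R_bulge = 0.5    # kpc
print(P_disk * R_disk / R_bulge)   # 200 x P_infinity, as quoted for the bulge
```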
This morphological transformation may be explained by the models in §3 as follows. The luminosity of the disk component of the model galaxy is approximately proportional to that of model A1, as noted in §3. Figure 6 shows that the luminosity in model A1 becomes 40% of its initial value after the model galaxy passes the cluster center. On the other hand, the bulge of a disk galaxy generally has little gas, and most of the light from the bulge component is dominated by a passively evolving stellar population. This means that the luminosity of the bulge is not affected very much by ram-pressure. Even if the bulge contains a large amount of gas, the color and luminosity of the bulge do not change significantly, because in this case the bulge evolution follows model B1 (Figures 5 and 6). Therefore, the bulge-to-disk luminosity ratio (B/D) changes when the galaxy falls radially toward the cluster center. For example, if $`\mathrm{B}/\mathrm{D}=0.2`$ initially, it increases to $`\mathrm{B}/\mathrm{D}=0.5`$ after the galaxy passes the cluster center. This means that Sb galaxies turn into S0 galaxies (Solanes et al. 1989). Therefore, the fraction of S0 galaxies should be large in the central region of clusters. Moreover, the number of S0 galaxies should increase throughout the cluster as more field spiral galaxies accrete onto it.
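The bulge-to-disk evolution quoted above is straightforward arithmetic, sketched here:

```python
# B/D evolution: the disk fades to 40% of its initial luminosity (model A1)
# while the bulge luminosity stays essentially constant.
BD_initial = 0.2
disk_fraction = 0.4
print(BD_initial / disk_fraction)   # 0.5 -- roughly the B/D of an S0 galaxy
```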
In summary, the ram-pressure from the ICM indeed truncates star formation activity in disk galaxies, as Abraham et al. (1996) speculate. Moreover, the above arguments show that if clusters have been built up by the infall of blue field disk galaxies, the ram-pressure model can explain the radial color and morphology distributions observed in distant clusters; that is, the galaxy population changes gradually from a red, evolved, early-type population in the inner part of the cluster to a progressively bluer, later-type population in the extensive outer envelope of the cluster. The Butcher–Oemler effect (BOE) can be explained by the high proportion of blue infalling galaxies. However, several observational results may not be explained by our simple ram-pressure model. In the rest of this section, a few examples are discussed. First, Valluri & Jog (1991) have shown that the observed relationship between HI deficiency and size is opposite to that expected from the ram-pressure model; they indicate that the fraction of HI-deficient galaxies increases with optical size over most of the size range. On the other hand, other observations show that HI-deficient galaxies are concentrated in the central part of clusters (e.g. Cayatte et al. 1990), which is consistent with the ram-pressure model. Valluri & Jog (1991) suggest that the disagreement results from mass segregation in clusters: if the larger galaxies are more concentrated toward the cluster core, they would be more severely affected by the ICM than smaller galaxies. Another solution may exist. Since the luminosity of dwarf galaxies decreases after ram-pressure truncates their star formation, the dwarf galaxies may become too faint to be observed in cluster cores. Second, Lea & Henry (1988) report that the percentage of blue objects in clusters seems to increase with X-ray luminosity for the most luminous clusters, while no correlation is apparent at low luminosity. If ram-pressure is the only mechanism that drives the evolution of galaxies in clusters, the fraction of blue galaxies must always be high in low X-ray luminosity clusters, which usually have low temperatures, because ram-pressure stripping is not effective there (see model A4 in §3). Finally, we would like to remark that although the effect of ram-pressure is important for cluster evolution, it cannot be denied that other processes, such as galaxy harassment, affect some galaxies in clusters.

## 5 Conclusions

We have quantitatively investigated the effect of ram-pressure on the star formation rate of a radially infalling disk galaxy in a cluster using a simple model of molecular cloud evolution. The primary results of the study can be summarized as follows:

1. Since ram-pressure compresses the molecular gas, the star formation rate of a disk galaxy increases to at most twice its initial value as the galaxy approaches the center of a cluster with high gas density and deep potential well, or with a central pressure of $`10^{-2}\,\mathrm{cm}^{-3}\,\mathrm{keV}`$. However, this increase does not affect the color of the galaxy significantly.
2. When the galaxy approaches closer to the cluster center ($`\lesssim 1`$ Mpc from the center), the star formation rate of the disk component rapidly drops owing to ram-pressure stripping. As a result, the galaxy turns red and the disk becomes faint. However, the star formation rate of the bulge component does not change significantly.
3. On the other hand, in a cluster with low gas density and shallow potential well, or with a central pressure of $`10^{-3}\,\mathrm{cm}^{-3}\,\mathrm{keV}`$, the change of the star formation rate is small. In these clusters neither ram-pressure compression nor stripping is effective.
4. The color and luminosity changes induced by the ram-pressure effects can explain the color and morphology distributions and the evolution of galaxies observed in high-redshift clusters if the clusters have been built up by accretion of field spiral galaxies.

We would like to thank I. Smail for his useful comments. We are also grateful to an anonymous referee for improving this paper. This work was supported in part by the JSPS Research Fellowship for Young Scientists.
## 1 Introduction

NGC 3516 is a Seyfert 1 galaxy with a well-documented history of short-timescale, large-amplitude continuum and emission-line variability (Koratkar et al. 1996). Historically, IUE/SWP spectra of NGC 3516 have also displayed what has previously been referred to as a strong, variable, broad absorption line (VAL, FWHM $`\sim `$200 km s<sup>-1</sup>) (Kolman et al. 1993; Walter et al. 1990; Voit et al. 1987), presumed to be similar to that observed in Broad Absorption Line quasars (BALs, Weymann et al. 1991). The subsequent disappearance of the supposed VAL between 1989 and the onset of the 1993 IUE/SWP monitoring campaign (Koratkar et al. 1996) may be associated with the increased ionization state of the gas inferred from the observed variation in the strength of the X-ray warm absorber detected by ROSAT (Mathur et al. 1997). However, recent high-resolution UV observations of the C iv emission-line region in NGC 3516 with the HST/GHRS (Crenshaw et al. 1998) show that NGC 3516 displays multiple intrinsic absorption lines that are all blue-shifted and relatively narrow (FWHM $`\sim `$500 km s<sup>-1</sup>), which appear to remain invariant on timescales of $`\sim `$6 months, and which are thought to originate in the narrow-line region gas. In light of these observations, Goad et al. (1999) argue that the supposed VAL is in fact multiple narrow absorption-line systems which appear broad at the lower resolution of IUE. In this picture, the historical variation in the strength of the VAL results from variations in the number of narrow-line clouds coupled with changes in the shape of the underlying broad emission line. In an ongoing effort to determine the nature of the broad emission/absorption line regions and their long-term evolution, we obtained five HST/FOS Cycle 6 observations of NGC 3516, from 1995 December to 1996 November, covering the wavelength range $`\lambda \lambda `$1150–3300 Å. In this paper we present the first results of this study, highlighting the absence of emission-line flux variations in the broad Mg ii and UV Fe ii emission lines, and its implications for viable kinematic models of the broad-line region (BLR).

### 1.1 Results from the 1996 HST/FOS monitoring campaign

Goad et al. (1999) showed that during the 1996 HST/FOS monitoring campaign, NGC 3516 displayed the following continuum and emission-line variability characteristics: (i) large-amplitude, wavelength-dependent continuum flux variations (a factor of 5 at $`\lambda 1365`$ Å, cf. a factor of 3 at $`\lambda 2230`$ Å); (ii) correlated, large-amplitude, UV broad emission-line flux variations (a factor of 2 for both Ly$`\alpha `$ and C iv); and (iii) the complete absence of variations in both the strength and shape of the broad Mg ii and UV Fe ii emission-line complex (Figure 1). A comparison of the Mg ii emission-line strength determined from IUE/LWP observations taken in 1993 (Koratkar et al. 1996) with the present data shows that the flux in Mg ii has apparently remained constant for at least 4 years. The absence of significant variability in the Mg ii emission-line strength, despite large-amplitude continuum and high-ionization line (HIL) flux variations, has also been noted in lower resolution (pre-1989) IUE/LWP data (Koratkar et al. 1996), when the Mg ii emission-line strength was a factor of 2 weaker.
Interestingly, the observed doubling of the strength of the broad Mg ii emission line between 1989 and 1993 appears to coincide with the disappearance of the higher-velocity narrow absorption-line components comprising the so-called VAL. It remains to be seen whether these two phenomena are related. Significantly, despite the notable increase in the Mg ii emission-line strength between 1989 and 1993, its shape has remained unchanged for over a decade (Figure 2). Moreover, in the present campaign, the shapes of the broad Ly$`\alpha `$, C iv and Mg ii emission lines as observed on 1996 February 21, when the continuum was in its highest state, are indistinguishable from one another (Figure 3a; N.B. Ly$`\alpha `$ is not shown).

## 2 Implications for kinematic models of the BLR

A viable kinematic model of the BLR in NGC 3516 must explain: (i) the absence of both short-timescale ($`\sim `$months) and long-timescale ($`\sim `$a few years) variations in the Mg ii emission-line flux and shape, despite significant changes in the strength of the ionizing continuum; and (ii) the remarkable similarity between the line shapes of the HILs and low-ionization lines (LILs) at high continuum levels, when clearly these lines form under very different physical conditions.

### 2.1 Flux variability constraints

An absence of response in the Mg ii emission line can result for a variety of physical reasons: (i) the continuum band driving the line is invariant; (ii) the emission line is insensitive to continuum variations; and (iii) the line-emitting region is physically extended. Here we address the shortcomings of each of these scenarios. (i) While it is true that Mg ii and Ly$`\alpha `$ respond to very different continuum bands (Mg ii is mostly sensitive to the continuum from 600–800 eV (Krolik and Kallman 1988), whereas Ly$`\alpha `$ is driven mainly by Lyman continuum photons), simultaneous multi-wavelength observations of a small number of AGN in the UV, optical and soft X-ray band-passes indicate that the amplitude of the continuum variations generally increases toward shorter wavelengths (Romano and Peterson 1998). Although we cannot confirm whether such a trend holds at 600–800 eV, the Seyfert 1 galaxies for which simultaneous observations were made in the UV/EUV, NGC 5548 (Marshall et al. 1997) and NGC 4051 (Cagnoni et al. 1998), showed increased variability at shorter wavelengths. Thus, the evidence suggests that the lack of Mg ii and Fe ii variations is not due to an absence of continuum variability in the bandpass responsible for driving these lines. (ii) Goad et al. (1993, 1995) and O'Brien et al. (1995) demonstrated that the line responsivity $`\eta `$, the fractional change in line emissivity for a given fractional change in the continuum level, is relatively modest for Mg ii ($`\eta \sim 0.2`$–$`0.3`$), when compared to the approximately linear response ($`\eta \sim 1`$) of the HILs. This difference arises because, for a single cloud and a fixed continuum shape, the variation in the position of the hydrogen ionization front is generally small compared to the overall extent of the partially ionized zone where LILs such as Mg ii are formed. For a fixed continuum shape this condition holds true over a large range in ionizing continuum luminosity (factors of 10 or more). Although the Mg ii line responsivity is small, given the factor of 2 change in the C iv emission-line flux, we should have detected a $`\sim `$40% variation in the Mg ii emission-line flux.
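The expected amplitude follows directly from the responsivity. A minimal sketch, assuming the logarithmic definition $`\eta =d\mathrm{ln}L_{\mathrm{line}}/d\mathrm{ln}L_{\mathrm{cont}}`$ (so that $`L_{\mathrm{line}}\propto L_{\mathrm{cont}}^\eta `$) and the factor-of-5 continuum amplitude at $`\lambda `$1365 Å quoted above:

```python
# Expected Mg II variation for a factor-of-5 continuum change, assuming
# L_line ~ L_cont**eta (logarithmic responsivity -- an assumed definition).
for eta in (0.2, 0.3):
    amplitude = 5.0**eta - 1.0
    print(f"eta = {eta}: Mg II variation ~ {amplitude:.0%}")
# ~38% and ~62%, bracketing the ~40% variation argued for in the text,
# so the observed <7% limit is strongly discrepant.
```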
Since no significant ($`<`$7%) variation was detected, low emission-line responsivity cannot be the sole explanation for the absence of Mg ii emission-line variations. (iii) Finally, the lack of response in the Mg ii emission line can also result if the Mg ii line-emitting region is physically large, and has thus yet to respond to the observed continuum variations. While plausible, given the short-timescale response of the C iv emission line to the continuum variations ($`\sim `$4.5 days, Koratkar et al. 1996), it is difficult in this picture to explain the similarity in shape of the C iv and Mg ii emission-line profiles at high continuum levels, particularly if the sizes of the regions where these lines originate differ by a few orders of magnitude. The similar ionization potentials of Fe ii and Mg ii suggest that these lines are formed in similar regions; hence it is unsurprising that the Fe ii emission lines display a similar lack of response to continuum variations (N.B. relatively few of the Fe ii lines (although strong) are resonance lines; others are pumped by the continuum, especially in the presence of extra-thermal particle speeds). The smoothness of the Fe ii complex in NGC 3516, when compared to similar observations of I Zw 1 (Goad et al. 1999), a quasar with strong narrow Fe ii emission lines (Laor et al. 1997), supports this finding, and suggests that the Fe ii lines in NGC 3516 are indeed produced in high-velocity gas (FWHM $`\sim `$4500 km s<sup>-1</sup>).
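For orientation, the light-crossing scales at issue can be made explicit; the conversion below is elementary, and only the 4.5 day lag is taken from the text.

```python
# Size of the C IV emitting region implied by its ~4.5 day response time,
# R ~ c * tau, for comparison with the light-months required of Mg II.
c_cm_s = 2.998e10        # speed of light [cm/s]
day_s  = 86400.0         # one day [s]
R_civ = c_cm_s * 4.5 * day_s
print(f"R(C IV) ~ {R_civ:.1e} cm ~ {4.5/30.0:.2f} light-months")
# A Mg II region tens of times larger would smear and delay its response
# well beyond the ~1 yr span of the FOS campaign.
```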
## 3 Profile variability constraints

The similarity in shape between the HILs and LILs at the highest continuum levels suggests that in the high state both C iv and Mg ii arise in kinematically similar regions. However, if the lack of response of Mg ii is not the result of an absence of variability in the continuum band driving this line, then the difference in response timescale between the HILs and Mg ii suggests that these lines do arise in spatially distinct regions, with Mg ii formed in a region which extends over several light-months or more. Figure 3b indicates that between the low and high states, the core of the C iv emission line shows a deficit of response redward of line center from $`\sim `$0 to $`+`$4000 km s<sup>-1</sup> (even after allowing for possible contamination by the variable absorption-line and narrow emission-line components) and an enhanced response in the far red wing ($`>`$5000 km s<sup>-1</sup>). This may result from: (i) reverberation within a radial flow; (ii) a change in the radiation pattern of the ionizing continuum; or (iii) a change in the spatial distribution of the line-emitting gas. Taken together, this evidence places severe constraints upon the spatial distribution and kinematics of the line-emitting gas in NGC 3516. For example, based on these data the following models can be excluded:

* Radial flows at constant velocity, illuminated by either an isotropic or anisotropic continuum source (Goad and Knigge 1999).
* Radial flows in which Mg ii and C iv arise in azimuthally separated cloud populations, either through spatial segregation or anisotropic illumination.
* A spherical distribution of clouds in Keplerian motion, illuminated by an isotropic continuum source.

We propose two models which may account for the observed emission-line variations: (i) an anisotropic continuum source model and (ii) a terminal wind model (i.e. a flow that accelerates from rest until it reaches constant velocity). (i) The anisotropic continuum model of Wanders et al. (1995) can broadly reproduce the gross details of the observations reported here. In this model, the continuum is assumed to be comprised of two components: an anisotropic variable component responsible for driving the variations in the HILs, and an isotropic non-variable component responsible for producing the Mg ii and Fe ii emission. To match the observed variations, the BLR must be comprised of two distinct cloud populations, a low column density population producing significant HIL emission, and a spatially distinct high column density population producing both HILs and LILs. The variable ionizing continuum component preferentially illuminates the low column density population, giving rise to the variations in the Ly$`\alpha `$ and C iv emission lines, whereas the LILs arise predominantly from the high column density clouds illuminated by the non-variable isotropic continuum component. If the ionization parameter $`U`$ is large enough, and the EUV is not absorbed, these high column density clouds could also produce significant HIL emission. To account for the similarity in profile shape in the high state, we further assume that the variable ionizing continuum component becomes more isotropic when brighter, so that the C iv emission then arises predominantly from the high column density clouds. Since the FWHM of the low- and high-state C iv emission-line profiles is effectively the same, the velocity field cannot be predominantly radial, otherwise a different range in projected velocity for the HILs and LILs would result. Instead a randomized or chaotic flow is favored. While plausible, this model requires a high degree of fine-tuning to produce the detailed profile changes observed in the HILs. Hardest to explain are the reversals in HIL profile shape, from red-asymmetric at low continuum levels to blue-asymmetric at high continuum levels (Figure 3b). (ii) One possible scenario which may account for the above observations is a terminal wind model for the BLR<sup>1</sup><sup>1</sup>1The best available variability data on NGC 3516 (Wanders et al. 1993; Koratkar et al. 1996) suggest that the BLR kinematics are not dominated by radial motion, but may be consistent with a mixture of velocity components. A terminal wind is the one physical structure which can display the same range in projected velocities on both small and large spatial scales. The wind may be spherical or non-spherical. For a non-spherical wind, for example a bi-conical flow, similar profiles for the HILs and LILs will result regardless of their respective radial emissivity distributions, provided that the opening angle of the cone remains constant. However, to produce profiles similar to those observed here (i.e. broad wings and narrow cores), we require a strong azimuthal dependence of the velocity field, or a significant circularized velocity component in the flow (Goad and Knigge 1999). That is, the simplest terminal flow model, in which the velocity field is constant everywhere, cannot produce the correct profile shapes. However, hydromagnetic wind models (e.g. Emmering et al. 1992; Bottorff et al. 1997) do exhibit these basic properties. While the emission-line flux variations of the HILs are assumed to be driven by continuum variations, we propose that in this model the changes in profile shape at low continuum levels are in part due to changes in the formation radius of the HILs. Specifically, we propose that in the low state, conditions within the BLR gas are such that a large fraction of the C iv emission arises at the base of the wind, in the region where the gas begins to accelerate (Goad and Knigge 1999).
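The failure of the simplest terminal flow to produce a narrow core with broad wings can be illustrated with a toy Monte Carlo; this is a sketch of the argument, not the authors' model. For a thin spherical flow coasting at a single speed $`v_t`$ with isotropic emission, the line-of-sight velocity $`v_t\mathrm{cos}\theta `$ is uniformly distributed, giving a flat-topped profile:

```python
# Toy projected-velocity distribution of a constant-speed spherical flow.
import numpy as np

rng = np.random.default_rng(0)
v_t = 4000.0                              # terminal speed [km/s]; illustrative value
mu = rng.uniform(-1.0, 1.0, 100_000)      # cos(theta), isotropic emission
v_los = v_t * mu

hist, _ = np.histogram(v_los, bins=40, density=True)
print(hist.max() / hist.min())            # ~1: a flat top, with no narrow core
# An azimuthal/circularized velocity component (as in hydromagnetic wind
# models) is what reshapes the core relative to the wings.
```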
## 4 Conclusions

The picture we envisage is one in which the steady-state profile is dominated by the dynamics of the system, with lines formed in kinematically similar regions, possibly a terminal flow. The stability of the Mg ii profile on timescales of several years provides strong supporting evidence for the existence of such a structure. Moreover, the absence of significant flux variations in the Mg ii line on timescales of a few years suggests that this region is physically extended. The historic variation in the Mg ii emission-line flux, but not its profile, could be the result of an accretion event, a disk instability, or an increase in the wind density, and may be linked to the disappearance of the C iv VAL.

Acknowledgements. The results presented here are based upon observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. MRG and AK acknowledge research grant GO-6108 for financial support. We would also like to thank the referee for many helpful suggestions. MRG would also like to thank Christian Knigge for many useful discussions concerning disk-wind modeling.
# Genetic Correlations in Mutation Processes

## I Introduction

Biological evolution is influenced by a number of processes including population growth, mutation, extinction, and interaction with the environment, to name a few. Genetic sequences are strongly affected by such processes and thus provide an important clue to their nature. The ongoing effort of reconstructing evolution histories given the incomplete set of mapped sequences constitutes much of our current understanding of biological evolution. However, this challenge is extraordinary, as it involves an inverse problem with an enormous number of degrees of freedom. Statistical methods such as maximum likelihood techniques, coupled with simplifying assumptions on the nature of the evolution process, are typically used to infer the structure of the underlying evolutionary tree, i.e., the phylogeny. Genetic sequences such as RNA/DNA or amino acid sequences can be seen as words with letters taken from an alphabet of 4 or 20 symbols, respectively. Generally, there are nontrivial intra-sequence correlations that influence the evolution of the entire sequence. Additionally, the structure of the evolutionary tree plays a role in this process, as one generally expects that the closer sequences are on this tree, the more correlated they are. In this study, we are interested in describing the influence of the latter aspect, namely the phylogeny, on the evolution of sequences. Specifically, we examine correlations between sequences, thereby complementing related studies on changes in fluctuations and entropy due to the phylogeny. To this end, we consider particularly simple sequences and focus on a model that mimics the competition between the fundamental processes of mutation and duplication. The rest of this paper is organized as follows. In Sec. II, the model is introduced, and the main result is demonstrated using the pair correlations. Correlations of arbitrary order are obtained and analyzed asymptotically in Sec. III. To examine the range of validity of the results, generalizations to stochastic tree morphologies and sequences with larger alphabets are briefly discussed in Sections IV and V. Sec. VI discusses implications for multiple-site correlations in sequences with independently evolving sites. We conclude with a summary and a discussion in Sec. VII.

## II Pair Correlations

Let us formulate the model first. The sequences are taken to be of unit length and the corresponding alphabet consists of two letters. The numeric values $`\sigma =\pm 1`$ are conveniently assigned to these letters. We will focus on binary trees where the number of children equals two. This structure is deterministic in that both the number of children and the generation lifetime are fixed. Nevertheless, the results apply qualitatively to stochastic tree morphologies as well. Finally, the mutation process is implemented as follows: with probability $`1-p`$ a child equals its predecessor, while with probability $`p`$ a mutation occurs, as illustrated in Fig. 1. The mutation process is invariant under the transformation $`\sigma \to -\sigma `$ and $`p\to 1-p`$, and we restrict our attention to the case $`0\le p\le 1/2`$ without loss of generality. A natural question is how correlated are the various leaves (or nodes) of a tree in a given generation (or, equivalently, time)?
Consider $`G_2(k)`$, the average correlation between two nodes at the $`k`$th generation
$$G_2(k)=\langle \langle \sigma _i\sigma _j\rangle \rangle .$$ (1)
The first average is taken over all realizations for a fixed pair of nodes $`i`$, $`j`$, while the second average is taken over all different pairs belonging to the same generation. For example, consider this quantity at the second generation (see Fig. 1), $`G_2(2)=[\langle \sigma _3\sigma _4\rangle +\langle \sigma _3\sigma _5\rangle +\langle \sigma _3\sigma _6\rangle ]/3`$. One index ($`i=3`$) may be fixed since all nodes in a given generation are equivalent. To evaluate averages, it is useful to assign a multiplicative random variable $`\tau _i=\pm 1`$ to every branch of the tree such that $`\sigma _i=\sigma _j\tau _i`$, with $`j`$ the predecessor of $`i`$. One has $`\tau _i=1`$ ($`-1`$) with probability $`1-p`$ ($`p`$), and consequently,
$$\langle \tau \rangle \equiv \langle \tau _i\rangle =1-2p.$$ (2)
Pair correlations are readily calculated using the $`\tau `$ variables: writing $`\sigma _3=\sigma _0\tau _1\tau _3`$ and similarly for $`\sigma _4`$ gives $`\langle \sigma _3\sigma _4\rangle =\langle \sigma _0\tau _1\tau _3\,\sigma _0\tau _1\tau _4\rangle =\langle \sigma _0^2\tau _1^2\tau _3\tau _4\rangle `$. Since $`\sigma _i^2=\tau _i^2=1`$, this correlation simplifies to $`\langle \sigma _3\sigma _4\rangle =\langle \tau _3\tau _4\rangle `$. Furthermore, mutation processes on different branches are independent, and consequently $`\langle \tau _i\tau _j\rangle =\langle \tau _i\rangle \langle \tau _j\rangle `$ when $`i\ne j`$. Thus, $`\langle \sigma _3\sigma _4\rangle =\langle \tau \rangle ^2`$ and similarly $`\langle \sigma _3\sigma _5\rangle =\langle \sigma _3\sigma _6\rangle =\langle \tau \rangle ^4`$. The overall picture becomes clear: when calculating two-point correlations, the path to the tree root is traced for each node. As $`\tau ^2=1`$, doubly counted branches cancel. Only branches that trace the path to the first common ancestor are relevant. In other words,
$$\langle \sigma _i\sigma _j\rangle =\langle \tau \rangle ^{d_{i,j}}$$ (3)
with $`d_{i,j}`$ the “genetic distance” between two nodes, the minimal number of branches that connect them. Indeed, at the second generation $`d_{3,4}=2`$, $`d_{3,5}=d_{3,6}=4`$, and consequently $`G_2(2)=(\alpha ^2+2\alpha ^4)/3`$, with the shorthand notation $`\alpha =\langle \tau \rangle =1-2p`$. This generalizes into a geometric series $`G_2(k)=(\alpha ^2+2\alpha ^4+\mathrm{}+2^{k-1}\alpha ^{2k})/(2^k-1)`$. Evaluating this sum gives the pair correlation
$$G_2(k)=\frac{\alpha ^2}{2\alpha ^2-1}\,\frac{(2\alpha ^2)^k-1}{2^k-1}.$$ (4)
Interestingly, pair correlations are not affected by the initial state, i.e., the value of the tree root. For sufficiently large generation numbers, the leading order of the pair correlation decays exponentially with the generation number. However, different constants characterize this decay, depending on the mutation probability:
$$G_2(k)\simeq \begin{cases}\frac{\alpha ^2}{2\alpha ^2-1}\,\alpha ^{2k}& p<p_c;\\ \frac{\alpha ^2}{1-2\alpha ^2}\,2^{-k}& p>p_c.\end{cases}$$ (5)
As seen from Eq. (4), the transition between the two different behaviors occurs when $`2\alpha ^2=1`$, or alternatively at the following mutation probability
$$p_c=\frac{1}{2}\left(1-\frac{1}{\sqrt{2}}\right).$$ (6)
Although in general correlations decay exponentially, $`G_2(k)\sim \beta ^{2k}`$, the decay constant $`\beta `$ exhibits two distinct behaviors, depending on the mutation probability. When the mutation probability is smaller than the critical one, $`p<p_c`$, then $`\beta =\alpha `$, while in the complementary case $`\beta =1/\sqrt{2}`$.
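The closed form (4), and the two regimes of Eq. (5), are easy to verify by direct simulation of the model; a minimal sketch:

```python
# Monte Carlo check of the pair correlation (4) on a binary tree.
import itertools, random

def leaf_values(k, p, rng):
    """Values of the 2**k generation-k nodes; each branch flips sigma with prob. p."""
    sigma = [1]                                   # tree root, sigma_0 = +1
    for _ in range(k):
        sigma = [s * (-1 if rng.random() < p else 1)
                 for s in sigma for _ in (0, 1)]  # two children per node
    return sigma

def G2_monte_carlo(k, p, trials=5000, seed=1):
    rng, total, npairs = random.Random(seed), 0, 0
    for _ in range(trials):
        s = leaf_values(k, p, rng)
        for i, j in itertools.combinations(range(len(s)), 2):
            total += s[i] * s[j]
            npairs += 1
    return total / npairs

def G2_exact(k, p):
    a2 = (1 - 2 * p) ** 2                         # alpha^2
    return a2 / (2 * a2 - 1) * ((2 * a2) ** k - 1) / (2 ** k - 1)

k, p = 5, 0.1
print(G2_monte_carlo(k, p), G2_exact(k, p))       # agree to Monte Carlo accuracy
```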
At the $`k`$th generation, the path to each node involves $`k`$ branches and thus $`G_1(k)=G_1(0)\alpha ^k`$, with $`G_1(0)=\sigma _0`$. Writing $`G_1(k)\sim \beta ^k`$, one has $`\beta =\alpha `$ for all mutation probabilities, in contrast with the asymptotic behavior of $`G_2(k)`$. Below the critical mutation rate, $`G_2(k)\simeq [G_1(k)/G_1(0)]^2`$, indicating that knowledge of the one-point average suffices to characterize correlations. In fact, the above behavior can be attributed to the tree morphology. To see that, it is useful to consider a structureless morphology where the only ancestor shared by two nodes is the tree root itself (see Fig. 2). Using the notation $`G^{\ast }`$ to denote correlations on this “star” morphology, we see that the average remains unchanged, $`G_1(k)=G_1^{\ast }(k)=G_1^{\ast }(0)\alpha ^k`$. The star morphology is trivial in that all genetic distances are equal: $`d_{i,j}=2k`$ when $`i\ne j`$. Thus, pair correlations are immediately obtained from the average, $`G_2^{\ast }(k)=[G_1^{\ast }(k)/G_1^{\ast }(0)]^2=\alpha ^{2k}`$. As branches in the star morphology do not interact, no correlations develop. In contrast, nontrivial phylogenies do induce correlations. Indeed, $`G_2(k)>G_2^{\ast }(k)`$ when $`p>0`$. Interestingly, when $`p<p_c`$, merely the asymptotic prefactor $`\alpha ^2/(2\alpha ^2-1)>1`$ in Eq. (5) is enhanced and $`G_2(k)\sim G_2^{\ast }(k)`$. As the critical point is approached, this constant diverges, thereby signaling the transition into a second regime. When $`p>p_c`$, the decay constant itself is enhanced and the ratio $`G_2(k)/G_2^{\ast }(k)`$ grows exponentially. The mutation probability affects only the asymptotic prefactor, and the decay constant $`\beta =1/\sqrt{2}`$ is determined by the tree morphology. We conclude that the nontrivial phylogeny generates significant correlations for larger than critical mutation probabilities. This behavior can be understood and partially rederived using a heuristic argument. Genetically close nodes are highly correlated, while distant pairs are weakly correlated, as indicated by Eq. (3). On the other hand, distant pairs are more numerous. Both effects are magnified exponentially for large generation numbers, and their competition results in a critical point. Different mechanisms dominate on different sides of this point. Specifically, the number of minimal genetic distance pairs ($`d=2`$) is $`2^{k-1}`$, while the number of maximal distance pairs ($`d=2k`$) is $`2^{2(k-1)}`$. The rule (3) gives the relative contributions of these two terms to the overall two-point correlation: $`2^{k-1}\alpha ^2`$ versus $`2^{2(k-1)}\alpha ^{2k}`$. Comparing these two terms in the limit $`k\to \mathrm{\infty }`$ correctly reproduces the most relevant aspects, i.e., the location of the critical point (6) and the decay constants of Eq. (5). We conclude that competition between the multiplicity and the degree of correlation of close and distant nodes underlies the transition.

## III Higher Order Correlations

The above analysis gives useful intuition for the overall qualitative behavior. Yet, it can be generalized into a more complete treatment that addresses correlations of arbitrary order. This set of quantities is helpful in determining the extent to which this picture applies, and in particular, whether the transition is actually a phase transition. Multiple-point correlations obey a rule similar to Eq. (3). For example, consider the four-node average $`\langle \sigma _3\sigma _4\sigma _5\sigma _6\rangle `$ in Fig. 1.
Using the $`\tau `$ variables, we rewrite $`\langle \sigma _3\sigma _4\sigma _5\sigma _6\rangle =\langle \sigma _0^4\tau _1^2\tau _2^2\tau _3\tau _4\tau _5\tau _6\rangle `$, and since $`\sigma ^2=\tau ^2=1`$ we get $`\langle \sigma _3\sigma _4\sigma _5\sigma _6\rangle =\langle \tau _3\tau _4\tau _5\tau _6\rangle =\langle \tau \rangle ^4`$, or $`\langle \sigma _3\sigma _4\sigma _5\sigma _6\rangle =\langle \sigma _3\sigma _4\rangle \langle \sigma _5\sigma _6\rangle `$. The four-point average equals a product of two-point averages with the indices chosen so as to minimize the total number of branches. This can also be seen by tracing the path of each node to the tree root and canceling doubly counted branches. Thus, Eq. (3) generalizes as follows:
$$\langle \sigma _i\sigma _j\sigma _k\sigma _l\rangle =\langle \tau \rangle ^{d_{i,j,k,l}},$$ (7)
with the four-point genetic distance
$$d_{i,j,k,l}=\mathrm{min}\{d_{i,j}+d_{k,l},\,d_{i,k}+d_{j,l},\,d_{i,l}+d_{j,k}\}.$$ (8)
Similarly, the rule for arbitrary order averages is $`\langle \tau \rangle `$ raised to a power equal to the $`n`$-point genetic distance. Such a distance is obtained by considering all possible decompositions into pairs of nodes; the genetic distance is the minimal sum of the corresponding pair distances. Averages over an odd number of nodes can be obtained by adding a “pseudo” node at the root of the tree and using the convention $`d_{i,\mathrm{root}}=k`$ when $`i`$ belongs to the $`k`$th generation. The average $`\langle \sigma _0\rangle `$ is generated by the root, and this factor multiplies all odd order correlations. Since even order correlations are independent of the root value, and odd correlations are simply proportional to $`\sigma _0`$, we set $`\sigma _0=1`$ in what follows without loss of generality. The average $`n`$-point correlation is defined as follows
$$G_n(k)=\langle \langle \sigma _{i_1}\sigma _{i_2}\mathrm{}\sigma _{i_n}\rangle \rangle ,$$ (9)
where the averages are taken over all realizations and over all possible choices of $`n`$ distinct nodes at the $`k`$th generation. For the trivial star phylogeny, the $`n`$-point genetic distance is constant and equals the product of the correlation order and the generation number, $`d=nk`$. Consequently, all averages are trivial, as knowledge of the one-point average immediately gives all higher-order averages, $`G_n^{\ast }(k)=[G_1^{\ast }(k)]^n`$, or explicitly
$$G_n^{\ast }(k)=\alpha ^{nk}.$$ (10)
When the tree morphology is nontrivial, the minimal-sum rules (7)–(8) imply that such factorization no longer holds. For binary trees, it is possible to obtain these correlations recursively. Let us assign the indices $`1,2,\mathrm{},2^k`$ to the $`k`$th generation nodes and order them as $`1\le i_1<i_2<\mathrm{}<i_n\le 2^k`$. As the average over the realizations is performed first, the average correlation requires a summation over all possible choices of nodes
$$F_n(k)=\sum _{1\le i_1<i_2<\mathrm{}<i_n\le 2^k}\langle \sigma _{i_1}\sigma _{i_2}\mathrm{}\sigma _{i_n}\rangle .$$ (11)
Proper normalization gives the $`n`$-node correlation
$$G_n(k)=F_n(k)/\binom{2^k}{n}.$$ (12)
Consider a group of $`n`$ nodes taken from the $`k`$th generation. They all share the tree root as a common ancestor. The two first-generation nodes naturally divide this group into two independently evolving subgroups. This partitioning procedure allows a recursive calculation of the correlations. Formally, a given choice of nodes $`1\le i_1<i_2<\mathrm{}<i_n\le 2^k`$ is partitioned into two subgroups as follows: $`1\le i_1<\mathrm{}<i_m\le 2^{k-1}`$ and $`2^{k-1}+1\le i_{m+1}<\mathrm{}<i_n\le 2^{k-1}+2^{k-1}`$.
These subgroups involve different $`\tau `$ variables, so their correlations factorize
$$\langle \sigma _{i_1}\mathrm{}\sigma _{i_n}\rangle \propto \langle \sigma _{i_1}\mathrm{}\sigma _{i_m}\rangle \langle \sigma _{i_{m+1}}\mathrm{}\sigma _{i_n}\rangle .$$ (13)
The proportionality constant depends upon the parity of $`m`$ and $`n-m`$. Even correlations are independent of the tree root, while odd correlations are proportional to the average value of the tree root. This extends to sub-trees as well, and since $`\sigma _0=1`$, the average value of the root of both sub-trees is $`\langle \tau \rangle `$. This factor accompanies all odd correlations. Substituting Eq. (13) into Eq. (11) shows that the summation factorizes as well. Using $`F_m(k-1)=\sum _{1\le i_1<\mathrm{}<i_m\le 2^{k-1}}\langle \sigma _{i_1}\mathrm{}\sigma _{i_m}\rangle `$ reduces the problem to two sub-trees that are one generation shorter, and a recursion relation for $`F_n(k)`$ emerges
$$F_n(k)=\sum _{m=0}^nF_m(k-1)B_mF_{n-m}(k-1)B_{n-m},$$ (14)
with the boundary conditions $`F_n(0)=\delta _{n,0}+\delta _{n,1}`$. The summation corresponds to the $`n+1`$ possible partitions of a group of $`n`$ nodes into two subgroups. The weight of the odd correlations is accounted for by $`B_n`$,
$$B_n=\begin{cases}1& n=2r;\\ \langle \tau \rangle & n=2r+1.\end{cases}$$ (15)
Using the definition (11), the sums $`F_n(k)`$ vanish whenever $`n>2^k`$. This behavior emerges from the recursion relations as well. Additionally, one can check that the sums are properly normalized in the no-mutation case ($`\alpha =1`$): $`F_n(k)=\binom{2^k}{n}`$ when $`n\le 2^k`$. For sufficiently small $`n`$, it is possible to evaluate the sums explicitly using Eq. (14). The average correlations are then found using Eq. (12):
$$G_0(k)=1,$$ (16)
$$G_1(k)=\alpha ^k,$$ (17)
$$G_2(k)=\frac{\alpha ^2}{2\alpha ^2-1}\,\frac{(2\alpha ^2)^k-1}{2^k-1},$$ (18)
$$G_3(k)=\frac{3\alpha ^{k+2}}{2\alpha ^2-1}\,\frac{\frac{(4\alpha ^2)^k-4\alpha ^2}{4\alpha ^2-1}-(2^k-2)}{(2^k-1)(2^k-2)}.$$ (19)
Indeed, these quantities agree with the previous results for $`n=1`$, $`2`$ and equal unity when $`p=0`$. We see that correlations involve a sum of exponentials. Furthermore, it appears that the condition $`2\alpha ^2=1`$ still separates two different regimes of behavior. However, calculating higher correlations explicitly is not feasible, as the expressions become involved for large $`n`$. Instead, we perform an asymptotic analysis that more clearly exposes the leading large-generation-number behavior. Let us consider first the regime $`p<p_c`$, or equivalently $`2\alpha ^2>1`$. From Eqs. (16)–(19), we see that the leading large-$`k`$ behavior of the average correlation satisfies $`G_n(k)\sim \alpha ^{nk}`$ for $`n=0,1,2`$, and $`3`$. We will show below that this behavior extends to higher order correlations, i.e.,
$$G_n(k)\simeq g_n\alpha ^{nk}.$$ (20)
In other words, the limit $`\alpha =\lim _{k\to \mathrm{\infty }}[G_n(k)]^{1/nk}`$ exists and is independent of $`n`$. As correlations are larger when the phylogeny is nontrivial, one expects that $`G_n(k)\ge G_n^{\ast }(k)`$ or, in terms of the prefactors, $`g_n\ge g_n^{\ast }=1`$. Combining Eq. (12) with the leading behavior of the combinatorial normalization constant, $`\binom{2^k}{n}\simeq 2^{nk}/n!`$, gives the asymptotic behavior of the sums
$$F_n(k)\simeq f_n(2\alpha )^{nk},\quad \mathrm{with}\quad f_n=\frac{g_n}{n!}.$$ (21)
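Before extracting the asymptotics, the recursion (14) with the normalization (12) can be iterated numerically and checked against the closed forms (16)–(19); a minimal sketch:

```python
# Exact iteration of the recursion (14)-(15), normalized via (12).
from math import comb

def G(n, k, p):
    a = 1 - 2 * p                                  # alpha = <tau>
    B = lambda m: 1 if m % 2 == 0 else a           # weights of Eq. (15)
    F = [1, 1]                                     # F_n(0) = delta_{n,0} + delta_{n,1}
    for _ in range(k):
        top = 2 * (len(F) - 1)                     # largest nonzero index doubles
        F = [sum(F[m] * B(m) * F[j - m] * B(j - m)
                 for m in range(max(0, j - len(F) + 1), min(j, len(F) - 1) + 1))
             for j in range(top + 1)]
    return F[n] / comb(2 ** k, n)

p, k = 0.2, 4
a2 = (1 - 2 * p) ** 2
print(G(2, k, p))                                  # matches Eq. (18):
print(a2 / (2 * a2 - 1) * ((2 * a2) ** k - 1) / (2 ** k - 1))
```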
Substituting Eq. (21) into the recursion relation (14) eliminates the dependence on the generation number $`k`$, and a recursion relation for the coefficients $`f_n`$ is found
$$f_n(2\alpha )^n=\sum _{m=0}^nf_mB_mf_{n-m}B_{n-m},$$ (22)
with $`B_n`$ of Eq. (15). These recursion relations are consistent with the conditions $`f_0=f_1=1`$. The case $`n=2`$ reproduces the coefficient $`f_2=\alpha ^2/[(2\alpha )^2-2]`$. The divergence at $`2\alpha ^2=1`$ indicates that the ansatz (20) breaks down at the critical point. To show that the ansatz holds in the entire range $`0\le p<p_c`$, one has to show that the coefficients $`f_n`$ are positive and finite for all $`n`$. Rewriting the recursion (22) explicitly as $`f_n[(2\alpha )^n-2B_n]=\sum _{m=1}^{n-1}f_mB_mf_{n-m}B_{n-m}`$ allows us to prove this. Since $`f_0=f_1=1>0`$, to complete a proof by induction one needs to show that the positivity of $`f_m`$ for all $`m<n`$ implies a positive $`f_n`$. The right-hand side of the recursion is clearly positive, and thus the positivity of $`f_n`$ hinges on the positivity of the term $`(2\alpha )^n-2B_n`$. When $`2\alpha ^2>1`$, then $`\alpha >1/\sqrt{2}`$ and certainly $`2\alpha >1`$. Combining this with the inequality $`(2\alpha )^2>2\ge 2B_n`$ shows that $`(2\alpha )^n-2B_n>0`$ when $`n\ge 2`$. Hence $`f_n`$ is positive and finite for all $`n`$, which validates the ansatz (20) in the regime $`p<p_c`$. In principle, the coefficients can be found by introducing the generating function
$$f(z)=\sum _nf_nz^n.$$ (23)
Multiplying Eq. (22) by $`z^n`$ and summing over $`n`$ yields the following equation for the generating function
$$f(2\alpha z)=\left[\frac{f(z)+f(-z)}{2}+\alpha \frac{f(z)-f(-z)}{2}\right]^2.$$ (24)
This equation reflects the structure of the recursion relations. A factor $`\alpha `$ is generated by each odd-index coefficient, and as a result, the odd part of the generating function, $`[f(z)-f(-z)]/2=f_1z+f_3z^3+\mathrm{}`$, is multiplied by $`\alpha `$. Although a general solution of this equation appears rather difficult, it is still possible to obtain results in the limiting cases. It is useful to check that when $`\alpha =1`$, the above equation reads $`f(2z)=f^2(z)`$, which together with the boundary conditions $`f_0=f_1=1`$ gives $`f(z)=\mathrm{exp}(z)`$, or $`f_n=\frac{1}{n!}`$. As $`g_n\to 1`$, the trivial correlations are recovered, $`G_n\to G_n^{\ast }`$, indicating that the role played by the tree morphology diminishes in the no-mutation limit. In the limit $`p\to p_c^{-}`$ it is possible to extract the leading behavior of the asymptotic prefactors. Here, it is sufficient to keep only the highest powers of the diverging term $`1/(2\alpha ^2-1)`$. The calculation in this case is identical to the one detailed below for the case $`p>p_c`$, and we simply quote the results
$$G_n(k)\simeq \begin{cases}\frac{(2r)!}{r!}\left[\frac{\alpha ^2}{2(2\alpha ^2-1)}\right]^r\alpha ^{nk}& n=2r;\\ \frac{(2r+1)!}{r!}\left[\frac{\alpha ^2}{2(2\alpha ^2-1)}\right]^r\alpha ^{nk}& n=2r+1.\end{cases}$$ (25)
In this limit, the odd order correlations simply follow from their even counterparts; for example, $`f_{2r+1}=f_{2r}`$.
In the complementary case $`p>p_c`$, it proves useful to rewrite the recursion relation (14) for the even and odd correlations separately:
$$F_{2r}(k)=\sum _{s=0}^rF_{2s}(k-1)F_{2r-2s}(k-1)$$ (26)
$$+\alpha ^2\sum _{s=0}^{r-1}F_{2s+1}(k-1)F_{2r-2s-1}(k-1),$$ (27)
$$F_{2r+1}(k)=2\alpha \sum _{s=0}^rF_{2s}(k-1)F_{2r-2s+1}(k-1).$$ (28)
The leading asymptotic behavior of Eqs. (16)–(19) implies $`F_0(k)=f_0`$, $`F_1(k)\simeq f_0(2\alpha )^k`$, $`F_2(k)\simeq f_2\,2^k`$, and $`F_3(k)\simeq f_2\,2^k(2\alpha )^k`$, with $`f_0=1`$ and $`f_2=\alpha ^2/[2(1-2\alpha ^2)]`$. Let us assume that this even–odd pattern is general:
$$F_{2r}(k)\simeq f_{2r}2^{rk},$$ (29)
$$F_{2r+1}(k)\simeq f_{2r}2^{rk}(2\alpha )^k.$$ (30)
Substituting this ansatz into Eq. (26) shows that the second summation in the recursion for the even correlations is negligible asymptotically. Both equations reduce to
$$f_{2r}2^r=\sum _{s=0}^rf_{2s}f_{2r-2s},$$ (31)
and therefore the pattern (29) holds when $`p>p_c`$. It is seen that odd correlators are enslaved to the even ones. To obtain the coefficients, we introduce the generating function $`f(z)=\sum _rf_{2r}z^{2r}`$, which satisfies $`f(0)=1`$, $`f^{}(0)=0`$ and $`f^{\prime \prime }(0)=2f_2=\alpha ^2/(1-2\alpha ^2)`$. The recursion relation translates into the following equation for $`f(z)`$
$$f\left(\sqrt{2}z\right)=[f(z)]^2.$$ (32)
Its solution is $`f(z)=\mathrm{exp}[\alpha ^2z^2/2(1-2\alpha ^2)]`$. Thus, $`f_{2r}=\frac{1}{r!}[f_2]^r`$. From Eqs. (20)–(21), the leading asymptotic behavior in the regime $`p_c<p<1/2`$ is found
$$G_n(k)\simeq \begin{cases}\frac{(2r)!}{r!}\left[\frac{\alpha ^2}{2(1-2\alpha ^2)}\right]^r2^{-kr}& n=2r;\\ \frac{(2r+1)!}{r!}\left[\frac{\alpha ^2}{2(1-2\alpha ^2)}\right]^r\alpha ^k2^{-kr}& n=2r+1.\end{cases}$$ (33)
Using the Stirling formula $`n!\simeq \sqrt{2\pi n}\,n^ne^{-n}`$, it is seen that the coefficients $`g_{2r}`$ have a nontrivial $`r`$ dependence, $`g_{2r}=g_{2r+1}/(2r+1)\simeq \sqrt{2}\,[2\alpha ^2/(1-2\alpha ^2)]^r(r/e)^r`$. The even order correlations have asymptotic behavior identical to the two-point correlation: $`\lim _{k\to \mathrm{\infty }}[G_{2r}(k)]^{1/2rk}=\frac{1}{\sqrt{2}}`$ for all $`r`$. The odd order correlations behave differently, however, as this limit depends on the correlation order: $`\lim _{k\to \mathrm{\infty }}[G_{2r+1}(k)]^{1/(2r+1)k}=\frac{1}{\sqrt{2}}(\sqrt{2}\alpha )^{1/(2r+1)}`$. Thus, only in the limit $`r\to \mathrm{\infty }`$ do the even and odd order correlations agree. However, this conclusion is misleading, since the decay rate of the (properly normalized) odd order correlations, $`G_{2r+1}(k)/G_1(k)\propto G_{2r}(k)`$, is identical to that of the even order correlations. We conclude that the decay rate of two-point correlations characterizes the decay of all higher order correlations. From Eqs. (25) and (33), we see that the coefficients diverge according to
$$f_{2r}=f_{2r+1}\sim |p_c-p|^{-r}$$ (34)
as the critical point is approached, $`p\to p_c`$. Since the correlations must remain finite, this indicates that the purely exponential behavior must be modified when $`p=p_c`$. Indeed, evaluating Eqs. (18) and (19) at $`p=p_c`$ yields $`F_2(k)\simeq f_22^k`$ and $`F_3(k)\simeq f_22^{3k/2}`$ with $`f_2=k/4`$, i.e., the even–odd pattern of Eq. (29) is reproduced. Furthermore, the value of $`f_2`$ shows that the diverging quantity $`1/|1-2\alpha ^2|`$ is simply replaced by $`k`$. This implies that the coefficients become generation dependent, $`f_n\to f_n(k)`$.
Assuming the pattern (29), substituting it into Eq. (26), and following the steps that led to Eq. (33) yields the critical behavior
$$G_n(k)\simeq \begin{cases}\frac{(2r)!}{r!}\left[\frac{k}{4}\right]^r2^{-kr}& n=2r;\\ \frac{(2r+1)!}{r!}\left[\frac{k}{4}\right]^r2^{-k(r+1/2)}& n=2r+1.\end{cases}$$ (35)
Generally, the diverging quantity $`1/|1-2\alpha ^2|`$ is replaced with the finite (but ever-growing) quantity $`k`$. The algebraic modification to the leading exponential behavior in Eq. (35) is reminiscent of the logarithmic corrections that typically characterize critical behavior in second order phase transitions.

## IV Stochastic Tree Morphologies

The question arises: how general is the behavior described above? The binary tree considered was particularly simple, as it involved a fixed number of children and a fixed generation lifetime. Below we show that relaxing either of these conditions does not affect the nature of the results. Let us first consider tree morphologies with a varying number of children, i.e., trees generated by a stochastic branching process where with probability $`P_r`$ there are $`r`$ children. This probability sums to unity, $`\sum _rP_r=1`$, and the average number of children is given by $`\langle r\rangle =\sum _rrP_r`$. As a result, the average number of nodes at the $`k`$th generation is $`\langle r\rangle ^k`$, indicating that the tree “survives” only if $`\langle r\rangle >1`$, a classical result of branching process theory. The rule (3) is independent of the tree morphology, and therefore one can repeat the heuristic argument of Sec. II. The extreme contributions to the average pair correlation have the relative weights $`\langle r\rangle ^{k-1}\alpha ^2`$ and $`\langle r\rangle ^{2(k-1)}\alpha ^{2k}`$. Comparing these two terms asymptotically shows that the critical point is a simple generalization of Eq. (6):
$$p_c=\frac{1}{2}\left(1-\sqrt{\frac{1}{\langle r\rangle }}\right).$$ (36)
The critical mutation rate varies from $`0`$ to $`1/2`$ as the average ancestry size varies between $`1`$ and $`\mathrm{\infty }`$. This indicates that correlations are significant over a larger range of mutation rates for smaller trees. The heuristic argument also gives the decay constant $`\beta `$, and the leading asymptotic behavior of Eq. (5) is generalized by simply replacing $`2`$ with $`\langle r\rangle `$. A more complete treatment of this problem is actually possible and closely follows Eq. (4); again, the average ancestry size $`\langle r\rangle `$ replaces the deterministic value $`2`$. As both the results and the overall behavior closely follow the deterministic case, we do not detail them here.
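Equation (36) is trivial to evaluate; the sketch below shows how the critical point moves with the mean ancestry size.

```python
# Critical mutation probability of Eq. (36) versus the mean number of children.
from math import sqrt

def p_c(r_mean):
    return 0.5 * (1.0 - sqrt(1.0 / r_mean))

for r in (1.2, 2.0, 4.0, 100.0):
    print(r, round(p_c(r), 4))
# r = 2 recovers the binary-tree value (1 - 1/sqrt(2))/2 ~ 0.1464 of Eq. (6);
# p_c -> 0 as r -> 1 and p_c -> 1/2 as r -> infinity.
```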
A second possible generalization is to morphologies with a varying generation lifetime. Such tree morphologies can be realized by considering a continuous time variable. Branching is assumed to occur with a constant rate $`\nu `$. For such tree morphologies, the number of nodes $`n(t)`$ obeys $`\dot{n}(t)=\nu n(t)`$, which gives an exponential growth $`n(t)=e^{\nu t}`$. Similarly, the mutation process is assumed to occur with a constant rate $`\gamma `$. A useful characteristic of this process is the autocorrelation $`A(t)=\langle \sigma (0)\sigma (t)\rangle `$. To evaluate its evolution, we note that $`A(t+dt)=(1-\gamma dt)A(t)-\gamma dt\,A(t)`$ when $`dt\to 0`$. Therefore, $`\dot{A}(t)=-2\gamma A(t)`$, and one finds $`A(t)=e^{-2\gamma t}`$. The quantities $`n(t)`$ and $`A(t)`$ allow calculation of the average pair correlation. Let us pick two nodes at time $`t`$, denote their values by $`\sigma _i(t)`$ and $`\sigma _j(t)`$, and let the genetic distance between these two nodes be $`\tau `$. Using their first common ancestor $`\sigma _c(t-\tau )=\sigma _i(t-\tau )=\sigma _j(t-\tau )`$ and the identity $`\sigma ^2=1`$, their correlation can be evaluated as follows: $`\langle \sigma _i(t)\sigma _j(t)\rangle =\langle \sigma _i(t)\sigma _c(t-\tau )\sigma _c(t-\tau )\sigma _j(t)\rangle =\langle \sigma _i(t)\sigma _i(t-\tau )\rangle \langle \sigma _j(t)\sigma _j(t-\tau )\rangle =A^2(\tau )`$. Integrating over all possible genetic distances gives the average pair correlation
$$G_2(t)=\frac{\int _0^td\tau \,n(\tau )A^2(\tau )}{\int _0^td\tau \,n(\tau )}.$$ (37)
The factor $`n(\tau )/\int _0^td\tau \,n(\tau )`$ accounts for the multiplicity of pairs with genetic distance $`\tau `$. Using $`A^2(\tau )=e^{-4\gamma \tau }`$ and $`n(\tau )=e^{\nu \tau }`$, the average pair correlation is evaluated as
$$G_2(t)=\frac{\nu }{\nu -4\gamma }\,\frac{e^{(\nu -4\gamma )t}-1}{e^{\nu t}-1}.$$ (38)
For the star phylogeny the genetic distance is always $`t`$, and therefore $`G_2^{\ast }(t)=e^{-4\gamma t}`$. Here the relevant parameter is the normalized mutation rate $`\omega =\gamma /\nu `$. Again, there exists a critical point, $`\omega _c=1/4`$. For smaller than critical mutation rates, $`\omega <\omega _c`$, correlations due to the tree morphology are not pronounced, $`G_2(t)\sim G_2^{\ast }(t)`$. On the other hand, when $`\omega >\omega _c`$, strong correlations are generated and $`G_2(t)\sim e^{-\nu t}`$ is exponentially larger than $`G_2^{\ast }(t)`$. We conclude that the behavior found for the deterministic case is robust.
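The crossover at $`\omega _c=1/4`$ is easily seen by evaluating Eq. (38) directly; a minimal sketch (with $`\nu =1`$ setting the time unit):

```python
# Eq. (38) versus the star-morphology result e^{-4*gamma*t}.
import math

def G2_tree(t, gamma, nu=1.0):
    a = nu - 4.0 * gamma
    return nu / a * (math.exp(a * t) - 1.0) / (math.exp(nu * t) - 1.0)

t = 20.0
for gamma in (0.1, 0.5):                 # omega = 0.1 < 1/4 and omega = 0.5 > 1/4
    star = math.exp(-4.0 * gamma * t)
    print(gamma, G2_tree(t, gamma) / star)
# O(1) ratio below omega_c; exponentially large enhancement above it.
```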
## V Multistate Sequences

We now consider larger alphabets. Previously, the two states satisfied $`\sigma ^2=1`$. A natural generalization is to $`\sigma ^n=1`$, i.e., the $`n`$th order roots of unity $`\sigma =e^{i2\pi l/n}`$ with $`l=0,1,\mathrm{},n-1`$. Previously, with probability $`p`$ the mutation $`\sigma \to \tau \sigma `$ occurred with $`\tau =e^{i\theta }`$ and $`\theta =\pi `$. We thus impose the same transition but with $`\theta =2\pi /n`$. This can be viewed as a rotation in the complex plane by an angle $`\theta `$. Since the states are now complex, the definition of the pair correlation becomes
$$G_2(k)=\langle \langle \overline{\sigma }_i\sigma _j\rangle \rangle ,$$ (39)
with $`\overline{\sigma }`$ the complex conjugate of $`\sigma `$. The real part of $`\langle \overline{\sigma }_i\sigma _j\rangle `$ gives the inner product of the two-dimensional vectors corresponding to $`\sigma _i`$ and $`\sigma _j`$, respectively. Consider the average $`\langle \overline{\sigma }_3\sigma _4\rangle `$ in Fig. 1. Using the $`\tau `$ variables and $`\overline{\tau }\tau =\overline{\sigma }\sigma =1`$, one has $`\langle \overline{\sigma }_3\sigma _4\rangle =\langle \overline{\sigma }_0\overline{\tau }_1\overline{\tau }_3\sigma _0\tau _1\tau _4\rangle =\langle \overline{\tau }_3\tau _4\rangle =\langle \overline{\tau }_3\rangle \langle \tau _4\rangle =\langle \overline{\tau }\rangle \langle \tau \rangle =|\langle \tau \rangle |^2`$. All of our previous results hold if one replaces the average $`\langle \tau \rangle `$ with its magnitude $`\alpha =|\langle \tau \rangle |=|1-p(1-e^{i\theta })|=\sqrt{1-2p(1-p)(1-\mathrm{cos}\theta )}`$. Furthermore, it is sensible to consider arbitrary phase shifts $`0<\theta <2\pi `$, since the identity $`\overline{\sigma }\sigma =\overline{\tau }\tau =1`$, rather than $`\sigma ^n=\tau ^n=1`$, was used to evaluate the correlations. The critical point is determined from the condition $`\langle r\rangle \alpha ^2=1`$. This equation has a physical solution only when $`2\phi <\theta <2(\pi -\phi )`$, with the shorthand notation
$$\phi =\mathrm{cos}^{-1}\sqrt{\frac{1}{\langle r\rangle }}.$$ (40)
In terms of the number of states, this translates to
$$\frac{\pi }{\pi -\phi }<n<\frac{\pi }{\phi }.$$ (41)
Hence, the transition may or may not exist depending on the details of the model; in this particular “clock” model, it depends on the number of states. As we have seen before, correlations become less pronounced when the ancestry size increases. Indeed, the transition always exists in the limit $`\langle r\rangle \to 1`$, while it is eliminated in the other extreme, $`\langle r\rangle \to \mathrm{\infty }`$. When the transition does occur, the following critical mutation probability is found
$$p_c=\frac{1}{2}\left[1-\sqrt{1-\frac{\mathrm{sin}^2\phi }{\mathrm{sin}^2\frac{\theta }{2}}}\right].$$ (42)
Indeed, Eq. (36) is reproduced in the two-state case ($`\theta =\pi `$). This turns out to be the minimal critical point, $`p_c\ge \frac{1}{2}(1-\sqrt{1/\langle r\rangle })`$, reflecting the fact that the transition $`\sigma \to -\sigma `$ provides the most effective mutation mechanism. Interestingly, the transition is restored when both the mutation and the duplication processes occur continuously in time. In this continuous description, duplication occurs with rate $`\nu `$ and the mutation $`\sigma \to e^{i\theta }\sigma `$ occurs with rate $`\gamma `$. The autocorrelation $`A(t)=\langle \sigma (0)\overline{\sigma }(t)\rangle =\mathrm{exp}\left[-\gamma (1-e^{-i\theta })t\right]`$ is found from its time evolution $`\dot{A}(t)=-\gamma (1-e^{-i\theta })A(t)`$. It can easily be shown from the definition of the pair correlation (39) that $`A^2(\tau )`$ should be replaced with $`|A(\tau )|^2=\mathrm{exp}\left[-2\gamma (1-\mathrm{cos}\theta )\tau \right]`$ in the integral (37). Comparing with the results of the previous section, we see that the effective mutation rate is now $`\gamma (1-\mathrm{cos}\theta )/2`$. As a result, the location of the critical point is increased by a factor $`2/(1-\mathrm{cos}\theta )`$. Using the normalized mutation rate $`\omega =\gamma /\nu `$, one finds
$$\omega _c=\frac{1}{2(1-\mathrm{cos}\theta )}.$$ (43)
This critical point increases with the number of states, and it diverges according to $`\omega _c\simeq (n/2\pi )^2`$ when $`n\to \mathrm{\infty }`$. This behavior is intuitive, as one expects that mutations among a large number of states diminish correlations and, consequently, phylogenetic effects.
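The clock-model formulas are compact enough to evaluate directly; the sketch below computes the window (41) and the continuous-time critical point (43).

```python
# Multistate ("clock") model quantities of Section V.
from math import pi, cos, acos, sqrt

def n_window(r_mean):
    """Range of alphabet sizes n for which the transition exists, Eq. (41)."""
    phi = acos(sqrt(1.0 / r_mean))
    return pi / (pi - phi), pi / phi

def omega_c(n):
    """Continuous-time critical point, Eq. (43), with theta = 2*pi/n."""
    return 1.0 / (2.0 * (1.0 - cos(2.0 * pi / n)))

print(n_window(2.0))           # ~(1.33, 4.0): for binary trees only n = 2, 3 qualify
print(omega_c(2), omega_c(8))  # 1/4 for two states; grows ~ (n/2pi)^2 for large n
```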
To see the effects of the phylogeny, one needs to consider fluctuations, i.e., the variance $`\mathrm{\Delta }\rho =\langle \rho ^2\rangle `$:
$$\mathrm{\Delta }\rho =\left\langle \left(\frac{1}{N}\sum _i\sigma _i^a\sigma _i^b\right)^2\right\rangle -\left\langle \left(\frac{1}{N}\sum _i\sigma _i^a\right)^2\left(\frac{1}{N}\sum _i\sigma _i^b\right)^2\right\rangle $$ (45)
$$=\frac{1}{N^2}\sum _{ij}\langle \sigma _i^a\sigma _j^a\rangle \langle \sigma _i^b\sigma _j^b\rangle -\frac{1}{N^4}\sum _{ij}\langle \sigma _i^a\sigma _j^a\rangle \sum _{kl}\langle \sigma _k^b\sigma _l^b\rangle $$ (46)
$$=\frac{1}{N^2}\left[N+\sum _{i\ne j}\langle \tau \rangle ^{2d_{i,j}}\right]-\frac{1}{N^4}\left[N+\sum _{i\ne j}\langle \tau \rangle ^{d_{i,j}}\right]^2.$$ (47)
The first equality in the above equation was obtained by rewriting Eq. (44) as $`\rho =\rho _1-\rho _2`$ and noting that $`\langle \rho _1\rho _2\rangle =\langle \rho _2^2\rangle `$. The final expression can be simplified using $`\sum _{i\ne j}\langle \tau \rangle ^{2d_{i,j}}=N(N-1)G_2(\alpha ^2,k)`$ with $`G_2(\alpha ^2,k)`$ the pair correlation of Eq. (4), considered as a function of $`\alpha ^2`$. The following expression for the variance is obtained:
$$\mathrm{\Delta }\rho =\left[\frac{1}{N}+\left(1-\frac{1}{N}\right)G_2(\alpha ^2,k)\right]-\left[\frac{1}{N}+\left(1-\frac{1}{N}\right)G_2(\alpha ,k)\right]^2.$$ (48)
For the star morphology the leading order of the fluctuations is independent of the mutation rate and scales as the familiar $`N^{-1}`$. For the binary tree morphology, there are again two regimes, characterized by $`p>p_c`$ or $`p<p_c`$, where $`p_c`$ is now defined by $`2\alpha ^4=1`$, i.e.,
$$p_c=\frac{1}{2}\left(1-\left(\frac{1}{2}\right)^{\frac{1}{4}}\right).$$ (49)
When $`p<p_c`$, the phylogeny plays a significant role and the variance is exponentially enhanced, $`\mathrm{\Delta }\rho \sim \alpha ^{4k}`$, while when $`p>p_c`$, the variance is still statistical in nature, $`\mathrm{\Delta }\rho \simeq A\mathrm{\Delta }^{*}\rho `$ with $`A>1`$. Hence, it is more likely to observe large values of $`\rho `$ in the tree morphology than in the star morphology, even when the sites evolve independently. Since correlations and variance play opposite roles, they are influenced in different ways by the phylogeny.

## VII Summary

In summary, we have studied the influence of the phylogeny on correlations between the tree’s nodes. In general, for sufficiently small mutation rates, the morphology plays a minor role. For sufficiently high mutation rates, large correlations that can be attributed to the phylogeny may occur. The transition between the two regimes of behavior is sharp and is marked by a critical mutation rate. Below this critical point all correlations are well described by the average, while above it, correlations decay much more slowly than the average. Underlying this transition is the competition between the multiplicity and the degree of correlation of genetically close and distant leaves. This competition also leads to larger fluctuations in the correlation between different sites, even when these evolve independently. We have also seen that this behavior is robust and appears to be independent of many details of the model. While the overall behavior generally holds, specific details such as the location of the critical point and the decay rate in the regime $`p>p_c`$ depend on a specific tree-dependent parameter: the average number of children.
The above results can be extended in several directions. It will be interesting to see whether the recursive methods can be generalized to stochastic tree morphologies and in particular to the continuous time case. These methods should still be applicable even when the mutation rates are time dependent or disordered. In such cases it will be interesting to determine which parameters set the critical point, the decay constants, etc. Correlations can serve as a useful measure of the diversity of a system, since small correlations indicate large diversity and vice versa. If the diversity can be measured in an experiment where the phylogeny is controlled, its time dependence can be used to infer the mutation probability. Similarly, if the mutation probability can be controlled, then the degree of correlation/diversity can be used to infer characteristics of the phylogeny. Thus, our results may be useful for inferring statistical properties of actual biological systems. This research is supported by the Department of Energy under contract W-7405-ENG-36.
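To illustrate the inference idea just mentioned: under a controlled star-like phylogeny, the measured correlation can be inverted for the mutation rate. A minimal sketch, with hypothetical names, assuming only the star result $`G_2^{*}(t)=e^{-4\gamma t}`$ derived above:

```python
import math

def infer_gamma(g2_star, t):
    """Invert G2*(t) = exp(-4*gamma*t) for the mutation rate gamma."""
    return -math.log(g2_star) / (4.0 * t)

# a correlation of 0.0025 measured after t = 5 time units implies
print(infer_gamma(0.0025, 5.0))   # gamma ~ 0.30
```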
# Virtual Kathakali: Gesture-Driven Metamorphosis

## 1 Introduction

Recognizing and tracking human gestures, with its great promise for simplifying the man-machine interface, has seen considerable emphasis in recent years. The main approaches have been to use dedicated hardware such as dataglove or Polhemus sensors, or visual recognition, which requires little hardware but yields less direct results. One application of gesture recognition that is beginning to emerge is off-site training for motor skills, e.g. in activities such as athletics, surgery, theater/dancing, or gymnastics. Here the user’s motions can be transmitted using some low-bandwidth representation (e.g. joint angles or facial expressions), and the instructor or coach at the remote site can visualize the student using a local graphics model at real-time animation rates and provide appropriate feedback, possibly illustrating the correct procedure through a similar virtual metamorphosis channel. For example, a renowned master in the Kathakali dance form (a celebrated dance tradition involving elaborate costumes and headgear, and the use of special eye-masking paints and other cosmetics) may be able to provide personalized feedback to a student far away - the sensation of being co-located in the same virtual space makes communication much more natural. Furthermore, the master (or disciple) has the ability to zoom in on a particular part of the performance, to view the scene from a particular vantage point, or to have the actions repeated at slower speeds. Of course, other usual Virtual Reality applications, such as full body interaction in a virtual space, as in games or advanced chat rooms, can also be conducted with such a system. Figure 1 shows the basic setup that would be needed.

Early approaches to gesture modeling used specialized arm motion detection sensors. Such sensors encumber the user and impose constraints on their motion to a certain extent. The camera based model provides a simpler, more flexible, and far cheaper alternative to other approaches. However, with the camera the body pose is not directly available, and considerable effort is needed in image processing. Different parts of this problem have been tackled for many years now:

* The DigitEyes system (Rehg & Kanade 1994) recovered a detailed kinematic description of the hand using a 27 degrees of freedom full hand model and one or two cameras.
* Markov model based gesture identification.
* Body tracking and behaviour interpretation.

In general, camera based systems are not able to simultaneously identify both fine and gross motions, since a full body field of view reduces the accuracy available for looking at the hand; see the survey literature for a recent overview of the field. Combining gesture recognition with graphics reconstruction provides a virtual space where the user’s actions can be reflected. Applications in this genre include games, virtual interaction spaces, remote tele-operation, and virtual metamorphosis. Our application is in the metamorphosis category, where the user is metamorphosed into a Kathakali dancer in a virtual environment. The following section gives a brief outline of the techniques used in the paper.

## 2 Outline

The system can be broken up into three modules:

* Real-time detection of arm movements of the user (Section 3).
* Modelling of the Kathakali dancer (Section 4).
* Reproduction of the pose in the Kathakali dancer’s model (Section 5).
Unlike other models that use thermal imaging to obtain the user’s silhouette, the Virtual Kathakali system uses a visible-light monochrome camera against a black background. The user’s silhouette is obtained by dynamically binarizing the images, and the 3-D positions of the user’s shoulder, elbow and wrists are obtained in real time from the image coordinates. This compact data is then transmitted to a local or off-site virtualization system in real time. Also, by using skin-tone colour/greyscale information, it is possible to identify occlusion, which is not possible in thermal imaging systems. The overall cost of this system is likely to be several times less than that of other comparable virtual metamorphosis systems.

The next phase is to create an articulated 3D model that will follow the user’s poses and reflect the traditional costumes of a classical Indian dance form such as the Kathakali. The 3D model needs to have appropriate motion constraints at the joints and suitable dress/headgear/texture. The 3D arm pose sequence is then communicated to the graphics model, which recreates it as an animated graphics display. In this process, the very low-bandwidth joint angle data can be used to animate the 3D dancer model.

## 3 Real Time Detection of User’s Arm Pose

In the initial pose, the user stands with his arms separated wide apart and the following calibration data are obtained:

* arm length
* range of pixel intensity within which the pixels corresponding to his body tone lie
* arm width
* width of his shoulders

To simplify the image processing costs, the user is required to stand in front of a dark background. This permits image binarization based on a dynamic threshold, set at a point of sharp variation in the intensity histogram. Noise may yet interfere with the robust determination of arm posture, and Gaussian convolution is used to smooth out some of the noise. The pose of the user’s arms at each instant is obtained by identifying the elbow and wrist in the image. The body width is identified based on points of high intensity change in the lower parts of the image. The elbow and wrist are distinguished by different techniques depending on whether the hand is occluding the body (Section 3.1) or not. In the latter case, a fast and simple technique is to locate the extreme points in the image and test whether the line joining each to the shoulder is part of the arm or not (Figure 5). Based on this and the initial calibration information, the 3D pose of the arm is estimated using foreshortening.

### 3.1 Resolving Occlusion

Many dance poses involve the hand being in front of the body; these postures are particularly important in many mudras (certain specific postures or positions in Indian dance forms). The binarized processing described above is not sufficient for resolving this occlusion, so the wrist is identified in the image based on the greyscale skin tones (see Figure 3). During the performance, for each grabbed image, the pixels outside this intensity range are discarded. If the user is wearing clothes contrasting with his body color, only the pixels corresponding to his body parts retain the high value of intensity. This leads to an effective separation of the hand from the body in cases of self-occlusion (see Figure 4). The hand can now easily be detected. In the case where the hand is before the body, we still need to identify the elbow for constructing the 3-D arm pose (a small sketch of the thresholding and foreshortening steps follows below).
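The following is a minimal numpy sketch of the two numerical steps described in this section: dynamic-threshold binarization and foreshortening-based depth recovery. The paper does not spell out its exact threshold rule, so the steepest-descent heuristic and all names here are our illustrative assumptions, not the original implementation:

```python
import numpy as np

def dynamic_threshold(gray):
    """Pick a threshold at the sharpest drop of the intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # crude smoothing
    return int(np.argmin(np.diff(smooth)))                     # steepest descent

def silhouette(gray):
    """Binarize a frame grabbed against a dark background."""
    return (gray > dynamic_threshold(gray)).astype(np.uint8)

def limb_angle(proj_len, true_len):
    """Out-of-image-plane angle of a limb from its foreshortened length,
    assuming the calibrated true length and near-orthographic projection."""
    proj = min(proj_len, true_len)   # guard against measurement noise
    return np.arctan2(np.sqrt(true_len**2 - proj**2), proj)

frame = (np.random.rand(240, 320) * 255).astype(np.uint8)  # stand-in for a grabbed frame
mask = silhouette(frame)
print(np.degrees(limb_angle(40.0, 80.0)))  # an 80 px arm seen as 40 px -> 60 degrees
```

Note that a single view cannot determine the sign of the out-of-plane angle (toward or away from the camera); this is precisely the projection ambiguity discussed in the conclusions.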
Identifying the elbow is simple in this case, since the outermost tip of the image corresponds to the elbow joint.

## 4 Modelling of Kathakali Dancer

The first step is to create the wireframe of the human using primitives. A frustum with variable elliptical cross-sections has been used as a building block for modelling the Kathakali dancer. The outer surface of this primitive is realised by a triangular mesh. A stack of such primitives is used to model the head, arms and body separately. Though the resolution can be easily controlled, a very high resolution is sacrificed for faster rendering. The head has three degrees of freedom, i.e., it can be twisted, bent forward and bent sideways. Moreover, the movements are limited to the range that is humanly possible. To model the arms, first the shoulder joints with three degrees of freedom have been created. Next, the upper arm, the lower arm and the wrist are created using the frustum primitive. The elbow and wrist joints are simulated as hinge joints, i.e., with only one degree of freedom. The Kathakali dance form is famous for its elaborate apparel and ornamentation. Hence, to give a realistic effect, texture mapping has been used to create an appropriate pattern on the front of the dress. The image used for texture mapping has been obtained from images of Kathakali dancers. Various other graphics rendering techniques like lighting and shading have been used to create the effect of a 3D virtual world. The real-time nature of the application requires the use of special techniques for optimizing performance.

## 5 Reproduction of User’s Pose in Virtual Model

Based on the initial data about the user’s arm length and forearm length, the 3D position can be recovered using the foreshortening seen in the image. The angles of the upper arm with respect to the shoulders and of the forearm with respect to the upper arm are calculated and used to apply appropriate transformations to the limbs in the virtual metamorphosis model. This application was developed on a PentiumII/Linux-OpenGL PC for the graphics and an older 486 PC/DOS with a Matrox image processing card under the Matrox Imaging Library (MIL 2.1). Some images with the user’s poses and the corresponding graphics output are shown in Figure 8.

## 6 Conclusions

In this paper, we have presented a communication environment in which a person can transform himself into any character he desires in a virtual environment. In our case, the character was that of a Kathakali dancer. Computer vision techniques are used for passive detection of the user’s arm pose in real time. Some simple constraints on the user (that he stand in front of a dark background, and that his skin tone differs from his clothing) make it possible to use some extremely simple and fast detection methods based on dynamic threshold binarization. A single camera is used and the user does not need to wear any special devices. The 3-D model is instantiated during the calibration phase, and the user’s motions are reproduced using various transformations on the dancer model. Our implementation showed good performance, with a processing speed of 5 frames per second on a very old and simple image processing system. The primary application for such a system would be in the coaching of motor skills with distant trainers and coaches. In the arts, such a system would permit quick informal recording of creative insights, which would otherwise require elaborate stage and makeup arrangements before being presented in the final form.
In general, the expansion of the Internet, even with low-bandwidth connectivity, will permit gestural interaction of this type as long as major computational tasks such as identification of limb postures and recreation/graphics output are performed on the local machines. In the coming decades, systems of this type are likely to have a profound effect on the declining trend in artistic traditions worldwide. Also, in sports as well as critical operations such as surgery, this could enable expert trainers to provide guidance to a far larger set of students than would be possible in a face-to-face mode. Since the image processing is carried out on a 2D image, the system is not able to resolve between two arm poses with the same projection, as when the entire arm lies in a horizontal plane. However, due to the very nature of this deficiency, it will not matter to the viewer so long as the scene is viewed from the same angle. In this work, we have only used the arm pose of the user to control the virtual model. In traditional dance forms, facial expressions and finger movements constitute an important component of the dancer’s emotional expression (abhinaya). In the current phase, with a single camera of fixed resolution, this is not possible; in fact, no vision system today can model both the finger and the gross body motions. However, some beginnings have been made towards integrating face recognition by having a camera look down on the user’s face from a fixture mounted on the head itself. With multiple cameras, one camera could be used for the full-body field of view, whereas one or more other cameras could track the important aspects of the dance - particularly the two hands and also the face. Such a multi-scale imaging and tracking system would enable detailed reconstructions of the important aspects of the scene. Moreover, the system, which supports only a single user at present, can easily be extended to support multiple users all sharing the same virtual environment. Hence the creation of virtual theaters becomes a distinct possibility, where different actors situated far away from each other can perform in the same virtual theater. Since the bandwidth required by the system is very small, transmitting the data over the Internet is a feasible option. Another challenging problem is to have real and virtual actors share the same space in a seamless manner from the audience’s viewpoint.
# COMPTEL DETECTION OF PULSED EMISSION FROM PSR B1509-58 UP TO AT LEAST 10 MEV

## Abstract

We report the COMPTEL detection of pulsed $`\mathrm{\gamma }`$-emission from PSR B1509-58 up to at least 10 MeV using data collected over more than 6 years. The 0.75-10 MeV lightcurve is broad and reaches its maximum near radio-phase 0.38, slightly beyond the maximum found at hard X-rays/soft $`\gamma `$-rays. In the 10-30 MeV energy range a strong source is present in the skymap, positionally consistent with the pulsar, but we do not detect significant pulsed emission. However, the lightcurve is consistent with the pulse shape changing from a single broad pulse into a double-peak morphology. Our results significantly constrain pulsar modelling.

1) SRON-Utrecht, Sorbonnelaan 2, 3584CA Utrecht, The Netherlands
2) Astrophysics Division, ESTEC, 2200AG Noordwijk, The Netherlands
3) Max-Planck Institut für Extraterrestrische Physik, D-8046 Garching, Germany
4) INAOE, Luis Enrique Erro 1, Tonantzintla, Puebla 72840, Puebla, México
5) Australian Telescope National Facility, CSIRO, Epping, Australia
6) Physics Department, University of Melbourne, Australia

KEYWORDS: gamma-rays; pulsars; PSR B1509-58; COMPTEL.

1. INTRODUCTION

PSR B1509-58 was discovered in the late seventies as a 150 ms X-ray pulsar in Einstein data of the Supernova Remnant MSH 15-52 (Seward et al. 1982). Its inferred characteristic age and polar surface magnetic field strength are 1570 years and $`3.1\times 10^{13}`$ Gauß. The latter estimate is among the highest of the radio-pulsar population. Ginga (2-60 keV) detected pulsed emission at hard X-rays (Kawai et al. 1991), the profile being broad and asymmetric, and its maximum trails the radio-pulse by $`0.25\pm 0.02`$ in phase. After the launch of the Compton Gamma-Ray Observatory (CGRO), pulsed emission in the soft $`\gamma `$-ray band was seen by BATSE (Wilson et al. 1993) and OSSE (Ulmer et al. 1993; Matz et al. 1995). At medium energy $`\gamma `$-rays, indications were found near $`1`$ MeV in COMPTEL data (Hermsen et al. 1994; Carramiñana et al. 1995). The non-detection by EGRET (e.g. Brazier et al. 1994) indicates that the pulsed spectrum must break before the high-energy (HE) $`\gamma `$-rays. Here we report the results from a COMPTEL (0.75-30 MeV) study of PSR B1509-58 using data collected over more than 6 years, applying improved event selections. More detailed information will be given in Kuiper et al. (1999).

2. COMPTEL TIMING ANALYSIS RESULTS

The arrival times of the selected events at the spacecraft, each recorded with 0.125 ms accuracy, have been converted to Solar System Barycentric arrival times and subsequently phase folded with a proper radio-pulsar ephemeris. The resulting 0.75-30 MeV lightcurve shows a $`5.4\sigma `$ modulation significance ($`Z_2^2`$-test) with a single pulse roughly aligned with the pulse detected at lower energies. The maximum of the pulse is at phase 0.38, slightly above the 0.27 found at hard X-rays, and coincides with the “shoulder” in the RXTE 2-16 keV lightcurve (Fig. 1a). For the differential energy windows 0.75-3 MeV, 3-10 MeV and 10-30 MeV we show the lightcurves in Fig. 1; the modulation significances are $`3.7\sigma ,4.0\sigma `$ and $`2.1\sigma `$, respectively. This proves that we detected pulsed emission from this source at least up to 10 MeV. The 10-30 MeV lightcurve (Fig. 1b) shows an indication for the broad pulse and a high bin near phase 0.85, which seems to be responsible for the low modulation significance.
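For readers unfamiliar with the $`Z_2^2`$-test quoted above: after folding the barycentred arrival times with the radio ephemeris, the statistic sums the first two harmonics of the phase distribution, and under the no-pulsation hypothesis it is $`\chi ^2`$-distributed with 4 degrees of freedom (Buccheri et al. 1983). The following is a minimal illustrative sketch of both steps, not the COMPTEL pipeline itself:

```python
import numpy as np

def fold(times, t0, f, fdot=0.0):
    """Pulse phases from barycentred event times and an ephemeris (t0, f, fdot)."""
    dt = np.asarray(times) - t0
    return (f * dt + 0.5 * fdot * dt**2) % 1.0

def z2(phases, n_harmonics=2):
    """Z_n^2 statistic; chi-squared with 2*n_harmonics dof if unpulsed."""
    phi = 2.0 * np.pi * np.asarray(phases)
    z = sum(np.cos(k * phi).sum() ** 2 + np.sin(k * phi).sum() ** 2
            for k in range(1, n_harmonics + 1))
    return 2.0 * z / len(phi)
```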
Based on the RXTE lightcurve (Fig. 1a) we defined an “unpulsed” phase interval: 0.65-1.15. The excess counts in the “pulsed” interval, 0.15-0.65, have been converted to flux estimates for the 3 COMPTEL energy windows, taking into account the efficiency factors due to our event selection criteria. These “pulsed” fluxes are: $`(3.69\pm 0.73)\times 10^{-5}`$, $`(4.52\pm 0.77)\times 10^{-6}`$ and $`(1.21\pm 0.85)\times 10^{-7}`$ in units of $`\mathrm{ph}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{MeV}^{-1}`$ for the energy windows 0.75-3 MeV, 3-10 MeV and 10-30 MeV, respectively.

3. COMPTEL IMAGING ANALYSIS RESULTS

We performed imaging analyses selecting also on pulse phase (“Total”, phase range 0-1; “Pulsed” and “Unpulsed” intervals). Below 10 MeV the signal from PSR B1509-58 is consistent with being 100% pulsed (Kuiper et al. 1999). In the 10-30 MeV range, where the timing analysis did not reveal significant modulation, we surprisingly detect in the “Total” map a $`6\sigma `$ source atop the instrumental and galactic diffuse background, positionally consistent with the pulsar (see Collmar et al., these proceedings). In view of the absence of any measurable pulsar/nebula DC emission or nearby unrelated source below 10 MeV, where COMPTEL is more sensitive, we consider it most likely that the pulsar is also detected above 10 MeV, possibly changing its pulse morphology. In order to find support for the latter interpretation we also analysed contemporaneous EGRET 30-100 MeV data.

4. EGRET 30-100 MeV AND COMPTEL 10-30 MeV RESULTS

In the spatial analysis in the EGRET 30-100 MeV energy window we detected a $`6.7\sigma `$ excess positionally consistent with PSR B1509-58, which is probably composed of contributions from both the unidentified EGRET source 2EG J1443-6040 and PSR B1509-58. In the timing analysis, applying the same event selections as in the spatial analysis in combination with a “standard” energy dependent cone selection (Thompson et al. 1996), we found only a $`1.1\sigma `$ deviation from a flat distribution (Fig. 1d). Summing the independent COMPTEL 10-30 MeV and EGRET 30-100 MeV lightcurves results in a suggestive double-peak lightcurve with a marginally significant modulation of $`2.3\sigma `$ (Fig. 1f). If the double-peak interpretation is correct, we have underestimated our 10-30 MeV “pulsed” (phase 0.15-0.65) flux in the timing analysis (Sect. 2), and then the flux of the broad pulse increases to $`(3.37\pm 0.70)\times 10^{-7}\mathrm{ph}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{MeV}^{-1}`$. Our flux estimates (0.75-30 MeV) are shown in Fig. 2 along with spectra from other HE-instruments. It is clear from this picture that the “pulsed” spectrum breaks above 10 MeV.

5. COMPARISON WITH THEORY

For explaining pulsed HE-radiation two categories of models exist, polar cap and outer gap, with as main difference the production site of the HE-radiation. Recently, in a polar cap scenario, Harding et al. (1997) (see also Baring & Harding, these proceedings) tried to explain the earlier HE-spectrum of PSR B1509-58 by including the exotic photon splitting process, only effective for magnetic field strengths near $`4.41\times 10^{13}`$ Gauß, to attenuate the primary HE-$`\gamma `$-rays in their cascade calculations. Our new 0.75-30 MeV data constrain the magnetic co-latitude of the emission rim, one of their model parameters, to be about $`2^{\circ }`$, close to the “classical” polar cap half-angle. In the outer gap scenario (Romani 1996) several distinct HE-radiation components can be identified.
If the synchrotron component, most important at medium energy $`\gamma `$-rays, is dominant, then also this model might qualitatively explain the HE-spectrum of PSR B1509-58. Another high B-field young pulsar resembling PSR B1509-58, although $`20`$ times weaker in spin-down flux, is PSR B1610-50. This pulsar will also be a promising target for future INTEGRAL observations.

REFERENCES

Brazier, K. T. S., Bertsch, D. L., Fichtel, C. E., et al., 1994, MNRAS 268, 517
Carramiñana, A., Bennett, K., Buccheri, R., et al., 1995, A&A 304, 258
Gunji, S., Hirayama, M., Kamae, T., et al., 1994, ApJ 428, 284
Harding, A., Baring, M., Gonthier, P., 1997, ApJ 476, 246
Hermsen, W., Kuiper, L., Diehl, R., et al., 1994, ApJS 92, 559
Kawai, N., Okayasu, R., Brinkmann, W., et al., 1991, ApJ 383, L65
Kawai, N., Okayasu, R., Sekimoto, Y., 1993, AIP Conf. Proc. 280, p. 213
Kuiper, L., Hermsen, W., Krijger, J., et al., 1999, A&A (in preparation)
Marsden, D., Blanco, P. R., Gruber, D. E., et al., 1998, ApJ 491, L39
Matz, S. M., Ulmer, M. P., Grabelsky, D. A., et al., 1995, ApJ 434, 288
Romani, R., 1996, ApJ 470, 469
Rots, A. H., Jahoda, K., Macomb, D. J., et al., 1998, ApJ 501, 749
Saito, Y., Kawai, N., Kamae, T., et al., 1997, AIP Conf. Proc. 410, p. 628
Seward, F. D., Harnden Jr., F. R., 1982, ApJ 256, L45
Thompson, D. J., Bailes, M., Bertsch, D. L., et al., 1996, ApJ 465, 385
Ulmer, M. P., Matz, S. M., Wilson, R. B., et al., 1993, ApJ 417, 738
Wilson, R. B., Fishman, G. J., Finger, M. H., et al., 1993, AIP Conf. Proc. 280, p. 291
# 1 Introduction

Study of accretion disks and outflows around black holes began twenty-five years ago \[1-2\]. The subject has evolved considerably since then and it is now clear that these two apparently dissimilar objects are related to each other. The accretion solutions of purely rotating disks have been improved to include the effect of radiation pressure \[3-7\] (see Chakrabarti for a review). Jet solutions have changed from speculative ideas such as de-Laval nozzles to the electrodynamic acceleration model, self-similar centrifugally driven outflows, ‘cauldrons’, etc. Centrifugally driven outflows were subsequently modified to include accretion disks. Chakrabarti & Bhaskaran (see also Contopoulos) showed that it is easier to produce outflows from a sub-Keplerian inflow. In parallel, efforts were made to study the accretion and jets within the same framework. Chakrabarti \[16-17\] found that the same solution of purely rotating flows around a black hole could describe accretion flows on the equatorial plane and pre-jet matter near the axis. With a natural angular momentum distribution $`l(r)=c\lambda (r)^n`$ (where $`c`$ and $`n`$ are constants and $`\lambda `$ is the von Zeipel parameter), it was found that for large $`c`$ and small $`n`$ ($`n<1`$), solutions are regular on the equatorial plane and describe thick accretion disks. For small $`c`$ and large $`n`$ ($`n>1`$), the solutions are regular on the axis and describe pre-jet matter. It was speculated that some viscous process might be responsible for changing the parameters from one set to the other. Even in the absence of viscosity, constant angular momentum flow was found to bounce back from the centrifugal barrier in numerical simulations of accretion flows. Eggum, Coroniti & Katz considered radiatively driven outflows emerging from a Keplerian disk. Further progress on this topic required a fundamental understanding of these flows, which emerged in the late 1980s. It was found that, just as the Bondi solution of accretion onto stars is related to outflow solutions, fundamentally transonic black hole accretion and winds are also related to each other. All possible accretion- and wind-type solutions were found, including solutions which may contain standing shocks, and the entire parameter space was classified according to the nature of the solutions \[22-23\]. Fig. 1 (taken from Chakrabarti) shows these solutions (in the outer boxes, Mach number is plotted against logarithmic radial distance \[in units of the Schwarzschild radius\]; in the central box, specific energy is plotted against specific angular momentum) and the classification of the parameter space when the Kerr parameter $`a=0.5`$ and the equatorial plane solutions are considered (similar solutions are present for conical flows in winds as well). The inward pointing arrows indicate accretion solutions and the outward pointing arrows indicate wind solutions. The flows from regions I and O pass through the inner sonic point and outer sonic point respectively. Those from NSA have no shock in accretion, those from SA have shocks in accretion, those from NSW have no shocks in winds, and those from SW have shocks in winds. The Global Inflow Outflow Solutions (GIOS), as will be described in §3, combine one solution of each kind to produce winds out of accretion. The horizontal line in the central box denotes the rest mass of the inflow. Note that the outflows are produced only when the specific energy is higher than the rest mass energy.
Flow with lesser energy produces solutions with closed topologies (I\* and O\*). When viscosity was added, the nature of the solutions changed fundamentally (see \[26-27\] for details), allowing matter to come directly out of a Keplerian disk and enter into a black hole through the sonic point. These solutions, both for the accretion and the winds, have been verified by complete time-dependent simulations \[28-31\]. In the case of steady state solutions, outflows are found to occur between the centrifugal barrier and the funnel wall (Fig. 2a below), while in a non-steady solution the outflow could spread into regions outside the centrifugal barrier as well.

## 2 Recent Progresses in Accretion Disk Solutions

Fig. 2a shows the general picture that emerges of the non-magnetized inflow and outflow around a black hole. Centrifugal force tends to fight against gravity close to a black hole to produce a denser region called CENBOL (centrifugal barrier supported boundary layer). Matter farther out rotates in a Keplerian disk, but close to the black hole the flow is puffed up due to radiation pressure or ion pressure, depending on whether the accretion rate is sufficiently high or not. The two-dimensional nature of the density distribution is given in Chakrabarti. Chakrabarti & Titarchuk, using this generalized accretion solution, suggested that hard and soft states of a black hole could be understood simply by re-distribution of matter between Keplerian and sub-Keplerian components, which is effected by variation of viscosity in the flow in the vertical direction. This general conclusion has been verified observationally (Zhang et al. 1997). The constancy of the energy spectral index $`\alpha _e`$ ($`F_\nu \propto \nu ^{-\alpha _e}`$) separately in hard and soft states, as well as the possible shifting of the inner edge of the Keplerian component with accretion rate, as predicted by the general accretion solution, have also been verified \[34-36\]. It was also recognized that the non-stationary solutions of the accretion flows \[37-38\] might cause quasi-periodic oscillations. It was pointed out that the outflows, and not accretion disks, could be responsible for the iron K<sub>α</sub> lines and the so-called reflection components. A schematic representation of the detailed picture as above is drawn in Fig. 2b. We shall use this picture below to analytically estimate the outflow rates. A new concept which was found useful in identifying black holes by using spectral features alone is ‘bulk motion Comptonization’. Basically, as matter flows into a black hole rapidly, with almost the speed of light, photons scattering from it gain momentum and the frequency is blue-shifted due to the Doppler effect. The resulting spectrum is a power law and extends up to about an MeV. The presence of this part of the spectrum in soft states is widely accepted to be the only way to identify a black hole most convincingly, whether galactic or extragalactic, since the all-absorbing property of the horizon is directly used in obtaining the spectral slope. Black holes in X-ray binaries often show quiescence states. This could be the result of very low accretion rates. A novel solution proposed by Das & Chakrabarti (also see \[41-42\]) is that outflows generated from the inflow could, in certain circumstances, especially when the accretion rate is low, be so high that the disk may be almost evacuated. They proposed that such outflows could generate quiescence states of a black hole candidate.
## 3 Global Inflow-Outflow Solutions (GIOS)

Outflows are common in many astrophysical systems which contain black holes and neutron stars. The difference between stellar outflows and outflows from these systems is that here the outflows have to form out of the inflowing material only. We now present a simple analytical approach by which the outflow rate is computed from the inflow rate. The ratio of these two rates is found to be a function of the compression ratio of the gas at the boundary between the CENBOL and the inflow.

The problem of the mass outflow rate in the context of black hole accretion has been attacked quite recently. A simple approach widely used in stellar astrophysics has been followed, where the flow up to the sonic point is assumed to be isothermal. This was possible due to the novel understanding that around a black hole, the centrifugal pressure supported dense matter could behave like a ‘boundary layer’. This CENBOL is hot, puffed up and very similar to the classical thick accretion disk, except that at the inner edge the matter is strongly advective. The thermal pressure is expected to drive matter out in between the centrifugal barrier and the funnel wall, as in Fig. 2a. However, we use a simple model (Fig. 2b) where the inflow is axially symmetric and wedge shaped, while the outflow is conical, with solid angles $`\mathrm{\Theta }_{in}`$ and $`\mathrm{\Theta }_{out}`$ respectively. The CENBOL could form either with or without shocks, as long as the angular momentum of the flow close to the compact object is roughly constant, as is the case in reality \[22-24\]. This region replaces \[32, 44-45\] the so-called ‘Compton cloud’ in explaining hard and soft states of black holes. The oscillation of this region can explain the general properties of the quasi-periodic oscillations from black holes and neutron stars. It is therefore of interest whether this region plays any major role in the formation of outflows. Several authors have also mentioned denser regions closer to a black hole due to different physical effects. Chang & Ostriker showed that pre-heating of the gas could produce standing shocks at a large distance (typically a few tens of thousands of Schwarzschild radii away). Kazanas & Ellison mentioned that pressure due to pair plasma could produce standing shocks at smaller distances around a black hole as well. Our computation is insensitive to the actual mechanism by which the boundary layer is produced. All we require is that the gas should be hot at the region where the compression takes place. Thus, since Comptonization processes cool this region for larger accretion rates ($`\dot{M}\gtrsim 0.1\dot{M}_{Eddington}`$), our process is valid only for low-luminosity objects, consistent with current observations. Begelman & Rees talked about a so-called ‘cauldron’ model of compact objects where jets were assumed to emerge from a dense mixture of matter and radiation by boring a de-Laval nozzle, as in the Blandford & Rees model. The difference between this model and the present one is, first, that a very high accretion rate was required there ($`\dot{M}_{in}\sim 1000\dot{M}_E`$), while we consider thermally driven outflows out of accretion with smaller rates. Second, the size of the ‘cauldron’ was thousands of Schwarzschild radii (where gravity was so weak that the channel had to have the shape of a de-Laval nozzle to produce transonicity), while we have a CENBOL of about $`10R_g`$ (where gravity plays an active role in creating the transonic wind) in mind.
Third, in the present case, matter is assumed to pass through a sonic point using the pre-determined funnel where rotating pre-jet matter is accelerated, and not through a ‘bored nozzle’, even though symbolically a quasi-spherical CENBOL is considered for mathematical convenience.

Once the presence of the centrifugal pressure supported boundary layer (CENBOL) is accepted, the mechanism of the formation of the outflow becomes clearer. One basic criterion is that the outflowing winds should have positive Bernoulli constant. Just as photons from the stellar surface deposit momentum on the outflowing wind and keep the flow roughly isothermal at least up to the sonic point, one may assume that the outflowing wind close to the black hole is kept isothermal due to deposition of momentum from hard photons. In the case of the sun, its luminosity is only $`10^{-5}L_{Edd}`$ and the typical mass outflow rate from the solar surface is $`10^{-14}M_{\odot }`$ year<sup>-1</sup>. Proportionately, for a star with the Eddington luminosity, the outflow rate would be $`10^{-9}M_{\odot }`$ year<sup>-1</sup>. This is roughly half the Eddington rate for a stellar mass star. Thus, if the flow is compressed and heated at the centrifugal barrier around a black hole, it would also radiate enough to keep the flow isothermal (at least up to the sonic point) if the efficiency were exactly identical. Physically, both requirements may be equally difficult to meet, but in reality, with photons shining on the outflow near a black hole with almost $`4\pi `$ solid angle (from the funnel wall), it is easier to maintain isothermality in the slowly moving (subsonic) region in the present context. Another reason is this: the process of momentum deposition on electrons is more efficient near a black hole. The electron density $`n_e`$ falls off as $`r^{-3/2}`$ while the photon density $`n_\gamma `$ falls off as $`r^{-2}`$. Thus the ratio $`n_e/n_\gamma \propto r^{1/2}`$ increases with the size of the region. Thus a compact object will have a smaller number of electrons per photon and the momentum transfer is more efficient. In a simple-minded way the physics is scale-invariant, though. In solar physics, it is customary to choose a momentum deposition term which keeps the flow isothermal to be of the form
$$F_r=\int _{R_s}^rD\,dr,$$
where $`D`$ is the momentum deposition factor (localized around $`r_p`$) with a typical spatial dependence
$$D=D_0e^{-\alpha (r/r_p-1)^2}.$$
Here, $`D_0`$ and $`\alpha `$ are constants and $`R_s`$ is the location of the stellar surface. Since $`r`$ and $`r_p`$ come in a ratio, exactly the same physical consideration would be applicable to black hole physics, with the same result, provided $`D_0`$ is scaled with luminosity (however, as we showed above, $`D_0`$ goes up for a compact object due to the higher solid angle). However, as Chakrabarti & Titarchuk showed, a high accretion rate ($`\dot{M}\gtrsim 0.3\dot{M}_{Edd}`$) will reduce the temperature of the CENBOL catastrophically, and therefore our assumption of isothermality of the outflow would severely break down at these high rates, simply because a cooler outflow would have its sonic point very far away from the black hole. It is to be noted that in the context of stellar physics, it is shown that the temperature stratification in the outflowing wind has little effect on the mass loss rate. The effect of radiation momentum deposition on the topology of the outflows is separately discussed in Chattapadhyay.
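As a small illustration of the deposition profile above, the cumulative force term $`F_r`$ can be tabulated by direct quadrature; the parameter values below ($`D_0`$, $`\alpha `$, $`r_p`$) are placeholders of our choosing, not fitted values:

```python
import numpy as np

def deposition(r, d0=1.0, alpha=10.0, rp=3.0):
    """D = D0 * exp(-alpha * (r/rp - 1)**2), peaked around r = rp."""
    return d0 * np.exp(-alpha * (r / rp - 1.0) ** 2)

r = np.linspace(1.0, 10.0, 2000)              # radii in units of R_s (illustrative)
F = np.cumsum(deposition(r)) * (r[1] - r[0])  # F_r = integral of D from R_s to r
print(F[-1])                                  # F_r saturates once r >> r_p
```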
### 3.1 Derivation of the outflow rate using simple GIOS

The accretion rate of the incoming accretion flow is given by
$$\dot{M}_{in}=\mathrm{\Theta }_{in}\rho \vartheta r^2.$$ (1)
Here, $`\mathrm{\Theta }_{in}`$ is the solid angle subtended by the inflow, $`\rho `$ and $`\vartheta `$ are the density and velocity respectively, and $`r`$ is the radial distance. For simplicity, we assume geometric units ($`G=1=M_{BH}=c`$; $`G`$ is the gravitational constant, $`M_{BH}`$ is the mass of the central black hole, and $`c`$ is the velocity of light) to measure all the quantities. In these units, for a freely falling gas,
$$\vartheta (r)=\left[\frac{1-\mathrm{\Gamma }}{r}\right]^{1/2}$$ (2)
and
$$\rho (r)=\frac{\dot{M}_{in}}{\mathrm{\Theta }_{in}}(1-\mathrm{\Gamma })^{-1/2}r^{-3/2}.$$ (3)
Here, $`\mathrm{\Gamma }/r^2`$ (with $`\mathrm{\Gamma }`$ assumed to be a constant) is the outward force due to radiation. We assume that the boundary of the denser cloud is at $`r=r_s`$ (typically a few Schwarzschild radii; see Chakrabarti), where the inflowing gas is compressed. The compression could be abrupt, due to a standing shock, or gradual, as in a shock-free flow with angular momentum. These details are irrelevant. At this barrier, then,
$$\rho _+(r_s)=R\rho _{-}(r_s)$$ (4a)
and
$$\vartheta _+(r_s)=R^{-1}\vartheta _{-}(r_s),$$ (4b)
where $`R`$ is the compression ratio. The exact value of the compression ratio is a function of the flow parameters, such as the specific energy and the angular momentum (e.g., \[22-24\]). Here, the subscripts $`-`$ and $`+`$ denote the pre-shock and post-shock quantities respectively. At the shock surface, the total pressure (thermal pressure plus ram pressure) is balanced:
$$P_{-}(r_s)+\rho _{-}(r_s)\vartheta _{-}^2(r_s)=P_+(r_s)+\rho _+(r_s)\vartheta _+^2(r_s).$$ (5)
Assuming that the thermal pressure of the pre-shock incoming flow is negligible compared to the ram pressure, using Eqs. 4(a-b) we find
$$P_+(r_s)=\frac{R-1}{R}\rho _{-}(r_s)\vartheta _{-}^2(r_s).$$ (6)
The isothermal sound speed in the post-shock region is then
$$C_s^2=\frac{P_+}{\rho _+}=\frac{(R-1)(1-\mathrm{\Gamma })}{R^2}\frac{1}{r_s}=\frac{(1-\mathrm{\Gamma })}{f_0r_s},$$ (7)
where $`f_0=R^2/(R-1)`$. An outflow which is generated from this dense region with very low flow velocity along the axis is necessarily subsonic in this region; however, at a large distance, the outflow velocity is expected to be much higher than the sound speed, and therefore the flow must be supersonic. In the subsonic region of the outflow, the pressure and density are expected to be almost constant, and thus it is customary to assume the isothermality condition up to the sonic point. With the isothermality assumption, or a given temperature distribution ($`T\propto r^{-\beta }`$ with $`\beta `$ a constant; see below), the result is derivable in analytical form. The sonic point conditions are obtained from the radial momentum equation
$$\vartheta \frac{d\vartheta }{dr}+\frac{1}{\rho }\frac{dP}{dr}+\frac{1-\mathrm{\Gamma }}{r^2}=0$$ (8)
and the continuity equation
$$\frac{1}{r^2}\frac{d(\rho \vartheta r^2)}{dr}=0$$ (9)
in the usual way, i.e., by eliminating $`d\rho /dr`$:
$$\frac{d\vartheta }{dr}=\frac{N}{D},$$ (10)
where
$$N=\frac{2C_s^2}{r}-\frac{1-\mathrm{\Gamma }}{r^2}$$
and
$$D=\vartheta -\frac{C_s^2}{\vartheta },$$
and putting the $`N=0`$ and $`D=0`$ conditions.
These conditions yield, at the sonic point $`r=r_c`$, for an isothermal flow,
$$\vartheta (r_c)=C_s$$ (11a)
and
$$r_c=\frac{1-\mathrm{\Gamma }}{2C_s^2}=\frac{f_0r_s}{2},$$ (11b)
where we have utilized Eq. (7) to substitute for $`C_s`$. Since the sonic point of a hotter outflow is located closer to the black hole, clearly the condition of isothermality is best maintained if the temperature is high enough. However, if the temperature is so high that $`r_c<r_s`$, one has to solve this case more carefully, using the considerations of Fig. 2a rather than of Fig. 2b. The constancy of the integral of the radial momentum equation (Eq. 8) in an isothermal flow gives:
$$C_s^2\mathrm{ln}\rho _+-\frac{1-\mathrm{\Gamma }}{r_s}=\frac{1}{2}C_s^2+C_s^2\mathrm{ln}\rho _c-\frac{1-\mathrm{\Gamma }}{r_c},$$ (12)
where we have ignored the initial value of the outflowing radial velocity $`\vartheta (r_s)`$ at the dense region boundary ($`r=r_s`$), and also used Eq. (11a). We have also put $`\rho (r_c)=\rho _c`$ and $`\rho (r_s)=\rho _+`$. Upon simplification, we obtain
$$\rho _c=\rho _+\mathrm{exp}(-f),$$ (13)
where
$$f=f_0-\frac{3}{2}.$$
Thus, the outflow rate is given by
$$\dot{M}_{out}=\mathrm{\Theta }_{out}\rho _c\vartheta _cr_c^2,$$ (14)
where $`\mathrm{\Theta }_{out}`$ is the solid angle subtended by the outflowing cone. Upon substitution, one obtains
$$\frac{\dot{M}_{out}}{\dot{M}_{in}}=R_{\dot{m}}=\frac{\mathrm{\Theta }_{out}}{\mathrm{\Theta }_{in}}\frac{R}{4}f_0^{3/2}\mathrm{exp}(-f),$$ (15)
which explicitly depends only on the compression ratio:
$$\frac{\dot{M}_{out}}{\dot{M}_{in}}=R_{\dot{m}}=\frac{\mathrm{\Theta }_{out}}{\mathrm{\Theta }_{in}}\frac{R}{4}\left[\frac{R^2}{R-1}\right]^{3/2}\mathrm{exp}\left(\frac{3}{2}-\frac{R^2}{R-1}\right),$$ (16)
apart from the geometric factors. Notice that this simple result does not depend on the location of the sonic points, the size of the dense cloud, or the outward radiation force constant $`\mathrm{\Gamma }`$. This is because the Newtonian potential was used throughout and the radiation force was also assumed to be very simple-minded ($`\mathrm{\Gamma }/r^2`$). Also, effects of the centrifugal force were ignored. Similarly, the ratio is independent of the mass accretion rate, which should be valid only for low luminosity objects. For high luminosity flows, Comptonization would cool the dense region completely and the mass loss would be negligible. Pair plasma supported quasi-spherical shocks also form only for low luminosity. In reality there would be a dependence on these quantities when full general relativistic considerations of the rotating flows are made. Exact and detailed computations using both the transonic inflow and outflow (where the compression ratio $`R`$ is also computed self-consistently) are presented elsewhere. Figure 3 contains the basic results. The solid curve shows the ratio $`R_{\dot{m}}`$ as a function of the compression ratio $`R`$ (plotted from $`1`$ to $`7`$), while the dashed curve shows the same quantity as a function of the polytropic constant $`n=(\gamma -1)^{-1}`$ (drawn from $`n=3/2`$ to $`3`$), $`\gamma `$ being the adiabatic index. The solid curve is drawn for any generic compression ratio and the dashed curve is drawn assuming the strong shock limit only: $`R=(\gamma +1)/(\gamma -1)=2n+1`$. In both curves, $`\mathrm{\Theta }_{out}=\mathrm{\Theta }_{in}`$ has been assumed for simplicity. Note that if the compression does not take place (namely, if the denser region does not form), then there is no outflow in this model.
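Eq. (16) is simple enough to tabulate directly. The short sketch below (our own code, with hypothetical names) reproduces the strong-shock numbers quoted in the next paragraph; multiplying by the cone geometry factor $`\mathrm{\Theta }_{out}/\mathrm{\Theta }_{in}=\pi /36`$ recovers Eqs. (19a,b) as well:

```python
import math

def r_mdot(R, theta_ratio=1.0):
    """Eq. (16): outflow/inflow rate ratio for compression ratio R."""
    f0 = R * R / (R - 1.0)
    return theta_ratio * (R / 4.0) * f0**1.5 * math.exp(1.5 - f0)

for R in (1.5, 4.0, 7.0):    # R = 4 and 7 are the strong shocks for gamma = 5/3, 4/3
    print(R, r_mdot(R), r_mdot(R, math.pi / 36.0))
# R = 4 gives 0.266 and R = 7 gives 0.052; with the cones, 0.023 and 0.0045
```

As $`R\to 1`$ the ratio vanishes, since $`f_0`$ diverges and the exponential suppression wins.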
Indeed, for $`R=1`$, the ratio $`R_{\dot{m}}`$ is zero as expected. Thus the driving force of the outflow comes primarily from the hot and compressed region. The basic form is found to agree with results obtained from rigorous calculations with transonic inflow and outflow. In a relativistic inflow or for a radiation dominated inflow, $`n=3`$ and $`\gamma =4/3`$. In the strong shock limit, the compression ratio is $`R=7`$ and the ratio of the outflow and inflow rates becomes
$$R_{\dot{m}}=0.052\frac{\mathrm{\Theta }_{out}}{\mathrm{\Theta }_{in}}.$$ (17a)
For the inflow of a mono-atomic ionized gas, $`n=3/2`$ and $`\gamma =5/3`$. The compression ratio is $`R=4`$, and the ratio in this case becomes
$$R_{\dot{m}}=0.266\frac{\mathrm{\Theta }_{out}}{\mathrm{\Theta }_{in}}.$$ (17b)
Since $`f_0`$ is smaller for the $`\gamma =5/3`$ case, the density at the sonic point in the outflow is much higher (due to the exponential dependence of the density on $`f_0`$; see Eq. 7), which causes the higher outflow rate, even when the actual jump in density in the post-shock region, the location of the sonic point and the velocity of the flow at the sonic point are much lower. It is to be noted that generally for $`\gamma >1.5`$ shocks are not expected, but the centrifugal barrier supported dense region would still exist. As is clear, the entire behavior of the outflow depends only on the compression ratio $`R`$ and the collimating property of the outflow, $`\mathrm{\Theta }_{out}/\mathrm{\Theta }_{in}`$. Outflows are usually concentrated near the axis, while the inflow is near the equatorial plane. Assuming a half angle of $`10^o`$ in each case, we obtain
$$\mathrm{\Theta }_{in}=\frac{2\pi ^2}{9},\mathrm{\Theta }_{out}=\frac{\pi ^3}{162}$$
and
$$\frac{\mathrm{\Theta }_{out}}{\mathrm{\Theta }_{in}}=\frac{\pi }{36}.$$ (18)
The ratios of the rates for $`\gamma =4/3`$ and $`\gamma =5/3`$ are then
$$R_{\dot{m}}=0.0045$$ (19a)
and
$$R_{\dot{m}}=0.023,$$ (19b)
respectively. Thus, in quasi-spherical systems, in the strong shock limit, the outflow rate is at most a couple of percent of the inflow. If this assumption is dropped, then for a cold inflow the rate could be higher by about fifty percent (see Fig. 3). It is to be noted that the above expression for the outflow rate is strictly valid if the flow can be kept isothermal at least up to the sonic point. If this assumption is dropped, the expression for the outflow rate becomes dependent on several parameters. As an example, we consider a polytropic outflow of the same index $`\gamma `$ but of a different entropy function $`K`$ (we assume the equation of state to be $`P=K\rho ^\gamma `$, with $`\gamma \ne 1`$). The expression (11b) would be replaced by
$$r_c=\frac{f_0r_s}{2\gamma }$$ (20)
and Eq. (12) would be replaced by
$$na_+^2-\frac{1-\mathrm{\Gamma }}{r_s}=\left(\frac{1}{2}+n\right)a_c^2-\frac{1-\mathrm{\Gamma }}{r_c},$$ (21)
where $`n=1/(\gamma -1)`$ is the polytropic constant of the flow and $`a_+=(\gamma P_+/\rho _+)^{1/2}`$ and $`a_c=(\gamma P_c/\rho _c)^{1/2}`$ are the adiabatic sound speeds at the starting point and the sonic point of the outflow. It is easily shown that a power law temperature fall-off in the outflow ($`T\propto r^{-\beta }`$) would yield
$$R_{\dot{m}}=\frac{\mathrm{\Theta }_{out}}{\mathrm{\Theta }_{in}}\left(\frac{K_i}{K_o}\right)^n\left(\frac{f_0}{2\gamma }\right)^{\frac{3}{2}-\beta },$$ (22)
where $`K_i`$ and $`K_o`$ are the entropy functions of the inflow and the outflow. This derivation is strictly valid for a non-isothermal flow.
Since $`K_i<K_o`$, $`n>3/2`$ and $`f_0>2\gamma `$, for $`\mathrm{\Theta }_{out}\sim \mathrm{\Theta }_{in}`$, $`R_{\dot{m}}\ll 1`$ is guaranteed provided $`\beta >\frac{3}{2}`$, i.e., if the temperature falls off sufficiently rapidly. For an isothermal flow $`\beta =0`$ and the rate tends to be higher. Note that since $`n\to \mathrm{\infty }`$ in this case, any small jump in entropy due to compression will off-balance the effect of the $`f_0^{3/2}`$ factor. Thus $`R_{\dot{m}}`$ remains smaller than unity. The first factor decreases with the entropy jump while the second factor increases with the compression ratio ($`R`$) when $`\beta <3/2`$. Thus the solution is still expected to be similar to what is shown in Fig. 3.

### 3.2 Results from Rigorously obtained GIOS

When inflow and outflow solutions are obtained more rigorously, i.e., by actually making the flow pass through separate sonic surfaces, one has to include the effect of mass loss on the Rankine-Hugoniot condition at the boundary between the CENBOL and the accretion flow. Accordingly, we use
$$\dot{M}_+=(1-R_{\dot{m}})\dot{M}_{-},$$ (23)
where the subscripts $`-`$ and $`+`$ denote the pre- and post-shock values respectively. Since, due to the loss of matter in the post-shock region, the post-shock pressure goes down, the shock recedes backward for the same values of incoming energy, angular momentum and polytropic index. The combination of three changes, namely the increase in the cross-sectional area of the outflow and in the launching velocity of the outflow, and the decrease in the post-shock density, decides whether the net outflow rate is increased or decreased relative to the case when the exact Rankine-Hugoniot relation was used. Fig. 4 shows a typical global inflow-outflow solution (GIOS) where the actual transonic problem was solved. The input parameters are $`\mathcal{E}=0.0005`$, $`\lambda =1.75`$ and $`\gamma =4/3`$, corresponding to a relativistic inflow. The solid curve with an arrow represents the pre-shock region of the inflow and the long-dashed curve represents the post-shock inflow, which enters the black hole after passing through the inner sonic point (I). The solid vertical line at $`X_{s3}`$ (the leftmost vertical transition; the notation for the shock location follows earlier work) with double arrows represents the shock transition obtained with the exact Rankine-Hugoniot condition (i.e., with no mass loss). The actual shock location obtained with the modified Rankine-Hugoniot condition (Eq. 23) is farther out than the original location $`X_{s3}`$. Three vertical lines connected with the corresponding dotted curves represent three outflow solutions for the parameters $`\gamma _o=1.3`$ (top), $`1.15`$ (middle) and $`1.05`$ (bottom). The outflow branches shown pass through the corresponding sonic points. It is evident from the figure that the outflow moves along solution curves which are completely different from the ‘wind solution’ of the inflow, which passes through the outer sonic point ‘O’. The mass loss ratios $`R_{\dot{m}}`$ in these cases are $`0.256`$, $`0.159`$ and $`0.085`$ respectively. This figure is taken from the detailed computation referred to above; Fig. 3 of that work shows the general behaviour of the variation of $`R_{\dot{m}}`$ with the compression ratio of the shock. This agrees well with what is obtained analytically (Fig. 3 of this paper).
Das & Chakrabarti , for the first time, pointed out that in certain region of the parameter space, the mass loss could be so much that disk would be almost evacuated and conjectured that inactive phases of a black hole (such as the present day Sgr A\* at our galactic center) may be formed when such profuse mass loss takes place. It is to be noted that although the existence of outflows are well known, their rates are not. The only definite candidate whose outflow rate is known with any certainty is probably SS433 whose mass outflow rate was estimated to be $`\dot{M}_{out}\stackrel{>}{}1.6\times 10^6f^1n_{13}^1D_5^2M_{}`$ yr<sup>-1</sup> , where $`f`$ is the volume filling factor, $`n_{13}`$ is the electron density $`n_e`$ in units of $`10^{13}`$ cm<sup>-3</sup>, $`D_5`$ is the distance of SS433 in units of $`5`$kpc. Considering a central black hole of mass $`10M_{}`$, the Eddington rate is $`\dot{M}_{Ed}0.2\times 10^7M_{}`$ yr<sup>-1</sup> and assuming an efficiency of conversion of rest mass into gravitational energy $`\eta 0.06`$, the critical rate would be roughly $`\dot{M}_{crit}=\dot{M}_{Ed}/\eta 3.2\times 10^7M_{}`$ yr<sup>-1</sup>. Thus, in order to produce the outflow rate mentioned above even with our highest possible estimated $`R_{\dot{m}}0.4`$ (see, Fig. 2), one must have $`\dot{M}_{in}12.5\dot{M}_{crit}`$ which is very high indeed. One possible reason why the above rate might have been over-estimated would be that below $`10^{12}`$cm from the central mass , $`n_{13}>>1`$ because of the existence of the dense region at the base of the outflow. In numerical simulations the ratio of the outflow and inflow has been computed in several occasions . Eggum et al. found the ratio to be $`R_{\dot{m}}0.004`$ for a radiation pressure dominated flow. This is generally comparable with what we found above (eq. 19a). In Molteni et al. the centrifugally driven outflowing wind generated a ratio of $`R_{\dot{m}}0.1`$. Here, the angular momentum was present in both inflow as well as outflow, and the shock was not very strong. Thus, the result is again comparable with what we find here. ### 3.3 Spectral Softening in Presence of Profuse Mass Loss Creation of winds out of accretion disk has an interesting consequence. As more fraction of matter is expelled, it becomes easier for the soft photons from the Keplerian disk (see Fig. 2a) surrounding the advective region to cool this region due to Comptonization (see, Chakrabarti & Titarchuk for details). In other words, the presence of winds would soften the spectra of the power-law component for the same multicolour blackbody component. Such an observation would point to profuse mass loss from the hot advective region. The model we use here is the two component accretion flow which has a CENBOL close to the hole. We assume a weakly viscous flow of constant specific angular momentum $`\lambda =1.75`$ (in units of $`2GM/c`$) for the sake of concreteness. The Keplerian component close to the equatorial plane has a low accretion rate ($`\dot{M}_{in}0.050.3`$ in units of the Eddington rate) and the sub-Keplerian halo surrounding it has a higher rate ($`\dot{M}_h1`$ in units of Eddington rate). Before the accreting matter hits the inner advective region, both the rates are constant, but as Das & Chakrabarti has shown, winds, produced from CENBOL will deplete the disk matter at the rate determined by the temperature of the CENBOL, when other parameters, such as the specific angular momentum and specific energy are kept fixed. 
Figures 5 and 6 show the outcome of our calculation of the spectra for three different accretion rates of the Keplerian component $`\dot{M}_{in}`$ . The mass of the central black hole is chosen to be $`M=10M_{\odot }`$. The size of the CENBOL is assumed to be $`10r_g`$ (where $`r_g`$ is the Schwarzschild radius), a typical location for a sub-Keplerian flow of average specific angular momentum $`\lambda =1.75`$ and specific energy $`\mathcal{E}=0.003`$. Following Das & Chakrabarti , we first compute the mass outflow rate from the advective region. The long-dashed curve in Fig. 5 shows the variation of the percentage of mass loss (vertical axis on the right) as a function of the inflow accretion rate. The dotted curve and the solid curve denote the variation of the energy spectral index $`\alpha `$ ($`F_\nu \propto \nu ^{-\alpha }`$) with and without winds taken into account. Note that the spectrum is overall softened ($`\alpha `$ increases) when winds are present. For higher Keplerian rates, the mass loss through winds is negligible and therefore there is virtually no change in the spectral index. For lower inflow rates, on the other hand, the mass loss rate is more than twenty percent. It is easier to Comptonize the depleted matter with the same number of incoming soft photons and therefore the spectrum is softened. In Fig. 6, we show the resulting spectral change. As in Fig. 5, solid curves represent solutions without winds and dotted curves represent solutions with winds. Solid curves are drawn for $`\dot{M}_{in}=0.3`$ (uppermost at the bump), $`0.15`$ (middle at the bump) and $`0.07`$ (lowermost at the bump) respectively. For $`\dot{M}_{in}=0.3`$ both curves are identical. Note the crossing of the solid curves at around $`10^{18.6}`$ Hz ($`\sim 15`$ keV) when winds are absent. This is regularly observed in black hole candidates. If this crossing is shifted to higher energies, the presence of winds may be indicated. Strong winds are suspected to be present in Sgr $`A^{*}`$ at our Galactic Center (see Genzel et al. for a review, and Eckart & Genzel ). Chakrabarti suggested that the inflow could be an almost constant-energy transonic flow, so that the emission is inefficient. However, from global inflow-outflow solutions \[GIOS\], Das & Chakrabarti showed that when the inflow rate itself is low (as is the case for Sgr $`A^{*}`$; $`10^{-3}`$ to $`10^{-4}\dot{M}_{Eddington}`$) the mass outflow rate is very high, almost to the point of evacuating the disk. This prompted them to speculate that the spectral properties of our Galactic Center could be explained by the inclusion of winds. This will be done in the near future. Not only for our Galactic Center, this consideration should be valid for all black hole candidates (e.g., V404 Cyg) which are seen in quiescence. Chakrabarti & Titarchuk suggested that the iron K<sub>α</sub> line as well as the so-called ‘reflection component’ could be due to outflows off the advective region. Combined with the present work, we may conclude that a simultaneous enhancement of the ‘reflection component’ and/or the iron K<sub>α</sub> line intensity with the softening of the spectrum in hard X-rays would be a sure signature of the presence of significant winds in the advective region of the disk.

References

1. Shakura, N.I. & Sunyaev, R.A. 1973, Black holes in binary systems: Observational appearance, Astr. Ap., 24, 337-355
2. Longair, M.S., Ryle, M. & Scheuer, P.A.G. 1973, Models of extended radio sources, Mon. Not. R. Astron. Soc., 164, 243-250
3. Maraschi, L., Reina, C., & Treves, A. 1976, The effect of radiation pressure on accretion disks around black holes, Astrophys. J., 206, 295-300
4. Paczyński, B. & Wiita, P.J. 1980, Thick accretion disks and supercritical luminosities, Astron. Ap., 88, 23-31
5. Liang, E.P.T. & Thompson, K.A. 1980, Transonic disk accretion onto black holes, Astrophys. J., 240, 271-274
6. Paczyński, B. & Bisnovatyi-Kogan, G. 1981, Acta Astron., 31, 283-293
7. Muchotrzeb, B. & Paczyński, B. 1982, Transonic accretion flow in a thin disk around a black hole, Acta Astron., 32, 1-11
8. Chakrabarti, S.K. 1996, in Accretion Processes on Black Holes, Physics Reports, 266, No. 5 & 6, p. 229-390
9. Blandford, R.D. & Rees, M.J. 1974, A ‘twin-exhaust’ model for double radio sources, Mon. Not. R. Astron. Soc., 169, 395-415
10. Znajek, R.L. 1978, Charged current loops around Kerr holes, Mon. Not. R. Astron. Soc., 182, 639-646
11. Blandford, R.D. & Payne, D.G. 1982, Hydromagnetic flows from accretion discs and the production of radio jets, Mon. Not. R. Astron. Soc., 199, 883-903
12. Begelman, M.C. & Rees, M.J. 1984, The cauldron at the core of SS 433, Mon. Not. R. Astron. Soc., 206, 209-220
13. Königl, A. 1989, Self-similar models of magnetized accretion disks, Astrophys. J., 342, 208-223
14. Chakrabarti, S.K. & Bhaskaran, P. 1992, Mon. Not. R. Astron. Soc., 255, 255-260
15. Contopoulos, J. 1995, Force-free self-similar magnetically driven relativistic jets, Astrophys. J., 446, 67-74
16. Chakrabarti, S.K. 1984, in Active Galactic Nuclei, ed. J. Dyson (Manchester University Press), 346-350
17. Chakrabarti, S.K. 1985, Astrophys. J., 288, 7-13
18. Hawley, J.W., Wilson, J. & Smarr, L. 1984, A numerical study of nonspherical black hole accretion: I. Equations and test problems, Astrophys. J., 277, 296-311
19. Eggum, G.E., Coroniti, F.V., & Katz, J.I. 1985, Jet production in super-Eddington accretion disks, Astrophys. J., 298, L41-L45
20. Bondi, H. 1952, On spherically symmetrical accretion, Mon. Not. R. Astron. Soc., 112, 195-199
21. Parker, E.N. 1979, Cosmical Magnetic Fields (Clarendon Press: Oxford)
22. Chakrabarti, S.K. 1989, Astrophys. J., 347, 365-372
23. Chakrabarti, S.K. 1990, Theory of Transonic Astrophysical Flows (Singapore: World Sci.)
24. Chakrabarti, S.K. 1996, Astrophys. J., 471, 237-247
25. Chakrabarti, S.K. 1990, Mon. Not. R. Astron. Soc., 243, 610-619
26. Chakrabarti, S.K. 1996, Astrophys. J., 464, 664-683
27. Chakrabarti, S.K. 1998, in ‘Observational Evidence for Black Holes in the Universe’, ed. S.K. Chakrabarti (Kluwer Academic Publishers: Dordrecht), 19-48, astro-ph/9807104
28. Chakrabarti, S.K. & Molteni, D. 1993, Astrophys. J., 417, 671-676
29. Molteni, D., Lanzafame, G. & Chakrabarti, S.K. 1994, Simulation of thick accretion disks with standing shocks by smoothed particle hydrodynamics, Astrophys. J., 425, 161-170
30. Chakrabarti, S.K. & Molteni, D. 1995, Mon. Not. R. Astron. Soc., 272, 80-88
31. Molteni, D., Ryu, D. & Chakrabarti, S.K. 1996, Numerical simulations of standing shocks in accretion flows around black holes: A comparative study, Astrophys. J., 470, 460
32. Chakrabarti, S.K. & Titarchuk, L.G. 1995, Astrophys. J., 455, 623-639
33. Zhang, S.N., Cui, W., Harmon, B.A., Paciesas, W.S., Remillard, R.E., & van Paradijs, J. 1997, The 1996 soft state transition of Cygnus X-1, Astrophys. J., 477, L95-L100
34. Gilfanov, M., Churazov, E. & Sunyaev, R.A. 1997, Spectral and temporal variations of the X-ray emission from black hole and neutron star binaries, in Accretion Disks – New Aspects, eds. E. Meyer-Hofmeister & H. Spruit (Springer: Heidelberg)
35. Sunyaev, R.A., et al. 1994, Observations of X-ray novae in Vela 1993, Ophiuchus 1993 and Perseus 1992 using the instruments of the MIR/KVANT module, Astron. Lett., 20, 777-784
36. Ebisawa, K. et al. 1994, Spectral evolution of the bright X-ray nova GS 1124-68 (Nova Muscae 1991) observed with Ginga, P.A.S.J., 46, 375-394
37. Molteni, D., Sponholz, H. & Chakrabarti, S.K. 1996, X-rays from shock waves in accretion flows around black holes, Astrophys. J., 457, 805-812
38. Ryu, D., Chakrabarti, S.K., & Molteni, D. 1997, Zero-energy rotating accretion flows near a black hole, Astrophys. J., 474, 378-388
39. Titarchuk, L.G., Mastichiadis, A., & Kylafis, N.G. 1997, X-ray spectral formation in a converging fluid flow: Spherical accretion into black holes, Astrophys. J., 487, 834-850
40. Das, T. & Chakrabarti, S.K. 1997, Astrophys. J. (submitted), astro-ph/9809109
41. Das, T.K. 1998a, in ‘Observational Evidence for Black Holes in the Universe’, ed. S.K. Chakrabarti (Kluwer Academic: Holland), p. 113-122, astro-ph/9807105
42. Das, T.K. 1998b, (this volume)
43. Chakrabarti, S.K. 1998, Indian Journal of Physics, 72B, 565-569, astro-ph/9810412
44. Chakrabarti, S.K., Titarchuk, L.G., Kazanas, D. & Ebisawa, K. 1996, A & A Supp. Ser., 120, 163-166
45. Chakrabarti, S.K. 1997, Astrophys. J., 484, 313-322
46. Paul, B., Agrawal, P.C., Rao, A.R., et al. 1998, Quasi-regular X-ray bursts from GRS 1915+105 observed with the IXAE: Possible evidence for matter disappearing into the event horizon of the black hole, Astrophys. J., 492, L63-L67
47. Chang, K.M. & Ostriker, J.P. 1985, Standing shocks in accretion flows onto black holes, Astrophys. J., 288, 428-437
48. Kazanas, D. & Ellison, D.C. 1986, The central engine of quasars and active galactic nuclei: Hadronic interactions of shock-accelerated relativistic protons, Astrophys. J., 304, 178-187
49. Tarafdar, S.P. 1988, A unified formula for mass-loss rate of O to M stars, Astrophys. J., 331, 932-938
50. Priest, E.R. 1982, Solar Magnetohydrodynamics (Dordrecht, Holland)
51. Kopp, R.A. & Holzer, T.E. 1976, Dynamics of coronal hole regions. I - Steady polytropic flows with multiple critical points, Sol. Phys., 49, 43-56
52. Pauldrach, A., Puls, J., & Kudritzki, R.P. 1986, Radiation-driven winds of hot luminous stars - Improvements of the theory and first results, Astr. Ap., 164, 86-100
53. Chattopadhyay, I. 1998, (this volume)
54. Watson, M.G., Stewart, G.C., Brinkmann, W., & King, A.R. 1986, Doppler-shifted X-ray line emission from SS433, Mon. Not. R. Astron. Soc., 222, 261-271
55. Genzel, R., Hollenbach, D. & Townes, C.H. 1994, The nucleus of our Galaxy, Rep. Prog. Phys., 57, 417-479
56. Eckart, A. & Genzel, R. 1997, Stellar proper motions in the central 0.1 pc of the Galaxy, Mon. Not. R. Astron. Soc., 284, 576-598

FIGURE CAPTIONS

Fig. 1: Classification of the parameter space (central box) in the energy-angular momentum plane in terms of the various topologies of black hole accretion. The eight surrounding boxes show the solutions from each of the independent regions of the parameter space. Each small box shows the Mach number $`M`$ against the logarithmic radial distance $`r`$ (measured in units of $`GM_{BH}/c^2`$). Contours are of constant entropy accretion rate $`\dot{\mathcal{M}}`$. A similar classification is possible for all adiabatic indices $`\gamma <1.5`$. For $`\gamma >1.5`$, only the inner sonic point is possible, other than an unphysical ‘O’-type point (Chakrabarti 1996c).
Fig. 2: a) Cartoon diagram of a very general inflow-outflow configuration of non-magnetized matter around a compact object. Keplerian and sub-Keplerian matter accretes along the equatorial plane. Centrifugally and thermally driven outflows preferentially emerge between the centrifugal barrier and the funnel wall. b) Schematic diagram of inflow and outflow around a compact object. The hot, dense region around the object, formed either by the centrifugal barrier or by plasma pressure effects or pre-heating, acts like a ‘stellar surface’ from which the outflowing wind is developed.

Fig. 3: Ratio $`R_{\dot{m}}`$ of the outflow rate to the inflow rate as a function of the compression ratio of the gas at the dense-region boundary (solid curve). Also shown as a dashed curve is its variation with the polytropic constant $`n`$ in the strong shock limit. The solid angles subtended by the inflow and the outflow are assumed to be comparable (Chakrabarti 1998a).

Fig. 5: Variation of the percentage of mass loss (long-dashed curve and right axis) and of the energy spectral index $`\alpha `$ ($`F_\nu \propto \nu ^{-\alpha }`$) (solid and dotted curves and left axis) with the accretion rate (in units of the Eddington rate) of the Keplerian component. The solid curve is drawn when winds from the advective region are neglected and the dotted curve includes the effect of winds. The overall spectrum is softened when the inflow rate is reduced (Chakrabarti 1998b).

Fig. 6: Spectra of the emitted radiation from the accretion disk with (dotted) and without (solid) the effects of winds. The hard X-ray component is softened while the soft X-ray bump is kept unchanged (Chakrabarti 1998b).

Fig. 4: A few typical solutions which combine accretion and outflow. Input parameters are $`\mathcal{E}=0.0005`$, $`\lambda =1.75`$ and $`\gamma =4/3`$. The solid curve with an incoming arrow represents the pre-shock region of the inflow and the dashed curve with an incoming arrow represents the post-shock inflow which enters the black hole after passing through the inner sonic point (I). Dotted curves are the outflows for various $`\gamma _o`$ (marked). Open circles are the sonic points of the outflowing winds and the crossing point ‘O’ is the outer sonic point of the inflow. The leftmost shock transition ($`X_{s3}`$) is obtained from the unmodified Rankine-Hugoniot condition, while the other transitions are obtained when the mass outflow is taken into account.
## Abstract

We investigate the evolution of RR Tel after the outburst by fitting the emission spectra at two epochs. The first (1978) is characterized by large fluctuations in the light curve and the second (1993) by the slow fading trend. In the frame of a colliding-wind model two shocks are present: the reverse shock propagates in the direction of the white dwarf and the other one expands towards or beyond the giant. The results of our modeling show that in 1993 the expanding shock has overcome the system and is propagating in the nearby ISM. The large fluctuations observed in the 1978 light curve result from line-intensity rather than from continuum variations. These variations are explained by fragmentation of matter at the time of the head-on collision of the winds from the two stars. A high-velocity (500 $`\mathrm{km}\mathrm{s}^{-1}`$) wind component is revealed by the fit of the SED of the continuum in the X-ray range in 1978, but is quite unobservable in the line profiles. The geometrical thickness of the emitting clumps is the critical parameter which can explain the short-time-scale variability of the spectrum and the trend of slow line-intensity decrease.

## 1 Introduction

RR Tel belongs to the group of symbiotic novae. These are distinguished from classical symbiotic stars by having undergone a single major outburst from which they have not yet recovered. The outburst of RR Tel started in 1944, when the system brightened by 7 mag. The light curve after the outburst shows a gradual fading which is still in progress. Besides the general decrease, the light curve is marked by periods of large fluctuations. The system is believed to be a binary, consisting of a nuclear-burning white dwarf (WD) and a late-type companion, and an emission nebula photoionized by the hot star. The cool component is a Mira-type variable, with a pulsation period of 387 days (Feast et al. 1983). Weak TiO absorption bands of spectral type about M5 were observed by Thackeray (1977). The system has been monitored in many different spectral regions, ranging from radio to X-rays, and, recently, was observed by the HST. In spite of being extensively studied (Aller et al. 1973, Thackeray 1977, Hayes & Nussbaumer 1986, etc.), many questions about RR Tel remain unsolved. The emission line spectra of symbiotic stars are usually calculated using a classical model where the main excitation mechanism is photoionization by the hot star (Murset et al. 1991). However, in the last few years, the importance of the winds from both components in symbiotic systems and of their collision has been addressed by many authors (e.g. Wallerstein et al. 1984, Girard & Willson 1987, Nussbaumer & Walder 1993, etc.). Recently a model has been successfully applied to the calculation of the line and continuum spectra of HM Sge (Formiggini, Contini, & Leibowitz 1995) and AG Peg (Contini 1997). This model is able to fit the high- and low-ionization emission lines by taking into account both the shock created by the winds and the photoionization flux from the hot star. In this paper we investigate the evolution of RR Tel through the interpretation of both the line and continuum spectra at different epochs. Both the photoionizing flux from the hot star and the collision of the winds are considered; therefore the calculations were performed with the SUMA code. SUMA (Contini 1997 and references therein) consistently accounts for both the radiation from the hot star and the shock in a plane-parallel geometry.
Dust reradiation is also calculated consistently. In §2 the evolution of the outburst is described. In §3 the general model is presented. The spectra in 1978 and 1993 are discussed in §4 and §5, respectively. The model results at the two epochs 1978 and 1993 are compared in §6 and conclusions follow in §7.

## 2 The evolution of the outburst

Fig. 1 shows the visual light curve of RR Tel between the years 1949 and 1995, which was obtained by collecting data from various sources (AAVSO circulars, Heck & Manfroid 1985, and Bateson 1995). A period of 387 days was observed (Heck & Manfroid 1982) for the system after 1930, the same as that observed in the infrared (Feast et al. 1983). It can be noticed that a general decreasing trend started in 1949 and is still in progress. On top of the fading trend the curve displays a decrease slightly larger than 0.5 mag in the years 1962-1963 and particularly large fluctuations in the years 1975-1982. During these years the cool companion remained quite stable, as shown by many observations collected at infrared wavelengths, where the cool star dominates. In fact a periodicity of 387 days was detected in JHKL photometric data during 1975-1981 by Feast et al. (1983). After these events the system recovered the general fading trend. Penston et al. (1983) reported optical photometric variations from night to night and within a single night between 1975 and 1982. We recall that the line spectrum is very rich, and that some strong lines fall in the passbands of the broad-band photometry. It is then likely that large fluctuations in the line intensities are the source of the strong variations in the visual light curve. However, the question of whether variations in the continuum or in the line intensities are the cause of the photometric fluctuations is still open. The outburst event led initially to an extended atmosphere around the white dwarf. The nebular phase emerged some years later and is characterized by a wind with a terminal velocity increasing from 400 $`\mathrm{km}\mathrm{s}^{-1}`$ in 1949 to 1300 $`\mathrm{km}\mathrm{s}^{-1}`$ in 1960 (Thackeray 1977). However, when IUE became operational in 1978, Penston et al. (1983) measured a terminal velocity of about 75 $`\mathrm{km}\mathrm{s}^{-1}`$, indicating that the wind had decelerated from initial velocities of thousands of $`\mathrm{km}\mathrm{s}^{-1}`$ to less than 100 $`\mathrm{km}\mathrm{s}^{-1}`$. As the outburst evolved, emission lines from high ionization levels appeared (Viotti 1988). The emission line spectra of RR Tel were studied by Penston et al. (1983), Hayes & Nussbaumer (1986), Aufdenberg (1993), etc. Recently Zuccolo, Selvelli, & Hack (1997) (hereafter ZSH) retrieved all the available IUE spectra up to 1993 and, after an accurate reduction and correction for spectral artifacts, saturation and other instrumental effects, presented a homogeneous and complete list of the emission lines in the UV range. ZSH claim that the variations observed in the emission line intensities from 1978 to 1993 show a rather strong correlation with the line ionization level. The decrease is by a factor larger than 3 and up to 10 for low-ionization lines and between 2 and 3 for medium- and high-ionization lines. The fluxes of some representative lines from ZSH Table 1 are plotted in Fig. 2. It can be noticed that among the high-ionization level lines, only \[MgVI\] 1806 increased between 1983 and 1993. It seems that the conditions of the system did not change significantly.
Hayes & Nussbaumer (1986) already noticed a lack of periodic variation in the UV line fluxes over the period 1978-1984, finding only a slight decrease of the intensities. A modulation of the line intensities with orbital phase is observed in symbiotic stars (e.g. Kenyon et al. 1993 for AG Peg). The lack of periodicity of the RR Tel line intensities over a 15-year span (1978-1993) can be explained either by the system being face-on or by a period longer than 15 yr. Orbital periods of symbiotic Miras are believed to be longer than 20 years (Whitelock 1988). For the modeling purpose, we consider the system at two representative epochs: one in 1978, when the system underwent high variability in the visual light on small time scales, and the other in 1993, after recovering the slow fading trend. We take advantage of the ZSH collection of data for the fitting of the model calculations.

## 3 The general model

A description of the colliding-wind model is given by Girard & Willson (1987). Recall that the cool component of RR Tel is a Mira, characterized by a strong mass loss rate that produces an extended atmosphere. The outburst is the starting point of the wind from the hot star. The winds from the two components approach from opposite directions and collide head-on in the region between the stars and head-on-tail outside it. It is plausible that fragmentation of matter into geometrically thin clumps results from the turbulence caused by the collision. The WD ejecta propagate in the giant atmosphere towards higher densities. Due to the collision two main shocks appear: the reverse shock and the expanding one. The reverse shock propagates back into the WD ejecta, facing the WD. Therefore, the photoionizing radiation flux from the WD reaches the very shock front. On the other hand, the secondary shock, expanding towards the red giant, propagates in a denser medium; therefore, its velocity is lower than that of the reverse one. In this case the radiation from the WD reaches the matter downstream, on the side opposite to the shock front. A simple sketch of the model at different epochs is given in Figs. 3a and 3b. The initially high velocity of the WD wind will produce high densities and temperatures behind the shock front, and hence hard X-ray emission would be expected. However, after propagating and sweeping up additional material, the shock decelerates. Lower shock velocities correspond to lower temperatures downstream, and the ionized region behind the shock front becomes visible in the UV and optical spectra. In our model of RR Tel two gas regions contribute to the emission spectra: the region downstream of the reverse shock and that downstream of the expanding one. In this paper we consider the system at the two epochs 1978 and 1993. The input parameters for the calculation of the spectra emitted by SUMA are the following: the shock velocity, $`\mathrm{V}_\mathrm{s}`$; the preshock density, $`\mathrm{n}_0`$; the magnetic field, $`\mathrm{B}_0`$; the colour temperature of the hot star, $`\mathrm{T}_{*}`$; the ionization parameter, U; the relative abundances of the elements; and d/g, the dust-to-gas ratio by number. The models are matter-bound and the geometrical thickness of the emitting clumps, d, is defined by the best-fit results. The choice of the input parameters is based on the observational evidence at the epochs selected for modeling. The densities are constrained by the critical line ratios (Nussbaumer & Schild 1981) and the shock velocities by the FWHM of the line profiles.
For all models $`\mathrm{T}_{*}=1.4\times 10^5`$ K (Murset et al. 1991) and $`\mathrm{B}_0=10^{-3}`$ gauss, a value suitable for red giant atmospheres (Bohigas et al. 1989), are adopted. Relative abundances are calculated phenomenologically within the range of values which are consistent with CV systems.

## 4 The spectra in 1978

The models which best fit the observational data are selected from a grid of model calculations. Notice, however, that whenever different regions of emitting gas coexist in one object, the observed spectrum contains contributions from all of them. Therefore, although some models better fit the high-ionization lines and others the low-ionization lines, the most probable model must consistently account for all the lines and the continuum.

### 4.1 The line spectrum in the UV

In Table 1 the models are described and in Table 2 the calculated line ratios (relative to CIV 1533+1548 = 100) are compared with the observed data from ZSH. The arrows indicate that the multiplet is summed up. The FWHM of the line profiles from the observations are listed in column 3 of Table 2. Penston et al. (1983) measured in 1978 a terminal velocity of the wind of about 75 $`\mathrm{km}\mathrm{s}^{-1}`$, indicating that the wind had decelerated from initial velocities of thousands of $`\mathrm{km}\mathrm{s}^{-1}`$ to less than 100 $`\mathrm{km}\mathrm{s}^{-1}`$. Large features were actually not observed; however, wide feet of the high-level lines are not excluded by Nussbaumer & Dumm (1997) in later spectra. We consider here the most significant lines. The spectrum in 1978 contains lines from relatively high levels (OV\], NV, etc.) down to neutral lines. As already noticed by ZSH, the broader the line, the higher the level of the ion. The line widths range between 70 $`\mathrm{km}\mathrm{s}^{-1}`$ and 30 $`\mathrm{km}\mathrm{s}^{-1}`$. The profiles are separated into two groups, one with FWHM $`\sim `$ 40 $`\mathrm{km}\mathrm{s}^{-1}`$ and the other with FWHM up to $`\sim `$ 80 $`\mathrm{km}\mathrm{s}^{-1}`$. It is known that broad and narrow profiles of single lines characterize most symbiotic spectra. In RR Tel the two main widths are very close to each other and difficult to distinguish. Model REV1 (Table 1), with a shock velocity of 80 $`\mathrm{km}\mathrm{s}^{-1}`$ and a density of $`10^6`$ $`\mathrm{cm}^{-3}`$, represents the reverse shock. This model nicely fits the high-ionization lines (from levels higher than III). NIV\] 1386 is overestimated by a factor of 4, while \[Mg V\] 1324 is underestimated, but the identification of this line is doubtful (ZSH). An alternative model for the reverse shock is REV2, characterized by a lower density ($`10^5`$ $`\mathrm{cm}^{-3}`$), corresponding to a different location of the emitting clump in the system. Model EXP represents the shock propagating towards the red giant, and is characterized by a velocity of 45 $`\mathrm{km}\mathrm{s}^{-1}`$ and a density of 2 $`\times 10^6`$ $`\mathrm{cm}^{-3}`$, as indicated by the observations. The absolute intensity of CIV is extremely low for $`\mathrm{V}_\mathrm{s}<`$ 50 $`\mathrm{km}\mathrm{s}^{-1}`$ and U $`<`$ 0.01; therefore, the line ratios relative to CIV = 100 are very high for this model. However, these conditions are necessary to obtain strong neutral lines. In fact, this model is constrained by the ratio of the O I / C II line intensities. The observed O I lines are relatively strong and show that the emitting clumps should contain a large region of cool recombined gas (see §6). In the last two columns of Table 2 the composite models SUM1 and SUM2 are listed.
These result from the weighted sum of the contribution of the expanding shock (EXP) spectrum and that emitted by the reverse shocks REV1 and REV2, respectively. The relative weights adopted are given at the bottom of Table 1. The fit presented in Table 2 is quite good, taking into account the inhomogeneity of the emitting matter in the colliding region and the uncertainty of some atomic data in the code. The spectrum is so rich in the number of lines that the fitting procedure constrains the model, thus providing the most probable description of the system. It seems that the best fit to the observed data is obtained by model SUM1. The relative abundances calculated phenomenologically for models SUM1 and SUM2 are in agreement with those adopted in the model of Hayes & Nussbaumer (1986) within a factor of 2 (Table 1), except for Mg/H, which is higher by a factor $`>`$ 20. Model XX, which is also listed in Table 1, is characterized by a high $`\mathrm{V}_\mathrm{s}`$ and a high $`\mathrm{n}_0`$ and will be discussed in the context of the continuum radiation in the next section. In fact, the full modeling of an object by the calculation of the emitted spectra implies cross-checking the results obtained for the line spectra with those obtained for the spectral energy distribution (SED) of the continuum. This procedure is necessary because it sometimes reveals that models other than those indicated by the fit of the line spectra must be taken into consideration.

### 4.2 The continuum

Model calculations are compared with the observations in Fig. 4. The observed data are taken from Nussbaumer & Dumm (1997) in the wavelength range between 1280 and 2600 Å and from Kenyon et al. (1986) in the IR. RR Tel was detected as an X-ray source in 1978 by Einstein (Kwok & Leahy 1984) and in 1992 by ROSAT (Jordan, Murset, & Werner 1994). The data in the X-rays are taken from Fig. 3 of Kwok & Leahy (1984). The SEDs of the continua calculated by models REV1 and EXP are represented by solid lines and dotted lines, respectively. For each model, besides the curve representing bremsstrahlung emission from the gas, reradiation by dust is represented by the curve in the IR ($`\nu <10^{14}`$ Hz). Generally, the temperature of the grains downstream follows the trend of the temperature of the gas (Viegas & Contini 1994, Fig. 3); therefore, the location of the dust reradiation peak depends on the shock velocity. On the other hand, the intensity of the dust radiation flux relative to the bremsstrahlung emission depends on the dust-to-gas parameter, d/g. The fit of model REV1 to the data in the IR yields d/g = 5 $`\times 10^{-15}`$, whereas for model EXP it yields d/g = $`10^{-14}`$. Since model EXP represents a shock front closer to the giant, the higher d/g implies a dustier atmosphere. Fig. 4 shows that the radiation flux calculated by model REV1 strongly dominates over that calculated by EXP, and that some absorption is present in the low-frequency domain. As for the X-ray domain ($`10^{16}`$ - $`10^{18}`$ Hz), the data show emission from a gas at a temperature between $`10^6`$ and $`10^7`$ K. Jordan et al. (1994) suggested that such a plasma could be produced by a fast wind with a velocity of $`\sim `$ 500 $`\mathrm{km}\mathrm{s}^{-1}`$. This emission can be obtained neither by the models which fit the emission lines nor by a black-body spectrum corresponding to a colour temperature of 1.4 $`\times 10^5`$ K (dot-dashed line in Fig. 4).
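The composite models referred to above are built by a simple weighted sum of the single-shock spectra, renormalized to CIV = 100. A minimal sketch of this bookkeeping is given below; the line intensities and weights are placeholders for illustration only, not the actual values of Tables 1 and 2.

```python
# Weighted-sum sketch for composite models such as SUM1:
# combine single-shock line spectra and renormalize to CIV = 100.
def composite(spectra, weights, norm_line="CIV"):
    lines = set().union(*(s.keys() for s in spectra))
    total = {l: sum(w * s.get(l, 0.0) for s, w in zip(spectra, weights))
             for l in lines}
    scale = 100.0 / total[norm_line]
    return {l: scale * v for l, v in total.items()}

# Placeholder line ratios (relative to CIV = 100), NOT Table 2 values.
rev1 = {"CIV": 100.0, "NV": 40.0, "OI": 2.0}
exp = {"CIV": 100.0, "NV": 1.0, "OI": 900.0}

sum1 = composite([rev1, exp], weights=[1.0, 0.01])
print({l: round(v, 2) for l, v in sum1.items()})
```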
Actually, the bremsstrahlung emission calculated by a model characterized by a high shock velocity (model XX in Table 1) nicely fits the data at the higher frequencies (long-dashed lines in Fig. 4). Notice that the point at $`\nu `$ = 3 $`\times 10^{16}`$ Hz lies below the calculated curve because it is affected by absorption in the interstellar medium (ISM) (Zombeck 1990). The high-velocity model XX ($`\mathrm{V}_\mathrm{s}`$ = 500 $`\mathrm{km}\mathrm{s}^{-1}`$) could explain the wide foot underneath the nebular emission of the CIV, NV, and HeII lines observed by Nussbaumer & Dumm (1997). However, since large FWHM are not observed in the line profiles, the contribution of the spectrum calculated by the high-velocity model to the line fluxes should be negligible. The line intensities calculated by model XX depend strongly on the geometrical thickness of the emitting clump (d). To obtain line intensities lower than those from model REV1, d should be smaller than 6 $`\times 10^{13}`$ cm. A high $`\mathrm{V}_\mathrm{s}`$ is justified in a colliding scenario, as the wind collision leads to the disruption of the colliding matter and to the formation of clumps. The fast wind will propagate almost unaffected in the voids between the clumps. An alternative scenario is represented by model X0 (short-dashed lines in Fig. 4), characterized by $`\mathrm{V}_\mathrm{s}`$ = 500 $`\mathrm{km}\mathrm{s}^{-1}`$ but with a lower density (about $`10^5`$ $`\mathrm{cm}^{-3}`$). The high-temperature downstream region in the clumps is rather extended ($`>`$ 5 $`\times 10^{14}`$ cm) and leads to the emission which fits the high-frequency data and to the required, almost negligible, line fluxes.

### 4.3 Variability on short time scales

The photometric light curve of RR Tel in 1978 is characterized by large fluctuations in the visual band (see §2). Penston et al. (1983) addressed this problem but could not conclude whether these variations are caused by line or by continuum variations. Actually, \[Cl III\] 5539, \[OI\] 5577, \[FeVI\] 5631, and \[NII\] 5755 are the strongest lines which enter the broad-band visual filter. In our model the shocks propagate through disrupted matter, heating and ionizing clumps of different geometrical widths. We present in Fig. 5 the calculated fluxes of these lines and of the continuum as functions of the distance from the shock front for model REV1. This distance represents the geometrical width of the clumps, since the models are matter-bound. It can be seen that, particularly between $`10^{13}`$ and $`10^{14}`$ cm, the line flux increases with distance from the shock front faster than the continuum. This is a critical range for the thickness of the clumps, and within this range the line variability dominates the variability of the continuum in the light curve.

## 5 The spectra in 1993

We have calculated the spectra in 1993 in the frame of the scenario proposed in §2. The models which best fit the observations are given in Table 3 and the line ratios are compared with the observations in Tables 4a and 4b.

### 5.1 The UV spectrum

In the 1993 UV spectrum the high-ionization lines dominate, but the OI and CII lines are still present, although weakened. Therefore, a region of gas emitting low-ionization-level and neutral lines must still be present. This corresponds to a model with a low $`\mathrm{V}_\mathrm{s}`$, a low or even absent U, and a large zone of emitting gas at low temperature. Similar conditions were adopted to fit the 1978 spectrum (model EXP).
In other words, the contribution of the expanding shock to the emitted spectrum is unavoidable. Notice that the flux intensity of CIV has decreased by a factor of $`>`$ 2 with respect to the 1978 flux. In Table 4a the line intensities relative to CIV = 100 are listed together with the FWHM from ZSH and the observed line ratios (columns 3 and 4). Two models for the reverse shock follow in columns 5 and 6, and three models representing the expanding shock are given in columns 7, 8, and 9. In the last three columns of Table 4a the composite models sum1, sum2, and sum3 (which best fit the observations) are presented. The relative weights of the single contributions in these models are listed at the bottom of Table 3. The best fit to the 1993 high-level line spectrum, even if rough, is obtained by model rev1, which is characterized by a high $`\mathrm{n}_0`$. In particular, the calculated intensity of the \[MgVI\] 1806 line increased by a factor of 1.9 from 1978. This trend agrees with the data listed by ZSH (Fig. 2). The other parameters of model rev1 (Table 3) suggest that, compared with 1978, the shock velocity has slightly decreased, the densities are higher, and U is lower than for model REV1 (see Table 1). A similar situation was found in the evolution of HM Sge (Formiggini et al. 1995). On the other hand, comparing the FWHM of the low-ionization and neutral lines between 1978 and 1993, the expanding shock has maintained the same velocity, or has even increased it slightly. Three different models are considered because, depending on the location of the shock front, the preshock density can be very different. We have adopted the same velocity for all the low-level lines even if the FWHM of these lines (Table 4b) span a wide range of values, from 17 $`\mathrm{km}\mathrm{s}^{-1}`$ for \[OI\] 6300 to 83 $`\mathrm{km}\mathrm{s}^{-1}`$ for \[NII\] 6584. Model exp1 represents the case in which the shock front is still near the giant star (Fig. 3b) and a high density characterizes the preshock gas. The expanding shock has maintained roughly the same velocity as in 1978 (model EXP in Table 1). In 15 years the shock could have overtaken the red giant and could now be propagating in the giant atmosphere on the side opposite to the WD, where the density gradient is decreasing. It is therefore expected that the radiation from the WD should be very low or even absent because of dilution and/or screening effects. Two other models seem acceptable. In model exp2 the shock has reached a larger distance from the giant on the side opposite to the WD (Fig. 3b). At this distance, the density is lower and the shock velocity is higher, because the shock accelerates while propagating down the decreasing density gradient in the giant atmosphere. The relative abundances are the same as those adopted for the other models because the shock is still propagating within the system. Model exp3 represents a different situation: the expanding shock, colliding head-on-tail with the wind from the giant, has reached the outskirts of the system (Fig. 3b) and is now propagating in the ISM, which is characterized by a lower density. The relative abundances are close to the solar ones because mixing is rather strong at those large distances. In this model the shock could have traveled in about 15 years to a distance beyond 2.3 $`\times 10^{15}`$ cm. The geometrical thickness of the emitting slab is large. The radiation flux from the hot star is not prevented from reaching the inner edge of the slab, but it is low.
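The travel distance just quoted follows from a one-line estimate: a shock of roughly the 1978 expanding-shock velocity moving freely for the 15 years between the two epochs. The sketch below simply multiplies the two numbers.

```python
# Consistency check of the quoted shock travel distance:
# a ~50 km/s expanding shock over the ~15 yr between 1978 and 1993.
v_shock = 50.0e5           # cm/s (50 km/s)
t = 15.0 * 3.156e7         # s (15 yr)
print(f"distance ~ {v_shock * t:.2e} cm")  # ~2.4e15 cm, beyond 2.3e15 cm
```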
Notice that the three models exp1, exp2, and exp3 contribute only to the low-level lines in the UV spectrum, which are few compared to the numerous high-level lines. Therefore, we constrain the models by the fit of the optical spectrum, where the low-level lines are as numerous as the high-level ones. The consistent fit of the line spectra in the different frequency ranges implies cross-checking them against one another until a fine tuning of all of them is found.

### 5.2 The spectrum in the optical range

In Table 4b the observed line intensities in the optical range are compared with the model calculations. These lines are those which are common to the observations of McKenna et al. (1997) and the SUMA code. The lines in the optical range are given relative to H$`\beta `$ = 1. For consistency, the same models used to fit the lines in the UV are adopted; however, model rev2 is omitted. Notice that \[OII\] 3727, even if weak, is present in the spectra. Considering that \[OII\] has a critical density for collisional deexcitation of $`\sim `$ 3 $`\times 10^3`$ $`\mathrm{cm}^{-3}`$, the density in the emitting gas cannot be much higher. The electron density downstream decreases with recombination (§6); therefore, even preshock densities higher than the critical one can fit the \[OII\]/H$`\beta `$ line ratio. On this basis, model exp3 is more acceptable than exp1 and exp2. The calculated \[OIII\] 4959+5007 is rather high compared with the observed line. A lower weight of model exp3 in the weighted sum (sum3) could improve the fit of \[NII\] 5754/H$`\beta `$ and \[OIII\]/H$`\beta `$, but would definitely spoil the fit of \[OII\]/H$`\beta `$. Interestingly, a low R<sub>\[OIII\]</sub>=\[OIII\] 5007 / \[OIII\] 4363 ratio coexists with a relatively high \[OII\] 3727 / H$`\beta `$ ratio. A low R<sub>\[OIII\]</sub> is generally due to either a high density or a high temperature of the emitting gas. High temperatures are excluded because the shock velocity is low. Therefore, high densities are indicated by R<sub>\[OIII\]</sub> and low densities by \[OII\]/H$`\beta `$. This peculiarity of the spectrum strengthens the hypothesis of a composite model. In fact, the low \[OIII\] 5007 / \[OIII\] 4363 ratio comes from the rev1 model while the relatively high \[OII\] 3727 / H$`\beta `$ ratio comes from model exp3. The \[OI\] 6300 / H$`\beta `$ line ratio, even if not very high, indicates that a large zone of neutral oxygen is present, and it is consistent with the large d (2 $`\times 10^{15}`$ cm) of model exp3. The relatively high \[OI\]/H$`\beta `$ line ratio appears also in model sum2 if a weight of 100 (see Table 3) is adopted for model exp2. This is rather high and definitely spoils the fit of most of the other line ratios. The composite model sum1 nicely fits most of the optical-near IR lines; however, \[OI\]/H$`\beta `$ and \[NII\] 6584+/H$`\beta `$ are largely overestimated. In the composite model sum3, model exp3 has been given a weight three times that of model rev1. This is acceptable considering that the emission from the nebula between the components is certainly smaller than that from the nebula encircling the system. Following the analysis of the optical spectrum, the composite model sum3 is selected. This model also fits the UV spectrum best, even if the OI 1304 / CIV line ratio is underestimated (Table 4a).

## 6 The RR Tel system at the epochs 1978 and 1993

After selecting the composite models which best fit the spectra observed at each of the two epochs, a more quantitative description of the system can be given.
The results of the model calculations give a rough physical picture of RR Tel in 1978. The distance $`\mathrm{r}_{\mathrm{rs}}`$ of the reverse shock from the WD can be calculated from

$$U\,n\,c=\mathrm{N}_{\mathrm{ph}}(\mathrm{R}_{\mathrm{wd}}/\mathrm{r}_{\mathrm{rs}})^2$$ $`(1)`$

where $`\mathrm{N}_{\mathrm{ph}}`$ is the Planck function corresponding to $`\mathrm{T}_{*}`$ = 1.4 $`\times 10^5`$ K, and the WD radius is $`\mathrm{R}_{\mathrm{wd}}`$ = $`10^9`$ cm. Adopting the parameters of model REV1, $`\mathrm{N}_{\mathrm{ph}}`$ = 3.4 $`\times 10^{26}`$ $`\mathrm{photons}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$, and considering the density n = 4 $`\mathrm{n}_0`$ at the leading edge of the clumps due to the adiabatic jump, we obtain $`\mathrm{r}_{\mathrm{rs}}`$ = 1.1 $`\times 10^{14}`$ cm. Applying equation (1) to the expanding shock (model EXP), the distance between the WD and the low-velocity slab ($`\mathrm{r}_{\mathrm{es}}`$) can be calculated. In this case the density n is that reached after compression downstream (n = 3.2 $`\times 10^7`$ $`\mathrm{cm}^{-3}`$), and $`\mathrm{r}_{\mathrm{es}}`$ turns out to be 1.37 $`\times 10^{15}`$ cm. The expanding shock is thus very close to, or even beyond, the red giant, considering the wide interbinary separation ($`\sim 10^{15}`$ cm). In 1993 the reverse shock is at about the same distance from the WD as in 1978. On the other hand, from equation (1), the expanding shock represented by model exp3 has reached a distance of about 5 $`\times 10^{15}`$ cm, corresponding to the outskirts of the system (§5.1). As for the X-rays, the detection by ROSAT in 1992 (Jordan et al. 1994) indicates that the high-velocity wind (500 $`\mathrm{km}\mathrm{s}^{-1}`$) component is unchanged from 1978 (§4.2). We now compare the physical conditions in the emitting gas at the two epochs. The profiles of the fractional abundances of the most significant ions throughout the clumps at the two epochs are compared in Figs. 6a (1978) and 6b (1993). In both figures the WD is on the left. The diagrams on the left represent the clump downstream of the reverse shock and the diagrams on the right the clump downstream of the expanding shock. Notice that the black-body radiation from the hot star reaches the very edge of the shock front in the reverse-shocked clumps and the edge opposite to the shock front in the expanding ones. To better understand the physical conditions in the emitting regions, the profiles of the electron temperature, $`\mathrm{T}_\mathrm{e}`$, and of the electron density, $`\mathrm{N}_\mathrm{e}`$, on a logarithmic scale are also given at the top of the figures. First, we focus on models REV1 and rev1 (left side of Figs. 6a and 6b), whose spectra largely dominate the weighted sums at both epochs. It can be noticed that $`\mathrm{He}^{+2}`$/He and $`\mathrm{C}^{+4}`$/C are very high throughout the geometrical width of the clump. The dotted vertical lines in Figs. 6a and 6b show the actual geometrical thickness of the matter-bound models. For comparison, the radiation-bound model results are also shown beyond these lines. The observational data show that the CIV and HeII line intensities prevail and that the NV/CIV, HeII/CIV, and, to a lesser extent, also the OV/CIV line ratios increase from 1978 to 1993. At the same time the CIV absolute flux decreases by a factor of $`>`$ 2.
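Equation (1) can be inverted for the shock distance, $`r=\mathrm{R}_{\mathrm{wd}}\sqrt{\mathrm{N}_{\mathrm{ph}}/(Unc)}`$. The sketch below reproduces the two quoted distances; note that the ionization parameters U are not quoted in this part of the text, so the values used here are back-computed from the quoted distances rather than taken from Tables 1 and 3.

```python
import math

# Distance estimate of eq. (1): U n c = N_ph (R_wd / r)^2, solved for r.
c = 3.0e10      # cm/s
N_ph = 3.4e26   # photons cm^-2 s^-1 at T* = 1.4e5 K
R_wd = 1.0e9    # WD radius, cm

def distance(U, n):
    """Distance at which the diluted WD photon flux gives U*n*c."""
    return R_wd * math.sqrt(N_ph / (U * n * c))

# Reverse shock (model REV1): n = 4 n_0 = 4e6 cm^-3;
# U back-computed from the quoted r_rs = 1.1e14 cm.
print(f"r_rs ~ {distance(U=0.23, n=4.0e6):.2e} cm")
# Expanding shock (model EXP): compressed n = 3.2e7 cm^-3;
# U back-computed from the quoted r_es = 1.37e15 cm.
print(f"r_es ~ {distance(U=1.9e-4, n=3.2e7):.2e} cm")
```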
The fit to these observations can be obtained by slightly reducing the shock velocity and the ionization parameter and by increasing the preshock density; in particular, the geometrical thickness of the clump must be reduced by a factor larger than 2 (Tables 1 and 3). Regarding the expanding shock, the physical conditions across the clumps in 1978 (model EXP) and 1993 (model exp3) are very different (Figs. 6a and 6b, diagrams on the right side of the figures). The clump is divided into two halves by the vertical solid line. The x-axis scale is logarithmic and symmetric, to allow a comparable view of the two parts of the clump. The shock front is on the right, while the edge photoionized by the WD radiation is on the left. In 1993 the electron density never exceeds $`10^5`$ $`\mathrm{cm}^{-3}`$. The region corresponding to the $`\mathrm{O}^{+0}`$ ion prevails in 1978 throughout the whole clump, while in 1993 it is reduced to the shock-dominated region.

## 7 Conclusions

In the previous sections we presented a model for RR Tel at two epochs. The first one (1978) is characterized by large fluctuations in the light curve and in the line intensities on very short time scales. In the second epoch (1993) the system had recovered the fading trend. This epoch is considered with the aim of investigating the evolution of the system. After the outburst, two shocks are present: the reverse shock propagates in the direction of the WD and the other one expands towards or beyond the giant. A good fit to the observed emission line spectra and continuum in 1978 is provided by a composite model, where the reverse shock is characterized by a velocity of $`\sim `$ 80 $`\mathrm{km}\mathrm{s}^{-1}`$ and the expanding one by a velocity of $`\sim `$ 45 $`\mathrm{km}\mathrm{s}^{-1}`$. A high-velocity (500 $`\mathrm{km}\mathrm{s}^{-1}`$) wind component is revealed by the fit of the SED of the continuum in the X-ray range in 1978, but it is quite unobservable in the line profiles if the geometrical thickness of the clumps is smaller than 6 $`\times 10^{13}`$ cm. The large fluctuations observed in the 1978 light curve result from line-intensity rather than from continuum variations. These variations are explained by fragmentation of matter at the time of the head-on collision of the winds from the two stars. The results of our modeling show that in 1993 the reverse shock velocity has slightly decreased (70 - 50 $`\mathrm{km}\mathrm{s}^{-1}`$) and the expanding shock, with a velocity between 50 and 100 $`\mathrm{km}\mathrm{s}^{-1}`$, has overtaken the symbiotic system and is propagating in the nearby ISM. The decrease in the absolute flux of CIV is explained by the reduced shock velocity and ionization parameter. However, the geometrical thickness of the emitting clumps is the critical parameter which can explain both the short-time-scale variability of the spectrum and the trend of slow line-intensity decrease. The relative abundances of carbon, nitrogen, oxygen, and silicon relative to hydrogen, calculated by the present model, are in agreement with those assumed in the model of Hayes & Nussbaumer (1986). Finally, although our modeling is rather simplistic, it shows that shock models can contribute to a better understanding of the outburst evolution of RR Tel.

Acknowledgements

We are grateful to an anonymous referee for helpful comments and to G. Drukier for reading the manuscript.

References

Aller, L.H., Polidan, R.S., & Rhodes, E.J. 1973, Ap&SS, 20, 93
Aufdenberg, J.P. 1993, ApJS, 87, 337
Bateson, F.M. 1995, private communication
Bohigas, J., Echevarria, J., Diego, F., & Sarmiento, J.A. 1989, MNRAS, 238, 1395
Contini, M. 1997, ApJ, 483, 886
Feast, M.W. et al. 1983, MNRAS, 202, 951
Formiggini, L., Contini, M., & Leibowitz, E.M. 1995, MNRAS, 277, 1071
Girard, T., & Willson, L.A. 1987, A&A, 183, 247
Hayes, M.A. & Nussbaumer, H. 1986, A&A, 161, 287
Heck, A. & Manfroid, J. 1985, A&A, 142, 341
Jordan, S., Murset, U., & Werner, K. 1994, A&A, 283, 475
Kenyon, S.J., Fernandez-Castro, T., & Stencel, R.E. 1986, AJ, 92, 1118
Kenyon, S.J. et al. 1993, AJ, 106, 1573
Kwok, S. & Leahy, D.A. 1984, ApJ, 283, 675
McKenna, F.C. et al. 1997, ApJS, 109, 225
Murset, U., Nussbaumer, H., Schmid, H.M., & Vogel, M. 1991, A&A, 248, 458
Nussbaumer, H. & Dumm, T. 1997, A&A, 323, 387
Nussbaumer, H., & Schild, H. 1981, A&A, 101, 118
Nussbaumer, H., & Walder, R. 1993, A&A, 278, 209
Penston, M.V. et al. 1983, MNRAS, 202, 833
Thackeray, A.D. 1977, MNRAS, 83, 1
Viegas, S.M., & Contini, M. 1994, ApJ, 428, 113
Viotti, R. 1988, in ”The Symbiotic Phenomenon”, eds. J. Mikolajewska et al. (Kluwer Academic Publishers), p. 269
Wallerstein, G., Willson, L.A., Salzer, J., & Brugel, E. 1984, A&A, 133, 137
Whitelock, P.A. 1988, in ”The Symbiotic Phenomenon”, eds. J. Mikolajewska et al. (Kluwer Academic Publishers), p. 47
Zombeck, M.V. 1990, in ”Handbook of Space Astronomy and Astrophysics” (Cambridge University Press), p. 199
Zuccolo, R., Selvelli, P., & Hack, M. 1997, A&AS, 124, 425 (ZSH)

Figure Captions

Fig. 1: The light curve of RR Tel between the years 1944 and 1993.
Fig. 2: The evolution of some significant line intensities between 1978 and 1993, from ZSH Table 1. Saturated fluxes are arbitrarily set at log(I) = 2.
Figs. 3: Simple sketch of the model for RR Tel in 1978 and in 1993. The asterisk indicates the WD and the full circle the cool giant. Shock fronts are indicated by solid lines. Dotted and long-dashed lines indicate the clump edge opposite to the shock front. Dashed arrows represent the radiation from the WD. a) 1978; b) 1993.
Fig. 4: The SED of the continuum in the year 1978. Full squares represent the observations and the curves represent model calculations (see text).
Fig. 5: The intensities of some lines in the visual band calculated as a function of the distance from the shock front.
Figs. 6: The distribution of the ions corresponding to the strongest lines throughout the clumps (see text). a) 1978; b) 1993.
# Sample Variance of the Higher-Order Cumulants of Cosmic Density and Velocity Fields

## 1 Introduction

It is now commonly believed that the large-scale structure in the Universe observed today is explained by the gravitational evolution of small initial density inhomogeneities (Peebles 1980). These initial fluctuations are usually assumed to obey a random Gaussian distribution, which is not only plausible from the central limit theorem but is also predicted by standard inflation models (Guth & Pi 1982, Hawking 1982, Starobinsky 1982). Quantitative analysis of the statistical measures of the present-day cosmic fields is very important to confirm or disprove the structure formation scenario based on gravitational instability from primordially Gaussian fluctuations. Their higher-order cumulants provide useful tools for this purpose. In addition to the anisotropy of the cosmic microwave background radiation (CMB), the distribution of galaxies and the peculiar velocity field are basic measures for the statistical analysis of the large-scale inhomogeneities in the Universe. The former has been widely investigated and there are two ongoing large redshift surveys, namely the Anglo-Australian 2dF Survey (Colless 1998) and the Sloan Digital Sky Survey (Gunn & Weinberg 1995), which are expected to dramatically improve our knowledge of the three-dimensional galaxy distribution. We should notice, however, that what we can directly observe is the distribution of galaxies, whereas what we can discuss from first principles is the distribution of the underlying matter. In spite of the rapid increase of observational data, our understanding of the relation between the distribution of galaxies and that of the underlying gravitating matter, namely biasing (Kaiser 1984), is far from satisfactory. This hampers a straightforward comparison between theories and observations. In contrast to the number-density field of galaxies, the cosmic velocity field reflects the dynamical nature of the underlying matter fluctuations and is basically independent of the poorly understood biasing relation, at least on large scales (Dekel 1994). This is a fundamental merit of the cosmic velocity field. On the other hand, we must point out that the survey depth of the cosmic velocity field is currently limited to $`L\sim 70h^{-1}\mathrm{Mpc}`$ around us, even in the case of the recent Mark III Catalog of Galaxy Peculiar Velocities (Willick et al. 1997), which is much smaller than that of the redshift surveys<sup>1</sup> (<sup>1</sup>Bernardeau et al. (1995) used $`L\sim 40h^{-1}\mathrm{Mpc}`$ as a practical current limit of high-quality data of the peculiar velocity field). Therefore uncertainties due to the finiteness of our survey volume, or the sample variance, inevitably become large and are very important in the analysis of the cosmic velocity field. The higher-order cumulants of the velocity divergence field have been extensively investigated in the framework of nonlinear perturbation theory, and are expected to serve as useful quantities in observational cosmology, for example to constrain the density parameter independently of the biasing (Bernardeau et al. 1995, 1997). In this Letter we investigate the sample variance of the higher-order cumulants of the velocity divergence field, assuming that the initial fluctuations obey an isotropic random Gaussian distribution. Our formalism is also applicable to the density field and is similar to that of Srednicki (1993), who analyzed the skewness parameter of the cosmic microwave background radiation.
In a previous Letter (Seto & Yokoyama 1998) we discussed the sample variance of the second-order moment, i.e., the variance of a component of the peculiar velocity and that of the linear density fluctuation, whose expectation values and sample variances are of the same order in perturbation theory. In the case of the higher-order cumulants, their expectation values vanish in linear theory and are generated by nonlinear mode coupling in higher-order perturbation theory. Nevertheless, their sample variance is nonvanishing even at linear order. Thus the sample variance is expected to be much more important for the higher-order cumulants. In this Letter we compare the expectation values of the lowest-order contributions to the higher-order cumulants of these fields with their sample variances predicted by linear theory. It is true that there are other sources of error in the observational analysis of these fields. Using Monte Carlo calculations we could basically take various effects into account at one time (e.g. Borgani et al. 1997). But the sample variance due to the finiteness of the survey region can be regarded as a fundamental limitation in the sense that this uncertainty is independent of how accurately we could measure the cosmic fields in a specific region of the Universe. In addition, to investigate the sample variance by means of a numerical simulation, we would need a simulation box much larger than the (mock) survey region, as the sample variance is heavily weighted toward Fourier modes with wavelengths comparable to or greater than the survey depth. Considering these two factors, the simple analysis presented in this Letter is a useful and convenient approach to estimate a fundamental limitation in the observational determination of the higher-order cumulants of cosmic fields.

## 2 Formulation

We denote the density contrast field by $`\delta (𝒙)`$ and the velocity divergence field by $`\theta (𝒙)\equiv H_0^{-1}\nabla 𝑽(𝒙)`$, where $`𝑽(𝒙)`$ is the peculiar velocity field and $`H_0`$ is the Hubble parameter. At linear order, which is indicated by the suffix “lin” hereafter, we have the following relation:

$$\theta _{\mathrm{lin}}(𝒙)=-f(\mathrm{\Omega }_0)\delta _{\mathrm{lin}}(𝒙),$$ (1)

where the function $`f`$ is the logarithmic time derivative of the linear growth rate of the density contrast $`\delta _{\mathrm{lin}}(𝒙)`$ and is well fitted by $`f(\mathrm{\Omega }_0)\simeq \mathrm{\Omega }_0^{0.6}`$, with $`\mathrm{\Omega }_0`$ being the density parameter (Peebles 1980). We define the linear dispersions of these fields as

$$\sigma ^2\equiv \langle \delta _{\mathrm{lin}}^2(𝒙)\rangle ,\qquad \sigma _\theta ^2\equiv \langle \theta _{\mathrm{lin}}^2(𝒙)\rangle ,$$ (2)

where $`\langle X\rangle `$ denotes the ensemble average of a field $`X`$. From equation (1) the linear root-mean-square (RMS) fluctuation of the velocity divergence field, $`\sigma _\theta `$, is written in terms of $`\sigma `$ and $`f(\mathrm{\Omega }_0)`$ as

$$\sigma _\theta =f(\mathrm{\Omega }_0)\sigma .$$ (3)

These two quantities $`\sigma `$ and $`\sigma _\theta `$ serve as the expansion parameters for the perturbative analysis in this Letter. In the observational study of the cosmic fields in the framework of perturbation theory, a smoothing operation is crucially important to get rid of strong nonlinearities on small scales and of noise due to the discreteness of the galaxies which work as tracers of these fields.
In this Letter we only discuss fields smoothed with a Gaussian filter defined by
$$W(𝒙)\equiv \frac{1}{\sqrt{(2\pi R_\mathrm{s}^2)^3}}\mathrm{exp}\left(-\frac{𝒙^2}{2R_\mathrm{s}^2}\right).$$ (4)
Here $`R_\mathrm{s}`$ is the smoothing radius, but we omit its explicit dependence in most of this Letter for notational simplicity. Next we briefly summarize the expectation values of the third- and fourth-order cumulants for the two fields $`\delta (𝒙)`$ and $`\theta (𝒙)`$. We introduce their first nonvanishing contributions predicted by higher-order (nonlinear) Eulerian perturbation theory (e.g. Peebles 1980). First we assume that the power spectrum of the density fluctuation has a power-law form characterized by a single power index $`n(>-3)`$ as
$$P(k)\propto k^n.$$ (5)
In this case the third-order cumulants, or the skewness, of $`\delta (𝒙)`$ and $`\theta (𝒙)`$ smoothed with the Gaussian filter (4) are evaluated perturbatively as follows:
$$\langle \delta (𝒙)^3\rangle =S_3(n)\sigma ^4+O(\sigma ^6),\qquad \langle \theta (𝒙)^3\rangle =S_{3\theta }(n)\sigma _\theta ^4+O(\sigma _\theta ^6),$$ (6)
where the factors $`S_3(n)`$ and $`S_{3\theta }(n)`$ are of order unity and have been given by Matsubara (1994) and Łokas et al. (1995) in terms of the hypergeometric function $`F`$:
$$S_3(n)\equiv 3F(\frac{n+3}{2},\frac{n+3}{2},\frac{3}{2},\frac{1}{4})-\left(n+\frac{8}{7}\right)F(\frac{n+3}{2},\frac{n+3}{2},\frac{5}{2},\frac{1}{4}),$$ (7)
$$S_{3\theta }(n)\equiv -\frac{1}{f(\mathrm{\Omega }_0)}\left[3F(\frac{n+3}{2},\frac{n+3}{2},\frac{3}{2},\frac{1}{4})-\left(n+\frac{16}{7}\right)F(\frac{n+3}{2},\frac{n+3}{2},\frac{5}{2},\frac{1}{4})\right].$$ (8)
Here we neglect the extremely weak dependence on cosmological parameters except for the function $`f(\mathrm{\Omega }_0)`$ in $`S_{3\theta }(n)`$. In principle, equations (7) and (8) are valid only for a pure power-law spectrum such as equation (5), but we extrapolate them to more realistic power spectra with an effective power index $`n(R_\mathrm{s})`$ defined at the smoothing scale by the following equation (see Bernardeau et al. 1995):
$$n(R_\mathrm{s})=-3-\frac{d\mathrm{ln}\sigma ^2(R_\mathrm{s})}{d\mathrm{ln}R_\mathrm{s}}=-3-\frac{d\mathrm{ln}\sigma _\theta ^2(R_\mathrm{s})}{d\mathrm{ln}R_\mathrm{s}}.$$ (9)
In the same manner the fourth-order cumulants, or the kurtosis, of $`\delta (𝒙)`$ and $`\theta (𝒙)`$ are written perturbatively as follows:
$$\langle \delta (𝒙)^4\rangle -3\sigma ^4=S_4(n)\sigma ^6+O(\sigma ^8),$$ (10)
$$\langle \theta (𝒙)^4\rangle -3\sigma _\theta ^4=S_{4\theta }(n)\sigma _\theta ^6+O(\sigma _\theta ^8).$$ (11)
Łokas et al. (1995) derived analytic formulas for $`S_4(n)`$ and $`S_{4\theta }(n)`$ based on higher-order perturbation theory and evaluated them numerically for $`-3\le n\le 1`$. In the present analysis we use the fitting formulas for $`S_4(n)`$ and $`S_{4\theta }(n)`$ given in their paper. In observational cosmology, we usually estimate an ensemble average $`\langle X(𝒙)\rangle `$ of a field $`X`$ by taking its volume average $`A(X,𝒱)`$ in an observed patch $`𝒱`$,
$$A(X,𝒱)\equiv \frac{1}{𝒱}\int _𝒱X(𝒙)d^3x\simeq \langle X\rangle .$$ (12)
We can commute the ensemble average with the volume integral above to obtain
$$\langle A(X,𝒱)\rangle =\langle X\rangle .$$ (13)
Thus the ensemble average of the volume average $`A(X,𝒱)`$ is identical to the universal value $`\langle X\rangle `$.
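As a side remark, the skewness coefficients (7) and (8) introduced above are simple to tabulate numerically. The following Python sketch is purely illustrative (it is not part of the original analysis) and assumes SciPy's `hyp2f1` for the hypergeometric function $`F`$:

```python
import numpy as np
from scipy.special import hyp2f1

def S3_delta(n):
    """Skewness coefficient of the Gaussian-smoothed density field, eq. (7)."""
    a = (n + 3.0) / 2.0
    return 3.0 * hyp2f1(a, a, 1.5, 0.25) - (n + 8.0 / 7.0) * hyp2f1(a, a, 2.5, 0.25)

def S3_theta(n, f=1.0):
    """Skewness coefficient of the velocity divergence field, eq. (8)."""
    a = (n + 3.0) / 2.0
    return -(3.0 * hyp2f1(a, a, 1.5, 0.25)
             - (n + 16.0 / 7.0) * hyp2f1(a, a, 2.5, 0.25)) / f

# Consistency check: as n -> -3 both hypergeometric factors reduce to unity,
# recovering the unsmoothed values S3 = 34/7 and S3_theta = -(26/7)/f.
print(S3_delta(-3.0), S3_theta(-3.0))
```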
However, the observed value $`A(X,𝒱)`$ in one specific patch $`𝒱`$ is expected to fluctuate around its mean $`\langle X\rangle `$ because of the spatial correlation and inhomogeneity of the field $`X(𝒙)`$ beyond the patch $`𝒱`$. These fluctuations are nonvanishing even in linear theory, and we define their RMS value $`E_{\mathrm{lin}}(X,𝒱)`$ as follows:
$$E_{\mathrm{lin}}(X,𝒱)\equiv \langle \left\{A(X_{\mathrm{lin}},𝒱)-\langle X_{\mathrm{lin}}\rangle \right\}^2\rangle ^{1/2}.$$ (14)
Our basic strategy is to compare the magnitude of this linear sample variance $`E_{\mathrm{lin}}(X,𝒱)`$ with the expectation value $`\langle X\rangle `$. This fluctuation should be smaller than the expectation value $`\langle X\rangle `$; otherwise the particular value of $`A(X,𝒱)`$ obtained in one survey volume would lose its universality and one could not extract any cosmological information from it. Let us now calculate the linear fluctuation $`E_{\mathrm{lin}}(X,𝒱)`$ for the skewness and the kurtosis of the velocity divergence field. Using the properties of multivariate Gaussian variables, we obtain the sample variance of the skewness of $`\theta (𝒙)`$ as
$$E_{\mathrm{lin}}^2(\theta ^3,𝒱)=\frac{3\sigma _\theta ^6}{𝒱^2}\int _𝒱d^3x\int _𝒱d^3y\mathrm{\Xi }(r_{xy})\{3+2\mathrm{\Xi }(r_{xy})^2\},$$ (15)
where we have denoted the separation between two points $`𝒙`$ and $`𝒚`$ by $`r_{xy}=|𝒙-𝒚|`$ and defined the normalized linear two-point correlation function $`\mathrm{\Xi }(r)`$ as
$$\mathrm{\Xi }(r)=\frac{\langle \theta _{\mathrm{lin}}(𝒙)\theta _{\mathrm{lin}}(𝒚)\rangle }{\sigma _\theta ^2}=\frac{\langle \delta _{\mathrm{lin}}(𝒙)\delta _{\mathrm{lin}}(𝒚)\rangle }{\sigma ^2}=\int _0^{\mathrm{\infty }}\frac{k^2dk}{2\pi ^2\sigma ^2}\frac{\mathrm{sin}kr}{kr}P(k)\mathrm{exp}(-k^2R_\mathrm{s}^2).$$ (16)
In the same manner the linear fluctuations for the fourth-order and second-order cumulants are given as follows:
$$E_{\mathrm{lin}}^2(\theta ^4-3\sigma _\theta ^4,𝒱)=\frac{24\sigma _\theta ^8}{𝒱^2}\int _𝒱d^3x\int _𝒱d^3y\mathrm{\Xi }(r_{xy})^2\{3+\mathrm{\Xi }(r_{xy})^2\},$$ (17)
$$E_{\mathrm{lin}}^2(\theta ^2,𝒱)=\frac{2\sigma _\theta ^4}{𝒱^2}\int _𝒱d^3x\int _𝒱d^3y\mathrm{\Xi }(r_{xy})^2.$$ (18)
In the next section we calculate the ratio of $`E_{\mathrm{lin}}(X,𝒱)`$ to the lowest nonvanishing order of $`\langle X\rangle `$. Here one should notice that the skewness and kurtosis given in equations (6), (10), and (11) are obtained from higher-order contributions in perturbation theory, in contrast to $`E_{\mathrm{lin}}(X,𝒱)`$, which is obtained in linear theory. We summarize the order in the expansion parameter $`\sigma _\theta `$ of these ratios below:
$$\frac{E(\theta ^2,𝒱)}{\langle \theta ^2\rangle }=O(1),$$ (19)
$$\frac{E(\theta ^3,𝒱)}{\langle \theta ^3\rangle }=O(\sigma _\theta ^{-1}),$$ (20)
$$\frac{E(\theta ^4-3\sigma _\theta ^4,𝒱)}{\langle \theta ^4\rangle -3\sigma _\theta ^4}=O(\sigma _\theta ^{-2}).$$ (21)
So far we have mainly discussed the linear fluctuation of the velocity divergence field $`\theta (𝒙)`$, but the extension to the case of the density field $`\delta (𝒙)`$ is simple and straightforward (see eq. [1]).

## 3 Results

In this section we calculate the ratio $`E_{\mathrm{lin}}(X,𝒱)/|\langle X\rangle |`$ using specific cosmological models. For calculational simplicity we assume that our survey patch $`𝒱`$ is a sphere with radius $`L`$ and volume $`𝒱=(4\pi /3)L^3`$. In this case we can simplify the six-dimensional integrals of equations (15), (17), and (18) to three-dimensional ones owing to the rotational symmetry (Seto & Yokoyama 1998).
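Before specifying the models, we note that the basic ingredient of equations (15)-(18), the normalized correlation function of equation (16), reduces to a one-dimensional quadrature that is well behaved because of the Gaussian damping factor. A minimal Python sketch is given below; the power-law spectrum and SciPy's `quad` are assumptions made for illustration only:

```python
import numpy as np
from scipy.integrate import quad

def xi_normalized(r, P, Rs):
    """Normalized smoothed linear correlation function Xi(r), eq. (16).
    P(k) is the linear power spectrum, Rs the Gaussian smoothing radius."""
    def integrand(k, rr):
        window = np.sinc(k * rr / np.pi) if rr > 0.0 else 1.0  # sin(kr)/(kr)
        # exp(-k^2 Rs^2) cuts off the integrand, so the infinite range is harmless
        return k * k * P(k) * np.exp(-k * k * Rs * Rs) * window / (2.0 * np.pi**2)
    sigma2 = quad(integrand, 0.0, np.inf, args=(0.0,))[0]  # smoothed variance, cf. eq. (2)
    return quad(integrand, 0.0, np.inf, args=(r,))[0] / sigma2

# Toy example: power-law spectrum with n = -1.5 and Rs = 12 (h^-1 Mpc):
P = lambda k: k ** -1.5
print(xi_normalized(24.0, P, Rs=12.0))  # Xi at a separation of two smoothing lengths
```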
We investigate two cold-dark-matter (CDM) models with different values of the density parameter $`\mathrm{\Omega }_0`$, namely, $`\mathrm{\Omega }_0=0.3`$ (open model) and $`\mathrm{\Omega }_0=1.0`$ (Einstein-de Sitter model), both with vanishing cosmological constant<sup>2</sup><sup>2</sup>2The cosmological constant is almost irrelevant to our analysis, even if it is nonvanishing within the current observational limit. and the Hubble parameter $`h=H_0/(100\mathrm{k}\mathrm{m}/\mathrm{sec}/\mathrm{Mpc})=0.7`$. As for the initial matter fluctuation we use the CDM power spectrum given in Efstathiou et al. (1992) as
$$P(k)=\frac{Bk}{\left\{1+\left[\alpha k+(\beta k)^{3/2}+(\gamma k)^2\right]^\mu \right\}^{2/\mu }},$$ (22)
where $`\alpha =(6.4/\mathrm{\Gamma })h^{-1}\mathrm{Mpc}`$, $`\beta =(3.0/\mathrm{\Gamma })h^{-1}\mathrm{Mpc}`$, $`\gamma =(1.7/\mathrm{\Gamma })h^{-1}\mathrm{Mpc}`$, $`\mu =1.13`$, and the normalization factor $`B=(96\pi ^2/5)\mathrm{\Omega }_0^{-1.54}H_0^{-4}(Q_{rms}/T_0)^2`$, with $`T_0=2.73`$K the current temperature of the CMB and $`Q_{rms}=15.3\mu `$K its quadrupole fluctuation amplitude from the 4yr COBE data (Górski et al. 1996). We fix the shape parameter $`\mathrm{\Gamma }`$ by $`\mathrm{\Gamma }=h\mathrm{\Omega }_0`$. As explained before, the cosmic velocity field is considered to be less contaminated by the poorly understood biasing effect, but its survey depth is much smaller than that of the redshift surveys of galaxies. Therefore we mainly consider a typical observational situation of the cosmic velocity field and adopt a Gaussian filter with $`R_\mathrm{s}=12h^{-1}\mathrm{Mpc}`$ following the POTENT analysis (Bertschinger & Dekel 1989, Dekel 1994). Using the formulas given in the previous section, we plot the sample variance due to the smallness of the survey volume in Fig.1. The expansion parameters $`(\sigma ,\sigma _\theta )`$ are (0.37,0.18) for the open model $`\mathrm{\Omega }_0=0.3`$ and (0.38,0.38) for the Einstein-de Sitter model $`\mathrm{\Omega }_0=1.0`$ in our case. We have $`\theta (𝒙)\propto \delta (𝒙)`$ at linear order, and thus the relative fluctuations of the second-order moments are identical for these two fields (thick solid lines in Fig.1). For the higher-order cumulants, nonlinear mode coupling arises in a different manner for $`\theta (𝒙)`$ and $`\delta (𝒙)`$, and their relative fluctuations are no longer identical. As is seen in Fig.1, at the current survey depth $`L\sim 40h^{-1}\mathrm{Mpc}`$ (Bernardeau et al. 1995), the sample variance of the skewness of the velocity divergence field $`\langle \theta (𝒙)^3\rangle `$ is as large as the expectation value itself. To reduce it below $`30\%`$, we have to take the survey radius $`L`$ as deep as $`L\simeq 180h^{-1}\mathrm{Mpc}`$ for the open model and $`L\simeq 150h^{-1}\mathrm{Mpc}`$ for the Einstein-de Sitter model. These values are much larger than the current observational limit. Bernardeau (1995) proposed a method to estimate the density parameter using the relation $`\langle \theta ^3\rangle /\langle \theta ^2\rangle ^2\propto \mathrm{\Omega }_0^{-0.6}`$. The above analysis shows that even if we could take the survey radius $`L`$ as large as $`150h^{-1}\mathrm{Mpc}`$, the error of such an estimate of $`\mathrm{\Omega }_0`$ would remain as large as $`50\%`$.

## 4 Summary

In this Letter we have discussed the magnitude of the sample variance in the observational determination of the reduced higher-order cumulants of the smoothed density and velocity divergence fields.
We have compared the sample variance predicted by linear theory with the lowest-order nonvanishing contribution to the cumulants, assuming that the primordial fluctuation obeys a random Gaussian distribution. We have paid particular attention to the velocity divergence field because (i) it is less contaminated by the biasing relation and has been extensively investigated in the framework of perturbation theory, but (ii) velocity surveys are currently limited to a relatively small region. The skewness of the velocity divergence is an interesting quantity to characterize the non-Gaussianity induced by gravity and is expected to constrain the density parameter with small theoretical ambiguities, as long as the primordial fluctuations are Gaussian distributed. But according to the present analysis we cannot determine the skewness of this field with an error smaller than $`30\%`$ unless our survey depth is as deep as $`200h^{-1}\mathrm{Mpc}`$. In the previous Letter (Seto & Yokoyama 1998) we showed that the peculiar velocity field suffers from a much larger sample variance than the linear density field because the former depends much more strongly on small-wavenumber modes. On the other hand, the peculiar velocity divergence field discussed here has the same spectral dependence as the density field in linear theory (eq. [1]). Hence the large relative sample variance we have encountered in the present Letter is entirely due to the fact that it is nonvanishing even in linear theory, whereas the expectation values of the higher-order cumulants become nonvanishing only after nonlinear effects are taken into account. This work was partially supported by the Japanese Grant-in-Aid for Science Research Fund of the Monbusho, Nos. 3161 (N. S.) and 09740334 (J. Y.).
# Triple and Quartic Interactions of Higgs Bosons in the General Two-Higgs-Doublet Model

## 1 Introduction

A particularly simple extension of the Standard Model containing two scalar doublets has been very extensively investigated in the framework of minimal supersymmetry. In order to cancel the gauge anomalies introduced by the fermionic superpartners of the gauge bosons and to generate the masses of up- and down-type quarks in a consistent manner, two doublets of Higgs fields are necessary. Soft supersymmetry-breaking terms introduce large radiative corrections to the tree-level Higgs boson masses and couplings, and the effective lagrangian of the Higgs sector at the electroweak scale does not satisfy the supersymmetry constraints valid at the SUSY scale. In the most general case, when the supersymmetry scale and the scale of the heavy Higgs boson mass (usually defined by the mass of the pseudoscalar) are different, the effective theory at the electroweak scale is a two-Higgs-doublet model, where the self-interaction couplings are defined by the renormalization group evolution of the supersymmetric potential couplings from the SUSY scale down to the electroweak scale. The investigation of direct phenomenological consequences of a two-doublet Higgs sector at future high-luminosity colliders, such as the LHC and TESLA, could provide a possibility to study in detail the structure of the effective Higgs potential and the mass spectrum and couplings of the scalar particles. As usual, the variety of channels where scalars could be produced individually or in association with vector bosons requires a systematic calculation in order to find out which particular channels could have sufficient counting rates for experimental detection at a given collider luminosity. We propose a convenient compact form of the Feynman rules for a general two-Higgs-doublet model that can be used in a subsequent systematic study of the Higgs boson production channels, and we use these rules for the calculation of two and three Higgs boson production at a high energy $`e^+e^-`$ collider.

## 2 Diagonalisation of the mass matrix in the general two-Higgs-doublet model

The general form of the (nonsupersymmetric) $`SU(2)\times U(1)`$ invariant potential in the case of two doublets of complex scalar fields $`\phi _1`$, $`\phi _2`$ reads
$$V(\phi _1,\phi _2)=\lambda _1(\phi _1^+\phi _1-\frac{v_1^2}{2})^2+\lambda _2(\phi _2^+\phi _2-\frac{v_2^2}{2})^2+\lambda _3[(\phi _1^+\phi _1-\frac{v_1^2}{2})+(\phi _2^+\phi _2-\frac{v_2^2}{2})]^2+\lambda _4[(\phi _1^+\phi _1)(\phi _2^+\phi _2)-(\phi _1^+\phi _2)(\phi _2^+\phi _1)]+\lambda _5[Re(\phi _1^+\phi _2)-\frac{v_1v_2}{2}Re(e^{i\xi })]^2+\lambda _6[Im(\phi _1^+\phi _2)-\frac{v_1v_2}{2}Im(e^{i\xi })]^2,$$ (1)
where $`\lambda _i`$ are real constants. The components of the scalar doublets $`\phi _{1,2}`$ are
$$\phi _1=\{iw_1^+,\frac{1}{\sqrt{2}}(v_1+h_1+iz_1)\},\qquad \phi _2=\{iw_2^+,\frac{1}{\sqrt{2}}(v_2+h_2+iz_2)\},$$ (2)
where $`w`$ is a complex field and $`z`$, $`h_{1,2}`$ are real scalar fields. The vacuum expectation values $`v_1`$, $`v_2`$ correspond to the minimum of the potential,
$$\langle \phi _1\rangle =\frac{1}{\sqrt{2}}\{0,v_1\},\qquad \langle \phi _2\rangle =\frac{1}{\sqrt{2}}\{0,v_2e^{i\xi }\},$$ (3)
where the phase $`\xi `$ can be removed by a phase rotation of $`\phi _1^+\phi _2`$ not affecting the $`\lambda _4`$ term in (1).
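Note that for positive $`\lambda _i`$ the configuration (3) is a minimum of (1) by construction, since every term of the potential is a non-negative square that vanishes there. A tiny Python sketch (purely illustrative, restricted to the neutral CP-even direction with $`\xi =0`$; it is not the symbolic machinery used in this paper) makes this explicit:

```python
def V_neutral(h1, h2, lam, v1, v2):
    """Potential (1) restricted to the neutral CP-even slice (xi = 0),
    as a function of the field shifts h1, h2 around the vacuum (3)."""
    l1, l2, l3, l4, l5, l6 = lam
    x1 = 0.5 * ((v1 + h1)**2 - v1**2)                 # phi1^+ phi1 - v1^2/2
    x2 = 0.5 * ((v2 + h2)**2 - v2**2)                 # phi2^+ phi2 - v2^2/2
    x12 = 0.5 * ((v1 + h1) * (v2 + h2) - v1 * v2)     # Re(phi1^+ phi2) - v1 v2/2
    # the lambda_4 and lambda_6 terms vanish identically on this slice
    return l1 * x1**2 + l2 * x2**2 + l3 * (x1 + x2)**2 + l5 * x12**2

lam = (0.1, 0.1, 0.05, 0.3, 0.2, 0.2)
print(V_neutral(0.0, 0.0, lam, 174.0, 174.0))   # exactly zero at the vacuum
print(V_neutral(1.0, -1.0, lam, 174.0, 174.0))  # positive away from it
```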
Substitution of (2) into (1) gives a bilinear form of the mass term with mixed components $`w,h_{1,2},z`$, which can be diagonalized by an orthogonal transformation of the fields in order to define the tree-level masses of the physical bosons. The resulting spectrum of scalars consists of two charged $`H^\pm `$, three neutral $`h`$, $`H`$, $`A^0`$ scalar fields, and three Goldstone bosons $`G`$. This procedure is described in many papers. The $`w_{1,2}`$ sector is diagonalized by the rotation $`w_1,w_2\to H,G`$,
$$w_1^\pm =-H^\pm s_\beta +G^\pm c_\beta ,\qquad w_2^\pm =H^\pm c_\beta +G^\pm s_\beta ,$$ (4)
defined by the angle
$$\mathrm{𝚝𝚐}\beta =\frac{v_2}{v_1}$$ (5)
and leading to the massless $`G`$ field and the field of the massive charged Higgs boson $`H^\pm `$, $`m_{H\pm }^2=\lambda _4(v_1^2+v_2^2)/2`$. The $`z_{1,2}`$ sector is diagonalized by the rotation $`z_1,z_2\to A^0,G^{^{}}`$ defined by the same angle $`\beta `$ (4), giving again one massless field $`G^{^{}}`$ and the field of the CP-odd Higgs boson $`A^0`$ with the mass $`m_A^2=\lambda _6(v_1^2+v_2^2)/2`$. Finally, the $`h_1,h_2`$ sector is diagonalized by the rotation $`h_1,h_2\to h,H`$ defined by the angle $`\alpha `$,
$$\mathrm{𝚜𝚒𝚗}2\alpha =\frac{2m_{12}}{\sqrt{(m_{11}-m_{22})^2+4m_{12}^2}},\qquad \mathrm{𝚌𝚘𝚜}2\alpha =\frac{m_{11}-m_{22}}{\sqrt{(m_{11}-m_{22})^2+4m_{12}^2}},$$ (6)
where
$$m_{11}=\frac{1}{4}[4v_1^2(\lambda _1+\lambda _3)+v_2^2\lambda _5],\qquad m_{22}=\frac{1}{4}[4v_2^2(\lambda _2+\lambda _3)+v_1^2\lambda _5],\qquad m_{12}=\frac{1}{4}(4\lambda _3+\lambda _5)v_1v_2,$$
giving two massive fields of CP-even Higgs bosons $`H,h`$ with the mass values
$$m_{H,h}^2=m_{11}+m_{22}\pm \sqrt{(m_{11}-m_{22})^2+4m_{12}^2}.$$ (7)
In explicit form, the diagonal mass matrix of the scalar fields and the physical boson interaction vertices are obtained after the following substitution of the $`\lambda _i`$ into the potential $`V(\phi _1,\phi _2)`$ (1):
$$\lambda _1=\frac{1}{2v^2}\frac{1}{c_\beta ^2}\left[\frac{s_\alpha }{s_\beta }c_{\alpha -\beta }m_h^2-\frac{c_\alpha }{s_\beta }s_{\alpha -\beta }m_H^2\right]+\frac{c_{2\beta }}{4c_\beta ^2}\lambda _5,$$ (8)
$$\lambda _2=\frac{1}{2v^2}\frac{1}{s_\beta ^2}\left[\frac{c_\alpha }{c_\beta }c_{\alpha -\beta }m_h^2+\frac{s_\alpha }{c_\beta }s_{\alpha -\beta }m_H^2\right]-\frac{c_{2\beta }}{4s_\beta ^2}\lambda _5,$$
$$\lambda _3=\frac{1}{2v^2}\frac{s_{2\alpha }}{s_{2\beta }}\left(m_H^2-m_h^2\right)-\frac{1}{4}\lambda _5,$$
$$\lambda _4=\frac{2}{v^2}m_{H^\pm }^2,\qquad \lambda _6=\frac{2}{v^2}m_{A^0}^2,$$
where we used the notation $`v^2=v_1^2+v_2^2`$, $`s_\alpha =\mathrm{𝚜𝚒𝚗}\alpha `$, $`c_\alpha =\mathrm{𝚌𝚘𝚜}\alpha `$, $`s_{\alpha -\beta }=\mathrm{𝚜𝚒𝚗}(\alpha -\beta )`$, $`c_{\alpha -\beta }=\mathrm{𝚌𝚘𝚜}(\alpha -\beta )`$. The diagonalization of the mass term takes place for arbitrary $`\lambda _5`$, but the necessary condition for the $`CP`$-invariance of the potential (1) is $`\lambda _5=\lambda _6`$. Unfortunately, after the substitution of (8) into the potential (1) the intermediate expressions for the four scalar boson interaction vertices turn out to be extremely cumbersome, and it is very difficult to reduce them to a compact, convenient form where the dependence of the couplings on the parameters could be clearly seen. This is a technical problem of the symbolic manipulation program that we used.
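As a numerical cross-check of equations (6)-(7), the CP-even mass matrix can be diagonalized directly. The following Python sketch is an illustration only (inputs in arbitrary units, in the mass-term normalization of the text); it is not the symbolic derivation used in this paper:

```python
import numpy as np

def cp_even_sector(lam1, lam2, lam3, lam5, v1, v2):
    """Mixing angle alpha and masses of h, H from eqs. (6)-(7),
    with m_{H,h}^2 = m11 + m22 +- sqrt((m11-m22)^2 + 4 m12^2)."""
    m11 = 0.25 * (4.0 * v1**2 * (lam1 + lam3) + v2**2 * lam5)
    m22 = 0.25 * (4.0 * v2**2 * (lam2 + lam3) + v1**2 * lam5)
    m12 = 0.25 * (4.0 * lam3 + lam5) * v1 * v2
    disc = np.hypot(m11 - m22, 2.0 * m12)            # sqrt((m11-m22)^2 + 4 m12^2)
    alpha = 0.5 * np.arctan2(2.0 * m12, m11 - m22)   # satisfies both relations in eq. (6)
    mh = np.sqrt(m11 + m22 - disc)                   # lighter CP-even boson
    mH = np.sqrt(m11 + m22 + disc)                   # heavier CP-even boson
    return alpha, mh, mH

print(cp_even_sector(0.1, 0.1, 0.05, 0.2, v1=174.0, v2=174.0))
```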
However, the symbolic transformations of the intermediate expressions are simpler if we rewrite the potential $`V(\phi _1,\phi _2)`$ in the often-used representation
$$U(\phi _1,\phi _2)=-\mu _1^2(\phi _1^+\phi _1)-\mu _2^2(\phi _2^+\phi _2)-\mu _{12}^2(\phi _1^+\phi _2+\phi _2^+\phi _1)+\overline{\lambda }_1(\phi _1^+\phi _1)^2+\overline{\lambda }_2(\phi _2^+\phi _2)^2+\overline{\lambda }_3(\phi _1^+\phi _1)(\phi _2^+\phi _2)+\overline{\lambda }_4(\phi _1^+\phi _2)(\phi _2^+\phi _1)+\frac{\overline{\lambda }_5}{2}[(\phi _1^+\phi _2)(\phi _1^+\phi _2)+(\phi _2^+\phi _1)(\phi _2^+\phi _1)].$$ (9)
It is easy to check that in the case of a zero $`\phi _1^+\phi _2`$ phase the potentials (1) and (9) are equivalent if the constants $`\overline{\lambda }_i`$, $`\mu `$ and $`\lambda _i`$ are related by the formulas
$$\overline{\lambda }_1=\lambda _1+\lambda _3,\qquad \overline{\lambda }_2=\lambda _2+\lambda _3,\qquad \overline{\lambda }_3=2\lambda _3+\lambda _4,$$ (10)
$$\overline{\lambda }_4=-\lambda _4+\frac{\lambda _5}{2}+\frac{\lambda _6}{2},\qquad \overline{\lambda }_5=\frac{\lambda _5}{2}-\frac{\lambda _6}{2}$$
and
$$\mu _{12}^2=\lambda _5\frac{v_1v_2}{2},\qquad \mu _1^2=\lambda _1v_1^2+\lambda _3v_1^2+\lambda _3v_2^2,\qquad \mu _2^2=\lambda _2v_2^2+\lambda _3v_1^2+\lambda _3v_2^2.$$ (11)
The expressions (11) are sometimes called ’minimization conditions’ if one starts from the potential $`U(\phi _1,\phi _2)`$ (9), whose symbolic structure does not clearly exhibit a possible minimum. In the MSSM $`\overline{\lambda }_5=`$0, and it follows that $`\mu _{12}^2`$ is fixed and equal to $`m_A^2s_\beta c_\beta `$. If this equality is not satisfied (or, equivalently, $`\lambda _5\ne \lambda _6`$ in (1)), $`CP`$-violation in the Higgs sector can be introduced. The diagonal form of $`U(\phi _1,\phi _2)`$ and the physical scalar boson interaction vertices are obtained by the substitution of the following expressions for $`\overline{\lambda }_i`$ and $`\mu _i`$ into the potential (9):
$$\overline{\lambda }_1=\frac{1}{2v^2}\left[\left(\frac{s_\alpha }{c_\beta }\right)^2m_h^2+\left(\frac{c_\alpha }{c_\beta }\right)^2m_H^2-\frac{s_\beta }{c_\beta ^3}\mu _{12}^2\right],$$ (12)
$$\overline{\lambda }_2=\frac{1}{2v^2}\left[\left(\frac{c_\alpha }{s_\beta }\right)^2m_h^2+\left(\frac{s_\alpha }{s_\beta }\right)^2m_H^2-\frac{c_\beta }{s_\beta ^3}\mu _{12}^2\right],$$
$$\overline{\lambda }_3=\frac{1}{v^2}\left[2m_{H^\pm }^2-\frac{\mu _{12}^2}{s_\beta c_\beta }+\frac{s_{2\alpha }}{s_{2\beta }}(m_H^2-m_h^2)\right],$$
$$\overline{\lambda }_4=\frac{1}{v^2}\left(\frac{\mu _{12}^2}{s_\beta c_\beta }+m_A^2-2m_{H^\pm }^2\right),\qquad \overline{\lambda }_5=\frac{1}{v^2}\left(\frac{\mu _{12}^2}{s_\beta c_\beta }-m_A^2\right),$$
$$\mu _1^2=\frac{1}{2}\left[\frac{s_\alpha }{c_\beta }s_{\alpha -\beta }m_h^2+\frac{c_\alpha }{c_\beta }c_{\alpha -\beta }m_H^2-2\mathrm{𝚝𝚐}\beta \mu _{12}^2\right],$$
$$\mu _2^2=\frac{1}{2}\left[-\frac{c_\alpha }{s_\beta }s_{\alpha -\beta }m_h^2+\frac{s_\alpha }{s_\beta }c_{\alpha -\beta }m_H^2-2\mathrm{𝚌𝚝𝚐}\beta \mu _{12}^2\right].$$
Our expressions for $`\overline{\lambda }_4`$ and $`\overline{\lambda }_5`$ are the same as given in the literature for the case of zero $`\lambda _6`$ and $`\lambda _7`$.
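Conversely, equation (12) lets one fix the potential couplings directly from a target physical spectrum. The Python sketch below is illustrative only; the numerical inputs are assumptions, and the sign of the $`s_{2\alpha }/s_{2\beta }`$ term follows the reconstruction above, which is consistent with the MSSM limit (16) given in the next section:

```python
import numpy as np

def couplings_from_spectrum(mh, mH, mA, mHc, mu12sq, alpha, beta, v=246.0):
    """bar-lambda_i of the potential (9) from the physical spectrum, eq. (12).
    Masses in GeV, angles in radians, mu12sq in GeV^2 (all assumed inputs)."""
    sa, ca = np.sin(alpha), np.cos(alpha)
    sb, cb = np.sin(beta), np.cos(beta)
    s2a, s2b = np.sin(2.0 * alpha), np.sin(2.0 * beta)
    soft = mu12sq / (sb * cb)
    bl1 = ((sa / cb)**2 * mh**2 + (ca / cb)**2 * mH**2 - (sb / cb**3) * mu12sq) / (2.0 * v**2)
    bl2 = ((ca / sb)**2 * mh**2 + (sa / sb)**2 * mH**2 - (cb / sb**3) * mu12sq) / (2.0 * v**2)
    bl3 = (2.0 * mHc**2 - soft + (s2a / s2b) * (mH**2 - mh**2)) / v**2
    bl4 = (soft + mA**2 - 2.0 * mHc**2) / v**2
    bl5 = (soft - mA**2) / v**2
    return bl1, bl2, bl3, bl4, bl5

# An MSSM-like tree-level point (tg beta = 3, m_A = 200 GeV) should approximately
# reproduce the supersymmetric constraints (16), in particular bar-lambda_5 = 0:
beta = np.arctan(3.0)
mu12sq = 200.0**2 * np.sin(beta) * np.cos(beta)
print(couplings_from_spectrum(70.0, 208.4, 200.0, 215.6, mu12sq, -0.426, beta))
```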
Complete sets of Feynman rules (unitary gauge) for the triple and quartic Higgs boson interactions in the general two-Higgs-doublet model, with the possibility of $`CP`$-violation in the Higgs sector (defined by the $`\mu _{12}`$ parameter), are shown in Tables 1-2. These sets were obtained by means of the LanHEP package. LanHEP is a specialized symbolic manipulation system capable of generating Feynman rules for $`SU(2)`$, $`SU(3)`$ gauge invariant lagrangians with arbitrary sets of particle multiplets, in the standard input lagrangian format of the CompHEP package. We do not show here the rather long set of Feynman rules in the ’tHooft-Veltman gauge, which can also be generated after the introduction of the ghost and ghost-goldstone lagrangian terms in the LanHEP program. <sup>1</sup><sup>1</sup>1The generation process takes 15 sec. of CPU time (i686). Complete lagrangian tables in CompHEP format and the LanHEP package are available at http://theory.npi.msu.su/~semenov/lanhep.html We assume that in the Yukawa sector $`<\phi _1>`$ couples only to down fermions,
$$V_{ud}\frac{em_d}{2\sqrt{2}m_Ws_Wc_\beta }[\overline{\psi }_1(1+\gamma _5)\psi _2\phi _1+\overline{\psi }_2(1-\gamma _5)\psi _1\phi _1^+]$$ (13)
(here for the $`u`$, $`d`$ quarks $`\overline{\psi }_1=\{\overline{u},V_{ud}\overline{d}+V_{us}\overline{s}+V_{ub}\overline{b}\},\psi _2=d`$, and analogous structures hold for the $`s`$, $`b`$ quarks and the leptons; in the case of quarks $`V_{ab}`$ denotes the CKM matrix elements), and $`<\phi _2>`$ couples only to up fermions (the so-called model of type II):
$$\frac{em_u}{2\sqrt{2}m_Ws_Ws_\beta }[\overline{\psi }_1(1+\gamma _5)i\tau _2\psi _2\phi _2^++\overline{\psi }_2(1-\gamma _5)i\tau _2\psi _1\phi _2]$$ (14)
(here $`\overline{\psi }_1=\{\overline{u},V_{ud}\overline{d}+V_{us}\overline{s}+V_{ub}\overline{b}\},\psi _2=u`$, and analogous structures hold for the $`c`$ and $`t`$ quarks). The Higgs-gauge boson interaction is defined by the straightforward extension of the covariant derivative to the case of two scalar doublets. It is easy to find the relation between the vacuum expectation values of the potential and the $`W`$-boson mass and coupling $`g=e/\mathrm{𝚜𝚒𝚗}\vartheta _W`$,
$$v^2=v_1^2+v_2^2=\frac{4m_W^2}{e^2}s_W^2,$$ (15)
following from the structure of the scalar field kinetic term $`D_\mu \phi ^+D^\mu \phi `$. From the phenomenological point of view the general multiparametric two-Higgs-doublet model is too flexible to be systematically used for data analysis. Practically no limits on the masses of individual scalars can be set if their couplings to gauge bosons and fermions depend on free parameters and can be very small in rather large regions of the parameter space. Recent discussions of the possible limits can be found in the literature. However, the parameter space can be strongly restricted by the constraints imposed by supersymmetry. Let us consider the reduction of the general two-doublet model Feynman rules shown in Tables 1, 2 to the case of the minimal supersymmetric model (MSSM). The potential $`V(\phi _1,\phi _2)`$ (1) contains eight parameters: two VEV’s $`v_1`$, $`v_2`$ and six $`\lambda _i`$ ($`i`$=1,…6). The eight parameters of the potential $`U(\phi _1,\phi _2)`$ (9), namely $`\mu _1`$, $`\mu _2`$, $`\mu _{12}`$ and $`\overline{\lambda }_i`$ ($`i`$=1,…5), can be found using (10),(11).
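Relation (15) can be checked numerically: with $`e^2=4\pi \alpha _{em}`$ and PDG-like input values (assumed here for illustration; they are not quoted in the text) it reproduces the familiar electroweak scale $`v\simeq 246`$ GeV:

```python
import math

# Numerical check of eq. (15): v = 2 m_W s_W / e, with e^2 = 4 pi alpha_em.
m_W      = 80.4          # GeV (assumed input)
sin2_thW = 0.231         # sin^2 of the weak mixing angle (assumed input)
alpha_em = 1.0 / 128.0   # electromagnetic coupling near the m_Z scale (assumed)

e = math.sqrt(4.0 * math.pi * alpha_em)
v = 2.0 * m_W * math.sqrt(sin2_thW) / e
print(v)                 # ~ 246-247 GeV
```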
On the other hand, in order to define the Higgs sector we need eight physical parameters: the mixing angle $`\beta `$ and the $`W`$-boson mass $`m_W`$, the mixing angle $`\alpha `$, the parameter $`\mu _{12}`$ and the four masses of the scalars $`m_h`$, $`m_H`$, $`m_A`$, $`m_{H^\pm }`$. The two VEV’s can be expressed through $`m_W`$, $`\mathrm{𝚝𝚐}\beta `$ by (5) and (15), and only one degree of freedom remains here. In the case of the superpotential five additional constraints are imposed, relating all Higgs boson self-couplings $`\overline{\lambda }_i`$ ($`i`$=1,…5) to the gauge coupling constants at the energy scale $`M_{SUSY}`$:
$$\overline{\lambda }_1=\overline{\lambda }_2=\frac{g^2+g_1^2}{8},\qquad \overline{\lambda }_3=\frac{g^2-g_1^2}{4},\qquad \overline{\lambda }_4=-\frac{g^2}{2},\qquad \overline{\lambda }_5=0.$$ (16)
As we already noticed, if $`\overline{\lambda }_5=`$0, $`\mu _{12}`$ is fixed and $`CP`$-parity is conserved. The remaining two independent parameters may be used to define all Higgs boson masses and mixing angles. One can choose, for instance, the $`r_1,r_2`$ parametrization ($`r_{1,2}=m_{h,H}^2/m_Z^2`$) or the well-known $`m_A`$, $`\mathrm{𝚝𝚐}\beta `$ parametrization. In order to reduce the general two-Higgs-doublet model vertices to the case of the MSSM it is convenient to use the $`\alpha `$, $`\beta `$ parametrization:
$$m_h^2=m_Z^2\mathrm{𝚌𝚘𝚜}2\beta \frac{\mathrm{𝚜𝚒𝚗}(\alpha +\beta )}{\mathrm{𝚜𝚒𝚗}(\alpha -\beta )},\qquad m_H^2=m_Z^2\mathrm{𝚌𝚘𝚜}2\beta \frac{\mathrm{𝚌𝚘𝚜}(\alpha +\beta )}{\mathrm{𝚌𝚘𝚜}(\alpha -\beta )},$$ (17)
$$m_A^2=m_Z^2\frac{\mathrm{𝚜𝚒𝚗}2(\alpha +\beta )}{\mathrm{𝚜𝚒𝚗}2(\alpha -\beta )},\qquad \mu _{12}^2=m_A^2\mathrm{𝚜𝚒𝚗}\beta \mathrm{𝚌𝚘𝚜}\beta .$$
Substitution of these expressions into the vertex factors in Tables 1, 2, after trivial trigonometric transformations, reduces them to the simpler MSSM factors. The complete list of Feynman rules at the MSSM scale is shown in Table 3. The renormalization group (RG) evolution of the coupling constants $`\lambda _i`$ from the energy scale $`M_{SUSY}`$ to the electroweak scale $`M_{EW}`$ violates the constraints (16), and the effective low-energy potential at the scale $`M_{EW}`$ is the potential of a general two-Higgs-doublet model with RG-evolved couplings $`\overline{\lambda }_i`$. At given values of $`m_A`$, $`\mathrm{𝚝𝚐}\beta `$ (or $`\alpha `$, $`\beta `$), the masses of the Higgs bosons and the mixing angle $`\alpha `$ (or $`m_A`$) at the energy scale $`M_{SUSY}`$ can be obtained using (17). A detailed analysis of the subsequent RG evolution and the calculation of the leading-logarithmic radiative corrections to the mixing angles, masses and couplings of the Higgs bosons can be found in the literature. We briefly point out that the additional input parameters to be defined in order to fix the scheme are the scale of SUSY breaking $`M_{SUSY}`$, the mass parameter of the higgsino-gaugino sector $`\mu `$, and the squark mixing parameters $`A`$.

## 3 Multiple production of neutral Higgs bosons

The processes of multiple neutral Higgs boson production in the MSSM have been considered in the framework of the effective potential approach to the calculation of radiatively corrected scalar masses and couplings of the SUSY Higgs sector. The reactions
$$e^+e^-\to hhZ,\qquad e^+e^-\to hhA,\qquad e^+e^-\to \nu _e\overline{\nu }_ehh$$ (18)
were considered, and it was shown that the cross sections of double and triple Higgs boson production are not small and that experimental measurements of the triple Higgs boson couplings are realistic.
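A small Python transcription of the parametrization (17) (illustrative only; the numerical inputs are assumptions, not values taken from the text) shows how a single $`(\alpha ,\beta )`$ point fixes the whole tree-level spectrum that serves as the boundary condition for the RG evolution:

```python
import numpy as np

def mssm_tree_spectrum(alpha, beta, mZ=91.19):
    """Tree-level MSSM Higgs masses and mu_12^2 from eq. (17)."""
    c2b = np.cos(2.0 * beta)
    mh2 = mZ**2 * c2b * np.sin(alpha + beta) / np.sin(alpha - beta)
    mH2 = mZ**2 * c2b * np.cos(alpha + beta) / np.cos(alpha - beta)
    mA2 = mZ**2 * np.sin(2.0 * (alpha + beta)) / np.sin(2.0 * (alpha - beta))
    mu12sq = mA2 * np.sin(beta) * np.cos(beta)
    return np.sqrt(mh2), np.sqrt(mH2), np.sqrt(mA2), mu12sq

# Example: beta = arctan(3) with alpha ~ -0.426 corresponds to m_A ~ 200 GeV,
# giving m_h ~ 70 GeV and m_H ~ 208 GeV at tree level.
print(mssm_tree_spectrum(-0.426, np.arctan(3.0)))
```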
We used the results of these analyses to calculate the radiatively corrected masses of the Higgs bosons and the mixing angle $`\alpha `$ in the renormalization group approach to the evolution of the Higgs potential couplings from the SUSY scale $`M_{SUSY}`$ down to the electroweak scale. We set $`M_{SUSY}=`$1 TeV and have not included the effects of squark mixing, setting the parameters $`A`$ and $`\mu `$ equal to zero. In the case of not too large $`\mathrm{𝚝𝚐}\beta `$ (we used $`\mathrm{𝚝𝚐}\beta =3`$) and a pseudoscalar mass $`m_A`$ of order 150–250 GeV, the masses of the heavy CP-even Higgs boson $`H`$ and the charged Higgs boson $`H^\pm `$ are also at the scale 150-250 GeV. The lightest Higgs boson mass approaches 100 GeV when the pseudoscalar mass surpasses 200 GeV. (Changes of the SUSY scale and the mixing parameters can in principle shift $`m_h`$ by about 50 GeV.) These radiatively corrected parameters were used in our set of Feynman rules. The subsequent calculation of the complete tree-level amplitudes for the multiple Higgs boson production processes (18) was performed by means of the CompHEP package, in which the exact symbolic result for the matrix element squared is converted to FORTRAN code and integrated by the multichannel Monte-Carlo method. The s-channel resonant peaks of the amplitude (see Fig.1) are regularized by phase space mappings to ensure an efficient application of the VEGAS integrator. While in the Standard Model the cross section of $`hhZ`$ production is of order $`2\cdot 10^{-1}`$ fb at a Higgs boson mass of 100 GeV and slowly decreases when the mass of the Higgs boson increases, the picture in the two-doublet MSSM sector is strongly changed by the availability of resonant production mechanisms, when the decays of the on-shell $`H\to hh`$ and $`A^0\to Zh`$ become possible. We show the dependence of the total cross sections in the channels (18) on the masses of the CP-even states $`m_h`$ and $`m_H`$ in Figs. 2, 3. In order to understand the cross section behaviour qualitatively, we also show the dependence of the $`h`$, $`H`$ branching ratios (in the two-body decay channels with a contribution greater than 1%) on their masses in Figs. 4, 5. The rapid decrease of the total rate at $`m_h=`$ 60 GeV ($`m_H=`$ 120 GeV) and the rapid increase at $`m_h=`$ 95 GeV ($`m_H=`$ 190 GeV) are directly connected with the resonant threshold of the heavy scalar decay $`H\to hh`$ (see the diagrams in Fig.1). The channel $`e^+e^-\to hhZ`$ receives some enhancement at $`m_h=`$ 95 GeV ($`m_H=`$ 210 GeV) when the resonance threshold $`A^0\to Zh`$ opens. Our results are qualitatively consistent with earlier ones obtained in somewhat different regions of the two-Higgs-doublet model parameter space. The radiatively corrected masses are rather sensitive to the input parameter values. At a smaller value of $`\mathrm{𝚝𝚐}\beta `$ the mass interval between the closing and opening $`hh`$ thresholds decreases to a few GeV ($`\mathrm{𝚝𝚐}\beta =`$ 1.5). The reactions (18) do not include quartic Higgs boson interaction vertices. We calculated the cross section of the simplest process $`e^+e^-\to hhhZ`$ (see Figs. 2, 3), where the quartic vertices $`hhhh`$ and $`hhhH`$ participate (21 diagrams in the unitary gauge). In a very limited region of the parameter space the reaction has an observable cross section, provided that the luminosity is high and the experimental reconstruction of multijet events is very efficient.
## 4 Conclusions

The large increase of the estimated achievable integrated luminosity at the next linear colliders (especially L$`=`$ 500 fb<sup>-1</sup>/year for the TESLA project) makes the experimental study of the Higgs boson self-interaction quite realistic. Such an investigation is especially interesting if the Higgs sector of the model includes more than one $`SU(2)`$ multiplet. A nontrivial spectrum of scalars leads to resonant multiple Higgs boson production mechanisms, when the final states with 4 or 6 $`b`$-jets from their decays will appear with cross sections one to two orders of magnitude greater than in the SM case of only one scalar boson in the Higgs sector. For a systematic study of the various production channels we derive in compact form a complete set of Feynman rules for the general case of the two-Higgs-doublet model. We demonstrate that in the case of minimal supersymmetry, when additional constraints are imposed on the general parameter space, the interaction vertices are reduced to the well-known vertices of the MSSM at the scale $`M_{SUSY}`$. The direct connection of the LanHEP output to the standard lagrangian input format of CompHEP makes the subsequent efficient calculation of various reactions possible.

Acknowledgements M.D. is grateful to S.Y.Choi and P.M.Zerwas for useful discussions. The authors would like to express their gratitude to H.S.Song and the Center for Theoretical Physics, Seoul National University, where this work was completed, for hospitality. The authors thankfully acknowledge discussions with the members of the Minami-Tateya group (KEK, Tsukuba) and with N.Skachkov. The research was partially supported by RFBR grant 96-02-19773a, INTAS grant INTAS-YSF 98-50 and the St.-Petersburg University KCFE grant.

Table 1. Triple Higgs boson interaction vertices in the general two-Higgs-doublet model.
Table 2. Quartic Higgs boson interaction vertices in the general two-Higgs-doublet model.
Table 3. Triple and quartic Higgs boson interaction vertices at the scale $`M_{SUSY}`$.

Figure captions

Fig. 1 Feynman diagrams for the process $`e^+e^-\to hhZ`$
Fig. 2 Total cross sections for the reactions $`e^+e^-\to hhZ`$, $`e^+e^-\to hhA`$, $`e^+e^-\to \nu _e\overline{\nu }_ehh`$ and $`e^+e^-\to hhhZ`$ versus the mass of the light $`CP`$-even Higgs boson at $`\sqrt{s}=`$500 GeV
Fig. 3 Total cross sections for the reactions $`e^+e^-\to hhZ`$, $`e^+e^-\to hhA`$, $`e^+e^-\to \nu _e\overline{\nu }_ehh`$ and $`e^+e^-\to hhhZ`$ versus the mass of the heavy $`CP`$-even Higgs boson at $`\sqrt{s}=`$500 GeV
Fig. 4 Two-body branching ratios of the heavy $`CP`$-even Higgs boson
Fig. 5 Two-body branching ratios of the $`CP`$-odd Higgs boson