no-problem/9901/gr-qc9901039.html
# On Equivalence of Critical Collapse of Non-Abelian Fields
## Abstract
We continue our study of the gravitational collapse of spherically symmetric skyrmions. For certain families of initial data, we find the discretely self-similar Type II critical transition characterized by the mass scaling exponent $`\gamma \approx 0.20`$ and the echoing period $`\mathrm{\Delta }\approx 0.74`$. We argue that the coincidence of these critical exponents with those found previously in the Einstein-Yang-Mills model is not accidental but that, in fact, the two models belong to the same universality class.
This is the second paper in a series devoted to the study of critical gravitational collapse in the Einstein-Skyrme (ES) model. The main motivation for using this model is an attempt to understand the role of different length scales and stationary solutions in the dynamics of Einstein’s equations at the threshold of black hole formation. In the first paper we showed that the presence of sphaleron solutions gives rise to Type I critical behavior for certain families of initial data. In particular, we found a new kind of first order phase transition in which subcritical data relax to the static gravitating skyrmion. Here, we focus our attention on Type II critical behavior. After providing numerical evidence for the existence of such a transition, we discuss the universality of critical behavior with respect to the intrinsic scales present in the model. We show that the two length scales become gradually irrelevant on small spacetime scales near criticality, and consequently the near-critical solutions can be asymptotically self-similar. Interestingly enough, the critical exponents characterizing the Type II transition, the mass-scaling exponent $`\gamma \approx 0.20`$ and the echoing period $`\mathrm{\Delta }\approx 0.74`$, agree (within the numerical error limits) with those previously found in the Einstein-Yang-Mills (EYM) model. We argue that this fact reflects the equivalence of the Type II critical transitions in the EYM and ES models, that is, the critical solutions in both models are asymptotically identical. To our knowledge, this is the first example of universality of critical collapse across two fundamentally different physical systems (the universality classes observed previously in collapse simulations comprised models differing only by potential terms in a Lagrangian).
For this paper to be self-contained let us briefly recall the setup for the spherically symmetric ES model we used in . Adopting the polar time slicing and the areal radial coordinate, the spacetime metric can be written as
$`ds^2`$ $`=`$ $`-N(r,t)e^{-2\delta (r,t)}dt^2+N^{-1}(r,t)dr^2+r^2d\mathrm{\Omega }^2.`$ (1)
As a matter source we take the $`SU(2)`$-valued scalar field $`U(x)`$ (called the chiral field) and assume the hedgehog ansatz $`U=\mathrm{exp}(i\vec{\sigma }\cdot \widehat{r}F(r,t))`$, where $`\vec{\sigma }`$ is the vector of Pauli matrices. The components of the stress-energy tensor $`T_{ab}`$, expressed in the orthonormal frame determined by the metric (1), are
$$T_{00}=\frac{u}{2r^2}(NF^{\prime 2}+N^{-1}e^{2\delta }\dot{F}^2)+\frac{\mathrm{sin}^2F}{r^2}\left(f^2+\frac{\mathrm{sin}^2F}{2e^2r^2}\right),$$
(2)
$$T_{11}=\frac{u}{2r^2}(NF^{\prime 2}+N^{-1}e^{2\delta }\dot{F}^2)-\frac{\mathrm{sin}^2F}{r^2}\left(f^2+\frac{\mathrm{sin}^2F}{2e^2r^2}\right),$$
(3)
$$T_{01}=\frac{u}{r^2}e^\delta \dot{F}F^{\prime },$$
(4)
where overdots and primes denote $`\partial /\partial t`$ and $`\partial /\partial r`$ respectively, and $`u=f^2r^2+\frac{2}{e^2}\mathrm{sin}^2F`$. The two coupling constants $`f^2`$ and $`e^2`$ have dimensions: $`[f^2]=ML^{-1}`$ and $`[e^2]=M^{-1}L^{-1}`$ (we use units in which $`c=1`$). In order to write the evolution equations in first-order form, we define an auxiliary variable $`P=ue^\delta N^{-1}\dot{F}`$. Then, the full set of ES equations is
$`\dot{F}`$ $`=`$ $`e^{-\delta }N{\displaystyle \frac{P}{u}},`$ (5)
$`\dot{P}`$ $`=`$ $`(e^{-\delta }NuF^{\prime })^{\prime }-\mathrm{sin}(2F)e^{-\delta }\left[f^2+{\displaystyle \frac{1}{e^2}}(NF^{\prime 2}-N{\displaystyle \frac{P^2}{u^2}}+{\displaystyle \frac{\mathrm{sin}^2F}{r^2}})\right],`$ (6)
$`N^{\prime }`$ $`=`$ $`{\displaystyle \frac{1-N}{r}}-8\pi GrT_{00},`$ (7)
$`\dot{N}`$ $`=`$ $`-8\pi Gre^{-\delta }NT_{01},`$ (8)
$`\delta ^{\prime }`$ $`=`$ $`-4\pi GrN^{-1}(T_{00}+T_{11}).`$ (9)
In order to make an identification of certain terms in the equations easier, all the coupling constants are displayed; however, we recall that, apart from the overall scale, solutions depend on these constants only through the dimensionless parameter $`\alpha =4\pi Gf^2`$. We solve Eqs. (5-9) for regular asymptotically flat initial data. The condition of regularity at the center, $`N(r,t)=1+O(r^2)`$, is ensured by the boundary condition $`F(r,t)=O(r)`$ for $`r\to 0`$. The asymptotic flatness of initial data, $`N(r,0)=1+O(1/r)`$ for $`r\to \mathrm{\infty }`$, is guaranteed by the initial condition $`F(r,0)=B\pi +O(1/r^2)`$, where the integer $`B`$, usually referred to as the baryon number, is the topological degree of the chiral field. Since the baryon number is dynamically preserved, the Cauchy problem falls into infinitely many superselection sectors labelled by $`B`$; here we concentrate on the $`B=0`$ sector. A typical one-parameter family of initial data in this class, interpolating between black-hole and no-black-hole spacetimes, is an initially ingoing “Gaussian”
$$F(r,0)=Ar^3\mathrm{exp}\left[-\left(\frac{r-r_0}{s}\right)^4\right],$$
(10)
where one of the parameters $`A,s`$, or $`r_0`$ is varied (hereafter this parameter is denoted by $`p`$) while the others are fixed. As usual, the critical value $`p^{*}`$ is located by performing a bisection search in $`p`$. In order to get into close proximity of $`p^{*}`$ we have implemented an adaptive mesh refinement algorithm. This code was essential in probing the critical region with sufficient resolution. Our numerical results demonstrate the existence of a Type II critical transition with its two main characteristic features, first observed by Choptuik in the massless scalar field collapse , namely:
* Mass scaling: For supercritical data, the final black hole mass scales as $`m_{BH}\approx C|p-p^{*}|^\gamma `$ with the exponent $`\gamma \approx 0.20`$. As shown in Fig. 1, this scaling law holds over two orders of magnitude of mass (see also the sketch following this list).
* Echoing: For near-critical data, the solutions approach (for sufficiently small $`r`$) a certain universal intermediate attractor which is discretely self-similar with the echoing period $`\mathrm{\Delta }\approx 0.74`$. This is illustrated in Fig. 2.
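To make the search and the fit concrete, here is a minimal Python sketch. The routine `evolve(p)`, assumed to integrate Eqs. (5-9) for the data (10) and to return whether a horizon forms together with the resulting black hole mass, is a hypothetical placeholder and not part of the code used in the paper.

```python
import numpy as np

def locate_critical_parameter(evolve, p_lo, p_hi, tol=1e-12):
    """Bisection search for p* between a dispersing (p_lo) and a
    collapsing (p_hi) member of a one-parameter family of initial data.
    `evolve(p)` is assumed to return (forms_horizon, m_bh)."""
    assert not evolve(p_lo)[0] and evolve(p_hi)[0]
    while p_hi - p_lo > tol * abs(p_hi):
        p_mid = 0.5 * (p_lo + p_hi)
        if evolve(p_mid)[0]:
            p_hi = p_mid   # supercritical: move the upper bracket down
        else:
            p_lo = p_mid   # subcritical: move the lower bracket up
    return 0.5 * (p_lo + p_hi)

def mass_scaling_exponent(evolve, p_star, offsets):
    """Fit m_BH ~ C |p - p*|^gamma from a set of supercritical offsets."""
    ps = p_star + np.asarray(offsets)                 # supercritical parameters
    masses = np.array([evolve(p)[1] for p in ps])
    gamma, log_C = np.polyfit(np.log(ps - p_star), np.log(masses), 1)
    return gamma
```

In practice almost all of the cost is in `evolve` itself; this is where the adaptive mesh refinement mentioned above is needed to resolve the ever-shrinking echoing region near $`p^{*}`$.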
We find that the critical exponents, $`\gamma `$ and $`\mathrm{\Delta }`$, are universal not only with respect to initial data, but also with respect to the parameter $`\alpha `$. The universality of Type II critical collapse with respect to a dimensionless parameter is a rare phenomenon which can occur only if the parameter enters evolution equations through terms which become “irrelevant” near criticality (see the discussion of this issue in Gundlach’s review ). Now, we would like to show that this is exactly what happens for $`\alpha `$ in the ES equations. Along the way, we will see that discrete self-similarity is compatible with the presence of two length scales in the model. Our basic assumption is that $`F(r,t)/\sqrt{r}`$ is an echoing quantity (scaling variable). This assumption is not only justified empirically (see Fig. 2) but, as follows from a simple dimensional analysis of Einstein’s equations (7-9), it seems to be the only possibility compatible with the discrete self-similarity of metric functions $`N`$ and $`\delta `$. We introduce a unit of length $`L=\sqrt{4\pi G}/e`$ and dimensionless coordinates
$$\tau =\mathrm{ln}\left(\frac{t^{*}-t}{L}\right)\qquad \text{and}\qquad \xi =\mathrm{ln}\left(\frac{r}{t^{*}-t}\right),$$
(11)
where $`t^{*}`$ is the accumulation time of the infinite number of echoes. We also define new dimensionless variables
$$\mathrm{\Phi }(\tau ,\xi )=\frac{F(r,t)}{\sqrt{r/L}},\qquad Z(\tau ,\xi )=\sqrt{rL}F^{\prime }(r,t),\qquad \mathrm{\Pi }(\tau ,\xi )=\sqrt{rL}e^\delta N^{-1}\dot{F}(r,t).$$
(12)
By assumption $`\mathrm{\Phi },Z`$, and $`\mathrm{\Pi }`$ are the scaling variables, that is, they are asymptotically periodic in $`\tau `$: $`\mathrm{\Phi }(\tau ,\xi )\approx \mathrm{\Phi }(\tau +\mathrm{\Delta },\xi )`$ etc. for large negative $`\tau `$ and fixed $`\xi `$. Rewriting Eqs.(5-9) in these new variables, we find that $`\alpha `$ is always multiplied by $`e^\tau `$, and the only other terms depending explicitly on $`\tau `$ appear through the combination
$$X=e^{-\frac{1}{2}(\tau +\xi )}\mathrm{sin}\left(e^{\frac{1}{2}(\tau +\xi )}\mathrm{\Phi }\right).$$
(13)
For example, the hamiltonian constraint (7) takes the form
$$1-N-\frac{\partial N}{\partial \xi }=N(\alpha e^{\tau +\xi }+2X^2)(Z^2+\mathrm{\Pi }^2)+X^2(2\alpha e^{\tau +\xi }+X^2).$$
(14)
In the limit $`\tau \to -\mathrm{\infty }`$, the terms containing $`\alpha `$ become negligible (“irrelevant” in the language of renormalization group theory), and therefore the critical behavior does not depend on $`\alpha `$. In the same limit, $`X\to \mathrm{\Phi }`$, so the equations become asymptotically autonomous in $`\tau `$, and thereby scale invariant. The equivalence of Type II critical behavior between the ES and EYM models is an immediate consequence of the universality with respect to $`\alpha `$, because, as we pointed out in , for $`\alpha =0`$ Eqs.(5-9) reduce (after the substitution $`w=\mathrm{cos}F`$) to the EYM equations. This means that the EYM critical solution constructed by Gundlach is valid (to leading order) in the ES case as well, and hence the critical exponents in both models are the same. As mentioned above, this theoretical retrodiction is confirmed numerically.
We conclude with two remarks concerning possible extensions of the research presented here and in . First, we would like to point out that by setting all terms in Eqs.(5-9) containing the coupling constant $`e^2`$ (the Skyrme terms) to zero, one obtains the $`\sigma `$-model coupled to gravity. This model is scale invariant, so it does not admit a Type I transition but a Type II transition is expected to exist. If so, the natural scaling variables will be
$$\stackrel{~}{\mathrm{\Phi }}(\tau ,\xi )=F(r,t),\qquad \stackrel{~}{Z}(\tau ,\xi )=rF^{\prime }(r,t),\qquad \stackrel{~}{\mathrm{\Pi }}(\tau ,\xi )=re^\delta N^{-1}\dot{F}(r,t).$$
(15)
It is easy to check that in this case the terms proportional to $`\alpha `$ are not irrelevant. For example the analogue of Eq. (14) is
$$1-N-\frac{\partial N}{\partial \xi }=\alpha N(\stackrel{~}{Z}^2+\stackrel{~}{\mathrm{\Pi }}^2)+2\alpha \mathrm{sin}^2\stackrel{~}{\mathrm{\Phi }}.$$
(16)
Therefore, the critical solution, and eo ipso the critical exponents, are anticipated to depend strongly on $`\alpha `$. We have been informed that the group of researchers led by Peter Aichelburg is in the process of investigating this and related problems . If their results confirm our expectation, the universality with respect to $`\alpha `$ in the ES model could be interpreted as another nonperturbative effect of the Skyrme correction to the $`\sigma `$-model.
Second, we recall that, in contrast to Type II, the Type I critical transition in the ES model is manifestly nonuniversal with respect to $`\alpha `$, because the critical solution (the sphaleron) changes with $`\alpha `$; in particular, it exists only for sufficiently small $`\alpha `$. Thus, for large $`\alpha `$ only Type II behavior is possible, while for small $`\alpha `$ the two types of critical behavior can coexist. In the latter case, one can anticipate the existence of crossover effects at the border of the basins of attraction of the Type I and Type II critical solutions. We leave an investigation of these fascinating effects to other researchers who, as we have recently accidentally found out , are also interested in the ES model. We would like to emphasize that a full description of the extremely rich phenomenology of the ES model is not per se a goal of our studies; for us this model is only a testing ground for addressing certain issues of the dynamics of Einstein’s equations in the presence of intrinsic scales and stationary (stable and unstable) solutions.
Acknowledgments. This research was supported by the KBN grant PB 99/P03/96/11. Part of the computations were performed on the facilities of the Institute of Nuclear Physics in Cracow supported by the Stiftung für Deutsche-Polnische Zusammenarbeit, project 1522/94/LN.
no-problem/9901/quant-ph9901055.html
# Consistent-Histories Description of a World with Increasing Entropy
C. H. Woo
Physics Department
University of Maryland
College Park, MD 20742
1. Attempt at a self-contained quantum description
The consistent-histories program<sup>1,2,3</sup> represents a more concerted effort than previous attempts to overcome what some workers regard as the main defect of the Copenhagen interpretation: the need to invoke the existence of a classical world as a pre-condition to the meaningful interpretation of quantum mechanics. Hence, just as for any other attempt at a self-contained quantum description, the first order of business for this program is to represent the classical world correctly. It is precisely on this point, however, that the consistent-histories program has not yet been carried to completion to the satisfaction of all concerned. It is not the purpose of this note to discuss the nature of the difficulties encountered; rather, assuming that there is a quantum description of the quasi-classical world along the lines of consistent-histories, I ask what the description will be like. In particular, I investigate the rule of association between the histories in a suitable family and the quasi-classical worlds that the formalism is supposed to describe.
A family of consistent histories is specified by an initial state $`\rho (0)`$, the Hamiltonian of the world $`H`$, and sequences of events. Each sequence is represented by a chain of Heisenberg-picture projectors:
$$C(\alpha _1\alpha _2\mathrm{\dots }\alpha _n)\equiv E_1^{\alpha _1}(t_1)E_2^{\alpha _2}(t_2)\mathrm{\dots }E_n^{\alpha _n}(t_n)$$
$`(1),`$
where the subscript on the projector $`E_i^{\alpha _i}`$ refers to the nature of the resolution of the identity, and the superscript to the specific element within that set of projectors. The probability $`P(C)`$ for the occurrence of a history corresponding to the chain $`C`$ is:
$$P(C)=Tr(C^+\rho (0)C)$$
$`(2).`$
All histories within a family must fulfill certain consistency conditions so that the probabilities for their being realized satisfy normal additivity rules. Eq.(1) has been given in the simplest form, which does not take the possible branch-dependence of projections into account, but I will later return to the issue of branch-dependence where it might be relevant to the subject of this note.
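As a purely illustrative aside (not part of the original argument), the bookkeeping of Eqs.(1)-(2) and the consistency requirement can be checked numerically for an invented toy family: a single qubit with trivial Hamiltonian, projected in the $`x`$ basis at $`t_1`$ and in the $`z`$ basis at $`t_2`$. This family is exactly consistent for the maximally mixed initial state and fails the consistency conditions for a pure initial state.

```python
import itertools
import numpy as np

def proj(v):
    """Rank-1 projector |v><v|."""
    return np.outer(v, v.conj())

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
ketp = (ket0 + ket1) / np.sqrt(2); ketm = (ket0 - ket1) / np.sqrt(2)

E1 = {'+': proj(ketp), '-': proj(ketm)}   # resolution of the identity at t1 (x basis)
E2 = {'0': proj(ket0), '1': proj(ket1)}   # resolution of the identity at t2 (z basis)

def chain(a1, a2):
    # C(a1,a2) = E1^{a1}(t1) E2^{a2}(t2); with H = 0 the Heisenberg and
    # Schroedinger projectors coincide.
    return E1[a1] @ E2[a2]

def decoherence_functional(rho0):
    # D[a,b] = Tr[ C(b)^dagger rho0 C(a) ]; the diagonal entries are Eq.(2).
    labels = list(itertools.product(E1, E2))
    D = {(a, b): np.trace(chain(*b).conj().T @ rho0 @ chain(*a))
         for a in labels for b in labels}
    return labels, D

for name, rho0 in [('maximally mixed rho(0)', np.eye(2) / 2),
                   ('pure rho(0) = |0><0|', proj(ket0))]:
    labels, D = decoherence_functional(rho0)
    probs = {a: round(D[(a, a)].real, 3) for a in labels}
    off = max(abs(D[(a, b)]) for a in labels for b in labels if a != b)
    print(name, '| P(histories) =', probs, '| max off-diagonal =', round(off, 3))
```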
Consistency alone does not guarantee that the events in a history correspond to what in the Copenhagen interpretation would be called “actualized” or “registered” events. In the absence of external observers, it appears that a selection criterion needs to be added to the consistent-histories formalism to characterize such special events, which correspond to the ordinary experience that these occurrences actually happened. It turns out that a mathematical formulation of such a criterion is no easy task.<sup>4,5</sup> But the physical basis that qualifies certain types of events as actualized can be examined, and is discussed in some detail in the Appendix. Roughly speaking, the essence of actualization is verifiability from accessible records; and events which may be regarded as having been actualized or registered will be called “verifiable” from here on. A verifiable event is different from a robust event<sup>6</sup>, the consistency of which is maintained only by decoherence. Decoherence ensures that the different alternatives will not later interfere, but does not guarantee that only one of the alternatives is “actually realized.” It is only when the accessible records in a branch can corroborate which alternative is realized that one has a verifiable event. A similar point has been made by Zurek and co-workers within the framework of what they call “the existential interpretation of quantum mechanics,”<sup>7</sup> although it is not always clear whether they want to make a sharp distinction between a robust event, where the “record” may consist of little more than scattered photons which escape to infinity, and a verifiable event supported by accessible records. The main point of this note is sufficiently simple that the precise mathematical conditions qualifying an event as verifiable, which have resisted formulation so far, are not needed; but it is necessary to assume that a selection criterion does exist. The formulation of a set of clearly stated conditions for distinguishing certain events as verifiable is absolutely essential for the completion of the consistent-histories program, because this program tries to describe the classical world with events, and the quantum events which are directly relevant to the description of occurrences in the observable world are exclusively verifiable events.
Already in classical physics it is imagined that there exists an underlying fine-grained structure, the complete details of which can never be checked within the limitations imposed by the availability of resources. Turning to a quantum description of the world adds the need to consider superpositions with phase correlations, and subtracts the possibility of certain types of simultaneously precise data, but it is not greatly different in spirit as far as entertaining the idea of the world as a closed system is concerned. In particular, among classical statistical physics expositions there are statements like “the entropy of the world, regarded as a closed system, is non-decreasing.” The question this article addresses is: What is the nature of the appropriate coarse-grained quantum state, so as to describe a quasi-classical world in which entropy increases?
It should be made clear at the outset that the goal here is only to clarify the nature of the appropriate quantum representation. There is no claim that this quantum description provides any a priori explanation for irreversibility – the common assumption that the world, under suitable coarse-graining, was in a particularly low entropy state once, and is still relatively low in entropy today, as the cause for the validity of the second law in the present era, will again be needed (in the form of an assumption on the near saturation of records, see section 3). But even with such a modest goal something can be learned. Because the aim of a self-contained quantum description of the world is so ambitious, whereas the language of consistent histories is so very economical – thus far the branching of histories is essentially the only type of events seriously considered – it turns out that even the mere task of describing entropy increase requires some modification of that perspective. The modification consists of the realization that the process of merging two or more histories together is as relevant to describing the observable world as the opposite process of splitting a history into separate branches. As a consequence, the number of possible quasi-classical branches does not increase indefinitely. In other words, there is no population explosion of the kind in Everett’s many-world picture.
2. Average entropy is non-increasing under branching
Although some workers prefer to speak of only histories and not the instantaneous density matrix of the world, most researchers, once taking the plunge of considering a self-contained quantum description of the world as a whole, do not seem to object to this notion. Since the purpose here is to provide a quantum representation of the change in time of the macroscopic world of verifiable facts, it is appropriate to have an instantaneous density matrix to correspond to the situation at a specific instant. The consistent-histories formalism suggests a natural density matrix to consider. The matrix $`\rho (C,t_n)`$ defined by
$$\rho (C,t_n)=C^+(\alpha _1\mathrm{\dots }\alpha _n)\rho (0)C(\alpha _1\mathrm{\dots }\alpha _n)/Tr[C^+(\alpha _1\mathrm{\dots }\alpha _n)\rho (0)C(\alpha _1\mathrm{\dots }\alpha _n)]$$
$`(3a)`$
is a suitable candidate, because it has the property that the conditional probability for the outcome of the next event to be that particular alternative represented by $`E_{n+1}^{\alpha _{n+1}}(t_{n+1})`$, given that the past history is already specified by $`C(\alpha _1\mathrm{\dots }\alpha _n)`$, is appropriately given by $`Tr[\rho (C,t_n)E_{n+1}^{\alpha _{n+1}}(t_{n+1})]`$. The density matrix is alternatively defined by the recursive relation
$$\rho (C(\alpha _1\mathrm{\dots }\alpha _n),t_n)=E_n^{\alpha _n}(t_n)\rho (C(\alpha _1\mathrm{\dots }\alpha _{n-1}),t_{n-1})E_n^{\alpha _n}(t_n)/N$$
where
$$N\equiv Tr[E_n^{\alpha _n}(t_n)\rho (C(\alpha _1\mathrm{\dots }\alpha _{n-1}),t_{n-1})]$$
$`(3b).`$
The way this density matrix enters the probability for the next event recommends it as a candidate for a quantum representation for the instantaneous state of the world, provided $`\rho (0)`$ is a suitable initial state of the world.
Only three things enter into $`\rho (C,t_n)`$: the initial density matrix, the Hamiltonian, and the chain of projections. The Hamiltonian is presumably fixed once and for all, and so is the initial density matrix. What changes as time progresses is, in the usual consistent-histories formalism, an indefinite elongation of the chain of projections. For a verifiable history, the projections are all supported by classical records, and hence the increase in the information content of the records in the classical world which $`\rho (C,t_n)`$ is supposed to describe, due to the occurrence of a new event, is no less than the increase in the information content of $`\rho (C,t_n)`$ itself. The increase of the information content of classical records can be greater, however, because for the quantum state some of the initial information may have been destroyed by the new projection (noncommutativity). One notes that the information content of classical records is really no different from that of the classical world. Mallarmé famously said: “Everything there is in the world exists to be put in a book,” but the “book” should be generalized to include all types of classical records.
As to the quantum state, since the choice of an initial state must ultimately be justified by empirical support, from the position of trying to obtain a quantum description of the classical world that stays as close to empirical facts as possible, there is no good reason to treat the initial state and later events by different principles. If the later events are factual only by virtue of the records available today, then the initial state is to be determined by the same criterion: maximum ignorance consistent with present data. Indeed, one could let $`\rho (0)`$ be proportional to identity, and put in the earliest known information through the first projection at time $`t_1`$, and in this way the initial data and subsequent events are treated on the same footing. Formally, the method of choosing a least-biased state by a variation principle subject to constraints is well known, although in practice this will be incredibly complicated for the real world. If one wants, however, to have a quantum description that is as close to the classical world as possible, leaving out no relevant classical data but also putting in as little extraneous input as possible, then it is the maximally detailed verifiable histories with the least-biased initial state that are relevant. In other words, in principle one would like to have every piece of factual data in the classical world represented in the quantum description: if a classical datum arises out of deterministic evolution, the information resides in the initial state and earlier events; and if it arises out of an amplified quantum fluctuation, then the datum emerges at the point of a new verifiable event. The objective of the consistent-histories program to have a self-contained quantum description of the quasi-classical world is not realized unless there is a family of such “least-biased and maximally detailed verifiable histories.” It will be understood from here on that this is the kind of history which we are concerned with.
From the consideration of information gain, one might think that for such a least-biased and maximally detailed verifiable history the change in the von Neumann entropy $`s[\rho (C,t_n)]\equiv -Tr[\rho (C,t_n)log\rho (C,t_n)]`$ due to the occurrence of a new verifiable event has at least the same sign as the change in the statistical entropy of the classical world that this history describes. Making this association, however, gives rise to an immediate difficulty. We state this difficulty as follows. Proposition: Suppose that (i) there is a faithful quantum description of quasi-classical worlds by means of a family of least-biased and maximally detailed verifiable histories, which undergo only branchings, and (ii) the change in the von Neumann entropy of the density matrix of Eq.(3) due to a verifiable event is in the same direction as the change in the classical statistical entropy of the world being described. Then on the average the second law is violated in these worlds.
This proposition follows directly from a theorem of Groenewold and Lindblad. Groenewold first conjectured<sup>8</sup> that under a branching the average entropy of a branch is equal to or less than that of the parent-state, but a geometrical approach to proving this conjecture turned out to be difficult. Lindblad<sup>9</sup> then gave an elegant proof drawing on some previous results.<sup>10</sup> Theorem (Groenewold-Lindblad):
$$\text{If}\quad \rho _\alpha ^{\prime }\equiv E^\alpha \rho E^\alpha /p_\alpha ,\quad \text{and}\quad p_\alpha \equiv Tr(E^\alpha \rho ),$$
$$\overline{s[\rho ^{\prime }]}\equiv \sum _\alpha p_\alpha s[\rho _\alpha ^{\prime }]\le s[\rho ]$$
$`(4).`$
It is understood here that the set $`\{E^\alpha \}`$ consists of orthogonal projectors which resolve the identity. Since any chain of projections entering into the specification of a history is built up from projections satisfying the conditions of this theorem, it follows that, when averaged over the branches of each splitting, the entropy for the density matrix $`\rho (C,t_n)`$ is non-increasing with $`n`$, regardless of what kind of coarse-graining has been built into the projections. This theorem does not rule out the possibility that along a particular branch the entropy is non-decreasing, but that would be an exceptional situation requiring an explanation, for it goes against the average trend of Eq.(4), which is for the entropy associated with $`\rho (C,t_n)`$ not to increase with increasing $`n`$.
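A quick numerical illustration of the theorem, for a randomly generated four-dimensional density matrix and a coarse resolution of the identity into two rank-2 projectors (the example is invented for illustration, not part of the original argument); the last line also evaluates the entropy of the convex sum $`\sum _\alpha p_\alpha \rho _\alpha ^{\prime }`$, which is never smaller than $`s[\rho ]`$ and will reappear in section 3.

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy s[rho] = -Tr(rho log rho), from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

rng = np.random.default_rng(0)
d = 4

# a random mixed state rho
a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = a @ a.conj().T
rho /= np.trace(rho).real

# a coarse resolution of the identity into two rank-2 orthogonal projectors
q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
projectors = [q[:, :2] @ q[:, :2].conj().T, q[:, 2:] @ q[:, 2:].conj().T]

p = [np.trace(E @ rho).real for E in projectors]                 # branch weights p_alpha
branches = [E @ rho @ E / pk for E, pk in zip(projectors, p)]    # branch states rho_alpha

avg = sum(pk * entropy(rk) for pk, rk in zip(p, branches))       # average branch entropy
merged = entropy(sum(pk * rk for pk, rk in zip(p, branches)))    # entropy of the convex sum

print('s[rho]             =', round(entropy(rho), 4))
print('sum_a p_a s[rho_a] =', round(avg, 4), ' <= s[rho]  (Groenewold-Lindblad)')
print('s[sum_a p_a rho_a] =', round(merged, 4), ' >= s[rho]  (relevant for merging, Sec. 3)')
```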
3. The merging of histories
We have used expressions like “the observable world” and “accessible records” without being specific on “observable” and “accessible” to whom. This is because a detailed examination would have involved a lengthy digression, and I decided to relegate all such matters to an appendix. At this point let it be tentatively supposed that these expressions have sensible meanings. Then the entropy-decreasing tendency of branching is at least intuitively comprehensible from the information-gain argument made earlier. This argument suggests that entropy-decrease becomes unavoidable for a history in which all the events are verifiable. The quoted theorem says that entropy tends to decrease even when earlier information can be partially destroyed, but for verifiable events the records keep the earlier information intact, thus making it even more likely that entropy would decrease with the prolongation of a history, i.e., with the accumulation of more and more records.
The most natural way out of the quandary created by a tendency for the von Neumann entropy to decrease under branching is to take into account the fact that in a world of increasing entropy records do decay. If a branching in the past is still relevant later because of the persistence of its associated accessible records, then the decay of records, by their becoming inaccessible as evidence, would partially undo the effect of that branching. Such partial undoing is what I call “merging.” It is desirable to incorporate merging without spoiling the simplicity of the consistent-histories approach. The following proposal for bringing the deterioration of records into the quantum description of a closed system is guided by two considerations: (a) the verifiability of a given event is a time-dependent property, since records change with time; and (b) the decay of the records concerning a past event is not the same as a quantum erasure. The implication of the first premise is that, unlike a branching which involves just one time, a merging commonly involves two times: an earlier moment when the event and its registration occurred, and the later moment when the records concerning the outcome of that event are obliterated. The implication of the second premise is that the decay of records does not completely undo a projection. If one were to represent the decay of the records concerning an event at time $`t_i`$ by removing the projection $`E_i^{\alpha _i}(t_i)`$ altogether from the chain $`C`$ which helps to define $`\rho (C,t_n)`$, it would be as if no event happened at $`t_i`$ at all; whereas the decay of records presupposes that an event did occur, and it was even verifiable at one stage, and the different outcomes were decoherent. When the records decay at time $`t_n`$ and it is no longer possible to verify the outcome of what actually happened at $`t_i`$, the relative likelihood of the various alternatives being the best that can be deduced from the surviving evidence, then these alternatives are to be incoherently summed. In contrast, removal of the projection at $`t_i`$ would correspond to a quantum erasure, with the alternative components added back together with exactly the correct phases, which is imaginable in a laboratory setting but is not realistic for the overwhelming majority of actual events. The expression “merging of histories” for describing the incoherent summation seems appropriate because of the following consideration. The most common pictorial representation for the structure of a family of histories is that of a branching tree. Although the summation over those branches which the records can no longer distinguish is akin to bundling several branches together rather than fusing them, the subsequent offshoots undergo a real reduction in number. For example, a branching into $`n`$ branches followed by a second splitting each into $`m`$ branches would result in a total of $`nm`$ alternatives; but if the records for the first branching are later destroyed, the final outcome leaves only $`m`$ alternatives, and in that sense the $`nm`$ branches have merged into the $`m`$ branches. The situation described corresponds to a family structure where the projections are not branch-dependent. If the later projections depend on which branch the event is taking place in, the situation becomes more complicated.
But the existence of branch-dependence by itself provides some sort of a record, and therefore in considering the complete destruction of records, we limit ourselves to situations where branch-dependence is absent.
The mathematical formulation of merging is relatively straightforward. Thus the erosion at time $`t_n`$ of the records concerning an event at $`t_i`$, where $`t_n>t_i`$, is to be represented by the transformation $`T`$:
$$T:\rho (C(\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _{n-1}),t_{n-1})\to \rho (\overline{C}(\alpha _1\mathrm{\dots }\overline{\alpha }_i\mathrm{\dots }\alpha _{n-1}),t_n)$$
$$\equiv \sum _{\alpha _i}b_{\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _{n-1}}e^{-iH(t_n-t_{n-1})}\rho (C(\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _{n-1}),t_{n-1})e^{iH(t_n-t_{n-1})}$$
where
$$p_{\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _{n-1}}\equiv Tr[C^+(\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _{n-1})\rho (0)C(\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _{n-1})]$$
$$b_{\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _{n-1}}\equiv p_{\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _{n-1}}/\sum _{\alpha _i^{\prime }}p_{\alpha _1\mathrm{\dots }\alpha _i^{\prime }\mathrm{\dots }\alpha _{n-1}}$$
$`(5).`$
This corresponds to a situation where the last step in the destruction of records is through the deterministic processes which result from the evolution generated by $`exp[-iH(t_n-t_{n-1})]`$. If, on the contrary, the destruction of records is itself accompanied by the actualization of a new quantum event, then one has instead:
$$T^{\prime }:\rho (C(\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _{n-1}),t_{n-1})\to \rho (\overline{C}(\alpha _1\mathrm{\dots }\overline{\alpha }_i\mathrm{\dots }\alpha _n),t_n)$$
$$\equiv \sum _{\alpha _i}b_{\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _n}\rho (C(\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _n),t_n)$$
where
$$p_{\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _n}\equiv Tr[C^+(\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _n)\rho (0)C(\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _n)]$$
$$b_{\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _n}\equiv p_{\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _n}/\sum _{\alpha _i^{\prime }}p_{\alpha _1\mathrm{\dots }\alpha _i^{\prime }\mathrm{\dots }\alpha _n}$$
$`(6).`$
The proposal is that the instantaneous quantum state suitable for describing what is happening in the macro-world corresponds to a density matrix of the form $`\rho (\overline{C}(\alpha _1\mathrm{}\overline{\alpha }_i\mathrm{}\alpha _n),t_n)`$ rather than of the form $`\rho (C(\alpha _1\mathrm{}\alpha _i\mathrm{}\alpha _n),t_n)`$. In other words, the coarse-graining necessary for describing the observable world cannot be all effected through the use of suitably coarse projections alone: certain coarse-graining requires convex summations, as in Eq.(5) and Eq.(6). The resulting change in the formalism is very minor, and in particular the transformation corresponding to a branching is given by:
$$\rho (\overline{C}(\alpha _1\mathrm{\dots }\overline{\alpha }_i\mathrm{\dots }\alpha _{n-1}),t_{n-1})\to E_n^{\alpha _n}(t_n)\rho (\overline{C}(\alpha _1\mathrm{\dots }\overline{\alpha }_i\mathrm{\dots }\alpha _{n-1}),t_{n-1})E_n^{\alpha _n}(t_n)/N$$
where
$$N\equiv Tr[E_n^{\alpha _n}(t_n)\rho (\overline{C}(\alpha _1\mathrm{\dots }\overline{\alpha }_i\mathrm{\dots }\alpha _{n-1}),t_{n-1})]$$
$`(7),`$
identical in structure to Eq.(3b). It should be noted, however, that the original chains $`C(\alpha _1\mathrm{\dots }\alpha _n)`$ are still relevant because the additional consistency conditions to be fulfilled with the introduction of any new projection $`E_n^{\alpha _n}(t_n)`$ are in terms of the individual $`C(\alpha _1\mathrm{\dots }\alpha _i\mathrm{\dots }\alpha _n)`$ and not in terms of a sum over $`\alpha _i`$.
The processes represented by Eq.(5) are clearly entropy non-decreasing. For the process of Eq.(6) where $`t_n>t_i`$, or processes where Eqs.(5),(6) and (7) all contribute, there is competition between the entropy non-decreasing convex summations and the average entropy non-increasing additional branchings. Without further input it is not possible to say which tendency wins. If, however, the world’s capability for carrying records is already near saturation, in the sense that the creation of new records requires in most cases the destruction of some existing records, then records are almost continuously being created and destroyed. Over the course of many such creations and destructions there is a sense in which the entropy-increasing trend wins. This is because of the fact that the average decrease in entropy due to a branching event is necessarily less than or equal in magnitude to the average increase in entropy when subsequently the records corroborating this event are destroyed.
This can be proved as follows. With the notation of Eq.(4), the absolute magnitude of the average decrease in entropy as a result of a branching is $`s[\rho ]-\sum _\alpha p_\alpha s[\rho _\alpha ^{\prime }],`$ and the average entropy increase accompanying the destruction of those records which verify the outcome of the branching is $`s[\sum _\alpha p_\alpha \rho _\alpha ^{\prime }]-\sum _\alpha p_\alpha s[\rho _\alpha ^{\prime }]`$. The fact that the entropy is non-decreasing as a result of these two steps follows trivially since $`s[\sum _\alpha p_\alpha \rho _\alpha ^{\prime }]\ge s[\rho ]`$.
Thus, analogous to the usual assumption that the world is currently in a low-entropy state, the operative assumption here that can lead to a tendency for entropy increase is that the present world is already near saturation for record-keeping, so that records are continuously being generated and destroyed. The degree of plausibility of this assumption is also discussed in the Appendix.
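The long-run tendency can be illustrated with a toy simulation (an invented model, not from the text): an eight-dimensional system is repeatedly branched in a randomly chosen coarse basis, and the records of each branching are subsequently destroyed, i.e., the state is replaced by the incoherent sum over the branches as in Eq.(5) with trivial evolution in between. Under these repeated creations and destructions of records the entropy climbs monotonically toward its maximum $`\mathrm{log}d`$.

```python
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

rng = np.random.default_rng(1)
d = 8

# start from a fairly low-entropy state (one dominant eigenvalue)
rho = np.diag(np.array([0.93] + [0.01] * (d - 1)))

history = [entropy(rho)]
for step in range(40):
    # branch in a random coarse basis (four rank-2 orthogonal projectors)...
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    projectors = [q[:, 2 * k:2 * k + 2] @ q[:, 2 * k:2 * k + 2].conj().T
                  for k in range(d // 2)]
    # ...and later destroy the records of the outcome: incoherent sum over branches
    rho = sum(E @ rho @ E for E in projectors)
    history.append(entropy(rho))

print('entropy after 0, 10, 40 cycles:',
      round(history[0], 3), round(history[10], 3), round(history[-1], 3),
      '  (log d =', round(np.log(d), 3), ')')
```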
Bennett and Landauer<sup>11,12</sup> have already pointed out that entropy increase is associated with the erasure of records. Their analysis was in a classical setting, and therefore they did not address the entropy-decreasing tendency associated with the branching of quantum histories, nor need they be concerned with the difference between classical erasure and quantum erasure. Their starting point involving Maxwell’s demons and computers seemingly has very little in common with our starting point of formulating consistent histories in order to describe an entropy-non-decreasing world. Nevertheless their analysis is relevant to our consideration in two respects. One is Bennett’s demonstration that it makes sense to speak of the entropy of a single system instead of that of an ensemble – the quantum history description of the quasi-classical world is most simply viewed as dealing with a single system. The second relevant point is their analysis showing that measurement and copy-making need not be accompanied by an entropy increase. Such reversible record-keeping is highly idealized, and most practical record-making is irreversible. Nonetheless as a matter of principle it is important to recognize that record-making is not unavoidably entropy-increasing, whereas record-erasing is. Our application of the average entropy-decrease of Eq.(4) to an amplified quantum fluctuation, i.e., to a measurement-like event, should be viewed as referring to a measurement with ideal record-making. Once the nature of entropy increase is clarified in such ideal events, then the additional entropy-increase associated with the difference between ideal record-making and realistic record-making can be accounted for as an accumulation of elementary events, each accompanied by the destruction of some previously existing records.
There is an alternative possibility for accommodating entropy increase as long as one is willing to add convex sums to the consistent-histories formalism. Noting the existence of many physical processes having outcomes that are decoherent but not verifiable, one may be tempted to adopt the rule that every time such an event occurs the state representing the quasiclassical world becomes an incoherent sum over these alternative outcomes. This modification will also bring about an entropy increase. The defect of this approach is the arbitrariness in the choice of “relevant variables” which decohere as a result of summing over the “irrelevant” variables, that is, an arbitrariness in the division between the environment and the subsystem of interest. If the alternatives involving the subsystem are not accessible to verification, why are they “relevant” and “of interest”? Remember that one is attempting a closed-system description here, and there is no pre-determined environment. Our proposal hews close to what are verifiable, and requires incoherent sums only when there is a destruction of records, i.e., when formerly verifiable alternatives later become unverifiable.
Lastly, although I have argued that the dilemma posed by the Proposition is to be resolved by adding merging to branching, others may wish to avoid the conundrum altogether by arguing that the change of von Neumann entropy is unrelated to that of the classical entropy.<sup>13</sup> If one were to look at the entropy changes during a measurement-like event, whether it occurred in nature or in the laboratory, by the usual way of reckoning, it would indeed appear that the latter position is obviously valid. For example, a quantum event having two possible outcomes in a particular measurement situation can bring about a decrease in the von Neumann entropy of the density matrix for the quantum system by at most $`klog2`$, but the recording of the outcome is usually associated with irreversible processes, causing a rise in the classical entropy of the world by an amount far exceeding $`klog2`$. Thus the signs are opposite and the magnitudes do not match. One must remember, however, from the analysis by Bennett and Landauer that irreversibility is ultimately attributable to erasure, that is to say, to processes described by Eq.(5) or Eq.(6). Hence the overall change in the statistical entropy of the world can be accounted for by the change in the von Neumann entropy of $`\rho (\overline{C})`$. In other words, the merging of histories has to be taken into account. The two changes may not be exactly equal because of the possibility of the destruction of some quantum information by a new projection, discussed earlier, but the inequality is such that an increase in classical statistical entropy requires a corresponding increase in the von Neumann entropy of $`\rho (\overline{C})`$ if the quantum representation is as close to a detailed description of the quasi-classical world as possible. On the other hand, if one insists on having only branching and no merging, then the change in the von Neumann entropy of $`\rho (C)`$ is indeed unrelated to the change in the classical entropy of the world. But to adopt this as the solution is to unnecessarily impoverish the consistent-histories formalism, rendering only a very partial quantum description of some aspects of the quasi-classical world. Such a skeletal description is likely to be insufficient as replacement for the classical underpinning of the Copenhagen interpretation.
In summary, the tendency for the von Neumann entropy to decrease under branching, on the average, is a fact to be reckoned with. This is regardless of how a coarse-grained quantum state is defined: as long as it undergoes only branchings, the average entropy tends to decrease with time. If the second law in the macro-world is taken as an input, and if a coarse-grained quantum state is to faithfully describe that macro-world so that changes of the von Neumann entropy of the quantum state track those of the classical entropy of the world, then its time evolution has to incorporate the merging of previously verifiably distinct histories. If the consistent-histories program is to improve on the Copenhagen interpretation, it has to provide a faithful description of the quasi-classical world strictly within a self-contained quantum theory. But that task is not finished even when a specific family of histories is singled out by some criterion as being suitable for describing quasi-classical physics, for there must be in addition a rule of association between the histories in that family and a quasi-classical world with all its macroscopic details. And our conclusion is that even if at one time a single history in this fixed family describes that world, it will be the incoherent sum of several histories that has to be associated with the same world at a later time. Another way of saying this is that it is inadequate to use projections alone to represent all coarse-graining: there are events which require incoherent sums, besides coarse projectors, to represent what is happening to the quantum state. The suggested modifications are easy to incorporate and do not require any major overhaul of the consistent-histories formalism; but conceptually the picture of an ever multiplying number of potential quasi-classical worlds is changed to one where the population of quasi-classical worlds need not inexorably grow.
(This paper was presented at the E.H. Wichmann Symposium at the University of California at Berkeley in June 1999.)
Appendix
This article contains expressions like “observable world,”“information,” “verifiability,” and “accessible records,” which inevitably invite questions: Whose information? Observable and verifiable by whom and accessible to whom? How stable does a physical correlation have to be in order to count as a record? Discussion of these issues leads to what many physicists would dismiss as “philosophy,” and yet avoidance of these issues does not save time. As continuing controversies surrounding the subject of quantum measurement<sup>14</sup> show, some knotty issues refuse to go away. Even when the final answers are not available, it is better to state one’s tentative understanding as clearly and explicitly as possible.
One way to specify the meaning of such expressions is by linking them to operations. The basic issues as far as this paper is concerned are first of all what kind of an event can be said to have verifiably occurred, and subsequently when the event can be said to cease to be verifiable. In an operational sense, an event is verifiable if the community of scientists, on the basis of records, is capable of reaching a consensus on the outcome, provided efforts are devoted to the task of checking this event, limited only by the availability of natural resources. Similarly, the information content of the observable world is interpreted to be the total database needed for a complete description of the macroscopic world, including every bit of datum that is verifiable. This way of describing the relation between records and the verifiability of an event does not require the records to be unchanging: they can be evolving in time, because all that is required is for the scientists to be able to use them to unequivocally interpret an event in the past. By the same token, the decay of records can take many forms: some due to the corruption of macroscopic information through classical processes, and some through random events in which quantum fluctuations play a role. That is why in the sentence following Eq.(5) a distinction is made between the decay of records through deterministic processes and through quantum fluctuations. That records can decay even when the process results from $`exp[-iH(t_n-t_{n-1})]`$ is not a contradiction, because realistically the scientific community cannot completely evaluate this operator within the limits of finite resources, to the precision needed to overcome widespread deterministic chaos.
Note that it is potential verifiability that matters rather than actual verification: because rigorous verification is costly in labor and resources, a coarse-grain description in which every datum is actually verified, rather than just being verifiable, would turn out to be very crude indeed. In contrast, a description in terms of verifiable events can be much finer. The distinction between verified and verifiable events also helps to answer the following question: With the notion of information being so closely tied to what the scientific community can verify, how is the increase in scientific knowledge over time to be reconciled with entropy increase? The answer is that whereas the amount of scientifically verified data is increasing, the (much greater) amount of scientifically verifiable data is non-increasing when entropy is non-decreasing. But the idea of calling an event “verifiable” provided only that it can be actually verified when attention is focused on the task of its validation may seem to lead to a contradiction: the attention of the scientific community can after all be turned to checking the position of a particle with great accuracy or the momentum of a particle with great accuracy; hence would not the position and momentum be simultaneously verifiable? It must be remembered, however, that the branching structure of a consistent family is regarded as being first given, and the verifiability of the events entering a history is being considered in that context. The consistency conditions exclude histories with events which refer to the position and the momentum of a particle at the same time. As long as the focus of validation is directed to one or another of the events already in consistent histories, no contradiction would arise.
By using potential verifiability rather than actual verification as a selection criterion, one can envision a relatively refined coarse-grain description of the observable world. In cosmological terms such a stage was reached probably only after the recombination era: the entropy non-increasing tendency of branching discussed in this article becomes relevant only after it is possible for branches to be sharply distinguished through the existence of records. Initially this tends to bring most branches towards low entropy states. It is only after a vast number of verifiable branchings already left a wealth of records, allowing a fairly detailed description of the macro-world, that new records are mostly made at the expense of erasing old ones. Even then, if an event has a huge number of redundant records, the erasing of a few of these will not destroy the credibility of the event; and therefore the hypothesis of “near saturation of record-keeping capacity” presupposes that overall there are far more nonredundant records than redundant ones. In a refined description this is not implausible. The most obvious kinds of redundant records occur right here on earth, but biologically related individual organisms carry a great deal more data than their shared genetic information, and redundant records made by people also do not approach anything close to the capacity of refined classical information that the earth can carry.
One may ask why “scientists” are not replaced by the IGUSes (information gathering and utilizing systems) of Gell-Mann and Hartle. The answer is that the greater generality of IGUSes is not particularly helpful in this case: an ant is an IGUS, and presumably it has some notion of whether some kinds of events happened or not, but one would be most reluctant to add to the list of verifiable events something that only ants are aware of with no possibility of independent checks by human scientists, now or in the future, even when attention is directed to that event. In other words, even with the introduction of IGUSes, reference to the “community of scientists” would still be necessary.
With the above explanation of what is meant by “verifiable through records,” it is finally possible to specify, in principle, when a formerly verifiable event ceases to be verifiable: a verifiable event at $`t_i`$ may be said to have become unverifiable at time $`t_n`$ if the records in none of the future extensions from $`t_n`$ onward would permit the scientists in those branches to corroborate the outcome of this event.
Although a scientific-community-based approach results in a verifiability criterion that is close to the common-sensical notion of “objective reality,” there are nonetheless some counter-intuitive consequences. By the expression “scientific community” one usually includes not only today’s scientists but also scientists of the future, because technology improves with time and defining verifiability as what current technology can ascertain is too restrictive. Furthermore, there were no scientists in the early stages of cosmological evolution, and yet some of those early occurrences are regarded as verifiable today, because scientists living considerably later than these events can still check them out from the records. Once it is accepted that future scientists must be included in the notion of a scientific community, one has to face the awkward fact that there are different future branches. Suppose, for example, that a continuation of our history into two different future branches, branch A and branch B, results in different consequences for life. Whereas in branch A life continues to thrive and instruments continue to improve, in branch B life is extinguished forever after a cataclysmic event. Then according to the scientific-community-based criterion, there may be current events which count as verifiable if our future is in branch A but not verifiable if our future is in branch B. Although from a pragmatic standpoint this difference is immaterial because present-day scientists cannot check these subtle events in either case, this example shows the value of an alternative, strictly objective standard that is independent of the existence of people. For example, one may try to abstract the essence of “verifiability by a scientific community” into a mathematical criterion for factuality that can be applied even when life does not exist. This objective has not yet been accomplished, and it is not obvious that it can be reached at all; although a formulation in terms of suitable complexity measures, in effect having finite-resource computers filling in for scientists, appears promising, that approach suffers from the defect that some of the notions, such as minimal program length, though well defined cannot be operationally checked exactly. There can be heuristic checks, but heuristic standards are likely to be again evolving with time. As to conceptual rigor, separate from the question of mathematical rigor, one pitfall that is easy to fall into is to unconsciously slip in some criteria which are reasonable only because of our experience up to now, and then to regard the result as being more general than it really is. For example, the environment-induced superselection approach implicitly regards cuts between environment and system in ways that are based on our usual notion of what variables are essentially disentangled from the rest of the world, and the de Broglie-Bohm theory contains arbitrariness in giving spatial positions a privileged role. These arbitrary inputs all seem very reasonable on the basis of our experience up to now, but then we cannot yet claim that these theories are totally free from biases associated with people and their state of advancement.
In a way it is preferable to explicitly acknowledge this possible lack of finality. The dependence on the existence of a scientific community in the proposed verifiability criterion is similar to the need to refer to normal people for Locke’s secondary qualities. Locke’s definition is useful only if our standard for normal people is not going to change significantly in the future. Similarly if the standard of objectivity achieved by contemporary science is not going to undergo substantial improvements in whichever future branch our world evolves into, then the proposed criterion will be useful. Otherwise the utility of this criterion will be limited because of its still evolving nature.
Notes and References
1. R.B.Griffiths, J.Stat.Phys., 36, 219 (1984).
2. R.Omnes, The Interpretation of Quantum Mechanics, Princeton Univ. Press, Princeton (1994); Understanding Quantum Mechanics, Princeton Univ. Press, Princeton (1999).
3. M.Gell-Mann and J.B.Hartle in Complexity, Entropy, and the Physics of Information, W. Zurek ed., Addison-Wesley, Reading (1990).
4. For example, Hartle and Gell-Mann have presented various sets of criteria for classicity, but in a telephone conversation with Jim Hartle in 1998 he informed me that none of these fully satisfied them. For other views on some of the difficulties, see, e.g., the reference in note 5.
5. F.Dowker and A.Kent, J. Stat. Phys. 82, 1575 (1996).
6. The distinction between robust histories and verifiable histories has been emphasized by the author in Found. of Phys. Lett., 6, 275 (1993).
7. W.H.Zurek, Phil Trans. R. Soc. Lond. A 356, 1793 (1998).
8. H.J.Groenewold, Int. J. Theor. Phys., 4, 327 (1971).
9. G.Lindblad, Comm. Math. Phys., 28, 245 (1972).
10. Readers who are interested in a quick grasp of the entropy non-increasing nature of branching may consider the quadratic entropy $`s^{}[\rho ]=Tr\rho ^2`$ instead of the von Neumann entropy, for in that case the analog of the Lanford-Robinson result used by Lindblad, viz., $`s^{}[A+B]s^{}[A]+s^{}[B]`$ for positive trace-class operators $`A`$ and $`B`$, is trivially true.
11. R.Landauer, IBM J. Res. Dev., 5, 183 (1961).
12. C.H.Bennett, Sci. Am., 257, 11-108 (1987).
13. I thank Lawrence Landau for raising such an objection during my oral presentation at the Symposium; the following elaboration was added subsequently.
14. For example, see the exchange between J.S. Bell and N.G. van Kampen, both long immersed in the subject, in Physics World, August 1990, p.33, and October 1990, p.20.
no-problem/9901/hep-th9901082.html
## 1 Introduction
Chiral bosons in two-dimensional space-time and $`(2+1)`$-dimensional Chern-Simons (CS) gauge theories are related problems which have been attracting much attention. These problems are important for the string program and for the development of the quantum Hall effect . Floreanini and Jackiw suggested an action suitable for the quantization of a two-dimensional chiral boson. Siegel proposed an apparently unrelated action for the same system. In , the connection between these two approaches was investigated. The quantization of the Siegel action was investigated by the Faddeev-Jackiw formalism in .
The equivalent Lagrangian method is used to obtain the set of Hamilton-Jacobi partial differential equations .In other words the equations of motion are written as total differential equations in many variables.
The main aim of this paper is to investigate the quantization of the Floreanini-Jackiw chiral oscillator using $`G\ddot{u}ler^{}s`$ formalism.
The plan of our paper is the following:
In Section 2 we present the quantization of field theories with constraints using the Hamilton-Jacobi method. In Section 3 the path integral quantization of the Floreanini-Jackiw chiral oscillator is given, and Siegel's action is analyzed using Güler's formalism. In Section 4 we present our conclusions.
## 2 Hamilton-Jacobi quantization of the field theories with constraints
Starting from the Hamilton-Jacobi partial differential equation, singular systems were investigated using a formalism introduced by Güler (see, for example, Refs. , ).
The canonical formulation gives the set of Hamilton-Jacobi partial-differential equation as
$$H_\alpha ^{^{}}(\chi _\beta ,\varphi _a,\frac{\partial S}{\partial \varphi _a},\frac{\partial S}{\partial \chi _\alpha })=0,\alpha ,\beta =0,n-r+1,\mathrm{},n,a=1,\mathrm{},n-r,$$
(1)
where
$$H_\alpha ^{^{}}=H_\alpha (\chi _\beta ,\varphi _a,\pi _a)+\pi _\alpha $$
(2)
and $`H_0`$ is the canonical Hamiltonian. The equations of motion are obtained as total differential equations in many variables as follows:
$$d\varphi _a=\frac{\partial H_\alpha ^{^{}}}{\partial \pi _a}d\chi _\alpha ,d\pi _a=-\frac{\partial H_\alpha ^{^{}}}{\partial q_a}d\chi _\alpha ,d\pi _\mu =-\frac{\partial H_\alpha ^{^{}}}{\partial \chi _\mu }d\chi _\alpha ,\mu =1,\mathrm{},r$$
(3)
$$dz=(-H_\alpha +\pi _a\frac{\partial H_\alpha ^{^{}}}{\partial \pi _a})d\chi _\alpha $$
(4)
where $`z=S(\chi _\alpha ,\varphi _a)`$. The set of equations (3), (4) is integrable if
$$dH_0^{^{}}=0,dH_\mu ^{^{}}=0,\mu =1,\mathrm{}r$$
(5)
If conditions (5) are not satisfied identically, one considers them as new constraints and again tests the consistency conditions. Thus, repeating this procedure, one may obtain a set of conditions.
Let us suppose that for a system with constraints we have found all the independent Hamiltonians $`H_\mu ^{^{}}`$ using the calculus of variations , . At this stage we will use Dirac's procedure of quantization . We have
$$H_\alpha ^{^{}}\mathrm{\Psi }=0,\mu =1,\mathrm{},r$$
(6)
where $`\mathrm{\Psi }`$ is the wave function. The consistency conditions are
$$[H_\alpha ^{^{}},H_\beta ^{^{}}]\mathrm{\Psi }=0,\alpha ,\beta =1,\mathrm{},r$$
(7)
If the Hamiltonians $`H_\alpha ^{^{}}`$ satisfy
$$[H_\alpha ^{^{}},H_\beta ^{^{}}]=C_{\alpha \beta }^\gamma H_\gamma ^{^{}}$$
(8)
they are of first class in Dirac's classification. On the other hand, if
$$[H_\alpha ^{^{}},H_\beta ^{^{}}]=C_{\alpha \beta }$$
(9)
where $`C_{\alpha \beta }`$ do not depend on $`\varphi _i`$ and $`\pi _i`$, then from (7) Dirac brackets arise naturally, and the canonical quantization is performed by taking Dirac brackets into commutators.
Güler's formalism gives an action when all the Hamiltonians $`H_\alpha ^{^{}}`$ are in involution. Since in this formalism we work from the beginning in the extended space, we suppose that the variables $`t_\alpha `$ depend on $`\tau `$. Here $`\tau `$ is canonically conjugate to $`p_0`$.
If, for a given system with constraints, we are able to find the independent Hamiltonians $`H_\alpha ^{^{}}`$ in involution, then we can perform the quantization of this system using the path integral method with the action given by (4):
$$z=\int (-H_\alpha +\pi _\beta \frac{\partial H_\alpha ^{^{}}}{\partial \pi _\beta })\dot{\chi }_\alpha 𝑑\tau $$
(10)
where $`\dot{\chi }_\alpha =\frac{d\chi _\alpha }{d\tau }`$.
## 3 Chiral Oscillator
We consider the Lagrangian
$$L_0=\omega \dot{q_i}^{(0)}ϵ_{ij}q_j^{(0)}+\omega ^2q_i^{(0)}q_i^{(0)},i,j=1,2$$
(11)
From (11) we find the constraints
$$\mathrm{\Omega }_i=p_i^{(0)}-\omega ϵ_{ij}q_j^{(0)},i=1,2$$
(12)
and the canonical Hamiltonian
$$H_c=(p_i^{(0)}-\omega ϵ_{ij}q_j^{(0)})\dot{q}_i^{(0)}-\omega ^2q_k^{(0)}q_k^{(0)}$$
(13)
Then in Güler's formalism we have the following Hamiltonians
$$H_0^{^{}}=p_0+H_c,H_i^{^{}}=p_i^{(0)}-\omega ϵ_{ij}q_j^{(0)},i=1,2$$
(14)
and all the variables $`q_i^{(0)}`$ are gauge variables. The Hamiltonians are not in involution, because
$$[H_i^{^{}},H_j^{^{}}]=-2\omega ϵ_{ij}$$
(15)
In order to obtain Hamiltonians in involution we extend the phase space with the new variables $`p_i^{(1)}`$ and $`q_i^{(1)}`$. The new expressions for the Hamiltonians $`H_i^{^{\prime \prime }}`$ in involution are
$$H_i^{^{\prime \prime }}=H_i^{^{}}-\omega ϵ_{ij}q_j^{(1)}-p_i^{(1)}$$
(16)
but we get a new set of constraints
$$H_i^{1^{\prime }}=p_i^{(1)}-\omega ϵ_{ij}q_j^{(1)}$$
(17)
If we repeat the procedure, after N steps we get N+1 Hamiltonians in involution, and the Hamiltonians $`H_i^{N^{\prime }}=p_i^{(N)}-\omega ϵ_{ij}q_j^{(N)}`$ fulfill
$$[H_i^{N^{\prime }},H_j^{N^{\prime }}]=-2\omega ϵ_{ij}$$
(18)
The final form of the canonical Hamiltonian obtained after an infinite repetition of the conversion process is
$$H_c^{(\mathrm{\infty })}=\mathrm{\Sigma }_{k=0}^{\mathrm{\infty }}(p_i^{(k)}-\omega ϵ_{ij}q_j^{(k)})\dot{q}_i^{(k)}-\mathrm{\Sigma }_{k=0}^{\mathrm{\infty }}\omega ^2q_i^{(k)}q_i^{(k)}-2\mathrm{\Sigma }_{k=1}^{\mathrm{\infty }}\mathrm{\Sigma }_{m=0}^{k-1}\omega ϵ_{ij}q_j^{(m)}(\dot{q}_i^{(m)}+\omega ϵ_{il}q_l^{(m)})$$
(19)
Then in Güler's formalism we have an infinite number of Hamiltonians in involution,
$$H_0^{^{}}=p_0+H_c^{(\mathrm{\infty })},H_k^{^{}}=p_i^{(k)}-\omega ϵ_{ij}q_j^{(k)},k=1,\mathrm{},\mathrm{\infty }$$
(20)
Using (10) we find, after some calculations, that the action has the form
$$z=\int Ld\tau $$
(21)
where L is given by
$$L=\mathrm{\Sigma }_{k=0}^{\mathrm{\infty }}(\omega ϵ_{ij}q_i^{(k)}\dot{q}_j^{(k)}+\omega ^2q_i^{(k)}q_i^{(k)})+2\mathrm{\Sigma }_{k=1}^{\mathrm{\infty }}\mathrm{\Sigma }_{m=0}^{k-1}\omega ϵ_{ij}q_j^{(m)}(\dot{q}_i^{(m)}+\omega ϵ_{il}q_l^{(m)})$$
(22)
This result is in agreement with those from .
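For illustration, the constraint algebra quoted above can be checked symbolically. The short sketch below is not part of the original derivation; it simply evaluates the Poisson brackets of $`H_i^{^{}}`$ and of the extended $`H_i^{^{\prime \prime }}`$ for the sign conventions adopted here, with variable names chosen only for this example.

```python
# Minimal sympy check of eqs. (15)-(16): the constraints H_i' = p_i - w*eps_ij*q_j
# close on [H_i', H_j'] = -2*w*eps_ij, while the extended
# H_i'' = H_i' - w*eps_ij*q1_j - p1_i are in involution.
import sympy as sp

w = sp.symbols('omega')
q = sp.symbols('q1 q2')      # q_i^(0)
p = sp.symbols('p1 p2')      # p_i^(0)
Q = sp.symbols('Q1 Q2')      # q_i^(1)
P = sp.symbols('P1 P2')      # p_i^(1)
eps = [[0, 1], [-1, 0]]

def pb(A, B, coords, momenta):
    """Poisson bracket {A, B} over the given canonical pairs."""
    return sum(sp.diff(A, x)*sp.diff(B, px) - sp.diff(A, px)*sp.diff(B, x)
               for x, px in zip(coords, momenta))

H1 = [p[i] - w*sum(eps[i][j]*q[j] for j in range(2)) for i in range(2)]
H2 = [H1[i] - w*sum(eps[i][j]*Q[j] for j in range(2)) - P[i] for i in range(2)]

coords, momenta = q + Q, p + P
print(sp.simplify(pb(H1[0], H1[1], coords, momenta)))   # -> -2*omega, i.e. eq. (15)
print(sp.simplify(pb(H2[0], H2[1], coords, momenta)))   # -> 0, the H_i'' are in involution
```

The first bracket reproduces (15), while the vanishing of the second confirms that the extended Hamiltonians (16) are in involution.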
### 3.1 Siegel’s action
Siegel was the first to propose an action for the chiral-boson problem . The Lagrangian density is
$$L=\partial _{-}\varphi \partial _{+}\varphi +\lambda (\partial _{-}\varphi )^2$$
(23)
and the canonical Hamiltonian becomes
$$H_c=\frac{1}{2}(1+\lambda )^{-1}(\pi +\lambda \varphi ^{^{}})^2+\frac{1}{2}(1-\lambda )(\varphi ^{^{}})^2$$
(24)
where $`\pi `$ is the canonical momentum conjugate to $`\varphi `$. On the other hand we observe that
$$\pi _\lambda =0$$
(25)
In Güler's formalism we have the following Hamiltonians
$$H_0^{^{}}=p_0+H_c,H_1^{^{}}=\pi _\lambda $$
(26)
Imposing $`dH_0^{^{}}=0`$ and $`dH_1^{^{}}=0`$, we generate another Hamiltonian $`H_2^{^{}}`$ as
$$H_2^{^{}}=\pi _\varphi -\varphi ^{^{}}$$
(27)
From (26) and (27) we conclude that $`\lambda `$ and $`\varphi `$ are gauge variables in this formalism.
Now we are interested in performing the quantization of the system. Using Dirac’s procedure we have
$$H_0^{^{}}\mathrm{\Psi }=0,H_1^{^{}}\mathrm{\Psi }=0,H_2^{^{}}\mathrm{\Psi }=0$$
(28)
Because
$$[H_1^{^{}},H_0^{^{}}]=\frac{1}{2}(1+\lambda )^{-1}(\pi -\varphi ^{^{}})^2$$
(29)
and
$$[H_2^{^{}}(x),H_2^{^{}}(y)]=\partial _x\delta (x-y)$$
(30)
we conclude that the system has second-class constraints in Dirac's classification, and we can quantize it using the path integral method . Because we have obtained the same constraints $`H_0^{^{}}`$, $`H_1^{^{}}`$ and $`H_2^{^{}}`$ as in , we find the same result after performing the path integral quantization.
## 4 Concluding remarks
Using the Hamilton-Jacobi formalism for systems with constraints, we found that the Floreanini-Jackiw chiral oscillator is a theory with an infinite number of Hamiltonians in involution. The path integral quantization, using the action given by Güler's formalism, was performed, and the results are in agreement with those obtained by other authors .
For the Floreanini-Jackiw chiral oscillator we found that all the fields are gauge fields in Güler's formalism. Because all Hamiltonians $`H_\alpha ^{^{}}`$ are constraints in the extended space, in this formalism there are no first- and second-class constraints in Dirac's classification at the classical level.
However, the first- and second-class constraints become important in this formalism in the process of quantization. For Siegel's action we found only three independent Hamiltonians $`H_0^{^{}}`$, $`H_1^{^{}}`$, $`H_2^{^{}}`$. This set of Hamiltonians gives the correct result when the path integral quantization of this system is performed.
In Güler's formalism the variations of the constraints $`H_0^{^{}}`$, $`H_1^{^{}}`$ and $`H_2^{^{}}`$ do not give new constraints.
The problem of whether the constraint $`(H_2^{^{}})^2`$ (for more details see Ref. ) is of first or second class does not arise in Güler's formalism. In this case we found that all variables are gauge variables.
## 5 Acknowledgements
One of the authors (D.B.) would like to thank TUBITAK for financial support and METU for the hospitality during his stay at the Department of Physics.
# The Spectra of Main Sequence Stars in Galactic Globular Clusters I. CH and CN Bands in M13 (Based on observations obtained at the W.M. Keck Observatory, which is operated jointly by the California Institute of Technology and the University of California)
## 1 INTRODUCTION
The galactic globular clusters contain some of the oldest stars in our galaxy. The main sequence in such a cluster consists of the unevolved member stars still in their H-burning phases. There is a long tradition of research on the much brighter globular cluster giants, for example high dispersion spectroscopy by Cohen (1983), Pilachowski (1984) and Gratton & Ortolani (1987) among others, and of photometric studies (see, for example, Frogel, Cohen & Persson 1983). These studies have contributed much to our understanding of stellar evolution and mixing processes, as well as of the structure of our Galaxy, galactic halos, and the integrated light of old stellar systems. A few pioneers have begun to press downward in luminosity with high dispersion spectroscopy reaching the horizontal branch (Clementini et al. 1996, Pilachowski et al. 1996, Cohen & McCarthy 1997), but until recently no one has tried to reach the globular cluster main sequence stars spectroscopically.
As reviewed by Kraft (1994) and by Briley, Hesser & Smith (1994) (see also the more general reviews by McWilliam 1997 and by Pinnsoneault 1997), observational studies over the past two decades have revealed that variations of a factor of more than 10 for the C, N, and O elements between globular cluster giants and sub-giants within a particular globular cluster, including M13, are common. Suntzeff (1981) conducted the first detailed survey of the strength of the CH and CN bands among M13 giants. The frequency of strong CN and weak CN stars among the red giants is approximately equal, and the contrast in band strength between the two groups is large. Weaker star-to-star variations of Na, Al, and Mg are often seen as well among the giants in the best studied clusters. These abundance variations obey specific correlations; for example, enhanced N is accompanied by depleted carbon. Pilachowski et al. (1996) examined 130 giants and sub-giants in M13 the faintest of which had M<sub>V</sub> = +1.2 mag, slightly fainter than the level of the horizontal branch. They found star-to-star variations in Na abundance of a factor of 6 and in Mg abundance of a factor of 4. Sneden et al. (1997) explore the behavior of C, N and O among the bright giants in M15, which is found to be similar to that shown by giants in M13 and M92. They find a star-to-star variation in the abundance of Eu and Ba in M15 which is a factor of 4. Similar variations among the giants are detected within all globular clusters studied in suitable detail.
Spectroscopic study of main sequence stars in galactic globular clusters is much more limited as the stars are faint, and achieving a suitable dispersion and signal-to-noise ratio is difficult. The only globular cluster whose main sequence stars have been studied in some detail is 47 Tuc. Hints of CN variations were found by Hesser (1978), and this work was continued by Hesser & Bell (1980), Bell, Hesser & Cannon (1983), Briley, Hesser & Bell (1991), Briley et al. (1994) and Briley et al. (1996). The most recent work on 47 Tuc (Cannon et al. 1998) demonstrates convincingly that variations in CN and CH are found at the level of the main sequence. Suntzeff (1989) and Suntzeff & Smith (1991) found very preliminary indications of an anti-correlation between CH and CN for a small sample of main sequence stars in NGC 6752. Molaro & Pasquini (1994) got a relatively noisy spectrum of a single main sequence turnoff star in NGC 6397 to look for lithium. Pilachowski & Armandroff (1996) summed spectra of 40 stars at the base of the giant branch in M13 in an unsuccessful search for the (weak) 7700Å OI triplet. King, Stephens & Boesgaard (1998) attempted to derive \[Fe/H\] from Keck spectra of several subgiants near the turnoff of M92, and got rather surprising results, \[Fe/H\] = $`-`$2.52, a factor of two lower than the value in common use.
It was not believed possible to produce Na and Al in globular cluster red giants, given their low masses and relatively unevolved evolutionary state, until Denisenkov & Denisenkova’s (1990) seminal paper. They suggest that instead of neutron captures on Ne<sup>22</sup>, proton captures on Ne<sup>22</sup> could produce Na and Al enhancements. Since there is no source of free neutrons in globular cluster red giants, the former cannot occur in H-burning stars, while the latter can. (See also Langer, Hoffman & Sneden 1993.) Debate still centers on mixing versus primordial variations as the origin of the observed abundance variations. The interaction between internal rotation and mixing may be a critical one (Sweigart & Mengel 1979, Sweigart 1997), as may that between rotation and diffusion in the photosphere. Mass transfer in binary stars may also play a role under some circumstances (McClure 1984, 1997). Recently as a result of this understanding of how Na and Al could be produced in the interiors of red giants and mixed to the stellar surface via convection zones, the pendulum has swung towards favoring mixing as the explanation for most of the observed variations. The current theoretical picture is summarized in Cavallo, Sweigart & Bell (1996), Langer, Hoffman & Zaidens (1997) and in Cannon et al. (1998).
Globular cluster main sequence stars should not yet have synthesized through internal nuclear burning any elements heavier than He (and Li and Be) and hence will be essentially unpolluted by the internal nuclear burning and production of various heavy elements that occur in later stages of stellar evolution. Theory predicts that these stars are unaffected by gravitational settling and that their surfaces should be a fair representation of the gas from which the globular cluster formed. Thus the persistence of variations in C and N to such low luminosities in 47 Tuc (Cannon et al. 1998 and references therein) is surprising.
The advent of the Keck Telescope with the Low Resolution Imaging Spectrograph (henceforth LRIS, Oke et al. 1995), an efficient multi-object spectrograph coupled to a 10–m telescope, makes a high precision study of the spectra of main sequence stars in galactic globular clusters feasible. The major issue I intend to explore in this series of papers is that of star-to-star variations in abundances within a single globular cluster at and below the level of the main sequence turnoff.
## 2 THE SAMPLE OF STARS
M13 was chosen to begin this effort because it is nearby, hence the turnoff stars will be relatively bright, it has very low reddening, and its giants and sub-giants have been the subject of an extensive series of papers by Kraft and his collaborators. (See Pilachowski et al. 1996 for references to their earlier papers.) Its high galactic latitude guarantees minimum contamination of a sample by field stars.
Short exposure images in $`B`$ and $`R`$ were taken with LRIS centered on the field used by Fahlman & Richer (1986) in their photometric study of the main sequence of M13. Photometry was obtained with DAOPHOT (Stetson 1987) using these short exposures calibrated on the system of Landolt (1992). The zero point for each color in each field is uncertain by $`\pm 0.05`$ mag. A sample of main sequence stars was chosen based on their position on the locus of the main sequence as defined by this photometry. Each candidate was inspected for crowding and stars were chosen for the spectroscopic sample on the basis of minimum crowding. A second field was chosen 400” North of this field, somewhat closer to the center of M13, and a similar procedure was carried out. Table 1 gives the object’s coordinates (J2000), $`R`$ mag, $`BR`$ color, and indices (together with their errors) for two molecular bands for the M13 main sequence stars in the spectroscopic sample.
Since the fields are very crowded, in addition to providing the star coordinates, we provide an identification chart for a few stars in each of the two fields, from which, given the accurate relative coordinates, the rest of the stars can be located. Relative stellar coordinates are defined from the LRIS images themselves assuming the telescope pointing recorded in the image header is correct and taking into account the optical distortions in the images. The astrometry of Cudworth & Monet (1979) is used to fix the absolute coordinates. Star M13ms J1641019+362403 in field 1 is the star at location (141,273) in Figure 1 of Fahlman & Richer (1986). A finding chart for several stars in our sample in Field 2 is given in Figure 1 below.
Figure 2 presents a color-magnitude diagram for the main sequence stars in the M13 sample. The stars that have been observed spectroscopically are shown as filled circles. To guide the eye in establishing the isochrone locus, stars in Field 2 that lie in the region somewhat brighter and somewhat fainter than that spanned by the sample stars are shown as open circles; for the fainter stars, every eighth star is plotted.
## 3 SPECTROSCOPIC OBSERVATIONS AND MEASUREMENT OF BAND INDICES
Two slitmasks, one in each field, were designed containing 50 stars from the M13 main sequence star sample. These were used at relatively low dispersion with the LRIS (300 g/mm grating, 2.46Å/pixel, 0.7 arcsec slit width) for a spectral resolution of 8Å. The CCD detector is digitized at 2 electrons/DN with a readout noise of 8 electrons. Two 800 sec exposures were taken with each slitmask under conditions of good seeing and dark sky in the spring of 1998. The data were reduced in a straightforward manner as described in Cohen et al. (1999) using Figaro (Shortridge 1988) except that the wavelength calibration came from arc lamp exposures, rather than from night sky lines on the spectra themselves. The spectra are not fluxed.
All 50 stars are members of M13 based on the metal poor appearance of their spectra and on their radial velocities.
Since these stars are metal poor, the absorption lines are in general quite weak, and rather than adhere to the usual definition of a single sided band index, we measure a CH index using continuum bandpasses on both sides of the G band at 4300Å, with a feature bandpass chosen to avoid H$`\gamma `$. Thus the blue continuum bandpass goes from 4180 to 4250Å, and the red one from 4380 to 4460 A. The feature bandpass covers the wavelengths 4285 to 4315Å. Weights of 0.6 and 0.4 are assigned to the blue and red continuum bandpasses respectively based on their offset from the wavelength of the G band. The CH index thus measured is given in Table 1. The values are the fraction of absorption from the continuum, and are not in magnitudes.
For the ultraviolet CN band with its head at 3883Å, because of crowding by the higher Balmer lines, it is impossible to find a suitable blue continuum bandpass. Thus the feature is defined in the usual way by a red continuum bandpass at 3894 to 3910Å, with the feature bandpass including 3860 to 3888Å. Again the index feature strengths as a fraction of absorption from the continuum are given in Table 1. A minimum continuum strength (700 DN/pixel) was established for an accurate measurement of the uvCN index; 44 of the 50 stars in the sample met this requirement, and no uvCN index is listed for the six stars whose spectra did not achieve the necessary continuum level. These are among the faintest stars, but not the six faintest, as slitmask alignment also plays a role here, particularly for such narrow slits.
Errors (1$`\sigma `$) for the molecular band indices were calculated based on Poisson statistics from the observed count rates in the continuum and in the feature bandpasses. These are listed for each star in Table 1. The values given in Table 1 thus do not include the effect of cosmic rays nor the effect of the background signal from the night sky, both of which are small.
Even if a mean continuum were applied to normalize the spectra, the observed dispersions within the defined continuum bandpasses cannot themselves be used due to the probable presence of many weak absorption features in the spectra. Errors calculated in this way (without normalizing the continuum) are typically twice those calculated from the Poisson statistics, and provide a firm upper limit on the uncertainties of the measured molecular band indices.
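For concreteness, the index measurement described above can be written out schematically. The following sketch is illustrative only (the wavelength grid, count level and error propagation are simplified assumptions, not the code used for Table 1); it evaluates the double-sided CH index with the 0.6/0.4 continuum weights and a rough Poisson uncertainty.

```python
# Schematic computation of the CH band index defined in the text:
# weighted pseudo-continuum from the 4180-4250 A and 4380-4460 A bandpasses,
# feature bandpass 4285-4315 A, index = fractional absorption below the continuum.
import numpy as np

GAIN = 2.0  # electrons per DN, as quoted for the CCD

def band_mean(wave, counts, lo, hi):
    """Mean counts (DN/pixel) and number of pixels in a wavelength bandpass."""
    sel = (wave >= lo) & (wave <= hi)
    return counts[sel].mean(), sel.sum()

def ch_index(wave, counts):
    blue, n_b = band_mean(wave, counts, 4180.0, 4250.0)
    red,  n_r = band_mean(wave, counts, 4380.0, 4460.0)
    feat, n_f = band_mean(wave, counts, 4285.0, 4315.0)
    cont = 0.6 * blue + 0.4 * red          # weights from the bandpass offsets quoted above
    index = 1.0 - feat / cont              # fraction of absorption from the continuum

    # crude Poisson error: convert DN to electrons and propagate the ratio
    def rel_err(mean_dn, npix):
        return 1.0 / np.sqrt(mean_dn * GAIN * npix)
    sig = (feat / cont) * np.sqrt(rel_err(feat, n_f)**2 + rel_err(cont, n_b + n_r)**2)
    return index, sig

# toy spectrum: flat continuum of 700 DN/pixel with a 3% dip at the G band
wave = np.arange(4150.0, 4500.0, 2.46)
counts = np.full_like(wave, 700.0)
counts[(wave > 4285) & (wave < 4315)] *= 0.97
print(ch_index(wave, counts))   # index close to 0.03, with its Poisson uncertainty
```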
To illustrate the quality of the spectra, Figure 3 shows the spectrum of the brightest star at $`R`$ in our sample of main sequence stars in M13 (M13ms J1641079+362413) and of the faintest star for which both a CH index and a uvCN index were measured (M13ms J1641085+362317). The thin line is the spectrum of the fainter star multiplied by a factor of 3.1 in the region around the CH band to facilitate comparison with the spectrum of the brightest star.
## 4 ANALYSIS
The color-magnitude diagram for our sample of main sequence stars in M13 displayed in Figure 2 illustrates a very important point. The total range in $`(BR)`$ color of the sample members is very small, $`<0.2`$ mag. The locus of the main sequence is very tight in color at a fixed luminosity. The total range in $`R`$ magnitude is less than 1.5 mag. We therefore ignore all subtleties, all model atmosphere and line synthesis calculations, all calculations of molecular equilibria, all variations in $`T_{eff}`$ and surface gravity, and instead proceed in a very simple manner.
We rank the stars according to the position along the main sequence, a ranking which is almost identical to that in $`R`$ magnitude, with the star highest up the isochrone as first, and the star lowest on the main sequence as having “star order” = 50.
Figures 4 and 5 show the results for the 50 M13 main sequence stars. (Only 44 stars are plotted in Figure 5 due to the minimum continuum level requirement imposed for measuring a uvCN index.) The 1$`\sigma `$ errors for each star are plotted as well. A linear fit was derived for the uvCN index, and the 1$`\sigma `$ rms dispersion around that fit is 0.023 (2.3% of the continuum), while the Poisson errors calculated from the measurements range from 0.008 to 0.017. For the CH index, a quadratic fit was made, as shown in Figure 5. The 1$`\sigma `$ rms dispersion around that fit is 0.010, while the calculated uncertainties for the faintest stars are $`0.005`$.
It seems reasonable to assume that the rise in the strength of the CH band index seen in Figure 5 for the most luminous stars in this sample is due to their slightly redder color, hence slightly cooler $`T_{eff}`$ as one begins to turn off the main sequence (see Figure 2) enhancing the strength of the CH band. Because the CN index is defined as a single sideband index with the continuum to the red of the feature, the measured index is affected by the continuum slope as well as by the feature strength. The contribution from the continuum itself to $`I(uvCN)`$ is on average $``$0.09. There is also some contribution to $`I(uvCN)`$ from the Balmer line H8 at 3889Å, which in such metal poor stars may be a substantial fraction of the total absorption within the bandpass of the uv CN feature.
## 5 DISCUSSION
Our data provide no evidence for variations of CH or CN band strengths among the 50 main sequence stars in our sample in M13. The errors are small compared to the size of the measured indices. Our analysis is extremely simple and does not depend in any way on model atmospheres or spectral synthesis. The CH bands are quite weak, and hence one might argue that they hide a variation in band strength from star-to-star which goes from $`ϵ`$ to a maximum of $`6\%`$, while still having a range of more than a factor of two. But a careful examination of figure 5 makes this difficult to envision, as only one star has a feature strength below 1% (M13ms J1641072+362756, with $`I(CH)=0.008\pm 0.004`$), and the errors are quite small for the entire sample. The mean absorption in the uvCN band is much larger, so such an argument cannot be applied to $`I(uvCN)`$. The uvCN band indices in Figure 4 demonstrate quite convincingly that in M13 there are no star-to-star variations in the strength of this molecular band. But here the issue of the contribution of H8 to this index in such metal poor stars cannot be ignored. A full spectral synthesis is required to establish the maximum size of the variations that can be hidden within the constraints of our data. This is deferred to a future paper in this series.
It is not clear how our results can be reconciled with the observations in 47 Tuc of Cannon et al. (1998) and references therein, where substantial band strength variations were seen for main sequence stars. Our sample in M13 is larger and our spectra are of higher signal-to-noise ratio, but M13 is somewhat farther away, the main sequence stars are somewhat fainter than those in 47 Tuc, and the abundance of the cluster is lower. The issue of membership for our sample of 50 main sequence stars is clear; all are cluster members.
Similar data from the LRIS at the Keck Observatory for an even larger sample of main sequence stars in M71 are now in hand and the analysis and results of that sample should prove illuminating in trying to understand the origin of this discrepancy.
## 6 SUMMARY
I have determined the strength of the CH and CN bands from spectra of 50 main sequence stars in M13. The data would seem to suggest that large variations of C and N are not seen at the level of the main sequence and below it, but the reader is cautioned that a firm conclusion must await a detailed analysis of C and N abundances which will appear in a later paper (Briley & Cohen 1999). Significant primoridal variations of C and N do not appear to be present in M13. This supports the hypothesis that abundance variations found among the light elements in the evolved stars of M13 by Sneden (1981), and commonly seen on the giant and subgiant branches of globular clusters of comparable metallicity, are due primarily or entirely to mixing within a fraction of individual stars as they evolve.
###### Acknowledgements.
The entire Keck/LRIS user community owes a huge debt to Jerry Nelson, Gerry Smith, Bev Oke, and many other people who have worked to make the Keck Telescope and LRIS a reality and to operate and maintain the Keck Observatory. We are grateful to the W. M. Keck Foundation, and particularly its late president, Howard Keck, for the vision to fund the construction of the W. M. Keck Observatory. I also thank Jim Hesser for a guide to the literature on 47 Tuc, Kevin Richberg for help with the data reduction and Patrick Shopbell for help at the telescope.
# The Theory of Stochastic Space-Time. II. Quantum Theory of Relativity (Published in Z. Zakir (2003), Structure of Space-Time and Matter, CTPA, Tashkent)
## 1 Introduction
A physical description is based on the analysis of the results of observations; however, the laws of physics should not depend on the methods of observation or on the choice of observer. We shall call this condition the principle of relativity of observations and consider it as a general principle which all physical theories must satisfy. In this paper we consider some measurement procedures and mathematical structures through which this general principle is manifested in classical and quantum physics.
In particular, the canonical transformations of Hamiltonian dynamics will be treated as transformations of systems of unperturbing devices, while the unitary transformations of the Hilbert space of states in quantum physics will be represented as transformations of systems of perturbing devices. This means that one can take as the first principles of quantum mechanics such physical principles as relativity under the systems of measuring devices and the invariance of the fundamental constants $`\hbar `$ and $`c`$, which lead to a stochastic geometry of the physical spacetime.
In the preceding paper some consequences of the stochastic treatment of gravitation were considered. In this paper it will be shown that this treatment can be derived from general physical principles.
## 2 Canonical transformations as transformations of the systems of unperturbing devices
In classical physics the systems of coordinates are constructed by using systems of devices allowing one to measure complete sets of dynamical variables near each point of space. It is implicitly supposed that the basic equations of physics do not depend on the choice of the system of unperturbing devices. We shall call this statement the principle of relativity of unperturbing observations (PRUO).
If on one system of unperturbing devices an object is described by a full set of dynamical variables - generalized coordinates and momenta $`(q,p)`$ - and on another system by another set of coordinates and momenta $`(P,Q)`$, then, according to this principle, the equations of motion should not depend on the transition from the first pair of variables to the second. We see that the definition of transformations of the systems of unperturbing devices is the same as the definition of canonical transformations, and therefore we can suppose that the canonical invariance of the equations of motion of classical physics is an expression of the principle of relativity of unperturbing observations. This treatment allows one to describe the states of mechanical objects by means of the systems of unperturbing devices in a phase space with a symplectic structure, where the principle of relativity of unperturbing observations is represented through the canonical group of symmetry.
From this point of view, the applicability of Hamiltonian dynamics to various physical structures testifies that their dynamical variables can be measured by a system of unperturbing devices, or can be reduced to such variables. This interpretation partly explains the universality of Hamiltonian structures not only in classical mechanics, but in other fields of physics as well.
## 3 Spacetime in the systems of perturbing devices
The existence of the Planck constant $`\hbar `$ as an action quantum requires extending the principle of relativity of observations to systems of perturbing devices (SPD). Below we shall see that such an extension is possible and, moreover, that quantum theory can be considered as its result. Here we consider the change of the structure of space and time in SPD.
Let us have a system of perturbing devices as a set of classically described measurement devices near each point of Euclidean space. Suppose that, during the measurements of the coordinates and times of classical particles of small mass, the particles scatter on the devices at many points of their trajectories. Then, in the limiting case, the trajectories of such particles will be similar to Brownian trajectories, and classical mechanics must be replaced by the theory of Brownian motion. The system of perturbing devices here plays the role of an environment with some diffusion coefficient, and the observables should be defined in a statistical ensemble of measurements. Instead of definite coordinates and momenta of the particles, here we deal with probability densities and transition probability densities.
Two kinds of Brownian processes are known - the usual dissipative (Wiener) diffusion and the nondissipative (Nelson) diffusion . Examples of dissipative systems of perturbing devices with Wiener diffusion are the Wilson and bubble chambers, in which a high energy particle interacts with atoms of the medium along its trajectory and loses energy.
In Nelson diffusion the energy of an ensemble of particles is conserved, and the equations of diffusion are reversible in time, in contrast to irreversible Wiener processes. Here we consider the construction of such conservative systems of devices, in which the motion of particles represents Nelson diffusion . This is a set of a large number of massive screens with an infinite number of slits on each screen . Let the slits have massive shutters, which rapidly open and close the slits during a very short time. At the opening and closing of the shutters the sample particles scatter on them. As a result, the energy and momentum of the particles may change appreciably, while the mean energy and momentum of the massive shutters and screens do not change in such scatterings. Therefore, the tangent components of the momentum and the kinetic energy of the scattered particles change stochastically, but their mean values remain unchanged.
Here the physical reason for the conservative behavior is the fact that all elements of the measuring devices are massive (macroscopic) objects, with masses greatly exceeding those of the sample particles. For such a ratio of masses, the collisions of the classical particles of small mass with the very massive devices can be considered as absolutely elastic (in the rest frame of the shutter). The energy conservation of the ensemble of particles leads to the temporal reversibility of their equations of motion.
Thus in SPD the structure of spacetime becomes stochastic, and the Galilean (or Minkowski) geometry of spacetime should be replaced by a stochastic geometry. In dissipative SPD this is a stochastic spacetime with the Wiener measure, and in nondissipative SPD a spacetime with the Nelson measure.
## 4 Relativity of perturbing observations and transformations of SPD
In classical mechanics with unperturbing devices it does not matter how many screens there are between two spatial points on a particle's trajectory. However, if we take into account the scattering of the particle on the screens with moving shutters, then the number of scatterings becomes essential for the final probability density of the observed particles. As the number $`N`$ of screens between the initial and final positions increases, the temporal intervals between the scatterings of the particles on neighboring shutters decrease, and at $`N\rightarrow \mathrm{\infty }`$ we have $`\mathrm{\Delta }t\rightarrow 0`$. In fact, this is nothing but a transformation of the system of perturbing devices, and the physically interesting variables are those which tend to finite values under such an increase of $`N`$.
Here we consider such transformations of SPD and some conditions required by the principle of relativity of observations. An ensemble of particles in SPD is described by a probability density $`\rho (𝐱,t)`$ and a transition probability density $`p(𝐱^{},t^{};𝐱,t)`$. In fact, there are two families of transition probabilities $`p_\pm `$, where $`p_+(𝐱,t;𝐱_0,t_0)`$ describes the forward in time transition probabilities $`t>t_0`$, while $`p_{-}(𝐱,t;𝐱_0,t_0)`$ describes the backward in time transition probabilities $`t<t_0`$. It is clear that in a conservative diffusion both types must appear in a symmetric form. In the particular case of Nelson's kinematics these two types of transition probabilities can be reduced to a single function $`S(𝐱,t)`$ and a diffusion coefficient $`\nu `$ (see the review ).
A given SPD differs from another one only by the transition probability densities $`p_\pm (𝐱,t;𝐱_0,t_0)`$ describing the temporal evolution of $`\rho (𝐱,t)`$. For $`\rho (𝐱,t)`$ on the first SPD one has:
$$\rho (𝐱,t)=\int p_\pm (𝐱,t;𝐱_0,t_0)\rho (𝐱_0,t_0)𝑑𝐱_0,$$
(1)
and on the second one with the same initial probability density $`\rho (𝐱_0,t_0)`$:
$$\rho ^{}(𝐱,t)=\int p_\pm ^{}(𝐱,t;𝐱_0,t_0)\rho (𝐱_0,t_0)𝑑𝐱_0.$$
(2)
Here $`\rho ^{}`$ and $`p_\pm ^{}`$ transform as:
$$\rho ^{}(𝐱,t)=B(𝐱,t)\rho (𝐱,t)=(1+\delta B)\rho =\rho +\delta \rho ,$$
(3)
$$p_\pm ^{}(𝐱,t;𝐱_0,t_0)=B(𝐱,t)p_\pm (𝐱,t;𝐱_0,t_0)B^{-1}(𝐱_0,t_0)=$$
(4)
$$=p_\pm +(\delta Bp_\pm -p_\pm \delta B_0)=p_\pm +\delta p_\pm ,$$
(5)
where $`B(𝐱,t)`$ is the operator of the transformation of $`\rho (𝐱,t)`$ under a change of SPD.
The probability conservation conditions:
$$\int \rho (𝐱,t)𝑑𝐱=\int \rho ^{}(𝐱,t)𝑑𝐱=\int B(𝐱,t)\rho (𝐱,t)𝑑𝐱=1,$$
(6)
lead, for small variations, to:
$$\int \delta \rho (𝐱,t)𝑑𝐱=0,$$
(7)
$$\int \delta B(𝐱,t)\rho (𝐱,t)𝑑𝐱=\langle \delta B\rangle =0,$$
(8)
since only the spatial distributions of $`\rho (𝐱,t)`$ and $`\delta B`$ change under such local deformations.
The velocities and diffusion coefficients of particles in SPD are defined as the conditional expectations:
$$\int 𝑑𝐱_0p_\pm (𝐱,t;𝐱_0,t_0)(𝐱-𝐱_0)=\pm 𝐛_\pm (𝐱,t)\mathrm{\Delta }t,$$
(9)
$$\int 𝑑𝐱_0p_\pm (𝐱,t;𝐱_0,t_0)(x_i-x_{i0})(x_j-x_{j0})=\pm 2n_\pm (𝐱,t)\delta _{ij}\mathrm{\Delta }t,$$
(10)
which transform under the SPD transformations as:
$$\underset{\mathrm{\Delta }t\rightarrow 0}{lim}\int 𝑑𝐱_0B(𝐱,t)p_\pm (𝐱,t;𝐱_0,t_0)B^{-1}(𝐱_0,t_0)(𝐱-𝐱_0)=\pm 𝐛_\pm ^{\prime }(𝐱,t)dt,$$
(11)
$$\underset{\mathrm{\Delta }t\rightarrow 0}{lim}\int 𝑑𝐱_0B(𝐱,t)p_\pm (𝐱,t;𝐱_0,t_0)B^{-1}(𝐱_0,t_0)(x_i-x_{i0})(x_j-x_{j0})=\pm 2n_{ij\pm }^{\prime }(𝐱,t)dt.$$
(12)
## 5 The principle of constancy of action quantum and the diffusion coefficient
In classical mechanics SPD can be constructed with arbitrarily small perturbations, and for a classical particle observed by SPD the conditional expectation of the action function $`\mathrm{\Delta }A_\pm `$ vanishes for infinitesimal temporal intervals:
$$\underset{\mathrm{\Delta }t\rightarrow 0}{lim}\int 𝑑𝐱_0p_\pm (𝐱,t;𝐱_0,t_0)\mathrm{\Delta }A_\pm (𝐱,t,𝐱_0,t_0)=0,$$
(13)
which means that:
$$\underset{\mathrm{\Delta }t\rightarrow 0}{lim}\int 𝑑𝐱_0p_\pm (𝐱,t;𝐱_0,t_0)\left[\frac{m\left(𝐱-𝐱_0\right)^2}{\pm \mathrm{\Delta }t}\right]=0.$$
(14)
Since classical mechanics is not an exact theory of microscopic phenomena, in the general case one must take into account the existence of the action quantum (Planck's constant) $`\hbar `$, which should be invariant under the transformations of SPD. The latter statement we shall call the principle of constancy of the action quantum . In particular, if the conditional expectation of $`\mathrm{\Delta }A_\pm `$ is equal to $`\hbar `$ in one SPD, then it should be equal to $`\hbar `$ in all other SPD. So we have the expression:
$$\underset{\mathrm{\Delta }t\rightarrow 0}{lim}\int 𝑑𝐱_0p_\pm (𝐱,t;𝐱_0,t_0)\left[\frac{m\left(𝐱-𝐱_0\right)^2}{\pm \mathrm{\Delta }t}\right]=\hbar ,$$
(15)
or in the SPD transformed form:
$$\underset{\mathrm{\Delta }t\rightarrow 0}{lim}\int 𝑑𝐱_0B(𝐱,t)p_\pm (𝐱,t;𝐱_0,t_0)B^{-1}(𝐱_0,t_0)\left[\frac{m\left(𝐱-𝐱_0\right)^2}{\pm \mathrm{\Delta }t}\right]=\hbar .$$
(16)
We see that, as a consequence of this principle, the conditional expectation $`E[dA_\pm \mid 𝐱(t)]`$ does not vanish at $`\mathrm{\Delta }t\rightarrow 0`$. As a result, we obtain the SPD covariant formula for the mean square values of the particle's coordinates:
$$\underset{\mathrm{\Delta }t\rightarrow 0}{lim}\int 𝑑𝐱_0p_\pm ^{\prime }(𝐱,t;𝐱_0,t_0)\left(𝐱-𝐱_0\right)^2=\pm \frac{\hbar }{m}dt=\pm 2\nu dt,$$
(17)
where the diffusion coefficient $`\nu =\hbar /2m`$ is exactly the same as in Nelson's stochastic formulation of quantum mechanics . But in stochastic mechanics the value of the diffusion coefficient was set equal to $`\nu =\hbar /2m`$ by hand, whereas in the present treatment this formula follows directly from physically clear first principles.
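As a purely numerical illustration (not part of the original argument), the size of this diffusion coefficient and the scaling of eq. (17) can be checked directly; the choice of the electron as the sample particle, the time step and the path statistics below are arbitrary assumptions made only for this example.

```python
# Numerical illustration of eq. (17): for a conservative diffusion with
# nu = hbar/(2m), the mean square displacement grows as <(dx)^2> = 2*nu*dt.
import numpy as np

hbar = 1.054571817e-34      # J s
m_e  = 9.1093837015e-31     # kg (electron taken as the sample particle)
nu   = hbar / (2.0 * m_e)   # ~5.8e-5 m^2/s

rng = np.random.default_rng(0)
dt, n_steps, n_paths = 1e-12, 500, 5000
steps = rng.normal(0.0, np.sqrt(2.0 * nu * dt), size=(n_paths, n_steps))
x = np.cumsum(steps, axis=1)                 # ensemble of one-dimensional paths

t = dt * np.arange(1, n_steps + 1)
msd = (x**2).mean(axis=0)                    # mean square displacement over the ensemble
print(nu)                                    # diffusion coefficient, m^2/s
print(msd[-1] / (2.0 * t[-1]))               # recovers ~nu, i.e. <(dx)^2> = 2*nu*dt
```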
Since for SPD with such invariant diffusion coefficients $`n_{ij\pm }^{\prime }=\pm 2\nu \delta _{ij}`$ the velocity $`𝐮=(𝐛_+-𝐛_{-})/2`$ can be expressed through $`\rho (𝐱,t)`$, the SPD transformations can be reduced to transformations of $`\rho (𝐱,t)`$ and $`𝐯=(𝐛_++𝐛_{-})/2`$ only:
$$\delta 𝐮=\nu \delta \left(\frac{\nabla \rho }{\rho }\right),$$
(18)
$$\delta 𝐯=\frac{1}{m}\nabla (\delta S),$$
(19)
where $`S(𝐱,t)`$ is some function, introduced instead of $`𝐯(𝐱,t)`$ by the expression:
$$m𝐯=\nabla S,$$
(20)
and which can be derived from an initial Lagrangian after the special SPD transformations .
Finally, we have a functional "phase space" given by the canonical pair $`(\rho ,S)`$ for the particle's motion in SPD, and the corresponding algebra of observables, which are exactly equivalent to the Hilbert space of states and the operator algebra of ordinary quantum mechanics.
## 6 The quantum principle of equivalence
The mass parameter $`m`$ in the SPD diffusion coefficient $`\nu _d=\hbar /2m_{in}`$, considered in the previous Section, is the inertial mass $`m_{in}`$ determined by the kinetic term of the action function describing the scattering of particles on SPD.
In Nelson's stochastic mechanics we also have a diffusion coefficient with the same inertial mass, $`\nu _s=\hbar /2m_{in}`$. The classical sample particle with the inertial mass $`m_{in}`$ moves freely in the stochastic space, and the process represents a conservative diffusion on some background.
In quantum mechanics, rewritten in terms of stochastic processes, we have the effective diffusion coefficient $`\nu _q=\hbar /2m_q`$, where $`m_q`$ is a mass parameter determining the fluctuations of the coordinates of quantum particles in flat and regular Galilean space and time . As was shown from an analysis of the Lamb shift data, the quantum diffusion mass $`m_q`$ is equal to the inertial mass with high accuracy :
$$(m_{in}-m_q)/m_{in}<10^{-13}.$$
(21)
Therefore, the two kinds of masses of the particle are equal to each other:
$$m_{in}=m_q,$$
(22)
and this fact is very important for understanding the geometrical nature of quantum phenomena.
Firstly, this means the equivalence of the motion of a classical particle observed by SPD in ordinary smooth space to the motion of a classical particle in a stochastic space with unperturbing devices. It also means the equivalence of the transition to SPD and the quantum mechanical description (quantization) of the motion of the classical particle.
We can demonstrate this situation with the simple double-slit experiment. Let there be a source and a detector of particles, with a screen with two slits between them. The particles emitted by the source and passing through the slits are registered by the detectors. After repeating the experiment many times, an observer obtains an interference picture on the detectors. The observer, who has only the photographic plate with the interference picture, cannot distinguish between three interpretations:
a) Space is empty and Euclidean, but the particles have "quantum" properties (a wave function) leading to the interference. This is the quantum mechanical interpretation;
b) Space is empty and stochastic, and the motion of classical particles on this background leads to the interference. This is the interpretation of stochastic mechanics;
c) Space is smooth and Euclidean, but it is not empty: there exists an infinite number of devices near each point of space. The classical particles interact with this medium of devices, and as a result the observer detects the interference. This treatment is based on the thought experiment illustrating the principle of relativity of observations.
Further we will call this fact the quantum principle of equivalence . Can we conclude from these statements that quantum mechanics is the stochastic geometry of spacetime and that stochastic mechanics is a true physical formulation of quantum mechanics? Is the physical spacetime stochastic?
To answer these questions, let us recall an analogy with Einstein's proof of the geometrical nature of gravitation. Because of the equality of the inertial and gravitational masses in general relativity, the observer cannot distinguish between three treatments in explaining the acceleration of a sample particle:
a) spacetime is Euclidean, the reference frame is inertial, but there exists a gravitational field with the potential $`\phi `$ (the field treatment);
b) spacetime is Riemannian, the reference frame is (locally) inertial, and there is no external field (the geometric treatment);
c) spacetime is Euclidean in a global inertial frame, but the reference frame of the observer is accelerated, and there is no external field (the kinematic treatment).
After analyzing such situations in general relativity, the geometric theory of gravitation was established. Analogously, we can conclude that the quantum principle of equivalence allows us to justify the stochastic geometrical version of quantum mechanics. It is important that the statement that quantum mechanics is nothing but the stochastic geometry of spacetime has a very important consequence: an explanation of gravitation as an inhomogeneous quantum diffusion .
## 1 Introduction
One of the most striking accomplishments of the Energetic Gamma Ray Experiment Telescope (EGRET) instrument on the Compton Gamma-Ray Observatory (CGRO) is the detection of high-energy $`\gamma `$-rays from active galaxies whose emission at most wavebands is dominated by non-thermal processes. These objects, called “blazars,” are highly variable at most frequencies and are bright radio sources. Prior to the launch of CGRO, 3C 273, discovered by COS-B (Swanenburg et al. 1978), was the only known extragalactic source of $`\gamma `$-rays. Since then, EGRET has detected more than 50 blazars in high energy ($`>100`$ MeV) $`\gamma `$-rays (Mukherjee et al. 1997; Thompson et al. 1995; 1996).
The blazars detected by EGRET all share the common characteristic that they are radio-loud, flat-spectrum radio sources, with radio spectral indices $`\alpha _r0.6`$ (von Montigny et al. 1995). Several of these blazars are known to demonstrate superluminal motion of components resolved with VLBI (3C 279, 3C 273, 3C 454.3, PKS 0528+134, for example). The blazar class of active galactic nuclei (AGN) includes BL Lac objects, highly polarized quasars (HPQ), or optically violent variable (OVV) quasars and are characterized by one or more of the properties of this source class, namely, a non-thermal continuum spectrum, a flat radio spectrum, strong variability and optical polarization. For many of the EGRET-detected blazars, the $`\gamma `$-ray energy flux is dominant over the flux in lower energy bands. The redshifts of these sources range from 0.03 to 2.28 and the average photon spectral index, assuming a simple power law fit to the spectrum, is $`2.2`$. Many of the blazars exhibit variability in their $`\gamma `$-ray flux on timescales of several days to months. In addition, blazars exhibit strong and rapid variability in both optical and radio wavelengths.
Of the 51 blazars reviewed here, 14 are BL Lac objects, and the rest are flat spectrum radio quasars (FSRQs). BL Lac objects generally have stronger polarization and weaker optical lines. In fact, some BL Lac objects have no redshift determination because they have no identified lines above their optical continuum. FSRQs are generally more distant and more luminous compared to the BL Lac objects.
This review summarizes the present knowledge on $`\gamma `$-ray observations of blazars by EGRET. A brief description of the EGRET instrument and data analysis techniques, and the list of blazars detected by EGRET is given in §2. Temporal variations and $`\gamma `$-ray luminosity of blazars are discussed in §§3 & 4. Section 5 describes the spectral energy distribution of blazars and summarizes the various models that have been proposed to explain the $`\gamma `$-ray emission in blazars.
## 2 EGRET observations and analysis
### 2.1 The EGRET Instrument
EGRET is a $`\gamma `$-ray telescope that is sensitive in the energy range $``$ 30 MeV to 30 GeV. It has the standard components of a high-energy $`\gamma `$-ray instrument: an anticoincidence dome to discriminate against charged particles, a spark chamber particle track detector with interspersed high-$`Z`$ material to convert the $`\gamma `$-rays into electron-positron pairs, a triggering telescope to detect the presence of the pair with the correct direction of motion, and an energy measurement system, which in the case of EGRET is a NaI(Tl) crystal. EGRET has an effective area of 1500 cm<sup>2</sup> in the energy range 0.2 GeV to 1 GeV, decreasing to about one-half the on-axis value at $`18^{}`$ off-axis and to one-sixth at $`30^{}`$. The instrument is described in details by Hughes et al. (1980) and Kanbach et al. (1988, 1989) and the preflight and postflight calibrations are given by Thompson et al. (1993) and Esposito et al. (1998), respectively.
Although EGRET records individual photons in the energy range 30 MeV to about 30 GeV, there are several instrumental characteristics that limit the energy range for which time variation investigations of blazars are viable. At the low end of the energy range, below $`70`$ MeV, there are systematic uncertainties that make the spectral information marginally useful. In addition, the deteriorating point spread function (PSF) and energy resolution at low energies, make analysis more difficult. At high energies, although the systematic uncertainties are reduced, and the PSF and energy resolution are more reasonable, because of the steeply falling spectra, few photons are detected above 5 GeV.
The angular resolution of EGRET is energy dependent, varying from about $`8^{\circ }`$ at 60 MeV to $`0.4^{\circ }`$ above 3 GeV (68% containment).
The threshold sensitivity of EGRET ($`>100`$ MeV) for a single observation is $`3\times 10^7`$ photons cm<sup>-2</sup> s<sup>-1</sup>, and is only about a factor of 50-100 below the maximum blazar flux ever observed. The dynamic range for most observations of blazar variations is, therefore, fairly small.
### 2.2 EGRET Data Analysis
The blazars described here were typically observed by EGRET for a period of 1 to 2 weeks; however, several of them were observed for 3 to 5.5 weeks. Following the standard EGRET processing of individual $`\gamma `$-ray events, summary event files were produced with $`\gamma `$-ray arrival times, directions and energies. For the observations reported here, photons coming from directions greater than $`30^{}`$ from the center of the field of view (FOV) were not used, in order to restrict the analysis to photons with the best energy and position determinations. In addition, exposure history files were produced containing information on the instrument’s mode of operation and pointing. These maps were used to generate skymaps of counts and intensity for the entire field of view for each observation, using a grid of $`0.5^{}\times 0.5^{}`$. The intensity maps were derived simply by dividing the counts by the exposure. The EGRET data processing techniques are described further by Bertsch et al. (1989).
The number of source photons, distributed according to the instrument PSF in excess of the diffuse background, was optimized. An $`E^{-2}`$ photon spectrum was initially assumed for the source search. The background diffuse radiation was taken to be a combination of a Galactic component caused by cosmic ray interactions in atomic and molecular hydrogen gas (Hunter et al. 1997), as well as an almost uniformly distributed component that is believed to be of extragalactic origin (Sreekumar et al. 1998).
The data were analyzed using the method of maximum likelihood as described by Mattox et al. (1996) and Esposito et al. (1998). The likelihood value, $`L,`$ for a model of the number of $`\gamma `$-rays in each pixel of a region of the map is given by the product of the probability that the measured counts are consistent with the model counts assuming a Poisson distribution. The probability of one model with likelihood, $`L_1,`$ better representing the data than another model with likelihood, $`L_2,`$ is determined from twice the difference of the logarithms of the likelihoods, $`2(\mathrm{ln}L_2-\mathrm{ln}L_1).`$ This difference, referred to as the test statistic $`TS`$, is distributed like $`\chi ^2`$ with the number of degrees of freedom being the difference in the number of free parameters in the two models. The flux of the point source and the flux of the diffuse background emission in the model are adjusted to maximize the likelihood. The significance of a source detection in sigma is given approximately by the square root of $`TS`$.
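For illustration, the conversion from $`TS`$ to a chance probability and an approximate significance can be written out directly; the log-likelihood values below are invented for the example.

```python
# Illustrative conversion of the likelihood test statistic TS to a chance
# probability and an approximate significance (sigma ~ sqrt(TS)).
from math import sqrt
from scipy.stats import chi2

lnL_source, lnL_null, extra_dof = -10452.3, -10470.8, 1   # hypothetical fit values
TS = 2.0 * (lnL_source - lnL_null)
p_chance = chi2.sf(TS, df=extra_dof)        # probability of TS this large by chance
print(TS, sqrt(TS), p_chance)               # e.g. TS = 37.0 -> roughly a 6 sigma detection
```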
### 2.3 EGRET Observations
The 51 blazars listed in Table 1 were all detected by EGRET above 100 MeV during the period of EGRET observations from 1991 April to 1995 September (Phases 1 through 4 of CGRO) (Mukherjee et al. 1997). Some of these blazar associations are not certain; Mattox et al. (1997a) find only 42 identifications to have high confidence. Conversely, some of the unidentified high-latitude EGRET sources are likely to be blazars. In addition to the 42 considered strongest, Mattox et al. (1997a) note 16 possible associations with bright flat-spectrum, blazar-like radio sources. Typically, each blazar listed in Table 1 was seen in several different viewing periods (VPs). The maximum and minimum fluxes observed for each blazar is indicated in Table 1. A more complete list of blazar detections by EGRET may be found in the third EGRET catalog (Hartman et al. 1998).
## 3 Time variability
The fluxes of the blazars detected by EGRET have been found to be variable on time scales of a year or more down to well under a day. Long term variations of blazars have been addressed earlier by several authors (eg. von Montigny et al. 1995; Hartman et al. 1996a; Mukherjee et al. 1997). In some cases, many of the detected blazars have exhibited flux variations up to a factor of about 30 between different observations. Figure 1 shows the flux history of four EGRET-detected blazars. The horizontal bars on the individual data points denote the extent of the VP for that observation. Fluxes have been plotted for all detections greater than $`2\sigma `$. For detections below $`2\sigma `$, upper limits at the 95% confidence level are shown. A systematic uncertainty of 6% was added in quadrature with the statistical uncertainty for each flux value, consistent with the analysis of McLaughlin et al. (1996) on EGRET source variability.
In order to quantify the flux variability of the blazars in Table 1, Mukherjee et al. (1997) calculated the variability index, $`V=-\mathrm{log}Q`$, as defined by McLaughlin et al. (1996), where $`Q=1-P_\chi (\chi ^2,\nu ).`$ Here $`P_\chi (\chi ^2,\nu )`$ is the probability of observing $`\chi ^2`$ or something smaller from a $`\chi ^2`$ distribution with $`\nu `$ degrees of freedom, so that $`Q`$ is the chance probability of obtaining the observed $`\chi ^2`$ or a larger value if the flux were constant. The flux versus time data were fit to a constant flux and the reduced $`\chi _\nu ^2`$, for $`\nu `$ degrees of freedom, was calculated using the least square fit method. For a nonvariable source, a constant flux is expected to fit the data well, and the mean value of the $`\chi ^2`$ distribution is expected to be equal to the number of degrees of freedom in the data. The quantity $`V`$ is used to judge the strength of the evidence for flux variability. Following the classification of McLaughlin et al. , $`V<0.5`$ was taken to indicate non-variability, $`V\geq 1`$ to indicate variability, and $`0.5\leq V<1`$ as uncertain. Table 1 lists the value of $`V`$ for each source.
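A schematic version of this calculation (with invented fluxes and errors, and the 6% systematic term added in quadrature as described above) is sketched below.

```python
# Schematic variability index V = -log10(Q) for a light curve, where Q is the
# chance probability of the chi^2 of a constant-flux fit (all numbers invented).
import numpy as np
from scipy.stats import chi2

flux = np.array([25.0, 40.0, 18.0, 33.0, 12.0])   # 10^-8 photons cm^-2 s^-1
stat = np.array([6.0, 8.0, 5.0, 7.0, 4.0])        # statistical errors
err = np.sqrt(stat**2 + (0.06 * flux)**2)          # add 6% systematic in quadrature

w = 1.0 / err**2
mean_flux = np.sum(w * flux) / np.sum(w)            # best-fit constant flux
chisq = np.sum(((flux - mean_flux) / err)**2)
nu = len(flux) - 1                                  # degrees of freedom
Q = chi2.sf(chisq, nu)                              # prob. of chi^2 this large by chance
V = -np.log10(Q)
print(mean_flux, chisq, V)                          # V >= 1 would indicate variability
```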
Of the 51 blazars reviewed here, 35 are found to be variable ($`V\geq 1`$), 9 are non-variable ($`V<0.5`$), and 7 fall in the range of uncertain variability. It should be noted that, although the criterion used here to gauge variability of a blazar is somewhat arbitrary, it does provide a way to compare the numbers obtained. Also, as McLaughlin et al. (1996) have noted, changing these criteria by 20% yields similar results. If the FSRQs and the BL Lac objects are considered separately, it is found that 76% of the FSRQs in the sample are variable, while 16% are non-variable. Similarly, for the BL Lac objects, 50% are variable, while 21% are definitely non-variable. It should be noted that the low intrinsic luminosity of BL Lac objects could bias observations (see discussions in §5.2). Figure 2 shows the distribution of the variability indices for the FSRQs and the BL Lac objects. The BL Lac objects in the data set are found to be less variable on the average than the FSRQs. Recent observations of flares in BL Lac objects, however, modify some of these conclusions. For example, BL Lac was detected during a $`\gamma `$-ray outburst with an average flux of $`(171\pm 42)\times 10^{-8}`$ photons cm<sup>-2</sup> s<sup>-1</sup> in July 1997 (Bloom et al. 1997). BL Lac would have a high value of $`V`$ in Table 1, if this information were taken into account.
Figure 3 shows a plot of the variability index as a function of the weighted average flux for the blazars in Table 1. Note that the sources that have the highest average fluxes all have high variability indices. Only 2 out of 18 blazars with average flux less than $`1\times 10^{-7}`$ photons cm<sup>-2</sup> s<sup>-1</sup> have a variability index greater than 2.5. In fact, there are no non-variable blazars in the sample that have high flux.
The study of short term variability in blazars is always limited by the small numbers of photons detected in the short time intervals at $`\gamma `$-ray energies. The shortest time-scale variations detected for blazars with EGRET are for PKS 1622-297 (Mattox et al. 1997b) and 3C 279 (Wehrle et al. 1997). For both these objects the flux was found to increase by a factor of two or more in less than 8 hours. Other objects that have shown flux variations over the period of a few days are 3C 279 (Kniffen et al. 1993), 3C 454.3 (Hartman et al. 1993), 4C 38.41 (Mattox et al. 1993), PKS 1406-076 (Wagner et al. 1995), and PKS 0528+134 (Hunter et al. 1993; Mukherjee et al. 1996). The short time-scale of $`\gamma `$-ray flux variability (e.g. in 3C 279 or PKS 1622-297), when combined with the large inferred $`\gamma `$-ray luminosities, implies that the blazar emission region is very compact. Gamma-ray tests for beaming from variability and flux measurements using the Elliott-Shapiro relation and $`\gamma `$-ray transparency arguments are summarized in a recent review on $`\gamma `$-ray blazars by Hartman et al. (1997). A factor-of-two flux variation on an observed time-scale $`\delta t_{\mathrm{obs}}`$ limits the size $`r`$ of a stationary isotropically emitting region to be roughly $`r\lesssim c\delta t_{\mathrm{obs}}/(1+z)`$ by simple light-travel time arguments. Under the assumptions of isotropic radiation and Eddington-limited accretion, the implied minimum black hole masses of blazars are $`8\times 10^{11}`$ $`M_{\odot }`$ for PKS 1622-297 (Mattox et al. 1997b) from EGRET observations and $`7.5\times 10^8`$ $`M_{\odot }`$ for PKS 0528+134 from COMPTEL observations (Collmar et al. 1997).
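The light-travel-time and Eddington arguments above amount to two one-line estimates; the sketch below shows the bookkeeping with entirely hypothetical flare numbers (not the measured values for any particular source).

```python
C_CM_S = 2.998e10          # speed of light [cm/s]
L_EDD_PER_MSUN = 1.26e38   # Eddington luminosity per solar mass [erg/s]

def size_limit_cm(dt_obs_hr, z):
    """r <~ c * dt_obs / (1+z) for a factor-of-two flux change."""
    return C_CM_S * dt_obs_hr * 3600.0 / (1.0 + z)

def eddington_min_mass_msun(L_iso_erg_s):
    """Minimum black hole mass (solar masses) if the isotropic luminosity
    is Eddington-limited."""
    return L_iso_erg_s / L_EDD_PER_MSUN

# Assumed example: doubling time of 8 hours at z = 1, L_iso = 1e49 erg/s
print("r_max ~ %.2e cm" % size_limit_cm(8.0, 1.0))
print("M_min ~ %.2e Msun" % eddington_min_mass_msun(1e49))
```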
## 4 Luminosity
The $`\gamma `$-ray luminosity can be estimated from the relationship between the observed differential energy flux $`S_0(E_0)`$, where the subscript “$`0`$” denotes the observed or present-epoch value, and the emitted power $`Q_e`$ in the interval $`dE`$, where $`E=E_0(1+z)`$ in a Friedmann universe.
$$Q_e[E_0]=4\pi S_0(E_0)(1+z)^{b-1}\mathrm{\Theta }D_L^2(z,q_0)$$
$`(1)`$
where
$$D_L=\frac{c}{H_0q_0^2}\left[1-q_0+q_0z+(q_0-1)(2q_0z+1)^{1/2}\right]\equiv \frac{cz}{H_0}g(z,q_0).$$
$`(2)`$
$`H_0`$ is the Hubble parameter, $`q_0`$ is the deceleration parameter, $`b`$ is the spectral index, $`z`$ is the redshift, and $`\mathrm{\Theta }`$ is the beaming factor. $`H_0`$ is chosen to be 70 km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0`$ to be 0.5, although the results obtained here are not highly sensitive to these choices. The beaming factor is taken to be 1. The spectral index is obtained using the analysis described in §5. The luminosity as a function of the redshift is determined using equation (1) and is plotted in Figure 4 for the blazars detected by EGRET. The typical detection threshold for EGRET as a function of $`z`$, for relatively good conditions, is also shown in the figure. The actual threshold varies somewhat with exposure and region of the sky, and the average threshold is a little higher than the curve shown, but the shape is the same. The BL Lac objects are indicated in the figure with dark diamonds, and one sees clearly that they are predominantly closer and lower in luminosity.
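A short numerical sketch of equations (1) and (2) — the Mattig luminosity distance for the adopted $`H_0`$ and $`q_0`$, and the K-corrected power — with a placeholder flux and spectral index (not taken from Table 1):

```python
import numpy as np

H0, Q0 = 70.0, 0.5          # km s^-1 Mpc^-1, deceleration parameter
C_KM_S, MPC_CM = 2.998e5, 3.086e24

def lum_distance_cm(z, q0=Q0, h0=H0):
    """Eq. (2): Mattig relation for a matter-dominated Friedmann model."""
    dl_mpc = (C_KM_S / (h0 * q0**2)) * (
        1.0 - q0 + q0 * z + (q0 - 1.0) * np.sqrt(2.0 * q0 * z + 1.0))
    return dl_mpc * MPC_CM

def emitted_power(S0, z, b, theta=1.0):
    """Eq. (1): Q_e = 4 pi S0 (1+z)^(b-1) Theta D_L^2."""
    return 4.0 * np.pi * S0 * (1.0 + z)**(b - 1.0) * theta * lum_distance_cm(z)**2

# Placeholder: energy flux 1e-10 erg cm^-2 s^-1 at z = 0.9 with index b = 2.2
print("L ~ %.2e erg/s" % emitted_power(1e-10, 0.9, 2.2))
```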
Recently, Chiang & Mukherjee (1998) have calculated the evolution and luminosity function of the EGRET blazars, and have estimated the contribution of this source class to the diffuse extragalactic gamma-ray background. They find that the evolution is consistent with pure luminosity evolution. According to their estimates, only 25% of the diffuse extragalactic emission measured by SAS-2 and EGRET can be attributed to unresolved $`\gamma `$-ray blazars, contrary to some of the other estimates (e.g. Stecker & Salamon 1996). Below 10 MeV, the average blazar spectrum suggests that only about 50% of the measured $`\gamma `$-ray emission could arise from blazars (Sreekumar, Stecker & Kappadath 1997). This raises the exciting possibility that other sources of diffuse extragalactic $`\gamma `$-ray emission exist.
## 5 Spectra
### 5.1 Spectra in the EGRET energy range
EGRET spectra of blazars typically cover at least two decades in energy (from 30 MeV to 10 GeV) and are well described by a simple power-law model of the form $`F(E)=k(E/E_0)^{-\alpha }`$ photons cm<sup>-2</sup> s<sup>-1</sup> MeV<sup>-1</sup>, where the photon spectral index, $`\alpha `$, and the coefficient, $`k`$, are the free parameters. The energy normalization factor, $`E_0`$, is chosen so that the statistical errors in the power law index and the overall normalization are uncorrelated.
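A hedged sketch of such a power-law fit (synthetic flux points; for simplicity $`E_0`$ is fixed at 100 MeV here rather than chosen to decorrelate the errors):

```python
import numpy as np
from scipy.optimize import curve_fit

E0 = 100.0  # MeV, fixed normalization energy in this example

def power_law(E, k, alpha):
    # F(E) = k (E/E0)^(-alpha), photons cm^-2 s^-1 MeV^-1
    return k * (E / E0)**(-alpha)

# Synthetic flux points spanning ~30 MeV - 10 GeV with 15% errors
E = np.array([50.0, 100.0, 300.0, 1000.0, 3000.0, 10000.0])
F = 2e-9 * (E / E0)**(-2.1) * np.random.default_rng(0).lognormal(0.0, 0.1, E.size)
sigma = 0.15 * F

popt, pcov = curve_fit(power_law, E, F, sigma=sigma, p0=(2e-9, 2.0),
                       absolute_sigma=True)
print("k = %.2e, alpha = %.2f +/- %.2f" % (popt[0], popt[1], np.sqrt(pcov[1, 1])))
```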
The average blazar spectrum has a photon spectral index of about $`2.15`$. Figure 5 shows the photon spectral index of the blazars plotted as a function of the redshift. There are marginal indications that the BL Lac objects have slightly harder spectra in the EGRET energy range than the FSRQs. Mukherjee et al. (1997) find that the average spectral index of the BL Lac objects is about $`2.03`$, compared to about $`2.20`$ for the FSRQs. For some individual blazars, a trend has been noted for the spectrum to harden during a flare state (e.g. in the blazars 1222+216, 1633+382, and 0528+134; Sreekumar et al. 1996; Mukherjee et al. 1996). A spectral study of blazars as a class has been performed by Mücke et al. (1996) and Pohl et al. (1997).
### 5.2 Spectral Energy Distributions and Gamma-ray Models
The processes by which $`\gamma `$-rays are produced in blazars can be best understood by the study of the correlated multiwaveband observations of blazars extending from radio to $`\gamma `$-ray wavebands. One of the most significant findings of EGRET is that, in the radio to $`\gamma `$-ray multiwavelength spectra of blazars, the power in the $`\gamma `$-ray range equals or exceeds the power in the infrared-optical band. Any model of high-energy $`\gamma `$-ray emission in blazars needs to explain this basic observational fact. The high $`\gamma `$-ray luminosity of the blazars suggests that the emission is likely to be beamed and, therefore, Doppler-boosted into the line of sight. This is in agreement with the strong association of EGRET blazars with radio-loud flat-spectrum radio sources, with many of them showing superluminal motion in their jets. This information has helped to favor jet models of emission over models in which the $`\gamma `$-ray production is directly associated with accretion onto a massive black hole (e.g. Becker & Kafatos 1993).
The jet models explain the radio to UV continuum from blazars as synchrotron radiation from high energy electrons in a relativistically outflowing jet which has been ejected from an accreting supermassive black hole (Blandford & Königl 1979). The emission in the MeV-GeV range is believed to be due to the inverse Compton scattering of low-energy photons by the same relativistic electrons in the jet. However, two main issues remain open: the source of the soft photons that are inverse Compton scattered, and the structure of the inner jet, which cannot be imaged directly. The soft photons can originate as synchrotron emission either from within the jet (the synchrotron-self-Compton or SSC process: Maraschi, Ghisellini, & Celotti 1992; Bloom & Marscher 1996), or from a nearby accretion disk, or they can be disk radiation reprocessed in broad-emission-line clouds (the external radiation Compton or ERC process: Dermer & Schlickeiser 1994; Sikora, Begelman, & Rees 1994; Blandford & Levinson 1995; Ghisellini & Madau 1996). In contrast to these leptonic jet models, the proton-initiated cascade (PIC) model (Mannheim & Biermann 1989, 1992) predicts that the high-energy emission comes from knots in jets as a consequence of diffusive shock acceleration of protons to energies so high that the threshold of secondary particle production is exceeded.
Figure 6 shows the simultaneous spectral energy distribution of 3C 279 during January-February 1996, when the source was detected at its highest state ever (Wehrle et al. 1997). The figure shows the relative amounts of energy detected in equal logarithmic frequency bands. The power output in $`\gamma `$-rays dominates the bolometric luminosity of the source, as mentioned in §1. Wehrle et al. (1997) note that the $`\gamma `$-rays vary by more than the square of the observed IR-optical flux change, a fact that could be hard to explain with some specific blazar emission models. Although the data do not rule out SSC models, Wehrle et al. point out that the data are most likely explained by the “mirror” model of Ghisellini & Madau (1996). In this model the flaring region in the jet photo-ionizes nearby broad-emission-line clouds, which in turn provide low energy external seed photons that are inverse Compton-scattered to high energy $`\gamma `$-rays.
Recently, a model combining the ERC and SSC scenarios has been used to fit the simultaneous COMPTEL and EGRET spectra of PKS 0528+134 by Böttcher & Collmar (1998). Figure 7 shows their fit to the gamma-ray spectrum during the high state of the source during March 1993. In their model Böttcher & Collmar assume a spherical blob filled with ultrarelativistic pair plasma which is moving out along an existing jet structure perpendicular to an accretion disk around a black hole of mass $`5\times 10^{10}M_{}`$. They argue that the observed spectral break between COMPTEL and EGRET energy ranges can plausibly be explained by a variation of the Doppler beaming factor in the framework of a relativistic jet model for AGNs.
The EGRET results have demonstrated that in order to model the spectra of blazars it is very important to get a truly simultaneous coverage across the entire electromagnetic spectrum before, during, and after a flare in the high-energy $`\gamma `$-ray emission. The limited data that we have on most of the blazars prevents us from being able to distinguish between the different theoretical models, on the basis of the spectra alone. For example, both the SSC and ERC models have been shown to reproduce the multiwavelength spectrum of 3C 279 rather well (Hartman et al. 1996b; Maraschi, Ghisellini & Celotti 1992; Ghisellini & Maraschi 1996). The SSC model was similarly found to fit the multiwavelength spectrum of PKS 0528+134 during the March 1993 flare reasonably well (Mukherjee et al. 1996). The low-state data of PKS 0528+134 (Aug 1994) was fit well with the ERC model, as demonstrated by Sambruna et al. (1997). The SSC, ERC, and PIC models have all been shown to fit the multiwavelength spectrum of 3C 273 well (von Montigny 1997).
The differences between the $`\gamma `$-ray variability properties of BL Lac objects and FSRQs can be explained in light of the model of Ghisellini & Madau (GM; 1996). In their model, soft photons from the jet are reprocessed by broad line region (BLR) clouds. Subsequently, these soft photons are emitted back into the jet where they scatter off electrons in relativistically moving “blobs” to create high-energy $`\gamma `$-rays. Since BL Lac objects generally have very weak emission lines, it may be that they have much less BLR gas available for reprocessing soft photons. If there is an initial outburst of soft photons created via the synchrotron process in a jet “blob,” then BL Lac objects can still create $`\gamma `$-rays via the SSC process, though perhaps with lower amplitude than $`\gamma `$-rays created via the GM model. (This effect is somewhat dependent on adjustable model parameters.) In this scenario, BL Lac objects may undergo several SSC outbursts which fail to reach the EGRET detection threshold, thus giving the appearance that BL Lac objects as a class experience less dramatic and less frequent $`\gamma `$-ray flares. The general properties of the low frequency outbursts of BL Lac objects, however, would be very similar to those of the FSRQs.
In order to achieve a better understanding of the emission mechanisms of $`\gamma `$-rays from blazars a study of the correlated short time scale ($`13`$ days) $`\gamma `$-ray variations with those at other frequency bands is needed. Since the predictions of time delays between the flux changes at various frequencies are different for the individual models for both the seed photons and the nature of the inner jet, this method could provide a means to discriminate between the different models. The differences in the model predictions are discussed in more detail by Marscher et al. (1995). Gamma-ray variability in the different models may have different impacts on the spectral behavior during the build-up and decline of an outburst. Studying the short-time-scale behavior and looking for spectral changes while following a complete outburst may be the key to pin down the basic emission mechanisms.
## 6 Summary
In conclusion, the EGRET results have demonstrated that the $`\gamma `$-ray window is critical for understanding the properties of blazars. The high luminosities and strong time variability observed have pushed theoretical models to emphasize relativistic jets of particles seen at small angles to the line of sight. Future observations with CGRO and successor $`\gamma `$-ray observatories like INTEGRAL and GLAST should play a key role in resolving the physics of these powerful sources.
The author presents this work on behalf of the EGRET Team and acknowledges contributions from D. L. Bertsch, S. D. Bloom, B. L. Dingus, J. A. Esposito, C. E. Fichtel, R. C. Hartman, S. D. Hunter, G. Kanbach, D. A. Kniffen, Y. C. Lin, H. A. Mayer-Hasselwander, L. M. McDonald, P. F. Michelson, C. von Montigny, A. Mücke, P. L. Nolan, M. Pohl, O. Reimer, E. Schneid, P. Sreekumar, and D. J. Thompson. The author would particularly like to thank P. Sreekumar for critical comments on the draft. The author also acknowledges support from NASA Grant NAG5-3696.
## 7 References
Becker, P. A. & Kafatos, M. 1993, in: Proceedings of the 2nd COMPTON Symposium, College Park, MD 1993, AIP Conference Proc. No. 304, eds: C. E. Fichtel, N. Gehrels, & J. P. Norris, pg. 620
Bertsch, D. L., et al. 1989, Proc. of the Gamma Ray Observatory Science Workshop, ed. W. N. Johnson, 2, 52
Blandford, R. D. & Königl, A. 1979, ApJ, 232, 34
Blandford, R. D. & Levinson, A. 1995, ApJ, 441, 79
Bloom, S. D. & Marscher, A. P. 1996, ApJ, 461, 657
Bloom, S. D., et al. 1997, ApJ, 490, L145
Böttcher, M. & Collmar, W. 1998, A&A, 329, L57
Chiang, J. & Mukherjee, R. 1998, ApJ, 496, 752
Collmar, W., et al. 1997, A&A, 328, 33
Dermer, C. D. & Schlickeiser, R. 1994, ApJS, 90, 945
Esposito, J. A., et al. 1998, in preparation
Ghisellini, G. & Madau, P. 1996, MNRAS, 280, 67
Ghisellini, G., Maraschi, L., et al. 1996, “Blazar Continuum Variability,” A. S. P. Conf. Series Vol. 110, pg 436
Hartman, R. C., et al. 1993, ApJ, 407, L41
Hartman, R. C., et al. 1996a, “Blazar Continuum Variability,” A. S. P. Conf. Series Vol. 110, pg 333
Hartman, R. C., et al. 1996b, ApJ, 461, 698
Hartman, R. C., et al. 1997, Proc. of the Fourth Compton Symposium, eds. C. D. Dermer, M. S. Strickman, & J. D. Kurfess, CP410, 307
Hartman, R. C., et al. 1998, ApJ, submitted
Hughes, E. B., et al. 1980, IEEE Trans. Nucl. Sci., NS-27, 364
Hunter, S. D., et al. 1993, ApJ, 409, 134
Hunter, S. D., et al. 1997, ApJ, 481, 205
Kanbach, G., et al. 1988, Space Sci. Rev., 49, 69
Kanbach, G., et al. 1989, Proc. of the Gamma Ray Observatory Science Workshop, ed. W. N. Johnson, 2, 1
Kniffen, D. A., et al. 1993, ApJ, 411, 133
Mannheim, K. & Biermann, P. L. 1989, A&A, 221, 211
Mannheim, K. & Biermann, P. L. 1992, A&A, 53, L21
Maraschi, L., Ghisellini, G., & Celotti, A. 1992, ApJ, 397, L5
Marscher, A. P., et al. 1995, PNAS, 92, 11439
Mattox, J. R., et al. 1993, ApJ, 410, 609
Mattox, J. R., et al. 1996, ApJ, 461, 396
Mattox, J. R., et al. 1997a, ApJ, 481, 95
Mattox, J. R., et al. 1997b, ApJ, 476, 692
McLaughlin, M. A., et al. 1996, ApJ, 473, 763
von Montigny, C., et al. 1995, ApJ, 440, 525
von Montigny, C., et al. 1997, ApJ, 483, 161
Mücke A., et al. 1996, IAU Symposium 175, (Dordrecht: Kluwer)
Mukherjee, R., et al. 1996, ApJ, 470, 831
Mukherjee, R., et al. 1997, ApJ, 490, 116
Pohl, M., et al. 1997, A&A, submitted
Sambruna, R. M., et al. 1997, ApJ, 474, 639
Sikora, M., Begelman, M. C., & Rees, M. J. 1994, ApJ, 421, 153
Sreekumar, P., et al. 1996, ApJ, 464, 628
Sreekumar, P., Stecker, F. W., & Kappadath, S. C. 1997, Proc. of the Fourth Compton Symposium, eds. C. D. Dermer, M. S. Strickman, & J. D. Kurfess, CP410, 307
Sreekumar, P., et al. 1998, ApJ, 494, 523
Stecker, F. W., & Salamon, M. H. 1996, ApJ, 464, 600
Swanenburg, B. N., et al. 1978, Nature, 275, 298
Thompson, D. J., et al. 1993a, ApJS, 86, 629
Thompson, D. J., et al. 1995, ApJ, 101, 259
Thompson, D. J., et al. 1996, ApJS, Vol. 107, 227
Wagner, S. et al. 1995, ApJ, 454, L97
Wehrle, A. et al. 1997, ApJ, submitted
# Measurement-induced quantum diffusion
## Abstract
The dynamics of a “kicked” quantum system undergoing repeated measurements of momentum is investigated. A diffusive behavior is obtained even when the dynamics of the classical counterpart is not chaotic. The diffusion coefficient is explicitly computed for a large class of Hamiltonians and compared to the classical case.
The classical and quantum dynamics of bound Hamiltonian systems under the action of periodic “kicks” are in general very different. Classical systems can follow very complicated trajectories in phase space, while the evolution of the wave function in the quantum case is more regular. This phenomenon, discovered two decades ago, as well as the features of the quantum mechanical suppression of classical chaos and the semiclassical approximation ($`\hbar \to 0`$), are now well understood.
The “kicked” rotator (standard map) has played an important role in the study of the different features of the classical and quantum case. This model is very useful not only because it elucidates several conceptual differences between these two cases, but also for illustrative purposes. One of the most distinctive features of an underlying chaotic behavior is the diffusive character of the dynamics of the classical action variable in phase space. In the quantum case, this diffusion is always suppressed after a sufficiently long time. On the other hand, it has been shown that, in the case of the kicked rotator, a diffusive behavior is obtained even in the quantum case, if a measurement is performed after every kick. The purpose of this Letter is to investigate this situation in more detail, focussing our attention on the role played by the measurement process in the quantum dynamics. We will see that quantum measurements provoke diffusion in a very large class of “kicked” systems, even when the corresponding classical dynamics is regular.
We consider the following Hamiltonian (in action-angle variables)
$$H=H_0(p)+\lambda V(x)\delta _T(t),$$
(1)
where
$$\delta _T(t)=\sum _{k=-\mathrm{\infty }}^{\mathrm{\infty }}\delta (t-kT),$$
(2)
$`T`$ being the period of the perturbation. The interaction $`V(x)`$ is defined for $`x\in [-\pi ,\pi ]`$, with periodic boundary conditions. This Hamiltonian gives rise to the radial twisting map. This is a wide class of maps, including as a particular case the standard map, which describes the local behavior of a perturbed integrable map near resonance. The free Hamiltonian $`H_0`$ has a discrete spectrum and a countable complete set of eigenstates $`\{|m\rangle \}`$:
$$\langle x|m\rangle =\frac{1}{\sqrt{2\pi }}\mathrm{exp}\left(imx\right),\qquad m=0,\pm 1,\pm 2,\mathrm{\dots }$$
(3)
We shall consider the evolution engendered by (1) interspersed with quantum measurements, in the following sense: the system evolves under the action of the free Hamiltonian for $`(N-1)T+\tau <t<NT`$ ($`0<\tau <T`$), undergoes a “kick” at $`t=NT`$, evolves again freely and then undergoes a “measurement” of $`p`$ at $`t=NT+\tau `$. The evolution of the density matrix between measurements is
$`\rho _{NT+\tau }`$ $`=`$ $`U_{\mathrm{free}}(\tau )U_{\mathrm{kick}}U_{\mathrm{free}}(T-\tau )\rho _{(N-1)T+\tau }U_{\mathrm{free}}^{\dagger }(T-\tau )U_{\mathrm{kick}}^{\dagger }U_{\mathrm{free}}^{\dagger }(\tau ),`$ (5)
$`U_{\mathrm{kick}}=\mathrm{exp}\left(-i\lambda V/\hbar \right),U_{\mathrm{free}}(t)=\mathrm{exp}\left(-iH_0t/\hbar \right).`$
At each measurement, the wave function is “projected” onto the $`n`$th eigenstate of $`p`$ with probability
$$P_n(NT+\tau )=\mathrm{Tr}(|n\rangle \langle n|\rho _{NT+\tau })$$
(6)
and the off-diagonal terms of the density matrix disappear. The occupation probabilities $`P_n(t)`$ change discontinuously at times $`NT`$ and their evolution is governed by the master equation
$$P_n(N)=\sum _mW_{nm}P_m(N-1),$$
(7)
where we defined, with a little abuse of notation,
$$P_n(N)\equiv P_n(NT+\tau )$$
(8)
and where
$$W_{nm}\equiv |\langle n|U_{\mathrm{free}}(\tau )U_{\mathrm{kick}}U_{\mathrm{free}}(T-\tau )|m\rangle |^2=|\langle n|U_{\mathrm{kick}}|m\rangle |^2$$
(9)
are the transition probabilities. Although the map (7) depends on $`\lambda `$, $`V`$, $`H_0`$ in a complicated way, very general conclusions can be drawn about the average value of a generic regular function of momentum $`g(p)`$. Let
$$\langle g(p)\rangle _t\equiv \mathrm{Tr}(g(p)\rho (t))=\sum _ng(p_n)P_n(t),$$
(10)
where $`p|n\rangle =p_n|n\rangle `$ $`(p_n=n\hbar )`$, and consider
$$\langle g(p)\rangle _N=\sum _ng(p_n)P_n(N)=\sum _{n,m}g(p_n)W_{nm}P_m(N-1),$$
(11)
where $`\langle g(p)\rangle _N\equiv \langle g(p)\rangle _{NT+\tau }`$ is the average value of $`g`$ after $`N`$ kicks. Substitute $`W_{nm}`$ from (9) to obtain
$`\langle g(p)\rangle _N`$ $`=`$ $`{\displaystyle \sum _{n,m}}g(p_n)\langle m|U_{\mathrm{kick}}^{\dagger }|n\rangle \langle n|U_{\mathrm{kick}}|m\rangle P_m(N-1)`$ (12)
$`=`$ $`{\displaystyle \sum _m}\langle m|U_{\mathrm{kick}}^{\dagger }g(p)U_{\mathrm{kick}}|m\rangle P_m(N-1),`$ (13)
where we used $`g(p)|n\rangle =g(p_n)|n\rangle `$. We are mostly interested in the evolution of the quantities $`p`$ and $`p^2`$. By the Baker-Hausdorff lemma
$$U_{\mathrm{kick}}^{\dagger }g(p)U_{\mathrm{kick}}=g(p)+i\frac{\lambda }{\hbar }[V,g(p)]+\frac{1}{2!}\left(\frac{i\lambda }{\hbar }\right)^2[V,[V,g(p)]]+\mathrm{\dots },$$
(14)
we obtain the exact expressions
$`U_{\mathrm{kick}}^{\dagger }pU_{\mathrm{kick}}`$ $`=`$ $`p+i{\displaystyle \frac{\lambda }{\hbar }}[V,p],`$ (15)
$`U_{\mathrm{kick}}^{\dagger }p^2U_{\mathrm{kick}}`$ $`=`$ $`p^2+i{\displaystyle \frac{\lambda }{\hbar }}[V,p^2]+\lambda ^2\left(V^{\prime }\right)^2,`$ (16)
where prime denotes derivative. Substituting into (13) and iterating, one gets
$`\langle p\rangle _N`$ $`=`$ $`\langle p\rangle _{N-1}=\langle p\rangle _0,`$ (17)
$`\langle p^2\rangle _N`$ $`=`$ $`\langle p^2\rangle _{N-1}+\lambda ^2\langle f^2\rangle =\langle p^2\rangle _0+\lambda ^2\langle f^2\rangle N,`$ (18)
where $`f=-V^{\prime }(x)`$ is the force and
$$\langle f^2\rangle =\mathrm{Tr}\left(f^2\rho _{NT+\tau }\right)=\sum _n\langle n|f^2|n\rangle P_n(N)=\frac{1}{2\pi }\int _{-\pi }^{\pi }dx\,f^2(x)$$
(19)
is a constant that does not depend on $`N`$ because $`\langle n|f^2|n\rangle `$ is independent of the state $`|n\rangle `$ \[see (3)\] and $`\sum _nP_n=1`$. In particular, the kinetic energy $`K=p^2/2m`$ grows at a constant rate: $`\langle K\rangle _N=\langle K\rangle _0+\lambda ^2\langle f^2\rangle N/2m`$. By using (17)-(18) we obtain the friction coefficient
$$F=\frac{\langle p\rangle _N-\langle p\rangle _0}{NT}=0$$
(20)
and the diffusion coefficient
$$D=\frac{\langle \mathrm{\Delta }p^2\rangle _N-\langle \mathrm{\Delta }p^2\rangle _0}{NT}=\frac{\lambda ^2\langle f^2\rangle }{T},$$
(21)
where $`\langle \mathrm{\Delta }p^2\rangle _N=\langle p^2\rangle _N-\langle p\rangle _N^2`$. The above results are exact: their derivation involves no approximation. This shows that this class of Hamiltonian systems, if “measured” after every kick, has a constant diffusion rate in momentum with no friction, for any perturbation $`V=V(x)`$.
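For completeness, a short check (our own algebra, implicit in the text) of why the Baker-Hausdorff series terminates after the terms shown, so that (15)-(16), and hence (17)-(18), are exact:
$$[V(x),p]=i\hbar V^{\prime }(x),\qquad [V,[V,p]]=[V(x),i\hbar V^{\prime }(x)]=0,$$
$$[V(x),p^2]=i\hbar (pV^{\prime }+V^{\prime }p),\qquad [V,[V,p^2]]=-2\hbar ^2(V^{\prime })^2,$$
so that $`\frac{1}{2!}\left(\frac{i\lambda }{\hbar }\right)^2[V,[V,p^2]]=\lambda ^2(V^{\prime })^2`$ and all higher nested commutators vanish.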
In particular, in the seminal kicked-rotator model, one gets ($`H_0=p^2/2I`$ and $`V=\mathrm{cos}x`$)
$$D=\frac{\lambda ^2}{2T}:$$
(22)
this is nothing but the diffusion constant obtained in the classical case . Notice that one obtains the quasilinear diffusion constant without higher-order correction .
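The scaling (18) and the value (22) can be checked by iterating the master equation (7) directly. For $`V=\mathrm{cos}x`$ the transition probabilities reduce to squared Bessel functions, $`W_{nm}=J_{n-m}^2(\lambda /\hbar )`$ (Jacobi-Anger expansion), and $`\langle p^2\rangle `$ should grow by $`\lambda ^2/2`$ per kick. A minimal numerical sketch of our own, with illustrative parameters and $`\hbar =1`$:

```python
import numpy as np
from scipy.special import jv

hbar, lam, nkicks, nmax = 1.0, 2.0, 200, 400
n = np.arange(-nmax, nmax + 1)             # truncated momentum basis, p_n = n*hbar

# W_nm = |<n|exp(-i lam cos(x)/hbar)|m>|^2 = J_{n-m}(lam/hbar)^2
W = jv(n[:, None] - n[None, :], lam / hbar)**2

P = np.zeros(n.size)
P[nmax] = 1.0                              # start in the p = 0 eigenstate

p2 = []
for _ in range(nkicks):
    P = W @ P                              # master equation (7)
    p2.append(np.sum((hbar * n)**2 * P))

slope = (p2[-1] - p2[0]) / (nkicks - 1)    # growth of <p^2> per kick
print("measured: %.3f   expected lam^2/2 = %.3f" % (slope, lam**2 / 2))
```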
The above results may seem somewhat puzzling, essentially because one finds that in the quantum case, when repeated measurements of momentum (action variable) are performed on the system, a chaotic behavior is obtained for every value of $`\lambda `$ and for any potential $`V(x)`$. On the other hand, in the classical case, diffusion occurs only for some $`V(x)`$, when $`\lambda `$ exceeds some critical value $`\lambda _{\mathrm{crit}}`$. (For instance, the kicked rotator displays diffusion for $`\lambda \ge \lambda _{\mathrm{crit}}\approx 0.972`$.) It appears, therefore, that quantum measurements not only yield a chaotic behavior in a quantum context, they even produce chaos when the classical motion is regular. In order to bring to light the causes of this peculiar situation, it is necessary to look at the classical case. The classical map for the Hamiltonian (1) reads
$`x_N`$ $`=`$ $`x_{N-1}+H_0^{\prime }(p_{N-1})T,`$ (23)
$`p_N`$ $`=`$ $`p_{N-1}-\lambda V^{\prime }(x_N).`$ (24)
A quantum measurement of $`p`$ yields an exact determination of momentum $`p`$ and, as a consequence, makes position $`x`$ completely undetermined (uncertainty principle). This situation has no classical analog: it is inherently quantal. However, the classical “map” that best mimics this physical picture is obtained by assuming that position $`x_N`$ at time $`\tau `$ after each kick (i.e. when the quantum counterpart undergoes a measurement) behaves like a random variable $`\xi _N`$ uniformly distributed over $`[-\pi ,\pi ]`$:
$`x_N`$ $`=`$ $`\xi _N,`$ (25)
$`p_N`$ $`=`$ $`p_{N-1}-\lambda V^{\prime }(x_N).`$ (26)
Introducing the ensemble average $`\langle \mathrm{\cdots }\rangle `$ over the stochastic process (i.e. over the set of independent random variables $`\{\xi _k\}_{k\le N}`$), it is straightforward to obtain
$`\langle p_N\rangle `$ $`=`$ $`\langle p_{N-1}\rangle -\lambda \langle V^{\prime }(\xi _N)\rangle ,`$ (27)
$`\mathrm{\Delta }p_N^2`$ $`=`$ $`\mathrm{\Delta }p_{N-1}^2+\lambda ^2\left(\langle V^{\prime }(\xi _N)^2\rangle -\langle V^{\prime }(\xi _N)\rangle ^2\right),`$ (28)
where $`\mathrm{\Delta }p_N^2=\langle p_N^2\rangle -\langle p_N\rangle ^2`$ and
$$\langle g(\xi )\rangle \equiv \frac{1}{2\pi }\int _{-\pi }^{\pi }g(\xi )\,d\xi $$
(29)
is the average over the single random variable $`\xi `$ \[this coincides with the quantum average: see for instance the last term of (19)\]. In deriving (28), the average of $`V^{\prime }(\xi _N)p_{N-1}`$ was factorized because $`p_{N-1}`$ depends only on $`\{\xi _k\}_{k\le N-1}`$. The average of $`V^{\prime }(\xi _N)`$ in (28) vanishes due to the periodic boundary conditions on $`V`$, so that
$$\mathrm{\Delta }p_N^2=\mathrm{\Delta }p_{N-1}^2+\lambda ^2\langle f^2\rangle $$
(30)
and the momentum diffuses at the rate (21), as in the quantum case with measurements. What we obtain in this case is a diffusion taking place in the whole phase space, without effects due to the presence of adiabatic islands.
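The same diffusion rate follows from a direct Monte Carlo iteration of the randomized map (25)-(26); a sketch for $`V=\mathrm{cos}x`$ (so that $`\langle f^2\rangle =1/2`$), with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, nkicks, ntraj = 2.0, 200, 20000

p = np.zeros(ntraj)
for _ in range(nkicks):
    x = rng.uniform(-np.pi, np.pi, ntraj)  # x_N fully randomized, eq. (25)
    p += lam * np.sin(x)                   # p_N = p_{N-1} - lam*V'(x_N), V = cos x

print("Delta p^2 per kick: %.3f   expected lam^2/2 = %.3f"
      % (p.var() / nkicks, lam**2 / 2))
```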
It is interesting to frame our conclusion in a proper context, by comparing the different cases analyzed: (A) a classical system, under the action of a suitable kicked perturbation, displays a diffusive behavior if the coupling constant exceeds a certain threshold (KAM theorem); (B) on the other hand, in its quantum counterpart, this diffusion is always suppressed. (C) The introduction of measurements between kicks circumvents this limitation, yielding diffusion in the quantum case. Moreover, diffusion takes place for any potential and all values of the coupling constant (namely, even when the classical motion is regular). (D) The same behavior is displayed by a “randomized classical map,” in the sense explained above. These conclusions are sketched in Table 1.
Table 1: Classical vs quantum diffusion
| A | classical | diffusion for $`\lambda >\lambda _{\mathrm{crit}}`$ |
| --- | --- | --- |
| B | quantum | no diffusion |
| C | quantum + measurements | diffusion $`\forall \lambda `$ |
| D | classical + random | diffusion $`\forall \lambda `$ |
As we have seen, the effect of measurements is basically equivalent to a complete randomization of the classical angle variable $`x`$, at least for the calculation of the diffusion coefficient in the chaotic regime. There are two points which deserve clarification. Indeed, one might think that: i) the randomized classical map (26) and the quantum map with measurements (7), (17)-(21) are identical; ii) the diffusive features in a quantum context are to be ascribed to the projection process (6) (hence to a non-unitary dynamics). Both expectations would be incorrect. As for i), there are corrections in $`\hbar `$: it is indeed straightforward to show that the two maps have equal moments up to third order, while the fourth moment displays a difference of order $`O(\hbar ^2)`$:
$$\langle p^4\rangle _N-\langle p^4\rangle _{N-1}=\langle p_N^4\rangle -\langle p_{N-1}^4\rangle +\lambda ^2\hbar ^2\langle (f^{\prime })^2\rangle .$$
(31)
As for ii), it suffices to observe that the very same results can be obtained by making only use of a purely unitary evolution. To this end, we must give a model for measurement, by looking more closely at the physics of such a process. When a quantum measurement is performed, the relevant information is recorded in an apparatus. For example, the measured system scatters one or more photons (phonons) and each $`p`$-eigenstate gets entangled with the photon (phonon) wave function. A process of this sort can be schematized by associating an additional degree of freedom (a “spin” is the simplest possible case) with every momentum eigenstate, at time $`\tau `$ after every kick. This is easily accomplished by adding the following “decomposition” Hamiltonian to (1)
$$H_{\mathrm{dec}}=\frac{\pi }{2}\sum _{n,k}|n\rangle \langle n|\sigma ^{(n,k)}\delta (t-kT-\tau ),$$
(32)
where $`|n\rangle `$ is an eigenstate of $`p`$ and $`\sigma ^{(n,k)}`$ is the first Pauli matrix acting on channel $`(n,k)`$, whose action is given by
$$\sigma ^{(n,k)}|\pm \rangle _{(n,k)}=|\mp \rangle _{(n,k)},$$
(33)
where $`|+\rangle _{(n,k)},|-\rangle _{(n,k)}`$ denote spin up, down, respectively, in “channel” $`(n,k)`$. Let us prepare the system in the initial ($`t=0^+`$) state
$$|\mathrm{\Psi }_{\mathrm{in}}\rangle =\sum _mc_m|m\rangle \prod _{k,n}|-\rangle _{(n,k)}$$
(34)
(all “spins” down). For the sake of simplicity, we shall concentrate our attention on the first two kicks. In the same notation as in (8), the evolution of the state $`|\mathrm{\Psi }(N)\rangle \equiv |\mathrm{\Psi }(NT+\tau ^+)\rangle `$ reads
$$|\mathrm{\Psi }(0)\rangle =-i\sum _mc_m^{\prime }|m\rangle |+\rangle _{(m,0)}\prod _{k\ge 1,n}|-\rangle _{(n,k)},$$
(35)
$$|\mathrm{\Psi }(1)\rangle =(-i)^2\sum _{\ell ,m}|\ell \rangle |+\rangle _{(\ell ,1)}A_{\ell m}c_m^{\prime }|+\rangle _{(m,0)}\prod _{k\ge 2,n}|-\rangle _{(n,k)},$$
(36)
$$|\mathrm{\Psi }(2)\rangle =(-i)^3\sum _{j,\ell ,m}|j\rangle |+\rangle _{(j,2)}A_{j\ell }|+\rangle _{(\ell ,1)}A_{\ell m}c_m^{\prime }|+\rangle _{(m,0)}\prod _{k\ge 3,n}|-\rangle _{(n,k)},$$
(37)
where $`c_m^{\prime }=c_m\mathrm{exp}[-iH_0(p_m)\tau /\hbar ]`$ and
$$A_{\ell m}\equiv \langle \ell |U_{\mathrm{free}}(\tau )U_{\mathrm{kick}}U_{\mathrm{free}}(T-\tau )|m\rangle $$
(38)
is the transition amplitude ($`W_{\ell m}=|A_{\ell m}|^2`$). We see that at time $`\tau `$ after the $`k`$th kick, the $`n`$th eigenstate of the system becomes associated with spin up in channel $`(n,k)`$. By using (36)-(37) one readily shows that the occupation probabilities evolve according to
$$P_n(2)\equiv \langle \mathrm{\Psi }(2)|\left(|n\rangle \langle n|\otimes \mathbf{1}_{\mathrm{spins}}\right)|\mathrm{\Psi }(2)\rangle =\sum _mW_{nm}P_m(1).$$
(39)
The generalization to $`N`$ kicks is straightforward and it is very easy to obtain the same master equation (7). The observables of the quantum particle therefore evolve as in (11): in particular, the average value of the quantum observable $`\stackrel{~}{p}=p\otimes \mathbf{1}_{\mathrm{spins}}`$ displays diffusion with the coefficients (20)-(21). This shows that projection operators are not necessary to obtain a quantal diffusive behavior and the unitary dynamics engendered by (1) and (32) yields the same results.
Our analysis can be easily generalized to radial twisting maps in higher dimensions. It would be interesting to extend it to a slightly different class of Hamiltonians, such as those used in the literature to analyze the effect of an oscillating perturbation on an atomic system.
We thank Hiromichi Nakazato and Mikio Namiki for early discussions.
# Universal Pion Freeze-out Phase-Space Density
G. Bertsch has indicated a possibility to measure the pion freeze-out phase-space density and thereby test the local thermal equilibrium in a pion source. In case of thermal equilibrium at temperature $`T`$, identical pions of energy $`E`$ would follow the Bose-Einstein distribution
$$f=\frac{1}{e^{E/T}-1}.$$
(1)
An average of this function over different phase-space regions is the quantity to be measured. When the $`p_T`$-spectrum is parameterized by an exponential with $`T_{\mathrm{eff}}(y)`$ being the inverse slope parameter, averaging over the spatial coordinates yields
$$\langle f\rangle (p_T,y)=\frac{\frac{\sqrt{\pi }}{2}\frac{\sqrt{\lambda _{\mathrm{dir}}(p_T,y)}}{E_pT_{\mathrm{eff}}^2(y)}\mathrm{exp}\left(-\frac{p_T}{T_{\mathrm{eff}}(y)}\right)\frac{dn^{-}}{dy}(y)}{R_s(p_T,y)\sqrt{R_o^2(p_T,y)R_l^2(p_T,y)-R_{ol}^4(p_T,y)}}.$$
(2)
This equation comprises information from essentially two different classes of experimental results: the single particle momentum spectra ($`dn^{-}/dy`$, $`T_{eff}`$), and the two pion Bose-Einstein correlations ($`R_s,R_o,R_l,R_{ol},\lambda _{dir}`$). We have calculated $`\langle f\rangle (p_T,y)`$ for the S-S, S-Cu, S-Ag, S-Au, S-Pb and Pb-Pb data from the experiments NA35, NA49, NA44, and for the $`\pi `$-p data from the NA22 experiment at CERN-SPS.
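To make the bookkeeping in Eqs. (1)-(2) explicit, the following sketch evaluates $`\langle f\rangle (p_T)`$ at midrapidity and compares it with the Bose-Einstein value at $`T`$ = 120 MeV. All source parameters below ($`T_{\mathrm{eff}}`$, $`dn^{-}/dy`$, radii, $`\lambda _{\mathrm{dir}}`$) are illustrative placeholders, not the published values for any system, and Eq. (2) is assumed to be written in natural units (the factor $`(\hbar c)^3`$ below is our bookkeeping to make the ratio dimensionless).

```python
import numpy as np

HBARC = 0.1973  # GeV fm

def avg_phase_space_density(pT, T_eff, dndy, lam_dir, Rs, Ro, Rl, Rol, m_pi=0.1396):
    """Eq. (2) at y = 0: pT, T_eff, m_pi in GeV; HBT radii in fm."""
    Ep = np.sqrt(m_pi**2 + pT**2)
    numer = (np.sqrt(np.pi) / 2.0) * np.sqrt(lam_dir) / (Ep * T_eff**2) \
            * np.exp(-pT / T_eff) * dndy
    denom = Rs * np.sqrt(Ro**2 * Rl**2 - Rol**4)
    return numer * HBARC**3 / denom

def bose_einstein(pT, T=0.120, m_pi=0.1396):
    """Eq. (1) with E the pion energy at midrapidity."""
    return 1.0 / np.expm1(np.sqrt(m_pi**2 + pT**2) / T)

pT = np.linspace(0.05, 0.6, 6)
f_meas = avg_phase_space_density(pT, T_eff=0.180, dndy=170.0, lam_dir=0.6,
                                 Rs=5.5, Ro=5.5, Rl=6.0, Rol=0.0)
for p, fm, fb in zip(pT, f_meas, bose_einstein(pT)):
    print("pT = %.2f GeV   <f> = %.3f   BE(120 MeV) = %.3f" % (p, fm, fb))
```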
From the results for $`\langle f\rangle `$ as a function of $`p_T`$, presented in Fig. 1, one may conclude:
1. universal phase-space density
All the nuclear collision data from the SPS in Fig. 1 are indistinguishably similar, in spite of a factor of $`\sim `$ 10 difference in multiplicity density.
2. agreement with Bose-Einstein distribution
Using simultaneously pion spectra and pion correlations NA49 has disentangled thermal motion from collective expansion . Taking the NA49 result for the local freeze-out temperature $`T`$ = 120 MeV as the only parameter in Eq. (1) one indeed finds good agreement with the data. This is consistent with the thermal nature of the pion source, and inconsistent with the presence of a hypothetic pion condensate.
3. radial flow
Looking in more detail, one finds that the data indicate a somewhat slower decrease with increasing $`p_T`$ than the Bose-Einstein curve. This is most likely due to radial collective expansion which adds extra transverse momentum to particles, i.e. the local $`f`$ values appear in the measurement at a $`p_T`$ that is higher than the local $`p_T`$ in the source reference frame. A detailed study is under way.
4. rapidity dependence
A certain departure from the universal scaling is seen for the data at rapidities close to the projectile rapidity, both at AGS and SPS; moreover, the two results are consistent.
5. high temperature decoupling in $`\pi p`$ collisions
In contrast to freeze-out in nuclear collisions which takes place in two steps (chemical at $`T\approx `$ 170-180 MeV and thermal at $`T\approx `$ 120 MeV), pion production in $`\pi `$-p collisions is essentially immediate, without the second evolution stage, and therefore freeze-out temperatures of around 180 MeV should be expected. The data are indeed consistent with this expectation, as seen in Fig. 1.
# A phenomenological electronic stopping power model for molecular dynamics and Monte Carlo simulation of ion implantation into silicon
## I Introduction
Ion implantation in semiconductors is an important technology in integrated circuit device fabrication . A reliable description of as-implanted profiles and the resulting damage is needed for technological development, such as device design and modeling, as well as process optimization and control in the fabrication environment. For semiconductor devices whose physical dimensions are of order of submicrons or smaller, low implant energies and reduction of thermal processing are necessary, resulting in more prominent channeling effects in the as-implanted profiles and less post-implant diffusion. At these physical dimensions, it is essential to obtain the two- or three-dimensional details of the ever shallower and more compact dopant and damage profiles for post-implant diffusion simulations.
Study of the energy loss of channeled particles has a long history , for the channeling features can be used to elucidate the energy-loss mechanisms. Earlier analytical treatments of the implant profiles based on moment distributions, derived from the Lindhard-Scharff-Schiott theory (LSS) , preclude channeling because of the amorphous nature of the targets assumed in the studies. Later, it was realized that, because of the channeling effect, electronic stopping power plays a much more significant role in ion implantation into crystalline solids than otherwise would be deduced from the application of the LSS theory to amorphous materials. It is especially true for heavy ion implants at low energies, such as arsenic ions in the energy range below $`700`$keV . For implantation into silicon, most Monte Carlo (MC) models are only concerned with boron implants, and have not modeled arsenic implants accurately with an electronic stopping power model consistent with that used for boron . As will be shown below, the phenomenological model we developed for electronic stopping power can be implemented into a Monte Carlo simulation program for both boron and arsenic implants in different channels with equal success over a wide range of implant energies.
In addition to Monte Carlo simulations with the binary collision approximation (BCA), molecular dynamics (MD) incorporating multiple interactions via many-body potentials can also be used to simulate the behavior of energetic ions in amorphous or crystalline silicon. This method is especially applicable at low energies, for which many-body and multiple interactions are increasingly important. Although it is well known that the BCA is valid for high incident energies ($`\sim `$0.1 keV up to $`\sim `$MeV; the upper limit is set by relativistic effects), in a cascade, especially one initiated by a relatively low energy ion, the energy of the ions will decrease and eventually reach the lower validity limit of the BCA, at which many-body effects become important. For crystals of high symmetry, the BCA can be modified to account for simultaneous collisions in channels, and MD results can provide good insight into how to successfully modify the BCA in this situation. Moreover, MD results can be compared to BCA Monte Carlo simulations and used to establish the low energy limits of the binary collision approximation.
An extremely important issue in deploying molecular dynamics to model collision processes in covalent and ionic solids is how to incorporate energy transfer mechanisms between electrons and ions . A good description of dynamical processes in energetic collisions, such as initial displacement damage, relaxation processes, and the cooling phase as the energy dissipates into the ambient medium, requires a theoretical framework that encompasses all interactions between ion-ion, ion-electron, and their interaction with the thermal surroundings. Especially, it should capture the nonequilibrium thermodynamic nature of these physical processes involving a wide range of energy scales, from a low energy electron-phonon interaction regime to a high energy radiation damage regime. . Traditional MD simulations can capture the thermal behavior of an insulator. Since they do not take into account coupling between the phonons and the conduction electron system, obviously, these simulations underestimate the heat-transfer rate for noninsulating materials. In addition to lattice thermal conductivity, the issue of the conductivity due to electrons must be addressed. Furthermore, a correct description of the electronic stopping power should be incorporated into MD simulations for high energy implantation. For example, in sputtering processes by particle bombardment, examination of MD simulations with and without inelastic electronic energy loss has established that, independent of the ion’s mass or energy, the inelastic electronic energy losses by target atoms within the collision cascade have greater influence on the ejected atom yield than the ion’s electronic losses . This is in contrast to the belief that the electronic loss mechanism is important only for cascades initiated by light ions or by heavy ions at high bombardment energies . Although a convincing experimental verification of the electronic effects in sputtering is still lacking, the effects should be relevant to defect production rates, defect mobility and annealing, etc. . Also as shown in Ref. , traditional MD simulations produce extremely long channeling tails due to the absence of electronic stopping. In order to incorporate the ion-electron interaction into molecular dynamics simulations, a simple scheme was proposed by adding a phenomenological term, which describes the inelastic electronic stopping in the high energy radiation damage regime, while also capturing the thermal conductivity by coupling low energy ions to a thermal reservoir . The empirical expression used in Ref. for the strength of the ion-electron coupling is a function of the local electronic density. At the low charge density limit, a density functional result was reproduced , and at the high charge density limit, the linear response results were captured. In the same spirit, we develop a stochastic MD model incorporating the electronic stopping power as a damping mechanism. Our model is based on an effective charge theory with the electronic stopping power factorized into two parts. One is the effective charge of the incident ion, which is a globally averaged quantity determined by the average unbound electron density in the medium. The other factor is the electronic stopping power for a proton, for which the same local density functional results are used. 
Naturally, our damping mechanism incorporates both regimes, i.e., the electronic stopping regime and the electron-phonon interaction regime, into our molecular dynamics simulation, because the inelastic loss for a proton exhibits a similar density dependence to that prescribed in Ref. , with additional modifications due to the velocity dependence of the effective charge. In the present work, however, we emphasize mainly the electronic stopping power in the high energy regime ($`\sim `$keV to $`\sim `$100 keV), i.e., the electrons behave as an energy sink. The validity of the model for the electronic heat conduction regime will be discussed elsewhere. In the following, for boron and arsenic implants into single-crystal silicon in both the channeling and off-axis directions, we will show that a classical MD with the physically-based damping mechanism can generate dopant profiles in excellent agreement with experimentally-measured profiles obtained by secondary-ion mass spectrometry (SIMS).
As discussed above, the phenomenological model we have developed for electronic stopping power is successfully implemented into both BCA Monte Carlo programs and MD simulations. Wide applicability requires that a model be valid for different implant species over a wide range of energies. We emphasize that this electronic stopping model is accurate both for boron and arsenic implants, thus providing a crucial test of the generality and validity of the model in capturing the correct physics of electronic stopping.
The paper is organized as follows. In Sec. II, we present the phenomenological model for electronic stopping power in detail. Atomic units $`e=\mathrm{}=m_e=1`$ are used throughout the paper unless otherwise specified. In Sec. III, we briefly discuss different electronic stopping models implemented on the versatile BCA Monte Carlo simulation platform, MARLOWE . Then the results of the BCA Monte Carlo simulations on a rare-event algorithm enhanced UT-MARLOWE platform with our electronic stopping model are summarized. In Sec. IV, the results of the MD with the inelastic electronic energy loss are presented. In Sec. V, we make closing remarks and point out directions for future studies.
## II The Model
According to the Brandt-Kitagawa (BK) theory , the electronic stopping power of an ion can be factorized into two components based on an effective charge scaling argument. One is the effective charge of the ion (if not fully ionized), $`Z_1^{*}`$, which is in general a function of ion velocity $`v`$ and the charge density of the target $`\rho `$, or equivalently, the one electron radius $`r_s=[3/(4\pi \rho (𝐱))]^{1/3}`$; the other is the electronic stopping power for a proton, $`S_p(v,r_s)`$. In the local density approximation, therefore, the total inelastic energy loss $`\mathrm{\Delta }E_e`$ of an ion of constant velocity $`v`$ is
$$\mathrm{\Delta }E_e=\int [Z_1^{*}(v,r_s)]^2S_p(v,r_s)\,dx,$$
(1)
where the integral is along the ion path. Since the effective charge is a continuous function of electronic density, mathematically, it is always possible to find a mean value, $`r_s^{*}`$, of $`r_s`$, such that Eq. (1) can be rewritten as
$$\mathrm{\Delta }E_e=[Z_1^{*}(v,r_s^{*})]^2\int S_p(v,r_s)\,dx.$$
(2)
If the effective charge is a slowly varying function of space, physically, this means that $`r_s^{*}`$ describes an average number of unbound electrons in the sea and thus can be assumed to determine the Fermi surface. Therefore, we have the relation between the Fermi velocity and $`r_s^{*}`$:
$$v_F=\frac{1}{\alpha r_s^{*}},$$
(3)
where $`\alpha =[4/(9\pi )]^{1/3}`$. We note that this $`r_s^{*}`$ will be the only tunable parameter in our electronic stopping power model.
Next we turn to a simple statistical model for this partially ionized, moving projectile. For an ion with $`N=Z_1-Q`$ bound electrons, where $`Q`$ is the charge number of the ion of atomic number $`Z_1`$, a radially symmetric charge density
$$\rho _e=\frac{N}{4\pi \mathrm{\Lambda }^2r}\mathrm{exp}\left(-\frac{r}{\mathrm{\Lambda }}\right)$$
(4)
is used in the BK theory. Here $`\mathrm{\Lambda }`$ is the ion size parameter, a function of the fractional ionization, $`q=(Z_1-N)/Z_1`$. The total energy of the electrons comes from the sum of the kinetic energy estimated by the local density approximation, the electron-electron interaction in the Hartree approximation weighted by a variational parameter $`\lambda `$ to account for correlation, and the Coulomb energy of the electrons in the electric field of the nucleus. A variational approach minimizing the total energy leads to the following dependence of the ion size on the ionization fraction $`q`$:
$$\mathrm{\Lambda }=\frac{2a_0(1-q)^{2/3}}{Z_1^{1/3}\left[1-(1-q)/7\right]},$$
(5)
where $`a_0=0.24005`$. In the BK theory, the generalized Lindhard theory of the electronic stopping in a homogeneous electron gas with an electron density $`n=3/(4\pi \overline{r}_s^3)`$ is used. The total electronic stopping is estimated from the sum of the energy loss in soft, distant collisions, i.e., small momentum transfers with target electrons seeing a charge $`qZ_1`$, and the energy loss to the target electrons experiencing increased nuclear interaction in hard, close collisions corresponding to large momentum transfers. As extensively discussed in the literature (see, e.g., Ref. , and references therein), it is assumed that the charge state of a proton in a solid is unity. Given an ionization fraction $`q`$ and using the scaling argument for the ratio of ion stopping to the proton stopping at the same velocity, the BK theory produces a simple expression for the fractional effective charge of an ion
$$\gamma (\overline{r}_s)=q+C(\overline{r}_s)(1-q)\mathrm{ln}\left[1+\left(\frac{4\mathrm{\Lambda }}{\overline{r}_s}\right)^2\right],$$
(6)
where $`C(\overline{r}_s)`$ is weakly dependent on the target and has a numerical value of about $`1/2`$. We will set $`C=0.5`$ below. Then, the effective charge is
$$Z_1^{*}=Z_1\gamma (\overline{r}_s).$$
(7)
For our model, using the procedure (2) outlined above, this dependence on $`\overline{r}_s`$ is identified with a dependence on the mean value $`r_s^{*}`$. Therefore, the effective charge $`Z_1^{*}`$ has a nonlocal, i.e., spatially independent, character and depends on the Fermi surface. In the above discussion, as can be seen, $`q`$ is a parameter which is not fixed by the BK theory. For obtaining this ionization fraction, there are velocity and energy criteria originally proposed by Bohr and Lamb , respectively. Kitagawa also used a statistical argument to justify scaling analyses in terms of the scaling parameter $`v_1/(v_BZ_1^{2/3})`$ . Recently, the issue of which stripping criterion can give rise to a better physical understanding has been raised . However, in light of the large amount of experimental data employed in Ref. to extract an ionization scaling consistent with the Brandt-Kitagawa theory, we will use this empirically verified scaling in our model. As summarized in Ref. , a new criterion in the BK approach is proposed , i.e., a relative velocity criterion, which assumes that the electrons of the ion which have an orbital velocity lower than the relative velocity between the ion and the electrons in the medium are stripped off. The relative velocity $`v_r`$ is obtained by averaging over the difference between the ion velocity $`𝐯_1`$ and the electron velocity $`𝐯_e`$ under the assumption that the conduction electrons are a free electron gas in the ground state, whose velocity distribution is therefore isotropic. Performing a further averaging of $`𝐯_e`$ over the Fermi sphere leads to
$`v_r`$ $`=`$ $`v_1\left(1+{\displaystyle \frac{v_F^2}{5v_1^2}}\right)\text{ for }v_1\ge v_F\text{,}`$ (8)
$`v_r`$ $`=`$ $`{\displaystyle \frac{3v_F}{4}}\left(1+{\displaystyle \frac{2v_1^2}{3v_F^2}}-{\displaystyle \frac{v_1^4}{15v_F^4}}\right)\text{ for }v_1<v_F\text{.}`$ (9)
For the ionization scaling, a form of the Northcliffe type is then assumed for the scaling variable, i.e., the reduced relative velocity:
$$y_r=\frac{v_r}{v_BZ_1^{2/3}},$$
(10)
where $`v_B`$ is the Bohr velocity and $`v_B=1`$ in our units. The extensive experimental data for ions $`3\le Z_1\le 92`$ are used in Ref. to determine
$$q=1-\mathrm{exp}[-0.95(y_r-0.07)].$$
(11)
In Ref. , an ionization scaling fit with even tighter bunching of the experimental data along the fit is presented. However, this approach entails a much more involved computational procedure . The accuracy level of Eq. (11) is adequate for our present purposes.
In our model, the electronic stopping power for a proton is derived from a nonlinear density-functional formalism . In the linear response theory, the energy loss per unit path length of a proton moving at velocity $`v`$ in the electron gas is obtained by Ritchie
$$\left(\frac{dE}{dx}\right)_\mathrm{R}=\frac{2v}{3\pi }\left[\mathrm{ln}\left(1+\frac{\pi }{\alpha r_s}\right)-\frac{1}{1+\alpha r_s/\pi }\right],$$
(12)
using an approximation to the full random-phase approximation dielectric function, which amounts to the exponential screening potential around the ion induced by density fluctuations of the electrons. The nonlinear, density-functional calculation based on the formalism of Hohenberg and Kohn, and Kohn and Sham has been performed to obtain the charge density and scattering phase shifts for the conduction band as a function of energy self-consistently. The final stopping power for a proton is obtained via
$$\frac{dE}{dx}=\frac{3v}{k_Fr_s^3}\sum _{l=0}^{\mathrm{\infty }}(l+1)\mathrm{sin}^2\left[\delta _l(E_F)-\delta _{l+1}(E_F)\right],$$
(13)
where $`\delta _l(E_F)`$ is the phase shift at the Fermi energy for scattering of an electron of angular momentum $`l`$ and $`k_F`$ is the Fermi momentum . As shown in Ref. , comparison with experimental data demonstrates that the density functional treatment provides an improvement over the linear response (dielectric) result , which underestimates the stopping powers. In our implementation, the result of the nonlinear, density functional formalism for the electronic stopping power for a proton is used, which can be expressed as
$$S_p(v,r_s)=\left(\frac{dE}{dx}\right)_\mathrm{R}G(r_s),$$
(14)
where, for computational convenience, the correction factor $`G(r_s)`$ takes the form
$$G(r_s)=1.00+0.717r_s-0.125r_s^2-0.0124r_s^3+0.00212r_s^4$$
(15)
for $`r_s<6`$. We note that a different correction factor was used in Refs. , which does not have the following desired behavior for $`r_s\ll 1`$. Since the density functional result converges to the Ritchie formula as $`r_s`$ decreases towards values sufficiently small compared to unity , this requires that the correction factor smoothly tend to unity as $`r_s\to 0`$. Obviously, the above $`G(r_s)`$ possesses the correct convergence property.
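Assembling Eqs. (3)-(15) into a single routine is straightforward; the sketch below (atomic units with $`v_B=1`$, purely illustrative input values) is our reading of the model, not the production simulation code.

```python
import numpy as np

ALPHA = (4.0 / (9.0 * np.pi))**(1.0 / 3.0)      # alpha = [4/(9*pi)]^(1/3)

def relative_velocity(v1, vF):
    """Eqs. (8)-(9): ion-electron relative velocity averaged over the Fermi sphere."""
    if v1 >= vF:
        return v1 * (1.0 + vF**2 / (5.0 * v1**2))
    return 0.75 * vF * (1.0 + 2.0 * v1**2 / (3.0 * vF**2) - v1**4 / (15.0 * vF**4))

def effective_charge(v1, Z1, rs_star):
    """Eqs. (3), (5)-(7), (10)-(11): Z1* = Z1*gamma, with C = 0.5."""
    vF = 1.0 / (ALPHA * rs_star)
    yr = relative_velocity(v1, vF) / Z1**(2.0 / 3.0)
    q = max(0.0, 1.0 - np.exp(-0.95 * (yr - 0.07)))   # clamp small yr to q = 0 (our choice)
    Lam = 2.0 * 0.24005 * (1.0 - q)**(2.0 / 3.0) / (
        Z1**(1.0 / 3.0) * (1.0 - (1.0 - q) / 7.0))
    gamma = q + 0.5 * (1.0 - q) * np.log(1.0 + (4.0 * Lam / rs_star)**2)
    return Z1 * gamma

def proton_stopping(v, rs):
    """Eqs. (12), (14)-(15): Ritchie result times the polynomial correction G(rs)."""
    ritchie = (2.0 * v / (3.0 * np.pi)) * (np.log(1.0 + np.pi / (ALPHA * rs))
                                           - 1.0 / (1.0 + ALPHA * rs / np.pi))
    G = 1.0 + 0.717 * rs - 0.125 * rs**2 - 0.0124 * rs**3 + 0.00212 * rs**4
    return ritchie * G

def electronic_stopping(v1, Z1, rs_local, rs_star):
    """dE/dx = [Z1*(v1, rs_star)]^2 * S_p(v1, rs_local), atomic units."""
    return effective_charge(v1, Z1, rs_star)**2 * proton_stopping(v1, rs_local)

# Illustrative call: boron (Z1 = 5) at v1 = 1 a.u. in a region with rs = 2.1 a.u.
print(electronic_stopping(1.0, 5, 2.1, 2.1))
```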
The last ingredient needed for our model is the charge distribution $`\rho (𝐱)`$ for silicon atoms in the crystal. We use the solid-state Hartree-Fock atomic charge distribution , which is spherically symmetric due to the muffin-tin construction. In this approximation, there is about one electron charge unit (0.798 electrons for Si) left outside the muffin-tin. This small amount of charge can be either distributed in the volume between the spherical atoms, resulting in an interstitial background charge density $`0.119e/\mathrm{\AA }^3`$, or distributed between the maximal collision distance used in Monte Carlo simulations and the muffin-tin radius (see details below).
## III BCA Monte Carlo simulation results
First, in comparison with other electronic stopping models used in Monte Carlo simulations based on the MARLOWE platform, we stress that in our model the effective charge is a nonlocal quantity, neither explicitly dependent on the impact parameter nor on the charge distribution, and the stopping power for a proton depends on the local charge density of the solid. A purely nonlocal version of the BK theory was implemented into MARLOWE , in which both the effective charge and the stopping power for a proton depend on a single nonlocal parameter, namely, the averaged one electron radius. Its results demonstrated that energy loss for well-channeled ions in the keV region has high sensitivity to the one-electron radius in the channel. It was pointed out that a correct density distribution is needed to account for the electronic stopping in the channel . Later, a purely local version of the BK theory was developed to take into account the charge distribution of the electrons . Comparison with other electronic stopping models, such as Lindhard and Scharff , Firsov , and the above nonlocal implementation , showed a marked improvement in modeling electronic stopping in the channel . Good agreement between simulated dopant profiles and the SIMS profiles for boron implants into $`\langle 100\rangle `$ single crystal silicon was obtained. However, this purely local implementation of the BK theory did not successfully model the electronic stopping for the boron implants into the $`\langle 110\rangle `$ axial channel and arsenic implants, as noted in Ref. .
In the present work, UT-MARLOWE was selected as the platform for our electronic stopping model implementation. UT-MARLOWE is an extension of the MARLOWE code for simulating the behavior of energetic ions in crystalline materials . It has been enhanced with: (i) atomic pair-specific interatomic potentials for $`B`$-$`Si`$, $`B`$-$`O`$, $`As`$-$`Si`$, $`As`$-$`O`$ for nuclear stopping ; (ii) a variance reduction algorithm for rare events; (iii) important implant parameters, e.g., tilt and rotation angles, the thickness of the native oxide layer, beam divergence, and wafer temperature. In our simulations, we have turned off certain options, such as the cumulative damage model in the UT-MARLOWE code, which is a phenomenological model to estimate defect production and recombination rates. Individual ion trajectories were simulated under the BCA and the overlapping of the damage caused by different individual cascades was neglected. In order to test the electronic stopping model we also used low dose ($`10^{13}/\mathrm{cm}^2`$) implants so that cumulative damage effects do not significantly complicate the dopant profiles . Also, for the simulation results we report below, a 16 $`\mathrm{\AA }`$ native oxide surface layer and a 300 K wafer temperature were used. The maximum distance for searching for a collision partner is 0.35 of the lattice constant, the default value in UT-MARLOWE . The excess charge outside the muffin-tins is distributed in the space between this maximum collision distance and the muffin-tin radius. In the simulation, the electronic stopping power is evaluated continuously along the path the ion traverses through regions of varying charge density, i.e., the energy loss is given by
$$\mathrm{\Delta }E_e=\int _{\mathrm{ion}\,\mathrm{path}}[Z_1\gamma (v_1,r_s^{*})]^2S_p(v_1,r_s(𝐱))\,dx.$$
(16)
In the simulations, the free parameter $`r_s^{*}`$ was adjusted to yield the best results in overall comparison with the experimental data. The value $`r_s^{*}=1.109\mathrm{\AA }`$ was used for both boron and arsenic ions for all energies and incident directions. This value is physically reasonable for silicon. Note that the unbound electronic density in silicon with only valence electrons taken into account will give rise to a value of $`1.061\mathrm{\AA }`$ for $`r_s`$. The fact that our $`r_s^{*}`$ value is greater than $`1.061\mathrm{\AA }`$ indicates that not all valence electrons participate in stopping the ion as unbound electrons.
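For the evaluation of Eq. (16) along a trajectory, the continuous integral is naturally replaced by a sum over path segments between collisions; a minimal sketch (segment lengths and local $`r_s`$ values are invented, proton stopping as in Eqs. (12) and (15)):

```python
import numpy as np

ALPHA = (4.0 / (9.0 * np.pi))**(1.0 / 3.0)

def S_p(v, rs):
    """Proton stopping power, eqs. (12) and (15), atomic units."""
    G = 1.0 + 0.717 * rs - 0.125 * rs**2 - 0.0124 * rs**3 + 0.00212 * rs**4
    return (2.0 * v / (3.0 * np.pi)) * (np.log(1.0 + np.pi / (ALPHA * rs))
                                        - 1.0 / (1.0 + ALPHA * rs / np.pi)) * G

def delta_E_e(path_rs, path_dx, v1, Z1_star):
    """Discretized eq. (16): [Z1*]^2 is fixed by rs_star, while S_p follows
    the local one-electron radius sampled at each segment midpoint."""
    return Z1_star**2 * np.sum(S_p(v1, np.asarray(path_rs)) * np.asarray(path_dx))

# Invented free-flight path: five 1-a.u. segments, denser charge near the channel walls
print(delta_E_e([1.6, 2.4, 3.0, 2.4, 1.6], [1.0] * 5, v1=0.8, Z1_star=2.0))
```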
We display the Monte Carlo dopant profile simulation results as follows. We note in passing that the lower and upper limits of energy used in our simulations are determined by the energy range of the SIMS data available to us.
In Figs. 1, 2 and 3, we show boron dopant profiles for energies of 15 keV, 35 keV, and 80 keV along $`\langle 100\rangle `$, $`\langle 110\rangle `$, and the off-axis direction with tilt $`=7^{\circ }`$ and rotation $`=30^{\circ }`$, respectively. It can be seen that the overall agreement with the SIMS data is excellent. In Fig. 1, the simulations show a good fit for the cutoff range. In the high energy regime, the simulated distribution shows a slightly peaked structure. This can be attributed to strong channeling due to insufficient scattering of the implanted ions. We have noticed that by increasing, e.g., the native oxide layer thickness, the peak can be reduced. For the $`\langle 110\rangle `$ channeling case, the distribution suggests that the total electronic stopping power along the channel may be slightly too strong at the high energy end. However, it should be kept in mind that for this channel, the UT-MARLOWE model becomes sensitive to the multiple collision parameter which is employed as an approximate numerical correction for the effect of multiple overlapping nuclear encounters. It is not clear how to separate the contributions from these two different sources.
For comparison, in Fig. 4, we also display a low energy (5 keV) implant case . Again the agreement for both the channeling and off-axis directions is striking (thin lines without symbols). In order to illustrate the importance of electronic stopping power at this low energy for boron implants, we have used an artificially reduced electronic stopping power, i.e., multiplying $`\mathrm{\Delta }E_e`$ in Eq. (16) by a factor of $`1/10`$ in the simulation, to generate dopant profiles in the channeling and off-axis directions. Evidently, it can be concluded from Fig. 4 that, for boron implants even in this low energy regime, electronic stopping power has a significant influence on the channeled tail of the dopant distribution and on the cutoff range for both channeling and off-axis directions.
Figs. 5 and 6 show arsenic dopant profiles for energies ranging from 15 keV to 180 keV along $`\langle 100\rangle `$ and the off-axis direction with tilt $`=8^{\circ }`$ and rotation $`=30^{\circ }`$, respectively. It can readily be concluded that our electronic stopping model works as successfully for arsenic as for boron implants into crystalline silicon. For comparison, a case of arsenic implantation into amorphous silicon is also shown in Fig. 7. The implant energy is $`180`$ keV. The effect of electronic stopping, which shows clearly in the long sloping channeling tail in the crystalline counterpart (see Fig. 5), is less prominent for the amorphous case (Fig. 7).
To examine the role that electronic stopping power plays for arsenic implants in the low energy regime ($`5`$ keV), we again simulated arsenic dopant profiles with the artificially reduced electronic stopping power. For these low energy implants, an oxide layer thickness of $`3\mathrm{\AA }`$ was used in the BCA simulations on account of the fact that the wafers used for these implants were treated with a dilute HF etch for 30 seconds and then implanted within 2 hours to prevent native oxide regrowth . In Fig. 8, we show that our electronic stopping power model is successful in both the $`\langle 100\rangle `$ channeling and off-axis directions (thin lines without symbols). Clearly, the artificial reduction of electronic stopping power leads to incorrect dopant distributions and cutoff ranges for both the channeling and off-axis directions, although the deviations indicate a less significant contribution from electronic stopping for arsenic implants than for boron implants at the energy of $`5`$ keV. However, the deviation in the cutoff range due to the electronic stopping power reduction for the channeling case is still significant. This reinforces the conclusion that, for channeling implants even at low energies, electronic stopping is not negligible.
In summary, the above results demonstrate clearly that our electronic stopping power indeed captures the correct physics of the electronic stopping for ion implants into silicon over a wide range of implant energies.
## IV MD simulation results
We have also used classical molecular dynamics simulation to study the electronic stopping power as one of the damping mechanisms in the high energy regime, as discussed above. Here we demonstrate that experimental data, such as SIMS, can be used to test the validity of this physically-based damping model. The interactions between silicon atoms are modeled by Tersoff's empirical potential :
$$E=\frac{1}{2}\underset{ij}{\sum }f(r_{ij})\left[V_R(r_{ij})-b_{ij}V_A(r_{ij})\right],$$
(17)
where $`f(r_{ij})`$ is a cutoff function that restricts interactions to nearest neighbors, $`V_R(r_{ij})`$ and $`V_A(r_{ij})`$ are pair terms, and $`b_{ij}`$ is a many-body function that can be regarded as an effective Pauling bond order. We have modified the repulsive part of the Tersoff potential by splining it to the ZBL universal potential at close range . The ZBL universal potential is also used to model the ion-silicon interactions. In our full MD simulations for the low dose implantation, the lattice temperature was initialized to 300 K and the above electronic stopping model was applied to all the atoms. The only modification required for implementation in MD is to take into account the contributions from multiple silicon atoms to the local electron density, while ensuring that the background electron density is only counted once. For each individual cascade, all recoils and the accumulation of damage in the ion path are taken into account. Using the parameter value $`1.109`$ for $`r_s^{}`$ from the comparison of BCA Monte Carlo simulation results with the SIMS data, we have simulated the implantation of low energy boron and arsenic ions into the $`Si`$ $`\{100\}(2\times 1)`$ surface at energies between 0.5 keV and 5 keV, with both channeling and off-axis directions of incidence. We mention here that, for the $`\langle 100\rangle `$ channeling case, up to 0.16 keV ($`32\%`$) of the 0.5 keV boron implant energy and 0.64 keV ($`13\%`$) of the 5 keV arsenic implant energy are lost via electronic stopping in our simulations. Simulations were terminated when the total energy of the ion became less than 5 eV, giving typical simulation times of around 0.2 ps. Figs. 9, 10 and 11 show the calculated dopant concentration profiles for various energies and directions. Each MD profile is generated by a set of between 500 and 1300 individual ion trajectories. Also shown are the profiles obtained using the modified UT-MARLOWE BCA code described in Sec. III. The MD results are in very good agreement with the experimental data, and with the BCA results. This demonstrates that our electronic stopping power model provides a good physically-based damping mechanism for MD simulations of ion implantation.
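In the MD context, such a stopping power enters most naturally as a friction-like force opposing the motion of an energetic atom. The sketch below is schematic rather than the code actually used: the helper callables are hypothetical placeholders, and a real implementation would also handle the kinetic-energy threshold below which electronic stopping is switched off.

```python
import numpy as np

def electronic_damping_force(velocity, position, z1, rs_star,
                             effective_charge_sq, proton_stopping, local_rs):
    """Damping force F = -(dE/dx) * v_hat, with the stopping power dE/dx
    evaluated from the local electron density seen by the moving atom."""
    speed = np.linalg.norm(velocity)
    if speed == 0.0:
        return np.zeros(3)
    dedx = effective_charge_sq(z1, speed, rs_star) * \
           proton_stopping(speed, local_rs(position))
    return -dedx * velocity / speed          # antiparallel to the velocity

# In a velocity-Verlet loop this force is simply added to the interatomic
# forces of every atom that is still in the electronic-stopping regime.
```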
## V Conclusion
We have developed a phenomenological electronic stopping power model for the physics of ion implantation. It has been implemented in MD and BCA Monte Carlo simulations. SIMS data have been used to verify this model on both the MD and BCA Monte Carlo platforms. This model has only one free parameter, namely, the one-electron radius of unbound electrons in the medium. We have fine-tuned this parameter to obtain dopant profiles in excellent agreement with SIMS data in both MD and BCA Monte Carlo simulations. We emphasize that this model with a single parameter can equally successfully model both boron and arsenic implants into silicon over a wide range of energies and in different channeling and off-axis directions of incidence. This versatility indicates wide applicability of the model in studies of other physical processes involving electronic stopping. As a more stringent test of the model, it should also be applied to implantation of species other than boron and arsenic. Using arsenic implantation as an example, we have also addressed the issue of how significant electronic stopping is for heavy ions in the low energy regime. For instance, even at 5 keV, the physics of electronic stopping must still be taken into account to achieve a good quantitative understanding of arsenic implants.
As discussed above, it is important to incorporate ion-electron couplings into MD simulations in both the high energy radiation damage regime and the low energy electron-phonon interaction regime. We have demonstrated that this model provides a crucial piece of physics in MD simulations for modeling energetic collisions in the electronic stopping power regime. The agreement of the simulated dopant profiles with the SIMS data shows that the incorporation of this physically-based damping term into MD simulations is a phenomenologically reliable approach in the regime concerned. An investigation of whether it can also serve as a good phenomenological model for electron-phonon coupling in the low energy regime is under way. This agreement also suggests that MD can be used to generate dopant profiles for testing against the low energy BCA results when experimental data are not available. Furthermore, MD simulations incorporating this physically-based damping mechanism can provide valuable insight into how to modify the binary collision approximation. This will enable the validity of the Monte Carlo simulation to be extended further into the lower energy regime, while not destroying the computational efficiency required in realistic simulation environments.
## VI Acknowledgment
We thank Al Tasch for useful discussions and for providing us with SIMS data, which facilitate validation of the model. This work is performed under the auspices of the U.S. Department of Energy.
# Logarithmic Relaxations in a Random Field Lattice Gas Subject to Gravity
## I Introduction
Relaxation properties of granular media in the presence of low-amplitude vibrations are dominated by steric hindrance, friction and inelasticity. They typically show very long characteristic times . For instance the grain compaction process in a box, i.e., the increase of bulk density in the presence of gentle shaking, seems to follow a logarithmic law .
These very slow dynamics resemble those of glassy systems . Moreover these materials also show “aging” and “memory” effects with logarithmic scaling, and metastability . The presence of reversible-irreversible cycles (along which one can identify a sort of “glassy transition”), as well as the dependence on “cooling” rates, enhances the similarities with glassy systems. However, in granular media, as soon as external forces are cut off, grains come rapidly to rest. The microscopic origin of motion is thus quite different from that in thermal systems (glasses, magnets), where random particle dynamics is ensured by temperature. On this basis it is rather surprising to find such similar behaviors in the off-equilibrium dynamics.
Based on this resemblance, frustrated lattice gas models have recently been introduced to describe these slow logarithmic processes found in granular matter (where the shaking amplitude plays the role of an “effective” temperature of the system) .
The aim of the present paper is twofold: on one side, to describe the correspondence with experiments on granular matter of some dynamical behaviors observed in a frustrated lattice gas model subject to gravity; on the other side, to emphasize the appearance of some very general features in the off-equilibrium dynamics in the presence of gravity in a broad class of frustrated lattice gas models. We thus first introduce a very simple random field model to describe a system of grains moving in a disordered environment. The relaxation properties of the model are then studied via Monte Carlo simulations. We stress the relations with granular media and the implications of our results for the understanding of the behavior of such materials. Finally, the connections with other kinds of “frustrated” models are discussed to single out the universal features of this class of lattice systems.
## II The model
Along the lines of the Ising Frustrated Lattice Gas (IFLG) and TETRIS models , we consider a system of particles moving on a square lattice tilted by $`45`$ degrees. Because of the hard-core repulsion, no site of the lattice can be occupied by more than one particle. Moreover, the system is dilute, so that there will be empty sites on the lattice.
Although in real systems particles have a rich variety of shapes, dimensions and orientations in space, here for the sake of simplicity we restrict ourselves to the case of one type of grain with an elongated form (as in the TETRIS model ), which can lie on the lattice with two different orientations (Fig.1). Each particle is thus characterized by an intrinsic degree of freedom which is its orientation in space. It can have only two possible values, corresponding to the two possible positions of the particle on the lattice sites. Since particles cannot overlap, particles with the same orientation cannot occupy two nearest neighboring sites on the lattice. As a result a local geometrical constraint is generated in the dynamics.
In a real granular material each grain moves in a disordered environment made of the rest of the granular system itself. To schematically describe such a situation without entering the full complexity of the problem, we consider the grains of our model as immersed in a disordered medium whose disorder we suppose “quenched”. This choice corresponds, more strictly, to the case of grain motion on a geometrically disordered substrate such as a box with geometric asperities on its surface. At this stage, the physical feature to embody is the restriction of the grain motion produced by the disorder of the environment. Therefore, in our model a particle can occupy a lattice site only if its orientation fits both the local geometry of the medium (Fig.1) and the geometrical interaction with the closely neighboring particles.
These ideas can easily be formalized using Ising spins in a magnetic language. A spin $`S_i`$ (with $`S_i=\pm 1`$) can be identified with the internal degree of freedom which characterizes the twofold orientation of particle $`i`$. The “geometrical” interaction between nearest neighboring grains, which must be antiparallel (to be non-overlapping neighbors), is realized with antiferromagnetic couplings (of infinite strength) between nearest neighbors . Analogously, the random geometry of the environment, which forces a grain to have a definite orientation to fit the local geometry, may be described as a strong random magnetic field attached to each site of the lattice.
The Hamiltonian of this Random Field Ising system with vacancies, in the presence of gravity, can thus be written as
$$\mathcal{H}=J\underset{ij}{\sum }(S_iS_j+1)n_in_j-H\underset{i}{\sum }h_iS_in_i+g\underset{i}{\sum }n_iy_i$$
(1)
where $`h_i`$ are random fields, i.e. quenched variables which assume the values $`\pm 1`$, and $`n_i`$ is the occupancy variable: $`n_i=1`$ if site $`i`$ is occupied by a particle, $`n_i=0`$ otherwise. Here $`g`$ is the gravity constant and $`y_i`$ corresponds to the height of the site $`i`$ with respect to the bottom of the box (the grain mass is set to unity). $`J`$ and $`H`$ represent the repulsions felt by particles if they have a wrong reciprocal orientation or if they do not fit the local geometry imposed by the random fields, respectively. Here we study the case $`J=H=\infty `$, i.e., the case in which the geometric constraints are infinitely strong. In this case there will always be sites where “spins” cannot simultaneously fulfill every constraint, and which thus remain vacant.
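For illustration, a minimal sketch of an energy evaluation for Eq. (1) in this hard-constraint limit is given below. For simplicity it treats an untilted square lattice stored as 2D arrays with periodic boundaries in the horizontal direction; the bookkeeping on the actual 45-degree tilted lattice differs, and the array layout is our own choice rather than something prescribed by the model.

```python
import numpy as np

def config_energy(n, S, h, y, g=1.0):
    """Energy of Eq. (1) with J = H = infinity treated as hard constraints.

    n : occupation numbers (0/1),  S : orientations (+1/-1),
    h : quenched random fields (+1/-1),  y : height of each site.
    Returns +inf if any constraint is violated, otherwise g * sum_i n_i y_i.
    """
    occ = n.astype(bool)
    # random-field constraint: an occupied site must carry the orientation
    # imposed by the local quenched field
    if np.any(occ & (S != h)):
        return np.inf
    # hard-core constraint: occupied nearest neighbours must be antiparallel
    horiz = occ & np.roll(occ, 1, axis=1) & (S == np.roll(S, 1, axis=1))
    vert = occ[1:, :] & occ[:-1, :] & (S[1:, :] == S[:-1, :])
    if horiz.any() or vert.any():
        return np.inf
    return g * float(np.sum(n * y))
```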
## III The tapping
In a tapping experiment on granular media, a dynamics is imposed on the grains by vibrations characterized by the normalized shaking intensity $`\mathrm{\Gamma }`$ ($`\mathrm{\Gamma }`$ is the ratio of the shake peak acceleration to the gravity acceleration $`g`$, see ). In the regime of low shaking amplitudes (small $`\mathrm{\Gamma }`$), dissipation dominates the grain dynamics, and in a first approximation the effects of inertia on particle motion may be neglected. To embody this scheme in our model we introduce a two-step diffusive Monte Carlo dynamics for the particles.
The dynamics is very simple, with particles moving on the lattice either upwards or downwards with respective probabilities $`p_{up}`$ and $`p_{down}`$ (with $`p_{down}=1-p_{up}`$).
In the first step, vibrations are on with $`p_{up}\ne 0`$. Particles can then diffuse for a time $`\tau _0`$ in any direction, yet with the inequality $`p_{up}<p_{down}`$, and always preserving the above local geometric constraints. In the second step, vibrations are switched off and the presence of gravity imposes $`p_{up}=0`$. Particles can then move only downwards. In both steps the particle orientation (i.e., its spin $`S_i`$) flips with probability one if this violates none of the above constraints and does not flip otherwise.
In this single-tap two-step dynamics, we let the system reach a static configuration, i.e., a configuration in which particles cannot move anymore. In our Monte Carlo tapping experiment a sequence of such vibrations is applied to the system.
Under tapping a granular system can move in the space of available microscopic configurations in a way similar to that in which thermal systems explore their phase space. The key parameter of the tapping dynamics is thus the ratio $`x=\frac{p_{up}}{p_{down}}`$, which is linked to the experimental amplitude of shaking. An effective “temperature”, $`T`$, can thus be introduced for the above Hamiltonian with $`x=e^{-2g/T}`$, which in turn relates to real shakes of granular media (see ) via the relation $`\mathrm{\Gamma }^a\sim \frac{T}{2g}=1/\mathrm{ln}(1/x)`$, with $`a=1,2`$ .
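A single tap of this two-step dynamics can be sketched as follows. The functions `allowed` and `move_particle`, which encode the lattice bookkeeping, the geometric constraints and the spin flips, are hypothetical placeholders; only the structure of the vibration/relaxation cycle and the relation $`p_{up}/p_{down}=x`$ are taken from the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_tap(particles, x, tau0, allowed, move_particle):
    """One tap: tau0 sweeps of diffusion with p_up/p_down = x, followed by
    pure downward relaxation (p_up = 0) until no particle can move."""
    p_up = x / (1.0 + x)                     # from p_up + p_down = 1, p_up/p_down = x
    for _ in range(tau0):                    # step 1: vibration on
        for p in rng.permutation(len(particles)):
            step = +1 if rng.random() < p_up else -1
            if allowed(particles, p, step):
                move_particle(particles, p, step)
    moved = True                             # step 2: vibration off, gravity only
    while moved:
        moved = False
        for p in rng.permutation(len(particles)):
            if allowed(particles, p, -1):
                move_particle(particles, p, -1)
                moved = True
    return particles
```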
## IV Compaction
In order to investigate the dynamical properties of the system when subjected to vibrations, we study the behavior of two basic observables, the density and the time-dependent density-density correlation function; the latter is considered in the next section.
The system is initialized by filling the container, randomly pouring grains in at the top, one after the other. Particles then fall down subject only to gravity, always preserving the local geometric constraints of the model. Once they cannot move down any longer, they just stop. From this loose packing condition, the system is then shaken by a sequence of vibrations of amplitude $`x=\frac{p_{up}}{p_{down}}`$ with the two-step diffusive dynamics described above.
Specifically, we have studied a $`2D`$ square lattice of size $`30\times 60`$ (the results have been observed to be robust to changes of the system size). The lattice has periodic boundary conditions in the horizontal direction and a rigid wall at its bottom. The particle motion thus occurs on a cylinder.
After each vibration $`t_n`$ (where $`t_n`$ is the n-th “tap”) the bulk density is measured, i.e., the density $`\rho (x,t_n)`$ in the lower $`25\%`$ of the box.
Results for the compaction process are shown in Fig. 2. Different curves correspond to different values of the amplitude $`x`$, which ranges from $`x=0.001`$ up to $`x=0.2`$. Data are averaged over $`10`$ different initial conditions and, for each initial condition, over $`10`$ random field configurations. The duration of each tap was kept fixed to $`\tau _0=30`$ (time is measured in Monte Carlo steps per particle) in all the simulations. We have also checked that the qualitative general features exhibited by the model do not depend substantially on the choice of this value.
In analogy with experimental results, the value of $`x`$ is a crucial parameter which controls the dynamics of the compaction process as well as the final static packing density . In qualitative agreement with experimental findings, we observe that the stronger the shaking (i.e., the higher $`x`$, or $`T`$), the faster the system reaches a higher packing density, as shown in Fig.2. However this is in contrast with the intuitive expectation for simple systems, such as (lattice) gases in the presence of gravity, where higher temperatures correspond to lower equilibrium densities .
It is known experimentally that, by gently shaking a granular system, the density at the bottom of the box increases very slowly until it reaches its equilibrium value. The best fit for the density relaxation is an inverse logarithmic form of the type ,
$$\rho (t_n)=\rho _{\infty }-\mathrm{\Delta }\rho _{\infty }\frac{\mathrm{log}A}{\mathrm{log}(\frac{t_n}{\tau }+A)}$$
(2)
where $`\rho _{\infty }`$ is the asymptotic density, $`\mathrm{\Delta }\rho _{\infty }=\rho _{\infty }-\rho _i`$ is the difference between the asymptotic and initially measured density, $`\rho _i=\rho (t_n=0)\simeq 0.522`$, and $`A`$ and $`\tau `$ are two fitting parameters. Experimentally $`\rho _{\infty },\mathrm{\Delta }\rho _{\infty },A`$ and $`\tau `$ depend on the value of the normalized shaking amplitude $`\mathrm{\Gamma }`$.
Our different Monte Carlo relaxation curves for the density, obtained for different vibration intensities $`x`$, show a behavior very similar to the experimental findings and can be well fitted with the inverse logarithmic law (2). As stated, the amplitude of shaking $`x`$ determines the fitting parameters ($`\rho _{\infty },A`$ and $`\tau `$), and their behavior seems to indicate a smooth crossover between two different regimes at varying $`x`$.
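As an illustration of how the fit of Eq. (2) might be set up, here is a small Python sketch using `scipy.optimize.curve_fit`; the synthetic data merely stand in for the measured $`\rho (t_n)`$ curves and the starting values and bounds are arbitrary choices of ours.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO_I = 0.522                                   # initially measured density (fixed, not fitted)

def inverse_log_law(t, rho_inf, A, tau):
    """Eq. (2) with Delta rho_inf = rho_inf - rho_i."""
    return rho_inf - (rho_inf - RHO_I) * np.log(A) / np.log(t / tau + A)

t_n = np.arange(1.0, 5001.0)                    # tap number
rho_n = inverse_log_law(t_n, 0.60, 5.0, 50.0)   # synthetic stand-in for a measured curve
rho_n += 1e-3 * np.random.default_rng(1).normal(size=t_n.size)

(rho_inf, A, tau), cov = curve_fit(inverse_log_law, t_n, rho_n,
                                   p0=(0.6, 2.0, 10.0),
                                   bounds=([0.5, 1.0, 1e-3], [1.0, 1e4, 1e4]))
```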
Fig. 3 shows the relaxation of $`\tau `$ to a constant value with increasing $`x`$. The analytical dependence on $`x`$ can be fitted with a power law form of the type:
$$\tau \sim \left(B+\frac{1}{x^\gamma }\right)$$
(3)
$`\tau `$ can be interpreted as the minimum time over which one starts to observe compaction in the system. For small $`x`$, $`\tau `$ exhibits an algebraic dependence on $`x`$ with $`\gamma =2.0`$, while when $`x`$ crosses a certain threshold, say $`\stackrel{~}{x}\simeq 0.1`$, it saturates to a constant value which is independent of the tapping amplitude.
The behavior of the final packing density of the system, $`\rho _{\infty }`$, is also shown in Fig.3. It increases with increasing $`x`$ and then, when the shaking intensity is greater than the typical value $`\stackrel{~}{x}`$, it becomes almost constant up to the largest $`x`$ value we considered. These results are consistent with those found in Ref. , and are in qualitative agreement with the experimental findings described above.
It is more difficult to distinguish a definite trend for the value of the parameter $`A`$ versus $`x`$ (see Fig.3). However, it changes its behavior at a well-defined tapping amplitude which is still approximately the same value $`\stackrel{~}{x}`$.
The recorded behavior of $`\rho _{\infty }`$ with $`x`$ (or $`\mathrm{\Gamma }`$ in the experiments), which, as stated, is in contrast with the expected equilibrium values, and the observed rough crossover between two regions suggest that, on the typical time scales of both our Monte Carlo and real experiments, one is still far from equilibrium. Actually, if $`x`$ is sufficiently small, the granular system has very long characteristic times, as shown by the presence of logarithmic relaxations. We will address the question of the out-of-equilibrium dynamics in the next section.
To complete our results about density compaction, we have plotted in Fig.4 the density profile $`\rho (z)`$ as a function of the height from the bottom of the box. Starting from a common initial configuration, the system is allowed to evolve while subjected to shaking with two different amplitudes, $`x=0.001`$ and $`x=0.1`$. As already stated, this gives a difference in the two final density profiles, which seem to have approximately (see also ) a Fermi-Dirac dependence on the height $`z`$, as shown in Fig.4:
$$\rho (z)=\rho _b\left[1-\frac{1}{1+e^{-(z-z_0)/s}}\right]$$
(4)
where $`\rho _b`$ is the asymptotic bulk density, and $`z_0`$ and $`s`$ are two fitting parameters describing the properties of the “surface” of the system. They all depend on the shaking amplitude $`x`$. The fit parameters for the initial profile are $`\rho _b=0.53`$, $`z_0=35.3`$ and $`s=1.50`$; after the shakes with $`x=0.001`$ we find $`\rho _b=0.56`$, $`z_0=36.5`$ and $`s=0.83`$, while after the shakes with $`x=0.1`$ we measure $`\rho _b=0.575`$, $`z_0=37.1`$ and $`s=0.69`$. All this shows that, during compaction, the “surface” region of the system shrinks.
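A corresponding sketch for Eq. (4), again with `curve_fit`; the profile used here is synthetic, built with illustrative numbers taken from the fitted values quoted above rather than actual simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

def fd_profile(z, rho_b, z0, s):
    """Eq. (4); algebraically equivalent to rho_b / (1 + exp((z - z0)/s))."""
    return rho_b * (1.0 - 1.0 / (1.0 + np.exp(-(z - z0) / s)))

z = np.arange(60.0)                            # site heights in the 30 x 60 box
rho_z = fd_profile(z, 0.56, 36.5, 0.83)        # stand-in for a measured profile (x = 0.001 values)

(rho_b, z0, s), _ = curve_fit(fd_profile, z, rho_z, p0=(0.5, 30.0, 2.0))
```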
## V The density-density autocorrelation function
The above discussion of compaction under gentle shaking, with the presence of logarithmic relaxations, shows that we are in the presence of an out-of-equilibrium process. To characterize its features quantitatively, we study time-dependent correlation functions.
The system is allowed to evolve for a time interval $`t_w`$ (the “waiting time”), then correlations are measured as a function of time for $`t>t_w`$. In the present case, we record the relaxation features of the two-time density-density correlation function,
$$C(t,t_w)=\frac{\langle \rho (t)\rho (t_w)\rangle -\langle \rho (t)\rangle \langle \rho (t_w)\rangle }{\langle \rho (t_w)^2\rangle -\langle \rho (t_w)\rangle ^2}$$
(5)
where $`\langle \cdots \rangle `$ means the average over a number of random field configurations and different initial configurations. As above, $`\rho (t)`$ is the bulk density of the system at time $`t`$.
For a system at equilibrium the time-dependent correlation function $`C(t,t_w)`$ is invariant under time translations, i.e., it depends only on the difference $`t-t_w`$. However, if the system is off equilibrium, the subsequent response is expected to depend explicitly on the waiting time. Then $`C(t,t_w)`$ is a function of $`t`$ and $`t_w`$ separately. This “memory” effect is usually termed $`aging`$ and plays an important role in the study of disordered thermal systems such as glassy systems .
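In practice, $`C(t,t_w)`$ is estimated from an ensemble of independent runs (different initial conditions and random-field realizations). A minimal sketch of such an estimator, assuming the bulk density of every run has been stored at each measurement time:

```python
import numpy as np

def two_time_correlation(rho_runs, i_tw, i_ts):
    """Estimator of Eq. (5).

    rho_runs : array (n_runs, n_times) of bulk densities, one row per run
    i_tw     : column index of the waiting time t_w
    i_ts     : iterable of column indices t > t_w at which C(t, t_w) is evaluated
    """
    r_tw = rho_runs[:, i_tw]
    denom = np.mean(r_tw ** 2) - np.mean(r_tw) ** 2
    out = []
    for i_t in i_ts:
        r_t = rho_runs[:, i_t]
        out.append((np.mean(r_t * r_tw) - np.mean(r_t) * np.mean(r_tw)) / denom)
    return np.array(out)
```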
As described above, the initial configuration of the system at $`t=0`$ is obtained by pouring grains into a cylindrical box from the top and letting them fall down randomly. To have the system in a well-defined configuration of its parameters, we then started to shake it continuously with a fixed amplitude $`x`$, i.e., measurements were taken during a single long “tap”.
The results for the bulk density-density correlations of Eq. (5), averaged over $`10`$ initial configurations and over $`100`$ random field configurations, are shown in Fig.5.
The various curves correspond to different waiting time values, $`t_w=72,360,720,7200`$ for a fixed shaking amplitude $`x=0.01`$. The dynamics of the system depends strongly on the value of the waiting time $`t_w`$, which is the signature of an aging phenomenon.
This behavior is not expected when shaking for long times at high $`x`$ (i.e., high $`T`$), where our model is very close to a standard diluted lattice gas.
The specific scaling of the correlation function describes the system's aging properties. Therefore, it is interesting to analyze the behavior of $`C(t,t_w)`$ as a function of $`t`$ and $`t_w`$ for small shaking amplitudes.
For long enough times, the correlation function scales with the ratio $`\mathrm{log}(t_w)/\mathrm{log}(t)`$. It can thus be approximated by a scaling form recently introduced to analyze memory effects in the IFLG and TETRIS models ,
$$C(t,t_w)=(1-c_{\infty })\frac{\mathrm{log}(\frac{t_w+t_s}{\tau })}{\mathrm{log}(\frac{t+t_s}{\tau })}+c_{\infty }$$
(6)
where $`\tau `$, $`t_s`$ and $`c_{\infty }`$ are fitting parameters. It is worth noticing that these fitting parameters, for a given $`x`$, seem to be constant for different waiting times, as shown in Fig.6.
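The scaling form of Eq. (6) can be fitted in much the same way as Eq. (2); in the sketch below the waiting time is held fixed while $`c_{\infty }`$, $`\tau `$ and $`t_s`$ are adjusted. The data are synthetic and the bounds are our own arbitrary choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_scaling(t, tw, c_inf, tau, ts):
    """Eq. (6): C(t,t_w) = (1 - c_inf) log((t_w + t_s)/tau) / log((t + t_s)/tau) + c_inf."""
    return (1.0 - c_inf) * np.log((tw + ts) / tau) / np.log((t + ts) / tau) + c_inf

tw = 720.0                                       # one of the waiting times used above
t = np.logspace(np.log10(tw + 1.0), 6.0, 200)
C_data = log_scaling(t, tw, 0.1, 0.5, 50.0)      # stand-in for the measured correlation

fit_fn = lambda t, c_inf, tau, ts: log_scaling(t, tw, c_inf, tau, ts)
(c_inf, tau, ts), _ = curve_fit(fit_fn, t, C_data, p0=(0.0, 1.0, 10.0),
                                bounds=([-1.0, 1e-3, 0.0], [1.0, 1e2, 1e4]))
```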
All these results on time-dependent correlation functions thus confirm our picture of an off-equilibrium dynamics.
## VI Discussion
The above results thus show logarithmic scaling in the off-equilibrium relaxations of a lattice gas model with antiferromagnetic interactions and infinite random fields in the presence of gravity. These results are, interestingly, consistent with the known properties of standard random field Ising systems . Surprisingly, they are also in strong correspondence with the behavior of analogous properties observed, by numerical investigation, in apparently different lattice models under gravity such as the quoted TETRIS and IFLG, also introduced to describe granular media . The TETRIS is a model which can be mapped, along the same lines outlined in the present paper, onto a usual lattice gas with antiferromagnetic interactions of infinite strength in the presence of gravity. Its dynamics is “frustrated” by the presence of purely kinetic constraints: particles cannot change their orientation if too many of their neighboring sites are filled. The IFLG is, instead, a model very close to an Ising Spin Glass (also with infinite interaction strengths). It may be described in terms of a lattice gas under gravity made of particles moving in an environment with quenched disorder. The presence of the quoted strong similarities in the off-equilibrium dynamics of these apparently heterogeneous systems suggests the existence of an unexpected inherent universality. This seems to be caused by the important effect of gravity on the “frustrated” dynamics of the particles. The deep origin of such a phenomenon is still an open problem and an important issue for further investigation.
## VII Conclusions
In conclusion, we have studied processes of density relaxation in the presence of gentle shaking in a lattice model for granular particles. The model has a simple geometrical interpretation in terms of elongated grains moving in a disordered environment and it admits a mapping onto a Random Field Ising system with vacancies in the presence of gravity. The crucial ingredient in the model is the presence of geometric frustration dominating particle motion and the necessity of cooperative rearrangements. The present model shows logarithmic compaction of its bulk density and a two-time density-density correlation function, $`C(t,t^{\prime })`$, which has an aging behavior well described by a logarithmic scaling $`C(t,t^{\prime })=\mathcal{C}(\mathrm{ln}(t^{\prime })/\mathrm{ln}(t))`$.
At this stage, an experimental check of our finding of aging in the process of granular media compaction would give strong support to our very simple model.
It is interesting that these results numerically coincide with the findings from two other kinds of lattice models which appear quite different: a model (the TETRIS) which can be mapped onto an Ising antiferromagnetic lattice gas whose dynamics is characterized by purely kinetic constraints, and a model (the IFLG) which is, instead, closer to an Ising Spin Glass in the presence of gravity. The intriguing observation of these similarities seems to suggest the presence of a form of universality which appears in “frustrated” particle dynamics, due to the crucial effects of gravity.
# Lattice dynamics of BaTiO3, PbTiO3 and PbZrO3: a comparative first-principles study
## I Introduction
In the family of perovskite ABO<sub>3</sub> compounds, a wide variety of distorted variants of the high-temperature cubic structure are observed as a function of composition and temperature. First-principles density-functional energy-minimization methods have proved to be generally quite accurate in the theoretical prediction of the ground state structure type and structural parameters of perovskite oxides. According to the soft-mode theory of structural phase transitions, the ferroelectric phases can be related to the high temperature symmetric structure by the freezing-in of unstable zone-center phonons. However, to predict finite temperature behavior at phase transitions, as well as the temperature dependence of the dielectric and piezoelectric responses of the compounds, it is necessary to have information about the energies of non-uniform instabilities and low energy distortions of the cubic perovskite structure . For this purpose, the calculation of the phonon dispersion relations through density-functional perturbation theory (DFPT) has been found to be extremely useful, allowing easy identification of the unstable phonon branches and their dispersion throughout the Brillouin zone. This information also permits the investigation of the geometry of localized instabilities in real space, which can be directly related to the anisotropy of the instability region in reciprocal space . To date, phonon calculations away from the zone center have been reported for only a few individual compounds, with full phonon dispersion relations given for KNbO<sub>3</sub> , SrTiO<sub>3</sub> and BaTiO<sub>3</sub> , and selected eigenmodes for PbTiO<sub>3</sub> .
In this paper, we present the first comparative study of the full phonon dispersion relations of three different cubic perovskites: BaTiO<sub>3</sub>, PbTiO<sub>3</sub> and PbZrO<sub>3</sub>, computed from first principles using the variational DFPT method as described in Section II. These compounds have been chosen both because of their scientific and technological importance, and because they allow us to investigate the effects on the lattice dynamics of substituting one cation while leaving the other atoms unchanged. As we will see in Sections III and IV, these substitutions lead to very pronounced differences in the eigenvectors and dispersions of the unstable phonons. The origin of these differences will be clarified in Section V, where, using a systematic method previously applied to BaTiO<sub>3</sub> , the reciprocal space force constant matrices are transformed to obtain real-space interatomic force constants for all three compounds. There, we will see that the differences can be attributed to trends in cation-oxygen interactions, with other IFCs remarkably similar among the three compounds. These trends and similarities, and their implications for the study of the lattice dynamics of solid solutions, are discussed in Section VI. Section VII concludes the paper.
## II Method
The first-principles calculations for BaTiO<sub>3</sub> follow the method previously reported in Ref. . For PbTiO<sub>3</sub> and PbZrO<sub>3</sub>, calculations were performed within the Kohn-Sham formulation of density functional theory , using the conjugate-gradients method . The exchange-correlation energy functional was evaluated within the local density approximation (LDA), using the Perdew-Zunger parametrization of the Ceperley-Alder homogeneous electron gas data . The “all-electron” potentials were replaced by the same ab initio pseudo-potentials as in Refs. and . The electronic wavefunctions were expanded in plane waves up to a kinetic energy cutoff of 850 eV. Integrals over the Brillouin zone were approximated by sums on a $`4\times 4\times 4`$ mesh of special $`k`$-points .
The optical dielectric constant, the Born effective charges and the force constant matrix at selected $`q`$-points of the Brillouin zone were computed within a variational formulation of density functional perturbation theory . The phonon dispersion curves were interpolated following the scheme described in Ref. . In this approach, the long-range character of the dipole-dipole contribution is correctly handled by first subtracting it from the force constant matrix in reciprocal space and treating it separately. The short-range contribution to the interatomic force constants in real space is then obtained from the remainder of the force constant matrix in $`q`$-space using a discrete Fourier transformation. In this work, the short-range contribution was computed from the force constant matrices on a $`2\times 2\times 2`$ centered cubic mesh of $`q`$-points comprised of $`\mathrm{\Gamma }`$, X, M, R and the $`\mathrm{\Lambda }`$ point halfway from $`\mathrm{\Gamma }`$ to R . From the resulting set of interatomic force constants in real space, the phonon spectrum can be readily obtained at any point in the Brillouin zone.
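Schematically, the interpolation of the short-range part proceeds by a discrete Fourier transform of the mesh force-constant matrices to real space, followed by a re-summation at the desired wavevector. The sketch below illustrates only this generic short-range step, with a simplified phase convention, and omits the dipole-dipole contribution that is treated analytically as described above; it is not the actual implementation used here.

```python
import numpy as np

def real_space_ifcs(Csr_q, qmesh, lattice_R):
    """Discrete Fourier transform of short-range force-constant matrices
    Csr_q[i] (3N x 3N complex arrays given at the reduced q-points qmesh[i])
    to real-space IFC blocks C(R) on the supercell lattice vectors lattice_R."""
    nq = len(qmesh)
    return {tuple(R): sum(np.exp(-2j * np.pi * np.dot(q, R)) * Cq
                          for q, Cq in zip(qmesh, Csr_q)) / nq
            for R in lattice_R}

def interpolate(ifcs_R, q):
    """Re-sum the real-space IFCs at an arbitrary reduced wavevector q."""
    return sum(np.exp(2j * np.pi * np.dot(q, np.array(R))) * C_R
               for R, C_R in ifcs_R.items())
```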
Our calculations have been performed in the cubic perovskite structure. For PbTiO<sub>3</sub>, the optimized LDA lattice parameter (3.883 Å) slightly underestimates the estimated experimental value of 3.969 Å, and we have decided to work, as for BaTiO<sub>3</sub> , at the experimental volume. For PbZrO<sub>3</sub>, we have chosen, as in Ref. , to work at the optimized lattice parameter of 4.12 Å, which is nearly indistinguishable from the extrapolated experimental value of 4.13 Å .
## III Dielectric properties
Knowledge of the Born effective charges ($`Z_\kappa ^{*}`$) and the optical dielectric tensor ($`ϵ_{\infty }`$) is essential for describing the long-range dipolar contribution to the lattice dynamics of a polar insulator. In Table I, we present results for PbTiO<sub>3</sub> and PbZrO<sub>3</sub>, computed using the method described in Section II, and for BaTiO<sub>3</sub>, obtained previously in Ref. . The effective charges have been corrected following the scheme proposed in Ref. in order to satisfy the charge neutrality sum rule. Our results differ by at most 0.09 electrons from values reported for cubic PbTiO<sub>3</sub> and PbZrO<sub>3</sub> using slightly different methods and/or lattice constants .
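The charge-neutrality correction referred to above can be illustrated, in its simplest form, by removing the residual of the acoustic sum rule equally from all atoms; the sketch below is generic and is not necessarily the precise apportioning scheme of the cited reference.

```python
import numpy as np

def enforce_charge_neutrality(zstar):
    """zstar: (n_atoms, 3, 3) array of Born effective charge tensors.
    Subtract the mean residual so that sum_kappa Z*_kappa vanishes exactly."""
    residual = zstar.sum(axis=0) / zstar.shape[0]
    return zstar - residual

# e.g. for the five atoms of the cubic perovskite cell:
# zstar_corrected = enforce_charge_neutrality(zstar_raw)
```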
As usual in the class of perovskite ABO<sub>3</sub> compounds, the amplitudes of some elements of the effective charge tensors deviate substantially from the nominal values expected in a purely ionic picture (for a review, see for instance Ref. ). This effect is especially pronounced for the Ti and associated O charges in BaTiO<sub>3</sub> and PbTiO<sub>3</sub>. It reflects the sensitivity to atomic displacement of the partially-covalent character of the Ti–O bond . In contrast, the effective charge of the Zr atom in PbZrO<sub>3</sub> and the associated O charge are significantly closer to their nominal ionic values of $`+4`$ and $`-2`$, respectively. The Zr effective charge is comparable to that reported recently for ZrO<sub>2</sub> ($`+5.75`$) . In addition, in both PbTiO<sub>3</sub> and PbZrO<sub>3</sub>, the anomalous contribution to the Pb charge, beyond the nominal charge of +2, is more than twice as large as that for the Ba charge in BaTiO<sub>3</sub>. This feature, too, with a concomitant increase in the magnitude of $`Z_O^{*}`$, reflects the sensitivity to atomic displacement of the partially covalent character of the bond between lead and oxygen.
Within the LDA, the computed optical dielectric constant (Table I) usually overestimates the experimental value. The error is of the order of 20% in BaTiO<sub>3</sub> , for which the extrapolated experimental value is 5.40 , consistent with analogous comparisons in KNbO<sub>3</sub> and SrTiO<sub>3</sub> . For PbZrO<sub>3</sub>, it appears that experimental data is available only for the orthorhombic phase, where the value is about 4.8 , significantly less than the value of 6.97 which we have computed for the cubic phase. For PbTiO<sub>3</sub>, our value of 8.24 is comparable to a recent first-principles result of 8.28 obtained using a different method . In contrast to the other perovskites, this represents a slight underestimate of the extrapolated experimental value of 8.64 reported in Ref. .
The origin of the LDA error in the optical dielectric constant is a complex question. It arises at least partly from the lack of polarization dependence of the approximate exchange-correlation functional . In the cubic phase of the perovskite ferroelectrics, the comparison with experiment is also complicated by the fact that the high-temperature cubic phases for which the measurements are made do not have a perfect cubic perovskite structure, as assumed in the calculations. In fact, the observed cubic structure of BaTiO<sub>3</sub> and PbTiO<sub>3</sub> represents the average of large local distortions. The character of these distortions depends strongly on the material, which could well have different effects on the observed optical dielectric constant.
For the BaTiO<sub>3</sub> phonon dispersion, it has been checked that the inaccuracy in $`ϵ_{\infty }`$ only significantly affects the position of the highest longitudinal optic branch, while other frequencies are relatively insensitive to the amplitude of the dielectric constant . The effects of possible discrepancies for the other two compounds are likely to be similarly minor.
## IV Phonon dispersion curves
In this section, we describe phonon dispersion relations for BaTiO<sub>3</sub>, PbTiO<sub>3</sub> and PbZrO<sub>3</sub>, providing a global view of the quadratic-order energy surface around the cubic perovskite structure. The calculated phonon dispersion curves along the high symmetry lines of the simple cubic Brillouin zone are shown in Fig. 1. The unstable modes, which determine the nature of the phase transitions and the dielectric and piezoelectric responses of the compounds, have imaginary frequencies. Their dispersion is shown below the zero-frequency line. The character of these modes also has significant implications for the properties of the system. This character has been depicted in Fig. 1 by assigning a color to each eigenvalue, determined by the percentage of each atomic character in the normalized eigenvector of the dynamical matrix (red for A atom, green for B atom and blue for O atoms) .
Barium titanate (BaTiO<sub>3</sub>) and potassium niobate (KNbO<sub>3</sub>) both undergo a transition sequence with decreasing temperature through ferroelectric tetragonal, orthorhombic and rhombohedral (ground state) structures, all related to the cubic perovskite structure by the freezing-in of a polar mode at $`\mathrm{\Gamma }`$. The main features of the phonon dispersion of BaTiO<sub>3</sub> have been previously discussed in Ref. and are very similar to those of KNbO<sub>3</sub> . The most unstable mode is at $`\mathrm{\Gamma }`$, and this mode, dominated by the Ti displacement against the oxygens (Table II), is the one that freezes in to give the ferroelectric phases. However, the instability is not restricted to the $`\mathrm{\Gamma }`$ point. Branches of Ti-dominated unstable modes extend over much of the Brillouin zone. The flat dispersions of the unstable transverse optic mode towards X and M, combined with its rapid stiffening towards R, confine the instability to three quasi-two-dimensional “slabs” of reciprocal space intersecting at $`\mathrm{\Gamma }`$. This is the fingerprint of a “chain-like” unstable localized distortion for the Ti displacements in real space . Except for these modes, all the other phonons are stable in BaTiO<sub>3</sub>, which makes the behavior of the unstable branches relatively easy to understand.
Lead titanate (PbTiO<sub>3</sub>) has a single transition to a low-temperature ferroelectric tetragonal structure, related to the cubic perovskite structure by the freezing-in of a polar mode at $`\mathrm{\Gamma }`$. The phonon dispersion of PbTiO<sub>3</sub> shows similar features to that of BaTiO<sub>3</sub>, with some important differences. As in BaTiO<sub>3</sub>, the most unstable mode is at $`\mathrm{\Gamma }`$, consistent with the observed ground state structure. However, the eigenvector is no longer strongly dominated by the displacement of the Ti against the oxygen along the Ti–O chains, but contains a significant component of the Pb moving against the O atoms in the Pb–O planes (see Table II). Unstable Ti-dominated modes, similar to those in BaTiO<sub>3</sub>, can be identified in the vicinity of the M–X line (M$`_3^{\prime }`$, X<sub>5</sub> modes). However, Pb now plays an active role in the character of the majority of the unstable branches, notably those terminating at M$`_5^{\prime }`$ and X$`_5^{\prime }`$. Also, the Pb-dominated branch emanating from the ferroelectric $`\mathrm{\Gamma }`$ mode towards R has a much weaker dispersion than the corresponding, Ti-dominated, branch in BaTiO<sub>3</sub>. In consequence, the unstable localized ferroelectric distortion in real space is nearly isotropic, in contrast to the pronounced anisotropy in BaTiO<sub>3</sub>. Finally, there is an antiferrodistortive instability at the R-point (R<sub>25</sub> mode). As similarly observed in SrTiO<sub>3</sub> , this instability is confined to quasi-one-dimensional “tubes” of reciprocal space running along the edges of the simple cubic Brillouin zone (R<sub>25</sub> and M<sub>3</sub> modes and the branch connecting them). The branches emanating from this region stabilize rapidly away from the Brillouin zone edge towards, in particular, $`\mathrm{\Gamma }_{25}`$ and X<sub>3</sub>. In real space, this instability appears as a cooperative rotation of oxygen octahedra, with strong correlations in the plane perpendicular to the axis of rotation, and little correlation between rotations in different planes. The lack of interplane correlation, arising from the flatness of the R<sub>25</sub>–M<sub>3</sub> branch, suggests the absence of coupling between the oxygen motion in different planes. This will be discussed further in the next section.
The ground state of PbZrO<sub>3</sub> is an antiferroelectric with 8 formula units per unit cell, obtained by freezing in a set of coupled modes, most importantly modes at R and $`\mathrm{\Sigma }(\frac{1}{4}\frac{1}{4}0)`$. The phonon dispersion correspondingly shows even more pronounced and complex instabilities than for PbTiO<sub>3</sub>. Overall, the unstable branches are dominated by Pb and O displacements, with no significant Zr character. There is still a polar instability at the $`\mathrm{\Gamma }`$ point but the eigenvector (see Table II) is clearly dominated by the displacement of lead against the oxygens while the Zr atom now moves with these oxygens. In fact, the modes where the Zr is displaced against the oxygens ($`\mathrm{\Gamma }_{\mathrm{LO}}`$ at 160 cm<sup>-1</sup>, M$`_3^{\prime }`$, X<sub>5</sub> modes) are now all stable. The octahedral rotation branch is again remarkably flat and is significantly more unstable at R<sub>25</sub> and M<sub>3</sub> than in PbTiO<sub>3</sub>. The antiferrodistortive instability retains some one-dimensional character but spreads into a larger region of reciprocal space: the $`\mathrm{\Gamma }_{25}`$ and X<sub>3</sub> transverse oxygen motions, related to the R<sub>25</sub> mode, are still stable but with a relatively low frequency. We note finally that the stiffest longitudinal and transverse oxygen branches have been shifted to higher energy relative to the titanates.
## V Interatomic force constants
In the previous section, comparisons between the three compounds were made by analyzing phonon dispersion relations along high-symmetry lines in reciprocal space. A complementary, highly instructive picture of the quadratic-order structural energetics of the system is provided by direct examination of the real-space interatomic force constants (IFC).
The interatomic force constants (IFC) are generated in the construction of the phonon dispersion relations; their computation has been described in Section II. Our convention is that the IFC matrix $`C_{\alpha ,\beta }(l\kappa ,l^{\prime }\kappa ^{\prime })`$ which relates the force $`F_\alpha (l\kappa )`$ on atom $`\kappa `$ in cell $`l`$ and the displacement $`\mathrm{\Delta }\tau _\beta (l^{\prime }\kappa ^{\prime })`$ of atom $`\kappa ^{\prime }`$ in cell $`l^{\prime }`$ is defined through the following expression: $`F_\alpha (l\kappa )=-C_{\alpha ,\beta }(l\kappa ,l^{\prime }\kappa ^{\prime })\mathrm{\Delta }\tau _\beta (l^{\prime }\kappa ^{\prime })`$. Moreover, the total IFC can be decomposed into a dipole-dipole part (DD) and a short-range part (SR), following Refs. . Such a decomposition is somewhat arbitrary but is useful for understanding the microscopic origin of the trends among different compounds. For convenience, the atoms are labeled according to Table III, as illustrated in Fig. 2. The interatomic force constants are reported either in cartesian coordinates or in terms of their longitudinal ($`\parallel `$) and transverse ($`\perp `$) contributions along the line connecting the two atoms. The results for BaTiO<sub>3</sub>, PbTiO<sub>3</sub> and PbZrO<sub>3</sub> are presented in Tables IV, V and VI.
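The longitudinal/transverse decomposition used in the tables can be obtained from a 3x3 IFC block and the vector joining the two atoms; a small sketch follows, where the numbers in the example are purely illustrative and not taken from the tables.

```python
import numpy as np

def long_trans(C_block, r_vec):
    """Longitudinal component of a 3x3 IFC block along the bond direction
    r_vec, and the mean of the two transverse components."""
    u = np.asarray(r_vec, dtype=float)
    u = u / np.linalg.norm(u)
    C_block = np.asarray(C_block, dtype=float)
    c_par = u @ C_block @ u
    c_perp = (np.trace(C_block) - c_par) / 2.0
    return c_par, c_perp

# example: a pair of atoms separated along the z axis of the cubic cell
C = np.diag([-0.5, -0.5, 2.3])           # illustrative values only
print(long_trans(C, [0.0, 0.0, 1.0]))    # -> (2.3, -0.5)
```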
First, we examine the “self-force constant,” which specifies the force on a single isolated atom at a unit displacement from its crystalline position, all the other atoms remaining fixed. The values are given in Table IV. The self-force constants are positive for all atoms in the three compounds, so that all three are stable against isolated atomic displacements. Therefore, it is only the cooperative motion of different atoms that can decrease the energy of the crystal and generate an instability, such as is observed in the phonon dispersion relations presented in the previous Section. The analysis of the IFCs will help us to identify the energetically favorable coupling in the displacements and elucidate the origin of the unstable phonon branches.
Next, we discuss the ferroelectric instability at $`\mathrm{\Gamma }`$, and the phonon branches which emanate from it. In barium titanate, it was found that the unstable eigenvector is dominated by Ti displacement along the Ti–O–Ti chain. If we consider the simple case where only Ti atoms are allowed to displace, we find that the destabilizing contribution from the Ti<sub>0</sub>–Ti<sub>1</sub> $`\parallel `$ interaction itself is nearly enough to compensate the Ti self-force constant (Table V). In addition, the fact that the Ti<sub>0</sub>–Ti<sub>1</sub> $`\perp `$ interaction is comparatively small can account directly for the characteristic flat dispersion along $`\mathrm{\Gamma }`$-X and $`\mathrm{\Gamma }`$-M and the strong stiffening along $`\mathrm{\Gamma }`$-R, associated with the chain-like nature of the instability. For the true eigenvector, another important, though relatively small, destabilizing contribution comes from the cooperative displacement of the O<sub>1</sub> atoms against the titaniums along the Ti–O chains. This, together with the total contribution of the rest of the IFCs, is responsible for the actual instability of the ferroelectric Ti-dominated branches in BaTiO<sub>3</sub>.
For lead titanate, the energetics of the Ti-only displacements, dominated by the Ti self-force constant and the Ti<sub>0</sub>–Ti<sub>1</sub> $`\parallel `$ and $`\perp `$ interactions, are remarkably similar to those in BaTiO<sub>3</sub> (Table V). However, in PbTiO<sub>3</sub> there is also an important destabilization associated with pure Pb displacements . This can be fully attributed to the large difference in the Ba and Pb self-force constants, while the A<sub>0</sub>–A<sub>1</sub> $`\parallel `$ and $`\perp `$ interactions are very similar in the two compounds. Also, the A<sub>0</sub>–B<sub>0</sub> $`\parallel `$ and $`\perp `$ cation interactions are of the same order of magnitude as in BaTiO<sub>3</sub> and combine to give a surprisingly small $`xx`$ coupling. At $`\mathrm{\Gamma }`$ and in the phonon branches which emanate from it, symmetry considerations permit the mixing of Ti–O and Pb–O displacements, thus accounting for the nature of the ferroelectric eigenvector. However, at X, M and R symmetry labels distinguish the Ti-dominated (X<sub>5</sub>, M$`_3^{\prime }`$ and R$`_{25^{\prime }}`$) and Pb-dominated (X$`_5^{\prime }`$, M$`_2^{\prime }`$ and R<sub>15</sub>) modes, which can be readily identified in the calculated phonon dispersion. Also, the Pb<sub>0</sub>–Pb<sub>1</sub> coupling is much smaller in magnitude than the Ti<sub>0</sub>–Ti<sub>1</sub> coupling, which accounts for the relatively weak dispersion of the Pb-dominated branch from $`\mathrm{\Gamma }`$ to R. In the true eigenvectors, these instabilities are further reinforced by displacements of the oxygens. While the longitudinal IFC between Ba<sub>0</sub> and O<sub>1</sub> was very small in BaTiO<sub>3</sub>, there is a significant destabilizing interaction between Pb<sub>0</sub> and O<sub>1</sub> in PbTiO<sub>3</sub>, which further promotes the involvement of Pb in the unstable phonon branches. We note that the Ti<sub>0</sub>–O<sub>1</sub> longitudinal interaction is repulsive in PbTiO<sub>3</sub>, but it is even smaller in amplitude than in BaTiO<sub>3</sub> and its stabilizing effect is compensated by the transverse coupling between Pb and O<sub>1</sub>.
In lead zirconate, the unstable eigenvector at $`\mathrm{\Gamma }`$ is strongly dominated by Pb–O motion, with little involvement of Zr. This can be understood by comparing, in Table V, the energetics of Zr-only displacements with those of Ti-only displacements in PbTiO<sub>3</sub> and BaTiO<sub>3</sub>: the Zr self-force constant is significantly larger and the Zr<sub>0</sub>–Zr<sub>1</sub> $`\parallel `$ and $`\perp `$ interactions are smaller, so that Zr cannot move as easily as Ti. Also, the Zr<sub>0</sub>–O<sub>1</sub> $`\parallel `$ interaction is now significantly repulsive, explaining why the Zr atom does not move against the oxygens, but with them. As for the titanates, we note finally that the Zr atoms are mainly coupled along the B–O chains, so that the characteristic dispersion of the B-atom modes is preserved, only at higher frequencies. On the other hand, the Pb self-force constant is much smaller, the Pb<sub>0</sub>–Pb<sub>1</sub> $`\parallel `$ and $`\perp `$ interactions are only slightly smaller, and the destabilizing coupling between lead and oxygen is similar to that in PbTiO<sub>3</sub>, accounting for the involvement of Pb in the instability.
Finally, we discuss the antiferrodistortive instability identified with the R<sub>25</sub> and M<sub>3</sub> modes and the branch along R–M connecting them. There is a marked variation in the frequency of the R<sub>25</sub> mode in the three compounds, ranging from the lack of any instability in BaTiO<sub>3</sub>, to PbTiO<sub>3</sub> with an unstable R<sub>25</sub> mode that nonetheless does not contribute to the ground state, and finally to PbZrO<sub>3</sub> in which the R<sub>25</sub> mode is even more unstable and contributes significantly to the observed ground state . The eigenvector of this mode is completely determined by symmetry and corresponds to a coupled rotation of the corner-connected oxygen octahedra. Its frequency depends only on the oxygen IFCs, predominantly the self-force constant and the off-diagonal coupling between nearest neighbor oxygen atoms. In fact, the latter (for example, O<sub>1y</sub>–O<sub>2z</sub> in Table VI) is remarkably similar in all three compounds. The trend is therefore associated with the rapid decrease in the transverse O self-force constant from BaTiO<sub>3</sub> to PbTiO<sub>3</sub> to PbZrO<sub>3</sub> and the resulting compensation of the contribution from the self-force constant by the destabilizing contribution from the off-diagonal coupling.
The self-force constant can be written as a sum over interatomic force constants, according to the requirement of translational invariance: $`C_{\alpha ,\beta }(l\kappa ,l\kappa )=-\underset{(l^{\prime }\kappa ^{\prime })\ne (l\kappa )}{\sum }C_{\alpha ,\beta }(l\kappa ,l^{\prime }\kappa ^{\prime })`$. It is therefore of interest to identify which interatomic force constants are responsible for the trend in the transverse oxygen self-force constant. The suggestion that the trend is due to covalency-induced changes in the Pb–O interactions can be directly investigated through a “computer experiment.” Everything else being equal, we artificially replace the IFC between A<sub>0</sub> and O<sub>1</sub> atoms in BaTiO<sub>3</sub> by its value in PbTiO<sub>3</sub>, consequently modifying the self-force constant on A and O atoms. For this hypothetical material, the A-atom dominated modes are shifted to lower frequencies while the frequency of the R<sub>25</sub> mode is lowered to 40$`i`$ cm<sup>-1</sup>. If we introduce the stronger A<sub>0</sub>–O<sub>1</sub> interaction of PbZrO<sub>3</sub>, we obtain an even larger R<sub>25</sub> instability of 103$`i`$ cm<sup>-1</sup>.
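The bookkeeping of such a substitution experiment follows directly from the translational-invariance relation above: when one type of interatomic block is replaced, the self-force constant changes by minus the total change summed over the neighbors of that type. The sketch below illustrates only this bookkeeping and the conversion of a force-constant eigenvalue into a (possibly imaginary) frequency; the blocks, multiplicities and masses are generic inputs, not the actual values used here.

```python
import numpy as np

def updated_self_fc(self_fc, old_block, new_block, n_neighbors):
    """Translational invariance: C(self) = -sum over all other blocks, so
    replacing one block type shifts the self term by -n_neighbors*(new - old)."""
    return np.asarray(self_fc) - n_neighbors * (np.asarray(new_block) - np.asarray(old_block))

def mode_frequency(fc_eigenvalue, mass):
    """Convert a force-constant eigenvalue divided by the relevant mass
    (i.e. omega^2 in consistent units) into a frequency; a negative value
    is reported as an imaginary, i.e. unstable, frequency."""
    w2 = fc_eigenvalue / mass
    return np.sqrt(w2) if w2 >= 0.0 else 1j * np.sqrt(-w2)
```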
The previous simulation demonstrates the crucial role played by the lead-oxygen interaction in generating the AFD instability. However, this change alone is not sufficient to reproduce the flatness of the R<sub>25</sub>–M<sub>3</sub> branch, as the corresponding frequencies of the M<sub>3</sub> mode in the two hypothetical cases above are 92 cm<sup>-1</sup> and 25$`i`$ cm<sup>-1</sup>, respectively. Naively, the absence of dispersion of the antiferrodistortive mode along that line would be interpreted as the absence of coupling between the oxygens in the different planes. However, as can be seen in Table VI, the $`yy`$ transverse coupling between O<sub>1</sub> and O<sub>3</sub> is far from negligible, and acts to amplify the AFD instability at R with respect to M. In the lead compounds, however, this is compensated by another $`yz`$ coupling, between O<sub>1</sub> and O<sub>5</sub>. The latter is significantly smaller in BaTiO<sub>3</sub> (by 35 %). If we consider a third hypothetical compound in which this coupling in BaTiO<sub>3</sub> is additionally changed to its value in PbTiO<sub>3</sub>, we recover a flat behavior along the R–M line. In the lead perovskites, the flatness of this band appears therefore as a consequence of a compensation between different interplane interactions, and cannot be attributed to complete independence of oxygen motions in the different planes.
## VI Discussion
In Section IV, we observed marked differences between the phonon dispersion relations and eigenvectors in the three related compounds. Through the real-space analysis in the previous section, we have seen that these differences arise from changes in a few key interatomic force constants.
First, we remark that B–O interactions depend strongly on the B atom, being similar in PbTiO<sub>3</sub> and BaTiO<sub>3</sub>, and quite different in PbZrO<sub>3</sub>. In fact, the SR force contributions to the Zr<sub>0</sub>–O<sub>1</sub> and Ti<sub>0</sub>–O<sub>1</sub> interactions are very similar, so that the difference arises from the dipolar contribution. In PbZrO<sub>3</sub>, this contribution is reduced as a consequence of the lower values of the Born effective charges (see Table V). This trend provides another example of the very delicate nature of the compensation between SR and DD forces, previously pointed out for BaTiO<sub>3</sub> .
Next, we remark that A–O interactions depend strongly on the A atom, being similar in PbTiO<sub>3</sub> and PbZrO<sub>3</sub>, and quite different in BaTiO<sub>3</sub>. This change originates in the covalent character of the bonding between Pb and O, which results both in smaller A–O SR coupling and a larger Born effective charge for Pb. Even though the impact of the latter on destabilizing the DD interaction is partly compensated by the increased $`ϵ_{\mathrm{}}`$, the net effect is to promote the Pb–O instability.
As discussed above, the self-force constant can be written as a sum over interatomic force constants. It can be easily verified that the trends in the self-force constants observed in Table III are primarily associated with the trends in A–O and B–O interactions.
The rest of the IFCs given in Table VI are actually remarkably similar. For example, A–B interactions are apparently insensitive to the identity of A (Ba, Pb) or B (Ti, Zr). This is true also for A–A, B–B and most O–O interactions. The small differences observed can at least in part be attributed to differences in the lattice constants and in $`ϵ_{\mathrm{}}`$ for the three compounds.
The similarities in IFC’s among compounds with related compositions offer an intriguing opportunity for the modelling of the lattice dynamics of solid solutions. In the simplest case, the lattice dynamics of ordered supercells of compounds such as PZT could be obtained by using the appropriate A–O and B–O couplings from the pure compounds and averaged values for the A–B, A–A, B–B and O–O interactions. In a more sophisticated treatment, the dipolar contributions could be separately handled within an approach correctly treating the local fields and Born effective charges. Implementation of these ideas is in progress.
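In the simplest of these schemes the bookkeeping is trivial; the sketch below is only meant to make the recipe concrete, using hypothetical dictionaries of 3×3 IFC blocks keyed by atom pairs (the separate handling of the dipolar part and of local fields mentioned above is deliberately left out).

```python
def mixed_ifcs(ifc_A, ifc_B, pairs_from_A, pairs_from_B):
    """Crude IFC set for an ordered solid-solution supercell.

    A-O and B-O blocks are copied from the parent compound appropriate to the
    local environment; every other block is the arithmetic average of the parents.
    """
    mixed = {}
    for pair in ifc_A:
        if pair in pairs_from_A:
            mixed[pair] = ifc_A[pair].copy()
        elif pair in pairs_from_B:
            mixed[pair] = ifc_B[pair].copy()
        else:
            mixed[pair] = 0.5 * (ifc_A[pair] + ifc_B[pair])
    return mixed
```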
## VII Conclusions
In this paper, we have described in detail the first-principles phonon dispersion relations and real-space interatomic force constants of cubic PbTiO<sub>3</sub> and PbZrO<sub>3</sub> and compared them with results previously obtained for BaTiO<sub>3</sub>. The modifications induced by the substitution of Ba by Pb and of Ti by Zr are seen to be most easily understood by considering the real-space IFCs. The replacement of Ba by Pb strongly strengthens the A–O coupling, which is directly responsible for both the involvement of Pb in the ferroelectric eigenvector and the appearance of the antiferrodistortive instability. The two-dimensional real-space character of the latter results from an additional slight modification of the O–O coupling. The replacement of Ti in PbTiO<sub>3</sub> by Zr strongly modifies the B–O interaction, suppressing the involvement of Zr in the unstable modes of PbZrO<sub>3</sub>. The decrease of the Born effective charges along the B–O bonds is a crucial factor in modifying this interaction. In addition, the substitution of Ti by Zr slightly strengthens the Pb–O coupling. Apart from these modifications, the other IFCs are remarkably similar in the three compounds studied. The consequent prospects for transferability to solid solutions were discussed.
###### Acknowledgements.
The authors thank Ch. LaSota and H. Krakauer for communicating unpublished results concerning SrTiO<sub>3</sub> as well as X. Gonze for the availability of the abinit package, helpful for the interpolation of the phonon dispersion curves and the analysis of the interatomic force constants. PhG, UVW and KMR acknowledge useful discussions with many participants in the 1998 summer workshop “The Physics of Insulators” at the Aspen Center for Physics, where part of this work was performed. This work was supported by ONR Grant N00014-97-0047. EC received partial funding from an NRC Postdoctoral Fellowship and from Sandia National Laboratories. Sandia is a multi-program national laboratory operated by the Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DE-AL04-94AL8500.
# Possible Mechanism for Superconductivity in Sulfur—Common Theme for Unconventional Superconductors?
## Abstract
Sulfur has recently been found to be a superconductor at high pressure. At $`\sim `$93 GPa $`T_c`$ is 10.1 K, and sulfur is in a base-centered orthorhombic (b.c.o.) structure. At $`\sim `$160 GPa $`T_c`$ is 17 K and sulfur is in a rhombohedral ($`\beta `$-Po) structure. The mechanism for superconductivity in sulfur is not known; in particular, a band-structure calculation does not find superconductivity in sulfur until 500 GPa. Following from work by Anderson, in a 2D strongly interacting, non-Fermi liquid system with some degree of disorder at $`T=0`$, the only known conducting state is a superconductor. Following this idea it has been suggested that both the $`HT_c`$ cuprates and 2D electron gas systems are superconductors with planar conducting planes. Similarly, here we suggest that the mechanism for superconductivity in sulfur is conduction in 2D planes, which emerge as the planar rings of low-pressure sulfur pucker at higher pressures (b.c.o. and $`\beta `$-Po). As well, we note some other consequences of Anderson’s work for the study of $`HT_c`$ materials.
Recently Struzhkin et al. have found that at high pressures sulfur becomes a superconductor. At low pressure sulfur is an insulator with a planar ring structure. Struzhkin et al. find that at $`\sim `$93 GPa sulfur is a superconductor with $`T_c`$ of 10.1 K. At this pressure sulfur adopts a base-centered orthorhombic (b.c.o.) structure in which the planar rings are now puckered. At $`\sim `$160 GPa Struzhkin et al. find $`T_c`$ of 17 K. At this pressure sulfur is in a rhombohedral phase ($`\beta `$-Po structure) which also features puckered rings. The mechanism for superconductivity in sulfur is not completely understood. Indeed, Struzhkin et al. note that using band-structure calculations of electron-phonon interactions Zakharov and Cohen found sulfur to be superconducting above 550 GPa, but not at the much lower pressures at which superconductivity was found experimentally. Here we suggest that, similarly to proposed mechanisms of superconductivity in copper oxide materials and in 2D electron gases at low temperatures (mechanisms which, we grant, are themselves controversial), superconductivity in sulfur is due to conduction in 2D planes, which in the case of sulfur emerge in the puckered rings (Fig. 1).
Philips et al., seizing upon recent experimental observations that in a number of systems 2D electron gases at low temperature are conductors (in contradiction to theory, which predicted the electron gas to be an insulator), suggested that the electron gas is not only a conductor, but a superconductor. They provide a number of arguments to support this notion, including the fact that the features of the transition from insulator to conductor in the electron gas system are reminiscent of an insulator-superconductor transition; that there exists a critical magnetic field above which conductivity is destroyed; and that the insulating-conducting transition lies near an electron crystal state in which large charge retardation effects could possibly lead to Cooper pairing. Furthermore, with reference to a classic paper written by Anderson , they note that in 2D at T=0 the only known conducting non-Fermi liquid state in the presence of disorder with zero magnetic field is a superconductor. In general Anderson’s paper emphasizes that often in the presence of some disorder, a superconducting state can be more stable and less likely to be abolished than other conducting states. High pressures may configure sulfur into such a strongly interacting non-Fermi liquid state with emergent planes. A similar mechanism may explain superconductivity in oxygen at high pressure . Experiments consistent with this idea would include the finding of conduction preferentially in the direction indicated in the figure.
The application of the idea of Anderson’s paper to unconventional superconducting materials has a number of current and future applications: (1) It has recently been reported that $`T_c`$ can nearly be doubled in certain superconducting perovskites by making thin films of the material under epitaxial strain . This result may be hard to explain by theories proposing a single mechanism responsible for pairing in the superconducting state. However, assuming, at least in arguendo that 2D planes are important for superconductivity in these materials, this result is much easier to understand from the perspective of Anderson’s paper: Perhaps there are a number of different contributions to the pairing mechanism, but regardless of the nature or number of such contributions, $`T_c`$ in a conducting (and thus superconducting) material will rise proportionally to a reduction in the localizing ability of the host state which is accomplished by growth of a material under epitaxial strain. (2) Theoretically and computationally it might be useful to try to find strongly interacting non-Fermi liquid systems which are not superconductors in the presence of some disorder, to help steer experiments from non-productive paths. As well, it would be helpful to try to find other general classes of geometries or materials which are strongly interacting non-Fermi liquids and possible conductors. (3) Experimentalists should appreciate that, especially when studying systems which are predicted to be insulators, if a material is a conductor, it might be a superconductor. As well, it might be possible, for example in 2D systems, to screen large numbers of materials looking for conductors.
# Comment on “A landscape theory of aggregation”
<sup>*</sup><sup>*</sup>footnotetext: Laboratoire associé au CNRS (URA n 800) et à l’Université P. et M. Curie - Paris 6 (B. J. Pol. S. 28 (1998) 411-412)
## Abstract
The problem of aggregation processes in alignments is the subject of a paper published recently in a statistical physics journal (Physica A230, 174-188, 1996). Two models are presented and discussed in that paper. First the energy landscape model proposed by Axelrod and Bennett (B. J. Pol. S. 23, 211-233, 1993) is analysed. The model is shown not to support most of its claimed results. Then a second model is presented to reformulate the problem correctly within statistical physics and to extend it beyond the initial Axelrod-Bennett analogy.
Mathematical tools and physical concepts might be a promising way to describe social collective phenomena. Several attempts along these lines have been made, in particular to study political organisations , voting systems , and group decision making . However, such an approach should be carefully controlled. A straightforward mapping of a physical theory built for a physical reality onto a social reality could be rather misleading.
In their work Axelrod and Bennett (AB) used the physical concept of minimum energy to build a landscape model of aggregation . On this basis, they study the coalitions which countries or firms could make to optimize their respective relationships, which is certainly an interesting problem. To achieve their purpose, they constructed a model of magnetic disorder from the available data for propensities of countries or firms to co-operate or to conflict. Using their model, they drew several conclusions based on the existence of local frustration between the interacting parties .
However, there was some confusion in their use of physics, and they did not stick to their equations. In their model, unfortunately, the disorder is only apparent, and it results in the existence of just two energy minima. This is the Mattis spin glass model . It has been shown that performing an appropriate change of variables removes the disorder, and the model then becomes identical to a well-ordered system, the zero temperature finite size ferromagnetic Ising model .
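The point can be made explicit in two lines (a generic sketch of the standard Mattis construction, not a rederivation of the AB propensity matrix). If the couplings factorize as

$$E=-\sum _{i<j}J_{ij}s_is_j,\qquad J_{ij}=\xi _i\xi _j,\qquad \xi _i=\pm 1,s_i=\pm 1,$$

then the change of variables $`\sigma _i=\xi _is_i`$ gives

$$E=-\sum _{i<j}\sigma _i\sigma _j,$$

an ordered ferromagnet with exactly two degenerate ground states and no frustration. An Edwards-Anderson model, with independently random couplings of both signs, cannot be gauged away in this manner and generically contains frustrated loops (for example $`J_{12}J_{23}J_{31}<0`$).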
In contrast, most AB comments and conclusions are based on the existence of frustration in the countries’ or firms’ interactions . Such local frustration would produce a degeneracy of the energy landscape which in turn would yield instabilities in the global system. However, there is no frustration in the model they derived from their data.
In fact they are confusing two models associated with disordered magnetic systems: one without frustration, the Mattis spin glass model, and one with frustration, the Edwards-Anderson spin glass model . The AB model turns out to be of the Mattis spin glass type, while all their comments are drawn from the physics associated with an Edwards-Anderson spin glass model. Most of Axelrod and Bennett’s conclusions cannot be drawn from their model.
To demonstrate our statement requires the use of some mathematical technicalities which are lengthy and not appropriate to the present journal. Therefore our demonstration has been published in a physics journal , where first the AB model is analysed within the field of statistical physics and then the conclusions mentioned above are demonstrated. Furthermore, we are able to build up a new coalition model to describe alignment and competition among a group of actors . Our model does embody the main properties claimed in the AB model. Moreover it also predicts new behavior related to the dynamics of bimodal coalitions. In particular the stability of the cold war period and the East European fragmentation process induced by the collapse of the Warsaw pact are given an explanation.
RU-99-03

# The overlap lattice Dirac operator and dynamical fermions
Herbert Neuberger
neuberg@physics.rutgers.edu
Department of Physics and Astronomy
Rutgers University, Piscataway, NJ 08855-0849
Abstract
I show how to avoid a two level nested conjugate gradient procedure in the context of Hybrid Monte Carlo with the overlap fermionic action. The resulting procedure is quite similar to Hybrid Monte Carlo with domain wall fermions, but is more flexible and therefore has some potential worth exploring.
By now it is clear that strictly massless QCD can be put on the lattice without employing fine tuning . At the moment, all practical ways to do this are theoretically based on the overlap . The procedures are practical only because the fermionic matrix $`D`$ admits a simple expression in terms of a function $`\epsilon `$ of a very sparse matrix $`H_W`$. $`D`$, $`\epsilon `$ and $`H_W`$ will be defined below.
One may wonder why we need to settle for a function $`\epsilon `$ of a sparse matrix, and not use a matrix $`D`$ which is sparse by itself. The main point is that $`\epsilon `$ is not analytic in its argument while $`H_W`$ is analytic in the link gauge fields, given by unitary matrices $`U_\mu (x)`$, where $`\mu =1,2,3,4`$ denotes a direction and $`x`$ a lattice site. However, for strictly massless fermions, $`D`$ cannot be analytic in the link variables. Indeed, if it were, we could not have eigenvalues of $`D`$ depending nonanalytically on the link variables: On the lattice a set of link variables all set to unity can be smoothly deformed to a good approximation to an instanton. In the process of this deformation the eigenvalue of $`D`$ closest to zero will move until some intermediate set of links variables is reached and after that will be stuck at zero. More complicated evolutions can also occur, but they all have to be nonanalytic in the link variables. Moreover, the lack of analyticity comes from the global structure of the gauge background described by the link variables. A sparse and local $`D`$ cannot provide such an eigenvalue movement. In the overlap, the burden of introducing the nonanalyticity is carried by the function $`\epsilon `$. The dynamics can then be relegated to a sparse matrix $`H_W`$ which is local and analytic in the link variables. That only a function of a sparse matrix enters makes it still possible to use polynomial Krylov space methods and avoid full storage of the fermion matrix. It is well known that full storage is prohibitive at reasonable system sizes.
The explicit form of the massless fermionic matrix $`D`$ is
$$D=\frac{1}{2}(1+ϵ^{}ϵ),$$
$`(1)`$
where $`ϵ^{}`$ and $`ϵ`$ are hermitian and square to unity. Thus, $`Vϵ^{}ϵ`$ is unitary. Replacing $`1`$ by a parameter $`\rho `$ slightly larger (smaller) than unity corresponds to giving the fermions a positive (negative) mass.<sup>*</sup><sup>*</sup>The pure gauge field action is assumed to have zero theta-parameter. Switching the sign of the physical mass corresponds to replacing $`\rho `$ by $`\frac{1}{\rho }`$ . $`D`$ acts on a space of dimension $`4Vn_c`$ where $`V`$ is the number of lattice sites ($`x`$), $`4`$ is the number of spinorial indices ($`\alpha ,\beta `$) and the fermions are in the fundamental representation of $`SU(n_c)`$, also carrying a group index $`i`$. Usually, one suppresses the spinorial and group indices, but indicates the site indices explicitly. One uses the indices $`\mu ,\nu `$ both for directions and for vectors of length one in the respective direction.
Clearly, $`detϵ=(-1)^{\frac{1}{2}trϵ}`$ and the same is true of $`ϵ^{\prime }`$. The latter is picked so that $`trϵ^{\prime }=0`$ for all gauge fields. The simplest choice is $`ϵ^{\prime }=\gamma _5`$, in which case all the gauge field dependence comes in through $`ϵ`$. $`\frac{1}{2}trϵ`$ is the topological charge of the background . $`ϵ`$ is defined by
$$\begin{array}{cc}& ϵ=\frac{H_W}{\sqrt{H_W^2}},\qquad H_W=\gamma _5D_W=H_W^{\dagger },\hfill \\ & (D_W\psi )(x)=\psi (x)/\kappa -\sum _\mu \left[(1-\gamma _\mu )U_\mu (x)\psi (x+\mu )+(1+\gamma _\mu )U_\mu ^{\dagger }(x-\mu )\psi (x-\mu )\right]\hfill \end{array}$$
$`(2)`$
where $`D_W`$ is the Wilson lattice Dirac operator with hopping $`\kappa `$ in the range $`(.125,.25)`$. The matrices $`\gamma _\mu `$ are Euclidean four by four Dirac matrices acting on spinor indices.
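To make the sparseness of $`H_W`$ concrete, here is a minimal (and deliberately unoptimized) sketch of the action of $`D_W`$ on a lattice fermion field; the lattice layout, the particular Euclidean gamma-matrix basis and the periodic boundary conditions are choices made for the illustration only, not features of any specific production code.

```python
import numpy as np

# A hermitian Euclidean gamma-matrix basis; gamma5 below is hermitian and squares to 1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [np.kron(np.array([[0, 1], [1, 0]]), s) for s in (sx, sy, sz)]
gamma.append(np.kron(np.array([[0, -1j], [1j, 0]]), np.eye(2, dtype=complex)))  # gamma_4
gamma5 = gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

def apply_DW(psi, U, kappa):
    """psi[x,y,z,t,spin,color]; U[mu][x,y,z,t,color,color]; returns D_W psi."""
    out = psi / kappa
    for mu in range(4):
        fwd = np.roll(psi, -1, axis=mu)            # psi(x+mu), periodic b.c.
        bwd = np.roll(psi, 1, axis=mu)             # psi(x-mu)
        Ubwd = np.roll(U[mu], 1, axis=mu)          # U_mu(x-mu)
        # (1 - gamma_mu) U_mu(x) psi(x+mu)
        hopf = np.einsum('st,...tc->...sc', np.eye(4) - gamma[mu],
                         np.einsum('...ab,...tb->...ta', U[mu], fwd))
        # (1 + gamma_mu) U_mu^dagger(x-mu) psi(x-mu)
        hopb = np.einsum('st,...tc->...sc', np.eye(4) + gamma[mu],
                         np.einsum('...ba,...tb->...ta', Ubwd.conj(), bwd))
        out = out - hopf - hopb
    return out
# H_W psi is obtained by applying apply_DW and then gamma5 in spin space.
```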
$`ϵ`$ is defined for all gauge orbits with $`H_W^2>0`$. It is nonanalytic when $`H_W`$ has a zero eigenvalue. This exclusion of the “zero measure” set of gauge fields where $`H_W`$ has exact zero modes is necessary , as explained above, in order to cut the space of lattice gauge orbits up into different topological sectors. The space of allowed gauge backgrounds has also to provide a base manifold capable of supporting the nontrivial $`U(1)`$ bundles needed to reproduce chiral anomalies .
Saying that the space of forbidden gauge orbits has zero measure is not really sufficient to discount other possible consequences of the nonanalyticity of $`D`$: for example, one may be worried that nonlocal effects are somehow introduced, and the regularized theory isn’t going to become massless $`QCD`$ in the continuum limit. That there are no bad side consequences of the nonanalyticity is obvious from the following observation: Gauge fields with relatively small local curvature (in other words, with all parallel transporters round elementary plaquettes close to unitary matrices - note that this is a gauge invariant requirement) will produce an $`H_W^2`$ bounded away from zero. Indeed, the spectrum of $`H_W`$ is gauge invariant and has a gap around zero on the trivial orbit. Thus, the above is evident by continuity. (More formal arguments have recently appeared in .) The continuum limit is dominated by gauge configurations which are far from the excluded backgrounds where $`H_W`$ is non-invertible. That this had to be quite obvious follows from the fact that $`D_W`$, by itself, would describe massive fermions on the lattice, and that this mass, controlled by the variable $`\kappa `$, is kept of order inverse lattice spacing $`a`$ when $`a`$ is taken to zero.
In theory we need $`ϵ=\epsilon (H_W)`$, where $`\epsilon (x)`$ is the sign function giving the sign of $`x`$, and is nonanalytic at $`x=0`$. However, since $`H_W`$ will never have exactly zero eigenvalues, a numerical implementation of $`\epsilon (x)`$ seems possible. Of course, what determines the level of difficulty is how close the numerical implementation of $`\epsilon (x)`$ has to be to the true $`\epsilon (x)`$ for all values of $`x`$ that are possible. The set of values of $`x`$ we need to consider is the set of values the eigenvalues of $`H_W`$ can take. In practice, $`H_W`$ can have eigenvalues close to zero, and, since in that vicinity the true $`\epsilon (x)`$ has a jump, the numerical implementation inevitably becomes expensive (in cycles) for gauge fields that produce an $`H_W`$ with numerically tiny eigenvalues. Of course, the overall scale of $`H_W`$ is irrelevant for the sign function, so it is the ratio between the largest and smallest eigenvalues (in absolute value) that matters. This is the condition number of $`H_W`$, and it plays a significant role in all numerical considerations that follow.
The method that seems most promising is to compute the action of $`ϵ`$ on a vector $`\varphi `$ by approximating the sign function $`\epsilon (x)`$ by a ratio of polynomials
$$\epsilon (x)\approx \epsilon _n(x)\equiv \frac{P(x)}{Q(x)},$$
$`(3)`$
where the $`deg(Q)=deg(P)+1=2n`$. A simple choice, which obeys in addition $`|\epsilon _n(x)|<1`$, is :
$$\epsilon _n(x)=\frac{(1+x)^{2n}-(1-x)^{2n}}{(1+x)^{2n}+(1-x)^{2n}}=\frac{x}{n}\sum _{s=1}^{n}\frac{1}{\mathrm{cos}^2[(s-\frac{1}{2})\frac{\pi }{2n}]x^2+\mathrm{sin}^2[(s-\frac{1}{2})\frac{\pi }{2n}]}.$$
$`(4)`$
This choice also respects the symmetry $`\epsilon (x)=\epsilon (\frac{1}{x})`$. Thus, it treats the extremities of the spectrum of $`H_W`$ equally. There is a certain advantage in knowing that the approximation $`\epsilon _n(x)`$ never exceeds unity in absolute magnitude. This ensure that the related approximate matrix $`D`$ never has strictly zero eigenvalues, a source of concern when $`D`$ itself is inverted, something we need to also do.
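The equality of the two forms in (4), and the quality of the approximation away from $`x=0`$, can be checked with a few throwaway lines (a scalar sketch, independent of any lattice code):

```python
import numpy as np

def eps_n_ratio(x, n):
    """Ratio-of-polynomials form of eq. (4)."""
    a, b = (1 + x) ** (2 * n), (1 - x) ** (2 * n)
    return (a - b) / (a + b)

def eps_n_poles(x, n):
    """Equivalent sum over n pole terms in eq. (4)."""
    s = np.arange(1, n + 1)
    ang = (s - 0.5) * np.pi / (2 * n)
    return (x / n) * np.sum(1.0 / (np.cos(ang) ** 2 * x ** 2 + np.sin(ang) ** 2))

x, n = 0.37, 8
print(eps_n_ratio(x, n), eps_n_poles(x, n))   # identical; both close to sign(x) unless |x| is tiny
```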
It was pointed out in that abandoning $`|\epsilon _n(x)|<1`$, the quantity $`\mathrm{max}_{x\in (a,b)}|\epsilon (x)-\epsilon _n(x)|`$ ($`a<H_W<b`$) can be minimized for fixed $`n`$ and the needed $`n`$ for a given accuracy can be reduced relative to (4). It is advantageous to work with a small $`n`$. It is not known at the moment whether the tradeoff between a smaller $`n`$ and $`|\epsilon _n(x)|<1`$ is beneficial. In this context let me observe that one can always use (4) with $`n=1`$ on any other sign-function approximation of $`H_W`$. This doubles the effective $`n`$, but reintroduces $`|\epsilon _n(x)|<1`$. It does not reintroduce the inversion symmetry under $`x\to 1/x`$, but the latter may be less important in practice.
Whichever polynomials one uses, the main point is that a fractional decomposition as in (4) makes it possible to evaluate the action of $`ϵ`$ at the rough cost of a single conjugate gradient inversion of $`H_W^2`$ . The parameter $`n`$ only affects storage requirements, and even this can be avoided at the expense of an increase by a factor of order unity in computation . Thus, keeping $`n`$ as small as possible is not necessarily a requirement. On the other hand, for the approximation to the sign function to be valid down to small arguments, one shall need to pick relatively large $`n`$’s and face the slow-down stemming from the single conjugate gradient inversions being controlled essentially by the condition number of $`H_W^2`$ itself, with no help from the $`n`$-dependent shift.
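Schematically, the pole evaluation looks as follows; in this sketch dense linear algebra (np.linalg.solve) stands in for the single multi-shift conjugate gradient one would use on $`H_W^2`$ in practice, and the small random hermitian matrix is only a stand-in for $`H_W`$.

```python
import numpy as np

def eps_times_vec(H, phi, n):
    """epsilon_n(H) phi via the n pole terms of eq. (4); each solve is one shifted system."""
    s = np.arange(1, n + 1)
    c2 = np.cos((s - 0.5) * np.pi / (2 * n)) ** 2
    s2 = np.sin((s - 0.5) * np.pi / (2 * n)) ** 2
    H2 = H @ H
    acc = np.zeros_like(phi)
    for cs, sn in zip(c2, s2):
        acc += np.linalg.solve(cs * H2 + sn * np.eye(H.shape[0]), phi)
    return (H @ acc) / n

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 12)); H = (A + A.T) / 2
phi = rng.standard_normal(12)
w, V = np.linalg.eigh(H)
exact = V @ (np.sign(w) * (V.T @ phi))        # exact sign function of H applied to phi
print(np.linalg.norm(eps_times_vec(H, phi, 40) - exact))  # error dominated by eigenvalues of H near zero
```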
In both types of rational approximants there exists a polynomial $`q`$ of rank $`n`$ such that
$$Q(x)=|q(x)|^2.$$
$`(5)`$
This equation simply reflects the positivity of $`Q`$ on the real line; it makes it possible to work with Hermitian matrices below.
In quenched simulations one needs quantities of the form :
$$\frac{1-V}{1+V}\varphi \equiv \left(\frac{1}{D}-1\right)\varphi =\frac{1}{ϵ^{\prime }+ϵ}(ϵ^{\prime }-ϵ)\varphi =(ϵ-ϵ^{\prime })\frac{1}{ϵ^{\prime }+ϵ}\varphi .$$
$`(6)`$
The inversion of $`D`$, or $`ϵ^{\prime }+ϵ`$, needs yet another conjugate gradient iteration, and one ends up with a two level nested conjugate gradient algorithm. The operator that needs to be inverted, $`ϵ^{\prime }+ϵ`$, is hermitian and this is a potentially useful property numerically. This operator, as seen in the above equation, anticommutes with $`ϵ^{\prime }-ϵ`$, and since the latter is generically non-degenerate, has a spectrum which is symmetric about zero.
The operators $`H_\pm \equiv \frac{1}{2}[ϵ^{\prime }\pm ϵ]`$ have some nice properties so some comments about how they relate to $`D`$ are in order: Since $`VH_\pm V=V^{\dagger }H_\pm V^{\dagger }=H_\pm `$, $`[H_\pm ,V+V^{\dagger }]=\{H_\pm ,V-V^{\dagger }\}=0`$. Note that $`V+V^{\dagger }=\{ϵ,ϵ^{\prime }\}`$ and $`V-V^{\dagger }=[ϵ^{\prime },ϵ]=\pm 4H_{\mp }H_\pm `$. We also know that $`K\equiv Ker([ϵ,ϵ^{\prime }])=Ker(H_+)\oplus Ker(H_{-})=Ker(1+V)\oplus Ker(1-V)`$. On the complement of $`K`$, $`K_{\perp }`$, eigenvalues of $`H_+`$, $`h`$, satisfy $`0<|h|<1`$ and come in pairs $`h^\pm =\pm |h|\equiv \pm \mathrm{cos}\frac{\alpha }{2}`$ with $`0<\alpha <\pi `$. The eigenvalues of $`H_{-}`$ are $`\pm \mathrm{sin}\frac{\alpha }{2}`$ on $`K_{\perp }`$ in the same subspaces. Corresponding to each eigenvector/eigenvalue pair of $`H_\pm `$ is a pair of eigenvectors/eigenvalues of $`V`$ (and $`V^{\dagger }`$) with eigenvalues $`e^{\pm i\alpha }`$. The two pairs of eigenvectors are linearly related. States relevant to the continuum limit have $`\alpha \approx \pi `$. These features generalize to the massive case, where one has to deal with $`H_{ab}`$ and $`H_{ba}`$, where the matrix pencils are defined as $`H_{ab}=aϵ+bϵ^{\prime }`$ with real $`a,b`$.
To see directly why the $`H_\pm `$ are special we follow and represent them using two distinct bases, one associated with $`ϵ`$ and the other associated with $`ϵ^{\prime }`$: $`ϵ\psi _i=ϵ_i\psi _i`$, $`ϵ^{\prime }\psi _i^{\prime }=ϵ_i^{\prime }\psi _i^{\prime }`$. Then $`<\psi _i^{\prime },H_\pm \psi _j>=\frac{ϵ_i^{\prime }\pm ϵ_j}{2}<\psi _i^{\prime },\psi _j>`$, showing that $`detH_\pm `$ factorizes since matrix elements corresponding to $`ϵ_i^{\prime }\pm ϵ_j=0`$ vanish. Exactly half of the $`ϵ_i^{\prime }`$ are $`1`$ and the rest are $`-1`$. When $`ϵ`$ is approximated, $`|ϵ_i|`$ will no longer be precisely unity and we get some right-left mixing.
Getting back to our main topic, we have ended up with a nested conjugate gradient procedure. This is not prohibitive in the quenched case , but makes the entire approach only tenuously feasible with present computational resources when dynamical simulations using Hybrid Monte Carlo are contemplated .
My objective here is to show that in the context of Hybrid Monte Carlo, a nested conjugate gradient procedure can possibly be avoided. Of course, this comes at some cost and only future work can tell how well the idea works. At this stage I only wish to draw attention to an alternative to using a nested conjugate gradient procedure in simulations with dynamical fermions.
As usual with Hybrid Monte Carlo, we work with an even number of flavors. Obviously,
$$detD=det(ϵ^{\prime }D)=det\frac{ϵ^{\prime }+ϵ}{2}\approx det\frac{1}{2}[\gamma _5+\epsilon _n(H_W)].$$
$`(7)`$
But,
$$det\frac{1}{2}[\gamma _5+\epsilon _n(H_W)]=\frac{det\frac{1}{2}[q(H_W)\gamma _5q^{\dagger }(H_W)+P(H_W)]}{det[Q]}.$$
$`(8)`$
For example, with the polynomials of (4) we have $`q(x)=(1+x)^n+i(1-x)^n`$.
The denominator $`det[Q]`$ in (8) can be implemented by pseudofermions - by this term I mean variables in the functional integrals that carry the same set of indices as fermions do, only they are bosonic, so integration over the exponent of a quadratic form in pseudofermions is restricted to positive kernels, and produces the inverse of the kernel’s determinant. Note that $`Q>0`$ and one does not need an even power here to ensure positivity.
One does need an even power nevertheless, because it is a requirement embedded in the Hybrid Monte Carlo algorithm: In that algorithm, one needs to invert the fermion matrix, $`M`$, in the course of computing the Hybrid Monte Carlo force. $`M=q(H_W)\gamma _5q^{\dagger }(H_W)+P(H_W)`$ is not positive definite, but should be - and this is achieved by doubling the number of fermions. In equation (8) I chose to factor the expression in such a way that $`M`$ comes out hermitian. I did this because numerical procedures are more easily understood theoretically when the matrices are hermitian, and also because this may help to reduce the condition number. But, there is no guarantee that it is really beneficial to make $`M`$ hermitian. Therefore let me mention that other factorizations are possible: For example, using the approximation in (4),
$$det\frac{1}{2}[\gamma _5+\epsilon _n(H_W)]=\frac{det\left[\frac{1+\gamma _5}{2}(1+H_W)^{2n}\frac{1\gamma _5}{2}(1H_W)^{2n}\right]}{det\left[(1+H_W)^{2n}+(1H_W)^{2n}\right]}.$$
$`(9)`$
The matrix in the denominator is still positive definite, but $`M`$ is not hermitian now, and resembles expressions obtained in the context of another truncation of the overlap , known as domain wall fermions .
The appearance of pseudofermions renders this case even closer to so called domain wall fermions. The trade-off is between an extra dimension there and the higher degree polynomials here. In the present approach there is more flexibility and one does not keep unneeded degrees of freedom in memory; still, it would be premature to decide which approach is best. Optimizing to make $`n`$ as low as possible seems now again worthwhile, more so than in the quenched case.
The cost of an $`M\varphi `$ operation is roughly $`4n`$ times the cost of an $`H_W\varphi `$ operation. The condition number of $`M`$ may also be larger than that of $`H_W`$ and increase with $`n`$. It is therefore important to find out what the smallest $`n`$ one can live with is. It could be that it turns out to be too hard to maintain $`\epsilon _n(x)`$ a good approximation to $`\epsilon (x)`$ while keeping the condition number of $`M`$ manageable. If one focuses only on the quality of the approximation to the sign function it is actually likely that the condition number of $`M`$ will be large<sup>*</sup><sup>*</sup>R. Edwards, private communication. because of the high degrees of the polynomials: Consider $`\psi `$, a normalized eigenstate of $`H_W`$ with eigenvalue $`h`$. We find $`\psi ^{\dagger }M\psi =P(h)+Q(h)\psi ^{\dagger }\gamma _5\psi `$. Both $`P(h)`$ and $`Q(h)`$ can be very big numbers (for large degree $`n`$). In absolute magnitude they are very close; this is why the ratio $`P(h)/Q(h)`$ is close to $`\pm 1`$. But, this cancelation can be easily spoiled by the $`\psi ^{\dagger }\gamma _5\psi `$ factor, and thus $`M`$ can have very large eigenvalues. There is little reason to hope for $`M`$ to have no small eigenvalues, so it might be the case that $`M`$ has unacceptably large condition numbers when $`n`$ is too large.
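The size of the problem is easy to illustrate with the polynomials of (4) themselves (a scalar throwaway check): already for $`n=50`$ and $`h=0.5`$ both $`P(h)`$ and $`Q(h)`$ are of order $`10^{17}`$ while their ratio is 1 to machine accuracy, so an order-one admixture of $`\psi ^{\dagger }\gamma _5\psi `$ is amplified enormously.

```python
n, h = 50, 0.5
P = (1 + h)**(2 * n) - (1 - h)**(2 * n)
Q = (1 + h)**(2 * n) + (1 - h)**(2 * n)
print(P, Q, P / Q)   # ~4e17, ~4e17, 1.000... : huge individual entries, ratio near 1
```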
I now wish to show that one can try to avoid this latter problem by introducing extra fields. This is, I believe, the essential reason why domain wall fermions are at all practical.
To understand this additional trick we start from some relatively easily proven identities. Consider a fermionic bilinear action $`S_0`$:
$$S_0=\overline{\psi }\gamma _5\psi +\overline{\psi }\overline{A}_1\varphi _1-\overline{\varphi }_1A_1\psi +\overline{\varphi }_1B_1\varphi _1+\mathrm{\dots }+\overline{\varphi }_{n-1}\overline{A}_n\varphi _n-\overline{\varphi }_nA_n\varphi _{n-1}+\overline{\varphi }_nB_n\varphi _n.$$
$`(10)`$
The fields with bars are rows, the ones without are columns and $`A_i,\overline{A}_i,B_i`$ are commuting matrices. One can visualize this action as a chain, extending into a new dimension. The degrees of freedom we are interested in sit at one end of the chain; these are the $`\overline{\psi },\psi `$ fields. The $`\overline{\varphi },\varphi `$ fields are the extra fields I introduced to handle the condition number problem. The idea is to arrange matters so that integrating out all the $`\overline{\varphi },\varphi `$ variables will produce an action for the variables $`\overline{\psi },\psi `$ of the precise rational form we wish. But, the condition number that will be relevant numerically, will be the condition number of the bigger kernel in $`S_0`$, involving all fermionic fields.
To get the induced action for the fields $`\overline{\psi },\psi `$ we integrate out the fields $`\overline{\varphi },\varphi `$ starting from the other end of the chain. The integration over the pair of fermions at the end of the chain produces a factor of $`(detB_n)`$ in front and adds a piece to the quadratic term $`B_{n-1}`$, coupling $`\overline{\varphi }_{n-1}`$ to $`\varphi _{n-1}`$, of the form $`A_n\overline{A}_n/B_n`$. Now this can be iterated until the last pair of $`\overline{\varphi },\varphi `$ is integrated out. We have obtained the following identity:
$$\int d\overline{\varphi }_1d\varphi _1\mathrm{\dots }d\overline{\varphi }_nd\varphi _ne^{-S_0}=\prod _{i=1}^{n}(detB_i)e^{-\overline{\psi }(\gamma _5+R)\psi },$$
$`(11)`$
where,
$$R=\frac{A_1\overline{A}_1}{B_1+{\displaystyle \frac{A_2\overline{A}_2}{B_2+{\displaystyle \frac{A_3\overline{A}_3}{B_3+\mathrm{}{\displaystyle \frac{\mathrm{}}{B_{n1}+{\displaystyle \frac{A_n\overline{A}_n}{B_n}}}}}}}}}$$
$`(12)`$
Now, the expression for $`R`$ is recognized as a truncated continued fraction. Any ratio of polynomials can be written as a truncated continued fraction by the Euclid algorithm (invert, divide with remainder in the denominator and continue). Thus, we learn that any fractional approximation we wish to use for the sign function can be mapped into a chain with only nearest neighbor interactions.
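For completeness, the Euclid step can be spelled out mechanically; the sketch below (plain polynomial long division with numpy, shown on the $`n=2`$ polynomials of (4)) returns the partial quotients of the truncated continued fraction and is meant only to illustrate the bookkeeping, not to be an efficient implementation.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def continued_fraction(num, den):
    """Partial quotients q_k with num/den = 1/(q_1 + 1/(q_2 + ...)), for deg(num) < deg(den).

    Coefficient arrays are lowest-order first, as in numpy.polynomial.
    """
    quotients = []
    a, b = den, num                       # invert first: num/den = 1/(den/num)
    while len(b) > 1 or abs(b[0]) > 1e-12:
        q, r = P.polydiv(a, b)
        quotients.append(q)
        a, b = b, r
        if np.allclose(b, 0.0):
            break
    return quotients

n = 2
Pn = P.polysub(P.polypow([1, 1], 2 * n), P.polypow([1, -1], 2 * n))   # P(x) of eq. (4)
Qn = P.polyadd(P.polypow([1, 1], 2 * n), P.polypow([1, -1], 2 * n))   # Q(x) of eq. (4)
print(continued_fraction(Pn, Qn))   # each partial quotient is linear in x
```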
Usually, $`P`$ in (3) is odd and $`Q`$ is even: $`P(x)=xP_1(x^2),Q(x)=Q_1(x^2)`$. $`P_1`$ is of degree $`n-1`$ and $`Q_1`$ is of degree $`n`$. Therefore, the truncated continued fraction has the following structure:
$$\frac{P_1(u)}{Q_1(u)}=\frac{\alpha _0}{u+\beta _1+{\displaystyle \frac{\alpha _1}{u+\beta _2+{\displaystyle \frac{\alpha _2}{u+\beta _3+\mathrm{}{\displaystyle \frac{\mathrm{}}{u+\beta _{n1}+\frac{\alpha _{n1}}{u+\beta _n}}}}}}}}$$
$`(13)`$
Picking $`B_i=H_W^2+\beta _i,i=1,2,\mathrm{\dots },n`$ and $`A_1\overline{A}_1=\alpha _0H_W,A_i\overline{A}_i=\alpha _{i-1},i=2,3,\mathrm{\dots },n`$ produces the desired expression. Again, one needs pseudofermions to compensate for the prefactor $`det(H_W^2+\beta _i)`$ in (11). For the kernels of the pseudofermions to be positive definite we need $`\beta _i\geq 0`$.
A similar trick can be used if one wants to implement $`\frac{P_1(u)}{Q_1(u)}=\sum _{i=1}^n\frac{\alpha _i^2}{u+\beta _i^2}`$. Now $`S_0=\overline{\psi }\gamma _5\psi +\sum _i\alpha _i(\overline{\psi }H_W\varphi _i+\overline{\varphi }_i\psi )-\sum _i\overline{\varphi }_i(H_W^2+\beta _i^2)\varphi _i`$. Again, one needs pseudofermions. The structure is somewhat different.
One can avoid having the squares of $`H_W`$ in the chain by more continued fraction expansion. As an example, I worked out the explicit map of the approximation $`\epsilon _n(x)`$ of equation (4) to a chain involving only $`H_W`$ terms (which are the least expensive to implement). To start, I use a formula that goes as far back as Euler:
$$\epsilon _n(x)=\frac{2nx}{1+{\displaystyle \frac{(4n^2-1)x^2}{3+{\displaystyle \frac{(4n^2-4)x^2}{5+\mathrm{\dots }{\displaystyle \frac{\mathrm{\dots }}{4n-3+\frac{\left[4n^2-\left(2n-1\right)^2\right]x^2}{4n-1}}}}}}}}$$
$`(14)`$
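A quick scalar check of (14) against the closed form of (4) (throwaway code, evaluating the continued fraction bottom up):

```python
def eps_euler(x, n):
    """Euler's truncated continued fraction, eq. (14)."""
    val = 4 * n - 1                        # innermost denominator
    for k in range(2 * n - 1, 0, -1):      # k = 2n-1, ..., 1
        val = (2 * k - 1) + (4 * n**2 - k**2) * x**2 / val
    return 2 * n * x / val

def eps_closed(x, n):
    a, b = (1 + x)**(2 * n), (1 - x)**(2 * n)
    return (a - b) / (a + b)

print(eps_euler(0.3, 3), eps_closed(0.3, 3))   # agree to machine precision
```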
Now, I use invariance under inversion of $`x`$ to move the $`x`$ factors around. I also change some signs to make the expression more symmetrical. The end result for the path integral is:
$$\int d\overline{\varphi }_1d\varphi _1\mathrm{\dots }d\overline{\varphi }_nd\varphi _ne^{-S_{-}}=(detH_W)^{2n}e^{-\overline{\psi }(\gamma _5+\epsilon _n(H_W))\psi },$$
$`(15)`$
To write down the quadratic action $`S_{-}`$ we introduce the extended fermionic fields $`\overline{\chi },\chi `$:
$$\overline{\chi }=\left(\begin{array}{cccc}\overline{\psi }& \overline{\varphi }_1& \mathrm{\dots }& \overline{\varphi }_{2n}\end{array}\right),\qquad \chi =\left(\begin{array}{c}\psi \\ \varphi _1\\ \mathrm{\vdots }\\ \varphi _{2n}\end{array}\right)$$
$`(16)`$
$`S_{-}=\overline{\chi }𝐇\chi `$ where the new kernel, $`𝐇`$, in block form, has the following structure:
$$𝐇=\left(\begin{array}{ccccccc}\gamma _5& \sqrt{\alpha _0}& 0& 0& \mathrm{}& \mathrm{}& 0\\ \sqrt{\alpha _0}& H_W& \sqrt{\alpha _1}& 0& \mathrm{}& \mathrm{}& 0\\ 0& \sqrt{\alpha _1}& H_W& \sqrt{\alpha _2}& \mathrm{}& \mathrm{}& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& H_W& \sqrt{\alpha _{2n1}}& \\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \sqrt{\alpha _{2n1}}& H_W\end{array}\right)$$
$`(17)`$
The numerical coefficients $`\alpha `$ are given below:
$$\alpha _0=2n,\qquad \alpha _j=\frac{(2n-j)(2n+j)}{(2j-1)(2j+1)},\qquad j=1,2,\mathrm{\dots }$$
$`(18)`$
The main point is that the structure of the extended kernels is sufficiently close to five dimensional fermions that we can be quite sure that the condition numbers are similar to the ones encountered with domain wall fermions.
Note that we now have a hermitian kernel, $`𝐇`$. This would be useful if we wanted to use Lanczos techniques to study the entire eigenvalue spectrum of $`𝐇`$. Actually, to do this in an efficient way (i.e. applying the Cullum-Willoughby method) one needs full 64 bit precision, to be able to distinguish so called spurious eigenvalues from true ones. One cannot get the needed accuracy if one uses a direct evaluation of the action of $`\gamma _5D`$.
One may wonder whether the law of conservation of difficulties is not violated: how can it be that we compute the same $`\overline{\psi }\psi `$ propagator ($`\frac{1}{\gamma _5+R}`$ is given by the $`\overline{\psi }`$-$`\psi `$ block in $`𝐇^{-1}`$) both in the pole method and in the extra fields method, and use conjugate gradient algorithms (essentially) in both cases, but hope for big gains in one method relative to the other? At its essence the new trick is algorithmical - basically, the space in which the conjugate gradient operates is enlarged when extra fermions are added, and thus, some of the difficulties one encounters in the smaller space are avoided. Very roughly, what the extra fermions do is to provide ways around barriers one faces in the original space. This is somewhat similar to solving the problem of minimizing a function over a discrete set, by first making the set continuous.
Clearly, $`n`$ plays a role analogous to the size of the extra dimension, $`N`$, in domain wall simulations. Thus, these two truncations of the overlap may end up being similar not only conceptually, but also numerically. The derivation of our identities went through essentially the same steps as those employed in , only now in reverse order. When comparing to domain wall fermions it becomes apparent that now we can work with a hermitian kernel, that we have much more flexibility, and that it is probably possible to exploit any efficient, possibly computer architecture dependent, implementation of the action of the lattice Wilson-Dirac operator $`D_W`$. I have not separated the chiral components of $`\overline{\psi },\psi `$ here, something that is quite natural in the domain wall viewpoint. Of course, if there is an efficiency reason, one could try to separate the chiral components in the more general framework presented here, too.
As always, if any single trick is useful, one usually considers possible combinations. By this I mean to exploit both the direct approach based on a sum of pole terms and the indirect approach based on extra fields. The essence of the numerical problem is that we wish to contract the spectrum of $`H_W`$ to two points, $`\pm 1`$. The negative half of the spectrum of $`H_W`$ is intended to map into $`-1`$ and the positive half to $`+1`$. Any map that reduces the ranges of the negative and positive halves of the spectrum is useful. It produces a new $`H_W`$ that can be used as an argument for a new map, which now can work more easily. In this way we try to combine the good properties of various maps. Moreover, all that $`H_W`$ needs to be, and this is important, is a reasonable discretization of the continuum Dirac operators with a negative mass of order inverse lattice spacing. Actually, one can add as many fermion fields with large positive masses as one wishes. Thus, if we used a few extra fields to produce an effective $`H_W`$ (say for the end of the chain) which had some reasonable spectral properties, we could, conceivably, by adding a parameter $`\rho `$ (as discussed at the beginning of this paper) make the mass of one fermion negative and use this action as the argument of a rational approximation implemented by the pole method. A better behaved input would be much easier to handle. Or, we could reverse the procedures. For example assume you wish to use a very short chain, say consisting only of $`\overline{\psi }\psi `$ and $`\overline{\varphi }\varphi `$ with an action (employing previous notation) $`\overline{\chi }𝐇\chi `$ given by:
$$𝐇=\left(\begin{array}{cc}\gamma _5& (H_W^2)^{1/4}\\ (H_W^2)^{1/4}& H_W\end{array}\right)$$
$`(19)`$
Since the quantity $`(H_W^2)^{1/4}`$ is less violently behaved around zero than $`\epsilon (H_W)`$ a rational approximation (or maybe even just a polynomial approximation) might be quite manageable.
My main message in this paper is that in the context of dynamical fermion simulations there are many alternatives and tricks that have not been yet explored, and it might be a waste to exclusively focus on the most literal numerical implementations of the recent theoretical advances on the topic of chiral symmetry on the lattice.
Acknowledgments: This research was supported in part by the DOE under grant # DE-FG05-96ER40559. Thanks are due to R. Edwards, C. Rebbi and to P. Vranas for comments that helped and motivated me to write this note.
References:
H. Neuberger, Phys. Lett. B417 (1997) 141; Phys. Lett. B427 (1998) 125.
R. Narayanan, H. Neuberger, Phys. Lett. B302 (1993) 62; Nucl. Phys. B412 (1994) 574; Phys. Rev. Lett. 71 (1993) 3251; Nucl. Phys. B443 (1995) 305.
H. Neuberger, Phys. Rev. D57 (1998) 5417.
H. Neuberger, hep-lat/9802033.
P. Hernandez, K. Jansen, M. Lüscher, hep-lat/9808010, C. Adams, hep-lat/9812003.
H. Neuberger, Phys. Rev. Lett. 81 (1998) 4060.
R. Edwards, U. Heller, R. Narayanan, hep-lat/9807017.
H. Neuberger, hep-lat/9811019.
R. Edwards, U. Heller, R. Narayanan, hep-lat/9811030.
Chuan Liu, hep-lat/9811008.
H. Neuberger, Phys. Lett. B437 (1998) 117.
Y. Kikukawa, H. Neuberger, A. Yamada, Nucl. Phys. B526 (1998) 572.
R. Narayanan, Phys. Rev. D58 (1998) 097501.
D. B. Kaplan, Phys. Lett B288 (1992) 342; D. Boyanowski, E. Dagotto, E. Fradkin, Nucl. Phys. B285 (1987) 340; Y. Shamir, Nucl. Phys. B406 (1993) 90; P. Vranas, Phys. Rev. D57 (1998) 1415; P. Chen et. al. hep-lat/9809159; T. Blum, A. Soni, Phys. Rev. D56 (1997) 174; J.-F. Lagae, D. K. Sinclair, hep-lat/9809134.
# The effect of supernova natal kicks on compact object merger rate
## 1 Introduction
The gravitational wave detectors LIGO and VIRGO will soon be operational. Their completion brings up the question of the possible sources of gravitational waves and also their brightness and observable rate. The most often considered sources of gravitational waves are mergers of compact objects; neutron stars or black holes. A number of authors have already considered these phenomena, and several estimates have been published (Narayan et al., 1991; Phinney, 1991; Tutukov and Yungelson, 1993; Portegies Zwart and Spreeuw, 1996).
There are two approaches leading to the calculation of the merger rate. The first method, let us call it the experimental approach, is based on the observational fact that we do see three binary systems of neutron stars: PSR1913+16 (Taylor and Weisberg, 1989), PSR1534+12 (Wolszczan, 1991), and PSR2303+46 (Taylor and Dewey, 1988). Based on this number, and considering various observational selection effects, like e.g. the pulsar beam width, one can estimate the number of such systems in our Galaxy. Given the observed orbital parameters for these systems one can then calculate the lifetime of each one and thus obtain the expected merger rate in the Galaxy. This approach suffers from several weaknesses: it is based on observations of only three objects, so small number statistics strongly affects the results. Moreover, the estimate of the rate is based on theoretical assumptions regarding the selection effects. These assumptions may lead to systematic errors of uncertain value.
The other approach that we can use (let us call it a theoretical one) is based on population synthesis of binary systems. All compact objects that will finally merge must have their origin as ordinary stars in binary systems. The first studies in this field were Monte Carlo simulations of radio pulsars performed by Dewey and Cordes (1987). Thus it seems that by modeling the evolution of stellar binaries and the statistical properties of large ensembles of binaries one can calculate the expected population of compact object binaries. Analyzing this population and then considering the systems that will have merged within the Hubble time one can also estimate the present compact object merger rate in the Galaxy. This calculation requires the supernova rate in the Galaxy, and also the fraction of all stars that are in binaries. This “theoretical approach” also suffers from the fact that it is based on several assumptions. In order to create a population of binaries one requires distributions of initial parameters, like the mass of the primary star or the initial mass ratio in the binary. Some stages in the stellar evolution require parameterization as well, e.g. mass loss and angular momentum loss through stellar wind, mass exchange in the common envelope phase etc.
The measurement of the distribution of kicks in supernova explosions is not easy. One approach that can be taken is to measure velocities of pulsars and use this distribution as the distribution of supernova kicks. Here one has to take into account selection effects, like for example the fact that fast neutron stars may leave the Galaxy and not be visible as radio pulsars. A comprehensive study of this and other selection effects (Arzoumanian et al., 1997) shows that a large fraction of neutron stars has velocities above $`500`$ km s<sup>-1</sup>. Iben and Tutukov (1996) argue that the velocities of radio pulsars can be explained by just taking into account the recoil velocity due to mass loss in supernova explosions in binaries, and neglecting a possible “natal kick”! An independent study of velocities of young pulsars through measuring offsets from the centers of supernova remnants (Frail et al., 1994; Lyne and Lorimer, 1994) confirms that the distribution of initial velocities of neutron stars may have a large high velocity tail. On the other hand Blaauw and Ramachandran (1998) argue that the observed properties of pulsars can be explained in a model with a single unique value of the kick $`v_{kick}=200`$ km s<sup>-1</sup>. A similar result has been obtained by Lipunov et al. (1997) who show that the observed population of neutron stars is consistent with kick velocities in the range 150–200 km s<sup>-1</sup>. The distribution of the kick velocities has been parameterized by different authors: Portegies Zwart and Spreeuw (1996) used a Gaussian with the width of $`450`$ km s<sup>-1</sup>, while Cordes and Chernoff (1997) proposed a weighted sum of two Gaussians: 80 percent with the width $`175`$ km s<sup>-1</sup> and 20 percent with the width $`700`$ km s<sup>-1</sup>. Here we attempt to investigate systematically the effect of the kick velocity distribution on the compact object merger rate. In section 2 we describe the model of binary evolution used for simulating the population of binaries, in section 3 we show the results of the simulation and calculate the compact object merger rate. Finally we discuss our results in section 4.
## 2 Population and evolution of binary systems
While the evolution of a single star is only a function of its mass and metallicity, the evolution of a binary presents a more complicated problem. It depends on the mass of the more massive component, the mass ratio of the smaller to the larger star $`q`$, and on the initial parameters of their orbit: $`e`$ - the eccentricity and $`a`$ - the semi-major axis of the orbit. Following the general approach in the field we assume that these parameters are independent, i.e. the probability density can be expressed as a product: $`p(M,q,e,a)=\mathrm{\Psi }(M)\mathrm{\Phi }(q)\mathrm{\Xi }(e)\mathrm{\Gamma }(a)`$. Our population synthesis code is mainly based on Bethe and Brown (1998). The evolution of a single star is described by Eggleton et al. (1989). In the following all masses are in units of the solar mass $`M_{\odot }`$.
### 2.1 Distribution of the initial parameters
Beyond $`10M_{\odot }`$ the distribution of the masses of the primary stars that we use follows Bethe and Brown (1998),
$$\mathrm{\Psi }(M)\propto M^{-1.5}$$
(1)
Other authors have used similar distributions, e.g. Portegies Zwart and Verbunt (1996) use the exponent of $`-1.7`$. We restrict the range of the masses to the interval from $`10M_{\odot }`$ to $`50M_{\odot }`$ as we want at least the more massive star to undergo a supernova explosion.
We adopt the following distribution of the mass ratio $`q`$
$$\mathrm{\Phi }(q)\propto \mathrm{const}$$
(2)
There is some uncertainty regarding $`\mathrm{\Phi }`$. Portegies Zwart and Verbunt (1996) use $`\mathrm{\Phi }\propto (1+q)^2`$, while Tutukov and Yungelson (1994) use $`\mathrm{\Phi }\propto q`$. Yet another distribution is used in the simulation by Pols and Marinus (1994), who use a distribution slightly peaked at small values of q and falling off as $`q`$ goes to unity.
Initial binary eccentricity $`e`$ is assumed to have value between 0 and 1 and its initial distribution is taken from (Duquennoy and Mayor, 1991) :
$$\mathrm{\Xi }(e)=2e.$$
(3)
The distribution of the initial semi-major axis $`a`$ used in population synthesis codes is flat in the logarithm, i.e.
$$\mathrm{\Gamma }(a)\propto \frac{1}{a}.$$
(4)
Both stars are initially massive main sequence stars, with radii of at least 4–5 R<sub>⊙</sub>, so we take the minimum value of $`a`$ to be 10 R<sub>⊙</sub>. However if the sum of the radii of two zero age main sequence stars exceeds 10 R<sub>⊙</sub>, we choose the minimal separation to be twice the radius of the primary component. This should give the stars the space to evolve without merging before they go through their main sequence lifetime. Nevertheless some systems might be born with high eccentricity, and thus be in contact and merge forming a single very massive star. At this point we stop our calculation as we are not interested in this type of mergers in our study.
Although very wide binaries with periods up to 10<sup>10</sup> days are observed (Duquennoy and Mayor, 1991) and models with initial separation up to 10<sup>6</sup> R<sub>⊙</sub> are used (Portegies Zwart and Verbunt, 1996), we have set the maximal separation to 10<sup>5</sup> R<sub>⊙</sub> as wider binaries are very unlikely to survive two supernova explosions and form a compact object system. Our models showed that setting the upper limit to 10<sup>6</sup> R<sub>⊙</sub> does not change the results noticeably.
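For concreteness, a minimal Monte Carlo draw from the initial distributions (1)-(4) with the limits quoted above could look like the sketch below; it is illustrative only, and in particular it omits the minimum-separation and contact conditions described in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_binary():
    """One set of initial parameters: M1 [Msun], q, e, a [Rsun]."""
    u = rng.random(4)
    lo, hi, g = 10.0, 50.0, -1.5                      # Psi(M) ~ M^-1.5 on [10, 50] Msun
    M1 = (lo**(g + 1) + u[0] * (hi**(g + 1) - lo**(g + 1)))**(1.0 / (g + 1))
    q = u[1]                                          # Phi(q) flat on (0, 1]
    e = np.sqrt(u[2])                                 # Xi(e) = 2e  ->  e = sqrt(u)
    a = 10.0 * (1.0e5 / 10.0)**u[3]                   # Gamma(a) ~ 1/a on [10, 1e5] Rsun
    return M1, q, e, a

sample = [draw_binary() for _ in range(5)]
```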
We assume that the distribution of the kick velocity a newly born neutron star receives in a supernova explosion is a three dimensional Gaussian and parameterize it by its width $`\sigma _v`$. We vary $`\sigma _v`$ to determine the dependence of the merger rate and of the properties of the compact object binary population on this parameter. Alternatively, we draw the natal kicks from a weighted sum of two Gaussians (Dewey and Cordes, 1987): 80 percent with the width $`175`$ km s<sup>-1</sup> and 20 percent with the width $`700`$ km s<sup>-1</sup>.
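The kick draw itself is a one-liner in either parameterization; in the sketch below (illustrative only) $`\sigma _v`$ is taken as the dispersion of each Cartesian velocity component, so that the speed follows a Maxwellian distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

def kick_velocity(sigma_v=None):
    """One natal kick vector in km/s.

    With sigma_v given: isotropic Gaussian with dispersion sigma_v per component.
    Otherwise: the two-component model, 80% with 175 km/s and 20% with 700 km/s.
    """
    if sigma_v is None:
        sigma_v = 175.0 if rng.random() < 0.8 else 700.0
    return rng.normal(0.0, sigma_v, size=3)

speeds = [np.linalg.norm(kick_velocity(450.0)) for _ in range(3)]
```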
### 2.2 Evolution of binaries
In our calculations we neglect the effects of stellar wind on the evolution of binaries.
Tidal circularization takes place in binaries for which the size of one component (or both components) is large in relation to their separation (Zahn, 1978). This happens when the stars evolve to the giant stage, and/or if the initial binary separation is small. We follow the prescription proposed by Portegies Zwart and Verbunt (1996); the circularization takes place if the stellar radius of one component is larger than 0.2 of the periastron binary separation. The orbital elements ($`a,e`$) change with conservation of angular momentum until the new binary separation is 5 stellar radii of the component in which tidal effects take place, or the binary is totally circularized ($`e=0`$).
Binary stars may undergo mass transfer and common envelope phases in different evolutionary stages. Depending on the physical conditions this will result in the change of mass of each star, mass loss from the system, and also in change of the size and shape of the orbit. We describe below three schematic evolutionary paths and accompanying mass transfer types used in the code.
#### 2.2.1 Evolution for small mass ratios, $`(q<0.8897)`$
The more massive star (primary) evolves faster in the binary. We use the following approximate formula for the main sequence lifetime of a star with mass $`M`$: $`T_{\mathrm{MS}}=20\times 10^6(M/10M_{\odot })^{-2}`$ yrs. After this time the primary evolves to the giant stage, which lasts about 20 per cent of its main sequence lifetime, and increases its size. For $`q<0.8897`$ the secondary is still on the main sequence at the end of the primary giant stage. If the giant primary overfills its Roche lobe, mass transfer to the main sequence companion (star 2, with mass $`M_2^i`$) and mass loss take place. We denote this regime by type I mass transfer. The mass transfer continues until the giant is stripped of its envelope, and thus the final mass of the giant will be just that of its helium core. We use the approximation $`M_1^f=0.3M_1^i`$ (Bethe and Brown, 1998). A part of the envelope mass $`0.7M_1^i`$ is lost from the system while another part is transferred to the companion. The fraction of mass transferred to the companion is proportional to the square of the mass ratio (Vrancken et al., 1991; Bethe and Brown, 1998), so the mass of the companion after the mass transfer is
$$M_2^f=M_1^i(q+0.7q^2),$$
(5)
where $`q`$ is the initial mass ratio.
The orbit was already circularized by tidal forces, when the giant was filling its Roche lobe, and now its size changes due to mass transfer and mass loss. We describe the change in semi-major axis by (Pols and Marinus, 1994)
$$\frac{a_f}{a_i}=\left(\frac{M_1^f}{M_1^i}\frac{M_2^f}{M_2^i}\right)^{-2}\left(\frac{M_1^f+M_2^f}{M_1^i+M_2^i}\right)^{2\beta +1},$$
(6)
where $`a`$ is the binary separation, $`M_1`$ is the mass of the giant losing material, $`M_2`$ is the mass of its main sequence companion and the indices $`i,f`$ correspond respectively to the values before and after the Roche lobe overflow. The parameter $`\beta `$ describes the specific angular momentum of material leaving the binary (Pols and Marinus, 1994). The above equation assumes that $`\beta `$ is constant with time. The value of $`\beta `$ is uncertain; typically values such as $`\beta \approx 6`$ or $`\beta =3`$ are used, for discussion see Pols and Marinus (1994). Here all calculations are performed for $`\beta =6`$. The initial primary, now either a helium star (after the mass transfer) or a giant, goes supernova, leaving behind a neutron star. The initial secondary evolves, becomes a giant and may begin to transfer material to the newly formed neutron star, increasing its mass and possibly turning it into a black hole. We assume that the maximum mass of a neutron star is $`2.2M_{\odot }`$.
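A single type I episode therefore amounts to simple bookkeeping; the schematic helper below (not our production code) applies the helium-core approximation together with equations (5) and (6), with $`\beta =6`$ as adopted in the text.

```python
def type_I_transfer(M1, M2, a, beta=6.0):
    """Giant of mass M1 loses its envelope; companion M2 is on the main sequence.

    Returns (M1_f, M2_f, a_f) following eqs. (5) and (6); masses in Msun,
    separation in whatever units it came in.
    """
    q = M2 / M1                                   # initial mass ratio
    M1_f = 0.3 * M1                               # stripped helium core
    M2_f = M1 * (q + 0.7 * q**2)                  # eq. (5)
    ratio = ((M1_f / M1) * (M2_f / M2))**(-2) \
            * ((M1_f + M2_f) / (M1 + M2))**(2.0 * beta + 1.0)   # eq. (6)
    return M1_f, M2_f, a * ratio

print(type_I_transfer(20.0, 10.0, 500.0))   # e.g. a 20 + 10 Msun pair at 500 Rsun
```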
In describing accretion on a neutron star we follow Bethe and Brown (1998), and we will call this regime type II mass transfer. A neutron star accretes matter while moving in orbit through the extended envelope of the giant. We use the Bondi-Hoyle accretion rate
$$\dot{M}=\pi \rho vR_{ac}^2$$
(7)
where $`\rho `$ is the density in the vicinity of the compact object of mass $`M_1`$, $`v`$ is its velocity, and $`R_{ac}=2GM_1/v^2`$ is the accretion radius. Consideration of the energy loss equations lead to the final mass of the compact object (Bethe and Brown, 1998)
$$M_1^f=2.4\times \left(\frac{M_2^i}{M_1^i}\right)^{1/6}M_1^i.$$
(8)
We assume that the giant loses its envelope, so its final mass is just that of its helium core, $`M_2^f=0.3M_2^i`$. The orbital separation follows from the energy integration, i.e. equations 5.18 through 5.23 of Bethe and Brown (1998):
$$a_f=0.145\times a_i\left(\frac{M_1^i}{M_2^i}\right).$$
(9)
Finally the initial secondary undergoes a supernova explosion, and provided that the system survives, we obtain a compact object binary, consisting of either two neutron stars or of a black hole and a neutron star.
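The Bondi-Hoyle rate of eq. (7) is easy to evaluate; the sketch below is our own illustration, with a purely representative envelope density and orbital velocity (not values taken from the paper):

```python
# Bondi-Hoyle accretion rate, eq. (7): Mdot = pi * rho * v * R_ac^2 with
# R_ac = 2 G M / v^2.  SI units; the demo numbers are illustrative only.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
YEAR = 3.156e7         # s

def bondi_hoyle_rate(M_ns, rho_env, v):
    """M_ns in Msun, rho_env in kg/m^3, v in m/s; returns (R_ac [m], Mdot [Msun/yr])."""
    R_ac = 2.0 * G * M_ns * M_SUN / v**2
    mdot = math.pi * rho_env * v * R_ac**2          # kg/s
    return R_ac, mdot * YEAR / M_SUN

print(bondi_hoyle_rate(1.4, 1.0e-3, 1.0e5))   # neutron star inside a giant envelope
```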
#### 2.2.2 Evolution of two stars for intermediate mass ratio, $`(0.8897<q<0.95)`$.
When $`q>0.8897`$ the secondary is already a giant when the primary explodes as a supernova. The upper limit is explained in the next subsection.
For the case of small orbital separations the evolution begins in a similar way as for small $`q`$ described above: the primary may transfer mass to the secondary as described by type I. When the secondary becomes a giant, the primary is already a helium star, stripped of its envelope. The secondary may then transfer mass to the primary, and we approximate this process also by type I. The primary explodes as a supernova and forms a neutron star, which may accrete from the secondary, provided that the latter is still a giant. The mass transfer in this phase is described by type II. The secondary then undergoes a supernova explosion, and if the system survives we obtain a compact object binary.
In the case of large orbital separations the first stage of mass transfer from the giant to the main sequence star (type I) does not occur. The stars may begin to interact only when both of them are already in the giant stage. In this case they enter a common envelope phase and lose the envelope. To describe this stage of evolution we assume that the giants lose their entire envelopes and become helium stars with 30 per cent of the initial giant masses. We then approximate the change in the size of the orbit by:
$$\frac{a_f}{a_i}=\left[1+\frac{(M_1^i+M_2^i)M_1^i(1-q^2)}{\alpha _{CE}M_1^fM_2^f}\right]^{-1}.$$
(10)
As above, $`a`$ is the binary separation and $`M_1,M_2`$ are the masses of the primary and secondary. The indices $`i,f`$ correspond to the values before and after the common envelope phase, respectively, and $`\alpha _{CE}`$ describes the efficiency with which orbital energy is expended in the dispersal of the common envelope. Equation (10) was adopted from Yungelson and Tutukov (1993) and describes the change of the orbital separation $`a`$ following the decrease of the orbital energy used to eject the common envelope. They also showed that $`0.6\le \alpha _{CE}\le 1`$, and we have performed our calculations for several values: $`\alpha _{CE}=0.4,0.6,0.8,1.0`$. Our calculations show that the results (the production rate of the compact object binaries and their properties) are almost indistinguishable for these values, so for simplicity we present our results for the single value $`\alpha _{CE}=0.8`$.
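A compact sketch of this prescription (our own illustration; the stripped-core masses follow the 30 per cent assumption above, and the example numbers are arbitrary) is:

```python
# Common envelope of two giants: both are reduced to their helium cores and the
# orbit shrinks according to eq. (10).  Masses in Msun, separation in any unit.
def common_envelope(M1_i, M2_i, a_i, alpha_CE=0.8):
    q = M2_i / M1_i
    M1_f, M2_f = 0.3 * M1_i, 0.3 * M2_i
    factor = 1.0 + (M1_i + M2_i) * M1_i * (1.0 - q**2) / (alpha_CE * M1_f * M2_f)
    return M1_f, M2_f, a_i / factor          # a_f = a_i / factor, eq. (10)

print(common_envelope(12.0, 11.0, 500.0, alpha_CE=0.8))
```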
Similarly to the case of small orbital separations if the secondary is still a giant it may transfer mass to the neutron star formed in the supernova explosion of the primary. This mass transfer is described as type II.
#### 2.2.3 Evolution of two stars of almost equal mass, $`(q>0.95)`$
Two stars almost simultaneously (within $`10`$% of their main sequence lifetime) leave the main sequence and become giants. If their radii are large enough in relation to the periastron orbital separation, the orbit is tidally circularized. If the binary separation is small enough the giants may finally overfill their Roche lobes and enter a common envelope phase, which we describe by equation (10). The two stars go supernova one after the other, and if the system survives, we obtain a compact object binary consisting of two neutron stars.
After each process which changes the orbital parameters and the sizes of the components we check that the component sizes are not larger than the periastron binary separation.
### 2.3 Supernova explosion
The supernova explosions are very likely to have an impact on the binary systems. We assume that in each supernova explosion a neutron star with mass of $`1.4M_{}`$ is formed and we neglect the interaction of the companion star with the envelope ejected in the supernova explosion.
We draw a random time for the supernova explosion in the orbital motion of the binary and find the relative position and velocity of the two stars at this moment. The exploding star is then replaced by a neutron star and we add the kick velocity to its orbital velocity, while the expanding supernova shell carries away a part of the total momentum of the binary. If the energy of such a system is less than zero, the system is bound and we calculate the parameters of the new orbit of the binary and its new spatial velocity. For a detailed discussion of the effect of sudden mass loss and a random kick velocity on binary parameters see Hills (1983).
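A stripped-down version of this Monte Carlo step, for the simplest case of a circular pre-supernova orbit and a single Gaussian kick, might look as follows (this is our own toy sketch, not the production code; it ignores the systemic velocity bookkeeping and takes $`\sigma _v`$ as the one-dimensional dispersion of the kick components):

```python
# Toy Monte Carlo for the first supernova: instantaneous mass loss plus an isotropic
# Gaussian kick added to the orbital velocity of the exploding star; the binary
# survives if the post-explosion orbital energy is negative.  Circular pre-SN orbit.
import numpy as np

G = 1.9e5   # Newton's constant in Rsun (km/s)^2 / Msun (approximate)

def survival_fraction(M1_i, M2, a, sigma_v, M_ns=1.4, n_trials=100000, seed=1):
    rng = np.random.default_rng(seed)
    v_orb = np.sqrt(G * (M1_i + M2) / a)             # relative orbital speed, km/s
    kick = rng.normal(0.0, sigma_v, size=(n_trials, 3))
    v_rel = kick + np.array([v_orb, 0.0, 0.0])       # kick added to the orbital motion
    energy = 0.5 * np.sum(v_rel**2, axis=1) - G * (M_ns + M2) / a
    return np.mean(energy < 0.0)                     # fraction of bound systems

# a 3.6 Msun helium-star primary exploding next to an 8 Msun companion at a = 50 Rsun:
print(survival_fraction(3.6, 8.0, 50.0, sigma_v=200.0))
```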
After the first supernova explosion we also check whether the neutron star collides and merges with the companion, i.e. whether the radius of the companion is larger than the periastron binary separation.
### 2.4 Energy loss through gravitational radiation
Once a compact object binary is formed, its orbit evolves because of the gravitational wave energy loss. Orbital energy loss through radiation of gravitational waves becomes important once a compact object binary is in a tight and/or highly eccentric orbit. The evolution of the semi-major axis $`a`$ and of the orbital eccentricity $`e`$ in a binary emitting gravitational waves has been calculated by Peters (1964). The lifetime of a compact object binary is
$$t_{merg}=\frac{5c^5a^4(1-e^2)^{7/2}}{256G^3M_1M_2(M_1+M_2)}\left(1+\frac{73}{24}e^2+\frac{37}{96}e^4\right)^{-1}$$
(11)
where $`a`$ is the semi-major axis of the orbit, $`e`$ is its eccentricity, and $`M_1,M_2`$ are the masses of the compact objects.
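For completeness, eq. (11) can be evaluated directly; the short sketch below is ours and works in SI units internally, returning the lifetime in years:

```python
# Gravitational-wave merger time of eq. (11): a in solar radii, e dimensionless,
# masses in solar masses, result in years.
G, C = 6.674e-11, 2.998e8                    # SI
M_SUN, R_SUN, YR = 1.989e30, 6.957e8, 3.156e7

def t_merge(a, e, M1, M2):
    a_m = a * R_SUN
    m1, m2 = M1 * M_SUN, M2 * M_SUN
    t_circ = 5.0 * C**5 * a_m**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))
    ecc = (1.0 - e**2)**3.5 / (1.0 + 73.0 / 24.0 * e**2 + 37.0 / 96.0 * e**4)
    return t_circ * ecc / YR

print(t_merge(2.8, 0.6, 1.4, 1.4))   # a close, eccentric double neutron star
```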
## 3 Results
The code described above allows us to generate populations of compact object binaries and trace their statistical properties. In this paper we mainly concentrate on the dependence of these properties on the kick velocity distribution. In Figures 1 and 2 we present two example evolutionary paths leading to the formation of compact object binaries: a black hole neutron star binary and a double neutron star system.
Let us first consider the different evolutionary paths of the binaries depending on their initial parameters. We first consider a model with the Cordes and Chernoff (1997) kick velocity distribution. The evolution depends on four parameters: $`M`$ (primary initial mass), $`q`$ (initial mass ratio), $`a`$ (initial semi-major axis), and $`e`$ (initial eccentricity). In order to visualize the evolutionary effects we fix two of them and present the types of systems obtained in the course of the binary evolution. In Figure 3 we fix the initial orbital separation $`a=20R_{}`$ and eccentricity $`e=0.5`$. The graphs are empty in the lower left part, for which $`M\times q<10M_{}`$. This is the region for which the secondary is not massive enough to undergo a supernova explosion. However, in some systems for which the primary mass is $`M>20M_{}`$, the mass of the secondary is increased by accretion when the primary goes through the giant phase. Most of the systems end up either disrupted in the first supernova explosion or with a spiral-in of the neutron star into the secondary (top panels). The remaining systems may be disrupted in the second supernova explosion. The compact object binary population (bottom right panel) is bimodal: the neutron star neutron star binaries are formed from systems with $`q`$ very close to unity, while the black hole neutron star systems are primarily formed when $`q`$ is intermediate. Thus the initial distribution of the mass ratio $`q`$ has a strong influence on the production rates of these two types of compact object binaries. The number of systems shown in each panel does not correspond to the actual production rates; we present one thousand systems in each panel. The actual calculation produces 51% of systems with the secondary not massive enough to undergo a supernova explosion, 40% of systems disrupted after the first explosion, 6.2% of systems merged after the first explosion, 2.7% of systems disrupted after the second explosion, and 0.35% of compact object binaries.
In Figure 4 we present the results of the evolution of systems for which we fix the initial primary mass and the initial mass ratio, while varying the initial orbital parameters $`a`$ and $`e`$. The systems with small orbital separations and high eccentricity merge in the early phase of the evolution and are not considered further in this paper. Disruption after the first supernova explosion may occur for all systems. The populations of systems that merge after the first explosion, are disrupted in the second explosion, or form a compact object binary originate from the same region of the parameter space. One should note that, because of the circularization of orbits already in the initial stages of the binary evolution, the final population is not very sensitive to the initial eccentricity. On the other hand the distribution of the initial orbital separation is important, as the compact object binaries originate in systems with relatively small orbital separations (see the bottom right panel in Figure 4). One should also note that all the compact object binaries shown in Figure 4 are black hole neutron star binaries; double neutron star systems are formed only when $`q`$ is nearly unity in our simulations. As before, the number of systems shown in each panel does not correspond to the actual production rates, and we show one thousand systems in each panel. The actual calculation produces 12% of systems born in contact, 43% of systems with the secondary not massive enough to undergo a supernova explosion, 34% of systems disrupted after the first explosion, 0.7% of systems merged after the first explosion, 0.50% of systems disrupted in the second explosion and only 0.15% mergers.
We present the dependence of the production rates of the different types of compact object binaries on the width of the kick velocity distribution $`\sigma _v`$ in Figure 5. The number of double neutron star systems that merge within the Hubble time increases with the kick velocity, while the production rate of black hole neutron star systems decreases. These two rates are nearly equal when the kick velocity is roughly that given by Cordes and Chernoff (1997).
In Figure 6 we present the distributions of the orbital parameters of compact object binaries for a few representative values of the width of the kick velocity distribution $`\sigma _v`$. Each panel contains one thousand compact object binaries. To understand these plots let us first consider the properties of the population of objects just before the second supernova explosion in the case $`\sigma _v=0`$ km s<sup>-1</sup>. Some of the systems are wide, with the eccentricity varying from zero to unity; however, most of the systems populate a region with eccentricities above $`0.45`$. A characteristic property of this group is a correlation between eccentricity and period. Objects in this group originate in systems with a small initial value of $`q`$. The masses of the first-formed compact objects in these systems have a narrow distribution: the mass of the compact object after the accretion phase depends only weakly on its mass before accretion (see equation 8), and is typically $`M_1^{\prime }\approx 2.3M_{}`$. The mass of the secondary just before the second supernova explosion is the mass of the helium core of the secondary star and lies in the range $`3.0M_{}<M_2^{\prime }<5.5M_{}`$. The orbital parameters of such a system after an explosive mass loss (the second supernova explosion) depend only on the total mass loss, see e.g. Pols and Marinus (1994). In our calculation the newly formed neutron star has a mass of $`1.4M_{}`$, thus the fraction of the total mass retained in the explosion, $`x=(1.4M_{}+M_1^{\prime })/(M_1^{\prime }+M_2^{\prime })`$, is in the range $`0.47<x<0.69`$. Systems that lose more than half of their mass ($`x<0.5`$) become unbound. The other systems become eccentric, with the eccentricity given by $`e^{}=(1-x)/x`$. Hence the lowest eccentricity a system can have after the second supernova explosion in our simulations is $`e^{}=0.44`$. The relation between the new orbital period $`P^{}`$ and the new eccentricity $`e^{}`$ for a given value of $`x`$ is
$$P^{}=P_0\frac{(1+e^{})^{1/2}}{(1-e^{})^{3/2}},$$
(12)
where $`P_0`$ is the orbital period before the explosion. This relation defines the curved shape of the distribution in the $`P,e`$ diagram. The objects populating the right hand side of the $`P,e`$ plot originate from systems with intermediate or large $`q`$. The intermediate-$`q`$ objects went through accretion onto the neutron star (regime II of the mass transfer described above) and therefore a similar reasoning applies to them. Systems with a high initial value of $`q`$, however, result in wide binaries (periods larger than 100 days) with a range of eccentricities. The compact object binaries in Figure 6 (top left panel) with eccentricities below 0.45 and orbital periods smaller than $`10^3`$ days originate in binaries of intermediate and large initial mass ratio.
In the case of a non-zero kick velocity the systems with high $`q`$ are very unlikely to survive. In fact there are only two such systems on the plot for $`\sigma _v=200`$ km s<sup>-1</sup>, and none for higher velocities. The escape velocity in long period systems is low, and even if such systems survive the first supernova event, they are very likely disrupted in the second supernova explosion.
As the kick velocity is increased, the shape of the distribution in the $`P,e`$ diagram also changes. In the case $`\sigma _v=200`$ km s<sup>-1</sup> there is a small fraction of high eccentricity, short period systems. This region of the parameter space fills up as the kick velocity increases; see the lower panels of Figure 6, where the cases $`\sigma _v=400`$ km s<sup>-1</sup> and $`\sigma _v=800`$ km s<sup>-1</sup> are shown. This is due to the fact that with increasing kick velocity the long period systems are easier to disrupt, and the fraction of surviving short period systems becomes larger. We note that the distributions shown in Figure 6 are similar to those obtained previously (Portegies Zwart and Spreeuw, 1996; Tutukov and Yungelson, 1993).
In Figure 7 we present the cumulative distributions of the lifetimes of compact object binaries for the same set of kick velocities as in Figure 6. In the case of no kick velocity, $`\sigma _v=0`$ km s<sup>-1</sup>, the distribution is bimodal: the systems with small $`q`$ typically merge within the Hubble time (which we take to be $`15`$ Gyr), while the systems with higher $`q`$ remain in wide orbits and their merger times exceed the Hubble time. With increasing kick velocity only the tightly bound systems survive. The lifetime of a system scales with the fourth power of the semi-major axis, and therefore the median lifetime decreases with increasing kick velocity. In the case of $`\sigma _v=200`$ km s<sup>-1</sup> the median lifetime is $`4\times 10^8`$ years, and it decreases by a factor of four when the kick velocity is doubled. For the highest kick velocity, $`\sigma _v=800`$ km s<sup>-1</sup>, the median lifetime is only $`3\times 10^7`$ years. One should note, however, that the distribution of $`t_{\mathrm{merge}}`$ is very skewed, and we present it on a logarithmic axis; there is a long tail extending to about the Hubble time. Most of the mergers take place within the Hubble time and only a small fraction of the total population lasts longer.
Finally, in Figure 8 we present $`f_{merge}`$ – the fraction of all binaries with the primary star more massive than $`10M_{}`$ that produce a pair of compact objects in a binary system. We calculate $`f_{merge}`$ for a large range of kick velocity distribution widths; to calculate each point in Figure 8 we computed one thousand compact object binaries. This fraction falls off approximately exponentially with increasing kick velocity. We have fitted a modified exponential to this dependence
$$f_{merge}=\mathrm{exp}\left(-4.21-8.51\times 10^{-3}\sigma _v+2.6\times 10^{-6}\sigma _v^2\right).$$
(13)
The fit is shown by the dashed line in Figure 8 and is accurate to about 6%; here $`\sigma _v`$ is expressed in km s<sup>-1</sup>. Note that equation (13) can be used to determine the merger fraction for any kick velocity distribution that can be expressed as a linear combination of three dimensional Gaussians. Equation (13) can be used together with Figure 5 to obtain the production rates of each type of compact object binary as a function of the width of the kick velocity distribution.
The actual compact object merger rate in the Galaxy can be calculated given the observed supernova rate, the fraction of stars that exist in binaries, and an assumed star formation history. Assuming that there is one supernova explosion every fifty years in the Galaxy and that the binary fraction is 50%, we write the supernova rate as $`0.02f_{SN}`$ year<sup>-1</sup> and the binary fraction as $`0.5f_{bin}`$, where $`f_{SN}`$ and $`f_{bin}`$ are factors of the order of unity.
In the simplest case when we assume that the star forming process has been going at the same rate throughout the history of the Galaxy we obtain the compact object merger rate
$$r=0.0001\times f_{SN}f_{bin}f_{merge}\mathrm{year}^{-1},$$
(14)
where $`f_{merge}`$ is expressed in percent. Taking values of $`f_{merge}`$ from Figure 8, we may calculate the expected number of compact object merging events; for, say, $`\sigma _v=100`$ km s<sup>-1</sup> this is about 0.0002 per year, i.e. approximately 1 event per 5000 years per galaxy.
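The scaling behind these numbers can be made explicit with a short sketch (ours). The suppression factor below uses only the $`\sigma _v`$-dependent part of the fit (13), normalised to $`\sigma _v=0`$, and the 2 per cent value of $`f_{merge}`$ is the one implied by the $`\sigma _v=100`$ km s<sup>-1</sup> example above rather than an independent result:

```python
# Relative suppression of the merger fraction with kick velocity (sigma_v-dependent
# part of the fitted exponential, eq. 13) and the Galactic merger rate of eq. (14).
import math

def suppression(sigma_v):
    """f_merge(sigma_v) / f_merge(0); sigma_v in km/s."""
    return math.exp(-8.51e-3 * sigma_v + 2.6e-6 * sigma_v**2)

def galactic_rate(f_merge_percent, f_SN=1.0, f_bin=1.0):
    """Mergers per year in the Galaxy, eq. (14); f_merge given in per cent."""
    return 1.0e-4 * f_SN * f_bin * f_merge_percent

print(suppression(500.0))    # suppression between 0 and 500 km/s (a factor of a few tens)
print(galactic_rate(2.0))    # ~2e-4 per year, the sigma_v = 100 km/s example above
```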
Since the compact object merger rate is directly proportional to $`f_{merge}`$, it also depends approximately exponentially on the width of the kick velocity distribution. The assumption of constant star formation throughout the history of the Galaxy is in fact not crucial: as we have seen, most of the mergers take place within a few times $`10^8`$ years even for rather small velocity kicks, so in reality we are only concerned with the star formation history of the last $`10^9`$ years. The star formation rate in the Galaxy has most probably been constant over the last $`5\times 10^9`$ years (Miller and Scalo, 1979).
The calculation of the detection rate in gravitational wave detectors (Abramovici et al., 1992) requires a number of additional assumptions, for example the galaxy density in our local and more distant neighborhood; for a discussion see e.g. Phinney (1991); Chernoff and Finn (1993). Regardless of these assumptions the detection rate is proportional to the compact object merger rate in the Galaxy, provided that stellar populations are similar in other galaxies. When calculating the expected rates one has to take into account the masses of the systems that merge: the volume of space probed in a flux limited sample of events scales as $`M^{5/2}`$ (Tutukov and Yungelson, 1993). Thus mergers of heavy objects, like a neutron star with a black hole or a pair of black holes, are visible in a larger volume and may yield a similar observational rate despite being less frequent than double neutron star mergers.
## 4 Conclusions
We have modeled the evolution of binary systems using a Monte Carlo code. The results of the simulations are consistent with the results of other codes (Portegies Zwart and Yungelson, 1998; Lipunov et al., 1997; Portegies Zwart and Verbunt, 1996). Using this code we find that the merger rate of compact object binaries, and consequently the detection rate in gravitational wave detectors, falls approximately exponentially with the width of the kick velocity distribution. While the code that we use is far from describing all the details of binary stellar evolution, we emphasize that it produces results similar to those obtained elsewhere, and in this work we concentrate only on the relative scaling of the resultant merger rate with the kick velocity in a supernova explosion.
The exact shape of the kick velocity distribution is very difficult to measure. Cordes and Chernoff (1997) and Bethe and Brown (1998) use a distribution which is a weighted sum of two Gaussian distributions: 80 percent with a width of $`175`$ km s<sup>-1</sup> and 20 percent with $`700`$ km s<sup>-1</sup>; Portegies Zwart and Spreeuw (1996) use a Gaussian with a width of $`450`$ km s<sup>-1</sup>. On the other hand, Iben and Tutukov (1996) argue that no velocity kicks are required at all; however, the lack of pulsars in wide binaries suggests that at least a small kick of a few tens of km s<sup>-1</sup> must be present (Portegies Zwart et al., 1997). Thus the kick velocity determination remains uncertain, and the detection rate estimates for gravitational wave detectors inherit this uncertainty. Approximating the kick velocity distribution by a single Gaussian profile and changing its dispersion, we calculated the merger rate for a wide range of velocity kicks. Changing the width of the assumed profile within the values proposed by other authors, namely from $`\sigma _{\mathrm{min}}=0.0`$ to $`\sigma _{\mathrm{max}}=500`$ km s<sup>-1</sup>, results in a decrease of the merger rate by a factor of 30. The expected number of compact object mergers varies by more than an order of magnitude when the kick velocity goes from $`200`$ km s<sup>-1</sup> (the value preferred in population studies) to $`500`$ km s<sup>-1</sup> (the value measured in the observed population of pulsars).
The measurements of gravitational wave signals may thus allow some determination of the kick velocities. The rates, and perhaps also measurements of the "chirp" masses $`ℳ=\mu ^{\frac{3}{5}}M^{\frac{2}{5}}`$ (where $`\mu `$ and $`M`$ are the reduced and total mass of the binary system; Chernoff and Finn, 1993) and of their distribution, will pose yet another constraint on stellar evolution.
###### Acknowledgements.
This work has been funded by the KBN grants 2P03D00911, 2P03D01311, and 2P03D00415, and has also made use of the NASA Astrophysics Data System. The authors are very grateful to Dr. Portegies Zwart, the referee, for many helpful comments.
# Accurate determination of the Lagrangian bias for the dark matter halos
## 1 Introduction
Galaxies and clusters of galaxies are believed to form within the potential wells of virialized dark matter (DM) halos. Understanding the clustering of DM halos can therefore provide important clues to understanding the large scale structures in the Universe. A number of studies have been carried out to obtain the two-point correlation function $`\xi _{hh}`$ of DM halos. Two distinct approaches are widely adopted. One is analytical and is based on the Press-Schechter (PS) theories (e.g. Kashlinsky k87 (1987), k91 (1991); Cole & Kaiser ck89 (1989); Mann, Heavens, & Peacock mhp93 (1993); Mo & White mw96 (1996), hereafter MW96; Catelan et al. 1998a ; Porciani et al. pmlc98 (1998)). The other is numerical and is based on N-body simulations (e.g. White et al. wfde87 (1987); Bahcall & Cen bc92 (1992); Jing et al. jmbf93 (1993); Watanabe, Matsubara, & Suto wms94 (1994); Gelb & Bertschinger gb94 (1994); Jing, Börner, & Valdarnini jbv95 (1995); MW96; Mo, Jing, & White mjw96 (1996); Jing j98 (1998); Ma m98 (1999)). The most up-to-date version of the analytical studies is given by MW96, which states that $`\xi _{hh}(r,M)`$ of halos with a mass $`M`$ is proportional to the DM correlation function $`\xi _{mm}(r)`$ on the linear clustering scale ($`\xi _{mm}(r)\ll 1`$), i.e. $`\xi _{hh}(r,M)=b^2(M)\xi _{mm}(r)`$, with the bias factor
$$b_{mw}(M)=1+\frac{\nu ^2-1}{\delta _c},$$
(1)
where $`\delta _c=1.68`$, $`\nu \equiv \delta _c/\sigma (M)`$, and $`\sigma (M)`$ is the linearly evolved rms density fluctuation of top-hat spheres containing on average a mass $`M`$ (see MW96 and references therein for more details about these quantities). The subscript $`mw`$ for $`b(M)`$ in Eq.(1) denotes the result analytically derived by MW96. On the other hand, the most accurate simulation results to date were presented in our recent work (Jing j98 (1998)), where we studied $`\xi _{hh}`$ for halos in four scale-free models and three CDM models with the help of a large set of high-resolution N-body simulations of $`256^3`$ particles. Our result unambiguously showed that, while the bias is linear on the linear clustering scale, the bias factor given by MW96 significantly underestimates the clustering of small halos with $`\nu <1`$. Our simulation results, both for the CDM models and the scale-free models, can be accurately fitted by
$$b_{fit}(M)=(\frac{0.5}{\nu ^4}+1)^{(0.06-0.02n)}(1+\frac{\nu ^2-1}{\delta _c}),$$
(2)
where $`n`$ is the index of the linear power spectrum $`P_m(k)`$ at the halo mass $`M`$
$$n=\frac{d\mathrm{ln}P_m(k)}{d\mathrm{ln}k}|_{k=\frac{2\pi }{R}};R=\left(\frac{3M}{4\pi \overline{\rho }}\right)^{1/3}.$$
(3)
In the above equation $`\overline{\rho }`$ is the mean density of the universe.
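For reference, the two bias prescriptions are trivial to compare numerically; in the sketch below (ours) the example values of $`\nu `$ and $`n`$ are arbitrary:

```python
# Eulerian halo bias: the analytical prediction of eq. (1) versus the fit of eq. (2),
# as functions of nu = delta_c / sigma(M) and the local spectral index n.
DELTA_C = 1.68

def b_mw(nu):
    return 1.0 + (nu**2 - 1.0) / DELTA_C                        # eq. (1)

def b_fit(nu, n):
    return (0.5 / nu**4 + 1.0)**(0.06 - 0.02 * n) * b_mw(nu)    # eq. (2)

for nu in (0.3, 0.5, 1.0, 2.0, 4.0):
    print(nu, round(b_mw(nu), 3), round(b_fit(nu, n=-2.0), 3))
```

The loop makes the point of the text explicit: the two formulae agree for $`\nu >1`$ but differ appreciably for small halos with $`\nu <1`$.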
MW96 derived their formula in two steps. First they obtained the bias factor $`b_{mw}^L(M)`$ in the Lagrangian space using the PS theories. The Lagrangian bias reads,
$$b_{mw}^L(M)=\frac{\nu ^2-1}{\delta _c}.$$
(4)
But the bias that is observable is in the Eulerian space. MW96 obtained the Eulerian bias (Eq.1) with a linear mapping from the Lagrangian clustering pattern, $`b_{mw}(M)=b_{mw}^L(M)+1`$ (cf. Catelan et al. 1998a ). From their derivation, we conjectured in Jing (j98 (1998)) that two possibilities could be responsible for the failure of the MW96 formula for small halos. The first possibility is that the PS theories are not adequate for describing the formation of small halos. Halo formation in the PS theories is uniquely determined by the local peak height through the spherical collapse model, while in reality halo formation, especially that of small halos, can be significantly influenced by non-local tidal forces (e.g. Katz, Quinn, & Gelb kqg93 (1993); Katz et al. kqbg94 (1994)). A recent analysis by Sheth & Lemson (sl98 (1998)) for simulations of $`100^3`$ particles also gave some evidence that the Lagrangian bias of small halos already deviates from the MW96 prediction (Eq.4). Possible invalidity of the linear mapping, the second possibility that could fail the MW96 formula, was recently discussed by Catelan et al. (1998b ). They pointed out that the linear mapping might not be valid for small halos because of the large scale non-linear tidal force. All this evidence is important but still preliminary and qualitative. It would be important to find out whether one, or a combination, of the two possibilities can quantitatively explain the failure of the MW96 formula.
In this Letter, we report our new determination of the Lagrangian bias factor $`b^L(M)`$ using the simulations of Jing (j98 (1998)). We use a novel method, which we call the cross-power spectrum between the linear density field and the halo number density field, to measure the bias factor. The method has several important advantages over the conventional correlation function (CF) estimator. Applying this method to our high-resolution simulations yields a very accurate determination of $`b^L(M)`$ over four orders of magnitude in halo mass, both in scale-free models and in CDM models. Our result for $`b^L(M)`$ can be accurately represented by $`b_{fit}(M)-1`$, which quantitatively indicates that it is the failure of the PS theories in describing the formation of small halos that results in the failure of the MW96 formula. This result has important implications for the PS theories.
An independent, closely related work by Porciani et al. (pcl98 (1999)) appeared when the present work was nearly finished. They measured the Lagrangian bias for two scale-free simulations ($`n=-1`$ and $`n=-2`$) of $`128^3`$ particles by fitting the halo two-point correlation function with the linear and second-order terms (corresponding to the coefficients $`b_1^L`$ and $`b_2^L`$ in Eq. 5). They concluded that the failure of the MW96 formula essentially exists already in the Lagrangian space and that their result can be reasonably described by $`b_{fit}(M)-1`$. While our present study ($`n=-1`$ and $`n=-2`$) confirms their result in these respects, our simulations have significantly higher resolution (a factor of 8 in mass), which is essential for a robust and accurate measurement of the clustering of small halos. Moreover, we explore a much larger model space and use a superior measurement method. In addition, there is some quantitative difference between their measured bias and ours, which will be discussed in Sect. 3.
We will describe and discuss the cross-power spectrum method in Sect. 2, where we will also briefly describe our halo catalogs. Our measurement results will be presented and compared, in Sect. 3, to the analytical formula Eq.(4) and the fitting formula Eq.(2) for the Eulerian bias. In Sect. 4, we will summarize our results and discuss their implications for the PS theories and for galaxy formation.
## 2 Methods and simulation samples
We use the fluctuation field $`\delta (𝐫)=\rho (𝐫)/\overline{\rho }-1`$ to denote the density field $`\rho (𝐫)`$, where $`\overline{\rho }`$ is the mean density. Smoothing both the halo number density field and the linear density field over scales much larger than the halo Lagrangian size, we assume that the smoothed halo field $`\delta _h(𝐫)`$ can be expanded in the smoothed linear density field $`\delta _m(𝐫)`$ (Mo et al. mjw97 (1997)),
$$\delta _h(𝐫)=b_1^L\delta _m(𝐫)+\frac{1}{2}b_2^L\delta _m^2(𝐫)+\mathrm{}.$$
(5)
This general assumption, especially to the first order, has been verified by previous simulation analysis (e.g. MW96; Mo, Jing, & White mjw97 (1997); Jing j98 (1998)). This expansion is however naturally expected in the PS theories, with the first coefficient $`b_1^Lb^L`$ given by Eq.(4) and the higher order coefficients by MW96, Mo et al. (mjw97 (1997)), and Catelan et al. (1998a ).
By Fourier transforming Eq.(5), multiplying both sides by $`\delta _m^{*}(𝐤)`$, and taking an ensemble average, we get the cross-power spectrum $`P_c(𝐤)\equiv \langle \delta _h(𝐤)\delta _m^{*}(𝐤)\rangle `$. Because the bispectrum of the linear density field vanishes for Gaussian fluctuations, the second term on the right hand side gives no contribution. Therefore we have
$$P_c(k)=b^LP_m(k)+(\mathrm{3rd}\mathrm{and}\mathrm{higher}\mathrm{order}\mathrm{terms}),$$
(6)
where $`P_m(k)`$ is the linear power spectrum. This equation serves as the basis for our measurement of $`b^L`$. The linear density field $`\delta _m(𝐤)`$ is known once we set the initial conditions for the simulations. The halo density field $`\delta _h(𝐤)`$ can be easily measured for a sample of the DM halos with the FFT method. The ensemble average in Eq.(6) is replaced in the measurement by the average over the different modes within a fixed range of the wavenumber $`k`$. Thus both $`P_c(k)`$ and $`P_m(k)`$ can be easily measured, and the bias factor $`b^L`$ is just the ratio of these two quantities, provided the higher order corrections are small and can be neglected.
This method has several important advantages over the conventional CF analysis. On the linear scale, where the clustering of small halos is weak (about 10% of the mass correlation), the cross-power spectrum can be estimated accurately, because it does not suffer from the finite volume effect or from the uncertainty of the mean halo number density. The errors of $`P_c(k)`$ and $`P_m(k)`$ are uncorrelated among different $`k`$-bins for a Gaussian field, which eases our error estimate for $`b^L`$. The second order correction (the $`b_2^L`$-term) vanishes in the cross-power spectrum. The method yields a determination of $`b_1^L`$ itself, not the square of $`b^L`$, thus we can see whether the bias factor $`b_1^L`$ is positive for $`M/M_{}>1`$ and negative for $`M/M_{}<1`$ as Eq.(4) predicts, where $`M_{}`$ is the characteristic mass defined by $`\sigma (M_{})=\delta _c`$. None of these attractive features is present in the CF analysis. An additional interesting feature is that the shot noise due to the finite number of halos is greatly suppressed in the estimate of $`P_c(k)`$, because the linear density field does not contain any shot noise and the cross-power spectrum of this field with the shot noise (of a random sample) is zero in the mean. This is quite different from the (self) power spectrum estimate of the halos, which must be corrected for the shot noise $`1/N`$ ($`N`$ is the number of halos). In the next section, we will show quantitatively that the shot noise effect is indeed negligible in our measurement of $`b^L`$, even for a sample containing only $`200`$ halos.
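In practice the estimator amounts to a few lines of code once the two fields are available on a grid; the following sketch is our own illustration of the procedure, not the analysis code used here, and it omits refinements such as the mass-assignment correction:

```python
# Schematic Lagrangian-bias estimator from the cross-power spectrum, eq. (6):
# b_L(k) ~ P_c(k) / P_m(k).  delta_m is the linear (initial) overdensity and
# delta_h the halo overdensity in Lagrangian space, both on an n^3 grid in a
# cubic box of side `box`.  FFT normalisation constants cancel in the ratio.
import numpy as np

def lagrangian_bias(delta_m, delta_h, box, nbins=15):
    n = delta_m.shape[0]
    dm, dh = np.fft.rfftn(delta_m), np.fft.rfftn(delta_h)
    kf = 2.0 * np.pi / box                              # fundamental mode
    k1 = np.fft.fftfreq(n, d=1.0 / n) * kf
    k3 = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kk = np.sqrt(k1[:, None, None]**2 + k1[None, :, None]**2 + k3[None, None, :]**2)
    pc = (dh * dm.conj()).real.ravel()
    pm = (np.abs(dm)**2).ravel()
    kk = kk.ravel()
    edges = np.linspace(kf, 0.6 * kf * n / 2, nbins + 1)   # stay below the Nyquist mode
    idx = np.digitize(kk, edges)
    ks, bs = [], []
    for i in range(1, nbins + 1):
        sel = idx == i
        if sel.any():
            ks.append(0.5 * (edges[i - 1] + edges[i]))
            bs.append(pc[sel].sum() / pm[sel].sum())
    return np.array(ks), np.array(bs)
```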
The halo catalogs analyzed here are the same as those used in Jing (j98 (1998)). The cosmological models, the simulations, and the halo catalogs were described in detail by Jing (j98 (1998)). Here we only briefly summarize the features that are relevant to the present work. The halos were selected with the friends-of-friends algorithm, using a linking length of 0.2 times the mean particle separation, from a set of $`\mathrm{P}^3\mathrm{M}`$ N-body simulations of $`256^3`$ particles. The simulations cover four scale-free models, $`P_m(k)\propto k^n`$ with $`n=-0.5`$, $`-1.0`$, $`-1.5`$ and $`-2.0`$, and three typical cold dark matter (CDM) models, which are the SCDM, LCDM and OCDM models respectively. Each of the $`n\ge -1.5`$ scale-free simulations has seven outputs, and that of $`n=-2`$ has eight outputs. The CDM models are simulated with two different box sizes, $`100`$ and $`300h^1\mathrm{Mpc}`$. Three to four realizations were run for each model and for each box size in the case of the CDM models. In order to study the halo distribution in the Lagrangian space, we trace back each of the halo members to its initial position before the Zel'dovich displacement, i.e. the position in the Lagrangian space. The position of a halo is defined as the center of mass of its members in the Lagrangian space. In this way our halo catalogs in the Lagrangian space are compiled.
## 3 The Lagrangian bias parameter
We present our results for the linear clustering scales where the variance $`\mathrm{\Delta }^2(k)\equiv k^3P_m(k)/2\pi ^2`$ is less than $`\mathrm{\Delta }_{max}^2`$. In this paper, we take $`\mathrm{\Delta }_{max}^2=0.5`$, but our results change little if we take $`\mathrm{\Delta }_{max}^2=0.25`$ or $`\mathrm{\Delta }_{max}^2=1.0`$. Figure 1 shows the ratio of the cross-power spectrum $`P_c(k)`$ to the linear power spectrum $`P_m(k)`$ as a function of $`k`$ for two halo masses in the $`n=-0.5`$ scale-free model. The ratios, i.e. the bias factors, do not depend on the scale $`k`$, so the linear bias approximation is valid. The bias factor is positive for the large halos of $`M=13M_{}`$, but negative for the small halos of $`M=0.16M_{}`$, consistent with the MW96 prediction Eq.(4). Here we only show two examples, but the above features are found in all the models.
To quantitatively examine the shot noise effect caused by the finite number of halos (§2), we repeat our calculation for random samples. For each halo sample, we generate ten random samples. The mean bias factor calculated for the random samples is zero, as expected. More interesting is that the fluctuation of the bias factor between the random samples is always small compared to the halo bias factor. This is shown in Figure 1, where the dashed lines are the $`1\sigma `$ upper limits of the shot noise. The catalog of $`M=13M_{}`$ contains only $`200`$ halos in each realization. Even in this case, the shot noise leads to an uncertainty of only $`10\%`$ of the halo bias in every $`k`$-bin (except for the first bin). The shot noise is indeed effectively suppressed in our cross-power spectrum measurement.
From Figure 1, we know that the Lagrangian bias $`b^L`$, on the linear clustering scale, is a function of the halo mass $`M`$ only. The self-similar nature of the scale-free models implies that the bias depends only on $`M/M_{}`$. Three panels of Figure 2 show our result for $`b_1^L(M/M_{})`$ in the models $`n=-0.5`$, $`-1.0`$ and $`-2.0`$ at different output times. For lack of space we do not plot our result for the $`n=-1.5`$ model, but all of our following discussion still includes this model. The excellent scaling exhibited by $`b^L(M/M_{})`$ at different outputs supports the conclusion that our measurement is not contaminated by any numerical artifacts. The bias factor is negative for $`M<M_{}`$, zero for $`M=M_{}`$, and positive for $`M>M_{}`$, in agreement with the MW96 formula. More quantitatively, the MW96 formula describes very well the Lagrangian clustering of large halos $`M>M_{}`$, but systematically underestimates (i.e. predicts a more negative bias) for small halos $`M<M_{}`$. If the linear mapping is valid, i.e. $`b=b^L+1`$, these results are fully consistent with our previous finding (Jing j98 (1998)) that the MW96 formula systematically underestimates the Eulerian bias for small halos. More interestingly, with the linear mapping assumption and with our fitting formula Eq.(2) for the Eulerian bias, we can construct a Lagrangian bias $`b_{fit}^L\equiv b_{fit}-1`$ that agrees very well with our simulation results (the solid lines in Fig. 2). So the inaccuracy of the MW96 Eulerian bias formula Eq. (1) already exists in their derivation of the Lagrangian bias. Our results for the $`n=-1`$ and $`n=-2`$ models are qualitatively in good agreement with Porciani et al. (pcl98 (1999)). Quantitatively, we note that their Lagrangian bias is significantly lower than the MW96 formula for $`M>M_{}`$ in the $`n=-1`$ model, in contrast to the good agreement we find with the MW96 formula for all models for $`M>M_{}`$.
Our result for the LCDM model is shown in the lower-right panel of Figure 2. Comparing with the MW96 analytical formula Eq.(1) and the fitting formula Eq.(2), we reach the same conclusion as for the scale-free models: the analytical formula underestimates the Lagrangian bias for small halos, while the fitting formula agrees quite well with the simulation results once the difference between the Eulerian and the Lagrangian spaces is accounted for with the linear mapping. The SCDM and OCDM models give essentially the same results, and we omit their plots.
Our fitting formula for the Eulerian bias can accurately describe the Lagrangian bias under the assumption of the linear mapping. The difference between the fitting formula and the simulation result is generally less than $`15\%`$ or $`2\sigma `$ ($`\sigma `$ is the error derived from the different realizations), except for one bin of the smallest halos in the $`n=-0.5`$ model. The simulation result is slightly higher in the $`n=-0.5`$ model but lower in the $`n=-2.0`$ model (both by about 15%) than the fitting formula for small halos ($`M\ll M_{}`$). The difference could come from a possible effect of non-linear mapping (Catelan et al. 1998b ) and/or from higher order contributions which depend on the measurement methods (for the Eulerian bias we used the CF analysis). We will address this problem more closely in a future paper.
## 4 Discussion and conclusions
In this Letter, we use a new method, the cross-power spectrum between the linear density field and the halo number density field, to determine the Lagrangian bias. The method has several apparent advantages over the conventional correlation function estimator in determining the bias factor. Applying this method to the halo catalogs of Jing (j98 (1998)), we find that the Lagrangian bias is linear on the linear clustering scale. The Lagrangian bias $`b^L(M)`$ is positive for halo mass $`M>M_{}`$, zero for $`M=M_{}`$, and negative for $`M<M_{}`$, qualitatively consistent with the MW96 prediction for the Lagrangian bias. Quantitatively, our simulation results of $`b^L(M)`$ are in good agreement with the MW96 formula for large halos $`M/M_{}>1`$, but the MW96 formula significantly underestimates the Lagrangian clustering for small halos $`M/M_{}<1`$. Our measured Lagrangian bias can be described very well by our fitting formula for the Eulerian bias Eq.(2) under the linear mapping assumption. Our results therefore unambiguously demonstrate that the inaccuracy of the MW96 formula for the Eulerian bias already exists in their derivation for the Lagrangian bias.
A very subtle point is that there exists a small difference ($`<15\%`$) between our measured Lagrangian bias and our fitting formula \[Eq.(2)\] for Eulerian bias after the linear mapping is applied. This difference could be due to the difference of the measurement methods or/and the non-linear mapping effect. Our result however assures that the non-linear effect in the mapping, if any, must be small compared to the linear mapping.
The result of this paper has important implications for the Press-Schechter theories. The spherical collapse model that connects halo formation with density peaks has to be replaced with a model that can better describe the formation of small halos. This might be related to solving the long-standing problem that the halo number density predicted by the Press-Schechter theories differs by a factor of a few from simulation results (e.g., Gelb & Bertschinger gb94 (1994); Lacey & Cole lc94 (1994); Ma & Bertschinger mb94 (1994); Jing 1998, unpublished; Tormen t98 (1998); Kauffmann et al. ketal98 (1998); Somerville et al. setal98 (1998); Governato et al. getal98 (1998); Lee & Shandarin ls98 (1998); Jing, Kitayama, & Suto jks99 (1999)). Because galaxies are believed to form within small halos ($`M<M_{}`$), solving these problems is of fundamental significance for studies of galaxy formation.
It is also my pleasure to thank an anonymous referee for a constructive report and the JSPS foundation for a postdoctoral fellowship. The simulations were carried out on VPP/16R and VX/4R at the Astronomical Data Analysis Center of the National Astronomical Observatory, Japan.
# Model independent constraints from vacuum and in-medium QCD Sum Rules
<sup>1</sup><sup>1</sup>institutetext: Physik-Department, Theoretische Physik, Technische Universität München, D-85747 Garching, Germany (Received: date / Revised version: date)
## Abstract
We discuss QCD sum rule constraints based on moments of vector meson spectral distributions in the vacuum and in a nuclear medium. Sum rules for the two lowest moments of these spectral distributions do not suffer from uncertainties related to QCD condensates of dimension higher than four. We exemplify these relations for the case of the $`\omega `$ meson and discuss the issue of in-medium mass shifts from this viewpoint.
offprints: weise@physik.tu-muenchen.de
QCD sum rules have repeatedly been used in recent times to arrive at estimates for possible in-medium mass shifts of vector mesons 2 ; 7 . The validity of such estimates has been questioned, however, for several reasons. First, for broad structures such as the $`\rho `$ meson, whose large vacuum decay width is further magnified by in-medium reactions, the QCD sum rule analysis does not provide a reliable framework to extract anything like a “mass shift” 4 ; 5 . Secondly, notorious uncertainties exist at the level of factorization assumptions commonly used to approximate four-quark condensates in terms of $`\langle \overline{q}q\rangle ^2`$, the square of the standard chiral condensate. The first objection is far less serious for the $`\omega `$ meson which may well have a chance to survive as a reasonably narrow quasi-particle in nuclear matter 4 ; 4b . The second objection, however, is difficult to overcome: factorization of four-quark condensates may indeed be questionable.
In the present note we focus on the two lowest moments ($`\int dss^nR(s)`$ with $`n=0,1`$) of vector meson spectral distributions, in vacuum as well as in nuclear matter, and point out that these are subject to sum rules which do not suffer from the uncertainties introduced by four-quark condensates. These sum rules are shown to provide useful, model independent constraints which we exemplify for the case of the $`\omega `$ meson spectral distribution and its change in the nuclear medium. The sum rule for the second moment, $`\int dss^2R(s)`$, does involve the four-quark condensate. In fact it can be used in principle to determine this particular condensate and test the factorization assumption. The detailed analysis of this question will be deferred to a longer paper. In this short note we confine ourselves to conclusions that can be drawn without reference to four-quark condensates.
The starting point is the current-current correlation function
$$\mathrm{\Pi }_{\mu \nu }(q)=i\int d^4xe^{iqx}\langle 𝒯j_\mu (x)j_\nu (0)\rangle $$
(1)
where $`𝒯`$ denotes the time-ordered product and the expectation value is taken either in the vacuum or in the ground state of nuclear matter at rest. In vacuum the polarization tensor (1) can be reduced to a single scalar correlation function, $`\mathrm{\Pi }(q^2)={\scriptscriptstyle \frac{1}{3}}g^{\mu \nu }\mathrm{\Pi }_{\mu \nu }(q)`$. In nuclear matter there are two (longitudinal and transverse) correlation functions which coincide for a meson at rest with respect to the medium (i.e. with $`q^\mu =(\omega ,\stackrel{}{q}=0)`$).
The reduced correlation function is written as a (twice subtracted) dispersion relation,
$$\mathrm{\Pi }(q^2)=\mathrm{\Pi }(0)+\mathrm{\Pi }^{\prime }(0)q^2+\frac{q^4}{\pi }\int _0^{\infty }ds\frac{\mathrm{Im}\mathrm{\Pi }(s)}{s^2(s-q^2-iϵ)}.$$
(2)
where $`\mathrm{\Pi }(0)`$ vanishes in vacuum but contributes in nuclear matter. At large spacelike $`Q^2=-q^2>0`$ the QCD operator product (Wilson) expansion gives
$$12\pi ^2\mathrm{\Pi }(q^2=-Q^2)=c_0Q^2\mathrm{ln}\left(\frac{Q^2}{\mu ^2}\right)+c_1+\frac{c_2}{Q^2}+\frac{c_3}{Q^4}+\mathrm{}$$
(3)
We specify the coefficients $`c_i`$ for the isoscalar current $`j^\mu ={\scriptscriptstyle \frac{1}{6}}(\overline{u}\gamma ^\mu u+\overline{d}\gamma ^\mu d)`$, the case of the $`\omega `$ meson that we wish to use here for explicit evaluations. In vacuum we have:
$`c_0`$ $`=`$ $`{\displaystyle \frac{1}{6}}\left(1+{\displaystyle \frac{\alpha _S}{\pi }}\right),c_1={\displaystyle \frac{1}{2}}(m_u^2+m_d^2),`$ (4)
$`c_2`$ $`=`$ $`{\displaystyle \frac{\pi ^2}{18}}\langle {\displaystyle \frac{\alpha _S}{\pi }}𝒢^{\mu \nu }𝒢_{\mu \nu }\rangle +{\displaystyle \frac{2\pi ^2}{3}}\langle m_u\overline{u}u+m_d\overline{d}d\rangle ,`$ (5)
while $`c_3`$ involves combinations of four-quark condensates of (mass) dimension 6. The quark mass term $`c_1`$ is small and can be dropped in the actual calculations. For the gluon condensate we use $`\langle \frac{\alpha _S}{\pi }𝒢^2\rangle =(0.36\mathrm{GeV})^4`$ 9 , and the (chiral) quark condensate is given by $`\langle m_u\overline{u}u+m_d\overline{d}d\rangle \approx m_q\langle \overline{u}u+\overline{d}d\rangle =-m_\pi ^2f_\pi ^2\approx -(0.11\mathrm{GeV})^4`$ through the Gell-Mann, Oakes, Renner relation.
In the nuclear medium with baryon density $`\rho `$ we have $`c_i(\rho )=c_i(\rho =0)+\delta c_i(\rho )`$ with $`c_i(0)`$ given by eqs.(4,5), and
$$\delta c_2(\rho )=\frac{\pi ^2}{3}\left[-\frac{4}{27}M_N^{(0)}+2\sigma _N+A_1M_N\right]\rho $$
(6)
to linear order in $`\rho `$. The first term in brackets is the leading density dependent correction to the gluon condensate and involves the nucleon mass in the chiral limit, $`M_N^{(0)}\approx 0.75\mathrm{GeV}`$ 6 . The second part, proportional to the nucleon sigma term $`\sigma _N\approx 45\mathrm{MeV}`$, is the first order correction to the quark condensate, and the third term introduces the first moment of the quark distribution function in the nucleon:
$$A_1=2\int dxx\left[u(x)+\overline{u}(x)+d(x)+\overline{d}(x)\right].$$
(7)
It represents twice the fraction of momentum carried by quarks in the proton. We take $`A_1\approx 1`$ as determined by deep-inelastic lepton scattering at $`Q\approx 2\mathrm{GeV}`$. Note that $`\delta c_2(\rho _0)\approx 4\times 10^{-3}\mathrm{GeV}^4`$ at $`\rho =\rho _0=0.17\mathrm{fm}^{-3}`$, the density of nuclear matter, and almost all of this correction comes from the term proportional to $`A_1`$.
Next we introduce the Borel transform of eq. (3):
$$12\pi ^2\mathrm{\Pi }(0)+\int _0^{\infty }dsR(s)e^{-s/ℳ^2}=c_0ℳ^2+c_1+\frac{c_2}{ℳ^2}+\frac{c_3}{2ℳ^4}+\mathrm{}$$
(8)
with $`R(s)=\frac{12\pi }{s}\mathrm{Im}\mathrm{\Pi }(s)`$ and $`\mathrm{\Pi }(0)=-\rho /4M_N`$, the vector meson analogue of the Thomson term in photon scattering.
We separate the spectrum $`R(s)`$ into a resonance part at $`s<s_0`$ and a continuum $`R_c(s)`$ which must approach the perturbative QCD limit for $`s>s_0`$:
$$R_c(s)=\frac{1}{6}\left(1+\frac{\alpha _S}{\pi }\right)\mathrm{\Theta }(s-s_0).$$
(9)
The factor $`\frac{1}{6}`$ is again specific for the isoscalar channel. The Borel mass parameter $`ℳ`$ must be sufficiently large so that eq.(8) converges rapidly, but it is otherwise arbitrary. We choose $`ℳ>\sqrt{s_0}`$ so that $`e^{-s/ℳ^2}`$ can be expanded in powers of $`s/ℳ^2`$ for $`s<s_0`$. The remaining integral $`\int _{s_0}^{\infty }dsR_c(s)e^{-s/ℳ^2}`$ is evaluated inserting the running coupling strength $`\alpha _S(s_0)`$ in eq.(9). Then the term-by-term comparison in eq.(8) gives the following set of sum rules for the moments of the spectrum $`R(s)`$ (see also refs. 7 ; 8 )
$`{\displaystyle \int _0^{s_0}}dsR(s)`$ $`=`$ $`s_0c_0+c_1-12\pi ^2\mathrm{\Pi }(0),`$ (10)
$`{\displaystyle \int _0^{s_0}}dssR(s)`$ $`=`$ $`{\displaystyle \frac{s_0^2}{2}}c_0-c_2,`$ (11)
$`{\displaystyle \int _0^{s_0}}dss^2R(s)`$ $`=`$ $`{\displaystyle \frac{s_0^3}{3}}c_0+c_3.`$ (12)
Note that the first two sum rules are well determined and represent useful constraints for the spectrum $`R(s)`$. Only the third sum rule (12) involves four-quark condensates which are uncertain. In this short paper we concentrate on eqs. (10,11). A detailed analysis of eq.(12) will be presented in a forthcoming longer paper. It is instructive to illustrate the sum rules (10,11) for the $`\omega `$ meson in vacuum using Vector Meson Dominance (VMD) for the resonant part of $`R(s)`$. In this model we have
$$R(s)=12\pi ^2\frac{m_\omega ^2}{g_\omega ^2}\delta (s-m_\omega ^2)+\frac{1}{6}\left(1+\frac{\alpha _S}{\pi }\right)\mathrm{\Theta }(s-s_0).$$
(13)
with $`g_\omega =3g\approx 16.8`$ (using the vector coupling constant $`g=5.6`$). We can neglect the small quark mass term $`c_1`$ and find from eq. (10):
$$\frac{8\pi ^2}{g^2}\frac{m_\omega ^2}{s_0}=1+\frac{\alpha _S}{\pi },$$
(14)
which fixes $`\sqrt{s_0}=1.16\mathrm{GeV}`$ using $`\alpha _S(s_0)\approx 0.4`$ and $`m_\omega =0.78\mathrm{GeV}`$. It is interesting to identify the spectral gap $`\mathrm{\Delta }=\sqrt{s_0}`$ with the scale for spontaneous chiral symmetry breaking, $`\mathrm{\Delta }=4\pi f_\pi `$, where $`f_\pi =92.4`$ MeV is the pion decay constant. In the VMD model, taking the zero width limit, eq.(14) holds for both the $`\omega `$ and the $`\rho `$ meson, with equal mass $`m_V=m_\rho =m_\omega `$. Inserting $`s_0=16\pi ^2f_\pi ^2`$ in eq.(14) one recovers the famous KSFR relation $`m_V=\sqrt{2}gf_\pi `$ up to a small QCD correction.
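These numbers are easily reproduced; the following short numerical check (our own, with the input values quoted above) evaluates eq. (14) and the identification $`\mathrm{\Delta }=4\pi f_\pi `$:

```python
# Zero-width VMD check of the first sum rule, eq. (14):
#   s0 = 8 pi^2 m_omega^2 / (g^2 (1 + alpha_s/pi)),  with g_omega = 3g.
import math

g, m_omega, alpha_s, f_pi = 5.6, 0.78, 0.4, 0.0924      # GeV units
s0 = 8.0 * math.pi**2 * m_omega**2 / (g**2 * (1.0 + alpha_s / math.pi))
print(math.sqrt(s0))              # spectral gap Delta = sqrt(s0), about 1.16-1.17 GeV
print(4.0 * math.pi * f_pi)       # chiral scale 4*pi*f_pi, about 1.16 GeV
print(math.sqrt(2.0) * g * f_pi)  # KSFR estimate sqrt(2)*g*f_pi, about 0.73 GeV
```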
The sum rule (11) for the first moment gives
$$\frac{8\pi ^2}{g^2}m_\omega ^4=\frac{s_0^2}{2}\left(1+\frac{\alpha _S}{\pi }\right)-\frac{\pi ^2}{3}\left[\langle \frac{\alpha _S}{\pi }𝒢^2\rangle +12\langle m_u\overline{u}u+m_d\overline{d}d\rangle \right].$$
(15)
Inserting the values for the gluon and quark condensates we find indeed perfect consistency. Given a model for the $`\omega `$ meson spectral function in the vacuum and in the nuclear medium, the sum rules (10,11) therefore provide useful constraints to test the calculated spectra.
We now proceed from VMD to a more realistic approach. In refs. 4 ; 4b we have used an effective Lagrangian based on chiral $`SU(3)\times SU(3)`$ symmetry, with inclusion of vector mesons as well as anomalous couplings from the Wess-Zumino action, in order to calculate the $`\omega `$ meson spectrum both in the vacuum and in nuclear matter. The resulting vacuum spectrum reproduces the observed $`e^+e^{-}\to \mathrm{hadrons}`$ $`(I=0)`$ data very well 4 (see Fig. 1a). The predicted in-medium mass spectrum (for $`\omega `$ excitations with $`\stackrel{}{q}=0`$) shows a pronounced downward shift of the $`\omega `$-meson peak and a substantial, but not overwhelming, increase of its width from reactions such as $`\omega N\to \pi N,\pi \pi N`$ etc. (see Fig. 1b). At large $`s>s_0`$, both spectra should approach the QCD limit (9). The consistency test of these calculated spectral distributions with the sum rules (10) and (11) goes as follows:
* the vacuum case:
the two sides of eq. (10),
$$\int _0^{s_0}dsR(s)=\frac{s_0}{6}\left(1+\frac{\alpha _S(s_0)}{\pi }\right),$$
(16)
now match at $`\sqrt{s_0}=1.25\mathrm{GeV}`$, with $`\int _0^{s_0}dsR(s)=0.29\mathrm{GeV}^2`$. The sum rule for the first moment gives $`\int _0^{s_0}dssR(s)=0.19\mathrm{GeV}^4`$, to be compared with $`\frac{1}{2}s_0^2c_0-c_2=0.22\mathrm{GeV}^4`$, so there is consistency at the 10% level.
* the in-medium case:
now we have to match the moments of the density dependent spectral distributions,
$$\int _0^{s_0}dsR(s,\rho )=\frac{s_0}{6}\left(1+\frac{\alpha _S(s_0)}{\pi }\right)+\frac{3\pi ^2}{M_N}\rho ,$$
(17)
together with
$$\int _0^{s_0}dssR(s,\rho )=\frac{s_0^2}{12}\left(1+\frac{\alpha _S(s_0)}{\pi }\right)-c_2(0)-\delta c_2(\rho ).$$
(18)
Using our calculated spectrum 4b shown in Fig. 1b, we find $`\sqrt{s_0}=1.08\mathrm{GeV}`$ at $`\rho =\rho _0=0.17\mathrm{fm}^{-3}`$, with $`\int _0^{s_0}dsR(s,\rho _0)=0.26\mathrm{GeV}^2`$. Then $`\int _0^{s_0}dssR(s,\rho _0)=0.11\mathrm{GeV}^4`$ is to be compared with the right hand side of eq. (18), which gives $`0.12\mathrm{GeV}^4`$, so there is again excellent consistency.
Note again that these tests do not involve uncertain four-quark condensates. Furthermore, if the in-medium spectrum shows a reasonably narrow quasi-particle excitation, the quantity $`\overline{m}^2=\int _0^{s_0}dssR(s)/\int _0^{s_0}dsR(s)`$ can indeed be interpreted as the square of an in-medium “mass” of this excitation. For our $`\omega `$ meson case we find $`\overline{m}=0.65\mathrm{GeV}`$ at $`\rho =\rho _0`$, a substantial downward mass shift as discussed in refs. 4 ; 4b . (For the broad $`\rho `$ meson spectrum, on the other hand, the interpretation of $`\overline{m}`$ as an in-medium mass is not meaningful, as demonstrated in ref. 4 ).
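Once a model spectrum $`R(s)`$ is tabulated, testing it against these constraints is straightforward. The sketch below is our own illustration of the required steps; it encodes only the OPE side of the zeroth-moment rules (16)-(17), and the conversion $`\rho _0\approx 1.3\times 10^{-3}\mathrm{GeV}^3`$ is the standard one for nuclear matter density:

```python
# Moments of a tabulated spectral function R(s) and the quasi-particle "mass" m_bar,
# compared with the OPE side of the zeroth-moment sum rule (eqs. 16-17).
# s in GeV^2, rho in GeV^3 (nuclear matter: rho_0 ~ 1.3e-3 GeV^3).
import numpy as np

def moments(s, R, s0):
    sel = s <= s0
    ds = np.diff(s[sel])
    trap = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * ds)     # trapezoid rule
    m0, m1 = trap(R[sel]), trap(s[sel] * R[sel])
    return m0, m1, np.sqrt(m1 / m0)      # zeroth moment, first moment, m_bar [GeV]

def ope_zeroth_moment(s0, alpha_s=0.4, rho=0.0, M_N=0.939):
    return s0 / 6.0 * (1.0 + alpha_s / np.pi) + 3.0 * np.pi**2 * rho / M_N

print(ope_zeroth_moment(1.25**2))                  # vacuum:  ~0.29 GeV^2, eq. (16)
print(ope_zeroth_moment(1.08**2, rho=1.31e-3))     # at rho0: ~0.26 GeV^2, eq. (17)
```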
Amusingly, the spectral gap $`\mathrm{\Delta }=\sqrt{s_0}`$ decreases by about 15 percent when replacing the vacuum by nuclear matter. This is in line with the proposition that this gap reflects the order parameter for spontaneous chiral symmetry breaking and scales like the pion decay constant $`f_\pi `$ (or, equivalently, like the square root of the chiral condensate $`\langle \overline{q}q\rangle `$).
In summary, we have shown that the combination of sum rules (10) and (11) for the lowest moments of the spectral distributions does serve as a model-independent consistency test for calculated spectral functions.
|
no-problem/9901/cond-mat9901280.html
|
ar5iv
|
text
|
# Group-Theoretical Analysis of Second Harmonic Generation at (110) and (111) Surfaces of Antiferromagnets
## I Introduction
Nonlinear optics has proven to be very useful for the investigation of ferromagnetism at surfaces due to its enhanced sensitivity to two-dimensional ferromagnetism . The magnetic effects are usually much stronger than in linear optics (rotations up to 90°, pronounced spin polarized quantum well state oscillations , magnetic contrasts close to 100$`\%`$) . Recently, Second Harmonic Generation (SHG) has been successfully applied to probe antiferromagnetism (visualization of bulk AF domains ). The potential of SHG to study surface antiferromagnetism has been announced in Ref. and extensively discussed in our previous paper .
The practical importance of studies in this field follows from the applications of antiferromagnetic (AF) oxide layers in devices such as those based on TMR (tunneling magnetoresistance), where a trilayer structure is commonly used. The central layer of TMR devices consists of an oxide sandwiched between a soft and a hard magnetic layer. For these technological applications it is necessary to develop a technique to study buried oxide interfaces. Such a technique can be SHG. One of the promising materials for the mentioned devices is NiO. However, to the best of our knowledge, the understanding of its detailed spin structure is scarce - even the spin orientation on the ferromagnetically ordered (111) surfaces is not known.
Our recent paper presented an extensive study of the nonlinear electric susceptibility tensor $`\chi _{el}^{(2\omega )}`$ (the source of SHG within the electric dipole approximation), mostly for monolayer structures. It was proven there that the spin structure of an antiferromagnetic monolayer can be detected by means of SHG. The possibility of antiferromagnetic surface domain imaging was also presented for the first time. As mentioned in that previous work, bilayer spin structures are sufficient to account for the symmetry of a surface of a cubic antiferromagnet. Here we present an extension of that work to the (110) and (111) bilayer structures, thus completing our group-theoretical analysis of low Miller-index antiferromagnetic surfaces.
## II Results
We follow exactly the group theoretical method described in Ref. . At this point it is necessary to define the notions of “phase” and “configuration”, used henceforth to classify our results. “Phase” describes the magnetic phase of the material, i.e. paramagnetic, ferromagnetic, or AF. Secondly, the word “configuration” is reserved for the description of the magnetic ordering of the surface. It describes the various possibilities of the spin ordering, which are different in the sense of topology. We describe AF configurations, denoted by lowercase letters a) to l), as well as several ferromagnetic configurations, denoted as “ferro1”, “ferro2”, etc. The number of possible configurations varies depending on the surface orientation. All the analysis concerns collinear antiferromagnets, with one easy axis.
The tables show the allowed tensor elements for each configuration. The tables also contain the information on the parity of the nonvanishing tensor elements: the odd ones are printed in boldface. In some situations an even tensor element (shown in lightface) is equal to an odd element (shown in boldface); this means that the pair of tensor elements is equal in the domain depicted in the corresponding figure, but of opposite sign in the other domain. The parity of the elements has been checked in the operations $`2_z`$, $`4_z`$, and in the operation connecting mirror-domains to each other (for the definition of the mirror-domain structure see Ref. ). The domain operation(s) on which the parity depends is (are), if applicable, also displayed in the tables. If two or more domain operations have the same effect, we display all of them together. To make the tables shorter and more easily readable some domain operations (and the corresponding parity information for the tensor elements) are not displayed, namely those that can be created by a superposition of the displayed domain operations. We also do not address the parity of tensor elements under the $`6_z`$ or $`3_z`$ operations for (111) surfaces, nor under any other operation that “splits” tensor elements, although these operations also lead to a domain structure . As has been discussed in Ref. it is possible to define a parity of the tensor elements for the $`3_z`$ and $`6_z`$ operations; however, the tensor elements then undergo more complicated changes. The situations where the parity of the tensor elements is too complicated to be displayed in the tables are indicated by a hyphen in the column “domain operation”. For the paramagnetic phase, where no domains exist, we display the hyphen as well. For some configurations, none of the operations leads to a domain structure - in those configurations we display the information “one domain”. The reader is referred to Ref. for the particularities of the parity check.
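To make the bookkeeping behind such parity checks concrete, the following minimal Python sketch (our own illustration, not part of the analysis of Ref. ) transforms the purely spatial part of a rank-3 susceptibility tensor under the $`2_z`$ operation and lists which Cartesian components keep or reverse their sign. The full treatment additionally involves the magnetic (time-reversal and spin) part of the operations, which is not captured here.

```python
import numpy as np
from itertools import product

# Rank-3 polar tensor transformation: chi'_{ijk} = R_il R_jm R_kn chi_{lmn}.
def transform_rank3(chi, R):
    return np.einsum('il,jm,kn,lmn->ijk', R, R, R, chi)

# Two-fold rotation about z (the 2_z operation used in the parity check).
R_2z = np.diag([-1.0, -1.0, 1.0])

labels = 'xyz'
even, odd = [], []
for i, j, k in product(range(3), repeat=3):
    chi = np.zeros((3, 3, 3)); chi[i, j, k] = 1.0
    sign = transform_rank3(chi, R_2z)[i, j, k]
    (even if sign > 0 else odd).append(labels[i] + labels[j] + labels[k])

print('unchanged under 2_z:   ', even)   # even number of x/y indices
print('sign-reversed under 2_z:', odd)
```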
### A (110) bilayer
The previously described AF configurations of the (001) monolayer most commonly get split into two different configurations when a bilayer structure is considered. For the (110) bilayer this is not the case - only two of twelve AF configurations get split in this way, thus one obtains 14 AF configurations of the (110) bilayer. In describing the results of our analysis we use the nomenclature of our previous article, i.e. the antiferromagnetic configurations are labeled by small letters. Only the four configurations that result from splitting of the two configurations of the monolayer structure are labeled by small letters with subscripts that carry the information about how they have been constructed from the (110) monolayer. For configurations with subscript “a” the lower layer is constructed by translation of the topmost layer by vector (0.5a, 0.5b), where a and b are interatomic distances within the (110) plane along $`x`$ and $`y`$ axes, respectively. For configurations with subscript “b” the vector of translation is (-0.5a, 0.5b). This corresponds to the way we constructed the (001) bilayers in .
The configurations of the (110) monolayer structure are depicted in Fig. 1, and the way the bilayer is constructed is depicted in Fig. 2. The tensor elements are presented in Table I. In general, we can observe five types of response. However, the possibility to distinguish AF configurations is not much improved compared to the (110) monolayer. Even the possibility to detect the magnetic phase of the surface is not evident.
As for the (001) surface , there is no difference in SHG signal between the monolayer and bilayer for the paramagnetic and ferromagnetic phases. For most AF configurations, however (confs. a), b), c), e), f<sub>a</sub>, f<sub>b</sub>), g), h), j), k), and l)) such a difference is present due to a lower symmetry of the bilayer.
### B (111) bilayer
In order to be consistent with our previous work we keep the same configuration names as in this earlier paper. That is why, for example, conf. b) is not present here. The spin configurations of the (111) bilayer are constructed from the configurations of the (111) surface of our previous work such that the spin structure in the second atomic layer is the same as in the topmost layer, but shifted accordingly to form an hcp structure. Taking into account the spin structure of the second layer causes all the AF configurations to split; thus one obtains 10 AF configurations of the (111) bilayer. The configurations are labeled by small letters (indicating their “parent” configuration) with subscript “a” if the mentioned shifting is along the positive $`x`$ axis, and “b” if the shifting is along the negative S<sub>xy</sub> axis.
The configurations of the (111) monolayer are depicted in Fig. 3 and the construction of the bilayer is depicted in Fig. 4. The corresponding tensor elements are displayed in Tab. II. The results are identical to those of our previous work , where the second layer of the (111) surface was treated as nonmagnetic. This means that the spin structure of the second layer does not play any role for SHG; however, the presence of the atoms in the second layer does.
## III Conclusion
From our results it follows that SHG can probe at most two atomic layers of the surface of cubic two-sublattice antiferromagnets, and only one layer of a paramagnetic or ferromagnetic surface. For the (111) surface, the spin structure of the second layer does not have any influence on SHG, i.e. it does not matter from the group-theoretical point of view if the investigated surface is a termination of a bulk antiferromagnet or a monolayer grown on a nonmagnetic substrate. However, these two situations can be very different from the band-theoretical point of view.
|
no-problem/9901/astro-ph9901211.html
|
ar5iv
|
text
|
# Backreaction effects of dissipation in neutrino decoupling
## I Introduction
Non-equilibrium processes in the early universe are typically associated with dynamical transitions or particle decouplings. In the case of neutrino decoupling, the standard approach is to treat the process as adiabatic (see e.g. ). The small non-equilibrium effects are thus usually neglected, which provides a reasonable approximation. However, given the increasing accuracy of cosmological observations and theoretical modeling, it is worthwhile revisiting the standard equilibrium models of processes such as neutrino decoupling, in order to see whether non-equilibrium corrections can lead to observable consequences. Recently, non-equilibrium corrections in neutrino decoupling have been calculated in a number of papers, using complicated kinetic theory and numerical computations (see for a short review). The corrections are very small, as expected. For example, in it was found that non-equilibrium effects lead to a small change in the decoupling temperature for neutrinos. Spectral distortions have also been analyzed , showing the remarkable fact that they amount to as much as 1% or more for the higher-energy side of the spectrum. Although these corrections in the spectrum, energy density and temperature of the neutrino component have hardly any effect on primordial helium synthesis, yielding a change in the mass fraction of $`10^{-4}`$, they can lead to other effects that may be observable. Thus it has been shown that the non-equilibrium increase in neutrino temperature, which injects extra energy into the photon spectrum, leads to a shift of the equilibrium epoch between matter and radiation which, in turn, modifies the angular spectrum of fluctuations of the cosmic microwave background radiation .
Despite the accuracy of these models in obtaining corrections to the decoupling temperature and distribution function due to non-equilibrium effects, they still make use of the standard Friedmann equations for a perfect (i.e. non-dissipative) fluid. This leads to the physically inconsistent situation in which, say, the energy density and expansion evolve in time like a radiative fluid in equilibrium. One expects that small distortions in the particle equilibrium distribution function should be reflected in the macroscopic (i.e. fluid) description, as given by the stress-energy tensor, by adding a bulk viscous pressure to the equilibrium one. Here we consider an alternative thermo-hydrodynamic model of dissipative effects in neutrino decoupling, simple enough to produce analytic solutions for the backreaction effects on the universal scale factor, and estimates for the entropy production due to dissipation. As explained above these effects are not the focus of recent papers, which use sophisticated kinetic theory models focusing on the neutrino temperature. Our simplified approach cannot compete with these models for accuracy and completeness, but it has the advantage of simplicity, allowing for a qualitative understanding of effects not previously investigated in detail. A similar approach has previously been developed in for the reheating era that follows inflation.
The thermo-hydrodynamic model is based on an approximation to kinetic theory which respects relativistic causality. This approximation is the Grad moment method, leading to the causal thermodynamics of Israel and Stewart in the hydrodynamic regime (see also for an alternative but equivalent approach). This causal theory is a generalization of the more commonly used relativistic Navier-Stokes-Fourier theory. The latter, due to Eckart , may be derived via the Chapman-Enskog approximation in kinetic theory. The resulting theory is quasi-stationary and noncausal, and suffers from the pathologies of infinite wavefront speeds and instability of all equilibrium states . The main new ingredient in the causal transport equations is a transient term which contains the relaxation time. Our simple model is based on a one-component fluid. In , relaxation time processes are incorporated in a two-fluid model. In this setting, electrons and positrons on the one side and neutrinos and antineutrinos on the other side, are found to be in two different equilibrium states with slightly different temperatures. The system evolves towards a state of thermal equilibrium in a characteristic relaxation time.
Dissipative effects in the decoupling of a given species of particles arise from the growing mean free path of the decoupling particles in their weakening interaction with the cosmic fluid. Eventually the mean collision time exceeds the gravitational expansion time, and decoupling is complete. A hydrodynamic model may be used to cover the early stages of the decoupling process, but it will eventually break down when the mean collision time becomes large enough .
In the conditions prevailing at the time of neutrino decoupling, it is reasonable to neglect sub-horizon metric fluctuations and treat the spacetime as a Friedmann model. (The incorporation of perturbations in our model would use the covariant formalism for dissipative fluids developed in .) The dynamical effects of spatial curvature and any surviving vacuum energy will be negligible, so that we can reasonably assume a spatially flat geometry. Furthermore, we assume that the average 4-velocities of the neutrinos (regarded as massless) and of the photon-electron-positron gas are the same. With all these assumptions, only scalar dissipation is possible. Dissipation during neutrino decoupling arises because the falling temperature lowers the interaction rate with leptons as the lepton mass can no longer be ignored relative to the thermal energy. Thus dissipation is directly reflected in a deviation of the equation of state from the thermalized radiation form $`p=\frac{1}{3}\rho `$. Within a hydrodynamic one-fluid model, such dissipation is described via bulk viscosity, which vanishes in the $`p=\frac{1}{3}\rho `$ limit, but is nonzero otherwise. We will use the full (i.e. non-truncated) version of the causal transport equation for bulk stress.
## II Causal transport equation for bulk stress
The particle number 4-current and the energy-momentum tensor are
$`N^a=nu^a,T^{ab}=\rho u^au^b+(p+\mathrm{\Pi })h^{ab},`$
where $`\rho `$ is the energy density, $`p`$ is the equilibrium (hydrostatic) pressure, $`n`$ is the particle number density, $`\mathrm{\Pi }`$ is the bulk viscous pressure, and $`h^{ab}=g^{ab}+u^au^b`$ is the projector into the comoving instantaneous rest space. Particle and energy-momentum conservation
$`\nabla _aN^a=0,\nabla _bT^{ab}=0,`$
lead to the equations
$`\dot{n}+3Hn=0,`$ (1)
$`\dot{\rho }+3H(\rho +p+\mathrm{\Pi })=0,`$ (2)
where $`H`$ is the Hubble expansion rate. The specific entropy $`s`$ and the temperature $`T`$ are related via the Gibbs equation
$$nTds=d\rho -\frac{\rho +p}{n}dn.$$
(3)
Then it follows that
$$nT\dot{s}=-3H\mathrm{\Pi },$$
(4)
where $`\mathrm{\Pi }`$ is always non-positive. The Grad moment approximation in kinetic theory (or phenomenological arguments) leads to the full causal transport equation for $`\mathrm{\Pi }`$:
$$\tau \dot{\mathrm{\Pi }}+\mathrm{\Pi }=-3\zeta H-\frac{1}{2}\tau \mathrm{\Pi }\left[3H+\frac{\dot{\tau }}{\tau }-\frac{\dot{\zeta }}{\zeta }-\frac{\dot{T}}{T}\right],$$
(5)
where $`\tau `$ is the relaxation time scale, which allows for causal propagation of viscous signals, and $`\zeta \ge 0`$ is the bulk viscous coefficient as given below. Quasi-stationary, noncausal theories have $`\tau =0`$, which reduces the evolution equation (5) to an algebraic equation $`\mathrm{\Pi }=-3\zeta H`$. This leads to instantaneous propagation of viscous signals. Note also that the causal relaxational effects lead to a small increase in the sound speed over its adiabatic value:
$$c_\mathrm{s}^2\rightarrow c_\mathrm{s}^2+c_\mathrm{b}^2\text{ where }c_\mathrm{b}^2=\frac{\zeta }{(\rho +p)\tau }.$$
(6)
This result, which is not well known, is derived in the appendix.
The approximation used in deriving the transport equation (also in the quasi-stationary case) requires that $`|\mathrm{\Pi }|\ll \rho `$, which is reasonable for most dissipative processes (see for a nonlinear generalization of the causal transport equation).
Equation (5) as it stands is known as the full or non-truncated transport equation for bulk viscous pressure . When the term containing the square bracket on the right is neglected, we get the truncated equation which is usually used. Under many conditions, truncation leads to a reasonable approximation. We will use the full equation.
Taking $`n`$ and $`\rho `$ as independent variables, the Gibbs equation (3) leads to the integrability condition
$$n\left(\frac{\partial T}{\partial n}\right)_\rho +(\rho +p)\left(\frac{\partial T}{\partial \rho }\right)_n=T\left(\frac{\partial p}{\partial \rho }\right)_n,$$
(7)
and together with the energy conservation equation (2) this gives the temperature evolution equation
$$\frac{\dot{T}}{T}=-3H\left(\frac{\partial p}{\partial \rho }\right)_n-\frac{1}{T}\left(\frac{\partial T}{\partial \rho }\right)_n3H\mathrm{\Pi }.$$
(8)
The first term on the right accounts for adiabatic cooling due to expansion, whereas in the second term, viscosity contributes to heating of the fluid (note that $`\mathrm{\Pi }`$ is always non-positive).
Using equations (1) and (2), the Gibbs equation takes the form
$$n^2Tds=\left[\frac{3Hn\mathrm{\Pi }}{3H(\rho +p)+3H\mathrm{\Pi }}\right]d\rho +(\rho +p)\left(\frac{\partial n}{\partial p}\right)_\rho \left[\frac{\dot{p}}{\dot{\rho }}d\rho -dp\right].$$
(9)
As expected we learn from the last equation that when the fluid is perfect ($`\mathrm{\Pi }=0`$), the specific entropy is conserved along the flow lines ($`\dot{s}=0`$). Furthermore, if a barotropic equation of state for $`n`$ holds, i.e. $`n=n(\rho )`$, then $`ds=0`$ so that $`s`$ is a universal constant, the same on all flow-lines, and the fluid is called isentropic.<sup>*</sup><sup>*</sup>* The same reasoning applies when the temperature is barotropic. Yet, as Eq. (9) shows, this is no longer true in the presence of dissipation, i.e. a barotropic particle number density no longer forces $`ds`$ to vanish.
For simplicity, we assume the linear barotropic equation of state
$$p=(\gamma -1)\rho ,$$
(10)
where $`\gamma `$ is constant and we are interested in the case $`\gamma \lesssim \frac{4}{3}`$. The adiabatic speed of sound $`c_\mathrm{s}`$ is given by
$`c_\mathrm{s}^2=\left({\displaystyle \frac{\partial p}{\partial \rho }}\right)_s,`$
which for a perfect fluid (either barotropic or not) becomes
$`c_\mathrm{s}^2={\displaystyle \frac{\dot{p}}{\dot{\rho }}}.`$
When Eq. (10) holds then $`c_\mathrm{s}=\sqrt{\gamma -1}`$. Using Eq. (10) and the integrability condition (7), we find
$$T=\rho ^{(\gamma -1)/\gamma }F\left(\frac{\rho ^{1/\gamma }}{n}\right),$$
(11)
where $`F`$ is an arbitrary function which satisfies $`\dot{F}=0`$. If $`T`$ is barotropic, then $`F`$ is constant and we have a power-law form with fixed exponent for the temperature
$$T\propto \rho ^{(\gamma -1)/\gamma }.$$
(12)
In the non-dissipative case, these barotropic equations for $`p`$ and $`T`$ are compatible with the ideal gas law
$$p=nT,$$
(13)
but in the presence of dissipation this is no longer true. In effect, equations (10), (12) and (13) imply $`n\propto \rho ^{1/\gamma }`$, i.e.
$`{\displaystyle \frac{\dot{n}}{n}}={\displaystyle \frac{1}{\gamma }}{\displaystyle \frac{\dot{\rho }}{\rho }},`$
which implies, by using Eq. (2), that $`\mathrm{\Pi }=0`$. We shall drop in the sequel a barotropic equation of state for the temperature in favour of the more physically appealing equation of state (13) together with the $`\gamma `$-law in (10).
## III Dissipation in neutrino decoupling
A hydrodynamic approach in the expanding universe requires a particle collision time $`t_\mathrm{c}`$ short enough to adjust to the falling temperature. As the natural time-scale for the expanding universe is $`H^1`$, we have
$`t_\mathrm{c}<H^{-1}.`$
If $`t_\mathrm{c}\ll H^{-1}`$, then an equilibrium state can in principle be attained. Dissipative phenomena could play a prominent role for $`t_\mathrm{c}\lesssim H^{-1}`$.
We learn from kinetic theory that $`t_\mathrm{c}`$ is determined by
$$t_\mathrm{c}=\frac{1}{n\sigma v},$$
(14)
where $`n`$ is the number density of the target particles with which the given species is interacting, $`\sigma `$ the cross-section and $`v`$ the mean relative speed of interacting particles. For the decoupling of massless neutrinos in the early universe, $`v=1`$, the target number density is that of electrons, and
$`\sigma \sim G__FT^2,`$
where $`G__F`$ is the Fermi coupling constant. At the neutrino decoupling temperature $`T_\mathrm{d}`$, we have $`m_\mathrm{e}/T_\mathrm{d}\approx \frac{1}{2}`$, so that the rest mass energy $`m_\mathrm{e}`$ of electrons starts to become important. Since the electron number density in the radiation dominated era evolves as $`n_\mathrm{e}\propto a^{-3}`$, where $`a`$ is the scale factor, we have from Eq. (14) that
$$t_\mathrm{c}\propto \frac{a^3}{T^2}.$$
(15)
Dissipation due to massless particles with long mean free path in a hydrodynamic fluid is described by the radiative transfer model. The bulk viscous coefficient takes the form
$$\zeta =4rT^4\mathrm{\Gamma }^2t_\mathrm{c},$$
(16)
where $`r`$ is $`\frac{7}{8}`$ times the radiation constant and $`\mathrm{\Gamma }`$ measures the deviation of $`p/\rho `$ from its pure-radiation value:
$$\mathrm{\Gamma }=\frac{1}{3}-\left(\frac{\partial p}{\partial \rho }\right)_n,$$
(17)
where $`p`$ and $`\rho `$ refer to the pressure and energy density of the radiation/matter mixture as a whole. Since we assume the linear equation of state (10), it follows that $`\mathrm{\Gamma }`$ is a perturbative constant parameter in our simple model:
$`\mathrm{\Gamma }=\frac{4}{3}-\gamma \ll 1.`$
The assumption that $`\mathrm{\Gamma }`$ is constant relies on the assumption that decoupling takes place rapidly. Since standard adiabatic treatments of decoupling assume instantaneous decoupling, this assumption should be a reasonable first approximation.
We may neglect the $`3\zeta H`$ term on the right of the transport equation (5), since it is $`O(\mathrm{\Gamma }^2)`$. Note that our simple model would thus break down in the quasi-stationary Eckart theory, since it would immediately lead to $`\mathrm{\Pi }=O(\mathrm{\Gamma }^2)`$. The relaxation timescale $`\tau `$ in causal radiative transfer is given by $`\tau =t_\mathrm{c}`$. The term $`\dot{\zeta }/\zeta `$ on the right of Eq. (5) becomes
$`{\displaystyle \frac{\dot{\zeta }}{\zeta }}=H+O(\mathrm{\Gamma }),`$
on using equations (8) and (15). The full transport equation (5) becomes, to lowest order
$$\tau \dot{\mathrm{\Pi }}+\mathrm{\Pi }=-4\tau H\mathrm{\Pi }.$$
(18)
(We can think of the right hand side as an effective source term relative to the truncated transport equation.) We can rewrite this in the standard truncated form as
$$\tau _{*}\dot{\mathrm{\Pi }}+\mathrm{\Pi }=0,$$
(19)
where the effective relaxation time acquires an expansion correction:
$$\tau _{*}=\frac{\tau }{1+4\tau H}.$$
(20)
The amount of reduction depends on the size of $`\tau =t_\mathrm{c}`$ relative to $`H^{-1}`$. The hydrodynamical description requires $`\tau H<1`$. If $`\tau H\ll 1`$, then $`\tau _{*}\approx \tau `$. But if $`\tau H`$ is close to 1, the reduction could be significant.
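A quick numerical illustration of Eq. (20) (the values of $`\tau H`$ below are chosen arbitrarily):

```python
# Effective relaxation time of Eq. (20): tau_* = tau / (1 + 4 tau H).
# Only the dimensionless product tau*H matters; values are illustrative.
for tauH in (0.01, 0.1, 0.5, 0.9):
    reduction = 1.0 / (1.0 + 4.0 * tauH)   # tau_*/tau
    print(f"tau H = {tauH:4.2f}  ->  tau_*/tau = {reduction:.3f}")
```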
The Friedmann equation
$$\rho =3H^2,$$
(21)
together with Eq. (2) leads to
$$\mathrm{\Pi }=-2\dot{H}-(4-3\mathrm{\Gamma })H^2.$$
(22)
On using equation (22) we get from (18) the evolution equation for $`H`$
$$\ddot{H}+H\dot{H}(8-3\mathrm{\Gamma }+N)+H^3(2-\frac{3}{2}\mathrm{\Gamma })(N+4)=0,$$
(23)
where
$$N=(\tau H)^{-1},$$
(24)
which is of the order of the number of interactions in an expansion time. Now, from equations (10), (13), (15) and (24) we have
$$N=\left(\frac{Ha}{H_\mathrm{d}a_\mathrm{d}}\right)^3,$$
(25)
where the expression $`n\propto a^{-3}`$ has been used and $`a_\mathrm{d}`$ and $`H_\mathrm{d}=H(a_\mathrm{d})`$ are the values at which $`N=1`$, so that $`a_\mathrm{d}`$ is determined by the equation
$$t_\mathrm{c}(a_\mathrm{d})H(a_\mathrm{d})=1.$$
(26)
Changing the independent variable to the scale factor $`a`$, developing equation (23) and collecting the previous results, yields
$`a^2HH^{\prime \prime }+a^2H^{\prime 2}+aHH^{\prime }\left[9-3\mathrm{\Gamma }+\left({\displaystyle \frac{Ha}{H_\mathrm{d}a_\mathrm{d}}}\right)^3\right]`$ (27)
$`+\left(2-\frac{3}{2}\mathrm{\Gamma }\right)H^2\left[4+\left({\displaystyle \frac{Ha}{H_\mathrm{d}a_\mathrm{d}}}\right)^3\right]=0,`$ (28)
where a prime denotes $`d/da`$. We expand $`H`$ as
$$H=\overline{H}+\delta H\text{ where }\delta H=\mathrm{\Gamma }h+O(\mathrm{\Gamma }^2).$$
(29)
The equilibrium Hubble rate $`\overline{H}`$ corresponds to the thermalized radiation state $`p=\frac{1}{3}\rho `$, so that $`\mathrm{\Gamma }=0`$, and Eq. (28) becomes
$`a^2\overline{H}\overline{H}^{\prime \prime }+a^2\overline{H}^{\prime 2}+9a\overline{H}\overline{H}^{\prime }+8\overline{H}^2+\left(a\overline{H}\overline{H}^{\prime }+2\overline{H}^2\right)\left({\displaystyle \frac{\overline{H}a}{\overline{H}_\mathrm{d}a_\mathrm{d}}}\right)^3=0.`$
The unique power-law solution is the well-known perfect radiative solution
$$\overline{H}=H_0\left(\frac{a_0}{a}\right)^2=\frac{1}{2t},$$
(30)
where $`a_0`$ marks the start of the dissipative decoupling process, so that $`H=\overline{H}`$ for $`a<a_0`$.
Substituting Eq. (29) into (28) and using the fact that
$`{\displaystyle \frac{H_0a_0}{H_\mathrm{d}a_\mathrm{d}}}={\displaystyle \frac{a_\mathrm{d}}{a_0}}+O(\mathrm{\Gamma }),`$
we find that to $`O(\mathrm{\Gamma })`$:
$$a^2h^{\prime \prime }+a\left[5+\left(\frac{a_\mathrm{d}}{a}\right)^3\right]h^{\prime }+\left[4+2\left(\frac{a_\mathrm{d}}{a}\right)^3\right]h=\frac{3}{2}H_0\left(\frac{a_0}{a_\mathrm{d}}\right)^2\left(\frac{a_\mathrm{d}}{a}\right)^5.$$
(31)
Defining $`\alpha =a/a_\mathrm{d}`$, we can rewrite this as
$$\frac{d^2h}{d\alpha ^2}+\left[\frac{5}{\alpha }+\frac{1}{\alpha ^4}\right]\frac{dh}{d\alpha }+\left[\frac{4}{\alpha ^2}+\frac{2}{\alpha ^5}\right]h=\left(\frac{3}{2}H_0\alpha _0^2\right)\frac{1}{\alpha ^7}.$$
(32)
Now we use the following general result : if $`\phi `$ is a solution of
$`y^{\prime \prime }+f(x)y^{\prime }+g(x)y=k(x)`$
when $`k=0`$, then the general solution is
$`y=C_1\phi +C_2\phi {\displaystyle \int \frac{dx}{\phi ^2E}}+\phi {\displaystyle \int \frac{1}{\phi ^2E}\left(\int \phi Ek\,dx\right)dx},`$
where $`E=\mathrm{exp}\int f\,dx`$. By inspection, a solution of the homogeneous equation (32) is $`1/\alpha ^2`$. It follows that the general solution is
$$h(a)=H_0\left(\frac{a_0}{a}\right)^2\left\{c_1+c_2\mathrm{Ei}\left[\frac{1}{3}\left(\frac{a_\mathrm{d}}{a}\right)^3\right]+\frac{3}{2}\mathrm{ln}\left(\frac{a}{a_\mathrm{d}}\right)\right\},$$
(33)
where $`c_1`$ and $`c_2`$ are arbitrary integration constants and Ei is the exponential-integral function
$`\mathrm{Ei}(x)\equiv {\displaystyle \int _{-\mathrm{\infty }}^x}{\displaystyle \frac{e^v}{v}}dv=𝒞+\mathrm{ln}x+{\displaystyle \underset{k=1}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{x^k}{k!k}},`$
with $`𝒞`$ denoting Euler’s constant.
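As a consistency check of this solution, the following Python sketch (illustrative only; the values of $`H_0`$, $`a_0/a_\mathrm{d}`$ and $`c_2`$ below are arbitrary choices, not parameters fixed by the model) integrates Eq. (32) numerically and compares the result with the closed form (33):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import expi

# Illustrative parameters only: H0 = 1, a_0/a_d = 0.5, c2 = -0.1.
H0, alpha0, c2 = 1.0, 0.5, -0.1           # alpha = a/a_d, so alpha0 = a_0/a_d

# Eq. (35): c1 fixed by h(alpha0) = 0.
c1 = -c2 * expi((1.0 / 3.0) / alpha0**3) - 1.5 * np.log(alpha0)

def h_exact(alpha):                        # closed form (33)
    return H0 * (alpha0 / alpha)**2 * (c1 + c2 * expi((1.0 / 3.0) / alpha**3)
                                       + 1.5 * np.log(alpha))

def rhs(alpha, y):                         # Eq. (32) as a first-order system
    h, hp = y
    hpp = (1.5 * H0 * alpha0**2 / alpha**7
           - (5.0 / alpha + 1.0 / alpha**4) * hp
           - (4.0 / alpha**2 + 2.0 / alpha**5) * h)
    return [hp, hpp]

eps = 1.0e-6                               # start just above alpha0 with exact data
a_start = alpha0 + eps
hp0 = (h_exact(a_start + eps) - h_exact(a_start - eps)) / (2.0 * eps)
sol = solve_ivp(rhs, (a_start, 1.0), [h_exact(a_start), hp0],
                dense_output=True, rtol=1e-10, atol=1e-12)

for alpha in (0.6, 0.8, 1.0):
    print(alpha, sol.sol(alpha)[0], h_exact(alpha))   # numerical vs analytic
```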
By equations (22) and (33), the bulk stress to first order is
$$\mathrm{\Pi }=(3\overline{H}^2-4\overline{H}h-2h^{\prime }Ha)\mathrm{\Gamma },$$
(34)
This expression holds for $`a>a_0`$, where $`a_0`$ marks the onset of dissipative evolution. Thereafter, the bulk stress decays according to the causal law (19). In order to relate the constants $`c_1`$ and $`c_2`$, we require, according to standard matching conditions, that $`H`$ is continuous. Thus $`h(a_0)=0`$, which fixes $`c_1`$:
$$c_1=-c_2\mathrm{Ei}\left[\frac{1}{3}\left(\frac{a_\mathrm{d}}{a_0}\right)^3\right]-\frac{3}{2}\mathrm{ln}\left(\frac{a_0}{a_\mathrm{d}}\right).$$
(35)
Thus, using Eq. (33), we see that the backreaction of the dissipative decoupling process on the expansion of the universe is given by
$`\delta H`$ $`=`$ $`\overline{H}\left\{c_2\left(\mathrm{Ei}\left[{\displaystyle \frac{1}{3}}\left({\displaystyle \frac{a_\mathrm{d}}{a}}\right)^3\right]-\mathrm{Ei}\left[{\displaystyle \frac{1}{3}}\left({\displaystyle \frac{a_\mathrm{d}}{a_0}}\right)^3\right]\right)+{\displaystyle \frac{3}{2}}\mathrm{ln}\left({\displaystyle \frac{a}{a_0}}\right)\right\}\mathrm{\Gamma }`$ (37)
$`+O(\mathrm{\Gamma }^2).`$
Substituting Eq. (35) into Eq. (34), we find that the bulk stress becomes
$$\mathrm{\Pi }=\overline{\rho }\left\{2c_2\mathrm{exp}\left[\frac{1}{3}\left(\frac{a_\mathrm{d}}{a}\right)^3\right]\right\}\mathrm{\Gamma }+O(\mathrm{\Gamma }^2),$$
(38)
where $`\overline{\rho }=3\overline{H}^2`$ is the equilibrium energy density. Since $`\mathrm{\Pi }<0`$, we require $`c_2<0`$. Below we find a prescription for $`c_2`$ in terms of physical parameters.
## IV Conclusion
In order to complete the model, we need to determine the remaining arbitrary constant $`c_2`$ in terms of physical parameters. A rough estimate, which is consistent with the simplicity of the model, arises as follows. We estimate the duration of the dissipative process as
$$\mathrm{\Delta }a\approx a_\mathrm{d}-a_0,$$
(39)
i.e. we assume that the process ends at $`a_\mathrm{d}`$. Then by Eqs. (8) and (13), the fractional viscous rise in temperature due to decoupling is approximately
$$\frac{\mathrm{\Delta }T}{T}\approx -\frac{\mathrm{\Pi }(a_0)}{\overline{\rho }(a_0)}\frac{\mathrm{\Delta }a}{a_0}.$$
(40)
We can consider the fractional temperature increase as an input from previous kinetic-theory investigations (as described in the introduction), which typically predict it to be $`O(10^{-3})`$. Note that this small temperature increase is due to dissipative heating, and is not to be confused with the larger temperature increase arising from electron-positron annihilation, which occurs after neutrino decoupling. Our model does not consider the annihilation process. Then equations (38)–(40) and (34) allow us to estimate the constant $`c_2`$ in terms of the physical parameters $`a_\mathrm{d}/a_0`$, $`\mathrm{\Delta }T/T`$ and $`\mathrm{\Gamma }`$ as:
$$c_2\mathrm{\Gamma }\approx -\frac{1}{2}\frac{\mathrm{\Delta }T}{T}\left\{\frac{\mathrm{exp}\left[-\frac{1}{3}\left(\frac{a_d}{a_0}\right)^3\right]}{\left(\frac{a_d}{a_0}\right)-1}\right\}.$$
(41)
Finally, we can also estimate the entropy production due to decoupling. By Eqs. (4) and (40), the viscous increase in entropy per particle is approximately
$$\mathrm{\Delta }s\approx 3\frac{\mathrm{\Delta }T}{T}.$$
(42)
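For orientation, the short Python sketch below evaluates the estimates (38), (41) and (42) for illustrative inputs: the fractional temperature rise $`\mathrm{\Delta }T/T=10^{-3}`$ is the order quoted above from kinetic-theory calculations, while $`a_\mathrm{d}/a_0=2`$ is an assumed (not derived) duration of the dissipative phase.

```python
import math

# Illustrative inputs (assumptions, not values fixed by the paper):
dT_over_T = 1.0e-3     # fractional viscous temperature rise, O(10^-3) per the text
ad_over_a0 = 2.0       # assumed duration of the dissipative phase, a_d/a_0

# Eq. (41): c2*Gamma from the temperature rise.
c2_Gamma = -0.5 * dT_over_T * math.exp(-(ad_over_a0**3) / 3.0) / (ad_over_a0 - 1.0)

# Eq. (38): bulk stress relative to the equilibrium energy density.
def Pi_over_rho(a_over_ad):
    return 2.0 * c2_Gamma * math.exp((1.0 / a_over_ad)**3 / 3.0)

# Eq. (42): entropy production per particle.
delta_s = 3.0 * dT_over_T

print("c2*Gamma      =", c2_Gamma)
print("Pi/rho at a_0 =", Pi_over_rho(1.0 / ad_over_a0))
print("Pi/rho at a_d =", Pi_over_rho(1.0))
print("Delta s       =", delta_s)
```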
Our model describes the response of the cosmic fluid to a bulk stress, which is a very simple thermo-hydrodynamic approximation to more realistic kinetic theory models of neutrino decoupling, but which nevertheless accommodates the dissipative effects and respects relativistic causality. The simplicity of our model allows us to derive analytic forms for the dynamical quantities and the backreaction effects, but it does not incorporate a mechanism for bringing the dissipative process to an end.
Acknowledgements:
This work was partially supported by a European Science Exchange Programme grant.
## A Characteristic velocities for bulk viscous perturbations
Following , we derive equation (6) for the dissipative contribution to the sound speed. The full analysis of the causality and stability of the Israel-Stewart theory was performed in a series of papers by Hiscock and Salmonson . They showed that both issues are closely related and obtained general expressions for the characteristic velocities for dissipative perturbations. Here we extract from their general expressions specific results for the case in which only bulk viscosity is present.
The purely bulk viscous case stems from the general expressions of by setting all the coefficients coupled to heat flux and shear viscosity to zero. This yields for the speed of propagating transverse modes
$`v__T^2={\displaystyle \frac{(\rho +p)\alpha _1^2+2\alpha _1+\beta _1}{2\beta _2[\beta _1(\rho +p)-1]}}\rightarrow 0,`$
which is what one expects for scalar sound-wave perturbations. Equation (128) of governing the speed $`v=v__L`$ of propagating longitudinal modes becomes, on dividing by $`\beta _0\beta _2`$ and setting $`\alpha _0=\alpha _1=0`$,
$`[\beta _1(\rho +p)-1]v^4`$
$`+\left[{\displaystyle \frac{2n}{T}}\left({\displaystyle \frac{\partial T}{\partial n}}\right)_s-{\displaystyle \frac{(\rho +p)}{nT^2}}\left({\displaystyle \frac{\partial T}{\partial s}}\right)_n-\beta _1\left\{(\rho +p)\left({\displaystyle \frac{\partial p}{\partial \rho }}\right)_s+{\displaystyle \frac{1}{\beta _0}}\right\}\right]v^2`$
$`+{\displaystyle \frac{1}{nT^2}}\left({\displaystyle \frac{\partial T}{\partial s}}\right)_n\left[(\rho +p)\left({\displaystyle \frac{\partial p}{\partial \rho }}\right)_s+{\displaystyle \frac{1}{\beta _0}}\right]-\left[{\displaystyle \frac{n}{T}}\left({\displaystyle \frac{\partial T}{\partial n}}\right)_s\right]^2=0.`$
Dividing by $`\beta _1`$ and taking $`\beta _1\rightarrow \mathrm{\infty }`$, we have
$$v^2=\left(\frac{\partial p}{\partial \rho }\right)_s+\frac{1}{(\rho +p)\beta _0}.$$
(A1)
The first term on the right is the adiabatic contribution $`c_\mathrm{s}^2`$ to $`v^2`$, and the second term is the dissipative contribution $`c_\mathrm{b}^2`$, which, requiring $`v^2\le 1`$, leads to
$$c_\mathrm{b}^2\equiv \frac{\zeta }{(\rho +p)\tau }\le 1-c_\mathrm{s}^2.$$
(A2)
We also learn from that causality and stability require
$$\mathrm{\Omega }_3(\lambda )\equiv (\rho +p)\left\{1-\lambda ^2\left[\left(\frac{\partial p}{\partial \rho }\right)_s+\frac{1}{(\rho +p)\beta _0}\right]\right\}\ge 0,$$
(A3)
for all $`\lambda `$ such that $`0\le \lambda \le 1`$. This condition is seen to hold on account of the inequality (A2).
The expression for $`c_\mathrm{b}`$ refines and corrects the statement in (the first paper to apply causal bulk viscosity in cosmology) that $`\zeta /\rho \tau =1`$ is required by causality.
|
no-problem/9901/hep-th9901076.html
|
ar5iv
|
text
|
# 1 Surfaces 𝑆₁ and 𝑆₂ intersecting an incoming wavepacket.
## Appendix
To analyze wavepackets with $`\widehat{e}=(1,0,0,0)`$ it is convenient to use coordinates $`(\rho ,𝐲)`$ with $`𝐲`$ a three-vector, as defined by
$$𝐱=(\mathrm{tan}\rho ,𝐲/\mathrm{cos}\rho ).$$
(A.1)
The classical trajectory of interest is simply
$$\rho =t,𝐲=0.$$
(A.2)
In these coordinates the metric is
$$ds^2=\frac{R^2}{\mathrm{cos}^2\rho }\left[-(1+y^2)dt^2+d\rho ^2+d𝐲\cdot d𝐲-\frac{(𝐲\cdot d𝐲)^2}{1+y^2}\right].$$
(A.3)
The d’Alembertian is
$$R^2\nabla ^2=-\frac{\mathrm{cos}^2\rho }{1+y^2}\partial _t^2+\mathrm{cos}^5\rho \,\partial _\rho (\mathrm{cos}^{-3}\rho \,\partial _\rho )+\mathrm{cos}^2\rho (\partial _𝐲\cdot \partial _𝐲+\partial _𝐲\cdot 𝐲\,𝐲\cdot \partial _𝐲).$$
(A.4)
We seek a solution of the form
$$\varphi =\mathrm{exp}\left[i\omega f(t,\rho )-\omega y^2g_1(\rho )-\omega (t-\rho )^2g_2(\rho )+h(\rho )\right].$$
(A.5)
We are interested in the case $`\omega 1`$ and so make an analysis of geometric optics (WKB) type. The first term in the exponent is the rapidly varying phase. The second and third terms produce the envelope in space and time; it follows that $`𝐲`$ and $`t-\rho `$ are of order $`\omega ^{-1/2}`$.
Expanding $`\nabla ^2\varphi =0`$ in powers of $`\omega `$, the term of order $`\omega ^2`$ is
$$\omega ^2[(\partial _tf)^2-(\partial _\rho f)^2]=0$$
(A.6)
with solution $`f=\rho -t`$. At order $`\omega `$,
$$\omega \mathrm{cos}^2\rho \left[-\omega y^2-2i\omega y^2g_1^{\prime }-2i\omega (t-\rho )^2g_2^{\prime }+2ih^{\prime }+3i\mathrm{tan}\rho -2g_1+4\omega y^2g_1^2\right]=0,$$
(A.7)
where the prime is a $`\rho `$ derivative. Thus,
$`g_1^{\prime }`$ $`=`$ $`{\displaystyle \frac{i}{2}}-2ig_1^2,`$
$`g_2^{\prime }`$ $`=`$ $`0,`$
$`h^{\prime }`$ $`=`$ $`-{\displaystyle \frac{3}{2}}\mathrm{tan}\rho -ig_1.`$ (A.8)
These are readily integrated. A simple particular solution, which we will use henceforth, is
$$g_1=g_2=\frac{1}{2},h=\frac{3}{2}\mathrm{ln}\mathrm{cos}\rho -\frac{i\rho }{2}.$$
(A.9)
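The following sympy sketch (a check of our own, assuming the sign conventions displayed in (A.8)) verifies that the particular solution (A.9) satisfies the first-order system:

```python
import sympy as sp

rho = sp.symbols('rho', real=True)

# Particular solution (A.9)
g1 = sp.Rational(1, 2)
g2 = sp.Rational(1, 2)
h  = sp.Rational(3, 2) * sp.log(sp.cos(rho)) - sp.I * rho / 2

# System (A.8): g1' = i/2 - 2 i g1**2,  g2' = 0,  h' = -(3/2) tan(rho) - i g1
checks = [
    sp.simplify(sp.diff(g1, rho) - (sp.I/2 - 2*sp.I*g1**2)),
    sp.simplify(sp.diff(g2, rho)),
    sp.simplify(sp.diff(h, rho) + sp.Rational(3, 2)*sp.tan(rho) + sp.I*g1),
]
print(checks)  # expected: [0, 0, 0]
```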
At the origin this solution is of the form (6) with
$$F_{\omega 𝐞}(t,𝐱)=\mathrm{exp}\left\{-\frac{\omega }{2}\left[x_{\perp }^2+(t-𝐞\cdot 𝐱)^2\right]\right\}.$$
(A.10)
Here $`𝐱_{\perp }`$ is the part of $`𝐱`$ that is orthogonal to $`𝐞`$.
Very near the boundary, $`r\gtrsim \omega `$, the WKB approximation breaks down. Here we can match onto the large-$`r`$ behavior
$$\varphi =A(t,\widehat{𝐱})\frac{e^{-i\omega t}}{r^2}H_2^{2,1}(\omega /r),$$
(A.11)
where the variation of $`A(t,\widehat{𝐱})`$ is slow compared to the remaining factors. The superscripts $`1,2`$ on the Bessel function refer to the behavior at $`t=\pm \pi /2`$. In the regime $`\omega \gg r\gg 1`$ both the large-$`r`$ and WKB expressions are valid and so we can match, with the result
$$A(t,\widehat{𝐱})=ie^{\pm i\pi \omega /2}(\pi \omega /2)^{1/2}\mathrm{exp}\left\{-\frac{\omega }{2}\left[|\widehat{𝐱}\mp 𝐞|^2+(t\mp \pi /2)^2\right]\right\}.$$
(A.12)
The $`r\gg \omega `$ behavior of the Bessel function then gives the wavepacket on the boundary,
$$G_\pm (\tau ,\theta )=e^{\pm i\pi \omega /2}(2/\omega )^{3/2}\pi ^{1/2}\mathrm{exp}\left\{-\frac{\omega }{2}[\theta ^2+\tau ^2]\right\}.$$
(A.13)
For scalars with masses of order the Kaluza–Klein scale $`R^{-1}`$, the trajectory and WKB analysis are unaffected for $`r`$ less than $`\omega `$. The effect of the mass is then simply to change the order of the Bessel function to $`\nu `$, and the result for the wavepacket is
$$G_\pm (\tau ,\theta )=e^{\pm i\pi (\omega +\nu )/2}(2/\omega )^{\nu -1/2}\mathrm{\Gamma }(\nu )\pi ^{1/2}\mathrm{exp}\left\{-\frac{\omega }{2}[\tau ^2+\theta ^2]\right\}.$$
(A.14)
## Acknowledgments
I would like to thank V. Balasubramanian, T. Banks, I. Bena, S. Giddings, D. Gross, G. Horowitz, N. Itzhaki, P. Pouliot, and M. Srednicki for discussions. This work was supported in part by NSF grants PHY94-07194 and PHY97-22022.
|
no-problem/9901/cond-mat9901117.html
|
ar5iv
|
text
|
# Reply to the Comment on “Charged impurity scattering limited low temperature resistivity of low density silicon inversion layers”
In a recent Comment on our earlier preprint proposing a theoretical explanation for the observed strong temperature dependence of the low temperature resistivity of low density “metallic” Si inversion layers Kravchenko et al. have raised a number of issues which require careful consideration. In this Reply to their Comment we discuss the issues raised in ref. with respect to our work and respond to the specific questions about our model brought up by Kravchenko et al. .
It was conceded in ref. that our theoretical results “are qualitatively similar to those observed experimentally” but questions were raised about our choice of parameters for the charged impurity density ($`N_i`$) and the free carrier density ($`n_s-n_c`$) with the implication that our choice of $`N_i`$ and $`n_s-n_c`$ may be inconsistent with the experimentally used samples in ref. . We first point out that neither quantity ($`N_i`$ or $`n_s-n_c`$) has been experimentally measured, and therefore the authors of ref. are expressing their opinions based purely on theoretical speculations rather than making statements of facts in ref. . (This distinction was not clearly made in ref. .) Their remark on our choice of the charged impurity density $`N_i`$ is based on a naive misunderstanding of our theory whereas our choice of $`n_s-n_c`$ as the free carrier density (in fact, this is the definition of the effective free carrier density in our model ) is based on more subtle arguments than indicated by the discussion of ref. .
As emphasized in our original paper the parameter $`N_i`$ only sets the overall resistivity scale in our theory in the sense that the resistivity $`\rho \propto N_i`$, and $`N_i`$ does not affect the calculated temperature dependence at all. The actual value of $`N_i`$ is set in our work by demanding overall quantitative agreement between theory and experiment, and the very low value of $`N_i`$ ($`0.3\times 10^{10}cm^{-2}`$) reflects the anomalously high peak mobility ($`71,000cm^2/Vs`$ at $`100mK`$) of the Si-15 sample, which has a peak mobility roughly a factor of five to seven higher than that in other typical good quality Si MOSFETs ($`10,000-20,000cm^2/Vs`$). An independent confirmation of the low value of $`N_i`$ in the Si-15 sample comes from the empirical formula connecting $`N_i`$ and the peak mobility ($`\mu _m`$) derived by Cham and Stern : $`\mu _m=1250(N_i/10^{12})^{-0.79}`$ where $`\mu _m`$ is expressed in $`cm^2/Vs`$ and $`N_i`$ in units of $`10^{12}cm^{-2}`$. This empirical formula leads to an $`N_i\approx 0.5\times 10^{10}cm^{-2}`$ within a factor of two of the value $`N_i\approx 0.3\times 10^{10}cm^{-2}`$ used in our work . The actual discrepancy is even less because the empirical formula refers to the $`T=0`$ mobility, which for Si-15 should be considerably higher than $`71,000cm^2/Vs`$ because of the strong temperature dependence of the mobility observed in ref. . Thus the low value of $`N_i`$ used in our work is necessitated by the anomalously high (almost an order of magnitude higher than usual) mobility of the Si-15 sample. The fact that a low value of $`N_i`$ is needed to obtain quantitative numerical agreement between our theory and the experimental Si-15 data at high densities ($`n_s\gg n_c`$), where the standard Drude-Boltzmann transport theory should be eminently valid, shows rather decisively the correctness of our choice of $`N_i`$ in ref. . We emphasize that the low value of $`N_i`$ used in ref. simply reflects the anomalously high quality of the Si-15 sample of ref. , and nothing else.
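As a simple numerical check of this statement, the quoted empirical relation can be inverted for the Si-15 peak mobility (Python sketch; the only input is the quoted $`\mu _m=71,000cm^2/Vs`$):

```python
# Invert the Cham-Stern empirical relation mu_m = 1250 (N_i / 10^12)^(-0.79)
# (mu_m in cm^2/Vs, N_i in cm^-2) to estimate N_i from the quoted peak mobility.
mu_m = 71_000.0                      # cm^2/Vs, peak mobility quoted for Si-15
N_i = 1.0e12 * (mu_m / 1250.0) ** (-1.0 / 0.79)
print(f"N_i ~ {N_i:.1e} cm^-2")      # ~ 0.6e10 cm^-2, within a factor of two of 0.3e10
```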
Having established that our choice of $`N_i`$ in ref. is quite reasonable we should now mention two aspects of our model which seem to have been overlooked in the Comment . First, characterizing the charged impurity scattering by a single two-dimensional (2D) charge density of $`N_i`$ with all the impurity centers located randomly in a plane placed precisely at the Si-SiO<sub>2</sub> interface, as we do in ref. , is surely a highly simplified zeroth order model for a complicated situation where the charged impurity centers will be distributed over some distance inside the oxide layer. We use the simple model to keep the number of free parameters to a minimum (just one, $`N_i`$). If we modify the model slightly, for example, displace the random charged impurity plane some finite distance inside the oxide (or consider a three-dimensional impurity distribution, as is likely), we could considerably increase $`N_i`$, making it sound “more reasonable”. We believe that such fine tuning is unwarranted in a zeroth order model and should be left for future improvements of the theory. Second, our estimated $`N_i`$ is surely a theoretical lower bound on the possible value of the charged impurity density because our theory clearly overestimates the impurity scattering strength as we neglect the modification in scattering due to electron binding to the impurities, and assume that the charged impurities scatter via the screened Coulomb interaction without taking into account the binding effect discussed qualitatively in ref. . Thus, the parameter $`N_i`$ in our model should be taken as an effective (single) parameter which characterizes the overall impurity scattering strength in the system, which should not necessarily be precisely the same as the number density of bare Coulomb scatterers in the sample.
Now, we discuss the second point raised in ref. regarding our choice of $`(n_s-n_c)\equiv n_e`$ as the “effective” free carrier density participating in the “metallic” Drude-Boltzmann transport for $`n_s>n_c`$. This is a rather subtle issue because in our model $`n_e`$ is, by definition, the free carrier density entering the conductivity $`\sigma =n_ee\mu `$ where $`\mu `$ is the carrier mobility — Drude-Boltzmann theory allows for an intuitive separation of the conductivity into an effective carrier density ($`n_e`$) and an effective carrier mobility ($`\mu `$). Our choice of $`n_e=n_s-n_c`$ as the effective free carrier density leads to excellent qualitative (actually, semi-quantitative) agreement between our calculated theoretical results and all the published experimental results in Si MOSFETs and p-type GaAs heterostructures as far as the temperature dependent resistivity on the “metallic” side ($`n_s>n_c`$) is concerned. Our theory also produces the experimentally observed non-monotonic temperature dependence in the resistivity arising from the quantum-classical crossover behavior which, we believe, plays an important role. We note that our choice of $`n_e=n_s-n_c`$ as the effective free carrier density is not only consistent with the impurity binding/freeze-out scenario , but also with the percolation model of ref. .
The important open issue is, of course, a direct experimental measurement of $`n_e`$, the effective free carrier density near threshold (i.e., $`n_s\approx n_c`$) at zero magnetic field. It has been known for a long time in MOSFET physics that a direct determination of the free carrier density near threshold at low temperatures is almost impossibly difficult (particularly if $`n_s\lesssim 10^{11}cm^{-2}`$ as it is in the cases of our interest), requiring a number of simultaneous complex measurements which, to the best of our knowledge, have never been attempted in the samples showing the so-called 2D metal-insulator-transition (M-I-T) phenomena. Since there seems to be confusion or misunderstanding on this point we take the liberty of quoting from the authoritative review article by Ando, Fowler, and Stern: “If trapping, band tailing, or depletion charge are important (as they are near threshold) no single experiment can unambiguously give the mobility and carrier concentrations. … . The interpretation of the measurements is increasingly unreliable at carrier concentrations below $`10^{12}cm^{-2}`$.” (p. 490 in ref. .) We believe that a direct measurement of the zero-field, low-temperature, low-density, near-threshold free carrier density through careful capacitance studies (which are extremely problematic at low densities and low temperatures) is the only way to decisively settle the question of the effective free carrier density participating in the “metallic” transport in the Si samples of ref. .
Following the procedure used in ref. for GaAs systems, we can however determine the operational value of $`n_e`$ participating in the Si-15 sample. We show in Fig. 1
the extrapolated zero temperature conductivity $`\sigma (T=0)`$ of the Si-15 sample plotted as a function of the carrier density $`n_s`$, both for experimental data and our theoretical results . It is clear that $`\sigma (T=0)\propto (n_s-n_c^{\prime })`$ in Fig. 1 where $`n_c^{\prime }\approx n_c`$. We believe that this specific functional behavior of the $`T=0`$ conductivity, vanishing approximately linearly in $`(n_s-n_c)=n_e`$ near threshold, provides compellingly strong phenomenological support for our model of using $`n_e=n_s-n_c`$ as the effective free carrier density in the Drude-Boltzmann theory. Note that in our model $`\sigma =n_ee\mu `$ with $`n_e=n_s-n_c`$, and the calculated $`\mu `$ has very weak $`n_s`$ dependence near threshold (see Fig. 1) — this is strong support for our assumption that $`n_e=n_s-n_c`$ provides a measure of the effective free carrier density in the theory. We have found the same behavior in all the published Si data.
Third, we address the issue of the unpublished finite magnetic field (e.g. Hall effect) measurement of the free carrier density alluded to in ref. . This connection made in ref. is, in fact, misleading since the application of an external magnetic field to a 2D system completely changes its physics by converting it to a quantum Hall system whose localization properties are still poorly understood. The free carrier density ($`n_e`$) entering our Drude-Boltzmann theory is, by definition, the zero-field free carrier density which is not necessarily the same as that measured in finite field quantum Hall-type experiments alluded to in ref. . This fact is most easily demonstrated by pointing out that the finite field measurements of the type mentioned in ref. give the free carrier density to be $`n_s`$ even deep inside the localized regime ($`n_s<n_c`$) where the free carrier density in the Drude-Boltzmann theory is obviously zero, $`n_e=0`$ in the insulating phase. Thus the finite-field carrier density measurements give the free carrier density to be $`n_s`$ everywhere (both in the metallic and the insulating phase), and cannot therefore have anything to do with the parameter $`n_e`$ entering our Drude-Boltzmann theory. While understanding these interesting finite field measurements is clearly an important theoretical challenge (well beyond the scope of our manifestly zero field theory aimed exclusively at understanding the temperature dependent resistivity on the “metallic” side), we disagree with the suggestion in ref. that an understanding of the zero-field transport properties, as in ref. , must somehow depend crucially on a complete theoretical understanding of the quantum Hall behavior of these systems. (None of the proposed theoretical models for 2D M-I-T can account for the finite field quantum Hall behavior.)
Finally, we mention that characterization of our theory as a “simple classical model” in the concluding sentence of ref. is misleading. While quantum interference corrections are neglected in our theory (we argue that quantum interference corrections are overwhelmed by the screening effects considered in our theory in the $`T\gtrsim 100mK`$ temperature range considered in the experiments ), the striking temperature dependence of our theoretical results arises entirely from the classical-quantum crossover phenomenon and the quantum screening of charged impurity scattering – a classical theory would not have any of the effects obtained in our calculation .
This work is supported by the U.S.-ARO and the U.S.-ONR.
|
no-problem/9901/astro-ph9901384.html
|
ar5iv
|
text
|
# 1 Introduction
## 1 Introduction
It is presently believed that the formation of the planets in the solar system took place via a progressive aggregation of dust grains in the primordial solar nebula. However, a mechanism for building planetesimals between the centimeter-size grains (formed by agglomeration and sticking) and the meter-size objects capable of triggering a gravitational instability is still lacking (see e.g. Adams & Lin 1993 and the recent review of Beckwith, Henning & Nakagawa 1999). Recently, it has been suggested (Barge & Sommeria 1995; Adams & Watkins 1995; Tanga et al. 1996) that heavy dust particles rapidly concentrate in the cores of anticyclonic vortices in the solar nebula, increasing the local density of centimeter-size grains and favoring the formation of larger scale objects which are then capable of efficiently triggering a gravitational instability. The gravitational instability itself is also enhanced because a vortex creates a larger local surface density in the disk. Consequently, even if giant planets form (as an alternative model suggests) initially due to a gravitational instability of the gaseous disk (rather than by planetesimal agglomeration), vortices may still play an important role. It is the change in the Keplerian velocity of the flow in the disk, due to the anticyclonic motion in the vortex, that induces a net force towards the center of the vortex (the inverse happens in a cyclonic vortex). As a consequence, within a few revolutions of the vortex around the protostar, the concentration of dust grains and the density in the anticyclonic vortex become much larger than outside. Analytical estimates (Tanga et al. 1996) have shown that, depending on the unknown drag on the dust particles in the gas, the trapping time for dust in a small anticyclonic vortex (of size $`D/r\approx 0.01`$ or less) located at 5 AU (the distance of Jupiter from the Sun) is about $`20<\tau <10^3`$ years, i.e. between about 2 and 100 local periods of revolution. The mass that can accumulate in the core of the vortex on this time scale is of the order of $`10^{-5}`$ earth masses (i.e. a planetesimal). Barge & Sommeria (1995) considered larger vortices (of size $`D\approx H`$, where $`H`$ is the disk half thickness, and $`H/r\approx 0.06`$ at 5AU) and obtained that on a timescale of the order of 500 orbits, a core mass of about 16 earth masses can accumulate in the core of the vortex. Since small vortices are expected to merge into larger ones, these results should be regarded as complementary. The results of Tanga et al. (1996) could represent the early stages in planet formation (planetesimals), while the results of Barge & Sommeria (1995) could represent a later stage in the evolution (the formation of planet cores).
It is therefore important for the theory of planet formation to assess under which conditions vortices form in disks, whether they are stable or not, and for how long they can survive in the disk.
Analytical results (Adams & Watkins 1995, using a geostrophic approximation) have shown that many different types of vortex solutions are possible in circumstellar disks. However, Adams & Watkins (1995) used simplifying assumptions and they were unable to estimate the lifetime and stability of the vortices. In order to resolve the issue of the non-linear behavior of vortices (e.g. vortex merging, scattering, growth, etc.) one needs to carry out detailed numerical simulations of the flow.
Recent simplified simulations of a disk (Bracco et al. 1998), which assume an incompressible flow and solve the shallow water equations, indicate that two-dimensional (large scale) vortices do not fragment into small vortices because of the inverse cascade of energy (characteristic of two-dimensional flows). Rather, the opposite happens: small vortices merge together to form and sustain a large vortex. It is believed for example that the Great Red Spot in the atmosphere of Jupiter is a steady solitary vortex also sustained by the merging of small vortices (Marcus 1993; see also Ingersoll 1990). However, in this case the strong winds in Jupiter’s atmosphere are also partially feeding vorticity from the background flow into the vortex, and the decay time of the vortex is much longer.
The shear in Keplerian disks inhibits the formation of vortices larger than a given scale length $`L_s`$ ($`L_s^2=v/(\partial \mathrm{\Omega }/\partial r)`$, where $`v`$ is the rotational velocity of the vortex (assumed to be subsonic, otherwise the energy is dissipated due to sound waves and shocks), and $`\mathrm{\Omega }`$ is the angular velocity in the disk). At the same time, the viscosity quickly dissipates vortices smaller than a viscous scale length $`L_\nu `$. Consequently, only vortices of size $`L`$ satisfying $`L_\nu <L<L_s`$ can survive in the flow for many orbits before being dissipated. In the calculations of Bracco et al. (1998), counter-rotating (anticyclonic) vortices with positive density perturbations were found to be long-lived, while cyclonic vortices dissipated very quickly. However, as we noted above, Bracco et al. (1998) assumed a shallow water approximation for the disk. These authors do not specify the value of the specific heats ratio $`\gamma `$ that was used in the simulations (in order for the shallow water equation to represent a polytropic thin disk one has to assume $`\gamma =2`$; see e.g. Nauta & Tóth 1998). Most importantly, the Mach number of the Keplerian flow (or equivalently the thickness of the disk) is not defined. It is presently not clear whether incompressible results (using a shallow water approximation) are valid for more realistic disks, i.e. for compressible (and potentially viscous) disks, with a density profile $`\rho \propto r^{-15/8}`$ and a temperature profile $`T\propto r^{-3/4}`$ - the standard Shakura-Sunyaev disk model (Shakura & Sunyaev 1973).
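As a rough numerical illustration of the shear constraint, the Python sketch below evaluates $`L_s`$ for a Keplerian disk ($`\mathrm{\Omega }\propto r^{-3/2}`$), taking the aspect ratio $`H/r=0.15`$ and the vortex rotational speed $`0.2c_s`$ used in the models described below; the viscous cutoff $`L_\nu `$ requires a dissipation criterion that is not specified here and is therefore omitted.

```python
import math

# Shear-limited vortex size: L_s^2 = v / (dOmega/dr), with Omega ~ r^(-3/2)
# so |dOmega/dr| = (3/2) Omega / r, and c_s = H * Omega_K.
H_over_r = 0.15          # disk aspect ratio used in the simulations
v_over_cs = 0.2          # assumed rotational speed of the vortex

# => (L_s / r)^2 = (2/3) * (v/c_s) * (H/r)
L_s_over_r = math.sqrt(2.0 / 3.0 * v_over_cs * H_over_r)
print(f"L_s / r ~ {L_s_over_r:.2f}   (compare with H/r = {H_over_r})")
```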
It is the goal of the present work to model vortices in disks, using a more realistic approach. We conduct a numerical study of the stability and lifetime of vortices in a standard disk model of a protoplanetary nebula. For this purpose, we use a time-dependent, two-dimensional high-order accuracy hydrodynamical compressible code, assuming a polytropic relation. Here it is important to stress the following. The vorticity (circulation) of the flow (with a velocity field $`\stackrel{}{u}`$) is defined as
$$\stackrel{}{w}=\stackrel{}{\nabla }\times \stackrel{}{u}.$$
(1)
Taking the curl of the Navier-Stokes Equations (see e.g. Tassoul 1978) one obtains an equation for the vorticity
$$\frac{\partial \stackrel{}{w}}{\partial t}+\stackrel{}{u}\cdot \stackrel{}{\nabla }\stackrel{}{w}=\frac{1}{\rho ^2}\stackrel{}{\nabla }\rho \times \stackrel{}{\nabla }P+\mathrm{}$$
(2)
The first term on the right hand side of the equation is the source term for the vorticity. Therefore, vorticity can be generated by the non-alignment of $`\stackrel{}{\nabla }\rho `$ with $`\stackrel{}{\nabla }P`$ (e.g. meridional circulation in stars is generated by this term). The resultant growth of circulation due to this term is also known as the baroclinic instability. However, in the present work we do not address the question of the vortex generation process, since we are using a polytropic relation $`P=K\rho ^\gamma `$ ($`K`$ is the polytropic constant, $`\gamma =1+1/n`$, and $`n`$ is the polytropic index), and the source term vanishes. In this case the vortensity (vorticity per unit mass) is conserved (Kelvin’s circulation theorem, see e.g. Pedlosky 1987) and vortices cannot be generated.
In the next section we present the numerical modelling of the protoplanetary disk. The results are presented §3 and a discussion follows.
## 2 Accretion Disks modelling
The time-dependent equations (e.g. Tassoul 1978) were written in cylindrical coordinates $`(r,\varphi ,z)`$, and were further integrated in the vertical (i.e. $`z`$) direction (e.g. Pringle 1981). We use a Fourier-Chebyshev spectral method of collocation (described in Godon 1997) to solve the equations of the disk. Spectral Methods are global high-order accuracy methods (Gottlieb & Orszag 1977; Voigt, Gottlieb & Hussaini 1984; Canuto et al. 1988). These methods are fast and accurate and are frequently used to study turbulent flows (e.g. She, Jackson & Orszag 1991) and interactions of vortices (Cho & Polvani 1996a). It is important to stress that spectral codes have very little numerical dissipation, and that the only dissipation that occurs in the present simulations is due to the $`\alpha `$ viscosity introduced in the equations. All the details of the numerical method and the exact form of the equations can be found in Godon (1997). We therefore give here only a brief overview of the modelling.
The equations are solved in the plane of the disk $`(r,\varphi )`$, with $`0\le \varphi \le 2\pi `$ and $`R_0\le r\le 2R_0`$, where the inner radius of the computational domain, $`R_0`$, has been normalized to unity. We use an alpha prescription (Shakura & Sunyaev 1973) for the viscosity law, $`\nu =\alpha c_sH=\alpha c_s^2/\mathrm{\Omega }_K`$, where $`c_s`$ is the sound speed and $`H=c_s/\mathrm{\Omega }_K`$ is the vertical height of the disk. The unperturbed flow is Keplerian, and we assume a polytropic relation $`P=K\rho ^{1+1/n}`$ ($`n`$ is the polytropic index, while the polytropic constant $`K`$ is fixed by choosing $`H/r`$ at the outer radial boundary). Both the inner and the outer radial boundaries are treated as free boundaries, i.e. with non-reflective boundary conditions where the conditions are imposed on the characteristics of the flow at the boundaries.
The Reynolds number in the flow is given by
$$R_e=\frac{Lu}{\nu },$$
where $`L`$ is a characteristic dimension of the computational domain, $`u`$ is the velocity of the flow (or more precisely, the velocity change in the flow over a distance $`L`$) and $`\nu `$ is the viscosity. Since we are solving for the entire disk, $`L\sim r`$ and $`u\sim v_K`$, and the Reynolds number becomes
$$R_e=\alpha ^{-1}(H/r)^{-2},$$
where we have substituted $`\nu =\alpha H^2\mathrm{\Omega }_K`$, since we are using an $`\alpha `$ viscosity prescription. The Reynolds number in the flow is very high (of the order of $`10^4`$–$`10^5`$ for the assumed parameters).
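A quick arithmetic check of these numbers (an illustration added here, using only the parameters quoted in the text):

```python
# R_e = 1/(alpha * (H/r)**2) for the disk parameters used in this work (H/r = 0.15).
h_over_r = 0.15
for alpha in (1e-4, 3e-4, 6e-4, 1e-3):
    Re = 1.0 / (alpha * h_over_r**2)
    print(f"alpha = {alpha:.0e}  ->  R_e = {Re:.2e}")
# alpha = 1e-04 gives R_e ~ 4.4e+05; alpha = 1e-03 gives R_e ~ 4.4e+04,
# i.e. the quoted range 10^4 - 10^5.
```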
The simulations were carried out without the use of spectral filters and with a moderate resolution of $`128\times 128`$ collocation points. As we noted above, the only dissipation in the flow is due to the $`\alpha `$ viscosity introduced in the equations (i.e. the Navier-Stokes equations).
## 3 Numerical Results
In all the models presented here we chose $`H/r=0.15`$ to match protoplanetary disks; however, similar results were obtained using $`H/r=0.05`$ and $`H/r=0.25`$. The models were first evolved in the radial dimension for an initial dynamical relaxation phase (lasting several Keplerian orbits at least). Then an axisymmetric disk was constructed from the one-dimensional results, on top of which we introduced an initial vorticity perturbation. The initial vorticity perturbation had a Gaussian-like form and a constant rotational velocity of the order of $`0.2c_s`$. In all the models we found that cyclonic vortices dissipate very quickly, within about one local Keplerian orbit, while anticyclonic disturbances persist in the flow for a much longer period of time. It is important to stress that in all the models, the anticyclonic vortex, although rotating in the retrograde direction (in the rotating frame of reference), has a rotation rate that is slower than the local Keplerian flow, and consequently, in the inertial frame of reference the vortex rotates in the prograde direction, like a planet. As the models are evolved, the initial vorticity perturbation is stretched and elongated by the shear, within a few Keplerian orbits. The elongation of the vorticity into a thin structure transfers enstrophy (the potential enstrophy is defined as the average of the square of the potential vorticity) towards high wave numbers. This process is consistent with the direct cascade of enstrophy characteristic of two-dimensional turbulence. It forms an elongated negative (relative to the local flow) vorticity strip in the direction of the shear, with an elongated vortex in the middle of it (Figure 1). Due to a Kelvin-Helmholtz instability, perturbations in a forming vorticity strip propagate (along the strip) in the direction opposite to the shear. These propagating waves are called Rossby waves in Geophysics (see e.g. Hoskins et al. 1985; in Geophysics this instability is referred to as a shearing instability, e.g. Haynes 1987; Marcus, 1993, section 6.2). As a result of this instability the two bendings in the vorticity strip (namely, the trailing and the leading vorticity waves of the vortex) move in the direction opposite to the shear and fold onto themselves. The extremities of the vorticity waves can be again elongated by the shear and can undergo further folding due to the propagation of the Rossby waves. This results in spiral vorticity arms around the vortex (Figure 2). The shape of the vortex thus formed then does not change any more during the remaining time of the simulation (dozens of orbits).
We also find that anticyclonic vortices act like overdense regions in the disk, i.e. within about one orbit the density increases by about 10 percent in the core of the vortex. We cannot check, however, whether a cyclonic vortex decreases the density in its core, since cyclonic vortices dissipate very quickly.
### 3.1 Flat density profile models
In a first series of models, we chose a constant density profile throughout the disk with a polytropic index $`n=3`$. These models are less realistic and are probably closer to the models of Bracco et al. (1998) who used an incompressible approximation (the shallow water equation). We ran four models with different values of the viscosity parameter $`\alpha =1\times 10^{-4}`$, $`3\times 10^{-4}`$, $`6\times 10^{-4}`$ and $`1\times 10^{-3}`$. The viscosity parameter was chosen so as to be consistent with values inferred for protostellar disks (e.g. Bell et al. 1995). The simulations were followed for up to a maximum time of 60 local Keplerian orbits of the vortex in the disk. We found exponential decay times for the vortex of $`\tau =3.9`$ periods for $`\alpha =10^{-3}`$ and $`\tau =32.4`$ periods for $`\alpha =10^{-4}`$ (see Figure 4). In all cases the decay was exponential.
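The decay times quoted above are obtained by fitting an exponential to the recorded vorticity amplitude; a sketch of such a fit (added here as an illustration, with synthetic placeholder data standing in for the amplitudes extracted from an actual run) is:

```python
import numpy as np

# Fit log A(t) = log A0 - t/tau to the maximum vorticity amplitude of the vortex.
# The series below is synthetic (tau = 30 orbits plus a little noise); in practice
# the amplitudes would be read off the simulation after the initial relaxation phase.
rng = np.random.default_rng(0)
t_orbits = np.arange(2.0, 40.0, 2.0)
A = np.exp(-t_orbits / 30.0) * (1.0 + 0.02 * rng.standard_normal(t_orbits.size))

slope, logA0 = np.polyfit(t_orbits, np.log(A), 1)
tau = -1.0 / slope
print(f"fitted decay time tau ~ {tau:.1f} orbits")   # recovers ~30 orbits
```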
### 3.2 Standard disk models
In an attempt to study the stability and decay times of vortices in more realistic disks, we modelled a standard Shakura–Sunyaev disk (Shakura & Sunyaev, 1973) with $`\rho \propto r^{-15/8}`$ and $`T\propto r^{-3/4}`$ (this was achieved by choosing a polytropic index $`n=2.5`$ together with an ideal gas equation of state). In this model we chose $`\alpha =10^{-4}`$, and the simulations were followed for up to 20 local Keplerian orbits of the vortex in the disk. Although the initial vorticity perturbation was the same as before, the decay time of the vortex ($`\tau =60`$) and its size were found to be larger than for the flat density models (see Figure 4). However, difficulties in assessing precisely the size of the vortex make it difficult to determine whether the different decay time is due to the different size of the vortex or merely to the different density profiles of the models. We also ran an additional model with a lower viscosity parameter of $`\alpha =10^{-5}`$ and found an exponential decay time of more than 315 orbits. In this case the resolution was increased to $`256\times 256`$ and the vortex that formed was slightly smaller in size than in the previous models.
In order to gain further insight into the dynamics of vortices, we also ran two additional models (one with $`\alpha =10^{-4}`$ and one with $`\alpha =10^{-3}`$) where two vorticity perturbations were initially introduced in the flow. In the case of $`\alpha =10^{-4}`$, the vortices interact by propagating vorticity waves and eventually the two vortices merge together to form a single larger vortex (see Figures 5a - 5d). In the case of $`\alpha =10^{-3}`$, the vortices do not interact (no vorticity waves were observed) and dissipate quickly.
The results indicate that the exponential decay time of a vortex is inversely proportional to the viscosity (Figure 6), and it can be very large indeed (in $`30`$–$`60`$ orbits the amplitude of the vortex decreases by a factor of $`e`$) for disks around Young Stellar Objects, where the viscosity might be low ($`\alpha =10^{-4}`$). One expects the decay time to behave like $`\tau \sim d^2/\nu `$ (see also Bracco et al. 1998), where $`d`$ is the size of the vortex and $`\nu `$ is the viscosity in the flow. In principle, an inviscid model could have vortices that do not decay. Therefore, any decay that has previously been observed in inviscid numerical simulations (e.g. the shallow water approximation of Bracco et al. 1998) was probably due to numerical diffusion in the code (the hyperviscosity introduced by Bracco et al. 1998 in their model).
## 4 Discussion
Accretion disks possess a very strong shear, which is normally believed to lead to a rapid destruction of any structures that form within them (e.g. the spiral waves obtained in a perturbed disk by Godon & Livio 1998, dissipate or exit the computational domain within about 10 orbits for $`\alpha =1\times 10^{-4}`$). However, we find that anticyclonic vortices are surprisingly long-lived, and they can survive for hundreds of orbits (the amplitude of the vortex decreases exponentially with a time constant of 60 orbits for $`\alpha =10^{-4}`$ in disks around young stellar objects). These results are in agreement with other similar simulations of the decay of two-dimensional turbulence, e.g. the simulations of rotating shallow-water decaying turbulence on the surface of a sphere (Cho & Polvani, 1996a & b; modelling of the Jovian atmosphere), where the only dissipation of the vorticity is due to the (hyper) viscosity. The size and the elongation of the vortices that form out of an initial vorticity perturbation increase with the viscosity. In order for the model to be self-consistent, the size of the vortices has to be larger than the thickness of the disk (validity of the two-dimensional assumption, otherwise the vortices are three dimensional). We found that for an alpha viscosity of the order of $`10^{-5}`$ (with $`H/r=0.15`$) the semi-major axis $`a`$ of the vortices is slightly smaller than $`H`$. However, when the viscosity is increased to $`10^{-3}`$ the semi-major axis becomes up to three times the thickness of the disk. In all the cases the semi-minor axis $`b`$ remains smaller than $`H`$, while the elongation ($`a/b`$) varies from about 4 (for the less viscous case) to about 10 (for the most viscous model). Tanga et al. (1996) and Barge & Sommeria (1995) solved numerically the equations of motion for dust particles in vortices, and confirmed that dust particles concentrate inside vortices on a relatively short timescale. The time taken by a dust particle to reach the center of an anticyclonic vortex at a few AU ranges from a few orbits to a hundred orbits, depending on the exact value of the drag parameter. We have shown that in a standard disk model for a protoplanetary disk (a polytropic disk with $`H/r=0.15`$), vortices can survive long enough to allow dust particles to concentrate in their core. For $`\alpha =10^{-3}`$ only small vortices would form and would not merge together. In this case (using the estimate of Tanga et al. 1996 for small vortices) one finds that the vortices would decay within about 10 orbits and the dust concentration in the core of the vortices would only reach $`10^{-6}`$ Earth masses (planetesimals, at 5AU). For $`\alpha =10^{-4}`$ small vortices would merge together to form larger vortices. The concentration of dust grains in the core of the larger vortices could reach about 2 Earth masses (within a hundred revolutions and using the estimate of Barge & Sommeria 1995 at 5AU). Therefore, the local density of centimeter-size grains could be increased, thus favoring the formation of larger scale objects which are then capable of efficiently triggering a gravitational instability. Our results therefore confirm earlier suspicions that were based on a simplified solution of shallow water equations for an incompressible fluid.
It is important to note though, that in the present work we have not addressed yet the problem of vortex formation. In addition to the baroclinic instability described in the Introduction, another potential way to generate vortices in disks around young stellar objects is through infall of rotating clumps of gas. It has been suggested that protostellar disks could grow from the accretion (or collapse) of rotating gas cloud (e.g. Cassen & Moosman 1981; Boss & Graham 1993; Graham 1994; Fiebig 1997). The clumps with the proper rotation vectors could then give rise to small vortices, that would subsequently merge together.
Finally, we would like to mention that vortices can (in principle at least) have many other astrophysical applications. For example, interacting Rossby waves can result in radial angular momentum transport (e.g. Llewellyin Smith 1996). Vortices are also believed to be important in molecular cloud substructure formation in the Galactic disk (e.g. Chantry, Grappin & Léorat 1993; Sasao 1973).
## Acknowledgments
This work has been supported in part by NASA Grant NAG5-6857 and by the Director Discretionary Research Fund at STScI.
## References
Adams, F. C., & Lin, D. N. C. 1993, in Protostars and Planets III, ed. E. H. Levy & J. I. Lunine (Tuscon: Univ.Arizona Press), 721
Adams, F. C., & Watkins, R. 1995, ApJ, 451, 314
Barge, P., & Sommeria, J. 1995, A & A, 295, L1
Beckwith, S. V. W., Henning, T., Nakagawa, Y., 1999, in Protostars and Planets IV, in press, astro-ph/9902241
Bell, K. R., Lin, D. N. C., Hartmann, L. W., & Kenyon, S. J., 1995, ApJ, 444, 376
Boss, A. P., Graham, J. A., 1993, ICARUS, 106, 168
Bracco, A., Provenzale, A., Spiegel, E., Yecko, P. 1998, in Abramowicz A. (ed.), Proceedings of the Conference on Quasars and Accretion Disks, Cambridge Univ. Press, astro-ph/9802298
Canuto, C., Hussaini, M. Y., Quarteroni, A. & Zang, T. A. 1988, Spectral Methods in Fluid Dynamics (New York: Springer Verlag)
Cassen, P., & Moosman, A. 1981, ICARUS, 48, 353
Chantry, P., Grappin, R., & Léorat, J. 1993, A & A, 272, 555
Cho, J. Y. K., & Polvani, L. M. 1996a, Phys. Fluids, 8 (6), 1531
Cho, J. Y. K., & Polvani, L. M. 1996b, Science, 273, 335
Fiebig, D., 1997, A & A, 327, 758
Godon, P., & Livio, M. 1998, submitted to ApJ
Godon, P. 1997, ApJ, 480, 329
Gottlieb, D., & Orszag, S.A. 1977, Numerical Analysis of Spectral Methods: Theory and Applications (NSF-CBMS Monograph 26; Philadelphia: SIAM)
Graham, J. A., 1994, AASM, 184, 44.07
Haynes, P.H., 1987, J.Fluid Mech., 175, 463
Hoskins, B., McIntyre, M., Robertson, A., 1985, Q.J.R.Meteorol.Soc., 111, 877
Ingersoll, A. P., 1990, Science, 248, 308
Marcus, P. S. 1993, ARAA, 31, 523
Nauta, M. D., & Tóth, G. 1998, A & A, 336, 791
Pedlosky, J. 1987, Geophysical Fluid Dynamics, 2nd ed. (Springer Verlag, New York)
Shakura, N.I., & Sunyaev, R. A. 1973, A & A, 24, 337.
She, Z.S., Jackson, E., & Orszag, S.A. 1991, Proceedings of the Royal Society of London, Series A: Mathematical and Physical Sciences, vol. 434 (1890), 101.
Sasao, T. 1973, PASJ, 25, 1
Tanga, P., Babiano, A., Dubrulle, B., & Provenzale, A., 1996, ICARUS, 121, 158
Tassoul, J. L., 1978, Theory of Rotating Stars, Princeton Series in Astrophysics, Princeton, New Jersey
Voigt, R.G., Gottlieb, D., & Hussaini, M.Y. 1984, Spectral Methods for Partial Differential Equations (Philadelphia: SIAM-CBMS)
Figure Captions
Figure 1: A colorscale of the vorticity shows the stretching of an anticyclonic vorticity perturbation in a disk model with a viscosity parameter of $`\alpha =10^{-4}`$. This results in a negative vorticity strip in the flow, which is then further affected by the propagation of Rossby waves along its wings.
Figure 2: A detailed view of an anticyclonic vortex in the disk, a few orbits later than shown in Figure 1. The vorticity wings have folded up onto themselves due to a Kelvin-Helmholtz instability (accompanied by the propagation of Rossby waves).
Figure 3: The momentum ($`\rho \stackrel{}{v}`$) field is shown in the vicinity of the vortex. The coordinates ($`r,\varphi `$) have been displayed on a Cartesian grid.
Figure 4: The maximum amplitude $`A`$ of the absolute vorticity of the vortex is drawn in arbitrary logarithmic units as a function of time (in orbits), for values of the alpha viscosity parameter ranging from $`\alpha =10^{-5}`$ to $`\alpha =10^{-3}`$. The full lines represent the initial flat density profile models, while the dotted lines are for an initial density profile matching a standard disk model. In all the models, during the first orbits, the decay of the maximum amplitude of the vorticity is stronger, due to the relaxation of the initial vorticity perturbation. The figure shows that the amplitude behaves like $`A\propto e^{-t/\tau }`$, where $`\tau `$, the decay time, increases as the viscosity decreases.
Figures 5a-d. As two vortices interact, they emit vorticity waves and eventually merge to form one large vortex. In this model the viscosity parameter was taken to be $`\alpha =10^{-4}`$. For convenience the vortices are shown roughly in the same orientation in the disk. The complete process of merging takes about $`5`$–$`10`$ orbits.
Figure 6: The decay time $`\tau `$ against $`\alpha ^{-1}`$. The flat-density models of Figure 4 are represented by stars, together with a straight line $`\tau =c/\alpha `$, where $`c=3.2\times 10^{-3}`$.
|
no-problem/9901/math9901102.html
|
ar5iv
|
text
|
# On relative normal modes
## 1. Introduction.
In this paper we generalize the Weinstein-Moser theorem (\[W1, Ms, W2, MnRS, B\] and references therein) on the nonlinear normal modes near an equilibrium in a Hamiltonian system to a theorem on the relative periodic orbits near a relative equilibrium in a symmetric Hamiltonian system.
Let $`M`$ be a symplectic manifold, with a Hamiltonian action of a connected Lie group $`G`$ and a moment map $`\mathrm{\Phi }:M𝔤^{}`$. Recall that an orbit of the Hamiltonian vector field $`X_h`$ of a $`G`$-invariant Hamiltonian $`hC^{\mathrm{}}(M)^G`$ is a relative equilibrium if its image in the orbit space $`M/G`$ is a point, and that an orbit of $`X_h`$ is a relative periodic orbit if its image in $`M/G`$ is a closed curve.
Later improvements of the Weinstein-Moser theorem lend themselves to generalizations by our method; to streamline the presentation, however, we shall treat only the original version:
###### Weinstein’s Theorem.
\[W1\] Let $`h`$ be a Hamiltonian on a symplectic vector space $`V`$ such that its differential at the origin $`dh(0)`$ is zero and its Hessian at the origin $`d^2h(0)`$ is positive definite. Then for every small $`\epsilon >0`$, the energy level $`h^{-1}(h(0)+\epsilon )`$ carries at least $`\frac{1}{2}dimV`$ periodic orbits of the Hamiltonian vector field of $`h`$.
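For orientation, here is a standard textbook instance of this statement (an illustration added here, not part of the original discussion): on $`V=\mathbb{R}^4`$ with canonical coordinates $`(q_1,q_2,p_1,p_2)`$ take
$$h=\frac{1}{2}\left(p_1^2+\omega _1^2q_1^2\right)+\frac{1}{2}\left(p_2^2+\omega _2^2q_2^2\right),\omega _1,\omega _2>0,$$
so that $`dh(0)=0`$ and $`d^2h(0)`$ is positive definite. Each level $`h^{-1}(\epsilon )`$ with small $`\epsilon >0`$ indeed carries at least $`\frac{1}{2}dimV=2`$ periodic orbits, namely the two normal modes $`\{q_2=p_2=0\}`$ and $`\{q_1=p_1=0\}`$; when $`\omega _1/\omega _2`$ is irrational these are the only closed orbits on that level, so the bound is attained.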
Now suppose $`xM`$ lies on a relative equilibrium for a $`G`$-invariant Hamiltonian $`h`$. If $`x`$ is a regular point of the moment map $`\mathrm{\Phi }`$, then the reduced space at $`\mu =\mathrm{\Phi }(x)`$ is smooth near the image $`\overline{x}`$ of $`x`$. Provided appropriate conditions hold on the Hessian of the reduced Hamiltonian, Weinstein’s theorem applied to the reduced system guarantees at least $`\frac{1}{2}dim(\text{reduced space})`$ periodic orbits, and hence as many families of relative periodic orbits near the relative equilibrium. (If $`x`$ lies on a relative periodic orbit, then the orbit through $`gx`$ is relative periodic also for every $`gG`$.) On the other hand, if $`x`$ is a singular point of $`\mathrm{\Phi }`$, then the reduced space at $`\mu `$ is a stratified space, and the reduced dynamics preserves the stratification \[AMM, SL\]. Unless the stratum through $`\overline{x}`$ is an isolated point, we have again $`\frac{1}{2}dim(\text{stratum})`$ families of relative periodic orbits, provided appropriate conditions hold on the Hessian of the restriction of the reduced Hamiltonian to the stratum. But what if the stratum through $`\overline{x}`$ is an isolated point? It is difficult to make sense of Hessians on singular spaces.
We shall show that in this case also there are families of relative periodic orbits near the relative equilibrium provided a certain substitute for the Hessian is definite and the isotropy group $`G_\mu `$ of $`\mu `$ is a torus. The proof amounts to a computation in ‘good coordinates’ that allows us to reduce our problem to Weinstein’s theorem. We proceed via the following ‘correspondence theorem’:
###### Theorem 1.
Let $`M`$ be a symplectic manifold, with a Hamiltonian action of a connected Lie group $`G`$ and a moment map $`\mathrm{\Phi }:M\to 𝔤^{*}`$, and let $`h\in C^{\infty }(M)^G`$ be a $`G`$-invariant Hamiltonian.
Suppose $`x\in M`$ lies on a relative equilibrium for $`h`$ and the isotropy group of $`\mathrm{\Phi }(x)`$ is a torus. Then there exist a symplectic vector space $`V`$ with a Hamiltonian action of the isotropy group $`G_x`$ of $`x`$ and a $`G_x`$-invariant Hamiltonian $`h_V\in C^{\infty }(V)^{G_x}`$ such that
1. the origin $`0V`$ is an equilibrium for $`h_V`$;
2. the Hessian $`d^2h_V(0)`$ of $`h_V`$ can be computed in terms of $`h`$;
3. every $`G_x`$-relative periodic orbit for $`h_V`$ on $`V`$ sufficiently close to the origin gives rise to a family of $`G`$-relative periodic orbits for $`h`$ on $`M`$.
The meaning of the vector space $`V`$ and of the quadratic form $`d^2h_V(0)`$ is as follows. Note that
$$x\in M\text{ lies on a relative equilibrium for }h\iff d(h-\mathrm{\Phi }|\xi )(x)=0\text{ for some }\xi \in 𝔤.$$
The vector $`\xi `$ is called a velocity of $`x`$. Velocity is not unique: any two velocities of $`x`$ differ by a vector in the isotropy Lie algebra $`𝔤_x`$ of $`x`$.
The function $`h-\mathrm{\Phi }|\xi `$ is constant along the orbit $`G_\mu x`$, where as above $`G_\mu `$ is the isotropy group of $`\mu =\mathrm{\Phi }(x)`$. The form $`d^2(h-\mathrm{\Phi }|\xi )(x)|_{\mathrm{ker}d\mathrm{\Phi }(x)}`$ is therefore always degenerate and descends to a well-defined form on the vector space $`V:=\mathrm{ker}d\mathrm{\Phi }(x)/T_x(G_\mu x)`$; alternatively we can think of $`V`$ as the maximal symplectic subspace of $`\mathrm{ker}d\mathrm{\Phi }(x)`$. $`V`$ is called the symplectic slice at $`x`$. It follows from the local normal form theorem of Guillemin-Sternberg and Marle \[GS, Mr\] that there exists a $`G_x`$-invariant symplectic submanifold $`\mathrm{\Sigma }`$ passing through $`x`$ such that $`T_x\mathrm{\Sigma }=V`$. Thus, locally $`\mathrm{\Sigma }\cong V`$ as symplectic manifolds, with $`x`$ corresponding to the origin in $`V`$. The function $`h_V`$ in Theorem 1 is the restriction $`(h-\mathrm{\Phi }|\xi )|_\mathrm{\Sigma }=(h-\mathrm{\Phi }|\xi )|_V`$. Since Hessians are functorial, $`d^2h_V(0)=d^2(h-\mathrm{\Phi }|\xi )|_V(0)=d^2(h-\mathrm{\Phi }|\xi )(x)|_V.`$ This notion of Hessian has been used in \[LS, O, OR\] to study the stability and bifurcation of relative equilibria at singular points of the moment map.
As an example of applications of Theorem 1, we combine it with Weinstein’s theorem to obtain:
###### Corollary.
Let $`M`$, $`G`$, $`\mathrm{\Phi }:M\to 𝔤^{*}`$, $`h\in C^{\infty }(M)^G`$ be as in Theorem 1. Suppose $`x\in M`$ lies on a relative equilibrium for $`h`$ and the isotropy group of $`\mathrm{\Phi }(x)`$ is a torus; call $`V`$ the symplectic slice at $`x`$. If there is a velocity $`\xi `$ of $`x`$ for which $`d^2(h-\mathrm{\Phi }|\xi )(x)|_V`$ is positive definite, then for every small $`\epsilon >0`$, the level set $`\{y\in M\mid (h-\mathrm{\Phi }|\xi )(y)=(h-\mathrm{\Phi }|\xi )(x)+\epsilon \}`$ carries families of relative periodic orbits.
We expect that the assumption on the isotropy group of $`\mathrm{\Phi }(x)`$ being a torus can be dropped, but the proof is unlikely to be quite so simple. In case $`G`$ is compact, the assumption is satisfied generically.
## 2. Proof of Theorem 1.
Let $`M`$, $`G`$, $`\mathrm{\Phi }:M𝔤^{}`$, $`hC^{\mathrm{}}(M)^G`$ be as in the statement of Theorem 1. We are supposing that $`xM`$ lies on a relative equilibrium for $`h`$ and that the isotropy group $`G_\mu `$ of $`\mu =\mathrm{\Phi }(x)`$ is a torus.
First, we show that it suffices to consider the case when the whole $`G`$ is a torus. Since $`G_\mu `$ is compact, we can produce an $`Ad^{}(G_\mu )`$-invariant inner product on $`𝔤^{}`$ and take the corresponding $`G_\mu `$-equivariant splitting $`𝔤=𝔤_\mu 𝔥`$. Then a small enough $`G_\mu `$-invariant neighborhood $`B`$ of $`\mu `$ in the affine plane $`\mu +𝔥^{}`$ is transverse to the moment map (here $`𝔥^{}`$ denotes the annihilator of $`𝔥`$ in $`𝔤`$). Hence $`R:=\mathrm{\Phi }^1(B)`$ is a $`G_\mu `$-invariant submanifold of $`M`$ containing $`x`$. In fact, by the symplectic cross-section theorem of Guillemin and Sternberg (cf. \[GLS\], Corollary 2.3.6), $`R`$ is a symplectic submanifold of $`M`$ and the action of $`G_\mu `$ on $`R`$ is Hamiltonian; the moment map for this action is the restriction of $`\mathrm{\Phi }`$ to $`R`$ followed by the natural projection $`𝔤^{}𝔤_\mu ^{}`$. Since $`𝔤^{}𝔤_\mu ^{}`$ restricted to $`\mu +𝔥^{}`$ is an isomorphism, the moment map for the action of $`G_\mu `$ on $`R`$ is $`\mathrm{\Phi }|_R`$ up to a linear isomorphism. It follows that
$$\mathrm{ker}d\mathrm{\Phi }(y)=\mathrm{ker}d(\mathrm{\Phi }|_R)(y)$$
for every $`yR`$ (cf. \[LS\], Lemma 2.5). Moreover, because $`h`$ is $`G`$-invariant, the flow of $`X_h`$ preserves the fibers of the moment map, and so the flow preserves $`R`$. It follows that
$$(X_h)|_R=X_{(h|_R)}$$
(cf. \[L\], p.218). Thus, we have found a Hamiltonian sub-system $`(R,G_\mu ,\mathrm{\Phi }|_R,h|_R)`$ for which the symmetry group $`G_\mu `$ is a torus. Passing to this sub-system, we may and shall assume without loss of generality that $`G`$ is a torus and $`G=G_\mu `$.
Second, we construct the Hamiltonian $`h_V`$. The connected component $`K`$ of $`1`$ in the isotropy group $`G_x`$ of $`x`$ is a torus and has a complementary torus $`L`$, so that $`G=L\times K`$. The finite abelian group $`\mathrm{\Gamma }=G_x/K`$ may be regarded as a subgroup of $`L`$. Then $`\mathrm{\Gamma }`$ acts symplectically (by the lifted action) on $`T^{}L=L\times 𝔩^{}`$ and on the symplectic slice $`V`$ at $`x`$. Hence $`\mathrm{\Gamma }`$ acts diagonally on $`T^{}L\times V`$. Note that $`G`$ too acts on $`T^{}L\times V`$: $`L`$ acts on its own cotangent bundle and $`K`$ acts on $`V`$. Hence $`G`$ acts in a Hamiltonian fashion on $`(T^{}L\times V)/\mathrm{\Gamma }`$. The orbit of $`[1,0,0](T^{}L\times V)/\mathrm{\Gamma }`$ is isotropic and is isomorphic to $`G/(\mathrm{\Gamma }\times K)Gx`$. By the equivariant isotropic embedding theorem, there exist $`G`$-invariant neighborhoods $`U_0`$ of $`[1,0,0]`$ in $`(T^{}L\times V)/\mathrm{\Gamma }`$ and $`U`$ of $`x`$ in $`M`$ and an equivariant symplectomorphism $`\sigma :U_0U`$ such that $`\sigma ([1,0,0])=x`$ and the derivative $`d\sigma ([1,0,0])`$ sends $`VT_{[1,0,0]}(T^{}L\times V)/\mathrm{\Gamma }`$ to $`V\mathrm{ker}d\mathrm{\Phi }(x)T_xM`$ by the identity map. Define $`\stackrel{~}{h}=\sigma ^{}(h\mathrm{\Phi }|\xi )`$. Because $`G`$ is assumed to be abelian, $`\xi `$ is trivially in the center of $`𝔤`$, and so $`\stackrel{~}{h}`$ is $`G`$-invariant. At this juncture it is convenient to pretend that $`U_0=(T^{}L\times V)/\mathrm{\Gamma }`$. By the invariance of $`\stackrel{~}{h}C^{\mathrm{}}((T^{}L\times V)/\mathrm{\Gamma })^{L\times K}`$, $`\stackrel{~}{h}([l,\lambda ,v])=\stackrel{~}{h}([1,\lambda ,v])`$ for all $`(l,\lambda ,v)L\times 𝔩^{}\times V`$. Now define $`h_V(v)=\stackrel{~}{h}([1,0,v])`$. This $`h_VC^{\mathrm{}}(V)^{\mathrm{\Gamma }\times K}`$ is the desired Hamiltonian: we have
$$dh_V(0)=d\stackrel{~}{h}([1,0,0])|_V=0$$
and
$$d^2h_V(0)=d^2\stackrel{~}{h}([1,0,0])|_V=d^2\left(\sigma ^{*}(h-\mathrm{\Phi }|\xi )\right)([1,0,0])|_V=\sigma ^{*}\left(d^2(h-\mathrm{\Phi }|\xi )(x)|_V\right).$$
Third and last, we prove that relative periodic orbits for $`h_V`$ give rise to relative periodic orbits for $`h`$. Consider the fully reduced system $`(P,h_P)`$ obtained by reducing the system $`(V,h_V)`$ by the action of $`\mathrm{\Gamma }\times K`$. We can arrive at $`(P,h_P)`$ also by reducing the system $`((T^{*}L\times V)/\mathrm{\Gamma },\stackrel{~}{h})`$ by the action of $`G=L\times K`$. Thus, relative periodic orbits for $`h_V`$ correspond to periodic orbits for $`h_P`$, which in turn give rise to relative periodic orbits for $`\stackrel{~}{h}`$, or equivalently to relative periodic orbits for $`h-\mathrm{\Phi }|\xi `$. But relative periodic orbits for $`h-\mathrm{\Phi }|\xi `$ are relative periodic orbits for $`h`$. Indeed, writing $`\phi _f^t`$ for the Hamiltonian flow of $`f`$, we have $`\phi _{h-\mathrm{\Phi }|\xi }^t=\phi _{-\mathrm{\Phi }|\xi }^t\circ \phi _h^t`$ by the $`G`$-invariance of $`h`$. Therefore, $`\phi _{h-\mathrm{\Phi }|\xi }^t(x)=gx`$ for some $`g\in G`$ if and only if $`\phi _h^t(x)=g^{\prime }x`$ with $`g^{\prime }=\mathrm{exp}(t\xi )g\in G`$. This concludes the proof of Theorem 1.
## 3. Concluding remarks
I. Corollary proves the existence of relative periodic orbits for a pair of axially symmetric rigid bodies subject to a coupling potential which depends on the angle between the bodies. We expect the theorem to prove the existence of relative periodic orbits for the motion of Riemann ellipsoids.
II. In Theorem 1, we compute $`d^2h_V(0)`$ from $`d^2(h\mathrm{\Phi }|\xi )(x)|_{\mathrm{ker}d\mathrm{\Phi }(x)}`$ by ‘killing’ the direction of degeneracy. This computation needs only the existence of the ‘Darboux coordinates’ $`\sigma :(T^{}L\times V)/\mathrm{\Gamma }U_0M`$ (see the proof above), but by computing without the explicit form of $`\sigma `$, we lose some information. Alternatively we can compute the Taylor expansion of $`\sigma `$ and translate the higher-order information on the jet of the original Hamiltonian $`h`$ into the information on the jet of the model Hamiltonian $`h_V`$.
|
no-problem/9901/astro-ph9901187.html
|
ar5iv
|
text
|
# ABSTRACT
## ABSTRACT
Building on the success of the Energetic Gamma Ray Experiment Telescope (EGRET) on the Compton Gamma Ray Observatory, the Gamma-ray Large Area Space Telescope (GLAST) will make a major step in the study of such subjects as blazars, gamma-ray bursts, the search for dark matter, supernova remnants, pulsars, diffuse radiation, and unidentified high-energy sources. The instrument will be built on new and mature detector technologies such as silicon strip detectors, low-power low-noise LSI, and a multilevel data acquisition system. GLAST is in the research and development phase, and one full tower (of 25 total) is now being built in collaborating institutes. The prototype tower will be tested thoroughly at SLAC in the fall of 1999.
## 1 INTRODUCTION
As the highest-energy photons, gamma rays have an inherent interest to astrophysicists and particle physicists studying high-energy, nonthermal processes. Gamma-ray telescopes complement those at other wavelengths, especially radio, optical, and X-ray, providing the broad, multiwavelength coverage that has become such a powerful aspect of modern astrophysics. EGRET, the high-energy telescope on the Compton Gamma Ray Observatory, has led the way in such an effort, contributing to broad-band studies of blazars, gamma-ray bursts, pulsars, solar flares, and diffuse radiation. Now development is underway for the next significant advance in high-energy gamma-ray astrophysics, GLAST, which will have $`\sim `$30 times the sensitivity of EGRET at 100 MeV and more at higher energies, including the largely-unexplored 30–300 GeV band. The following sections describe the science goals, instrument technologies, and international collaboration for GLAST.
Some key scientific parameters for GLAST are shown in Table 1 (Bloom, 1996; Michelson, 1996).
## 2 SCIENTIFIC GOALS FOR GLAST
### 2.1 Blazars
Blazars are thought to be active galactic nuclei consisting of accretion-fed supermassive black holes with jets of relativistic particles directed nearly toward our line of sight. The formation, collimation, and particle acceleration in these powerful jets remain important open questions. Many blazars are seen as bright, highly-variable gamma-ray sources, with the high-energy gamma rays often dominating the luminosity (Hartman et al. 1997). For this reason, the gamma rays provide a valuable probe of the physics under these extreme conditions, especially when studied as part of multiwavelength campaigns (e.g. Shrader and Wehrle, 1997). With its wide field of view and high sensitivity, GLAST will enable blazar studies with far better resolution and time coverage than was possible with EGRET. GLAST should detect thousands of blazars.
Blazars are often very distant objects, and the extension of the gamma-ray spectrum into the multi-GeV range opens the possibility of using blazars as cosmological probes. In the energy range beyond that observed by EGRET but accessible to GLAST, the blazar spectra should be cut off by absorption effects of the extragalactic background light produced by galaxies during the era of star formation. A statistical sample of high-energy blazar spectra at different redshifts may provide unique information on the early universe (Macminn and Primack, 1996).
### 2.2 Gamma-Ray Bursts
The recent breakthrough associating gamma-ray bursts with distant galaxies (e.g. Djorgovski et al. 1997) has changed the focus of gamma-ray burst research from the question of where they are to the questions of what they are and how they work. The power source and emission mechanisms for gamma-ray bursts remain mysteries. The high-energy gamma radiation seen from some bursts by EGRET (Dingus et al. 1995) indicates that GLAST will provide important information about these questions. With its large field of view, GLAST can expect to detect over 100 bursts per year at GeV energies compared to about one per year for EGRET, allowing studies of the high-energy component of the burst spectra.
### 2.3 Search for Dark Matter
One of the leading candidates for the dark matter now thought to dominate the universe is a stable, weakly-interacting massive particle (WIMP). One candidate in supersymmetric extensions of the standard model in particle physics is the neutralino, which might annihilate into gamma rays in the 30-300 GeV range covered by GLAST (see Jungman, Kamionkowski and Griest, 1996, for a general discussion of dark matter candidates). The good energy resolution possible with the GLAST calorimeter will make a search for such WIMP annihilation lines possible.
### 2.4 Pulsars
A number of young and middle-aged pulsars have their energy output dominated by their gamma-ray emission. Because the gamma rays are directly related to the particles accelerated in the pulsar magnetospheres, they give specific information about the physics in these high magnetic and electric fields. Models based on the EGRET-detected pulsars make specific predictions that will be testable with the larger number of pulsars that GLAST’s greater sensitivity will provide (Thompson, et al. 1997).
### 2.5 Supernova Remnants and the Origin of Cosmic Rays
Although a near-consensus can be found among scientists that the high-energy charged particle cosmic rays originate in supernova remnants (SNR), the proof of that hypothesis has remained elusive. Some EGRET gamma-ray sources appear to be associated with SNR, but the spatial and spectral resolution make the identifications uncertain (e.g. Esposito et al. 1996). If SNR do accelerate cosmic rays, they should produce gamma rays at a level that can be studied with GLAST, which will be able to resolve some SNR spatially.
### 2.6 Diffuse Gamma Radiation
Within the Galaxy, GLAST will explore the diffuse radiation on scales from molecular clouds to galactic arms, measuring the product of the cosmic ray and gas densities. The extragalactic diffuse radiation may be resolved; GLAST should detect all the blazars suspected of producing this radiation. Any residual diffuse extragalactic gamma rays would have to come from some new and unexpected source.
### 2.7 Unidentified Sources and New Discoveries
Over half the sources seen by EGRET in the high-energy gamma-ray sky remain unidentified with known astrophysical objects. Some may be radio-quiet pulsars, some unrecognized blazars, and some are likely to be completely new types of object (for a recent discussion, see Mukherjee et al., 1997). In general, the EGRET error boxes are too large for spatial correlation, and the photon density is too small for detailed timing studies. Both these limitations will be greatly alleviated with GLAST. In particular, the combination of GLAST with the next generation of X-ray telescopes should resolve a large part of this long-standing mystery. The new capabilities of GLAST will surely produce unanticipated discoveries, just as each previous generation of gamma-ray telescope has done.
## 3 GLAST HARDWARE DEVELOPMENT
### 3.1 Glast Technologies
Any high-energy gamma-ray telescope operates in the range where pair production is the dominant energy loss process; therefore, GLAST (see Fig. 1 for one concept configuration) shares some design heritage with SAS-2, COS-B, and EGRET: it has a plastic anticoincidence system, a tracker with thin plates of converter material, and an energy measurement system. What GLAST benefits from most is the rapid advance in semiconductor technology since the previous gamma-ray missions. The silicon revolution affects GLAST in two principal ways as will be described below.
#### 3.1.1 Multi-Layer Si Strip Tracker
The tracker consists of solid-state devices instead of a gas/wire chamber. The baseline design for GLAST uses Si strip detectors with 195 $`\mu `$m pitch (see Fig. 2), offering significantly better track resolution with no expendable gas or high voltage discharge required. Low-power application specific integrated circuits (ASICs) allow readout of approximately $`10^6`$ channels of tracker with only 260 W.
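For scale (a back-of-the-envelope figure added here, not a number quoted by the collaboration), this corresponds to an average power budget of only about 260 $`\mu `$W per channel ($`260\mathrm{W}/10^6`$ channels) for amplification, discrimination, and readout, which is why dedicated low-power, low-noise ASICs are essential to the design.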
The 77 $`\mathrm{m}^2`$ of Si strip detectors planned for GLAST will be the largest Si strip detector system ever made. Since manufacturers (Hamamatsu, Micron and others) have decided to move to 6-inch wafers, we can expect a good cost/performance ratio.
#### 3.1.2 On-board Computer
On-board computing, which was extremely limited in the Compton Observatory era, is now possible on a large scale. The 32-bit, radiation-hard processors now available allow software to replace some of the hardware triggering of previous missions and also enable considerable on-board analysis of the tracker data to enhance the throughput of useful gamma-ray data.
### 3.2 Plan and Schedule
Two prototype Si strip detectors have been made. The first prototype was a 6 cm detector. The second prototype (Fig. 2), which had redundancy strips and bypass strips, showed superb characteristics (bad strips $`<`$ 0.03%). A third prototype, made from a 6-inch wafer, is being requested.
A “mini-tower” consisting of a stack of the 6 cm Si strip detectors, a CsI(Tl) calorimeter, and a plastic scintillator anticoincidence, was tested at a tagged gamma-ray beam at SLAC in Fall, 1997 (Ritz, et al., 1998). A full prototype tower will be built by summer, 1999, for the Fall, 1999, beam test. We are expecting full production of the GLAST hardware to begin in 2001. We are currently waiting for approvals from DOE and NASA. We will also apply for the grant-in-aids from Monbusho (the Ministry of Education of Japan).
## 4 THE GLAST COLLABORATION
GLAST is planned as a facility-class mission involving an international collaboration of the particle physics and astrophysics communities. Currently, scientists from the United States, Japan, France, Germany, and Italy are involved in the development effort. GLAST is currently listed as a candidate for a new start at NASA, with a possible launch in 2005. Further information about GLAST can be found at
http://www-glast.stanford.edu/
## 5 REFERENCES
* Bloom, E.D., Sp. Sci. Rev., 75, 109 (1996)
* Dingus, B.L., Ap. & Sp. Sci., 231, 187 (1995)
* Djorgovski, S.G. et al., IAU Circ., 6655 (1997)
* Esposito, J.A. et al., ApJ, 461, 820 (1996)
* Hartman, R.C., Collmar, W., von Montigny, C., & Dermer, C.D., in Proc. Fourth Compton Symposium, ed. C.D. Dermer, M.S. Strickman, J.D. Kurfess, pp. 307–327, AIP CP410, Woodbury, NY (1997).
* Jungman, G., Kamionkowski, M., & Griest, K., Phys. Reports, 267, 195 (1996)
* Macminn, D. & Primack, J.R., Sp. Sci. Rev., 75, 413 (1996)
* Michelson, P.F., SPIE, 2806, 31 (1996)
* Mukherjee, R., Grenier, I.A., & Thompson, D.J., in Proc. Fourth Compton Symposium, ed. C.D. Dermer, M.S. Strickman, J.D. Kurfess, pp. 394–406, AIP CP410, Woodbury, NY (1997)
* Ritz, S.M. , et al., NIM, submitted (1998)
* Shrader, C.R. & Wehrle, A.E., in Proc. Fourth Compton Symposium, ed. C.D. Dermer, M.S. Strickman, J.D. Kurfess, pp. 328–343, AIP CP410, Woodbury, NY (1997)
* Thompson, D.J., Harding, A.K., Hermsen, W., & Ulmer, M.P., Proc. Fourth Compton Symposium, ed. C.D. Dermer, M.S. Strickman, J.D. Kurfess, pp. 39–56, AIP CP410, Woodbury, NY (1997)
|
no-problem/9901/hep-ph9901340.html
|
ar5iv
|
text
|
# 1 Introduction
## 1 Introduction
The CPLEAR collaboration has measured a hitherto unreported C- and CP-asymmetry in $`\overline{p}p`$ annihilation which, under reasonable assumptions, can be identified with a previously predicted T-asymmetry. Until now, there has been no credible evidence of any departure from reciprocity in any reaction; also, questions have been raised about the significance of the test proposed in Ref. . Therefore, it may be useful to critically examine the circumstances under which the C- and CP-asymmetry reported by CPLEAR can be interpreted as a demonstration of deviation from T-invariance. We indicate further tests, and the conditions which must be satisfied for the CP-asymmetry found by CPLEAR to be consistent with T-invariance.
## 2 Expectation of T-Asymmetry
The departure from CP-invariance in neutral kaon decays has been reliably established by a number of independent measurements, including a predicted asymmetry in $`K_L\pi ^+\pi ^{}e^+e^{}`$ decays which has been observed recently . However, despite many searches, there has been no clear evidence of CP-noninvariance in any phenomenon other than neutral kaon decays. There,the observed effects can be attributed entirely to $`K^0\overline{K}^0`$ mixing, which could arise from CP-noninvariant interactions much weaker than the weak interactions responsible for the decay of kaons. This may explain the failure to see measureable CP-asymmetric effects in other phenomena.
Invariance of physical laws under inversion of 4-dimensional space-time — which is not required by Lorentz-invariance but obtains in most Lorentz-invariant theories with further minimal analytic properties, e.g. field theories described by local Lagrangians — can be given a consistent interpretation only if space-time inversion PT is accompanied by particle-antiparticle conjugation C. Within the class of such TCP-invariant theories, lack of symmetry with respect to any of the constituent operations, e.g. particle-antiparticle exchange C or “combined inversion” CP, in which space-coordinates are inverted simultaneously with particle-antiparticle interchange, must be compensated by a corresponding asymmetry with respect to one or more of the other constituent operations, to preserve the overall TCP- symmetry. On this basis, Lee, Oehme and Yang showed that the possible noninvariance with respect to space-inversion proposed to explain the “tau-theta puzzle” necessarily required another presumed symmetry to be broken; they showed that observation of the suggested P-noninvariant effects would require C-invariance also to be broken. An elegant way to preserve the symmetry of space, even if P is abandoned, suggested by several authors , is to require exact CP-symmetry, in which case TCP-invariance would automatically assure exact T-invariance as well. The subsequent discovery that CP is not a valid symmetry in K-meson decays, requires T-invariance also to fail if TCP-invariance is to survive. Following the discovery of parity- nonconservation, searches for T-noninvariance were based largely on philosophical grounds: if physical laws are not indifferent to space-inversion, perhaps they might not be symmetric with respect to $`t`$-inversion either. After the discovery of CP-nonconservation, the search for T-noninvariance became a logical imperative. Either T-invariance would also fail, as TCP-invariance requires, or one would face the even greater challenge of TCP-noninvariance.
As long as deviations from CP-symmetry are confined to neutral kaon decays and associated effects, the only place where one has a definite expectation of seeing T-noninvariance must be in the same phenomena. Furthermore, if TCP-invariance is valid, the observed CP- noninvariance manifested in neutral kaon decays must be accompanied by corresponding deviations from T-invariance, which is more precisely described as symmetry with respect to motion-reversal. TCP-invariance requires
$$(\stackrel{~}{a}_T|S|\stackrel{~}{b}_T)=(b|S|a)$$
(1)
where $`\stackrel{~}{c}`$ represents the CP-transform of the channel $`c`$ and $`c_T`$ represents its time-reverse, viz. the channel $`c`$ with all particle momenta and spins reversed. The requirement of CP-invariance:
$$(\stackrel{~}{b}|S|\stackrel{~}{a})=(b|S|a)$$
(2)
taken together with Eq. (1), would require that
$$(\stackrel{~}{a}_T|S|\stackrel{~}{b}_T)=(\stackrel{~}{b}|S|\stackrel{~}{a})$$
(3)
i.e. CP-invariance requires reciprocity if TCP-invariance is valid. Conversely, if the requirement, Eq. (2), of CP-invariance fails for a related pair of transition matrix-elements, there must be a corresponding failure of reciprocity in the same case .
We already mentioned that a very feeble CP-noninvariant interaction contributing to $`K^0\overline{K}^0`$ mixing suffices to account for all observed CP-asymmetric effects. Therefore, the departure from T-invariance expected on the basis of TCP-invariance must also appear in $`K^0\overline{K}^0`$ mixing. Departure from reciprocity would appear in a difference between the rates of $`\overline{K}^0\to K^0`$ and $`K^0\to \overline{K}^0`$ transitions, expressed by a T-asymmetry parameter:
$$A_T=\frac{P_{K\overline{K}}(\tau )-P_{\overline{K}K}(\tau )}{P_{K\overline{K}}(\tau )+P_{\overline{K}K}(\tau )}$$
(4)
which is found to be a constant in the generalized Weisskopf-Wigner approximation. Its value is given by
$$A_T^{\mathrm{th}}=2\mathrm{Re}(ϵ_S+ϵ_L)=2\mathrm{Re}\langle K_L|K_S\rangle $$
(5)
to lowest order in the CP-nonconserving parameters $`ϵ_{S,L}`$, defined by
$$K_{S,L}\propto [1+ϵ_{S,L}]K^0\pm [1-ϵ_{S,L}]\overline{K}^0.$$
(6)
TCP-invariance requires $`ϵ_S`$ and $`ϵ_L`$ to be equal; on that basis, the value of $`A_T`$ could be predicted to be 4Re$`ϵ=(6.4\pm 1.2)\times 10^{-3}`$ . Even without assuming any symmetry, the last quantity on the right-hand side of Eq. (5) can be deduced by appeal to unitarity . On the basis of reasonable assumptions, the most relevant of which were subsequently verified , about upper limits on minor modes of neutral kaon decay, it was shown that the expected T-asymmetry should have substantially the value predicted for the TCP-invariant case, whether that symmetry is assumed or not.
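For orientation (standard numbers inserted here for the reader's convenience, not taken from the original references): with $`|ϵ|\approx 2.3\times 10^{-3}`$ and a phase of about $`45^{\circ }`$, one has $`\mathrm{Re}ϵ\approx 1.6\times 10^{-3}`$, so that $`4\mathrm{Re}ϵ\approx 6.4\times 10^{-3}`$, the value quoted above.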
## 3 CP-Asymmetry Measured by CPLEAR
$`\overline{p}p`$ annihilations into $`\pi ^+K^{}`$$`K^0`$” and $`\pi ^{}K^+`$$`\overline{K}^0`$”, which are expected to occur equally frequently by CP-invariance, were selected by kinematic analysis, and the frequencies of beta-decay of the neutral kaons were compared for the two cases. If we accept the $`\mathrm{\Delta }S=\mathrm{\Delta }Q`$ rule which requires that $`\pi ^{}e^+\nu `$ and $`\pi ^+e^{}\overline{\nu }`$ arise only from $`K^0`$ and $`\overline{K}^0`$, respectively, and assume that the two decay rates are equal, as required by TCP-invariance, then the observed $`\pi ^{}e^+\nu `$ and $`\pi ^+e^{}\overline{\nu }`$ rates at any time $`\tau `$ measure the $`K^0`$ and $`\overline{K}^0`$ populations at that time. Assuming initial equality of $`K^0`$ and $`\overline{K}^0`$ populations and survival probabilities, any inequality between the observed annihilation rates into:
$$p\overline{p}\to \pi ^+K^-\{\pi ^+e^-\overline{\nu }\}\text{ and }\pi ^-K^+\{\pi ^-e^+\nu \}$$
(7)
must arise from a difference between $`\overline{K}^0K^0`$ and $`K^0\overline{K}^0`$ transition rates. This is the conclusion drawn by CPLEAR.
The CP-asymmetry which they measure is:
$$A_l=\frac{R[\pi ^-K^+\{\pi ^-e^+\nu \}]-R[\pi ^+K^-\{\pi ^+e^-\overline{\nu }\}]}{R[\pi ^-K^+\{\pi ^-e^+\nu \}]+R[\pi ^+K^-\{\pi ^+e^-\overline{\nu }\}]}$$
(8)
In Eqs. (7) and (8), the $`\pi e\nu `$ configurations in braces are observed as (delayed) end-products deduced kinematically to arise from beta-decays of neutral kaons. Assuming the validity of the $`\mathrm{\Delta }S=\mathrm{\Delta }Q`$ rule, this asymmetry can be written as:
$$A_l=\frac{P_{K\overline{K}}(\tau )R[K^0\to \pi ^-e^+\nu ]-P_{\overline{K}K}(\tau )R[\overline{K}^0\to \pi ^+e^-\overline{\nu }]}{P_{K\overline{K}}(\tau )R[K^0\to \pi ^-e^+\nu ]+P_{\overline{K}K}(\tau )R[\overline{K}^0\to \pi ^+e^-\overline{\nu }]}.$$
(9)
TCP-invariance requires that
$$R[K^0\to \pi ^-e^+\nu ]=R[\overline{K}^0\to \pi ^+e^-\overline{\nu }],$$
therefore, under the assumption of TCP-invariance, the CPLEAR asymmetry becomes
$$A_l^{\mathrm{TCP}}=\frac{P_{K\overline{K}}(\tau )-P_{\overline{K}K}(\tau )}{P_{K\overline{K}}(\tau )+P_{\overline{K}K}(\tau )}=A_T$$
(10)
which is a measure of T-asymmetry at the same time as CP-asymmetry. Over a time-interval $`\tau _S<\tau <20\tau _S`$, the observed asymmetry is consistent with being a constant, with a value reported as
$$A_T^{\mathrm{exp}}=(6.6\pm 1.3)\times 10^{-3}$$
(11)
which agrees with the theoretical prediction. On the other hand, if we insist on exact reciprocity,
$$P_{K\overline{K}}(\tau )=P_{\overline{K}K}(\tau ),$$
(12)
then Eq. (9) reduces, for the case of exact T-invariance, to
$$A_l^T=\frac{R[K^0\to \pi ^-e^+\nu ]-R[\overline{K}^0\to \pi ^+e^-\overline{\nu }]}{R[K^0\to \pi ^-e^+\nu ]+R[\overline{K}^0\to \pi ^+e^-\overline{\nu }]}$$
(13)
and represents a (CP- and) CPT-violating effect. The observed asymmetry $`A_l`$, Eq. (8), requires the beta-decay rate for $`K^0\to \pi ^-e^+\nu `$ to exceed that for $`\overline{K}^0\to \pi ^+e^-\overline{\nu }`$ by about 1.3$`\%`$, if exact T-invariance is imposed. If we parametrize the deviation from TCP-invariance of kaon beta-decay amplitudes by setting:
$$\langle \pi ^+e^-\overline{\nu }|T|\overline{K}^0\rangle =(1+y)\langle \pi ^-e^+\nu |T|K^0\rangle $$
(14)
where $`y`$ can be taken to be real without loss of generality, the CP-asymmetry, Eq. (13), is given, to lowest order in $`y`$, by -$`y`$; $`y`$ is therefore required to have the value:
$$y=(6.6\pm 1.3)\times 10^3$$
(15)
if exact T-invariance is demanded.
The charge-asymmetry in $`K_L^0\to \pi e\nu `$ decays was accurately measured in several concordant experiments, whose combined result is quoted as:
$$\delta _l=(3.27\pm 0.12)\times 10^{-3}$$
(16)
The phenomenological analysis without assumption of any symmetry, but assuming the validity of $`\mathrm{\Delta }Q=\mathrm{\Delta }S`$, yields
$$\delta _{l,L}=2\mathrm{Re}ϵ_L-y.$$
(17)
The corresponding quantity for $`K_S^0`$ decays is
$$\delta _{l,S}=2\mathrm{Re}ϵ_S-y.$$
(18)
T-invariance requires $`ϵ_S=-ϵ_L`$, therefore Eqs. (17) and (18) would constrain the leptonic charge-asymmetry from $`K_S^0`$ decays to have the value:
$$\delta _{l,S}^T=-\delta _{l,L}-2y=(9.9\pm 1.3)\times 10^{-3},$$
(19)
viz. three times the value, Eq. (16), for $`K_L^0\pi e\nu `$ decays, if T-invariance is to be sustained. The CPLEAR data probably contain the information required to confirm or refute this expectation . If not, $`\mathrm{\Phi }`$-decays from DA$`\mathrm{\Phi }`$NE, which provide a certified $`K_S^0`$ in association with each $`K_L^0`$ decay, should provide a clean $`K_S^0`$ sample to test the unambiguous prediction (19) required by the hypothesis of T-invariance.
## 4 Conclusions
The simplest interpretation of the CPLEAR asymmetry, reported in Eq. (11), is that it exhibits the T-asymmetry predicted previously, and confirms the sign and magnitude of the expected effect. To this, the logical objection may be raised that the CP-asymmetry measured by CPLEAR translates into the T-asymmetry factor $`A_T`$ defined in Eq. (4) only if the $`\overline{p}p`$ annihilation rates into $`\pi ^+K^{}K^0`$ and $`\pi ^{}K^+\overline{K}^0`$ and the beta-decay rates for $`K^0\pi ^{}e^+\nu `$ and for $`\overline{K}^0\pi ^+e^{}\overline{\nu }`$ are assumed to be equal. The latter is required by TCP-invariance; but if one is prepared to accept TCP as an article of faith, then T-noninvariance follows as soon as CP-invariance fails and no further demonstration is required. Analysis of the CPLEAR asymmetry, without assuming equality of $`K^0`$ and $`\overline{K}^0`$ beta-decay rates, shows that, subject to the $`\mathrm{\Delta }Q=\mathrm{\Delta }S`$ rule, the leptonic charge-asymmetry for $`K_S^0\pi e\nu `$ decays should be three times larger than the measured asymmetry for $`K_L^0`$ decays, if T-invariance is valid. Thus, it should not be too difficult to distinguish between the simple interpretation of the CPLEAR charge-asymmetry as a direct demonstration of T-noninvariance, and the desperate and radical resort to TCP-noninvariance required to preserve T-invariance; these are the only two alternatives unless one is willing to countenance unequal $`production`$ of $`K^0`$ and $`\overline{K}^0`$ in $`\overline{p}p`$ annihilations.
## 5 Acknowledgments
I thank G.V. Dass, G. Karl, and A. Pilaftsis for careful reading of the manuscript and for helpful suggestions to clarify its meaning.
|
no-problem/9901/hep-lat9901015.html
|
ar5iv
|
text
|
# FSU-SCRI-99-04 Approach to the Continuum Limit of the Quenched Hermitian Wilson–Dirac Operator
## I Introduction
Continuum gauge field theory works under the assumption that all fields are smooth functions of space–time. This assumption is certainly a valid one for quantum gauge field theories that respect gauge invariance: One should always be able to fix a gauge so that the gauge fields are smooth functions of space–time since the action that contains derivatives in gauge fields will not allow it otherwise. The space of smooth gauge fields typically has an infinite number of disconnected pieces where the number of pieces is in one to one correspondence with the set of integers . Every gauge field in each piece can be smoothly interpolated to another gauge field in the same piece but there is no smooth interpolation between gauge fields in different pieces. This is the case for U(1) gauge fields in two dimensions and SU(N) gauge fields in four dimensions.
In lattice gauge theory, gauge fields are represented by link variables $`U_\mu (x)`$ that are elements of the gauge group. Continuum derivatives are replaced by finite differences and the concept of smoothness of gauge fields does not apply. Any lattice gauge field configuration, $`U_\mu (x)=e^{iA_\mu (x)}`$ can be deformed to the trivial gauge field configuration by the interpolation $`U_\mu (x;\tau )=e^{i\tau A_\mu (x)}`$ with $`U_\mu (x;1)=U_\mu (x)`$ and $`U_\mu (x;0)=1`$. Since smoothness does not hold on the lattice away from the continuum limit, the space of gauge fields on the lattice forms a simply connected space. Separation of the gauge field space into an infinite number of disconnected pieces can only be realized in the continuum limit.
In this paper, we will address the following basic question: Do we see a separation of lattice gauge field configurations into topological classes as we approach the continuum limit? To answer this question, we will use several ensembles of lattice gauge field configurations obtained from pure SU(2) and SU(3) gauge field theory. We will use a Wilson–Dirac fermion to probe the lattice gauge field configuration. Our motivation is the overlap formalism for chiral gauge theories. Topological aspects of the background gauge field are properly realized by the chiral fermions in this formalism and therefore it provides a good framework to answer the above question. The hermitian Wilson–Dirac operator enters the construction of lattice chiral fermions in the overlap formalism and topological properties of the gauge fields are studied by looking at the spectral flow of the hermitian Wilson–Dirac operator as a function of the fermion mass. Contrary to some other approaches to investigating the topological properties of lattice gauge field configurations, we do not modify the gauge fields, generated by some Monte Carlo procedure, in any way.
The paper is organized as follows. We begin by explaining in Section II the connection between the spectral flow of the hermitian Wilson–Dirac operator and the topological content of the background gauge field. Possible scenarios for the qualitative nature of the spectrum on the lattice are presented. In Section III we present numerical results on the spectral properties of lattice gauge field ensembles and their behavior as the continuum limit is approached. Results for the topological susceptibility in pure SU(2) and SU(3) gauge theory computed using the overlap definition of the topological charge are shown in Section IV. We also present results on the size distribution of the zero modes of the Hermitian Wilson–Dirac operator.
## II Spectral flow, topology and condensates
The massless Dirac operator in the continuum anticommutes with $`\gamma _5`$. Therefore, the non-zero imaginary eigenvalues of the massless Dirac operator come in pairs, $`\pm i\lambda `$, with $`\psi `$ and $`\gamma _5\psi `$ being the two eigenvectors. The zero eigenvalues of the massless Dirac operator are also eigenvalues of $`\gamma _5`$. These chiral zero modes are a consequence of the topology of the background gauge field. It is useful to consider the spectral flow of the Hermitian Dirac operator:
$$\mathrm{H}(m)=\gamma _5(\gamma _\mu D_\mu m)$$
(1)
The non-zero eigenvalues of the massless Dirac operator combine in pairs to give the following eigenvalue equation:
$$\mathrm{H}(m)\chi _\pm =\lambda _\pm (m)\chi _\pm =\pm \sqrt{\lambda ^2+m^2}\chi _\pm .$$
(2)
$`\chi _\pm `$ are linear combinations of $`\psi `$ and $`\gamma _5\psi `$. The eigenvalues $`\lambda _\pm (m)`$ of these modes never cross the x-axis in the spectral flow of $`\mathrm{H}(m)`$ as a function of $`m`$. The zero eigenvalues, $`\gamma _\mu D_\mu \varphi _\pm =0`$ with $`\gamma _5\varphi _\pm =\pm \varphi _\pm `$ result in
$$\mathrm{H}(m)\varphi _\pm =\mp m\varphi _\pm $$
(3)
These modes, associated with topology, result in flow lines that cross the x-axis. A positive slope corresponds to negative chirality and vice-versa. The net number of lines crossing zero (the difference of positive and negative crossings) is the topology of the background gauge field. Global topology of gauge fields causes exact zero eigenvalues at $`m=0`$. In addition, one can have a non-zero spectral density at zero. In an infinite volume in the continuum, the spectrum is continuous and $`\rho (\lambda ;m)d\lambda `$ is the number of eigenvalues in the infinitesimal region $`d\lambda `$ around $`\lambda `$. The spectral gap $`\lambda _g(m)`$, defined as the lowest eigenvalue at $`m`$, is equal to $`|m|`$. The spectral density at zero, $`\rho (0;m)`$, can be non-zero only at $`m=0`$, indicating spontaneous chiral symmetry breaking in a theory like QCD. The continuum picture is shown in Fig. 1.
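As a concrete illustration of this structure (our own toy sketch, not part of the original analysis; the block sizes and random matrix are assumptions), one can build a hermitian operator of the form $`\gamma _5(Dm)`$ in a chiral basis. The chirality-asymmetric off-diagonal block plays the role of the Dirac operator: the paired eigenvalues sit at $`\pm \sqrt{\lambda ^2+m^2}`$ and never cross zero, while the $`|n_+-n_-|`$ unpaired modes flow linearly in $`m`$ and cross at $`m=0`$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_plus, n_minus = 5, 3        # dimensions of the +/- chirality sectors (hypothetical toy choice)
W = rng.normal(size=(n_plus, n_minus)) + 1j * rng.normal(size=(n_plus, n_minus))

def h_toy(m):
    """Toy analogue of H(m) = gamma_5 (D - m) in a basis where gamma_5 = diag(+1, -1)."""
    return np.block([[-m * np.eye(n_plus), W],
                     [W.conj().T, m * np.eye(n_minus)]])

for m in (0.0, 0.25, 0.5):
    ev = np.sort(np.linalg.eigvalsh(h_toy(m)))
    print(f"m = {m:4.2f}:", np.round(ev, 3))
# n_plus - n_minus = 2 eigenvalues sit exactly at -m (they cross zero at m = 0);
# the remaining ones come in pairs +-sqrt(lambda^2 + m^2) and never cross.
```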
To study the possible emergence of the above picture as the continuum limit of a lattice gauge theory picture, we need to have a lattice realization of $`\mathrm{H}(m)`$. It is important to note that we are interested in the spectral flow of a single Dirac fermion. With this in mind, we choose the hermitian Wilson–Dirac operator obtained by multiplying the standard Wilson–Dirac operator by $`\gamma _5`$:
$$\mathrm{H}_L(m)=\left(\begin{array}{cc}\mathrm{B}(U)-m& \mathrm{C}(U)\\ \mathrm{C}^{\dagger }(U)& -\mathrm{B}(U)+m\end{array}\right).$$
(4)
$`\mathrm{C}`$ is the naive lattice first derivative term and $`\mathrm{B}`$ is the Wilson term. We are interested in the spectral flow of $`\mathrm{H}_L(m)`$ as a function of $`m`$. We note that $`m=0,2,4,6,8`$ are the points where the free fermions become massless with degeneracies $`1,4,6,4,1`$ respectively. Next we observe that $`\mathrm{H}_L(m)`$ can have a zero eigenvalue only if $`m>0`$ .
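For orientation, these free-field massless points can be checked directly: at the corners of the Brillouin zone, $`p_\mu \{0,\pi \}`$, the naive derivative vanishes and the Wilson term equals $`2k`$, where $`k`$ is the number of momentum components equal to $`\pi `$. A short check of this counting (our own sketch, not taken from the paper) is:

```python
import numpy as np
from itertools import product
from math import comb

# Free Wilson fermions: H_L(m) has a zero mode when sin(p_mu) = 0 for every mu
# and the Wilson term sum_mu (1 - cos p_mu) equals m, i.e. at corners p_mu in {0, pi}.
counts = {}
for corner in product((0.0, np.pi), repeat=4):
    m_zero = round(sum(1.0 - np.cos(p) for p in corner))
    counts[m_zero] = counts.get(m_zero, 0) + 1

print(sorted(counts.items()))            # [(0, 1), (2, 4), (4, 6), (6, 4), (8, 1)]
assert all(counts[2 * k] == comb(4, k) for k in range(5))
```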
We focus on the range $`0\le m\le 2`$ and propose the following scenarios for the spectral gap and the spectral density at zero on the lattice and their approach to the continuum limit. Six different, but not completely independent, scenarios are possible as shown in Fig. 2.
* On the lattice we have (a) and (i) with $`m_c\to 0`$ in the continuum limit. $`\rho (0;m_c)`$ approaches the continuum limit with proper scaling taken into account.
* On the lattice we have (b) and (ii) where (a) and (i) are the continuum limit. In this case, $`m_1\to 0`$ and $`m_2\to 0`$. In the limit we also get $`\rho (0;0)`$.
* On the lattice we first have (c) and (ii) going, at weaker coupling, to (b) and (ii), where (a) and (i) are the continuum limit. The gap opens up at some $`m_2>m_1`$ at some coupling and afterwards the approach to the continuum is as in the previous scenario.
* On the lattice we have (c) and (ii) where (c) and (i) are the continuum limit. In this case, $`m_1\to 0`$. However, the gap does not open up for $`m>0`$ in the continuum limit.
* On the lattice we first have (c) and (iii), going to (b) and (ii) at some coupling. Afterwards, the approach to the continuum limit is again as in the second scenario.
* On the lattice we have (c) and (iii) where (c) and (i) are the continuum limit. Here also $`m_1\to 0`$ and $`\rho (0;m)=0`$ if $`m>0`$. However, the gap does not open up for $`m>0`$.
We will show that numerical studies of the spectral flow on various ensembles favor the last scenario. Before we do that, we present a topological argument which will show that zero eigenvalues of $`\mathrm{H}_L(m)`$ can occur anywhere in the region $`0<m<8`$. The spectrum of $`\mathrm{H}_L(m)`$ and $`\mathrm{H}_L(8-m)`$ are identical for an arbitrary gauge field background. Since zero eigenvalues can occur only for $`m>0`$ in $`\mathrm{H}_L(m)`$, it follows that zero eigenvalues can occur only in the region $`0<m<8`$. It also follows that every level crossing zero from above in the spectral flow of $`\mathrm{H}_L(m)`$ must be accompanied by a level crossing zero from below. In a single instanton background a level crossing zero from above at $`m_+`$ is accompanied by another level crossing zero from below at $`2>m_{-}>m_+`$. The second crossing is due to one of the four doubler modes. Both $`m_\pm `$ will be functions of the size of the instanton $`\rho `$ in lattice units. For $`\rho \gg a`$, $`m_+\to 0`$ and $`m_{-}\to 2`$. As $`\rho `$ decreases, $`m_+`$ moves farther away from zero and $`m_{-}`$ moves away from $`2`$ and closer to $`m_+`$. This motion as a function of $`\rho `$ is smooth and for some value of $`\rho `$, $`m_+=m_{-}`$. The spectral flow changes smoothly as the configuration is changed slowly. As we move in configuration space the topological charge of a configuration changes. Tracing the spectral flow as a function of configurations shows that zero eigenvalues of $`\mathrm{H}_L(m)`$ can occur anywhere in the region $`0<m<8`$.
## III Spectral density at zero
In the previous section, we argued that $`\mathrm{H}_L(m)`$ can have zero crossings anywhere in the region $`m_1\le m\le 2`$. Therefore the spectral gap is zero in this region on the lattice. This has direct implications for how the spectral density at zero behaves on the lattice. A careful study of the spectral density at zero has been performed on a variety of SU(3) pure gauge ensembles. This is done by computing the low lying eigenvalues of $`\mathrm{H}_L(m)`$ using the Ritz functional . The low lying eigenvalues over the whole ensemble are then used to obtain the integral of the spectral density function, $`\int _0^\lambda \rho (\lambda ^{\prime };m)d\lambda ^{\prime }`$. A linear fit in $`\lambda `$ is made, and $`\rho (0;m)`$ is obtained as the coefficient of the linear term.
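Schematically, the extraction of $`\rho (0;m)`$ described above amounts to a straight-line fit of the integrated density of the low-lying eigenvalues. The sketch below is our own illustration of that procedure; the function name, normalization convention and toy data are assumptions, not the paper's code.

```python
import numpy as np

def rho0_from_eigs(eigs, lam_max, volume, n_config):
    """Estimate rho(0;m) from low-lying |eigenvalues| of H_L(m) over an ensemble.

    Fits the integrated spectral density N(lam) with a + rho0 * lam on [0, lam_max],
    as described in the text; rho0 is returned as the slope of that fit.
    """
    lam = np.sort(np.abs(np.asarray(eigs, dtype=float)))
    lam = lam[lam <= lam_max]
    n_int = np.arange(1, lam.size + 1) / (volume * n_config)   # integrated density per unit volume
    slope, _intercept = np.polyfit(lam, n_int, 1)
    return slope

# toy data with a flat low-lying density, just to exercise the fit
toy = np.random.default_rng(1).uniform(0.0, 0.1, size=2000)
print(rho0_from_eigs(toy, lam_max=0.1, volume=10_000, n_config=10))   # ~0.2 for this toy input
```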
All ensembles show a peak in $`\rho (0;m)`$ at some value of $`m`$ near $`m_1`$. There is a sharp rise to the peak from the left and a gradual fall from the peak on the right. There is a gradual rise again to a second peak at the location of the first set of doublers. The peak itself gets sharper and moves to the left as one goes toward the continuum limit. $`\rho (0;m)`$ is non-zero for $`m_1\le m\le 2`$ in the infinite volume limit at any finite value of the lattice gauge coupling (see below). $`m_1`$ goes to zero as the lattice coupling approaches the continuum limit. $`\rho (0;m)`$ approaches the infinite lattice volume limit from below as expected. We are fairly confident that we have the infinite volume limit estimate for $`\rho (0;m)`$ at all the lattice spacings plotted in Fig. 3.
In Fig. 4 and Fig. 5, we focus on the behavior of $`\rho (0;m)`$ at a fixed $`m`$ as one approaches the continuum limit. In Figure 4 we plot $`\rho (0;m)`$ as a function of the lattice spacing measured in units of the square root of the string tension (the values for the string tension are taken from Ref. ). In this figure $`\rho (0;m)`$ appears to go to zero exponentially in the inverse lattice spacing. This is given some credence by plotting the same figure in a logarithmic scale in Fig. 5 where the data is shown for several values of $`m`$. For $`\beta =5.7`$, the peak in $`\rho (0;m)`$ is quite close to $`m=1.2`$ as can be seen in Fig. 3, resulting in a large value for $`\rho (0;1.2)`$.
We remark that the $`\rho (0;m)`$ plotted in Fig. 5 seem to favor a functional form fitting $`be^{-c/\sqrt{a}}`$ for each $`m`$. The power of $`a`$ in the exponent is a consequence of an empirical fit but the data presents substantial evidence for the following: $`\rho (0;m)`$ in the supercritical mass region is non-zero for all finite lattice spacings. The approach to zero at zero lattice spacing is faster than any power of the lattice spacing. This shows that the last scenario presented in the previous section is favored by our numerical results.
## IV Topological susceptibility
In addition to studying $`\rho (0;m)`$, we also looked at the density of levels crossing zero in an infinitesimal range $`dm`$ centered at $`m`$. In the continuum we expect levels crossing zero only at $`m=0`$, but on the lattice we find a finite density of levels crossing zero wherever $`\rho (0;m)`$ is non-zero on the lattice. The overlap formalism for constructing a chiral gauge theory on the lattice provides a natural definition of the index, $`I`$, of the associated chiral Dirac operator. The index is equal to half the difference of negative and positive eigenvalues of the hermitian Wilson–Dirac operator. A simple way to compute the index $`I`$ is to compute the lowest eigenvalues of $`\mathrm{H}_L(m)`$ at some suitably small $`m`$ before any crossings of zero have occurred. Then $`m`$ is slowly varied and the number and direction of zero crossings are tracked. The net number at some $`m_t`$ is the index of the overlap chiral Dirac operator. Since crossings occur for all values of $`m`$ in the range $`m_1\le m\le 2`$, the topological charge of a lattice gauge field configuration defined as the net level crossings in $`\mathrm{H}_L(m)`$ in the range $`[0,m_t]`$ will depend on $`m_t`$.
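The counting of crossings can be summarized in a few lines. In the sketch below (an illustration under our own conventions; the interface and the sign assigned to each crossing direction are assumptions), `flows[i, j]` holds the $`i`$-th continuously tracked eigenvalue of $`\mathrm{H}_L`$ at the $`j`$-th mass value, and the net signed number of zero crossings up to $`m_t`$ is returned:

```python
import numpy as np

def net_crossings(flows, masses, m_t):
    """Net number of levels of H_L crossing zero up to m_t, counted with direction.

    masses must be sorted increasingly; flows has shape (n_levels, len(masses)).
    A crossing from above (+ to -) is counted as +1 and one from below as -1;
    the overall sign convention for the charge is a choice made for this sketch.
    """
    net = 0
    for level in np.asarray(flows):
        for j in range(len(masses) - 1):
            if masses[j + 1] > m_t:
                break
            if level[j] > 0 >= level[j + 1]:
                net += 1
            elif level[j] < 0 <= level[j + 1]:
                net -= 1
    return net

# toy flow: one topological mode crossing from above near m = 0.3, one paired mode
masses = np.linspace(0.0, 1.0, 101)
flows = np.array([0.3 - masses,                      # crosses zero from above at m = 0.3
                  np.sqrt(0.5**2 + masses**2)])      # paired mode, never crosses
print(net_crossings(flows, masses, m_t=1.0))         # 1
```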
The topology of a single lattice gauge field configuration is not interesting in a field theoretic sense. One has to obtain an ensemble average of the topological susceptibility and study its dependence on $`m`$. This has been done on a variety of ensembles and the results show that the topological susceptibility is essentially independent of $`m`$ in the region to the left of the peak in $`\rho (0;m)`$. A detailed study of the SU(3) ensemble at $`\beta =6.0`$ on a $`16^3\times 32`$ lattice presented in Figure 6 illustrates this point. In the first line is shown the density of zero eigenvalues $`\rho (0;m)`$ and the number of crossings in each mass bin. We see that $`\rho (0;m)`$ rises sharply in $`m`$, then falls to a nonzero value where there is a small number of levels crossing zero. In the second line of Figure 6, we show the size of the zero modes $`\rho _z(m)`$. We define a size of the eigenvector associated with the level crossing zero mode as
$$\rho _z(m)=\frac{1}{2}\frac{\sum _tf(t)}{f_{\mathrm{max}}},\qquad f(t)=\sum _{\vec{x}}\mathrm{tr}\left(\varphi ^{\dagger }(\vec{x},t)\varphi (\vec{x},t)\right)$$
where $`\varphi (\vec{x},t)`$ is the eigenvector of $`\mathrm{H}_L`$ at the crossing point and $`f_{max}`$ is the maximum value of $`f(t)`$ over $`t`$. Another definition based on the second moment of $`f(t)`$ was used in Ref. . We should emphasize that we look only at the sizes of eigenmodes that cross, and only close to the crossing point. Only then can we expect to get a good estimate of the localization size inspired by the ’t Hooft zero mode. The modes are large near $`m_1`$ where $`\rho (0;m)`$ is large, then $`\rho _z(m)`$ drops sharply to about $`1`$ or $`2`$ lattice spacings and stays there up to $`m=2`$. We see that the corresponding topological susceptibility rises sharply when $`\rho _z(m)`$ is large for $`m`$ near $`m_1`$ and then it is quite stable when $`\rho _z(m)`$ is small. This result shows that while the index, $`I`$, of the field is $`m`$ dependent, the topological susceptibility, $`\chi `$ (a physical quantity), is independent of the contribution from the small modes for $`m\gtrsim 1`$.
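In code, the size measure defined above is a one-liner once the eigenvector at the crossing point is available; the array layout used in this sketch is our own assumption.

```python
import numpy as np

def zero_mode_size(phi):
    """rho_z from the time profile f(t) of the crossing mode.

    phi: eigenvector at the crossing, reshaped to (Lx, Ly, Lz, Lt, n_spin * n_color).
    """
    f = np.sum(np.abs(phi) ** 2, axis=(0, 1, 2, 4))   # f(t) = sum_x tr(phi^dag phi)
    return 0.5 * f.sum() / f.max()
```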
To further clarify the relative contribution of the zero modes, in the last line of Figure 6 the zero mode size distribution is plotted as a function of $`\rho _z`$. In the adjacent graph, the topological susceptibility, $`\chi `$, here defined by the contribution of zero modes of size $`\rho _z`$ and larger, is stable when $`\rho _z<2`$. Hence, the small modes do not affect the estimate of $`\chi `$ even though there is an abundance of such modes. Our estimates of $`\chi `$ are shown in Table I where we use the string tension value $`\sqrt{\sigma }=440`$ MeV to set the scale. Our results are in rough agreement with other groups and also show good evidence for scaling.
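To mirror this exercise, one can recompute the topological charge of each configuration while discarding crossings whose mode size falls below a cut, and form the susceptibility from the surviving net charge. The estimator $`\chi =\langle Q^2\rangle /V`$ used in this sketch and the data layout are our own assumptions.

```python
import numpy as np

def susceptibility_with_cut(crossings, volume, rho_cut):
    """crossings: one list per configuration of (direction, rho_z) pairs,
    with direction = +1 or -1 for a level crossing zero from above or below."""
    q = np.array([sum(d for d, r in cfg if r >= rho_cut) for cfg in crossings],
                 dtype=float)
    return np.mean(q ** 2) / volume

# toy ensemble: the small (rho_z < 2) modes come in cancelling pairs, so the cut changes nothing
ens = [[(+1, 5.2), (-1, 1.1), (+1, 1.3)],
       [(-1, 4.8), (+1, 1.2), (-1, 0.9)]]
print(susceptibility_with_cut(ens, volume=16**4, rho_cut=0.0),
      susceptibility_with_cut(ens, volume=16**4, rho_cut=2.0))
```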
## V Size distribution of zero modes
Studies in smooth gauge field backgrounds on the lattice have shown that single instantons result in a single level crossing zero at some $`m`$ in the region $`[0,2]`$ where the shape of the mode at the crossing is a good representation of the shape of the instanton . Similarly, there is a pair of levels crossing zero (one from above and another from below) when the gauge field background has an instanton and anti-instanton. Again, the shape of the modes at the crossing points are good representations of the instanton and anti-instanton. This motivates us to look at the size distribution of the zero modes of the Hermitian Wilson-Dirac operator on the lattice. Since lattice gauge fields generated in typical Monte Carlo simulations are rough, the correspondence between zero modes and topological objects might be questionable – the existence of topological objects (a collection of instantons and anti-instantons) “underneath” the typically large quantum fluctuations is somewhat questionable as well. However, levels crossing zero contribute to the global topology and an analysis of the size distribution of the zero modes is therefore interesting. Such size distributions are shown in Figure 7 for gauge group SU(2). All distributions show a sharp rise at small sizes due to the abundance of small zero modes that occur in the bulk of the region $`m_1\le m\le 2`$. These modes do not affect the computation of the topological susceptibility and can be viewed as being due to the ultra-violet fluctuations in the gauge field background. If we eliminate the small modes from the distribution, the size distribution at $`\beta =2.4`$ on the $`16^4`$ lattice shows some evidence for a broad peak around $`\rho _z=0.6`$ fm. One should keep in mind that the box size is roughly $`1.92`$ fm and the peak is occurring at a value which is roughly a third of the box. It is tempting to explain this peak as a finite volume effect. Some support for this explanation is provided by looking at the distributions at $`\beta =2.5`$ and $`\beta =2.6`$ on a $`16^4`$ lattice. These boxes are now roughly $`1.38`$ fm and $`0.98`$ fm, respectively. After discarding the small zero modes, both the distributions show a broad peak at roughly $`\rho _z=0.45`$ fm and $`\rho _z=0.3`$ fm, respectively. As in the $`\beta =2.4`$ case these peaks occur at roughly a third of the box size and the magnitude of the peak is larger as one goes to weaker coupling for a fixed lattice volume. This is quite consistent with the peak being a finite volume effect. In Fig. 8 we show all the SU(2) size distributions together plotted in lattice units. There is evidence for a broad peak at roughly $`5`$ lattice units – roughly a third of the lattice box size. Therefore, we conclude that the size distribution of zero modes does not show evidence for a peak at a physical scale even after we remove the small modes which are most likely lattice artifacts. We have to conclude that it is not possible to relate the size analysis of the zero modes carried out here to a size distribution of topological objects as it is postulated for the instanton liquid model of QCD .
The conclusions remain the same for the various SU(3) ensembles that we studied. The size distributions are plotted in Figure 9. The $`\beta =5.7`$, $`5.85`$ and $`6.0`$ ensembles come from lattices with linear extent roughly equal to $`1.4`$ fm, $`1.04`$ fm and $`1.6`$ fm, respectively. Clearly the size distribution on the $`\beta =5.85`$ ensemble suffers strongly from finite volume effects whereas the $`\beta =6.0`$ ensemble is not affected as much. We should remark that we do not see any evidence for a finite volume effect in the computation of the topological susceptibility. This is probably because the size distribution of the individual zero modes is not that relevant for the global topology, which only depends in principle on the net number of level crossings and not on the size and shape of these crossing modes.
## VI Discussion
A probe of pure lattice gauge field ensembles using Wilson fermions has revealed that the gauge fields are not continuum like on the lattice at gauge couplings that are typically considered to be weak. If they were continuum like, we should have seen evidence that $`\rho (0;m)`$ is non-zero at a single value of $`m`$ or in a region in $`m`$ that is of the order of the lattice spacing. Furthermore, we should have seen a symmetry in the spectrum at values of $`m`$ on either side of the point (or region) where $`\rho (0;m)`$ is non-zero. Instead, we found that $`\rho (0;m)`$ is non-zero in a region $`m_1\le m\le 2`$. In the continuum limit, there is evidence that $`m_1`$ goes to zero and that $`\rho (0;m)`$ goes to zero away from $`m=0`$. However, the spectral distribution does not show evidence for a symmetry as $`m\to -m`$.
A remark on the approach of $`\rho (0;m)`$ to the thermodynamic limit at a fixed lattice gauge coupling is in order. For a small lattice, with linear size of the order of the extent $`N_t`$ for which the finite temperature deconfinement transition occurs at the given gauge coupling, the number of very small eigenvalues of the Wilson-Dirac operator will be essentially zero. When the lattice size is increased this number will grow rapidly, leading, for a while, to a rapid increase in the extracted estimate of $`\rho (0;m)`$ and then leveling off at the infinite volume value of $`\rho (0;m)`$. We found that this happens for a linear size about twice the extent $`N_t`$ mentioned above.
The density of level crossing zero modes, $`dN/dm`$, of the Hermitian Wilson–Dirac operator is in accordance with the behavior of $`\rho (0;m)`$. In spite of a large number of levels crossing zero in the bulk of $`m_1\le m\le 2`$, we found that the topological susceptibility is unchanged by these small localized modes. We therefore interpret them as due to ultra-violet fluctuations. The size distribution of the zero modes is dominated by these small modes. However, the distribution, after we remove these small modes, does not show any clear peak at a physical scale. Some broad peaks we see in the distributions are explained as a consequence of finite volume effects.
We finally remark that all our studies in this paper have been on pure gauge field ensembles. There is a prediction for the spectral distribution of the Wilson-Dirac operator in full QCD using a continuum chiral lagrangian . The prediction resembles the second scenario presented in section II and is qualitatively different from the result we have obtained for pure gauge ensembles. It would be interesting to test this prediction by numerical simulations of full QCD with Wilson fermions in the supercritical region. There is, however, a technical problem in using standard Hybrid Monte Carlo type algorithms for such simulations: the system will be locked in a single topological sector (with topology defined as half the difference of negative and positive eigenvalues of the hermitian Wilson-Dirac operator at the supercritical mass, $`m_d`$, where the simulation is carried out). This is due to the fact that a change in topology will require a change of net level crossings in the region $`0<m<m_d`$. However, the spectral flow has to be smooth as we update the configurations using classical dynamics in HMC type algorithms. Therefore, at some point in the change of topology the level crossing would need to occur at $`m_d`$, but such a configuration has a vanishing fermion determinant and hence cannot be reached. Modifications of the HMC algorithm to circumvent this problem would need to be developed before a study of the spectral flow in full QCD in the supercritical region can be attempted.
## ACKNOWLEDGMENTS
This research was supported by DOE contracts DE-FG05-85ER250000 and DE-FG05-96ER40979. Computations were performed on the CM-2 and QCDSP at SCRI.
no-problem/9901/chao-dyn9901020.html
## ACKNOWLEDGMENTS
This work was supported in part by a grant from the National Institutes of Health (R01 HL56139) \[DJC\] and sabbatical support from Macalester College \[DTK\].
no-problem/9901/hep-ph9901329.html
hep-ph/9901329
LMU-99-02
WIS-99/01/Jan.DPP
TAU-2547-99
January 1999
Flavor Symmetry, $`K^0`$-$`\overline{K}^0`$ Mixing and New Physics Effects
on $`CP`$ Violation in $`D^\pm `$ and $`D_s^\pm `$ Decays
Harry J. Lipkin (Electronic address: ftlipkin@wiswic.weizmann.ac.il)
Department of Particle Physics, Weizmann Institute of Science, Rehovot 76100, Israel
School of Physics and Astronomy, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv, Israel
Zhi-zhong Xing (Electronic address: xing@hep.physik.uni-muenchen.de)
Sektion Physik, Universität München, Theresienstrasse 37A, 80333 München, Germany
## Abstract
Flavor symmetry and symmetry breaking, $`K^0`$-$`\overline{K}^0`$ mixing and possible effects of new physics on $`CP`$ violation in weak decay modes $`D^\pm \to K_{\mathrm{S},\mathrm{L}}+X^\pm `$, $`D^\pm \to (K_{\mathrm{S},\mathrm{L}}+\pi ^0)_{K^{*}}+X^\pm `$ (for $`X=\pi ,\rho ,a_1`$) and $`D_s^\pm \to K_{\mathrm{S},\mathrm{L}}+X_s^\pm `$, $`D_s^\pm \to (K_{\mathrm{S},\mathrm{L}}+\pi ^0)_{K^{*}}+X_s^\pm `$ (for $`X_s=K,K^{*}`$) are analyzed. Relations between $`D^\pm `$ and $`D_s^\pm `$ decay branching ratios are obtained from the $`d\leftrightarrow s`$ subgroup of SU(3) and dominant symmetry-breaking mechanisms are investigated. A $`CP`$ asymmetry of magnitude $`3.3\times 10^{-3}`$ is shown to result in the standard model from $`K^0`$-$`\overline{K}^0`$ mixing in the final state. New physics affecting the doubly Cabibbo-suppressed channels might either cancel this asymmetry or enhance it up to the percent level. A comparison between the $`CP`$ asymmetries in $`D_{(s)}^\pm \to K_\mathrm{S}X_{(s)}^\pm `$ and $`D_{(s)}^\pm \to K_\mathrm{L}X_{(s)}^\pm `$ can pin down effects of new physics.
Effects of $`CP`$ violation in weak decays of $`D`$ mesons are expected to be rather small, of order $`10^{-3}`$ or lower, within the standard electroweak model . They can naturally be enhanced up to the $`O(10^{-2})`$ level, if new physics beyond the standard model exists in the charm-quark sector . Although no new physics model suggests direct $`CP`$ violation in charged $`D`$-meson decays, a search is cheap and easy when decays of charge-conjugate particles are measured . Some efforts have so far been made to search for $`CP`$ violation in the $`D`$ system . The experimental prospects are becoming brighter, with the development of higher-luminosity $`e^+e^{-}`$ colliders and hadron machines .
While a variety of mixing and $`CP`$-violating phenomena may manifest themselves in neutral $`D`$-meson decays , the charged $`D`$-meson transitions provide a unique experimental opportunity for the study of direct $`CP`$ violation . Some phenomenological analyses of $`CP`$ asymmetries in charged $`D`$-meson decay modes have been made (see, e.g., Ref. and Refs. ). In particular, the importance and prospects of searching for $`CP`$-violating new physics in the promising decays $`D^\pm \to K_\mathrm{S}X^\pm `$ and $`D^\pm \to K_\mathrm{S}K_\mathrm{S}K^\pm `$, where $`X^\pm `$ denotes any charged hadronic state, have been outlined in Ref. .
In the present note we first consider flavor-symmetry relations between corresponding $`D^\pm `$ and $`D_s^\pm `$ decays and then discuss both the non-negligible $`K^0`$-$`\overline{K}^0`$ mixing effect and the possible (significant) new physics effect on $`CP`$ asymmetries in
$$D^\pm \to K_{\mathrm{S},\mathrm{L}}+X^\pm $$
$`(1\mathrm{a})`$
(for $`X=\pi ,\rho ,a_1`$) and
$$D_s^\pm \to K_{\mathrm{S},\mathrm{L}}+X_s^\pm $$
$`(1\mathrm{b})`$
(for $`X_s=K,K^{*}`$) decays. The similar decay modes involving the resonance $`K^{*0}`$ or $`\overline{K}^{*0}`$, i.e.,
$$D^\pm \to \left(K_{\mathrm{S},\mathrm{L}}+\pi ^0\right)_{K^{*}}+X^\pm $$
$`(2\mathrm{a})`$
and
$$D_s^\pm \to \left(K_{S,L}+\pi ^0\right)_{K^{*}}+X_s^\pm $$
$`(2\mathrm{b})`$
are also considered. Within the standard model we show that $`CP`$ violation in these processes comes mainly from $`K^0`$-$`\overline{K}^0`$ mixing and may lead to a decay rate asymmetry of magnitude $`3.3\times 10^{-3}`$. Beyond the standard model the $`CP`$ asymmetries may reach the percent level due to the enhancement from new physics. A comparison between the $`CP`$ asymmetries in $`D_{(s)}^\pm \to K_\mathrm{S}X_{(s)}^\pm `$ and $`D_{(s)}^\pm \to K_\mathrm{L}X_{(s)}^\pm `$ decays may pin down the involved new physics.
Within the standard electroweak model the transitions in Eqs. (1) and (2) can occur through both the Cabibbo-allowed channels (Fig. 1) and the doubly Cabibbo-suppressed channels (Fig. 2). The penguin $`c\to u`$ transition cannot contribute to these decays. The eight diagrams in Figs. 1 and 2 describe all the diagrams allowed by QCD and the standard electroweak model if the quark lines are allowed to go backward and forward in time, and arbitrary numbers of gluon exchanges are included .
Each of the four diagrams in Fig. 1 is seen to go into one of the diagrams of Fig. 2 under the $`d\leftrightarrow s`$ transformation which interchanges $`d`$ and $`s`$ flavors and is included in SU(3). Its use with the flavor topology of Figs. 1 and 2 not only gives SU(3) symmetry relations but also pinpoints the dominant sources of symmetry breaking. We first note some relations between pairs of amplitudes related by the $`d\leftrightarrow s`$ transformation:
$$\frac{|A(D^+\to K^{(*)0}X^+)|}{|A(D_s^+\to \overline{K}^{(*)0}X_s^+)|}=\frac{|V_{cd}V_{us}^{*}|}{|V_{cs}V_{ud}^{*}|}=\frac{|A(D_s^+\to K^{(*)0}X_s^+)|}{|A(D^+\to \overline{K}^{(*)0}X^+)|},$$
$`(3)`$
where $`V_{ij}`$ (for $`i=u,c`$ and $`j=d,s`$) are the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements. The explicit assumptions needed to derive these relations are:
1. $`d`$ and $`s`$ quarks couple equally to gluons at all stages of these diagrams, including final state interactions.
2. The properties of the weak interactions under the $`d\leftrightarrow s`$ transformation are given by the CKM matrix elements.
That $`d`$ and $`s`$ quarks couple equally to gluons at short distances in all the complicated quark diagrams seems reasonable. Thus further $`d\leftrightarrow s`$ symmetry breaking occurs only at the hadron level via decay constants and form factors. The $`d\leftrightarrow s`$ transformation changes pions into kaons, $`\rho `$’s into $`K^{*}`$’s, etc. The breaking produced by the corresponding changes in masses, decay constants and form factors must be taken into account. The form factors depend upon whether the $`q\overline{q}`$ pair is pointlike or has a hadronic scale defined by the initial state wave function or pair creation by gluons.
Branching ratio data for these decays may well become available with sufficient precision to test SU(3) symmetry and symmetry breaking before they have sufficient precision to show $`CP`$ violation or new physics. They can therefore provide useful constraints on parameters and relative strengths of different diagrams to help narrow the searches for crucial effects predicted by new physics.
We now investigate the phases relevant to $`CP`$ violation. For each decay mode under discussion, its doubly Cabibbo-suppressed transition amplitude and its Cabibbo-allowed one may have different weak and strong phases. We denote the ratio of two transition amplitudes for $`D^+\to K^{(*)0}X^+`$ and $`\overline{K}^{(*)0}X^+`$ or $`D_s^+\to K^{(*)0}X_s^+`$ and $`\overline{K}^{(*)0}X_s^+`$ (before $`K^0`$-$`\overline{K}^0`$ mixing) as follows:
$`{\displaystyle \frac{A(D^+\to K^{(*)0}X^+)}{A(D^+\to \overline{K}^{(*)0}X^+)}}`$ $`=`$ $`R_de^{\mathrm{i}\delta _d}{\displaystyle \frac{V_{cd}V_{us}^{*}}{V_{cs}V_{ud}^{*}}},`$
$`{\displaystyle \frac{A(D_s^+\to K^{(*)0}X_s^+)}{A(D_s^+\to \overline{K}^{(*)0}X_s^+)}}`$ $`=`$ $`R_se^{\mathrm{i}\delta _s}{\displaystyle \frac{V_{cd}V_{us}^{*}}{V_{cs}V_{ud}^{*}}},`$ (4)
where $`\delta _q`$ and $`R_q`$ (for $`q=d`$ or $`s`$) denote the strong phase difference and the ratio of real hadronic matrix elements, respectively. Under $`d\leftrightarrow s`$ symmetry, $`\delta _s=\delta _d`$ and $`R_s=R_d^{-1}`$.
The magnitudes of $`R_d`$ and $`R_s`$ can be estimated with the help of the effective weak Hamiltonian and the naive factorization approximation. Neglecting the annihilation diagrams in Figs. 1 and 2, which are expected to have significant form factor suppression, we arrive at
$`{\displaystyle \frac{1}{R_d}}`$ $`=`$ $`1+{\displaystyle \frac{a_1}{a_2}}{\displaystyle \frac{\langle X^+|(\overline{u}d)_{V-A}|0\rangle \langle \overline{K}^{(*)0}|(\overline{s}c)_{V-A}|D^+\rangle }{\langle K^{(*)0}|(\overline{d}s)_{V-A}|0\rangle \langle X^+|(\overline{u}c)_{V-A}|D^+\rangle }},`$
$`R_s`$ $`=`$ $`1+{\displaystyle \frac{a_1}{a_2}}{\displaystyle \frac{\langle X_s^+|(\overline{u}s)_{V-A}|0\rangle \langle K^{(*)0}|(\overline{d}c)_{V-A}|D_s^+\rangle }{\langle \overline{K}^{(*)0}|(\overline{d}s)_{V-A}|0\rangle \langle X_s^+|(\overline{u}c)_{V-A}|D_s^+\rangle }},`$ (5)
where $`a_1\simeq 1.1`$ and $`a_2\simeq -0.5`$ are the effective Wilson coefficients at the $`O(m_c)`$ scale . We list the explicit results of $`R_d`$ and $`R_s`$ in Table 1, in which the relevant decay constants and form factors are self-explanatory and their values can be found from Refs. .
These expressions show clearly that the relation $`R_s=R_d^{-1}`$ is violated only by the obvious $`d\leftrightarrow s`$ breaking in the decay constants and form factors, and holds in the limit where these decay constants and form factors have the same values for all pairs related by $`d\leftrightarrow s`$; e.g. $`m_\pi =m_K`$, $`f_\pi =f_K`$, $`F_0^{DK}(m_\pi ^2)=F_0^{D_sK}(m_K^2)`$, $`m_\rho =m_{K^{*}}`$, $`f_\rho =f_{K^{*}}`$, $`F_1^{DK}(m_\rho ^2)=F_1^{D_sK}(m_{K^{*}}^2)`$, etc.
The diagrams selected by factorization differ from other diagrams only in the form factors. Thus relaxing the demand for factorization can only introduce additional form factors and decay constants into the expressions in Table 1 corresponding to the replacement of color-favored couplings by color suppressed couplings.
The ballpark numbers obtained in Table 1 serve only for illustration and show that the $`d\leftrightarrow s`$ symmetry between $`R_s`$ and $`R_d^{-1}`$ is reasonably acceptable for either the two-pseudoscalar states or the pseudoscalar-vector (axialvector) states. In general $`|R_d|\sim O(1)`$ and $`|R_s|\sim O(1)`$ are expected to be true, independent of the dynamical details of these transitions.
We now consider the decays into final-states including $`K_\mathrm{S}`$ or $`K_\mathrm{L}`$, where the dominant $`CP`$ violation in the standard model comes from the $`K^0`$-$`\overline{K}^0`$ mixing described by
$`|K_\mathrm{S}\rangle `$ $`=`$ $`p|K^0\rangle +q|\overline{K}^0\rangle ,`$
$`|K_\mathrm{L}\rangle `$ $`=`$ $`p|K^0\rangle -q|\overline{K}^0\rangle ,`$ (6)
where $`p`$ and $`q`$ are complex parameters. To ensure the rephasing invariance of all analytical results, we do not use the popular notation $`q/p=(1-ϵ)/(1+ϵ)`$. As we shall see later on, the mixing-induced $`CP`$ violation
$$\delta _K=\frac{|p|^2-|q|^2}{|p|^2+|q|^2}\simeq 3.3\times 10^{-3},$$
(7)
which has been measured in semileptonic $`K_\mathrm{L}`$ decays , may play a significant role in the $`CP`$ asymmetries of $`D^\pm `$ and $`D_s^\pm `$ decays.
The ratios of transition amplitudes for $`D^{-}`$ and $`D_s^{-}`$ decays can be read off from Eq. (4) with the complex conjugation of relevant quark mixing matrix elements. Then we obtain<sup>3</sup> (<sup>3</sup> The formulas for the decays involving $`K^{*0}\to K^0\pi ^0\to K_\mathrm{S}\pi ^0`$ and $`\overline{K}^{*0}\to \overline{K}^0\pi ^0\to K_\mathrm{S}\pi ^0`$ are basically the same as those for $`D^\pm \to K_\mathrm{S}X^\pm `$ or $`D_s^\pm \to K_\mathrm{S}X_s^\pm `$ transitions, therefore they will not be written down for simplicity.)
$`{\displaystyle \frac{A(D^+\to K_\mathrm{S}X^+)}{A(D^{-}\to K_\mathrm{S}X^{-})}}`$ $`=`$ $`{\displaystyle \frac{(V_{cs}V_{ud}^{*})q^{*}+R_de^{\mathrm{i}\delta _d}(V_{cd}V_{us}^{*})p^{*}}{(V_{cs}^{*}V_{ud})p^{*}+R_de^{\mathrm{i}\delta _d}(V_{cd}^{*}V_{us})q^{*}}},`$
$`{\displaystyle \frac{A(D_s^+\to K_\mathrm{S}X_s^+)}{A(D_s^{-}\to K_\mathrm{S}X_s^{-})}}`$ $`=`$ $`{\displaystyle \frac{(V_{cs}V_{ud}^{*})q^{*}+R_se^{\mathrm{i}\delta _s}(V_{cd}V_{us}^{*})p^{*}}{(V_{cs}^{*}V_{ud})p^{*}+R_se^{\mathrm{i}\delta _s}(V_{cd}^{*}V_{us})q^{*}}}.`$ (8)
Although $`q/p`$ itself depends on the phase convention of $`|K^0\rangle `$ and $`|\overline{K}^0\rangle `$ meson states, which relies intrinsically on that of relevant quark fields , the following two quantities are rephasing-invariant:
$$\frac{V_{cd}V_{us}^{*}}{V_{cs}V_{ud}^{*}}\frac{p^{*}}{q^{*}}=re^{+\mathrm{i}\varphi },\frac{V_{cd}^{*}V_{us}}{V_{cs}^{*}V_{ud}}\frac{q^{*}}{p^{*}}=\overline{r}e^{-\mathrm{i}\varphi }.$$
(9)
Note that $`|q/p|`$ deviates from unity only at the $`O(10^{-3})`$ level, i.e., the order of observable $`CP`$ violation in $`K^0`$-$`\overline{K}^0`$ mixing . Therefore $`r=\overline{r}=\mathrm{tan}^2\theta _\mathrm{C}\simeq 5\%`$ is an excellent approximation, where $`\theta _\mathrm{C}\simeq 13^{\circ }`$ is the Cabibbo angle. The $`CP`$ asymmetries of $`D^\pm \to K_\mathrm{S}X^\pm `$ and $`D_s^\pm \to K_\mathrm{S}X_s^\pm `$ transitions can then be given as
$`𝒜_d`$ $`=`$ $`{\displaystyle \frac{|A(D^{-}\to K_\mathrm{S}X^{-})|^2-|A(D^+\to K_\mathrm{S}X^+)|^2}{|A(D^{-}\to K_\mathrm{S}X^{-})|^2+|A(D^+\to K_\mathrm{S}X^+)|^2}}`$ (10)
$`\simeq `$ $`\delta _K+2R_d\mathrm{tan}^2\theta _\mathrm{C}\mathrm{sin}\varphi \mathrm{sin}\delta _d,`$
and
$`𝒜_s`$ $`=`$ $`{\displaystyle \frac{|A(D_s^{-}\to K_\mathrm{S}X_s^{-})|^2-|A(D_s^+\to K_\mathrm{S}X_s^+)|^2}{|A(D_s^{-}\to K_\mathrm{S}X_s^{-})|^2+|A(D_s^+\to K_\mathrm{S}X_s^+)|^2}}`$ (11)
$`\simeq `$ $`\delta _K+2R_s\mathrm{tan}^2\theta _\mathrm{C}\mathrm{sin}\varphi \mathrm{sin}\delta _s,`$
where $`\delta _K`$ has been given in Eq. (7). Clearly $`𝒜_d`$ or $`𝒜_s`$ consists of two different contributions: that from $`K^0`$-$`\overline{K}^0`$ mixing in the final state, and that from the interference between Cabibbo-allowed and doubly Cabibbo-suppressed channels.
The smallness of $`\delta _K`$ implies that the $`K^0`$-$`\overline{K}^0`$ mixing phase is nearly the same as that in direct decays of $`K^0`$ and $`\overline{K}^0`$ mesons . Thus one may take $`q/p=(V_{us}V_{ud}^{*})/(V_{us}^{*}V_{ud})`$ in the leading-order approximation. The rephasing-invariant weak phase $`\varphi `$ turns out to be the largest outer angle of the unitarity triangle $`V_{cd}V_{ud}^{*}+V_{cs}V_{us}^{*}+V_{cb}V_{ub}^{*}=0`$ (see, e.g., Ref. ) and its magnitude can roughly be constrained as follows:
$$\varphi =\mathrm{arg}\left(\frac{V_{cd}V_{ud}^{*}}{V_{cs}V_{us}^{*}}\right)\simeq \pi -\mathrm{arctan}\left|\frac{V_{cb}V_{ub}^{*}}{V_{cd}V_{ud}^{*}}\right|.$$
(12)
In obtaining this bound, we have taken $`|V_{cd}V_{ud}^{*}|\simeq |V_{cs}V_{us}^{*}|\gg |V_{cb}V_{ub}^{*}|`$ into account. We find $`\mathrm{sin}\varphi \lesssim O(10^{-3})`$ by use of current experimental data on quark flavor mixing . This result, together with $`|R_q|\sim O(1)`$ and $`\mathrm{tan}^2\theta _\mathrm{C}\simeq 5\%`$, implies that the $`CP`$ asymmetry arising from the interference between Cabibbo-allowed and doubly Cabibbo-suppressed channels is negligibly small in $`𝒜_q`$ (for $`q=d`$ or $`s`$). One then concludes that $`𝒜_s\simeq 𝒜_d\simeq \delta _K`$ holds to a good degree of accuracy in the standard model. Such $`\delta _K`$-induced $`CP`$-violating effects can also be observed in the semileptonic $`D^\pm `$ and $`D_s^\pm `$ decays which involve $`K_\mathrm{S}`$ or $`K_\mathrm{L}`$ meson via $`K^0`$-$`\overline{K}^0`$ mixing in the final states .
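A quick numerical check of the sizes involved (our own back-of-the-envelope evaluation of Eqs. (10) and (11), using the rough values quoted above) confirms that the interference term cannot compete with $`\delta _K`$ in the standard model:

```python
import numpy as np

delta_K = 3.3e-3                                   # K0-K0bar mixing asymmetry from Eq. (7)
tan2_thetaC = np.tan(np.radians(13.0)) ** 2        # ~0.05
R, sin_phi = 1.0, 1.0e-3                           # |R_q| ~ O(1), sin(phi) <~ 1e-3 as argued above

interference = 2.0 * R * tan2_thetaC * sin_phi     # upper bound, since |sin(delta_q)| <= 1
print(f"delta_K = {delta_K:.1e}, interference term <~ {interference:.1e}")
# ~1e-4 at most, so A_d ~ A_s ~ delta_K in the standard model.
```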
The conclusion drawn above may not be true if new physics enters the decay modes under discussion. As pointed out in Ref. , one can wonder about a kind of new physics which causes the decay channels resembling the doubly Cabibbo-suppressed ones in Fig. 2 but occurring through a new charged boson or something more complicated than the $`W`$ boson. The amplitudes of such new channels may remain to be suppressed in magnitude (e.g., of the same order as the doubly Cabibbo-suppressed amplitudes in the standard model), but they are likely to have significantly different weak and strong phases from the dominant Cabibbo-allowed channels in Fig. 1 and result in new $`CP`$-violating asymmetries via the interference effects. The strong phases are expected to be very different because of the presence of meson resonances in the D mass region which affect the phases of the doubly-suppressed $`D^\pm `$ and Cabibbo-allowed $`D_s`$ amplitudes while the Cabibbo-allowed $`D^\pm `$ and doubly-suppressed $`D_s`$ decays feed exotic channels which have no resonances . Following this illustrative picture of new physics, we modify the ratios of transition amplitudes in Eq. (4) as follows:
$`{\displaystyle \frac{A(D^+\to K^0X^+)}{A(D^+\to \overline{K}^0X^+)}}`$ $`=`$ $`R_d^{\prime }e^{\mathrm{i}\delta _d^{\prime }}{\displaystyle \frac{U_{cd}U_{us}^{*}}{V_{cs}V_{ud}^{*}}},`$
$`{\displaystyle \frac{A(D_s^+\to K^0X_s^+)}{A(D_s^+\to \overline{K}^0X_s^+)}}`$ $`=`$ $`R_s^{\prime }e^{\mathrm{i}\delta _s^{\prime }}{\displaystyle \frac{U_{cd}U_{us}^{*}}{V_{cs}V_{ud}^{*}}},`$ (13)
in which $`\delta _q^{\prime }`$, $`U_{cd}U_{us}^{*}`$ and $`R_q^{\prime }`$ (for $`q=d`$ or $`s`$) stand for the effective strong phase difference, the effective weak coupling and the ratio of effective real hadronic matrix elements, respectively. Note that these quantities are composed of both the contribution from doubly Cabibbo-suppressed channels in the standard model and that from additional channels induced by new physics. The same kind of new physics might also affect $`K^0`$-$`\overline{K}^0`$ mixing, but this effect can always be incorporated into the $`p`$ and $`q`$ parameters. In this case the rephasing-invariant combinations in Eq. (9) become
$$\frac{U_{cd}U_{us}^{*}}{V_{cs}V_{ud}^{*}}\frac{p^{*}}{q^{*}}=r^{\prime }e^{+\mathrm{i}\varphi ^{\prime }},\frac{U_{cd}^{*}U_{us}}{V_{cs}^{*}V_{ud}}\frac{q^{*}}{p^{*}}=\overline{r}^{\prime }e^{-\mathrm{i}\varphi ^{\prime }}.$$
(14)
Here $`r^{\prime }=\overline{r}^{\prime }`$ remains a good approximation, but the magnitude of $`r^{\prime }`$ (or $`\overline{r}^{\prime }`$) may deviate somehow from $`\mathrm{tan}^2\theta _\mathrm{C}\simeq 5\%`$. The $`CP`$ asymmetries of $`D^\pm \to K_\mathrm{S}X^\pm `$ and $`D_s^\pm \to K_\mathrm{S}X_s^\pm `$ decays, similar to those obtained in Eqs. (10) and (11), read as
$`𝒜_d^{\prime }`$ $`\simeq `$ $`\delta _K+2R_d^{\prime }r^{\prime }\mathrm{sin}\varphi ^{\prime }\mathrm{sin}\delta _d^{\prime },`$
$`𝒜_s^{\prime }`$ $`\simeq `$ $`\delta _K+2R_s^{\prime }r^{\prime }\mathrm{sin}\varphi ^{\prime }\mathrm{sin}\delta _s^{\prime }.`$ (15)
If $`CP`$ violation from the interference between the channel induced by standard physics and that arising from new physics is comparable in magnitude with $`\delta _K`$ (i.e., at the $`O(10^{-3})`$ level) or dominant over $`\delta _K`$ (i.e., at the $`O(10^{-2})`$ level), then a significant difference between $`𝒜_s^{\prime }`$ and $`𝒜_d^{\prime }`$ should in general appear. This follows from the fact that $`\delta _s^{\prime }\ne \delta _d^{\prime }`$ and they may even have the opposite signs. For illustration, we plot the dependence of $`𝒜_d^{\prime }`$ and $`𝒜_s^{\prime }`$ on $`\varphi ^{\prime }`$ in Fig. 3 with the typical inputs $`r^{\prime }=0.04`$, $`R_d^{\prime }\mathrm{sin}\delta _d^{\prime }=0.3`$ and $`R_s^{\prime }\mathrm{sin}\delta _s^{\prime }=0.5`$. Clearly it is worthwhile to measure both asymmetries, and a comparison between them will be helpful to examine SU(3) symmetry and probe possible new physics effects.
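The content of Fig. 3 is easy to reproduce numerically from Eq. (15); the short sketch below uses the same inputs quoted above and is our own illustration, not the authors' plotting code:

```python
import numpy as np

delta_K = 3.3e-3
r_prime = 0.04
Rd_sind, Rs_sind = 0.3, 0.5      # R'_q sin(delta'_q), the Fig. 3 inputs quoted in the text

phi = np.linspace(0.0, 2.0 * np.pi, 9)
A_d = delta_K + 2.0 * Rd_sind * r_prime * np.sin(phi)    # Eq. (15)
A_s = delta_K + 2.0 * Rs_sind * r_prime * np.sin(phi)

for p, ad, a_s in zip(phi, A_d, A_s):
    print(f"phi' = {p:4.2f}  A'_d = {ad:+.4f}  A'_s = {a_s:+.4f}")
# At sin(phi') ~ 1 the asymmetries reach roughly 2-4%, i.e. the percent level discussed above.
```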
The $`CP`$-violating asymmetries in $`D^\pm \to K_\mathrm{L}X^\pm `$ and $`D_s^\pm \to K_\mathrm{L}X_s^\pm `$ decays can directly be obtained from those in Eqs. (10), (11) and (15) through the replacements $`R_q\to -R_q`$ and $`R_q^{\prime }\to -R_q^{\prime }`$ (for $`q=d`$ or $`s`$), as ensured by $`CPT`$ symmetry in the total width . The point is simply that $`q/p`$ becomes $`-q/p`$, if one moves from the $`K_\mathrm{S}X_{(s)}^\pm `$ state to the $`K_\mathrm{L}X_{(s)}^\pm `$ state. The difference between the $`CP`$ asymmetries in $`D_{(s)}^\pm \to K_\mathrm{S}X_{(s)}^\pm `$ and $`D_{(s)}^\pm \to K_\mathrm{L}X_{(s)}^\pm `$ turns out to be
$`𝒜_d^{\prime }(K_\mathrm{S})-𝒜_d^{\prime }(K_\mathrm{L})`$ $`\simeq `$ $`4R_d^{\prime }r^{\prime }\mathrm{sin}\varphi ^{\prime }\mathrm{sin}\delta _d^{\prime },`$
$`𝒜_s^{\prime }(K_\mathrm{S})-𝒜_s^{\prime }(K_\mathrm{L})`$ $`\simeq `$ $`4R_s^{\prime }r^{\prime }\mathrm{sin}\varphi ^{\prime }\mathrm{sin}\delta _s^{\prime }.`$ (16)
This implies an interesting possibility to pin down the involved new physics, which significantly violates $`CP`$ invariance. For either $`D^\pm `$ or $`D_s^\pm `$ decays, we may explicitly conclude that the $`CP`$-violating effect induced by new physics is
* vanishing or negligibly small, if the relationship $`𝒜_q^{\prime }(K_\mathrm{S})\simeq 𝒜_q^{\prime }(K_\mathrm{L})\simeq \delta _K`$ is observed in experiments;
* comparable in magnitude with the $`\delta _K`$-induced $`CP`$ violation, if $`|𝒜_q^{\prime }(K_\mathrm{S})|>|𝒜_q^{\prime }(K_\mathrm{L})|`$ (or vice versa) is experimentally detected;
* dominant over the $`\delta _K`$-induced $`CP`$ asymmetry, if $`𝒜_q^{\prime }(K_\mathrm{S})\simeq -𝒜_q^{\prime }(K_\mathrm{L})`$ (of order $`10^{-2}`$) is measured in experiments.
These conclusions are quite general and they should also be valid for other types of new physics involved in the charm-quark sector.
In view of current experimental data , the branching ratios of $`D^\pm \to K_{\mathrm{S},\mathrm{L}}+(\pi ^\pm ,\rho ^\pm ,a_1^\pm )`$ are estimated to be about $`1.4\%`$, $`3.3\%`$ and $`4.0\%`$, respectively. The branching ratios of $`D_s^\pm \to K_{\mathrm{S},\mathrm{L}}+(K^\pm ,K^{*\pm })`$ amount approximately to $`1.8\%`$ and $`2.2\%`$, respectively <sup>4</sup> (<sup>4</sup> The decay modes involving the $`\overline{K}^{*0}`$ (or $`K^{*0}`$) resonance may have smaller branching ratios, as both $`\overline{K}^{*0}\to \overline{K}^0\pi ^0`$ and $`\overline{K}^{*0}\to K^{-}\pi ^+`$ are allowed but only the former is relevant to our purpose. For example, the branching ratio of $`D^+\to (K_{\mathrm{S},\mathrm{L}}+\pi ^0)_{K^{*}}+\pi ^+`$ is expected to be $`3.2\times 10^{-3}`$ from current data , about 1/5 of the branching ratio of $`D^+\to K_{\mathrm{S},\mathrm{L}}+\pi ^+`$.). It is therefore possible to measure the $`\delta _K`$-induced $`CP`$ asymmetries in these decay modes with about $`10^{7}`$–$`10^{8}`$ $`D^\pm `$ or $`D_s^\pm `$ events. If new physics enhances the asymmetries up to the percent level, then clean signals of $`CP`$ violation can be established with only about $`10^6`$ $`D^\pm `$ or $`D_s^\pm `$ events.
Acknowledgement: It is a pleasure to thank Jeffrey Appel, Edmond Berger, Karl Berkelman, Ikaros Bigi, John Cumalat, Harald Fritzsch, Yuval Grossman, Yosef Nir, J. G. Smith and Yue-liang Wu for helpful discussions and comments. This work was partially supported by the German-Israeli Foundation for Scientific Research and Development (GIF).
no-problem/9901/astro-ph9901110.html
# SGR 1806-20 Is a Set of Independent Relaxation Systems.
## 1 Introduction
Soft Gamma Repeaters (SGRs) are very highly magnetized ($`B\sim 10^{15}\mathrm{G}`$), slowly rotating ($`P\sim 8\mathrm{s}`$), young ($`\sim 10^4\mathrm{years}`$) neutron stars that produce multiple bursts of soft gamma-rays, often at super-Eddington luminosities ($`10^{37-41.5}\mathrm{erg}`$ in a few tenths of a second). Two of these objects (SGR 0526-66 and SGR 1900+14) have also produced hard, extremely intense superbursts ($`10^{44.5}\mathrm{erg}`$ in a few tenths of a second). In the Thompson & Duncan (1995) model, the smaller bursts are produced by ‘crustquakes’ in the neutron star, while the larger bursts are produced by global reconfiguration of the magnetic field.
Four of these objects have been discovered so far, including SGR 1806-20 (Atteia et al. 1987; Laros et al. 1987), the subject of this Letter. Observations of SGR 1806-20 with the XTE PCA (1996 Nov) and ASCA (1993 Oct) find that its quiescent emission shows a 7.47 s periodicity with a spin-down rate of $`\dot{P}=8\times 10^{-11}\mathrm{s}\mathrm{s}^{-1}`$, implying a magnetic field of $`8\times 10^{14}\mathrm{G}`$ and a characteristic spin-down age $`P/2\dot{P}`$ of $`\sim `$1500 years (Kouveliotou et al. 1998). This source is associated with the SNR G10.0-0.3, which has an inferred age of $`\sim `$5000 years (Kulkarni & Frail, 1993). Corbel et al. (1997) measure the distance to this SNR as $`14.5\pm 1.4`$ kpc.
The intervals between successive bursts are distributed lognormally (Hurley et al. 1994). Cheng et al. (1995) found that this distribution, the correlation between successive waiting intervals, and the distribution of intensities (a $`dN/dS\propto S^{-1.66}`$ power law with a high-intensity cutoff) are similar to the behavior of earthquakes. Previous analyses have found no clear relationship between the timing of the bursts and their intensities (Laros et al. 1987; Ulmer et al. 1993).
This Letter demonstrates that SGR 1806-20 contains multiple systems that continuously accumulate energy and discontinuously release it as bursts. This is consistent with the crustquake model (with multiple seismic zones) but not with, e.g., impact event, continuous accretion, or disk instability models.
## 2 Observations and Analyses
The data analyzed in this Letter are from the 134 bursts catalogued in Ulmer et al. (1993) from the University of California, Berkeley/Los Alamos National Laboratory instrument on the International Cometary Explorer (ICE) during the SGR’s 1979-1984 term of activity. This activity peaked in 1983 Oct-Nov with more than 100 detected bursts, including 20 on Nov 16 (Day Of Year 320). Figure 1a is the history of bursts during this period, showing the time $`t_i`$ and burst size $`S_i`$ as measured by counts in the 26-40 keV channel of ICE’s scintillating detector.
Figure 1b shows the cumulative fluence, the running sum of the burst sizes, as a function of time. If we assume that the burst catalog provides a good and complete measure of the energy emitted as bursts by this source, then we may use this to understand the energetics of the bursting mechanism. Section 2.1, below, examines and validates this assumption.
Figure 1b shows that the rate of energy release varies dramatically over this time period. However, intervals are apparent when the average power, averaged over many bursts, is approximately constant, giving a constant slope. The intervals marked $`A`$, $`B`$, $`C`$ and $`D`$ are selected for further study in this Letter.
A relaxation system is a system which continuously accumulates an input quantity (e.g., energy) in a reservoir, and discontinuously releases it. For a system that starts with a quantity $`E_0`$ in its reservoir, accumulates at a rate $`R(t)`$, and releases with events of size $`S_i`$ instantaneously at times $`t_i`$, the contents of the reservoir as a function of time is given by
$$E(t)=E_0+\int _0^tR(t^{\prime })dt^{\prime }-\sum _i^{t_i<t}S_i$$
The simplest behavior from such a system occurs with a constant accumulation rate $`R(t)=r`$, a fixed ‘trip point’ which triggers a release when $`E=E_{trip}`$, and a constant release size $`S_i=s`$, giving a periodic relaxation oscillator with $`P=t_{i+1}-t_i=s/r`$. If the accumulation rate, trip point, or release strength are not constant, then more complicated behavior results. Stick-slip (including earthquake) and avalanche systems are other examples of relaxation systems. If the reservoir has a maximum capacity $`E_{max}`$ such that $`0\le E\le E_{max}`$, and constant rate $`R(t)=r`$, the sum of releases approximates a linear function of time: $`\sum _i^{t_i<t}S_i=\left[(t-t_0)r\right]_{-E_{max}}^{+0}`$. The linear sections of Figure 1b can be tested to see if they are consistent with such relaxation systems.
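For a constant accumulation rate the reservoir content just after each burst follows directly from the equation above. A minimal sketch (our own helper with made-up numbers; the function and argument names are hypothetical):

```python
import numpy as np

def reservoir_after_bursts(times, sizes, rate, e0=0.0, t0=None):
    """E(t_i^+): reservoir content immediately after each burst, for a constant input rate."""
    t = np.asarray(times, dtype=float)
    s = np.asarray(sizes, dtype=float)
    if t0 is None:
        t0 = t[0]
    return e0 + rate * (t - t0) - np.cumsum(s)

# toy example: three bursts over three days with a constant 392 counts/day input
print(reservoir_after_bursts([0.0, 1.0, 3.0], [200.0, 150.0, 400.0], rate=392.0, e0=300.0))
```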
The interval $`B`$, detailed in Figure 2a, demonstrates this behavior. The cumulative fluence is bounded above by a linear function of time, corresponding to a rate of 392 counts per day. The maximum deviation below the line is comparable to the size of the largest bursts. Assuming that energy flows into a reservoir at this constant rate and is released only as the catalogued bursts, we can model the energy in the reservoir by subtracting the emitted fluence from the integrated input energy, as shown in Figure 2b. The bursts have an apparent tendency to keep the reservoir at low levels—the cumulative fluence tends to stay near the linear function. A statistical analysis (see Section 2.2) shows that, if these same bursts were arranged randomly, the cumulative fluence would tend to deviate much more from the best linear rate than is observed here. This is very strong evidence that these bursts come from a relaxation system.
Interval $`C`$ may be a continuation of this relaxation system. Its average rate is consistent with that of $`B`$ and, if it is assumed that a few of the many bursts in the intervening time are from this system, the two intervals can be combined in a plot qualitatively similar to Figure 2. However, the interval between $`B`$ and $`C`$ is the most active period ever seen for this source, including a single hour which has approximately as much fluence as the entire 5-day interval $`B`$. That this violent activity does not affect the parameters of the relaxation system suggests that it comes from a physically independent site, perhaps a different location on the neutron star.
The 9-burst interval $`A`$, detailed in Figure 3, is also consistent with a relaxation system if you omit the single burst that occurs at 1983 DOY 297.940. The remaining bursts are consistent with a constant-$`R`$ relaxation system in which most of the bursts are total releases of the reservoir energy. The rate for $`A`$ is a factor of 20 below that of $`B`$ and $`C`$, suggesting that it is a different system. The high statistical quality of the relaxation system fit to those eight bursts clearly identifies the remaining burst as an interloper (Section 2.2).
The interval $`D`$ (11 bursts) appears to have two different energy accumulation rates differing by $`\sim `$40% (57 counts/day for 5 bursts $`D_i`$, then 96 counts/day for 6 bursts $`D_{ii}`$). These rates are comparable to each other, and far from those of $`A`$ and $`B`$, so $`D`$ may represent a single system that speeds up slightly between $`D_i`$ and $`D_{ii}`$, rather than two different systems.
### 2.1 Cumulative Fluence as an Integrating Bolometer
The sum of the catalogued burst intensities is a good measure of the integrated burst energy emitted by the object to the extent that a) the catalog contains all bursts above a certain threshold and is uncontaminated, b) the bursts below that threshold contain only a small fraction of the total energy, c) the detected counts are proportional to the energy released in the direction of the detector, and d) the fraction of energy released in the direction of the detector is constant and, specifically, independent of the neutron star spin phase. Violation of any of these conditions would cause ‘noise’ in our analysis, which could distort our understanding of the source’s burst energy output.
a) catalog completeness: ICE was an interplanetary spacecraft, and so its observations were not continually interrupted by occultations, as Earth satellite observations often are, nor hampered by rapidly varying background from orbiting within Earth’s magnetosphere. ICE thus provides a long, continuous, and stable set of measurements resulting in a uniform catalog with good completeness down to the instrument’s sensitivity limit of $`\sim `$16 counts. The false-trigger rate in the catalog is estimated to be $`<1\mathrm{year}^{-1}`$ (Laros et al. 1987).
b) subthreshold bursts: The observed burst intensity distribution power law, $`S^{-1.66}`$ (Cheng et al. 1995), has an index $`\gamma >-2`$, which places most of the energy in the largest bursts up to the high-E cutoff. Recent XTE PCA observations find that this power-law distribution extends to bursts far below the ICE threshold (Dieters, priv. comm.). An extrapolation to zero of the integrated energy as a function of burst size indicates that $`28\%`$ of the burst energy is sub-threshold.
c) intensity-energy relationship—The catalogued burst size (the number of counts detected in the 26-40 keV channel of a scintillator) is proportional to the total energy fluence incident on the detector if the bursts always have similar spectra. Fenimore, Laros & Ulmer (1994) find that the spectral shape of bursts from this source is largely independent of the burst fluence with a small scatter in the burst hardness. The spectral fits indicate that each count represents the emission of $`4\times 10^{38}\mathrm{erg}`$ of X/$`\gamma `$-rays.
d) isotropic emission and rotational modulation—SGRs are rotating neutron stars, and anisotropic emission would make the relationship between total emitted energy and the detected counts dependent on the neutron star’s rotational phase. Fourier analysis of the times of bursts in interval $`B`$ (which, coming from a single system, might be expected to show the strongest phase coherence) showed no significant modulation for periods between 7.40 and 7.48 seconds—the reasonable range of extrapolations to 1983 of the Kouveliotou et al. (1998) measured period and spindown rate. Weighting the times either directly or inversely with $`S_i`$ showed that both strong and weak bursts were independent of spin phase.
These conditions merely ensure that the measured running sum of burst sizes is a good approximation to the total emitted burst X/$`\gamma `$-ray energy. The energy which flows into the reservoir may escape, undetected, through other channels of non-burst or non-X/$`\gamma `$ energy release. However, as the data show, any energy leaks are not severe enough to conceal SGR 1806-20’s relaxation system behavior.
### 2.2 Statistical Analysis
Any set of events can be trivially described as a relaxation system, assuming a sufficiently large reservoir and an arbitrary set of release times and sizes. However, relaxation systems with specific properties can be distinguished from random systems by use of a test statistic designed to detect those properties. If the observed value of this test statistic is outside of the range expected for a random process, then that is proof that the system is non-random, and evidence of a physically-significant relaxation system.
For a relaxation system with a constant accumulation rate, a reservoir small compared to the total emitted energy, and a tendency to release a large fraction of the available energy, a promising statistic is the Sum Of Residuals (SOR). This is the sum of the energies left in the reservoir immediately after each burst, $`SOR=\sum _iE(t_i^+)`$. Since the contents of the reservoir and the accumulation rate are not directly observable, the SOR is minimized with respect to a constant rate $`r`$ and an empty-reservoir time $`t_0`$ with the constraint that each residual $`E(t_i^+)=(t_i-t_0)r-\sum _j^{j\le i}S_j`$ must be $`\ge 0`$ for all $`i`$.
The SOR value can be calculated for the observed data and then, by a bootstrap method, compared to the distribution of SOR values calculated for randomized versions of the data. For this analysis, the randomized data is produced by ‘shuffling’ (selection without replacement) the burst intensities while keeping the burst times the same. This procedure trivially preserves all previously-known characteristics of the data (intensity distribution, interval distribution, and interval-interval correlation) to ensure that the relaxation system behavior is not an artifact of these characteristics.
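A compact implementation of this test is sketched below. It is our own simplification: the rate is minimized over a coarse one-dimensional grid rather than exactly, and the function names are hypothetical.

```python
import numpy as np

def sor(times, sizes, n_grid=2001):
    """Minimum Sum Of Residuals over a constant rate r and empty-reservoir time t0.

    For a fixed r the optimal t0 makes the smallest residual exactly zero, so only
    a one-dimensional scan over r is needed (coarse grid, for illustration only).
    """
    t = np.asarray(times, dtype=float)
    c = np.cumsum(np.asarray(sizes, dtype=float))      # fluence up to and including burst i
    r_avg = c[-1] / (t[-1] - t[0])
    best = np.inf
    for r in np.linspace(0.5 * r_avg, 3.0 * r_avg, n_grid):
        residuals = r * t - c - np.min(r * t - c)      # all >= 0 by construction
        best = min(best, residuals.sum())
    return best

def shuffle_significance(times, sizes, n_trials=1000, seed=0):
    """Fraction of intensity-shuffled catalogues whose SOR falls below the observed one."""
    rng = np.random.default_rng(seed)
    observed = sor(times, sizes)
    hits = sum(sor(times, rng.permutation(sizes)) < observed for _ in range(n_trials))
    return hits / n_trials
```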
The entire 134-burst catalog was searched for the sub-interval of $`N=33`$ bursts with the lowest SOR. This located the interval $`B`$. Then, for each of $`10^6`$ trials, the burst intensities were shuffled as described above, and the randomized catalog was again searched for the $`N`$-burst interval with the lowest SOR. (In most cases, this interval was essentially the same as interval $`B`$.) In only 21 of $`10^6`$ trials was the SOR lower than that of the observed data. (This result is moderately insensitive to the value of $`N`$, giving similar values for $`N=30`$ and $`N=35`$, and increasing by an order of magnitude at $`N=25`$.) This demonstrates at the 99.998% confidence level that the intensities are inconsistent with chance, and are correlated with burst times in a manner that implies a relaxation system during this time period.
With the existence of relaxation systems demonstrated, they can be sought in other intervals, with the randomization restricted to specific sets of consecutive bursts. For all 9 bursts in $`A`$ (DOY 294.867-312.797) the SOR statistic does not distinguish the measurements from the randomized trials. However, when the SOR minimization is permitted to discard any one burst (from each of the observed and randomized trials), the observed data is non-random at the $`6.6\times 10^{-5}`$ level. This is evidence, not just that 8 of the bursts are from a relaxation system, but that the remaining burst belongs to a different system. The false-trigger rate in this catalog is estimated to be $`<1\mathrm{year}^{-1}`$ (Laros et al. 1987), or a $`<5\%`$ chance of occurring during this time interval. This implies that the extra burst was from SGR 1806-20, but was not from the system responsible for the other bursts during $`A`$.
Interval $`D`$, as fit by a relaxation system with a single rate change, is non-random at the 97% level. This result, although marginal, suggests that systems on SGR 1806-20 can change their accumulation rate.
## 3 Discussion
As shown in this Letter, some of the bursts from SGR 1806-20 come from relaxation systems. Additional examples can be found in the burst history, and parsimony suggests that all SGR bursts (except, perhaps, superbursts) are from such systems. This would not always be easily demonstrable, even if observations meet all of the requirements of Section 2.1. Single systems that produce only a few bursts, rapidly vary their accumulation rates, or have reservoirs large compared to the typical burst size could be indistinguishable from random. Multiple simultaneously active systems could be difficult to disentangle.
The accumulation rates for the intervals discussed in this paper are as low as 19 counts/day (equivalent to $`10^{34.9}\mathrm{erg}\mathrm{s}^{1}`$) for the 18-day interval $`A`$, and as high as 392 counts per day (equivalent to $`10^{36.3}\mathrm{erg}\mathrm{s}^{1}`$) for interval $`B`$. Accumulation rates up to 80,000 counts/day (equivalent to $`10^{38.6}\mathrm{erg}\mathrm{s}^{1}`$) are seen during the peak hour of activity on 1983 Nov 16, probably in one or a few relaxation systems. There are also quiescent intervals when no bursts are seen for years—between 1985 Aug and 1993 Sep, the available instruments (which provide incomplete coverage) recorded no bursts that are attributed to SGR 1806-20.
If all SGR bursts are from relaxation systems, the available data show that there can be multiple systems active simultaneously. The conditions which activate such systems are global to the SGR—several systems are apparent in the months covered by this Letter, but there are years when no systems are active. However, each system is independent in that it has its own accumulation rate and reservoir.
Relaxation systems are incompatible with SGR models in which each burst is produced by the accretion of a distinct object, such as neutron stars invading another star’s Oort cloud. Continuous accretion with episodic burning, as in a nova, is a type of relaxation system, but it would be difficult for accretion to independently feed multiple sites, each with its own rate and beginning and ending times.
This analysis extends the earthquake analogy of Thompson & Duncan (1995) beyond the similarity of the $`S^{1.66}`$ intensity distribution and the interval relationships found by Cheng et al. (1995). Seismic regions are relaxation systems, driven by quasi-steady accumulation of tectonic stress due to continental drift, with sudden, incomplete releases of energy as earthquakes. Tsuboi (1965), using an analysis similar to this Letter’s, found that the energy released by earthquakes in and near Japan during 1885-1963 is consistent with a constant input rate into a finite reservoir.
Further studies of this aspect of SGRs can be made using continuously-operating gamma-ray instruments on interplanetary spacecraft, such as those on Ulysses, Near Earth Asteroid Rendezvous and Wind. Although XTE is continually interrupted by Earth occultations, its high sensitivity might allow it to determine if the smallest, most frequent bursts also demonstrate the behavior of relaxation systems over short time intervals. The comparison of SGRs with earthquakes may improve our understanding of both types of events.
|
no-problem/9901/astro-ph9901226.html
|
ar5iv
|
text
|
# Abundance ratios in hierarchical galaxy formation
## 1 Introduction
Simulations of hierarchical galaxy formation in a CDM universe (Kauffmann, White & Guiderdoni 1993; Cole et al. 1994) are based on semi-analytic models in the framework of Press-Schechter theory. They aim to describe the formation of galaxies in a cosmological context, and therefore are designed to match a number of constraints like $`B`$ and $`K`$ luminosity functions, the Tully-Fisher relation, redshift distribution, and slope and scatter of the colour-magnitude relation (CMR). Since in a bottom-up scenario more massive objects form later, and are therefore younger and bluer – in contrast to the observed CMR – the original works were substantially suffering from the problem of creating luminous red elliptical galaxies \[Kauffmann et al. 1993, Lacey et al. 1993, Cole et al. 1994\]. It is also not a priori clear if these models which yield strong evolution at intermediate redshift (Kauffmann, Charlot & White 1996) would be able to reproduce the small scatter of the CMR and the Mg-$`\sigma `$ relation. On the other hand, these constraints can be easily explained by the classical single burst picture for elliptical galaxy formation, assuming passive evolution after a short formation epoch at high redshift, provided that subsequent stellar merging is negligible (Bower, Kodama & Terlevich 1998). However, the more recent models by Kauffmann (1996, hereafter K96) and Baugh, Cole & Frenk \[Baugh et al. 1996\] reproduce the above relations with a scatter which is in remarkably good agreement with observational data (Bower, Lucey & Ellis 1992; Bender, Burstein & Faber 1993; Jørgensen, Franx & Kjærgaard 1996). The correct slope of the CMR can be obtained in a hierarchical scheme, if metal enrichment and metallicity-dependent population synthesis models are taken into account (Kauffmann & Charlot 1998; Cole et al., in preparation). In such models, the slope of the CMR is only driven by metallicity, as in the classical models by Arimoto & Yoshii \[Arimoto & Yoshii 1987\]. Note however, that in the framework of the inverse wind models \[Matteucci 1994\] the CMR slope could be in principle produced from a combination of both age and metallicity, since in these models more massive galaxies are assumed to be older.
In this paper I aim to discuss how far hierarchical formation models are able to meet a further important constraint, namely the formation of $`\alpha `$-enhanced stellar populations hosted by luminous ellipticals (Peletier 1989; Worthey, Faber & González 1992; Davies, Sadler & Peletier 1993). Models of chemical evolution show that this constraint can be matched by the single collapse model, characterised by short star formation time-scales of the order $`10^810^9`$ yr (e.g. Matteucci 1994; Thomas, Greggio & Bender 1999; Jimenez et al. 1999), and it has been questioned by Bender \[Bender 1997\] whether the observed \[Mg/Fe\] overabundance is compatible with hierarchical models. However, abundance ratios and the enrichment of $`\alpha `$-elements have not been investigated so far in hierarchical models. For this purpose, I consider typical star formation histories provided by semi-analytic models for hierarchical galaxy formation (K96) and explore the resulting abundance ratios of magnesium and iron.
The paper is organised as follows. In Section 2 the model of chemical enrichment is briefly described. Average and bursty star formation histories are then analysed in Sections 3, 4 and 5. The results are discussed and summarised in Sections 6 and 7.
## 2 The model
The idea is to follow the chemical evolution of the galaxy describing the global chemical properties of the final object and its progenitors as a whole during the merging history. In the hierarchical clustering scheme, structures are subsequently built up starting from small disc-like objects. An elliptical is formed when two disc galaxies of comparable mass merge, which is called the ‘major merger’. Before this event many ‘minor mergers’ between a central galaxy and its satellite systems happen. It is important to emphasise that the bulk of stars, namely $`7090`$ per cent, forms at modest rates in the progenitor disc-galaxies before this ‘major merger’ event. In the star burst ignited by the latter, up to 30 per cent of the total stellar mass of the elliptical is created. In K96 many Monte Carlo simulations are performed, each corresponding to one individual (final) galaxy. The SFH which is specified in K96 and adopted for the present analysis is an average over all these realizations.
### 2.1 Star formation rates
The SFH is characterised by the age distribution of the stellar populations weighted in the $`V`$-band light. In other words, the fractional contributions $`L_V^{\mathrm{SP}}`$ by stellar populations to the total $`V`$-light as a function of their ages $`t`$ are specified. Equation 1 shows how this translates into a star formation rate $`\psi `$:
$$L_V^{\mathrm{SP}}(t_0,t_1)=\frac{\int _{t_0}^{t_1}\psi (t)L_V^{\mathrm{SSP}}(t)dt}{\int _0^{t_{\mathrm{univ}}}\psi (t)L_V^{\mathrm{SSP}}(t)dt}.$$
(1)
Here, the interval $`[t_0,t_1]`$ denotes the age bin of the population, $`t_{\mathrm{univ}}`$ is the assumed age of the universe, i.e. $`t_{\mathrm{univ}}\approx 13`$ Gyr for the cosmology adopted in K96 ($`\mathrm{\Omega }_m=1,\mathrm{\Omega }_\mathrm{\Lambda }=0,h=0.5`$). The $`V`$-light of a simple stellar population $`L_V^{\mathrm{SSP}}(t)`$ as a function of its age and metallicity is taken from Worthey \[Worthey 1994\]. In this paper, the star formation rate $`\psi `$ as a function of time is chosen such that a set of stellar populations of various ages and metallicities is constructed which exactly covers the age distribution of the K96 models.
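Equation (1) can be inverted bin by bin if $`\psi `$ is taken to be constant within each age bin. The following sketch illustrates this; the fading law used for $`L_V^{\mathrm{SSP}}`$ is a crude power-law stand-in for the Worthey (1994) models, and the function names and example numbers are invented for illustration only.

```python
import numpy as np

def lv_ssp(age_gyr):
    """V-band luminosity of a simple stellar population per unit mass formed.
    A crude power-law fading is used purely as a stand-in for the Worthey
    (1994) tables; the index -0.8 is an illustrative assumption."""
    return np.maximum(age_gyr, 0.01) ** -0.8

def sfr_weights(bin_edges_gyr, light_fractions, n_sub=200):
    """Relative star formation rate psi per age bin, obtained from the
    fractional V-light contributions L_V^SP(t0, t1) of equation (1),
    assuming psi is constant within each bin."""
    psi = np.empty(len(light_fractions))
    for i, (t0, t1) in enumerate(zip(bin_edges_gyr[:-1], bin_edges_gyr[1:])):
        ages = np.linspace(t0, t1, n_sub)
        lum_per_unit_psi = lv_ssp(ages).mean() * (t1 - t0)   # bin integral of L_V^SSP
        psi[i] = light_fractions[i] / lum_per_unit_psi
    return psi / psi.sum()

# illustrative: look-back-time bins in Gyr and their V-light fractions,
# with the oldest bin dominating as in Fig. 1
edges = np.array([0.0, 3.0, 6.0, 10.0, 13.0])
fractions = np.array([0.05, 0.08, 0.15, 0.72])
print(sfr_weights(edges, fractions))
```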
### 2.2 Chemical enrichment
The chemical evolution is calculated by solving the usual set of differential equations (e.g. Tinsley 1980). The enrichment process of the elements hydrogen, magnesium, iron, and total metallicity is computed, no instantaneous recycling is assumed. In particular, the delayed enrichment by Type Ia supernovae is taken into account following the model by Greggio & Renzini \[Greggio & Renzini 1983\]. The inclusion of Type Ia is crucial for interpreting Mg/Fe ratios, since SNe Ia substantially contribute to the enrichment of iron. The evolution code is calibrated on the chemical evolution in the solar neighbourhood (Thomas, Greggio & Bender 1998). The ratio of SNe Ia to SNe II is chosen such that observational constraints like supernova rates, the age-metallicity relation and the trend of Mg/Fe as a function of Fe/H in the solar neighbourhood are reproduced. The SN II nucleosynthesis prescription is adopted from Thielemann, Nomoto & Hashimoto \[Thielemann et al. 1996\] and Nomoto et al. \[Nomoto et al. 1997\], because the stellar yields from Woosley & Weaver \[Woosley & Weaver 1995\] are unable to account for the \[Mg/Fe\] overabundance in the solar neighbourhood \[Thomas et al. 1998\].
The IMF is truncated at $`0.1M_{\odot }`$ and $`40M_{\odot }`$. The slope above $`1M_{\odot }`$ is treated as a parameter; below this threshold a flattening according to recent HST measurements (Gould, Bahcall & Flynn 1997) is assumed. This flattening at the low-mass end does not significantly affect $`\alpha `$/Fe ratios. It should be noted that stars more massive than $`40M_{\odot }`$ are expected to play a minor role in the enrichment process, because of their small number and the fall-back effect \[Woosley & Weaver 1995\]. Moreover, the mean \[Mg/Fe\] overabundance in the metal-poor halo stars of our Galaxy is well reproduced for an IMF with the above truncation and Salpeter slope $`x=1.35`$ \[Salpeter 1955\], if Thielemann et al. nucleosynthesis is adopted \[Thomas et al. 1998\]. The $`V`$-luminosity averaged abundances in the stars are computed by using SSP models from Worthey \[Worthey 1994\]. For details I refer the reader to Thomas et al. (1998, 1999).
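As a purely qualitative illustration of why the shape of the SFH controls Mg/Fe, the toy one-zone integrator below treats SN II enrichment as instantaneous and delays the SN Ia iron with an exponential delay-time distribution. The yields, the delay time and the example SFHs are placeholders, not the calibrated ingredients (Greggio & Renzini 1983 SN Ia model, Thielemann et al. 1996 yields) used in the actual calculation.

```python
import numpy as np

# Placeholder IMF-averaged yields per unit mass of stars formed and an
# exponential SN Ia delay-time distribution; only the qualitative trend matters.
Y_MG_II, Y_FE_II = 1.0e-3, 0.6e-3     # prompt enrichment by SNe II
Y_FE_IA = 1.2e-3                      # delayed iron from SNe Ia
TAU_IA = 1.0                          # Gyr, SN Ia delay time scale

def enrichment(times, sfr):
    """Cumulative Mg and Fe released into the ISM by time t, with prompt
    SN II yields and SN Ia iron delayed by an exponential delay-time law."""
    formed = sfr * np.gradient(times)                 # mass formed per time step
    mg = np.cumsum(Y_MG_II * formed)
    fe = np.cumsum(Y_FE_II * formed)
    fe_ia = np.array([Y_FE_IA * np.sum(
        formed[:i + 1] * (1.0 - np.exp(-(t - times[:i + 1]) / TAU_IA)))
        for i, t in enumerate(times)])
    return mg, fe + fe_ia

def stellar_mean_ratio(times, sfr):
    """Mass-weighted mean Mg/Fe of the stars; each generation inherits the
    ISM ratio at its formation time (epochs with no Fe yet are skipped)."""
    mg, fe = enrichment(times, sfr)
    ratio = np.divide(mg, fe, out=np.full_like(mg, np.nan), where=fe > 0)
    w = sfr * np.gradient(times)
    ok = np.isfinite(ratio)
    return np.sum(w[ok] * ratio[ok]) / np.sum(w[ok])

t = np.linspace(0.0, 12.0, 1200)                      # Gyr
burst = np.where(t < 0.5, 1.0, 0.0)                   # all stars made within 0.5 Gyr
extended = np.exp(-t / 4.0)                           # extended, K96-like SFH
for label, s in (("burst   ", burst), ("extended", extended)):
    print(label, "stellar <Mg/Fe> (linear) =", round(stellar_mean_ratio(t, s), 2))
```

The short burst locks its stars in before most SNe Ia explode and therefore ends up with the higher mean Mg/Fe, which is the behaviour the full model quantifies.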
Finally, two simplifications of the present model should be mentioned:
1. A one-phase interstellar medium (ISM) is assumed. Kauffmann & Charlot \[Kauffmann & Charlot 1998\], instead, distinguish between cold and hot gas allowing mass transfer among these two components. Since stars form out of cold gas, this process basically controls the feed-back mechanism of star formation. The resulting effects on the star formation rate, however, are already covered, since I directly adopt the SFH from K96. The influence on the abundance ratio of Mg and Fe is expected to be negligible, as long as these elements do not mix on significantly different time-scales.
2. The final galaxy is treated as the whole single-unit from the beginning of its evolution, the economics of the individual progenitors are not followed separately as in K96. Hence, the simulations do not directly include galactic mass loss taking place during disc galaxy evolution (K96) as well as mass transfer between the progenitor systems. The fraction of gas converted to stars is adjusted to global metallicities typically observed in the respective galaxy type. The impact on the global properties of the final galaxy, however, is expected to be small. In particular, abundance ratios of non-primordial elements like Mg and Fe are not significantly affected by mass loss and gas transfer, unless one allows for selective mechanisms. However, in the framework of the present analysis, this option shall only be discussed in a qualitative fashion.
## 3 Global populations
Figs. 1 and 2 show the global average SFHs as they are constrained for cluster elliptical and spiral galaxies in the K96 model. The lower x-axis denotes ages (look-back time); the evolutionary direction of time therefore goes from right to left. The evolution with redshift for an $`\mathrm{\Omega }_m=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ cosmology (adopted in K96) is given by the upper x-axis. The histogram shows the quantity $`L_V^{\mathrm{SP}}(t_0,t_1)`$ as it is explained in Section 2. In the elliptical case (Fig. 1), more than 70 per cent of the light in the $`V`$-band comes from stars which have formed in the first 3 Gyr at redshift $`z\gtrsim 2`$. The SFH of the spiral (Fig. 2), instead, is much more extended towards lower redshift. With a $`V`$-mean age of 6.2 Gyr, the latter thus leads to significantly younger populations than the SFH of the average elliptical (10.4 Gyr). The plus signs in Figs. 1 and 2 denote the fractional contributions from the various populations to the total mass of the object. Owing to their stellar remnants, old populations contribute more to the total mass than to the light of the object. As a consequence, in the optical band the weight of young (old) populations is larger (smaller) with respect to their actual mass. This pattern is particularly prominent in the young and old spiral populations.
The abundance ratios of Mg/Fe (left-hand panels) and Fe/H (right-hand panels) as they result from the chemical evolution model are shown with the scale on the right y-axis, respectively. The chemical evolution of the ISM is denoted by the solid line, the filled star symbols show the $`V`$-luminosity weighted average abundance ratios of the respective composite stellar population of the galaxy as a function of age (hence formation redshift). The filled circles in the left-hand panel of Fig. 1 represent the stellar mean Mg/Fe ratio if a global flattening of the IMF with $`x=0.8`$ is assumed. Otherwise, Salpeter slope above $`1M_{}`$ ($`x=1.35`$) is chosen.
In general, the mean abundances in the stellar populations differ from those in the ISM. Metallicity (Fe/H) is always higher, and Mg/Fe always lower in the ISM than in the $`V`$-average of the stars, because the stars archive the abundance patterns of early epochs. Since, in the case of the elliptical, star formation is much more skewed towards early times, this discrepancy is more significant in such objects. The gas fraction which is ejected from the galaxy is chosen such that the global stellar populations of the elliptical have solar metallicity at low redshift (Fig. 1). This leads to an ejection of roughly 30 per cent of the baryonic mass of the galaxy during its evolution. In the case of the spiral (Milky Way), instead, 55 per cent of the gas are assumed to be ejected in order to yield solar metallicity in the ISM after 9 Gyr, roughly when the Sun was born (Fig. 2).
As a consequence, the metallicity of the ISM and the stars is higher in the elliptical than in the spiral. The Mg/Fe ratios, instead, depend mainly on the shape of the star formation rate as a function of time. Since there is no significant star formation at late times in the elliptical, $`N_{\mathrm{SNII}}/N_{\mathrm{SNIa}}`$ is 20 times lower than in the spiral galaxy, which leads to a lower Mg/Fe in the ISM by roughly 0.1 dex. The stars, instead, store the chemical abundances of the early epochs: $`[\mathrm{Mg}/\mathrm{Fe}]_\mathrm{V}`$ is 0.04 dex higher in the elliptical galaxy than in the spiral, which is in accordance with its older mean age. Still, the stellar populations in both galaxy types exhibit roughly solar Mg/Fe ratios at low redshift, in particular $`[\mathrm{Mg}/\mathrm{Fe}]_\mathrm{V}\approx 0.04`$ dex for the elliptical. In spite of the negligible contribution of the late star formation to the mean age of the galaxy, the mean \[Mg/Fe\] ratio is driven significantly below $`0.2`$ dex (Fig. 1). This result clearly stands in disagreement with the observational indications that (bright) elliptical galaxies host stellar populations which are $`\alpha `$-enhanced by at least $`0.20.3`$ dex \[Peletier 1989, Worthey et al. 1992, Davies et al. 1993\].
Fig. 1 demonstrates that with Salpeter IMF a super-solar Mg/Fe ratio is only obtained at the very early stages of the evolution, namely in the first Gyr. A star formation mode which is more skewed towards these early formation ages would lead to older ages and a higher degree of $`\alpha `$-enhancement. One may argue that the average SFH considered here does not apply to bright ellipticals in which the observed $`\alpha `$-enhancement is most significant. However, in the K96 simulations brighter objects are on average younger \[Kauffmann & Charlot 1998\], and do not experience SFHs that are more skewed towards high redshift than the present example. They are therefore unlikely to exhibit higher Mg/Fe ratios than calculated here.
The star formation time-scales that are required to obtain \[$`\alpha `$/Fe\]$`0.2`$ for different IMF slopes are discussed in Thomas et al. \[Thomas et al. 1999\]. The filled circles in the upper-left panel of Fig. 1 demonstrate that the shallower IMF with $`x=0.8`$ can reconcile the extended SFH with $`\alpha `$-enhancement. However, particularly in the hierarchical picture in which the formation of different galaxy types can be traced back to the same (or similar) building blocks at early epochs, a flattening of the IMF restricted to elliptical galaxies does not seem to provide a compelling solution.
From Fig. 1 one can also understand that only galaxy formation scenarios in which star formation terminates after $`12`$ Gyr lead to $`[\mathrm{Mg}/\mathrm{Fe}]_\mathrm{V}\ge 0.2`$ dex. The classical monolithic collapse provides such a star formation mode. In the hierarchical scheme, instead, (low) star formation at later stages is unavoidable, which directly leads to lower Mg/Fe ratios in the stars that form out of the highly SNIa enriched ISM at lower redshift.
## 4 Burst populations
The SFHs considered here apply to the global properties of the galaxy populations. However, in the picture of hierarchical clustering, star bursts that are induced by mergers play an important role for the interpretation of (particularly central) abundance features. The star burst during the major merger forms $`1030`$ per cent of the final stellar mass, which is likely to dominate the galaxy core, since the gas is driven to the central regions of the merger remnant \[Barnes & Hernquist 1996\]. In this section I shall investigate the abundance patterns of the population that results from a $`0.1`$ Gyr star burst triggered by the major merger which forms the final elliptical.
### 4.1 Universal IMF
In Fig. 1 the dotted histogram denotes the distribution of these major merger events among the elliptical galaxy population. At the given epoch when the star burst occurs, the new stellar population forms out of the ISM whose abundances are shown by the solid line in Fig. 1. Owing to the short duration of the star burst, the mean Mg/Fe ratios in the newly created stars are higher than the initial values in the ISM as indicated by the arrows and the dotted line (left-hand panel of Fig. 1). The slope of the IMF is assumed Salpeter also during the burst. It turns out that, due to the extremely low Mg/Fe ratios and the high metallicities in the ISM at the merger epoch, such star bursts are inappropriate to raise Mg/Fe up to a level consistent with observational values. The resulting Mg/Fe does not differ from the mean average in the global stellar populations.
The metallicity Fe/H, instead, increases to even higher values between 0.3 and 0.5 dex, depending on the burst epoch (upper-right panel of Fig. 1). Under the assumption that the entire central population forms in the major burst the model leads to radial metallicity gradients of $`0.30.5`$ dex from the inner to the outer parts of the galaxy, in contradiction to observational measurements of $`0.2`$ dex per decade \[Davies et al. 1993, Kuntschner 1998\]. A mixture of $`3/41/3`$ between burst and global population in the galaxy nucleus is required to smooth the gradient down to the observed value.
The results shown in Fig. 1 unfold the principal dilemma of the hierarchical picture: the initial conditions in the ISM at the burst epoch, namely super-solar metallicity and sub-solar Mg/Fe, are highly unfavourable for producing $`\alpha `$-enhanced populations. As shown in Thomas et al. \[Thomas et al. 1999\], this incapability of late merger events is independent of the burst time-scale and SN Ia rates during the burst. The most promising way out to reconcile an intermediate or late merger with $`\alpha `$-enhancement is to assume a flattening of the IMF during the star burst.
### 4.2 Variable IMF
The arrows and dotted lines in Fig. 3 show the abundance patterns (left-hand panel: Mg/Fe, right-hand panel: Fe/H) of the burst population if a shallow IMF ($`x=0.8`$) during the burst is assumed. The chemical evolution of the global ISM is based on Salpeter IMF ($`x=1.35`$) as before. It turns out that, with a significant flattening of the IMF, the major merger burst produces a metal-rich $`\alpha `$-enhanced stellar population. As demonstrated in the left-hand panel of Fig. 3, the bulk of ellipticals experience their major merger at look-back times of $`79`$ Gyr ($`z\approx 0.71.5`$) and therefore exhibit significant $`\alpha `$-enhancement in the burst population.
## 5 Global–burst mixtures
The question is how the global and the burst populations mix. The value $`x=0.8`$ represents the minimum flattening required if the pure burst population is what is observed, i.e. in the centre of the galaxy. Apart from the fact that a complete absence of mixing between the two population types seems unlikely, it would also produce metallicity gradients that are much too steep. Table 1 gives the average abundance ratios for different mixtures of the global (IMF slope $`x=1.35`$) and the 8 Gyr-burst populations ($`x=0.8`$). \[Fe/H\]’ denotes the metallicity if the efficiency of the star burst is reduced from 99 to 50 per cent. Mg/Fe is not significantly affected \[Thomas et al. 1999\].
If the burst population mixes homogeneously with the global population by providing 30 per cent of the total $`V`$-light, the resulting \[Mg/Fe\] is reduced to 0.11 dex (row 3 in Table 1). For such proportions, the extreme flattening of $`x=0`$ would be required to obtain a significantly $`\alpha `$-enhanced population of \[Mg/Fe\]=0.2 dex. More likely, however, is that the burst populations are more prominent in the inner parts due to dissipation processes in the burst gas \[Barnes & Hernquist 1996\]. Table 1 can then be interpreted in terms of increasing galaxy radius from bottom to top. Abundance gradients result according to the global-burst mixtures assumed at different radii of the object.
I shall briefly discuss the following model, in which I roughly divide the galaxy into three zones separated by the inner radius $`r_i`$ and the effective radius $`r_e`$. The resulting stellar abundance ratios in each zone for the respective different global-burst mixtures are shown in Table 2. Column 2 gives the $`V`$-light fraction of each zone, namely 5 per cent from the innermost part, and half of the light within the effective radius $`r_e`$. Altogether, the fractional contributions are chosen such that the burst population contributes 30 per cent of the total $`V`$-light of the galaxy. The star formation efficiency of the burst population is assumed to be 50 per cent, in order to avoid metallicities that exceed observational determinations. In Table 2, the quota of the burst population decreases steadily towards the outer zones of the galaxy, which leads to gradients such that both Mg/Fe and Fe/H are decreasing with increasing radius.
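The zone bookkeeping of Table 2 can be reproduced schematically by luminosity-weighting the two populations. The sketch below assumes that the weighting is applied to the linear abundance ratios before converting back to dex, and the input numbers are placeholders rather than the actual entries of Tables 1 and 2.

```python
import numpy as np

def mix_dex(dex_global, dex_burst, f_burst):
    """Combine two populations' abundance ratios (in dex) by weighting the
    linear ratios with the burst population's V-light fraction f_burst
    (one possible averaging convention)."""
    lin = (1.0 - f_burst) * 10.0 ** dex_global + f_burst * 10.0 ** dex_burst
    return float(np.log10(lin))

# placeholder inputs: a roughly solar global population and a metal-rich,
# alpha-enhanced burst population
global_pop = {"[Mg/Fe]": 0.05, "[Fe/H]": 0.00}
burst_pop = {"[Mg/Fe]": 0.25, "[Fe/H]": 0.30}
zone_burst_fraction = {"r < r_i": 0.9, "r_i < r < r_e": 0.4, "r > r_e": 0.1}

for zone, f in zone_burst_fraction.items():
    mixed = {k: round(mix_dex(global_pop[k], burst_pop[k], f), 2) for k in global_pop}
    print(zone, mixed)
```

With burst fractions that fall off with radius, both ratios decline outwards, which is the gradient structure described in the text.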
The galaxy nucleus is the most metal-rich and significantly $`\alpha `$-enhanced; the gradients of both Mg/Fe and Fe/H within $`r_e`$, however, are rather shallow. Thus this model predicts that the Mg/Fe overabundance does not decline appreciably out to $`r_e`$, in accordance with Worthey et al. \[Worthey et al. 1992\] and Davies et al. \[Davies et al. 1993\]. It is interesting to mention that the tendency of a slight decrease in $`\alpha `$-enhancement is found in the Coma cluster sample analysed by Mehlert et al. (in preparation). The gradient in metallicity as given in Table 2 is also consistent with estimates by Davies et al. \[Davies et al. 1993\] and Kuntschner \[Kuntschner 1998\].
Finally it should be emphasized that reasonable metallicities are only achieved if star formation during the burst is assumed to be only 50 per cent efficient. Table 1 shows that otherwise metallicities above $`3Z_{\odot }`$ would be obtained in the centre, which exceeds observational determinations \[Kuntschner & Davies 1998\].
## 6 Discussion
It should again be emphasized that the SFHs discussed here are averages over many Monte Carlo realizations, individual galaxies are therefore not considered. This approach is sensible because also the observational constraint is taken to be a mean $`\alpha `$-enhancement about which the values measured for the single galaxies scatter. Still, in a more detailed analysis it would be of great interest to resolve the SFHs of individual galaxies and to compare the theoretical scatter in $`\alpha `$-enhancement with the observed one.
In the following I will discuss uncertainties in chemical evolution that may have an important impact on the results. Furthermore possible implications on properties of cluster and field galaxies are mentioned.
### 6.1 SN Ia rate and stellar yields
The model of chemical evolution applied for this analysis is calibrated such that the basic local abundance features are covered \[Thomas et al. 1998\]. Besides stellar yields and IMF, this also includes the prescription of the Type Ia supernova, which is adopted from Greggio & Renzini \[Greggio & Renzini 1983\]. According to this model, the maximum of the SN Ia rate occurs roughly 1 Gyr after the birth of the stellar population. It is obvious that this time-scale has a great impact on the results presented here. From the solar neighbourhood data, one cannot entirely exclude a SN Ia rate which sets in later and stronger. Such a model would basically delay (not prevent) the decrease of the Mg/Fe ratio, hence in the hierarchical clustering scheme the global populations of ellipticals would still not appear $`\alpha `$-enhanced. However, the Mg/Fe ratios in the ISM would be higher at the major burst epochs, leading to more $`\alpha `$-enhancement in the burst populations.
Uncertainties in stellar yields \[Thomas et al. 1998\] leave room to raise the calculated Mg/Fe ratios. Note, however, that the degree of $`\alpha `$-enhancement used as the main constraint in this paper (\[$`\alpha `$/Fe\]$`=0.2`$ dex) represents the lowest limit determined from observational data.
A very early chemical pre-enrichment by PopIII-like objects may induce initial conditions in favour of high Mg/Fe ratios. The results in this paper, however, are not seriously affected by such uncertainties, since the argument is made differentially with respect to the solar neighbourhood chemistry, on which the chemical evolution model is calibrated. Thus details of the overall chemical initial conditions in the early universe do not alter the conclusions.
### 6.2 Selective mass loss
Type Ia supernovae explode later than Type II; thus their ejecta are released at different dynamical stages of the merging history. In a scenario in which Type Ia products (basically iron) are lost more easily, enhanced Mg/Fe ratios are easier to achieve. The examination of this alternative, however, requires a more detailed analysis of the possible outflow and star formation processes during the merging history, which by far exceeds the scope of this paper. On the other hand it is important to notice that $`\alpha `$-enhanced giant elliptical galaxies enrich – for Salpeter IMF – the intracluster medium (ICM) with Mg/Fe underabundant material \[Thomas 1998b\]. A selective loss mechanism as mentioned above further increases this ICM-galaxy asymmetry \[Renzini et al. 1993\] and therefore causes a more striking disagreement with observational measurements which indicate solar $`\alpha `$/Fe ratios in the ICM \[Mushotzky et al. 1996, Ishimaru & Arimoto 1997\].
### 6.3 Cluster vs. Field
The SFH discussed in this paper applies to cluster ellipticals. In such a high density environment, the collapse of density peaks on galaxy scale is boosted to high redshifts (K96). In the low-density surroundings of the field (haloes of $`10^{13}M_{\odot }`$), instead, the situation is entirely different. The K96 model predicts that objects in the field are not destined to be ellipticals or spirals, but actually undergo transformations between both types due to gas accretion (formation of a disc galaxy) or merging of similar sized spirals (formation of an elliptical). As a consequence, the objects we happen to observe as bright ellipticals in the field should mostly have had a recent merger within the past 1.5 Gyr. This leads to younger mean ages, but also to a more extended SFH. Figs. 1 and 3 show that later bursts yield lower Mg/Fe ratios. From the models, one should therefore expect a trend such that (bright) ellipticals in the field are on average less $`\alpha `$-enhanced than their counterparts in clusters (see also Thomas 1998a). Thus, the intrinsic difference between cluster and field ellipticals (K96), which manifests itself most prominently in the properties of bright elliptical galaxies, should be measurable in their $`\alpha `$-element abundance patterns.
## 7 Conclusion
In this paper, I analyse average star formation histories (SFH) as they emerge from hierarchical clustering theory with respect to their capability of producing $`\alpha `$-enhanced abundance ratios observed in elliptical galaxies. For this purpose, the $`V`$-luminosity weighted age distributions for the stellar populations of the model spiral and cluster elliptical galaxy are adopted from Kauffmann (1996, K96).
Owing to the constant level of star formation in spiral galaxies, their SFH leads to roughly solar Mg/Fe ratios in the stars and in the ISM, in agreement with observations in the Milky Way. For elliptical galaxies, hierarchical models predict more star formation at high redshift and therefore significantly older mean ages. However, the calculations in this paper show that their SFH is still too extended to produce a significant degree of global $`\alpha `$-enhancement in the stars. Star bursts ignited by the major merger when the elliptical forms, instead, provide a promising way to produce metal-rich Mg/Fe overabundant populations, under the following assumptions:
1. The nucleus predominantly consists of the burst population formed in the major merger.
2. The burst population provides roughly 30 per cent of the total $`V`$-light of the galaxy.
3. The IMF is significantly flattened ($`x\lesssim 0.8`$) during the burst with respect to the Salpeter value ($`x=1.35`$).
4. The efficiency of star formation during the burst has to be reduced to roughly 50 per cent, in order to guarantee shallow metallicity gradients within the galaxy.
The burst population and its proportion relative to the global population turn out to play a crucial role in producing Mg/Fe overabundance in the framework of hierarchical galaxy formation.
A direct consequence of the model is a slight gradient in $`\alpha `$-enhancement in terms of decreasing $`\alpha `$/Fe with increasing radius. A further implication is that bright field ellipticals are predicted to exhibit lower $`\alpha `$/Fe ratios than their cluster counterparts, due to the younger mean ages and hence more extended SFHs of the former. A future theoretical task will be to directly combine the chemical evolution of $`\alpha `$-elements and iron with semi-analytic models in order to allow for a more quantitative analysis than provided by this paper.
## Acknowledgments
I am very grateful to R. Bender and L. Greggio, who provided the mental foundation stones of this work and gave important comments on the manuscript. G. Kauffmann, the referee of the paper, is particularly acknowledged for very important suggestions that significantly improved the first version. I also thank C. Baugh, J. Beuing, S. Cole, N. Drory, H. Kuntschner, C. Lacey, C. Maraston, D. Mehlert and R. Saglia for interesting and helpful discussions. This work was supported by the ”Sonderforschungsbereich 375-95 für Astro-Teilchenphysik” of the Deutsche Forschungsgemeinschaft.
|
no-problem/9901/cond-mat9901348.html
|
ar5iv
|
text
|
# Can we Measure Superflow on Quenching ⁴𝐻𝑒?
## Abstract
Zurek has provided a simple picture for the onset of the $`\lambda `$-transition in $`{}_{}{}^{4}He`$, not currently supported by vortex density experiments. However, we argue that the seemingly similar argument by Zurek that superflow in an annulus of $`{}_{}{}^{4}He`$ at a quench will be measurable is still valid.
preprint: IC preprint number
As the early universe cooled it underwent a series of phase transitions, whose inhomogeneities have observable consequences. To understand how such transitions occur it is necessary to go beyond the methods of equilibrium thermal field theory that identified the transitions in the first instance.
In practice, we often know remarkably little about the dynamics of quantum field theories. A simple question to ask is the following: In principle, the field correlation length diverges at a continuous transition. In reality, it does not. What happens? Using simple causal arguments Kibble made estimates of this early field ordering, because of the implications for astrophysics.
There are great difficulties in converting predictions for the early universe into experimental observations. Zurek suggested that similar arguments were applicable to condensed matter systems for which direct experiments could be performed. In particular, for $`{}_{}{}^{4}He`$ he argued that the measurement of superflow at a quench provided a simple test of these ideas. We present a brief summary of his argument.
Assume that the dynamics of the $`{}_{}{}^{4}He`$ lambda-transition can be derived from an explicitly time-dependent Landau-Ginzburg free energy of the form
$$F(T)=\int d^3x\left(\frac{\hbar ^2}{2m}|\nabla \varphi |^2+\alpha (T)|\varphi |^2+\frac{1}{4}\beta |\varphi |^4\right),$$
(1)
in which $`\alpha (T)`$ vanishes at the critical temperature $`T_c`$. Explicitly, let us assume the mean-field result $`\alpha (T)=\alpha _0ϵ(T)`$, where $`ϵ=(T/T_c1)`$, remains valid as $`T/T_c`$ varies with time $`t`$. In particular, we first take $`\alpha (t)=\alpha (T(t))=\alpha _0t/\tau _Q`$ in the vicinity of $`T_c`$. Then the fundamental length and time scales $`\xi _0`$ and $`\tau _0`$ are given from Eq.1 as $`\xi _0^2=\hbar ^2/2m\alpha _0`$ and $`\tau _0=\hbar /\alpha _0`$. It follows that the equilibrium correlation length $`\xi _{eq}(t)=\xi _{eq}(T(t))`$ and the relaxation time $`\tau (t)`$ diverge at $`T_c`$, which we take to be when $`t`$ vanishes, as
$$\xi _{eq}(t)=\xi _0\left|\frac{t}{\tau _Q}\right|^{1/2},\qquad \tau (t)=\tau _0\left|\frac{t}{\tau _Q}\right|^{1}.$$
(2)
Although $`\xi _{eq}(t)`$ diverges at $`t=0`$ this is not the case for the true correlation length $`\xi (t)`$, which can only grow so far in a finite time. Initially, for $`t<0`$, when we are far from the transition, we can assume that the field correlation length $`\xi (t)`$ tracks $`\xi _{eq}(t)`$ approximately. However, as we get closer to the transition $`\xi _{eq}(t)`$ begins to increase arbitrarily fast. As a crude upper bound, the true correlation length fails to keep up with $`\xi _{eq}(t)`$ by the time $`\overline{t}`$ at which $`\xi _{eq}`$ is growing at the speed of sound $`c(t)=\xi _{eq}(t)/\tau (t)`$, which determines the rate at which the order-parameter can change. The condition $`d\xi _{eq}(t)/dt=c(t)`$ is satisfied at $`t=\overline{t}`$, where $`\overline{t}=\sqrt{\tau _Q\tau _0}`$, with corresponding correlation length
$$\overline{\xi }=\xi _{eq}(\overline{t})=\xi _0\left(\frac{\tau _Q}{\tau _0}\right)^{1/4}.$$
(3)
After this time it is assumed that the relaxation time is so long that $`\xi (t)`$ is essentially frozen in at $`\overline{\xi }`$ until time $`t\approx +\overline{t}`$, when it sets the scale for the onset of the broken phase.
A concrete realisation of how the freezing sets in is provided by the time-dependent Landau-Ginzburg (TDLG) equation for $`F`$ of (1),
$$\frac{1}{\mathrm{\Gamma }}\frac{\partial \varphi _a}{\partial t}=\frac{\delta F}{\delta \varphi _a}+\eta _a,$$
(4)
for $`\varphi =(\varphi _1+i\varphi _2)/\sqrt{2}`$, where $`\eta _a`$ is Gaussian noise. We can show self-consistently that, for the relevant time-interval $`\overline{t}\le t\le \overline{t}`$, the self-interaction term can be neglected ($`\beta =0`$), whereby a simple calculation finds $`\xi \approx \overline{\xi }`$ in this interval, as predicted. It thus happens that, at the onset of the phase transition, the field fluctuations are approximately Gaussian. The field phases $`e^{i\theta (𝐫)}`$, where $`\varphi (𝐫)=|\varphi (𝐫)|e^{i\theta (𝐫)}`$, are then correlated on the same scale as the fields.
Consider a closed path in the bulk superfluid with circumference $`C\xi (t)`$. Naively, the number of ’regions’ through which this path passes in which the phase is correlated is $`𝒩=O(C/\xi (t))`$. Assuming an independent choice of phase in each ’region’, the r.m.s phase difference along the path is
$$\mathrm{\Delta }\theta _C\approx \sqrt{𝒩}=O(\sqrt{C/\xi (t)}).$$
(5)
If we now consider a quench in an annular container of similar circumference $`C`$ of superfluid $`{}_{}{}^{4}He`$ and radius $`l\ll C`$, Zurek suggested that the phase locked in is also given by Eq.5, with $`\overline{\xi }`$ of Eq.3. Since the phase gradient is directly proportional to the superflow velocity we expect a flow after the quench with r.m.s velocity
$$\mathrm{\Delta }v=O\left(\frac{\hbar }{m}\sqrt{\frac{1}{C\overline{\xi }}}\right).$$
(6)
provided $`l=O(\overline{\xi })`$. Although in bulk fluid this superflow will disperse, if it is constrained to a narrow annulus it should persist and, although not large, it is measurable.
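The scalings of Eqs. 2, 3 and 6 are easily evaluated numerically. In the sketch below the values of $`\xi _0`$, $`\tau _0`$, the quench time and the circumference are illustrative placeholders, not the $`{}_{}{}^{4}He`$ parameters adopted later in this paper.

```python
import numpy as np

HBAR = 1.0546e-34        # J s
M_HE4 = 6.646e-27        # kg, mass of a helium-4 atom

def zurek_estimate(xi0_m, tau0_s, tau_q_s, circumference_m):
    """Freeze-out time, frozen correlation length (Eq. 3) and the r.m.s.
    superflow velocity of Eq. 6, up to the O(1) factor."""
    t_bar = np.sqrt(tau_q_s * tau0_s)                 # bar-t = sqrt(tau_Q tau_0)
    xi_bar = xi0_m * (tau_q_s / tau0_s) ** 0.25       # bar-xi, Eq. 3
    dv = (HBAR / M_HE4) * np.sqrt(1.0 / (circumference_m * xi_bar))
    return t_bar, xi_bar, dv

# illustrative numbers: angstrom-scale xi_0, picosecond tau_0,
# a millisecond quench and a centimetre-scale annulus
t_bar, xi_bar, dv = zurek_estimate(5e-10, 1e-12, 1e-3, 1e-2)
print(f"t_bar = {t_bar:.2e} s, xi_bar = {xi_bar:.2e} m, dv = {dv:.2e} m/s")
```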
In addition to this experiment, Zurek also suggested that the same correlation length $`\overline{\xi }`$ should characterise the separation of vortices in a quench. In an earlier paper one of us showed that this is too simple. Causality arguments are not enough, and whether vortices form on this scale is also determined by the thermal activation of the Ginzburg regime, in which all $`{}_{}{}^{4}He`$ experiments take place. Experimentally, this seems to be the case.
Our aim in this paper is to see whether thermal fluctuations interfere with the prediction Eq.6, for which experiments have yet to be performed. Again consider a circular path in the bulk fluid (in the 1-2 plane), circumference $`C`$, the boundary of a surface $`S`$. For given field configurations $`\varphi _a(𝐱)`$ the phase change $`\theta _C`$ along the path can be expressed as the surface integral
$$\theta _C=2\pi \int _{𝐱S}d^2x\,\rho (𝐱),$$
(7)
where the topological density $`\rho (𝐱)`$ is given by
$$\rho (𝐱)=\delta ^2[\varphi (𝐱)]ϵ_{jk}\partial _j\varphi _1(𝐱)\partial _k\varphi _2(𝐱),\qquad j,k=1,2$$
(8)
where $`ϵ_{12}=ϵ_{21}=1`$, otherwise zero.
The ensemble average $`\langle \rho (𝐱)\rangle _t`$ is taken to be zero at all times $`t`$, guaranteed by taking $`\langle \varphi _a(𝐱)\rangle _t=0=\langle \varphi _a(𝐱)\partial _j\varphi _b(𝐱)\rangle _t`$. That is, we quench from an initial state with no rotation. For the Gaussian fluctuations that are relevant for the times of interest, all correlations are given in terms of the diagonal equal-time correlation function $`G(r,t)`$, defined by
$$\langle \varphi _a(𝐱)\varphi _b(\mathrm{𝟎})\rangle _t=\delta _{ab}G(r,t),\qquad r=|𝐱|.$$
(9)
The correlation length $`\xi (t)`$ is defined by $`G(r,t)=o(e^{r/\xi (t)})`$, for large $`r>\xi (t)`$. The TDLG does not lead to simple exponential behaviour, but there is no difficulty in defining $`\xi (t)`$ in practice.
The variance in the phase change around $`C`$, $`\mathrm{\Delta }\theta _C`$ is determined from
$$(\mathrm{\Delta }\theta _C)^2=4\pi ^2\int _{𝐱S}d^2x\int _{𝐲S}d^2y\,\langle \rho (𝐱)\rho (𝐲)\rangle _t.$$
(10)
The properties of densities for Gaussian fields have been studied in detail. Define $`f(r,t)`$ by $`f(r,t)=G(r,t)/G(0,t)`$. On using the conservation of charge
$$\int d^2x\,\langle \rho (𝐱)\rho (\mathrm{𝟎})\rangle _t=0$$
(11)
it is not difficult to show that $`\mathrm{\Delta }\theta _C`$ satisfies
$$(\mathrm{\Delta }\theta _C)^2=\int _{𝐱S}d^2x\int _{𝐲S}d^2y\,𝒞(|𝐱𝐲|,t),$$
(12)
where $`𝐱`$ and $`𝐲`$ are in the plane of $`S`$, and
$$𝒞(r,t)=\frac{1}{r}\frac{\partial }{\partial r}\left(\frac{f^{\prime 2}(r,t)}{1f^2(r,t)}\right).$$
(13)
Since $`G(r,t)`$ is short-ranged $`𝒞(r,t)`$ is short-ranged also. With $`𝐱`$ outside $`S`$, and $`𝐲`$ inside $`S`$, all the contribution to $`(\mathrm{\Delta }\theta _C)^2`$ comes from the vicinity of the boundary of $`S`$, rather than the whole area. That is, if we removed all fluid except for a strip from the neighbourhood of the contour $`C`$ we would still have the same result. This supports the assertion by Zurek that the correlation length for phase variation in bulk fluid is also appropriate for annular flow. The purpose of the annulus (more exactly, a circular capillary of circumference $`C`$ with radius $`l\ll C`$) is to stop this flow dissipating into the bulk fluid.
More precisely, suppose that $`C\gg \xi (t)`$. Then, if we take the width $`2l`$ of the strip around the contour to be larger than the correlation length of $`𝒞(r,t)`$, Eq.12 can be written as
$$(\mathrm{\Delta }\theta _C)^2\approx 2C\int _0^{\mathrm{}}dr\,r^2𝒞(r,t).$$
(14)
The linear dependence on $`C`$ is purely a result of Gaussian fluctuations.
Insofar as we can identify the bulk correlation with the annular correlation, instead of Eq.6, we have
$$\mathrm{\Delta }v=\frac{\hbar }{m}\sqrt{\frac{1}{C\xi _s(t)}}.$$
(15)
The step length $`\xi _s(t)`$ is given by
$$\frac{1}{\xi _s(t)}=2\int _0^{\mathrm{}}dr\,\frac{f^{\prime 2}(r,t)}{1f^2(r,t)}.$$
(16)
There are two important differences between Eq.15 and Eq.6. The first is in the choice of time for which $`\mathrm{\Delta }v`$ of Eq.15 is to be evaluated. In Eq.6 the time is the time $`\overline{t}`$ of freezing in of the field correlation. Since $`\xi (t)`$ does not change much in the interval $`\overline{t}<t<\overline{t}`$ we can as well take $`t=0`$. We shall argue below that for Eq.15 a more appropriate time is the spinodal time $`t_{sp}`$ at which the transition has completed itself in the sense that the fields have begun to populate the ground states.
Secondly, a priori there is no reason to identify $`\xi _s(t_{sp})`$ with either $`\overline{\xi }`$ (or even $`\xi (t_{sp})`$). In particular, because $`\overline{\xi }`$ in Eq.6 is defined from the large-distance behaviour of $`G(r,t)`$, and thereby on the position of the nearest singularity of $`G(k,t)`$ in the $`k`$-plane, it does not depend on the scale at which we observe the fluid. This is not the case for $`\xi _s(t)`$ which, from Eq.16, explores all distance scales. Because of the fractal nature of the short wavelength fluctuations, $`\xi _s(t)`$ will depend on how many are included, i.e. the scale at which we look. If we quench in an annular capillary of radius $`l`$ much smaller than its circumference, we are, essentially, coarsegraining to that scale. That is, the observed variance in the flux along the annulus is $`\pi l^2\mathrm{\Delta }v`$ for $`\mathrm{\Delta }v`$ averaged on a scale $`l`$. We make the approximation that that is the major effect of quenching in an annulus. This cannot be wholly true, but it is plausible if the annulus is not too narrow for boundary effects to be important.
Provisionally we introduce a coarsegraining by hand, modifying $`G(r,t)`$ by damping short wavelengths $`O(l)`$ as
$$G(r,t;l)=\int \frac{d^3k}{(2\pi )^3}e^{i𝐤.𝐱}G(k,t)e^{k^2l^2}.$$
(17)
We shall denote the value of $`\xi _s`$ obtained from Eq.17 as $`\xi _s(t;l)`$. It permits an expansion in terms of the moments of $`G(k,t)e^{k^2l^2}`$,
$$G_n(t;l)=\int _0^{\mathrm{}}dk\,k^{2n}G(k,t)e^{k^2l^2}.$$
(18)
For small $`r`$ it follows that $`f^{\prime 2}(r,t;l)/(1f^2(r,t;l))`$
$$=\frac{G_2}{3G_1}\left[1\left(\frac{3G_3}{20G_2}\frac{G_2}{12G_1}\right)r^2+O(r^4)\right].$$
(19)
Although, for large $`r`$, $`f^{\prime }(r,t;l)^2=o(e^{2r/\xi (t)})`$, we find that the bulk of the integral Eq.16 lies in the forward peak, and that a good upper bound for $`\xi _s`$ is given by just integrating the quadratic term, whence
$$\frac{1}{\xi _s(t;l)}\ge \frac{1}{\xi _s^{min}(t;l)}=\frac{4G_2}{9G_1}\left(\frac{3G_3}{20G_2}\frac{G_2}{12G_1}\right)^{1/2},$$
(20)
with the equality slightly overestimated. In units of $`\xi _0`$ and $`\tau _0`$ we have, in the linear regime,
$$G_n(t;l)\simeq \frac{I_n}{2^{n+1/2}}e^{(t/\overline{t})^2}\int _0^{\mathrm{}}dt^{\prime }\frac{e^{(t^{\prime }t)^2/\overline{t}^2}}{[t^{\prime }+l^2/2]^{n+1/2}}\frac{T(t^{\prime })}{T_c},$$
(21)
where $`I_n=\int _0^{\mathrm{}}dk\,k^{2n}e^{k^2}`$. The presence of the $`T(t^{\prime })/T_c`$ term is a reminder that the strength of the noise $`\eta `$ is proportional to temperature. However, for the time scales $`O(\overline{t})\ll \tau _Q`$ of interest to us this ratio remains near to unity and we ignore it. For small relative times the integrand gets a large contribution from the ultraviolet cutoff dependent lower endpoint, increasing as $`n`$ increases.
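With these ingredients the moments and the step length of Eq.20 follow from straightforward quadrature. The sketch below implements Eqs. 18-21 in units of $`\xi _0`$ and $`\tau _0`$, as reconstructed above, with $`T(t^{\prime })/T_c`$ set to unity; the quench time and the choices of $`l`$ are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def I_n(n):
    # I_n = int_0^inf dk k^{2n} e^{-k^2} = Gamma(n + 1/2) / 2
    return 0.5 * gamma(n + 0.5)

def G_n(n, t, l, t_bar):
    """Coarse-grained moment of Eq.21 (units of xi_0, tau_0; T(t')/T_c = 1)."""
    integrand = lambda tp: np.exp(-((tp - t) / t_bar) ** 2) / (tp + 0.5 * l * l) ** (n + 0.5)
    val, _ = quad(integrand, 0.0, t + 10.0 * t_bar)
    return I_n(n) / 2.0 ** (n + 0.5) * np.exp((t / t_bar) ** 2) * val

def xi_s_min(t, l, t_bar):
    """Step-length bound of Eq.20."""
    g1, g2, g3 = (G_n(n, t, l, t_bar) for n in (1, 2, 3))
    bracket = 3.0 * g3 / (20.0 * g2) - g2 / (12.0 * g1)   # positive, since G_2^2 <= G_1 G_3
    return 9.0 * g1 / (4.0 * g2) * np.sqrt(bracket)

tau_q = 1.0e6                              # quench time in units of tau_0 (illustrative)
t_bar, xi_bar = np.sqrt(tau_q), tau_q ** 0.25
for l in (2.0 * xi_bar, 4.0 * xi_bar, 10.0 * xi_bar):
    x = xi_s_min(3.0 * t_bar, l, t_bar) / xi_bar
    print(f"l = {l / xi_bar:4.1f} xi_bar -> xi_s_min(3 t_bar) = {x:5.2f} xi_bar")
```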
If we return to the Landau-Ginzburg equation Eq.4 we find that $`\langle |\varphi |^2\rangle _t\ll \alpha _0/\beta `$ in the interval $`\overline{t}\le t\le \overline{t}`$. Although the field has frozen in, the fluctuations have amplitudes that are more or less uniform across all wavelengths. As a result, what we see depends totally on the scale at which we look. Specifically, from Eq.21 $`\xi _s^{min}(0;l)=O(l)`$, as shown in the lowest curve of Fig.1.
If, as suggested by Zurek, we take $`l=O(\overline{\xi })`$ we recover Eq.6 qualitatively, although a wider bore would give a correspondingly smaller flow. However, this is not the time at which to look for superflow since, although the field correlation length $`\xi (t)`$ may have frozen in by $`t=0`$, the symmetry breaking has not begun.
Assuming the linearised Eq.4 for small times $`t>0`$ we see that, as the unfreezing occurs, long wavelength modes with $`k^2<t/\tau _Q`$ grow exponentially and soon begin to dominate the correlation functions. How long a time we have depends on the self-coupling $`\beta `$ which, through $`G_1`$, sets the shortest time scale. This is because, at the absolute latest, $`G_1`$ must stop its exponential growth at $`t=t_{sp}`$, when $`\langle |\varphi |^2\rangle _{t_{sp}}`$ satisfies $`\langle |\varphi |^2\rangle _{t_{sp}}=\alpha _0/\beta `$. We further suppose that the effect of the backreaction that stops the growth initially freezes in any structure. In Fig.1 we also show $`\xi _s^{min}(t;l)`$ for $`t=3\overline{t}`$ and $`t=4\overline{t}`$, increasing as $`t`$ increases.
For $`{}_{}{}^{4}He`$ with quenches of milliseconds the field magnitude has grown to its equilibrium value before the scale-dependence has stopped. For vortex formation, for which the scale is $`O(\xi _0)`$, the thickness of a vortex, the dependence of the density on scale makes the interpretation of observations problematic. This is not the case here. That the incoherent $`\xi _s`$ depends on radius $`l`$ is immaterial. The end result is that
$$\mathrm{\Delta }v=\frac{\hbar }{m}\sqrt{\frac{1}{C\xi _s(t_{sp};l)}}.$$
(22)
We saw that the expression Eq.20 for $`\xi _s`$ assumed that $`2l`$ is larger than
$$\xi _{eff}(t;l)=\left(\frac{3G_3}{20G_2}\frac{G_2}{12G_1}\right)^{1/2}.$$
(23)
Otherwise the correlations in the bulk fluid from which we want to extract annular behaviour are of longer range than the annulus thickness. Numerically, we find that $`\xi _{eff}(0,l)=2l`$ very accurately at $`t=0`$, but that $`\xi _{eff}(t,l)>2l`$ for all $`t>0`$. A crude way to accommodate this is to cut off the integral Eq.14. With a little effort, we see that the effect of this is that $`\xi _s^{min}(t_{sp},l)`$ of Eq.20 is replaced by
$$\xi _s^{max}(t_{sp},l)=\xi _s^{min}(t_{sp},l)[1(14l^2/\xi _{eff}(t_{sp},l)^2)^{3/2}]^{1},$$
(24)
greater than $`\xi _s^{min}(t_{sp},l)`$ and thereby reducing the flow velocity for narrower annuli. These are the dashed curves in Fig.1. The effect is largest for small radii $`l\lesssim \overline{\xi }`$, for which the approximation of trying to read the behaviour of annular flow from bulk behaviour is most suspect. A more realistic approach for such narrow capillaries is to treat the system as one-dimensional. For this reason we have only considered $`l\ge \overline{\xi }`$ in Fig.1. We would expect, from Eq.20, that $`\xi _s(t_{sp};l)`$ has an upper bound that lies somewhere between the curves.
Once $`l`$ is very large, so that the power in the fluctuations is distributed strongly across all wavelengths we recover our earlier result, that $`\xi _s(t_{sp};l)=O(l)`$. In Fig.1 this corresponds to the curves becoming parallel as $`l`$ increases for fixed $`t`$. However, the change is sufficiently slow that annuli, significantly wider than $`\overline{\xi }`$, for which experiments are more accessible, will give almost the same flow as narrower annuli. This would seem to extend the original Zurek prediction of Eq.6 to thicker annuli, despite our expectations for incoherent flow. However, we stress again that caution is necessary, since in the approximation to characterise an annulus by a coarse-grained ring without boundaries we have ignored effects in the direction perpendicular to the annulus. In particular, the circular cross-section of the tube has not been taken into account. One consequence of this is that infinite (non-selfintersecting) vortices in the bulk fluid have no counterpart in an annulus. Removing such strings will have an effect on $`\mathrm{\Delta }\theta _C`$, since the typical fraction of vortices in infinite vortices is at the level of $`70\%`$. However, at the spinodal time the fluctuations in $`{}_{}{}^{4}He`$ are relatively enhanced in the long wavelengths, and such an enhancement is known to reduce the amount of infinite vortices, perhaps to something nearer to $`20\%`$. The details of this effect (being pursued elsewhere) are unclear but, for the sake of argument we take the predictions of the curves in Fig.1 as a rough guide in the vicinity of their minima.
So far we have avoided the question as to which time curves we should follow. This is because $`t_{sp}`$ itself depends on the scale $`l`$ of the spatial volume for which the field average achieves its ground state value. In practice variation is small, with $`t_{sp}`$ for $`{}_{}{}^{4}He`$ varying from about $`3\overline{t}`$ to $`4\overline{t}`$ as $`l`$ varies from $`\xi _0\ll \overline{\xi }`$ to $`l=10\overline{\xi }`$. Since the curves for $`\xi _s(t_{sp};l)`$ lie so close to one another in Fig.1 once $`l\gtrsim 4\overline{\xi }`$ the scale at which the coarse-grained field begins to occupy the ground states becomes largely irrelevant.
Since $`\mathrm{\Delta }v`$ only depends on $`\xi _s^{1/2}`$ it is not sensitive to choice of $`l>2\overline{\xi }`$ at the relevant $`t`$. Given all these approximations our final estimate is (in the cm/sec units of Zurek)
$$\mathrm{\Delta }v\approx 0.2(\tau _Q[\mu s])^{\nu /4}/\sqrt{C[cm]}$$
(25)
for radii of $`2\overline{\xi }4\overline{\xi }`$, $`\tau _Q`$ of the order of milliseconds and $`C`$ of the order of centimetres. $`\nu =1/2`$ is the mean-field critical exponent above. In principle $`\nu `$ should be renormalised to $`\nu =2/3`$, but the difference to $`\mathrm{\Delta }v`$ is sufficiently small that we shall not bother. Given the uncertainties in its derivation the result Eq.25 is indistinguishable from Zurek’s (with prefactor $`0.4`$), but for the possibility of using somewhat larger annuli. The agreement is, ultimately, one of dimensional analysis, but the coefficient could not have been anticipated. How experiments can be performed, even with the wider annuli that Eq.25 and Fig.1 suggest, is another matter.
We thank Glykeria Karra, with whom some of this work was done. This work is the result of a network supported by the European Science Foundation .
|
no-problem/9901/gr-qc9901075.html
|
ar5iv
|
text
|
# Data analysis strategies for the detection of gravitational waves in non-Gaussian noise
## Abstract
In order to analyze data produced by the kilometer-scale gravitational wave detectors that will begin operation early next century, one needs to develop robust statistical tools capable of extracting weak signals from the detector noise. This noise will likely have non-stationary and non-Gaussian components. To facilitate the construction of robust detection techniques, I present a simple two-component noise model that consists of a background of Gaussian noise as well as stochastic noise bursts. The optimal detection statistic obtained for such a noise model incorporates a natural veto which suppresses spurious events that would be caused by the noise bursts. When two detectors are present, I show that the optimal statistic for the non-Gaussian noise model can be approximated by a simple coincidence detection strategy. For simulated detector noise containing noise bursts, I compare the operating characteristics of (i) a locally optimal detection statistic (which has nearly-optimal behavior for small signal amplitudes) for the non-Gaussian noise model, (ii) a standard coincidence-style detection strategy, and (iii) the optimal statistic for Gaussian noise.
The reliable detection of weak gravitational wave signals from broad-band detector noise from kilometer-scale interferometers such as LIGO and VIRGO is the primary concern in developing gravitational wave data analysis strategies. Because of the weakness of the expected gravitational wave signals, it is critical that the detection strategy should be nearly optimal. However, the detector noise may not be purely stationary and Gaussian, so it is important that the detection strategy also be robust so that detections will be reliable.
Until now, most work on the development of data analysis strategies has been limited to the case of stationary Gaussian noise, though there has been some work on the creation of vetoes that will discriminate between expected gravitational wave signals and non-Gaussian noise bursts. Because the properties of the noise in the LIGO and VIRGO detectors will not be known in advance, it is difficult to assess how well these strategies and vetoes will perform. When real interferometer data from the 40-meter Caltech prototype is used, it is found that additional vetoes are needed to deal with the abundance of false alarms arising from the non-Gaussian noise .
In this paper, I present a simple non-Gaussian noise model, consisting of Poisson-distributed noise bursts, that represents a number of potential non-Gaussian noise sources that may be present in future interferometers. This noise model is less naïve than the usual assumptions of Gaussian noise alone, but is simple enough that many analytical results can be obtained. By using this noise model, the robustness of various detection strategies can be assessed. A general introduction to signal detection in non-Gaussian noise can be found in Ref. . The new result in this paper is the use of the non-Gaussian noise model in examining the performance of simple multi-detector search strategies, which will be important for gravitational wave searches.
I will adopt the following notation in this paper. The detector output is a set of $`N`$ values that are written collectively as a vector $`𝐡`$ in an $`N`$-dimensional vector space $`V`$. This vector can be thought of as having components $`h_j=(𝐡,𝐞_j)`$, $`j[0,N1]`$, which represent a time series of sample measurements made by the detector. Here, $`𝐞_j`$ is the appropriate Cartesian basis on $`V`$. Alternatively, the vector can be expressed as the set of components $`\{\stackrel{~}{h}_{2k},\stackrel{~}{h}_{2k+1}\}`$, $`k[0,N/21]`$—the real and imaginary Fourier transform components of $`\{h_j\}`$. These components are given by $`\stackrel{~}{h}_j=(𝐡,\stackrel{~}{𝐞}_j)`$, where $`\stackrel{~}{𝐞}_{2k}=(2/N)^{1/2}\sum _{j=0}^{N1}𝐞_j\mathrm{cos}(2\pi jk/N)`$ and $`\stackrel{~}{𝐞}_{2k+1}=(2/N)^{1/2}\sum _{j=0}^{N1}𝐞_j\mathrm{sin}(2\pi jk/N)`$, $`k[0,N/21]`$. Thus, the vectors can be treated in either a time representation or a frequency representation.
A natural inner product of two vectors, $`𝐚`$ and $`𝐛`$, in this vector space is defined as $`(𝐚,\mathrm{𝐐𝐛})`$. The kernel $`𝐐`$ is the inverse of the auto-correlation matrix, $`𝐑`$, of the Gaussian component, $`𝐧_\text{G}`$, of the detector noise. Thus, $`𝐑=\langle 𝐧_\text{G}𝐧_\text{G}\rangle `$, where the angle brackets denote an average over an ensemble of realizations of detector noise. Vector norms are defined in terms of this inner product, i.e., $`𝐚^2=(𝐚,\mathrm{𝐐𝐚})`$, as are unit vectors: $`\widehat{𝐚}=𝐚/𝐚`$.
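As a concrete illustration of this inner product, the following Python sketch estimates $`𝐑`$ from an ensemble of simulated Gaussian noise realizations, inverts it to obtain $`𝐐`$, and evaluates the inner product and norm defined above. The white-noise ensemble and the dimension used here are assumptions made purely for the demonstration, not a model of real detector noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate the auto-correlation matrix R = <n_G n_G> from many realizations of
# the Gaussian noise component (white noise is used here as a stand-in).
N = 8
realizations = rng.standard_normal((100000, N))
R = realizations.T @ realizations / realizations.shape[0]
Q = np.linalg.inv(R)                      # kernel Q = R^{-1}

def inner(a, b):
    """Noise-weighted inner product (a, Q b)."""
    return a @ Q @ b

def norm(a):
    """Norm of a vector in this metric."""
    return np.sqrt(inner(a, a))

a = rng.standard_normal(N)
a_hat = a / norm(a)                       # unit vector
print(norm(a), norm(a_hat))               # the second value should be ~1
```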
The detector noise $`𝐧`$ consists of two components: (i) the Gaussian component $`𝐧_\text{G}`$ (that is always present) and (ii) a possible noise burst component $`𝐧_\text{B}`$ that is present with probability $`P_\text{B}`$. The Gaussian component has the probability distribution $`p[𝐧_\text{G}]\propto \mathrm{exp}(𝐧_\text{G}^2/2)`$. If the burst component is randomly distributed in the vector space $`V`$ with normalized measure $`\widehat{D}[𝐧_\text{B}]`$, then the probability distribution for the noise is
$$p[𝐧]\propto e^{-𝐧^2/2}+\frac{P_\text{B}}{1-P_\text{B}}\int \widehat{D}[𝐧_\text{B}]\,e^{-(𝐧-𝐧_\text{B})^2/2}.$$
(1)
Suppose that the noise bursts are uniformly-distributed in the vector space $`V`$ out to a large radius $`R`$. Such bursts will typically last the entire duration of interest ($`N`$ samples) and fill the entire frequency band. The noise distribution is approximately
$$p[𝐧]\propto e^{-𝐧^2/2}(1+ϵe^{𝐧^2/2})$$
(2)
for $`𝐧<R`$, where $`ϵ\approx 2^{N/2}\mathrm{\Gamma }(N/2+1)R^{-N}P_\text{B}/(1-P_\text{B})`$ for large $`R`$.
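A minimal sketch of drawing samples from this two-component noise model is given below: the Gaussian component is always present, and with probability $`P_\text{B}`$ a burst drawn uniformly from the ball of radius $`R`$ is added, as assumed above. The particular parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 4          # dimension of the sample space
P_B = 0.01     # probability of a noise burst
R = 25.0       # maximum burst amplitude

def draw_noise(num):
    """Draw `num` noise vectors: Gaussian background plus occasional bursts."""
    n = rng.standard_normal((num, N))                    # Gaussian component
    has_burst = rng.random(num) < P_B
    # a uniform draw inside the N-ball of radius R: random direction times radius
    direction = rng.standard_normal((num, N))
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    radius = R * rng.random(num) ** (1.0 / N)
    n[has_burst] += (direction * radius[:, None])[has_burst]
    return n

samples = draw_noise(10)
print(np.linalg.norm(samples, axis=1))   # bursty samples stand out in their norm
```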
An alternative derivation of Eq. (2) is the following. Suppose the noise burst is simply an additional transient source of Gaussian noise. Then Eq. (2) can be written as
$$p[𝐧]\propto e^{-(𝐧,\mathrm{𝐐𝐧})/2}(1+\epsilon e^{(𝐧,[𝐐-𝐐^{}]𝐧)/2}),$$
(3)
where $`\epsilon =(det|𝐑|/det|𝐑^{}|)^{N/2}P_\text{B}/(1-P_\text{B})`$, $`𝐑^{}`$ is the auto-correlation matrix for both ambient and the burst components of Gaussian noise together, and the matrix $`𝐐^{}`$ is the inverse of $`𝐑^{}`$. If the typical noise burst is much louder than the ambient Gaussian noise, then $`𝐐-𝐐^{}\approx 𝐐`$. Therefore, in the case of loud Gaussian noise bursts, Eq. (3) has the same form as Eq. (2).
The likelihood ratio can now be computed for the two alternative hypotheses: that the output $`𝐡=𝐧`$ is noise alone ($`H_0`$), or that the output $`𝐡=A\widehat{𝐮}+𝐧`$ is a signal $`A\widehat{𝐮}`$ of amplitude $`A`$ embedded in noise ($`H_1`$). In this paper, I consider only signals which are completely known (up to their amplitude). The generalization of this case to one in which the signals have an unknown initial phase is straightforward: see, e.g., Ref. . The likelihood ratio $`\mathrm{\Lambda }(A)=p[𝐡|H_1]/p[𝐡|H_0]`$ is the ratio of the posterior probability of obtaining the observed output $`𝐡`$ given hypothesis $`H_1`$ to the posterior probability of obtaining $`𝐡`$ given hypothesis $`H_0`$. One then finds that the likelihood ratio is
$$\mathrm{\Lambda }(A)=\frac{\mathrm{\Lambda }_\text{G}(A)+\alpha }{1+\alpha }$$
(4)
where
$$\mathrm{\Lambda }_\text{G}(A)=e^{A(\widehat{𝐮},\mathrm{𝐐𝐡})-A^2/2}$$
(5)
and
$$\alpha =ϵe^{𝐡^2/2}.$$
(6)
The quantity $`\mathrm{\Lambda }_\text{G}(A)`$ is the likelihood ratio one would obtain if only the Gaussian noise component were present; it is a monotonically increasing function of the matched filter $`(\widehat{𝐮},\mathrm{𝐐𝐡})`$. When a non-Gaussian noise burst component can occur, the likelihood ratio depends on an additional measured quantity: the magnitude of the output vector, $`𝐡`$. In addition to these functions of the detector output, the likelihood ratio also depends on the expected amplitude $`A`$ of the signal and the proportionality factor $`ϵ`$, which encapsulates the probability of a noise burst and the maximum possible amplitude $`R`$ of the burst.
The quantity $`\alpha `$ plays a central role in the modified likelihood ratio of Eq. (4): it acts as a detector of noise bursts, and vetoes events that are more likely due to the burst. The logarithm of $`\alpha `$ is proportional to the amount of power in the detector output. For Gaussian noise alone, the expected value of the power is $`\langle 𝐧_\text{G}^2\rangle =N`$, and thus $`\mathrm{ln}\alpha \approx \mathrm{ln}P_\text{B}-\frac{1}{2}N\mathrm{ln}(R^2/N)`$ for large $`N`$. Thus, for $`R^2>N`$ (as was assumed above), the value of $`\alpha `$ will be small. However, when a noise burst is present, the value of $`\mathrm{ln}\alpha `$ is increased by a typical amount $`\frac{1}{2}R^2`$, so $`\alpha `$ will typically become large. In fact, $`\alpha `$ is the excess-power statistic for detection of arbitrary noise bursts . For small values of $`\alpha `$, the likelihood ratio approaches the usual Gaussian noise likelihood ratio. However, for large values of $`\alpha `$, the likelihood ratio approaches unity.
The likelihood ratio is a function of the detector output via the two quantities $`x=(\widehat{𝐮},\mathrm{𝐐𝐡})=𝐡\mathrm{cos}\theta `$ and $`𝐡`$. The likelihood ratio also depends on the expected signal amplitude $`A`$ and the factor $`ϵ`$. Thus, the likelihood ratio is
$$\mathrm{\Lambda }(A)=\frac{e^{Ax-A^2/2}+ϵe^{(x\mathrm{sec}\theta )^2/2}}{1+ϵe^{(x\mathrm{sec}\theta )^2/2}}.$$
(7)
Figure 1 shows the likelihood ratios as functions of the matched filter statistic $`x`$ and the angle $`\theta `$ for a given value of $`ϵ`$ and $`A`$. Notice that the likelihood ratio is attenuated when the magnitude of the output, $`𝐡`$, is much larger than the largest expected signal; this attenuation occurs at smaller signal-to-noise ratios for smaller absolute values of the direction cosine $`\mathrm{cos}\theta `$.
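The closed form of Eq. (7) is straightforward to evaluate; the short Python sketch below does so for whitened data, with illustrative (assumed) values of $`ϵ`$ and $`A`$, and shows the attenuation of the likelihood ratio at large output magnitudes.

```python
import numpy as np

def likelihood_ratio(x, theta, A, eps):
    """Eq. (7): x is the matched-filter value, theta the angle between h and
    u-hat, A the assumed signal amplitude, eps the burst-model parameter."""
    alpha = eps * np.exp(0.5 * (x / np.cos(theta)) ** 2)
    return (np.exp(A * x - 0.5 * A**2) + alpha) / (1.0 + alpha)

x = np.linspace(0.0, 8.0, 81)
for theta in (0.0, np.pi / 6, np.pi / 3):
    L = likelihood_ratio(x, theta, A=5.0, eps=1e-6)
    # the large-x tail is attenuated more strongly as |cos(theta)| decreases
    print(theta, L.max(), L[-1])
```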
Because of the non-trivial dependence of the likelihood ratio on the expected signal amplitude, the optimal statistic for signals of some amplitude $`A`$ will not be the same statistic as the optimal statistic for a different amplitude $`A^{}`$. This situation is unlike the case of purely Gaussian noise, for which the likelihood ratio grows monotonically with the matched filter output, regardless of the expected signal amplitude. Because the amplitude of a signal will not typically be known in advance, it is useful to consider a *locally optimal* statistic , which provides the optimal performance in the limit of small amplitude signals. The rationale behind such a choice is that large amplitude signals pose no challenge for detection, so any reasonable statistic will suffice for them. The locally optimal statistic, which is defined as $`\lambda =d\mathrm{ln}\mathrm{\Lambda }/dA|_{A=0}=(\widehat{𝐮},\mathbf{\nabla }\mathrm{ln}p[𝐡])`$, is given by
$$\lambda =\frac{x}{1+ϵe^{(x\mathrm{sec}\theta )^2/2}},$$
(8)
where $`x=(\widehat{𝐮},\mathrm{𝐐𝐡})`$ is the matched filter. Notice that the locally optimal statistic also incorporates a veto based on the value of $`\alpha =ϵ\mathrm{exp}[(x\mathrm{sec}\theta )^2/2]`$. The locally optimal statistic grows approximately linearly for $`x<(-2\mathrm{ln}ϵ)^{1/2}\mathrm{cos}\theta `$, at which point it is effectively cut off. This is the same general behavior as was seen before for the likelihood ratio $`\mathrm{\Lambda }(A)`$, but there is no longer any scaling with a prior amplitude.
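A sketch of this statistic applied to whitened data ($`𝐐`$ equal to the identity) is shown below; the template and amplitudes are assumptions chosen only to exhibit the built-in veto.

```python
import numpy as np

def locally_optimal_statistic(h, u_hat, eps):
    """Eq. (8) for whitened data: lambda = x / (1 + eps * exp(h^2 / 2))."""
    x = h @ u_hat                    # matched filter (u_hat, h)
    alpha = eps * np.exp(0.5 * (h @ h))
    return x / (1.0 + alpha)

rng = np.random.default_rng(2)
N = 4
u_hat = np.ones(N) / np.sqrt(N)      # unit-norm template
eps = 1e-6

quiet = rng.standard_normal(N) + 3.0 * u_hat                     # weak signal
burst = rng.standard_normal(N) + 10.0 * rng.standard_normal(N)   # loud burst
print(locally_optimal_statistic(quiet, u_hat, eps))   # close to the matched filter
print(locally_optimal_statistic(burst, u_hat, eps))   # strongly suppressed (vetoed)
```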
Another approach is to use a prior distribution $`f(A)`$ for the amplitude, if such a distribution is known, to construct an integrated likelihood ratio $`\langle \mathrm{\Lambda }\rangle =\int \mathrm{\Lambda }(A)f(A)𝑑A`$; this integrated likelihood ratio would then be the optimal statistic for detection of all potential signals. Even if the prior distribution is not known, it is possible to obtain an approximate likelihood ratio if $`f(A)`$ is a slowly-varying function. One can show that $`\langle \mathrm{\Lambda }_\text{G}\rangle =\int \mathrm{\Lambda }_\text{G}(A)f(A)𝑑A\propto f(x)\mathrm{exp}(x^2/2)`$. A further approximation neglects the slowly-varying function entirely; one then obtains $`\langle \mathrm{\Lambda }_\text{G}\rangle \propto \mathrm{exp}(x^2/2)`$. Thus,
$$\langle \mathrm{\Lambda }\rangle \approx \frac{e^{x^2/2}+ϵe^{(x\mathrm{sec}\theta )^2/2}}{1+ϵe^{(x\mathrm{sec}\theta )^2/2}}.$$
(9)
It is important to generalize the above analysis to the case in which there are multiple detectors. Such a situation is deemed to be essential for gravitational wave searches since a coincident detection is an important corroboration that a signal is of an astronomical origin rather than internal to the detector. As a simple model (a more general model is being considered by Finn ), consider two gravitational wave detectors with identical but independent noise (including possible noise bursts) that also have the same response to gravitational wave signals. If the noise were purely Gaussian, the likelihood ratio for the two detectors together would be $`\mathrm{\Lambda }_\text{G}^{\text{AB}}=\mathrm{\Lambda }_\text{G}^\text{A}\times \mathrm{\Lambda }_\text{G}^\text{B}`$, i.e., the product of the likelihood ratios for Gaussian noise in each separate detector. The resulting likelihood ratio is thus a monotonically increasing function of the *sum* of the matched filter values in each detector: $`\mathrm{ln}\mathrm{\Lambda }_\text{G}^{\text{AB}}\propto x^\text{A}+x^\text{B}`$ where $`x^\text{A}=(\widehat{𝐮},𝐐^\text{A}𝐡^\text{A})`$ is the signal-to-noise ratio in the first detector (A) and $`x^\text{B}=(\widehat{𝐮},𝐐^\text{B}𝐡^\text{B})`$ is the signal-to-noise ratio in the second detector (B). If the likelihood ratios are integrated over a slowly-varying prior distribution of possible signal amplitudes, then one finds $`\mathrm{ln}\mathrm{\Lambda }_\text{G}^{\text{AB}}\propto (x^\text{A}+x^\text{B})^2`$, which also depends on the sum of the two detectors’ matched filter values. (In this case, it is important to integrate over the distribution of amplitudes *after* multiplying the two detectors’ likelihood ratios together since the same signal amplitude should be present in each detector.) Thus, one would decide that a signal was present if the sum of the matched filter values in the two detectors exceeded some threshold. An alternative strategy would decide that a signal was present only if *each* of the matched filter values exceeded some threshold, i.e., one thresholds on the quantity $`\mathrm{min}(x^\text{A},x^\text{B})`$ rather than on $`x^\text{A}+x^\text{B}`$.
Suppose the likelihood ratios for both detectors, $`\mathrm{\Lambda }^\text{A}(x^\text{A},\theta ^\text{A})`$ and $`\mathrm{\Lambda }^\text{B}(x^\text{B},\theta ^\text{B})`$, are given by Eq. (7). Then, the likelihood ratio for the combined detector is given by the product of $`\mathrm{\Lambda }^\text{A}`$ and $`\mathrm{\Lambda }^\text{B}`$. Figure 2 shows a plot of contours of constant likelihood ratio as a function of the two measured signal-to-noise ratios for $`\theta ^\text{A}=\theta ^\text{B}=0`$ and $`\theta ^\text{A}=0`$, $`\theta ^\text{B}=\pi /6`$. At low signal-to-noise ratios, the contours are approximately lines of constant $`x^\text{A}+x^\text{B}`$—in this regime, the likelihood ratio behaves like the Gaussian likelihood ratio. However, for large values of either $`x^\text{A}`$ or $`x^\text{B}`$, the contours are approximately lines of constant $`\mathrm{min}(x^\text{A},x^\text{B})`$.
The locally optimal two-detector statistic is $`\lambda ^{\text{AB}}=\lambda ^\text{A}+\lambda ^\text{B}`$, where $`\lambda ^\text{A}`$ and $`\lambda ^\text{B}`$ both have the form of Eq. (8). This statistic also depends on $`x^\text{A}`$, $`x^\text{B}`$, $`\theta ^\text{A}`$, and $`\theta ^\text{B}`$. As before, the contours of constant $`\lambda ^{\text{AB}}`$ for fixed $`\theta ^\text{A}`$ and $`\theta ^\text{B}`$ are lines of constant $`x^\text{A}+x^\text{B}`$ for low signal-to-noise ratios but are lines of constant $`\mathrm{min}(x^\text{A},x^\text{B})`$ if the signal-to-noise ratio in one of the detectors is large.
These results somewhat justify the use of the minimum statistic $`\mathrm{min}(x^\text{A},x^\text{B})`$ when there are two detectors with independent noise operating. When one of the signal-to-noise ratios is large but the other is moderate, the minimum statistic will give approximately the same value as the locally optimal statistic. In this case, the effect of the $`\alpha `$-based veto in the locally optimal statistic is mimicked in the minimum statistic. The locally optimal statistic is better able to deal with the case when there are noise bursts present in both detectors, but this case occurs with probability $`P_\text{B}^2`$, so it should be extremely rare. Moreover, the minimum statistic is less powerful than the locally optimal statistic for small signal-to-noise ratios in both detectors. In this case, the locally optimal statistic is approximately $`x^\text{A}+x^\text{B}`$, which is the same as the optimal statistic for Gaussian noise. However, the optimal statistic for Gaussian noise may be unsuitable for the noise model I have considered because the false alarm probability cannot be reduced below $`P_\text{B}`$ for reasonable thresholds. (The minimum statistic suffers the same problem but around the much lower probability $`P_\text{B}^2`$.)
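The three strategies can be placed side by side in a toy numerical experiment: the Gaussian-optimal sum $`x^\text{A}+x^\text{B}`$, the coincidence-style minimum $`\mathrm{min}(x^\text{A},x^\text{B})`$, and the sum of single-detector locally optimal statistics. The sketch below, again for whitened data and assumed parameters, shows how a loud burst in one detector fools the sum but not the other two statistics.

```python
import numpy as np

def lam(x, h_norm_sq, eps):
    """Single-detector locally optimal statistic, Eq. (8), for whitened data."""
    return x / (1.0 + eps * np.exp(0.5 * h_norm_sq))

def statistics(hA, hB, u_hat, eps):
    xA, xB = hA @ u_hat, hB @ u_hat
    return {
        "sum": xA + xB,                        # optimal for Gaussian noise
        "min": min(xA, xB),                    # coincidence-style statistic
        "locally_optimal": lam(xA, hA @ hA, eps) + lam(xB, hB @ hB, eps),
    }

rng = np.random.default_rng(3)
N = 4
u_hat = np.ones(N) / np.sqrt(N)
eps = 1e-6

# a burst in detector A only and no signal: only the sum statistic is fooled
hA = rng.standard_normal(N) + 10.0 * rng.standard_normal(N)
hB = rng.standard_normal(N)
print(statistics(hA, hB, u_hat, eps))
```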
To see the relative performance of these statistics, it is useful to consider the operating characteristics of the tests. For some fixed false alarm probability $`Q_0`$, the operating characteristic is the detection probability $`Q_1`$ as a function of signal amplitude. I have computed these curves using Monte Carlo techniques. The noise model I used corresponds to that of Eq. (2) with burst probability $`P_\text{B}=1\%`$ and maximum burst amplitude $`R=25`$. I fixed the false alarm probability to $`Q_0=10^{-3}`$ and I examined a vector space dimension of $`N=4`$. Figure 3 shows the relative performances of the three statistics in terms of the detection probability as a function of signal amplitude. One sees that the locally optimal statistic performs the best for small amplitude signals (as expected), but that it has poor performance for large amplitude signals. This is because the large amplitude deviations are interpreted as noise bursts and are suppressed. (If the locally optimal statistic were used in a real search, these large events would not be rejected outright but would rather be subjected to further scrutiny; thus the attenuation of the locally optimal statistic for large signal amplitudes is somewhat misleading.) The sum of the signal-to-noise ratios, which is the optimal statistic for Gaussian noise, is seen to have poor performance. The reason is that, because the false alarm probability is much smaller than the burst probability, the threshold required for this statistic becomes unreasonably large. However, since the false alarm probability is much higher than the burst probability *squared*, the minimum statistic performs reasonably well for both small and large amplitude signals.
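The Monte Carlo procedure behind such curves can be sketched compactly: draw noise-only realizations to fix the threshold at the desired false alarm probability, then draw signal-plus-noise realizations to estimate the detection probability as a function of amplitude. The example below does this for the minimum statistic only, with the burst parameters quoted in the text; the implementation details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, P_B, R = 4, 0.01, 25.0
u_hat = np.ones(N) / np.sqrt(N)

def draw(num, amplitude=0.0):
    """Noise (Gaussian plus occasional uniform-ball bursts) with an optional signal."""
    n = rng.standard_normal((num, N))
    mask = rng.random(num) < P_B
    d = rng.standard_normal((num, N))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    n[mask] += (d * (R * rng.random(num) ** (1.0 / N))[:, None])[mask]
    return n + amplitude * u_hat

def min_statistic(hA, hB):
    return np.minimum(hA @ u_hat, hB @ u_hat)

# set the threshold from the noise-only distribution at false alarm probability 1e-3
noise_stat = min_statistic(draw(200000), draw(200000))
threshold = np.quantile(noise_stat, 1.0 - 1.0e-3)

for A in (2.0, 4.0, 6.0, 8.0):
    sig_stat = min_statistic(draw(20000, A), draw(20000, A))
    print(A, np.mean(sig_stat > threshold))   # detection probability Q_1(A)
```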
The conclusion that can be drawn from this analysis is that search strategies based on a Gaussian noise model can be significantly improved by considering a more realistic non-Gaussian noise model, which may contain noise bursts that occur relatively frequently. If a method can be devised to detect these noise bursts and veto data that contains a noise burst, then the use of such a veto effectively reproduces the locally optimal strategy for the noise model containing bursts. However, the necessary veto may not be easily found since it may not be possible to determine the non-Gaussian properties of the detector noise to sufficient accuracy. A viable alternative when two detectors are operating is to use a coincidence strategy of detection. Since the minimum statistic used in the coincidence strategy is a robust statistic, one would expect that it should have good performance for any realistic noise model.
I would like to thank Bruce Allen, Patrick Brady, Éanna Flanagan, and Kip Thorne for their many useful comments on this paper. This work was supported by the National Science Foundation grant PHY-9424337.
# Thermal Properties of Two-Dimensional Advection Dominated Accretion Flow
## 1 Introduction
Depending on the angular momentum it carries and how the angular momentum is dissipated, gas accretes onto compact objects in various ways. If the gas contains very little angular momentum or bulk flow, the flow becomes spherical (Hoyle & Lyttleton 1939; Bondi & Hoyle 1944; Bondi 1952; Loeb & Laor 1992; Foglizzo & Ruffert 1997; Nio, Matsuda, & Fukue 1998). If it has enough angular momentum and if the angular momentum is efficiently removed by some mechanism, the flow flattens to a disk shape (Pringle & Rees 1972; Shakura & Sunyaev 1973). Until recently, only the extreme of these two types of solutions have been studied (see Chakrabarti 1996a,b for history and unified scheme of accretion solutions and Park & Ostriker 1998 for review on general accretion flow).
Spherical accretion has the merit of being simple, and therefore can be accurately calculated. It generally has the infall velocity close to the free-fall value, and most of the gas energy generated is lost into the hole in the case of accretion onto black holes, making the radiation efficiency, $`e`$, of the accretion, a self consistently calculable quantity depending primarily on the entropy at large radii and the accretion rate. The efficiency can be very low or significantly high (Shapiro 1973a,b, 1974; Mészáros 1975; Park & Ostriker 1989; Park 1990a,b). In the opposite limiting case accretion proceeds in the form of thin disk when the gas has enough angular momentum and cools efficiently. It has a fixed and high radiation efficiency, $`e0.1`$, and does not emit high energy photons due to the low temperature of the gas. Although the thin disk accretion has been successfully applied to many astronomical sources (see Pringle 1981 and Frank, King, & Raine 1992 for reviews), its radiation spectrum is incompatible with certain kinds of celestial sources believed to be powered by accretion.
There are also other types of accretion disk solutions beyond the cool thin disk. Shapiro, Lightman, & Eardley (1976) showed that geometrically and optically thin, high-temperature accretion disk can exist in which ions and electrons are weakly coupled and have different temperatures. Although it was more successful in explaining high-energy sources like Cyg X-1, it proved to be unstable on a thermal time scale (Pringle 1976; Piran 1978; Park 1995). Seminal works on geometrically thick accretion disks appeared a few years later (Jaroszyński, Abramowicz, & Paczyński 1980; Paczyński & Wiita 1980). Abramowicz et al. (1988) further combined these works with ‘$`\alpha `$-viscosity’ model to find a so called ‘slim disk’ solution. It is a geometrically and optically thick accretion disk solution with a significant fraction of the gas energy being transported via advection, but treated in one-dimensional framework. These slim disk solutions exist only when the mass accretion rate is near or above the Eddington mass accretion rate defined as
$$\dot{M}_{Edd}\equiv \frac{L_E}{c^2}=2.19\times 10^{-9}M_{\odot }\text{yr}^{-1}$$
(1)
where $`L_E`$ is the Eddington luminosity and the numerical value is for the pure hydrogen. (In the case of accretion onto neutron stars or thin disk accretion, the Eddington accretion rate has sometimes been defined as $`e^{-1}L_E/c^2`$ since $`e`$ is almost a fixed value. However, in the type of accretion in which $`e`$ is not known a priori, this definition would be confusing and misleading because even when $`\dot{M}\gtrsim \dot{M}_{Edd}`$, the luminosity from the accretion can be much smaller than $`L_E`$. This choice of definition makes $`\dot{M}_{Edd}`$ in this work 10 times smaller than $`\dot{M}_{Edd}`$ in NY3.) Slim disks have a radiation spectrum similar to that of the thin disks.
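For orientation, the numerical value in Eq. (1) is easily reproduced; the sketch below evaluates $`\dot{M}_{Edd}=L_E/c^2`$ for a pure-hydrogen Eddington luminosity and a few assumed black hole masses.

```python
import numpy as np

# cgs constants
G, c, m_p, sigma_T = 6.674e-8, 2.998e10, 1.673e-24, 6.652e-25
M_sun, yr = 1.989e33, 3.156e7

def mdot_eddington(M):
    """Eddington accretion rate M_dot_Edd = L_E / c^2 (pure hydrogen), in g/s."""
    L_E = 4.0 * np.pi * G * M * m_p * c / sigma_T
    return L_E / c**2

for M in (1.0, 10.0, 1.0e8):                  # masses in solar units (assumed)
    mdot = mdot_eddington(M * M_sun)
    print(M, mdot * yr / M_sun)               # ~2.2e-9 (M / M_sun) M_sun per year
```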
Only recently, a new type of accretion solution has been found that stands somewhere between the spherical and thin disk accretion (Narayan & Yi 1994, 1995a,b, hereafter NY1, NY2, and NY3, respectively, and NY collectively; Abramowicz et al. 1995; see Narayan, Mahadevan, & Quataert 1998 for review and references). These solutions are called ‘advection dominated accretion flow’ (ADAF) because most of the gas energy is advected with the flow and ultimately into the hole due to relatively large infall velocity. This type of accretion is possible at sufficiently low mass accretion rate so that radiative cooling is inefficient and ions can be kept near virial temperature. The isodensity contours of self-similar ADAF change from those of a sphere to those of a somewhat flattened spheroid, depending on the parameters chosen (NY2), but in general the flow looks more like the spherical or spheroidal limit rather than the disk-like solutions. It is in a way reminiscent of the ion-torus model (Rees et al. 1982).
ADAF has a number of distinct and desirable properties. First, like spherical accretion, the radiation efficiency can span a large range and is generally much smaller than $`0.1`$. Second, the electron temperature is much higher than that in thin disk accretion and, therefore, can produce high-energy photons. Third, the self-similar form of the solutions makes calculation of various properties very easy (Spruit et al. 1987; NY1). These merits have led to many successful applications of ADAF to various accretion powered sources that are difficult to model by the standard thin disk solutions: Sgr A (Narayan, Yi, & Mahadevan 1995; Mahadevan, Narayan, & Krolik 1997; Manmoto, Mineshige, & Kusunose 1997; Narayan et al. 1998), NGC 4258 (Lasota et al. 1996), soft X-ray transient sources (Narayan, McClintock, & Yi 1996; Narayan, Barret, & McClintock 1997), low-luminosity galactic nuclei (Di Matteo & Fabian 1997; Mahadevan 1997), torque-reversing X-ray pulsars (Yi & Wheeler 1998), and X-ray background (Yi & Boughn 1998).
However, most elaborate works on ADAF have been based on one-dimensional height-integrated equations (Narayan, Kato, & Honma 1997; Chen, Abramowicz, & Lasota 1997; Nakamura et al. 1997; Gammie & Popham 1998; Popham & Gammie 1998; Chakrabarti 1996a) even though the desirable properties depend explicitly on the two-dimensional characteristics of the flow. The two-dimensional nature of ADAF has been explored only in self-similar form (NY2; Xu & Chen 1997) or by numerical simulations of adiabatic, inviscid flow very close to the hole (Molteni, Lanzafame, & Chakrabarti 1994; Ryu et al. 1995; Chen et al. 1997; Igumenshchev & Beloborodov 1997). When treated as a height-integrated disk instead of two-dimensional flow, locally produced photons are assumed to escape the flow without affecting other parts of the flow, which is only true when the flow is truly disk-like. Also, parts of the thermal and dynamical structure of the two-dimensional flow can be widely different from those of the averaged disk, under general accretion conditions, because of the flow structure in the vertical direction.
So, in this work, we study the thermal properties of the ADAF based on the two-dimensional self-similar flow solutions of NY2. One important difference between this work and previous studies is the allowance for the preheating of the flow at large radii by photons produced at the inner, hotter part of the flow. When the luminosity is high enough, this can substantially change the dynamics of the ADAF leading, quite possibly, to some kind of time-dependent behaviour or outflow.
## 2 Thermal Properties of Advection Dominated Accretion Flows
In generic accretion flow, the thermal state of the gas at a given point is determined by the balance between various heating and cooling processes. Gas can be heated by $`PdV`$ work, viscous dissipation, and the interaction with radiation. It cools mostly by emission of radiation. However, heating and cooling are not always balanced, and the extra energy gain or loss will be stored as internal energy and carried with the flow. This advection of internal energy mostly acts as loss of gas energy at a fixed point in the accretion flow and sometimes is called advective cooling.
The advection-dominated accretion flow solutions of NY refer to a family of solutions where gas at a given point is heated by viscous dissipation and cooled mostly by the advection with negligible radiative cooling. However, the definition of “advective cooling” in NY contains the $`PdV`$ adiabatic heating term. So one should be careful when advective cooling is used in thermal balance calculations. For example, gas of an adiabatic index, $`\gamma =5/3`$, and temperature profile, $`T\propto r^{-1}`$, has zero advective cooling because $`PdV`$ heating exactly balances the internal energy advection. If $`\gamma <5/3`$, the adiabatic heating alone cannot maintain the $`T\propto r^{-1}`$ profile and viscous dissipation provides the additional heating needed to maintain the $`r^{-1}`$ profile.
To facilitate further discussions, we define five time scales relevant in general accretion flow: the inflow time scale
$$t_{flow}\equiv r/v_r,$$
(2)
the advective cooling time scale
$$t_{adv}\equiv \frac{\epsilon }{q_{adv}^{-}},$$
(3)
the viscous heating time scale
$$t_{vis}\equiv \frac{\epsilon }{q_{vis}^+},$$
(4)
the radiative heating time scale
$$t_H\equiv \frac{\epsilon }{H},$$
(5)
and the radiative cooling time scale
$$t_C\equiv \frac{\epsilon }{C},$$
(6)
where $`r`$ is the radius, $`v_r`$ the infall velocity, $`\epsilon `$ the internal energy of gas, $`q_{adv}^{-}`$ the cooling rate due to advection, $`q_{vis}^+`$ the viscous heating rate, $`H`$ the radiative heating rate, and $`C`$ the radiative cooling rate, all per unit volume. In self-similar solutions, $`t_{flow}=ϵt_{adv}`$ and $`t_{adv}=f^{-1}t_{vis}=(1-f)f^{-1}t_C`$ where $`f`$ is the ratio between the advective cooling and the total cooling rates and $`ϵ=(5/3-\gamma )/(\gamma -1)`$ (NY). When the flow is advection dominated, $`f=1`$ and $`t_{flow}\sim t_{adv}\sim t_{vis}\ll t_C`$.
## 3 Cooling in Two-Dimensional ADAF
The self-similar ADAF is, like the Bondi solution, very attractive due to its simplicity. Most physical quantities are simple power-laws of radius and can be readily calculated. However, this also implies that the solution may not be appropriate everywhere for any real accretion flow.
The simplicity of ADAF self-similar solution is based on the simple energy balance equation. The assumption of a constant ratio between the viscous heating, advective cooling, and radiative cooling, $`1:f:1-f`$, maintains the self-similarity of the solutions, yet puts severe restrictions on cooling processes. Should the ratio be constant in radius, the energy equation would be satisfied at only one point. NY overcome this problem by finding a semi-self consistent variable $`f`$ for applications to real accretion flows. However, this is only possible when a positive $`f`$ can be found: the cooling should be smaller than the heating. Solutions do not exist if a positive $`f`$ is not possible. Cooling may be so strong that the flow can not be kept at a high temperature (Rees et al. 1982). This leads to the critical mass accretion rate $`\dot{M}_{crit}`$ for a given radius above which the optically thin hot solutions do not exist (Abramowicz et al. 1995; NY3). And for a given $`\dot{m}`$, hot solutions exist only within some critical radius, $`r_{crit}`$.
First, we will study similar issues in two-dimensional ADAF. The viscous heating per unit volume in self-similar ADAF is simply
$$q_{vis}^+=f^{-1}q_{adv}^{-}=\frac{3}{2}ϵ^{}\frac{\rho |v_r|c_s^2}{r},$$
(7)
where $`\rho (r,\vartheta )`$ is the gas density, $`v_r`$ the radial velocity, and $`c_s`$ the isothermal sound speed. The composite formula for atomic cooling and non-relativistic bremsstrahlung (Stellingwerf & Buff 1982; Nobili, Turolla, & Zampieri 1991) extended to relativistic bremsstrahlung is used for cooling,
$$C=\sigma _Tc\alpha _fm_ec^2n_i^2\left[\left\{\lambda _{br}(T_e)+6.0\times 10^{22}\theta _e^{1/2}\right\}^{-1}+\left(\frac{\theta _e}{4.82\times 10^6}\right)^{12}\right]^{-1},$$
(8)
where $`\sigma _T`$ is the Thomson cross section, $`\alpha _f`$ the fine-structure constant, $`n_i`$ the number density of ions, and $`\theta _ekT_e/m_ec^2`$. The relativistic bremsstrahlung rate is (Svensson 1982; Stepney & Guilbert 1983; NY3)
$$\lambda _{br}=\left(\frac{n_e}{n_i}\right)\left(\sum _iZ_i^2\right)F_{ei}(\theta _e)+\left(\frac{n_e}{n_i}\right)^2F_{ee}(\theta _e),$$
(9)
where
$`F_{ei}`$ $`=`$ $`4\left({\displaystyle \frac{2}{\pi ^3}}\right)^{1/2}\theta _e^{1/2}(1+1.781\theta _e^{1.34})\text{for}\theta _e<1`$
$`=`$ $`{\displaystyle \frac{9}{2\pi }}\theta _e\left[\mathrm{ln}(1.123\theta _e+0.48)+1.5\right]\text{for}\theta _e>1`$
$`F_{ee}`$ $`=`$ $`{\displaystyle \frac{5}{6\pi ^{3/2}}}(44-3\pi ^2)\theta _e^{3/2}(1+1.1\theta _e+\theta _e^2-1.25\theta _e^{5/2})\text{for}\theta _e<1`$
$`=`$ $`{\displaystyle \frac{9}{\pi }}\theta _e\left[\mathrm{ln}(1.123\theta _e)+1.2746\right]\text{for}\theta _e>1,`$
and $`Z_i`$ is the charge of ions. The unknown electron temperature is assumed to be described by the fitting formula $`T_e=T_iT_a/(T_i+T_a)`$ with $`T_a\simeq 10^9\text{K}`$ and $`(kT_i/m_pc^2)=c_s^2(\vartheta )(r_s/r)^1/4`$ with $`r_s\equiv 2GM/c^2`$, which approximates the electron temperature profile in ADAF (NY3).
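The bremsstrahlung rate factors $`F_{ei}`$ and $`F_{ee}`$ quoted above, together with the two-temperature fitting formula for $`T_e`$, translate directly into code; the sketch below assumes a pure hydrogen plasma ($`n_e=n_i`$, $`Z=1`$) and is a transcription of these fits only, not of the full composite cooling function of Eq. (8).

```python
import numpy as np

def F_ei(theta):
    """Electron-ion relativistic bremsstrahlung fit; theta = k T_e / (m_e c^2)."""
    theta = np.asarray(theta, dtype=float)
    low = 4.0 * np.sqrt(2.0 / np.pi**3) * np.sqrt(theta) * (1.0 + 1.781 * theta**1.34)
    high = (9.0 / (2.0 * np.pi)) * theta * (np.log(1.123 * theta + 0.48) + 1.5)
    return np.where(theta < 1.0, low, high)

def F_ee(theta):
    """Electron-electron relativistic bremsstrahlung fit."""
    theta = np.asarray(theta, dtype=float)
    low = (5.0 / (6.0 * np.pi**1.5)) * (44.0 - 3.0 * np.pi**2) * theta**1.5 \
          * (1.0 + 1.1 * theta + theta**2 - 1.25 * theta**2.5)
    high = (9.0 / np.pi) * theta * (np.log(1.123 * theta) + 1.2746)
    return np.where(theta < 1.0, low, high)

def lambda_br(theta):
    """Dimensionless bremsstrahlung rate for pure hydrogen (n_e = n_i, Z = 1)."""
    return F_ei(theta) + F_ee(theta)

def electron_temperature(T_i, T_a=1.0e9):
    """Two-temperature fitting formula T_e = T_i T_a / (T_i + T_a), in kelvin."""
    return T_i * T_a / (T_i + T_a)

k_B, m_e, c = 1.381e-16, 9.109e-28, 2.998e10     # cgs
for T_i in (1.0e8, 1.0e10, 1.0e12):
    T_e = electron_temperature(T_i)
    theta = k_B * T_e / (m_e * c**2)
    print(T_i, T_e, lambda_br(theta))
```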
The ratio $`f`$ is calculated from the definition,
$$1-f\equiv \frac{C}{q_{vis}^+}$$
(12)
in the $`(r,\vartheta )`$ plane. The two-dimensional structure of the flow is basically determined by one parameter $`ϵ^{}\equiv ϵ/f`$ (see NY2). Contours of $`f=0.9`$ (innermost curve), $`0.8`$, $`0.7`$, $`0.6`$, $`0.5`$, $`0.3`$, $`0.2`$, $`0.1`$, and $`0.0`$ (outermost curve) for ADAF with $`\dot{m}=0.03`$, $`ϵ^{}\equiv ϵ/f=1.0`$, and $`\alpha =0.1`$ are shown in Figure 1. The heavily shaded region above the contours represents $`f<0`$, i.e., the region where hot electrons can not exist due to cooling. The one-dimensional calculation of NY3 (Fig. 3) shows that $`f=0.5`$ flow has $`r_{crit}\approx 10^4r_s`$, which agrees well with $`r_{crit}\approx 1.1\times 10^4r_s`$ for $`\vartheta =0`$ in Figure 1.
Note that there is a roughly cone-shaped shaded region around the pole where the hot solution is not possible. For $`\vartheta \ll 1`$, this region extends down to $`r_s`$. This is mainly due to the slow infall velocity near the pole (NY2) to which the advective cooling is proportional. Since the viscous heating and radiative cooling are proportional to the advective cooling in self-similar ADAF, a small infall velocity near the pole implies a small advective cooling, and hence small viscous heating and radiative cooling. But even atomic plus bremsstrahlung cooling can have a far higher cooling rate than is assumed in ADAF; therefore the flow cannot maintain the high temperature in such regions. Normally, electrons in the region would cool down to $`\sim 10^4\text{K}`$. To first order, the dynamics of the flow would not be affected as long as the Coulomb coupling is weak and the ion temperature is near virial. But the low temperature of the electrons for the gas near the polar axis will make the coupling stronger \[energy exchange rate $`\propto T_e^{-1/2}`$\] and it is possible that the ion temperature would consequently drop below the virial value, resulting in the collapse of the polar region. In fact, as we shall show in Park & Ostriker (1999), this is the typical case. The possibility of the collapse of the polar region of the flow even at high temperature (for different reasons) is also discussed by Blandford & Begelman (1998). This will create a funnel around the polar axis (Fig. 1) and the flow will look more like a torus than a spheroid (Rees et al. 1982; Paczyński 1998).
## 4 Comptonization
When the accretion flow is quasi-spherical as in ADAF, whatever photons are produced at smaller radii inevitably interact with the outer part of the flow on their way out. In generic conditions, Compton scattering of photons off electrons is usually the most important interaction. They can either heat or cool the gas depending on the spectrum of the radiation and the temperature of the gas. High energy photons heat the electrons while low energy ones cool them. However, the flow at large radius is generally heated because most photons are produced at hotter inner regions, and this is called preheating (Ostriker et al. 1976).
The importance of preheating on the flow can be estimated by comparing the relevant time scales. If $`t_{vis}\ll t_H`$, preheating can be safely ignored. However, if there exists a region where $`t_H<t_{vis}`$, then preheating dominates over the viscous heating or the advective cooling. Since radiative fluxes are, in the lowest approximation, quadratic in the flow rate (Shapiro 1973a,b), whereas advective heating/cooling terms are linear in the flow rate, preheating will become significant for high enough accretion rates. In the spherical case we found that the solutions are significantly altered for the mass accretion rate $`\dot{m}\equiv \dot{M}/\dot{M}_{Edd}\gtrsim 10^{1.5}`$ and the luminosity $`l\equiv L/L_{Edd}\gtrsim 10^{-7.5}`$, with no high-temperature solutions possible having $`l\gtrsim 0.03`$ and $`e\gtrsim 3\times 10^{-4}`$ due to the preheating instability (Park 1990a,b). Since the density of the ADAF is higher than that in spherical accretion flow for the same mass accretion rate due to the small infall velocity, we expect that this limit will occur for higher values of $`e`$ and $`l`$.
In ADAF, all physical quantities are products of a radial part, a function of radius $`r`$ only, with an angular part, a function of spherical polar angle $`\vartheta `$ only. So
$$t_{vis}=\frac{1}{ϵ^{}}\mathrm{\Omega }_K^{-1}(r)v^{-1}(\vartheta ),$$
(13)
where the radial infall velocity is defined as $`v_r=r\mathrm{\Omega }_K(r)v(\vartheta )`$ with $`\mathrm{\Omega }_K(r)\equiv (GM/r^3)^{1/2}`$ and
$$t_H=\left(\frac{m_ec^2}{4kT_X}\right)\left(\frac{L_E}{L_X}\right)\frac{3}{2}c_s^2(\vartheta )\frac{r}{c},$$
(14)
where $`m_e`$ is the electron mass, $`c`$ the speed of light, $`T_X`$ the Compton temperature of the radiation defined as the energy-weighted mean of photon energy averaged over the photon spectral number density, $`4kT_X\equiv <(h\nu )^2>/<h\nu >`$, $`L_X`$ the luminosity of the Comptonizing radiation, $`L_E`$ the Eddington luminosity, $`c_s(\vartheta )`$ the isothermal sound speed divided by the Keplerian velocity $`c_s(r,\vartheta )\equiv (p/\rho )^{1/2}=r\mathrm{\Omega }_K(r)c_s(\vartheta )`$, $`p`$ the total pressure, and $`\rho `$ the gas density (Levich & Syunyaev 1971).
$$v(\vartheta )c_s^2(\vartheta )\left(\frac{r_s}{r}\right)^{1/2}<\frac{2}{3}\left(\frac{4kT_X}{m_ec^2}\right)\left(\frac{L_X}{L_E}\right).$$
(15)
We take the case of the flow $`L_X/L_E=3\times 10^{-4}`$, $`T_X=10^9\text{K}`$, and $`ϵ^{}=0.1`$ as an example (NY3). Equation (15) represents the region above the solid curve in Figure 2, in which preheating is the dominant heating process and should not be ignored in the thermal balance equation. In advection dominated flow, $`t_{flow}\sim t_{adv}\sim t_{vis}`$ and, therefore, $`t_H<t_{flow}`$. The flow is not adiabatic anymore and the dynamics of the flow will be significantly altered.
Similarly, there could be many soft photons, thereby lowering the radiation temperature to $`T_X<T_e`$, and the flow would be cooled by Compton scattering. The condition for this is
$$v(\vartheta )c_s^2(\vartheta )\left(\frac{r_s}{r}\right)^{1/2}<\frac{2}{3}\left(\frac{4kT_e}{m_ec^2}\right)\left(\frac{L_X}{L_E}\right),$$
(16)
which corresponds to the region above the dotted curve in Figure 2 for the same flow parameters.
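Conditions (15) and (16) translate, for given angular profiles $`v(\vartheta )`$ and $`c_s(\vartheta )`$, into a radius beyond which Compton heating (or cooling) overtakes the viscous heating. The sketch below solves condition (15) for that radius; the angular profiles used are simple placeholders, not the NY2 self-similar functions.

```python
import numpy as np

k_B, m_e, c = 1.381e-16, 9.109e-28, 2.998e10   # cgs

def preheating_radius(v_theta, cs_theta, T_X, L_X_over_L_E):
    """Radius (in units of r_s) beyond which Eq. (15) holds, i.e. t_H < t_vis."""
    rhs = (2.0 / 3.0) * (4.0 * k_B * T_X / (m_e * c**2)) * L_X_over_L_E
    lhs_at_rs = v_theta * cs_theta**2            # left-hand side evaluated at r = r_s
    return (lhs_at_rs / rhs) ** 2                # from lhs_at_rs * (r_s/r)^{1/2} < rhs

# placeholder angular profiles: slow inflow near the pole, faster near the equator
theta = np.linspace(0.0, np.pi / 2.0, 7)
v_theta = 0.1 + 0.4 * np.sin(theta) ** 2         # assumed, not from NY2
cs_theta = 0.5 * np.ones_like(theta)             # assumed, not from NY2

for th, r in zip(theta, preheating_radius(v_theta, cs_theta, 1.0e9, 3.0e-4)):
    print(f"theta = {th:4.2f}:  Compton heating dominates for r/r_s > {r:9.3g}")
```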
For the flow near the equator, Compton heating or cooling may be ignored within some radius. But above the equatorial plane at large radius or around the pole, Compton heating or cooling can be more important than the viscous heating. It is quite possible that Compton preheating would keep the flow near the pole at a high temperature when it would otherwise cool down. Or if there are abundant soft photons, the flow would cool due to the Compton cooling. This result is independent of the mass accretion rate and depends only on the Comptonizing luminosity. Esin (1997) similarly found the importance of Compton cooling (called ‘non-local cooling’) under certain conditions in a careful one-dimensional analysis, whereas Compton heating was found to be generally unimportant. The main reason for this discrepancy is that a different radiation temperature, i.e., spectrum, is assumed and that physical parameters of the height-integrated flow can be widely different from those of two-dimensional flow, especially near the polar axis (NY2).
However, a real ADAF could be far more complicated than this. Firstly, the radiation temperature $`T_X`$ can vary from place to place. It should correctly represent the spectrum of the radiation field at a given position, which is the sum of the radiation transferred from other regions of the flow and that produced locally. Secondly, the luminosity $`L_X`$ is also a function of position, which is again related to the density and temperature of the gas. Thirdly, the gas temperature is determined by the amount of Compton heating, therefore the radiation temperature and the radiation energy density. Hence, more accurate analysis requires simultaneously solving the energy equations for gas and radiation. This has been done only for spherical accretion flow (Park 1990a,b; Nobili, Turolla, & Zampieri 1991; Mason & Turolla 1992).
## 5 Preheating limit
If preheating increases above some critical value, it can affect the accretion flow more dramatically than just changing the thermal balance. In Bondi-type spherical accretion flow, Ostriker et al. (1976) found that too much preheating changes the dynamics of the flow around the sonic radius and would disrupt the steady flow. This results in various time-dependent behaviour in the flow and in the outcoming radiation (Cowie, Ostriker, & Stark 1978; Ciotti & Ostriker 1997).
Our next question is whether there is a similar preheating limit for ADAF. Since ADAF is self-similar, it does not have an accretion radius or outer boundary. So we have to look at the temperature structure in preheated ADAF. We will use a simplified thermal balance equation to solve for the temperature of the ADAF with strong preheating luminosity.
In ADAF, the cooling rate is assumed to be a constant fraction of the advective cooling to obtain self-similar forms of solutions. This simplification may not be too bad, if the solution is taken as height-integrated (or angle-averaged) form. But in two-dimensional ADAF, the flow time approaches infinity along the polar direction and is an order-of-magnitude larger than the usual free-fall time along the equatorial direction (NY2). There is no radial advection along the pole, and we expect the Compton heating to be balanced by the radiative cooling. Similarly we apply the thermal balance equation to all parts of the flow to estimate the preheating effect, which is valid when $`t_H<t_{vis}`$ (Fig. 2). The temperature is determined by requiring $`H(T_{eq};r,\vartheta )=C(T_{eq};r,\vartheta )`$. When atomic line cooling and bremsstrahlung are the dominant processes, $`T_{eq}(r)`$ has the form that, outside some radius $`r_{*}`$, $`T(r)\approx 10^4\text{K}`$, and at $`r_{*}`$, $`T(r)`$ jumps to $`T_{*}\approx 2\times 10^6\text{K}`$, at which temperature the bremsstrahlung cooling rate becomes comparable to the peak of the atomic line cooling (Buff & McCray 1974). In this domain there is a classical phase change and the temperature suddenly jumps, because there is no stable equilibrium between $`10^4\text{K}`$ and $`T_{*}`$. If we further assume that the luminosity profile $`L_X(r)`$ is constant in $`r`$, the temperature profile inside $`r_{*}`$ is simply $`T(r)=T_{*}(r/r_{*})^{-1}`$ if the gas cools only by non-relativistic bremsstrahlung.
At the transition radius $`r_{*}`$, Compton heating is equal to the peak of the cooling curve by definition (eq. 8),
$$\frac{4k(T_X-T_e)}{m_ec^2}\frac{l}{\dot{m}}[n(\vartheta )]^{-1}\left(\frac{r_{*}}{r_s}\right)^{-1/2}=8.8\times 10^{-5},$$
(17)
where the gas number density is defined as $`n(r,\vartheta )=n(\vartheta )(\dot{M}/4\pi cm_pr_s^2)(r/r_s)^{-3/2}`$. Since the temperature of the electrons is not too different from $`10^9\text{K}`$ at the inner part of the flow where most of the radiation is produced, we take $`T_X\simeq 10^9\text{K}\gg T_e(r)`$. This choice of $`T_X`$ means that the radiation should contain enough hot photons to heat the gas. If there are too many soft photons, e.g., synchrotron photons, $`T_X`$ could be lower.
The accretion flow would be disrupted if the flow is heated above the virial temperature $`T_{vir}`$, defined as $`(\frac{5}{2})kT_{vir}\equiv GMm_p/r`$. The flow temperature suddenly jumps from $`10^4\text{K}`$ to $`T_{*}`$ at radius $`r_{*}`$, and the inflow would stop or reverse (i.e., become outflow) if $`T_{*}>T_{vir}(r_{*})`$. The condition $`T_{*}>T_{vir}(r_{*})`$ is equivalent to $`r_{*}>r_v\approx 9.4\times 10^5r_s`$ (eq. 17) since $`r_v`$ is defined as $`T_{vir}(r_v)=T_{*}`$.
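The comparison of $`r_{*}`$ and $`r_v`$ is easy to carry out numerically once the angular density factor $`n(\vartheta )`$ is specified: Eq. (17), with $`T_X\gg T_e`$, is solved for $`r_{*}`$ and the result is compared with $`r_v`$. The $`n(\vartheta )`$ values below are placeholders used only for illustration.

```python
import numpy as np

k_B, m_e, c = 1.381e-16, 9.109e-28, 2.998e10   # cgs

def transition_radius(efficiency, n_theta, T_X=1.0e9, const=8.8e-5):
    """Solve Eq. (17) for r_*/r_s, taking T_X >> T_e."""
    prefactor = (4.0 * k_B * T_X / (m_e * c**2)) * efficiency / n_theta
    return (prefactor / const) ** 2       # from prefactor * (r_*/r_s)^{-1/2} = const

r_v = 9.4e5                               # r_v / r_s, where T_vir(r_v) = T_*

for n_theta in (0.01, 0.1, 1.0):          # assumed angular density factors
    r_star = transition_radius(3.0e-3, n_theta)
    status = "overheated (r_* > r_v)" if r_star > r_v else "stable (r_* < r_v)"
    print(f"n(theta) = {n_theta:5.2f}:  r_*/r_s = {r_star:9.3g}  -> {status}")
```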
For the efficiency $`e=l/\dot{m}=3\times 10^{-3}`$ flow with $`ϵ^{}=1.0`$ and $`T_X=10^9\text{K}`$, $`r_{*}`$ as a function of $`\vartheta `$ is shown in Figure 3 as a solid curve, and $`r_v`$ as a dotted one. The part of the flow between $`r_{*}`$ and $`r_v`$ with $`r_{*}>r_v`$ is overheated and the steady inflow is not possible whereas the part of the flow with $`r_{*}<r_v`$ can accrete normally. The flow in regions A and C of Figure 3 has low temperature $`\sim 10^4\text{K}`$ and is stable. The flow in regions B and D is Compton heated above $`T_{*}`$ with the flow in region B being unstable, while that in region D is stable. So it would be possible that the flow accretes along the equatorial plane while there is outflow along the pole due to preheating. Blandford & Begelman (1998) also propose advection dominated inflow-outflow via a different mechanism: Part of the conservative flow can have positive energy due to the energy flux transported by the viscous torque (see also NY2).
The critical efficiency $`e_{cr}`$ above which this overheating starts to occur in any part of the flow corresponds to $`r_{*}|_{\vartheta =0}=r_v`$ since preheating is most effective in the polar direction due to lower infall velocity and lower density. Figure 4 shows $`e_{cr}`$ determined for $`T_X=10^9\text{K}`$ as squares. Depending on the value of $`ϵ^{}`$, any flow with efficiency above $`4\times 10^{-3}`$ will suffer preheating instability at or near the pole, and can develop the time-dependent behaviour or outflow as in the spherical case (Cowie, Ostriker, & Stark 1978). This value of $`e_{cr}\approx 4\times 10^{-3}`$ corresponds to $`l\approx 2\times 10^{-4}`$ and $`\dot{m}\approx 0.05`$ in NY3 solutions, and is comparable to that in the spherical accretion flow. Since the radiation temperature assumed here is higher than that of the self-consistent spherical flow (Park 1990a), the critical efficiency, which is inversely proportional to the radiation temperature, should be smaller. However, the gas density of ADAF is on average 30–100 times greater for the same total mass accretion rate due to the smaller infall velocity than that in the spherical accretion flow which is almost freely falling (Park 1990a; NY2), resulting in similar critical efficiency. For the same radiation temperature, i.e., spectral shape, higher density provides a major advantage for ADAF (in comparison with spherical flow): higher luminosities and efficiencies should be possible.
## 6 Luminosity from ADAF
One of the unique differences between ADAF (or spherical accretion) onto black holes and thin disk accretion is that its radiation efficiency cannot be assumed a priori but must be determined self consistently. In thin disk accretion, whatever the energy input to the flow, it is locally radiated away because of the long inflow time, and its radiation efficiency is essentially determined by the position of the inner edge of the disk. However, accretion flow with significant radial velocity can carry the energy into the hole as well as radiate away. So the outcoming radiation can be either significant or negligible depending on the dynamics and thermal structure of the flow (see Park & Ostriker 1998 for review).
Here, we estimate how much outgoing radiation is produced self consistently by self-similar ADAF. The amount of radiative cooling in ADAF is always assumed to be some fraction $`(1-f)`$ of the viscous heating. The remaining fraction $`f`$ is advected with the flow and not radiated. Hence the radiative luminosity would be $`(1-f)`$ times the sum of all viscous heating (Eq. 7) over all radii and angles,
$$L_{rad}=\frac{3}{2}ϵ^{}(1-f)\int _{r_{in}}^{r_{out}}𝑑r\int _0^\pi 2\pi 𝑑\vartheta \rho |v_r|c_s^2r.$$
(18)
Since the parameter $`f`$ is not determined a priori for self-similar ADAF, we take $`f=0`$ to estimate the maximum luminosity that can be produced. Substituting physical quantities of NY2 yields the dimensionless maximum luminosity as a function of $`ϵ^{}`$. Since the right-hand side of the equation (18) is proportional to $`\dot{M}`$, the maximum efficiency $`e_{max}`$ rather than the maximum luminosity is determined. We assume $`r_{in}=3r_s`$, and the values of $`e_{max}`$ for each $`ϵ^{}`$ are shown in Figure 4 as circles. Comparison with the critical efficiency, $`e_{cr}`$, shows that the flow with $`ϵ^{}\gtrsim 0.3`$ has a possibility of preheating disruption. Higher $`ϵ^{}`$ flows are more vulnerable to preheating because they have higher infall velocity and pressure, therefore, higher viscous heating and radiation output.
The exact value of $`e_{max}`$ may differ somewhat from the values above, because simple thermal balance equation is not always valid. In reality, the local cooling rate can be larger than the viscous heating because of the additional $`PdV`$ work. A very good example is the spherical accretion flow without viscous dissipation. The viscous heating is zero, yet the gas is heated by compression due to gravity and radiates (Shapiro 1973a,b; Park 1990a,b). Therefore, $`e_{max}`$ can be higher.
## 7 Self Consistent Flows
So far we have discussed the cases where gas near the polar axis or at large radius is cooled in the absence of preheating, or it can be heated too much and disrupted by preheating. However, it is also possible that the flow can be maintained at high temperature by preheating without being disrupted. In spherical accretion flow, there exist two branches of solutions for certain mass accretion rate (Park 1990a,b; see Park & Ostriker 1998 for references): In the lower luminosity branch, gas is cooled down to $`10^4\text{K}`$ and thus, not much energy is released through radiation. In the other, higher luminosity branch, gas at large radius is Compton heated by hot radiation produced in the inner region, and a higher radiation efficiency is achieved. We do find that this is also true for ADAF. For some mass accretion rates, there exist hot solutions self-consistently maintained by Compton preheating, whereas the flow would cool down to thin disk in the absence of preheating (Park & Ostriker 1999).
## 8 Summary
We have studied the thermal properties of the self-similar, two-dimensional advection dominated accretion flow (NY2) with special consideration given to the radiative cooling and heating. We find that
1. A hot solution is possible only within some critical radius for a given mass accretion rate in the equatorial plane, confirming the one-dimensional analysis (NY3). Also, for any mass accretion rate, a roughly conical region around the pole cannot maintain high-temperature electrons in the absence of radiative heating with the collapsed region shrinking as $`\dot{m}`$ is reduced. If ions become coupled to the now cold electrons, (as seems likely) an empty funnel around the polar axis will be created.
2. Part of the flow at large radii or above the equatorial plane should be affected through Compton heating or cooling by photons produced at smaller radii, if the luminosity is high enough. If the radiation efficiency of the accretion is above $`4\times 10^{-3}`$, and the outcoming radiation has mean photon energy comparable to the electron temperature of the inner region, preheating due to the inverse Compton scattering would overheat the polar region of the flow, and may create time-dependent behaviour or outflow while accretion still goes on in the equatorial directions. For NY3 solutions, these phenomena should begin to occur for luminosities $`l\gtrsim 2\times 10^{-4}`$ and accretion rate $`\dot{m}\gtrsim 0.05`$ in Eddington units.
3. The role of Compton preheating is quite intriguing in ADAF as in spherical flows. On the one hand it can have the attractive feature of driving polar winds whenever the total luminosity is above some rather low bound. But it may also allow another branch of solutions making the original ADAF solutions more viable for higher mass accretion rates than those for which solutions were valid. The detailed calculation of this new branch of solutions will be given in Park & Ostriker (1999).
We would like to thankfully acknowledge useful conversations with R. Narayan, I. Yi, X. Chen, B. Paczyński, and R. Blandford. This work is supported by NSF grant AST 9424416 and KOSEF 971-0203-013-2. Large part of this work was done when MGP visited Princeton University Observatory with the support from Professor Dispatchment Program of Korea Research Foundation and NSF 9424416.
## I Introduction
Although chiral symmetry is manifest in the Lagrangian of quantum chromodynamics (QCD) for vanishing quark masses, quantum effects break this symmetry spontaneously in the QCD vacuum. At temperatures of order 150 MeV, however, lattice QCD results indicate that chiral symmetry is restored . Such temperatures are expected to be reached in ultrarelativistic heavy-ion collisions at CERN-SPS, BNL-RHIC, and CERN-LHC energies . The restoration of chiral symmetry may lead to observable consequences, for instance, in the dilepton mass spectrum , or the formation of disoriented chiral condensates .
QCD with $`N_f`$ massless quark flavors has a $`SU(N_f)_L\times SU(N_f)_R`$ symmetry. The order parameter for the chiral transition is therefore $`\chi ^{ij}\sim \overline{q}_L^iq_R^j`$, $`i,j=1,\mathrm{\dots },N_f`$. For $`N_f=2`$, $`SU(2)_L\times SU(2)_R`$ is isomorphic to $`O(4)`$. Consequently, the effective Lagrangian for the order parameter $`\chi ^{ij}`$ falls in the same universality class as the $`O(4)`$ model, with order parameter $`\varphi ^i`$, $`i=1,\mathrm{\dots },4`$. Thus, if the chiral transition is second order in QCD, the dynamics (and the critical exponents) will be the same as in the $`O(4)`$ model, provided one is sufficiently close to the transition temperature . This motivates the study of the $`O(4)`$ model as an effective low-energy model for QCD. The study which will be presented here is based on the $`O(N)`$ model for arbitrary $`N`$. To make contact with QCD, however, we take $`N=4`$ in all numerical calculations.
At finite temperature, the naive perturbative expansion in powers of the coupling constant breaks down, requiring resummation schemes to obtain reliable results . Typically, these schemes aim to include thermal fluctuations to all orders in the calculation of physical quantities. Over the years, many ways to achieve this have been pursued (some approaches are more rigorous, some are more or less ad hoc; Refs. constitute an incomplete list). The present study of the $`O(N)`$ model focuses on the Hartree approximation and its large-$`N`$ limit (the large-$`N`$ approximation).
The Hartree approximation is known from many-body theory and generically represents the self-consistent resummation of tadpole diagrams . There is, however, no unique prescription to perform this resummation. This has led to the confusing situation that the same term “Hartree approximation” has been used for resummation schemes which actually differ in detail . In this work, the so-called Cornwall–Jackiw–Tomboulis (CJT) formalism is applied to derive a Hartree approximation , and we shall use the term “Hartree approximation” exclusively for the resummation of tadpoles within the CJT formalism.
The CJT formalism can be viewed as a prescription for computing the effective action of a given theory. In general, the CJT formalism resums one-particle irreducible diagrams to all orders. The stationarity conditions for the effective action are nothing but Schwinger–Dyson equations for the one- and two-point Green’s functions of the theory. What is usually referred to as the Hartree approximation in the context of the CJT formalism is the special case where only one-particle irreducible tadpole diagrams are included in the resummation. (To be precise, the original work of Cornwall, Jackiw, and Tomboulis referred to this as the “Hartree–Fock approximation”.) In this case, the equations for the two-point Green’s functions simplify to self-consistency conditions, or “gap” equations, for the resummed masses of the quasi-particle excitations, i.e., in our case, the in-medium sigma and pion masses.
The $`O(N)`$ model has been previously studied using the CJT formalism by Amelino-Camelia , and Petropoulos . The former work addressed renormalization using the cut-off renormalization scheme, but did not present solutions of the gap equations. In the latter work, the gap equations were numerically solved, but the issue of renormalization was not treated. In this paper, we complete these investigations by discussing the renormalization of the gap equations in the cut-off as well as the counter-term renormalization scheme and by presenting the corresponding numerical solutions.
Renormalization of expressions obtained in self-consistent approximation schemes is non-trivial. In perturbation theory, it is sufficient to perform renormalization in the vacuum, $`T=0`$, order by order in the coupling constant, as a finite temperature does not introduce new ultraviolet singularities . Consequently, the perturbative renormalization of the $`O(N)`$ model is straightforward and has been known for a long time . Self-consistent approximation schemes, however, in general resum only certain classes of diagrams. This has the consequence that performing renormalization after resummation may require renormalization constants that are no longer independent of the properties of the medium, see Ref. and below. A resummation scheme which circumvents this problem by renormalizing prior to resummation is the so-called optimized perturbation theory . This approach will not be discussed here.
Our main results are the following. In the cut-off renormalization scheme, we find that taking the cut-off, $`\mathrm{\Lambda }`$, to infinity the masses of the sigma meson and the pions become identical , even in the phase where chiral symmetry is broken. This is clearly unphysical, as the pions are Goldstone bosons and thus much lighter than the sigma meson. Moreover, it is a well-known fact that the $`O(N)`$ model becomes trivial in the limit $`\mathrm{\Lambda }\mathrm{}`$. Consequently, renormalization of the $`O(N)`$ model within the cut-off scheme can only be meaningfully studied for a finite value of $`\mathrm{\Lambda }`$. Even then, fixing $`\mathrm{\Lambda }`$ to give the observed meson masses in the vacuum, we find that in the absence of explicit chiral symmetry breaking, the Hartree approximation requires $`\mathrm{\Lambda }`$ to vanish. The large-$`N`$ approximation does not have this problem and allows for nonzero values of $`\mathrm{\Lambda }`$.
In the counter-term scheme, renormalization can be performed within both the Hartree as well as the large-$`N`$ approximation. In the Hartree approximation, the value of the renormalization scale, $`\mu `$, is uniquely fixed by the vacuum mass of the sigma meson in the absence of explicit chiral symmetry breaking. In the large-$`N`$ approximation, this constraint does not exist, and $`\mu `$ can be chosen arbitrarily. As $`\mathrm{\Lambda }`$ or $`\mu `$ are free parameters in the large-$`N`$ approximation, at any given temperature the values of the meson masses depend on the choice of these parameters. This is a consequence of the fact mentioned above that, for self-consistent resummation schemes, the renormalization constants may depend on the temperature.
The outline of the paper is as follows. In Section II, we briefly discuss the effective potential within the CJT formalism . In Section III, this formalism is applied to derive the effective potential for the $`O(N)`$ model in the Hartree approximation. Section IV is devoted to a discussion of the stationarity conditions for the effective potential, which lead to gap equations for the sigma and pion masses. The renormalization of the gap equations is then performed in Section V within the cut-off and the counter-term schemes. In Section VI we present numerical results. Section VII concludes this paper with a summary of our results. As an application we also compute the temperature dependence of the in-medium decay widths to one-loop order for on-shell $`\sigma `$ and $`\pi `$ mesons at rest.
We use the imaginary-time formalism to compute quantities at finite temperature. Our notation is
$$\int _kf(k)\equiv T\underset{n=-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\sum }}\int \frac{d^3k}{(2\pi )^3}f(2\pi inT,𝐤),\qquad \int _xf(x)\equiv \int _0^{1/T}𝑑\tau \int d^3𝐱f(\tau ,𝐱).$$
(1)
We use units $`\mathrm{\hbar }=c=k_B=1`$. The metric tensor is $`g^{\mu \nu }=\mathrm{diag}(+,-,-,-)`$.
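As a quick cross-check of this notation, the bosonic frequency sum hidden in the sum-integral can be verified numerically. The following Python sketch is ours (not part of the paper) and compares a truncated Matsubara sum with its closed form, the building block of the tadpole integral $`Q(M,T)`$ encountered below.

```python
# Numerical cross-check (ours, not from the paper) of the bosonic Matsubara sum
#   T * sum_n 1/(omega_n^2 + eps^2) = (1/(2 eps)) * [1 + 2 n_B(eps)],  omega_n = 2 pi n T.
import numpy as np

def matsubara_sum(eps, T, nmax=200000):
    """Truncated sum T * sum_{n=-nmax}^{nmax} 1/(omega_n^2 + eps^2)."""
    n = np.arange(-nmax, nmax + 1)
    omega_n = 2.0 * np.pi * n * T
    return T * np.sum(1.0 / (omega_n**2 + eps**2))

def closed_form(eps, T):
    """(1/(2 eps)) * (1 + 2 n_B(eps)) = coth(eps/(2T)) / (2 eps)."""
    n_B = 1.0 / np.expm1(eps / T)
    return (1.0 + 2.0 * n_B) / (2.0 * eps)

eps, T = 0.3, 0.15            # arbitrary units
print(matsubara_sum(eps, T), closed_form(eps, T))   # agree up to the 1/nmax truncation error
```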
## II The Effective Potential in the Cornwall–Jackiw–Tomboulis formalism
The notion of an effective action is quite useful for studying theories with spontaneously broken symmetries. For translationally invariant systems, the effective action becomes the effective potential. At the classical level, the effective potential is given by the potential energy (density). The vacuum (ground) state is given by the minimum of the potential energy. For theories with spontaneously broken symmetry there may exist infinitely many equivalent (degenerate) minima. At the quantum level, there are additional terms in the effective potential, corresponding to quantum fluctuations. At finite temperature (and finite chemical potential), the minimum of the effective potential corresponds to the thermodynamic pressure .
The common way to compute the effective potential is via the loop expansion . This approach, however, becomes problematic for theories with spontaneously broken symmetries. In particular, the energy of quasi-particle excitations with small 3-momenta becomes imaginary. The reason is that the requirement of convexity for the effective potential is violated. A way to salvage the loop expansion is to perform a Maxwell construction which restores the convexity of the effective potential . Another way to compute the effective potential is via the CJT formalism . As mentioned in the introduction, this method resums certain classes of diagrams and has the advantage that the energy of the quasi-particle excitations remains real for all values of 3-momentum.
Consider a scalar field theory with Lagrangian
$$\mathcal{L}(\varphi )=\frac{1}{2}\partial _\mu \varphi \partial ^\mu \varphi -U(\varphi ),$$
(2)
for instance, $`\varphi ^4`$ theory where
$$U(\varphi )=\frac{1}{2}m^2\varphi ^2+\lambda \varphi ^4.$$
(3)
The generating functional for Green’s functions in the presence of sources $`J,K`$ reads :
$$𝒵[J,K]=e^{𝒲[J,K]}=𝒟\varphi \mathrm{exp}\left\{I[\varphi ]+\varphi J+\frac{1}{2}\varphi K\varphi \right\},$$
(4)
where $`𝒲[J,K]`$ is the generating functional for connected Green’s functions, $`I[\varphi ]=_x`$ is the classical action, and
$`\varphi J`$ $``$ $`{\displaystyle _x}\varphi (x)J(x),`$ (6)
$`\varphi K\varphi `$ $``$ $`{\displaystyle _{x,y}}\varphi (x)K(x,y)\varphi (y).`$ (7)
The expectation values for the one-point function, $`\overline{\varphi }(x)`$, and the connected two-point function, $`G(x,y)`$, in the presence of sources are given by
$`{\displaystyle \frac{\delta 𝒲[J,K]}{\delta J(x)}}`$ $``$ $`\overline{\varphi }(x),`$ (9)
$`{\displaystyle \frac{\delta 𝒲[J,K]}{\delta K(x,y)}}`$ $``$ $`{\displaystyle \frac{1}{2}}\left[G(x,y)+\overline{\varphi }(x)\overline{\varphi }(y)\right].`$ (10)
One now eliminates $`J`$ and $`K`$ in favor of $`\overline{\varphi }`$ and $`G`$ via a double Legendre transformation to obtain the effective action
$$\mathrm{\Gamma }[\overline{\varphi },G]=𝒲[J,K]-\overline{\varphi }J-\frac{1}{2}\overline{\varphi }K\overline{\varphi }-\frac{1}{2}GK,$$
(11)
where $`GK_{x,y}G(x,y)K(y,x)`$. Thus,
$`{\displaystyle \frac{\delta \mathrm{\Gamma }[\overline{\varphi },G]}{\delta \overline{\varphi }(x)}}`$ $`=`$ $`-J(x)-{\displaystyle \int _y}K(x,y)\overline{\varphi }(y),`$ (13)
$`{\displaystyle \frac{\delta \mathrm{\Gamma }[\overline{\varphi },G]}{\delta G(x,y)}}`$ $`=`$ $`-{\displaystyle \frac{1}{2}}K(x,y).`$ (14)
For vanishing sources, we find the stationarity conditions which determine the expectation value of the field $`\phi (x)`$ and the propagator $`𝒢(x,y)`$ in the absence of sources:
$`{\displaystyle \frac{\delta \mathrm{\Gamma }[\overline{\varphi },G]}{\delta \overline{\varphi }(x)}}|_{\overline{\varphi }=\phi ,G=𝒢}`$ $`=`$ $`0,`$ (16)
$`{\displaystyle \frac{\delta \mathrm{\Gamma }[\overline{\varphi },G]}{\delta G(x,y)}}|_{\overline{\varphi }=\phi ,G=𝒢}`$ $`=`$ $`0.`$ (17)
Equation (17) corresponds to a Schwinger–Dyson equation for the full (dressed) propagator. It was shown in that the effective action $`\mathrm{\Gamma }[\overline{\varphi },G]`$ is given by
$$\mathrm{\Gamma }[\overline{\varphi },G]=I[\overline{\varphi }]-\frac{1}{2}\mathrm{Tr}\left(\mathrm{ln}G^{-1}\right)-\frac{1}{2}\mathrm{Tr}\left(D^{-1}G-1\right)+\mathrm{\Gamma }_2[\overline{\varphi },G].$$
(18)
Here, $`D^1`$ is the inverse of the tree-level propagator,
$$D^1(x,y;\overline{\varphi })\frac{\delta ^2I[\varphi ]}{\delta \varphi (x)\delta \varphi (y)}|_{\varphi =\overline{\varphi }},$$
(19)
and $`\mathrm{\Gamma }_2[\overline{\varphi },G]`$ is the sum of all two-particle irreducible diagrams where all lines represent full propagators $`G`$.
For constant fields $`\overline{\varphi }(x)=\overline{\varphi }`$, homogeneous systems, and for a Lagrangian of the type given by eq. (2), the effective potential $`V`$ is given by $`V=-T\mathrm{\Gamma }/\mathrm{\Omega }`$, where $`\mathrm{\Omega }`$ is the 3-volume of the system, i.e.,
$$V[\overline{\varphi },G]=U(\overline{\varphi })+\frac{1}{2}\int _k\mathrm{ln}G^{-1}(k)+\frac{1}{2}\int _k\left[D^{-1}(k;\overline{\varphi })G(k)-1\right]+V_2[\overline{\varphi },G],$$
(20)
with
$$D^1(k;\overline{\varphi })=k^2+U^{\prime \prime }(\overline{\varphi })$$
(21)
and $`V_2[\overline{\varphi },G]T\mathrm{\Gamma }_2[\overline{\varphi },G]/\mathrm{\Omega }`$. The stationarity conditions are given by
$`{\displaystyle \frac{\delta V[\overline{\varphi },G]}{\delta \overline{\varphi }}}|_{\overline{\varphi }=\phi ,G=𝒢}=0,`$ (23)
$`{\displaystyle \frac{\delta V[\overline{\varphi },G]}{\delta G(k)}}|_{\overline{\varphi }=\phi ,G=𝒢}=0.`$ (24)
With eq. (20), the latter can be written in the form
$$𝒢^{-1}(k)=D^{-1}(k;\phi )+\mathrm{\Pi }(k),$$
(25)
where
$$\mathrm{\Pi }(k)2\frac{\delta V_2[\overline{\varphi },G]}{\delta G(k)}|_{\overline{\varphi }=\phi ,G=𝒢}$$
(26)
is the self energy. Equation (25) is the aforementioned Schwinger–Dyson equation. The thermodynamic pressure is then determined by
$$p=-V[\phi ,𝒢],$$
(27)
which, in the absence of conserved charges, is (up to a sign) identical to the free energy density.
## III The $`O(N)`$ Model
Let us now turn to the discussion of the $`O(N)`$ model. Its Lagrangian is given by
$$\mathcal{L}=\frac{1}{2}\left(\partial _\mu \underset{¯}{\varphi }\partial ^\mu \underset{¯}{\varphi }-m^2\underset{¯}{\varphi }\underset{¯}{\varphi }\right)-\frac{\lambda }{N}\left(\underset{¯}{\varphi }\underset{¯}{\varphi }\right)^2+H\varphi _1,$$
(28)
where $`\underset{¯}{\varphi }`$ is an $`N`$-component scalar field. For $`H=0`$ and $`m^2>0`$, the Lagrangian is invariant under $`O(N)`$ rotations of the fields. For $`H=0`$ and $`m^2<0`$, this symmetry is spontaneously broken down to $`O(N1)`$, with $`N1`$ Goldstone bosons (the pions). The phenomenological explicit symmetry breaking term, $`H`$, is introduced to yield the observed finite masses of the pions. Spontaneous symmetry breaking leads to a non-vanishing vacuum expectation value for $`\underset{¯}{\varphi }`$:
$$\left|\left\langle \underset{¯}{\varphi }\right\rangle \right|=\varphi >0.$$
(29)
($`\varphi `$ assumes the role of $`\overline{\varphi }`$ in section II.) At tree level,
$$\varphi \equiv f_\pi =\sqrt{-\frac{Nm^2}{4\lambda }}\frac{2}{\sqrt{3}}\mathrm{cos}\frac{\theta }{3},\qquad \theta =\mathrm{arccos}\left[\frac{HN}{8\lambda }\left(-\frac{12\lambda }{Nm^2}\right)^{3/2}\right].$$
(30)
For $`H=0`$, $`\mathrm{cos}(\theta /3)=\sqrt{3}/2`$. The inverse tree-level sigma and pion propagators are given by
$`D_\sigma ^1(k;\varphi )`$ $`=`$ $`k^2+m^2+{\displaystyle \frac{12\lambda }{N}}\varphi ^2,`$ (32)
$`D_\pi ^1(k;\varphi )`$ $`=`$ $`k^2+m^2+{\displaystyle \frac{4\lambda }{N}}\varphi ^2.`$ (33)
This leads to the zero-temperature tree-level masses
$`m_\sigma ^2`$ $`=`$ $`m^2+{\displaystyle \frac{12\lambda f_\pi ^2}{N}},`$ (35)
$`m_\pi ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda f_\pi ^2}{N}}`$ (36)
for the sigma meson and the pion. At tree level, the parameters of the Lagrangian are fixed such that these masses agree with the observed values of $`m_\sigma =`$ 600 MeV and $`m_\pi =`$ 139 MeV. Then, the coupling constant is
$$\lambda =\frac{N(m_\sigma ^2m_\pi ^2)}{8f_\pi ^2},$$
(37)
where $`f_\pi =`$ 93 MeV is the pion decay constant and
$$m^2=-\frac{m_\sigma ^2-3m_\pi ^2}{2}.$$
(38)
The explicit symmetry breaking term is $`H=m_\pi ^2f_\pi `$. These tree-level results may change upon renormalization.
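For orientation, the tree-level parameter fixing just described can be written out numerically; the short Python sketch below is ours and uses $`N=4`$ with the quoted values $`m_\sigma =600`$ MeV, $`m_\pi =139`$ MeV, $`f_\pi =93`$ MeV (with $`m^2<0`$ in the broken phase, as stated above).

```python
# Tree-level parameter fixing (ours, for illustration): eqs. (37), (38) and H = m_pi^2 f_pi.
N, m_sigma, m_pi, f_pi = 4, 600.0, 139.0, 93.0      # masses and f_pi in MeV

lam = N * (m_sigma**2 - m_pi**2) / (8.0 * f_pi**2)   # eq. (37)
m2  = -(m_sigma**2 - 3.0 * m_pi**2) / 2.0            # eq. (38), m^2 < 0
H   = m_pi**2 * f_pi                                 # explicit symmetry breaking term

print(f"lambda = {lam:.2f}, m^2 = {m2:.3e} MeV^2, H = {H:.3e} MeV^3")
```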
The CJT effective potential for the $`O(N)`$ model is obtained from eq. (20) as
$`V(\varphi ,G_\sigma ,G_\pi )`$ $`=`$ $`{\displaystyle \frac{1}{2}}m^2\varphi ^2+{\displaystyle \frac{\lambda }{N}}\varphi ^4-H\varphi `$ (39)
$`+`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \int _k}\left[\mathrm{ln}G_\sigma ^{-1}(k)+D_\sigma ^{-1}(k;\varphi )G_\sigma (k)-1\right]`$ (40)
$`+`$ $`{\displaystyle \frac{N-1}{2}}{\displaystyle \int _k}\left[\mathrm{ln}G_\pi ^{-1}(k)+D_\pi ^{-1}(k;\varphi )G_\pi (k)-1\right]`$ (41)
$`+`$ $`V_2(\varphi ,G_\sigma ,G_\pi ),`$ (42)
where $`V_2(\varphi ,G_\sigma ,G_\pi )`$ denotes the contribution from two-particle irreducible diagrams. In the following we include only the two-loop diagrams shown in Fig. 1 in $`V_2`$. These diagrams have no explicit $`\varphi `$ dependence. Then, using eq. (26) only tadpole diagrams (with resummed propagators) contribute to the self energies. As explained in the introduction, this corresponds to the Hartree approximation. The Schwinger–Dyson equations for the full propagators contain no momentum dependence. Thus, these equations are simply gap equations for the masses of the sigma meson and pion.
On the two-loop level there exist, however, two more diagrams, cf. Fig. 2, which will not be taken into account in our analysis. They depend explicitly on $`\varphi `$ and introduce an additional momentum dependence in the Schwinger-Dyson equations, which makes their solution more complicated. In the large-$`N`$ limit, however, these terms are a priori absent, because they are of order $`1/N`$.
In the Hartree approximation,
$`V_2(\varphi ,G_\sigma ,G_\pi )`$ $`=`$ $`3{\displaystyle \frac{\lambda }{N}}\left[{\displaystyle \int _k}G_\sigma (k)\right]^2+(N+1)(N-1){\displaystyle \frac{\lambda }{N}}\left[{\displaystyle \int _k}G_\pi (k)\right]^2`$ (43)
$`+`$ $`2(N-1){\displaystyle \frac{\lambda }{N}}\left[{\displaystyle \int _k}G_\sigma (k)\right]\left[{\displaystyle \int _k}G_\pi (k)\right].`$ (44)
The coefficients in this equation are chosen such that, when computing the self energies from eq. (26) and replacing the dressed propagators by the tree-level propagators, one obtains the standard results for the perturbative one-loop self energies .
## IV The Stationarity Conditions for the Effective Potential
The stationarity conditions (23), (24) read
$`0`$ $`=`$ $`m^2\phi +{\displaystyle \frac{4\lambda }{N}}\phi ^3-H+{\displaystyle \frac{4\lambda }{N}}\phi {\displaystyle \int _q}\left[3𝒢_\sigma (q)+(N-1)𝒢_\pi (q)\right],`$ (46)
$`𝒢_\sigma ^{-1}(k)`$ $`=`$ $`D_\sigma ^{-1}(k;\phi )+{\displaystyle \frac{4\lambda }{N}}{\displaystyle \int _q}\left[3𝒢_\sigma (q)+(N-1)𝒢_\pi (q)\right],`$ (47)
$`𝒢_\pi ^{-1}(k)`$ $`=`$ $`D_\pi ^{-1}(k;\phi )+{\displaystyle \frac{4\lambda }{N}}{\displaystyle \int _q}\left[𝒢_\sigma (q)+(N+1)𝒢_\pi (q)\right].`$ (48)
The integrals on the right-hand side of the last two equations correspond to the sigma meson and pion self energies. According to eq. (26), they originate from the diagrams of Fig. 1 via cutting one of the two loops in these diagrams. As one observes, these terms are independent of the momentum $`k^\mu `$ appearing in the propagator. The only $`k`$ dependence on the right-hand side enters through $`D_\sigma ^{-1}`$ and $`D_\pi ^{-1}`$, cf. eqs. (32) and (33). Therefore, one is allowed to make the following ansatz for the full propagators:
$$𝒢_{\sigma ,\pi }(k)=\frac{1}{k^2+M_{\sigma ,\pi }^2},$$
(49)
where now $`M_\sigma `$ and $`M_\pi `$ are the masses dressed by interaction contributions from the diagrams of Fig. 1. Note that the diagrams in Fig. 2 have an explicit dependence on the external momentum; including them would invalidate the ansatz (49).
The dressed sigma and pion masses are then determined by the following gap equations
$`M_\sigma ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left[3\phi ^2+3Q(M_\sigma ,T)+(N-1)Q(M_\pi ,T)\right],`$ (51)
$`M_\pi ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left[\phi ^2+Q(M_\sigma ,T)+(N+1)Q(M_\pi ,T)\right].`$ (52)
Here we introduced the function
$$Q(M,T)\equiv \int _k\frac{1}{k^2+M^2}=\int \frac{d^3𝐤}{(2\pi )^3}\frac{1}{ϵ_𝐤(M)}\left\{\frac{1}{\mathrm{exp}[ϵ_𝐤(M)/T]-1}+\frac{1}{2}\right\},$$
(53)
where $`ϵ_𝐤(M)\equiv \sqrt{𝐤^2+M^2}`$. The last term in the integral is divergent and requires renormalization. This will be discussed in section V. The standard practice, however, is to ignore this term, claiming it is independent of temperature. This is wrong, because $`ϵ_𝐤(M)`$ depends on $`T`$ through the gap equation for $`M`$. As is shown below, the correct renormalization procedure changes the results.
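The Bose–Einstein piece of $`Q(M,T)`$ is ultraviolet finite and can be evaluated by straightforward quadrature. The sketch below is ours (assuming scipy is available) and also checks the massless limit $`T^2/12`$ used later in Sec. VI.

```python
# Thermal (Bose-Einstein) part of the tadpole in eq. (53); a sketch of ours.
#   Q_T(M) = int d^3k/(2 pi)^3  (1/eps_k) / (exp(eps_k/T) - 1),   with Q_T(0) = T^2/12.
import numpy as np
from scipy.integrate import quad

def Q_T(M, T):
    """UV-finite thermal tadpole; M and T in the same energy units (here MeV)."""
    def integrand(k):
        eps = np.hypot(k, M)
        return k**2 / (2.0 * np.pi**2) / (eps * np.expm1(eps / T))
    val, _ = quad(integrand, 1e-8, 50.0 * T)   # the integrand decays exponentially
    return val

T = 150.0
print(Q_T(0.0, T), T**2 / 12.0)   # massless check
print(Q_T(139.0, T))              # a finite-mass value entering the gap equations
```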
Finally, $`\phi `$ is determined by
$$H=\phi \left\{m^2+\frac{4\lambda }{N}\left[\phi ^2+3Q(M_\sigma ,T)+(N-1)Q(M_\pi ,T)\right]\right\}.$$
(55)
Using eq. (51) this can be written in the compact form
$$H=\phi \left[M_\sigma ^2-\frac{8\lambda }{N}\phi ^2\right].$$
(56)
Note that this equation does not require explicit renormalization, and therefore is valid independent of the renormalization scheme. Equations (51), (52), and (56) are the stationarity conditions in the Hartree approximation. In the case where chiral symmetry is not explicitly broken, $`H=m_\pi =0`$, they imply the following:
1. $`\phi 0`$. This is the phase where chiral symmetry is spontaneously broken. From eq. (56) follows
$$M_\sigma =\sqrt{\frac{8\lambda }{N}}\phi .$$
(57)
On the other hand, eqs. (52) and (55) can be combined to give
$$M_\pi ^2=\frac{8\lambda }{N}\left[Q(M_\pi ,T)-Q(M_\sigma ,T)\right].$$
(58)
This implies that Goldstone’s theorem cannot be satisfied in the Hartree approximation at all temperatures: $`M_\sigma \ne 0`$ on account of (57), therefore $`M_\pi =0`$ is not a solution of (58). (Note, however, that after proper renormalization of the function $`Q(M,T)`$, $`M_\pi `$ can be chosen to be zero at one particular temperature, for instance $`T=0`$, but then will be nonzero for other values of $`T`$.)
2. $`\phi =0`$. In this phase chiral symmetry is restored and eqs. (51) and (52) can be combined to
$$M_\sigma ^2-M_\pi ^2=\frac{8\lambda }{N}\left[Q(M_\sigma ,T)-Q(M_\pi ,T)\right],$$
(59)
which has the solution $`M_\sigma =M_\pi `$; the masses become degenerate.
Let us now turn to the discussion of the large-$`N`$ approximation which is in fact the $`N\gg 1`$ limit of the Hartree approximation. To derive the large-$`N`$ limit from the previous results, one simply neglects all contributions of order $`1/N`$. Note, however, that $`\phi ^2\sim N`$, cf. eq. (30). Therefore, in the large-$`N`$ limit, the stationarity conditions read
$`M_\sigma ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left[3\phi ^2+NQ(M_\pi ,T)\right],`$ (61)
$`M_\pi ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left[\phi ^2+NQ(M_\pi ,T)\right],`$ (62)
$`H`$ $`=`$ $`\phi \left\{m^2+{\displaystyle \frac{4\lambda }{N}}\left[\phi ^2+NQ(M_\pi ,T)\right]\right\}.`$ (63)
This leads to
$$M_\sigma ^2=M_\pi ^2+\frac{8\lambda }{N}\phi ^2,$$
(64)
and
$$M_\pi ^2\phi =H.$$
(65)
These two equations are valid independent of the renormalization scheme. The latter equation implies the following in the case that chiral symmetry is not explicitly broken, $`H=m_\pi =0`$:
1. $`\phi 0`$. In this phase of spontaneously broken chiral symmetry $`M_\pi `$ has to vanish, i.e., Goldstone’s theorem is respected in the large-$`N`$ limit. $`M_\sigma `$ obeys the same relation (57) as in the Hartree approximation.
2. $`\phi =0`$. In this phase of restored chiral symmetry we again have $`M_\sigma =M_\pi `$, as in the Hartree case.
## V The Renormalized Gap Equations
As mentioned above, the last term in the integrand in (53) is divergent and requires renormalization. In the following we discuss renormalization with a three-dimensional momentum cut-off (CO) and via the counter-term (CT) renormalization scheme.
In the literature one often encounters the argument that this divergent term does not depend on temperature and can therefore either be absorbed in the definition of the renormalized vacuum mass in the CO scheme, or it is completely cancelled by a counter term in the CT scheme. This argument is correct to one-loop order in perturbation theory, since then the mass $`M`$ in this term is simply the bare mass and independent of temperature. However, in a self-consistent approximation scheme, like the Hartree approximation, this argument is incorrect, since the mass $`M`$ is the resummed mass, which becomes a function of the temperature through the self-consistent solution of the gap equation. Therefore, removing the divergence in either the CO or CT scheme may leave a finite, temperature-dependent contribution. Another way of stating this fact is that, as mentioned in the introduction, the renormalization constants may have to be chosen such that they depend on properties of the medium, like the temperature.
### A CO scheme
The simplest way to regularize the divergent integral is to introduce a three-dimensional ultraviolet momentum cutoff, $`\mathrm{\Lambda }`$. Computing the divergent integral then proceeds as follows,
$$Q_\mathrm{\Lambda }(M)\equiv \int ^\mathrm{\Lambda }\frac{d^3𝐤}{(2\pi )^3}\frac{1}{2ϵ_𝐤(M)}=\frac{1}{4\pi ^2}\int _0^\mathrm{\Lambda }𝑑k\frac{k^2}{ϵ_𝐤(M)}=\frac{1}{8\pi ^2}\left[\mathrm{\Lambda }ϵ_\mathrm{\Lambda }(M)-M^2\mathrm{ln}\frac{\mathrm{\Lambda }+ϵ_\mathrm{\Lambda }(M)}{M}\right].$$
(66)
In the limit $`\mathrm{\Lambda }\to \mathrm{\infty }`$, this yields
$$Q(M,0)=\underset{\mathrm{\Lambda }\to \mathrm{\infty }}{lim}Q_\mathrm{\Lambda }(M)=\int \frac{d^3𝐤}{(2\pi )^3}\frac{1}{2ϵ_𝐤(M)}=I_1-M^2I_2+\frac{M^2}{16\pi ^2}\mathrm{ln}\frac{M^2}{\mu ^2},$$
(67)
where we have introduced a renormalization scale, $`\mu `$, and defined
$`I_1`$ $`\equiv `$ $`\underset{\mathrm{\Lambda }\to \mathrm{\infty }}{lim}{\displaystyle \frac{\mathrm{\Lambda }^2}{8\pi ^2}},`$ (69)
$`I_2`$ $`\equiv `$ $`\underset{\mathrm{\Lambda }\to \mathrm{\infty }}{lim}{\displaystyle \frac{1}{16\pi ^2}}\mathrm{ln}{\displaystyle \frac{4\mathrm{\Lambda }^2}{\mu ^2}}.`$ (70)
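For later use in the CO scheme, the cut-off tadpole of eq. (66) can be coded directly; the small helper below is ours and displays the quadratic large-$`\mathrm{\Lambda }`$ growth captured by $`I_1`$.

```python
# Vacuum tadpole with a 3-momentum cut-off, eq. (66); helper (ours) for the CO scheme.
import numpy as np

def Q_Lambda(M, Lam):
    """(1/(8 pi^2)) [ Lam*eps_Lam - M^2 ln((Lam+eps_Lam)/M) ];  Q_Lambda(0) = Lam^2/(8 pi^2)."""
    if M == 0.0:
        return Lam**2 / (8.0 * np.pi**2)
    eps_L = np.hypot(Lam, M)
    return (Lam * eps_L - M**2 * np.log((Lam + eps_L) / M)) / (8.0 * np.pi**2)

# leading large-Lambda behaviour is the quadratic divergence I_1 = Lam^2/(8 pi^2), eq. (69)
print(Q_Lambda(139.0, 1000.0), 1000.0**2 / (8.0 * np.pi**2))
```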
The renormalization is carried out by introducing new parameters
$`{\displaystyle \frac{m_R^2}{\lambda _R}}`$ $`=`$ $`{\displaystyle \frac{m^2}{\lambda }}+{\displaystyle \frac{4(N+2)}{N}}I_1,`$ (72)
$`{\displaystyle \frac{1}{\lambda _R}}`$ $`=`$ $`{\displaystyle \frac{1}{\lambda }}+{\displaystyle \frac{4(N+2)}{N}}I_2,`$ (73)
where $`m_R^2`$ and $`\lambda _R`$ are the finite, renormalized mass and coupling constant.
#### 1 Hartree approximation
In the Hartree approximation, this leads to the following renormalized gap equations for the sigma and pion masses:
$`M_\sigma ^2`$ $`=`$ $`m_R^2+{\displaystyle \frac{4\lambda _R}{N}}{\displaystyle \frac{N+2}{N}}\left[\phi ^2+P(M_\sigma ,T)+(N1)P(M_\pi ,T)\right]`$ (75)
$``$ $`{\displaystyle \frac{2\lambda }{N\lambda _R}}\left\{M_\sigma ^2m_R^2{\displaystyle \frac{4\lambda _R}{N}}(N+2)\left[\phi ^2+P(M_\sigma ,T)\right]\right\},`$ (76)
$`M_\pi ^2`$ $`=`$ $`m_R^2+{\displaystyle \frac{4\lambda _R}{N}}{\displaystyle \frac{N+2}{N}}\left[\phi ^2+P(M_\sigma ,T)+(N1)P(M_\pi ,T)\right]`$ (77)
$``$ $`{\displaystyle \frac{2\lambda }{N\lambda _R}}\left\{M_\pi ^2m_R^2{\displaystyle \frac{4\lambda _R}{N}}(N+2)P(M_\pi ,T)\right\},`$ (78)
where the function $`P(M,T)`$ is defined as
$$P(M,T)=\frac{M^2}{16\pi ^2}\mathrm{ln}\frac{M^2}{\mu ^2}+\int \frac{d^3𝐤}{(2\pi )^3}\frac{1}{ϵ_𝐤(M)}\frac{1}{\mathrm{exp}[ϵ_𝐤(M)/T]-1}.$$
(79)
Equations (76) and (78) are equivalent to eqs. (13), (14) in after the replacements $`\lambda \lambda /6,\varphi ^2N\varphi ^2,P(M,T)P_f[M]`$. \[Note that the terms $`M_\sigma ^2M_\pi ^2`$ in eqs. (13), (14) of can be eliminated by taking the difference of eqs. (13) and (14).\]
In the limit $`\mathrm{\Lambda }\to \mathrm{\infty }`$, $`\lambda \to 0^{-}`$ in order to have a finite $`\lambda _R`$, and the (bare) theory becomes unstable (see also Ref. ). Also, the (renormalized) masses obey $`M_\sigma ^2=M_\pi ^2`$, cf. (76), (78), which is undesirable. It would imply that chiral symmetry is unbroken, even when $`\phi \ne 0`$. This problem was also addressed by the authors of . On the other hand, for $`0<\lambda <\mathrm{\infty }`$, $`\lambda _R\to 0^+`$ in the limit $`\mathrm{\Lambda }\to \mathrm{\infty }`$, indicating that the (renormalized) theory becomes trivial .
Therefore, in the CO scheme the gap equations can only be meaningfully studied for finite $`\mathrm{\Lambda }`$. In this case, in the original gap equations (51), (52) we replace
$$Q(M,T)\to Q_\mathrm{\Lambda }(M)+Q_T(M),$$
(80)
where
$$Q_T(M)\equiv Q(M,T)-Q(M,0)=\int \frac{d^3𝐤}{(2\pi )^3}\frac{1}{ϵ_𝐤(M)}\frac{1}{\mathrm{exp}[ϵ_𝐤(M)/T]-1}.$$
(81)
The integral $`Q_T(M)`$ is UV-finite and does not require the introduction of a momentum cut-off. Consequently, the gap equations read
$`M_\sigma ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left\{3\phi ^2+3\left[Q_T(M_\sigma )+Q_\mathrm{\Lambda }(M_\sigma )\right]+(N-1)\left[Q_T(M_\pi )+Q_\mathrm{\Lambda }(M_\pi )\right]\right\},`$ (83)
$`M_\pi ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left\{\phi ^2+\left[Q_T(M_\sigma )+Q_\mathrm{\Lambda }(M_\sigma )\right]+(N+1)\left[Q_T(M_\pi )+Q_\mathrm{\Lambda }(M_\pi )\right]\right\}.`$ (84)
The cut-off $`\mathrm{\Lambda }`$ has to be determined from the values of $`M_\sigma `$ and $`M_\pi `$ at $`T=0`$:
$`m_\sigma ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left[3f_\pi ^2+3Q_\mathrm{\Lambda }(m_\sigma )+(N-1)Q_\mathrm{\Lambda }(m_\pi )\right],`$ (86)
$`m_\pi ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left[f_\pi ^2+Q_\mathrm{\Lambda }(m_\sigma )+(N+1)Q_\mathrm{\Lambda }(m_\pi )\right],`$ (87)
where we have used $`\phi (T=0)=f_\pi `$. In the chiral limit ($`H=m_\pi =0`$), the difference of (86) and (87) reads
$$m_\sigma ^2=\frac{8\lambda }{N}\left[f_\pi ^2+Q_\mathrm{\Lambda }(m_\sigma )-Q_\mathrm{\Lambda }(0)\right].$$
(88)
However, from the stationarity condition (56) we conclude that $`m_\sigma ^2=8\lambda f_\pi ^2/N`$ in the chiral limit. This immediately leads to
$$Q_\mathrm{\Lambda }(m_\sigma )=Q_\mathrm{\Lambda }(0),$$
(89)
which for finite $`m_\sigma `$ can only be fulfilled if $`\mathrm{\Lambda }=0`$. This, however, is exactly the case treated in , without renormalization.
In conclusion, the CO scheme fails to provide a consistent renormalization of infinities in the phase of broken chiral symmetry in the Hartree approximation when $`H=m_\pi =0`$. Note that the same conclusion can be reached with a four-dimensional momentum cut-off. This failure can be traced to the fact that in the Hartree approximation diagrams of the type shown in Fig. 2 are not included (cf. the perturbative renormalization of the linear sigma model , see also the discussion in ).
Due to this failure, no results will be shown for the Hartree approximation with CO renormalization. However, we note that the case of explicitly broken symmetry, $`H0,m_\pi >0`$, is free of this problem. Then, the difference of eqs. (86) and (87) determines the coupling constant as
$$\lambda \equiv \lambda (\mathrm{\Lambda })=\frac{N}{8}\frac{m_\sigma ^2-m_\pi ^2}{f_\pi ^2+Q_\mathrm{\Lambda }(m_\sigma )-Q_\mathrm{\Lambda }(m_\pi )}.$$
(90)
The mass parameter is given by
$$m^2=-\frac{m_\sigma ^2-3m_\pi ^2}{2}-\frac{4\lambda }{N}(N+2)Q_\mathrm{\Lambda }(m_\pi ).$$
(91)
$`H`$ is determined from (56) to be $`H=f_\pi [m_\sigma ^2-8\lambda (\mathrm{\Lambda })f_\pi ^2/N]`$.
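A minimal sketch (ours) of this parameter fixing for explicitly broken symmetry; it assumes the sign conventions restored in eqs. (90), (91) and reuses the helper `Q_Lambda` defined above.

```python
# CO-scheme parameter fixing with explicit symmetry breaking (ours, illustrative).
import numpy as np

N, m_sigma, m_pi, f_pi = 4, 600.0, 139.0, 93.0   # MeV

def parameters_CO(Lam):
    lam = (N / 8.0) * (m_sigma**2 - m_pi**2) / (
        f_pi**2 + Q_Lambda(m_sigma, Lam) - Q_Lambda(m_pi, Lam))                      # eq. (90)
    m2 = -(m_sigma**2 - 3.0*m_pi**2)/2.0 - (4.0*lam/N)*(N + 2)*Q_Lambda(m_pi, Lam)   # eq. (91)
    H  = f_pi * (m_sigma**2 - 8.0 * lam * f_pi**2 / N)                               # from eq. (56)
    return lam, m2, H

for Lam in (300.0, 600.0, 1000.0):
    print(Lam, parameters_CO(Lam))
```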
#### 2 Large-$`N`$ approximation
In the large-$`N`$ limit, the renormalized gap equation for the pion mass reads
$$M_\pi ^2=m_R^2+\frac{4\lambda _R}{N}\left[\phi ^2+NP(M_\pi ,T)\right],$$
(92)
while $`M_\sigma ^2`$ is still given by (64). This again has the consequence that $`M_\sigma =M_\pi `$ in the limit $`\mathrm{\Lambda }\to \mathrm{\infty }`$, i.e., $`\lambda \to 0^{-}`$, which as discussed above is an unwanted feature. On the other hand, there is no inconsistency in the large-$`N`$ approximation for finite $`\mathrm{\Lambda }`$. $`\mathrm{\Lambda }`$ is a free parameter and the gap equations to be solved are (64) for the sigma mass and
$$M_\pi ^2=m^2+\frac{4\lambda }{N}\left\{\phi ^2+N\left[Q_T(M_\pi )+Q_\mathrm{\Lambda }(M_\pi )\right]\right\}$$
(93)
for the pion mass. The parameters are again determined from $`M_\sigma (T=0)=m_\sigma `$, $`M_\pi (T=0)=m_\pi `$, and $`\varphi (T=0)=f_\pi `$. From these conditions we derive that the coupling constant is still given by its tree-level value, eq. (37), but $`m^2`$ is now determined from
$$m^2=-\frac{m_\sigma ^2-3m_\pi ^2}{2}-4\lambda Q_\mathrm{\Lambda }(m_\pi ).$$
(94)
$`H`$ retains its tree-level value on account of (65).
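The large-$`N`$ gap equations in the CO scheme close on themselves and can be solved by damped fixed-point iteration. The sketch below is ours; it reuses `Q_T` and `Q_Lambda` from the snippets above and starts from the vacuum solution, which eqs. (93)–(94) reproduce exactly at $`T=0`$.

```python
# Damped fixed-point solution (ours) of the large-N CO gap equations at fixed T:
# eq. (93) for M_pi, eq. (65) for phi, eq. (64) for M_sigma; m^2 from eq. (94).  All in MeV.
import numpy as np

N, m_sigma, m_pi, f_pi = 4, 600.0, 139.0, 93.0
lam = N * (m_sigma**2 - m_pi**2) / (8.0 * f_pi**2)
H   = m_pi**2 * f_pi

def solve_largeN_CO(T, Lam, n_iter=400, mix=0.3):
    m2 = -(m_sigma**2 - 3.0*m_pi**2)/2.0 - 4.0*lam*Q_Lambda(m_pi, Lam)   # eq. (94)
    Mpi, phi = m_pi, f_pi                              # vacuum solution as starting point
    for _ in range(n_iter):
        phi = H / Mpi**2                               # eq. (65)
        rhs = m2 + (4.0*lam/N)*phi**2 + 4.0*lam*(Q_T(Mpi, T) + Q_Lambda(Mpi, Lam))
        Mpi = (1.0 - mix)*Mpi + mix*np.sqrt(max(rhs, 1.0))   # damped update of eq. (93)
    M_sig = np.sqrt(Mpi**2 + 8.0*lam*phi**2/N)               # eq. (64)
    return Mpi, M_sig, phi

for T in (1.0, 100.0, 200.0, 300.0):
    print(T, solve_largeN_CO(T, Lam=600.0))
```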
### B CT scheme
In the CT scheme, counter terms are introduced to subtract the UV divergences in $`Q(M,T)`$. Rewrite
$$\int \frac{d^3𝐤}{(2\pi )^3}\frac{1}{2ϵ_𝐤(M)}=\int \frac{d^4k}{(2\pi )^4}\frac{1}{k^2+M^2},$$
(95)
where $`k_0𝐑`$ with $`k^2=k_0^2+𝐤^2`$, $`d^4k=d^3𝐤dk_0`$. To determine the counter terms, expand the integrand in a Taylor series around $`M^2=\mu ^2`$, where $`\mu `$ is the renormalization scale.
$$\frac{1}{k^2+M^2}=\frac{1}{k^2+\mu ^2}\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\left(\frac{\mu ^2-M^2}{k^2+\mu ^2}\right)^n.$$
(96)
The $`d^4k`$ integral over the $`n=0`$ term in this expansion is quadratically divergent, while the integral over the $`n=1`$ term diverges logarithmically. The counter terms are chosen to remove these two terms, such that the renormalized result for the divergent integral is
$$\int \frac{d^4k}{(2\pi )^4}\left[\frac{1}{k^2+M^2}-\frac{1}{k^2+\mu ^2}-\frac{\mu ^2-M^2}{(k^2+\mu ^2)^2}\right]=\underset{n=2}{\overset{\mathrm{\infty }}{\sum }}(\mu ^2-M^2)^n\int \frac{d^4k}{(2\pi )^4}\frac{1}{(k^2+\mu ^2)^{n+1}}.$$
(97)
Note that the second counter term depends on the temperature through $`M`$. This fact represents the aforementioned possibility of having temperature-dependent counter terms in self-consistent approximation schemes, and was already discussed by the authors of . They also pointed out that this problem does not occur in less than three spatial dimensions. This is obvious from eq. (97), because then the second counter term is finite, and thus not required. In contrast, either in ordinary perturbation theory or in optimized perturbation theory renormalization at $`T=0`$ is sufficient to remove all divergences.
The last integral in (97) is finite and equal to
$$\int \frac{d^4k}{(2\pi )^4}\frac{1}{(k^2+\mu ^2)^{n+1}}=\frac{1}{(4\pi )^2}\frac{\mu ^{2(1-n)}}{n(n-1)}.$$
(98)
Expression (97) can be rearranged to give the final result
$$\int \frac{d^4k}{(2\pi )^4}\left[\frac{1}{k^2+M^2}-\frac{1}{k^2+\mu ^2}-\frac{\mu ^2-M^2}{(k^2+\mu ^2)^2}\right]=\frac{1}{(4\pi )^2}\left[M^2\mathrm{ln}\frac{M^2}{\mu ^2}-M^2+\mu ^2\right].$$
(99)
To obtain the renormalized gap equations, simply replace $`Q(M,T)`$ as given in (53) by
$$Q(M,T)=Q_T(M)+Q_\mu (M),$$
(100)
where
$$Q_\mu (M)\equiv \frac{1}{(4\pi )^2}\left[M^2\mathrm{ln}\frac{M^2}{\mu ^2}-M^2+\mu ^2\right].$$
(101)
The renormalization scale $`\mu `$ is chosen to give the correct values for sigma and pion mass at $`T=0`$.
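The vacuum piece $`Q_\mu `$ of eq. (101) is a one-line helper (ours), analogous to `Q_Lambda` in the CO scheme.

```python
# Counter-term vacuum piece, eq. (101); helper (ours).
import numpy as np

def Q_mu(M, mu):
    """(1/(16 pi^2)) [ M^2 ln(M^2/mu^2) - M^2 + mu^2 ];  Q_mu(0, mu) = mu^2/(16 pi^2)."""
    if M == 0.0:
        return mu**2 / (16.0 * np.pi**2)
    return (M**2 * np.log(M**2 / mu**2) - M**2 + mu**2) / (16.0 * np.pi**2)
```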
As an alternative to the above procedure, one can also compute (95) in dimensional regularization, i.e., in $`d`$ space-time dimensions, where the coupling constant $`g`$ is replaced by $`g\stackrel{~}{\mu }^ϵ`$. Here, $`\stackrel{~}{\mu }`$ is the renormalization scale in dimensional regularization and $`ϵ4d`$. In order to obtain (99), one has to add a counter term $`M^2/(8\pi ^2ϵ)+\mu ^2/(16\pi ^2)`$. Here, $`\mu `$ is the renormalization scale from the previous treatment and related to $`\stackrel{~}{\mu }`$ by $`\mu ^24\pi e^\gamma \stackrel{~}{\mu }^2`$, where $`\gamma `$ is the Euler-Mascheroni constant. Note again, that the counter term depends implicitly on the temperature through the resummed mass $`M`$. In the Appendix, we furthermore show that the CO and CT schemes are equivalent for unbroken $`O(N)`$ symmetry.
#### 1 Hartree approximation
In the Hartree approximation, the gap equations read
$`M_\sigma ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left\{3\phi ^2+3\left[Q_T(M_\sigma )+Q_\mu (M_\sigma )\right]+(N-1)\left[Q_T(M_\pi )+Q_\mu (M_\pi )\right]\right\},`$ (103)
$`M_\pi ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left\{\phi ^2+\left[Q_T(M_\sigma )+Q_\mu (M_\sigma )\right]+(N+1)\left[Q_T(M_\pi )+Q_\mu (M_\pi )\right]\right\}.`$ (104)
The renormalization scale $`\mu `$ is determined from the vacuum values for the sigma and pion masses:
$`m_\sigma ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left[3f_\pi ^2+3Q_\mu (m_\sigma )+(N-1)Q_\mu (m_\pi )\right],`$ (106)
$`m_\pi ^2`$ $`=`$ $`m^2+{\displaystyle \frac{4\lambda }{N}}\left[f_\pi ^2+Q_\mu (m_\sigma )+(N+1)Q_\mu (m_\pi )\right].`$ (107)
In the chiral limit, the difference of these two equations reads
$$m_\sigma ^2=\frac{8\lambda }{N}\left[f_\pi ^2+\frac{m_\sigma ^2}{16\pi ^2}\mathrm{ln}\frac{m_\sigma ^2}{\mu ^2e}\right].$$
(108)
However, in order to be consistent with the (generally valid) equation (56), there is only a single choice for the renormalization scale, $`\mu ^2=m_\sigma ^2/e`$. Then, the coupling constant is given by its classical value, $`\lambda =Nm_\sigma ^2/(8f_\pi ^2)`$, while
$$m^2=-\frac{m_\sigma ^2}{2}-\frac{4\lambda }{N}(N+2)\frac{\mu ^2}{16\pi ^2}.$$
(109)
In the case that chiral symmetry is explicitly broken, the difference of (106) and (107) yields the following equation for the coupling constant:
$$\lambda =\frac{N}{8}\frac{m_\sigma ^2-m_\pi ^2}{f_\pi ^2+\left[m_\sigma ^2\mathrm{ln}(m_\sigma ^2/\mu ^2e)-m_\pi ^2\mathrm{ln}(m_\pi ^2/\mu ^2e)\right]/16\pi ^2}\equiv \lambda (\mu ),$$
(110)
i.e., $`\lambda `$ runs with the renormalization scale. However, there is one value for the renormalization scale, where $`\lambda `$ retains its tree-level (i.e. classical) value,
$$\mu ^2\equiv \mu _{\mathrm{cl}}^2=\mathrm{exp}\left[\frac{m_\sigma ^2\left(\mathrm{ln}m_\sigma ^2-1\right)-m_\pi ^2\left(\mathrm{ln}m_\pi ^2-1\right)}{m_\sigma ^2-m_\pi ^2}\right].$$
(111)
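Evaluating eq. (111) with the physical masses is a two-line check (ours, with masses in implicit MeV units as in the text) and gives a scale of roughly 400 MeV.

```python
# Evaluation (ours) of eq. (111); masses in implicit MeV units.
import numpy as np

m_sigma, m_pi = 600.0, 139.0
ln_mu2_cl = (m_sigma**2 * (np.log(m_sigma**2) - 1.0)
             - m_pi**2 * (np.log(m_pi**2) - 1.0)) / (m_sigma**2 - m_pi**2)
mu_cl = np.exp(0.5 * ln_mu2_cl)
print(mu_cl)    # roughly 400 MeV for the quoted masses
```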
The results for the Hartree case with explicitly broken symmetry presented in the next section will exclusively employ this value of $`\mu `$. The mass parameter is determined from
$$m^2=-\frac{m_\sigma ^2-3m_\pi ^2}{2}-\frac{4\lambda }{N}(N+2)Q_\mu (m_\pi ).$$
(112)
$`H`$ can be obtained from (65) at $`T=0`$.
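Putting the pieces together, the Hartree gap equations in the CT scheme can be solved with a standard root finder. The sketch below is ours; it reuses `Q_T`, `Q_mu` and `mu_cl` from the snippets above, assumes scipy, and uses the vacuum values as the starting guess, which solve the system exactly at $`T=0`$.

```python
# Self-consistent Hartree/CT solution at mu = mu_cl (ours, illustrative); masses in MeV.
import numpy as np
from scipy.optimize import fsolve

N, m_sigma, m_pi, f_pi = 4, 600.0, 139.0, 93.0
lam = N * (m_sigma**2 - m_pi**2) / (8.0 * f_pi**2)
H   = m_pi**2 * f_pi
mu  = mu_cl                                            # from the previous snippet
m2  = -(m_sigma**2 - 3.0*m_pi**2)/2.0 - (4.0*lam/N)*(N + 2)*Q_mu(m_pi, mu)   # eq. (112)

def residuals(x, T):
    Msig, Mpi, phi = x
    Qs = Q_T(Msig, T) + Q_mu(Msig, mu)
    Qp = Q_T(Mpi,  T) + Q_mu(Mpi,  mu)
    f1 = Msig**2 - m2 - (4*lam/N) * (3*phi**2 + 3*Qs + (N - 1)*Qp)   # eq. (103)
    f2 = Mpi**2  - m2 - (4*lam/N) * (phi**2 + Qs + (N + 1)*Qp)       # eq. (104)
    f3 = H - phi * (Msig**2 - 8*lam*phi**2/N)                        # eq. (56)
    return [f1, f2, f3]

x = np.array([m_sigma, m_pi, f_pi])        # vacuum values as starting guess
for T in (10.0, 100.0, 150.0, 200.0):
    x = fsolve(residuals, x, args=(T,))
    print(T, x)
```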
#### 2 Large-$`N`$ approximation
In the large-$`N`$ limit, the gap equations to be solved are (64) for the sigma meson and
$$M_\pi ^2=m^2+\frac{4\lambda }{N}\left\{\phi ^2+N\left[Q_T(M_\pi )+Q_\mu (M_\pi )\right]\right\}$$
(113)
for the pion. In this case, $`\mu `$ is a free parameter, and cannot be fixed by the vacuum values for the sigma and pion masses. $`\lambda `$ and $`H`$ are always given by their tree-level values. The mass parameter is determined from
$$m^2=-\frac{m_\sigma ^2-3m_\pi ^2}{2}-4\lambda Q_\mu (m_\pi ).$$
(114)
## VI Results
In this section, we discuss numerical solutions of the gap equations for the meson masses and the stationarity condition on $`\phi `$. Three different cases are considered: the large-$`N`$ approximation in (a) the CO scheme, (b) the CT scheme, and (c) the Hartree approximation in the CT scheme. The Hartree approximation in the CO scheme will not be discussed, due to the problems exhibited in section V. We focus separately on the cases $`m_\pi =0`$ and $`m_\pi >0`$.
### A $`m_\pi =0`$
Figures 3 (a,c,e) show the meson masses and (b,d,f) $`\phi `$ as functions of temperature. Results for the large-$`N`$ approximation with CO renormalization are shown in parts (a,b), and with CT renormalization in (c,d). Results for the Hartree approximation with CT renormalization are shown in (e,f). For comparison, the dashed lines in each figure correspond to the unrenormalized results of .
In Figs. 3 (a,b), in the phase of spontaneously broken symmetry, there is no difference between the unrenormalized and renormalized cases. To understand this, first remember that $`M_\pi =0`$, cf. the discussion following eq. (65). Therefore, on account of (64), $`M_\sigma `$ is simply given by $`(8\lambda /N)^{1/2}\phi `$. In turn, $`\phi `$ is determined by (63). However, for $`M_\pi =0`$, this has the simple form
$$0=m^2+\frac{4\lambda }{N}\phi ^2+4\lambda \left[\frac{T^2}{12}+Q_\mathrm{\Lambda }(0)\right].$$
(115)
Using (94) for $`m_\pi =0`$, this becomes
$$0=-\frac{m_\sigma ^2}{2}+\frac{4\lambda }{N}\phi ^2+4\lambda \frac{T^2}{12},$$
(116)
which is the same condition as in the unrenormalized case (where $`Q_\mathrm{\Lambda }`$ is absent). Since the coupling constant is given by its tree-level value (37), this immediately leads to the conclusion that the temperature for chiral symmetry restoration is
$$T^{*}=\sqrt{3}f_\pi .$$
(117)
In the restored phase, $`\phi =0`$, sigma and pion masses are equal, and given by eq. (61) or (62). These equations are cut-off dependent, on account of (80). The mass is decreasing for increasing $`\mathrm{\Lambda }`$.
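In this chiral-limit, large-$`N`$ case the condensate has the closed form $`\phi ^2(T)=f_\pi ^2-NT^2/12`$, which follows from eq. (116) with $`\lambda =Nm_\sigma ^2/(8f_\pi ^2)`$; a short sketch (ours, $`N=4`$):

```python
# Chiral-limit large-N condensate from eq. (116) (ours): phi^2 = f_pi^2 - N T^2/12.
import numpy as np

N, f_pi = 4, 93.0
T = np.linspace(0.0, 200.0, 9)
phi = np.sqrt(np.clip(f_pi**2 - N * T**2 / 12.0, 0.0, None))
for Ti, ph in zip(T, phi):
    print(f"T = {Ti:6.1f} MeV   phi = {ph:6.1f} MeV")
print("T* =", np.sqrt(3.0) * f_pi, "MeV")   # ~161 MeV, eq. (117)
```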
In Figs. 3 (c,d), we show results for the large-$`N`$ approximation in the CT scheme. In the broken phase, $`\phi >0`$, renormalization again does not affect the masses or $`\phi `$. In the phase of restored symmetry, $`\phi =0`$, the sigma and pion masses are degenerate, but depend on the renormalization scale. They decrease for increasing $`\mu `$. Note the similarity between the masses in the CO and the CT scheme when choosing the same value for the cut-off $`\mathrm{\Lambda }`$ and the renormalization scale $`\mu `$. Considering that both renormalization schemes are fundamentally different, this similarity is quite surprising. Another important conclusion is that renormalization of the gap equations does not destroy the second-order nature of the transition.
The results in the Hartree approximation, eqs. (51) – (56), are displayed in Figs. 3 (e,f). As in the unrenormalized case, we obtain a first order transition, with a transition temperature that appears to be slightly higher than in the unrenormalized case. To determine this temperature, however, one would have to analyze the shape of the effective potential, which is outside the scope of this paper.
### B $`m_\pi =139`$ MeV
Fig. 4 (a,c,e) shows the temperature dependence of the meson masses and Fig. 4 (b,d,f) the function $`\phi (T)`$ in the case of explicit symmetry breaking. As in Fig. 3, large-$`N`$ results are shown in (a,b) for the CO scheme and in (c,d) for the CT scheme. Parts (e,f) show our results for the Hartree approximation with CT renormalization. As already observed in the chiral limit, there is a striking similarity between the results in the CO and the CT schemes when choosing $`\mathrm{\Lambda }=\mu `$. Also, increasing the cut-off or the renormalization scale tends to increase the temperature at which (approximate) symmetry restoration takes place.
Baym and Grinstein noted that the additional terms originating from renormalization have the effect that the gap equations do not have a solution beyond a certain temperature (see also ). We found evidence for this in the CT scheme at temperatures above 400 MeV. In the CO scheme with a finite $`\mathrm{\Lambda }`$, this phenomenon does not occur.
## VII Conclusions and Outlook
In this paper, we have studied the temperature dependence of sigma and pion masses in the framework of the $`O(N)`$ model. The Cornwall–Jackiw–Tomboulis formalism was applied to derive gap equations for the masses in the Hartree and large-$`N`$ approximations. Renormalization of the gap equations was carried out within the cut-off and counter-term renormalization schemes. In agreement with , it was found that the cut-off scheme is flawed when the cut-off $`\mathrm{\Lambda }\mathrm{}`$. We therefore studied this renormalization scheme for $`\mathrm{\Lambda }<\mathrm{}`$. For the Hartree approximation we found that, in the chiral limit ($`m_\pi =0`$), there is no finite value for the cut-off, which is consistent with the set of stationarity conditions for the effective potential; $`\mathrm{\Lambda }=0`$ is the only possible choice. This problem was not encountered in the large-$`N`$ approximation; here any choice for $`\mathrm{\Lambda }`$ is possible. In the counter-term renormalization scheme, the Hartree approximation can be consistently renormalized, but in the chiral limit, the renormalization scale is restricted to a unique value in order to achieve consistency with the stationarity conditions for the effective potential. In the large-$`N`$ limit, the renormalization scale can be chosen arbitrarily. Changing the cut-off in the cut-off scheme or the renormalization scale in the counter-term scheme changes the meson masses at a given temperature. The reason is that, in the self-consistent approximation schemes considered here, the renormalization constants (or counter terms, respectively) may depend implicitly on temperature. This does not occur when renormalizing ordinary perturbation theory.
Our results can be compared to those of Roh and Matsui and Chiku and Hatsuda . The authors of computed the sigma and pion masses from the second derivative of an effective potential which was determined via the standard loop expansion approach. Being aware that this approach fails for theories with spontaneously broken symmetry, they corrected the resulting expressions to obtain gap equations which look similar to the ones in the Hartree approximation. (They are identical to Baym and Grinstein’s modified Hartree approximation ). The stationarity condition for $`\phi `$, however, was taken to be the same as in the large-$`N`$ approximation. Thus, their solutions respect Goldstone’s theorem in the phase of spontaneously broken symmetry, similar to the large-$`N`$ approximation discussed here, while the transition in their model is first order (in the chiral limit), like in the Hartree approximation.
The authors of employ optimized perturbation theory to compute the sigma and pion masses. This approach has the advantage that renormalization is straightforward. The results are similar to those of .
Recent dilepton experiments at CERN-SPS energies have generated interest in medium modifications of meson properties such as their mass and decay width. In general, the meson mass (squared) is given by the inverse propagator at $`k=0`$, $`M^2𝒢^1(0)`$. The decay width of a particle with energy $`\omega `$ at rest, $`𝐤=0`$, is given by $`\gamma (\omega )\mathrm{Im}\mathrm{\Pi }(\omega ,\mathrm{𝟎})/\omega `$ , where $`\mathrm{\Pi }(\omega ,𝐤)`$ is the self energy. In the CJT formalism, $`𝒢^1(k)=D^1(k;\phi )+\mathrm{\Pi }(k)`$, cf. eq. (25). In the Hartree or large-$`N`$ approximation studied here, the self energies do not acquire an imaginary part, because they are simply constants, and thus only shift the mass of the particles. Therefore, in these approximations, the particles are true quasi-particles with vanishing decay width. This would change if we included the diagrams of Fig. 2 in the effective potential, because, as is well-known , the imaginary part of these diagrams corresponds to decay and scattering processes.
To include these diagrams in the above treatment, however, is prohibitively difficult, because then the simple momentum dependence of the propagators $`𝒢_{\sigma ,\pi }(k)`$ in eq. (49) changes, since the self energies become explicitly momentum dependent. Then, instead of simple gap equations for the meson masses, the stationarity condition (24) becomes an (infinite) set of coupled integral equations for the propagators $`𝒢_{\sigma ,\pi }(k)`$.
Therefore, as a first approximation, we compute the decay widths from the self energies corresponding to these diagrams, but with internal lines given by the Hartree or large-$`N`$ propagators (49). This is equivalent to computing the decay width to one-loop order in perturbation theory, but taking the medium-modified masses of the particles computed above instead of the vacuum masses. The on-shell decay width of $`\sigma `$ and $`\pi `$ mesons at rest is then given by the following expressions , valid for $`2M_\pi M_\sigma `$:
$`\gamma _\sigma `$ $`=`$ $`\left({\displaystyle \frac{4\lambda \varphi }{N}}\right)^2{\displaystyle \frac{N-1}{16\pi M_\sigma }}\sqrt{1-{\displaystyle \frac{4M_\pi ^2}{M_\sigma ^2}}}\mathrm{coth}{\displaystyle \frac{M_\sigma }{4T}},`$ (119)
$`\gamma _\pi `$ $`=`$ $`\left({\displaystyle \frac{4\lambda \varphi }{N}}\right)^2{\displaystyle \frac{M_\sigma ^2}{8\pi M_\pi ^3}}\sqrt{1-{\displaystyle \frac{4M_\pi ^2}{M_\sigma ^2}}}{\displaystyle \frac{1-\mathrm{exp}[-M_\pi /T]}{1-\mathrm{exp}[-M_\sigma ^2/2m_\pi T]}}{\displaystyle \frac{1}{\mathrm{exp}[(M_\sigma ^2-2M_\pi ^2)/2M_\pi T]-1}}.`$ (120)
These quantities are shown in Fig. 5 for $`m_\pi =0`$, and in Fig. 6 for $`m_\pi =139`$ MeV, for the cases discussed in Figs. 3 and 4. For $`m_\pi =0`$ and in the large-$`N`$ approximation, pions are true Goldstone bosons, and therefore their decay width vanishes below the temperature corresponding to chiral symmetry restoration, see Figs. 5 (b,d). This is different in the Hartree approximation, where Goldstone’s theorem is violated, cf. Fig. 5 (f), and when chiral symmetry is explicitly broken, Figs. 6 (b,d,f). The reason is that, because pions have a finite mass, they can acquire a finite decay width on account of the absorption processes $`\pi \sigma \pi `$ and $`\pi \pi \sigma `$. For massless particles, these processes are kinematically forbidden.
Sigma mesons, however, can always decay into two pions, and therefore acquire a large decay width, cf. Figs. 5 and 6 (a,c,e). All decay widths vanish above the temperature where $`M_\sigma `$ becomes smaller than $`2M_\pi `$. This, however, is an artefact of the one-loop approximation. In two-loop order, the scattering processes $`\sigma \sigma \sigma \sigma ,\sigma \sigma \pi \pi ,\sigma \pi \sigma \pi `$, and $`\pi \pi \pi \pi `$ lead to a finite decay width for all particles even above this threshold.
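For illustration, eq. (119) can be evaluated directly once the in-medium masses and condensate are known. The sketch below is ours, uses the sign conventions restored above, and takes illustrative vacuum-like inputs rather than the actual gap-equation output.

```python
# One-loop sigma width, eq. (119) (ours, illustrative inputs).
import numpy as np

N, f_pi = 4, 93.0
lam = N * (600.0**2 - 139.0**2) / (8.0 * f_pi**2)

def gamma_sigma(M_sig, M_pi, phi, T):
    if M_sig <= 2.0 * M_pi:
        return 0.0                          # sigma -> pi pi kinematically closed
    kin = np.sqrt(1.0 - 4.0 * M_pi**2 / M_sig**2)
    return ((4.0*lam*phi/N)**2 * (N - 1) / (16.0*np.pi*M_sig)
            * kin / np.tanh(M_sig / (4.0 * T)))        # coth = 1/tanh

print(gamma_sigma(600.0, 139.0, 93.0, 150.0))   # width in MeV
```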
The decay widths and masses computed here are relevant for the formation of disoriented chiral condensates , since they enter the evolution equations of the long-wavelength modes. This will be the subject of a subsequent investigation .
Acknowledgements
We thank T. Appelquist, J. Berges, S. Gavin, M. Gyulassy, T. Hatsuda, Y. Kluger, J. Knoll, L. McLerran, E. Mottola, B. Müller, and R. Pisarski for valuable discussions. D.H.R. thanks Columbia University’s Nuclear Theory group for continuing access to their computational facilities. J.T.L. is supported by the Director, Office of Energy Research, Division of Nuclear Physics of the Office of High Energy and Nuclear Physics of the U.S. Department of Energy under Contract No. DE-FG-02-91ER-40609.
## A Scheme Equivalence for Unbroken $`O(N)`$ Symmetry
In this Appendix, we show that the CO and the CT schemes are equivalent when the $`O(N)`$ symmetry is not broken ($`\phi =0`$). In this case, the gap equations become degenerate,
$$M^2=m^2+\frac{4\lambda }{N}(N+2)Q(M,T).$$
(A1)
(We only discuss the Hartree case here, for the large-$`N`$ approximation, simply replace $`N+2`$ by $`N`$.) In the CO scheme, using eqs. (67) and (V A), this becomes
$$M^2=m_R^2+\frac{4\lambda _R}{N}(N+2)\left[Q_T(M)+\frac{M^2}{16\pi ^2}\mathrm{ln}\frac{M^2}{\mu ^2}\right].$$
(A2)
The renormalization scale $`\mu ^2`$ can be determined from the $`T=0`$ limit of this equation. For unbroken $`O(N)`$ symmetry, $`M(T=0)=m_R`$, which then yields $`\mu =m_R`$.
On the other hand, in the CT scheme, we have
$$M^2=m_R^2+\frac{4\lambda _R}{N}(N+2)\left[Q_T(M)+\frac{M^2}{16\pi ^2}\mathrm{ln}\frac{M^2}{\mu ^2}-\frac{M^2-\mu ^2}{16\pi ^2}\right].$$
(A3)
Here, we made the finiteness of $`m`$ and $`\lambda `$ explicit by replacing them with $`m_R`$ and $`\lambda _R`$. Again, the condition $`M(T=0)=m_R`$ yields $`\mu =m_R`$.
The last term in (A3) leads to an apparent difference between the two schemes. However, shifting the coupling constant by a finite amount,
$$\frac{1}{\lambda _R}\to \frac{1}{\lambda _R}-\frac{4(N+2)}{16\pi ^2N},$$
(A4)
one obtains (A2), which proves the equivalence of both schemes after properly redefining the coupling constant. The same conclusion can be reached starting from (A1) and using instead of (V A) the modified renormalization conditions
$`m_R^2\left[{\displaystyle \frac{1}{\lambda _R}}+{\displaystyle \frac{4(N+2)}{16\pi ^2N}}\right]`$ $`=`$ $`{\displaystyle \frac{m^2}{\lambda }}+{\displaystyle \frac{4(N+2)}{N}}I_1,`$ (A6)
$`{\displaystyle \frac{1}{\lambda _R}}+{\displaystyle \frac{4(N+2)}{16\pi ^2N}}`$ $`=`$ $`{\displaystyle \frac{1}{\lambda }}+{\displaystyle \frac{4(N+2)}{N}}I_2,`$ (A7)
which then leads to (A3).
# X-ray Variability from the Compact Source in the Supernova Remnant RCW 103
## 1. Introduction
Recent spectro-imaging X-ray observations of central compact sources in supernova remnants (SNRs) challenge earlier notions that most young neutron stars (NSs) evolve in a manner similar to the prototypical Crab pulsar (Gotthelf 1998). In fact, the latest compilations shows that most of such associated objects manifest properties distinct from those of the Crab-like systems.
Based on observational grounds alone, three classes of NSs in SNRs are known whose flux is dominated by their X-ray emission; these include the X-ray pulsars with anomalously slow rotation (with periods in the range of $`512`$ s) and steep ($`\mathrm{\Gamma }\stackrel{>}{}3`$) power-law spectra (Gregory & Fahlman 1980; Gotthelf & Vasisht 1998 and refs. therein), the soft gamma-ray repeaters (Cline et al. 1982; Kulkarni et al. 1994; Vasisht et al. 1994), and a population of radio-quiet NSs in remnants (Caraveo et al. 1996; Petre et al. 1996; Mereghetti et al. 1996). The above objects are linked by their apparent radio-quiet nature, and taken collectively, may help further reconcile the NS birth rate with the observed SNR census. In this study we focus on the enigmatic X-ray source 1E 161348$``$5055 in the SNR RCW 103, for which no clear interpretation yet exists within the above taxonomic framework.
The Einstein X-ray source 1E 161348$``$5055 lies near the projected center of the bright, young ($`2\times 10^3`$ yrs; Carter et al. 1997) Galactic shell-type SNR RCW 103 (G332.4-0.4) and has been proposed as the first example of an isolated, cooling NS (Tuohy & Garmire 1980). It was discovered using the high resolution imager (HRI) but went unseen by a prior Einstein IPC observation and a subsequent ROSAT PSPC one, supposedly due to the poorer spatial resolution of these instruments. Surprisingly, an initial observation with the ROSAT HRI also failed to detect the source; this was attributed to the reduced HRI sensitivity of the 10’ off-axis pointing (Becker 1993). Finally, a 1993 ASCA observation re-discovered this elusive object (Gotthelf, Petre & Hwang 1997, GPH97 herein), but its spectral characteristics were found to be incompatible with a simple cooling NS model. This re-detection has been confirmed by more recent, on-axis, ROSAT HRI observations.
Herein, we present the results of our follow-up (Sep 1997) ASCA observation of 1E 161348$``$5055. In the same field lies the recently discovered 69 ms pulsar AX J161730-505505 (Torii et al. 1998), whose analysis is presented separately (Torii et al. 1999). While both sources are detected again, the flux from 1E 161348$``$5055 has declined significantly since the previous ASCA measurement. We discuss some implications of this large flux variability on the nature of 1E 161348$``$5055.
## 2. Observations and Analysis
A day-long follow-up observation with the ASCA Observatory (Tanaka et al. 1994) of RCW 103 was carried out on 1997 September 4. Data were acquired with both the solid-state (SISs) and gas scintillation spectrometers (GISs). The essential properties of these instruments are qualified in GPH97. The SIS data were acquired in 1-CCD BRIGHT mode with 1E 161348$``$5055 placed as close to the mean SIS telescope bore-sight as was practical, to minimize vignetting losses. The GIS data were collected in the highest time-resolution mode ($`0.5\mathrm{ms}`$ or $`0.064\mathrm{ms}`$, depending on the telemetry mode), with reduced spectral binning of $`12`$ eV per PHA channel. The effective, filtered observation time is $`58(49)`$ ks for each GIS(SIS) sensor. The new data were reduced and analyzed with the same methodology as in GPH97.
## 3. Results
We compared images of RCW 103 from our new observation with the ASCA images obtained four years earlier. The flux-corrected GIS images from the two epochs, restricted to the hard energy band-pass (3–10 keV), are displayed in figure 1 using an identical intensity scale. Both reveal a pair of distinct features, each having a spatial distribution consistent with that of a point source; one at the position of 1E 161348-5055, and the other at the position of the 69 ms X-ray pulsar AX J161730-505505 (due north); the flux of the latter has evidently remained constant (see Torii et al. 1999 for details). However, we estimate that 1E 161348-5055 has dimmed by a factor of $`12`$ in the hard band, after the diffuse flux from the remnant has been taken into account using the following method.
The soft X-ray flux (below 2 keV) is dominated by steady thermal emission from shock-heated gas in the remnant. The contribution of this component to the hard-band images is estimated using the soft-band images. The latter provides a good model for the spatial distribution of the surrounding shell on arcminute scales in the $`310`$ keV range. The soft-band contribution from the shell was renormalized to the hard-band, and subtracted from the flux calibrated hard-band image to extract the flux contribution from the source alone. For the comparison, the new data were rebinned by a factor of 4 ($`1^{}\times 1^{}`$ pixels) to match the binning used with the earlier observation. The longer exposure of the second observation results in increased sensitivity to 1E 161348$``$5055, however, this gain is offset by its location at a greater off-axis angle (as is evident by the asymmetrical PSF) relative to the first observation. An equivalent analysis of the SIS data reproduces the variability seen in the GIS hard band.
### 3.1. Spectroscopy
We analyzed the spectrum from the new observation using the same approach presented in GPH97. To maximize the sensitivity, we simultaneously fit the spectra from all four ASCA detectors. We restricted our SIS spectral fits to $`>1.2`$ keV as the calibration at the lower energies has become less reliable over time. The improved viewing geometry over the previous observation, coupled with the spectral stacking (four detectors), made it possible to measure the spectrum, despite the lower source flux (only one SIS data set was used in the earlier spectral analysis). The resulting fits to simple models (Table I) are consistent with those inferred for the first observation. Thus, while the flux has decreased by an order of magnitude, the spectrum appears essentially unchanged.
### 3.2. The Long-term Light Curve
To investigate its long-term flux behavior, we constructed a light curve of 1E 161348-5055 which spans 18 years (1979–1997), using 10 available archival observations. For each observation, we extracted background-subtracted countrates or 3$`\sigma `$ upper limits. These rates are then used to estimate the flux in a given energy band. For lack of knowledge to the contrary, we assume that the spectral shape is invariant in time and modeled by a blackbody whose parameters are given in Table 1. The best-fit power-law model is unphysical at the softer X-ray energies and thus the blackbody model is preferred for this comparison. We folded the latter model through the spectral response function of each instrument using the XSPEC spectral fitting package and inferred the source flux for each observation in a fiducial 0.5–2 keV energy band. The results, listed in Table 2 and plotted in Fig. 2, confirm a dramatic flux change between the ASCA observations, and suggest that 1E 161348-5055 is variable, to a lesser extent, amongst the other observations.
We present this source flux comparison among instruments with some caution, as these can be potentially unreliable. The different energy bands, flux calibrations, point response functions, and background contamination can produce large uncertainties in the derived fluxes. When extrapolating, the relative count rates are very sensitive to the instrumental energy band, along with the assumed $`N_H`$ and emission model. However, our fundamental result stands regardless of the aforementioned caveats: the ASCA data alone, and to a lesser extent the ROSAT data, establish the fact that 1E 161348$``$5055 varied throughout the time it has been observed.
### 3.3. The Short-term Variability
The excursions in flux noted from observations separated by months or years suggest that variability might be present on shorter time-scales. We searched the day-long ASCA observations for hour-scale temporal variability. In addition, we examined the behavior on a timescale of a few days using the August, 1995, ROSAT HRI observation, with a net exposure time of 50 ks spread over a week. In neither case did we find evidence of variability greater than the photon statistic limit of 10 percent.
We searched the new ASCA GIS data for a coherent pulsed signal, as in GPH97. No significant periodicity was found in the period space between 10 ms - 1000 s. The upper limit on the pulsed fraction for this data set is $``$ 23%, compared with the 13% limit of GPH97. The data show no clear evidence for accretion noise such as redness in the power spectrum. The data were searched for X-ray bursts and other anomalies in the light curve, but none were found.
## 4. Discussion
A number of hypotheses have been advanced to explain the nature of 1E 161348-5055. Upon discovery, this source was proposed as an isolated neutron star emitting blackbody radiation (Tuohy & Garmire 1980). Further optical and radio observations (e.g. Tuohy et al. 1983; Dickel et al. 1996) have failed to identify a counterpart, thereby bolstering this interpretation. The observations of GPH97 showed that the point source could be described as a hot blackbody of $`kT\sim 0.6`$ keV and a 0.5–10 keV flux of $`6.5\times 10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup>; therefore a luminosity of $`L_X\sim 10^{34}`$ erg s<sup>-1</sup> (at 3.3 kpc) and an effective emitting area of $`\sim 1`$ km<sup>2</sup>, or $`\sim 0.1`$% the surface area of a NS. This corresponds to a rather small hot-spot on a stellar surface, which is in turn surprising since the source shows no rotational modulation (down to $`\sim 13`$%). Also, the inferred temperature is too high for an object of age a few $`\times 10^3`$ yrs (but see below).
Heyl & Hernquist (1998) recently attempted to salvage the cooling NS model by invoking an ultra-magnetized star ($`B_s10^{15}`$ G) with an accreted hydrogen atmosphere (Page 1997, for a review). This combination can enhance the cooling flux as well as shift the emission bluewards (Chabrier, Potekhin & Yakov 1997) so that it effectively mimics a hot blackbody in the ASCA spectrum. However, it is hard for cooling models to address the issue of variability or the flux decimation observed between the ASCA epochs, unless there is a source for rejuvenating the heating of the NS interior. Such a source could be the super-strong magnetic field, implicit in the above model. Episodic rearrangement of the field (Thompson & Duncan 1996) in the stellar interior could provide the energy to impulsively heat the core.
The stellar surface would re-adjust to reflect the internal heating on a short thermal timescale of a few months. Although heating of the NS is viable in this scenario, the rapid cooling on a timescale of a few years, observed between the ASCA epochs, cannot be explained without some very “exotic” cooling process. In addition, a factor of 10 variability in the hard (3–10 keV) band would result in a downward shift of the effective temperature by a factor of 1.4, which should have been detected. On these grounds, we reject the hypothesis that the observed X-ray emission from 1E 161348-5055 is simple cooling radiation.
An ultra-magnetized NS is also a leading model for the anomalous X-ray pulsars (AXPs), an example of which is the $`\stackrel{<}{}2,000`$ yr-old, 12-s X-ray pulsar in the remnant Kes 73 (Vasisht & Gotthelf 1997). GPH97 compared 1E 161348$``$5055 to the latter on the basis of similar spectral characteristics. And at least two AXPs are reported to vary significantly in flux, by as much as a factor of five (1E 1048.1-593; Oosterbroek et al. 1998). While 1E 161348$``$5055 shows some properties that are tantalizingly similar to those of the AXPs, the lack of observed strong pulsations ($`30\%`$ modulation for the AXPs) is notably amiss, particularly as a large magnetic field should result in highly anisotropic surface emission. Gravitational defocusing and/or unfavorable viewing geometry, however, might account for the lack of observed pulsations.
Alternatively, the variability can be indicative of an accreting compact object. Popov (1997) suggests that 1E 161348$``$5055 is an old accreting NS with a low magnetic field and long spin period ($`10^3`$ s), the by-product of a disrupted binary and not of the same age as RCW 103. This proposition is bolstered by the discovery of the nearby 69 ms pulsar AX J161730-505505 (spindown age of $`8100`$ yrs), located outside the remnant shell (Torii et al. 1998). Several arguments, including those based on the pulsar’s implied velocity and lack of wind nebula, and the symmetry of the SNR, however, make an association unlikely (Kaspi et al. 1998, Torii et al. 1999).
Finally, there exists the possibility that the source is an isolated stellar-mass black hole (BH), accreting from the surrounding medium or from supernova ejecta fall-back. Brown & Bethe (1994) have discussed scenarios in which a massive progenitor explodes as a supernova and then evolves into a BH of several solar masses after accreting captured ejecta. Such a scenario is clearly applicable to the source in RCW 103. Temporal variability and lack of pulsed emission are the natural consequences in such a model.
An accretion process around a BH almost inevitably involves rotating gas flows. Popov (1997) has dismissed the possibility of a few-solar-mass BH in RCW 103 based on the small implied emitting area ($`\sim 1`$ km<sup>2</sup>) of an equivalent blackbody radiator (see §4). This argument would certainly apply in the case of the standard optically-thick, thin-disk model. However, low-efficiency solutions can exist for accretion flows (especially at low $`\dot{M}`$) around BHs, in which most of the viscously generated thermal energy is advected into the BH. Below a critical mass accretion rate of $`0.1\dot{M}_{Edd}`$ ($`\dot{M}_{Edd}`$ is the Eddington rate) accretion flows turn advection dominated (ADAF), and the observed luminosity of 1E 161348$``$5055 would suggest an accretion rate of $`10^{-(2.5-3.0)}\dot{M}_{Edd}`$ (or $`10^{-10}`$ $`M_{}`$ yr<sup>-1</sup>) (cf. ADAF models summarized in Narayan et al. 1998). At this sustained rate, the BH would accrete only $`10^{-7}`$ $`M_{}`$ of matter over its lifetime of $`10^3`$ yr, a small fraction of the mass of the supernova ejecta. Within the framework of the above arguments, it is possible that the flow around 1E 161348$``$5055 could be detected as a faint optical source (V $`\stackrel{>}{}`$ 22 after accounting for visual extinction) or a 10-100 $`\mu `$Jy (1 GHz) radio source.
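As a rough consistency check on the numbers quoted above (the round values, the assumed $`5M_{\odot }`$ hole and the 10% radiative efficiency are ours, purely for illustration): the Eddington rate is
$$\dot{M}_{Edd}=\frac{L_{Edd}}{0.1c^2}\approx \frac{1.3\times 10^{38}\times 5}{0.1\times (3\times 10^{10})^2}\mathrm{g}\mathrm{s}^{-1}\approx 7\times 10^{18}\mathrm{g}\mathrm{s}^{-1}\approx 10^{-7}M_{\odot }\mathrm{yr}^{-1},$$
so an accretion rate of $`10^{-(2.5-3.0)}\dot{M}_{Edd}`$ indeed corresponds to a few $`\times 10^{-10}M_{\odot }`$ yr<sup>-1</sup>, and the mass accreted over the $`10^3`$ yr lifetime is of order $`10^{-10}\times 10^{3}\approx 10^{-7}M_{\odot }`$.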
## 5. Conclusions
The variability of the X-ray emission from the compact source in RCW 103 leaves little room for a conventional cooling NS origin. Instead, an accretion scenario may be considered, although the relatively low luminosity, lack of optical counterpart, and young age are inconsistent with a typical accreting NS binary. Accretion from a very low mass ($``$0.1 M) companion (Mereghetti et al. 1996; Baykal & Swank 1996) or a fossil disk around a solitary NS (van Paradijs et al. 1995), however, is not ruled out. We suggest that within the context of inefficient accretion (such as advection dominated flows), a stellar mass black hole is a viable possibility. We reiterate, however, that the spectral characteristics of 1E 161348$``$5055 are remarkably similar to those of the AXPs, which are suspected to be ultra-magnetized NSs and thought to be powered by magnetic field decay rather than rotational braking (Thompson & Duncan 1996; Vasisht & Gotthelf 1997).
Independent of the above phenomenology, the properties of 1E 161348$``$5055 add to the view that young collapsed stars can follow an evolutionary scenario quite distinct from those of Crab-like pulsars. The property of being radio quiet is common to all AXPs, SGRs and to 1E 161348$``$5055 (Gotthelf 1998; for possible physical mechanisms see Baring & Harding 1998). It is likely that they all share a common heritage and may prove to be part of an evolutionary sequence.
This work uses data made available from the HEASARC public archive at GSFC. G.V. thanks J Heyl for discussions.
## 1 Introduction
The Cepheids’ period – luminosity (P-L) relation has attracted renewed attention in the recent past owing to claims of an observed metallicity dependence of at least its zero point (Gould 1994, Sasselov et al. 1997, Sekiguchi & Fukugita 1998, Kennicutt et al. 1998). The observational results obtained by the different authors diverge strongly, however. Therefore, to date the magnitude and the reliability of the effect remain rather uncertain on the observational side. On the theoretical side, some progress towards more realistic and, in particular, more consistent Cepheid modeling has been achieved recently. Upper limits on the response of the P-L relation to assumptions made in evolution computations, pulsation stability analyses, and mappings onto photometric passbands can now be estimated quite reliably.
This informal review discusses some of the progress (relying mostly on an extensive comparison by Sandage et al. 1998) and it emphasizes modeling problems which still have to be overcome to finalize the theoretical foundation of the Cepheids’ P-L relation.
## 2 Stellar Evolution
In the absence of proper models of stellar hydrodynamics within the framework of stellar evolution, astronomers are forced to parameterize processes such as convective overshooting and semiconvection at various levels of accuracy. Numerous free parameters always go along with such modeling attempts. As different schools tend to favor different choices of such model parameters, we frequently encounter hard-to-follow controversies and dichotomies in the literature. For a nonspecialist it then becomes essentially impossible to select a particular simulation result for his purpose on the basis of rational reasoning alone. As recipe-driven modeling continues, this unfortunate situation will hardly disappear in the near future.
Semiconvection and convective overshooting are emphasized above because they are the hydrodynamical key-phenomena whose effects cause major uncertainties in the stellar evolution of intermediate-mass and massive stars. Evidently, any other modification of the stellar microphysics shifts and twists the star’s track on the Hertzsprung-Russell (HR) plane too. In contrast to hydrodynamical processes, however, broad acceptance of say opacity tables and nuclear-reaction rates is found in the community.
Numerical observations reveal that overshooting shifts the evolutionary track of a given stellar mass mainly to higher luminosity during the main sequence phase. During the core helium burning the size of the blueward loops is modified when overshooting is incorporated in the numerical scheme (e.g. Matraka et al. 1982, Stothers & Chin 1992, Alongi et al. 1993). In contrast to overshooting, the effect of semiconvection appears to be more dramatic, at least during the core-helium burning phase. Even the very existence of blue loops can depend on incorporating semiconvection (e.g. Langer et al. 1985, Langer 1991).
Even if the physical processes and the micro-physical data are the same in different evolution codes, the resulting tracks can differ significantly. A comparison of Baraffe et al.’s (1998) Fig. 1 with Saio & Gautschy’s (1998) (SG98) Fig. 1 shows that in the first case an additional smaller loop occurs, superposed on the larger (main) blue loop, during core helium burning. Hence, where SG98 find only three crossings of the Cepheid instability domain, Baraffe et al. (1998) can accommodate up to five.
The argument that only the second crossing of the instability region is relevant for the Cepheid phenomenon is perpetuated even in the most recent literature. A closer look at stellar evolution data shows that only stars with masses up to about $`6M_{}`$ spend by far the longest time in the instability domain during the second crossing (the residence time there exceeds that of the other crossings by a factor of 10 or more). Towards higher masses, the ratios of 2nd to 3rd and 2nd to 1st crossing times drop markedly and therefore the probability of encountering stars in their 1st or 3rd crossing increases to a non-negligible level.
For a $`7M_{}`$ star, the ratio of 2nd to 1st crossing time reduces to about 3 and to 1.6 for 3rd/2nd crossing. The numbers further decrease as the stellar mass increases. Detailed numbers depend sensitively on the choice of the abundances. Saio (1998) presented a figure showing that for his computations of $`X=0.7,Z=0.02`$ sequences the 3rd crossing constitutes the slowest passage through the strip for stellar masses above $`5M_{}`$.
All the uncertainties and the resulting discrepancies in the evolutionary tracks as well as, of course, any changes of the star’s chemical composition modify the mass – luminosity (M-L) and the mass – radius (M-R) relation at the position of, say, the fundamental blue edge (FBE) of the instability strip. How this translates into and influences the P-L relation will be addressed in the next section.
## 3 Stellar Pulsation Theory
The basic physical mechanism underlying the pulsations of Cepheids is well understood. This success story started in the early sixties (cf. Baker & Kippenhahn 1965 and Cox 1980 for further references). This does not mean, however, that all theoretical problems have been resolved in full detail. The qualitative era has passed, and increasingly accurate observational data need to be explained within a theoretical framework in which stellar evolution and also the microphysics are known and understood at a high level.
To motivate the existence of a P-L relation for Cepheids we start with the pulsation equation: $`\mathrm{\Pi }\sqrt{\overline{\rho }_{\ast }/\overline{\rho }_{\odot }}=Q`$, with $`\mathrm{\Pi }`$ being the theoretical period (in contrast to the observed one $`P`$). Under favorable circumstances, i.e. under appropriate relations between stellar mass, $`M_{}`$, its luminosity, $`L_{}`$, and the effective temperature, $`T_{\mathrm{eff}}`$, a simple relation between period and luminosity emerges. Note that the pulsation constant, $`Q`$, is constant only for homologous models. Theoretically, the formulation of the P-L relation at the position of the blue edge (for Cepheids, the fundamental blue edge, FBE) is the least cumbersome one and therefore it is chosen in the following. Observers, on the other hand, prefer something like the *ridge line*, which is a mean line through the observational data on the P-L plane. A ridge line is statistically much easier to compute than defining an accurate envelope on top of sparse data. If theorists were to comply with the observers’ choice, they would need a clear idea of the width of the instability strip, which is, however, hard to come by computationally. The position of the red edge of the strip is defined by the efficiency of the convective leakage of the flux in the superficial stellar layers – an extremely cumbersome task to model! We return to the role of convection further below.
After some easy manipulations we can write the pulsation equation as
$$\mathrm{log}\mathrm{\Pi }=-\frac{1}{2}\mathrm{log}M/M_{\odot }+\frac{3}{4}\mathrm{log}L/L_{\odot }-3\mathrm{log}T_{\mathrm{eff}}+\mathrm{log}Q+C$$
(1)
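The manipulations are the standard ones; as a sketch (with all solar constants absorbed into $`C`$): homology gives $`\overline{\rho }\propto M/R^3`$ and the Stefan-Boltzmann law gives $`R\propto L^{1/2}T_{\mathrm{eff}}^{-2}`$, so that
$$\mathrm{\Pi }\propto \overline{\rho }^{-1/2}\propto M^{-1/2}R^{3/2}\propto M^{-1/2}L^{3/4}T_{\mathrm{eff}}^{-3},$$
which, upon taking logarithms, is Eq. (1).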
In the first term of the rhs we can assume a M-L relation at the position of the FBE, i.e. $`M=f_1(L;q_i)`$. Notice that this relation depends on all those assumptions (indicated by the $`q_i`$’s) which influence the loci of the tracks around the instability strip. These quantities are, however, often buried deep in the formulation of the modeling of stellar evolution and therefore they are difficult to quantify. Furthermore, the M-L relation depends on the evolutionary phase, or in other words, it depends on which crossing of the instability strip the star is undergoing.
The third term on the rhs of eq. (1) can be eliminated via a parameterization of the position of the FBE, i.e. $`T_{\mathrm{eff}}=f_2(L,q_i^{})`$ which depends implicitly again on various quantities $`q_i^{}`$(which might be different from the ones influencing the M-L relation).
Essentially, the quarreling about the robustness of the P-L relation, i.e. the uniqueness of its slope and zero-point, after $`f_1,f_2`$ are introduced into eq. (1), boils down to quantifying the effect of the $`q_i,q_i^{}`$ on the P-L relation. From first principles, little is to be learned on these matters. Therefore, numerical calculations have to be performed and conclusions (even if not so conclusive) have to be derived from such “numerical observations”.
Coarsely speaking, the M-L relation, the parameterization of the FBE, and the M-R relation of supergiants in the Cepheid stage all show significant dependences on $`Y`$ and $`Z`$ abundances, on the number of the crossing (i.e. the evolutionary stage) of the instability region (cf. SBT98), and on all the stellar hydrodynamical uncertainties (e.g. mixing length, overshooting length-scale, semi-convective efficiency). Most surprisingly, however, the *combined* effect of the various dependencies leads eventually to a nearly $`Y,Z`$, and crossing-number independent P-L relation.
As an example consider the following: The FBE (at any luminosity) depends on $`Y`$ and $`Z`$ roughly as
$$\mathrm{\Delta }\mathrm{log}T_{\mathrm{eff}}=+0.04\mathrm{\Delta }Y-0.49\mathrm{\Delta }Z$$
(2)
(SG98). More extensive (and therefore probably more reliable) studies of the Y dependence show a larger coefficient associated with $`\mathrm{\Delta }Y`$: +0.14 in Chiosi et al. (1993), and +0.11 in Iben & Tuggle (1972). The important thing to notice is the counteracting influence of $`Y`$ and $`Z`$. It reduces the shift of the FBE when Z is modified together with the frequently encountered cosmogonical constraint of $`\mathrm{\Delta }Y/\mathrm{\Delta }Z`$ between about 3 and 5.
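To see the near-cancellation at work, take purely illustrative numbers (our choice, not from the cited studies): $`\mathrm{\Delta }Z=+0.01`$ with $`\mathrm{\Delta }Y/\mathrm{\Delta }Z=4`$, i.e. $`\mathrm{\Delta }Y=+0.04`$. The SG98 coefficients then give
$$\mathrm{\Delta }\mathrm{log}T_{\mathrm{eff}}=0.04\times 0.04-0.49\times 0.01\approx -0.003,$$
i.e. a shift of only a few tens of K at 6000 K, while the larger coefficient of Chiosi et al. (+0.14) would give $`+0.14\times 0.04-0.49\times 0.01\approx +0.001`$. In either case the net displacement of the FBE is an order of magnitude smaller than the individual contributions.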
Furthermore, despite the varying M-L relation for different heavy element abundances and different hydrodynamical assumptions concerning convection, the M-R relation adapts such that finally the slope of the emerging P-L relation changes by less than about 10 %. The same applies for the different crossing numbers across the instability region: The M-L relation is a different one for each crossing, so is the M-R relation. In combination they result in a zero point shift of less than $`0.1M_{\mathrm{bol}}`$ for any choice of chemical abundances. The slope remains unchanged to a high degree.
Summa summarum: the combination of stellar evolution and linear stability analyses of these models directly leads to a P-L relation, at the FBE at least, which depends noticeably neither on chemical abundances nor on the hydrodynamical treatment or the evolutionary stage. The average scatter of the P-L relation at the FBE is about 0.047 in $`\mathrm{log}L/L_{}`$. The ad hoc defined fundamental red line (FRL) of SG98 – postulating the fundamental red edge (FRE) to be parallel-shifted in $`\mathrm{log}T_{\mathrm{eff}}`$ by 0.06 – indicates a P-L relation for the FRL running parallel to the one of the FBE as well; this result is not a priori trivial, as the stars evolve non-homologously across the instability strip and as the efficiency of convective leakage might be a strong function of $`T_{\mathrm{eff}}`$.
Most of all, however, one of the major unsettled issues in stellar pulsation theory – the influence of convection – is important also for Cepheids. Modeling of convection in pulsating stars has been attempted with mixed success for many years (e.g. Unno 1967, Baker & Gough 1979, Stellingwerf 1982, Xiong et al. 1998, Yecko et al. 1998). Usually it is multiply parameterized and therefore it is not satisfactorily understood. Model computations clearly demonstrate that convective leakage is the dominating process to cause a red edge and therefore it is an indispensable ingredient of the theory. A proper treatment of convection might also be necessary to pin down the blue edge with high accuracy. The sensitivity of blue edge on convection depends on the stellar mass, however. As the instability strip is tilted to lower effective temperatures for higher luminosities, higher-mass stars will be influenced stronger by convection than low-mass stars (cf. Fig. 2).
At high luminosities, radiation pressure tends to destabilize stellar envelopes (i.e. tends to shift the FBE to the blue) whereas convective leakage in the superficial layers induces a redshift of the FBE. The competition between these two effects will determine the final outcome. Taking the observational P-L data at face value, it appears as if convection is the dominating effect, i.e. the shift of the FBE to the red induces a reduction of the slope of the P-L relation at its upper end (cf. SBT98). The deviation from simple linearity of the P-L relation appears to set in above about 60 days period. This is relevant for stellar masses above 10 $`M_{}`$.
The lowest panel of Fig. 2 shows that for the 5 and the 10 $`M_{}`$ stars convective transport is the dominating energy transport mechanism within the partial H/HeI ionization zone: locally more than 90 % of the flux is transported by material motion. The integral work, which is shown in the top panel, is not significantly overlapping with the convection-dominated region in the 5 $`M_{}`$ model. Most of the driving happens deeper inside the envelope, in the HeII partial ionization zone around $`\mathrm{log}T=4.75`$. The situation changes, however, at 10 $`M_{}`$: More than 50 % of the driving occurs there already in the H/HeII region which overlaps with about half of the convective flux dominated region of the envelope. For the latter model, a neglect of the convective flux perturbation in the stability analysis will most probably affect the eigenanalysis markedly. In other words, a simple analysis, suppressing convective leakage even at the blue edge will produce too blue a FBE compared with a realistic treatment. Therefore, over the whole period range, say up to 100 days, a simplistic computation will produce a P-L relation with a slope which is larger than what nature realizes. When constraining a P-L parameterization to low luminosities (i.e. shorter periods), the effect of a even a suppressed pulsation-convection coupling might lead to a slight zero-point shift in the P-L relation. Only a slope shift is not so likely. The magnitude of such a shift is hard to predict. A comparison relies on how realistically convection penetrating optically thin regions can be described. Presently, we are far from a physically realistic treatment of this domain.
In contrast to repeated mis-citations (e.g. Bono et al. 1998) SG98 did not use “radiative” models in their linear, nonadiabatic stability analyses. They used evolutionary stellar models which are convective wherever stellar thermodynamics requires it. What is not included, however, is the *perturbation* of the convective flux due to some convection-pulsation interaction prescription. This clarification seems important as the correct (radiative/convective) structure of the equilibrium model is relevant for the nonadiabatic *period* derived from a linear analysis – even if the perturbation of the convective flux turns out to be irrelevant for the onset of pulsational instability for the short-period Cepheids. Of course, the same aspect pops up again when comparing linear models with convective non-linear ones (e.g. BMS 1999).
The lowest panel of Fig. 2 shows in grey the weight functions of the fundamental modes for the $`5M_{}`$ and the $`10M_{}`$ models. The magnitude of the arbitrarily normalized weight function measures the local importance of the stellar properties in determining the period of the pulsation mode under consideration. Obviously, the convection zone in the H/HeI partial ionization region has marginal influence on the weight function for both stellar masses. It is clearly seen that the two sets of curves live in essentially disjoint regions of the star. However, it is important to note that for comparisons it is relevant to compute stellar structures resulting from the same physical input: if the convection zones are suppressed to compute radiative models (e.g. BMS 1999), then the compressibility structure ($`\rho (r)T(r)`$) differs from the one in radiatively/convectively layered models. Very naturally, two sets of such unlike models will have different distributions of weight functions and therefore different periods. The more extended the convection zones become, the more the periods diverge between the two approaches: this is just what is seen in Fig. 53 of BMS99.
Modern nonlinear modeling which includes diffusion-type convection treatment coupled with the star’s envelope pulsation do exist (cf. BMS99). The convection formulated therein is a refined version of Stellingwerf’s (1982) approach. The BMS99 nonlinear FBE is at all luminosities below $`\mathrm{10\hspace{0.17em}000}`$ $`L/L_{}`$ about 200 K hotter than e.g. SG98 FBE without pulsation-convection coupling. Naïvely, we would have expected the opposite behaviour. Above $`\mathrm{10\hspace{0.17em}000}`$ $`L/L_{}`$ the BMS99 FBE shifts coolward rather abruptly, suggesting a FBE at 5500 K at $`\mathrm{32\hspace{0.17em}000}`$ $`L/L_{}`$ where the extrapolation of the SG98 data leads to 5700 K. This drop in temperature of the FBE is the reason of the quadratic term introduced into the P-L parameterization by Bono & Marconi (1998). Observations from LMC and SMC do not, in our opinion, support such an efficient convective leakage at long periods (high luminosities).
Considerable complications enter the treatment of convection in Cepheids since the superficial convection zone associated with the H/HeI partial ionization reached into optically thin regions. Hence, radiative losses become important in the energy balance of the convective elements. This is very difficult to model; to date a reliable quantitative description is not available. For the finite amplitude behaviour, i.e. for the lightcurve, the details of the treatment of convection at the outer boundary was shown to be important already in RR Lyrae stars (Feuchtinger 1998) which are hotter than the Cepheids.
It is unlikely that the final effect of convection on the P-L relation will introduce a metallicity dependence which we are missing currently. The driving in the outermost regions is H-ionization dominated, and this is very robust against envisaged small changes in the chemical composition.
To complete this section, we write the P-L relation as shown in Fig. 3, as $`M_{\mathrm{bol}}=a\mathrm{log}\mathrm{\Pi }+b`$. A comparison of several theoretical studies by SBT98 demonstrated that at 10 days period, the zero point of the P-L relation agrees to within $`0.1M_{\mathrm{bol}}`$ for the different authors (and therefore different approaches in terms of stellar evolution and stellar stability computations). With the SG98 data we arrive at $`M_{\mathrm{bol}}=-4.84`$ (at \[Fe/H\] = 0) or at $`M_V=-4.92`$ after bolometric correction according to SBT98. The zero-point remains stable at a level of about +/- 0.05 mag when \[Fe/H\] varies between +0.4 and -1.7 and when Y changes (independently) between 0.25 and 0.30. The agreement is somewhat degraded at $`\mathrm{log}P=1.5`$ since the slopes derived from the different sources vary at the level of about 10 %. In the study of SG98 the value of $`a`$ was found to increase by about 9 % upon reducing \[Fe/H\] from zero by one dex. This all shows nevertheless the rather remarkable stability of the P-L relation against changes in physical assumptions, numerical realizations, and last but not least against abundance variations.
## 4 Stellar Atmospheres
Even if we should have gained some confidence in the computations of stellar evolution and then in stellar stability properties, there is another hurdle to clear: the transfer of the modulated energy flux through the stellar atmosphere.
As we have seen in the previous sections, the *theoretical* Cepheid P-L relation appears not to be very sensitive (compared with the level of accuracy of presently obtained observations) on metallicity and assumptions on the microphysics or on hydrodynamical processes. Even if that should prove correct in the future, the transformation into observationally relevant filter passbands can potentially destroy this simplicity due to differential blanketing effects in the stellar atmosphere.
Several recent studies of the P-L relation performed a mapping of bolometric magnitudes into various photometric passbands (Baraffe et al. 1998, Bono et al. 1998, SBT98). Only the SBT98 study uses stellar atmospheres which are constructed for the particular needs of Cepheids. In the other studies, the available $`\mathrm{log}g`$ range of the atmosphere models is too narrow for Cepheids so that extrapolations are necessary for colors and bolometric corrections.
SBT98 find a strong dependence of color indices (in particular at short wavelengths) and of the bolometric correction on metallicity and gravity. The superposition of these dependencies, when computing the P-M<sub>x</sub> relations (where $`x`$ stands for some filter passband), conspires again to result in a remarkable smallness of the effect when the metallicity is varied by a factor of about 50. Table 1 quantifies the metallicity dependence as deduced by SBT98.
The typical scatter in the data amounts to about 0.02 mag. The variations of the absolute magnitudes at different periods are close to the noise in the theoretical models, but also in the range of the uncertainty of observational data. In any case, the above numbers are more than a factor of three smaller than some observational results published recently (e.g. Gould 1994, Sasselov et al. 1997, Sekiguchi & Fukugita 1998, Kennicutt et al. 1998).
Applying the above results to distance moduli obtained in different passbands for LMC and SMC very good internal agreement is found, usually on the level of +/- 0.02 mag when using either the SBT98 or the DiBenedetto (1997) data. Additionally, SBT98 demonstrated that distance moduli to the Clouds derived with RR Lyr stars and Cepheids agree well even when not correcting the Cepheids’ moduli for metallicity. This latter result already hints at only a weak dependence of the P-L zero points on abundance.
## 5 Final comments
Based on the mapping of bolometric relations onto selected filter passbands with appropriate stellar (static, plane-parallel) atmosphere models, SBT98 found that a weak metallicity-dependent zero-point shift in the P-L relation exists. Its magnitude is below 0.1 mag/dex of \[Fe/H\] in the passbands B, V, and I. A more exact number could not be deduced to date as the results scrape close to the scatter brought about by the models themselves and the different modeling assumptions. It appears, however, that the theoretically deduced variation with metallicity lies in the range of the claimed observational errors. Therefore, from the theoretical side, no significant zero-point variation is expected to be found in the observations presented to date. Applying corrections to the absolute magnitudes, assuming the upper limits of their \[Fe/H\] dependence, leads to distance moduli with a remarkable internal agreement in B, V, and I for both LMC and SMC (cf. SBT98). The obviously much stronger \[Fe/H\] dependences claimed in the observational literature are not supported by theory, and the discrepancy clearly has to be resolved in the near future as it poisons the use of pulsators to accurately calibrate the distance scale in the nearby universe.
One of the important questions at the interface of observations and theory is that of properly averaging observational data. Theory always discusses equilibrium quantities and observations provide at best *some* mean values. How well do these mean values represent equilibrium quantities? The transformations of observational data averaged over a cycle, such as e.g. $`m_\mathrm{V}`$, to obtain quantities predicted by theory (such as $`M_V`$ or $`M_{\mathrm{bol}}`$) are unfortunately not unique. For a brightness to luminosity transformation, a cycle average of the *intensity* is expected to be the best as it can be physically motivated. Surprisingly enough, Karp (1975) shows for his model that a magnitude mean approximates the equilibrium value best. For colors the situation becomes even more inscrutable as the quality of a particular choice of averaging is furthermore considerably passband dependent. Also Bono et al. (1998) point out these differences without suggesting, however, an approximation strategy for observers yet.
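The size of the ambiguity is easy to illustrate with a toy light curve. The following sketch (a synthetic sinusoidal light curve of half-magnitude amplitude; the numbers are purely illustrative and not a Cepheid model) compares a straight magnitude mean with the magnitude of the cycle-averaged intensity:

```python
import numpy as np

# Synthetic light curve: sinusoidal variation of 1 mag peak-to-peak
# around a "static" magnitude of 0 (illustrative only).
phase = np.linspace(0.0, 1.0, 1000, endpoint=False)
m = 0.5 * np.sin(2 * np.pi * phase)        # magnitude m(phase)
flux = 10.0 ** (-0.4 * m)                  # corresponding intensity

mag_mean = m.mean()                        # <m>: magnitude-averaged value
int_mean = -2.5 * np.log10(flux.mean())    # magnitude of the mean intensity

# The two "means" already differ by a few hundredths of a magnitude
# for this modest amplitude.
print(mag_mean, int_mean, mag_mean - int_mean)
```

For realistic, asymmetric Cepheid light curves the offset and its sign depend on the light-curve shape, which is precisely why the choice of averaging matters when comparing with equilibrium quantities.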
Another aspect which might lead to unexpected divergences when defining some sort of a ridge line is the uneven population of the instability strip. Taking Fernie’s (1990) or SBT98’s results at face value one observes that the Cepheids tend to crowd the blue side and avoid the red part of the instability strip altogether at short periods. From the view-point of stellar evolution theory this is not understandable. The stars should enter the instability strip from the red and therefore populate the strip from cool to blue upon increasing the luminosity. Seemingly the opposite is observed! Furthermore, our Fig. 1 shows that the evolutionary timescales are not even across the strip. Almost generically, the evolution is slower close to the blue edge than around the red one. This seems to be true independently of the crossing number. It is the second crossing of the 10 $`M_{}`$ star only which suggests an even speed and therefore an even population of variables across the instability region. Not accounting for the above mentioned population-density differences of pulsators across the strip, not to mention yet observational biases due to an amplitude – effective temperature – period dependence, distorts the slope of the ridge line. For example, computing a ridge line assuming a uniform population of the strip and neglecting the short-period blue clumping discussed by Fernie (1990) leads to a shallower P-L relation than what one obtains from a FBE relation.
On theoretical grounds, using a FBE relation would be the most favorable choice. This, however, is hard if not impossible at all to realize considering the small number of stars usually accessible. When using *all* the Cepheids in the strip to establish a slope of the P-L relation a possible uneven population of the instability strip should be accounted for. A quantification of the effect should be feasible for stellar evolution simulators as they have the necessary data at hand.
Acknowledgements: The Swiss National Science Foundation supported this study through a PROFIL2 fellowship. I am indebted to H. Harzenmoser for inquisitive discourses during a Meringues-meeting at Chemmeribode Bad.
References
Alongi M., Bertelli G., Bressan A., Chiosi C., Fagotto F., et al. 1993, A&AS 97, 851
Baker N.H., Gough D.O. 1979, ApJ 234, 232
Baker N.H., Kippenhahn R. 1965, ApJ 142, 868
Baraffe I., Alibert Y., Méra D., Chabrier G., Beaulieu J.-P. 1998 ApJ 499, L205
Bono G., Marconi M. 1998, in IAU Symp. 190, New Views of the Magellanic Clouds, eds. Y.-H. Chu, J. Hesser & N. Suntzeff, ASP Press
Bono G., Marconi M., Stellingwerf R.F. 1999, to appear in ApJS (BMS99)
Bono G., Caputo F., Castellani V., Marconi M. 1998, astro-ph/9809127
Chiosi C., Wood P.R., Capitanio, N. 1993, ApJS 86, 541
Cox J.P. 1980, Stellar Pulsation, Princeton:Princeton University Press
DiBenedetto G.P. 1997, ApJ 486, 60
Fernie J.D. 1990, ApJ 354, 295
Feuchtinger M. 1998, A&A 337, L29
Gould A. 1994, ApJ 426, 542
Iben I., Jr., Tuggle R.S. 1972, ApJ 178, 445
Karp A.H. 1975, ApJ 200, 354
Kennicutt R.C., et al. 1998, ApJ 498, 181
Kochaneck C.S. 1997, ApJ 491, 13
Langer N. 1991, A&A 252, 669
Langer N., El Eid M.F., Fricke K.J. 1985, A&A 145, 179
Matraka B., Wassermann C., Weigert A. 1982, A&A 107, 283
Saio H. 1998, in Pulsating Stars, eds. M. Takeuti and D.D. Sasselov, Tokyo:Universal Academy Press
Saio H., Gautschy A. 1998, ApJ 498, 360 (SG98)
Sandage A., Bell R.A., Tripicco M.J. 1998, preprint
Sasselov D.D., et al. 1997, A&A 324, 471
Sekiguchi M., Fukugita M. 1998, Observatory 118, 73
Stothers R.B., Chin C-W. 1992, ApJ 390, 136
Stellingwerf R.F. 1982, ApJ 262, 330
Unno W. 1967, PASJ 19,140
Xiong D.R., Cheng Q.L., Deng L. 1998, ApJ 500, 449
Yecko P.A., Kolláth Z., Buchler J.R. 1998, A&A 336, 553
# Moment scaling at the sol - gel transition
## I Reversible and irreversible aggregation models
Below we deal with a system of monomers (the basic units) which can aggregate to form the connected clusters. A monomer is considered as a cluster of mass 1 (this is the mass unit). Moreover, monodispersity of the monomeric mass as well as the mass conservation during the aggregation is assumed, i.e., the cluster-masses are always integer numbers in this approach.
The basic sol-gel transition is the appearance at a finite time of an infinite cluster, called the gel. ’Infinite’ means here that a finite fraction of the total mass of the system belongs to the gel. Note that this definition is applicable both to finite as well as to infinite systems, but contextually and historically, the tools used to study this behaviour have been defined somewhat differently in these two cases . For finite systems, the following moments of the number-mass-distribution $`n_s`$ ( i.e. the number of clusters of mass $`s`$) are introduced :
$`M_k^{}={\displaystyle \underset{s}{\sum }}s^kn_s,`$ (1)
where the summation is performed over all clusters except the largest one. The superscript in Eq. (1) recalls this constraint on the allowed values of $`s`$. Consequently, the mass of the gel-fraction is just $`N-M_1^{}`$ with :
$`N={\displaystyle \underset{\text{all }s}{\sum }}sn_s,`$ (2)
the total mass of the system. For infinite systems, the following normalized moments of the concentration-mass-distribution $`c_s`$ (i.e. the concentration of clusters of mass $`s`$) are introduced :
$`m_k^{}={\displaystyle \underset{s}{\sum }}s^kc_s,`$ (3)
where the summation runs over all $`s`$ . Generally, the concentrations are normalized in a special way :
$`c_s=\underset{N\to \mathrm{\infty }}{lim}{\displaystyle \frac{n_s}{N}},`$ (4)
and not in a direct relation with the volume. Consequently, the probability for a monomer to belong to the gel-cluster is just equal to $`1-m_1^{}`$.
These different definitions of moments are equivalent in the sense that they are connected through the relation :
$`m_k^{}=\underset{N\to \mathrm{\infty }}{lim}{\displaystyle \frac{M_k^{}}{N}}.`$ (5)
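For a finite configuration these definitions translate directly into a few lines of code; a minimal sketch (Python; the example cluster masses are arbitrary illustrative values, not data from this work) is:

```python
import numpy as np

def reduced_moments(masses, k_values=(0, 1, 2)):
    """Moments M'_k of one configuration of cluster masses, excluding the largest cluster.

    Returns (N, {k: M'_k}) where N is the total mass of the system.
    """
    masses = np.asarray(masses)
    N = int(masses.sum())
    largest = masses.argmax()
    rest = np.delete(masses, largest)        # drop the single largest cluster
    moments = {k: float((rest.astype(float) ** k).sum()) for k in k_values}
    return N, moments

# Example: 5 monomers, one 3-cluster and one 7-cluster (the "gel" candidate)
N, M = reduced_moments([1, 1, 1, 1, 1, 3, 7])
gel_mass = N - M[1]      # mass of the largest cluster, N - M'_1
m1_prime = M[1] / N      # normalized first moment m'_1
print(N, M, gel_mass, m1_prime)
```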
In this paper, we investigate scaling behaviours of distributions of mass-distribution-moments for the two ’classical’ approaches to the sol-gel transition : the reversible model (the percolation ) and the irreversible model (the Smoluchowski equations). In the former case, diffusion is unimportant and the reaction between clusters is the relevant step, whereas in the latter case, on the contrary, the reaction between clusters is unimportant and the diffusion is relevant. In this sense, these two models are believed to belong to the two different classes, and all other models of aggregation are suspected to belong to one of them. For example, statistical Flory-Stockmayer theory should behave as the percolation model for the scaling properties, since it corresponds to the calculation ’at equilibrium’. This ’universality’ in diverse sol-gel situations is at present a guess.
The percolation model can be defined as follows : in a box (a part of the regular lattice), each site corresponds to a monomer and a proportion $`p`$ of active bonds is set randomly between sites. This results in a distribution of clusters defined as ensembles of sites connected by active bonds. For a definite value of $`p`$, say $`p_{cr}`$, a giant cluster almost surely spans the whole box. For example, in the thermodynamic limit when the size of the box becomes infinite (this limit will be denoted by ’$`lim`$’ in the following), a finite fraction of the total number of vertices belongs to this cluster. Therefore, we get the results : $`m_1^{}=1`$ for $`p<p_{cr}`$ and $`m_1^{}<1`$ for $`p>p_{cr}`$. Moreover, $`m_1^{}`$ is a decreasing function of the occupation probability. This typical behaviour is commonly (and incorrectly) called ’the failure of mass conservation’, but, as stated before, $`m_1^{}`$ is more simply the probability for a vertex to belong to some finite cluster.
On the other hand, the infinite set of Smoluchowski equations :
$`{\displaystyle \frac{dc_s}{dt}}={\displaystyle \frac{1}{2}}{\displaystyle \underset{i+j=s}{\sum }}K_{i,j}c_ic_j-{\displaystyle \underset{j}{\sum }}K_{s,j}c_sc_j,`$ (6)
are the coupled non-linear differential equations in the variables $`c_s`$, i.e., in the concentrations of clusters of mass $`s`$. The time $`t`$ includes both diffusion and reaction times, and these equations suppose irreversibility of aggregation, i.e., the cluster fragmentation is not allowed. The coefficients $`K_{i,j}`$ represent the probability of aggregation between two clusters of mass $`i`$ and $`j`$ per unit of time. Some of them are explicitely known for different experimental conditions . But all such known aggregation kernels have the remarkable homogeneity feature :
$`K_{ai,aj}=a^\lambda K_{i,j},`$ (7)
for any positive $`a`$, with $`\lambda `$ called the homogeneity index. Maybe the simplest example of the homogeneous kernel is $`K_{i,j}=(ij)^\mu `$. It has been shown theoretically that if $`\mu `$ is larger than 1/2, then there exists a finite time, say $`t_{cr}`$, for which $`m_1^{}`$ becomes smaller than 1 for $`t>t_{cr}`$ . This can be interpreted as the appearance of an infinite cluster at the finite time $`t_{cr}`$, and it is tempting to put in parallel the occupation probability $`p`$ in the percolation model and the time $`t`$ in the Smoluchowski approach. But we will see later that even if this parallel seems reasonable ($`p`$ being the advancement of the aggregation process), some physical quantities behave quite differently. Note at last that in eq. (6), as written above, the sum over $`j`$ does not include the gel $`j=\mathrm{}`$ if any (since $`c_{\mathrm{}}=1/\mathrm{}=0`$ ), so the reaction between sol and gel is not taken into account. In principle, an additional term realizing the sol-gel aggregation should be added on the right-hand member of eq. (6).
## II Reversible and irreversible sol-gel transitions
To see differences between both types of models, we shall treat two particularly simple cases : the bond-percolation on the Bethe-lattice, and the aggregation kernel $`K_{i,j}=ij`$ in the Smoluchowski approach. They are chosen to be as close as possible in the sense that clusters generated by the percolation model are branched structures (without any loop) in an infinite-dimensional space. So, all their constituents are at the surface and reactivity must then be proportional to their mass. If the diffusion of clusters is negligible, e.g., when all clusters diffuse with the same velocity, or for highly concentrated systems, the corresponding reactivity kernels $`K_{i,j}`$ between two such clusters of mass $`i`$ and $`j`$ should be proportional to $`ij`$.
The bond-percolation on the Bethe-lattice with coordination number $`z`$, has been solved by Fisher and Essam. Here, the main result we are interested in, is the average concentration :
$`c_s=z{\displaystyle \frac{((z-1)s)!}{((z-2)s+2)!s!}}p^{s-1}(1-p)^{(z-2)s+z},`$ (8)
and the first normalized moment :
$`m_1^{}=({\displaystyle \frac{1-p}{1-p^{}}})^{2z-2},`$ (9)
with $`p^{}`$ being the smallest solution of equation :
$`p^{}(1-p^{})^{z-2}=p(1-p)^{z-2}.`$ (10)
Let us define $`p_{cr}\equiv 1/(z-1)`$. For $`p<p_{cr}`$, the only solution of the above equation is : $`p^{}=p`$, but when $`p`$ is larger than $`p_{cr}`$, then there is a smaller non-trivial solution which behaves as $`p_{cr}-|p-p_{cr}|`$ near $`p_{cr}`$. Above this threshold, the moment $`m_1^{}`$ is smaller than 1 and behaves approximately as $`1-2(p-p_{cr})/(1-p_{cr})`$. The marginal case $`z=2`$ corresponds to the linear-chain case.
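Eqs. (9)-(10) are easy to evaluate numerically. The following sketch implements them as written above (the choice of $`z`$ and of the $`p`$ values is purely illustrative; it assumes SciPy's root-bracketing routine is available):

```python
import numpy as np
from scipy.optimize import brentq

def p_star(p, z):
    """Smallest root p' of p'(1-p')**(z-2) = p(1-p)**(z-2)  (Eq. 10)."""
    p_cr = 1.0 / (z - 1)
    if p <= p_cr:
        return p
    rhs = p * (1.0 - p) ** (z - 2)
    f = lambda q: q * (1.0 - q) ** (z - 2) - rhs
    return brentq(f, 0.0, p_cr)        # the non-trivial root lies below p_cr

def m1_prime(p, z):
    """Normalized first moment, Eq. (9) as written in the text."""
    q = p_star(p, z)
    return ((1.0 - p) / (1.0 - q)) ** (2 * z - 2)

z = 3                                   # p_cr = 1/2 for this coordination number
for p in (0.3, 0.5, 0.6, 0.8):
    print(p, p_star(p, z), m1_prime(p, z))
```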
Coming back to the concentrations, we can see that for large values of the size $`s`$, the following Stirling approximation holds :
$`c_s\sim s^{-5/2}\mathrm{exp}(-\alpha s),`$ (11)
with $`\alpha `$ given by :
$`\alpha =-\mathrm{ln}\left({\displaystyle \frac{p}{p_{cr}}}({\displaystyle \frac{1-p}{1-p_{cr}}})^{z-2}\right).`$ (12)
For this model, a power-law behaviour of the concentrations is seen at the threshold $`p_{cr}`$, where more precisely $`c_s\sim s^{-\tau }`$ with $`\tau =5/2`$. Outside of this threshold, an exponential cut-off is always present. This sort of critical behaviour at equilibrium is analogous to the thermal critical phenomena, and in particular, there exist two independent critical exponents, for example $`\tau `$ and $`\sigma `$ which is the exponent of the mean cluster-size divergence (here $`\tau =5/2`$ and $`\sigma =1`$), to describe completely the critical features.
The case of the Smoluchowski equations is quite different. Putting $`K_{i,j}=ij`$ in these equations, Leyvraz and Tschudi showed that there exists a critical value of the time, say $`t_{cr}`$ (here : $`t_{cr}=1`$), such that the solution is :
$`c_s`$ $`=`$ $`{\displaystyle \frac{s^{s-2}}{s!}}t^{s-1}\mathrm{exp}(-st)\text{ for }t<t_{cr}`$ (13)
$`c_s`$ $`=`$ $`{\displaystyle \frac{s^{s-2}}{s!}}\mathrm{exp}(-s)/t\text{ for }t>t_{cr},`$ (15)
for size-distribution and :
$`m_1^{}`$ $`=`$ $`1\text{ for }t<t_{cr}`$ (16)
$`m_1^{}`$ $`=`$ $`1/t\text{ for }t>t_{cr},`$ (18)
for the first normalized moment.
As explained in the previous section, the behaviour of $`m_1^{}`$ characterizes the sol-gel transition, but some other features are interesting to compare to the percolation model. Firstly, one can see that the power-law behaviour is present for $`t>t_{cr}`$ and not only at the threshold, since for large $`s`$, we have :
$`c_s`$ $`\sim `$ $`s^{-5/2}\mathrm{exp}(-\alpha s)\text{ for }t<t_{cr}`$ (19)
$`c_s`$ $`\sim `$ $`s^{-5/2}/t\text{ for }t>t_{cr},`$ (21)
with $`\alpha =t-t_{cr}-\mathrm{ln}(t/t_{cr})`$. Secondly, it has been proved that for more general homogeneous kernels : $`K_{i,j}=(ij)^\mu `$ , there exists a relation between the exponent $`\tau `$ of the power-law behaviour and the exponent $`\sigma `$ of the divergence of the mean size, more precisely : $`\tau =\sigma +2`$. Here for $`\mu =1`$ we have $`\tau =5/2`$ and $`\sigma =1/2`$. So, just one exponent is needed to describe the complete critical behaviour. In this sense, the reversible and irreversible sol-gel transitions, though close, are not equivalent.
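For completeness, the large-$`s`$ forms just quoted follow directly from Stirling's formula, $`s!\simeq s^se^{-s}\sqrt{2\pi s}`$:
$$c_s=\frac{s^{s-2}}{s!}t^{s-1}e^{-st}\simeq \frac{s^{-5/2}}{t\sqrt{2\pi }}\mathrm{exp}\left[-s\left(t-1-\mathrm{ln}t\right)\right],$$
so that $`\alpha =t-t_{cr}-\mathrm{ln}(t/t_{cr})`$ with $`t_{cr}=1`$; $`\alpha `$ vanishes quadratically at $`t_{cr}`$ and is positive on both sides of the transition.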
## III The origin of fluctuations in reversible and irreversible sol-gel models
As noticed by Einstein , fluctuations of a macroscopic variable $`M`$ at equilibrium resemble a Brownian motion in the space of this variable and, hence, one expects the fluctuations to verify :
$`{\displaystyle \frac{<(M-<M>)^2>}{<M>}}\sim \text{Constant}.`$ (22)
’Constant’ means here : independent of the mass of the system. This approach should be true for any short-ranged correlations, i.e., far from any critical behaviour. When close to the critical point, fluctuations are correlated throughout the whole system and Ornstein - Zernike argument yields :
$`<(M-<M>)^2>\sim ϵ^{\nu (d+\eta -2)},`$ (23)
where $`ϵ`$ is the distance to the critical point, $`d`$ is the dimensionality of the system, and $`\nu `$, $`\eta `$ are the two critical exponents related respectively to the divergence of the correlation length and to the divergence of the correlation function. In the same way, the average value of $`M`$ behaves as $`<M>\sim ϵ^\beta `$, so that, without making any additional assumption about relations between critical exponents, one obtains :
$`{\displaystyle \frac{<(M-<M>)^2>}{<M>^\delta }}\sim \text{Constant},`$ (24)
if and only if $`\delta =\nu (d+\eta -2)/\beta `$. $`\delta `$ is here a free parameter and should not be confused with yet another critical exponent. ’Constant’ in (24) means : independent of $`ϵ`$ for an infinite system. On the other hand, by trivial finite-size scaling, this must also be a constant independent of the mass $`N`$ of the system at the critical point. Moreover, if the standard relations between critical exponents hold, we obtain : $`\delta =2`$ for any dimension $`d`$ (for the Landau-Ginzburg theory we have to replace $`d`$ by the upper critical dimension $`d_c=4`$, above which the mean-field is valid). So the fluctuations of an order parameter in thermodynamic systems are expected to behave differently at the critical point and outside of it.
The case of irreversible aggregation models is quite different. Fluctuations in off-equilibrium physical processes are hard to analyze because they develop dynamically and, moreover, they often keep memory of a history of the process. In this sense, large fluctuations in such processes cannot be an unambiguous signature of the critical behaviour. In theoretical studies, it is often assumed that the fluctuations are irrelevant for the correct description of the mean properties of the system (e.g. in the Smoluchowski approach). This point will be revisited below.
We will focus on the cluster-mass distribution $`n_s`$ , the cluster-multiplicity distribution $`P(M_0^{})`$, and the distribution of the gel-mass for finite systems. The cluster-multiplicity $`M_0^{}`$ is just the total number of clusters minus 1, i.e., the largest cluster is omitted. We wish to answer the following questions : (i) what is the importance of the power-law behaviour of the mass-distribution in detecting the critical behaviour? and, (ii) what are the scaling properties of the multiplicity distribution and the gel-mass distribution near the sol-gel transition for reversible and irreversible models? The importance of the pertinent quantity $`M_0^{}`$ in this context is that it is directly observable in many experimental situations where the system is not too large and the cluster masses are not directly accessible, as, e.g., in the hadronization process in strong interaction physics or in the process of formation of the large-scale structures in the Universe. Moreover, information about the normalized moments $`m_0^{}`$ or their fluctuations can also be obtained for large systems, such as in polymerization, colloid aggregation, or aerosol coalescence.
## IV Scaling of the multiplicity-distributions at the reversible and irreversible sol-gel transition
The multiplicity distribution is intensely studied in the strong interaction physics where simple behaviour of much of the data on hadron-multiplicity distribution seems to point to some universality independent of the particular dynamical process. Some time ago, Koba, Nielsen and Olesen suggested an asymptotic scaling of this multiplicity probability distribution in strong interaction physics :
$`<M_0^{}>P(M_0^{})=\mathrm{\Phi }(z),z\equiv {\displaystyle \frac{M_0^{}-<M_0^{}>}{<M_0^{}>}},`$ (25)
where the asymptotic behaviour is defined as $`<M_0^{}>\to \mathrm{\infty }`$, $`M_0^{}\to \mathrm{\infty }`$ for a fixed $`M_0^{}/<M_0^{}>`$ ratio. $`<M_0^{}>`$ is the multiplicity averaged over an ensemble of independent events. The KNO scaling means that data for different energies (hence differing $`<M_0^{}>`$) should fall on the same curve when $`<M_0^{}>P(M_0^{})`$ is plotted against the scaled variable $`M_0^{}/<M_0^{}>`$. Extending this assumption, we suppose the more general scaling form :
$`<M_0^{}>^\delta P(M_0^{})=\mathrm{\Phi }(z_\delta ),z_\delta \equiv {\displaystyle \frac{M_0^{}-<M_0^{}>}{<M_0^{}>^\delta }},`$ (26)
with $`\delta `$ a real parameter and $`\mathrm{\Phi }`$ a positive function. This form will be called the $`\delta `$-scaling. KNO-case corresponds to $`\delta =1`$. The normalization of the probability distribution $`P(M_0^{})`$ and definition of the average value of $`M_0^{}`$, provides the two constraints :
$`lim{\displaystyle {\int }_{-<M_0^{}>^{1-\delta }}^{\mathrm{\infty }}}\mathrm{\Phi }(u)du=1,`$ (27)
(28)
$`lim{\displaystyle {\int }_{-<M_0^{}>^{1-\delta }}^{\mathrm{\infty }}}u\mathrm{\Phi }(u)du=0,`$ (29)
which imply : $`\delta \le 1`$ since $`\mathrm{\Phi }`$ is positive.
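In practice, testing Eq. (26) amounts to histogramming the scaled variable for ensembles of different mean multiplicity and looking for a data collapse. A minimal sketch (Python; the Poisson-distributed "multiplicities" below are a synthetic stand-in for an uncorrelated, non-critical observable, not the percolation data discussed next) is:

```python
import numpy as np

def delta_scaled(values, delta, bins=40):
    """Histogram of z_delta = (M - <M>)/<M>**delta for one ensemble of values.

    With density=True the histogram estimates Phi(z_delta) directly, since
    P(M) dM = Phi(z) dz and dz = dM/<M>**delta.
    """
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    z = (values - mean) / mean ** delta
    phi, edges = np.histogram(z, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, phi

rng = np.random.default_rng(0)
for mean_mult in (100, 1000, 10000):
    sample = rng.poisson(mean_mult, size=200_000)
    z, phi = delta_scaled(sample, delta=0.5)
    print(mean_mult, round(phi.max(), 3))   # peak stays near 1/sqrt(2*pi) ~ 0.40
```

The three synthetic curves collapse onto a common (Gaussian) shape only for $`\delta =1/2`$; trying $`\delta =1`$ instead makes them shrink as $`<M_0^{}>`$ grows, which is the quantitative content of the conjectures discussed below.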
As shown by Botet et al , the multiplicity distribution for the 3d-bond percolation model on the cubic lattice at the infinite-network percolation threshold exhibits the $`\delta `$ \- scaling with $`\delta =1/2`$ . Even though the system experiences the second-order critical phenomenon, fluctuations of the multiplicity-distribution remain small and the KNO-scaling does not hold. Of course, $`m_0^{}`$ is not in this case an order parameter since $`\tau >2`$ even though there is some irregularity in its behavior passing the threshold. This non-analyticity can be illustrated by the exact result for bond-percolation on the Bethe lattice. In this mean-field case, the normalized $`0^{th}`$-moment is :
$`m_0^{}=(1-{\displaystyle \frac{z}{2}}p^{})\left({\displaystyle \frac{1-p}{1-p^{}}}\right)^{2z-2}\simeq {\displaystyle \frac{z-2}{2(z-1)}}-(z-1)ϵ+(1-{\displaystyle \frac{z}{2}})|ϵ|,`$ (30)
with : $`ϵ=p-p_{cr}`$, and $`ϵ\ll 1`$. It is easy to see that there is a jump of the first $`p`$-derivative of $`m_0^{}`$ : $`-z/2`$ for $`p\to p_{cr}^{-}`$, and $`(4-3z)/2`$ for $`p\to p_{cr}^{+}`$. The proper order parameter for this model is the normalized mass of the gel-phase, i.e., the mass of the largest cluster divided by the total mass of the system, $`S_{max}/N`$. Different probability distributions $`P(S_{max}/N)`$ for different system sizes $`N`$ can all be compressed into a unique characteristic function (see Fig. 1) :
$`<S_{max}/N>P(S_{max}/N)=\mathrm{\Omega }\left({\displaystyle \frac{S_{max}-<S_{max}>}{<S_{max}>}}\right),`$ (31)
which is an analogue of the KNO-scaling function (25) of the multiplicity probability distributions. This new result is important since it seems to be a characteristic critical behaviour of the order parameter. Now we will discuss what happens for a dynamical transition.
We have simulated the Smoluchowski approach by a standard Monte-Carlo binary aggregation . At each step of the event-cascade, a couple of clusters, with masses $`i`$ and $`j`$, is chosen randomly with probability proportional to $`(ij)^\mu `$, the time is then increased by the inverse of this probability, and the couple of clusters is replaced by a unique cluster of mass $`i+j`$. The largest cluster is always taken into account in this scenario. Let us discuss here the $`(\mu =1)`$-case ($`K_{i,j}=ij`$). With the proper normalization, the critical gelation time is $`t_{cr}=1/N`$, where $`N`$ is the total mass of the system. The power-law size distribution is indeed recovered numerically with the right exponent $`\tau =5/2`$ (see Fig. 2). In contrast to the percolation model, the multiplicity distribution shows $`(\delta =0.2)`$-scaling (see Fig. 3), corresponding to very small correlated fluctuations. The scaling function, which in this case is asymmetric and sharp, is also quite different from the one for percolation.
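For concreteness, a minimal sketch of such a binary-aggregation step is given below (Python). The pair rates are taken as $`(s_is_j)^\mu `$ and the time increment as the inverse of the total pair rate, i.e. the usual kinetic Monte-Carlo convention; whether this matches the exact time bookkeeping used in the simulations reported here is an assumption on our part, and the $`O(n^2)`$ pair bookkeeping is only meant for modest $`N`$:

```python
import numpy as np

def smoluchowski_mc(N, mu=1.0, seed=0):
    """One realization of irreversible binary aggregation with kernel K_ij = (ij)**mu.

    Starts from N monomers; at each step an unordered pair of clusters merges
    with probability proportional to (s_i*s_j)**mu, and the time advances by
    the inverse of the total pair rate.  Returns snapshots (t, masses).
    """
    rng = np.random.default_rng(seed)
    masses = np.ones(N)
    t, history = 0.0, []
    while len(masses) > 1:
        w = np.outer(masses, masses) ** mu
        np.fill_diagonal(w, 0.0)              # a cluster cannot merge with itself
        t += 2.0 / w.sum()                    # 1 / (sum over unordered pairs)
        flat = rng.choice(w.size, p=(w / w.sum()).ravel())
        i, j = divmod(flat, len(masses))
        merged_mass = masses[i] + masses[j]
        masses = np.delete(masses, [i, j])
        masses = np.append(masses, merged_mass)
        history.append((t, masses.copy()))
    return history

hist = smoluchowski_mc(256, mu=1.0)
t_cr = 1.0 / 256                              # critical gelation time quoted above
t_snap, m_snap = min(hist, key=lambda h: abs(h[0] - t_cr))
# multiplicity M'_0 (all clusters but the largest) and largest mass near t_cr
print(t_snap, len(m_snap) - 1, m_snap.max())
```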
From the point of view of the criticality, the moment $`M_1^{}`$ should be a better candidate. The $`M_1^{}`$-distribution in the $`z_\delta `$ - variable with $`\delta =0.67`$ for masses $`N=1024`$, $`N=4096`$ and $`N=16384`$ is shown in Fig. 4. This non-trivial value of $`\delta `$ is here a signature of the transition : it disappears for $`t>t_{cr}`$ (see Fig. 5). But this is not yet analogous to the KNO-scaling because $`M_1^{}`$ is not exactly the order parameter. The true quantity to use is the reduced average mass of the largest cluster : $`S_{max}/N=(N-M_1^{})/N=1-m_1^{}`$ for which $`(\delta =1)`$-scaling holds (see Fig. 6), as suggested by the phase-transition arguments, and in accordance also with the equilibrium percolation model.
The results we have obtained here for the reversible or irreversible sol-gel transition in thermodynamical or dynamical systems are expected to satisfy the following general conjectures :
* The occurrence of $`(\delta =1)`$-scaling in the probability distribution $`P(M)`$ of a certain macroscopic quantity $`M`$ is the sign of a critical behavior, with $`M`$ as the order parameter;
* The occurrence of scaling with $`1/2<\delta <1`$ in $`P(M)`$ is the sign of a critical behavior in the system, with $`M`$ related to but not identical to the order parameter;
* The occurrence of $`\delta =1/2`$-scaling in the distribution $`P(M)`$ is the sign that the variable $`M`$ is not singular.
These conjectures have also recently been checked for kinetic fragmentation systems, and the results support them.
## V Conclusions
We have studied here the scaling features of certain macroscopic quantities which are relevant for the sol-gel transition, such as the first two moments of the cluster-mass distribution and the mass of the gel, i.e., the order parameter. We have employed two different models : the percolation model, which exhibits an equilibrium phase transition, and the off-equilibrium Smoluchowski theory, which shows a dynamical phase transition. The phase transition threshold in both cases is associated with the very particular scaling (31) of the order parameter fluctuations. For Smoluchowski’s equations, this scaling is seen uniquely at the critical gelation time $`t_{cr}`$ and disappears both for $`t<t_{cr}`$ and $`t>t_{cr}`$. It is interesting to notice that for $`t>t_{cr}`$, the cluster-mass distribution is a power law but, nevertheless, the order parameter fluctuations exhibit the small-amplitude limit of fluctuations, i.e., the Gaussian scaling.
For quantities other than the order parameter, similar scaling laws (see eq. (16)) are present but now with a different value of the parameter $`\delta `$ of the scaling law. We have shown that the scaling : $`<M>^\delta P(M)=\mathrm{\Phi }((M-<M>)/<M>^\delta )`$ for $`M,<M>\to \mathrm{\infty }`$, holds for any macroscopic quantity and the value of $`\delta `$ is : $`1/2`$ for non-critical quantities, $`0<\delta <1`$ ($`\delta \ne 1/2`$ and $`\delta \ne 1`$) for critical quantities which are related to but not identical to the order parameter, and $`1`$ for the order parameter, respectively. These new scaling laws could be used in phenomenological applications either to find the phase transition threshold or to investigate whether the observed quantity is regular, critical, or whether it corresponds to the order parameter in the studied process. For example, in high-energy lepton-lepton collisions, the scaling behaviour with $`\delta =1`$ has been reported in the multiplicity distributions of the produced particles for various collision energies . This behaviour has been interpreted as a signature of the asymptotic limit (the Feynman scaling) in the $`S`$-matrix behaviour, in spite of the fact that the Feynman scaling has been experimentally disproved by the continuing rise of particle density in the central region. The analysis presented in this work offers an alternative and much more plausible explanation for these findings in terms of the off-equilibrium phase transition. It is indeed challenging to see the a priori unexpected relation between such different physical phenomena as the sol-gel transition phenomenon and the particle production in the ultrarelativistic collisions of leptons. It would also be extremely interesting to apply the mathematical tools proposed in this work in the more typical aggregation phenomena as found in ….
Figure captions
Fig. 1
The gel-mass distribution for the 3d-bond percolation model on cubic lattices of different sizes : $`N=8^3`$ (crosses), $`N=10^3`$ (asterisks), $`N=12^3`$ (triangles), and for the fixed bond probability $`p_{cr}=0.2488`$. Each point is the average over $`200000`$ events.
Fig. 2
Double-logarithmic plot of the average mass-distribution of the Monte-Carlo Smoluchowski simulations with aggregation kernel $`K_{i,j}=ij`$, for the two different system masses : $`1024`$ (diamonds) and $`N=4096`$ (filled circles) , at the critical gelation-time ($`t_{cr}=1/N`$).
Fig. 3
The multiplicity distribution for the Monte-Carlo Smoluchowski simulations with aggregation kernel $`K_{i,j}=ij`$, for the two different system masses : $`1024`$ (diamonds) and $`N=4096`$ (filled circles), in the $`z_\delta `$-variable ( $`\delta =0.2`$) , at the critical gelation-time. Each point is the average over $`10^5`$ independent events.
Fig. 4
The $`M_1^{}`$-distribution for the Monte-Carlo Smoluchowski simulations with aggregation kernel $`K_{i,j}=ij`$, for the two different system masses : $`1024`$ (diamonds) and $`N=4096`$ (filled circles), in the $`z_\delta `$-variable ( $`\delta =0.67`$) , at the critical gelation-time. Each point is average over $`10^5`$ independent events.
Fig. 5
The same as Fig. 4, but in the $`(\delta =1/2)`$-variable and after the critical gelation-time ($`t_{cr}=2/N`$).
Fig. 6
The $`S_{max}`$-distribution for the Monte-Carlo Smoluchowski simulations with aggregation kernel $`K_{i,j}=ij`$, for the two different system masses : $`1024`$ (diamonds) and $`N=4096`$ (filled circles), in the KNO-variable at the critical gelation-time. Each point is the average over $`250000`$ independent events.
# Comment on “Theory of metal-insulator transitions in gated semiconductors”
In a new paper, Altshuler and Maslov suggest a model to account for the experimentally observed decrease in resistivity with decreasing temperature which has been attributed to a transition to an unexpected conducting phase in two-dimensional electron (or hole) systems (see, e.g., Refs. ). The mechanism they propose is based on charging and discharging of the positively-charged traps which are known to exist in the oxide close to the Si-SiO<sub>2</sub> interface in silicon MOSFET’s. Within this model, the strong temperature dependence and magnetic field dependence of the resistance observed in the experiments derive from temperature- and field-induced changes in the charge state of the traps. The anomalous behavior of these materials is thus ascribed to properties of the oxide and the Si-SiO<sub>2</sub> interface rather than to intrinsic behavior of interacting electrons associated with a conductor-insulator transition in two dimensions.
Although the theoretical curves shown in Ref. appear qualitatively similar to the experimental data , close examination reveals significant discrepancies. For example, the measured resistivity of silicon MOSFET’s strongly depends on temperature and magnetic field only at temperatures below approximately $`\frac{1}{3}T_F`$, corresponding to a dimensionless temperature $`t=0.08`$ (at $`ϵ_F/ϵ_d=0.25`$) in Ref. (here $`T_F`$ is the Fermi temperature). In contrast, the resistivity calculated by Altshuler and Maslov varies with temperature and with magnetic field at temperatures well above $`t=0.08`$. In fact, the calculated resistivity is shown in Ref. only for $`t>0.05`$ ($`t>0.1`$ in most figures), i.e., for the regime where one observes essentially no temperature or magnetic field dependence of the resistivity. Moreover, a change in gate voltage of a few percent at $`t=0.25`$ (corresponding to a “high” temperature $`T=T_F`$) causes more than an order of magnitude change in the calculated resistivity , in sharp contrast with experiments which show that the resistivity depends so strongly on gate voltage only at low temperatures, $`T<<T_F`$.
While some of these differences could possibly be remedied by refining the theory, there is a more general and quite fundamental difficulty with the Altshuler-Maslov model which we would like to raise in this Comment.
The model attributes the behavior of a variety of electron and hole systems to unintended and uncontrolled traps introduced during the fabrication of the samples. One surely expects behavior determined by details associated with traps (the number of these traps as well as their distribution in energy) to be sample-specific, contrary to what is found experimentally. For example, although samples with different spacer material were used in the experiments on p-GaAs/AlGaAs heterostructures in Refs. and , very similar behavior was found for the resistivity. To illustrate this point further, we consider the value of the resistivity, $`\rho _c`$, at the separatrix between conducting and insulating phases. In Si MOSFET’s , p-SiGe heterostructures , p-GaAs/AlGaAs heterostructures , n-AlAs heterostructures , and n-GaAs/AlGaAs heterostructures , this value was found to vary between 0.4 and $`3h/e^2`$. Within the Altshuler-Maslov model, the resistivity at the separatrix is determined by two quantities: the number of traps, variously estimated to be between $`10^{10}`$ and $`10^{12}`$cm<sup>-2</sup> in Si MOSFET’s (and unknown in GaAs/AlGaAs heterostructures), and “critical” particle density determined by the trap energy level $`ϵ_t`$, which is also not known. It is very unlikely that a strong function (see Eq. 9 a,b in Ref. ) of two largely unknown parameters which are surely different from sample to sample, and especially from material to material, would yield critical resistivities that differ by less than a factor of ten in the five different systems studied to date.
S. V. Kravchenko
Physics Department, Northeastern University, Boston, Massachusetts 02115
M. P. Sarachik
Physics Department, City College of the City University of New York, New York, New York 10031
D. Simonian
Department of Physics, Columbia University, New York, New York 10027
# Spin Relaxation of Conduction Electrons
## I INTRODUCTION
Electron spin is becoming increasingly popular in electronics. New devices, now generally referred to as spintronics, exploit the ability of conduction electrons in metals and semiconductors to carry spin polarized current. Three factors make spin of conduction electrons attractive for future technology: (1) electron spin can store information, (2) the spin (information) can be transferred as it is attached to mobile carriers, and (3) the spin (information) can be detected. In addition, the possibility of having long spin relaxation time or spin diffusion length in electronic materials makes spintronics a viable potential technology.
Information can be stored in a system of electron spins because these can be polarized. To represent bits, for example, spin up may stand for one, spin down for zero. But the sheer existence of two spin polarizations is of limited use if we do not have means of manipulating them. Currently used methods of polarizing electron spins include magnetic field, optical orientation, and spin injection. Polarization by magnetic field is the traditional method that works for both metals and semiconductors. Spin dynamics in semiconductors, however, is best studied by optical orientation where spin-polarized electrons and holes are created by a circularly polarized light. Finally, in the spin injection technique a spin-polarized current is driven, typically from a ferromagnet, into the metallic sample. Since spin is both introduced and transferred by current, this method is most promising for spintronics. Unfortunately, thus far spin injection has not been convincingly demonstrated in semiconductors.
The second factor, the ability of information transfer by electron spins, relies on two facts. First, electrons are mobile and second, electrons have a relatively large spin memory. Indeed, conduction electrons “remember” their spins much longer than they remember momentum states. In a typical metal, momentum coherence is lost after ten femtoseconds, while spin coherence can survive more than a nanosecond. As a result, the length $`L_1`$, the spin diffusion length, over which electrons remain spin polarized is much longer than the mean free path distance $`\mathrm{}`$ over which their momentum is lost. Since $`L_1`$ is the upper limit for the size of spintronic elements (in larger elements the spin-encoded information fades away) it is not surprising that significant effort went into finding ways of reducing the spin relaxation. Quite unexpectedly, in quantum wells, but even in bulk semiconductors, donor doping was found to increase the spin memory of conduction electrons by up to three orders of magnitude. In metals one has much less freedom in manipulating electron states. A theoretical study, however, predicts that even there spin memory can be changed by orders of magnitude by band-structure tailoring. Alloying of polyvalent metals with monovalent ones can increase the spin memory by a decade or two. The ability of conduction electrons to transport spin-polarized current over distances exceeding micrometers has now been demonstrated in both metals and semiconductors.
Finally, after the spin is transferred, it has to be detected. In many experiments the spin polarization is read optically: photoexcited spin-polarized electrons and holes in a semiconductor recombine by emitting circularly polarized light; or the electron spins interact with light and cause a rotation of the light polarization plane. It was discovered, however, that spin can be also measured electronically, through charge-spin coupling. When spin accumulates on the conductor side at the interface of a conductor and a ferromagnet, a voltage or a current appears. By measuring the polarity of the voltage or the current, one can tell the spin orientation in the conductor. Like spin injection, spin-charge coupling has been demonstrated only in metals.
The operational synthesis of spin (information) storage, transfer, and detection can be illustrated with concrete devices. A spin transistor is a trilayer that consists of a nonmagnetic metal (base) sandwiched between two ferromagnets (emitter and collector). Spin-polarized current injected into the base from the emitter causes spin accumulation at the base-collector interface. If the collector magnetic moment is opposite to the spin polarization of the current (and parallel to the emitter magnetic moment, if the injected electrons are from the spin-minority subband), the current flows from the base into the collector. If the collector magnetic moment is parallel to the spin polarization, the current is reversed. In order for the spin accumulation to occur, the current in the metallic base must remain polarized–the base must be thinner than $`L_1`$. Similar principles work in the giant magnetoresistance effect. Multilayer structures with alternating nonmagnetic and ferromagnetic metals have their resistance strongly dependent on the relative orientation of the ferromagnetic moments. The resistance is small if the moments point in the same direction, and large if the directions of neighboring moments are reversed. Again, the information about the moment of a ferromagnetic layer is encoded into electron spins which carry this information through a contiguous nonmagnetic metal into another ferromagnet. Here the information is read, and in the ideal case the electron is let into the ferromagnet only if its spin is opposite to the direction of the ferromagnetic moment. Otherwise the electron is scattered at the interface.
Several recent reviews focus on spin-polarized transport. An overview of the subject can be found in . Spin transistors, spin injection, and charge-spin coupling in metallic systems is treated in ; a comprehensive account of optical orientation is given in , and recent reviews of giant magnetoresistance are in . Many suggested spintronic devices have not been demonstrated yet, but their potential seems enormous. Industrial issues related to spintronics can be found in , and describes some of the recent spintronic schemes and devices.
The present article introduces basic concepts of the spin relaxation of conduction electrons and identifies important unresolved issues in both semiconductors and metals. Particular emphasis is given to the recent experimental and theoretical work that attempts to enhance and/or understand electron spin coherence in electronic materials.
## II MECHANISMS OF SPIN RELAXATION
Spin relaxation refers to the processes that bring an unbalanced population of spin states into equilibrium. If, say, spin up electrons are injected into a metal at time $`t=0`$ creating a spin imbalance, at a later time, $`t=T_1`$ (the so called spin relaxation time), the balance is restored by a coupling between spin and orbital degrees of freedom. Three spin-relaxation mechanisms have been found to be relevant for conduction electrons (Fig. 1): the Elliott-Yafet, D’yakonov-Perel’, and Bir-Aronov-Pikus.
The Elliott-Yafet mechanism is based on the fact that in real crystals Bloch states are not spin eigenstates. Indeed, the lattice ions induce the spin-orbit interaction that mixes the spin up and spin down amplitudes. Usually the spin-orbit coupling $`\lambda `$ is much smaller than a typical band width $`\mathrm{\Delta }E`$ and can be treated as a perturbation. Switching on the spin-orbit interaction adiabatically, an initially spin up (down) state acquires a spin down (up) component with amplitude $`b`$ of order $`\lambda /\mathrm{\Delta }E`$. Since $`b`$ is small, the resulting states can still be named “up” and “down” according to their largest spin component. Elliott noticed that an ordinary (spin independent) interaction with impurities, boundaries, interfaces, and phonons can connect “up” with “down” electrons, leading to spin relaxation whose rate $`1/T_1`$ is proportional to $`b^2/\tau `$ ($`\tau `$ being the momentum relaxation time determined by “up” to “up” scattering). Additional spin-flip scattering is provided by the spin-orbit interaction of impurities, and by the phonon-modulated spin-orbit interaction of the lattice ions (Overhauser ). The latter should be taken together with the Elliott phonon scattering to get the correct low-temperature behavior of $`1/T_1`$ . Yafet showed that $`1/T_1`$ follows the temperature dependence of resistivity: $`1/T_1\sim T`$ at temperatures $`T`$ above the Debye temperature $`T_D`$, and $`1/T_1\sim T^5`$ at very low $`T`$ in clean samples (neutral impurities lead to $`T`$-independent spin relaxation). Elliott-Yafet processes due to the electron-electron scattering in semiconductors were evaluated by Boguslawski .
In crystals that lack inversion symmetry (such as zincblende semiconductors) the spin-orbit interaction lifts the spin degeneracy: spin up and spin down electrons have different energies even when in the same momentum state. This is equivalent to having a momentum-dependent internal magnetic field $`𝐁(𝐤)`$ which is capable of flipping spins through an interaction term of the form $`𝐁(𝐤)\cdot 𝐒`$, with $`𝐒`$ denoting the electron spin operator. (This term can be further modulated by strain or by interface electric fields). D’yakonov and Perel’ showed that the lifting of the spin degeneracy leads to spin relaxation. Typically the distance between spin up and down bands is much smaller than the frequency $`1/\tau `$ of ordinary scattering by impurities, boundaries, or phonons. Consider an electron with momentum $`𝐤`$. Its spin precesses along the axis given by $`𝐁(𝐤)`$. Before completing a full cycle, the electron scatters into momentum $`𝐤^{}`$ and begins to precess along the direction now given by $`𝐁(𝐤^{})`$, and so on. The electron spin perceives the scattering through randomly changing precession direction and frequency. The precession angle along the axis of initial polarization (or any other fixed axis) diffuses so its square becomes about $`(t/\tau )(\omega \tau )^2`$ after time $`t`$ ($`\omega `$ is the typical precession frequency). By definition $`T_1`$ is the time when the precession angle becomes of order one. Then $`1/T_1\sim \omega (\omega \tau )`$. The factor $`(\omega \tau )`$ is a result of motional narrowing as in nuclear magnetic resonance . The spin relaxation rate $`1/T_1`$ is proportional to the momentum relaxation time $`\tau `$. We note that in strong magnetic fields the precession along the randomly changing axis is suppressed (spins precess along the external field and electron cyclotron motion averages over different internal magnetic fields), leading to a reduction of the D’yakonov-Perel’ spin relaxation.
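Spelling out the last step of this motional-narrowing estimate: requiring the accumulated mean-square precession angle to be of order unity at $`t=T_1`$ gives

$$\frac{T_1}{\tau }(\omega \tau )^2\sim 1\quad \Rightarrow \quad \frac{1}{T_1}\sim \omega ^2\tau =\omega (\omega \tau ),$$

so that, in contrast to the Elliott-Yafet case, more frequent momentum scattering (shorter $`\tau `$) actually slows down the D’yakonov-Perel’ spin relaxation.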
Another source of spin relaxation for conduction electrons was found by Bir, Aronov, and Pikus in the electron-hole exchange interaction. This interaction depends on the spins of interacting electrons and holes and acts on electron spins as some effective magnetic field. The spin relaxation takes place as electron spins precess along this field. In many cases, however, hole spins change with the rate that is much faster than the precession frequency. When that happens the effective field which is generated by the hole spins fluctuates and the precession angle about a fixed axis diffuses as in the case of the D’yakonov-Perel’ process. The electron spin relaxation rate $`1/T_1`$ is then “motionally” reduced and is proportional to the hole spin relaxation time. Similar reduction of $`1/T_1`$ occurs if holes that move faster than electrons change their momentum before electron spins precess a full cycle. The Bir-Aronov-Pikus spin relaxation, being based on the electron-hole interaction, is relevant only in semiconductors with a significant overlap between electron and hole wave functions.
## III SEMICONDUCTORS
Spin relaxation in semiconductors is rather complex. First, there are different charge carriers to consider. Both electrons and holes can be spin polarized and carry spin-polarized currents. Furthermore, some features of the observed luminescence polarization spectra must take into account excitons, which too, can be polarized. Second, in addition to temperature and impurity content the spin relaxation is extremely sensitive to factors like doping, dimensionality, strain, magnetic and electrical fields. The type of dopant is also important: electrons in $`p`$-type samples, for example, can relax much faster than in $`n`$-type samples. And, finally, since the relevant electron and hole states are typically very close to special symmetry points of the Brillouin zone, subtleties of the band structure often play a decisive role in determining which spin relaxation mechanism prevails. (Band structure also determines what is polarized–often due to degeneracy lifting, spin and orbital degrees are entirely mixed and the total angular momentum is what is referred to as “spin.”) The above factors make sorting out different spin relaxation mechanisms a difficult task.
The first measurement of $`T_1`$ of free carriers in a semiconductor was reported in Si by Portis et al. and Willenbrock and Bloembergen ; these measurements were done by conduction electron spin resonance. Silicon, however, remains still very poorly understood in regards to its spin transport properties. Very little is known, for example, about electronic spin-flip scattering by conventional $`n`$ and $`p`$ dopants. Considering that Si may be an important element for spintronics since it is widely used in conventional electronics, its spin relaxation properties should be further investigated.
Much effort was spent on III-V semiconductors where optical orientation enables direct measurement of $`T_1`$. In these systems holes relax much faster than electrons because hole Bloch states are almost an equal admixture of spin up and down eigenstates. The Elliott-Yafet mechanism then gives $`T_1`$ of the same order as $`\tau `$. In quantum wells (QW), however, $`T_1`$ of holes was predicted by Uenoyama and Sham and Ferreira and Bastard to be quenched, and even longer than the electron-hole recombination time. This was observed experimentally in $`n`$-modulation doped GaAs QWs by Damen et al. who measured hole spin relaxation time of 4 ps at 10 K. Hole and exciton spin relaxation was reviewed by Sham.
Compared to holes, electrons in III-V systems remember their spins much longer and are therefore more important for spintronic applications. Typical measured values of electron $`T_1`$ range from $`10^{-11}`$ to $`10^{-7}`$ s. All three spin relaxation mechanisms have been found to contribute to $`T_1`$. Although it is difficult to decide which mechanism operates under the specific experimental conditions (this is because in some cases two mechanisms yield similar $`T_1`$, but also because experiments often disagree with each other), some general trends are followed. The Elliott-Yafet mechanism dominates in narrow-gap semiconductors, where $`b^2`$ is quite large ($`\mathrm{\Delta }E\sim E_g`$ is small). Chazalviel studied $`n`$-doped InSb ($`E_g\approx 0.2`$ eV) and found that Elliott-Yafet scattering by ionized impurities explains the observed $`1/T_1`$.
If the band gap is not too small, the D’yakonov-Perel’ mechanism has been found relevant at high temperatures and sufficiently low densities of holes. The D’yakonov-Perel’ mechanism can be quite easily distinguished from the Elliott-Yafet one: the former leads to $`1/T_1\propto \tau `$ while for the latter $`1/T_1\propto 1/\tau `$. The increase in the impurity concentration decreases the efficiency of the D’yakonov-Perel’ processes and increases those due to Elliott and Yafet. Another useful test of the D’yakonov-Perel’ mechanism is its suppression by magnetic field. The first experimental observation of the D’yakonov-Perel’ mechanism was reported by Clark et al. on moderately doped $`p`$ samples of GaAs and GaAlAs. Later measurements on less doped samples of GaAs by Maruschak et al. and Zerrouati et al. confirmed that the D’yakonov-Perel’ mechanism is dominant in GaAs at elevated temperatures.
At low temperatures and in highly $`p`$-doped samples (acceptor concentration larger than $`10^{17}`$ cm<sup>-3</sup>) the Bir-Aronov-Pikus mechanism prevails. As the acceptor concentration increases this mechanism reveals itself at progressively higher temperatures. An increase of $`1/T_1`$ with increasing $`p`$ doping signals that the electron-hole spin relaxation is relevant. This was demonstrated in $`p`$-type GaAs (for example, Zerrouati et al. , Maruschak et al. , and Fishman and Lampel) and GaSb (Aronov et al. ). The physics of spin relaxation in $`p`$-doped III-V semiconductors is very rich because several different mechanisms have been shown relevant. More work, however, still needs to be done. It is not clear, for example, what happens at very low temperatures and in very pure samples. There are some indications that at very low temperatures both the D’yakonov-Perel’ and the Bir-Aronov-Pikus mechanisms can explain the observed data at whatever doping. Excellent reviews of conduction electron spin relaxation in bulk III-V semiconductors are . These references contain both experimental data and many useful formulas of $`1/T_1`$.
Electron spin relaxation has also been studied in quantum wells. That spin dynamics in quantum wells differs from that in the bulk is obvious from the fact that the relevant spin relaxation mechanisms are very sensitive to factors like mobility (which is higher in QWs), electron-hole separation (smaller in QWs) and electronic band structure (more complicated in QWs because of subband structures and interface effects). Furthermore, the quality of QW samples is very important since $`1/T_1`$ is strongly influenced by localization and defects. The first measurement of conduction electron $`T_1`$ in a QW was reported by Damen et al. who studied the dynamics of luminescence polarization in $`p`$-modulation doped GaAs/AlGaAs, and obtained $`T_1\approx 0.15`$ ns at low temperatures. This relaxation time is three to four times smaller than in a similar bulk sample (the acceptor concentration was $`4\times 10^{11}`$ cm<sup>-2</sup>). It was concluded that the relevant mechanism was Bir-Aronov-Pikus. The recent theoretical study by Maialle and Degani of the Bir-Aronov-Pikus relaxation in QWs indicates that, to the contrary, this mechanism is not efficient enough to explain the experiment. Another possibility is the D’yakonov-Perel’ mechanism. Bastard and Ferreira calculated the effectiveness of this mechanism for the case of ionized impurity scattering. Their calculation shows that the D’yakonov-Perel’ mechanism is also too weak to explain the experiment. Although some assumptions of the theoretical studies may need to be reexamined (the major difficulty seems to be estimating $`\tau `$) , further experimental work (such as temperature and doping dependence) is required to decide on the relevant mechanism. Recently Britton et al. studied the spin relaxation in undoped GaAs/AlGaAs multiple quantum wells at room temperature. The measured relaxation times vary between 0.07 and 0.01 ns, decreasing strongly with increasing confinement energy. These results seem to be consistent with the D’yakonov-Perel’ mechanism .
Spin relaxation studies in quantum wells also promise better understanding of interface effects. In an inversion layer an electric field arises from the electrostatic confinement. This field induces a spin-orbit interaction which contributes to the spin-splitting (the so called Rashba splitting) of electron bands in addition to the inversion-asymmetry splitting. This should enhance the efficiency of the D’yakonov-Perel’ mechanism. Spin precession of conduction electrons in GaAs inversion layers was investigated by Dresselhaus et al. using antilocalization. The spin relaxation was found to be due to the D’yakonov-Perel’ mechanism, but the spin splitting was identified (by magnetic field dependence) to be primarily due to the inversion asymmetry. This is consistent with an earlier theoretical study of Lommer et al. of spin splitting in heterostructures, which predicted that in GaAs/AlGaAs QWs the Rashba term in the Hamiltonian is weak. In narrow-band semiconductors, however, Lommer et al. predict that the Rashba term becomes relevant. But this remains a not-yet-verified theoretical prediction. Another interesting study of the interface effects was done recently by Guettler et al. following the calculations of Vervoort et al. . Quantum well systems in which wells and barriers have different host atoms (so called “no-common-atom” heterostructures) were shown to have conduction electron spin relaxation enhanced by orders of magnitude compared to common-atom heterostructures. In particular, spin relaxation times in (InGa)As/InP QWs were found to be 20 (90) ps for electrons (holes), while the structures with common host atoms (InGa)As/(AlIn)As have spin relaxation times much longer: 600 (600) ps. This huge difference between otherwise similar samples is attributed to the large electric fields arising from the asymmetry at the interface (interface dipolar fields) .
Spin relaxation of conduction electrons can be controlled. This was first realized by Wagner et al. who $`\delta `$-doped GaAs/AlGaAs double heterostructures with Be (as acceptor). The measured spin relaxation time was about 20 ns, which is two orders of magnitude longer than in similar homogeneously $`p`$-doped GaAs. This finding can be understood as follows. The sample was heavily doped ($`8\times 10^{12}`$ cm<sup>-2</sup>) so the Bir-Aronov-Pikus mechanism was expected to dominate the relaxation. Photogenerated electrons, however, were spatially separated from holes which stayed mostly at the center of the GaAs layer, close to the Be dopants. There was, however, still enough overlap between electrons and the holes for efficient recombination so that the radiation polarization could be studied. The decrease of the overlap between electrons and holes reduced the efficiency of the Bir-Aronov-Pikus mechanism and increased $`T_1`$. This experiment can also be taken as a confirmation that the Bir-Aronov-Pikus mechanism is dominant in heavily $`p`$-doped heterostructures.
The next important step in controlling spin relaxation was the observation of a large enhancement of the spin memory of electrons in II-VI semiconductor QWs by Kikkawa et al. . Introducing a (two dimensional) electron gas by $`n`$-doping the II-VI QWs was found to increase electronic spin memory by several orders of magnitude. The studied samples were modulation-doped Zn<sub>1-x</sub>Cd<sub>x</sub>Se quantum wells with electron densities $`2\times 10^{11}`$ and $`5\times 10^{11}`$ cm<sup>-2</sup> (an additional insulating sample was used as a benchmark). Spin polarization was induced by a circularly polarized pump pulse directed normal to the sample surface. The spins, initially polarized along the normal, began to precess along an external magnetic field oriented along the surface plane. After a time $`\delta t`$, a probe pulse of linearly polarized light detected the orientation of the spins. The major result of the study was that in doped samples electron spin remained polarized for almost three orders of magnitude longer than in the insulating (no Fermi sea) sample. The measured $`T_1`$ was on the nanosecond scale, strongly dependent on the magnetic field and weakly dependent on temperature and mobility. Although the nanosecond time scales and the increase of the observed polarization in strong magnetic fields (usually a Tesla) could be explained by the D’yakonov-Perel’ mechanism, the temperature and mobility (in)dependence remain a puzzle. The overall increase of $`T_1`$ by donor doping can be understood in the following way. In insulating samples photoexcited spin-polarized electrons quickly recombine with holes. This happens in picoseconds. In the presence of a Fermi sea photoexcited electrons do not recombine (there are plenty of other electrons available for recombination) so they lose their spins in nanoseconds, which are natural time scales for spin relaxation. There is a caveat, however. The above scenario is true only if holes lose their spins faster than they recombine with electrons. Otherwise only electrons from the Fermi sea with a preferred spin would recombine, leaving behind a net opposite spin that counters that of the photoexcited electrons. The fast hole relaxation certainly happens in the bulk (and similar enhancement of $`T_1`$ has been observed in $`n`$-doped bulk GaAs by Kikkawa and Awschalom), but not necessarily in quantum heterostructures. This issue therefore remains open. Very recent optically pumped nuclear magnetic resonance measurements in $`n`$-doped AlGaAs/GaAs multiple quantum well systems indicate unusually long $`T_1\approx 100\mu `$s at temperatures below 500 mK in the two-dimensional electron gas system under the application of a strong ($`12`$ T) external magnetic field. It is unclear whether this remarkable decoupling (that is, $`T_1\approx 100\mu `$s) of the two-dimensional electron gas spins from its environment is an exotic feature of the fractional quantum Hall physics dominating the system, or is a more generic effect which could be controlled under less restrictive conditions.
It was recently demonstrated that spin polarized current can flow in a semiconductor. Hägele et al. used a simple but ingenious setup that consisted of a micrometer-sized $`i`$-GaAs block attached to a $`p`$-modulation doped GaInAs QW layer. The free surface of the GaAs block was illuminated by a circularly polarized light. The photogenerated electrons then drifted towards the QW under the force of an applied electric field (photoexcited holes moved in the opposite direction towards the surface). The electrons recombined with holes upon hitting the QW, emitting light. By observing the polarization of the emitted light Hägele et al. concluded that electrons captured by the QW were polarized. The spin was almost completely conserved after the electrons traveled as long as 4 micrometers and under fields up to 6 kV/cm, indicating very long spin diffusion lengths in these experiments .
## IV METALS
Only a dozen elemental metals have been investigated for spin relaxation so far. Early measurements of $`T_1`$ were done by the conduction electron spin resonance technique. This technique was demonstrated for metals by Griswold et al. , and Feher and Kip used it to make the first $`T_1`$ measurement of Na, Be, and Li. These and subsequent measurements established that $`1/T_1`$ in metals depends strongly on the impurity content (especially in light metals like Li and Be) and grows linearly with temperature at high temperatures. Typical spin relaxation times were found to be on the nanosecond scale, although in very pure samples $`T_1`$ can reach microseconds at low temperatures (for example in sodium, as observed by Kolbe ). Reference is a good source of these early spin relaxation measurements.
The next wave of measurements started with the realization of spin injection in metals. Suggested theoretically by Aronov , spin injection was first demonstrated in Al by Johnson and Silsbee. Later measurements were done on Au and Nb films. The spin injection technique enables measurements of $`T_1`$ in virtually no magnetic fields so that $`T_1`$ can now be measured in superconductors, spin glasses, or Kondo systems where magnetic field qualitatively alters electronic states. Furthermore, by eliminating the need for magnetic fields to polarize electron spins one avoids complications like inhomogeneous line broadening, arising from $`g`$ factor anisotropy. Johnson also succeeded in injecting spin polarized electrons into superconducting Nb films. Spin relaxation of electrons (or, rather, quasiparticles) in superconductors is, however, poorly understood and the experiments, now done mostly on high-$`\mathrm{T}_\mathrm{c}`$ materials only manifest the lack of theoretical comprehension of the subject. Still waiting for its demonstration is spin injection into semiconductors. Although it was predicted long ago by Aronov and Pikus it still remains a great experimental challenge.
The observations that $`1/T_1\sim T`$ at high temperatures, the strong dependence on impurities, and the characteristic nanosecond time scales have led to the general belief that conduction electrons in metals lose their spins by the Elliott-Yafet mechanism. Although simple estimates and even some analytical calculations were done for the simplest metals like Na (Yafet), careful numerical calculations are lacking. Experimental data are usually analyzed to see if the simple relation suggested by Yafet,
$`1/T_1\sim b^2\rho ,`$ (1)
where $`\rho `$ is resistivity, is obeyed. The spin-mixing $`b^2`$ is the fitting parameter so the temperature behavior of $`1/T_1`$ is determined solely by $`\rho `$. At high temperatures $`1/T_1\sim \rho \sim T`$ as observed. At low temperatures the spin relaxation should obey the Yafet law $`1/T_1\sim T^5`$ (in parallel to the Bloch law $`\rho \sim T^5`$), but so far this has not been observed, mainly due to the large contribution from impurity and boundary scattering. (Even after subtracting this temperature independent background the uncertainties of the measurements prevent a definite experimental conclusion about the low $`T`$ behavior.)
Equation 1 suggests that dividing $`1/T_1`$ by $`b^2`$ one obtains resistivity, up to a multiplicative (material independent) constant. Resistivity, divided by its value $`\rho _D`$ at $`T_D`$ and expressed as a function of the reduced temperature $`T/T_D`$, follows a simple Grüneisen curve, the same for all simple metals. Monod and Beuneu applied this reasoning to the then available experimental data on $`T_1`$. For the spin mixing $`b^2`$ they substituted values obtained from atomic parameters of the corresponding elements. The resulting (revised) scaling is reproduced in Fig. 2. (The original scaling has $`\mathrm{\Gamma }_s`$ divided by $`b^2`$, not by $`b^2\rho _D`$.) The picture is surprising. While some metals (the “main group”) nicely follow a single Grüneisen curve, others do not. There seems to be no obvious reason for the observed behavior. The metals Na and Al, for example, are quite similar in that their atomic $`b^2`$ differ by less than 10%. Yet the spin relaxation times at $`T_D`$ are 0.1 ns for Al and 20 ns for Na .
The solution to this puzzle can be found by recognizing that the main group is formed by monovalent alkali and noble metals, while the metals with underestimated $`b^2`$, Al, Pd, Mg, and Be, are polyvalent (no other metals have been measured for $`T_1`$ in a wide enough temperature region). Monovalent metals have their Fermi surfaces inside Brillouin zone boundaries so that the distance between neighboring bands, $`\mathrm{\Delta }E`$, is quite uniform and of the order of the Fermi energy $`E_F`$. The spin mixing is then $`b^2\approx (\lambda /E_F)^2`$ for all states on the Fermi surface. Polyvalent metals, on the other hand, have Fermi surfaces which cross Brillouin zone boundaries, and often also special symmetry points and accidental degeneracy lines. When this happens the electron spin relaxation is significantly enhanced. This was first noted by Silsbee and Beuneu who estimated the contribution to Al $`1/T_1`$ from accidental degeneracy lines. Later the present authors gave a rigorous and detailed treatment of how not only accidental degeneracy, but all the band anomalies contribute to $`1/T_1`$ . This treatment led to the spin-hot-spot model which explains why all the measured polyvalent metals have spin relaxation faster than expected from a naive theory. In addition to explaining experiment, the spin-hot-spot model predicts the behavior of other polyvalent metals. The model is illustrated in Fig. 3.
As an example, consider a metal whose Fermi surface crosses a single Brillouin zone boundary. The distance between energy bands $`\mathrm{\Delta }E`$ is about $`E_F`$ for all Fermi surface states except those close to the boundary. There $`\mathrm{\Delta }E\approx 2V`$, where $`V`$ is the Fourier component of the lattice potential associated with the boundary. Since in most cases $`V\ll E_F`$ the spin mixing $`b^2\approx (\lambda /V)^2`$ is much larger than on average. If an electron jumps into such states, the chance that its spin will be flipped is much enhanced. Similarly if the electron jumps from these “spin hot spots.” But how much the states with $`\mathrm{\Delta }E\approx 2V`$ contribute to spin relaxation depends on how many they are relative to the number of states on the Fermi surface. A single electron experiences thousands of jumps due to momentum scattering before its spin flips. Therefore the spin relaxation rate $`1/T_1`$ is determined by the average $`\langle b^2\rangle `$ of $`b^2`$ over the Fermi surface. The majority of states with $`\mathrm{\Delta }E\sim E_F`$ contribute $`(\lambda /E_F)^2\times 1`$ (the value of $`b^2`$ times the probability of occurrence, which in this case is close to one) to $`\langle b^2\rangle `$. The probability of finding a state with $`\mathrm{\Delta }E\approx 2V`$ on the Fermi surface turns out to be about $`V/E_F`$ , so the spin hot spots contribute about $`(\lambda /V)^2\times (V/E_F)`$, which is $`(\lambda /E_F)^2\times (E_F/V)`$. This is larger by $`E_F/V`$ than the contribution from ordinary states. Typically $`E_F/V\sim 10`$, and considering that in reality the Fermi surface crosses more than one Brillouin zone boundary, the spin relaxation can be enhanced up to two orders of magnitude. Electron jumps that include at least one spin-hot-spot state dominate spin relaxation to the extent that the majority of scattering events (those outside the spin hot spots) can be neglected.
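To make this order-of-magnitude argument concrete, here is a minimal numerical sketch (Python); the values of $`\lambda `$, $`E_F`$ and $`V`$ below are generic placeholders, not parameters of any particular metal.

```python
# Minimal spin-hot-spot estimate of the Fermi-surface average <b^2>.
# The numbers are illustrative placeholders (lam ~ spin-orbit coupling,
# E_F ~ Fermi energy, V ~ lattice potential at the zone boundary).
lam = 0.02   # eV
E_F = 5.0    # eV
V   = 0.5    # eV

b2_ordinary = (lam / E_F) ** 2            # typical states, weight ~ 1
b2_hotspot  = (lam / V) ** 2 * (V / E_F)  # zone-boundary states, weight ~ V/E_F

enhancement = (b2_ordinary + b2_hotspot) / b2_ordinary   # ~ 1 + E_F/V
print(f"<b^2> without hot spots: {b2_ordinary:.1e}")
print(f"hot-spot contribution  : {b2_hotspot:.1e}")
print(f"enhancement of 1/T1    : ~{enhancement:.0f}")    # ~ 11 for these inputs
```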
The spin-hot-spot picture not only solves a long-standing experimental puzzle, but also shows a way to tailor the spin relaxation of electrons in a conduction band. Spin relaxation of a monovalent metal, for example, can be enhanced by alloying with a polyvalent metal. This brings more electrons into the conduction band. As the Fermi surface increases, it begins to cross Brillouin zone boundaries and other spin-hot-spot regions. The enhancement of $`1/T_1`$ can be significant. Similarly, $`1/T_1`$ can be reduced by orders of magnitude by alloying polyvalent metals with monovalent ones. Applying pressure, reducing the dimensionality, or doping carriers into semiconductor conduction bands, as well as any other method of modifying the band structure, should work. The rule of thumb for reducing $`1/T_1`$ is washing the spin hot spots off the Fermi surface. (Another possibility would be to inhibit scattering into or out of the spin hot spots, but this is hardly realizable.)
The most important work ahead is to catalog $`1/T_1`$ for more metallic elements and alloys. So far only the simplest metals have been carefully studied over large enough temperature ranges, but even in these cases it is not clear, for example, as to how phonon-induced $`1/T_1`$ behaves at low temperatures. It is plausible that understanding $`1/T_1`$ in the transition metals will require new insights (such as establishing the role of the $`s`$-$`d`$ exchange). Another exciting possibility is that the measurements at high enough temperatures will settle the question of the so called “resistivity saturation” which occurs in many transition metals. Indeed, the two competing models of this phenomenon imply different scenarios for $`1/T_1`$: the “phonon ineffectiveness” model implies saturation of $`1/T_1`$, while the model emphasizing the role of quantum corrections to Boltzmann theory apparently does not. Finally, theory should yield probabilities of various spin-flip processes in different metals. Empirical pseudopotential and density functional techniques seem quite adequate to perform such calculations. Some work in this direction is already under way.
## V CONCLUSION
We have provided a brief informal review of the current understanding of spin relaxation phenomenon in metals and semiconductors. Although studying spin relaxation through electron spin resonance measurements and developing its microscopic understanding through quantitative band structure analyses were among the more active early research areas in solid state physics (dating back to the early 1950s), it is surprising that our current understanding of the phenomenon is quite incomplete and is restricted mostly to bulk elemental metals and some of the III-V semiconductor materials (both bulk and quantum well systems). There is a great deal of renewed current interest in the subject because of the potential spintronics applications offering the highly desirable possibility of monolithic integration of electronic, magnetic, and optical devices in single chips as well as the exciting prospect of using spin as a quantum bit in proposed quantum computer architectures. It should, however, be emphasized that all of these proposed applications necessarily require comprehensive quantitative understanding of physical processes controlling spin coherence in electronic materials. In particular, there is an acute need to develop techniques which can manipulate spin dynamics in a controlled coherent way which necessitates having long spin relaxation times and/or spin diffusion lengths. Our understanding of spin coherence in small mesoscopic systems and more importantly, at or across interfaces (metal/semiconductor, semiconductor/semiconductor) is currently rudimentary to non-existent. Much work (both theoretical and experimental as well as materials and fabrication related) is needed to develop a comprehensive understanding of spin coherence in electronic materials before the spintronics dream can become a viable reality.
Acknowledgments–This work is supported by the U.S. ONR and the DOD. We thank P. B. Allen and M. Johnson for useful discussions.
# GRAVITATIONAL MICROLENSING AND DARK MATTER IN THE GALACTIC HALO
## Introduction
A central problem in astrophysics concerns the nature of the dark matter in galactic halos, whose presence is implied by the flat rotation curves in spiral galaxies. As first proposed by Paczyński kn:Paczynski , gravitational microlensing can provide a decisive answer to that question, and since 1993 this dream has started to become a reality with the detection of several microlensing events towards the Large Magellanic Cloud. Today, although the evidence for Massive Astrophysical Compact Halo Objects (MACHOs) is firm, the implications of this discovery crucially depend on the assumed galactic model. Moreover, at least two of the events found towards the Large Magellanic Clouds are due to lenses located in the Clouds themselves. Therefore, it might well be that also the other events or at least a fraction of them are due to MACHOs in the Clouds. This issue might be solved when more events will be available.
It has become customary to take the standard spherical halo model as a baseline for comparison. Within this model, the mass moment method yields an average MACHO mass of $`0.27M_{\odot }`$ kn:je . Unfortunately, because of the presently available limited statistics, different data-analysis procedures lead to results which are only marginally consistent. For instance, the average MACHO mass reported by the MACHO team based on its first two years of data is $`0.5_{-0.2}^{+0.3}M_{\odot }`$ kn:Pratt . Apart from the low-statistics problem – which will automatically disappear with future larger data samples – we feel that the real question is whether the standard spherical halo model correctly describes our galaxy. Although the answer was believed to lie in the affirmative for some years, nowadays various arguments strongly favour a nonstandard galactic halo. Indeed, besides the observational evidence that spiral galaxies generally have flattened halos, recent determinations of both the disk scale length, and the magnitude and slope of the rotation curve at the solar position indicate that our galaxy is best described by the maximal disk model. This conclusion is further strengthened by the microlensing results towards the galactic centre, which imply that the bulge is more massive than previously thought. Correspondingly, the halo plays a less dominant role than within the standard halo model, thereby reducing the halo microlensing rate as well as the average MACHO mass. A similar result occurs within the King-Michie halo models kn:Ingrosso , which take into account the finite escape velocity and the anisotropies in velocity space (typically arising during the phase of halo formation). Moreover, practically the same conclusions also hold for flattened galactic models with a substantial degree of halo rotation. So, the expected average MACHO mass should be smaller than within the standard halo model. Still, the problem remains to explain the formation of MACHOs, as well as the nature of the remaining dark matter in galactic halos.
We have proposed a scenario kn:de1 ; kn:de2 ; kn:de4 in which dark clusters of MACHOs and cold molecular clouds – mainly of $`H_2`$ – naturally form in the halo at galactocentric distances larger than 10–20 kpc (somewhat similar ideas have also been put forward by Carr and Ashman kn:carr ; kn:ashman , Pfenniger, Combes and Martinet kn:Pfenniger , Gerhard and Silk kn:Silk1 and by Fabian and Nulsen kn:fabian ).
Here, we discuss the dark matter problem in the halo of our Galaxy in connection with microlensing searches and we briefly review the main features of our scenario, along with its observational implications, in particular the $`\gamma `$-ray flux produced in the scattering of high-energy cosmic-ray protons on $`H_2`$. Our estimate for the halo $`\gamma `$-ray flux turns out to be in remarkably good agreement with the recent discovery by Dixon et al. dixon of a possible $`\gamma `$-ray emission from the halo using EGRET data.
The content is as follows: first, we review the evidence for dark matter in the halo of our Galaxy. Next, we present the baryonic candidates for dark matter and discuss the basics of microlensing (optical depth, microlensing rates, etc.). We then give an overview of the results of microlensing searches achieved so far and we briefly present a scenario in which part of the dark matter is in the form of cold molecular clouds (mainly of $`H_2`$).
## Mass of the Milky Way
The best evidence for dark matter in galaxies comes from the rotation curves of spirals. Measurements of the rotation velocity $`v_{rot}`$ of stars up to the visible edge of the spiral galaxies and of $`HI`$ gas in the disk beyond the optical radius (by measuring the Doppler shift in the 21-cm line) imply that $`v_{rot}`$ remains roughly constant out to very large distances, rather than showing a Keplerian falloff. These observations started around 1970 kn:Rubin , thanks to the improved sensitivity in both optical and 21-cm bands. By now there are observations for over a thousand spiral galaxies with reliable rotation curves out to large radii. In almost all of them the rotation curve is flat or slowly rising out to the last measured point. Very few galaxies show falling rotation curves, and those that do either fall less rapidly than Keplerian, have nearby companions that may perturb the velocity field, or have large spheroids that may increase the rotation velocity near the centre.
There are also measurements of the rotation velocity for our Galaxy. However, these observations turn out to be rather difficult, and the rotation curve has been measured only up to a distance of about 20 kpc. Without any doubt our own Galaxy has a typical flat rotation curve, a fact which implies that it is possible to search directly in our own Milky Way for the dark matter characteristic of spiral galaxies.
In order to infer the total mass one can also study the proper motion of the Magellanic Clouds and of other satellites of our Galaxy. Recent studies kn:Zaritsky ; kn:Lin ; kn:Kochanek do not yet allow an accurate determination of $`v_{rot}(LMC)/v_0`$ ($`v_0=210\pm 10`$ km/s being the local rotational velocity). Lin et al. kn:Lin analyzed the proper motion observations and concluded that within 100 kpc the Galactic halo has a mass of $`(5.5\pm 1)\times 10^{11}M_{\odot }`$ and that a substantial fraction ($`\sim 50\%`$) of this mass is distributed beyond the present distance of the Magellanic Clouds of about 50 kpc. Beyond 100 kpc the mass may continue to increase to $`10^{12}M_{\odot }`$ within its tidal radius of about 300 kpc. This value for the total mass of the Galaxy is in agreement with the results of Zaritsky et al. kn:Zaritsky , who found a total mass in the range 9.3 to 12.5 $`\times 10^{11}M_{\odot }`$, the former value assuming radial satellite orbits and the latter assuming isotropic satellite orbits.
The results of Lin et al. kn:Lin suggest that the mass of the halo dark matter up to the Large Magellanic Cloud (LMC) is roughly half of the value one gets for the standard halo model (with flat rotation curve up to the LMC and spherical shape), thus implying the same reduction for the number of expected microlensing events. Kochanek kn:Kochanek analysed the global mass distribution of the Galaxy adopting a Jaffe model, whose parameters are determined using the observations on the proper motion of the satellites of the Galaxy, the Local Group timing constraint and the ellipticity of the M31 orbit. With these observations Kochanek kn:Kochanek concludes that the mass inside 50 kpc is $`(5.4\pm 1.3)\times 10^{11}M_{\odot }`$. This value becomes, however, slightly smaller when using only the satellite observations and the disk rotation constraint; in this case the median mass interior to 50 kpc lies in the interval 3.3 to 6.1 (4.2 to 6.8), without (with) the Leo I satellite, in units of $`10^{11}M_{\odot }`$. The lower bound without Leo I is 65% of the mass expected assuming a flat rotation curve up to the LMC.
## Baryonic dark matter candidates
Before discussing the baryonic dark matter we would like to mention that another class of candidates which is seriously taken into consideration is the so-called cold dark matter, which consists for instance of axions or supersymmetric particles like neutralinos kn:jungman . Here, we will not discuss cold dark matter in detail. However, recent studies seem to indicate a discrepancy between the rotation curves of dwarf galaxies calculated (through N-body simulations) assuming a halo of cold dark matter and the measured curves kn:moore ; kn:navarro ; kn:Silk . If confirmed, this would exclude cold dark matter as a major constituent of the halo of dwarf galaxies and possibly also of spiral galaxies.
From the Big Bang nucleosynthesis model kn:copi ; kn:PDG and from the observed abundances of primordial elements one infers $`0.010\le h_0^2\mathrm{\Omega }_B\le 0.016`$, or, with $`h_0\ge 0.41`$, $`0.01\le \mathrm{\Omega }_B\le 0.10`$ (where $`\mathrm{\Omega }_B=\rho _B/\rho _{crit}`$, and $`\rho _{crit}=3H_0^2/8\pi G`$). Since for the amount of luminous baryons one finds $`\mathrm{\Omega }_{lum}\ll \mathrm{\Omega }_B`$, it follows that an important fraction of the baryons are dark. Indeed, the dark baryons may well make up the entire dark halo matter.
The halo dark matter cannot be in the form of hot ionized hydrogen gas, otherwise there would be a large $`X`$-ray flux, for which there are stringent upper limits kn:corx . The abundance of neutral hydrogen gas is inferred from the 21-cm measurements, which show that its contribution is small. Another possibility is that the hydrogen gas is in molecular form clumped into cold clouds, as we will discuss later on. Baryons could otherwise have been processed in stellar remnants (for a detailed discussion see kn:Carr ). If their mass is below $`0.08M_{\odot }`$ they are too light to ignite hydrogen burning reactions. The possible origin of such brown dwarfs or Jupiter-like bodies (also called MACHOs), by fragmentation or by some other mechanism, is at present not well understood. It has also been pointed out that the mass distribution of the MACHOs, normalized to the dark halo mass density, could be a smooth continuation of the known initial mass function of ordinary stars kn:Derujula1 . The ambient radiation, or their own body heat, would make sufficiently small objects of H and He evaporate rapidly. The condition that the rate of evaporation of such a hydrogenoid sphere be insufficient to halve its mass in a billion years leads to the following lower limit on their mass kn:Derujula1 : $`M>10^{-7}M_{\odot }(T_S/30\mathrm{K})^{3/2}(1\mathrm{g}\mathrm{cm}^{-3}/\rho )^{1/2}`$ ($`T_S`$ being their surface temperature and $`\rho `$ their average density, which we expect to be of the order of $`1\mathrm{g}\mathrm{cm}^{-3}`$).
Otherwise, MACHOs might be M-dwarfs or white dwarfs. As a matter of fact, a deeper analysis shows that the M-dwarf option looks problematic. The null result of several searches for low-mass stars both in the disk and in the halo of our Galaxy suggests that the halo cannot be mostly in the form of hydrogen burning main sequence M-dwarfs. Optical imaging of high-latitude fields taken with the Wide Field Camera of the Hubble Space Telescope indicates that less than $`6\%`$ of the halo can be in this form kn:JBahcall . However, these results are derived under the assumption of a smooth spatial distribution of M-dwarfs, and become considerably less severe in the case of a clumpy distribution kn:Kerins1 .
A scenario with white dwarfs as a major constituent of the galactic halo dark matter has been explored kn:Tamanaha . However, it requires a rather ad hoc initial mass function sharply peaked around 2–6 $`M_{\odot }`$. Future Hubble deep field exposures could either find the white dwarfs or put constraints on their fraction in the halo kn:Kawaler . Also a substantial component of neutron stars and black holes with mass higher than $`1M_{\odot }`$ is excluded, for otherwise they would lead to an overproduction of heavy elements relative to the observed abundances.
## Basics of microlensing
In the following we present the main features of microlensing, in particular its probability and rate of events (for reviews see also kn:Pac ; kn:Roulet ; kn:napoli , whereas for double lenses see for instance ref. kn:Dominik ). An important issue is the determination from the observations of the mass of the MACHOs that acted as gravitational lenses as well as the fraction of halo dark matter they make up. The most appropriate way to compute the average mass and other important information is to use the method of mass moments developed by De Rújula et al. kn:Derujula .
### Microlensing probability
When a MACHO of mass $`M`$ is sufficiently close to the line of sight between us and a more distant star, the light from the source suffers a gravitational deflection. The deflection angle is usually so small that we do not see two images but rather a magnification of the original star brightness. This magnification, at its maximum, is given by
$$A_{max}=\frac{u^2+2}{u(u^2+4)^{1/2}}.$$
(1)
Here $`u=d/R_E`$ ($`d`$ is the distance of the MACHO from the line of sight) and the Einstein radius $`R_E`$ is defined as
$$R_E^2=\frac{4GMD}{c^2}x(1-x)$$
(2)
with $`x=s/D`$, where $`D`$ and $`s`$ are the distances from the observer to the source and to the MACHO, respectively.
An important quantity is the optical depth $`\tau _{opt}`$ to gravitational microlensing defined as
$$\tau _{opt}=\int _0^1dx\frac{4\pi G}{c^2}\rho (x)D^2x(1-x)$$
(3)
with $`\rho (x)`$ the mass density of microlensing matter at distance $`s=xD`$ from us along the line of sight. The quantity $`\tau _{opt}`$ is the probability that a source is found within a radius $`R_E`$ of some MACHO and thus has a magnification that is larger than $`A=1.34`$ ($`d\le R_E`$).
We calculate $`\tau _{opt}`$ for a galactic mass distribution of the form
$$\rho (\stackrel{}{r})=\frac{\rho _0(a^2+R_{GC}^2)}{a^2+\stackrel{}{r}^2},$$
(4)
$`\stackrel{}{r}`$ being the galactocentric radius vector. Here, $`a`$ is the core radius, $`\rho _0`$ the local dark mass density in the solar system and $`R_{GC}`$ the distance between the observer and the Galactic centre. Standard values for the parameters are $`\rho _0=0.3GeV/cm^3=7.9\times 10^{-3}M_{\odot }/pc^3`$, $`a=5.6`$ kpc and $`R_{GC}=8.5`$ kpc. With these values we get, for a spherical halo, $`\tau _{opt}\approx 5\times 10^{-7}`$ for the LMC and $`\tau _{opt}\approx 7\times 10^{-7}`$ for the SMC kn:locarno .
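As a cross-check of these numbers, eq. (3) can be integrated numerically for the profile (4); the short sketch below (Python) does this for the LMC line of sight, using the parameter values quoted above together with an assumed source distance $`D\approx 50`$ kpc and angle $`\alpha \approx 82^{\circ }`$ from the Galactic centre direction (these two geometric inputs are assumptions of the sketch, not values quoted in this paragraph).

```python
import numpy as np
from scipy.integrate import quad

# constants (SI)
G, c = 6.674e-11, 2.998e8        # m^3 kg^-1 s^-2, m s^-1
pc, kpc = 3.086e16, 3.086e19     # m
Msun = 1.989e30                  # kg

# halo parameters quoted in the text
rho0 = 7.9e-3 * Msun / pc**3     # local dark matter density, kg m^-3
a, R_GC = 5.6 * kpc, 8.5 * kpc

# assumed line-of-sight geometry for the LMC
D, alpha = 50.0 * kpc, np.radians(82.0)

def rho(x):
    """Density of eq. (4) at distance s = x*D from us, with r the
    galactocentric distance of that point along the LMC direction."""
    r2 = R_GC**2 + (x * D)**2 - 2.0 * R_GC * x * D * np.cos(alpha)
    return rho0 * (a**2 + R_GC**2) / (a**2 + r2)

tau_opt, _ = quad(lambda x: 4.0 * np.pi * G / c**2 * rho(x) * D**2 * x * (1.0 - x), 0.0, 1.0)
print(f"tau_opt (LMC) ~ {tau_opt:.1e}")   # of order 5e-7, as quoted above
```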
The magnification of the brightness of a star by a MACHO is a time-dependent effect. For a source that can be considered as pointlike (this is the case if the projected star radius at the MACHO distance is much less than $`R_E`$) the light curve as a function of time is obtained by inserting
$$u(t)=\frac{(d^2+v_T^2t^2)^{1/2}}{R_E}$$
(5)
into eq.(1), where $`v_T`$ is the transverse velocity of the MACHO, which can be inferred from the measured rotation curve ($`v_T\simeq 200km/s`$). The achromaticity, symmetry and uniqueness of the signal are distinctive features that allow one to discriminate a microlensing event from background events such as variable stars.
The behaviour of the magnification with time, $`A(t)`$, determines two observables, namely the magnification at the peak $`A(0)`$, denoted by $`A_{max}`$, and the width of the signal $`T`$, defined as $`T=R_E/v_T`$.
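The corresponding Paczyński light curve is easy to generate; the following sketch combines eqs.(1) and (5), with an illustrative impact parameter and time scale that are not taken from the text:

```python
import numpy as np

def magnification(u):
    """Point-source magnification, eq. (1)."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def light_curve(t, u_min, T):
    """A(t) for impact parameter u_min = d/R_E and time scale T = R_E/v_T, eq. (5)."""
    u = np.sqrt(u_min**2 + (t / T)**2)
    return magnification(u)

t = np.linspace(-100, 100, 401)           # days, symmetric around the peak
A = light_curve(t, u_min=0.3, T=40.0)     # illustrative values
print(A.max())                            # peak magnification ~3.4 for u_min = 0.3
```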
### Microlensing rate towards the LMC
The microlensing rate depends on the mass and velocity distribution of MACHOs. The mass density at a distance $`s=xD`$ from the observer is given by eq.(4). The isothermal spherical halo model does not determine the MACHO number density as a function of mass. A simplifying assumption is to let the mass distribution be independent of the position in the galactic halo, i.e., we assume the following factorized form for the number density per unit mass $`dn/dM`$,
$$\frac{dn}{dM}dM=\frac{dn_0}{d\mu }\frac{a^2+R_{GC}^2}{a^2+R_{GC}^2+D^2x^2-2DR_{GC}x\mathrm{cos}\alpha }d\mu =\frac{dn_0}{d\mu }H(x)d\mu ,$$
(6)
with $`\mu =M/M_{}`$ ($`\alpha `$ is the angle of the line of sight with the direction of the galactic centre, which is $`82^0`$ for the LMC). $`n_0`$ does not depend on $`x`$ and is subject to the normalization $`\int d\mu \frac{dn_0}{d\mu }M=\rho _0`$. Nothing is known a priori about the distribution $`dn_0/dM`$.
A different situation arises for the velocity distribution in the isothermal spherical halo model: its projection onto the plane perpendicular to the line of sight leads to the following distribution in the transverse velocity $`v_T`$
$$f(v_T)=\frac{2}{v_H^2}v_Te^{-v_T^2/v_H^2}$$
(7)
($`v_H\simeq 210km/s`$ is the observed velocity dispersion in the halo).
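For simulation purposes one can draw transverse velocities from eq.(7) by inverse-transform sampling, since its cumulative distribution is $`1-e^{-v_T^2/v_H^2}`$; a short sketch:

```python
import numpy as np

v_H = 210.0  # km/s, halo velocity dispersion

def sample_vT(n, rng=np.random.default_rng(0)):
    """Draw transverse velocities from f(v_T) of eq. (7) by inverting its CDF."""
    u = rng.random(n)
    return v_H * np.sqrt(-np.log(1.0 - u))

v = sample_vT(100_000)
print(v.mean())   # should approach (sqrt(pi)/2) v_H ~ 186 km/s
```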
In order to find the rate at which a single star is microlensed with magnification $`A\ge A_{min}`$, we consider MACHOs with masses between $`\mu `$ and $`\mu +\delta \mu `$, located at a distance from the observer between $`x`$ and $`x+\delta x`$ and with transverse velocity between $`v_T`$ and $`v_T+\delta v_T`$. The collision time can be calculated using the well-known fact that the inverse of the collision time is the product of the MACHO number density, the microlensing cross-section and the velocity. The rate $`d\mathrm{\Gamma }`$, taken also as a differential with respect to the variable $`u`$, at which a single star is microlensed in the interval $`d\mu dudv_Tdx`$ is given by kn:Derujula ; kn:Griest1
$$d\mathrm{\Gamma }=2v_Tf(v_T)Dr_E[\mu x(1x)]^{1/2}H(x)\frac{dn_0}{d\mu }d\mu dudv_Tdx,$$
(8)
with
$$r_E^2=\frac{4GM_{}D}{c^2}\simeq (3.2\times 10^9km)^2.$$
(9)
One has to integrate the differential number of microlensing events, $`dN_{ev}=N_{}t_{obs}d\mathrm{\Gamma }`$, over an appropriate range for $`\mu `$, $`x`$, $`u`$ and $`v_T`$, in order to obtain the total number of microlensing events which can be compared with an experiment monitoring $`N_{}`$ stars during an observation time $`t_{obs}`$ and which is able to detect a magnification such that $`A_{max}\ge A_{TH}`$. The limits of the $`u`$ integration are determined by the experimental threshold in magnitude shift, $`\mathrm{\Delta }m_{TH}`$: we have $`0\le u\le u_{TH}`$.
The range of integration for $`\mu `$ is where the mass distribution $`dn_0/d\mu `$ does not vanish, and that for $`x`$ is $`0\le x\le D_h/D`$, where $`D_h`$ is the extent of the galactic halo along the line of sight (in the case of the LMC, the star is inside the galactic halo and thus $`D_h/D=1`$). The galactic velocity distribution is cut at the escape velocity $`v_e\simeq 640km/s`$ and therefore $`v_T`$ ranges over $`0\le v_T\le v_e`$. In order to simplify the integration we integrate $`v_T`$ over the whole positive axis; due to the exponential factor in $`f(v_T)`$ the error so committed is negligible.
However, the integration range of $`d\mu dudv_Tdx`$ does not span all the interval we have just described. Indeed, each experiment has time thresholds $`T_{min}`$ and $`T_{max}`$ and only detects events with $`T_{min}\le T\le T_{max}`$, and thus the integration range has to be such that $`T`$ lies in this interval. The total number of microlensing events is then given by
$$N_{ev}=\int dN_{ev}ϵ(T),$$
(10)
where the integration is over the full range of $`d\mu dudv_Tdx`$. $`ϵ(T)`$ is determined experimentally kn:Pratt ; kn:MACHO . $`T`$ is related in a complicated way to the integration variables; because of this, no direct analytical integration in eq.(10) can be performed.
To evaluate eq.(10) we define an efficiency function $`ϵ_0(\mu )`$
$$ϵ_0(\mu )\equiv \frac{\int dN_{ev}^{\prime }(\overline{\mu })ϵ(T)}{\int dN_{ev}^{\prime }(\overline{\mu })},$$
(11)
which measures the fraction of the total number of microlensing events that meet the condition on $`T`$ at a fixed MACHO mass $`M=\overline{\mu }M_{}`$. We now can write the total number of events in eq.(10) as
$$N_{ev}=\int dN_{ev}ϵ_0(\mu ).$$
(12)
Due to the fact that $`ϵ_0`$ is a function of $`\mu `$ alone, the integration in $`d\mu dudv_Tdx`$ factorizes into four integrals with independent integration limits.
The average lensing duration can be defined as follows
$$<T>=\frac{1}{\mathrm{\Gamma }}\int d\mathrm{\Gamma }T(x,\mu ,v_T),$$
(13)
where $`T(x,\mu ,v_T)=R_E(x,\mu )/v_T`$. One easily finds that $`<T>`$ satisfies the following relation
$$<T>=\frac{2\tau _{opt}}{\pi \mathrm{\Gamma }}u_{TH}.$$
(14)
In order to quantify the expected number of events it is convenient to take as an example a delta function distribution for the mass. The rate of microlensing events with $`A\ge A_{min}`$ (or $`u\le u_{max}`$) is then
$$\mathrm{\Gamma }(A_{min})=u_{max}\stackrel{~}{\mathrm{\Gamma }}=u_{max}Dr_E\sqrt{\pi }v_H\frac{\rho _0}{M_{}}\frac{1}{\sqrt{\overline{\mu }}}\int _0^1dx[x(1-x)]^{1/2}H(x).$$
(15)
Inserting the numerical values for the LMC (D=50 kpc and $`\alpha =82^0`$) we get
$$\stackrel{~}{\mathrm{\Gamma }}=4\times 10^{-13}\left(\frac{v_H}{210km/s}\right)\left(\frac{\rho _0}{0.3GeV/cm^3}\right)\frac{1}{\sqrt{M/M_{}}}\mathrm{s}^{-1}.$$
(16)
For an experiment monitoring $`N_{}`$ stars during an observation time $`t_{obs}`$ the total number of events with a magnification $`A\ge A_{min}`$ is: $`N_{ev}(A_{min})=N_{}t_{obs}\mathrm{\Gamma }(A_{min})`$. In Table 1 we show some values of $`N_{ev}`$ for the LMC, taking $`t_{obs}=1`$ year, $`N_{}=10^6`$ stars and $`A_{min}=1.34`$ (or $`\mathrm{\Delta }m_{min}=0.32`$).
### Mass moment method
A more systematic way to extract information on the masses is to use the method of mass moments kn:Derujula ; kn:Jetzer1 ; kn:Jetzer2 . The mass moments $`<\mu ^m>`$ are defined as
$$<\mu ^m>=\int d\mu ϵ_n(\mu )\frac{dn_0}{d\mu }\mu ^m.$$
(17)
$`<\mu ^m>`$ is related to $`<\tau ^n>=\sum _{events}\tau ^n`$, with $`\tau \equiv (v_H/r_E)T`$, as constructed from the observations and which can also be computed as follows
$$<\tau ^n>=\int dN_{ev}ϵ_n(\mu )\tau ^n=Vu_{TH}\gamma (m)<\mu ^m>,$$
(18)
with $`m\equiv (n+1)/2`$. For targets in the LMC $`\gamma (m)=\mathrm{\Gamma }(2m)\widehat{H}(m)`$ and
$$V\equiv 2N_{}t_{obs}Dr_Ev_H=2.4\times 10^3pc^3\frac{N_{}t_{obs}}{10^6\mathrm{star}\mathrm{years}},$$
(19)
$$\mathrm{\Gamma }(2m)\equiv \int _0^{\infty }\left(\frac{v_T}{v_H}\right)^{1-n}f(v_T)dv_T,$$
(20)
$$\widehat{H}(m)\equiv \int _0^1(x(1-x))^mH(x)dx.$$
(21)
The efficiency $`ϵ_n(\mu )`$ is determined as follows kn:Derujula
$$ϵ_n(\mu )\equiv \frac{\int dN_{ev}^{\prime }(\overline{\mu })ϵ(T)\tau ^n}{\int dN_{ev}^{\prime }(\overline{\mu })\tau ^n},$$
(22)
where $`dN_{ev}^{\prime }(\overline{\mu })`$ is defined as $`dN_{ev}`$ in eq.(10) with the MACHO mass distribution concentrated at a fixed mass $`\overline{\mu }`$: $`dn_0/d\mu =n_0\delta (\mu -\overline{\mu })/\mu `$. $`ϵ(T)`$ is the experimental detection efficiency. For a more detailed discussion on the efficiency see ref. kn:Masso .
A mass moment $`<\mu ^m>`$ is thus related to $`<\tau ^n>`$ as given from the measured values of $`T`$ in a microlensing experiment by
$$<\mu ^m>=\frac{<\tau ^n>}{Vu_{TH}\gamma (m)}.$$
(23)
The mean local density of MACHOs (number per cubic parsec) is $`<\mu ^0>`$. The average local mass density in MACHOs is $`<\mu ^1>`$ solar masses per cubic parsec.
The mean mass, which we get from the six events detected by the MACHO team during their first two years, is kn:je
$$\frac{<\mu ^1>}{<\mu ^0>}=0.27M_{}.$$
(24)
When taking for the duration $`T`$ the values corrected for “blending”, we get as average mass 0.34 $`M_{}`$. If we include also the two EROS events we get a value of 0.26 $`M_{}`$ for the mean mass (without taking into account blending effects). The resulting mass depends on the parameters used to describe the standard halo model. In order to check this dependence we varied the parameters of the standard halo model within their allowed range and found that the average mass changes at most by $`\pm `$ 30%, which shows that the result is rather robust. Although the value for the average mass we find with the mass moment method is marginally consistent with the result of the MACHO team, it definitely favours a lower average MACHO mass.
One can also consider other models with more general luminous and dark matter distributions, e.g. ones with a flattened halo or with anisotropy in velocity space kn:Ingrosso , in which case the resulting value for the average mass would decrease significantly.
Another important quantity to be determined is the fraction $`f`$ of the local dark mass density (the latter one given by $`\rho _0`$) detected in the form of MACHOs, which is given by $`f=(M_{}/\rho _0)<\mu ^1>\simeq 126\mathrm{pc}^3<\mu ^1>`$. Using the values given by the MACHO collaboration for their two years data kn:Pratt we find $`f\simeq 0.54`$, again by assuming a standard spherical halo model.
Once several moments $`<\mu ^m>`$ are known one can get information on the mass distribution $`dn_0/d\mu `$. Since at present only a few events towards the LMC are available, the different moments (especially the higher ones) can be determined only approximately. Nevertheless, the results obtained so far are already of interest and it is clear that in a few years, due also to the new experiments under way (such as EROS II, OGLE II and MOA in addition to MACHO), it will be possible to draw firmer conclusions.
## Present status of microlensing research
It has been pointed out by Paczyński kn:Paczynski that microlensing allows the detection of MACHOs located in the galactic halo in the mass range kn:Derujula1 $`10^{-7}<M/M_{}<1`$, as well as MACHOs in the disk or bulge of our Galaxy kn:Paczynski1991 ; kn:Griest2 . Since this first proposal microlensing searches have turned very quickly into reality and in about a decade they have become an important tool for astrophysical investigations. Microlensing is also very promising for the search of planets around other stars in our Galaxy and generates large databases for variable stars, a field which has already benefitted a lot. Because several new experiments are becoming operative and the number of observations is increasing rapidly, the situation is changing quickly and, therefore, the present results should be considered as preliminary.
### Towards the LMC and the SMC
In September 1993 the French collaboration EROS kn:Aubourg announced the discovery of 2 microlensing candidates and the American–Australian collaboration MACHO of one candidate kn:Alcock by monitoring stars in the LMC.
In the meantime the MACHO team reported the observation of altogether 8 events (one is a binary lensing event) from the analysis of their first two years of data, obtained by monitoring about 8.5 million stars in the LMC kn:Pratt . The inferred optical depth is $`\tau _{opt}=2.1_{-0.7}^{+1.1}\times 10^{-7}`$ when considering 6 events (the two disregarded events are the binary lensing event and one rated as marginal), or $`\tau _{opt}=2.9_{-0.9}^{+1.4}\times 10^{-7}`$ when considering all the 8 detected events. Correspondingly, this implies that about 45% (respectively 50%) of the halo dark matter is in form of MACHOs, and they find an average mass of $`0.5_{-0.2}^{+0.3}M_{}`$ assuming a standard spherical halo model. It may well be that there is also a contribution of events due to MACHOs located in the LMC itself or in a thick disk of our galaxy, in which case the above results would change quite substantially. In particular for the binary event there is evidence that the lens is located in the LMC. It has been estimated that the optical depth for lensing due to MACHOs in the LMC or in a thick disk is about $`\tau _{opt}=5.4\times 10^{-8}`$ kn:Pratt . However, this value is model dependent so that at present it is not clear which fraction of the events are due to self-lensing in the LMC (and similarly for the SMC).
Other events have been detected towards the LMC by the MACHO group, which have been put on their list of alert events. The full analysis of the 1996 - 1998 seasons is still not published.
EROS has also searched for very-low-mass MACHOs by looking for microlensing events with time scales ranging from 30 minutes to 7 days kn:EROS . The lack of candidates in this range places significant constraints on any model for the halo that relies on objects in the range $`5\times 10^{-8}<M/M_{}<2\times 10^{-2}`$. Indeed, such objects may make up at most 20% of the halo dark matter (in the range $`5\times 10^{-7}<M/M_{}<2\times 10^{-3}`$ at most 10%). Similar conclusions have also been reached by the MACHO group kn:Pratt .
Recently, the MACHO team reported kn:Alcock2 the first discovery of a microlensing event towards the Small Magellanic Cloud (SMC). The full analysis of the four years of data on the SMC is still underway, so that more candidates may be found in the near future. A rough estimate of the optical depth leads to about the same value as found towards the LMC. The same event has also been observed by the EROS kn:eros and the Polish-American OGLE collaboration kn:ogle . A second event was discovered in 1998 and found to be due to a binary lens. This event has been followed by the different collaborations, so that the combined data lead to a quite accurate light curve, from which it is possible to get an upper limit for the value of the proper motion of the lens kn:45 ; kn:46 . The results indicate that the lens system is most probably located in the SMC itself, in which case the lens may be an ordinary binary star. It is remarkable that both the binary events detected so far are due to lenses in the Clouds themselves, making it plausible that this is the case for the other lenses as well.
Since the middle of 1996 the EROS group has put into operation a new 1 meter telescope, located in La Silla (Chile), and which is fully dedicated to microlensing searches using CCD cameras. The improved experiment is called EROS II.
### Towards the galactic centre
Towards the galactic bulge the Polish-American team OGLE kn:Udalski announced its first event, also in September 1993. Since then OGLE has found in its data from the 1992 - 1995 observing seasons altogether 18 microlensing events (one being a binary lens). Based on their first 9 events the OGLE team estimated the optical depth towards the bulge as kn:udal $`\tau _{opt}=(3.3\pm 1.2)\times 10^{-6}`$. This has to be compared with the theoretical calculations which lead to a value kn:Paczynski1991 ; kn:Griest2 $`\tau _{opt}\simeq (1-1.5)\times 10^{-6}`$, which does not, however, take into account the contribution of lenses in the bulge itself, which might well explain the discrepancy. In fact, when taking into account also the effect of microlensing by galactic bulge stars the optical depth gets bigger kn:Kiraga and might easily be compatible with the measured value. This implies the presence of a bar in the galactic centre. In the meantime the OGLE group got a new dedicated 1.3 meter telescope located at the Las Campanas Observatory. The OGLE-2 collaboration started observations in 1996 and is monitoring the bulge, the LMC and the SMC as well.
The French DUO kn:Alard team found 12 microlensing events (one of which being a binary event) by monitoring the galactic bulge during the 1994 season with the ESO 1 meter Schmidt telescope. The photographic plates were taken in two different colors to test achromaticity. The MACHO kn:MACHO collaboration has by now found more than 150 microlensing events towards the galactic bulge, most of which are listed among the alert events, which are constantly updated (current information on the MACHO Collaboration's alert events is maintained at the WWW site http://darkstar.astro.washington.edu). They also found 3 events by monitoring the spiral arms in the region of Gamma Scutum. During their first season they found 45 events towards the bulge. The MACHO team also detected, in a long duration event, the parallax effect due to the motion of the Earth around the Sun kn:?? . The MACHO first year data lead to an estimated optical depth of $`\tau _{opt}\simeq 2.43_{-0.45}^{+0.54}\times 10^{-6}`$, which is roughly in agreement with the OGLE result, and which also implies the presence of a bar in the galactic centre. These results are very important in order to study the structure of our Galaxy. In this respect the measurements towards the spiral arms will give important new information.
Some globular clusters lie in the galactic disk about half-way between us and the galactic bulge. If globular clusters contain MACHOs, the latter can also act as lenses for more distant stars located in the bulge. Recently, we have analysed the microlensing events towards the galactic bulge which lie close to three globular clusters and found evidence that some microlensing events are indeed due to MACHOs located in the globular clusters kn:wandeler . If this finding is confirmed once more data become available, it would imply that globular clusters also contain an important amount of dark matter in form of MACHOs, which probably would be brown dwarfs or white dwarfs.
### Towards the Andromeda galaxy
Microlensing searches have also been conducted towards M31, which is an interesting target kn:Crotts ; kn:Baillon ; kn:Jetzer . In this case, however, one has to use the so-called “pixel-lensing” method, since the source stars are in general no longer resolvable. Two groups have performed searches: the French AGAPE kn:Agape using the 2 meter telescope at Pic du Midi and the American VATT/COLUMBIA kn:VATT , which used the 1.8 meter VATT telescope located on Mt. Graham and the 4 meter KPNO telescope. Both teams showed that the pixel-lensing method works; however, the small amount of observations done so far does not allow one to draw firm conclusions. The VATT/COLUMBIA team found six candidates which are consistent with microlensing, but additional observations are needed to confirm this. Pixel-lensing could also lead to the discovery of microlensing events towards the M87 galaxy, in which case the best option would be to use the Hubble Space Telescope kn:M87 . It might also be interesting to look towards dwarf galaxies of the local group.
### Further developments
A new collaboration between New Zealand and Japan, called MOA, started in June 1996 to perform observations using the 0.6 meter telescope of the Mt. John Observatory kn:Moa . The targets are the LMC and the galactic bulge. They will in particular search for short time scale (about 1 hour) events, and will then be particularly sensitive to objects with a mass typical of brown dwarfs.
It has to be mentioned that there are also collaborations between different observatories (for instance PLANET kn:PLANET and GMAN kn:GMAN ) with the aim of performing accurate photometry on alert microlensing events. The GMAN collaboration was able to obtain accurate photometric data on a 1995 event towards the galactic bulge. The light curve shows clearly a deviation due to the extension of the source star kn:gman . A major goal of the PLANET and GMAN collaborations is to find planets in binary microlensing events kn:Mao ; kn:Loeb ; kn:Rhie . Moreover, microlensing searches are also a very powerful way to build large databases for the study and discovery of many variable stars.
At present the only information available from a microlensing event is the time scale, which depends on three parameters: distance, transverse velocity and mass of the MACHO. A possible way to get more information is to observe an event from different locations, with typically an Astronomical Unit in separation. This could be achieved by putting a parallax satellite into solar orbit kn:Refsdal ; kn:Gould .
The above list of presently active collaborations and main results shows clearly that this field is just at the beginning and that many interesting results will come in the near future.
## Formation of dark clusters
We turn now to the discussion of a scenario for the formation of dark clusters of MACHOs and cold molecular clouds. Our scenario kn:de1 ; kn:de2 ; kn:de4 encompasses the one originally proposed by Fall and Rees kn:fall to explain the origin of globular clusters and can be summarized as follows. After its initial collapse, the proto galaxy (PG) is expected to be shock heated to its virial temperature $`\sim 10^6`$ K. Because of thermal instability, density enhancements rapidly grow as the gas cools. Actually, overdense regions cool more rapidly than average, and so proto globular cluster (PGC) clouds form in pressure equilibrium with hot diffuse gas. When the PGC cloud temperature reaches $`10^4`$ K, hydrogen recombination occurs: at this stage, their mass and size are $`\sim 10^5(R/\mathrm{kpc})^{1/2}M_{}`$ and $`\sim 10(R/kpc)^{1/2}`$ pc, respectively ($`R`$ is the galactocentric distance). Below $`10^4`$ K, the main coolants are $`H_2`$ molecules and any heavy element produced in a first chaotic galactic phase. The subsequent evolution of the PGC clouds will be very different in the inner and outer part of the Galaxy, depending on the decreasing ultraviolet (UV) flux as the galactocentric distance $`R`$ increases.
As is well known, in the central region of the Galaxy an Active Galactic Nucleus (AGN) and a first population of massive stars are expected to form, which act as strong sources of UV radiation that dissociates the $`H_2`$ molecules. It is not difficult to estimate that $`H_2`$ depletion should happen for galactocentric distances smaller than $`10-20`$ kpc. As a consequence, cooling is heavily suppressed in the inner halo, and so the PGC clouds here remain for a long time at temperature $`\sim 10^4`$ K, resulting in the imprinting of a characteristic mass $`\sim 10^6M_{}`$. Eventually, the UV flux will decrease, thereby permitting the formation of $`H_2`$. As a result, the cloud temperature drops below $`10^4`$ K and the subsequent evolution leads to star formation and ultimately to globular clusters.
Our main point is that in the outer halo – namely for galactocentric distances larger than $`10-20`$ kpc – no substantial $`H_2`$ depletion should take place (owing to the distance suppression of the UV flux). Therefore, the PGC clouds cool and contract. When their number density exceeds $`\sim 10^8`$ cm<sup>-3</sup>, further $`H_2`$ is produced via three-body reactions ($`H+H+H\to H_2+H`$ and $`H+H+H_2\to 2H_2`$), which makes in turn the cooling efficiency increase dramatically. This fact has three distinct implications: (i) no imprinting of a characteristic PGC cloud mass shows up, (ii) the Jeans mass can drop to values considerably smaller than $`1M_{}`$, and (iii) the cooling time is much shorter than the free-fall time. In such a situation a subsequent fragmentation occurs into smaller and smaller clouds that remain optically thin to their own radiation. The process stops when the clouds become optically thick to their own line emission – this happens when the Jeans mass is as low as $`10^{-2}M_{}`$. In this manner, dark clusters should form, which contain brown dwarfs in the mass range $`10^{-2}-10^{-1}M_{}`$.
Before proceeding further, two observations are in order. First, it seems quite natural to suppose that – much in the same way as it occurs for ordinary stars – also in this case the fragmentation process that gives rise to individual brown dwarfs should produce a substantial fraction of binary brown dwarfs. It is important to keep in mind that the mass fraction of primordial binaries can be as large as $`50\%`$. Hence, we see that MACHOs consist of both individual and binary brown dwarfs in the present scenario kn:mnras ; kn:apj . Second, we do not expect the fragmentation process to be able to convert the whole gas in a PGC cloud into brown dwarfs. For instance, standard stellar formation mechanisms lead to an upper limit of at most $`40\%`$ for the conversion efficiency. Thus, a substantial fraction $`\stackrel{~}{f}`$ of the primordial gas – which is mostly $`H_2`$ – should be left over. Because brown dwarfs do not give rise to stellar winds, this gas should remain confined within a dark cluster. So, also cold $`H_2`$ self-gravitating clouds should presumably be clumped into dark clusters, along with some residual diffuse gas (the amount of diffuse gas inside a dark cluster has to be low, for otherwise it would have been observed in optical and radio bands).
Unfortunately, the total lack of any observational information about dark clusters would make any effort to understand their structure and dynamics practically impossible, were it not for some remarkable insights that our unified treatment of globular and dark clusters provides us. In the first place, it looks quite natural to assume that also dark clusters have a denser core surrounded by an extended spherical halo. Moreover, in the lack of any further information it seems reasonable to suppose (at least tentatively) that the dark clusters have the same average mass density as globular clusters. Hence, we obtain $`r_{DC}\simeq 0.12(M_{DC}/M_{})^{1/3}`$ pc, where $`M_{DC}`$ and $`r_{DC}`$ denote the mass and the median radius of a dark cluster, respectively. As a further implication of the above scenario, we stress that – at variance with the case of globular clusters – the initial mass function of the dark clusters should be smooth, since the monotonic decrease of the PGC cloud temperature fails to single out any particular mass scale. In addition, the absence of a quasi-hydrostatic equilibrium phase for the dark clusters naturally suggests $`M_{DC}\lesssim 10^6M_{}`$. Finally, we suppose for definiteness that all brown dwarfs have mass $`0.1M_{}`$, while the molecular cloud spectrum will be taken to be $`10^{-3}M_{}\lesssim M_m\lesssim 10^{-1}M_{}`$.
## Observational tests
We list schematically some observational tests for the present scenario.
Clustering of microlensing events – The most promising way to detect dark clusters is via correlation effects in microlensing observations, as they are expected to exhibit a cluster-like distribution kn:maoz . Indeed, it has been shown that a relatively small number of microlensing events would be sufficient to rule out this possibility, while to confirm it more events are needed. However, we have seen that core collapse can liberate a considerable fraction of MACHOs from the less massive clusters, and so an unclustered MACHO population is expected to coexist with dark clusters in the outer halo – detection of unclustered MACHOs would therefore not disprove the present model.
$`\gamma `$-rays from halo clouds – A signature for the presence of molecular clouds in the galactic halo should be a $`\gamma `$-ray flux produced in the scattering of high-energy cosmic-ray protons on $`H_2`$ kn:de1 ; kn:de2 . As a matter of fact, an essential ingredient is the knowledge of the cosmic ray flux in the halo. Unfortunately, this quantity is unknown and the only available information comes from theoretical estimates. Moreover, we assume the same energy distribution of the cosmic rays as measured on Earth. The presence of magnetic fields in the halo is expected to give rise to a temporary confinement of cosmic ray protons similar to what happens in the disk. In addition, there can also be sources of cosmic ray protons located in the halo itself, as for instance isolated or binary pulsars in globular clusters. The best chance to detect the $`\gamma `$-rays in question is provided by observations at high galactic latitude. We find that - regardless of the adopted value for the flatness of the halo - at high-galactic latitude $`\mathrm{\Phi }_\gamma ^{\mathrm{DM}}(>1\mathrm{GeV})`$ lies in the range $`(6-8)\times 10^{-7}`$ $`\gamma `$ cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup> (assuming a fraction $`\stackrel{~}{f}\simeq 0.5`$ of the dark matter in form of cold clouds). However, the shape of the contour lines strongly depends on the flatness parameter kn:gamma .
A few months ago, Dixon et al. dixon re-analyzed the EGRET data concerning the diffuse $`\gamma `$-ray flux with a wavelet-based technique. After subtraction of the isotropic extragalactic component and of the expected contribution from the Milky Way, they find a statistically significant diffuse emission from the galactic halo. At high-galactic latitude, the integrated halo flux above 1 GeV turns out to be $`10^{-7}-10^{-6}`$ $`\gamma `$ cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup>, which is slightly less than the diffuse extragalactic flux (Sreekumar et al. sreekumar ). Our estimate for the halo $`\gamma `$-ray flux turns out to be in remarkably good agreement with the discovery by Dixon et al. dixon . The next generation of $`\gamma `$-ray satellites like AGILE and GLAST will be able to test our model, thanks to their better angular resolution.
CBR anisotropy – An alternative way to discover the molecular clouds under consideration relies upon their emission in the microwave band di . The temperature of the clouds has to be close to that of the cosmic background radiation (CBR). Indeed, an upper limit of $`\mathrm{\Delta }T/T\sim 10^{-3}`$ can be derived by considering the anisotropy they would introduce in the CBR due to their higher temperature. Realistically, molecular clouds cannot be regarded as black body emitters because they mainly produce a set of molecular rotational transition lines. If we consider clouds with primordial cosmological composition, then the only molecule that contributes to the microwave band with optically thick lines is $`LiH`$, whose lowest rotational transition occurs at $`\nu _0=444`$ GHz with broadening $`\sim 10^{-5}`$ (due to the turbulent velocity of molecular clouds in dark clusters). This line would be detectable using the Doppler shift effect. To this aim, it is convenient to consider the M31 galaxy, for whose halo we assume the same picture as outlined above for our galaxy. Then we expect that molecular clouds should have typical rotational speeds of 50-100 km s<sup>-1</sup>. Given the fact that the clouds possess a peculiar velocity (with respect to the CBR) the emitted radiation will be Doppler shifted, with $`\mathrm{\Delta }\nu /\nu _0\simeq \pm 10^{-3}`$. However, the precise chemical composition of molecular clouds in the galactic halo is unknown. Even if the heavy molecule abundance is very low (as compared with the abundance in interstellar clouds), many optically thick lines corresponding to the lowest rotational transitions would show up in the microwave band. In this case, it is more convenient to perform broad-band measurements, and the Doppler shift effect results in an anisotropy in the CBR. Since it is difficult to work with fields of view of a few arcsec, we propose to measure the CBR anisotropy between two fields of view - on opposite sides of M31 - separated by $`4^0`$ and with angular resolution of $`1^0`$. We suppose that the halo of M31 consists of $`\sim 10^6`$ dark clusters which lie within 25-35 kpc. Scanning an annulus of $`1^0`$ width and internal angular diameter $`4^0`$, centered at M31, in 180 steps of $`1^0`$, we would find anisotropies of $`\sim 10^{-5}\stackrel{~}{f}\overline{\tau }`$ in $`\mathrm{\Delta }T/T`$. Here, most of the uncertainties arise from the estimate of the average optical depth $`\overline{\tau }`$, which mainly depends on the molecular cloud composition. In conclusion, since the theory does not allow one to establish whether the expected anisotropy lies above or below current detectability ($`\sim 10^{-6}`$), only observations can resolve this issue.
Absorption lines – Cold clouds clumped into dark clusters can also be observed through absorption lines (due to heavy molecules), both in the UV and in the optical band, in the spectra of LMC stars which lie very close (within $`1^{\prime }`$) to a previously microlensed one.
Infrared searches – Another possibility of detecting MACHOs is via their infrared emission di . In order to be specific, let us assume that all MACHOs have the same mass 0.08 $`M_{}`$ and age $`10^{10}`$ yr. Accordingly, their surface temperature is $`1.4\times 10^3`$ K and they emit most of their radiation (as a black body) at $`\nu _{max}\simeq (1-1.5)\times 10^{13}`$ Hz. First, we consider MACHOs located in M31. In this case, we find a surface brightness $`I_{\nu _{max}}\simeq 2.1\times 10^3(1-\stackrel{~}{f})`$ Jy sr<sup>-1</sup> and $`\simeq 0.5\times 10^3(1-\stackrel{~}{f})`$ Jy sr<sup>-1</sup> for projected separations from the M31 center $`b=20`$ kpc and 40 kpc, respectively. Although these values are about one order of magnitude below the sensitivity of the detectors on the ISO satellite, they lie above the threshold of the detector aboard the planned SIRTF satellite. For comparison, we recall that the halo of our galaxy would have in the direction of the galactic pole a surface brightness $`I_{\nu _{max}}\simeq 2\times 10^3\mathrm{Jy}\mathrm{sr}^{-1}`$, provided MACHOs make up the total halo dark matter. Nevertheless, the infrared radiation originating from MACHOs in the halo of our galaxy can be recognized (and subtracted) by its characteristic angular modulation. Also, the signal from the M31 halo can be identified and separated from the galactic background via its b-modulation. Next, we point out that the angular size of dark clusters in the halo of our galaxy at a distance of $`20`$ kpc is $`1.8^{\prime }`$ and the typical separation among them is $`14^{\prime }`$. As a result, a characteristic pattern of bright (with intensity $`3\times 10^{-2}`$ Jy at $`\nu _{max}`$ within angular size $`1.8^{\prime }`$) and dark spots should be seen by pointing the detector into different directions.
## Conclusions
The mystery of the dark matter is still unsolved; however, thanks to the ongoing microlensing and pixel-lensing experiments there is hope that progress on its nature in the galactic halo can be achieved within the next few years. An important point will be to determine whether the MACHOs are in the halo or rather in the LMC or SMC themselves, as suggested by the binary lens events.
Substantial progress will also be made in the study of the structure of our Galaxy, especially once data from the observations towards the spiral arms become available. Microlensing is also very promising for the discovery of planets. Although it is a rather young observational technique, microlensing has already allowed substantial progress, and the prospects for further contributions to the solution of important astrophysical problems look very bright.
It has also to be mentioned that only a fraction of the halo dark matter might be in form of MACHOs, in which case there is the problem of explaining the nature of the remaining dark matter and the formation of the MACHOs. Before invoking the need for new particles as galactic dark matter candidates for the remaining fraction, one should seriously consider the possibility that it is in the form of cold molecular clouds. Several observational methods have been proposed to test this scenario, in particular via the induced $`\gamma `$-ray flux for which the predicted value is in remarkably good agreement with the measurement of EGRET dixon .
Acknowledgements
I would like to thank B. Paczyński for an important comment and for bringing to my attention several recent papers.
# ON THE GENERALIZED KRAMERS PROBLEM WITH OSCILLATORY MEMORY FRICTION
## 1 Introduction
The classic Kramers formulation of reaction rates in solution and its generalization to non-Markovian solvents has provided many theoretical challenges over the past six decades. In this formulation the reaction coordinate $`x(t)`$ is modeled as evolving in a double-well potential $`V(x)`$ with a barrier separating the reactant and product states. The solvent effects are modeled in terms of fluctuating and dissipative forces. A full understanding of the dependence of the rate coefficient $`k`$ on the dissipation in the Markovian solvent limit (the “turnover problem”) has only been achieved in the last few years . Comparably thorough understanding in the case of a non-Markovian solvent is not yet available. Understanding of the temperature dependence of $`k`$ is also far from complete . Clearly, there is yet a great deal to learn about this classic problem.
In the past two decades, and most especially in the past few years, attention has also been paid by a number of investigators to the time-dependence of the rate coefficient, that is, the way in which $`k(t)`$ approaches its asymptotic value $`k(\mathrm{})`$ . This time dependence directly mirrors the dynamics of the reaction coordinate in the barrier region on the way toward capture by one well or the other. Our focus is on this time dependence and the way that it is influenced by the parameters of the system.
The generalized Kramers problem is based on the dynamical equations for the reaction coordinate
$$\ddot{x}=-\int _0^tdt^{\prime }\mathrm{\Gamma }(t-t^{\prime })\dot{x}(t^{\prime })-\frac{dV_{eff}(x)}{dx}+F(t),$$
(1)
where a dot denotes a time derivative, $`V_{eff}(x)`$ is an effective potential related to $`V(x)`$ (cf. next section), $`\mathrm{\Gamma }(t-t^{\prime })`$ is the dissipative memory kernel (which we will often simply call the memory kernel), and $`F(t)`$ represents Gaussian fluctuations that satisfy the fluctuation-dissipation relation
$$\langle F(t)F(t^{\prime })\rangle =k_BT\mathrm{\Gamma }(t-t^{\prime }).$$
(2)
The brackets $`\langle \cdots \rangle `$ denote an ensemble average, $`k_B`$ is Boltzmann’s constant, and $`T`$ is the temperature. The fluctuations and dissipation account for the interaction of the reaction coordinate with the surrounding medium. The original Kramers problem dealt with a Markovian solvent, that is, with instantaneous dissipation:
$$\mathrm{\Gamma }(t)=2\gamma \delta (t).$$
(3)
The parameter $`\gamma `$ is the dissipation parameter or damping parameter. Generalizations to the non-Markovian problem have typically focused on exponential memory kernels ,
$$\mathrm{\Gamma }(t)=\frac{\gamma }{\tau }e^{-t/\tau },$$
(4)
and on oscillatory memory kernels ,
$$\mathrm{\Gamma }(t)=\mathrm{\Gamma }(0)e^{-t/\tau }\left(\mathrm{cos}\mathrm{\Omega }t+\frac{1}{\mathrm{\Omega }\tau }\mathrm{sin}\mathrm{\Omega }t\right)$$
(5)
(this memory kernel and the parameters in it will be discussed in detail in the next section). Another generalization, which we do not address in our work, deals with Gaussian memory kernels ,
$$\mathrm{\Gamma }(t)=\left(\frac{2}{\pi }\right)^{1/2}\frac{\gamma }{\tau }e^{-t^2/2\tau ^2}.$$
(6)
In all of these generalizations $`\tau `$ is a measure of the decay time of the memory kernel or, equivalently, of the correlation time of the fluctuations.
In subsequent sections we will provide a brief graphic review of the results addressed in our previous work, which succinctly are as follows. The time-dependent rate coefficient for the Markovian solvent at high damping (that is, beyond the “turnover” regime) decays monotonically towards its equilibrium value . For the exponential memory at high damping there are two distinct types of time dependences, the “non-adiabatic”, in which the rate coefficient decays monotonically to its equilibrium value (as in the Markovian case), and the “caging”, in which the decay to equilibrium is oscillatory with a frequency characteristic of an effective caging potential . At very low damping (that is, below the “turnover”), the decay of the rate coefficient to its equilibrium value is again oscillatory, but now with a frequency characteristic of the bistable potential. This behavior is apparent for the Markovian solvent and also for the exponential memory in this energy-diffusion-limited regime. We have shown that the theoretical predictions agree very well with numerical simulations for all of these generic behaviors.
In this paper we complete our analysis with a study of the oscillatory memory kernel. In appropriate limits we recover the typical low-damping behavior of the rate coefficient and also the high-damping non-adiabatic monotonic behavior, although caging, as we will see, can not be achieved with an oscillatory memory. Most interesting, perhaps, is the appearance of a new time dependence different from those previously observed or anticipated. This new time dependence is a consequence of the new features of the memory kernel such as the fact that it alternates between positive and negative values. We will present and explain this new behavior, and determine the parameter regimes where it may be observed.
This paper is organized as follows. In Sec. 2 we introduce the model Eq. (1) in detail. In fact, we present two equivalent versions of the model. One version invokes a solvent coordinate which is coupled to the reaction coordinate and also coupled to a heat bath. This double presentation not only clarifies the physical origin of the oscillatory memory kernel, but it also leads to more transparent interpretation of the resulting time dependence of the rate coefficient. In Sec. 3 we describe our simulations and numerical procedures. Section 4 presents a graphical summary of the various time dependences obtained numerically in earlier work and provides a context for the presentation of the new behavior identified in the oscillatory memory system. In Sec. 5 we discuss analytic approximations that serve as a backdrop for our analysis and detailed explanation of the new behavior, which is presented in Sec. 6. The results and conclusions are summarized in Sec. 7.
## 2 The Model
We first present an alternative (two-variable) model that eventually leads to Eq. (1) with Eq. (5). The potential energy of the two-variable model is
$$V(x,y)=V(x)+\frac{\omega ^2}{2}y^2+\frac{k}{2}(x-y)^2,$$
(7)
where the solvent is explicitly represented by a harmonic coordinate $`y`$. The reaction coordinate $`x`$ evolves in a bistable potential that is taken to be of the familiar form
$$V(x)=\frac{V_0}{4}(x^2-1)^2.$$
(8)
We take $`V_0`$ as the energy unit throughout this work and thus set it equal to unity. The reaction coordinate is coupled to the solvent coordinate via a harmonic spring of force constant $`k`$. The dynamical equations for the coupled system, assuming that $`y`$ is coupled to a heat bath at temperature $`T`$, are
$`\ddot{x}=-{\displaystyle \frac{dV(x)}{dx}}+k(y-x),`$
$`\ddot{y}=-\omega ^2y+k(x-y)-\gamma \dot{y}+f(t).`$ (9)
Here $`\gamma `$ is the friction coefficient for the solvent coordinate and $`f(t)`$ represents $`\delta `$-correlated Gaussian fluctuations that satisfy the fluctuation-dissipation relation
$$\langle f(t)f(t^{\prime })\rangle =2\gamma k_BT\delta (t-t^{\prime }).$$
(10)
Throughout we take the barrier height to be large compared to the temperature, $`k_BT\ll 1/4`$. We call Eq. (9) the “extended” representation of our system.
Although the extended model can readily be integrated numerically, initial conditions are not arbitrary and require careful consideration. The reduction of the model (9) to the generalized Kramers problem (1) with the fluctuation-dissipation relation (2) requires distributions for the initial solvent coordinate $`y(0)`$ and velocity $`\dot{y}(0)`$ that satisfy certain conditions (cf. Appendix A, where these initial conditions are presented in detail).
A number of theoretical approaches to this problem deal, instead, with the completely equivalent “contracted” or “reduced” representation obtained by explicitly integrating out the solvent coordinate $`y`$. Among these is the work of Grote and Hynes and that of Kohen and Tannor (KT) . The resulting equivalent single-variable problem is shown in AppendixA to be given by Eq. (1) with the effective potential
$$V_{eff}(x)=V(x)+\frac{1}{2}\frac{\omega ^2k}{\omega ^2+k}x^2.$$
(11)
Depending on the relative values of parameters, the resulting memory kernel can decay monotonically (“hyperbolic” case) or it can decay in an oscillatory fashion (“trigonometric” case). We are specifically interested in the trigonometric case, which requires that
$$\left(\frac{\gamma }{2}\right)^2-\omega ^2-k<0.$$
(12)
The associated memory kernel is
$$\mathrm{\Gamma }(t)=\frac{k^2}{\omega ^2+k}e^{-\frac{\gamma }{2}t}\left(\frac{\gamma }{2\mathrm{\Omega }}\mathrm{sin}\mathrm{\Omega }t+\mathrm{cos}\mathrm{\Omega }t\right)$$
(13)
with the frequency
$$\mathrm{\Omega }\equiv \sqrt{\omega ^2+k-\left(\frac{\gamma }{2}\right)^2}.$$
(14)
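For later reference, the kernel (13) and the frequency (14) are straightforward to tabulate; the following sketch evaluates them (using, for illustration, one of the parameter sets quoted later in the text) and enforces the condition (12):

```python
import numpy as np

def kernel(t, k, w2, gam):
    """Oscillatory memory kernel Gamma(t) of eq. (13) and frequency Omega of eq. (14)."""
    disc = w2 + k - (gam / 2) ** 2
    if disc <= 0:
        raise ValueError("parameters violate condition (12): kernel is not oscillatory")
    Omega = np.sqrt(disc)
    G0 = k**2 / (w2 + k)
    G = G0 * np.exp(-gam * t / 2) * (np.cos(Omega * t)
                                     + (gam / (2 * Omega)) * np.sin(Omega * t))
    return G, Omega

t = np.linspace(0.0, 40.0, 401)
G, Omega = kernel(t, k=0.75, w2=0.01, gam=1.74)   # one of the sets used later in the text
print(G[0], Omega)                                # Gamma(0) ~ 0.74, Omega ~ 0.056
```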
We expect that the oscillatory character of the memory kernel may lead to new regimes of dynamical behavior that will become evident in the time dependence of the rate coefficient. We do not pursue the hyperbolic case because we expect behavior similar to that found earlier for the exponential memory kernel and hence do not expect new behaviors in this case.
From the explicit form (11) of the effective potential we see that the additional quadratic term moves the minima of the wells of the bistable potential $`V(x)`$ from $`\pm 1`$ to
$$x_{min}=\pm \sqrt{1-\frac{\omega ^2k}{\omega ^2+k}}$$
(15)
and diminishes the barrier from $`1/4`$ to
$$\mathrm{\Delta }V_{eff}^{}=\frac{1}{4}-\frac{1}{2}\left(\frac{\omega ^2k}{\omega ^2+k}\right)+\frac{1}{4}\left(\frac{\omega ^2k}{\omega ^2+k}\right)^2.$$
(16)
Both effects can be seen in Fig. 1. It is easily shown that the barrier disappears entirely when $`\omega ^2k\ge \omega ^2+k`$, at which point the very nature of the problem changes. We thus constrain our parameters $`k`$ and $`\omega ^2`$ to ensure bistability:
$$\omega ^2k<\omega ^2+k.$$
(17)
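The positions of the minima (15), the reduced barrier (16), and the two constraints (12) and (17) are compactly summarized in the following sketch (the parameter values are again illustrative ones quoted later in the text):

```python
import math

def effective_potential_properties(k, w2, gam):
    """Minima, barrier height, and parameter checks for V_eff of eq. (11)."""
    c = w2 * k / (w2 + k)                  # coefficient of the extra quadratic term
    bistable = c < 1.0                     # condition (17): omega^2 k < omega^2 + k
    oscillatory = (gam / 2) ** 2 < w2 + k  # condition (12)
    x_min = math.sqrt(1.0 - c) if bistable else 0.0      # eq. (15)
    barrier = 0.25 - 0.5 * c + 0.25 * c**2                # eq. (16), equals (1 - c)^2 / 4
    return {"x_min": x_min, "barrier": barrier,
            "bistable": bistable, "oscillatory_kernel": oscillatory}

print(effective_potential_properties(k=0.14, w2=1.0, gam=0.667))
```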
To summarize, then, the model to be considered in this paper is given by Eqs. (9) with the potential (8) and the fluctuation-dissipation relation (10) (extended representation), or, completely equivalently, by Eq. (1) with the effective potential (11), the memory kernel (13), and the fluctuation-dissipation relation (2) (reduced representation). Whichever formulation is used, our parameters are constrained by the inequality (17), which ensures a bistable effective potential, and by the inequality (12), which ensures an oscillatory memory kernel. For some purposes the extended representation provides a more convenient viewpoint, while for others the reduced representation is more transparent. In particular, we find the extended representation more convenient for numerical simulations.
As a final point in this section it is important to note the altered significance of parameters in the oscillatory memory kernel compared to the Markovian or exponential models. In the latter two cases “high friction” and “low friction” refer to the value of $`\gamma `$ since this parameter directly measures the strength of the dissipative force that extracts energy from the reaction coordinate into the bath. Thus, in the exponential memory case $`\gamma `$ is the value of $`\mathrm{\Gamma }(0)`$ and also of the integrated memory kernel. In the case under consideration here, however, $`\gamma `$ is a measure of the dissipative force on the solvent coordinate and not directly on the reaction coordinate. Although $`\gamma `$ indirectly affects the loss of energy of the reaction coordinate, the energy loss channel is now principally determined by the coupling strength between the reaction coordinate and the solvent coordinate. Correspondingly, now $`\mathrm{\Gamma }(0)=k^2/(\omega ^2+k)`$. The integrated memory kernel is $`\mathrm{\Gamma }(0)\gamma /(\omega ^2+k)`$, thus reflecting the overall influence of $`\gamma `$. However, it is the coupling constant $`k`$ that now essentially determines whether we are in the “high friction” or “low friction” regime (more details will be presented in Sec. 4).
## 3 Simulation Method: Initial Conditions and Other Details
The quantity of interest is the time-dependent rate coefficient $`k(t)`$ for an ensemble of particles evolving according to $`x(t)`$ in Eq. (1) or Eq. (9). Numerically we find it more convenient to work with the extended system (9). The coefficient $`k(t)`$ is the time-dependent mean rate of passage of the particles across the barrier at $`x=0`$. The usual focus on the deviation of $`k(t)`$ from its equilibrium transition state theory (TST) value leads to the expression
$$k(t)=\kappa (t)k^{TST}$$
(18)
where $`k^{TST}=(\sqrt{2}/\pi )\mathrm{exp}(-1/4k_BT)`$ is the transition state theory rate that assumes that particles never recross the barrier. The transmission coefficient $`\kappa (t)`$ is the correction to transition state theory that includes both the temporal dynamics and the effects of those particles that do recross the barrier. We are interested in the dynamics of the transmission coefficient $`\kappa (t)`$.
Numerically, one might try to calculate $`k(t)`$ directly by starting all the particles in one well and computing at each time how many of them have crossed to the other well. It would require an exceedingly long calculation to gather statistically significant data in this manner, since the reaction barrier is very much higher than the thermal fluctuations. The reactive flux formalism that relies on Eq. (18) overcomes this difficulty since the transmission coefficient can be calculated by dealing only with an ensemble of particles whose initial position is above the barrier \[$`x(0)=0`$\]. The slow process of “getting there” is already included in $`k^{TST}`$. Half of the particles that start above the barrier have a positive velocity distributed according to the Boltzmann distribution in energy, and the other half have the same distribution but with negative velocities.
Upon imposing the initial conditions on $`y`$ discussed in Appendix A, we run quite a few iterations for the solvent coordinate evolution in order to obtain even better thermalization. Having achieved this, we then integrate the fully coupled system (9) with the following distributions for the initial reaction coordinate position $`x(0)\equiv x_{}`$ and initial velocity $`\dot{x}(0)\equiv v_x`$:
$$P(x_{})=\delta (x_{}),$$
(19)
$$P(v_x)=\frac{v_x}{k_BT}\mathrm{exp}\left(-\frac{v_x^2}{2k_BT}\right).$$
(20)
The numerical integration is carried out using the second order Heun’s algorithm . In all our runs our ensemble consists of $`N=10,000`$ particles and we use a very small time step ($`\mathrm{\Delta }t=0.001`$). The transmission coefficient is calculated from these simulated data according to the relation
$$\kappa (t)=\frac{N_+(t)}{N_+(0)}-\frac{N_{-}(t)}{N_{-}(0)},$$
(21)
where $`N_+(t)`$ and $`N_{-}(t)`$ are the numbers of particles that started with positive velocities and negative velocities respectively and at time $`t`$ are in or over the right hand well (i.e., the particles for which $`x(t)>0`$). Alternatively and completely equivalently (via a simple symmetry argument) one can start all the $`x`$ particles with a positive velocity and then
$$\kappa (t)=\kappa _+(t)-\kappa _{-}(t)$$
(22)
where $`\kappa _+(t)`$ is the fraction of particles that are in or over the right hand well at time $`t`$, and $`\kappa _{-}(t)`$ is the fraction in or over the left hand well. Furthermore, it is easily argued that
$$\kappa _{-}(t)=1-\kappa _+(t),$$
(23)
so it is sufficient to follow one or the other.
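The following is a minimal sketch of the procedure just described: a stochastic Heun integration of Eqs. (9)-(10) with an estimate of $`\kappa (t)`$ from Eqs. (22)-(23). The temperature, the ensemble size, and the Gaussian equilibrium initial conditions for the solvent coordinate at $`x(0)=0`$ are assumptions made here for illustration; the paper's Appendix A gives the precise prescription for the latter, and the production runs described in the text use $`N=10,000`$ particles.

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters: the low-dissipation set quoted below; kT, N are assumptions of this sketch.
k, w2, gam, kT = 0.14, 1.0, 0.667, 0.05
N, dt, n_steps = 2000, 0.001, 40000

def drift(x, vx, y, vy):
    """Deterministic part of eqs. (9); V(x) = (x^2 - 1)^2 / 4 with V_0 = 1."""
    ax = -x * (x**2 - 1.0) + k * (y - x)
    ay = -w2 * y + k * (x - y) - gam * vy
    return vx, ax, vy, ay

# Initial conditions: reaction coordinate on the barrier top with thermal-flux
# positive velocities (eq. 20); solvent coordinate drawn from conditional
# equilibrium Gaussians at x = 0 (an assumption standing in for Appendix A).
x  = np.zeros(N)
vx = np.sqrt(-2.0 * kT * np.log(1.0 - rng.random(N)))
y  = rng.normal(0.0, np.sqrt(kT / (w2 + k)), N)
vy = rng.normal(0.0, np.sqrt(kT), N)

amp = np.sqrt(2.0 * gam * kT * dt)          # noise amplitude from eq. (10)
kappa = np.empty(n_steps)

for step in range(n_steps):
    xi = amp * rng.standard_normal(N)       # same noise increment in both Heun stages
    dx1, dvx1, dy1, dvy1 = drift(x, vx, y, vy)
    xp, vxp = x + dx1 * dt, vx + dvx1 * dt
    yp, vyp = y + dy1 * dt, vy + dvy1 * dt + xi
    dx2, dvx2, dy2, dvy2 = drift(xp, vxp, yp, vyp)
    x  += 0.5 * (dx1 + dx2) * dt
    vx += 0.5 * (dvx1 + dvx2) * dt
    y  += 0.5 * (dy1 + dy2) * dt
    vy += 0.5 * (dvy1 + dvy2) * dt + xi
    kappa[step] = 2.0 * np.mean(x > 0.0) - 1.0   # kappa_+ - kappa_- via eq. (23)

print(kappa[-1])   # long-time transmission coefficient estimate
```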
## 4 Numerical Results
The number of independent parameters in the generalized Kramers problem with an oscillatory memory kernel is of course larger than for Markovian or exponential frictions. The values of $`\kappa (t)`$ and $`\kappa _{st}`$ now in general depend on $`k`$, $`\omega ^2`$, $`\gamma `$, and $`k_BT`$. Indeed, a systematic, even qualitative study of the transmission coefficient as was done, for example, in would be quite complex. We focus on a more modest goal, that is, to capture qualitatively the different types of temporal behavior of $`\kappa (t)`$ and the broad parameter regimes where each occurs. These include the three regimes identified for the exponential memory, namely, the energy-diffusion-limited, the non-adiabatic, and the caging regimes, as well as possible new behaviors.
The oscillatory memory kernel can exhibit different appearances depending on the parameter choices. Figure 2 exhibits three distinct “generic” appearances, each roughly representative of a distinct parameter regime. Two of these mimic behaviors of the exponential memory kernel and might be expected to lead to transmission coefficients similar to those obtained earlier. The third, the strongly oscillatory kernel, is new and might be expected to lead to new behavior. Let us consider each case in turn, along with the resulting transmission coefficients.
The solid curve kernel in Fig. 2 mimics the exponential memory kernel in the energy-diffusion-limited regime. The kernel $`\mathrm{\Gamma }(t)`$ is small at all times. In the case of the exponential memory kernel (4) this behavior was ensured by choosing $`\gamma `$ to be small, but in the oscillatory memory case, as noted earlier, the meaning of the parameters is different. Now a small value of $`\mathrm{\Gamma }(t)`$ and, in particular, a small value of $`\mathrm{\Gamma }(0)`$ is ensured if we choose small values of $`k^2/(\omega ^2+k)`$, that is, $`k`$ must be small and/or $`\omega ^2`$ must be large. Note that the choices must still obey the constraint (17), but this is not a problem. The value of $`\gamma `$ is not constrained by the low friction requirement, but it is constrained by Eq. (12) if we want to ensure that we are in the oscillatory regime. The value of $`\gamma `$ determines the oscillation frequency of the memory kernel $`\mathrm{\Gamma }(t)`$ but not its magnitude.
Typical values of the parameters that satisfy the conditions to produce energy-diffusion-limited behavior while preserving the oscillatory character of the memory kernel are $`\omega ^2=1.0`$, $`k=0.14`$, and $`\gamma =0.667`$. These are the values used to produce the solid curve in Fig. 2. In this regime the low dissipation causes the dynamics of the system to be dominated by the slow variation of the energy and consequently by the repeated inertial recrossing of the barrier before the particles are trapped in one well or the other. As expected, we find the typical energy-diffusion-limited behavior for $`\kappa (t)`$ (oscillatory curve in Fig. 3) that consists of a very small initial decay to a plateau up to a time beyond which $`\kappa (t)`$ decays in an oscillatory manner to its equilibrium value. As shown and discussed in our earlier work , the first decay is due to the few low-energy particles that immediately change their initial direction due to a thermal fluctuation and are trapped in the well opposite to the one toward which they were initially moving. The oscillations are associated with the essentially inertial successive recrossings of the higher-energy particles.
All the arguments developed in the reduced system should have a counterpart in the extended scheme. In the energy-diffusion-limited case we have considered small $`k`$, large $`\omega ^2`$, and arbitrary $`\gamma `$ \[subject to the constraint (12)\]. Large $`\omega ^2`$ leads to a narrow harmonic potential for $`y`$ \[and consequently $`y(t)`$ remains small\], and small $`k`$ means weak coupling. Since the coupling term $`ky`$ in Eqs. (9) provides the only energy loss channel for the $`x`$ coordinate, large $`\omega ^2`$ and small $`k`$ therefore lead to low dissipation.
Let us now move on to the high dissipation regime. The dashed curve in Fig. 2 mimics the exponential memory kernel in the diffusion-limited regime. The kernel $`\mathrm{\Gamma }(t)`$ has a high initial value and decays essentially monotonically (although we are in the oscillatory regime). In the case of the exponential memory kernel (4) this regime results when $`\gamma `$ is large. For the exponential memory kernel, the choice of the second parameter, $`\tau `$, further determines two different regimes of behavior. If the correlation time $`\tau `$ is such that $`\gamma /\tau <1`$, the transmission coefficient decays monotonically and one is said to be in the non-adiabatic regime. This is also the unique high-dissipation behavior associated with the Markovian problem. On the other hand, if $`\tau `$ is such that $`\gamma /\tau >1`$, one is in the caging regime in which the behavior of the transmission coefficient is oscillatory (but quite differently so than in the low dissipation regime). The quasi-exponential dashed memory kernel in Fig. 2 corresponds to the non-adiabatic behavior since the ratio of the initial value (about 0.8) to the decay time of the kernel (about 4.0) is clearly smaller than unity. In order to have the oscillatory memory mimic the non-adiabatic exponential memory case as in the figure we require $`\gamma `$ to be small (since now $`\gamma `$ plays the role that $`1/\tau `$ did before), and $`\mathrm{\Omega }`$ to be small as well (to minimize oscillatory effects). A typical set of values that meets these various conditions is $`\omega ^2=0.01`$, $`k=0.75`$, and $`\gamma =1.74`$, which leads to $`\mathrm{\Omega }=0.0557`$.
The associated transmission coefficient for these parameters exhibits the typical features of the non-adiabatic regime, as shown by the monotonic curve in Fig. 3, namely, a smooth rather rapid decay to the equilibrium value. As in the case of an exponential memory, this decay in the non-adiabatic regime looks Gaussian rather than exponential at short times.
In the extended system small $`\omega ^2`$ means that the potential in $`y`$ is very wide and for this reason $`y(t)`$ easily achieves large values. Large $`k`$ represents strong coupling, and the combination of both conditions leads to high dissipation for the $`x`$ coordinate.
As noted above, the other regime found for an exponential memory kernel with high dissipation is the caging regime, which in that case occurs when both $`\gamma `$ and $`\tau `$ are large, with $`\gamma /\tau >1`$. Interestingly, the constraints on the parameters and the shape of the oscillatory friction kernel do not admit this regime. This can be understood from the following argument. Caging is achieved when $`\mathrm{\Gamma }(t)`$ is essentially constant over some substantial time range, so that the friction integral in Eq. (1) over this time can be approximated by a linear force on $`x(t)`$ and the resulting potential becomes monostable. In our case, this resulting potential $`V_r(x)`$ would be
$$V_r(x)=\frac{1}{4}(x^2-1)^2+\frac{1}{2}\frac{\omega ^2k}{\omega ^2+k}x^2+\frac{1}{2}\frac{k^2}{\omega ^2+k}x^2=\frac{1}{4}(x^2-1)^2+\frac{1}{2}kx^2.$$
(24)
From this expression it is easy to deduce that the potential $`V_r(x)`$ loses its barrier when $`k>1`$. The combination of the condition that $`\mathrm{\Gamma }(t)`$ behave roughly as a constant for some time interval ($`\gamma `$ small) and that the resulting potential lose its barrier during this time ($`k>1`$) would lead to a caging regime with effective caging potential frequency $`\omega _{cag}=\sqrt{k1}`$. However, this combination of conditions can not be satisfied with an oscillatory memory. If we increase the value of $`k`$ above $`1`$, we also have to increase $`\gamma `$ ($`\mathrm{\Omega }`$ has to remain small to avoid pronounced oscillations), but this in turn leads to the rapid exponential decay of $`\mathrm{\Gamma }(t)`$. It is thus not possible to achieve the conditions for the caging regime with trigonometric oscillatory friction. The caging regime is easily captured in the hyperbolic case (cf. Appendix A), since then $`\mathrm{\Gamma }(t)`$ can take on a very high initial value that can be sustained for a long time.
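The loss of the barrier in Eq. (24) for $`k>1`$ is easy to verify numerically; a minimal sketch:

```python
import numpy as np

def V_r(x, k):
    """Resulting potential of Eq. (24): bistable part plus the linearized friction term."""
    return 0.25 * (x ** 2 - 1.0) ** 2 + 0.5 * k * x ** 2

x = np.linspace(-1.5, 1.5, 2001)
for k in (0.5, 1.0, 1.5):
    barrier = V_r(0.0, k) - V_r(x, k).min()   # height of the barrier at x = 0 above the minima
    note = "" if k <= 1.0 else f"  (monostable, omega_cag = {np.sqrt(k - 1.0):.3f})"
    print(f"k = {k}:  barrier height = {barrier:.4f}{note}")
```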
We have thus seen that the form of $`\mathrm{\Gamma }(t)`$ and the constraints on $`k`$, $`\omega ^2`$, and $`\gamma `$ determine which regimes typical of exponential memories can also be captured with an oscillatory memory. The requirements described so far have been met by either choosing $`\mathrm{\Gamma }(0)`$ to be small (low dissipation) or large (high dissipation) while minimizing the amplitude of the oscillations.
New behaviors for the dynamics of $`\kappa (t)`$ may appear in parameter regimes that emphasize the oscillatory behavior of $`\mathrm{\Gamma }(t)`$. To provide such emphasis we minimize the damping effects of the exponential part by choosing $`\gamma `$ to be small. Further choosing a very small value for $`\omega ^2`$ and a medium value for $`k`$ (large enough to get high amplitudes of oscillation of $`\mathrm{\Gamma }(t)`$, but limited so that the frequency $`\mathrm{\Omega }`$ is not too high) we obtain the oscillatory friction shown as the dotted kernel in Fig. 2. Since $`\gamma `$ and $`\omega ^2`$ are very small, the frequency of the oscillations is
$$\mathrm{\Omega }\approx \sqrt{k}.$$
(25)
In this regime at low temperatures an entirely new temporal behavior emerges for $`\kappa (t)`$. We generically call this the “stair-like” regime. It is shown for a typical set of parameter values in Fig. 3 (corresponding to those of the dotted curve in Fig. 2). The main feature of this new behavior is that $`\kappa (t)`$ exhibits a “stair” shape, namely, it decays via a series of steps followed by plateaus. The explanation of this behavior, including the period of the steps, the dependences on the parameters $`\gamma `$, $`\omega ^2`$, $`k`$, and $`k_BT`$, and the connections with the other regimes, is presented in detail in Sec. 6.
## 5 Approximations
In this section we lay the groundwork for the arguments invoked in the next section, where we discuss the stair-like regime in detail. Our explanations are semi-quantitative, that is, we do not develop a theory that reproduces the stair-like curve in the figure in all its details. We mention this because in fact such theories are available for the two other curves . In the high dissipation regime the KT theory predicts the monotonic decay in the non-adiabatic regime, and this prediction, shown in Fig. 3, is seen to be quantitatively very good (see also ). In the low dissipation regime we have shown that our theory also leads to very good quantitative agreement with numerical results for the entire time evolution of $`\kappa (t)`$ . We have not derived such a detailed formula for the stair-like regime, but we nevertheless have been able to gain considerable understanding of this behavior, and this is what we shall present.
Our insights turn out to be most complete if we invoke both representations of the oscillatory problem, the extended as well as the reduced. Furthermore, an understanding of the early time dependence of $`\kappa (t)`$ in both of these representations turns out to be very helpful, even if the approximations that are invoked are not valid over the entire time regime – the breakdown of approximations can also yield useful insights. We thus first turn to the early time behavior.
Consider first the reduced representation. KT theory focuses on the way in which particles subject to dynamics of the generic form (1) with the potential approximated by a parabolic barrier diffuse to one side or the other of the barrier. Since their analysis is restricted to a parabolic barrier (rather than a bistable potential), the theory is appropriate only for high dissipation, that is, when the reaction coordinate never recrosses the barrier once it has left the barrier region. In other cases KT theory may (and indeed does) capture only the initial decay, typically up to the first plateau value of $`\kappa (t)`$, but it does not capture the asymptotic values $`\kappa _{st}`$. This is seen in Fig. 3, where the KT theory predictions are shown for each of the generic transmission coefficients. KT theory works very well for all times for the non-adiabatic high dissipation curve, and captures the initial decay in the energy-diffusion-limited (dashed curve) and stair-like (dot-dashed curve) cases. These initial agreements are fairly typical for all parameter values.
Consider now the extended representation (9). We introduce an even simpler approximation in this representation that also captures the early time behavior when the dissipation of energy in the $`x`$-coordinate is slow and that facilitates our analysis of the stair-like regime. The approximation is based on three main assumptions, all appropriate only at short times. One is akin to the argument we used earlier in the energy-diffusion-limited problem, namely, that the main influence of the temperature arises from the initial thermal distributions. Thus, as long as the initial distributions are chosen correctly, that is, according to Eqs. (A6), (A7), (19), and (20), the thermal effects in the form of the explicit random force acting on the solvent coordinate can be omitted from the dynamical equations. The second is the omission of the dissipation term, i.e., we set $`\gamma `$ to zero. Note that this is the dissipative force on the solvent coordinate; the principal initial dissipative channel for the reaction coordinate $`x(t)`$ is its coupling to the $`y`$ coordinate via $`k`$, and this is retained. The third is to use a parabolic barrier to approximate the potential. With these assumptions the initial decay of the transmission coefficient is due to the low-energy particles (i.e. those barely above the barrier) that are pulled by the $`y`$ coordinate in a direction opposite to the one indicated by their initial velocity, as described by the simplified deterministic coupled linear equations
$`\ddot{x}(t)`$ $`=(1-k)x+ky`$
$`\ddot{y}(t)`$ $`=-(\omega ^2+k)y+kx.`$ (26)
With the initial distributions (A6), (A7), (19) and (20) we can then use the form Eq. (22) for the transmission coefficient to write
$$\kappa (t)=\int _{-\infty }^{\infty }dv_y\int _{-\infty }^{\infty }dy_0\int _0^{\infty }dv_xP(v_x)P(y_0)P(v_y)\mathrm{sgn}[x(t;v_x,y_0,v_y)]$$
(27)
where $`\mathrm{sgn}[x]`$ is the sign function, that is, $`\mathrm{sgn}[x]=+1`$ if $`x>0`$ and $`\mathrm{sgn}[x]=-1`$ if $`x<0`$, and $`x(t;v_x,y_0,v_y)`$ is the solution of Eqs. (26) with initial conditions $`v_x`$, $`y_0`$, $`v_y`$, and $`x_0=0`$.
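A minimal Monte Carlo realization of this short-time approximation is sketched below. The flux-weighted form of $`P(v_x)`$ is an assumption (Eq. (22) is not reproduced in this excerpt), as are the values of $`k_BT`$ and $`\omega ^2`$; the distributions for $`y_0`$ and $`v_y`$ follow Eqs. (A6) and (A7).

```python
import numpy as np

rng = np.random.default_rng(0)

def kappa_short_time(k=0.3, w2=0.01, kT=0.05, t_max=15.0, dt=0.01, n=20000):
    """Monte Carlo estimate of kappa(t) from the noiseless linearized dynamics (26),
    averaging sgn[x(t)] over the initial distributions as in Eq. (27)."""
    v_x = np.sqrt(-2.0 * kT * np.log1p(-rng.random(n)))    # flux-weighted v_x > 0 (assumed form)
    y   = rng.normal(0.0, np.sqrt(kT / (w2 + k)), n)       # Eq. (A6)
    v_y = rng.normal(0.0, np.sqrt(kT), n)                  # Eq. (A7)
    x   = np.zeros(n)                                      # x_0 = 0

    steps = int(t_max / dt)
    kappa = np.empty(steps)
    for i in range(steps):
        ax = (1.0 - k) * x + k * y        # parabolic-barrier force plus coupling, Eq. (26)
        ay = -(w2 + k) * y + k * x
        v_x, v_y = v_x + ax * dt, v_y + ay * dt   # semi-implicit Euler step
        x, y = x + v_x * dt, y + v_y * dt
        kappa[i] = np.mean(np.sign(x))    # Eq. (27)
    return np.arange(1, steps + 1) * dt, kappa

t, kap = kappa_short_time()
print(kap[::150])   # coarse view of the early-time decay through the first plateau
```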
The left panel in Fig. 4 shows two simulations in the energy-diffusion-limited regime along with the results of (27) with (26). The early time agreement in this regime is clearly excellent, as seen in the detail inset. Note that both simulations exhibit the same early-time behavior, even though the value of $`\gamma `$ is very different for the two cases (and not particularly “small” in one of the two cases). Clearly, in this regime the values of $`k`$ and $`\omega ^2`$ determine the early time behavior of the transmission coefficient.
More importantly for our purposes here, the right panel of Fig. 4 shows similar early-time agreement between the stair-like numerical results and the approximation. The agreement extends through the first plateau. Note that the approximation captures the (slightly) non-monotonic behavior of the simulation results. The agreement between the two curves provides the basis for our analysis of the stair-like regime in the next section.
## 6 The Stair-like Regime
We saw in Sec. 4 that the stair-like regime is achieved when $`\gamma `$ and $`\omega ^2`$ are small and the temperature is low. The main feature of this new behavior is that the transmission coefficient shows progressive decays connected by plateaus. The length of the plateaus (determined by a time period that we call $`T_\kappa `$), the depths of the decays, and all the characteristics that define this regime depend on the values of the parameters, principally $`k`$. Our understanding of this regime is based on argumentation that relies mainly on the extended system, although some of the arguments can easily be translated to the language of the reduced scheme.
### 6.1 Trajectories
A particularly helpful view of the process is gained by looking at explicit trajectories, as illustrated in Fig. 5. The solid trajectory $`x(t)`$ in the left panel illustrates the typical repeated recrossings in the low dissipation energy-diffusion-limited regime. The dotted curves correspond to two typical trajectories in the non-adiabatic regime. Trajectories in this regime almost never recross the barrier. Those that do recross the barrier do so at short times (before straying far from $`x=0`$), and typically do so only once. These trajectories reinforce the idea that in this regime particles are quickly trapped in one well or the other due to the high dissipation. As seen in the right panel of Fig. 5, the trajectories in the stair-like regime are considerably more complex – this complexity distinguishes the stair-like dynamics from the other regimes. For example, in this new regime one finds $`x`$-particles that remain localized over one well (even though they have sufficient energy to cross the barrier) and that after circling there several times may suddenly recross the barrier. This behavior is not found in any other regime studied so far.
Studying additional $`x`$-trajectories besides those shown explicitly in the right panel of Fig. 5 leads to the realization of a number of important points. First, we note that
* the $`x`$-particles cross the barrier only at certain specific times.
For our typical parameters, these times are $`t=10`$, $`16`$, $`25`$, $`31`$ and $`40`$ (approximately), which coincide with the decay times $`T_\kappa `$ for the associated $`\kappa (t)`$ in Fig. 3. Moreover, since we are working with low temperatures, the energy of the $`x`$-particles is typically not large enough to recross the barrier many times. Actually, we have observed that
* most of the $`x`$-particles that cross the barrier do it only once.
Indeed, only about $`1\%`$ of the particles show multiple recrossings in our typical example. The fact that most particles do not return to their original well once they have crossed the barrier leads to
* steps rather than oscillations.
On the other hand, small friction leads to very slow energy loss, and it is for this reason that
* even at long times crossing the barrier is still possible.
This combination of features characteristic of the energy-diffusion-limited and non-adiabatic regimes ultimately leads to the
* appearance of successive steps and plateaus.
At this point there are two obvious questions about this regime: i) Why do we see decays only at fairly sharply defined specific times and what are these times? In other words, how is the period $`T_\kappa `$ determined? ii) What determines the depth of each decay? The answers to these and other questions are given in the following subsections by considering the effects of varying the parameters of our model.
However, before going ahead, we should first understand how an $`x`$-particle can be trapped in one well in spite of having enough energy to cross the barrier, as well as how an $`x`$-particle can cross the barrier seemingly without having enough energy to do it. The reason for both strange situations is that the behavior of the $`x`$-particles may depend strongly on that of their associated $`y`$-oscillators. A given $`x`$-particle with energy greater than the barrier height can be “trapped” in one well because each time it goes toward the barrier its coupled oscillator pulls it back. Conversely, a given $`x`$-particle with apparently insufficient energy can cross the barrier by being pulled by its $`y`$ oscillator. Thus, the coupling between $`x`$ and $`y`$ may determine the times at which the particles cross the barrier and therefore the times for the decays of $`\kappa (t)`$.
### 6.2 Dependence on $`k`$
To deduce the role of the coupling constant $`k`$, we depart from our typical value ($`k=0.3`$) and consider the trajectories for two cases on either side of this value but that still essentially preserve the stair-like behavior. We call them the large-$`k`$ case ($`k=0.5`$) and the small-$`k`$ case ($`k=0.1`$). The parameter $`k`$ determines the extent to which $`x`$ and $`y`$ particle dynamics are in synchrony.
Since $`\kappa (t)`$ contains information averaged over ensembles of particles, we are interested in the average trajectories of both $`x`$ and $`y`$ coordinates. We thus plot $`\langle x(t)\rangle `$ and $`\langle y(t)\rangle `$, where $`\langle \cdots \rangle `$ here means an average over all the particles that are in the right well ($`x>0`$) at each given time. Fig. 6 shows averaged trajectories for the small-$`k`$ (left panel) and large-$`k`$ (right panel) cases.
For the small-$`k`$ case we readily observe that the motions of $`x(t)`$ and $`y(t)`$ are essentially uncorrelated. Each dynamics proceeds with a different principal frequency of oscillation. A Fourier analysis of the trajectories reveals that the peak frequency for $`x(t)`$ is $`1.022`$ while that of $`y(t)`$ is $`0.306`$. These frequencies can be associated with two characteristic frequencies of our problem. That of $`x(t)`$ corresponds to the frequency of the particle in the bistable potential, namely $`1.022\approx 2\pi /T_{semi}`$, where $`T_{semi}`$ is roughly the average semiorbit time for an ensemble of particles above the barrier in the double-well potential. In Ref. we have shown that the semiorbit time for a particle at an energy $`\epsilon `$ above the barrier is $`t_\epsilon =\mathrm{ln}(16/\epsilon )+O(\epsilon \mathrm{ln}\epsilon )`$. An average of this time over a thermal distribution then directly yields $`T_{semi}\approx 3.35-\mathrm{ln}k_BT`$. The frequency of $`y(t)`$, on the other hand, coincides with the frequency of $`\mathrm{\Gamma }(t)`$, namely $`0.306\approx \mathrm{\Omega }\approx \sqrt{k+\omega ^2}`$. This frequency characterizes the motion of $`y(t)`$ in the second equation of the extended system, Eq. (9), when the coupling contribution is neglected \[see also Eq. (26)\]. The $`x`$-particles can cross the barrier with frequency $`2\pi /T_{semi}`$ since they do not care where their associated $`y`$-particles are. Thus in this weakly coupled regime
$$T_\kappa \approx T_{semi}\approx 3.35-\mathrm{ln}k_BT.$$
(28)
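The constant 3.35 is just $`\mathrm{ln}16+0.5772`$ (cf. Eq. (30) with $`V_0=1`$), as a quick Monte Carlo average of the semiorbit time over a Boltzmann distribution confirms; a minimal sketch:

```python
import numpy as np

def T_semi(kT, n=200000, seed=1):
    """Thermal average of the semiorbit time t_eps = ln(16/eps) over energies eps above the barrier."""
    rng = np.random.default_rng(seed)
    eps = rng.exponential(kT, n)          # Boltzmann-distributed excess energies
    return np.mean(np.log(16.0 / eps))

for kT in (0.05, 0.02, 0.01):
    print(f"kT = {kT}:  sampled {T_semi(kT):.3f}   vs   3.35 - ln(kT) = {3.3498 - np.log(kT):.3f}")
```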
The large-$`k`$ case exhibits a different behavior. In this case $`x(t)`$ and $`y(t)`$ are essentially synchronized. We see in Fig. 6 that $`x(t)`$ has two characteristic periods: a shorter one associated with motion in the bistable potential ($`2\pi /T_{semi}`$) and a longer one that matches that of $`y(t)`$. The frequency of the latter is $`0.624`$ and coincides with $`\sqrt{k+\omega ^2}\approx \sqrt{k}`$. Thus, the motion of $`x`$ is now dominated by the dynamics of its coupled oscillator. The consequence of the strong coupling is that now
$$T_\kappa \approx \frac{2\pi }{\mathrm{\Omega }}\approx \frac{2\pi }{\sqrt{k}}.$$
(29)
These ideas can be further supported by considering the two-variable potential, Eq. (7), drawn in contour form in Fig. 7 for the small-$`k`$ (left panel) and large-$`k`$ (right panel) cases. These plots clearly illustrate the correlations between $`x`$ and $`y`$ (or the lack thereof). When the system is in one of the two two-dimensional wells, $`x`$ and $`y`$ remain more tightly bound in the large-$`k`$ case than in the small-$`k`$ case. In particular, when $`k`$ is small the $`y`$-particle can move away from $`x`$ even when the system has already fallen into one well.
Further, consider the likely pathways followed by the system as it crosses, say, from the right to the left. In the small-$`k`$ case the likely path is for $`y`$ to decrease first (perhaps even to negative values), followed by a change of $`x`$ from $`x>0`$ to $`x<0`$. On the other hand, in the large-$`k`$ case it is easier for $`x`$ to first move from $`x>0`$ to $`x<0`$ to be followed by $`y`$. Therefore, in the large-$`k`$ case the crossing rate is determined by the frequency of $`y(t)`$ so that $`T_\kappa \approx 2\pi /\sqrt{k}`$; in the small-$`k`$ case the crossing rate is limited by the motion of $`x(t)`$ and hence $`T_\kappa \approx T_{semi}`$.
Figure 8 shows the time-dependent transmission coefficient for the two cases. The times $`T_\kappa \approx 7.55`$ (small-$`k`$ case) and $`T_\kappa \approx 10`$ (large-$`k`$ case) obtained from the above arguments are consistent with the steps in the figure (measured from mid-point to mid-point), most clearly in the length of the first step.
In addition to the step period differences, the left panel in Fig. 8 illustrates the $`k`$-dependence of the depths of the steps in the stair-like transmission coefficient. The steps are clearly deeper when $`k`$ is small. The reason for this is that the total system loses energy by dissipation only through the $`y`$ coordinate. Although both cases in the figure correspond to the same value of $`\gamma `$, the $`x`$-particles can retain energy for a longer time when $`k`$ is small. This allows more $`x`$-particles to cross the barrier, and it allows them to do so at later times. The deeper and more numerous clear steps in the small-$`k`$ case are a direct manifestation of these features.
Two further points should be noted. One is the symmetry of the semiorbit time $`t_\epsilon `$ with respect to $`\epsilon `$. That is, the semiorbit time of a particle with an energy $`\epsilon `$ above the barrier is the same as the orbit time of a particle with energy $`\epsilon `$ below the barrier (for small $`\epsilon `$). This symmetry is important because it allows particles to remain in synchrony; otherwise the steps in $`\kappa (t)`$ would be blurred. The other point is the dependence of $`\kappa (t)`$ and consequently of the period $`T_\kappa `$ on barrier height. In general, $`t_\epsilon \approx V_0^{-1/2}\mathrm{ln}(16V_0/\epsilon )`$ (which reduces to our previous expression when $`V_0=1`$), and an average of $`t_\epsilon `$ over a thermal distribution of particles above the barrier yields the generalization of Eq. (28)
$$T_\kappa \approx \frac{1}{V_0^{1/2}}\left(\mathrm{ln}16V_0+0.5772-\mathrm{ln}k_BT\right).$$
(30)
The right panel in Fig. 8 shows the transmission coefficient in the small-$`k`$ case for three values of the barrier height. The corresponding period estimates for $`T_\kappa (V_0)`$ obtained from Eq. (30) are $`T_\kappa (0.5)=9.70`$, $`T_\kappa (1.0)=7.55`$, and $`T_\kappa (2.0)=5.83`$. These decreasing periods with increasing barrier height are clearly consistent with the numerical results.
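The quoted period estimates follow from Eq. (30) once a temperature is fixed; $`k_BT\approx 0.015`$ (an assumed value, not stated explicitly here) reproduces them. A sketch:

```python
import numpy as np

def T_kappa(V0, kT):
    """Step-period estimate of Eq. (30)."""
    return (np.log(16.0 * V0) + 0.5772 - np.log(kT)) / np.sqrt(V0)

kT = 0.015   # assumed; chosen so that T_kappa(1.0, kT) matches the quoted 7.55
for V0 in (0.5, 1.0, 2.0):
    print(f"V0 = {V0}:  T_kappa = {T_kappa(V0, kT):.2f}")   # approx 9.70, 7.55, 5.83
```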
### 6.3 Dependence on $`\gamma `$
We have seen that successive steps in the transmission coefficient arise because the particles lose their energy slowly. This requires $`\gamma `$ to be small – but not too small (cf. below). Indeed, if $`\gamma `$ is decreased we expect particles to lose their energy even more slowly, which leads to a larger number of deeper steps. However, as $`\gamma `$ continues to decrease we expect to begin to see particles that cross the barrier more than once before becoming trapped. This leads to oscillations in $`\kappa (t)`$ and, eventually, to energy-diffusion-limited behavior. Deeper steps and the first appearance of small oscillations with decreasing $`\gamma `$ are clearly evident in the left panel of Fig. 9.
Conversely, if $`\gamma `$ is increased, particles lose their energy more rapidly, and fewer particles cross the barrier at all; those that do so cross at most once. In this case, as seen in Fig. 9, the steps are less deep and almost disappear at long times. Indeed, the limit of the stair-like behavior with increasing $`\gamma `$ is the monotonic non-adiabatic regime, where only a few particles cross the barrier and they do so at very short times.
### 6.4 Dependence on $`k_BT`$
Finally, we consider the temperature dependence of the transmission coefficient in the stair-like regime. As temperature is increased, all else remaining the same, there is a greater number of more energetic $`x`$-particles above the barrier. Two aspects of their behavior dominate the resulting transmission coefficient. One is that the particles now have a greater range of semiorbit times $`t_\epsilon `$; the other, more important, effect is that particles are now sufficiently energetic that they can recross the barrier more than once. These are precisely the features that lead to the typical oscillatory behavior of the transmission coefficient in the energy-diffusion-limited regime, and it is towards this behavior that the stair-like regime tends with increasing temperature. The right panel in Fig. 9 shows this progression very clearly: the highest temperature results look very much like the earlier curves for the energy-diffusion-limited case. We should note the deeper first decay in the $`k_BT=0.5`$ curve than observed in our earlier illustrations. This is due to the fact that here we have chosen $`\omega ^2`$ to be very small (a requirement for the stair-like regime). This causes the initial thermal distribution of $`y(0)`$ to pull back $`x`$-particles more effectively than in our earlier example, and this in turn leads to the deeper decay.
### 6.5 Arguments in the Reduced System
Since the extended \[Eq. (9)\] and reduced \[Eq. (1)\] systems are entirely equivalent, it is of course possible to explain the new stair-like regime in terms of quantities and equations associated with the reduced representation. It is perhaps somewhat more cumbersome and less transparent, but the extended representation analysis offers a helpful guide. For example, it is useful to realize that a “negative friction” \[i.e., a negative value of the memory kernel in Eq. (1)\] in the reduced representation is associated with the situation where in the extended system the $`y`$-coordinate pulls the $`x`$-particle in the direction of $`\dot{x}`$.
To explain the different decay periods $`T_\kappa `$ in the reduced representation we note that the memory kernel $`\mathrm{\Gamma }(t)`$ is proportional to $`k^2`$ and the random force $`F(t)`$ is proportional to $`k`$. To cross the barrier, an $`x`$-particle must be moving towards $`x=0`$. When $`k`$ is small, the dynamics of $`x`$ as it moves in the barrier region is thus dominated by the bistable potential $`V_{eff}(x)`$. In particular, the decay periods in $`\kappa (t)`$ are determined by the frequency of the particles moving in the bistable potential with energies slightly larger and slightly smaller than the barrier height, which directly leads again to the earlier estimate $`T_\kappa \approx T_{semi}`$ \[cf. Eq. (28)\]. The slow dissipation of $`x`$-energy associated with small $`k`$ allows for many deep steps in $`\kappa (t)`$.
As $`k`$ increases, the bistable potential becomes relatively less important and the first and third terms on the right of Eq. (1) increasingly dominate the dynamics of $`x(t)`$. The steps in $`\kappa (t)`$ then acquire the period $`T_\kappa \approx 2\pi /\mathrm{\Omega }\approx 2\pi /\sqrt{k}`$ associated with the friction kernel. The more rapid dissipation of $`x`$-energy associated with larger $`k`$ leads to a small number of shallow steps.
The dependence on $`\gamma `$ in this representation is quite clear. When $`\gamma `$ increases, the memory kernel decays more rapidly and the oscillations in $`\mathrm{\Gamma }(t)`$ become irrelevant, thus leading to non-adiabatic behavior of $`\kappa (t)`$. Decreasing $`\gamma `$, on the other hand, leads to pronounced oscillations (and at times negative values) of $`\mathrm{\Gamma }(t)`$. As a result, even particles that start out with energies too low to cross the barrier early may do so at a later time, thus explaining the step structure of $`\kappa (t)`$. The temperature dependence can also be understood: for a given (low) $`\gamma `$, increasing the temperature leads to a greater number of particles above the barrier that can recross more than once before becoming trapped. The steps then become oscillations and the energy-diffusion-limited behavior is recovered.
## 7 Conclusions
In this work we have analyzed the time dependent transmission coefficient for the capture of a particle in one or the other well of a bistable potential as described by the generalized Kramers equation Eq. (1) with an oscillatory memory kernel. The time dependence of the transmission coefficient depends sensitively on the parameters of the model. The equivalence of this model to an “extended system” wherein the reaction coordinate is linearly coupled to a nonreactive coordinate which is in turn coupled to a heat bath, Eq. (9), facilitates the understanding of the various time dependences that are observed.
The different behaviors observed for the transmission coefficient in various parameter regimes are summarized in Fig. 3. The non-adiabatic (monotonic decay) and energy-diffusion-limited (oscillatory decay) behaviors have been encountered earlier in the classic Kramers problem and in the generalized Kramers problem with an exponential memory kernel . The non-adiabatic decay is observed when the reaction coordinate loses its energy rapidly, so that particles cross the barrier only at early times and at most once before becoming trapped. The oscillatory behavior is observed when the reaction coordinate loses its energy slowly, thus allowing several recrossings of the barrier before trapping. These regimes are associated with parameter values that suppress the oscillations of the memory kernel. There is a third behavior observed with exponential friction, the caging regime, which is not observed with an oscillatory memory friction.
The third behavior shown in Fig. 3, which consists of a stair-like decay of the transmission coefficient, is peculiar to the oscillatory memory friction and occurs when the oscillations in the memory kernel are pronounced. This behavior is observed when particles cross the barrier at most once, but not necessarily at early times. In turn, this can be explained by the fact that particles that at one time may not have enough energy to cross the barrier may acquire sufficient energy to do so later via their coupling to the nonreactive coordinate (or, equivalently, when the oscillations in the memory kernel periodically lead to negative values of the kernel). Although the particles cross the barrier at most once, and not necessarily at early times, the crossing events can only occur at fairly sharply defined time intervals that we call $`T_\kappa `$. Hence the appearance of fairly sharp steps in the transmission coefficient. We explain in detail the conditions that lead to the stair-like behavior, the way in which the step time $`T_\kappa `$ and the step depths depend on the parameters of the system, and the way in which this behavior tends to the energy-diffusion-limited or non-adiabatic cases as parameters are modified.
If there were no barrier crossings at all in the Kramers problem, the transmission coefficient would be unity. Single barrier crossings only at early times lead to monotonic decay of the transmission coefficient. Single recrossings that are possible only at specified time intervals $`T_\kappa `$ lead to the new stair-like regime. Multiple recrossings lead to oscillatory behavior. The time dependence of the transmission coefficient clearly provides an interesting mirror for the barrier crossing dynamics of the generalized Kramers problem.
## Acknowledgments
One of us (R. R.) gratefully acknowledges the support of this research by the Ministerio de Educación y Cultura through Postdoctoral Grant No. PF-98-46573147. This work was supported in part by the U. S. Department of Energy under Grant No. DE-FG03-86ER13606, and in part by the Comisión Interministerial de Ciencia y Tecnología (Spain) Project No. DGICYT PB96-0241.
## Appendix A Extended vs Reduced Model
Here we present the analytical details connecting Eqs. (1) and (9). Formal solution of Eq. (9) gives
$`y(t)`$ $`={\displaystyle \frac{v_y-y_0\lambda _2}{\lambda _1-\lambda _2}}e^{\lambda _1t}+{\displaystyle \frac{y_0\lambda _1-v_y}{\lambda _1-\lambda _2}}e^{\lambda _2t}`$
$`+{\displaystyle \frac{1}{\lambda _1-\lambda _2}}{\displaystyle \int _0^t}dt^{\prime }\left(e^{\lambda _1(t-t^{\prime })}-e^{\lambda _2(t-t^{\prime })}\right)[kx(t^{\prime })+f(t^{\prime })],`$ (A1)
where the first two terms on the right hand side correspond to the homogeneous solution that depends on the initial conditions $`y(0)\equiv y_0`$ and $`\dot{y}(0)\equiv v_y`$. The integral corresponds to the inhomogeneous solution. The contribution proportional to $`k`$ leads to the memory friction term and the contribution containing $`f(t^{\prime })`$ is associated with the colored noise in the reduced model. The roots $`\lambda _i`$ are
$$\lambda _{1,2}=-\frac{\gamma }{2}\pm \sqrt{\left(\frac{\gamma }{2}\right)^2-\omega ^2-k}.$$
(A2)
Substitution of this formal solution in Eq. (9) and regrouping of terms directly leads to the reduced model (1) with the memory kernel
$$\mathrm{\Gamma }(t)=\frac{k^2}{\lambda _2-\lambda _1}\left(\frac{e^{\lambda _1t}}{\lambda _1}-\frac{e^{\lambda _2t}}{\lambda _2}\right).$$
(A3)
We also get the explicit form for the effective potential of the reaction coordinate,
$$V_{eff}(x)=V(x)+\frac{1}{2}\frac{\omega ^2k}{\omega ^2+k}x^2.$$
(A4)
However, the following extra initial conditions must be fulfilled in order to avoid transient terms in the reduced model:
$$x(0)=0;\quad \langle v_y\rangle =\langle y_0\rangle =0;\quad \langle v_y^2\rangle =k_BT;\quad \langle y_0^2\rangle =\frac{k_BT}{\omega ^2+k}.$$
(A5)
The brackets here indicate averages over initial distributions. The following initial distributions for the solvent coordinate are consistent with these requirements:
$$P(y_0)=\sqrt{\frac{\omega ^2+k}{2\pi k_BT}}\mathrm{exp}\left(-\frac{(\omega ^2+k)y_0^2}{2k_BT}\right)$$
(A6)
and
$$P(v_y)=\frac{1}{\sqrt{2\pi k_BT}}\mathrm{exp}\left(-\frac{v_y^2}{2k_BT}\right).$$
(A7)
The explicit reduction (integration) thus readily leads to the observation that the initial conditions for $`y`$ are the thermalized solutions of the homogeneous differential equation
$$\ddot{y}+\gamma \dot{y}+(\omega ^2+k)y=0.$$
(A8)
That is, we must thermalize the solvent coordinate evolving in the combined intrinsic and coupling potential with the reaction coordinate fixed at $`x=0`$.
At this point, a distinction should be made between the following two behaviors of the friction kernel. The first is the underdamped case, where the condition
$$\mathrm{\Omega }^2\equiv \omega ^2+k-\left(\frac{\gamma }{2}\right)^2>0$$
(A9)
leads to complex values for $`\lambda _1`$ and $`\lambda _2`$,
$$\lambda _{1,2}=-\frac{\gamma }{2}\pm i\mathrm{\Omega },$$
(A10)
which in turn leads to a trigonometric form for the memory kernel (sometimes called the “trigonometric case”):
$$\mathrm{\Gamma }(t)=\frac{k^2}{\omega ^2+k}e^{-\frac{\gamma }{2}t}\left(\frac{\gamma }{2\mathrm{\Omega }}\mathrm{sin}\mathrm{\Omega }t+\mathrm{cos}\mathrm{\Omega }t\right).$$
(A11)
The second behavior, the overdamped case, results when
$$\mathrm{\Lambda }^2\equiv \left(\frac{\gamma }{2}\right)^2-\omega ^2-k\geq 0.$$
(A12)
In this case the values of $`\lambda _1`$ and $`\lambda _2`$ are real,
$$\lambda _{1,2}=-\frac{\gamma }{2}\pm \mathrm{\Lambda },$$
(A13)
and therefore the memory kernel has a hyperbolic form (sometimes called the “hyperbolic case”):
$$\mathrm{\Gamma }(t)=\frac{k^2}{\omega ^2+k}e^{-\frac{\gamma }{2}t}\left(\frac{\gamma }{2\mathrm{\Lambda }}\mathrm{sinh}\mathrm{\Lambda }t+\mathrm{cosh}\mathrm{\Lambda }t\right).$$
(A14)
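As a consistency check, the root form (A3) can be compared numerically with the trigonometric form (A11) and the hyperbolic form (A14); a minimal sketch (the parameter values are chosen arbitrarily for the test):

```python
import numpy as np

def kernel_roots(t, k, w2, g):
    """Memory kernel in the root form of Eq. (A3); valid for real or complex roots."""
    root = np.sqrt((g / 2.0) ** 2 - w2 - k + 0j)
    lam1, lam2 = -g / 2.0 + root, -g / 2.0 - root
    return ((k ** 2 / (lam2 - lam1)) * (np.exp(lam1 * t) / lam1 - np.exp(lam2 * t) / lam2)).real

def kernel_trig(t, k, w2, g):
    """Underdamped form, Eq. (A11)."""
    Om = np.sqrt(w2 + k - (g / 2.0) ** 2)
    return (k ** 2 / (w2 + k)) * np.exp(-g * t / 2.0) * ((g / (2 * Om)) * np.sin(Om * t) + np.cos(Om * t))

def kernel_hyp(t, k, w2, g):
    """Overdamped form, Eq. (A14)."""
    La = np.sqrt((g / 2.0) ** 2 - w2 - k)
    return (k ** 2 / (w2 + k)) * np.exp(-g * t / 2.0) * ((g / (2 * La)) * np.sinh(La * t) + np.cosh(La * t))

t = np.linspace(0.0, 20.0, 201)
print(np.max(np.abs(kernel_roots(t, 0.3, 0.01, 0.05) - kernel_trig(t, 0.3, 0.01, 0.05))))  # machine precision
print(np.max(np.abs(kernel_roots(t, 0.3, 0.01, 4.00) - kernel_hyp(t, 0.3, 0.01, 4.00))))   # machine precision
```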
# Explanation of observed features of self-organization in traffic flow
## Abstract
Based on simulations with the “intelligent driver model”, a microscopic traffic model, we explain the recently discovered transition from free traffic via “synchronized” traffic to stop-and-go patterns \[B. S. Kerner, Phys. Rev. Lett. 81, 3797 (1998)\]. We obtain nearly quantitative agreement with empirical findings such as the “pinch effect”, the flow-density diagram, the emergence of stop-and-go waves from nonhomogeneous congested traffic, and their typical wavelengths.
In recent years, theoretical and empirical investigations have identified different possible mechanisms for a phase transition from free traffic to stop-and-go traffic on freeways. These include deterministic and stochastic mechanisms as well as effects of inhomogeneities . In contrast, Kerner has recently described the detailed features of another transition to stop-and-go patterns developing from “synchronized” congested traffic on German highways, which are compatible with empirical findings on Dutch highways .
In the following, we propose a quantitative explanation of these observations based on microsimulations with the “intelligent driver model” (IDM). In particular, we will show the possible coexistence of different traffic states along the road behind an inhomogeneity of traffic flow. In the upstream direction, it is associated with the sequence “homogeneous congested traffic” (which, in a multilane model, is related to the observed synchronization among lanes ) $`\to `$ “inhomogeneous congested traffic” (corresponding to the so-called “pinch region” ) $`\to `$ “stop-and-go traffic”, while we have free traffic flow downstream of the inhomogeneity.
It will turn out that, in contrast to previously reported traffic phenomena, this phenomenon relies on the existence of a sufficiently large density region of convectively stable traffic, in which traffic flow is unstable, but any perturbations are convected away in the upstream direction. Furthermore, one needs a traffic model in which the resulting traffic flow inside fully developed traffic jams is much lower (nearly zero) than in “synchronized” traffic. In particular, without suitable modifications (see below), this is not satisfied by the traffic model discussed in Ref. . We also point out that, although the IDM has a unique flow-density relation in equilibrium, it reproduces the observed two-dimensional scattering of flow-density data at medium vehicle densities , even without assuming a mixture of different vehicle types .
The IDM is a continuous, deterministic model, in which the acceleration of a vehicle $`\alpha `$ of length $`l_\alpha `$ at position $`x_\alpha (t)`$ depends on its own velocity $`v_\alpha (t)`$ as well as the gap $`s_\alpha (t)=[x_{\alpha -1}(t)-x_\alpha (t)-l_\alpha ]`$ and the velocity difference $`\mathrm{\Delta }v_\alpha (t)=[v_\alpha (t)-v_{\alpha -1}(t)]`$ to the vehicle $`(\alpha -1)`$ in front:
$$\dot{v}_\alpha =a\left[1-\left(\frac{v_\alpha }{v_0}\right)^\delta -\left(\frac{s^*}{s_\alpha }\right)^2\right].$$
(1)
According to this formula, the acceleration on a free road (meaning $`s_\alpha \to \infty `$) is given by $`a[1-(v_\alpha /v_0)^\delta ]`$, where $`a`$ is the maximum acceleration and $`v_0`$ the desired velocity. The exponent $`\delta `$ is typically between 1 and 5. It allows one to describe the fact that the realistic acceleration behavior of drivers lies between a constant acceleration $`a`$ up to the desired velocity $`v_0`$ ($`\delta \to \infty `$) and an exponential acceleration behavior ($`\delta =1`$).
The braking term $`a(s^*/s_\alpha )^2`$ depends Coulomb-like on the gap $`s_\alpha `$, as is the case for the braking term of the microscopic Wiedemann model . Therefore, the acceleration term is negligible if the gap $`s_\alpha `$ drops considerably below the “effective desired distance” $`s^*`$. With the relation
$$s^*(v_\alpha ,\mathrm{\Delta }v_\alpha )=s_0+\text{max}(v_\alpha T+\frac{v_\alpha \mathrm{\Delta }v_\alpha }{2\sqrt{ab}},0),$$
(2)
it is constructed in such a way that drivers keep a minimum “jam distance” $`s_0`$ to a standing vehicle, plus an additional safety distance $`v_\alpha T`$, where $`T`$ is the safe time headway in congested but moving traffic.
The nonequilibrium term proportional to $`\mathrm{\Delta }v_\alpha `$ reflects an “intelligent” braking strategy, according to which drivers restrict their deceleration to $`b`$ in “normal” situations (e.g., when approaching standing or slower vehicles from sufficiently large distances), but they brake harder when the situation becomes more critical, i.e., when the anticipated “kinematic deceleration” $`(\mathrm{\Delta }v)^2/(2s_\alpha )`$, which is necessary to avoid a collision with a uniformly moving leading vehicle ($`\dot{v}_{\alpha -1}=0`$), exceeds $`b`$. Notice that the acceleration $`a`$ is typically lower than the desired deceleration $`b`$, and that both acceleration parameters do not influence the equilibrium flow-density relation (“fundamental diagram”). Since it turns out that neither multilane effects nor different types of vehicles are relevant in the context of this study, we have assumed identical “driver-vehicle units” characterized by the realistic parameters $`v_0=120`$ km/h, $`\delta =4`$, $`a=0.6`$ m/s², $`b=0.9`$ m/s², $`s_0=2`$ m, and $`T=1.5`$ s, apart from a localized change of $`v_0`$ or $`T`$ (see below). For the vehicle length we use $`l=5`$ m, but this value does not affect the dynamics.
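As a concrete illustration, the IDM acceleration of Eqs. (1) and (2) can be written in a few lines; a minimal Python sketch using the parameter values listed above (the example numbers at the end are purely illustrative):

```python
import math

# IDM parameters from the text (SI units)
V0, DELTA, A, B = 120 / 3.6, 4, 0.6, 0.9   # desired speed [m/s], exponent, max. accel. and desired decel. [m/s^2]
S0, T, L = 2.0, 1.5, 5.0                   # jam distance [m], safe time headway [s], vehicle length [m]

def s_star(v, dv):
    """Effective desired gap s*(v, dv), Eq. (2)."""
    return S0 + max(v * T + v * dv / (2.0 * math.sqrt(A * B)), 0.0)

def idm_accel(v, s, dv):
    """IDM acceleration, Eq. (1): own speed v, gap s, velocity difference dv = v - v_leader."""
    return A * (1.0 - (v / V0) ** DELTA - (s_star(v, dv) / s) ** 2)

print(idm_accel(20.0, 1e9, 0.0))    # free road at 72 km/h: close to the maximum acceleration a
print(idm_accel(20.0, 100.0, 20.0)) # standing obstacle 100 m ahead: braking harder than b,
                                    # since the kinematic deceleration v^2/(2s) = 2 m/s^2 exceeds b
```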
We simulated an open freeway section 20 kilometers in length for time intervals of up to 120 minutes, of which we display only the most interesting parts. In addition, we assumed an inhomogeneity of traffic flow that will be responsible for the transition from free to congested traffic, as described in Refs. . However, as pointed out by Kerner , the self-organized patterns observed by him are not restricted to the vicinity of on-ramps. We will confirm this by simulating different kinds of inhomogeneities (see Figs. 1 through 3) and comparing them with the injection of vehicles at on-ramps (see Fig. 4).
In Figure 1, we have assumed an inhomogeneity corresponding to a freeway section where people drive more carefully. This was modelled by increasing the desired time headway from $`T=1.5`$ s to $`T=1.75`$ s between $`x=0`$ km and $`x=0.3`$ km. In the simulations of Figs. 2 and 3, we have reduced the desired velocity from $`v_0=120`$ km/h to $`v_0=80`$ km/h in the same region. As initial conditions, we assumed homogeneous free traffic in equilibrium at a flow of $`Q_0=1670`$ vehicles/h (Figs. 1 through 3) or 1570 vehicles/h (Fig. 4). The actual initial conditions, however, are only relevant for a short time interval. At the upstream boundary, we assume that vehicles enter the freeway uniformly at a rate $`Q_{\mathrm{in}}(t)=Q_0+\mathrm{\Delta }Q(t)`$ and drive with a velocity corresponding to free traffic in equilibrium. While in the simulation of Fig. 1 the breakdown of traffic flow is caused by exceeding the static freeway capacity at the inhomogeneity , in Figs. 2 and 3 it is triggered by a triangularly shaped perturbation $`\mathrm{\Delta }Q(t)`$ of the inflow between $`t=10`$ min and $`t=20`$ min with a maximum of 200 vehicles per hour per lane at $`t=15`$ min. Furthermore, we used the “absorbing” downstream boundary condition $`\dot{v}_\alpha =0`$. To minimize simulation time, we integrated Eq. (1) with a simple Euler scheme using a coarse time discretization of $`\mathrm{\Delta }t=0.4`$ s, and translated the vehicles in each step according to $`x_\alpha (t+\mathrm{\Delta }t)=x_\alpha (t)+v_\alpha \mathrm{\Delta }t+\frac{1}{2}\dot{v}_\alpha (\mathrm{\Delta }t)^2`$. However, smaller values of $`\mathrm{\Delta }t`$ yielded nearly indistinguishable results.
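A stripped-down, single-lane version of this integration scheme might look as follows; it is a sketch under several assumptions (constant inflow without the perturbation $`\mathrm{\Delta }Q(t)`$, the inhomogeneity of Fig. 1 as a locally increased $`T`$, simplified treatment of the boundaries), not a reproduction of the actual simulation code.

```python
import math

V0, DELTA, A, B, S0, L = 120 / 3.6, 4, 0.6, 0.9, 2.0, 5.0
DT = 0.4                                      # coarse Euler time step [s]
X_IN, X_OUT = -10000.0, 10000.0               # 20 km open section, inhomogeneity near x = 0
Q_IN = 1670.0 / 3600.0                        # constant inflow [vehicles/s]; Delta Q(t) omitted here

def T_local(xi):
    """Safe time headway, increased for 0 < x < 300 m to model the inhomogeneity of Fig. 1."""
    return 1.75 if 0.0 < xi < 300.0 else 1.5

def accel(v, s, dv, T):
    s_star = S0 + max(v * T + v * dv / (2.0 * math.sqrt(A * B)), 0.0)
    return A * (1.0 - (v / V0) ** DELTA - (s_star / s) ** 2)

x, v = [X_IN], [0.95 * V0]                    # index 0 = most downstream (leading) vehicle
next_entry = 0.0

for step in range(int(7200 / DT)):            # 120 minutes of simulated traffic
    t = step * DT
    a = [0.0] * len(x)                        # "absorbing" downstream boundary: free leading vehicle
    for i in range(1, len(x)):
        s = x[i - 1] - x[i] - L
        a[i] = accel(v[i], s, v[i] - v[i - 1], T_local(x[i]))
    for i in range(len(x)):
        x[i] += v[i] * DT + 0.5 * a[i] * DT ** 2   # ballistic position update as in the text
        v[i] = max(v[i] + a[i] * DT, 0.0)
    if x and x[0] > X_OUT:                    # vehicle leaves through the open downstream end
        x.pop(0); v.pop(0)
    if t >= next_entry and (not x or x[-1] > X_IN + L + S0):
        x.append(X_IN); v.append(0.95 * V0)   # uniform inflow at roughly the free equilibrium speed
        next_entry += 1.0 / Q_IN
```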
Figure 1 gives a representative overview of the simulation result by means of a spatiotemporal density plot. The resulting sequence of transitions is essentially the same as observed : After 10 minutes of free traffic, traffic breaks down near the inhomogeneity, resulting in homogeneous congested traffic at this location, which persists for a long time. Upstream of the inhomogeneity, small oscillations develop that travel further upstream and grow into stop-and-go waves of relatively short wavelengths (about 0.8 km). Finally, these waves either dissolve or merge into a few “wide jams” (in which traffic comes to a standstill) with typical distances of 2 to 5 km between them. Once the jams have formed, they persist and propagate upstream at a constant propagation velocity without further changes of their shape. No new clusters develop between the jams.
To compare our simulation results directly with the empirical data published by Kerner , we investigated the temporal evolution of the average velocity at six successive locations D1 through D6, located at the same distances from the inhomogeneity as in Ref. (see Fig. 2). The detector positions D1 through D4 are upstream of the inhomogeneity, D5 is directly at the inhomogeneity, and D6 is downstream of it. In contrast to the simulation of Fig. 1, the capacity drop at the inhomogeneity is so weak that free traffic is metastable in the overall system. At the inhomogeneity (D5), one observes homogeneous congested (“synchronized”) traffic, at D4 one sees small oscillations that, around D3, develop into stop-and-go waves of larger amplitude, and finally into jams (at D1 and D2). In the downstream direction, the congested traffic dissolves into free traffic (D6). Apart from irregularities in the measured data due to fluctuations, the curves are in (semi-)quantitative agreement with Kerner’s empirical findings .
We also plot one-minute data of the six “detectors” in a flow-density diagram (Fig. 3), together with the equilibrium flow-density relation $`Q_\mathrm{e}(\rho )`$ (“fundamental diagram”). The lower boundary of the data points at medium densities corresponds to the flow-density relation belonging to the downstream front of a fully developed jam. In agreement with observations, this line is the same for all jams and corresponds to a unique propagation velocity of their downstream fronts. The two-dimensional region of points above this line relates to congested traffic at D3, D4, and D5.
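The equilibrium relation $`Q_\mathrm{e}(\rho )`$ itself follows from Eq. (1) by setting the acceleration and the velocity difference to zero and solving for the equilibrium velocity at a given gap; a minimal sketch (bisection used only for simplicity):

```python
import math

V0, DELTA, A, B, S0, T, L = 120 / 3.6, 4, 0.6, 0.9, 2.0, 1.5, 5.0

def v_eq(s, tol=1e-10):
    """Equilibrium speed at gap s: root of 1 - (v/V0)^DELTA - ((S0 + v*T)/s)^2 = 0."""
    f = lambda v: 1.0 - (v / V0) ** DELTA - ((S0 + v * T) / s) ** 2
    if f(0.0) <= 0.0:            # gap at or below the jam distance: standing traffic
        return 0.0
    lo, hi = 0.0, V0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

for rho in (10, 20, 40, 80, 120):            # density [vehicles/km]
    s = 1000.0 / rho - L                     # mean gap [m]
    print(f"rho = {rho:3d}/km   Q_e = {rho * v_eq(s) * 3.6:6.0f} vehicles/h")
```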
To understand this scenario, we need some basic results about the stability of homogeneous traffic with respect to localized perturbations. Typically, there are four “critical” densities $`\rho _{\mathrm{c}i}`$ with $`\rho _{\mathrm{c1}}<\rho _{\mathrm{c2}}<\rho _{\mathrm{c3}}<\rho _{\mathrm{c4}}`$ and the following properties : For low and very high densities ($`\rho <\rho _{\mathrm{c1}}`$ or $`\rho >\rho _{\mathrm{c4}}`$), traffic is stable with respect to arbitrary perturbations, while for $`\rho _{\mathrm{c2}}<\rho <\rho _{\mathrm{c3}}`$, it is linearly unstable. In the two density ranges in between, homogeneous traffic is unstable only with respect to perturbations exceeding a certain critical amplitude $`\mathrm{\Delta }\rho _{\mathrm{cr}}(\rho )`$ (“metastability”). Furthermore, there exists a range $`\rho _{\mathrm{cv}}<\rho <\rho _{\mathrm{c3}}`$ with $`\rho _{\mathrm{cv}}>\rho _{\mathrm{c2}}`$, where traffic is linearly unstable, but convectively stable, i.e., all perturbations grow, but they are eventually convected away in the upstream direction . Actually, this range can be very large. For the IDM with the parameters used here, we have $`\rho _{\mathrm{cv}}=50`$ vehicles/km and $`\rho _{\mathrm{c3}}=100`$ vehicles/km.
Let us assume that the inflow $`Q_{\mathrm{in}}`$ has a value larger than $`Q_{\mathrm{c1}}`$, and that a phase transition from free to congested traffic has occurred at the inhomogeneity. This breakdown to congested traffic, which we identify with “synchronized traffic”, here , was explained in . Since it is connected with a drop of the effective freeway capacity to the self-organized outflow $`\stackrel{~}{Q}_{\mathrm{out}}`$ from “synchronized” traffic (ST), the region of congested traffic will grow in the upstream direction until the inflow $`Q_{\mathrm{in}}(t)=Q_\mathrm{e}(\rho _{\mathrm{in}}(t))`$ to the freeway falls below the synchronized flow $`Q_{\mathrm{ST}}`$ in the congested region behind the inhomogeneity. Without an on-ramp flow $`Q_{\mathrm{ramp}}`$ per freeway lane, we have $`Q_{\mathrm{ST}}=\stackrel{~}{Q}_{\mathrm{out}}`$, otherwise it is $`Q_{\mathrm{ST}}=(\stackrel{~}{Q}_{\mathrm{out}}-Q_{\mathrm{ramp}})`$ . In contrast to the empirical findings explained in Ref. , the synchronized flow must be so high, here, that it is linearly unstable, which implies $`Q_{\mathrm{ST}}>Q_\mathrm{e}(\rho _{\mathrm{c3}})`$. Therefore, small perturbations will grow into larger oscillations, which propagate in the upstream direction faster than the congested region grows. When the oscillations reach the metastable region of free traffic upstream of the inhomogeneity, oscillations with an amplitude below the critical amplitude $`\mathrm{\Delta }\rho _{\mathrm{cr}}(\rho _{\mathrm{in}})`$ will eventually disappear, while the remaining ones will continue to grow until they are fully developed traffic jams. We point out that a sequence of such jams is sustained, because the propagation velocity $`v_\mathrm{g}`$ and the outflow $`Q_{\mathrm{out}}\approx Q_\mathrm{e}(\rho _{\mathrm{c1}})`$ from jams are characteristic constants . Finally, if the inhomogeneity is such that the synchronized flow is convectively stable, i.e., $`Q_{\mathrm{ST}}<Q_{\mathrm{cv}}=Q_\mathrm{e}(\rho _{\mathrm{cv}})`$, the perturbations cannot propagate downstream (in contrast to small perturbations in free traffic). Hence, the front where traffic dissolves is then smooth, and a small region of homogeneous congested traffic forms near the inhomogeneity \[Figs. 1, 2(a), and 4\].
In Fig. 3, the flow-density relation of the downstream front of a fully developed jam (where the velocity is zero) corresponds to a straight line, the slope of which is $`v_\mathrm{g}`$ . Notice that this line (which in Fig. 3 and in Ref. is labelled by “$`J`$”) lies considerably below the equilibrium curve $`Q_\mathrm{e}(\rho )`$. On the other hand, traffic flow in the region of oscillating congested traffic is nearly in equilibrium as long as the oscillations are small. As the oscillations grow, the data points gradually approach the line $`J`$. This explains the observed two-dimensionality of the congested part of the flow-density diagram. We point out that mixtures of different types of driver-vehicle units lead to further effects contributing to an even wider scattering of flow-density points in the congested regime .
In summary, we have shown that the emergence of stop-and-go waves out of synchronized traffic, their coexistence, and the two-dimensional scattering of data in the congested part of the flow-density diagram can be explained in the framework of “standard” traffic models that have a unique equilibrium flow-density relation. The necessary conditions are a metastability of traffic flow, a flow inside traffic jams that is much lower than in synchronized congested traffic, and a sufficiently large density regime of linearly unstable traffic flow that is convectively stable. If “synchronized” traffic is linearly unstable and free traffic upstream is metastable, upstream-moving perturbations will grow and, when their amplitudes become large enough, eventually form stop-and-go waves. To maintain this mechanism, the region of synchronized traffic behind the inhomogeneity must persist (i.e., it must not dissolve into stop-and-go waves), which is only the case if it is convectively stable. The two-dimensionality of the congested branch of the flow-density diagram originates from the fact that the nonhomogeneous congested states are not in equilibrium.
The phenomenon should be widespread since it is triggered at relatively small inhomogeneities whenever the traffic flow $`Q_{\mathrm{in}}(t)`$ exceeds a certain threshold $`Q_{\mathrm{c1}}`$. Note that it can also be simulated with other traffic models such as the macroscopic gas-kinetic-based model , if “frustration effects” are additionally taken into account (see Fig. 4).
The authors are grateful for financial support by the BMBF (research project SANDY, grant No. 13N7092) and by the DFG (Heisenberg scholarship He 2789/1-1).
## 1 Introduction
It is well-known that string theory originated from attempts to understand the strong interactions . However, after the emergence of QCD as the theory of hadrons, the dominant theme of string research shifted to the Planck scale domain of quantum gravity . Although in hadron physics one routinely hears about flux tubes and the string tension, the majority of particle theorists gave up hope that string theory might lead to an exact description of the strong interactions. Now, however, for the first time we can say with confidence that at least some strongly coupled gauge theories have a dual description in terms of strings. Let me emphasize that one is not talking here about effective strings that give an approximate qualitative description, but rather about an exact duality. At weak coupling a convenient description of the theory involves conventional perturbative methods; at strong coupling, where such methods are intractable, the dual string description simplifies and gives exact information about the theory. The best established examples of this duality are conformal gauge theories where the so-called AdS/CFT correspondence has allowed for many calculations at strong coupling to be performed with ease. In these notes I describe, from my own personal perspective, some of the ideas that led to the formulation of the AdS/CFT correspondence. I will also speculate on the future directions. For the sake of brevity I will mainly discuss the $`\mathrm{AdS}_5/\mathrm{CFT}_4`$ case which is most directly related to 4-dimensional gauge theories.
It has long been believed that the best hope for a string description of non-Abelian gauge theories lies in the ’t Hooft large $`N`$ limit. A quarter of a century ago ’t Hooft proposed to generalize the $`SU(3)`$ gauge group of QCD to $`SU(N)`$, and to take the large $`N`$ limit while keeping $`g_{\mathrm{YM}}^2N`$ fixed . In this limit each Feynman graph carries a topological factor $`N^\chi `$, where $`\chi `$ is the Euler characteristic of the graph. Thus, the sum over graphs of a given topology can perhaps be thought of as a sum over world sheets of a hypothetical “QCD string.” Since the spheres (string tree diagrams) are weighted by $`N^2`$, the tori (string one-loop diagrams) – by $`N^0`$, etc., we find that the closed string coupling constant is of order $`N^{-1}`$. Thus, the advantage of taking $`N`$ to be large is that we find a weakly coupled string theory. It is not clear, however, how to describe this string theory in elementary terms (by a 2-dimensional world sheet action, for example). This is clearly an important problem: the free closed string spectrum is just the large $`N`$ spectrum of glueballs. If the quarks are included, then we also find open strings describing the mesons. Thus, if methods are developed for calculating these spectra, and it is found that they are discrete, then this provides an elegant explanation of confinement. Furthermore, the $`1/N`$ corrections correspond to perturbative string corrections.
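The counting behind these statements is elementary in ’t Hooft’s double-line notation: a vacuum graph with $`V`$ vertices, $`E`$ propagators, and $`F`$ index loops scales as $`\lambda ^{E-V}N^{V-E+F}=\lambda ^{E-V}N^\chi `$, with $`\lambda =g_{\mathrm{YM}}^2N`$ held fixed. A toy illustration (the counts below refer to the standard two-loop vacuum graph and its non-planar counterpart):

```python
def thooft_weight(V, E, F, N, lam):
    """N and lambda scaling of a vacuum diagram in double-line notation:
    each vertex contributes N/lam, each propagator lam/N, each index loop N,
    giving lam**(E - V) * N**chi with chi = V - E + F = 2 - 2*genus."""
    chi = V - E + F
    return lam ** (E - V) * N ** chi

N, lam = 100, 1.0
print(thooft_weight(2, 3, 3, N, lam))   # planar two-loop vacuum graph: chi = 2 (sphere), weight ~ N^2
print(thooft_weight(2, 3, 1, N, lam))   # its non-planar twist: chi = 0 (torus), weight ~ N^0
```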
Many years of effort, and many good ideas, were invested in the search for an exact gauge field/string duality . One class of ideas, exploiting the similarity of the large $`N`$ loop equation with the string Schroedinger equation, eventually led to the following fascinating speculation : one should not look for the QCD string in four dimensions, but rather in five, with the fifth dimension akin to the Liouville dimension of non-critical string theory . This leads to a picture where the QCD string is described by a two-dimensional world sheet sigma model with a curved 5-dimensional target space. At that stage it was not clear, however, precisely what target spaces are relevant to gauge theories. Luckily, we now do have answers to this question for a variety of conformal large $`N`$ gauge models. The route that leads to this answer, and confirms the idea of the fifth dimension, involves an unexpected detour via black holes and Dirichlet branes. We turn to these subjects next.
## 2 D-branes vs. Black Holes and $`p`$-branes
A few years ago it became clear that, in addition to strings, superstring theory contains soliton-like “membranes” of various internal dimensionalities called Dirichlet branes (or D-branes) . A Dirichlet $`p`$-brane (or D$`p`$-brane) is a $`p+1`$ dimensional hyperplane in $`9+1`$ dimensional space-time where strings are allowed to end, even in theories where all strings are closed in the bulk of space-time. In some ways a D-brane is like a topological defect: when a closed string touches it, it can open up and turn into an open string whose ends are free to move along the D-brane. For the end-points of such a string the $`p+1`$ longitudinal coordinates satisfy the conventional free (Neumann) boundary conditions, while the $`9-p`$ coordinates transverse to the D$`p`$-brane have the fixed (Dirichlet) boundary conditions; hence the origin of the term “Dirichlet brane.” In a seminal paper Polchinski showed that the D$`p`$-brane is a BPS saturated object which preserves $`1/2`$ of the bulk supersymmetries and carries an elementary unit of charge with respect to the $`p+1`$ form gauge potential from the Ramond-Ramond sector of type II superstring. The existence of BPS objects carrying such charges is required by non-perturbative string dualities . A striking feature of the D-brane formalism is that it provides a concrete (and very simple) embedding of such objects into perturbative string theory.
Another fascinating feature of the D-branes is that they naturally realize gauge theories on their world volume. The massless spectrum of open strings living on a D$`p`$-brane is that of a maximally supersymmetric $`U(1)`$ gauge theory in $`p+1`$ dimensions. The $`9-p`$ massless scalar fields present in this supermultiplet are the expected Goldstone modes associated with the transverse oscillations of the D$`p`$-brane, while the photons and fermions may be thought of as providing the unique supersymmetric completion. If we consider $`N`$ parallel D-branes, then there are $`N^2`$ different species of open strings because they can begin and end on any of the D-branes. $`N^2`$ is the dimension of the adjoint representation of $`U(N)`$, and indeed we find the maximally supersymmetric $`U(N)`$ gauge theory in this setting . The relative separations of the D$`p`$-branes in the $`9-p`$ transverse dimensions are determined by the expectation values of the scalar fields. We will be primarily interested in the case where all scalar expectation values vanish, so that the $`N`$ D$`p`$-branes are stacked on top of each other. If $`N`$ is large, then this stack is a heavy object embedded into a theory of closed strings which contains gravity. Naturally, this macroscopic object will curve space: it may be described by some classical metric and other background fields, such as the Ramond-Ramond $`p+1`$ form potential. Thus, we have two very different descriptions of the stack of D$`p`$-branes: one in terms of the $`U(N)`$ supersymmetric gauge theory on its world volume, and the other in terms of the classical Ramond-Ramond charged $`p`$-brane background of the type II closed superstring theory. It is the relation between these two descriptions that is at the heart of the recent progress in understanding the connections between gauge fields and strings. (There are other similar relations between large $`N`$ SYM theories and gravity stemming from the BFSS matrix theory conjecture .) Of course, more work is needed to make this relation precise.
### 2.1 Counting the entropy
The first success in building this kind of correspondence between black hole metrics and D-branes was achieved by Strominger and Vafa . They considered 5-dimensional supergravity obtained by compactifying 10-dimensional type IIB theory on a 5-dimensional compact manifold (for example, the 5-torus), and constructed a class of black holes carrying 2 separate $`U(1)`$ charges. These solutions may be viewed as generalizations of the well-known 4-dimensional charged (Reissner-Nordstrom) black hole. For the Reissner-Nordstrom black hole the mass is bounded from below by a quantity proportional to the charge. In general, when the mass saturates the lower (BPS) bound for a given choice of charges, then the black hole is called extremal. The extremal Strominger-Vafa black hole preserves $`1/8`$ of the supersymmetries present in vacuum. Also, the black hole is constructed in such a way that, just as for the Reissner-Nordstrom solution, the area of the horizon is non-vanishing at extremality . In general, an important quantity characterizing black holes is the Bekenstein-Hawking entropy which is proportional to the horizon area:
$$S_{BH}=\frac{A_h}{4G},$$
(1)
where $`G`$ is the Newton constant. Strominger and Vafa calculated the Bekenstein-Hawking entropy of their extremal solution as a function of the charges and succeeded in reproducing this result with D-brane methods. To build a D-brane system carrying the same set of charges as the black hole, they had to consider intersecting D-branes wrapped over the compact 5-dimensional manifold. For example, one may consider D3-branes intersecting over a line or D1-branes embedded inside D5-branes. The $`1+1`$ dimensional gauge theory describing such an intersection is quite complicated, but the degeneracy of the supersymmetric BPS states can nevertheless be calculated in the D-brane description valid at weak coupling. For reasons that will become clear shortly, the description in terms of black hole metrics is valid only at very strong coupling. Luckily, due to the supersymmetry, the number of states does not change as the coupling is increased. This ability to extrapolate the D-brane counting to strong coupling makes a comparison with the Bekenstein-Hawking entropy possible, and exact agreement is found in the limit of large charges . In this sense the collection of D-branes provides a “microscopic” explanation of the black hole entropy.
This correspondence was quickly generalized to black holes slightly excited above the extremality . Further, the Hawking radiation rates and the absorption cross-sections were calculated and successfully reproduced by D-brane models . Since then this system has been receiving a great deal of attention. However, some detailed comparisons are hampered by the complexities of the dynamics of intersecting D-branes: to date there is no first principles approach to the lagrangian of the $`1+1`$ dimensional conformal field theory on the intersection.
For this and other reasons it has turned out very fruitful to study a similar correspondence for simpler systems which involve parallel D-branes only . Our primary motivation is that, as explained above, parallel D$`p`$-branes realize $`p+1`$ dimensional $`U(N)`$ SYM theories, and we may learn something new about them from comparisons with Ramond-Ramond charged black $`p`$-brane classical solutions. These solutions in type II supergravity have been known since the early 90’s . The metric and dilaton backgrounds may be expressed in the following simple and universal form:
$$ds^2=H^{-1/2}(r)\left[-f(r)dt^2+\sum_{i=1}^{p}(dx^i)^2\right]+H^{1/2}(r)\left[f^{-1}(r)dr^2+r^2d\mathrm{\Omega }_{8-p}^2\right],$$
(2)
$$e^{\mathrm{\Phi }}=H^{(3-p)/4}(r),$$
where
$$H(r)=1+\frac{L^{7-p}}{r^{7-p}},\qquad f(r)=1-\frac{r_0^{7-p}}{r^{7-p}},$$
and $`d\mathrm{\Omega }_{8-p}^2`$ is the metric of a unit $`8-p`$ dimensional sphere. The horizon is located at $`r=r_0`$ and the extremality is achieved in the limit $`r_0\rightarrow 0`$. A solution with $`r_0\ll L`$ is called near-extremal. In contrast to the situation encountered for the Strominger-Vafa black hole, the Bekenstein-Hawking entropy vanishes in the extremal limit. Just like the stacks of parallel D-branes, the extremal solutions are BPS saturated: they preserve 16 of the 32 supersymmetries present in the type II theory. For $`r_0>0`$ the $`p`$-brane carries some excess energy $`E`$ above its extremal value, and the Bekenstein-Hawking entropy is also non-vanishing. The Hawking temperature is then defined by $`T^{-1}=\partial S_{BH}/\partial E`$.
The correspondence between the entropies of the $`p`$-brane solutions (2) and those of the $`p+1`$ dimensional SYM theories was first considered in . Among these solutions $`p=3`$ has a special status: in the extremal limit $`r_0\rightarrow 0`$ the 3-brane solution is perfectly non-singular . This is evidenced by the fact that the dilaton $`\mathrm{\Phi }`$ is constant for $`p=3`$, but blows up at $`r=0`$ for all other extremal solutions. In the Bekenstein-Hawking entropy of a near-extremal 3-brane of Hawking temperature $`T`$ was compared with the entropy of the $`𝒩=4`$ supersymmetric $`U(N)`$ gauge theory (which lives on $`N`$ coincident D3-branes) heated up to the same temperature. The results turned out to be quite interesting. The Bekenstein-Hawking entropy expressed in terms of the Hawking temperature $`T`$ and the number $`N`$ of elementary units of charge was found to be
$$S_{BH}=\frac{\pi ^2}{2}N^2V_3T^3,$$
(3)
where $`V_3`$ is the spatial volume of the 3-brane. This was compared with the entropy of a free $`U(N)`$ $`𝒩=4`$ supermultiplet, which consists of the gauge field, $`6N^2`$ massless scalars and $`4N^2`$ Weyl fermions. This entropy was calculated using the standard statistical mechanics of a massless gas (the black body problem), and the answer turned out to be
$$S_0=\frac{2\pi ^2}{3}N^2V_3T^3.$$
(4)
It is remarkable that the 3-brane geometry captures the $`T^3`$ scaling characteristic of a conformal field theory (in a CFT this scaling is guaranteed by the extensivity of the entropy and the absence of dimensionful parameters).<sup>2</sup><sup>2</sup>2 Other examples of the “conformal” behavior of the Bekenstein-Hawking entropy include the 11-dimensional 5-brane and membrane solutions . For the 5-brane, $`S_{BH}N^3T^5V_5`$, while for the membrane $`S_{BH}N^{3/2}T^2V_2`$. The microscopic description of the 5-brane solution is in terms of a large number $`N`$ of coincident singly charged 5-branes of M-theory, whose chiral world volume theory has $`(0,2)`$ supersymmetry. Similarly, the membrane solution describes the large $`N`$ behavior of the CFT on $`N`$ coincident elementary membranes. The entropy formulae suggest that these theories have $`O(N^3)`$ and $`O(N^{3/2})`$ massless degrees of freedom respectively. These predictions of supergravity are non-trivial and still mysterious. Since the geometry of the 5-brane throat is $`AdS_7\times S^4`$, and that of the membrane throat is $`AdS_4\times S^7`$, these systems lead to other interesting examples of the AdS/CFT correspondence. Also, the $`N^2`$ scaling indicates the presence of $`O(N^2)`$ unconfined degrees of freedom, which is exactly what we expect in the $`𝒩=4`$ supersymmetric $`U(N)`$ gauge theory. On the other hand, the relative factor of $`3/4`$ between $`S_{BH}`$ and $`S_0`$ at first appeared mysterious and was interpreted by many as a subtle failure of the D3-brane approach to black 3-branes. As we will see shortly, however, the relative factor of $`3/4`$ is not a contradiction but rather a prediction about strongly coupled $`𝒩=4`$ SYM theory at finite temperature.
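As a quick numerical sanity check of the two coefficients being compared (this check is mine, not part of the original argument; it only uses the textbook entropy density of a free massless gas, $`s=\frac{2\pi ^2}{45}g_*T^3`$, with the usual $`7/8`$ weight for fermionic degrees of freedom):

```python
import math

# Physical degrees of freedom of the U(N) N=4 multiplet, per unit of N^2:
n_boson = 2 + 6            # 2 gauge-boson polarizations + 6 real scalars
n_fermion = 4 * 2          # 4 Weyl fermions, 2 physical states each

g_star = n_boson + 7.0 / 8.0 * n_fermion        # = 15
coeff_free = 2 * math.pi**2 / 45 * g_star       # entropy coefficient of N^2 V_3 T^3
print(coeff_free, 2 * math.pi**2 / 3)           # both ~6.58, i.e. S_0 = (2 pi^2/3) N^2 V_3 T^3

coeff_sugra = math.pi**2 / 2                    # Bekenstein-Hawking coefficient from (3)
print(coeff_sugra / coeff_free)                 # 0.75 -- the factor of 3/4
```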
### 2.2 From absorption cross-sections to two-point correlators
Almost a year after the entropy comparisons I came back to the 3-branes (and also to the 11-dimensional membranes and 5-branes) and tried to interpret absorption cross-sections for massless particles in terms of the world volume theories . This was a natural step beyond the comparison of entropies, and for the Strominger-Vafa black holes the D-brane approach to absorption was initiated earlier in . For the system of $`N`$ coincident D3-branes it was interesting to inquire to what extent the supergravity and the weakly coupled D-brane calculations agreed. For example, they might scale differently with $`N`$ or with the incident energy. Even if the scaling exponents agreed, the overall normalizations could differ by a subtle numerical factor similar to the $`3/4`$ found for the 3-brane entropy. Surprisingly, the low-energy absorption cross-sections turned out to agree exactly!
To calculate the absorption cross-sections in the D-brane formalism one needs the low-energy world volume action for coincident D-branes coupled to the massless bulk fields. Luckily, these couplings may be deduced from the D-brane Born-Infeld action. For example, the coupling of 3-branes to the dilaton $`\mathrm{\Phi }`$, the Ramond-Ramond scalar $`C`$, and the graviton $`h_{\alpha \beta }`$ is given by
$$S_{\mathrm{int}}=\frac{\sqrt{\pi }}{\kappa }\int d^4x\left[tr\left(\frac{1}{4}\mathrm{\Phi }F_{\alpha \beta }^2-\frac{1}{4}CF_{\alpha \beta }\stackrel{~}{F}^{\alpha \beta }\right)+\frac{1}{2}h^{\alpha \beta }T_{\alpha \beta }\right],$$
(5)
where $`T_{\alpha \beta }`$ is the stress-energy tensor of the $`𝒩=4`$ SYM theory. Consider, for instance, absorption of a dilaton incident on the 3-brane at right angles with a low energy $`\omega `$. Since the dilaton couples to $`trF_{\alpha \beta }^2`$ it can be converted into a pair of back-to-back gluons on the world volume. The leading order calculation of the cross-section for weak coupling gives
$$\sigma =\frac{\kappa ^2\omega ^3N^2}{32\pi },$$
(6)
where $`\kappa =\sqrt{8\pi G}`$ is the 10-dimensional gravitational constant (note that the factor $`N^2`$ comes from the degeneracy of the final states which is the number of different gluon species). This result was compared with the absorption cross-section by the extremal 3-brane geometry,
$$ds^2=\left(1+\frac{L^4}{r^4}\right)^{-1/2}\left(-dt^2+dx_1^2+dx_2^2+dx_3^2\right)+\left(1+\frac{L^4}{r^4}\right)^{1/2}\left(dr^2+r^2d\mathrm{\Omega }_5^2\right).$$
(7)
This geometry may be viewed as a semi-infinite throat which for $`r\gg L`$ opens up into flat $`9+1`$ dimensional space. Waves incident from the $`r\gg L`$ region partly reflect back and partly penetrate into the throat region $`r\ll L`$. The relevant s-wave radial equation turns out to be
$$\left[\frac{d^2}{d\rho ^2}-\frac{15}{4\rho ^2}+1+\frac{(\omega L)^4}{\rho ^4}\right]\psi (\rho )=0,$$
(8)
where $`\rho =\omega r`$. For a low energy $`\omega \ll 1/L`$ we find a high barrier separating the two asymptotic regions. The low-energy behavior of the tunneling probability may be calculated by the so-called matching method, and the resulting absorption cross-section is
$$\sigma _{SUGRA}=\frac{\pi ^4}{8}\omega ^3L^8.$$
(9)
In order to compare (6) and (9) we need a relation between the radius of the throat, $`L`$, and the number of D3-branes, $`N`$. Such a relation follows from equating the ADM tension of the extremal 3-brane solution to $`N`$ times the tension of a single D3-brane, and one finds
$$L^4=\frac{\kappa }{2\pi ^{5/2}}N.$$
(10)
Substituting this into (9), we find that the supergravity absorption cross-section agrees exactly with the D-brane one, without any relative factor like $`3/4`$.
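For completeness, the algebra behind this statement (implicit in the text) is a one-line substitution: squaring (10) gives $`L^8=\kappa ^2N^2/(4\pi ^5)`$, so that

$$\sigma _{SUGRA}=\frac{\pi ^4}{8}\omega ^3L^8=\frac{\pi ^4}{8}\,\omega ^3\,\frac{\kappa ^2N^2}{4\pi ^5}=\frac{\kappa ^2\omega ^3N^2}{32\pi },$$

which is precisely (6).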
This result was a major surprise to me, and I started searching for its explanation. The most important question is: what is the range of validity of the two calculations? Since $`\kappa \sim g_{st}\alpha ^{\prime 2}`$, (10) gives $`L^4\sim Ng_{st}\alpha ^{\prime 2}`$. Supergravity can only be trusted if the length scale of the 3-brane solution is much larger than the string scale $`\sqrt{\alpha ^{\prime }}`$, i.e. for $`Ng_{st}\gg 1`$.<sup>3</sup><sup>3</sup>3A similar conclusion applies to the Strominger-Vafa black hole . Of course, the incident energy also has to be small compared to $`1/\sqrt{\alpha ^{\prime }}`$. Thus, the supergravity calculation should be valid in the “double-scaling limit”
$$\frac{L^4}{\alpha ^{\prime 2}}\sim g_{st}N\rightarrow \infty ,\qquad \omega ^2\alpha ^{\prime }\rightarrow 0.$$
(11)
If the description of the black 3-brane by a stack of many coincident D3-branes is correct, and we presume that it is, then it must agree with the supergravity results in this limit. Since $`g_{st}\sim g_{\mathrm{YM}}^2`$, this corresponds to the limit of infinite ‘t Hooft coupling in the $`𝒩=4`$ $`U(N)`$ SYM theory. Since we also want to send $`g_{st}\rightarrow 0`$ in order to suppress the string loop corrections, we necessarily have to take the large $`N`$ limit. To summarize, the supergravity calculations are expected to give exact information about the $`𝒩=4`$ SYM theory in the limit of large $`N`$ and large ‘t Hooft coupling .
Coming back to the entropy problem, we now see that the Bekenstein-Hawking entropy calculation applies to the $`g_{\mathrm{YM}}^2N\rightarrow \infty `$ limit of the theory, while the free field calculation applies to the $`g_{\mathrm{YM}}^2N\rightarrow 0`$ limit. Thus, the relative factor of $`3/4`$ is not a discrepancy: it relates two different limits of the theory. Indeed, on general grounds we expect that in the ‘t Hooft large $`N`$ limit the entropy is given by
$$S=\frac{2\pi ^2}{3}N^2f(g_{\mathrm{YM}}^2N)V_3T^3.$$
(12)
The function $`f`$ is certainly not constant: for example, a recent two-loop calculation shows that its perturbative expansion is
$$f(g_{\mathrm{YM}}^2N)=1-\frac{3}{2\pi ^2}g_{\mathrm{YM}}^2N+\ldots $$
(13)
Thus, the Bekenstein-Hawking entropy in supergravity, (3), is translated into the prediction that $`f(g_{\mathrm{YM}}^2N\rightarrow \infty )=3/4`$. In fact, a recent string theory calculation of the leading strong coupling correction gives
$$f(g_{\mathrm{YM}}^2N)=\frac{3}{4}+\frac{45}{32}\zeta (3)(2g_{\mathrm{YM}}^2N)^{-3/2}+\ldots .$$
(14)
This is consistent with $`f(g_{\mathrm{YM}}^2N)`$ being a monotonic function which interpolates between 1 at $`g_{\mathrm{YM}}^2N=0`$ and $`3/4`$ at $`g_{\mathrm{YM}}^2N=\infty `$.
Although we have sharpened the region of applicability of the supergravity calculation (9), we have not yet explained why it agrees with the leading order perturbative result (6) on the D3-brane world volume. After including the higher-order SYM corrections, the general structure of the absorption cross-section in the large $`N`$ limit is expected to be
$$\sigma =\frac{\kappa ^2\omega ^3N^2}{32\pi }a(g_{\mathrm{YM}}^2N),$$
(15)
where
$$a(g_{\mathrm{YM}}^2N)=1+b_1g_{\mathrm{YM}}^2N+b_2(g_{\mathrm{YM}}^2N)^2+\ldots $$
For agreement with supergravity, the strong ‘t Hooft coupling limit of $`a(g_{\mathrm{YM}}^2N)`$ should be equal to 1 . In fact, a stronger result is true: all perturbative corrections vanish and $`a=1`$ independent of the coupling. This was first shown explicitly in for the graviton absorption. The absorption cross-section is related to the imaginary part of the two-point function $`\langle T_{\alpha \beta }(p)T_{\gamma \delta }(-p)\rangle `$ in the SYM theory. In turn, this is determined by a conformal “central charge” which satisfies a non-renormalization theorem: it is completely independent of the ‘t Hooft coupling.
In general, the two-point function of a gauge invariant operator in the strongly coupled SYM theory may be read off from the absorption cross-section for the supergravity field which couples to this operator in the world volume action . Some examples of this field operator correspondence may be read off from (5). Thus, we learn that the dilaton absorption cross-section measures the imaginary part of $`\langle trF_{\alpha \beta }^2(p)\,trF_{\gamma \delta }^2(-p)\rangle `$, the Ramond-Ramond scalar absorption cross-section measures the imaginary part of $`\langle trF_{\alpha \beta }\stackrel{~}{F}^{\alpha \beta }(p)\,trF_{\gamma \delta }\stackrel{~}{F}^{\gamma \delta }(-p)\rangle `$, etc. The agreement of these two-point functions with the weak-coupling calculations performed in is explained by supersymmetric non-renormalization theorems. Thus, the proposition that the $`g_{\mathrm{YM}}^2N\rightarrow \infty `$ limit of the large $`N`$ $`𝒩=4`$ SYM theory can be extracted from the 3-brane of type IIB supergravity has passed its first consistency checks.
## 3 The AdS/CFT Correspondence
The circle of ideas reviewed in the previous section received a seminal development by Maldacena who also connected it for the first time with the QCD string idea. Maldacena made a simple and powerful observation that the “universal” region of the 3-brane geometry, which should be directly identified with the $`𝒩=4`$ SYM theory, is the throat, i.e. the region $`r\ll L`$.<sup>4</sup><sup>4</sup>4 Related ideas were also pursued in . The limiting form of the metric (7) is
$$ds^2=\frac{L^2}{z^2}\left(-dt^2+d\vec{x}^2+dz^2\right)+L^2d\mathrm{\Omega }_5^2,$$
(16)
where $`z=\frac{L^2}{r}\gg L`$. This metric describes the space $`AdS_5\times S^5`$ with equal radii of curvature $`L`$. One also finds that the self-dual 5-form Ramond-Ramond field strength has constant flux through this space (the field strength term in the Einstein equation effectively gives a positive cosmological constant on $`S^5`$ and a negative one on $`AdS_5`$). Thus, Maldacena conjectured that type IIB string theory on $`AdS_5\times S^5`$ should be somehow dual to the large $`N`$ $`𝒩=4`$ SYM theory.
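To make the limit explicit (a standard manipulation, spelled out here for convenience): in the throat $`r\ll L`$ one may drop the 1 in $`H=1+L^4/r^4`$, so that (7) reduces to

$$ds^2\approx \frac{r^2}{L^2}\left(-dt^2+d\vec{x}^2\right)+\frac{L^2}{r^2}dr^2+L^2d\mathrm{\Omega }_5^2,$$

and the substitution $`z=L^2/r`$ (with $`dz^2=L^4dr^2/r^4`$) brings this to the form (16).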
Maldacena’s argument was based on the fact that the low-energy ($`\alpha ^{\prime }\rightarrow 0`$) limit may be taken directly in the 3-brane geometry and is equivalent to the throat ($`r\rightarrow 0`$) limit. Another way to motivate the identification of the gauge theory with the throat is to think about the absorption of massless particles considered in the previous section. In the D-brane description, a particle incident from the asymptotic infinity is converted into an excitation of the stack of D-branes, i.e. into an excitation of the gauge theory on the world volume. In the supergravity description, a particle incident from the asymptotic (large $`r`$) region tunnels into the $`r\ll L`$ region and produces an excitation of the throat. The fact that the two different descriptions of the absorption process give identical cross-sections supports the identification of excitations of $`AdS_5\times S^5`$ with the excited states of the $`𝒩=4`$ SYM theory.
Another strong piece of support for this identification comes from symmetry considerations . The isometry group of $`AdS_5`$ is $`SO(2,4)`$, and this is also the conformal group in $`3+1`$ dimensions. In addition we have the isometries of $`S^5`$ which form $`SU(4)\simeq SO(6)`$. This group is identical to the R-symmetry of the $`𝒩=4`$ SYM theory. After including the fermionic generators required by supersymmetry, the full isometry supergroup of the $`AdS_5\times S^5`$ background is $`SU(2,2|4)`$, which is identical to the $`𝒩=4`$ superconformal symmetry. We will see that in theories with reduced supersymmetry the compact $`S^5`$ factor becomes replaced by other compact spaces $`X_5`$, but $`AdS_5`$ is the “universal” factor present in the dual description of any large $`N`$ CFT and realizing the $`SO(2,4)`$ conformal symmetry. One may think of these backgrounds as type IIB theory compactified on $`X_5`$ down to 5 dimensions. Such Kaluza-Klein compactifications of type IIB supergravity were extensively studied in the mid-eighties , and special attention was devoted to the $`AdS_5\times S^5`$ solution because it is a maximally supersymmetric background . It is remarkable that these early works on compactification of type IIB theory were actually solving large $`N`$ gauge theories without knowing it.
As Maldacena has emphasized, however, it is important to go beyond the supergravity limit and think of the $`AdS_5\times X_5`$ space as a background of string theory . Indeed, type IIB strings are dual to the electric flux lines in the gauge theory, and this provides a natural set-up for calculating correlation functions of the Wilson loops. Furthermore, if $`N`$ is sent to infinity while $`g_{\mathrm{YM}}^2N`$ is held fixed and finite, then there are finite string scale corrections to the supergravity limit which proceed in powers of
$$\frac{\alpha ^{\prime }}{L^2}\sim \left(g_{\mathrm{YM}}^2N\right)^{-1/2}.$$
(17)
If we wish to study finite $`N`$, then there are also string loop corrections in powers of
$$\frac{\kappa ^2}{L^8}\sim N^{-2}.$$
(18)
As expected, taking $`N`$ to infinity enables us to take the classical limit of the string theory on $`AdS_5\times X_5`$. However, in order to understand the large $`N`$ gauge theory with finite ‘t Hooft coupling, we should think of the $`AdS_5\times X_5`$ as the target space of a 2-dimensional sigma model describing the classical string physics . The fact that after the compactification on $`X_5`$ the string theory is 5-dimensional supports Polyakov’s idea . In $`AdS_5`$ the fifth dimension is related to the radial coordinate and, after a change of variables $`z=Le^{\phi /L}`$, the sigma model action turns into a special case of the general ansatz proposed in ,
$$I=\frac{1}{2}\int d^2\sigma [(\partial _i\phi )^2+a^2(\phi )(\partial _iX^\mu )^2+\ldots ],$$
(19)
where $`a(\phi )=e^{\phi /L}`$. It is clear, however, that the string sigma models dual to the gauge theories are of rather peculiar nature. The new feature revealed by the D-brane approach, which is also a major stumbling block, is the presence of the Ramond-Ramond background fields. Little is known to date about such 2-dimensional field theories and, in spite of recent new insights , an explicit solution is not yet available.
### 3.1 Correlation functions and the bulk/boundary correspondence
Maldacena’s work provided a crucial insight that the $`AdS_5\times S^5`$ throat is the part of the 3-brane geometry that is most directly related to the $`𝒩=4`$ SYM theory. It is important to go further, however, and explain precisely in what sense the two should be identified and how physical information can be extracted from this duality. Major strides towards answering these questions were made in two subsequent papers where essentially identical methods for calculating correlation functions of various operators in the gauge theory were proposed. As we mentioned in section 2.2, even prior to some information about the field/operator correspondence and about the two-point functions had been extracted from the absorption cross-sections. The reasoning of was a natural extension of these ideas.
One may motivate the general method as follows. When a wave is absorbed, it tunnels from the asymptotic infinity into the throat region, and then continues to propagate toward smaller $`r`$. Let us separate the 3-brane geometry into two regions: $`r\gtrsim L`$ and $`r\lesssim L`$. For $`r\lesssim L`$ the metric is approximately that of $`AdS_5\times S^5`$, while for $`r\gtrsim L`$ it becomes very different and eventually approaches the flat metric. Signals coming in from large $`r`$ may be thought of as disturbing the “boundary” of $`AdS_5`$ at $`r\sim L`$, and then propagating into the bulk. This suggests that, if we discard the $`r\gtrsim L`$ part of the 3-brane metric, then we have to cut off the radial coordinate of $`AdS_5`$ at $`r\sim L`$, and the gauge theory correlation functions are related to the response of the string theory to boundary conditions. Guided by this idea, proposed to identify the generating functional of connected correlation functions in the gauge theory with the extremum of the classical string action subject to the boundary conditions that $`\varphi (x^\lambda ,z)=\varphi _b(x^\lambda )`$ at $`z=L`$ (at $`z=\infty `$ all fluctuations are required to vanish):<sup>5</sup><sup>5</sup>5 As usual, in calculating the correlation functions in a CFT it is convenient to carry out the Euclidean continuation. On the string theory side we then have to use the Euclidean version of $`AdS_5`$.
$$W[\varphi _b(x^\lambda )]=S_{\varphi _b(x^\lambda )}.$$
(20)
$`W`$ generates the connected Green’s functions of the gauge theory operator that corresponds to the field $`\varphi `$ in the sense explained in section 2.2, while $`S_{\varphi _b(x^\lambda )}`$ is the extremum of the classical string action subject to the boundary conditions. An essentially identical prescription was also proposed in with a somewhat different motivation. If we are interested in the correlation functions at infinite ‘t Hooft coupling, then the problem of extremizing the classical string action reduces to solving the equations of motion in type IIB supergravity whose form is known explicitly . Note that from the point of view of the metric (16) the boundary conditions are imposed not at $`z=0`$ (which would be a true boundary of $`AdS_5`$) but at some finite value $`z=z_{cutoff}`$. It does not matter which value it is since it can be changed by an overall rescaling of the coordinates $`(z,x^\lambda )`$ which leaves the metric unchanged. The physical meaning of this cut-off is that it acts as a UV cut-off in the gauge theory . Indeed, the radial coordinate of $`AdS_5`$ is to be thought of as the effective energy scale of the gauge theory , and decreasing $`z`$ corresponds to increasing energy. In some calculations one may remove the cut-off from the beginning and specify the boundary conditions at $`z=0`$, but in others the cut-off is needed at intermediate stages and may be removed only at the end .
There is a growing literature on explicit calculations of correlation functions following the proposal of . In these notes we will limit ourselves to a brief discussion of the 2-point functions. Their calculations show that the dimensions of gauge invariant operators are determined by the masses of the corresponding fields in $`AdS_5`$ . For scalar operators this relation is
$$\mathrm{\Delta }=2+\sqrt{4+(mL)^2}.$$
(21)
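This is the larger root of the standard relation between the mass of a scalar field in $`AdS_5`$ and the dimension of the dual operator, quoted here for orientation:

$$\mathrm{\Delta }(\mathrm{\Delta }-4)=(mL)^2\qquad \Rightarrow \qquad \mathrm{\Delta }=2+\sqrt{4+(mL)^2}.$$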
Therefore, the operators in the $`𝒩=4`$ large $`N`$ SYM theory naturally break up into two classes: those that correspond to the Kaluza-Klein states of supergravity and those that correspond to massive string states. Since the radius of the $`S^5`$ is $`L`$, the masses of the Kaluza-Klein states are proportional to $`1/L`$. Thus, the dimensions of the corresponding operators are independent of $`L`$ and therefore independent of $`g_{\mathrm{YM}}^2N`$. On the gauge theory side this is explained by the fact that the supersymmetry protects the dimensions of certain operators from being renormalized: they are completely determined by the representation under the superconformal symmetry . All families of the Kaluza-Klein states, which correspond to such BPS protected operators, were classified long ago .
On the other hand, the masses of string excitations are $`m^2=\frac{4n}{\alpha ^{\prime }}`$ where $`n`$ is an integer. For the corresponding operators the formula (21) predicts that the dimensions do depend on the ‘t Hooft coupling and, in fact, blow up for large $`g_{\mathrm{YM}}^2N`$ as $`2\left(ng_{\mathrm{YM}}\sqrt{2N}\right)^{1/2}`$. This is a highly non-trivial prediction of the AdS/CFT duality which has not yet been verified on the gauge theory side.
It is often stated that the gauge theory lives on the boundary of $`AdS_5`$. A more precise statement is that the gauge theory corresponds to the entire $`AdS_5`$, with the effective energy scale measured by the radial coordinate. In this correspondence the bare (UV) quantities in the gauge theory are indeed specified at the boundary of $`AdS_5`$. In calculating the correlation functions it is crucial that the boundary values of various fields in $`AdS_5`$ act as the sources in the gauge theory action which couple to gauge invariant operators as in (5). A similar connection arises in the calculation of Wilson loop expectation values . A Wilson loop is specified by a contour in $`x^\lambda `$ space placed at $`z=z_{cutoff}`$. One then looks for a minimal area surface in $`AdS_5`$ bounded by this contour and evaluates the Nambu action $`I_0`$ which is proportional to the area. The semiclassical value of the Wilson loop is then $`e^{-I_0}`$. This prescription, which is motivated by the duality between fundamental strings and electric flux lines, gives interesting results which are consistent with the conformal invariance . For example, the quark-antiquark potential scales as $`\sqrt{g_{\mathrm{YM}}^2N}/|\vec{x}|`$. Note that this strong coupling result is different from the weak coupling limit where we have $`V\sim g_{\mathrm{YM}}^2N/|\vec{x}|`$.
### 3.2 Conformal field theories and Einstein manifolds
As we mentioned above, the duality between strings on $`AdS_5\times S^5`$ and the $`𝒩=4`$ SYM is naturally generalized to dualities between strings on $`AdS_5\times X_5`$ and other conformal gauge theories. The 5-dimensional compact space $`X_5`$ is required to be a positively curved Einstein manifold, i.e. one for which $`R_{\mu \nu }=\mathrm{\Lambda }g_{\mu \nu }`$ with $`\mathrm{\Lambda }>0`$. The number of supersymmetries in the dual gauge theory is determined by the number of Killing spinors on $`X_5`$.
The simplest examples of $`X_5`$ are the orbifolds $`S^5/\mathrm{\Gamma }`$ where $`\mathrm{\Gamma }`$ is a discrete subgroup of $`SO(6)`$ . In these cases $`X_5`$ has the local geometry of a 5-sphere. The dual gauge theory is the IR limit of the world volume theory on a stack of $`N`$ D3-branes placed at the orbifold singularity of $`R^6/\mathrm{\Gamma }`$. Such theories typically involve product gauge groups $`SU(N)^k`$ coupled to matter in bifundamental representations .
Constructions of the dual gauge theories for Einstein manifolds $`X_5`$ which are not locally equivalent to $`S^5`$ are also possible. The simplest example is the Romans compactification on $`X_5=T^{1,1}=(SU(2)\times SU(2))/U(1)`$ . It turns out that the dual gauge theory is the conformal limit of the world volume theory on a stack of $`N`$ D3-branes placed at the singularity of a certain Calabi-Yau manifold known as the conifold. This turns out to be the $`𝒩=1`$ superconformal field theory with gauge group $`SU(N)\times SU(N)`$ coupled to two chiral superfields in the $`(𝐍,\overline{𝐍})`$ representation and two chiral superfields in the $`(\overline{𝐍},𝐍)`$ representation . This theory has an exactly marginal quartic superpotential which produces a critical line related to the radius of $`AdS_5\times T^{1,1}`$.
## 4 Towards Non-conformal Gauge Theories in Four Dimensions
In the preceding sections I hope to have convinced the reader that type IIB strings on $`AdS_5\times X_5`$ shed genuinely new light on four-dimensional conformal gauge theories. While many insights have already been achieved, I am convinced that a great deal remains to be learned in this domain. We should not forget, however, that the prize question is whether this duality can be extended to QCD or at least to other gauge theories which exhibit the asymptotic freedom and confinement. It is immediately clear that this will not be easy because, as we remarked in section 3, a string approach to weakly coupled gauge theory has not yet been fully developed (the well-understood supergravity limit describes gauge theory with very large ‘t Hooft coupling). On the other hand, the asymptotic freedom makes the coupling approach zero in the UV region . Nevertheless, there may be some at least qualitative approaches to non-conformal gauge theories that shed light on the essential physical phenomena.
One such approach, proposed by Witten , builds on the observation that thermal gauge theories are described by near-extremal $`p`$-brane solutions . It is also known that the high temperature limit of a supersymmetric gauge theory in $`p+1`$ dimensions is described by non-supersymmetric gauge theory in $`p`$ dimensions. Thus, 3-dimensional non-supersymmetric gauge theory is dual to the throat region of the near-extremal 3-brane solution which turns out to have the geometry of a black hole in $`AdS_5`$ (similar black holes were studied long ago by Hawking and Page ). Similarly, 4-dimensional non-supersymmetric gauge theory is dual to the near-horizon region of the near-extremal 4-brane solution . Witten calculated the Wilson loop expectation values in these geometries and showed that they satisfy the area law. Furthermore, calculations of the glueball masses produce discrete spectra with strong resemblance to the lattice simulations . Unfortunately, this supergravity model has some undesirable features as well: for example, the presence in the geometry of a large $`8-p`$ dimensional sphere introduces into the spectrum families of light “Kaluza-Klein glueballs” which are certainly absent from the lattice results. Presumably, the root of the problems is that the bare ‘t Hooft coupling is taken to be large, while in order to achieve the conventional continuum limit it has to be sent to zero along a renormalization group trajectory.
A pessimistic conclusion would be that little more can be done at present because the supergravity approximation is supposed to be poor at weak ‘t Hooft coupling. Nevertheless, I feel that one should not give up attempts to understand the asymptotic freedom on the string side of the duality. In fact, some progress in this direction was recently achieved in following Polyakov’s suggestion on how to break supersymmetry. Polyakov argued that a string dual of non-supersymmetric gauge theory should have world sheet supersymmetry without space-time supersymmetry. Examples of such theories include the type $`0`$ strings, which are NSR strings with a non-chiral GSO projection which breaks the space-time supersymmetry .
There are two type $`0`$ theories, A and B, and both have no space-time fermions in their spectra but produce modular invariant partition functions . The massless bosonic fields are as in the corresponding type II theory (A or B), but with the doubled set of the Ramond-Ramond (R-R) fields. The type 0 theory also contains a tachyon, which is why it has not received much attention thus far. In it was suggested, however, that the presence of the tachyon does not spoil its application to large $`N`$ gauge theories. A well-established route towards gauge theory is via the D-branes which were first considered in the type 0 context in . Large $`N`$ gauge theories, which are constructed on $`N`$ coincident D-branes of type 0 theory, may be shown to contain no open string tachyons .
In the presence of a bulk tachyon was turned into an advantage because it gives rise to the renormalization group flow. There the $`3+1`$ dimensional $`SU(N)`$ theory coupled to 6 adjoint massless scalars was constructed as the low-energy description of $`N`$ coincident electric D3-branes of type $`0`$B theory.<sup>6</sup><sup>6</sup>6 In the type $`0`$B theory the 5-form R-R field strength $`F_5`$ is not constrained to be selfdual. Therefore, it is possible to have electrically or magnetically charged 3-branes. This should be contrasted with the situation in the type IIB theory where the 5-form strength is constrained to be selfdual and, thus, only the selfdual 3-branes are allowed. The conjectured dual type 0 background thus carries $`N`$ units of electric 5-form flux. The dilaton decouples from the $`(F_5)^2`$ terms in the effective action, and the only source for it originates from the tachyon mass term,
$$\nabla ^2\mathrm{\Phi }=\frac{1}{8}m^2e^{\frac{1}{2}\mathrm{\Phi }}T^2,\qquad m^2=-\frac{2}{\alpha ^{\prime }}.$$
(22)
Thus, the tachyon background induces a radial variation of $`\mathrm{\Phi }`$. Since the radial coordinate is related to the energy scale of the gauge theory, the effective coupling decreases toward the ultraviolet. In the UV limit of the type $`0`$B background dual to the gauge theory was studied in more detail and a solution was found where the geometry is $`AdS_5\times S^5`$ while the ‘t Hooft coupling flows logarithmically. A calculation of the quark-antiquark potential showed qualitative agreement with what is expected in an asymptotically free theory.
These results raise the hope that the AdS/CFT duality can indeed be generalized to asymptotically free gauge theories. While we are still far from constructing reliable string duals of such theories, the availability of new ideas on this old and difficult problem makes me hopeful that more surprises lie ahead.
## Acknowledgements
I am grateful to S. Gubser, A. Peet, A. Polyakov, A. Tseytlin and E. Witten, my collaborators on parts of the material reviewed in these notes. I also thank B. Kursunoglu and other organizers of Orbis Scientiae ’98, especially L. Dolan (the convener of the string session), for sponsoring a very interesting conference. This work was supported in part by the NSF grant PHY-9802484 and by the James S. McDonnell Foundation Grant No. 91-48.
# Recent results on intermediate polars
## 1 Introduction
The intermediate polars (IPs) have been reviewed comprehensively by Patterson (1994) and Warner (1995), see also Hellier (1995; 1996). Here I discuss areas of current activity, mainly from the observational side, and with an X-ray bias. A good place to start is with a (fairly conservative) census of currently known systems, presented on the now traditional spin–orbit diagram (Fig. 1). Compared to previous versions we have a tripling of the known systems below the gap, with RX 1238–38 (Buckley et al. 1998) and RX 0757+63 (Tovmassian et al. 1998) joining EX Hya.
## 2 Accretion Mode
Almost all IPs show a dominant X-ray periodicity at the spin period (a defining characteristic). Some, though, show additional X-ray periodicities, particularly at the beat frequency $`\omega `$ – $`\mathrm{\Omega }`$ (where $`\omega `$ = spin frequency and $`\mathrm{\Omega }`$ = orbital frequency). This was seen first in TX Col (Buckley & Tuohy 1989) and subsequently in at least 5 others (e.g. Hellier 1991; 1998). The now-standard interpretation is that some of the accretion stream overflows the initial impact with the disc and connects directly onto field lines, producing ‘flipping’ between the poles on the beat period (e.g. Hellier 1991). This has become known as ‘disc-overflow’ or ‘stream-overflow’ accretion. It appears to be variable, judging by the changing ratio of the spin and beat amplitudes in stars such as FO Aqr and TX Col (Hellier 1991; Norton et al. 1997; Wheatley, this volume).
This idea was criticised by Murray at this conference using the following argument. The overflowing stream can’t penetrate further in than the ballistic distance of minimum approach to the white dwarf, $`r_{\mathrm{min}}`$. If the disc disruption, $`r_{\mathrm{mag}}`$, occurs inside $`r_{\mathrm{min}}`$, the overflowing stream would re-impact on the disc, not the magnetosphere. If, though, $`r_{\mathrm{mag}}>r_{\mathrm{min}}`$, it is unclear whether a disc can form. Murray et al. (1999, see also this volume) propose that spiral shocks could provide an alternative means by which material at the magnetosphere retains knowledge of orbital phase, and hence produces sideband periodicities.
My preference is still for the disc-overflow model, because in X-ray lightcurves the dominant sideband periodicity (where seen) is at $`\omega `$ – $`\mathrm{\Omega }`$, rather than 2($`\omega `$ – $`\mathrm{\Omega }`$) (e.g. Hellier 1992). The $`\omega `$ – $`\mathrm{\Omega }`$ ‘pole-flipping’ frequency arises naturally when two poles are fed from one location in orbital phase, as from a stream (e.g. Hellier 1991; Wynn & King 1992). A twin-armed spiral leads most readily to twice this frequency, or 2($`\omega `$ – $`\mathrm{\Omega }`$) (Murray et al. 1999, and this volume).
So can we overcome Murray’s argument? In addressing it we encounter a far more general problem for IP accretion (see also Warner 1996). I’ll use FO Aqr, often showing a $`\omega `$ – $`\mathrm{\Omega }`$ modulation (Hellier 1993; Beardmore et al. 1998), as a test case. Firstly, FO Aqr is almost certainly close to equilibrium rotation, since the $`O`$ – $`C`$ of the spin cycle has alternated between spin-up and spin-down over the last decade (Patterson et al. 1998). We can thus assume that the 1254-s spin period is also the Keplerian period at the inner edge of the disc (i.e. a critical fastness parameter $`\sim `$1). Using the standard formulae for CV parameters (as collected in Warner 1995), with $`P_{\mathrm{orb}}`$ = 4.85 hr and assuming $`M_1=0.7M_{\odot }`$, we find that as a fraction of the stellar separation, $`r_{\mathrm{min}}`$ = 6%, the stream’s circularization radius, $`r_{\mathrm{circ}}`$ = 10%, and $`r_{\mathrm{mag}}`$ = 14%.
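For readers who want to reproduce these percentages, a minimal sketch follows. The particular fitting formulae used (the $`M_2`$–$`P_{\mathrm{orb}}`$ relation and the Lubow & Shu-type expressions for $`r_{\mathrm{min}}`$ and $`r_{\mathrm{circ}}`$), and their coefficients, are my assumptions about which of the standard formulae are meant and should be checked against Warner (1995); the white dwarf mass is the $`0.7M_{\odot }`$ assumed above.

```python
import math

G, Msun = 6.674e-11, 1.989e30                    # SI units
P_orb, P_spin = 4.85 * 3600.0, 1254.0            # FO Aqr orbital and spin periods [s]
M1 = 0.7 * Msun                                  # assumed white-dwarf mass

# Secondary mass from an approximate period-mass relation, M2/Msun ~ 0.065 P_orb(hr)^(5/4)
M2 = 0.065 * (P_orb / 3600.0) ** 1.25 * Msun
q = M2 / M1

# Binary separation from Kepler's third law
a = (G * (M1 + M2) * P_orb**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# Ballistic distance of closest approach and circularization radius (Lubow & Shu-type fits)
r_min = 0.0488 * q**-0.464 * a
r_circ = 0.0859 * q**-0.426 * a

# Spin equilibrium: take the magnetospheric radius to be the corotation radius
r_co = (G * M1 * P_spin**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

for name, r in [("r_min", r_min), ("r_circ", r_circ), ("r_mag ~ r_co", r_co)]:
    print(f"{name}: {100 * r / a:.1f}% of the separation")
# -> roughly 6%, 10% and 14-15%, close to the values quoted above
```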
This is good news for the disc-overflow model, in that the stream would hit the magnetosphere. Indeed, this is insensitive to $`q`$ and applies to most IPs.<sup>1</sup><sup>1</sup>1Only in GK Per, with its long 2-d orbit, would $`r_{\mathrm{min}}`$ be well outside $`r_{\mathrm{mag}}`$, and here Hellier & Livio (1994) proposed that a re-impact on the disc resulted in the X-ray QPO seen in outburst (though see Morales-Rueda, Still & Roche, this volume, for a contrary opinion). However, having $`r_{\mathrm{mag}}`$$`>`$$`r_{\mathrm{circ}}`$ is bad news for the disc (its angular momentum would dissipate). Since IPs do seem to be predominantly disc-fed (e.g. the dominance of X-ray pulsations at $`\omega `$; Hellier 1991), this is a general problem for the theory of magnetic accretion and points to the need for torques not currently taken into account, either produced by the disc-field interaction, or by the magnetic interaction of the two stars (e.g. Warner 1996). Note that Mason (1997), studying the large spin-down rate in PQ Gem, suggested that material was threading onto field lines well outside the co-rotation radius, in contradiction to the standard theory (e.g. Ghosh & Lamb 1979).
If we rule out $`r_{\mathrm{mag}}`$$`>`$$`r_{\mathrm{circ}}`$, the disc-overflow model can still work in the regime $`r_{\mathrm{circ}}\gtrsim r_{\mathrm{mag}}\gtrsim r_{\mathrm{min}}`$ provided that the disc can form. Whether this is so is unclear, though if the centrifugal barrier of a spinning magnetosphere makes accretion inefficient, a build up of material could screen enough of the field to allow a disc to form. Lamb & Melia (1988) suggest that this always occurs.
Lastly, note that $`r_{\mathrm{mag}}`$ is set by the dense disc. If an overflowing stream is relatively tenuous (Mukai, Ishida & Osborne 1994 estimated that it carried only 2% of the accretion flow in FO Aqr) it could be penetrated by the field further out than the disc is, allowing stream–field interactions even with $`r_{\mathrm{mag}}<r_{\mathrm{min}}`$.
The above arguments also lead to the expectation that faster rotators, which have smaller magnetospheres, would be less likely to show disc-overflow sidebands; Norton et al. (1999) suggest that this is indeed the case.
### 2.1 Optical sidebands
As a result of the disc-overflow idea becoming popular to explain X-ray sidebands, some recent papers have used it to explain optical sidebands also. While this is possible, it should be remembered that there is nothing wrong with the traditional appeal to reprocessing of spin-pulsed X-rays by the secondary and/or hot-spot. Indeed, detailed studies of the phasing of the optical beat period often support this (e.g. Hellier, Cropper & Mason 1991). The reason that this won’t work in the X-ray is that the combination of the solid-angle of the target and the albedo to X-ray reflection means that the amplitude can be at most a few percent of that of the spin-pulsed X-rays. (See also Wickramasinghe & Ferrario, this volume, for an account of optical sidebands.)
One thing that has proved puzzling is the occasional observation of optical pulsations at 2($`\omega `$ – $`\mathrm{\Omega }`$), with little or nothing seen at $`\omega `$ – $`\mathrm{\Omega }`$ (e.g. in TV Col and RX 1238–38; Buckley & Sullivan 1992; Buckley et al. 1998). A twin-armed spiral shock could well explain this (e.g. Murray et al. 1999) since reprocessing off diametrically-opposite sites would double the frequency and so give 2($`\omega `$ – $`\mathrm{\Omega }`$) rather than $`\omega `$ – $`\mathrm{\Omega }`$.
### 2.2 Stream-fed accretion
Through Buckley et al.’s papers (1995, 1997) we know of one system, RX 1712–24, that appears to accrete solely by a stream, since $`\omega `$ – $`\mathrm{\Omega }`$, rather than $`\omega `$, dominates the X-ray lightcurves. Fig. 2 shows unpublished H$`\beta `$ profiles folded on the $`\omega `$ – $`\mathrm{\Omega }`$ cycle, which appear to show the stream flipping in velocity between the poles, as expected in a discless accretor. We heard relatively little about RX 1712–24 at this conference, partly because the tumbling of the magnetosphere beneath the stream produces chaotic variability that is hard to analyse. For instance a Fourier transform of an ASCA lightcurve (unpublished) shows significant power only at 2($`\omega `$ – $`\mathrm{\Omega }`$), in contrast to the $`\omega `$ – $`\mathrm{\Omega }`$ detected by Buckley et al. (1997).
## 3 Curtain-fed accretion
At this conference I presented ‘spin-cycle tomograms’ of several IPs. The technique of Doppler tomography can be applied as readily to data folded on the spin cycle as on the orbital cycle, only the interpretation is different. Fig. 3 shows tomograms of AO Psc and PQ Gem (for a fuller account see Hellier 1997a, 1999). That of AO Psc shows the accretion curtain from the upper pole in the “3 o’clock” position. Note that it subtends $`\sim `$100° at the origin, implying that the (line-emitting) accretion curtain covers this range of azimuth.
The PQ Gem tomogram shows emission from both upper and lower poles. In contrast to several IPs, which show maximum blueshift from the upper curtain when it points away from us (e.g. Hellier, Cropper & Mason 1991), the curtains in PQ Gem appear twisted. If they followed the expected pattern (maximum blueshift at spin maximum) the poles would lie symmetrically on the $`x`$-axis in the tomogram, but they are rotated by $`\sim `$40°. Encouragingly, there is independent evidence from the X-ray lightcurves (Mason 1997) and from polarimetry (Potter et al. 1997) for the same effect, with material accreting predominantly along field lines ahead of the magnetic poles.
### 3.1 The transition region
I flag the transition region between the disc and field since it is the biggest area of uncertainty in IPs. We are still discussing whether it is an orderly transition from a ‘normal’ disc to magnetically-channeled infall, or whether the ‘disc’ is a sea of diamagnetic blobs floating in the magnetic field (e.g. Wynn & King 1995). The transition region, of course, determines the accretion footprint. Given the relatively small accretion areas deduced from observations (see below) my hunch is that the transition region is relatively abrupt and ordered, and that diamagnetic blobs are not able to cross field lines much.
Observationally we find it hard enough to measure $`r_{\mathrm{mag}}`$, never mind the width of the transition region $`\mathrm{\Delta }r`$. We also can’t deduce $`r_{\mathrm{mag}}`$ from the magnetic fields since we have secure $`B`$ values only for the 3 IPs showing polarised light \[these are PQ Gem at 9–21 MG (Väth et al. 1996; Potter et al. 1997), RX 1712–24 at 9–27 MG (Väth 1997), and BG CMi at 2–6 MG (Chanmugam et al. 1990)\]. If someone succeeded in finding $`r_{\mathrm{mag}}`$, though, quoting Patterson (1994), “the treasure trove of the world’s $`\dot{P}`$ data, accumulated over decades and still awaiting a proper interpretation, is theirs for the taking”. Spin-cycle tomography (see above), offering hints of twisted and azimuthally-extended curtains, is perhaps the likeliest way forward.
### 3.2 The accretion footprint
If the transition region is hard to pin down, how about the other end of the curtain, the accretion footprint? We have fewer observational constraints than in AM Her stars. This is because, whereas AM Hers have (typically) one dominant accretion region, IPs accrete equally onto two opposite regions (see below) making the interpretation of their lightcurves much harder. Recently, though, the deeply-eclipsing IP XY Ari has provided secure results. The egress from X-ray eclipse in $`\lesssim `$2 s (Fig. 4) limits the accretion region to $`\lesssim `$0.002 of the white dwarf area during quiescence (Hellier 1997b). At the peak of outburst, however, the accretion region at the upper pole is visible at all spin-phases, which implies that it must be extended into an almost-complete azimuthal ring (Hellier, Mukai & Beardmore 1997). There is no constraint on the azimuthal extent during quiescence. Thus accretion regions seem to be thin ribbons, covering small areas, but extended in magnetic longitude. This, of course, is what we expect from tracing back field lines from the disc-disruption region (e.g. Rosen 1992), but confirmation from data is encouraging.
Additional information comes from the black-body-like soft components in the X-ray spectra of some IPs. Their fluxes lead to fractional areas $`f\sim `$10<sup>-5</sup> (Haberl & Motch 1995), and presumably the hard-X-ray $`f`$ is smaller still.
## 4 X-ray pulse profiles
The X-ray spin pulses in IPs are potentially the biggest clue to the accretion regions on the white dwarf, however their interpretation is still not settled. Progress is currently hampered by ambiguity in the models, in that there are many ways of creating a quasi-sinusoid; considerations include occultation, photo-electric absorption, electron scattering, the effect of non-zero shock heights, and offset or asymmetric dipoles. Although I don’t have space for a comprehensive account, I can outline the current state of thought.
Firstly, let me claim that all IPs accrete roughly equally onto two opposite poles and that both poles contribute to the pulsation. The theoretical justification is that we have never found magnetic monopoles; that a disc feeds both poles roughly equally (asymmetries have only a minor effect out at $`r_{\mathrm{mag}}`$), and that even a stream distributes material evenly onto both poles when averaged over the beat cycle. \[In contrast, stream accretion in a phase-locked AM Her can greatly favour one pole.\] The observational evidence is the fact that IP hard-X-ray lightcurves never go to zero flux \[as occurs in AM Her stars when only one pole accretes and that pole is on the far side of the white dwarf\]. Indeed, when the ‘IP’ RX 1914+24 was observed to have zero flux for half its cycle, Cropper et al. (1998a) proposed that it must be an ultra-short-period AM Her star, which appears to have been confirmed by further data (Ramsay et al., this volume).
There is one exception to the above, which is XY Ari in outburst.<sup>2</sup><sup>2</sup>2We don’t see DQ Her’s white dwarf at all, judging by the high inclination and lack of X-rays. This star has a very high inclination ($`>`$ 80°), a short spin pulse (and thus a small equilibrium $`r_{\mathrm{mag}}`$), and during outburst the increased mass transfer reduced $`r_{\mathrm{mag}}`$ by a factor of two. The combination of all three allowed the disc to block the line-of-sight to the lower pole (Hellier et al. 1997), but this exception under extreme conditions reinforces the point that normally both poles must be seen. Many past IP papers have considered only the upper pole, since this is conceptually easier, but this simplification is no longer profitable.
### 4.1 Occultation
Two symmetric, zero-height, accretion poles give no net modulation through occultation, since one site’s appearance compensates for the disappearance of the site diametrically opposite. A non-zero shock height, though, breaks the symmetry. At large shock heights ($`h\gtrsim `$0.3) the upper pole is always visible; we see most of both poles when the upper pole points away (which I’ll call ‘phase 0’ for brevity), and at phase 0.5 the lower pole is mostly hidden, so the net modulation has a flux minimum at phase 0.5. This may apply to EX Hya (Hellier et al. 1991; Allan, Hellier & Beardmore 1998; Mukai, this volume). At lower, and more typical, shock heights ($`h\sim `$0.05) both poles can pass over the limb, and occultation produces maxima near phases 0.25 and 0.75, when both poles are visible because both are on the limb; the result is a small-amplitude, double-peaked modulation (Mukai, this volume). Such features, seen in, e.g., XY Ari and FO Aqr, have previously been ascribed to offset dipoles (e.g. Hellier et al. 1997; Beardmore et al. 1998). It will be difficult to separate these causes, though Mukai’s mechanism is the more natural.
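The geometrical part of this argument is easy to play with numerically. The toy model below is my own illustrative sketch (not Mukai’s calculation): each pole is a single point of equal brightness at height $`h`$ (in white dwarf radii) above the surface, with no beaming, absorption, foreshortening or extended emission, and the inclination and magnetic colatitude (85° and 30° here) are arbitrary. It reproduces only the simplest behaviour: at small $`h`$ each footpoint dips behind the limb once per cycle, so both are seen together only near phases 0.25 and 0.75, giving a weak double-humped profile, while at larger $`h`$ both points stay in view throughout the cycle.

```python
import numpy as np

def two_pole_flux(i_deg, m_deg, h, nphase=400):
    """Toy occultation model: two antipodal point-like emission regions at
    height h above the white dwarf surface; each contributes one unit of
    flux whenever it is not hidden behind the star."""
    i, m = np.radians(i_deg), np.radians(m_deg)
    phase = np.linspace(0.0, 1.0, nphase, endpoint=False)
    # angle between the upper pole and the line of sight over the spin cycle
    cos_th = np.cos(i) * np.cos(m) + np.sin(i) * np.sin(m) * np.cos(2 * np.pi * phase)
    th_upper = np.arccos(cos_th)
    th_lower = np.pi - th_upper                      # diametrically opposite pole
    # a point at radius (1 + h) R_wd is visible while theta < 90 deg + arccos(1/(1+h))
    horizon = np.pi / 2 + np.arccos(1.0 / (1.0 + h))
    return phase, (th_upper < horizon).astype(float) + (th_lower < horizon).astype(float)

for h in (0.05, 0.3):
    phase, flux = two_pole_flux(85.0, 30.0, h)
    print(h, flux.min(), flux.max())   # h = 0.05: varies between 1 and 2; h = 0.3: constant at 2
```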
### 4.2 Opacity
If you pour $`10^{17}`$ g s$`^{\text{–1}}`$ onto $`10^{-3}`$–$`10^{-5}`$ of the white dwarf then a path length of 0.01–0.05 $`R_{\mathrm{wd}}`$ gives an electron-scattering optical depth of $`\sim `$1–100. Photoelectric absorption would extinguish soft X-rays unless, as expected, the flow is highly ionized. These numbers demonstrate that opacity in the post-shock flow is highly significant, and is likely to dominate the pulse profiles, given that occultation largely cancels out. This is confirmed by the finding that electron-scattering depths of a few are required to model the X-ray spectra over spin phase (e.g. Hellier et al. 1996) and by the observation of Compton-broadened K$`\alpha `$ line profiles in some IPs (Hellier, Mukai & Osborne 1998).
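A back-of-the-envelope version of this estimate is sketched below; the white dwarf mass and radius ($`0.7M_{\odot }`$, $`7\times 10^8`$ cm), the mean mass per electron, and the factor-of-four velocity drop at a strong shock are assumed, illustrative inputs rather than values taken from the text.

```python
import math

# Assumed, illustrative parameters (cgs)
G, m_H, sigma_T = 6.674e-8, 1.67e-24, 6.65e-25
M_wd, R_wd = 0.7 * 1.989e33, 7e8       # ~0.7 Msun white dwarf
mdot = 1e17                            # accretion rate [g/s]
f = 1e-4                               # fractional area of the accretion footprint
ell = 0.03 * R_wd                      # horizontal path length through the column

v_ff = math.sqrt(2 * G * M_wd / R_wd)  # free-fall speed at the surface
v_post = v_ff / 4.0                    # strong-shock jump: velocity drops by 4
rho = mdot / (f * 4 * math.pi * R_wd**2 * v_post)
n_e = rho / (1.2 * m_H)                # ~1 electron per 1.2 nucleon masses
print(f"tau_es ~ {n_e * sigma_T * ell:.0f}")
# of order 10 here; letting f span 1e-3 to 1e-5 and the path length 0.01 to 0.05 R_wd
# covers roughly the ~1-100 range quoted above
```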
For the high values of $`\dot{M}/f`$ implied by the above the radial optical depth through the accretion shock and the pre-shock flow will exceed the horizontal optical depth, so that radiation will emerge preferentially through the sides of the accretion flow (see Fig. 5). This is the ‘accretion curtain’ model for producing large quasi-sinusoidal pulses with both poles acting in phase and with minima when the upper pole points towards the observer. It applies most securely to AO Psc, whose X-ray lightcurve shows a large-amplitude, sinusoidal pulse which is deeper at lower energies, and whose optical emission lines are redshifted during the minimum (see Hellier et al. 1991, 1996, and also see Kim & Beuermann 1995, 1996 for a theoretical model of the situation).
This can’t be the whole story, however, since many IPs (e.g. V405 Aur, YY Dra, GK Per, XY Ari) show smaller-amplitude, double-humped X-ray pulse profiles, rather than the ‘traditional’ quasi-sinusoid. One can perhaps explain this if the inclination and dipole offset conspire to keep the accretion curtain from crossing the line-of-sight, allowing occultation effects, due to non-zero shock heights or asymmetric dipoles, to take over. The curtain would never cross the line-of-sight for inclinations $`i<\delta +ϵ`$ where $`\delta `$ is the dipole offset and $`ϵ`$ (18 for $`r_{\mathrm{mag}}`$ = 10 $`R_{\mathrm{wd}}`$) is the magnetic colatitude of accretion. Many of the IPs showing quasi-sinusoids (FO Aqr, AO Psc, BG CMi) do seem to be medium-inclination systems (as judged by grazing eclipses and/or X-ray orbital dips; Hellier 1995) where curtain-caused dips would be greatest.
Another factor is that, as I’ve pointed out previously (Hellier 1996, see also Allan et al. 1996; Norton et al. 1999), the IPs showing double-humped pulses tend to be the faster rotators (e.g. V405 Aur, XY Ari, YY Dra, AE Aqr). Faster rotators will (in equilibrium) have smaller values of $`r_{\mathrm{mag}}`$. I suggested that if smaller $`r_{\mathrm{mag}}`$ corresponded to larger $`f`$ values then the situation above could be reversed, and optical depths could be lowest radially. If so, the accretion regions would now act as searchlights, beaming X-rays outwards. As the two poles swept into view a double-humped pulse would result. This seems to be supported by UV data on the fastest rotator, AE Aqr (e.g. Eracleous et al. 1994).
It is unclear whether lowering $`r_{\mathrm{mag}}`$ does lead to a larger $`f`$: this would happen if $`\mathrm{\Delta }r`$ was constant as $`r_{\mathrm{mag}}`$ decreased, but not if it was a fixed fraction of $`r`$ (the latter would give the same fractional change in $`B`$). Since velocities and therefore velocity dispersions are greater for smaller $`r_{\mathrm{mag}}`$, though, one might expect a relatively larger $`\mathrm{\Delta }r`$, but the theory of this topic is sparse.
One problem with the above idea is that one of the stars showing a double-hump, and thus needing a higher $`f`$, is V405 Aur, where the measured black-body $`f`$ is only 10<sup>-5</sup> (Haberl & Motch 1995). Thus the picture is unclear and to make progress we need further detailed investigations of the individual stars.
### 4.3 X-ray spectroscopy
Potentially, X-ray spectra resolved over spin phase should sort out the pulse-formation mechanisms. However, we are finding that even with the spectral resolution of ASCA, this is not necessarily so. The reason is that accretion curtains are found to be patchy, multi-phase absorbers complete with electron scattering, giving a much flatter energy signature than simple photoelectric absorption (e.g. Ishida 1991; Hellier et al. 1996). Also, since the bottoms of accretion columns occult most readily, and are cooler than the shock, occultation can produce the same deeper-at-low-energies signature as absorption. In at least two cases \[RX 1238–38 (Hellier et al. 1998) and EX Hya (Allan et al. 1998)\] we have been unable to distinguish the two processes with ASCA data. Observations with a higher S/N and greater energy range (e.g. with XMM) will be required.
## 5 Accretion columns
We have a good theoretical understanding of the temperature and density profile of the accretion column for a given white dwarf mass (e.g. Aizu 1973), and we have increasingly good X-ray spectra (from Ginga, ASCA, RXTE and soon XMM). Can we combine the two to probe accretion columns?
### 5.1 White dwarf masses
In principle we can use a temperature/density profile, together with plasma codes, to construct model X-ray spectra, and deduce the white dwarf masses by fitting to observations (e.g. Cropper, Ramsay & Wu 1998, see also this volume; Beardmore & Osborne 1999). This is easier in IPs than in AM Hers since we can neglect cyclotron cooling, and consequently we can assume that electrons and ions have the same temperature (e.g. Imamura & Durisen 1983).
There are, though, still considerable uncertainties, including clumpiness in the flow; opacity affecting the spectral shape (and if this includes electron scattering of $`\tau \gtrsim 1`$ it will affect all energies); the contribution of X-rays reflected by the white dwarf surface; the uncertainty in $`\dot{M}/f`$ (which sets the shock height); uncertainty in the shape of the accretion region (which affects path lengths); the need for a low-temperature cutoff where the column goes optically thick as it merges with the white dwarf; and the mass-radius relation of a hot white dwarf possessing an accreted hydrogen envelope.
Given this it is perhaps not surprising that there are still discrepancies between masses derived by this method and those from other methods (the X-ray spectra tend to give higher masses — see Ramsay et al. 1998; Cropper et al., this volume). However, the estimates by different methods are currently converging, and with some more tuning the X-ray spectra could turn into powerful tools for deriving the white dwarf masses of a whole class.
One factor I haven’t seen discussed is that most treatments assume that the accretion flow falls into the white dwarf from infinity. In an AM Her, with quasi-radial infall from the $`L1`$ point, this is a fair assumption, but in an IP, assuming disruption of the disc at $`r_{\mathrm{mag}}`$, it is unclear how much, if any, of the Keplerian velocity translates into speed down field lines. If we assume freefall only from $`r_{\mathrm{mag}}\sim 10R_{\mathrm{wd}}`$ then the kinetic energy at the white dwarf is reduced by 10%. Further, the enforced co-rotation with rapidly-spinning field lines will produce a centrifugal force which further slows the infall. A simple calculation, including a centrifugal force term in the freefall, shows that for a fast rotator (in equilibrium at $`r_{\mathrm{mag}}\sim 7R_{\mathrm{wd}}`$) the two effects combine to a $`\sim `$21% reduction in kinetic energy. This is, for example, sufficient to remove the remaining discrepancy between estimates of XY Ari’s white dwarf mass by this method compared to methods using its eclipse (see Cropper et al., this volume).
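This estimate is easy to reproduce. The sketch below (Python) is our own illustration, not the original calculation: it assumes radial infall along field lines that corotate with the white dwarf, with the spin in equilibrium (corotation rate equal to the Keplerian rate at $`r_{\mathrm{mag}}`$), so that the fractional result is independent of the assumed white dwarf mass and radius.

```python
# Sketch of the kinetic-energy estimate above (our own illustration, not the
# original calculation).  Freefall starts from rest at r_mag along field lines
# forced to corotate with the white dwarf; spin equilibrium is assumed, i.e.
# the corotation rate equals the Keplerian rate at r_mag.
def ke_fraction(r_mag_over_rwd, include_centrifugal=True):
    """Kinetic energy per unit mass at the white-dwarf surface, as a fraction
    of the freefall-from-infinity value GM/R_wd."""
    x = 1.0 / r_mag_over_rwd            # R_wd / r_mag
    frac = 1.0 - x                      # freefall only from r_mag
    if include_centrifugal:
        # corotation (Omega^2 = GM/r_mag^3) removes 0.5*Omega^2*(r_mag^2 - R_wd^2)
        frac -= 0.5 * x * (1.0 - x**2)
    return frac

print(1.0 - ke_fraction(10, include_centrifugal=False))   # ~0.10: the 10% reduction
print(1.0 - ke_fraction(7))                                # ~0.21: the ~21% reduction
```

With these assumptions, freefall from 10 $`R_{\mathrm{wd}}`$ alone gives the 10% reduction quoted above, and adding the centrifugal term at 7 $`R_{\mathrm{wd}}`$ gives roughly 21%.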
### 5.2 Broadened lines
Another potential probe of the accretion column occurs through the discovery of broadened iron K$`\alpha `$ lines in the X-ray spectra of some IPs (Hellier et al. 1998; Fig. 7). We showed that Doppler broadening was an insufficient explanation, and concluded that Compton scattering in the post-shock region was broadening the lines. We suggested the picture drawn schematically in Fig. 8, where the hot, optically-thin, upper region radiates little line emission. In the dense, optically-thick, lower region, line photons are destroyed by multiple scatterings. In the transition region, however, resonant trapping of K$`\alpha `$ photons allows them to be Compton-scattered once and only once as they emerge. The degree of broadening is thus a probe of the column temperature at the transition to optical thickness. This could explain why some IPs show broad lines while others don’t: if the transition to optical thickness occurs at temperatures too low for significant iron K$`\alpha `$ emission then only narrow lines will be seen.
### 5.3 Soft IPs
Owing to ROSAT we know of three IPs (PQ Gem, V405 Aur & UU Col; Mason et al. 1992; Haberl et al. 1994; Burwitz et al. 1996) that show soft black-body components similar to those in AM Her stars (a fourth star, RX 1914+24, I now class as an AM Her, see Cropper et al. 1998a).
People have often supposed that IPs have larger $`f`$ values than AM Hers, since IPs accrete from a disc over a range of azimuth, whereas AM Hers accrete from a narrow stream. The soft components in IPs, however, are convincing me otherwise. Firstly, they have measured temperatures (38–57 eV) in the same range as AM Hers, whereas larger areas would lead to lower temperatures. Secondly, the areas derived from their fluxes (e.g. $`10^{-5}`$ for V405 Aur at 300 pc; Haberl & Motch 1995) are no higher than those found for AM Hers (see Stockman 1995).
Why is this? In an AM Her, the radial infall of the stream helps it to punch a hole in the magnetosphere. It breaks up into blobs which become magnetically controlled at different field strengths (and hence different radii) depending on their size and density. Thus the ballistic-to-magnetic transition occurs over a large volume, corresponding to a large footprint on the white dwarf. This appears to have been observed in spectroscopy of the accretion stream (Sohl & Wynn, this volume). In contrast, passage through a disc in an IP will destroy blobs, and during disc-disruption the tangential Keplerian velocity won’t help the material to penetrate the field. The field lines are, presumably, scooping up material at a near-constant rate as they rotate, and since diffusion times are longer than spin cycles, this is likely to occur at a constant radius. Although the theory of this is uncertain, it could lead to a homogeneous, rather than blobby, accretion flow. The small $`\mathrm{\Delta }r`$ would map to a smaller footprint on the white dwarf, compensating for the larger range of azimuth.
Two further observations support this idea. Firstly, AM Hers can show high-amplitude, erratic, soft-X-ray lightcurves, as individual blobs hit the photosphere (Heise et al. 1985); nothing as dramatic has been seen in IPs. Secondly, whereas the soft component is widespread in AM Hers, and thought to be mainly due to blobs penetrating the photosphere and thermalising, only 3 of 23 IPs show comparable soft components, suggesting a general absence of blobby accretion.
So why do those 3 show soft components? We expect some soft emission from irradiated areas around the accretion region in all IPs, but what does it correlate with? With only 3 systems this is hard to answer, but it doesn’t seem to be field strength: PQ Gem and RX 1712–24 both have 10–20 MG fields but the former shows a soft component and the latter doesn’t. Nor does it appear to be absorption: that to RX 1712–24 is no higher than that to V405 Aur (Motch & Haberl 1995). Inclination? Tricky to say, since inclination estimates are only reliable for eclipsing systems. Accretion rate? These estimates are always unreliable! Could the heated regions be veiled by the accretion curtain? Perhaps, but in simple pictures the column-base at one of the poles should always be seen at some spin phase. Thus, this topic remains unsolved.
## 6 Outbursts
Despite the fact that, even up to 1998, authors have stated that outbursts are discordant with an IP classification, at least 6 confirmed IPs have shown outbursts (e.g. Warner 1996; Hellier et al. 1997). I want to publicise the fact that there appear to be two types of outburst. YY Dra, GK Per & XY Ari all show ‘normal’ dwarf-nova-like outbursts, with any differences being explained by their occurrence in a truncated disc (e.g. Angelini & Verbunt 1989). The other three, EX Hya, TV Col & V1223 Sgr, show shorter, lower-amplitude outbursts with a range of observational properties at odds with normal behaviour (Hellier et al. 1997 and references therein). It appears that the normal instability is suppressed and replaced by a different instability (see Warner 1996). We don’t know what this is; secondary-star instabilities have been suggested, but it remains a topic in need of attention by theorists. Can the outbursts in these 3 stars be explained within the standard disc-instability paradigm?
## References
Aizu K., 1973, Prog. Theo. Phys., 19, 1181
Allan A., Hellier C., Beardmore A.P., 1998, MNRAS, 295, 167
Allan A., Horne K., Hellier C., et al., 1996, MNRAS, 279, 1345
Angelini L., Verbunt F., 1989, MNRAS, 238, 697
Beardmore A.P., Mukai K., Norton A.J. et al., 1998, MNRAS, 297, 337
Beardmore A.P., Osborne J.P., 1999, MNRAS, submitted
Buckley D.A.H. et al. 1995, MNRAS, 275, 1028
Buckley D.A.H. et al. 1997, MNRAS, 287, 117
Buckley D.A.H., Cropper M., Ramsay G. et al., 1998, MNRAS, 299, 83
Buckley D.A.H., Tuohy I.R., 1989, ApJ, 344, 376
Buckley D.A.H., Sullivan D.J., 1992, in ‘Viña del Mar Workshop on Cataclysmic Variable stars’, ed N. Vogt, ASP Conf. Ser. 29, p387
Burwitz V., Reinsch K., Beuermann K., Thomas H.-C., 1996, A&A, 310, L25
Chanmugam G., Frank J., King A.R., Lasota J.-P., 1990, ApJ, 350, 13
Cropper M. et al. 1998a, MNRAS, 293, 57
Cropper M., Ramsay G., Wu K., 1998, MNRAS, 293, 222
Eracleous M. et al. 1994, ApJ, 433, 313
Ghosh P., Lamb F.K., 1979, ApJ, 234, 296
Haberl F., Motch C., 1995, A&A, 297, L37
Haberl F., Thorstensen J.R., Motch C. et al., 1994, A&A, 291, 171
Heise J. et al. 1985, A&A, 148, L14
Hellier C., 1991, MNRAS, 251, 693
Hellier C., 1992, MNRAS, 258, 578
Hellier C., 1993, MNRAS, 265, L35
Hellier C., 1995, in ‘Cape workshop on magnetic cataclysmic variables’, eds, Buckley D.A.H., Warner B., ASP Conf. Ser. 85, p185
Hellier C., 1996, in ‘CVs and related objects’, Kluwer, p143
Hellier C., 1997a, MNRAS, 288, 817
Hellier C., 1997b, MNRAS, 291, 71
Hellier C., 1998, Adv. Space Res., 22(7), 973
Hellier C., 1999, ApJS, in press
Hellier C., Beardmore A.P., Buckley D.A.H., 1998, MNRAS, 299, 851
Hellier C., Cropper M., Mason K.O., 1991, MNRAS, 248, 233
Hellier C., Livio M., 1994, ApJL, 424, L57
Hellier C., Mukai K., Beardmore A.P., 1997, MNRAS, 292, 397
Hellier C., Mukai K., Ishida M., Fujimoto R., 1996, MNRAS, 280, 877
Hellier C., Mukai K., Osborne J.P., 1998, MNRAS, 297, 525
Imamura J.N., Durisen R.H., 1983, ApJ, 268, 291
Ishida M., 1991, PhD thesis, University of Tokyo
Kim Y., Beuermann K., 1995, A&A, 298, 165
Kim Y., Beuermann K., 1996, A&A, 307, 824
Lamb D.Q., Melia F., 1988, in ‘Polarised radiation of circumstellar origin’, p45
Mason K.O., Watson M.G. et al., 1992, MNRAS, 258, 749
Mason K.O., 1997, MNRAS, 285, 493
Motch C. et al., 1996, A&A, 307, 459
Mukai K., Ishida M., Osborne J.P., 1994, PASJ, 46, L87
Murray J.R., Armitage P.J., Ferrario L. et al., 1999, MNRAS, 302, 189
Norton A.J., Beardmore A.P., Allan A., Hellier C., 1999, A&A, submitted
Norton A.J., Hellier C., Beardmore A.P. et al., 1997, MNRAS, 289, 362
O’Donoghue D., Koen C., Kilkenny D., 1996, MNRAS, 278, 1075
Patterson J., 1994, PASP, 106, 209
Patterson J. et al. 1998, PASP, 110, 415
Potter S.B., Cropper M., Mason K.O. et al., 1997, MNRAS, 285, 82
Ramsay G., Cropper M., Hellier C., Wu K., 1998, MNRAS, 297, 1269
Rosen S.R., 1992, MNRAS, 254, 493
Still M.D., Duck S.R., Marsh T.R., 1998, MNRAS, 299, 759
Stockman H.S., 1995, in ‘Cape workshop on magnetic cataclysmic variables’, eds, Buckley D.A.H., Warner B., ASP Conf. Ser. 85, p153
Tovmassian G. H. et al., 1998, A&A, 335, 227
Väth H., 1997, A&A, 317, 476
Väth H., Chanmugam G., Frank J., 1996, ApJ, 457, 407
Warner B., 1995, Cataclysmic variable stars, Cambridge University Press
Warner B., 1996, Astr. Space Sci., 241, 263
Wynn G.A., King A.R., 1992, MNRAS, 255, 83
Wynn G.A., King A.R., 1995, MNRAS, 275, 9
# Characterization and control of small-world networks
## Abstract
Recently Watts and Strogatz have given an interesting model of small-world networks. Here we concretise the concept of a “far away” connection in a network by defining a far edge. Our definition is algorithmic and independent of underlying topology of the network. We show that it is possible to control spread of an epidemic by using the knowledge of far edges. We also suggest a model for better advertisement using the far edges. Our findings indicate that the number of far edges can be a good intrinsic parameter to characterize small-world phenomena.
The properties of very large networks are mainly determined by the way the connections between the vertices are made. At one extreme are the regular networks where only the “local” vertices are inter-connected and the “far away” vertices are not connected, while at the other extreme are the random networks where the vertices are connected at random. The regular networks display a high degree of local clustering and the average distance between vertices is quite large. On the other hand, the random networks show negligible local clustering and the average distance between vertices is quite small. The small-world networks have intermediate connectivity properties but exhibit a high degree of clustering as in regular networks and small average distance between vertices as in random networks. A very interesting model for small-world networks was recently proposed by Watts and Strogatz . They found that a regular network acquires the properties of a small-world network with only a very small fraction of connections or edges (about $`1\%`$) rewired to “far away” vertices. They demonstrated that several diverse phenomena like neural networks , power grids and collaboration graphs of film actors can be modeled using small-world networks. Also, the spread of an epidemic is much faster in small-world networks than in regular networks, and is close to that in random networks.
In this paper we suggest a possible way of characterizing small-world networks. The basic ingredients of small-world networks are the “far away” connections. We introduce a notion of far edges in a network to identify these “far away” connections. Our definition of a far edge is independent of any underlying topology for a network and depends only on the way connections or edges are made. We claim that the rapid spread of an epidemic in small-world networks, as found by Watts and Strogatz, is due to these far edges. This allows us to propose a mechanism to control the epidemic using the same far edges which are responsible for the rapid spread. We further demonstrate the utility of our notion of far edges by giving a better method of advertisement.
Consider a graph (network) with $`n`$ vertices and $`E`$ edges. Let $`𝒩_{ij}^\nu `$ denote the number of distinct paths of length $`\nu `$ between the vertices $`i`$ and $`j`$. For a simple graph, $`𝒩_{ij}^1`$ is one if there is an edge between vertices $`i`$ and $`j`$ and zero otherwise. We now concretise the idea of “far away” connections by defining a far edge. Let an edge $`e_{ij}`$ between vertices $`i`$ and $`j`$ be a far edge of order $`\mu `$ if it is an edge for which $`𝒩_{ij}^{\mu +1}=0`$ and $`𝒩_{ij}^l\ne 0`$ for all $`l\le \mu `$.
Fig. 1 shows an example of a far edge of order one. We note that none of the edges in a completely connected graph are far edges, while all edges in a tree are far edges of order one. Henceforth we will assume that a far edge has order one unless stated otherwise.
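In practice, a far edge of order one is easy to identify: the two endpoints of the edge must share no common neighbour, since a common neighbour would provide a path of length two. A minimal sketch of this test (Python; the graph is assumed to be given as an adjacency-set dictionary, and the function name is ours):

```python
# Sketch: list the far edges of order one of an undirected simple graph.
# An edge (i, j) is far of order one if i and j have no common neighbour,
# i.e. there is no path of length two between them.
def far_edges_order_one(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    far = []
    for i in adj:
        for j in adj[i]:
            if i < j and not (adj[i] & adj[j]):    # no common neighbour
                far.append((i, j))
    return far

# Example: a triangle 0-1-2 with a pendant vertex 3; only edge (2, 3) is far.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(far_edges_order_one(adj))                     # [(2, 3)]
```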
To generate small-world networks and also other types of networks we follow the procedure given in Ref. . We start with a regular network consisting of a ring of $`n`$ vertices with edges connecting each vertex to its $`k`$ nearest neighbours. Each edge is rewired with probability $`p`$ avoiding multiple edges. The $`p=1`$ case corresponds to a random network. The networks obtained with $`p\approx 0.01`$ correspond to small-world networks .
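A minimal sketch of this construction (Python, using the standard `random` module; the function signature and the assumption of even $`k`$ are our choices):

```python
import random

# Sketch of the rewiring procedure: a ring of n vertices, each joined to its
# k nearest neighbours (k even), with every edge rewired with probability p
# to a randomly chosen vertex, avoiding self-loops and multiple edges.
def small_world(n, k, p, seed=None):
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(n):                          # regular ring lattice
        for step in range(1, k // 2 + 1):
            u = (v + step) % n
            adj[v].add(u)
            adj[u].add(v)
    for v in range(n):                          # rewiring pass
        for step in range(1, k // 2 + 1):
            u = (v + step) % n
            if u in adj[v] and rng.random() < p:
                w = rng.randrange(n)
                if w != v and w not in adj[v]:  # avoid loops and multi-edges
                    adj[v].remove(u)
                    adj[u].remove(v)
                    adj[v].add(w)
                    adj[w].add(v)
    return adj

net = small_world(1000, 10, 0.01, seed=1)       # roughly the small-world regime
```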
We have generated several networks from the regular ($`p=0`$) to the random ($`p=1`$) case. For each network we calculate the average path length $`L(p)`$ and clustering coefficient $`C(p)`$. The quantity $`L(p)`$ denotes the average length of the shortest path between two vertices, and $`C(p)`$ denotes the average of $`C_v`$ over all the vertices $`v`$, where $`C_v`$ is the number of edges connecting the neighbours of $`v`$ normalized with respect to the maximum number of possible edges between these neighbours . Next we determine the far edges in these networks. Let $`\mathcal{F}`$ denote the ratio of the number of far edges to the total number of edges. We find that initially, to a good approximation, $`\mathcal{F}`$ is equal to $`p`$ for $`p\lesssim 0.1`$, and then it increases slowly till it saturates at a value of about $`0.2`$ for $`p=1`$. It turns out that the number of far edges of order higher than one is negligible.
In Fig. 2 we plot $`C(\mathcal{F})/C(0)`$ and $`L(\mathcal{F})/L(0)`$ as functions of $`\mathcal{F}`$. This figure is similar in nature to the plot of $`C(p)/C(0)`$ and $`L(p)/L(0)`$ as functions of $`p`$ (Fig. 2 of Ref. ). The small-world networks can be identified as those with $`C(p)/C(0)\approx 1`$ and $`L(p)/L(0)\approx L(1)/L(0)`$. From Fig. 2 we see that this corresponds to $`\mathcal{F}\approx 0.01`$. Thus $`\mathcal{F}`$ can be used as a parameter to characterize networks which interpolate between regular and random cases. We note that $`\mathcal{F}`$ is an intrinsic quantity and does not depend on the procedure of generating networks and hence should prove to be a better parameter than $`p`$.
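Both diagnostics follow directly from the adjacency structure. A sketch (Python, using breadth-first search for the shortest paths and assuming a connected network such as the one produced by the rewiring sketch above; the function names are ours):

```python
from collections import deque

def clustering_coefficient(adj):
    """Average over vertices of (edges among a vertex's neighbours) divided by
    the maximum possible number of such edges."""
    total = 0.0
    for v, nbrs in adj.items():
        kv = len(nbrs)
        if kv < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += links / (kv * (kv - 1) / 2)
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path length over all pairs (assumes the graph is connected)."""
    total, pairs = 0, 0
    for s in adj:                      # breadth-first search from every vertex
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs
```

Evaluating these two functions on networks generated at $`p=0`$ and at the $`p`$ of interest gives the normalized ratios $`C(p)/C(0)`$ and $`L(p)/L(0)`$ discussed above.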
To further investigate the importance of far edges, we consider the problem of spread of an epidemic. Consider an epidemic starting from a random vertex (seed). We assume that at each time step all the neighbours of infected vertices are affected with probability one, which is the most infectious case, and the vertices which are already affected die and play no further role in the spread of the epidemic. Here, neighbours of a given vertex means all the vertices which are joined to it by edges. As found by Watts and Strogatz , the spread of an epidemic in small-world networks is almost as fast as that in the random case. We propose that the mechanism for the rapid spread of epidemic in small-world networks is due to the traversal of the disease along the far edges. Each such traversal opens a virgin area for the spread of epidemic leading to a rapid growth.
Clearly if the far edges are responsible for the rapid growth of epidemic then we should be able to effectively control the spread by preventing the traversal of epidemic along the far edges. To test this hypothesis, we propose the following mechanism to control an epidemic. We assume that we have sufficient knowledge of the network and we have identified all the far edges. We note that identification of far edges requires only the knowledge of vertices and edges and hence should be possible in many practical situations. Let $`\tau `$ denote the time steps elapsed between the beginning of the epidemic and its detection. Let $`m`$ denote the number of vertices that can be immunized at each time step. To block a far edge we first immunize one of the two vertices connected by this far edge. Immunization is carried out by first blocking all the far edges and then immunizing at random. If the number of far edges is greater than $`m`$ then blocking all the far edges will take more than one time step.
In Fig. 3 we show the fraction of vertices affected as a function of time steps for a small-world network. Curve (a) shows the uncontrolled spread of the epidemic. Curves (d) and (g) show the spread of epidemic with the control method suggested above for $`\tau =7`$ and $`2`$ respectively. For comparison we show, by curves (c) and (f), the epidemic with only random immunization for $`\tau =7`$ and $`2`$ respectively. It is obvious that the far edge control mechanism proposed here is very effective. For larger $`\tau `$ some of the far edges are already traversed by the epidemic, decreasing the efficiency of our control mechanism. Comparing the far edge immunization and the random immunization, we find that the far edge immunization decreases the rate of spread of epidemic more effectively but takes longer time for completely stopping the spread (See Fig. 3, curves (d) and (g)). Further, to test the effectiveness of our method we compare the results with another method of immunization. We order the vertices by their degree. Immunization is carried out by starting with the vertex with the largest degree and then going down the degree. The results for $`\tau =7`$ and $`2`$ are shown as curves (b) and (e) in Fig. 3 respectively. We note that results for immunization using degree are similar to that of the random immunization.
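The spreading and immunization experiment described above can be sketched as follows (Python; it reuses the adjacency-dictionary representation of the earlier sketches, vertices are assumed to be labelled 0 to $`n-1`$, and all names and defaults are ours):

```python
import random

def epidemic(adj, seed_vertex, protect=None, tau=2, m=10, rng=None):
    """SI-type spread: every neighbour of an infected vertex is infected at the
    next time step, and infected vertices then die.  From step tau onwards, up
    to m vertices per step are immunized, taken first from the list `protect`
    (e.g. one endpoint of every far edge) and then at random.  Returns the
    cumulative number of affected vertices after each step."""
    rng = rng or random.Random(0)
    queue_to_protect = list(protect or [])
    immune, dead = set(), set()
    infected = {seed_vertex}
    affected = []
    step = 0
    while infected:
        step += 1
        if step >= tau:                              # epidemic has been detected
            targets = queue_to_protect[:m]
            queue_to_protect = queue_to_protect[m:]
            while len(targets) < m:                  # fill up with random choices
                targets.append(rng.randrange(len(adj)))
            immune.update(t for t in targets if t not in infected)
        newly = {u for v in infected for u in adj[v]
                 if u not in dead and u not in immune and u not in infected}
        dead |= infected
        infected = newly
        affected.append(len(dead))
    return affected
```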
Let $`d`$ denote the asymptotic difference between the number of affected vertices in random and far edge immunization. We plot $`d`$ as a function of $`m`$ for three different values of $`\mathcal{F}`$ (or $`p`$) in Fig. 4. The plot shows that the far edge immunization is most effective when $`m`$ is about half the number of far edges. The reason for the decrease of $`d`$ for large $`m`$ is that the probability that random immunization blocks a far edge keeps on increasing as $`m`$ increases, thereby decreasing the difference between the two methods. The plot of $`d`$ as a function of $`\mathcal{F}`$ for different values of $`m`$ is shown in Fig. 5. The figure shows that the far edge immunization is more effective for small-world networks. Also from Figs. 4 and 5 it is clear that the far edge immunization gives a substantial benefit in terms of the number of unaffected vertices in the small-world case, and this number can be as large as $`410`$, which is more than $`40\%`$ of the total number of vertices.
Now, we consider an interesting model of advertisement. Let $`r`$ be the number of vertices or centers from where a product is advertised. The information about the product spreads by word of mouth to the neighbours with the probability $`q_t`$, where $`t`$ is the time elapsed since the initial advertisement. We compare the results of two different ways of choosing the initial centers. In one way the centers are chosen at random, and in the other each center is chosen as one of the vertices of a far edge. Fig. 6 shows the number of people informed about the product as a function of $`t`$. It is clear that the choice of centers using far edges has a definite advantage over the random choice.
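A corresponding sketch of the word-of-mouth model (Python; the values of $`q_t`$ follow the caption of Fig. 6, the far-edge centers are simply one endpoint of each far edge, and the function name and defaults are ours):

```python
import random

def advertise(adj, centers, steps=20, q1=0.8, q_later=0.18, rng=None):
    """Word-of-mouth spread: at time step t every informed vertex passes the
    information to each uninformed neighbour with probability q_t (q1 at the
    first step, q_later afterwards).  Returns the cumulative number of people
    informed after each step."""
    rng = rng or random.Random(0)
    informed = set(centers)
    history = []
    for t in range(1, steps + 1):
        q = q1 if t == 1 else q_later
        newly = {u for v in informed for u in adj[v]
                 if u not in informed and rng.random() < q}
        informed |= newly
        history.append(len(informed))
    return history
```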
To conclude we have introduced the concept of far edges in networks. Our definition of a far edge is in accordance with the intuitive idea of a “far away” connection between two vertices. The advantage of our definition of far edge is that it is independent of the underlying topology of the network. Also the definition is algorithmic in nature, and allows the determination of far edges only from the knowledge of vertices and edges. We have also applied the idea of far edges to the networks which are not generated by the algorithm given in Ref. and arrived at similar conclusions .
We have demonstrated the use of far edges in the control of the spread of an epidemic and the advertisement of products. Our simulations show that the far edges are indeed important in the spread of epidemic, particularly in the small-world networks. We have shown that the knowledge of far edges can be fruitfully utilized to control the spread of epidemic and better advertisement. Our results strongly indicate that the far edges are the key elements responsible for the special properties of small world phenomena.
Figure Captions:
An example of a network consisting of a far edge. The edge between vertices ‘a’ and ‘b’ is a far edge of order one.
The graph of $`C(\mathcal{F})/C(0)`$ and $`L(\mathcal{F})/L(0)`$ as a function of $`\mathcal{F}`$, where $`C`$ is the clustering coefficient, $`L`$ is the average path length and $`\mathcal{F}`$ is the ratio of the number of far edges to the total number of edges. This figure is similar in nature to the plot of $`C(p)/C(0)`$ and $`L(p)/L(0)`$ as functions of $`p`$. The small-world networks lie around $`\mathcal{F}=0.01`$.
The graphs of the fraction of vertices affected as a function of time steps. The curve (a) is the epidemic spread without immunization; the curves (c) and (f) represent the spread when the random immunization is applied (see text) for $`\tau =7`$ and $`2`$ respectively; the curves (b) and (e) show the spread if the immunization is carried out for the vertices with the highest degree first and then in descending order of degree, for $`\tau =7`$ and $`2`$ respectively; and the curves (d) and (g) are the spread when the far edge immunization is used for $`\tau =7`$ and $`2`$ respectively. The simulations are carried out on a small-world network of 1000 vertices and 10000 edges. The plotted results are quantities averaged over 500 epidemic seeds.
The graph of the asymptotic difference between the number of affected vertices in random and far edge immunization, $`d`$, as a function of the number of vertices immunized in one time step, $`m`$. The three curves (a), (b) and (c) are for $`\mathcal{F}=0.0022`$, $`0.0084`$ and $`0.0162`$ respectively. The curve (b) corresponds to a small-world network. The other parameters are as in Fig. 3.
The graph of $`d`$ as a function of $`\mathcal{F}`$. The three curves (a), (b) and (c) are plotted for $`m=30`$, $`10`$ and $`80`$ respectively. The figure shows that the immunization method suggested here is most effective in small-world networks.
The graph of the number of people informed as a function of $`t`$. The curves (a) and (b) show the result for far edge centers and random centers respectively. The simulation is carried out on a small-world network with 1000 vertices and 10000 edges. The initial advertisement is done from five centers. The probability function $`q_t`$ is chosen as $`q_1=0.8`$ and $`q_i=0.18`$, where $`i\ge 2`$.
# On the Short-Time Compositional Stability of Periodic Multilayers
## I Introduction
Modern techniques of thin film deposition permit the preparation of multilayers with nearly arbitrary concentration profiles. The knowledge of the stability of artificial multilayers at elevated temperatures is of great practical interest. With increasing atomic mobility, compositional changes occur where the diffusion distance is determined by the mobility and the available time. In the present paper, the case of coherent periodic multilayers consisting of two immiscible components A and B is considered. Accordingly, competing driving forces for compositional changes in these multilayers are the reduction of the energy of mixing of the two components and the reduction of the interfacial energy of interphase boundaries.
To evaluate the thermal stability of multilayers, we investigate in the following the early stage of compositional changes perpendicular to the individual layers within a one-dimensional model ignoring lateral perturbations of the layered structure as well as boundary effects at the multilayer surface and the interface to the substrate. The neglect of boundary effects is justified as long as the characteristic length of atomic diffusion for the considered time scale is small compared to the total multilayer thickness. The compositional evolution is studied within the framework of the nonlinear Cahn-Hilliard diffusion equation . Correspondingly, the multilayer is described by a smooth concentration profile where individual layers are separated by diffuse interphase boundaries. The continuum description seems questionable in the limiting case of very thin layers of a few monolayers only. However, many predictions of the continuum approach agree qualitatively with those of a detailed analysis of an appropriate lattice model by Hillert .
The composition in multilayers evolves quite differently, depending on the initial concentration profile. This is illustrated by the three examples in Fig. 1. The as-prepared multilayers in Fig. 1 were assumed to be periodic A/B layer stacks of pure individual layers, with the exception of case (c) which exhibits a small thickness perturbation. The curves show the concentration of component B as a function of position. They were obtained as numerical solution of the Cahn-Hilliard diffusion equation. Depending on the initial layer thicknesses, different cases can be distinguished: (a) dissolution of very thin layers despite the immiscibility of the components (phase separation can occur only on a larger length scale), (b) relaxation of a periodic structure to a stationary state with smooth concentration profile, and (c) rapid smoothening of the profile followed by slow thinning of thinner layers up to their complete dissolution. The main driving force for the composition changes in Fig. 1 is the minimisation of interface energy. This happens even at the expense of volume energy so that the minimal and maximal values of the concentration profile generally differ from the concentrations of the corresponding bulk phases. A series of experimental findings reported recently seems to be related to these peculiarities.
During mechanical alloying, the formation of nonequilibrium supersaturated phases of immiscible components such as Ag–Cu and Co–Cu was observed. Due to the small layer thicknesses of the lamellar structure which arises during ball milling, a partial intermixing of the two components could be energetically favourable. Similarly, an enhanced solution of carbon in nickel layers was detected in Ni/C multilayers prepared by pulsed laser evaporation with individual layer thicknesses of only a few nanometers . Another surprising observation is the formation of a mixed Co–Cu phase during annealing of Co/Cu multilayers despite the immiscibility of Co and Cu . The metastable mixed phase evidently formed due to the large excess of interface energy in the multilayer. A strong intermixing was also found during deposition of a few monolayers of Ni onto Au . In this case, besides interface energy, the large elastic energy of the strained Ni layer is an additional driving force for intermixing. For a better understanding of these experimental findings, the present theoretical work deals with the effect of the high portion of interface energy on the early composition evolution in nanoscale multilayers.
The simulations in Fig. 1 as well as numerical and analytical investigations by other authors reveal that concentration profiles evolve in characteristic stages: (i) the relaxation to layered quasi-stationary states or the complete vanishing (dissolution) of very thin layers takes place comparatively rapidly; (ii) on a longer time scale, a slow ripening process involving diffusion between distant layers occurs, i. e. a thinning of the thinnest layers and a corresponding thickening of the thicker ones. In the following, the slow ripening process is referred to as long-time composition evolution, whereas the relaxation of an arbitrary periodic concentration profile to a stationary solution of the Cahn-Hilliard diffusion equation is referred to as short-time evolution (Fig. 1b). Also, the rapid dissolution of layers as shown in Fig. 1a is considered as a particular case of the short-time evolution.
The aim of the present work is to analyse the conditions under which either a rapid dissolution of thin layers occurs initially (Fig. 1a) or a stationary periodic concentration profile evolves (Fig. 1b). To this end, we focus on one-dimensional stationary solutions of the Cahn-Hilliard diffusion equation which are characterised by the multilayer period $`d`$ and the average composition $`\overline{c}`$ of the multilayer. These two parameters are usually controlled by the multilayer preparation and are conserved during relaxation towards quasi-stationary states. From the following analysis, a $`\overline{c}`$-$`d`$ diagram results which shows the existence region as well as the stability properties of the stationary solutions. In the present paper, a periodic solution is called (globally) stable if its energy is smaller than those of all other concentration profiles with the same periodicity $`d`$ and average composition $`\overline{c}`$. All other stationary solutions are metastable or unstable with respect to perturbations conserving the periodicity and average composition. As outlined by Langer , stationary periodic solutions are always unstable against perturbations of the periodicity (cf. also Fig. 1c). However, as mentioned above, these periodicity perturbations develop in general on a longer time scale and are not considered in this work. Also, small lateral composition perturbations as well as a roughening of interfaces between strained individual layers are not analysed here because it is expected that such perturbations develop on a longer time scale.
The paper is organised as follows. After the description of the model in Sect. II, stationary solutions of the Cahn-Hilliard diffusion equation are considered in Sect. III. Sect. IV deals with a specific model for the Gibbs free energy which allows an analytical calculation of stationary concentration profiles. In Sect. V, the stability of these concentration profiles is investigated. Finally, the results are discussed and summarised.
## II Model
The Gibbs free energy density $`f(c)`$ of a binary A-B system as a function of the uniform concentration $`c`$ (mole fraction) of component B is assumed to exhibit two minima. The equilibrium concentrations, $`\alpha `$ and $`\beta `$, of large coexisting phase regions (strictly two half-spaces) are determined by the common tangent construction (Fig. 2). In the present work, the case of multilayers with planar interfaces is considered. Following Cahn and Hilliard , the free energy per unit area of a system with concentration $`c(x)`$ varying in one dimension is described by
$$F[c]=\int dx\left[f(c)+\kappa \left(\frac{dc}{dx}\right)^2\right].$$
(1)
The second term on the right-hand-side of (1) represents the energy contribution due to a concentration gradient where $`\kappa `$ is the gradient energy coefficient.
The interdiffusion flux in the system is given by $`j(x)=-\stackrel{~}{M}\,\partial (\delta F/\delta c)/\partial x`$ where $`\stackrel{~}{M}`$ is the atomic mobility. Together with the continuity equation $`\partial c/\partial t+\mathrm{\Omega }\,\partial j/\partial x=0`$, the following nonlinear diffusion equation results
$$\frac{\partial c}{\partial t}=M\frac{\partial ^2}{\partial x^2}\left(\frac{\delta F[c]}{\delta c(x)}\right)=M\frac{\partial ^2}{\partial x^2}\left[f^{\prime }(c)-2\kappa \frac{\partial ^2c}{\partial x^2}\right]$$
(2)
where $`M\equiv \mathrm{\Omega }\stackrel{~}{M}`$ with $`\mathrm{\Omega }`$ the atomic volume. For simplicity, the atomic volume of the two components has been assumed to be equal and the composition dependence of $`M`$ has been omitted. Starting from any initial concentration profile, the further evolution can be calculated numerically from (2) (see e. g. Fig. 1). However, in view of the great practical importance of quasi-stationary concentration profiles, we will analyse the stationary solutions of the Cahn-Hilliard diffusion equation (2) more systematically in the following.
## III Stationary solutions
Equilibrium concentration profiles are determined by the extrema of the free energy under the constraint of particle conservation. This leads to the variational problem
$$\frac{\delta }{\delta c(x)}\left(F[c]-\mu \int dx\,c(x)\right)=0$$
(3)
with the result
$$f^{\prime }(c)-2\kappa \frac{d^2c}{dx^2}-\mu =0$$
(4)
(the prime denotes the derivative with respect to $`c`$). Comparison of (4) and (2) reveals that solutions of (4) are also stationary solutions of the diffusion equation (2). The Lagrange multiplier $`\mu `$ is identified as interdiffusion potential. When $`\mu =\delta F/\delta c`$ is uniform, the particle flux vanishes. Integration of (4) leads to
$$\kappa \left(\frac{dc}{dx}\right)^2=f(c)-\mu c+K\equiv D(c)$$
(5)
where $`K`$ is an integration constant. The last equality in (5) defines the function $`D(c)`$ used in the following. In general, the physically relevant solutions of (5) are periodic concentration profiles $`c(x)`$ oscillating between a minimal value $`a`$ and a maximal value $`b`$ (cf. Figs. 2 and 3). The extrema $`a`$ and $`b`$ are related to the parameters $`\mu `$ and $`K`$ by the conditions $`D(a)=D(b)=0`$ which lead to
$$\mu =\frac{f(b)-f(a)}{b-a},\qquad K=\frac{f(b)\,a-f(a)\,b}{b-a}.$$
(6)
Further integration of (5) yields the inverse function of the concentration profile
$$x(c)=\int _{c_0}^{c}dc\,I(c)$$
(7)
with $`I(c)=\sqrt{\kappa /D(c)}=dx/dc`$. The integration bounds have been chosen in such a way that the origin of the $`x`$-coordinate is located at an interphase boundary defined by $`c=c_0\equiv (a+b)/2`$. Thus, equation (7) represents the concentration profile in half a period from $`c=a`$ at $`x=-d_a/2`$ to $`c=b`$ at $`x=d_b/2`$ (Fig. 3).
$$d_a=2\int _a^{c_0}dc\,I(c),\qquad d_b=2\int _{c_0}^bdc\,I(c).$$
(8)
Similarly, the multilayer period length $`d=d_a+d_b`$ and the mean composition $`\overline{c}`$ are given by
$$d=2\int _a^bdc\,I(c),\qquad \overline{c}=\frac{2}{d}\int _{-d_a/2}^{d_b/2}dx\,c(x)=\frac{2}{d}\int _a^bdc\,I(c)\,c.$$
(9)
For given concentrations $`a`$ and $`b`$, the concentration profile $`c(x)`$ can be calculated directly from (7). However, from the experimental point of view, $`a`$ and $`b`$ are not known a priori. Usually, components A and B are deposited consecutively with fixed individual layer thicknesses. During the early annealing stage, a smooth concentration profile develops which is similar to the stationary profiles derived here. The arising concentrations $`a`$ and $`b`$ are determined by equations (9) where the mean composition $`\overline{c}`$ and period length $`d`$ are given. The solution of equations (9) for $`a`$ and $`b`$ is, however, not always unique. If there is more than one solution, one has to compare the free energies of the different solutions in order to find the one with the lowest energy. From (1) and (5), the free energy of one multilayer period results as
$$F_p=4\sqrt{\kappa }\int _a^bdc\,\sqrt{D(c)}+\left[\frac{b-\overline{c}}{b-a}f(a)+\frac{\overline{c}-a}{b-a}f(b)\right]d.$$
(10)
An important intrinsic length of the present problem is the width of the interphase boundary $`\xi `$ defined by
$$\xi =(b-a)\left.\frac{dx(c)}{dc}\right|_{c=c_0}=(b-a)\sqrt{\frac{\kappa }{D(c_0)}}.$$
(11)
The second equality follows from (5). In the limiting case of spatially extended phases ($`d_a,d_b\xi `$; i. e. $`a\alpha `$, $`b\beta `$), the interface width becomes
$$\xi =(\beta -\alpha )\sqrt{\kappa /f_0}\equiv (\beta -\alpha )\,l_0$$
(12)
with $`f_0\equiv D((\alpha +\beta )/2)`$. $`f_0`$ characterises the height of the free energy wall of the $`f(c)`$ curve referred to the common tangent (cf. Fig. 2). The last equality in (12) defines the length unit $`l_0`$ used in the following.
For very thin individual layers, comparable with the width of the interphase boundary, the concentrations in the middle of the layers, $`a`$ and $`b`$, differ from those of the corresponding bulk phases $`\alpha `$ and $`\beta `$ ($`\alpha <a<b<\beta `$, Fig. 2) because the common tangent construction does not apply to thin layers. The concentrations $`a`$ and $`b`$ define a secant with the $`f(c)`$ curve as shown in Fig. 2. An estimate of the difference between the concentrations $`\beta `$ and $`b`$ is given by
$$\frac{\beta -b}{\beta -\alpha }=\rho _b\,\mathrm{exp}(-d_b/2\xi _b)$$
(13)
(see Appendix A), where $`\xi _b\equiv \sqrt{2\kappa /f^{\prime \prime }(\beta )}`$ and $`\rho _b`$ is a numerical factor of the order of unity. An analogous formula applies to the difference $`a-\alpha `$. In the limiting case $`d_b\gg \xi _b`$, a factor of $`\rho _b=2`$ has been calculated for the special composition dependence of the Gibbs free energy considered in the following section. The estimate (13) clearly reveals that the concentrations $`\beta `$ and $`b`$ differ significantly when the layer thickness $`d_b`$ approaches the characteristic length $`\xi _b`$.
## IV Special case: $`c^4`$–model
To simplify the calculation of the concentration profile (7), we consider the case where the free energy as a function of concentration can be represented as a polynomial of fourth degree
$$f(c)=A(c-\alpha )^2(c-\beta )^2+B(c-\alpha )+C.$$
(14)
The parameters $`B`$ and $`C`$ turn out to be unimportant for the composition evolution. The characteristic energy unit $`f_0`$ results as $`f_0=A(\beta -\alpha )^4/16`$. The parameters $`\alpha `$ and $`\beta `$ in (14) coincide with the equilibrium concentrations of spatially extended phases corresponding to the common tangent construction (Fig. 2). The characteristic length $`\xi _b`$ in (13) is obtained as $`\xi _b=(\beta -\alpha )l_0/4`$.
For the case (14), briefly called ’$`c^4`$-model’, the inverse concentration profile (7), the period length, and the mean composition (9) can be expressed by elliptic integrals
$$x(c)=\omega \sqrt{\frac{\kappa }{A}}\left[F_e(\varphi (c),m)-F_e(\varphi (c_0),m)\right],$$
(15)
$$d=2\omega \sqrt{\frac{\kappa }{A}}K(m),$$
(16)
$$\overline{c}=a+\frac{2}{\omega }Z(\varphi _Z,m)$$
(17)
with $`\omega =2/\sqrt{(b_1-a)(b-a_1)}`$, $`m=(b-a)(b_1-a_1)/((b-a_1)(b_1-a))`$, $`\varphi _Z=\mathrm{arcsin}\sqrt{(b_1-a)/(b_1-a_1)}`$, and $`\varphi (c)=\mathrm{arcsin}\sqrt{(b-a_1)(c-a)/((b-a)(c-a_1))}`$. $`K(m)`$ and $`F_e(\varphi ,m)`$ are the complete and incomplete elliptic integrals of first kind, and $`Z(\varphi ,m)`$ is the Jacobi zeta function . $`a`$ and $`b`$ are the concentrations in the middle of the individual layers, and $`a_1`$ and $`b_1`$ are the further two intersections of the $`f(c)`$-curve with the secant shown in Fig. 2. $`a_1<a<b<b_1`$ are also the four zeros of the function $`D(c)`$ defined in (5). Choosing $`a`$ and $`b`$, the other two zeros are given by
$$a_1=\alpha +\beta -c_0-w,\qquad b_1=\alpha +\beta -c_0+w$$
(18)
with $`w\equiv \sqrt{2[c_0(\alpha +\beta )-\alpha \beta ]+ab-3c_0^2}`$ and $`c_0\equiv (a+b)/2`$. Analytical expressions for the stationary solutions of the Cahn-Hilliard diffusion equation in the case of the $`c^4`$-model have been given previously in terms of Jacobian Elliptic functions by Tsakalos for the symmetric case $`d_a=d_b`$ and by Novick-Cohen and Segel for the general case .
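As an illustration of eqs. (16) and (18), the period length belonging to chosen layer concentrations $`a`$ and $`b`$ can be evaluated with standard elliptic-integral routines. A sketch (Python/SciPy) is given below; all numerical values of $`\alpha `$, $`\beta `$, $`A`$, $`\kappa `$, $`a`$ and $`b`$ are purely illustrative, and SciPy's `ellipk` takes the parameter $`m`$ as defined above.

```python
import numpy as np
from scipy.special import ellipk

# Sketch: multilayer period d of eq. (16) for chosen concentrations a, b
# in the c^4-model (14).  All numerical values are illustrative only.
alpha, beta = 0.1, 0.9
A, kappa = 16.0, 1.0

a, b = 0.15, 0.85                       # concentrations in the middle of the layers
c0 = 0.5 * (a + b)
w = np.sqrt(2.0 * (c0 * (alpha + beta) - alpha * beta) + a * b - 3.0 * c0**2)
a1 = alpha + beta - c0 - w              # outer zeros of D(c), eq. (18)
b1 = alpha + beta - c0 + w

omega = 2.0 / np.sqrt((b1 - a) * (b - a1))
m = (b - a) * (b1 - a1) / ((b - a1) * (b1 - a))
d = 2.0 * omega * np.sqrt(kappa / A) * ellipk(m)        # eq. (16)

f0 = A * (beta - alpha)**4 / 16.0
l0 = np.sqrt(kappa / f0)
print(d, d / l0)                        # period length, also in units of l0
```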
Fig. 7 shows a series of interesting quantities as a function of the mean concentration $`\overline{c}`$ for fixed period length $`d`$, calculated within the $`c^4`$-model. In a certain $`\overline{c}`$-$`d`$ region, two stationary periodic solutions have been found. The corresponding branches in Fig. 7 are denoted by #1 and #2, respectively. The free energy $`F_p`$ of the periodic concentration profile with smaller concentration variation $`b-a`$ (solution 2) is higher than that of solution 1 (Fig. 7a). For small values of $`\overline{c}`$, the free energy of the homogeneous concentration, $`F_h=f(\overline{c})d`$, is lower than that of the periodic solution 1. The values of the concentrations $`a`$ and $`b`$ are shown in Fig. 7b. A striking feature of these plots is the existence of a minimal value $`\overline{c}_a^M`$ of the mean concentration. At the minimum $`\overline{c}_a^M`$, solutions 1 and 2 merge. The corresponding layer thickness of phase ’b’ is denoted by $`d_b^M`$ (Fig. 7c). It correlates roughly with the minimal value of $`d_b`$ in this case. As discussed in the next section, solution 2 does not appear for multilayer period lengths $`d`$ below a critical value.
## V Stability of stationary solutions
In the following, let us consider the stability of periodic concentration profiles $`c(x)`$ which are obtained as solutions of equation (5). These concentration profiles are also stationary solutions of the Cahn-Hilliard diffusion equation (2). They are stable against any infinitesimal perturbation $`\delta c(x)`$ if the second variation of the free energy
$$\delta ^2F=\int dx\left[f^{\prime \prime }(c)(\delta c)^2+2\kappa \left(\frac{d\,\delta c}{dx}\right)^2\right]$$
(19)
is positive definite.
At first, the stability of the homogeneous concentration $`c(x)=\overline{c}`$ is considered. Without any restriction, the perturbation $`\delta c`$ can be represented as a Fourier series. From (19), it is evident that the stabilising influence of the gradient term (second term on the right-hand-side of (19)) is stronger the shorter the wavelength of the perturbation is. Considering a periodic perturbation with period $`d`$, i. e. $`\delta c\propto \mathrm{sin}(2\pi x/d)`$, we find stability $`\delta ^2F\ge 0`$ for period lengths
$$d<d^S(\overline{c})=2\pi \sqrt{\frac{2\kappa }{-f^{\prime \prime }(\overline{c})}}.$$
(20)
$`d^S(\overline{c})`$ is equal to the smallest wavelength of spinodal decomposition obtained from a stability analysis of diffusion equation (2) . The inverse function of $`d^S(\overline{c})`$ exhibits two branches which are denoted by $`\overline{c}_a^S(d)`$ and $`\overline{c}_b^S(d)`$ (Fig. 7). For the $`c^4`$-model, one obtains
$$\overline{c}_{b/a}^S(d)=\frac{\alpha +\beta }{2}\pm \frac{\beta -\alpha }{2\sqrt{3}}\left[1-\frac{\pi ^2}{2}(\beta -\alpha )^2\left(\frac{l_0}{d}\right)^2\right]^{1/2}.$$
(21)
Within the interval $`\overline{c}_a^S<\overline{c}<\overline{c}_b^S`$, the homogeneous solution is unstable and spinodal decomposition takes place. According to (20), the maximum of $`-f^{\prime \prime }(\overline{c})`$ yields a minimal period length $`d_{min}`$ below which the homogeneous solution is stable for all values of $`\overline{c}`$. For the $`c^4`$-model, $`d_{min}=\pi (\beta -\alpha )l_0/\sqrt{2}`$ follows.
Since within the region $`\overline{c}_a^S<\overline{c}<\overline{c}_b^S`$ the homogeneous solution is unstable, there must be a stable periodic solution. The investigation of the existence region and stability of periodic solutions is more complicated than that of the homogeneous one. In the following, the results of a numerical calculation of the free energies (10) belonging to the solutions (15) of the $`c^4`$-model are summarised (Fig. 7). It is expected that the qualitative features are the same for other double-well potentials $`f(c)`$. Similar results have been obtained by Hillert for a lattice model.
The behaviour of periodic solutions changes qualitatively depending on the multilayer period length $`d`$. The stability of concentration profiles can be discussed in a convenient way by including non-stationary states with nonequilibrium amplitudes $`b-a`$ in the consideration. Although the parameter $`b-a`$ does not give a complete characterisation of non-stationary states, it is an appropriate quantity to illustrate the stability behaviour of periodic concentration profiles. In Fig. 7, the free energy $`F`$ of concentration profiles is sketched as a function of their amplitude for different mean compositions and for two multilayer period lengths. Extrema of $`F`$ correspond to stationary solutions. In Fig. 7, these extrema are marked by filled and open circles denoting stable and unstable stationary solutions, respectively.
For period lengths above $`d_{min}`$, but below a certain critical value $`d_c`$, the situation is represented by the curves 1 and 2 in Fig. 7a: (i) Within the region $`\overline{c}_a^S<\overline{c}<\overline{c}_b^S`$, there is a stable periodic solution corresponding to the minimum of $`F`$ (curve 1). The homogeneous solution ($`b-a=0`$) belongs to a maximum of $`F`$ (more precisely a saddle point) and is therefore unstable. (ii) Outside the region $`\overline{c}_a^S<\overline{c}<\overline{c}_b^S`$, there is no stationary periodic solution (curve 2). The minimum of the free energy $`F`$ is given by the homogeneous solution which, therefore, is stable. At the concentrations $`\overline{c}=\overline{c}_{a,b}^S`$, a continuous transition between the homogeneous and the periodic stationary solution occurs. This transition will be referred to as second order ’phase transition’ in the following despite the fact that the resulting ’phases’ are only stable in the sense discussed in the Introduction.
For larger period lengths $`d>d_c`$, the behaviour is more complex as illustrated in Fig. 7b. Within the region $`\overline{c}_a^S<\overline{c}<\overline{c}_b^S`$, the situation is the same as for $`d<d_c`$ (curve 1). Outside certain limiting values $`\overline{c}_{a,b}^M`$ of the mean concentration, there is no stationary periodic solution (curve 4). However, within the regions $`\overline{c}_a^M<\overline{c}<\overline{c}_a^S`$ and $`\overline{c}_b^S<\overline{c}<\overline{c}_b^M`$, two stationary periodic solutions exist (curves 2 and 3; cf. also Fig. 7). One solution (solution 1) corresponds to a minimum of the free energy $`F`$ and the other one (solution 2) to a maximum. Thus, solution 2 is unstable. Choosing solution 2 as initial condition for the concentration profile, the numerical solution of the diffusion equation (2) reveals a rapid change of the profile. Depending on the initial fluctuations, it evolves either into the periodic solution 1 or into the homogeneous solution. The latter one corresponds also to a minimum of $`F`$. Whether the periodic solution 1 or the homogeneous one is more stable depends on the corresponding values of the free energy $`F_p^{(1)}`$ and $`F_h=f(\overline{c})d`$. At a certain concentration $`\overline{c}=\overline{c}_{a,b}^T(d)`$, for which $`F_p^{(1)}=F_h`$, a first order ’phase transition’ between the periodic and the homogeneous states takes place (cf. Fig. 7). Between $`\overline{c}^S(d)`$ and $`\overline{c}^T(d)`$ the homogeneous solution is metastable ($`F_p^{(1)}<F_h`$, curve 2 in Fig. 7b), whereas between $`\overline{c}^T(d)`$ and $`\overline{c}^M(d)`$ the periodic solution is metastable (curve 3 in Fig. 7b).
At $`\overline{c}=\overline{c}^S(d)`$, solution 2 disappears by merging into the homogeneous solution. With increasing distance of $`\overline{c}`$ from $`\overline{c}^S`$, the difference in the free energies of solutions 1 and 2 decreases and at the concentrations $`\overline{c}^M`$ they coincide (cf. also Fig. 7). This implies that the second variation of the free energy functional vanishes (see Appendix B) and, consequently, the periodic solutions become marginally stable at $`\overline{c}^M`$. Outside the region $`\overline{c}_a^M<\overline{c}<\overline{c}_b^M`$, no inhomogeneous stationary solution exists. Thus, $`\overline{c}_a^M(d)`$ and $`\overline{c}_b^M(d)`$ represent the minimal and maximal mean concentrations for the existence of metastable or globally stable periodic structures with given multilayer period length.
The critical value of the multilayer period $`d_c`$, which separates the regions of first and second order ’phase transitions’ in Fig. 7, can be derived analytically by means of a third-order perturbation analysis. The corresponding critical concentrations $`\overline{c}_c=\overline{c}^S(d_c)`$ are determined by the equation $`(f^{\prime \prime \prime })^2+3f^{\prime \prime }f^{\prime \prime \prime \prime }=0`$, where $`f^{\prime \prime }`$, $`f^{\prime \prime \prime }`$ and $`f^{\prime \prime \prime \prime }`$ denote the second to fourth derivative of $`f(c)`$ at $`c=\overline{c}_c`$. According to (20), the critical period length is then given by $`d_c=2\pi \sqrt{2\kappa /(-f^{\prime \prime }(\overline{c}_c))}`$. For the $`c^4`$-model, one obtains $`\overline{c}_c=[\alpha +\beta \pm (\beta -\alpha )/\sqrt{5}]/2`$ and $`d_c=\sqrt{5/2}\,d_{min}`$.
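These closed-form results are easy to verify numerically. The sketch below (Python, with purely illustrative parameter values; the helper names are ours) evaluates the spinodal branches (21), $`d_{min}`$, $`d_c`$ and $`\overline{c}_c`$ for the $`c^4`$-model and cross-checks them against the general expressions quoted above.

```python
import numpy as np

# Sketch: spinodal and critical quantities of the c^4-model, eqs. (20) and (21).
alpha, beta, A, kappa = 0.1, 0.9, 16.0, 1.0
f0 = A * (beta - alpha)**4 / 16.0
l0 = np.sqrt(kappa / f0)

def fpp(c):                              # f''(c) for f = A (c-alpha)^2 (c-beta)^2
    u = c - 0.5 * (alpha + beta)
    return A * (12.0 * u**2 - (beta - alpha)**2)

d_min = np.pi * (beta - alpha) * l0 / np.sqrt(2.0)
d_c = np.sqrt(2.5) * d_min
c_c = 0.5 * (alpha + beta + (beta - alpha) / np.sqrt(5.0))

# cross-checks against the general expressions quoted in the text
assert abs(d_min - 2.0 * np.pi * np.sqrt(-2.0 * kappa / fpp(0.5 * (alpha + beta)))) < 1e-6
fppp, fpppp = 24.0 * A * (c_c - 0.5 * (alpha + beta)), 24.0 * A
assert abs(fppp**2 + 3.0 * fpp(c_c) * fpppp) < 1e-6

def c_spinodal(d):                       # the two branches of eq. (21), d > d_min
    root = np.sqrt(1.0 - 0.5 * np.pi**2 * (beta - alpha)**2 * (l0 / d)**2)
    half = (beta - alpha) / (2.0 * np.sqrt(3.0)) * root
    return 0.5 * (alpha + beta) - half, 0.5 * (alpha + beta) + half

print(d_min, d_c, c_spinodal(2.0 * d_min))
```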
Let us recall that the above picture on the stability of periodic solutions was obtained for the special $`c^4`$-dependence of the Gibbs free energy. Further analysis of other dependencies $`f(c)`$ is desirable to confirm the present stability diagram qualitatively and to study quantitative changes. Numerical simulations of the composition evolution in the present work were performed by means of a finite difference method described in ref.. This method is based on a semi-implicit finite difference scheme coupled with a fast Fourier transformation. For numerical details, the reader is referred to Copetti and Elliott . The spatial grid spacing in our calculations was less than 10% of the interphase boundary width of stationary states and for the time integration an adaptive step size control was applied. For example, 1024 grid points were used for the calculations in Fig. 1.
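The essence of such a time integration can be sketched in a few lines (Python/NumPy). The block below is not the adaptive method of Copetti and Elliott used for the actual calculations, but a bare-bones semi-implicit Fourier step with a fixed time step and illustrative parameter values, sufficient to reproduce the qualitative behaviour shown in Fig. 1.

```python
import numpy as np

# Sketch: semi-implicit pseudo-spectral integration of the 1D Cahn-Hilliard
# equation (2) with periodic boundaries and the c^4 free energy (14).
# The linear fourth-order term is treated implicitly, f'(c) explicitly.
alpha, beta, A, kappa, M = 0.1, 0.9, 16.0, 1.0, 1.0
N, L, dt = 1024, 50.0, 1e-4
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.rfftfreq(N, d=L / N)          # wavenumbers

def f_prime(c):                                        # f'(c) of eq. (14), B omitted
    return 2.0 * A * (c - alpha) * (c - beta) * (2.0 * c - alpha - beta)

def step(c):
    c_hat = np.fft.rfft(c)
    rhs = c_hat - dt * M * k**2 * np.fft.rfft(f_prime(c))
    return np.fft.irfft(rhs / (1.0 + 2.0 * dt * M * kappa * k**4), n=N)

# initial profile: a periodic stack of nearly pure A and B layers plus noise
rng = np.random.default_rng(0)
c = np.where((x % 10.0) < 3.0, beta, alpha) + 1e-3 * rng.standard_normal(N)
for _ in range(10000):
    c = step(c)
```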
## VI Discussion
Multilayers for practical applications are usually deposited as layer stacks of nearly pure layers of components A and B with thicknesses $`d_A`$ and $`d_B`$. During annealing, a considerable interdiffusion between individual layers can occur depending on the values of the equilibrium bulk concentrations $`\alpha `$ and $`\beta `$, which change with temperature. Consider for definiteness the case of thin layers of component B between thick layers of component A ($`d_B<d_A`$). For arbitrary multilayer period lengths, the B-layers dissolve at elevated temperatures if the mean composition $`\overline{c}=d_B/d`$ (supposing $`\mathrm{\Omega }_A=\mathrm{\Omega }_B`$) is smaller than $`\alpha `$, where $`\alpha `$ increases with increasing temperature. However, layer dissolution can occur also for $`\overline{c}>\alpha `$, if the multilayer period is small enough.
The composition evolution in multilayers is controlled by the competition between volume free energy and interface energy. For mean compositions outside the region $`\overline{c}_a^M(d)<\overline{c}<\overline{c}_b^M(d)`$, no stationary periodic solutions of the Cahn-Hilliard diffusion equation have been found. This implies that individual layers of such multilayers dissolve in the early stage of annealing because the free energy gain by phase separation is too small to overcompensate the interface energy (Fig. 1a). The thickness of the ’b’-layers $`d_b^M\equiv d_b(\overline{c}=\overline{c}_a^M)`$, which corresponds to the minimal mean concentration $`\overline{c}_a^M(d)`$, is shown in Fig. 7 as a function of the period length $`d`$. For large period lengths ($`d>7l_0`$ for the example in Fig. 7), $`d_b^M(d)`$ represents the minimal layer thickness of phase ’b’ of stable concentration profiles. For smaller $`d`$ (but larger than $`d_c`$), the minimal thickness $`d_b`$ appeared in the present case at a mean concentration slightly larger than $`\overline{c}_a^M`$ (cf. Fig. 7c).
Despite the large change of the multilayer period length in Fig. 7, the characteristic layer thickness $`d_b^M(d)`$ changes only slightly and is always of the order of the characteristic length $`l_0=\sqrt{\kappa /f_0}`$. The gradient energy coefficient $`\kappa `$ can be estimated within the framework of the regular solution model as $`\kappa \approx \mathrm{\Delta }U/r_0`$ where $`r_0`$ is the interatomic distance and $`\mathrm{\Delta }U`$ is the energy of mixing (per atom) of an equiatomic solution ($`\overline{c}=0.5`$); typically, $`\kappa =10^{-11}`$ to $`10^{-10}`$ J/m. The value of $`l_0`$ considerably exceeds the interatomic distance when $`f_0\ll \mathrm{\Delta }U/r_0^3`$, where $`\mathrm{\Delta }U/r_0^3`$ is typically in the range of $`10^8`$ to $`10^9`$ J/m³. Choosing for example $`\kappa =3\times 10^{-11}`$ J/m and $`f_0=10^8`$ J/m³, one obtains $`l_0\approx 0.5`$ nm which is about twice the interatomic distance.
The gradient energy coefficient changes only very weakly with temperature, whereas the parameter $`f_0`$ (cf. Fig. 2) decreases with increasing temperature, approaching zero at the critical phase separation temperature $`T_c`$. As a consequence, the length $`l_0`$ and correspondingly the thickness $`d_b^M`$, characterising the onset of layer dissolution, diverge as $`|T-T_c|^{-1/2}`$ at the critical temperature. Comparatively small critical temperatures in technologically important regions of a few $`100^{\circ }`$C are found, for example, for systems with large lattice mismatch because mechanical stresses in coherent layers cause a considerable lowering of the critical temperature compared to incoherent phases .
The layer dissolution in multilayers with very small period length, which is driven by the reduction of interface energy, leads to the formation of supersaturated phases. Subsequent phase separation occurs with a larger period length on a longer time scale and can be kinetically hindered by rapid quenching. As mentioned in the introduction, such a situation could be present during mechanical alloying , when a nanoscale lamellar structure develops in the course of ball-milling, or in the case of multilayer deposition with subsequent short-time annealing at moderate temperatures . The nonequilibrium phase formation observed in those experiments could be related to the layer dissolution discussed here. Based on an estimate of the chemical energy of coherent phase boundaries, such a mechanism has been suggested by Gente et al. to explain the observed solid solution formation of immiscible elements due to long-time ball-milling.
## VII Summary
Stationary solutions of the one-dimensional Cahn-Hilliard diffusion equation have been analysed in dependence on the mean composition $`\overline{c}`$ of a binary system and the multilayer period length $`d`$. A $`\overline{c}`$-$`d`$ diagram has been established showing the regions of existence, metastability and global stability of stationary periodic concentration profiles, as well as of the homogeneous concentration (Fig. 7). The diagram was derived under the constraint of fixed period length of the concentration profile. Actually, periodic solutions are unstable against thickness fluctuations which leads to layer thickness coarsening in the course of annealing. However, this happens on a longer time scale than the relaxation to quasi-stationary concentration profiles as long as the individual layer thicknesses are significantly larger than the interphase boundary width.
The present analysis revealed that very thin individual layers (typically a few monolayers for small mutual solubility) dissolve during annealing if the mean composition of artificial multilayers is lower than a critical value $`\overline{c}_a^M(d)`$. The individual layer thickness $`d_b^M`$ corresponding to the minimal value $`\overline{c}_a^M(d)`$ increases slightly with increasing multilayer period length. The layer dissolution is driven by two mechanisms: (i) interdiffusion between pure individual layers to establish the equilibrium phase concentrations and (ii) reduction of interface energy in the case of very thin layers of the order of the interphase boundary width. In a certain $`\overline{c}`$-$`d`$ region, the layered structure can exist as metastable state although the free energy of the homogeneous concentration is lower. According to the Cahn-Hilliard theory, the equilibrium composition in thin layers differs from that of the corresponding bulk phase if the layer thickness becomes comparable with the interphase boundary width .
Consideration of the present conclusions in the design of layered structures could help to improve their thermal stability or, on the other hand, to prepare new metastable phases by controlled layer dissolution. Without changing the qualitative predictions, mechanical stresses in coherent layers due to lattice mismatch can easily be included in the present one-dimensional analysis by a modification of the free energy $`f(c)`$ (see e. g. ref. ).
The evaluation of the long-time stability of multilayers requires an additional analysis of the evolution of lateral composition perturbations including the effect of stresses in individual layers. The roughening instability of interfaces between strained layers has been investigated for example in ref.. A comprehensive analysis of the composition evolution under the influence of stresses owing to lattice mismatch or to the presence of dislocations has been given recently in a series of papers by Léonard and Desai .
###### Acknowledgements.
This work was supported by the Deutsche Forschungsgemeinschaft, Sonderforschungsbereich 422.
## VIII Appendix A
To derive (13), equation (4) is linearised with respect to the concentration difference $`\delta c_b(x)=b-c(x)`$. With the approximation $`f^{\prime \prime }(b)\approx f^{\prime \prime }(\beta )`$, one obtains
$$\left(\frac{d^2}{dx^2}-\frac{1}{\xi _b^2}\right)\delta c_b(x)=\frac{\mu -f^{\prime}(b)}{2\kappa }$$
(22)
($`\xi _b^2\equiv 2\kappa /f^{\prime \prime }(\beta )`$). Choosing the middle of the ’b’-layer as origin of the $`x`$-coordinate and requiring $`\delta c_b(0)=\delta c_b^{\prime}(0)=0`$, the solution of (22) is obtained as
$$\delta c_b(x)=\frac{\mu -f^{\prime}(b)}{f^{\prime \prime }(\beta )}\left(\mathrm{cosh}\frac{x}{\xi _b}-1\right).$$
(23)
Although this result was derived for small values of $`\delta c_b`$, it is extrapolated to larger values in order to get an estimate for the layer thickness $`d_b`$ defined by $`c(\pm d_b/2)=(a+b)/2`$. Assuming further $`d_b\gg \xi _b`$ and approximating $`b-a`$ by $`\beta -\alpha `$, one finds
$$\delta c_b(\pm d_b/2)=\frac{b-a}{2}\approx \frac{\beta -\alpha }{2}=\frac{1}{\rho _b}\frac{\mu -f^{\prime}(b)}{2f^{\prime \prime }(\beta )}\mathrm{exp}(d_b/2\xi _b).$$
(24)
The correction factor $`\rho _b`$ is introduced in (24) in order to account for the error caused by the linearisation of (22) within the interface. The last equation can be rewritten using the expansion $`f^{\prime}(b)\approx f^{\prime}(\beta )+f^{\prime \prime }(\beta )(b-\beta )`$ and the fact that $`\mu -f^{\prime}(\beta )`$ is of the order of $`𝒪((\beta -b)^2,(a-\alpha )^2)`$. Neglecting these higher order terms and inserting $`\mu -f^{\prime}(b)\approx f^{\prime \prime }(\beta )(\beta -b)`$ into (24), one obtains equation (13).
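The algebra above can be verified symbolically; the short sketch below (Python/sympy, with illustrative symbol names) checks that the profile (23) indeed solves the linearised equation (22) and satisfies the boundary conditions at the layer centre.

```python
import sympy as sp

# Symbols: fpb = f'(b), fppb = f''(beta), xi = xi_b; kappa fixed by xi_b^2 = 2*kappa/f''(beta)
x, xi, mu, fpb, fppb = sp.symbols('x xi mu fpb fppb', positive=True)
kappa = fppb * xi**2 / 2

dc = (mu - fpb) / fppb * (sp.cosh(x / xi) - 1)   # candidate solution, Eq. (23)

lhs = sp.diff(dc, x, 2) - dc / xi**2             # left-hand side of Eq. (22)
rhs = (mu - fpb) / (2 * kappa)                   # right-hand side of Eq. (22)

print(sp.simplify(lhs - rhs))                    # -> 0, so (23) solves (22)
print(dc.subs(x, 0), sp.diff(dc, x).subs(x, 0))  # -> 0 0, boundary conditions at x = 0
```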
## IX Appendix B
In the following, the stability of two stationary solutions for the same $`d`$ and $`\overline{c}`$, $`c_1(x)`$ and $`c_2(x)`$ differing by $`\delta c(x)=c_2(x)-c_1(x)`$, is analysed for $`\delta c\to 0`$. The two solutions fulfil equation (4) with the corresponding Lagrange multipliers $`\mu _1`$ and $`\mu _2`$. Expansion of (4) for $`c_2=c_1+\delta c`$ with respect to $`\delta c`$ and $`\delta \mu =\mu _2-\mu _1`$ yields the following equation for $`\delta c`$: $`f^{\prime \prime }(c_1)\delta c-2\kappa d^2\delta c/dx^2=\delta \mu `$. Using the estimate $`\delta \mu \sim 𝒪((\delta c)^2)`$, one obtains to first order in $`\delta c`$
$$\left[f^{\prime \prime }(c_1)-2\kappa \frac{d^2}{dx^2}\right]\delta c(x)=0.$$
(25)
This equation is equivalent to marginal stability of $`c_1(x)`$. Indeed, the second variation (19) of $`F[c]`$ can be transformed by partial integration into
$$\delta ^2F=\int dx\,\delta c\left[f^{\prime \prime }(c)-2\kappa \frac{d^2}{dx^2}\right]\delta c.$$
(26)
In deriving (26), the periodicity of solutions $`c_1(x)`$ and $`c_2(x)`$, and consequently of $`\delta c(x)`$, was used. Comparison of (26) and (25) leads to $`\delta ^2F=0`$. In summary, if two solutions merge, marginal stability results. This happens at the boundaries $`\overline{c}^S`$ as well as $`\overline{c}^M`$ (Fig. 7).
Present address: Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Str. 38, D-01187 Dresden, Germany
# Correlation between UV and IR cutoffs in quantum field theory and large extra dimensions
## Introduction.
The main subject of this note is to try to discuss the implications for low energy physics of the breakdown of effective field theory, with an UV cutoff $`\mathrm{\Lambda }`$, to describe a system in a finite volume with a length $`L`$ when this length becomes smaller than a critical value which depends on the UV cutoff. One way to derive a limitation of this type on conventional quantum field theory is based on the postulate that the maximum entropy in a box of volume $`L^3`$ is proportional to the area of the box. A stronger constraint on the validity of quantum field theory is obtained if one restricts the IR cutoff $`L`$ in order to exclude states containing a black hole. Assuming a simple power dependence on the cutoffs for the corrections to a conventional calculation in an infinite box without a UV cutoff one finds that the uncertainty in the quantum field theory calculation is far larger than what one naively would ascribe to gravitational effects.
The second ingredient of the present discussion is the very interesting recent proposal of large compactified extra dimensions which allow to understand the observed weakness of gravity in a theory with only one fundamental short distance scale, the weak scale.
When one considers simultaneously the possibility to have large extra dimensions and also the limitations of quantum field theory due to the impossibility to describe states containing a black hole then it becomes important to try to estimate the errors in any conventional quantum field theory calculation which is the aim of this short note.
## Deviations from QFT calculations.
Let us assume, to start with, that the length $`L`$ of the finite box, which acts as an infrared cutoff, is smaller than the compactification radius $`R`$ so we can consider a box of volume $`L^{3+n}`$ in $`3+n`$ space dimensions. A restriction on the infrared cutoff can be obtained by requiring that any state have a Schwarzschild radius $`L_0`$ smaller than its size $`L`$. If one denotes by $`M`$ the mass scale associated with the gravitational constant in $`4+n`$ dimensions ($`G=1/M^{n+2}`$) and uses the bound $`\rho \lesssim \mathrm{\Lambda }^4`$ for the energy density in the presence of a UV cutoff $`\mathrm{\Lambda }`$, then the largest possible value for $`L_0`$ is related to the UV cutoff through $`L_0^{1+n}\approx M^{-(2+n)}L^{3+n}\mathrm{\Lambda }^{4+n}`$. Then the condition $`L_0<L`$ leads to a restriction combining the IR and UV cutoffs
$$L^2\mathrm{\Lambda }^{4+n}<M^{2+n}$$
(1)
which is the generalization of the constraint $`L\mathrm{\Lambda }^2<M_P`$ obtained in $`3+1`$ dimensions.
In order to estimate the corrections to a conventional quantum field theory calculation done in an infinite box without a UV cutoff one assumes that they can be given as an expansion in powers of $`1/\mathrm{\Lambda }`$ and $`1/L`$. One can consider two general classes of observables, “chirally” protected ($`\widehat{O}`$) involving corrections with even powers of the cutoffs exclusively and the remaining ones ($`O`$) which have all power corrections. The result for an observable $`\widehat{O}`$ including one loop radiative corrections will be
$$\widehat{O}\approx \widehat{O}_{QFT}\left[1+\delta \widehat{O}\right]$$
(2)
where $`\widehat{O}_{QFT}`$ is the calculation in QFT including one loop radiative corrections and the correction $`\delta \widehat{O}`$ can be estimated to be
$$\delta \widehat{O}\approx \frac{\alpha }{\pi }\left[\frac{E^2}{\mathrm{\Lambda }^2}+\frac{1}{L^2E^2}\right]\approx \frac{\alpha }{\pi }\left[\frac{E^2}{\mathrm{\Lambda }^2}+\frac{\mathrm{\Lambda }^{4+n}}{E^2M^{2+n}}\right]$$
(3)
where $`\alpha `$ is the coupling of the perturbative expansion and $`E`$ is the characteristic energy of the observable $`\widehat{O}`$.
From this estimate for the correction to the field theoretical calculation it is possible to determine the choice for the UV cutoff which minimizes the discrepancy in the calculation of $`\widehat{O}`$ in QFT. The best choice for $`\mathrm{\Lambda }`$ corresponds to $`\mathrm{\Lambda }^{6+n}\approx E^4M^{2+n}`$ and the minimal uncertainty in the calculation is
$$\delta _{min}\widehat{O}\approx \frac{\alpha }{\pi }\left(\frac{E}{M}\right)^{\frac{4+2n}{6+n}}$$
(4)
In the case of two extra dimensions one has $`\delta _{min}\widehat{O}\approx (\alpha /\pi )(E/M)`$ which is larger than the corrections that one would expect in an effective field theory calculation valid up to the scale $`M`$ which signals the onset of gravitational effects, $`\delta \widehat{O}\approx (\alpha /\pi )(E/M)^2`$.
If one considers an observable with no “chiral” protection the discussion can be repeated and the final answer is
$$\delta _{min}O\approx \frac{\alpha }{\pi }\left(\frac{E}{M}\right)^{\frac{2+n}{6+n}}$$
(5)
which for the case of two extra dimensions gives a correction proportional to $`\sqrt{E/M}`$ to the QFT calculation.
From the success of the QFT one can derive a lower bound on the scale $`M`$ of the gravitational coupling in $`n+4`$ dimensions. The bound will be higher the smaller the deviation of the experimental determination from the QFT calculation is, and for a given accuracy one will get more stringent bounds as the energy increases. As an explicit example of a chirally-protected observable, which is in fact the most important quantitatively, one can consider the anomalous magnetic moment of the electron, $`g-2`$, which is known to an accuracy of $`10^{-11}`$. In this case one has a characteristic energy scale $`E=m_e`$ and demanding $`\delta _{min}(g-2)`$ to be smaller than the experimental error leads, in the case of two extra dimensions, to the bound $`M>100TeV`$. This bound is more stringent than the bound obtained previously by considering contributions of Kaluza-Klein (KK) modes. The very weak coupling of KK-gravitons is compensated by their very large multiplicity and one finds a cross section for emission of KK-gravitons of the order of $`E^2/M^4`$ where $`E`$ is the center-of-mass energy and a corresponding lower bound $`M>30TeV`$.
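The quoted $`(g-2)_e`$ bound follows from a few lines of arithmetic; the sketch below (Python) uses standard values for $`\alpha `$ and $`m_e`$ and the $`10^{-11}`$ experimental accuracy mentioned above.

```python
import math

alpha = 1 / 137.036      # fine-structure constant
m_e = 0.511e6            # electron mass, eV
accuracy = 1e-11         # experimental accuracy of (g-2)_e
n = 2                    # number of extra dimensions
p = (4 + 2 * n) / (6 + n)          # exponent in delta_min ~ (alpha/pi)(E/M)^p; p = 1 for n = 2

# Require (alpha/pi) * (m_e/M)^p < accuracy and solve for M:
M_bound = m_e * ((alpha / math.pi) / accuracy) ** (1 / p)
print(f"M > {M_bound / 1e12:.0f} TeV")   # -> of order 100 TeV
```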
If one considers the anomalous magnetic moment of the muon one has $`E=m_\mu `$ and then larger corrections than in the case of the electron but also the experimental error is larger ($`10^{-8}`$) so that the corresponding bound for $`M`$ is lower. The same happens for other observables at higher energies which are not determined experimentally with enough precision to give a bound comparable with the one obtained from $`(g-2)_e`$.
If one considers a higher number of extra dimensions then the exponent of $`E/M`$ in the deviations from QFT is bigger and then one finds smaller values for the lower bound on the gravitational mass scale $`M`$. In the particular case of six extra dimensions one finds $`M\gtrsim 1TeV`$ which is closer to the Fermi scale.
## Consistency of the determination of $`\delta _{min}\widehat{O}`$.
In the discussion of deviations from the QFT conventional calculation we have assumed that the compactification radius $`R`$ in the extra dimensions is smaller than the length $`L`$ of the box. The relation between the $`4+n`$-dimensional gravitational coupling and the effective four-dimensional Newton constant, $`R^nM^{2+n}=M_P^2`$ where $`M_P`$ is the Planck mass scale, can be used to determine the radius $`R`$ in terms of the scale $`M`$,
$$R^2=\frac{1}{M^2}\left(\frac{M_P}{M}\right)^{4/n}$$
(6)
The result (4) for the minimal deviation from the QFT calculation is obtained with a UV cutoff $`\mathrm{\Lambda }`$ such that $`\mathrm{\Lambda }^{6+n}\approx E^4M^{2+n}`$ and an IR cutoff $`L`$ such that $`L^2\approx M^{2+n}/\mathrm{\Lambda }^{4+n}`$. Then one has
$$L^2\approx \frac{1}{M^2}\left(\frac{M}{E}\right)^{\frac{16+4n}{6+n}}$$
(7)
The condition $`L<R`$ gives a restriction on the energy scale $`E`$,
$$E>M\left(\frac{M}{M_P}\right)^{\frac{6+n}{n(4+n)}}$$
(8)
which in the case $`n=2`$ can be rewritten as
$$E\gtrsim \frac{m_e}{10}\left(\frac{M}{100TeV}\right)^{5/3}$$
(9)
Then in the case of two large extra dimensions with $`M\approx 100TeV`$ we have that the estimate (4) for the deviations from QFT is valid for all processes with an energy scale $`E\gtrsim 100\mathrm{keV}`$. This includes all the applications of relativistic quantum field theory.
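A quick numerical check of the validity range (8)–(9) for $`n=2`$ and $`M=100`$ TeV (Python; the Planck mass is the standard value):

```python
M_P = 1.22e28            # Planck mass, eV
M = 1e14                 # gravitational scale, eV (100 TeV)
n = 2
exponent = (6 + n) / (n * (4 + n))      # -> 2/3 for n = 2

E_min = M * (M / M_P) ** exponent
print(f"E_min = {E_min / 1e3:.0f} keV")  # -> a few tens of keV, i.e. of order m_e/10
```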
If one considers more than two extra dimensions then the range of energies (8) where the estimate (4) for the deviations from QFT is valid is reduced. In the particular case of six extra dimensions one has
$$E>M\left(\frac{M}{M_P}\right)^{1/5}\approx 1GeV\left(\frac{M}{1TeV}\right)^{6/5},$$
(10)
which in this case excludes, for $`M`$ close to the Fermi scale, all the applications of QFT at energies $`E\lesssim 1GeV`$. This includes the evaluation of $`(g-2)`$ for the electron where $`E=m_e`$. At this very low energy the IR cutoff which minimizes the deviation from the QFT evaluation is such that $`L>R`$ and then one can neglect the extra dimensions in the estimate of the deviations from QFT. In this case one has to replace $`n`$ by $`n_{eff}=0`$ and $`M`$ by $`M_{eff}=M_P`$ in (4) and one has
$$\delta _{min}[(g-2)_e]\approx \frac{\alpha }{\pi }\left(\frac{m_e}{M_P}\right)^{2/3}\approx \frac{\alpha }{\pi }\times 10^{-15},$$
(11)
which is very small compared with the experimental error in the determination of $`(g-2)_e`$. The stronger constraint on the scale $`M`$ will come, in this case, from high energy observables $`E\gtrsim 1GeV`$, where one expects deviations from QFT of the order of
$$\delta _{min}\widehat{O}\approx \frac{\alpha }{\pi }\left(\frac{E}{M}\right)^{4/3}.$$
(12)
## Cosmological constant.
We end up with a comment on the result of a QFT evaluation of the vacuum energy density $`\rho _0`$ along the same lines. The result in perturbation theory will be $`\rho _0\approx \mathrm{\Lambda }^4`$ and in order to reproduce the value suggested by the supernovae data a UV cutoff $`\mathrm{\Lambda }\approx 2.5\times 10^{-3}eV`$ is required . If one uses the relation $`R=M_P/M^2`$ between the compactification radius $`R`$ and the scale $`M`$ and the relation between the cutoffs leading to $`\delta _{min}\widehat{O}`$ in the case of two extra dimensions ($`L\mathrm{\Lambda }^3\approx M^2`$) then one finds for the IR cutoff
$$L\approx 10^{36}\left(\frac{M}{100TeV}\right)^4R$$
(13)
which is much bigger than $`R`$. In this case one would expect that the extra dimensions can be neglected and one should repeat all the considerations based on a correlation between IR and UV cutoffs but now in four effective space-time dimensions. In this case the UV cutoff which minimizes the deviations from a QFT calculation is $`\mathrm{\Lambda }\approx (E^2M_P)^{1/3}`$; then in order to reproduce the value of the UV cutoff, $`\mathrm{\Lambda }\approx 2.5\times 10^{-3}eV`$, and then the smallness of the cosmological constant, one has to find a characteristic energy scale in the evaluation of the vacuum energy density $`E_0\approx 10^{-18}eV`$ (!). It is not clear what can be the origin of such a small energy scale. Alternatively one can estimate the corresponding value of the effective IR cutoff through $`L_{eff}\mathrm{\Lambda }^2\approx M_P`$ which gives $`L_{eff}\approx 10^{28}cm`$, a value comparable to the present horizon size. In order to predict what the cosmological constant should be one would have to go beyond QFT to find the origin of these scales.
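The scales quoted in this paragraph follow from simple arithmetic; the sketch below (Python, natural units in eV) reproduces them, with $`\hbar c`$ only used to convert the inverse-energy length to centimetres.

```python
M_P = 1.22e28                  # Planck mass, eV
Lam = 2.5e-3                   # UV cutoff reproducing the supernovae value, eV

E0 = (Lam**3 / M_P) ** 0.5     # from Lambda ~ (E^2 M_P)^(1/3)
L_eff = M_P / Lam**2           # from L_eff * Lambda^2 ~ M_P, in eV^-1
hbar_c_cm = 1.97e-5            # eV*cm, converts eV^-1 to cm

print(f"E_0 ~ {E0:.1e} eV,  L_eff ~ {L_eff * hbar_c_cm:.1e} cm")
# -> E_0 ~ 1e-18 eV and L_eff ~ 4e28 cm, comparable to the present horizon size
```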
I am grateful to M. Asorey, F. Falceto and A. Segui for discussions and J.L. Alonso for reading the manuscript. This work was supported by CICYT (Spain) project AEN-97-1680.
# Singularities of the renormalization group flow for random elastic manifolds
## Abstract
We consider the singularities of the zero temperature renormalization group flow for random elastic manifolds. When starting from small scales, this flow goes through two particular points $`l^{*}`$ and $`l_c`$, where the average value of the random squared potential $`\langle U^2\rangle `$ turns negative ($`l^{*}`$) and where the fourth derivative of the potential correlator becomes infinite at the origin ($`l_c`$). The latter point sets the scale where simple perturbation theory breaks down as a consequence of the competition between many metastable states. We show that under physically well defined circumstances $`l_c<l^{*}`$ and thus the apparent renormalization of $`\langle U^2\rangle `$ to negative values does not take place.
PACS numbers: 05.20.-y, 11.10.Hi, 74.60.Ge, 75.60.Ch, 82.65.Dp
Consider an elastic manifold with $`d`$ internal degrees of freedom embedded into a $`(d+N)`$-dimensional space in the presence of a random potential $`U(𝐮,𝐫)`$. The free energy of the manifold takes the form
$$\mathcal{F}[𝐮]=\int d^d𝐫\left[\frac{C}{2}\left(\frac{\partial 𝐮}{\partial 𝐫}\right)^2+U(𝐮,𝐫)\right],$$
(1)
with $`C`$ the elasticity. The random potential $`U(𝐮,𝐫)`$ is assumed to be gaussian with an isotropic correlator $`\langle U(𝐮,𝐫)U(𝐮^{\prime},𝐫^{\prime})\rangle =K(|𝐮-𝐮^{\prime}|)\delta (𝐫-𝐫^{\prime})`$. The model Hamiltonian (1) describes a large class of disordered systems including random magnets, dislocations in metals, and vortices in superconductors.
In a recent paper the functional renormalization group (FRG) approach has been used in order to calculate the critical force for the depinning of a $`(4+N)`$-dimensional elastic manifold in the presence of a weak random potential. The collective-pinning scale has been identified with the FRG flow point, where the fourth derivative of the correlator of the random potential at the origin $`K^{(4)}(0)`$ becomes infinite. By induction it is possible to show that all the higher even derivatives also become singular. However, if we look at the equation for the correlator itself, it turns out that this equation also exhibits a singularity: at some length $`l^{*}`$ the average value of the random potential squared $`K_l(0)`$ becomes negative. This situation is, of course, unphysical. The goal of this note is to show that the collective-pinning scale $`l_c`$ is always smaller than the length $`l^{*}`$ at which the average value of the pinning potential squared becomes negative. We briefly review the method of the calculation of the collective-pinning length used in Ref. for the $`(d+N)`$-dimensional problem, determine the two lengths $`l_c`$ and $`l^{*}`$ and show that $`l_c<l^{*}`$ for a physical situation.
The one-loop zero temperature FRG equation for a $`(d+N)`$-dimensional elastic manifold can be written in the form
$$\frac{\partial K_l(𝐮)}{\partial l}=\left(4-d-4\zeta \right)K_l(𝐮)+\zeta u_\mu K_l^\mu (𝐮)+I\left[\frac{1}{2}K_l^{\mu \rho }(𝐮)K_l^{\mu \rho }(𝐮)-K_l^{\mu \rho }(𝐮)K_l^{\mu \rho }(0)\right],$$
(2)
where $`\zeta `$ is the wandering exponent, $`I=A_d/(C^2\mathrm{\Lambda }^{4-d})`$ ($`A_d=2\pi ^{d/2}/\mathrm{\Gamma }(d/2)`$ and $`\mathrm{\Lambda }^{-1}`$ is the short scale cutoff ), and the upper indices $`\mu `$ and $`\rho `$ denote the derivative with respect to the cartesian coordinates $`\mu `$ and $`\rho `$. Differentiating this equation four times with respect to $`𝐮`$ and substituting $`𝐮=0`$, we obtain
$$\frac{\partial K_l^{(4)}(0)}{\partial l}=(4-d)K_l^{(4)}(0)+\frac{I(N+8)}{3}\left[K_{l}^{(4)}(0)\right]^{2},$$
(3)
where $`K_l^{(4)}(0)=\partial ^4K_l/\partial u_\mu ^4|_{𝐮=0}`$. Integrating Eq. (3) we find that the function $`K_l^{(4)}(0)`$ becomes infinite at the scale $`l_c`$ defined by
$$l_c=\frac{1}{4-d}\mathrm{ln}\left(1+\frac{3(4-d)}{I(N+8)K_0^{(4)}(0)}\right).$$
(4)
The scale $`l_c`$ defines the collective-pinning radius $`R_c=e^{l_c}/\mathrm{\Lambda }.`$
On the other hand, differentiating Eq. (2) twice with respect to $`𝐮`$ and setting $`𝐮=0`$ we obtain
$$\frac{\partial K_l^{(2)}(0)}{\partial l}=(4-d-2\zeta )K_l^{(2)}(0).$$
(5)
This equation again can be easily solved and substituting the expression for $`K_l^{(2)}(0)`$ into Eq. (2) with $`𝐮=0`$ we find that $`K(0)`$ becomes negative at the scale
$$l^{*}=\frac{1}{4-d}\mathrm{ln}\left(1+\frac{2(4-d)K(0)}{IN\left[K^{(2)}(0)\right]^2}\right).$$
(6)
The scheme used in Ref. for the determination of the collective pinning radius $`R_c`$ is valid only if $`l_c<l^{*}`$. Thus, we have to prove that indeed in a physical situation $`l^{*}>l_c`$. Taking into account Eqs. (4) and (6) we can write this inequality in the form
$$\frac{3N}{2(N+8)}<\frac{K_0(0)K_0^{(4)}(0)}{\left[K_0^{(2)}(0)\right]^2}.$$
(7)
Next, let us show that the inequality (7) is satisfied for any physical correlator $`K(𝐮)`$ with a positive Fourier transform $`K(𝐤)=\int d𝐮\,K(𝐮)\mathrm{exp}(i\mathrm{𝐤𝐮})`$ ; the distribution of the Fourier components of the potential $`U`$ is then given by the product $`𝒫(U_𝐤)\propto \prod_𝐤\mathrm{exp}\left(-U_𝐤^2/2K(𝐤)\right)`$ and is well defined as long as $`K(𝐤)>0`$. The condition $`K(𝐤)>0`$ implies that $`K(𝐮)`$ can be represented in the form
$$K(𝐮)=\int d𝐮^{\prime}\,P(𝐮-𝐮^{\prime})P(𝐮^{\prime}).$$
(8)
Taking into account that $`\mathrm{\Delta }K(𝐮=0)=NK^{(2)}(0)`$ and $`\mathrm{\Delta }^2K(𝐮=0)=\left[N(N+2)/3\right]K^{(4)}(0)`$, with $`\mathrm{\Delta }`$ the Laplace operator in the $`N`$-dimensional space, we can rewrite the right hand side of the inequality (7) in the form
$$\frac{K_0(0)K_0^{(4)}(0)}{\left[K_0^{(2)}(0)\right]^2}=\frac{3N}{N+2}\frac{\mathrm{\Delta }^2K(𝐮=0)K(𝐮=0)}{\left[\mathrm{\Delta }K(𝐮=0)\right]^2}.$$
(9)
Using the Schwarz inequality $`(y_1,y_2)\le \|y_1\|\,\|y_2\|`$ and Eq. (8), with the scalar product and the norm defined as $`(y_1,y_2)=\int d𝐮\,y_1(𝐮)y_2(𝐮)`$ and $`\|y_1\|=\left[\int d𝐮\,y_1^2(𝐮)\right]^{1/2}`$ (in particular, $`K(0)=\|P\|^2`$, $`\mathrm{\Delta }K(0)=(P,\mathrm{\Delta }P)`$, and $`\mathrm{\Delta }^2K(0)=\|\mathrm{\Delta }P\|^2`$), we arrive at the result
$$\frac{K_0(0)K_0^{(4)}(0)}{\left[K_0^{(2)}(0)\right]^2}=\frac{3N}{N+2}\frac{\mathrm{\Delta }^2K(𝐮=0)K(𝐮=0)}{\left[\mathrm{\Delta }K(𝐮=0)\right]^2}\ge \frac{3N}{N+2}.$$
(10)
We then can reformulate the condition (7) to read
$$\frac{3N}{2\left(N+8\right)}<\frac{3N}{N+2},$$
(11)
which is always true and hence $`l^{*}>l_c`$ for any physical correlator $`K_0(𝐮)`$. At the point $`l_c`$ the third derivative $`K_{l_c}^{(3)}(0)`$ exhibits a jump and the FRG equations (3) and (5) break down as we have used the fact that all odd derivatives of the correlator vanish at the point $`𝐮=0`$ in their derivation. The appearance of new terms in the FRG equations will then prevent the function $`K_l(0)`$ from taking negative values at scales beyond $`l_c`$.
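As a concrete illustration of the inequality just derived, the following sketch (Python/sympy) evaluates the ratio appearing in (10) for a Gaussian correlator, which has a positive Fourier transform, and compares it with the bounds of (7) and (10).

```python
import sympy as sp

# Gaussian correlator K(u) = exp(-|u|^2/2); its dependence on a single cartesian
# component (others at the origin) is enough to get K(0), K^(2)(0) and K^(4)(0).
u = sp.symbols('u')
K = sp.exp(-u**2 / 2)

K0 = K.subs(u, 0)                         # K(0)  = 1
K2 = sp.diff(K, u, 2).subs(u, 0)          # K^(2)(0) = -1
K4 = sp.diff(K, u, 4).subs(u, 0)          # K^(4)(0) = 3
ratio = K0 * K4 / K2**2                   # = 3

for N in (1, 2, 3, 10):
    print(N, float(ratio), 3 * N / (N + 2), 3 * N / (2 * (N + 8)))
# The ratio (= 3) indeed exceeds both 3N/(N+2) and 3N/(2(N+8)) for every N.
```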
# Thickness dependent Curie temperatures of ferromagnetic Heisenberg films
## Abstract
We develop a procedure for calculating the magnetic properties of a ferromagnetic Heisenberg film with single-ion anisotropy which is valid for arbitrary spin and film thickness. Applied to sc(100) and fcc(100) films with spin $`S=\frac{7}{2}`$ the theory yields the layer dependent magnetizations and Curie temperatures of films of various thicknesses making it possible to investigate magnetic properties of films at the interesting 2D-3D transition.
In the past the Heisenberg model in thin films and superlattices has been subject to intense theoretical work. Haubenreisser et al. obtained good results for the Curie temperatures of thin films introducing an anisotropic exchange interaction (2). Shi and Yang calculated the layer-dependent magnetizations of ultra-thin $`n`$-layer films with single-ion anisotropy (3) for thicknesses $`n\le 6`$. Other recent works are aimed at the question of reorientation transitions in ferromagnetic films or low-dimensional quantum Heisenberg ferromagnets .
When investigating the temperature dependent magnetic and electronic properties of thin local-moment films or at surfaces of real substances it becomes desirable to be able to calculate the magnetic properties of the underlying Heisenberg model with no restrictions on either the film thickness $`n`$ or the spin $`S`$ of the localized moments. We develop a straightforward analytical approach for the case of a Heisenberg film with single-ion anisotropy.
Considering the Heisenberg model,
$$\mathcal{H}_f=-\sum_{ij}J_{ij}\,𝐒_i\cdot 𝐒_j=-\sum_{ij}J_{ij}\left(S_i^+S_j^-+S_i^zS_j^z\right)$$
(1)
in a system with film geometry one comes to the conclusion that due to the Mermin-Wagner theorem the problem cannot have a solution showing collective magnetic order at finite temperatures $`T>0`$.
To steer clear of this obstacle there are two possibilities. First, one can apply a decoupling scheme to the Hamiltonian (1) which breaks the Mermin-Wagner theorem. The most common example in the case of the Heisenberg model would be a mean-field decoupling. For us, the main drawback of the mean-field decoupling is its incapability of describing physical properties at the 2D-3D transition.
When choosing a better decoupling approximation to fulfill the Mermin-Wagner theorem, the original Heisenberg Hamiltonian (1) has to be extended to break the directional symmetry. The most common extensions are the introduction of an anisotropic exchange interaction,
$$-D\sum_{ij}S_i^zS_j^z-D_\mathrm{s}\sum_{i,j\in \mathrm{surf}}S_i^zS_j^z,$$
(2)
and/or the single-ion anisotropy,
$$-D_0\sum_i\left(S_i^z\right)^2-D_{0,\mathrm{s}}\sum_{i\in \mathrm{surf}}\left(S_i^z\right)^2.$$
(3)
In (2) and (3) the first sums run over all lattice sites of the film whereas in the second optional terms the summations include positions within the surface layers of the film, only, according to a possible variation of the anisotropy in the vicinity of the surface.
Extending the original Heisenberg Hamiltonian (1) by (2) or (3) one can now calculate the magnetic properties of films at finite temperatures within a nontrivial decoupling scheme.
For the following we have choosen a single-ion anisotropy which is uniform within the whole film leaving us with the total Hamiltonian:
$$\mathcal{H}=\mathcal{H}_f+\mathcal{H}_A=-\sum_{ij\alpha \beta }J_{ij}^{\alpha \beta }\left(S_{i\alpha }^+S_{j\beta }^-+S_{i\alpha }^zS_{j\beta }^z\right)-D_0\sum_{i\alpha }\left(S_{i\alpha }^z\right)^2,$$
(4)
where we have considered the case of a film built up by $`n`$ layers parallel to two infinitely extended surfaces. Here, as in the following, greek letters $`\alpha `$, $`\beta `$, …, indicate the layers of the film, while latin letters $`i`$, $`j`$, …, number the sites within a given layer. Each layer possesses two-dimensional translational symmetry. Hence, the thermodynamic average of any site dependent operator $`A_{i\alpha }`$ depends only on the layer index $`\alpha `$:
$$A_{i\alpha }A_\alpha .$$
(5)
To derive the layer-dependent magnetizations $`S_\alpha ^z`$ for arbitrary values of the spin $`S`$ of the localized moments we introduce the so-called retarded Callen Green function :
$$G_{ij(a)}^{\alpha \beta }(E)\equiv \langle\langle S_{i\alpha }^+;B_{j\beta }^{(a)}\rangle\rangle_E=\langle\langle S_{i\alpha }^+;\mathrm{e}^{aS_{j\beta }^z}S_{j\beta }^-\rangle\rangle_E.$$
(6)
For the equation of motion of the Callen Green function,
$$EG_{ij(a)}^{\alpha \beta }(E)=\hbar \langle [S_{i\alpha }^+,B_{j\beta }^{(a)}]_-\rangle +\langle\langle [S_{i\alpha }^+,\mathcal{H}]_-;B_{j\beta }^{(a)}\rangle\rangle_E$$
(7)
one needs the inhomogenity,
$$\langle [S_{i\alpha }^+,B_{j\beta }^{(a)}]_-\rangle =\eta _\alpha ^{(a)}\delta _{\alpha \beta }\delta _{ij},$$
(8)
and the commutators,
$`[S_{i\alpha }^+,\mathcal{H}_f]_-`$ $`=`$ $`-2\hbar \sum_{k\gamma }J_{ik}^{\alpha \gamma }\left(S_{i\alpha }^zS_{k\gamma }^+-S_{k\gamma }^zS_{i\alpha }^+\right),`$ (9)
$`[S_{i\alpha }^+,\mathcal{H}_A]_-`$ $`=`$ $`D_0\hbar \left(S_{i\alpha }^+S_{i\alpha }^z+S_{i\alpha }^zS_{i\alpha }^+\right).`$ (10)
For the higher Green function on the right hand side of the equation of motion (7) resulting from the commutator relationship (9) one can apply the Random Phase Approximation (RPA) which has proved to yield reasonable results throughout the entire temperature range:
$`\langle\langle S_{i\alpha }^zS_{k\gamma }^+;B_{j\beta }^{(a)}\rangle\rangle_E`$ $`\approx `$ $`\langle S_\alpha ^z\rangle \langle\langle S_{k\gamma }^+;B_{j\beta }^{(a)}\rangle\rangle_E,`$ (11)
$`\langle\langle S_{k\gamma }^zS_{i\alpha }^+;B_{j\beta }^{(a)}\rangle\rangle_E`$ $`\approx `$ $`\langle S_\gamma ^z\rangle \langle\langle S_{i\alpha }^+;B_{j\beta }^{(a)}\rangle\rangle_E.`$ (12)
For the higher Green functions resulting from the commutator (10) this is not possible due to the strong on-site correlation of the corresponding operators. However, one can look for an acceptable decoupling of the form
$$\langle\langle S_{i\alpha }^+S_{i\alpha }^z+S_{i\alpha }^zS_{i\alpha }^+;B_{j\beta }^{(a)}\rangle\rangle_E=\mathrm{\Phi }_{i\alpha }\langle\langle S_{i\alpha }^+;B_{j\beta }^{(a)}\rangle\rangle_E.$$
(13)
As was shown by Lines an appropriate coefficient $`\mathrm{\Phi }_{i\alpha }=\mathrm{\Phi }_\alpha `$ can be found for any given function $`B_{j\beta }^{(a)}=f\left(S_{j\beta }^-\right)`$, which is all we need to know for the moment. We will come back to the explicit calculation of the $`\mathrm{\Phi }_\alpha `$ later.
Using the relations (8)–(13) and applying a two-dimensional Fourier transform introducing the in-plane wavevector $`𝐤`$ the equation of motion (7) becomes
$`\left(E-\hbar D_0\mathrm{\Phi }_\alpha \right)G_{𝐤(a)}^{\alpha \beta }`$ $`=`$ $`\hbar \eta _\alpha ^{(a)}\delta _{\alpha \beta }`$ (14)
$`+2\hbar \sum_\gamma \left(J_0^{\alpha \gamma }\langle S_\gamma ^z\rangle G_{𝐤(a)}^{\alpha \beta }-J_𝐤^{\alpha \gamma }\langle S_\alpha ^z\rangle G_{𝐤(a)}^{\gamma \beta }\right).`$
Writing equation (14) in matrix form one immediately gets the solution by simple matrix inversion:
$$G_{𝐤(a)}^{\alpha \beta }(E)=\hbar \left(\begin{array}{ccc}\eta _1^{(a)}& & 0\\ & \ddots & \\ 0& & \eta _n^{(a)}\end{array}\right)\left(E𝕀-𝕄\right)^{-1},$$
(15)
where $`𝕀`$ represents the $`n\times n`$ identity matrix and
$$\frac{\left(𝕄\right)^{\alpha \beta }}{\hbar }=\left(D_0\mathrm{\Phi }_\alpha +2\sum_\gamma J_0^{\alpha \gamma }\langle S_\gamma ^z\rangle \right)\delta _{\alpha \beta }-2J_𝐤^{\alpha \beta }\langle S_\alpha ^z\rangle .$$
(16)
The local, i.e. layer-dependent, spectral density, $`S_{𝐤(a)}^\alpha =-\frac{1}{\pi }\mathrm{Im}G_{𝐤(a)}^{\alpha \alpha }`$, can then be written as a sum of $`\delta `$-functions and with (15) one gets:
$$S_{𝐤(a)}^\alpha =\hbar \eta _\alpha ^{(a)}\sum_\gamma \chi _{\alpha \alpha \gamma }(𝐤)\,\delta \left(E-E_\gamma (𝐤)\right),$$
(17)
where $`E_\gamma (𝐤)`$ are the poles of the Green function (15) and $`\chi _{\alpha \alpha \gamma }(𝐤)`$ are the weights of these poles in the diagonal elements of the Green function, $`G_{𝐤(a)}^{\alpha \alpha }`$. Both the poles and the weights can be calculated, e.g., numerically.
Extending the procedure by Callen from 3D to film structures (the only pre-condition for the extension is that the spectral density has the multipole structure (17)) one finds an analytical expression for the layer-dependent magnetizations,
$$\langle S_\alpha ^z\rangle =\hbar \frac{(1+\phi _\alpha )^{2S+1}(S-\phi _\alpha )+\phi _\alpha ^{2S+1}(S+1+\phi _\alpha )}{(1+\phi _\alpha )^{2S+1}-\phi _\alpha ^{2S+1}},$$
(18)
where
$$\phi _\alpha =\frac{1}{N_\mathrm{s}}\sum_𝐤 \sum_\gamma \frac{\chi _{\alpha \alpha \gamma }(𝐤)}{\mathrm{e}^{\beta E_\gamma (𝐤)}-1}.$$
(19)
Here, $`N_\mathrm{s}`$ is the number of atoms in a layer and $`\beta =\frac{1}{k_\mathrm{B}T}`$. The poles and weights in (19) have to be calculated for the special Green function $`G_{𝐤(a)}^{\alpha \alpha }`$ with $`a=0`$ (the parameter $`a`$ had been introduced to derive (19) for arbitrary spin $`S`$). In this case the Callen Green function (6) simply becomes:
$$G_{ij(0)}^{\alpha \beta }=G_{ij}^{\alpha \beta }=\langle\langle S_{i\alpha }^+;B_{j\beta }^{(0)}\rangle\rangle =\langle\langle S_{i\alpha }^+;S_{j\beta }^-\rangle\rangle ,$$
(20)
and, according to (8),
$$\eta _\alpha ^{(0)}=\eta _\alpha =2\hbar \langle S_\alpha ^z\rangle .$$
(21)
Having solved the problem formally we are left with explicitly calculating the coefficients $`\mathrm{\Phi }_\alpha `$ of equation (13). Applying the spectral theorem to (13) for the special case of $`a=0`$ one gets, using elementary commutator relations:
$$\langle S_{j\beta }^-S_{i\alpha }^+(2S_{i\alpha }^z+\hbar )\rangle =\mathrm{\Phi }_{i\alpha }\langle S_{j\beta }^-S_{i\alpha }^+\rangle .$$
(22)
We now define the Green function
$$D_{ji}^{\beta \alpha }=\langle\langle S_{j\beta }^-;C_{i\alpha }\rangle\rangle_E,$$
(23)
where $`C_{i\alpha }`$ is a function of the lattice site. Writing down the equation of motion of $`D_{ji}^{\beta \alpha }`$ in the limit $`D_0\to 0`$,
$$ED_{ji}^{\beta \alpha }(E)=\hbar \langle [S_{j\beta }^-,C_{i\alpha }]_-\rangle +\langle\langle [S_{j\beta }^-,\mathcal{H}_f]_-;C_{i\alpha }\rangle\rangle_E,$$
(24)
and decoupling all the higher Green functions using the RPA one arrives after transformation into the two-dimensional $`𝐤`$-space at:
$$D_𝐤^{\beta \alpha }=\hbar \left(\begin{array}{ccc}\langle [S_1^-,C_1]_-\rangle & & 0\\ & \ddots & \\ 0& & \langle [S_n^-,C_n]_-\rangle \end{array}\right)\left(E𝕀-𝔸\right)^{-1},$$
(25)
where $`𝔸`$ is a matrix which is independent on the choice of $`C_{i\alpha }`$. Now putting $`C_{i\alpha }`$ in (23) in turn equal to $`S_{i\alpha }^+`$ and to $`S_{i\alpha }^+(2S_{i\alpha }^z+\mathrm{})`$ and applying the spectral theorem to equation (25) one eventually gets the relation:
$$\frac{\langle S_{j\beta }^-S_{i\alpha }^+\rangle }{\langle [S_{i\alpha }^-,S_{i\alpha }^+]_-\rangle }=\frac{\langle S_{j\beta }^-S_{i\alpha }^+(2S_{i\alpha }^z+\hbar )\rangle }{\langle [S_{i\alpha }^-,S_{i\alpha }^+(2S_{i\alpha }^z+\hbar )]_-\rangle }.$$
(26)
The coefficients $`\mathrm{\Phi }_{i\alpha }`$ are then with (22) given by
$$\mathrm{\Phi }_{i\alpha }=\frac{\langle [S_{i\alpha }^-,S_{i\alpha }^+(2S_{i\alpha }^z+\hbar )]_-\rangle }{\langle [S_{i\alpha }^-,S_{i\alpha }^+]_-\rangle }=\frac{3\langle (S_{i\alpha }^z)^2\rangle -\hbar ^2S(S+1)}{\langle S_{i\alpha }^z\rangle },$$
(27)
where, along with commutator relations, the identity
$$S_{i\alpha }^\pm S_{i\alpha }^\mp =\hbar ^2S(S+1)\pm \hbar S_{i\alpha }^z-(S_{i\alpha }^z)^2$$
(28)
has been used. To avoid the unknown expectation value $`\langle (S_{i\alpha }^z)^2\rangle `$ we apply the spectral theorem to the spectral density (17) with $`a=0`$ and get using (21):
$$\langle S_\alpha ^-S_\alpha ^+\rangle =2\hbar \langle S_\alpha ^z\rangle \frac{1}{N_\mathrm{s}}\sum_𝐤 \sum_\gamma \frac{\chi _{\alpha \alpha \gamma }(𝐤)}{\mathrm{e}^{\beta E_\gamma (𝐤)}-1}\stackrel{(\text{19})}{=}2\hbar \langle S_\alpha ^z\rangle \phi _\alpha .$$
(29)
Hence, with (28) and (29), we get
$$\langle (S_\alpha ^z)^2\rangle =\hbar ^2S(S+1)-\hbar \langle S_\alpha ^z\rangle (1+2\phi _\alpha ),$$
(30)
and the coefficients $`\mathrm{\Phi }_\alpha `$ can be written in the convenient form
$$\mathrm{\Phi }_\alpha =\frac{2\hbar ^2S(S+1)-3\hbar \langle S_\alpha ^z\rangle (1+2\phi _\alpha )}{\langle S_\alpha ^z\rangle }.$$
(31)
Together with (31), the equations (15), (16), (18), and (19) represent a closed system of equations, which can be solved numerically.
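As an illustration of how this closed system can be iterated, the minimal sketch below (Python/NumPy) solves it for the simplest case of a single sc(100) monolayer ($`n=1`$), with $`\mathrm{}=k_\mathrm{B}=1`$ and the spin measured in units of $`\mathrm{}`$. The values of $`S`$, $`J`$ and $`D_0/J`$ follow the text; the k-grid size, starting values, temperatures and the naive mixed fixed-point iteration are illustrative choices and not taken from the paper.

```python
import numpy as np

S, J, D0 = 3.5, 0.01, 1e-4                 # spin, exchange (eV), anisotropy D0 = 0.01 J (eV)
Nk = 64
k = np.linspace(-np.pi, np.pi, Nk, endpoint=False)
KX, KY = np.meshgrid(k, k)
Jk = 2.0 * J * (np.cos(KX) + np.cos(KY))   # square-lattice J_k
J0 = 4.0 * J                               # J_{k=0}

def callen(phi):
    """Layer magnetization for a given phi, Eq. (18)."""
    a, b = (1.0 + phi) ** (2 * S + 1), phi ** (2 * S + 1)
    return (a * (S - phi) + b * (S + 1 + phi)) / (a - b)

def magnetization(T, iters=2000):
    Sz, phi = S, 0.0
    for _ in range(iters):
        Phi = (2 * S * (S + 1) - 3 * Sz * (1 + 2 * phi)) / Sz   # Eq. (31)
        E = D0 * Phi + 2 * Sz * (J0 - Jk)                       # single magnon branch
        phi = float(np.mean(1.0 / (np.exp(E / T) - 1.0)))       # Eq. (19) for n = 1
        Sz_new = callen(phi)
        if Sz_new < 1e-3:                                       # magnetization lost
            return 0.0
        Sz = 0.5 * (Sz + Sz_new)                                # simple mixing
    return Sz

for T in (0.005, 0.01, 0.02):              # low temperatures in eV
    print(f"T = {T:.3f} eV:  <S^z> = {magnetization(T):.3f}")
```

Scanning the temperature until $`S^z`$ vanishes locates the monolayer Curie temperature; for a genuine $`n`$-layer film the scalar magnon energy and weight are replaced by the eigenvalues and weights of the $`n\times n`$ matrix $`𝕄`$ of Eq. (16).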
All the following calculations have been performed for spin $`S=\frac{7}{2}`$, applicable to a wide range of interesting rare-earth compounds, and for an exchange interaction in tight-binding approximation $`J=0.01\mathrm{eV}`$ which is uniform within the whole film. The case where the exchange integrals in the vicinity of the surfaces are modified has been dealt with by a couple of authors . The single-ion anisotropy, which merely serves to sustain a finite magnetization at finite temperatures, was chosen as $`D_0/J=0.01`$.
Figs. 1 and 2 show the temperature and layer-dependent magnetizations of, respectively, simple cubic (sc) and face-centered cubic (fcc) films with the surfaces parallel to the (100)-planes. For the following $`Z_s`$ means the coordination number of the atoms in the surface layers and $`Z_b`$ is the coordination number in the centre layers of the films. For the case of a monolayer, $`n=1`$, the curves for the sc(100) and the fcc(100) ’film’ are identical, both having the same structure. With increasing film thickness the Curie temperatures of the films increase. For fcc(100) films the increase in $`T_\mathrm{C}`$ is steeper resulting in the limit of thick films in a Curie temperature about twice the value of that of the according sc(100) films due to the higher coordination number of the fcc 3D-crystal ($`Z_{b,fcc}=12`$) compared to the sc 3D-crystal ($`Z_{b,sc}=6`$). The larger difference between surface and centre layer magnetization of the fcc(100) films compared to the sc(100) films can be explained by the lower ratio between $`Z_s`$ and $`Z_b`$ ($`Z_{s,fcc(100)}/Z_{b,fcc}=8/12`$ and $`Z_{s,sc(100)}/Z_{b,sc}=5/6`$).
Concluding, we have shown that the presented approach is a useful and straightforward method for calculating the layer-dependent magnetizations of films of various thicknesses and with arbitrary spin $`S`$ of the localized moments.
We would like to thank P. J. Jensen for helpful discussions and for bringing Ref. to our attention. One of the authors (R. S.) would like to acknowledge the support by the German National Merit Foundation. The support by the Sonderforschungsbereich 290 (”Metallische dünne Filme: Struktur, Magnetismus und elektronische Eigenschaften“) is gratefully acknowledged.
# Standard Model Large-𝐸_𝑇 Processes and Searches for New Physics at HERA
## 1 Introduction
The two experiments H1 and ZEUS at HERA have accumulated over the last six years almost 100 pb⁻¹ of luminosity, providing the first glimpse at the short distance frontier of lepton-nucleon interactions. A substantial effort in analyzing the data has been focused on extending the rejection limits for parameters of various known extensions of the standard model (SM) .
Another approach, which we would like to call generic searches, consists of analyzing a broad class of events characterized by the presence of a large transverse energy particle or a system of particles used as tags for short distance and/or large mass processes . The event selection criteria for such searches are optimized to cover those phase space regions where the standard model predictions are sufficiently precise to detect anomalies rather than to cover the phase space region where anomalies are expected. They allow one to detect anomalies in the relative abundance of several processes. In general, generic searches minimize the chances that unexpected novel phenomena are overlooked if their manifestations in the data do not follow one of the predefined scenarios and enlarge the discovery potential if signatures of new phenomena are weak but present in more than one final state topology.
Both dedicated and generic searches rely on the knowledge of standard model predictions for the distributions of events over the available phase space. This requires dedicated SM calculations and their implementations in Monte Carlo event generators. In our view, these indispensable theoretical tools for searches of novel phenomena at HERA need upgrading. The lack of tools which are applicable over the full phase space leaves an important fraction of registered data un-analyzed. The high-energy, i.e. high-$`y`$, frontier of $`\gamma p`$ scattering is an example of a phase space region unique to HERA which was largely untouched by dedicated searches. In this region both deep inelastic and photoproduction processes contribute and a tedious procedure of matching the respective calculations and their Monte Carlo implementations is required . In dedicated searches, leaving out a fraction of the phase space has only weak impact on the derived exclusion limits. For generic searches, the development of appropriate theoretical tools covering the full phase space, in particular close to its boundaries, is mandatory if one wants to maximize the efficiency for detecting unexpected novel phenomena rather than to establish the rejection limits for the expected ones.
Standard model predictions for many important processes and corresponding Monte Carlo event generators to simulate the bulk of events in $`ep`$ scattering are available. These programs may, however, suffer from approximations which could turn out to be severe in corners of the phase space or for specific final states. As an example we mention the fact that the DJANGO branch of the Monte Carlo program DJANGOH is well-suited for the simulation of the inclusive DIS cross section, but does not take into account effects due to the emission of photons from quarks. For the total cross section this is a reasonable approximation, but is not acceptable when searching for final states containing photons in the vicinity of jets. An example where this is relevant is the search for excited quarks.
The precision of tools for searches is of a different quality compared to that for a precision measurement of structure functions or parameters of the standard model. In general the statistical significance of novel phenomena is weak and a precision of the order of 20 % matches already the statistical and systematic errors of the measurement. Therefore, electroweak radiative corrections are not crucial and pure QED corrections are sufficiently accurate when calculated in the leading logarithmic approximation. QCD corrections, however, need to be controlled.
In some cases significant contributions may arise from processes which are not accessible to perturbative methods, or which are sensitive to soft-gluon resummation. An example for the first case is $`ep\gamma X`$ which receives contributions from $`epep\gamma `$ in the absence of cuts on the hadronic final state , an example for the latter case is the production of two jets with equal $`p_T`$ . Therefore, the scope of the calculations needed for generic searches makes them much more ambitious than in cases where cuts are applied to remove the “difficult” phase space regions.
The main requirement for the theoretical tools used in searches is their completeness: all standard model processes contributing to a particular final state have to be controlled. In determining a complete list of all relevant processes, a close collaboration between theorists and experimentalists is needed because measurement effects significantly enlarge the number of processes which could potentially contribute to a particular final state topology. For example, the production of jets, which could occasionally mimic electrons, muons, taus, photons and, if undetected, neutrinos may contribute to the majority of the final states considered below. The theoretical tools in this case should not only provide the means to control the jet production rate and their spectra but, in addition, to control the probabilities for a jet to fragment into rare particle topologies, like single tracks, jets with only neutral particles, etc. The discussion of these latter aspects is beyond the scope of this paper.
In this note we provide a short review of the standard model calculations and their Monte Carlo implementations which can be used in searches for new physics and point out areas where progress is needed. The discussion is organized according to a possible classification of generic searches: inclusive single-particle spectra for electrons, neutrinos, jets, photons and so on, and spectra for various combinations of these particles are considered in turn.
## 2 Inclusive single-particle spectra
### 2.1 Electrons and Neutrinos
Measurements that only look for electrons or neutrinos (i.e., missing transverse momentum) in the final state correspond to the classical inclusive deep inelastic scattering. The precision of standard model predictions is determined by that of the structure functions input. These are obtained from global fits to a wide variety of data taking into account $`Q^2`$ evolution according to perturbative QCD in next-to-leading order. Theoretically, the predictions are on a sound basis since the operator product expansion approach for the inclusive measurement is well-founded. Therefore the precision is limited by the experimental accuracy of the data that are used in the fits. Recent estimates suggest precisions well below 10 per cent including uncertainties from scale variation and $`\alpha _s`$. For the longitudinal structure function $`F_L`$, where presently available measurements are not very precise, QCD predictions can be used. The inclusive measurement is sensitive to the simulation of hadronic final states only to the extent that the latter modify the reconstructed kinematical variables and enter the determination of experimental corrections.
Electroweak radiative corrections are known to order $`\alpha `$ and can be taken into account in the Monte Carlo program HERACLES . Only small residual uncertainties are expected at large $`Q^2`$. One should, however, keep in mind that taking into account radiative corrections requires the knowledge of structure functions down to $`Q^2=0`$. Therefore, additional uncertainties from low-$`Q^2`$ structure functions are present, although only at the level of a few per cent.
### 2.2 Jets
Jets in high-energy scattering are a genuine QCD testing ground and have therefore received much attention in theoretical calculations: NLO calculations were completed for both $`(1+1)`$- and $`(2+1)`$-jet production in deep inelastic scattering and their implementations in Monte Carlo programs have been available already for some time. Jet cross sections need the definition of a jet finding algorithm and the older calculations were restricted to the modified JADE scheme. More recent progress in Refs. allows jet cross sections to be calculated with more general jet definitions (a comparison of the programs MEPJET , DISENT and DISASTER++ revealed discrepancies, see for details). As shown in the choice of the jet algorithm has considerable influence on the size of scale uncertainties. $`(3+1)`$-jet cross sections are known to leading order only and implemented in the available programs , the LO $`(4+1)`$-jet cross section also in .
Jets arising from photoproduction have also been investigated and NLO calculations have been available for many years . Recent work has improved the understanding of various aspects like scale uncertainties and the matching of theoretical and experimental jet definitions. The efforts of Klasen, Kramer and Pötter have lead to NLO predictions for $`(2+1)`$-jet production also in the transition region from photoproduction to deep inelastic scattering. The results were implemented recently in the Monte Carlo generator JETVIP by Pötter .
The theoretical tools for generic searches in jet production need further improvements. The most important restriction of presently available programs is related to very large $`Q^2`$ where $`Z`$ exchange contributions (and $`W`$ exchange in charged current scattering) are important: these contributions are available only at leading order in the general-purpose programs like LEPTO or DJANGOH . Other possible improvements are related to the treatment of heavy quarks, resolved contributions and, for future experiments, polarized beams. We refer to the accompanying reports of these proceedings for more details.
### 2.3 Photons
Searching for a photon in the final state without requiring additionally a large $`p_T`$ electron or neutrino, one has to cope with contributions from photoproduction (or more generally very low $`Q^2`$), deep inelastic scattering (i.e., large $`Q^2`$) and the transition region connecting these two cases. For the two extreme situations, NLO calculations are available, for photoproduction in Ref. and for DIS in Ref. <sup>2</sup><sup>2</sup>2However, not including $`Z`$ exchange.. The transition region, however, has not yet been investigated.
With the present situation, the calculations for DIS and photoproduction have to be combined by hand with all the difficulties arising when approaches of different authors using different conventions are to be matched. In principle, a cut on $`Q^2`$ should allow one to separate the two cases: the calculation in the DIS region is used for $`Q^2`$ above a lower limit $`Q_{\mathrm{min}}^2`$ of the order of a few GeV<sup>2</sup> and the results for photoproduction can be folded with the Weizsäcker-Williams spectrum of photons originating from the incoming lepton. $`Q_{\mathrm{min}}^2`$ enters the normalization of the Weizsäcker-Williams spectrum. In practice this matching was never done and progress similar to that which resulted in the Monte Carlo program JETVIP would be very useful to avoid this difficulty.
Similarly to the case of radiative corrections to inclusive DIS, there is a contribution from virtual $`e\gamma ^{}`$ Compton scattering producing a photon with large transverse momentum (balanced by the electron $`p_T`$) but no large momentum scale in the hadronic subprocess $`\gamma ^{}pX`$ . Substantial contributions both from quasi-elastic scattering, i.e. with $`epep\gamma `$, as well as inelastic scattering with low-mass hadronic final states (deep-inelastic Compton process) $`epe\gamma X`$ are expected. The first case can be described with the help of the well-known formfactors for elastic $`ep`$ scattering and corresponding predictions can be obtained with the help of the programs HERACLES or COMPTON ; the latter case is less well understood and needs further improvements.
### 2.4 $`W`$ and $`Z`$
The Monte Carlo generator EPVEC based on the calculations by Baur, Vermaseren and Zeppenfeld (for previous calculations see the references in ) has been used in searches for anomalous $`W`$ couplings and as a tool to control the $`W`$ and $`Z`$ contribution to isolated lepton production . EPVEC is presently being checked against an independent calculation of Dubinin and Song . As in the case of inclusive photon and inclusive jet production the main difficulty in the calculation of NLO corrections boils down to matching the DIS and the photoproduction contributions. In Ref. this matching is defined in terms of the virtuality of a quark exchanged in the $`u`$-channel which introduces a cutoff $`u_{\mathrm{min}}`$ and uncertainties related to variations of this unphysical parameter. A better matching scheme would separate the deep inelastic regime from photoproduction in terms of the virtuality of the exchanged photon. Remaining $`u`$-pole singularities can be absorbed into the parton distribution functions in the photon, giving rise to large QCD corrections for the resolved contribution. First numerical results of a corresponding calculation by Nason, Rückl and Spira were reported on this workshop . When finalized, a Monte Carlo implementation of this calculation will allow one to obtain the $`W`$ total cross sections, as well as the spectra at high transverse momentum, with sufficient precision to be useful in searches for anomalies. The spectrum at low $`p_T`$ of the $`W`$ might require resummation of soft gluon contributions.
### 2.5 Muons and taus
Large transverse momentum muons and taus are produced at HERA predominantly as decay products of $`W`$ and $`Z`$ bosons. Their production has been discussed in the previous subsection. A comparably large contribution is due to non-resonant dimuon (ditau) production. Leading-order diagrams in photon-photon interactions constituting a subset of these processes have been calculated in Ref. and subsequently implemented in the Monte Carlo program LPAIR . LPAIR is restricted to the case of large invariant masses of the lepton pair. It is not complete for searches based on a single lepton tag because processes of internal conversion of virtual photons emitted by the quark or by the electron are not included. These processes are dominant for dimuons (ditaus) produced with low invariant mass, and comparable in size to photon-photon interactions at high transverse momentum. The missing contributions have been calculated in Ref. and implemented subsequently in the Monte Carlo program TRIDENT . This program was, however, not used so far in experimental analyses and additional testing seems to be required. A new Monte Carlo program, called GRAPE-Dilepton, is presently being developed on the basis of the GRACE system . It will take into account elastic and quasi-elastic contributions and the complete matrix elements needed in the deep inelastic regime as well as simulation of the hadronic final state. No NLO calculations are available at present to control theoretically the inclusive spectra of muons and taus at HERA at the level of precision required for searches. Progress in this domain has to include a Monte Carlo program implementing all dilepton production processes, including diagrams with on- and off-shell $`Z^0`$’s.
## 3 Multi-body final states
Multi-body final states, especially those involving an unconventional particle composition, are of particular interest for HERA generic searches. When trying to reveal novel phenomena in multi-body final states, the advantage due to higher energies at the Tevatron is balanced by significantly smaller QCD backgrounds at HERA.
NLO calculations are much more involved for multi-body final states and available only for a few cases. Even LO calculations are not worked out for all interesting processes. In such cases the general-purpose program packages CompHEP or GRACE may be helpful to obtain good estimates. It has to be stressed however that additional work will be needed, for example to take into account the effects due to the hadronization of final states.
Searches for anomalies in the production of electron pairs and triplets or photon pairs, as well as those looking for $`e\mu `$, $`e\tau `$, $`\mu \mu `$, $`\tau \tau `$ $`e\mu \mu `$ and $`e\tau \tau `$ cannot rely at present on complete standard model predictions. We refer here to the discussion in section 2.5.
Searches for the anomalous production of an $`e\gamma `$ system made so far were limited to that fraction of the available phase space which can be controlled by the Monte Carlo programs DJANGOH<sup>4</sup><sup>4</sup>4DJANGOH provides an option which allows simulation of the hadronic final state but does not include quarkonic radiation (DJANGO branch) or an option including quarkonic radiation but without hadronization (HERACLES branch). and COMPTON. More recent calculations include QCD corrections to order $`O(\alpha _s)`$. Although these calculations can not easily be combined with more complete Monte Carlo generators, they are useful for comparisons with data corrected for detector effects and give a theoretical handle for studies of the $`e\gamma `$ and $`e\gamma +`$jet systems. Similar calculations for the production of $`\nu \gamma `$ and $`\nu \gamma +`$jet final states exist in LO but remain to be done in NLO.
Significant progress has been made recently in the precision of the standard model calculations for $`e+`$jet and $`e+2`$jets final states. We refer here to the discussion presented in section 2.2. and to the contribution of Pötter and Seymour in these proceedings. As already mentioned, the corresponding calculations for final states with $`\nu +`$jet and $`\nu +2`$ jets remain to be done.
The searches for anomalies in the production of final states containing $`e\nu `$, $`\mu \nu `$, $`\tau \nu `$ as well as $`e\nu +`$jet, $`\mu \nu +`$jet, $`\tau \nu +`$jet, rely on calculations for $`W`$ production which was discussed in section 2.4. A particular need for calculations of QCD corrections for three body final states involving large $`E_T`$ jets has to be underlined in the context of the reported observation of anomalous events of this type in the H1 data . Other multi-body final states where QCD corrections have not yet been calculated are $`\mu +`$jet, $`\tau +`$jet, 3jets, and many more rare final states.
In several multi-particle final states where theoretical control is poor, generic searches can, at present, only be based on the relative abundance of various processes. Simple approximate formulas which, for example, relate the cross section for lepton pair or jet pair production to those for large-$`E_T`$ photon production are known (see for example ) and expected to be precise enough for the present requirements.
## 4 Conclusions
The forseen upgrade of the HERA machine is expected to lead to an increase of the collected luminosity by at least a factor of ten by the year 2005. This improvement deserves being matched by corresponding upgrades of theoretical tools for generic searches in order to fully exploit the search potential at HERA. If deviations from the standard model predictions are observed, a combined treatment of different multi-particle final states is expected to help in understanding the anomalies. For example anomalies in the internal structure of the proton at short distances may reveal themselves by looking at the spectra of jets associated with various multi-body final states . Generic searches are helpful in interlinking the experimental aspects of physically different processes. This facilitates the assessment of the size of systematic measurement errors.
We acknowledge discussions on various aspects of standard model calculations with T. Abe, R. Devenish, D. Graudenz and B. Pötter.
# Ultra High Energy Cosmic Rays and Inflation.
## Introduction
According to the modern tale, all matter in the Universe was created in reheating after inflation. While this happened really long ago and on very small scales, the process is obviously of such vital importance that one may hope to find some observable consequences, specific for particular models of particle physics. And, indeed, we now believe that there can be some clues left. Among those are: topological defect production in non-thermal phase transitions nth , GUT-scale baryogenesis bau , and the generation of a primordial background of stochastic gravitational waves at high frequencies gw , just to mention a few. However, matter appears in many kinds and forms, and it is hard to review all possibilities in one talk. I’ll concentrate on a possible relation to the mounting puzzle of the Ultra High Energy Cosmic Rays (UHECR).
When a proton (or neutron) propagates in the CMB, it gradually loses energy by colliding with photons and creating pions gzk . There is a threshold energy for the process, so it is effective only for very energetic nucleons, which leads to the famous Greisen-Zatsepin-Kuzmin (GZK) cutoff of the high-energy tail of the spectrum of cosmic rays. All this means that detection of, say, a $`3\times 10^{20}`$ eV proton would require its source to be within $`\sim 50`$ Mpc. However, many events above the cut-off were observed by the Yakutsk, Haverah Park, Fly's Eye and AGASA collaborations cr (for a review see Ref. bs ).
Results from the AGASA experiment AGASA are shown in Fig. 1. The dashed curve represents the expected spectrum if conventional extragalactic sources of UHECR were distributed uniformly in the Universe. This curve displays the theoretical GZK cut-off, but we see events which are way above it. (Numbers attached to the data points show the number of events observed in each energy bin.) Note that no candidate astrophysical sources, such as powerful active galactic nuclei, were found in the directions of the six events with $`E>10^{20}`$ eV AGASA .
No conventional explanation for these observations has been found, and the question arises: is it, at last, an indication of the long-awaited new physics?
Many solutions to the puzzle were suggested, which rely on different extensions of the standard model, in one way or the other. Among those are:
* A particle which is immune to CMBR. In this scenario, the primary particle is produced in conventional astrophysical accelerators and is able to travel cosmological distances. There are variations on this scheme. It can be a new exotic particle able to produce normal air showers in Earth’s atmosphere farrar , or it can be an accelerated (anti)neutrino annihilating via the $`Z^0`$ resonance on relic neutrinos in a local high-density neutrino clump, thus producing energetic gammas or nucleons W . A massive neutrino, with $`m_\nu `$ of order eV, is a necessary requirement in this scheme.
* Another possibility is that UHECR are produced when topological defects are destroyed near the lab (on the cosmological scale) hsw . Topological defects considered in these kinds of scenarios include: strings S , superconducting strings hsw , networks of monopoles connected by strings necl , and magnetic monopoles mon .
* Conceptually the simplest possibility is that UHECR are produced (again cosmologically locally) in decays of some new particle relicX . The candidate $`X`$-particle must obviously obey constraints on mass, number density and lifetime.
## UHECR from decaying particles
In order to produce cosmic rays in the energy range $`E>10^{11}`$ GeV, the decaying primary particle has to be heavy, with a mass well above the GZK cut-off, $`m_X>10^{12}`$ GeV. The lifetime, $`\tau _X`$, cannot be much smaller than the age of the Universe, $`\tau _U\sim 10^{10}`$ yr. Given this shortest possible lifetime, the observed flux of UHE cosmic rays will be generated with a rather low density of $`X`$-particles, $`\mathrm{\Omega }_X\sim 10^{-12}`$, where $`\mathrm{\Omega }_X\equiv m_Xn_X/\rho _{\mathrm{crit}}`$, $`n_X`$ is the number density of X-particles and $`\rho _{\mathrm{crit}}`$ is the critical density. On the other hand, X-particles must not overclose the Universe, $`\mathrm{\Omega }_X<1`$. With $`\mathrm{\Omega }_X\sim 1`$, the X-particles may play the role of cold dark matter and the observed flux of UHE cosmic rays can be matched if $`\tau _X\sim 10^{22}`$ yr.
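The two quoted cases reflect a single scaling: the flux from decays is proportional to $`n_X/\tau _X`$, hence to $`\mathrm{\Omega }_X/\tau _X`$ at fixed $`m_X`$. As a rough numerical illustration (a back-of-the-envelope check, not a statement taken from the cited analyses),

$$\frac{\mathrm{\Omega }_X}{\tau _X}\sim \frac{10^{-12}}{10^{10}\,\mathrm{yr}}\sim \frac{1}{10^{22}\,\mathrm{yr}},$$

i.e. raising the abundance to $`\mathrm{\Omega }_X\sim 1`$ requires a lifetime longer by the same twelve orders of magnitude.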
The problem of the particle physics mechanism responsible for a long but finite lifetime of very heavy particles can be solved in several ways. For example, an otherwise conserved quantum number carried by X-particles may be broken very weakly due to instanton transitions, or quantum gravity (wormhole) effects relicX . Other interesting models of superheavy long-lived particles were found in Refs. Xmodels .
Spectra of UHE cosmic rays arising in decays of relic X-particles were successfully fitted to the data for $`m_X`$ in the range $`10^{12}<m_X/\mathrm{GeV}<10^{14}`$ mx .
Here I address the issue of the X-particle abundance. It was noticed CKR ; KT98 that such heavy particles are produced in the early Universe from vacuum fluctuations, and their abundance can naturally come out right if the standard Friedmann epoch of the Universe's evolution was preceded by an inflationary stage. This is a fundamental process of particle creation, unavoidable in a time-varying background, and it requires no interactions. The temporal change of the metric is the sole cause of particle production. Basically, it is the same process which during inflation generated the primordial large-scale density perturbations. No coupling (e.g. to the inflaton or plasma) is needed. All one needs are stable (very long-lived) X-particles with a mass of order the inflaton mass, $`m_X\sim 10^{13}`$ GeV. An inflationary stage is not required to produce superheavy particles from the vacuum. Rather, inflation provides a cut-off for the excessive gravitational production of heavy particles which would happen in a Friedmann Universe if it started from the initial singularity KT98 . The resulting abundance is quite independent of the detailed nature of the particle, which makes the superheavy (quasi)stable X-particle a very interesting dark matter candidate. A new particle needs a good name. I like Wimpzilla wimpZ .
### Friedmann Cosmology.
For particles with conformal coupling to gravity (fermions, or scalars with $`\xi =1/6`$ in the $`\xi R\varphi ^2`$ interaction term with the curvature), it is the particle mass which couples the system to the background expansion and serves as the source of particle creation. Therefore, just on dimensional grounds, we expect $`n_X\propto m_X^3a^{-3}`$ at late times when particle creation diminishes. In Friedmann cosmology, $`a\propto (mt)^\alpha \propto (m/H)^\alpha `$, and the anticipated formula for the X-particle abundance can be parameterised as $`n_X=C_\alpha m_X^3(H/m_X)^{3\alpha }`$. It is the expansion of the Universe which is responsible for particle creation. Therefore, this equation, which describes simple dilution of already created particles, is valid once $`H\ll m_X`$. On the other hand, particles with $`m_X\gg H`$ cannot be created by this mechanism. Creation occurs when $`H\sim m_X`$. The coefficient $`C_\alpha `$ can be found numerically KT98 ; its typical value is $`O(10^2)`$, and we find that stable particles with $`m_X>10^9`$ GeV will overclose the Universe. There is no room for superheavy particles in our Universe if it started from the initial Friedmann singularity KT98 , since in this case the value of the Hubble constant is limited from above only by the Planck scale.
### Inflationary Cosmology.
If there was inflation, the Hubble constant (in effect) did not exceed the inflaton mass, $`H<m_\varphi `$. The mass of the inflaton field has to be $`m_\varphi \approx 10^{13}`$ GeV, as constrained by the amplitude of primordial density fluctuations relevant for large-scale structure formation. Therefore, production of particles with $`m_X>H\sim 10^{13}`$ GeV has to be suppressed in inflationary cosmology. Results of a direct numerical integration of gravitational particle creation in the chaotic inflation model with the potential $`V(\varphi )=m_\varphi ^2\varphi ^2/2`$ are shown in Fig. 2.
This figure was calculated assuming $`T_\mathrm{R}=10^9`$ GeV for the reheating temperature. (At reheating, the entropy of the Universe was created in addition to the X-particles. In general, multiply this figure by the ratio $`T_\mathrm{R}/10^9`$ GeV and divide it by the fractional entropy increase per comoving volume if that was significant at some late epoch.) The reheating temperature is constrained, $`T_\mathrm{R}<10^9`$ GeV, in supergravity theory gtino . We find that $`\mathrm{\Omega }_Xh^2<1`$ if $`m_X\gtrsim (\mathrm{few})\times 10^{13}`$ GeV. This value of the mass is in the range suitable for the explanation of the UHECR events KT98 . Gravitationally created superheavy X-particles can even be the dominant form of matter in the Universe today if they are in this mass range CKR ; KT98 .
## Topological defects and inflation
A decaying topological defect can naturally produce very energetic particles, and this may be related to UHECR hsw -mon ; for recent reviews see bs . However, among the motivations for inflation was the necessity to get rid of unwanted topological defects, and inflation does this job excellently. Since the temperature after reheating is constrained, especially severely in supergravity models, it might be that the Universe was never reheated up to the point of GUT phase transitions. Topological defects with a sufficiently high scale of symmetry breaking then cannot be created. How, then, could topological defects populate the Universe?
The answer may be provided by non-thermal phase transitions nth which can occur during preheating preh after inflation. Explosive particle production caused by the stimulated decay of inflaton oscillations leads to anomalously high field variances which restore symmetries of the theory even if the actual reheating temperature is small. Defects form when the variances are reduced by the continuing expansion of the Universe and the phase transition occurs. This problem is complicated, and while some features can be anticipated and some quantities roughly estimated, the problem requires numerical study. In recent papers defects , defect formation and even the possibility of first-order phase transitions during preheating were demonstrated explicitly. Fig. 3 shows the string distribution in a simulation with symmetry-breaking scale $`\mathrm{v}=3\times 10^{16}`$ GeV, when a pair of “infinite” strings and one big loop had formed. The size of the box is comparable to the Hubble length at this time.
## Conclusions
Next-generation cosmic ray experiments, which will soon be operational, will tell us which models for UHECR may be correct and which have to be ruled out. One unambiguous signature is related to the homogeneity and anisotropy of cosmic rays. If particles immune to CMBR are there, the UHECR events should point towards distant, extraordinary astrophysical sources FB98 . If wimpzillas are in the game, the Galaxy halo will be reflected in the anisotropy of the UHECR flux DT98 . It is remarkable that we might be able to learn about the earliest stages of the Universe’s evolution. Discovery of heavy X-particles would mean that the model of inflation is likely correct, or at least that “standard” Friedmann evolution from the singularity is ruled out, since otherwise X-particles would have been inevitably overproduced KT98 .
# The effect of an external magnetic field on the gas-liquid transition in the Ising spin fluid
## Abstract
The theoretical phase diagram of the magnetic (Ising) lattice fluid in an external magnetic field is presented. It is shown that, depending on the strength of the nonmagnetic interaction between particles, various effects of the external field on the Ising fluid take place. In particular, at moderate values of the nonmagnetic attraction the field effect on the gas-liquid critical temperature is nonmonotonic. A justification of this behavior is given. If short-range correlations are taken into account (within a cluster approach), the Curie temperature also depends on the nonmagnetic interaction.
PACS numbers 64.70.Fx, 77.80.Bh, 75.50.Mm, 64.60.Kw.
Anisotropic liquids are very sensitive materials; examples are nematic liquid crystals and ferrofluids. Many efforts have been made to investigate the effects of the shape and flexibility of the molecules, and of long- and short-range interactions, on the properties of anisotropic liquids. External field effects are still worthier of attention, because the application of an external field allows one to change the properties of anisotropic liquids dynamically (in contrast to the effects of molecular shape, etc., which are static).
In ferrofluids an external magnetic field removes the magnetic order-disorder transition; nevertheless, the first-order transitions between ferromagnetic phases of different densities remain. The external field deforms the phase diagram of a magnetic fluid, shifting the coexistence lines between these phases. Kawasaki studied a magnetic lattice gas , which implements one of the ways to model the properties of magnetic fluids. On the temperature-density phase diagrams in Ref. one can see that the gas-liquid binodal of the magnetic fluid is significantly lowered after the application of an external magnetic field. Vakarchuk and coworkers and Lado et al. studied another model of magnetic fluids, the fluid of hard spheres with embedded Heisenberg spins. They concluded that in such a system the temperature of the gas-liquid critical point (the top of the binodal) increases after the application of an external field. The nature of this discrepancy may be various, because the types of models (continuum fluids and the lattice gas ) as well as the approximations used differ.
In this letter, results of a more detailed investigation of the magnetic (Ising) lattice gas are reported. The first new point is the inclusion of a nonmagnetic interaction between particles. Another consists in overcoming the limitations of the mean field approximation (MFA). It is known that this approximation is good only for very long-range potentials (and becomes accurate for a family of infinite-ranged ones, the so-called Kac potentials ). The MFA cannot reproduce some essential features of systems with nearest-neighbor interactions (such as percolation phenomena in the quenched diluted Ising model, or the differences in magnetic properties between the quenched site-disordered Ising model and the annealed one ). Such drawbacks can be overcome with the two-site cluster approximation (TCA) .
We shall show, within both the MFA and the TCA, that at different values of the nonmagnetic interaction between particles the Ising fluid exhibits various effects of external fields. In particular, at moderate values of the nonmagnetic attraction the field effect on the critical temperature is nonmonotonic.
In lattice models of a fluid, the particles are allowed to occupy only those spatial positions which belong to sites of a chosen lattice. The configurational integral of a simple fluid is thereby replaced by the partition function
$`Z=\text{Sp}\,\mathrm{exp}\left(-\beta \mathcal{H}\right),\qquad \beta =1/(k_\mathrm{B}T),`$ (1)
$`\mathcal{H}=H-\mu N=-\frac{1}{2}\sum _{ij}I_{ij}n_in_j-\mu \sum _in_i,`$ (2)
where $`n_i`$, which equals 0 or 1, is a number of particles at site $`i`$. Sp means a summation over all occupation patterns. The total number $`N`$ of particles is allowed to fluctuate, $`\mu `$ is a chemical potential, which should be determined from the relation
$$N=\left\langle \sum _in_i\right\rangle _{\mathcal{H}};\qquad \left\langle \mathrm{\cdots }\right\rangle _{\mathcal{H}}=Z^{-1}\,\text{Sp}\left(\mathrm{\cdots }\right)\mathrm{exp}(-\beta \mathcal{H}).$$
(3)
Lattice models, because particles cannot approach each other closer than the lattice spacing, automatically preserve an essential feature of molecular interaction: the nonoverlapping of particles. The lattice fluid with nearest-neighbor interactions is known to exhibit the gas-liquid transition only. Nevertheless, a lattice gas with interacting further neighbors possesses a realistic (that is, argon-like) phase diagram with all transitions between the gaseous, liquid and solid phases present .
We shall consider a magnetic fluid in which the particles carry Ising spins $`S_i=\pm 1`$ and there is also an additional exchange interaction between the particles
$`H-\mu N`$ $`=`$ $`-\frac{1}{2}\sum _{ij}J_{ij}S_in_iS_jn_j-h\sum _iS_in_i`$ (5)
$`-\frac{1}{2}\sum _{ij}I_{ij}n_in_j-\mu \sum _in_i.`$
Each site can be in one of three states: empty (i), occupied by a particle with spin up (ii), or occupied by a particle with spin down (iii). The trace in the partition function implies a summation over all $`3^𝒩`$ states, where $`𝒩`$ is the number of sites.
The interaction of fluctuations is totally neglected in the mean field approximation used in the previous studies of the model . This can be partially recovered using the idea of “clusters”. The partition function of a finite group of particles in an external field can be evaluated explicitly. The contribution of the other particles may be expressed in terms of an effective field, and this field has to be evaluated selfconsistently. From this point of view the MFA is a one-site cluster approximation, in which each cluster comprises one site. Increasing the size of the clusters, one may expect to obtain more accurate results. Indeed, the results of the two-site cluster approximation turn out to be accurate for one-dimensional systems and on the Cayley tree . Here we shall formulate such an approximation for the Ising lattice gas with nearest-neighbor interactions. For the sake of brevity we shall not use the cluster expansion formulation, which has some advantages, such as the possibility of calculating higher-order corrections and correlation functions of the model . Instead we shall rely on the first-order approximation and closely follow the derivation by Vaks and Zein . Let us introduce the effective-field Hamiltonian of a single site
$$H_i-\mu n_i\equiv \mathcal{H}_i=-\stackrel{~}{h}S_in_i-\stackrel{~}{\mu }n_i,$$
(6)
where $`\stackrel{~}{h}=h+z\phi `$, $`\stackrel{~}{\mu }=\mu +z\psi `$, $`\phi `$ and $`\psi `$ are effective fields substituting for the interactions with nearest-neighbor sites, and $`z`$ is the first coordination number of the lattice. In the two-site Hamiltonian the interaction between a pair of nearest-neighbor sites is taken into account explicitly
$`H_{ij}-\mu n_i-\mu n_j\equiv \mathcal{H}_{ij}`$ $`=`$ $`-J_{ij}S_in_iS_jn_j-\stackrel{~}{h}^{\prime }S_in_i-\stackrel{~}{h}^{\prime }S_jn_j`$ (8)
$`-I_{ij}n_in_j-\stackrel{~}{\mu }^{\prime }n_i-\stackrel{~}{\mu }^{\prime }n_j,`$
where $`\stackrel{~}{h}^{\prime }=h+z^{\prime }\phi `$, $`\stackrel{~}{\mu }^{\prime }=\mu +z^{\prime }\psi `$, and $`z^{\prime }=z-1`$ because one of the neighbors is already taken into account. The fields have to be found from the selfconsistency conditions that require equality of the average values calculated with the one-site and two-site Hamiltonians. To determine $`\phi `$ and $`\psi `$ it is sufficient to impose these conditions on the average values of the spin $`m=\left\langle S_in_i\right\rangle _{\mathcal{H}}`$ and of the occupation number $`n=\left\langle n_i\right\rangle _{\mathcal{H}}`$,
$$\left\langle n_i\right\rangle _{\mathcal{H}_i}=\left\langle n_i\right\rangle _{\mathcal{H}_{ij}};\qquad \left\langle S_in_i\right\rangle _{\mathcal{H}_i}=\left\langle S_in_i\right\rangle _{\mathcal{H}_{ij}}.$$
(9)
This approximation leads to the following expression for the internal energy
$`U/𝒩`$ $`=`$ $`-\frac{1}{2}J_0\left\langle S_in_iS_jn_j\right\rangle _{\mathcal{H}_{ij}}-h\left\langle S_in_i\right\rangle _{\mathcal{H}_{ij}}`$ (11)
$`-\frac{1}{2}I_0\left\langle n_in_j\right\rangle _{\mathcal{H}_{ij}},`$
where $`J_0=\sum _jJ_{ij}=zK`$ and $`I_0=\sum _jI_{ij}=zV`$ are the integral interaction strengths; $`K`$ and $`V`$ denote, respectively, the magnetic coupling and the nonmagnetic attraction between nearest-neighbor sites. Expression (11) can be computed explicitly in terms of the fields $`\phi `$ and $`\psi `$ and the model parameters. The other thermodynamic potentials can be found in a straightforward way. For example, the grand thermodynamic potential $`\mathrm{\Omega }`$ of the model satisfies the following Gibbs-Helmholtz equation
$$\frac{\partial (\beta \mathrm{\Omega })}{\partial \beta }=U-\mu N.$$
(12)
The solution of this differential equation, taking into account relations (9), reads
$$\beta \mathrm{\Omega }/𝒩=z^{\prime }\mathrm{ln}\,\text{Sp}\,\mathrm{exp}(-\beta \mathcal{H}_i)-\frac{z}{2}\mathrm{ln}\,\text{Sp}\,\mathrm{exp}(-\beta \mathcal{H}_{ij}).$$
(13)
One can construct isotherms of the fluid using the thermodynamic relation $`\mathrm{\Omega }=-PV`$ and expression (13), solving the system of nonlinear selfconsistency equations (9). It is possible, and convenient, to exclude the chemical potential $`\mu `$ and the field $`\psi `$ from the final equations of state:
$`\beta PV/𝒩`$ $`=`$ $`-\mathrm{ln}(1-n)+\frac{z}{2}\mathrm{ln}[(1-n)(1-x)+xr],`$ (14)
$`\mathrm{tanh}\beta \stackrel{~}{h}`$ $`=`$ $`p\frac{\mathrm{sinh}2\beta \stackrel{~}{h}^{\prime }}{\mathrm{cosh}2\beta \stackrel{~}{h}^{\prime }+\mathrm{exp}(-2\beta K)}`$ (16)
$`+(1-p)\mathrm{tanh}\beta \stackrel{~}{h}^{\prime },`$
where
$`p`$ $`=`$ $`1-(1-n)/r,\qquad r=0.5+\sqrt{(n-0.5)^2+n(1-n)/x},`$ (17)
$`x`$ $`=`$ $`\frac{2\,\mathrm{exp}(-\beta V-\beta K)\,\mathrm{cosh}^2\beta \stackrel{~}{h}^{\prime }}{\mathrm{cosh}2\beta \stackrel{~}{h}^{\prime }+\mathrm{exp}(-2\beta K)},`$ (18)
$`p`$ is the probability that a randomly chosen nearest-neighbor site of a given particle is occupied. In the limit $`z\to \infty `$, $`K\to 0`$, $`V\to 0`$ (keeping $`J_0`$ and $`I_0`$ constant), the TCA formulae (14)–(18) turn into the results of the MFA.
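To make the scheme concrete, here is a minimal numerical sketch of how one might solve Eqs. (14)–(18) for a single isotherm. It assumes the equations exactly as written above; the lattice, the parameter values, the fixed-point iteration and all function names are illustrative choices of this sketch, not those of the original calculation.

```python
import numpy as np

# Illustrative parameters (this sketch's choices): simple cubic lattice, v = V/K = 2
z = 6                 # first coordination number
K = 1.0               # magnetic coupling between nearest neighbors
V = 2.0               # nonmagnetic attraction, so v = I0/J0 = V/K = 2
h = 0.01              # external magnetic field
T = 1.0 * z * K       # temperature of order J0 = z*K
beta = 1.0 / T

def pair_quantities(n, htp):
    """x, r and p of Eqs. (17)-(18) at density n and effective field h-tilde-prime."""
    x = (2.0 * np.exp(-beta * (V + K)) * np.cosh(beta * htp) ** 2
         / (np.cosh(2.0 * beta * htp) + np.exp(-2.0 * beta * K)))
    r = 0.5 + np.sqrt((n - 0.5) ** 2 + n * (1.0 - n) / x)
    p = 1.0 - (1.0 - n) / r
    return x, r, p

def solve_field(n, iters=2000, mix=0.5):
    """Damped fixed-point iteration for the selfconsistency equation (16)."""
    phi = 0.1
    for _ in range(iters):
        htp = h + (z - 1) * phi
        x, r, p = pair_quantities(n, htp)
        rhs = (p * np.sinh(2.0 * beta * htp)
               / (np.cosh(2.0 * beta * htp) + np.exp(-2.0 * beta * K))
               + (1.0 - p) * np.tanh(beta * htp))
        phi_new = (np.arctanh(rhs) / beta - h) / z   # invert tanh(beta*(h + z*phi)) = rhs
        phi = mix * phi_new + (1.0 - mix) * phi
    return phi

def reduced_pressure(n):
    """beta*P per site from Eq. (14)."""
    phi = solve_field(n)
    x, r, _ = pair_quantities(n, h + (z - 1) * phi)
    return -np.log(1.0 - n) + 0.5 * z * np.log((1.0 - n) * (1.0 - x) + x * r)

# One isotherm; a region where the pressure decreases with n signals the spinodal
for n in np.linspace(0.05, 0.95, 19):
    print(f"n = {n:.2f}   beta*P = {reduced_pressure(n):.4f}")
```

Repeating this over a grid of temperatures and applying the Maxwell construction to the nonmonotonic isotherms would reproduce the binodals discussed below.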
At low temperatures the isotherms of the fluid contain “liquid” and “gaseous” parts separated by a region of negative compressibility. The thermodynamic states in which the compressibility of the uniform fluid is negative ($`\partial P/\partial n<0`$) constitute the spinodal region on the temperature-density phase diagram. In this region the fluid is thermodynamically unstable and must separate into phases of different densities. The densities of the coexisting phases can be determined with the Maxwell rule of areas applied to the nonmonotonic sections of the isotherms. Also, at sufficiently low temperatures and $`h=0`$, an isotherm of the model has a break at the density at which a nonzero solution of the selfconsistency equation (16) for the field $`\phi `$ appears, and a second-order phase transition to the ferromagnetic phase occurs.
Figure 1a shows the temperature-density phase diagram of the model within the mean field approximation for different model parameters. There are three families of lines for three values of the nonmagnetic interaction strength $`v=I_0/J_0`$. The bold lines correspond to the zero-field case ($`h=0`$). The straight line is the line of Curie points that separates the paramagnetic and ferromagnetic regions. Within the mean field approximation the slope of the Curie line is independent of $`v`$. Under the binodals (the convex lines resting on the points (0,0) and (1,0) of the phase diagram) a phase separation takes place: one of the phases (vapor) is rarefied, the other (liquid) is denser. The nonmagnetic attraction between particles, of course, favors the phase separation: the binodal moves upward with increasing $`v`$. At small $`v`$ the phase diagram possesses a tricritical point: the top of the binodal lies on the Curie line, the liquid is ferromagnetic and the gas is paramagnetic. The tricritical point disappears at large $`v`$; in this case the top of the binodal deviates from the Curie line into the paramagnetic region.
The external field removes the ferromagnetic transition and therefore eliminates the Curie line. The gas-liquid binodals in the presence of the external magnetic field are depicted with thin lines. The attached numbers are the strengths of the field $`h/J_0`$. At small $`v`$ there is a temperature interval where the external field suppresses phase separation: the top of the binodal shifts downward, in agreement with the results of Kawasaki . Nevertheless, at large nonmagnetic attractions (e.g., $`v=4`$) the reverse field effect takes place. At moderate values of the nonmagnetic interaction (for example, $`v=2`$) the field effect is nonmonotonic: weak fields lower the top of the binodal, while stronger fields shift it up.
In Fig. 1b one can see that the TCA, besides quantitative differences, gives some qualitative corrections to the MFA results. Within the TCA the Curie line becomes slightly concave, and the nonmagnetic attraction between particles increases the Curie temperature. The latter effect can be justified by qualitative arguments. Indeed, the nonmagnetic attraction increases the probability that a randomly chosen pair of nearest-neighbor sites is occupied. Since at these sites the particles interact magnetically, the magnetic interaction becomes more effective, and the Curie temperature increases as well. Therefore the account of density fluctuations in the TCA leads to a dependence of the Curie temperature on $`v`$. In the case of the incompressible fluid ($`n=1`$) density fluctuations are absent, and the Curie temperature is independent of the nonmagnetic attraction.
The TCA predictions concerning the effect of the field support the MFA results. The variety of field effects may be explained by the existence of two competing tendencies. First, the external field aligns the spins, which leads to a more effective attraction between particles (recall that at $`v=0`$ particles with parallel spins ($`S_iS_j=1`$) attract and those with opposite spins ($`S_iS_j=-1`$) repel). This raises the binodal (for example, in simple nonmagnetic fluids the binodal goes up when the interaction increases). The second tendency takes place if the susceptibility of the rarefied phase is larger than that of the coexisting dense phase. In this case the magnetization and, consequently, the effective attraction between particles grow faster in the rarefied phase. This decreases the energetic gain of the phase separation. Therefore the second tendency suppresses the gas-liquid separation in the fluid and counteracts the first tendency. The second tendency is very strong at $`h=0`$ and $`v=0`$ in the region of the tricritical point, where the vapor (paramagnetic) branch of the binodal almost coincides with the Curie line (where the susceptibility tends to infinity), whereas the branch of the coexisting liquid phase rapidly deviates from the Curie line. As a result, the external field lowers the top of the binodal. The second tendency weakens and disappears when the susceptibilities of the coexisting phases become comparable; this happens, for example, if both the liquid and vapor phases are paramagnetic. The behavior of the $`v=4`$ binodal (see Fig. 1) demonstrates this feature. The relation between the susceptibilities results from various factors. For example, the short range of the interactions levels the susceptibilities and weakens the second tendency . This can be seen from the following observation of the field effect at $`v=2`$: in Fig. 1a the top of the binodal at $`h=0.01`$ is higher than that at $`h=0.1`$, whereas in Fig. 1b the reverse situation takes place. Since the results of the MFA (as well as those of the TCA in the limit $`z\to \infty `$) are correct for long-range potentials, whereas for our case the TCA is much more accurate, the corrections provided by the TCA have to be attributed to differences between systems with long-range and short-range potentials.
A still more illuminating confirmation of the “bi-tendency” explanation can be seen in Fig. 2. It shows a phase diagram of special topology, which occurs at intermediate $`v`$. The model at $`h=0`$ and $`t=0.63`$ undergoes two first-order phase transitions. At this temperature the fluid can be in three phases: paramagnetic gas (at $`n<0.35`$), paramagnetic liquid ($`0.65<n<0.69`$), and ferromagnetic liquid ($`n>0.83`$). What we would like to emphasize is that weak external fields (e.g., $`h/J_0=0.01`$) raise the binodal at $`n=0.5`$ and lower it at $`n=0.75`$. Such behavior fits completely into the “bi-tendency” explanation: at $`n=0.5`$ and $`h=0`$ both phases are paramagnetic, the second tendency is absent, and therefore the external field favors the phase separation; at $`n=0.75`$ the second tendency wins at small fields, as in the case $`v=2`$ (see Fig. 1).
One can see that the lattice-gas approach may be successfully used for the description of complex fluids when continuum approaches lead to overly complex calculations or do not give satisfactory results. It is exactly this situation that occurs when one determines the influence of the nonmagnetic attraction on the Curie temperature . In this case, the account of short-range correlations within the cluster approach yields qualitatively new results in comparison with current continuum methods.
# Receiver-Operating-Characteristic Analysis Reveals Superiority of Scale-Dependent Wavelet and Spectral Measures for Assessing Cardiac Dysfunction
## Abstract
Receiver-operating-characteristic (ROC) analysis was used to assess the suitability of various heart rate variability (HRV) measures for correctly classifying electrocardiogram records of varying lengths as normal or revealing the presence of heart failure. Scale-dependent HRV measures were found to be substantially superior to scale-independent measures (scaling exponents) for discriminating the two classes of data over a broad range of record lengths. The wavelet-coefficient standard deviation at a scale near 32 heartbeat intervals, and its spectral counterpart near 1/32 cycles per interval, provide reliable results using record lengths just minutes long. A jittered integrate-and-fire model built around a fractal Gaussian-noise kernel provides a realistic, though not perfect, simulation of heartbeat sequences.
PACS number(s) 87.10.+e, 87.80.+s, 87.90.+y
Though the notion of using heart rate variability (HRV) analysis to assess the condition of the cardiovascular system stretches back some 40 years, its use as a noninvasive clinical tool has only recently come to the fore . A whole host of measures, both scale-dependent and scale-independent, have been added to the HRV armamentarium over the years.
One of the more venerable among the many scale-dependent measures in the literature is the interbeat-interval (R-R) standard deviation $`\sigma _{\mathrm{int}}`$ . The canonical example of a scale-independent measure is the scaling exponent $`\alpha _S`$ of the interbeat-interval power spectrum, associated with the decreasing power-law form of the spectrum at sufficiently low frequencies $`f`$: $`S(f)\propto f^{-\alpha _S}`$ . Other scale-independent measures have been developed by us , and by others .
One factor that can confound the reliability of a measure is the nonstationarity of the R-R time series. Multiresolution wavelet analysis provides an ideal means of decomposing a signal into its components at different scales , and at the same time has the salutary effect of eliminating nonstationarities . It is therefore ideal for examining both scale-dependent and scale-independent measures; it is in this latter capacity that it provides an estimate of the wavelet scaling exponent $`\alpha _W`$ .
We recently carried out a study in which wavelets were used to analyze the R-R interval sequence from a standard electrocardiogram (ECG) database . Using the wavelet-coefficient standard deviation $`\sigma _{\mathrm{wav}}(m)`$, where $`m=2^r`$ is the scale and $`r`$ is the scale index, we discovered a critical scale window near $`m=32`$ interbeat intervals over which it was possible to perfectly discriminate heart-failure patients from normal subjects. The presence of this scale window was confirmed in an Israeli-Danish study of diabetic patients who had not yet developed clinical signs of cardiovascular disease . These two studies , in conjunction with our earlier investigations which revealed a similar critical scale window in the counting statistics of the heartbeat (as opposed to the time-interval statistics considered here), lead to the recognition that scales in the vicinity of $`m=32`$ enjoy a special status. This conclusion has been borne out for a broad range of analyzing wavelets, from Daubechies 2-tap (Haar) to Daubechies 20-tap (higher order analyzing wavelets are suitable for removing polynomial nonstationarities ). It is clear that scale-dependent measures \[such as $`\sigma _{\mathrm{wav}}(32)`$\] substantially outperform scale-independent ones (such as $`\alpha _S`$ and $`\alpha _W`$) in their ability to discriminate patients with certain cardiac dysfunctions from normal subjects (see also ).
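For illustration, a minimal sketch of computing the wavelet-coefficient standard deviation at a single scale with the Haar wavelet is given below; the normalization convention, the surrogate data and the function names are assumptions of this sketch rather than the exact procedure of the cited studies.

```python
import numpy as np

def haar_wavelet_std(rr_intervals, scale):
    """Standard deviation of Haar wavelet coefficients at a single scale.

    `scale` is the number m = 2**r of interbeat intervals spanned by one
    wavelet; each coefficient is the normalized difference between the sums
    over the second and first halves of a non-overlapping window."""
    x = np.asarray(rr_intervals, dtype=float)
    m = int(scale)
    n_windows = len(x) // m
    windows = x[:n_windows * m].reshape(n_windows, m)
    half = m // 2
    coeffs = (windows[:, half:].sum(axis=1) - windows[:, :half].sum(axis=1)) / np.sqrt(m)
    return coeffs.std(ddof=1)

# Surrogate example: white-noise "R-R intervals" around 0.8 s
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(75821)
for r in range(1, 8):
    m = 2 ** r
    print(f"m = {m:3d}   sigma_wav(m) = {haar_wavelet_std(rr, m):.4f}")
```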
The reduction in the value of the wavelet-coefficient standard deviation $`\sigma _{\mathrm{wav}}(32)`$ that leads to the scale window occurs not only for heart-failure patients , but also for heart-failure patients with atrial fibrillation , diabetic patients , heart-transplant patients , and in records preceding sudden cardiac death . The depression of $`\sigma _{\mathrm{wav}}(32)`$ at these scales is likely associated with the impairment of autonomic nervous system function. Baroreflex modulations of the sympathetic or parasympathetic tone typically lie in the range 0.04–0.09 Hz (11–25 sec), which corresponds to the time range where $`\sigma _{\mathrm{wav}}(m)`$ is reduced.
The perfect separation achieved in our initial study of 20-h Holter-monitor recordings endorses the choice of $`\sigma _{\mathrm{wav}}(32)`$ as a useful diagnostic measure. The results of most studies are seldom so clear-cut, however. When there is incomplete separation between two classes of subjects, as observed for other less discriminating measures using these identical long data sets , or when our measure is applied to large collections of out-of-sample or reduced-length data sets , an objective means for determining the relative diagnostic abilities of different measures is required.
ROC Analysis.– Receiver-operating-characteristic (ROC) analysis is an objective and highly effective technique for assessing the performance of a measure when it is used in binary hypothesis testing. In this format, a data sample is assigned to one of two hypotheses or classes (e.g., normal or pathologic) depending on the value of some measured statistic relative to a threshold value. The efficacy of a measure is then judged on the basis of its sensitivity (the proportion of pathologic patients correctly identified) and its specificity (the proportion of control subjects correctly identified). The ROC curve is a graphical presentation of sensitivity versus $`1-`$specificity as a threshold parameter is swept (see Fig. 1).
The area under the ROC curve serves as a well-established index of diagnostic accuracy ; a value of 0.5 arises from assignment to a class by pure chance, whereas the maximum value of 1.0 corresponds to perfect assignment (unity sensitivity for all values of specificity). ROC analysis can be used to choose the best of a host of different candidate diagnostic measures by comparing their ROC areas, or to establish for a single measure the tradeoff between reduced data length and misidentifications (misses and false positives) by examining ROC area as a function of record length (see Fig. 2). A minimum record length can then be specified to achieve acceptable classification. Because ROC analysis relies on no implicit assumptions about the statistical nature of the data set , it is more reliable and appropriate for analyzing non-Gaussian time series than are measures of statistical significance such as the p-value and $`d^{\prime }`$, which are expressly designed for signals with Gaussian statistics . Moreover, ROC curves are insensitive to the units employed (e.g., spectral magnitude, magnitude squared, or log magnitude); ROC curves for a measure $`M`$ are identical to those for any monotonic transformation thereof such as $`M^x`$ or $`\mathrm{log}(M)`$. In contrast, the values of $`d^{\prime }`$, and its closely related cousins, change under such transformations. Unfortunately, this is not always recognized, which leads some authors to specious conclusions .
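A minimal sketch of constructing an ROC curve and its area by sweeping a threshold over two groups of measured values follows; the toy data, the convention that the pathologic group has the smaller values, and the trapezoidal integration are illustrative assumptions of this sketch.

```python
import numpy as np

def roc_curve_and_area(pathologic, control):
    """Sweep a threshold over all observed values and return the ROC curve
    (1 - specificity, sensitivity) together with the area under it.
    The pathologic group is assumed to have the *smaller* values of the
    measure, as for sigma_wav(32) in heart failure; flip the comparisons
    if the opposite holds."""
    pathologic = np.asarray(pathologic, dtype=float)
    control = np.asarray(control, dtype=float)
    thresholds = np.sort(np.concatenate([pathologic, control]))
    sens = np.array([(pathologic <= t).mean() for t in thresholds])
    fpr = np.array([(control <= t).mean() for t in thresholds])
    order = np.argsort(fpr)
    fpr = np.concatenate(([0.0], fpr[order]))    # include the (0, 0) corner
    sens = np.concatenate(([0.0], sens[order]))
    return fpr, sens, np.trapz(sens, fpr)

# Toy example: two overlapping groups of 12 measure values each
rng = np.random.default_rng(1)
heart_failure = rng.normal(0.02, 0.01, 12)   # reduced sigma_wav(32)
normal_group = rng.normal(0.05, 0.01, 12)
fpr, sens, area = roc_curve_and_area(heart_failure, normal_group)
print(f"ROC area = {area:.3f}  (0.5 = chance assignment, 1.0 = perfect)")
```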
Scale-Dependent vs Scale-Independent Measures.– Wavelet analysis provides a ready comparison for scale-dependent and scale-independent measures since it reveals both. ROC curves constructed using 75,821 R-R intervals from each of the 24 data sets (12 heart failure, 12 normal) are presented in Fig. 1 (left) for the wavelet measure $`\sigma _{\mathrm{wav}}(32)`$ (using the Haar wavelet) as well as for the wavelet measure $`\alpha _W`$. It is clear from Fig. 1 that the area under the $`\sigma _{\mathrm{wav}}(32)`$ ROC curve is unity, indicating perfect discriminability. This scale-dependent measure clearly outperforms the scale-independent measure $`\alpha _W`$, which has a significantly smaller area. These results are found to be essentially independent of the analyzing wavelet .
We now use ROC analysis to quantitatively compare the tradeoff between reduced record length and misidentifications for this standard set of heart-failure patients using three scale-dependent and three scale-independent measures. In the first category are the wavelet-coefficient standard deviation $`\sigma _{\mathrm{wav}}(32)`$, its spectral counterpart $`S(1/32)`$ , and the interbeat-interval standard deviation $`\sigma _{\mathrm{int}}`$. In the second category, we consider the wavelet scaling exponent $`\alpha _W`$, the spectral scaling exponent $`\alpha _S`$, and a scaling exponent $`\alpha _D`$ calculated according to detrended fluctuation analysis (DFA) .
In Fig. 2 (left) we present the ROC area, as a function of R-R interval record length, for these six measures. The area under the ROC curves for the full records forms the rightmost point of the ROC-area curves. The files are then divided into smaller segments of length $`L`$. The area under the ROC curve is computed for the first such segment for all six measures, then for the second segment, and so on for all segments of length $`L`$. From the $`L_{\mathrm{max}}/L`$ values of the ROC area, the mean and standard deviation are computed. The lengths $`L`$ employed range from $`L=2^6=64`$ to $`L=2^{16}=65,536`$ in powers of two.
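The record-length study can be sketched in the same spirit; the helper below obtains the ROC area from the equivalent Mann-Whitney statistic, and the surrogate records, segment bookkeeping and function names are again illustrative assumptions.

```python
import numpy as np

def roc_area(path_vals, ctrl_vals):
    """ROC area via the equivalent Mann-Whitney statistic: the probability
    that a control value exceeds a pathologic value (ties count one half)."""
    greater = ctrl_vals[None, :] > path_vals[:, None]
    ties = ctrl_vals[None, :] == path_vals[:, None]
    return (greater + 0.5 * ties).mean()

def roc_area_vs_length(path_records, ctrl_records, measure, lengths):
    """Mean and standard deviation of the ROC area over all consecutive
    segments of length L, for each L in `lengths`."""
    out = {}
    for L in lengths:
        n_seg = min(len(r) for r in path_records + ctrl_records) // L
        areas = []
        for k in range(n_seg):
            pv = np.array([measure(r[k * L:(k + 1) * L]) for r in path_records])
            cv = np.array([measure(r[k * L:(k + 1) * L]) for r in ctrl_records])
            areas.append(roc_area(pv, cv))
        out[L] = (np.mean(areas), np.std(areas))
    return out

# Toy usage: surrogate records classified by the interval standard deviation
rng = np.random.default_rng(2)
path_records = [0.60 + 0.02 * rng.standard_normal(2 ** 16) for _ in range(12)]
ctrl_records = [0.80 + 0.05 * rng.standard_normal(2 ** 16) for _ in range(12)]
results = roc_area_vs_length(path_records, ctrl_records, np.std,
                             [2 ** r for r in range(6, 17)])
for L, (mean_a, std_a) in sorted(results.items()):
    print(f"L = {L:6d}   ROC area = {mean_a:.3f} +/- {std_a:.3f}")
```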
The best performance is achieved by $`\sigma _{\mathrm{wav}}(32)`$ and $`S(1/32)`$, both of which attain unity area (perfect separation) for sufficiently long R-R sequences. Even for fewer than 100 heartbeat intervals, corresponding to just a few minutes of data, these measures provide excellent results (in spite of the fact that both diurnal and nocturnal records are included). $`\sigma _{\mathrm{int}}`$ does not perform quite as well. The worst performance, however, is provided by the three scaling exponents $`\alpha _W`$, $`\alpha _S`$, and $`\alpha _D`$, confirming our previous findings . Moreover, results obtained from the different power-law estimators differ widely , suggesting that there is little merit in the concept of a single exponent, much less a “universal” one , for characterizing the human heartbeat sequence. In a recent paper Amaral et al. conclude exactly the opposite, that the scaling exponents provide the best performance. This is because they improperly make use of the Gaussian-based measures $`d^2`$ and $`\eta `$, which are closely related to $`d^{\prime }`$, rather than ROC analysis. These same authors also purport to glean information from higher moments of the wavelet coefficients, but such information is not reliable because estimator variance increases with moment order. The results presented here accord with those obtained in a detailed study of 16 different measures of HRV . There are vast differences in the time required to compute these measures, however: for 75,821 interbeat intervals, $`\sigma _{\mathrm{wav}}(32)`$ requires the shortest time (20 msec) whereas DFA$`(32)`$ requires the longest time (650,090 msec).
It will be highly useful to evaluate the relative performance of these measures for other records, both normal and pathologic. In particular the correlation of ROC area with severity of cardiac dysfunction should be examined.
An issue of importance is whether the R-R sequences, and therefore the ROC curves, arise from deterministic chaos . We have carried out a phase-space analysis in which differences between adjacent R-R intervals are embedded. This minimizes correlation in the time series, which can interfere with the detection of deterministic dynamics. The results indicate that the behavior of the underlying R-R sequences, both normal and pathological, appears to have stochastic rather than deterministic origins .
Generating a realistic heartbeat sequence.– The generation of a mathematical point process that faithfully emulates the human heartbeat could be of importance in a number of venues, including pacemaker excitation. Integrate-and-fire (IF) models, which are physiologically plausible, have been developed for use in cardiology. Berger et al. , for example, constructed an integrate-and-fire model in which an underlying rate function was integrated until it reached a fixed threshold, whereupon a point event was triggered and the integrator reset. Improved agreement with experiment was obtained by modeling the stochastic component of the rate function as band-limited fractal Gaussian noise (FGN), which introduces scaling behavior into the heart rate, and setting the threshold equal to unity . This fractal-Gaussian-noise integrate-and-fire (FGNIF) model has been quite successful in fitting a whole host of interval- and count-based measures of the heartbeat sequence for both heart-failure patients and normal subjects . However, it is not able to accommodate the differences observed in the behavior of $`\sigma _{\mathrm{wav}}(m)`$ for the two classes of data.
To remedy this defect, we have constructed a jittered version of this model which we dub the fractal-Gaussian-noise jittered integrate-and-fire (FGNJIF) model . The occurrence time of each point of the FGNIF is jittered by a Gaussian distribution of standard deviation $`J`$. Increasing the jitter parameter imparts additional randomness to the R-R time series at small scales, thereby increasing $`\sigma _{\mathrm{wav}}`$ at small values of $`m`$ and, concomitantly, the power spectral density at large values of the frequency $`f`$. The FGNJIF simulation does a rather good job of mimicking patient and control data for a number of key measures used in heart-rate-variability analysis. The model is least successful in fitting the interbeat-interval histogram $`p_\tau (\tau )`$, particularly for heart-failure patients. This indicates that a mechanism other than jitter for increasing $`\sigma _{\mathrm{wav}}`$ at low scales should be sought .
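A minimal sketch of this kind of simulation is given below; the spectral synthesis of the fractal Gaussian noise, the rate normalization, the time step and all parameter values are assumptions of the sketch, not the construction used in the works cited above.

```python
import numpy as np

def fractal_gaussian_noise(n, alpha, rng):
    """Approximate fractal Gaussian noise with spectrum S(f) ~ 1/f**alpha,
    synthesized by assigning random phases to a power-law Fourier amplitude."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amplitude = np.zeros_like(freqs)
    amplitude[1:] = freqs[1:] ** (-alpha / 2.0)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    noise = np.fft.irfft(amplitude * np.exp(1j * phases), n)
    return noise / noise.std()

def fgnjif_event_times(duration, mean_rate, alpha, sigma_rate, jitter, rng, dt=0.01):
    """Jittered integrate-and-fire driven by a fractal-Gaussian-noise rate:
    integrate the rate up to a unit threshold, emit an event, reset, and then
    jitter each event time by a zero-mean Gaussian of standard deviation `jitter`."""
    n_steps = int(duration / dt)
    rate = mean_rate * (1.0 + sigma_rate * fractal_gaussian_noise(n_steps, alpha, rng))
    rate = np.clip(rate, 0.0, None)        # the rate cannot be negative
    events, accumulator = [], 0.0
    for i in range(n_steps):
        accumulator += rate[i] * dt
        if accumulator >= 1.0:             # fixed threshold equal to unity
            events.append(i * dt + rng.normal(0.0, jitter))
            accumulator = 0.0
    return np.sort(np.array(events))

# Example: a ~20-minute record at roughly one beat per second
rng = np.random.default_rng(3)
beats = fgnjif_event_times(duration=1200.0, mean_rate=1.1, alpha=1.0,
                           sigma_rate=0.1, jitter=0.02, rng=rng)
rr = np.diff(beats)
print(f"{len(rr)} intervals, mean = {rr.mean():.3f} s, sd = {rr.std():.3f} s")
```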
It is of interest to examine the global performance of the FGNJIF model using the collection of 24 data sets. To achieve this we carried out FGNJIF simulations using parameters comparable with the actual data and constructed simulated ROC curves for the measures $`\sigma _{\mathrm{wav}}(32)`$ and $`\alpha _W`$ as shown in Fig. 1 (right). Similar simulations for ROC area versus record length are displayed in Fig. 2 (right) for the six experimental measures considered. Overall, the global simulations (right-hand sides of Figs. 1 and 2) follow the trends of the data (left-hand sides of Figs. 1 and 2) reasonably well, with the exception of $`\sigma _{\mathrm{int}}`$. This failure is linked to the inability of the simulated results to emulate the observed interbeat-interval histograms. It will be of interest to consider modifications of the FGNIF model that might bring the simulated ROC curves into better accord with the data-based curves.
# Does Good Mutation Help You Live Longer?
## Abstract
We study the dynamics of an age-structured population in which the life expectancy of an offspring may be mutated with respect to that of its parent. When advantageous mutation is favored, the average fitness of the population grows linearly with time $`t`$, while in the opposite case the average fitness is constant. For no mutational bias, the average fitness grows as $`t^{2/3}`$. The average age of the population remains finite in all cases and paradoxically is a decreasing function of the overall population fitness.
In this letter, we investigate the role of mutation on the age distribution and fitness of individuals within a simple age-structured population dynamics model. The basic feature of our model is that the life expectancy of an offspring, which measures its fitness, may be mutated with respect to that of its parent. While age-structured population models have been studied previously, relatively little is known about the role of mutation. When the individual reproduction rate is the fitness measure and the population is regulated by externally imposed death, mutation leads to predominance by the fittest species. In a related vein, it was recently shown that longevity is heritable within the Penna bit-string model of aging. In these studies, the role of positive mutations was central. Our focus is quite different, as we study the dynamics of the fitness and age in a self-interacting population as a function of the advantageous and deleterious mutation rates.
When advantageous mutation is favored, that is, the offspring life expectancy (fitness) is greater than that of its parent, the fitness distribution of the population approaches a Gaussian with average fitness growing linearly in time and dispersion increasing as $`t^{1/2}`$. Conversely, when deleterious mutation is more likely, there is a $`t^{-2/3}`$ approach to a steady fitness distribution. In the absence of mutational bias, the fitness distribution again approaches a Gaussian, with average fitness growing as $`t^{2/3}`$ and width growing as $`t^{1/2}`$. The average age of the population reaches a steady value in all cases and, surprisingly, is a decreasing function of the average fitness. Therefore, within our model, a fitter population leads to a decreased individual lifetime.
Our model is a simple population dynamics scenario which incorporates age structure and mutation. This dynamics is based on the logistic model, $`\dot{N}=bN-\gamma N^2`$, in which a population with density $`N(t)`$ evolves both by birth at rate $`b`$ and death at rate $`\gamma N`$, with steady-state solution $`N_{\infty }=b/\gamma `$. The crucial new element in our model is that the life expectancy of each newborn may be mutated by $`\pm \tau `$ (with $`|\tau |=1`$ without loss of generality) with respect to that of its parent. We also assume a constant age-independent mortality rate and birth rate for each individual.
Each of these features represents an idealization of reality; for example, it would be more realistic to incorporate a mortality rate which is an increasing function of age. We shall argue below that our choice of an age-independent mortality leads to behavior which applies to systems with realistic mortality rates. The nature of our results also suggests that the details of the mutation-driven shift in offspring life expectancy are not crucial.
Let $`C_n(a,t)`$ be the density of individuals with life expectancy $`n\ge 1`$ and age $`a`$ at time $`t`$. According to our model, the rate equation for $`C_n(a,t)`$ is
$$\left(\frac{\partial }{\partial t}+\frac{\partial }{\partial a}\right)C_n(a,t)=-\left(\gamma N(t)+\frac{1}{n}\right)C_n(a,t).$$
(1)
The derivative with respect to $`a`$ on the left hand side accounts for aging. On the right hand side, the loss term $`\gamma NC_n`$ accounts for death by competition and is assumed to be independent of an individual’s age and fitness. As discussed above, the mortality rate is taken as age independent; the form $`C_n/n`$ guarantees that the life expectancy equals $`n`$.
We account for the population of newborns through a boundary condition on $`C_n(a=0,t)`$ . An individual produces offspring with the same life expectancy at rate $`b`$, and, due to mutation, produces offspring whose life expectancy is longer or shorter than its parent's by $`\pm 1`$, with respective rates $`b_\pm `$. Defining $`P_n(t)=\int _0^{\infty }da\,C_n(a,t)`$ as the density of individuals at time $`t`$ of any age whose life expectancy equals $`n`$, the boundary condition for $`C_n(0,t)`$ is
$$C_n(0,t)=bP_n(t)+b_+P_{n-1}(t)+b_{-}P_{n+1}(t).$$
(2)
To determine the asymptotic behavior of the age and fitness distributions, it proves useful to first disregard the age structure and focus on fitness alone. From Eqs. (1)–(2), the rate equations for $`P_n(t)`$ for $`n\ge 1`$ are
$$\frac{dP_n}{dt}=\left(b-\gamma N-\frac{1}{n}\right)P_n+b_+P_{n-1}+b_{-}P_{n+1},$$
(3)
with $`P_0=0`$. This describes a random-walk-like process in a one-dimensional fitness space, augmented by birth and death through the first term on the right-hand side. Using $`N(t)=\sum _nP_n(t)`$, we find that the total population density obeys a generalized logistic equation
$$\frac{dN}{dt}=(B-\gamma N)N-\sum _{n=1}^{\infty }\frac{P_n}{n}-b_{-}P_1,$$
(4)
where $`B\equiv b+b_++b_{-}`$ is the total birth rate.
We now discuss the asymptotic behavior of these rate equations for three basic cases: subcritical – deleterious mutations favored ($`b_{-}>b_+`$); critical – no mutational bias ($`b_+=b_{-}`$); and supercritical – advantageous mutations favored ($`b_+>b_{-}`$). In all three cases, the total population density $`N`$ and the average age $`A=N^{-1}\sum _nA_n`$, with $`A_n=\int _0^{\infty }aC_n(a)\,da`$, approach steady values. These are determined by a balance between the total birth rate $`B`$ and the death rate $`\gamma N`$ due to overcrowding. In the critical and supercritical cases, this leads to the steady-state behaviors for the total density and the average age,
$$N=\frac{B}{\gamma },A=\frac{1}{\gamma N}=\frac{1}{B}.$$
(5)
The behavior in the subcritical case is more subtle, as we now discuss.
Subcritical Case. Here a steady state is reached whose properties are found by setting $`\frac{dP_n}{dt}=0`$ in Eq. (3). We solve this rate equation by introducing the generating function $`F(x)=\sum _{n\ge 1}P_nx^{n-1}`$ to transform the rate equation into the differential equation
$$\frac{F^{\prime }}{F}=\frac{\gamma N-b+1-2b_+x}{b_{-}-(\gamma N-b)x+b_+x^2}.$$
(6)
Integrating Eq. (6), subject to the obvious boundary condition $`F(1)=N`$, gives a family of solutions which are parameterized by the total population density $`N`$. To extract a unique solution one has to invoke additional arguments. First notice that $`N`$ lies within a finite range. The upper limit is found from the steady-state version of Eq. (4), $`(B-\gamma N)N=\sum _nn^{-1}P_n+b_{-}P_1>0`$, which gives $`\gamma N<B`$. The lower limit is obtained from the physical requirement that all the $`P_n`$'s are positive and therefore $`F(x)`$ is an increasing function of $`x`$. From Eq. (6) this leads to the inequality $`(\gamma N-b)^2\ge 4b_+b_{-}`$. Thus
$$b+2\sqrt{b_+b_{-}}\le \gamma N<B.$$
(7)
For any initial condition for the $`P_n`$ with a finite support in $`n`$, only the minimal solution which satisfies the lower bound of Eq. (7) is realized. This selection is reminiscent of the behavior in the Fisher-Kolmogorov equation and related reaction-diffusion systems.
To understand why the minimal solution is selected, consider the steady-state asymptotic behavior of $`P_n`$ for $`n\to \infty `$. In this limit, we may neglect the $`P_n/n`$ term in Eq. (3). The resulting quasi-linear equation has the solution $`P_n=A_+\lambda _+^n+A_{-}\lambda _{-}^n`$, with $`\lambda _\pm =\left[\gamma N-b\pm \sqrt{(\gamma N-b)^2-4b_+b_{-}}\right]/2b_{-}`$, and with $`\lambda _\pm <1`$. Thus the steady-state fitness distribution decays exponentially with $`n`$. When the total population density attains the minimal value $`N_{\mathrm{min}}=(b+2\sqrt{b_+b_{-}})/\gamma `$, $`\lambda _+`$ achieves its minimum possible value $`\lambda _+^{\mathrm{min}}=\sqrt{b_+/b_{-}}\equiv \mu ^{-1}`$, where $`\mu `$ is the mutational bias. Since $`P_n\sim \lambda _+^n`$, the fitness distribution has the most rapid decay in $`n`$ for the minimal solution. This minimal solution appears to be the attractor for any initial condition with $`P_n(0)`$ decaying at least as fast as $`\mu ^{-n}`$. Conversely, an initial condition which decays as $`\alpha ^n`$ with $`\alpha `$ in the range $`(\mu ^{-1},\lambda _+^{\mathrm{max}}=1)`$ should belong to the basin of attraction of the solution for which, from the steady-state version of Eq. (3) in the large-$`n`$ limit, the total population density is $`N=(b+b_{-}\alpha +b_+\alpha ^{-1})/\gamma `$. We have verified this general classification of solutions numerically.
Since the steady state is approached exponentially in time for the classical logistic equation, $`\dot{N}=bN-\gamma N^2`$, one might anticipate a similar relaxation for our age-structured logistic equation (4). However, a numerical integration of the rate equations gives a power-law relaxation of the total population density, $`N_{\infty }-N(t)\sim t^{-2/3}`$, for a compact initial condition (Fig. 1). This is also verified by an asymptotic analysis of the rate equations. A similar relaxation also occurs for the subpopulation densities with given fitness, $`P_n(t)`$.
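A minimal sketch of such a direct integration of the rate equations (3)–(4) is given below; the mutation rates, the fitness cutoff, the Euler time step and the initial condition are illustrative choices of this sketch.

```python
import numpy as np

# Illustrative parameters: subcritical case, deleterious mutations favored (b_- > b_+)
b, b_plus, b_minus, gamma = 1.0, 0.2, 0.4, 1.0
n_max = 400                       # truncation of the fitness axis
dt, t_final = 0.01, 2000.0

P = np.zeros(n_max + 2)           # P[1..n_max]; P[0] = 0 enforces the boundary condition
P[1:6] = 0.1                      # compact initial condition
n = np.arange(n_max + 2, dtype=float)

times, totals = [], []
for step in range(int(t_final / dt)):
    N = P[1:n_max + 1].sum()
    gain = b_plus * P[0:n_max] + b_minus * P[2:n_max + 2]
    dP = (b - gamma * N - 1.0 / n[1:n_max + 1]) * P[1:n_max + 1] + gain
    P[1:n_max + 1] = np.clip(P[1:n_max + 1] + dt * dP, 0.0, None)
    if step % 1000 == 0:
        times.append(step * dt)
        totals.append(N)

# (times, totals) can be plotted to follow the slow power-law relaxation;
# the late-time density should approach the predicted minimal value.
print("late-time   gamma*N =", gamma * totals[-1])
print("prediction  gamma*N =", b + 2.0 * np.sqrt(b_plus * b_minus))
```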
For the relevant situation where the density $`N`$ takes the minimal value, we integrate Eq. (6) to give the generating function
$$F(x)=N\left(\frac{\mu -1}{\mu -x}\right)^2\mathrm{exp}\left\{\frac{x-1}{b_+(\mu -x)(\mu -1)}\right\}.$$
(8)
One can formally determine the $`P_n`$ by expanding $`F(x)`$ in a Taylor series. However, the asymptotic characteristics of the fitness distribution are more easily determined directly from the generating function by using $`\left\langle n^k\right\rangle =\frac{1}{N}\sum _{n=1}^{\infty }n^kP_n=\frac{1}{N}\left(x\frac{d}{dx}\right)^k\left[xF(x)\right]|_{x=1}`$. Applying this to Eq. (8), the first two moments of the fitness distribution are
$`\left\langle n\right\rangle `$ $`=`$ $`\frac{1}{b_+(\mu -1)^2}+\frac{2}{\mu -1}+1,`$ (9)
$`\sigma ^2`$ $`=`$ $`\left\langle n^2\right\rangle -\left\langle n\right\rangle ^2=\frac{\mu +1}{b_+(\mu -1)^3}+\frac{2\mu }{(\mu -1)^2}.`$ (10)
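As a consistency check of Eqs. (8)–(10), the moments can be verified symbolically; the short sketch below is only a verification aid, with generic symbol names of its own.

```python
import sympy as sp

mu, bp, x = sp.symbols('mu b x', positive=True)   # bp stands for b_+

# Generating function of Eq. (8), normalized so that F(1) = 1 (i.e. divided by N)
F = ((mu - 1) / (mu - x)) ** 2 * sp.exp((x - 1) / (bp * (mu - x) * (mu - 1)))

# Moments <n^k> = (x d/dx)^k [x F(x)] evaluated at x = 1
G = x * F
mean_n = sp.simplify((x * sp.diff(G, x)).subs(x, 1))
mean_n2 = sp.simplify((x * sp.diff(x * sp.diff(G, x), x)).subs(x, 1))

claim_mean = 1 / (bp * (mu - 1) ** 2) + 2 / (mu - 1) + 1               # Eq. (9)
claim_var = (mu + 1) / (bp * (mu - 1) ** 3) + 2 * mu / (mu - 1) ** 2   # Eq. (10)

print(sp.simplify(mean_n - claim_mean))                # 0 if Eq. (9) is reproduced
print(sp.simplify(mean_n2 - mean_n ** 2 - claim_var))  # 0 if Eq. (10) is reproduced
```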
The average age of the population may be obtained by first solving Eq. (1) in the steady state to give
$$C_n(a)=P_n\left(\gamma N+\frac{1}{n}\right)\mathrm{exp}\left[-\left(\gamma N+\frac{1}{n}\right)a\right].$$
(11)
The average age then is
$`A`$ $`=`$ $`\frac{1}{N}\sum _{n=1}^{\infty }\int _0^{\infty }aC_n(a)\,da,`$ (12)
$`=`$ $`\frac{1}{\gamma N}-\frac{1}{N}\frac{1}{(\gamma N)^2}\sum _{n=1}^{\infty }\frac{P_n}{n+(\gamma N)^{-1}},`$ (13)
$`=`$ $`\frac{1}{\gamma N}-\frac{1}{N}\frac{1}{(\gamma N)^2}\int _0^1x^{\frac{1}{\gamma N}}F(x)\,dx,`$ (14)
where the second line is obtained by using the expression for $`C_n(a)`$ from Eq. (11) and the last line follows by expressing the sum in terms of an integral of the generating function. The surprising feature that emerges by numerical evaluation of this integral (Fig. 2) is that the average age decreases as the population gets fitter!
Supercritical Case. When $`b_+>b_{-}`$, the random walk in fitness space defined by Eq. (3) is biased away from the origin and a continuum approach becomes appropriate in the long-time limit. Treating $`n`$ as continuous and Taylor expanding the master equation for small deviations about $`n`$ gives the following convection-diffusion equation, supplemented by birth/death terms, for the fitness distribution
$$\left(\frac{\partial }{\partial t}+V\frac{\partial }{\partial n}\right)P=\left(B-\gamma N-\frac{1}{n}\right)P+D\frac{\partial ^2P}{\partial n^2}.$$
(15)
The difference between the advantageous and deleterious mutation rates now defines a bias velocity $`V\equiv b_+-b_{-}`$, and the average mutation rate plays the role of a diffusion constant $`D\equiv (b_++b_{-})/2`$. Integrating over all fitness values, the total population density obeys
$$\frac{dN}{dt}=(B-\gamma N)N-\int _0^{\infty }\frac{P(n,t)}{n}\,dn.$$
(16)
Since the fitness distribution is sharply peaked at $`\left\langle n\right\rangle =Vt`$ (see below), the integral on the right-hand side approaches $`N/Vt`$. Setting $`\frac{dN}{dt}=0`$ in the resulting equation, we conclude that $`\gamma N\simeq B-\frac{1}{Vt}`$. This gives both the steady-state density and the rate of convergence to the steady state.
We now find the fitness distribution by substituting this asymptotic form of $`N(t)`$ into Eq. (15) to give
$$\left(\frac{\partial }{\partial t}+V\frac{\partial }{\partial n}\right)P=\left(\frac{1}{Vt}-\frac{1}{n}\right)P+D\frac{\partial ^2P}{\partial n^2}.$$
(17)
The birth/death term on the right-hand side may be neglected, since $`\left\langle n\right\rangle =Vt`$ and fluctuations about this average are of order $`t^{1/2}`$. This approximation reduces Eq. (17) to the classical convection-diffusion equation, with solution
$$P(n,t)=\frac{N}{\sqrt{4\pi Dt}}\mathrm{exp}\left[-\frac{(n-\langle n\rangle )^2}{4Dt}\right].$$
(18)
This gives a localized fitness distribution with average fitness growing linearly in time, $`\langle n\rangle =Vt`$, and width growing diffusively, $`\sigma =\sqrt{2Dt}`$.
To determine the age characteristics, notice that asymptotically, the $`P_n`$’s change slowly with time, so that the time variable $`t`$ is slow. On the other hand, the age variable $`a`$ is fast. Physically this reflects the fact that during the lifetime of a typical individual the change in the age characteristics of the population is small. Thus in the first approximation, we retain only the age derivative in Eq. (1). We also ignore the term $`C_n/n`$, which is small near the peak of the asymptotic fitness distribution. Solving the resulting master equation and using the boundary condition of Eq. (2) we obtain
$`C_n(a,t)`$ $`\simeq `$ $`P_n(t)\gamma Ne^{-\gamma Na}`$ (19)
$`=`$ $`{\displaystyle \frac{\gamma N^2}{\sqrt{4\pi Dt}}}\mathrm{exp}\left[-\gamma Na-{\displaystyle \frac{(n-Vt)^2}{4Dt}}\right].`$ (20)
Summing over the fitness variable, the total age distribution $`C(a,t)=\sum _nC_n(a,t)`$ is just a (stationary) Poisson, $`C(a,t)=\gamma N^2e^{-\gamma Na}`$, and the average age is $`A=(\gamma N)^{-1}=B^{-1}`$, in agreement with Eq. (5).
Let us compare this average age to that in the subcritical case; the latter is given by Eq. (12) with $`\gamma N=b+2\sqrt{b_+b_{-}}`$. To provide a fair comparison (Fig. 2), take the total birth rate $`B`$ to be the same in both cases. It can then be proved that the average age in the supercritical case is always smaller than that in the subcritical case. Individuals in a population with preferential deleterious mutations live longer than if advantageous mutations are favored! The continuous “rat-race” to increased fitness in the supercritical case does not lead to an increase in the average life span.
Critical Case. With no mutational bias, the fitness still grows indefinitely, but more slowly than in the supercritical system. The equation of motion for $`P(n,t)`$ is again given by Eq. (15), but with $`V`$ set equal to zero and with $`N(t)`$ still described by Eq. (16). To derive the scaling behaviors of $`\langle n\rangle `$ and the width of the fitness distribution, we first use the fact that numerical integration of Eq. (15) again gives a localized fitness distribution. Thus we may estimate the integral on the right-hand side of Eq. (16) as $`N/\langle n\rangle `$. This leads to $`\gamma N\simeq B-\frac{1}{\langle n\rangle }`$. Substituting this into Eq. (15) for $`P(n,t)`$ now yields
$$\frac{\partial P}{\partial t}=\left(\frac{1}{\langle n\rangle }-\frac{1}{n}\right)P+D\frac{\partial ^2P}{\partial n^2}.$$
(21)
To determine the long-time behavior of this equation, we exploit the fact that the fitness distribution is peaked near $`n=\langle n\rangle `$. This suggests changing variables from $`(n,t)`$ to the co-moving co-ordinates $`(y=n-\langle n\rangle ,t)`$. Eq. (21) then becomes
$$\frac{\partial P}{\partial t}-\frac{d\langle n\rangle }{dt}\frac{\partial P}{\partial y}=\frac{y}{\langle n\rangle ^2}P-\frac{y^2}{\langle n\rangle ^3}P+D\frac{\partial ^2P}{\partial y^2}.$$
(22)
Let us first assume that the average fitness grows faster than diffusively, that is, $`\langle n\rangle \gg \sqrt{t}`$. With this assumption, the dominant terms in Eq. (22) are
$$\frac{d\langle n\rangle }{dt}\frac{\partial P}{\partial y}=-\frac{y}{\langle n\rangle ^2}P.$$
(23)
These terms balance when $`\langle n\rangle /(ty)\sim y/\langle n\rangle ^2`$. Using this scaling in Eq. (22) and then balancing the remaining subdominant terms gives $`y\sim \sqrt{t}`$. The combination of these results then gives $`\langle n\rangle \sim t^{2/3}`$. This justifies our initial assumption, $`\langle n\rangle \gg \sqrt{t}`$. Finally, writing $`\langle n\rangle =(ut)^{2/3}`$ simplifies Eq. (23) to
$$\frac{\partial P}{\partial y}=-\frac{3y}{2u^2t}P,$$
(24)
whose solution is the Gaussian of Eq. (18), but with $`\langle n\rangle =(ut)^{2/3}`$. The value $`u=\sqrt{3D}`$ is determined by substituting $`\langle n\rangle =(ut)^{2/3}`$ in Eq. (22) and balancing the subdominant terms. To summarize, a Gaussian fitness distribution holds in both the critical and supercritical cases with the fitness distribution peaked at
$$\langle n\rangle =\{\begin{array}{cc}(3D)^{1/3}t^{2/3}\hfill & \text{critical case;}\hfill \\ Vt\hfill & \text{supercritical case.}\hfill \end{array}$$
(25)
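The $`t^{2/3}`$ coarsening of the mean fitness can be verified by direct numerical integration of Eq. (21). The sketch below (not from the paper) uses an explicit finite-difference scheme with $`\langle n\rangle `$ recomputed self-consistently at every step; the grid, time step, initial condition and the choice $`D=1`$ are arbitrary, and the run only illustrates that the measured mean fitness roughly follows the $`(3D)^{1/3}t^{2/3}`$ behavior quoted in Eq. (25).

```python
# Explicit finite-difference integration of Eq. (21) (critical case) as a rough
# consistency check of the t^(2/3) growth of the mean fitness.
import numpy as np

D, dn, dt = 1.0, 0.5, 0.1                 # dt < dn^2/(2D) for stability
n = np.arange(1.0, 1500.0, dn)            # fitness grid, kept away from n = 0
P = np.exp(-(n - 5.0)**2)                 # localized initial distribution
P /= P.sum() * dn

t, t_end = 0.0, 2000.0
while t < t_end:
    mean_n = (n * P).sum() / P.sum()
    lap = np.zeros_like(P)
    lap[1:-1] = (P[2:] - 2.0*P[1:-1] + P[:-2]) / dn**2
    P += dt * ((1.0/mean_n - 1.0/n) * P + D * lap)
    t += dt

print("measured <n>       :", round(mean_n, 1))
print("(3D)^(1/3) t^(2/3) :", round((3*D)**(1/3) * t_end**(2/3), 1))
```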
The age distribution in the critical case is obtained similarly to the supercritical case. The asymptotics of $`C_n(a,t)`$ is again given by a form similar to Eq. (19), which gives $`C(a,t)=\gamma N^2e^{-\gamma Na}`$ after summing over $`n`$. Hence the average age is $`B^{-1}`$, as in Eq. (5).
While our discussion is based on a population dynamics with an age-independent mortality rate, this assumption does not substantially affect our main results. The crucial point is that old age is unattainable within our model. In the critical and supercritical cases, this is due to death by increased competition among fit individuals, while in the subcritical case, age is limited by the deleterious mutational bias. Thus for a more realistic mortality rate which increases with age, similar fitness and age dynamics to those outlined here would still result.
In summary, in our population dynamics model, the average fitness grows linearly in time when advantageous mutations are more likely and the fitness approaches a steady value when deleterious mutations are favored. In spite of this fitness evolution, the average age of the population always reaches a steady state. Intriguingly, this average age is a decreasing function of the average population fitness. This paradoxical behavior arises because competition becomes keener as the population becomes fitter.
We gratefully acknowledge partial support from NSF grant DMR9632059 and ARO grant DAAH04-96-1-0114.
# Nonperturbative QCD corrections to the effective coefficients of the four-Fermi operators
## Acknowledgements
MRA would like to thank Emi Kou for useful discussions and comments on quark condensate contribution. VE is grateful for support from the Natural Sciences and Engineering Research Council of Canada. MRA acknowledges support from the Science and Technology Agency of Japan.
Figure Captions
Figure 1: Perturbative one loop correction to the effective coefficient of the four-fermion operators.
Figure 2: Nonperturbative quark condensate contribution to the effective coefficient of the four-fermion operators at the one-loop level.
Figure 3: Nonperturbative gluon condensate contribution to the effective coefficient of the four-fermion operators at the one-loop level.
# Ultra High Energy Neutrinos from Supernova Remnants
## I Introduction
Supernova remnants (SNRs) could be the principal source of galactic cosmic rays up to energies of $`10^{15}`$ eV. A fraction of the accelerated particles interact within the supernova remnants and their adjacent neighborhoods, producing $`\gamma `$ rays. If the nuclear component of cosmic rays is strongly enhanced inside SNRs, then nuclear collisions leading to pion production and subsequent decay produce both $`\gamma `$ rays and $`\nu `$s. Therefore, simultaneous high energy $`\gamma `$ ray and $`\nu `$ observations from SNR sources would point to accelerated hadrons in SNRs.
Recent observations above 100 MeV by the EGRET instrument have found $`\gamma `$ ray emission from the direction of several SNRs (e.g. IC 443, $`\gamma `$ Cygni, etc.). However, the production mechanisms of these high energy gamma-rays have not been unambiguously identified. The emission may be due to the interaction of protons, accelerated by the SNR blast wave, with adjacent molecular clouds, bremsstrahlung or inverse Compton from accelerated electrons, or due to pulsars residing within the SNRs. Evidence for electron acceleration in SNRs comes from the ASCA satellite detection of non-thermal X-ray emission from SN 1006 and IC 443. Ground based telescopes have detected TeV emission from SN 1006 and the Crab Nebula. For recent reviews see . Our objective in this paper is to look closely at SNRs as sources of ultra high energy (UHE) neutrinos and we investigate different possibilities in Sections II and III below. Section IV gives a brief overview and conclusions.
## II Neutrinos from Supernova Remnants Assuming p-p Interactions
There are two schools of thought describing the high and very high energy gamma-ray emission from SNRs. In one, TeV $`\gamma `$ rays are leptonic in origin, with the TeV photons produced by relativistic electrons inverse Compton scattering off the microwave radiation and other ambient photon fields. In the second, gamma-rays are produced by the decay of neutral pions created in proton-nucleon collisions. TeV neutrino emission from SNRs is possible only if hadronic models are taken into consideration.
Non-linear particle acceleration concepts have been used to provide SNR gamma-ray fluxes. Following this approach, the gamma-ray flux above 1 TeV from an SNR at a distance $`d`$, assuming a differential energy spectrum of accelerated protons inside the remnant of the form $`E^{2-\alpha }`$, would be
$$F_\gamma (>1\,\mathrm{TeV})\approx 8.4\times 10^6\,\theta \,q_\gamma (\alpha )\left(\frac{E}{1\,\mathrm{TeV}}\right)^{3-\alpha }\left(\frac{E_{\mathrm{sn}}}{10^{51}\,\mathrm{erg}}\right)\left(\frac{n}{1\,\mathrm{cm}^{-3}}\right)\left(\frac{d}{1\,\mathrm{kpc}}\right)^{-2}\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$$
(1)
where $`n`$ is the number density of the gas and $`q_\gamma `$ is the production rate of photons. These results correspond to the SNR in the Sedov (adiabatic) phase where the luminosity is roughly constant.
UHE neutrinos are predicted to be produced as a significant byproduct of the decay of charged pions. To find the neutrino flux ($`F_{\nu _\mu +\overline{\nu _\mu }}`$) for different spectral indices we resort to the calculated ratios $`\frac{F_{\nu _\mu +\overline{\nu _\mu }}}{F_\gamma }`$ as given in Table 1. $`q_\gamma (\alpha )`$ values for different $`\alpha `$ are also included. The contribution of nuclei other than H in both the target matter and cosmic rays is assumed to be the same as in the ISM. The units of $`q_\gamma `$ are $`\mathrm{s}^{-1}\,\mathrm{erg}^{-1}\,\mathrm{cm}^3\,(\mathrm{H})^{-1}`$. A comprehensive discussion of the spectrum weighted moments for secondary hadrons, based on accelerator beams with fixed targets at beam energies $`\lesssim 1`$ TeV, has been presented by Gaisser. This has been shown to also characterize correctly the energy region beyond 1 TeV. A direct ratio estimate was calculated to give results very close to these. For harder spectra the ratio is found to approach unity.
We have taken the average of the two ratios as given in Table 1 to calculate the corresponding neutrino flux from equation (1) for each spectral index. The expression for the $`\nu _\mu `$ flux for $`\alpha \approx 4.2`$ is
$$F_{\nu _\mu }(>1\,\mathrm{TeV})\approx 3.4\times 10^{-11}\,\theta \left(\frac{E}{1\,\mathrm{TeV}}\right)^{-1.2}\left(\frac{E_{\mathrm{sn}}}{10^{51}\,\mathrm{erg}}\right)\left(\frac{n}{1\,\mathrm{cm}^{-3}}\right)\left(\frac{d}{1\,\mathrm{kpc}}\right)^{-2}\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$$
(2)
This is twice the corresponding $`\nu _e`$ flux.
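As a simple illustration of how Eq. (2) is used, the short sketch below (not part of the paper) evaluates the integral $`\nu _\mu `$ flux for the fiducial parameters of the formula ($`\theta =1`$, $`E_{\mathrm{sn}}=10^{51}`$ erg, $`n=1\,\mathrm{cm}^{-3}`$, $`d=1`$ kpc); fluxes for other sources are obtained simply by rescaling these factors.

```python
# Integral nu_mu flux above E (TeV) from Eq. (2), valid for alpha ~ 4.2.
def nu_mu_flux(E_TeV, theta=1.0, E_sn=1e51, n=1.0, d_kpc=1.0):
    return (3.4e-11 * theta * E_TeV**-1.2
            * (E_sn/1e51) * (n/1.0) * (d_kpc/1.0)**-2)   # cm^-2 s^-1

for E in (1.0, 10.0, 100.0):
    print(f"F_nu_mu(>{E:6.1f} TeV) = {nu_mu_flux(E):.2e} cm^-2 s^-1")
```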
Recent data from the CANGAROO detector indicate that the energy spectrum of $`\gamma `$ rays from the Crab pulsar/nebula may extend up to at least 50 TeV. The CANGAROO detector has also observed VHE $`\gamma `$ rays up to 10 TeV from SN1006. These emissions could be explained on the basis of electron inverse Compton processes. However, as the energy increases, the Compton process produces steeper spectra because of the synchrotron energy loss of electrons in magnetic fields. Hence, leptonic models have difficulty in explaining a hard spectrum that extends beyond 10 TeV, as is observed from the Crab. If we consider hadronic models to be viable at such energies, we should expect corresponding UHE neutrino emission from SNRs.
The hadronic mechanism for production of $`\pi ^0`$ and hence $`\gamma `$ rays as described above has been used to explain the UHE emission from the Crab. Nuclear p-p interactions are considered to occur among protons accelerated in the nebula. However, the energy balance between the magnetic field and relativistic particles in the nebula shows that the nucleon contribution is dominant only at energies above 10 TeV. The derived gamma-ray spectrum in this model closely matches the SN1006 spectrum obtained by the CANGAROO instrument.
An approximate expression for the gamma-ray spectrum from equation (1) could be written, for spectral index $`\alpha `$ varying between 4.0 and 4.5, as
$$F_\gamma (>1\,\mathrm{TeV})\approx 4.0\times 10^{-3\alpha }\left(\frac{W_p}{10^{48}\,\mathrm{erg}}\right)\left(\frac{n}{100\,\mathrm{cm}^{-3}}\right)\left(\frac{d}{2\,\mathrm{kpc}}\right)^{-2}\left(\frac{E}{\mathrm{TeV}}\right)^{3-\alpha }\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$$
(3)
where $`d`$ is the distance to the source (the distance to the Crab is 2 kpc), $`W_p`$ is the kinetic energy of the accelerated protons (a reasonable value is $`10^{48}`$ erg) and $`n`$ is the effective number density (100 $`\mathrm{cm}^{-3}`$). The corresponding UHE neutrino flux can be calculated directly using the ratios from Table 1. For these reasonable parameters, and taking $`\alpha \approx 4.2`$ as an example, the neutrino flux from the Crab would be
$$F_\nu (>1\,\mathrm{TeV})\approx 8.3\times 10^{-13}\left(\frac{E}{\mathrm{TeV}}\right)^{-1.2}\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$$
(4)
In the shell-type supernova remnant SN1006, at a distance of $`1.8`$ kpc, the total estimated kinetic energy of the accelerated protons would be $`10^{49}`$ ergs ($`10\%`$ of the supernova explosion energy of $`10^{50}`$ ergs) and the matter density is $`0.4\,\mathrm{cm}^{-3}`$ (the matter density is low since SN1006 is located above the galactic plane). The observed $`\gamma `$ ray flux cannot be accounted for by the hadronic acceleration mechanism alone due to this low matter density. However, there could be corresponding UHE neutrino emission.
We show in Figure 1 the expected neutrino flux from the Crab Nebula and SN1006 for different spectral indices, calculated as above for typical parameter values. The expected neutrino flux from SN1006 is found to be several orders of magnitude lower than that from the Crab at the same neutrino energies.
## III UHE Neutrino Emission from SNR and Pulsars due to Nuclear Interactions
There is another model which predicts UHE neutrinos from SNRs. In this model, very young SNRs are considered in which ions are accelerated in the slot gap of the highly magnetized, rapidly spinning pulsar. Nuclei, probably mainly Fe nuclei, extracted from the neutron star surface and accelerated to high Lorentz factors can be photodisintegrated by interaction with the neutron star’s radiation field and hot polar caps. Photodisintegration can also occur in the presence of extremely strong magnetic fields typical of neutron star environments ($`10^{12}`$ G). For acceleration to sufficiently high energies we need a short initial pulsar period ($`\lesssim `$ 5 ms). The energetic neutrons produced as a result of photodisintegration interact with target nuclei (matter in the shell) as they travel out of the SNR, producing gamma ray and neutrino signals; those neutrons passing through the shell decay into relativistic protons contributing to the pool of galactic cosmic rays. For a beaming solid angle to the Earth of $`\mathrm{\Omega }_b`$, the neutrino flux in this model can be calculated from
$$F_\nu (E_\nu )\simeq \frac{\dot{N}_{\mathrm{Fe}}}{\mathrm{\Omega }_bd^2}[1-\mathrm{exp}(-\tau _{pp})]\int N_n(E_n)P_{n\nu }^M(E_\nu ,E_n)\,dE_n$$
(5)
where $`\dot{N}_{\mathrm{Fe}}`$ is the total rate of Fe nuclei injected, $`d`$ is the distance to the SNR, $`P_{n\nu }^M(E_\nu ,E_n)dE_n`$ is the number of neutrinos produced with energies in the range $`E_\nu `$ to $`(E_\nu +dE_\nu )`$ (via pion production and subsequent decay), and $`N_n(E_n)`$ is the spectrum of neutrons extracted from a single Fe nucleus. $`\tau _{pp}`$ is the optical depth of the shell to nuclear collisions (assuming shell type SNR) which is a function of the mass ejected into the shell during the supernova explosion and of the time after explosion. We show in Figure 2 the $`\nu _\mu +\overline{\nu }_\mu `$ spectra obtained from this model at a distance of 10 kpc; the time after explosion is 0.1 year. Signals from nuclei, which are not completely fragmented, are ignored. These particles are charged and would be trapped in the central region of the SNR which has a relatively low matter density and therefore would not make any significant contribution to neutrino fluxes.
## IV Overview and Conclusions
UHE neutrinos can be detected by observing muons, electrons and tau leptons produced in charged-current neutrino-nucleon interactions. To minimize the effects of the atmospheric muon and neutrino background, usually the upward-going muons (to identify muon neutrinos) are observed. To observe $`\nu _e`$, one looks at the contained event rates for resonant formation of $`W^{-}`$ in the $`\overline{\nu _e}`$ interactions at $`E_\nu =6.3`$ PeV for downward moving $`\nu _e`$. The key signature for the detection of $`\nu _\tau `$ is the charged current $`\nu _\tau `$ interaction, which produces a double cascade on either end of a minimum ionizing track. The threshold energy for detecting these neutrinos is near 1 PeV. At this energy the cascades are separated by roughly 100 m, which should be resolvable in the planned neutrino telescopes. However, evidence for $`\nu _\tau `$ would indicate neutrino oscillations since they are not expected from the hadronic models. The flux of neutrinos produced by cosmic ray interactions in the atmosphere is considerably larger than individual source fluxes at 1 TeV but falls rapidly with energy. The “conventional” atmospheric neutrino flux is derived from the decay of charged pions and kaons produced by cosmic ray interactions in the atmosphere. The angle averaged atmospheric flux in the neutrino energy range 1 TeV $`<E_\nu <10^3`$ TeV can be parametrized by the equation
$$F_\nu =7.8\times 10^{-8}\left(\frac{E_\nu }{1\,\mathrm{TeV}}\right)^{-2.6}\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$$
(6)
An additional “prompt” contribution of neutrinos to the atmospheric flux arises from charm production and decay. The vertical prompt neutrino flux has recently been reexamined using the Lund model for particle production and has been shown to be slightly larger than the conventional atmospheric neutrino flux at energies $`\gtrsim 100`$ TeV. We compare equations (4) and (6) to find that the neutrino flux from the Crab would be significantly above the atmospheric background beyond a few TeV. For energies of a TeV or more the neutrino direction can be reconstructed to better than 1 degree, and less than one event per year in a one degree bin is expected from the combined atmospheric and AGN backgrounds. For a muon neutrino of energy $`\gtrsim `$ 1 TeV the rate of upward muons from the Crab in a detector with effective area of $`1\,\mathrm{km}^2`$ will be $`\sim `$ 1 – 30 per year, depending on the model chosen. This neutrino flux should be detectable by large neutrino telescopes with good angular resolution of about 1 degree. However, the neutrino flux from SN1006 will be negligible even in such large area detectors.
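The comparison with the atmospheric background can be made concrete with the following sketch (our own illustration, not from the paper). It evaluates the Crab flux of Eq. (4) against the atmospheric flux of Eq. (6) collected within an assumed search bin of 1° radius; the bin size is our assumption, and with the larger model fluxes implied by the 1 – 30 events per year range quoted above the crossover moves down to a few TeV.

```python
# Crab neutrino flux (Eq. 4) versus atmospheric background (Eq. 6) inside a
# hypothetical 1-degree-radius search bin, as a function of threshold energy.
import numpy as np

omega_bin = 2*np.pi*(1 - np.cos(np.radians(1.0)))     # solid angle of the bin [sr]
for E in (1.0, 3.0, 10.0, 30.0, 100.0):               # threshold in TeV
    crab = 8.3e-13 * E**-1.2                          # Eq. (4)
    atm  = 7.8e-8  * E**-2.6 * omega_bin              # Eq. (6) x bin solid angle
    print(f"E > {E:5.1f} TeV :  Crab {crab:.1e}   atm {atm:.1e}   cm^-2 s^-1")
```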
We must also account for the shadow factor which represents the attenuation of neutrinos traversing the Earth. This effect is prominent at energies $`\gtrsim `$ 100 TeV. In that case, it is necessary to restrict our attention to downward moving neutrinos. The expected rates would be larger, but the effects of atmospheric muons have to be eliminated by restricting the solid angle to include only large column depths.
The question of the importance of hadronic interactions in the Crab Nebula and other SNRs can therefore be settled by the detection of neutrinos, which is likely with the next generation of UHE kilometer-scale detectors in ice/water.
Acknowledgements : M.R. wishes to acknowledge useful discussions with Dr. H.J. Crawford and Dr. D. Bhattacharya. Thanks to an anonymous referee for useful comments. This research was supported in part by Grant NAG5-5146.
# HST FOC spectroscopy of the NLR of NGC 4151. I. Gas kinematics.<sup>1</sup><sup>1</sup>1Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
## 1 Introduction
The SABab galaxy NGC 4151 hosts one of the most studied active galactic nuclei (AGN) in the sky, with observations from the $`\gamma `$-ray to the MHz range (e.g., Kriss et al. 1995; Knop et al. 1996; Warwick et al. 1996; Ulvestad et al. 1998). The AGN spectrum presents a broad X-ray Fe K$`\alpha `$ line (Yaqoob et al. 1995), optical and UV variable broad permitted emission lines (Crenshaw et al. 1996; Kaspi et al. 1996), as well as several blue-shifted narrow absorption systems (Weymann et al. 1997), and strong narrow lines originated in a system of clouds with up to a few kpc extension (Pérez et al. 1989; Yoshida & Ohtani 1993). The Extended Narrow Line Region (ENLR) of NGC 4151 shows line intensities and widths as well as kinematics consistent with quiescent gas in normal rotation in the galactic disk, illuminated by the central source (Penston et al. 1990; Robinson et al. 1994). Several authors have remarked on the continuity between the kinematics of the large scale H 1 21 cm emission and that of the ENLR (Vila-Vilaró et al. 1995; Asif et al. 1997), but because of the contamination by the Narrow Line Region (NLR; R $`<`$ 4″) the ground based data have not yielded convincing evidence for normal rotation in the circum-nuclear region of NGC 4151 (Schulz 1987; Mediavilla, Arribas & Rasilla 1992).
Narrow-band Hubble Space Telescope (HST) images (Evans et al. 1993; Boksenberg et al. 1995) suggested that the complex morphology of the emission clouds and filaments in the Narrow Line Region (NLR, $`R<4`$″) is shaped not by the anisotropic character of the central source’s radiation but mainly by the interaction between the hot plasma of the radio jet and the ambient gas in the disk. In a previous paper (Winge et al. 1997, hereafter Paper I), we presented the initial results from HST long-slit spectroscopy of the NLR of NGC 4151 at a spatial resolution of 0″.029, isolating for the first time the spectra of individual clouds and demonstrating the influence of the radio ejecta in both the local kinematic and ionization conditions of the emission gas. Several very localized sub-systems of both blue and red-shifted high-velocity knots were detected, and we also observed off-nuclear continuum emission and marked variations on the emission line ratios within a few pc. Such evidence indicates that the physical conditions of the emission gas in individual clouds are strongly influenced by local parameters, either density fluctuations or shock ionization, possibly both.
In this paper we present a detailed study of the kinematics of the gas in both the extended and inner NLR of NGC 4151 from ground-based and HST data with high spatial resolution that allow us to separate the underlying velocity field of the emission gas in the NLR from the effects of the radio jet, and to probe its connection with the large scale rotation of the ENLR in the galactic disk. The most striking evidence for the interaction of the radio jet with the ambient gas and the presence of strong shocks, observed as high-velocity emission knots, localized off-nuclear continuum emission, and variations in the emission-line ratios in scales of a few to tens of parsec, will be discussed in a forthcoming paper.
For a distance to NGC 4151 of 13.3 Mpc, 0″.1 corresponds to a linear scale of 6.4 pc in the plane of the sky; a value of H = 75 km s<sup>-1</sup>Mpc<sup>-1</sup> is assumed throughout this paper.
## 2 Observations and Data Reduction
### 2.1 Ground Based data
Long-slit spectroscopic observations of NGC4151 were obtained using the IPCS (Boksenberg 1972; Boksenberg & Burgess 1973) at the f/15 Cassegrain focus of the 2.5m Isaac Newton Telescope on March 9 to 13, 1985. The wavelength coverage includes the \[O 3\] $`\lambda `$4959,5007 and H$`\beta `$ lines, at an instrumental resolution of 0.75 Å (45 km s<sup>-1</sup>) Full Width Half Maximum (FWHM). The spatial resolution was 0″.63. A total of 17 spectra were obtained at PA = 48° and 138° (parallel and perpendicular to the ENLR direction, respectively) at several different offsets from the nucleus. Different slit widths were used, from 0″.22 to 0″.65. The slit positions are shown in Figure 6 and the log of the observations is given in Table 1.
The individual frames were reduced using the FIGARO image processing environment (Shortridge 1993), and then re-binned to a resolution of 0.24 Å/pixel (60 km s<sup>-1</sup>). The continuum spectra were subtracted after correction for vignetting effects, and the spatial distribution of emission line fluxes, central velocity and FWHM obtained by fitting Gaussian functions to the line profiles using the LONGSLIT spectral analysis software (Wilkins & Axon 1992).
### 2.2 HST FOC f/48 data
The NLR of NGC 4151 was observed using the HST Faint Object Camera (FOC) f/48 long-slit spectrograph on July 3, 1996. The slit, 0″.063 $`\times `$ 13″.5 in size, was positioned along PA = 47°. The F305LP filter was used to isolate the first order spectrum which covers the 3650 – 5470 Å interval at a 1.58 Å/pixel resolution. The spatial scale is 0″.0287 per pixel and the instrumental PSF is of the order of 0″.08. The observational procedure consisted of first obtaining an interactive acquisition (IntAq) image in 1024 $`\times `$ 512 zoomed mode with the f/48 camera through the F220W + F275W filters to allow for an accurate centering of the object. While the necessary offsets were calculated, a 1247-second, 1024 $`\times `$ 512 non-zoomed mode spectrum was taken with the slit at the IntAq position. Six 697-second (non-zoomed) spectra were then obtained stepping across the NLR in 0″.2 intervals, starting 0″.6 SE of the nucleus. The three resulting SE spectra were too faint to be useful. The slit positions, derived as described below, are shown in Figure 6, superimposed on an archival \[O 3\] $`\lambda `$5007 FOC f/96 image of the galaxy and listed in Table 2.
The spatial relation between objects in the sky is not preserved in the uncalibrated data from the cameras since the raw FOC images are affected by strong geometric distortion. This distortion comprises two components: the external or optical, due to the off-axis position of the detector itself; and the internal, a combination of the distortion caused by the magnetic focusing of the image intensifiers and the one introduced by the spectrographic mirror and grating. The internal contribution is by far the most important, and it is strongly time and format dependent. We have used both standard IRAF<sup>2</sup><sup>2</sup>2IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. procedures for spectroscopic data reduction as well as specific packages developed for the FOC data (at stsdas.hst\_calib.foc) to calibrate the data.
Initially, all frames, including those used for subsequent calibration, were geometrically corrected for the optical plus focusing induced distortions using the equally spaced grid of reseaux marks etched onto the first photocathode in the intensifier tube. The observed positions of the reseaux marks were measured in the internal flat-field frames bracketing the observations and then compared with an equally spaced artificial grid of suitable size (9 $`\times `$ 17 reseaux marks for the 1024 $`\times `$ 512 mode) already corrected by the inverse optical distortion<sup>3</sup><sup>3</sup>3determined from ray-tracing models of the HST and FOC optics and available within IRAF.. Each individual transformation was computed fitting two dimensional Chebyshev polynomials of 6<sup>th</sup> order in x and y and 5<sup>th</sup> order in the cross-terms, and applied to the respective science and calibration frames. The rms uncertainties in the reseaux positions are 0.12 – 0.20 pixel for the 512 mode, depending on the signal to noise (S/N) ratio of the flat-field images.
The remaining (mirror$`+`$grating) internal distortion along the dispersion direction was corrected by tracing the spectra of two stars observed in the core of the globular cluster 47 Tuc. The stars are $`\sim `$ 130 pixels apart and provide a good correction for most of the working area of the slit. The rms of the tracing is about 0.2 pixel. The distortion along the spatial direction was obtained in a similar way, tracing the brightness distribution of the emission lines of the planetary nebula NGC 6543, with residuals of 0.10 – 0.22 pixel. The two corrections were combined in a single calibration file and applied simultaneously to the science frames.
Since the FOC spectrograph does not contain an internal reference source, the NGC 6543 geometrically-corrected frame was used to obtain the wavelength calibration. Reference wavelengths were derived from ground-based observations (Pérez et al., in preparation), which also indicate that distortions introduced by the internal velocity field of the nebula are negligible at the f/48 resolution (less than 0.5 Å $`\approx `$ 0.3 pixel). The two-dimensional spectrum was collapsed along the slit, and a 6<sup>th</sup> order Legendre polynomial solution found for the pixel-to-wavelength transformation, with residuals of $`\approx `$ 0.04 Å. The lines were re-identified in the original frame and a bi-dimensional wavelength calibration file obtained. The spatially extended emission lines in the fully calibrated frame of NGC 6543 are measured to be within 0.13 Å of their reference values.
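For illustration, a pixel-to-wavelength solution of the kind described above can be obtained in a few lines. The sketch below is not the calibration actually used: the line centroids are invented placeholders standing in for the NGC 6543 line list, and only the fitting step (a 6<sup>th</sup> order Legendre polynomial) mirrors the text.

```python
# Schematic 6th-order Legendre pixel-to-wavelength solution (placeholder data).
import numpy as np
from numpy.polynomial import Legendre

pix = np.array([ 54.6, 145.0, 292.0, 443.1, 526.0, 661.5, 772.7, 865.0, 1121.5])
lam = np.array([3726.0, 3868.8, 4101.7, 4340.5, 4471.5, 4685.7, 4861.3, 5006.8, 5411.5])

sol = Legendre.fit(pix, lam, deg=6)                 # pixel -> wavelength
rms = np.sqrt(np.mean((lam - sol(pix))**2))
print(f"rms of the wavelength solution: {rms:.3f} A")
```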
The procedure above can be summarized as follows:
1. optical plus main geometric distortions are corrected using the reseaux marks, leaving a residual error of 0.12 – 0.20 pixel.
2. residual (spectrograph) distortion in the dispersion direction is corrected by tracing the spectra of point sources, with an error of $`\approx `$ 0.2 pixel.
3. residual distortion along the slit is corrected by tracing the emission lines of an extended source, with an error of 0.10 – 0.22 pixel.
4. errors from the internal velocity field of the wavelength calibrator and from the calibration itself are less than 0.5 and 0.13 Å, respectively.
The final errors estimated for the NGC 4151 spectra, combining all the above sources in quadrature (including the errors from the geometric corrections applied to the calibration files), are 0″.016 and 1.1 Å rms in the spatial and dispersion directions, respectively. The final wavelength resolution is $`\approx `$ 310 km s<sup>-1</sup> at \[O 3\] $`\lambda `$5007.
Flux calibration was obtained using the UV standard star LDS749b. The data frame was geometrically corrected and wavelength calibrated as above, and the spectrum extracted on a 16-pixel window. This was then divided by the integration time and by the appropriate segment of the absolute flux table, rebinned to match the wavelength interval. A 6<sup>th</sup> order spline3<sup>4</sup><sup>4</sup>4The “order” of a liner or cubic spline function in the IRAF routines refers to the number of polinomial pieces in the sample region. was fitted to the resulting counts per flux per second spectrum, averaged in 60 Å intervals, generating a smooth response function. The vignetting along the spatial direction was corrected using the model presented in the FOC Handbook (Nota et al. 1996), and combined with the above response function to obtain a two-dimensional sensitivity calibration frame. We estimate light-losses due to the small size of the slit to be in the order of 20% for a point source. The relative flux of lines measured within a 0″.3 – 0″.6 (10 – 20 pixels) interval is believed to be accurate at a 10 – 15% level.
The science frames were divided by the exposure time and by the composite sensitivity frame. The background emission was subtracted by fitting a spline3 function along both spatial and dispersion directions, after masking the regions with emission lines and continuum. We opted to keep the fitting function order as low as possible, which provides the best subtraction over most of the frame, even when this implied an imperfect result over small areas, where the background was larger and/or a more rapidly varying function of position. An example of the resulting 2D frames is shown in Figure 6. The plots correspond to a 2″.6 $`\times `$ 71 Å (166 pc $`\times `$ 4250 km s<sup>-1</sup>) segment of the \[O 3\] $`\lambda `$5007 emission line centered on the nuclear continuum on the PA47\_1 spectrum (top) and 0″.6 SW from the center of the slit on the PA47\_3 spectrum (bottom). Multiple velocity systems and a complex cloud structure are seen, and some features can be easily identified in the spectra shown in Figure 6, like the broad plume 0″.37 SW of the nucleus in the top image or the double-peaked feature and high-velocity cloud 0″.2 and 1″ SW, respectively, in the 0″.41 NW image.
To accurately determine the slit positions, we compared the \[O 3\] $`\lambda `$5007 luminosity profile of each spectrum with that derived from an archive FOC f/96 image taken with the F501N filter. The observed spectra were convolved with the transmission curve of the filter and the data collapsed along the dispersion direction. The f/96 image was rebinned to match the f/48 spatial resolution and the spectra light profiles compared with the sum of two successive lines stepping across the image at PA = 47°. The final agreement is good within a few percent and the final uncertainty in the position corresponds to half the size of the slit (or one line in the image), 0″.03.
We also retrieved from the HST Archive the NGC 4151 spectra obtained on July 11, 1995 as an engineering test of the f/48 detector. The slit was positioned at PA = 40° and the spectra were taken using the 1024 $`\times `$ 256 mode. Data reduction was complicated by the lack of an equivalent flat-field frame (the ones interspersed with the observations were taken in the 1024 $`\times `$ 512 zoomed mode), and contemporaneous wavelength calibration observations, resulting in larger uncertainties in both spatial and spectral scales. Due to the higher background features located in the upper and middle part of the frames, as well as the presence of a serious “blemish” in the detector crossing over part of the \[O 3\] $`\lambda `$4959,5007 emission lines, we have chosen to optimize the reduction for the \[O 2\] $`\lambda `$3727 region, resulting in final uncertainties of 0″.02 and 1.24 Å in the spatial and dispersion directions, respectively. We note, however, that several emission and kinematic features observed in the 1996 \[O 3\] data can also be identified in the 1995 frames (see Section 3). Five data sets contain enough signal to be useful, and their positions, derived in a similar way to the 1996 data, are listed in Table 2 and shown in Figure 6.
## 3 Results
### 3.1 Ground-based Data
Figure 6 shows a selection of the \[O 3\] $`\lambda `$5007 line profiles observed along the PA48 position at various offsets from the nucleus. In the inner 5″ (the NLR), up to 3 Gaussian components were needed to fit the line profiles, while the extended emission was well represented by a single Gaussian with the instrumental resolution ($`\approx `$ 45 km s<sup>-1</sup>). The transition between the ENLR and the inner NLR at around 4″ is particularly instructive. Here, the extended narrow component is joined by a second blue-shifted component of $`\sim `$ 400 km s<sup>-1</sup> FWHM. Interior to this radius the line profiles are always broad and double.
Figure 6 presents the result of the profile analysis for the PA48 position. It can be seen that the velocity structure of the extended emission (represented as stars) is similar to that of a typical galactic rotation curve, indicating that this emission originates from gas in the galactic plane. The line width of $`<`$ 45 km s<sup>-1</sup> is also consistent with the velocity dispersion of gas in the disks of normal spiral galaxies (van der Kruit & Shostak 1984). This component has a ratio \[O 3\]/H$`\beta `$ $`\approx `$ 9, indicating it is photoionized by the AGN continuum. The general impression is that the FWHM of this primary component increases steadily as one approaches the nucleus while the radial velocity connects almost seamlessly onto the velocity field of the extended narrow component, a point we shall re-emphasize when we discuss the FOC f/48 results. This progression is well illustrated by the line profiles between 1″.9 and 3″.8 SW (Figure 6c and 6d). The second component (open squares) is much broader, FWHM $`\sim `$ 600 km s<sup>-1</sup>, is present only in the inner 5″ of the emission region, and is systematically blueshifted with respect to the primary. The \[O 3\]/H$`\beta `$ ratio is $`\approx `$ 7. At first sight, the physical reality of the third, narrower, component (FWHM $`\sim `$ 100 km s<sup>-1</sup>), which is only found very close to the nucleus, with high velocity shifts with respect to the main rotation curve, may be thought questionable. However, as we shall see, it corresponds to structures identified in the high-spatial resolution FOC f/48 data, closely associated with the radio jet.
### 3.2 HST FOC f/48 Data
To study the kinematics of the gas in the HST spectra the brightest emission lines, \[O 2\] $`\lambda `$3727 and \[O 3\] $`\lambda `$4959,5007, were extracted in spatial windows varying from 2 to 20 pixels along the slit, and their profiles fitted using 1 to 3 Gaussian components. The two \[O 3\] lines were fitted simultaneously, constraining the physical parameters of the corresponding components of each line to their theoretical values (3:1 intensity ratio, 48 Å separation, and same FWHM). Figure 6 shows some examples of the fits for the 1996 \[O 3\] region. The spectra are identified by their distance to the center of the slit, defined as the line passing through the nucleus perpendicular to the slit direction, with negative values running towards the SW. All regions plotted are located within 1″.5 from the active nucleus and the line profile representations vary from a single component, with very little broadening (second spectrum at position PA47\_3) to combinations of very broad or multiple Gaussians.
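A toy version of such a constrained doublet fit is sketched below (this is not the authors' code): the two \[O 3\] lines share one centroid, one width and a fixed 3:1 intensity ratio with a 48 Å separation, and are fitted together with a flat continuum to a synthetic spectrum built from invented numbers.

```python
# Constrained two-Gaussian fit to the [O III] 4959,5007 doublet (toy example).
import numpy as np
from scipy.optimize import curve_fit

def oiii_doublet(wave, amp5007, centre5007, sigma, cont):
    gauss = lambda mu, a: a*np.exp(-0.5*((wave - mu)/sigma)**2)
    return cont + gauss(centre5007, amp5007) + gauss(centre5007 - 48.0, amp5007/3.0)

wave = np.arange(4900.0, 5100.0, 1.58)               # ~FOC f/48 dispersion
rng  = np.random.default_rng(0)
flux = oiii_doublet(wave, 10.0, 5023.0, 4.0, 1.0) + rng.normal(0.0, 0.3, wave.size)

popt, _ = curve_fit(oiii_doublet, wave, flux, p0=[5.0, 5020.0, 5.0, 0.0])
print("amp(5007), centre(5007), sigma, continuum =", np.round(popt, 2))
```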
Figure 6 shows two regions of the \[O 3\] $`\lambda `$4959,5007 emission in the 1995 0″.14 and 0″.52 NW (PA40\_3 and PA40\_5) frames, which correspond to the same kinematical features already remarked in Figure 6, the nuclear plume and the double peaked profile. Even with the presence of the blemish redward of \[O 3\] $`\lambda `$5007, the similarity with the nearby 1996 profiles is evident.
The results of the decomposition are shown in Figures 6a to 6d for the 1996 \[O 3\] $`\lambda `$5007 data, where the top panels show the FWHM of the different components superimposed over the brightness profiles of the radio emission (from the VLA$`+`$Merlin 5 GHz radio map of Pedlar et al. 1993). In these figures, the data are plotted as a function of position along the slit. The resulting components naturally separate into three groups: a narrow (FWHM $`\lesssim `$ 700 km s<sup>-1</sup>), closest to the systemic velocity and usually the brightest component in the fit (filled triangles); a broad (FWHM $`\gtrsim `$ 700 km s<sup>-1</sup>) base that appears in the highest S/N spectra and is distributed around the radio knots (stars); and the occasional narrow secondary component, which can be split from the main emission by as much as $`\pm `$ 1000 km s<sup>-1</sup> (open circles). The gray region is the instrumental FWHM. The seven points on the PA47\_1 (nuclear) position plotted as open triangles will be discussed later (see Section 4.2.2).
As discussed in Paper I, there is an association between the optical and the radio emission, in the sense that the brightest emission-line filaments surround the radio knots, as expected in a scenario where the plasma of the radio jet is clearing a channel in the surrounding medium and enhancing the line emission along its edges by compression of the ambient gas (Taylor, Dyson & Axon 1992; Steffen et al. 1997a ,b ). Figures 6a – 6d present the kinematic expression of this association, with broad bases and/or high velocity components closely associated with the radio emission, while the “main” narrow component follows a more ordered pattern, strikingly resembling that of a disk rotation curve.
The lower resolution (instrumental FWHM $`\approx `$ 450 km s<sup>-1</sup>) and lower S/N in the \[O 2\] $`\lambda `$3727 region did not allow us to isolate as many components as for \[O 3\] $`\lambda `$5007. Nevertheless, the overall behaviour of the velocity field, shown in Figures 6 and 6 for the 1996 and 1995 data, respectively, is the same as for the \[O 3\] $`\lambda `$5007 emission, with the narrow component tracing rotation in the disk, while the broad and secondary components mark the interaction of the radio plasma with the ambient gas.
Based on our observations, we propose that the emission gas in the inner 5″ of NGC 4151 is, to first order, produced by the central source’s photoionization of the ambient gas located in the disk of the galaxy, and therefore its kinematic behaviour is mainly planar rotation, determined by the dominant potential in the nuclear region (either a very concentrated but extended mass distribution or a central Massive Dark Object). The ionizing photons escape along a broad cone which grazes the disk, as suggested by Pedlar et al. (1992), and Robinson et al. (1994). From our data we can see that the systematic outflow previously remarked on in the literature (e.g., Schulz 1990) as dominating the kinematics of the emission gas in the NLR of NGC 4151 can be understood as an effect of the lack of spatial resolution of the ground based data. When the spectra of the individual clouds are obtained, it is possible to decouple the gas that is in general rotation in the disk under the gravitational influence of the central mass concentration from that which is being entrained and swept along by the radio plasma, as it plunges through the galactic disk. This scenario has already been suggested by Vila-Vilaró et al. (1995), based on a higher (0.34 arcsec/pixel and $`\sim `$ 1″ seeing) spatial resolution ground-based spectrum oriented along PA = 51°, where they found that the emission within 3″ from the active nucleus can be decomposed into one blue-shifted component located mainly SW of the nucleus, and a second system that they describe as appearing “to link the blue-shifted ENLR SW of the nucleus with the red-shifted emission to the NE. It is conceivable that it represents a continuation of the ENLR velocity field and hence traces the galactic rotation curve within the NLR.” Mediavilla et al. (1992) also remarked on the existence of a gradual connection between the kinematic and physical properties of the galactic environment and the (ground-based) unresolved inner NLR.
We also note that if the scenario of an outflow along the edges of the ionization cone is invoked, it is necessary for the line of sight to be located outside the cone, or systematically blue-shifted components would be projected on both SW and NE sides, which is not observed. A narrow cone, however, implies a $`\sim `$ 25° misalignment between the outer ENLR and the direction of the radio jet, and therefore a geometry that is distinct from the “standard” unified model. Pedlar et al. (1993) first remarked on the presence of small, systematic changes in the jet orientation, since the position angle of the jet in the inner 3″ SW oscillates between 254° and 263° (74° to 83°), with an approximate mirror symmetry to the NE side. The abrupt change of 55° in the direction of the radio emission on the milliarcsecond scale as seen in the VLBA radio maps of Ulvestad et al. (1998) suggests that effects such as precession or warping are very likely present in the collimating structure around the central source (Pringle 1997), but the fact that the arcsecond-scale radio jet is aligned within 10° indicates that these effects would be transient and their impact on the alignment and morphology of the NLR/ENLR/radio emission is difficult to determine.
The presence of the variable blue-shifted optical and UV absorption lines is clear evidence that a gas outflow is present in the nuclear region of NGC 4151. The high resolution GHRS spectra presented by Weymann et al. (1997) show the existence of several distinct systems with outflow velocities from 300 to 1600 km s<sup>-1</sup> with respect to the nucleus, but the material responsible for these features is very likely to be located inside 1 pc of the active nucleus and the actual geometry and dynamics of the clouds is not known (Espey et al. 1998). Although tempting, to associate the bulk kinematics of the emission-line gas of the NLR on scales of several to hundreds of parsecs with the nuclear outflowing gas is not consistent with our data, since we observe very localized clouds with both blue- and red-shifted emission within the velocity range quoted for the outflows above, while the bulk of the gas can be well described by a planar rotation model.
## 4 Modeling the Rotation Curve
To study the gas velocity field, we concentrate on the narrowest, closest to the systemic velocity, and most frequently the brightest Gaussian component (the filled triangles in Figures 6 to 6) as representative of the “main” NLR emission. The same argument was used to define the ground-based rotation curve (see Section 3.1). In the ENLR, the general behaviour of the velocity field connects smoothly with that of the neutral gas on the kiloparsec scale (e.g., Vila-Vilaró et al. 1995). Here, we proceed on the assumption that the gas represented by the “main” component of the velocity field we isolated in the HST data is, to a first approximation, also participating in the general rotation of the gas in the galactic disk, since it also connects reasonably well with the extended emission, as can be seen in Figure 6.
### 4.1 The Model
We have used the Bertola et al. (1991) analytic expression, which provides a simple parametric representation for particles (gas or stars) on circular orbits in a plane, in the form:
$$V_c(r)=V_{sys}+\frac{Ar}{(r^2+C_o^2)^{p/2}}$$
(1)
where $`V_{sys}`$ is the systemic velocity, $`r`$ is the radius in the plane of the disk and $`A`$, $`C_o`$, and $`p`$ are parameters that define the amplitude and shape of the curve. If $`v(R,\mathrm{\Psi })`$ is the radial velocity at a position $`(R,\mathrm{\Psi })`$ in the plane of the sky, where $`R`$ is the projected radial distance from the nucleus and $`\mathrm{\Psi }`$ its corresponding position angle, we have:
$$v_{mod}(R,\mathrm{\Psi })=V_{sys}+\frac{AR\mathrm{cos}(\mathrm{\Psi }-\mathrm{\Psi }_o)\mathrm{sin}i\mathrm{cos}^pi}{\{R^2\eta +C_o^2\mathrm{cos}^2i\}^{p/2}}$$
(2)
where
$$\eta \equiv [\mathrm{sin}^2(\mathrm{\Psi }-\mathrm{\Psi }_o)+\mathrm{cos}^2i\,\mathrm{cos}^2(\mathrm{\Psi }-\mathrm{\Psi }_o)]$$
where $`i`$ is the inclination of the disk ($`i=0`$° for a face-on disk) and $`\mathrm{\Psi }_o`$ the position angle of the line of nodes.
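In practice Eqs. (1)–(2) translate into a compact function of the sky coordinates and the six parameters; a straightforward transcription (a sketch, not the authors' code) is:

```python
# Projected circular-rotation velocity field of Eqs. (1)-(2); angles in degrees.
import numpy as np

def v_model(R, Psi, Vsys, A, C_o, p, inc, Psi_o):
    inc, dPsi = np.radians(inc), np.radians(Psi - Psi_o)
    eta = np.sin(dPsi)**2 + np.cos(inc)**2 * np.cos(dPsi)**2
    num = A * R * np.cos(dPsi) * np.sin(inc) * np.cos(inc)**p
    den = (R**2 * eta + C_o**2 * np.cos(inc)**2)**(p/2.0)
    return Vsys + num/den

# Example call with purely illustrative parameter values.
print(v_model(R=10.0, Psi=48.0, Vsys=0.0, A=200.0, C_o=7.0, p=1.0, inc=21.0, Psi_o=33.9))
```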
### 4.2 The Fitting
We used a Levenberg-Marquardt non-linear least-squares algorithm to fit the above model. The various parameters are determined simultaneously by minimizing the residuals $`\mathrm{\Delta }v=(v_{obs}-v_{mod})`$, with $`v_{mod}(R,\mathrm{\Psi };A,C_o,p,i)`$ and $`v_{obs}(R,\mathrm{\Psi })`$ being the model and observed radial velocities at the position $`(R,\mathrm{\Psi })`$ in the plane of the sky, respectively.
The rotation curve expressed in Equation 2, while giving a simple representation of the gas kinematics, has a few shortcomings from the point of view of a minimization procedure. The two projection angles, $`i`$ and $`\mathrm{\Psi }_o`$ can be independently determined only if data are available for more than one position angle in the galaxy. In a single run of the programme, they are also strongly dependent on the initial values provided to the algorithm.
The parameters $`A`$ and $`i`$ are strongly coupled when determining the amplitude of the rotation curve, so equally acceptable fits can be obtained with a large $`A`$ and a small $`i`$ or vice-versa. This situation is more acute for $`p\sim 1`$, and the two parameters tend to decouple for larger values of the exponent. The parameter $`C_o`$ is determined mainly by the steepness of the inner part of the rotation curve, implying that the use of off-nuclear slit positions will tend to push its value to larger radii simply by the absence of data corresponding to smaller values of $`R`$. On the other hand, the exponent $`p`$ is determined by the outer parts of the rotation curve, and its value is expected to be between 1 for a “dark halo” potential, and 1.5 for a system with finite mass contained within the “turn-over” radius. Evidently, data points for larger values of $`R`$ will tend to reflect the potential generated by the mass distribution on larger scales.
To test the algorithm, we used the original data (with added errors of 5-10%) from Bertola et al. (1991) on NGC 5077 and followed the same procedure described in their paper. Our resulting model agrees with theirs within an rms of 8 km s<sup>-1</sup>, with the final parameters differing by less than 5%. Also, applying the algorithm to an artificial rotation curve with spatial sampling and added errors similar to those of the NGC 4151 data retrieves the original parameter set to better than 1%.
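The recovery test described above is easy to reproduce in outline. The following sketch (not the original code) generates an artificial rotation curve from Eq. (2) with arbitrary "true" parameters, adds noise, and refits it with a Levenberg–Marquardt least-squares routine, keeping the projection angles fixed as is done later in the text.

```python
# Generate a synthetic rotation curve from Eq. (2) and recover its parameters.
import numpy as np
from scipy.optimize import least_squares

def v_model(R, Psi, Vsys, A, C_o, p, inc, Psi_o):
    inc, dPsi = np.radians(inc), np.radians(Psi - Psi_o)
    eta = np.sin(dPsi)**2 + np.cos(inc)**2 * np.cos(dPsi)**2
    return Vsys + (A*R*np.cos(dPsi)*np.sin(inc)*np.cos(inc)**p
                   / (R**2*eta + C_o**2*np.cos(inc)**2)**(p/2.0))

rng  = np.random.default_rng(1)
R    = np.linspace(0.5, 30.0, 60)                          # arcsec
Psi  = np.where(np.arange(60) % 2 == 0, 48.0, 228.0)       # both sides of the slit
true = dict(Vsys=0.0, A=200.0, C_o=7.0, p=1.0, inc=21.0, Psi_o=33.9)
v_obs = v_model(R, Psi, **true) + rng.normal(0.0, 5.0, R.size)

# Fit Vsys, A, C_o and p, with the projection angles held fixed.
resid = lambda q: v_model(R, Psi, q[0], q[1], q[2], q[3], 21.0, 33.9) - v_obs
fit = least_squares(resid, x0=[10.0, 150.0, 5.0, 1.2], method="lm")
print("recovered Vsys, A, C_o, p:", np.round(fit.x, 2))
```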
#### 4.2.1 The Ground Based data set
To derive the projection angles of the galaxy’s disk, we used the ground based data listed on Table 1, and excluded from the fit the points within 5″ from the active nucleus since the spatial resolution does not allow us to separate the “main” rotating component from the highly disturbed gas interacting with the radio jet in the NLR.
Initially, the data from the three slit positions at PA = 48° were fitted with $`A`$, $`p`$, $`C_o`$, $`i`$, $`\mathrm{\Psi }_o`$, and $`V_{sys}`$ as free parameters, with $`V_{sys}`$ having the same value for all data points. Then, to take into account any residual zero point velocity offsets between the data sets, each position angle was separately fitted allowing $`V_{sys}`$ to vary, but keeping the remaining parameters fixed to the values obtained before. The resulting systemic velocity for each position was subtracted from the observed values and the whole data set refitted with now five free parameters. Once convergence was achieved, the new model was used to obtain the value of $`V_{sys}`$ at each slit position, and the procedure repeated. The model was found to be stable at the third iteration, and the velocity offsets between data sets were smaller than 10 km s<sup>-1</sup>.
Finally, the effect of the initial guess on the parameters was explored by running the program over a wide Monte Carlo search interval for each parameter, while the others were given their best fit values as the initial guesses. The resulting ranges in the best-fit parameters are shown in Table 3. They tend to cluster in two families characterized by different values of the position angle of the line of nodes ($`\mathrm{\Psi }_o`$). Since the least-squares minimization is unable to select between them, the two average models for these families are also listed as Models A and B and plotted in Figure 6. There we see that our data alone are not enough to allow us to discard either of these models, so we choose the value of $`\mathrm{\Psi }_o`$ as 33°.9, which is closer to the results quoted in the literature ($`\mathrm{\Psi }_o`$ $`=`$ 29°.1 $`\pm `$ 2.4 at R $`=`$ 10″ from H 1 21 cm observations – Pedlar et al. (1992); $`\mathrm{\Psi }_o`$ $`\approx `$ 34°– 41° at R $`=`$ 2″.5 from H$`\alpha `$$`+`$\[N 2\] measurements – Mediavilla & Arribas (1995)). We stress that the final model obtained from the HST data is essentially insensitive to the exact choice of value within this narrow range, and therefore for simplicity, in the rest of this paper we present only the analysis carried out using Model A.
Other than that, the final solution is stable in all parameters: the inclination of the disk, $`i`$ = 21°, agrees very well with other photometric and kinematic determinations (Simkin 1975; Bosma et al. 1977; Pedlar et al. 1992); the rotation curve derived from the emission gas in the ENLR is essentially flat for R $`\gtrsim `$ 15″ ($`\approx `$ 1 kpc), as observed from H 1 and optical data (Pedlar et al. 1992; Robinson et al. 1994; Vila-Vilaró et al. 1995; Asif et al. 1997); the amplitude $`A`$ is typical of the values observed for normal galaxies with similar absolute magnitude and Hubble type (Rubin et al. 1985). On the other hand, the parameter $`C_o`$ $`\approx `$ 440 pc, which for $`p`$ = 1 is the radius at which the velocity reaches 70% of its maximum value, is smaller than the typical value of $`\sim `$ 1.0 kpc, indicating a large central mass concentration.
Using the Model A above to obtain the predicted rotation velocity at the positions sampled by the PA = 138° data, we found that the data scatter around the model without much evidence of an ordered velocity field. This is not surprising for the three inner positions (2″.5 NE, Nucleus, 2″.5 SW) where perturbations can be induced by the radio jet plunging through the ambient gas. For the outermost positions, the observed points present much shallower gradients than expected from the model, with values smaller by 15 – 50 km s<sup>-1</sup>. The presence of turbulence in the velocity field across the outer knots has already been noted by other authors, and explained as effects of the expansion of the ionization front that produces the line emission (Asif et al. 1997) or of cloud-cloud collisions where the gas streaming in from the leading edge of the bar meets the one trapped in the inner Lindblad resonance orbits (Robinson et al. 1994; Vila-Vilaró et al. 1995).
Therefore, in agreement with previous works, we conclude that the kinematics of the ionized gas in the ENLR of NGC 4151 beyond a distance of about 0.5 kpc is dominated by the general rotation of the galactic disk, with significant perturbations present in the emission-line knots due to a non-circular or non-planar velocity component associated with gas turbulence, effects of the ionization front and/or of the presence of the galactic bar.
Under the hypothesis that the emission in the inner 5″ of the NLR of NGC 4151 is produced by gas rotating in the disk of the galaxy, ionized mainly by the central source and kinematically disturbed by the interaction with the radio jet, we now assume that the projection of the velocity field in the plane of the sky is determined by the same geometrical parameters ($`i`$ and $`\mathrm{\Psi }_o`$) as obtained from the ENLR rotation curve, and use the same procedure as above to fit the FOC f/48 data.
Notice that we do not expect the other parameters of the fit to be the same or even similar to those obtained for the large scale rotation, since the high spatial resolution of the HST data is sampling the gas whose behaviour is governed by the very inner part of the gravitational potential well, while the ENLR gas reacts to the mass distribution on kpc scales.
#### 4.2.2 The 1996 FOC f/48 \[O 3\] data set
As before, the full data set was first fitted with $`A`$, $`p`$, $`C_o`$, and $`V_{sys}`$ as free parameters, while $`i`$ and $`\mathrm{\Psi }_o`$ were kept fixed at 21° and 33°.9, respectively, and with $`V_{sys}`$ having the same value for all data points; then each slit position was separately fitted with the resulting model but allowing $`V_{sys}`$ to vary. As a double check of the result, we also obtained a fit for each slit position allowing all four parameters to vary, and found that the values of $`V_{sys}`$ for the individual “best” and constrained fits agree well within the errors. As explained in Macchetto et al. (1997), the repositioning of the FOC spectrographic mirror, which moves between flat-field and source exposures, can cause a shift in the zero point of the wavelength scale. If one of the individual data sets is shifted up or down relative to the others, the simultaneous fit to all data will result in a weighted value for $`V_{sys}`$, and a larger value for $`A`$ than would be expected from a uniform wavelength scale. We found a $`\sim `$ 10 km s<sup>-1</sup> shift between the systemic velocity for position PA47\_1 and both PA47\_2 and PA47\_4, obtained in the same orbit, and a $`\sim `$ 50 km s<sup>-1</sup> shift between them and PA47\_3, the IntAq spectrum.
The resulting systemic velocity for each slit position was then subtracted from the observed values and the whole data set refitted with $`A`$, $`p`$, and $`C_o`$ as free parameters. Finally, the effect of the initial guess on the parameters was explored as for the ground based data, and the resulting intervals in the parameters are listed in Table 4, together with the average model (“fit”) and the one obtained using only the PA47\_1 spectra (“nucleus”) which, as discussed in Section 4.2, would be more sensitive to the actual value of $`C_o`$. These two models are plotted in Figure 6 as full and dotted lines, respectively. Notice that the points represented as open triangles in Figure 6a had been excluded from the final fit. Their projected position corresponds to where the radio jet crosses the slit and therefore we suspect their high velocity can be due to jet entrainment. If included in the fit, their effect is to increase the total amplitude $`A`$ of the curve, with little change in the other parameters. The most striking result that emerges from our analysis is that the exponent $`p`$ of Eq. 1 changes from 1 to 1.5, indicating that the potential goes from “dark halo” at the ENLR distances to dominated by the central mass concentration in the interval 1″$`<`$ R $`<`$ 4″, with the gas rotation curve presenting an almost Keplerian fall-off in this range of radii.
#### 4.2.3 The 1995 and 1996 FOC f/48 \[O 2\] data sets
The same procedure as for the 1996 \[O 3\] data set was repeated using the rotation curves obtained from the 1995 and 1996 \[O 2\] $`\lambda `$3727 emission-line measurements. For the 1995 data only the two innermost positions (PA40\_2 and PA40\_3) were used. The three data sets give similar results, shown in Table 4 and plotted in Figures 6 and 6 for the \[O 2\] $`\lambda `$3727 1996 and 1995 data, respectively. The NE side of the 0″.57 SE rotation curve is systematically blueshifted with respect to the model in the inner 1″.2 from the nucleus. This effect can be associated with the expansion of the radio component C5 (see Figure 2 of Paper 1), which is localized just NW of the emission sampled by the slit. The low S/N of the 1995 \[O 2\] $`\lambda `$3727 data does not allow us to completely separate the disturbed components, but one high velocity component can be seen at the edge of this region in Figure 6.
### 4.3 The HST STIS slitless data
In March 1997, slitless CCD spectra of the NLR of NGC 4151 were taken with the Space Telescope Imaging Spectrograph (STIS) newly installed on board HST. A description of the data is given in Hutchings et al. (1998). Although the \[O 3\] $`\lambda `$5007 spectral image has comparable spatial resolution to our data, it suffers from considerable confusion in the inner 2 \- 3″, where the complex velocity systems lead to an ambiguity between spatial and velocity information. The data are, nevertheless, worthwhile for our purpose since they contain velocity information extending to larger distances from the nucleus and covering a wider area of the NLR than our spectra. We have used the data given in Table 1 of the Hutchings et al. paper, and the same archival WFPC2 \[O 3\] $`\lambda `$5007 image to derive the position relative to the nucleus of the individual clouds identified in their work. We stress that the position ascribed by us to the clouds is uncertain by at least 2 WFPC2 pixels (0″.1). The high velocity systems detected in the STIS data were excluded and the remaining data within 6″ of the nucleus were fitted using the same procedure as in the previous Sections, with $`V_{sys}`$ set to zero (the velocities in the STIS data are given relative to the nuclear \[O 3\] $`\lambda `$5007 emission).
The resulting set of parameters is essentially identical to the one obtained for the fit of the FOC f/48 \[O 3\] $`\lambda `$5007 nuclear position alone (model \[O 3\] $`\lambda `$5007 Nuc. in Table 4): $`A`$ = 998 km s<sup>-1</sup>; $`p`$ = 1.498; $`C_o`$ = 0″.75. The bidimensional representation of these two fits is presented in Figure 6, where the STIS data and corresponding model, and the \[O 3\] $`\lambda `$5007 1996 FOC f/48 data and the Nucleus model of Table 4, are shown on the top and bottom panels, respectively. The velocity contours are 0 to 240 km s<sup>-1</sup> in steps of 40 km s<sup>-1</sup> to the NE and the negative equivalents to the SW. The residuals, defined as $`V_{res}=V_{rot}-V_{model}`$, are negative (the model overestimates the observed velocity) on the left, and positive (the model underestimates the observations) on the right panels. There is no preferential distribution of positive and negative residuals as would be expected if the velocity field of the gas were partly due to a large scale radial component such as a bulk outflow. On the other hand, the largest residuals tend to concentrate around the position of the brightest radio knots, indicating that the expansion of the hot plasma introduces a significant non-planar component to the motion of the ambient gas.
## 5 Discussion
### 5.1 The rotation curve
Our INT data confirmed the general result from both H I and other ground-based observations that the gas at large scales is in planar rotation, dominated by the galactic potential. As we approach the nucleus, it is possible to see in both the INT and HST data the perturbations introduced by the interaction of the radio jet with the ambient gas superimposed on the rotation component, which, however, still dominates the bulk velocity field.
The combined effect of higher spatial resolution and profile decomposition is apparent when we compare our data with the rotation curve of Robinson et al. (1994), obtained along PA = 51°. The INT curve has a smaller total amplitude in the inner 5″, 180 km s<sup>-1</sup> instead of $``$ 350 km s<sup>-1</sup>. Furthermore, the asymmetry remarked on previously in the ground-based data, where the SW peak was blue-shifted by 200 km s<sup>-1</sup>, and the NE red-shifted by 50 km s<sup>-1</sup> with respect to the ENLR, disappears. The line profiles in Figure 6 show that such an asymmetry is created by the blending of the high velocity clouds with the disk component. The behaviour of the line FWHM also indicates that when crossing the NLR boundary, the single-component profile jumps from 420 km s<sup>-1</sup> to a little over 600 km s<sup>-1</sup> (see Figure 16b of Robinson et al. 1994), while our “main component” is always below 300 km s<sup>-1</sup>. This value is also the FWHM for the rotational component in the FOC f/48 data.
One direct consequence of the order of magnitude improvement in spatial resolution provided by the FOC f/48 data is that the double-peaked, blue-shifted lines previously observed 3 – 4″ SW of the nucleus are not evidence of a systematic bulk outflow but rather a complex profile created by the gas in the disk plus the disturbed component entrained by the radio jet. The presence of both blue- and red-shifted high velocity components in the HST spectrum of individual clouds reinforces the argument that the interaction of the jet with the ISM of the host galaxy, together with the collimated radiation from the central source, are key factors in determining the morphology of the NLR, but not its bulk kinematics. Using the simple assumption that the gas in the NLR is in the same plane as the outer ENLR, we have been able to obtain a good parametrization for the rotation curve within 4″ of the central source. Our results indicate that the kinematics of the NLR gas in the interval 30 $``$ R $``$ 250 pc is best represented by a thin disk in rotation around the mass distribution contained within the turn-over radius ($``$ 0.5″). The observed velocity of the gas at this radius, taken as purely Keplerian, would imply a mass of M $``$ $`10^9`$ M within the inner 60 pc.
Although our nuclear spectrum was saturated in the inner 0″.3 and the dispersion of the data points within the turn-over radius is rather large, the velocities measured there are smaller than further out, an effect that cannot be ascribed to the spatial PSF, and that indicates that the $`10^9`$ M above is not in a point mass but in an extended distribution. If we assume spherical symmetry for such mass distribution, our first measured velocity points to the NE and SW of the nucleus would imply an enclosed mass of $``$ 5 $`\times `$ $`10^8`$ M within R $``$ 0″.34 (20 pc; $`v_{obs}`$ $``$ 100km s<sup>-1</sup>) or $``$ 5 $`\times `$ $`10^7`$ M within R $``$ 0″.15 (10 pc; $`v_{obs}`$$``$ 40 km s<sup>-1</sup>), respectively. This value would be an upper limit to any point mass that could be present there. Further observations with higher (or as high) spatial resolution but with better sampling in the inner region of the rotation curve and higher spectral resolution should be able to further constrain the nature of the mass distribution.
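The order of magnitude of the masses quoted above can be verified with a simple circular-velocity estimate, sketched below. It assumes purely planar circular rotation with the observed velocities deprojected by $`\mathrm{sin}i`$ for $`i`$ = 21°; this is only a rough consistency check, not the procedure used in our fits, and it ignores any non-circular or non-planar motions.

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
PC = 3.086e16      # m

def enclosed_mass(v_obs_kms, R_pc, i_deg=21.0):
    """Mass (solar masses) enclosed within radius R for circular rotation,
    with the observed velocity deprojected by sin(i)."""
    v = (v_obs_kms / np.sin(np.radians(i_deg))) * 1.0e3   # m/s
    return v**2 * R_pc * PC / G / M_SUN

print(enclosed_mass(100.0, 20.0))   # ~4e8, consistent with the ~5e8 quoted in the text
print(enclosed_mass(40.0, 10.0))    # ~3e7, consistent with the ~5e7 quoted in the text
```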
The presence of a large central mass concentration was already indicated by the small value of the $`C_o`$ parameter when fitting the kinematics of the ENLR gas (see section 4.2.1), and the change of behaviour of the resulting rotation curve (from flat outside to Keplerian-like in the inner regions) indicates that it dominates the kinematics of the NLR. This is confirmed by a quick analysis of the brightness profile from the archival HST/WFPC2 images. Assuming a mass-to-light ratio of 2, we find that the stars contribute at most $``$ 2 $`\times `$ $`10^8`$ M to the mass between 0″.5 and 4″. Comparing with a central mass of 5 $`\times `$ $`10^8`$ – $`10^9`$ M within a 0″.5 radius and considering that the circular velocity scales as the square root of the mass, the presence of an extended (bulge) component would cause the gas in the NLR to deviate from a pure Keplerian rotation curve by $``$ 10 – 20%. This value is well within the observational scatter of the data even bearing in mind the considerable uncertainties introduced by the effect of the central source on the brightness profile. Thus even though the mass inside the turnover radius is not a true point source, it is clear why the models yielded such a closely Keplerian value of $`p`$ = 1.5 in our analysis.
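The 10 – 20% figure follows directly from the scaling of the circular velocity with the square root of the enclosed mass; a quick numerical check, taking the mass estimates above at face value, is sketched below.

```python
import numpy as np

M_central = np.array([1.0e9, 5.0e8])   # mass inside the turn-over radius (solar masses)
M_stars = 2.0e8                         # stellar mass between 0".5 and 4" for M/L = 2

# Fractional increase of the circular velocity over the point-mass (Keplerian)
# value at the radius where all of M_stars is enclosed.
deviation = np.sqrt(1.0 + M_stars / M_central) - 1.0
print(deviation)   # ~0.10 and ~0.18, i.e. the quoted 10-20%
```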
The wide-angle ionizing cone model of Pedlar et al. (1993) implies that the common collimation axis of the ionizing cone and the radio jets is not perpendicular to the galactic disk. For $`i`$ $``$ 21°, and a bicone opening angle of $``$ 130° (see Figure 8 of Boksenberg et al. 1995), the angle between the collimation axis and the galaxy rotation axis is $``$ 25°, and the one between the collimation axis and the line of sight, $``$ 40°. Our model for the kinematics of the gas in the inner NLR indicates that it is still rotating in the plane of the galaxy even at distances of a few tens of parsecs from the nucleus, and therefore is not directly related to the symmetry axis of the AGN itself. Such lack of common orientation is also indicated by the radio observations of Weymann et al. (1997) at subparsec scales, and by the Fe K$`\alpha `$ profile presented by Yaqoob et al. (1995). If it is assumed that the Fe K$`\alpha `$ line is produced in the accretion disk, the models indicate that the structure within $`10^3`$ $`r_g`$, the gravitational radius of the black hole, is essentially face-on. A similar situation is seen in the recent HST infrared observations of Centaurus A (Schreier et al. 1998), where the gas disk structure at $``$ 20 pc scales is oriented along the major axis of the bulge, an indication that its geometry is set by the galaxy gravitational potential rather than by the symmetry of the AGN and its jet.
### 5.2 Evidence for rotation in other objects
Two other AGN, NGC 1068 and Mrk 3, have been spectroscopically studied with HST with enough detail and spatial resolution to carry out an analysis similar to that in this paper. While in the first object we also observe strong but localized perturbations induced by the interaction of the radio jet with the ambient gas superimposed on a more general pattern characteristic of ordered rotation (Axon et al. 1997), in Mrk 3 the gas motions in the region co-spatial with the radio jet are clearly dominated by the expansion of the cocoon of hot gas shocked and heated by the radio ejecta (Capetti et al. 1999). Such observations are in agreement with the results of Nelson & Whittle (1996), who argued that the correlations between the stellar velocity dispersion and the \[O 3\] profile in a large sample of ground-based observations of Seyferts indicate that the motions in the NLR are predominantly gravitational in nature, with objects with linear radio sources presenting broader \[O 3\] lines.
Evidence for an underlying rotational component is also found in ground-based studies of several other objects, like NGC 1365 (Hjelm & Lindblad 1996), NGC 3516 (Mulchaey et al. 1992; Arribas et al. 1997), NGC 2992 (Márquez et al. 1998; Allen et al. 1998), with a general trend for the low-ionization gas to be a better tracer of the disk component, while the high-ionization gas presents more deviant behaviour, usually associated with outflow.
We remark that all these studies used the centroid of the emission lines rather than the individual components, and that until now the observations have not had enough spatial resolution to separate the individual clouds and show whether the often noted double-peaked profile is seen everywhere (implying that the outflow is actually a wind) or is localized (as expected when the gas is entrained by the radio jet). The observations are also naturally biased toward the brighter emission-line knots, which are more likely to be disturbed, either by the interaction with the radio jets, by tidal effects, or even by the central source radiation. Obtaining good S/N data of the central regions of other AGN with high spatial resolution and even moderate spectral resolution can prove to be a useful technique for detecting the undisturbed gas and probing the nature of the central potential. Since stellar dynamical methods are rendered impotent, as the absorption lines in Seyfert spectra are inevitably filled in by the featureless continuum, this approach may provide the only means of directly determining the central object (black hole) mass in currently active nuclei.
A potentially interesting follow-up of such studies is that, if the mass of the central object M is determined using the NLR gas kinematics, the presence of continuum variability can, in principle, provide an estimate for another key missing piece of information on AGN models, the accretion rate $`\dot{\mathrm{M}}`$. Using a very simple steady-state, irradiated black-body approximation for the thermal structure of the accretion disk, Peterson et al. (1998) presented evidence of a correlation between the product (M $`\dot{\mathrm{M}}`$) and the time delay between different continuum wavebands in NGC 7469 (Wanders et al. 1997; Collier et al. 1998), the only source where significant delays have been detected. If such a correlation is found to hold for other nuclei, the two methods above would provide totally independent estimates for each of the quantities. However, viscous dissipation time scales are too long to reconcile with the observed continuum lags, which forces consideration of X-ray irradiated or composite models (Sincell & Krolik 1998; Collier et al. 1999). Therefore, with an independent measure of the central object mass, obtaining $`\dot{\mathrm{M}}`$ would be dependent on further development of the accretion disk theory itself, and to obtain it using AGN continuum variability would make it necessary to carry out high-sampling-rate, simultaneous multi-wavelength campaigns on other nearby active galaxies.
## 6 Summary
We can summarize the main results of our study of both HST and ground-based long-slit spectra of the inner and extended NLR of NGC 4151 as follows:
1. By decomposing the \[O 3\] $`\lambda `$5007 line profile into multiple Gaussian components we were able to trace the main kinematic component of the ENLR across the nuclear region, smoothly connecting the emission-line gas system with the large scale rotation defined by H I observations.
2. Individual clouds in the NLR (R $`<`$ 4″) are observed to be kinematically disturbed by the interaction with the radio jet, but underlying these perturbations the cloud system is moving in a pattern compatible with disk rotation. High velocity components (up to $`\pm `$ 1000 km s<sup>-1</sup>, relative to systemic) and broad (FWHM up to 1800 km s<sup>-1</sup>) bases are detected in the \[O 3\] $`\lambda `$5007 profile of the brightest clouds. Such regions are invariably at the edge of the radio knots, and this association, together with the overall morphology of the velocity field, leads us to propose that the main kinematic system in the inner region of NGC 4151 is still rotation in the plane of the disk, disturbed but not defined by the interaction with the radio jet and the AGN emission.
3. Fitting a simple expression for planar rotation to the data, we find that the ENLR gas (R $`>`$ 4″) presents a kinematic behaviour consistent with and well represented by rotation in the galactic disk, with characteristics similar to other normal spiral systems. We obtain $`i`$ = 21°, and $`\mathrm{\Psi }_o`$ = 34° – 43° for the inclination to the line of sight and position angle of the line of nodes of the disk, respectively. The velocity field of external knots at R $``$ 6″ and 20″ transverse to the radial direction presents evidence of non-planar or non-circular movements, probably associated with gas turbulence and streaming motions along the bar.
4. Using the same projection angles as obtained for the ENLR, the NLR emission component believed to represent the continuation of the disk velocity field was also found to be consistent with planar rotation, although disturbed by the jet, as expected. However, while the velocity field of the extended ENLR gas is dominated by the potential of the galactic bulge, presenting a flat curve at large distances, we find that the behaviour of the gas in the inner NLR is best represented by a Keplerian-like potential, with the kinematics of the gas up to 4″ dominated by the $``$ $`10^9`$ M mass concentration located within the turn-over radius of the rotation curve, at $``$ 0″.5. Our measurements inside the turn-over radius imply that this is an extended distribution rather than a point mass, and, if spherical symmetry is assumed, the innermost observed velocity still not affected by the spatial PSF gives a 5 $`\times `$ $`10^7`$ M mass concentrated within a radius of about 10 pc.
###### Acknowledgements.
C.W. thanks the Space Telescope Science Institute for the hospitality during the last two years, and acknowledges the financial support from the Brazilian institution CNPq through a Post-doctoral fellowship, and from ESA. We also thank the referee, Dr. R. Antonucci, for his very thorough reading of this paper.
|
no-problem/9901/hep-ph9901404.html
|
ar5iv
|
text
|
# Big bang nucleosynthesis limit on $`N_\nu `$
## I Introduction
The Standard Model (SM) contains only $`N_\nu =3`$ weakly interacting massless neutrinos. However the recent experimental evidence for neutrino oscillations may require it to be extended to include new superweakly interacting massless (or very light) particles such as singlet neutrinos or Majorons. These do not couple to the $`Z^0`$ vector boson and are therefore not constrained by the precision studies of $`Z^0`$ decays which establish the number of $`SU(2)_\mathrm{L}`$ doublet neutrino species to be
$$N_\nu =2.993\pm 0.011.$$
(1)
However, as was emphasized some time ago , such particles would boost the relativistic energy density, hence the expansion rate, during big bang nucleosynthesis (BBN), thus increasing the yield of <sup>4</sup>He. This argument was quantified for new types of neutrinos and new superweakly interacting particles in terms of a bound on the equivalent number of massless neutrinos present during nucleosynthesis:
$$N_\nu =3+f_{\mathrm{B},\mathrm{F}}\underset{i}{\sum }\frac{g_i}{2}\left(\frac{T_i}{T_\nu }\right)^4,$$
(2)
where $`g_i`$ is the number of (interacting) helicity states, $`f_\mathrm{B}=8/7`$ (bosons) and $`f_\mathrm{F}=1`$ (fermions), and the ratio $`T_i/T_\nu `$ depends on the thermal history of the particle under consideration . For example, $`T_i/T_\nu 0.465`$ for a particle which decouples above the electroweak scale such as a singlet Majoron or a sterile neutrino. However the situation may be more complicated, e.g. if the sterile neutrino has large mixing with a left-handed doublet species, it can be brought into equilibrium through (matter-enhanced) oscillations in the early universe, making $`T_i/T_\nu 1`$ . Moreover such oscillations can generate an asymmetry between $`\nu _\mathrm{e}`$ and $`\overline{\nu _\mathrm{e}}`$, thus directly affecting neutron-proton interconversions and the resultant yield of <sup>4</sup>He . This can be quantified in terms of the effective value of $`N_\nu `$ parametrizing the expansion rate during BBN, which may well be below 3! Similarly, non-trivial changes in $`N_\nu `$ can be induced by the decays or annihilations of massive neutrinos (into e.g. Majorons), so it is clear that it is a sensitive probe of new physics.
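As a numerical illustration of Eq. (2), the sketch below evaluates the contribution of a single new species for the cases mentioned above; the specific choices of helicity states (two for a sterile neutrino, one for a singlet Majoron) are our assumptions for the purpose of the example.

```python
def delta_N_nu(g_i, T_ratio, boson=False):
    """Contribution of one new light species to N_nu, Eq. (2):
    f * (g_i / 2) * (T_i / T_nu)^4, with f = 8/7 for bosons and 1 for fermions."""
    f = 8.0 / 7.0 if boson else 1.0
    return f * (g_i / 2.0) * T_ratio ** 4

# Species decoupling above the electroweak scale: T_i / T_nu ~ 0.465 (see text)
print(delta_N_nu(2, 0.465))               # sterile neutrino: ~0.047
print(delta_N_nu(1, 0.465, boson=True))   # singlet Majoron:  ~0.027
# Sterile neutrino brought into full equilibrium by oscillations (T_i = T_nu)
print(delta_N_nu(2, 1.0))                 # ~1.0
```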
The precise bound on $`N_\nu `$ from nucleosynthesis depends on the adopted primordial elemental abundances as well as uncertainties in the predicted values. Although the theoretical calculation of the primordial <sup>4</sup>He abundance is now believed to be accurate to within $`\pm 0.4\%`$ , its observationally inferred value as reported by different groups differs by as much as $`4\%`$. Furthermore, a bound on $`N_\nu `$ can only be derived if the nucleon-to-photon ratio $`\eta n_\mathrm{N}/n_\gamma `$ (or its lower bound) is known, since the effect of a faster expansion rate can be compensated for by the effect of a smaller nucleon density. This involves comparison of the expected and observed abundances of other elements such as D, <sup>3</sup>He and <sup>7</sup>Li which are much more poorly determined, both observationally and theoretically. The most crucial element in this context is deuterium which is supposedly always destroyed and never created by astrophysical processes following the big bang . Until relatively recently , its primordial abundance could not be directly measured and only an indirect upper limit could be derived based on models of galactic chemical evolution. As reviewed in ref., the implied lower bound to $`\eta `$ was then used to set increasingly stringent upper bounds on $`N_\nu `$ ranging from 4 downwards , culminating in one below 3 which precipitated the so-called “crisis” for standard BBN , and was interpreted as requiring new physics.
However as cautioned before , there are large systematic uncertainties in such constraints on $`N_\nu `$ which are sensitive to our limited understanding of galactic chemical evolution. Moreover it was emphasized that the procedure used earlier to bound $`N_\nu `$ was statistically inconsistent since, e.g., correlations between the different elemental abundances were not taken into account. A Monte Carlo (MC) method was developed for estimation of the correlated uncertainties in the abundances of the synthesized elements , and incorporated into the standard BBN computer code , thus permitting reliable determination of the bound on $`N_\nu `$ from estimates of the primordial elemental abundances. Using this method, it was shown that the conservative observational limits on the primordial abundances of D, <sup>4</sup>He and <sup>7</sup>Li allowed $`N_\nu \le 4.53`$ ($`95\%`$ C.L.), significantly less restrictive than earlier estimates. Similar conclusions followed from studies using maximum likelihood (ML) methods . However the use of the Monte Carlo method is computationally expensive and moreover the calculations need to be repeated whenever any of the input parameters — either reaction rates or inferred primordial abundances — are updated.
In a previous paper we presented a simple method for estimation of the BBN abundance uncertainties and their correlations, based on linear error propagation. To illustrate its advantages over the MC+ML method, we used simple $`\chi ^2`$ statistics to obtain the best-fit value of the nucleon-to-photon ratio in the standard BBN model with $`N_\nu =3`$ and indicated the relative importance of different nuclear reactions in determining the synthesized abundances. In this work we extend this approach to consider departures from $`N_\nu =3`$. We have checked that our results are consistent with those obtained independently using the MC+ML method where comparison is possible.
The essential advantage of our method is that the correlated constraints on $`N_\nu `$ and $`\eta `$ can be easily reevaluated using just a pocket calculator and the numerical tables provided, when the input nuclear reaction cross-sections or inferred abundances are known better. We have in fact embedded the calculations in a compact Fortran code, which is available upon request from the authors, or from a website . Thus observers will be able to readily assess the impact of new elemental abundance determinations on an important probe of physics beyond the standard model.
## II The Method
In this section we recapitulate the basics of our method and outline its extension to the case $`N_\nu 3`$.
### A Basic Ingredients
The method has both experimental and theoretical ingredients. The experimental ingredients are: (a) the inferred values of the primordial abundances, $`\overline{Y_i}\pm \overline{\sigma }_i`$; and (b) the nuclear reaction rates, $`R_k\pm \mathrm{\Delta }R_k`$. We normalize all the rates to a “default” set of values ($`R_k\equiv 1`$), namely, to the values compiled in Ref. , except for the neutron decay rate, which is updated to its current value .
The theoretical ingredients are: (a) the calculated abundances $`Y_i`$; and (b) the logarithmic derivatives $`\lambda _{ik}=\partial \mathrm{ln}Y_i/\partial \mathrm{ln}R_k`$. Such functions have to be calculated for generic values of $`N_\nu `$ and of $`x\equiv \mathrm{log}_{10}(\eta _{10})`$, where $`\eta _{10}=\eta /10^{-10}`$. Note that the fraction of the critical density in nucleons is given by $`\mathrm{\Omega }_\mathrm{N}h^2\simeq \eta _{10}/273`$, where $`h\simeq 0.7\pm 0.1`$ is the present Hubble parameter in units of $`100`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, and the present temperature of the relic radiation background is $`T_0=2.728\pm 0.002`$ K .
The logarithmic derivatives $`\lambda _{ik}`$ can be used to propagate possible changes or updates of the input reaction rates ($`R_k\to R_k+\delta R_k`$) to the theoretical abundances ($`Y_i\to Y_i+Y_i\lambda _{ik}\delta R_k/R_k`$). Moreover, they enter in the calculation of the theoretical error matrix for the abundances, $`\sigma _{ij}^2=Y_iY_j\sum _k\lambda _{ik}\lambda _{jk}(\mathrm{\Delta }R_k/R_k)^2`$. This matrix, added to the experimental error matrix $`\overline{\sigma }_{ij}^2=\delta _{ij}\overline{\sigma }_i\overline{\sigma }_j`$ and then inverted, defines the covariance matrix of a simple $`\chi ^2`$ statistical estimator. Contours of equal $`\chi ^2`$ can then be used to set bounds on the parameters $`(x,N_\nu )`$ at selected confidence levels.
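A minimal sketch of this construction is given below; the calculated abundances, logarithmic derivatives and fractional rate uncertainties would be supplied by the tables and code referred to in the text, and the array shapes and variable names here are our own.

```python
import numpy as np

def chi2(Y_th, lam, dR_over_R, Y_obs, sigma_obs):
    """Chi^2 for the observed abundances, with the theoretical error matrix
    propagated linearly from the reaction-rate uncertainties.
    Y_th       : (n_i,)    calculated abundances Y_i(x, N_nu)
    lam        : (n_i,n_k) logarithmic derivatives d ln Y_i / d ln R_k
    dR_over_R  : (n_k,)    fractional 1-sigma uncertainties of the rates
    Y_obs, sigma_obs : (n_i,) inferred primordial abundances and errors"""
    # sigma^2_ij = Y_i Y_j sum_k lam_ik lam_jk (Delta R_k / R_k)^2
    cov_th = np.outer(Y_th, Y_th) * ((lam * dR_over_R**2) @ lam.T)
    cov = cov_th + np.diag(sigma_obs**2)   # add the (diagonal) experimental error matrix
    d = Y_obs - Y_th
    return d @ np.linalg.solve(cov, d)
```

Scanning this quantity over a grid in $`(x,N_\nu )`$ and drawing contours of constant $`\chi ^2`$ then gives the confidence regions discussed in Section IV.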
In Ref. we gave polynomial fits for the functions $`Y_i(x,N_\nu )`$ and $`\lambda _{ik}(x,N_\nu )`$ for $`x[0,1]`$ and $`N_\nu =3`$. The extension of our method to the case $`N_\nu 3`$ (say, $`1N_\nu 5`$) is, in principle, straightforward, since it simply requires recalculation of the functions $`Y_i`$ and $`\lambda _{ik}`$ at the chosen value of $`N_\nu `$. However, it would not be practical to present, or to use, extensive tables of polynomial coefficients for many different values of $`N_\nu `$. Therefore, we have devised some formulae which, to good accuracy, relate the calculations for arbitrary values of $`N_\nu `$ to the standard case $`N_\nu =3`$, thus reducing the numerical task dramatically. Such approximations are discussed in the next subsection.
### B Useful Approximations
As is known from previous work , the synthesized elemental abundances $`\mathrm{D}/\mathrm{H}`$, $`{}_{}{}^{3}\mathrm{He}/\mathrm{H}`$, and $`{}_{}{}^{7}\mathrm{Li}/\mathrm{H}`$ (i.e., $`Y_2`$, $`Y_3`$, and $`Y_7`$ in our notation) are given to a good approximation by the quasi-fixed points of the corresponding rate equations, which formally read
$$\frac{\mathrm{d}Y_i}{\mathrm{d}t}\propto \eta \underset{+,-}{\sum }Y\times Y\times \sigma v_T,$$
(3)
where the sum runs over the relevant source $`(+)`$ and sink $`(-)`$ terms, and $`\sigma v_T`$ is the thermally-averaged reaction cross section. Since the temperature of the universe evolves as $`dT/dt\propto -T^3\sqrt{g_{*}}`$, with the number of relativistic degrees of freedom, $`g_{*}=2+(7/4)(4/11)^{4/3}N_\nu `$ (following $`e^+e^{-}`$ annihilation), the above equation can be rewritten as
$$\frac{\mathrm{d}Y_i}{\mathrm{d}T}\propto \frac{\eta }{g_{*}^{1/2}}T^{-3}\underset{+,-}{\sum }Y\times Y\times \sigma v_T,$$
(4)
which shows that $`Y_2`$, $`Y_3`$, and $`Y_7`$ depend on $`\eta `$ and $`N_\nu `$ essentially through the combination $`\eta /g_{*}^{1/2}`$. Thus the calculated abundances $`Y_2`$, $`Y_3`$, and $`Y_7`$ (as well as their logarithmic derivatives $`\lambda _{ik}`$) should be approximately constant for
$$\mathrm{log}\eta -\frac{1}{2}\mathrm{log}g_{*}=\mathrm{const},$$
(5)
as we have verified numerically.
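Equation (5) can also be checked with a short calculation: at fixed $`\eta /g_{*}^{1/2}`$, a change $`\mathrm{\Delta }N_\nu `$ corresponds to a shift in $`x`$ of one half the change in $`\mathrm{log}_{10}g_{*}`$, which near $`N_\nu =3`$ is close to 0.03 per unit $`\mathrm{\Delta }N_\nu `$ (the coefficient $`c_i`$ introduced below). The sketch below evaluates this.

```python
import numpy as np

def g_star(N_nu):
    """Relativistic degrees of freedom after e+e- annihilation (see text)."""
    return 2.0 + (7.0 / 4.0) * (4.0 / 11.0) ** (4.0 / 3.0) * N_nu

def x_shift(dN):
    """Shift in x = log10(eta_10) that keeps eta / sqrt(g_*) fixed, Eq. (5)."""
    return 0.5 * (np.log10(g_star(3.0 + dN)) - np.log10(g_star(3.0)))

print(x_shift(1.0))              # ~0.028 for Delta N_nu = 1
print(x_shift(1e-3) / 1e-3)      # local slope at N_nu = 3: ~0.029
```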
Equation (5), linearized, suggests that the values of $`Y_i`$ and of $`\lambda _{ik}`$ for $`N_\nu =3+\mathrm{\Delta }N_\nu `$ can be related to the case $`N_\nu =3`$ through an appropriate shift in $`x`$:
$`Y_i(x,3+\mathrm{\Delta }N_\nu )`$ $`\simeq `$ $`Y_i(x+c_i\mathrm{\Delta }N_\nu ,3),`$ (6)
$`\lambda _{ik}(x,3+\mathrm{\Delta }N_\nu )`$ $`\simeq `$ $`\lambda _{ik}(x+c_i\mathrm{\Delta }N_\nu ,3),`$ (7)
where the coefficient $`c_i`$ is estimated to be $`\simeq 0.03`$ from Eq. (5) (at least for small $`\mathrm{\Delta }N_\nu `$). In order to obtain a satisfactory accuracy in the whole range $`(x,N_\nu )\in [0,1]\times [1,5]`$, we allow up to a second-order variation in $`\mathrm{\Delta }N_\nu `$, and for a rescaling factor of the $`Y_i`$’s:
$`Y_i(x,3+\mathrm{\Delta }N_\nu )`$ $`=`$ $`(1+a_i\mathrm{\Delta }N_\nu +b_i\mathrm{\Delta }N_\nu ^2)Y_i(x+c_i\mathrm{\Delta }N_\nu +d_i\mathrm{\Delta }N_\nu ^2,3),`$ (8)
$`\lambda _{ik}(x,3+\mathrm{\Delta }N_\nu )`$ $`=`$ $`\lambda _{ik}(x+c_i\mathrm{\Delta }N_\nu +d_i\mathrm{\Delta }N_\nu ^2,3).`$ (9)
We have checked that the above formulae (with coefficients determined through a numerical best-fit) link the cases $`N_\nu 3`$ to the standard case $`N_\nu =3`$ with very good accuracy.
As regards the $`{}_{}{}^{4}\mathrm{He}`$ abundance ($`Y_4`$ in our notation), a semi-analytical approximation also suggests a relation between $`x`$ and $`N_\nu `$ similar to Eq. (5), although with different coefficients . Indeed, functional relations of the kind (8,9) work well also in this case. However, in order to achieve higher accuracy and, in particular, to match the result of the recent precision calculation of $`Y_4`$ which includes all finite temperature and finite density corrections , we also allow for a rescaling factor for the $`\lambda _{4k}`$’s.
We wish to emphasize that the validity of our prescription for the evaluation of the BBN uncertainties and for the $`\chi ^2`$ statistical analysis does not depend on the approximations discussed above. The semi-empirical relations (8,9) are only used to enable us to provide the interested reader with a simple and compact numerical code . This allows easy extraction of joint fits to $`x`$ and $`N_\nu `$ for a given set of elemental abundances, without having to run the full BBN code, and with no significant loss in accuracy.
## III Primordial Light Element Abundances
The abundances of the light elements synthesized in the big bang have been subsequently modified through chemical evolution of the astrophysical environments where they are measured . The observational strategy then is to identify sites which have undergone as little chemical processing as possible and rely on empirical methods to infer the primordial abundance. For example, measurements of deuterium (D) can now be made in quasar absorption line systems (QAS) at high red shift; if there is a “ceiling” to the abundance in different QAS then it can be assumed to be the primordial value. The helium (<sup>4</sup>He) abundance is measured in H II regions in blue compact galaxies (BCGs) which have undergone very little star formation; its primordial value is inferred either by using the associated nitrogen or oxygen abundance to track the stellar production of helium, or by simply observing the most metal-poor objects . (We do not consider <sup>3</sup>He which can undergo both creation and destruction in stars and is thus unreliable for use as a cosmological probe.) Closer to home, the observed uniform abundance of lithium ($`{}_{}{}^{7}\mathrm{Li}`$) in the hottest and most metal-poor Pop II stars in our Galaxy is believed to reflect its primordial value .
However as observational methods have become more sophisticated, the situation has become more, instead of less, uncertain. Large discrepancies, of a systematic nature, have emerged between different observers who report, e.g., relatively ‘high’ or ‘low’ values of deuterium in different QAS, and ‘low’ or ‘high’ values of helium in BCG, using different data reduction methods. It has been argued that the Pop II lithium abundance may in fact have been significantly depleted down from its primordial value , with observers arguing to the contrary . We do not wish to take sides in this matter and instead consider several combinations of observational determinations, which cover a wide range of possibilities, in order to demonstrate our method and obtain illustrative best-fits for $`\eta `$ and $`N_\nu `$. The reader is invited to use the programme we have provided to analyse other possible combinations of observational data.
### A Data Sets
The data sets we consider are tabulated in Table I. Below we comment in detail on our choices.
* Data Set A: This is taken from Ref. who performed the first detailed MC+ML analysis to determine $`\eta `$ and $`N_\nu `$ and is chosen essentially for comparison with our method, as in our previous paper .
Their adopted value of the primordial deuterium abundance, $`\overline{Y_2}=1.9\pm 0.4\times 10^{-4}`$, was based on early observations of a QAS at redshift $`z=3.32`$ towards Q0014+813 which suggested a relatively ‘high’ value , and was consistent with limits set in other QAS, but in conflict with the much lower abundance found in a QAS at $`z=3.572`$ towards Q1937-1009 . More recently, observations of a QAS at $`z=0.701`$ towards Q1718+4807 have also yielded a high abundance as we discuss later.
The primordial helium abundance was taken to be $`\overline{Y_4}=0.234\pm 0.002\pm 0.005`$ from linear regression to zero metallicity in a set of 62 BCGs , based largely on observations which gave a relatively ‘low’ value .
Finally the primordial lithium abundance $`\overline{Y_7}=1.6\pm 0.36\times 10^{-10}`$ was taken from the Pop II observations of Ref., assuming no depletion.
* Data Set B: This corresponds to the alternative combination of ‘low’ deuterium and ‘high’ helium, as considered in our previous work , with some small changes.
The primordial deuterium abundance, $`\overline{Y_2}=3.4\pm 0.3\times 10^{-5}`$, adopted here is the average of the ‘low’ values found in two well-observed QAS, at $`z=3.572`$ towards Q1937-1009 , and at $`z=2.504`$ towards Q1009+2956 .
The primordial helium abundance, $`\overline{Y_4}=0.245\pm 0.004`$, is taken to be the average of the values found in the two most metal-poor BCGs, I Zw 18 and SBS 0335-052, from a new analysis which uses the helium lines themselves to self-consistently determine the physical conditions in the H II region, and specifically excludes those regions which are believed to be affected by underlying stellar absorption . For example these authors demonstrate that there is strong underlying stellar absorption in the NW component of I Zw 18, which has been included in earlier analyses .
The primordial lithium abundance $`\overline{Y_7}=1.73\pm 0.21\times 10^{-10}`$ is from Ref., again assuming no depletion. (Note that the uncertainty was incorrectly reported as $`\pm 0.12\times 10^{-10}`$ in Ref., as used in our previous work .)
* Data Set C: It has been suggested that the discordance between the ‘high’ and ‘low’ values of the deuterium abundance reported in QAS may be considerably reduced if the analysis of the H+D profiles accounts for the correlated velocity field of bulk motion, i.e. mesoturbulence, rather than being based on multi-component microturbulent models. It is then found that a value of $`\overline{Y_2}=(3.5-5.2)\times 10^{-5}`$ is compatible simultaneously (at 95% C.L.) with observations of the QAS at $`z=0.701`$ towards Q1718+4807 (in which a ‘high’ value was reported ), and observations of the QAS at $`z=3.572`$ towards Q1937-1009 and at $`z=2.504`$ towards Q1009+2956 (in which a ‘low’ value was reported ). We adopt this value, along with the same helium abundance as in Set B.
It has also been argued that the lithium abundance observed in Pop II stars has been depleted down from a primordial value of $`\overline{Y_7}=3.9\pm 0.85\times 10^{-10}`$ , the lower end of the range being set by the presence of highly overdepleted halo stars and consistency with the <sup>7</sup>Li abundance in the Sun and in open clusters, while the upper end of the range is set by the observed dispersion of the Pop II abundance “plateau” and the <sup>6</sup>Li/<sup>7</sup>Li depletion ratio. We adopt this value, noting that a somewhat smaller depletion is suggested by other workers who find a primordial abundance of $`\overline{Y_7}=2.3\pm 0.5\times 10^{-10}`$.
* Data Set D: Recently, a ‘high’ value of the deuterium abundance, $`\overline{Y_2}=3.3\pm 1.2\times 10^{-4}`$, has been reported from observations of a QAS at $`z=0.701`$ towards Q1718+4807 , in confirmation of an earlier claim . We adopt this value along with the same helium abundance as in Set A.
For the lithium abundance, we adopt the same value as in Set B but increase the systematic error by 0.02 dex to allow for the uncertainty in the oscillator strengths of the lithium lines .
### B Qualitative Implications on $`N_\nu `$ and $`\eta `$
Different choices for the input data sets (A–D) lead to different implications for $`\eta `$ and $`N_\nu `$, that can be qualitatively understood through Figs. 14.
Figure 1 shows the BBN primordial abundances $`Y_i`$ (solid lines) and their $`\pm 2\sigma `$ bands (dashed lines), as functions of $`x\equiv \mathrm{log}(\eta /10^{-10})`$, for $`N_\nu =`$ 2, 3, and 4. The grey areas represent the $`\pm 2\sigma `$ bands allowed by the data set A (see Table I). There is global consistency between theory and data for $`x\simeq 0.2`$–$`0.4`$ and $`N_\nu =3`$, while for $`N_\nu =2`$ ($`N_\nu =4`$) the $`Y_2`$ data prefer values of $`x`$ lower (higher) than the $`Y_4`$ data. Therefore, we expect that a global fit will favor values of $`(x,N_\nu )`$ close to $`(0.3,3)`$.
Figure 2 is analogous, but for the data set B. In this case, there is still consistency between theory and data at $`N_\nu =3`$, although for values of $`x`$ higher than for the data set A. For $`N_\nu =2`$ $`(N_\nu =4)`$ the combination of $`Y_2`$ and $`Y_7`$ data prefers values of $`x`$ lower (higher) than $`Y_4`$. The best fit is thus expected to be around $`(x,N_\nu )\simeq (0.7,3)`$.
Similarly, Figure 3 shows the abundances for the data set C. The situation is similar to data set B (Fig. 2), but one can envisage a best fit at a slightly lower value of $`x`$, due to the higher value of $`Y_2`$, partly opposed by the increase in $`Y_7`$.
Finally, Fig. 4 refers to the data set D. In this case, data and theory are not consistent for $`N_\nu =2`$, since $`Y_2`$ and $`Y_4`$ pull $`x`$ in different directions, and no “compromise” is possible since intermediate values of $`x`$ are disfavored by $`Y_7`$. However, for $`N_\nu =3`$ there is relatively good agreement between data and theory at low $`x`$. Therefore, we expect a best fit around $`(x,N_\nu )\simeq (0.2,3)`$.
The qualitative indications discussed here are confirmed by a more accurate analysis, whose results are reported in the next section.
## IV Determining $`N_\nu `$
We now present the results of fits to the data sets A–D in the $`(x,N_\nu )`$ variables, using our method to estimate the correlated theoretical uncertainties, and adopting $`\chi ^2`$ statistics to include both theoretical and experimental errors. We have used the theoretical $`Y_i`$’s obtained by the standard (updated) BBN evolution code , and checked that using the polynomial fits given in Sec. II B induce negligible changes which would not be noticeable on the plots.
In the analysis, we optionally take into account a further constraint on $`\eta `$ (independent of $`N_\nu `$) coming from a recent analysis of the Ly$`\alpha `$-“forest” absorption lines in quasar spectra. The observed mean opacity of the lines requires some minimum amount of neutral hydrogen in the high redshift intergalactic medium, given a lower bound to the flux of ionizing radiation. Taking the latter from a conservative estimate of the integrated UV background due to quasars, Ref. finds the constraint $`\eta \gtrsim 3.4\times 10^{-10}`$. This bound is not well-defined statistically but, for the sake of illustration, we have parametrized it through a penalty function quadratic in $`\eta `$:
$$\chi _{\mathrm{Ly}\alpha }^2(\eta )=2.7\times \left(\frac{3.4\times 10^{-10}}{\eta }\right)^2,$$
(10)
to be eventually added to the $`\chi ^2(\eta ,N_\nu )`$ derived from our fit to the elemental abundances. The above function excludes values of $`\eta `$ smaller than $`3.4\times 10^{-10}`$ at 90% C.L. (for one degree of freedom, $`\eta `$).
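In practice the penalty is simply added to the abundance $`\chi ^2`$ on the $`(x,N_\nu )`$ grid; a minimal sketch is given below, where the abundance $`\chi ^2`$ grid is assumed to have been computed already.

```python
import numpy as np

def chi2_lya(eta):
    """Penalty function of Eq. (10); equals 2.7 at eta = 3.4e-10, so smaller
    values of eta are excluded at 90% C.L. for one degree of freedom."""
    return 2.7 * (3.4e-10 / eta) ** 2

print(chi2_lya(3.4e-10))                  # 2.7

x = np.linspace(0.0, 1.0, 101)            # x = log10(eta_10)
eta = 1.0e-10 * 10.0 ** x
penalty = chi2_lya(eta)
# chi2_total = chi2_abund + penalty[None, :]   # chi2_abund (assumed precomputed) has shape (n_Nnu, n_x)
```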
Figure 5 shows the results of joint fits to $`x=\mathrm{log}(\eta _{10})`$ and $`N_\nu `$ using the abundances of data set A. The abundances $`Y_2`$, $`Y_4`$, and $`Y_7`$ are used separately (upper panels), in combinations of two (middle panels), and all together, without and with the Ly$`\alpha `$-forest constraint on $`\eta `$ (lower panels). In this way the relative weight of each piece of data in the global fit can be understood at a glance. The three C.L. curves (solid, thick solid, and dashed) are defined by $`\chi ^2-\chi _{\mathrm{min}}^2=2.3,\mathrm{\hspace{0.17em}6.2}`$, and $`11.8`$, respectively, corresponding to 68.3%, 95.4%, and 99.7% C.L. for two degrees of freedom ($`\eta `$ and $`N_\nu `$), i.e., to the probability intervals often designated as 1, 2, and 3 standard deviation limits. The $`\chi ^2`$ is minimized for each combination of $`Y_i`$, but the actual value of $`\chi _{\mathrm{min}}^2`$ (and the best-fit point) is shown only for the relevant global combination $`Y_2+Y_4+Y_7(+\mathrm{Ly}\alpha )`$.
The results shown in Fig. 5 for the combinations $`Y_4+Y_7`$ and $`Y_2+Y_4+Y_7`$ are consistent with those obtained in ref. by using the same input data but a completely different analysis method (namely Monte Carlo simulation plus maximum likelihood). The consistency is reassuring and confirms the validity of our method. For this data set, the helium and deuterium abundances dominate the fit, as can be seen by comparing the combinations $`Y_2+Y_4`$ and $`Y_2+Y_4+Y_7`$. The preferred values of $`x`$ are relatively low, and the preferred values of $`N_\nu `$ range between 2 and 4. Although the fit is excellent, the low value of $`x`$ is in conflict with the Ly$`\alpha `$-forest constraint on $`\eta `$, as indicated by the increase of $`\chi _{\mathrm{min}}^2`$ from 0.02 to 8.89.
Figure 6 is analogous, but for the data set B which favors high values of $`x`$ because of the ‘low’ deuterium abundance. The combination of $`Y_2+Y_7`$ isolates, at high $`x`$, a narrow strip which depends mildly on $`N_\nu `$. The inclusion of $`Y_4`$ selects the central part of such strip, corresponding to $`N_\nu `$ between 2 and 4. As in Fig. 5, the combination $`Y_4+Y_7`$ does not appear to be very constraining. The overall fit to $`Y_2+Y_4+Y_7`$ is acceptable but not particularly good, mainly because $`Y_2`$ and $`Y_7`$ are only marginally compatible at high $`x`$. On the other hand, the $`Y_2+Y_4+Y_7`$ bounds are quite consistent with the Ly$`\alpha `$-forest constraint.
In data set C, the deuterium abundance has increased further. Moreover the lithium abundance is no longer at the minimum of the theoretical curve as before, so strongly disfavors “intermediate” values of $`x`$. The overall effect, as shown in Figure 7, is that $`\chi _{\mathrm{min}}^2`$ decreases a bit with respect to Set B, and the best fit value of $`x`$ moves slightly lower. The allowed value of $`N_\nu `$ ranges between 2 and 4. Note that had we retained the same $`Y_7`$ as in Set B, then $`\chi _{\mathrm{min}}^2`$ would have dropped to 0.91 (2.55) for the combination $`Y_2+Y_4+Y_7`$ (+ Ly$`\alpha `$-forest constraint).
Finally, Fig. 8 refers to data set D which, like Data Set A, has the ‘high’ deuterium abundance but with larger uncertainties. So although a low value of $`x`$ is still picked out, a high $`x`$ region is still possible at the 2$`\sigma `$ level (in the $`Y_2+Y_4+Y_7`$ panel) and is even favored when the Ly$`\alpha `$-forest bound is included (although with an unacceptably high $`\chi _{\mathrm{min}}^2`$). Note that had the lithium abundance been taken to be the same as in data set C (i.e. allowing for depletion), the $`\chi _{\mathrm{min}}^2`$ would have been 0.07 (7.42) for the combination $`Y_2+Y_4+Y_7`$ (+ Ly$`\alpha `$-forest constraint).
Of course one can also consider orthogonal combinations to those above, e.g. ‘high’ deuterium and ‘high’ helium, or ‘low’ deuterium and ‘low’ helium . The latter combination implies $`N_\nu \simeq 2`$, thus creating the so-called “crisis” for standard nucleosynthesis . Conversely, the former combination suggests $`N_\nu \simeq 4`$, which would also constitute evidence for new physics. Allowing for depletion of the primordial lithium abundance to its Pop II value relaxes the upper bound on $`N_\nu `$ further, as noted earlier .
## V Conclusions
The results discussed above demonstrate that the present observational data on the primordial elemental abundances are not as yet sufficiently stable to derive firm bounds on $`\eta `$ and $`N_\nu `$. Different and arguably equally acceptable choices for the input data sets lead to very different predictions for $`\eta `$, and to relatively loose constraints on $`N_\nu `$ in the range 2 to 4 at the 95% C.L. Thus it may be premature to quote restrictive bounds based on some particular combination of the observations, until the discrepancies between different estimates are satisfactorily resolved. Our method of analysis provides the reader with an easy-to-use technique to recalculate the best-fit values as the observational situation evolves further.
However, one might ask what would happen if these discrepancies remain. We have already noted the importance of an independent constraint on $`\eta `$ (from the Ly$`\alpha `$-forest) in discriminating between different options. However, given the many assumptions which go into the argument , this constraint is rather uncertain at present. Fortunately it should be possible in the near future to independently determine $`\eta `$ to within $`\sim 5\%`$ through measurements of the angular anisotropy of the cosmic microwave background (CMB) on small angular scales , in particular with data from the all-sky surveyors MAP and PLANCK . Such observations will also provide a precision measure of the relativistic particle content of the primordial plasma. Hopefully the primordial abundance of <sup>4</sup>He would have stabilized by then, thus providing, in conjunction with the above measurements, a reliable probe of a wide variety of new physics which can affect nucleosynthesis.
###### Acknowledgements.
We thank G. Fiorentini for useful discussions and for earlier collaboration on the subject of this paper.
|
no-problem/9901/astro-ph9901143.html
|
ar5iv
|
text
|
# Weak gravitational lensing in the standard Cold Dark Matter model, using an algorithm for three-dimensional shear
## 1 INTRODUCTION
The gravitational lensing of light by the general form of the large-scale structure in the universe is of considerable importance in cosmology. This ‘weak lensing’ may result in magnification of a distant source from Ricci focusing due to matter in the beam, and shear leading to distortion of the image cross-section. The strength of these effects depends on the lens and source angular diameter distances and the specific distribution of matter between the observer and source. Consequently the effects are likely to be sensitive to the particular cosmological model. In extreme cases, a source may be strongly lensed if the light passes close to a massive structure such as a galaxy, and this occasionally results in the appearance of multiple images of the source. One of the most important applications of such ‘strong lensing’ studies has been the reconstruction of mass profiles for lensing galaxies and estimations of the Hubble parameter, $`H_0`$, from measurements of the time-delay between fluctuations in the multiple images of a background quasar; see, e.g., Falco, Govenstein and Shapiro (1991), Grogan and Narayan (1996), and Keeton and Kochanek (1997). These studies have frequently made use of the ‘thin-screen approximation’ in which the depth of the lens is considered to be small compared with the distances between the observer and the lens and the lens and the source. In the thin-screen approximation the mass distribution of the lens is projected along the line of sight and replaced by a mass sheet with the appropriate surface density profile. Deflections of the light from the source are then considered to take place only within the plane of the mass sheet, making computations for the light deflections much simpler.
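As an illustration of the thin-screen picture, the sketch below sums the standard point-mass deflections, $`4Gm/c^2b`$, for a ray crossing a single lens plane onto which the lensing matter has been projected; it is a toy example of the geometry described above, not the method used later in this paper.

```python
import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2
C = 2.998e8     # m s^-1

def plane_deflection(ray_xy, lens_xy, lens_mass):
    """Total deflection angle (2-vector, radians) of a light ray at physical
    position ray_xy in a thin lens plane containing point masses.
    ray_xy    : (2,)    impact position of the ray in the plane (m)
    lens_xy   : (N, 2)  projected positions of the masses (m)
    lens_mass : (N,)    masses (kg)
    Each mass deflects the ray by 4*G*m/(c^2*b) along the projected separation b;
    in the thin-screen approximation the whole deflection occurs in this plane."""
    d = ray_xy - lens_xy                    # (N, 2) separations
    b2 = np.sum(d**2, axis=1)               # squared impact parameters
    return np.sum(4.0 * G * lens_mass[:, None] * d / (C**2 * b2[:, None]), axis=0)
```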
The simplicity of the thin-screen approximation has also lead to its use in weak gravitational lensing studies, where the output volumes from cosmological $`N`$-body simulations are treated as planar projections of the particle distributions within them. However, the procedures have to be extended when dealing with the propagation of light from very distant sources, where a number of simulation time outputs are necessary to cover the observer-source distance. In these cases each simulation volume is replaced by a planar projection of the particle distribution, and to compute the distributions in magnification and shear for a large number of rays passing through the system of screens, use is made of the multiple lens-plane theory which has been variously described by Blandford and Narayan (1986), Blandford and Kochanek (1987), Kovner (1987), Schneider and Weiss (1988a, b), and summarised by Schneider, Ehlers and Falco (1992). We describe some of these two-dimensional weak lensing methods in Section 1.1.
Couchman, Barber and Thomas (1998) considered some of the shortcomings of these two-dimensional lens-plane methods, and also rigorously investigated the conditions under which two-dimensional methods would give equivalent results to integrating the shear components<sup>1</sup><sup>1</sup>1Note that, throughout this paper we refer to the elements of the matrix of second derivatives of the gravitational potential as the ‘shear’ components, although, strictly, the term ‘shear’ refers to combinations of these elements which give rise to anisotropy. through the depth of a simulation volume. They showed that, in general, it is necessary to include the effects of matter stretching well beyond a single period in extent, orthogonal to the line of sight, but depending on the particular distribution of matter. It is also necessary to project the matter contained within a full period onto the plane, assuming the distribution of matter in the universe to be periodic with periodicity equal to the simulation volume side dimension. They also showed that errors can occur in two-dimensional approaches because of the single angular diameter distance to each plane, rather than specific angular diameter distances to every location in the simulation volume.
These considerations motivated Couchman et al. (1998) to develop an algorithm to evaluate the shear components at a large number of locations within the volume of cubic particle simulation time-slices. The algorithm they developed is based on the standard P<sup>3</sup>M method (as described in Hockney and Eastwood, 1988), and uses a Fast Fourier Transform (FFT) method for speed. It is designed to compute the six independent three-dimensional shear components, and therefore represents a significant improvement over two-dimensional methods. We describe the three-dimensional shear algorithm in outline in Section 2.1.
In this paper we have applied the algorithm to the standard Cold Dark Matter (sCDM) cosmological $`N`$-body simulations available from the Hydra consortium,<sup>2</sup><sup>2</sup>2 http://coho.astro.uwo.ca/pub/data.html which we describe in Section 2.2. By combining the outputs from the algorithm from sets of linked time-slices going back to a redshift of 4, we are able to evaluate the overall shear, convergence, magnifications and source ellipticities (and distributions for these quantities). We first describe other work which has generated results from studies of weak lensing in the sCDM cosmology.
### 1.1 Other work
There are numerous methods for studying weak gravitational lensing. In ‘ray-tracing’ (see, for example, Schneider and Weiss, 1988b, Jaroszyński et al., 1990, Wambsganss, Cen and Ostriker, 1998, and Marri and Ferrara, 1998), the paths of individual light rays are traced backwards from the observer as they are deflected at each of the projected time-slice planes. The mapping of these rays in the source plane then immediately gives information about the individual amplifications which apply. In the ‘ray-bundle’ method (see, for example, Fluke, Webster and Mortlock, 1998a, b, and Premadi, Martel and Matzner, 1998a, b, c), bundles of rays representing a circular image are considered together, so that the area and shape of the bundle at the source plane (after deflections at the intermediate time-slice planes) give the required information on the ellipticity and magnification. There are also many different procedures for computing the deflections and shear, although most apply the multiple lens-plane theory to obtain the overall magnifications and distributions. We shall describe briefly four works which have produced weak lensing results in the sCDM cosmology.
Jaroszyński et al. (1990) use the ray-tracing method with two-dimensional planar projections of the time-slices, and by making use of the assumed periodicity in the particle distribution, they translate the planes for each ray, so that it becomes centralised in the plane. This ensures that there is no bias acting on the ray when the shear is computed. Each plane is divided into a regular array of pixels, and the column density in each pixel is evaluated. Instead of calculating the effect of every particle on the rays, the pixel column densities in the single period plane are used. They calculate the two two-dimensional components of the shear (see Section 5 for the definition of shear) as ratios of the mean convergence of the beam, which they obtain from the mean column density. However, they have not employed the net zero mean density requirement in the planes, (described in detail by Couchman et al., 1998), which ensures that deflections and shear can only occur when there are departures from homogeneity. Also, the matter in the pixel through which the ray is located is excluded. Their probability distributions for the convergence, due to sources at redshifts of 1, 3 and 5, are therefore not centralised around zero, and exhibit only limited broadening for sources at higher redshift. They also display the probability distributions for the shear and the corresponding distributions for source ellipticity. The procedures used by Jaroszyński (1991) and Jaroszyński (1992) are improved by the introduction of softening to each particle to represent galaxies of different masses and radii with realistic correlations in position. In these papers Monte Carlo methods are used to study the effects of weak lensing on the propagation of light through inhomogeneous particle distributions.
Wambsganss et al. (1998) also use the ray-tracing method with two-dimensional planar projections of the simulation boxes, which have been randomly oriented. Rays are shot through the central region of 8$`h^1`$Mpc $`\times 8h^1`$Mpc only, (where $`h`$ is the Hubble parameter expressed in units of 100 km s<sup>-1</sup> Mpc<sup>-1</sup>), and the deflections are computed by including all the matter in each plane, allocated to pixels 10$`h^1`$kpc $`\times 10h^1`$kpc, covering one period in extent only. The planes have comoving dimensions of 80$`h^1`$Mpc $`\times 80h^1`$Mpc. The computations make use of a hierarchical tree code to collect together those matter pixels far away, whilst the nearby ones are treated individually, and the code assumes that all the matter in a pixel is located at its centre of mass. By using the multiple lens-plane theory, they show both the differential magnification probability distribution, and the integrated one for 100 different source positions at redshift $`z_s=3.0`$. One advantage of this type of ray-tracing procedure is its ability to indicate the possibility of multiple imaging, where different rays in the image plane can be traced back to the same pixel in the source plane.
Premadi, Martel and Matzner (1998a) have improved the resolution of their $`N`$-body simulations by using a Monte Carlo method to locate individual galaxies inside the computational volume, and ensuring that they match the 2-point correlation function for galaxies. They also assign morphological types to the galaxies according to the individual environment, and apply a particular surface density profile for each. To avoid large scale structure correlations between the simulation boxes, five different sets of initial conditions are used for the simulations, so that the individual plane projections can be selected at random from any set. By solving the two-dimensional Poisson equation on a grid, and inverting the equation using a FFT method, they obtain the first and second derivatives of the gravitational potential on each plane. They also correctly ensure that the mean surface density in each lens-plane vanishes, so that a good interpretation of the effects of the background matter is made. Their method uses beams of light, each comprising 65 rays arranged in two concentric rings of 32 rays each, plus a central ray. To obtain good statistical data they have run their experiment for 500 beams. They show the average shear for a source at $`z_s=5`$ contributed by each of the lens-planes individually, and find that the largest contributions come from those planes at intermediate redshift, of order $`z`$=1 - 2. Similarly, they find that the lens-planes which contribute most to the average magnifications are also located at intermediate redshifts. The multiple lens-plane theory then enables the distributions of cumulative magnifications to be obtained, which are shown to be broad and similar in shape for the sCDM and cosmological constant models, although the latter model shows a shift to larger magnification values.
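The FFT step described above can be sketched as follows for a periodic plane: subtract the mean surface density, solve the two-dimensional Poisson equation in Fourier space, and differentiate spectrally to obtain the deflection field (the second derivatives, i.e. the shear, follow in the same way). The normalization and units are left schematic here, and this is only an illustration of the approach, not the code of Premadi et al.

```python
import numpy as np

def lens_plane_gradients(sigma, box_size):
    """Solve nabla^2 phi = sigma on a periodic grid by FFT and return the
    first derivatives of phi (schematic units; physical prefactors omitted).
    sigma    : (n, n) surface-density grid
    box_size : side length of the plane in the same length units"""
    n = sigma.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid division by zero for the DC mode
    sigma_k = np.fft.fft2(sigma - sigma.mean())  # mean subtracted: a uniform plane deflects nothing
    phi_k = -sigma_k / k2
    phi_k[0, 0] = 0.0
    dphi_dx = np.real(np.fft.ifft2(1j * kx * phi_k))
    dphi_dy = np.real(np.fft.ifft2(1j * ky * phi_k))
    # e.g. the mixed second derivative (a shear term) would be
    # np.real(np.fft.ifft2(-kx * ky * phi_k))
    return dphi_dx, dphi_dy
```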
Marri and Ferrara (1998) use a total of 50 lens-planes evenly spaced in redshift up to $`z=10`$. Their mass distributions have been determined by the Press-Schechter formalism (which they outline), which is a complementary approach to $`N`$-body numerical simulations. From this method they derive the normalised fraction of collapsed objects per unit mass for each redshift. They acknowledge that the Press-Schechter formalism is unable to describe fully the complexity of extended structures, the density profile of the collapsed objects (the lenses), or their spatial distribution at each redshift. They therefore make the assumption that the lenses are spatially uncorrelated and randomly distributed on the planes, and furthermore behave as point-like masses with no softening. The maximum number of lenses in a single plane is approximately 600, each having the appropriate computed mass value. In their ray-tracing approach they follow $`1.85\times 10^7`$ rays uniformly distributed within a solid angle of $`2.8\times 10^{-6}`$sr, corresponding to a $`420\mathrm{}\times 420\mathrm{}`$ field. The final impact parameters of the rays are collected in an orthogonal grid of $`300^2`$ pixels in the source plane. Because of the use of point masses, their method produces very high magnification values, greater than 30 for the sCDM cosmology. They have also chosen to use a smoothness parameter $`\overline{\alpha }=0`$ in the redshift-angular diameter distance relation (which we describe in Section 3) which depicts an entirely clumpy universe.
### 1.2 Outline of paper
In Section 2.1 we summarise the main features of the algorithm for shear in three dimensions, which is detailed in Couchman et al. (1998). Because the code is applied to evaluation positions within the volume of $`N`$-body simulation boxes, we are able to apply specific appropriate angular diameter distances to each location, which is not possible with two-dimensional planar projections of simulation boxes. We also note that the algorithm automatically includes the effects of the periodic images of the fundamental simulation volume, so that the results for the shear are computed for matter effectively stretching to infinity. Included also is the net zero mean density requirement, which ensures that deflections and shear may only occur as a result of departures from homogeneity. In Section 2.2 we describe the sCDM $`N`$-body simulations and how we combine the different output time-slices from the simulations to enable the integrated shear along lines of sight to be evaluated. Section 2.3 describes the variable softening facility employed in the code, and our choice of a minimum softening value, which may be given a realistic physical interpretation.
Because we evaluate the shear at locations throughout the volume of the simulation boxes, and because of some sensitivity (see Couchman et al., 1998) of the results to the smoothness, or clumpiness, of the matter distribution in the universe, we consider, in Section 3, our choice of the appropriate angular diameter distances. We consider the effects of shear on the angular diameter distance, and the sensitivity of our results to the smoothness parameter, $`\overline{\alpha }`$. Measurements of the particle clustering within our simulations, which determines the variable softening parameter for use in the shear algorithm, also enable a good definition for the smoothness parameter to be made, and this is discussed.
In Section 4, we describe the formation of structure within the universe as it evolves, in terms of the magnitudes of the shear components computed for each time-slice. We see how the rms values of the components vary with redshift, and also how the set of highest values behave. We also identify, in terms of the lens redshifts, where the significant contributions arise. Our conclusions are compared with the results of other authors.
In Section 5, we describe in outline the multiple lens-plane theory, with particular reference to our application of it. In Section 6, we discuss our results for the shear, convergence, magnifications, source ellipticities, distributions of these values, and relationships amongst them. Section 7 summarises our findings, compares our results with those of other authors for the sCDM cosmology, and proposes applications of our method and results.
## 2 THE ALGORITHM FOR THREE-DIMENSIONAL SHEAR, AND THE COSMOLOGICAL SIMULATIONS
### 2.1 Description of the three-dimensional algorithm
The algorithm we are using to compute the elements of the matrix of second derivatives of the gravitational potential has been described fully in Couchman et al. (1998). The algorithm is based on the standard P<sup>3</sup>M method, and uses a FFT convolution method. It computes all of the six independent shear component values at each of a large number of selected evaluation positions within a three-dimensional $`N`$-body particle simulation box. The P<sup>3</sup>M algorithm has a computational cost of order $`N\mathrm{log}_2N`$, where $`N`$ is the number of particles in the simulation volume, rather than the $`O(N^2)`$ cost of calculating directly the forces on each of the $`N`$ particles from every other particle. For ensembles of particles, used in typical $`N`$-body simulations, the rms errors in the computed shear component values are typically $`0.3\%.`$
In addition to the speed and accuracy of the shear algorithm, it has the following features.
First, the algorithm uses variable softening designed to distribute the mass of each particle within a radial profile which depends on its specific environment. In this way we are able to set individual mass profiles for the particles which enables a physical depiction of the large scale structure to be made. We describe our choice of the appropriate variable softening in Section 2.3.
Second, the shear algorithm works within three-dimensional simulation volumes, rather than on planar projections of the particle distributions, so that angular diameter distances to every evaluation position can be applied. It has been shown (Couchman et al., 1998) that in specific circumstances, the results of two-dimensional planar approaches are equivalent to three-dimensional values integrated throughout the depth of a simulation box, provided the angular diameter distance is assumed constant throughout the depth. However, by ignoring the variation in the angular diameter distances throughout the box, errors up to a maximum of 9% can be reached at a redshift of $`z=0.5`$ for sCDM simulation cubes of comoving side 100$`h^{-1}`$Mpc. (Errors can be larger than this at high and low redshift, but the angular diameter distance multiplying factor for the shear values is greatest here for sources we have chosen at a redshift of 4.)
Third, the shear algorithm automatically includes the contributions of the periodic images of the fundamental volume, essentially creating a realisation extending to infinity. Couchman et al. (1998) showed that it is necessary to include the effects of matter well beyond the fundamental volume in general (but depending on the particular particle distribution), to achieve accurate values for the shear. Methods which make use of only the matter within the fundamental volume may suffer from inadequate convergence to the limiting values.
Fourth, the method uses the ‘peculiar’ gravitational potential, $`\varphi `$, through the subtraction of a term depending upon the mean density. Such an approach is equivalent to requiring that the net total mass in the system be set to zero, and ensures that we deal only with light ray deflections arising from departures from homogeneity; in a pure Robertson-Walker metric we would want no deflections.
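As an illustration of the mesh (long-range) part of such a calculation, the following sketch shows how an FFT on a periodic density grid yields the six independent second derivatives of the peculiar potential. It is not the P<sup>3</sup>M implementation of Couchman et al. (1998), which adds a short-range pair correction, variable softening and the appropriate physical prefactors; the function name and the normalisation of the potential are illustrative only. Zeroing the $`k=0`$ mode imposes the net zero mean density requirement, and the FFT is intrinsically periodic, so the images of the fundamental volume are included automatically.

```python
import numpy as np

def shear_components_fft(delta, box_size):
    """Six independent second derivatives of the peculiar potential phi
    on a periodic grid, from the density contrast delta = rho/rho_bar - 1.

    Solves nabla^2 phi = delta in Fourier space (physical prefactors are
    omitted and assumed to be applied afterwards).  Zeroing the k = 0 mode
    subtracts the mean density, so deflections can arise only from
    departures from homogeneity; the FFT is intrinsically periodic, so the
    images of the fundamental volume are included.
    """
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    k2 = kx * kx + ky * ky + kz * kz
    k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    phi_k = -np.fft.fftn(delta) / k2       # Poisson equation in k-space
    phi_k[0, 0, 0] = 0.0                   # net zero mean density requirement

    kvec = (kx, ky, kz)
    comps = {}
    for i in range(3):
        for j in range(i, 3):
            # d^2 phi / dx_i dx_j  <-->  -k_i k_j phi_k
            comps[(i, j)] = np.real(np.fft.ifftn(-kvec[i] * kvec[j] * phi_k))
    return comps
```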
### 2.2 The sCDM large scale structure simulations
We have chosen, in this paper, to apply the shear algorithm to the sCDM cosmological $`N`$-body simulations available from the Hydra consortium, and produced using the ‘Hydra’ $`N`$-body hydrodynamics code, as described by Couchman, Thomas and Pearce (1995). Each time-slice from this simulation contains $`128^3`$ dark matter particles, each of 1.2 $`\times 10^{11}h^{-1}`$ solar masses, with a CDM spectrum in an Einstein-de Sitter universe, and has comoving box sides of $`100h^{-1}`$Mpc. The output times for each time-slice have been chosen so that consecutive time-slices abut, enabling a continuous representation of the evolution of large scale structure in the universe. However, to avoid unrealistic correlations of the structure through consecutive boxes, we arbitrarily rotate, reflect and translate the particle coordinates in each before the boxes are linked together. We have chosen to analyse all the simulation boxes back to a redshift of 3.9, a distance which is covered by a continuous set of 33 boxes (assuming the source in this case at $`z_s=3.9`$ to be located at the far face of the 33rd box, which has a nominal redshift of 3.6). The simulations used have a power spectrum shape parameter of 0.25 as determined experimentally on cluster scales, (see Peacock and Dodds, 1994), and the normalisation, $`\sigma _8`$, has been taken as 0.64 to reproduce the number density of clusters, according to Vianna and Liddle (1996).
We establish a regular array of $`100\times 100`$ lines of sight through each simulation box, and compute the six independent shear components at 1000 evenly spaced evaluation positions along each. Since we are dealing with weak lensing effects and are interested only in the statistical distribution of values, these lines of sight adequately represent the trajectories of light rays through each simulation box. It is sufficient also to connect each ‘ray’ with the corresponding line of sight through subsequent boxes in order to obtain the required statistics of weak lensing. This is justified because of the random re-orientation of each box performed before the shear algorithm is applied.
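As a simple illustration of the re-orientation step, each time-slice might be transformed as follows before the boxes are linked. The sketch assumes particle positions in box units in a unit periodic cube, and realises the rotation by an axis permutation, which is one convenient choice rather than necessarily the transformation used in practice.

```python
import numpy as np

def randomise_box(pos, rng):
    """Random axis permutation, reflection and periodic translation of
    particle positions given in box units (unit periodic cube)."""
    pos = pos[:, rng.permutation(3)]             # 'rotate' by permuting the axes
    flip = rng.integers(0, 2, size=3).astype(bool)
    pos[:, flip] = 1.0 - pos[:, flip]            # reflect the selected axes
    return (pos + rng.random(3)) % 1.0           # random periodic translation

# applied independently to each time-slice before the boxes are linked, e.g.
# rng = np.random.default_rng(42); pos = randomise_box(pos, rng)
```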
### 2.3 Variable softening
The variable softening facility in the code allows each particle to be treated individually as an extended mass, and the softening parameter applied to each is chosen to be proportional to the distance, $`l`$, to the particle’s 32nd nearest neighbour. In this way the softening is representative of the density environment of each particle. The appropriate value of the parameter is determined using a different smoothed particle hydrodynamics (SPH) programme. The shear algorithm then works with the ratio of the chosen softening for each particle to the maximum value (equivalent to the mesh dimension, which is defined by the regular grid laid down to decompose the short- and long-range force calculations), so that the parameter has a maximum value of unity in the code.
Isolated particles are therefore assigned large softening values, and are then not able to cause anomalous strong deflections. In addition, this helps to ensure that more rays pass through regions of softened mass rather than voids of negative density. Particles in denser regions are assigned correspondingly smaller softening scales, and are therefore able to cause stronger deflections. In the regions of highest density we choose to set a minimum value for the softening, to avoid interpolation errors in the code for very small separations, and to introduce a physically realistic scale size to such particles.
In the sCDM simulation we have used, the minimum values for $`l`$ are of order $`10^{-3}`$ in box units, and for a large cluster of 1000 particles this is comparable to the maximum value of the Einstein radius for lenses up to a redshift of 4. (For our maximum source redshift of 3.9, and for a lens of 1000 particles in our simulation, the Einstein radius reaches a maximum of 0.11$`h^{-1}`$ Mpc, or 0.0011 box units, at a redshift of 0.52.) Hence, by choosing a minimum for the variable softening of this order, we would rarely expect to see strong lensing. At the same time, this scale is approximately of galactic dimensions, thereby giving a realistic interpretation to the choice.
We have therefore set the minimum level to 0.001 in box units, and allowed it to remain at a fixed physical dimension throughout the redshift range of the simulations. Thus, we have set the value to be 0.001 for the $`z=0`$ simulation box, rising to 0.0046 in the earliest simulation box at $`z=3.6`$.
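A sketch of the softening assignment, under the choices described above, is given below. The constant of proportionality between the softening and the 32nd-neighbour distance $`l`$ is taken as unity for illustration, the kd-tree neighbour search stands in for the SPH estimate used in practice, and the function name is ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def variable_softening(pos, z, mesh_scale, s_min_z0=0.001):
    """Per-particle softening as a fraction of the mesh scale (box units).

    pos        : (N, 3) particle positions in box units (periodic cube, side 1)
    z          : redshift of the time-slice
    mesh_scale : maximum softening, set by the force-splitting mesh dimension
    s_min_z0   : minimum softening at z = 0, kept fixed in physical size and
                 therefore growing as (1 + z) in comoving box units
    """
    tree = cKDTree(pos, boxsize=1.0)
    # distances to the 32 nearest neighbours; the first neighbour returned
    # is the particle itself, hence k = 33
    d, _ = tree.query(pos, k=33)
    l32 = d[:, -1]                               # distance to the 32nd neighbour
    s = np.clip(l32, s_min_z0 * (1.0 + z), mesh_scale)
    return s / mesh_scale                        # ratio used by the shear code
```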
Couchman et al. (1998) also describe the sensitivity of the magnification distributions, arising from a single (assumed isolated) simulation box, to the choice of minimum softening, and show that the results are insensitive to minimum softenings of 0.001 and 0.002, apart from the peak magnification values, which occur in only a limited number of lines of sight. This is very useful because it suggests that our results are likely to differ little from those obtained with the same minimum softening throughout, whilst keeping the value fixed in physical size gives a credible interpretation to the softening.
## 3 ANGULAR DIAMETER DISTANCES
One of the advantages of being able to evaluate the shear components at a large number of locations within the volume of each time-slice is that we are able to apply the appropriate angular diameter distance factors to each as part of the procedure to determine the magnifications and ellipticities. The elements of the Jacobian matrix, at each evaluation position,
$$𝒜=\left(\begin{array}{cc}1-\psi _{11}& -\psi _{12}\\ -\psi _{21}& 1-\psi _{22}\end{array}\right),$$
(1)
(from which the magnification may be derived at any point), contain the two-dimensional effective lensing potentials which are related to the computed three-dimensional shear through
$$\psi _{ij}=\frac{D_dD_{ds}}{D_s}\cdot \frac{2}{c^2}\int \frac{\partial ^2\varphi (z)}{\partial x_i\partial x_j}\,dz,$$
(2)
where $`D_d`$, $`D_{ds}`$, and $`D_s`$ are the angular diameter distances from the observer to the lens, the lens to the source, and the observer to the source, respectively, and $`c`$ is the velocity of light. (The factor $`D_dD_{ds}/D_s`$ may be written equivalently as $`cR/H_0`$, where $`R`$ is dimensionless.) The integration is along the line of sight. The angular diameter distance of the source is defined to be the distance inferred from its angular size, assuming Euclidean geometry, and in an expanding universe this distance becomes a function of the redshift of the source. The angular diameter distance also depends very much on the distribution of matter; for example, excess matter within the beam causes it to become more focussed, making the source appear closer than it really is. It is therefore necessary to have available appropriate values for the angular diameter distances for the particular distribution of matter in the simulation data-set being investigated.
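With the second derivatives tabulated at evenly spaced evaluation positions along a line of sight, equation (2) reduces to a weighted sum. The sketch below is illustrative only and assumes that the derivatives, the distance factor and the path-length element are already expressed in mutually consistent physical units.

```python
import numpy as np

def effective_lensing_potential(d2phi, dist_factor, dl, c=2.998e5):
    """Discrete form of equation (2) for one line of sight.

    d2phi       : (n_eval, 2, 2) transverse second derivatives of the peculiar
                  potential at the evaluation positions (physical units)
    dist_factor : (n_eval,) values of D_d * D_ds / D_s at those positions
    dl          : path length between adjacent evaluation positions
    c           : speed of light, in units consistent with d2phi and dl
    """
    weights = (2.0 / c ** 2) * np.asarray(dist_factor) * dl
    # the weighted sum over evaluation positions approximates the integral
    return np.tensordot(weights, np.asarray(d2phi), axes=(0, 0))
```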
Schneider et al. (1992) summarise clearly the work of Dyer and Roeder (1972, 1973) who made assumptions about the type of matter distribution to obtain a second order differential equation for the angular diameter distance in terms of the density parameter, $`\mathrm{\Omega }`$, for the universe, and the redshift of the source:
$$\left(z+1\right)\left(\mathrm{\Omega }z+1\right)\frac{d^2D}{dz^2}+\left(\frac{7}{2}\mathrm{\Omega }z+\frac{\mathrm{\Omega }}{2}+3\right)\frac{dD}{dz}+\left(\frac{3}{2}\overline{\alpha }\mathrm{\Omega }+\frac{\sigma ^2}{(1+z)^5}\right)D=0.$$
(3)
$`\overline{\alpha }`$ is the smoothness parameter, which is taken to be the fraction of mass in the universe which is smoothly distributed, so that a fraction $`(1-\overline{\alpha })`$ is considered to be bound into clumps. $`\sigma `$ is the optical scalar for the shear, introduced by matter surrounding the beam.
Dyer and Roeder considered the convenient scenario in which the light beams travel through the homogeneous low density, or empty regions, passing far away from the clumps, so that the shear becomes negligible. However, we must consider whether the shear in our particle simulation time-slices is able to significantly affect our chosen values for the angular diameter distances.
Schneider and Weiss (1988a) performed Monte Carlo simulations to determine the amplification of sources in a clumpy universe made up of several lens-planes, each containing a random distribution of point-like particles. They were able to show that the fraction of ‘empty cones,’ i.e., possible ray trajectories far from the clumps with negligible shear, in a clumpy universe is small, so that in general, the effects of shear must be taken into account in the expression for the angular diameter distances. For rays weakly affected by shear and with low amplifications, the linear terms in the shear almost cancel, but higher order terms become more important. However, the probability for rays being affected by shear is dramatically lower in model universes with $`\overline{\alpha }=0.8`$ compared with universes with $`\overline{\alpha }=0`$. (We shall show shortly that the values of $`\overline{\alpha }`$ in our sCDM simulations are always at least 0.88, so that even at $`z=0`$ the matter distribution may be considered smooth according to the usual definition of $`\overline{\alpha }`$.) In summary, we might expect the number of rays affected by shear to be low in smooth matter distributions, and then the effect to be only of second order. Schneider and Weiss (1988a) also derive an integral equation for the angular diameter distance which they show to be equivalent to that of Dyer and Roeder (1973) (without the shear term) when measured through the ‘empty cones.’
Watanabe and Tomita (1990) numerically solve the null geodesic equations for light passing through a spatially flat Einstein-de Sitter background universe in which the matter is condensed into (softened) compact objects of galactic or galactic cluster dimensions, and having an average specified separation in the present epoch. Their conclusion, that, on average, the effect of shear on the distance-redshift relation is small, providing the scale of the inhomogeneities is greater than or equal to galactic scales, agrees also with those of Futamase and Sasaki (1989), who show that, in most cases, the shear does not contribute to the amplification. This conclusion remains valid even when the density contrast is greater than unity, although in the model used by Watanabe and Tomita (1990) all amplifications were less than 2.
Our own work is conducted using a cosmological simulation in which the distribution of matter is very smooth. Furthermore, our minimum softening scale is of the order of galactic dimensions, so that we feel justified in accepting that the shear plays only a second order role in the distance-redshift relation in our sCDM data-set. (We are able to quantify the effects of shear from our results in Section 6, and find that they are negligible.) With $`\sigma \approx 0`$, therefore, equation (3) immediately reduces to the well-known Dyer-Roeder equation. However, we also need to establish a value for the smoothness parameter in our simulations, so that the appropriate angular diameter distances can be evaluated and applied to the data. Assuming $`\sigma =0`$, Schneider et al. (1992) give the following generalised solution of the Dyer-Roeder equation for the angular diameter distance between redshifts of $`z_1`$ and $`z_2`$ for $`\mathrm{\Omega }=1`$:
$$D(z_1,z_2)=\frac{c}{H_0}\frac{1}{2\beta }\left[\frac{(1+z_2)^{\beta -\frac{5}{4}}}{(1+z_1)^{\beta +\frac{1}{4}}}-\frac{(1+z_1)^{\beta -\frac{1}{4}}}{(1+z_2)^{\beta +\frac{5}{4}}}\right],$$
(4)
in which $`\beta `$ is expressed in terms of arbitrary $`\overline{\alpha }`$:
$$\beta =\frac{1}{4}(25-24\overline{\alpha })^{\frac{1}{2}}.$$
(5)
We can write the left hand side of equation 4, equivalently, as $`D(z_1,z_2)=\frac{c}{H_0}r(z_1,z_2)`$, in which $`r(z_1,z_2)`$ is the dimensionless angular diameter distance. We show in Figure 1 the value of the dimensionless multiplying factor, $`R=r_dr_{ds}/r_s`$, as it applies to different time-slices at different redshifts, assuming sources at $`z_s=3.9`$, 3.0, 1.9, 1.0 and 0.5. (These values correspond to the redshifts of our time-slices, and have been chosen to be close to $`z=4`$, 3, 2, 1 and 0.5.) We have assumed zero shear, a completely smooth distribution of matter, ($`\overline{\alpha }=1`$), and $`\mathrm{\Omega }=1`$. We see that the peak in this factor occurs near $`z=0.5`$ for a source at redshift 4.
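Equations (4) and (5), together with the multiplying factor $`R`$, can be transcribed directly; the following sketch assumes $`\mathrm{\Omega }=1`$ throughout and is intended only as an illustration.

```python
import numpy as np

def dyer_roeder_distance(z1, z2, alpha_bar=1.0):
    """Dimensionless angular diameter distance r(z1, z2) for Omega = 1
    (equations 4 and 5), with D(z1, z2) = (c / H0) * r(z1, z2)."""
    beta = 0.25 * np.sqrt(25.0 - 24.0 * alpha_bar)
    return (0.5 / beta) * (
        (1.0 + z2) ** (beta - 1.25) / (1.0 + z1) ** (beta + 0.25)
        - (1.0 + z1) ** (beta - 0.25) / (1.0 + z2) ** (beta + 1.25))

def distance_factor(z_d, z_s, alpha_bar=1.0):
    """Dimensionless multiplying factor R = r_d * r_ds / r_s."""
    return (dyer_roeder_distance(0.0, z_d, alpha_bar)
            * dyer_roeder_distance(z_d, z_s, alpha_bar)
            / dyer_roeder_distance(0.0, z_s, alpha_bar))
```

Evaluating `distance_factor` over a range of lens redshifts for a source at $`z_s=3.9`$ reproduces the broad peak near $`z=0.5`$ shown in Figure 1.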
From the output of our algorithm we are able to obtain an estimate of the clumpiness or smoothness in each time-slice. Having set the minimum softening scale, the code declares the number of particles which are assigned the minimum softening, and we can therefore immediately obtain the mass fraction contained in clumps, which we choose to define by the minimum softening scale.
In the earliest time-slice at $`z=3.6`$, (next to $`z=3.9`$), there is a mass fraction of only $`5.6\times 10^{-3}`$ in clumps, giving $`\overline{\alpha }(z=3.6)=0.99`$, and at $`z=0`$ the fraction is 0.12, giving $`\overline{\alpha }(z=0)=0.88`$. Whilst we have not accurately tried to assess the mean value for $`\overline{\alpha }`$ extending to different source redshifts, it is clear that the value throughout is very close to 1, and almost equivalent to the ‘filled beam’ approximation. This result concurs with Tomita (1998) who solves the null-geodesic equations for a large number of pairs of light rays in four different cosmological simulations with the sCDM spectrum. He uses $`32^3`$ particles in each, softened to various physical radii up to a maximum of $`40h^{-1}`$kpc, and finds $`\overline{\alpha }`$ to be close to 1 in all cases. However, there does appear to be considerable dispersion in the values at late times. We show in Figure 2 how similar the multiplying factor is for the values $`\overline{\alpha }=0.9`$ and 1.0, and how these compare with a value of $`\overline{\alpha }=0`$ for an entirely clumpy universe. The discrepancy at the peak between $`\overline{\alpha }=0.9`$ and $`\overline{\alpha }=1.0`$ is 3.1%.
Figure 3 shows the ratio of the multiplying factor for $`\overline{\alpha }=1`$ and $`\overline{\alpha }=0.9`$ for the various source redshifts. For sources at $`z_s=2`$ the maximum value of the ratio is 1.014, and for sources nearer than $`z_s=1`$ the discrepancy is well below 1%.
## 4 THE FORMATION OF STRUCTURE
The shear algorithm generates the six independent three-dimensional shear component values (expressed in box units), and we have chosen to compute them at 1000 evaluation positions along $`100\times 100`$ lines of sight in each simulation time-slice. In a simplistic way, the magnitude of these components characterises the particular time-slice. To convert the components to absolute values we have to apply the appropriate angular diameter distance factors, $`R=r_dr_{ds}/r_s`$, as described in the previous section, together with the factor $`B(1+z)^2`$, where $`B=3.733\times 10^9`$ for the simulation boxes we have used (which have comoving dimensions of $`100h^{-1}`$Mpc) and where the $`(1+z)^2`$ factor occurs to convert the comoving code units to physical units.
The magnitude of the rms value determined from each component multiplied by $`B(1+z)^2`$ in each time-slice is then of interest. In Figure 4 we show these values for the sum of the diagonal terms in the (projected) matrix of effective lensing potentials; this is closely associated with the surface density, which in turn determines the magnifications produced in the time-slice. We notice that the values for these combined components very slowly decrease towards $`z=0`$. This same trend is apparent with the other components individually. It has the interesting interpretation that, even though structure is forming (to produce greater magnification locally), the real expansion of the universe (causing the mean particle separation to increase) just outweighs this in terms of the magnitudes of the component values. Nevertheless, the formation of structure can be seen; by considering just the sets of highest values in each time-slice, again multiplied by the factor $`B(1+z)^2`$, and taking the mean values of these, we see in Figure 4 an initial fall as the universe expands and before structure has begun to form, and then at later times an increase in the mean values, indicative of the existence of dense (bound) structures.
However, when the values are then multiplied by the angular diameter distance factor, $`R`$, we see in Figure 5 that the peaks are extremely broad, indicating that significant contributions to the magnifications and ellipticities can arise in time-slices covering a wide range of redshifts, and not just near $`z=0.5`$ where $`R`$ has its peak (for sources at $`z_s=4`$).
Premadi, Martel and Matzner (1998a) have done similar work, using 5 different sets of initial conditions for each of their $`N`$-body simulations, so that the time-slices can be chosen at random from any one of the 5 sets, randomly translated to avoid correlations in the large scale structure between adjacent boxes, and projected onto planes. They solve the two-dimensional Poisson equation on a grid, and use a FFT method to obtain the first and second derivatives of the gravitational potential on each plane. They consider the effects on light beams, each consisting of 65 rays arranged in concentric rings to represent circular images, and have performed 500 calculations for each cosmological model, based on 500 different random translations of the planes. For the shear and magnification they find that the individual contribution due to each lens-plane is greatest at intermediate redshifts, of order $`z=12`$, for sources located at $`z_s=5`$.
Premadi, Martel and Matzner (1998b, c) also report their results for the shear for sources at $`z_s=3`$, and again find very broad peaks covering a wide range of (intermediate) lens-plane redshifts.
## 5 MULTIPLE LENS-PLANE THEORY
As described in Section 2.2, we establish 1000 evaluation positions along each of the $`100\times 100`$ lines of sight through each simulation time-slice, and the shear algorithm computes the six independent second derivatives of the gravitational potential at each position. By integration of the values we establish the matrix of two-dimensional effective lensing potentials at each of 50 positions along every line of sight. We establish the Jacobian matrix, $`𝒜`$, from these effective lensing potentials by applying the appropriate multiplying factors, as described in Section 3, and the Jacobian is then developed along each line of sight through the successive evaluation positions. It is computed recursively in accordance with the multiple lens-plane theory, which is summarised by Schneider et al. (1992). The final Jacobian matrix after $`N`$ deflections is
$$𝒜_{\mathrm{total}}=ℐ-\sum _{i=1}^{N}𝒰^i𝒜_i,$$
(6)
where $`ℐ`$ is the unit matrix,
$$𝒰^i=\left(\begin{array}{cc}\psi _{11}^i& \psi _{12}^i\\ \psi _{21}^i& \psi _{22}^i\end{array}\right)$$
(7)
for the $`i`$th deflection, and the intermediate Jacobian matrices are
$$𝒜_j=ℐ-\sum _{i=1}^{j-1}\beta _{ij}𝒰_i𝒜_i,$$
(8)
where
$$\beta _{ij}=\frac{D_s}{D_{is}}\frac{D_{ij}}{D_j},$$
(9)
in which $`D_j`$, $`D_{is}`$ and $`D_{ij}`$ are the angular diameter distances to the $`j`$th lens, that between the $`i`$th lens and the source, and that between the $`i`$th and $`j`$th lenses, respectively.
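A minimal sketch of this recursion is given below. It assumes that the $`\psi ^i`$ matrices for each deflection have already been formed from equation (2), and takes the dimensionless distances from a user-supplied function $`r(z_1,z_2)`$, for example the Dyer-Roeder expression sketched in Section 3; the remaining book-keeping of the full calculation (the 10000 lines of sight and the 50 sets of potentials per box) is omitted.

```python
import numpy as np

def propagate_jacobian(psi_list, z_lens, z_s, r):
    """Recursive multiple lens-plane Jacobian (equations 6 - 9).

    psi_list : list of (2, 2) arrays, the effective lensing potentials psi^i
               for each deflection, ordered outwards from the observer
    z_lens   : lens-plane redshifts, in the same order
    z_s      : source redshift
    r        : dimensionless angular diameter distance function r(z1, z2)
    """
    I = np.eye(2)
    r_s = r(0.0, z_s)
    A = []                                   # intermediate Jacobians A_j
    A_total = I.copy()
    for j, (psi_j, z_j) in enumerate(zip(psi_list, z_lens)):
        A_j = I.copy()                       # equation (8)
        for i in range(j):
            beta_ij = (r_s / r(z_lens[i], z_s)) * (r(z_lens[i], z_j) / r(0.0, z_j))
            A_j = A_j - beta_ij * psi_list[i] @ A[i]
        A.append(A_j)
        A_total = A_total - psi_j @ A_j      # accumulates equation (6)
    return A_total
```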
The magnification, $`\mu `$, at any position, is given in terms of the Jacobian at that point:
$$\mu =\left(\mathrm{det}𝒜\right)^{-1},$$
(10)
so that we can assess the magnification as it develops along a line of sight, finally computing the emergent magnification after passage through an entire box or set of boxes. The convergence, $`\kappa `$, is defined by
$$\kappa =\frac{1}{2}(\psi _{11}+\psi _{22})$$
(11)
from the diagonal elements of the Jacobian matrix, and causes isotropic focussing of light rays, and so isotropic magnification of the source. Thus, with convergence acting alone, the image would be the same shape as, but a different size from, the source.
The shear, $`\gamma `$, in each line of sight, is given by
$$\gamma ^2=\gamma _1^2+\gamma _2^2=\frac{1}{4}(\psi _{11}-\psi _{22})^2+\psi _{12}^2.$$
(12)
Shear introduces anisotropy, causing the image to be a different shape, in general, from the source.
From equation 10, and these definitions,
$$\mu =(1-\psi _{11}-\psi _{22}+\psi _{11}\psi _{22}-\psi _{12}^2)^{-1},$$
(13)
so that with weak lensing the magnification reduces to
$$\mu \approx 1+2\kappa +3\kappa ^2+\gamma ^2+O(\kappa ^3,\gamma ^3).$$
(14)
In the presence of convergence and shear, a circular source becomes elliptical in shape, and the ellipticity, $`ϵ`$, defined in terms of the ratio of the minor and major axes, becomes
$$ϵ=1-\frac{1-\kappa -\gamma }{1-\kappa +\gamma },$$
(15)
which reduces to
$$ϵ\approx 2\gamma (1+\kappa -\gamma )+O(\kappa ^3,\gamma ^3)$$
(16)
in weak lensing.
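The emergent quantities for a line of sight then follow directly from the final Jacobian matrix; the short sketch below simply transcribes equations (10), (11), (12) and (15).

```python
import numpy as np

def image_properties(A):
    """Magnification, convergence, shear and ellipticity from the final
    2 x 2 Jacobian matrix (equations 10, 11, 12 and 15)."""
    mu = 1.0 / np.linalg.det(A)                                  # equation (10)
    psi11, psi22 = 1.0 - A[0, 0], 1.0 - A[1, 1]
    psi12 = -A[0, 1]
    kappa = 0.5 * (psi11 + psi22)                                # equation (11)
    gamma = np.sqrt(0.25 * (psi11 - psi22) ** 2 + psi12 ** 2)    # equation (12)
    eps = 1.0 - (1.0 - kappa - gamma) / (1.0 - kappa + gamma)    # equation (15)
    return mu, kappa, gamma, eps
```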
The multiple lens-plane procedure allows values and distributions of the magnification, ellipticity, convergence and shear to be obtained at $`z=0`$ for light rays traversing the set of linked simulation boxes starting from the chosen source redshift. The ability to apply the appropriate angular diameter distances at every evaluation position avoids the introduction of errors associated with planar methods, and also allows the possibility of choosing source positions within a simulation box if necessary. This may be useful when considering the effects of large-scale structure on real observed sources at specific redshifts, or if the algorithm is to be applied to large simulation volumes.
## 6 RESULTS
We first examine the importance of the smoothness parameter, $`\overline{\alpha }`$, in the distance-redshift relation, to the magnification distribution, by computing the magnifications due to a single (assumed isolated) simulation box at $`z=0.5`$ for a source at $`z_s=4.`$ (At this box redshift the contribution to the magnifications is expected to be near the maximum.) The magnification distributions arising for $`\overline{\alpha }=1`$ and $`\overline{\alpha }=0.9`$, (deduced from our simulations, as explained in Section 3) are virtually indistinguishable. The only significant difference is the maximum value of the magnification in each case, which is only 1.9% higher in the $`\overline{\alpha }=1`$ case. We therefore feel justified in presenting our results based on a smoothness parameter of $`\overline{\alpha }=1`$ throughout.
We have chosen to assume source redshifts, $`z_s`$, close to 4, 3, 2, 1 and 0.5, and shall refer to the sources in these terms. The actual redshift values are 3.9, 3.0, 1.9, 1.0 and 0.5 respectively, corresponding to nominal time-slice redshifts in our sCDM simulation. For each source redshift we have evaluated the final emergent Jacobian matrix at $`z=0`$ for all 10000 lines of sight, by linking all the simulation boxes between the source redshift and $`z=0`$, as described in Section 5, and, by manipulation of the data according to the multiple lens-plane equations, we have been able to produce all the required values for the magnifications, ellipticities, shear and convergence.
Figures 6 and 7 show the distributions of the magnifications, $`\mu `$, for the five source redshifts, and for all source redshifts there is a significant range. The rms fluctuations for the magnifications about the mean value of $`<\mu >=1`$ are displayed in column 2 of the table for each source redshift. However, since the magnification distributions are asymmetrical, we have calculated the values, $`\mu _{\mathrm{low}}`$ and $`\mu _{\mathrm{high}}`$, above and below which 97$`\frac{1}{2}`$% of all lines of sight fall. These are displayed in columns 3 and 4 of the table.
The accumulating number of lines of sight having magnifications greater than the abscissa value is shown in Figure 8 for the five different source redshifts, and clearly shows the distinctions at the high magnification end.
In Figure 9 we show the magnification, $`\mu `$, plotted against the convergence, $`\kappa `$, for $`z_s=4`$, and see that the magnification is clearly not linear in $`\kappa `$, as it would be for small magnitudes of $`\kappa `$. This is true for all our source redshifts except $`z_s=0.5`$, for which the curve is closely linear throughout. The non-linearity arises because of the presence of the higher order terms in the expression for $`\mu `$ given by equation 14, and we show for comparison the curve of $`\mu =1+2\kappa +3\kappa ^2`$.
We would generally expect the shear, $`\gamma `$, to fluctuate strongly for light rays passing through regions of high density (high convergence), and we indeed find considerable scatter in the shear when plotted against the convergence. Figure 10, however, shows the result of binning the convergence values and calculating the average shear in each bin, for sources at $`z_s=4`$. We see that throughout most of the range in $`\kappa `$ the average shear increases very slowly, and closely linearly. (At the high $`\kappa `$ end there are too few data points to establish accurate average values for $`\gamma `$.) This result suggests that there may be a contribution to the magnification from the shear, and we discuss this later in this Section.
Figure 11 shows the distributions in the convergence, $`\kappa `$, primarily responsible for the magnifications. The rms values for the convergence are 0.052 (for $`z_s=4`$), 0.047 (for $`z_s=3`$), 0.038 (for $`z_s=2`$), 0.025 (for $`z_s=1`$) and 0.013 (for $`z_s=0.5`$). These values are entirely consistent with the rms fluctuations for the magnification about the mean (stated above), being slightly below half the rms magnification values (see equation 14).
The distributions in the shear, $`\gamma `$, (defined according to equation 12) for the five source redshifts, are broadest, as expected, for the highest source redshifts, and, for $`z_s=4`$, 97$`\frac{1}{2}`$% of all lines of sight have shear values below 0.103. The ellipticity, $`ϵ`$, in the image of a source is primarily produced by the shear, and we show in Figure 12 the distributions in $`ϵ`$ for the five source redshifts. The peaks in the ellipticity distributions occur at $`ϵ=0.057`$ for $`z_s=4`$, 0.057 for $`z_s=3`$, 0.047 for $`z_s=2`$, 0.027 for $`z_s=1`$ and 0.012 for $`z_s=0.5`$. Figure 13 displays the accumulating number of lines of sight with $`ϵ`$ greater than the abscissa value. For $`z_s=4`$, we find that 97$`\frac{1}{2}`$% of all lines of sight have ellipticities up to 0.195. In Figure 14 we see that the ellipticity is very closely linear in terms of $`\gamma `$ throughout most of the range in $`\gamma `$. The scatter arises because of the factor containing the convergence, $`\kappa `$, in equation 16.
Finally, we attempted to see if there was a contribution to the magnification from the shear as implied by the distance-redshift relation (equation 3). We found considerable scatter, as expected, in the plots of magnification vs. shear, but we found in Figure 10 a tenuous connection between the shear and the convergence, indicating that there may be a similar connection between the magnification and the shear. We see from equation 14 that the effect of shear is only of second order (as established by Schneider and Weiss, 1988a). By binning the shear values and calculating the average magnification in each bin, we are able to show (Figure 15) that there may be a slow increase in $`<\mu >`$ with increasing shear. Figure 15 is for sources at $`z_s=4`$. Although there are insufficient data points at the high shear end, it still seems likely that the effects of shear on the mean magnification may be at least 10% for shear values greater than about 0.1. However, interestingly, only 2.6% of the data points in our simulation produced shear in excess of 0.1. According to equation 3, the shear has an effect in the distance-redshift relation equivalent to increasing the effective smoothness parameter, $`\overline{\alpha }`$. However, by substituting the mean shear value determined for sources at $`z_s=0.5`$ the effect on $`\overline{\alpha }`$ is found to be completely negligible. Furthermore, the importance of the effect reduces with redshift, so that our conclusion in Section 3, to ignore the effects of shear in the distance-redshift relation, can now be justified.
## 7 DISCUSSION OF RESULTS
Following the brief summary in the Introduction of work by other authors on the effects of weak gravitational lensing in the sCDM cosmology, we described in Section 2 the algorithm for the three-dimensional shear, and some of the key advantages it offers over other methods. In particular, we mentioned the ability of the code to include automatically the effects of matter in the periodic images of the fundamental volume, so that matter effectively stretching to infinity is included in computations of the shear. We also described the variable softening feature in the code which allows a good physical interpretation of the matter distribution in simulation time-slices to be made. We explained our choice of an appropriate minimum for the variable softening, taking into account its physical dimension, the degree of particle clustering, and the likelihood of strong lensing effects. We also described in Section 2 the sCDM simulations we have used, which are available from the Hydra consortium.
One clear advantage of the algorithm operating on a three-dimensional volume is that we can apply the appropriate angular diameter distance to every single evaluation position, thereby avoiding the introduction of errors associated with the use of single values in two-dimensional methods. (Couchman et al., 1998, analyses these possible errors.) However, we have had to consider what the ‘appropriate’ values should be.
First, we considered the effects of shear in the distance-redshift relation (equation 3), and were guided by the findings of Schneider and Weiss (1988a), Watanabe and Tomita (1990) and Futamase and Sasaki (1989) that the shear probably has only a second order effect. Our decision to ignore the effects of shear in the relation in general is justified because we found that significant effects may occur only in $`2.6\%`$ of the lines of sight, and the impact on the effective value of the smoothness parameter, $`\overline{\alpha }`$, by substituting the mean values of the shear, is completely negligible at all redshifts.
Second, we needed to include a suitable value for the smoothness parameter, $`\overline{\alpha }`$. The minimum value for the variable softening in the shear algorithm, and the number of particles falling within this minimum value, provides an excellent framework for determining $`\overline{\alpha }`$ in accordance with the original definition of Dyer and Roeder (1972).
We find, on this definition, that $`\overline{\alpha }`$ varies between approximately 1.0, in the $`z=3.6`$ time-slice, and 0.9, at $`z=0`$, and we therefore checked the significance for the angular diameter distance multiplying factor with these extreme values. For sources at $`z_s=4`$ the difference between the factors is very small (see Figure 2) at all lens redshifts, and the maximum discrepancy is only 3.1%, as shown in Figure 3. This discrepancy is always less than 1% for sources with redshifts less than 1. Furthermore, we investigated the effects of $`\overline{\alpha }`$ on the magnification distribution arising from a single (assumed isolated) simulation box at $`z=0.5`$ for a source at $`z_s=4`$. The two distributions are virtually indistinguishable; the most significant difference is in the maximum magnification in each case, which differs by only 1.9%. For these reasons we chose to proceed with our analysis of the results for the sCDM cosmology on the basis of $`\overline{\alpha }=1`$.
In Section 4 we found the general and interesting result that the rms values of the ‘intrinsic’ computed shear values, multiplied by the conversion factor $`B(1+z)^2`$, but before the application of the angular diameter distance multiplying factors, fell slowly with redshift towards $`z=0`$, (i.e., with the evolving and expanding universe). Evidently, the universal expansion just outweighs the formation of structure when viewed in terms of the shearing on light. The formation of structure could be seen by considering only the sets of highest values in each time-slice, and then the mean values of these initially fall, before increasing slowly at the onset of structure formation. When the appropriate angular diameter distance multiplying factors were applied to the computed values, we then found the further interesting result that there can be considerable contributions to the shear and magnification arising from time-slices covering a very broad range of redshifts. This result is displayed in Figure 5.
In Section 5 we described how the data computed from our sets of simulation boxes were manipulated in accordance with the multiple lens-plane theory to produce the results of Section 6. There we showed results based on sources at five different redshifts, namely $`z_s=4,`$ 3, 2, 1 and 0.5. We showed distributions in the magnification (and details of the high magnification end of these distributions), the convergence and the ellipticity (which closely resembles the distribution in the shear), and also the relationships amongst these various quantities. Figure 9 shows the strong departure from the linear regime for the magnification as a function of the convergence, whilst Figure 14 shows a closely linear relationship between the ellipticity and the shear. Figure 10 suggests a slow increase in shear with increasing convergence, broadly as expected. For sources at $`z_s=4`$, 97$`\frac{1}{2}`$% of all lines of sight have magnification values up to 1.30. (The maximum magnifications depend on the choice of the minimum softening in the code, although the overall distributions are very insensitive to the softening.) In particular, we found rms fluctuations in the magnification (about the mean) as much as 0.13 for sources at $`z_s=4`$. Even for sources at $`z_s=0.5`$ there is a measurable range of magnifications up to 1.05 for 97$`\frac{1}{2}`$% of the lines of sight.
We summarised in the Introduction the methods of other workers using the sCDM cosmology. Because of the way in which Jaroszyński et al. (1990) determine the magnifications, their distributions do not have mean magnifications of 1. However, their dispersions in the convergence for sources at $`z_s=1`$ and $`z_s=3`$ can be seen to be considerably lower than our values. In addition the dispersions appear to show very little evolution with redshift. Wambsganss et al. (1998) find magnifications up to 100 and correspondingly highly dispersed distributions, very much larger than ours at $`z_s=3`$. (Their magnification distributions show separately the results for multiply-imaged sources and singly-imaged sources.) The very wide distributions they find have also enabled them to support a $`\mu ^{-2}`$ power-law tail in the distribution which is predicted by Schneider et al. (1992) in the case of magnification by point sources when $`\mu \gg 1`$. The magnification distributions of Premadi et al. (1998a) appear incomplete, but the range in magnifications appears to be rather similar to ours for sources at $`z_s=3.`$ This is reassuring because, although their method relies on two-dimensional projections of the simulation boxes, they include many of the essential features to which we have drawn attention, for example, an assumed periodicity in the matter distribution, randomly chosen initial conditions to avoid structure correlations between adjacent simulation boxes, the net zero mean density requirement, realistic mass profiles for the particles, and use of the filled beam approximation with a smoothness parameter, $`\overline{\alpha }=1`$. Marri and Ferrara (1998) show very much wider magnification distributions than we have found, and also very high maximum values, which occur as a result of using point particles rather than smoothed particles. We also disagree with their choice of $`\overline{\alpha }=0`$, which is representative of an entirely clumpy universe, as opposed to our finding that the sCDM universe is very close to being smooth (with $`\overline{\alpha }\approx 1`$) at all epochs.
In our own work 97$`\frac{1}{2}`$% of the lines of sight have ellipticities up to 0.195 for $`z_s=4`$. At the peaks of the distributions we found values of 0.057 and 0.027 for $`ϵ`$ for sources at $`z_s=3`$ and 1 respectively. These are somewhat lower than the values of 0.095 ($`z_s=3`$) and 0.045 ($`z_s=1`$) found by Jaroszyński et al. (1990). Rather surprisingly, however, their peak values in the distributions for the shear are quite similar to our own, especially for sources at $`z_s=3`$.
Our magnification results may have an impact on the interpretation of the magnitude data for high-redshift Type Ia Supernovæ reported by Riess, Filippenko, Challis et al. (1998), since we have seen in Section 6 the possible range of magnifications that may apply to distant sources. The high-redshift Supernovæ data include sources up to redshifts of 0.97, so that the effects of the large-scale structure should not be ignored when interpreting the peak magnitudes and distance moduli. However, our magnification values for $`z_s=1`$ and $`z_s=0.5`$ above and below which 97$`\frac{1}{2}`$% of all lines of sight fall are considerably closer to unity than the values found by Wambsganss, Cen, Xu and Ostriker (1997) for the sCDM model. We would therefore expect to find correspondingly smaller lensing-induced dispersions in the distance moduli. However, we hope to quantify the dispersions in the distance moduli and the effect on the deceleration parameter, $`q_0`$, for an open cosmology in a future paper, especially in view of Riess et al.’s (1998) conclusions in favour of an open universe with a cosmological constant.
Another area affected by the presence of a distribution in magnifications is the luminosity function for quasars or high-redshift galaxies. Most sources are demagnified (the median value for $`\mu `$ is always just less than 1) which will remove many galaxies from the dim end of the luminosity function in a flux-limited survey, but at $`z_s=2`$, say, we find an rms fluctuation in the magnifications of 8.8% which will also allow some dim galaxies to be magnified and observed, where otherwise they would not have been.
In addition to considering these matters further we hope to address the following questions in the immediate future.
1. How does the redshift dependence of the shear matrix change in low-density universes? We shall be attempting to answer this question using simulation data from other cosmologies available from the Hydra consortium. In particular, we shall work on open and flat cosmological simulations with $`\mathrm{\Omega }_0=0.3`$. Of particular interest is the flat model with $`\mathrm{\Omega }_0=0.3`$ and cosmological constant $`\mathrm{\Lambda }_0=0.7`$, in view of the recent work by Riess et al. (1998) indicating the likelihood of this type of universe. In critical density universes it is believed that clustering continues to grow to the present day, and this is indicated by the results shown in Figure 4. However, in low density universes, structures should have formed by $`z\approx \mathrm{\Omega }_0^{-1}-1`$, so that the shapes of the curves in Figure 4 are likely to be very different.
2. How do our distributions in the magnification, ellipticity, shear and convergence vary amongst different cosmologies? With low-density universes, weak lensing effects are likely to be very different due to four main factors: (i) the formation of structure at earlier times, and its persistence through periods in which the contribution to the lensing is significant; (ii) dilution of the effects as the universe expands beyond the formation of structure; (iii) different values for the angular diameter distances; (iv) the lower average values for the computed shear components in view of the lower density values in the universe.
3. Do the high-magnification and low-ellipticity lines of sight occur because of the effects of individual large clusters, or as a result of continuous high density regions such as filamentary structures?
4. How frequently do lines of sight in the direction of multiply-imaged quasars coincide with lines of high convergence associated with the general form of the large-scale structure (independent of the lensing galaxy)? There is clear evidence (Thomas, Webster & Drinkwater, 1995) of increased numbers of near-neighbour galaxies (when viewed along the line of sight) to bright quasars, and this raises the intriguing possibility that some sub-critical lenses may become critical (and produce multiple images of background sources) in the presence of high density large-scale structure along the line of sight. According to the multiple lens-plane theory it is entirely consistent that the determinant of the developing Jacobian matrix along a high-convergence line of sight may change sign in the presence of a high surface density (but sub-critical) lens. In such a scenario modifications to the models for the surface density profile of the lensing galaxy would also be required.
## ACKNOWLEDGMENTS
We are indebted to the Starlink minor node at the University of Sussex for the preparation of this paper, and to the University of Sussex for the sponsorship of AJB. We thank NATO for the award of a Collaborative Research Grant (CRG 970081) which has greatly facilitated our interaction. R. L. Webster and C. J. Fluke of the University of Melbourne, and K. Subramanian of the National Centre for Radio Astrophysics, Pune, have been particularly helpful.
## REFERENCES
Blandford R. D. & Narayan R., 1986, Ap. J., 310, 568
Blandford R. D. & Kochanek C. S., 1987, Proc. 4th Jerusalem Winter School for Th. Physics, Dark Matter in the Universe, ed. Bahcall J. N., Piran T. & Weinberg S., Singapore, World Scientific, p.133
Couchman H. M. P., Barber A. J. & Thomas P. A., 1998, astro-ph, 9810063, Preprint
Couchman H. M. P., Thomas, P. A., & Pearce F. R., 1995, Ap. J., 452, 797
Dyer C. C. & Roeder R. C., 1972, Ap. J. (Letts.), 174, L115
Dyer C. C. & Roeder R. C., 1973, Ap. J. (Letts.), 180, L31
Falco E. E., Govenstein M. V. & Shapiro I. I., 1991, Ap. J., 372, 364
Fluke C. J., Webster R. L. & Mortlock D. J., 1998a, astro-ph, 9812300, Preprint
Fluke C. J., Webster R. L., Mortlock D. J., 1998b, In preparation
Futamase T. & Sasaki M., 1989, Phys. Rev. D, 40, 2502
Grogan N. A. & Narayan R., 1996, Ap. J., 464, 92
Hockney R. W. & Eastwood J. W., 1988, ‘Computer Simulation Using Particles’, IOP Publishing, ISBN 0-85274-392-0
Jaroszyński M., 1991, MNRAS, 249, 430
Jaroszyński M., 1992, MNRAS, 255, 655
Jaroszyński M., Park C., Paczynski B., & Gott III J. R., 1990, Ap. J., 365, 22
Keeton C. R. & Kochanek C. S., 1997, Ap. J., 487, 42
Kovner I., 1987, Ap. J., 316, 52
Marri S. & Ferrara A., 1998, astro-ph, 9806053, Preprint
Peacock J. A. & Dodds S. J., 1994, MNRAS, 267, 1020
Premadi P., Martel H. & Matzner R., 1998a, Ap. J., 493, 10
Premadi P., Martel H. & Matzner R., 1998b, astro-ph, 9807127, Preprint
Premadi P., Martel H. & Matzner R., 1998c, astro-ph, 9807129, Preprint
Riess A. G., Filippenko A. V., Challis P., Clocchiatti A., Diercks A., Garnavich P. M., Gilliland R. L., Hogan C. J., Jha S., Kirshner R. P., Leibundgut B., Phillips M. M., Reiss D., Schmidt B. P., Schommer R. A., Smith R. C., Spyromilio J., Stubbs C., Suntzeff N. B. & Tonry J., 1998, A. J., 116, 1009
Schneider P., Ehlers J., & Falco E. E., 1992, ‘Gravitational Lenses’, Springer-Verlag, ISBN 0-387-97070-3
Schneider P. & Weiss A., 1988a, Ap. J., 327, 526
Schneider P. & Weiss A., 1988b, Ap. J., 330, 1
Thomas P. A., Webster R. L. & Drinkwater M. J., 1995, MNRAS, 273, 1069
Tomita K., 1998, astro-ph, 9806047, Preprint
Vianna P. T. P., & Liddle A. R., 1996, MNRAS, 281, 323
Wambsganss J., Cen R., & Ostriker J., 1998, Ap. J., 494, 29
Wambsganss J., Cen R., Xu G. & Ostriker J., 1997, Ap. J., 475, L81
Watanabe K. & Tomita K., 1990, Ap. J., 355, 1
# Quasars
## 1 Introduction
Our meeting occurs 35 years after the discovery of quasars, a discovery that transformed our concepts of active galactic nuclei (AGN), even though the connection between quasars and AGN was not clear at the time. We now consider quasars as the most luminous class of AGNs. Their great luminosity, which can be more than 1000 times that of an $`L^{*}`$ galaxy, is part of their mystery on the one hand, while on the other hand it enables us to observe them at the greatest distances and earliest epochs at which they occur in the universe.
One of the great values of this symposium is that it brings together people from all fields of AGN research and provides us with an opportunity to take a fresh look at the state of the field and the key research problems.
In this talk I will cover some of the highlights of quasar history as they apply to our topic and review the main properties of quasars as we define them today. I will discuss recent results in quasar research, especially those that bear on the relation of quasars to activity in galaxies. I will also describe some current research problems and consider future opportunities for the field that will be provided by large telescopes and the large quasar surveys that are under way.
## 2 History
While the discovery of quasars in 1963 (Schmidt 1963) is well known (the 1st Texas Symposium on Relativistic Astrophysics, Robinson, Schild, & Schucking 1965, still makes excellent reading about that feverish first year of work), I would like to mention that it was preceded in 1958, forty years ago, by a key paper by Burbidge at the Paris Symposium. He pointed out that tremendous energy, $`10^{60}`$ ergs, resided in extragalactic radio sources. This was an unprecedented amount for the time, and the paper was influential in forcing people to think about non-stellar sources of energy in galaxies, that is, what we now call activity in galaxies.
To continue with the topic of milestone years, we can also note that the evolution of the quasar population was discovered by Schmidt in 1968, thirty years ago, a discovery that was an essential first step to showing that the characteristic time-scale for quasar activity is quite short in cosmological terms.
## 3 Definitions
Schmidt’s classic definition is that quasars are star-like objects of large redshift. More quantitatively, quasars are generally considered to have $`z>0.1`$ and $`M_B<-23(H_0=50)`$ mag (see Schmidt and Green 1983). Traditionally, i.e. at resolutions of 1–2 arcsec, they were considered as being star-like, a description that is intertwined with the redshift and absolute magnitude limits just given. We now know that better spatial resolution often yields evidence of a host galaxy. Other key properties of quasars are that they have broad emission lines in their spectra (for this article, BL Lac objects will be considered as a separate class, although they are members of the AGN family) and that they can emit continuum radiation across the electromagnetic spectrum from $`\gamma `$-rays to radio waves, with ultraviolet and X-ray emission usually being very prominent. Also, quasars show variability on time scales of days to years.
How well can we explain all these properties? Although it is generally believed that the picture of an accretion disk surrounding a black hole is correct, agreement between current models and observations is distressingly poor in many cases.
Operationally, it is important to be aware of the effect of the apparent size and luminosity limit in the definition of quasars on modern surveys. For example, as the angular resolution of surveys improves to 1 arcsec or better and the depth of surveys increases either because of the use of larger aperture telescopes or longer exposures, the host galaxies of quasars will be increasingly visible. In such cases, strict imposition of the “star-like” criterion for quasars will exclude bona-fide AGNs.
Similarly, we now know that high-luminosity Seyfert galaxies can overlap in absolute luminosity with low-luminosity quasars, and analyses of surveys must allow for this. For example, deep surveys for quasars with good angular resolution will find both quasars and AGNs. It is important for the determination of the evolution of the entire AGN population that surveys consider what classes of objects they are including and perhaps rejecting.
## 4 Recent Results
### 4.1 Host Galaxies
The Hubble Space Telescope (HST) has provided critical new information on the nature of the host galaxies in which quasars reside and about the nature of quasar environments. The excellent image quality of the repaired telescope gives the best combination of angular resolution and light gathering power yet applied to quasars. Here I report on two papers, which of course build on previous ground-based work.
Bahcall et al. (1997) presented results with the Wide-Field Camera of HST for 20 luminous quasars with $`z<0.3`$. For the host galaxies, they found that 2 were as bright as the brightest cluster galaxies, 10 were like normal elliptical galaxies, 3 were normal spirals, 3 were complex, interacting systems, and in 2 cases there was faint nebulosity surrounding the quasar. For the radio-quiet quasars, 7 occurred in elliptical galaxies and 3 in spirals. For the 6 radio-loud quasars, 3 to 5 of them were in elliptical galaxies. On average, the host galaxies were 2.2 magnitudes brighter than normal field galaxies. In 8 cases, they detected companion galaxies within a projected distance of 10 kpc from the quasar nucleus. The interactions, presence of companions, and higher density of galaxies seen around quasars suggest that interactions are important to quasar activity.
Boyce et al. (1998) used HST in a complementary study of 14 low-redshift quasars. They find that 9 occur in elliptical galaxies (all 6 of the radio-loud quasars and 3 radio-quiet objects); 2 radio-quiet quasars are in disk galaxies, and the other 3, which are radio-quiet, ultraluminous IR objects, occur in violently interacting systems. The average luminosity of the quasar host galaxies is 0.8 magnitudes brighter than $`L^{}`$, while the radio-loud objects are 0.7 magnitudes brighter than the radio-quiet ones.
It is evident, as Bahcall et al. point out, that the hosts and environments of quasars are complex, and that the previous ideas about radio-quiet quasars residing in spiral galaxies and radio-loud quasars in ellipticals may not hold up. However, it is perhaps more important to realize that the HST observations provide powerful support for the concept of quasars residing at the centers of galaxies and that galaxy interactions play an important role in quasar activity.
### 4.2 The Hubble Deep Field
The Hubble Deep Field (HDF) has given us unprecedented new views of distant galaxies. In combination with spectroscopic observations with the Keck Telescopes, studies of galaxies are now well advanced at $`z>3`$, redshifts that were unattainable previously. Consequently, we now have the opportunity to study directly the relationship of galaxies and quasars at and beyond the redshift of peak quasar activity.
Recently Conti et al. (1998) have carried out a detailed search for compact quasars and AGNs in the HDF to $`V_{606}=27`$ mag to study their presence and behavior at luminosities corresponding to AGNs in the nearby universe. Although the HDF contains more than 3000 galaxies, Conti et al. found an upper limit of 20 for the number of quasar candidates. Based on spectroscopic observations to date, the actual number may be much smaller, even close to 0. However, because of the great depth of the HDF exposures and the $`0.1`$ arcsec image quality, it is possible that any AGNs in the HDF are spatially resolved, and the next step is to develop sensitive techniques to detect AGN within faint, resolved galaxies in the HDF. A complication is that many of the distant galaxies being found by HST and Keck are undergoing intense star formation, which gives them colors similar to those of many quasars. Jarvis and MacAlpine (1998) report identification of 12 resolved objects harboring candidate AGN. The crucial next step will be to confirm the nature of the candidates with follow-up spectroscopy, a very difficult task because of their faintness.
### 4.3 Evolution of the Luminosity Function
One of the most striking observed features of quasars is the evolution of their luminosity function. The space density of luminous quasars increases by a factor of ∼1000 between the present epoch and redshift 2–3 and then falls steeply toward higher redshifts (Warren, Hewett, & Osmer 1994, WHO; Schmidt, Schneider, & Gunn 1995, SSG; Kennefick, Djorgovski, & de Carvalho 1995). A straightforward explanation of this behavior is that we are seeing back to the epoch of peak quasar activity, an epoch that presumably has to do with the formation of black holes at the centers of galaxies and the time of significant fueling of the quasar activity via the infall of material to the center.
However, a persistent question about the nature of the peak is whether it is affected significantly by dust absorption along the line of sight. If so, there could be an important population of quasars at high redshift that are hidden at optical/UV wavelengths, indicating that the epoch of peak activity was even earlier. There is no doubt that some quasars are highly reddened; the basic question is how many.
One way to answer this question is to use samples of radio-selected quasars with complete optical identifications. Dust is transparent to radio radiation, and so samples with complete optical identifications provide an excellent test, as long as the ratio of radio quasars to the total number of quasars does not change significantly with epoch.
Hook, Shaver, and McMahon (1998) have carried out just such a program and find that the evolution of quasars in their sample is remarkably similar to that found by WHO and SSG. This suggests that dust is not the cause of the apparent decline in activity at $`z>3`$. Similarly, Benn et al. 1998 used IR observations in the K band of radio-selected quasars and found no evidence for a large population of reddened and dust-absorbed quasars. These results are in contrast to those of Masci (1998), who does claim evidence for a population of reddened objects.
The ultra-deep ROSAT survey of Hasinger et al. (Hasinger 1998) has yielded important new X-ray results. The good positional accuracies of the survey show that most of the sources are quasars/AGNs and narrow emission-line galaxies are only a small fraction, in contrast with some previous work. Their new determination of the X-ray luminosity function is not consistent with pure luminosity evolution but can be fit by pure density evolution from $`z=0`$ to $`z2`$. Their results suggest that black holes should be common in massive galaxies at the present epoch, as discussed in more detail below.
## 5 The News
The astro-ph electronic preprint archive has had a large impact on our field by greatly increasing the accessibility of preprints and making them instantly available around the world. It also provides a convenient way of tracking the latest developments. Here I mention a few highlights gleaned from postings to astro-ph in the last year and from other sources.
The Most Luminous. Irwin et al. (1998) reported the discovery of APM 08279+5255, a broad-absorption line quasar with $`z=3.87`$ and $`R=15.2`$ mag. The object is coincident with an IRAS FSC source, and the estimated luminosity is $`5\times 10^{15}L_{\odot }`$, making it the intrinsically most luminous object known. There is evidence that the source is gravitationally lensed, so that the apparent luminosity overestimates the true emitted luminosity.
The Most Distant. Weymann et al. (1998) find from Keck spectroscopy an emission line in the galaxy HDF4-473.0 that, if identified with Ly$`\alpha `$, yields a redshift of $`z=5.60`$. The galaxy is in the Hubble Deep Field and is the most distant object with slit spectroscopy that has yet been identified. It is not a quasar or AGN, and the absence of quasars with $`z>4.9`$, despite continuing surveys for them, is beginning to appear significant in view of the increasing number of confirmed and candidate galaxies with $`z>5`$.
The Smallest. Kedziora-Chudczer et al. (1998) observed significant radio variability on timescales less than an hour in the radio quasar PKS 0405−385, which would make it the smallest extragalactic source observed. They attribute the variation to interstellar scintillation of a source with an angular size smaller than 5 microarcsec. The inferred brightness temperature is well above the inverse Compton limit. If interpreted as steady relativistic beaming, the Lorentz factor would be 1000.
The first FIRST gravitational lens. Schechter et al. (1998) found that the quasar FBQ $`0951+2635`$, with $`V=16.9`$ mag and $`z=1.24`$, from the FIRST radio survey, is a gravitational lens with two images separated by 1.1 arcsec.
Update to the Verón-Cetty and Verón Catalog. Verón-Cetty and Verón released the 8th edition of their catalog during the year. It contains entries for 11,358 quasars, 357 BL Lac objects, and 3334 AGNs and is available electronically at http://obshpz.obs-hp.fr/www/catalogues/veron2\_8.html. Such catalogs continue to be a vital resource for the community, especially as new surveys yield so many new quasars and AGNs. Also, the electronic availability of the catalog makes it even more accessible and valuable than it was previously.
## 6 Some Current Research Problems
Here I call attention to some current research problems that need further work. Their eventual solution should improve our understanding of quasars and AGNs in important ways.
The Disagreement between Observations and Predictions for Accretion Disks. Koratkar (1997) points out that observations do not confirm most predictions of accretion disk models. For example, the Zheng et al. (1997) composite spectrum for ultraviolet wavelengths does not match predictions, and soft X-ray fluxes are observed to be too flat. Fewer Lyman edges are observed than predicted. Polarization is not seen either, which seems to rule out scattering as a way of smoothing the Lyman edges. An additional theoretical question is how the radiation from the accretion disk couples with that of the hot (X-ray) corona. It is important to resolve these issues if we are to have confidence in this basic part of our concept for quasars and AGNs.
What powers Ultra-luminous IRAS galaxies? Observations by Genzel et al. (1998) indicate that massive stars predominate in 70–80% of the cases, with AGNs dominating in the others. At least half of the systems probably have both an AGN and a circum-nuclear ring of starburst activity. They see no clear trend for the AGN component to dominate in the most compact and presumably most advanced mergers.
Do all galaxies have massive black holes? van der Marel (1997) notes that available data appear consistent with most galaxies having black holes, whose mass roughly correlates with the luminosity of the spheroid (cf. Magorrian et al. 1998). The black holes could have formed in or prior to a quasar phase and grown via mass accretion. Some of the implications of this work are discussed below under the theory section.
Is the broad Fe K$`\alpha `$ line produced directly near a black hole? How well do we understand the origin of X-ray emission in general? Observations of the broad Fe K$`\alpha `$ line in AGNs are widely interpreted as arising in the inner part of accretion disks around black holes and therefore providing both confirmation of the presence of black holes as well as direct information about conditions in the disks. However, Weaver and Yaqoob (1998) have raised questions about whether the emission in fact does occur so close to the centers of AGNs. More generally, intensive monitoring of NGC 7469 in X-rays and the ultraviolet by Nandra et al. (1998) provides strong constraints on quasar models. The data are not consistent with the UV emission being reprocessed by gas absorbing X-rays nor with the X-rays arising from Compton upscattering of the UV radiation.
These are just some examples of research problems in need of solution for us both to have confidence in our general picture of quasars and AGNs being powered by accretion onto massive black holes and to develop a quantitative understanding that explains the major observed features of these objects.
## 7 Theory
In addition to the above types of problems, considerable research is directed to basic questions such as, Do we understand how quasars form and evolve? Can we connect theories of galaxy and black hole formation with the observations of quasars at high redshift and the incidence of black holes in galaxies at low redshift? Here I mention briefly some recent theoretical work that demonstrates progress in our understanding of quasars and ties in with present and future observational work.
Haiman, Madau, and Loeb (1998) point out that the scarcity of quasars at $`z>3.5`$ in the Hubble Deep Field implies that the formation of quasars in halos with circular velocities less than 50 km/s is suppressed (on the assumption that black holes form with constant efficiency in cold dark matter halos). They note that the Next Generation Space Telescope should be able to detect the epoch of formation of the earliest quasars.
Cavaliere and Vittorini (1998) note that the observed form for the evolution of the space density of quasars can be understood at early times when cosmology and the processes of structure formation provide material for accretion onto central black holes as galaxies assemble. Quasars then turn off at later times because interactions with companions cause the accretion to diminish.
Haehnelt, Natarajan, and Rees (1998) show that the peak of quasar activity occurs at the same time as the first deep potential wells form. The Press-Schechter approach provides a way to estimate the space density of dark matter halos. But the space density of $`z=3`$ quasars is less than 1% that of star-forming galaxies, which implies the quasar lifetime is much less than a Hubble time. For an assumed relation between quasar luminosity and timescale and the Eddington limit, it is possible to connect the observed quasar luminosity density with dark matter halos and the numbers of black holes in nearby galaxies. The apparently large number of local galaxies with black holes implies that accretion processes for quasars are inefficient in producing blue light.
## 8 Future Directions and Possibilities
The research problems and theoretical ideas described in this article are already open to observational study and testing with 8-10-m class telescopes and the Hubble Space Telescope, as we have discussed in the case of studies of quasar host galaxies, high-redshift galaxies, and black holes in galaxies. As the capabilities of the large ground-based telescopes improve (via infrared optimization and adaptive optics, for example), and when the Next Generation Space Telescope is completed, we will be able to study directly the relation of AGNs and galaxies over virtually the entire range of their evolutionary history. Similarly, the X-ray observatories AXAF and XMM will offer very significant new capabilities for the study of both the nature of quasars and AGNs and their evolution.
In the meantime, large-area, ground-based surveys such as the Sloan Digital Sky Survey (www.sdss.org) and the 2dF survey (msowww.anu.edu.au/~rsmith/QSO\_Survey/qso\_surv.html) will increase the number of known quasars by more than an order of magnitude. We may expect that the combination of the new samples, the new observatories, and continued theoretical advances will answer many of the questions raised here.
###### Acknowledgements.
I thank Brad Peterson and David Weinberg for comments and suggestions on a first draft of this article. I am grateful to the Organizing Committee and the National Science Foundation (via grant AST-9529324) for financial support.
# Very young massive stars in the Small Magellanic Cloud, revealed by HST Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
## 1 Introduction
It is now generally believed that massive stars form in dense cores of molecular clouds. Initially, they are enshrouded in dusty remains of the molecular material, and, therefore, are not observable in ultraviolet and visible light. At this stage they can only be detected indirectly at infrared and radio wavelengths, emitted respectively by the surrounding dust and by the ionized stellar winds. At a later stage the far-UV photons dissociate the molecules and ionize the atoms creating ultracompact H ii regions. Eventually, the natal molecular cloud is ionized to become a compact H ii region. As the ionized volume of gas increases, the advancing ionization front of the H ii region reaches the cloud border. The ionized gas then flows away into the interstellar medium according to the so-called champagne effect (Tenorio-Tagle teno (1979), Bodenheimer et al. boden (1979)). From this time on the opacity drops and the newborn stars become accessible to observation in the ultraviolet and visible.
The youngest H ii regions that can be penetrated with ultraviolet and optical instruments provide therefore the best opportunities for a direct access to massive stars at very early stages of their evolution. Because of the small timescales involved, it is difficult to catch the most massive stars just at this very point in their evolution, namely when the young H ii regions emerge from the associated molecular clouds. Contrary to the situation in the Galaxy, where interstellar extinction in the line of sight is generally high, the Magellanic Clouds, especially the SMC, provide an environment where the sites of massive star formation are accessible without additional foreground extinction.
Our search for such very young, emerging H ii regions in the Magellanic Clouds started almost a decade ago on the basis of ground-based observations at the European Southern Observatory. The result was the discovery of a distinct and very rare class of H ii regions in the Magellanic Clouds, that we called high-excitation compact H ii “blobs” (HEBs). The reason for this terminology was that no features could be distinguished with those telescopes. So far only five HEBs have been found in the LMC: N159-5, N160A1, N160A2, N83B-1, and N11A (Heydari-Malayeri & Testor 1982, 1983, 1985, 1986, Heydari-Malayeri et al. 1990) and two in the SMC: N88A and N81 (Testor & Pakull 1985, Heydari-Malayeri et al. 1988a).
In contrast to the typical H ii regions of the Magellanic Clouds, which are extended structures (sizes of several arc minutes corresponding to more than 50 pc, powered by a large number of exciting stars), HEBs are very dense and small regions (∼5″ to 10″ in diameter, corresponding to ∼1.5–3.0 pc). HEBs are, in general, heavily affected by local dust (Heydari-Malayeri et al. 1988a , Israel & Koornneef ik91 (1991)). They are probably the final stages in the evolution of the ultracompact H ii regions whose Galactic counterparts are detected only at infrared and radio frequencies (Churchwell chur (1990)). Because of contamination by the strong nebular background, no direct information about the exciting stars of HEBs has been achievable with ground-based telescopes. Furthermore, it is not known whether a single hot object or several less massive stars are at work there.
The compact H ii “blob” N81 (Henize hen (1956), other designations: DEM138 in Davies et al. dav (1976), IC 1644, HD 7113, etc.) lies in the Shapley Wing at ∼1.2° (∼1.2 kpc) from the main body of the SMC. Other H ii regions lying towards the Wing are, from west to east, N83, N84, N88, N89, and N90. A study of N81 carried out a decade ago (Heydari-Malayeri et al. 1988a ) revealed some of its physical characteristics: an age of 1 to 2.5 million years, a mass of ionized gas amounting to ∼350 $`M_{\odot }`$, a low metal content typical of the chemical composition of the SMC, a gas density of ∼500 cm⁻³, an electron temperature of 14 100 K, etc. However, the study suffered from a lack of sufficient spatial resolution and could not be pursued with available Earth-bound facilities. More specifically, the exciting star(s) remained hidden inside the ionized gas. It was not possible to constrain the theoretical models as to the nature of the exciting source(s) and choose among various alternatives (Heydari-Malayeri et al. 1988a ). This is, however, a critical question for theories of star formation.
The use of HST is therefore essential for advancing our knowledge of these objects. Here we present the results of our project GO 6535 dedicated to direct imaging and photometry of the “blobs” as a first step in their high-resolution study. A “true-color” high-resolution image and a brief account of the results for the layman were presented in a NASA/HST/ESA Press Release 98-25, July 23, 1998 (Heydari-Malayeri et al. hey98 (1998): http://oposite.stsci.edu/pubinfo/pr/1998/25).
## 2 Observations and data reduction
The observations of N81 described in this paper were obtained with the Wide Field Planetary Camera 2 (WFPC2) on board the HST on September 4, 1997. The small size of N81 makes it an ideal target for the 36″ field of WFPC2. We used several wide- and narrow-band filters (F300W, F467M, F410M, F547M, F469N, F487N, F502N, F656N, F814W) with two sets of exposure times (short and long). In each set we repeated the exposures twice with ∼5 pixel offset shifts in both the horizontal and vertical directions. This so-called dithering technique allowed us to subsequently enhance the sampling of the point spread function (PSF) and improve our spatial resolution by ∼20%. The short exposures, aimed at avoiding saturation of the CCD by the brightest sources, range from 0.6 sec (F547M) to 20 sec (F656N). The long exposures range from 8 sec (F547M) to 300 sec (F656N & F469N).
The data were processed through the standard HST pipeline calibration. Multiple dithered images were co-added using the stsdas task drizzle (Fruchter & Hook fru (1998)), while cosmic rays were detected and removed with the stsdas task crrej. Normalized images were then created using the total exposure times for each filter. To extract the positions of the stars, the routine daofind was applied to the images, setting the detection threshold to 5$`\sigma `$ above the local background level. The photometry was performed using a circular aperture of 3–4 pixels in radius with the daophot package in stsdas.
A crucial point in our data reduction was the sky subtraction. For most isolated stars the sky level was estimated and subtracted automatically using an annulus of 6–8 pixel width around each star. However, this could not be done for several stars located in the central region of N81 because of their proximity to one another. In those cases we carefully examined the PSF size of each individual star (FWHM ∼2 pixels, corresponding to 0″.09 on the sky) and performed an appropriate sky subtraction using the mean of several nearby off-star positions. To convert to a magnitude scale we used zero points in the Vegamag system, that is, the system where Vega is set to zero mag in Cousins broad-band filters. The measured magnitudes were corrected for geometrical distortion, finite aperture size (Holtzman et al. holtz (1995)), and charge transfer efficiency as recommended by the HST Data Handbook. The photometric errors estimated by daophot are smaller than 0.01 mag for the brighter (14–15 mag) stars, while they increase to ∼0.2 mag for 19 mag stars.
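For reference, the detection and aperture-photometry steps described above can be sketched with present-day tools. The original reduction used the IRAF/STSDAS tasks quoted in the text, so the photutils calls, the file name, the zero point, and the exact aperture and annulus radii below are illustrative assumptions only:

```python
# Minimal sketch of 5-sigma source detection plus small-aperture photometry
# with a local sky annulus (placeholder file name and parameters).
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder
from photutils.aperture import (CircularAperture, CircularAnnulus,
                                aperture_photometry)

data = fits.getdata("n81_f814w_drz.fits")                  # drizzled, normalized image
mean, median, std = sigma_clipped_stats(data, sigma=3.0)

finder = DAOStarFinder(fwhm=2.0, threshold=5.0 * std)      # ~2-pixel PSF, 5-sigma cut
sources = finder(data - median)

positions = np.transpose((sources["xcentroid"], sources["ycentroid"]))
aper = CircularAperture(positions, r=3.5)                  # 3-4 pixel radius
annulus = CircularAnnulus(positions, r_in=6.0, r_out=8.0)  # 6-8 pixel sky annulus

phot = aperture_photometry(data, aper)
sky = aperture_photometry(data, annulus)
sky_per_pix = sky["aperture_sum"] / annulus.area
net_counts = phot["aperture_sum"] - sky_per_pix * aper.area

ZP = 25.0                                                  # Vegamag zero point (placeholder)
mags = ZP - 2.5 * np.log10(net_counts)
```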
We note that the filter F547M is wider than the standard Strömgren $`y`$ filter. To evaluate the presence of any systematic effects in our photometry and color magnitude diagrams due to this difference in the filters, we used the stsdas package synphot. Using synthetic spectra of hot stars, with spectral types similar to those found in H ii regions, we estimated the difference due to the HST band-passes to be less than 0.002 mag, which is well within the photometric errors.
The “true-color” image of N81 (Heydari-Malayeri et al. hey98 (1998)) was assembled from three separate WFPC2 images using the iraf external package color task rgbsum. The basic images were the ultraviolet (F300W) and the hydrogen emission blue and red lines H$`\beta `$ (F487N) and H$`\alpha `$ (F656N).
Two line intensity ratio maps were secured using the normalized H$`\alpha `$, H$`\beta `$ and \[O iii\] $`\lambda `$5007 (F502N) images ($`\mathrm{\S }`$ 3.2 and $`\mathrm{\S }`$ 3.3). In order to enhance the S/N ratio in the fainter parts, each image was first smoothed using a 2 $`\times `$ 2-pixel Gaussian filter.
## 3 Results
### 3.1 Overall view
The WFPC2 imaging, in particular the $`I`$ band filter, reveals some 50 previously unknown stars lying towards N81, where not even one was observable before. Six of them are grouped in a core region ∼2″ wide, as displayed in Fig. 1. The brightest ones are identified by numbers in Fig. 2. See also the true-color version of this image accompanying the HST Press Release (Heydari-Malayeri et al. hey98 (1998)). Two bright stars (#1 & #2) occupy a central position and are probably the main exciting sources of the H ii region. Only 0″.27 apart on the sky (projected separation ∼0.08 pc), they are resolved in the WFPC2 images (Fig. 3).
Two prominent dark lanes divide the nebula into three lobes. One of the lanes ends in a magnificent curved plume more than 15″ (4.5 pc) in length. The absorption features are probably parts of the molecular cloud associated with the H ii region ($`\mathrm{\S }`$ 4.2). The extinction due to dust grains in those directions amounts to ∼1 mag as indicated by the H$`\alpha `$/H$`\beta `$ map ($`\mathrm{\S }`$ 3.2). A conspicuous absorption “hole” or dark globule of radius ∼0″.25 is situated towards the center of the H ii region, where the extinction reaches even higher values. The apparent compact morphology of this globule in the presence of a rather violent environment is intriguing. We explore some possibilities of its origin in $`\mathrm{\S }`$ 4.1.
An outstanding aspect is the presence of arched filaments, gaseous wisps, and narrow ridges produced by powerful stellar winds and shocks from the hot, massive stars. We are therefore witnessing a very turbulent environment typical of young regions of star formation. The two bright ridges lying west of stars #1 & #2 are probably ionization/shock fronts. The filamentary and wind induced structures are best seen in Fig. 4, which presents an unsharp masking image of N81 in H$`\alpha `$ without large scale structures. In order to remove these brightness variations and enhance the high spatial frequencies, a digital “mask” was created from the H$`\alpha `$ image. First the H$`\alpha `$ image was convolved by a 2 $`\times `$ 2-pixel Gaussian, and then the smoothed frame was subtracted from the original H$`\alpha `$ image. Interestingly, the inspection of the orientation of these arched filaments suggests the presence of at least three sources of stellar winds: stars #1 & #2 jointly, #3, and probably #11, which are the four brightest blue stars of the cluster.
### 3.2 Extinction
A map of the H$`\alpha `$/H$`\beta `$ Balmer decrement is presented in Fig. 5a. The ratio reaches its highest values towards the dark lanes (∼3.8), the western ridges (∼4.5), and the dark globule (∼4.5). This latter value is very close to the highest ratio (4.3) expected for a medium in which dust is locally mixed with gas. The dark “hole”, while present, does not show up prominently in Fig. 5a, because it is small (∼0″.25 across, corresponding to ∼15 000 AU) and the binning by convolution used to enhance the S/N ratio in the fainter parts ($`\mathrm{\S }`$ 2) has reduced the line ratio. The high H$`\alpha `$/H$`\beta `$ ratios and the fact that the interstellar extinction is known to be small towards the SMC (Prévot et al. prev (1984)) support the idea that dust is local and probably mixed with gas in this young H ii region. Furthermore, the high resolution observations show that the extinction towards N81 is generally higher than previously believed. A mean H$`\alpha `$/H$`\beta `$ = 3.30 corresponds to $`A_V`$ = 0.40 mag ($`c`$(H$`\beta `$) = 0.20), if the interstellar reddening law is used. For comparison, the ground-based observations had yielded H$`\alpha `$/H$`\beta `$ values of 3.05 (Heydari-Malayeri et al. 1988a ) and 2.97 (Caplan et al. cap (1996), using a circular diaphragm of 4′.89 in diameter). We remark that the Balmer ratio decreases with decreasing spatial resolution. This provides another indication that dust is concentrated towards the inner parts of N81.
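For illustration, the conversion from the Balmer decrement to $`c`$(H$`\beta `$) and $`A_V`$ can be written out as follows. The intrinsic case-B ratio (≈2.87), the reddening-curve difference f(H$`\beta `$) − f(H$`\alpha `$) ≈ 0.33, and the $`A_V`$ ≈ 2.1 $`c`$(H$`\beta `$) scaling are standard assumed values, not numbers taken from this work:

```python
import math

def balmer_extinction(ratio_obs, ratio_int=2.87, f_diff=0.33, av_per_c=2.1):
    """c(Hbeta) and A_V from an observed Halpha/Hbeta ratio.

    ratio_int : assumed intrinsic case-B ratio (~10^4 K);
    f_diff    : assumed f(Hbeta) - f(Halpha) of the reddening curve;
    av_per_c  : assumed A_V / c(Hbeta) conversion for a Galactic-type law.
    """
    c_hbeta = math.log10(ratio_obs / ratio_int) / f_diff
    return c_hbeta, av_per_c * c_hbeta

print(balmer_extinction(3.30))  # ~(0.18, 0.39): close to c(Hbeta)=0.20, A_V=0.40 quoted above
print(balmer_extinction(4.50))  # towards the western ridges and the dark globule
```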
### 3.3 Nebular emission
The \[O iii\]$`\lambda `$5007/H$`\beta `$ intensity map (Fig. 5b) reveals a relatively extended high-excitation zone with a mean value of ∼4.8. The highest intensity ratio, ∼5.5, belongs to the region of shock/ionization fronts represented by the two bright western ridges (Fig. 3). It is not excluded that collisional excitation of the O⁺⁺ ions by shocks contributes to the high value of the ratio in that region. Another high excitation zone runs from the east of star #9 to the south. The remarkable extension of the \[O iii\]$`\lambda `$5007/H$`\beta `$ ratio suggests that the O⁺⁺ ions in N81 occupy almost the same zone as H⁺. This is in agreement with our previous chemical abundance determination results (Heydari-Malayeri et al. 1988a ) showing that more than 80% of the total number of oxygen atoms in N81 are in the form of O⁺⁺. The extension of the ratio also suggests that the H ii region is not powered by one central, but by several separate hot stars.
We measure a total H$`\beta `$ flux $`F`$(H$`\beta `$) = 7.69 × 10⁻¹² erg cm⁻² s⁻¹ above the 3$`\sigma `$ level for N81, without the stellar contribution and accurate to ∼3%. Correcting for a reddening coefficient of $`c`$(H$`\beta `$) = 0.20 ($`\mathrm{\S }`$ 3.2) gives $`F_0`$(H$`\beta `$) = 1.20 × 10⁻¹¹ erg cm⁻² s⁻¹. From this a Lyman continuum flux of $`N_L`$ = 1.36 × 10⁴⁹ photons s⁻¹ can be worked out if the H ii region is assumed to be ionization-bounded. A single main sequence star of type O6.5 or O7 can account for this ionizing UV flux (Vacca et al. vacca (1996), Schaerer & de Koter sch (1997)). However, this is apparently an underestimate since the dust grains mixed with gas would considerably absorb the UV photons, and moreover the H ii region is probably density-bounded, since one side of it has been torn open towards the interstellar medium.
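A back-of-the-envelope version of this conversion is sketched below; the factor of roughly 8.5 Lyman-continuum photons emitted per H$`\beta `$ photon is an assumed case-B value at ∼10⁴ K, so the sketch reproduces the order of magnitude of $`N_L`$ rather than the exact number quoted above:

```python
import math

H_PLANCK = 6.626e-27            # erg s
NU_HBETA = 2.998e10 / 4.861e-5  # Hz (4861 Angstrom)
KPC_CM = 3.086e21               # cm per kpc

def lyman_continuum_rate(f_hbeta_dereddened, d_kpc=63.2, photons_per_hbeta=8.5):
    """Ionizing photon rate from a dereddened Hbeta flux (cgs), assuming case B."""
    d = d_kpc * KPC_CM
    lum_hbeta = 4.0 * math.pi * d**2 * f_hbeta_dereddened  # erg/s
    n_hbeta = lum_hbeta / (H_PLANCK * NU_HBETA)            # Hbeta photons/s
    return photons_per_hbeta * n_hbeta

print(f"{lyman_continuum_rate(1.20e-11):.2e}")  # ~1.2e49 s^-1, same order as N_L above
```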
### 3.4 Stellar content
The results of the photometry for the brightest stars are presented in Table 1, where the star number refers to Fig. 2. The color-magnitude diagram $`y`$ versus $`b-y`$ is displayed in Fig. 6a. It shows a blue cluster centered on Strömgren colors $`b-y=-0.05`$, or $`v-b=-0.20`$, typical of massive OB stars (Relyea & Kurucz rk (1978), Conti et al. conti (1986)). This is confirmed by the $`U`$ – $`I`$ colors deduced from Table 1. Almost all these stars lie within the H ii region and we are in fact viewing a starburst in this part of the SMC. We may be missing some of the ionizing stars if they lie deeper in the molecular cloud and are affected by larger extinction. The three red stars (#15, #24, and #27), located outside the H ii region, show up particularly in the true-color image (Heydari-Malayeri et al. hey98 (1998)) and are probably evolved stars not belonging to the cluster. The two main exciting stars (#1 and #2) stand out prominently at the top of the color-magnitude diagram.
One can estimate the luminosity of the brightest star of the cluster (#1), although in the absence of spectroscopic data this is not straightforward. Using a mean reddening of $`A_V`$ = 0.4 mag ($`\mathrm{\S }`$ 3.2), and a distance modulus $`m`$ – $`M`$ = 19.0 (corresponding to a distance of 63.2 kpc, e.g. Di Benedetto di (1997) and references therein), we find a visual absolute magnitude $`M_V`$ = –5.02 for star #1. If it is a main-sequence star, this corresponds to an O6.5V type according to the calibration of Vacca et al. (vacca (1996)) for Galactic stars. The corresponding luminosity and mass would be log($`L/L_{\odot }`$) = 5.49 and $`M`$ = 41 $`M_{\odot }`$. The star may be more massive than this, since sub-luminosity and/or peculiar extinction would be consistent with extreme youth (Walborn et al. wal99 (1999)).
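The $`M_V`$ estimate itself is a one-line calculation, sketched below. The apparent magnitude enters as a parameter because Table 1 is not reproduced here; the value $`m_V`$ ≈ 14.4 used in the check is only the apparent magnitude implied by $`M_V`$ = −5.02 with $`A_V`$ = 0.4 and $`m`$ − $`M`$ = 19.0, not a number read from the table:

```python
def absolute_magnitude(m_v, a_v=0.40, dist_modulus=19.0):
    """M_V = m_V - A_V - (m - M), with the reddening and SMC distance modulus quoted above."""
    return m_v - a_v - dist_modulus

print(absolute_magnitude(14.38))  # -> -5.02, the value quoted for star #1
```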
The images obtained through a narrow-band filter (F469N) centered on the He ii 4686 Å line were compared with those using the broad-band filter representing the Strömgren $`b`$ (F467M). The resulting photometry is displayed in the color-magnitude diagram shown in Fig. 6b. Interestingly, three stars show an apparent He ii excess. However, one should note that since two of these (#27 and #15) are red types (Fig. 2, and the true-color image in Heydari-Malayeri et al. hey98 (1998)), such a narrow-band enhancement could also be due to an artifact of molecular absorption bands throughout the spectra. On the other hand, the third star (#7) is blue and has stronger apparent He ii emission. This star lies deep in the core of the star cluster/H ii region where the nebular excitation is high. It may be a Wolf-Rayet or Ofpe/WN candidate in the SMC.
## 4 Discussion and concluding remarks
### 4.1 Morphology of N81
N81 is a young H ii region whose moving ionization front has reached the surface of its associated molecular cloud (see $`\mathrm{\S }`$ 4.2), as predicted by the champagne model (Tenorio-Tagle teno (1979), Bodenheimer et al. boden (1979)). Ionized gas is pouring out into the interstellar medium with a relative velocity of 4 km s⁻¹ ($`\mathrm{\S }`$4.2). The bright central core of N81 is presumably a cavity created by the stellar photons on the surface of the molecular cloud. The two bright ridges lying at projected distances of ∼0.7 and ∼1.0 pc west of stars #1 & #2 (Fig. 3) are probably parts of the cavity seen edge-on. They represent ionization/shock fronts advancing in the molecular cloud. The higher excitation zones, indicated by the \[O iii\]/H$`\beta `$ map ($`\mathrm{\S }`$ 3.3), may be parts of the cavity surface situated perpendicular to the line of sight. The H ii region is probably ionization-bounded in those directions. The outer, diffuse areas with fainter brightness (Fig. 1) are the champagne flow in which strong winds of massive stars have given rise to the filamentary pattern.
The absorption lanes and the dark globule may represent the denser, optically thick remains of the natal molecular cloud which have so far survived the action of the harsh ultraviolet photons of the exciting stars. High-resolution imaging observations by HST have shown the presence of massive dust pillars inside the Galactic H ii region M16 (Hester et al. hest (1996)) and more recently in the LMC giant H ii region 30 Dor (Scowen et al. sco (1998), Walborn et al. wal99 (1999)). By comparison, the dark globule in N81 may be the summit of such a dust pillar, in which second-generation stars may be forming. To investigate this idea further, follow-up high-resolution near-IR imaging of the central region is essential. At longer wavelengths we are less affected by the absorption and will be able to probe deeper into the core of the globule.
### 4.2 Molecular cloud
The molecular cloud associated with N81 has been observed during the ESO-SEST survey of the CO emission in the Magellanic Clouds. Israel et al. (is (1993)) detected ¹²CO(1–0) emission at two points towards N81 using a resolution of 43″ (∼13 pc, or ∼4 times the size of the H ii region). The brighter component has a main-beam brightness temperature of 375 mK, a line width of 2.6 km s⁻¹ and an LSR velocity of 152 km s⁻¹. The molecular emission velocity is in agreement with the velocity of the ionized gas, $`V_{LSR}`$ = 147.8 km s⁻¹, which we measured on the basis of high-dispersion H$`\beta `$ spectroscopy (Heydari-Malayeri et al. 1988a ). The difference of 4.2 km s⁻¹ is probably due to the local motion of the ionized gas streaming into the interstellar medium towards the observer. The molecular cloud is brighter than those detected towards the neighboring H ii regions N76, N78, and N80, but is weaker than that associated with N84, which has a distinct velocity (168 km s⁻¹). We do not know the size, morphology, or mass of the molecular cloud, and we cannot localize the H ii region N81 with respect to it.
The SMC is known to have an overall complex structure with several overlapping neutral hydrogen layers (McGee & Newton McGee (1981)). We used the recent observations by Stanimirovic et al. (stan (1998)) to examine the H i emission towards N81. The authors combine new Parkes telescope observations with an ATCA (Australia Telescope Compact Array) aperture synthesis mosaic to obtain a set of images sensitive to all angular (spatial) scales between 98″ (30 pc) and 4° (4 kpc) in order to study the large-scale H i structure of the SMC. An H i concentration focused on N84/N83 extends eastward as far as N81 (∼30′ apart). The H i spectra towards N81 and N83/N84 show complex profiles ranging from ∼100 to 200 km s⁻¹ with two main peaks at ∼120 and 150 km s⁻¹ but spread over tens of km s⁻¹. The corresponding column densities are 3.94 × 10²¹ and 5.29 × 10²¹ atoms cm⁻², respectively. In consequence, the correlation between the H i and CO clouds towards N81 does not seem simple.
The WFPC2 images reveal a turbulent nebula in which arc-shaped features are sculpted in the ionized gas under the action of violent shocks, ionization fronts, and stellar winds. The presence of shocks in N81 was invoked by Israel & Koornneef (ik88 (1988)) in order to explain the infrared molecular hydrogen emission which they detected towards this object. H₂ emission may be caused either by shock excitation due to stars embedded in a molecular cloud or by fluorescence of molecular material in the ultraviolet radiation field of the OB stars exciting the H ii region. According to these authors, shock excitation of H₂ is only expected very close to (within 0.15 pc of) the stars, while radiative excitation can occur at larger distances (∼1 to 2 pc). Our HST observations suggest that the radiative mechanism is the dominant one, since the shock/ionization fronts are situated at projected distances larger than 0.15 pc from the exciting stars ($`\mathrm{\S }`$ 4.1).
### 4.3 Star formation
These observations are a breakthrough in the investigation of the exciting source of N81. This compact H ii region is not powered by a single star, but by a small group of newborn massive stars. This result is important not only for studying the energy balance of the H ii region, but also because it presents new evidence in support of collective formation of massive stars. Recent findings suggest that massive star formation is probably a collective process (see, e.g., Larson lar (1992) and references therein, Bonnell et al. bonn (1998)). This is also in line with the results of high-resolution, both ground-based and space observations (Weigelt & Baier wei (1985), Heydari-Malayeri et al. 1988b , Walborn et al. wala (1995)a), in particular the resolution of the so-called Magellanic supermassive stars into tight clusters of very massive components (Heydari-Malayeri & Beuzit hey94 (1994) and references therein). It should however be emphasized that these cases pertain to relatively evolved stellar clusters. They are not associated with compact H ii regions, probably because the hot stars have had enough time to disrupt the gas.
N81 is a rare case in the SMC since a small cluster of massive stars is caught almost at birth. It provides a very good opportunity to test the early history of massive star evolution (de Koter et al. koter (1997)). Massive stars are believed to enter the main sequence while still veiled in their natal molecular clouds (Yorke & Krügel yk (1977), Shu et al. shu (1987), Palla & Stahler pal (1990), Beech & Mitalas bee (1994), Bernasconi & Maeder bern (1996)), implying that these stars may already experience significant mass loss through a stellar wind while still accreting mass from the parental cloud. This point constitutes an important drawback for current models of massive star evolution since, contrary to the assumption of the earlier models, a proper zero-age-main-sequence mass may not exist for these stars (Bernasconi & Maeder bern (1996)). As shown by Fig. 4, the most massive stars of the cluster, i.e. stars #1, #2, #11, and #3, seem to be at the origin of the stellar winds carving the surrounding interstellar medium. Moreover, if still younger massive stars are hidden inside the central dark globule, they may also contribute strong winds.
Another interesting aspect is the small size of the starburst that occurred in N81. Apparently, only a dozen massive stars have formed during the burst, in contrast to the neighboring region N83/N84 which is larger and richer (Hill et al. hill (1994)). The contrast to the SMC giant region N66 (NGC346), which has produced a plethora of O stars (Massey et al. mas89 (1989)), is even more striking. The difference may not be due only to the sizes of the original molecular clouds but also to their environments. N81 is an isolated object far away from the more active, brighter concentrations of matter. Star formation may be a local event there, while for N66 and its neighboring regions, N76, N78, and N80, external factors may have played an active role in triggering the starburst. Judging from their radial velocities (Israel et al. is (1993)), the molecular clouds associated with the H ii regions of the Wing seem to be independent of each other, and star formation has probably not propagated from one side to the other. On the other hand, according to Hunter (hun (1995)), massive stars formed in very small star-forming regions appear to have a very different mass function, implying that different sizes of star-forming events can have different massive star products. However, the resolution of this question needs more observational data.
We may have overlooked a distinct co-spatial population of lower mass stars towards N81, as our short exposure WFPC2 images were aimed at uncovering the brightest massive stars lying inside the ionized H ii region. Note that the Orion Nebula, which contains a low-mass population (Herbig & Terndrup her (1986), McCaughrean & Stauffer McC (1994), Hillenbrand hil (1997)), would have the same size as N81 if placed in the SMC. Interestingly, the $`I`$ band image shows many red stars not visible in the blue bands, a few fainter ones lying close to stars #1 and #2. Do they belong to N81? The answer to this question which is crucial for star formation theories (Zinnecker et al. zin (1993)), is not straightforward, because of the complex structure of the SMC with its overlapping layers (McGee & Newton McGee (1981), Stanimirovic et al. stan (1998)).
### 4.4 Wolf-Rayet candidate
Wolf-Rayet stars as products of massive star evolution are generally very scarce, particularly so in the metal poor SMC galaxy, which contains only nine confirmed stars of this category (Morgan et al. mor (1991)). It is therefore highly desirable to identify and study every single new candidate, such as #7.
A noteworthy feature of our new candidate is its apparent faintness. With $`V`$ = 19.64, it is ∼3 mag fainter than the faintest W-R stars in that galaxy detected so far from the ground (Azzopardi & Breysacher azzo (1979), Morgan et al. mor (1991)). It is also much fainter than the known Of stars in the SMC (Walborn et al. walb (1995)b). Noteworthy as well is the fact that the small W-R population in the SMC is very peculiar compared to that in our Galaxy. For example, all nine W-R stars are binary systems and all but one belong to the nitrogen class. Our candidate may represent the first single W-R star detected in the SMC. The true nature of this object can only be clarified with STIS spectroscopy during the second phase of our project. Its confirmation would provide new data for improving massive star evolutionary models in the low-metallicity domain.
### 4.5 Future work
We have presented our first results on the SMC “blob” N81 based uniquely on direct imaging with WFPC2. The high-resolution observations have enabled us to identify the hot star candidates. Forthcoming STIS observations of these stars will provide the stellar spectra. The analysis of the line profiles with non-LTE wind models (Schaerer & de Koter sch (1997)) will allow us to determine the wind properties (terminal velocity, mass loss rate), the effective temperature and luminosity, and to derive constraints on the surface abundances of H, He, and metals (de Koter et al. koter (1997), Haser et al. haser (1998)). The H-R diagrams so constructed will be compared with appropriate low-metallicity models (Meynet et al. mey1 (1994), Meynet & Maeder mey2 (1997)) to yield the evolutionary status of the stars and to subject the evolutionary scenarios to observational constraints.
Since the SMC is the most metal-poor galaxy observable with very high angular resolution, N81 provides an important template for studying star formation in the very distant metal-poor galaxies which populate the early Universe. Although other metal-poor galaxies can be observed with HST and their stellar content analyzed from color-magnitude diagrams (e.g., I Zw 18: Hunter & Thronson ht (1995), de Mello et al. mello (1998)), the SMC is the most metal-poor galaxy where spectroscopy of individual stars (required to determine the parameters of massive stars) can be achieved with the highest spatial resolution.
###### Acknowledgements.
We are grateful to Dr. James Lequeux and Dr. Daniel Schaerer who read the manuscript and made insightful comments. We would like also to thank an anonymous referee for suggestions which helped improve the paper. VC would like to acknowledge the financial support from a Marie Curie fellowship (TMR grant ERBFMBICT960967).
# Tunneling problems by quantum Monte Carlo
## Abstract
We develop a new numerical scheme which allows precise solution of coherent tunneling problems, i.e., problems with exponentially small transition amplitudes between quasidegenerate states. We explain how this method works for single-particle (tunneling in the double-well potential) and many-body systems (e.g., vacuum-to-vacuum transitions), and show that it directly yields the instanton shape and the tunneling amplitude. Most importantly, transition amplitudes may be calculated to arbitrary accuracy (limited solely by statistical errors), no matter how small their absolute values are.
Tunneling phenomena are among the most intriguing consequences of quantum theory. They are of fundamental importance both for high-energy and condensed-matter physics, and the list of systems whose behavior is governed by tunneling transitions ranges from quantum chromodynamics to Josephson junctions and defects in crystals (see, e.g., Ref. ).
Precise analytic treatment of tunneling in complex systems is very hard, if not impossible. The most crucial simplification is in reducing the original problem to the semiclassical study of the effective action for some collective variable $`𝐑`$, with the assumption that all the other degrees of freedom adjust adiabatically to the motion of $`𝐑`$ . In certain cases, one may also include dissipative effects due to “slow” modes other than the selected collective variable which do not follow the dynamics of $`𝐑`$ adiabatically . Typically, the parameters of the effective action cannot be found analytically (although one may relate some of them to the linear response coefficients) and have to be deduced from experiments. As far as we are aware, at present there are no tools to address the tunneling problem numerically. “Exact” diagonalization works only for relatively small systems, and, even in small systems, its accuracy is not sufficient to resolve very small energy splittings $`\mathrm{\Delta }E`$, say, when $`\mathrm{\Delta }E/E\sim 10^{-10}`$, due to round-off errors (unless special no-round-off arithmetic is used).
In this letter we develop a quantum Monte Carlo (MC) approach which allows precise calculations of tunneling amplitudes (and instanton shapes) no matter how small their absolute values are. The MC scheme contains no systematic errors, and its accuracy is limited only by statistical noise. The key point of our approach is in simulating the imaginary-time dependence of transition amplitudes $`A_{ji}(\tau )`$ between selected reference states $`|\eta _1\rangle `$ and $`|\eta _2\rangle `$ (see below). In doing so we have to solve the problem of collecting reliable statistics in a case when $`A_{ji}(\tau )`$ varies, say, over hundreds of orders of magnitude (!) between different points in time. First, we describe in detail how to evaluate the tunneling splitting and the instanton shape in the double-well potential. We then proceed to the many-body problem of vacuum-to-vacuum transitions by considering the case of a 1D quantum antiferromagnet with exchange anisotropy. Finally, we discuss the generality of the suggested method and present our numerical results for tunneling in the double-well potential (with comparison to exact-diagonalization data where possible).
Consider the standard problem of particle motion in an external potential:
$$H=m\dot{x}^2/2+U(x),$$
(1)
which has two minima at points $`x=\eta _1`$ and $`x=\eta _2`$, and a large tunneling action $`S=\int _{\eta _1}^{\eta _2}p\,dx=\int _{\eta _1}^{\eta _2}[2mU(x)]^{1/2}dx\gg 1`$. These minima are supposed to be near-degenerate, i.e., the lowest eigenstates of the Hamiltonian (1), $`H\mathrm{\Psi }_\alpha =E_\alpha \mathrm{\Psi }_\alpha `$, form a doublet with
$$E_2-E_1\sim e^{-S}\omega _i\ll \omega _i,$$
(2)
where $`\omega _i`$ are the classical vibration frequencies in the potential minima, $`\omega _i=[U^{\prime \prime }(\eta _i)/m]^{1/2}`$ (we set $`\hbar =1`$). The lattice analog of the Hamiltonian (1) reads
$$H=t\sum _{\langle ll^{\prime }\rangle }d_l^{\dagger }d_{l^{\prime }}+\sum _ln_lU_l,\qquad n_l=d_l^{\dagger }d_l,$$
(3)
where $`d_l^{\dagger }`$ creates a particle on site number $`l`$, and the first sum runs over nearest-neighbor sites.
The transition amplitude from the state $`|\eta _1\rangle `$ to the state $`|\eta _2\rangle `$, where $`|\eta _{1,2}\rangle =\delta (x-\eta _{1,2})`$, in imaginary time $`\tau `$ is given by
$$A_{ji}(\tau )=\langle \eta _2|e^{-H\tau }|\eta _1\rangle =\sum _\alpha \langle \alpha |\eta _1\rangle \langle \eta _2|\alpha \rangle e^{-E_\alpha \tau }.$$
(4)
We now make use of the inequality (2) to define the asymptotic regime $`E_2-E_1\ll \tau ^{-1}\ll \omega _i`$, see, e.g., Refs. ,
$$A_{ji}(\tau )\approx e^{-\overline{E}\tau }\sum _{\alpha =1,2}\langle \alpha |\eta _1\rangle \langle \eta _2|\alpha \rangle [1-(E_\alpha -\overline{E})\tau ].$$
(5)
where $`\overline{E}=(E_2+E_1)/2`$.
It is convenient to split the double-well potential into two terms, $`U(x)=U^{(1)}(x)+U^{(2)}(x)`$, where $`U^{(1,2)}(x)`$ is identical to $`U(x)`$ to the left/right of the barrier maximum point and remains constant afterwards. Introducing the system ground states in each minimum as $`H^{(i)}\mathrm{\Psi }_G^{(i)}=E_G^{(i)}\mathrm{\Psi }_G^{(i)}`$, where $`H^{(i)}=m\dot{x}^2/2+U^{(i)}(x)`$, we may rewrite
$`\mathrm{\Psi }_1`$ $`=`$ $`u\mathrm{\Psi }_G^{(1)}+v\mathrm{\Psi }_G^{(2)},`$ (6)
$`\mathrm{\Psi }_2`$ $`=`$ $`v\mathrm{\Psi }_G^{(1)}-u\mathrm{\Psi }_G^{(2)},`$ (7)
where $`(u^2,v^2)=1/2\pm \xi /2E`$, $`E^2=\mathrm{\Delta }^2+\xi ^2`$, with the obvious identification of the energy splitting $`2E=E_2-E_1`$ and the bias energy $`\xi =(E_G^{(1)}-E_G^{(2)})/2`$. Here $`\mathrm{\Delta }`$ is the tunneling amplitude, which defines the energy splitting $`E_2-E_1=2\mathrm{\Delta }`$ in the degenerate case. Substituting Eq. (7) into Eq. (5) we finally obtain
$`A_{ii}(\tau )`$ $`\approx `$ $`e^{-\overline{E}\tau }Z_i^2,`$ (8)
$`A_{ji}(\tau )`$ $`\approx `$ $`\tau e^{-\overline{E}\tau }Z_iZ_j\mathrm{\Delta }.`$ (9)
Here $`Z_i=\langle \mathrm{\Psi }_G^{(i)}|\eta _i\rangle `$ projects the reference states $`|\eta _i\rangle `$ onto the corresponding ground states in each minimum. All corrections to Eq. (9), e.g., the neglect of the overlap integrals $`\langle \mathrm{\Psi }_G^{(1)}|\eta _2\rangle `$ and $`\langle \mathrm{\Psi }_G^{(2)}|\eta _1\rangle `$, are small in the parameter $`e^{-S}`$. Note also that Eq. (9) does not depend on the bias $`\xi \ll \omega _i`$, and is insensitive to the behavior of $`\mathrm{\Psi }_G^{(i)}`$ in the deep underbarrier region. This (rather standard) consideration relates transition amplitudes to the tunneling amplitude through the asymptotic analysis of $`A_{ij}(\tau )`$ in time.
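In practice this amounts to fitting the measured time histograms in the window $`1/\omega _i\ll \tau \ll 1/\mathrm{\Delta }`$. A minimal sketch of such an extraction is given below (with synthetic, made-up numbers standing in for simulation data):

```python
import numpy as np

# Eqs. (8)-(9) give  ln A_ii = -Ebar*tau + 2 ln Z_i  and
# ln[A_ji/tau] = -Ebar*tau + ln(Z_i Z_j Delta), so linear fits in tau
# yield Ebar, the Z-factors and the tunneling amplitude Delta.
def extract_parameters(tau, A_ii, A_jj, A_ji):
    s_ii, b_ii = np.polyfit(tau, np.log(A_ii), 1)
    s_jj, b_jj = np.polyfit(tau, np.log(A_jj), 1)
    s_ji, b_ji = np.polyfit(tau, np.log(A_ji / tau), 1)
    Ebar = -np.mean([s_ii, s_jj, s_ji])
    Z_i, Z_j = np.exp(b_ii / 2.0), np.exp(b_jj / 2.0)
    Delta = np.exp(b_ji) / (Z_i * Z_j)
    return Ebar, Z_i, Z_j, Delta

# Synthetic check with arbitrary parameters (not simulation output):
tau = np.linspace(20.0, 200.0, 50)
Ebar0, Zi0, Zj0, D0 = 0.5, 0.6, 0.7, 1.0e-8
print(extract_parameters(tau,
                         Zi0**2 * np.exp(-Ebar0 * tau),
                         Zj0**2 * np.exp(-Ebar0 * tau),
                         tau * Zi0 * Zj0 * D0 * np.exp(-Ebar0 * tau)))
```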
MC simulation of the transition amplitude is almost identical to the standard simulation of quantum statistics \[the partition function at a given temperature $`T=1/\beta `$ may be written as $`𝒵(\tau =\beta )=\mathrm{Tr}_\eta A(\eta ,\eta ,\tau )`$\]. Fixed boundary conditions, as opposed to the trace over closed trajectories (configurations), are trivial to deal with in any scheme. The real difference is in sampling different time-scales - thermodynamic calculations are typically done with $`\beta =const`$. Now we are forced to consider trajectories with different values of $`\tau `$ and to treat imaginary time as a “dynamic” (in the MC sense) variable. The idea of utilizing the time dependence of trajectories in MC simulations was extensively discussed in connection with the Worm algorithm and the polaron Green function .
We now turn to the problem of normalization. This problem might seem intractable in view of the close analogy between $`𝒵(\tau )`$ and $`A(\tau )`$. Formally, in the limit $`\tau \to 0`$, the amplitude is trivial to find in most cases, e.g., for the Hamiltonians (1) and (3) it is given by free-particle propagation
$$A_{ji}(\tau \to 0)\approx \{\begin{array}{cc}e^{-m(\eta _i-\eta _j)^2/2\tau }\sqrt{\frac{m}{2\pi \tau }}\hfill & \mathrm{continuous}\hfill \\ (t\tau )^{|\eta _i-\eta _j|}/|\eta _i-\eta _j|!\hfill & \mathrm{discrete},\hfill \end{array}$$
(10)
and this knowledge may be used to normalize the MC statistics for $`A_{ji}(\tau )`$ (in the discrete case $`|\eta _i-\eta _j|`$ is an integer). However, the absolute values of $`A_{ji}(\tau )`$ at short and long times will typically differ by many orders of magnitude, and simply no MC statistics will be available at short times.
The solution to the puzzle lies in the possibility of using an arbitrary fictitious potential $`A_{\text{fic}}(\tau )`$ in Metropolis-type updates in the time domain
$$\frac{P_{acc}(ℬ\to 𝒜)}{P_{acc}(𝒜\to ℬ)}=e^{-S_𝒜+S_ℬ}\frac{A_{\text{fic}}(\tau ^{\prime })}{A_{\text{fic}}(\tau )}\frac{W_𝒜(\tau )}{W_ℬ(\tau ^{\prime })},$$
(11)
where $`P_{acc}(ℬ\to 𝒜)/P_{acc}(𝒜\to ℬ)`$ is the acceptance ratio for the update transforming the initial trajectory $`ℬ`$ (having duration $`\tau `$) into the trajectory $`𝒜`$ (having duration $`\tau ^{\prime }`$), and $`e^{-S_ℬ}`$ is the statistical weight of the trajectory $`ℬ`$. The normalized distribution functions $`W_ℬ(\tau ^{\prime })`$, according to which a new value of $`\tau `$ is seeded, are also arbitrary; the best choice of $`W`$’s follows from the conditions of (i) an optimal acceptance ratio (as close to unity as possible), and (ii) a simple analytic form allowing trivial solution of the equation $`\int _{\tau _a}^{\tau ^{\prime }}W(\tau ^{\prime \prime })d\tau ^{\prime \prime }=r\int _{\tau _a}^{\tau _b}W(\tau ^{\prime \prime })d\tau ^{\prime \prime }`$ on the time interval $`(\tau _a,\tau _b)`$, where $`0<r<1`$ is a random number. Each trajectory adds a contribution $`1/A_{\text{fic}}(\tau )`$ to the time-histogram of $`A_{ij}(\tau )`$.
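As an illustration, here is a minimal sketch of such a time-domain update for an exponential seeding distribution $`W(\tau )\propto e^{-\lambda \tau }`$ on $`(\tau _a,\tau _b)`$, for which the seeding equation above has a closed-form solution. The action difference between the old and new trajectories is assumed to be supplied by the accompanying path update, and the same seeding interval is used for the forward and reverse moves, so the normalization of $`W`$ cancels:

```python
import math
import random

def seed_tau(tau_a, tau_b, lam):
    """Draw tau' from W(t) ~ exp(-lam*t) restricted to (tau_a, tau_b) by solving
    int_{tau_a}^{tau'} W = r * int_{tau_a}^{tau_b} W for a uniform r in (0,1)."""
    r = random.random()
    return tau_a - math.log(1.0 - r * (1.0 - math.exp(-lam * (tau_b - tau_a)))) / lam

def accept_time_update(dS, tau_old, tau_new, A_fic, lam):
    """Metropolis acceptance for the tau -> tau' move of Eq. (11);
    dS = S_new - S_old is the change of the trajectory action."""
    W = lambda t: lam * math.exp(-lam * t)
    ratio = math.exp(-dS) * (A_fic(tau_new) / A_fic(tau_old)) * (W(tau_old) / W(tau_new))
    return random.random() < min(1.0, ratio)
```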
One may use the fictitious potential to enhance statistics of trajectories with certain values of $`\tau `$ “by hand”, e.g., by making $`A_{\text{fic}}`$ zero outside some time-window. To get reliable and properly weighted statistics both at short and long times we need $`A_{\text{fic}}(\tau )\propto 1/A_{ij}(\tau )`$ to compensate completely for the severe variation of the transition amplitude between different time-scales. This goal is achieved as follows. The initial stage of the calculation, called thermalization, prepares the fictitious potential using a recursive self-adjusting scheme - starting from $`A_{\text{fic}}(\tau )=1`$ in a given time-window $`(\tau _{min},\tau _{max})`$ and zero otherwise, we collect statistics for $`A_{ij}(\tau )`$ in a temporary time-histogram, and after $`M>10^6÷10^7`$ updates we renew the fictitious potential as
$$A_{\text{fic}}(\tau )=\{\begin{array}{cc}A_{ij}(\tau _0)/A_{ij}(\tau ),\hfill & \tau _1<\tau <\tau _2\hfill \\ A_{ij}(\tau _0)/A_{ij}(\tau _1),\hfill & \tau _{min}<\tau <\tau _1\hfill \\ A_{ij}(\tau _0)/A_{ij}(\tau _2),\hfill & \tau _2<\tau <\tau _{max}\hfill \end{array}$$
(12)
where $`\tau _1`$ and $`\tau _2`$ are the points (to the left and to the right of some reference point $`\tau _0`$) where the temporary statistics becomes unreliable and has large fluctuations. It makes sense to select $`\tau _0`$ close to the maximum of $`A_{ij}(\tau )`$ (this point may be tuned a posteriori), and the points $`\tau _1`$ and $`\tau _2`$ are formally defined as the first points in the histogram where the smooth variation of $`A_{ij}(\tau )`$ ends: $`A_{ij}(\tau _{1,2}+\mathrm{\Delta }\tau )/A_{ij}(\tau _{1,2})<\delta `$ (here $`\mathrm{\Delta }\tau `$ is the difference between the nearest points in the time-histogram, and $`\delta `$ is a small number, say 0.01). The thermalization stage continues until $`\tau _1=\tau _{min}`$ and $`\tau _2=\tau _{max}`$, and the fictitious potential stops changing (within a factor of two). After that the actual calculation starts with a fixed $`A_{\text{fic}}(\tau )`$, and a new histogram for $`A_{ij}(\tau )`$ is collected.
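A schematic version of this self-adjusting refresh on a discrete $`\tau `$ grid is sketched below; the reliability criterion and the flat continuation outside $`(\tau _1,\tau _2)`$ follow Eq. (12), while the histogram bookkeeping is deliberately simplified:

```python
import numpy as np

def refresh_fictitious_potential(hist, i0, delta=0.01):
    """One thermalization-stage update of A_fic, Eq. (12).

    hist : temporary estimate of A_ij(tau) on a uniform tau grid;
    i0   : index of the reference point tau_0 (near the maximum of A_ij);
    tau_1, tau_2 are the first bins, left and right of tau_0, where the
    neighbouring-bin ratio drops below delta (statistics become unreliable)."""
    n = len(hist)
    i1 = i0
    while i1 > 0 and hist[i1 - 1] > 0 and hist[i1 - 1] / hist[i1] > delta:
        i1 -= 1
    i2 = i0
    while i2 < n - 1 and hist[i2 + 1] > 0 and hist[i2 + 1] / hist[i2] > delta:
        i2 += 1
    A_fic = np.empty(n)
    A_fic[i1:i2 + 1] = hist[i0] / hist[i1:i2 + 1]  # compensate A_ij between tau_1 and tau_2
    A_fic[:i1] = hist[i0] / hist[i1]               # flat continuation below tau_1
    A_fic[i2 + 1:] = hist[i0] / hist[i2]           # flat continuation above tau_2
    return A_fic, i1, i2
```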
The idea of using a fictitious potential proportional to the inverse of the transition amplitude is clear: it allows one to collect reliable statistics on different time scales with comparable relative accuracy. With this tool at hand one can easily normalize $`A_{ij}(\tau )`$ using known analytic results for short times, and deduce the tunneling amplitude and Z-factors from the analysis of the long-time asymptotics, Eq. (9). We cannot help noting that the fictitious potential in the time domain very much resembles the so-called “guiding wavefunction” in Green-function MC methods, with the essential difference that here it is used to reach exponentially rare configurations.
The instanton shape calculation is a much easier task since it can be done by considering trajectories for the transition amplitude $`A_{ij}(\tau )`$ with fixed but sufficiently long $`\tau \gg 1/\omega _i`$; again, to ensure that $`A_{ij}`$ is dominated by just one instanton trajectory we need $`\tau \ll 1/\mathrm{\Delta }`$. For any given MC trajectory $`x(\tau )`$ one first has to define the instanton center position in time, and to measure all times from this center point $`\tau _c`$ \[the instanton center statistics is almost uniform in $`(0,\tau )`$ (except near the ends of the time interval) due to the generic “zero mode” present in instanton solutions\]. This can be done by looking at the average time
$$\tau _c=\frac{\int _B\tau 𝑑\tau }{\int _B𝑑\tau },$$
(13)
where the integral is taken over the barrier region between the wells, $`U(x(\tau ))>E_G`$. The instanton shape is then obtained by collecting the statistics of $`x(\tau -\tau _c)`$ in the time histogram. In this simple example, where the notion of a collective coordinate is not necessary (or, formally, $`𝐑=x`$), we do not need to define a separate estimator for $`𝐑`$ (see the opposite example below).
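In discretized form, the center definition (13) and the accumulation of the shape histogram could look as follows (Python/NumPy). The trajectory is assumed to be stored on a regular time grid, and $`U`$ is assumed to be a vectorized callable; these are conveniences of the sketch, not statements about the original code.

```python
import numpy as np

def instanton_center(tau, x, E_G, U):
    """Center position tau_c of the instanton on a trajectory x(tau), Eq. (13).

    The barrier region B is taken as the set of times where U(x(tau)) > E_G.
    For a uniform time grid the ratio of integrals reduces to a mean.
    """
    in_barrier = U(x) > E_G
    if not np.any(in_barrier):
        return None
    return np.mean(tau[in_barrier])

def accumulate_shape(tau, x, tau_c, bin_edges, hist, counts):
    """Add x(tau - tau_c) of one trajectory to the shape histogram."""
    idx = np.digitize(tau - tau_c, bin_edges) - 1
    ok = (idx >= 0) & (idx < len(hist))
    np.add.at(hist, idx[ok], x[ok])
    np.add.at(counts, idx[ok], 1)
```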
The case of vacuum-to-vacuum transitions in the many-body problem is formulated in precisely the same manner, and Eqs. (4-9) and (11-12) hold true once the identification of the states $`|\eta _{1,2}`$ is made and the short-time limit, Eq. (10), is calculated. Consider as a typical example a 1D spin chain with $`2L`$ sites and antiferromagnetic (AF) couplings between nearest-neighbor spins
$$H=\sum _{<ij>}\left[J\vec{S}_i\vec{S}_j+J^{}S_i^zS_j^z\right],$$
(14)
which has a nearly degenerate ground state, with the lowest doublet separated from the rest of the spectrum by a finite gap for $`J^{}>0`$. The natural choice of $`|\eta _{1,2}`$ is then an ordered AF state with $`S_i^z|\eta _{1,2}=\pm (-1)^i|\eta _{1,2}`$. Note that for a large system the Z-factors $`\mathrm{\Psi }_G^{(i)}|\eta _i`$ will also be exponentially small, and even the diagonal amplitudes $`A_{ii}`$ have to be calculated with the use of $`A_{\text{fic}}`$ \[in the single-particle case one may obtain Z-factors while ignoring the $`A_{\text{fic}}`$ trick\]. The short-time behavior is given by
$`A_{ii}(\tau )`$ $`\approx `$ $`1`$ (15)
$`A_{ij}(\tau )`$ $`\approx `$ $`\left(J\tau /2\right)^L\{\begin{array}{cc}2\hfill & (\mathrm{ring})\hfill \\ 1\hfill & (\mathrm{open}\mathrm{chain}),\hfill \end{array}`$ (18)
To decipher the instanton we now need some a priori knowledge about the relevant collective variable (if such knowledge is not available one has to study different possibilities). For example, if tunneling proceeds via two domain walls well separated from each other (thin-wall approximation), then the collective variable $`𝐑`$ is the distance between the walls, and the under-barrier region in Eq. (13) is related to the existence of two separated walls. These definitions are not too specific and work only approximately; this is, however, a generic difficulty of dealing with collective variables which are meaningful only in the macroscopic limit. Obviously, the knowledge of the instanton shape does not allow a precise evaluation of $`\mathrm{\Delta }`$, and gives only a rough estimate of $`\mathrm{ln}\mathrm{\Delta }`$.
To test the proposed scheme and to compare the results with exact diagonalization (ED) data we have applied our algorithm to the lattice model (3) with $`U(x)=U_0[(x/\eta )^2-1]^2`$. In what follows we measure all energies in units of the hopping amplitude $`t`$ and count them from the potential minimum. We set $`U_0=1`$ and consider two interwell separations: $`\eta =10`$ and $`\eta =40`$. For $`\eta =10`$ the ED data for the ground-state energy, Z-factor, and tunneling amplitude are: $`E_G=(E_1+E_2)/2=0.1923`$, $`Z^2=0.2465`$, $`\mathrm{\Delta }=(E_2-E_1)/2=3.6078\times 10^{-6}`$. Our MC data give $`E_G=0.192(2)`$, $`Z^2=0.246(2)`$, and $`\mathrm{\Delta }=3.61(3)\times 10^{-6}`$. The case of large $`\eta `$ is more subtle, since only $`E_G=0.0495`$ and $`Z^2=0.1255`$ may be tested against ED - one may use the textbook semiclassical analysis of the corresponding continuous model to see that $`\mathrm{\Delta }(\eta =40)\sim 10^{-24}`$–$`10^{-23}`$, i.e., far below the standard computer round-off level.
In Fig. 1 and Fig. 2 we present our MC data for the diagonal and off-diagonal amplitudes for $`\eta =40`$, together with fits to the expected long-time, Eq. (9), and short-time behavior, Eq. (10) \[we also include the lowest-order correction for the potential energy at $`\tau \to 0`$, which gives $`A_{ij}\propto e^{-\overline{U}\tau }`$ where $`\overline{U}=\int _{-\eta }^\eta U(x)𝑑x`$\]. The variation of $`A_{ij}`$ in Fig. 2 is about two hundred orders of magnitude! From Fig. 1 we deduce $`E_G=0.0494(2)`$ and $`Z^2=0.126(1)`$, in agreement with ED. From Fig. 2 we then obtain $`\mathrm{\Delta }=1.7(4)\times 10^{-23}`$. The MC simulations for these figures took about 5 days each on a PII-266 processor.
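For orientation, here is a minimal sketch of how $`E_G`$, $`Z^2`$ and $`\mathrm{\Delta }`$ could be extracted from such data. Since Eq. (9) is not reproduced in this excerpt, the sketch assumes the standard two-level asymptotics $`A_{ii}(\tau )\approx Z^2e^{-E_G\tau }\mathrm{cosh}(\mathrm{\Delta }\tau )`$ and $`A_{ij}(\tau )\approx Z^2e^{-E_G\tau }\mathrm{sinh}(\mathrm{\Delta }\tau )`$, with $`\mathrm{\Delta }\tau \ll 1`$ on the fitted window; this is our reading, not a quote of the paper's formula.

```python
import numpy as np

def extract_parameters(tau, A_ii, A_ij):
    """Estimate E_G, Z^2 and Delta from long-time amplitude data.

    Assumes the two-level forms A_ii ~ Z^2 exp(-E_G*tau) and
    A_ij ~ Z^2 * Delta * tau * exp(-E_G*tau) (valid for Delta*tau << 1).
    """
    # linear fit of ln A_ii = ln Z^2 - E_G * tau
    slope, intercept = np.polyfit(tau, np.log(A_ii), 1)
    E_G, Z2 = -slope, np.exp(intercept)
    # with E_G and Z^2 fixed, Delta follows from the off-diagonal amplitude
    Delta = np.mean(A_ij / (Z2 * tau * np.exp(-E_G * tau)))
    return E_G, Z2, Delta
```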
In Fig. 3 we present our results for the instanton trajectory $`x(\tau )`$ (at $`\eta =40`$), along with the known semiclassical results for the continuous model (1). The accuracy of the data is self-evident, although we note that the MC data represent the trajectory with finite energy $`E=E_G`$, while the analytic results correspond to $`E=0`$.
We note that the present technique is very hard to implement in discrete-time schemes with a finite Trotter parameter $`\mathrm{\Delta }\tau `$. On the one hand, the long-time asymptotic regime (see Fig. 2) requires $`\tau `$ as long as $`640`$. On the other hand, at short times the requirement of smooth variation of the amplitude, $`\mathrm{\Delta }\tau d\mathrm{ln}[A_{ij}(\tau )]/d\tau \ll 1`$, after substituting Eq. (10), means $`\mathrm{\Delta }\tau \ll \tau /(2\eta )\approx 10^{-3}`$ for $`\tau =0.1`$. To avoid large systematic errors due to time discretization at short times one has to use $`\mathrm{\Delta }\tau =10^{-4}`$(!!). Apart from the enormous memory usage (there would be about $`10^7`$ time slices), such a small Trotter parameter severely slows down the efficiency of the code; in fact, we are not aware of any MC simulation with $`\mathrm{\Delta }\tau \sim 10^{-4}`$.
It is worth mentioning that a similar technique makes it possible to study $`𝒵(\tau )`$ directly over different time scales - the normalization of the partition function does not matter, since no physical quantity depends on it. Thus one can obtain the temperature dependence of the free energy, entropy, etc. in a single MC run.
This work was supported by the RFBR Grants 98-02-16262 and 97-02-16548 (Russia), IR-97-2124 (European Community).
# An Efficient Molecular Dynamics Scheme for Predicting Dopant Implant Profiles in Semiconductors
## 1 Introduction
The continuing quest for greater processor performance demands ever smaller device sizes. The effort to realize these ultra shallow junction devices has resulted in current industry trends, such as the use of low energy, high mass, high dose, and large angle implants, to create abrupt dopant profiles. The experimental measurement of such profiles is challenging, as effects that are negligible at high implant energies become increasingly important as the implant energy is lowered. For example, the measurement of dopant profiles by secondary ion mass spectrometry (SIMS) is problematic for very low energy (less than 10 keV) implants, due to limited depth resolution of measured profiles. Also, refining SIMS protocols to obtain profiles for new ion-target combinations, e.g. In, Sb, or N implants or Si<sub>1-x</sub>Ge<sub>x</sub> targets is not a trivial problem.
The use of computer simulation as an alternative method to determine dopant profiles is well established. Binary collision approximation (BCA) codes have traditionally been used, however such simulations become unreliable at low ion energies. The BCA approach breaks down when multiple collisions or collisions between moving atoms become significant, or when the crystal binding energy is of the same order as the energy of the ion. Such problems are clearly evident when one attempts to use the BCA to simulate channeling in semiconductors; here the interactions between the ion and target are neither binary nor collisional in nature, rather they occur as many simultaneous soft interactions which steer the ion down the channel.
A more accurate alternative to the BCA is the use of molecular dynamics (MD) simulation to calculate ion trajectories. However, the computational cost of traditional MD simulations precludes the calculation of the thousands of ion trajectories necessary to construct a dopant profile. Here we present a highly efficient MD-based scheme that is optimized to calculate the concentration profiles of ions implanted into semiconductors. The algorithms are incorporated into our implant modeling molecular dynamics code, REED-MD (named for ‘Rare Event Enhanced Domain following Molecular Dynamics’). Our program has previously been demonstrated to describe the low dose (zero damage) implant of As, B, and P ions with energies in the sub-MeV range into crystalline Si in $`\langle 100\rangle `$, $`\langle 110\rangle `$, and non-channeling directions, and also into amorphous Si. We have now extended our model to any ion species, and to other diamond crystal substrates, such as C, Ge, SiC, Si<sub>1-x</sub>Ge<sub>x</sub>, and GaAs. A model for ion-induced damage has also been added to the program so that high dose implants can be simulated.
## 2 Molecular Dynamics Model
The basis of the molecular dynamics model is a collection of empirical potential functions that describe interactions between atoms and give rise to forces between them. In addition to the classical interactions described by the potential functions, the interaction of the ion with the target electrons is required for ion implant simulations, as this is the principle way in which the ion loses energy. Another necessary ingredient is a description of the target material structure, including thermal vibration amplitudes.
Interactions between target atoms are modeled by derivatives of the many-body potential developed by Tersoff. ZBL ‘pair specific’ screened Coulomb potentials are used to model interactions for common ion-target combinations. For other combinations, the ZBL ‘universal’ potential is used. The ‘universal’ potential is also used to describe the close-range repulsive part of the Tersoff potentials.
We include energy loss due to inelastic collisions, and energy loss due to electronic stopping as two distinct mechanisms. The Firsov model is used to describe the loss of kinetic energy from the ion due to momentum transfer between the electrons of the ion and target atom. We implement this using a velocity dependent pair potential, as derived by Kishinevskii.
A modified Brandt-Kitagawa model, that involves both global and local contributions to the electronic stopping is used for electronic energy loss. This model contains the single fitted parameter in our scheme, $`r_s^0`$, the ‘average’ one electron radius of the target material experienced by the ion. This is adjusted to account for oscillations in the $`Z_1`$ dependence of the electronic stopping cross-section. The parameter is fit once for each ion-target combination and is then valid for all ion energies and incident directions. By using a realistic stopping model, with the minimum of fitted parameters, we obtain a greater transferability to the modeling of implants outside the fitting set.
In the calculations presented here, the target is a {100} diamond crystal with a surface oxide layer. The oxide structure was obtained from annealing a periodic SiO<sub>2</sub> sample with the density constrained to that estimated for grown surface oxide. Thermal vibrations of atoms are modeled by displacing atoms from their lattice sites using a Debye model. For high dose implants, the accumulation of damage within the target is described by a simple Kinchin-Pease model. Target properties are either species dependent, e.g., local electron density, or are obtained by interpolation from known values for single element materials, e.g., lattice constant and Debye temperature.
## 3 Efficient Molecular Dynamics Algorithms
We apply a combination of methods to increase the efficiency of this specific type of simulation. Infrequently updated neighbor lists are employed to minimize the time spent in force calculations. The paths of the atoms are integrated using Verlet’s algorithm, with a variable timestep that is dependent upon both kinetic and potential energy of atoms.
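A schematic velocity-Verlet step with an adaptive timestep is sketched below (Python/NumPy). The displacement-limiting criterion used to pick the timestep is purely illustrative; the actual criterion in REED-MD, which involves both kinetic and potential energy, is not spelled out here.

```python
import numpy as np

def choose_timestep(v, f, m, dx_max=0.01, dt_max=1.0e-2):
    """Pick a timestep so that no atom moves more than dx_max per step
    (in the unit system of x, v and f).  Illustrative criterion only."""
    speed = np.linalg.norm(v, axis=1).max()
    accel = (np.linalg.norm(f, axis=1) / m).max()
    dt_v = dx_max / max(speed, 1e-30)          # kinetic (velocity) limit
    dt_f = np.sqrt(2.0 * dx_max / max(accel, 1e-30))  # potential (force) limit
    return min(dt_max, dt_v, dt_f)

def velocity_verlet_step(x, v, f, m, dt, force_fn):
    """One velocity-Verlet integration step for positions x and velocities v."""
    v_half = v + 0.5 * dt * f / m
    x_new = x + dt * v_half
    f_new = force_fn(x_new)
    v_new = v_half + 0.5 * dt * f_new / m
    return x_new, v_new, f_new
```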
It is infeasible to calculate dopant profiles by full MD simulation, as computational requirements scale approximately as $`u^4`$, where $`u`$ is the initial ion velocity. We have developed a modified MD scheme which is capable of producing accurate dopant profiles with a much smaller computational overhead. We continually create and destroy target atoms, to follow the domain of the substrate that contains the ion. Material is built in front of the ion, and destroyed in its wake. Hence, the ion experiences the equivalent of a complete crystal, but the cost of the algorithm is only O($`u`$).
To further improve efficiency, we use three other approximations. The moving atom approximation is used to reduce the number of force calculations. Atoms are divided into two sets; those that are ‘on’ have their positions integrated, and those that are ‘off’ are stationary. At the start of the simulation, only the ion is turned on. Some of the ‘off’ atoms will be used in the force calculations and will have forces assigned to them. If the resultant force exceeds a certain threshold, the atom is turned on. We use two thresholds in our simulation; all atoms interacting directly with the ion are turned on immediately (zero threshold), and other atoms are turned on if the force exceeds 8.0$`\times `$10<sup>-9</sup> N.
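The bookkeeping of the moving-atom approximation can be summarized in a few lines (Python/NumPy sketch; the array layout is our own choice, and only the 8.0$`\times `$10<sup>-9</sup> N threshold is taken from the text).

```python
import numpy as np

F_THRESHOLD = 8.0e-9  # N; atoms interacting directly with the ion use a zero threshold

def update_active_set(active, forces, interacts_with_ion):
    """Switch 'off' atoms on according to the moving-atom approximation.

    active             : boolean array, True for atoms whose paths are integrated
    forces             : (N, 3) array of forces assigned in the last force evaluation
    interacts_with_ion : boolean array, True for atoms directly interacting with the ion
    """
    magnitudes = np.linalg.norm(forces, axis=1)
    switch_on = interacts_with_ion | (magnitudes > F_THRESHOLD)
    return active | switch_on
```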
For high ion velocities, we do not need to use a many-body potential to maintain the stable diamond lattice; a pair potential is sufficient, as only repulsive interactions are significant. Hence, at a certain ion velocity we switch from the complete many-body potential to a pair potential approximation for the target atom interactions. We make a further approximation for still higher ion energies, where only ion-target interactions are significant in determining the ion path. This approximation, termed the recoil interaction approximation, brings the MD scheme close to many BCA implementations. The major difference between the two approaches is that the ion path is obtained by integration, rather than by the calculation of asymptotes, and that multiple interactions are, by the nature of the method, handled in the correct manner.
## 4 Rare Event Algorithm
A typical dopant profile in a crystalline semiconductor consists of a near-surface peak followed by an almost exponential decay over several orders of magnitude in concentration. If we attempt to directly calculate a statistically significant dopant concentration at all depths of the profile we will have to run many ions that are stopped near the peak for every one ion that stops in the tail, and most of the computational effort will not enhance the accuracy of the profile.
In order to remove this redundancy, we employ an ‘atom splitting’ scheme to increase the sampling in the deep component of the concentration profile. At certain splitting depths in the material, each ion is replaced by two ions, each with a statistical weighting of half that prior to splitting. Each split ion trajectory is run separately, and the weighting of the ion is recorded along with its final depth. As the split ions experience different environments (material is built in front of the ion, with random thermal displacements), the trajectories rapidly diverge from one another. Due to this scheme, we can maintain the same number of virtual ions moving at any depth, but their statistical weights decrease with depth. During a typical simulation, 1,000 implanted ions are split to yield around 10,000 virtual ions.
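A compact way to express the splitting logic is the recursive sketch below (Python). The MD stepping and stopping criteria are abstracted away as callables, and the data structures are illustrative only.

```python
def propagate_with_splitting(ion, weight, splitting_depths, step_fn, stopped_fn):
    """Rare-event enhancement: replace the ion by two half-weight copies
    each time it crosses one of the prescribed splitting depths.

    step_fn(ion)    -> advances the ion one MD step and returns its current depth
    stopped_fn(ion) -> True once the ion has come to rest
    Returns a list of (final_depth, statistical_weight) pairs.
    """
    results, depth, i = [], 0.0, 0
    while not stopped_fn(ion):
        depth = step_fn(ion)
        if i < len(splitting_depths) and depth > splitting_depths[i]:
            i += 1
            clone = ion.copy()  # the copy sees different thermal displacements
            results += propagate_with_splitting(
                clone, weight / 2.0, splitting_depths[i:], step_fn, stopped_fn)
            weight /= 2.0
    results.append((depth, weight))
    return results
```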
We estimate the uncertainty in the calculated dopant profiles by dividing the final ion depths into 10 sets. A depth profile is calculated from each set using a histogram of 100 bins, and the standard deviation of the distribution of the 10 concentrations for each bin is taken as the uncertainty. Fig. 1 demonstrates the effectiveness of the scheme, by comparing profiles obtained with and without atom splitting over five orders of magnitude. We estimate that the rare event algorithm reduces CPU time by a factor of 90 when calculating profiles over 3 orders of magnitude, and by a factor of 900 when calculating a profile over 5 orders of magnitude.
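The corresponding error estimate can be reproduced with a few lines of NumPy, assuming the final depths and statistical weights of all virtual ions have been collected; the binning choices below follow the text.

```python
import numpy as np

def profile_with_errors(depths, weights, z_max, n_bins=100, n_sets=10):
    """Depth profile and error bars from the final (depth, weight) pairs.

    The ions are divided into n_sets groups; a histogram is built for each
    group and the standard deviation over groups gives the uncertainty.
    """
    edges = np.linspace(0.0, z_max, n_bins + 1)
    depths, weights = np.asarray(depths), np.asarray(weights)
    sets = np.array_split(np.arange(len(depths)), n_sets)
    profiles = np.array([
        np.histogram(depths[s], bins=edges, weights=weights[s])[0]
        for s in sets
    ])
    return edges, profiles.mean(axis=0), profiles.std(axis=0)
```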
## 5 Results and Discussion
First, we give 2D profiles for low dose, low energy implants to show scattering and surface effects. We then give examples of 1D profiles produced by simulations, and compare them to SIMS data. All simulations were run with a target temperature of 300 K, and a beam divergence of 1.0° was assumed. Each profile was constructed from 1,000 ions, with the splitting depths updated every 25 ions, and a domain of 3$`\times `$3$`\times `$3 unit cells was used. The direction of the incident ion beam is specified by the angle of tilt, $`\theta `$, from normal and the azimuthal angle $`\varphi `$, as ($`\theta `$,$`\varphi `$). In the case of the low energy ($`\le `$ 10 keV) implants, we assume one unit cell thickness of surface oxide; for other cases we assume three unit cells of oxide at the surface. For the low energy implants, we have calculated profiles over a change of five orders of magnitude in concentration; for the higher energy implants we calculate profiles over 3 orders of magnitude. A dose of 1$`\times `$10<sup>12</sup> cm<sup>-2</sup> (zero damage) is used unless otherwise noted.
2D profiles are shown projected onto the plane normal to the surface and containing the zero degree azimuth. This makes it easy to differentiate between major channeling directions; the $`\langle 100\rangle `$ channel is vertical, and the four $`\langle 110\rangle `$ channels appear at angles of 35° from vertical. Fig. 2 shows the result of an ultra-low energy implant into Si. Although the implant is in the $`\langle 100\rangle `$ direction, this channel is closed at such low ion energies and the only channeling occurs in the $`\langle 110\rangle `$ direction. This demonstrates the need to have a ‘universal’ electronic stopping model, rather than a model tuned for a particular channeling direction. The effect of the amount of surface disorder is shown in Fig. 3. Increasing the thickness of the surface layer leads to more ions being scattered into the larger $`\langle 110\rangle `$ channel, and hence gives a far deeper tail to the profile.
There is increasing interest in the use of SiGe as a replacement for Si currently used in CMOS technology, due to its higher switching speed. Fig. 4 shows the effect of Ge concentration on profiles from B and As implants into Si<sub>1-x</sub>Ge<sub>x</sub> targets. The trend is clearly for shallower profiles with increasing Ge concentration, but this is extremely non-linear; the difference between $`x=0`$ and $`x=0.2`$ profiles is greater than the difference between $`x=0.2`$ and $`x=0.8`$ profiles. The remaining figures show the calculated concentration profiles of several ion species implanted under various conditions into GaAs substrates, and comparison with available SIMS data. The results of the REED calculations show good agreement with the experimental data, demonstrating the accuracy of our model and its transferability to many ion-target combinations and implant conditions.
## 6 Conclusions
In summary, we have developed a restricted MD code to simulate the ion implant process and calculate ‘as implanted’ dopant profiles. This gives us the accuracy obtained by time integrating atom paths, with an efficiency far in excess of full MD simulation.
The scheme described here gives a viable alternative to the BCA approach. Although it is still more expensive computationally, it is sufficiently fast to be used on modern desktop computer workstations. The method has two major advantages over the BCA approach: (i) Our MD model consists only of standard empirical potentials developed for bulk semiconductors and for ion-solid interactions. The only fitting is in the electronic stopping model, and this involves *only one* parameter per ion-target combination. We believe that by using physically based models for all aspects of the program, with the minimum of fitting parameters, we obtain good transferability to the modeling of implants outside of our fitting set. (ii) The method does not break down at the low ion energies necessary for production of the next generation of semiconductor technology; it gives the correct description of multiple, soft interactions that occur both in low energy implants, and higher energy channeling. Hence our method remains truly predictive at these low ion energies, whilst the accuracy of the BCA is in doubt.
This work was performed under the auspices of the United States Department of Energy.
# BEPPOSAX OBSERVATIONS OF THE GALACTIC SOURCE GS 1826-238 IN A HARD X-RAY HIGH STATE
## Abstract
ABSTRACT
The BeppoSAX Narrow Field Instruments observed the galactic source GS 1826-238 in October 1997, following a hard X-ray burst with a peak flux of about 100 mCrab detected by BATSE. Two short X-ray bursts ($`\sim `$150 seconds) were detected up to 60 keV, with larger amplitude and duration at lower energies (up to a factor 20 times the persistent emission). This confirms the proposed identification of the source as a weakly magnetized neutron star in a LMXRB system. For both persistent and burst states, the spectrum in the 0.4-100 keV energy range is well fitted by an absorbed black body, plus a flat power-law ($`\mathrm{\Gamma }\sim 1.7`$) with an exponential cutoff at 50 keV.
<sup>1</sup> IFCAI-CNR, Via Ugo La Malfa 153, I-98146 Palermo, Italy
<sup>2</sup> ITESRE-CNR, Via Gobetti 101, I-40129 Bologna, Italy
<sup>3</sup> ESTEC, Astrophysics Division, Keplerlaan 1, 2200 AG Noordwijk, The Netherlands
<sup>4</sup> NASA Marshall Space Flight Center, Huntsville, AL 35812, USA
<sup>5</sup> IFCTR, CNR, Via Bassini 15, I-20133 Milano, Italy
KEYWORDS: X-ray bursts, LMXRB
1. INTRODUCTION
GS 1826-238 was serendipitously discovered by the Ginga LAC in September 1988 with a flux of 26 mCrab and a hard power-law spectrum (photon index $`\mathrm{\Gamma }=1.7`$), and optically identified with a $`V\sim 19.3`$ star. Observations a month before and after the discovery by the Ginga ASM, by TTM in 1989, and by ROSAT in 1990 and 1992 found comparable flux levels. In 1994 the source was detected at a 7$`\sigma `$ level with OSSE above 50 keV with a steep power-law spectrum ($`\mathrm{\Gamma }=3.1`$).
Due to its flickering flux variability and hard X-ray spectrum, reminiscent of the behavior of Cyg X-1, the source was tentatively classified as a black hole candidate. Three X-ray bursts detected on 31 March 1997 with the WFC onboard BeppoSAX, and two optical bursts, suggest instead that the system contains a weakly magnetized neutron star. We focus here on the spectral shape of the source during intense, hard X-ray states and on the spectral evolution exhibited in rapid X-ray bursts.
2. OBSERVATIONS, DATA ANALYSIS AND RESULTS
Observations with the BeppoSAX Narrow Field Instruments took place on 6-7, 17-18, and 25-26 October 1997 within a Target of Opportunity program for the monitoring of X-ray transients in active state in hard X-rays. The first pointing was triggered by the BATSE detection of a hard X-ray signal with a peak flux of about 100 mCrab. On source MECS exposures lasted typically for 25-30 ks, while LECS and PDS exposures were typically 30% and 50% shorter, respectively. The target was significantly detected with the PDS up to 100 keV. Data reduction was performed following standard methods. The MECS light curves exhibit two pronounced X-ray bursts of a factor 20 amplitude in the range 1.6-10 keV and $`\sim `$150 seconds duration on October 18.12 (Fig. 1) and 26.19 UT. (Due to instrumental visibility constraints, the LECS only partially detected the first burst.) Both bursts were also detected, up to 60 keV, by the PDS. The flux exhibits a linear rise, which appears faster at higher energies, followed by an exponential decay. Burst amplitudes and durations decrease with increasing energy. Excluding the bursts, no significant variability is seen at any frequency within a single pointing, nor from epoch to epoch. For the spectral analysis, the LECS, MECS, HPGSPC and PDS data have been considered in the ranges 0.4-6 keV, 1.6-10 keV, 6-30 keV, and 15-100 keV (15-50 keV for the bursts), respectively. Both persistent and burst emission spectra cannot be fitted by an absorbed black body model only, but they exhibit a high energy tail which is accounted for by a power-law model plus a high energy cutoff (Fig. 2). For the October 17-18 observation, we obtain fitted temperatures of $`T_{BB}=0.94\pm 0.05`$ keV and $`T_{BB}=1.9\pm 0.1`$ keV for persistent and burst emission, respectively, power-law photon indices $`\mathrm{\Gamma }=1.34\pm 0.04`$ and $`\mathrm{\Gamma }=1.1\pm 0.4`$, and exponential cutoffs $`E=49\pm 3`$ keV and $`E=12\pm 4`$ keV. The last value is very uncertain, due to the more limited energy range used for the fit. A fit to time-resolved spectra in the 2-10 keV range along the burst light curve indicates that the temperature decreases during the burst decay. The burst emission spectrum has been fitted without subtracting the underlying persistent signal, to avoid neglecting the possible influence of the burst on the accretion flow (see references therein).
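For illustration, the spectral shape used in these fits can be written down as a simple model function (Python/NumPy sketch). The normalizations are arbitrary and interstellar absorption is omitted, so this is only a schematic version of the actual fit model.

```python
import numpy as np

def model_flux(E, kT_bb, gamma, E_cut, norm_bb=1.0, norm_pl=1.0):
    """Schematic photon spectrum: black body plus a power law with an
    exponential cutoff.  E and kT_bb in keV; normalizations arbitrary."""
    blackbody = norm_bb * E**2 / (np.exp(E / kT_bb) - 1.0)
    powerlaw = norm_pl * E**(-gamma) * np.exp(-E / E_cut)
    return blackbody + powerlaw

# e.g. the persistent-emission parameters quoted for October 17-18:
E = np.logspace(np.log10(0.4), 2, 200)   # 0.4-100 keV
flux = model_flux(E, kT_bb=0.94, gamma=1.34, E_cut=49.0)
```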
3. DISCUSSION
The energy-dependent amplitude of the bursts detected by BeppoSAX and their shorter duration at higher energies strongly suggest a thermonuclear origin. This is confirmed by the good fit obtained with a thermal model up to 10 keV and by the decreasing black body temperature during the burst decay. The present and previous observations of burst activity in GS 1826-238 rule out the black hole nature of the central accretor and suggest the presence of a weakly magnetized neutron star in a LMXRB system. While the temperature of the black body doubles during the burst with respect to the persistent emission, the slope of the high energy power-law does not significantly vary, although its normalization is a factor of two larger in burst than in persistent state, indicating that the hard tail is intrinsically significant in the burst state too. A similar finding was previously reported for X1608–52 by Nakamura et al., although the explored spectral range was more limited. Our fitted temperature of $`\sim `$2 keV during the burst is consistent with the inverse correlation Nakamura et al. find between temperature and hard tail intensity (see their Table 2). This high energy component is tentatively interpreted in terms of inverse Compton scattering of soft black body photons off high energy electrons in a hot region surrounding the neutron star (see, however, Day & Done). The break at 50 keV, reminiscent of that observed by BATSE in X1608–52, indicates that, unlike in black hole candidates, the neutron star surface emission at soft energies tends to cool the hot Comptonization region. The temporal spacing of the two bursts detected by the present BeppoSAX observations is consistent, within the 40 minute uncertainty, with the 5.67 hour period found by Ubertini et al. during 2.5 years of WFC observations.
REFERENCES
Makino, F. 1988, IAU Circ. No. 4653
Barret, D., Motch, C., & Pietsch, W. 1995, A&A, 303, 526
in’t Zand, J. J. M. 1992, PhD Thesis, University of Utrecht
Strickman, M., et al. 1996, A&AS, 120, 217
Ubertini, P., et al. 1997, IAU Circ. No. 6611
Homer, L., Charles, P. A., & O’Donoghue, D. 1998, MNRAS, 298, 497
Chiappetti, L., et al. 1998, Nuc. Phys. B (Proc. Suppl.), 69/1-3, 340
Lewin, W. H. G., Van Paradijs, J., & Taam, R. E. 1995, in X-ray Binaries, eds. W. H. G. Lewin, J. Van Paradijs, & E. P. J. Van den Heuvel, Cambridge Astrophysics Series
Nakamura, N., et al. 1989, PASJ, 41, 617
Day, C. S. R., & Done, C. 1991, MNRAS, 253, 35P
Zhang, S. N., et al. 1996, A&AS, 120C, 279
Ubertini, P., et al. 1998, this Conference
# The cost of exactly simulating quantum entanglement with classical communication
## 1 Introduction
Bell’s celebrated theorem shows that certain scenarios involving bipartite quantum measurements result in correlations that are impossible to simulate with a classical system if the measurement events are space-like separated. If the measurement events are time-like separated then classical simulation is possible, at the expense of some communication. Our goal is to quantify the required amount of communication.
The issue that we are addressing is part of the broader question of how quantum information affects various resources required to perform tasks in information processing. A two-way classical communication channel between two separated parties can be regarded as a resource, and a natural goal is for two parties to produce classical information satisfying a specific stochastic property. One question is, if the parties have an a priori supply of quantum entanglement, can they accomplish such goals with less classical communication than necessary in the case where their a priori information consists of only classical probabilistic information? And, if so, by how much? Our question is, to what extent does the fundamental behaviour of an entangled quantum system itself provide savings, in terms of communication compared with classical systems?
Imagine a scenario involving two “particles” that may have been “together” (and interacted) at some previous point in time, but are “separated” (in a sense which implies that they can no longer interact) at the present time. Suppose that a measurement is then arbitrarily selected and performed on each particle (not necessarily the same measurement on both particles). If the underlying physics governing the behaviour of the system is “classical” then the behaviour of such a system could be based on correlated random variables (usually called “local hidden variables”), reflecting the possible results of a previous interaction. If no communication can occur between the components at the time when the measurements take place then this imposes restrictions on the possible behaviour of such a system. In fact, if the underlying physics governing the behaviour of the system is “quantum” (in the sense that it can be based on entangled quantum states, rather than correlated random variables) then behaviour can occur that is impossible in the classical case. This is a natural way of interpreting Bell’s theorem . To formalize—and later generalize—this, we shall define quantum measurement scenarios and (classical) local hidden variable schemes.
## 2 Definitions and preliminary results
Define a quantum measurement scenario as a triple of the form $`(|\mathrm{\Psi }_{AB},M_A,M_B)`$, where $`|\mathrm{\Psi }_{AB}`$ is a bipartite quantum state, $`M_A`$ is a set of measurements on the first component, and $`M_B`$ is a set of measurements on the second component.
It is convenient to parametrize the simplest von Neumann measurements on individual qubits by points on the unit circle (more general von Neumann measurements, which involve complex numbers, are considered later in this paper). Let the parameter $`x[0,2\pi )`$ denote a measurement with respect to the operator
$$R(x)=\left(\begin{array}{cc}\mathrm{cos}x& \mathrm{sin}x\\ \mathrm{sin}x& -\mathrm{cos}x\end{array}\right)$$
(1)
(whose eigenvectors are $`\mathrm{cos}(\frac{x}{2})|0+\mathrm{sin}(\frac{x}{2})|1`$ and $`\mathrm{sin}(\frac{x}{2})|0-\mathrm{cos}(\frac{x}{2})|1`$).
Consider the case of a pair of qubits in the Bell state $`|\mathrm{\Phi }^+_{AB}=\frac{1}{\sqrt{2}}|0|0+\frac{1}{\sqrt{2}}|1|1`$. \[Our results are written for such states, but can be modified to apply to any of the other Bell states, including the Einstein-Podolsky-Rosen singlet state $`|\mathrm{\Psi }^{}_{AB}=\frac{1}{\sqrt{2}}|0|1\frac{1}{\sqrt{2}}|1|0`$.\] Let $`x,y[0,2\pi )`$ be the respective measurement parameters of the two components and let $`a,b\{0,1\}`$ be the respective outcomes. Then the joint probability distribution of these outcomes is given as:
| | $`\mathrm{Pr}[b=0]`$ | $`\mathrm{Pr}[b=1]`$ |
| --- | --- | --- |
| $`\mathrm{Pr}[a=0]`$ | $`\frac{1}{2}\mathrm{cos}^2(\frac{x-y}{2})`$ | $`\frac{1}{2}\mathrm{sin}^2(\frac{x-y}{2})`$ |
| $`\mathrm{Pr}[a=1]`$ | $`\frac{1}{2}\mathrm{sin}^2(\frac{x-y}{2})`$ | $`\frac{1}{2}\mathrm{cos}^2(\frac{x-y}{2})`$ |
Two simple but noteworthy examples of bipartite quantum measurement scenarios with the Bell state $`|\mathrm{\Phi }^+_{AB}`$ are:
$`(|\mathrm{\Phi }^+_{AB},M_A,M_B)`$, where $`M_A=M_B=\{0,\frac{\pi }{2}\}`$.
$`(|\mathrm{\Phi }^+_{AB},M_A,M_B)`$, where $`M_A=\{-\frac{\pi }{8},\frac{3\pi }{8}\}`$ and $`M_B=-M_A=\{\frac{\pi }{8},-\frac{3\pi }{8}\}`$.
In both examples, each individual outcome is a uniformly distributed bit regardless of the measurements. In Example 1, if the two measurements are the same then the outcomes are completely correlated; whereas, if the two measurements are different, the outcomes are completely independent. In Example 2, the two outcomes are equal with probability $`\mathrm{sin}^2(\frac{\pi }{8})`$ if $`x=-y=+\frac{3\pi }{8}`$; and with probability $`\mathrm{cos}^2(\frac{\pi }{8})`$ otherwise. These examples are interesting in the context of local hidden variable schemes, which are defined next.
Intuitively, we are interested in classical devices that simulate bipartite quantum measurement scenarios to varying degrees, and such devices are naturally explained as local hidden variable schemes. To define a local hidden variable scheme, it is convenient to view it as a two-party procedure whose execution occurs in two stages: a preparation stage and a measurement stage. For ease of reference, call the two parties Alice and Bob. During the preparation stage, local hidden variables $`u`$ for Alice and $`v`$ for Bob are determined by a classical random process. During this stage, arbitrary communication can occur between the two parties, so $`u`$ and $`v`$ may be arbitrarily correlated. During the measurement stage, measurements $`x`$ and $`y`$ are given to Alice and Bob (respectively), who produce outcomes $`a=A(x,u)`$ and $`b=B(y,v)`$ (respectively). During this stage, no communication is permitted between the parties, which is reflected by the fact that the value of $`A(x,u)`$ is independent of the value of $`y`$ (and vice versa).
A local hidden variable scheme simulates a measurement scenario $`(|\mathrm{\Psi }_{AB},M_A,M_B)`$ if, for any $`xM_A`$ and $`yM_B`$, the outputs produced by Alice and Bob, (namely, $`a`$ and $`b`$ respectively), have exactly the same bivariate distribution as the outcomes of the quantum measurement scenario as dictated by the laws of quantum physics.
The measurement scenario in Example 1 is easily simulatable by the following local hidden variable scheme. Let $`u`$ and $`v`$ each consist of a copy of the same uniformly distributed two-bit string. Then let Alice and Bob each output the first bit of this string if their measurement is 0 and the second bit if their measurement is $`\frac{\pi }{2}`$. On the other hand, for the measurement scenario of Example 2, it turns out that there does not exist a local hidden variable scheme that simulates it .
Now, we consider a more powerful classical instrument for simulating measurement scenarios. Define a local hidden variable scheme augmented by $`k`$ bits of communication, as follows. Informally, it is a local hidden variable scheme, except that the prohibition of communication between the parties during the measurement stage is relaxed to a condition that allows up to $`k`$ bits of communication (but no more). More formally, a local hidden variable scheme augmented by $`k`$ bits of communication, has a preparation stage where random variables $`u`$ and $`v`$ for Alice and Bob are determined and during which arbitrary communication is permitted between the two parties. Then there is a measurement stage which begins by measurements $`x`$ and $`y`$ being given to Alice and Bob (respectively). Then one party computes a bit (as a function of his/her measurement and local hidden variables) which is sent to the other party. This constitutes one round of communication. Then again one party (the same one or a different one) computes a bit (as a function of his/her measurement, local hidden variables, and any data communicated from the other party at previous rounds) and sends it to the other party. And this continues for $`k`$ rounds, after which Alice and Bob output bits $`a`$ and $`b`$ (respectively).
For example, for the measurement scenario of Example 2, a local hidden variable scheme augmented with one single bit of communication can simulate it. This is a consequence of the following more general result, whose easy proof we include for completeness.
Theorem 1. For any quantum measurement scenario $`(|\mathrm{\Psi }_{AB},M_A,M_B)`$, there exists a local hidden variable scheme augmented with $`\mathrm{log}_2(|M_A|)`$ bits of communication (from Alice to Bob) that exactly simulates it.
Proof. First note that, if we allow $`\mathrm{log}_2(|M_A|)`$ bits of communication from Alice to Bob and $`\mathrm{log}_2(|M_B|)`$ bits of communication from Bob to Alice then it is trivial to simulate the quantum measurement scenario. With this much communication, Alice can obtain $`y`$ and Bob can obtain $`x`$, which effectively defeats any “nonlocality” in the scenario. More precisely, during the preparation stage, Alice and Bob can construct $`|M_A||M_B|`$ random variable pairs, $`(a^{(x,y)},b^{(x,y)})`$, one for each value of $`xM_A`$ and $`yM_B`$. Each such random variable pair would specify the values of the outcomes of Alice and Bob for the given values of $`x`$ and $`y`$, with the appropriate correlation. During the measurement stage, after the communication of $`x`$ and $`y`$ between them, Alice and Bob can simply output $`a^{(x,y)}`$ and $`b^{(x,y)}`$ (respectively).
To obtain a protocol in which only $`\mathrm{log}_2(|M_A|)`$ bits of communication from Alice to Bob occurs, note that the unconditional probability distribution of $`a^{(x,y)}`$ (the output of Alice when the measurements are $`x`$ and $`y`$) is independent of the value of $`y`$. This is because the distribution of $`a^{(x,y)}`$ is completely determined by $`x`$ and the reduced density matrix of $`|\mathrm{\Psi }_{AB}`$ with the second component traced out ($`\text{Tr}_B(|\mathrm{\Psi }_{AB})`$), and this quantity is independent of $`y`$. Therefore, the local hidden variables can be set up as follows. For each $`xM_A`$, $`a^{(x)}`$ is sampled according to the appropriate probability distribution, and then, for each $`xM_A`$ and $`yM_B`$, $`b^{(x,y)}`$ is sampled according the appropriate conditional probability distribution (conditioned on the value of $`a^{(x)}`$) in order to produce the correct bivariate distribution for $`(a^{(x)},b^{(x,y)})`$. Then, during the measurement stage it suffices for Alice to send $`x`$ to Bob, and for Alice and Bob to output $`a^{(x)}`$ and $`b^{(x,y)}`$ (respectively).
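The construction in this proof is easy to phrase operationally; the following Python sketch shows the two stages, with the quantum marginal and conditional distributions supplied as black-box samplers (their implementation depends on the state and the measurements and is not specified here).

```python
def prepare_shared_tables(M_A, M_B, marginal_A, conditional_B):
    """Preparation stage of the Theorem 1 protocol.

    marginal_A(x)          -> sample of Alice's outcome a for measurement x
    conditional_B(x, y, a) -> sample of Bob's outcome b given x, y and a
    Both samplers are assumed to reproduce the quantum statistics.
    """
    a_table = {x: marginal_A(x) for x in M_A}
    b_table = {(x, y): conditional_B(x, y, a_table[x]) for x in M_A for y in M_B}
    return a_table, b_table

def measurement_stage(x, y, a_table, b_table):
    """Alice sends x (log2|M_A| bits); both parties then read off their outputs."""
    return a_table[x], b_table[(x, y)]
```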
We shall see that in some cases the upper bound of Theorem 1 is asymptotically tight while in other cases it is not. In the sections that follow, we focus on the case of a single Bell state and the case of $`n`$ Bell states, and provide a new upper or lower bound in each case.
## 3 The case of a single Bell state
Consider the case of a single Bell state $`|\mathrm{\Phi }^+_{AB}=\frac{1}{\sqrt{2}}|0|0+\frac{1}{\sqrt{2}}|1|1`$, but where the sizes of $`M_A`$ and $`M_B`$ may be arbitrarily large. By Theorem 1, we only obtain an upper bound of $`\mathrm{log}_2(|M_A|)`$ bits for the amount of communication necessary for an augmented local hidden variable scheme to simulate it. In the case where $`M_A`$ and $`M_B`$ are each the entire interval $`[0,2\pi )`$, this communication upper bound would be infinite. If only a finite number, $`k`$, of bits of communication are permitted then one alternative that might seem reasonable is for Alice to send $`x^{}`$, a $`k`$-bit approximation of $`x`$, to Bob. The protocol for Alice and Bob would be along the lines of the one in Theorem 1, but using $`x^{}`$ in place of $`x`$. This would clearly not produce an exact simulation for a general $`x\in [0,2\pi )`$, but it would produce an approximation that improves as $`k`$ increases. Is this the best that can be done with $`k`$ bits of communication? The next theorem demonstrates that it is possible to obtain an exact simulation for any $`x,y\in [0,2\pi )`$ with only a constant number of bits of communication.
Theorem 2. For the quantum measurement scenario $`(|\mathrm{\Phi }^+_{AB},M_A,M_B)`$ with $`|\mathrm{\Phi }^+_{AB}=\frac{1}{\sqrt{2}}|0|0+\frac{1}{\sqrt{2}}|1|1`$ and $`M_A=M_B=[0,2\pi )`$, there exists a local hidden variable scheme augmented with four bits of communication (from Alice to Bob) that exactly simulates it.
Proof. The local hidden variables are $`c\in \{0,1\}`$ and $`\theta \in [0,\frac{3\pi }{5})`$, and both are uniformly distributed.
For $`j\{0,1,\mathrm{},9\}`$, define $`\alpha _j=\frac{j\pi }{5}`$. It is useful to view $`\alpha _0,\alpha _1,\mathrm{},\alpha _9`$ as ten equally-spaced points on the unit circle. Define the $`j^{\text{th}}`$ $`\alpha `$-slot as the interval $`[\alpha _j,\alpha _{(j+1)mod10})`$. Also, define $`\beta _0=\alpha _0+\theta `$, $`\beta _1=\alpha _3+\theta `$, and $`\beta _2=\alpha _6+\theta `$ and $`\gamma _0=\alpha _5+\theta `$, $`\gamma _1=\alpha _8+\theta `$, and $`\gamma _2=\alpha _1+\theta `$ (where the addition is understood to be modulo $`2\pi `$). Define the $`j^{\text{th}}`$ $`\beta `$-slot as the interval $`[\beta _j,\beta _{(j+1)mod3})`$, and the $`j^{\text{th}}`$ $`\gamma `$-slot as the interval $`[\gamma _j,\gamma _{(j+1)mod3})`$.
The protocol starts by Alice sending Bob information specifying the $`\alpha `$-slot, $`\beta `$-slot, and $`\gamma `$-slot in which $`x`$ is located. Note that these slots partition the unit circle into sixteen intervals, so Alice can convey this information by sending four bits to Bob. Then Alice outputs the bit $`c`$.
The full procedure for Bob is summarized below, but, in order to explain the idea behind it, it is helpful to first consider the special case where $`y`$ is in the 2$`^{\text{nd}}`$ $`\alpha `$-slot and the $`\alpha `$-slot number of $`x`$ is within two of that of $`y`$ (in other words, the $`\alpha `$-slot number of $`x`$ is in $`\{0,1,2,3,4\}`$). Note that these conditions depend on the values of $`x`$ and $`y`$ only (and not on the values of the local hidden variables). Also, these conditions imply that $`|x-y|\le \frac{3\pi }{5}`$. In this case, Bob does the following. If the $`\beta `$-slots of $`x`$ and $`y`$ are the same then Bob outputs $`c`$. If the $`\beta `$-slots of $`x`$ and $`y`$ are different then exactly one $`\beta _k`$ is between $`x`$ and $`y`$. Let $`u=|y-\beta _k|`$. Then Bob’s procedure is to output $`c`$ with probability $`1-\frac{3\pi }{10}\mathrm{sin}(u)`$.
To analyse the stochastic behaviour of this procedure (still in the special case), let $`r=|x-y|`$ and note that the probability of $`x`$ and $`y`$ being in different $`\beta `$-slots is $`\frac{5r}{3\pi }`$. Also, conditional on $`x`$ and $`y`$ being in different $`\beta `$-slots, the probability distribution of the position of the $`\beta _k`$ between $`x`$ and $`y`$ is uniform. Therefore,
$`\mathrm{Pr}[a=b]`$ $`=`$ $`\left(1-\frac{5r}{3\pi }\right)+\left(\frac{5r}{3\pi }\right)\left(\frac{1}{r}\right){\displaystyle \int _0^r}\left(1-\frac{3\pi }{10}\mathrm{sin}(u)\right)𝑑u`$ (2)
$`=`$ $`\frac{1}{2}(1+\mathrm{cos}(r))`$
$`=`$ $`\mathrm{cos}^2(\frac{r}{2}),`$
which is exactly what is required.
The procedure for Bob in the above special case can be generalized to apply to the other possible cases by considering various similarities and symmetries among the cases. First note that the above procedure actually works in all cases where the $`\alpha `$-slot number of $`y`$ is in $`\{2,3,4,5,6\}`$ and the $`\alpha `$-slot number of $`x`$ is within two of that of $`y`$. This is because, in these cases, the interval between $`x`$ and $`y`$ (of length $`\le \frac{3\pi }{5}`$) lies entirely within the interval $`[0,\frac{9\pi }{5})`$ and $`\beta _0,\beta _1,\beta _2`$ are uniformly distributed points spaced $`\frac{3\pi }{5}`$ apart in this interval.
Now, consider the cases where the $`\alpha `$-slot number of $`y`$ is in $`\{7,8,9,0,1\}`$ and the $`\alpha `$-slot number of $`x`$ is still within two of that of $`y`$. In these cases, the interval containing $`x`$ and $`y`$ may not lie entirely within $`[0,\frac{9\pi }{5})`$, and so the distribution of $`\beta _0,\beta _1,\beta _2`$ may no longer satisfy the relevant properties. To avoid this problem, Bob applies the above procedure with $`\gamma _0,\gamma _1,\gamma _2`$ substituted in place of $`\beta _0,\beta _1,\beta _2`$. This works because $`\gamma _0,\gamma _1,\gamma _2`$ are uniformly distributed points spaced $`\frac{3\pi }{5}`$ apart in the interval $`[\pi ,\frac{4}{5}\pi )`$ (taken clockwise) and the interval containing $`x`$ and $`y`$ is within this interval.
The above covers all cases where the $`\alpha `$-slot number of $`x`$ is within two of that of $`y`$. To handle the remaining cases, Bob works with $`y^{}=y+\pi `$ (whose $`\alpha `$-slot number will then be within two of that of $`x`$) instead of $`y`$. Let $`r^{}=|x-y^{}|`$. Then, since $`\mathrm{cos}^2(\frac{r^{}}{2})=\mathrm{sin}^2(\frac{r}{2})`$, Bob will obtain the required distribution if he applies the above procedure but negates his output bit.
In summary, Bob’s procedure after obtaining information specifying the $`\alpha `$-slot, $`\beta `$-slot, and $`\gamma `$-slot of $`x`$ from Alice is:
if the difference between the $`\alpha `$-slot numbers of $`x`$ and $`y`$ is more than two then
set $`y`$ to $`y+\pi `$
set $`c`$ to $`\neg c`$
if the $`\alpha `$-slot number of $`y`$ is in $`\{7,8,9,0,1\}`$ then
set $`\beta _0,\beta _1,\beta _2`$ to $`\gamma _0,\gamma _1,\gamma _2`$
if $`x`$ and $`y`$ are in the same $`\beta `$-slot then
output $`c`$
else there exists a $`\beta _k`$ between $`x`$ and $`y`$
set $`u`$ to $`|y-\beta _k|`$
output $`c`$ with probability $`1-\frac{3\pi }{10}\mathrm{sin}(u)`$
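The protocol is simple enough to check numerically. The following Python sketch implements Bob's procedure and estimates $`\mathrm{Pr}[a=b]`$ by Monte Carlo; for testing convenience Bob's function is given $`x`$ itself rather than the four transmitted bits, which leads to exactly the same decisions, since within the protocol they depend on $`x`$ only through its slots.

```python
import numpy as np

PI = np.pi

def alpha_slot(t):
    """alpha-slot number of an angle t in [0, 2*pi)."""
    return int(t // (PI / 5)) % 10

def slot_gap(i, j):
    """Circular distance between two alpha-slot numbers."""
    d = abs(i - j) % 10
    return min(d, 10 - d)

def circ_dist(a, b):
    """Length of the short arc between two angles."""
    d = abs(a - b) % (2 * PI)
    return min(d, 2 * PI - d)

def boundary_between(x, y, bounds):
    """Return the boundary point lying on the short arc between x and y, if any."""
    r = circ_dist(x, y)
    for b in bounds:
        if abs(circ_dist(x, b) + circ_dist(b, y) - r) < 1e-12:
            return b
    return None

def bob_output(x, y, c, theta):
    """Bob's procedure from the proof of Theorem 2."""
    if slot_gap(alpha_slot(x), alpha_slot(y)) > 2:
        y = (y + PI) % (2 * PI)
        c = 1 - c
    if alpha_slot(y) in (7, 8, 9, 0, 1):
        bounds = [(theta + PI) % (2 * PI), (theta + 8 * PI / 5) % (2 * PI),
                  (theta + PI / 5) % (2 * PI)]                 # gamma_0, gamma_1, gamma_2
    else:
        bounds = [theta, theta + 3 * PI / 5, theta + 6 * PI / 5]  # beta_0, beta_1, beta_2
    b = boundary_between(x, y, bounds)
    if b is None:
        return c
    u = circ_dist(y, b)
    return c if np.random.random() < 1.0 - (3 * PI / 10) * np.sin(u) else 1 - c

def estimate_pr_equal(x, y, n=200000):
    """Monte Carlo estimate of Pr[a=b]; should approach cos^2((x-y)/2)."""
    hits = 0
    for _ in range(n):
        c = np.random.randint(2)                 # shared random bit
        theta = np.random.random() * 3 * PI / 5  # shared random offset
        hits += (c == bob_output(x, y, c, theta))
    return hits / n
```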
Theorem 2 applies to all measurements with respect to operators of the form given in Eq. (1). The most general possible von Neumann measurement on an individual qubit can be parametrized by $`(x,x^{})[0,2\pi )\times [0,2\pi )`$ and taken with respect to the operator
$$S(x,x^{})=\left(\begin{array}{cc}\mathrm{cos}x& e^{-ix^{}}\mathrm{sin}x\\ e^{ix^{}}\mathrm{sin}x& -\mathrm{cos}x\end{array}\right)$$
(3)
(whose eigenvectors are $`\mathrm{cos}(\frac{x}{2})|0+e^{ix^{}}\mathrm{sin}(\frac{x}{2})|1`$ and $`\mathrm{sin}(\frac{x}{2})|0-e^{ix^{}}\mathrm{cos}(\frac{x}{2})|1`$). If Alice and Bob make such measurements with respective parameters $`(x,x^{})`$ and $`(y,y^{})`$ and $`a`$ and $`b`$ are the respective outcomes then $`\mathrm{Pr}[a=0]=\mathrm{Pr}[b=0]=\frac{1}{2}`$ and
$`\mathrm{Pr}[a=b]`$ $`=`$ $`\mathrm{cos}^2(\frac{x^{}+y^{}}{2})\mathrm{cos}^2(\frac{x-y}{2})+\mathrm{sin}^2(\frac{x^{}+y^{}}{2})\mathrm{cos}^2(\frac{x+y}{2}).`$ (4)
Theorem 3. For the quantum measurement scenario $`(|\mathrm{\Phi }^+_{AB},M_A,M_B)`$ with $`|\mathrm{\Phi }^+_{AB}=\frac{1}{\sqrt{2}}|0|0+\frac{1}{\sqrt{2}}|1|1`$ and $`M_A=M_B=[0,2\pi )\times [0,2\pi )`$, there exists a local hidden variable scheme augmented with eight bits of communication (from Alice to Bob) that exactly simulates it.
Proof. The local hidden variable scheme consists of two executions of the four-bit protocol of Theorem 2. In the first execution, Alice and Bob use measurement parameters $`x^{}`$ and $`y^{}`$ to obtain output bits $`a^{}`$ and $`b^{}`$ (respectively) such that
$$\mathrm{Pr}[a^{}=b^{}]=\mathrm{cos}^2(\frac{x^{}+y^{}}{2}).$$
(5)
In the second execution, Alice and Bob use measurement parameters $`(-1)^{a^{}}x`$ and $`(-1)^{b^{}}y`$ to obtain their final output bits $`a`$ and $`b`$ (respectively). Note that
$$\mathrm{Pr}[a=b]=\{\begin{array}{cc}\mathrm{cos}^2(\frac{x-y}{2})\hfill & \text{if }a^{}=b^{}\hfill \\ \mathrm{cos}^2(\frac{x+y}{2})\hfill & \text{if }a^{}\ne b^{}\text{,}\hfill \end{array}$$
(6)
which, combined with Eq. (5), implies Eq. (4) as required.
We do not know whether a similar result holds in the case of quantum measurements that are more general than von Neumann measurements (e.g. positive operator valued measures).
## 4 The case of $`𝒏`$ Bell states
Consider the case of $`n`$ Bell states, i.e. the tensor product of $`|\mathrm{\Phi }^+_{AB}`$ with itself $`n`$ times. This state can be written as $`|\mathrm{\Phi }^+_{AB}^n=\frac{1}{\sqrt{2^n}}\sum _{i\in \{0,1\}^n}|i|i`$. Theorem 3 implies that any $`n`$ independent von Neumann measurements performed on the $`n`$ Bell states can be simulated by a local hidden variable scheme augmented with $`8n`$ bits of communication. In the case of coherent measurements on such a state, the exact simulation cost can be much larger, as shown by the following theorem.
Theorem 4. There exists a pair of sets of measurements, $`M_A`$ and $`M_B`$ (each of size $`2^{2^n}`$) on $`n`$ qubits, such that, for the quantum measurement scenario $`(|\mathrm{\Phi }^+_{AB}^n,M_A,M_B)`$ with $`|\mathrm{\Phi }^+_{AB}^n=\frac{1}{\sqrt{2^n}}\sum _{i\in \{0,1\}^n}|i|i`$, any local hidden variable scheme must be augmented with a constant times $`2^n`$ bits of communication in order to exactly simulate it.
Proof. The proof is based on connections between a measurement scenario and a communication complexity problem examined in . We begin by defining a set of $`2^{2^n}`$ measurements, which we call Deutsch-Jozsa measurements, due to their connection with the algorithm in . The measurements are parametrized by the set $`\{0,1\}^{2^n}`$. For a parameter value $`z\in \{0,1\}^{2^n}`$, we index the bits of $`z`$ by the set $`\{0,1\}^n`$. That is, for $`i\in \{0,1\}^n`$, $`z_i`$ denotes the “$`i^{\text{th}}`$” bit of $`z`$. The measurement on $`n`$ qubits corresponding to $`z\in \{0,1\}^{2^n}`$ is easily described as two unitary transformations followed by a measurement in the computational basis. The first unitary transformation is a phase shift that maps $`|i`$ to $`(-1)^{z_i}|i`$ for each $`i\in \{0,1\}^n`$. The second unitary transformation is the $`n`$-qubit Hadamard transformation, which maps $`|i`$ to
$$\frac{1}{\sqrt{2^n}}\sum _{j\in \{0,1\}^n}(-1)^{i\cdot j}|j,$$
(7)
where $`i\cdot j`$ is the inner product of the two $`n`$-bit strings $`i`$ and $`j`$ (that is, $`i\cdot j=i_0j_0+i_1j_1+\cdots +i_{n-1}j_{n-1}`$). These two unitary transformations are followed by a measurement in the computational basis $`\{|i:i\in \{0,1\}^n\}`$, yielding an outcome in $`\{0,1\}^n`$.
Set $`M_A=M_B=\{0,1\}^{2^n}`$, the set of Deutsch-Jozsa measurements. We will now show that, for $`xM_A`$ and $`yM_B`$, the joint probability distribution of the outcomes $`a`$ and $`b`$ satisfies the following properties:
1. If $`x=y`$ then $`\mathrm{Pr}[a=b]=1`$.
2. If the Hamming distance between $`x`$ and $`y`$ is $`2^{n-1}`$ then $`\mathrm{Pr}[a=b]=0`$.
To show this, consider the quantum state after the phase flips and Hadamard transformations have been performed, but before the measurement. First, applying the phase flips to $`|\mathrm{\Phi }^+_{AB}^n`$ yields the state
$$\frac{1}{\sqrt{2^n}}\sum _{i\in \{0,1\}^n}(-1)^{x_i+y_i}|i|i.$$
(8)
Next, after applying the Hadamard transformations, the state becomes
$$\frac{1}{\sqrt{2^{3n}}}\sum _{j,k,i\in \{0,1\}^n}(-1)^{x_i+y_i+i\cdot (j\oplus k)}|j|k$$
(9)
(where $`j\oplus k`$ is the bit-wise exclusive-or of $`j`$ and $`k`$). To prove property 1, note that if $`x=y`$ then state (9) becomes
$$\frac{1}{\sqrt{2^{3n}}}\sum _{j,k,i\in \{0,1\}^n}(-1)^{i\cdot (j\oplus k)}|j|k=\frac{1}{\sqrt{2^n}}\sum _{i\in \{0,1\}^n}|i|i,$$
so $`\mathrm{Pr}[a=b]=1`$ when the measurement is performed. To prove property 2, note that if the Hamming distance between $`x`$ and $`y`$ is $`2^{n-1}`$ then $`x_i+y_i`$ is even for $`2^{n-1}`$ values of $`i`$ and odd for $`2^{n-1}`$ values of $`i`$. Therefore, the amplitude of any ket of the form $`|j`$$`|j`$ in state (9) is
$$\frac{1}{\sqrt{2^{3n}}}\sum _{i\in \{0,1\}^n}(-1)^{x_i+y_i}=0,$$
(10)
so $`\mathrm{Pr}[a=b]=0`$.
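Properties 1 and 2 are easy to verify numerically for small $`n`$; the following NumPy sketch builds the joint outcome distribution directly from the state. The example strings $`z`$ and $`w`$ (at Hamming distance $`2^{n-1}`$) are our own choice for illustration.

```python
import numpy as np

def dj_outcome_probabilities(z_a, z_b, n):
    """Joint outcome distribution when Alice and Bob apply the Deutsch-Jozsa
    measurements labelled by bit strings z_a, z_b (each of length 2^n)
    to their halves of n shared Bell states."""
    N = 2 ** n
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    # state (1/sqrt(N)) sum_i (-1)^(z_a[i]+z_b[i]) |i>|i>, then H^n on each side
    amp = np.zeros((N, N))
    for i in range(N):
        amp += ((-1) ** (z_a[i] + z_b[i])) * np.outer(Hn[:, i], Hn[:, i]) / np.sqrt(N)
    return amp ** 2            # probabilities Pr[a=j, b=k]

n = 2
z = [0, 1, 1, 0]
w = [1, 1, 0, 0]               # Hamming distance 2 = 2^(n-1) from z
print(np.trace(dj_outcome_probabilities(z, z, n)))  # Pr[a=b] -> 1.0 (property 1)
print(np.trace(dj_outcome_probabilities(z, w, n)))  # Pr[a=b] -> 0.0 (property 2)
```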
Now we reduce a communication complexity problem to the problem of designing an augmented local hidden variable scheme that satisfies properties 1 and 2. The communication complexity problem (called $`\text{EQ}^{}`$ in ) is a restricted version of the “equality” problem, and is defined as follows. Alice and Bob get inputs $`x,y\in \{0,1\}^{2^n}`$ (respectively), and one of them (say, Bob) must output 1 if $`x=y`$ and 0 if the Hamming distance between $`x`$ and $`y`$ is $`2^{n-1}`$ (the output of Bob can be arbitrary in all other cases). In , it is proven that any classical protocol that exactly solves this restricted equality problem requires $`c2^n`$ bits of communication for some constant $`c>0`$ (the proof is based on a combinatorial result in ). Suppose that there exists a local hidden variable scheme augmented with $`f(n)`$ bits of communication that simulates the measurement scenario $`(|\mathrm{\Phi }^+_{AB}^n,M_A,M_B)`$. Then one can use this to construct a protocol for restricted equality with $`f(n)+n`$ bits of communication as follows. Alice and Bob first execute the protocol for $`(|\mathrm{\Phi }^+_{AB}^n,M_A,M_B)`$ and then Alice sends her output $`a`$ to Bob, who outputs 1 if $`a=b`$ and 0 if $`a\ne b`$. It follows that $`f(n)+n\ge c2^n`$, so $`f(n)\ge c2^n-n\ge c^{}2^n`$, for some $`c^{}>0`$ and sufficiently large $`n`$. The theorem extends to all $`n\ge 1`$, possibly using a smaller constant $`c^{\prime \prime }`$, because Example 2 cannot be simulated without communication.
# A low-temperature dynamic mode scanning force microscope operating in high magnetic fields
## I Introduction
Low temperature scanning force microscopes (SFMs) are desirable for many applications in physical research. To our knowledge, the first prototypes have become commercially available only recently, reaching base temperatures around 5K under UHV-conditions. Lower temperatures can be reached with only a few home-built microscopes in non-UHV setups.
Especially in cases where the sample itself is sensitive to light, a non-optical detection method is needed in order to keep the probe-sample distance fixed. Such a method has the additional advantage that it leads to less involved setups. Piezoresistive cantilevers have been proposed and applied for low temperature scanning force microscopy. However, these cantilevers are not easily available. Recently, piezoelectric quartz tuning forks have been employed in a scanning near-field optical microscope designed for operation at low temperatures. In this setup, an optical fibre is glued along one prong of the tuning fork, which is mechanically excited to oscillate. The tuning fork is used as a friction-force sensor. The advantages of these piezoelectric sensors are their availability, low cost and high quality factors. These tuning forks have been successfully employed for atomic force microscopy, scanning near-field optical microscopy, magnetic force microscopy and acoustic near-field microscopy at room temperature. In Refs. the SFM operation was used only to keep the tip-sample distance fixed while an additional nanosensor measures the physical quantity of interest.
In this paper we describe the implementation of a tuning fork sensor suitable for SFM-imaging at temperatures below 4.2K and in high magnetic fields.
## II Microscope Overview
The microscope is based on a commercial cryo-SXM made by Oxford Instruments, significantly modified for operation as an SFM for probing semiconductor devices. Fig. 1 shows the schematic setup. The microscope head, which is made of nonmagnetic materials (mainly Ti and CuBe), is mounted at the end of a sample rod which can be suspended in the sample space of a 1.7K variable temperature insert (VTI). The VTI is part of a standard <sup>4</sup>He cryostat with a superconducting magnet producing magnetic fields up to 8T. The temperature can be controlled by setting the gas flow with a needle valve and controlling the heater power. The head can be operated under ambient conditions or in the cryostat in <sup>4</sup>He gas at temperatures between 300K and 2.2K and at pressures of typically a few millibars. Below 2.2K the helium becomes superfluid and the operation becomes difficult for reasons discussed below.
Effective insulation of the cryostat from building vibrations is achieved by suspending it from appropriate rubber-ropes leading to a resonance frequency of about 1Hz for the whole system. The <sup>4</sup>He pumping line and the He-recovery line are plastic pipes which are suspended together with all electrical connections to the cryostat from additional elastic ropes. Vibrations from the <sup>4</sup>He pump are effectively damped by guiding the pumping line through a box of sand. Microphony of the cryostat is reduced using a rubber mat which is tied around the cryostat body. Additional vibration isolation between cryostat and sample rod is implemented using a specially designed top flange with passive damping elements and adjusting screws to prevent mechanical contact between the microscope and the VTI tube.
The microscope head allows coarse tip-sample approach using a slip-stick drive moving the scan-piezo up or down. A puck mounted at the end of the scan-piezo can be laterally positioned in a similar fashion. In contrast to the commercial design of the head, in our system it serves as the platform where the SFM sensor is mounted. Samples are mounted in chip carriers which can be easily plugged into a 32-pin chip socket. The socket is mounted on a copper block which incorporates the sample heater and the sample thermometer. The heater allows us to evaporate the water film from the sample surface before cooldown. It further gives us the possibility to keep the sample warmer than its surroundings during the cooling process in order to avoid freezing contaminations on the sample surface. The sample mount unit can be used for standard magnetotransport measurements independent of the SFM operation. For detection of temperature gradients, additional thermometers are located at the bottom of the VTI and at the top of the microscope head. The scanning unit is a five electrode tube scanner of 50.8mm length and outer diameter of 12.7mm. With a maximum (bipolar) voltage of 230V it gives a lateral scan range of 52.2$`\mu `$m in $`x`$-$`y`$ direction and a $`z`$-range of 5$`\mu `$m at a temperature of 290K. At 4.2K the lateral range is 8.8$`\mu `$m and the $`z`$-range is 0.85$`\mu `$m. The TOPS3 control electronics by Oxford Instruments consist of a digital feedback loop which can be switched from logarithmic amplification for STM operation to linear amplification for the SFM mode. Tip approach and data acquisition are computer controlled.
## III Piezoelectric tuning forks
In order to implement cheap and easy-to-make dynamic SFM operation at cryogenic temperatures we decided to avoid optical cantilever deflection detection and use piezoelectric tuning forks instead. Initially, quartz tuning forks have been developed for the realization of very small and stable oscillators in watches. Due to their use in industry they are cheap and easily available. Most of them have a (lowest) resonance frequency $`f_0=2^{15}`$Hz and quality-factors in vacuum between $`Q=20000`$ and $`100000`$. In scanning probe microscopy they have recently found new applications as piezoelectric sensors. The high $`Q`$-values of tuning forks offer the advantage that dynamic mode operation under ambient conditions or in liquids is possible. However, low-temperature operation in the dynamic SFM-mode has not been demonstrated before. In our system the tuning fork is driven by an AC-voltage from a Yokogawa function generator FG320. This voltage is scaled down by a factor of 1000 with a voltage divider residing close to the microscope head, which leads to typical excitation amplitudes of 0.1-10mV. We directly measure the admittance of the tuning fork with a home-made current-voltage converter with a gain of 1V/$`\mu `$A connected in series to the tuning fork. The mechanical tip amplitude depends linearly on the measured current and is typically between 1-100nm. Due to the high oscillator quality the power loss of the tuning fork at resonance can be far less than $`10\text{nW}`$, which makes it ideal for the use in low cooling power cryostats. A typical resonance curve and a corresponding equivalent circuit is shown in Fig. 2. The mechanical resonator is modeled by the $`LRC`$-branch in the equivalent circuit which leads to the admittance maximum. The capacitance between the electrodes described by $`C_0`$ leads to the asymmetry of the resonance curve with a minimum above the resonance frequency.
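The equivalent circuit just described can be made quantitative with a few lines of code. The sketch below is only an illustration of the Butterworth-Van Dyke picture (the component values are invented and are not measured properties of our sensors): it evaluates the admittance of the series LRC branch in parallel with the electrode capacitance $`C_0`$ and locates the admittance maximum at the series resonance and the minimum slightly above it.

```python
import numpy as np

# Butterworth-Van Dyke model of the tuning fork: a series L-R-C branch (the
# mechanical resonance) in parallel with the electrode capacitance C0.
# The values below are purely illustrative, chosen to give f0 near 2^15 Hz
# and a high quality factor.
L, C, R, C0 = 8.1e3, 2.9e-15, 3.0e4, 1.2e-12      # H, F, Ohm, F

f = np.linspace(32.0e3, 33.5e3, 20000)            # Hz
w = 2.0 * np.pi * f
Y = 1j * w * C0 + 1.0 / (R + 1j * w * L + 1.0 / (1j * w * C))   # total admittance

f_res = f[np.argmax(np.abs(Y))]     # admittance maximum: series resonance
f_anti = f[np.argmin(np.abs(Y))]    # admittance minimum just above the resonance
Q = np.sqrt(L / C) / R              # quality factor of the mechanical branch
print(f"f_res = {f_res:.1f} Hz, f_anti = {f_anti:.1f} Hz, Q = {Q:.0f}")
```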
A tip made of W or PtIr wire with typical diameter of 20$`\mu `$m is glued to one of the tuning fork arms in parallel to the direction of the vibrational motion of the arm (see inset of Fig. 2). The tip can be contacted separately. After mounting the tuning fork with the tip on the scanning head the tip is electrochemically etched to achieve tip radii of less than 30nm. After the preparation of a tip the resonance frequency of the tuning fork is shifted by about 100Hz to lower frequencies and the $`Q`$-value is lowered but can be kept above a value of 20000 in vacuum.
The tip prepared on the tuning fork can be used as a tunnelling tip (STM-mode) or as the tip in dynamic SFM operation mode without any modifications on the scanning head. This gives simultaneous access to two complementary imaging modes at low temperatures without the need of a tip exchange. The high stiffness of the tuning fork arms (static spring constant up to 2$`\mu `$N/nm) is sufficient to guarantee stable tunnelling conditions.
## IV Phase-locked loop
The principle of dynamic mode SFM operation of the tuning fork is the same as for normal cantilevers. The elastic interaction of the tip with the sample surface will shift the resonance frequency via the presence of force gradients. Inelastic tip-sample interactions will mainly alter the $`Q`$-value of the oscillator. With our setup we can use both quantities to control the z-piezo via the feedback loop, i.e. we can either keep the resonance frequency or the dissipated power constant during a scan.
If driven at constant frequency, the high $`Q`$-values of tuning forks lead to a very slow response of the oscillation amplitude or phase to a steplike change in the tip-sample interaction, on a time scale $`Q/f_0`$. The resulting limitation in bandwidth is undesirable since it makes SFM imaging very slow. A significant increase in the bandwidth can be achieved with the use of a phase-locked loop. Figure 3 shows the operation principle. The measured current through the tuning fork is analyzed electronically and its amplitude and phase (relative to the driving voltage) are determined. Both signals are fed back into the oscillator via two separate PID-components. The phase signal is used to modulate the excitation frequency and allows locking on a fixed value of the phase. In contrast to the detection of amplitude or phase changes at fixed driving frequency, the phase-locked loop gives a much faster response on the time scale needed for the determination of the signal phase, which is typically $`3/f_0\approx 100\mu \mathrm{s}`$ independent of the $`Q`$-value. This scheme allows us to achieve a bandwidth significantly larger than that of the $`z`$-feedback loop. The latter is limited by the lowest mechanical resonance frequency of the scan piezo at around 1kHz.
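The following sketch illustrates the phase-locked detection principle with a deliberately simplified model (it is not the actual FG320/PID electronics, and the gain value is arbitrary): the steady-state phase of a driven high-Q resonator is compared with its on-resonance value and an integral controller steers the drive frequency accordingly, so that the drive tracks a slowly drifting resonance frequency on a time scale set by the loop gain rather than by $`Q/f_0`$.

```python
import numpy as np

f0_nominal, Q = 32768.0, 20000.0      # resonance frequency (Hz) and quality factor

def response_phase(f_drive, f_res):
    """Steady-state phase lag of a driven damped oscillator (pi/2 on resonance)."""
    return np.arctan2(f_res * f_drive / Q, f_res**2 - f_drive**2)

f_drive = f0_nominal
gain = 0.05                           # integral gain, Hz per radian of phase error
tracking_error = []
for step in range(4000):
    f_res = f0_nominal + 0.5 * np.sin(2 * np.pi * step / 2000)   # slowly drifting f0
    err = response_phase(f_drive, f_res) - np.pi / 2             # deviation from lock point
    f_drive -= gain * err                                        # steer drive back onto resonance
    tracking_error.append(f_drive - f_res)

# after locking, the drive frequency follows f_res to within a small fraction of a Hz
print(max(abs(e) for e in tracking_error[100:]))
```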
When we prepare for scanning, we first measure the frequency response of the tuning fork to determine the actual resonance frequency $`f_0`$. The phase-locked loop can then be set up with $`f_0`$ as the carrier frequency. The frequency of the function generator is controlled with a sensitivity set to $`100`$-$`500\mathrm{mHz}/V`$. The noise in the frequency is then of the order of $`100\mu \mathrm{Hz}/\sqrt{\mathrm{Hz}}`$ in the range from 1Hz to 1kHz corresponding to 10 mHz peak-to-peak.
As an additional refinement we can use the measured amplitude of the tuning fork oscillation to feed it back into the amplitude modulation input of the oscillator. This additional feedback keeps the amplitude of the tuning fork oscillation at a fixed value by adjusting the amplitude of the excitation voltage.
## V SFM operation
As the error signal for the feedback loop controlling the tip sample distance we use either the voltage proportional to the frequency shift (frequency control mode) or the change in driving voltage needed to keep the oscillation amplitude constant (amplitude control mode). Due to the size of our scan piezo its mechanical resonances allow a bandwidth of only 1kHz for the $`z`$-control, i.e. this is the decisive factor limiting the overall bandwidth of our system. We therefore reach typical scan speeds of up to 10$`\mu `$m/s.
Before the microscope is inserted into the cryostat, the sample is heated above 100C and the VTI is heated close to room temperature. After inserting the microscope we pump the VTI and cool down at a rate of 3K/min. The sample is kept 50K above the temperature of the gas flow in order to prevent freezing contaminations on the sample surface. For optimum operation a stable temperature gradient along the sample rod has to be maintained. The pressure, which strongly affects the resonance frequency, has to be stabilized below the vapor pressure of <sup>4</sup>He in order to prevent liquid helium to enter the VTI. Operation in normal liquids (e.g. <sup>4</sup>He or <sup>3</sup>He) is possible but the $`Q`$-value and thereby the sensitivity is reduced by a factor of four. Below 2.2K, where <sup>4</sup>He becomes superfluid, in our cryostat the resonance frequency becomes unstable, presumably due to thermodynamic instabilities. These problems, however, can be solved by operating the microscope in a vacuum beaker.
In the following we show two examples of SFM-operation at low temperatures. Fig. 4 shows the image of the surface of a 150nm gold film evaporated on a glass substrate. Small grains with a typical size of 30nm are resolved. The lateral resolution is better than 20nm, the resolution in $`z`$-direction is better than 1Å. The upper image shows the topography and the lower image is the frequency shift, i.e. the error signal for the $`z`$-feedback.
If one sets out to investigate semiconductor nanostructures, locating the structure of interest at low temperature where the scan range is only a few microns, is a significant problem. With the lateral coarse tip positioning we were able to find a 10$`\mu `$m$`\times 10\mu `$m spot on top of a GaAs/AlGaAs-heterostructure where a Hall-bar device had been fabricated by photolithography techniques. On top of a part of the Hall-bar a square lattice of oxide dots with a period of 400nm had been written by AFM-lithography. Fig. 5 shows an image of this area of the sample taken at a temperature of 2.5K.
Operation of the head in a magnetic field of 8T shifts the scanned area by about 0.5$`\mu `$m. It is therefore possible to scan at different fixed magnetic fields without losing the structure of interest. Spectroscopy at a fixed point of the surface with sweeping magnetic field is difficult. The resonance frequency of the tuning fork is not found to shift significantly when the magnetic field is altered.
The design of our cryo-SFM, especially the low power dissipation of the tuning forks, makes it an ideal system for imaging at even lower temperatures in a <sup>3</sup>He-cryostat or even a dilution refrigerator. In spite of the high stiffness of the tuning fork sensor compared to conventional SFM-cantilevers high sensitivity is achieved due to the large $`Q`$-values of the mechanical oscillator. The high stiffness allows the preparation of probes without a strong impairment of oscillator performance. It may offer the option in the future to fabricate other sensors, e.g. semiconductor chips, instead of or in addition to tunnelling tips at the end of one tuning fork arm. A straightforward continuation of the development of our system leads to the application of the scanned gate technique, Kelvin force microscopy and scanning capacitance microscopy.
## VI Conclusion
In conclusion, we have implemented a low-cost dynamic mode cryo-SFM for operation down to temperatures below 4.2K and in magnetic fields up to 8T. Due to the utilization of piezoelectric quartz tuning forks there is no need for an optical cantilever deflection detection. The unit can be operated in STM-mode or SFM-mode without tip exchange. High bandwidth is achieved with a phase-locked loop which controls the driving frequency of the tuning fork. Further development beyond the SFM-operation at low temperatures is feasible.
## ACKNOWLEDGMENTS
The authors would like to thank H. Hug and K. Karrai for valuable discussions. Financial support by the Eidgenössische Technische Hochschule is acknowledged.
# Chemical fluctuations in high-energy nuclear collisions
## Abstract
Fluctuations of the chemical composition of the hadronic system produced in nuclear collisions are discussed using the $`\mathrm{\Phi }`$-measure, which has earlier been applied to study transverse momentum fluctuations. The measure is expressed through the moments of the multiplicity distribution and then the properties of $`\mathrm{\Phi }`$ are discussed within a few models of multiparticle production. Special attention is paid to the fluctuations in the equilibrium ideal quantum gas. The system of kaons and pions, which is particularly interesting from the experimental point of view, is discussed in detail.
PACS: 25.75.+r, 24.60.Ky, 24.60.-k
Keywords: Relativistic heavy-ion collisions; Fluctuations; Thermal model
There are many sorts of hadrons produced in high energy collisions. The ratios of multiplicities of particles of given species to the total particle number characterize the chemical composition of the collision final state. The composition is expected to reflect the collision dynamics. Generation of the quark-gluon plasma in heavy-ion collisions was argued long ago to enhance the strange particle production. While the significant strangeness yield enhancement has been experimentally observed in the central nucleus-nucleus collisions at CERN SPS (see the data compilation and the recent review ), it is a matter of hot debate whether the observation can indeed be treated as a plasma signal. The strange hadron abundance is naturally described within the models assuming the plasma occurrence at the early collision stage (see e.g. ) but the models, which neglect such a possibility (see e.g. or the review ), can be also tuned to agree with the experimental data. Thus, it would be desirable to go beyond the average particle numbers and see whether the strangeness enhancement in the central heavy-ion collisions is accompanied by a qualitative change of the strangeness yield fluctuations. The equilibrium quark-gluon plasma scenario is obviously expected to lead to smaller fluctuations than the nonequilibrium cascade-like hadron models, but specific calculations are needed to quantify such a prediction. Anyhow, it seems to be really interesting to study the strangeness yield fluctuations on the event-by-event basis. However, we immediately face the difficulty of how to quantitatively measure the fluctuations in events of very different multiplicity. The problem appears to be of more general nature.
There are several interesting proposals to use fluctuations as a potential source of valuable information on the collision dynamics. If the hadronic system produced in the collision is in the thermodynamical equilibrium, the temperature and multiplicity fluctuations have been argued to determine, respectively, the heat capacity and compressibility of the hadronic matter at freeze-out. An extensive discussion of the equilibrium fluctuations can be found in . In the experimental realization of such ideas one has to disentangle however the ‘dynamical’ fluctuations of interest from the ‘trivial’ geometrical ones due to the impact parameter variation. The latter fluctuations are very sizable and dominate the fluctuations of all extensive event characteristics such as multiplicity or transverse energy. The variation of the impact parameter can also influence the fluctuations of the intensive quantities e.g. the temperature.
A specific solution to the problem was given in our paper , where we introduced the measure of fluctuations or correlations which has later been called $`\mathrm{\Phi }`$. It is constructed in such a way that $`\mathrm{\Phi }`$ is exactly the same for nucleon-nucleon (N–N) and nucleus-nucleus (A–A) collisions if the A–A collision is a simple superposition of N–N interactions. On the other hand, $`\mathrm{\Phi }`$ equals zero when the correlations are absent in the collision final state. The method proposed in has been recently applied to the NA49 experimental data. The fluctuations of transverse momentum found in the central Pb–Pb collisions at 158 GeV per nucleon have appeared to be surprisingly small . It has been also claimed that the correlations, which are of short range in the momentum space, are responsible for the nonzero positive value of $`\mathrm{\Phi }_{p_T}`$ being observed. Our calculations of $`\mathrm{\Phi }_{p_T}`$ in the equilibrium ideal gas show that $`\mathrm{\Phi }_{p_T}`$ is positive for bosons, negative for fermions and zero for classical particles. When the hadronic system at freeze-out is identified with the pion gas, the calculated $`\mathrm{\Phi }_{p_T}`$ slightly overestimates the experimental value but the inclusion of the pions which come from the resonance decays removes the discrepancy. An interesting analysis of the $`p_T`$ fluctuations within the so-called non-extensive statistics is given in .
The theoretical analysis of the result has provided a new insight into the collision dynamics. It has been argued within the UrQMD model that the secondary scatterings are responsible for the dramatic correlation loss in the central collisions of heavy-ions. This conclusion however seems to contradict the results of the analysis where the LUCIAE event generator has been used and the rescatterings are shown to reduce insignificantly the $`p_T`$correlations measured by $`\mathrm{\Phi }`$. While the effect of the secondary interactions needs to be clarified, the smallness of the fluctuations observed in the central heavy-ion collisions is a very restrictive test of the collision models. The so-called random walk model is ruled out because it gives much stronger correlations in A–A than N–N case . The same holds for the LUCIAE event generator when the jet production and/or the string clustering is taken into account even at a rather moderate rate. On the other hand, the quark-gluon string model seems to pass the test successfully .
As argued in , the measure $`\mathrm{\Phi }`$ can be also applied to study the fluctuations of chemical composition of the hadronic system produced in the nuclear collisions. The chemical fluctuations seem to be even more interesting than those of the kinematical variables such as $`p_T`$. The final state momentum distribution of hadrons characterizes the system at the moment of freeze-out, while the system chemical composition is fixed at the earlier evolution stage - the chemical freeze-out when the secondary inelastic interactions are no longer effective. The total strangeness yield presumably saturates even earlier and the subsequent interactions are mostly responsible for the strangeness redistribution among hadron species.
The NA49 Collaboration plans to study the chemical fluctuations in heavy-ion collisions at CERN SPS . Since the $`\mathrm{\Phi }`$-measure will be used in these studies, it is desirable to better understand the properties of $`\mathrm{\Phi }`$ when applied to the chemical fluctuations. This is the aim of our note. At the beginning we express $`\mathrm{\Phi }`$ through the commonly used moments of the multiplicity distribution and analyze the result within several models of the distribution. Then, we compute the $`\mathrm{\Phi }`$-measure for the case of the two-component ideal quantum gas in equilibrium. The system of kaons and pions is discussed in detail. In particular, the role of resonances is analysed.
Let us first introduce the measure $`\mathrm{\Phi }`$ which describes the correlations (or fluctuations) of a single particle variable $`x`$ such as the particle energy or transverse momentum. As observed in , $`x`$ can also characterize the particle sort. Then, $`x=1`$ if the particle is of the sort of interest, say the particle is strange, and $`x=0`$ if the particle is not of this sort, i.e. it is a non-strange particle. We define the single-particle variable $`z\stackrel{\mathrm{def}}{=}x-\overline{x}`$ with the overline denoting averaging over a single particle inclusive distribution. In the case of the chemical fluctuations, $`\overline{x}`$ is the probability (averaged over events and particles) that a produced particle is of the sort of interest, say it is strange. One easily observes that $`\overline{z}=0`$. We now introduce the event variable $`Z`$, which is a multiparticle analog of $`z`$, defined as $`Z\stackrel{\mathrm{def}}{=}\sum _{i=1}^{N}(x_i-\overline{x})`$, where the summation runs over particles from a given event. By construction $`\langle Z\rangle =0`$, where $`\langle \cdots \rangle `$ represents averaging over events. Finally, the $`\mathrm{\Phi }`$-measure is defined in the following way
$$\mathrm{\Phi }\stackrel{\mathrm{def}}{=}\sqrt{\frac{\langle Z^2\rangle }{\langle N\rangle }}-\sqrt{\overline{z^2}}.$$
(1)
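For concreteness, definition (1) can be evaluated directly from event-by-event multiplicities. The short sketch below is our own illustration (the function name and the use of Python are not part of the original text): it takes per-event numbers of particles of sort $`a`$ and sort $`b`$, identifies $`x`$ with the sort label as described above, and returns $`\mathrm{\Phi }`$.

```python
import numpy as np

def phi_measure(n_a, n_b):
    """Phi of Eq. (1) for the two-sort case: x = 1 for sort a, x = 0 for sort b."""
    n_a = np.asarray(n_a, dtype=float)
    n_b = np.asarray(n_b, dtype=float)
    n = n_a + n_b
    xbar = n_a.sum() / n.sum()        # inclusive average of x over all particles
    zbar2 = xbar - xbar**2            # inclusive average of z^2
    Z = n_a - xbar * n                # event variable Z = sum_i (x_i - xbar)
    return np.sqrt(np.mean(Z**2) / np.mean(n)) - np.sqrt(zbar2)
```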
We compute $`\mathrm{\Phi }`$ for the system of particles of two sorts, $`a`$ and $`b`$, e.g. strange and non-strange hadrons. $`x_i=1`$ when the $`i`$-th particle is of the $`a`$ type and $`x_i=0`$ otherwise. The inclusive averages of $`x`$ and $`x^2`$ read
$$\overline{x}=\underset{x=0,1}{\sum }xP_x=P_1,\qquad \overline{x^2}=\underset{x=0,1}{\sum }x^2P_x=P_1,$$
where $`P_1`$ is the probability (averaged over particles and events) that a produced particle is of the $`a`$ sort. Thus,
$$P_1=\frac{\langle N_a\rangle }{\langle N_a+N_b\rangle },$$
with $`N_a`$ and $`N_b`$ being the numbers of particles $`a`$ and $`b`$, respectively, in a single event. One immediately finds that $`\overline{z}=0`$ while
$`\overline{z^2}=P_1-P_1^2={\displaystyle \frac{\langle N_a\rangle \langle N_b\rangle }{\langle N\rangle ^2}},`$ (2)
where $`N=N_a+N_b`$ is the multiplicity of all particles $`a`$ and $`b`$ in a single event.
Since the event variable $`Z`$ equals $`N_a-\overline{x}N`$, one gets
$`\langle Z\rangle `$ $`=`$ $`\langle N_a\rangle -\overline{x}\langle N\rangle =0,`$
$`\langle Z^2\rangle `$ $`=`$ $`\langle N_a^2\rangle -2\overline{x}\langle N_aN\rangle +\overline{x}^2\langle N^2\rangle .`$
The latter equation gives
$$\langle Z^2\rangle \langle N\rangle ^2=\langle N_b\rangle ^2\langle N_a^2\rangle +\langle N_a\rangle ^2\langle N_b^2\rangle -2\langle N_a\rangle \langle N_b\rangle \langle N_aN_b\rangle ,$$
which can be rewritten as
$`{\displaystyle \frac{\langle Z^2\rangle }{\langle N\rangle }}={\displaystyle \frac{\langle N_b\rangle ^2}{\langle N\rangle ^3}}\left(\langle N_a^2\rangle -\langle N_a\rangle ^2\right)`$ $`+`$ $`{\displaystyle \frac{\langle N_a\rangle ^2}{\langle N\rangle ^3}}\left(\langle N_b^2\rangle -\langle N_b\rangle ^2\right)`$ (3)
$`-`$ $`2{\displaystyle \frac{\langle N_a\rangle \langle N_b\rangle }{\langle N\rangle ^3}}\left(\langle N_aN_b\rangle -\langle N_a\rangle \langle N_b\rangle \right).`$ (4)
The fluctuation measure $`\mathrm{\Phi }`$ is completely determined by eqs. (2, 3). So, let us consider its properties within three simple models of the multiplicity distribution.
1) The distributions of particles $`a`$ and $`b`$ are poissonian and independent from each other i.e.
$`\langle N_i^2\rangle -\langle N_i\rangle ^2`$ $`=`$ $`\langle N_i\rangle ,`$ (5)
$`\langle N_aN_b\rangle `$ $`=`$ $`\langle N_a\rangle \langle N_b\rangle ,`$ (6)
where $`i=a,b`$. One easily notices that $`\mathrm{\Phi }=0`$ in this case.
2) The particles $`a`$ and $`b`$ are assumed to be correlated in such a way that there are no chemical fluctuations in the system. The event chemical composition, which is fully characterized (for a two component system) by the ratio $`N_a/N`$, is assumed to be strictly independent of the event multiplicity. Then, $`N_a=\alpha N`$ and $`N_b=(1-\alpha )N`$ with $`\alpha `$ being a constant smaller than unity. Since $`N_a`$, $`N_b`$ and $`N`$ are integer numbers, $`\alpha `$ has to be a rational fraction. Then, we have
$$\langle N_a\rangle =\alpha \langle N\rangle ,\qquad \langle N_b\rangle =(1-\alpha )\langle N\rangle ,$$
$$\langle N_a^2\rangle -\langle N_a\rangle ^2=\alpha ^2\left(\langle N^2\rangle -\langle N\rangle ^2\right),\qquad \langle N_b^2\rangle -\langle N_b\rangle ^2=(1-\alpha )^2\left(\langle N^2\rangle -\langle N\rangle ^2\right),$$
$$\langle N_aN_b\rangle -\langle N_a\rangle \langle N_b\rangle =\alpha (1-\alpha )\left(\langle N^2\rangle -\langle N\rangle ^2\right).$$
Consequently, $`\langle Z^2\rangle =0`$ and
$$\mathrm{\Phi }=-\sqrt{\frac{\langle N_a\rangle \langle N_b\rangle }{\langle N\rangle ^2}}=-\sqrt{\alpha (1-\alpha )}.$$
(7)
One sees that the $`\mathrm{\Phi }`$-measure is negative (but larger than $`-1/2`$) when the chemical fluctuations vanish in the system.
3) The particles $`a`$ and $`b`$ are identified with the positively and, respectively, negatively charged hadrons. Then, the charge conservation leads to the strict correlation of the particle numbers:
$$N_+-N_{-}=Q,$$
where $`Q`$ is the electric charge of the system. In this case we have
$$\langle N_+\rangle =\langle N_{-}\rangle +Q,$$
$$\langle N_+^2\rangle -\langle N_+\rangle ^2=\langle N_{-}^2\rangle -\langle N_{-}\rangle ^2,$$
$$\langle N_+N_{-}\rangle -\langle N_+\rangle \langle N_{-}\rangle =\langle N_{-}^2\rangle -\langle N_{-}\rangle ^2.$$
Therefore,
$`\overline{z^2}={\displaystyle \frac{\left(\langle N_{-}\rangle +Q\right)\langle N_{-}\rangle }{\langle N\rangle ^2}},`$
$`{\displaystyle \frac{\langle Z^2\rangle }{\langle N\rangle }}={\displaystyle \frac{Q^2}{\langle N\rangle ^3}}\left(\langle N_{-}^2\rangle -\langle N_{-}\rangle ^2\right).`$
When $`Q=0`$ we reproduce the result (7) corresponding to $`\alpha =1/2`$.
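The three limiting cases above are easy to verify numerically. The sketch below is an illustrative cross-check with invented multiplicities (the mean values and the charge $`Q`$ are arbitrary choices, not numbers from the text): events are generated according to each model and $`\mathrm{\Phi }`$ is evaluated through the moment expressions (2) and (3).

```python
import numpy as np

rng = np.random.default_rng(0)

def phi_from_moments(n_a, n_b):
    """Phi evaluated from the multiplicity moments, Eqs. (2) and (3)."""
    n = n_a + n_b
    zbar2 = n_a.mean() * n_b.mean() / n.mean()**2
    cov = np.mean(n_a * n_b) - n_a.mean() * n_b.mean()
    z2_over_n = (n_b.mean()**2 * n_a.var() + n_a.mean()**2 * n_b.var()
                 - 2.0 * n_a.mean() * n_b.mean() * cov) / n.mean()**3
    return np.sqrt(z2_over_n) - np.sqrt(zbar2)

events = 500_000
# model 1: independent poissonian sorts, Phi should vanish
a1, b1 = rng.poisson(3.0, events), rng.poisson(5.0, events)
# model 2: fixed composition with alpha = 1/4, Phi should approach -sqrt(3)/4
n2 = 4 * rng.poisson(2.0, events)
a2, b2 = n2 // 4, n2 - n2 // 4
# model 3: charge conservation N+ - N- = Q with Q = 2 and poissonian N-
b3 = rng.poisson(4.0, events)
a3 = b3 + 2
for a, b in ((a1, b1), (a2, b2), (a3, b3)):
    print(round(phi_from_moments(a.astype(float), b.astype(float)), 3))
```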
After the illustrative examples let us compute $`\mathrm{\Phi }`$ for the equilibrium gas which is a mixture of the particles $`a`$ and $`b`$. Then,
$`\langle N_i\rangle `$ $`=`$ $`\lambda _i{\displaystyle \frac{\partial }{\partial \lambda _i}}\mathrm{ln}\mathrm{\Xi }(V,T,\lambda _a,\lambda _b),`$ (8)
$`\langle N_aN_b\rangle -\langle N_a\rangle \langle N_b\rangle `$ $`=`$ $`\lambda _a\lambda _b{\displaystyle \frac{\partial ^2}{\partial \lambda _b\partial \lambda _a}}\mathrm{ln}\mathrm{\Xi }(V,T,\lambda _a,\lambda _b),`$ (9)
$`\langle N_i^2\rangle -\langle N_i\rangle ^2`$ $`=`$ $`\left(\lambda _i{\displaystyle \frac{\partial }{\partial \lambda _i}}\right)^2\mathrm{ln}\mathrm{\Xi }(V,T,\lambda _a,\lambda _b),`$ (10)
where $`\mathrm{\Xi }(V,T,\lambda _a,\lambda _b)`$ is the grand canonical partition function with $`V`$, $`T`$ and $`\lambda _i`$ denoting, respectively, the system volume, temperature and the fugacity which is related to the chemical potential $`\mu _i`$ as $`\lambda _i=e^{\beta \mu _i}`$ with $`\beta \equiv T^{-1}`$.
When the gas of interest is the mixture of the two ideal quantum gases, the partition function is
$$\mathrm{ln}\mathrm{\Xi }(V,T,\lambda _a,\lambda _b)=\pm g_aV\int \frac{d^3p}{(2\pi )^3}\mathrm{ln}\left[1\pm \lambda _ae^{-\beta E_a}\right]\pm g_bV\int \frac{d^3p}{(2\pi )^3}\mathrm{ln}\left[1\pm \lambda _be^{-\beta E_b}\right],$$
(11)
where $`g_i`$ denotes the number of the particle internal degrees of freedom; $`E_i\equiv \sqrt{m_i^2+𝐩^2}`$ is the particle energy with $`m_i`$ and $`𝐩`$ being its mass and momentum; the upper sign is for fermions while the lower one for bosons.
Substituting the ideal gas partition function (11) into eqs. (8), one easily finds
$`\langle N_i\rangle `$ $`=`$ $`g_iV{\displaystyle \int \frac{d^3p}{(2\pi )^3}\frac{1}{\lambda _i^{-1}e^{\beta E_i}\pm 1}},`$ (12)
$`\langle N_aN_b\rangle `$ $`=`$ $`\langle N_a\rangle \langle N_b\rangle ,`$ (13)
$`\langle N_i^2\rangle -\langle N_i\rangle ^2`$ $`=`$ $`g_iV{\displaystyle \int \frac{d^3p}{(2\pi )^3}\frac{\lambda _i^{-1}e^{\beta E_i}}{(\lambda _i^{-1}e^{\beta E_i}\pm 1)^2}},`$ (14)
where, as previously, the index $`i`$ labels the particles of the type $`a`$ or $`b`$. It is worth noting that the system volume $`V`$ which enters eqs. (12) cancels out in the final expression of $`\mathrm{\Phi }`$. Therefore, the measure $`\mathrm{\Phi }`$ is, as expected, an intensive quantity. One observes in eqs. (12) that
$$\langle N_i^2\rangle -\langle N_i\rangle ^2<\langle N_i\rangle $$
for fermions,
$$\langle N_i^2\rangle -\langle N_i\rangle ^2>\langle N_i\rangle $$
for bosons, and
$$\langle N_i^2\rangle -\langle N_i\rangle ^2=\langle N_i\rangle $$
in the classical limit where $`\lambda _i^{-1}\gg 1`$. Therefore, one finds from eqs. (2, 3) that
$$\frac{\langle Z^2\rangle }{\langle N\rangle }<\overline{z^2}\quad \mathrm{and}\quad \mathrm{\Phi }<0$$
when the particles $`a`$ and $`b`$ are fermions,
$$\frac{\langle Z^2\rangle }{\langle N\rangle }>\overline{z^2}\quad \mathrm{and}\quad \mathrm{\Phi }>0$$
when the particles $`a`$ and $`b`$ are bosons, and
$$\frac{\langle Z^2\rangle }{\langle N\rangle }=\overline{z^2}\quad \mathrm{and}\quad \mathrm{\Phi }=0$$
when the particles of both types can be treated as classical. If the particles $`a`$ and $`b`$ are of different statistics, the sign of $`\mathrm{\Phi }`$ is determined by the sign of the expression
$`\langle N_a\rangle ^2\left(\langle N_b^2\rangle -\langle N_b\rangle ^2-\langle N_b\rangle \right)+\langle N_b\rangle ^2\left(\langle N_a^2\rangle -\langle N_a\rangle ^2-\langle N_a\rangle \right),`$
which is either positive or negative depending on the particle masses, their chemical potentials, and the numbers of the internal degrees of freedom.
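The momentum integrals in eqs. (12) are straightforward to evaluate numerically, and the volume drops out of $`\mathrm{\Phi }`$ as noted above. The sketch below is our own illustration (the masses, temperature and SciPy-based integration are example choices, not inputs taken from the text): it computes $`\langle N_i\rangle /V`$ and the variance per unit volume for an ideal quantum gas and combines two species into $`\mathrm{\Phi }`$ through eqs. (2) and (3), with the cross term dropped according to eq. (13).

```python
import numpy as np
from scipy.integrate import quad

def gas_moments(m, T, mu, g, eta):
    """<N>/V and (<N^2>-<N>^2)/V from eqs. (12); eta = +1 fermions, -1 bosons."""
    lam = np.exp(mu / T)
    def occ(p):
        x = np.exp(np.sqrt(m * m + p * p) / T) / lam
        return 1.0 / (x + eta)
    dens = quad(lambda p: g * p * p / (2.0 * np.pi**2) * occ(p), 0.0, 50.0 * T)[0]
    var = quad(lambda p: g * p * p / (2.0 * np.pi**2) * occ(p) * (1.0 - eta * occ(p)),
               0.0, 50.0 * T)[0]
    return dens, var

def phi_two_gases(gas_a, gas_b):
    (na, va), (nb, vb) = gas_a, gas_b
    n = na + nb
    zbar2 = na * nb / n**2
    z2_over_n = (nb**2 * va + na**2 * vb) / n**3   # cross term vanishes, eq. (13)
    return np.sqrt(z2_over_n) - np.sqrt(zbar2)

# example: a pion-kaon-like mixture of two Bose gases at T = 160 MeV (illustrative)
pions = gas_moments(m=140.0, T=160.0, mu=0.0, g=1, eta=-1)
kaons = gas_moments(m=494.0, T=160.0, mu=0.0, g=1, eta=-1)
print(phi_two_gases(pions, kaons))
```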
If all particles are massless and their chemical potentials vanish, the calculations can be performed analytically. In this case eqs. (12) give
$`\langle N_i\rangle `$ $`=`$ $`{\displaystyle \frac{g_i\zeta (3)}{\pi ^2}}\left({\displaystyle \genfrac{}{}{0pt}{}{3/4}{1}}\right)VT^3\simeq g_i\left({\displaystyle \genfrac{}{}{0pt}{}{0.09}{0.12}}\right)VT^3,`$
$`\langle N_i^2\rangle -\langle N_i\rangle ^2`$ $`=`$ $`{\displaystyle \frac{g_i}{6}}\left({\displaystyle \genfrac{}{}{0pt}{}{1/2}{1}}\right)VT^3\simeq g_i\left({\displaystyle \genfrac{}{}{0pt}{}{0.08}{0.17}}\right)VT^3,`$
where $`\zeta (x)`$ is the Riemann zeta function ($`\zeta (3)\approx 1.202`$); as previously the upper case is for fermions and the lower one for bosons. If the particles $`a`$ and $`b`$ are both fermions or both bosons, eqs. (2, 3) get the form
$`\overline{z^2}`$ $`=`$ $`{\displaystyle \frac{g_ag_b}{(g_a+g_b)^2}},`$ (15)
$`{\displaystyle \frac{\langle Z^2\rangle }{\langle N\rangle }}`$ $`=`$ $`{\displaystyle \frac{\pi ^2}{6\zeta (3)}}{\displaystyle \frac{g_ag_b}{(g_a+g_b)^2}}\left({\displaystyle \genfrac{}{}{0pt}{}{2/3}{1}}\right),`$ (16)
and consequently
$$\mathrm{\Phi }\simeq \left(\genfrac{}{}{0pt}{}{-0.045}{0.170}\right)\frac{\sqrt{g_ag_b}}{g_a+g_b}.$$
Let us now consider the fluctuations in the system of pions and kaons. To be specific, the particles $`a`$ are identified with $`\pi ^{-}`$ while the particles $`b`$ with $`K^+`$ or $`K^{-}`$. As we will see, the fluctuations in the $`\pi ^{-}K^+`$ system can be very different from those in the $`\pi ^{-}K^{-}`$ one. The systems $`\pi ^+K^+`$ and $`\pi ^+K^{-}`$, which are not discussed here, are analogous to, respectively, $`\pi ^{-}K^{-}`$ and $`\pi ^{-}K^+`$. At first we treat the pions and kaons as a mixture of the ideal gases of $`\pi `$ and $`K`$. Since the pions and kaons are of a given charge (plus or minus), $`g_\pi =g_K=1`$. The masses are taken as, respectively, 140 and 494 MeV. $`\mathrm{\Phi }`$ as a function of temperature has been computed numerically from the formulas (1,2,3) combined with (12). The results, which are obviously the same for the $`\pi ^{-}K^+`$ and $`\pi ^{-}K^{-}`$ systems, are shown with the dashed lines in Figs. 1-4. The calculations have been performed for several values of the chemical potentials of pions and kaons. In the case of pions, $`\mu _\pi =0`$ when the system is in the chemical equilibrium. (We obviously neglect here a tiny effect of the electric charge conservation.) The finite value of $`\mu _K`$ appears even in the equilibrium system of zero net strangeness due to the simultaneous baryon and strangeness conservation. For example, the estimated value of $`\mu _K`$ for the strange (not antistrange) mesons is 38 MeV at $`T=160`$ MeV for the equilibrium hadronic system produced in heavy-ion collisions at CERN SPS .
In Figs. 1 and 2 we observe a dramatic increase of $`\mathrm{\Phi }`$ with the temperature. $`\mathrm{\Phi }`$ also grows with $`\mu _K`$ (at $`\mu _\pi =0`$) while the dependence on $`\mu _\pi `$ changes with the temperature. Below $`T\approx 120`$ MeV $`\mathrm{\Phi }`$ is a decreasing function of $`\mu _\pi `$ but above this temperature $`\mathrm{\Phi }`$ grows with $`\mu _\pi `$ ($`\mu _K`$ is fixed and equals zero). Such a behaviour can be easily understood. $`\mathrm{\Phi }`$ can be approximated as
$$\mathrm{\Phi }\approx \frac{\sqrt{\langle N_K^2\rangle -\langle N_K\rangle ^2}-\sqrt{\langle N_K\rangle }}{\sqrt{\langle N_\pi \rangle }},$$
(17)
for $`\langle N_\pi \rangle \gg \langle N_K\rangle `$ which holds for sufficiently low temperatures. (We also assume here that $`\langle N_K^2\rangle -\langle N_K\rangle ^2`$ is not larger than $`\langle N_\pi ^2\rangle -\langle N_\pi \rangle ^2`$.) The growth of $`\mu _K`$ leads to the increase of the numerator of the expression (17) while the growth of $`\mu _\pi `$ enlarges the denominator. At higher temperatures the numbers of pions and kaons are comparable to each other and the pion dispersion $`\langle N_\pi ^2\rangle -\langle N_\pi \rangle ^2`$ provides a significant contribution to $`\mathrm{\Phi }`$. Then, $`\mathrm{\Phi }`$ grows with $`\mu _\pi `$.
It is a strong idealization to model a fireball at freeze-out as an ideal gas of pions and kaons. A substantial fraction of the final state particles come from the hadron resonances. We take them into account in the following way. Since the resonances are relatively heavy, their phase-space density is rather low. Consequently, the resonances can be treated as classical particles with the poissonian multiplicity distribution . Then, one easily shows that
$`\langle N_i\rangle `$ $`=`$ $`\langle N_i^{*}\rangle +{\displaystyle \sum _r}b_r\langle N_r\rangle ,\qquad i=\pi ,K`$ (18)
$`\langle N_i^2\rangle -\langle N_i\rangle ^2`$ $`=`$ $`\langle N_i^{*2}\rangle -\langle N_i^{*}\rangle ^2+{\displaystyle \sum _r}b_r\langle N_r\rangle ,`$ (19)
$`\langle N_\pi N_K\rangle -\langle N_\pi \rangle \langle N_K\rangle `$ $`=`$ $`\langle N_\pi ^{*}N_K^{*}\rangle -\langle N_\pi ^{*}\rangle \langle N_K^{*}\rangle +{\displaystyle \sum _r}b_r^{\prime }\langle N_r^{\prime }\rangle ,`$ (20)
where $`N_i^{*}`$ is the number of ‘direct’ pions or kaons which are described by the formulas (12); the summation runs over the resonances which decay into pions or kaons; $`N_r`$ is the number of resonances of type $`r`$ and $`b_r`$ is the branching ratio of the resonance decay into the pion or kaon channel. The resonances are assumed to decay into no more than one pion or one kaon of interest. We denote with a prime the resonances, such as $`K^{*}`$, which decay into the pion-kaon pair under study.
In the actual calculations we have taken into account the lightest resonances: $`\rho (770)`$, $`\omega (782)`$ and $`K^{*}(892)`$. Now, an important difference between the correlations in the $`\pi ^{-}K^{-}`$ and $`\pi ^{-}K^+`$ systems appears. The decays of $`K^{*0}`$ into $`K^+\pi ^{-}`$ produce the correlation in the $`\pi ^{-}K^+`$ system. An analogous correlation in the $`\pi ^{-}K^{-}`$ system is absent. $`\mathrm{\Phi }`$ as a function of temperature has again been computed numerically from the formulas (1,2,3) combined with eqs. (12) which are now supplemented with eqs. (18). The results are shown with the solid lines in Figs. 1 and 2 for the $`\pi ^{-}K^{-}`$ system and in Figs. 3 and 4 for the $`\pi ^{-}K^+`$ one. The calculations have again been performed for several values of the chemical potentials of pions and kaons. The chemical potential of $`\rho `$ and $`\omega `$ has been taken to be equal to $`\mu _\pi `$ while that of $`K^{*}`$ equals $`\mu _K`$. One sees that in the case of $`\pi ^{-}K^{-}`$ correlations the presence of resonances does not change the results qualitatively although the value of $`\mathrm{\Phi }`$ is significantly reduced. The case of $`\pi ^{-}K^+`$ is changed dramatically due to the resonances. The role of the term corresponding to $`\langle N_\pi N_K\rangle -\langle N_\pi \rangle \langle N_K\rangle `$ appears to be so important that $`\mathrm{\Phi }`$ becomes negative for sufficiently large temperatures. It is somewhat surprising that $`\mathrm{\Phi }`$ from Fig. 4 changes its sign at a temperature of about $`T=110`$ MeV which is approximately independent of $`\mu _K`$. Such a behaviour can be understood as follows. Since $`\langle N_\pi \rangle >\langle N_K\rangle `$ in the domain of the parameter values of interest, we expand the expressions (3) and (2) in powers of $`\langle N_K\rangle /\langle N_\pi \rangle `$. One observes that the first power terms of (3) and (2) cancel each other. Therefore, $`\mathrm{\Phi }=0`$ when the second power terms of (3) and (2) are equal to each other. Taking into account that the kaons are approximately classical and consequently $`\mathrm{\Phi }`$ depends on $`\mu _K`$ roughly as $`e^{\beta \mu _K}`$, one indeed finds that the position of $`\mathrm{\Phi }=0`$ is approximately independent of $`\mu _K`$.
At the end we make an effort to estimate $`\mathrm{\Phi }`$ from the existing experimental data, which appear to be rather scarce. Specifically, we consider the system of $`K_s^0`$ and negative hadrons produced in $`pp`$ interactions at the energy 205 GeV, which is close to the currently available energies of heavy-ion collisions at CERN SPS. This case is expected to be similar to the $`\pi ^{-}K^+`$ system discussed above. One finds in that:
$$\langle N_{-}\rangle =2.84,\qquad \langle N_{-}^2\rangle -\langle N_{-}\rangle ^2=3.63$$
$$\langle N_K\rangle =0.18,\qquad \langle N_{-}N_K\rangle -\langle N_{-}\rangle \langle N_K\rangle =0.078.$$
Unfortunately, there are no data on $`\langle N_K^2\rangle -\langle N_K\rangle ^2`$ which gives a dominant contribution to $`\mathrm{\Phi }`$. If the multiplicity distribution of kaons is poissonian i.e. $`\langle N_K^2\rangle -\langle N_K\rangle ^2=\langle N_K\rangle `$, we get $`\mathrm{\Phi }=-0.004`$. The dispersion of the negative hadron multiplicity distribution is known to follow the so-called Wróblewski formula . Applying the formula to the kaons we have $`\langle N_K^2\rangle -\langle N_K\rangle ^2=\left(0.58\langle N_K\rangle +0.29\right)^2=0.16`$. In this case the kaon multiplicity appears to be even narrower than the poissonian one and $`\mathrm{\Phi }=-0.02`$. The estimate of $`\mathrm{\Phi }=0.007`$ given in exceeds both of our numbers because the kaon multiplicity distribution, which is used in , is (after averaging over the negative hadron multiplicity) broader than the poissonian one. We conclude that the existing data give a rather poor information on $`\mathrm{\Phi }`$ in $`pp`$ collisions.
We summarize our study as follows. The $`\mathrm{\Phi }`$-measure seems to be a useful tool to study the chemical fluctuations in heavy-ion collisions. If the particles of different species are produced independently from each other and the multiplicity distributions are poissonian, $`\mathrm{\Phi }`$ is exactly zero. When the particles are produced in such a way that there are no chemical fluctuations (particle ratios are fixed), $`\mathrm{\Phi }`$ is negative but larger than $`-1/2`$. If the nucleus-nucleus collision is a simple superposition of N–N interactions the value of $`\mathrm{\Phi }`$ is strictly independent of the collision centrality. The same happens when the hadronic system produced in the nucleus-nucleus collisions achieves the equilibrium with the temperature and chemical potentials being independent of the impact parameter. The thermal model, which seems to be successful in describing the average multiplicities of different particle species, gives a definite prediction of $`\mathrm{\Phi }`$, which is positive for bosons and negative for fermions. The correlations in the system of pions and kaons have been considered in detail. The estimate of $`\mathrm{\Phi }`$ for the $`\pi ^{-}K^{-}`$ system is rather reliable while the prediction concerning the $`\pi ^{-}K^+`$ correlations is sensitive to the details of the model. Since the experimental value of $`\mathrm{\Phi }`$ in $`pp`$ interactions can hardly be extracted from the existing data, the fluctuation measurements of nuclear collisions should start with the nucleon-nucleon case.
I am very indebted to ECT\* in Trento where the idea to analyze the equilibrium chemical fluctuations was born. Numerous fruitful discussions with Marek Gaździcki, who initiated this study, are also gratefully acknowledged.
Figure Captions
Fig. 1. $`\mathrm{\Phi }`$-measure of the $`\pi ^{-}K^{-}`$ correlations as a function of temperature for three values of the pion chemical potential. The kaon chemical potential vanishes. The resonances are either neglected (dashed lines) or taken into account (solid lines). The uppermost dashed line on the right hand side of the figure corresponds to $`\mu _\pi =100`$ MeV, the central one to $`\mu _\pi =0`$, and the lowest line to $`\mu _\pi =-100`$ MeV. At sufficiently small temperatures the respective dashed and solid lines coincide.
Fig. 2. $`\mathrm{\Phi }`$-measure of the $`\pi ^{-}K^{-}`$ correlations as a function of temperature for three values of the kaon chemical potential. The pion chemical potential vanishes. The resonances are either neglected (dashed lines) or taken into account (solid lines). The uppermost dashed line corresponds to $`\mu _K=100`$ MeV, the central one to $`\mu _K=0`$, and the lowest line to $`\mu _K=-100`$ MeV. At sufficiently small temperatures the respective dashed and solid lines coincide.
Fig. 3. The absolute value of the $`\mathrm{\Phi }`$-measure of the $`\pi ^{-}K^+`$ correlations as a function of temperature for three values of the pion chemical potential. The kaon chemical potential vanishes. The resonances are either neglected (dashed lines) or taken into account (solid lines). The uppermost dashed line on the right hand side of the figure corresponds to $`\mu _\pi =100`$ MeV, the central one to $`\mu _\pi =0`$, and the lowest line to $`\mu _\pi =-100`$ MeV. At sufficiently small temperatures the respective dashed and solid lines coincide.
Fig. 4. The absolute value of the $`\mathrm{\Phi }`$-measure of the $`\pi ^{-}K^+`$ correlations as a function of temperature for three values of the kaon chemical potential. The pion chemical potential vanishes. The resonances are either neglected (dashed lines) or taken into account (solid lines). The uppermost dashed and solid lines correspond to $`\mu _K=100`$ MeV, the central ones to $`\mu _K=0`$, and the lowest lines to $`\mu _K=-100`$ MeV. At sufficiently small temperatures the respective dashed and solid lines coincide.
# Spiral Waves in Media with Complex Excitable Dynamics
## Abstract
The structure of spiral waves is investigated in super-excitable reaction-diffusion systems where the local dynamics exhibits multi-looped phase space trajectories. It is shown that such systems support stable spiral waves with broken rotational symmetry and complex temporal dynamics. The main structural features of such waves, synchronization defect lines, are demonstrated to be similar to those of spiral waves in systems with complex-oscillatory dynamics.
Studies of spatially-distributed active media have demonstrated the ubiquity of self-organized spatio-temporal patterns, in particular spiral waves, in various physical or biological systems such as the Belousov-Zhabotinsky (BZ) reaction , catalytic surfaces , cardiac muscle and colonies of the amoebae Dictyostelium discoideum .
Most research has been devoted to the study of simple oscillatory or excitable systems. Recently it was shown that reactive media with complex periodic and chaotic oscillations are capable of supporting spiral waves with a variety of distinctive features, absent in simple oscillatory systems . The rotational symmetry of spiral waves in period-doubled media is broken by synchronization line defects where the phase of the local oscillation changes by multiples of $`2\pi `$. It was conjectured that spiral waves with broken rotational symmetry could also be observed in super-excitable systems where the phase space trajectory after excitation follows a multi-looped path of relaxation to the stable fixed point . Broken spirals with a clearly visible synchronization defect line emanating from the spiral core were observed under special three-dimensional conditions in the BZ reactive medium . The nature of spiral waves in complex-excitable media is nevertheless largely unexplored.
In this paper we show that spiral waves with broken rotational symmetry exist in a prototypical super-excitable system and demonstrate that the topological properties of line defects, described earlier for oscillatory media, also hold for excitable systems.
We consider a spatially-distributed system whose dynamics is governed by a pair of reaction-diffusion equations of the form
$`{\displaystyle \frac{\partial u}{\partial t}}`$ $`=`$ $`{\displaystyle \frac{1}{\epsilon }}u(1-u)\left(u-{\displaystyle \frac{v+b}{a}}-f(v)\right)+D\nabla ^2u,`$ (1)
$`{\displaystyle \frac{\partial v}{\partial t}}`$ $`=`$ $`u-v,`$ (2)
where $`v`$ is a non-diffusive variable and both $`u`$ and $`v`$ are functions of time and space. This model with $`f(v)0`$ was studied in as a simplified version of the FitzHugh-Nagumo model which serves as a prototype of an excitable system described by two variables. The excitable dynamics of system (1) consists of two fast and two slow stages. If displaced from the stable fixed point $`u=0,v=0`$ to the right of the unstable branch of the nullcline $`\dot{u}=0`$, it quickly reaches upper stable branch $`u=1`$. It follows this branch until $`v`$ reaches sufficiently large values and then jumps to the lower stable branch $`u=0`$, along which it slowly relaxes to the stationary state. To add a super-excitability to (1) a modification of the unstable branch of the nullcline $`\dot{u}=0`$ was proposed in as
$$f(v)=\alpha \mathrm{exp}\left(-\frac{(v-v_0)^2}{\sigma ^2}\right).$$
(3)
The introduction of (3) changes the shape of the unstable branch of $`\dot{u}=0`$ so that for suitably chosen parameters $`\alpha ,\sigma `$ and $`v_0`$ it nearly touches the nullcline $`\dot{v}=0`$ (see Fig.1). As a result, if another excitation is applied to the system (1) before it has reached the stationary state, it may execute a second, smaller excitable loop before it finally reaches the stable state.
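The local, diffusion-free dynamics behind this mechanism can be explored with a few lines of code. The sketch below is a minimal illustration (the explicit Euler scheme, the time step and the initial kick are our own choices, not taken from the paper): it integrates eqs. (1)-(3) for a single point with the parameter values quoted below and records the phase-plane trajectory together with the modified unstable branch of the $`\dot{u}=0`$ nullcline.

```python
import numpy as np

eps, a, b = 0.005, 0.6, 0.03
alpha, sigma2, v0 = 0.15, 0.001, 0.2

def f(v):
    return alpha * np.exp(-(v - v0)**2 / sigma2)

def rhs(u, v):
    du = u * (1.0 - u) * (u - (v + b) / a - f(v)) / eps
    dv = u - v
    return du, dv

dt, n_steps = 1.0e-4, 40000
u, v = 0.2, 0.0                      # suprathreshold kick away from the rest state (0, 0)
traj = np.empty((n_steps, 2))
for k in range(n_steps):
    du, dv = rhs(u, v)
    u, v = u + dt * du, v + dt * dv
    traj[k] = u, v
    # a second, weaker kick applied somewhere in the recovery stage probes the
    # small loop; whether it is triggered depends on the kick timing and size

# modified unstable branch of the u-nullcline, u = (v + b)/a + f(v), for comparison
v_axis = np.linspace(0.0, 0.6, 200)
unstable_branch = (v_axis + b) / a + f(v_axis)
print(traj[:, 0].max(), traj[:, 1].max())   # the excursion reaches u close to 1
```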
The spatio-temporal dynamics of a one-dimensional array of such elements forced by an external pacemaker with varying period was studied in . When the period of forcing $`T_f`$ is larger than a certain internal period of the system $`T_0`$, the response is a train of waves corresponding to the large relaxation loop $`(\mathrm{𝟏}^\mathrm{𝟎})`$. If $`T_f<T_0`$ the system develops wavetrains with low amplitude corresponding to small relaxation loop $`(\mathrm{𝟎}^\mathrm{𝟏})`$. Non-trivial behavior is observed when $`T_f`$ is only slightly smaller than $`T_0`$. In this case the system shows mixed-mode waveforms $`(\mathrm{𝟏}^𝐧)`$ consisting of one large and $`n`$ small waves. Response of this type is an example of the complex-excitable dynamics targeted in our studies. Instead of an external pacemaker we use a spiral wave as a self-sustained source of excitation in the medium.
Spiral waves were initiated in a two-dimensional square domain with no-flux boundary conditions. While $`\alpha `$ and $`\sigma `$ were used as bifurcation parameters, other parameters were fixed at ($`\epsilon =0.005,a=0.6,b=0.03,v_0=0.2`$). As in complex-excitable dynamics was found in the parameter region between domains of large-amplitude $`\mathrm{𝟏}^\mathrm{𝟎}`$ (small $`\alpha `$ and $`\sigma `$) and small-amplitude $`\mathrm{𝟎}^\mathrm{𝟏}`$ (large $`\alpha `$ and $`\sigma `$) waves. In this region the medium supports mostly aperiodic, stable spiral waves lacking rotational symmetry. Figure 2 shows such a spiral wave for $`\alpha =0.15,\sigma ^2=0.001`$ at two time instances. Note that the wave length of a large-amplitude wave is larger than that of a small-amplitude wave and, thus, the shape of the spiral is distorted. The concentration time series $`v(𝐫,t)`$ at different locations in the medium (cf Fig.3) show aperiodic concatenations of $`\mathrm{𝟏}^\mathrm{𝟏}`$ and $`\mathrm{𝟏}^\mathrm{𝟐}`$ waveforms, while trivial patterns $`\mathrm{𝟏}^\mathrm{𝟎}`$ and $`\mathrm{𝟎}^\mathrm{𝟏}`$ are completely absent.
As the domain of complex-excitable dynamics is traversed from $`\mathrm{𝟏}^\mathrm{𝟎}`$ to $`\mathrm{𝟎}^\mathrm{𝟏}`$ the contribution of the waveforms $`\mathrm{𝟏}^𝐧`$ with $`n>0`$ steadily grows, as well as the number $`n`$ of the small-amplitude loops. Thus, the transition from large to small-amplitude spiral waves occurs gradually through a succession of irregular spiral patterns with a progressively growing contribution of low-amplitude waves.
Although irregular patterns are found in most of the complex-excitable domain, pure period-3 dynamics was found in a sub-domain of this region. Figure 4 shows the spatial structure of a spiral wave in this parameter region. Observation of the spiral wave dynamics for long time periods shows that the entire concentration field slowly rotates around the spiral core with a constant angular velocity $`\omega `$. The period-3 dynamics is manifested in a coordinate frame centered at the spiral core and rotating with velocity $`\omega `$. Indeed, it takes three rotations of the spiral for the concentration field to return to itself.
Figure 1 shows a phase portrait of the dynamics at a non-special location in the medium calculated in the rotating frame. Before closing onto itself the phase space trajectory executes two small loops and one large loop, corresponding to a pure $`\mathrm{𝟏}^\mathrm{𝟐}`$ dynamics. Consider a polar coordinate frame $`(\rho ,\varphi )`$ in the $`(u,v)`$ phase plane with origin at an arbitrary point internal to both the small and large loops. During one full period of the dynamics the phase variable $`\varphi `$ changes by $`6\pi `$. Calculation of the phase at every point in the medium at a time $`t_0`$ gives an instantaneous snapshot $`\varphi (𝐫,t_0)`$ of the time-dependent phase field $`\varphi (𝐫,t)`$.
Consider a closed contour $`\mathrm{\Gamma }`$ that surrounds the spiral core. The phase increment $`\mathrm{\Delta }_\mathrm{\Gamma }\varphi ={\oint }_\mathrm{\Gamma }\nabla \varphi (𝐫,t_0)\cdot d𝐥`$ along $`\mathrm{\Gamma }`$ will be equal to a multiple of the full $`6\pi `$ period of the dynamics. From the topological theory of point defects in simple oscillatory or excitable media it follows that $`\mathrm{\Delta }_\mathrm{\Gamma }\varphi `$ is invariant and for the one-armed spiral waves in this study takes the values $`\pm 2\pi `$. In complex-periodic media this contradiction is resolved by the existence of synchronization defect lines emanating from the core . The phase of the local oscillation experiences jumps equal to multiples of $`2\pi `$ when such line defects are crossed. Any contour $`\mathrm{\Gamma }`$ encircling the spiral core intersects these lines so that the total phase increment $`\mathrm{\Delta }_\mathrm{\Gamma }\varphi `$ is obtained from the integration of $`\nabla \varphi (𝐫,t_0)`$ along $`\mathrm{\Gamma }`$ yielding $`\pm 2\pi `$ plus phase jumps at the intersections of $`\mathrm{\Gamma }`$ with the synchronization defect lines. The sum of both contributions yields the full period phase increment of the local dynamics.
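Numerically, the contour increment can be estimated on a discretized phase field by summing phase differences wrapped into $`(-\pi ,\pi ]`$ along a closed pixel path. The sketch below is a generic illustration on a synthetic one-armed spiral phase field (not on the simulation data of this paper), and it assumes that neighbouring contour points differ by much less than $`\pi `$ away from the defect lines.

```python
import numpy as np

def contour_phase_increment(phase, path):
    """Sum of wrapped phase differences along a closed path of (row, col) pixels."""
    total = 0.0
    for k in range(len(path)):
        d = phase[path[(k + 1) % len(path)]] - phase[path[k]]
        total += (d + np.pi) % (2.0 * np.pi) - np.pi     # wrap into (-pi, pi]
    return total

# synthetic one-armed spiral: the phase winds once around the core at the grid centre
n = 101
y, x = np.mgrid[0:n, 0:n] - n // 2
phase = np.arctan2(y, x)

# square contour of half-width r around the core
c, r = n // 2, 20
top = [(c - r, c + j) for j in range(-r, r)]
right = [(c + i, c + r) for i in range(-r, r)]
bottom = [(c + r, c - j) for j in range(-r, r)]
left = [(c - i, c - r) for i in range(-r, r)]
path = top + right + bottom + left
print(contour_phase_increment(phase, path) / (2.0 * np.pi))   # close to +1 or -1
```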
For the specific case of period-3 complex-excitable dynamics where the total phase increment is $`6\pi `$, one might expect to find one line defect where the phase jumps by $`4\pi `$ or two defect lines where the phase jumps by $`2\pi `$ on crossing each line. In Fig.4 one sees two lines emanating from the spiral core at an angle of $`180^o`$. Investigation of the change in phase of the local dynamics across these lines shows that they are indeed $`2\pi `$ synchronization defect lines exhibiting the loop exchange phenomenon described earlier for complex-oscillatory media.
Figure 5 shows the $`v(𝐫,t)`$ time series at four neighboring locations in the medium along a path traversing one of the defect lines. Panel $`𝐚`$ is a plot of the normal $`\mathrm{𝟏}^\mathrm{𝟐}`$ dynamics seen on one side of the defect line. Every third minimum is lower than the two preceding minima and corresponds to the larger relaxation loop in the phase space plot.
As one approaches the defect line, the large-amplitude loop shrinks while both small-amplitude loops grow (Fig.5(b)). Then, one small-amplitude loop begins to grow faster than the other and at some point becomes larger than the still shrinking large-amplitude loop (Fig.5(c)). This loop exchange process continues until the new large loop attains a size equal to that of a large-amplitude loop and the other two loops shrink to the size of the small-amplitude loop. The local dynamics on opposite sides of the defect line (compare Figs 5(a) and 5(c)) experiences a $`2\pi `$ phase shift. Thus, the total phase increment $`\mathrm{\Delta }_\mathrm{\Gamma }\varphi =2\pi `$ resulting from the integration of $`\varphi (𝐫,t_0)`$ along $`\mathrm{\Gamma }`$, excluding its intersections with defect lines, plus the two $`2\pi `$ phase jumps at the intersection points gives the expected $`6\pi `$ phase increment.
Our results show that the structure of complex-periodic spiral waves is governed by general topological principles independent of whether the dynamics is excitable or oscillatory. This fact allows one to extend the predictions inferred from the studies of complex-oscillatory systems to systems with super-excitable dynamics. The formation of complex-periodic spiral waves in such systems might play a role in the development of some pathological conditions in the heart. Indeed, mixed-mode electrical activity with alternating large and small amplitude maxima, so-called alternans, is typically observed as a symptom of tachycardia .
# Supernova Neutrinos and the Neutrino Masses
## I Introduction
Whether or not neutrinos have mass and are mixed are subjects of great current interest. Many experiments are now or will soon be searching for neutrino flavor mixing in a variety of circumstances. The strongest evidence for mixing (which implies mass) so far comes from the atmospheric neutrino experiments. However, all of these experiments by their nature are only sensitive to the differences of neutrino masses, and not the mass scale.
The absolute scale of the neutrino masses is an important probe of physics beyond the standard model of particle physics. In addition, if the neutrinos have masses of order a few eV or more, they may be an important part of the dark matter in the universe. Direct mass measurements from decay kinematics do not place very stringent limits: $`m_{\nu _e}\lesssim 5`$ eV, $`m_{\nu _\mu }<170`$ keV, and $`m_{\nu _\tau }<18`$ MeV. It will be very difficult to significantly improve these limits with terrestrial experiments.
A core-collapse supernova is a tremendous source of neutrinos and antineutrinos of all flavors. Since the current limit on the $`\nu _e`$ mass is comparatively low, the $`\nu _\mu `$ and $`\nu _\tau `$ masses could be measured by their time-of-flight delay relative to the $`\nu _e`$ and $`\overline{\nu }_e`$. Since they have energies only of order 25 MeV, the $`\nu _\mu `$ and $`\nu _\tau `$ can be detected only by their neutral-current interactions. While they also have neutral-current interactions, the $`\nu _e`$ and $`\overline{\nu }_e`$ will be detected primarily by their charged-current interactions.
Even a tiny mass will make the velocity slightly less than for a massless neutrino, and over the large distance to a supernova will cause a measurable delay in the arrival time. A neutrino with a mass $`m`$ (in eV) and energy $`E`$ (in MeV) will experience an energy-dependent delay (in s) relative to a massless neutrino in traveling over a distance D (in 10 kpc, approximately the distance to the Galactic center) of
$$\mathrm{\Delta }t(E)=0.515\left(\frac{m}{E}\right)^2D,$$
(1)
where only the lowest order in the small mass has been kept.
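Eq. (1) translates directly into code; the helper below is a trivial illustration (the function name is ours) giving the delay in seconds for a mass in eV, an energy in MeV and a distance in units of 10 kpc.

```python
def delay_seconds(m_ev, e_mev, d_10kpc=1.0):
    """Time-of-flight delay of Eq. (1), lowest order in the small neutrino mass."""
    return 0.515 * (m_ev / e_mev) ** 2 * d_10kpc

# for example, a 30 eV neutrino of 25 MeV from 10 kpc arrives about 0.74 s late
print(delay_seconds(30.0, 25.0))
```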
If the neutrino mass is nonzero, lower-energy neutrinos will arrive later, leading to a correlation between neutrino energy and arrival time. Using this idea, Ref. has shown that the next supernova will allow sensitivity to a $`\nu _e`$ mass down to about 3 eV, comparable to the terrestrial limit. Since the neutrino energy is not measured in neutral-current interactions a similar technique cannot be used for the $`\nu _\tau `$ mass. (The incoming neutrino energy is not determined since a complete kinematic reconstruction of the reaction products is typically not possible.)
Instead, the strategy for measuring the $`\nu _\tau `$ mass is to look at the difference in time-of-flight between the neutral-current events (mostly $`\nu _\mu `$,$`\nu _\tau `$,$`\overline{\nu }_\mu `$, and $`\overline{\nu }_\tau `$) and the charged-current events (just $`\nu _e`$ and $`\overline{\nu }_e`$). We assume that the $`\nu _\mu `$ is massless and will ask what limit can be placed on the $`\nu _\tau `$ mass. There are three major complications to a simple application of Eq. (1): (i) The neutrino energies are not fixed, but are characterized by spectra; (ii) The neutrino pulse has a long intrinsic duration of about 10 s, as observed for SN1987A; and (iii) The statistics are finite.
One possible neutral-current signal is the excitation of $`{}_{}{}^{16}\mathrm{O}`$, followed by detectable gamma emission . In SuperKamiokande (SK), which has a target volume of 32 kton of light water, this would cause about 710 events. This signal would allow sensitivity to a $`\nu _\tau `$ mass as low as about 45 eV .
Another possible neutral-current signal is deuteron breakup, followed by neutron detection. In the Sudbury Neutrino Observatory (SNO), which has a target volume of 1 kton of heavy water, this would cause about 485 events. (This detector also has a light-water target with an active volume of about 1.4 kton). While the statistics are somewhat lower than for SK, the energy dependence of the cross section is less steep and emphasizes lower energies and hence longer delays, leading to a sensitivity to a $`\nu _\tau `$ mass down to about 30 eV
Since one expects a type-II supernova about every 30 years in our Galaxy, there is a good chance that these mass limits can be dramatically improved in the near future.
## II Production and detection of supernova neutrinos
When the core of a large star ($`M\gtrsim 8M_{\odot }`$) runs out of nuclear fuel, it collapses to a proto-neutron star. About 99% of the gravitational binding energy change, about $`3\times 10^{53}`$ ergs, is carried away by neutrinos. Because of the high density, they diffuse outward over a timescale of several seconds. When they are within about one mean free path of the edge, they escape freely, with a thermal spectrum (approximately Fermi-Dirac) characteristic of the surface of last scattering. Because different flavors have different interactions with the matter, the temperatures are different. The $`\nu _\mu `$ and $`\nu _\tau `$ neutrinos and their antiparticles have a temperature of about 8 MeV (or $`\langle E\rangle \approx 25`$ MeV). The $`\overline{\nu }_e`$ neutrinos have a temperature of about 5 MeV ($`\langle E\rangle \approx 16`$ MeV), and the $`\nu _e`$ neutrinos have a temperature of about 3.5 MeV ($`\langle E\rangle \approx 11`$ MeV). The luminosities of the different neutrino flavors are approximately equal at all times. The neutrino luminosity rises quickly over a time of order 0.1 s, and then falls over a time of order several seconds, roughly like an exponential with a time constant $`\tau `$ = 3 s. The detailed form of the neutrino luminosity used below is less important than the general shape features and their characteristic durations.
For thermal spectra which are constant in time, and for equal luminosities among the different flavors, the scattering rate for a given reaction can be written as:
$$\frac{dN_{sc}}{dt}=C𝑑Ef(E)\left[\frac{\sigma (E)}{10^{42}\mathrm{cm}^2}\right]\left[\frac{L(t\mathrm{\Delta }t(E))}{E_B/6}\right],$$
(2)
where $`f(E)`$ is the neutrino energy spectrum, $`\sigma (E)`$ the cross section, and $`L(t)`$ the luminosity. For a massless neutrino (i.e., the charged-current events), $`\mathrm{\Delta }t(E)=0`$, and the time dependence of the scattering rate is simply the time dependence of the luminosity. For a massive neutrino (i.e., the neutral-current $`\nu _\tau `$ events), the time dependence of the scattering rate is additionally dependent on the mass effects, as written. The overall constant is
$$C=8.28\left[\frac{E_B}{10^{53}\mathrm{ergs}}\right]\left[\frac{1\mathrm{MeV}}{T}\right]\left[\frac{10\mathrm{kpc}}{D}\right]^2\left[\frac{\mathrm{det}.\mathrm{mass}}{1\mathrm{kton}}\right]n,$$
(3)
where $`E_B`$ is the total binding energy release, $`T`$ is the spectrum temperature, $`D`$ is the distance to the supernova, and $`n`$ is the number of targets per molecule for the given reaction. For a light-water detector, the initial coefficient in $`C`$ is 9.21 instead of 8.28.
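As an illustration of how the delayed rate of Eq. (2) can be evaluated in practice, the sketch below integrates a generic Fermi-Dirac spectrum against a shifted exponential luminosity. The spectral shape, the $`E^2`$ cross-section scaling, and the unit normalization are placeholders standing in for the detector-specific inputs; they are not the values used in this analysis.

```python
import numpy as np

T, TAU = 8.0, 3.0                 # assumed nu_tau temperature (MeV) and decay time (s)

def spectrum(e):                  # normalized Fermi-Dirac energy spectrum
    spec = e ** 2 / (np.exp(e / T) + 1.0)
    return spec / np.trapz(spec, e)

def lum(t):                       # dimensionless luminosity, exponential decay
    t = np.asarray(t, dtype=float)
    return np.where(t > 0.0, np.exp(-np.clip(t, 0.0, None) / TAU) / TAU, 0.0)

def rate(t, m_ev, d_kpc=10.0):
    e = np.linspace(1.0, 100.0, 400)                       # neutrino energy grid, MeV
    dt = 0.514 * (m_ev / 10.0) ** 2 * (10.0 / e) ** 2 * (d_kpc / 10.0)
    integrand = spectrum(e) * (e / 10.0) ** 2 * lum(t - dt)  # sigma(E) ~ E^2 placeholder
    return np.trapz(integrand, e)

print(rate(1.0, 0.0), rate(1.0, 30.0))   # massless vs 30 eV rate at t = 1 s
```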
The Sudbury Neutrino Observatory, though primarily intended for solar neutrinos, also makes an excellent detector for supernova neutrinos. Electrons and positrons will be detected by their Čerenkov radiation, and gammas via secondary electrons and positrons. Neutrons will be detected by one of three possible modes, depending on the detector configuration. The key neutral-current reaction is deuteron breakup: $`\nu +d\rightarrow \nu +p+n`$ and $`\overline{\nu }+d\rightarrow \overline{\nu }+p+n`$, with thresholds of 2.22 MeV. Some other relevant reactions are given in Table I.
## III Signature of a small neutrino mass
### A General description of the data
As noted, for a massless neutrino ($`\nu _e`$ or $`\overline{\nu }_e`$) the time dependence of the scattering rate is simply the time dependence of the luminosity. For a massive neutrino ($`\nu _\tau `$), the time dependence of the scattering rate additionally depends on the delaying effects of a mass. To search for these effects, we define two rates: a Reference $`R(t)`$ containing only massless events, and a Signal $`S(t)`$ containing some fraction of massive events (along with some massless events which cannot be separated).
The Reference $`R(t)`$ can be formed in various ways, for example from the charged-current reaction $`\overline{\nu }_e+p\rightarrow e^++n`$ in the light water of either SK or SNO (the former with a much better precision).
The primary component of the Signal $`S(t)`$ in SNO is the 485 neutral-current events on deuterons. With the hierarchy of temperatures assumed here, these events are 18% ($`\nu _e+\overline{\nu }_e`$), 41% ($`\nu _\mu +\overline{\nu }_\mu `$), and 41% ($`\nu _\tau +\overline{\nu }_\tau `$). The flavors of the neutral-current events of course cannot be distinguished. Under our assumption that only $`\nu _\tau `$ is massive, there is already some unavoidable dilution of $`S(t)`$.
In Figure 1, $`S(t)`$ is shown under different assumptions about the $`\nu _\tau `$ mass. The shape of $`R(t)`$ is exactly that of $`S(t)`$ when $`m_{\nu _\tau }=0`$, though the number of events in $`R(t)`$ will be different. The rates $`R(t)`$ and $`S(t)`$ will be measured with finite statistics, so it is possible for statistical fluctuations to obscure the effects of a mass when there is one, or to fake the effects when there is not. We determine the mass sensitivity in the presence of the statistical fluctuations by Monte Carlo modeling. We use the Monte Carlo to generate representative statistical instances of the theoretical forms of $`R(t)`$ and $`S(t)`$, so that each run represents one supernova as seen in SNO. The best test of a $`\nu _\tau `$ mass seems to be a test of the average arrival time $`\langle t\rangle `$. Any massive component in $`S(t)`$ will always increase $`\langle t\rangle `$, up to statistical fluctuations.
### B $`\langle t\rangle `$ analysis
Given the Reference $`R(t)`$ (i.e., the charged-current events), the average arrival time is defined as
$$\langle t\rangle _R=\frac{\sum _kt_k}{\sum _k1},$$
(4)
where the sum is over events in the Reference. The effect of the finite number of counts $`N_R`$ in $`R(t)`$ is to give $`\langle t\rangle _R`$ a statistical error:
$$\delta \left(\langle t\rangle _R\right)=\frac{\sqrt{\langle t^2\rangle _R-\langle t\rangle _R^2}}{\sqrt{N_R}}.$$
(5)
For a purely exponential luminosity, $`\langle t\rangle _R=\sqrt{\langle t^2\rangle _R-\langle t\rangle _R^2}=\tau `$.
Given the Signal $`S(t)`$ (i.e., the neutral-current events), the average arrival time $`\langle t\rangle _S`$ and its error $`\delta \left(\langle t\rangle _S\right)`$ are defined similarly. The widths of $`R(t)`$ and $`S(t)`$ are similar, each of order $`\tau =3`$ s (the mass increases the width of $`S(t)`$ only slightly for small masses). The signal of a mass is that the measured value of $`\langle t\rangle _S-\langle t\rangle _R`$ is greater than zero with statistical significance.
Using the Monte Carlo, we analyzed $`10^4`$ simulated supernova data sets for a range of $`\nu _\tau `$ masses. For each data set, $`\langle t\rangle _S-\langle t\rangle _R`$ was calculated and its value histogrammed. These histograms are shown in the upper panel of Fig. 2 for a few representative masses. (Note that the number of Monte Carlo runs only affects how smoothly these histograms are filled out, and not their width or placement.) These distributions are characterized by their central point and their width, using the 10%, 50%, and 90% confidence levels. That is, for each mass we determined the values of $`\langle t\rangle _S-\langle t\rangle _R`$ such that a given percentage of the Monte Carlo runs yielded a value of $`\langle t\rangle _S-\langle t\rangle _R`$ less than that value. With these three numbers, we can characterize the results of complete runs with many masses much more compactly, as shown in the lower panel of Fig. 2. Given an experimentally determined value of $`\langle t\rangle _S-\langle t\rangle _R`$, one can read off the range of masses that would have been likely (at these confidence levels) to have given such a value of $`\langle t\rangle _S-\langle t\rangle _R`$ in one experiment. From the lower panel of Fig. 2, we see that SNO is sensitive to a $`\nu _\tau `$ mass down to about 30 eV if the SK $`R(t)`$ is used, and down to about 35 eV if the SNO $`R(t)`$ is used.
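A stripped-down version of such a Monte Carlo is sketched below. For brevity every Signal event is delayed at a single characteristic energy, whereas in the analysis described above only the $`\nu _\tau `$ fraction is massive and the full spectra are sampled; the event counts and the characteristic energy are assumptions chosen only for illustration.

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy Monte Carlo of the <t>_S - <t>_R test: event times drawn from an
# exponential pulse (tau = 3 s), Signal events shifted by a single-energy delay.
TAU, E_C, N_R, N_S = 3.0, 32.0, 4000, 485     # assumed counts and energy

def delay(m_ev, e_mev=E_C, d_kpc=10.0):
    return 0.514 * (m_ev / 10.0) ** 2 * (10.0 / e_mev) ** 2 * (d_kpc / 10.0)

def one_supernova(m_ev):
    t_ref = rng.exponential(TAU, N_R)                  # massless Reference
    t_sig = rng.exponential(TAU, N_S) + delay(m_ev)    # delayed Signal
    return t_sig.mean() - t_ref.mean()

shifts = np.array([one_supernova(30.0) for _ in range(10000)])
print(np.percentile(shifts, [10, 50, 90]))    # spread of <t>_S - <t>_R values
```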
We also investigated the dispersion of the event rate in time as a measure of the mass. A mass alone causes a delay, but a mass and an energy spectrum also cause dispersion. We defined the dispersion as the change in the width, $`\sqrt{\langle t^2\rangle _S-\langle t\rangle _S^2}-\sqrt{\langle t^2\rangle _R-\langle t\rangle _R^2}`$. We found that the dispersion was not statistically significant until the mass was of order 80 eV or so; however, for such a large mass the statistical significance of $`\langle t\rangle _S-\langle t\rangle _R`$ cannot be missed. This means that the average delay is well-characterized by a single energy, which for SNO is $`E_c\simeq 32`$ MeV.
## IV Conclusions and discussion
One of the key points of our technique is that the abundant $`\overline{\nu }_e`$ events can be used to calibrate the neutrino luminosity of the supernova and to define a clock by which to measure the delay of the $`\nu _\tau `$ neutrinos. The internal calibration substantially reduces the model dependence of our results, and allows us to be sensitive to rather small masses. Our calculations indicate that a significant delay can be seen for $`m=30`$ eV with the SNO data, corresponding to a delay in the average arrival time of about 0.15 s. Even though the duration of the pulse is expected to be of order 10 s, such a small average delay can be seen because several hundred events are expected. Without such a clock, one cannot determine a mass limit with the $`\langle t\rangle _S-\langle t\rangle _R`$ technique advocated here, since the absolute delay would be unknown. Instead, one would have to constrain the mass from the observed dispersion of the events; only for a mass of $`m=150`$ eV or greater would the pulse become significantly broader than expected from theory.
Moreover, the technique used here allows accurate analytic estimates of the results, so that it is easy to see how the conclusions would change if different input parameters were used. If the $`\nu _\tau `$ mass is very small, and only a limit is placed, then this limit scales as $`m_{limit}\propto T^{3/4}\sqrt{\tau }`$, where $`T`$ is the $`\nu _\mu `$ and $`\nu _\tau `$ temperature and $`\tau `$ is the luminosity timescale . Thus the final result is relatively insensitive to the supernova parameters in their expected ranges. Additionally, this is independent of the distance $`D`$. Because of obscuration by dust, it may be difficult to observe the light from a future Galactic supernova. It is therefore rather important that this does not affect the ability to place a limit on the $`\nu _\tau `$ mass. In Ref. , we have discussed how a supernova could be located by its neutrinos, perhaps in advance or independently of the light.
The observation of the neutrino signal of a future Galactic supernova will be an extremely significant test of the physics involved. It will allow, among other things, determination of the imprecisely-known supernova neutrino emission parameters. In addition, we hope to be able to use the same data to determine or constrain neutrino properties. In Refs. , we discuss how both of these goals can be achieved simultaneously, with or without the additional complication of neutrino oscillations.
Despite the long intrinsic duration of the supernova neutrino pulse and the spectra of neutrino energies, it is in fact possible to discern even a small $`\nu _\tau `$ mass by a time-of-flight measurement. The results are that SK and SNO are sensitive to a $`\nu _\tau `$ mass as low as about 45 eV and 30 eV, respectively. In the above, we considered that the $`\nu _\mu `$ is massless and the $`\nu _\tau `$ is massive. Since they cannot be distinguished experimentally, the limit in fact applies to both $`\nu _\mu `$ and $`\nu _\tau `$. These results include the effects of the finite statistics, and are relatively insensitive to uncertainties in some of the key supernova parameters. When the next Galactic supernova is observed, the $`\nu _\tau `$ mass limit will be improved by nearly 6 orders of magnitude. The importance of this result is highlighted by its significance to both cosmology and particle physics. So that the universe is not overclosed, the sum of the stable neutrino masses must be less than about 100 eV. Some seesaw models of the neutrino masses predict a $`\nu _\tau `$ mass as large as about 30 eV. As noted, this seems to be the best technique for direct measurement of the $`\nu _\mu `$ and $`\nu _\tau `$ masses.
## ACKNOWLEDGMENTS
I acknowledge support as a Sherman Fairchild fellowship from Caltech, and I thank Petr Vogel for his collaboration on Refs. .
Figure Captions
FIG. 1. The expected event rate for the Signal $`S(t)`$ at SNO in the absence of fluctuations for different $`\nu _\tau `$ masses, as follows: solid line, 0 eV; dashed lines, in order of decreasing height: 20, 40, 60, 80, 100 eV. Of 535 total events, 100 are massless ($`\nu _e+\overline{\nu }_e`$), 217.5 are massless ($`\nu _\mu +\overline{\nu }_\mu `$), and 217.5 are massive ($`\nu _\tau +\overline{\nu }_\tau `$). These totals count events at all times; in the figure, only those with $`t\le 9`$ s are shown.
FIG. 2. The results of the $`\langle t\rangle `$ analysis for a massive $`\nu _\tau `$, using the Signal $`S(t)`$ from SNO defined in the text. In the upper panel, the relative frequencies of various $`\langle t\rangle _S-\langle t\rangle _R`$ values are shown for a few example masses. The solid line is for the results using the SK Reference $`R(t)`$, and the dotted line for the results using the SNO $`R(t)`$. In the lower panel, the range of masses corresponding to a given $`\langle t\rangle _S-\langle t\rangle _R`$ is shown. The dashed line is the 50% confidence level. The upper and lower solid lines are the 10% and 90% confidence levels, respectively, for the results with the SK $`R(t)`$. The dotted lines are the same for the results with the SNO $`R(t)`$.
# A Hartree-Fock Study of Persistent Currents in Disordered Rings
## Abstract
For a system of spinless fermions in a disordered mesoscopic ring, interactions can give rise to an enhancement of the persistent current by orders of magnitude. The increase in the current is associated with a charge reorganization of the ground state. The interaction strength for which this reorganization takes place is sample-dependent and the log-averages over the ensemble are not representative. In this paper we demonstrate that the Hartree-Fock method closely reproduces results obtained by exact diagonalization. For spinless fermions subject to a short-range Coulomb repulsion U we show that due to charge reorganization the derivative of the persistent current is a discontinuous function of U. Having established that the Hartree-Fock method works well in one dimension, we present corresponding results for persistent currents in two coupled chains.
Pacs numbers: 71.30.+h, 05.45.+b, 71.10.Pm, 72.15.Rn
In a normal-metal mesoscopic ring threaded by a magnetic flux, measured values of the persistent current are two orders of magnitude larger than predicted. This result suggests that a quantitative theory must treat electron-electron interactions and disorder on an equal footing. Previous studies of spinless fermions concentrated on the behaviour of ensemble averages of the persistent current and led to the conclusion that repulsive interactions cannot significantly enhance the current. It was therefore argued that only systems which include spin could show such an increase of the current. However, more recently it was shown that for one-dimensional systems of spinless fermions interacting through a short-range Coulomb repulsion U, the persistent current does increase. This enhancement of the current is accompanied by a charge reorganization of the ground state which happens at different values of the interaction strength U, depending on the disorder realization, and therefore the ensemble averaged persistent current may not be relevant.
The results of were obtained using the density matrix renormalization group (DMRG) technique which is essentially exact and contains correlation effects. However, this technique is computationally demanding and not easily extended to higher dimensions. Therefore it is of interest to determine whether or not the results of are contained in a mean-field description using a single Slater determinant. In this paper we address this question, by presenting results obtained using the Hartree-Fock method for spinless fermions interacting via a short-range Coulomb repulsion. In one dimension we show that the Hartree-Fock method agrees with the exact results of , reproducing the predicted behaviour of the persistent current as well as the sample-dependent charge reorganization. After establishing the validity of the method, we extend the calculation to a quasi-one dimensional system comprising two parallel chains.
The total Hamiltonian for a system of $`N`$ spinless fermions on a disordered chain ($`1`$D) of $`M`$ sites is
$$H=\sum _{i=1}^{M}\epsilon _ic_i^{\dagger }c_i-\sum _{i,j=1}^{M}t_{ij}c_i^{\dagger }c_j+\frac{1}{2}\sum _{i,j=1}^{M}U_{ij}c_i^{\dagger }c_ic_j^{\dagger }c_j$$
(1)
The operators $`c_i^{\dagger }`$ and $`c_i`$ are creation and annihilation operators for an electron on site $`i`$, the on-site energies $`\epsilon _i`$ are random variables uniformly distributed over the interval $`-\frac{W}{2}`$ to $`+\frac{W}{2}`$ and $`U_{ij}`$ is a nearest-neighbour interaction of the form
$$U_{ij}=\{\begin{array}{cc}U\hfill & \text{if }j=i\pm 1\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array}$$
(2)
The hopping elements $`t_{ij}`$ are restricted to nearest neighbours with $`t_{i,i\pm 1}=t=1`$ except at the ends of the chain, for which $`t_{1N}=t_{N1}=1`$ (periodic boundary conditions) or $`t_{1N}=t_{N1}=-1`$ (anti-periodic boundary conditions).
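For concreteness, a minimal construction of the corresponding single-particle matrix is sketched below. It uses one possible sign convention (hopping enters $`H`$ with a minus sign), and the disorder realization is fixed by a seed so that both boundary conditions see the same sample; none of these coding choices are prescriptions from the original work.

```python
import numpy as np

def chain_hamiltonian(m_sites, w, antiperiodic=False, seed=1):
    """Single-particle part of Eq. (1): random on-site energies in [-W/2, W/2]
    plus nearest-neighbour hopping t = 1 on a ring.  With the hopping entering
    H as -t c^dagger c, ordinary bonds carry a matrix element -1; the boundary
    bond changes sign under anti-periodic boundary conditions."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-w / 2.0, w / 2.0, m_sites)
    h = np.diag(eps)
    for i in range(m_sites - 1):
        h[i, i + 1] = h[i + 1, i] = -1.0
    h[0, -1] = h[-1, 0] = 1.0 if antiperiodic else -1.0
    return h
```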
The single-particle Hartree-Fock equation corresponding to equation (1) is of the form
$`\epsilon _i\mathrm{\Psi }^n(i)-t_{i,i-1}\mathrm{\Psi }^n(i-1)-t_{i,i+1}\mathrm{\Psi }^n(i+1)`$ (3)
$`+{\displaystyle \sum _{m=1}^{N}}{\displaystyle \sum _{j=1}^{M}}|\mathrm{\Psi }^m(j)|^2U_{ij}\mathrm{\Psi }^n(i)`$ (4)
$`-{\displaystyle \sum _{m=1}^{N}}{\displaystyle \sum _{j=1}^{M}}\mathrm{\Psi }^m(j)\mathrm{\Psi }^m(i)U_{ij}\mathrm{\Psi }^n(j)=E_n\mathrm{\Psi }^n(i)`$ (5)
where $`\mathrm{\Psi }^n(i)`$ is the amplitude of the $`n`$th single-particle wavefunction on site $`i`$. The third and fourth terms are the direct (Hartree) and exchange (Fock) potentials, respectively.
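A bare-bones self-consistency loop implementing these equations is sketched below. The uniform initial density and the simple linear mixing of the density matrix are choices of ours made for illustration, not prescriptions from the original calculation.

```python
import numpy as np

def hartree_fock(h, u, n_part, n_iter=200, mix=0.5):
    """Iterate Eq. (3): diagonalize h plus the Hartree and Fock potentials built
    from the N lowest occupied (real) orbitals until self-consistent.  Returns
    the occupied orbital energies and the occupied orbitals."""
    m = h.shape[0]
    u_mat = np.zeros((m, m))
    for i in range(m):                       # nearest-neighbour U on the ring
        u_mat[i, (i + 1) % m] = u_mat[i, (i - 1) % m] = u
    dens = np.full(m, n_part / m)            # initial guess: uniform density
    rho = np.zeros((m, m))                   # one-body density matrix
    for _ in range(n_iter):
        hartree = np.diag(u_mat @ dens)      # direct potential, diagonal
        fock = -u_mat * rho                  # exchange potential, element-wise
        e, v = np.linalg.eigh(h + hartree + fock)
        occ = v[:, :n_part]                  # N lowest orbitals
        rho = mix * (occ @ occ.T) + (1.0 - mix) * rho
        dens = np.diag(rho)
    return e[:n_part], occ
```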
The parameters $`N`$, $`M`$, $`t`$, $`W`$, and $`U`$ were chosen to be exactly the same as in so as to directly test the accuracy of the method. Here we present results for $`M=20`$ sites and $`N=10`$ particles (half-filling) and the strength of the disorder was taken to be large, $`W=9`$. At zero $`U`$ the charge density of a given sample is spatially inhomogeneous due to the presence of such disorder. As $`U`$ increases we observe a reorganization of the charge, and for large $`U`$, obtain a homogeneous configuration in which the particles are equally spaced. This reorganization happens at different values of the interaction strength for different samples. As an example, figure 1 shows the charge density for a single sample in the free electron case ($`U=0`$) and for $`U=20`$, which is large enough to yield a periodic array of charges.
In addition to the charge density we have studied the phase sensitivity $`D`$, which is a measure of the delocalization effect mentioned above and is defined by
$$D(U)=\frac{M}{2}\mathrm{\Delta }E$$
(6)
Here $`\mathrm{\Delta }E=(-1)^N(E_g(0)-E_g(\pi ))`$ is the difference in the ground state energy between periodic and anti-periodic boundary conditions, where the Hartree-Fock ground state energy is
$$E_g=\frac{1}{2}\sum _{n=1}^{N}[E_n+\sum _{i,j=1}^{M}\mathrm{\Psi }^n(i)h_{ij}\mathrm{\Psi }^n(j)]$$
(7)
with $`h_{ij}=\epsilon _i\delta _{ij}-t_{ij}\delta _{i,j\pm 1}`$.
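Putting the pieces together, the phase sensitivity of a single disorder realization can then be estimated as in the sketch below, which reuses the helper functions sketched earlier. The parameter values in the example are those quoted in the text, but the routine itself is only illustrative.

```python
import numpy as np

def ground_energy(h, u, n_part):
    """Hartree-Fock ground-state energy, Eq. (7): half the sum of occupied
    orbital energies plus the expectation value of the one-body part."""
    e_occ, occ = hartree_fock(h, u, n_part)
    one_body = np.einsum('im,ij,jm->', occ, h, occ)
    return 0.5 * (e_occ.sum() + one_body)

def phase_sensitivity(m_sites, w, u, n_part, seed=1):
    """D(U) = (M/2) (-1)^N [E_g(PBC) - E_g(APBC)] for one disorder sample."""
    e_p = ground_energy(chain_hamiltonian(m_sites, w, False, seed), u, n_part)
    e_a = ground_energy(chain_hamiltonian(m_sites, w, True, seed), u, n_part)
    return 0.5 * m_sites * (-1) ** n_part * (e_p - e_a)

# Example: one M = 20, N = 10, W = 9 sample scanned over a few values of U.
for u in (0.0, 1.0, 4.0, 16.0):
    print(u, phase_sensitivity(20, 9.0, u, 10))
```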
In agreement with we find peaks in $`\mathrm{log}D`$ at sample-dependent values of $`U`$, associated with reorganization of the ground-state charge density. For positive $`U`$, figure 2 shows the ensemble average of $`\mathrm{log}D`$ along with results for four individual samples. For negative $`U`$ the mean field equations do not converge and no results were obtainable. Fig. 2 is in remarkable agreement with the exact results of . For example, the average of $`\mathrm{log}D`$ exhibits a local maximum around $`U\simeq t`$. Following we have also examined the relative increase of the phase sensitivity with respect to the free fermion case, $`\eta =\mathrm{log}D(U)-\mathrm{log}D(0)`$, and in agreement with , obtain a log-normal distribution for $`\eta `$ (figure 3).
Having established the validity of Hartree-Fock theory as a method for computing the charge density and phase sensitivity of one-dimensional rings, we now extend our analysis to two coupled one-dimensional chains. For a system consisting of two rings with $`20`$ sites in each, and $`20`$ spinless fermions in total, we again examine the case of strong disorder, $`W=9`$. In figure 4 we show the charge density for the two rings. As we can see, the charge reorganization that was present in $`1`$D is also obvious for two chains. For very strong $`U`$ the particles localize in the odd and the even sites of the first and second chains respectively, as expected classically. Thus, one again obtains the delocalization effect associated with the crossover from an Anderson to a Mott insulator.
The phase sensitivity and the probability distribution of the relative increase $`\eta `$ are shown in figures 5 and 6, which again demonstrate that interactions produce an increase in the fluctuations of the current. However, in contrast with a single chain, the average of $`\mathrm{log}D`$ no longer possesses a local maximum and instead decreases monotonically with increasing $`U`$.
In summary, the above results demonstrate that the effects discussed in are contained in a single Slater-determinant ground state and are describable by mean field Hartree-Fock theory. By extending the analysis to two chains, we find that the maximum in the average of $`\mathrm{log}D`$ is no longer present, which suggests that this feature may be a peculiarity of strictly one-dimensional systems. The fact that Hartree-Fock theory is applicable in one dimension, where mean field theories are least accurate, indicates that in higher dimensions Hartree-Fock theory should be sufficient to describe the ground state of a system with electron-electron interactions and disorder, at least in the strong disorder limit.
# BeppoSAX Observations of 2-Jy Lobe-dominated Broad-Line Sources. I. The Discovery of a Hard X-ray Component
## 1 Introduction
There is abundant evidence that strong anisotropies play a major role in the observed characteristics of radio loud active galactic nuclei (AGN; see Antonucci 1993 and Urry & Padovani 1995 for a review). Radio jets are in fact known to be strongly affected by relativistic beaming, while part of the optical emission in some classes of objects is likely to be absorbed by a thick disk or torus around the active nucleus.
A unification of all high-power radio sources has been suggested (Barthel 1989; Urry & Padovani 1995 and references therein) and according to this scheme, the lobe-dominated, steep-spectrum radio quasars (SSRQ) and the core-dominated, flat-spectrum radio quasars (FSRQ) are believed to be increasingly aligned versions of Fanaroff-Riley type II (FR II; Fanaroff & Riley 1974) radio galaxies. Within this scheme, broad-line (FWHM $`\gtrsim `$ 2000 $`\mathrm{km}\mathrm{s}^{-1}`$) radio galaxies (BLRG) have a still uncertain place. They could represent either objects intermediate between quasars and radio galaxies (i.e., with the nucleus only partly obscured and the broad emission lines just becoming visible at the edge of the obscuring torus) or low-redshift, low-power equivalents of quasars.
The scenario described above makes a number of predictions about the X-ray emission of these radio-loud AGN. Moreover, the hard X-ray band, that is less affected by absorption, is essential for a complete knowledge of the intrinsic nature of these objects.
Although the X-ray spectrum can be very complex, the presence of a nuclear, likely beamed X-ray component in quasars is quite well established in particular for FSRQ and blazars (Wilkes & Elvis 1987; Shastri et al. 1993; Sambruna et al. 1994). There are mainly two arguments to support this: 1) the tendency for radio loud AGN to have systematically flatter X-ray slopes than radio quiet ones; 2) the fact that the soft X-ray slope decreases with core dominance (Shastri et al. 1993) and increases with radio spectral index (Fiore et al. 1998). Both these results are explained with the presence of a radio-linked synchrotron self-Compton component of the X-ray emission that is likely to be beamed. This component would be dominant in the FSRQ. In SSRQ, in which “blazar-like”, non-thermal emission is probably less important because of the larger angle w.r.t. the line of sight, the “UV bump” would be stronger (as effectively observed: e.g., Wills et al. 1995) and the steeper soft X-ray component would represent its high-energy tail.
Although a nuclear X-ray component has been detected also in radio galaxies, it appears to be much weaker than in radio quasars (consistent with the idea that radio galaxies have an obscured nucleus). For example, the X-ray spectrum of Cygnus A (Ueno et al. 1994) is consistent with a typical quasar spectrum absorbed by a high column density of cold gas along the line of sight. On the other hand, in the case of the broad-line radio galaxy 3C 390.3 (Inda et al. 1994), its hard X-ray spectrum can be described by a relatively flat, unabsorbed power-law. This would suggest that BLRG might well be the low-redshift counterpart of radio quasars and therefore aligned within $`\lesssim 40^{\circ }`$ (as predicted by unified schemes: see e.g., Urry & Padovani 1995).
From the above it is clear that a spectral X-ray study of lobe-dominated, broad-line radio sources (including both SSRQ and BLRG), covering a large X-ray band is necessary for a number of reasons. Namely: 1) to study the hard X-ray properties of lobe-dominated, broad-line radio sources, at present not well known; 2) to investigate if the difference in the soft X-ray spectra of SSRQ and FSRQ apply also to the hard X-ray band. The detection of a flatter component in SSRQ will be extremely important for our understanding of the emission processes in this class of objects; 3) to increase the number of BLRG for which the X-ray spectrum is known in detail in order to disentangle the real nature of BLRG and investigate if the X-ray spectra of BLRG and SSRQ are similar.
In this paper we present BeppoSAX observations of five lobe-dominated, broad-line radio sources, namely three SSRQ and two BLRG (we follow the commonly adopted definition of lobe-dominated source, which implies a value of the core dominance parameter $`R<1`$). The sample is well defined (i.e., it is not a compilation of known hard X-ray sources) and it is extracted from the 2-Jy sample of radio sources for which a wealth of radio and optical information is available. The unique capability of the BeppoSAX satellite (Boella et al. 1997a) of performing simultaneous broad-band X-ray (0.1–200 keV) studies is particularly well suited for a detailed analysis of the X-ray energy spectrum of these sources.
In § 2 we present our sample, § 3 discusses the observations and the data analysis, while § 4 describes the results of our spectral fits to the BeppoSAX data. In § 5 we also examine the ROSAT PSPC data of our sources to better constrain the fits at low energies, in § 6 we combine the analysis of the BeppoSAX and ROSAT data while in § 7 we briefly comment on the lack of iron lines in our spectra. Finally, § 8 discusses our results and § 9 summarizes our conclusions. Throughout this paper spectral indices are written $`S_\nu \nu ^\alpha `$.
## 2 The Sample
The lobe-dominated, broad-line objects studied in this paper belong to a complete subsample of the 2 Jy catalogue of radio sources (Wall & Peacock 1985). This subsample, defined by redshift $`z<0.7`$ and declination $`\delta <10^{\circ }`$, includes 88 objects and is complete down to a flux density level of 2 Jy at 2.7 GHz. Optical spectra are available for all the sources together with accurate measurements of the \[O III\]$`\lambda `$5007, \[O II\]$`\lambda `$3727 and H$`\beta `$ emission line fluxes (Tadhunter et al. 1993, 1998). Estimates of the core dominance parameter $`R`$ \[$`R=S_{core}/(S_{tot}-S_{core})`$\] have been derived from both arcsec-resolution images and higher resolution data (Morganti et al. 1993, 1997). A study of the soft X-ray characteristics of the objects in the sample has been carried out using the ROSAT All-Sky Survey and/or ROSAT PSPC pointed observations (Siebert et al. 1996). For most of the objects, however, no useful X-ray spectral information is available.
The 2 Jy subsample described above contains 16 lobe-dominated, broad-line objects (excluding compact steep-spectrum sources, whose relation to other classes is still not clear). From those, we have selected the 10 sources with estimated flux in the 0.1 – 10 keV band larger than $`2\times 10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup> for an X-ray spectral study with the BeppoSAX satellite<sup>1</sup><sup>1</sup>1Note that, as expected in any flux-limited sample, the 10 selected objects are $`\sim 30`$ times more luminous in the X-ray band than the 6 sources which did not make the X-ray flux cut. Our sample is then biased towards the most X-ray luminous lobe-dominated, broad-line sources in the 2-Jy sample. Here we present the results obtained for the 5 objects so far observed in Cycle 1. The list of objects and their basic characteristics are given in Table 1, which presents the source name, position, redshift, optical magnitude $`V`$, 2.7 GHz radio flux, radio spectral index $`\alpha _\mathrm{r}`$ (taken from Wall & Peacock 1985), core dominance parameter $`R`$ at 2.3 GHz, Galactic N<sub>H</sub> and classification.
## 3 Observations and Data Analysis
A complete description of the BeppoSAX mission is given by Boella et al. (1997a). The relevant instruments for our observations are the coaligned Narrow Field Instruments (NFI), which include one Low Energy Concentrator Spectrometer (LECS; Parmar et al. 1997) sensitive in the 0.1 – 10 keV band; three identical Medium Energy Concentrator Spectrometers (MECS; Boella et al. 1997b), covering the 1.5 – 10 keV band; and the Phoswich Detector System (PDS; Frontera et al. 1997), coaligned with the LECS and the MECS. The PDS instrument is made up of four units, and was operated in collimator rocking mode, with a pair of units pointing at the source and the other pair pointing at the background, the two pairs switching on and off source every 96 seconds. The net source spectra have been obtained by subtracting the ‘off’ to the ‘on’ counts. A journal of the observations is given in Table 2.
The data analysis was based on the linearized, cleaned event files obtained from the BeppoSAX Science Data Center (SDC) on-line archive (Giommi & Fiore 1997) and on the XIMAGE package (Giommi et al. 1991) upgraded to support the analysis of BeppoSAX data. The data from the three MECS instruments were merged in one single event file by SDC. The LECS data above 4 keV were not used due to calibration uncertainties in this band that have not been completely solved at this time (Orr et al. 1998). As recommended by the SDC, LECS data have been then fitted only in the 0.1–4 keV range, while MECS data were fitted in the 1.8–10.5 keV range.
Spectra were accumulated for each observation using the SAXSELECT tool, with 8.5 and 4 arcmin extraction radii for the LECS and MECS respectively, which include more than 90% of the flux. The count rates given in Table 2 were obtained using XIMAGE and refer to channels 10 to 950 for the LECS and 36 to 220 for the MECS. The BeppoSAX images were also checked for the presence of serendipitous sources which could affect the data analysis. The ROSAT public images of our sources were also inspected (see below). Most of our objects have at least one source within the LECS extraction radius in the ROSAT PSPC and/or MECS images but at a flux level which is at maximum 10% of the target (and in most cases below 3%). Serendipitous sources in the field are then unlikely to affect our results at a significant level. We also looked for variability on timescales of 500 and 1,000 seconds for each observation, with null results.
The LECS and MECS background is low, although not uniformly distributed across the detectors, and rather stable. For this reason, and in particular for the spectral analysis, it is better to evaluate the background from blank fields, rather than in annuli around the source. Background files accumulated from blank fields, obtained from the SDC public ftp site, were then used.
## 4 Spectral Fits
Spectral analysis was performed with the XSPEC 9.00 package, using the response matrices released by SDC in early 1997. The spectra were rebinned such that each new bin contains at least 20 counts (using the command GRPPHA within FTOOLS). Various checks using some of the rebinning files provided by SDC have shown that our results are independent of the adopted rebinning within the uncertainties. The X-ray spectra of our sources are shown in Figure 1 (which includes both BeppoSAX and ROSAT data: see Sect. 6).
### 4.1 LECS Data: Constraining N<sub>H</sub>
At first, we fitted the LECS data with a single power-law model with Galactic and free absorption. The absorbing column was parameterized in terms of N<sub>H</sub>, the HI column density, with heavier elements fixed at solar abundances. Cross sections were taken from Morrison and McCammon (1983). For one set of fits N<sub>H</sub> was fixed at the Galactic value, derived from Elvis, Lockman & Wilkes (1989) for PHL 1657 and from the nh program at HEASARC (based on Dickey & Lockman 1990), for the remaining objects. N<sub>H</sub> was also set free to vary to check for internal absorption and/or indications of a “soft-excess.”
Our results are presented in Table 3, which gives the name of the source in column (1), the energy index $`\alpha _\mathrm{x}`$ and reduced chi-squared and number of degrees of freedom, $`\chi _\nu ^2`$(dof), in columns (2)-(3) for the fixed-N<sub>H</sub> fits; columns (4)-(6) give N<sub>H</sub>, $`\alpha _\mathrm{x}`$ and $`\chi _\nu ^2`$(dof) for the free-N<sub>H</sub> fits. Finally, in column (7) we report the unabsorbed X-ray flux in the 0.1–4.0 keV range (multiplied by a normalization constant derived from the combined LECS plus MECS fits: see next section). The errors quoted on the fit parameters are the 90% uncertainties for one and two interesting parameters for Galactic and free N<sub>H</sub> respectively.
Two results are immediately apparent from Table 3: the fitted energy indices are flat ($`\alpha _\mathrm{x}`$ $`<1`$); and the fitted N<sub>H</sub> values are consistent with the Galactic ones. This is confirmed by an $`F`$-test which shows that the addition of N<sub>H</sub> as a free parameter does not result in a significant improvement in the $`\chi ^2`$ values. We will then assume Galactic N<sub>H</sub> in the combined LECS and MECS fits. For the two objects without LECS data this assumption is also justified by the fact that the fit to the MECS data is not strongly dependent on N<sub>H</sub>.
The spectrum of PHL 1657 is more complicated than a simple power-law: the residuals show a clear excess at $`E\lesssim 0.7`$ keV. Indeed a broken power-law model significantly improves the fit (see below). Weaker “soft-excesses” cannot be excluded in the two other sources (see below) so the fluxes given in Table 3, based on a single power-law fit to the data, are almost certainly underestimated.
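Schematically, the model being fitted here is an absorbed power law, $`F(E)=AE^{\alpha _\mathrm{x}}e^{N_\mathrm{H}\sigma (E)}`$. The toy fit below uses a crude $`E^{8/3}`$ stand-in for the Morrison & McCammon cross sections and purely synthetic data, so it only illustrates the model form and the roles of the free parameters, not the actual XSPEC analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def absorbed_powerlaw(e_kev, norm, alpha, nh_1e22):
    """Power law times photoelectric absorption; sigma(E) below is only a rough
    E^(-8/3) approximation to tabulated cross sections."""
    sigma = 2.0e-22 * e_kev ** (-8.0 / 3.0)          # cm^2 per H atom, crude
    return norm * e_kev ** (-alpha) * np.exp(-nh_1e22 * 1.0e22 * sigma)

e = np.linspace(0.2, 4.0, 60)                        # energy grid, keV
truth = absorbed_powerlaw(e, 1.0, 0.8, 0.04)         # synthetic "data" parameters
rng = np.random.default_rng(2)
data = truth * (1.0 + 0.05 * rng.normal(size=e.size))
popt, pcov = curve_fit(absorbed_powerlaw, e, data, p0=[1.0, 1.0, 0.1],
                       sigma=0.05 * truth, absolute_sigma=True)
print(popt, np.sqrt(np.diag(pcov)))                  # fitted norm, alpha, N_H
```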
### 4.2 LECS and MECS Data
Our results from the jointly fitted LECS and MECS data assuming a single power-law model with Galactic absorption are presented in Table 4, which gives the name of the source in column (1), $`\alpha _\mathrm{x}`$ and $`\chi _\nu ^2`$(dof) in columns (2)-(3), and the unabsorbed X-ray flux in the 2–10 keV range in column (4). The errors quoted on $`\alpha _\mathrm{x}`$ are 90% uncertainties.
Due to the uncertainties in the calibration of the LECS instrument, the LECS/MECS normalization has been let free to vary. The resulting values, in the 0.6 – 0.8 range, are consistent with the expected one (Giommi & Fiore, private communication).
The striking result is that all the sources have relatively flat X-ray energy indices. The mean value is $`\alpha _\mathrm{x}=0.75\pm 0.02`$ (here and in the following we give the standard deviation of the mean). This implies that the spectra are still rising in a $`\nu f_\nu `$ plot, and therefore that the peak of the X-ray emission in the BeppoSAX band is at $`E>10`$ keV. Table 4 also reports (in the footnotes) the fits to the MECS data for the three sources with both LECS and MECS observations. The energy indices in the 1.8–10.5 keV range have a mean value $`\alpha _\mathrm{x}=0.74\pm 0.03`$, basically the same as in the whole 0.1–10.5 keV band.
As mentioned in the Introduction, the spectra of the class of sources under study are generally steep at lower X-ray energies (and there is indeed strong evidence for a steeper X-ray component in the LECS data of PHL 1657). We then tried to fit a broken power-law model to our data. A significant improvement in the fit (96.5% level) was obtained only for PHL 1657, whose residuals again showed a clear excess at $`E\lesssim 0.7`$ keV. The best-fit parameters are $`\alpha _\mathrm{S}=1.3`$, $`\alpha _\mathrm{H}=0.76\pm 0.12`$, and $`E_{\mathrm{break}}=0.9`$ keV.
The fact that the other four sources show no significant evidence for a concave spectrum needs to be investigated with more data at soft X-ray energies. Hence the need to resort to ROSAT PSPC data (see Sect. 5).
### 4.3 The PDS Detection of Pictor A
Only the brightest source of our sample, Pictor A, has been detected by the PDS instrument (up to $`50`$ keV; see Fig. 1) despite the relatively short exposure time (6.8 ks). The count rate is $`0.18\pm 0.05`$ ct/s, that is, the significance of the detection is about 3.6 $`\sigma `$. Given the relatively small statistics, it is hard to constrain the high energy ($`E>10`$ keV) spectrum of Pictor A. A parametrization of the MECS and PDS data with a single power-law model gives a best fit value of $`\alpha _\mathrm{x}`$ perfectly consistent with that derived from MECS data only. A broken power-law model, with the soft energy index fixed to the value obtained from the MECS data (see Table 4), gives no significant improvement in the fit (and the hard energy index is consistent with the soft one). Therefore, the PDS data appear to lie on the extrapolation of the lower energy data. There might be a slight excess in the residuals above 10 keV but as described above this is not significant and does not warrant more complicated models.
## 5 ROSAT PSPC Data
All our objects were found to have ROSAT PSPC data: namely, OF $`109`$, Pictor A, PHL 1657 and PKS 2152$``$69 were all targets of ROSAT observations, while data for OM $`161`$ were extracted from the ROSAT All-Sky Survey.
### 5.1 Data Analysis
In the analysis of the pointed PSPC observations, we first determined the centroid X-ray position by fitting a two-dimensional Gaussian to the X-ray image. Source counts were then extracted from a circular region with 3 arcmin radius around the centroid source position. The local background was determined from an annulus with inner radius 5 arcmin and outer radius 8 arcmin. If any X-ray sources were detected in the background region, they were first subtracted from the data.
The source counts from OM $`161`$ were extracted from a circular region with radius 5 arcmin from the All-Sky Survey data. The larger extraction radius compared to the pointed PSPC observations accounts for the larger point spread function in the Survey. The local background was determined from two source-free regions with radius 5 arcmin, displaced from the source position along the scanning direction of the satellite during the All-Sky Survey.
After background subtraction, the data were vignetting and dead time corrected and finally binned into pulse height channels. Only channels 12-240 were used in the spectral analysis, due to existing calibration uncertainties at lower energies. The pulse height spectra were rebinned to achieve a constant signal-to-noise ratio in each spectral bin, which ranged from 3 to 6, depending on the total number of photons.
### 5.2 Spectral Fits
As for the LECS data, we fitted the ROSAT PSPC data with a single power-law model with Galactic and free absorption. Our results are presented in Table 5, which gives the name of the source in column (1), the ROSAT observation request (ROR) number in column (2), $`\alpha _\mathrm{x}`$ and $`\chi _\nu ^2`$(dof) in columns (3)-(4) for the fixed-N<sub>H</sub> fits; columns (5)-(7) give N<sub>H</sub>, $`\alpha _\mathrm{x}`$ and $`\chi _\nu ^2`$(dof) for the free-N<sub>H</sub> fits. Finally, in column (8) we report the unabsorbed X-ray flux in the 0.1–2.4 keV range. The errors quoted on the fit parameters are the 90% uncertainties for one and two interesting parameters for Galactic and free N<sub>H</sub> respectively.
The main results of the ROSAT PSPC fits are the following: 1. the fitted energy indices are steeper than the MECS (plus LECS) ones (with the exception of OM $`161`$ for which the ROSAT $`\alpha _\mathrm{x}`$ has large uncertainties); 2. there is no evidence for intervening absorption above the Galactic value in our sources, with the exception of PKS 2152$``$69, for which the $`F`$-test shows that the addition of N<sub>H</sub> as a free parameter results in a significant improvement (98.6% level) in the goodness of the fit (the fitted N<sub>H</sub> is about 50% higher than the Galactic value); 3. the single power-law fit is not great, although still acceptable, for Pictor A ($`P_{\chi ^2}\simeq 5\%`$) and PHL 1657 ($`P_{\chi ^2}\simeq 7\%`$).
The mean difference between the ROSAT PSPC and MECS (plus LECS) energy indices (excluding OM $`161`$) is $`0.44\pm 0.11`$, clearly indicative of a flattening at high energies, with the emergence of a hard component.
### 5.3 A Thermal Component in the X-ray Spectra of Pictor A and PKS 2152$``$69
Pictor A and PKS 2152$``$69 are relatively nearby objects ($`z\simeq 0.035`$). The ROSAT PSPC images show evidence for an extended component on a scale $`\gtrsim 50\mathrm{"}`$ for PKS 2152$``$69 and $`\gtrsim 70\mathrm{"}`$ for Pictor A, which correspond to about 40 and 70 kpc respectively. PKS 2152-69 is also seen extended from ROSAT HRI data (Fosbury, private communication). Early-type galaxies are known to have diffuse emission from hot gas on these scales (Forman, Jones & Tucker 1985), so we added a thermal component (Raymond & Smith 1977) to the power-law model, assuming solar abundances (our results are only weakly dependent on the adopted abundances).
As a check of our results we also fitted the spectra of OF $`109`$ and PHL 1657 with a power-law plus thermal component. No need for an extra component was found, which is consistent with the fact that these two sources are at higher redshift (i.e., the putative thermal component is completely swamped by the stronger non-thermal emission).
It is interesting to note that the dominant component in the X-ray emission of our two nearest sources is definitely non-thermal, but nevertheless the data indicate a 5–10% contribution from thermal emission. This is confirmed by an analysis of the PSPC images. The relevance of extended emission was in fact estimated by subtracting the point spread function (PSF) from the radial profile of the sources. For Pictor A, the fraction of photons above the PSF is 4%, while for PKS 2152$``$69 it is 13% (for the two other sources with PSPC pointed data these fractions are less than 1%, as expected). Given the statistical and systematic uncertainties in the PSF (due to residual wobble motion, attitude uncertainties, etc.) these fractions agree very well with the results from the spectral decomposition (5 and 10% respectively).
The gas temperatures we find ($`<1`$ keV) are very reasonable for gas associated with an elliptical galaxy (e.g., Forman et al. 1985). To check that the observed luminosities are also physically plausible, we performed the following test. There is a well-known strong correlation between X-ray luminosity and absolute blue magnitude for elliptical galaxies (e.g., Forman et al. 1985, 1994). Integrated blue magnitudes for Pictor A and PKS 2152$``$69, obtained from NED, imply $`M_\mathrm{B}\simeq -20.7`$ and $`-22`$ respectively. The 0.5–4.5 keV luminosities for the thermal components of the two sources are $`L_{0.5-4.5}\simeq 4\times 10^{42}`$ erg s<sup>-1</sup> for Pictor A, with a rather large 90% error range ($`10^{42}`$–$`10^{43}`$) while for PKS 2152$``$69 we get $`L_{0.5-4.5}\simeq 2\times 10^{42}`$ erg s<sup>-1</sup> (90% error range: $`3\times 10^{41}`$–$`8\times 10^{42}`$). These numbers, compared against Fig. 4 of Forman et al. (1994), show that while the X-ray power in the thermal component of PKS 2152$``$69 is not unusual for its optical power, that of Pictor A is about an order of magnitude larger than the maximum values of elliptical galaxies of the same absolute magnitude. It then seems that the intrinsic power of the thermal component is too large to be associated with the galaxy.
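As a cross-check of the kind of numbers quoted above, an observed flux converts to a luminosity through $`L=4\pi D^2F`$ with, at these low redshifts, a simple Hubble-law distance $`D=cz/H_0`$. The Hubble constant and the input flux in the sketch below are assumptions chosen for illustration, not necessarily the values adopted in this paper.

```python
import numpy as np

MPC_IN_CM = 3.086e24
C_KMS = 2.998e5
H0 = 50.0                       # km/s/Mpc, an assumed value

def luminosity_erg_s(flux_cgs, z):
    """L = 4 pi D^2 F with D = cz/H0 (adequate at these low redshifts)."""
    d_cm = (C_KMS * z / H0) * MPC_IN_CM
    return 4.0 * np.pi * d_cm ** 2 * flux_cgs

# A toy 0.5-4.5 keV flux of 7e-13 erg/cm^2/s at z = 0.035 gives a few times 1e42 erg/s.
print(luminosity_erg_s(7.0e-13, 0.035))
```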
As the relatively low gas temperature inferred from the data is also typical of small groups, one could speculate that most of the thermal emission in Pictor A comes from a group associated with this source. An inspection of Fig. 8 of Ponman et al. (1996), which reports the X-ray luminosity – temperature relation for Hickson’s groups, shows that, within the rather large errors, Pictor A might fall in the correct portion of the plot, although the best fit values ($`L_\mathrm{x}\simeq 4\times 10^{42}`$ erg s<sup>-1</sup>, $`kT=0.55`$ keV) would put it above the observed correlation. However, the richness of the environment of this source is very low (Zirbel 1997), inconsistent even with a small group. It might therefore be speculated that Pictor A is another example of a so-called fossil group (Ponman et al. 1994), i.e. a single elliptical galaxy that is considered to be the result of a merging process of a compact group. This merging is believed not to affect the X-ray halo of the group (Ponman & Bertram 1993) and the galaxies formed in this way will still show the extended thermal emission component of the intra-group gas although they appear isolated.
In summary, while the thermal component in PKS 2152$``$69 is consistent with emission from a hot corona around the galaxy, in the case of Pictor A the intrinsic power of this component is too high. However, the luminosity of the thermal component is consistent with that of a compact group of galaxies. Since Pictor A appears to be isolated, we might have another example of a fossil group.
We found no physically meaningful evidence for the presence of a thermal component in the LECS spectra of the three sources for which we have the relevant data.
## 6 ROSAT and BeppoSAX Data: The Whole Picture
The last step is to put BeppoSAX and ROSAT PSPC data together to better constrain the shape of the X-ray spectra, especially at low energies. Two of our sources, in fact, have no LECS data, while for the remaining three only less than 10 LECS bins are available below 1 keV. The ROSAT effective area is larger than the LECS effective area in the range of overlap, providing more leverage in the soft X-rays.
As before, we left free the LECS/MECS normalization and, as before, the fitted values are consistent with the expected ones. We also left free the PSPC/MECS normalization, to allow for any X-ray variability, which is seen in at least some lobe-dominated broad-line sources (e.g., 3C 390.3: Leighly et al. 1997; 3C 382: Barr & Giommi 1992). However, the PSPC/MECS normalization was, on average, around 1, with a maximum excursion of 30–40%. No strong variability is then present between the ROSAT and BeppoSAX data.
As it turned out, in all cases for which we had enough statistics at low energies (i.e., excluding OM $`161`$) a broken power-law model resulted in a significantly improved fit ($`99.9\%`$ level) over a single power-law model over the whole 0.1 – 10.5 keV range. Our results are reported in Table 7 which gives the name of the source in column (1), $`\alpha _\mathrm{S}`$, $`\alpha _\mathrm{H}`$, and $`E_{\mathrm{break}}`$ in columns (2)-(4), $`\chi _\nu ^2`$(dof) in column (5) and finally the $`F`$-test probability in column (6). The errors quoted on the fit parameters are the 90% uncertainties for three interesting parameters. Based on the LECS and ROSAT PSPC results, Galactic N<sub>H</sub> as been assumed for all sources apart from PKS 2152$``$69. The combined data with the best fit single power-law model (to show the spectral concavity) are shown in Figure 1.
As can be seen from the Table, the model parameters are extremely well determined. Not surprisingly, the $`\alpha _\mathrm{S}`$ values are very similar to the ROSAT PSPC energy indices, while the $`\alpha _\mathrm{H}`$ values are basically the same as the MECS (plus LECS) energy indices. The spectra are obviously concave, with $`\alpha _\mathrm{S}-\alpha _\mathrm{H}=0.49\pm 0.09`$ and energy breaks around 1.5 keV (Pictor A has a break at about 4 keV but with a large error due to the relatively small difference between the soft and the hard spectral indices). The fact that the breaks fall at relatively low energies explains why the energy indices derived from the LECS fits are basically the same as those obtained from the combined LECS and MECS fits.
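The broken power-law form and the F-test used throughout this section can be summarized as in the sketch below. The chi-square values and degrees of freedom fed to the test are placeholders rather than the entries of Table 7.

```python
import numpy as np
from scipy.stats import f as f_dist

def broken_powerlaw(e_kev, norm, a_soft, a_hard, e_break):
    """Two power laws matched continuously at the break energy."""
    soft = norm * e_kev ** (-a_soft)
    hard = norm * e_break ** (a_hard - a_soft) * e_kev ** (-a_hard)
    return np.where(e_kev < e_break, soft, hard)

def f_test(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """Probability that the chi-square drop from adding parameters is a chance fluctuation."""
    extra = dof_simple - dof_complex
    f_stat = ((chi2_simple - chi2_complex) / extra) / (chi2_complex / dof_complex)
    return f_dist.sf(f_stat, extra, dof_complex)

# Placeholder numbers: a drop of 30 in chi-square for 2 extra parameters.
print(f_test(chi2_simple=140.0, dof_simple=100, chi2_complex=110.0, dof_complex=98))
```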
The addition of a thermal component in Pictor A and PKS 2152$``$69 improves significantly the fit ($`>99.8\%`$ level) even in the case of a broken power-law model. As for the single power-law plus thermal component, free N<sub>H</sub> fits do not result in a significant improvement in the goodness of the fits, so Galactic N<sub>H</sub> is assumed. Best-fit parameters are $`\alpha _\mathrm{S}=0.77`$, $`\alpha _\mathrm{H}=0.66`$, $`E_{\mathrm{break}}=2.1`$ keV, $`kT=0.39`$ keV, for Pictor A, and $`\alpha _\mathrm{S}=0.92`$, $`\alpha _\mathrm{H}=0.68`$, $`E_{\mathrm{break}}=1.4`$ keV, $`kT=0.57`$ keV, for PKS 2152$``$69. These are perfectly consistent with those obtained from the broken power-law fit to the BeppoSAX and ROSAT PSPC data and with the temperatures derived from the ROSAT PSPC data, but the uncertainties are now poorly determined because of the relatively large number of parameters.
## 7 Iron Lines?
A number of AGN exhibit in their X-ray spectra iron K$`\alpha `$ lines which are characteristic of relativistic effects in an accretion disk surrounding a central black hole (e.g., Nandra & Pounds 1994). It appears that radio-loud AGN have weaker iron lines than radio-quiet ones, although some low-luminosity, radio-loud sources are known to have strong iron lines (Nandra et al. 1997).
We searched for Fe K$`\alpha `$ emission in our MECS spectra: none was found. The 90% upper limits on the equivalent width (in the source rest frame) of an unresolved iron line ($`\sigma =0`$) at energy 6.4 keV are the following: OF $`109`$: 380 eV; Pictor A: 150 eV; OM $`161`$: 300 eV; PHL 1657: 170 eV; PKS 2152$``$69: 400 eV. Note that for any broader line the limits are correspondingly higher. Our result for Pictor A is consistent with the upper limit of 100 eV given by Eracleous & Halpern (1998) based on a $`65`$ ks ASCA observation, and our limit for PHL 1657 agrees with the upper limit of 140 eV given by Williams et al. (1992) based on a 15 ks Ginga observation.
We note that the upper limits on iron lines in our sources are not very stringent and still consistent with values found in other lobe-dominated broad-lined sources (e.g., Inda et al. 1994; Lawson & Turner 1997; Woźniak et al. 1998).
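For reference, an upper limit on the photon flux of an unresolved line converts to an equivalent-width limit simply by dividing by the continuum photon flux density at the line energy; the two input numbers in the sketch below are invented placeholders, not values from this analysis.

```python
def equivalent_width_ev(line_norm, cont_at_line):
    """line_norm: limit on the line photon flux [ph cm^-2 s^-1];
    cont_at_line: continuum photon flux density at 6.4 keV [ph cm^-2 s^-1 keV^-1].
    Returns the equivalent width in eV (the source frame needs an extra 1+z factor)."""
    return 1.0e3 * line_norm / cont_at_line

# Toy values giving an equivalent width of roughly 150 eV.
print(equivalent_width_ev(2.0e-5, 1.3e-4))
```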
## 8 Discussion
We have presented the first systematic, hard X-ray study of a well-defined sample of lobe-dominated, broad-line AGN. Our main result is that this class of objects has a hard X-ray spectrum with $`\alpha _\mathrm{x}\simeq 0.75`$ at $`E\gtrsim `$ 1–2 keV. In addition, we also detect a thermal emission component present at low energies in the spectra of two BLRGs, but we find that this component contributes only $`\lesssim 10\%`$ of the total flux.
Hard X-ray emission is also present in core-dominated radio-loud quasars, and the detection of similarly flat spectra in our sample of lobe-dominated AGN has important implications for our understanding of the relation between the two classes.
In order to make a more quantitative comparison between the hard X-ray spectra of the two classes we searched the literature for a study of core-dominated/flat-spectrum radio quasars similar to ours, i.e., based on a well-defined, homogeneous sample. Surprisingly, we found none. We then decided to collect all the information we could find on the 2–10 keV spectra of FSRQ, excluding objects with large ($`>0.5`$) uncertainties on the X-ray spectral index. The data come from Ginga and EXOSAT/ME observations published in Makino (1989), Ohashi et al. (1989, 1992), Lawson et al. (1992), Saxton et al. (1993), Williams et al. (1992), Sambruna et al. (1994) and Lawson & Turner (1997; $`\alpha _\mathrm{x}`$ in the 2–18 keV range). The resulting sample includes 15 objects and is characterized by $`\alpha _\mathrm{x}=0.70\pm 0.06`$, perfectly consistent with our results.
We stress again that the sample of FSRQ is heterogeneous whereas our sample, although relatively small, is well defined and has very well determined spectral indices. However, within the limits introduced by the biases likely present in the FSRQ sample, lobe-dominated and core-dominated broad-line AGN appear to have practically identical hard X-ray spectra in the 2–10/18 keV region.
The objects in our sample are lobe dominated and therefore we investigated if inverse Compton scattering of cosmic microwave background photons into the X-ray band by relativistic electrons in the diffuse radio lobes could be responsible for the observed X-ray emission (see e.g., Harris & Grindlay 1979; Feigelson et al. 1995). We find that this is not the case and the derived X-ray emission is almost two orders of magnitude lower than observed. This is further supported by the fact that no strong resolved components have been found for our objects by ROSAT. The hotspot in Pictor A is known to have associated X-ray emission but this is very weak and indeed only marginally detected by Einstein (Röser & Meisenheimer 1987; Perley et al. 1997).
The hard X-ray component present in FSRQ is usually interpreted as due to inverse Compton emission, most likely due to a combination of synchrotron self-Compton emission (with the same population of relativistic electrons producing the synchrotron radiation and then scattering it to higher energies) and Comptonization of external radiation (possibly emitted by material being accreted by the central object; see e.g., Sikora, Begelman & Rees 1994). As the hard X-ray emission in our sources has a similar, flat slope, it seems natural to attribute it to the same emission mechanism. The smaller effect of Doppler boosting for lobe dominated sources would then make this component appear and become dominant only at high energies, and this is exactly what is shown by our data. This is further confirmed and clearly shown in Figure 2, where we plot the 2–10 keV spectral index versus the core dominance parameter $`R`$. (The open triangles indicate the SSRQ and BLRG found in the literature, in some of the papers listed above for FSRQ). It is evident from the figure that no correlation is present between the two quantities. This seems in disagreement with the correlation claimed by Lawson et al. (1992) (based on EXOSAT/ME data) and Lawson & Turner (1997) (based on Ginga data), the only previous hard X-ray studies which included some steep-spectrum radio quasars. We note that the spectral indices derived from EXOSAT data had relatively large errors and that our sample of FSRQ, SSRQ and BLRG is larger than the samples used by Lawson et al. (1992) and Lawson & Turner (1997) (26 vs. 18 and 15 objects respectively) especially as far as lobe-dominated broad-line sources are concerned (where we have basically doubled the number of available sources). Moreover, the correlation claimed by Lawson & Turner (1997) becomes significant only by excluding the three BLRG in their sample (the exclusion of the BLRG has no effect on the lack of correlation between $`\alpha _\mathrm{x}`$ and $`R`$ in our sample). Larger homogeneous samples (especially of FSRQ) are clearly required in order to investigate this issue in more detail.
Our hard X-ray spectra are well fitted by a single power law and we find no evidence for the hard excess often seen in low-luminosity AGN (e.g., Nandra & Pounds 1994) and interpreted as due to Compton reflection of the X-rays off optically thick material (Guilbert & Rees 1988; Lightman & White 1988). In the case of Pictor A, this is also confirmed by the results of Eracleous & Halpern (1998) based on a longer ASCA observation. We note, however, that this reflection component should normally be apparent above $`10`$ keV and that our data reach these energies only for Pictor A (and even then with relatively small statistics; see Sect. 4.3).
Woźniak et al. (1998) have recently studied the X-ray (and soft $`\gamma `$-ray) spectra of BLRG using Ginga, ASCA, OSSE and EXOSAT data. Their object list includes 4 lobe-dominated BLRG, namely 3C 111, 3C 382, 3C 390.3, and 3C 445. The X-ray spectra have an energy index $`\alpha _\mathrm{x}\simeq 0.7`$, with some moderate absorption. Fe K$`\alpha `$ lines have also been detected with typical equivalent widths $`\sim 100`$ eV. Any Compton reflection component is constrained to be weak and is unambiguously detected only in 3C 390.3. Our results are consistent with their findings.
Our MECS results for PHL 1657 are in agreement with the Ginga energy slope obtained by Williams et al. (1992: see their Table 3), while our $`2-10`$ keV flux appears to be $`35\%`$ smaller. Eracleous & Halpern (1998) reported on a $`65`$ ks ASCA observation of Pictor A. Our LECS and MECS data appear to require a slightly flatter spectral index than given by these authors ($`0.77\pm 0.03`$, from the SIS and GIS fits), while our $`2-10`$ keV flux is similar to that derived from the SIS (but $`10\%`$ smaller than the $`2-10`$ keV flux estimated from the GIS).
As discussed in the Introduction, various previous studies had found that SSRQ displayed a steep soft X-ray spectrum (see, e.g., Fiore et al. 1998). In fact, despite the hard component at higher energies, we nevertheless observe a steeper spectrum at lower energies. In the whole ROSAT band we find $`\alpha _\mathrm{x}=1.19\pm 0.13`$ (excluding OM$`-`$161, for which the ROSAT $`\alpha _\mathrm{x}`$ has large uncertainties), which is intermediate between the values obtained for SSRQ by Fiore et al. (1998) between $`0.4-2.4`$ keV ($`\alpha _\mathrm{x}=1.14`$) and $`0.1-0.8`$ keV ($`\alpha _\mathrm{x}=1.37`$). Our best fits to the whole $`0.1-10`$ keV range indeed require a spectral break $`\mathrm{\Delta }\alpha _\mathrm{x}\simeq 0.5`$ between the soft and hard energy slopes at about $`1-2`$ keV. The dispersion in the energy indices is larger for the soft component. We find $`\sigma (\alpha _\mathrm{S})=0.28`$ while $`\sigma (\alpha _\mathrm{H})=0.10`$, which might suggest a more homogeneous mechanism at higher energies. We note that Fiore et al. (1998) also found a concave spectrum ($`\alpha _{0.1-0.8\mathrm{keV}}-\alpha _{0.4-2.4\mathrm{keV}}\simeq 0.2`$) for radio-loud AGN (both flat- and steep-spectrum) in the ROSAT band.
There are some concerns (R. Mushotzky, private communication) of miscalibration between ROSAT, on one side, and BeppoSAX, ASCA and RXTE on the other side, which could affect some of our conclusions. Namely, the inferred ROSAT spectral indices might be steeper than those derived, in the same band, by other X-ray satellites (a detailed comparison of simultaneous ASCA/RXTE/BeppoSAX spectra of 3C 273 is given by Yaqoob et al. in preparation). The spectral breaks we find in the spectra of our objects could then be partly due to this effect. This is clearly an important point, very relevant for X-ray astronomy, but one which goes beyond the scope of this paper. Nevertheless, we can still comment on this as follows: 1. the “BeppoSAX only” spectrum of PHL 1657 shows, by itself, significant evidence of a break, with best fit parameters consistent (within the rather large errors) with those obtained from the full ROSAT and BeppoSAX fit (see Sect. 4.2). At least in this source, then, the evidence for a spectral break is “ROSAT independent.” The fact that this is not the case for the two other objects with LECS data, Pictor A and OM$`-`$161, can be explained by the relatively smaller break in the first object and the small LECS statistics in the latter. In other words, the available evidence is consistent with breaks similar to those derived from the combined ROSAT and BeppoSAX fits being present also in the LECS/MECS data; 2. our main result, that is, the presence of a hard X-ray component in all our sources at $`E>1-2`$ keV, is based on BeppoSAX data and therefore clearly independent of any possible ROSAT miscalibration.
One could also worry about possible miscalibrations between different X-ray instruments affecting the (lack of ) correlation in Fig. 2. However, the results of Woźniak et al. (1998) appear to exclude that possibility. The ASCA, Ginga and EXOSAT X-ray spectra of the sources studied by these authors, in fact, agree within the errors, particularly in the hard X-ray band. Given the good cross-calibration between BeppoSAX and ASCA, a large miscalibration between BeppoSAX, Ginga and EXOSAT (the instruments used to obtain the data used in Fig. 2) seems to be ruled out.
## 9 Conclusions
The main conclusions of this paper, which presents BeppoSAX data for a well defined sample of 2-Jy steep-spectrum radio quasars and broad-line radio galaxies, can be summarized as follows.
All five lobe-dominated, broad-line sources included in this study have been clearly detected up to 10 keV (50 keV for Pictor A) and display a flat X-ray spectrum ($`\alpha _\mathrm{x}\simeq 0.75`$) in the $`2-10`$ keV range. One source (out of the three with LECS and MECS data, i.e., reaching down to 0.1 keV) shows significant evidence of a spectral break at $`E\simeq 1`$ keV. When ROSAT PSPC data, available for all five sources, are included in the fit, the evidence for concave overall spectra, with $`\alpha _{\mathrm{soft}}-\alpha _{\mathrm{hard}}\simeq 0.5`$ and $`E_{\mathrm{break}}\simeq 1.5`$ keV, becomes highly significant for all objects with good enough statistics at low energies (i.e., excluding OM$`-`$161). No iron lines are detected in our spectra but the upper limits we derive are not very stringent (due to the relatively short exposure times). The flat high-energy slope we find for our lobe-dominated sources is consistent with the hard X-ray emission present in core-dominated radio quasars. In fact, by collecting data from the literature on the X-ray spectra of radio quasars, we show that the available data are consistent with no dependence between the $`2-10`$ keV spectral indices and the core-dominance parameter, somewhat in contrast with the situation at lower energies. Finally, a thermal emission component is present at low energies in the spectra of the two broad-line radio galaxies, although only at the $`10\%`$ level.
Three more targets have been approved as part of this BeppoSAX observing program (one at a lower priority). We will be presenting results on these additional objects, and a more thorough discussion of the implications of our results in terms of emission processes, orientation, and more generally unified schemes in a future paper.
## Acknowledgements
We acknowledge useful discussions with Alfonso Cavaliere, Andrea Comastri, Fabrizio Fiore, Paolo Giommi, Paola Grandi, Richard Mushotzky, Tahir Yaqoob. We thank Paola Grandi also for her help with the analysis of the PDS data of Pictor A. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
Broadband detection of squeezed vacuum:
A spectrum of quantum states
G. Breitenbach, F. Illuminati, S. Schiller, and J. Mlynek
Fakultät für Physik, Universität Konstanz, D-78457 Konstanz, Germany
http://quantum-optics.physik.uni-konstanz.de
Permanent address: Dipartimento di Fisica, Università di Salerno, and INFM, Unità di Salerno, I-84081 Salerno, Italy
Phone: +49(7531) 883842, FAX +49(7531) 883072, e-mail: Gerd.Breitenbach@uni-konstanz.de
We demonstrate the simultaneous quantum state reconstruction of the spectral modes of the light field emitted by a continuous wave degenerate optical parametric amplifier. The scheme is based on broadband measurement of the quantum fluctuations of the electric field quadratures and subsequent Fourier decomposition into spectral intervals. Applying the standard reconstruction algorithms to each bandwidth-limited quantum trajectory, a ”spectrum” of density matrices and Wigner functions is obtained. The recorded states show a smooth transition from the squeezed vacuum to a vacuum state. In the time domain we evaluated the first order correlation function of the squeezed output field, showing good agreement with theory.
The experimental techniques of quantum state reconstruction, first applied five years ago, have opened a new field of research, wherein simple quantum mechanical systems can be characterized completely by density matrices and Wigner functions . Our system consists of electromagnetic field modes at optical frequencies. We have previously generated the whole family of squeezed states of light using an optical parametric amplifier (OPA) and reconstructed these states using the method of optical homodyne tomography . The reconstructions presented therein were limited to essentially one particular pair of modes at frequencies $`\omega \pm \mathrm{\Omega }`$, where $`\mathrm{\Omega }`$ is a radio frequency and $`\omega `$ the optical frequency. The spectral bandwidth of this mode pair was $`\mathrm{\Delta }\mathrm{\Omega }/2\pi `$ = 100 kHz. Since an OPA pumped below threshold emits a frequency spectrum, with a bandwidth determined by the cavity linewidth in the order of several MHz, the output of the OPA is described more precisely by a whole spectrum of quantum states . General schemes for multimode reconstruction become quite complicated already at the two-mode level . So far only one experiment demonstrating photon-number correlations in the time domain by measuring the photon statistics via dual-pulse phase-averaged homodyne detection has been carried out . In this work we use a simple measurement scheme to record simultaneously the complete statistical information about the quantum states of all mode-pairs emitted by the OPA, disregarding correlations between the different mode-pairs. This enables us to reconstruct the Wigner functions and density matrices corresponding to a large set of mode pairs.
The experimental setup is shown in Fig.1. The centerpiece of the experiment is a monolithic standing-wave lithium-niobate OPA, pumped by a frequency-doubled continuous-wave Nd:YAG laser (532 nm). The output of the OPA is analyzed by a homodyne detector whose difference current $`i_{-}`$ is directly proportional to the OPA’s output electric field. In order to obtain simultaneously information about the quantum states of all modes emitted by the OPA, the homodyne detector current $`i_{-}`$ was recorded with a bandwidth covering the whole frequency range up to 30 MHz, exceeding the OPA’s 17.5 MHz cavity linewidth. For this purpose fast photodiodes (Epitaxx ETX500T, specified bandwidth 140 MHz) with broadband amplifiers and a 30 MHz A/D-board (IMTEC) for data collection were employed. The $`i_{-}`$ data (about 500 000 points with 12 bit resolution) are taken while the local oscillator phase is swept by $`2\pi `$ in approximately 8 ms. The recorded noise trace is split into 16 separate traces, each containing the information of a mode pair of small bandwidth 1.9 MHz, centered at a sideband frequency $`\omega \pm \mathrm{\Omega }`$. This is done by dividing the Fourier transform of the recorded trace into 16 equal intervals, and taking the inverse transform of each interval separately. The same is done for a measured $`i_{-}`$-trace of a vacuum state, in order to normalize each of the 16 traces. Due to electronic excess noise of the detection system at low frequencies the first of the 16 traces, containing the spectrum between 0 and 1.9 MHz, is discarded. The quantum states of the remaining 15 modes can be reconstructed the same way as described in , employing the inverse Radon transform and the pattern functions of the harmonic oscillator . Thus a whole “spectrum” of Wigner functions and density matrices is obtained from a single temporal record.
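The band-splitting step can be summarized in a few lines of code. The following Python sketch is only illustrative: the array sizes, the synthetic test data and the normalization of each band by the standard deviation of the corresponding vacuum band are our own assumptions, not a description of the original analysis software.

```python
import numpy as np

def split_into_bands(trace, n_bands=16):
    """Split a real, uniformly sampled trace into n_bands band-limited traces.

    The rFFT of the record is divided into n_bands equal frequency intervals;
    each interval is inverse-transformed separately, so band k contains only
    the fluctuations in the k-th spectral window (cf. the 1.9 MHz slices)."""
    spec = np.fft.rfft(trace)
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    bands = []
    for k in range(n_bands):
        masked = np.zeros_like(spec)
        masked[edges[k]:edges[k + 1]] = spec[edges[k]:edges[k + 1]]
        bands.append(np.fft.irfft(masked, n=len(trace)))
    return bands

# Illustrative use on synthetic records standing in for the signal and for the
# vacuum trace used to normalize each band to the shot-noise level.
rng = np.random.default_rng(0)
signal_trace = rng.normal(size=500_000)
vacuum_trace = rng.normal(size=500_000)
sig_bands = split_into_bands(signal_trace)
vac_bands = split_into_bands(vacuum_trace)
normalized = [s / v.std() for s, v in zip(sig_bands, vac_bands)]
```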
The main difference between the squeezed vacuum states reconstructed at different $`\mathrm{\Omega }`$ is the amount of noise reduction (squeezing) and enhancement (antisqueezing). The spectra of quantum noise power of the squeezed quadrature $`\mathrm{\Psi }_{-}`$ and of the antisqueezed quadrature $`\mathrm{\Psi }_+`$ of an OPA on resonance are given by
$$\mathrm{\Psi }_\pm (\mathrm{\Omega },P)=\mathrm{\Psi }_0\left(1\pm \xi \eta \frac{4d}{(\mathrm{\Omega }/\mathrm{\Gamma })^2+(1\mp d)^2}\right),$$
(1)
where $`d=\sqrt{P/P_{th}}`$ is the pump parameter with pump power $`P`$ and a threshold power $`P_{th}`$ of the OPA, $`\mathrm{\Gamma }/2\pi =17.5`$ MHz is the cavity linewidth (HWHM), $`\mathrm{\Psi }_0`$ is the spectral density of the vacuum state, $`\xi =0.88`$ is the escape efficiency of the resonator, and $`\eta `$ is the detection efficiency. The latter includes propagation losses after the OPA, homodyne efficiency and detector quantum efficiency. In contrast to our previous work we did not employ a mode cleaning cavity, thus classical noise of the pump wave was not negligible and the modematching between local oscillator and the OPA output signal was $`95\%`$. Furthermore the efficiency of the photo detectors had degraded slightly from the previously measured 95-97%, thus the overall detection efficiency including escape efficiency $`\xi \eta `$ was fitted to be 0.7. The pump power was approximately half the threshold power. Fig.2 shows the spectra of the maximally squeezed and anti-squeezed quadratures measured directly with a spectrum analyzer in comparison with the ones obtained via multimode reconstruction.
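For orientation, the two spectra of Eq. (1) can be evaluated directly with the parameters quoted above. The snippet below is a minimal sketch assuming the $`(1\mp d)^2`$ denominators, $`\mathrm{\Psi }_0=1`$, and a pump at half the threshold power; it is not a fit to the measured data.

```python
import numpy as np

gamma = 2 * np.pi * 17.5e6   # cavity linewidth Gamma (HWHM), rad/s
xi_eta = 0.7                 # fitted overall efficiency (escape * detection)
d = np.sqrt(0.5)             # pump parameter for P = P_th / 2

omega = 2 * np.pi * np.linspace(1.9e6, 30e6, 200)  # analysis band, rad/s
x = omega / gamma

# Squeezed (-) and antisqueezed (+) quadrature spectra, relative to Psi_0 = 1
psi_minus = 1 - xi_eta * 4 * d / (x**2 + (1 + d) ** 2)
psi_plus = 1 + xi_eta * 4 * d / (x**2 + (1 - d) ** 2)

print("squeezing at 1.9 MHz:     %.2f dB" % (10 * np.log10(psi_minus[0])))
print("antisqueezing at 1.9 MHz: %.2f dB" % (10 * np.log10(psi_plus[0])))
```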
Three of the reconstructed states are plotted in Fig.3. They display strongly elliptical Wigner functions and oscillations in the photon number, characteristics that vanish when the state approaches a vacuum state at frequencies sufficiently away from the OPA cavity center frequency. We remark that we did not include error corrections in the reconstruction algorithm, as described in . For a single mode treatment of our previously measured data that takes into account the finite detection efficiency see . Note also that the present measurement method does not allow us to make any statements about possible correlations between the different mode-pairs. Regarding the light field generated by the OPA this is of minor importance, since according to theory mode-pairs at different frequencies should be uncorrelated .
A direct measure of the total noise of the squeezed and anti-squeezed quadratures $`\int \mathrm{\Psi }_\pm (\mathrm{\Omega },P)\mathrm{d}\mathrm{\Omega }`$ of the overall OPA output within the bandwidth 1.9-30 MHz is obtained by analyzing the full quantum noise instead of its particular Fourier components. (For this, the non-uniform spectral response of the A/D-board has to be eliminated via normalization in Fourier space.) The inset of Fig.4 shows the total electric field variances as a function of the local oscillator phase. Well-known figures of this type usually depicted only the $`E`$-field variances of a single mode of the light field . Here we obtain for the overall OPA output a total squeezing of -2.9 dB (= 0.47 linear scale) and 6.7 dB (= 4.7) for the total antisqueezing. The corresponding photon statistics is shown as well. This multimode light field would be detected if one would employ photon counters instead of homodyne detection. Note that no photon number oscillations are observable in the total OPA output. This may seem surprising, since almost all the photon number distributions of the individual modes show oscillations and all the reconstruction transformations are linear. However, according to basic laws of probability theory the overall sampled distribution of the quantum noise is not given by the average but by the convolution of the distributions of the individual modes. Therefore the photon statistics of the total OPA output is not given by averaging the single mode statistics. What is averaged though in our case of uncorrelated Gaussian noise distributions is, as mentioned above, the variance at each particular phase angle.
A further perspective is gained by the analysis of our data in the time domain. At a fixed phase $`\theta `$ of the local oscillator, the recorded time trace can be regarded as the quantum trajectory of the quadrature $`x_\theta (t)=2^{-1/2}(a^{\mathrm{\dagger }}(t)e^{\mathrm{i}\theta }+a(t)e^{-\mathrm{i}\theta })`$, where $`a(t)`$ is the Heisenberg output field operator. Thus the recorded data contain all information needed to extract the first order correlation function. This function is easily calculated using the input-output formalism for optical cavities .
$`g_{(1)}(\tau )`$ $`\equiv `$ $`{\displaystyle \frac{\langle a^{\mathrm{\dagger }}(\tau )a(0)\rangle }{\langle a^{\mathrm{\dagger }}(0)a(0)\rangle }}`$ (2)
$`=`$ $`{\displaystyle \frac{\langle x_\theta (\tau )x_\theta (0)\rangle _\theta }{\langle x_\theta (0)x_\theta (0)\rangle _\theta }}`$ (4)
$`=`$ $`{\displaystyle \frac{1-d^2}{2d}}\left[{\displaystyle \frac{1}{1-d}}e^{-(1-d)\mathrm{\Gamma }\tau }-{\displaystyle \frac{1}{1+d}}e^{-(1+d)\mathrm{\Gamma }\tau }\right].`$ (6)
Here $`\langle \rangle _\theta `$ means averaging over all phase angles. Fig.5 shows that the evaluation of the first order correlation function obtained from the experimental data is in good agreement with theory. In principle it is possible to obtain via broadband homodyne detection all higher order time correlations of the field quadratures $`\langle x_\theta (\tau _1)x_\theta (\tau _2)\mathrm{\dots }x_\theta (\tau _n)\rangle `$. Except for one special case their significance does not appear to have been studied. For our data the evaluation of these correlations could not be done reliably, due to difficulties with a proper normalization at $`\tau =0`$ and since the minimum recordable timestep of 1.7 ns was too large in comparison to the expected rapid exponential decay times $`e^{-n(1-d)\mathrm{\Gamma }\tau }`$.
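A quick numerical tabulation of Eq. (6) is straightforward; the sketch below simply evaluates the analytic correlation function for the same pump parameter and cavity linewidth quoted above (an illustration, not a re-analysis of the measured trajectories).

```python
import numpy as np

gamma = 2 * np.pi * 17.5e6      # cavity linewidth Gamma (HWHM), rad/s
d = np.sqrt(0.5)                # pump parameter, P = P_th / 2

def g1(tau):
    """First-order correlation of the OPA output, Eq. (6)."""
    pref = (1 - d**2) / (2 * d)
    return pref * (np.exp(-(1 - d) * gamma * tau) / (1 - d)
                   - np.exp(-(1 + d) * gamma * tau) / (1 + d))

tau = np.linspace(0, 100e-9, 6)          # delays up to 100 ns
print(np.round(g1(tau), 3))              # g1(0) = 1, then decays with tau
```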
Note also that since a homodyne measurement is always a two-mode measurement of the pair of modes at frequencies $`\omega \pm \mathrm{\Omega }`$, our experiment shows two-mode time correlations. For single-mode time correlation measurements a heterodyne system (detection efficiency $`\eta <`$0.5) can be employed. This is possible, since detection efficiency does not play the same crucial role in time correlation measurements that it does for quantum state reconstructions, in fact ideally $`g_{(n)}`$ is independent of $`\eta `$.
In conclusion, we have demonstrated the first simultaneous reconstruction of a whole spectrum of quantum states of the optical light field. Some of the reconstructed states emitted from the OPA display photon number oscillations and strongly elliptical Wigner functions. These characteristics vanish for the mode-pairs at frequencies sufficiently away from the OPA cavity center frequency. Furthermore the first order time correlation function was evaluated in good agreement with theory.
Our measurement scheme may be also useful for a variety of other quantum optical systems with more complex frequency or time dependencies. Examples are recent squeezing experiments with solitons in a fibre , pump-noise-suppressed laser diodes , or exciton-polariton systems in semi-conductors .
We thank Robert Bruckmeier and Kazimierz Rza̧żewski for very helpful discussions. Financial support was provided by the Deutsche Forschungsgemeinschaft and the Esprit LTR project 20029-ACQUIRE. One of us (F.I.) also gratefully acknowledges financial support from the Alexander von Humboldt Foundation, and the hospitality of the LS Mlynek at the Fakultät für Physik, Universität Konstanz, while on leave of absence from the Dipartimento di Fisica, Università di Salerno.
# Scaling and Intermittency in Animal Behavior
## Abstract
Scale-invariant spatial or temporal patterns and Lévy flight motion have been observed in a large variety of biological systems. It has been argued that animals in general might perform Lévy flight motion with power law distribution of times between two changes of the direction of motion. Here we study the temporal behaviour of nesting gilts. The time spent by a gilt in a given form of activity has power law probability distribution without finite average. Further analysis reveals intermittent eruption of certain periodic behavioural sequences which are responsible for the scaling behaviour and indicates the existence of a critical state. We show that this behaviour is in close analogy with temporal sequences of velocity found in turbulent flows, where random and regular sequences alternate and form an intermittent sequence.
Scale-invariant spatial and temporal patterns have been observed in a large variety of biological systems. It has been demonstrated that ants, Drosophila and the wandering albatross, Diomedea exulans, perform motion with power law distribution of times between two changes of the direction of motion. The power law distribution of times then leads to an anomalous Lévy type diffusion in space. In the last few years an increasing interest has been devoted to these superdiffusive processes in physics and in econophysics. In spite of the extensive experimental studies the detailed mechanism responsible for the creation of the underlying power law distributions is not well understood. In this letter we demonstrate for the first time that the power law and scaling observed in the behaviour of certain animals is related to intermittency, a phenomenon familiar from the theory of dynamical systems and turbulence.
It is well known that non-hyperbolic dynamical systems show superdiffusive behaviour. In dissipative systems it is caused by the trapping of trajectories in the neighborhood of marginally unstable periodic orbits. The paradigmatic system showing such behaviour is Manneville’s one dimensional map
$$x_{n+1}=x_n+cx_n^z(mod1),$$
(1)
where $`z>1`$. Here the sequence $`x_n`$ spends a long time trapped in the neighborhood of the marginally unstable periodic orbit (a fixed point) $`x=0`$. In analytic maps typically $`z=2`$. The invariant density behaves as $`\varrho (x)\sim x^{1-z}`$ near the origin and it is not normalizable for $`z\geq 2`$. Accordingly the distribution of times spent by the sequence near the unstable periodic orbit has a power law tail.
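The trapping mechanism is easy to reproduce numerically. The sketch below iterates the map (1) for $`c=1`$ and $`z=2`$ and records the lengths of the laminar episodes near $`x=0`$; the trapping threshold of 0.2 and the number of iterations are arbitrary illustrative choices.

```python
import numpy as np

def manneville_laminar_lengths(z=2.0, c=1.0, n_steps=1_000_000, x0=0.3,
                               threshold=0.2):
    """Iterate x_{n+1} = x_n + c*x_n**z (mod 1) and record how long the orbit
    stays below `threshold`, i.e. trapped near the marginal fixed point x=0."""
    lengths, x, run = [], x0, 0
    for _ in range(n_steps):
        x = (x + c * x**z) % 1.0
        if x < threshold:
            run += 1
        elif run > 0:
            lengths.append(run)
            run = 0
    return np.array(lengths)

lengths = manneville_laminar_lengths()
# For z = 2 the trapping times have a power-law tail, so the sample mean keeps
# drifting upward with the observation time instead of converging.
print(lengths.max(), lengths.mean())
```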
Next we will show experimental evidence on the existence of unstable periodic patterns in animal behaviour.
Members of the species Sus scrofa invest considerable time and effort into building a nest before farrowing. Our aim was to investigate the temporal pattern of this highly motivated activity. Using time-lapse video we recorded the behaviour of 27 gilts and analyzed the last 24 hours preceding the farrowing. The experimental subjects were Large White X Landrace gilts (Cotswold Pig Development Company, Lincoln, UK). On day 109 of pregnancy they were moved to their individual farrowing accommodations. A behavioural collection program was run to take data from the tapes. The behaviour of gilts was classified into eight mutually exclusive categories (see Table I). Further details of the experiment are published in Ref..
As a first step we assigned a symbol 0,1,…,6 to the 7 different types of behaviour listed in Table I. The records of the 27 gilts contain approximately 24,000 symbols. Then we computed the probabilities of the occurrences of symbolic sequences formed from the symbols. This has been done by evaluating the decimal value of the base seven number coded by the sequence. For example the code sequence 0156 is evaluated to be $`0\times 1+1\times 7+5\times 7^2+6\times 7^3=2310`$.
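The encoding and the counting of symbol sequences can be written compactly. The following sketch is illustrative only: the synthetic random record merely stands in for the actual behavioural data.

```python
import random
from collections import Counter

def encode_base7(seq):
    """Map a behavioural sequence (symbols 0..6) to its base-seven value,
    e.g. the sequence 0,1,5,6 -> 0*1 + 1*7 + 5*49 + 6*343 = 2310."""
    return sum(s * 7**i for i, s in enumerate(seq))

assert encode_base7([0, 1, 5, 6]) == 2310

def ngram_probabilities(record, length):
    """Relative frequencies of all subsequences of a given length."""
    counts = Counter(encode_base7(record[i:i + length])
                     for i in range(len(record) - length + 1))
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items()}

# Illustrative use on a synthetic record of symbols 0..6:
record = [random.randrange(7) for _ in range(24_000)]
probs3 = ngram_probabilities(record, 3)
```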
On the histograms of Fig. 1a, 1b and 1c we show the probabilities of the length 3, 4 and 5 sequences respectively. One can see that certain symbol sequences occur with high probability which does not decrease with increasing symbol length significantly, while the majority of symbols have relatively low probability and the occurrence of each particular symbol decreases with increasing length.
On the histogram of Fig. 2 we ordered the symbols of lengths $`L=2,3`$ and $`4`$ according to decreasing probability. One can see that the tail of the histogram is exponential. An exponential histogram of probability ordered symbols is a property of random texts and reflects the fact that most of the behavioural patterns are generated by a Markovian process in a purely stochastic way. On the other hand, the most probable part cannot be considered as a result of random uncorrelated processes. On the histogram we can see that the probability of the most probable sequences does not decay significantly with increasing length and stays approximately constant for $`L=2,3`$ and $`4`$.
We have identified these most probable behavioural sequences. For gilts kept in pens the most likely sequence is the cyclic repetition of ”nosing floor”-”alert”-”nosing floor”-… pattern. This sequence is the consequence of the nest building instinct which governs the behaviour of the animal most of the time. In crates however there is no possibility to try to build a nest due to the limited space and the absence of straw. Gilts become frustrated and the ideal ”nosing floor”-”alert”-”nosing floor”-… sequence is occasionally interrupted by periods of rest. On Fig. 3 we show a typical record from a gilt kept in pen. We can see that the deterministic sequences of 1-6-1-… are dominant interrupted eventually by short random-like sequences of other symbols. This is very reminiscent of temporal sequences of velocity found in turbulent flows, where random and regular sequences alternate and form an intermittent sequence.
We can quantify this qualitative analogy by studying the probability of periodic sequences. On Figure 4a we show the probability to observe the periodic sequences ”4-6-4-6-…” and ”1-4-1-4-…” as a function of the length $`L`$ of the sequence. These are typical periodic symbolic sequences. To add a new symbol to an existing sequence in this case is approximately an uncorrelated, Markovian process. The probability decays exponentially with the length. On the other hand, the probability to observe the special periodic sequence ”1-6-1-6-…” decays very slowly, according to the power law $`1/L^2`$ for a wide range of $`L`$ values until an upper cutoff $`L30`$ is reached (Fig. 4b). This is in close analogy with intermittent flows, where the probability of regular velocity patterns decays according to a power law. The occurrence of the correlated sequences has a drastic effect on the probability distribution of the time the animal spends engaged in a given type of behaviour as we show next. On Fig. 5 we show the probability distribution of these times. The data can be fitted very accurately with the power law
$$P(t)=C\frac{1}{(t+t_0)^2},$$
(2)
where $`t_0=21.3\pm 0.6`$ sec. The exponent $`2`$ of this power law is in accordance with the similar power law found for the probability of the ”1-6-1-6-…” sequences. This function is valid between some lower $`t_l`$ and upper $`t_u`$ cutoff times. Based on the available data we have not reached these and we can say that $`t_l`$ is less than 30 seconds and $`t_u`$ is more than 2000 seconds. The distribution (2) is normalizable, however it has no first and higher moments. For example the average time spent in an activity
$$\overline{t}=\int dt\,tP(t)$$
(3)
does not exist. Taking into account that the validity of (2) is limited between lower and upper cutoffs, the average time spent in an activity becomes $`\overline{t}\simeq t_0\mathrm{ln}(t_u/t_l)`$, which is a very slowly growing function of $`t_u`$. However the variance of the time spent in an activity, $`\overline{(\mathrm{\Delta }t)^2}\simeq t_0t_u`$, and higher moments are very large, making the behaviour of the animal very unpredictable.
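The effect of the cutoffs on the moments is easily checked numerically; the snippet below uses illustrative cutoff values consistent with the bounds quoted above and is not a fit to the behavioural data.

```python
import numpy as np

def truncated_moments(t_l, t_u, t0=21.3, n=400_000):
    """Mean and variance of P(t) ~ 1/(t+t0)^2 restricted to [t_l, t_u]."""
    t, dt = np.linspace(t_l, t_u, n, retstep=True)
    p = 1.0 / (t + t0) ** 2
    p /= p.sum() * dt                       # normalize on [t_l, t_u]
    mean = (t * p).sum() * dt
    var = ((t - mean) ** 2 * p).sum() * dt
    return mean, var

# Doubling the upper cutoff barely moves the mean (logarithmic growth),
# while the variance roughly doubles, i.e. it scales like t0 * t_u.
for t_u in (2000.0, 4000.0):
    print(t_u, truncated_moments(30.0, t_u))
```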
It is important to note that the observed power law distributions above are the same as those observable in the Manneville system for $`z=2`$. This indicates that the sequence ”1-6-1-6-…” marks a marginally unstable periodic orbit of the dynamics.
As a summary, we demonstrated here that the scaling behaviour found in the animal behaviour is not related to environmental factors, as it is in the case of foraging animals, where food might be distributed in a complex way, forcing animals to follow Lévy flight patterns. The environment of gilts in pens and crates is almost absolutely unmotivating. The source of this complex behaviour can come only from the neural system forced by hormonal stimulus due to nesting instincts. This is the first carefully examined case where complex scaling behaviour of animals is related to the self-organization and possibly to some unstable critical state of the nervous system. We hope that the behaviour of other types of animals subject to some internal hormonal pressure can be investigated further and the existence of such a critical state can be established.
This work has been supported by the Hungarian Science Foundation OTKA (F17166/T17493/T25866) and the Hungarian Ministry of Education.
THE $`\tau ^{-}\to (\omega ,\varphi )\pi ^{-}\nu _\tau `$ DECAYS IN THE CHIRAL MODEL
K. R. Nasriddinov, B. N. Kuranov, T. A. Merkulova
Institute of Nuclear Physics, Academy of Sciences of Uzbekistan,
Ulugbek, Tashkent, 702132 Uzbekistan
> The $`\tau ^{}(\omega ,\varphi )\pi ^{}\nu _\tau `$ decays of the $`\tau `$ lepton are studied using the method of phenomenological $`SU(3)\times SU(3)`$ chiral Lagrangians. It is shown that the obtained value for the $`\tau ^{}\varphi \pi ^{}\nu _\tau `$ decay probability is very sensitive to deviations from the mixing angle. Calculated partial widths for these decays are compared with the available experimental and theoretical data.
At present the studies of the $`\tau ^{-}\to (\omega ,\varphi )\pi ^{-}\nu _\tau `$ decays are of great interest. These investigations could provide evidence for axial-vector second class currents and useful information on the $`\omega \varphi `$ mixing angle.
The $`\tau ^{}\omega \pi ^{}\nu _\tau `$ decay mode has been studied in the framework of the vector-meson dominance model , a low energy $`U(3)\times U(3)`$ chiral Lagrangian model and, using the heavy vector-meson chiral perturbation formalism . The $`\tau ^{}(\omega ,\varphi )\pi ^{}\nu _\tau `$ decay channels have been studied using the CVC hypothesis and the vector meson dominance model as well.
In this paper we study the $`\tau ^{}(\omega ,\varphi )\pi ^{}\nu _\tau `$ decay channels using the method of phenomenological $`SU(3)\times SU(3)`$ chiral Lagrangians (PCL’s) . Recently the $`\tau VP\nu _\tau `$ decay channels of the $`\tau `$ lepton have been studied also using this method. In order to investigate these decay modes we have obtained the expression of weak hadron currents between pseudoscalar and vector meson states by including the gauge fields of these mesons in covariant derivatives . The weak hadron currents in this way have the form
$$J_\mu ^i=F_\pi gv_\mu ^a\phi ^bf_{abi},$$
(1)
where $`F_\pi `$ = 93 MeV, $`g`$ is the ”universal” coupling constant, which is fixed from the experimental $`\rho \pi \pi `$ decay width:
$$\frac{g^2}{4\pi }\simeq 3.2,$$
$`v_\mu ^a`$ and $`\phi ^b`$ represent the fields of the $`1^{}`$ and $`0^{}`$ mesons, respectively.
The partial widths of the $`\tau ^{}(\omega ,\varphi )\pi ^{}\nu _\tau `$ decay channels are equal to zero according to the obtained expression of weak hadron currents Eq.(1). Therefore these channels can be realized via effects of secondary importance. Here, we calculate the partial widths of these decay modes in the framework of the abovementioned method. Note that the hadron decays of the $`\tau `$ lepton up to three pseudoscalar mesons in the final state have been studied also in the framework of this method .
In the PCL’s, the weak interaction Lagrangian has the form
$$L_W=\frac{G_F}{\sqrt{2}}J_\mu ^hl_\mu ^++h.c.,$$
(2)
where $`G_F\simeq 10^{-5}/m_P^2`$ is the Fermi constant,
$`l_\mu =\overline{u_l}\gamma _\mu (1+\gamma _5)u_{\nu _l}`$ is the lepton current, and hadron currents have the form
$$J_\mu ^h=J_\mu ^{1+i2}\mathrm{cos}\mathrm{\Theta }_c+J_\mu ^{4i5}\mathrm{sin}\mathrm{\Theta }_c,$$
where $`\mathrm{\Theta }_c`$ is the Cabibbo angle, and $`\rho ,\rho ^{}\mathrm{}`$ meson currents are defined as
$$J_\mu ^{1+i2}=\frac{m_\rho ^2}{g}\rho _\mu ^{}.$$
The strong interaction Lagrangian of vector mesons with vector and pseudoscalar mesons has the form
$$L_s\left(vv\phi \right)=g_{vv\phi }\epsilon _{\mu \nu \alpha \beta }Sp\left(_\mu \widehat{V}_\nu _\alpha \widehat{V}_\beta \widehat{\phi }\right),$$
(3)
where $`g_{vv\phi }=3g^2/16\pi ^2F_\pi `$ is the coupling constant, $`\widehat{V}_\mu =\frac{1}{2i}\lambda _iv_\mu ^i`$, and
$`\widehat{\phi }=\frac{1}{2}\lambda _i\phi ^i`$.
According to this equation the strong interaction Lagrangians of the $`\rho ^{},\rho ^{}\mathrm{}`$ mesons with ($`\omega ,\varphi `$) and $`\pi ^{}`$ mesons, at $`39^o`$ of the $`\omega \varphi `$ mixing angle, have the forms
$$L_S\left(\rho ^{}\omega \pi ^{}\right)=\frac{i}{2}g_{vv\phi }\epsilon _{\mu \nu \alpha \beta }_\mu \omega _\nu _\alpha \rho _\beta ^+\pi ^{},$$
(4)
$$L_S\left(\rho ^{}\varphi \pi ^{}\right)=0.0016ig_{vv\phi }\epsilon _{\mu \nu \alpha \beta }_\mu \varphi _\nu _\alpha \rho _\beta ^+\pi ^{}.$$
(5)
According to the Lagrangians (2) and (4) the decay amplitude for the $`\tau ^{}\omega \pi ^{}\nu _\tau `$ decay channel has the form
$$M=\frac{G_Fm_\rho ^2g_{vv\phi }cos\mathrm{\Theta }_c}{\sqrt{8}g\left[S_1-m_\rho ^2-i\left(m_\rho \mathrm{\Gamma }_\rho \right)\right]}\left[\epsilon _{\mu \nu \alpha \beta }\left(P_\alpha ^\omega K_\mu ^\rho ϵ_\beta ^\lambda \left(\omega \right)\right)\overline{u_\nu }\gamma _\nu \left(1+\gamma _5\right)u_\tau \right],$$
(6)
where $`ϵ_\beta ^\lambda (\omega )`$ is the polarization vector of $`\omega `$-meson, $`P_\alpha ^\omega `$, $`K_\mu ^\rho `$ are the 4-momenta of the $`\omega `$\- and $`\rho `$-mesons,respectively. Using Eq.(6) we have defined the squared matrix element of this process
$`M^2=K[m_\omega ^2m_\pi ^2(0.5(S_2-m_\pi ^2)+0.5(S_3-m_\omega ^2))m_\omega ^2(0.5(S_2-m_\pi ^2))^2m_\pi ^2(0.5(S_3-m_\omega ^2))^2+0.25(S_2-m_\pi ^2)(S_3-m_\omega ^2)(S_1-m_\pi ^2-m_\omega ^2)(0.5(S_1-m_\pi ^2-m_\omega ^2))^2(0.5(S_2-m_\pi ^2)+0.5(S_3-m_\omega ^2))]`$,
where
$$K=\frac{2G_F^2m_\rho ^4\left(3gcos\mathrm{\Theta }_c\right)^2}{\left(16\pi ^2F_\pi \right)^2\left[\left(S_1-m_\rho ^2\right)^2+\left(m_\rho \mathrm{\Gamma }_\rho \right)^2\right]},$$
$`S_1`$,$`S_2`$ and $`S_3`$ are the Mandelstam variables. Note that the squared matrix element for the $`\tau ^{}\varphi \pi ^{}\nu _\tau `$ decay channel can be easily obtained by replacing $`m_\omega m_\varphi `$ and the corresponding $`K`$, which is defined according to the Lagrangian (5).
Using Lagrangians (2), (4), and (5) we calculated the partial width of the $`\tau ^{-}\to (\omega ,\varphi )\pi ^{-}\nu _\tau `$ decay channels by means of the TWIST code and obtained for the first decay channel $`\mathrm{\Gamma }(\tau ^{-}\to \omega \pi ^{-}\nu _\tau )=0.44\times 10^{11}\mathrm{sec}^{-1}`$.
This obtained result for the partial width is consistent (within errors) with the experimental value $`\mathrm{\Gamma }(\tau ^{-}\to \omega \pi ^{-}\nu _\tau )=(0.54\pm 0.17)\times 10^{11}\mathrm{sec}^{-1}`$
and closer to the VMD model prediction $`\mathrm{\Gamma }(\tau ^{-}\to \omega \pi ^{-}\nu _\tau )=(0.42\pm 0.19)\times 10^{11}\mathrm{sec}^{-1}`$ than to the prediction $`\mathrm{\Gamma }(\tau ^{-}\to \omega \pi ^{-}\nu _\tau )=0.31\times 10^{11}\mathrm{sec}^{-1}`$, but it lies below the prediction by the CVC hypothesis $`\mathrm{\Gamma }(\tau ^{-}\to \omega \pi ^{-}\nu _\tau )=(0.73\pm 0.10)\times 10^{11}\mathrm{sec}^{-1}`$ . In these calculations we use $`\omega \varphi `$ mixing as
$`\omega =V_8sin\mathrm{\Theta }_V+V_0cos\mathrm{\Theta }_V,`$
$`\varphi =V_8cos\mathrm{\Theta }_V-V_0sin\mathrm{\Theta }_V,`$
and at the $`\mathrm{\Theta }_V=39^o`$ for the probability of the $`\tau ^{}\varphi \pi ^{}\nu _\tau `$ decay channel we obtain
$`\mathrm{\Gamma }\left(\tau ^{-}\to \varphi \pi ^{-}\nu _\tau \right)=0.38\times 10^6\mathrm{s}^{-1}`$.
This obtained result is consistent with the experimental data $`\mathrm{\Gamma }(\tau ^{-}\to \varphi \pi ^{-}\nu _\tau )<(12.04\pm 0.07)\times 10^8\mathrm{s}^{-1}`$, but it lies below the VMD model prediction $`\mathrm{\Gamma }(\tau ^{-}\to \varphi \pi ^{-}\nu _\tau )=(0.41\pm 0.17)\times 10^8\mathrm{s}^{-1}`$, and almost four orders of magnitude below the upper limit obtained using the CVC hypothesis $`\mathrm{\Gamma }(\tau ^{-}\to \varphi \pi ^{-}\nu _\tau )<0.31\times 10^{10}\mathrm{s}^{-1}`$. At the ideal mixing angle $`\mathrm{\Theta }_V=35.3^o`$, for this decay channel we obtain $`\mathrm{\Gamma }(\tau ^{-}\to \varphi \pi ^{-}\nu _\tau )=1.27\times 10^8\mathrm{s}^{-1}`$, which lies above the prediction . We can observe that the obtained value for the $`\tau ^{-}\to \varphi \pi ^{-}\nu _\tau `$ decay probability is very sensitive to deviations of the mixing angle, entering through the combination
$$\frac{1}{2\sqrt{3}}cos\mathrm{\Theta }_V-\frac{1}{2\sqrt{2}}sin\mathrm{\Theta }_V.$$
Therefore this decay channel could be used as an ideal “source” for the study of $`\omega \varphi `$ mixing, and the mixing angle could be determined with high accuracy from the partial width of the $`\tau ^{-}\to \varphi \pi ^{-}\nu _\tau `$ decay channel measured at a $`\tau `$-charm factory. Note that the $`\tau ^{-}\to \omega \pi ^{-}\nu _\tau `$ decay probability is almost independent of the $`\omega \varphi `$ mixing angle (at $`\mathrm{\Theta }_V=39^o`$ and $`\mathrm{\Theta }_V=35.3^o`$ it equals approximately 1/2) according to
$$\frac{1}{2\sqrt{3}}sin\mathrm{\Theta }_V+\frac{1}{2\sqrt{2}}cos\mathrm{\Theta }_V.$$
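A short numerical evaluation of the two combinations above makes the contrast explicit; the angles are those quoted in the text and the code is only a sketch of the arithmetic, not of the full width calculation.

```python
import numpy as np

for theta_deg in (39.0, 35.3):
    th = np.radians(theta_deg)
    phi_factor = np.cos(th) / (2 * np.sqrt(3)) - np.sin(th) / (2 * np.sqrt(2))
    omega_factor = np.sin(th) / (2 * np.sqrt(3)) + np.cos(th) / (2 * np.sqrt(2))
    print(theta_deg, round(phi_factor, 4), round(omega_factor, 4))

# The phi-pi factor nearly vanishes at 39 deg (~0.002) but not at the ideal
# angle 35.3 deg (~0.03), while the omega-pi factor stays close to 1/2 at
# both angles, which is why only the phi channel is sensitive to the mixing.
```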
In these decays $`\rho (1450)`$- and $`\rho (1700)`$-vector intermediate meson state contributions have also been taken into account according to Eqs. (2), (4), and (5); these states have the same flavor quantum numbers as the $`\rho (770)`$ meson. Therefore in our calculations we also used the same $`g`$-coupling constant, as in Ref. , for all the $`\rho (770)`$-, $`\rho (1450)`$-, and $`\rho (1700)`$-vector intermediate meson states, which have widths of 151, 310, and 235 MeV, respectively. In these decays the contributions of the $`\rho (1450)`$- and $`\rho (1700)`$-vector intermediate meson states dominate those of the $`\rho (770)`$ ones. Note that contributions from these higher radial excitations have not been taken into account in the calculations of ; in those works these decay channels are calculated assuming only one $`\rho ^{\prime }`$ resonance in addition to the $`\rho `$ meson.
In summary, the Lagrangians (2), (4), and (5) allow us to describe the $`\tau ^{-}\to (\omega ,\varphi )\pi ^{-}\nu _\tau `$ decays in satisfactory agreement with the available experimental data and theoretical predictions. Taking into account the corresponding coupling constants, as in Ref. , would probably allow us to describe these decays more accurately than in the present calculations.
We would like to express our deep gratitude to F.A.Gareev, F.Khanna, Chueng-Ryong Ji, M.M.Musakhanov, and A.M.Rakhimov for useful discussions.
# Convective and Absolute Instabilities in the Subcritical Ginzburg-Landau Equation
## 1 Introduction
Several physico-chemical systems driven out of equilibrium present stationary instabilities of the Turing type, or oscillatory instabilities corresponding to Hopf bifurcations. Such instabilities lead to the formation of various kinds of spatio-temporal patterns crosshoh . Well known examples are: Rayleigh-Bénard instabilities in Newtonian fluids, binary mixtures, or viscoelastic solutions binary ; larson , electrohydrodynamic instabilities in nematic liquid crystals ehd , Turing instabilities in nonlinear chemical systems shoka , convective instabilities in Taylor-Couette devices taylorcouette , etc. Close to such instabilities, the dynamics of the system may usually be reduced to amplitude equations of the Ginzburg-Landau type, which describe the evolution of the patterns that may appear beyond the bifurcation point daniel .
According to the system under consideration, and to the nature of the instability, these Ginzburg-Landau equations may contain mean flow terms induced by group velocities. In this case, pattern formation crucially depends on the convective or absolute nature of the instability. Let us recall that, when the reference state is convectively unstable, localized perturbations are driven by the mean flow in such a way that they grow in the moving reference frame, but decay at any fixed location. On the contrary, in the absolute instability regime, localized perturbations grow at any fixed location huerre . The behavior of the system is thus qualitatively very different in both regimes. In the convectively unstable regime, a deterministic system cannot develop the expected patterns, except in particular experimental set-ups, while in a stochastic system, noise is spatially amplified and gives rise to noise-sustained structures deissler ; ahlers ; marco . On the contrary, in the absolutely unstable regime, patterns are intrinsically sustained by the deterministic dynamics, which provides the relevant selection and stability criteria mueller ; buechel . Hence, the concepts of convective and absolute instability are essential to understand the behavior of nonlinear wave patterns and their stability deissler ; mutveit .
The nature of the instability of the trivial steady state has been studied, either numerically, analytically and experimentally: In the case of supercritical bifurcations, linear criteria are appropriate to determine the absolute instability threshold, and to analyze the transition from convective to absolute instability deissler ; ahlers ; marco ; deisslerbrand ; helmutda ; steinberg ; luecke ; marc . However, in the case of subcritical bifurcations, the nonlinearities are destabilizing, which leads to the failure of linear instability criteria. In a qualitative analysis based on the potential character of the real subcritical Ginzburg-Landau equation, Chomaz chomaz argued that the transition between convective and absolute instability of the trivial steady state should occur at the point where a front between the rest state and the nontrivial steady state is stationary in a frame moving with the group velocity. This defines the nonlinear convective-absolute instability threshold, above which nonlinear global modes are intrinsically sustained by the dynamics, as discussed by Couairon and Chomaz couairon . This argument relies on the existence of a unique front between the basic and the bifurcating states, as it is the case in the subcritical domain where both basic and bifurcating states are linearly stable. In the supercritical domain, where the basic state is linearly unstable, or in the complex Ginzburg-Landau equation, this front is not unique any more, and, as commented by van Hecke et al. vanhecke , one has to know which nonlinear front solution is selected, to determine the nonlinear stability properties of the basic state.
Within this context, our aim in this paper is to contribute to the study of this problem addressing some aspects of it that so far have not been considered. Additionally we study stochastic effects. A first aspect concerns finite size effects and their influence on the transition from convective to absolute instability for the trivial steady state. Indeed, our numerical analysis of the evolution of perturbations of this state show that it consists in two stages. The first one is devoted to the building of a front between this state and the bifurcating one. It is during the second stage that this front moves outwards or inwards according to the convective or absolute nature of the instability. We will show that, although the absolute instability threshold may effectively be shifted, due to nonlinear effects, in agreement with chomaz , the first step of the evolution is sensitive to the size of the system, and this may affect the practical determination of the absolute instability threshold, and even suppress it.
A second aspect is the effect of the group velocity on the unstable subcritical branch. In the subcritical domain, there is a middle branch of steady states, between the trivial and the bifurcating ones. In fact, the nature of the instability of this branch in the presence of a group velocity has not been considered so far. In the absence of group velocity, this branch is absolutely unstable. However, the nature of the instability may be modified in systems with group velocity or mean flow effects. We will effectively show, analytically and numerically, that this unstable subcritical branch may be convectively unstable, totally or partly, according to the mean flow intensity. Effectively, in deterministic systems, unstable states on this branch do not necessarily decay in the presence of group velocity, while they may remain long lived in stochastic systems. This fact may be of practical importance, since it provides an alternative way to stabilize the subcritical middle branch, which is qualitatively different from the one proposed by Thual and Fauve thualfauve . It could, furthermore, provide the last building block needed for the understanding of pattern formation in binary fluid convection, as suggested in crosshoh .
In section 2, we recall the dynamical model. In section 3 we discuss the nature of the instability of the trivial steady state, and present the results of a numerical analysis of the problem. In section 4, we show, analytically and numerically, that the subcritical middle branch of steady states may be convectively unstable, and may thus be stabilized by mean flow effects in deterministic systems. Finally, conclusions are drawn in section 5.
## 2 The Subcritical Scalar Ginzburg-Landau Equation
For the sake of simplicity, we will consider, in this paper, systems described by a scalar order parameterlike variable, and where the dynamics is given by the real fifth-order Ginzburg-Landau equation, which may be written, in one-dimensional geometries, as crosshoh ; wimhoh :
$$\partial _tA+c\partial _xA=ϵA+\partial _x^2A+vA^3-A^5+\sqrt{\xi }\chi (x,t),$$
(1)
For future reference we have added to the equation a stochastic term $`\chi (x,t)`$. This models a Gaussian white noise of zero mean and variance given by $`\langle \chi (x,t)\chi (x^{\prime },t^{\prime })\rangle =2\delta (x-x^{\prime })\delta (t-t^{\prime })`$. In the remainder of this section we consider the deterministic situation with $`\xi =0`$.
Bifurcating uniform steady states $`A(x,t)=R`$ of this equation are well known:
$$R_\pm ^2=\frac{1}{2}(v\pm \sqrt{v^2+4ϵ}).$$
(2)
The linear evolution of the perturbations $`\rho _\pm =A-R_\pm `$ around these states is then given by:
$$\partial _t\rho _\pm +c\partial _x\rho _\pm =\mp 2R_\pm ^2\sqrt{v^2+4ϵ}\rho _\pm +\partial _x^2\rho _\pm .$$
(3)
Hence, in the absence of group velocity, the upper branch $`R_+`$ exists and is stable for $`-\frac{v^2}{4}<ϵ`$, while the middle branch $`R_{-}`$ exists and is unstable for $`-\frac{v^2}{4}<ϵ<0`$ (cf. fig. 1).
This picture is, of course, known to change in the presence of a finite group velocity $`c`$. Let us first recall the linear and nonlinear criteria for convective and absolute instability for the trivial steady state $`A=0`$.
## 3 Linear and Nonlinear Instability of the Trivial Steady State
### 3.1 Analytical Results
The linear evolution around the trivial steady state $`A=0`$ is given by
$$\partial _t\rho _0+c\partial _x\rho _0=ϵ\rho _0+\partial _x^2\rho _0$$
(4)
and the corresponding dispersion relation is
$$\omega =ϵ-c\kappa +\kappa ^2$$
(5)
with $`\kappa =k^{\prime }+ik^{\prime \prime }`$. The usual linear instability criterion huerre
$$\mathrm{Re}\frac{d\omega }{d\kappa }=\mathrm{Im}\frac{d\omega }{d\kappa }=0$$
(6)
and $`\mathrm{Re}(\omega (\kappa ))=0`$ gives that the trivial steady state is convectively unstable for $`0<ϵ<c^2/4`$, and absolutely unstable for $`ϵ>c^2/4`$.
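The linear threshold quoted above follows from a one-line saddle-point computation on the dispersion relation (5); a symbolic sketch (assuming the sympy library is available) reads:

```python
import sympy as sp

eps, c, kappa = sp.symbols("epsilon c kappa")
omega = eps - c * kappa + kappa**2                    # dispersion relation (5)

kappa_s = sp.solve(sp.diff(omega, kappa), kappa)[0]   # saddle: d(omega)/d(kappa) = 0
growth = omega.subs(kappa, kappa_s)                   # omega evaluated at the saddle
print(sp.solve(sp.Eq(growth, 0), eps))                # -> [c**2/4], the linear
                                                      #    absolute instability threshold
```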
However, since the nonlinearities of the dynamics are destabilizing, the linear terms may possibly not govern the growth of perturbations of the steady state. Hence, a reliable stability analysis has to include nonlinear terms. As discussed by Chomaz and Couairon chomaz ; couairon , the nonlinear stability analysis of the trivial steady state relies on its response to perturbations of finite extent and amplitude. Hence, in the case eq. (1), without group velocity ($`c=0`$), it is sufficient to consider a front solution joining the $`0`$ state at $`x\mathrm{}`$ to the $`R_+`$ state at $`x+\mathrm{}`$.
In the case of the dynamics given by Eq. (1), this velocity, $`c_f`$ may be calculated exactly wim , and is found to be (cf. fig. 2)
$`c_f`$ $`=`$ $`c^{\mathrm{\dagger }}={\displaystyle \frac{1}{\sqrt{3}}}(-v+2\sqrt{v^2+4ϵ})(\mathrm{for}-{\displaystyle \frac{v^2}{4}}<ϵ<{\displaystyle \frac{3v^2}{4}})`$
$`c_f`$ $`=`$ $`c^{*}=2\sqrt{ϵ}(\mathrm{for}{\displaystyle \frac{3v^2}{4}}<ϵ)`$ (7)
Note that $`c^{*}`$ is the linear marginal velocity.
If the front velocity is negative, which is the case for $`ϵ<-3v^2/16`$, an isolated droplet of the $`R_+`$ state embedded into the $`0`$ state shrinks, and the $`0`$ state is stable. On the contrary, if $`c_f`$ is positive, which is the case for $`ϵ>-3v^2/16`$, $`R_+`$ droplets grow, and the $`0`$ state is nonlinearly unstable. The value $`ϵ=-3v^2/16`$ corresponds to the Maxwell construction of phase transitions in which the trivial and upper branch have equal stability.
When $`c\ne 0`$ and $`v=1`$, Chomaz chomaz showed that, in the unstable domain ($`ϵ>-3v^2/16`$), the instability is nonlinearly convective (NLC) when $`c_f<c`$, since, in this case, although expanding, a $`R_+`$ droplet is finally advected out of the system. On the contrary, when $`c_f>c`$, the instability is absolute (NLA), since, in this case, $`R_+`$ droplets expand in such a way that they finally invade the system.
Hence, on generalizing this argument to arbitrary values of $`v`$, one obtains, by imposing $`c_f=c`$ in Eq. (7), that the transition from convective to absolute instability occurs at:
$`ϵ_a`$ $`=`$ $`{\displaystyle \frac{3}{16}}(c^2+{\displaystyle \frac{2}{\sqrt{3}}}vc-v^2)(\mathrm{for}c<\sqrt{3}v)`$ (8)
$`=`$ $`{\displaystyle \frac{1}{4}}c^2(\mathrm{for}c>\sqrt{3}v)`$
From this result, it appears clearly that, when group velocity effects dominate over nonlinear ones ($`c>\sqrt{3}v`$), the absolute instability threshold remains the linear one. However, when nonlinearities dominate ($`v>\frac{c}{\sqrt{3}}`$), the absolute instability threshold decreases, but remains in the $`ϵ>0`$ domain, when $`v<c\sqrt{3}`$. It only becomes negative when $`v>c\sqrt{3}`$. This last case is the one originally considered in chomaz .
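For later reference, Eq. (8) can be evaluated for the parameter sets used in the numerical study below; the small helper function is an illustrative convenience, not part of the original analysis.

```python
import numpy as np

def eps_absolute(c, v):
    """Nonlinear convective/absolute threshold of Eq. (8)."""
    if c < np.sqrt(3) * v:
        return (3.0 / 16.0) * (c**2 + 2 * v * c / np.sqrt(3) - v**2)
    return c**2 / 4.0

print(eps_absolute(1.0, 1.0))    # case (1):  ~ 0.217 (= sqrt(3)/8)
print(eps_absolute(0.5, 1.5))    # case (2):  ~ -0.213 (subcritical domain)
print(eps_absolute(0.0, 1.0))    # c = 0 limit: -3 v**2 / 16, the Maxwell point
```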
### 3.2 Numerical Analysis
The above results have been checked through the numerical integration of the equation (1). We will present here some of the data obtained for systems being initially in the trivial steady state, and compare them to the predictions obtained from the analytical analysis outlined in the preceeding section. To observe a convective instability we consider a semi-infinite system with one of the boundaries anchored to the unstable state $`A(x=0)=0`$. Experimentally, this boundary condition can be achieved using a negative value for the control parameter $`ϵ`$ for $`x<0`$.
The numerical integrations have been performed using a finite difference method marc with a spatial step of $`\delta x=0.05`$ and time step $`\delta t=0.001`$, except where otherwise noted. As explained before, the boundary conditions for a system of size $`L`$ were taken as follows: $`A=0`$ at $`x=0`$ for all times and $`_xA=0`$ at $`x=L`$.
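A minimal explicit scheme reproducing this setup is sketched below. The forward-Euler time stepping, the centered differences, the small random initial condition and the discretization of the noise term are our own choices for illustration and are not meant to reproduce the original integration code in detail.

```python
import numpy as np

def integrate_gl(eps, c, v, xi=0.0, L=100.0, dx=0.05, dt=0.001, t_max=100.0,
                 seed=0):
    """Forward-Euler / central-difference integration of Eq. (1) on [0, L]
    with A(0) = 0 and dA/dx = 0 at x = L (the boundary conditions above).
    The noise amplitude sqrt(2*xi*dt/dx) is the naive discretization of the
    correlator <chi chi'> = 2 delta(x-x') delta(t-t'); an assumption here."""
    rng = np.random.default_rng(seed)
    n = int(L / dx) + 1
    A = 1e-2 * rng.standard_normal(n)     # small random initial condition
    A[0] = 0.0
    for _ in range(int(t_max / dt)):
        lap = np.zeros_like(A)
        lap[1:-1] = (A[2:] - 2 * A[1:-1] + A[:-2]) / dx**2
        grad = np.zeros_like(A)
        grad[1:-1] = (A[2:] - A[:-2]) / (2 * dx)
        A += dt * (-c * grad + eps * A + lap + v * A**3 - A**5)
        if xi > 0.0:
            A += np.sqrt(2 * xi * dt / dx) * rng.standard_normal(n)
        A[0] = 0.0            # anchored boundary at x = 0
        A[-1] = A[-2]         # zero-gradient boundary at x = L
    return A

# Parameters of case (1): long runs with eps = 0.18 and eps = 0.23 are expected
# to reproduce the convective and absolute behaviour of Fig. 3, respectively.
A_final = integrate_gl(eps=0.18, c=1.0, v=1.0)
print(np.abs(A_final).max())
```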
We only discuss here situations where the nonlinearities dominate over mean flow effects, thus where linear instability criterion fails.
(1) A first case corresponds to $`c=v=1`$. In this case, the transition from convective to absolute instability should occur at $`ϵ_a=\frac{\sqrt{3}}{8}\simeq 0.21`$. This is illustrated by the numerical results presented in fig. 3. In fig. 3 (a) and (b), we show the deterministic evolution of the field $`A`$ from random initial conditions around $`A=0`$ and for $`ϵ=0.18`$. The data confirm the convective nature of the instability. Effectively, we see, in a first stage, the building of a front between the trivial state and the bifurcating one, and, in a second stage, this front is advected out of the system. On the contrary, for $`ϵ=0.23`$, the instability is absolute, as shown in fig. 3 (c) and (d), where the front moves in the opposite direction, and the bifurcating state invades the system.
The difference between subcritical and supercritical behavior is highlighted in fig. 3 (e) and (f), where the field evolution has been computed with the same parameters as in fig. 3 (c) and (d), except that $`v`$ has been changed from $`+1`$ to $`-1`$ to simulate supercriticality. In this case, the instability should be convective, since the absolute threshold is $`ϵ=0.25`$, and the results are in agreement with this prediction.
The effect of noise in the regime of convective instability is presented in fig. 4. The field dynamics has been computed for the same values of the parameters as in fig. 3 (a), but in the presence of noise of different intensities. The noise intensity has been fixed at $`\xi =10^{-6}`$ in fig. 4 (a) and at $`\xi =10^{-14}`$ in fig. 4 (b). In both cases we observe noise sustained structures: Noise is able to sustain finite field amplitudes (positive or negative, according to the $`+,-`$ symmetry of the system). Weaker noise induces a larger healing length for the pattern. Hence, in the stochastic case, pattern formation is sensitive to system size, since the latter has to be larger than the healing length, for the pattern to be able to develop.
(2) In a second case, we chose $`c=0.5`$, $`v=1.5`$, and this corresponds to the situation presented by Chomaz chomaz , where nonlinear effects dominate ($`v>c\sqrt{3}`$) and where the transition from convective to absolute instability occurs in the subcritical domain, since $`ϵ_a\simeq -0.21`$. For $`ϵ_a<ϵ`$, the instability is absolute, but the dynamics is qualitatively different if $`ϵ`$ is positive or negative. When $`ϵ>0`$, both linear and cubic terms are destabilizing, and the building and propagation of fronts between trivial and bifurcating states is much faster than for $`ϵ<0`$, when the linear term is stabilizing, and the cubic one is destabilizing. When the dynamics becomes very slow, the time and system size needed to see the formation of a front from an initial perturbation become very large, so that even in the absolutely unstable regime one might not observe the decay of the state $`A=0`$ in finite times for a finite system. This effect is illustrated in fig. 5, which corresponds to the absolutely unstable regime. Note the significant increase of the time scales in comparison with figs. 3 and 4, despite the fact that the perturbation of the zero state at the initial time is much larger (see the figure caption). We note that for $`ϵ<0`$ the evolution would still be slower. We first observe the formation of the front (initially moving to the right), and only much later, once it has reached the upper branch, does the front invade the whole system. Hence, when the characteristic length needed for the building of the front is larger than the system size, the instability is effectively convective, although the system should be in the absolute instability regime (in the sense of semi-infinite geometries). For the parameters chosen in this example and for a length $`L<2000`$ one does not observe the decay of the state $`A=0`$.
It is worth noting that the observed finite size effects confirm and complement the analysis made by Chomaz and Couairon againstthewind of fully nonlinear solutions of Ginzburg-Landau equations in finite domains. In case (1), for $`ϵ=0.23`$, nonlinear global (NLG) modes exist, even in finite domains. However, since the basic state $`A=0`$ is linearly absolutely stable, NLG modes only develop if the initial condition is sufficiently large for the transients to reach an order one amplitude in the finite domain. Since the amplification factor increases exponentially with $`L`$, the minimum amplitude of initial perturbations able to trigger the NLG mode decreases exponentially with $`L`$ againstthewind . As a result, the development of NLG modes is almost insensitive, in most practical situations, to system size.
On the contrary, in case (2), the basic state is linearly stable, absolutely and convectively, and the minimum amplitude of initial perturbations able to trigger NLG modes in finite boxes decreases linearly with $`L`$. This is why, in the conditions of our numerical analysis, no global mode is obtained for $`L<2000`$.
## 4 Stability Analysis of the Bifurcating States
### 4.1 Analytical Results
The linear evolution around the upper branch steady states $`R_+`$ and middle branch steady states $`R_{-}`$ is given by eq. (3). The upper states $`R_+`$ are linearly stable for all $`ϵ>-\frac{v^2}{4}`$.
On the other hand, the usual linear instability criterion shows that the $`R_{-}`$ steady states are convectively unstable for $`8R_{-}^2\sqrt{v^2+4ϵ}<c^2`$, and absolutely unstable for $`8R_{-}^2\sqrt{v^2+4ϵ}>c^2`$. In other words, these states are absolutely unstable in the range
$$-\frac{1}{8}\left(v^2+\frac{c^2}{2}+v\sqrt{v^2-c^2}\right)<ϵ<-\frac{1}{8}\left(v^2+\frac{c^2}{2}-v\sqrt{v^2-c^2}\right)$$
(9)
and convectively unstable in the windows defined by
$$-\frac{v^2}{4}<ϵ<-\frac{1}{8}\left(v^2+\frac{c^2}{2}+v\sqrt{v^2-c^2}\right)$$
(10)
and
$$-\frac{1}{8}\left(v^2+\frac{c^2}{2}-v\sqrt{v^2-c^2}\right)<ϵ<0$$
(11)
Hence, when $`v^2<c^2`$, these steady states are always linearly convectively unstable. Still, when $`v^2>c^2`$, there is a range of linear absolute instability in the middle of their domain of existence, and a range of linear convective instability close to the points where these states disappear. This is shown in fig. 6.
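The boundaries of the windows (9)-(11) are easily tabulated; the small helper below is an illustrative sketch (not part of the original analysis) and reproduces the thresholds used in the numerical study of the next subsection.

```python
import numpy as np

def middle_branch_windows(v, c):
    """Return the absolute range and convective windows of Eqs. (9)-(11)
    for the R_- branch; None if v**2 <= c**2 (purely convective)."""
    if v**2 <= c**2:
        return None
    s = v * np.sqrt(v**2 - c**2)
    lo = -(v**2 + c**2 / 2 + s) / 8.0
    hi = -(v**2 + c**2 / 2 - s) / 8.0
    return {"absolute": (lo, hi),
            "convective": [(-v**2 / 4.0, lo), (hi, 0.0)]}

print(middle_branch_windows(1.0, 0.55))  # eps = -0.09 lies in the absolute range
print(middle_branch_windows(1.0, 1.0))   # v**2 = c**2: only convectively unstable
```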
Nevertheless the linear stability criteria may fail in the presence of destabilizing nonlinearities. This is not only the case for the evolution of the perturbations of the trivial steady state since the bifurcation is subcritical, but it may also be the case for the perturbations around the middle steady state branch, whose evolution is given by
$$\partial _t\rho _{-}+c\partial _x\rho _{-}=+2R_{-}^2\sqrt{v^2+4ϵ}\rho _{-}+\partial _x^2\rho _{-}-R_{-}(2v-5\sqrt{v^2+4ϵ})\rho _{-}^2-(4v-5\sqrt{v^2+4ϵ})\rho _{-}^3-5R_{-}\rho _{-}^4-\rho _{-}^5$$
(12)
The quadratic nonlinearity is destabilizing for $`ϵ<ϵ_L=-0.21v^2`$. In such cases, one has to perform a nonlinear analysis of the dynamics to determine the convective or absolute nature of the instability.
In the regime where the nonlinearities of the evolution equation (12) are stabilizing, i.e. for $`ϵ_L=-0.21v^2<ϵ<0`$, the results of the linear analysis may be assumed to be valid. Hence, we may safely rely on these results above the metastability point, i.e. for $`ϵ_M=-(3/16)v^2<ϵ<0`$. Below the metastability point, i.e. for $`-0.25v^2<ϵ<-(3/16)v^2`$, one has to perform a nonlinear analysis, which, in this case, relies on the evolution of fronts between middle-branch states and the trivial steady state. We do not perform this analysis here since it would affect the results presented above only quantitatively, not qualitatively.
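As a cross-check (not performed in the paper), the coefficients of eq. (12) and the threshold $`ϵ_L`$ can be verified numerically, assuming reaction terms of the form $`ϵA+vA^3-A^5`$, which is the form whose steady states and expansion coefficients match those quoted above; the parameter values are the ones of the example in section 4.2.

```python
import numpy as np

v, eps = 1.0, -0.09                  # example values used in section 4.2
s = np.sqrt(v**2 + 4 * eps)          # s = 0.8
Rm = np.sqrt((v - s) / 2.0)          # middle-branch amplitude, Rm**2 = 0.1

# Linear and quadratic Taylor coefficients of f(A) = eps*A + v*A**3 - A**5
# around A = Rm, compared with the expressions appearing in eq. (12).
lin = eps + 3 * v * Rm**2 - 5 * Rm**4
quad = 3 * v * Rm - 10 * Rm**3
print(np.isclose(lin, 2 * Rm**2 * s))           # True
print(np.isclose(quad, -Rm * (2 * v - 5 * s)))  # True

# The quadratic coefficient changes sign where 2*v = 5*sqrt(v**2 + 4*eps):
eps_L = ((2 * v / 5)**2 - v**2) / 4.0
print(eps_L)                                    # -0.21, i.e. -0.21*v**2 for v = 1
```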
### 4.2 Dynamics of the Subcritical Unstable Branch
We have numerically confirmed the convective nature of the instability of the subcritical middle branch. The numerical integration has been performed as indicated in subsection 3.2. Also, as indicated in that subsection, to observe a convective instability we consider a semi-infinite system with one of the boundaries anchored to the unstable state. Here we have to take $`A(x=0)=R_{-}`$, corresponding to the field amplitude of the subcritical middle branch. Experimentally, this boundary condition cannot be achieved as easily as before, because there is no value of the control parameter $`ϵ`$ for which $`A(x)=R_{-}`$ is a homogeneous stable steady state. However, depending on the system, it can be imposed in different ways. In an optical system, for example, the left boundary condition could be achieved by injecting an external field at $`x=0`$ with the appropriate amplitude. Finally, as in subsection 3.2, the right boundary condition is taken as $`\partial _xA=0`$ at $`x=L`$.
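A minimal explicit integrator reproducing this setup is sketched below. It assumes a real amplitude obeying $`\partial _tA+c\partial _xA=ϵA+vA^3-A^5+\partial _x^2A`$, which is consistent with the steady states and with eq. (12) but is not necessarily the exact model or scheme of subsection 3.2; the grid, integration time and initial perturbation are arbitrary illustrative choices.

```python
import numpy as np

v, eps, c = 1.0, -0.09, 1.0                    # try c = 0.55 for the absolute regime
s = np.sqrt(v**2 + 4 * eps)
Rm = np.sqrt((v - s) / 2.0)                    # middle-branch amplitude

L, N = 400.0, 2000
dx = L / N
dt = 0.2 * dx**2                               # small enough for this explicit scheme
x = np.linspace(0.0, L, N + 1)

A = Rm + 1e-3 * np.exp(-(x - 50.0)**2 / 10.0)  # small localized perturbation of R_-
for _ in range(int(200.0 / dt)):
    A[0] = Rm                                  # left boundary anchored to the middle branch
    A[-1] = A[-2]                              # dA/dx = 0 at x = L
    adv = -c * (A[1:-1] - A[:-2]) / dx         # first-order upwind advection (c > 0)
    diff = (A[2:] - 2 * A[1:-1] + A[:-2]) / dx**2
    reac = eps * A[1:-1] + v * A[1:-1]**3 - A[1:-1]**5
    A[1:-1] = A[1:-1] + dt * (adv + diff + reac)

# Leftmost edge of the perturbed region: it drifts downstream for c = 1
# (convective) and moves back towards x = 0 for c = 0.55 (absolute).
print(x[np.argmax(np.abs(A - Rm) > 0.05)])
```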
We computed the evolution from an initial steady state with $`R_{-}^2=0.1`$ on the middle branch, which corresponds to $`v=1`$ and $`ϵ=-0.09`$. We then studied the system dynamics for different values of the group velocity $`c`$ (a short consistency check is given after the list below). According to the previous discussion, for
(1) $`c<0.8`$, the state $`R_{-}`$ should be absolutely unstable;
(2) $`c>0.8`$, the state $`R_{-}`$ should be convectively unstable.
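As a quick consistency check (not in the original text), the criterion of section 4.1 gives $`8R_{-}^2\sqrt{v^2+4ϵ}=8\times 0.1\times 0.8=0.64`$, so the boundary between the two regimes indeed lies at $`c=\sqrt{0.64}=0.8`$.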
In fig. 7 (a), we present, for $`c=1`$, the results obtained for the deterministic evolution of an initial perturbation of the state $`R_{-}`$. They show that the instability is effectively convective. On lowering the group velocity from $`c=1`$ to $`c=0.55`$, the nature of the instability changes from convective to absolute, as expected, and as shown in fig. 7 (b). These results confirm that the middle branch, which is absolutely unstable for $`c=0`$, may be stabilized by mean flow effects in deterministic systems, in the sense that there is a range of parameters in which it is only convectively unstable.
The effect of noise in the convectively unstable regime of the trivial state was to sustain a structure continuously excited by the noise. In the case of the middle branch $`R_{-}`$, when it is convectively unstable, noise forces the system to relax randomly to either of the two coexisting stable branches, as shown in fig. 8. Still, if the noise is weak in comparison with the strength needed to see its effect in a finite system, one would observe the middle branch as effectively stable.
## 5 Conclusions
In this paper, we considered systems described by the subcritical Ginzburg-Landau equation, and analyzed some problems related to the effect of group velocities on the stability of its steady states. In the case of the trivial steady state, it is known that the transition between convective and absolute linear instability regimes is shifted by the effect of destabilizing nonlinearities, and the corresponding nonlinear absolute instability threshold may easily be computed for semi-infinite systems chomaz ; couairon . Our numerical study of the evolution of perturbations from the trivial steady state in finite systems shows that, in a first step, a front is built between this state and the bifurcating one, which corresponds to the upper branch of steady states. Then, according to the magnitude of the group velocity, the front moves outwards or inwards, which corresponds to convective or absolute instability, respectively. When the characteristic length needed for the building of the front is shorter than the system size, the nature of the instability is in agreement with the theoretical predictions made for semi-infinite systems. However, our numerical results show that, if the characteristic building length of the front is larger than the system size, one will never see inward motion of the front, and, in this case, even above the absolute instability threshold, the instability is effectively convective.
We also studied the instability of the subcritical middle branch of steady states, a problem that had not been addressed up to now. It may be shown, already at the level of a linear analysis, that this branch, which is absolutely unstable without group velocity, may become entirely convectively unstable in the presence of group velocities larger than some well-defined critical value. This result has been confirmed by the numerical analysis of the evolution of perturbations of steady states on this branch. The stabilization of such steady states has effectively been obtained, in deterministic systems, for group velocities in the predicted range. In stochastic systems, however, these steady states relax to one of the stable branches, as expected. Nevertheless, for this relaxation to occur, either the noise strength or the system size has to be large enough. This effect may be of practical importance, for example, in binary fluid convection, where, besides the fact that the role of subcriticality is not clearly understood yet kolodner , the presence of natural or forced mean flows, or group velocities, could effectively stabilize otherwise unstable branches of steady states.
## 6 Acknowledgements
Financial support from DGICYT (Spain) Project PB94-1167 is acknowledged. DW is supported by the Belgian National Fund for Scientific Research. The authors also acknowledge helpful discussions with W. van Saarloos.
# Renormalization for Discrete Optimization
## Discussion —
For both the traveling salesman and the spin glass problems, our genetic renormalization algorithm finds solutions of far better quality than those found by local search. In a more general context, our approach may be considered a systematic way to improve upon state-of-the-art local searches. A key to this good performance is the treatment of multiple scales by renormalization and recursion. The use of a population of configurations then allows us to self-consistently optimize the problem on all scales. Just as in divide-and-conquer strategies, combinatorial complexity is handled by considering a hierarchy of problems. But contrary to those strategies, information in our system flows both from small scales to large scales and back. Clearly such a flow is necessary, as a choice or selection of a pattern at small scales may be validated only at much larger scales.
In this work, we put such principles together in a simple manner; nevertheless, the genetic renormalization algorithm we obtained compared very well with the state-of-the-art heuristics specially developed for the problems investigated. Improvements in the population dynamics and in the local search can make our approach even more powerful. We thus expect genetic renormalization algorithms to become widely used in the near future, both in fundamental and applied research.
###### Acknowledgements.
We are grateful to O. Bohigas for his comments and to J. Mitchell for providing the spin glass exact solutions. O. C. M. acknowledges support from the Institut Universitaire de France. J. H. acknowledges support from the M.E.N.E.S.R.
## 1 Introduction
The centre-of-mass energy of the large electron-positron (LEP) collider increased to 161 GeV in 1996, allowing W-pair production in $`\mathrm{e}^+\mathrm{e}^{-}`$ annihilation for the first time. This marked the start of the LEP2 programme. The energy has been further increased in a series of steps since then. A primary goal of LEP2 is to measure the W-boson mass, $`M_\mathrm{W}\approx 80.4`$ GeV. The beam energy sets the absolute energy scale for this measurement, leading to an uncertainty of $`\mathrm{\Delta }M_\mathrm{W}/M_\mathrm{W}\approx \mathrm{\Delta }E_{\mathrm{beam}}/E_{\mathrm{beam}}`$. With the full LEP2 data sample, the statistical uncertainty on the W mass is expected to be around 25 MeV. To avoid a significant contribution to the total error, this sets a target of $`\mathrm{\Delta }E_{\mathrm{beam}}/E_{\mathrm{beam}}\approx 10^{-4}`$, i.e. 10 to 15 MeV uncertainty for a beam energy around 90 GeV. This contrasts with LEP1, where the Z mass was measured with a total relative uncertainty of about $`2\times 10^{-5}`$.
The derivation of the centre-of-mass energy proceeds in several stages. First, the average beam energy around the LEP ring is determined. The overall energy scale is normalised with respect to a precise reference in occasional dedicated measurements during each year’s running. Time variations in the average beam energy are then taken into account. Further corrections are applied to obtain the $`\mathrm{e}^+`$ and $`\mathrm{e}^{}`$ beam energies at the four interaction points, and the centre-of-mass energy in the $`\mathrm{e}^+\mathrm{e}^{}`$ collisions. These procedures are elaborated below.
At LEP1, the average beam energy was measured directly at the physics operating energy with a precision of better than 1 MeV by resonant depolarisation (RD). The spin tune, $`\nu `$, determined by RD, is proportional to the beam energy averaged around the beam trajectory:
$$\nu =\frac{g_e-2}{2}\frac{E_{\mathrm{beam}}}{m_ec^2}$$
(1)
Both $`\nu `$ and $`E_{\mathrm{beam}}`$ are also proportional to the total integrated vertical magnetic field, $`B`$, around the beam trajectory, $`\ell `$:
$$E_{\mathrm{beam}}=\frac{e}{2\pi c}\oint _{\mathrm{LEP}}B\,d\ell $$
(2)
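For orientation, eq. (1) amounts to a simple proportionality between spin tune and average beam energy. The constants in the sketch below are the standard values of $`(g_e-2)/2`$ and $`m_ec^2`$, not numbers taken from this paper.

```python
# Conversion between spin tune and beam energy according to eq. (1).
a_e = 1.159652e-3        # (g_e - 2)/2, standard value
me_c2 = 0.510999e-3      # electron rest energy in GeV, standard value

def spin_tune(E_beam_GeV):
    return a_e * E_beam_GeV / me_c2

def beam_energy(nu):
    return nu * me_c2 / a_e

print(spin_tune(44.7))     # ~101.4 for a typical RD energy of 44.7 GeV
print(beam_energy(101.0))  # ~44.5 GeV; one unit of spin tune corresponds to ~0.44 GeV
```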
Unfortunately, the RD technique cannot be used in the LEP2 physics regime, because depolarising effects increase sharply with beam energy, leading to an insufficient build-up of transverse polarisation to make a measurement. In 1997, the highest energy measured by RD was 55 GeV.
The beam energy in LEP2 operation is therefore determined from an estimate of the field integral derived from continuous magnetic measurements by 16 NMR probes situated in some of the 3200 LEP main bend dipoles. These probes are read out during physics running and RD measurements, and they function at any energy above about 41 GeV. Although they only sample a small fraction of the field integral, the relation between their readings and the beam energy can be precisely calibrated against RD measurements in the beam energy range 41 to 55 GeV.
The relation between the fields measured by the NMR probes and the beam energy is assumed to be linear, and to be valid up to physics energies. Although the linearity can only be tested over a limited range with the RD data themselves, a second comparison of the NMR readings with the field integral is available. A flux loop is installed in each LEP dipole magnet, and provides a measurement of 96.5% of the field integral. Flux loop experiments are performed only occasionally, without beam in LEP. The change in flux is measured during a dedicated cycling of the magnets. The local dipole bending fields measured by the NMR probes are read out at several steps in the flux-loop cycle, over the full range from RD to physics energies. This provides an independent test of the linearity of the relation between the probe fields and the total bending field.
The use of the NMR probes to transport the precise energy scale determined by RD to the physics operating energy is the main novelty of this analysis. The systematic errors on the NMR calibration are evaluated from the reproducibility of different experiments, and the variations from probe to probe. The dominant uncertainty comes from the quality of the linearity test with the flux loop.
At LEP2, with 16 NMR probes in the LEP tunnel, time variations in the dipole fields provoked by leakage currents from neighbourhood electric trains and due to temperature effects can be accounted for directly. This is in contrast to the LEP1 energy measurement, where understanding the time evolution of the dipole fields during a LEP fill formed a major part of the analysis .
The NMR probes and the flux loop measure only the magnetic field from the LEP dipoles, which is the main contribution to the field integral. The LEP quadrupoles also contribute to the field integral when the beam passes through them off axis, which occurs if for any reason the beam is not on the central orbit. The total orbit length is fixed by the RF accelerating frequency. Ground movements, for example due to earth tides or longer time scale geological effects, move the LEP magnets with respect to this fixed orbit . At LEP2, deliberate changes in the RF frequency away from the nominal central value are routinely used to optimise the luminosity by reducing the horizontal beam size. This can cause occasional abrupt changes in the beam energy. Orbit corrector magnets also make a small contribution to the total bending field. All of these corrections must be taken into account both when comparing the NMR measurements with the RD beam energies, and in deriving the centre-of-mass energy of collisions as a function of time.
The exact beam energy at a particular location differs from the average around the ring because of the loss of energy by synchrotron radiation in the arcs, and the gain of energy in the RF accelerating sections; the total energy lost in one revolution is about 2 GeV at LEP2. The $`\mathrm{e}^+`$ and $`\mathrm{e}^{}`$ beam energies at each interaction point are calculated taking into account the exact accelerating RF configuration. The centre-of-mass energy at the collision point can also be different from the sum of the beam energies due to the interplay of collision offsets and dispersion. The centre-of-mass energies for each interaction point are calculated every 15 minutes, or more often if necessary, and these values are distributed to the LEP experiments.
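As a rough cross-check of the 2 GeV figure quoted above, the standard synchrotron-radiation estimate $`U_0[\mathrm{GeV}]\approx 8.85\times 10^{-5}E_{\mathrm{beam}}^4[\mathrm{GeV}]/\rho [\mathrm{m}]`$, evaluated with $`E_{\mathrm{beam}}=91.5`$ GeV and the nominal LEP dipole bending radius $`\rho \approx 3026`$ m (a value assumed here rather than taken from this paper), gives $`U_0\approx 2.0`$ GeV per turn.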
In the following section, the data samples and magnetic measurements are described. The beam energy model is outlined in section 3. The calibration of the NMR probes and the flux-loop test are described in section 4. More information on corrections to the beam energy from non-dipole effects is given in section 5, and on IP specific corrections to derive the centre-of-mass energy in section 6. The systematic uncertainties for the whole analysis are summarised in section 7. The evaluation of the instantaneous spread in centre-of-mass energies is given in section 8. In the conclusion, the prospects for future improvement are also outlined.
## 2 Data samples
### 2.1 Luminosity delivered by LEP2
LEP has delivered about 10 pb⁻¹ at each of two centre-of-mass energies, 161 and 172 GeV, in 1996, and over 50 pb⁻¹ at a centre-of-mass energy of around 183 GeV in 1997. Combining the data from all four LEP experiments, these data give a measurement of the W mass with a precision of about 90 MeV. This paper emphasises the 1997 energy analysis, with some information for 1996 where relevant.
### 2.2 Polarisation measurements
The successful RD experiments in 1996 and 1997 are listed in table 1. To reduce uncertainties from fill-to-fill variations, an effort was made to measure as many beam energies as possible with RD during the same LEP fill. Measuring two energies in the same fill was first achieved at the end of 1996. The need for more RD measurements motivated the “k-modulation” programme to measure the offsets between beam pick-ups and quadrupole centres, the improved use of magnet position surveys, and the development of a dedicated polarisation optics (the 60/60 optics; optics are designated by the betatron advance between focusing quadrupoles of the LEP arcs in the horizontal/vertical planes respectively). These were all used in 1997. Improving the orbit quality and reducing depolarising effects in this way resulted in 5 fills with more than one energy point, and 2 fills with 4 energy points, which allow a check of the assumption that the measured magnetic field is linearly related to the average beam energy. The range over which tests can be made increased from 5 to 14 GeV between 1996 and 1997, and the maximum RD calibrated energy increased from 50 to 55 GeV. At least a 4–5% level of polarisation is needed to make a reliable measurement, but only a 2% level of polarisation was observed at 60 GeV in 1997.
### 2.3 Magnetic measurements
The LEP dipole fields are monitored continuously by NMR probes, and in occasional dedicated measurements by the flux loop. A total of 16 probes was installed for the 1996 LEP run. The probes are positioned inside selected main bend dipoles, as indicated in figure 1. Each octant has at least one probe, and octants 1 and 5 have strings of probes in several adjacent dipoles (and in one instance two probes in the same dipole). The probes measure the local magnetic field with a precision of around $`10^{-6}`$, and they can be read out about every 30 seconds. Each probe only samples the field in a small region of one out of 3200 dipoles. A steel field plate is installed between each probe and the dipole yoke to improve the uniformity of the local magnetic field. During normal physics running and RD measurements, the probe readings over five minute time intervals are averaged. This reduces the effect of fluctuations in the magnetic fields induced by parasitic currents on the beam pipe (see section 3). The probes are also read out during flux-loop measurements, as described below.
In 1995 there were only two probes in dipoles in the LEP ring, and prior to 1995, only a reference magnet powered in series with the main dipoles was monitored . The larger number of available probes in 1996 and 1997 has allowed a simplification of the treatment of the dipole magnetic field evolution during a fill (see section 3).
The performance of the probes is degraded by synchrotron radiation at LEP2. A new probe gives a strong enough signal to lock on to and measure for fields corresponding to beam energies above about 40 GeV. During a year’s running, this minimum measurable field gradually increases, and the probes eventually have to be replaced. However, if a stable frequency lock is achieved, then the value of the field measured is reliable; only the range of measurable fields is compromised. During 1996 and 1997, all of the probes were working at high energy, with the exception of two probes, one in each multi-probe octant, which were not available for the running at 172 GeV centre-of-mass energy.
The flux loop is also shown schematically in figure 1. In contrast to the NMR probes, the flux loop samples 98% of the field of each main bend dipole, excluding fringe fields at the ends, corresponding to 96.5% of the total bending field of LEP. The loop does not include the weak (10% of normal strength) dipoles at the ends of the arcs, the double-strength dipoles in the injection regions, or other magnets such as quadrupoles or horizontal orbit correctors.
The flux loop measures the change in flux during a dedicated magnet cycle, outside physics running, and the corresponding change in the average main dipole bending field, $`B_{\mathrm{FL}}`$, is calculated. Five of these measurements were made in 1997. The gradual increase of the magnetic field during the cycle is stopped for several minutes at a number of intermediate field values, each of which corresponds to a known nominal beam energy, to allow time for the NMR probes to lock and be read out. The values chosen include the energies of RD measurements and physics running. The average of the good readings for each probe is calculated for each step. These special flux-loop measurements are referred to by an adjacent LEP fill number (one measurement in fills 4000, 4121 and 4206 and two measurements in fill 4434). This possibility to cross-calibrate the field measured by the flux loop and by the NMR probes is crucial to the analysis.
## 3 The beam energy model
The LEP beam energy is calculated as a function of time according to the following formula:
$$E_{\mathrm{beam}}(t)=(E_{\mathrm{initial}}+\mathrm{\Delta }E_{\mathrm{dipole}}(t))(1+C_{\mathrm{tide}}(t))(1+C_{\mathrm{orbit}})(1+C_{\mathrm{RF}}(t))(1+C_{\mathrm{h}.\mathrm{corr}.}(t))(1+C_{\mathrm{QFQD}}(t))$$
(3)
The first term, $`E_{\mathrm{initial}}`$, is the energy corresponding to the dipole field integral at the point when the dipoles reach operating conditions, i.e. after the beams have been “ramped” up to physics energy, and after any bend modulation has been performed (see below). For RD fills, $`E_{\mathrm{initial}}`$ is calculated after each ramp to a new energy point. The shift in energy caused by changes in the bending dipole fields during a fill is given by $`\mathrm{\Delta }E_{\mathrm{dipole}}(t)`$.
Both of these “dipole” terms are averages over the energies predicted by each functioning NMR probe. For the initial energy, equal weight is given to each probe, since each gives an independent estimate of how a magnet behaves as the machine is ramped to physics energy. However, for the change in energy during a fill, equal weight is given to each octant. This gives a more correct average over the whole ring for dipole rise effects provoked by temperature changes, and by parasitic electrical currents on the beam pipe caused by trains travelling in the neighbourhood.
Modelling the dipole energy using the 16 NMR probes has simplified the treatment compared to LEP1, where only two probes were available in the tunnel in 1995, and none in earlier years. For LEP1, $`E_{\mathrm{initial}}`$ was derived from comparisons with RD measurements, and the rise in a fill from a model of the train and temperature effects.
The dipole rise effects are minimised by bend modulation, i.e. a deliberate small amplitude variation of the dipole excitation currents after the end of the ramp. These were not recommissioned for the 1996 running, but in 1997 were carried out routinely for physics fills from fill 3948 on 5 August.
The remaining terms correct for other contributions to the integral bending field, and are listed below. They are discussed in section 5, and in more detail in reference . These terms must also all be taken into account when comparing the energy measured at a particular time by RD with the magnetic field measured in the main dipoles by the NMR probes.
$`C_{\mathrm{tide}}(t)`$: This accounts for the effect of earth tides which change the size of the LEP ring, effectively moving the quadrupole magnets with respect to the fixed-length beam orbit.
$`C_{\mathrm{orbit}}`$: This is evaluated once for each LEP fill. It corrects for distortions of the ring geometry on a longer time scale according to the measured average horizontal orbit displacement.
$`C_{\mathrm{RF}}(t)`$: Regular changes in the RF frequency away from the nominal central frequency are made to optimise the luminosity, leading to this correction.
$`C_{\mathrm{h}.\mathrm{corr}.}(t)`$: This accounts for changes in the field integral from horizontal orbit corrector magnets used to steer the beam.
$`C_{\mathrm{QFQD}}(t)`$: Stray fields are caused when different excitation currents are supplied to focussing and defocussing quadrupoles in the LEP lattice. These are taken into account by this term.
## 4 Calibration of NMR probes
### 4.1 Calibration of NMR probes with RD measurements
The magnetic fields $`B_{\mathrm{NMR}}^i`$ measured by each of the NMR probes, $`i=1`$ to 16, are converted into an equivalent beam energy. The relation is assumed to be linear, of the form
$$E_{\mathrm{NMR}}^i=a^i+b^iB_{\mathrm{NMR}}^i.$$
(4)
In general, the beam energy is expected to be proportional to the integral bending field. The two parameters for each probe are determined by a combined fit to all of the energies measured by resonant depolarisation. The NMR probes only give an estimate of the dipole contribution to the integral bending field, so all the other effects, such as those due to coherent quadrupole motion, must be taken into account according to equation 3 in order to compare with the energy measured by RD. A further complication arises because two different weighted averages over probes are used to derive $`E_{\mathrm{initial}}`$ and $`\mathrm{\Delta }E_{\mathrm{dipole}}(t)`$, so in practice an iterative procedure is used. The average offset, $`a`$, is 27 MeV, with an rms spread over 16 NMR probes of 64 MeV. The average slope, $`b`$, is 91.17 MeV/Gauss, with an rms spread of 0.25 MeV/Gauss over 16 probes.
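The per-probe calibration of eq. (4) is a straight-line fit followed by an extrapolation to the field measured in physics. The sketch below illustrates the procedure only; the field and energy values are invented placeholders, not data from the paper.

```python
import numpy as np

# RD calibration points for one probe (illustrative numbers only).
B_rd = np.array([449.4, 493.3, 548.1, 603.0])      # probe field readings (Gauss)
E_rd = np.array([41000., 45000., 50000., 55000.])  # RD beam energies (MeV)

b, a = np.polyfit(B_rd, E_rd, 1)   # slope ~91 MeV/Gauss, offset of a few tens of MeV
B_physics = 1003.3                 # field read out during physics running (Gauss, placeholder)
print(a + b * B_physics)           # extrapolated beam energy in MeV, eq. (4)
```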
The residuals, $`E_{\mathrm{pol}}-E_{\mathrm{NMR}}^i`$, are examined for each NMR. The residuals evolve with beam energy in a different way for different probes, but for a particular probe this behaviour is reproduced from fill to fill. The residuals averaged over NMR probes at each polarisation point are shown in figure 2(a), in which the errors are displayed as the rms/$`\sqrt{N}`$, where $`N\le 16`$ is the number of NMR probes functioning for the measurement. This figure shows the average residuals with respect to the simultaneous fit to all polarisation fills in 1997, which was used to calibrate the NMR probes. In figure 2(b), the residuals for the two fills with four RD energies are shown. Here the fit is made to each fill individually. The residuals show a reproducible small but statistically significant deviation from zero, with the 45 and 50 GeV points being a few MeV higher than those at 41 and 55 GeV. Despite some fill-to-fill scatter, this shape is present in all fills, not just the two fills with four RD energies.
### 4.2 Predicted energy for physics running
Using the calibration coefficients determined in section 4.1, the magnetic field measured by each NMR during physics running can be used to predict the beam energy. To assess the variations over the NMR probes, the average magnetic field is calculated for each NMR over all of the physics running at a nominal beam energy of 91.5 GeV. The average physics energies derived from these average fields have an rms scatter over the NMR probes of about 40 MeV, contributing $`40/\sqrt{16}=10`$ MeV to the systematic uncertainty from the normalisation procedure.
In 1996, the limited number of available RD measurements were fitted to a line passing through the origin, $`E_{\mathrm{pol}}=p^iB_{\mathrm{NMR}}^i`$. If this is tried for the 1997 data, the rms scatter increases to 60 MeV and the central value shifts by around 20 MeV. This is taken into account in evaluating the uncertainty for the 1996 data, as discussed in section 7.1. The reduced scatter for the two parameter fit can be taken to imply that a non-zero offset improves the description of the energy-magnetic field relation, or that it is an advantage to impose the linearity assumption only over the region between polarisation and physics energies.
Using different polarisation fills as input to the procedure gives some variation. Fill 4372 has the most unusual behaviour. This is partly because it only samples the two lower energy points, which have a different average slope to the four point fills. This fill also has the smallest number of functioning NMR probes, since it is at the end of the year. It therefore has little weight in the overall average. An uncertainty of 5 MeV is assigned to cover the range of central values derived using different combinations of polarisation fills.
The average residuals of the fit show a characteristic shape which is a measure of the non-linearity in the beam energy range 41–55 GeV. The amplitude of the deviations is larger than the statistical scatter over the NMR probes. The linearity is best examined by making fits to the individual 4 point fills. If the errors are inflated to achieve a $`\chi ^2`$/dof of 1, then they imply an uncertainty at physics energy of 7 MeV for the linear extrapolation. No additional uncertainty is included to account for this observation, because it is covered by the larger uncertainty assessed in section 4.5, where the linearity assumption is tested by a comparison of NMR and flux-loop measurements.
### 4.3 Initial fields for physics fills
The estimates of $`E_{\mathrm{initial}}`$, the initial energy from the dipole contribution to the bending field, for all fills with a nominal centre-of-mass energy of 183 GeV are shown in figure 3. There is a change in initial field after 5 August, attributed to the implementation of bend modulation at the start of each fill. A small drift in the dipole excitation current for the same nominal setting was observed during the year, and one or two fills have anomalous initial values due to known incorrect setting of the excitation current. The overall rms spread in $`E_{\mathrm{initial}}`$ is 11 MeV for 148 fills. The error on the mean beam energy from these variations and from other rare anomalies at the starts of fills is taken to be 2 MeV.
### 4.4 Uncertainty due to dipole rise
The average of the NMR probes is used to take into account changes in energy due to temperature and parasitic currents which cause the dipole bend field to change. Bend modulation was not performed in 1996, but in 1997 was routinely carried out at the start of each fill from 5 August (fill 3948). The average rise in dipole field during a fill was therefore smaller in 1997 than in 1996: the total dipole rise effect on the average beam energy was only 3.5 MeV. From experience at LEP1, and the fact that the 16 NMR probes give a good sampling of the whole ring, the uncertainty is expected to be less than 25% of the effect, so 1 MeV is assigned as the uncertainty due to the dipole rise.
### 4.5 Test of NMR calibration using the flux loop
The estimate of the integral dipole field from the NMR probes can also be compared with the measurement of 96.5% of the total bending field by the flux loop. This allows a test of the entire extrapolation method. From a fit in the 41–55 GeV region, the NMR probes can be used to predict the average bending field measured by the flux loop at the setting corresponding to physics energy. If the NMR probes can predict the flux-loop field, and the beam energy is proportional to the total bending field, then it is a good assumption that the probes are also able to predict the beam energy in physics. The flux loop can not be used to predict the beam energy in physics directly, since neither the slope nor the offset of the relationship between measured field and beam energy are known with sufficient precision. However, each point in the flux loop does correspond to a specific setting of the nominal beam energy. The flux-loop measurement is from 7 to 100 GeV.
For the test to be valid, a strong correlation should be observed between the offsets $`a^i`$ and $`c^i`$, and between the slopes $`b^i`$ and $`d^i`$, from the fits of the field measured by each probe, $`i`$, to the polarisation and flux-loop data:
$`E_{\mathrm{pol}}=a^i+b^iB_{\mathrm{NMR}}^i`$ and (5)
$`B_{\mathrm{FL}}=c^i+d^iB_{\mathrm{NMR}}^i`$ $`\text{fit restricted to 41–55 GeV}.`$ (6)
The fitted parameters for each NMR are shown in figure 4, and the expected correlation is seen. The average offset, $`c`$, is $`-79.38`$ Gauss, with an rms spread over the 16 values of 0.67. This offset corresponds to the 7 GeV nominal beam energy setting at the start of the flux-loop cycle. The average slope, $`d`$, is 0.9811, with an rms spread over 16 NMR probes of 0.0027. The field plates cause the slope to be 2% different from unity.
The residuals with respect to equation 6 above the fit region (41–55 GeV) are used to test the linearity assumption. The residual difference in Gauss between the flux-loop and NMR fields at a particular beam energy can be converted to a residual bias in MeV by a scale factor of 92.9 MeV/Gauss (corresponding to the ratio of average slopes, $`b/d`$).
An example fit to one flux-loop measurement is shown in figure 6. The average over the NMR probes of the scaled residuals to the fits to equation 6 are shown at each nominal beam energy. The error bars show the rms scatter divided by the square root of the number of working probes. The fit is in the range 41–55 GeV, and the error bars increase above this region. The deviations measured at the physics energy of 91.5 GeV for each of the five flux-loop measurements are shown in figure 6, using the same conventions. The probes in different magnets show a different evolution of the residuals as the nominal beam energy increases. However, for a particular magnet the behaviour is similar for each flux-loop measurement.
The average bias at physics energy is up to $`20`$ MeV, with an rms over the probes of 30–40 MeV, corresponding to an uncertainty of 10–20 MeV depending on the number of working probes. The size of the bias tends to increase during the year; the bias becomes more negative. This is partially understood as being due to the smaller sample of NMR probes available in the latter part of the year, as is also illustrated in figure 6. To account for the correlated uncertainties from measurement to measurement, the difference in bias between the first and last measurements has been found for all of the probes that are common to the two. A significant average difference of $`21\pm 5`$ MeV is observed, for which no explanation has been found. In fact, only the last two flux loops include a 41 GeV point, but the biases measured are not sensitive to excluding this point altogether. Other systematic effects that have been observed, for example a discrepancy of around 2 MeV if two very close by energy points are measured, are too small to explain the trend.
A detailed comparison of NMR and flux-loop data in the region of the RD measurements (41–55 GeV), shown in figure 7, reveals a different non-linearity to the NMR–polarisation comparison. However, the subset of probes included here is not exactly the same as in figure 2, and the average residuals to the flux-loop fits are only a few MeV which is at the limit of the expected precision.
The tests with the flux loop are not used to correct the NMR calibration from polarisation data, but are taken as an independent estimate of the precision of the method. A systematic uncertainty of 20 MeV covers the maximum difference seen during the year. An average over the flux-loop measurements would give a smaller estimate.
### 4.6 Uncertainty from bending field outside the flux loop
The flux loop is embedded in each of the main LEP bending dipoles, but only samples 98% of the total bending field of each dipole. The effective area of the flux loop varies during the ramp because the fraction of the fringe fields overlapping neighbouring dipoles varies. The saturation of the dipoles, expressed as the change in effective length, was measured before the LEP startup on a test stand for different magnet cycles. The correction between 45 and 90 GeV is of the order of $`10^{-4}`$, corresponding to a 5 MeV uncertainty in the physics energy.
The weak (“10%”) dipoles matching the LEP arcs to the straight sections contribute 0.2% to the total bending field. Assuming that their field is proportional to that of the main bends between RD and physics energies to better than 1%, their contribution to any non-linearity in the extrapolation is around 1 MeV.
The bending field of the double strength dipoles in the injection region contributes 1.4% of the total. Their bending field has been measured by additional NMR probes installed in the tunnel in 1998, and found to be proportional to the bending field of the main dipoles to rather better than $`10^{-3}`$, which gives a negligible additional systematic uncertainty.
## 5 Quadrupole and horizontal orbit corrector effects
### 5.1 Earth tides
The model of earth tides is well understood from LEP1 . It should be noted that the amplitude of the tide effect is proportional to energy, and so is larger at LEP2.
### 5.2 Central Frequency and Machine Circumference
For a circular accelerator like LEP the orbit passing on average through the centre of the quadrupoles is referred to as the central orbit, and the corresponding RF frequency setting is known as the central RF frequency $`f_\mathrm{c}^{\mathrm{RF}}`$. When the RF frequency $`f^{\mathrm{RF}}`$ does not coincide with $`f_\mathrm{c}^{\mathrm{RF}}`$ the beam senses on average a dipole field in the quadrupoles, which causes a relative beam energy change $`\mathrm{\Delta }E`$ of :
$$\frac{\mathrm{\Delta }E}{E}=-\frac{1}{\alpha }\frac{f^{\mathrm{RF}}-f_\mathrm{c}^{\mathrm{RF}}}{f^{\mathrm{RF}}}$$
(7)
where $`\alpha `$ is the momentum compaction factor, which depends on the optics used in LEP. Its value is $`1.54\times 10^{-4}`$ for the 102/90 optics, $`1.86\times 10^{-4}`$ for the 90/60 optics and $`3.86\times 10^{-4}`$ for the 60/60 optics, with a relative uncertainty of $`\lesssim 1\%`$.
The central frequency is only measured on a few occasions during a year’s running and requires non-colliding beams. The monitoring of the central orbit and of the ring circumference relies on the measurement of the average horizontal beam position in the LEP arcs, $`x_{\mathrm{arc}}`$. As the length of the beam orbit is constrained by the RF frequency, a change in machine circumference will be observed as a shift of the beam position relative to the beam position monitors (BPMs). Figure 8 shows the evolution of the central frequency determined through $`x_{\mathrm{arc}}`$ as well as the direct $`f_\mathrm{c}^{\mathrm{RF}}`$ measurements. The $`x_{\mathrm{arc}}`$ points have been normalised to the electron $`f_\mathrm{c}^{\mathrm{RF}}`$ measurements. The occasional difference of about 4 Hz between the electron and positron $`f_\mathrm{c}^{\mathrm{RF}}`$ (measured at physics energy) is not understood. A similar systematic effect has also been seen in 1998. Therefore a systematic error of $`\pm `$4 Hz is assigned to the central frequency. This results in an uncertainty of 1.5 MeV in the predicted difference between the energies measured by RD with the 90/60 and 60/60 optics at 45 GeV beam energy.
A correction to the energy of each fill using the measured orbit offset in the LEP arcs is applied to track the change in $`f_\mathrm{c}^{\mathrm{RF}}`$.
### 5.3 RF frequency shifts
For the first time in 1997, the RF frequency was routinely increased from the nominal value to change the horizontal damping partition number . This is a useful technique at high energy, causing the beam to be squeezed more in the horizontal plane, and increasing the specific luminosity, whereas at lower energy, beam-beam effects prevent the horizontal beam size reduction. Less desirable side effects are that the central value of the beam energy decreases, the beam energy spread increases, and slightly more RF accelerating voltage is needed to keep the beam circulating. A typical frequency shift of $`+100`$ Hz gives a beam energy decrease of about 150 MeV. Occasionally, when an RF unit trips off, the LEP operators temporarily decrease the RF frequency to keep the beam lifetime high, in which case the beam energy values are immediately recalculated instead of waiting the usual 15 minutes.
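The quoted shift can be checked directly against eq. (7). In the sketch below, the momentum compaction factor is the 90/60-optics value given in section 5.2, while the nominal LEP RF frequency of about 352.25 MHz is a standard number assumed here rather than quoted in the text.

```python
alpha = 1.86e-4          # momentum compaction, 90/60 optics
f_rf = 352.254e6         # nominal RF frequency in Hz (assumed standard value)
E_beam = 91.5e3          # beam energy in MeV

dE = -E_beam / alpha * 100.0 / f_rf   # eq. (7) for a +100 Hz shift
print(dE)                             # ~ -140 MeV, consistent with the ~150 MeV quoted above
```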
### 5.4 Horizontal Corrector Effects
Small, independently-powered dipole magnets are used to correct deviations in the beam orbit. Horizontal correctors influence the beam energy either through a change of the integrated dipole field or through a change of the orbit length $`\mathrm{\Delta }L_1`$ . In general the two effects could be mixed and cannot be easily disentangled, although simulations show that the orbit lengthening effect should dominate. For a given orbit and corrector settings, the predicted energy shifts can differ by 30% between the two models, which implies that a 30% error should be applied to the energy shifts predicted for the corrector settings.
In general the settings of the horizontal correctors are different for different machine optics, and also for different beam energies. The energy model described by equation 3 includes this optics dependent correction explicitly. RD measurements using any optics can therefore be combined when calibrating the NMR probes (section 4.1), which estimate the dominant contribution from the main bend dipoles.
For an orbit lengthening $`\mathrm{\Delta }L_1`$ the energy change is :
$$\frac{\mathrm{\Delta }E}{E}=-\frac{\mathrm{\Delta }L_1}{\alpha C}$$
(8)
where $`C`$ is the LEP circumference and $`\mathrm{\Delta }L_1`$ is calculated from
$$\mathrm{\Delta }L_1=\sum D_x\delta .$$
(9)
The sum is over all correctors, $`D_x`$ is the horizontal dispersion at the corrector, and $`\delta `$ is the “kick”, i.e. the deflection due to the corrector. The calculated and measured values of $`D_x`$ agree to within 2%, and the kick is known from the current in the corrector magnet. Figure 9 shows the evolution of $`\mathrm{\Delta }L_1`$ in physics for the 1997 LEP run. The size of the effect is somewhat larger than in previous years: for a large fraction of the run $`\mathrm{\Delta }E`$ reaches approximately 11 MeV. The contributions of the horizontal correctors to the beam energies measured by RD for the 90/60 and 60/60 optics differ by about 4 MeV at 45 GeV beam energy.
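An order-of-magnitude sketch of eqs. (8) and (9) is given below. The dispersion values and corrector kicks are invented placeholders chosen only to reproduce the ~11 MeV scale quoted above; $`\alpha `$ and the LEP circumference are standard numbers.

```python
import numpy as np

alpha, C, E_beam = 1.86e-4, 26659.0, 91.5e3   # momentum compaction, circumference (m), MeV

D_x = np.full(50, 1.0)        # horizontal dispersion at each corrector (m), placeholder
delta = np.full(50, 1.2e-5)   # corrector kicks (rad), placeholder

dL1 = np.sum(D_x * delta)               # eq. (9): orbit lengthening, ~0.6 mm here
dE = -E_beam * dL1 / (alpha * C)        # eq. (8): ~ -11 MeV
print(dL1, dE)
```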
Recent simulations predict that the orbit corrector settings should not influence the central frequency by more than 0.5 Hz. This was confirmed by measurements made during the 1998 LEP run. Separate corrections for $`x_{\mathrm{arc}}`$ and for the energy shifts due to the horizontal corrector configurations are therefore applied.
The average beam energy shift from the orbit corrector settings is 6 MeV for the high energy running in 1997, which is much larger than in previous years. The 30% model uncertainty would imply an error of 2 MeV. This is increased to half of the total correction, 3 MeV, in view of the evolving understanding of the interplay of central frequency and orbit corrector effects.
### 5.5 Optics dependent effects
The majority of the RD measurements were made with the dedicated polarisation (60/60) optics, while the physics running was with the 90/60 optics. Both horizontal orbit corrector settings (see section 5.4) and current differences in the vertically and horizontally focussing quadrupoles can cause a difference of a few MeV between the beam energies with the two optics. These are accounted for by the corrections $`C_{\mathrm{h}.\mathrm{corr}.}`$ and $`C_{\mathrm{QFQD}}(t)`$ in the model of the beam energy given by equation 3.
Simulations show that the overall energy difference can be of either sign, depending on the exact imperfections in the machine, and the difference is predicted to scale with the beam energy. The predicted difference at 45 GeV also has an uncertainty of 1.5 MeV from central frequency effects, described in section 5.2. The measured difference in the data is evaluated from the residuals $`E_{\mathrm{pol}}E_{\mathrm{NMR}}`$ with respect to the simultaneous fit to all RD measurements, using either optics. These can be seen in figure 2(a). The observed average beam energy difference between RD measurements with the two optics is:
$$E(90/60)-E(60/60)=+2\text{ MeV at 45 GeV.}$$
(10)
From the fill-to-fill scatter, the uncertainty on this measured difference is $`1`$ MeV. Scaling the difference with the beam energy, a systematic uncertainty of 4 MeV is therefore taken to cover all uncertainties due to optics dependent effects at physics energy.
## 6 Evaluation of the centre-of-mass energy at each IP
As at LEP1, corrections to the centre-of-mass energy arise from the non-uniformity of the RF power distribution around LEP and from possible offsets of the beam centroids during collisions in the presence of opposite-sign vertical dispersion.
### 6.1 Corrections from the RF System
Since the beam energy loss due to synchrotron radiation is proportional to $`E_{\mathrm{beam}}^4`$, operation of LEP2 requires a large amount of RF acceleration to maintain stable beam orbits. To provide this acceleration, new super-conducting (SC) RF cavities have been installed around all of the experiments in LEP. This implies that, contrary to LEP1, the exact anti-correlation of RF effects on the beam energy at IP4 and IP8 is no longer guaranteed, and that large local shifts in the beam energy can occur at any of the IPs. This also implies that the energy variation in the beams (the “sawtooth”) as they circulate around LEP is quite large (see figure 10.). This increases the sensitivity of the centre-of-mass energy to non-uniformities in the energy loss arising from differences in the local magnetic bend field, machine imperfections, etc.
The modelling of the energy corrections from the RF system is carried out by the iterative calculation of the stable RF phase angle $`\varphi _s`$ which proceeds by setting the total energy gain, $`V_{\mathrm{tot}}\mathrm{sin}\varphi _s`$, of the beams as they travel around the machine equal to all of the known energy losses. Here $`V_{\mathrm{tot}}`$ is the total RF accelerating voltage. The measured value of the synchrotron tune $`Q_s`$ and the energy offsets between the beams as they enter and leave the experimental IPs are used to constrain energy variations due to overall RF voltage scale and RF phase errors. In particular, the phase error at each IP is set using the size of the energy sawtooth measured by the LEP BOM system compared with the total RF voltage at that IP. This is a powerful constraint on potential phasing errors, as was seen in the LEP1 analysis.
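A toy version of the energy-gain balance at the heart of this calculation is sketched below: the stable phase satisfies $`V_{\mathrm{tot}}\mathrm{sin}\varphi _s=U_{\mathrm{loss}}`$. The voltage and loss are round illustrative numbers, not the actual 1997 RF configuration, and the real model iterates this balance together with the measured $`Q_s`$ and sawtooth constraints.

```python
import numpy as np

V_tot = 2.5     # total RF voltage in GV (illustrative)
U_loss = 2.0    # energy loss per turn in GeV (order of magnitude from section 1)

phi_s = np.arcsin(U_loss / V_tot)
print(np.degrees(phi_s))   # stable RF phase angle in degrees
```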
The average corrections for the 1996 and 1997 running are shown in table 2. The corrections for running below the WW threshold are typically smaller by a factor of two, with correspondingly smaller errors. The bunchlet-to-bunchlet variation in corrections is negligible.
As at LEP1, the errors on the energy corrections are evaluated by a comparison of those quantities ($`Q_s`$, the orbit sawtooth, and the longitudinal position of the interaction point) which can be calculated in the RF model and can be measured in LEP. In addition, uncertainties from the inputs to the model, such as the misalignments of the RF cavities and the effects of imperfections in the LEP lattice, must also be considered. Since many of the uncertainties on the RF corrections scale with the energy loss in LEP, however, the overall uncertainty due to RF effects is larger at LEP2 than at LEP1. Note that the errors are given below in terms of $`E_{\mathrm{beam}}`$, and are obtained by dividing the error on $`E_{\mathrm{CM}}`$ by two.
Comparison of the measured and calculated $`Q_s`$ values reveals a discrepancy in the modelled and measured overall RF voltage seen by the beam. The difference is small, on the order of 4%, which can be explained by an overall scale error in the measured voltages or a net phase error in the RF system of a few degrees. An overall scale error changes the energy corrections by a corresponding amount (i.e., a 10 MeV correction acquires an error of 0.4 MeV, which is negligible), whereas phase errors can shift the energy by larger amounts at the IP closest to the error. For 1996, the overall error due to this mismatch was computed by assuming the entire phase error was localised to one side of an IP, and the largest shift taken as the error for all IPs (4 MeV $`E_{\mathrm{beam}}`$). For 1997, the total energy gain was normalised so that $`Q_s`$ was correct, and the phase errors for each IP were calculated using the orbit measurement of the local energy gain. This resulted in a smaller error of 1.5 MeV on $`E_{\mathrm{beam}}`$ from the voltage scale and phasing effects.
The positions of all of the RF cavities in LEP have been measured repeatedly using a beam-based alignment technique with a systematic precision of 1 mm and a 1 mm rms scatter over time . The systematic error on the energy corrections is evaluated by coherently moving the RF cavities in the model by 2 mm away from (towards) the IPs, and observing the change in $`E_{\mathrm{CM}}`$. This results in a 1.5 MeV error on $`E_{\mathrm{beam}}`$ for 1996 and 1997.
Recently, a study of the effects of imperfections in the LEP lattice on the energy loss of the beams at LEP2 has been performed. Calculations of the centre-of-mass energy in an ensemble of machines with imperfections similar to those of LEP yield an rms spread of 2.5 MeV in $`E_{\mathrm{beam}}`$ in the predicted energy at the IPs due to non-uniformities in the energy loss of the beams. These shifts depend only on the misalignments and non-uniformities of all of the magnetic elements in LEP, which are essentially unmeasurable, and this contribution is not contained in any of the other error sources.
In order to keep the error estimate as conservative as possible, the error on the energy corrections from the RF should be considered 100% correlated between IPs and energy points. The total error from RF effects for each energy point is given in table 3.
### 6.2 Opposite sign vertical dispersion
In the bunch train configuration, beam offsets at the collision point can cause a shift in the centre-of-mass energy due to opposite sign dispersion . The change in energy is evaluated from the calculated dispersion and the measured beam offsets from beam-beam deflection scans.
The dispersions have been calculated using MAD for all the configurations in 1996 and 1997. No dedicated measurements of dispersion were made in 1996, while in 1997 the measured values agree with the prediction to within about 50%. This is the largest cause of uncertainty in the possible correction. Beam offsets were controlled to within a few microns by beam-beam deflection scans. The resulting luminosity-weighted corrections to the centre-of-mass energies are typically 1 to 2 MeV, with an error of about 2 MeV. No corrections have been applied for this effect, and an uncertainty of 2 MeV has been assigned.
## 7 Summary of systematic uncertainties
The contributions from each source of uncertainty described above are summarised in table 3. The first groups of entries describe the uncertainty in the normalisation derived from NMR-polarisation comparisons, NMR-flux-loop tests and the part of the bending field not measured by the flux loop. These extrapolation uncertainties dominate the analysis. The subsequent errors concern the polarisation measurement, specifically its intrinsic precision (which is less than 1 MeV), the possible difference in energy between electrons and positrons, and the difference between optics. None of the additional uncertainties, from time variations within a fill or from IP-specific corrections, contributes more than 5 MeV.
### 7.1 Uncertainty for data taken in 1996
The analysis of the 1996 data was largely based on a single fill with RD measurements at 45 and 50 GeV. The apparent consistency of the flux-loop and NMR data compared to RD data was about 2 MeV over this 5 GeV interval, i.e. a relative error of about $`4\times 10^{-4}`$, which using a naive linear extrapolation would give an uncertainty of 13.5 (15) MeV at 81.5 (86) GeV. These errors were inflated to 27 (30) MeV before the 1997 data were available, since there was no test of reproducibility from fill to fill, there was no check of the non-linearity possible from a fill with two energy points, and the field outside the flux loop had not been studied. Although more information is available in 1997, this larger uncertainty is retained for the 1996 data, partly motivated by the sparsity of RD measurements in that year. In addition, the single-parameter fit that was used for 1996 leads to a shift of 20 MeV, and an increased scatter of 60 MeV, when used to predict the energies in physics in 1997.
The uncertainties for the two 1996 data samples can be assumed to be fully correlated. However, the extrapolation uncertainty for the 1997 data is somewhat better known. Since the energy difference between the maximum RD energy and the physics energy is nearly the same in the two years, it can be assumed that the 25 MeV uncertainty of 1997 data is common to the 1996 data.
### 7.2 Lower energy data taken in 1996 and 1997
During 1996 and 1997, LEP also operated at the Z resonance, to provide data samples for calibrating the four experiments, and at intermediate centre-of-mass energies, 130–136 GeV, to investigate effects seen at the end of 1995 at “LEP 1.5”. The dominant errors on the beam energy are from the extrapolation uncertainty, and scale with the difference between physics energy and RD energy. The optics difference scales in the same way. Several other effects such as the tide correction are proportional to the beam energy. The dipole rise per fill depends in addition on whether bend modulation was carried out at the start of fill. The total beam energy uncertainties are found to be 6 MeV for Z running, and 14 MeV for LEP 1.5 running.
## 8 Centre-of-mass energy spread
The spread in centre-of-mass energy is relevant for evaluating the width of the W boson, which is about 2 GeV. The beam energy spread can be predicted for a particular optics, beam energy and RF frequency shift. This spread has been calculated for every 15 minutes of data taking, or more often in the case of an RF frequency shift. Weighting the prediction by the (DELPHI) integrated luminosity gives the average “predicted” values in table 4 for each nominal centre-of-mass energy. Overall averages for all data taken close to 161, 172 and 183 GeV are also listed. The error in the prediction is estimated to be about 5%, from the differences observed when a quantum treatment of radiation losses is implemented.
The beam energy spread can also be derived from the longitudinal bunch size measured by one of the experiments. This procedure has been applied to the longitudinal size of the interaction region measured in ALEPH, $`\sigma _z^{\mathrm{ALEPH}}`$, which is related to the energy spread by :
$$\sigma _{E_{\mathrm{beam}}}=\frac{\sqrt{2}E_{\mathrm{beam}}}{\alpha R_{\mathrm{LEP}}}Q_\mathrm{s}\sigma _z^{\mathrm{ALEPH}}.$$
(11)
The momentum compaction factor $`\alpha `$ is known for each optics, $`R_{\mathrm{LEP}}`$ is the average radius of the LEP accelerator, and $`Q_\mathrm{s}`$ is the incoherent synchrotron tune. This is derived from the measured coherent $`Q_\mathrm{s}`$ using:
$$\frac{Q_\mathrm{s}^{\mathrm{coh}}}{Q_\mathrm{s}^{\mathrm{incoh}}}=1-\kappa \frac{I^{\mathrm{bunch}}}{300\mu \mathrm{A}}$$
(12)
The parameter $`\kappa `$ was measured in 1995 to be $`0.045\pm 0.022`$ at the Z. For the same $`Q_\mathrm{s}`$ and machine configuration, this would scale with $`1/E_{\mathrm{beam}}`$. This scaling has been used for the central values of $`\sigma _{E_{\mathrm{beam}}}`$ evaluated from the measured bunch lengths in table 4. However, the reduction in the number of copper RF accelerating cavities in the machine since 1995 is expected to further reduce the value of $`\kappa `$, so the uncertainty of $`\pm 0.022`$ is retained for all energies. The value and uncertainty are consistent with estimates from the variation of bunch length with current measured with the streak camera (the streak camera measures the bunch length parasitically by looking at synchrotron light emitted when the bunch goes through a quadrupole or wiggler magnet) in 1998. Where two errors are quoted for the derived number, they are the statistical uncertainty in the bunch length measurement, and a systematic uncertainty, which is dominated by the uncertainty in $`\kappa `$, with a 1 MeV contribution from the uncertainty in $`\alpha `$. The predicted and derived values agree well within the quoted errors.
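For illustration, eqs. (11) and (12) can be evaluated directly. In the sketch below only $`\alpha `$ and the LEP average radius are standard numbers; the synchrotron tune, bunch current and bunch length are invented example values, and $`\kappa `$ is scaled from the 1995 measurement with $`1/E_{\mathrm{beam}}`$ as described above.

```python
import numpy as np

alpha = 1.86e-4
R_lep = 26659.0 / (2 * np.pi)       # average LEP radius in m
E_beam = 91.5e3                     # MeV

Q_s_coh, I_bunch = 0.110, 500.0     # coherent synchrotron tune and bunch current (muA), placeholders
kappa = 0.045 * 45.6 / 91.5         # 1995 value scaled with 1/E_beam
Q_s = Q_s_coh / (1.0 - kappa * I_bunch / 300.0)   # eq. (12), incoherent tune

sigma_z = 0.0072                    # luminous-region length in m (placeholder)
sigma_E = np.sqrt(2.0) * E_beam / (alpha * R_lep) * Q_s * sigma_z   # eq. (11)
print(sigma_E, np.sqrt(2.0) * sigma_E)   # beam and centre-of-mass energy spreads (MeV)
```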
The measurement of $`Q_\mathrm{s}`$ is difficult for high energy beams, and in 1997, a reliable value is only available for 58% of the data. It is therefore recommended to use the predicted values. The beam energy spreads must be multiplied by $`\sqrt{2}`$ to give the centre-of-mass energy spreads , which are: $`144\pm 7`$ MeV at 161 GeV, $`165\pm 8`$ MeV at 172 GeV and $`219\pm 11`$ MeV at 183 GeV.
## 9 Conclusions and outlook
The method of energy calibration by magnetic extrapolation of resonant depolarisation measurements at lower energy has made substantial progress with the 1997 data. The success in establishing polarisation above the Z has allowed a robust application of the method, and the mutual consistency of the resonant depolarisation, NMR and flux-loop data has been established at the 20 MeV level at physics energy, with a total systematic uncertainty in the beam energy of 25 MeV. The precision is limited by the understanding of the NMR/flux-loop comparison.
As LEP accumulates more high energy data, the experiments themselves will be able to provide a cross-check on the centre-of-mass energy by effectively measuring the energy of the emitted photon in events of the type $`e^+e^{}\mathrm{Z}\gamma f\overline{f}\gamma `$, where the Z is on-shell. This can be done using a kinematic fit of the outgoing fermion directions and the precisely determined Z-mass from LEP1. The ALEPH collaboration have shown the first attempt to make this measurement in the $`q\overline{q}\gamma `$ channel, where they achieve a precision of $`\delta E_{\mathrm{beam}}=\pm 110(\mathrm{stat})\pm 53(\mathrm{syst})`$ MeV. With 500 pb<sup>-1</sup> per experiment, the statistical precision on this channel should approach 15 MeV. Careful evaluation of systematic errors will determine the usefulness of this approach.
In future, a new apparatus will be available for measuring the beam energy. The LEP Spectrometer Project will measure the bend angle of the beams using standard LEP beam pick-ups, equipped with new electronics that determine the beam position with a precision of the order of a micron as the beams enter or leave a special dipole in the LEP lattice whose bending field has been surveyed with high precision. A first phase of the spectrometer is already in place for the 1998 running, with the aim of checking the mechanical and thermal stability of the position measurement. In 1999, the new magnet will be installed, and the aim is to use this new, independent method to measure the beam energy to 10 MeV at high energy. It should be possible to propagate any improvement in the beam energy determination back to previous years by correcting the extrapolation and correspondingly reducing the uncertainty.
## Acknowledgements
The unprecedented performance of the LEP collider in this new high energy regime is thanks to the SL Division of CERN. In particular, careful work and help of many people in SL Division has been essential in making specific measurements for the energy calibration.
We also acknowledge the support of the Particle Physics and Astronomy Research Council, UK.
# Submillimeter Continuum Emission in the 𝜌 Ophiuchus Molecular Cloud: Filaments, Arcs, and an Unidentified Far-Infrared Object
## 1 Introduction
Young star-forming regions can encompass a wide variety of sources and phenomena, from the pre-protostellar clumps and the quiescent filamentary structure of the parent molecular cloud, to low-mass protostars with their associated disks, jets, and outflows, to young massive stars powering extensive photon-dominated regions. In the nearest clouds, where we can achieve the best spatial resolution for studying small-scale phenomena such as circumstellar disks and for disentangling crowded clusters of protostars, the largest outflows may extend over tens of arcminutes (i.e. Dent, Matthews, & Walther 1995), while the molecular cloud itself may approach a degree or more in size (i.e. Maddalena et al. 1986). Thus, high resolution imaging over large fields of view is necessary for a complete understanding of such regions.
Many of these phenomena are best traced by continuum emission, particularly the high-density cores and envelopes associated with the early stages of star formation. Given the limited sensitivity of single-pixel bolometers, most surveys have targeted sources previously identified from larger area surveys in the near- or far-infrared (i.e. André & Montmerle 1994). Although such observations can give us a good understanding of the properties of young protostars, they cannot reveal the large-scale structures associated with the protostars and, in addition, may miss the coldest and most crowded objects. For example, a large-area survey of the $`\rho `$ Ophiuchus molecular cloud using the IRAM bolometer array (Motte, André, & Neri 1998) was able to identify a number of weaker sources located in and around the bright complex of emission associated with SM1 and VLA 1623 (André, Ward-Thompson, & Barsony 1993). With the advent of SCUBA (Holland et al. 1999) on the James Clerk Maxwell Telescope, we now have the opportunity to survey large areas of the sky in submillimeter continuum emission routinely. Sensitive, large-area surveys that extend beyond the known regions of bright emission are likely to reveal previously unknown features, as was clearly demonstrated by recent maps of the Orion A molecular cloud (Johnstone & Bally 1999). In this Letter, we present the first results from a large-area unbiased survey of the $`\rho `$ Ophiuchus molecular cloud at 850 and 450 $`\mu `$m.
## 2 Observations and Data Reduction
We observed a 20<sup>′</sup> × 20<sup>′</sup> region of the $`\rho `$ Ophiuchus molecular cloud roughly centered on the Ophiuchus A core on 1998 July 10 and 11 with the bolometer array SCUBA (Holland et al. 1999) at the James Clerk Maxwell Telescope. We observed in standard scan-mapping mode simultaneously at 850 and 450 $`\mathrm{\mu m}`$. The region was broken down into four fields each measuring roughly 10<sup>′</sup> × 10<sup>′</sup>. In this paper, we present data for only the north-east field centered on 16:26:32 -24:22:30 (J2000); all subsequent discussion refers to this single field.
The field was mapped six times each night, with chop throws of 20<sup>′′</sup>, 30<sup>′′</sup>, and 65<sup>′′</sup> oriented in either the right ascension or declination directions. Each chop throw and direction was repeated at a different time on the two nights so that the scan direction would have a different orientation on each night. The data were acquired with 3<sup>′′</sup> sampling. The atmospheric optical depth, measured by sky dips approximately once an hour, was quite constant on the first night, with the value at zenith being 0.10 at 850 $`\mu `$m and 0.43 at 450 $`\mu `$m during the time these observations were obtained. On the second night, the optical depth improved steadily over the course of the observations, decreasing from 0.35 to 0.24 at 850 $`\mu `$m and from 1.77 to 1.33 at 450 $`\mu `$m. The pointing was checked every 45 minutes using the bright source IRAS 16293-2422 and was stable to better than 2<sup>′′</sup>. The beam sizes measured from observations of Uranus obtained with the same observing method were 15.1<sup>′′</sup> (FWHM) at 850 $`\mu `$m and 9.5<sup>′′</sup> at 450 $`\mu `$m. The presence of an extended error beam can be seen at both wavelengths, with a FWHM of 36<sup>′′</sup> at 850 $`\mu `$m and 72<sup>′′</sup> at 450 $`\mu `$m. At 850 $`\mu `$m the peak of the error beam is 5.6% of that of the main beam, while at 450 $`\mu `$m the peak of the error beam is 3.6%. The observations of Uranus were also used to determine the calibration factor, which was 198 Jy Volt<sup>-1</sup> at 850 $`\mu `$m and 629 Jy Volt<sup>-1</sup> at 450 $`\mu `$m.
The individual scan maps were processed using the standard SCUBA software (Holland et al. 1999) into six independent dual-beam maps at each wavelength, with pixel size 3<sup>′′</sup>. The maps have not been corrected for sky noise. The six dual-beam maps were exported to the MIRIAD software package (Sault, Teuben, & Wright 1995) to be analyzed using the maximum entropy algorithm. Each map was shifted by one half the chop throw and the six shifted maps were averaged to produce a “dirty map” of the region showing the pattern of the 6 chop throws. We then created an ideal “beam” consisting of the sum of one positive gaussian and six negative gaussians, each with a FWHM equal to that of the observed main beam, with the six negative gaussians offset from the origin by the six chop throws, and with peak amplitude one-sixth that of the positive gaussian. The contribution of the error beam was ignored in this analysis; the effect of this will be to increase artificially the integrated fluxes of extended sources above what would be observed with an ideal beam. For a gaussian source with a full-width half-maximum diameter of 35<sup>′′</sup>, the net effect would be to increase the total 450 $`\mu `$m flux inside a 40<sup>′′</sup> radius annulus by about 30%. However, peak fluxes will be unaffected by this problem. The dirty map and the beam were then used as inputs to the miriad routine “maxen” to create a maximum entropy restored image of the scan-mapped field. Clean boxes were used to isolate the large negative bowls around the bright sources SM1, SM1N, SM2, and VLA 1623, which improved the image restoration. The final images still contain residuals of the negative chop throws at a level of 4-6% of the peak flux in the map. However, the negative bowls are much reduced in area and depth compared to the images obtained with the “Emerson II” technique (Holland et al. 1999) and, thus, are more useful for identifying large scale features in the map. The typical rms noise in the final maps far from the negative bowl is estimated to be 30 mJy beam<sup>-1</sup> at 850 $`\mu `$m and 250 mJy beam<sup>-1</sup> at 450 $`\mu `$m.
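As a sketch of the ideal "beam" described above (the grid size, pixel scale and the signs of the chop offsets are schematic assumptions, not the actual reduction setup), the kernel passed to the maximum entropy routine could be built as follows:

```python
import numpy as np

def dual_beam_kernel(fwhm_arcsec, chop_offsets, pix_arcsec=3.0, half_size=120):
    """Ideal scan-map 'beam': one positive Gaussian of the given FWHM minus six
    negative Gaussians of amplitude 1/6 displaced by the chop throws.
    chop_offsets is a list of (dx, dy) pairs in arcsec; the offset signs are schematic."""
    sigma = fwhm_arcsec / 2.3548
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1] * pix_arcsec
    gauss = lambda dx, dy: np.exp(-((x - dx)**2 + (y - dy)**2) / (2.0 * sigma**2))
    beam = gauss(0.0, 0.0)
    for dx, dy in chop_offsets:
        beam -= gauss(dx, dy) / 6.0
    return beam

# The six chop configurations used here: 20'', 30'' and 65'' throws in RA and Dec.
throws = [(20, 0), (30, 0), (65, 0), (0, 20), (0, 30), (0, 65)]
kernel_850 = dual_beam_kernel(15.1, throws)     # the 450 um kernel would use the 9.5'' beam
```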
## 3 A Wealth of Previously Unknown Structures
### 3.1 Linear Features
Color images of the 850 $`\mathrm{\mu m}`$ and 450 $`\mathrm{\mu m}`$ maps of the region around the Ophiuchus A cloud core are shown in Figure 3.3. One of the most striking aspects of these images is the two linear emission features extending from the northern tip of the bright complex of emission associated with the protostellar cores SM1, SM1N, SM2, and VLA 1623 (André et al. 1993). The two linear features appear to intersect near 16:26:29 -24:22:45 (J2000) and extend for at least 4<sup>′</sup> (0.2 pc) to the north-east and the north-west. The peak surface brightness in these linear features is 0.6-0.7 Jy beam<sup>-1</sup> at 850 $`\mathrm{\mu m}`$ near the SM1 complex, while at large distances from the complex the peak surface brightness falls to 0.1-0.2 Jy beam<sup>-1</sup>. These linear features are so weak that they can only be identified with the high sensitivity of SCUBA and in wide-field images which allow their large linear extent to be traced. Similar linear features have been seen recently in Orion A (Johnstone & Bally 1999).
In the vicinity of the intersection of the two features, there is strong CO emission which is blueshifted by 1-2 km s<sup>-1</sup> compared to the central velocity of the emission around VLA 1623 (D. Koerner, private communication). Thus, it is possible that these two linear features trace the walls of a previously unidentified outflow cavity. For comparison, the walls of the south-east outflow lobe created by VLA 1623 are faintly visible in Figure 3.3. This outflow would have an opening angle of 56<sup>o</sup>, comparable to that observed for the L1551 outflow (Moriarty-Schieven et al. 1987). At the apex of the “V” formed by the two features there is a weak compact source, seen most easily in the 20% contour of Figure 3.3b. This source has been identified previously at 1.3 mm as A-MM6 (Motte et al. 1998) and could be the protostellar driver of the outflow (Figure 3.3). However, there are many other possible interpretations for these linear features. For example, they could trace two independent outflows, which would then have collimation factors of at least 7-10. Two independent outflows emanating from a single position have been observed in several other protostellar sources (i.e. L723, Anglada et al. 1991; IRAS 16293-2422, Mundy et al. 1992). Another possibility is that these structures are being externally heated by the photon-dominated region (PDR) produced by the young B3 star S1, which lies approximately 1.5<sup>′</sup> east of SM1N (i.e., see Motte et al. 1998). In this scenario, the north-east feature could mark the cavity wall containing gas and dust swept up by the PDR; indeed, the location of this feature matches well the edge of the PDR seen in ISO images (Abergel et al. 1996). The north-west feature might be material associated with a second outflow recently identified in this region (Kamazaki et al. 1998), which could be heated from the outside by the PDR. The presence of a weak linear feature along the southern edge of the map, which appears to lie along the northern edge of the CO outflow associated with VLA 1623 (André et al. 1990), provides some support for this interpretation. Clearly, sensitive CO observations in the region of these new linear filaments, as well as continuum observations of regions which are not illuminated by a nearby massive star, would help to distinguish between these different interpretations.
Assuming a dust temperature of 30 K, a distance to $`\rho `$ Ophiuchus of 160 pc, and a dust opacity coefficient $`\kappa =0.02`$ cm<sup>2</sup> g<sup>-1</sup> at 850 $`\mathrm{\mu m}`$ (i.e. Motte et al. 1998), we can estimate the total mass (gas plus dust) contained in these linear features. The 850 $`\mathrm{\mu m}`$ fluxes in the peak regions near the SM1 complex correspond to masses within a 15<sup>′′</sup> beam of 0.04-0.05 M<sub>⊙</sub>. However, the presence of extended low level emission associated with the SM1 complex means that these masses are likely upper limits to the true masses. The masses in the strongest emission regions 2<sup>′</sup>-4<sup>′</sup> along the linear features from this bright complex are each of order 0.01 M<sub>⊙</sub>. The average surface brightness of 0.04 Jy beam<sup>-1</sup> along the outer 150<sup>′′</sup> of the north-east linear feature (excluding the bright base) translates into a mass of 0.03 M<sub>⊙</sub>. Thus, the total mass in the north-east linear feature is likely to be of order 0.1 M<sub>⊙</sub>. For comparison, this is similar to the mass of 0.09 M<sub>⊙</sub> estimated from CO observations of the IRAS 03282 outflow (Bachiller, Martin-Pintado, & Planesas 1991). However, the masses of $`\sim 0.01`$ M<sub>⊙</sub> in the individual clumps are substantially larger than the masses of $`10^{-4}`$ M<sub>⊙</sub> obtained for the CO bullets seen in extremely high velocity outflows (i.e. Bachiller et al. 1990).
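These numbers follow from the standard optically thin relation $`M=S_\nu d^2/[\kappa _\nu B_\nu (T_d)]`$. The sketch below (cgs constants; treating a Jy beam<sup>-1</sup> surface brightness as the flux in one beam is the simplifying assumption) reproduces the quoted figures:

```python
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16     # cgs
M_SUN, PC = 1.989e33, 3.086e18

def planck_nu(nu_hz, t_k):
    """Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu_hz**3 / c**2 / np.expm1(h * nu_hz / (k_B * t_k))

def dust_mass_msun(flux_jy, lam_um, t_dust=30.0, d_pc=160.0, kappa=0.02):
    """Optically thin gas+dust mass, M = S_nu d^2 / (kappa_nu B_nu(T_dust))."""
    nu = c / (lam_um * 1e-4)
    s_cgs = flux_jy * 1e-23                     # Jy -> erg s^-1 cm^-2 Hz^-1
    return s_cgs * (d_pc * PC)**2 / (kappa * planck_nu(nu, t_dust)) / M_SUN

print(dust_mass_msun(0.65, 850.0))   # ~0.05 M_sun in one beam at the ~0.65 Jy/beam peak
print(dust_mass_msun(0.04, 850.0))   # ~0.003 M_sun per beam at the 0.04 Jy/beam level
```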
### 3.2 A New Compact Source in Ophiuchus
Another striking feature of the images presented in Figure 3.3 is the presence of four bright compact sources that lie well outside the bright emission associated with SM1 and VLA 1623. Three of these sources are the previously identified protostars EL24, EL27, and GSS26 (i.e. André & Montmerle 1994), while the fourth, located in the north-east corner of our field, appears to be previously unidentified. We will refer to this north-east source as SMM16267-2417. (Note that Loren, Wootten, and Wilking (1990) detected DCO<sup>+</sup> emission 1<sup>′</sup> north of this source, but the line emission was too weak for it to be designated as a DCO<sup>+</sup> core.) This source is located at 16:26:43.5 -24:17:26 (J2000) and has a peak flux of 0.4 Jy beam<sup>-1</sup> at 850 $`\mathrm{\mu m}`$ and 1.9 Jy beam<sup>-1</sup> at 450 $`\mathrm{\mu m}`$. Its full-width half-maximum diameter at 850 $`\mathrm{\mu m}`$ deconvolved from the 15.1<sup>′′</sup> beam is $`26\times 34^{\prime \prime }`$ or $`\sim 5000`$ AU. Assuming an uncertainty of 10% for each point in the observed 450 $`\mathrm{\mu m}`$ radial profile, a “by-eye” fit between radii of 9<sup>′′</sup> and 24<sup>′′</sup> suggests a slope of $`-0.8\pm 0.2`$. This slope is comparable to that seen in the outer portions of the pre-protostellar cores by Ward-Thompson et al. (1994). A more complete modeling of the radial profile of SMM16267-2417 will be presented in a future paper. No point-like source, or indeed any obvious emission, can be seen at any waveband in either the IRAS FRESCO or HIRES images, except at 100 $`\mathrm{\mu m}`$ where SMM16267-2417 is located on the northern slope of another structure to the south. Assuming a dust temperature of 10-20 K for SMM16267-2417 and the other parameters as described above for the linear features, the 850 $`\mathrm{\mu m}`$ flux implies a total mass of 0.3-1 M<sub>⊙</sub>. These properties are all consistent with SMM16267-2417 being a pre-protostellar core (i.e. Ward-Thompson et al. 1994). However, if the linear features identified in this region indeed correspond to a molecular outflow, it is possible that SMM16267-2417 corresponds to shocked gas within the outflow cavity. Here again, CO and other line observations would help distinguish between these various possibilities.
The spectral index $`\gamma `$, defined such that the flux $`S_\nu `$ is proportional to $`\nu ^\gamma `$ between 450 and 850 $`\mathrm{\mu m}`$, is related to the dust index $`\beta `$ and the dust temperature $`T`$. At temperatures below $`\sim 30`$ K (Ward-Thompson et al. 1994), the Rayleigh-Jeans assumption is inappropriate, and $`\gamma `$, $`\beta `$, and $`T`$ are related by
$$\frac{S_{450}}{S_{850}}=(850/450)^\gamma =\frac{e^{16.9/T}-1}{e^{32.0/T}-1}(850/450)^{3+\beta }$$
where $`S_{450}`$ and $`S_{850}`$ are the fluxes at 450 and 850 $`\mathrm{\mu m}`$. We have assumed that the gas is optically thin at 450 $`\mathrm{\mu m}`$, which is appropriate if the flux originates from material that fills the beam. Using the integrated fluxes of 2.3 Jy at 850 $`\mathrm{\mu m}`$ and 19.3 Jy at 450 $`\mathrm{\mu m}`$ (correcting for the error beam as discussed in §2), SMM16267-2417 has an average spectral index of 3.3. The value of $`\beta `$ derived from these wavelengths is highly dependent on the assumed dust temperature. Assuming an uncertainty in the flux ratio of 20% and a dust temperature of 20 K, this spectral index corresponds to a value for $`\beta `$ of $`2.1_{-0.4}^{+0.2}`$. This value of $`\beta `$ is consistent with that found on large scales in the interstellar medium (Hildebrand 1983), but somewhat larger than the values of $`\beta \sim 1`$ that are typically found in compact cores and circumstellar disks (Beckwith & Sargent 1991).
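Inverting this relation for $`\beta `$ is a one-line computation; the sketch below simply transcribes the equation above with the quoted integrated fluxes, and illustrates how strongly the answer depends on the assumed temperature:

```python
import numpy as np

def beta_from_ratio(s450_jy, s850_jy, t_dust):
    """Dust emissivity index beta from the 450/850 um flux ratio (optically thin)."""
    planck_factor = np.expm1(16.9 / t_dust) / np.expm1(32.0 / t_dust)
    return np.log((s450_jy / s850_jy) / planck_factor) / np.log(850.0 / 450.0) - 3.0

print(beta_from_ratio(19.3, 2.3, 20.0))    # ~2.1, as quoted for SMM16267-2417
print(beta_from_ratio(19.3, 2.3, 15.0),
      beta_from_ratio(19.3, 2.3, 30.0))    # ~2.4 and ~1.8: the strong T dependence
```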
### 3.3 Arcs and Other Features
Two curved arcs of continuum emission are clearly visible in the 850 $`\mathrm{\mu m}`$ image of Figure 3.3a to the north-west of the SM1 complex. The more distant arc (Arc #2, at 16:26:10, -24:20) has a peak surface brightness of 0.29 Jy beam<sup>-1</sup> and an integrated flux of 3.7 Jy. Arc #2 appears quite smooth and does not obviously break up into point sources. Assuming a dust temperature of 30 K, the total mass in this arc is 0.3 M<sub>⊙</sub>. This arc may be related to the VLA 1623 outflow; although the outflow has not been mapped out this far (André et al. 1990, Dent et al. 1995), if the outflow continues on in the same direction in the outer portions, it would pass just to the south of this arc.
The inner arc (Arc #1, at 16:26:20, -24:23), although similar in structure to Arc #2, clearly breaks up into five clumps in the 850 $`\mu `$m map. From north to south, these clumps are A-MM4, LFAM1, A-MM1, A-MM3, and LFAM3 (André & Montmerle 1994, Motte et al. 1998). (We do not see a clear peak corresponding to A-MM2 in our 850 $`\mu `$m images.) At 850 $`\mu `$m, the emission from LFAM1 dominates the emission from GSS30-IRS1 and IRS2, although weak peaks at the approximate location of these sources can be seen in the higher resolution 450 $`\mu `$m map. Motte et al. (1998) suggested that the clumps A-MM1 to A-MM3 may be related to the VLA 1623 outflow, which passes through this region; however, A-MM4 lies well outside the outflow. The presence of Arc #2 more distant from VLA 1623 but still coincident with the outflow suggests that in both cases we may be seeing continuum emission associated with bow shocks in the VLA 1623 outflow.
The research of MF, DJ, GJ, GFM, REP, and CDW is supported through grants from the Natural Sciences and Engineering Research Council of Canada. The JCMT is operated by the Joint Astronomy Centre on behalf of the Particle Physics and Astronomy Research Council of the United Kingdom, the Netherlands Organization for Scientific Research, and the National Research Council of Canada.
# Analytical Modeling of the Weak Lensing of Standard Candles I. Empirical Fitting of Numerical Simulation Results
## 1 Introduction
The use of standard candles is fundamental in observational cosmology. The distance-redshift relations for standard candles enable us to determine the basic cosmological parameters $`H_0`$ (the Hubble constant), $`\mathrm{\Omega }_m`$ (the matter density of the Universe in units of the critical density $`\rho _c=3H_0^2/(8\pi G)`$), and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ (the density contribution from the cosmological constant in units of $`\rho _c`$).
At present, the best candidates for standard candles are Type Ia supernovae (SNe Ia), because they can be calibrated to have very small intrinsic dispersions at cosmological distances (Phillips (1993); Riess, Press, & Kirshner (1995)). Two independent groups of observers (Perlmutter et al. (1998); Riess et al. (1998)) have demonstrated that SNe Ia can be potentially powerful tools for cosmology. Their current results (Perlmutter et al. (1999); Schmidt et al. (1998)) seem to indicate a low matter density universe, possibly with a sizable cosmological constant. It is clear that the observation of SNe Ia can potentially become a reliable probe of cosmology. However, there are important systematic uncertainties of SNe Ia as standard candles, in particular, evolution and gravitational lensing. The two groups have either assumed a smooth universe in their data analysis, or included lensing effects in rudimentary ways. Since we live in a clumpy universe, the effect of gravitational lensing must be taken into account adequately for the proper interpretation of SN data.
A number of authors have considered various aspects of the gravitational lensing of SNe Ia (Frieman (1997); Wambsganss et al. (1997); Kantowski (1998); Holz (1998); Holz & Wald (1998); Metcalf (1998); Porciani & Madau (1998)). A realistic calculation of the weak lensing effect of standard candles was conducted by Wambsganss et al. (1997), who computed the magnification distributions of standard candles using a N-body simulation which has a resolution on small scales that is of the order the size of a halo. However, since the magnification distributions depend on the cosmological model and redshift, the numerical results can not be directly used to compute the effect of weak lensing for an observed SN Ia at arbitrary redshift.
In this paper we derive accurate empirical fitting formulae of the weak lensing numerical simulation results for the distribution of the magnification of standard candles due to weak lensing; these simple formulae can be used to account for the effect of weak lensing in the analysis of SN Ia data. In §2, we give analytical formulae for the angular diameter distance of a standard candle in terms of the smoothness parameter $`\stackrel{~}{\alpha }`$ and constants which depend on redshift and the cosmological parameters $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. In §3, we generalize the interpretation of the angular diameter distance obtained in §2 by allowing the smoothness parameter $`\stackrel{~}{\alpha }`$ to be a direction dependent variable, the direction dependent smoothness parameter; we extract the distributions of $`\stackrel{~}{\alpha }`$ from the magnification distributions found by numerical simulations. In §4, we give analytical formulae for computing the magnification distributions at arbitrary redshifts. §5 contains a summary and discussions.
## 2 Angular diameter distance as function of the smoothness parameter $`\stackrel{~}{\alpha }`$
In a Hubble diagram of standard candles, one must use distance-redshift relations to make theoretical interpretations. The distance-redshift relations depend on the distribution of matter in the universe. In this section, we express the angular diameter distance to a standard candle in terms of the smoothness parameter, $`\stackrel{~}{\alpha }`$, which is the mass-fraction of the matter in the universe smoothly distributed (Dyer & Roeder (1973)).
In a smooth Friedmann-Robertson-Walker (FRW) universe, $`\stackrel{~}{\alpha }=1`$ in all beams; the metric is given by $`ds^2=dt^2-a^2(t)[dr^2/(1-kr^2)+r^2(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)]`$, where $`a(t)`$ is the cosmic scale factor, and $`k`$ is the global curvature parameter ($`\mathrm{\Omega }_k=1-\mathrm{\Omega }_m-\mathrm{\Omega }_\mathrm{\Lambda }=-k/H_0^2`$). The comoving distance $`r`$ is given by (Weinberg (1972))
$$r(z)=\frac{cH_0^{-1}}{|\mathrm{\Omega }_k|^{1/2}}\text{sinn}\left\{|\mathrm{\Omega }_k|^{1/2}\int _0^zdz^{\prime }\left[\mathrm{\Omega }_m(1+z^{\prime })^3+\mathrm{\Omega }_\mathrm{\Lambda }+\mathrm{\Omega }_k(1+z^{\prime })^2\right]^{-1/2}\right\},$$
(1)
where “sinn” is defined as sinh if $`\mathrm{\Omega }_k>0`$, and sin if $`\mathrm{\Omega }_k<0`$. If $`\mathrm{\Omega }_k=0`$, the sinn and $`\mathrm{\Omega }_k`$’s disappear from Eq.(1), leaving only the integral. The angular diameter distance is given by $`d_A(z)=r(z)/(1+z)`$, and the luminosity distance is given by $`d_L(z)=(1+z)^2d_A(z)`$.
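A minimal numerical sketch of Eq. (1) and of these distance definitions (assuming SciPy; the value of $`H_0`$ is an arbitrary illustrative choice, since only $`cH_0^{-1}`$ sets the scale):

```python
import numpy as np
from scipy.integrate import quad

def comoving_distance_mpc(z, om, ol, h0=70.0):
    """Comoving distance r(z) of Eq. (1) in Mpc, for H0 = h0 km/s/Mpc."""
    ok = 1.0 - om - ol
    integrand = lambda zp: 1.0 / np.sqrt(om * (1+zp)**3 + ol + ok * (1+zp)**2)
    chi, _ = quad(integrand, 0.0, z)
    d_h = 299792.458 / h0                        # Hubble distance c/H0 in Mpc
    if abs(ok) < 1e-8:
        return d_h * chi
    s = np.sqrt(abs(ok))
    return d_h * (np.sinh(s * chi) if ok > 0 else np.sin(s * chi)) / s

angular_diameter = lambda z, om, ol: comoving_distance_mpc(z, om, ol) / (1.0 + z)
luminosity = lambda z, om, ol: (1.0 + z) * comoving_distance_mpc(z, om, ol)
```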
However, our universe is clumpy rather than smooth. According to the focusing theorem in gravitational lens theory, if there is any shear or matter along a beam connecting a source to an observer, the angular diameter distance of the source from the observer is smaller than that which would occur if the source were seen through an empty, shear-free cone, provided the affine parameter distance (defined such that its element equals the proper distance element at the observer) is the same and the beam has not gone through a caustic. An increase of shear or matter density along the beam decreases the angular diameter distance and consequently increases the observable flux for given $`z`$. (Schneider, Ehlers, & Falco 1992)
For given redshift $`z`$, if a mass-fraction $`\stackrel{~}{\alpha }`$ of the matter in the universe is smoothly distributed, the largest possible distance for light bundles which have not passed through a caustic is given by the solution to the following equation:
$`g(z){\displaystyle \frac{d}{dz}}\left[g(z){\displaystyle \frac{dD_A}{dz}}\right]+{\displaystyle \frac{3}{2}}\stackrel{~}{\alpha }\mathrm{\Omega }_m(1+z)^5D_A=0,`$
$`D_A(z=0)=0,{\displaystyle \frac{dD_A}{dz}}|_{z=0}={\displaystyle \frac{c}{H_0}},`$ (2)
where $`g(z)\equiv (1+z)^3\sqrt{1+\mathrm{\Omega }_mz+\mathrm{\Omega }_\mathrm{\Lambda }[(1+z)^{-2}-1]}`$ (Kantowski (1998)). The $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ form of Eq.(2) has been known as the Dyer-Roeder equation (Dyer & Roeder (1973); Schneider et al. (1992)).
The angular diameter distance for given smoothness parameter $`\stackrel{~}{\alpha }`$ and redshift $`z`$, $`D_A(\stackrel{~}{\alpha },z)`$, can be obtained via the proper integration of Eq.(2). However, for our purposes, it is useful to fit the angular diameter distance at given redshift, $`D_A(\stackrel{~}{\alpha }|z)`$, to a polynomial in $`\stackrel{~}{\alpha }`$, with the coefficients dependent on $`z`$ and cosmological parameters. For the ranges of $`\stackrel{~}{\alpha }`$ and $`z`$ of interest, the angular diameter distance given by Eq.(2) can be approximated as
$$D_A(\stackrel{~}{\alpha }|z)\simeq cH_0^{-1}\left[d_0(z)+a(z)\stackrel{~}{\alpha }^3+b(z)\stackrel{~}{\alpha }^2+c(z)\stackrel{~}{\alpha }\right],$$
(3)
where $`d_0=D_A(\stackrel{~}{\alpha }=0)`$, and
$`a(z)`$ $`=`$ $`{\displaystyle \frac{4}{3}}\left(-d_0+3d_{0.5}-3d_1+d_{1.5}\right),`$
$`b(z)`$ $`=`$ $`2\left(d_0-2d_{0.5}+d_1-{\displaystyle \frac{3a}{4}}\right),`$
$`c(z)`$ $`=`$ $`-d_0+d_1-a-b,`$ (4)
with $`d_{0.5}=D_A(\stackrel{~}{\alpha }=0.5)`$, $`d_1=D_A(\stackrel{~}{\alpha }=1)`$, and $`d_{1.5}=D_A(\stackrel{~}{\alpha }=1.5)`$. Fig.1 shows that the difference between Eq.(3) and the solution to Eq.(2) is much smaller than 0.1 % for the ranges of interest (see Fig.3).
The constants $`d_0`$, $`d_{0.5}`$, $`d_1`$, and $`d_{1.5}`$ depend on the redshift $`z`$ and the cosmological parameters $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$; they are easily computed by integrating Eq.(2). Table 1 lists $`d_0`$, $`d_{0.5}`$, $`d_1`$, and $`d_{1.5}`$ for various cosmological models at various redshifts.
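The entries of Table 1 and the coefficients of Eq. (4) can be reproduced by direct integration of Eq. (2); a sketch (assuming SciPy, with distances in units of $`cH_0^{-1}`$):

```python
import numpy as np
from scipy.integrate import solve_ivp

def angular_diameter_DA(alpha, z_max, om=0.4, ol=0.6):
    """Integrate Eq. (2) for D_A(alpha, z_max), in units of c/H0."""
    ok = 1.0 - om - ol
    g = lambda z: (1.0 + z)**2 * np.sqrt(om * (1+z)**3 + ok * (1+z)**2 + ol)
    def rhs(z, y):
        d, p = y                               # p = g(z) dD_A/dz; p(0) = g(0) = 1
        return [p / g(z), -1.5 * alpha * om * (1+z)**5 * d / g(z)]
    sol = solve_ivp(rhs, (0.0, z_max), [0.0, 1.0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

def fit_coefficients(z, om=0.4, ol=0.6):
    """Coefficients d0, a, b, c of the cubic approximation, Eqs. (3)-(4)."""
    d0, d05, d1, d15 = (angular_diameter_DA(x, z, om, ol) for x in (0.0, 0.5, 1.0, 1.5))
    a = (4.0 / 3.0) * (-d0 + 3.0 * d05 - 3.0 * d1 + d15)
    b = 2.0 * (d0 - 2.0 * d05 + d1 - 0.75 * a)
    c = -d0 + d1 - a - b
    return d0, a, b, c
```

For example, `fit_coefficients(1.0)` gives the $`d_0`$, $`a`$, $`b`$, $`c`$ needed to evaluate Eq. (3) at $`z=1`$ for the $`\mathrm{\Omega }_m=0.4`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$ model.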
Note that Eq.(2) is usually used to describe a universe with a global smoothness parameter $`\stackrel{~}{\alpha }`$, with $`0\le \stackrel{~}{\alpha }\le 1`$, since $`\stackrel{~}{\alpha }`$ is the fraction of matter in the universe which is smoothly distributed. However, Eq.(2) is well defined for all positive values of $`\stackrel{~}{\alpha }`$. Since $`\stackrel{~}{\alpha }`$ essentially represents the amount of matter that causes weak lensing of a given source, and matter distribution in our universe is inhomogeneous, we can think of our universe as a mosaic of cones centered on the observer, each with a different value of $`\stackrel{~}{\alpha }`$. This reinterpretation of $`\stackrel{~}{\alpha }`$ implies that we have $`\stackrel{~}{\alpha }>1`$ in regions of the universe in which there are above average amounts of matter which can cause magnification of a source (see §3).
## 3 The distribution of the direction dependent smoothness parameter $`\stackrel{~}{\alpha }`$
In the previous section, we showed that we can separate the dependence of the angular diameter distance on the mass-fraction of matter smoothly distributed ($`\stackrel{~}{\alpha }`$) from its dependence on the redshift and cosmological parameters (see Eq.(3)). Unlike angular separations and flux densities, distances are not directly measurable. Let us generalize the angular diameter distance obtained in §2 by allowing the smoothness parameter $`\stackrel{~}{\alpha }`$ to be direction dependent, i.e., a property of the beam connecting the observer and the standard candle. In order to derive a unique mapping between the distribution in distances and the distribution in the direction dependent smoothness parameter for given redshift $`z`$, we define the direction dependent smoothness parameter $`\stackrel{~}{\alpha }`$ to be the solution of Eq.(2) (or Eq.(3)) for given distance $`D_A(z)`$.
Note that in numerical simulations, as in the real universe, we can have two lines of sight with the same fraction of smoothly distributed matter, but different distances to a given redshift $`z`$, because weak lensing depends on where the matter is, as well as what fraction of the matter is smoothly distributed. Our definition of the direction dependent smoothness parameter $`\stackrel{~}{\alpha }`$ implies that it is no longer simply the fraction of matter smoothly distributed; it also contains information on where matter is distributed. We can interpret $`\stackrel{~}{\alpha }`$ as the ratio of the effective density of matter smoothly distributed in the beam connecting the observer and the standard candle to the average matter density in the universe, with the effective matter density corresponding to a given amount of magnification. Two lines of sight with the same fraction of smoothly distributed matter but different distances would have different effective densities of smoothly distributed matter, thus different values of $`\stackrel{~}{\alpha }`$.
For given redshift $`z`$, we expect a distribution in the angular diameter distance $`D_A(z)`$ because the distribution of matter between redshift zero and redshift $`z`$ is inhomogeneous. We have parametrized matter distribution with $`\stackrel{~}{\alpha }`$, the direction dependent smoothness parameter; Eq.(3) then tells us how the angular diameter distance depends on the matter distribution for given redshift $`z`$.
Since the direction dependent smoothness parameter $`\stackrel{~}{\alpha }`$ describes the distribution of matter in an arbitrary beam, it is a random variable for given redshift. The direction dependent smoothness parameter $`\stackrel{~}{\alpha }`$ depends on the matter density in the beam connecting the observer and the source, as well as how the matter is distributed in the beam; for matter smoothly distributed throughout the beam, $`\stackrel{~}{\alpha }<1`$ in underdense beams, while $`\stackrel{~}{\alpha }>1`$ in overdense beams. Note that here we do not consider the possibility that a significant fraction of matter can be in point masses in some beams.
The matter density field is Gaussian on large scales and non-Gaussian on small scales, thus we parametrize the probability distribution of $`\stackrel{~}{\alpha }`$ in a form resembling the Gaussian distribution (see Eq.(5)).
Wambsganss et al. have found numerically the distributions of the magnifications of standard candles at various redshifts, $`p(\mu |z)`$, with $`z=`$0.5, 1, 1.5, 2, 2.5, 3, 5, for $`\mathrm{\Omega }_m=0.4`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$ (Wambsganss et al. (1997); Wambsganss (1999)). To extract the distribution of $`\stackrel{~}{\alpha }`$, we use $`\mu =\left[D_A(\stackrel{~}{\alpha }=1)/D_A(\stackrel{~}{\alpha })\right]^2`$ (see Eq.(8)), where $`D_A(\stackrel{~}{\alpha })`$ is given by Eq.(2) or Eq.(3). We find that the distribution of $`\stackrel{~}{\alpha }`$ at given redshift $`z`$ is well described by
$$p(\stackrel{~}{\alpha }|z)=C_{norm}\mathrm{exp}\left[-\left(\frac{\stackrel{~}{\alpha }-\stackrel{~}{\alpha }_{peak}}{w\stackrel{~}{\alpha }^q}\right)^2\right],$$
(5)
where $`C_{norm}`$, $`\stackrel{~}{\alpha }_{peak}`$, $`w`$, and $`q`$ depend on $`z`$ and are independent of $`\stackrel{~}{\alpha }`$. Fig.2 shows $`C_{norm}`$, $`\stackrel{~}{\alpha }_{peak}`$, $`w`$, and $`q`$ as functions of $`z`$; the points are extracted from the numerical $`p(\mu |z)`$, the solid curves are analytical fits given by
$`C_{norm}(z)`$ $`=`$ $`10^{-2}\left[0.53239+2.79165\left({\displaystyle \frac{z}{5}}\right)-2.42315\left({\displaystyle \frac{z}{5}}\right)^2+1.13844\left({\displaystyle \frac{z}{5}}\right)^3\right],`$
$`\stackrel{~}{\alpha }_{peak}(z)`$ $`=`$ $`1.01350-1.07857\left({\displaystyle \frac{1}{5z}}\right)+2.05019\left({\displaystyle \frac{1}{5z}}\right)^2-2.14520\left({\displaystyle \frac{1}{5z}}\right)^3,`$
$`w(z)`$ $`=`$ $`0.06375+1.75355\left({\displaystyle \frac{1}{5z}}\right)-4.99383\left({\displaystyle \frac{1}{5z}}\right)^2+5.95852\left({\displaystyle \frac{1}{5z}}\right)^3,`$
$`q(z)`$ $`=`$ $`0.75045+1.85924\left({\displaystyle \frac{z}{5}}\right)-2.91830\left({\displaystyle \frac{z}{5}}\right)^2+1.59266\left({\displaystyle \frac{z}{5}}\right)^3.`$ (6)
$`C_{norm}(z)`$ is the normalization constant for given $`z`$. The parameter $`\stackrel{~}{\alpha }_{peak}(z)`$ indicates the average smoothness of the universe at redshift $`z`$, it increases with $`z`$ and approaches $`\stackrel{~}{\alpha }_{peak}(z)=1`$ at $`z=5`$; the parameter $`w(z)`$ indicates the width of the distribution in the direction dependent smoothness parameter $`\stackrel{~}{\alpha }`$, it decreases with $`z`$. The $`z`$ dependences of $`\stackrel{~}{\alpha }_{peak}(z)`$ and $`w(z)`$ are as expected because as we look back to earlier times, lines of sight become more filled in with matter, and the universe becomes smoother on the average. The parameter $`q(z)`$ indicates the deviation of $`p(\stackrel{~}{\alpha }|z)`$ from Gaussianity (which corresponds to $`q=0`$).
Fig.3 shows $`p(\stackrel{~}{\alpha }|z)`$ for $`z=0.5`$, 2, and 5. The solid line is derived from $`p(\mu |z)`$ found numerically by Wambsganss et al. (1997); the dotted line is given by Eq.(5), with coefficients from Eq.(6); the dot-dash line shows the difference between the solid curve and the dotted curve; the dashed line shows the Gaussian distribution related to Eq.(5),
$$p^G(\stackrel{~}{\alpha }|z)=C_{norm}\mathrm{exp}\left[-\left(\frac{\stackrel{~}{\alpha }-\stackrel{~}{\alpha }_{peak}}{w\stackrel{~}{\alpha }_{peak}^q}\right)^2\right].$$
(7)
Note that it is difficult to see the dotted lines (our empirical fitting formulae), because they are so close to the solid lines (the numerical results). This is not surprising since we have a 4-parameter fit to a smooth bell-like curve. Comparisons with Eq.(7) show how much $`p(\stackrel{~}{\alpha }|z)`$ deviates from the Gaussian distribution.
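A sketch that evaluates the shape of Eq. (5) with the $`z`$-dependent parameters of Eq. (6); the coefficients are transcribed from the fits above, and the overall normalization is recomputed numerically here rather than taken from $`C_{norm}(z)`$:

```python
import numpy as np

def p_alpha(alpha, z):
    """Unnormalised shape of p(alpha|z), Eq. (5), with the parameters of Eq. (6)."""
    x, y = 1.0 / (5.0 * z), z / 5.0
    a_pk = 1.01350 - 1.07857 * x + 2.05019 * x**2 - 2.14520 * x**3
    w = 0.06375 + 1.75355 * x - 4.99383 * x**2 + 5.95852 * x**3
    q = 0.75045 + 1.85924 * y - 2.91830 * y**2 + 1.59266 * y**3
    return np.exp(-((alpha - a_pk) / (w * alpha**q))**2)

alpha_grid = np.linspace(0.05, 2.0, 2000)
shape = p_alpha(alpha_grid, 1.0)
pdf = shape / np.trapz(shape, alpha_grid)     # normalise to unit integral over alpha
```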
For the same matter distribution, i.e., the same $`p(\stackrel{~}{\alpha }|z)`$, different values of $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ can lead to very different magnification distributions (see §4).
## 4 The magnification distribution due to weak lensing
At given redshift $`z`$, the magnification of a source can be expressed in terms of the apparent brightness of the source $`\mathcal{F}(\stackrel{~}{\alpha })`$, or in terms of the angular diameter distance to the source $`D_A(\stackrel{~}{\alpha })`$:
$$\mu =\frac{\mathcal{F}(\stackrel{~}{\alpha })}{\mathcal{F}(\stackrel{~}{\alpha }=1)}=\left[\frac{D_A(\stackrel{~}{\alpha }=1)}{D_A(\stackrel{~}{\alpha })}\right]^2,$$
(8)
where $`\mathcal{F}(\stackrel{~}{\alpha }=1)`$ and $`D_A(\stackrel{~}{\alpha }=1)`$ are the flux of the source and the angular diameter distance to the source in a completely smooth universe, and $`\stackrel{~}{\alpha }`$ is the direction dependent smoothness parameter (see Eq.(3) and §3). Since distances are not directly measurable, we should interpret Eq.(8) as defining a unique mapping between the magnification of a standard candle at redshift $`z`$ and the direction dependent smoothness parameter $`\stackrel{~}{\alpha }`$ at $`z`$; $`\stackrel{~}{\alpha }`$ parametrizes the direction dependent matter distribution in a well-defined manner.
The distribution in the magnification of standard candles placed at redshift $`z`$ is
$`p(\mu |z)`$ $`=`$ $`p(\stackrel{~}{\alpha }|z)\left|{\displaystyle \frac{d\stackrel{~}{\alpha }}{d\mu }}\right|=p(\stackrel{~}{\alpha }|z){\displaystyle \frac{D_A(\stackrel{~}{\alpha }=1)}{2\mu ^{3/2}}}\left|{\displaystyle \frac{\partial D_A}{\partial \stackrel{~}{\alpha }}}\right|^{-1}`$ (9)
$`\simeq `$ $`p(\stackrel{~}{\alpha }|z){\displaystyle \frac{D_A(\stackrel{~}{\alpha }=1)}{2\mu ^{3/2}}}{\displaystyle \frac{1}{\left|3a\stackrel{~}{\alpha }^2+2b\stackrel{~}{\alpha }+c\right|}},`$
where the parameters $`a`$, $`b`$, and $`c`$ are given by Eq.(4); they depend on $`\mathrm{\Omega }_m`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$, and $`z`$.
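The mapping of Eqs. (8)–(9) can then be carried out numerically. The sketch below reuses `fit_coefficients` from the sketch after Eq. (4) and `p_alpha` from the sketch in §3, and evaluates the Jacobian on a grid of $`\stackrel{~}{\alpha }`$ rather than from the analytic derivative:

```python
import numpy as np

def p_mu(mu_grid, z, p_alpha_func, om=0.4, ol=0.6):
    """Map p(alpha|z) onto p(mu|z) via Eqs. (8)-(9)."""
    d0, a, b, c = fit_coefficients(z, om, ol)
    alpha = np.linspace(0.05, 3.0, 4000)
    d_a = d0 + a * alpha**3 + b * alpha**2 + c * alpha     # Eq. (3), units of c/H0
    mu = ((d0 + a + b + c) / d_a)**2                       # Eq. (8): [D_A(1)/D_A(alpha)]^2
    p_a = p_alpha_func(alpha, z)
    p_a /= np.trapz(p_a, alpha)                            # normalise p(alpha|z)
    p_of_mu = p_a / np.abs(np.gradient(mu, alpha))         # Eq. (9), numerical Jacobian
    order = np.argsort(mu)
    return np.interp(mu_grid, mu[order], p_of_mu[order])

# e.g. the magnification distribution at z = 1 for the Omega_m = 0.4 model:
# pm = p_mu(np.linspace(0.7, 1.6, 300), 1.0, p_alpha)
```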
Fig.4 shows $`p(\mu |z)`$ at $`z=0.5`$, 2, and 5 for the cosmological model $`\mathrm{\Omega }_m=0.4`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$. The solid line is the $`p(\mu |z)`$ found numerically by Wambsganss et al. (1997); the dotted line is given by Eq.(9), with $`p(\stackrel{~}{\alpha }|z)`$ given by Eq.(5). Note the excellent agreement between our empirical fitting formulae and the numerical results.
Fig.5 shows $`p(\mu |z)`$ for three cosmological models at three different redshifts: (a) $`z=0.5`$, (b) $`z=2`$, and (c) $`z=5`$. The three cosmological models are: (1) $`\mathrm{\Omega }_m=1`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$; (2) $`\mathrm{\Omega }_m=0.2`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$; and (3) $`\mathrm{\Omega }_m=0.2`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.8`$. We have computed $`p(\mu |z)`$ using Eq.(9), with $`p(\stackrel{~}{\alpha }|z)`$ given by Eq.(5). Note that Fig.5 and Fig.4 have the same matter distribution (the same $`p(\stackrel{~}{\alpha }|z)`$), but different cosmological parameters.
Models with different cosmological parameters should lead to somewhat different matter distributions $`p(\stackrel{~}{\alpha }|z)`$. It would be interesting to compare numerical predictions for $`p(\stackrel{~}{\alpha }|z)`$ from N-body simulations for different cosmological models. In the context of weak lensing of standard candles, we expect the cosmological parameter dependence to enter primarily through the magnification $`\mu `$ to direction dependent smoothness parameter $`\stackrel{~}{\alpha }`$ mapping at given $`z`$ (the same $`\stackrel{~}{\alpha }`$ corresponds to very different $`\mu `$ in different cosmologies).
## 5 Summary and discussions
We have derived accurate and simple empirical fitting formulae to the weak lensing numerical simulation results of Wambsganss et al. (1997). These empirical formulae can be conveniently used to compute the weak lensing effect of standard candles for various cosmological models. Our formulation is based on the unique mapping between the magnification of a source and the direction dependent smoothness parameter $`\stackrel{~}{\alpha }`$; $`\stackrel{~}{\alpha }`$ is the ratio of the effective density of matter smoothly distributed in the beam connecting the observer and the source to the average density of the universe, with the effective matter density corresponding to a given amount of magnification. We find that the distribution of $`\stackrel{~}{\alpha }`$ is well described by a modified Gaussian distribution; this is interesting since the matter density field is Gaussian on large scales and non-Gaussian on small scales.
We have derived empirical fitting formulae for $`p(\stackrel{~}{\alpha }|z)`$ (see §3) from the numerical magnification distributions, $`p(\mu |z)`$, found by Wambsganss et al. (1997) for $`\mathrm{\Omega }_m=0.4`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$. For the same matter distribution, i.e., the same $`p(\stackrel{~}{\alpha }|z)`$, different values of $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ can lead to very different magnification distributions (see §4). It would be interesting to see how $`p(\stackrel{~}{\alpha }|z)`$ depends on the cosmological model (Wang (1999)).
Our empirical formulae can be used to calculate the weak lensing effects for observed Type Ia supernovae in general cosmologies and at arbitrary redshifts. At redshifts of a few, the dispersion in SN Ia luminosities due to weak lensing will become comparable to or exceed the intrinsic dispersion of SNe Ia (Wang (1998)). The Next Generation Space Telescope (NGST) can detect SNe Ia at as high redshifts as they possibly exist; while there are theoretical uncertainties on the estimated SN Ia rate at high $`z`$, it is likely that the NGST will see quite a few SNe Ia at redshifts of a few (Stockman et al. (1998)). Complementing the NGST search, possible new ground based supernova pencil beam surveys can yield from dozens to hundreds of SNe Ia per 0.1 redshift interval up to at least $`z=1.5`$ (Wang (1998)). The systematic uncertainties of SNe Ia as standard candles will likely be well understood within the next decade.
We have used the numerical results by Wambsganss et al. (1997) in deriving our analytical formulae, which accurately describe the non-Gaussian magnification distributions of standard candles due to weak lensing. We note that Frieman (1997) gave analytical estimates of the magnification dispersions due to weak lensing which are of the same order of magnitude as the numerical results of Wambsganss et al. (1997), even though he did not consider the non-Gaussian nature of the magnification distribution. It would be useful to have numerical results for cosmologies other than the $`\mathrm{\Omega }_m=0.4`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$=0.6 model studied by Wambsganss et al. (1997). Holz & Wald (1998) studied a few cosmological models, including $`\mathrm{\Omega }_m=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$=0; they computed their magnification distributions assuming that the mass in the universe is distributed in unclustered isothermal spheres. It would be interesting to compare how different assumptions affect the numerical results.
Note that we have used the magnification distributions calculated by Wambsganss et al. (1997) using a N-body simulation which has a resolution on small scales that is of the order the size of a halo. These magnification distributions contain the information on how matter is distributed, including the clustering of galaxies. By generalizing the smoothness parameter $`\stackrel{~}{\alpha }`$ to a direction dependent variable, we have been able to describe the weak lensing of standard candles in a simple manner, leading to accurate empirical formulae which can be easily used to calculate the weak lensing effect of type Ia supernovae. The distributions of $`\stackrel{~}{\alpha }`$ also contain the information on how matter is distributed. The derivation of the distribution of $`\stackrel{~}{\alpha }`$ from the matter power spectrum should reveal how the measurement of $`p(\stackrel{~}{\alpha }|z)`$ (via $`p(\mu |z)`$) can probe the clustering of matter and structure formation in the universe (Wang (1999)).
## Acknowledgements
It is a pleasure for me to thank Joachim Wambsganss for generously providing the magnification distributions used to fit $`p(\stackrel{~}{\alpha }|z)`$, and for helpful suggestions; and Liliya Williams for very useful comments.
# Newly Synthesized Elements and Pristine Dust in the Cassiopeia A Supernova Remnant<sup>1</sup>
<sup>1</sup>Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA
## 1 INTRODUCTION
Infrared (IR) observations of supernova remnants (SNRs) primarily reveal the thermal continuum emission of shock-heated dust (Dwek & Arendt 1992; and references therein). For most remnants, this dust is typical interstellar dust that has been swept up in the expanding supernova blast wave. Only the very youngest SNRs offer the possibility of observing dust that has formed from the metal-enriched ejecta of the supernova itself, before it is dispersed and mixed into the general ISM. The dust content of SNRs is not directly observable in any other wavelength regime. The IR portion of the spectrum also contains ground-state fine-structure lines from neutral to moderately ionized species of atoms from carbon to nickel. Observations of these fine structure lines offer the capability of probing density and temperature regimes not represented by the commonly observed optical transitions. Another distinct advantage of IR observations is that extinction is much lower at IR than optical or UV wavelengths.
The earliest airborne and ground based searches for IR emission from the Cas A supernova remnant (SNR) only managed to set upper limits on the intensity of the IR emission (Wright et al. 1980, Dinerstein et al. 1982). Observations with the IRAS satellite, launched in 1983, provided the first clear detection of Cas A at IR wavelengths. These measurements of the mid-IR emission in broad bands at 12, 25, 60 and 100 $`\mathrm{\mu m}`$ enabled several detailed investigations into the nature of the heating mechanism of the dust within the remnant (Dwek et al. 1987a, Braun 1987). However, the IRAS data lacked the spatial resolution to reveal more than the gross distribution of dust within the SNR. The IRAS data also lacked the spectral resolution needed to determine the amount of line emission that might be contributing in the mid-IR. Subsequent observations by Dinerstein et al. (1987) detected weak line emission of \[S IV\] at 10.4 $`\mathrm{\mu m}`$. Greidanus & Strom (1991) made a higher resolution ground-based map of the northern part of the remnant at 20 $`\mathrm{\mu m}`$. This map reveals clumpy IR emission that does not appear to be strongly correlated with either the optical or X-ray emission. With the advent of ISO, IR observations with high spatial and spectral resolution are available. An ISOCAM map of Cas A at 10.7 – 12 $`\mathrm{\mu m}`$ (Lagage et al. 1996) shows emission that is distributed in a manner similar to the de-extincted X-ray emission, or the radio emission. Much of the brightest emission correlates with line emission from high velocity ejecta within the SNR, leading Lagage et al. (1996) to propose that the observed emission arises from dust that has formed in the knots of supernova ejecta, and is now being heated as the knots are evaporating into the hot gas of the supernova blast wave (Dwek & Werner 1981).
In this paper we present analysis of IR observations of the young Cas A SNR. The 2.4 – 45 $`\mathrm{\mu m}`$ data acquired with the ISO Short Wavelength Spectrometer are described in §2. Section 3 is devoted to analysis of the observed line and continuum emission. Section 4 discusses the implications for mixing of the supernova ejecta, and the formation of dust within the ejecta.
## 2 DATA AND REDUCTION
The ISO data used in this study were obtained using the Short Wavelength Spectrometer (SWS) (de Graauw et al. 1996). All observations were collected in the AOT 1 observing mode which provided moderate resolution ($`\lambda /\mathrm{\Delta }\lambda \sim 500`$) spectra over the entire 2.38 – 45.2 $`\mathrm{\mu m}`$ wavelength range of the instrument. Ten locations in the SNR were targeted as listed in Table 1. The “North” fields (N1 – N6) are located in the northern rim of the remnant where many optical knots are seen, the X-ray and radio emission are bright, and IR emission is strong. Regions N1 – N4 step across the area of the brightest optical knots. Region N5 was intended to sample emission from ejecta at larger radii from the expansion center, but was observed for a shorter interval than the other regions due to time constraints. Region N6 was aimed at a knot identified by Fesen (1990; knot “D”) as a potential source of \[Ne III\] emission. The “South” fields (S1 - S5) are in the southeastern portion of the remnant at a location where the 25 $`\mathrm{\mu m}`$ emission observed with IRAS and the X-ray emission are fairly strong (Dwek et al. 1987a), but the optical and radio emission are relatively weak. (Region S3 was not observed due to time constraints.) The data have been processed through Off-Line Processing (OLP) 6.11 and 6.31, with further identification and exclusion of bad data and averaging using ISAP<sup>3</sup> and our own similarly functional software. (<sup>3</sup>The ISO Spectral Analysis Package (ISAP) is a joint development by the LWS and SWS Instrument Teams and Data Centers. Contributing institutes are CESR, IAS, IPAC, MPE, RAL and SRON.)
Background emission in the spectra is apparently very weak and thus has not been subtracted for any of the spectra shown here. Some of the South regions show little or no line or continuum emission from either the SNR or the background. Modeling of the zodiacal light (Kelsall et al. 1998), the strongest background component at mid-IR wavelengths, indicates that this emission from local interplanetary dust should contribute less than 0.25 Jy at 25 $`\mathrm{\mu m}`$ in the SWS aperture. Similarly, the data have not been corrected for extinction. Using either the Mathis (1990) extinction law or that calculated by Draine & Lee (1984), and the visual extinction estimate of $`A_V`$ = 4.3 mag (Searle 1971), the largest extinction in the mid-IR should be $`\tau \approx 0.28`$, found at the wavelength of the 10 $`\mathrm{\mu m}`$ silicate feature. This amount of extinction would reduce the observed intensities by $`\sim 24\%`$ at $`\sim 10`$ $`\mathrm{\mu m}`$. At other wavelengths the extinction generally reduces the observed intensities by less than 10%. Correction for this would have little effect on the results presented here, and would introduce additional sources of error related to the accuracy of the $`V`$-band extinction and its extrapolation to IR wavelengths, and the uniformity of extinction across the remnant.
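For reference, the size of the neglected correction follows from $`I_{\mathrm{intrinsic}}=I_{\mathrm{observed}}e^\tau `$; a one-line check using the $`\tau \approx 0.28`$ quoted above:

```python
import numpy as np

tau_10um = 0.28                            # silicate-feature optical depth assumed above
fraction_lost = 1.0 - np.exp(-tau_10um)    # ~0.24, i.e. the ~24% reduction at ~10 um
deredden = lambda intensity, tau: intensity * np.exp(tau)
```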
We had previously obtained spectra of Cas A at three locations using the KAO (Moseley et al. 1993). These data cover a smaller wavelength range at lower resolution than the SWS data. Therefore, their main utility is to provide confirmation of the spectral features and relative calibration of the SWS data. The highest quality KAO data were obtained at a location roughly matching that of Region N1. The other observed locations, at regions S1 and at the optically bright Filament 1 of Baade & Minkowski (1954), showed no detectable emission.
## 3 ANALYSIS
### 3.1 Line Emission
Selected portions of the SWS spectra of the ten observed fields are illustrated in Figure 1. The panels in each row of Figure 1 cover a wavelength range of exactly $`\pm 4\%`$ centered on the rest wavelength of a particular fine structure transition. Each row is selected to cover the region of a different transition. In this depiction, different transitions observed at the same radial velocity in a given field should be aligned in each column.
The most prominent line in the spectra is the \[O IV\] 25.89 $`\mathrm{\mu m}`$ line (seventh row of Fig. 1) which is seen in all but the S4 spectrum. As well as being intrinsically strong, this line falls in a region where the SWS sensitivity is good. The dashed lines running vertically in each column of Figure 1 are drawn at the location where each transition should appear if it occurs at the same radial velocity as the \[O IV\] emission in the field. In Region N1 there are distinct red- and blueshifted components. The \[Ar II\] 6.98 $`\mathrm{\mu m}`$ line (first row of Fig. 1) is the second most prevalent line observed. Like the \[O IV\] line it benefits from being intrinsically strong and in a spectral region where the SWS sensitivity is good. The radial velocities of the \[Ar II\] and the \[O IV\] lines are fairly close. This agreement allows us to discriminate between the 25.89 $`\mathrm{\mu m}`$ \[O IV\] line and the 25.99 $`\mathrm{\mu m}`$ \[Fe II\] line. If the 26 $`\mathrm{\mu m}`$ line were emitted by Fe II, then in Region N1 we would have the unlikely situation where the iron ejecta would be moving $`\sim 10^3`$ km s<sup>-1</sup> faster than the argon ejecta on the front side of the SNR, but $`\sim 10^3`$ km s<sup>-1</sup> slower than the argon on the back side of the remnant. Almost all other lines that are observed match the \[O IV\] and \[Ar II\] velocities. The exception is the apparent \[Si II\] line in the N1 region. All of the detected lines correspond to ionized species that are created by overcoming $`n`$-th ionization potentials in the range of 8 - 55 eV, and that have cosmic elemental abundances at least as great as argon. The only species with similar abundances and ionizations with lines in the SWS range that are not observed are H I and Fe II, Fe III, and Fe V.
All lines were fit for a baseline and a Gaussian line profile, to determine radial velocities, line widths, and fluxes. These results are presented in Table 2. In most regions the radial velocities are in excess of 1000 km s<sup>-1</sup>. The velocity uncertainties quoted in Table 2 are the formal uncertainties of the fits, and do not account for systematic errors that may influence the apparent line positions and widths. This indicates that the line emission is arising from fast moving ejecta (e.g. the FMKs) and not relatively slow swept up material such as the quasistationary flocculi (QSFs) with typical velocities of several 10<sup>2</sup> km s<sup>-1</sup>. The velocities of the emission can be used to identify corresponding knots which have been studied at optical wavelengths. Comparison to the observations of Hurford & Fesen (1996) suggests the following correspondence between emission in our observed regions and their fast moving knots: Region N1 ($`v=1700`$ km s<sup>-1</sup>) $`\leftrightarrow `$ FMK 2, Region N3 $`\leftrightarrow `$ FMK 5, and Region N6 $`\leftrightarrow `$ FMK 4. The line ratios observed in the N regions are similar to those observed with other instruments aboard ISO (Lagage et al. 1996; Tuffs et al. 1998).
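A sketch of such a single-line fit (assuming SciPy; the spectrum arrays, initial guesses and unit bookkeeping are placeholders rather than the actual reduction pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5

def fit_line(wave_um, flux_jy, lam_rest_um):
    """Baseline + Gaussian fit to one spectral segment; returns the radial
    velocity (km/s), FWHM (km/s) and integrated line flux (Jy um)."""
    model = lambda lam, base, amp, lam0, sig: base + amp * np.exp(-0.5 * ((lam - lam0) / sig)**2)
    p0 = [np.median(flux_jy), flux_jy.max() - np.median(flux_jy),
          wave_um[np.argmax(flux_jy)], lam_rest_um / 1000.0]
    (base, amp, lam0, sig), _ = curve_fit(model, wave_um, flux_jy, p0=p0)
    velocity = C_KMS * (lam0 - lam_rest_um) / lam_rest_um
    fwhm = 2.3548 * C_KMS * abs(sig) / lam_rest_um
    line_flux = amp * abs(sig) * np.sqrt(2.0 * np.pi)
    return velocity, fwhm, line_flux
```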
The \[Ar II\] emission is strongly correlated with the \[O IV\] emission in intensity as well as velocity as shown in Figure 2a. Figure 2b shows that the ratio of F(\[Ar II\]) / F(\[O IV\]) may increase as a function of intensity. However, regions in the southeast portion of the remnant seem to exhibit the \[O IV\] emission without corresponding \[Ar II\] emission.
The ratio of 18.71 $`\mathrm{\mu m}`$ and 33.48 $`\mathrm{\mu m}`$ \[S III\] lines seen in Region N3 can be used as a density diagnostic (e.g. Houck et al. 1984). The identification and the intensity of the 33.48 $`\mathrm{\mu m}`$ line is uncertain, but if reliable, the indicated electron density of the emitting region is $`n_e=600_{-400}^{+700}`$ cm<sup>-3</sup>. Ratios of \[Ne III\] (15.56 and 36.01 $`\mathrm{\mu m}`$) and \[Ar III\] (8.99 and 21.83 $`\mathrm{\mu m}`$) lines are also density sensitive (Butler & Mendoza 1984; Keenan & Conlon 1993), but for these ionic species one line of each pair is too weak to be observed in the SWS data.
Ratios of the \[Ar III\] 7135Å and 8.99 $`\mathrm{\mu m}`$ lines can be used as a temperature indicator (Keenan & Conlon 1993). Using the Hurford & Fesen (1996) intensities for the 7135Å \[Ar III\] lines, we find electron temperatures of $`T_e\approx `$ 11,000, 4800, and 5500 K for Regions N1, N3, and N6 respectively. The temperatures are highly uncertain, and are consistently lower than the $`T_e\approx `$ 20,000–25,000 K that Hurford & Fesen (1996) derive from the ratios of the optical \[O III\] lines. This difference may be explained if some of the optical emission is missing because the knots are wider than the 2.5<sup>′′</sup> slit used for the optical spectroscopy, or if the IR emission arises from a blend of several knots. Alternatively, it may be that the \[Ar III\] and \[O III\] are tracing different regions of the shocked FMK.
Estimates of the mass in each of the ionic species observed have been made assuming that the electron temperature is $`T_e=10^4`$ K and the electron density is $`n_e=1000`$ cm<sup>-3</sup>, which is at least an order of magnitude below the critical densities at which the observed IR transitions would be collisionally deexcited (Table 3). At densities below the critical densities, the derived masses are proportional to $`T_e^{0.5}e^{hc/\lambda kT_e}n_e^{-1}`$.
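This proportionality makes it straightforward to rescale the tabulated masses to other assumed conditions; a small sketch (with $`hc/k`$ in $`\mathrm{\mu m}`$ K):

```python
import numpy as np

HC_OVER_K = 1.4388e4      # hc/k in micron Kelvin

def mass_rescale(lam_um, t_e, n_e, t_ref=1.0e4, n_ref=1.0e3):
    """Factor by which a tabulated mass changes if T_e and n_e differ from the
    fiducial 10^4 K and 1000 cm^-3 (valid below the critical densities)."""
    x = HC_OVER_K / lam_um
    return np.sqrt(t_e / t_ref) * np.exp(x / t_e - x / t_ref) * (n_ref / n_e)

# e.g. the [O IV] 25.89 um mass for T_e = 5000 K and n_e = 300 cm^-3:
print(mass_rescale(25.89, 5.0e3, 300.0))   # ~2.5x larger than the fiducial value
```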
### 3.2 Continuum Emission
#### 3.2.1 Dust Composition and Mass
The IR spectra from the various regions also exhibit continuum emission with an intensity that varies as widely as the line intensities. While some fraction of the continuum emission of the SNR should be synchrotron radiation, the bright continuum emission that is detected in the SWS data is far too strong to be an extension of the radio power-law spectrum through the IR wavelengths (e.g. Mezger et al. 1986; Dwek et al. 1987a). The continuum emission also exhibits a shape that strongly indicates that it is thermal emission from dust grains within the SNR.
We find that the continuum flux density at 26 $`\mathrm{\mu m}`$ (the baseline flux density underlying the fit to the \[O IV\] lines) is well correlated with the line intensities of both \[O IV\] and \[Ar II\] (Figures 2c and 2d). This suggests that the dust that produces the observed continuum emission is associated with the same fast moving ejecta which produces the line emission. A similar correlation between the \[S IV\] 10.5 $`\mathrm{\mu m}`$ emission and the 11.3 $`\mathrm{\mu m}`$ continuum has led Lagage et al. (1996) to the same conclusion.
In the following, we will concentrate on the analysis of the region N3, which was the brightest of the regions and has the cleanest spectrum. Other than the changes in intensity, it is not clear that any of the other continuum spectra are different from that of region N3. The continuum spectrum after clipping the observed spectral lines and averaging to a resolution of $`R=\lambda /\mathrm{\Delta }\lambda =200`$ is shown in Figure 3. Also shown is the KAO spectrum, arbitrarily scaled to match. The KAO and ISO results appear to be in very good agreement. Both data sets clearly indicate a spectrum that rises quickly, peaks at 21 – 22 $`\mathrm{\mu m}`$ and then fades more slowly at longer wavelengths. The IRAS flux densities at 12, 25, 60, and 100 $`\mathrm{\mu m}`$ normalized to roughly match the 25 $`\mathrm{\mu m}`$ flux density of Region N3 are shown in Fig. 3.
The peak in this spectrum cannot be fit with single temperature blackbody emission from dust with a $`\lambda ^{-2}`$ emissivity. The line drawn through Figure 3 indicates a single dust temperature fit to the SWS spectrum for dust grains with an emissivity taken to be that of Mg protosilicate (Dorschner et al. 1980). The derived dust temperature for this fit is 169 K, and can vary by $`\pm 40`$ K without producing a distinctly poor fit to the data. The best fitting spectra for other potential grain compositions are illustrated in Figure 4. Blackbody emission, graphite, and silicate grains with Draine & Lee (1984) optical constants provide relatively poor fits. The 22 $`\mathrm{\mu m}`$ peak in the Cas A spectrum is at a wavelength too long for the typical astronomical silicate, which is usually stated to have an emission feature at $`\sim 18\mathrm{\mu m}`$. Of the various silicate optical properties which have been measured and published, the only ones which produced good fits to the Cas A spectrum were the protosilicates measured by Dorschner et al. (1980). For the SWS data at region N3, the Fe protosilicate emissivity provides a slightly better fit than the Mg protosilicate. Considering only the KAO data, the Mg protosilicate provides the better fit. Ca protosilicate has an emission feature at 22 $`\mathrm{\mu m}`$, but it is narrower than that of the other protosilicates and the data. Iron oxide (FeO) also has a sharp emission feature near 22 $`\mathrm{\mu m}`$ (Henning et al. 1995). The feature can be broadened to produce a fair fit to the data if it is assumed that, rather than spherical grains, a continuous distribution of ellipsoidal grains (CDE approximation; Bohren & Huffman, 1983) is present. However, FeO grains cannot account for both the $`22\mathrm{\mu m}`$ feature and the continuum emission at $`>30\mathrm{\mu m}`$.
The IRAS measurements shown in Figure 3 show an excess at 60 and 100 $`\mathrm{\mu m}`$ over the extrapolated spectra that fit the 22 $`\mathrm{\mu m}`$ emission peak (Figs. 3 or 4e,f). This indicates the presence of cooler dust within the SNR. However, since the IRAS flux densities are for the integrated emission of the SNR, it is not clear whether the cooler dust resides in the FMKs or elsewhere in the SNR.
The mass of hot ($`170`$ K) dust observed in Cas A is given by
$$M_{dust}=\frac{S_\nu d^2}{B_\nu (T_d)\kappa _\nu }$$
(1)
where $`S_\nu `$ is the observed flux density, $`d`$ is the distance to Cas A, $`B_\nu (T_d)`$ is the Planck blackbody function evaluated at the dust temperature $`T_d`$, and $`\kappa _\nu =3Q/4a\rho `$ is the mass absorption coefficient for the dust. For region N3, $`S_\nu (26\mathrm{\mu m})=6.2`$ Jy, $`d=3.4`$ kpc (Reed et al. 1995), $`T_d=169`$ K, and $`\kappa _\nu (26\mathrm{\mu m})=1332`$ cm<sup>2</sup> g<sup>-1</sup> for Mg protosilicate (Dorschner et al. 1980), yielding $`2.8\times 10^{-6}`$ M<sub>⊙</sub> of dust in the region. Comparison to the mass of the gas in Region N3 (Table 3) suggests that as much as 2% of the Si ejecta in the FMKs has been condensed into dust assuming that Mg protosilicate is $`\sim 25\%`$ Si by mass, and all the Si is traced by the observed dust and \[Si II\] emission.
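For orientation, the quoted dust mass follows directly from Eq. (1) with these numbers; a short check in cgs units (a sketch, function names ours) reproduces it and the scaling to the whole remnant discussed below:

```python
import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # cgs
M_sun, pc, Jy = 1.989e33, 3.086e18, 1.0e-23

def planck_nu(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

# values quoted in the text for Region N3
S_nu  = 6.2 * Jy            # 26 micron flux density
d     = 3.4e3 * pc          # distance to Cas A
T_d   = 169.0               # dust temperature (K)
kappa = 1332.0              # cm^2 g^-1, Mg protosilicate at 26 micron
nu    = c / 26.0e-4         # frequency at 26 micron

M_dust = S_nu * d**2 / (planck_nu(nu, T_d) * kappa)
print(M_dust / M_sun)                   # ~2.8e-6 solar masses in Region N3
print(M_dust / M_sun * 164.0 / 6.2)     # ~7.5e-5 solar masses scaled to the SNR
```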
The total mass of hot dust in the SNR can be roughly estimated by scaling this dust mass by the ratio of the total flux density of Cas A to the flux density of this region. With a total flux density from IRAS observations of $`\sim 164`$ Jy (Arendt 1989) at 25 $`\mathrm{\mu m}`$, we estimate that Cas A contains $`7.7\times 10^{-5}`$ M<sub>⊙</sub> of hot dust. The mass derived here is a factor of 10 – 100 lower than previous estimates of $`0.5-7\times 10^{-3}`$ M<sub>⊙</sub> based on IRAS observations (Mezger et al. 1986; Braun 1987; Dwek et al. 1987a; Greidanus & Strom, 1991; Saken, Fesen & Shull 1992). There are two reasons why our dust mass estimate is lower. First, the mass absorption coefficient for Mg protosilicate is about a factor of 2 larger than that applied in the other studies. Second and more importantly, the dust temperature we find is significantly higher than those estimated from the IRAS observations. Part of this difference is the high temperature implied when fitting the data with Mg protosilicate emissivity (cf. Figs. 3 and 4c), and part of the difference is that the SWS data do not include the 60 and 100 $`\mathrm{\mu m}`$ bands and are therefore insensitive to the presence of cooler dust grains in the FMKs or elsewhere in the SNR.
The apparent excess of 60 and 100 $`\mathrm{\mu m}`$ emission in Figure 3 suggests the presence of a second, cooler dust component in the Cas A SNR. A fit to the 60 and 100 $`\mathrm{\mu m}`$ flux densities only gives a dust temperature of 52 K (Arendt 1989), and a resulting cool dust mass of $`3.8\times 10^{-2}`$ M<sub>⊙</sub> for the entire SNR. (Note that masses quoted by Arendt 1989 are too large by a factor of $`\pi `$.) The 25 $`\mathrm{\mu m}`$ emission of this cooler dust would be $`\sim 18`$ Jy or $`\sim 10\%`$ of the total emission. This contribution would not greatly alter the spectral fitting discussed previously.
#### 3.2.2 Dust Heating and Location
The collisional heating rate for a dust grain of radius $`a`$ (in cm) is given by (Dwek 1987):
$$H_{coll}(a,T_e)=\pi a^2\left(\frac{32}{\pi m_e}\right)^{1/2}n_e(kT_e)^{3/2}h(a,T_e)$$
(2)
where $`m_e`$, $`n_e`$, and $`T_e`$ are the electron mass, number density, and temperature, and $`h(a,T_e)`$ is a grain heating efficiency, which approaches a value of $`1.0`$ for larger grains and lower temperatures. The radiative heating rate of the same dust grain can be expressed as:
$$H_{rad}(a)=\pi a^2Q_\nu cU$$
(3)
where $`U`$ is the radiative energy density and $`Q_\nu `$ denotes the spectrally averaged absorption coefficient. In the region immediately behind a shock passing through a FMK, we can approximate the energy density as $`U=\frac{1}{c}\int I\,d\omega =\frac{1}{c}l\,n_e^2\mathrm{\Lambda }`$ where $`l`$ is a mean scale length of the emitting region (e.g. the cooling length of the shock), and $`\mathrm{\Lambda }`$ is the cooling function of the shocked gas in units of erg cm<sup>3</sup> s<sup>-1</sup>.
$$\frac{H_{coll}}{H_{rad}}=\left(\frac{32kT_e}{m_ec^2}\right)^{1/2}\frac{n_ekT_e}{U}\frac{h(a,T_e)}{Q_\nu }$$
(4)
The sum of these heating rates will be equal to the grain cooling rate or luminosity:
$$L_{gr}=4\pi a^2\sigma T_d^4Q(a,T_d)$$
(5)
where $`T_d`$ is the dust temperature, and $`Q(a,T_d)`$ is the dust absorption coefficient averaged over a Planck spectrum of temperature $`T_d`$.
Equating the grain heating and cooling rates, we can solve for the grain temperature given estimates of the gas temperature, density, and cooling function. Sutherland & Dopita (1995) present models of the structure and emission from shocked knots of O-rich ejecta. Hurford & Fesen (1996) suggest that these models with cloud shock velocities of $`150-200`$ km s<sup>-1</sup> may be appropriate for the Cas A FMKs, if the extinction is $`\sim 1`$ mag greater than the commonly adopted $`A_V=4.3`$ mag (Searle 1971). Thus, the 200 km s<sup>-1</sup> shock model of Sutherland & Dopita (1995) estimates a postshock temperature of $`T_e=10^{6.64}`$ K, $`n_e=400`$ cm<sup>-3</sup>, and $`\mathrm{\Lambda }=10^{-17.5}`$ erg cm<sup>3</sup> s<sup>-1</sup>. For these conditions, collisional heating dominates radiative heating by factors of several hundred, and the dust temperature is given by
$$T_d=\left[\frac{\left(\frac{32}{\pi m_e}\right)^{1/2}n_e(kT_e)^{3/2}h(a,T_e)}{4\sigma Q}\right]^{1/4}.$$
(6)
For silicate grains with $`T_d>150`$ K, $`Q=0.25(a/1\mathrm{\mu m})`$ (Draine & Lee 1984). Approximating $`h\approx 1.0`$, the estimated temperature for $`a`$ = 0.01 $`\mathrm{\mu m}`$ grains will be $`T_d\approx 180`$ K. Larger grains will be correspondingly cooler. This simple calculation shows that the high dust temperature indicated by the continuum spectrum of Cas A is consistent with emission from collisionally heated dust in the post shock region of FMKs, but not with radiatively heated dust in the FMKs. Note that the collisional heating mechanism is only tenable if the electron temperature equilibrates with the ion temperature as in the models presented by Sutherland & Dopita (1995), in contrast to some of the pure oxygen shock models calculated by Itoh (1988) where electron temperatures only reach $`T_e\sim 10^5`$ K.
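A quick numerical check of Eq. (6) for the quoted post-shock conditions can be written in a few lines (a sketch; variable names and the unit heating efficiency are our assumptions):

```python
import numpy as np

k_B, m_e, sigma = 1.381e-16, 9.109e-28, 5.670e-5   # cgs constants

# post-shock conditions from the 200 km/s Sutherland & Dopita model
T_e, n_e, h_eff = 10**6.64, 400.0, 1.0

a = 0.01                 # grain radius in microns
Q = 0.25 * a             # Planck-averaged absorption efficiency (silicate, T_d > 150 K)

heating = np.sqrt(32.0 / (np.pi * m_e)) * n_e * (k_B * T_e)**1.5 * h_eff
T_d = (heating / (4.0 * sigma * Q))**0.25
print(T_d)               # ~180 K for 0.01 micron grains
```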
In Region N3, the integrated flux of thermal emission from the hot dust is $`3`$ times the total flux in IR, optical, and UV lines, assuming that Hurford & Fesen (1996) FMK5 corresponds to the observed knot, and that the spectrum of Sutherland & Dopita’s OSP200 is a reasonable estimate of the UV line intensities. This suggests that thermal radiation from collisionally heated dust is the most important cooling mechanism for the shocked FMKs, in agreement with the more general comparison of the relative importance of X-ray and IR cooling found by Dwek et al. (1987b) for various SNRs.
While the hot dust appears to be associated with the FMKs, the location of the cooler dust producing the 60 and 100 $`\mathrm{\mu m}`$ emission is not as clear. If the cooler dust component is also associated with the FMKs, it could be dust within the unshocked (or cooled post-shock) volume of the FMKs. In a cooler environment, collisional heating of the dust may no longer be significant, but the radiation of the shock passing through a typical FMK may be sufficient to heat nearby dust to the required temperature of $`T_d\sim 50`$ K. In this case, the ratio of the mass of hot dust to the mass of cool dust ($`\sim 1/500`$) will be equal to the fraction of the FMK mass that is contained within the hot post-shock region. However, the global 60 and 100 $`\mathrm{\mu m}`$ emission depicted in Fig. 3 may not originate in the FMKs. It may be emitted by dust in the X-ray emitting gas between and around the FMKs. A dust temperature of $`\sim 50`$ K would then imply electron densities of $`\sim 1`$ cm<sup>-3</sup> if the dust grains have radii of $`a=0.1`$ $`\mathrm{\mu m}`$ (Dwek 1987). Higher electron densities would be required if the grains are larger. This derived density is not atypical for SNRs, but is somewhat low compared to some recent estimates of the density, $`n_et\sim 100\times 10^{11}`$ cm<sup>-3</sup> s, derived from the X-ray observations of Cas A (Holt et al. 1994; Vink, Kaastra, & Bleeker 1996; Favata et al. 1997).
#### 3.2.3 A Stochastic Heating Alternative
In the previous sections, the correlation of the continuum emission with the line emission of the FMKs is taken as evidence for dust radiating at equilibrium temperatures of $`\sim `$ 170 K and collisionally heated by the shocked FMK gas. The composition of the dust appears to reinforce this conclusion. However, an alternate model for the strength and morphology of the continuum emission can be constructed, assuming that the bulk of the dust is embedded within the X-ray emitting gas rather than the FMKs. In this case, as IRAS observations had indicated, most of the dust is collisionally heated to temperatures of $`\sim 100`$ K in the hot X-ray emitting portion of the SNR. Then in regions where the FMKs are plowing into the reverse shock, the bowshocks of the knots compress the gas which will increase the dust temperatures and the spatial density of dust grains (Sutherland & Dopita 1995). However, this environment is at a significantly lower density than the material of the FMKs. Therefore, if the dust grains are small enough they will be stochastically heated, i.e. the grain cooling time will be shorter than the collision time, and the grain temperatures will fluctuate instead of remaining at equilibrium temperatures. From the work of Dwek (1986), we see that silicate grains with radii of $`a=0.005`$ $`\mathrm{\mu m}`$ immersed in a gas with $`n_e=10`$ cm<sup>-3</sup> and $`T_e=10^7`$ K will have dust temperatures $`30<T_{dust}<200`$ K. The presence of the colder grains leads to additional emission at longer wavelengths. For the example chosen here, this improves the fit between the model and the SWS and KAO data (dashed line in Fig. 3).
While dust heated in the bow shocks of the FMKs may be responsible for the correlation of the optical FMKs with the brightest emission in the 11.3 $`\mathrm{\mu m}`$ image (Lagage et al. 1996), the fainter diffuse emission that forms a nearly complete shell correlates fairly well with both X-ray and radio emission (e.g. Anderson & Rudnick 1995). This shell would be emission from stochastically heated dust grains in the lower density X-ray emitting gas. In addition, the shell could contain significant synchrotron emission, depending on the actual shape of the extension of the radio synchrotron spectrum into the IR regime (e.g. Mezger et al. 1986; Dwek et al. 1987a; Tuffs et al. 1998). The morphology of the X-ray and radio emission are similar after correction for extinction (Keohane, Rudnick & Anderson 1996; Keohane, Gotthelf, & Petre 1998), making it difficult to distinguish thermal from synchrotron emission by the apparent structure.
## 4 DISCUSSION
### 4.1 Mixing of the Ejecta
On a large scale, the Cas A SNR gives the impression of a star that has been dissected by layers. First, most of the H- and N-rich outer layers are lost in winds prior to the SN explosion, leading to the QSFs (van den Bergh 1971; Peimbert & van den Bergh 1971). Then, in the SN explosion, the outermost layer of the star is blasted away at the highest velocity, producing the N-rich FMFs (Fesen, Becker, & Blair 1987; Fesen, Becker, & Goodrich 1988). Inner layers of the star produce the FMKs, some of which are mainly oxygen, and others, from deeper layers of the star, contaminated with O-burning products (Si, Ar, Ca) (Chevalier & Kirshner 1979; Winkler, Roberts, & Kirshner 1991). All of these knots of ejecta are rendered visible when they are shocked in the expanding shell of the SNR blast wave. The QSFs are caught by the shell, at the forward shock; the fast-moving knots and flocculi catch the decelerating shell from the inside, at the reverse shock.
Within this framework, the lack of \[Ar II\] emission from the South regions (§3.1) suggests that the material in this region originated in layers of the star above the O-burning zone. The observed ejecta is presumably trailed by ejecta enriched with O-burning products which has yet to catch up with the reverse shock because either the knots have lower velocities or the reverse shock is located at a larger radius than in the northern portion of the SNR. Additional suggestions of a lack of mixing in the SN ejecta are provided by the composition of the dust, discussed below.
Despite the large scale differentiation of the ejecta, evidence for partial mixing of the ejecta is provided by the observation of \[Fe II\] and \[Ni II\] in optical spectra of FMKs (e.g. Winkler, Roberts, & Kirshner 1991), and now by the detection of IR lines of both Ne and Ar in the FMK spectra in the North regions of the SNR (Lagage et al. 1996; Tuffs et al. 1998). The argon, iron, and nickel originate in layers of the progenitor that lie below the oxygen-burning shell, whereas the neon is found in layers above the oxygen-burning shell (Weaver, Zimmerman & Woosley 1978; Woosley, Axelrod, & Weaver 1984; Johnston & Yahil 1984).
### 4.2 Dust Formation in the Ejecta
Since the Cas A SNR is still sweeping up the circumstellar material shed by its progenitor, the dust in the SNR must be relatively recently formed either in the pre-SN stellar wind, or from the ejecta of the SN itself. The correlation of the continuum intensity with the O and Ar line strengths of the FMKs suggests that the observed dust has formed from the ejecta. In §3.2, we found that the continuum spectrum is well represented by dust with optical properties similar to those of the Mg protosilicates measured in the lab by Dorschner et al. (1980). We favor the Mg over the Ca and Fe protosilicates because of the good match to the position and width of the 22 $`\mathrm{\mu m}`$ emission feature, and because the Mg abundance should exceed the Ca and Fe abundances in the O-rich layers of evolved massive stars (Weaver et al. 1978; Woosley et al. 1984), with no mixing of the ejecta required. Similar mixing considerations have led Clayton et al. (1997) to propose that type X SiC grains found in meteorites must originate in type Ia SNe, rather than type Ib or II SNe as Cas A is generally believed to be. We also favor the Mg protosilicate identification of the dust over the possibility of FeO dust, because while FeO dust could create the 22 $`\mathrm{\mu m}`$ feature, its spectrum drops quickly at $`\lambda >25`$ $`\mathrm{\mu m}`$ and would still require another dust component to provide the longer wavelength emission seen in the SWS spectra and in the IRAS data. Finally, the 400 K dust temperatures required for the FeO grains are higher than we expect to find in the SNR at this time.
The apparent presence of Mg protosilicates does not preclude the presence of carbonaceous dust which may have formed in the ejecta. The fate of C in the Cas A ejecta is difficult to determine. Normally strong C lines are obscured by high extinction at UV wavelengths, optical lines are weak (Hurford & Fesen 1996), and no C lines are present in the spectral range of the SWS. If the carbon is locked up in graphitic dust, its IR spectrum will lack any distinctive features in contrast to the strong spectral features of silicate grains. The composition of the dust required to explain the 60 and 100 $`\mathrm{\mu m}`$ emission seen by the IRAS is unconstrained by the broadband observations. This dust component could be either additional silicate dust or carbonaceous dust since only a fraction of the carbon in the ejecta is expected to be locked up in CO molecules (Clayton, Liu, & Dalgarno 1998).
## 5 Conclusion
The ISO SWS data have allowed a detailed look at the line and continuum emission from the Cas A SNR. The data provide new information on the composition of both the gas and dust in the supernova ejecta. The line emission is dominated by \[O IV\] 25.89 $`\mathrm{\mu m}`$, which is not surprising for this prototype of oxygen-rich SNRs. Emission from oxygen-burning products, particularly \[Ar II\] 6.98 $`\mathrm{\mu m}`$, is also strong in some regions of the SNR. The dust composition revealed by these data clearly shows that the dust formed in the ejecta differs from typical interstellar silicates. A Mg protosilicate composition is suggested for the recently formed dust. The mass of the hot dust observed appears to be a small fraction of the condensible silicon. However, a cooler dust component is also present in the SNR, though not necessarily associated with the FMKs.
Somewhat unexpectedly, the SWS data do not provide evidence for emission from iron in the SN ejecta, in either the gas phase or the dust. Another unresolved question concerns the apparent morphological differences that are implied between the $`20-25`$ $`\mathrm{\mu m}`$ emission observed with the SWS and previous IRAS and ground-based studies (Dwek et al. 1987a; Greidanus & Strom 1991).
We wish to thank A. Noriega-Crespo, S. Unger, and the IPAC staff for help with data reduction. R. Smith, W. Glaccum, and K.-W. Chan provided useful assistance with the KAO data and its interpretation. We thank H. Mutschke and the group at Friedrich-Schiller-Universität Jena for making their optical data available on the internet. This analysis was funded through NASA’s ISO Guest Observer program.
Figure 1: Left panel shows the relation between the abundances using Strömgren $`b_1`$ colour, Cousins $`R-I`$ and spectroscopically determined abundances. The right panel shows the histogram of oxygen abundances of 248 stars. This histogram shows clearly that there is a “K-dwarf problem”. The Pagel (1989) inflow model is shown as a dashed line and fits the data quite well.
One of the main issues in the chemical evolution of galaxies is the so called “G-dwarf problem” (Pagel and Patchett, 1975) in which the observed stellar metal distribution differs from the predicted one in the sense that the predicted distribution contains too many metal deficient stars. The same problem has now been seen among K-dwarfs, indicating the problem is a general one (Flynn and Morell, 1997).
We have used the Hipparcos catalogue to choose all K-dwarfs with absolute magnitude in the range $`5.5<M_V<7.3`$, within a radius of 54 parsecs and with an apparent visual magnitude $`V<8.2+1.1\mathrm{sin}|b|`$, where $`b`$ is the galactic latitude. This sample consists of 642 stars and is complete. 408 of these stars were found from the Geneva photometric catalogue (Rufener, 1989) and 364 from the Strömgren catalogue of Hauck and Mermilliod (1998).
We have measured metallicities for 248 of these stars using $`R-I`$ photometry from Bessell (1990) and Geneva and Strömgren based metallicity indicators. For stars with Geneva colours metallicities were found using existing relations for K-dwarfs (Flynn and Morell, 1997). We have developed a Strömgren photometric metallicity indicator for the stars with Strömgren colours. We used 34 G and K dwarfs from Flynn and Morell (1997) to define a relation between the Strömgren colours, the abundance \[Fe/H\] and effective temperature $`T_{\mathrm{eff}}`$. For these stars accurate, spectroscopically determined abundances were known and the effective temperatures were estimated using Cousins $`R-I`$ photometry from the Gliese catalogue (Bessell, 1990).
In Fig. 1a the relation between spectroscopically determined metallicity and the Strömgren based metallicity is shown. The scatter around the one-to-one relation is $`\sim 0.2`$ dex.
Iron abundances \[Fe/H\] have been converted to oxygen abundances \[O/H\] as in Flynn and Morell (1997) using their equation 6. When comparing data and models oxygen abundances are more appropriate because oxygen is produced in short lived stars and can be treated using the convenient approximation of instantaneous recycling (Pagel, 1989).
The metallicity histogram for all 248 stars is shown in Fig. 1b. In this figure the K-dwarf problem is clearly seen. The histogram is peaked near solar metallicity and the number of metal deficient stars is small.
Note that the sample has not been corrected for two effects. Firstly, there is a kinematic bias against higher velocity stars in nearby samples (Sommer-Larsen, 1991). Secondly, the sample is slightly biased toward too many metal weak stars since these are more likely to be in the photometric catalogues we used. In the future we plan to observe the complete sample and thus remove these biases.
References
Bessell, M., 1990, A&AS, 83, 357
Flynn, C. and Morell, O., 1997, MNRAS, 286, 617
Hauck, B., Mermilliod, M., 1998, A&AS, 129, 431
Sommer-Larsen, J., 1991, MNRAS, 249, 368
Pagel, B.E.J., 1989, Rev. Mex. Astr. Astrofis., 18, 153
Pagel, B.E.J., Patchett, B.E., 1975. MNRAS, 172, 13
Rufener, F., 1989, A&AS, 78, 469
# Predicting the critical density of topological defects in 𝑂(𝑁) scalar field theories.
## Abstract
$`O(N)`$ symmetric $`\lambda \varphi ^4`$ field theories describe many critical phenomena in the laboratory and in the early Universe. Given $`N`$ and $`D\le 3`$, the dimension of space, these models exhibit topological defect classical solutions that in some cases fully determine their critical behavior. For $`N=2,D=3`$ it has been observed that the defect density is seemingly a universal quantity at $`T_c`$. We prove this conjecture and show how to predict its value based on the universal critical exponents of the field theory. Analogously, for general $`N`$ and $`D`$ we predict the universal critical densities of domain walls and monopoles, for which no detailed thermodynamic study exists. This procedure can also be inverted, producing an algorithm for generating typical defect networks at criticality, in contrast to the canonical procedure, which applies only in the unphysical limit of infinite temperature.
preprint: UNIGE-TH-98-12-1023
$`O(N)`$ symmetric scalar field theories are a class of models describing the critical behavior of a great variety of important physical systems. For example, for $`N=3`$ they describe ferromagnets; for $`N=1`$, the liquid-vapor transition and binary mixtures; and for $`N=2`$, superfluid $`{}_{}{}^{4}He`$ and the statistical properties of polymers. In the early Universe $`N=2`$ describes the phase transition associated with the breakdown of Peccei-Quinn symmetry, and models of high energy particle physics may belong to the universality class of $`O(N)`$ scalar models, whenever the mass of the Higgs bosons is larger than that of the gauge bosons. $`O(N)`$ scalar models are also invoked in most implementations of cosmological inflation.
One of the fundamental properties of $`O(N)`$ $`\lambda |\varphi |^4`$ field theories is the existence, for $`N\le D`$, of static non-linear classical solutions (domain walls, vortices, monopoles) that we will refer to henceforth as topological defects. At sufficiently high temperatures, topological defects can be excited as non-perturbative fluctuations. Their dominance over the thermodynamics, due to their large configurational entropy, is known to trigger the phase transition in $`O(2)`$ in 3D and 2D, and their persistence at low energies prevents the onset of long range order in $`O(2)`$ for $`D\le 2`$ and in $`O(1)`$ in 1D.
It is therefore natural that the universal critical exponents characterizing the phase transition in terms of defects and those describing the behavior of field correlators must be connected. This connection is made more quantitative whenever one can construct dual models, field theories which possess these collective solutions as their fundamental excitations. In the absence of supersymmetry rigorous mappings between the fundamental models and their dual counterparts exist only in very special cases. Duality has been suggested and empirically observed to be a much more general phenomenon, though.
In this letter we explore the duality between the critical behavior of the 2-point field correlation function and defect densities at criticality. We will show that it leads to the proof that the critical density of vortex strings, observed in recent non-perturbative thermodynamic studies of $`O(2)`$, is a universal number. Among other insights this shows that the phase transition in $`O(2)`$ in 3D occurs when a critical density of defects is reached, connecting directly the familiar picture of the Hagedorn transition in vortex densities to the more abstract critical behavior of the fields. We also extend our procedure to different $`N`$ and $`D`$, making predictions for the values of the universal densities of domain walls and monopoles, in 2 and 3D.
Finally the inversion of this procedure allows us to easily generate typical field configurations at criticality. This is of fundamental practical importance. Recent experiments in $`{}_{}{}^{3}He`$ and large scale numerical studies of the theory have lent quantitative support to the ideas, due to Kibble and Zurek, that defects form at a second order phase transition due to critical slowing down of the field’s response over large length scales, in the vicinity of the critical point. Defect networks hence formed have densities and length distributions set by thermal equilibrium at $`T=T_c^+`$.
In contrast most realizations of defect networks used, e.g., in cosmological studies are generated using the Vachaspati-Vilenkin (VV) algorithm. It relies on laying down random field phases on a lattice and searching for their integer windings along closed paths. The absolute randomness of the phases corresponds to the $`T\to \mathrm{\infty }`$ limit of the theory. More fundamentally it yields defect networks that are quantitatively distinct from those in equilibrium at criticality, i.e., at formation.
Fig. 1 shows the behavior of a system of vortex strings at a second-order phase transition, for $`O(2)`$ in 3D. The data were obtained from the study of the non-perturbative thermodynamics of the field theory. At $`T_c`$ the total string density $`\rho _{\mathrm{tot}}`$ displays a discontinuity in its derivative, signaling a second order phase transition.
A disorder parameter can be constructed in terms of string quantities by dividing the string population into long string (typically string longer than $`L^2`$, where $`L`$ is the size of the computational domain) and loops, comprising the shorter strings. The corresponding densities are denoted by $`\rho _{\mathrm{long}}`$ and $`\rho _{\mathrm{loop}}`$. In Fig. 1 we can observe that $`\rho _{\mathrm{long}}`$ consistently vanishes below $`T_c`$, except for a small range of $`\beta `$ where it increases rapidly to a finite critical value. We have previously conjectured that in the infinite volume limit $`\rho _{\mathrm{inf}}`$ exhibits a discontinuous transition.
The value of the total string density at $`\beta _c`$, $`\rho _{\mathrm{tot}}(\beta _c)\approx 0.20`$, coincides with results from studies of different models in the same universality class. This fact led us to the conjecture that $`\rho _{\mathrm{tot}}(\beta _c)`$ is universal.
In order to prove this conjecture we appeal to a well known result, due to Halperin and Mazenko. Halperin’s formula expresses $`\rho _0`$, the density of zeros of a Gaussian field distribution, in terms of its two-point function. For an $`O(N)`$ theory the relevant quantity is the $`O(N)`$ symmetric correlation function $`G(x)=\langle \varphi (0)\cdot \varphi (x)\rangle `$, resulting in
$$\rho _0\propto \left|\frac{G^{\prime \prime }(x=0)}{G(x=0)}\right|^{\frac{N}{2}}.$$
(1)
Eq. (1) measures the density of coincident zeros of all $`N`$ components of the field at a point. Coincident zeros occur at the core of topological defects. Depending on $`N`$ and $`D`$, coincident zeros can be interpreted as either monopoles, strings or domain walls. In the particular case of a Gaussian $`O(2)`$ theory in $`D=3`$, Halperin’s formula allows us to compute the density of vortex strings crossing an arbitrary plane in three dimensional space, a quantity that is clearly proportional to $`\rho _{\mathrm{tot}}`$.
The last key observation is that in the critical domain of a second order transition, all $`O(N)`$ theories are effectively approximately Gaussian, but with non-trivial critical exponents. In particular renormalization group analysis shows that the mass and quartic coupling vanish at $`T_c`$ . Higher order polynomial terms (eg. $`\varphi ^6`$) may be generated but are small. Hence in the critical domain the field two-point function can be written as
$$G(𝐱)\propto \int d^D𝐤\frac{e^{i𝐤𝐱}}{|𝐤|^{2-\eta }},$$
(2)
where $`\eta \ll 1`$ is the universal critical exponent taking into account deviations from the mean-field result.
Thus the effective Gaussianity of the theory allows us to use Halperin’s result to compute the critical value of $`\rho _{\mathrm{tot}}(\beta _c)`$. Note that, modulo renormalization, the final result depends only on $`\eta `$ establishing, as conjectured, that $`\rho _{\mathrm{tot}}(\beta _c)`$ is a universal quantity.
Substituting Eq. (2) into (1), we obtain:
$$\rho _{tot}\propto \left(\frac{\eta +1}{\eta +3}\right)\frac{k_{\mathrm{max}}^{3+\eta }-k_{\mathrm{min}}^{3+\eta }}{k_{\mathrm{max}}^{1+\eta }-k_{\mathrm{min}}^{1+\eta }},$$
(3)
where we have introduced upper and lower momentum cut-offs $`k_{\mathrm{max}}`$ and $`k_{\mathrm{min}}`$. In the case of a lattice of size $`L`$ and lattice spacing $`a`$ we take $`k_{\mathrm{min}}=2\pi /L`$ and $`k_{\mathrm{max}}=2\pi /a`$ which leads to
$$\rho _{tot}\propto \left(\frac{\eta +1}{\eta +3}\right)\frac{1-(a/L)^{3+\eta }}{1-(a/L)^{1+\eta }}.$$
(4)
For large enough lattices, $`a/L\ll 1`$, and we obtain
$$\rho _{tot}\propto \left(\frac{\eta +1}{\eta +3}\right).$$
(5)
In order to generate quantitative predictions we need to determine the exact proportionality factor in Eq.(5). This can be achieved by invoking the other instance when the interacting theory becomes Gaussian. In the high temperature limit $`\beta \to 0`$, the effective interaction becomes irrelevant. In terms of renormalization group the theory displays a (trivial) Gaussian fixed point, with vanishing effective coupling. On the lattice fields at different points will be completely uncorrelated. Note that this situation corresponds to the usual VV algorithm where a field is thrown randomly on the lattice and a network of strings is built by identifying phase windings. Fig. 1 shows the agreement of the densities observed at $`\beta =0`$ with the well known VV result of $`\rho _{\mathrm{tot}}=1/3`$ with 75% long string. Since the totally uncorrelated field corresponds to a flat power spectrum $`G(k)\propto k^0`$ we normalize Halperin’s expression by imposing $`\rho _{\mathrm{tot}}=1/3`$ for $`\eta =2`$,
$$\rho _{tot}=\frac{5}{9}\left(\frac{\eta +1}{\eta +3}\right).$$
(6)
$`\eta `$ is always much smaller than $`1`$. Setting $`\eta =0`$ we obtain $`\rho _{tot}(T_c)=5/27=0.185`$, close to the value $`\rho _{tot}\approx 0.2`$ observed both in the $`\lambda \varphi ^4`$ and $`XY`$ studies in 3D. A more accurate estimate can be obtained by replacing $`\eta `$ by its precise value for the universality class to which these theories belong. Using $`\eta \approx 0.035`$, we get $`\rho _{\mathrm{tot}}=0.190`$, closer to the observed value.
A similar exercise permits the computation of the critical density of domain walls for $`O(1)`$ and monopoles in an $`O(3)`$ theory at the critical temperature in 3D.
The density of domain walls per link is
$$\rho _{tot}=\frac{1}{2}\left(\frac{5}{3}\right)^{1/2}\left(\frac{\eta +1}{\eta +3}\right)^{1/2}.$$
(7)
Note the value of $`\rho _{tot}(\beta =0)=1/2`$ corresponding to the high-temperature limit. At $`\beta _c`$ we get, with $`\eta =0.034`$ , $`\rho _{tot}(\beta _c)0.38`$. For monopoles we will take for the flat-spectrum case $`\rho _{tot}(\beta =0)0.1`$. A better estimate can be obtained from a tetrahedral discretization of the sphere, resulting in $`\rho _{tot}(\beta =0)=3/32`$ and
$$\rho _{tot}=\frac{3}{32}\left(\frac{5}{3}\right)^{3/2}\left(\frac{\eta +1}{\eta +3}\right)^{3/2}.$$
(8)
with $`\eta =0.038`$ leading to the critical value $`\rho _{tot}(\beta _c)=0.040`$. Finally for domain walls in 2D, the density per link at $`\beta _c`$ is (using $`\rho _{tot}(\beta =0)=1/2`$):
$$\rho _{tot}=\frac{1}{\sqrt{2}}\left(\frac{\eta }{\eta +2}\right)^{1/2}.$$
(9)
Taking $`\eta =0.26`$ we obtain $`\rho _{tot}(\beta _c)=0.24`$.
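For reference, the critical densities quoted in this and the preceding paragraphs follow from a direct evaluation of Eqs. (6)-(9); a short script reproduces them (function names are ours):

```python
def rho_strings_3d(eta):
    # Eq. (6): vortex string density at criticality in 3D
    return (5.0 / 9.0) * (eta + 1.0) / (eta + 3.0)

def rho_walls_3d(eta):
    # Eq. (7): domain-wall density per link in 3D
    return 0.5 * (5.0 / 3.0)**0.5 * ((eta + 1.0) / (eta + 3.0))**0.5

def rho_monopoles_3d(eta):
    # Eq. (8): monopole density in 3D (tetrahedral discretization)
    return (3.0 / 32.0) * (5.0 / 3.0)**1.5 * ((eta + 1.0) / (eta + 3.0))**1.5

def rho_walls_2d(eta):
    # Eq. (9): domain-wall density per link in 2D
    return (eta / (eta + 2.0))**0.5 / 2.0**0.5

print(rho_strings_3d(0.0), rho_strings_3d(0.035))   # 0.185, 0.190
print(rho_walls_3d(0.034))                           # ~0.38
print(rho_monopoles_3d(0.038))                       # ~0.040
print(rho_walls_2d(0.26))                            # ~0.24
```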
The present procedure can be inverted to generate a typical defect network at criticality. The approximate Gaussianity of the field theory at $`T_c`$ implies that the statistical distribution of fields, $`P[\varphi ]`$ is given by
$`P[\varphi ]=𝒩e^{-\beta \int d^3k\frac{1}{2}\varphi _kG^{-1}(|k|)\varphi _{-k}}`$ (10)
This distribution can be sampled by generating fields as
$`\varphi _k=R(k)\sqrt{\beta ^{-1}G(|k|)}`$ (11)
where $`R(k)`$ is a random number extracted from a Gaussian distribution, with zero mean and unit variance. The field can then be Fourier transformed to coordinate space, its phases identified at each site, and vortices found in the standard way. Since we wish to compare results from this algorithm with the ones measured in lattice Langevin simulations, we chose to employ the exact form of the field correlator on the lattice:
$`G(|k|)^{-1}=\left[{\displaystyle \sum _{i=1}^{D}}2\left(1-\mathrm{cos}(k_i)\right)\right]^{\frac{2-\eta }{2}}\underset{|k|\to 0}{\longrightarrow }|𝐤|^{2-\eta }.`$ (12)
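A minimal implementation of Eqs. (10)-(12) might look as follows. The winding-number detection here uses the standard geodesic rule on xy-oriented plaquettes and counts only that one orientation; these choices, and all function names, are ours and need not coincide with the exact prescription used to produce the figures.

```python
import numpy as np

def lattice_G(N, eta):
    """Lattice propagator of Eq. (12): G(k) = [sum_i 2(1-cos k_i)]^{-(2-eta)/2}."""
    k = 2.0 * np.pi * np.fft.fftfreq(N)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    lap = 2 * (1 - np.cos(kx)) + 2 * (1 - np.cos(ky)) + 2 * (1 - np.cos(kz))
    lap[0, 0, 0] = np.inf                      # drop the zero mode
    return lap ** (-(2.0 - eta) / 2.0)

def gaussian_field(N, eta, beta=1.0):
    """Complex O(2) field with spectrum beta^-1 G(|k|), as in Eq. (11)."""
    amp = np.sqrt(lattice_G(N, eta) / beta)
    phi_k = amp * (np.random.normal(size=(N, N, N))
                   + 1j * np.random.normal(size=(N, N, N)))
    return np.fft.ifftn(phi_k)

def string_density(phi):
    """Mean |winding| per xy-oriented plaquette (geodesic rule)."""
    theta = np.angle(phi)
    d1 = np.angle(np.exp(1j * (np.roll(theta, -1, 0) - theta)))
    d2 = np.angle(np.exp(1j * (np.roll(np.roll(theta, -1, 0), -1, 1)
                               - np.roll(theta, -1, 0))))
    d3 = np.angle(np.exp(1j * (np.roll(theta, -1, 1)
                               - np.roll(np.roll(theta, -1, 0), -1, 1))))
    d4 = np.angle(np.exp(1j * (theta - np.roll(theta, -1, 1))))
    winding = np.rint((d1 + d2 + d3 + d4) / (2 * np.pi))
    return np.mean(np.abs(winding))

phi = gaussian_field(64, eta=0.035)
print(string_density(phi))    # should approach ~0.2 on large lattices
```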
We have performed several tests on the algorithm, by comparing it to the results of the non-perturbative thermodynamics of the fields at criticality. We used lattices of size $`N_{\mathrm{lat}}^3`$ with $`N_{\mathrm{lat}}=16,32,64`$ and $`128`$. All results are averages over 50 samples obtained from independent random realizations. Fig. 2 shows the string densities for values of $`\eta `$ between 0 and 0.1, including all reasonable values of $`\eta `$ in 3D.
The values for the densities depend on the size of the lattice, converging to finite values for large $`N_{\mathrm{lat}}`$. In Fig. 3 we can see the scaling of $`\rho _{\mathrm{tot}}`$ with box size for two choices of the critical exponent, the mean field value $`\eta =0`$ and the theoretical result for the $`O(2)`$ universality class in 3D, $`\eta \approx 0.035`$. We can predict the form and the power of this scaling through Eq. (4). Writing $`a/L=1/N_{\mathrm{lat}}`$, where $`N_{\mathrm{lat}}`$ is the number of points in the lattice, and expanding Eq. (4) in powers of $`1/N_{\mathrm{lat}}`$ we see that Halperin’s result converges to its infinite volume limit according to:
$$\rho _{\mathrm{tot}}(N_{\mathrm{lat}})-\rho _{\mathrm{tot}}(\mathrm{\infty })=\frac{1}{N_{\mathrm{lat}}^{1+\eta }}+O(1/N_{\mathrm{lat}}^2)$$
(13)
To check these scalings we fitted the data of Fig. 3 to a power law of the form:
$$\rho _{\mathrm{tot}}(N_{\mathrm{lat}})=\rho _{\mathrm{tot}}(\mathrm{\infty })+\frac{A}{N_{\mathrm{lat}}^\alpha }$$
(14)
For $`\eta =0`$ and $`\eta =0.035`$ we found:
$`\eta =0.0,\rho _{\mathrm{tot}}(\mathrm{\infty })=0.1969,A=0.3259,\alpha =1.060;`$ (15)
$`\eta =0.035,\rho _{\mathrm{tot}}(\mathrm{\infty })=0.2012,A=0.3422,\alpha =1.124.`$ (16)
These values of $`\alpha `$ are indeed close to 1, with a larger correction for $`\eta =0.035`$ as expected from Eq. (13).
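The same finite-size behaviour follows directly from Eq. (4); a few lines suffice to tabulate it (a sketch using the Eq. (6) normalization; the small quantitative offset from the fitted values reflects the continuum nature of Halperin's formula, as discussed further below):

```python
def rho_tot_lattice(N_lat, eta):
    # Eq. (4) with a/L = 1/N_lat, normalized as in Eq. (6)
    x = 1.0 / N_lat
    return (5.0 / 9.0) * (eta + 1.0) / (eta + 3.0) \
        * (1.0 - x**(3.0 + eta)) / (1.0 - x**(1.0 + eta))

for N in (16, 32, 64, 128):
    print(N, rho_tot_lattice(N, 0.035))
# the approach to the infinite-volume value 0.190 falls off roughly as 1/N^(1+eta)
```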
For a lattice of size $`N_{\mathrm{lat}}=100`$ we previously measured $`\rho _{\mathrm{tot}}(\beta _c)=0.198\pm 0.004`$. For a Gaussian field with $`\eta =0.035`$ we obtain $`\rho _{\mathrm{tot}}=0.203\pm 0.003`$. The agreement of the two results is very satisfactory.
The results for $`\rho _{\mathrm{long}}`$ and $`\rho _{\mathrm{loop}}`$ using these two different methods are also in good agreement. In this case we were not able to find a reasonable scaling expression though, but this is to be expected given the arbitrariness of the long string definition. The results for $`N_{\mathrm{lat}}=100`$, using $`\eta =0.035`$ are $`\rho _{\mathrm{long}}=0.080\pm 0.004`$, $`\rho _{\mathrm{loop}}=0.121\pm 0.004`$. These compare well with the non-perturbative results $`\rho _{\mathrm{long}}=0.076\pm 0.005`$, $`\rho _{\mathrm{loop}}=0.120\pm 0.004`$. Even more impressive is that the string length distribution at criticality can also be reproduced by our Gaussian field algorithm. This distribution can be successfully fitted to an expression of the form ,
$`n(l)=Al^{-\gamma }e^{-\beta \sigma l}.`$ (17)
The fit to the results of the Gaussian field algorithm shows a small variation of the parameters $`A,\gamma `$, and $`\sigma `$ for $`\eta `$ between 0 and 0.1. $`\sigma `$ is consistently zero, reflecting the fact that the spectrum is always scale invariant. The value of $`\gamma `$ varies between 2.34 and 2.40. For the critical exponent $`\eta =0.035`$ we obtained $`\gamma \approx 2.35`$. Once again this is in good agreement with the result from the lattice non-perturbative thermodynamics at $`T_c`$, $`\gamma \approx 2.36`$.
Finally the predictions for $`\rho _{\mathrm{tot}}`$ from Halperin’s formula, when compared to the accuracy of the Gaussian algorithm seem rather poor. The expression is meant to apply for continuum distributions, while all other values of $`\rho _{\mathrm{tot}}`$ were obtained on the lattice. A straight substitution of the lattice correlators (12) into (1) increases $`\rho _{\mathrm{tot}}`$ to 0.21 from 0.19, covering our full range of results. To perform a precise comparison however Halperin’s formula should be rederived for a field theory on the lattice. Despite these shortcomings Halperin’s formula has the merit of being the only analytic way of estimating the critical densities of defects in theories where non-perturbative thermodynamic results are scarce.
We have therefore established the connection between the universal critical exponent characterizing the behavior of the $`O(N)`$ field 2-point correlator and the critical density of defects. This relation implies that defect densities at $`T_c`$ for a system undergoing a second order phase transition are universal numbers. We predicted them for several $`O(N)`$ models in 2 and 3D. Based on these insights we proposed a new algorithm for generating networks of defects at the time of formation. In particular, we have shown that this algorithm reproduces accurately all the features of a string network in 3D at criticality. This procedure, instead of the more usual VV algorithm, should be used to generate typical defect networks at the time of their formation.
We thank R. Dürrer and W. Zurek for useful discussions. This work was partially supported by the ESF and by the DOE, under contract W-7405-ENG-36.
|
no-problem/9901/cond-mat9901242.html
|
ar5iv
|
text
|
# Modelling Molecular Motors as Folding-Unfolding Cycles
## Abstract
We propose a model for motor proteins based on a hierarchical Hamiltonian that we have previously introduced to describe protein folding. The proposed motor model has high efficiency and is consistent with a linear load-velocity response. The main improvement with respect to previous models is that this description suggests a connection between folding and function of allosteric proteins.
An important class of proteins are allosteric: part of the molecule undergoes large conformational changes as a result of a chemical reaction which takes place in another region of the protein. One example of this is motor proteins which through a series of conformational changes perform directed motion using the chemical energy gained from $`ATP`$ hydrolysis. Physicists have proposed that this may be viewed as a ratchet-like mechanism, similar to Feynman’s famous ratchet but with the addition that ATP binding/hydrolysis has the ability to turn this ratchet on and off. Several models have been proposed along these lines.
The energy associated with the chemical cycle of real motor proteins is comparable to the total binding energy of a protein. This leads us to speculate that the directed motion associated with motor action may be connected to the pathway for protein folding. Thus, we propose to describe the motor variables in terms of a pathway parametrization of protein folding using three variables $`\phi _0`$, $`\phi _1`$ and $`\phi _2`$, each of which takes the value $`0`$ or $`1`$. These variables, which label the conformations visited during the motor cycle, also label subsequent states along the folding pathway of the protein. Thus we propose that the energy function of the motor protein in the absence of ATP and external load is described by the Hamiltonian:
$$H_0=-\phi _0-\phi _0\phi _1-\phi _0\phi _1\phi _2.$$
(1)
The ground state of $`H_0`$ is $`(\phi _0,\phi _1,\phi _2)=(1,1,1)`$. At low temperatures $`T`$ the variables will tend to freeze into the ground state, whereas they melt for $`T>T_c=1/\mathrm{ln}(2)`$.
$$H_1=[ATP]\phi _0-\phi _0-\phi _0\phi _1-\phi _0\phi _1\phi _2+F(\phi _0,\phi _1,\phi _2)$$
(2)
where \[ATP\] represents the effects of ATP on the chemical potential of the molecule. \[ATP\] can cycle between two values, say $`0`$ and $`A`$, where $`A`$ is the energy associated with hydrolysis, and $`F`$ is the external load on the motor. With this structure we single out one variable $`\phi _0`$ to represent ATP binding and hydrolysis while the two other variables represent events associated with attachment ($`\phi _1`$) and mechanical motion ($`\phi _2`$) of the motor.
The dynamics of the motor is generated by switching ATP between $`A`$ and $`0`$. When ATP is zero, $`\phi _0`$ is driven to $`\phi _0=1`$ by the first term in the energy function. Subsequently this sets an ordering where typically first the variable $`\phi _1`$, then the variable $`\phi _2`$ takes the value $`1`$. Thus first the “leg” binds to the substrate, then it performs a step (or “power stroke”) $`\phi _2=0\to 1`$, and it advances forward by one step. If by chance it does the opposite, i.e. if $`\phi _2`$ moves before $`\phi _1`$, this represents the event where the stroke takes place before the binding, and we assign it a one step backwards movement. When $`(\phi _1,\phi _2)=(1,1)`$ we initiate the chemical switch \[ATP\] $`=0\to A`$ which subsequently forces $`\phi _0\to 0`$, and accordingly releases the binding constraints of $`\phi _1,\phi _2`$. The ordering of the melting of these variables is random. The asymmetry in the freezing — where the process typically proceeds in the order $`\phi _0\to 1`$, $`\phi _1\to 1`$ and $`\phi _2\to 1`$ — and the melting — which can proceed in any order — we refer to as the mirror effect, and it lies at the heart of the functioning of this motor. The total move table is
$`(\phi _1,\phi _2)=(0,0)\to (1,0)\to (1,1)`$ defines move $`x\to x+dx`$ (3)
$`(\phi _1,\phi _2)=(0,0)\to (0,1)\to (1,1)`$ defines move $`x\to x-dx`$ (4)
$`(\phi _1,\phi _2)=(1,1)\to (1,0)\to (0,0)`$ defines move $`x\to x-dx`$ (5)
$`(\phi _1,\phi _2)=(1,1)\to (0,1)\to (0,0)`$ defines move $`x\to x+dx`$ (6)
In terms of the normal nomenclature in the literature on motor molecules, the moves of Eq.(3) represent respectively the binding and the working stroke whereas the moves in Eq. (5) undo these. The moves of Eq. (6) then represent the optimal sequence of unbinding and recovery.
In order to study the model numerically, we have chosen to use the following dynamics: In each update, either zero or one of the $`\phi `$-variables changes value. The probability that a given change is made (either $`(\phi _0,\phi _1,\phi _2)(1\phi _0,\phi _1,\phi _2)`$, $`(\phi _0,1\phi _1,\phi _2)`$, $`(\phi _0,\phi _1,1\phi _2)`$, or $`(\phi _0,\phi _1,\phi _2)`$) is proportional to the Boltzmann factor associated with the configuration the model is entering into,
$$P(s_i\to s_f)\propto e^{-H_1(s_f)/T}.$$
(7)
where $`s_i`$ and $`s_f`$ are initial and final states respectively. As the variables are discrete, it is not possible to assign a physical time to the process. However, the amount of ATP that is hydrolyzed (i.e., the number of times \[ATP\] takes the value $`A`$) is a natural clock in this process.
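A minimal Monte Carlo sketch of this cycle is given below. The assignment of ±dx to completed transits of $`(\phi _1,\phi _2)`$ follows Eqs. (3)-(6); the rule that \[ATP\] is switched back to $`0`$ once $`(\phi _1,\phi _2)`$ has returned to $`(0,0)`$, the reporting of the net displacement per hydrolysis event, and all function names are our assumptions about details not spelled out above.

```python
import numpy as np

rng = np.random.default_rng(1)

def H1(phi, atp, f):
    """Energy of Eq. (2), with the drag term of Eq. (9)."""
    p0, p1, p2 = phi
    return (atp * p0 - p0 - p0 * p1 - p0 * p1 * p2
            + f * (p0 + p0 * p1 + p0 * p1 * p2))

def run_motor(T=0.25, A=4.0, f=0.0, n_hydrolyses=2000):
    phi, atp, x, hydrolyses = [0, 0, 0], 0.0, 0, 0
    corner, first_mover = (0, 0), None        # last corner visited by (phi1, phi2)
    while hydrolyses < n_hydrolyses:
        # one update: flip one variable or stay, with Boltzmann weights of Eq. (7)
        candidates = [list(phi) for _ in range(4)]
        for i in range(3):
            candidates[i][i] = 1 - candidates[i][i]
        w = np.array([np.exp(-H1(c, atp, f) / T) for c in candidates])
        new = candidates[rng.choice(4, p=w / w.sum())]
        if first_mover is None and (new[1], new[2]) != (phi[1], phi[2]):
            first_mover = 1 if new[1] != phi[1] else 2
        phi = new
        p12 = (phi[1], phi[2])
        if p12 == corner:                     # wandered back, no transit completed
            first_mover = None
        elif p12 == (1 - corner[0], 1 - corner[1]):
            # corner-to-corner transit: forward if phi1 moved first, Eqs. (3)-(6)
            x += 1 if first_mover == 1 else -1
            if atp == 0.0:
                atp = A                       # (1,1) reached: ATP binding/hydrolysis
                hydrolyses += 1
            else:
                atp = 0.0                     # (0,0) reached: release and recovery
            corner, first_mover = p12, None
    return x / n_hydrolyses                   # net displacement per hydrolysis event

print(run_motor(T=0.25))   # close to 1: nearly every hydrolysis advances the motor
print(run_motor(T=5.0))    # much smaller at high temperature
```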
A more realistic dynamics for this model is based on constructing a Langevin equation from the Hamiltonian (2). Continuous variables may then be used and a connection with “real time” may be made. The Langevin equation reads
$$\frac{d\phi _i}{dt}=-\mu _i\frac{\partial H_1}{\partial \phi _i}+\eta _i(t)$$
(8)
where $`\mu _i`$ are mobilities, $`\eta _i(t)`$ thermal noise, and the variables $`\phi _i`$ now have to be continuously varying. To obtain the real cycle times we in addition have to introduce the residence times in the different states. We note that if the limiting step of the dynamics is the ATP binding, i.e. at very low ATP concentration, then the efficiency curves given by the phase space method also represent real time velocity curves up to a timescale that is set by the ATP hydrolysis rate. In the opposite limit, the Langevin dynamics captures the timescale of the motion through the coefficients $`\mu _i`$. Experiments show roughly linear load-velocity characteristics. We can obtain a linear dependence with this Langevin dynamics for certain realizations of the force but this feature appears not to be robust with respect to the functional form of the load.
In Figure 1 we show the average position of the motor as a function of the number of times an ATP molecule was hydrolyzed, i.e., that the ATP was switched from $`0`$ to $`A`$. In the figure we show two cases, both for $`A=4`$, one at a low temperature $`T=0.25`$ (the upper curve) and one at a high temperature, $`T=5`$. In each case the corresponding mean trajectories are also shown (average over 5000 trajectories). First we note the high efficiency for the $`T=0.25`$ case. About 95% of the hydrolysis events lead to forward motion. Such a high efficiency is seen in real motor proteins at least at low ATP concentrations.
Coming back to the drag force we note that its form has to fulfill certain conditions. Firstly it should act through pushing the system into a particular “corner” of configuration space. Secondly, the drag force should not change the ATP parameter $`A`$ directly. However, the reaction rate, set by the term $`[\text{ATP}]\phi _0`$, can be influenced through forcing the dynamical variables $`\phi _0`$, $`\phi _1`$ and $`\phi _2`$. Thirdly, the drag force influences the motor function through changing the rate of the $`(\phi _1,\phi _2)=(1,1)\to (0,0)`$ transition vs. the $`(\phi _1,\phi _2)=(0,0)\to (1,1)`$ rate. An explicit realization that fulfills these conditions is
$$F(\phi _0,\phi _1,\phi _2)=+f(\phi _0+\phi _0\phi _1+\phi _0\phi _1\phi _2).$$
(9)
This is not the only possible choice, but given the structure of the Hamiltonian (2), this is a natural one. Another possible choice is simply $`F=f(\phi _0+\phi _1+\phi _2)`$. Note that with $`f`$ positive, the drag force acts against the direction of the motion of the motor.
In figure 2 we show the efficiency $`p`$ versus drag $`f`$ for different temperatures $`T`$. We observe that $`p`$ decreases as the drag force is increased. For high temperature the efficiency $`p`$ decreases approximately linearly with $`f`$. At lower temperatures one observes that a very high efficiency is maintained even for moderate drag. At larger drag, the motor stops moving forward. The stalling at $`f=1`$ occurs because the energy-minimum of $`H_0`$, Eq. (1), which is equal to -3, is destroyed by the contribution of the drag term, which is $`3f`$ when $`f=1`$. For $`f>1`$, the state $`(\phi _0,\phi _1,\phi _2)=(0,0,0)`$ has lower energy than the state $`(1,1,1)`$, and the ATP-cycling loses its effect. The net result is that the motor stalls. The question now is, what temperature should we choose in the model to compare with actual motors. The $`ATP`$ cycle is known to generate about $`25kT_{room}`$ and the total binding energy of a protein domain is of the same order. Further, motor proteins stall at a drag corresponding to a work per step of about $`10kT_{room}`$. In the model the total binding energy is $`3`$ and the ATP cycle generates $`4`$ units of energy. Thus room temperature corresponds to a temperature in the model of about $`0.2`$. In the discussion above, this is what we mean by low temperature. Thus our model predicts high efficiency maintained nearly up to stalling drag.
Finally we would like to remark that if we pull on the motor, i.e., employ a negative $`f`$, the success rate increases for some range of pulls, until finally the efficiency of the motor drops dramatically. This drop is because for $`f`$ large and negative both values of \[ATP\] lead to the same ground state. To be quantitative for the case $`A=4`$: For $`[\text{ATP}]=0`$ the ground state is $`(1,1,1)`$ for all $`f<0`$. For $`[\text{ATP}]=4`$, the ground state is $`(0,0,0)`$ for $`f>-1/3`$, whereas for $`f<-1/3`$ the ground state is again $`(1,1,1)`$. Thus for $`f>-1/3`$ the motor switches between the ground states $`(0,0,0)`$ and $`(1,1,1)`$ when the ATP variable cycles. However when $`f\le -1/3`$, the $`(1,1,1)`$ state becomes the ground state for both $`[\text{ATP}]`$ values, and the ATP-cycling no longer forces the motor to switch between states. The result is a drop in efficiency. In Figure 2, the drop occurs at a smaller $`f`$ than $`-1/3`$, but seems to approach this value as the temperature is decreased.
We now discuss how the present model is different from existing ones. It has previously been proposed that motors could work as ratchets. Our model is indeed a type of ratchet, it cycles between two states, one of random diffusion and one of directed motion, much like the sawtooth ratchet originally proposed by Prost et al. However a ratchet mechanism similar to the one studied by Prost et al. would, depending on the asymmetry of the potential, make a positive move in at most half of the ATP hydrolysis events. During the other half of the ATP cycles it will remain in the same potential well, or maybe even stop. Thus the maximum efficiency would be $`1/2`$. In this letter we instead propose that forward motion depends on the relative ordering of two events associated with two different degrees of freedom. This results in a motor which is close to 100% efficient.
In summary, the present paper discusses how ordering of events may be utilized to construct a simple motor. The motor contains three variables $`\phi _0`$, $`\phi _1`$ and $`\phi _2`$. However, the hierarchical structure of the Hamiltonian (2) is easily extended to any number of variables — and thereby levels. The hierarchical nature of the Hamiltonian ensures that each level controls all subsequent levels, that is we have an explicit realization of local control of global motion as seen in allosteric proteins. Furthermore, the model suggests a connection between two seemingly unrelated properties: folding and motor action.
# Disk Accretion onto Magnetized Neutron Stars: The Inner Disk Radius and Fastness Parameter
## 1 Introduction
Interaction between a magnetized, rotating star and a surrounding accretion disk has been of continued interest since the discovery of X-ray pulsars in binary systems (Giacconi et al. 1971, etc). Its solution may hold the key to understanding fundamental problems such as structure and dynamics of magnetized accretion disks, angular momentum exchange between the disk and the star, etc., in a variety of astrophysical systems, including cataclysmic variables, X-ray binaries, T Tauri stars and active galactic nuclei. A detailed model for the disk-star interaction was first developed by Ghosh & Lamb (1979a, 1979b), who argued that the stellar magnetic field penetrates the disk via Kelvin-Helmholtz instability, turbulent diffusion and reconnection, producing a broad transition zone joining the unperturbed disk flow far from the star to the magnetospheric flow near the star. The transition zone is composed of two qualitatively different regions, a broad outer zone where the angular velocity is nearly Keplerian and a narrow inner zone or boundary layer where it departs significantly from the Keplerian value. The total torque $`N`$ exerted on the star with mass $`M`$ is then divided into two separate components:
$$N=N_{\mathrm{in}}+N_{\mathrm{out}},$$
(1)
where the subscript represents the appropriate zone in the disk. The “material” torque of the (inner) accretion flow is
$$N_{\mathrm{in}}\simeq N_0=\dot{M}R_0^2\mathrm{\Omega }_{\mathrm{k0}},$$
(2)
where $`\mathrm{\Omega }_{\mathrm{k0}}=(GM/R_0^3)^{1/2}`$ is the angular velocity of disk plasma at the magnetospheric radius $`R_0`$ (G is the gravitation constant), and $`\dot{M}`$ is the accretion rate. Here we have adopted cylindrical coordinates $`(R,\varphi ,z)`$ centered on the star, and the disk is assumed to be located on the $`z=0`$ plane, which is perpendicular to the star’s spin and magnetic axes. The torque in the outer zone supplied by the Maxwell stress is given by
$$N_{\mathrm{out}}=\int _{R_0}^{\mathrm{\infty }}B_\varphi B_\mathrm{z}R^2dR,$$
(3)
where $`B_\varphi `$ is the azimuthal field component generated by shear motion between the disk and the vertical field component $`B_\mathrm{z}`$. The torque $`N_{\mathrm{out}}`$ can be positive or negative depending on the value of the “fastness parameter” $`\omega _\mathrm{s}\equiv \mathrm{\Omega }_\mathrm{s}/\mathrm{\Omega }_{\mathrm{k0}}=(R_0/R_\mathrm{c})^{3/2}`$, where $`\mathrm{\Omega }_\mathrm{s}`$ is the angular velocity of the star and $`R_\mathrm{c}\equiv (GM/\mathrm{\Omega }_\mathrm{s}^2)^{1/3}`$ is the corotation radius.
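For orientation, the corotation radius can be evaluated numerically; the following is a minimal sketch (the choice of a 1.4 solar-mass star spinning at $`P=1`$ s is ours, purely illustrative):

```python
# Quick numerical check of R_c = (G M / Omega_s^2)^{1/3} for illustrative parameters.
import numpy as np

G, M_sun = 6.674e-8, 1.989e33          # cgs units
M, P = 1.4 * M_sun, 1.0                # assumed stellar mass and spin period (ours)
Omega_s = 2.0 * np.pi / P
R_c = (G * M / Omega_s**2) ** (1.0 / 3.0)
print(f"R_c ~ {R_c:.2e} cm")           # ~1.7e8 cm, comparable to the R_0 quoted in Sec. 3
```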
Inside the magnetospheric boundary, the magnetic stress dominates the rotation of disk plasma, and forces it to corotate with the magnetosphere at the radius $`R_{\mathrm{in}}`$ (according to Spruit & Taam (1990), the accreting gas, nearly corotating with the star, could drift further inward across the field by an interchange instability). To understand how far the disk can extend into the stellar magnetosphere and how the torque is transferred to the star, it is necessary to obtain the solutions of the structure of the magnetosphere. Ghosh & Lamb (1979a) suggested that $`R_0`$ and $`\delta `$ be constrained by the following relation
$$(RB_\mathrm{z}B_\varphi /4\pi )_0\cdot 2\pi R_0\cdot 2\delta \simeq \dot{M}(R\mathrm{\Delta }v_\varphi )_0$$
(4)
(the subscript 0 denotes quantities evaluated at $`R=R_0`$), which follows from the equation of angular momentum conservation, though the contribution from the gas funnel flow to angular momentum transfer has not been included. Ghosh & Lamb (1979a) also assumed a priori that $`\delta `$ corresponds to the electromagnetic screening length, and is much smaller than $`R_0`$. To test the self-consistency of this assumption, from equation (4) we derive $`\delta \sim (R_0/R_\mathrm{A})^{7/2}R_0`$ when $`\omega _\mathrm{s}\ll 1`$, where $`R_\mathrm{A}`$ is the conventional Alfvén radius in spherical accretion. If $`R_0\simeq 0.5R_\mathrm{A}`$ as obtained by Ghosh & Lamb (1979a), then indeed $`\delta \ll R_0`$. However, if $`R_0\simeq R_\mathrm{A}`$ as suggested by some recent investigations (e.g., Arons 1993; Ostriker & Shu 1995; Wang 1996; Li 1997), $`\delta `$ will be as large as $`R_0`$.
Li, Wickramasinghe & Rüdiger (1996) have presented a class of funnel flow solutions in which the angular momentum is carried by the matter. These authors argue that for an ideally conducting star, the stellar boundary condition requires that the toroidal field be very small and that most of the angular momentum of the matter in the funnel be propagated back to the disk rather than to the star. Li & Wickramasinghe (1997) show that a consequence of this result is that the true inner disk radius $`R_{\mathrm{in}}`$ could be much smaller than $`R_0`$, with $`R_{\mathrm{in}}/R_0`$ as low as $`0.1-0.2`$ in the slow rotator case, and the values of the critical fastness parameter at which the net torque vanishes would also change. This “torqueless” suggestion was criticized by Wang (1997), who points out that a small toroidal field is enough to produce a significant torque on the star, and that the viscous stress is unable to carry angular momentum back outwards from the inner edge of the accretion disk.
This paper is organized as follows. In section 2, based on Wang’s (1997) arguments, we calculate the lower limit of $`R_{\mathrm{in}}`$ that is required for efficient angular momentum transfer, and find it to be at least $`0.8R_0`$, implying that $`R_0`$ is a good indicator of the inner edge of the disk. In section 3 we discuss the possibility of inefficient angular momentum transfer, which is likely to occur in LMXBs, and the implications for the resulting variation in the fastness parameter. We conclude in section 4.
## 2 The inner disk radius
An estimation of $`R_0`$ can be obtained by setting the magnetic stress equal to the rate at which angular momentum is removed in the disk (Wang 1987)
$$(R^2B_\varphi B_\mathrm{z})_0=[\dot{M}\frac{\mathrm{d}(R^2\mathrm{\Omega })}{\mathrm{d}R}]_0,$$
(5)
where $`\mathrm{\Omega }=\mathrm{\Omega }(R)`$ is the angular velocity of disk plasma. Following Li & Wang (1996), we assume that $`\mathrm{\Omega }(R)`$ reaches its maximum at $`R=R_0`$, i.e., $`(\mathrm{d}\mathrm{\Omega }/\mathrm{d}R)_0=0`$, with $`\mathrm{\Omega }(R_0)\simeq \mathrm{\Omega }_{\mathrm{k0}}`$, and rewrite equation (5) as
$$\frac{B_{\varphi 0}B_{\mathrm{z}0}}{\dot{M}(GMR_0)^{1/2}}=\frac{2}{R_0^3}.$$
(6)
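The calculus step from Eq. (5) to Eq. (6) is elementary; the following symbolic sketch (symbol names are ours, and only the magnitude of the relation is checked) confirms it:

```python
# Symbolic check: d(R^2 Omega)/dR evaluated at R_0 with dOmega/dR = 0 and Omega Keplerian.
import sympy as sp

R, R0, G, M, Mdot = sp.symbols('R R_0 G M Mdot', positive=True)
Omega = sp.Function('Omega')

lhs = sp.diff(R**2 * Omega(R), R)                       # d(R^2 Omega)/dR
lhs = lhs.subs(sp.Derivative(Omega(R), R), 0)           # stationarity of Omega at R_0
lhs = lhs.subs(Omega(R), sp.sqrt(G * M / R**3)).subs(R, R0)

rhs_of_6 = 2 * Mdot * sp.sqrt(G * M * R0) / R0**3        # Eq. (6) rearranged for B_phi0*B_z0
print(sp.simplify(Mdot * lhs / R0**2 - rhs_of_6))        # 0, so Eq. (6) follows from Eq. (5)
```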
The torque $`N_{\mathrm{in}}`$ in equation (1) in fact contains two components: the real material torque $`N_\mathrm{f}`$ in the funnel flow, and the magnetospheric torque $`N_{\mathrm{mag}}`$ arising from the shearing motion between the corotating magnetosphere and the non-Keplerian disk boundary layer, i.e.,
$`N_{\mathrm{in}}`$ $`=`$ $`N_\mathrm{f}+N_{\mathrm{mag}}`$ (7)
$`=`$ $`{\displaystyle \int _{kR_0}^{R_0}}\frac{\mathrm{d}\dot{M}}{\mathrm{d}R}R^2\mathrm{\Omega }dR-{\displaystyle \int _{kR_0}^{R_0}}B_\varphi B_\mathrm{z}R^2dR,`$
where $`k=R_{\mathrm{in}}/R_0`$.
We first consider the second term on the right-hand side of equation (7). In estimating the toroidal magnetic field $`B_\varphi `$, we assume as usual that $`B_\varphi `$ is generated by the shearing between the disk and the stellar magnetosphere, and the growth of $`B_\varphi `$ is limited by diffusive decay due to turbulent mixing within the disk (e.g., Campbell 1992; Yi 1995; Wang 1995)
$$\frac{B_\varphi }{B_\mathrm{z}}\simeq \frac{\mathrm{\Omega }_\mathrm{s}-\mathrm{\Omega }}{\mathrm{\Omega }}.$$
(8)
Several different phenomenological descriptions of the disk-magnetosphere interaction (Wang 1995) make little practical difference to our conclusions. Assuming that $`B_\mathrm{z}`$ follows a dipolar field and using equations (6) and (8), we may write the magnetospheric spin-up torque in dimensionless form as
$$\xi _{\mathrm{mag}}=\frac{N_{\mathrm{mag}}}{N_0}=\frac{4\omega _\mathrm{s}^2}{3(1-\omega _\mathrm{s})}\int _{k^{3/2}\omega _\mathrm{s}}^{\omega _\mathrm{s}}[1-(\mathrm{\Omega }_\mathrm{s}/\mathrm{\Omega })]y^{-3}dy,$$
(9)
where $`y=(R/R_\mathrm{c})^{3/2}`$.
In the first term on the right-hand side of equation (7), $`\frac{\mathrm{d}\dot{M}}{\mathrm{d}R}`$ denotes the vertical mass loss rate from the boundary layer. This can be described by the equation (e.g., Ghosh & Lamb 1979a)
$$\frac{\mathrm{d}\dot{M}}{\mathrm{d}R}=4\pi R\rho c_\mathrm{s}g(R),$$
(10)
scaling the vertical flow velocity from the boundary layer in terms of the local sound speed $`c_\mathrm{s}`$ and introducing a “gate” function $`g(R)`$ describing the radial profile of mass loss out of the boundary layer ($`\rho `$ is the mass density). To keep things simple, we assume $`g(R)=0`$ outside $`R_0`$, and $`g(R)=1`$ when $`R_{\mathrm{in}}\leq R\leq R_0`$. Assuming that the disk plasma around $`R_0`$ is thermally supported and optically thick to Thomson scattering (appropriate for a binary X-ray pulsar), from the $`\alpha `$-disk model of Shakura & Sunyaev (1973), we obtain $`\frac{\mathrm{d}\dot{M}}{\mathrm{d}R}\propto R^{-1}`$ or $`\frac{\mathrm{d}\dot{M}}{\mathrm{d}R}=-\frac{\dot{M}}{(\mathrm{log}k)R}`$, which leads to
$$\xi _\mathrm{f}=\frac{N_\mathrm{f}}{N_0}=-\frac{1}{(\mathrm{log}k)}\int _{kR_0}^{R_0}\left(\frac{\mathrm{\Omega }}{\mathrm{\Omega }_{\mathrm{k0}}}\right)\frac{R}{R_0^2}dR.$$
(11)
For generality, we have also assumed $`\frac{\mathrm{d}\dot{M}}{\mathrm{d}R}=const`$, and found that the results do not change significantly.
The value of $`k`$ can be obtained by setting $`\xi =\xi _\mathrm{f}+\xi _{\mathrm{mag}}=1`$. From equations (9) and (11) it can be seen that the magnitude of $`k`$ depends on the detailed profile of $`\mathrm{\Omega }(R)`$ between $`R_{\mathrm{in}}`$ and $`R_0`$, which is unknown before solving the magnetohydrodynamics (MHD) equations. A lower limit of $`k`$, however, can be estimated in the following way. Because of the steep radial dependence of the stellar magnetic field ($`B_\mathrm{z}\propto R^{-3}`$), the slope of the angular velocity $`\frac{\mathrm{d}\mathrm{\Omega }}{\mathrm{d}R}`$ should increase from 0 at $`R_0`$ to a large value at $`R_{\mathrm{in}}`$. This requires that, in the $`\mathrm{\Omega }(R)\mathrm{vs}.R`$ diagram (Figure 1a), $`\mathrm{\Omega }(R)`$ should always lie above the dashed line which represents a linear relation between $`\mathrm{\Omega }`$ and $`R`$. As we know from equations (9) and (11), the larger $`\mathrm{\Omega }`$, the larger the $`\xi `$, and the smaller the $`\delta `$ required. So the smallest $`k`$ corresponds to the extreme case in which $`\mathrm{\Omega }(R)`$ decreases linearly from $`\mathrm{\Omega }_{\mathrm{k0}}`$ to $`\mathrm{\Omega }_\mathrm{s}`$ between $`R_0`$ and $`R_{\mathrm{in}}`$, i.e.,
$$\mathrm{\Omega }(R)=aR+b,$$
(12)
where
$$a=\frac{\mathrm{\Omega }_{\mathrm{k0}}}{R_0}\frac{(1-\omega _\mathrm{s})}{(1-k)},$$
(13)
and
$$b=\mathrm{\Omega }_{\mathrm{k0}}\frac{(\omega _\mathrm{s}-k)}{(1-k)}.$$
(14)
In Figure 1b we plot the calculated values of $`k`$ as a function of the fastness parameter $`\omega _\mathrm{s}`$ in the dashed curve. It is seen that $`k`$ naturally reaches 1 when $`\omega _\mathrm{s}=1`$, but remains $`\gtrsim 0.8`$ within a large range of $`\omega _\mathrm{s}`$. The solid curves in Figures 1a and 1b represent an example with more practical boundary conditions, which are always located above the dashed curves.
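The condition $`\xi _\mathrm{f}+\xi _{\mathrm{mag}}=1`$ can be solved numerically; the following is a minimal sketch (not the original calculation) assuming the linear $`\mathrm{\Omega }(R)`$ profile of Eqs. (12)-(14) and working in units $`\mathrm{\Omega }_{\mathrm{k0}}=R_0=1`$; all function names and the sampled values of $`\omega _\mathrm{s}`$ are ours.

```python
# Lower limit on k = R_in/R_0 from xi_f(k) + xi_mag(k) = 1, Eqs. (9), (11)-(14).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def xi_total(k, omega_s):
    a = (1.0 - omega_s) / (1.0 - k)                      # Eq. (13)
    b = (omega_s - k) / (1.0 - k)                        # Eq. (14)
    Omega = lambda R: a * R + b                          # Eq. (12): Omega(1)=1, Omega(k)=omega_s

    # material (funnel) torque, Eq. (11)
    xi_f = -quad(lambda R: Omega(R) * R, k, 1.0)[0] / np.log(k)

    # magnetospheric torque, Eq. (9); y = (R/R_c)^{3/2}, so R(y) = (y/omega_s)^{2/3}
    integrand = lambda y: (1.0 - omega_s / Omega((y / omega_s) ** (2.0 / 3.0))) / y**3
    xi_mag = 4.0 * omega_s**2 / (3.0 * (1.0 - omega_s)) * \
        quad(integrand, k**1.5 * omega_s, omega_s)[0]
    return xi_f + xi_mag

for omega_s in (0.1, 0.3, 0.5, 0.7, 0.9):
    k = brentq(lambda k: xi_total(k, omega_s) - 1.0, 1e-3, 1.0 - 1e-6)
    print(f"omega_s = {omega_s:.1f}  ->  k = R_in/R_0 ~ {k:.2f}")
```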
## 3 The fastness parameter
Our calculations in the last section are based on the assumption that a torque of order $`\dot{M}R_0^2\mathrm{\Omega }_{\mathrm{k0}}`$ can be efficiently transmitted to the star by both the gas funnel flow and the magnetic stress. It has been suggested (Shu et al. 1994; Ostriker & Shu 1995; Li, Wickramasinghe, & Rüdiger 1996) that a star may accrete matter from a magnetically truncated Keplerian disk without experiencing any spin-up torque, because it would require the surface magnetic field to have a large azimuthal twist which would lead to dynamical instabilities. Wang (1997) estimated the azimuthal pitch $`\gamma _{*}`$ at the stellar surface required to transmit this torque to be
$$\gamma _{*}\equiv \frac{B_\varphi }{B_\mathrm{z}}\simeq \frac{\xi }{2^{3/2}}\left(\frac{R_{*}}{R_0}\right)^{3/2}\frac{R_0}{\delta },$$
(15)
where the subscript asterisk denotes quantities at the stellar surface. Since MHD instabilities are expected to occur only when $`|\gamma _{*}|\gtrsim 1`$, one sees that, with $`\delta \lesssim 0.2R_0`$ and $`\xi \simeq 1`$, $`\gamma _{*}`$ could be much less than unity if $`R_{*}\ll R_0`$. This condition is satisfied by binary X-ray pulsars, which generally possess surface magnetic fields as strong as $`10^{12}-10^{13}`$ G, and hence $`R_0\sim 10^8-10^9`$ cm with the mass accretion rates of $`10^{16}-10^{18}\mathrm{g}\mathrm{s}^{-1}`$, much larger than $`R_{*}\simeq 10^6`$ cm. Thus the standard accretion disk models based on Ghosh & Lamb (1979a, 1979b) are suitable for these systems, and the value of the critical fastness parameter $`\omega _\mathrm{c}`$, according to Wang (1995) and Li & Wang (1996), may lie between $`0.7`$ and 0.95.
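To illustrate the scales involved, the following sketch evaluates Eq. (15) for two representative cases; the parameter values ($`\xi \simeq 1`$, $`\delta \simeq 0.2R_0`$, and the chosen radii) are illustrative assumptions of ours, not fits.

```python
# Rough evaluation of the surface pitch gamma_* of Eq. (15).
def gamma_star(R_star, R_0, xi=1.0, delta_over_R0=0.2):
    return xi / 2**1.5 * (R_star / R_0)**1.5 / delta_over_R0

print(gamma_star(R_star=1e6, R_0=1e8))   # X-ray pulsar: ~2e-3, well below unity
print(gamma_star(R_star=1e6, R_0=2e6))   # LMXB with R_0 close to R_*: ~0.6, of order unity
```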
There also exists the possibility that an accreting star is unable to absorb the excess angular momentum at $`R_0`$. This may occur in accreting neutron stars in LMXBs, in which the condition $`|\gamma _{*}|\ll 1`$ may not hold unless $`\xi \simeq 0`$, because the surface field strengths in these neutron stars are so low ($`10^8-10^9`$ G or lower) that the accretion disk usually extends close to the star’s surface, i.e., $`R_0\sim R_{*}`$. Since MHD instabilities prevent transmission of the torque $`N_{\mathrm{in}}`$, in this case the star only experiences the magnetic torque $`N_{\mathrm{out}}`$, and the critical fastness parameter decreases to $`\sim 0.4-0.6`$ (Li & Wickramasinghe 1997). Since the angular momentum of the material at $`R_0`$ cannot be transported from the boundary layer to the outer parts of the disk by either viscous stress or a stellar magnetic field threading the inner region of the disk (Wang 1997), it is more likely to be removed by a magnetocentrifugal wind. During wind mass loss, some angular momentum of the star could also be lost, equivalent to an extra braking torque exerted on the star, so that the critical fastness parameter would be even smaller.
Observational evidence for the above arguments comes from kiloHertz quasi-periodic oscillations (kHz QPOs) recently discovered with the Rossi X-ray Timing Explorer (RXTE) in some 16 neutron star LMXBs (see van der Klis 1997 for a review). These kHz QPOs are characterized by their high levels of coherence (with quality factors $`Q\equiv \nu /\mathrm{\Delta }\nu `$ up to $`\sim 200`$), large rms amplitudes (up to several $`10\%`$), and wide span of frequencies ($`300-1200`$ Hz), which, in most cases, are strongly correlated with X-ray fluxes. In many cases, two simultaneous kHz peaks are observed in the power spectra of the X-ray count rate variations, with the separation frequency roughly constant (e.g., Strohmayer et al. 1996; Ford et al. 1997; Wijnands et al. 1997). Sometimes a third kHz peak is detected in a few atoll sources during type I X-ray bursts at a frequency equal to the separation frequency of the two peaks (Strohmayer et al. 1996; Ford et al. 1997) or twice that (Wijnands & van der Klis 1997; Smith, Morgan & Bradt 1997). This strongly suggests a beat-frequency interpretation, with the third peak at the neutron star spin frequency (or twice that), the upper kHz peak at the Keplerian orbital frequency at a preferred radius around the neutron star, and the lower kHz peak at the difference frequency between them (the applicability of such a model is less clear when the difference frequency is not constant, as in the Z source Sco X-1 and possibly the atoll source 4U 1608-52; see van der Klis et al. (1996) and Méndez et al. (1998)). If the upper kHz QPOs are associated with the orbital frequency at the magnetospheric radius or the sonic point in the accretion disk (Strohmayer et al. 1996; Miller et al. 1998; see also Lai 1998), one would expect that the disk can extend towards the marginally stable orbit of a neutron star.
As pointed out by Zhang et al. (1997) and White & Zhang (1997), these neutron stars may have accreted a considerable amount of mass, so they possibly have reached the equilibrium spin. The measured spin and QPO frequencies of these neutron stars reveal a critical fastness parameter $`\omega _\mathrm{c}`$ lying between $`\sim `$ 0.2 and 0.7. Additionally, considering the spin and orbital evolution in low-mass binary pulsars which are thought to originate from LMXBs, Burderi et al. (1996) also found a small value of $`\omega _\mathrm{c}\sim 0.1`$ from the observed relation between spin period, magnetic field and orbital period. Though subject to both observational and theoretical uncertainties, these results do indicate a smaller fastness parameter than predicted by current theory for binary X-ray pulsars.
## 4 Conclusions
We summarize our results as follows: (1) The inner disk radius $`R_{\mathrm{in}}`$ is always close to the magnetospheric radius $`R_0`$. (2) For binary X-ray pulsars, the critical fastness parameter lies in the range $`0.7-0.95`$; neutron stars in LMXBs may not absorb angular momentum flux at $`R_0`$ as efficiently as X-ray pulsars, resulting in a critical fastness parameter considerably smaller than in X-ray pulsars.
This work was supported by National Natural Science Foundation of China.
|
no-problem/9901/cond-mat9901061.html
|
ar5iv
|
text
|
# Solidity of viscous liquids II: Anisotropic flow events
## Abstract
Recent findings on the displacements in the surroundings of isotropic flow events in viscous liquids \[Phys. Rev. E, in press\] are generalized to the anisotropic case. Also, it is shown that a flow event is characterized by a dimensionless number reflecting the degree of anisotropy.
In a previous paper, henceforth referred to as (I) , it was argued that viscous liquids close to the glass transition – where viscosity is roughly $`10^{15}`$ times larger than that of, e.g., room temperature water – are more like solids than like the less viscous liquids studied in standard liquid theory . The idea that viscous liquids are qualitatively different from less-viscous liquids is, of course, not new. It is a rather obvious idea, given the following fact. While “ordinary” less-viscous liquids have relaxation times in the picosecond range, i.e., comparable to typical phonon times, viscous liquids have much longer average relaxation times (roughly given by Maxwell’s expression $`\tau =\eta /G_{\mathrm{}}`$, where $`\eta `$ is the viscosity and $`G_{\mathrm{}}`$ the instantaneous shear modulus). This decoupling of relaxation times from phonon times is also reflected in a decoupling of diffusion constants: For less-viscous liquids the molecular diffusion constant $`D`$ is of the same order of magnitude as the transverse momentum diffusion constant, the kinematic viscosity of the Navier-Stokes equation, $`\nu \propto \eta `$. However, with increasing viscosity $`D`$ decreases (roughly as $`\eta ^{-1}`$ from a simple Stokes-Einstein type argument) while $`\nu `$ increases. At the glass transition $`\nu `$ is about $`10^{30}`$ times larger than $`D`$.
The average relaxation time increases dramatically upon cooling. Goldstein has argued that there is a gradual onset of typical viscous-liquid behavior already when $`\tau `$ becomes longer than about 1 nanosecond. As noted first by Angell , this is roughly at the temperature below which ideal mode-coupling theory breaks down. It is generally believed that in viscous liquids “real” molecular motion beyond pure vibration takes place on the time scale defined by $`\tau `$, although inhomogeneities are likely to give rise to faster relaxations in some parts of the liquid . “Real” motion is rare because it involves overcoming energy barriers large compared to $`k_BT`$ . The transition itself is a jump between two potential energy minima, a process that lasts just a few picoseconds. One thus arrives at the following picture: Most molecular motion in a viscous liquid is purely vibrational, real motion is rare and takes place via sudden molecular rearrangements. It is interesting to note that this old picture has never really been challenged (while the nature of the energy barrier to be overcome in the transition is still being debated ). In fact, extensive computer simulations have now definitively confirmed the picture .
The sudden molecular rearrangements are referred to as “flow events” below. It is generally believed that flow events are localized in the sense that only a limited number of molecules experience large displacements, while all other molecules are only slightly displaced; the large-displacement molecules involved in a flow event thus define a “region” of the liquid. Because flow events are rare and molecules most of the time just vibrate, a viscous liquid looks much like a solid. In (I) the small displacements in the surroundings of a flow event were calculated from solid elasticity theory assuming spherical symmetry. It was shown that the displacement $`u`$ in the surroundings of a region is given by (where $`r`$ is the distance to the region)
$$u\propto \frac{1}{r^2}.$$
(1)
The displacement is purely radial. However, spherical symmetry of flow events is not realistic; when molecules move from one potential energy minimum to another there must be some violation of spherical symmetry, even if the molecules were spheres with only radially dependent interactions. One is thus led to ask whether Eq. (1) and its consequences remain valid in the anisotropic case.
As in (I) the starting point is the solidity of viscous liquids as reflected in the slow “real” motion of the molecules. This fact implies that the average force on any molecule is extremely close to zero. In a continuum description, the average force per unit volume is the divergence of the stress tensor $`\sigma _{ij}`$, where $`i,j=1,2,3`$ are spatial indices. The condition of average zero force - elastic equilibrium - is (where $`\partial _i=\partial /\partial x_i`$ and one sums over repeated indices)
$$\partial _i\sigma _{ij}=0.$$
(2)
Linear elasticity theory may be applied to the region surroundings, because the molecular displacements in these surroundings are small and because there is elastic equilibrium in the liquid before as well as after a flow event. Most likely, there are large “frozen-in” stresses in the liquid, but the changes in the stress tensor induced by one flow event are small, except in the region itself. Now, define a sphere centered at the region, large enough that outside the sphere the flow event induced displacements and stress tensor changes are so small that linear elasticity theory applies for the changes. Imagine all molecules within the sphere being removed and the forces from these molecules acting on the molecules outside the sphere replaced by external forces applied to the surface of the sphere. This is done before as well as after the flow event. The flow event induced displacements of the surroundings can then be calculated from the change of these external forces. To do this we first consider the distance dependence of displacements in an elastic solid when an external force is applied to just one point. There is then a continuous flow of momentum into the solid at that point. The stress tensor is the momentum current and the mechanical equilibrium condition Eq. (2) is the zero-divergence equation reflecting momentum conservation. By considering Gauss surfaces at various distances from the point, one concludes from Eq. (2) that the stress tensor decays as $`r^{-2}`$, where $`r`$ is the distance to the point. Since the stress tensor is formed from first order spatial derivatives of the displacement $`u`$, we conclude that $`u\propto r^{-1}`$ . This result applies also when several external forces are applied to the solid, as long as these forces do not sum to zero. In our case, the external forces replacing the forces from the molecules within the sphere do sum to zero: The forces from the molecules outside the sphere on those inside must sum to zero - otherwise the latter molecules would start to move. By Newton’s third law, the sum of the forces acting from the molecules inside the sphere on those outside - the forces that are replaced by external forces - must therefore also sum to zero. When the external forces sum to zero, the stress tensor does not decay as $`r^{-2}`$ but as $`r^{-3}`$ (the mathematics behind this fact is the same as that implying that the electric field from a charge distribution with zero total charge decays as $`r^{-3}`$ and not as $`r^{-2}`$). Consequently, since the stress tensor is given as first order derivatives of the displacement vector we arrive at Eq. (1), which is now to be understood as valid for each of the three components of the displacement vector. In particular, we note that the predictions of (I) for the displacement and rotation angle distributions in the surroundings of a flow event ($`P(u)\propto u^{-5/2}`$ and $`P(\varphi )\propto \varphi ^{-2}`$) are valid also in the anisotropic case. The first prediction has recently been confirmed in computer simulations of a binary Lennard-Jones mixture , the second is consistent with the small rotation angle distribution tentatively inferred from NMR experiments by Böhmer and Hinze on glycerol, $`P(\varphi )\propto 1/\mathrm{sin}^2(\varphi )`$ .
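The $`P(u)\propto u^{-5/2}`$ prediction follows from Eq. (1) for flow events at random positions; a minimal Monte-Carlo sketch of this (arbitrary units, and the cutoffs $`r_{\mathrm{min}}`$, $`r_{\mathrm{max}}`$ are our illustrative choices) is:

```python
# Sample distances uniformly in a spherical volume, set u = 1/r^2, and check the tail exponent.
import numpy as np

rng = np.random.default_rng(0)
r_min, r_max, N = 1.0, 100.0, 10**6
r = (r_min**3 + (r_max**3 - r_min**3) * rng.random(N)) ** (1.0 / 3.0)   # P(r) ~ r^2
u = 1.0 / r**2

hist, edges = np.histogram(np.log(u), bins=40)
centers = 0.5 * (edges[1:] + edges[:-1])
# For P(u) ~ u^{-5/2}, the density per unit log(u) goes as u^{-3/2}, i.e. slope -1.5 here.
slope = np.polyfit(centers[hist > 100], np.log(hist[hist > 100]), 1)[0]
print(f"fitted slope: {slope:.2f} (expected -1.5, i.e. P(u) ~ u^-5/2)")
```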
We now show that it is possible to characterize flow events according to their anisotropy. The elastic equilibrium in the surroundings of a flow event region before as well as after the flow event implies that the stress tensor change, $`\mathrm{\Delta }\sigma _{ij}`$, has zero divergence (i.e., obeys Eq. (2)). Since $`𝐮`$ and $`\mathrm{\Delta }\sigma _{ij}`$ are linked by linear elasticity theory one has
$$\nabla ^2(\nabla \cdot 𝐮)=0.$$
(3)
This equation can be solved asymptotically for $`r\to \mathrm{\infty }`$: Equation (1) implies $`\nabla \cdot 𝐮\propto r^{-3}`$. Any real solution to the Laplace equation decaying as $`r^{-3}`$ can be written as $`\alpha P_2(\theta ,\varphi )/r^3`$, where $`\alpha \geq 0`$ is a constant and $`P_2`$ is a normalized linear combination of second order spherical harmonics: $`P_2=\sum _{m=-2}^{m=2}c_mY_{2m}`$, where $`c_{-m}=c_m^{*}`$ and $`\sum _{m=-2}^{m=2}|c_m|^2=1`$. This expression applies far away from the flow event: $`r_0\ll r`$, where $`r_0`$ is the region size. On the other hand, the expression does not apply beyond the “solidity length” $`l`$ discussed in (I), where essentially no flow event induced displacements are expected. Since $`\nabla \cdot 𝐮`$ is dimensionless $`\alpha `$ has dimension $`(\mathrm{length})^3`$. Writing $`\alpha =a/\rho _0`$, where $`\rho _0`$ is the average (number) density and $`a`$ is dimensionless, we have
$$\nabla \cdot 𝐮=a\frac{P_2(\theta ,\varphi )}{\rho _0r^3}\qquad (r_0\ll r\ll l).$$
(4)
The parameter $`a`$ is a measure of the flow event anisotropy, the case $`a=0`$ corresponding to isotropic flow events.
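That solutions of the form $`P_2/r^3`$ indeed satisfy the Laplace equation can be checked symbolically; the sketch below (our own check, using the real $`m=0`$ harmonic written in Cartesian coordinates) illustrates this:

```python
# Verify that (2z^2 - x^2 - y^2)/r^5, proportional to P_2(cos theta)/r^3, is harmonic.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
f = (2*z**2 - x**2 - y**2) / r**5
laplacian = sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)
print(sp.simplify(laplacian))    # 0
```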
In a homogeneous system described by linear elasticity theory the density change following an elastic displacement is equal to $`-\rho _0\nabla \cdot 𝐮`$ . Thus, if a viscous liquid were homogeneous the density change in the surroundings of a flow event would be given by Eq. (4) (looking like an electronic d-orbital). However, the density of a viscous liquid is not quite spatially constant. As is easy to show, the density change induced by a flow event has an extra term, $`-𝐮\cdot \nabla \rho `$, coming from the fact that the whole density profile is displaced. Far away from the flow event this extra density change term dominates over the ($`\nabla \cdot 𝐮`$)-term.
The flow event induced changes given by Eqs. (1) and (4) were calculated from the fact that there is a linear relation between displacement and stress tensor change. Therefore, these results are valid independent of the chemical nature of the liquid. One possible objection to these results is that dynamic inhomogeneities most likely give rise to spatially varying elastic constants. However, since we are mainly interested in the high viscosity limit, where the solidity length is large, these inhomogeneities are not expected to have any significant effect on the average displacements in the surroundings of a flow event (the “long wave length” limit). Finally, we note that the sharp distinction between “real” motion and vibration is somewhat blurred by the fact that “real” motion takes place not only in the region itself in the form of large jumps but also in the surroundings in the form of small jumps. However, as may be shown from Eq. (1), the dominant contribution to the mean-square displacement of a molecule comes from the “real” motion of molecules inside regions.
To summarize, arguing from the “solidity” of viscous liquids, the flow induced displacements in the surroundings have been calculated for the general, anisotropic case. It has been shown that the $`r`$-dependence of these displacements is the same as that induced by isotropic flow events. A dimensionless number $`a`$ has been introduced as a measure of the degree of anisotropy of a flow event.
###### Acknowledgements.
The author wishes to thank Austen Angell and Ralph Chamberlin for numerous stimulating discussions and also for their most kind hospitality during the author’s stay at Arizona State University, where parts of this work were carried out. This work was supported by the Danish Natural Science Research Council.
|
no-problem/9901/astro-ph9901409.html
|
ar5iv
|
text
|
# THE THIRD STROMLO SYMPOSIUM – THE GALACTIC HALO: BRIGHT STARS & DARK MATTER
## 1. Luminous Halo
The stellar halo of the Galaxy contains only a small fraction of its total luminous mass, but the abundances and kinematics of halo stars, globular clusters, and the dwarf spheroidal satellites contain imprints of the formation of the entire Milky Way (Eggen, Lynden–Bell & Sandage 1962; Searle & Zinn 1978; Baugh, Cole & Frenk 1996; Wyse, White).
Abundances and ages. Long-term systematic programs on abundances and kinematics of halo stars (Beers, Carney) show that very few really metal–poor stars (\[Fe/H\] $`<-2.0`$) occur in the halo. Below \[Fe/H\]$`=-2.0`$ the numbers decrease by a factor of 10 for every dex in \[Fe/H\]. Despite much effort, there is no evidence for stars significantly more metal poor than \[Fe/H\]$`=-4.0`$ (Beers, Norris). Stars with \[Fe/H\]$`<-3.0`$ display a large range in abundances at fixed \[Fe/H\]. While those of C and N may result from internal pollution (Fujimoto), that of the heavy-neutron-capture elements probably reflects the shot noise of individual enrichment events by early Type II supernovae, before the Type Ia’s contribute. This suggests that the most metal-poor stars formed in an interval $`<`$1 Gyr.
New color-magnitude diagrams, an improved metallicity calibration (Carretta & Gratton 1997), and further work on theoretical stellar models show that (i) nearly all globular clusters have a remarkably small age-spread, of about 1 Gyr or less, (ii) the ages of the halo clusters do not correlate with \[Fe/H\], and (iii) there is a hint that the clusters at the largest galactocentric radii might be slightly younger (Piotto, Vandenberg, Sarajedini). The debate on the absolute ages of the globular clusters continues (Mould 1998), but they agree with those of the oldest and most metal-poor field halo stars, based on HIPPARCOS calibrations and the Th/Eu radioactive clock (Norris). While the dwarf spheroidal companions (Da Costa) and the Magellanic Clouds (Olszewski) as well as M31 (Freeman) and M33 (Sarajedini) all have experienced different formation histories, it appears that the oldest populations invariably have the same age, and that the first generation of stars was formed synchronously throughout the entire Local Group around 12–14 Gyr ago.
Kinematics and substructure. HIPPARCOS proper motions and available radial velocities allow analysis of the space motions of a few hundred local halo stars. There are Galactic disk stars with $`-1.6<`$ \[Fe/H\] $`\leq -1.0`$, but there is no sign of the disk for \[Fe/H\]$`<-1.6`$, and little evidence for clumping in velocities (Chiba). It is not easy to interpret the results for the HIPPARCOS sample, let alone extrapolate them to larger distances (Moody & Kalnajs). Hints for velocity clumping in high-$`|z|`$ samples of field halo giants can be found in many studies in the past decade (Majewski). Some satellite dwarf galaxies may be on the same orbit, i.e., are parts of a ‘ghostly stream’ (Lynden–Bell). The orbits of the globular clusters display similar correlations (Majewski).
Tidal stripping in a spherical potential results in a coherent structure in $`𝐫`$ and $`𝐯`$ space. This applies at large galactocentric radii, and gives a good description of the tidal tails of globular clusters and the Magellanic Stream (Johnston). Tidal disruption in the flattened inner halo leaves a signature in velocity space, but not in configuration space. Picking out disrupted streams is possible with good kinematic data which includes astrometry. The properties of the progenitor can then be inferred, and in this way one can hope to reconstruct the merging history of our own Galaxy (Helmi & White, Harding et al.).
There was much interest in the details of the encounter of the Large and Small Magellanic Clouds (LMC/SMC) with the Milky Way (MW). There were talks on observations of the gas (Putman) and stars (Majewski) of the Magellanic Stream, on theoretical models of the encounter (Weinberg), and on the possible involvement of the Sgr dwarf (Zhao). The total mass of the LMC may be as large as $`2\times 10^{10}M_{\odot }`$, in which case the LMC can induce the observed warp as well as lopsidedness in the Milky Way. In turn, the MW tidal field puffs up the LMC, generating a stellar halo which might contribute significantly to the microlensing event rate (Weinberg, Olszewski).
Gas. The Leiden–Dwingeloo Survey (Hartmann & Burton 1997), its southern extension with the radio telescope in Villa Elisa, Argentina, and the Parkes HIPASS survey have resulted in major progress in the understanding of the high-velocity (halo) gas. A coherent picture is finally emerging in which (i) some HVC’s are connected to the HI in the Galactic disk, (ii) much of it is connected to the Magellanic Stream, with the beautiful HIPASS data of Putman et al. (1998) now also showing the leading arm predicted by numerical simulations of the encounter of the Magellanic Clouds with the Milky Way, and (iii) steady accretion of material either from the immediate surroundings of the Galaxy (Oort 1970), or from within the Local Group (Blitz). Absorption line studies and metallicity measurements are—at long last—starting to constrain the distances of individual clouds (Wakker, van Woerden).
## 2. Dark Halo
All measurements to date of the mass of the Galaxy are consistent with a model in which the mass distribution is essentially an isothermal sphere with a constant circular velocity $`v_c`$$`\sim `$ 180 km/s (Zaritsky). Out to a distance of $`\sim `$300 kpc this corresponds to a mass of about $`2\times 10^{12}M_{\odot }`$, and an average mass-to-light ratio of more than 100 in solar units. The uncertainties in the halo mass profile remain significant, even inside the orbit of the LMC.
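The quoted mass follows directly from the isothermal-sphere relation $`M(<r)=v_c^2r/G`$; a back-of-the-envelope check (numerical values are the ones quoted above) is:

```python
# Enclosed mass of an isothermal sphere with v_c ~ 180 km/s out to r ~ 300 kpc.
G = 6.674e-11                     # m^3 kg^-1 s^-2
M_sun, kpc = 1.989e30, 3.086e19   # kg, m
v_c, r = 180e3, 300 * kpc
M = v_c**2 * r / G
print(f"M(<300 kpc) ~ {M / M_sun:.1e} M_sun")   # ~2e12 M_sun, as quoted
```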
Microlensing. The 20 microlensing events seen towards the LMC indicate that point-like objects in the mass range $`10^{-7}<M/M_{\odot }<10^{-2}`$ can at most form a minor constituent of the Galactic halo (Alcock, Perdereau, Stubbs). This eliminates most objects of substellar mass, including planets as small as the Earth. If the LMC events are caused by massive compact halo objects (MACHOs), they must have masses in a rather narrow range around 0.5 $`M_{\odot }`$. The unknown binary fraction in the lens population remains a major uncertainty (Bennett). It is possible that the MACHOs are not in the dark halo at all. Flaring of the disk and/or the warp of the Milky Way, or the presence of another intervening object have been considered, but it now seems unlikely that these can provide the entire set of observed events. However, self-lensing by the LMC, in particular by its own stellar halo, may well be quite significant (Weinberg, Olszewski), leaving open the possibility that all the lenses are in the LMC itself. The next generation microlensing experiments should settle this issue (Stubbs).
Audit of the Universe. The density of luminous matter $`\mathrm{\Omega }_{\mathrm{lum}}`$ is $`\sim `$1/10 the baryon density $`\mathrm{\Omega }_{\mathrm{baryon}}`$, which is $`\sim `$1/10 the total matter density of the Universe $`\mathrm{\Omega }_M`$, which in turn is $`\sim `$1/3 the critical density (Turner). Recent results on distant supernovae (Schmidt et al. 1998; Perlmutter et al. 1999) suggest that the ‘dark energy’ $`\mathrm{\Omega }_\mathrm{\Lambda }`$ brings the total $`\mathrm{\Omega }_0=\mathrm{\Omega }_M+\mathrm{\Omega }_\mathrm{\Lambda }`$ to the critical value. The value of $`\mathrm{\Omega }_{\mathrm{baryon}}`$ is of the same order as $`\mathrm{\Omega }_{\mathrm{galaxies}}`$, suggesting that galactic halos consist of dark baryons, and that non-baryonic dark matter is needed at the scales of clusters and larger in order to provide $`\mathrm{\Omega }_M`$. If only one kind of dark matter exists on all scales, then the location of most of the baryons needs to be found.
Composition. Many talks addressed the nature of dark matter. The measured number density of low-mass objects shows that brown dwarfs cannot provide all the dark mass in the Galactic halo (Flynn, Tinney). The microlensing events towards the LMC suggest white dwarfs as a major constituent of the dark halo (Chabrier). The required numbers imply a special initial mass function early on, and a resulting metal-enrichment which is hard to hide (Gibson & Mould 1997). Neutron stars face similar problems (Silk).
Ultracold (4K) clumps of H<sub>2</sub>, with masses of $`10^{-3}M_{\odot }`$, diameters of 30 AU, and densities of $`10^{10}`$ cm<sup>-3</sup> have also been proposed as dark matter candidates. Current microlensing experiments are not sensitive to such clumps, but they have not been seen directly in the local interstellar medium. They might reside in a large outer disk (Pfenniger), or in a spheroidal halo (Walker). The ionized and evaporating outer envelopes of the clumps could cause extreme scattering events at radio wavelengths (Fiedler et al. 1987), but it is unclear whether the expected cosmic-ray induced gamma rays are seen (Chary; Dixon et al. 1998).
Ionized gas at $`10^6`$ K may be the best bet for baryonic dark matter attached to the Galaxy (Fukugita, Hogan & Peebles 1998). The current observational constraints on the total amount of this material (Maloney) are no stronger than they were 40 years ago (Kahn & Woltjer 1959), but ROSAT (Kalberla) and ongoing H$`\alpha `$ surveys (Bland–Hawthorn) will allow significant improvement soon.
Candidates for non-baryonic dark matter include massive neutrinos, axions, neutralinos, and primordial black holes (Sadoulet, Silk, Turner). Neutrinos are not favored by theories of structure formation, and the required mass of $`\sim `$25 eV may be ruled out. There is no experimental evidence for axions or neutralinos, but the laboratory sensitivity will improve considerably in the coming year.
## 3. Towards a stereoscopic census
A quantitative test of formation scenarios for the Galaxy is a crucial complement to high-redshift studies of galaxy formation. It will require accurate distances and kinematic data for large samples of halo objects. We can expect ongoing objective prism surveys (Christlieb) to identify $`\sim `$20000 candidate halo objects. Multi-object spectroscopy by 2DF or SLOAN will provide radial velocities and abundances (Pier). The Tycho Reference Catalog will contain proper motions of 2–3 mas/yr for 3 million stars to $`V`$$`\sim `$12 (Hoeg et al. 1998), allowing space motions to be determined for $`\sim `$30000 halo stars out to a few kpc.
The next major step will have to await NASA’s Space Interferometry Mission, and GAIA, a mission under study by ESA (Gilmore et al. 1998). SIM and GAIA will obtain proper motions and parallaxes to $`<`$10 micro-arcsecond accuracy. While SIM will be a pointed observatory, GAIA will measure all one billion stars to $`V`$$`\sim `$20, and obtain radial velocities and photometry for most objects brighter than $`V`$$`\sim `$17. GAIA will provide the full six-dimensional phase space information (positions and velocities) to $`\sim `$20 kpc from the Sun, and velocity information on individual stars to much larger distances. High-resolution ground-based spectroscopic follow-up will provide abundances throughout the halo, so that the entire formation history of the Galaxy can be reconstructed.
## References
Baugh C.M., Cole S., Frenk C.S., 1996, MNRAS, 283, 1361
Carretta E., Gratton R.G., 1997, A&AS, 121, 95
Dixon D., et al. 1998, New Astro, 3, 539
Eggen O., Lynden–Bell D., Sandage A.R., 1962, ApJ, 136, 748
Fiedler R.L., Dennison B., Johnston K.J., Hewish A., 1987, Nature 326, 675
Fukugita M., Hogan C.J., Peebles P.J.E., 1998, ApJ, 503, 518
Gibson B.K., Mould J.R., 1997, ApJ, 482, 98
Gilmore G., et al., 1998, in Astronomical Interferometry, SPIE Proc. 3350, ed. R.D. Reasenberg, 541–550
Hartmann D., Burton W.B., 1997, Atlas of Galactic Neutral Hydrogen (Cambridge Univ. Press)
Hoeg E., et al., 1998, AA, 335, L65
Kahn F.D., Woltjer L., 1959, ApJ, 130, 705
Mould J.R., 1998, Nature, 395, Supp., A22
Oort J.H., 1970, AA, 7, 381
Perlmutter S., et al., 1999, ApJ, in press (astro-ph/9812133)
Putman M.E., Gibson B.K., Staveley–Smith L., et al., 1998, Nature, 394, 752
Schmidt B.P., et al., 1998, ApJ, 507, 46
Searle L., Zinn R., 1978, ApJ, 225, 357
|
no-problem/9901/math9901106.html
|
ar5iv
|
text
|
# UNCOUNTABLY MANY ARCS IN 𝑆³ WHOSE COMPLEMENTS HAVE NON-ISOMORPHIC, INDECOMPOSABLE FUNDAMENTAL GROUPS
## 1. Introduction
At the 1996 Workshop in Geometric Topology F. D. Ancel posed the following questions:
###### Question 1.1.
Let $`A`$ be the Fox-Artin arc in $`S^3`$ which is pictured in Figure 1. Is $`\pi _1(S^3-A)`$ indecomposable with respect to free products?
###### Question 1.2.
Are there infinitely (uncountably?) many wild arcs $`A_i`$ in $`S^3`$ such that $`\pi _1(S^3-A_i)`$ and $`\pi _1(S^3-A_j)`$ are non-isomorphic for $`i\neq j`$?
Fox and Artin proved that $`\pi _1(S^3-A)`$ is non-trivial. ($`A`$ is actually the mirror image of their Example 1.1.) At the workshop Ancel remarked that an incorrect proof that it is indecomposable had been published by Rosłaniec . He also noted that an affirmative answer to Question 1.1 would give an affirmative answer to the countable case of Question 1.2 by concatenating finitely many copies of $`A`$; the resulting groups are free products of copies of $`\pi _1(S^3-A)`$ and so would be non-isomorphic \[9, Vol. II, p. 27\]. These examples would have a finite but unbounded number of wild points.
In this paper we answer these two questions in the affirmative. In particular, regarding Question 1.2 we construct an uncountable family of arcs $`A_i`$ such that the fundamental groups $`\pi _1(S^3-A_i)`$ are non-isomorphic for distinct indices and also are indecomposable and non-trivial. Moreover each arc is wild precisely at its endpoints.
We remark that if the fundamental group of the complement of an arc in $`S^3`$ is non-trivial, then it is not finitely generated \[3, Corollary 2.6\].
Ancel also posed the following question, to which one can of course add the question of indecomposability. As of this writing these questions remain open, but it seems likely that affirmative answers could be obtained by the methods of this paper.
###### Question 1.3.
Let $`B`$ be the wild arc in the solid torus $`V`$ pictured in Figure 2. Suppose $`k_i:V\to S^3`$ is a knotted embedding such that $`\pi _1(S^3-k_i(V))`$ is not isomorphic to $`\pi _1(S^3-k_j(V))`$ for $`i\neq j`$. Is $`\pi _1(S^3-k_i(B))`$ not isomorphic to $`\pi _1(S^3-k_j(B))`$ for $`i\neq j`$?
The paper is organized as follows. In section 2 we give a criterion for the fundamental group of a non-compact 3-manifold to be indecomposable and non-trivial. In section 3 we prove that the exterior of the Fox-Artin arc satisfies this criterion. In section 4 we prove a lemma about embeddings of torus knot groups in torus knot groups. In section 5 we construct the uncountable family of arcs mentioned above and verify its properties.
The author thanks Bill Banks for drawing the Fox-Artin arc which is used in Figures 1, 2, and 3.
## 2. A Criterion for Indecomposability
Recall that a group $`G`$ is decomposable if it is a free product $`K*L`$, where $`K`$ and $`L`$ are non-trivial. $`G`$ is indecomposable if it is not decomposable.
###### Lemma 2.1.
Let $`\{H_k\}_{k\geq 0}`$ be a sequence of non-trivial, non-infinite-cyclic, indecomposable subgroups of $`G`$ such that $`H_k\subseteq H_{k+1}`$ for all $`k\geq 0`$ and $`G=\bigcup _{k=0}^{\mathrm{\infty }}H_k`$. Then $`G`$ is indecomposable.
###### Proof..
Suppose $`G=K*L`$, where $`K`$ and $`L`$ are non-trivial. Then no non-trivial element of $`K`$ is conjugate to an element of $`L`$. This can be seen as follows. Let $`N`$ be the normal closure of $`K`$ in $`G`$. Let $`p:G\to G/N`$ be the natural projection. Then there is an isomorphism $`q:G/N\to L`$ such that the restriction of $`qp`$ to $`L`$ is the identity of $`L`$ \[10, pp. 101–102\]. But $`qp`$ sends any conjugate of an element of $`K`$ to the trivial element of $`L`$.
By the Kurosh subgroup theorem any subgroup of $`G`$ is a free product of a free group and conjugates of subgroups of $`K`$ and of $`L`$. Since $`H_0`$ is indecomposable and non-infinite-cyclic we may thus assume that it is conjugate to a subgroup of $`K`$. Similarly $`H_1`$ must be conjugate to a subgroup of $`K`$ or of $`L`$. The latter cannot happen since then some non-trivial element of $`K`$ would be conjugate to an element of $`L`$. Continuing in this fashion we get that each $`H_k`$ is conjugate to a subgroup of $`K`$. This implies that $`G`$ cannot be the union of the $`H_k`$ since the non-trivial elements of $`L`$ are excluded. ∎
We now consider fundamental groups of non-compact 3-manifolds. For basic definitions in 3-manifold topology we refer to and . A 3-manifold $`M`$ is $`\partial `$-irreducible if $`\partial M`$ is incompressible in $`M`$. Let $`S`$ and $`S^{\prime }`$ be compact surfaces such that $`S`$ is properly embedded in $`M`$ and $`S^{\prime }`$ either is properly embedded in $`M`$ or lies in $`\partial M`$. Then $`S`$ and $`S^{\prime }`$ are parallel in $`M`$ if there is an embedding of $`S\times [0,1]`$ in $`M`$ (called a parallelism from $`S`$ to $`S^{\prime }`$) such that $`S\times \{0\}=S`$, $`S\times \{1\}=S^{\prime }`$, and $`(\partial S)\times [0,1]`$ lies in $`\partial M`$. If $`S^{\prime }`$ lies in $`\partial M`$ then $`S`$ is $`\partial `$-parallel in $`M`$. The topological interior of $`N`$ in $`M`$ is denoted by $`IntN`$.
###### Lemma 2.2.
Let $`W`$ be a connected, non-compact 3-manifold which can be expressed as the union $`W=\bigcup _{n=-\mathrm{\infty }}^{\mathrm{\infty }}X_n`$ of compact, connected, irreducible, $`\partial `$-irreducible 3-manifolds $`X_n`$ such that $`X_m\cap X_n=\mathrm{\varnothing }`$ for $`|m-n|>1`$ and $`X_n\cap X_{n+1}=\partial X_n\cap \partial X_{n+1}`$ is a compact, connected surface which is incompressible in $`X_n`$ and in $`X_{n+1}`$ and is not a disk. Then $`\pi _1(W)`$ is non-trivial and indecomposable.
###### Proof..
Standard arguments show that $`Y_k=\bigcup _{n=-k}^{k}X_n`$ is irreducible and $`\partial `$-irreducible. It follows that $`\pi _1(Y_k)`$ is non-trivial, non-infinite-cyclic, and indecomposable \[5, Theorem 5.2, Lemma 6.6\]. The incompressibility of each $`X_n\cap X_{n+1}`$ shows that $`\pi _1(Y_k)`$ injects into $`\pi _1(W)`$. We now apply Lemma 2.1. ∎
## 3. The Fox-Artin Arc
###### Theorem 3.1.
$`\pi _1(S^3-A)`$ is indecomposable, where $`A`$ is the Fox-Artin arc in Figure 1.
###### Proof..
Let $`N`$ be a tapered regular neighborhood of $`A`$. Thus $`N`$ is a 3-ball containing $`A`$ such that $`A\cap \partial N=\partial A`$, $`A`$ is isotopic in $`N`$ rel $`\partial A`$ to a diameter of $`N`$, and $`N`$ is tamely embedded in $`S^3`$ except at $`\partial A`$. Let $`W=S^3-(IntN\cup A)`$. (We call $`W`$ the exterior of $`A`$. We also use this term for the closure of the complement of a regular neighborhood of a tame submanifold of a manifold.) Then $`\pi _1(W)\cong \pi _1(S^3-A)`$, and $`\partial W=\partial N-\partial A`$ is homeomorphic to an open annulus $`S^1\times 𝐑`$. It suffices to show that $`W`$ satisfies the hypotheses of Lemma 2.2. In the figures which follow we do not explicitly draw $`N`$, but its presence should be understood.
$`S^3-\partial A`$ can be parametrized by $`S^2\times 𝐑`$ in such a way that $`A`$ meets each $`S^2\times [m,m+1]`$, $`m\in 𝐙`$, in three arcs as indicated in Figure 3.
It is natural to consider the exterior of the union of these three arcs in $`S^2\times [m,m+1]`$ and to regard $`W`$ as the union of these exteriors. Unfortunately these manifolds are cubes with two handles and so are not $`\partial `$-irreducible. Instead we take $`S^2\times [2n-1,2n+1]`$, $`n\in 𝐙`$, which also meets $`A`$ in three arcs, and let $`X_n`$ be the exterior of their union. The generic copy $`X`$ of $`X_n`$ is then the exterior of the union of the three arcs $`\alpha `$, $`\beta `$, and $`\gamma `$ in $`S^2\times [-1,1]`$ as indicated in Figure 4.
Since no component of $`X\cap (S^2\times \{-1,1\})`$ or of the closure of $`\partial X-(S^2\times \{-1,1\})`$ is a disk it suffices to prove the following.
###### Lemma 3.2.
$`X`$ is irreducible and $``$-irreducible.
###### Proof..
Irreducibility follows from the Schönflies theorem together with the fact that $`X`$ is a compact, connected submanifold of $`S^3`$ with connected boundary.
The strategy for proving $`\partial `$-irreducibility is to exhibit $`X`$ as a double covering space of a solid torus $`V`$ branched over a certain properly embedded arc $`\delta `$ in $`V`$. If $`\partial X`$ were compressible, then by the $`𝐙_2`$ case of the equivariant loop theorem there would be a compressing disk $`\stackrel{~}{D}`$ for $`\partial X`$ such that either $`\tau (\stackrel{~}{D})\cap \stackrel{~}{D}=\mathrm{\varnothing }`$ or $`\tau (\stackrel{~}{D})=\stackrel{~}{D}`$, where $`\tau `$ is the non-trivial covering translation. Let $`D`$ be the image of $`\stackrel{~}{D}`$ in $`V`$. In the first case $`D`$ would miss $`\delta `$. In the second case we could assume that $`D`$ would meet $`\delta `$ in a single transverse intersection point, since otherwise $`\stackrel{~}{D}`$ would contain the fixed point set $`\stackrel{~}{\delta }`$ of $`\tau `$, and we could reduce to the first case by replacing $`\stackrel{~}{D}`$ by a nearby parallel disk. In both cases $`D`$ would be a compressing disk for $`\partial V`$ in $`V`$ since if $`\partial D=\partial E`$ for some disk $`E`$ in $`\partial V`$, then the preimage of $`E`$ in $`\partial X`$ would have a component $`\stackrel{~}{E}`$ with $`\partial \stackrel{~}{E}=\partial \stackrel{~}{D}`$. The proof is completed by showing that no such disk $`D`$ exists.
By sliding one endpoint of each of $`\alpha `$ and of $`\beta `$ onto $`\gamma `$ we see that $`X`$ is homeomorphic to the exterior of the graph $`\omega `$ in $`S^2\times [-1,1]`$ shown in Figure 5.
This in turn is homeomorphic to the exterior $`\stackrel{~}{V}`$ of the graph $`\stackrel{~}{\theta }`$ in $`S^3`$ shown in Figure 6.
This graph is invariant under the order two rotation $`\tau `$ about the simple closed curve $`\stackrel{~}{\rho }`$. This involution defines a branched double covering $`q:S^3S^3`$. The images $`\theta `$ and $`\rho `$ of $`\stackrel{~}{\theta }`$ and $`\stackrel{~}{\rho }`$ are shown in Figure 7.
Figure 8 shows a regular neighborhood $`R`$ of $`\theta `$ in $`S^3`$ and the arc $`\delta =\rho \cap (S^3-IntR)`$. Figure 9 shows $`R`$ straightened by an isotopy to a standard solid torus. Figure 10 moves the point at $`\mathrm{\infty }`$ to a finite point. Figure 11 displays the solid torus $`V=S^3-IntR`$ containing $`\delta `$.
###### Lemma 3.3.
There is no meridinal disk $`D`$ in $`V`$ such that $`D\cap \delta `$ is either empty or a single transverse intersection point.
###### Proof..
Let $`U`$ be a regular neighborhood of $`\delta `$ in $`V`$. Let $`F=\partial V-Int(U\cap \partial V)`$ and $`M=V-IntU`$. It suffices to show that $`F`$ is incompressible in $`M`$ and that there is no properly embedded incompressible annulus $`G`$ in $`M`$ with one boundary component in the frontier (topological boundary) $`C=FrU`$ of $`U`$ in $`V`$ and the other a curve in $`F`$ which bounds a meridinal disk $`D`$ in $`V`$ with $`D\cap M=G`$. Let $`E`$ be the meridinal disk shown in Figure 11. It meets $`U`$ in a pair of disks and so meets $`M`$ in a disk with two holes $`S`$. Let $`V_0`$ be the 3-ball obtained by splitting $`V`$ along $`E`$ and $`M_0`$ the 3-manifold obtained by splitting $`M`$ along $`S`$. Then $`E`$ splits $`\delta `$ into three arcs $`\delta _0`$, $`\delta _1`$, and $`\delta _2`$, $`U`$ into the regular neighborhoods $`U_0`$, $`U_1`$, and $`U_2`$ of these arcs, $`C`$ into the three annuli $`C_0`$, $`C_1`$, and $`C_2`$, and $`F`$ into the surface $`F_0`$. See Figure 12. Let $`S_0`$ and $`S_1`$ be the copies of $`S`$ in $`M_0`$ which are identified to obtain $`S`$, where $`S_0`$ meets $`C_0`$ and $`S_1`$ meets $`C_1`$ and $`C_2`$.
Let $`K`$ be the disk in $`M_0`$ shown in Figure 12. Its boundary consists of one arc each in $`F_0`$, $`S_1`$, $`C_1`$, and $`C_2`$. Splitting $`M_0`$ along $`K`$ gives a 3-manifold $`M_1`$ which is homeomorphic to $`(S_0\cup C_0)\times [0,1]`$ with $`S_0\cup C_0=(S_0\cup C_0)\times \{0\}`$. See Figure 13. $`M_0`$ is then obtained by attaching a 1-handle with cocore $`K`$ to $`(S_0\cup C_0)\times \{1\}`$, so it is irreducible.
We first show that $`S`$ is incompressible in $`M`$. It suffices to show that $`S_0`$ and $`S_1`$ are each incompressible in $`M_0`$. The first of these follows from our description above of $`M_0`$ as a product $`I`$-bundle with a 1-handle attached. The second follows from homology considerations.
We next show that $`F_0`$ is incompressible in $`M_0`$. Suppose $`L`$ is a compressing disk. Then $`\partial L`$ separates one non-empty set of components of $`\partial F_0`$ from another. The seven possible partitions are all ruled out by a combination of homology arguments and the incompressibility of $`S_0`$.
We now show that $`S`$ is $`\partial `$-incompressible rel $`F`$ in $`M`$. This means that whenever $`L`$ is a disk in $`M`$ such that $`L\cap S`$ is a properly embedded arc $`\lambda `$ in $`S`$ and $`L\cap \partial M`$ is an arc $`\mu `$ in $`F`$ such that $`\lambda \cap \mu =\partial \lambda =\partial \mu `$ and $`\partial L=\lambda \cup \mu `$, then there is an arc $`\nu `$ in $`\partial S`$ and a disk $`L^{\prime }`$ in $`F`$ such that $`\mu \cap \nu =\partial \mu =\partial \nu `$ and $`\partial L^{\prime }=\mu \cup \nu `$. It suffices to prove that $`S_0`$ and $`S_1`$ are $`\partial `$-incompressible rel $`F_0`$ in $`M_0`$.
For $`S_0`$ this follows from homology considerations and the incompressibility of $`F_0`$ in $`M_0`$. For $`S_1`$ similar arguments reduce the problem to the case in which $`\partial L=\lambda \cup \mu `$ where $`\lambda `$ is an arc in $`S_1`$ such that $`\partial \lambda `$ lies in $`S_1\cap F_0`$ and $`\lambda `$ separates $`S_1\cap C_1`$ from $`S_1\cap C_2`$ on $`S_1`$ and $`\mu `$ is an arc in $`F_0`$ separating $`F_0\cap C_1`$ from $`F_0\cap C_2`$.
Isotop $`L`$ so that $`K`$ and $`L`$ are in general position and the arcs $`K\cap S_1`$ and $`L\cap S_1`$ meet in a single transverse intersection point. Then there is an arc $`\xi `$ in $`K\cap L`$ joining this point to a point in $`K\cap F_0`$. Since $`M_0`$ is irreducible we may assume that in addition $`K\cap L`$ contains no simple closed curves. The intersection then consists of $`\xi `$ and possibly some arcs $`\eta `$ with $`\partial \eta `$ in $`K\cap F_0`$. Assume $`\eta `$ is outermost on $`L`$. Let $`\zeta `$ be an arc in $`\partial L`$ such that $`\zeta \cup \eta `$ bounds a disk $`L_0`$ in $`L`$ whose interior misses $`K`$. Let $`\epsilon `$ be the arc on $`K\cap F_0`$ with $`\partial \epsilon =\partial \eta =\partial \zeta `$. There is a disk $`K_0`$ in $`K`$ such that $`\partial K_0=\eta \cup \epsilon `$. Then $`K_0\cap L_0=\eta `$ and $`K_0\cup L_0`$ is a disk with boundary $`\zeta \cup \epsilon `$. Since $`F_0`$ is incompressible in $`M_0`$ this curve bounds a disk $`F_1`$ in $`F_0`$. Since $`M_0`$ is irreducible $`K_0\cup L_0\cup F_1`$ bounds a 3-ball $`B_0`$ in $`M_0`$. Note that $`\xi \cap B_0=\mathrm{\varnothing }`$. An isotopy of $`L`$ which moves $`L_0`$ across $`B_0`$ to $`K_0`$ and then off $`K_0`$ removes $`\eta `$ and possibly other components of $`K\cap L`$ but does not affect $`\xi `$.
Thus we may assume that $`K\cap L=\xi `$. We now split $`M_0`$ along $`K`$ to obtain $`M_1`$, as before. This splits $`L`$ into disks $`L_0`$ and $`L_1`$ either of which we can take as a compressing disk for $`(S_0\cup C_0)\times \{1\}`$ in $`M_1=(S_0\cup C_0)\times [0,1]`$. This contradiction completes the proof that $`S_0`$ and $`S_1`$ are $`\partial `$-incompressible rel $`F_0`$ in $`M_0`$ and hence that $`S`$ is $`\partial `$-incompressible rel $`F`$ in $`M`$.
Now suppose that $`D`$ is a compressing disk for $`F`$ in $`M`$. Put $`D`$ in general position with respect to $`S`$ so that $`D\cap S`$ has a minimal number of components. By the incompressibility of $`S`$ and the irreducibility of $`M`$ none of them are simple closed curves. Since $`S`$ is $`\partial `$-incompressible rel $`F`$ in $`M`$ none of them can be arcs, so $`D\cap S=\mathrm{\varnothing }`$. Since $`F_0`$ is incompressible in $`M_0`$ we have that $`D`$ cannot exist.
Finally suppose that $`G`$ is an incompressible annulus in $`M`$ with one boundary component in $`C`$ and the other a curve in $`F`$ which bounds a meridinal disk $`D`$ of $`V`$ such that $`D\cap M=G`$. We may assume that the first boundary component misses $`S`$, that $`G`$ is in general position with respect to $`S`$ and that among all such annuli in its isotopy class $`G\cap S`$ has a minimal number of components. Then none of these components is a simple closed curve which bounds a disk in $`S`$ or in $`G`$ or is an arc joining the two components of $`\partial G`$.
Suppose some component $`\kappa `$ of $`G\cap S`$ is a simple closed curve. Then we may assume that $`\kappa `$ and $`G\cap C`$ form the boundary of a subannulus $`G_0`$ of $`G`$ which lies in $`M_0`$. If $`\kappa `$ lies in $`S_0`$, then for homological reasons $`G\cap C`$ must lie in $`C_0`$. We can isotop $`G_0`$ so that it misses $`K`$. Hence $`G_0`$ lies in $`M_1=(S_0\cup C_0)\times [0,1]`$. By \[16, Corollary 3.2\] $`G_0`$ is parallel to an annulus in $`(S_0\cup C_0)\times \{0\}`$ and so $`\kappa `$ can be removed by an isotopy, contradicting minimality. If $`\kappa `$ lies in $`S_1`$, then for homological reasons $`G\cap C`$ must be in $`C_1`$ or $`C_2`$, say $`C_1`$. Let $`M_2=M_0\cup U_0`$. Then $`M_2`$ is homeomorphic to $`S_1\times [0,1]`$ with $`S_1=S_1\times \{1\}`$. Now $`G_0`$ is incompressible in $`M_2`$ and can be isotoped keeping $`\kappa `$ fixed to an annulus $`G_0^{\prime }`$ such that $`\partial G_0^{\prime }`$ lies in $`S_1`$. It then follows from \[16, Corollary 3.2\] that $`G_0^{\prime }`$ is parallel to an annulus in $`S_1`$ and hence $`G_0`$ is $`\partial `$-parallel in $`M_2`$. Since this parallelism does not meet $`U_0`$ we have that $`G_0`$ is $`\partial `$-parallel in $`M_0`$. It follows that $`\kappa `$ can be removed by an isotopy, again contradicting minimality.
Hence any component of $`G\cap S`$ must be an arc whose boundary lies in $`F\cap S`$. Since $`S`$ is $`\partial `$-incompressible rel $`F`$ in $`M`$ and $`S`$ is incompressible in $`M`$ any outermost such arc can be removed by an isotopy. Thus $`G\cap S=\mathrm{\varnothing }`$, and we may regard $`G`$ as lying in $`M_0`$. For homological reasons $`G\cap C`$ must lie in $`C_1`$ or $`C_2`$, say $`C_1`$. Since $`D`$ is a meridinal disk of $`V`$ we must have for homological reasons that $`\partial D`$ splits $`F_0`$ into two components such that one contains $`F_0\cap S_0`$ and $`F_0\cap C_1`$ and the other contains $`F_0\cap S_1`$ and $`F_0\cap C_2`$. Let $`M_1^{\prime }=M_0\cup U_1`$. Then $`M_1^{\prime }`$ is homeomorphic to $`(S_0\cup C_0)\times [0,1]`$ with $`S_0\cup C_0=(S_0\cup C_0)\times \{0\}`$. So $`D`$ is a compressing disk for $`\partial M_1^{\prime }-(S_0\cup C_0)`$ in $`M_1^{\prime }`$. This contradiction completes the proof of Lemma 3.3. ∎
This completes the proof of Lemma 3.2. $`\mathrm{}`$
This completes the proof of Theorem 3.1. $`\mathrm{}`$
## 4. Embeddings of Torus Knot Groups
In this section we prove a technical result concerning embeddings of torus knot groups in torus knot groups which will be used in the next section to distinguish among the fundamental groups of the complements of a certain uncountable collection of arcs. Recall that the fundamental group of the complement of a $`(p,q)`$ torus knot is the group $`G_{p,q}=\langle x,y\mid x^p=y^q\rangle `$.
###### Lemma 4.1.
Let $`p`$, $`q`$, $`r`$, and $`s`$ be primes such that $`p<q`$ and $`r<s`$. Then $`G_{p,q}`$ embeds in $`G_{r,s}`$ if and only if $`p=r`$ and $`q=s`$.
###### Proof..
Let $`Z(G)`$ denote the center of the group $`G`$. Recall that $`Z(G_{p,q})`$ is an infinite cyclic group generated by $`x^p`$ and that $`G_{p,q}/Z(G_{p,q})\cong 𝐙_p\ast 𝐙_q`$. Recall also that a free product of two non-trivial groups has trivial center and that any element of finite order in a free product is conjugate to an element of one of the factors. (See \[10, pp. 140–141, 100–101\].)
We may assume that $`G_{p,q}`$ is a subgroup of $`G_{r,s}`$. Let $`K=G_{p,q}\cap Z(G_{r,s})`$. Then $`K`$ is a subgroup of $`Z(G_{p,q})`$ and is the kernel of the restriction of the natural projection $`G_{r,s}\to 𝐙_r\ast 𝐙_s`$ to $`G_{p,q}`$. If $`u\in G_{r,s}`$, then let $`\overline{u}`$ denote its image in $`𝐙_r\ast 𝐙_s`$.
Suppose $`K=Z(G_{p,q})`$. Then we have an embedding $`𝐙_p\ast 𝐙_q\to 𝐙_r\ast 𝐙_s`$. Since $`\overline{x}`$ has order $`p`$ it must be conjugate to an element of $`𝐙_r`$ or of $`𝐙_s`$, hence $`p|r`$ or $`p|s`$, hence since $`r`$ and $`s`$ are prime we have $`p=r`$ or $`p=s`$. Similarly $`q=r`$ or $`q=s`$. Since $`p<q`$ and $`r<s`$ we must have $`p=r`$ and $`q=s`$.
Now suppose that $`K`$ is a proper subgroup of $`Z(G_{p,q})`$. Then it is generated by $`x^{pk}`$ for some $`k\ge 0`$, $`k\ne 1`$. Let $`G_{p,q,k}=G_{p,q}/K`$. It embeds in $`𝐙_r\ast 𝐙_s`$ and has presentation $`\langle \overline{x},\overline{y}\mid \overline{x}^p=\overline{y}^q,\overline{x}^{pk}=1\rangle `$. By the Kurosh subgroup theorem $`G_{p,q,k}`$ must be a free product of cyclic groups and so must either be cyclic or have trivial center. It thus suffices to show that neither of these is the case.
For $`k=0`$ this group is just $`G_{p,q}`$, and we are done. So assume $`k\ge 2`$. Define functions $`f,g:𝐙_{pqk}\to 𝐙_{pqk}`$ by $`f(n)=n+q\bmod pqk`$ and $`g(n)=n+p\bmod pqk`$. Then $`f`$ and $`g`$ are one to one and so may be regarded as elements of the symmetric group $`𝒮_{pqk}`$. Define $`\psi :G_{p,q,k}\to 𝒮_{pqk}`$ by $`\psi (\overline{x})=f`$ and $`\psi (\overline{y})=g`$. Then $`\psi `$ is well defined because $`f^p(n)=n+pq=n+qp=g^q(n)`$ and $`f^{pk}(n)=n+pkq=n\bmod pqk`$. Since $`\psi (\overline{x}^p)=f^p\ne \mathrm{id}`$ we have that $`Z(G_{p,q,k})`$ is non-trivial. Since $`G_{p,q,k}`$ maps onto $`𝐙_p\ast 𝐙_q`$ it is non-cyclic, and so we are done. ∎
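The relations used above are easy to verify mechanically. The following small sketch (Python, for the arbitrary illustrative choice $`p,q,k=2,3,2`$ — any primes $`p<q`$ and any $`k\ge 2`$ would do; these values are not from the text) checks that $`f^p=g^q`$ and $`f^{pk}=\mathrm{id}`$, so that $`\psi `$ is well defined, and that $`f^p\ne \mathrm{id}`$:

```python
# Check of the permutation representation psi from the proof of Lemma 4.1.
# The values p, q, k = 2, 3, 2 are an illustrative choice, not taken from the text.
p, q, k = 2, 3, 2
N = p * q * k                      # psi maps G_{p,q,k} into the symmetric group on Z_N

def f(n):                          # psi(x-bar): n -> n + q (mod pqk)
    return (n + q) % N

def g(n):                          # psi(y-bar): n -> n + p (mod pqk)
    return (n + p) % N

def iterate(h, m, n):              # apply the map h to n, m times
    for _ in range(m):
        n = h(n)
    return n

assert all(iterate(f, p, n) == iterate(g, q, n) for n in range(N))   # f^p = g^q
assert all(iterate(f, p * k, n) == n for n in range(N))              # f^{pk} = id
assert any(iterate(f, p, n) != n for n in range(N))                  # f^p != id
print("psi is well defined and psi(x^p) is non-trivial")
```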
## 5. Uncountably Many Arcs
###### Theorem 5.1.
There are uncountably many arcs $`A_i`$ in $`S^3`$ such that:
1. $`\pi _1(S^3-A_i)`$ is indecomposable and non-trivial.
2. $`\pi _1(S^3-A_i)`$ and $`\pi _1(S^3-A_j)`$ are isomorphic if and only if $`i=j`$.
3. $`A_i`$ is wildly embedded precisely at its endpoints.
###### Proof..
We first outline the proof and then fill in the details with a sequence of lemmas.
The construction of the $`A_i`$ will have a pattern similar to that of the Fox-Artin arc. $`S^3-A_i`$ will be parametrized as $`S^2\times 𝐑`$, and for each integer $`n`$ we will have that $`A_i`$ meets $`S^2\times [n,n+1]`$ in three properly embedded arcs $`\alpha _n`$, $`\beta _n`$, and $`\gamma _n`$, where $`\alpha _n`$ runs from $`S^2\times \{n\}`$ to itself, $`\beta _n`$ runs from $`S^2\times \{n+1\}`$ to itself, and $`\gamma _n`$ runs from $`S^2\times \{n\}`$ to $`S^2\times \{n+1\}`$. These arcs will be chosen so that the exterior $`X_n`$ of $`\alpha _n\cup \beta _n\cup \gamma _n`$ in $`S^2\times [n,n+1]`$ is irreducible and $`\partial `$-irreducible. Hence by Lemma 2.2 we will have that $`\pi _1(S^3-A_i)`$ is indecomposable and non-trivial. Thus $`A_i`$ will be wild. It will clearly be tame at points not in $`\partial A_i`$. It will be wild at both endpoints since otherwise its complement would be simply connected. (Any meridian of the arc would bound a disk consisting of an annulus which follows the arc to a tame endpoint and is then capped off by a disk behind it. In fact it can be shown as in \[2, Example 1.2\] that $`S^3-A_i`$ would be homeomorphic to $`𝐑^3`$.)
A map is $`\pi _1`$-injective if it induces an injection on fundamental groups; the same term is applied to a submanifold if its inclusion map has this property. The arcs will be chosen so that the interior of $`X_n`$ will contain a $`\pi _1`$-injective submanifold $`Q_n`$ which is homeomorphic to the exterior of a $`(p_n,q_n)`$ torus knot in $`S^3`$, where $`p_n`$ and $`q_n`$ are primes with $`p_n<q_n`$. It will follow from the $`\partial `$-irreducibility of all the $`X_m`$ that $`\pi _1(S^3-A_i)`$ will have a subgroup isomorphic to $`\pi _1(Q_n)`$. Moreover it will be shown that any subgroup of $`\pi _1(S^3-A_i)`$ which is isomorphic to a $`(p,q)`$ torus knot group for primes $`p`$ and $`q`$ with $`p<q`$ must be isomorphic to one of the $`\pi _1(Q_n)`$. We then let $`J`$ be the set of all pairs of primes $`(p,q)`$ with $`p<q`$ and let $`2^J`$ be the set of all subsets of $`J`$. For each non-empty $`i\in 2^J`$ we construct an arc $`A_i`$ as above such that the $`(p,q)`$ torus knot subgroups of $`\pi _1(S^3-A_i)`$ with $`(p,q)\in J`$ are precisely those for which $`(p,q)\in i`$. It follows that $`\pi _1(S^3-A_i)`$ and $`\pi _1(S^3-A_j)`$ are isomorphic if and only if $`i=j`$. Since $`2^J`$ is uncountable we will be done.
We next recall some terminology. Let $`M`$ be a compact, connected, orientable 3-manifold. We say that $`M`$ is atoroidal if every properly embedded, incompressible torus $`S^1\times S^1`$ in $`M`$ is $`\partial `$-parallel in $`M`$ and is anannular if every properly embedded, incompressible annulus $`S^1\times [0,1]`$ in $`M`$ is $`\partial `$-parallel in $`M`$. If $`M`$ is irreducible, $`\partial `$-irreducible, anannular and atoroidal, contains a 2-sided, properly embedded incompressible surface, and is not a 3-ball, then $`M`$ is excellent; the same term is applied to a compact, properly embedded 1-manifold in a compact 3-manifold $`P`$ if its exterior in $`P`$ has these properties.
###### Lemma 5.2.
Let $`Y^{}`$ and $`Y^{\prime \prime }`$ be excellent 3-manifolds. Suppose $`Y=Y^{}\cup Y^{\prime \prime }`$, where $`S=Y^{}\cap Y^{\prime \prime }=\partial Y^{}\cap \partial Y^{\prime \prime }`$ is a compact surface such that $`S`$ is incompressible in $`Y^{}`$ and in $`Y^{\prime \prime }`$, $`\partial Y^{}-IntS`$ is incompressible in $`Y^{}`$, $`\partial Y^{\prime \prime }-IntS`$ is incompressible in $`Y^{\prime \prime }`$, and each component of $`S`$ has negative Euler characteristic. Then $`Y`$ is excellent.
###### Proof..
This is \[14, Lemma 2.1\]. $`\mathrm{}`$
We now construct the arcs. Let $`R`$ be an unknotted solid torus in the interior of $`S^2\times [0,1]`$. Let $`P=S^2\times [0,1]-IntR`$. (We say that $`R`$ is unknotted if there is a properly embedded disk $`E`$ in $`P`$ such that $`\partial E\subset \partial R`$ and a meridional disk $`D`$ of $`R`$ such that $`\partial D`$ and $`\partial E`$ meet transversely in a single point.)
###### Lemma 5.3.
There exist disjoint properly embedded arcs $`\alpha `$, $`\beta `$, and $`\gamma `$ in $`P`$ such that $`\partial \alpha \subset S^2\times \{0\}`$, $`\partial \beta \subset S^2\times \{1\}`$, $`\gamma `$ has one endpoint in $`S^2\times \{0\}`$ and the other in $`S^2\times \{1\}`$, and $`\alpha \cup \beta \cup \gamma `$ is excellent.
###### Proof..
Let $`\alpha ^{}`$, $`\beta ^{}`$, and $`\gamma ^{}`$ be any arcs in $`P`$ whose boundaries satisfy the given conditions. By \[14, Theorem 1.1\] any compact, properly embedded 1-manifold in a compact, connected, orientable 3-manifold which meets each 2-sphere boundary component in at least two points is homotopic relative its boundary to a properly embedded 1-manifold which is excellent. Let $`\alpha `$, $`\beta `$, and $`\gamma `$ be the respective components of this new 1-manifold.
For those who prefer a more concrete construction of such arcs we give an alternative proof at the end of this section. $`\mathrm{}`$
Now let $`Q`$ be the exterior of a $`(p,q)`$ torus knot in $`S^3`$, where $`(p,q)\in J`$. Glue $`P`$ and $`Q`$ together by identifying $`\partial R`$ with $`\partial Q`$ in such a way that $`\partial E`$ is identified with a meridian of $`Q`$. Then the union of $`Q`$ and a regular neighborhood of $`E`$ in $`P`$ is a 3-ball, and so $`P\cup Q`$ is homeomorphic to $`S^2\times [0,1]`$. Let $`Y`$ be the exterior of $`\alpha \cup \beta \cup \gamma `$ in $`P`$ and $`X=Y\cup Q`$. It follows from the irreducibility and $`\partial `$-irreducibility of $`Y`$ and of $`Q`$ that $`X`$ is irreducible and $`\partial `$-irreducible and that $`Q`$ is $`\pi _1`$-injective in $`X`$.
We now repeat this construction using $`(p_n,q_n)`$ torus knots with $`(p_n,q_n)\in i`$ to obtain $`\alpha _n`$, $`\beta _n`$, $`\gamma _n`$, $`P_n`$, $`Q_n`$, $`Y_n`$, and $`X_n`$ contained in $`S^2\times [n,n+1]`$. We construct an arc $`A_i`$ by identifying the endpoints of the arcs so that the arcs occur in the sequence $`\mathrm{\dots },\gamma _n,\alpha _{n+1},\beta _n,\gamma _{n+1},\mathrm{\dots }`$ on $`A_i`$. The exterior $`W_i`$ of $`A_i`$ then satisfies the hypotheses of Lemma 2.2, and so $`\pi _1(S^3-A_i)`$ is indecomposable and non-trivial. Moreover the incompressibility of each $`X_n\cap X_{n+1}`$ implies that each $`Q_n`$ is $`\pi _1`$-injective in $`W_i`$.
We next review some characteristic submanifold theory, following but restricting attention to the special case which we will need. We first refine our notion of parallel surfaces. A pair $`(M,F)`$ is an irreducible 3-manifold pair if $`M`$ is a compact, orientable, irreducible 3-manifold and $`F`$ is a compact, incompressible surface in $`\partial M`$. Let $`S`$ and $`S^{}`$ be disjoint compact surfaces in $`M`$ such that $`S`$ is properly embedded in $`M`$, $`S^{}`$ is either properly embedded in $`M`$ or contained in $`\partial M`$, and $`\partial S\cup \partial S^{}`$ is contained in $`F`$. We say that $`S`$ and $`S^{}`$ are parallel in $`(M,F)`$ if there is a parallelism $`S\times [0,1]`$ from $`S`$ to $`S^{}`$ such that $`(\partial S)\times [0,1]`$ is contained in $`F`$; if $`S^{}\subset F`$ we say that $`S`$ is $`F`$-parallel. Our old definitions of “parallel” and “$`\partial `$-parallel” in $`M`$ correspond to the case of $`F=\partial M`$.
The characteristic pair of the irreducible 3-manifold pair $`(M,\partial M)`$ is a certain irreducible 3-manifold pair $`(\mathrm{\Sigma },\mathrm{\Phi })`$ such that $`\mathrm{\Sigma }\subset M`$ and $`\mathrm{\Sigma }\cap \partial M=\mathrm{\Phi }`$. For its definition and proof of existence see \[7, Chapter V\]. We will limit our discussion to two basic issues: using $`(\mathrm{\Sigma },\mathrm{\Phi })`$ and recognizing $`(\mathrm{\Sigma },\mathrm{\Phi })`$. The property we will use is that any $`\pi _1`$-injective map from a Seifert fibered space with non-cyclic fundamental group into $`M`$ which is not homotopic to a map whose image lies in $`\partial M`$ must be homotopic to a map whose image lies in $`\mathrm{\Sigma }`$ \[7, p. 138\].
We will recognize $`\mathrm{\Sigma }`$ by recognizing its components and using the Splitting Theorem \[7, p. 157\] to recognize the frontier $`Fr\mathrm{\Sigma }`$ of $`\mathrm{\Sigma }`$ in $`M`$. The components $`(\sigma ,\phi )`$ of $`(\mathrm{\Sigma },\mathrm{\Phi })`$ are Seifert pairs, i.e. $`\sigma `$ is either an $`I`$-bundle over a compact surface with $`\phi `$ the associated $`\partial I`$-bundle or $`\sigma `$ is a Seifert fibered space with $`\phi `$ a union of fibers in $`\partial \sigma `$. One of the properties we will need is that the inclusion map from $`(\sigma ,\phi )`$ into $`(M,\partial M)`$ is not homotopic as a map of pairs to a map whose image lies in $`\mathrm{\Sigma }-\sigma `$. Also the components of $`Fr\mathrm{\Sigma }`$ are incompressible annuli and tori none of which is $`\partial `$-parallel in $`M`$ though some components may be parallel in $`(M,\partial M)`$ to each other. (See the examples in \[6, Chapter IX\].) A union $`Fr^{}\mathrm{\Sigma }`$ of components of $`Fr\mathrm{\Sigma }`$ such that no two components of $`Fr^{}\mathrm{\Sigma }`$ are parallel in $`(M,\partial M)`$ to each other and $`Fr^{}\mathrm{\Sigma }`$ is maximal with respect to inclusion among all such unions is called a reduction of $`Fr\mathrm{\Sigma }`$. We call the components of $`Fr\mathrm{\Sigma }-Fr^{}\mathrm{\Sigma }`$ redundant components of $`Fr\mathrm{\Sigma }`$. Now suppose we are given a compact, properly embedded surface $`𝒯`$ in $`M`$ satisfying the following two conditions:
1. The components of $`𝒯`$ are incompressible annuli and tori none of which is $`\partial `$-parallel in $`M`$.
2. Let $`(M^{},\partial ^{}M)`$ be the pair obtained by splitting $`M`$ along $`𝒯`$ and $`\partial M`$ along $`\partial 𝒯`$. Then each component $`(N,L)`$ of $`(M^{},\partial ^{}M)`$ is either a Seifert pair or a simple pair, i.e. every incompressible, properly embedded torus in $`N`$ or annulus in $`N`$ with boundary in $`IntL`$ is either $`L`$-parallel or parallel in $`(N,L)`$ to a component of $`\partial N-IntL`$.
If $`𝒯`$ is minimal with respect to inclusion among all compact, properly embedded surfaces in $`M`$ satisfying (a) and (b), then by the Splitting Theorem $`𝒯`$ is isotopic to $`Fr^{}\mathrm{\Sigma }`$.
Now let $`M_k=\bigcup _{n=-k}^kX_n`$ and $`C_k=\bigcup _{n=-k}^kQ_n`$.
###### Lemma 5.4.
$`(M_k,\partial M_k)`$ is an irreducible 3-manifold pair, and its characteristic pair $`(\mathrm{\Sigma },\mathrm{\Phi })=(C_k,\emptyset )`$.
###### Proof..
The irreducibility and $`\partial `$-irreducibility of $`M_k`$ and the incompressibility of $`\partial C_k`$ in $`M_k`$ follow from the irreducibility and $`\partial `$-irreducibility of the $`X_n`$, the incompressibility of the $`X_n\cap X_{n+1}`$ in $`X_n`$ and in $`X_{n+1}`$, and the incompressibility of $`\partial Q_n`$ in $`X_n`$.
Let $`𝒯=\partial C_k`$. Since $`\partial M_k`$ is a surface of genus two no component of $`𝒯`$ is $`\partial `$-parallel in $`M_k`$. The components of $`(M_k^{},\partial ^{}M_k)`$ are the $`(Q_n,\emptyset )`$ and $`(Z,\partial M_k)`$, where $`Z=\bigcup _{n=-k}^kY_n`$. Each $`Q_n`$ is a Seifert fibered space. By Lemma 5.2 we have that $`Z`$ is excellent and therefore $`(Z,\partial M_k)`$ is a simple pair. Thus $`𝒯`$ satisfies properties (a) and (b). Deleting any components of $`𝒯`$ gives a surface which splits $`M_k`$ into components one of which, say $`N`$, is the union of $`Z`$ and some of the $`Q_n`$. Now $`N`$ is not Seifert fibered since it contains $`\partial M_k`$. It is not an $`I`$-bundle over a compact surface $`S`$ since $`S`$ would be covered by $`\partial M_k`$, and so $`\pi _1(S)\cong \pi _1(N)`$ could not contain the $`𝐙\times 𝐙`$ subgroup $`\pi _1(\partial Q_n)`$. Finally $`(N,\partial M_k)`$ is not a simple pair because $`\partial Q_n`$ is not $`\partial `$-parallel in $`N`$. Thus $`𝒯`$ is minimal with respect to inclusion among surfaces satisfying (a) and (b). So by the Splitting Theorem $`𝒯=Fr^{}\mathrm{\Sigma }`$.
By arguments similar to those applied above to $`N`$ we have that $`(Z,\partial M_k)`$ is not a Seifert pair. So if there are no redundant components we must have $`(\mathrm{\Sigma },\mathrm{\Phi })=(C_k,\emptyset )`$, and we are done.
Suppose there is a redundant component. Then it must be a torus which is parallel in $`(M_k,\partial M_k)`$ to $`\partial Q_n`$ for some $`n`$; denote it by $`T_n`$. Thus we may assume that there is an embedding of $`T_n\times [0,1]`$ in $`M_k`$ such that $`T_n\times [0,1]`$ meets $`Q_n`$ in $`T_n\times \{0\}=\partial Q_n`$, $`T_n\times \{1\}=T_n`$, and $`T_n\times (0,1)`$ contains all other redundant tori which are parallel to $`\partial Q_n`$. If there are such extra redundant tori, then they are isotopic in $`T_n\times [0,1]`$ to tori of the form $`T_n\times \{t\}`$ \[16, Corollary 3.2\]. It follows that there is some component $`\sigma `$ of $`\mathrm{\Sigma }`$ of the form $`T_n\times [r,s]`$. Its inclusion map into $`M_k`$ is homotopic to a map whose image lies in $`\mathrm{\Sigma }-\sigma `$, contradicting one of the properties of $`\mathrm{\Sigma }`$.
Thus there are no extra redundant tori. Now let $`Z^{}`$ be the closure of the complement in $`Z`$ of the union of all the products $`T_n\times [0,1]`$. Then $`Z^{}`$ is homeomorphic to $`Z`$, and so $`(Z^{},\partial M_k)`$ is a simple pair which is not a Seifert pair. Thus $`T_n\times [0,1]`$ is a component of $`\mathrm{\Sigma }`$, and $`(Q_n,\emptyset )`$ is a simple pair. Now in fact $`(Q_n,\emptyset )`$ actually is a simple pair. However, it is also a Seifert fibered space with non-cyclic fundamental group. Its inclusion map cannot be homotopic to a map whose image lies in $`\partial M_k`$ because $`\pi _1(\partial M_k)`$ has no $`𝐙\times 𝐙`$ subgroups. Thus it must be homotopic to a map whose image lies in some component $`\sigma `$ of $`\mathrm{\Sigma }`$. In particular the image lies in the complement of $`Q_n`$.
Now it follows from \[7, Squeezing Theorem, p. 139\] or \[6, Theorem IX.12\] that $`Q_n`$ is actually isotopic to a submanifold of $`\sigma `$. This fact can be used to contradict our knowledge of the structure of $`Z^{}`$. We choose, however, to give the following somewhat more direct argument.
Let $`p:\stackrel{~}{M}_k\to M_k`$ be the covering map corresponding to $`\pi _1(Q_n)`$. There is a component $`\stackrel{~}{Q}_n`$ of $`p^{-1}(Q_n)`$ such that the restriction $`\stackrel{~}{Q}_n\to Q_n`$ of $`p`$ is a homeomorphism and $`\pi _1(\stackrel{~}{Q}_n)\to \pi _1(\stackrel{~}{M}_k)`$ is an isomorphism. It follows that $`\pi _1(\partial \stackrel{~}{Q}_n)\to \pi _1(\stackrel{~}{M}_k-Int\stackrel{~}{Q}_n)`$ is an isomorphism. Now the homotopy of $`Q_n`$ into its complement lifts to a homotopy of $`\stackrel{~}{Q}_n`$ into $`\stackrel{~}{M}_k-Int\stackrel{~}{Q}_n`$. This implies that $`\pi _1(Q_n)`$ is abelian, which is not the case. ∎
We now suppose that $`\pi _1(S^3-A_i)`$ and $`\pi _1(S^3-A_j)`$ are isomorphic. Then $`\pi _1(W_i)`$ and $`\pi _1(W_j)`$ are isomorphic, where $`W_i`$ and $`W_j`$ are the exteriors of $`A_i`$ and $`A_j`$, respectively. Since these spaces are irreducible and orientable, the sphere theorem implies that they are aspherical. Hence there is a map $`h:W_j\to W_i`$ such that $`h_{\ast }:\pi _1(W_j)\to \pi _1(W_i)`$ is an isomorphism. We then restrict $`h`$ to a $`(p,q)`$ torus knot space arising in the construction of $`A_j`$. This map is $`\pi _1`$-injective. Its image lies in some $`M_k`$. Since $`\pi _1(\partial M_k)`$ has no $`𝐙\times 𝐙`$ subgroups Lemma 5.4 implies that it is homotopic to a map whose image lies in some $`(r,s)`$ torus knot space arising in the construction of $`A_i`$. By Lemma 4.1 we have that $`(p,q)=(r,s)`$. Thus $`j\subset i`$. The symmetric argument shows that $`i\subset j`$, concluding the proof of Theorem 5.1. ∎
###### Alternative Proof of Lemma 5.3.
Figure 14 shows a three component tangle in a 3-ball. Figure 15 shows a two component tangle in a 3-ball. By \[13, Proposition 4.1\] and \[12, Proposition 4.1\] these two tangles are excellent. Let $`Y^{}`$ and $`Y^{\prime \prime }`$ be their respective exteriors.
We glue $`Y^{}`$ and $`Y^{\prime \prime }`$ together as indicated in Figure 16 to obtain the exterior $`Y`$ of the union of the arcs $`\alpha `$, $`\beta `$, and $`\gamma `$ in the space $`P`$ obtained by removing the interior of an unknotted solid torus $`R`$ contained in the interior of $`S^2\times [0,1]`$. $`S=Y^{}\cap Y^{\prime \prime }=\partial Y^{}\cap \partial Y^{\prime \prime }`$ has two components; each is a disk with two holes. Since a compact surface contained in an incompressible boundary component of a compact 3-manifold is incompressible if none of the components of its complement in the boundary component has closure a disk, we have that $`S`$ is incompressible in $`Y^{}`$ and in $`Y^{\prime \prime }`$. We now apply Lemma 5.2 to conclude that $`Y`$ is excellent. ∎
|
no-problem/9901/cond-mat9901130.html
|
ar5iv
|
text
|
# Dynamic scaling in the spatial distribution of persistent sites
## Abstract
The spatial distribution of persistent (unvisited) sites in the one dimensional $`A+A\to \emptyset `$ model is studied. The ‘empty interval distribution’ $`n(k,t)`$, which is the probability that two consecutive persistent sites are separated by distance $`k`$ at time $`t`$, is investigated in detail. It is found that at late times this distribution has the dynamical scaling form $`n(k,t)\sim t^{-\theta }k^{-\tau }f(k/t^z)`$. The new exponents $`\tau `$ and $`z`$ change with the initial particle density $`n_0`$, and are related to the persistence exponent $`\theta `$ through the scaling relation $`z(2-\tau )=\theta `$. We show by rigorous analytic arguments that for all $`n_0`$, $`1<\tau <2`$, which is confirmed by numerical results.
First passage problems in non-equilibrium systems undergoing time evolution have become an important field of research lately with the discovery of persistence. Persistence probability in general is defined as follows: Given a stochastic variable $`\varphi (t)`$ which fluctuates about a mean value, say zero, what is the probability $`P(t_1,t_2)`$ that $`\varphi (t)`$ does not change sign throughout the time interval $`[t_1,t_2]`$. For a large class of physical systems, persistence shows a power-law decay $`P(t_1,t_2)\sim (t_2/t_1)^{-\theta }`$ for $`t_2\gg t_1`$, with a non-trivial persistence exponent $`\theta `$ which is, in general, unrelated to other known static and dynamic exponents.
Let us consider spatially extended systems with a stochastic field $`\varphi (𝐱,t)`$ at each lattice site $`𝐱`$, the time evolution of which is coupled to that of its neighbouring sites. $`\varphi (𝐱,t)`$ could be, for instance, an Ising spin, a phase ordering field, a diffusing field or the height of a fluctuating interface. In such cases, the system gets broken up into domains of persistent and non-persistent sites in course of time. In $`d=1`$, this reduces to a set of disjoint persistent and non-persistent clusters appearing alternately. As persistence decays with time, the persistent clusters shrink in size and hence, their separation grows. The following questions arise naturally in this context, which we address here: (i) How are the persistent clusters distributed in space at a given time? (ii) How does their average separation grow with time?
In one dimension, the zeroes of the stochastic field can be viewed as a set of particles, moving about in the lattice, annihilating each other when two of them meet. When a particle moves across a lattice site for the first time, the field there flips sign, and the site becomes non-persistent. If each particle is assumed to perform purely diffusive motion, this reduces to the well-known reaction-diffusion model $`A+A\to \emptyset `$, with appropriate initial conditions. The simplest case is random initial distribution of particles, with average density $`n_0`$, for which $`P(t)\sim t^{-\theta }`$ with $`\theta =3/8`$, independent of $`n_0`$. We investigate spatial ordering of persistent sites in this simple model.
Our study is centered around the Empty Interval Distribution $`n(k,t)`$ — the probability that two randomly chosen consecutive persistent sites are separated by distance $`k`$ at time $`t`$. This distribution is analogous to the well-studied Inter-Particle Distribution Function (IPDF) in diffusion-reaction systems. Our numerical results show that $`n(k,t)`$ has a non-trivial dynamic scaling form with power-law decay in $`k`$ and $`t`$, characterized by exponents $`\tau `$ and $`\theta `$ respectively. The power law decay is valid for $`k\ll L(t)`$, where $`L(t)\sim t^z`$ is a new dynamic length scale, which may be interpreted as the average separation between persistent regions. The three exponents are connected by the scaling relation $`z(2-\tau )=\theta `$. Although the persistence exponent $`\theta `$ is universal for this model, we find that $`\tau `$ and $`z`$ do change with the initial particle density. We give rigorous analytical arguments on the bounds of $`\tau `$, showing that $`1<\tau <2`$ for all values of $`n_0`$. Power-law decay of $`n(k,t)`$ in $`k`$ is a consequence of spatial correlations — a random distribution of sites would correspond to exponential decay.
Our numerical simulation is done on a 1-d lattice of size $`N=10^4`$ with periodic boundary conditions. Particles are initially distributed at random on the lattice, and their positions are sequentially updated— each particle was made to move one step in either direction with probability 1/2. When two particles came on top of each other, both vanished instantaneously. The time evolution is done up to 12000 Monte-Carlo steps (1 MC step is counted after all the particles in the lattice were touched once). All simulations are repeated for three different values of initial density, $`n_0=0.2`$, 0.5 and 0.8. The results are averaged over 500 different initial configurations.
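As a rough illustration of this procedure — a minimal sketch, not the authors' code — the following Python fragment evolves annihilating random walkers on a periodic lattice, keeps track of never-visited (persistent) sites, and accumulates the empty-interval histogram. The treatment of initially occupied sites and the order of the sequential update are illustrative conventions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(N=10_000, n0=0.5, t_max=1_000):
    """Minimal sketch of the A + A -> 0 simulation with persistent-site bookkeeping."""
    occ = rng.random(N) < n0              # random initial occupation with density n0
    persistent = ~occ                     # convention: initially occupied sites count as visited
    for _ in range(t_max):                # one MC step = one sweep over the particles
        for i in np.flatnonzero(occ):
            if not occ[i]:                # particle may already have annihilated this sweep
                continue
            j = (i + rng.choice((-1, 1))) % N
            occ[i] = False
            persistent[j] = False         # the walker crosses site j
            occ[j] = not occ[j]           # instantaneous pairwise annihilation
    idx = np.flatnonzero(persistent)
    gaps = np.diff(idx)                   # distances k between consecutive persistent sites
    return persistent.mean(), np.bincount(gaps)

P, n_k = simulate()
print("P(t_max) =", P)
```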
We observe that at large times $`t`$ and $`k\gg 1`$, $`n(k,t)\sim k^{-\tau }`$ for $`k\ll L(t)`$. Here $`L(t)`$ is a cut-off length scale that grows with time. In Fig. 1, we present the data for $`n_0=0.5`$ and for three values of time. The same data, as presented in Fig. 2, show that for each $`k`$, $`n(k,t)\sim t^{-\omega }`$ for all $`k\ll L(t)`$. (It will be shown later that $`\omega =\theta `$). Similar power-law decay in $`k`$ and $`t`$ has been observed for other values of $`n_0`$ also. These observations are fairly well represented by the following dynamic scaling form for $`n(k,t)`$, for late times and large enough $`k`$.
$$n(k,t)\sim t^{-\omega }k^{-\tau }f(k/L(t))$$
(1)
where the scaling function $`f(x)\simeq 1`$ for $`x\ll 1`$ and decreases faster than any power of $`x`$ for $`x\gg 1`$.
The exponents appearing in Eq. 1 are not all independent. The moments of the distribution are useful in deriving the scaling relations between them. The $`m`$-th moment is $`I_m(t)=\sum _kk^mn(k,t)\approx \int _1^{\infty }n(s,t)s^m\,ds`$. From the definition of $`n(k,t)`$, one can easily see that
$$I_0(t)=P(t)\sim t^{-\theta };\qquad I_1(t)=N$$
(2)
The average separation between persistent sites is given by $`I_2(t)/I_1(t)\equiv L(t)`$. In Fig. (3) we have $`L(t)`$ plotted against $`t`$ on a logarithmic scale for three values of $`n_0`$. We find that $`L(t)`$ diverges with time as
$$L(t)\sim t^z$$
(3)
where $`z`$ is a new dynamic exponent.
The scaling relations between the exponents are obtained by making use of the conditions in Eq. 2. First of all, we show that only $`\tau <2`$ is physically reasonable. For, if $`\tau \ge 2`$, $`I_1(t)\sim t^{-\omega }`$, and from the second part of Eq. 2 it follows that $`\omega =0`$. But since $`\omega \ge \theta `$ for reasons of convergence, we get $`\theta \le 0`$ which is absurd. So we conclude that $`\tau <2`$. In this case, $`I_1(t)\sim t^{-\omega +z(2-\tau )}`$, which according to Eq. 2 implies that
$$z(2-\tau )=\omega $$
(4)
Another set of scaling relations can be derived using the condition on $`I_0(t)`$ in Eq. 2. Combined with Eq. 4, this gives
$$z=\theta ;\qquad \omega =\theta (2-\tau )\qquad \text{if }\tau <1$$
(5)
$$\omega =\theta ;\qquad z(2-\tau )=\theta \qquad \text{if }\tau >1$$
(6)
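For completeness (this short check is not spelled out in the original derivation), Eqs. 5 and 6 follow from evaluating the zeroth moment with the scaling form (1),

$$I_0(t)\sim t^{-\omega }\int _1^{L(t)}s^{-\tau }\,ds\sim \begin{cases}t^{-\omega },&\tau >1,\\ t^{-\omega }L(t)^{1-\tau }\sim t^{-\omega +z(1-\tau )},&\tau <1,\end{cases}$$

and matching this to $`I_0(t)=P(t)\sim t^{-\theta }`$ while using Eq. 4.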
We present a summary of our numerical results in Table I. It is easily seen that for all values of $`n_0`$, $`z>\theta `$, $`\tau >1`$ and the scaling relations in Eq. 6 are satisfied within numerical errors. Moreover, the exponents $`z`$ and $`\tau `$ show a consistent decrease with increasing $`n_0`$— they are non-universal, unlike $`\theta `$. We now present an intuitive argument which accounts for these observations fairly rigorously.
As persistence decays, the non-persistent regions grow in time (the length scale of which is set by $`L(t)`$ in Eq. 1) while the clusters of persistent sites shrink in size and eventually disappear. Let $`p(l,t)`$ be the number of persistent clusters of size $`l`$, at time $`t`$. The total number of persistent sites at time $`t`$ is $`\sum _llp(l,t)=P(t)`$, and the total number of such clusters is $`N_c(t)=\sum _lp(l,t)`$. The latter is related to $`n(k,t)`$ through the exact relation $`N_c(t)=\sum _{k=2}^{\infty }n(k,t)=P(t)-n(1,t)`$. The average size of a cluster at time $`t`$ is
$$\overline{l}(t)=\frac{P(t)}{N_c(t)}=\left(1-\frac{n(1,t)}{P(t)}\right)^{-1}$$
(7)
From Eq. 1, $`n(1,t)\sim t^{-\omega }`$ and since $`P(t)\sim t^{-\theta }`$ we have $`\overline{l}(t)=\left[1-\gamma t^{-(\omega -\theta )}\right]^{-1}`$ where $`\gamma `$ is a numerical constant. Since $`\omega \ge \theta `$, $`\overline{l}(t)`$ is a constant for late times. Now, if $`\omega >\theta `$, $`\overline{l}(t)=1`$ strictly; only if $`\omega =\theta `$ any other value is possible. We argue for the latter case as follows. When clusters of persistent sites shrink in size, the depletion happens at the two ends of the cluster, independent of its size. Let the average decrease in the size of a cluster over time $`t`$ be $`\xi (t)`$. Clusters of initial size $`l>\xi (t)`$ shrink to size $`l-\xi (t)`$ after time $`t`$, while those with length $`l\le \xi (t)`$ disappear. It follows that
$$p(l,t)=p(l+\xi (t),0)$$
(8)
Here, $`p(l,0)=n_0^2(1-n_0)^l`$ since the initial distribution of particles is done at random with probability $`n_0`$. After substitution in Eq. 8, we find that the time evolution of the cluster size distribution has the extremely simple form $`p(l,t)=e^{-\lambda \xi (t)}p(l,0)`$ where $`\lambda =-\mathrm{ln}(1-n_0)`$. This result is also supported by simulations (Fig. 4). Consequently, the average cluster size $`\overline{l}(t)=\overline{l}(0)=1/n_0`$. This implies $`\omega =\theta `$ from our arguments following Eq. 7, and thus validates Eq. 6. Furthermore, since $`P(t)\sim t^{-\theta }`$ we find $`\xi (t)\approx \frac{\theta }{\lambda }\mathrm{ln}t`$ at large $`t`$. It follows that a persistent cluster of initial size $`L`$ has an average life-time $`\tau _L\sim \mathrm{exp}(\frac{\lambda }{\theta }L)`$ for large $`L`$. The exponential dependence of the life-time of the cluster on its size reflects the slow algebraic decay of persistence.
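Spelling out the first of these statements (a one-line check added here): since the initial cluster size distribution is geometric,

$$p(l,t)=p(l+\xi (t),0)=n_0^2(1-n_0)^{l+\xi (t)}=e^{\xi (t)\mathrm{ln}(1-n_0)}p(l,0)=e^{-\lambda \xi (t)}p(l,0),$$

and summing $`l\,p(l,t)`$ over $`l`$ gives $`P(t)=e^{-\lambda \xi (t)}P(0)\sim t^{-\theta }`$, which yields the logarithmic growth of $`\xi (t)`$ quoted above.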
Our argument can be extended to show why $`\tau `$ and $`z`$ are possibly non-universal. First of all, the exponent relation $`\omega =\theta `$ makes it possible to write the formal relation $`n(1,t)=g(\tau ,n_0)P(t)`$. Combined with Eq. 7 and using the result $`\overline{l}(t)=1/n_0`$, we obtain the relation $`g(\tau ,n_0)+n_0=1`$, which expresses implicitly the dependence of $`\tau `$ on $`n_0`$. For instance, if Eq. 1 were exact for all values of $`k`$, then $`P(t)\approx \int _1^{\infty }n(s,t)\,ds=\frac{n(1,t)}{\tau -1}`$ so that $`g(\tau ,n_0)\approx \tau -1`$, from which it follows that $`\tau \approx 2-n_0`$. This result, although not exact, is consistent with the bounds $`1<\tau <2`$, and appears to be valid in the high density limit $`n_0\to 1`$, as indicated by the numerical values in Table I.
In summary, we have shown that the spatial distribution of persistent clusters in one dimension exhibits rich dynamic scaling characterised by two new exponents. We have given rigorous arguments on the bounds and universality properties of these exponents, which is well-supported by numerics. Interestingly, the normalized size distribution of persistent clusters was found to be independent of time.
Our work is the first study that brings out the non-trivial features in the spatial distribution of persistent sites in a one dimensional model. The dynamic scaling form in Eq. 1 is by no means specific to the model studied, and we have observed similar forms in other one dimensional systems —diffusion equation and kinetic Ising model, for example. Similar scaling in size distribution has been observed in entirely different contexts also—for instance, diffusion-limited cluster aggregation and diffusion-limited deposition. The feature that is common to all these processes is the irreversible coalescence of clusters (empty intervals in our case).
We are grateful to Satya Majumdar for discussions and for pointing out the similarities to aggregation models. G. M thanks B. Derrida and P. R thanks D. Stauffer for critical reading of the manuscript and valuable suggestions.
|
no-problem/9901/quant-ph9901062.html
|
ar5iv
|
text
|
# Tunneling of Bound Systems at Finite Energies: Complex Paths Through Potential Barriers.
## Abstract
We adapt the semiclassical technique, as used in the context of instanton transitions in quantum field theory, to the description of tunneling transmissions at finite energies through potential barriers by complex quantum mechanical systems. Even for systems initially in their ground state, not generally describable in semiclassical terms, the transmission probability has a semiclassical (exponential) form. The calculation of the tunneling exponent uses analytic continuation of degrees of freedom into a complex phase space as well as analytic continuation of the classical equations of motion into the complex time plane. We test this semiclassical technique by comparing its results with those of a computational investigation of the full quantum mechanical system, finding excellent agreement.
1. Tunneling phenomena are inherent in numerous quantum systems, from atoms to condensed matter to quantum field theory. Even in systems with a small parameter—coupling constant—a quantitative description of tunneling is possible only in a limited number of cases. Perhaps the best known example is the WKB approximation familiar from one-dimensional wave mechanics; similar techniques, such as the “most probable escape path” and instanton methods, are used to study tunneling from the bottom of potential wells. In the latter cases the calculation of the tunneling probability may be reduced to the solution of classical equations of motion for real generalized coordinates in imaginary (“Euclidean”) time, supplemented by the analysis of small fluctuations about this classical Euclidean trajectory. However, these methods often fail in describing tunneling of systems with more than one degree of freedom at finite energies.
It has been suggested recently, in the context of instanton transitions in quantum field theory, that semiclassical techniques may be used for calculating the exponential suppression factors in a class of processes where multi-dimensional systems tunnel at finite energies. The proposal involves a double analytic continuation: the degrees of freedom are continued into a complex phase space, and the equations of motion are solved along a contour in complex time. The tunneling exponent is determined by an appropriate solution of the classical, albeit complexified, equations of motion. Computation by numerical methods is then feasible even for systems with a large number of degrees of freedom, as has already been demonstrated in a field theoretic model. A problem with the formalism of Refs. is that its derivation from first principles is still lacking, although its plausibility has been supported by perturbative calculations about an instanton.
The purpose of this paper is twofold. First, we adapt the technique of Refs. to tunneling of quantum mechanical bound systems through high and wide potential barriers. As an example, we consider a system of two degrees of freedom with linear binding force. We find that if the bound system is initially in a highly excited state, the tunneling exponent is indeed calculable in a semiclassical way. This result is hardly surprising, as the initial state itself can be described in semiclassical terms. We formulate the complexified classical boundary value problem relevant to the calculation of the exponent in this case.
Second, the real strength of this formalism is that it also enables one to treat barrier penetration when the bound system is initially in a low lying state, e.g. the ground state. This is far from obvious, as this initial state cannot be described semiclassically. Nevertheless, we argue that in this case the tunneling exponent can be obtained by an appropriate limiting procedure. The resulting technique is less-well justified, so we have chosen to test it by direct computation of the transmission probability in the full quantum theory. We briefly describe the numerical methods involved, and present the results of both the full quantum mechanical and semiclassical analyses. We find good agreement between the two, confirming the validity of the semiclassical approach. 2. To be specific, let us consider a quantum mechanical system of two particles of equal mass $`m=1/2`$ moving in one dimension. Let these particles be bound by the harmonic potential $`(\omega ^2/8)(x_1x_2)^2`$, and one of these particles be repelled from the origin by a positive semidefinite potential $`V(x_1)`$ that vanishes as $`x_1\pm \mathrm{}`$ (we could of course allow $`V`$ to depend on $`x_2`$ as well, provided it couples to the internal degree of freedom). We take this potential to have the form $`V(x_1)=g^2U(gx_1)`$, where $`g`$ is a small constant. We set $`\mathrm{}=1`$, so the classical limit corresponds to $`g0`$. In what follows we present the results of numerical calculations for $`\omega =1/2`$ and gaussian potential, $`U(x)=\mathrm{exp}(x^2/2)`$, although the treatment of other potentials would be similar. In terms of the center-of-mass and relative coordinates, $`X=(x_1+x_2)/2`$ and $`y=(x_1x_2)/2`$, the Lagrangian reads
$$L=\frac{1}{2}\dot{X}^2+\frac{1}{2}\dot{y}^2-\frac{1}{2}\omega ^2y^2-\frac{1}{g^2}U[g(X+y)]$$
(1)
Far from the origin ($`|X|\to \infty `$), the center-of-mass and internal degrees of freedom decouple and the system can be characterized by its center-of-mass momentum $`P`$ and oscillator excitation number $`n`$, or, equivalently, by $`n`$ and the total energy $`E=P^2/2+\omega (n+1/2)`$. We wish to calculate the probability $`T_n(E)`$ for transmission of the system through the barrier $`V`$. Of particular interest is $`T_0(E)`$, the transmission probability of this system initially in its oscillator ground state.
It is convenient to introduce rescaled total energy and occupation number $`ϵ=g^2E`$ and $`\nu =g^2n`$. With our choice of $`U`$, the top of the barrier corresponds to a potential energy $`ϵ=1`$. For $`ϵ<1`$ transmission is possible only via tunneling. For $`ϵ`$ just above $`1`$ classical over-barrier transitions are possible for very special initial states. Indeed, there exists an unstable, static classical solution with both particles stationary at the top of the barrier, $`x_1=x_2=0`$, so that $`ϵ=1`$. If one perturbs this solution by giving an arbitrarily small, common positive velocity to both particles, they will move toward $`X=+\infty `$. The reversed evolution takes the system to $`X=-\infty `$, with the classical oscillator characterized by a certain excitation energy $`ϵ_0^{osc}\equiv \omega \nu _0`$ (and a certain phase of the classical oscillator). The combined evolution is the classical transition over the barrier from this particular asymptotic state. By solving the (real time) classical equations of motion numerically, we found that $`\nu _0\approx 0.9`$ for $`\omega =1/2`$.
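A sketch of this real-time computation is shown below (an illustration assembled here, not the authors' code; the size of the initial kick, the integration time and the diagnostics are arbitrary choices). It integrates the rescaled classical equations of motion $`\ddot{X}=-U^{}(X+y)`$, $`\ddot{y}=-\omega ^2y-U^{}(X+y)`$ starting from a slightly perturbed top-of-barrier configuration and reads off the asymptotic oscillator energy:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 0.5

def rhs(t, s):
    # s = (X, y, dX/dt, dy/dt); U(q) = exp(-q^2/2), so U'(q) = -q exp(-q^2/2)
    X, y, vX, vy = s
    q = X + y
    dU = -q * np.exp(-q**2 / 2.0)
    return [vX, vy, -dU, -omega**2 * y - dU]

# time-reversed version of the perturbed static solution: start on top of the
# barrier (x1 = x2 = 0) with a tiny common negative velocity, so the system
# rolls off towards X -> -infinity and excites the oscillator on the way out
s0 = [0.0, 0.0, -1e-5, 0.0]
sol = solve_ivp(rhs, (0.0, 400.0), s0, rtol=1e-10, atol=1e-12)
X, y, vX, vy = sol.y
eps_osc = 0.5 * vy[-1]**2 + 0.5 * omega**2 * y[-1]**2   # oscillator energy once U ~ 0
print("nu_0 estimate:", eps_osc / omega)
```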
The classical evolution of the system initially in the classical oscillator ground state ($`x_1=x_2`$) leads to the excitation of the oscillator as it approaches the barrier. Classical transition over the barrier occurs in this case only if the total energy exceeds some critical value. In our example we found numerically $`ϵ_{crit}=1.8`$.
If $`ϵ`$ and $`\nu `$ are such that classical transitions over the barrier are not possible, the system has to tunnel. We will shortly see that at $`ϵ`$ and $`\nu `$ fixed, and $`g\to 0`$ (i.e., at large total energy and initial occupation number, $`E,n\sim g^{-2}`$) the transmission probability has the semiclassical form
$$T_n(E)=C(ϵ,\nu )e^{-\frac{1}{g^2}F(ϵ,\nu )}$$
(2)
The case of the initial oscillator ground state is more subtle. In analogy to Refs. we suggest that the transmission probability at $`n=0`$ has the form
$$T_0(E)=C_0(ϵ)e^{-\frac{1}{g^2}F_0(ϵ)}$$
(3)
and that the exponent is obtained by taking the limit
$$F_0(ϵ)=\mathrm{lim}_{\nu \to 0}F(ϵ,\nu )$$
(4)
One of the main purposes of this paper is to check this limiting procedure by comparison with a fully quantum mechanical calculation.
3. To see that Eq.(2) is indeed valid, and to obtain the procedure for calculating the exponent $`F(ϵ,\nu )`$, let us consider the transmission amplitude $`A(X_f,y_f;P,n)=\langle X_f,y_f|\mathrm{exp}[-iH(t_f-t_i)]|P,n\rangle `$, where $`X_f`$ ($`>0`$) and $`y_f`$ are the coordinates at time $`t_f`$, and we eventually take the limit $`(t_f-t_i)\to \infty `$. This amplitude may be written as a convolution of the evolution operator in the coordinate basis and the wave function of the initial state. The former is given by the path integral $`\langle X_f,y_f|\mathrm{exp}[-iH(t_f-t_i)]|X_i,y_i\rangle =\int [dX][dy]\mathrm{exp}(iS)`$ where the integration runs over paths satisfying $`(X,y)(t_i)=(X_i,y_i)`$, $`(X,y)(t_f)=(X_f,y_f)`$. For an initial state with $`P\sim g^{-1}`$, $`n\sim g^{-2}`$, the initial wave function is semiclassical and has the exponential form. In the case of a harmonic binding potential, this follows from the integral representation in the coherent state formalism:
$`\langle X_i,y_i|P,n\rangle ={\displaystyle \frac{e^{iPX_i}}{\sqrt{2\pi }}}{\displaystyle \int \frac{dzd\overline{z}}{2\pi i}e^{-\overline{z}z}\frac{\overline{z}^n}{\sqrt{n!}}e^{-\frac{1}{2}z^2-\frac{1}{2}\omega y_i^2+\sqrt{2\omega }zy_i}}`$
(One may replace $`\overline{z}^n/\sqrt{n!}`$ by $`\mathrm{exp}(n\mathrm{log}(\overline{z}/\sqrt{n})+n/2)`$ at large $`n`$.) By introducing the rescaled integration variables $`X\to gX`$, $`y\to gy`$, etc., we observe that $`A(X_f,y_f;P,n)`$ is given by an integral of an exponential of the form $`\mathrm{exp}(g^{-2}\mathrm{\Gamma })`$ where $`\mathrm{\Gamma }`$ depends only on the rescaled integration variables, $`\nu `$ and $`ϵ`$, and does not depend explicitly on $`g^2`$. This allows for a semiclassical analysis: we find stationary points of $`\mathrm{\Gamma }`$ and evaluate the integrals using a stationary phase approximation. We outline the main steps in the derivation of the stationary point equations.
Variation of $`\mathrm{\Gamma }`$ with respect to $`X(t)`$ and $`y(t)`$ for $`t_i<t<t_f`$ leads to the conventional classical equations of motion. When classical transitions are forbidden, there will be no real solutions satisfying the boundary conditions. Nevertheless there will be solutions with complex values of the integration variables. When performing the analytic continuation we will, in general, encounter singularities. To deal with this problem, we note that the time contour, originally the real axis, can be distorted into the complex plane, keeping the end points $`t_i`$, $`t_f`$ fixed. This deformation of the time contour allows us to avoid these singularities. Thus, our strategy is to search for complex solutions of the classical equations of motion along a contour ABCDE in the complex time plane, as shown in Fig. 1.
There are further stationary point equations coming from variation of $`\mathrm{\Gamma }`$ with respect to the integration variables at the end point $`t_i`$. It is convenient to formulate these equations along part B of the contour, where $`t=iT/2+t^{}`$ with $`t^{}`$ real, $`t^{}\to -\infty `$ (this is possible because the equations of motion decouple in the asymptotic past). Instead of $`ϵ`$ and $`\nu `$ we introduce new real parameters $`T`$ and $`\theta `$; $`T`$ enters the problem through the shape of the contour. The general complex solution at large negative $`t^{}`$ is $`X(t^{})=X_0+pt^{}`$, $`y(t^{})=ue^{i\omega t^{}}+ve^{-i\omega t^{}}`$ where $`X_0`$, $`p`$, $`u`$ and $`v`$ are complex parameters. The stationary point equations at the initial time lead to the following boundary conditions: (i) $`X(t^{})`$ is real (i.e. $`p`$ is real and $`T`$ may be chosen so that $`X_0`$ is also real), (ii) the positive and negative frequency parts of $`y(t^{})`$ are related to each other by $`v=u^{\ast }e^\theta `$.
More boundary conditions appear when one evaluates the total transmission probability, i.e. integrates $`|A(X_f,y_f;P,n)|^2`$ over $`X_f`$ and $`y_f`$, again in a gaussian approximation. These conditions involve the final time and simply require that (iii) $`X(t)`$ and $`y(t)`$ are real on the DE part of the contour.
At given $`T`$ and $`\theta `$ these three boundary conditions are sufficient to specify the complex solution of the classical equations of motion on the contour BCDE (up to time translations along the real axis). Given this solution, the exponent for the transmission probability (2) is the value of $`2\text{Re}\mathrm{\Gamma }`$ at the stationary point. Explicitly, we find
$`F(ϵ,\nu )=2\text{Im}S_0-ϵT-\nu \theta `$
where
$`S_0={\displaystyle \int _{BCDE}}𝑑t\left[-{\displaystyle \frac{1}{2}}X\partial _t^2X-{\displaystyle \frac{1}{2}}y\partial _t^2y-{\displaystyle \frac{\omega ^2}{2}}y^2-U(X+y)\right]`$
is the (rescaled) classical action for the complex solution of the above boundary value problem. The total energy and excitation number are related to $`T`$ and $`\theta `$ by
$`{\displaystyle \frac{\partial (2\text{Im}S_0)}{\partial T}}=ϵ,{\displaystyle \frac{\partial (2\text{Im}S_0)}{\partial \theta }}=\nu `$
i.e. the pairs $`(ϵ,\nu )`$ and $`(T,\theta )`$ are Legendre-conjugate.
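One way to make the conjugacy explicit (a short check added here, assuming the expressions above): differentiating $`F=2\text{Im}S_0-ϵT-\nu \theta `$ and using the two relations gives

$$dF=\frac{\partial (2\text{Im}S_0)}{\partial T}\,dT+\frac{\partial (2\text{Im}S_0)}{\partial \theta }\,d\theta -T\,dϵ-ϵ\,dT-\theta \,d\nu -\nu \,d\theta =-T\,dϵ-\theta \,d\nu ,$$

so that $`\partial F/\partial ϵ=-T`$ and $`\partial F/\partial \nu =-\theta `$.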
We have solved the equations of motion numerically along the contour BCDE subject to the boundary conditions (i)–(iii). In particular, we have evaluated the limit (4). The result of this semiclassical calculation is shown in Fig. 3.
4. To check this semiclassical procedure, we have performed a numerical analysis of the full quantum system defined by (1). This is conveniently done in a basis of center-of-mass coordinate $`X`$ eigenstates and oscillator excitation number $`n`$. In this basis the state is represented by a multi-component wave function $`\psi _n(X)\equiv \langle X,n|\mathrm{\Psi }\rangle `$, and the time-independent Schrödinger equation reads
$`-{\displaystyle \frac{\partial ^2\psi _n(X)}{\partial X^2}}+\left(n+{\displaystyle \frac{1}{2}}\right)\omega \psi _n(X)+{\displaystyle \sum _{n^{}}}V_{nn^{}}(X)\psi _{n^{}}(X)=E\psi _n(X)`$ (5)
where $`V_{nn^{}}(X)=\langle n|V(X+y)|n^{}\rangle `$. Our choice of a gaussian potential $`V`$ enables us to calculate $`V_{nn^{}}(X)`$ by a numerical iteration procedure. Equation (5) is supplemented with the standard boundary conditions: (a) the incoming wave ($`X\to -\infty `$) is in a state of given center-of-mass momentum $`P`$ and excitation number $`n`$; (b) only outgoing waves exist at $`X\to +\infty `$.
To solve the system (5) numerically, we introduce a lattice with equal spacing, $`X_k=ka`$, and discretize eq. (5) using the Numerov–Cowling algorithm (which reduces the discretization error to $`O(a^6)`$). We also truncate the system to a finite number of oscillator modes $`n\le N_0`$. In order to insure good accuracy of the solution, we have chosen the number of lattice sites $`2N_X`$ and the cutoff $`N_0`$ as large as $`2N_X=2\cdot 4096`$, $`N_0=400`$. This corresponds to over 3 million coupled complex equations. To deal with them, we take advantage of the special form of Eq. (5). Indeed, by inverting a set of $`(N_0+1)\times (N_0+1)`$ matrices, which is computationally feasible, Eq. (5) can be recast in the form $`\psi _n(X_k)=\sum _{n^{}}[L_k\psi _{n^{}}(X_{k-1})+R_k\psi _{n^{}}(X_{k+1})]`$. The elimination of $`\psi _n`$ at definite $`X_k`$ leads to a system of similar form for the remaining variables (with suitably redefined $`L`$ and $`R`$), again after $`(N_0+1)\times (N_0+1)`$ matrix algebra and matrix inversion. In this way we progressively eliminate variables at intermediate values of $`X_k`$ and ultimately obtain a system that linearly relates $`\psi _n`$ at the end points $`X=-N_Xa`$ and $`X=+N_Xa`$. With a discretized version of the boundary conditions (a) and (b), this final system is straightforward to solve. The transmission probability is then determined by $`|\psi _n|^2`$ at the end point $`X=+N_Xa`$.
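The coupled-channel elimination above is specific to the multi-component system; as a minimal single-channel illustration of the Numerov ingredient (a sketch with illustrative parameters, not the authors' code), one can compute the transmission through a one-dimensional barrier by integrating from the transmitted side:

```python
import numpy as np

def transmission(E, V, x):
    """Transmission probability through a 1D barrier V(x) at energy E.
    Units: hbar = 1, mass = 1/2, so psi'' = (V - E) psi and k = sqrt(E)."""
    h = x[1] - x[0]
    f = V - E
    k = np.sqrt(E)
    w = 1.0 - h**2 * f / 12.0                 # Numerov weights
    psi = np.zeros(len(x), dtype=complex)
    psi[-1] = np.exp(1j * k * x[-1])          # purely outgoing wave, amplitude 1
    psi[-2] = np.exp(1j * k * x[-2])
    for n in range(len(x) - 2, 0, -1):        # Numerov three-term recursion, leftward
        psi[n-1] = ((12.0 - 10.0 * w[n]) * psi[n] - w[n+1] * psi[n+1]) / w[n-1]
    # on the incoming side decompose psi = A exp(ikx) + B exp(-ikx)
    a0, a1 = np.exp(1j * k * x[0]), np.exp(1j * k * x[1])
    det = a0 / a1 - a1 / a0
    A = (psi[0] / a1 - psi[1] / a0) / det
    return 1.0 / abs(A)**2                    # |t|^2 / |A|^2 with t = 1

x = np.linspace(-40.0, 40.0, 8001)
print(transmission(0.5, np.exp(-x**2 / 2.0), x))   # gaussian barrier, sub-barrier energy
```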
We performed a series of checks of this numerical procedure to insure that our calculations are sufficiently precise and that the results are close to the continuum limit.
We present in Figs. 2 and 3 the results of the full quantum mechanical computation of the transmission probability for the system initially in its oscillator ground state. The potential $`V`$ is gaussian, and $`\omega =1/2`$. Figure 2 shows that the transmission probability $`T_0(E)`$ indeed has the functional form (3): at fixed $`ϵ=g^2E`$, the logarithm of $`T_0`$ is very well fit by a linear function of $`g^{-2}`$. We use this fit to obtain the exponent $`F_0(ϵ)`$. Both the full quantum mechanical results for $`F_0(ϵ)`$ and the semiclassical results (the latter obtained by implementing the limiting procedure (4)) are shown in Fig. 3.
5. Full quantum mechanical computations (analytic or numerical) of barrier penetration probabilities are rarely possible. Even for our simplified system, values of $`g`$ smaller than $`0.1`$ are difficult to study, as one has to deal with very small transmission coefficients. On the other hand, limitations of the semiclassical computations are far less severe. The generalization of the semiclassical approach to quantum-mechanical systems with harmonic binding of more than two particles in more than one space dimension is straightforward, and we also expect that other binding potentials may be treated in a similar way provided their semiclassical wave functions are known. Indeed, in all such cases the transmission amplitudes with highly excited initial states will be given by (path) integrals of exponential functions, and the tunneling exponents will be determined by appropriate stationary points. The latter will be complex solutions to classical field equations on contours in complex time, with boundary conditions depending on the binding potential. A limit analogous to Eq. (4) will then determine the tunneling exponent for incoming systems in low lying bound states.
The semiclassical calculability of pre-exponential factors is less clear. While it is plausible that these factors are given by functional determinants about complex classical solutions for highly excited incoming states (finite $`\nu `$ in our model), we do not expect that a limiting property similar to Eq. (4) will continue to hold for the pre-exponents. The calculation of such pre-exponential factors for low lying states remains an interesting open problem.
Acknowledgements. We are indebted to P. Tinyakov for helpful discussions. This research was supported in part under DOE grant DE-FG02-91ER40676, Russian Foundation for Basic Research grant 96-02-17449a and by the U.S. Civilian Research and Development Foundation for Independent States of FSU (CRDF) award RP1-187. Two of the authors (C.R. and V.R.) would like to thank Professor Miguel Virasoro for hospitality at the Abdus Salam International Center for Theoretical Physics, where part of this work was carried out.
|
no-problem/9901/astro-ph9901395.html
|
ar5iv
|
text
|
# Conference Summary
## 1. Luminous Halo
The stellar halo of the Galaxy contains only a small fraction of its total luminous mass, but the kinematics and abundances of halo stars, globular clusters, and the dwarf satellites contain imprints of the formation of the entire Milky Way (e.g., Eggen, Lynden–Bell & Sandage 1962; Searle & Zinn 1978).
### 1.1. Field stars: abundances and ages
We have heard results from two long-term systematic programs on halo stars. The first is by Carney, Latham & Laird, and is based on 1450 kinematically selected stars for which radial velocities, proper motions, and abundance information have been collected, and orbits have been calculated. Beers, Norris and Ryan reported preliminary results based on a sample of 4660 objects from the HK Survey (Beers, Preston & Shectman 1992), for which radial velocities, broad-band colors and abundance indicators are available, but no proper motions (yet). These studies show that very few really metal–poor stars (\[Fe/H\] $`<-2.0`$) occur in the halo. Below \[Fe/H\]$`=-2.0`$ the numbers decrease by about a factor of 10 for every dex in \[Fe/H\]. Norris showed that despite much effort, and some false claims, there seems to be no evidence for stars significantly more metal poor than \[Fe/H\]$`=-4.0`$.
Measurements of detailed element ratios for \[Fe/H\]$`<-3.0`$ reveal a significant range in C and N abundances at fixed \[Fe/H\], which may well reflect the localised shot noise of individual enrichment events caused by early supernovae (e.g., Audouze & Silk 1995). This abundance variation is most evident in $`r`$-process elements, which suggests that it was put in place by Type II supernovae during the earliest epoch of star formation, before there was time for intermediate mass stars to return $`s`$-process elements to the interstellar medium. Laird showed that \[O/Fe\] measurements support this. This then suggests that the most metal-poor stars were indeed all formed in a short time interval (less than 1 Gyr), some 12–14 Gyr ago. However, Fujimoto argued that the observed abundance variations may also be caused by internal pollution during the evolution of very metal-poor stars.
Independent and more direct measurements of ages come from two sources. The HIPPARCOS parallaxes of nearby low-metallicity field halo stars allow them to be placed very accurately in the Hertzsprung–Russell diagram. Comparison with theoretical isochrones then results in ages between 11–13 Gyr (Reid 1997). The direct measurement of the Th/Eu ratio, which is a radioactive clock, gives similar ages (Cowan et al. 1997), albeit with a somewhat larger uncertainty.
### 1.2. Globular Clusters
Work continues on the ages and metallicities of globular clusters. Vandenberg described the latest improvements on the theoretical stellar models. Sarajedini reminded us that the \[Fe/H\]-scale has recently been recalibrated by Carretta & Gratton (1997). Fortunately, the correction relative to the earlier scale of Zinn & West (1984) is monotonic, but it is non-linear, and does affect not only the inferred ages of individual globular clusters, but also the commonly accepted properties of the ensemble of clusters. For example, the well-known bimodal \[Fe/H\] distribution of the disk and halo globular clusters may be much less pronounced than was generally assumed.
Piotto described results derived from systematic programs of groundbased and Hubble Space Telescope (HST) observations aimed at obtaining accurate and homogeneous color-magnitude diagrams that reach down deep enough to include the key regions that are used for age determinations. The main results are that (i) nearly all globular clusters show a remarkably small age-spread, of about 1 Gyr or less, (ii) the ages of the halo clusters do not correlate with \[Fe/H\], and (iii) there is a hint that the clusters at the largest galactocentric radii might be slightly younger. The globular cluster ages agree with those of the oldest and most metal-poor field halo stars, and with those of the oldest populations seen in the dwarf spheroidal companions. The debate on the precise value of the age of all these oldest populations continues (e.g., Mould 1998), but for the purpose of this summary, I will take it to lie in the range of 12–14 Gyr.
Assuming that the evolutionary tracks will continue to improve, there are two areas where progress can be made in the next few years. It became evident during the discussions that the various groups that are interpreting color-magnitude diagrams need to agree on a methodology, so that e.g., the same quantities are being measured. Secondly, the beautiful study of NGC 6397 by King et al. (1998) shows that HST proper motions based on WFPC2 images taken less than three years apart allow a clean separation of members and field halo stars, and provide a much improved color-magnitude diagram, extending significantly deeper than similar ground-based work (e.g., Cudworth 1997). This approach can easily be extended to other nearby clusters, in particular to Piotto’s large sample for which first-epoch exposures are already in the HST archive.
### 1.3. Ghostly Streams
Scenarios for galaxy formation suggest that the Galactic halo should contain substructure: extra-tidal material around globular clusters, tidally disrupted small satellites, and effects of the interaction of the Milky Way with the Large and Small Magellanic Clouds. Persistent hints for velocity clumping in high-$`|z|`$ samples of field halo giants can be found in many papers over the past decade (e.g., Freeman 1987; Majewski, these proceedings). The discovery of the Sagittarius dwarf (Ibata et al. 1994), provided direct and dramatic evidence for a fairly massive dwarf galaxy that is in the process of tidal disruption in the Galactic halo. Mateo showed that the extension of Sgr can now be traced over 34 degrees and counting. Good kinematics and distances are needed to analyze this further. Grillmair et al. (1995) presented evidence for tidal streamers associated with globular clusters. Much work in this general area has been done by Majewski, who reviewed the evidence for retrograde/direct motion at high/low $`|z|`$, and reported that the stars associated with the gaseous Magellanic Stream (§1.4) have now also been found.
The HIPPARCOS Catalog contains a few hundred local halo stars, and the accurate absolute proper motions combined with the available radial velocities allow analysis of their space motions. Chiba showed that while there are Galactic disk stars with $`-1.6<`$ \[Fe/H\] $`<-1.0`$, there is no sign of the disk for \[Fe/H\]$`<-1.6`$, and little evidence for clumping in velocities (but see Helmi et al. 1999). The HIPPARCOS sample is small, and is somewhat of a mixed bag, so it is not easy to interpret the data. And, as Moody & Kalnajs illustrated in a poster contribution, one has to be very careful with extrapolating local halo measurements to larger distances.
The hints for substructure have triggered work to develop better detection methods. The Great Circle Method of Lynden–Bell & Lynden-Bell (1995), developed for spherical geometry, suggests that the satellite dwarf galaxies may not all be on independent orbits, but together occupy a small number of orbits, i.e., are parts of ‘ghostly streams’. Majewski showed that the orbits of the globular clusters display similar signatures. A natural next step is to apply this kind of analysis to individual stars with good distances and motions, such as the Carney, Latham & Laird (1996) sample.
Tidal stripping in a spherical potential is reasonably well understood, and was the main topic of the talk by Johnston. The spherical approximation applies at large galactocentric radii $`r`$, and predicts a coherent structure in $`𝐫`$ and $`𝐯`$ space. Applications include the tidal tails of globular clusters and the Magellanic Stream. And as Zhao et al. showed in a poster contribution, if one knows which stars belong to the stream through independent means, the observed kinematics can be used to constrain the Galactic potential, but only if proper motions of sufficient accuracy are available.
The first results of a systematic observational program for finding substructure in the inner, flattened halo were presented by Harding et al., based on a strategy derived from N-body simulations of their ‘spaghetti’ model (which graced the announcement poster for this meeting). Helmi & White presented a powerful analytic formalism to analyze tidal disruption in a flattened potential. In this case a coherent structure remains in $`𝐯`$–space, but not in configuration space. Picking out disrupted streams is therefore more difficult, but is possible with good kinematic data which includes astrometry. The properties of the progenitor can then be inferred from the current measurements, and in this way one can hope to reconstruct the merging history of our own Galaxy.
There was much interest in the details of the encounter of the Large and Small Magellanic Clouds (LMC/SMC) with the Milky Way (MW). Weinberg showed preliminary results of a detailed analysis based on N-body simulations and normal mode calculations. He computed the effect of the LMC on the MW halo and disk, and finds that the LMC can induce the observed warp as well as lopsidedness in the Milky Way, provided the LMC is sufficiently massive. In turn, the MW tidal field puffs up the LMC. Weinberg finds a total mass of the LMC of about $`2\times 10^{10}M_{\odot }`$, considerably larger than previous estimates, perhaps suggesting it has its own dark halo. Photometric evidence for a stellar halo around the LMC was presented by Olszewski. This is relevant for the interpretation of the observed microlensing events towards the LMC (§2.2).
Zhao described a scenario in which the Sgr dwarf was involved in an encounter with the LMC/SMC at a particular time in the past, which dropped it into a lower orbit. The a priori probability of this is modest, but it does tie together a number of apparently unrelated events, and provides a way for Sgr to survive long enough to be torn apart only in its current perigalacticon passage. It is intriguing that we may obtain an independent handle on these orbital issues by determination of the star formation history in the satellites. This may well have been punctuated by short bursts of star formation, coinciding with close encounters or merging. Clearly a full model is needed of the LMC/SMC/Sgr interaction with the MW, including both the stars and the gas (§1.4). The data to constrain this is quite detailed, and this nearest encounter may tell us much.
### 1.4. Gas: neutral and ionized
Many of the participants who work on properties of the gaseous Galactic halo attended the High Velocity Cloud (HVC) workshop at Mount Stromlo, just prior to this meeting. For a detailed summary, see Wakker, van Woerden & Gibson (1999). A number of key results were presented again during this Symposium, and I will mention these briefly.
For many years the distribution of high-velocity gas seemed chaotic, at least to the non-initiate. The publication of the Leiden–Dwingeloo Survey (Hartmann & Burton 1997), and the imminent completion of its southern extension with the radio telescope in Villa Elisa, Argentina, is a milestone for the study of neutral gas in the halo, as it provides a uniform dataset, with improved sensitivity and angular resolution. This progress is evident on comparing Wakker’s summary map of the HVC’s with the discovery map of the HI in the Magellanic Stream (Mathewson et al. 1974). The ambitious HIPASS project at Parkes is adding further sensitivity and resolution. My understanding of the results presented here is that a coherent picture is finally emerging in which (i) some HVC’s are connected to the HI in the Galactic disk, (ii) much of it is connected to the Magellanic Stream, with the beautiful HIPASS data of Putman et al. (1998) now also showing the leading arm predicted by numerical simulations of the encounter of the Magellanic Clouds with the Milky Way (e.g., Gardiner & Noguchi 1996; §1.3), and (iii) steady accretion of material either from the immediate surroundings of the Galaxy (as suggested by Oort 1970), or from within the Local Group (as advocated by Blitz at this meeting). Van Woerden and Wakker showed that absorption line studies and metallicity measurements are—at long last—starting to constrain the distances of individual clouds. Open issues include the possibility of hydrodynamic effects influencing the velocities (discussed by Benjamin and Danly), and the nature of the dense clumps seen in the HIPASS data.
Ionized halo gas remains elusive. Kalberla, Dettmar and Danly showed that X-rays and H$`\alpha `$ emission indicate the presence of $`10^6`$ K gas to $`|z|\sim 4`$ kpc away from the disk. Maloney showed that the constraints on the total amount of this material remain rather weak, an issue to which I shall return in §2.4. The dispersion measures of pulsars in the LMC may help here, as discussed by Bailes, but the sample of objects is still small.
## 2. Dark Halo
The main topics discussed were the total mass and extent of the dark halo, the constraints on the mass of halo objects provided by microlensing experiments, and the nature of the dark matter.
### 2.1. Mass and Extent
Zaritsky summarized what we know about the mass of the Galaxy. The rotation curve is well established inside a galactocentric radius of 20 kpc, and constrains the mass profile fairly accurately. Outside this radius the main constraints are distances and radial velocities of the globular clusters, the dwarf satellites, and M31. A variety of methods and arguments show that all measurements to date are consistent with a model in which the mass distribution is essentially an isothermal sphere with a constant circular velocity $`v_c\sim 180`$ km/s. Out to a distance of $`\sim 300`$ kpc—nearly halfway to M31—this corresponds to a mass of about $`2\times 10^{12}M_{\odot }`$. The average mass-to-light ratio is over 100 in solar units, so most of this matter is dark, or at least severely underluminous (§2.4).
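As a quick arithmetic check of these numbers (the isothermal-sphere relation $`M(<r)=v_c^2r/G`$ and the physical constants below are standard inputs, not taken from the talks), one can verify that a flat rotation curve of 180 km/s out to 300 kpc indeed implies an enclosed mass of order $`2\times 10^{12}M_{\odot }`$:

```python
# Back-of-the-envelope check: mass enclosed by an isothermal sphere,
# M(<r) = v_c^2 * r / G, for the values quoted above.
G = 6.674e-11          # m^3 kg^-1 s^-2
MSUN = 1.989e30        # kg
KPC = 3.086e19         # m

v_c = 180e3            # circular velocity in m/s
r = 300 * KPC          # galactocentric radius in m

M = v_c**2 * r / G     # enclosed mass in kg
print(f"M(<300 kpc) = {M / MSUN:.2e} Msun")   # ~2e12 Msun
```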
The uncertainties in the halo mass profile remain significant, not only at the largest radii, but even inside the orbit of the LMC. This affects the determination of the mass fraction of MACHOs in the halo (§2.2). Substantial improvement will have to await better distances and in particular more accurate absolute proper motions for the distant globular clusters and satellites. This is not easy, as the required accuracy for Leo I and II is about 10 $`\mu `$as/yr (but see § 4). The derived space motions would leave the nature of the orbits as the only remaining uncertainty in the determination of the Galactic potential. The fact that some satellites and clusters may actually be on the same orbit is a complication.
### 2.2. Microlensing
Enormous effort world-wide is being put into microlensing studies of the Galactic Bulge, the LMC, and the SMC. Alcock summarized this in a public evening lecture. At the Symposium, he, Stubbs, and Perdereau gave more detailed status reports of the MACHO and EROS projects. The main triumph of these projects, derived from the 20 events seen to date towards the LMC, is that point-like objects in the mass range $`10^{-7}<M/M_{\odot }<10^{-2}`$ can at most form a minor constituent of the halo, and hence do not form the bulk of the dark matter that is tied to the Galaxy. This eliminates most objects of substellar mass, including planets as small as the Earth. Furthermore, the events seen towards the LMC have a fairly narrow duration distribution, which differs from that seen towards the Galactic Bulge. If these events are caused by halo objects, they must have masses in a rather narrow range around 0.5 $`M_{\odot }`$. Bennett showed how microlensing events which deviate from the standard lightcurve may be used to constrain the nature of the lenses further, and he reminded us that a major uncertainty is the unknown binary fraction in the lens population (e.g., DiStefano 1999).
The precise MACHO mass fraction of the halo is not easy to determine, not only because the number of observed events is still modest, but also because the halo mass distribution out to the distance of the LMC is not known very well (§2.1). It is possible that the MACHOs are not in the dark halo at all. Flaring of the disk and/or the warp of the Milky Way, or the presence of another intervening object have been considered, but it now seems unlikely that these can provide the entire set of observed events (e.g., Gyuk, Flynn & Evans 1999). Weinberg’s theoretical models of the LMC/MW interaction, and Olszewski’s analysis of the color-magnitude diagram of the LMC indicate that self-lensing by the LMC, in particular by its own stellar halo, may well be quite significant. Their results allow the possibility that all the lenses are in the LMC itself, as suggested already by Sahu (1994). This would require lens masses smaller than 0.5 $`M_{\odot }`$, which is plausible. However, if this is the explanation for the observed events, it then remains a puzzle why the duration distribution is different from that in the Bulge. The plans for the future outlined by Stubbs in the preceding talk address these issues in more detail.
‘Byproducts’ of the microlensing surveys are massive homogeneous samples of variable stars in the LMC/SMC and also in the Bulge. These are a veritable gold mine for constraining stellar models, and for tracing Galactic structure. For example, Minniti showed that the RR Lyrae samples in the Bulge allow an accurate determination of the luminosity profile of the inner halo to very small galactocentric radii.
### 2.3. How many kinds of dark matter?
Turner presented his beautifully illustrated ‘Audit of the Universe’. This has seen considerable improvement in the past year driven by new observations. In particular, the distant supernovae projects (Schmidt et al. 1998; Perlmutter et al. 1999) indicate a non-zero cosmological constant $`\mathrm{\Lambda }`$. Turner writes the total mass and energy density as $`\mathrm{\Omega }_0=\mathrm{\Omega }_M+\mathrm{\Omega }_\mathrm{\Lambda }`$, and finds (to within factors of less than two, and in units of the critical density):
This means that while the baryon density $`\mathrm{\Omega }_{\mathrm{baryon}}`$ is ten times larger than the density of luminous matter $`\mathrm{\Omega }_{\mathrm{lum}}`$, it is still only ten percent of the total matter density of the Universe $`\mathrm{\Omega }_M`$. The ‘dark energy’ $`\mathrm{\Omega }_\mathrm{\Lambda }`$ brings $`\mathrm{\Omega }_0`$ to the critical value within the uncertainties. As Turner pointed out, this state of affairs raises a number of fascinating questions, two of which are most relevant for this conference. If $`\mathrm{\Omega }_{\mathrm{baryon}}`$ is in fact in galaxy halos, which is plausible given that $`\mathrm{\Omega }_{\mathrm{galaxies}}`$ is of the same order, then non-baryonic dark matter is needed at the scales of clusters and larger in order to make up the difference between $`\mathrm{\Omega }_{\mathrm{baryon}}`$ and $`\mathrm{\Omega }_M`$. What is this material? If, on the other hand, we want only one kind of dark matter on all scales (e.g., for reasons of simplicity), then the question becomes: where are all the baryons? These questions lead directly to the issue of dark matter composition, to which I now turn.
### 2.4. Composition
Many talks were devoted to the nature of the dark matter. With a few notable exceptions, most speakers were more sure about what the dark mass is not made of than about what it is, a conclusion also reached by Silk, who managed to summarize most of these contributions on the first day of the conference, i.e., before they were presented!
Brown dwarfs. Tinney and Flynn reported on programs aimed at establishing the number density of low-mass objects in the halo, using ground-based and HST observations. The halo samples contain objects at a typical distance of 2 kpc, with a tail extending beyond 10 kpc. The observed mass functions extend somewhat below 0.08 $`M_{\odot }`$, which marks the transition from stars to substellar objects. The number density increases with decreasing mass, and perhaps turns over at the lowest masses—in agreement with the microlensing results (§2.2). In any case, smooth extrapolation of the measurements to lower masses shows that these objects cannot provide all the dark mass in the Galactic halo. A caveat is that the results are based on calibrations of local disk objects, which presumably are more metal-rich than field halo objects.
Compact objects. Much work was done in the past two years to investigate whether white dwarfs could be the major constituent of the dark halo. This activity was triggered by the microlensing events seen towards the LMC, which suggest a typical lens mass around 0.5 $`M_{\odot }`$, and the white dwarf nature of dark matter was defended with great vigor by Chabrier. However, it seems that having white dwarfs in sufficiently large numbers requires a special initial mass function early on, i.e., a non-standard star formation history. This cannot be ruled out a priori, but the resulting inevitable metal-enrichment is hard to hide (Gibson & Mould 1997). Flynn showed that HST has not seen these white dwarfs, but perhaps one needs to go even fainter. Goldman reported that a proper motion survey being carried out by the EROS team will settle this issue by providing a strong local constraint. Silk briefly discussed the possibility that the dark halo consists mostly of neutron stars, and concluded that these suffer from similar problems as the white dwarfs, and furthermore, their masses are inconsistent with the microlensing results.
Cold gas. In the past few years, a number of authors have suggested that perhaps the dark matter consists of ultracold (4 K) clumps of H₂, with masses of about $`10^{-3}M_{\odot }`$, diameters of 30 AU, and densities of $`10^{10}\mathrm{cm}^{-3}`$ (e.g., Pfenniger et al. 1994; Gerhard & Silk 1996). The current microlensing experiments are not sensitive to such clumps, as they are extended objects rather than effective point masses. They have never been observed directly in the local interstellar medium. There is strong disagreement on the presumed location of this material. Pfenniger suggested that this dark matter is in a large outer disk, while Walker proposed that it is in a spheroidal halo. This latter suggestion has the advantage that the ionized and evaporating outer envelopes of these clumps could be responsible for the so-called extreme scattering events seen in radio observations (Fiedler et al. 1987). While Chary reported that the expected cosmic-ray induced gamma rays are not seen, recent work by Dixon et al. (1998) suggests that they are evident in the EGRET data.
Ionized Gas. Kahn & Woltjer (1959) established the mass of the Milky Way and M31 through their famous timing argument, and suggested that most of the unseen mass could be ionized gas of about $`10^6`$ K, which is very hard to detect. Maloney summarized the best observational constraints on the total amount of such gas, and showed that the limits are no stronger than they were 40 years ago. Kalberla showed recent ROSAT evidence for such gas, and derived a scale-height of about 4 kpc. The very sensitive H$`\alpha `$ surveys that are now possible with TAURUS-2 (Bland–Hawthorn et al. 1998) and with the WHAM camera (Tufte et al. 1998) should allow a measurement of the total amount of ionized gas in the near future. This may be the best bet for baryonic dark matter attached to the Galaxy (cf. Fukugita, Hogan & Peebles 1998).
Nonbaryonic dark matter. All of the above candidates for the dark matter are baryonic. Assuming Turner’s audit is correct, this can make up only 10% of the total amount of matter in the Universe. It is natural to assume that this material is associated with galaxies. The remaining 90% of the mass in the Universe (the difference between $`\mathrm{\Omega }_{\mathrm{baryon}}`$ and $`\mathrm{\Omega }_M`$) then has to be made up of non-baryonic material. Sadoulet, Silk and Turner discussed the candidates, which include massive neutrinos, axions and neutralinos. It seems unlikely that the neutrinos provide all the unseen mass on large scales, because (i) they would not constitute the cold dark matter that is currently favored by theories of structure formation, and (ii) the required neutrino mass of $`\sim 25`$ eV seems to be ruled out by experiments. To date, there is no experimental evidence for axions or neutralinos, but the laboratory sensitivity is expected to improve considerably in the coming year. And finally, Silk argued that perhaps primordial black holes are the culprit, a most fascinating suggestion.
## 3. Other Galaxies
Surface photometry of edge-on disk galaxies to $`V\sim 28`$ mag/arcsec² by Morrison and by Yock shows that these systems display a variety of luminosity profiles perpendicular to the disk. This may indicate an outer bulge, a thick disk, or perhaps a luminous stellar halo. It will be very interesting to try to go to fainter limiting magnitudes, and to enlarge the sample to search for correlations with e.g., Hubble type. Determination of the mass distribution in other disk galaxies is based mostly on HI rotation curves, and gives strong evidence for extended halos of dark matter, consistent with the findings for our own Galaxy. Unfortunately, Kalnajs fell ill, and was unable to present his views on this topic. Bland–Hawthorn showed convincing evidence that H$`\alpha `$ measurements can now probe the mass distribution beyond the HI edge, even though the effort required can only be described as heroic.
The luminous halos of galaxies in the Local Group can be studied in more detail. Sarajedini showed color-magnitude diagrams for ten globular clusters in M33, obtained with HST. While some of these clusters have the same age as the Galactic globular clusters, and are just as metal-poor, others are several Gyr younger, and have intermediate metallicities, indicating a more extended halo formation process. Freeman showed that M31 is very similar to the Milky Way, albeit a little more massive and with a larger bulge. The kinematics, abundances, and ages of the M31 globular clusters are very similar to those in the Milky Way, suggesting they must also have formed quickly. However, the field stars in the M31 halo are decidedly more metal-rich than those in the Galaxy. It is not clear how this comes about, especially since a significant fraction of the halo field stars must have been tidally dislodged from clusters. The kinematics of the M31 field halo stars should contain further clues to their formation, and can now be studied in detail through the set of over 1000 planetary nebulae for which radial velocities have been measured. Still closer in, Da Costa and Olszewski showed that satellite dwarf spheroidals and the Magellanic Clouds all have experienced different star formation histories, as is evident from the beautiful variety of color-magnitude diagrams. With the possible exception of the SMC, the oldest populations invariably have the same age as the Galactic globular clusters. Clearly, around 12–14 Gyr ago the first generation of stars was formed synchronously throughout the entire Local Group.
## 4. Observational prospects: towards a stereoscopic census
The currently popular formation scenario was summarized by Wyse and by White. It assumes a hierarchical build-up of structure in a cold dark matter universe, with the baryons collecting inside dark halos, and forming disks. Elliptical galaxies and bulges then result from major mergers, while disk galaxies remain disks only as long as they steadily accrete no more than small satellites (e.g., Baugh, Cole & Frenk 1996). This scenario then suggests there should be signs of ongoing accretion and fossil substructure in the Galactic halo, as is indeed observed. The next step is to test this formation scenario quantitatively through a detailed comparison with the observed properties of the Galaxy, and notably its halo. This is a crucial complement to high-redshift studies of galaxy formation, and should provide the detailed formation history of our Galaxy.
The theoretical tools to analyse halo substructure that are now being developed (e.g., Helmi & White 1999) demonstrate the need for accurate kinematic data for large samples of halo objects. The ongoing Hamburg/ESO objective prism survey, outlined in a poster by Christlieb, covers 10000 square degrees and promises to extend the HK Survey (§1.1) to about 20000 candidate halo objects. Multi-object spectroscopy by 2DF or SLOAN will provide radial velocities and abundances (see poster by Pier). The HIPPARCOS Catalog contains globally accurate proper motions with accuracies of $`\sim 1`$ mas/yr for a few hundred bright and nearby halo stars. Combination of the Tycho positions for nearly 3 million stars to $`V\sim 12`$ with those in the Astrographic Catalog, which are each of modest individual accuracy but have an epoch difference of about 80 years, will provide proper motions of 2–3 mas/yr (Hoeg et al. 1998). The resulting TRC/ACT database will contain about 30000 halo stars, and should be available in a year or two. As 2 mas/yr translates to 10 km/s at 1 kpc, this will allow space motions to be derived for halo stars out to distances of a few kpc. The USNO-B Catalog reaches about five magnitudes fainter, and can be put on the HIPPARCOS/Tycho reference system, but the accuracies of individual proper motions will be only $`8`$ mas/yr (Monet 1997), making them of modest value for the study of the Galactic halo.
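The proper-motion-to-velocity conversions quoted here (and again in §4) follow from the standard relation $`v_t=4.74\mu d`$, with $`\mu `$ in arcsec/yr and $`d`$ in pc; the short check below is purely illustrative:

```python
# Proper motion (mu) and distance (d) to transverse velocity:
# v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
def transverse_velocity(mu_mas_per_yr, d_kpc):
    mu_arcsec = mu_mas_per_yr / 1000.0   # mas/yr -> arcsec/yr
    d_pc = d_kpc * 1000.0                # kpc -> pc
    return 4.74 * mu_arcsec * d_pc       # km/s

print(transverse_velocity(2.0, 1.0))      # ~9.5 km/s: "2 mas/yr translates to 10 km/s at 1 kpc"
print(transverse_velocity(0.010, 100.0))  # ~4.7 km/s: 10 micro-arcsec/yr at 100 kpc (cf. Section 4)
```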
The success of HIPPARCOS has resulted in further interest in space astrometry, leading to the ongoing ESA study of a mission-concept called GAIA (e.g., Gilmore et al. 1998). Interest in imaging terrestrial extrasolar planets is pushing NASA’s Space Interferometry Mission. SIM will be a pointed observatory, and will provide proper motions and parallaxes of micro-arcsecond ($`\mu `$as) accuracy by the end of the next decade. If approved, GAIA will provide parallaxes and absolute proper motions of similar quality for all the one billion stars to $`V\sim 20`$, together with radial velocities and photometry or low-resolution spectroscopy for most objects brighter than $`V\sim 17`$, by 2015. This is a long time, but the tremendous gain over HIPPARCOS, summarized below, will be worth the wait.
Comparison of HIPPARCOS and GAIA
This is not the place to summarize the entire scientific rationale for $`\mu `$as astrometry from space, and its impact on all aspects of the structure of the Galaxy, stellar physics, detection of planets around other stars, etc. The relevance for the study of the Galactic halo is best illustrated by considering the mind-boggling accuracy of the SIM and GAIA proper motions: an uncertainty of 10$`\mu `$as/yr translates to 5 km/s at 100 kpc! This will make it possible to measure the space motions of globular clusters and all the dwarf satellites, including Leo I and II. The unbiased coverage of the entire sky by GAIA will allow identification of very large halo samples from the narrow-band photometry, and provide measurements of the full six-dimensional phase space information (positions and velocities) throughout the inner halo (to distances of about 20 kpc from the Sun), and full velocity information on individual stars to much larger distances. This will e.g., (i) provide the mass distribution of the Galaxy with unprecedented accuracy, (ii) allow kinematical selection of the members of the globular clusters and dwarf satellites, leading to clean color-magnitude diagrams, and (iii) trace ghostly streams and substructure. High-resolution spectroscopic follow-up will provide the distribution of the abundances throughout the halo. This will allow reconstruction of the full formation history of the Galaxy.
## 5. Concluding remarks
This Symposium is dedicated to Alex Rodgers. Alex was well ahead of the field with his early work on metal-rich halo stars (Rodgers, Harding & Sadler 1981), and his investigation on retrograde globular cluster orbits (Rodgers & Paltoglou 1984), both of which indicated the presence of infall and substructure. He would clearly have enjoyed this meeting. The string of three successful and stimulating international Stromlo Symposia has established the series, and is a credit to Australian astronomy. We are all looking forward to number four!
#### Acknowledgments.
It is a pleasure to thank Jeremy and Joan Mould for warm hospitality during the workshop, and for introducing me to the Red Belly Black Cafe. The Leids Kerkhoven Bosscha fund kindly provided a travel grant. Ken Freeman, Amina Helmi and John Norris commented on an earlier version of the manuscript.
## References
Audouze J., Silk J., 1995, ApJ, 451, L49
Baugh C.M., Cole S., Frenk C.S., 1996, MNRAS, 283, 1361
Beers T.C., Preston G.W., Shechtman S., 1992, AJ, 103, 1987
Bland–Hawthorn J., Veilleux S., Cecil G.N., Putman M.E., Gibson B.K., Maloney P.R., 1998, MNRAS, 299, 611
Carney B., Latham D., Laird J., 1996, AJ, 112, 668
Carretta E., Gratton R.G., 1997, A&AS, 121, 95
Cowan J.J., McWilliam A., Sneden C., Burris D.L., 1997, ApJ, 480, 246
Cudworth K.M., 1997, in Proper Motions and Galactic Astronomy, ed. R.M. Humphreys, ASP Conf. Ser., 127, 91
DiStefano R., 1999, astro-ph/99010351
Dixon D., et al. 1998, New Astro, 3, 539
Eggen O., Lynden–Bell D., Sandage A.R., 1962, ApJ, 136, 748
Fiedler R.L., Dennison B., Johnston K.J., Hewish A., 1987, Nature 326, 675
Freeman K.C., 1987, ARA&A, 25, 603
Fukugita M., Hogan C.J., Peebles P.J.E., 1998, ApJ, 503, 518
Gardiner L.T., Noguchi M., 1996, MNRAS, 278, 191
Gerhard O.E., Silk J., 1996, ApJ, 472, 34
Gibson B.K., Mould J.R., 1997, ApJ, 482, 98
Gilmore G., et al., 1998, in Astronomical Interferometry, SPIE Proc. 3350, ed. R.D. Reasenberg, 541–550
Grillmair C.J., Freeman K.C., Irwin M., Quinn, P.J., 1995, AJ, 109, 2553
Gyuk G., Flynn C., Evans N.W., 1999, astro-ph/9812338
Hartmann D., Burton W.B., 1997, Atlas of Galactic Neutral Hydrogen (Cambridge Univ. Press)
Helmi A., White, S.D.M., 1999, MNRAS (astro-ph/9901102)
Helmi A., White S.D.M., Zhao H.S., de Zeeuw P.T., 1999, in preparation
Hoeg E., 1998, AA, 338, L65
Ibata R., Gilmore G., Irwin M., 1994, Nature, 370, 194
Kahn F.D., Woltjer L., 1959, ApJ, 130, 705
King I.R., Anderson J., Cool A.M., Piotto G., 1998, ApJ, 492, L37
Lynden–Bell D., Lynden–Bell R.M., 1995, MNRAS
Mathewson D.S., Cleary M.N., Murray J.D., 1974, ApJ, 190, 291
Monet D., 1997, in Proper Motions and Galactic Astronomy, ed. R.M. Humphreys, ASP Conf. Ser., 127, 31
Mould J.R., 1998, Nature, 395, Supp., A22
Oort J.H., 1970, AA, 7, 381
Perlmutter S., et al., 1999, ApJ, in press (astro-ph/9812133)
Pfenniger D., Combes F., Martinet L., 1994, AA, 285, 79
Putman M.E., Gibson B.K., Staveley–Smith L., et al., 1998, Nature, 394, 752
Reid I.N., 1997, AJ, 114, 161
Rodgers A.W., Harding P., Sadler E.M., 1981, ApJ, 244, 912
Rodgers A.W., Paltoglou G., 1984, ApJ, 283, L5
Sahu K., 1994, Nature, 370, 275
Schmidt B.P., et al., 1998, ApJ, 507, 46
Searle L., Zinn R., 1978, ApJ, 225, 357
Tufte S.L., Reynolds R.J., Haffner L.M., 1998, ApJ, 504, 773
Wakker B., van Woerden H., Gibson B.K., 1999, in Stromlo Workshop on High Velocity Clouds, eds B.K. Gibson & M.E. Putman, ASP Conf. Ser., 000, 000
Zinn R.J., West M.J., 1984, ApJS, 55, 45
# Is there a light fermiophobic Higgs?
## I Introduction
Despite the great success of the standard $`SU(2)\times U(1)`$ electroweak model (SM), one of its fundamental principles, the spontaneous symmetry breaking mechanism, still awaits experimental confirmation. This mechanism, in its minimal version, requires the introduction of a single doublet of scalar complex fields and gives rise to the existence of a neutral particle with mass $`m_H`$. The combined analysis of all electroweak data as a function of $`m_H`$ favors a value of $`m_H`$ close to $`100GeV/c^2`$ and predicts with 95% confidence level an upper bound of $`m_H<200GeV/c^2`$. Hence, one can still envisage the possibility of a Higgs discovery in the closing stages of the LEP operation.
Nevertheless, even if this turned out to be true, one still would like to know if there is just one family of Higgs fields or, on the contrary, if nature has decided to replicate itself. In our view this is the main motivation to consider multi Higgs models. In this paper we continue the study of the two-Higgs-doublet model (2HDM). Following our previous work, we examine models without explicit CP violation and which are also naturally protected from developing a spontaneous CP breaking minimum. There are two different ways of achieving this. To illustrate the different phenomenology we calculate, in both models, the decay width for the process $`h^0\rightarrow \gamma \gamma `$, which can be particularly relevant if $`h^0`$ is a fermiophobic Higgs.
## II The potentials
The Higgs mechanism in its minimal version (one scalar doublet) introduces in the theory an arbitrary parameter — the Higgs boson mass $`m_H`$. In fact, the potential depends on two parameters, which are the coefficients of the quadratic and quartic terms. However, the perturbative version of the theory replaces them by the vacuum expectation value $`v=247GeV`$ and by $`m_H`$. If we generalize the theory introducing a second doublet of complex fields, the number of free parameters in the potential $`V`$ grows from two to fourteen. At the same time, the number of scalar particles grows from one to four. In this general form the potential contains genuine new interaction vertices which are independent of the vacuum expectation values and of the mass matrix of the Higgs bosons. However, these new interactions can be avoided if one imposes the restriction that $`V`$ is invariant under charge conjugation $`C`$. In fact, if $`\mathrm{\Phi }_i`$ with $`i=1,2`$ denote two complex scalar doublets with hypercharge 1, under $`C`$ the fields transform themselves as $`\mathrm{\Phi }_i\rightarrow \mathrm{exp}(i\alpha _i)\mathrm{\Phi }_i^{*}`$ where the parameters $`\alpha _i`$ are arbitrary. Then, choosing $`\alpha _1=\alpha _2=0`$, and defining $`x_1=\varphi _1^{\dagger }\varphi _1`$, $`x_2=\varphi _2^{\dagger }\varphi _2`$, $`x_3=\mathrm{Re}\{\varphi _1^{\dagger }\varphi _2\}`$ and $`x_4=\mathrm{Im}\{\varphi _1^{\dagger }\varphi _2\}`$ it is easy to see that the most general 2HDM potential without explicit $`C`$ violation (at this level $`C`$ conservation is equivalent to $`CP`$ conservation, since all fields are scalars) is:
$$V=-\mu _1^2x_1-\mu _2^2x_2-\mu _{12}^2x_3+\lambda _1x_1^2+\lambda _2x_2^2+\lambda _3x_3^2+\lambda _4x_4^2+\lambda _5x_1x_2+\lambda _6x_1x_3+\lambda _7x_2x_3.$$
(1)
In general, the minimum of this potential is of the form
$`\mathrm{\Phi }_1`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}\left(\begin{array}{c}0\hfill \\ v_1\hfill \end{array}\right)`$ (2c)
$`\mathrm{\Phi }_2`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}\left(\begin{array}{c}0\hfill \\ v_2e^{i\theta }\hfill \end{array}\right),`$ (2f)
in other words it breaks $`CP`$ spontaneously. To use this potential in perturbative electroweak calculations the physical parameters that should replace the $`\lambda `$’s and $`\mu `$’s are the following:
1. the position of the minimum, $`v_1`$, $`v_2`$ and $`\theta `$, or alternatively, $`v^2=v_1^2+v_2^2`$, $`\mathrm{tan}\beta =\frac{v_2}{v_1}`$ and $`\theta `$;
2. the masses of the charged boson $`m_+`$ and of the three neutral bosons $`m_1`$, $`m_2`$ and $`m_3`$;
3. and the three Cabibbo-like angles $`\alpha _1`$, $`\alpha _2`$ and $`\alpha _3`$ that represent the orthogonal transformation that diagonalizes the $`3\times 3`$ mass matrix of the neutral sector (the mass matrix corresponding to the neutral components, $`T_3=-\frac{1}{2}`$, of the doublets is a $`4\times 4`$ matrix, but one eigenvalue is zero because it corresponds to the $`Z`$ would-be Goldstone boson).
In a previous paper we have examined the different types of extrema for potential $`V`$. In particular it was shown in that there are two ways of naturally imposing that a minimum with $`CP`$ violation never occurs. This, in turn, leads to two different 7-parameter potentials. The first one, denoted $`V_{(A)}`$, is the potential discussed in the review article of M. Sher and corresponds to setting $`\mu _{12}^2=\lambda _6=\lambda _7=0`$ in equation (1). The second 7-parameter potential, which we shall call $`V_{(B)}`$, is essentially the version analyzed in the Higgs Hunter’s Guide and it corresponds to the conditions $`\lambda _6=\lambda _7=0`$ and $`\lambda _3=\lambda _4`$. As we have already pointed out but would like to stress again, these potentials have different phenomenology. This is illustrated in section III when we consider the fermiophobic limit of both models.
Since $`V_{(A)}`$ and $`V_{(B)}`$ do not have spontaneous $`CP`$-violation, the number of so-called “physical parameters” is immediately reduced to seven. In fact, $`\theta =0`$ and only one rotation angle, $`\alpha `$, is needed to diagonalize the $`2\times 2`$ mass matrix of the $`CP`$-even neutral scalars. This is clearly seen if we transform the initial doublets $`\mathrm{\Phi }_i`$ into two new ones $`H_i`$ given by
$$\left(\begin{array}{c}H_1\\ H_2\end{array}\right)=\frac{1}{\sqrt{v_1^2+v_2^2}}\left(\begin{array}{cc}\hfill v_1& \hfill v_2\\ \hfill -v_2& \hfill v_1\end{array}\right)\left(\begin{array}{c}\mathrm{\Phi }_1\\ \mathrm{\Phi }_2\end{array}\right).$$
(3)
In this Higgs basis, only $`H_1`$ acquires a vacuum expectation value. Then, the $`T_3=+\frac{1}{2}`$ component and the imaginary part of the $`T_3=-\frac{1}{2}`$ component of $`H_1`$ are the $`W^\pm `$ and $`Z`$ would-be Goldstone bosons, respectively. The $`C`$-odd neutral boson, $`A^0`$, is the imaginary part of the $`T_3=-\frac{1}{2}`$ component of $`H_2`$. On the other hand, the light and heavy $`CP`$-even neutral Higgs, $`h^0`$ and $`H^0`$, are linear combinations of the real parts of the $`T_3=-\frac{1}{2}`$ components of $`H_1`$ and $`H_2`$.
Notice that $`V_{(A)}`$ is invariant under the $`Z_2`$ transformation $`\mathrm{\Phi }_1\rightarrow -\mathrm{\Phi }_1`$ and $`\mathrm{\Phi }_2\rightarrow \mathrm{\Phi }_2`$, whereas in $`V_{(B)}`$ only the $`\mu _{12}^2`$ term breaks the $`U(1)`$ symmetry, $`\mathrm{\Phi }_2\rightarrow e^{i\alpha }\mathrm{\Phi }_2`$. Because this breaking occurs in a quadratic term it does not spoil the renormalizability of the model. Hence, in both cases the terms that were set explicitly to zero will not be needed to absorb infinities that occur at higher orders. The complete renormalization program of the model based on $`V_{(A)}`$ was carried out in . The results for $`V_{(B)}`$ are similar but the cubic and quartic scalar vertices have to be changed appropriately.
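The two symmetry statements above can be checked symbolically. In the sketch below the variable z is my shorthand for $`\varphi _1^{\dagger }\varphi _2`$ (not the paper's notation); it confirms that $`x_3`$ is odd under the $`Z_2`$ map (so the $`\mu _{12}^2`$, $`\lambda _6`$ and $`\lambda _7`$ terms are forbidden in $`V_{(A)}`$), and that only the combination $`x_3^2+x_4^2`$ — i.e. $`\lambda _3=\lambda _4`$ — is $`U(1)`$ invariant, with $`\mu _{12}^2x_3`$ the soft-breaking term of $`V_{(B)}`$:

```python
import sympy as sp

a = sp.symbols('alpha', real=True)
z = sp.symbols('z')                              # stands for Phi1^dagger Phi2
x3 = (z + sp.conjugate(z)) / 2                   # Re(Phi1^dagger Phi2)
x4 = (z - sp.conjugate(z)) / (2 * sp.I)          # Im(Phi1^dagger Phi2)

# Z2: Phi1 -> -Phi1  means  z -> -z
print(sp.simplify(x3.subs(z, -z) + x3))          # 0 : x3 is odd, so x3, x1*x3, x2*x3 are forbidden
print(sp.simplify(x3.subs(z, -z)**2 - x3**2))    # 0 : x3^2 (and x4^2) survive

# U(1): Phi2 -> exp(i*alpha)*Phi2  means  z -> exp(i*alpha)*z
zr = sp.exp(sp.I * a) * z
x3r = (zr + sp.conjugate(zr)) / 2
x4r = (zr - sp.conjugate(zr)) / (2 * sp.I)
print(sp.simplify(sp.expand(x3r**2 + x4r**2 - (x3**2 + x4**2))))  # 0 : invariant only as the sum
print(sp.simplify(x3r - x3) == 0)                # False : mu12^2 * x3 breaks U(1) (softly)
```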
For the sake of completeness we will close this section with a summary of the results that will be used later. As we have already said they are not new and can be obtained either from or . We agree with both.
For $`V_{(A)}`$ the minimum conditions are
$`0=T_1`$ $`=`$ $`v_1\left(-\mu _1^2+\lambda _1v_1^2+\lambda _+v_2^2\right)`$ (4a)
$`0=T_2`$ $`=`$ $`v_2\left(-\mu _2^2+\lambda _2v_2^2+\lambda _+v_1^2\right)`$ (4b)
with $`\lambda _+=\frac{1}{2}(\lambda _3+\lambda _5)`$. They lead to the following solutions:
either i)
$`v_1^2`$ $`=`$ $`{\displaystyle \frac{\lambda _2\mu _1^2-\lambda _+\mu _2^2}{\lambda _1\lambda _2-\lambda _+^2}}`$ (5a)
$`v_2^2`$ $`=`$ $`{\displaystyle \frac{\lambda _1\mu _2^2-\lambda _+\mu _1^2}{\lambda _1\lambda _2-\lambda _+^2}};`$ (5b)
or ii)
$`v_1^2`$ $`=`$ $`0`$ (6a)
$`v_2^2`$ $`=`$ $`{\displaystyle \frac{\mu _2^2}{\lambda _2}}.`$ (6b)
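A minimal numerical sanity check that the nontrivial solution (5) indeed satisfies the stationarity conditions (4) as written above (the sample parameter values are arbitrary):

```python
# Check that Eq. (5) solves the brackets of Eq. (4).
mu1sq, mu2sq = 0.30, 0.20               # mu_1^2, mu_2^2 (arbitrary units)
lam1, lam2, lamp = 0.40, 0.35, 0.10     # lambda_1, lambda_2, lambda_+

den = lam1 * lam2 - lamp**2
v1sq = (lam2 * mu1sq - lamp * mu2sq) / den      # Eq. (5a)
v2sq = (lam1 * mu2sq - lamp * mu1sq) / den      # Eq. (5b)

T1 = -mu1sq + lam1 * v1sq + lamp * v2sq         # bracket of Eq. (4a)
T2 = -mu2sq + lam2 * v2sq + lamp * v1sq         # bracket of Eq. (4b)
print(v1sq, v2sq)   # both positive for this choice
print(T1, T2)       # both ~ 0
```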
The masses of the Higgs bosons and the angle $`\alpha `$ are given by the following relations:
$`m_{H^+}^2`$ $`=`$ $`\lambda _3\left(v_1^2+v_2^2\right)`$ (7a)
$`m_{A^0}^2`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(\lambda _4-\lambda _3\right)\left(v_1^2+v_2^2\right)`$ (7b)
$`m_{H^0,h^0}^2`$ $`=`$ $`\lambda _1v_1^2+\lambda _2v_2^2\pm \sqrt{\left(\lambda _1v_1^2-\lambda _2v_2^2\right)^2+v_1^2v_2^2(\lambda _3+\lambda _5)^2}`$ (7c)
$$\mathrm{tan}2\alpha =\frac{v_2v_1(\lambda _3+\lambda _5)}{\lambda _1v_1^2-\lambda _2v_2^2}.$$
(8)
On the other hand, for $`V_{(B)}`$ the minimum conditions are
$`0`$ $`=`$ $`T_1-{\displaystyle \frac{\mu _{12}^2}{2}}v_2`$ (9a)
$`0`$ $`=`$ $`T_2-{\displaystyle \frac{\mu _{12}^2}{2}}v_1`$ (9b)
with the $`T_i`$ given by the previous equations (4). The solution of this set of equations is
$`v_1^2`$ $`=`$ $`{\displaystyle \frac{\lambda _1\lambda _2\pm \sqrt{\left(\lambda _1\lambda _2\right)^24\left(\lambda _1\lambda _+\right)\left(\lambda _2\lambda _+\right)\left[\left(\lambda _+v^2\mu _1^2\right)\left(\lambda _2v^2\mu _2^2\right)\frac{1}{4}\mu _{12}^4\right]}}{2\left(\lambda _1\lambda _+\right)\left(\lambda _2\lambda _+\right)}}`$ (10a)
$`v_2^2`$ $`=`$ $`{\displaystyle \frac{\lambda _2\lambda _1\pm \sqrt{\left(\lambda _1\lambda _2\right)^24\left(\lambda _2\lambda _+\right)\left(\lambda _1\lambda _+\right)\left[\left(\lambda _+v^2\mu _2^2\right)\left(\lambda _1v^2\mu _1^2\right)\frac{1}{4}\mu _{12}^4\right]}}{2\left(\lambda _1\lambda _+\right)\left(\lambda _2\lambda _+\right)}}.`$ (10b)
Notice that, in this case, the solution with vanishing vacuum expectation value in one of the doublets is not possible. Now the masses and the value of $`\alpha `$ are given by
$`m_{H^+}^2`$ $`=`$ $`\lambda _3\left(v_1^2+v_2^2\right)+\mu _{12}^2{\displaystyle \frac{v_1^2+v_2^2}{v_1v_2}}`$ (11a)
$`m_{A^0}^2`$ $`=`$ $`{\displaystyle \frac{1}{2}}\mu _{12}^2{\displaystyle \frac{v_1^2+v_2^2}{v_1v_2}}`$ (11b)
$`m_{H^0,h^0}^2`$ $`=`$ $`\lambda _1v_1^2+\lambda _2v_2^2+\frac{1}{4}\mu _{12}^2\left(\frac{v_2}{v_1}+\frac{v_1}{v_2}\right)`$ (11d)
$`\pm \sqrt{\left(\lambda _1v_1^2-\lambda _2v_2^2+\frac{1}{4}\mu _{12}^2\left(\frac{v_2}{v_1}-\frac{v_1}{v_2}\right)\right)^2+\left(v_1v_2(\lambda _3+\lambda _5)-\frac{1}{2}\mu _{12}^2\right)^2}`$
$$\mathrm{tan}2\alpha =\frac{2v_1v_2\lambda _+-\frac{1}{2}\mu _{12}^2}{\lambda _1v_1^2-\lambda _2v_2^2+\frac{1}{4}\mu _{12}^2\left(\frac{v_2}{v_1}-\frac{v_1}{v_2}\right)}.$$
(12)
## III The fermiophobic limit
Despite the fact that $`V_{(A)}`$ and $`V_{(B)}`$ are different, it is obvious that the gauge-boson and fermion couplings to the scalars are the same for both models. In particular, the introduction of the Yukawa couplings without tree-level flavor-changing neutral currents is easily done extending the $`Z_2`$ symmetry to the fermions. This leads to two different ways of coupling the quarks and two different ways of introducing the leptons, giving a total of four different models, usually denoted as model I, II, III and IV (cf. e.g. ).
In here, we use model I, where only $`\mathrm{\Phi }_2`$ couples to the fermions. Then, the coupling of the lightest scalar Higgs, $`h^0`$, to a fermion pair (quark or lepton) is proportional to $`\mathrm{cos}\alpha `$. As $`\alpha `$ approaches $`\frac{\pi }{2}`$ this coupling tends to zero and in the limit it vanishes, giving rise to a fermiophobic Higgs.
Examining equations (8) and (12) we see that the fermiophobic limit ($`\alpha =\frac{\pi }{2}`$) can be obtained in potential A in two ways: either $`\lambda _+=0`$ or $`v_1=0`$. In potential B there is only one possibility $`2v_1v_2\lambda _+=\frac{1}{2}\mu _{12}^2`$. In this latter case, equations (11) and (12) give immediately:
$`m_{A^0}^2`$ $`=`$ $`2\lambda _+\left(v_1^2+v_2^2\right)`$ (13a)
$`m_{H^0}^2`$ $`=`$ $`2\lambda _2v_2^2+2\lambda _+v_1^2=m_{A^0}^2+2(\lambda _2-\lambda _+)v^2\mathrm{sin}^2\beta `$ (13b)
$`m_{h^0}^2`$ $`=`$ $`2\lambda _1v_1^2+2\lambda _+v_2^2=m_{A^0}^2-2(\lambda _+-\lambda _1)v^2\mathrm{cos}^2\beta .`$ (13c)
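The consistency of Eqs. (11b), (12) and (13a) under the fermiophobic condition $`2v_1v_2\lambda _+=\frac{1}{2}\mu _{12}^2`$ can be verified with a few lines of arithmetic; the sample values below are arbitrary:

```python
# Fermiophobic limit of potential B: choose mu12^2 = 4 * lambda_+ * v1 * v2
# and check that the numerator of Eq. (12) vanishes and Eq. (11b) -> Eq. (13a).
v1, v2, lamp = 0.4, 0.9, 0.3
mu12sq = 4.0 * lamp * v1 * v2

numerator_12 = 2.0 * v1 * v2 * lamp - 0.5 * mu12sq
mA2_from_11b = 0.5 * mu12sq * (v1**2 + v2**2) / (v1 * v2)
mA2_from_13a = 2.0 * lamp * (v1**2 + v2**2)

print(numerator_12)                 # 0 -> tan(2*alpha) numerator vanishes, alpha = pi/2
print(mA2_from_11b, mA2_from_13a)   # identical values
```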
In the former case ($`V_{(A)}`$), $`\lambda _+=0`$ gives
$`m_{H^0}^2`$ $`=`$ $`2\lambda _2v_2^2`$ (14a)
$`m_{h^0}^2`$ $`=`$ $`2\lambda _1v_1^2`$ (14b)
while $`v_1=0`$ gives a massless $`h^0`$. In this analysis we have assumed that $`v_1<v_2`$. The reversed situation leads to similar conclusions since one is then interchanging the role of the two doublets.
The triple couplings involving two gauge bosons and a scalar particle like, for instance $`Z_\mu Z^\mu h^0`$, always depend on the angle $`\delta =\alpha -\beta `$. In particular, the couplings for $`h^0`$ are proportional to $`\mathrm{sin}\delta `$ whereas the corresponding $`H^0`$ couplings are proportional to $`\mathrm{cos}\delta `$. These general results can be understood if one recalls the argument about the role played by the neutral scalars in restoring the unitarity in the scattering of longitudinal $`W`$’s, i.e. in $`W_L^+W_L^{-}\rightarrow W_L^+W_L^{-}`$. The restoration of unitarity requires that the sum of the squares of the $`W^+W^{-}h^0`$ and $`W^+W^{-}H^0`$ couplings adds up to a constant proportional to the $`SU(2)`$ gauge coupling, $`g`$.
Current searches of the SM Higgs boson at LEP put the mass limit at $`89GeV/c^2`$. Since the production mechanism is the reaction $`e^+e^{-}\rightarrow Z^{*}\rightarrow Zh^0`$, this limit can be substantially lower in the 2HDM if $`\mathrm{sin}\delta `$ is small. In our numerical application to the two $`\gamma `$ decay of a light fermiophobic $`h^0`$ we will explore the region $`\mathrm{sin}^2\delta \lesssim 0.1`$.
Bounds on the Higgs masses have been derived by several authors. Recently, next-to-leading order calculations in the SM give a prediction for the branching ratio $`Br(B\rightarrow X_s\gamma )`$ which is slightly larger than the experimental CLEO measurement. In model II the charged Higgs loops always increase the SM value. Hence, this process provides good lower bounds on $`m_{H^\pm }`$ as a function of $`\mathrm{tan}\beta `$. On the contrary, in model I the contribution from the charged Higgs reduces the theoretical prediction and so brings it to a value closer to the experimental result. This reduction is larger for small $`\mathrm{tan}\beta `$, since in model I the $`H^+`$ coupling to quarks is proportional to $`\mathrm{tan}^{-1}\beta `$. However, a small $`\mathrm{tan}\beta `$ gives a large top Yukawa coupling which leads to large new contributions to $`R_b`$ and to $`B_0`$–$`\overline{B}_0`$ mixing. A recent analysis by Ciuchini et al. derives the bounds $`\mathrm{tan}\beta >1.8,1.4\text{ and }1.0`$ for $`m_{H^\pm }=85,200\text{ and }425GeV/c^2`$, respectively.
The Higgs contribution to the $`\rho `$-parameter is :
$$\mathrm{\Delta }\rho =\frac{1}{16\pi ^2v^2}\left[\mathrm{sin}^2\delta F(m_{H^\pm }^2,m_{A^0}^2,m_{H^0}^2)+\mathrm{cos}^2\delta F(m_{H^\pm }^2,m_{A^0}^2,m_{h^0}^2)\right]$$
(15)
where
$`F(a,b,c)=a+{\displaystyle \frac{bc}{b-c}}\mathrm{ln}{\displaystyle \frac{b}{c}}-{\displaystyle \frac{ab}{a-b}}\mathrm{ln}{\displaystyle \frac{a}{b}}-{\displaystyle \frac{ac}{a-c}}\mathrm{ln}{\displaystyle \frac{a}{c}}.`$
Since the current experimental value of $`\rho =1.0012\pm 0.0013\pm 0.0018`$ exceeds the SM prediction by 3$`\sigma `$, one should at least try to avoid a positive $`\mathrm{\Delta }\rho `$ (a more recent SM fit gives $`\rho =0.9996+0.0031(-0.0013)`$ \[13\]). A simple examination of the function $`F(a,b,c)`$ shows that this is impossible if $`m_{H^\pm }`$ is the largest mass. On the other hand, if $`m_{A^0}>m_{H^\pm }`$ one obtains a negative value for $`\mathrm{\Delta }\rho `$ which grows with the splitting $`m_{A^0}-m_{H^\pm }`$. In line with our limit ($`\mathrm{sin}^2\delta \lesssim 0.1`$), negative values of $`\mathrm{\Delta }\rho `$ of the order of the experimental statistical error, i.e. $`\mathrm{\Delta }\rho \sim -10^{-3}`$, can be obtained essentially in two ways. Either with a large $`m_{H^\pm }\sim 300GeV/c^2`$ but with a modest $`m_{A^0}-m_{H^\pm }`$ splitting ($`m_{A^0}\sim 340GeV/c^2`$), or with a smaller $`m_{H^\pm }\sim 100GeV/c^2`$ but with $`m_{A^0}\sim 200GeV/c^2`$. The variation of $`\mathrm{\Delta }\rho `$ with $`m_{h^0}`$ is rather modest, less than $`10`$% for the range $`20GeV/c^2\le m_{h^0}\le 100GeV/c^2`$. With seven parameters in the Higgs sector it is difficult and not very illuminating to discuss in detail all possibilities. So, this discussion should be regarded as a simple justification for the fact that a fermiophobic Higgs scenario is not ruled out by the existing experiments. We would like to stress that there could exist a light $`h^0`$ almost decoupled from the fermions ($`\alpha \approx \frac{\pi }{2}`$) and at the same time with a small LEP production rate via the $`Z`$-bremsstrahlung reaction $`Z^{*}\rightarrow Zh^0`$ ($`\mathrm{sin}^2\delta \lesssim 10^{-1}`$). If such a boson exists it will decay mainly via the process $`h^0\rightarrow \gamma \gamma `$.
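For orientation, a minimal numerical sketch of Eq. (15) and of the function $`F(a,b,c)`$ defined above is given below. The heavy CP-even mass $`m_{H^0}`$ is not specified in the examples of the previous paragraph, so the value used here (250 GeV/c²), as well as $`v=246`$ GeV, are assumptions of mine; the only point is to reproduce the sign behaviour just discussed:

```python
import math

def F(a, b, c):
    # The function entering Eq. (15); arguments are squared masses.
    return (a + b * c / (b - c) * math.log(b / c)
              - a * b / (a - b) * math.log(a / b)
              - a * c / (a - c) * math.log(a / c))

def delta_rho(mHp, mA, mH, mh, sin2delta, v=246.0):
    return (sin2delta * F(mHp**2, mA**2, mH**2)
            + (1.0 - sin2delta) * F(mHp**2, mA**2, mh**2)) / (16.0 * math.pi**2 * v**2)

# m_H+ = 100, m_A0 = 200 (one of the cases above), m_H0 and m_h0 assumed:
print(delta_rho(mHp=100.0, mA=200.0, mH=250.0, mh=60.0, sin2delta=0.1))
# negative, of order 1e-4 for this particular choice; more negative values
# require a larger m_A0 - m_H+- splitting, as stated above
```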
## IV The decay $`h^0\rightarrow \gamma \gamma `$
The decay $`h^0\rightarrow \gamma \gamma `$ is particularly suitable to illustrate the fact that $`V_{(A)}`$ and $`V_{(B)}`$ give rise to different phenomenologies. In fact, the decay occurs at one-loop level and for a fermiophobic Higgs one has vector-boson and charged-Higgs contributions. The latter are different for models $`A`$ and $`B`$, because the $`h^0H^+H^{-}`$ vertex is different. It is interesting to point out how this difference arises. Since the term in $`\lambda _4`$ does not contribute to this vertex, both potentials give rise to the same effective $`h^0H^+H^{-}`$ coupling, $`g_{h^0H^+H^{-}}`$, namely:
$`[h^0H^+H^{-}]`$ $`=`$ $`2v_2\lambda _2\mathrm{cos}^2\beta \mathrm{cos}\alpha +v_2\lambda _3\mathrm{sin}\alpha \mathrm{cos}\beta \mathrm{sin}\beta -v_1\lambda _5\mathrm{cos}^2\beta \mathrm{sin}\alpha `$ (17)
$`-2v_1\lambda _1\mathrm{sin}^2\beta \mathrm{sin}\alpha +v_2\lambda _5\mathrm{sin}^2\beta \mathrm{cos}\alpha -v_1\lambda _3\mathrm{cos}\alpha \mathrm{cos}\beta \mathrm{sin}\beta `$
However, as we have already said, what is relevant for perturbative calculations is the position of the minimum of $`V`$ and the values of its derivatives at that point. This means that one has to express all coupling constants in terms of the particle masses. This is simply done by inverting equations (7) and (11). The result is
$$[h^0H^+H^{-}]_{(A)}=\frac{g}{m_W}\left(m_{h^0}^2\frac{\mathrm{cos}\left(\alpha +\beta \right)}{\mathrm{sin}2\beta }-\left(m_{H^+}^2-\frac{1}{2}m_{h^0}^2\right)\mathrm{sin}\left(\alpha -\beta \right)\right)$$
(18)
and
$$[h^0H^+H^{-}]_{(B)}=\frac{g}{m_W}\left(\left(m_{h^0}^2-m_{A^0}^2\right)\frac{\mathrm{cos}\left(\alpha +\beta \right)}{\mathrm{sin}2\beta }-\left(m_{H^+}^2-\frac{1}{2}m_{h^0}^2\right)\mathrm{sin}\left(\alpha -\beta \right)\right)$$
(19)
which clearly shows the difference that we have pointed out.
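A rough numerical illustration of how different the two couplings can be is sketched below. It simply evaluates Eqs. (18) and (19) as reproduced above, with the parameter values used later for Figs. 4 and 5 and with electroweak inputs ($`g`$, $`m_W`$) chosen by me; the numbers are indicative only:

```python
import math

def coupling_A(mh, mHp, alpha, beta, g=0.65, mW=80.4):
    # Eq. (18), as written above (masses in GeV)
    return (g / mW) * (mh**2 * math.cos(alpha + beta) / math.sin(2 * beta)
                       - (mHp**2 - 0.5 * mh**2) * math.sin(alpha - beta))

def coupling_B(mh, mHp, mA, alpha, beta, g=0.65, mW=80.4):
    # Eq. (19): same structure with m_h^2 replaced by m_h^2 - m_A^2
    return (g / mW) * ((mh**2 - mA**2) * math.cos(alpha + beta) / math.sin(2 * beta)
                       - (mHp**2 - 0.5 * mh**2) * math.sin(alpha - beta))

alpha = math.pi / 2           # fermiophobic limit
beta = alpha - 0.29           # delta = alpha - beta = 0.29, as used for Figs. 4 and 5
for mh in (20.0, 60.0, 120.0):
    print(mh, coupling_A(mh, 200.0, alpha, beta),
              coupling_B(mh, 200.0, 250.0, alpha, beta))
# coupling_A stays negative, coupling_B stays positive for this heavy A0,
# in line with the qualitative discussion of Figs. 4 and 5 below
```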
In Fig. 1 we show all the diagrams that were included. A previous work by Diaz and Weiler did not include the Higgs-boson diagrams. Our calculation, in the ’t Hooft–Feynman gauge, was done with xloops. We have been using this program to calculate other amplitudes in the framework of the 2HDM. Throughout this process we have made several checks on the computer results. In this particular case we have verified that the contribution of the vector boson loops agrees with a calculation done by M. Spira et al. using the supersymmetric version of the 2HDM.
In Fig. 2 we show the product $`m_{h^0}`$ times the decay width ($`\mathrm{\Gamma }`$) for the process $`h^0\rightarrow \gamma \gamma `$ in model A as a function of $`\delta `$ and for several values of $`m_{h^0}`$ and a fixed value of $`m_{H^+}`$. This function shows a gentle rise with $`m_{h^0}`$ which reflects the proportionality between $`g_{h^0H^+H^{-}}`$ and $`m_{h^0}^2`$. Looking at this coupling constant one could naively assume that there would be an enhancement for $`\beta `$ approaching $`\pi /2`$, i.e., in our plot, when $`\delta `$ approaches zero. However, a close examination shows that such an enhancement does not exist. On the contrary, the coupling vanishes in this limit, since $`m_{h^0}`$ goes to zero when $`\beta \rightarrow \pi /2`$. Alternatively, if one keeps $`m_{h^0}`$ fixed, then the mass relation
$$m_{h^0}=\sqrt{2\lambda _1}v_1=\sqrt{2\lambda _1}v\mathrm{cos}\beta $$
(20)
imposes a lower bound for $`\beta `$. In Fig. 2 the dotted line gives this limit, evaluated assuming $`\lambda _1=1/2`$. The dashed area shows the exclusion region implied by the LEP experimental results. In the work of Ackerstaff et al. an experimental bound on the SM $`h\rightarrow \gamma \gamma `$ branching ratio is derived. For a fermiophobic Higgs with $`m_h<m_W`$ the $`\gamma \gamma `$ branching ratio is one. On the other hand, the production mechanism is suppressed by a factor $`\mathrm{sin}^2\delta `$. Hence, we have turned the OPAL experimental bounds into a bound on $`\delta `$. Fig. 3 gives the equivalent information for potential B.
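For reference, Eq. (20) with $`\lambda _1=1/2`$ ties $`\beta `$ (and hence, in the fermiophobic limit $`\alpha =\pi /2`$, also $`\delta =\alpha -\beta `$) directly to $`m_{h^0}`$; a small sketch, taking $`v=246`$ GeV as an assumed input:

```python
import math
# Eq. (20): m_h = sqrt(2*lambda_1) * v * cos(beta), evaluated for lambda_1 = 1/2
# (the value used for the dotted line in Fig. 2); v is my assumed input.
v = 246.0
lam1 = 0.5
for mh in (20.0, 60.0, 100.0):
    cosb = mh / (math.sqrt(2.0 * lam1) * v)
    beta = math.acos(cosb)
    delta = math.pi / 2 - beta      # delta in the fermiophobic limit alpha = pi/2
    print(mh, round(beta, 3), round(delta, 3))
```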
In Fig. 4 we plot, as a function of $`m_{h^0}`$, the ratio $`R`$ of the widths calculated with potentials $`V_B`$ and $`V_A`$, respectively. According to the fermiophobic limit, we set $`\alpha =\pi /2`$ and $`\delta =0.29`$. For the other relevant masses we have used $`m_{H^+}=200GeV/c^2`$ and $`m_{A^0}=250GeV/c^2`$. In the range of variation of $`m_{h^0}`$, i.e., $`20GeV/c^2<m_{h^0}<120GeV/c^2`$, R decreases smoothly from 25 till 3. However, it is misleading to assume that potential A always gives smaller results. This is clearly shown in Fig. 5 where we plot the same function $`R`$ evaluated with the same parameters except for $`m_{A^0}`$ that was set to $`120GeV/c^2`$. Again, $`R`$ is a decreasing function of $`m_{h^0}`$ that has a zero for $`m_{h^0}`$ around $`70GeV/c^2`$ and increases afterwards. However, in this case, the values obtained with potential B are smaller than the corresponding ones for potential A.
This behavior can be qualitatively understood if one examines the coupling constants $`[h^0H^+H^{-}]_{(A)}`$ and $`[h^0H^+H^{-}]_{(B)}`$ given by equations (18) and (19), respectively. In the range of $`m_{h^0}`$ that we are considering, and for the same values of $`\alpha `$, $`\beta `$ and $`m_{H^+}`$, the coupling corresponding to potential $`A`$ is always negative and decreases from about $`-85GeV/c^2`$ till $`-230GeV/c^2`$. On the contrary, the coupling constant corresponding to potential $`B`$ is positive. For large values of $`m_{A^0}`$ (around $`250GeV/c^2`$), it decreases from $`930GeV/c^2`$ till $`780GeV/c^2`$ for $`20GeV/c^2<m_{h^0}<120GeV/c^2`$. These values of the coupling constant, when compared with the corresponding ones for potential $`A`$, explain the qualitative behavior of the ratio $`R`$ given in Fig. 4. The explanation of Fig. 5 is more subtle, but again, it depends on the coupling constant of potential $`B`$. In fact, when $`m_{A^0}=120GeV/c^2`$ the coupling corresponding to potential $`B`$ starts at $`100GeV/c^2`$ and decreases smoothly till $`-60GeV/c^2`$, having a zero around $`m_{h^0}=95GeV/c^2`$. This behavior has two consequences. When the coupling is positive, its order of magnitude is the correct one to almost cancel the W-loop contributions to the width. Hence, $`R`$ is small because potential $`B`$ gives a small width. This cancellation is exact for $`m_{h^0}`$ around $`70GeV/c^2`$ and after that, because the coupling changes sign, the charged Higgs contribution adds up to the normal W-loop result. Hence $`R`$ increases.
Despite the fact that the $`[hWW]`$ coupling is suppressed by $`\mathrm{sin}\delta `$, one should keep in mind that when $`m_h`$ is larger than $`m_W`$ the decay channel $`h\rightarrow WW^{*}\rightarrow Wq\overline{q}`$ starts to compete with the $`\gamma \gamma `$ channel. We have evaluated the $`WW^{*}`$ decay width and in table I we show some results in comparison with the width for the $`\gamma \gamma `$ channel evaluated for potential A and $`m_{H^+}=100GeV`$. The table is representative of a situation that can be summarized qualitatively as follows: i) for small $`\delta `$ ($`\delta =0.1`$) the $`WW^{*}`$ width is comparable with the $`\gamma \gamma `$ width for $`m_h=120GeV/c^2`$; ii) for large $`\delta `$ ($`\delta =0.3`$) even at $`m_h=120GeV/c^2`$ the $`WW^{*}`$ decay width is already larger than the $`\gamma \gamma `$ width by a factor of ten.
## V Conclusion
We have examined the 2HDM where the potential does not explicitly break CP and, furthermore, is naturally protected from the appearance of minima with CP violation. There are two ways of accomplishing this, leading to two different potentials $`V_A`$ and $`V_B`$. $`V_A`$ is invariant under the discrete group $`Z_2`$ and $`V_B`$ is invariant under $`U(1)`$ except for the presence of a soft breaking term. These two symmetries ensure that the parameters that were set to zero at tree level are not required to renormalize the models.
The potentials $`V_A`$ and $`V_B`$ have different cubic and quartic scalar vertices. Then, it is obvious that they give different Higgs-Higgs interactions. However, even before one is able to test such interactions, one could still sense these two different phenomenologies via Higgs-loop contributions.
To illustrate this point we have considered a fermiophobic neutral Higgs, decaying mainly into two photons. The widths for the decays calculated with both potentials can differ by orders of magnitude for reasonable values of the parameters. Clearly, with four masses and two angles as free parameters, it is not worthwhile to perform a complete analysis. Nevertheless, we believe that the results presented here are sufficient for illustrative purposes. The experimental searches in this area should be made with an open mind for surprises.
## VI Acknowledgment
We would like to thank our experimental colleagues at LIP for some useful discussions and the theoretical elementary particle physics department of Mainz University for allowing us to use their computer cluster. L.B. is partially supported by JNICT contract No. BPD.16372.
# Tomographic measurement of nonclassical radiation states
## I INTRODUCTION
The concept of nonclassical states of light has drawn much attention in quantum optics . The customary definition of nonclassicality is given in terms of the Glauber-Sudarshan $`P`$-function: a nonclassical state does not admit a regular positive $`P`$-function representation, namely, it cannot be written as a statistical mixture of coherent states. Such states produce effects that have no classical analogue. These kinds of states are of fundamental relevance not only for the demonstration of the inadequacy of classical description, but also for applications, e.g., in the realms of information transmission and interferometric measurements .
In this paper we are interested in testing the nonclassicality of a quantum state by means of an operational criterion, which is based on a set of quantities that can be measured experimentally with some given level of confidence, even in the presence of loss, noise, and less-than-unity quantum efficiency. The positivity of the $`P`$-function itself cannot be adopted as a test, since there is no method available to measure it. The $`P`$-function is a Fourier transform on the complex plane of the generating function for the normal-ordered moments; hence, in principle, it could be recovered by measuring all the quadrature components of the field, and subsequently performing a (deconvolved) inverse Radon transform. Currently, there is a well-established quantitative method for such a universal homodyne measurement, and it is usually referred to as quantum homodyne tomography (see Ref. for a review). However, as proven in Ref. , only the generalized Wigner functions of order $`s<1-\eta ^{-1}`$ can be measured, $`\eta `$ being the quantum efficiency of homodyne detection. Hence, through this technique, all functions from $`s=1`$ to $`s=0`$ cannot be recovered, i.e., we cannot obtain the $`P`$-function and all its smoothed convolutions up to the customary Wigner function. For the same reason, the nonclassicality parameter proposed by Lee, namely, the maximum $`s`$-parameter that provides a positive distribution, cannot be experimentally measured.
Among the many manifestations of nonclassical effects, one finds squeezing, antibunching, even-odd oscillations in the photon-number probability, and negativity of the Wigner function . Any of these features alone, however, does not represent the univocal criterium we are looking for. Neither squeezing nor antibunching provides a necessary condition for nonclassicality . The negativity of the Wigner function, which is well exhibited by the Fock states and the Schrödinger-cat-like states, is absent for the squeezed states. As for the oscillations in the photon-number probability, some even-odd oscillations can be simply obtained by using a statistical mixture of coherent states .
Many authors have adopted the nonpositivity of the phase-averaged $`P`$-function $`F(I)=\frac{1}{2\pi }_0^{2\pi }𝑑\varphi P(I^{1/2}e^{i\varphi })`$ as the definition for a nonclassical state, since $`F(I)<0`$ invalidates Mandel’s semiclassical formula of photon counting, i.e., it does not allow a classical description in terms of a stochastic intensity. Of course, some states can exhibit a “weak” nonclassicality , namely, a positive $`F(I)`$, but with a non-positive $`P`$-function (a relevant example being a coherent state undergoing Kerr-type self-phase modulation). However, from the point of view of the detection theory, such “weak” nonclassical states still admit a classical description in terms of having the intensity probability $`F(I)>0`$. For this reason, we adopt nonpositivity of $`F(I)`$ as the definition of nonclassicality.
## II SINGLE-MODE NONCLASSICALITY
The authors of Refs. have recognized relations between $`F(I)`$ and generalized moments of the photon distribution, which, in turn, can be used to test the nonclassicality. The problem is reduced to an infinite set of inequalities that provide both necessary and sufficient conditions for nonclassicality . In terms of the photon-number probability $`p(n)=n|\widehat{\rho }|n`$ of the state with density matrix $`\widehat{\rho }`$, the simplest sufficient condition involves the following three-point relation for $`p(n)`$
$`B(n)\equiv (n+2)\,p(n)\,p(n+2)-(n+1)\,[p(n+1)]^2<0.`$ (2)
Higher-order sufficient conditions involve five-, seven-, …, $`(2k+1)`$-point relations, always for adjacent values of $`n`$. It is sufficient that just one of these inequalities be satisfied in order to assure the negativity of $`F(I)`$. Notice that for a coherent state $`B(n)=0`$ identically for all $`n`$.
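As a quick numerical illustration of the criterion (2) (a sketch added here for concreteness, not part of the original analysis; the mean photon number $`\mu =4`$ and the truncation are arbitrary illustrative choices), one can evaluate $`B(n)`$ for a coherent state and for an even Schrödinger-cat state directly from their photon-number distributions:

```python
import numpy as np
from math import exp, factorial

def B(p, n):
    # Three-point combination of Eq. (2); a negative value signals nonclassicality.
    return (n + 2) * p[n] * p[n + 2] - (n + 1) * p[n + 1] ** 2

nmax, mu = 30, 4.0

# Coherent state: Poissonian statistics, B(n) = 0 for all n (up to rounding).
p_coh = np.array([exp(-mu) * mu**n / factorial(n) for n in range(nmax)])

# Even cat state |alpha> + |-alpha>: only even photon numbers are populated.
p_cat = np.array([mu**n / factorial(n) if n % 2 == 0 else 0.0 for n in range(nmax)])
p_cat /= p_cat.sum()

for n in range(6):
    print(n, f"{B(p_coh, n):+.2e}", f"{B(p_cat, n):+.2e}")
```

For the cat state the odd-$`n`$ values are negative, since there $`p(n)=p(n+2)=0`$ while $`p(n+1)>0`$, which is one way the negativity of $`F(I)`$ reveals itself for this state.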
In the following we show that quantum tomography can be used as a powerful tool for performing the nonclassicality test in Eq. (2). For less-than-unity quantum efficiency ($`\eta <1`$), we rely on the concept of a “noisy state” $`\widehat{\varrho }_\eta `$, wherein the effect of quantum efficiency is ascribed to the quantum state itself rather than to the detector. In this model, the effect of quantum efficiency is treated in a Schrödinger-like picture, with the state evolving from $`\widehat{\varrho }`$ to $`\widehat{\varrho }_\eta `$, and with $`\eta `$ playing the role of a time parameter. Such lossy evolution is described by the master equation
$`\partial _t\widehat{\varrho }(t)={\displaystyle \frac{\mathrm{\Gamma }}{2}}\left\{2\widehat{a}\widehat{\varrho }(t)\widehat{a}^{\dagger }-\widehat{a}^{\dagger }\widehat{a}\widehat{\varrho }(t)-\widehat{\varrho }(t)\widehat{a}^{\dagger }\widehat{a}\right\},`$ (3)
wherein $`\widehat{\varrho }(t)\equiv \widehat{\varrho }_\eta `$ with $`t=-\mathrm{ln}\eta /\mathrm{\Gamma }`$.
For the nonclassicality test, reconstruction in terms of the noisy state has many advantages over the true-state reconstruction. In fact, for nonunit quantum efficiency $`\eta <1`$ the tomographic method introduces errors for $`p(n)`$ which are increasingly large versus $`n`$, with the additional limitation that quantum efficiency must be greater than the minimum value $`\eta =0.5`$ . On the other hand, the reconstruction of the noisy-state probabilities $`p_\eta (n)=n|\widehat{\rho }_\eta |n`$ does not suffer such limitations, and even though all quantum features are certainly diminished in the noisy-state description, nevertheless the effect of nonunity quantum efficiency does not change the sign of the $`P`$-function, but only rescales it as follows:
$`P(z)\rightarrow P_\eta (z)={\displaystyle \frac{1}{\eta }}P(z/\eta ^{1/2}).`$ (4)
Hence, the inequality (2) still represents a sufficient condition for nonclassicality when the original probabilities $`p(n)=n|\widehat{\rho }|n`$ are replaced with the noisy-state probabilities $`p_\eta (n)=n|\widehat{\rho }_\eta |n`$, the latter being given by the Bernoulli convolution
$`p_\eta (n)={\displaystyle \sum _{k=n}^{\mathrm{\infty }}}\left({\displaystyle \genfrac{}{}{0pt}{}{k}{n}}\right)\eta ^n(1-\eta )^{k-n}p(k).`$ (5)
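A minimal sketch of this smearing (again an illustration added here, with an arbitrary test state and $`\eta =0.4`$) is the direct implementation of the Bernoulli convolution:

```python
import numpy as np
from math import comb, factorial

def bernoulli_smear(p, eta):
    # Noisy-state distribution p_eta(n) of Eq. (5) from an ideal distribution p(n).
    nmax = len(p)
    return np.array([sum(comb(k, n) * eta**n * (1 - eta) ** (k - n) * p[k]
                         for k in range(n, nmax)) for n in range(nmax)])

mu, nmax = 4.0, 40
p = np.array([mu**n / factorial(n) if n % 2 == 0 else 0.0 for n in range(nmax)])
p /= p.sum()                      # even cat state, as in the previous sketch
p_eta = bernoulli_smear(p, eta=0.4)
print(np.round(p_eta[:8], 4))
```

The smeared probabilities can then be inserted into the same three-point combination, denoted $`B_\eta (n)`$ below.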
Hence, when referred to the noisy-state probabilities $`p_\eta (n)`$, the inequality in Eq. (2) keeps its form and simply rewrites as follows
$`B_\eta (n)\equiv (n+2)\,p_\eta (n)\,p_\eta (n+2)-(n+1)\,[p_\eta (n+1)]^2<0.`$ (7)
According to Eq. (7), the quantity $`B_\eta (n)`$ is nonlinear in the density matrix. This means that $`B_\eta (n)`$ cannot be measured by averaging a suitable kernel function over the homodyne data, as for any other observable . Hence, in the evaluation of $`B_\eta (n)`$ one needs to tomographically reconstruct the photon-number probabilities, using the kernel functions
$`K_\eta ^{(n)}(x)`$ $`=`$ $`2\kappa ^2e^{\kappa ^2x^2}{\displaystyle \underset{\nu =0}{\overset{n}{\sum }}}{\displaystyle \frac{(-)^\nu }{\nu !}}\left({\displaystyle \genfrac{}{}{0pt}{}{n}{n-\nu }}\right)(2\nu +1)!\kappa ^{2\nu }`$ (8)
$`\times `$ $`\text{Re }\{D_{(2\nu +2)}(2i\kappa x)\},`$ (9)
where $`D_\sigma (z)`$ denotes the parabolic cylinder function and $`\kappa =\sqrt{\eta /(2\eta -1)}`$. The true-state probabilities $`p(n)`$ are obtained by averaging the kernel function in Eq. (9) over the homodyne data. On the other hand, the noisy-state probabilities $`p_\eta (n)`$ are obtained by using the kernel function in Eq. (9) for $`\eta =1`$, namely without recovering the convolution effect of nonunit quantum efficiency. Notice that the expression (9) does not depend on the phase of the quadrature. Hence, the knowledge of the phase of the local oscillator in the homodyne detector is not needed for the tomographic reconstruction, and it can be left fluctuating in a real experiment.
Regarding the estimation of statistical errors, they are generally obtained by dividing the set of homodyne data into blocks. However, in the present case, the nonlinear dependence on the photon number probability introduces a systematic error that is vanishingly small for increasingly larger sets of data. Therefore, the estimated value of $`B(n)`$ will be obtained from the full set of data, instead of averaging the mean value of the different statistical blocks.
In Figs. 1–7 we present some numerical results that are obtained by a Monte-Carlo simulation of a quantum tomography experiment. The nonclassicality criterion is tested either on a Schrödinger-cat state $`|\psi (\alpha )\rangle \propto (|\alpha \rangle +|-\alpha \rangle )`$ or on a squeezed state $`|\alpha ,r\rangle \equiv D(\alpha )S(r)|0\rangle `$, wherein $`|\alpha \rangle `$, $`D(\alpha )`$, and $`S(r)`$ denote a coherent state with amplitude $`\alpha `$, the displacement operator $`D(\alpha )=e^{\alpha \widehat{a}^{\dagger }-\overline{\alpha }\widehat{a}}`$, and the squeezing operator $`S(r)=e^{r(\widehat{a}^{\dagger 2}-\widehat{a}^2)/2}`$, respectively. Figs. 1–3 show tomographically-obtained values of $`B(n)`$, with the respective error bars superimposed, along with the theoretical values for a Schrödinger-cat state, for a phase-squeezed state ($`r>0`$), and for an amplitude-squeezed state ($`r<0`$), respectively. For the same set of states the results for $`B_\eta (n)`$ \[cf. Eq. (7)\] obtained by tomographic reconstruction of the noisy state are reported in Figs. 4–6. Let us compare the statistical errors that affect the two measurements, namely, those of $`B(n)`$ and $`B_\eta (n)`$ on the original and the noisy states, respectively. In the first case (Figs. 1–3) the error increases with $`n`$, whereas in the second (Figs. 4–6) it remains nearly constant, albeit with less marked oscillations in $`B_\eta (n)`$ than those in $`B(n)`$. Fig. 7 shows tomographically-obtained values of $`B_\eta (n)`$ for the phase-squeezed state (cf. Fig. 5), but for a lower quantum efficiency $`\eta =0.4`$. Notice that, in spite of the low quantum efficiency, the nonclassicality of such a state is still experimentally verifiable, as $`B_\eta (0)<0`$ by more than five standard deviations. In contrast, for coherent states one obtains small statistical fluctuations around zero for all $`n`$.
We remark that the simpler test of checking for antibunching or oscillations in the photon-number probability in the case of the phase-squeezed state considered here (Figs. 2, 5, and 7) would not reveal the nonclassical features of such a state.
## III TWO-MODE NONCLASSICALITY
Quantum homodyne tomography can also be employed to test the nonclassicality of two-mode states. For a two-mode state nonclassicality is defined in terms of nonpositivity of the following phase-averaged two-mode $`P`$-function :
$`F(I_1,I_2,\varphi )={\displaystyle \frac{1}{2\pi }}{\displaystyle \int _0^{2\pi }}𝑑\varphi _1P(I_1^{1/2}e^{i\varphi _1},I_2^{1/2}e^{i(\varphi _1+\varphi )}).`$ (10)
In Ref. it is also shown that a sufficient condition for nonclassicality is
$`C=\langle (\widehat{n}_1-\widehat{n}_2)^2\rangle -\langle \widehat{n}_1-\widehat{n}_2\rangle ^2-\langle \widehat{n}_1+\widehat{n}_2\rangle <0,`$ (11)
where $`\widehat{n}_1`$ and $`\widehat{n}_2`$ are the photon-number operators of the two modes.
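The content of Eq. (11) is easy to check numerically. The following sketch (an illustration added here; the truncation, $`|\lambda |^2=0.5`$ and the unit-mean coherent beams are arbitrary choices) evaluates $`C`$ for the twin-beam state of Eq. (13) below and, as a classical reference, for two independent coherent beams:

```python
import numpy as np
from math import exp, factorial

def C_value(p_joint):
    # C = <(n1-n2)^2> - <n1-n2>^2 - <n1+n2> for a joint photon-number distribution.
    nmax = p_joint.shape[0]
    n1, n2 = np.meshgrid(np.arange(nmax), np.arange(nmax), indexing="ij")
    d, s = n1 - n2, n1 + n2
    return (p_joint * d**2).sum() - (p_joint * d).sum() ** 2 - (p_joint * s).sum()

nmax, lam2 = 60, 0.5

# Twin beam: perfectly correlated photon numbers, p(n,n) = (1-|lambda|^2)|lambda|^(2n).
p_twin = np.zeros((nmax, nmax))
p_twin[np.arange(nmax), np.arange(nmax)] = (1 - lam2) * lam2 ** np.arange(nmax)

# Two independent coherent beams of unit mean photon number (a classical state).
pois = np.array([exp(-1.0) / factorial(k) for k in range(nmax)])
p_coh2 = np.outer(pois, pois)

print(C_value(p_twin))   # about -2|lambda|^2/(1-|lambda|^2) = -2
print(C_value(p_coh2))   # about 0
```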
A tomographic test of the inequality in Eq. (11) can be performed by averaging the kernel functions for the operators in the ensemble averages in Eq. (11) over the two-mode homodyne data. For the normal-ordered field operators one can use the Richter formula in Ref. , namely
$`[a^{\dagger n}a^m](x,\varphi )=e^{i(m-n)\varphi }{\displaystyle \frac{H_{n+m}(\sqrt{2\eta }x)}{\sqrt{(2\eta )^{n+m}}\left(\genfrac{}{}{0pt}{}{n+m}{n}\right)}},`$ (12)
$`H_n(x)`$ denoting the Hermite polynomial and $`\varphi `$ being the phase of the field with respect to the local oscillator of the homodyne detector. Again, as for the kernel function in Eq. (9), the value $`\eta =1`$ is used to reconstruct the ensemble averages of the noisy state $`\widehat{\rho }_\eta `$. Notice that for $`n=m`$ Eq. (12) is independent of the phase $`\varphi `$, and hence no phase knowledge is needed to reconstruct the ensemble averages in Eq. (11). As an example, we consider the twin-beam state at the output of a nondegenerate parametric amplifier
$`|\chi \rangle =(1-|\lambda |^2)^{1/2}{\displaystyle \sum _{n=0}^{\mathrm{\infty }}}\lambda ^n|n\rangle |n\rangle ,`$ (13)
where $`|n\rangle |n\rangle `$ denotes the joint eigenvector of the number operators of the two modes with equal eigenvalue $`n`$, and the parameter $`\lambda `$ is related to the gain $`G`$ of the amplifier by the relation $`|\lambda |^2=1-G^{-1}`$. The theoretical value of $`C`$ for the state in Eq. (13) is $`C=-2|\lambda |^2/(1-|\lambda |^2)<0`$. A tomographic reconstruction of the twin-beam state in Eq. (13) is particularly facilitated by the self-homodyning scheme, as shown in Ref. . With regard to the effect of quantum efficiency $`\eta <1`$, the same argument still holds as for the single-mode case: one can evaluate $`C_\eta `$ for the twin-beam state that has been degraded by the effect of loss. In this case, the theoretical value of $`C_\eta `$ is simply rescaled to $`C_\eta =-2\eta ^2|\lambda |^2/(1-|\lambda |^2)`$.
In Fig. 8 we report $`C_\eta `$ vs. $`1\eta `$, $`\eta `$ ranging from 1 to 0.3 in steps of 0.05, for the twin-beam state in Eq. (13) with $`|\lambda |^2=0.5`$, corresponding to the total average photon number equal to 2. The values of $`C_\eta `$ result from a Monte-Carlo simulation of a homodyne tomography experiment with a sample of $`4\times 10^5`$ data, using the theoretical joint homodyne probability of the state $`|\chi `$
$`p_\eta (x_1,x_2,\varphi _1,\varphi _2)={\displaystyle \frac{2\mathrm{exp}\left[-\frac{(x_1+x_2)^2}{d_z^2+4\mathrm{\Delta }_\eta ^2}-\frac{(x_1-x_2)^2}{d_{-z}^2+4\mathrm{\Delta }_\eta ^2}\right]}{\pi \sqrt{(d_z^2+4\mathrm{\Delta }_\eta ^2)(d_{-z}^2+4\mathrm{\Delta }_\eta ^2)}}},`$ (14)
with
$`z=e^{i(\varphi _1+\varphi _2)}\mathrm{\Lambda },`$ (15)
$`d_{\pm z}^2={\displaystyle \frac{|1\pm z|^2}{1-|z|^2}},`$ (16)
$`\mathrm{\Delta }_\eta ^2={\displaystyle \frac{1-\eta }{4\eta }},`$ (17)
$`\varphi _1`$ and $`\varphi _2`$ denoting the phases of the two modes relative to the respective local oscillator. Notice that the nonclassicality test in terms of the noisy state gives values of $`C_\eta `$ that are increasingly near the classically positive region for decreasing quantum efficiency $`\eta `$. However, the statistical error remains constant and is sufficiently small to allow recognition of the nonclassicality of the twin-beam state in Eq. (13) up to $`\eta =0.3`$.
## IV CONCLUSIONS
We have shown that quantum homodyne tomography allows one to perform nonclassicality tests for various single- and two-mode radiation states, even when the quantum efficiency of homodyne detection is rather low. The method involves reconstruction of the photon-number probability or of some suitable function of the number operators pertaining to the noisy state, namely, the state degraded by the less-than-unity quantum efficiency. The noisy-state reconstruction is affected by the statistical errors; however, they are sufficiently small that the nonclassicality of the state can be tested even for low values of $`\eta `$. For the cases considered in this paper, we have shown that the nonclassicality of the states can be proven (deviation from classicality by many error bars) with $`10^5`$$`10^7`$ homodyne data. Moreover, since the knowledge of the phase of the local oscillator in the homodyne detector is not needed for the tomographic reconstruction, it can be left fluctuating in a real experiment. Hence, we conclude that the proposed nonclassicality test should be easy to perform experimentally.
## Acknowledgments
This work is supported by the Italian Ministero dell’Universitá e della Ricerca Scientifica e Tecnologica under the program Amplificazione e rivelazione di radiazione quantistica
no-problem/9901/hep-ph9901372.html
Figure 1: Possible mechanisms for K_L→μ⁺μ⁻.
ZTF–99/01
LONG VS. SHORT DISTANCE DISPERSIVE TWO-PHOTON $`K_L\to \mu ^+\mu ^{-}`$ AMPLITUDE<sup>1</sup> (<sup>1</sup>Presented by K. K. at the conference *“Nuclear and Particle Physics with CEBAF”*, Dubrovnik, Croatia, Nov 3-10, 1998)
JAN O. EEG<sup>a</sup>, KREŠIMIR KUMERIČKI<sup>b</sup> and IVICA PICEK<sup>b</sup>
<sup>a</sup>Department of Physics, University of Oslo, Norway
<sup>b</sup>Department of Physics, Faculty of Science, University of Zagreb, Croatia
January 1999
We report on the calculation of the two-loop electroweak, two-photon mediated short-distance dispersive $`K_L\to \mu ^+\mu ^{-}`$ decay amplitude. QCD corrections change the sign of this contribution and reduce it by an order of magnitude. The resulting amplitude enables us to provide a constraint on the otherwise uncertain long-distance dispersive amplitude.
PACS numbers: 12.15.-y, 12.39.Fe UDC: 539.12
Keywords: kaon decay, muon, dispersive amplitude, QCD corrections
The decay mode $`K_L\mu ^+\mu ^{}`$ is a classical example of the rare flavour changing neutral process that provided valuable insights into the nature of weak interactions. Its non-observation at a rate comparable with that of $`K^+\mu ^+\nu _\mu `$ led to the discovery of the GIM mechanism and to the derivation of the early constraints on the masses of the charmed and, later, top quark.
Also, by studying this mode it could be possible to determine the Wolfenstein $`\rho `$ parameter, to study the CP violation, and even to discover some new physics (*e. g.* through SUSY-induced FCNC enhancement). Because of this, this decay mode has received sustained theoretical attention over the last three decades.
The lowest-order electroweak amplitude for $`K_L\mu ^+\mu ^{}`$ in a free-quark calculation (Fig. 1a and Fig. 1b) is represented by one-loop (1L) W-box and Z-exchange diagrams, respectively, and exhibits a strong GIM cancellation. Therefore, one is addressed to consider the two-loop (2L) diagrams with photons in the intermediate state (Fig. 1c) as a potentially important contribution.
If we normalize the amplitude $`𝒜`$ to the branching ratio:
$$B(K_L\to \mu ^+\mu ^{-})=|\text{Re}𝒜|^2+|\text{Im}𝒜|^2,$$
(1)
then the absorptive (Im$`𝒜`$) part, which is dominated by the process $`K_L\gamma \gamma \mu ^+\mu ^{}`$ (Fig. 1c) with the *real* photons, is easily calculable and gives the so called unitarity bound
$$B(K_L\to \mu ^+\mu ^{-})\ge |\text{Im}𝒜|^2=(7.1\pm 0.2)\times 10^{-9},$$
(2)
corresponding to $`|\text{Im}𝒜|=(8.4\pm 0.1)\times 10^{-5}`$. If we compare this to the experimental number
$$B(K_L\to \mu ^+\mu ^{-})=(7.2\pm 0.5)\times 10^{-9},$$
(3)
we see that the absorptive part almost saturates the amplitude, leaving only the small window for the dispersive (Re$`𝒜`$) part
$$\text{Re}𝒜=𝒜_{\mathrm{SD}}+𝒜_{\mathrm{LD}},\qquad |\text{Re}𝒜|^2<5.6\times 10^{-10}.$$
(4)
Thus, the total real part of the amplitude, being the sum of short-distance (SD) and long-distance (LD) dispersive contributions, must be relatively small compared with the absorptive part of the amplitude. Such a small total dispersive amplitude can be realized either when the SD and LD parts are both small or by partial cancellation between these two parts.
Now, the major obstacle in extracting useful short distance information out of this decay mode is the poor knowledge of $`𝒜_{LD}`$. There are several calculations of this LD part to be found in literature and, later in this paper, we will try to compare them. To this end it is necessary to have a reliable estimate of the other, theoretically more tractable, SD part $`𝒜_{SD}`$.
Frequently, $`𝒜_{SD}`$ has been identified as the weak contribution represented by the one-loop W-box and Z-exchange diagrams of Figs. 1a and 1b. This one-loop SD contribution $`𝒜_{1L}=𝒜_{\mathrm{Fig}.1\mathrm{a}}+𝒜_{\mathrm{Fig}.1\mathrm{b}}`$ is dominated by the $`t`$-quark in the loop (proportional to the small KM-factor $`\lambda _t`$), and the inclusion of QCD corrections does not change this amplitude essentially. In the present paper we stress that the diagrams of Fig. 1c, with *virtual* intermediate photons, with relatively high-momentum, lead to the same SD operator. That is, both the $`1L`$ diagrams contained in Figs. 1a and 1b, as well as $`2L`$ diagrams like those in Fig. 2, lead to the same SD operator of the type
$$K_{SD}(\overline{d}\gamma ^\beta Ls)(\overline{u}\gamma _\beta \gamma _5v),$$
(5)
where $`s,\overline{d},u,v`$ are the spinors of the $`s`$\- and $`\overline{d}`$ quarks in the $`K`$-meson, and the $`\mu ^+`$ and $`\mu ^{}`$, respectively. The quantity $`K_{SD}`$ is a constant which contains the result of the SD calculations. The leading contributions from $`2L`$ diagrams are $`\alpha _{\mathrm{em}}^2G_\mathrm{F}\lambda _u`$ and dominated by $`c`$-quarks in the loop, while the leading $`1L`$ is proportional to $`G_\mathrm{F}^2m_t^2\lambda _t`$.
One should note that as already pointed out in , the two-loop diagrams with two intermediate virtual photons have a short-distance part $`𝒜_{2\mathrm{L}}`$ (contained in $`𝒜_{\mathrm{Fig}.1\mathrm{c}}=𝒜_{\mathrm{LD}}+𝒜_{2\mathrm{L}}`$) that could pick up a potentially sizable contribution, leading to the total SD amplitude is $`𝒜_{\mathrm{SD}}=𝒜_{1\mathrm{L}}+𝒜_{2\mathrm{L}}`$. By exploring the contribution from Fig. 1c leading to the $`𝒜_{2\mathrm{L}}`$ amplitude, we will be able to isolate the strongly model-dependent LD dispersive piece.
A complete treatment of the two-loop SD dispersive amplitude for $`K_L\mu ^+\mu ^{}`$ was given by us in Ref. . There we used the momenta of the intermediate photons from the diagrams in Fig. 1c to distinguish between SD and LD contributions, SD part being defined by diagrams with photon momenta above some infrared cut-off of the order of some hadronic scale $`\mathrm{\Lambda }m_\rho `$. The fact that the resulting amplitudes depended only mildly on the precise choice of $`\mathrm{\Lambda }`$ assured us that the calculated amplitude was sensible.
Our SD calculation in is dominated by the region $`m_\rho <q^2<m_c^2`$ (the high energy ($`q^2>m_c^2`$) region is also included). After performing QCD corrections in the leading logarithmic approximation , the original electroweak amplitude was considerably suppressed and its sign changed:
$$-0.38\times 10^{-5}\le 𝒜_{2\mathrm{L}}\le -0.001\times 10^{-5},$$
(6)
where error bars stem mostly from empirical uncertainty in $`\alpha _s`$.
Effectively, the LD calculation of the diagram on Fig. 1c is reduced to the evaluation of the form-factor $`F(q_1^2,q_2^2)`$ contained in the amplitude
$$A(K_L\to \gamma ^{*}(q_1,ϵ_1)\gamma ^{*}(q_2,ϵ_2))=i\epsilon _{\mu \nu \rho \sigma }ϵ_1^\mu ϵ_2^\nu q_1^\rho q_2^\sigma F(q_1^2,q_2^2),$$
(7)
where $`q_1^2,q_2^2\ne 0`$ measure the virtuality of the intermediate photons.
The low energy regime $`q^2<\mathrm{\Lambda }^2\simeq m_\rho ^2`$ is explorable by chiral techniques determining $`F(0,0)`$. In the standard $`SU(3)_L\times SU(3)_R`$ ChPT, where $`\eta ^{\prime }`$ is absent, one recovers the cancellation owing to the Gell-Mann-Okubo mass relation, $`(3M_\eta ^2+M_\pi ^2-4M_K^2)\simeq 0`$. Keeping the $`\eta ^{\prime }`$ pole contribution in the enlarged $`U(3)_L\times U(3)_R`$ symmetric theory there is a destructive interference between the $`\eta `$ and $`\eta ^{\prime }`$ contributions, so that the final amplitude is dominated by the pion pole.
Going beyond ChPT, one faces model calculations, in particular calculations based on vector meson dominance (Refs. , for example). The chiral-quark model may also be used for the LD regime. Some preliminary analysis within the chiral quark model indicates that the dispersive LD amplitude is of the same order of magnitude as the SD one.
Combining Eqs. (4) and (6), and $`𝒜_{1L}`$ , enables us to find the following allowed range for $`𝒜_{\mathrm{LD}}`$:
$$0.1\times 10^{-5}\le 𝒜_{\mathrm{LD}}\le 6.5\times 10^{-5}.$$
(8)
Thus, having a dispersive LD part $`𝒜_{\mathrm{LD}}`$ of the size comparable with the absorptive part is still not ruled out completely.
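The step from the window (4) and the range (6) to the interval (8) is plain interval arithmetic. The sketch below illustrates it; since the one-loop value $`𝒜_{1\mathrm{L}}`$ is only cited, not quoted, in this text, the range used for it here is a placeholder chosen purely for illustration and should not be read as the authors' input.

```python
import math

# All amplitudes in units of 1e-5, as in Eqs. (6) and (8).
window = math.sqrt(5.6e-10) / 1e-5          # |Re A| bound from Eq. (4), about 2.4
A2L = (-0.38, -0.001)                       # two-loop short-distance range, Eq. (6)
A1L = (-3.7, -2.4)                          # PLACEHOLDER one-loop range (illustrative only)

A_SD = (A1L[0] + A2L[0], A1L[1] + A2L[1])   # total short-distance dispersive amplitude
# |A_SD + A_LD| <= window  =>  allowed band for the long-distance part:
A_LD_min = -window - A_SD[1]
A_LD_max = window - A_SD[0]
print(f"{A_LD_min:.2f} <= A_LD <= {A_LD_max:.2f}   (units of 1e-5)")
```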
The two vector-meson dominance calculations for the LD amplitude considered as the referent calculations in Ref. have basically opposite signs,
$$-2.9\times 10^{-5}\le 𝒜_{\mathrm{LD}}\le -0.5\times 10^{-5}\text{[6]},$$
$$0.27\times 10^{-5}\le 𝒜_{\mathrm{LD}}\le 4.7\times 10^{-5}\text{[7]},$$
and the result of the latter seems to be more in agreement with the bound (8). There are also some other, more recent, attempts to calculate $`𝒜_{\mathrm{LD}}`$ . The most stringent bound obtained is
$$|\text{Re}𝒜_{LD}|<2.9\times 10^{-5},$$
(9)
also well inside the allowed range (8).
no-problem/9901/quant-ph9901053.html
# Optimal Manipulations with Qubits: Universal NOT Gate
## Abstract
It is not a problem to complement a classical bit, i.e. to change the value of a bit, a $`0`$ to a $`1`$ and vice versa. This is accomplished by a NOT gate. Complementing a qubit in an unknown state, however, is another matter. We show that this operation cannot be done perfectly. We define the Universal-NOT (U-NOT) gate which out of $`N`$ identically prepared pure input qubits generates $`M`$ output qubits in a state which is as close as possible to the perfect complement. This gate can be realized by classical estimation and subsequent re-preparation of complements of the estimated state. Its fidelity is therefore equal to the fidelity $`(N+1)/(N+2)`$ of optimal estimation, and does not depend on the required number of outputs. We also show that when some additional a priori information about the state of the input qubit is available, then the fidelity of the quantum NOT gate can be much better than the fidelity of estimation.
PACS number: 03.65.Bz, 03.67.-a
> There was an odd qubit from Donegal,
> who wanted to become most orthogonal.
> He went through a gate,
> but not very straight,
> and came out instead as a Buckyball.
Classical information consists of bits, each of which can be either $`0`$ or $`1`$. Quantum information, on the other hand, consists of qubits which are two-level quantum systems with one level labeled $`|0`$ and the other $`|1`$. Qubits can not only be in one of the two levels, but in any superposition of them as well. This fact makes the properties of quantum information quite different from those of its classical counterpart. For example, it is not possible to construct a device which will perfectly copy an arbitrary qubit while the copying of classical information presents no difficulties. Another difference between classical and quantum information is as follows: It is not a problem to complement a classical bit, i.e. to change the value of a bit, a $`0`$ to a $`1`$ and vice versa. This is accomplished by a NOT gate. Complementing a qubit, however, is another matter. The complement of a qubit $`|\mathrm{\Psi }`$ is the qubit $`|\mathrm{\Psi }^{}`$ which is orthogonal to it. Is it possible to build a device which will take an arbitrary (unknown) qubit and transform it into the qubit orthogonal to it?
The best intuition for this problem is obtained by looking at the desired operation as an operation on the Poincaré sphere, which represents the set of pure states of a qubit system. Thus every state, pure or mixed, is represented by a vector in a three-dimensional space, whose components are the expectations of the three Pauli matrices. The full state space is thereby mapped onto the unit ball, whose surface represents the set of pure states. In this picture the ambiguity of choosing an overall phase for $`|\mathrm{\Psi }`$ is already eliminated. The points corresponding to $`|\mathrm{\Psi }`$ and $`|\mathrm{\Psi }^{}`$ are antipodes of each other. The desired Universal-NOT (U-NOT) operation is therefore nothing but the inversion of the Poincaré sphere.
Note that the inversion preserves angles (related in a simple way to the scalar product $`|\langle \mathrm{\Phi },\mathrm{\Psi }\rangle |`$ of rays), so by Wigner’s Theorem the ideal U-NOT must be implemented either by a unitary or by an anti-unitary operation. Unitary operations correspond to proper rotations of the Poincaré sphere, whereas anti-unitary operations correspond to orthogonal transformations with determinant $`-1`$. Clearly, the U-NOT operation is of the latter kind, and an anti-unitary operator $`\mathrm{\Theta }`$ (unique up to a phase) implementing it is
$$\mathrm{\Theta }\left(\alpha |0\rangle +\beta |1\rangle \right)=\beta ^{*}|0\rangle -\alpha ^{*}|1\rangle .$$
(1)
The difficulty with anti-unitarily implemented symmetries is that they are not completely positive, i.e., they cannot be applied to a small system, leaving the rest of the world alone. (The tensor product of an anti-linear and a linear operator is ill-defined). Thus time-reversal, perhaps the best known operation of this kind, can only be a global symmetry, but makes no sense when applied only to a subsystem. By definition, a “gate” is an operation applied to only a part of the world, so must be represented by a completely positive operation. By the Stinespring Dilation Theorem this is equivalent to saying that any gate must have a realization by coupling the given system to a larger one (some ancillas), performing a unitary operation on the large system, and subsequently restricting to a subsystem. Hence an ideal U-NOT gate does not exist.
The same is true, of course, for other anti-unitarily implemented operations like the complex conjugation (or equivalently the transposition) of the density matrix, which corresponds to the reflection of the sphere at the $`x_2=0`$ plane, because only the Pauli matrix $`\sigma _2`$ has imaginary entries. Clearly, any such operation can be represented by a U-NOT, followed by a suitable unitary rotation, and conversely. On the other hand, if we relax the “universality” condition, the U-NOT operation may become viable: if we are promised that the elements of the density matrix (or the components of $`|\mathrm{\Psi }`$) are real, the states lie in the $`x_2=0`$ plane so that the inversion at the center is equivalent to a proper rotation by $`\pi `$ around the $`x_2`$-axis.
Because we cannot design a perfect Universal-NOT gate, what we would like to do is see how close we can come. At this point we can consider two scenarios. The first one is based on the measurement of input qubit(s) – using the results of an optimal measurement we can manufacture an orthogonal qubit, or any desired number of them. Obviously, the fidelity of the NOT operation in this case is equal to the fidelity of estimation of the state of the input qubit(s). The second scenario would be to approximate an anti-unitary transformation on a Hilbert space of the input qubit(s) by a unitary transformation on a larger Hilbert space which describes the input qubit(s), blank qubits which are to become the complements, and the quantum device playing the rôle of the gate. We demand that the gate performs equally well for any (unknown) pure input state, so it is natural to focus on universal gates “U-NOT”, i.e., gates which treat every state vector in the same way in the sense of unitary symmetry. In what follows we shall address both scenarios.
In order to state our problem precisely, let $`\mathcal{H}=𝐂^2`$ denote the two-dimensional Hilbert space of a single qubit. Then the input state of $`N`$ systems prepared in the pure state $`|\mathrm{\Psi }\rangle `$ is the $`N`$-fold tensor product $`|\mathrm{\Psi }\rangle ^{\otimes N}\in \mathcal{H}^{\otimes N}`$. The corresponding density matrix is $`\rho =\sigma ^{\otimes N}`$, where $`\sigma =|\mathrm{\Psi }\rangle \langle \mathrm{\Psi }|`$ is the one-particle density matrix. An important observation is that the vectors $`|\mathrm{\Psi }\rangle ^{\otimes N}`$ are invariant under permutations of all $`N`$ sites, i.e., they belong to the symmetric, or “Bose”, subspace $`\mathcal{H}_+^{\otimes N}\subset \mathcal{H}^{\otimes N}`$. Thus as long as we consider only pure input states we can assume all the input states of the device under consideration to be density operators on $`\mathcal{H}_+^{\otimes N}`$. We will denote by $`𝒮(\mathcal{H})`$ the density operators over a Hilbert space $`\mathcal{H}`$. Then the U-NOT gate must be a completely positive trace preserving map $`T:𝒮(\mathcal{H}_+^{\otimes N})\to 𝒮(\mathcal{H})`$. Our aim is to design $`T`$ in such a way that for any pure one-particle state $`\sigma \in 𝒮(\mathcal{H})`$ the output $`T(\sigma ^{\otimes N})`$ is as close as possible to the orthogonal qubit state $`\sigma ^{\perp }=𝟙-\sigma `$. In other words, we are trying to make the fidelity $`\mathrm{Tr}[\sigma ^{\perp }T(\sigma ^{\otimes N})]=1-\mathrm{\Delta }`$ of the optimal complement with the result of the transformation $`T`$ as close as possible to unity for an arbitrary input state. This corresponds to the problem of finding the minimal value of the error measure $`\mathrm{\Delta }(T)`$ defined as
$$\mathrm{\Delta }(T)=\underset{\sigma ,\mathrm{pure}}{\mathrm{max}}\mathrm{Tr}\left[\sigma T(\sigma ^N)\right].$$
(2)
Note that this functional $`\mathrm{\Delta }`$ is completely unbiased with respect to the choice of input state. More formally, it is invariant with respect to unitary rotations (basis changes) in $``$: When $`T`$ is any admissible map, and $`U`$ is a unitary on $``$, the map $`T_U(\rho )=U^{}T(U^N\rho U_{}^{}{}_{}{}^{N})U`$ is also admissible, and satisfies $`\mathrm{\Delta }(T_U)=\mathrm{\Delta }(T)`$. We will show later on that one may look for optimal gates $`T`$, minimizing $`\mathrm{\Delta }(T)`$, among the universal ones, i.e., the gates satisfying $`T_U=T`$ for all $`U`$. For such U-NOT gates, the maximaization can be omitted from the definition (2), because the fidelity $`\mathrm{Tr}\left[\sigma T(\sigma ^N)\right]`$ is independent of $`\sigma `$.
Measurement-based scenario
An estimation device by definition takes an input state $`\rho 𝒮(_+^N)`$ and produces, on every single experiment, an “estimated pure state” $`\sigma 𝒮()`$. As in any quantum measurement this will not always be the same $`\sigma `$, even with the same input state $`\rho `$, but a random quantity. The estimation device is therefore described completely by the probability distribution of pure states it produces for every given input. Still simpler, we will characterize it by the corresponding probability density with respect to the unique normalized measure on the pure states (denoted “$`d\mathrm{\Phi }`$” in integrals), which is also invariant under unitary rotations. For an input state $`\rho 𝒮(_+^N)`$, the value of this probability density at the pure state $`|\mathrm{\Phi }`$ is
$$p(\mathrm{\Phi },\rho )=(N+1)\langle \mathrm{\Phi }^{\otimes N},\rho \mathrm{\Phi }^{\otimes N}\rangle .$$
(3)
To check the normalization, note that $`𝑑\mathrm{\Phi }p(\mathrm{\Phi },\rho )=\mathrm{Tr}[X\rho ]`$ for a suitable operator $`X`$, because the integral depends linearly on $`\rho `$. By unitary invariance of the measure “$`d\mathrm{\Phi }`$” this operator commutes with all unitaries of the form $`U^N`$, and since these operators, restricted to $`_+^N`$ form an irreducible representation of the unitary group of $``$ \[for $`d=2`$, it is just the spin $`N/2`$ irreducible representation of SU(2)\], the operator $`X`$ is a multiple of the identity. To determine the factor, one inserts $`\rho =𝟙`$, and uses the normalization of “$`d\mathrm{\Phi }`$” to verify that $`X=1`$.
Note that the density (3) is proportional to $`|\mathrm{\Phi },\mathrm{\Psi }|^{2N}`$, when $`\rho =|\mathrm{\Psi }^N\mathrm{\Psi }^N|`$ is the typical input to such a device: $`N`$ systems prepared in the same pure state $`|\mathrm{\Psi }`$. In that case the probability density is clearly peaked sharply at states $`|\mathrm{\Phi }`$ which are equal to $`|\mathrm{\Psi }`$ up to a phase.
Suppose now that we combine the state estimation with the preparation of a new state, which is some function of the estimated state. The overall result will then be the integral of the state valued function with respect to the probability distribution just determined. In the case at hand the desired function is $`f(\mathrm{\Phi })=(𝟙|\mathbb{\Phi }\mathbb{\Phi }|)`$. So the result of the whole measurement-based (“classical”) scheme is
$$\sigma ^{(out)}=T(\rho )=\int 𝑑\mathrm{\Phi }p(\mathrm{\Phi },\rho )\left(𝟙-|\mathrm{\Phi }\rangle \langle \mathrm{\Phi }|\right).$$
(4)
The fidelity required for the computation of $`\mathrm{\Delta }`$ from Eq.(2) is then equal to (see also )
$`\mathrm{\Delta }=(N+1){\displaystyle \int 𝑑\mathrm{\Phi }|\langle \mathrm{\Phi },\mathrm{\Psi }\rangle |^{2N}(1-|\langle \mathrm{\Phi },\mathrm{\Psi }\rangle |^2)}={\displaystyle \frac{1}{N+2}},`$ (5)
where we have used that the two integrals have exactly the same form (differing only in the choice of $`N`$), and that the first integral is just the normalization integral. Since this expression does not depend on $`\sigma `$, we can drop the maximization in the definition (2) of $`\mathrm{\Delta }`$, and find $`\mathrm{\Delta }(T)=1/(N+2)`$, from which we find that the fidelity of creation of a complement to the original state $`\sigma `$ is $`=\frac{N+1}{N+2}`$. Finally we note, that the result of the operation (4) can be expressed in the form
$$\sigma ^{(out)}=s_N\sigma ^{\perp }+\frac{1-s_N}{2}𝟙,$$
(6)
with the “scaling” parameter $`s_N=\frac{N}{N+2}`$. From here it is seen that in the limit $`N\to \mathrm{\infty }`$, perfect estimation of the input state can be performed, and, consequently, the perfect complement can be generated. For finite $`N`$ the mean fidelity is always smaller than unity. The advantage of the measurement-based scenario is that once the input qubit(s) is measured and its state is estimated, an arbitrary number $`M`$ of identical (approximately) complemented qubits can be produced with the same fidelity, simply by replacing the output function $`f(\mathrm{\Phi })=(𝟙-|\mathrm{\Phi }\rangle \langle \mathrm{\Phi }|)`$ by $`f_M(\mathrm{\Phi })=(𝟙-|\mathrm{\Phi }\rangle \langle \mathrm{\Phi }|)^{\otimes M}`$.
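The value $`\mathrm{\Delta }=1/(N+2)`$ in Eq. (5) is easy to confirm numerically. The sketch below (added here for illustration; the sample size is arbitrary) uses the fact that for a Haar-random pure qubit state $`|\mathrm{\Phi }\rangle `$ the overlap $`t=|\langle \mathrm{\Phi },\mathrm{\Psi }\rangle |^2`$ is uniformly distributed on $`[0,1]`$, so the integral in Eq. (5) reduces to a one-dimensional average:

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_estimation(N, samples=200_000):
    # Monte-Carlo value of Eq. (5): Delta = (N+1) * E[ t^N (1-t) ],
    # with t = |<Phi,Psi>|^2 uniform on [0,1] for a Haar-random |Phi>.
    t = rng.random(samples)
    return (N + 1) * np.mean(t**N * (1 - t))

for N in (1, 2, 5, 10):
    print(N, round(delta_estimation(N), 4), "exact:", 1 / (N + 2))
```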
Quantum scenario: U-NOT gate
Let us assume we have $`N`$ input qubits in an unknown state $`|\mathrm{\Psi }`$ and we are looking for a transformation which generates $`M`$ qubits at the output in a state as close as possible to the orthogonal state $`|\mathrm{\Psi }^{}`$. The universality of the proposed transformation has to guarantee that an arbitrary input state is complemented with the same fidelity. If we want to generate $`M`$ approximately complemented qubits at the output, the U-NOT gate has to be represented by $`2M`$ qubits (irrespective of the number, $`N`$, of input qubits), $`M`$ of which will only serve as ancilla, and $`M`$ of which become the output complements. We will indicate these subsystems by subscripts “a”=input, “b”=ancilla, and “c”=(prospective) output. The U-NOT gate transformation, $`U_{NM}`$, acts on the tensor product of all three systems. The gate is always prepared in some state $`|X_{bc}`$, independently of the input state $`|\mathrm{\Psi }`$. The transformation is determined by the following explicit expression, valid for every unit vector $`|\mathrm{\Psi }`$:
$`U_{NM}|N\mathrm{\Psi }_a|X_{bc}={\displaystyle \underset{j=0}{\overset{M}{\sum }}}\gamma _j|X_j(\mathrm{\Psi })_{ab}|\{(M-j)\mathrm{\Psi }^{\perp };j\mathrm{\Psi }\}_c;\gamma _j=(-1)^j\left({\displaystyle \genfrac{}{}{0pt}{}{N+M-j}{N}}\right)^{1/2}\left({\displaystyle \genfrac{}{}{0pt}{}{N+M+1}{M}}\right)^{-1/2},`$ (7)
where $`|N\mathrm{\Psi }_a=|\mathrm{\Psi }^N`$ is the input state consisting of $`N`$ qubits in the same state $`|\mathrm{\Psi }`$. On the right hand side of Eq.(7) $`|\{(Mj)\mathrm{\Psi }^{};j\mathrm{\Psi }\}_c`$ denotes symmetric and normalized states with $`(Mj)`$ qubits in the complemented (orthogonal) state $`|\mathrm{\Psi }^{}`$ and $`j`$ qubits in the original state $`|\mathrm{\Psi }`$. Similarly, the vectors $`|X_j(\mathrm{\Psi })_{ab}`$ consist of $`N+M`$ qubits, and are given explicitly by
$`|X_j(\mathrm{\Psi })_{ab}=|\{(N+Mj)\mathrm{\Psi };j\mathrm{\Psi }^{}\}_{ab}.`$ (8)
Here the coefficients $`\gamma _j`$ were chosen so that the scalar product of the right hand side with a similar vector written out for $`|\mathrm{\Phi }`$, becomes $`\mathrm{\Psi },\mathrm{\Phi }^N`$. This implies at the same time that $`U_{NM}`$ is linear and that it is unitary after suitable extension to the orthogonal complement of the vector $`|X_{bc}`$.
Each of the $`M`$ qubits at the output of the U-NOT gate is described by the density operator (6) with $`s__N=\frac{N}{N+2}`$, irrespective of the number of complements produced. The fidelity of the U-NOT gate depends only on the number of inputs. This means that this U-NOT gate can be thought of as producing an approximate complement and then cloning it, with the quality of the cloning independent of the number of clones produced. The universality of the transformation is directly seen from the “scaled” form of the output operator (6).
We stress that the fidelity of the U-NOT gate (7) is exactly the same as in the measurement-based scenario. Moreover, it also behaves as a classical (measurement-based) gate in a sense that it can generate an arbitrary number of complements with the same fidelity. We have also checked that these cloned complements are pairwise separable.
The $`N+M`$ qubits at the output of the gate which do not represent the complements are individually in the state described by the density operator
$`\sigma _j^{(out)}=s\sigma +{\displaystyle \frac{1-s}{2}}𝟙,\qquad j=1,\mathrm{\dots },N+M,`$ (9)
with the scaling factor $`s=\frac{N}{N+2}+\frac{2N}{(N+M)(N+2)}`$, i.e., these qubits are the clones of the original state with a fidelity of cloning larger than the fidelity of estimation. This fidelity depends on the number, $`M`$, of clones produced out of the $`N`$ originals, and in the limit $`M\to \mathrm{\infty }`$ the fidelity of cloning becomes equal to the fidelity of estimation. These qubits represent the output of the optimal $`N\to N+M`$ cloner introduced by Gisin and Massar . This means that the U-NOT gate as presented by the transformation in Eq. (7) serves also as a universal cloning machine.
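This identification can be checked directly: the single-qubit fidelity implied by the scaling factor in Eq. (9), $`(1+s)/2`$, coincides with the standard optimal $`N\to L`$ cloning fidelity $`(L(N+1)+N)/(L(N+2))`$ for $`L=N+M`$. A small sketch (added here; the $`(N,M)`$ pairs are arbitrary examples) verifies this and the $`M\to \mathrm{\infty }`$ limit:

```python
from fractions import Fraction

def s_clone(N, M):
    # Scaling factor of Eq. (9) for the N+M "clone" outputs.
    return Fraction(N, N + 2) + Fraction(2 * N, (N + M) * (N + 2))

def F_gisin_massar(N, L):
    # Optimal N -> L cloning fidelity (L(N+1)+N)/(L(N+2)).
    return Fraction(L * (N + 1) + N, L * (N + 2))

for N, M in [(1, 1), (1, 5), (2, 3), (4, 10)]:
    F = (1 + s_clone(N, M)) / 2
    assert F == F_gisin_massar(N, N + M)
    print(N, M, F)
# For M -> infinity, s -> N/(N+2) and F -> (N+1)/(N+2), the estimation fidelity.
```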
At this point the question arises whether the transformation (7) represents the optimal U-NOT gate via quantum scenario. If this is so, then it would mean that the measurement-based and the quantum scenarios realize the U-NOT gate with the same fidelity.
Theorem. Let $`\mathcal{H}`$ be a Hilbert space of dimension $`d=2`$. Then among all completely positive trace preserving maps $`T:𝒮(\mathcal{H}_+^{\otimes N})\to 𝒮(\mathcal{H})`$, the measurement-based U-NOT scenario (4) attains the smallest possible value of the error measure defined by Eq.(2), namely $`\mathrm{\Delta }(T)=1/(N+2)`$.
We have already shown \[see Eq.(5)\] that for the measurement-based strategy the error $`\mathrm{\Delta }`$ attains the minimal value $`1/(N+2)`$. The more difficult part, however, is to show that no other scheme \[i.e., quantum scenario\] can do better. Here we will largely follow the arguments in .
Recall first the rotation invariance of the functional $`\mathrm{\Delta }`$, noted after Eq.(2). Moreover, $`\mathrm{\Delta }`$ is defined as the maximum of a collection of linear functions in $`T`$, and is therefore convex. Putting these observations together we get
$$\mathrm{\Delta }(\widehat{T})\le \int 𝑑U\mathrm{\Delta }(T_U)=\mathrm{\Delta }(T),$$
(10)
where $`\widehat{T}=𝑑UT_U`$ is the average of the rotated operators $`T_U`$ with respect to the Haar measure on the unitary group. Thus $`\widehat{T}`$ is at least as good as $`T`$, and is a universal NOT gate ($`\widehat{T}_U=\widehat{T}`$). Without loss we will therefore assume from now on that $`T_U=T`$ for all $`U`$.
An advantage of this step is that a very explicit general form for universal operations is known from the “covariant form” of the Stinespring Dilation Theorem (see for a version adapted to our needs). The form of $`T`$ is further simplified in our case by the fact that both representations involved are irreducible: the defining representation of SU(2) on $``$, and the representation by the operators $`U^N`$ restricted to the symmetric subspace $`_+^N`$. Then $`T`$ can be represented as a convex combination $`T=_j\lambda _jT_j`$, with $`\lambda _j0,_j\lambda _j=1`$, and $`T_j`$ universal gates in their own right, but of an even simpler form. Universality of $`T`$ already implies that the maximum can be omitted from the definition (2) of $`\mathrm{\Delta }`$, because the fidelity no longer depends on the pure state chosen. In a convex combination of universal operators $`T_j`$ we therefore get
$$\mathrm{\Delta }(T)=\underset{j}{\sum }\lambda _j\mathrm{\Delta }(T_j).$$
(11)
Minimizing this expression is obviously equivalent to minimizing with respect to the discrete parameter $`j`$.
We write the general form of the extremal gates $`T_j`$ in terms of expectation values of the output state for an observable $`X`$ on $``$:
$$\mathrm{Tr}\left[T(\rho )X\right]=\mathrm{Tr}\left[\rho V^{\dagger }(X\otimes 𝟙)V\right],$$
(12)
where $`V:_+^N𝐂^{2j+1}`$ is an isometry intertwining the respective representations of SU(2), namely the restriction of the operators $`U^N`$ to $`_+^N`$ (which has spin $`N/2`$) on the one hand, and the tensor product of the defining representation (spin-$`1/2`$) with the irreducible spin-$`j`$ representation. By the triangle inequality for Clebsch-Gordan reduction, this implies $`j=(N/2)\pm (1/2)`$, so only two terms appear in the decomposition of $`T`$. It remains to compute $`\mathrm{\Delta }(T_j)`$ for these two values. Omitting the details of the calculations (these follow closely the arguments presented in Ref.) we find that
$$\mathrm{\Delta }(T)=\{\begin{array}{cc}1\hfill & \text{for }j=\frac{N}{2}+\frac{1}{2}\hfill \\ \frac{1}{N+2}\hfill & \text{for }j=\frac{N}{2}-\frac{1}{2}\hfill \end{array}.$$
(13)
The first value corresponds to getting the state $`\sigma `$ from a set of $`N`$ copies of $`\sigma `$. The fidelity $`1`$ is expected for this trivial task, because taking any one of the copies will do perfectly. On the other hand, the second value is the minimal error in the optimal U-NOT gate, which we were looking for. This clearly coincides with the value (5), so the Theorem is proved.
Rôle of a priori knowledge
As was noted earlier, if the input state $`|\mathrm{\Psi }\rangle =\alpha |0\rangle +\beta |1\rangle `$ is restricted to the case where the coefficients $`\alpha `$ and $`\beta `$ are real, then it is possible to construct a perfect quantum NOT gate. A measurement-based strategy in this case does not do as well. Specifically, the mean fidelity of optimal estimation in the present case increases as a function of the number of input qubits as (see ) $`\frac{1}{2}+\frac{1}{2^{N+1}}\sum _{j=0}^{N-1}\sqrt{\left(\genfrac{}{}{0pt}{}{N}{j}\right)\left(\genfrac{}{}{0pt}{}{N}{j+1}\right)}`$, and it attains a value equal to unity only in the limit $`N\to \mathrm{\infty }`$. This means that with a priori knowledge of the set of inputs, the quantum NOT can perform better than the measurement-based strategy.
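For concreteness, the quoted mean estimation fidelity for real-coefficient states can be tabulated directly (a small sketch added here; the values of $`N`$ are arbitrary):

```python
from math import comb, sqrt

def F_real_estimation(N):
    # Mean estimation fidelity quoted above for input states with real coefficients.
    return 0.5 + sum(sqrt(comb(N, j) * comb(N, j + 1)) for j in range(N)) / 2 ** (N + 1)

for N in (1, 2, 5, 20, 100):
    print(N, round(F_real_estimation(N), 4))
# The values approach unity only as N grows, whereas the quantum NOT acting on such
# "real" states is already exact for a single copy (a rotation by pi about the x2 axis).
```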
Summarizing our conclusions, we have shown that there is another difference between classical and quantum information: classical bits can be complemented, while arbitrary qubits cannot. It is, none the less, possible to construct approximate quantum-complementing devices the quality of whose output is independent of the state of their input. These devices we called U-NOT gates. They are closely related to quantum cloners, and exploiting this connection it is possible to find an explicit transformation for a $`N`$-qubit input and $`M`$-qubit output U-NOT gate. When there is no a priori information available about the state of input qubits then these U-NOT gates do not do better than a measurement-based strategy. On the other hand, as we have shown, partial a priori information can dramatically improve performance of the U-NOT gate.
This work was in part supported by the Royal Society and by the Slovak Academy of Sciences. V.B. and R.F.W. thank the Benasque Center for Physics where part of this work was carried out.
no-problem/9901/cond-mat9901292.html
# Heteropolymers in a Solvent at an Interface
## Abstract
Exact bounds are obtained for the quenched free energy of a polymer with random hydrophobicities in the presence of an interface separating a polar from a non polar solvent. The polymer may be ideal or have steric self-interactions. The bounds allow to prove that a “neutral” random polymer is localized near the interface at any temperature, whereas a “non-neutral” chain is shown to undergo a delocalization transition at a finite temperature. These results are valid for a quite general a priori probability distribution for both independent and correlated hydrophobic charges. As a particular case we consider random AB-copolymers and confirm recent numerical studies.
PACS numbers: 61.41.+e, 05.40.+j, 36.20.-r
The statistical behavior of heteropolymers has been intensively studied in recent years . They model random copolymers and to some extent protein folding . For example, a chain composed of hydrophobic and hydrophilic (polar or charged) components in a polar (aqueous) solvent evolves toward conformations where the hydrophobic parts are buried in order to avoid water, whereas the polar parts are mainly exposed to the solvent . This is what commonly happens to proteins and makes them soluble in aqueous solutions. However, other proteins (e.g. structural proteins) are almost insoluble under physiological conditions and prefer to form aggregates . Many of the proteins which are insoluble in water are segregated into membranes, which have a lipid bilayer structure . Membrane proteins have a biological importance at least as great as those which are water soluble. Usually one distinguishes integral and non-integral membrane proteins according to whether the protein is mostly immersed in the lipid bilayer or simply anchored to the membrane, respectively (in the latter case the protein is essentially water soluble) . An analogous situation occurs for copolymers at interfaces separating two immiscible fluids. If the solvents are selective (i.e. poor for one of the two species and good for the other), AB-copolymers are found to stabilize the interface . Moreover, random copolymers have been claimed to be more effective in carrying out this reinforcement action .
The simplest theoretical approach to the above problems has been proposed by Garel et al. . In the case of membrane proteins the finite layer of lipidic environment is modeled as an infinite semi-space. Though a quite rough approximation, this is the simplest attempt in capturing the relevant features due to the competition of different selective effects .
We will study a lattice discretized version of their model. The nodes of an $`N`$-link chain occupy the sites $`\stackrel{}{r}_i=(x_{i1},\mathrm{\dots },x_{id})`$, $`i=0,\mathrm{\dots },N`$ of a $`d`$-dimensional hypercubic lattice. A flat interface passing through the origin and perpendicular to the $`\stackrel{}{u}=(1,\mathrm{\dots },1)`$ direction separates a polar (e.g. water), on the $`\stackrel{}{u}\cdot \stackrel{}{r}>0`$ side, from a nonpolar (e.g. oil or air), on the $`\stackrel{}{u}\cdot \stackrel{}{r}<0`$ side, solvent. The $`i`$-th monomer interacts with the solvent through its hydrophilic charge $`q_i>0`$ ($`q_i<0`$ means that it is hydrophobic) and contributes to the energy with a term $`-q_i\mathrm{sgn}(\stackrel{}{u}\cdot \stackrel{}{r}_i)`$ . For simplicity we can associate the charge to the link between adjacent positions on the chain instead of to the single monomer.
The partition function of the model for a chain $`W`$ starting at position $`\stackrel{}{r}`$ is
$$𝒵(\stackrel{}{r},\{q_i\})=\underset{W:\stackrel{}{r}}{\sum }\mathrm{exp}\left\{\beta \underset{i=1}{\overset{N}{\sum }}q_i\mathrm{sgn}(\stackrel{}{u}\cdot \stackrel{}{r}_i)\right\},$$
(1)
where $`\beta ^{-1}=k_BT`$. If one sums over non-interacting (ideal) chains, then the lattice version of the model introduced in Ref. is recovered. We will consider also the more physical case where steric interaction among monomers does not allow for multiple occupancy of lattice nodes, studied for a particular case in Ref. .
The free energy density of the system reads $`f(\stackrel{}{r},\beta )=-lim_{N\to \mathrm{\infty }}\frac{1}{\beta N}\overline{\mathrm{ln}𝒵(\stackrel{}{r},\{q_i\})}`$, where $`\overline{\mathrm{\dots }}`$ denotes the quenched average over the distribution of the charges $`\{q_i\}`$. In the following we will assume that $`\{q_i\}`$ are independent random variables having a Gaussian distribution of the form
$$P(q_i)=\frac{1}{\sqrt{2\pi \mathrm{\Delta }^2}}\mathrm{exp}\left[-\frac{(q_i-q_0)^2}{2\mathrm{\Delta }^2}\right].$$
(2)
More general cases will be treated at the end. In particular considering charges not independently distributed is of interest for designed sequences as happens for real proteins.
We will show that for a neutral chain ($`q_0=0`$) $`f(0,\beta )<f(\stackrel{}{r},\beta )`$ with $`|\stackrel{}{r}|\sim N`$, in the large $`N`$ limit, and for all $`\beta `$. The same holds also for $`q_0\ne 0`$ if $`\beta >\beta _{upper}(|q_0|,\mathrm{\Delta })`$, with $`\beta _{upper}\to 0`$ if $`\frac{|q_0|}{\mathrm{\Delta }}\to 0`$. This implies that the chain is localized around the interface at any temperature if $`q_0=0`$, and at sufficiently low temperature if $`q_0\ne 0`$. The proof is rigorous for the ideal chain, whereas for the self-avoiding case only a mild and well accepted hypothesis on the asymptotic behavior of the entropy is needed. When $`q_0\ne 0`$ a rigorous lower bound on the free energy for both the ideal and the self-avoiding chain allows us to determine a $`\beta _{lower}(|q_0|,\mathrm{\Delta })`$ below which the chain is delocalized.
We will first consider the ideal chain case and then explain the modifications necessary to extend the results to self-avoiding chains.
Ideal chain. For clarity we derive the bounds in the $`d=1`$ case. The general case does not contain any further difficulty . Let us first consider initial positions far from the interface in the favorable solvent, $`x>N`$ if $`q_0>0`$ or $`x<-N`$ if $`q_0<0`$. Under these assumptions all chains remain in the same side, implying that $`\mathrm{sgn}\left(x_i\right)=1`$ (or $`\mathrm{sgn}\left(x_i\right)=-1`$, respectively) for all $`i`$. Upon averaging over the charge distribution, we obtain the free energy density of a walk in the favorable solvent:
$$f^{*}=-\frac{1}{\beta }\mathrm{ln}2-|q_0|.$$
(3)
We give an upper bound to the free energy as follows. Consider only chains made up of blobs of $`k`$ steps, with $`k`$ even. Bringing a blob in its globally favored side leads to an energy contribution of the form $`H_{j\text{-th blob}}=-\left|\sum _{i=1}^kq_{k(j-1)+i}\right|`$, so that
$$𝒵(x=0,\{q_i\})\ge \left(C_k\right)^{\frac{N}{k}}\mathrm{exp}\left\{\beta \underset{j=1}{\overset{\frac{N}{k}}{\sum }}\left|\underset{i=1}{\overset{k}{\sum }}q_{k(j-1)+i}\right|\right\},$$
where $`C_k`$ is the number of chains starting and ending in the origin and remaining in the same side. In the one-dimensional case it is easy to determine $`C_k`$ exactly. It turns out that $`C_k={\displaystyle \frac{1}{\frac{k}{2}+1}}\left(\begin{array}{c}k\\ \frac{k}{2}\end{array}\right)`$, so that, by using Stirling’s formula, the asymptotic result $`C_k\simeq 2^kk^{-\frac{3}{2}}`$ is found (in $`d`$ dimensions $`C_k\simeq (2d)^kk^{-\frac{d+2}{2}}`$ ). The upper bound on the free energy is then:
$$f(0,\beta )\le -\frac{1}{\beta }\frac{\mathrm{ln}C_k}{k}-\frac{1}{k}\overline{\left|\underset{i=1}{\overset{k}{\sum }}q_i\right|},$$
(4)
and, by using Eq. (2), we obtain:
$`\mathrm{\Delta }f`$ $`=`$ $`f(0,\beta )-f^{*}\le h_{q_0}(k,\beta )`$ (5)
$`\equiv `$ $`{\displaystyle \frac{1}{\beta }}\left[\mathrm{ln}2-{\displaystyle \frac{\mathrm{ln}C_k}{k}}\right]-|q_0|G\left({\displaystyle \frac{\sqrt{k}|q_0|}{\sqrt{2}\mathrm{\Delta }}}\right),`$ (6)
where the scaling function $`G`$ is given by
$$G(x)=\frac{1}{\sqrt{\pi }}\frac{1}{x}e^{-x^2}-\left[1-\mathrm{erf}(x)\right].$$
(7)
$`G(x)`$ is a positive decreasing monotonic function for positive arguments.
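Equations (6)-(7) follow from the mean of a folded Gaussian: for the distribution (2) one has $`\frac{1}{k}\overline{\left|\sum _{i=1}^kq_i\right|}=|q_0|\left[1+G\left(\sqrt{k}|q_0|/\sqrt{2}\mathrm{\Delta }\right)\right]`$. This identity can be checked by a quick Monte-Carlo sketch (added here; the parameter values are arbitrary):

```python
import numpy as np
from math import erf, exp, pi, sqrt

def G(x):
    # Scaling function of Eq. (7).
    return exp(-x * x) / (sqrt(pi) * x) - (1.0 - erf(x))

rng = np.random.default_rng(1)
q0, Delta, k = 0.3, 1.0, 50                     # illustrative parameters
q = rng.normal(q0, Delta, size=(100_000, k))

lhs = np.abs(q.sum(axis=1)).mean() / k          # (1/k) * mean of |sum_i q_i|
x = sqrt(k) * abs(q0) / (sqrt(2.0) * Delta)
rhs = abs(q0) * (1.0 + G(x))                    # folded-Gaussian closed form
print(round(lhs, 4), round(rhs, 4))
```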
We consider separately the neutral and the $`q_0\ne 0`$ cases. In the neutral case it turns out that the chain is always localized at the interface. In fact, if $`q_0=0`$ we have
$$\mathrm{\Delta }f\le h_0(k,\beta )=\frac{1}{\beta }\left[\mathrm{ln}2-\frac{\mathrm{ln}C_k}{k}\right]-\sqrt{\frac{2}{\pi }}\mathrm{\Delta }\frac{1}{\sqrt{k}}.$$
(8)
It is easy to see that for any $`\beta `$ there exists a value $`k(\beta )`$ such that $`h_0(k,\beta )<0`$ for $`k>k(\beta )`$. For example, at high temperature $`k(\beta )\simeq \left[\mathrm{ln}\left(\beta \mathrm{\Delta }\right)\right]^2(\beta \mathrm{\Delta })^{-2}`$. This shows that at any temperature a neutral random chain is always adsorbed by the interface.
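A direct way to see this is to scan the bound (8) numerically; the sketch below (an illustration added here, with $`\mathrm{\Delta }=1`$) finds, for a few temperatures, the smallest even blob size for which $`h_0(k,\beta )`$ turns negative:

```python
from math import comb, log, pi, sqrt

def h0(k, beta, Delta=1.0):
    # Right-hand side of Eq. (8), with the exact one-dimensional C_k.
    Ck = comb(k, k // 2) // (k // 2 + 1)
    return (log(2.0) - log(Ck) / k) / beta - sqrt(2.0 / pi) * Delta / sqrt(k)

def k_localization(beta, kmax=2000):
    # Smallest even k with h0(k, beta) < 0; its existence signals localization.
    for k in range(2, kmax + 1, 2):
        if h0(k, beta) < 0.0:
            return k
    return None

for beta in (2.0, 1.0, 0.5):
    print(beta, k_localization(beta))
```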
In the non-neutral case with $`k=2`$, one has $`\mathrm{\Delta }f<0`$ if
$$\beta >\beta _{upper}=\frac{\mathrm{ln}2}{|q_0|G(\frac{|q_0|}{\mathrm{\Delta }})}.$$
(9)
The limit $`lim_{k\to \mathrm{\infty }}h_{q_0}(k,\beta )=0_+`$ does not allow us to deduce the existence of a negative minimum in $`k`$, so that the previous argument, showing that the neutral chain is always localized, does not hold for $`q_0\ne 0`$. Equation (9) proves localization at sufficiently low temperatures. For $`|q_0|\ll \mathrm{\Delta }`$, it yields $`\beta _{upper}=\frac{\sqrt{\pi }\mathrm{ln}2}{\mathrm{\Delta }}`$, and in the opposite regime $`\mathrm{\Delta }\ll |q_0|`$, $`\beta _{upper}=2\sqrt{\pi }\mathrm{ln}2e^{\frac{|q_0|^2}{\mathrm{\Delta }^2}}\frac{|q_0|^2}{\mathrm{\Delta }^3}`$. In the limit $`|q_0|/\mathrm{\Delta }\ll 1`$ we can give a better estimate for $`\beta _{upper}`$, such that $`\beta _{upper}\to 0`$ as $`|q_0|\to 0`$, by considering a larger blob size $`k=2x_0^2\left(\mathrm{\Delta }/|q_0|\right)^2`$, where $`x_0\gg |q_0|/\mathrm{\Delta }`$ is fixed:
$$\beta _{upper}=\frac{3}{2x_0^2G(x_0)}\mathrm{ln}\left(\sqrt{2}x_0\mathrm{\Delta }/|q_0|\right)\frac{|q_0|}{\mathrm{\Delta }^2}.$$
(10)
We now look for a lower bound on $`f`$ for all chain initial positions. If, for example, $`q_0>0`$, the preferred side is the right one. Consider then the starting point at $`x=N-k`$ ($`0<k\le N`$) and let $`g_E`$, with $`E`$ some subset of the last $`k`$ steps of the walk ($`0<|E|\le k`$ with $`|E|`$ the number of elements in $`E`$), be the number of walks having the steps belonging to $`E`$ in the unfavorable side. Upon defining $`G_k=\sum _Eg_E`$ ($`G_k<2^k`$) one has
$`\begin{array}{c}𝒵(x=N-k,\{q_i\})=\left(2^N-G_k\right)e^{\beta \sum _{i=1}^Nq_i}+\hfill \\ +e^{\beta \sum _{i=1}^Nq_i}{\displaystyle \underset{E}{\sum }}g_Ee^{-2\beta \sum _{i\in E}q_i},\hfill \end{array}`$ (13)
so that, by using the inequality $`\mathrm{ln}\overline{x}\ge \overline{\mathrm{ln}x}`$ and averaging over the charge distribution, one obtains
$$f(N-k)\ge f^{*}-\frac{1}{\beta N}\mathrm{ln}\left[1+\underset{E}{\sum }g_E\frac{a(\mathrm{\Delta },q_0,\beta )^{|E|}-1}{2^N}\right],$$
with $`a(\mathrm{\Delta },q_0,\beta )=\mathrm{exp}\left[2\mathrm{\Delta }^2\beta ^2-2\beta q_0\right]`$. This equation and its analogue in the $`q_0<0`$ case show that there is a delocalization temperature
$$\beta _{lower}=\frac{|q_0|}{\mathrm{\Delta }^2}$$
(14)
such that, if $`\beta <\beta _{lower}`$, $`f(N-k)\to f^{*}`$ and the chain delocalizes. This argument can be extended to the $`d`$-dimensional case, in which $`1\le G_k<(2d)^k`$, $`𝒵_d^{*}(q_i)=(2d)^Ne^{\beta \sum _{i=1}^Nq_i}`$ and $`f_d^{*}=-\frac{1}{\beta }\mathrm{ln}2d-|q_0|`$.
The bounds we have proved above allow to conclude that there is a critical value $`\beta _c`$ such that for values of $`\beta `$ smaller than $`\beta _c`$ the chain is delocalized in the favorable solvent, while for larger values it is adsorbed by the interface, with the estimates $`\beta _{lower}<\beta _c<\beta _{upper}`$ (it is easy to verify that $`\beta _{lower}<\beta _{upper}`$). The lower bound (14) and the upper bound (10), in the limit $`|q_0|/\mathrm{\Delta }1`$, show the same behavior found by using both an Imry-Ma type argument and variational approaches .
Self-avoiding chain. All the results shown for a random chain can be readily generalized for a self-avoiding chain. Namely, a neutral chain is localized at all temperatures, whereas a non-neutral chain undergoes a localization transition at some critical temperature $`\beta _c`$.
The delocalization temperature $`\beta _{lower}`$ can be derived exactly in the same way, since the division of walks into classes according to the number of steps made in the unfavorable solvent does not depend on the self-avoidance constraint.
The upper bounds on the free energy, which allow us to prove chain localization, require instead some refinements with respect to the previous case. While the energy term is computed in the same way as before, the entropy term is different. Firstly, the connective constant ($`\kappa =2d`$ for a random walk in $`d`$ dimensions) is different. We recall that the existence of the connective constant, $`\kappa =lim_{N\to \infty }\mathrm{ln}S_N/N`$, for self-avoiding walks (SAW) has been rigorously established ($`S_N`$ is the total number of $`N`$-step SAWs starting from the same site). The subleading correction of the form $`S_N\sim \kappa ^NN^{\gamma -1}`$ is widely agreed upon, although not rigorously proved . Secondly, we introduce the notion of loop, following e.g. , and consider only walks made up of $`N/k`$ blobs, each blob being a $`k`$-loop, in such a way that different blobs can be embedded independently, as for a random chain. An $`N`$-loop is an $`N`$-step SAW, starting and ending on the interface, which always remains in the same half-space, with the further condition $`x_{01}x_{02}\le x_{i1}x_{i2}<x_{N1}x_{N2}\forall i`$. It has been proved that the free energy density of loops is the same as for SAW, $`\kappa _l\equiv lim_{N\to \infty }\mathrm{ln}L_N/N=\kappa `$, where $`L_N`$ is the number of $`N`$-loops. The subleading correction is usually assumed in the same form as for the number of SAW:
$$L_N\sim \kappa ^NN^{\gamma _s-1}.$$
(15)
These considerations are sufficient to generalize the previous results to the self-avoiding case, yielding the following bounds for the critical temperature:
$$\frac{\mathrm{ln}\kappa }{|q_0|G(\frac{\sqrt{2}|q_0|}{\mathrm{\Delta }})}\ge \beta _c\ge \frac{|q_0|}{\mathrm{\Delta }^2},$$
(16)
which do not depend on the assumption (15) and are therefore rigorous. Again, in the limit $`|q_0|/\mathrm{\Delta }\ll 1`$ a better estimate $`\beta _{upper}`$ can be derived by using Eq. (15):
$$\beta _{upper}=\frac{1-\gamma _s}{x_0^2G(x_0)}\mathrm{ln}\left(\sqrt{2}x_0\mathrm{\Delta }/|q_0|\right)\frac{|q_0|}{\mathrm{\Delta }^2}.$$
(17)
Generic probability distribution. Up to now we have considered the hydrophobic charges as independently distributed Gaussian random variables. Actually, the results we have proved do not depend on this assumption. We will briefly sketch this in a few cases .
The argument showing localization at any temperature for a neutral chain holds true, both for random and self-avoiding chains, if $`\overline{\left|\sum _{i=1}^kq_i\right|}\sim \sqrt{k}`$ as $`k\to \infty `$. The central limit theorem ensures this for independent random variables having a generic probability distribution with finite variance and zero mean. In the non-neutral case, the existence of a delocalization transition can be proved e.g. for a bimodal distribution. This corresponds to the more realistic case of two kinds of monomers, one hydrophilic and the other hydrophobic. We thus consider the generic bimodal distribution :
$$P\left(q_i\right)=\alpha \delta \left(q_i-q_+\right)+\left(1-\alpha \right)\delta \left(q_i+q_{-}\right),$$
(18)
with $`q_+,q_{-}>0`$. The probability distribution (18) has three independent parameters, and fixing the average charge $`q_0=\alpha \left(q_++q_{-}\right)-q_{-}`$ and the variance $`\mathrm{\Delta }=\sqrt{\alpha \left(1-\alpha \right)}\left(q_++q_{-}\right)`$ we are left with one free parameter. It is interesting to report the delocalization temperature $`\beta _{lower}`$, which provides a good estimate for the critical temperature in the previous cases:
$$\beta _{lower}^{bim}=\frac{\sqrt{\alpha \left(1-\alpha \right)}}{2\mathrm{\Delta }}\left|\mathrm{ln}\left[1+\frac{q_0}{\sqrt{\alpha \left(1-\alpha \right)}\mathrm{\Delta }-\left(1-\alpha \right)q_0}\right]\right|$$
Notice that in the limit of a nearly neutral chain ($`|q_0|\ll \mathrm{\Delta }`$) we get $`\beta _c^d\simeq \frac{|q_0|}{2\mathrm{\Delta }^2}`$, which is the same function of $`|q_0|`$ and $`\mathrm{\Delta }`$ as in the Gaussian case, suggesting the existence of a universal behavior. In the limit of a nearly homogeneous chain ($`\mathrm{\Delta }\ll |q_0|`$, which implies $`\alpha \to 0`$ or $`\alpha \to 1`$) instead $`\beta _{lower}^{bim}=\frac{1}{2\left(q_++q_{-}\right)}\left|\mathrm{ln}\left[\frac{\alpha }{1-\alpha }\frac{q_+}{q_{-}}\right]\right|`$ diverges logarithmically, in contrast with the Gaussian case.
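As a purely numerical cross-check (not part of the original derivation), the short Python sketch below evaluates $`\beta _{lower}^{bim}`$ as written above for arbitrary illustrative parameter values and compares it with the two quoted asymptotic forms; the function name and the parameter values are ours.

```python
import numpy as np

def beta_lower_bim(alpha, q_plus, q_minus):
    """Delocalization-temperature estimate for the bimodal distribution,
    as written above (charge q_+ with probability alpha, -q_- otherwise)."""
    q0 = alpha * (q_plus + q_minus) - q_minus
    delta = np.sqrt(alpha * (1.0 - alpha)) * (q_plus + q_minus)
    arg = 1.0 + q0 / (np.sqrt(alpha * (1.0 - alpha)) * delta - (1.0 - alpha) * q0)
    return np.sqrt(alpha * (1.0 - alpha)) / (2.0 * delta) * abs(np.log(arg)), q0, delta

# nearly neutral chain: expect |q0| / (2 Delta^2)
b, q0, delta = beta_lower_bim(0.5001, 1.0, 1.0)
print(b, abs(q0) / (2.0 * delta ** 2))

# nearly homogeneous chain (alpha -> 1): expect the logarithmic form
alpha, qp, qm = 0.999, 1.0, 1.0
b, q0, delta = beta_lower_bim(alpha, qp, qm)
print(b, abs(np.log(alpha / (1.0 - alpha) * qp / qm)) / (2.0 * (qp + qm)))
```

Both printed pairs agree to the expected accuracy, illustrating the two limits quoted above.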
We consider now the case in which the hydrophobic charges $`\left\{q_i\right\}`$ are not independent random variables, but are Gaussianly distributed with $`\overline{q_i}=q_0\forall i`$, $`\overline{q_iq_j}-\overline{q_i}\overline{q_j}=M_{ij}^{-1}`$. We assume $`M_{ii}^{-1}=\mathrm{\Delta }^2\forall i`$, in analogy with the non-correlated case, and also translational invariance along the chain for the correlation matrix: $`M_{ij}=b(\left|i-j\right|)`$. One can prove that if long range correlations decay exponentially or even algebraically the neutral chain is again localized at all temperatures. In fact, by assuming an algebraic decay, $`b\left(r\right)\sim r^{-\eta }`$, it turns out that $`\overline{\left|\sum _{i=1}^kq_i\right|}\sim k^{\delta /2}`$ with $`\delta =\mathrm{min}(\eta ,1)`$. Only if the correlations are so strong that they do not vanish along the chain ($`\eta =0`$) does the chain fail to localize at all temperatures.
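The quantity controlling the blob argument is $`\overline{\left|\sum _{i\le k}q_i\right|}`$. The following Monte Carlo sketch illustrates its growth with $`k`$ for a neutral chain with short-ranged (exponentially decaying) correlations; note that, for simplicity, the sketch specifies the charge covariance directly (with an assumed correlation length) rather than parametrizing its inverse $`M`$ as in the text, so it is only meant to show that short-ranged correlations preserve the $`\sqrt{k}`$ growth.

```python
import numpy as np

rng = np.random.default_rng(0)
N, delta, xi = 512, 1.0, 5.0          # chain length, charge scale, assumed correlation length

i = np.arange(N)
cov = delta ** 2 * np.exp(-np.abs(i[:, None] - i[None, :]) / xi)   # q0 = 0 assumed
L = np.linalg.cholesky(cov + 1e-10 * np.eye(N))
charges = rng.standard_normal((2000, N)) @ L.T                     # correlated Gaussian charges

for k in (8, 32, 128, 512):
    mean_abs = np.mean(np.abs(charges[:, :k].sum(axis=1)))
    print(k, round(mean_abs, 2), "  mean|sum|/sqrt(k) =", round(mean_abs / np.sqrt(k), 2))
```

For $`k`$ much larger than the correlation length the last column saturates, i.e. the square-root growth of the neutral-chain argument is recovered.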
In the non neutral case the existence of the transition can be proved. For example the estimate of the delocalization temperature is
$$\beta _{lower}^{corr}=\underset{E}{\mathrm{min}}\left\{\frac{\left|E\right|\left|q_0\right|}{\sum _{i,j\in E}M_{ij}^{-1}}\right\}.$$
(19)
If the charges are positively correlated ($`M_{ij}^{-1}>0`$ for $`i\ne j`$), chain localization is more favored than in the non-correlated case, whereas if charges are anti-correlated ($`M_{ij}^{-1}<0`$ for $`i\ne j`$) it is less favored.
Asymmetric interface potentials. Finally we extend our demonstrations to the random AB-copolymers studied by Sommer et al. . Their model corresponds to considering the following Hamiltonian:
$$\mathcal{H}=-\sum _i|q_i|\left[\lambda \theta (q_i)\theta (\vec{u}\cdot \vec{r}_i)+\theta (-q_i)\theta (-\vec{u}\cdot \vec{r}_i)\right],$$
(20)
with the charges distributed according to Eq. (18) with $`\alpha =1/2`$ and $`q_+=q_{-}`$ ($`|\lambda -1|`$ measures the potential asymmetry). Such an AB-copolymer (Eq. (20)) is equivalent to a non-neutral chain in symmetric potentials ($`\lambda \ne 1`$), a case that we have already discussed. We have proved the existence of a delocalization transition for a neutral chain also in the Gaussian case. For both distributions, the delocalization temperature shows the behavior $`\beta _{lower}\sim \frac{|\lambda -1|}{\mathrm{\Delta }}`$ in the limit of nearly symmetric potentials ($`\lambda \to 1`$), in agreement with the scaling law and the numerical results found in . On the contrary, in the highly asymmetric cases (small and large $`\lambda `$) different asymptotic behaviors for $`\beta _{lower}`$ occur .
To conclude, in this Letter we have proved several exact results on random heteropolymers in the presence of an interface. Namely, a neutral chain is localized at all temperatures, whereas a charged chain delocalizes at a finite temperature. The results are quite general and hold for ideal and self-avoiding chains, Gaussian and bimodal distribution with independent and correlated charges. Furthermore, our lower bounds for the transition temperature confirm previous estimates.
We would like to thank Jayanth Banavar, Cristian Micheletti and Flavio Seno for useful discussion and ongoing collaboration, and Enzo Orlandini for bringing references to our attention.
# Limits on the gravity wave contribution to microwave anisotropies
## I Introduction
The presence of a primordial gravitational wave perturbation spectrum was an early prediction of inflationary models of the big bang . However, it was not until the results of the COBE satellite mission that it became possible to begin to meaningfully constrain the tensor contribution to the overall perturbation spectrum . In an early result, Salopek found that, assuming power-law inflation, tensors must contribute less than about $`50\%`$ of the cosmic microwave background (CMB) fluctuations at the $`10^{\circ }`$ scale.
Since that time, ground- and balloon-based experiments have begun to fill in the smaller-scale regions of the CMB power spectrum. These scales are crucial for constraining the gravity wave contribution because the tensor spectrum is expected to be negligible on scales finer than $`1^{\circ }`$, and therefore large-scale power greater than that expected for scalars can be attributed to tensors. Markevich and Starobinsky have set some stringent limits on the tensor contribution. For example, they found that the ratio of tensor to scalar components of the CMB spectrum is $`T/S<0.7`$ at $`97.5\%`$ confidence for a flat, cosmological-constant-free universe with $`H_0=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. However, their analysis used a limited CMB data set and considered only a restricted set of values for the cosmological parameters. Recently Tegmark has performed an analysis using a compilation of CMB data, and found a $`68\%`$ upper confidence limit of 0.56 on the tensor to scalar ratio. In this work specific inflationary models were not considered, but a number of parameters were allowed to vary freely. In another recent study Lesgourgues et al. analysed a particular broken-scale-invariance model of inflation with a steplike primordial perturbation spectrum, and found that the tensor to scalar ratio can reach unity. Melchiorri et al. placed limits on tensors allowing for a blue scalar spectral index, and indeed found that blue spectra and a large tensor component are most consistent with CMB observations.
Our aim here is to provide a more comprehensive answer (or set of answers) to the question: how big can $`T/S`$ be? We present constraints on tensors for specific models of inflation as well as for freely varying parameters. In all cases we marginalize over the important, but as yet undetermined, cosmological parameters. We use both COBE and small-scale CMB data, as well as information about the matter power spectrum from galaxy correlation, cluster abundance, and Lyman $`\alpha `$ forest measurements. We refer to these various measurements of the power spectra as “data sets”. We additionally consider the effect, for each data set, of various observational constraints on the cosmological parameters, such as the age of the universe, cluster baryon density, and recent supernova measurements. We refer to these constraints as “parameter constraints” (this separation between “data sets” and “parameter constraints” is somewhat subjective, but dealt with consistently in our Bayesian approach; it is conceptually simpler to consider power spectrum constraints as measurements with some Gaussian error, while regarding allowed limits on cosmological parameters as restrictions on parameter space). Finally we consider what implications our results have for the direct detection of primordial gravity waves.
## II Inflation models
Our goal is to provide limits on the tensor contribution to the primordial perturbation spectra using a variety of recent observations. In models of inflation, the scalar (density) and tensor (gravity wave) metric perturbations produced during inflation are specified by two spectral functions, $`A_\mathrm{S}(k)`$ and $`A_\mathrm{T}(k)`$, for wave number $`k`$. These spectra are determined by the inflaton potential $`V(\varphi )`$ and its derivatives . However, when comparing model predictions with actual observations of the CMB, it is more useful to translate the inflationary spectra into the predicted multipole expansions of the CMB temperature field: $`\mathrm{\Delta }T/T(\theta ,\varphi )=\sum _{\ell m}a_{\ell m}Y_{\ell m}(\theta ,\varphi )`$, where $`Y_{\ell m}(\theta ,\varphi )`$ are the spherical harmonics. The spectrum $`C_{\ell }\equiv \langle |a_{\ell m}|^2\rangle `$ can be decomposed into scalar and tensor parts, $`C_{\ell }=C_{\ell }^\mathrm{S}+C_{\ell }^\mathrm{T}`$. In the literature the tensor to scalar ratio is conventionally specified either at $`\ell =2`$ or in the spectral plateau at $`\ell \sim 10`$–20. Here we have chosen the $`\ell =2`$ or quadrupole moments of the temperature field, and write $`S\equiv 5C_2^\mathrm{S}/4\pi `$ and $`T\equiv 5C_2^\mathrm{T}/4\pi `$ as usual .
In order to constrain the tensor contribution $`T/S`$, we need to specify the particular model of inflation under consideration. This is because the model may provide a specific relationship between the ratio $`T/S`$ and the scalar spectral index $`n_\mathrm{S}`$. Except in Sec. VI we only consider spatially flat inflation models (i.e. $`\mathrm{\Omega }_0+\mathrm{\Omega }_\mathrm{\Lambda }=1`$, where $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=\mathrm{\Lambda }/(3H_0^2)`$ are the fractions of critical density due to matter and a cosmological constant, respectively). In addition, we do not consider the “quintessence” models , where a significant fraction of the critical density is currently in the form of a scalar field with equation-of-state different from that of matter, radiation, or cosmological constant (although it would not be difficult to extend our results for explicit models with recent epoch dynamical fields). We will also restrict ourselves to models which use the slow-roll approximation, and incorporate only a single dynamical field – a class of models sometimes called “chaotic inflation” . This is not as restrictive as it might sound, since most viable inflationary models are of this form. Although some genuinely two-field models are known , many multi-field models, the “hybrid” class, have only one field dynamically important and in these cases we effectively regain the single field case . In addition, theories which modify general relativity (e.g. “extended” inflation ) can often be recast as ordinary general relativity with a single effective scalar field .
It is convenient to classify inflationary models as either “small-field”, “large-field”, or the already mentioned hybrid models . Small-field models are characterized by an inflaton field which rolls from a potential maximum towards a minimum at $`\varphi 0`$. These models generally produce negligible tensor contribution, but may result in the spectral index $`n_\mathrm{S}`$ differing significantly from scale invariance . In hybrid models, the important scalar field rolls towards a potential minimum with non-zero vacuum energy. These models also typically have very small $`T/S`$, and the scalar index can be greater than unity . The large-field models involve so-called “chaotic” initial conditions, where an inflaton initially displaced from the potential minimum rolls towards the origin. Large-field models can produce large $`T/S`$ and $`1-n_\mathrm{S}`$, and these are the models considered in this paper. This is not to say that small-field and hybrid models are not interesting; on the contrary, current views of inflation in the particle physics context suggest that $`T/S`$ is expected to be small . However, large-field models must be considered when examining the observational evidence for a large tensor contribution.
It is also worth pointing out that we could construct models with a dip in the scalar power spectrum at large scales which compensates for the tensor contribution. Although we have not explored detailed models, we imagine that in principle models could be constructed with arbitrarily high $`T/S`$. We consider all such models with features at relevant scales to be unappealing unless there are separate physical arguments for them.
In addition to considering models with free scalar index and tensor contribution $`T/S`$, we shall thus focus on two classes of inflationary models which can be considered representative of those predicting large gravity wave contributions. Both are restricted to “red” spectral tilts, $`n_\mathrm{S}\le 1`$. The first, “power-law inflation” (PLI) , is characterized by exponential inflaton potentials of the form
$$V(\varphi )\propto \mathrm{exp}\left(-\sqrt{\frac{16\pi \varphi ^2}{qm_{\mathrm{Pl}}^2}}\right),$$
(1)
and results in a scale factor growth $`a(t)t^q`$, hence the name. For PLI the tensor-to-scalar ratio in $`k`$-space can be calculated exactly as a function of $`n_\mathrm{S}`$:
$$\frac{A_\mathrm{T}^2(k)}{A_\mathrm{S}^2(k)}=\frac{1-n_\mathrm{S}}{3-n_\mathrm{S}}.$$
(2)
Note the tensor contribution is directly related to the scalar spectral index $`n_\mathrm{S}`$, which is further related to the tensor spectral index $`n_\mathrm{T}=n_\mathrm{S}-1`$ in this model. Converting from $`k`$-space to the observed anisotropy spectrum introduces a dependence on the cosmological constant which can be approximated by
$$T/S=-7\stackrel{~}{n}\left[0.97+0.58\stackrel{~}{n}+0.25\mathrm{\Omega }_\mathrm{\Lambda }-\left(1+1.1\stackrel{~}{n}+0.28\stackrel{~}{n}^2\right)\mathrm{\Omega }_\mathrm{\Lambda }^2\right],$$
(3)
where $`\stackrel{~}{n}n_\mathrm{S}1=n_\mathrm{T}`$. The dependence on $`\mathrm{\Lambda }`$ arises because of different evolution for scalars and tensors when $`\mathrm{\Lambda }`$ dominates at late times. The dependence on other cosmological parameters is negligible .
We also consider the large-field polynomial potentials,
$$V(\varphi )\propto \varphi ^p,$$
(4)
for integral $`p>1`$ . In this case both $`n_\mathrm{S}`$ and $`n_\mathrm{T}`$ are determined by the exponent $`p`$ :
$$n_\mathrm{S}=1-\frac{2p+4}{p+200},$$
(5)
$$n_\mathrm{T}=-\frac{2p}{p+200}.$$
(6)
The tensor index may be related to $`T/S`$ through the consistency relation
$$\frac{T}{S}=-7\frac{f_\mathrm{T}^{(0)}}{f_\mathrm{S}^{(0)}}n_\mathrm{T},$$
(7)
where the cosmological parameter dependence, again dominated by $`\mathrm{\Omega }_\mathrm{\Lambda }`$, can be approximated by
$`f_\mathrm{S}^{(0)}`$ $`=`$ $`1.04-0.82\mathrm{\Omega }_\mathrm{\Lambda }+2\mathrm{\Omega }_\mathrm{\Lambda }^2,`$ (8)
$`f_\mathrm{T}^{(0)}`$ $`=`$ $`1.0-0.03\mathrm{\Omega }_\mathrm{\Lambda }-0.1\mathrm{\Omega }_\mathrm{\Lambda }^2.`$ (9)
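The chain of relations (5)–(9) is easily evaluated numerically; the sketch below tabulates $`n_\mathrm{S}`$, $`n_\mathrm{T}`$ and $`T/S`$ for a few exponents $`p`$ (taking $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ for simplicity).

```python
def poly_inflation(p, omega_lambda=0.0):
    """Spectral indices and T/S for V ~ phi^p, Eqs. (5)-(9)."""
    n_s = 1.0 - (2.0 * p + 4.0) / (p + 200.0)
    n_t = -2.0 * p / (p + 200.0)
    f_s = 1.04 - 0.82 * omega_lambda + 2.0 * omega_lambda ** 2
    f_t = 1.0 - 0.03 * omega_lambda - 0.1 * omega_lambda ** 2
    return n_s, n_t, -7.0 * (f_t / f_s) * n_t

for p in (2, 4, 8, 16):
    n_s, n_t, ts = poly_inflation(p)
    print("p =", p, " n_S =", round(n_s, 3), " n_T =", round(n_t, 4), " T/S =", round(ts, 3))
```

For these inputs $`p=8`$ gives $`T/S\approx 0.52`$, in line with the correspondence between the PLI limit and $`p<8`$ quoted in Sec. IX.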
As a third possibility, we will also consider models with scalar index varying over the range $`n_\mathrm{S}=0.8`$–1.2, but with an independently varying tensor contribution $`T/S`$.
## III Microwave background anisotropies
In order to evaluate likelihoods and confidence limits for $`T/S`$ based on CMB measurements, we performed $`\chi ^2`$ fits of model $`C_{\ell }`$ spectra to CMB data. We did this for a set of “band-power” estimates of anisotropy at different scales, and separately for the COBE data themselves. For our first approach, we used a collection of binned data to represent the anisotropies as a function of $`\ell `$. Specifically we took the flat-spectrum effective quadrupole values listed in Smoot and Scott and binned them into nine intervals separated logarithmically in $`\ell `$. We chose this simplified approach since we anticipated a large computational effort in covering a reasonably large parameter space. The use of binned data has been shown elsewhere to give similar results to more thorough methods. If anything, there is a bias towards lowering the height of any acoustic peak, inherent in the simplifying assumption of symmetric Gaussian error bars ; for placing upper limits on $`T/S`$ our approach is therefore conservative. We are also erring on the side of caution by using the binned data only up to the first acoustic peak, neglecting constraints from detections and upper limits at smaller angular scales.
We ignored the effect of reionization on the $`C_{\ell }`$ spectra. Reionization to optical depth $`\tau `$ reduces the power of small-scale anisotropies by $`e^{-2\tau }`$. Thus, in placing upper limits on $`T/S`$, it is conservative to set $`\tau =0`$.
A fitting function for the spectrum, valid up to the first peak at $`l\simeq 220`$, has been provided by White :
$$C_l(\nu )=\left(\frac{l}{10}\right)^\nu C_l(\nu =0),$$
(10)
where $`\nu `$ is the (nearly) degenerate combination of cosmological parameters
$$\nu \equiv n_\mathrm{S}-1-0.32\mathrm{ln}(1+0.76r)+6.8(\mathrm{\Omega }_\mathrm{B}h^2-0.0125)-0.37\mathrm{ln}(2h)-0.16\mathrm{ln}(\mathrm{\Omega }_0).$$
(11)
Here $`r\equiv 1.4C_{10}^\mathrm{T}/C_{10}^\mathrm{S}`$ is the tensor to scalar ratio at $`\ell =10`$, normalized to provide $`r=T/S`$ for $`\mathrm{\Omega }_0=1`$ and $`n_\mathrm{S}\simeq 1`$. The parameter $`h`$ is defined through $`H_0=100h\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, and $`\mathrm{\Omega }_\mathrm{B}`$ is the fraction of the critical density in baryons. Thus the standard CDM (sCDM) spectrum is specified by $`\nu =0`$. We found that the $`\mathrm{\Omega }_\mathrm{\Lambda }`$ dependence of $`r`$ can be well captured by introducing the rescaled variable $`r^{\prime }`$, defined by
$$r^{\prime }=\frac{r}{0.94+1.105\mathrm{\Omega }_\mathrm{\Lambda }^{3.75}},$$
(12)
and setting $`r^{\prime }=T/S`$.
We fitted the model spectra of Eq. (10) to the binned data as follows. For each combination of parameters ($`h,\mathrm{\Omega }_\mathrm{B}h^2,\mathrm{\Omega }_0,n_\mathrm{S},T/S`$) we normalized the model spectrum to the binned data, and evaluated the likelihood $`\mathcal{L}(h,\mathrm{\Omega }_\mathrm{B}h^2,\mathrm{\Omega }_0,n_\mathrm{S},T/S)\propto \mathrm{exp}(-\chi ^2/2)`$. Next this likelihood was integrated, uniformly in the parameter, over the ranges of $`h=0.5`$–0.8, $`\mathrm{\Omega }_\mathrm{B}h^2=0.007`$–0.024, and $`\mathrm{\Omega }_0=0.25`$–1, subject to the constraints of Eq. (3) for PLI and Eqs. (5), (6), and (7) for polynomial potentials. For the case of free $`T/S`$, the scalar index was varied in the range $`n_\mathrm{S}=0.8`$–1.2. Finally the resultant $`\mathcal{L}(T/S)`$ was normalized to a peak value of unity and the $`95\%`$ confidence limits evaluated. We tried to choose reasonable ranges for the prior probability distributions of the “nuisance parameters”, guided by the current weight of evidence. We checked that mild departures from our adopted ranges lead to only small modifications to our results. However, we caution that our conclusions will not necessarily be applicable for models which lie significantly outside the parameter space we considered. In addition, note that according to Eq. (11) we can crudely estimate an upper limit on $`T/S`$ by combining the observational lower limit on $`\nu `$ with the maximal baryon density and minimal $`\mathrm{\Omega }_0`$ and $`h`$ from our parameter ranges. However, this turns out to be an overly conservative estimate: for example, for $`n_\mathrm{S}=1`$, and using a lower limit of $`\nu =-0.2`$, Eq. (11) gives an upper limit of $`T/S=3.8`$, compared with the limit $`T/S=1.6`$ from Sec. VII.
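Schematically, the marginalization described above amounts to the following bookkeeping. The function chi2_binned below is a hypothetical stand-in for the fit of the Eq. (10) model to the nine binned band powers (the data table is not reproduced here), so the number printed is purely illustrative of the procedure, not of the result.

```python
import numpy as np

def chi2_binned(nu):
    """Hypothetical stand-in for the chi^2 of the Eq. (10) model, with slope nu,
    against the binned band powers; the real fit uses the data of Sec. III."""
    return (nu / 0.1) ** 2                   # illustrative only

def nu_of(n_s, r, ob_h2, h, omega0):
    """Degenerate slope parameter of Eq. (11)."""
    return (n_s - 1.0 - 0.32 * np.log(1.0 + 0.76 * r)
            + 6.8 * (ob_h2 - 0.0125) - 0.37 * np.log(2.0 * h) - 0.16 * np.log(omega0))

ts_grid = np.linspace(0.0, 3.0, 61)          # parameter of interest, T/S (n_S = 1 here)
like = np.zeros_like(ts_grid)
for i, ts in enumerate(ts_grid):
    for h in np.linspace(0.5, 0.8, 7):       # flat priors on the nuisance parameters
        for ob_h2 in np.linspace(0.007, 0.024, 7):
            for omega0 in np.linspace(0.25, 1.0, 7):
                omega_l = 1.0 - omega0
                r = ts * (0.94 + 1.105 * omega_l ** 3.75)    # Eq. (12), with r' = T/S
                like[i] += np.exp(-0.5 * chi2_binned(nu_of(1.0, r, ob_h2, h, omega0)))

like /= like.max()
cum = np.cumsum(like) / like.sum()
print("toy 95% upper limit on T/S:", ts_grid[np.searchsorted(cum, 0.95)])
```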
For the separate constraint from the COBE data, we used the software package CMBFAST to calculate likelihoods based only on the COBE results at large scales. CMBFAST calculates the spectrum using a line-of-sight integration technique. It then calculates likelihoods by finding a quadratic approximation to the large scale spectrum and using the COBE fits of Bunn and White . These likelihoods were integrated and $`95\%`$ limits calculated as above, except that the baryon density was fixed at $`\mathrm{\Omega }_\mathrm{B}=0.05`$ to save computation time (and since $`\mathrm{\Omega }_\mathrm{B}`$ has negligible effect at these scales). The results of this procedure are presented in Sec. VII.
## IV Large-scale structure
### A Galaxy correlations
We next applied observations of galaxy correlations to constrain $`T/S`$ indirectly through the power spectrum of the density fluctuations, $`\mathrm{\Delta }^2(k)`$. The power spectrum $`\mathrm{\Delta }^2(k)`$ is expressed, following Bunn and White , by
$$\mathrm{\Delta }^2(k)=\delta _\mathrm{H}^2\left(\frac{ck}{H_0}\right)^{3+n_\mathrm{S}}T^2(k).$$
(13)
Here $`\delta _\mathrm{H}`$ is the ($`\mathrm{\Omega }_0`$, $`n_\mathrm{S}`$, and $`r`$ dependent) normalization described in Sec. IV B, and $`T(k)`$ is the transfer function which describes the evolution of the spectrum from its primordial form to the present.
We explicitly used for the transfer function the fit of Bardeen et al.,
$$T(q)=\frac{\mathrm{ln}(1+2.34q)}{2.34q}\left[1+3.89q+(16.1q)^2+(5.46q)^3+(6.71q)^4\right]^{-1/4},$$
(14)
with the scaling of Sugiyama
$$q=\frac{k(T_{\gamma 0}/2.7\mathrm{K})^2}{\mathrm{\Omega }_0h^2\mathrm{exp}\left(-\mathrm{\Omega }_\mathrm{B}-\sqrt{h/0.5}\mathrm{\Omega }_\mathrm{B}/\mathrm{\Omega }_0\right)}.$$
(15)
Here $`T_{\gamma 0}`$ is the temperature of the CMB radiation today.
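A direct transcription of Eqs. (13)–(15), with the overall normalization $`\delta _\mathrm{H}^2`$ omitted as appropriate for the shape-only fit, might look as follows; the parameter values used are arbitrary.

```python
import numpy as np

def transfer_bbks(k, omega0, omega_b, h, t_cmb=2.725):
    """BBKS transfer function, Eq. (14), with the shape parameter of Eq. (15); k in Mpc^-1."""
    q = k * (t_cmb / 2.7) ** 2 / (omega0 * h ** 2
            * np.exp(-omega_b - np.sqrt(h / 0.5) * omega_b / omega0))
    return (np.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q) ** 2 + (5.46 * q) ** 3 + (6.71 * q) ** 4) ** (-0.25))

def delta2_shape(k, n_s, omega0, omega_b, h):
    """Eq. (13) without the overall normalization delta_H^2 (shape-only fit)."""
    c_over_h0 = 2997.9 / h                       # c/H0 in Mpc
    return (c_over_h0 * k) ** (3.0 + n_s) * transfer_bbks(k, omega0, omega_b, h) ** 2

k = np.logspace(-3, 0, 5)                        # Mpc^-1
print(delta2_shape(k, n_s=1.0, omega0=0.3, omega_b=0.05, h=0.65))
```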
We performed $`\chi ^2`$ fits of the (unnormalized) model power spectrum given by Eqs. (13) - (15) to the compilation of data provided in Table I of Peacock and Dodds , excluding their four smallest scale data points. These points were omitted because, while there are theoretical reasons to expect that the galaxy bias approaches a constant on large scales, at the smallest scales the assumption of a linear bias appears to break down . Here we are fitting for the shape of the matter power spectrum, ignoring the overall amplitude, since the normalization is complicated by the ambiguities of galaxy biasing.
The fitting was performed in exactly the same way as was described in Sec. III for the binned microwave anisotropies. Namely the model curves were normalized to the Peacock and Dodds data, the integrated likelihood was calculated, and the $`95\%`$ confidence limits for $`n_\mathrm{S}`$ were evaluated. Since the shape of the power spectrum \[Eq. (15)\] is independent of the tensor amplitude, this technique can only provide limits on $`T/S`$ when $`T/S`$ is determined by the spectral index. That is, the galaxy correlation data can only constrain $`T/S`$ for our PLI and $`\varphi ^p`$ cases, using the relationships \[Eq. (3) or Eqs. (5), (6), and (7)\] between $`T/S`$ and $`n_\mathrm{S}`$.
### B Cluster abundance
A very useful quantity for constraining the amplitude of the power spectrum is the dispersion of the density field smoothed on a scale $`R`$, defined by
$$\sigma ^2(R)=\int _0^{\infty }W^2(kR)\mathrm{\Delta }^2(k)\frac{dk}{k}.$$
(16)
Here $`W(kR)`$ is the smoothing function, which we take to be a spherical top-hat specified by
$$W(kR)=3\left[\frac{\mathrm{sin}(kR)}{(kR)^3}-\frac{\mathrm{cos}(kR)}{(kR)^2}\right].$$
(17)
Traditionally the dispersion is quoted at the scale $`8h^{-1}`$ Mpc, and given the symbol $`\sigma _8`$. For our experimental value we used the result of Viana and Liddle , who analysed the abundance of large galaxy clusters to obtain
$$\sigma _8=0.56\mathrm{\Omega }_0^{-0.47},$$
(18)
with relative $`95\%`$ confidence limits of $`-18\mathrm{\Omega }_0^{0.2\mathrm{log}_{10}\mathrm{\Omega }_0}`$ and $`+20\mathrm{\Omega }_0^{0.2\mathrm{log}_{10}\mathrm{\Omega }_0}`$ percent. Several other estimates have been published; the one we used is fairly representative, and with a more conservative error bar than most.
To compare this experimental result with the model value predicted by Eq. (16), we must fix the normalization $`\delta _\mathrm{H}`$. We used the result of Liddle et al. who fitted $`\delta _\mathrm{H}`$ using the COBE large scale normalization to obtain
$$10^5\delta _\mathrm{H}(n_\mathrm{S},\mathrm{\Omega }_0)=1.94\mathrm{\Omega }_0^{-0.785-0.05\mathrm{ln}\mathrm{\Omega }_0}\mathrm{exp}[f(n_\mathrm{S})],$$
(19)
where
$$f(n_\mathrm{S})=\{\begin{array}{cc}-0.95\stackrel{~}{n}-0.169\stackrel{~}{n}^2,\hfill & \text{No tensors,}\hfill \\ -1.00\stackrel{~}{n}+1.97\stackrel{~}{n}^2,\hfill & \text{PLI.}\hfill \end{array}$$
(20)
For the case of non-PLI tensors, we used the fitting form of Bunn et al.:
$$10^5\delta _\mathrm{H}=1.91\mathrm{\Omega }_0^{-0.80-0.05\mathrm{ln}\mathrm{\Omega }_0}\frac{\mathrm{exp}(-1.01\stackrel{~}{n})}{\sqrt{1+\left(0.75-0.13\mathrm{\Omega }_\mathrm{\Lambda }^2\right)r}}\left(1+0.18\stackrel{~}{n}\mathrm{\Omega }_\mathrm{\Lambda }-0.03r\mathrm{\Omega }_\mathrm{\Lambda }\right).$$
(21)
We calculated likelihoods for our model $`\sigma _8`$ using a Gaussian with peak and $`95\%`$ limits specified by Eq. (18), and then integrated $``$ and found limits for $`T/S`$ as in the binned microwave case.
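As a worked example of Eqs. (16)–(20), the following sketch computes a COBE-normalized model $`\sigma _8`$, using the PLI branch of Eq. (20) together with the transfer function of Sec. IV A, and compares it with the cluster value of Eq. (18); the cosmological parameters chosen are arbitrary and the brute-force integration grid is ours.

```python
import numpy as np

def sigma_R(R, n_s, omega0, omega_b, h):
    """Top-hat dispersion of Eq. (16), COBE-normalized with Eq. (19) and the
    PLI branch of Eq. (20); R in Mpc, flat Lambda universe assumed."""
    nt = n_s - 1.0
    delta_h = (1.94e-5 * omega0 ** (-0.785 - 0.05 * np.log(omega0))
               * np.exp(-1.00 * nt + 1.97 * nt ** 2))
    c_over_h0 = 2997.9 / h                              # c/H0 in Mpc
    k = np.logspace(-4, 2, 20000)                       # Mpc^-1
    q = k * (2.725 / 2.7) ** 2 / (omega0 * h ** 2
            * np.exp(-omega_b - np.sqrt(h / 0.5) * omega_b / omega0))
    T = (np.log(1 + 2.34 * q) / (2.34 * q)
         * (1 + 3.89 * q + (16.1 * q) ** 2 + (5.46 * q) ** 3 + (6.71 * q) ** 4) ** (-0.25))
    W = 3.0 * (np.sin(k * R) / (k * R) ** 3 - np.cos(k * R) / (k * R) ** 2)
    d2 = delta_h ** 2 * (c_over_h0 * k) ** (3.0 + n_s) * T ** 2
    return np.sqrt(np.trapz(W ** 2 * d2 / k, k))

h, omega0 = 0.65, 0.35
print("model sigma_8 :", sigma_R(8.0 / h, n_s=0.95, omega0=omega0, omega_b=0.05, h=h))
print("cluster value :", 0.56 * omega0 ** (-0.47))
```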
### C Lyman $`\alpha `$ absorption cloud statistics
Another measure of the amplitude of the matter power spectrum has been obtained recently by Croft et al., who analysed the Lyman $`\alpha `$ (Ly$`\alpha `$) absorption forest in the spectra of quasars at redshifts $`z\simeq 2.5`$. These results apply at smaller comoving scales than the cluster abundance $`\sigma _8`$ measurements, and hence are potentially more constraining. Croft et al. found
$$\mathrm{\Delta }^2(k_p)=0.57_{-0.18}^{+0.26}$$
(22)
at $`1\sigma `$ confidence, where the effective wavenumber $`k_p=0.008(\text{km s}^{-1})^{-1}`$ at $`z=2.5`$.
These results cannot be directly compared with the model predictions of Eq. (13), because Eq. (13) provides its predictions for the current time, i.e. $`z=0`$. To translate to $`z=2.5`$, we must first convert the model $`k`$ from the comoving $`\text{Mpc}^{-1}`$ units conventionally used in discussions of the matter power spectrum to $`(\text{km s}^{-1})^{-1}`$ at $`z=2.5`$, using
$$k[(\text{km s}^{-1})^{-1}]=\frac{1+z}{H(z)}k[\text{Mpc}^{-1}],$$
(23)
where
$$H(z)=H_0\sqrt{\mathrm{\Omega }_0(1+z)^3+\mathrm{\Omega }_\mathrm{\Lambda }}$$
(24)
for flat universes.
Next, we must consider the growth of the perturbations themselves. In a critical density universe (and assuming linear theory), the growth law is simply $`\mathrm{\Delta }^2(k,z)=\mathrm{\Delta }^2(k,0)(1+z)^{-2}`$. As $`\mathrm{\Omega }_\mathrm{\Lambda }`$ increases, the growth is suppressed, and this can be accounted for by writing
$$\mathrm{\Delta }^2(k,z)=\mathrm{\Delta }^2(k,0)\frac{g^2[\mathrm{\Omega }(z)]}{g^2(\mathrm{\Omega }_0)}\frac{1}{(1+z)^2},$$
(25)
where the growth suppression factor $`g(\mathrm{\Omega })`$ can be accurately parametrized by
$$g(\mathrm{\Omega })=\frac{5}{2}\mathrm{\Omega }\left(\frac{1}{70}+\frac{209\mathrm{\Omega }}{140}-\frac{\mathrm{\Omega }^2}{140}+\mathrm{\Omega }^{4/7}\right)^{-1},$$
(26)
and the redshift dependence of $`\mathrm{\Omega }`$ is given by
$$\mathrm{\Omega }(z)=\mathrm{\Omega }_0\frac{(1+z)^3}{1-\mathrm{\Omega }_0+(1+z)^3\mathrm{\Omega }_0},$$
(27)
all for spatially flat universes.
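The unit conversion and growth suppression described above can be summarized as follows (Eq. (23) is inverted to express $`k_p`$ in Mpc$`^{-1}`$); the parameter values are illustrative only.

```python
import numpy as np

def H_z(z, h, omega0):
    """Eq. (24), flat universe, in km s^-1 Mpc^-1."""
    return 100.0 * h * np.sqrt(omega0 * (1.0 + z) ** 3 + (1.0 - omega0))

def g_omega(om):
    """Growth suppression factor, Eq. (26)."""
    return 2.5 * om / (1.0 / 70.0 + 209.0 * om / 140.0 - om ** 2 / 140.0 + om ** (4.0 / 7.0))

def omega_z(z, omega0):
    """Eq. (27), flat universe."""
    return omega0 * (1.0 + z) ** 3 / (1.0 - omega0 + (1.0 + z) ** 3 * omega0)

z, h, omega0 = 2.5, 0.65, 0.35
kp_kms = 0.008                                     # (km/s)^-1 at z = 2.5
kp_mpc = kp_kms * H_z(z, h, omega0) / (1.0 + z)    # Eq. (23) inverted
growth = (g_omega(omega_z(z, omega0)) / g_omega(omega0)) ** 2 / (1.0 + z) ** 2
print("k_p in Mpc^-1               :", round(kp_mpc, 3))
print("Delta^2(k, z) / Delta^2(k, 0):", round(growth, 4))
```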
We calculated likelihoods using the normalized model predictions of Eq. (13), translated to $`z=2.5`$ as described above, and then obtained limits for $`T/S`$ as in the cluster abundance case.
## V Parameter constraints
### A Age of the universe
In flat $`\mathrm{\Lambda }`$ models, the age of the universe is
$$t_0=\frac{2}{3H_0}\frac{\mathrm{sinh}^{-1}\left(\sqrt{\mathrm{\Omega }_\mathrm{\Lambda }/\mathrm{\Omega }_0}\right)}{\sqrt{\mathrm{\Omega }_\mathrm{\Lambda }}}.$$
(28)
During the integration of the likelihoods, we investigated the effect of imposing a constraint on the parameters $`h`$ and $`\mathrm{\Omega }_0`$, so that regions of parameter space corresponding to ages below various limits were excluded. This simply corresponds to a more complex form for the priors on the parameters. The precise limit on the age of the universe is a matter of on-going debate (e.g. ). A lower limit of around $`11`$ Gyr now seems to be the norm, so we considered this case explicitly. We also considered the effect of a more constraining limit of $`13`$ Gyr, still preferred by some authors.
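For reference, Eq. (28) is easily evaluated; the sketch below prints the age for a few combinations of $`h`$ and $`\mathrm{\Omega }_0`$ (chosen arbitrarily) and flags those excluded by an 11 Gyr lower limit.

```python
import numpy as np

def age_gyr(h, omega0):
    """Age of a flat Lambda universe, Eq. (28), in Gyr."""
    omega_l = 1.0 - omega0
    hubble_time = 9.78 / h                 # 1/H0 in Gyr for H0 = 100h km/s/Mpc
    if omega_l <= 0.0:
        return 2.0 / 3.0 * hubble_time     # Einstein-de Sitter limit
    return 2.0 / 3.0 * hubble_time * np.arcsinh(np.sqrt(omega_l / omega0)) / np.sqrt(omega_l)

for h, om in [(0.5, 1.0), (0.65, 0.35), (0.8, 0.25)]:
    t0 = age_gyr(h, om)
    print("h =", h, " Omega_0 =", om, " t_0 =", round(t0, 1), "Gyr",
          "(excluded by t_0 > 11 Gyr)" if t0 < 11.0 else "")
```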
### B Baryons in clusters
Recent measurements of the baryon density in clusters have suggested low $`\mathrm{\Omega }_0`$ for consistency with nucleosynthesis. We chose to use the results of White and Fabian for the baryon density
$$\frac{\mathrm{\Omega }_\mathrm{B}}{\mathrm{\Omega }_0}=(0.056\pm 0.014)h^{-3/2},$$
(29)
where the errors are at the $`1\sigma `$ level. We explored the implications of applying this constraint during the likelihood integrations, by adding a term
$$\left(\frac{h^{3/2}\mathrm{\Omega }_\mathrm{B}/\mathrm{\Omega }_0-0.056}{0.014}\right)^2$$
(30)
to each value of $`\chi ^2`$.
### C Supernova constraints
Measurements of high-$`z`$ Type-Ia supernovae (SNe Ia) are in principle well-suited to constraining $`\mathrm{\Omega }_0`$ on the assumption of a flat $`\mathrm{\Lambda }`$ universe, since such measurements are sensitive to (roughly) the difference between $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. We used the experimental results of Filippenko and Riess of the High-$`z`$ Supernova Search team , who found for flat $`\mathrm{\Lambda }`$ models
$$\mathrm{\Omega }_0=0.25\pm 0.15$$
(31)
at $`1\sigma `$ confidence. We also investigated the effect of applying this constraint as above.
## VI Open models
For models with open geometry the situation is more complicated, and so we restrict ourselves to a brief discussion here. In addition to the added technical complexity involved in working in hyperbolic spaces, the presence of an additional scale, the curvature scale, renders ambiguous the meaning of scale-invariant fluctuations. For the most obvious scale-invariant spectrum of gravity wave modes, the quadrupole anisotropy actually diverges! For this reason one requires a definite calculation of the fluctuation spectrum from a well realized open model. The advent of open inflationary models has allowed, for the first time, a calculation of the spectrum of primordial fluctuations in an open universe. As with all inflationary models, a nearly scale-invariant spectrum of gravitational waves (tensor modes) is produced . The size of these modes in $`k`$-space, and their relation to the spectral index, is not dissimilar to the flat space models we have been considering. In the inflationary open universe models the spectrum of perturbations is cut-off at large spatial scales, leading to a finite gravity wave spectrum. However, the exact scale of the cutoff depends on details of the model, introducing further model dependence into the $`\mathrm{}`$-space predictions.
Since gravitational waves provide anisotropies but no density fluctuations, their presence will in general lower the normalization of the matter power spectrum (for a fixed large angle CMB normalization). Open models already have quite a low normalization , so the most conservative limits on gravity waves come from models which produce the minimal tensor anisotropies, i.e. where the cutoff operates as efficiently as possible. The COBE normalization for such models with PLI is
$$10^5\delta _\mathrm{H}=1.95\mathrm{\Omega }_0^{-0.35-0.19\mathrm{ln}\mathrm{\Omega }_0+0.15\stackrel{~}{n}}\mathrm{exp}\left(-1.02\stackrel{~}{n}+1.70\stackrel{~}{n}^2\right).$$
(32)
Combining this normalization with the cluster abundance gives a strong constraint on $`T/S`$. We show in Fig. 1 the 95% CL upper limit on $`T/S`$ as a function of $`\mathrm{\Omega }_0`$ in these models.
## VII Results
Figure 2 presents likelihoods, integrated over the parameter ranges described above, and plotted versus $`T/S`$, for the various data sets, and specifically for PLI models. For the curve labelled “combined”, likelihoods for each data set (except the COBE data) were multiplied together before integration. (Including the COBE data would have been redundant, since the binned CMB set already contains the COBE results.) Thus the “combined” values represent joint likelihoods for the relevant data sets, on the assumption of independent data. Note that the combined data curve of Fig. 2 differs significantly from the product of the already marginalized curves for the different data sets, which indicates that parameter covariance is important here. Also, the maximum joint likelihood in Fig. 2 corresponds to $`\chi ^2\simeq 9`$, which indicates a good fit for the 15 degrees of freedom involved.
Figure 3 displays integrated likelihoods versus $`T/S`$ for each data set and for the combined data, again on the assumption of PLI. The effect of each parameter constraint is illustrated. The COBE data shape constraint is very weak, and exhibits essentially no cosmological parameter dependence, as expected. Thus the parameter constraints have little effect on the likelihoods, and the curves are not shown here. Cluster abundance is not much more constraining than the COBE shape, but exhibits considerably stronger cosmological parameter dependence, and hence is affected substantially by the various parameter constraints. The matter power spectrum shape constraint is so weak that we do not plot it here. The strongest constraint comes from the binned CMB data, and indeed these data dominate the joint results.
We can understand the general features of the large parameter dependence exhibited by the likelihoods for the matter spectrum data sets as follows. Near sCDM parameter values, it is well known that the matter power spectrum contains too much small-scale power when COBE-normalized at large scales. The presence of tensors improves the fit at small scales by decreasing the scalar normalization at COBE scales. Reducing $`h`$ or $`\mathrm{\Omega }_0`$, however, also decreases the power at small scales, improving the fit over sCDM, and thus reducing the need for tensors. When an age constraint is applied, we force the model towards lower $`h`$ and $`\mathrm{\Omega }_0`$ according to Eq. (28), and hence towards lower $`T/S`$, as is seen in Fig. 3. The cluster baryon and supernova constraints similarly move us to smaller $`\mathrm{\Omega }_0`$.
Figure 4 displays likelihoods versus $`p`$ for $`\varphi ^p`$ inflation, while Fig. 5 presents likelihoods versus $`T/S`$ for the case of free tensor contribution and $`n_\mathrm{S}=1`$. In all plots, curves have been omitted for the very weakly constraining data sets. The curves of Fig. 4 closely resemble those of Fig. 3. This is because, for $`p\gg 2`$, Eqs. (5), (6), and (7) give $`T/S\simeq -6.85\stackrel{~}{n}`$, which is similar to the PLI result of Eq. (3).
In Fig. 5 we see that the data are considerably less constraining, compared with the PLI case, when we allow $`T/S`$ to vary freely. This was expected, since in the PLI case, the lowering of $`n_\mathrm{S}`$ tends to enhance the effect of increasing $`T/S`$. We also expect that a blue scalar tilt would oppose the effect of tensors on the spectrum and hence allow larger $`T/S`$. Figure 6 illustrates this effect by plotting the $`95\%`$ upper confidence limits on $`T/S`$ versus scalar index with $`T/S`$ free. Note that we cannot meaningfully constrain $`T/S`$ here by marginalizing over $`n_\mathrm{S}`$, since the best fits to the spectrum remain good even for very blue tilts and very large $`T/S`$.
Our confidence limits are summarized in Table I for the case of power-law inflation, Table II for polynomial potentials, and Table III for free $`T/S`$ and $`n_\mathrm{S}=1`$. In all cases $`95\%`$ upper limits on $`T/S`$ are presented, after integrating over the ranges of parameter space specified in Sec. III. The row gives the data set used, while the column specifies the type of parameter constraint applied, if any.
## VIII The Future
The discovery of a nearly scale-invariant spectrum of long wavelength gravity waves would be tremendously illuminating. Inflation is the only known mechanism for producing an almost scale-invariant spectrum of adiabatic scalar fluctuations, a prediction which is slowly gaining observational support. In the simplest, “toy”, models of inflation a potentially large amplitude almost scale-invariant spectrum of gravity waves is also predicted. For monomial inflation models within the slow-roll approximation, detailed characterization of this spectrum could in principle allow a reconstruction of the inflaton potential . This surely is a window onto physics at higher energies than have ever been probed before.
Inflation models based on particle physics, rather than “toy” potentials, predict a very small tensor spectrum . However, essentially nothing is known about particle physics above the electroweak scale, and extrapolations of our current ideas to arbitrarily high energies could easily miss the mark. We must be guided then by observations. We have argued that observational support for a large gravity wave component is weak. Indeed observations definitely require the tensor anisotropy to be subdominant for large angle CMB anisotropies. On the other hand, it is still possible to have $`T/S\simeq 0.5`$, and since it would be so exciting to discover any tensor signal at all we are led to ask: how small can a tensor component be and still be detectable? What are the best ways to look for a tensor signal?
### A Direct detection
The feasibility of the direct detection of inflation-produced gravitational waves has been addressed by a number of authors , with pessimism expressed by most.
The ground-based laser interferometers LIGO and VIRGO will operate in the $`f\sim 100`$Hz frequency band, while the European Space Agency’s proposed space-based interferometer LISA would operate in the $`f\sim 10^{-4}`$Hz band. Millisecond pulsar timing is sensitive to waves with periods on the order of the observation time, i.e. frequencies $`f\sim 10^{-7}`$–$`10^{-9}`$Hz . These instruments probe regions of the tensor perturbation spectrum which entered during the radiation dominated era. Expressions for the fraction of the critical density due to gravity waves per logarithmic frequency interval can be found in . Assuming that $`\mathrm{\Omega }_0=1`$ in a PLI model, with the only relativistic particles being photons and 3 neutrino species, and taking the COBE quadrupole $`Q\equiv T+S\simeq 4.4\times 10^{-11}`$, one finds
$$\mathrm{\Omega }_{\mathrm{GW}}(f)h^2=5.1\times 10^{-15}\frac{n_\mathrm{T}}{n_\mathrm{T}-1/7}\mathrm{exp}\left[n_\mathrm{T}N+\frac{1}{2}N^2(dn_\mathrm{T}/d\mathrm{ln}k)\right],$$
(33)
where $`N\equiv \mathrm{ln}(k/H_0)`$ and $`n_\mathrm{T}=-(T/S)/7`$ is the tensor spectral index.
Using Eq. (33) Turner found that the local energy density in gravity waves is maximized at $`T/S=0.18`$ for $`f\sim 10^{-4}`$Hz. At this maximum, the local energy density is in the range $`\mathrm{\Omega }_{\mathrm{GW}}h^2\simeq 10^{-15}`$–$`10^{-16}`$, which lies a couple of orders of magnitude below the expected sensitivity of LISA, and several orders below that of LIGO/VIRGO . This is also well below the current upper limit of $`\mathrm{\Omega }_{\mathrm{GW}}h^2<6\times 10^{-8}`$ (at $`95\%`$ confidence) from pulsar timing . As $`T/S`$ increases above $`0.18`$, $`\mathrm{\Omega }_{\mathrm{GW}}(f\sim 10^{-4}\text{Hz})h^2`$ begins to decrease due to the increasing magnitude of the tensor spectral index.
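A rough evaluation of Eq. (33) for PLI (where $`dn_\mathrm{T}/d\mathrm{ln}k=0`$) is sketched below, taking $`k/H_0=2\pi f/H_0`$ in $`c=1`$ units and an assumed $`h=0.7`$; for these inputs the maximum of $`\mathrm{\Omega }_{\mathrm{GW}}h^2`$ over $`T/S`$ indeed falls near $`T/S\approx 0.2`$, consistent with the value quoted above.

```python
import numpy as np

def omega_gw_h2(ts, f_hz, h=0.7):
    """Eq. (33) for PLI (dn_T/dln k = 0) at frequency f, with N = ln(2 pi f / H0)."""
    n_t = -ts / 7.0
    H0 = 3.241e-18 * h                       # H0 in s^-1 for H0 = 100h km/s/Mpc
    N = np.log(2.0 * np.pi * f_hz / H0)
    return 5.1e-15 * n_t / (n_t - 1.0 / 7.0) * np.exp(n_t * N)

f = 1.0e-4                                   # Hz, roughly the LISA band
ts_grid = np.linspace(0.01, 1.0, 500)
vals = omega_gw_h2(ts_grid, f)
i = int(np.argmax(vals))
print("T/S at maximum:", round(ts_grid[i], 2), "  Omega_GW h^2 =", vals[i])
```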
Recall that our joint data constraint for PLI gives $`T/S\lesssim 0.5`$, so our results predict that the inflationary spectrum of gravity waves from PLI is not amenable to direct detection.
### B Limits from the CMB
With the advent of MAP and especially the Planck Surveyor, with its higher sensitivity, detailed maps of the CMB are just around the corner. What do we expect will be possible from these missions? This question has been dealt with extensively before. Assuming a cosmic variance limited experiment capable of determining only the anisotropy in the CMB but with all other parameters known, one can measure $`T/S`$ only if it is larger than about 10% . A more realistic assessment for MAP and Planck suggests this limit is rarely reached in practice .
However the ability to measure linear polarization in the CMB anisotropy offers the prospect of improving the sensitivity to tensor modes (for a recent review of polarization see ). In addition to the temperature anisotropy, two components of the linear polarization can be measured. It is convenient to split the polarization into parity even ($`E`$-mode) and parity odd ($`B`$-mode) combinations – named after the familiar parity transformation properties of the electric and magnetic fields, but not to be confused with the $`E`$ and $`B`$ fields of the electromagnetic radiation.
Polarization offers two advantages over the temperature. First, with more observables the error bars on parameters are tightened. In addition the polarization breaks the degeneracy between reionization and a tensor component, allowing extraction of smaller levels of signal . Model dependent constraints on a tensor mode as low as $`1\%`$ appear to be possible with the Planck satellite . Extensive observations of patches of the sky from the ground (or satellites even further into the future) could in principle push the sensitivity even deeper.
There is a further handle on the tensor signal however. Since scalar modes have no “handedness” they generate only parity even, or $`E`$-mode polarization . A definitive detection of $`B`$-mode polarization would thus indicate the presence of other modes, with tensors being more likely since vector modes decay cosmologically. Moreover a comparison of the $`B`$-mode, $`E`$-mode and temperature signals can definitively distinguish tensors from other sources of perturbation (e.g. ).
Unfortunately the detection of a $`B`$-mode polarization will prove a formidable experimental challenge. The level of the signal, shown in Fig. 7 for $`T/S=0.01`$, 0.1 and 1.0, is very small. As an indicative number, with $`T/S=0.5`$, our upper limit, the total rms $`B`$-mode signal, integrated over $`\ell `$, is $`\simeq 0.24\mu `$K in a critical density universe. These sensitivity requirements, coupled with our current poor state of knowledge of the relevant polarized foregrounds make it seem unlikely a $`B`$-mode signal will be detected in the near future.
## IX Conclusions
We have examined the current experimental limits on the tensor-to-scalar ratio. Using the COBE results, as well as small-scale CMB observations, and measurements of galaxy correlations, cluster abundances, and Ly$`\alpha `$ absorption we have obtained conservative limits on the tensor fraction for some specific inflationary models. Importantly, we have considered models with a wide range of cosmological parameters, rather than fixing the values of $`\mathrm{\Omega }_0`$, $`H_0`$, etc. For power-law inflation, for example, we find that $`T/S<0.52`$ at the $`95\%`$ confidence level. Similar constraints apply to $`\varphi ^p`$ inflaton models, corresponding to approximately $`p<8`$. Much of this constraint on the tensor-to-scalar ratio comes from the relation between $`T/S`$ and the scalar spectral index $`n_\mathrm{S}`$ in these theories. For models with tensor amplitude unrelated to scalar spectral index it is still possible to have $`T/S>1`$. Currently the tightest constraint is provided by the combined CMB data sets. Since the quality of such data are expected to improve dramatically in the near future, we expect much tighter constraints (or more interestingly a real detection) in the coming years.
###### Acknowledgements.
This research was supported by the Natural Sciences and Engineering Research Council of Canada. M.W. is supported by the NSF.
# 1 The AGILE detector
## 1 The AGILE detector
The AGILE design is derived from a refined study of the GILDA project (Barbiellini et al. 1995, Morselli et al. 1995) that was based on the techniques developed for the Wizard silicon calorimeter, already successfully flown in balloon experiments (Golden et al. 1990). Like previous orbiting high-energy gamma-ray telescopes, AGILE relies on the unambiguous identification of the incident gamma-rays by recording the characteristic track signature of the e<sup>+</sup>–e<sup>-</sup> pairs that result from pair creation from the incident photons in thin layers of converter material. The trajectories of the pairs, recorded and measured in the tracking section of the detector, give the information on the direction of the incident gamma-rays. A light calorimeter made of 1.5 radiation lengths (X<sub>0</sub>) of cesium iodide (CsI) allows one to determine the energy of the incident photons. The detector has a height of $`\sim `$35 cm, with a $`40\times 40`$ cm<sup>2</sup> area and a total conversion length, for electromagnetic cascades, of $`\sim `$2.3 X<sub>0</sub>. A schematic view of the AGILE detector is presented in Tavani et al. (1998). The three AGILE components are: i) a tracker: this is the conversion zone. It consists of 12 planes of single-sided silicon strips with 204 $`\mu `$m pitch (distance between strips). The strips are implanted on pads of $`8\times 8`$ cm<sup>2</sup>. Each pad has a thickness of 380 $`\mu `$m and carries 384 strips. The first ten planes consist of two layers of silicon detectors, with orthogonal strips interspersed with a layer of 0.07 X<sub>0</sub> of tungsten. The last two planes have no tungsten sheets. Each plane reads 3840 strips and the entire tracker consists of 46080 strips. The distance between consecutive planes is 1.6 cm; ii) a light calorimeter: it is made of 1.5 X<sub>0</sub> of CsI and will provide an indication of the photon energy; iii) an anticoincidence system: it consists of 0.5 cm thick plastic scintillator layers surrounding the top and four lateral sides of the silicon tracker. It will be able to significantly reduce the background due to charged particles.
——————————————————————————-
($``$) Adapted from a paper presented at the Conference Dal nano- al Tera-eV: tutti i colori degli AGN, Rome 18-21 May 1998, to be published by the Memorie della Societa’ Astronomica Italiana.
The expected performance of AGILE in angular and energy resolution, as derived from Monte Carlo simulations, is shown in Fig. 1. The reliability of the Monte Carlo simulations, based on the GEANT code, has been checked with experimental data obtained during tests performed at CERN (Bocciolini et al. 1993).
Despite its smaller area and weight, the AGILE instrument is able to reach performances comparable to those of the EGRET experiment on CGRO. The main characteristics that make this telescope more efficient are: i) the use of silicon strips instead of a spark chamber as the main device for reconstructing the gamma-ray trajectories; this also avoids the problems of gas refilling, high voltages and dead time (silicon strips have lower dead time than gas detectors) and allows a spatial resolution of two hundred microns to be reached, instead of a few millimeters. ii) the elimination of the anticoincidence counters for the high energy trigger, so that an efficiency of 50% up to 50 GeV can be reached; iii) the elimination of Time of Flight, resulting in an increase of the field of view and a decrease of the lower energy threshold ($`\sim `$20 MeV for AGILE vs. 35 MeV for EGRET). The impact of these performances on the study of AGNs is described in Mereghetti et al. (1998).
# Global versus local billiard level dynamics: The limits of universality
## Abstract
Level dynamics measurements have been performed in a Sinai microwave billiard as a function of a single length, as well as in rectangular billiards with randomly distributed disks as a function of the position of one disk. In the first case the field distribution is changed globally, and velocity distributions and autocorrelation functions are well described by universal functions derived by Simons and Altshuler. In the second case the field distribution is changed locally. Here another type of universal correlations is observed. It can be derived under the assumption that chaotic wave functions may be described by a random superposition of plane waves.
05.45.-a, 73.23.-b
In 1972 Edwards and Thouless noticed that the conductivity of a disordered system is closely related to the sensitivity of its eigenvalues on an external perturbation . For a ring with a perpendicularly applied magnetic field they conjectured that the conductivity $`C`$ is proportional to the averaged curvature of the eigenvalues, $`C|\frac{^2E_n}{\phi ^2}|_{\phi =0}`$, where $`\phi `$ is the magnetic flux through the ring. In 1992 Akkermans and Montambaux showed that the conductivity may alternatively be expressed in terms of the eigenvalue velocities, $`C|\frac{E_n}{\phi }|^2`$ . This suggests to rescale the parameter and the eigenvalues by
$$x=\frac{1}{\mathrm{\Delta }}\left|\frac{E_n}{\phi }\right|^2^{1/2}\phi ,ϵ_n(x)=\frac{E_n(\phi )}{\mathrm{\Delta }},$$
(1)
where $`\mathrm{\Delta }`$ is the mean level spacing. Szafer, Simons and Altshuler studied a number of parametric correlations of the rescaled eigenenergies , in particular the velocity autocorrelation function
$$c(x)=\left\langle \frac{\partial ϵ_n(X+x)}{\partial X}\frac{\partial ϵ_n(X)}{\partial X}\right\rangle -\left\langle \frac{\partial ϵ_n(X)}{\partial X}\right\rangle ^2,$$
(2)
originally introduced by Yang and Burgdörfer , and conjectured a universal behavior as long as the so-called zero-mode approximation holds, i.e., in the range where the energy fluctuations show random matrix behavior. For the velocity distribution Simons and Altshuler found a Gaussian behavior . The same behavior has been obtained by a completely different approach starting from the analogy between the level dynamics of a chaotic system and the dynamics of a one-dimensional gas with repulsive interaction . In the region of onset of localization deviations from the Gaussian behavior are found .
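The following sketch outlines how $`c(x)`$ can be extracted from measured level dynamics $`E_n(X)`$ on a uniform parameter grid; for brevity a single global rescaling is used here, whereas in the measurements described below each level was rescaled by its own mean squared velocity. The synthetic input at the end is only a placeholder to exercise the bookkeeping.

```python
import numpy as np

def velocity_correlator(levels, dX, max_lag=None):
    """Rescaled velocity autocorrelation c(x), Eqs. (1)-(2), from parametric
    eigenvalues E_n(X) sampled on a uniform grid of spacing dX."""
    delta = np.mean(np.diff(np.sort(levels, axis=0), axis=0))   # mean level spacing
    eps_vel = np.gradient(levels, dX, axis=1) / delta           # d(eps_n)/dX
    scale = np.sqrt(np.mean(eps_vel ** 2))                      # global rms velocity
    v = eps_vel / scale                                         # velocities w.r.t. x
    n_steps = levels.shape[1]
    max_lag = max_lag or n_steps // 2
    x, c = [], []
    for lag in range(max_lag):
        a, b = v[:, :n_steps - lag], v[:, lag:]
        x.append(lag * dX * scale)                              # lag in rescaled units
        c.append(np.mean(a * b) - np.mean(v) ** 2)
    return np.array(x), np.array(c)

# placeholder input, only to exercise the bookkeeping
rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 401)
phases = rng.uniform(0.0, 2 * np.pi, (120, 1))
freqs = rng.uniform(1.0, 4.0, (120, 1))
levels = np.arange(120)[:, None] + 0.4 * np.sin(2 * np.pi * freqs * X + phases)
x, c = velocity_correlator(levels, dX=X[1] - X[0])
print(x[:3], c[:3])
```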
Since in the zero-mode approximation the energy correlations of a disordered system are identical to that of random matrices, it came as no surprise that the universal behavior of parametric correlations was found in billiard systems as well . Universal behavior was observed also for the hydrogen atom in a strong magnetic field , conformally deformed and ray-splitting billiards , and in the acoustic spectra of vibrating quartz blocks . In all cases the general features of the conjectured universal behavior had been reproduced reasonably well, but a number of significant discrepancies remained unexplained.
This was our motivation to study different types of billiard level dynamics in more detail. All results to be presented below have been obtained in microwave billiards . Here it is sufficient to note that for flat resonators the electromagnetic spectrum is completely equivalent to the quantum mechanical spectrum of the corresponding billiard, as long as one does not surpass the frequency $`\nu _{max}=c/2h`$, where $`h`$ is the resonator height. In the experiments we chose $`h=`$ 8 mm yielding a maximum frequency of 18.74 GHz.
One of the systems studied was a quarter Sinai billiard with a width $`b=`$ 200 mm, a radius $`r=`$ 70 mm of the quarter circle, and a length $`a`$ which was varied between 480 and 500 mm in steps of 0.2 mm. About 120 eigenvalues entered into the data analysis in the frequency range 14.5 to 15.5 GHz. The second system was a rectangular billiard with side lengths $`a=`$ 340 mm, $`b=`$ 240 mm, containing 20 randomly distributed circular disks with a diameter of 5 mm (see Fig. 1). By a spatially resolved measurement we found that all eigenfunctions $`\psi `$ in the studied frequency range were delocalized and $`|\psi |^2`$ was Porter-Thomas distributed. The position of one of the disks was varied in one direction in steps of 1 mm. Whereas the first type of level dynamics may be considered as global, since a shift of the billiard length of the order of 1 wavelength will change the wavefunction pattern everywhere in the billiard, the shift of the disk gives rise to a local modification only.
We start with a discussion of the global level dynamics. Figure 2 shows the velocity distribution for the quarter Sinai billiard with length $`a`$ as the level dynamics parameter. The distribution is well described by a Gaussian in accordance with the expected universal behavior (this result has been presented already in ). Figure 3 shows the corresponding velocity correlator. To obtain the result, each eigenvalue was studied over a range of four to five avoided crossings, and the scaling was performed by calculating the mean squared velocity for each eigenvalue independently. Subsequently the results of about 120 eigenvalues were superimposed. The solid line corresponds to Simons’ and Altshuler’s universal function . The overall agreement between experiment and theory is good, but for $`x>2.5`$ (not shown) the correlation function does not approach zero but stays at negative values. This is an artifact resulting from an insufficient number of data points making the calculation of the average $`\frac{ϵ_n(X+x)}{X}\frac{ϵ_n(X)}{X}`$ unreliable for large $`x`$ values. Most correlation functions found in the literature end at $`x`$ values of at most 1.5, probably just for this reason.
Let us now turn to the discussion of the local level dynamics, where the position of one disk was varied. Whether a level dynamics must be considered as global or local, depends on the parameter $`\delta =kD`$, where $`D`$ is the diameter of the disk, and $`k`$ the wavenumber. It is well known that in the limit of small $`\delta `$ values the spectral properties of billiards containing hard spheres deviate significantly from random matrix behavior . Figure 4 shows the velocity distributions for three different $`\delta `$ ranges. In Fig. 4(a) a disk with $`D=`$ 5 mm was used, and the eigenvalues were taken in the frequency range 3.4 to 6 GHz. In Figs. 4(b) and (c) the diameter of the movable disk was $`D=`$ 20 mm with eigenvalues in the frequency ranges 3.4 to 6 GHz and 12.5 to 14.5 GHz, respectively. None of the found velocity distributions is Gaussian. One observes instead a distribution with a pronounced peak at $`v=0`$, decreasing only exponentially for large values of $`|v|`$. With increasing $`\delta `$ values the distributions turn gradually into a Gaussian. We completed the series by a level dynamics measurement for a half Sinai billiard, where the position of the half circle was varied. Here the obtained velocity distribution (not shown), corresponding to $`\delta `$ values between 30 and 37, was already close to a Gaussian distribution.
Figure 5 shows the corresponding velocity autocorrelation functions. The scaling technique applied was the same as above. There is no longer any similarity between the experimental curves and the universal function. Only for the largest $`\delta `$ value displayed does the experimental curve seem to approach the Simons-Altshuler correlation function again.
The results can be understood, if the movable disk is interpreted as a perturber probing the field in the resonator (the perturbing bead method was used many years ago to map the field distributions in microwave cavities , and has recently been applied to the study of wavefunctions in chaotic billiards as well ). In two-dimensional billiards the insertion of a metallic perturber leads to a negative frequency shift proportional to $`E^2`$, where $`E`$ is the electric field strength in the resonator in the absence of the perturber. This holds as long as the dimensions of the perturber are small compared to the wavelength, i.e., in the limit $`\delta \rightarrow 0`$. Applied to the present problem this means that the eigenvalue velocity is given by $`\partial E_n/\partial r=\alpha \nabla |\psi |^2`$, where $`\nabla `$ is the gradient in the direction of the displacement, and $`\alpha `$ is a constant depending on the geometry of the perturber. For the velocity distribution function it then follows that
$$P(v)=\left\langle \delta (v-2\alpha \psi \nabla \psi )\right\rangle .$$
(3)
Under the assumption that the wavefunctions can be described by a random superposition of plane waves , $`\psi `$ and $`\nabla \psi `$ are uncorrelated, and Gaussian distributed ,
$$P_1(\psi )=\sqrt{\frac{A}{2\pi }}e^{-\frac{A\psi ^2}{2}},\qquad P_2(\nabla \psi )=\sqrt{\frac{A}{2\pi k^2}}e^{-\frac{A(\nabla \psi )^2}{2k^2}}.$$
(4)
The influence of the boundary is negligible here, since the linear dimensions of the billiard exceed the typical wavelength by factors of 5 to 10. Using Eq. (4) the average (3) is easily calculated and yields
$$P(v)=\frac{\beta }{\pi }K_0(\beta |v|),$$
(5)
where $`K_0(x)`$ is a modified Bessel function, and $`\beta =A/(2\alpha k)`$. The solid lines plotted in addition to the Gaussian curves in Figs. 2 and 4 have been calculated from Eq. (5). In the limit of small $`\delta `$ values distribution (5) describes the experimental distributions perfectly.
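The algebra leading to Eq. (5) is easy to verify numerically. The following sketch (ours, not part of the original analysis; the values of $`A`$, $`k`$ and $`\alpha `$ are arbitrary illustrative choices) draws $`\psi `$ and $`\nabla \psi `$ from the Gaussians (4), forms $`v=2\alpha \psi \nabla \psi `$, and compares the result with the $`K_0`$ distribution and with Eq. (6):

```python
# Monte Carlo check: the product of the two Gaussians in Eq. (4) yields
# P(v) = (beta/pi) K_0(beta |v|) with beta = A/(2 alpha k), and <v^2> = 1/beta^2.
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(0)
A, k, alpha = 0.08, 300.0, 1.0e-4                  # illustrative area, wavenumber, coupling
N = 1_000_000
psi  = rng.normal(0.0, np.sqrt(1.0 / A), N)        # P_1(psi):      variance 1/A
dpsi = rng.normal(0.0, np.sqrt(k * k / A), N)      # P_2(grad psi): variance k^2/A
v = 2.0 * alpha * psi * dpsi

beta = A / (2.0 * alpha * k)
print("<v^2> =", v.var(), "  1/beta^2 =", 1.0 / beta**2)

# compare the empirical density with the K_0 form away from the (integrable) v = 0 singularity
hist, edges = np.histogram(v, bins=400, density=True)
c = 0.5 * (edges[1:] + edges[:-1])
mask = (np.abs(c) * beta > 0.3) & (np.abs(c) * beta < 4.0)
ratio = hist[mask] / ((beta / np.pi) * k0(beta * np.abs(c[mask])))
print("empirical / theoretical density, mean ratio:", ratio.mean())
```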
The influence of local perturbations on the energy levels has been studied by Aleiner and Matveev who derived an explicit expression for the joint distribution function of initial and final energy levels. In their model the velocities are Porter-Thomas distributed , if the coupling strength is taken as the level dynamics parameter. The same distribution would have been expected in our case, if the coupling strength $`\alpha `$ had been varied instead of the position (which, however, would be technically difficult to realize).
For the quadratic average of the eigenvalue velocities we obtain, using Eq. (5),
$$\left\langle \left(\frac{\partial E_n}{\partial r}\right)^2\right\rangle =\frac{1}{\beta ^2}.$$
(6)
Inserting this expression into Eq. (1), we get for the rescaled parameter
$$x=\frac{1}{\mathrm{\Delta }\beta }r=\frac{\alpha }{2\pi }kr,$$
(7)
where we have used that in billiards the mean level spacing is given by $`\mathrm{\Delta }=4\pi /A`$. Equation (7) shows that for the local level dynamics $`x`$ is not a universal parameter, since it depends via $`\alpha `$ on the geometry of the movable disk. We shall therefore use the rescaled parameter
$$\overline{x}=kr$$
(8)
instead in the following. Within the approach of a random superposition of plane waves the velocity autocorrelation function can also be calculated easily. Using standard techniques as described, e.g., in Ref. , we get
$$c(\overline{x})=-\left[J_0^2(\overline{x})\right]^{\prime \prime }=J_0^2(\overline{x})-2J_1^2(\overline{x})-J_0(\overline{x})J_2(\overline{x}).$$
(9)
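As a quick cross-check of Eq. (9) (ours, added for illustration), the closed form can be compared with a finite-difference second derivative of $`J_0^2(\overline{x})`$; with this normalization $`c(0)=1`$:

```python
# Verify Eq. (9): c(x) = -[J0(x)^2]'' = J0^2 - 2 J1^2 - J0 J2.
import numpy as np
from scipy.special import j0, j1, jv

x = np.linspace(0.05, 15.0, 2000)
closed_form = j0(x)**2 - 2.0 * j1(x)**2 - j0(x) * jv(2, x)

h = 1.0e-4                                   # step for the numerical second derivative
numeric = (j0(x + h)**2 - 2.0 * j0(x)**2 + j0(x - h)**2) / h**2
print("max deviation from -[J0^2]'':", np.max(np.abs(closed_form + numeric)))
print("c(0) =", j0(0.0)**2 - 2.0 * j1(0.0)**2 - j0(0.0) * jv(2, 0.0))
```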
Figure 6 shows again the velocity autocorrelation of Fig. 5(a) for the local level dynamics, but now as a function of $`\overline{x}`$. The solid line corresponds to the theoretical expectation (9). The experimental curve follows closely the predicted oscillations. With increasing $`\delta `$ the oscillations are more and more damped, but the wavelength is still in accordance with the theory (not shown).
This paper has shown that two different regimes of level dynamics have to be distinguished. In the local regime velocity distributions and autocorrelation functions are quantitatively described by the approach of a random superposition of plane waves, if the scaling (8) is applied. In the global regime, on the other hand, Simons’ and Altshuler’s universal functions describe the experimental results well, and the scaling (1) is the appropriate one. The parameter $`\delta =kD`$ governs the transition between the two regimes.
Discussions with Y.V. Fyodorov and T. Guhr at different stages of the experiments are gratefully acknowledged. We thank H.U. Baranger for calling our attention to Ref. , and E.R. Mucciolo for making his calculations of the universal velocity correlator available to us. The experiments were supported by the Deutsche Forschungsgemeinschaft via the SFB 185 ”Nichtlineare Dynamik” and by an individual grant.
|
no-problem/9901/cond-mat9901148.html
|
ar5iv
|
text
|
# Transition between two ferromagnetic states driven by orbital ordering in La0.88Sr0.12MnO3
\[
## Abstract
A lightly doped perovskite manganite La<sub>0.88</sub>Sr<sub>0.12</sub>MnO<sub>3</sub> exhibits a phase transition at $`T_{OO}=145`$ K from a ferromagnetic metal ($`T_C=172`$ K) to a novel ferromagnetic insulator. We identify the orbital degree of freedom of the $`e_g`$ electrons as the key parameter in the transition. By utilizing the resonant x-ray scattering technique, orbital ordering is directly detected below $`T_{OO}`$, in spite of a significant diminution of the cooperative Jahn-Teller distortion. The new experimental features are well described by a theory treating the orbital degree of freedom under strong electron correlation. The present experimental and theoretical studies uncover a crucial role of the orbital degree of freedom in the metal-insulator transition in lightly doped manganites.
\]
Colossal magnetoresistance (CMR), recently discovered in perovskite manganites, occurs in the vicinity of the metal-insulator (MI) transition. It was proposed many years ago that the double-exchange (DE) mechanism plays an essential role in realizing the ferromagnetic metallic state in doped manganites . However, the CMR effects cannot be explained within this simple concept, and additional ingredients, such as lattice distortion, electron correlation, and the orbital degree of freedom, have been stressed. To reveal the mechanism of the MI transition and its relation to CMR, it is essential to study the lightly doped regime in detail, where several phase boundaries are entangled.
In La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> around $`x\approx 0.12`$, the temperature dependence of the electrical resistivity shows metallic behavior below $`T_C`$, consistent with the DE picture. As the temperature decreases further, however, it shows a sharp upturn below a certain temperature , defined as $`T_{OO}`$ in the present paper. Note that a transition from the ferromagnetic metallic (FM) state to the ferromagnetic insulating (FI) state occurs at $`T_{OO}`$. Kawano et al. revealed by neutron diffraction that La<sub>0.875</sub>Sr<sub>0.125</sub>MnO<sub>3</sub> ($`T_C=230`$ K) exhibits successive structural phase transitions: from the high-temperature pseudo-cubic phase (O: $`a\approx b\approx c/\sqrt{2}`$) to the intermediate Jahn-Teller (JT) distorted orthorhombic phase (O′: $`b>a>c/\sqrt{2}`$) at $`T_H=260`$ K, and to the low-temperature O′′ phase at $`T_{OO}=160`$ K. Here, we use the orthorhombic $`Pbnm`$ notation. These complicated behaviors are far beyond the simple DE scenario.
In this Letter, we report that the MI transition in La<sub>0.88</sub>Sr<sub>0.12</sub>MnO<sub>3</sub> is actually driven by orbital ordering, which was directly observed by a recently developed and decisive technique, i.e., resonant x-ray scattering. It is counter-intuitive that the orbital ordering appears in the FI phase, where the long-range cooperative JT distortion is significantly diminished . As discussed later, this orbital ordering can be realized by the super-exchange (SE) process under strong electron correlation together with ferromagnetic ordering, and is not necessarily associated with a cooperative JT distortion. This ferromagnetic SE interaction coexists with the DE interaction, which facilitates carrier mobility above $`T_{OO}`$. The transition from FM to FI can be induced by applying a magnetic field . Our theoretical calculation, in which the orbital degree of freedom and the electron correlation are taken into account, reproduces these unconventional experimental results in La<sub>0.88</sub>Sr<sub>0.12</sub>MnO<sub>3</sub> well.
We have grown a series of single crystals by the floating-zone method using a lamp image furnace. The typical mosaicness measured by neutron diffraction is less than 0.3° FWHM, indicating that the samples are highly crystalline. We have carried out neutron-diffraction measurements on an $`x=0.12`$ single crystal using the TOPAN triple-axis spectrometer in the JRR-3M reactor in JAERI. As shown in Fig. 1, we found successive structural phase transitions and magnetic ordering consistent with Ref. , though the transition temperatures are different, reflecting a slight discrepancy in the hole concentration, i.e., $`x=0.12`$ versus $`0.125`$. However, high-resolution synchrotron x-ray powder diffraction by Cox et al. on a carefully crushed small crystal from the same batch reveals that the intermediate phase is monoclinic and that the low-temperature phase is triclinic, though the distortion is extremely small. An electron-diffraction study by Tsuda et al. using the same batch gave consistent results. Note that typical x-ray peak widths are $`0.01^{\circ }`$, close to the instrumental resolution. This indicates that the $`d`$-spacing distribution is negligible, which is another proof of the high quality of our samples.
Here, we briefly mention the charge ordering proposed by Yamada et al. We have indeed confirmed the superlattice reflections below $`T_{OO}`$ by neutron and x-ray scattering. In the x-ray study, however, the energy dependence of the superlattice peak around the Mn K-edge does not show a resonance feature which is characteristic of Mn<sup>3+</sup>/Mn<sup>4+</sup> charge ordering. We thus conclude that, below $`T_{OO}`$, there appears a long-range structural modulation along the $`c`$-axis, though neither a conventional charge ordering nor a long-range cooperative JT distortion as seen in the O′ phase exists.
Ferromagnetic ordering below $`T_C=172`$ K was observed by neutron diffraction, as shown in Fig. 1(b). With further decreasing temperature, the $`(\mathrm{2\; 0\; 0})`$ peak exhibits a discontinuous increase at the temperature corresponding to $`T_{OO}`$, where the structural phase transition shown in Fig. 1(a) occurs. We found that magnetic Bragg peaks appear at $`(\mathrm{0\; 0}l)`$ ($`l=\mathrm{odd}`$) below $`T_{OO}`$, indicating an antiferromagnetic component. When the magnetic structure below $`T_{OO}`$ is interpreted as a canted structure, the canting angle is, however, small, and the FI state remains a good approximation.
The orbital states were observed by synchrotron x-ray diffraction measurements on four-circle spectrometers at beamlines 4C and 16A2 in the Photon Factory in KEK. We have tuned the incident energy near the Mn K-edge (6.552 keV). The $`(\mathrm{0\; 1\; 0})`$ plane of a La<sub>0.88</sub>Sr<sub>0.12</sub>MnO<sub>3</sub> single crystal ($`2\varphi \times 2`$ mm) from the same batch, which was carefully polished, was aligned within the scattering plane.
Figure 2(a) shows the incident energy dependence of the $`(\mathrm{0\; 3\; 0})`$ peak, which is structurally forbidden, at 12 K. The peak exhibits a sharp enhancement at the Mn K-edge, determined experimentally from fluorescence. As discussed for La<sub>1.5</sub>Sr<sub>0.5</sub>MnO<sub>4</sub> and also for LaMnO<sub>3</sub>, the appearance of such a forbidden peak is considered a signature of orbital ordering of the Mn<sup>3+</sup> $`e_g`$ electrons: orbital ordering gives rise to an anisotropy in the anomalous scattering factor, which is enhanced and thus visible near the Mn K-edge. The antiferro (AF)-type orbital ordering is directly confirmed by rotating the crystal around the scattering vector kept at $`(\mathrm{0\; 3\; 0})`$, i.e., by an azimuthal scan. Figure 2(b) shows the azimuthal scan, clearly revealing a squared-sinusoidal angle dependence with two-fold symmetry.
Although the orbital ordering in La<sub>0.88</sub>Sr<sub>0.12</sub>MnO<sub>3</sub> seems similar to that of LaMnO<sub>3</sub>, there is a marked difference. As shown in Fig. 2(c), the orbital ordering appears only below $`T_{OO}=145`$ K, where the cooperative JT distortion disappears or significantly decreases. On the other hand, in LaMnO<sub>3</sub>, where the orbital ordering appears in the JT distorted orthorhombic phase, it has been believed that the long-range arrangement of JT distorted MnO<sub>6</sub> octahedra facilitates $`d_{3x^2-r^2}/d_{3y^2-r^2}`$ type orbital ordering. The orbital ordering is consistent with the spin-wave dispersion reported by Hirota et al., who proposed a dimensional crossover in lightly doped La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>. The spin dynamics of La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> drastically changes from a two-dimensional state, as seen in LaMnO<sub>3</sub> due to the AF-type orbital ordering of $`d_{3x^2-r^2}/d_{3y^2-r^2}`$, to a three-dimensional isotropic ferromagnetic state around $`x\approx 0.1`$. Therefore, we anticipate that La<sub>0.88</sub>Sr<sub>0.12</sub>MnO<sub>3</sub> should have a different orbital state, e.g., a hybridization of $`d_{z^2-x^2(y^2-z^2)}`$ and $`d_{3x^2-r^2(3y^2-r^2)}`$. Note that the intensity of the $`(\mathrm{0\; 3\; 0})`$ resonant peak is significantly reduced at low temperatures compared with that just below $`T_{OO}`$, indicating that the AF-type orbital ordering becomes weaker with decreasing temperature. This reduction is not necessarily due to an instability of the orbital ordering at low temperatures, because a gradual change of the type of orbital ordering , e.g., from AF-type to ferro-type, can also give rise to this effect.
Now we theoretically reveal the microscopic mechanism of the newly found experimental features. The spin and orbital states are investigated by utilizing a model Hamiltonian in which the spin and orbital degrees of freedom are treated on an equal footing together with the strong electron correlation : $`\mathcal{H}=\mathcal{H}_t+\mathcal{H}_J+\mathcal{H}_H+\mathcal{H}_{AF}`$. The first and second terms correspond to the so-called $`t`$- and $`J`$-terms in the $`tJ`$-model for $`e_g`$ electrons. These are given by $`\mathcal{H}_t=\sum_{\langle ij\rangle \gamma \gamma ^{\prime }\sigma }t_{ij}^{\gamma \gamma ^{\prime }}\tilde{d}_{i\gamma \sigma }^{\dagger }\tilde{d}_{j\gamma ^{\prime }\sigma }+\mathrm{H.c.}`$ and
$$\mathcal{H}_J=-2J_1\sum_{\langle ij\rangle }\left(\frac{3}{4}n_in_j+\vec{S}_i\cdot \vec{S}_j\right)\left(\frac{1}{4}-\tau _i^l\tau _j^l\right)$$
(1)
$$-2J_2\sum_{\langle ij\rangle }\left(\frac{1}{4}n_in_j-\vec{S}_i\cdot \vec{S}_j\right)\left(\frac{3}{4}+\tau _i^l\tau _j^l+\tau _i^l+\tau _j^l\right),$$
(2)
where $`\tau _i^l=\mathrm{cos}(\frac{2\pi }{3}n_l)T_{iz}-\mathrm{sin}(\frac{2\pi }{3}n_l)T_{ix}`$ and $`(n_x,n_y,n_z)=(1,2,3)`$. $`l`$ denotes the direction of the bond connecting sites $`i`$ and $`j`$. $`\tilde{d}_{i\gamma \sigma }`$ is the annihilation operator of an $`e_g`$ electron at site $`i`$ with spin $`\sigma `$ and orbital $`\gamma `$, with double occupancy excluded. The spin and orbital states are denoted by the spin operator $`\vec{S}_i`$ and the pseudo-spin operator $`\vec{T}_i`$, respectively. The latter describes which of the orbitals is occupied. The third and fourth terms in the Hamiltonian describe the Hund coupling, $`\mathcal{H}_H=-J_H\sum_i\vec{S}_{ti}\cdot \vec{S}_i`$, and the AF magnetic interaction between $`t_{2g}`$ spins, $`\mathcal{H}_{AF}=J_{AF}\sum_{\langle ij\rangle }\vec{S}_{ti}\cdot \vec{S}_{tj}`$, respectively, where $`\vec{S}_{ti}`$ is the spin operator for the $`t_{2g}`$ electrons with $`S=3/2`$. Since the cooperative JT distortion has been experimentally found to be weak around $`x\approx 0.1`$, the electron-lattice coupling is neglected in the model. As seen in the first term of $`\mathcal{H}_J`$, the ferromagnetic SE interaction results from the orbital degeneracy and the Hund coupling between $`e_g`$ electrons : through the coupling between spin and orbital degrees in $`\mathcal{H}_J`$, the ferromagnetic ordering and AF-type orbital ordering are cooperatively stabilized.
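For concreteness (this explicit evaluation is ours, added for illustration), with $`(n_x,n_y,n_z)=(1,2,3)`$ the bond projections of the pseudo-spin read

$$\tau _i^x=-\frac{1}{2}T_{iz}-\frac{\sqrt{3}}{2}T_{ix},\qquad \tau _i^y=-\frac{1}{2}T_{iz}+\frac{\sqrt{3}}{2}T_{ix},\qquad \tau _i^z=T_{iz},$$

so the interaction in $`\mathcal{H}_J`$ couples the occupied-orbital degree of freedom to the bond direction through these three linear combinations of $`T_{iz}`$ and $`T_{ix}`$.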
The mean-field approximation is adopted in the calculation of the spin and orbital states at finite $`x`$ and $`T`$ . Two kinds of mean field, for $`\vec{S}_i`$ and $`\vec{T}_i`$, are introduced. Both states are described by distribution functions, and the mean fields are optimized by minimizing the free energy. Ferromagnetic spin and G-type pseudo-spin alignments are assumed. The detailed formulation will be presented elsewhere.
The calculated phase diagram is presented in Fig. 3. With hole doping, the leading magnetic interaction gradually changes from the SE interaction at lower $`x`$ to the DE one . $`T_C`$ monotonically increases with increasing $`x`$. On the other hand, the orbital state changes from the AF-type ordering favored by $`\mathcal{H}_J`$ to the ferro-type ordering induced by $`\mathcal{H}_t`$ due to the gain in kinetic energy. Thus, $`T_{OO}`$ decreases with hole doping. In the undoped insulator, $`T_{OO}`$ is higher than $`T_C`$ because the interaction between orbitals $`(3J_1/2)`$ is larger than that between spins $`(J_1/2)`$, as expected from the first term in $`\mathcal{H}_J`$. Consequently, $`T_C`$ and $`T_{OO}`$ cross each other at $`x_c\approx 0.1`$, as seen in Fig. 3.
We next focus on the region where $`x`$ is slightly higher than $`x_c`$. There are two kinds of ferromagnetic phase: the phase between $`T_C`$ and $`T_{OO}`$ and that below $`T_{OO}`$. In the high-temperature phase, the orbital is disordered. In the low-temperature phase, on the other hand, the AF-type orbital ordering appears and the SE interaction is enhanced through the spin-orbital coupling in $`\mathcal{H}_J`$. Since the AF-type ordering reduces the kinetic energy, the DE interaction is weakened. Consequently, the metallic character is degraded. We identify the low- and high-temperature phases with FI and FM in $`\mathrm{La}_{0.88}\mathrm{Sr}_{0.12}\mathrm{MnO}_3`$, respectively. Note again that the AF-type orbital ordering is driven by the electronic mechanism, not supported by the JT distortion . The orbital ordering (the inset of Fig. 3) is denoted as $`(\theta _A/\theta _B=-\theta _A)`$ with $`\theta _A=\pi /2`$, where $`\theta _{A(B)}`$ is the angle in the orbital space on the $`A(B)`$ sublattice. This is a mixture of $`d_{z^2-x^2(y^2-z^2)}`$ and $`d_{3x^2-r^2(3y^2-r^2)}`$, which is consistent with the isotropic ferromagnetic spin-wave dispersion. The characteristic curve in the azimuthal angle dependence of the resonant x-ray scattering (Fig. 2(b)) is reproduced by this type of ordering . The coupling between spin and orbital is reflected in the temperature dependence of the magnetization. It is shown in Fig. 4(a) that the magnetization is enhanced below $`T_{OO}`$. The calculated result is consistent with the experimental observation in $`\mathrm{La}_{0.88}\mathrm{Sr}_{0.12}\mathrm{MnO}_3`$ (the inset of Fig. 4(a)). This is strong evidence of the novel coupling between the spin and orbital degrees of freedom.
In Fig. 4(b), $`T_{OO}`$ is shown as a function of the applied magnetic field. The applied magnetic field stabilizes the low-temperature ferromagnetic phase accompanied by the orbital ordering. This is because the ferromagnetic spin correlation induced by the field enhances the interaction between orbitals through the spin-orbital coupling in $`\mathcal{H}_J`$. In other words, the magnetic field controls the orbital states. The theoretical $`T_{OO}`$ versus $`H`$ curve qualitatively reproduces the experimental results in $`\mathrm{La}_{0.88}\mathrm{Sr}_{0.12}\mathrm{MnO}_3`$ (the inset of Fig. 4(b)) and strongly supports the view that the orbital degree of freedom plays a key role in the low-temperature phase and in the transition at $`T_{OO}`$.
To conclude, the transition from the ferromagnetic metallic to the ferromagnetic insulating phase in $`\mathrm{La}_{0.88}\mathrm{Sr}_{0.12}\mathrm{MnO}_3`$ is ascribed to an orbital order-disorder transition. The orbital ordering is observed in the low-temperature phase, where the cooperative Jahn-Teller type distortion is significantly diminished. The stability of the two phases is controlled by changing the temperature and/or applying a magnetic field, and a unique coupling between the spin and orbital degrees of freedom is found. The present investigation shows a novel role of the orbital degree of freedom as a hidden parameter in the MI transition in lightly doped CMR manganites.
The authors acknowledge D. E. Cox, K. Tsuda, and T. Inami for their valuable discussions. Part of the numerical calculation was performed on the HITAC S-3800/380 supercomputing facilities at IMR, Tohoku University. S.O. acknowledges the financial support of the JSPS Research Fellowships for Young Scientists.
|
no-problem/9901/astro-ph9901239.html
|
ar5iv
|
text
|
# Thermal Flipping and Thermal Trapping – New Elements in Dust Grain Dynamics
## 1 Introduction
One of the essential features of grain dynamics in the diffuse interstellar medium (ISM) is suprathermal rotation (Purcell 1975, 1979) resulting from systematic torques that act on grains. Purcell (1979, henceforth P79) identified three separate systematic torque mechanisms: inelastic scattering of impinging atoms when gas and grain temperatures differ, photoelectric emission, and H<sub>2</sub> formation on grain surfaces<sup>1</sup> (<sup>1</sup>Radiative torques suggested in Draine & Weingartner (1996) as a means of suprathermal rotation are effective only for grains larger than $`5\times 10^{-6}`$ cm). The latter was shown to dominate the other two for typical conditions in the diffuse ISM (P79). The existence of systematic H<sub>2</sub> torques is expected due to the random distribution over the grain surface of catalytic sites of H<sub>2</sub> formation, since each active site acts as a minute thruster emitting newly-formed H<sub>2</sub> molecules.
The arguments of P79 in favor of suprathermal rotation were so clear and compelling that other researchers were immediately convinced that interstellar grains in diffuse clouds should rotate suprathermally. Purcell’s discovery was of immense importance for grain alignment. Suprathermally rotating grains remain subject to gradual alignment by paramagnetic dissipation (Davis & Greenstein 1951), but due to their large angular momentum are essentially immune to disalignment by collisions with gas atoms.
Spitzer & McGlynn (1979, henceforth SM79) showed that suprathermally rotating grains should be susceptible to disalignment only during short intervals of slow rotation that they called “crossovers”<sup>2</sup> (<sup>2</sup>Crossovers are due to various grain surface processes that change the direction, in body-coordinates, of the systematic torques). Therefore, for sufficiently infrequent crossovers, suprathermally rotating grains will be well aligned, with the degree of alignment determined by the time between crossovers, the degree of correlation of the direction of grain angular momentum before and after a crossover (SM79), and environmental conditions (e.g., magnetic field strength $`B`$).
The original calculations of SM79 obtained only marginal correlation of angular momentum before and after a crossover, but their analysis disregarded thermal fluctuations within the grain material. Lazarian & Draine (1997, henceforth LD97) showed that thermal fluctuations are very important, and result in a high degree of correlation for grains larger than a critical radius $`a_c`$, the radius for which the time for internal dissipation of rotational kinetic energy is equal to the duration of a crossover. Assuming that the grain relaxation is dominated by Barnett relaxation (P79), LD97 found $`a_c\approx 1.5\times 10^{-5}\mathrm{cm}`$, in accord with observations that indicated that the dividing line between grains that contribute and those that do not contribute to starlight polarization has approximately this value (Kim & Martin 1995).
Here we report the discovery that a new effect of thermal fluctuations – which we term “thermal flipping” – should lead to alignment of even the small grains with $`a\lesssim a_c`$, if they rotate suprathermally. However, small grains are observed to not be aligned. We argue that this is due to a second effect – “thermal trapping” – which causes small grains to rotate thermally a significant fraction of the time, despite systematic torques which would otherwise drive suprathermal rotation.
In §2 we review the role of Barnett fluctuations in the crossover process. In §3 we discuss how crossovers influence grain alignment, and in §4 we argue that “thermal trapping” can account for the observed lack of alignment of small grains.
## 2 Crossovers and Thermal Flipping
SM79 revealed two basic facts about grain crossovers. First, they showed that in the absence of external random torques the direction of grain angular momentum $`𝐉`$ remains constant during a crossover, while the grain itself flips over. Second, they demonstrated that in the presence of random torques (arising, for instance, from gaseous bombardment) the degree of correlation of the direction of $`𝐉`$ before and after a crossover is determined by $`J_{\mathrm{min}}`$, the minimum value of $`|J|`$ during the crossover. As a grain tends to rotate about its axis of maximal moment of inertia (henceforth referred to as “the axis of major inertia”), SM79 showed that the systematic torque components perpendicular to this axis are averaged out and the crossover is due to changes in $`J_{\parallel }`$ due to the component of the torque parallel to the axis.<sup>3</sup> (<sup>3</sup>Indices $`\parallel `$ and $`\perp `$ denote components parallel and perpendicular to the axis of major inertia.) Midway through the flipover $`J_{\parallel }=0`$ and $`J_{\mathrm{min}}=J_{\perp }`$.
The time scale for Barnett relaxation is much shorter than the time between crossovers. For finite grain temperatures, thermal fluctuations cause the axis of major inertia to deviate from $`𝐉`$ (Lazarian 1994). These deviations are given by the Boltzmann distribution (Lazarian & Roberge 1997), which, for an oblate grain (e.g., an $`a\times b\times b`$ “brick” with $`b>a`$), is
$$f(\beta )d\beta =\mathrm{const}\,\mathrm{sin}\beta \,\mathrm{exp}\left[-E_k(\beta )/kT_d\right]d\beta ;$$
(1)
$$E_k(\beta )=\frac{J^2}{2I_z}\left[1+\mathrm{sin}^2\beta \left(\frac{I_z}{I_{\perp }}-1\right)\right]$$
(2)
is the kinetic energy, and $`\beta `$ the angle between the axis of major inertia and $`𝐉`$. We define
$$J_d\equiv \left(\frac{I_zI_{\perp }kT_d}{I_z-I_{\perp }}\right)^{1/2}\approx (2I_zkT_d)^{1/2}.$$
(3)
where the approximation assumes $`I_z\approx 1.5I_{\perp }`$ (so that $`I_zI_{\perp }/(I_z-I_{\perp })=2I_z`$), as for an $`a\times b\times b`$ brick with $`b/a=\sqrt{3}`$. The Barnett relaxation time is (P79)
$$t_\mathrm{B}=8\times 10^7a_5^7(J_d/J)^2\mathrm{sec},$$
(4)
where $`a_5\equiv a/10^{-5}\mathrm{cm}`$. For $`J>J_d`$, the most probable value of $`\beta `$ for distribution (1) has $`J_{\perp }=J\mathrm{sin}\beta =J_d`$, while for $`J<J_d`$ the most probable value of $`\beta `$ is $`\pi /2`$. It follows from (1) that during suprathermal rotation ($`J^2\gg J_d^2`$) the fluctuating component of angular momentum perpendicular to the axis of major inertia $`J_{\perp }^2\approx J_d^2`$.
SM79 defined the crossover time as $`t_c=2J_{\perp }/|\dot{J}_{\parallel }|`$ where $`\dot{J}_{\parallel }`$ is the time derivative of $`J_{\parallel }`$. If $`t_c\ll t_B`$, the Barnett fluctuations can be disregarded during a crossover, and $`J_{\perp }=\mathrm{const}\approx J_d`$. The corresponding trajectory is represented by a dashed line in Fig. 1. Initially the grain rotates suprathermally with $`\beta \approx 0`$; $`\beta `$ crosses through $`\pi /2`$ during the crossover and $`\beta \rightarrow \pi `$ as the grain spins up again to suprathermal velocities.
The condition $`t_c=t_\mathrm{B}`$ was used in LD97 to obtain a critical grain size $`a_\mathrm{c}\approx 1.5\times 10^{-5}`$ cm. It was shown that $`t_c<t_B`$ for $`a>a_\mathrm{c}`$, and paramagnetic dissipation can achieve an alignment of $`80\%`$ for typical values of the interstellar magnetic field. If paramagnetic alignment were suppressed for $`a<a_\mathrm{c}`$ this would explain the observed dichotomy in grain alignment: large grains are aligned, while small ones are not.
What spoils this nice picture is that sufficiently strong thermal fluctuations can enable crossovers: fluctuations in $`\beta `$ span $`[0,\pi ]`$ and therefore have a finite probability to flip a grain over for an arbitrary value of $`J`$. The probability of such fluctuations is small for $`J^2\gg J_d^2`$, but becomes substantial when $`|J|`$ approaches $`J_d`$. Indeed, it is obvious from (1) that in the latter case the probability of $`\beta \approx \pi /2`$ becomes appreciable. LD98 show that the probability per unit time of a flipover due to fluctuations is
$$t_{\mathrm{tf}}^{-1}\approx t_\mathrm{B}^{-1}\mathrm{exp}\left\{-(1/2)\left[(J/J_d)^2-1\right]\right\}.$$
(5)
Whether the grain trajectory is approximately a straight line in Fig. 1 (a “regular crossover”), or two lines connected by an arc (a “thermal flip”), depends on the efficacy of the Barnett relaxation. Roughly speaking, thermal flipping happens when $`t_{\mathrm{tf}}\lesssim J/\dot{J}`$. If $`J\approx J_d`$ the ratio of the flipping and crossover times is $`t_{\mathrm{tf}}/t_c\approx t_\mathrm{B}/t_c`$. The latter ratio was found in LD97 to be equal to $`(a/a_c)^{13/2}`$. Therefore flipping was correctly disregarded in LD97, where only grains larger than $`a_c`$ were considered, but should be accounted for if $`a<a_c`$.
The last issue is the problem of multiple flips: a grain with $`\beta >\pi /2`$ can flip back. Thermal flips do not change $`𝐉`$. Therefore after a flip (from quadrant $`\beta <\pi /2`$ to quadrant $`\beta >\pi /2`$ in Fig. 1) the grain has the same $`𝐉`$ as before the flip. For $`J>J_d`$, the thermal distribution (1) has two most probable values of $`\beta `$: $`\beta _{-}=\mathrm{sin}^{-1}(J_d/J)`$, and $`\beta _{+}=\pi -\beta _{-}`$. For both $`\beta _{-}`$ and $`\beta _{+}`$ we have $`J_{\perp }=J_d`$. If we idealize the grain dynamics as consisting of systematic torques changing $`J_{\parallel }`$ with $`J_{\perp }=\mathrm{const}`$, plus the possibility of instantaneous “flips” (at constant $`𝐉`$) between $`\beta _{-}`$ and $`\beta _{+}`$, then we can estimate the probability of one or more “flips” taking place during a crossover. Let $`\varphi (\beta )d\beta =t_{\mathrm{tf}}^{-1}dt`$ be the probability of a flip from $`\beta `$ to $`\pi -\beta `$ while traversing $`d\beta `$. The probability of zero flips between $`0`$ and $`\beta `$ is $`f_{00}(\beta )=\mathrm{exp}[-\int _0^\beta \varphi (x^{\prime })dx^{\prime }]`$. The probability of a “regular crossover” (zero flips) is $`p_{00}=f_{00}(\pi )=e^{-2\alpha }`$, where
$$\alpha \equiv \int _0^{\pi /2}\varphi (x)dx\approx \sqrt{\frac{\pi }{2}}\frac{t_c}{t_B}=\sqrt{\frac{\pi }{2}}\left(\frac{a_c}{a}\right)^{13/2}.$$
(6)
Similarly, $`df_{10}=f_{00}^2\varphi d\beta `$ is the probability of one forward flip in the interval $`d\beta `$, with no prior or subsequent flips, and the probability of exactly one forward and zero backward flips during the crossover is $`p_{10}=f_{10}(\pi /2)=(1-e^{-2\alpha })/2`$. Therefore the probability of one or more backward flips is $`1-p_{00}-p_{10}=(1-e^{-2\alpha })/2\rightarrow 1/2`$ for $`a\ll a_c`$.
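These probabilities are easy to verify numerically. The sketch below (ours, not part of the original paper) assumes an illustrative shape for the flip rate $`\varphi (\beta )`$, symmetric about $`\pi /2`$ and normalized so that $`\int _0^{\pi /2}\varphi d\beta =\alpha `$; the results for $`p_{00}`$ and $`p_{10}`$ do not depend on the detailed shape:

```python
# Check the crossover flip statistics: p00 = exp(-2*alpha) and
# p10 = Int_0^{pi/2} f00(beta)^2 phi(beta) dbeta = (1 - exp(-2*alpha))/2.
import numpy as np
from scipy.integrate import quad

alpha = 1.7                            # illustrative value of alpha from Eq. (6)
phi = lambda b: alpha * np.sin(b)      # illustrative flip rate, peaked at beta = pi/2

def f00(beta):                         # probability of zero flips between 0 and beta
    return np.exp(-quad(phi, 0.0, beta)[0])

p00 = f00(np.pi)
p10 = quad(lambda b: f00(b)**2 * phi(b), 0.0, np.pi / 2)[0]
print("p00 =", p00, "   exp(-2 alpha) =", np.exp(-2.0 * alpha))
print("p10 =", p10, "   (1 - exp(-2 alpha))/2 =", (1.0 - np.exp(-2.0 * alpha)) / 2.0)
print("P(at least one backward flip) =", 1.0 - p00 - p10)
```

For $`\alpha \gg 1`$ (i.e., $`a\ll a_c`$) the last probability approaches 1/2, as stated above.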
## 3 Efficacy of Paramagnetic Alignment
SM79 showed that disalignment of suprathermally rotating grains occurs only during crossovers, whereas thermally rotating grains undergo randomization all the time. Consider a grain subject to random (nonsystematic) torques which provide an excitation rate $`\mathrm{\Delta }J^2/\mathrm{\Delta }t`$, and a damping torque $`-𝐉/t_d`$, where $`t_d`$ is the rotational damping time. Thermally rotating grains have $`\langle J^2\rangle =(1/2)t_d(\mathrm{\Delta }J^2/\mathrm{\Delta }t)\equiv J_{th}^2`$. This definition of thermally rotating grains encompasses grains whose rotation is excited by random H<sub>2</sub> formation, cosmic ray bombardment etc. – so long as the associated torques have no systematic component. For suprathermally rotating grains $`J^2\gg J_{th}^2`$.
In what follows we roughly estimate the efficacy of grain alignment for $`t_c>t_\mathrm{B}`$, i.e., $`a<a_c`$. Following P79 we assume that H<sub>2</sub> torques are the dominant spin-up mechanism.
A crossover requires $`N\approx J_{\mathrm{min}}/\mathrm{\Delta }J_z`$ impulse events, where the mean impulse per recombination event (see SM79) is $`\mathrm{\Delta }J_z\approx \left(2m_\mathrm{H}a^2E/3\nu \right)^{1/2}`$, where $`\nu `$ is the number of active recombination sites, $`E`$ is the kinetic energy per newly-formed H<sub>2</sub>, and $`J_{\mathrm{min}}`$ is the minimum $`J`$ reached during the crossover. If $`N`$ is multiplied by the sum of mean squared random angular momentum impulses $`(\mathrm{\Delta }J_z^2+\mathrm{\Delta }J_{\perp }^2)`$ it gives the mean squared change of $`J`$ during a crossover. Therefore the mean squared change of angle during a flipping-assisted crossover is
$$F\approx \frac{N(\mathrm{\Delta }J_z^2+\mathrm{\Delta }J_{\perp }^2)}{J_{\mathrm{min}}^2}\approx \frac{(\mathrm{\Delta }J_z^2+\mathrm{\Delta }J_{\perp }^2)}{J_{\mathrm{min}}\mathrm{\Delta }J_z}$$
(7)
which differs only by a factor of order unity from the expression for the disorientation parameter $`F`$ in SM79, provided that $`J_{\mathrm{min}}`$ is used instead of $`J_{\perp }`$. The latter is the major difference between the regular crossovers that were described by SM79 and LD97 and our present study. SM79 and LD97 dealt with the case for which flipping is negligible and the disorientation was mostly happening when $`J_{\parallel }\approx 0`$ due to the action of regular torques, in which case $`J_{\mathrm{min}}\approx J_d`$. As flipping becomes important, $`J_{\mathrm{min}}>J_d`$ is obtained from the condition $`t_{\mathrm{tf}}(J_{\mathrm{min}})\approx J_{\mathrm{min}}/\dot{J}`$.
Grain alignment is measured by $`\sigma \equiv (3/2)(\langle \mathrm{cos}^2\theta \rangle -1/3)`$ where $`\theta `$ is the angle between the magnetic field direction and $`𝐉`$. Generalizing LD97,
$$\sigma \approx A\left[1+\frac{3}{\delta _{\mathrm{eff}}}\left(\frac{\mathrm{arctan}\sqrt{e^{\delta _{\mathrm{eff}}}-1}}{\sqrt{e^{\delta _{\mathrm{eff}}}-1}}-1\right)\right]+(1-A)\sigma _{\mathrm{th}},$$
(8)
$$\delta _{\mathrm{eff}}=\frac{2C\overline{t}_b/t_r}{\left[1-\mathrm{exp}(-F)\right]}\approx \frac{2.6Ct_d/t_r}{\left[1-\mathrm{exp}(-F)\right]}\left(1+\frac{t_L}{t_d}\right)$$
(9)
where $`A=C=1`$ in LD97 theory, $`\sigma _{\mathrm{th}}`$ is the alignment parameter for thermally rotating grains (Roberge & Lazarian 1998), $`t_r`$ is the paramagnetic damping time (Davis & Greenstein 1951), $`\overline{t}_b\approx 1.3(t_d+t_L)`$ is the mean time back to the last crossover (P79), and $`t_L`$ is the resurfacing time. For typical ISM conditions ($`n_\mathrm{H}=20`$ cm<sup>-3</sup>, $`B=5\mu `$G) $`t_d/t_r\approx 0.05/a_5`$.
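The behaviour of Eq. (8) is easy to explore numerically; the short sketch below (ours, with $`A=C=1`$ and an assumed illustrative resurfacing time $`t_L=t_d`$) shows how the alignment degrades as the disorientation parameter $`F`$ grows:

```python
# Evaluate sigma from Eq. (8) with A = C = 1, using delta_eff from Eq. (9).
import numpy as np

def sigma_suprathermal(delta_eff):
    s = np.sqrt(np.expm1(delta_eff))                  # sqrt(exp(delta_eff) - 1)
    return 1.0 + (3.0 / delta_eff) * (np.arctan(s) / s - 1.0)

a5 = np.array([0.5, 1.0, 2.0])                        # grain radius in units of 1e-5 cm
td_over_tr = 0.05 / a5                                # quoted for n_H = 20 cm^-3, B = 5 uG
tL_over_td = 1.0                                      # assumed resurfacing time t_L = t_d
for F in (0.1, 1.0, 3.0):
    delta_eff = 2.6 * td_over_tr * (1.0 + tL_over_td) / (1.0 - np.exp(-F))
    print("F =", F, "  sigma(a5 = 0.5, 1, 2) =", np.round(sigma_suprathermal(delta_eff), 3))
```

Small $`F`$ (well-correlated crossovers) gives a large $`\delta _{\mathrm{eff}}`$ and hence strong alignment, while $`F\gtrsim 1`$ degrades it, which is why the behaviour during crossovers is so important.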
Expressions (8) and (9) (with $`A=C=1`$) were obtained in LD97 assuming that grains spend nearly all their time rotating suprathermally, except for brief crossover events with a characteristic disorientation parameter $`F`$. We now argue that a significant fraction of the small grains do not rotate suprathermally, and an appreciable fraction of crossovers have $`F\gtrsim 1`$.
## 4 Thermal trapping of small grains
The P79 theory of suprathermal rotation did not take into account the “thermal flipping” process discussed here. We now argue that thermal flipping will suppress the suprathermal rotation of very small grains.
With the Barnett relaxation time $`t_\mathrm{B}`$ from (4), the ratio $`t_\mathrm{B}/t_d\approx 1\times 10^{-4}(15\,\mathrm{K}/T_d)(J_d/J)^2a_5^6`$, where the drag time $`t_d`$ is evaluated for a diffuse cloud with $`n_\mathrm{H}=30\mathrm{cm}^{-3}`$ and $`T=100\mathrm{K}`$. Thus the timescale for a thermal flip is
$$t_{\mathrm{tf}}/t_d\approx 10^{-4}a_5^6(J_d/J)^2\mathrm{exp}\left\{(1/2)\left[(J/J_d)^2-1\right]\right\}$$
(10)
showing that thermal flipping is strongly favored for small grains. The critical question now is: Can the systematic torques drive small grains to large enough $`J`$ to suppress the thermal flipping, or is the thermal flipping sufficiently rapid to suppress the superthermality?
Consider a grain with a systematic torque $`(G-1)^{1/2}J_{\mathrm{th}}/t_d`$ along the major axis (fixed in grain coordinates). The condition $`J_{\mathrm{min}}=\dot{J}t_{\mathrm{tf}}(J_{\mathrm{min}})`$ becomes
$$t_{\mathrm{tf}}/t_d=(J_{\mathrm{min}}/J_{\mathrm{th}})/(G-1)^{1/2}.$$
(11)
Thermal flipping causes the systematic torque to randomly change sign in inertial coordinates, so that
$$\langle J^2\rangle =J_{\mathrm{th}}^2+(G-1)J_{\mathrm{th}}^2t_{\mathrm{tf}}/(t_{\mathrm{tf}}+t_d),$$
(12)
giving a condition for $`t_{\mathrm{tf}}`$ in terms of $`\langle J^2\rangle `$:
$$t_{\mathrm{tf}}/t_d=\left[(J/J_{\mathrm{th}})^2-1\right]/\left[G-(J/J_{\mathrm{th}})^2\right].$$
(13)
For given $`a`$, $`G`$, and $`(J_{th}/J_d)^2`$, (10) and (13) have either one or three solutions for $`\langle J^2\rangle `$. If $`\langle J^2\rangle ^{1/2}`$ has multiple solutions $`J_1<J_2<J_3`$, the intermediate solution $`J_2`$ is unstable: if $`J_1<J<J_2`$, then $`t_{\mathrm{tf}}`$ from (10) is smaller than the value required by (13), so $`J\rightarrow J_1`$; if $`J_2<J<J_3`$, then $`J\rightarrow J_3`$. In the former case thermal flipping leads to suppression of suprathermal rotation: if the grain enters the region $`J<J_2`$, then it is trapped with $`J\approx J_1`$ until a fluctuation brings it to $`J>J_2`$. The timescale for such a fluctuation is $`t_{\mathrm{trap}}\sim t_d\mathrm{exp}[(J_2/J_{\mathrm{th}})^2]`$. We refer to this phenomenon as “thermal trapping”.
As an example, consider $`(J_{\mathrm{th}}/J_\mathrm{d})^2=5`$, $`G=10^3`$ and $`a_5=0.5`$. Thermal flipping takes place during a crossover at $`J_{\mathrm{min}}^2\approx 5.9J_{\mathrm{th}}^2`$. If $`\langle J^2\rangle `$ drops below $`J_2^2=4.5J_{\mathrm{th}}^2`$, the grain will be thermally trapped. For this case thermal flipping will tend to maintain the grain at $`\langle J^2\rangle \approx J_1^2\approx 1.02J_{\mathrm{th}}^2`$, unless thermal fluctuations succeed in getting the grain to $`J>J_2`$, in which case thermal flipping is unable to prevent the grain spinning up to a superthermality $`(J/J_{\mathrm{th}})^2\approx G=10^3`$. For this example the thermal trapping time $`t_{\mathrm{trap}}\approx 50t_d`$.
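The structure of this example can be traced with a small root-finding sketch (ours, not from the original; since the numerical prefactor in Eq. (10) depends on the adopted grain and gas parameters, the roots below are indicative rather than exact, but they reproduce the trapping boundary near $`4.5J_{\mathrm{th}}^2`$ and the first-flip value near $`5.9J_{\mathrm{th}}^2`$):

```python
# Stationary points of Eqs. (10), (11), (13) for (J_th/J_d)^2 = 5, G = 1e3, a_5 = 0.5.
# u denotes (J/J_th)^2.
import numpy as np
from scipy.optimize import brentq

r2, G, a5 = 5.0, 1.0e3, 0.5                      # (J_th/J_d)^2, superthermality, grain size

def ttf_over_td(u):                              # Eq. (10), with (J/J_d)^2 = r2 * u
    return 1.0e-4 * a5**6 / (r2 * u) * np.exp(0.5 * (r2 * u - 1.0))

balance = lambda u: ttf_over_td(u) - (u - 1.0) / (G - u)              # Eq. (10) = Eq. (13)
u1 = brentq(balance, 1.0 + 1e-9, 2.0)            # thermally trapped branch J_1^2
u2 = brentq(balance, 2.0, 20.0)                  # unstable trapping boundary J_2^2
first_flip = lambda u: ttf_over_td(u) - np.sqrt(u) / np.sqrt(G - 1.0) # Eq. (10) = Eq. (11)
umin = brentq(first_flip, 2.0, 20.0)

print("J_1^2 / J_th^2   ~", round(u1, 3))        # trapped level, close to thermal
print("J_2^2 / J_th^2   ~", round(u2, 2))        # ~4.5 in the text
print("J_min^2 / J_th^2 ~", round(umin, 2))      # ~5.9 in the text
```

The third, suprathermal solution $`J_3^2`$ lies close to $`GJ_{\mathrm{th}}^2`$, where Eq. (13) diverges.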
Let $`\eta `$ be the probability per crossover of becoming “thermally trapped”. The fraction of grains which are not thermally trapped at any time is $`x=\overline{t}_b/(\overline{t}_b+\eta t_{\mathrm{trap}})`$. We can estimate the grain alignment by using (8) and (9) with $`A=x`$ and $`C=[1-\mathrm{exp}(-F)]\{\eta +(1-\eta )[1-\mathrm{exp}(-F)]\}^{-1}`$.
During a crossover, the first thermal flip takes place at $`J\approx J_{\mathrm{min}}`$, only a bit larger than $`J_2`$, the thermal trapping boundary. We have seen above that for $`a<a_c`$, $`\sim 50\%`$ of crossovers involve one or more “backward” flips. We do not know what fraction of the crossovers end up “thermally trapped”, but we speculate that it could be appreciable, say $`\eta \sim 0.1`$.
The time between crossovers is of the order of the damping time $`t_d`$ (see P79). Returning to our example of a grain with $`a_5=0.5`$, for which we estimated $`t_{\mathrm{trap}}\approx 50t_d`$, we see that the fraction of grains which are not trapped $`x=1/(1+50\eta t_d/\overline{t}_b)`$ could be small if $`\eta \gtrsim 0.1`$. More detailed studies of the dynamics (Lazarian & Draine 1999) will be required to estimate $`\eta `$, and to provide more reliable estimates of $`t_{\mathrm{trap}}`$, before we can quantitatively estimate the degree to which thermal trapping will suppress the alignment of small grains.
Inspection of Fig. 2 shows that thermal trapping solutions are only found if $`G`$ is not too large (e.g. for $`G=10^5`$ we have no thermal trapping solution in Fig. 2 for $`a_5=0.5`$).
Such degrees of suprathermality would follow from variations of the accommodation coefficient and photoelectric yield, but higher values were obtained in the literature for the case of H<sub>2</sub> torques (P79, LD97). For example, LD97 estimate $`G=2\times 10^7a_5(\gamma /0.2)^2(10^{11}\mathrm{cm}^{-2}/\alpha )`$, where $`\alpha `$ is the surface density of active recombination sites. The values of $`G\lesssim 10^4`$ required for thermal trapping to be possible for $`a_5\approx 0.5`$ grains would appear to require $`(\gamma /0.2)^2/\alpha \lesssim 10^{-14}\mathrm{cm}^2`$. If essentially every surface site is an active chemisorption site, we could have $`\alpha \approx 10^{15}\mathrm{cm}^{-2}`$; alternatively, it is conceivable that $`\gamma \ll 1`$ for very small grains. The latter idea was advocated by Lazarian (1995), who found that oxygen poisoning of catalytic sites is exponentially enhanced for grains with $`a<10^{-5}`$ cm. Recent experimental work (Pironello et al. 1997a,b) suggests that $`\gamma `$ may be much smaller than is usually assumed. Moreover, recent research (Lazarian & Efroimsky 1998, Lazarian & Draine 1999) shows that faster processes of internal relaxation are possible. These processes should enable thermal trapping for larger values of $`G`$.
## 5 Conclusions
We have found that “thermal flipping” is a critical element of the dynamics of small ($`a\lesssim 10^{-5}`$ cm) grains. If small grains rotate suprathermally, then thermal flipping would promote their alignment by suppressing disalignment during “flipping-assisted” crossovers. Since small grains are observed to not be aligned, it follows that most must not rotate suprathermally.
One way for small grains to not rotate suprathermally would be for the systematic torques from $`\mathrm{H}_2`$ formation and photoelectric emission to be much smaller than current estimates. However, we also find that thermal flipping can result in “thermal trapping”, whereby rapid thermal flipping can prevent systematic torques from driving small grains to suprathermal rotation rates. As a result, at any given time an appreciable fraction of small grains are thermally trapped and being disaligned by random processes.
The thermal trapping effect is of increasing importance for smaller grains, and may explain the observed minimal alignment of $`a\lesssim 5\times 10^{-6}\mathrm{cm}`$ dust grains (Kim & Martin 1995).
This work was supported in part by NASA grants NAG5-2858 and NAG5-7030, and NSF grant AST-9619429.
|