no-problem/9812/cs9812009.html
# Vocal Access to a Newspaper Archive: Design Issues and Preliminary Investigation

## 1 Introduction

For the last 50 years Information Retrieval has been concerned with enabling users to retrieve textual documents (most often only references) in response to textual queries. With the availability of faster computers and cheaper electronic storage it is now possible to store and search the full text of millions of large textual documents online. It has been observed that widespread access to the Internet, and the high bandwidth often available to users, leads them to think that there is nothing easier than connecting to and searching a large information repository. Nevertheless, there are cases in which users need to search an information repository without having access to a computer or to an Internet connection, and in which the only thing available is a low-bandwidth telephone line. We can give a number of examples of such situations: an automatic booking service for an airline company that gives direct access to recognised customers, a press archive for journalists on mission abroad (in particular in developing countries), an automatic telephone-based customer support system, and so on. In all these cases it is necessary to access and interact with a system capable not only of understanding the user's spoken query, finding documents and presenting them as speech, but also of interacting with the user in order to clarify and specify his information need whenever this is not clear enough to proceed effectively with the search. This paper is concerned with the design issues and the architecture of a prototype of one such system. The paper is organised as follows. Section 2 introduces the difficult marriage between IR and speech. Section 3 gives the background of this work and explains its final objective: the Interactive Vocal Information Retrieval System.
Section 4 reports on the current state of the implementation of the prototype system, while section 5 briefly describes some interesting findings of the feasibility study.

## 2 Information Retrieval and Speech

Information Retrieval (IR) is the branch of computing science that aims at storing and allowing fast access to a large amount of multimedia information, such as text, images, and speech. An Information Retrieval System is a computing tool that enables a user to access information by its semantic content using advanced statistical, probabilistic, and/or linguistic techniques. Most current IR systems enable fast and effective retrieval of textual information or documents, in collections of very large size, sometimes containing millions of documents. The retrieval of multimedia information, on the other hand, is still an open problem. Very few IR systems capable of retrieving multimedia information by its semantic content have been developed; often multimedia information is retrieved by means of an attached textual description. The marriage between IR and speech is a very recent event. IR has been concerned for the last 50 years with textual documents and queries, and only recently has it become possible to talk about multimedia IR. Progress in speech recognition and synthesis and the availability of cheap storage and processing power have made possible what only a few years ago was unthinkable. The association between IR and speech has different possibilities: * textual queries and spoken documents; * spoken queries and textual documents; * spoken queries and spoken documents. In the large majority of current IR systems capable of dealing with speech, the spoken documents or queries are first transformed into their textual transcripts and then handled by the IR system with techniques derived from those used in normal textual IR.
The retrieval of spoken documents using a textual query is a fast emerging area of research (see for example ). It involves an efficient, more than effective, combination of the most advanced techniques used in speech recognition and IR. The increasing interest in this area of research is confirmed by the inclusion, for the first time, of a spoken document retrieval track in the TREC-6 conference . The problem here is to devise IR models that can cope with the large number of errors inevitably found in the transcripts of the spoken documents. Models designed for the retrieval of OCRed documents have proved useful in this context . Another problem is related to the fact that, although IR models can easily cope with the fast searching of large document collections, fast speech recognition of a large number of long spoken documents is a much more difficult task. Because of this, spoken documents are converted into textual transcripts off-line, and only the transcripts are handled by the IR system. The problem of retrieving textual documents using a spoken query may seem easier than the previous one, because of the smaller size of the speech recognition task involved. However, it is not so. While the incorrect or uncertain recognition of one instance of a word in a long spoken document can be compensated for by its correct recognition in other instances, the incorrect recognition of a word in a spoken query can have disastrous consequences. Queries are generally very short (there is an on-going debate about realistic query lengths: while TREC queries are on average about 40 words long, Web queries are only 2 words long on average; this recently motivated the creation in TREC of a “short query” track, to experiment with queries of more realistic length)
and the failure to recognise a query word, or worse, its incorrect recognition, will cause the system to miss a large number of relevant documents or to wrongly retrieve a large number of non-relevant documents. The retrieval of spoken documents in response to spoken queries is a very complex task and is more in the realm of speech processing than IR, although IR techniques could be useful. Speech recognition and processing techniques can be used to compare spoken words and sentences in their raw form, without the need to generate textual transcripts. We will not address this issue here.

## 3 The Interactive Vocal Information Retrieval System

In the second half of 1997, at Glasgow University, we started a project on the sonification of an IR environment. The project is funded by the European Union under the Training and Mobility of Researchers (TMR) scheme of the Fourth Framework. The main objective of the project is to enable a user to interact (e.g. submit queries, commands, and relevance assessments, and receive summaries of retrieved documents) with a probabilistic IR system over a low-bandwidth communication channel, such as a telephone line. An outline of the system specification is reported in Figure 1. The interactive vocal information retrieval system (IVIRS), resulting from the “sonification” of a probabilistic IR system, has the following components: * a vocal dialog manager (VDM) that provides an “intelligent” speech interface between user and IR system; * a probabilistic IR system (PIRS) that deals with the probabilistic ranking and retrieval of documents in a large textual information repository; * a document summarisation system (DSS) that produces a summary of the content of retrieved documents in such a way that the user will be able to assess their relevance to his information need; * a document delivery system (DDS) that delivers documents on request by the user via electronic mail, ftp, fax, or postal service.
It is important to emphasise that such a system cannot be developed simply with off-the-shelf components. In fact, although some components (DSS, DDS, and the Text-to-Speech module of the VDM) have already been developed in other application contexts, it is necessary to modify and integrate them for the IR task. The IVIRS prototype works in the following way. A user connects to the system using a telephone. After the system has recognised the user by means of a username and a password (to avoid problems in this phase we devised a login procedure based on keying in an identification number using a touch-tone keypad), the user submits a spoken query to the system. The VDM interacts with the user to identify the exact part of the spoken dialogue that constitutes the query. The query is then translated into text and fed to the PIRS. Additional information regarding the confidence of the speech recognisers is also fed to the PIRS. This information is necessary in order to limit the effects of wrongly recognised words in the query. Additionally, an effective interaction between the system and the user can also help to solve this problem: the system could ask the user for confirmation in case of an uncertain recognition of a word, asking him to re-utter the word or to select one of the possible recognised alternatives. The PIRS searches the textual archive and produces a ranked list of documents, and a threshold is used to identify a set of documents regarded as surely relevant (this threshold can be set in the most appropriate way by the user). The user is informed of the number of documents found to be relevant and can submit a new query or ask to inspect the documents found. Documents in the ranked list are passed to the DSS, which produces a short representation of each document that is read to the user over the telephone by the Text-to-Speech module of the VDM. The user can wait until a new document is read, ask to skip the document, mark it as relevant, or stop the process altogether.
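The confidence-weighted search step just described can be pictured with a minimal sketch. This is an illustration, not the project's implementation: the term weights, confidence values and threshold are invented for the example, and the scoring function is a bare sum of idf weights scaled by per-word recognition confidence.

```python
# Illustrative sketch: per-word recognition confidences down-weight uncertain
# query terms, and a score threshold selects the "surely relevant" subset of
# the ranked list. All numbers below are invented for the example.

def score_document(doc_terms, query, confidences, idf):
    """Sum the idf weights of matching query terms, each scaled by the
    speech recogniser's confidence in that word."""
    return sum(confidences[w] * idf.get(w, 0.0)
               for w in query if w in doc_terms)

def rank_and_threshold(docs, query, confidences, idf, threshold=1.0):
    """Rank documents by score; keep those above the user-settable threshold."""
    ranked = sorted(((score_document(t, query, confidences, idf), d)
                     for d, t in docs.items()), reverse=True)
    surely_relevant = [d for s, d in ranked if s >= threshold]
    return ranked, surely_relevant

# Toy example: "share" was recognised with low confidence, so it contributes
# little to the scores even though it has a fairly high idf weight.
idf = {"pension": 2.0, "fund": 1.2, "share": 1.5}
query = ["pension", "fund", "share"]
confidences = {"pension": 0.9, "fund": 0.8, "share": 0.3}
docs = {"d1": {"pension", "fund"}, "d2": {"share"}, "d3": {"weather"}}
ranked, sure = rank_and_threshold(docs, query, confidences, idf)
```

Down-weighting rather than discarding uncertain words keeps a poorly heard term from dominating the ranking while still letting it contribute when it does match.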
Marked documents are stored in the retrieved set and the user can proceed with a new query if he so wishes. A document marked as relevant can also be used to refine the initial query and find additional relevant documents by feeding it back to the PIRS. This relevance feedback process is also useful in case of wrongly recognised query words, since the confidence values of query words increase if they are found in relevant documents. This interactive process can go on until the user is satisfied with the retrieved set of documents. Finally, the user can ask for the documents in the retrieved set to be read in their entirety or sent to a known address by the DDS.

### 3.1 The Vocal Dialog Manager

The human-computer interaction performed by the VDM is not a simple process that can be handled by off-the-shelf devices. The VDM needs to interact in an “intelligent” way with the user and with the PIRS in order to understand completely and execute correctly the commands given by the user. In fact, while the technology available to develop the speech synthesis (Text-to-Speech) module is sufficient for this project (but see section 5.1), the technology available for the speech recognition (Speech-to-Text) module is definitely not. On one hand, the translation of speech into text is a much harder task than the translation of text to speech, in particular in the case of continuous-speech, speaker-independent, large-vocabulary, noisy-channel speech recognition. On the other hand, it is necessary to take into consideration the three forms of uncertainty that are introduced into the IR environment by adding a vocal/speech component: 1. the uncertainty related to the speech recognition process; 2. the uncertainty given by the word sense ambiguity present in natural language; 3. the uncertainty related to the use of the spoken query in the retrieval process.
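One way to picture how these three uncertainty sources might jointly affect a single query-term weight is the multiplicative sketch below. The combination rule, the one-over-senses ambiguity penalty, and the idf stand-in for retrieval uncertainty are all illustrative assumptions of ours, not a model proposed in the paper.

```python
# Illustrative only: combine (1) recognition confidence, (2) a word-sense
# ambiguity penalty, and (3) an idf-like retrieval weight into one number.
# The multiplicative form and all constants are assumptions for exposition.
import math

def term_weight(recognition_conf, n_senses, doc_freq, n_docs):
    ambiguity = 1.0 / n_senses                  # uncertainty 2: sense ambiguity
    idf = math.log(n_docs / doc_freq)           # uncertainty 3: retrieval value
    return recognition_conf * ambiguity * idf   # uncertainty 1: recognition

# "bank" heard clearly but ambiguous; "mortgage" heard poorly but specific.
w_bank = term_weight(0.95, n_senses=2, doc_freq=50, n_docs=1000)
w_mortgage = term_weight(0.60, n_senses=1, doc_freq=10, n_docs=1000)
```

Under these assumptions the unambiguous, rare word outweighs the clearly heard but ambiguous one, which is the kind of trade-off an integrated model would have to arbitrate.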
In order to deal with these different forms of uncertainty we need not only to develop an advanced retrieval model for the PIRS, but also to develop a model of the user-VDM-PIRS interaction that encompasses a language model, an interaction model and a translation model. This latter part of the project is still at an early stage of development and will not be presented here. In this context we also make use of the results of previous work in this area, although in rather different applications (see for example ). Currently we are building a model of the telephone interactions between users and PIRS by analysing the results of a study carried out using the “Wizard of Oz” technique. The Wizard of Oz technique is a way of simulating human-computer interfaces: unknown to the user, a human “wizard” performs some of the functions of the computer, such as responding to the user’s spoken input. The technique is commonly used where the technology for running the interface does not yet exist or is not yet sophisticated enough to be used in real time. Examples include experiments where the recognition vocabulary is within current capabilities (e.g. sequences of digits) but the recognition performance required is beyond current capabilities. The particular Wizard of Oz simulation we are currently using for the design of the VDM incorporates a statistical model of word recognition errors, whereby a realistic distribution of speech recogniser errors can be generated at any desired overall accuracy level. A limitation to the realism of this form of simulation is that recognition performance depends only on the content of the user’s input, and not on its quality (clarity of speaking, background noise, etc.) as it would with a real recogniser. This is particularly relevant in the case of word-spotting, where the recogniser is designed to pick out instances of keywords embedded in arbitrary speech.
Typically the accuracy of a real word-spotter is better for isolated keyword utterances than for embedded ones. To address this, a second-generation simulation method is currently being developed, in which the wizard’s input (giving the keyword content of the utterance) is combined with acoustic information extracted automatically from the speech signal. This will enable us to design appropriate error recovery strategies whenever one or more words in the spoken query are below a certain threshold of recognition. The VDM has two sub-components: a Speech-to-Text module and a Text-to-Speech module. The Speech-to-Text module is arguably the most important and the most problematic module of the VDM. Speech recognition has been progressing very rapidly in the last few years and results are improving day by day, in particular for speaker-dependent systems. There is already a number of commercially available systems that guarantee quite impressive performance once they have been properly trained by the user (e.g. Dragon NaturallySpeaking, IBM ViaVoice, etc.). The user also needs to teach the system to recognise words that are not part of the recogniser’s vocabulary. The situation is not so good with speaker-independent continuous speech recognition systems over the telephone, although a number of studies have achieved some acceptable results . In this context we do not intend to develop any new speech recognition system: this is a difficult area of research for which we do not have the necessary know-how. Instead, we make use of available state-of-the-art technology and try to integrate it into our system. The aim is to enable the PIRS to deal with the uncertainty related to the spoken query recognition process, integrating this uncertainty with the classical uncertainty intrinsic in the IR process.
Two strategies are currently being experimented with: * using the speech recogniser’s “confidence” in recognising a word; * merging the output of a number of speech recognisers to compensate for the errors of single speech recognisers and create a “combined confidence” in recognising a word. In section 5.2 we report some initial results on the use of these techniques. The Text-to-Speech module uses state-of-the-art technology in speech synthesis . We carried out a survey and an initial testing of available speech synthesis systems. In section 5.1 we report some initial results of an experiment into the user’s perception of the relevance of spoken documents. In the experiments carried out we used two different commercial speech synthesis systems.

### 3.2 The Probabilistic Information Retrieval System

The PIRS ranks the documents in the collection in decreasing order of their estimated probability of relevance to the user’s query. Past and present research has made use of formal probability theory and statistics in order to solve the problems of estimation . A schematic example of how a probabilistic IR system works is depicted in Figure 2. Currently there is no PIRS with speech input/output capabilities. Adding these capabilities to a PIRS is not simply a matter of fitting together a few components, since there needs to be some form of feedback between PIRS and VDM so that the speech input received from the user and translated into text can be effectively recognised by the PIRS and used. If, for example, a word in a spoken query is not recognised by the PIRS, but is very close (phonetically) to some other word the PIRS knows about and that is used in a similar context (information that the PIRS can get from a statistical analysis of word occurrences), then the PIRS could suggest that the user use this new word in the query instead of the unknown one.
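The word-substitution suggestion just described can be sketched as follows. This is an illustration, not the project's implementation: `difflib` string similarity is a crude stand-in for a real phonetic distance, the context-similarity check based on word occurrence statistics is omitted, and the 0.6 cutoff is an assumption.

```python
# Illustrative sketch of suggesting an in-vocabulary replacement for a query
# word the PIRS does not know. String similarity (difflib) stands in for a
# phonetic distance; the cutoff value is an assumption, not a tuned parameter.
import difflib

def suggest_known_word(unknown_word, vocabulary, cutoff=0.6):
    """Return the closest known word above the similarity cutoff, or None."""
    matches = difflib.get_close_matches(unknown_word, vocabulary,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

vocab = ["inflation", "deflation", "interest", "investment"]
suggestion = suggest_known_word("inflashun", vocab)
```

A production system would rank candidates by phonetic similarity (e.g. at the phone-lattice level) and by contextual fit, rather than by character overlap.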
Additionally, it is generally recognised in IR that improvements in effectiveness can be achieved by enhancing the interaction between system and user. One of the most advanced techniques used for this purpose is relevance feedback (RF). RF is a technique that allows a user to interactively express his information requirement by adapting his original query formulation with further information . This additional information is often provided by indicating some relevant documents among the documents retrieved by the system. When a document is marked as relevant, the RF process analyses the text of the document, picking out words that are statistically significant, and modifies the original query accordingly. RF is a good technique for specifying an information need, because it releases the user from the burden of having to think up words for the query and because it limits the effects of errors in the recognition of query words: instead, the user deals with the ideas and concepts contained in the documents. It also fits in well with the known human trait of “I don’t know what I want, but I’ll know it when I see it”. RF can also be used to detect words that were wrongly recognised by the VDM. If, for example, a query word uttered by the user never appears in documents that the user marks as relevant, while another, similarly spelled (or pronounced) word often occurs in them, then it is quite likely (and we can evaluate this probability) that the VDM was wrong in recognising that word and that the word really uttered by the user is the other one. The overall performance of the interactive system can also be enhanced using other similar forms of interaction between PIRS and VDM that are currently the object of study.

### 3.3 The Document Summarisation System

The DSS performs a query-oriented document summarisation aimed at stressing the relation between the document content and the query.
This enables the user to assess in a better and faster way the relevance of the document to his information need and to provide correct feedback to the PIRS on the relevance of the document. Query-oriented document summarisation attempts to concentrate the user’s attention on the parts of the text that possess a high density of relevant information. This emerging area of research has its origins in methods known as passage retrieval (see for example ). These methods identify and present to the user individual text passages that are more focussed towards particular information needs than the full document texts. The main advantage of these approaches is that they provide an intuitive overview of the distribution of the relevant pieces of information within the documents. As a result, it may be easier for users to decide on the relevance of the retrieved documents to their queries. The summarisation system employed in IVIRS is the one developed by Tombros and Sanderson at Glasgow University . The system is based on a number of sentence extraction methods that utilise information both from the documents of the collection and from the queries used. A thorough description of the system can be found in and will not be reported here. In summary, a document passes through a parsing system, and as a result a score for each sentence of the document is computed. This score represents the sentence’s importance for inclusion in the document’s summary. Scores are assigned to sentences by examining the structural organisation of each document (to distinguish words in the title or in other important parts of the document, for example), and by utilising within-document word frequency information. The document summary is then generated by selecting the top-scoring sentences and outputting them in the order in which they appear in the full document. The summary length is a fraction of the full document’s length.
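An illustrative reconstruction of this sentence-extraction scheme (not the Tombros-Sanderson code): sentences score by the within-document frequency of their words, with a fixed bonus for words that also appear in the title, and the top scorers are output in document order. The title-bonus value and default length parameters are our assumptions.

```python
# Illustrative sketch of frequency-plus-title sentence extraction. The title
# bonus (2) and the default ratio/max-sentence limits are assumed values.

def summarise(title, sentences, ratio=0.15, max_sentences=5):
    # Within-document word frequencies (naive whitespace tokenisation).
    freq = {}
    for s in sentences:
        for w in s.lower().split():
            freq[w] = freq.get(w, 0) + 1
    title_words = set(title.lower().split())

    def score(sentence):
        return sum(freq.get(w, 0) + (2 if w in title_words else 0)
                   for w in sentence.lower().split())

    # Summary length: a fraction of the document, capped at max_sentences.
    k = max(1, min(max_sentences, round(len(sentences) * ratio)))
    top = sorted(sorted(range(len(sentences)),
                        key=lambda i: -score(sentences[i]))[:k])
    return [sentences[i] for i in top]  # top sentences, in document order

title = "Oil prices rise"
sentences = [
    "Oil prices rose sharply today.",
    "The weather was mild.",
    "Analysts link oil prices to supply fears.",
]
summary = summarise(title, sentences)
```

A query-biased variant would add a further bonus for query-term occurrences, which is what ties the summary to the user's information need.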
For the documents used in our experimentation the summary length was set at 15% of the document’s length, up to a maximum of five sentences.

### 3.4 The Document Delivery System

The DDS performs the delivery of all or parts of the document(s) requested by the user. The user can decide the mode and format of the document delivery; this information is usually stored in a user profile, so that the user does not need to give it every time he uses the system. Documents can be delivered by voice (read in their entirety to the user through the telephone), electronic mail, postal service, or fax; if delivered by electronic mail, a number of different document formats are available, such as PDF, PostScript, RTF, or ASCII.

## 4 Prototype Implementation

The implementation of the prototype system outlined in the previous sections requires, as a first step, a careful choice of some already existing software components: a speech recognition system, a speech synthesis system, a probabilistic IR system, and a document summarisation system. This calls for a survey of the state of the art in several different areas of research, some of which are familiar to us, while others are new to us. A second step involves the development of a model for the VDM and of its interaction with the other components. The prototype implementation of the overall system requires careful tuning and testing with different users and in several different conditions (noisy environment, foreign speaker, etc.). The prototype implementation of IVIRS is still in progress. A “divide et impera” approach is currently being followed, consisting of dividing the implementation and experimentation of IVIRS into the parallel implementation and experimentation of its different components. The integration of the various components will be the last stage. Currently we have implemented and experimented with the DSS, the Text-to-Speech and Speech-to-Text modules of the VDM, and the DDS.
We are currently developing the PIRS and the VDM.

## 5 Initial Evaluation Results

In this section we report on the initial results found experimenting with the DSS and with the Text-to-Speech and Speech-to-Text modules of the VDM. This constitutes part of a feasibility study of the system. In the pilot implementation of the IVIRS prototype we use a collection of newspaper articles, in particular the TREC Wall Street Journal collection.

### 5.1 Experimentation of the Document Summarisation System and Text-to-Speech Module

One of the underlying assumptions of the design and development of the IVIRS system is that a user will be able to assess the relevance of a retrieved document by hearing a synthesised voice reading a brief description of its semantic content over a telephone line. This is essential for an effective use of the system. In order to test the validity of this assumption we carried out a series of experiments with the DSS and Text-to-Speech module of the IVIRS. The aim of this experimentation was to investigate the effect that different forms of presentation of document descriptions have on users’ perception of the relevance of a document. In a previous study, Tombros and Sanderson used document titles and automatically generated, query-biased summaries as document descriptions, and measured user performance in relevance judgements when the descriptions were displayed on a computer screen and read by the users. The results from that study were used in this experiment, and compared to the results obtained when users listen to the document descriptions instead of reading them. Three different modes of auditory presentation are employed in our study: document descriptions are read by a human to the subjects in person, read by a human to the subjects over the telephone, and finally read by a speech synthesis system to the subjects over the telephone.
The objective was that by manipulating the level of the independent variable of the experiment (the form of presentation), we could examine the value of the dependent variable of the experiment (the user performance in relevance judgements). We also sought to ensure that any variation in user performance between the experimental conditions could be attributed only to the change of level of the independent variable. In order to be able to make such a claim, we had to ensure that the so-called “situational variables” (e.g. background noise, equipment used, experimenter’s behaviour) were held constant throughout the experimental procedure. Such variables could introduce bias in the results if they systematically changed from one experimental condition to another . In order to be able to use the experimental results reported in , the same task was introduced in our design: users were presented with a list of documents retrieved in response to a query, and had to identify as many documents relevant to that particular query as possible within 5 minutes. The information that was presented for each document was its title and its automatically generated, query-oriented description. Moreover, we used exactly the same set of queries (50 randomly chosen TREC queries), set of retrieved documents for each query (the 50 top-ranked documents), and document descriptions as in . Queries were randomly allocated to subjects by means of a draw, but since each subject was presented with a total of 15 queries (5 queries for each condition) we ensured that no query was assigned to a specific user more than once. A group of 10 users took part in the experimental procedure. The population was drawn from postgraduate students in computer science. All users performed the same retrieval task described in the previous paragraph under the three different experimental conditions.
The experiment involved the presentation of document descriptions to subjects in three different forms, all of which were of an auditory nature. In two of the experimental conditions the same human read the document descriptions to each subject, either by being physically in the same room (though not directly facing the subject), or by being located in a different room and reading the descriptions over the telephone. In the last experimental condition a speech synthesiser read the document descriptions to the user over the telephone. User interaction with the system was defined in the following way: the system would start reading the description of the top-ranked document. At any point in time the user could stop the system and instruct it to move to the next document, or instruct it to repeat the current document description. If none of the above occurred, the system would go through the current document description and, upon reaching its end, proceed to the next description. Table 1 reports the results of the user relevance assessment in terms of precision, recall and average time for all four conditions: on-screen description (S), read description (V), description read over the telephone (T), and computer-synthesised description read over the telephone (C). These results show that users in condition S performed better in terms of precision and recall and were also faster. Recall and average time slowly decreased from S to C, although some of these differences are not statistically significant (e.g. the average times of V and T). We were surprised to notice that precision was slightly higher for condition T than for conditions V or C. Users tended to concentrate more when hearing the descriptions read over the telephone than when the same person read them in the same room, in front of them, but this extra concentration was no longer enough when the quality of the voice deteriorated.
Nevertheless, the difference in precision between conditions S and C was not so large (only about 5%) as to create unsolvable problems for a telephone-based IR system. A difference that was certainly significant was in the average time taken to assess the relevance of one document. The difference between conditions S and C was quite large and enabled a user to assess on average, in the same amount of time (5 minutes), 70% more documents in condition S than in condition C (22 documents instead of 13). A sensible user would have to evaluate whether it is more cost-effective, in terms of time connected to the service, to access the system using a computer and modem and look at the documents on the screen, or to access the system using a telephone. Nonetheless, differences in the average time taken to assess the relevance of one document were very subjective: we noticed that some users were slow whatever the condition, while others were always fast. These preliminary results enable us to conclude that an IVIRS system is indeed feasible and that, provided we solve the issues related to the correct recognition of the user query, we can expect to develop a system that could be useful.

### 5.2 Experimentation of the Speech-to-Text Module

In we presented the results of the experimentation of a number of techniques for the retrieval of spoken documents. In particular, two techniques were tested, both attempting to use confidence values generated in the speech recognition process to deal with the uncertainty related to the spoken words in the documents. In one set of experiments we used confidence values generated by the speech recogniser; in another set of experiments we generated these confidence values by merging the transcripts of a number of different recognisers.
While the first strategy was not successful, due to our ignorance of the way the confidence values are generated by a speech recogniser, the simple strategy of merging the transcripts of spoken documents produced by different recognisers showed more promise. In the context of the SIRE project we attempted to use the same technique of merging different transcripts with spoken queries. The challenge here is due to the fact that queries are usually much shorter than documents, and errors in the recognition of words in queries cause more damage to retrieval effectiveness than errors in the recognition of words in documents. Although this experimentation is still going on, the initial results were not encouraging. We experimented with merging the transcripts of spoken queries produced by two and by three different speech recognition systems. The improvement in word recognition and confidence obtained over the use of a single speech recognition system, although considerable, was not enough to increase the level of effectiveness of a PIRS. The major problem was caused by the inclusion in the query transcripts of wrongly recognised words. Although the event of a word being wrongly recognised by more than one recogniser was rare, so that wrongly recognised words usually had low confidence levels, the inclusion of any such word in the query transcript had disastrous effects on the effectiveness of the retrieval process. The ranking produced by the IR system was highly influenced by the wrongly recognised words, since these were quite often rare words and therefore had high indexing weights. Instead, the effect of missing a word from the query transcript was not detrimental, but this result may not be entirely fair, as very few of the queries we used had words outside the recognisers’ vocabulary.
More “realistic” queries containing many proper nouns might produce different results and require an alternative approach: for example, a recogniser using both word and sub-word unit recognition. An alternative strategy could consist of using a PIRS employing a query expansion technique to expand, from a text corpus, unrecognised query words with words in the recognisers’ vocabulary (using, for example, Local Context Analysis ). We are currently experimenting with the latter technique. 

## 6 Conclusions 

In this paper we outlined the design of an interactive vocal IR system. We also reported on some initial experimentation which highlights the complexity of the implementation of such a system. The work presented here is still in progress and the full implementation of the system is under way. We expect to have a working prototype system very soon. The evaluation of such a system and, in particular, of the user interaction will constitute an exciting area of research. 

## Acknowledgements 

The author is currently supported by a “Marie Curie” Research Fellowship from the European Union. The experimentation reported in section 5.1 was carried out with the help of Anastasios Tombros.
## 1 Introduction 

In two-dimensional models, exact solutions and Coulomb gas methods have led to the discovery of a large collection of universality classes, including some that depend on a continuously variable parameter (see e.g. Refs. ), such as the Baxter model, the critical and tricritical Potts models, and the critical O($`n`$) model. However, such results are not known for the tricritical points of the O($`n`$) model with general $`n`$, in spite of the fact that there are no obvious reasons why such a family of tricritical points should be absent. Tricritical behavior is known only for discrete values of $`n`$: the so-called theta point at $`n=0`$ and the Blume-Capel tricritical point at $`n=1`$, where the model intersects with the $`q=2`$ Potts model, so that the results for Potts tricriticality apply. The branch of $`q`$-state Potts critical points applies to a limited range of $`q`$ values. Its analytic continuation beyond the end point at $`q=4`$ appears to cover the same range of $`q`$, and is then found to describe a branch of tricritical points. In the case of the O($`n`$) model on the honeycomb lattice, the situation is similar in a sense: two branches of critical behavior are known, which are the analytic continuation of each other, and connect at the upper boundary at $`n=2`$. However, the analytic continuation of the critical branch does not describe a phase transition: instead, it describes the low-temperature O($`n`$) phase, which is still critical in the sense that the correlations decay algebraically for general $`n`$. Several other branches of critical behavior are known for O($`n`$) models with additional interactions; see e.g. Ref. for an analysis of the square-lattice O($`n`$) model. One of these branches, called branch 3, marks the boundary between a range of ordinary O($`n`$) transitions and a first-order range, just as expected for tricritical points.
However, this higher critical point of branch 3 is accompanied by the freezing out of Ising-like degrees of freedom and does not match the known tricritical behavior at $`n=0`$ and 1. Another family of critical points, called branch 0, describes a dilute form of the O($`n`$) model. It includes a point at $`n=0`$ that belongs to the universality class of the theta point. In this sense branch 0 is a generalization of the theta point to other values of $`n`$. Thus it could be a logical candidate for a class of tricritical points, but the $`n=1`$ point of branch 0 represents the F model and not the tricritical Ising model. This somewhat puzzling situation raises some questions, such as: what precisely is an O($`n`$) tricritical point, and what sort of connection exists between the theta point and the Ising tricritical point? As for the first question, we adhere to the accepted picture at $`n=0`$ and $`n=1`$ that a gradual increase of the activity of the vacancies will eventually drive the O($`n`$) phase transition first order; the point where the first-order line meets the line of continuous transitions is considered a tricritical point. As for the second question, we may look for a regime of tricritical points that interpolates between a theta point and an Ising tricritical point. In the absence of exact results we use a numerical method, namely finite-size scaling of transfer-matrix results, in order to explore such tricritical points for general $`n`$. Thus we select a model for our study that includes vacancies explicitly. It includes the theta point defined by Duplantier and Saleur and the honeycomb O($`n`$) model without dilution as special cases. For $`n=1,2,\mathrm{\dots }`$ this model is equivalent to an O($`n`$) symmetric spin model. The model and the numerical methods are described in Section 2. The results and their interpretation are presented in Section 3, and our conclusions and outlook are summarized in Section 4.
## 2 The model and numerical procedures 

We define a dilute O($`n`$) loop model in which the loops run on the edges of the honeycomb lattice. Each edge covered by a loop has weight $`w`$, and each loop has a weight $`n`$. Dilution is represented by means of face variables: the elementary hexagons of the lattice are ‘vacant’ with weight $`y`$ and ‘occupied’ with weight $`1-y`$. The loops are forbidden to touch any vacant faces. The occupied faces form a lattice $`ℒ`$ which is any subset of the dual (triangular) lattice. The loop configurations of this model are represented by graphs $`𝒢`$ consisting of non-intersecting polygons, which avoid the edges of the vacant hexagons. Thus the partition function is $$Z=\sum _ℒ\sum _{𝒢|ℒ}y^{N_v}(1-y)^{N-N_v}w^{N_w}n^{N_l}$$ (1) where $`N_v`$ is the number of empty faces specified by the vacancy configuration $`ℒ`$, and $`N`$ is the total number of faces. The number $`N_w`$ of edges covered by $`𝒢`$ is equal to the number of vertices visited by a polygon, and $`N_l`$ is the number of loops. The second summation is only over those graphs $`𝒢`$ allowed by $`ℒ`$. For the construction of the transfer matrix we choose the usual geometry of a model wrapped on a cylinder, such that one of the lattice edge directions runs parallel to the axis of the cylinder. The finite size $`L`$ of the model is taken to be the number of hexagons spanning the cylinder; the unit of length is thus $`\sqrt{3}`$ times the nearest-neighbor distance. The transfer-matrix index represents the loop connectivity, i.e. the way in which the dangling bonds at the end of the cylinder are mutually connected, and the vacancy distribution on the last row of the lattice. The introduction of vacancies leads to an increase of the number of possible values of the transfer-matrix index for a given system size. Therefore the present calculations are restricted to somewhat smaller system sizes than e.g. those investigated in Ref. .
Our numerical analysis focuses in particular on the asymptotic long-distance behavior of the O($`n`$) spin-spin correlation function. In the language of the O($`n`$) loop model, this correlation function between two points separated by a distance $`r`$ assumes the form $`g(r)=Z^{\prime }/Z`$, where $`Z`$ is defined by Eq. (1) and $`Z^{\prime }`$ by a similar sum on graphs $`𝒢^{\prime }`$ which include, apart from a number of closed loops, also a single segment which connects the two correlated points. We take these points far apart in the length direction of the cylinder. For the calculation of $`Z^{\prime }`$ we use the ‘odd’ transfer matrix built on connectivities that describe, in addition to a number of pairwise connected points, also a single point representing the position of the unpaired loop segment. The ‘even’ transfer matrix uses only connectivities representing pairwise coupled points. Let the largest eigenvalue of the even transfer matrix for a system of size $`L`$ be $`\mathrm{\Lambda }_L^{(0)}`$, and that of the corresponding odd transfer matrix $`\mathrm{\Lambda }_L^{(1)}`$. These eigenvalues determine the magnetic correlation length $`\xi _h(L)`$ along a cylinder of size $`L`$: $$\xi _h^{-1}(L)=(2/\sqrt{3})\mathrm{ln}(\mathrm{\Lambda }_L^{(0)}/\mathrm{\Lambda }_L^{(1)})$$ (2) where the geometric factor $`2/\sqrt{3}`$ is the ratio between the small diameter of an elementary face $`(\sqrt{3})`$ and the length added to the cylinder by each transfer-matrix multiplication (3/2). The present calculations of the relevant eigenvalues of the even and odd transfer matrices are restricted to translationally invariant eigenstates. The magnetic correlation length could thus be calculated numerically for finite sizes $`L`$ up to 10. Its dependence on the vacancy weight, the bond weight, and $`L`$ allows the exploration of the phase diagram. Suppose that the parameters of the model are near a renormalization fixed point, at a distance $`t`$ along some temperature-like field direction.
The finite-size scaling behavior of the magnetic correlation length is $$\xi _h^{-1}(t,L)=L^{-1}\stackrel{~}{\xi }_h^{-1}(tL^{Y_t})$$ (3) For sufficiently small $`t`$ we may expand the scaling function $`\stackrel{~}{\xi }_h`$: $$\xi _h^{-1}(t,L)=L^{-1}\left(\stackrel{~}{\xi }_h^{-1}(0)+atL^{Y_t}+\mathrm{\dots }\right)$$ (4) where $`a`$ is an unknown constant. For conformally invariant models, $`\stackrel{~}{\xi }_h^{-1}(0)=2\pi X_h`$ where $`X_h`$ is the magnetic scaling dimension of the pertinent fixed point. This leads us to define the scaled gap $`X_h(t,L)\equiv L/(2\pi \xi _h(t,L))`$ which scales as $$X_h(t,L)=X_h+a^{\prime }tL^{Y_t}+\mathrm{\dots }$$ (5) where further contributions may be due to irrelevant scaling fields, or a second relevant temperature-like field if the system is close to a tricritical point. In actual calculations, we obtain the scaled gaps as a function of $`y`$ and $`w`$, and their interpretation in terms of scaling fields involves one relevant field in the case of an ordinary critical point, and two in the case of a tricritical point. In order to get rid of the most relevant field we may e.g. fix $`y`$ to some value and solve for $`w`$ in $$X_h(y,w,L)=X_h(y,w,L+1)$$ (6) The solutions, which are denoted $`w(y,L)`$, are expected to approach the actual critical point as $`w(y,L)=w(y,\mathrm{\infty })+cL^{Y_{t,2}-Y_t}`$, where $`Y_t`$ is the renormalization exponent of the most relevant temperature field and $`Y_{t,2}`$ the second largest temperature-like exponent. Thus, by solving Eq. (6) for a range of $`L`$ values we obtain a series of estimates $`w(y,L)`$ of the critical point. In the case of an ordinary critical point, the irrelevance of other temperature-like fields ($`Y_{t,2}<0`$) implies that the values $`X_h(y,w,L)`$ taken at the solutions of Eq. (6) for large $`L`$ converge to $`X_h`$ as $$X_h(y,w(y,L),L)-X_h\propto L^{Y_{t,2}}$$ (7) By taking $`w(y,L)`$ we have effectively removed the most relevant field.
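The numerical procedure around Eqs. (2)-(6) can be sketched as follows. The eigenvalue function is a hypothetical stand-in constructed to reproduce the scaling form of Eq. (5) (we have no closed form for the actual transfer-matrix spectrum); the scaled gap and the bisection on Eq. (6) follow the text:

```python
import math

def scaled_gap(lam0, lam1, L):
    """X_h(L) = L / (2*pi*xi_h(L)), with xi_h from Eq. (2)."""
    inv_xi = (2 / math.sqrt(3)) * math.log(lam0 / lam1)
    return L * inv_xi / (2 * math.pi)

def eigenvalues(w, L, Xstar=0.075, wc=1.115, Yt=1.5):
    """Hypothetical stand-in for the transfer-matrix eigenvalues,
    constructed so that the scaled gap obeys the form of Eq. (5):
    X_h(w, L) = Xstar + (w - wc) * L**Yt  (illustrative numbers only)."""
    xh = Xstar + (w - wc) * L**Yt
    lam0 = 1.0
    lam1 = math.exp(-2 * math.pi * xh * math.sqrt(3) / (2 * L))
    return lam0, lam1

def solve_eq6(L, lo=0.5, hi=2.0, tol=1e-10):
    """Bisection for the w that solves X_h(w, L) = X_h(w, L+1), Eq. (6)."""
    def f(w):
        return (scaled_gap(*eigenvalues(w, L), L)
                - scaled_gap(*eigenvalues(w, L + 1), L + 1))
    a, b = lo, hi
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:   # sign change in [a, m]
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

for L in (4, 6, 8):
    w = solve_eq6(L)
    print(L, round(w, 6), round(scaled_gap(*eigenvalues(w, L), L), 6))
```

In this toy case the solutions land on wc for every L, because the stand-in X_h has no subleading corrections; with real eigenvalues the solutions drift with L, which is exactly the convergence one monitors.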
This formula still holds when $`y`$ has a value such that $`(y,w(y,\mathrm{\infty }))`$ is in the vicinity of a tricritical point, but then the second temperature exponent is relevant: $`Y_{t,2}>0`$. In that case, the values $`X_h(y,w,L)`$ taken at the solutions of Eq. (6) appear to diverge from the fixed-point value of $`X_h`$, i.e. the tricritical magnetic exponent. These considerations allow us to locate a tricritical point, if present. Convergent behavior of $`X_h(y,w(y,L),L)`$ corresponds to ordinary critical points, or to first-order transitions, to which similar arguments apply. Divergent behavior, i.e. when the $`X_h(y,w(y,L),L)`$ move away from the tricritical value of $`X_h`$ as $`L`$ increases, reveals a tricritical point. Only when $`y`$ is set precisely at its tricritical value will $`X_h(y,w(y,L),L)`$ converge. 

## 3 Numerical results 

A superficial exploration of the O(0) model of Eq. (1) by means of transfer-matrix calculations using small system sizes indicated that it has a phase diagram similar to that found in Ref. : the low-temperature and high-temperature phases are separated by a line of phase transitions which includes a range of ordinary O(0) critical points, separated from a range of first-order transitions by an exactly known theta point. Furthermore, this preliminary work indicated that the phase behavior does not change in a qualitative sense in the range $`0\le n\le 1`$. Encouraged by this outcome we performed more extensive analyses for $`n=0`$, $`\frac{1}{2}`$, and 1. We obtained solutions of Eq. (6) for fixed values $`y=0`$, 0.1, $`\mathrm{\dots }`$, 0.8 using system sizes up to $`L=10`$. The results for $`X_h(y,w(y,L),L)`$ are shown in Figs. 1-3. For small $`y`$ the data tend to converge to the known magnetic dimensions for the O($`n`$) critical points, indicated by dashed lines.
For large $`y`$ values the data tend to converge to a much lower value of $`X_h`$, which reflects not only the expected behavior near a discontinuity fixed point but also the algebraic nature of the dense O($`n`$) phase. For intermediate $`y`$ we observe the behavior formulated above for the vicinity of a tricritical point: the data indicate that there exists a special value of $`y`$ for which the $`X_h(y,w(y,L),L)`$ converge to a constant level which is different from the large- and small-$`y`$ asymptotes. For $`n=0`$ (see Fig. 1) the tricritical point is located at $`y=\frac{1}{2}`$; the numerical data coincide with the exact tricritical value $`X_h=0`$ for all $`L`$. For $`n=\frac{1}{2}`$ we do not know the exact location of the tricritical point and its associated magnetic exponent, but from the data in Fig. 2 we conclude that it is located near $`y=0.54`$, $`w=1.058`$, and that $`X_h\approx 0.036`$ (dotted line). For $`n=1`$ we estimate the tricritical point at $`y=0.58`$, $`w=1.115`$. The finite-size data (see Fig. 3) are in good agreement with the exactly known tricritical value $`X_h=3/40`$, which is indicated by a dotted line. As expected, for increasing $`L`$ the data move away from that line. The solutions $`w(y,L)`$ seem to converge well with increasing $`L`$. The phase transitions thus found are shown as data points in the phase diagrams of Figs. 4-6, which apply to $`n=0`$, $`\frac{1}{2}`$ and 1, respectively. The estimated errors are much smaller than the symbol sizes. For $`y=0`$ our numerical procedures accurately reproduce the exact critical points given in Ref. : $`w=0.54120\mathrm{\dots }`$ for $`n=0`$, $`w=0.55687\mathrm{\dots }`$ for $`n=\frac{1}{2}`$, and $`w=0.57735\mathrm{\dots }`$ for $`n=1`$. These exact critical points are shown as black circles. For $`n=0`$ (Fig. 4) the exactly known tricritical point (black square) coincides with the data point for $`y=\frac{1}{2}`$.
For $`n=\frac{1}{2}`$ and $`n=1`$ the estimated tricritical points are shown as black squares. The data points in each of Figs. 4-6 are connected by a line of phase transitions that divides into a continuous part (dashed line) and a first-order part (full curve). These curves do not represent any further data but serve only as a guide to the eye. 

## 4 Conclusion and outlook 

The numerical analysis of the O($`n`$) model with vacancies demonstrates that one can interpolate between the known tricritical points at $`n=0`$ and $`n=1`$. These two points are thus not isolated cases, but special cases of generic O($`n`$) tricriticality. In spite of the apparent difficulty of finding a class of tricritical points analytically, the existence of such a class is obvious. Forthcoming numerical analysis will show whether this family of tricritical points covers the whole range $`-2\le n\le 2`$ for which analytical results for the ordinary O($`n`$) critical point are known. At present we see no obvious reason for a negative answer. The tricritical exponents remain to be determined; our present work contributes only $`X_h\approx 0.036`$ as a new result for the O($`\frac{1}{2}`$) model. This determines the decay of the magnetic correlation function with distance $`r`$ according to $`r^{-\eta }`$ with $`\eta =2X_h`$. As demonstrated e.g. in Ref. , several other exponents can be determined, such as the temperature dimension $`X_t`$ and magnetic exponents associated with different types of interfaces. A further characterization of the family of O($`n`$) tricritical points will also require a determination of the conformal anomaly. While this is possible in principle, it remains to be seen whether the inaccuracies involved in the determination of the tricritical points will allow the emergence of a clear picture.
The present calculations are still rather limited in magnitude; only a few hours of CPU time on a few Silicon Graphics workstations were used, which we found to be sufficient for the present purposes. The use of larger computer systems will thus open the possibility to explore significantly larger system sizes $`L`$, and so help to conduct more detailed investigations. We conclude with a few remarks on the relation between O($`n`$) tricriticality and the percolation of the vacancies. For $`n=0`$ the vanishing loop weight enforces a perfect loop vacuum on the left-hand side of the line of phase transitions (Fig. 4). Thus site percolation on the triangular lattice applies: the vacancy percolation line lies at $`y=\frac{1}{2}`$ and ends at the tricritical point. Once the density of the vacancies has increased above the percolation threshold, there is no way in which the loop configurations can reach criticality, and the transition becomes discontinuous. We thus expect that, also for $`n\ne 0`$, the vacancy percolation line connects to the tricritical point. However, for $`n>0`$ the loop vacuum is no longer perfect, so that the percolation threshold moves to $`y>\frac{1}{2}`$, in accordance with the estimated tricritical values of $`y`$. Acknowledgements: We are indebted to Murray Batchelor, whose contributions to Ref. were also valuable for the present work. This research is supported in part by the FOM (‘Stichting voor Fundamenteel Onderzoek der Materie’) which is financially supported by the NWO (‘Nederlandse Organisatie voor Wetenschappelijk Onderzoek’).
# The ROSAT X-ray Background Dipole 

## 1 Introduction 

According to the paradigm that the X-ray background (XRB) originates mainly from sources at redshifts $`1<z<3`$ (eg Shanks et al. 1991), it should provide a means of measuring the well established solar motion with respect to the cosmic rest frame, defined by the Cosmic Microwave Background (CMB), towards $`l=264^{\circ }`$, $`b=48^{\circ }`$ (cf. Lineweaver et al 1996). An imprint of this reflex motion should be a dipole pointing towards the direction of the CMB dipole (Compton-Getting effect). The available all-sky X-ray maps (HEAO-1 and ROSAT) are composed of X-ray counts originating not only from such distant sources but also from local extragalactic sources ($`z<0.1`$). These gravitating extragalactic objects, which cause our peculiar motion with respect to the cosmic rest frame, emit X-rays, and therefore their dipole should also point towards the CMB dipole. This has been demonstrated for the case of AGNs (Miyaji & Boldt 1994) and of X-ray clusters (Lahav et al 1989; Plionis & Kolokotronis 1998). Therefore, the dipole pattern of the XRB results from at least two effects: (a) the motion of the observer with respect to the XRB, which in the context of the cosmic-ray background was first discussed by Compton & Getting (1935), and (b) the X-ray emission of the extragalactic objects whose gravitational field causes the observer's motion with respect to the cosmic rest frame. The relative contributions of these effects have been analytically estimated by Lahav, Piran & Treyer (1997) and were found to be of the same order of magnitude. The dipole anisotropy of the XRB has been measured in hard X-rays (2-10 keV) using UHURU (Protheroe, Wolfendale & Wdoczyk 1980) and HEAO-1 data (Shafer & Fabian 1983, Jahoda 1993).
The resulting dipole was found to point towards the general direction of the CMB dipole but with a large uncertainty: the 90 per cent confidence levels quoted by Shafer & Fabian (1983) cover 12 per cent of the whole sky. The use of the ROSAT all-sky survey can extend these studies to soft energies with higher sensitivity and angular resolution. Kneissl et al. (1997) presented a cross-correlation of the COBE DMR and the ROSAT all-sky survey maps. However, their attempt to measure the extragalactic dipole was hampered by the contamination of the Galactic emission, which is appreciable, especially at low Galactic latitudes, even in the hard ROSAT band. They concluded that proper modelling of the Galactic contribution is necessary in order to obtain a measurement of the dipole. In this letter, we attempt to model the diffuse Galactic component using a simple disk model. After subtracting our Galaxy model, we estimate the cosmological X-ray dipole using the ROSAT all-sky survey hard maps (1.5 keV) of Snowden et al. (1995). 

## 2 The ROSAT all-sky survey data 

We use the ROSAT all-sky survey maps at a mean energy of 1.5 keV (PSPC PI channels 91-201). These maps are now publicly released and are described in detail in Snowden et al. (1995). The maps cover $`\sim 98`$ per cent of the sky with a resolution of 2 degrees. Point sources detected in the all-sky survey are included in the maps. The particle background, scattered solar X-ray background and other non-cosmic background contamination components have been removed (see Snowden et al. 1994, Snowden et al. 1995). The X-ray emission in the 1.5 keV band is mainly extragalactic: Hasinger et al. (1998) have resolved about 70 per cent of the background at these energies into discrete extragalactic sources. However, there is some contamination, due to the poor energy resolution of the ROSAT PSPC, from a Galactic component.
This is well fitted by a Raymond-Smith spectrum with a temperature of $`\sim 0.17`$ keV, and may be associated with the Galaxy halo (see Wang & McCray 1993, Gendreau et al. 1995, Hasinger 1996, Pietz et al. 1998). We note that at higher energies (3-60 keV) there is evidence for an even harder Galactic component with a bremsstrahlung spectrum of 9 keV (Iwan et al. 1982); however, this is expected to contribute less than one per cent in the ROSAT 1.5 keV band. Moreover, the 1.5 keV maps of Snowden et al. (1995) show some extended features superimposed on the extragalactic and the diffuse hard Galactic components (see Snowden et al. 1995), mainly originating from nearby supernova remnants (eg the North Polar Spur, the Cygnus superbubble). All the above local features must be subtracted before deriving the extragalactic X-ray dipole. 

### 2.1 Modelling the Galactic emission 

The derivation of a detailed Galactic emission model requires observations in many wavebands and is outside the scope of this paper; detailed modelling of the Galactic halo is discussed in Iwan et al. (1982), Nousek et al. (1982) and Pietz et al. (1998). Here instead, we attempt to make a rough model of the Galactic contamination in our energy band. We model the diffuse Galactic components with a finite-radius disk with an exponential scale height (eg Iwan et al. 1982), which provides a good description of the Galactic component at both soft (0.75 keV) and hard energies (2-60 keV).
This is given by: $$𝒞(l,b)=𝒞_b\left[1+\frac{Eh}{\mathrm{sin}|b|}\left(1-e^{-f(l,R_d)\mathrm{tan}|b|/h}\right)\right]$$ (1) where $`f(l,R_d)=\mathrm{cos}l+\sqrt{R_d^2-\mathrm{sin}^2l}`$, with $`𝒞`$ the total X-ray intensity, $`𝒞_b`$ the average extragalactic component, in units of $`10^{-6}`$ $`\mathrm{cts}\mathrm{s}^{-1}\mathrm{arcmin}^{-2}`$, $`E`$ the fraction of the total X-ray emission which is due to the Galaxy, and $`h`$ and $`R_d`$ the disk scale height and disk radius, respectively, both in units of 10 kpc (the galactocentric distance of the Sun). We exclude from the fit the regions of the most apparent extended emission features: the bulge and the North Polar Spur (ie. $`-40^{\circ }<b<75^{\circ }`$ and $`300^{\circ }\lesssim l\lesssim 30^{\circ }`$, eg Snowden et al. 1997) as well as the Galactic plane strip (with $`|b|<20^{\circ }`$ or $`30^{\circ }`$). Although it is unknown whether there is some small residual bulge emission outside the above excised region, our rough model should provide a good first-order approximation to the Galactic halo emission. Furthermore, we mask the most apparent ”local” extragalactic features, replacing them with a homogeneous mean count: a $`4^{\circ }`$ radius region around the Virgo cluster ($`l,b\sim 287^{\circ },75^{\circ }`$) and a $`10^{\circ }`$ radius region around the Magellanic Clouds ($`l,b\sim 278^{\circ },-32^{\circ }`$). In the minimization procedure we weight each pixel by $`1/\sigma `$, where $`\sigma =(\sigma _I^2+\sigma _P^2)^{1/2}`$, with $`\sigma _I^2`$ the variance of the X-ray ROSAT counts due to the intrinsic extragalactic fluctuations and $`\sigma _P^2`$ the variance due to Poisson count statistics. We estimate $`\sigma _I`$ excluding from the map the North Polar Spur and the $`|b|<45^{\circ }`$ regions and find $`\sigma _I\sim 0.27𝒞`$. Starting from different initial guesses of the input parameters, the minimization procedure does not reach a unique minimum, although the reduced $`\chi ^2`$ is around one and the output model parameters are closely clustered. This suggests the existence of a broad and shallow minimum.
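Equation (1) and the 1/σ weighting are easy to transcribe directly; in the sketch below the parameter values are illustrative only (not the fitted ones), and the Poisson variance is approximated by the observed count, a simplification:

```python
import math

def disk_model(l_deg, b_deg, Cb, E, h, Rd):
    """Eq. (1): total 1.5 keV intensity from a finite exponential disk
    plus an isotropic extragalactic component Cb.
    l_deg, b_deg : Galactic coordinates in degrees (b != 0)
    h, Rd        : scale height and disk radius in units of 10 kpc
    """
    l = math.radians(l_deg)
    b = math.radians(b_deg)
    f = math.cos(l) + math.sqrt(Rd**2 - math.sin(l)**2)
    sb = math.sin(abs(b))
    return Cb * (1 + (E * h / sb) * (1 - math.exp(-f * math.tan(abs(b)) / h)))

def chi2(data, params, sigma_I_frac=0.27):
    """1/sigma-weighted chi^2, with sigma^2 = sigma_I^2 + sigma_P^2;
    the Poisson term is approximated by the observed count."""
    total = 0.0
    for (l, b, counts) in data:
        model = disk_model(l, b, *params)
        var = (sigma_I_frac * counts) ** 2 + counts   # intrinsic + Poisson
        total += (counts - model) ** 2 / var
    return total

# Illustrative evaluation with made-up parameters, in the units of Table 1:
p = (100.0, 0.25, 1.6, 2.8)   # Cb, E, h, Rd
print(disk_model(90.0, 30.0, *p))
```

`chi2` returns zero for data generated by the model itself; in the actual fit this function would be minimized over (Cb, E, h, Rd) from many starting points, as described below.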
We therefore run 1000 $`\chi ^2`$ minimizations starting from a broad range of initial values. These values are centred on those that Iwan et al (1982) find for the Galactic component using HEAO-1 data, ie., $`h\sim 7`$ kpc, $`R_d\sim 28`$ kpc and $`E\lesssim 10`$ per cent, while the input extragalactic contribution is centred on $`𝒞_b\sim 120`$, the value obtained from the ROSAT XRB spectral fits in this band (eg Georgantopoulos et al. 1996). The results cluster around some preferred values, as can be seen in figure 1, in which we show as hatched histograms the distribution of the input parameters and as thick lines the distribution of the best-fit output parameters. The most probable values of the fitted parameters, as well as their standard deviations over the 1000 minimizations, are presented in table 1. For $`|b|>20^{\circ }`$ we have typically $`\chi ^2\approx 33600`$ for 33064 degrees of freedom, and hence this model cannot be rejected. The Galaxy contributes a significant fraction ($`E\sim 20-30`$ per cent) of the average total ROSAT 1.5 keV X-ray emission. As discussed earlier, this Galactic component could arise as contamination from lower energies (eg from the 0.17 keV Raymond-Smith Galactic component) due to the coarse energy resolution of the ROSAT PSPC. Although this percentage is rather high, it is not inconsistent with XRB spectral fits in deep ROSAT and ASCA pointings: the excellent spectral resolution ASCA spectrum of the XRB in the 0.4-10 keV band (Gendreau et al. 1995) presents a steep upturn at 1 keV, suggesting a high contamination of the 1.5 keV ROSAT band by lower energy photons. The scale height of the finite emitting disk is $`\sim 16`$ kpc, higher than both the value obtained by Pietz et al. (1998) in the 0.75 keV ROSAT band and that of Iwan et al (1982) in the 2-60 keV HEAO-1 band. Using the above best-fit model, the predicted Galactic halo luminosity is $`L_x\sim 2\times 10^{39}`$ ergs $`\mathrm{s}^{-1}`$.
The best-fit extragalactic component comes to $`(100\pm 11)\times 10^{-6}`$ $`\mathrm{cts}\mathrm{s}^{-1}\mathrm{arcmin}^{-2}`$. This value is in excellent agreement with the extrapolation of the HEAO-1 3-60 keV data of Marshall et al. (1980) into the ROSAT band, but $`\sim 20`$ per cent lower ($`1.5\sigma `$) than the normalization of the extragalactic power-law component obtained from XRB spectral fits in deep, high galactic latitude ROSAT fields (Hasinger 1996). If instead we force the extragalactic contribution to the value obtained from the ROSAT XRB fits ($`120\times 10^{-6}\mathrm{cts}\mathrm{s}^{-1}\mathrm{arcmin}^{-2}`$ in this band, eg Georgantopoulos et al. 1996), then the best-fit parameters are $`h=4\pm 2`$ kpc, $`R_d=15\pm 4`$ kpc and $`E=0.28\pm 0.10`$, consistent with the values of Pietz et al. (1998). We finally note that, had we not excluded the North Polar Spur region from our fit, with all four parameters ($`𝒞_b,E,h,R_d`$) free, we would have erroneously obtained a significantly higher Galactic fraction ($`E\sim 40`$ per cent) as well as a larger scale height ($`\sim 19`$ kpc), but with an unacceptable fit in this case ($`\chi _{red}^2\approx 1.2`$). 

## 3 XRB ROSAT Dipole 

### 3.1 Dipole fitting procedure 

The multipole components of the ROSAT X-ray intensity are calculated by summing moments. The dipole moment is estimated by weighting the unit directional vector pointing to each 40<sup>2</sup> arcmin<sup>2</sup> ROSAT cell with the X-ray intensity $`𝒞_i`$ of that cell. We normalize the dipole by the monopole term (the mean X-ray intensity over the sky): $$𝒟\equiv \frac{|𝐃|}{M}=\frac{\left|\sum _i𝒞_i\widehat{𝐫}_i\right|}{\sum _i𝒞_i}$$ (2) We attempt to estimate the cosmological XRB dipole by applying the above procedure to the ROSAT counts after subtracting our best Galaxy model (table 1) and masking the regions of Galactic and ”local” extragalactic X-ray emission.
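Eq. (2) is a direct sum over cells. The sketch below uses synthetic cell intensities and also includes an upper count limit of the kind used later in the analysis:

```python
import math

def dipole(cells, c_up=None):
    """Normalized dipole D = |sum_i C_i r_i| / sum_i C_i, Eq. (2).

    cells : list of (l_deg, b_deg, intensity) for equal-area cells;
            cells brighter than c_up (if given) are dropped.
    Returns (D, l_deg, b_deg), the amplitude and Galactic direction.
    """
    dx = dy = dz = mono = 0.0
    for l_deg, b_deg, c in cells:
        if c_up is not None and c > c_up:
            continue
        l, b = math.radians(l_deg), math.radians(b_deg)
        dx += c * math.cos(b) * math.cos(l)
        dy += c * math.cos(b) * math.sin(l)
        dz += c * math.sin(b)
        mono += c
    d = math.sqrt(dx**2 + dy**2 + dz**2)
    l_dip = math.degrees(math.atan2(dy, dx)) % 360.0
    b_dip = math.degrees(math.asin(dz / d)) if d > 0 else 0.0
    return d / mono, l_dip, b_dip

# Synthetic sky: uniform intensity plus a 2 per cent modulation toward
# (l, b) = (280, 30); the estimator should recover that direction.
cells = []
n = 40
tb, tl = math.radians(30.0), math.radians(280.0)
for i in range(n):
    b = math.degrees(math.asin(2 * (i + 0.5) / n - 1))  # equal-area bands
    for j in range(2 * n):
        l = 360.0 * (j + 0.5) / (2 * n)
        cb, cl = math.radians(b), math.radians(l)
        mu = (math.cos(cb) * math.cos(tb) * math.cos(cl - tl)
              + math.sin(cb) * math.sin(tb))
        cells.append((l, b, 100.0 * (1 + 0.02 * mu)))
print(dipole(cells))
```

On this synthetic sky the estimator returns D ≈ 0.02/3 (the factor 1/3 comes from averaging the projection of the modulation onto the unit vectors over the sphere) and a direction within a degree of the input (280°, 30°).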
To this end we mask: (a) the Galactic plane, (b) the area dominated by the Galactic bulge and the North Polar Spur (see definitions in section 2.1) and (c) a region of $`10^{\circ }`$ radius around the Large Magellanic Cloud (we have verified that small variations in the limits of all the above regions do not appreciably change our results). We use two methods to model these regions: the first consists in substituting the observed intensity with the mean value estimated at high galactic latitudes (homogeneous filling procedure), and the second is based on a spherical harmonic extrapolation procedure (cf. Yahil et al 1986). Since the latter is slightly more involved, we briefly review the method, which is based on expanding the sky surface density field $`\sigma (\vartheta ,\phi )`$ in real spherical harmonics: $$\sigma (\vartheta ,\phi )=\sum _{l,m}a_l^mY_l^m(\vartheta ,\phi )$$ (3) where $`\vartheta =90^{\circ }-\mathrm{Gal}.\mathrm{latitude}`$, $`\phi =\mathrm{Gal}.\mathrm{longitude}`$ (do not confuse the multipole $`l`$ with the Galactic longitude). In this formulation the normalized dipole is defined as $$𝒟=\frac{1}{3a_0^0}\left[\sum _{m=-1}^1(a_1^m)^2\right]^{1/2}$$ (4) where the factor 3 enters for consistency with the definition of eq. 2. The observed, $`\mathrm{\Sigma }(\vartheta ,\phi )`$, and intrinsic, $`\sigma (\vartheta ,\phi )`$, surface density fields are related according to $`\mathrm{\Sigma }(\vartheta ,\phi )=ℳ(\vartheta ,\phi )\sigma (\vartheta ,\phi )`$, where the mask $`ℳ(\vartheta ,\phi )`$ takes values of 1 or 0 depending on whether the $`(\vartheta ,\phi )`$ direction points in an observed or excluded part of the sky, respectively. Since we are interested in recovering the dipole ($`l=1`$) components of $`\sigma (\vartheta ,\phi )`$, the correction terms should at least involve the quadrupole ($`l=2`$) components.
Expanding $`\mathrm{\Sigma }(\vartheta ,\phi )`$ up to the quadrupole order and allowing for the orthogonality relation of the Legendre polynomials, we can express the observed coefficients $`A_l^m`$ in terms of the intrinsic ones, $`a_l^m`$, forming a $`9\times 9`$ matrix, the inversion of which then gives the $`a_l^m`$. A more accurate procedure would entail an expansion to higher order $`l`$’s (cf. Lahav et al. 1994), but we have verified, using mock samples, that when the underlying dipole structure is smooth (so that higher order $`l`$’s are negligible) the above procedure recovers the direction and amplitude of the true underlying dipole extremely accurately. 

### 3.2 Dipole Results 

In table 2 we present our main results for different treatments of the data. It is evident that when using the raw ROSAT data, the dipole points roughly towards the Galactic centre (in agreement with Kneissl et al 1997). However, when we exclude both the Galaxy and the North Polar Spur, the measured dipole is in much better directional agreement with the CMB dipole. For the homogeneous filling method we find $`\delta \theta _{cmb}<20^{\circ }`$ excluding the Galactic plane below $`|b|=20^{\circ }`$ or $`|b|=30^{\circ }`$. For the spherical harmonic method the misalignment angle is larger, $`\delta \theta _{cmb}>39^{\circ }`$. The Virgo cluster, being quite near and very bright in X-rays, could contribute significantly to the measured dipole. Excluding an area of $`4^{\circ }`$ radius around the Virgo cluster ($`l,b\sim 287^{\circ },75^{\circ }`$), which is very apparent in the X-ray map, we find that it is responsible for $`\sim 20`$ per cent of the total dipole, while the dipole misalignment angle with the CMB increases by $`\sim 15^{\circ }`$. However, it should be expected that many Galactic sources, probably dominating the higher ROSAT counts, are still present in the data and could affect the behaviour of the extragalactic XRB dipole.
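The masked-sky inversion described in section 3.1 can be sketched numerically: build the 9 x 9 coupling matrix between the l <= 2 real harmonics under a |b| < 20 degree cut, and solve for the intrinsic coefficients. The grid resolution and the test field below are illustrative choices of ours, not the paper's:

```python
import numpy as np

def harmonics(x, y, z):
    """Real orthonormal spherical harmonics up to l = 2, as functions of the
    Cartesian components of the unit vector; order (l,m) =
    (0,0), (1,-1), (1,0), (1,1), (2,-2), (2,-1), (2,0), (2,1), (2,2)."""
    c = 1 / np.sqrt(4 * np.pi)
    return np.stack([
        c * np.ones_like(z),
        np.sqrt(3) * c * y,
        np.sqrt(3) * c * z,
        np.sqrt(3) * c * x,
        np.sqrt(15) * c * x * y,
        np.sqrt(15) * c * y * z,
        np.sqrt(5) * c * (3 * z**2 - 1) / 2,
        np.sqrt(15) * c * x * z,
        np.sqrt(15) * c * (x**2 - y**2) / 2,
    ])

# Integration grid, uniform in sin(latitude) and longitude (equal-area cells).
nth, nph = 400, 800
z = (np.arange(nth) + 0.5) / nth * 2 - 1          # z = sin(Gal. latitude)
phi = (np.arange(nph) + 0.5) / nph * 2 * np.pi
z, phi = np.meshgrid(z, phi, indexing="ij")
s = np.sqrt(1 - z**2)
x, y = s * np.cos(phi), s * np.sin(phi)
dOmega = (2 / nth) * (2 * np.pi / nph)

mask = np.abs(z) > np.sin(np.radians(20))         # cut |b| < 20 degrees
Y = harmonics(x, y, z)                            # shape (9, nth, nph)

# A test field with a known monopole and a dipole along +x (in the cut band).
a_true = np.zeros(9); a_true[0] = 1.0; a_true[3] = 0.1
sigma = np.tensordot(a_true, Y, axes=1)

# Observed coefficients A and mask coupling matrix M, then a = M^{-1} A.
A = (Y * sigma * mask).sum(axis=(1, 2)) * dOmega
M = np.einsum("imn,jmn->ij", Y * mask, Y) * dOmega
a_rec = np.linalg.solve(M, A)
print(np.round(a_rec, 4))   # close to a_true despite the masked band
```

Because the test field contains no power above l = 2, the recovery is exact up to rounding; for a real map the truncation at l = 2 is the approximation discussed above.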
We therefore present in figure 2 the misalignment angle between the ROSAT and CMB dipoles, as well as the normalized dipole amplitude, $`𝒟`$, as a function of the ROSAT upper count limit ($`𝒞_{up}`$). The error bars have been estimated by using the different Galactic model parameters resulting from the $`\chi ^2`$ fits of section 2.1 and presented in figure 1. We find that our main results are very robust under such variations of the Galactic model. The two methods used to mask the excluded regions give consistent dipole results for $`𝒞_{up}`$ ≲ 140 (which covers ≈ 97 per cent of the unmasked sky). For this limit the ROSAT -CMB dipole misalignment angle is ≲ $`26^{}`$ and ≈ $`33^{}`$ for the homogeneous filling and spherical harmonics methods respectively. It is evident that the ROSAT -CMB dipole misalignment angle increases substantially when we include the few higher intensity cells, before the Virgo cluster (which enters at $`𝒞_{up}`$ ≳ 200 counts) starts reducing the misalignment angle again, as can be clearly seen in figure 2. The interpretation that the high intensity ($`𝒞`$ ≳ 140) cells are associated with Galactic sources is supported by the fact that when we include these few cells the resulting dipole direction moves towards the Galactic centre. We therefore consider as our best estimate of the XRB dipole its value at $`𝒞_{\mathrm{up}}`$ ≈ 140, for which both methods used to model the masked areas agree and the XRB-CMB dipole misalignment angle is minimum. To take into account all possible sources of uncertainty, we use a Monte-Carlo simulation approach in which we vary all the model parameters within their range of validity. 
Using 6000 dipole realizations to take into account (a) the uncertainties of the Galactic model subtracted from the raw counts, (b) the different methods used to mask the excluded sky regions, (c) the different Galactic latitude limits and (d) variations of the excision radii around the bulge and the North Polar Spur, we conclude that the XRB ROSAT dipole has: $$𝒟_{\mathrm{XRB}}≈0.017\pm 0.008,(l,b)≈(286^{},22^{})\pm 19^{}$$ which deviates from the CMB dipole directions in the heliocentric and Local Group frames by $`\delta \theta _{\mathrm{CMB}_{}}≈30^{}`$ and $`\delta \theta _{\mathrm{CMB}_{\mathrm{LG}}}≈10^{}`$, respectively. It is interesting that the ROSAT dipole is nearer to the Local Group frame CMB dipole direction. Our results are consistent with the HEAO-1 (2-10 keV) dipole (Shafer & Fabian 1983) which points in a similar direction ($`282^{},30^{}`$), albeit with a larger uncertainty, but has a lower amplitude: $`𝒟_{\mathrm{HEAO}1}≈0.005`$. ### 3.3 Interpretation The motion of the Sun with respect to an isotropic radiation background produces a dipole in the radiation intensity according to: $$\frac{\delta 𝒞}{𝒞}=(3+\alpha )V_{}\mathrm{cos}\theta /c$$ (5) where $`\alpha `$ is the spectral index of the radiation ($`𝒞∝\nu ^{-\alpha }`$). For the 1.5 keV ROSAT band we have $`\alpha ≈0.4`$ (Gendreau et al. 1995). If the ROSAT dipole were entirely due to the motion of the Sun with respect to the XRB (Compton-Getting effect) then we would obtain a solar velocity with respect to the XRB of $`V_{}=1300\pm 600`$ km/sec, which should be compared with $`V_{}=369`$ km/sec with respect to the CMB (Lineweaver et al 1996). We therefore verify that the observed XRB ROSAT dipole cannot be due solely to the Compton-Getting effect; it is significantly contaminated by the dipole produced by X-ray emitting sources that trace the large-scale structure. 
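As a quick consistency check on the quoted misalignment angles, one can compute the great-circle separation between the dipole directions with the spherical law of cosines (a sketch; the CMB dipole coordinates below are standard literature values, not taken from this text, and the Local Group frame direction in particular is our assumption).

```python
import math

def sep_deg(l1, b1, l2, b2):
    """Angular separation in degrees between two Galactic (l, b) directions,
    via the spherical law of cosines."""
    l1, b1, l2, b2 = map(math.radians, (l1, b1, l2, b2))
    c = math.sin(b1) * math.sin(b2) + math.cos(b1) * math.cos(b2) * math.cos(l1 - l2)
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

rosat_xrb = (286.0, 22.0)    # best-estimate XRB dipole quoted in the text
cmb_helio = (264.3, 48.1)    # heliocentric CMB dipole (Lineweaver et al 1996)
cmb_lg = (276.0, 30.0)       # Local Group frame CMB dipole (assumed value)

print(sep_deg(*rosat_xrb, *cmb_helio))  # ~31 deg, consistent with the quoted ~30 deg
print(sep_deg(*rosat_xrb, *cmb_lg))     # ~12 deg, consistent with ~10 deg given the ±19 deg error
```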
We estimate the large-scale dipole component of the XRB by subtracting from the ROSAT map the expected Compton-Getting dipole, and we obtain a dipole with $`𝒟_{\mathrm{LSS}}≈0.9𝒟_{\mathrm{XRB}}`$ pointing towards $`(l,b)≈(284^{},18^{})`$. This is in general agreement with Lahav et al (1997), who found that the two effects contributing to the XRB dipole are of the same order of magnitude. ## 4 Conclusions We have estimated the dipole of the diffuse 1.5 keV X-ray background using the ROSAT all-sky survey maps of Snowden et al (1995). We have first subtracted from the ROSAT counts the diffuse Galactic emission (the halo and bulge components) as well as local extended features such as the North Polar Spur. The Galactic halo model used is that of a finite radius disk model with an exponential scale height (e.g. Iwan et al 1982). The mean Galactic X-ray component is 20–30 per cent of the background, and the scale height and radius are $`16\pm 5`$ and $`27\pm 5`$ kpc respectively. We model the excluded regions by either homogeneously “painting” these regions with the mean X-ray count (derived after subtracting the Galaxy) or using a spherical harmonic expansion of the X-ray surface intensity field. We estimate that the ROSAT XRB dipole is pointing towards $`(l,b)≈(286^{},22^{})\pm 19^{}`$, within ≈$`30^{}`$ of the CMB direction, and having a normalized amplitude of ≈1.7 per cent. The dipole direction is in agreement with previous estimates in hard X-rays (Shafer & Fabian 1983) but the positional errors have now been improved. We also find that the two effects expected to contribute to the XRB dipole, i.e., the Compton-Getting effect and the anisotropy due to X-ray sources tracing the large-scale structure, are of the same order of magnitude, in general agreement with the predictions of Lahav et al (1997). However, the latter dominates, having $`𝒟_{\mathrm{LSS}}≈0.9𝒟_{\mathrm{XRB}}`$. 
Finally we estimate that the nearest cluster, Virgo, contributes ≈20 per cent of the total measured XRB dipole. ## Acknowledgments We thank the referee, Dr. Marie Treyer, for her helpful comments and suggestions.
# Note on the Quantum Mechanics of M Theory ## 1 Introduction It is universally believed that M Theory is described by ordinary quantum mechanics. In this paper we will present evidence to the contrary. However, the modification of quantum mechanics we propose is very mild and indeed the formalism we will use to investigate this question is the standard formalism of the quantum theory. The only conventional axiom that we have to drop is the implicit one that allows us to define Heisenberg operators, given an initial Hilbert space and Hamiltonian. We argue that in asymptotically Minkowski spacetime, this cannot be done in M Theory (or any other candidate quantum theory of gravity) for any timelike asymptotic Killing vector. This is a consequence of the Bekenstein-Hawking formula for black hole entropy. The same considerations tell us why it is possible to construct a standard Hamiltonian quantum mechanics for M Theory in asymptotically Anti-de Sitter (AdS) spaces. We argue that when the number of Minkowski dimensions is greater than four, Hamiltonian quantum M theory in the light cone frame is still sensible. We also note in passing that the success of light cone string theory (in contrast with temporal gauge quantization of the string) is related to the Hagedorn density of states of the string. In precisely four noncompact dimensions, the light cone formalism fails to cope with the density of states of black holes. This suggests that there may not even be a light cone Hamiltonian formalism for nonperturbative M Theory in this framework. Matrix Theory has already provided evidence for the possible breakdown of the light cone description of M Theory with four noncompact dimensions. However, in Matrix Theory this appeared to be related to the extra compactification of a lightlike circle. It is not clear to what extent these problems are related to the (much milder) difficulties we will point out here. 
A light-cone description of the four dimensional theory in terms of a non-standard quantum theory (such as a little string theory) is not ruled out by our considerations; in the Matrix Theory context such a description arises already when there are six non-compact dimensions. We also use similar methods to analyze non-gravitational little string theories, and we conclude that they also do not have an ordinary quantum mechanical description in the usual time frame. However, they can (and do) have such a description in a light-cone frame. ## 2 The Spectrum of a Hamiltonian, and Heisenberg Operators In a generally covariant theory, the definition of time and time translation must be based on a physical object. In noncompact spacetime with appropriate asymptotic boundary conditions, suitable physical objects are the frozen classical values of the asymptotic spacetime metric, and other fields. We will restrict attention to such asymptotically symmetric spaces in this paper. The quantum theory lives in a Hilbert space which carries a representation of the asymptotic symmetry group, and among its generators we find the time translation operator (for asymptotically Minkowski spaces we will ignore the full Bondi-Metzner-Sachs symmetry group in this paper and imagine that a particular Poincare subgroup of it has been chosen by someone wiser than ourselves; perhaps one should think instead of the Spi group of asymptotic diffeomorphisms of spacelike infinity, where a natural Poincare subgroup exists). In the Minkowski case there is, up to conjugation, a unique choice of time translation generator, while in the AdS case there are two interesting inequivalent choices. 
Having chosen a Hamiltonian, we are now ready to discuss Heisenberg operators, naively defined by $$O(t)\equiv e^{iHt}Oe^{-iHt}.$$ (1) To test the meaning of this definition we compute the two point (Wightman) function: the ground state expectation value of the product of two operators at different times. By the usual spectral reasoning it has the form $$W(t)=<0|O^{\dagger }(0)O(t)|0>=\int _0^{\infty }𝑑Ee^{iEt}\rho _O(E),$$ (2) where the spectral weight is defined by: $$\rho _O(E)=\sum _n\delta (E-E_n)|<0|O|n>|^2$$ (3) and is manifestly a sum of positive terms. The crucial issue is now the convergence of this formal Fourier transform. In interacting quantum field theory, an operator localized in a volume $`V`$ will typically have matrix elements between the vacuum and almost any state localized in $`V`$. The only restrictions will come from some finite set of global quantum numbers. At very high energies the density of states in volume $`V`$ is controlled by the UV fixed point theory of which the full theory is a relevant perturbation. Scale invariance and extensivity dictate that it has the form $$\rho (E)∼e^{cV^{\frac{1}{d}}E^{\frac{d-1}{d}}}$$ (4) for some constant $`c`$, where $`d`$ is the spacetime dimension. As a consequence, even if an operator $`O`$ has matrix elements between the vacuum and states of arbitrarily high energy (and indeed, even if these matrix elements grow as the exponential of a power of the energy less than one, which is not typical), the Fourier integral converges and defines a distribution (as may be seen by analytic continuation to Euclidean space). In local field theory there are special operators whose matrix elements between the vacuum and high energy states are highly suppressed. These are operators which are linear functionals of local fields of fixed dimension at the UV fixed point. 
The spectral function of such fields has power law dependence on the energy (as opposed to the exponential dependence implied by (4)), which means that most of their matrix elements to most high energy states vanish rapidly with the energy. A typical operator localized in volume $`V`$ is a nonlinear functional of such fields. Using the operator product expansion we can write a formal expansion of it in terms of the linear fields, but it will generically contain large contributions from fields of arbitrarily large dimension, and its spectral function will have the asymptotics of the full density of states. Physically, we can understand the special role of local operators by first thinking of a massive field theory at relatively low energy. The intermediate states of energy $`E`$ created by a local field will be outgoing scattering states of a number of massive particles bounded by $`E/m`$, whose momenta point back to the place where the field acts. When $`E`$ is large these will be very special states among all those available in the same volume and with the same energy. A localized burst of energy will dissipate rather than thermalize. As $`E`$ gets very large, this description becomes inadequate. However, note that if we choose a basis of local fields of fixed dimension in the UV conformal field theory, then any given field in the basis creates only states in a single irreducible representation of the conformal algebra. The degeneracy of these representations is much smaller than that of the full theory. Now we want to ask the same question for M Theory compactified to $`d`$ asymptotically Minkowski spacetime dimensions. As in our discussion of field theory, we work in a fixed asymptotic Lorentz frame and discuss the time evolution operator appropriate to that frame. We claim that the high energy density of states in superselection sectors with finite values of all Lorentz scalar charges, is dominated by $`d`$ dimensional Schwarzschild black holes. 
The density of such black hole states is, by the Bekenstein-Hawking formula, $$\rho _{BH}∼e^{c(E/M_P)^{\frac{d-2}{d-3}}},$$ (5) where $`M_P`$ is the $`d`$ dimensional Planck mass and $`c`$ is a constant. The Fourier integral no longer converges unless the matrix elements of the operator vanish rapidly for large $`E`$ (or unless the operator connects the vacuum only to a very small fraction of the black hole states; given the thermal nature of the black hole ensemble, i.e. the fact that it consists primarily of states with the same gross properties, operators of the latter type do not correspond to very realistic probes of the system). Note furthermore that if we employ an operator with an energy cutoff much larger than the Planck mass (the point above which the behavior (5) presumably sets in), then for all times longer than the inverse cutoff, the integral will be completely dominated by the states at the cutoff energy scale. Thus the only kinds of operators whose Green functions exhibit the usual property that the variation over a time scale $`\mathrm{\Delta }T`$ probes excitations of energy $`1/\mathrm{\Delta }T`$ are those with an energy cutoff below $`M_P`$. The necessity of cutting off operators implies a non-locality in the physics of M Theory at the time scale $`M_P^{-1}`$. Indeed, a probe of the system localized at time $`t=t_0`$ differs from one localized at $`t=0`$ because it couples to the operator $`\int 𝑑Ee^{iEt_0}O(E)`$ rather than the same integral with $`t_0`$ set to zero. If $`t_0≪M_P^{-1}`$ and $`O(E)`$ is cut off at $`E≲M_P`$ then these operators are indistinguishable. An important feature of the density of states (5), which distinguishes it from that of field theory, is its independence of the volume. This is related to the familiar instability of the thermal ensemble in quantum gravity to the formation of black holes. 
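The convergence criterion can be made concrete with a small numerical comparison of the two spectral weights (a sketch; the constant $`c`$ and the Euclidean time $`\tau `$ are arbitrary illustrative choices): for the field-theory exponent $`(d-1)/d<1`$ the Euclidean-continued spectral integral saturates as the energy cutoff grows, while for the $`d=4`$ black hole exponent $`(d-2)/(d-3)=2`$ the log-integrand grows without bound for every $`\tau `$.

```python
import numpy as np

def exponent_ft(d):
    """Field-theory growth exponent: rho(E) ~ exp(c V**(1/d) * E**p)."""
    return (d - 1) / d

def exponent_bh(d):
    """d-dimensional Schwarzschild growth exponent: rho(E) ~ exp(c * (E/M_P)**p)."""
    return (d - 2) / (d - 3)

def trap(f, x):
    """Simple trapezoid-rule quadrature."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

c, tau, d = 1.0, 2.0, 4

# Field theory (p = 3/4 < 1): the Euclidean integrand exp(c E^p - tau E) decays,
# so the partial spectral integrals saturate as the cutoff is doubled.
E = np.linspace(1e-6, 100.0, 400001)
w = np.exp(c * E ** exponent_ft(d) - tau * E)
half = len(E) // 2
I_half, I_full = trap(w[:half], E[:half]), trap(w, E)  # cutoffs ~50 and 100

# d = 4 black holes (p = 2 > 1): the log-integrand c E^2 - tau E diverges,
# so no choice of tau > 0 gives a convergent spectral integral.
log_integrand_at_cutoff = c * E[-1] ** exponent_bh(d) - tau * E[-1]

print(I_half, I_full, log_integrand_at_cutoff)
```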
In ordinary field theory, the typical state of very high energy $`E`$ on a torus is a member of the translation-invariant thermal ensemble. However, as a consequence of the Jeans instability, any attempt to create a translationally invariant state with finite energy density in a theory containing gravity will fail. Once the size of a patch of finite energy density exceeds its Schwarzschild radius, it collapses into a black hole. So, the generic high energy state in M Theory with four or more asymptotically flat dimensions is a single black hole. This gives us some insight into the question raised above of why there are no analogs in M Theory of local operators of fixed dimension which couple only to a few of the high energy states. In quantum gravity an operator carrying a very large energy $`E`$ cannot create a state more localized than the corresponding Schwarzschild radius. The most localized states of a given energy are also the generic states of that energy and they are black holes. Some readers familiar with the AdS/CFT correspondence will at this point be objecting that these considerations seem to contradict the successful description of M Theory in AdS spaces by quantum field theories. Other AdS/CFT cognoscenti will have already recognized that in fact this correspondence is another example of the rules we have just promulgated. AdS spaces have two inequivalent types of interesting time evolutions, the Poincare and the global time. The appropriate black objects which dominate the high energy density of states for these two definitions of energy are near extremal black branes, and AdS Schwarzschild black holes respectively. Both of these have positive specific heat, that is, the density of states is an exponential of a power of the energy less than one. 
This is completely consistent with quantum field theory, and of course the matching of the thermodynamics of these objects with that of conformal field theories is one of the primary clues which led to the AdS/CFT correspondence. Note that the major discrepancy between the behavior of the density of states in AdS and Minkowski spacetimes suggests that the extraction of flat space physics from that of AdS may be quite subtle. It has been suggested that in appropriate regions of parameter space, the Hilbert space of the quantum field theories describing AdS physics contains an energy regime describing flat space black holes whose Schwarzschild radius is much smaller than the radius of curvature of AdS. Somehow one must find observables in the AdS theory which probe only this energy regime and reduce to the corresponding flat space S-matrix. To summarize: we have argued that in asymptotically Minkowski spacetime, M Theory cannot be described as a conventional quantum theory. Although the violation of the rules of quantum mechanics appears mild, we would like to emphasize that any quantum system obtained by quantizing, via Euclidean path integrals, a classical action which is the integral over time of a Lagrangian would appear to have well defined Green functions. Thus, although the systems which might describe M Theory in Minkowski space violate the abstract rules of quantum mechanics only by having a bizarre spectral density for the Hamiltonian, they are unlikely to have a conventional Lagrangian description with respect to an ordinary time variable. ## 3 Light Cone Time and Black Holes String theory has traditionally been formulated in light cone gauge because, for various technical reasons, one was unable to find another gauge in which the system was obviously a unitary quantum theory. More recently, the light cone gauge has been seen as the framework in which the holographic nature of string theory and M Theory becomes apparent. 
Matrix Theory is a nonperturbative formulation of M Theory in Discrete Light Cone Quantization (DLCQ) on spaces with at least $`6`$ Minkowski dimensions. In this section we would like to propose that the success of Hamiltonian quantization in light cone gauge is partly due to the absence in light cone gauge of the problems described in the previous section. This leads us to anticipate a problem with any light cone formulation of the theory in 4 noncompact dimensions. The argument is extremely simple. The light cone energy ℋ is defined by $$ℋ=\frac{𝐏_{\perp }^2+M^2}{P^+}.$$ (6) For fixed longitudinal momentum and vanishing transverse momentum, we can therefore write the density of black hole states in light cone energy as $$\rho (ℋ)∼e^{(\frac{P^+ℋ}{M_P^2})^{\frac{d-2}{2(d-3)}}}.$$ (7) Thus, for $`d>4`$ the density of states is well behaved, and we can hope to describe the system as some sort of conventional quantum mechanics. In precisely four dimensions the Bekenstein-Hawking formula implies a Hagedorn spectrum for M Theory in light cone energy. It is extremely interesting to compare these observations with the problems encountered in compactified DLCQ M Theory, or Matrix Theory. There it is known that for 7 or more noncompact dimensions the DLCQ description is a quantum field theory, while in 6 dimensions it is a little string theory. As described in the next section, the little string theory has a Hagedorn spectrum. Finally, for five dimensions the theory involves gravity, at least in the simplest maximally SUSY case. Note that, qualitatively, the DLCQ density of states seems to mirror that of the uncompactified light cone theory with two fewer dimensions. It is field theoretical down to $`d=7`$ and has a Hagedorn form for $`d=6`$. There does not, however, appear to be any quantitative mapping of one problem on to the other. The exponents in the energy-entropy relation are completely different. 
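Tabulating the exponent of eq. (7) makes the dimension dependence explicit (a trivial sketch; "Hagedorn" here merely flags a linear entropy-energy relation at fixed $`P^+`$).

```python
from fractions import Fraction

def lightcone_entropy_exponent(d):
    """Exponent p in S ~ (P^+ H / M_P^2)^p for d-dimensional Schwarzschild
    black holes at fixed longitudinal momentum, read off from eq. (7)."""
    return Fraction(d - 2, 2 * (d - 3))

for d in range(4, 12):
    p = lightcone_entropy_exponent(d)
    label = "Hagedorn" if p == 1 else ("well behaved" if p < 1 else "worse than Hagedorn")
    print(d, p, label)
```

For $`d=11`$ the exponent is $`9/16`$, and it tends to $`1/2`$ as $`d`$ grows; only $`d=4`$ saturates the Hagedorn value $`p=1`$.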
Indeed, it is well known that the DLCQ theory contains many states which must decouple in the large $`N`$ limit if the limiting theory is to be Lorentz invariant. These undoubtedly are responsible for the dramatically different behavior of the density of states in the DLCQ and uncompactified light cone descriptions. The Lorentz invariant theory contains only the states with energy of order $`1/N`$ in the large $`N`$ limit, and the asymptotic density of states in this theory refers to the asymptotics of the coefficient of $`1/N`$ in the DLCQ theory. With five noncompact dimensions, the lightcone Hamiltonian of toroidally compactified $`DLCQ_N`$ M Theory is that of 11D SUGRA in the presence of an $`A_N`$ singularity. In this picture the time is that of the rest frame of the singularity. The theory is compactified on a six torus with radii of order the Planck scale. For finite $`N`$, the high energy density of states of this theory is dominated by 5D black holes, and a conventional Heisenberg quantum mechanics will not exist. On the other hand, it has been suggested by Kachru, Lawrence, and Silverstein (KLS) that DLCQ M Theory compactified on a Calabi-Yau manifold will have fewer and more innocuous extraneous states. These authors also propose that the Matrix description of this theory might be a $`3+1`$ dimensional field theory. Note that the entropy of such a field theory scales like $`E^{3/4}`$, which is precisely the scaling with light cone energy of the entropy of five dimensional black holes. In the present authors’ opinion, this observation supports the suggestion of KLS and should encourage us to search for the relevant $`3+1`$ dimensional field theory. One clue to its nature might be the fact that the field theoretical nature of the spectrum seems to survive the large $`N`$ limit. 
In maximally SUSY Yang Mills theory, there are degrees of freedom with energies as low as $`N^{-1/3}`$ and a $`3+1`$ dimensional spectrum, but there do not appear to be any field theoretical degrees of freedom with energies as low as $`1/N`$. The KLS field theory should have such states. It is well known that with four noncompact dimensions (i.e. two transverse dimensions) the DLCQ theory ceases to exist. The light cone Hamiltonian is the rest frame Hamiltonian of toroidally wrapped D7 branes in weakly coupled IIB string theory. This is only a picturesque description, for it is not self-consistent. If we assume an asymptotically locally flat space, the D7 branes have infinite energy (by a BPS formula). However, the gravitational back reaction is infinite and one merely learns that the asymptotically flat ansatz is not self-consistent. Quite likely the theory does not exist at all. The DLCQ theory is really a compactification to $`2+1`$ flat dimensions. Furthermore, like all light cone theories it describes only excited states of the vacuum rather than the vacuum itself. If a 2+1 dimensional theory has four or more supercharges (so that there is an exactly massless scalar in the SUGRA multiplet) then we do not expect there to be many such states. A generic “localized” excitation of the vacuum creates a geometry which is not asymptotically flat. Thus the Hilbert space of the system with asymptotically flat boundary conditions (or even asymptotically locally flat) is very small and contains only a few topological excitations. This problem, which arises in DLCQ, does not appear to have much to do with the apparent absence of a Heisenberg quantum mechanics in the lightcone theory compactified to 4 dimensions. The problem there is an excess of states which do satisfy the asymptotic boundary conditions, rather than a lack of such states. 
Thus while Matrix Theory may be a useful guide to many properties of M Theory, we cannot expect to get the physics of low dimensional compactifications right without finding a light-cone description of the Lorentz invariant theory (without compactifying a light-like circle). ### 3.1 A Loose End A possible objection to the above discussion is that black holes are not stable. Thus they do not really correspond to the eigenspectrum of the Hamiltonian. However, the lifetime of black holes goes to infinity as a power of their mass. They are thus extremely narrow resonances and correspond to an enhancement of the density of scattering states. In principle we could have formulated our description of the behavior of Green’s functions in terms of complete sets of scattering states and the properties of the S-matrix. ## 4 Hamiltonian Description of Little String Theories In section 2 we briefly discussed the convergence of the formal expression (2) in the case of local field theories, and noted that there appeared to be no problem with it in this case. However, it was realized in the past few years that there are also non-local field theories which can be decoupled from gravity, in particular little string theories. Although decoupled from gravity, these behave in many ways like critical string theories. In this section we will analyze the behavior of these theories at high energies, and we will argue that they have a Hagedorn density of states. Therefore, using the arguments of section 2, it is not clear how to define local operators in these theories using the usual time variable, but a light-cone description of these theories in terms of an ordinary Lagrangian quantum mechanics does seem to make sense. We will discuss in detail only the case of little string theories with 16 supercharges in 6 dimensions of type $`A_{k-1}`$, but we expect the conclusions to be more general. The construction of little string theories with 16 supercharges in 6 dimensions has been discussed in the literature. 
The original definition of these theories involved looking at $`k`$ NS 5-branes in type IIA (for the $`𝒩=(2,0)`$ little string theories) or type IIB (for the $`𝒩=(1,1)`$ little string theories) string theories, and taking the limit of $`g_s\rightarrow 0`$ with the string scale $`\alpha ^{\prime }`$ held constant. While this construction provides evidence for the existence of such little string theories, it does not allow for explicit computations in these theories. Two independent methods for making direct computations in these theories were developed in the past two years, and we will use both of them to compute the density of states at high energies. ### 4.1 The Equation of State from the Holographic Description The first method we will use is the holographic description of the little string theories, which is a generalization of the AdS/CFT correspondence. The little string theories are claimed to be dual to a background of M theory or string theory which, at a large radial coordinate, asymptotes to a linear dilaton background of string theory (with a string metric which is the standard metric on $`ℝ^7\times S^3`$). It is easy to generalize this description also to the case of finite energy density or temperature. As in the case of the AdS/CFT correspondence, the relevant background (at least at high energy densities) is the near-horizon background of near-extremal NS5-branes. The simplest way to derive this background is just to take the $`g_s\rightarrow 0`$ limit in the background of a near-extremal 5-brane. The result is very similar to the background described above, but with the linear dilaton direction and the time direction replaced by an $`SL(2)/U(1)`$ black hole (with an appropriate $`SL(2)`$ level so that the total central charge is $`\widehat{c}=10`$). The black hole background also has a varying dilaton, with the string coupling going to zero far from the horizon. If we start with an energy density $`E/V=\mu M_s^6`$ on the 5-brane, the string coupling at the horizon is $`g_s^2∼k/\mu `$. 
Since for large $`k`$ the curvature of this background is small, it follows that for $`\mu ≫k≫1`$ we can trust the supergravity description of this background, and it provides a holographic dual for the little string theories with this energy density (for smaller energy densities we expect this background to be corrected, at least in the $`𝒩=(2,0)`$ case where at low temperatures and energy densities the solution should become localized in the eleventh direction). In particular, we can use this description to compute the equation of state of the little string theories from the Bekenstein-Hawking formula (as was done in the case of the AdS/CFT correspondence), and we find that the equation of state is $$E=\frac{M_s}{\sqrt{6k}}S,$$ (8) where $`E`$ and $`S`$ can be taken to be either the total energy and entropy or the energy and entropy densities (since the formula does not depend on the volume). As has been noted in a similar context, this is the same density of states as that of a free string theory with a string scale of $`k\alpha ^{\prime }`$ and $`c=6`$, or of a free string theory with a string scale of $`\alpha ^{\prime }`$ and $`c=6k`$, even though the theory does not seem to be a free string theory; an interpretation as a theory with a string scale $`\alpha ^{\prime }`$ seems more likely since the theory has a T-duality symmetry corresponding to this scale. In any case, for our purposes it is enough to note that at high energy densities we get the equation of state (8), which is a Hagedorn density of states with a Hagedorn temperature of $`T_H=M_s/\sqrt{6k}`$ (signifying that the canonical ensemble can only be defined below this temperature). The fact that (at least for large volumes compared to the string scale) the equation of state does not depend on the volume suggests that, as in M theory, the high energy density of states is dominated by the states of a single object. 
In little string theory we believe that the analog of the black hole is a single long string. This seems to be the message of the Hagedorn spectrum. Note that this is not strictly true in free string theory, since there the numbers of strings in each string state are an infinite set of conserved quantities. However, when interactions are turned on, multiple string states can convert into a single long string and this has more entropy. In the interacting little string theory we should expect this phenomenon to occur as well. We will provide some supporting evidence for this from the DLCQ description below, by computing the Hagedorn description via an independent argument which shows that it can naturally arise just from single string states. As noted above, the corresponding phenomenon for black holes in asymptotically Minkowski spacetime is the Gross Perry Yaffe instability of the translation invariant thermal ensemble to the formation of a single large black hole. In the little string theories we do not expect a similar localization of the generic high energy states, but they still seem to correspond to single objects, unlike the local field theory case. We argued for the full M Theory that the existence of black holes precluded the existence of local operators, which couple only to a small subset of the high energy states. We believe that the same is true in the little string theory, as a consequence of the fact that the generic high energy state is a single big string. In field theory (on a large but finite torus) the generic state of high energy is a translation-invariant gas. But in an interacting string gas in any finite volume, once the energy is taken large enough, the density of strings is such that overlaps are inevitable, and in the presence of interactions the high entropy single string state will be preferred. In a gas, it is easy to construct operators which create only one of the constituents from the vacuum. 
On the other hand, if perturbative string theory is any guide, it is very difficult to construct operators with matrix elements between the vacuum and only a few of the highly degenerate excited string states. Our arguments here have necessarily been quite heuristic because we do not have a good description of the eigenstates of little string theories. Nonetheless, combined with the supporting evidence from the DLCQ and holographic pictures, and especially the calculation of Peet and Polchinski which suggests that correlation functions of little string theory operators in momentum space do not have Fourier transforms, our description of the physics of this system seems plausible. ### 4.2 The Equation of State from the DLCQ Description A completely different description of the little string theories is their discrete light-cone quantization, which was described in for the case with $`𝒩=(2,0)`$ supersymmetry and in for the case with $`𝒩=(1,1)`$. In both cases the description of the theory with light-like momentum $`P^+=N/R`$ is given by a $`1+1`$ dimensional conformal theory with $`c=6Nk`$, compactified on a circle of radius $`\mathrm{\Sigma }=1/RM_s^2`$. Conformal invariance dictates the equation of state of these theories at high energies (above the scale $`1/\mathrm{\Sigma }`$) to be $$E_{DLCQ}=6Nk\mathrm{\Sigma }T^2;S=6Nk\mathrm{\Sigma }T,$$ (9) or $$E_{DLCQ}=S^2/6Nk\mathrm{\Sigma }.$$ (10) As in the previous section, we can easily translate this into an equation of state for the full space-time theory (a similar procedure was carried out for Matrix black holes in ). In the absence of transverse momentum we have $`E_{DLCQ}P^+=E^2`$, so we get $$E=\sqrt{E_{DLCQ}N/R}=\frac{M_s}{\sqrt{6k}}S,$$ (11) which is exactly the same relation as (8). 
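The chain from (9)-(11) back to (8) is short enough to verify symbolically; the sketch below (our notation) confirms that all factors of $`N`$, $`R`$ and the volume drop out:

```python
import sympy as sp

S, N, k, R, Ms = sp.symbols('S N k R M_s', positive=True)

Sigma = 1 / (R * Ms**2)              # DLCQ radius, Sigma = 1/(R M_s^2)
E_dlcq = S**2 / (6 * N * k * Sigma)  # equation (10)

# E_DLCQ * P^+ = E^2 with P^+ = N/R gives equation (11):
E = sp.sqrt(E_dlcq * N / R)

# ... which reproduces the Bekenstein-Hawking result (8):
assert sp.simplify(E - Ms * S / sp.sqrt(6 * k)) == 0

# Only the product (central charge) x (radius) enters (10), which is
# why sectors with c -> c/N, Sigma -> N*Sigma give the same equation
# of state.
```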
Note that all factors of $`N`$ and $`R`$ dropped out of this expression, as well as any dependence on the volume of space, without the necessity of taking the large $`N`$ limit; this happens here because of special properties of the DLCQ of the little string theories and would not be true in general. The computation above gives the high-energy density of states in the DLCQ theory; unfortunately this is not what we are interested in for the Lorentz-invariant theory, for which only states whose energies are of order $`1/N`$ are relevant (obviously for large $`N`$ these energies will become smaller than the scale $`1/\mathrm{\Sigma }`$ above which our previous computation was valid). Luckily, as in the case of the DLCQ description of type IIA string theory , we can argue that these theories have states whose energy scales like $`1/N`$ which obey the same equation of state as the full theory. In the case of type IIA string theory, these states involved “long string” states which changed by a $`U(N)`$ gauge transformation when going around the compact circle. For free type IIA string theory the DLCQ involved a free CFT of central charge $`\widehat{c}=8N`$, while the “long string states” (for the lowest-lying states for which the gauge transformation was equivalent to a permutation of order $`N`$ of the eigenvalues of the $`U(N)`$ adjoint matrices) involved a CFT of central charge $`\widehat{c}=8`$ but compactified on a circle of radius $`\stackrel{~}{\mathrm{\Sigma }}=N\mathrm{\Sigma }`$; since the formula (10) depends only on the product $`c\mathrm{\Sigma }`$ these states obey the same equation of state as that of the full theory. We would like to suggest that a similar mechanism holds also in the DLCQ description of the little string theories. For the $`𝒩=(2,0)`$ case this involves the Higgs branch of a $`U(N)`$ gauge theory while for the $`𝒩=(1,1)`$ case it involves the Coulomb branch of a $`U(N)^k`$ gauge theory. 
At least in the latter case it is clear that the theory includes “long string” states with energies of order $`1/N`$ just like in the full type IIA string theory, and it seems likely that the central charge for the theory describing the “long strings” will be $`1/N`$ of the total central charge. A complete analysis of these “long string” states will be presented elsewhere. Unfortunately, since these states are strongly interacting (unlike the case discussed above of weakly coupled type IIA string theory), it is not clear if we can really trust this description for computing the density of states. In particular, it is not obvious that the “long string” states are adequately described by a local CFT. However, it seems plausible that “long string” states do exist and obey an equation of state similar to (10) (up to possible numerical factors). We view this as additional evidence for the validity of (8) in the little string theories, and for the entropy being dominated by single-string states. ### 4.3 Discussion Let us now discuss the consequences of (8) for the description of the little string theories. As discussed in section 2, this behavior implies that correlation functions of standard Heisenberg operators do not exist in these theories, at least when the time difference between the operators is smaller than the Hagedorn scale $`1/T_H`$. Indeed, using the holographic description of the little string theories, Peet and Polchinski have provided independent evidence that the correlation functions of these theories are not Fourier transformable and do not obey the rules of quantum field theory. This supplements the arguments based on T-duality. Thus, we expect the DLCQ description of these theories (or perhaps a direct large $`N`$ limiting version of it) to be the only Lagrangian quantum theory which computes the correlation functions and eigenspectrum of the theory. 
We want to emphasize the way in which the DLCQ analysis agrees with the Bekenstein-Hawking analysis of these theories. DLCQ predicts a Hagedorn spectrum in ordinary Lorentz frames in a very robust way. The argument depends only on general properties of $`1+1`$ dimensional field theories. The only possible loophole in the argument is that the spectrum of states whose energies in the large $`N`$ limit are of order $`1/N`$ might not be field theoretic. We believe we have provided plausible arguments which close this loophole, though more work is necessary to elucidate the nature of long string states in these interacting theories. The success of the Bekenstein-Hawking argument in predicting the correct density of states in these systems motivated us to apply it to quantum gravity in the bulk of this paper. ## 5 Conclusions and Questions What are we to make of the failure of Heisenberg quantum mechanics in light cone gauge for gravitational theories in four dimensions? Does this spell the end of the search for a nonperturbative Lagrangian formulation of M Theory ? There are several possibilities: * 1. Our reaction to the nonexistence of Green’s functions has been too violent, at least in the case of a Hagedorn spectrum. For example, in first quantized string theory, the system has a Hagedorn spectrum in ordinary Lorentz frames. Nonetheless a covariant Lagrangian and Hamiltonian formalism exists in conformal gauge. The time variable in this formalism is not connected to any spacetime variable. There are two reasons to be suspicious of the possibility of generalizing this formalism to a nonperturbative interacting theory. First of all, the light cone gauge formulation of the theory does not have any problems, and the covariant formalism bears a very close resemblance to it. Secondly, the free string theory is completely integrable and there are natural operators which communicate only with single states among the Hagedorn spectrum. 
One should however emphasize as well that the divergence of Green’s functions is much less dramatic for a Hagedorn spectrum than it is for black holes far from extremality. In particular, in the Hagedorn case, Euclidean Green’s functions exist as long as all time intervals are sufficiently long. Furthermore, the little string theories give us an example of interacting, non-gravitational systems with a Hagedorn spectrum. * 2. Perhaps, as in Matrix Theory, light cone M Theory in four dimensions can be formulated as some sort of compactification of a theory with an auxiliary Lorentz invariance under which the light cone time variable of M Theory transforms as the time component of a Lorentz vector. Then we can formulate the theory in the light cone frame of the auxiliary Lorentz group and deal with the Hagedorn spectrum by the same trick which works in first quantized string theory. This is the proposal of for treating little string theories. It is not clear how such a proposal could work for four dimensional M theory without compactifying a light-like direction (which we cannot do in this case as discussed in section 3). In a DLCQ theory, the auxiliary Lorentz group relates the lightcone Hamiltonian to the charges of longitudinally wrapped branes. These extra unwanted “momentum” quantum numbers disappear into the ultraviolet in the large $`N`$ limit, so the auxiliary Lorentz group does not act on the states which survive in the large $`N`$ limit (the “momentum” states all have an infinite energy in the limit of a non-compact light-cone description). Thus, this symmetry should not exist in the exact light-cone description of the four dimensional theory. Nonetheless, we should emphasize again that the closest analog to a putative four dimensional light cone M Theory is the little string theory, and this does have a DLCQ description which is an ordinary quantum mechanical theory. * 3. 
Perhaps M Theory with four Minkowski dimensions can only be defined as the limit of M Theory with AdS asymptotics. We have pointed out above that the current understanding of the AdS/CFT correspondence does not furnish us with a prescription for extracting the Minkowski S-Matrix from the AdS theory, but perhaps this difficulty can be overcome. The most serious objection we can find to such a proposal is that the most likely candidate theory would be of the form $`AdS_2\times S^2\times X`$, but $`AdS_2`$ theories seem to be topological . * 4. The real world is not Minkowski space but rather a cosmological space time. Perhaps we should be searching for the fundamental formulation of M Theory only in the context of closed cosmologies (cosmologies where all space-like slices are compact). In this case we do not expect the notion of Hamiltonian to have a fundamental significance. Time evolution is a concept which is recovered only in a semiclassical approximation. The problems we seem to have with formulating a Heisenberg picture quantum mechanics may signal a breakdown of this semiclassical approximation rather than a fundamental problem. We have to admit that we don’t understand how this could be the case. Whatever the resolution of these difficulties, we cannot end this paper without making note of two significant points. The first is the privileged position of four dimensions in this discussion. Gravitational theories with fewer than four (Minkowski) dimensions do not have many states. This has been advanced in as a reason why they are not the endpoint of cosmological evolution. On the other hand, the Bekenstein-Hawking formula tells us that in some sense four dimensional Minkowski spacetimes have more states at a given asymptotic energy level than higher dimensional spaces (and all Minkowski spaces have more states than any AdS space). Perhaps this observation will be the key to understanding why the world we observe is four dimensional. 
Our final comment is to emphasize the similarity between the high energy spectra of four dimensional light cone M Theory and of little string theories in an ordinary reference frame. This suggests that, although the light cone quantum mechanics describing four dimensional M Theory is not a conventional Lagrangian theory, it may be some sort of little string theory. This fascinating conjecture is an obvious direction for future work. ###### Acknowledgments. We are grateful to M. Berkooz, W. Fischler, D. Kutasov, N. Seiberg and Sasha Zamolodchikov for valuable discussions. This work was supported in part by the DOE under grant number DE-FG02-96ER40559.
# Electronic Density of States of Atomically Resolved Single-Walled Carbon Nanotubes: Van Hove Singularities and End States ## Abstract The electronic density of states of atomically resolved single-walled carbon nanotubes has been investigated using scanning tunneling microscopy. Peaks in the density of states due to the one-dimensional nanotube band structure have been characterized and compared with the results of tight-binding calculations. In addition, tunneling spectroscopy measurements recorded along the axis of an atomically-resolved nanotube exhibit new, low-energy peaks in the density of states near the tube end. Calculations suggest that these features arise from the specific arrangement of carbon atoms that close the nanotube end. The electronic properties of single-walled carbon nanotubes (SWNTs) are currently the focus of considerable interest . According to theory , SWNTs can exhibit either metallic or semiconducting behavior depending on diameter and helicity. Recent scanning tunneling microscopy (STM) studies of SWNTs have confirmed this predicted behavior, and have reported peaks in the density of states (DOS), Van Hove singularities (VHS), that are believed to reflect the 1D band structure of the SWNTs. A detailed experimental comparison with theory has not been carried out, although such a comparison is critical for advancing our understanding of these fascinating materials. For example, chiral SWNTs have unit cells that can be significantly larger than the cells of achiral SWNTs of similar diameter, and thus chiral tubes may exhibit a larger number of VHS than achiral ones. Recent theoretical work suggested, however, that semiconducting (or metallic) SWNTs of similar diameters will have a similar number of VHS near the Fermi level, independent of chiral angle. 
In addition, the electronic properties of localized SWNT structures, including end caps, junctions and bends , which are essential to proposed device applications, have not been characterized experimentally in atomically resolved structures. In this Letter, we report STM investigations of the electronic structure of atomically resolved SWNTs and compare these results with tight-binding calculations. Significantly, we find that the VHS in the DOS calculated using a straight-forward zone-folding approach agree with the major features observed in our experiments. We have observed new peaks in the local DOS (LDOS) at an end of a metallic SWNT and compared these results to calculations. This analysis suggests that the new peaks can be associated with a specific topology required to cap the SWNT. The implications of these results and important unresolved issues are discussed. Experimental procedures are described in detail elsewhere . In brief, SWNT samples were prepared by laser vaporization , purified and then deposited onto a Au (111)/mica substrate. Immediately after deposition, the sample was loaded into a UHV STM that was stabilized at 77 K; all of the experimental data reported in this Letter were recorded at 77 K. Imaging and spectroscopy were measured using etched tungsten tips with the bias ($`V`$) applied to the tip. Spectroscopy measurements were made by recording and averaging 5 to 10 tunneling current ($`I`$) versus $`V`$ ($`I`$-$`V`$) curves at specific locations on atomically resolved SWNTs. The feedback loop was open during the $`I`$-$`V`$ measurement while the setpoint was the same as that of imaging. The tunneling conductance, $`dI/dV`$, was obtained by numerical differentiation. An atomically resolved STM image of several SWNTs is shown in Fig. 1(a). The upper isolated SWNT rests on the Au surface and is on the edge of a small rope that contains around 10 nanotubes. Below we concentrate our analysis on this individual SWNT. 
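The conductance step described here (numerical differentiation of averaged $`I`$-$`V`$ curves, followed by formation of the normalized conductance $`(V/I)(dI/dV)`$) can be sketched as follows; the polynomial $`I(V)`$ is synthetic stand-in data, not the measured curves:

```python
import numpy as np

# Synthetic I-V curve standing in for an averaged STS measurement
# (illustrative only, not the data of the paper).
V = np.linspace(-1.0, 1.0, 401)      # bias (V)
I = 2.0 * V + 0.8 * V**3             # current (nA), metallic-like

# Tunneling conductance by numerical differentiation
dIdV = np.gradient(I, V)

# Normalized conductance (V/I)(dI/dV), proportional to the LDOS;
# the 0/0 point at V = 0 is replaced by its limit, which is 1 for
# any I-V curve that is linear through the origin.
norm = np.ones_like(V)
mask = np.abs(V) > 1e-9
norm[mask] = (V[mask] / I[mask]) * dIdV[mask]
```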
The diameter and chiral angle measured for this tube were 1.35 $`\pm `$ 0.1 nm and -20 $`\pm `$ $`1^{\circ }`$, respectively. These values are consistent with $`(13,7)`$ and $`(14,7)`$ indices , where $`(13,7)`$ and $`(14,7)`$ are expected to be metallic and semiconducting respectively. The $`I`$-$`V`$ data exhibits metallic behavior with relatively sharp, stepwise increases at larger $`|V|`$ (Fig. 1(b)). The $`I`$-$`V`$ curves have a finite slope, and thus the normalized conductance $`(V/I)(dI/dV)`$, which is proportional to the LDOS, has appreciable non-zero value around $`V=0`$ as expected for a metal (Fig. 1(b) inset). This suggests that the $`(13,7)`$ indices are the best description of the tube (we address this point further below). At larger $`|V|`$, several sharp peaks are clearly seen in $`dI/dV`$ and $`(V/I)(dI/dV)`$ vs. $`V`$. These peaks were observed in four independent data sets recorded at different positions along this atomically-resolved tube (but not on the Au(111) substrate), and thus we believe these reproducible features are intrinsic to the SWNT. We attribute these peaks to the VHS resulting from the extremal points in the 1D energy bands . The availability of spectroscopic data for atomically-resolved nanotubes represents a unique opportunity for comparison with theory. In this regard, we have calculated the band structure of a $`(13,7)`$ SWNT using the tight-binding method. If only $`\pi `$ and $`\pi ^{*}`$ orbitals are considered, the SWNT band structure can be constructed by zone-folding the 2D graphene band structure into the 1D Brillouin zone specified by the $`(n,m)`$ indices . Fig. 2(a) shows the graphene $`\pi `$ band structure around the corner point ($`𝐊`$) of the hexagonal Brillouin zone. For the metallic $`(13,7)`$ tube, the degenerate 1D bands which cross $`𝐊`$ result in a finite DOS at the Fermi level. 
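The zone-folding scheme is simple enough to reproduce numerically. The sketch below is our own implementation, using the standard $`(n,m)`$ nanotube conventions (Saito-Dresselhaus) and the paper's value $`V_{pp\pi }=2.5`$ eV; it folds the graphene $`\pi `$ bands onto the allowed 1D lines and checks the assignment quoted above: the computed diameter (about 1.38 nm) and chiral-angle magnitude (about 20.2 degrees; the sign only encodes handedness) match the measured values within error, the $`(13,7)`$ bands reach the Fermi level, and $`(14,7)`$ has a gap.

```python
import math
import numpy as np

a = 2.46       # graphene lattice constant (angstrom)
gamma = 2.5    # V_pp_pi hopping (eV), the value used in the text

def tube_bands(n, m, nk=2001):
    """Zone-folded pi-band energies |E| of an (n,m) tube
    (Saito-Dresselhaus conventions)."""
    a1 = np.array([math.sqrt(3) / 2, 0.5]) * a
    a2 = np.array([math.sqrt(3) / 2, -0.5]) * a
    b1 = np.array([1 / math.sqrt(3), 1.0]) * 2 * math.pi / a
    b2 = np.array([1 / math.sqrt(3), -1.0]) * 2 * math.pi / a
    dR = math.gcd(2 * n + m, 2 * m + n)
    N = 2 * (n * n + n * m + m * m) // dR   # hexagons per 1D cell
    t1, t2 = (2 * m + n) // dR, -(2 * n + m) // dR
    K1 = (-t2 * b1 + t1 * b2) / N           # circumferential (quantized)
    K2 = (m * b1 - n * b2) / N              # along the tube axis
    kpar = np.linspace(-0.5, 0.5, nk)       # 1D BZ in units of |K2|
    bands = []
    for mu in range(N):
        kv = mu * K1 + np.outer(kpar, K2)   # allowed lines in the 2D BZ
        f = 1 + np.exp(1j * kv @ a1) + np.exp(1j * kv @ a2)
        bands.append(gamma * np.abs(f))     # E = +/- gamma |f|
    return np.concatenate(bands)

def diameter_nm(n, m):
    return a * math.sqrt(n * n + n * m + m * m) / math.pi / 10

def chiral_angle_deg(n, m):                 # magnitude; sign = handedness
    return math.degrees(math.atan(math.sqrt(3) * m / (2 * n + m)))

# (13,7): n - m divisible by 3 -> metallic; (14,7) -> semiconducting
print(round(diameter_nm(13, 7), 2), round(chiral_angle_deg(13, 7), 1))  # 1.38 20.2
print(min(tube_bands(13, 7)), min(tube_bands(14, 7)))
```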
Note that the energy dispersion is isotropic (circular contours) near $`𝐊`$, and becomes anisotropic (rounded triangular contours) away from $`𝐊`$. Therefore, the first two VHS in the 1D bands closest to $`𝐊`$ (depicted by $`\mathrm{}`$, $`\mathrm{}`$) have a smaller splitting in energy due to the small anisotropy around K, while the next two VHS (depicted by $`\mathrm{}`$, $`\mathrm{}`$) have a larger splitting due to increasing anisotropy. If the energy dispersion were completely isotropic, both sets of peaks would be degenerate. Values for the hopping integral, $`V_{pp\pi }`$, reported in the literature range from ca. 2.4 to 2.9 eV . We used a value of 2.5 eV determined from previous measurements of the energy gap vs. diameter . Our STS data shows relatively good agreement with the DOS for a $`(13,7)`$ tube calculated using the zone-folding approach (Fig. 2(b)). The agreement between the VHS positions determined from our $`dI/dV`$ data and calculations is especially good below the Fermi energy ($`E_F`$), where the first seven peaks correspond well. Deviations between experimental data and calculations are larger above than below $`E_F`$. The observed differences may be due to band repulsion, which arises from the curvature-induced hybridization, or to surface-tube interactions that were not accounted for in our calculations. Detailed ab initio calculations have shown that the effect of curvature-induced hybridization is much stronger for $`\pi ^{*}/\sigma ^{*}`$ than for $`\pi /\sigma `$ orbitals. Bands above $`E_F`$ are thus more susceptible to the hybridization effect, and this could explain the greater deviations between experiment and calculation that we observe for the empty states. In the future, we believe that comparison between experiment and more detailed calculations should help (a) to resolve such subtle but important points, and (b) to understand how inter-tube and tube-substrate interactions affect SWNT band structure. 
In addition, we have compared these results to a recent $`\sigma +\pi `$ calculation for a $`(13,7)`$ SWNT and a $`\pi `$-only calculation for a closely related set of indices. The bottom curve shown in Fig. 2(b) is adapted from , and was obtained using 2s and 2p orbitals. Although the direct comparison is difficult due to a large broadening, all peaks within $`\pm `$ 2 eV match well with our $`\pi `$-only calculation. This comparison suggests that curvature-induced hybridization is only a small perturbation within the experimental energy scale ($`|V|<2`$ V) for the $`(13,7)`$ tube. We have also investigated the sensitivity of the DOS to the $`(n,m)`$ indices by calculating the DOS of the next closest metallic SWNT to our experimental diameter and angle; that is, a $`(12,6)`$ tube. Significantly, the calculated VHS for this $`(12,6)`$ tube deviate much more from the experimental DOS peaks than in the case of the $`(13,7)`$ tube (Fig. 2(b)). We believe that this analysis not only substantiates our assignment of the indices in Fig. 1(a), but more importantly, demonstrates the sensitivity of the detailed DOS to subtle variations in diameter and chirality. Lastly, we have also investigated the electronic structure of the ends of atomically-resolved SWNTs. Analogous to the resonant or localized surface states in bulk crystals , resonant or localized states are expected at the ends of nanotubes . In accordance with Euler’s rule, a capped end should contain six five-membered carbon rings (pentagons). The presence of these topological defects can cause dramatic changes in the LDOS near the end of a nanotube . Previous STM studies of multi-walled nanotubes reported localized states at the ends of these tubes, although the atomic structure of the tubes was not resolved. To the best of our knowledge, STM studies of the ends of SWNTs have not been reported. Fig. 3(a) shows an atomically-resolved image of the end of an isolated SWNT that has also been characterized spectroscopically. 
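The six-pentagon count quoted above follows from Euler's formula for a closed trivalent carbon network containing only pentagonal and hexagonal rings; a short symbolic derivation (our sketch):

```python
import sympy as sp

p, h = sp.symbols('p h', nonnegative=True)

# Closed sp2 cage: every vertex is 3-coordinated (3V = 2E), every edge
# borders two faces (2E = 5p + 6h), and the faces number F = p + h.
E = sp.Rational(5, 2) * p + 3 * h
V = sp.Rational(2, 3) * E
F = p + h

# Euler's formula V - E + F = 2 fixes the pentagon count independently
# of the number of hexagons:
sol = sp.solve(sp.Eq(V - E + F, 2), p)
print(sol)  # [12]: 12 pentagons for the closed tube, i.e. 6 per cap
```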
The rounded structure exhibited in this and bias-dependent images (e.g., insets Fig. 4(a)) suggests strongly that the end is closed, although the atomic structure cannot be obtained since the tube axis is parallel to the image plane. These images enable us to assign the nanotube $`(13,2)`$ indices (the left-handed counterpart to an $`(11,2)`$ tube). The expected metallic behavior of the $`(13,2)`$ tube was confirmed in $`(V/I)(dI/dV)`$ data recorded away from the end ($`\mathrm{}`$ in Fig. 3(a),(d)). Significantly, spectroscopic data recorded at and close to the SWNT end ($``$&$`\mathrm{}`$ in Fig. 3(d)) show two distinct peaks at 250 mV and 500 mV that decay and eventually disappear in the bulk DOS recorded far from the tube end ($`\mathrm{}`$). The peaks were observed in 10 independent data sets recorded at the tube end, and are very reproducible. To investigate the origin of these new features, we carried out tight-binding calculations for a $`(13,2)`$ model tube with different end caps (Fig. 3(b)). All the models exhibit a bulk DOS far from the end (solid curve in Fig. 3(c)); however, near the end, the LDOS show pronounced differences from the bulk DOS: two or more peaks appear above $`E_F`$, and these peaks decay upon moving away from the end to the bulk. Fig. 3(b) shows two representative cap models. These models were chosen to illustrate the relatively large peak differences for caps closed with isolated vs. adjacent pentagons. These topological configurations are not unique, although additional calculations show that other isolated (adjacent) pentagon configurations have similar LDOS. Significantly, the LDOS obtained from the calculation for cap II shows excellent agreement with the measured LDOS at the tube end, while cap I does not (Fig. 3(c-d)). 
The positions of the two end LDOS peaks as well as the first band edge of cap II match well with those from the experimental spectra. These results suggest that the topological arrangement of pentagons is responsible for the observed localized features in the experimental DOS at the SWNT end, and are thus similar to conclusions drawn from measurements on multi-walled nanotubes that were not atomically-resolved . The nature of the DOS peaks at the nanotube end was further investigated using bias-dependent STM imaging. At the bias of the strong DOS peak, -500 mV, the tip-nanotube separation $`h(x)`$ decays with increasing $`x`$, where $`x`$ is the distance from the tube end (inset, Fig. 4(a)). As indicated in Fig. 4(a), $`\mathrm{exp}(k_dh(x))`$, which is proportional to the integrated LDOS , sharply increases around the capped end of the tube and then decays with a characteristic length scale of ca. 1.2 nm. Our tight-binding calculation suggests that this decay can be attributed to resonant end states. Wavefunctions whose eigenenergies correspond to the LDOS peaks (250, 500 meV) decay exponentially from the end into the bulk but retain a finite magnitude (Fig. 4(b)); this type of decay is a signature of a resonant state . Note that $`h(x)`$ does not decay at $`V`$ far from the resonance (e.g. Fig. 3(a)), nor do wavefunctions whose eigenenergies are away from the end LDOS peaks decay with $`x`$ (Fig. 4(b)). Resonant end states in metallic tubes could serve an important function in electronic devices by improving the contact between nanotubes and electrodes. In summary, we have reported sharp VHS in the DOS of atomically-resolved SWNTs using STM, and have compared these data to tight-binding calculations for specific tube indices. A remarkably good agreement was obtained between experiment and $`\pi `$-only calculations, although deviations suggest that further work will be needed to understand fully the band structure of SWNTs in contact with surfaces. 
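A decay length like the ca. 1.2 nm quoted above is typically extracted from a log-linear least-squares fit of the exponentially decaying signal; a sketch on synthetic, noiseless data (an assumed decay length, not the measured $`h(x)`$):

```python
import numpy as np

x = np.linspace(0.0, 4.0, 40)          # distance from the tube end (nm)
ell_true = 1.2                          # assumed decay length (nm)
signal = np.exp(-x / ell_true)          # stand-in for exp(k_d h(x))

# Log-linear least-squares fit: log(signal) = -x/ell + const
slope, _ = np.polyfit(x, np.log(signal), 1)
ell_fit = -1.0 / slope
print(ell_fit)                          # recovers ~1.2 nm
```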
Pronounced peaks in the LDOS were also observed at the end of an atomically-resolved metallic SWNT. Comparison of these data with calculations suggests that the topological arrangement of pentagons is responsible for the localized features in the experiment. Such end states could be used to couple nanotubes effectively to electrodes in future nanotube-based devices. We thank R.E. Smalley (Rice) and C.-L. Cheung (Harvard) for SWNT samples. T.W.O. acknowledges predoctoral fellowship support from the NSF, and C.M.L. acknowledges support of this work by the National Science Foundation (DMR-9306684).
# Regularizing Property of the Maximal Acceleration Principle in Quantum Field Theory ## I Introduction During the last years, strong evidence has arisen in different areas of theoretical physics that the proper acceleration of elementary particles (in the general case, of any physical object) cannot be arbitrarily large, but should be bounded above by some universal value $`𝒜_m`$ (maximal proper acceleration)<sup>*</sup>. (Footnote *: We are using the unit system where $`\hbar =c=1`$; therefore the dimensions of acceleration and mass are the same.) For instance, in string theory , which seeks to present a unified description of all fundamental interactions, including gravity, it was derived that the string acceleration must be less than some critical value, determined by the string tension and its mass . Otherwise, Jeans-like instabilities arise in the string dynamics, leading to unbounded growth of the string length . On the other hand, the theories of fundamental and hadronic strings unambiguously predict an upper limit on the temperature of a thermodynamical ensemble of strings. This is due to the extremely fast (exponential) growth of the number of levels in the string spectrum as the energy or mass rises . At the critical temperature, the statistical weight of the energy levels completely suppresses the Boltzmann factor $`\mathrm{exp}(-E_n/T)`$ and, as a result, the statistical sum (partition function) of the string ensemble proves to be infinite. In hadronic physics this critical temperature is the Hagedorn temperature. (Footnote: R. Hagedorn derived this critical temperature in the framework of the bootstrap description of hadron dynamics, before the development of the hadronic string models.) Further, there is a well-known relation between the acceleration of an observer, $`a`$, and the temperature of the photon bath it detects, $`a=2\pi T`$ (the Unruh effect ). Thus the Hagedorn critical temperature gives rise to an upper limiting value of the proper acceleration. 
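For orientation it is useful to restore units in the Unruh relation: $`T=\hbar a/(2\pi ck_B)`$. The small numeric sketch below (standard constants, not taken from the paper) shows how weak the effect is at ordinary accelerations and how violent an acceleration even a 1 K bath requires:

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
kB = 1.380649e-23        # J / K

def unruh_temperature(a):
    """Unruh temperature for proper acceleration a (m/s^2):
    a = 2*pi*T in natural units, i.e. T = hbar*a/(2*pi*c*kB)."""
    return hbar * a / (2 * math.pi * c * kB)

print(unruh_temperature(9.8))          # ~4e-20 K for g = 9.8 m/s^2

# Proper acceleration corresponding to a 1 K thermal bath:
a_1K = 2 * math.pi * c * kB / hbar
print(a_1K)                            # ~2.5e20 m/s^2
```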
In the framework of an absolutely different approach, a conjecture about the existence of a maximal proper acceleration of an elementary particle was introduced by E.R. Caianiello . In these papers a new geometric setting for quantum mechanics was developed, in which quantization is interpreted as the introduction of a curvature in the relativistic eight-dimensional space-time tangent bundle $`TM=M_4TM_4`$, which incorporates both the space-time manifold $`M_4`$ and the momentum space $`TM_4`$. The standard operators of the Heisenberg algebra, $`\widehat{q}`$ and $`\widehat{p}`$, are represented as the covariant derivatives in $`TM`$, the quantum commutation relations being treated as the components of the curvature tensor. It is remarkable that the line element in $`TM_8`$ intrinsically involves an upper limit on the proper acceleration of the particle . The existence of the upper bound on the proper acceleration is intrinsically connected with the extended nature of the particle or string. Therefore one can expect that a quantum field theory incorporating a maximal proper acceleration could be free of the ultraviolet divergences that originate in the point-like character of the particles of local quantum field theory, or at least that the degree of these divergences could be lower. Without claiming to solve the problem of constructing a new quantum field theory embodying the principle of maximal acceleration, this note seeks to present some convincing evidence that the bound on the acceleration can at least soften the problem of divergences. The starting point of our consideration will be the classical model of a relativistic particle with maximal proper acceleration . Upon quantizing this model, the equations for the wave functions can be treated as the dynamical equations for the corresponding field function. 
Finally, we are interested in the Green’s function for these new field equations, specifically in the behaviour of this function in momentum space. It will be shown that using this Green’s function as the propagator in the Feynman integrals leads to an essential improvement of the convergence properties of the latter. The layout of this paper is as follows. In Sect. II the classical dynamics of the relativistic particle with maximal proper acceleration is presented both in the Lagrangian and in the Hamiltonian forms. Sect. III is devoted to the quantum theory of this model. In Sect. IV (Conclusion) the arguments presented in favor of the regularizing role of the maximal proper acceleration are briefly discussed. ## II Relativistic Particle with Maximal Proper Acceleration Let $`x^\mu (s),\mu =0,1,\mathrm{\dots },D-1`$ be the world trajectory of a particle in the $`D`$ dimensional Minkowski space-time with the Lorentz signature $`(+,-,\mathrm{\dots },-)`$. We are using here the natural parameterization of the particle trajectory, $`ds^2=dx_\mu dx^\mu `$. From the geometrical point of view the proper acceleration of the particle is nothing else than the curvature of its trajectory $$k^2(s)=-\frac{d^2x_\mu }{ds^2}\frac{d^2x^\mu }{ds^2}.$$ (1) In complete analogy with the action for the usual spinless relativistic particle $$S_0=-m\int 𝑑s$$ (2) the action of a particle with upper bounded acceleration is given by the formula $$S=-\mu _0\int \sqrt{𝒜_m^2-k^2(s)}𝑑s.$$ (3) Here $`\mu _0=m/𝒜_m`$ and $`𝒜_m`$ is the maximal proper acceleration of the particle. When $`𝒜_m\to \mathrm{\infty }`$ the action (3) reduces to (2). In , the classical dynamics generated by the action (3) has been investigated completely in the framework of a new method for integrating the equations of motion for Lagrangians with arbitrary dependence on the particle proper acceleration. It was shown, in particular, that the particle acceleration in this model always obeys the condition $`k^2(s)<𝒜_m^2`$. 
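Formulas (1) and (4) can be checked against the textbook example of hyperbolic motion with constant proper acceleration $`g`$. With the mostly-minus signature the acceleration vector is spacelike, so the positive curvature is $`k^2=-\ddot{x}_\mu \ddot{x}^\mu `$ in the natural parameterization. A sympy sketch in 1+1 dimensions (our notation):

```python
import sympy as sp

s, g, tau = sp.symbols('s g tau', positive=True)

# Hyperbolic motion with constant proper acceleration g, natural
# parameterization (mostly-minus metric, so u.u = +1):
x0 = sp.sinh(g * s) / g
x1 = sp.cosh(g * s) / g
dot = lambda u, v: u[0] * v[0] - u[1] * v[1]   # 1+1 Minkowski product

u = [sp.diff(x0, s), sp.diff(x1, s)]
acc = [sp.diff(x0, s, 2), sp.diff(x1, s, 2)]
assert sp.simplify(dot(u, u) - 1) == 0          # natural parameterization
k2 = -dot(acc, acc)                             # curvature: k^2 = -a.a
assert sp.simplify(k2 - g**2) == 0

# Same trajectory in an arbitrary parameterization s = tau**3, using
# the reparameterization-invariant formula (4):
y0, y1 = x0.subs(s, tau**3), x1.subs(s, tau**3)
yd = [sp.diff(y0, tau), sp.diff(y1, tau)]
ydd = [sp.diff(y0, tau, 2), sp.diff(y1, tau, 2)]
k2_tau = (dot(yd, ydd)**2 - dot(yd, yd) * dot(ydd, ydd)) / dot(yd, yd)**3
assert sp.simplify(k2_tau - g**2) == 0
```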
In order to quantize this model, one has to develop the Hamiltonian formalism. For this purpose an arbitrary parameterization of the particle trajectory $`x^\mu (\tau )`$ should be considered, where the evolution parameter $`\tau `$ is subjected only to the condition $`\dot{x}^2>0`$. A dot means differentiation with respect to $`\tau `$. In the $`\tau `$–parameterization the proper acceleration of the particle is determined by the formula $$k^2(\tau )=\frac{(\dot{x}\ddot{x})^2-\dot{x}^2\ddot{x}^2}{(\dot{x}^2)^3}.$$ (4) When passing from the natural parameterization $`(s)`$ to the arbitrary evolution parameter $`(\tau )`$, the relations $$\frac{d}{d\tau }=\sqrt{\dot{x}^2}\frac{d}{ds},\frac{d^2x_\mu }{ds^2}=\frac{\dot{x}^2\ddot{x}_\mu -(\dot{x}\ddot{x})\dot{x}_\mu }{(\dot{x}^2)^2},k\frac{\partial k}{\partial \ddot{x}_\mu }=-\frac{1}{\dot{x}^2}\frac{d^2x_\mu }{ds^2},$$ (5) $$k\frac{\partial k}{\partial \dot{x}_\mu }=\frac{1}{(\dot{x}^2)^4}\{\dot{x}^2(\dot{x}\ddot{x})\ddot{x}^\mu +[2\dot{x}^2\ddot{x}^2-3(\dot{x}\ddot{x})^2]\dot{x}^\mu \}$$ prove to be useful. According to Ostrogradskii , the Hamiltonian description of the model in question requires the introduction of the following canonical variables $$q_{1\mu }=x_\mu ,q_{2\mu }=\dot{x}_\mu ,$$ (6) $$p_1^\mu =\frac{\partial L}{\partial \dot{x}_\mu }-\frac{dp_2^\mu }{d\tau },p_2^\mu =\frac{\partial L}{\partial \ddot{x}_\mu },$$ where $`L`$ is the Lagrange function in the $`\tau `$–parameterization. The invariance of the action (3) under the transformations $`x^\mu \to x^\mu +a^\mu `$, $`a^\mu =\text{const}`$ entails the conservation of the energy–momentum vector $`p_1^\mu `$. Hence in our consideration the mass should be defined by $$M^2=p_1^2.$$ (7) In contrast to the usual relativistic particle with the action (2), the mass of the particle with restricted proper acceleration does not, in the general case, coincide with the parameter $`m`$ entering its action (3). 
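Equation (4) can be exercised on a concrete world line. For hyperbolic motion in 1+1 dimensions (a trajectory of constant proper acceleration $`a`$) the formula must return $`k^2=a^2`$ in any parameterization with $`\dot{x}^2>0`$. A minimal symbolic check (our own example, not from the text; signature $`(+,-)`$):

```python
import sympy as sp

tau, a = sp.symbols('tau a', positive=True)

def k2(x0, x1, t):
    """Proper acceleration squared of eq. (4) for a trajectory (x0(t), x1(t))
    in 1+1 Minkowski space with signature (+,-)."""
    xd = [sp.diff(x0, t), sp.diff(x1, t)]
    xdd = [sp.diff(x0, t, 2), sp.diff(x1, t, 2)]
    dot = lambda u, v: u[0] * v[0] - u[1] * v[1]
    return ((dot(xd, xdd) ** 2 - dot(xd, xd) * dot(xdd, xdd))
            / dot(xd, xd) ** 3)

# Hyperbolic motion: world line with constant proper acceleration a,
# first in the natural parameterization ...
x0, x1 = sp.sinh(a * tau) / a, sp.cosh(a * tau) / a
assert k2(x0, x1, tau).equals(a**2)

# ... and then reparameterized (tau -> tau**3 + tau keeps dot{x}^2 > 0):
# eq. (4) is reparameterization invariant, so k^2 = a^2 again
s = tau**3 + tau
assert k2(x0.subs(tau, s), x1.subs(tau, s), tau).equals(a**2)
```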
The invariance of the action (3) under the Lorentz transformations leads to the conservation of the angular momentum tensor $$M_{\mu \nu }=\underset{a=1}{\overset{2}{\sum }}(q_{a\mu }p_{a\nu }-q_{a\nu }p_{a\mu }).$$ (8) Usually the tensor $`M_{\mu \nu }`$ is used for constructing the spin variable $`S`$. In the case of the $`D`$–dimensional space–time $`S`$ is defined by $$S^2=\frac{W}{M^2},$$ (9) where $$W=\frac{1}{2}M_{\mu \nu }M^{\mu \nu }p_1^2-(M_{\mu \sigma }p_1^\mu )^2,M^2=p_1^2.$$ For $`D=4`$ the invariant $`W`$ is the squared Pauli–Lubanski vector $$W=-w_\mu w^\mu ,w_\mu =\frac{1}{2}\epsilon _{\mu \nu \rho \sigma }M^{\nu \rho }p_1^\sigma .$$ (10) Obviously the spin $`S`$ is also a conserved quantity. An essential distinction of Lagrangians with higher derivatives, like (3), is the following . The mass $`M`$ and the spin $`S`$ are not expressed in terms of the parameters of the Lagrangian, but are integrals of motion whose specific values should be determined by the initial conditions for the equations of motion. (The spin $`S`$ of the usual relativistic particle with the action (2), calculated by the formulae (9) or (10), identically equals zero because in this case there is only one pair of canonical variables $`q,p`$.) This point considerably complicates the transition to the secondary quantized theory (quantum field theory) starting just from the action (3). Usually the wave equation for a quantum field uniquely specifies the mass and the spin of the particles described by this field . The action (3), as well as (2), is invariant under the reparameterization $`\tau \to f(\tau )`$ with an arbitrary function $`f(\tau )`$. As a result there should be constraints in the phase space . Using the definition of $`p_2`$ in (6) and formulae (5) one easily deduces the primary constraint $$\varphi (q,p)=p_2^\mu q_{2\mu }\approx 0,$$ (11) where $`\approx `$ means a weak equality . 
There are no other primary constraints in the model under consideration because the rank of the Hessian matrix $$\frac{\partial ^2L}{\partial \ddot{x}_\mu \partial \ddot{x}_\nu }$$ equals $`D-1`$. The canonical Hamiltonian $$H=p_1^\mu \dot{x}_\mu +p_2^\mu \ddot{x}_\mu -L,$$ (12) rewritten in terms of the canonical variables $`q_a,p_a,a=1,2,`$ becomes $$H=p_1q_2+𝒜_m\sqrt{q_2^2(\mu _0^2q_2^2-p_2^2)}.$$ (13) The equations of motion in the phase space $$\frac{df}{dt}=\frac{\partial f}{\partial t}+\{f,H_\text{T}\}$$ (14) are generated by the total Hamiltonian $$H_\text{T}=H+\lambda (\tau )\varphi (q,p).$$ (15) Here $`f`$ is an arbitrary function of the canonical variables and of the evolution parameter $`\tau `$, and $`\lambda (\tau )`$ is a Lagrange multiplier. The Poisson brackets are defined in the standard way $$\{f,g\}=\underset{a=1}{\overset{2}{\sum }}\left(\frac{\partial f}{\partial p_a^\mu }\frac{\partial g}{\partial q_{a\mu }}-\frac{\partial f}{\partial q_a^\mu }\frac{\partial g}{\partial p_{a\mu }}\right).$$ (16) The requirement of stationarity of the primary constraint (11) $$\frac{d\varphi }{d\tau }=\{\varphi ,H_T\}\approx 0$$ (17) leads to the secondary constraint $$\{\varphi ,H_T\}=\{\varphi ,H\}=H\approx 0.$$ (18) As one could anticipate, the canonical Hamiltonian vanishes in a weak sense. It is a direct consequence of the reparameterization invariance of the initial action (3) . Obviously, at this step the procedure of constraint generation terminates. Thus in this model there are only two constraints, (11) and (18), and they are of the first class because $$\{\varphi ,H\}\approx 0.$$ (19) For the spin variable $`S`$ we deduce from Eqs. (9) or (10) $`S^2p_1^2`$ $`=`$ $`p_1^2p_2^2q_2^2+2(p_1p_2)(p_1q_2)(p_2q_2)`$ (21) $`-(p_1p_2)^2q_2^2-p_1^2(p_2q_2)^2-p_2^2(p_1q_2)^2.`$ The energy–momentum vector squared $`p_1^2`$ and the spin squared $`S^2`$ are integrals of motion . 
In the Hamiltonian dynamics this implies $$\frac{dp_1^2}{d\tau }=\{p_1^2,H_\text{T}\}\approx 0,$$ (22) $$\frac{dS^2}{d\tau }=\{S^2,H_\text{T}\}\approx 0.$$ (23) Specifying the values of $`p_1^2`$ and $`S^2`$ $`p_1^2`$ $`=`$ $`M^2,`$ (24) $`S^2(p,q)`$ $`=`$ $`S^2`$ (25) we pick a certain sector in the classical dynamics of the model, which is invariant with respect to the evolution in time. Further, it is very convenient to use the natural parameterization of the particle trajectory, putting $$\dot{x}^2=q_2^2=m^2.$$ (26) It is remarkable that this condition is in accordance with the Hamiltonian equations of motion $$\{q_2^2,H_\text{T}\}\approx 0,$$ (27) the Lagrange multiplier $`\lambda (\tau )`$ being undetermined. Therefore Eq. (26), on the same footing as Eqs. (24) and (25), should be treated as an invariant relation rather than a gauge condition . In the parameterization (26) the canonical momenta $`p_1`$ and $`p_2`$ prove to be proportional to $`\dot{x}=q_2`$ and $`\ddot{x}`$, respectively. Hence, we now have $$p_1p_2\approx 0$$ (28) in view of (11) or (26). ## III Quantum theory The transition to quantum theory is accomplished in the standard way by imposing the commutators $$[\widehat{A},\widehat{B}]=i\{A,B\},$$ (29) where $`\{\cdot ,\cdot \}`$ are the Poisson brackets (16), and $`A`$ and $`B`$ are arbitrary functions of the canonical variables $`q_a,p_a,a=1,2`$. All the constraints in the model concerned are of the first class and the gauge is not fixed, thus there is no need to use the Dirac brackets in our consideration . In quantum theory we are interested in Eqs. (24) and (25), which should be imposed as conditions on the physical state vectors $`|\psi >`$ $$(p_1^2-M^2)|\psi >=0,$$ (30) $$S^2|\psi >-S(S+D-1)|\psi >=0,S=0,1,2,\ldots ,$$ (31) where $`D`$ is the dimension of space–time. The classical value of the spin variable $`S^2`$ has been substituted here by the eigenvalues of the spin operator $`S^2`$ in the $`D`$-dimensional space-time, $`S(S+D-1)`$. 
These equations fix the values of the Casimir operators of the Poincare group, specifying the irreducible representation of this group in terms of the state vectors $`|\psi >`$ and providing in this way the basis for a consistent description of the elementary particles in the relativistic quantum theory. As was noted in the Introduction, we shall treat equations (30) and (31) in a twofold way: as the equations for the wave function $`|\psi >`$ in the quantum mechanics of the model at hand, and as the wave equations for the corresponding relativistic quantum field $`\mathrm{\Phi }(q_1,q_2)`$ (the secondary quantized theory). This field, as well as the wave function $`|\psi >`$, should depend on two arguments $`q_1^\mu =x^\mu `$ and $`q_2^\mu =\dot{x}^\mu `$, which are treated as the canonical coordinates in the Ostrogradskii formalism . Before proceeding further we have to take into account the constraints. Substituting Eqs. (11), (18), (26) and (28) into Eq. (21) we obtain $$S^2=\frac{p_2^2}{m^2}\left[1-\left(\frac{m^2}{M^2}-\frac{𝒜_m^2}{M^2}\frac{p_2^2}{m^2}\right)\right].$$ (32) Now the second field equation (31) can be rewritten in the form $$(\Box _2+m_1^2)(\Box _2+m_2^2)\mathrm{\Phi }(q_1,q_2)=0,$$ (33) where $$\Box _a\equiv \partial _{a\mu }\partial ^{a\mu },\partial _{a\mu }=\frac{\partial }{\partial q_a^\mu },a=1,2.$$ (34) There is no summation with respect to $`a`$ in Eq. (34). The dependence of the quantum field $`\mathrm{\Phi }(q_1,q_2)`$ on $`q_1`$ is determined by the Klein–Gordon equation (30) with the mass $`M`$ $$(\Box _1+M^2)\mathrm{\Phi }(q_1,q_2)=0.$$ (35) For $`M\ne 0`$ the parameters $`m_a^2,a=1,2`$ in Eq. (33) are given by $$m_{1,2}^2=\frac{m^2}{2}\left[1-\mu ^2\pm \sqrt{(1-\mu ^2)^2+4S(S+D-1)\frac{𝒜_m^2}{M^2}}\right],$$ (36) where $`\mu =m/M`$. In the massless case $`(M=0)`$, we have $$m_1^2=0,m_2^2=-\mu _0^2m^2.$$ (37) Thus in the general case in Eq. (33) there is one real ‘mass’ $`m_1^2\ge 0`$ and one tachyonic ‘mass’ $`m_2^2<0`$. 
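Equation (36) is easy to evaluate numerically. The sketch below (illustrative parameter values of our choosing, in arbitrary units) checks that for $`M\ne 0`$ one root is non-negative and the other non-positive, and verifies the product identity $`m_1^2m_2^2=-m^4S(S+D-1)𝒜_m^2/M^2`$ that follows directly from (36):

```python
import math

def masses_squared(m, M, A_m, S, D=4):
    """The parameters m_{1,2}^2 of eq. (36) for M != 0, with mu = m/M."""
    mu2 = (m / M) ** 2
    root = math.sqrt((1.0 - mu2) ** 2 + 4.0 * S * (S + D - 1) * (A_m / M) ** 2)
    m1sq = 0.5 * m**2 * (1.0 - mu2 + root)
    m2sq = 0.5 * m**2 * (1.0 - mu2 - root)
    return m1sq, m2sq

# Illustrative values: one real and one tachyonic root for every spin
for S in (0, 1, 2):
    m1sq, m2sq = masses_squared(m=1.0, M=2.0, A_m=0.5, S=S)
    assert m1sq >= 0.0 and m2sq <= 0.0
    # product identity implied by eq. (36): m1^2 m2^2 = -m^4 S(S+D-1) A_m^2/M^2
    assert abs(m1sq * m2sq + S * (S + 3) * (0.5 / 2.0) ** 2) < 1e-12
```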
The latter is not dangerous here because the physical mass of the quanta described by the field $`\mathrm{\Phi }(q_1,q_2)`$ is equal to $`M`$ according to Eq. (35). From Eqs. (33) and (35) we deduce the Green’s function of the quantum field theory concerned $$G(k_1,k_2)=\frac{1}{(k_1^2-M^2)(k_2^2-m_1^2)(k_2^2-m_2^2)}.$$ (38) The regularizing property of this propagator can be elucidated by considering the one–loop Feynman diagram in such a bilocal field theory $$\int G^2(k_1,k_2)d^4k_1d^4k_2\sim \int \frac{d^4k_1}{k_1^4}\int \frac{d^4k_2}{k_2^8}\sim \int \frac{d^4k_1}{k_1^4}\int _{(k_1)}\frac{dk_2}{k_2^5}\sim \int \frac{d^4k_1}{k_1^4}\frac{1}{k_1^4}.$$ (39) Here we have taken into account that the momenta $`k_1`$ and $`k_2`$ will be, as usual, ‘mixed’ in this diagram describing the interacting field $`\mathrm{\Phi }(q_1,q_2)`$, and the integration over $`dk_2`$ has been carried out first. As a result, we obtained in Eq. (39) an additional factor $`k_1^{-4}`$ in comparison with the usual one–loop Feynman diagram with the propagator $`(k^2-m^2)^{-1}`$. This gives evidence that the maximal acceleration can really provide a natural regularization in the corresponding quantum field theory. ## IV Conclusion In comparison with other physical regulators of quantum field theory, for example an elementary length, which also originates in the extended nature of elementary particles, the principle of maximal acceleration is obviously favorable because it preserves the continuous space–time. In order to put the arguments in favor of the regularizing property of the maximal acceleration principle onto a more rigorous footing, it is required to develop the theory of the bilocal quantum field introduced above. Special attention should be paid here to the interaction mechanism in this formalism. However, all these problems are far beyond the scope of this short note. Acknowledgments This work has been accomplished during the stay of one of the authors (V.V.N.) at Salerno University. It is a pleasant duty for him to thank Prof. G. Scarpetta, Dr. G. 
Lambiase, and Dr. A. Feoli for warm hospitality. The work was supported by Russian Foundation of Fundamental Research (Grant No. 97-01-00745) and by fund MURST ex 60% and ex 40% DPR 382/80.
# LOCALLY BIASED GALAXY FORMATION AND LARGE SCALE STRUCTURE ## 1 Introduction Within a decade of defining his classification system for galaxies, Hubble (1936) observed that elliptical and spiral galaxies tend to reside in different environments, with elliptical galaxies preferentially represented in rich clusters and spiral galaxies more numerous in the field. More recent studies have confirmed and detailed this dependence of clustering on morphological type, and they have shown that luminous galaxies cluster more strongly than faint galaxies (e.g., Hamilton (1988); Park et al. (1994); Loveday et al. (1995); Benoist et al. (1996); Willmer, da Costa & Pellegrini (1998)) and that optically selected galaxies cluster more strongly than infrared selected galaxies (e.g., Lahav, Nemiroff, & Piran (1990); Saunders, Rowan-Robinson & Lawrence (1992); Peacock & Dodds (1994)). At most one of these galaxy populations can trace the underlying distribution of mass, and it is more likely that none of them does. The idea of biased galaxy formation — preferential formation of galaxies in high density environments — was originally introduced with the hope of reconciling the low apparent mass-to-light ratios of bound galaxy structures with the theoretically motivated assumption of an $`\mathrm{\Omega }=1`$ universe (Davis et al. (1985); Bardeen et al. (1986)). However, it is now broadly recognized that bias, or anti-bias, is a possibility that we are stuck with, whatever the value of $`\mathrm{\Omega }`$. Cosmological N-body simulations can predict statistical properties of the mass distribution in a specified cosmological model, but they cannot predict the clustering of galaxies without an additional prescription for the relation between galaxies and mass. 
In this paper, we apply a variety of simple bias prescriptions to N-body simulations in order to see how assumptions about bias influence measurable properties of large scale structure, such as the correlation function, the power spectrum, moments of galaxy counts, pairwise velocities, and the relation between galaxy density and peculiar velocity fields. While the full theory of galaxy formation is likely to be complicated, a plausible and still very general assumption is that the efficiency of galaxy formation is determined by properties of the local environment. For practical purposes, “local” means a scale comparable to the galaxy correlation length, the distance over which material in non-linear structures has mixed during the history of the universe. In a local theory of galaxy formation, when a collapsing region of the mass distribution decides whether to become a galaxy (or what type of galaxy to become, or how many galaxies to become), it has no direct knowledge of material that it has never encountered. On the other hand, “non-local” biasing, in which the efficiency of galaxy formation varies coherently over larger scales, requires a more exotic physical mechanism, such as suppression or stimulation of galaxy formation by ionizing radiation or by some other influence that can propagate at the speed of light. In this paper, we examine several examples of local biasing, with the efficiency of galaxy formation determined by the density, pressure, or geometry of the mass distribution in a surrounding $`4h^{-1}\mathrm{Mpc}`$ sphere (where $`h\equiv H_0/100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$). We compare the effects of these local bias models to those of non-local models inspired by the proposals of Babul & White (1991) and Bower et al. (1993). 
For purposes of our study, it is important to maintain the distinction between a model of biased galaxy formation, which is a fully specified prescription for populating a mass distribution with galaxies, and a description of the effects of bias. The widely used “linear bias model” — $`\delta _g=b\delta _m`$, where $`\delta _m\equiv \rho /\overline{\rho }-1`$ is the mass density contrast, $`\delta _g`$ is the galaxy density contrast, and $`b`$ is the bias factor — is more properly regarded as a description rather than a model. One cannot use the $`\delta _g=b\delta _m`$ formula to assign galaxies to the mass distribution on any scale where the minimum density contrast is $`\delta _{m,\mathrm{min}}<-1/b`$, since it would demand an unphysically negative galaxy density. However, linear bias could emerge as a reasonable approximation to the effects of a physical bias model on large scales, at least for some purposes. The relation between the galaxy and mass correlation functions is often written $`\xi _g(r)=b^2(r)\xi _m(r)`$, which is again a description rather than a model. While it might seem reasonable to let $`b(r)`$ be an arbitrary function of scale, a series of analytic arguments of increasing generality suggest that for any local bias the quantity $`b(r)`$ must asymptote to a constant value in the limit $`\xi (r)\ll 1`$, making it impossible for local bias to resolve a discrepancy between the predicted and observed shapes of $`\xi (r)`$ on large scales (Coles (1993); Fry & Gaztañaga (1993); Gaztanaga & Baugh (1998); Scherrer & Weinberg (1998)). Even the most general of these arguments relies on assumptions about the mass clustering and the bias model, and it yields only an asymptotic result. The numerical approach here complements the analytic studies by exhibiting fully non-linear solutions for a diverse set of biasing models, including some that do not satisfy the formal assumptions of the analytic calculations. 
In principle, the relation between galaxies and mass should be a prediction of a cosmological theory, not an input to it. There has been substantial progress towards making such a priori predictions in recent years using hydrodynamic simulations (Cen & Ostriker (1992), 1993, 1998; Katz, Hernquist, & Weinberg (1992), 1996, 1998; Blanton et al. (1998)), N-body simulations that attempt to resolve galaxy mass halos within larger virialized systems (Colín et al. (1998)), or a combination of lower resolution N-body simulations with semi-analytic models of galaxy formation (Kauffmann et al. 1997, 1998; Governato et al. (1998)). However, the semi-analytic models require a series of approximations and assumptions, and the numerical studies suffer from limited resolution and simulation volumes and also rely on simplified descriptions of star formation and supernova feedback. At present these approaches yield physically motivated models of biased galaxy formation that can be tested against observations, but they do not provide definitive predictions. In this paper we take a different and in some ways less ambitious tack. We combine large volume, relatively low resolution N-body simulations with simple bias prescriptions that are deliberately designed to “parametrize ignorance” about galaxy formation, by illustrating a wide range of possibilities. We extend Weinberg’s (1995) study along similar lines by using better simulations and examining a much wider range of galaxy clustering measures. Our study also overlaps that of Mann, Peacock, & Heavens (1998, hereafter MPH98), who applied several different biasing prescriptions to large N-body simulations and investigated their influence on the galaxy power spectrum and on cluster mass-to-light ratios. Our goal here is to see which measurable properties of galaxy clustering are sensitive to assumptions about bias and which properties are robust, or at least are affected in a simple way by local bias. 
Our numerical study complements the general analytic treatment of non-linear, stochastic biasing models by Dekel & Lahav (1998, hereafter DL98). The biasing prescriptions that we adopt in this paper are all Eulerian models, meaning that the efficiency of galaxy formation is determined by properties of the evolved mass density field. These can be contrasted with Lagrangian bias models, such as the high peaks model (Davis et al. (1985); Bardeen et al. (1986)), in which the efficiency of galaxy formation depends on the initial (linear) density field. Because galaxies can merge as structure evolves, neither a pure Lagrangian nor a pure Eulerian description of bias is entirely realistic. However, for studies of galaxy clustering at a specified redshift, either approach may provide a reasonable approximate description, and the distinction between them is more technical than physical. The distinction between local and non-local models, on the other hand, is fundamentally tied to the physical mechanisms that cause bias. Eulerian, Lagrangian, and intermediate approaches would yield different predictions for the evolution of bias with redshift, but we do not examine this issue in this paper. The evolution of bias has been studied using the numerical and semi-analytic approaches mentioned above and in a more general analytic framework by Fry (1996) and Tegmark & Peebles (1998). We begin our investigation in §2 with a simple but informative example, in which the galaxy population as a whole traces the mass but the mix of morphological types is determined by a local morphology-density relation. This example illustrates some general properties of local bias models, and our calculations provide generic predictions, testable with the next generation of redshift surveys, for the dependence of the large scale correlation function and power spectrum on morphological type. 
We then move to models in which the galaxy population has an overall bias, which is applied using a variety of local and non-local prescriptions. We describe the models and examine their influence on a number of different measures of large scale structure in §3. Section 4 provides a fairly complete summary of our results and discusses their implications in light of other recent studies and anticipated observational developments. ## 2 The Morphology-Density Relation and the Large Scale Clustering of Galaxy Types The longest recognized and most clearly established form of “bias” in the galaxy distribution is the marked preference of early-type galaxies for dense environments (Hubble (1936); Zwicky (1937); Abell (1958)). There have been numerous efforts to quantify this connection between morphology and environment (e.g., Dressler (1980); Postman & Geller (1984); Lahav & Saslaw (1992); Whitmore, Gilmore, & Jones (1993)), and several groups have used angular correlation functions, redshift-space correlation functions, or de-projected real-space correlation functions to characterize the dependence of clustering on morphology (Davis & Geller (1976); Giovanelli, Haynes & Chincarini (1986); Loveday et al. (1995); Guzzo et al. (1997); Willmer, da Costa & Pellegrini (1998)). The stronger clustering of early-type galaxies explains the stronger clustering found in optically selected galaxy catalogs relative to IRAS-selected catalogs, which preferentially include dusty, late-type galaxies (Babul & Postman (1990)). Postman & Geller (1984, hereafter PG84) parametrize the morphology-environment connection as a relation between morphological fractions and local galaxy density: for overdensities $`\rho _g/\overline{\rho }_g\lesssim 100`$ the fractions of ellipticals, S0s, and spirals are independent of environment, but at higher densities the elliptical and S0 fractions climb steadily, eventually saturating for $`\rho _g/\overline{\rho }_g\gtrsim 10^5`$. 
In this Section, we consider a model in which the galaxy population as a whole traces the underlying mass distribution but morphological types are assigned according to PG84’s prescription. Our results will show what should be expected from large redshift surveys like the 2 Degree Field (2dF) survey (Colless (1998)) and the Sloan Digital Sky Survey (SDSS; Gunn & Weinberg (1995)) if a local morphology-density relation is a correct description of the influence of environment on morphological type. This investigation also provides a “warmup” for the investigations in §3, since the morphology-density relation is a simple example of a local bias mechanism. For our underlying mass distribution, we use the output of an N-body simulation of an open cold dark matter (CDM) model performed by Cole et al. (1998, hereafter CHWF98). The cosmological parameters are $`\mathrm{\Omega }_m=0.4`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, $`\mathrm{\Gamma }=0.25`$, $`n=1`$, where $`\mathrm{\Gamma }\equiv \mathrm{\Omega }_mh\mathrm{exp}(-\mathrm{\Omega }_b-\mathrm{\Omega }_b/\mathrm{\Omega }_m)`$ (Sugiyama (1995)), $`\mathrm{\Omega }_m,\mathrm{\Omega }_b`$, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ are the matter, baryon, and vacuum energy density parameters, and $`n=1`$ implies a scale-invariant inflationary power spectrum. The model is cluster-normalized (White, Efstathiou, & Frenk (1993); Eke, Cole & Frenk (1996)) and has an rms mass fluctuation $`\sigma _8=0.95`$ in spheres of radius $`8h^{-1}\mathrm{Mpc}`$. The amplitude $`\sigma _8`$ is related to the linear theory power spectrum $`P(k)`$ by $$\sigma _8^2=\int _0^{\infty }4\pi k^2P(k)\stackrel{~}{W}^2(kR)dk,$$ (1) where $`\stackrel{~}{W}(kR)`$ is the Fourier transform of a spherical top hat filter with radius $`R=8h^{-1}\mathrm{Mpc}`$. The CHWF98 simulation uses a modified version of the AP3M code of Couchman (1991) to follow the gravitational evolution of $`192^3`$ particles in a periodic cubical box of side $`345.6h^{-1}\mathrm{Mpc}`$. 
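Equation (1) is straightforward to evaluate numerically, using the top-hat window transform $`\stackrel{~}{W}(x)=3(\mathrm{sin}x-x\mathrm{cos}x)/x^3`$. The sketch below uses a toy power spectrum of our own choosing (not the CDM transfer function of the paper), merely to illustrate normalizing an amplitude to a target $`\sigma _8`$:

```python
import numpy as np

def w_tophat(x):
    """Fourier transform of a spherical top-hat window: 3(sin x - x cos x)/x^3."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma(pk, R, kmin=1e-4, kmax=50.0, n=20000):
    """rms fluctuation in spheres of radius R, eq. (1), evaluated with the
    trapezoid rule in ln k (the extra factor k comes from dk = k dln k)."""
    k = np.logspace(np.log10(kmin), np.log10(kmax), n)
    f = 4.0 * np.pi * k**3 * pk(k) * w_tophat(k * R) ** 2
    return np.sqrt(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(np.log(k))))

# Toy power spectrum shape (hypothetical; not the paper's transfer function)
pk_shape = lambda k: k / (1.0 + (k / 0.05) ** 2) ** 2

# Normalize the amplitude so that sigma_8 = 0.95, as for the O4S model
A = (0.95 / sigma(pk_shape, 8.0)) ** 2
pk = lambda k: A * pk_shape(k)
print(round(sigma(pk, 8.0), 3))  # 0.95 by construction
```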
The gravitational force softening is $`ϵ=90h^{-1}\mathrm{kpc}`$ comoving (for a Plummer force law) and the particle mass is $`m_p=6.47\times 10^{11}h^{-1}M_{\odot }`$. Further details of the simulation are given in CHWF98; this is their model O4S. Our implementation of the morphology-density relation is based loosely on figures 1 and 2 of PG84. We assign each particle in the simulation a density equal to the mean density in a sphere enclosing its five nearest neighbors. We assign the particle a spiral (Sp), S0, or elliptical (E) morphological type at random, with relative probabilities $`F_{\mathrm{Sp}}`$, $`F_{\mathrm{S0}}`$, $`F_\mathrm{E}`$ that depend on the density. For $`\rho <\rho _F=100\overline{\rho }`$, the morphological fractions are $`F_{\mathrm{Sp}}=0.7`$, $`F_{\mathrm{S0}}=0.2`$, $`F_\mathrm{E}=0.1`$. For $`\rho _F<\rho <\rho _C=6\times 10^4\overline{\rho }`$, the fractions are $`F_{\mathrm{Sp}}`$ $`=`$ $`0.7-0.6\alpha `$ $`F_{\mathrm{S0}}`$ $`=`$ $`0.2+0.3\alpha `$ (2) $`F_\mathrm{E}`$ $`=`$ $`1-F_{\mathrm{Sp}}-F_{\mathrm{S0}}`$ $`\alpha `$ $`=`$ $`\mathrm{log}_{10}(\rho /\rho _F)/\mathrm{log}_{10}(\rho _C/\rho _F).`$ For $`\rho >\rho _C`$, the morphological fractions saturate at $`F_{\mathrm{Sp}}=0.1`$, $`F_{\mathrm{S0}}=0.5`$, $`F_\mathrm{E}=0.4`$. With this formulation, $`13.7\%`$ of the particles are classified as ellipticals, $`23.7\%`$ as S0s, and the remaining $`62.6\%`$ as spirals. Changes to $`\rho _F`$ or $`\rho _C`$ or the density assignment method would alter the numerical values of the bias parameters discussed below, but they would not change our results in a qualitative way. Figure 1 compares the spatial distributions of the different types of galaxies. The full galaxy population, a random subset of the N-body particles with space density $`n_g=0.01h^3\mathrm{Mpc}^{-3}`$, is shown in panel (a). 
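The prescription of equation (2) can be written down directly. The sketch below is our reading of the interpolation (with $`\alpha `$ running from 0 at $`\rho _F`$ to 1 at $`\rho _C`$, so that the fractions join the quoted field and saturation values continuously):

```python
import math

RHO_F, RHO_C = 1.0e2, 6.0e4   # overdensity thresholds, in units of the mean

def morphology_fractions(rho):
    """Return (F_Sp, F_S0, F_E) for a local overdensity rho/rho_mean,
    following the interpolation of eq. (2)."""
    if rho < RHO_F:
        return 0.7, 0.2, 0.1
    if rho >= RHO_C:
        return 0.1, 0.5, 0.4
    alpha = math.log10(rho / RHO_F) / math.log10(RHO_C / RHO_F)
    f_sp = 0.7 - 0.6 * alpha
    f_s0 = 0.2 + 0.3 * alpha
    return f_sp, f_s0, 1.0 - f_sp - f_s0

# Field, intermediate, and cluster-core environments
for rho in (10.0, 2.45e3, 1.0e6):
    print(rho, morphology_fractions(rho))
```

A particle's type would then be drawn at random with these relative probabilities, as described in the text.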
For panels (b)-(d), we have artificially boosted the selection probabilities so that each morphological sub-population has density $`0.01h^3\mathrm{Mpc}^{-3}`$; the visual differences among the panels therefore reflect the differences in the clustering strengths of the galaxy types rather than their different number densities. For our adopted parameters, these visual differences are rather subtle, though the contrast between clusters and voids is more noticeable for the ellipticals (panel b) than for the spirals (panel d), as expected. Figure 2 (top panel) compares the two-point correlation functions $`\xi (r)`$ of the three galaxy populations to the two-point correlation function of the mass distribution, which is equal to that of the full galaxy population by construction. For $`r\lesssim 4h^{-1}\mathrm{Mpc}`$, the correlation functions of the E and S0 galaxies are both stronger and steeper than that of the spiral galaxies, which is just the behavior found in observational studies of the angular and spatial correlation functions of different galaxy types (Davis & Geller (1976); Giovanelli et al. 1986; Loveday et al. (1995); Guzzo et al. (1997); Willmer et al. 1998). If one were to fit power laws to these correlation functions and extrapolate to large $`r`$, they would cross at $`r\sim 10h^{-1}\mathrm{Mpc}`$, with the early-type galaxies more weakly clustered at large scales. The true behavior in the simulation is quite different, however. At $`r\gtrsim 4h^{-1}\mathrm{Mpc}`$, the shape of the $`\xi (r)`$ of the early-type galaxies changes to match that of the spirals, and the amplitude of $`\xi (r)`$ for the early-type galaxies is higher at all scales. The large scale behavior is consistent with the observational measurements cited above, but these do not have sufficient precision at large $`r`$ to clearly show the E, S0, and spiral correlation functions tracing each other with a constant logarithmic offset. 
The prediction that the large scale shape of $`\xi (r)`$ is independent of morphological type should be easily testable with the 2dF and SDSS redshift surveys. The bottom panel in Figure 2 shows the bias function $`b_\xi (r)`$, defined as $$b_\xi (r)=\sqrt{\frac{\xi _g(r)}{\xi _m(r)}},$$ (3) where $`\xi _g(r)`$ and $`\xi _m(r)`$ are the correlation functions of the galaxy and the mass distributions, respectively. This function becomes independent of scale beyond about $`4h^{-1}\mathrm{Mpc}`$, and it settles at a different level for each of the galaxy types. We will show in §3 below that this constancy of $`b_\xi (r)`$ at large $`r`$ is a generic feature of local biasing models. The early-type galaxies are always positively biased with respect to the mass distribution, $`b_\xi (r)>1`$, while the spirals are anti-biased on all scales, $`b_\xi (r)<1`$. The clustering amplitude of the elliptical galaxies on large scales is a factor of 1.3 larger than that of the spiral galaxies, consistent with the ratio derived using the galaxies in the Stromlo-APM redshift survey (Loveday et al. (1995)). The bias of the early-type galaxies increases towards smaller scales, while that of the spirals decreases. The increase in the relative clustering strength of ellipticals and spirals towards smaller scales is observed in the clustering analysis of the Southern Sky Redshift Survey (Willmer et al. 1998). The bias of the full galaxy population is $`b_\xi (r)=1`$ by construction. The top panel of Figure 3 shows the power spectra, $`P(k)`$, of the mass and the galaxy distributions. We form continuous density fields by cloud-in-cell (CIC) binning the discrete particle distributions onto a $`192^3`$ grid, and compute $`P(k)`$ using a Fast Fourier Transform (FFT). We sample all the galaxy distributions to the same number density of $`0.01h^3`$Mpc<sup>-3</sup>, so all the power spectra have the same shot noise contribution. 
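The CIC assignment and FFT power spectrum estimate described above can be sketched in a few lines. This is a generic illustration of the method (the grid size, binning, and normalization conventions are ours, and no shot-noise subtraction or CIC deconvolution is applied):

```python
import numpy as np

def cic_density(pos, ngrid, boxsize):
    """Cloud-in-cell assignment of particle positions (N x 3 array, same
    length units as boxsize) onto a periodic grid; returns delta = rho/mean - 1."""
    grid = np.zeros((ngrid, ngrid, ngrid))
    x = pos / boxsize * ngrid
    i = np.floor(x).astype(int)
    f = x - i                     # fractional offsets within a cell
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[:, 0] if dx else 1 - f[:, 0])
                     * (f[:, 1] if dy else 1 - f[:, 1])
                     * (f[:, 2] if dz else 1 - f[:, 2]))
                np.add.at(grid, ((i[:, 0] + dx) % ngrid,
                                 (i[:, 1] + dy) % ngrid,
                                 (i[:, 2] + dz) % ngrid), w)
    return grid / grid.mean() - 1.0

def power_spectrum(delta, boxsize, nbins=10):
    """Spherically averaged P(k) of an overdensity grid via an FFT,
    in (length)^3 units; the k=0 mode is excluded from the bins."""
    ngrid = delta.shape[0]
    vol = boxsize**3
    dk = np.fft.rfftn(delta) * vol / ngrid**3     # FFT with the d^3x measure
    p3d = np.abs(dk) ** 2 / vol                   # |delta_k|^2 / V estimator
    kf = 2.0 * np.pi / boxsize                    # fundamental mode
    kx = np.fft.fftfreq(ngrid, d=1.0 / ngrid) * kf
    kz = np.fft.rfftfreq(ngrid, d=1.0 / ngrid) * kf
    kmag = np.sqrt(kx[:, None, None] ** 2
                   + kx[None, :, None] ** 2
                   + kz[None, None, :] ** 2)
    edges = np.linspace(kf, kmag.max(), nbins + 1)
    idx = np.digitize(kmag.ravel(), edges)
    pk = np.array([p3d.ravel()[idx == b].mean() if np.any(idx == b) else 0.0
                   for b in range(1, nbins + 1)])
    return 0.5 * (edges[:-1] + edges[1:]), pk

# Quick check on an unclustered (Poisson) particle set: at low k the
# estimate should sit near the shot-noise level V/N (here 100^3/20000 = 50)
rng = np.random.default_rng(0)
pos = rng.random((20000, 3)) * 100.0
delta = cic_density(pos, 16, 100.0)
kcen, pk = power_spectrum(delta, 100.0)
```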
The amplification of the clustering of early-type galaxies and suppression of the clustering of spiral galaxies is similar to that seen in Figure 2, but the $`P(k)`$ plot better reveals the behavior on the largest scales. The bottom panel shows the bias functions in Fourier space, $`b_P(k)`$, defined as $$b_P(k)=\sqrt{\frac{P_g(k)}{P_m(k)}},$$ (4) where $`P_g(k)`$ and $`P_m(k)`$ are the power spectra of the galaxy and the mass distributions, respectively. This bias function also becomes independent of scale on large scales and has the same relative behavior as $`b_\xi (r)`$ for the different galaxy types. The assignment of morphological types based on local density produces a difference in the clustering strength of different galaxy types at all scales. The reason for the large scale bias is the same one identified by Kaiser (1984) in his discussion of cluster correlations: regions that are highly overdense on small scales tend to reside in regions where the large scale, background density is also high. The bias functions in Figures 2 and 3 are scale-dependent in the regime where clustering is non-linear, but they asymptote to constant values on large scales. We will soon see that this behavior is a generic property of local biasing models. We have assumed in this Section that the galaxy population as a whole traces the mass, but we would expect the relative clustering of different galaxy types to be similar even if the galaxy population has a net bias or anti-bias, at least if the mechanism that produces that bias is local. While the numerical values of the relative bias parameters depend on our choices of $`\rho _F`$ and $`\rho _C`$ (eq. 2) and our method of defining the local density, the qualitative prediction that the correlation functions and power spectra of E, S0, and spiral galaxies have the same shape at large scales should hold in any model where the morphological type is determined by the local density. 
## 3 The Influence of Bias on Galaxy Clustering If the environment influences the formation efficiency of individual galaxy types, then it is likely to influence the overall efficiency of galaxy formation as well. While we are far from having a definitive theoretical account of galaxy formation, the numerical and semi-analytic studies cited in §1 suggest that galaxies should be significantly biased tracers of structure, at least in some regimes of luminosity and redshift. With the morphology-density results as background, we now turn to a much broader set of biasing models, some local and some non-local. We no longer attempt to discriminate among galaxy types but instead consider models in which the galaxy population as a whole is biased with respect to the underlying mass distribution. For our underlying model (described in §3.1), we adopt an $`\mathrm{\Omega }=1`$, tilted CDM model. This model has an rms mass fluctuation amplitude $`\sigma _8=0.55`$, about a factor of two below the observed fluctuation amplitude of bright optical galaxies (e.g., Davis & Peebles (1983); Loveday et al. (1995)). We apply a number of different local and non-local Eulerian biasing prescriptions (described in §3.2 and §3.3) to this model, each of which has a single adjustable parameter that controls the strength of the bias. We choose the value of this parameter by requiring that the ratio of rms galaxy fluctuations to rms mass fluctuations, $$b_\sigma =\frac{\sigma _g(R)}{\sigma _m(R)},$$ (5) is $`b_\sigma =2`$ in spheres of radius $`R=12h^1\mathrm{Mpc}`$. The definition of the bias factor in equation (5) is identical to the quantity $`b_{\mathrm{var}}`$ defined by DL98. We fix the bias factor at $`R=12h^1\mathrm{Mpc}`$ rather than $`R=8h^1\mathrm{Mpc}`$ so that our local bias models have similar bias at very large scales. 
We sample the biased galaxy distributions to an average density of $`0.01h^3\mathrm{Mpc}^3`$, so that all of the galaxy distributions have comparable shot-noise properties. ### 3.1 Mass Distribution In all of the Eulerian biasing schemes that we study in this paper, we choose the galaxies from the underlying mass distribution of a tilted CDM model with $`\mathrm{\Omega }_m=1,\mathrm{\Omega }_\mathrm{\Lambda }=0`$, and $`\mathrm{\Omega }_b=0.05`$. The primordial slope of the power spectrum, $`n`$, is adjusted to simultaneously match the CMB anisotropies on large scales and the observed masses of galaxy clusters on small scales. The cluster constraint requires $`\sigma _8=0.55`$ (White et al. 1993), which implies $`n=0.803`$ if one incorporates the standard inflationary prediction for gravitational wave contributions to the COBE anisotropies. We compute the CDM transfer function using the analytical fitting functions provided by Eisenstein & Hu (1998). Our model parameters are the same as those of the E2 model of Cole et al. (1997, see also CHWF98). However, for the purposes of our investigation it is more important to have good statistics on very large scales than to have high gravitational force resolution. We therefore perform our own N-body simulations of this model using a particle-mesh (PM) N-body code written by C. Park, which is described and tested by Park (1990; see also Park (1991)). Each simulation uses $`200^3`$ particles and a $`400^3`$ force mesh to follow the gravitational evolution in a periodic cube $`400h^1\mathrm{Mpc}`$ on a side. We start the gravitational evolution from a redshift $`z=20`$ and follow it to $`z=0`$ in 80 equal incremental steps of the expansion scale factor $`a(t)`$. 
We also evolved this mass distribution using twice this number of time steps and found that the correlation function and the velocity dispersion of the resulting mass distributions changed very little, showing that our results are robust to increasing the temporal resolution of the N-body simulation. We ran four such simulations with different random phases for the Fourier components of the initial density field, and the results we show are averaged over these four independent mass/galaxy distributions. ### 3.2 Local Biasing Schemes We now describe our local Eulerian biasing models in detail. The first two schemes, density-threshold bias and power-law bias, select galaxies based on the local mass density alone. The sheet bias scheme selects galaxies based on the geometry of the local mass distribution. We also investigated two other local bias schemes, one based on pressure and one on geometry, that we will omit from our detailed presentation of results because they prove very similar to the density-threshold and sheet bias schemes, respectively. We compute all the local properties associated with a mass particle, including the local mass density and the moment of inertia tensor, in a sphere of radius $`4h^1`$Mpc around that particle. #### 3.2.1 Density-threshold bias In order for galaxies to have a net bias $`b_\sigma >1`$, they should preferentially populate regions of higher mass density. The simplest prescription that achieves this is a density-threshold bias: galaxy formation is entirely suppressed below some threshold, and galaxies form with equal efficiency per unit mass in all regions above the threshold. This biasing scheme was adopted in some of the early numerical investigations of CDM models (e.g., Melott & Fry (1986)), and it has been used extensively by J. Einasto and collaborators in theoretical modeling of voids and superclusters (e.g., Einasto et al. (1994)). 
In the density-threshold bias model, the probability that a particle with local mass density $`\rho _m`$ is selected as a galaxy is $$P=\{\begin{array}{cc}A\hfill & \text{if }\rho _m\geq B,\hfill \\ 0\hfill & \text{if }\rho _m<B\text{.}\hfill \end{array}$$ (6) We choose the threshold density $`B`$ so that the bias factor $`b_\sigma (12h^1\mathrm{Mpc})=2`$ (eq. ) and the probability $`A`$ so that the mean galaxy density is $`n_g=0.01h^3\mathrm{Mpc}^3`$. #### 3.2.2 Power-law bias The threshold model is extreme in the sense that galaxy formation is completely suppressed at low densities and independent of density above the threshold. In our second model, we make the galaxy density a steadily increasing, power-law function of the mass density, $`(\rho _g/\overline{\rho }_g)\propto (\rho _m/\overline{\rho }_m)^B`$. The selection probability for a particle with local mass density $`\rho _m`$ to become a galaxy is therefore $$P=A(\rho _m/\overline{\rho }_m)^{B-1}.$$ (7) We choose the value of $`B`$ so that $`b_\sigma =2`$ and the value of $`A`$ so that $`n_g=0.01h^3`$Mpc<sup>-3</sup>. This biasing relation is similar to the one suggested by Cen & Ostriker (1993) based on hydrodynamic simulations incorporating physical models for galaxy formation (Cen & Ostriker (1992)), but it differs in that there is no quadratic term that saturates the biasing relation at high mass densities. Little & Weinberg (1994) have compared the influence of density-threshold bias and Cen-Ostriker bias on the void probability function, showing that the size of empty voids is substantially larger for threshold bias at fixed $`b_\sigma `$. MPH98 included Cen-Ostriker bias in their general study of Eulerian bias schemes, and CHWF98 employ both density-threshold bias and power-law bias in their mock catalogs of the 2dF and SDSS redshift surveys. #### 3.2.3 Sheet bias The two biasing schemes described above choose galaxies with a probability that is some function of the local mass density alone.
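The two density-based selection rules, equations (6) and (7), reduce to a few lines on an array of local densities. The sketch below is illustrative rather than the authors' code: `tune_A_threshold` shows one way to fix $`A`$ from a target mean galaxy fraction, and the power-law probability is capped at unity, a detail the text does not spell out.

```python
import numpy as np

def threshold_select(rho, B, A, rng):
    """Eq. (6): a particle is selected with probability A if its local
    density rho is at least the threshold B, and never otherwise."""
    return (rho >= B) & (rng.random(rho.size) < A)

def powerlaw_select(rho, B, A, rng):
    """Eq. (7): selection probability P = A (rho/rho_bar)**(B-1), capped
    at unity so that it remains a valid probability."""
    p = np.minimum(A * (rho / rho.mean())**(B - 1.0), 1.0)
    return rng.random(rho.size) < p

def tune_A_threshold(rho, B, target_fraction):
    """Choose A so the expected selected fraction equals target_fraction."""
    frac_above = (rho >= B).mean()
    return target_fraction / frac_above
```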
In principle, we could envisage other local properties that can influence the efficiency of galaxy formation. Redshift surveys of the local universe reveal a striking pattern in the galaxy distribution, with a large number of galaxies lying in thin walls and narrow filaments on the periphery of huge voids (de Lapparent, Huchra & Geller (1986)). The process of gravitational condensation and cooling could be substantially different in a structure that is effectively 2-dimensional rather than 3-dimensional (e.g., Ostriker & Cowie (1981); Vishniac, Ostriker, & Bertschinger (1985); White & Ostriker (1990)). These considerations, and the desire to investigate a radical alternative to density-based models, motivate us to consider a biasing scheme in which the efficiency of galaxy formation depends solely on the geometry of the local mass distribution. In order to identify sheet-like regions of the mass distribution, we compute the three eigenvalues $`\lambda _3>\lambda _2>\lambda _1`$ of the moment of inertia tensor in the $`4h^1\mathrm{Mpc}`$ sphere surrounding each N-body particle. The selection probability for a particle to be a galaxy is $$P=\{\begin{array}{cc}A\hfill & \text{if }\lambda _3/\lambda _1\geq B\hfill \\ 0\hfill & \text{if }\lambda _3/\lambda _1<B\text{.}\hfill \end{array}$$ (8) This procedure selects galaxies in regions where the mass distribution is planar, $`\lambda _3\gg \lambda _1`$, and avoids regions where the mass distribution is isotropic, $`\lambda _3\simeq \lambda _1`$. The flatness ratio $`\lambda _3/\lambda _1`$ is correlated with the mass density, so raising the threshold $`B`$ increases the bias factor $`b_\sigma `$. We choose the value of $`B`$ so that $`b_\sigma (12h^1\mathrm{Mpc})=2`$ and the value of $`A`$ so that $`n_g=0.01h^3\mathrm{Mpc}^3`$. Because the density and the flatness ratio are not perfectly correlated, the sheet bias scheme differs significantly from the density-threshold bias scheme.
In fact, one cannot obtain an arbitrarily high bias simply by raising the value of $`B`$ in equation (8), but for our adopted parameters we find that we can always achieve $`b_\sigma =2`$. We eliminate all particles that have fewer than 18 neighbors within $`4h^1\mathrm{Mpc}`$ ($`\rho _m/\overline{\rho }_m<0.55`$) because the moment-of-inertia tensor would be too noisy. #### 3.2.4 Pressure and filament bias We also investigated two other local biasing schemes considered by Weinberg (1995). The first of these, pressure bias, is similar to density-threshold bias, except that the galaxies are selected based on a threshold in $`\rho _m\sigma _v^2`$ rather than $`\rho _m`$, where $`\sigma _v^2`$ is the velocity dispersion in a $`4h^1\mathrm{Mpc}`$ sphere. This scheme is motivated by the possibility that the pressure of the local intergalactic medium could play a role in stimulating galaxy formation. However, the velocity dispersion $`\sigma _v^2`$ is itself fairly well correlated with $`\rho _m`$, and we decided to omit results for the pressure bias model from our figures below because they are nearly identical to those of the density-threshold bias model (as also found by Weinberg 1995). This similarity of results seems somewhat at odds with Blanton et al.’s (1998) finding that the inclusion of gas temperature or dark matter velocity dispersion as a variable in the bias relation increases the scale-dependence of the bias factor. We return to this issue in §4. The second additional scheme, filament bias, is similar to sheet bias (eq. ), except that the selection is based on the eigenvalue ratio $`\lambda _3/\lambda _2`$ instead of $`\lambda _3/\lambda _1`$. Our quantitative results for filament bias are somewhat different from those of sheet bias, but they are similar enough that we decided not to include them as separate curves or panels in our figures. 
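The geometric selections can be sketched by diagonalizing the second-moment tensor of the neighbor offsets around each particle (the usual reading of "moment of inertia tensor" in this context, and the one that makes $`\lambda _3/\lambda _1`$ large for planar configurations). The brute-force $`O(N^2)`$ neighbor search and the function names below are illustrative choices, not the paper's implementation.

```python
import numpy as np

def inertia_eigenvalues(pos, radius=4.0, min_neighbors=18):
    """Sorted eigenvalues (ascending) of the second-moment tensor of the
    neighbor offsets within `radius` of each particle.  Rows are NaN where
    a particle has fewer than `min_neighbors` neighbors, which the text
    discards as too noisy.  Brute-force O(N^2) for clarity."""
    lam = np.full((len(pos), 3), np.nan)
    for i, p in enumerate(pos):
        d = pos - p
        near = d[(d**2).sum(axis=1) < radius**2]
        if len(near) < min_neighbors:
            continue
        tensor = near.T @ near / len(near)
        lam[i] = np.sort(np.linalg.eigvalsh(tensor))  # lam1 <= lam2 <= lam3
    return lam

def sheet_select(pos, B, A, rng, radius=4.0):
    """Eq. (8): select with probability A where lambda_3/lambda_1 >= B."""
    lam = inertia_eigenvalues(pos, radius)
    return (lam[:, 2] / lam[:, 0] >= B) & (rng.random(len(pos)) < A)
```

Selecting on `lam[:, 2] / lam[:, 1]` instead of `lam[:, 2] / lam[:, 0]` gives the filament-bias variant.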
In the case of an identical threshold flatness ratio $`B`$, the particles selected by filament bias are a subset of those selected by sheet bias. However, a given value of $`B`$ produces different values of $`b_\sigma `$ for sheet and filament bias. ### 3.3 Non-local Biasing Schemes In addition to the local biasing schemes described in §3.2, we will investigate two non-local biasing schemes, inspired by the models of Babul & White (1991, hereafter BW91) and Bower et al. (1993, hereafter BCFW93). These papers were a response to measurements of angular clustering in the APM galaxy catalog (Maddox et al. (1990)), which showed that the “standard” CDM model (SCDM, with $`\mathrm{\Omega }=1`$, $`h=0.5`$, $`n=1`$) predicted the wrong power spectrum shape on large scales. BW91 showed that the SCDM power spectrum could be reconciled with the APM measurements if the formation of luminous galaxies was suppressed in randomly placed spheres with a filling factor $`f\simeq 0.7`$ and radii $`R\simeq 15h^1\mathrm{Mpc}`$, perhaps because of photoionization by quasars (Dekel & Rees (1987); Braun, Dekel, & Shapiro (1988)). BCFW93 achieved a similar result with a scheme that modulates galaxy luminosities less drastically but over larger scales (Gaussian filter radii $`R\simeq 20h^1\mathrm{Mpc}`$). Both papers argue that the discrepancy between SCDM and the APM data could arise from galaxy formation physics rather than a fundamental failing of the cosmological model. However, while both papers introduced non-local bias models, neither addressed the question of whether non-locality was essential to achieving the desired modulation of the large scale galaxy correlation function. Subsequent analytic arguments have suggested that non-locality is indeed essential (Coles (1993); Fry & Gaztañaga (1993); Gaztañaga & Baugh (1998); Scherrer & Weinberg (1998)), and our results below will strengthen the case.
In our first scheme, which we will refer to simply as “non-local bias,” we select galaxies with a probability $$P_{\mathrm{nl}}=P_\mathrm{l}(1+\kappa \overline{\nu }),$$ (9) where $`P_{\mathrm{nl}}`$ and $`P_\mathrm{l}`$ refer respectively to the probability of selecting a mass particle to be a galaxy in the presence and absence of non-local effects. We model the local probability $`P_\mathrm{l}`$ using the power-law scheme described in §3.2.2. We fix the index of this power law, $`B`$, by requiring that $`b_\sigma =2`$ on a scale of $`12h^1\mathrm{Mpc}`$ and the probability $`A`$ so that $`n_g=0.01h^3`$Mpc<sup>-3</sup>. The non-locality is introduced through the dependence of $`P_{\mathrm{nl}}`$ on $`\overline{\nu }`$, which is the density contrast in a top hat sphere of radius $`R_{\mathrm{nl}}`$ about the particle, in units of the rms mass fluctuation on this scale. The large scale smoothing radius $`R_{\mathrm{nl}}`$ defines the scale of the non-local influence, and the modulation coefficient $`\kappa `$ controls its strength. Here, we choose $`R_{\mathrm{nl}}=30h^1\mathrm{Mpc}`$ and $`\kappa =0.25`$ for purely illustrative purposes, guided mainly by the fact that non-local effects on this scale can reconcile the SCDM model with the APM correlation function (BCFW93; note our use of a top hat filter rather than a Gaussian filter). While the non-local biasing scheme resembles BCFW93’s cooperative galaxy formation model, our implementation is quite different in detail — Eulerian instead of Lagrangian, with power-law bias taking the place of high peak bias. In our second non-local scheme, which we will refer to as “void bias,” we randomly place spheres of radius $`R_v=15h^1\mathrm{Mpc}`$ and eliminate all particles lying within these spheres. We randomly select galaxies from the remaining particles so that $`n_g=0.01h^3`$Mpc<sup>-3</sup>. We choose the filling factor of the voids so that $`b_\sigma =2`$ at $`R=12h^1\mathrm{Mpc}`$.
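Both non-local ingredients can be sketched directly. The fragment below is illustrative rather than the paper's implementation: the modulated probability of equation (9) is clipped to $`[0,1]`$, and the number of void spheres is set from the Poisson coverage relation $`f=1-\mathrm{exp}(-nV/L^3)`$; both details are assumptions not spelled out in the text.

```python
import numpy as np

def nonlocal_modulation(p_local, nu_bar, kappa=0.25):
    """Eq. (9): modulate a local selection probability by the large scale
    density contrast nu_bar (measured in a large top-hat sphere, in units
    of the rms fluctuation on that scale), clipped to stay a probability."""
    return np.clip(p_local * (1.0 + kappa * nu_bar), 0.0, 1.0)

def void_select(pos, box, R_v, filling_factor, n_keep, rng):
    """Void bias: eliminate particles inside randomly placed spheres of
    radius R_v, then sample the survivors down to n_keep galaxies.
    Returns the indices of the selected particles."""
    # number of spheres from Poisson coverage: f = 1 - exp(-n V / L^3)
    n_spheres = int(-np.log(1.0 - filling_factor) * box**3
                    / (4.0 / 3.0 * np.pi * R_v**3))
    centers = rng.uniform(0, box, size=(n_spheres, 3))
    alive = np.ones(len(pos), bool)
    for c in centers:
        d = (pos - c + box / 2) % box - box / 2   # periodic separation
        alive &= (d**2).sum(axis=1) >= R_v**2
    idx = np.flatnonzero(alive)
    return rng.choice(idx, size=min(n_keep, idx.size), replace=False)
```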
This model is similar to that of BW91, who suggested that the voids might be produced by photoionization of pre-galactic gas by quasars. Although one could easily construct many other non-local bias prescriptions, these two models will suffice to illustrate the differing effects of local and non-local bias. We regard non-local bias as a priori less plausible than local bias because of the difficulty of producing coherent modulations in galaxy properties over such large scales. A wide-ranging exploration of non-local models therefore seems justified only if future observational developments force us to consider them more seriously. ### 3.4 Galaxy Distributions Figure 4 shows the mass distribution (in panel a) and the various biased galaxy distributions from one of our simulations. We plot the locations of all the galaxy particles that lie in a region $`20h^1\mathrm{Mpc}`$ thick about the center of the cube. The galaxy distributions appear strikingly different from each other even though all of them have the same bias factor $`b_\sigma =2`$ at $`R=12h^1\mathrm{Mpc}`$. Density-threshold bias (panel b) completely eliminates galaxies in the low density regions, leaving only the peaks and filaments of the mass distribution populated by galaxies. The power-law bias distribution (panel c) looks like a more dynamically evolved version of the mass distribution, as the contrast between the overdense and the underdense regions is enhanced. However, the underdense regions are not completely devoid of galaxies as they are in the threshold bias model. The sheet bias scheme (panel d) preferentially selects galaxies lying in anisotropic structures, avoiding both the underdense regions and the more isotropic clusters that are so prominent in the power-law model. In this two-dimensional representation, the galaxies appear to lie on thin elongated filaments, with very few knot-like features. 
The galaxy distribution of the non-local bias model (panel e) looks remarkably similar to that of the local power-law bias model, although they have very different large scale clustering properties as we will see below. The void bias scheme (panel f) completely wipes out all galaxies in some regions. The void radius $`R_v`$ imprints an obvious characteristic length scale on the galaxy distribution. ### 3.5 Correlation Function and Power Spectrum Figure 5 shows the correlation functions $`\xi (r)`$ and the bias functions $`b_\xi (r)`$ of the different galaxy distributions under the local biasing schemes. Our normalization condition $`\sigma _g(12h^1\mathrm{Mpc})=2\sigma _m(12h^1\mathrm{Mpc})`$ imposes an integral constraint on $`\xi _g(r)`$ that is, approximately, $`\int _0^{12}r^2\xi (r)𝑑r=\mathrm{constant}`$. The bias models that produce higher $`\xi (r)`$ on small scales must therefore compensate with lower $`\xi (r)`$ on large scales. However, in all three cases, the correlation bias function $`b_\xi (r)`$ becomes independent of scale for $`r>8h^1\mathrm{Mpc}`$, and for the two density-based schemes $`b_\xi (r)`$ is nearly scale-independent for $`r>5h^1\mathrm{Mpc}`$. As noted in §3.2, results for a pressure-threshold bias are similar to those for density-threshold bias, and results for filament bias are similar to those for sheet bias. We also found scale-independent large scale amplification of $`\xi (r)`$ for the morphology-density relation (Figure 2), which is itself a form of local bias. Our results for the density-based bias schemes are as expected in light of the analytic argument by Scherrer & Weinberg (1998), which shows that $`b_\xi (r)`$ tends to a constant in the limit $`\xi (r)\ll 1`$ for any local density bias applied to a field with a hierarchical clustering pattern. At least in the examples we have examined, the condition $`\xi (r)<1`$ seems to be sufficient to reach the asymptotic regime.
More importantly, we find the same asymptotically constant behavior of $`b_\xi (r)`$ for sheet, pressure, and filament bias, where the galaxy density is not a simple function of the mass density. While we cannot examine all conceivable local bias schemes in a finite set of numerical experiments, the combination of our results with the general analytic arguments for local density schemes strongly suggests that scale-independent bias in the regime $`\xi (r)<1`$ is a generic property of all local models of galaxy formation. Figure 6 shows correlation functions and bias functions in the same format as Figure 5, but for the two non-local biasing schemes described in §3.3, “non-local bias” and “void bias.” We also show the results of the local power-law biasing scheme (from Figure 5) for comparison. The non-local bias scheme clearly leads to enhanced clustering on scales larger than about $`10h^1\mathrm{Mpc}`$. The bias function increases monotonically up to the largest scale shown in the Figure, growing by almost a factor of two between $`10h^1\mathrm{Mpc}`$ and $`30h^1\mathrm{Mpc}`$. Void bias boosts the correlation function substantially on scales $`r\lesssim R_v=15h^1\mathrm{Mpc}`$, while at larger scales the imprint of empty regions with a characteristic diameter causes $`\xi (r)`$ to turn over rapidly. Our results confirm the arguments of BW91 and BCFW93 that large scale modulations of galaxy formation can alter the shape of $`\xi (r)`$ enough to reconcile a standard CDM mass correlation function with the APM galaxy correlation function. However, the results in Figure 5 imply that non-locality is not an incidental feature of the BW91 and BCFW93 models but is essential to obtaining a scale-dependent $`b_\xi (r)`$ at large $`r`$. Figure 7 shows the power spectra $`P(k)`$ and the Fourier space bias functions $`b_P(k)`$ for the local biasing schemes.
The arrows marked $`k_8=2\pi /(2\times 8)=0.3927h`$ Mpc<sup>-1</sup> and $`k_{12}=2\pi /(2\times 12)=0.2618h`$ Mpc<sup>-1</sup> represent the wavenumbers corresponding to the length scales $`8h^1\mathrm{Mpc}`$ and $`12h^1\mathrm{Mpc}`$, at which we normalize the amplitude of the mass power spectrum and the bias factor, respectively. This bias function $`b_P(k)`$ is also independent of scale on large scales, where it is approximately equal to $`b_\sigma (12h^1\mathrm{Mpc})=2`$. This Figure quantifies clustering on scales up to the fundamental wavelength of our $`400h^1\mathrm{Mpc}`$ simulation cube, well beyond those probed by the correlation functions in Figure 5. The constancy of $`b_P(k)`$ at small $`k`$ reinforces our conclusion that the bias functions of local biasing schemes remain scale-independent on all large scales. The bias functions become truly scale-independent only for $`k<k_s=0.2h`$ Mpc<sup>-1</sup> \[corresponding to length scales $`R>2\pi /(2\times k_s)\simeq 15h^1\mathrm{Mpc}`$\], although our binning in Fourier space at the low wavenumbers is admittedly coarse. This value of $`k_s`$ is comparable to the fundamental frequency of the simulation box used by MPH98, who found a mild, monotonic scale-dependence of $`b_P(k)`$ on smaller scales ($`k>0.1h`$ Mpc<sup>-1</sup>). Figure 8 shows the power spectra and the bias functions for the non-local and void biasing schemes, again with the local power-law results shown for comparison. The arrow marked $`k_{30}=2\pi /(2\times 30)=0.1047h`$ Mpc<sup>-1</sup> represents the wavenumber corresponding to $`30h^1\mathrm{Mpc}`$, the scale of influence in the non-local model. The power spectrum of the non-local model shows strong curvature at roughly this scale and dramatically enhanced amplitude on larger scales. The bias function $`b_P(k)`$ increases with scale and does not settle to a constant even near the largest scales shown, the fundamental frequency of the cubical simulation box.
The void bias model once again exhibits a characteristic feature at the scale of the voids, corresponding to the wavenumber $`k_v=2\pi /(2\times R_v)=0.2094h`$ Mpc<sup>-1</sup>, with a large difference in $`b_P(k)`$ on either side of it. The non-local and the void bias schemes can both enhance the amplitude and change the shape of the underlying mass power spectrum, even on very large scales. ### 3.6 Higher-Order Moments The correlation function and power spectrum measure second moments of the galaxy distribution as a function of scale. We now turn to some of the simplest measures of higher-order clustering, the hierarchical amplitudes $$S_J(R)\equiv \frac{\langle \delta ^J\rangle _c}{\langle \delta ^2\rangle ^{J-1}},$$ (10) where $`\langle \delta ^J\rangle _c`$ and $`\langle \delta ^2\rangle `$ are the $`J`$-th connected moment and the variance, respectively, of the density contrast field smoothed with a top hat filter of radius $`R`$. The third and fourth connected moments are $`\langle \delta ^3\rangle `$ and $`\langle \delta ^4\rangle -3\langle \delta ^2\rangle ^2`$, respectively. For Gaussian initial conditions, $`(J-1)`$-th order gravitational perturbation theory predicts that $`S_J(R)`$ is independent of the amplitude of $`P(k)`$, independent of $`R`$ for a power-law $`P(k)`$, and only weakly dependent on $`R`$ (i.e., varying much more slowly than $`\langle \delta ^J\rangle _c`$) for a more general $`P(k)`$ (Fry (1984); Juszkiewicz, Bouchet, & Colombi (1993); Bernardeau (1994)). A local linear or non-linear transformation of the smoothed field $`\delta (𝐱)`$ — i.e., a form of local bias applied on scale $`R`$ — alters the values of $`S_J`$, but it does not destroy this hierarchical behavior (Fry & Gaztañaga (1993)). The values of $`S_J`$ can therefore provide a diagnostic for the properties of the biasing relation. The hierarchical behavior of moments of the observed IRAS and optical galaxy distributions (e.g., Meiksin, Szapudi & Szalay (1992); Bouchet et al.
(1993); Gaztañaga (1992), 1994; Kim & Strauss (1998)) supports the hypothesis of Gaussian initial conditions, and it has also been used as an argument against non-local bias (Frieman & Gaztañaga (1994)). Figure 9 shows the rms fluctuation $`\sigma \equiv \langle \delta ^2\rangle ^{1/2}`$ of the mass and galaxy density fields as a function of the top hat smoothing radius $`R_{\mathrm{th}}`$. For the sake of clarity, we show error bars only for the mass distribution; error bars for the galaxy distributions are similar. We compute the statistical errors as the dispersion in the measurements from four independent simulations, divided by $`\sqrt{3}`$. Our normalization condition, $`b_\sigma (12)=2`$, forces the rms fluctuations of all models to be equal at $`R_{\mathrm{th}}=12h^1\mathrm{Mpc}`$. In Figures 10 and 11, we plot the hierarchical amplitudes $`S_3`$ and $`S_4`$ as a function of $`\mathrm{log}\sigma `$, which can be related to a top hat filter radius using Figure 9. We compute all moments by CIC-binning the mass or galaxy distributions onto a $`200^3`$ grid and smoothing this density field with top hat filters of successively larger radii. Solid lines in Figure 10 show $`S_3`$ for the various bias models, with error bars estimated from the dispersion of $`S_3`$ values measured from the four independent simulations, divided by $`\sqrt{3}`$. The filled circles are the same in all the panels and show the values of $`S_3`$ predicted by perturbation theory for the underlying mass distribution, computed for our tilted CDM power spectrum using the equations in §3.5 of Bernardeau (1994). The predictions agree very well with the values measured from the underlying mass distribution (panel a), demonstrating that the scale dependence of $`S_3`$ of the mass distribution can be reproduced quite accurately using perturbation theory if the matter power spectrum is known a priori.
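Once a smoothed density field is in hand, the $`S_3`$ and $`S_4`$ estimators are short. The sketch below is illustrative (it takes an already-smoothed field as input, whereas the measurements above smooth a CIC-binned $`200^3`$ grid with successively larger top hats):

```python
import numpy as np

def hierarchical_amplitudes(delta):
    """S_3 and S_4 of a density contrast field (eq. 10): the connected
    third and fourth moments scaled by powers of the variance,
    S_3 = <d^3>/<d^2>^2 and S_4 = (<d^4> - 3<d^2>^2)/<d^2>^3."""
    d = np.asarray(delta, float).ravel()
    d = d - d.mean()
    m2 = (d**2).mean()
    s3 = (d**3).mean() / m2**2
    s4 = ((d**4).mean() - 3.0 * m2**2) / m2**3   # connected fourth moment
    return s3, s4
```

For a Gaussian field both amplitudes vanish (up to sampling noise), while a positively skewed field such as a normalized lognormal gives $`S_3>0`$.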
This Figure also shows that the finite size of the simulation box leads to unreliable estimates of $`S_3`$ for $`\mathrm{log}\sigma _g\lesssim -0.8`$, corresponding to $`R_{\mathrm{th}}\gtrsim 45h^1\mathrm{Mpc}`$. A linear bias $`\delta _g=b\delta `$ would reduce the amplitude of $`S_3`$ by a factor of $`b`$ on all scales. Our bias models, both local and non-local, have a more complicated effect, altering the scale-dependence of $`S_3`$. Density-threshold and sheet bias (panels b and d) significantly reduce the amplitude of $`S_3`$ on all scales, and for density-threshold bias $`S_3`$ becomes almost scale-independent over the range $`0.1<\sigma <1`$. The high weight assigned to dense regions boosts the value of $`S_3`$ on small scales in the power-law bias model (panel c), though in the range $`0.1<\sigma <1`$ this model tracks the behavior of the mass distribution remarkably closely. Values of $`S_3(\sigma )`$ for the non-local bias model (panel e) are similar to those for the local power-law bias. Finally, void bias (panel f) produces a systematically lower $`S_3(\sigma )`$ and a feature at $`\sigma =0.25`$, corresponding to a physical scale close to the diameter of the voids. Figure 11 shows $`S_4(\sigma )`$ in the same format as Figure 10. The perturbation theory predictions, based on the equations of Bernardeau (1994), again accurately match the values measured from the mass distribution over the range $`0.1<\sigma <1`$. Linear bias would decrease $`S_4`$ by a factor of $`b^2`$ on all scales. The relative behavior of $`S_4(\sigma )`$ for the different biasing schemes is similar to that of $`S_3(\sigma )`$. Thus, density-threshold bias reduces both the amplitude and the scale-dependence of $`S_4`$, while sheet bias primarily reduces its amplitude. The power-law and the non-local biasing schemes preserve the $`S_4`$ of the mass distribution over the range $`0.1<\sigma <1`$ and depart drastically from it outside this range. Void bias once again produces a break at $`\sigma \simeq 0.25`$.
Note that although Figures 10 and 11 demonstrate the scale-dependence of $`S_3`$ and $`S_4`$, all of our biasing models (even the non-local and void bias) still preserve the basic hierarchical pattern of moments predicted by gravitational perturbation theory with Gaussian initial conditions; for example, the ratios $`S_3`$ and $`S_4`$ change by less (usually much less) than a factor of two between $`\mathrm{log}\sigma =-0.5`$ and $`\mathrm{log}\sigma =0`$, even though the denominators $`\sigma ^4`$ and $`\sigma ^6`$ change by factors of 100 and 1000, respectively. Our results provide two cautionary notes for efforts to infer the bias relation from measurements of hierarchical amplitudes. First, the agreement between observed amplitudes and those predicted for the mass distribution has been used to argue that galaxy formation is unbiased (e.g., Gaztañaga (1994)). However, we find that the power-law bias model, which is not particularly contrived, happens to yield nearly the same results as the mass distribution, at least for $`S_3`$ and $`S_4`$. Second, the agreement of $`S_3(\sigma )`$ and $`S_4(\sigma )`$ with perturbation theory predictions has been used to argue against the BCFW93 cooperative galaxy formation bias model (Frieman & Gaztañaga (1994)). However, the non-local model adopted here gives $`S_3(\sigma )`$ and $`S_4(\sigma )`$ results similar to those of the local power-law bias model (and the mass distribution), suggesting that the failure of the BCFW93 model may not extend generally to all similar non-local models. ### 3.7 Quadratic Bias Parameters Ratios of the moments of galaxy counts can also be interpreted in terms of a hierarchy of bias parameters describing the relation between galaxies and mass (Fry & Gaztañaga (1993); Juszkiewicz et al. (1995)). Suppose that there is a relation $`\delta _g=f(\delta _m)`$ between the galaxy and mass density contrast fields after both are smoothed over scale $`R`$.
In the limit of small amplitude fluctuations, $`|\delta _m|\ll 1`$, the function $`f(\delta _m)`$ can be approximated by a second-order Taylor expansion, $$\delta _g=f(\delta _m)\approx b_1\delta _m+\frac{1}{2}b_2\delta _m^2-\frac{1}{2}b_2\sigma _m^2,$$ (11) where the third term ensures that $`\langle \delta _g\rangle =0`$. The “linear” bias parameter $`b_1`$ gives the slope of the galaxy-mass relation, and in the limit of first-order perturbation theory it is equal to other bias parameters such as $`b_\sigma `$ and $`b_\xi `$. The Taylor expansion (11) can be regarded as the justification for adopting the linear bias model for some calculations in the limit of small $`\delta _m`$. However, the hierarchical amplitude $`S_3`$ is only non-zero in second-order perturbation theory, so to compute the effect of bias on $`S_3`$ one must use the full second-order expansion (11) even in the small-$`\delta _m`$ limit. The result is (Fry & Gaztañaga (1993); Juszkiewicz et al. (1995)) $$S_{3g}=\frac{S_{3m}}{b_1}+\frac{3b_2}{b_1^2}.$$ (12) At the same order, the variance of the galaxy density is (R. Scoccimarro, private communication) $$\sigma _g^2=b_1^2\sigma _m^2+S_{3m}b_1b_2\sigma _m^4+\frac{1}{2}b_2^2\sigma _m^4.$$ (13) In general, there will be some scatter about the relation $`\delta _g=f(\delta _m)`$, and even if the scatter is small at one scale it could be large at another scale. We will return to this issue in §3.9 below. For now, however, we investigate the behavior of the linear and quadratic bias parameters $`b_1`$ and $`b_2`$ defined as the simultaneous solutions to equations (12) and (13). The thick and the thin dashed lines in Figure 12 plot these parameters as a function of $`R_{\mathrm{th}}`$, the radius of the top hat filter used to define the smoothed density fields. The thick and thin solid lines show the results of an alternative definition in which equation (13) is replaced by the first-order relation $`b_1=b_\sigma =\sigma _g/\sigma _m`$ (eq. ).
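Equations (12) and (13) have no simple closed-form simultaneous solution. One numerical route, sketched below with illustrative names (the text does not state which solver is used), eliminates $`b_2`$ through equation (12) and bisects on $`b_1`$:

```python
import numpy as np

def quadratic_bias(sigma_g, sigma_m, s3g, s3m, b1_lo=0.1, b1_hi=10.0):
    """Solve eqs. (12) and (13) simultaneously for (b_1, b_2) by
    eliminating b_2 and bisecting on b_1.  Assumes the residual of
    eq. (13) changes sign across the bracket [b1_lo, b1_hi]."""
    def b2_of(b1):
        # eq. (12) rearranged: b_2 = (S_3g b_1^2 - S_3m b_1) / 3
        return (s3g * b1**2 - s3m * b1) / 3.0

    def resid(b1):
        # eq. (13) moved to one side
        b2 = b2_of(b1)
        return (b1**2 * sigma_m**2 + s3m * b1 * b2 * sigma_m**4
                + 0.5 * b2**2 * sigma_m**4) - sigma_g**2

    lo, hi = b1_lo, b1_hi
    if resid(lo) * resid(hi) > 0:
        raise ValueError("bracket does not contain a root")
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if resid(lo) * resid(mid) <= 0:
            hi = mid
        else:
            lo = mid
    b1 = 0.5 * (lo + hi)
    return b1, b2_of(b1)
```

Generating $`\sigma _g`$ and $`S_{3g}`$ forward from chosen $`(b_1,b_2)`$ and inverting recovers the input parameters, which provides a quick consistency check on the two equations.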
The two definitions are identical in the limit $`\sigma _m\ll 1`$, since the $`\sigma _m^4`$ terms in equation (13) become negligible. We estimate the errors in $`b_2`$ as the dispersion in the values derived from the four independent simulations, divided by $`\sqrt{3}`$. The errors in $`b_1`$ are tiny over the range of scales considered, so we do not show them in the Figure. The rms fluctuation $`\sigma `$ is given by an integral of $`\xi (r)`$. Since we have already shown that $`b_\xi (r)`$ tends to a constant on large scales in local bias models, we expect $`b_1`$ to become constant on large scales in local models as well. This is just the behavior that we find in Figure 12, though in the sheet bias model there is scale-dependence out to $`R\sim 15h^{-1}\mathrm{Mpc}`$. The void bias model has a more scale-dependent value of $`b_1`$, though it still settles to $`b_1\approx 2`$ for $`R\gtrsim 20h^{-1}\mathrm{Mpc}`$. In the non-local model, on the other hand, $`b_1`$ increases monotonically with scale up to the largest scales plotted, reaching $`b_1\approx 3`$ at $`R_{\mathrm{th}}=70h^{-1}\mathrm{Mpc}`$. In all cases the definition $`b_1=b_\sigma `$ is less scale-dependent than the definition from simultaneous solution of equations (12) and (13). Given that $`b_1`$ and $`b_\xi `$ become scale-independent on large scales in all of our local biasing models, it is tempting to conjecture that the quadratic bias parameter $`b_2`$ also becomes scale-independent in this regime. Indeed, one might extend this conjecture to scale-independence of the whole hierarchy of non-linear bias parameters defined by Fry & Gaztañaga (1993), which come from the successively higher order terms in the Taylor expansion of $`\delta _g=f(\delta _m)`$. Unfortunately, the noise in estimates of $`S_3`$ in our finite simulation box makes it difficult to test even the $`b_2`$ conjecture.
The density-threshold and power-law bias results appear marginally consistent with a constant $`b_2`$ for $`R_{\mathrm{th}}>15h^{-1}\mathrm{Mpc}`$, while the sheet bias model appears marginally inconsistent with constant $`b_2`$. The value of $`b_2`$ certainly shows more scale-dependence at large $`R_{\mathrm{th}}`$ in the non-local and void bias models. The value of $`b_2`$ is scale-dependent in all models except density-threshold bias for $`R_{\mathrm{th}}<15h^{-1}\mathrm{Mpc}`$, which is not too surprising since the Taylor expansion (11) and perturbation theory calculation (12) break down as $`\sigma `$ approaches one.
### 3.8 Pairwise Peculiar Velocities
Gravitational evolution of the inhomogeneous mass distribution induces peculiar velocities on all the mass particles. The pairwise velocity dispersion, which can be estimated from the anisotropy of the redshift-space correlation function $`\xi (r_p,\pi )`$ of galaxy redshift surveys (Davis & Peebles (1983); Bean et al. (1983)), provides a diagnostic of $`\mathrm{\Omega }`$ by way of the “Cosmic Virial Theorem” (Peebles (1976)). It has also been used as a direct test of cosmological models (e.g., Davis et al. (1985)). The first and second moments of the pairwise velocity distribution enter into the BBGKY equations (Davis & Peebles (1977)) and analytic calculations of redshift-space distortions of $`\xi (r_p,\pi )`$ (Fisher (1995) and references therein). However, even if galaxies have the same velocity field as the dark matter, these statistical characterizations of galaxy velocities can be strongly affected by bias because of the pair weighting. For example, if biased galaxy formation preferentially populates dense regions with high velocity dispersions, then the pairwise dispersion of the galaxies will exceed the pairwise dispersion of the mass.
Figure 13 shows the first moment of the pairwise velocity distribution, the mean pairwise radial velocity $$V_{12}(r)\equiv \langle (𝐯_1-𝐯_2)\cdot \widehat{𝐫}_{12}\rangle .$$ (14) Here $`𝐯_1`$ and $`𝐯_2`$ are the velocities of two particles separated by the vector $`𝐫_{12}`$ (with unit vector $`\widehat{𝐫}_{12}`$), and the angular bracket denotes an average over all particle pairs with separation $`r=|𝐫_{12}|`$. Error bars are estimated from the dispersion among the four independent simulations, divided by $`\sqrt{3}`$. In a linear bias model $`\delta _g=b\delta _m`$ where the galaxies follow the same velocity field as the mass, $`V_{12}^{\mathrm{gal}}(r)=bV_{12}^{\mathrm{mass}}(r)`$ in the small $`\delta _m`$ (large $`r`$) limit (Fisher et al. (1994)). In line with this expectation, all three of our local bias models exhibit a nearly constant amplification of $`V_{12}(r)`$ for $`r>10h^{-1}\mathrm{Mpc}`$, by a factor that is close to the model’s value of $`b_\xi (r)`$ from Figure 5. At smaller scales, however, the shape of $`V_{12}(r)`$ depends strongly on the biasing scheme, in particular on the degree to which it weights the densest regions. Thus, the power-law bias model has the highest $`V_{12}(r)`$ and the sheet model, which avoids the isotropic clusters, has a $`V_{12}(r)`$ that falls below the mass $`V_{12}(r)`$ at small separations. The non-local model roughly follows the power-law model on small scales, and it does not settle to a constant amplification of $`V_{12}(r)`$ at large scales. The “bias factor” defined by $`b_v(r)=V_{12}^{\mathrm{gal}}(r)/V_{12}^{\mathrm{mass}}(r)`$ for this model has a scale-dependence at large $`r`$ that is reminiscent of (but not as strong as) that of $`b_\xi (r)`$ (Figure 6). The void bias model is perhaps the oddest counterexample to the linear bias expectation: since galaxies are eliminated from randomly placed voids with no regard to their density or velocity, void bias does not alter $`V_{12}(r)`$ at all.
This result emphasizes a difference between the void bias model and our other models, namely that it enhances $`\xi (r)`$ and $`P(k)`$ by imprinting an additional, independent clustering pattern on the galaxy distribution rather than by preferentially selecting galaxies in clustered regions. Figure 14 shows the second moment of the pairwise velocity distribution, the pairwise radial velocity dispersion $$\sigma _v^2(r)\equiv \langle |(𝐯_1-𝐯_2)\cdot \widehat{𝐫}_{12}|^2\rangle -V_{12}^2(r),$$ (15) where the average is again over pairs of separation $`r=|𝐫_{12}|`$. Measurements of this quantity from the anisotropy of $`\xi (r_p,\pi )`$ are sensitive to the presence or absence of rich clusters in the redshift sample (Mo, Jing & Borner (1993); Zurek et al. (1994); Somerville, Primack, & Nolthenius (1997)). Figure 14 illustrates another shortcoming of the pairwise velocity dispersion as a cosmological diagnostic: it is sensitive to the details of the biasing scheme, so it cannot be predicted without a fully specified model of biasing. The two models with the steepest small scale correlation functions, sheet bias and power-law bias, have, respectively, the lowest and highest pairwise velocity dispersions at small scales because of the relative weights they assign to rich clusters. The sheet bias model has $`\sigma _v^2(r)`$ well below the mass $`\sigma _v^2(r)`$ for $`r<4h^{-1}\mathrm{Mpc}`$. At large $`r`$, all bias models except for the void model produce an amplification of $`\sigma _v^2(r)`$, but not by a factor that is simply related to the large scale bias. Because many of the pairs contributing high values of the pairwise velocity at large $`r`$ have one member in a high-dispersion cluster, the pairwise dispersion statistic is heavily influenced by cluster weighting even at large scales.
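Both pairwise moments, equations (14) and (15), can be estimated directly from particle data by averaging over pairs in a separation bin. The sketch below is our own illustration (not the authors' code); it is a brute-force O(N²) loop over all pairs, so it is only suitable for small subsamples, and the function name is an assumption.

```python
import numpy as np

def pairwise_moments(pos, vel, r_lo, r_hi):
    """Mean pairwise radial velocity V12 (eq. 14) and pairwise radial
    velocity dispersion sigma_v^2 (eq. 15), averaged over all particle
    pairs whose separation lies in [r_lo, r_hi).

    pos, vel : (N, 3) arrays of positions and peculiar velocities.
    """
    sep = pos[:, None, :] - pos[None, :, :]   # r_12 for every pair
    dv = vel[:, None, :] - vel[None, :, :]    # v_1 - v_2 for every pair
    r = np.linalg.norm(sep, axis=-1)
    iu = np.triu_indices(len(pos), k=1)       # count each pair once
    r, sep, dv = r[iu], sep[iu], dv[iu]
    mask = (r >= r_lo) & (r < r_hi)
    # radial component (v_1 - v_2) . r_hat_12; positive = receding pair
    vr = np.einsum('ij,ij->i', dv[mask], sep[mask]) / r[mask]
    V12 = vr.mean()
    sigma_v2 = np.mean(vr**2) - V12**2
    return V12, sigma_v2
```

For a full simulation one would bin pairs with a tree or grid rather than forming all N² separations, but the estimator itself is unchanged.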
Recognizing the sensitivity of the pairwise velocity dispersion to the presence of clusters in redshift surveys and to the nature of bias, Kepner, Summers, & Strauss (1997) and Strauss, Ostriker, & Cen (1998) have proposed alternative statistics that measure the velocity dispersion as a function of the local number density of galaxies instead of averaging over all pairs. (Davis, Miller, & White and Landy, Szalay, & Broadhurst have explored other approaches that measure a globally averaged quantity but do not weight by galaxy pairs.) In particular, Kepner et al. (1997) suggested that the slope of the relation between the velocity dispersion and the galaxy number density could provide a diagnostic for $`\mathrm{\Omega }`$. Figure 15 shows $`\sigma _v(\rho /\overline{\rho })`$, the velocity dispersion as a function of the local galaxy overdensity, with both quantities computed in spheres of radius $`3h^{-1}\mathrm{Mpc}`$. We also compute the intrinsic dispersion in $`\sigma _v`$ at fixed $`\rho /\overline{\rho }`$, which is shown by the error bars. This measure of $`\sigma _v(\rho /\overline{\rho })`$ is similar but not identical to the measures proposed by Kepner et al. (1997) and Strauss et al. (1998). Figure 15 shows that the slope of $`\sigma _v`$ vs. $`\rho /\overline{\rho }`$ is systematically lower for biased galaxy distributions than for the underlying mass distribution, as anticipated by Kepner et al. (1997) and Strauss et al. (1998). More significantly, the slope is not simply related to the rms bias on the $`3h^{-1}\mathrm{Mpc}`$ measurement scale (see Figure 9) but depends on the details of the adopted biasing scheme. Measuring $`\sigma _v`$ as a function of density does soften the extreme sensitivity of the pairwise dispersion to the details of the bias model, but probably not enough to make this statistic a good one with which to measure $`\mathrm{\Omega }`$.
Measurements of $`\sigma _v(\rho /\overline{\rho })`$ can provide a test of cosmological models, but only if the relation between galaxies and mass is reliably specified.
### 3.9 Galaxy and Mass Density Fields
Any physical biasing of the galaxy distribution can lead to a non-trivial relation between the galaxy and the mass density fields. We have introduced several different definitions of bias factors, each related to a specific statistical measure of clustering. A more general description of the effect of bias is the conditional probability $`P(\delta _g|\delta _m)`$ of finding galaxy density contrast $`\delta _g`$ where the mass density contrast is $`\delta _m`$ (DL98). This formulation implicitly assumes a smoothing scale on which the fields $`\delta _g`$ and $`\delta _m`$ are defined, and in general both the mean relation and the scatter about it will depend on the smoothing scale. Figure 16 shows the median and the 10th and 90th percentile limits of the distribution of smoothed galaxy density contrasts $`\delta _g`$ at given values of the smoothed mass density contrast $`\delta _m`$. The dotted line that runs along the diagonal in each panel shows the relation $`\delta _g=2\delta _m`$ expected for linear bias with a constant bias factor $`b=2`$. Different rows correspond to different biasing schemes, while the four columns from left to right show results for Gaussian smoothing radii $`R_s=3,6,10`$ and $`15h^{-1}\mathrm{Mpc}`$, respectively. We compute the distribution and the different percentile values using the density fields from the four independent simulations. The effect of the density-threshold bias is remarkably close to linear bias at all smoothing scales and density contrasts, breaking down only where it must, at $`\delta _g\approx -1`$. This result is consistent with the very small values of $`b_2`$ found for this model (Figure 12).
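The percentile curves of the kind shown in Figure 16 amount to binning the smoothed mass field $`\delta _m`$ and taking order statistics of $`\delta _g`$ in each bin. A hypothetical helper (our own names and default bin count, not the authors' code):

```python
import numpy as np

def conditional_percentiles(dm, dg, nbins=20):
    """Median and 10th/90th percentiles of delta_g in bins of delta_m.

    dm, dg : 1D arrays of the smoothed mass and galaxy density
    contrasts at the same set of points.
    Returns an array with columns: bin center, p10, median, p90.
    """
    edges = np.linspace(dm.min(), dm.max(), nbins + 1)
    # digitize gives 1..nbins for interior points; clip the endpoints
    idx = np.clip(np.digitize(dm, edges) - 1, 0, nbins - 1)
    rows = []
    for b in range(nbins):
        sel = dg[idx == b]
        if sel.size:
            center = 0.5 * (edges[b] + edges[b + 1])
            rows.append((center, *np.percentile(sel, [10, 50, 90])))
    return np.array(rows)
```

For a deterministic linear bias $`\delta _g=2\delta _m`$ the median column simply traces twice the bin center, while scatter in the bias relation separates the 10th and 90th percentile curves.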
The median $`\delta _g(\delta _m)`$ is markedly more non-linear for the power-law model, approaching the dotted line only for a large smoothing length $`R_s=15h^{-1}\mathrm{Mpc}`$. The median relation for the sheet model is fairly linear, though always shallower than $`\delta _g=2\delta _m`$. The non-local model has a noticeably curved median relation even at $`R_s=15h^{-1}\mathrm{Mpc}`$, consistent with the rise in $`b_2`$ for this model at large scales (Figure 12). For the void model, the scatter in $`\delta _g`$ at fixed $`\delta _m`$ is so large that the median relation alone provides little information. More generally, the scatter between $`\delta _g`$ and $`\delta _m`$ reveals the influence of factors other than the local mass density in shaping the galaxy distribution. As one might expect, the scatter about the median relation is small for the bias models based on local mass density (density-threshold and power-law bias), and it decreases rapidly with increasing smoothing. Some scatter is inevitable because of shot noise in the galaxy distribution, and because the density smoothed with a Gaussian filter of radius $`R_s`$ is not perfectly correlated with the density smoothed with a top hat of radius $`R_{\mathrm{th}}=4h^{-1}\mathrm{Mpc}`$ (as used in the biasing prescriptions). The influence of large scale density contrast in the non-local model does not significantly increase the scatter in $`\delta _g`$ vs. $`\delta _m`$ over that in the power-law model. In the sheet bias model, on the other hand, galaxy selection is based on the geometry of the local mass distribution rather than the mass density. These quantities are correlated, but the scatter between $`\delta _g`$ and $`\delta _m`$ is much larger than in the density-based models, though it still becomes fairly small for $`R_s\gtrsim 10h^{-1}\mathrm{Mpc}`$.
In the void bias model, the probability that a mass particle is included as a galaxy is entirely independent of the local mass density, since the voids are thrown at random into the mass distribution. The void bias model therefore has large scatter between $`\delta _g`$ and $`\delta _m`$, persisting to very large scales. At small $`R_s`$, the 10th percentile at all $`\delta _m`$ corresponds to regions that are empty of galaxies, while the 90th percentile corresponds to regions in which the smoothing volume does not overlap any of the voids. In practice, the mass density contrast $`\delta _m`$ used as the independent variable in Figure 16 is not directly observable. However, $`\delta _m`$ can be inferred from the divergence of the peculiar velocity field using the linear perturbation theory relation $`\delta _m=-\nabla \cdot 𝐯/(\mathrm{\Omega }^{0.6}H_0)`$ or weakly non-linear relations (Nusser et al. (1991); Gramann (1993); Chodorowski & Łokas (1997); Susperregi & Buchert (1997)). The quantity $`\nabla \cdot 𝐯`$ can be inferred from observations of the radial peculiar velocity field by the POTENT method, under the assumption that the peculiar velocity field remains irrotational during gravitational evolution (Bertschinger & Dekel (1989); Dekel et al. (1993); Sigad et al. (1998)). In the case of linear bias, the slope of the relation between $`\delta _g`$ and $`\nabla \cdot 𝐯`$ provides an estimate of the quantity $`\beta =\mathrm{\Omega }^{0.6}/b`$. Figure 17 shows the relation between $`\delta _g`$ and $`\nabla \cdot 𝐯`$ for the unbiased mass distribution (top row) and the various biased galaxy distributions, in the same format as Figure 16. The three columns from left to right show the relation when the fields are smoothed with Gaussian filters of radius $`R_s=6,`$ 10, and $`15h^{-1}\mathrm{Mpc}`$, respectively. We use the method of Babul et al. (1994) to create a volume-weighted, smoothed velocity field from the discrete galaxy peculiar velocities.
Specifically, we first form the momentum field by CIC-binning the momentum of every galaxy onto a $`200^3`$ grid. We smooth this momentum field with a Gaussian filter of radius $`R_1=R_s/2`$ and divide it by a similarly smoothed density field to form a mass-weighted smoothed velocity field. We then smooth this velocity field with another Gaussian filter of radius $`R_2=(R_s^2-R_1^2)^{1/2}`$, so that the effective smoothing radius is $`R_s`$. Because the second smoothing dominates over the first, the final velocity field is volume-weighted rather than mass-weighted. We do not show the $`\delta _g`$ vs. $`\nabla \cdot 𝐯`$ relation at $`R_s=3h^{-1}\mathrm{Mpc}`$ because the discrete galaxy distribution yields an excessively noisy velocity field in low density regions for this smoothing length. In the first row of Figure 17, systematic departures of the solid curves from the linear theory relation $`\delta _m=-\nabla \cdot 𝐯/H_0`$ are caused by non-linear gravitational evolution. This deviation decreases drastically with increased smoothing, showing that linear theory is an increasingly better approximation of the gravitational evolution on larger scales. The panels in the remaining rows of Figure 17 show the distribution of the galaxy density fluctuations $`\delta _g`$ plotted against $`-2\nabla \cdot 𝐯/H_0`$. For $`R_s=15h^{-1}\mathrm{Mpc}`$ these plots look similar to the corresponding panels of Figure 16. For smaller smoothing lengths the relation between $`\delta _g`$ and $`\nabla \cdot 𝐯`$ is more non-linear than the relation between $`\delta _g`$ and $`\delta _m`$ because of the additional non-linearity of the $`\delta _g`$–$`\delta _m`$ relation. Comparisons between galaxy density fields and mass density fields estimated from POTENT usually account for the non-linear relation between $`\delta _m`$ and $`\nabla \cdot 𝐯`$, but the non-linearity and scatter of the bias relation itself are potential sources of additional systematic error in POTENT estimates of $`\beta `$, as noted by DL98.
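The two-stage smoothing described above can be sketched as follows, applied to one velocity component on a periodic grid and using FFT-based Gaussian filtering. This is a simplified stand-in for the Babul et al. (1994) procedure, not the authors' code; the function names, the small floor on the smoothed density, and the assumption of a cubic grid are our own choices.

```python
import numpy as np

def gaussian_smooth(field, R, cell):
    """FFT Gaussian smoothing (radius R) on a periodic cubic grid
    with spacing `cell`, in the same length units as R."""
    k = 2.0 * np.pi * np.fft.fftfreq(field.shape[0], d=cell)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    window = np.exp(-0.5 * (kx**2 + ky**2 + kz**2) * R**2)
    return np.fft.ifftn(np.fft.fftn(field) * window).real

def volume_weighted_velocity(momentum, density, R_s, cell):
    """Two-stage smoothing: filter the CIC-binned momentum and density
    with R1 = R_s/2, divide to get a mass-weighted velocity, then
    filter again with R2 = (R_s^2 - R1^2)^(1/2) so the net smoothing
    radius is R_s and the result is effectively volume-weighted."""
    R1 = 0.5 * R_s
    R2 = np.sqrt(R_s**2 - R1**2)
    mom_s = gaussian_smooth(momentum, R1, cell)
    den_s = gaussian_smooth(density, R1, cell)
    v_mass = mom_s / np.maximum(den_s, 1e-30)  # guard empty cells
    return gaussian_smooth(v_mass, R2, cell)
```

Since successive Gaussian filters of radii $`R_1`$ and $`R_2`$ compose into one of radius $`(R_1^2+R_2^2)^{1/2}`$, this choice of $`R_2`$ reproduces the effective radius $`R_s`$ quoted in the text.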
The importance of these effects depends in detail on the form of bias — e.g., density-threshold bias produces a nearly linear relation with small scatter, power-law bias produces a non-linear relation with small scatter, and sheet bias produces a nearly linear relation with large scatter.
## 4 Discussion
We have examined the influence of the morphology-density relation and biased galaxy formation on many of the statistical measures that are commonly used to characterize galaxy clustering and galaxy peculiar velocity fields. We have focused most of our attention on local biasing models, in which the efficiency of galaxy formation is governed by the density (density-threshold or power-law bias), geometry (sheet or filament bias), or “pressure” ($`\rho \sigma _v^2`$) within a sphere of radius $`4h^{-1}\mathrm{Mpc}`$. We have contrasted the behavior of these local models with that of models in which the galaxy density is coherently modulated over large scales (non-local bias, as proposed by BCFW93) or suppressed in randomly distributed voids (void bias, as proposed by BW91). In this Section, we summarize our results, then discuss them in light of other recent work and anticipated observational developments. If the morphological segregation of galaxies is governed by the local morphology-density relation proposed by Postman & Geller (1984), then on small scales the correlation function of early-type galaxies should be both steeper and stronger than that of late-type galaxies, in accord with present observations. On scales larger than the galaxy correlation length, early-type galaxies should remain more strongly clustered than late-type galaxies, but the correlation functions (and power spectra) should have the same shape. This prediction of the local morphology-density model is consistent with current data and can be tested at high precision by the 2dF and SDSS redshift surveys.
In all of the examples that we have investigated, local bias produces a scale-independent amplification of the two-point correlation function and power spectrum on scales in the linear regime. On these scales, therefore, local biasing cannot resolve a discrepancy between the shape of the mass power spectrum predicted by a cosmological model and the shape of the observed galaxy power spectrum. Non-local biasing can resolve such a discrepancy, as originally shown by BW91 and BCFW93 in the context of “standard” CDM and the APM galaxy survey, but achieving this alteration of the power spectrum shape requires a bias mechanism that directly modulates galaxy formation in a coherent way over large scales. In the non-linear regime, by contrast, the bias functions $`b_\xi (r)`$, $`b_\sigma (r)`$, and $`b_P(k)`$ are scale-dependent even in local models, and their shapes depend on the details of the biasing scheme. Local bias models roughly preserve the hierarchical relations between moments of the galaxy count distribution, in that the ratios $`S_3\equiv \langle \delta _g^3\rangle _c/\langle \delta _g^2\rangle ^2`$ and $`S_4\equiv \langle \delta _g^4\rangle _c/\langle \delta _g^2\rangle ^3`$ are only weakly dependent on smoothing scale (and hence on $`\sigma \equiv \langle \delta _g^2\rangle ^{1/2}`$). However, local bias can change both the amplitude and shape of the functions $`S_3(\sigma )`$ and $`S_4(\sigma )`$. Our power-law bias model gives $`S_3(\sigma )`$ and $`S_4(\sigma )`$ close to those of the underlying mass distribution, and our non-local bias model in turn gives $`S_3(\sigma )`$ and $`S_4(\sigma )`$ close to those of the local power-law model. These results show that agreement between measured hierarchical amplitudes and perturbation theory predictions for the mass distribution does not necessarily imply that galaxy formation is unbiased or even that it is locally biased, although such agreement can rule out specific local and non-local models (Frieman & Gaztañaga (1994)).
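The ratios $`S_3`$ and $`S_4`$ used throughout can be estimated from any sampled density contrast field via its connected moments; a minimal illustration (our own sketch, not the authors' code):

```python
import numpy as np

def hierarchical_amplitudes(delta):
    """Hierarchical amplitudes of a density contrast field:
    S3 = <d^3>_c / <d^2>^2 and S4 = <d^4>_c / <d^2>^3, where the
    connected fourth moment is <d^4> - 3<d^2>^2 (the Gaussian part
    subtracted)."""
    d = np.asarray(delta, dtype=float)
    d = d - d.mean()                   # enforce <delta> = 0
    m2 = np.mean(d**2)
    m3 = np.mean(d**3)                 # connected = raw for 3rd moment
    m4c = np.mean(d**4) - 3.0 * m2**2  # connected fourth moment
    return m3 / m2**2, m4c / m2**3
```

For a Gaussian field both amplitudes vanish in the mean, which is why non-zero measured values of $`S_3`$ and $`S_4`$ probe gravitational non-linearity and bias rather than the initial conditions.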
If we characterize the effect of bias on the variance and skewness of galaxy counts by linear and quadratic bias parameters $`b_1`$ and $`b_2`$, then $`b_1`$ is independent of scale on large scales in all of our local models. Our results are marginally but not strongly inconsistent with the conjecture that $`b_2`$ becomes scale-independent on large scales in local bias models. The scale-dependence of $`b_1(r)`$ and $`b_2(r)`$ is much stronger in our non-local model than in any of our local models. On large scales, void bias produces a scale-independent $`b_1(r)`$ but a scale-dependent $`b_2(r)`$. Estimates of moments of the velocity distribution from galaxy redshift surveys, which are weighted by galaxy pairs, are strongly affected by biasing because of correlations between the velocity distribution and the parameters that determine the galaxy formation efficiency. In particular, the pairwise velocity dispersion $`\sigma _v(r)`$ is sensitive to the details of the bias model at all separations. The two bias models with the steepest small scale correlation function, power-law bias and sheet bias, have values of $`\sigma _v`$ that differ by a factor of two at $`r\approx 2h^{-1}\mathrm{Mpc}`$, because the first enhances galaxy formation in dense, isotropic clusters while the second does not. The dependence of the mean pairwise velocity $`V_{12}(r)`$ on bias is similarly complex on small scales, but the behavior simplifies for local bias models at large $`r`$, where the galaxy $`V_{12}`$ is amplified over the mass $`V_{12}`$ by a factor $`b_v`$ that is close to the correlation function bias $`b_\xi `$. These biases of pairwise moments arise even though the galaxies in our models are just a subset of the dark matter particles and therefore have the same local velocity distribution. Void bias presents an odd case in which the galaxy distribution is biased but $`\sigma _v(r)`$ and $`V_{12}(r)`$ are not.
The relation between local velocity dispersion and local overdensity (Kepner et al. 1997; Strauss et al. 1998) is also sensitive to bias, complicating the use of this statistic as a diagnostic for $`\mathrm{\Omega }`$. The median trend and scatter of the relation between $`\delta _g`$ and $`\delta _m`$ or $`\delta _g`$ and $`\nabla \cdot 𝐯`$ depend on the biasing scheme and on the smoothing scale used to define the density and velocity fields. The density-threshold bias prescription produces a relation that is remarkably close to linear bias, $`\delta _g=b\delta _m`$. Power-law bias produces a curved $`\delta _g`$–$`\delta _m`$ relation, and, as noted by DL98, the non-linearity of this relation is a possible source of systematic error in efforts to measure $`\beta \equiv \mathrm{\Omega }^{0.6}/b`$ via the POTENT method (Dekel et al. (1993); Sigad et al. (1998)) or via redshift space distortions (Hamilton (1998) and references therein). The relation between $`\delta _g`$ and $`\delta _m`$ is fairly tight for the bias schemes that are based on local density, but it exhibits much more scatter for the sheet bias scheme, as one might expect. The void bias model predicts very large scatter between $`\delta _g`$ and $`\delta _m`$ or $`\nabla \cdot 𝐯`$, even for smoothing lengths as large as $`15h^{-1}\mathrm{Mpc}`$. The observed correlation between $`\delta _g`$ and $`\nabla \cdot 𝐯`$ (Sigad et al. (1998)) is probably sufficient to rule out such a model, as first argued by Babul et al. (1994). However, our less extreme non-local bias model predicts a relation that is nearly as tight as that of the local power-law bias model, so the existence of a tight relation between $`\delta _g`$ and $`\nabla \cdot 𝐯`$ is not sufficient to rule out non-locally biased galaxy formation.
Our results for the large scale behavior of $`\xi (r)`$ and $`P(k)`$ strengthen the conclusions of earlier analytic arguments (Coles (1993); Fry & Gaztañaga (1993); Gaztañaga & Baugh (1998); Scherrer & Weinberg (1998)) and numerical investigations (Weinberg (1995); MPH98; Cole et al. (1998)). Although we can only examine a finite number of specific biasing prescriptions, these examples show that scale-independent amplification of $`\xi (r)`$ and $`P(k)`$ occurs even in models like sheet, filament, and pressure bias, where the galaxy formation efficiency is not governed strictly by the local mass density. We also find that the asymptotic regime of nearly constant $`b_\xi `$, $`b_\sigma `$, and $`b_P`$ is effectively reached on mildly non-linear scales, soon after $`\xi (r)`$ drops below one. Our conclusions may seem mildly at odds with those of Blanton et al. (1998, hereafter BCOS98), who find scale-dependent bias of the “galaxy” population in a hydrodynamic simulation of the $`\mathrm{\Lambda }+`$CDM model (by Cen & Ostriker (1998)). The difference is largely a matter of emphasis: BCOS98 find substantial scale-dependence of $`b_\sigma (r)`$ in the non-linear regime, but they find only a 12% drop in $`b_\sigma (r)`$ from $`r=8h^{-1}\mathrm{Mpc}`$ to $`r=30h^{-1}\mathrm{Mpc}`$, which is the same drop we find for our sheet bias model. Our power-law and sheet bias models also show significant scale-dependence of $`b_\sigma (r)`$ in the non-linear regime ($`r<8h^{-1}\mathrm{Mpc}`$), though not as strong as that found by BCOS98. BCOS98 demonstrate that the scale-dependence of bias in their simulation arises mainly from the correlation between galaxy formation efficiency and the local gas temperature $`T`$ or dark matter velocity dispersion $`\sigma _v^2`$.
They further argue that this correlation leads to scale-dependence because of the connection between $`T`$ (or $`\sigma _v^2`$) and the gravitational potential, which has a much redder power spectrum than the density field itself. In the terminology of this paper, the BCOS98 argument could be rephrased as a claim that temperature is a local variable whose influence is more “effectively non-local” than that of other local variables. The fact that we obtain virtually identical results for density-threshold and pressure bias implies that enhanced scale-dependence is not an automatic consequence of incorporating temperature or $`\sigma _v^2`$ into the biasing prescription. In order to address this issue more directly, we also examined a model in which we biased the galaxy distribution using a threshold in $`\sigma _v^2`$ alone, a pure “temperature” bias. We again obtained results nearly identical to those of the density-threshold model, with no enhanced scale-dependence of the bias. The difference between our result and that of BCOS98’s similar numerical experiment presumably reflects our use of a $`4h^{-1}\mathrm{Mpc}`$ rather than a $`1h^{-1}\mathrm{Mpc}`$ sphere to define $`\sigma _v^2`$. Averaged over this larger scale, velocity dispersion does not behave any more “non-locally” than density. This result does not mean that galaxy bias in a realistic model might not be as scale-dependent as BCOS98 find, only that the influence of temperature or velocity dispersion on the scale-dependence of bias depends in detail on the scale over which it is defined and the way that it is incorporated into the bias prescription. Our numerical study complements the general analytic examination of stochastic, non-linear biasing by DL98. The use of N-body simulations allows us to investigate the effects of a wide range of biasing prescriptions on measures of galaxy clustering in the linear, mildly non-linear, and strongly non-linear regimes.
For the most part, we have addressed different issues from DL98, but we concur on the general point that bias is a multi-faceted phenomenon, and that only in specific cases and limits can it be described by a single parameter. On small scales, the relation between $`\delta _g`$ and $`\delta _m`$ is generically non-linear and scale-dependent, and it may have substantial scatter. At any given scale one can define many different “bias factors” — $`b_\xi `$, $`b_\sigma `$, $`b_P`$, $`b_1`$, $`b_2`$, $`b_v`$, etc. — and the relation among them depends on the details of the biasing scheme, or, ultimately, on the physics of galaxy formation. Despite this complexity, our results show that the very general assumption of local biasing leads to some important simplifications on large scales. Most significantly, the scale-independence of $`b_\xi `$ and $`b_P`$ in the linear regime means that large galaxy redshift surveys like 2dF and the SDSS should reveal the true shape of the dark matter power spectrum on large scales, if galaxy formation is governed by local physics. Indeed, this result makes the local biasing hypothesis testable, since any non-local physics that significantly modulated galaxy formation would almost certainly have a different impact on galaxies of widely differing luminosity, stellar population age, morphology, and surface mass density. Existing data clearly show that different types of galaxies have different clustering amplitudes, but if each galaxy’s properties are determined by the history of its local environment then the 2dF and SDSS redshift surveys should show that all galaxies have the same $`P(k)`$ shape on large scales. The uncertainties of bias have been a source of frustration in efforts to test cosmological models against observations of galaxy clustering. 
In recent years, observational and theoretical breakthroughs have opened a number of alternative routes to measuring cosmological parameters and the mass power spectrum, including microwave background anisotropies, the Ly$`\alpha `$ forest, the supernova Hubble diagram, weak gravitational lensing, and the mass function and evolution of the galaxy cluster population. These approaches are insensitive or weakly sensitive to biased galaxy formation, though each one has its own set of assumptions and limitations. Recent years have also seen great improvements in the predictive power of theories of galaxy formation, thanks to advances in numerical simulations and semi-analytic modeling techniques that combine gravitational clustering with the more complicated physical processes of gas cooling, star formation, supernova feedback, metal enrichment, and morphological transformation via mergers and interactions. While the sensitivity of galaxy clustering statistics to the details of galaxy biasing is an obstacle to testing cosmological models, it becomes an asset when the goal is testing the theory of galaxy formation itself, especially if the underlying cosmological model is tightly constrained by independent observations. We have already argued that $`P(k)`$ and $`\xi (r)`$ can test the broad hypothesis of local galaxy formation, and at a greater level of detail we might, for example, come to view the pairwise velocity dispersion not as a tool for measuring $`\mathrm{\Omega }`$ but as a diagnostic for the importance of mergers in dense environments. The giant redshift surveys currently underway will provide superb data sets for such studies, allowing precise clustering measurements for finely divided subsets of the galaxy population over a wide range of scales. 
Other kinds of data may play an equal or more important role in determining the material contents of the universe and the origin of cosmic inhomogeneity, but measurements of large scale structure at high and low redshift will guide our understanding of the physics that transformed primordial dark matter fluctuations into the universe of galaxies. This work was supported by NSF Grant AST-9616822 and NASA Astrophysical Theory Grant NAG5-3111. We thank Roman Scoccimarro for helpful discussions on galaxy moments and their relation to non-linear bias factors.
# Topological defects and the short-distance behavior of the structure factor in nematic liquid crystals

## I Introduction

When a liquid crystalline material is brought from the isotropic phase to the nematic phase, the turbidity (i.e., the total intensity of scattered light) of the material increases by a factor of the order of $`10^6`$, and the sample appears cloudy . In samples in which the nematic ordering is well aligned, this intense scattering is due predominantly to thermal fluctuations in the orientation of the nematic director, and has been extensively studied and reported in the literature . If, on the other hand, the sample is not well aligned and contains a large number of topological defects (disclinations, nematic hedgehogs, or surface defects), the turbidity becomes dominated by scattering from essentially static director inhomogeneities associated with these defects . A particularly visible manifestation of defect-associated scattering is the so-called “Porod tail” contribution to the scattering intensity. For sufficiently large magnitudes $`k`$ of the scattering wave-vector $`𝐤`$, the scattered light intensity $`I(𝐤)`$ of a nematic system containing topological defects decays as a power law $`k^{-\chi }`$, where $`\chi `$ is an integer-valued exponent that depends on the dominant type of defect present in the system. Such behavior was confirmed and the values of $`\chi `$ have been discussed in both the experimental and the theoretical literature on the kinetics of phase ordering of nematic systems containing numerous defects generated during a quench from the isotropic to the nematic phase. A full calculation of the form of the power-law tails associated with nematic defects, and an evaluation of the relative importance of these contributions compared to the $`k^{-2}`$ contribution associated with thermal fluctuations have, however, not yet been given in the literature, and form the subject matter of the present Paper. 
Our results are, in principle, applicable to any nematic system containing topological defects, but should be especially useful in interpreting detailed measurements of scattered-light intensity in highly disordered systems (i.e., those containing numerous defects). Such configurations arise in low-molecular-weight nematics for example when the transition from the isotropic to the nematic phase is sudden (e.g., induced by a temperature quench) , or when an originally well-aligned nematic sample is put in a sufficiently strong shear flow or under high alternating voltage . In polymer nematics, numerous defects often persist even in the absence of any external agents , and significantly affect the mechanical and electro-optical properties of these materials. Our results permit the extraction of information on the type and number of topological defects present in such systems and the monitoring of the dynamics of the defects. As our results are exact in the appropriate scattering wave-vector range, they can also be used to test the validity of analytical theories and simulations of phase ordering in nematics. This Paper is organized as follows. In Sec. II we discuss the general issue of the origins of power-law contributions to the structure factor $`S(k)`$ (i.e. the Fourier-transformed order-parameter correlation function) in systems possessing continuous symmetries. In general, two types of power-law contributions of quite distinct origins are present: those due to transverse thermal fluctuations of the order parameter, and those due to topological defects. The order-parameter variations associated with topological defects give rise, at sufficiently large $`k`$, to contributions of the form $`\rho Ak^{-\chi }`$ (the Porod tail), where $`\rho `$ is the number density of a given type of defect, $`A`$ is a dimensionless Porod amplitude and $`\chi `$ is an integer-valued Porod exponent. In Sec. 
III we re-derive and generalize some of the results of Bray and Humayun for Porod amplitudes and exponents in the $`O(N)`$ vector model by using a somewhat different and computationally simpler method. In the remainder of the Paper, we use this method to calculate Porod amplitudes and exponents for topological defects in uniaxial and biaxial nematic liquid crystals. The forms of the Porod tail that correspond to hedgehog defects, disclination lines, and ring defects in uniaxial nematics are calculated in subsections IV B–IV D. A case of special interest is that of a wedge-type ring defect (i.e. disclination loop) in a uniaxial nematic, where two separate Porod regimes (having distinct exponents and amplitudes) arise for length scales larger than and smaller than the radius of the disclination loop. The presence of two Porod regimes is specific to the nematic case, and does not occur for any type of defect in $`O(N)`$ vector model systems. In Sec. IV E we discuss the influence of transverse thermal fluctuations of the nematic director on the large-$`k`$ behavior of the structure factor. In Sec. IV F we use the results of Secs. IV A–IV E to analyze the results of the light-scattering experiments of Refs. , which investigate the process of phase-ordering kinetics in uniaxial nematics. In Sec. V we generalize our results for the uniaxial nematic to the case of non-abelian defects in biaxial nematics, the dynamics of which have recently been investigated experimentally and theoretically . A general discussion of corrections to the results derived in this Paper that may arise from effects such as defect interactions, defect curvature, and the presence of the defect core, is given in Sec. VI. We conclude, in Sec. VII, with a summary of our results and suggestions for their use in the analysis of experimental data. 
## II Generalized Porod law

Consider an ordered system characterized by an order-parameter field $`𝚽(𝐫)`$, where $`𝚽`$ is an $`N`$-component vector in order parameter space and $`𝐫`$ is the radius-vector in $`d`$-dimensional real space. The principal quantity of interest in many theoretical and experimental investigations of ordered systems is the structure factor $`S(𝐤)`$, which is the Fourier transform of the real-space correlation function $`C(𝐫)`$ of the order parameter. This quantity, which is directly measurable via the appropriate scattering experiment, characterizes the degree of inhomogeneity of the order parameter at length scales of order $`k^{-1}`$. By definition, $$S(𝐤)\equiv \int d^dr\,e^{i𝐤\cdot 𝐫}C(𝐫),$$ (1) with the real-space correlation function $`C(𝐫)`$ given by $$C(𝐫)=\frac{1}{M_{O(N)}}\int d^dx\,𝚽(𝐱)\cdot 𝚽(𝐱+𝐫),$$ (2) where the integration is taken over the whole system. Here $`M_{\mathrm{O}(\mathrm{N})}`$ is a normalization factor, $$M_{\mathrm{O}(\mathrm{N})}\equiv \int d^dx\,𝚽(𝐱)\cdot 𝚽(𝐱),$$ (3) chosen to ensure that $`C(𝐫)|_{𝐫=\mathrm{𝟎}}=1`$. Notice that with this choice of normalization $`C(𝐫)`$ is dimensionless, whereas $`S(𝐤)`$ has the dimensions of a volume. Equivalently , the structure factor may be expressed as $$S(𝐤)=\frac{1}{M_{O(N)}}𝚽(𝐤)\cdot 𝚽(-𝐤),$$ (4) where $`\mathrm{\Phi }(𝐤)`$ is the Fourier-transformed order parameter, $$𝚽(𝐤)\equiv \int d^dr\,e^{i𝐤\cdot 𝐫}𝚽(𝐫).$$ (5) For systems that are isotropic on macroscopic scales (such as a bulk system undergoing phase ordering), $`S(𝐤)`$ depends on $`𝐤`$ through the magnitude $`k=|𝐤|`$ only. We now discuss the contributions to $`S(𝐤)`$ that have the form of a power law (i.e., $`ak^{-\chi }`$) where $`a`$ and $`\chi `$ are $`k`$-independent constants. First, we briefly recall the effect of thermal fluctuations. Consider an $`O(N)`$ vector model system (with $`N\ge 2`$) in the ordered phase with the thermally-averaged value of the order parameter $`𝚽(𝐱)`$ pointing in the $`x_1`$ direction. 
Due to the $`O(N)`$ symmetry, fluctuations in $`\mathrm{\Phi }_i(𝐱)`$ perpendicular to the $`x_1`$ direction (i.e., transverse fluctuations) cost an arbitrarily low amount of free energy for any large-length-scale fluctuations; this leads to strong scattering at small $`k`$ in the whole temperature range of the ordered phase. \[In contrast, strong low-$`k`$ scattering in systems having a scalar order parameter (for which transverse fluctuations do not exist) arises only in the immediate vicinity of a critical point (“critical opalescence”) and is due to longitudinal fluctuations (i.e., fluctuations in the order parameter magnitude).\] For our purposes, the important property of transverse fluctuations is that they occur on all length scales between the size of the system (small $`k`$) and the microscopic coherence length of the order parameter (large $`k`$). In a system of spatial dimension $`d>2`$ and described by the standard form of the gradient free energy $`E`$ $$E=\frac{\kappa }{2}\int d^dx\,\partial _i\mathrm{\Phi }_j(𝐱)\,\partial _i\mathrm{\Phi }_j(𝐱)$$ (6) (where summation over the indices $`i`$ and $`j`$ is implied), the contribution to the structure factor due to thermal fluctuations has the form $`a(T)k^{-2}`$, with a temperature-dependent amplitude given by the microscopic length scale $`a(T)=k_BT/\kappa `$ . Next, we discuss power-law contributions to the structure factor that arise from topological defects. Topologically stable defects occur in all $`O(N)`$ vector model systems whenever $`d\ge N`$. The order-parameter variations associated with the defects give rise to the Porod-tail form of the structure factor, $`S(k)\sim k^{-\chi }`$, for sufficiently large values of $`k`$. In contrast to the thermal-fluctuation contribution, the power-law contributions to $`S(k)`$ due to topological defects do not depend on temperature (provided the defect structure is not affected by the temperature) and occur in systems with discrete symmetries as well as in those with continuous symmetries. 
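The equipartition step behind the thermal contribution can be made concrete with a small numerical sketch (not part of the paper's derivation; the parameter values are ours): for the quadratic free energy of Eq. (6), each transverse Fourier amplitude is Gaussian-distributed, and direct quadrature of the Boltzmann weight reproduces $`a(T)k^{-2}`$ with $`a(T)=k_BT/\kappa `$.

```python
import math

# For the quadratic free energy of Eq. (6), a transverse Fourier amplitude phi
# carries Boltzmann weight exp(-kappa k^2 phi^2 / (2 kT)), so equipartition
# gives <phi^2> = kT/(kappa k^2), i.e. S(k) = a(T) k^{-2} with a(T) = kT/kappa.
def mode_variance(kappa, kT, k, n=100_001, cutoff=40.0):
    """<phi^2> under the weight exp(-a phi^2), a = kappa k^2/(2 kT), by quadrature."""
    a = kappa * k * k / (2.0 * kT)
    lim = cutoff / math.sqrt(a)  # integrate far into the Gaussian tails
    h = 2.0 * lim / (n - 1)
    num = den = 0.0
    for i in range(n):
        x = -lim + i * h
        w = math.exp(-a * x * x)
        num += x * x * w
        den += w
    return num / den

kappa, kT = 2.0, 0.5
for k in (0.5, 1.0, 2.0):
    expected = kT / (kappa * k * k)  # the a(T) k^{-2} law quoted above
    assert abs(mode_variance(kappa, kT, k) - expected) < 1e-6 * expected
```

This is the thermal, temperature-dependent contribution; the defect contributions discussed next are temperature-independent.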
In scalar systems, this power-law behavior has long been understood as arising from abrupt changes in the order parameter across domain walls present in the system . Consider, for simplicity, a one-dimensional system in which the order parameter suffers a kink. Viewed on length scales larger than the kink width (which is given by the coherence length of the order parameter), the order parameter abruptly changes from $`-1`$ to $`+1`$ at the kink location. Consequently, the integrand in Eq. (2) equals $`-1`$ within the distance $`|r|`$ from the kink (domain wall) location and $`+1`$ elsewhere in the system. This results in the normalized real-space correlation function of the form $`C(r)=1-2|r|/L`$, where $`L`$ is the system length. Notice that $`C(r)`$ is non-analytic at $`r=0`$, reflecting the singular variation of the order parameter at the kink. The non-analytic part of $`C(r)`$, after Fourier-transforming, yields a power-law form of the structure factor, $`S(k)\sim k^{-2}`$, valid for values of $`k`$ larger than the inverse system size and smaller than the inverse kink width. For domain walls in dimensions $`d`$ higher than one, additional factors of $`1/k`$ arise due to the spatial extent of the domain walls, with the result (averaged over domain-wall orientations) $`S(k)\sim k^{-(d+1)}`$. In regions between the domain walls, the order parameter does not vary, and therefore there are no extra contributions to the structure factor for $`k`$ values larger than the inverse domain size. We thus obtain a structure factor that has a power-law form for $`k`$ in the range between the inverse average domain wall separation and the inverse domain wall width, with the pre-factor proportional to the total domain wall area in the system. Bray and Humayun investigated in detail how the singular variation of the order parameter around a point or line defect in a system with continuous symmetry will likewise lead to an asymptotic power-law dependence on $`k`$ of the structure factor $`S(k)`$. In Ref. 
, they calculated exactly the exponent and the amplitude in the contribution $`Ak^{-\chi }`$ to the structure factor coming from an isolated defect order-parameter configuration in the $`O(N)`$ vector model. They also argued that for sufficiently large $`k`$, the structure factor of a system containing a number of topological defects can be obtained by multiplying $`Ak^{-\chi }`$ by the number density of defects. The reason is as follows. Variations in the order parameter on very short scales \[probed by $`S(k)`$ with $`k`$ large\] occur only in the singular regions surrounding the defect cores, which are separated by regions that barely contribute to $`S(k)`$. It is therefore appropriate to treat the contribution from each defect independently, and as coming from an isolated defect, provided that the configuration in the vicinity of the defect core is not affected by the presence of other defects. While this assumption is known to be valid in some special cases \[such as the $`O(2)`$ vector model\], it is in general non-trivial. For example, it was argued in Ref. that, for the case of point defects in the $`O(3)`$ vector model in $`d=3`$, a highly asymmetrical, string-like configuration develops in between the defects. Nevertheless, the prediction for $`S(k)`$ based on the assumption of undeformed defect configurations has been found to be in good agreement with numerical simulations in several $`O(N)`$ vector model phase-ordering systems . In the calculations we present in Secs. III–V, we shall assume that the regions close to the defect cores in a phase ordering system do possess the structure of an isolated defect, and in Sec. VI we shall comment on the nature of possible corrections to our results. 
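The one-dimensional kink argument above can be checked discretely (a sketch under our own assumptions: unit lattice spacing, and a periodic chain carrying a kink-antikink pair so that the configuration is compatible with a discrete Fourier transform):

```python
import cmath

# A discrete sketch of the 1d domain-wall (kink) argument: a periodic chain
# with a kink-antikink pair, transformed by direct DFT.
N = 512
phi = [1.0] * (N // 2) + [-1.0] * (N // 2)  # two domain walls

def dft_power(signal, n):
    """|Phi(k_n)|^2 at k_n = 2*pi*n/N, by direct DFT."""
    amp = sum(s * cmath.exp(-2j * cmath.pi * n * i / len(signal))
              for i, s in enumerate(signal))
    return abs(amp) ** 2

# S(k) = |Phi(k)|^2 / N; even harmonics vanish by symmetry, and the odd ones
# follow the domain-wall tail S(k) ~ k^{-2} (chi = 2d - D with d = 1, D = 0).
S1 = dft_power(phi, 1) / N
S9 = dft_power(phi, 9) / N
ratio = S1 / S9  # a clean k^{-2} tail gives (9/1)^2 = 81 for k_9/k_1 = 9
assert 75.0 < ratio < 87.0
assert dft_power(phi, 2) < 1e-9  # even harmonic: no wall contribution
```

The measured ratio deviates from 81 only through lattice corrections of order $`(k/\pi )^2`$, consistent with the tail being a large-distance (small-$`k`$-relative-to-core) statement.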
The discussion in the preceding paragraphs leads us to the following formulation of the generalized Porod law for the orientation-averaged structure factor at zero temperature of a system containing topological defects: $$S(k)\simeq A\rho k^{-\chi },$$ (7) where $`\rho `$ is the defect density (i.e., the domain wall area, string length, or number of point defects, per unit volume of the system), and $`A`$ is a dimensionless amplitude characterizing the given type of defect. The generalized Porod law is expected to be valid in the range $`L^{-1}\ll k\ll \xi ^{-1}`$, where $`L`$ is the average separation between defects present in the system and $`\xi `$ is the size of the defect cores. Specific cases exemplifying the validity of Eq. (7) will be given throughout the rest of the paper. The value of the Porod exponent $`\chi `$ can be anticipated on purely dimensional grounds. Denoting by $`d`$ the spatial dimensionality of the system and by $`D`$ the dimensionality of the topological defect (that is, $`D=0`$ for point defects, $`D=1`$ for string defects, and $`D=2`$ for domain walls), we see that the defect density $`\rho `$ in Eq. (7) scales as $`L^D/L^d`$, whereas the structure factor $`S(k)`$ has the dimension of $`L^d`$. Consequently, the exponent $`\chi `$ is given simply by $`2d-D`$ (see Sec. VI for a more precise derivation of this result). In contrast, the dimensionless amplitude $`A`$ depends in more detail on the order parameter in question. For example, we shall see that the amplitudes $`A`$ for the nematic hedgehog and for the $`O(3)`$ monopole in $`d=3`$ differ substantially. Nematic liquid crystals represent an interesting case where several distinct power-law contributions to the structure factor can be present simultaneously in the same system. These contributions are discussed in detail in Secs. IV B–IV E and are due to line defects (disclinations), point defects (nematic hedgehogs) and thermal fluctuations. 
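The dimensional-counting rule just derived can be written as a one-line helper (a sketch; the function name is ours, not the paper's):

```python
# The counting rule chi = 2d - D for the Porod exponent of a D-dimensional
# defect in d spatial dimensions.
def porod_exponent(d, D):
    """Porod exponent chi = 2d - D."""
    return 2 * d - D

assert porod_exponent(1, 0) == 2  # the 1d kink discussed earlier
assert porod_exponent(3, 2) == 4  # domain walls in d = 3
assert porod_exponent(3, 1) == 5  # line defects: vortex lines, disclinations
assert porod_exponent(3, 0) == 6  # point defects: monopoles, hedgehogs
```

These counting values recur throughout the specific calculations below.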
The competition among these terms can lead to a variety of forms of the tail of the structure factor appropriate to distinct physical conditions. Before describing in detail the case of nematic liquid crystals, however, we illustrate our computational techniques on the case of Porod amplitudes and exponents in $`O(N)`$-symmetric vector-model systems.

## III $`O(N)`$ symmetric vector systems

The Porod exponent $`\chi `$ and amplitude $`A`$ of a contribution to $`S(𝐤)`$ from a single defect in the configuration of an $`O(N)`$ vector model was computed in Ref. by first calculating the real-space correlation function $`C(𝐫)`$, Eq. (2), and then Fourier-transforming the result to obtain $`S(𝐤)`$. During this calculation, certain analytic terms (see Ref. ), which do not contribute to the power-law part of the Fourier transform, were discarded; also, for $`O(N)`$ systems with $`N`$ even, the calculation was complicated by the appearance of logarithmic terms in the intermediate result for $`C(𝐫)`$. We shall show below that the results of Ref. can be obtained in a simpler way by working directly in Fourier space, i.e., by first Fourier-transforming the order-parameter configuration, and then using Eq. (4) to obtain the structure factor . Besides being shorter, this method does not discard any terms, and does not involve any logarithmic terms.

### A Point defects in O(N) symmetric vector systems

We first perform the calculation for the case of the $`O(N)`$ vector model system in $`d=N`$ spatial dimensions. In this case, only point defects (i.e., monopoles) are topologically stable; moreover, for $`d=N=2`$, monopoles with charge amplitude $`|Q|>1`$ are energetically unstable towards the decay into monopoles with charge $`+1`$ or $`-1`$. In the following, we consider the case $`Q=+1`$; an identical result for the structure factor is obtained in the case $`Q=-1`$. The configuration of the $`Q=+1`$ (radial) monopole in a system with the free energy given by Eq. 
(6) is described by $$𝚽(𝐫)=\frac{𝐫}{r}.$$ (8) Note that we have omitted any spatial variation in the amplitude $`|𝚽|`$ of the order parameter. This corresponds to our focusing on length scales larger than the so-called core of the defect (i.e., the region of space in which the competition between condensation and gradient free energies gives rise to significant modification of $`|𝚽|`$). For a configuration having constant $`|𝚽|`$, the correlation function Eq. (2) and the structure factor Eq. (4) do not depend on $`|𝚽|`$; consequently, we have chosen $`|𝚽|=1`$ for our calculation. Due to spherical symmetry, the Fourier transform of $`𝚽(𝐫)`$ takes the form $$𝚽(𝐤)=𝐤f(k^2),$$ (9) where $`f`$ is a certain scalar function of the scalar $`k^2`$. By taking the scalar product of Eq. (9) with $`𝐤`$ and solving for $`f`$ we see that we may write $$𝚽(𝐤)=𝐤k^{-2}\int d^dr\,r^{-1}\,(𝐤\cdot 𝐫)\,\mathrm{e}^{i𝐤\cdot 𝐫}=-i𝐤k^{-2}\frac{\partial }{\partial \lambda }\Big|_{\lambda =1}F(\lambda ^2k^2),$$ (10) where the function $`F`$ is defined via $$F(\lambda ^2k^2)\equiv \int d^dr\,r^{-1}\mathrm{e}^{i\lambda 𝐤\cdot 𝐫},$$ (11) i.e., $`F`$ is the $`d`$-dimensional Fourier transform of the Coulomb potential $`r^{-1}`$, which can readily be shown , for $`𝐤\ne \mathrm{𝟎}`$, to be $$F(\lambda ^2k^2)=\frac{(4\pi )^{(d-1)/2}}{(\lambda k)^{d-1}}\mathrm{\Gamma }\left(\frac{d-1}{2}\right),$$ (12) where $`\mathrm{\Gamma }(x)`$ is the standard Gamma-function. By evaluating the derivative of $`F`$, and inserting it into Eq. (10), we find $$𝚽(𝐤)=i(d-1)(4\pi )^{(d-1)/2}\mathrm{\Gamma }\left(\frac{d-1}{2}\right)\frac{𝐤}{k^{d+1}}.$$ (13) The normalization factor $`M_{\mathrm{O}(\mathrm{N})}`$, given by Eq. (3), is the volume $`V`$ of the system. It follows from Eq. (4) that the structure factor $`S(k)`$ of a single point defect is then given by $$S(k)=\frac{1}{V}\frac{1}{\pi }(4\pi )^d\mathrm{\Gamma }^2\left(\frac{d+1}{2}\right)\frac{1}{k^{2d}}.$$ (14) Specifically, we obtain $`S(k)=V^{-1}4\pi ^2k^{-4}`$ for an $`O(2)`$ vortex in $`d=2`$, and $`S(k)=V^{-1}64\pi ^2k^{-6}`$ for an $`O(3)`$ monopole in $`d=3`$. 
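The algebra between Eqs. (13) and (14) can be verified numerically (a sketch; the function names are ours): squaring Eq. (13) gives the prefactor $`(d-1)^2(4\pi )^{d-1}\mathrm{\Gamma }^2((d-1)/2)`$, and the identity $`(d-1)\mathrm{\Gamma }((d-1)/2)=2\mathrm{\Gamma }((d+1)/2)`$ turns this into the closed form of Eq. (14).

```python
import math

# Check of the step from Eq. (13) to Eq. (14).
def amplitude_from_eq13(d):
    # |Phi(k)|^2 prefactor obtained by squaring Eq. (13)
    return (d - 1) ** 2 * (4 * math.pi) ** (d - 1) * math.gamma((d - 1) / 2) ** 2

def amplitude_from_eq14(d):
    # the closed form quoted in Eq. (14)
    return (1 / math.pi) * (4 * math.pi) ** d * math.gamma((d + 1) / 2) ** 2

for d in (2, 3, 4, 5):
    assert abs(amplitude_from_eq13(d) - amplitude_from_eq14(d)) < 1e-9 * amplitude_from_eq14(d)

# evaluating Eq. (14): 4 pi^2 at d = 2 and 64 pi^2 at d = 3
assert abs(amplitude_from_eq14(2) - 4 * math.pi ** 2) < 1e-9
assert abs(amplitude_from_eq14(3) - 64 * math.pi ** 2) < 1e-6
```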
Note that the result Eq. (14) does not depend on the magnitude $`|𝚽|`$ of the order parameter (we have chosen $`|𝚽|=1`$ in this subsection) as both the normalization factor Eq. (3) and the numerator in Eq. (4) are proportional to $`|𝚽|^2`$. For a system having $`\rho `$ defects per unit volume, the normalization factor $`V^{-1}`$ is replaced by $`\rho `$. The expression Eq. (14) is then in agreement with the result for $`S(k)`$ given in Ref. . It should be noted that by fully exploiting the spherical symmetry at hand, the method used above enables us to pass from a calculation involving vector quantities to a calculation involving only scalars. In Sec. IV B, we shall similarly be able to pass from a tensorial calculation, appropriate for the case of the nematic order parameter, to a scalar one.

### B Vortex lines in three-dimensional O(2) vector systems

We now turn to the case of line defects in $`O(N)`$ vector systems, specifically concentrating on the physically prominent case of vortex lines in three-dimensional $`O(2)`$ vector systems. In the case of the spherically-symmetric point defect in the $`O(N)`$ vector system, investigated in the previous subsection, the final result, Eq. (14), is independent of the orientation of the scattering wave-vector $`𝐤`$, due to the $`O(N)`$ symmetry. In contrast, the structure factor of a segment of a line defect depends on the angle between $`𝐤`$ and the orientation of the line segment; specifically, $`S(𝐤)`$ is dominated by contributions from defect segments that are perpendicular to the vector $`𝐤`$, as there are no short-distance order-parameter variations in the direction along the defect line. We first discuss the structure factor of a single straight vortex line. Consider a vortex-line segment in $`d=3`$ having its core located on the line $`(x,y)=(0,0)`$, and extending from $`z=-L/2`$ to $`z=L/2`$. 
The order parameter $`𝚽`$ does not depend on $`z`$, and the minimum-energy configuration of $`𝚽`$ in the $`xy`$ plane is identical to the configuration of a point defect in the corresponding $`O(2)`$ vector model in $`d=2`$. Thus, the structure factor is given by $$S(𝐤)=𝚽(𝐤)\cdot 𝚽(-𝐤)=S^{(2)}(k_x,k_y)\int _{-L/2}^{L/2}dz\,e^{ik_zz}\int _{-L/2}^{L/2}dz^{}\,e^{-ik_zz^{}},$$ (15) where $`S^{(2)}(k_x,k_y)`$ is the structure factor for the $`d=2`$ point defect configuration. From Porod’s law in $`d=2`$ we have $`S^{(2)}(k_x,k_y)=A^{(2)}/[V^{(2)}(k_x^2+k_y^2)^2]`$, where $`A^{(2)}`$ is a dimensionless constant, $`V^{(2)}`$ is the system area, and $`k_x^2+k_y^2=k^2\mathrm{sin}^2\theta `$, with $`\theta `$ the angle between $`𝐤`$ and the orientation of the defect line. We now let $`L\to \infty `$ and calculate the structure factor per unit length of the defect, $`S_{\mathrm{seg}}(𝐤)=\lim _{L\to \infty }S(𝐤)/L`$. By using the identity $$\underset{L\to \infty }{\mathrm{lim}}\,L^{-1}\int _{-L/2}^{L/2}dz\,\mathrm{exp}(iuz)\int _{-L/2}^{L/2}dz^{}\,\mathrm{exp}(-iuz^{})=2\pi \delta (u),$$ (16) we immediately obtain $$S_{\mathrm{seg}}(𝐤)=\frac{2\pi A^{(2)}}{V}\frac{1}{k^4\mathrm{sin}^4\theta }\delta (k\mathrm{cos}\theta ).$$ (17) Eq. (17) can be used as the basis for the evaluation of the structure factor of an arbitrary vortex-loop configuration, provided that the local radii of curvature of the defect line are large compared to $`k^{-1}`$ at all points along the loop (i.e. the curvature affects the correlations of the order parameter only on length scales exceeding $`k^{-1}`$). We then reach the general result for the structure factor of a vortex loop in three spatial dimensions $$S_{\mathrm{loop}}(𝐤)=\frac{2\pi A^{(2)}}{V}\frac{1}{k^4}\int ds\,\frac{\delta (k\mathrm{cos}\theta (s))}{\mathrm{sin}^4\theta (s)},$$ (18) where $`s`$ denotes the loop arc-length. 
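The delta-function identity of Eq. (16) can be checked numerically (a sketch; the helper names are ours): the normalized window kernel $`L^{-1}|\int _{-L/2}^{L/2}e^{iuz}dz|^2=(2\mathrm{sin}(uL/2)/u)^2/L`$ carries total weight $`2\pi `$ for any $`L`$, and narrows as $`L`$ grows, which is precisely the statement that it tends to $`2\pi \delta (u)`$.

```python
import math

# The squared window of Eq. (16), normalized by L.
def window_kernel(u, L):
    if u == 0.0:
        return L
    return (2.0 * math.sin(u * L / 2.0) / u) ** 2 / L

def kernel_weight(L, umax=100.0, n=400_001):
    """Riemann-sum of the kernel over u in [-umax, umax]; should give ~2*pi."""
    h = 2.0 * umax / (n - 1)
    return sum(window_kernel(-umax + i * h, L) * h for i in range(n))

# total weight is 2*pi up to the small truncated tail beyond |u| = umax
assert abs(kernel_weight(L=50.0) - 2.0 * math.pi) < 0.05
```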
For example, for a circular loop with radius $`R`$ that has the normal to the loop plane oriented at an angle $`\zeta `$ relative to $`𝐤`$, we use the parameterization $`\theta =\mathrm{arccos}[\mathrm{sin}(\zeta )\mathrm{sin}(s/R)]`$ with $`s\in [0,2\pi R]`$ and obtain $$S_{\mathrm{circ}}(𝐤)=\frac{4\pi RA^{(2)}}{V}\frac{1}{k^5|\mathrm{sin}\zeta |}.$$ (19) The $`k^{-5}`$ dependence in Eq. (19) agrees with the general Porod-law form, Eq. (7), with the exponent $`\chi =2d-D=5`$ corresponding to the spatial dimensionality $`d=3`$ and defect dimensionality $`D=1`$. The result Eq. (19) is valid in the range $`R^{-1}\ll k\ll \xi ^{-1}`$, where $`\xi `$ is the core size of the disclination. Note that $`S_{\mathrm{circ}}`$ reaches its minimal value, $`4\pi RA^{(2)}/Vk^5`$, when $`𝐤`$ lies in the plane of the defect loop (i.e., $`\zeta =\pi /2`$), and diverges as the angle $`\zeta `$ approaches $`0`$ or $`\pi `$ (i.e. $`𝐤`$ perpendicular to the loop plane). This reflects the fact that the length of the loop segments at which the angle $`\theta `$ between $`𝐤`$ and the local vortex orientation is sufficiently close to $`\pi /2`$ for the segment contribution Eq. (17) to be nonzero increases with decreasing $`\zeta `$ (for $`\zeta =0`$, we have $`\theta =\pi /2`$ all along the loop). Equation (19) ceases to be valid in the limit $`\zeta \to 0`$; for finite $`R`$, the regime of validity is restricted to angles $`|\zeta |>\pi /(kR)`$ . It is worth stressing that there is no need for any large-distance cut-off in the loop calculations, as none of the integrals leading to Eq. (19) diverges at small $`k`$. The factor of system volume $`V`$ in Eqs. (17)–(19) arises purely due to the chosen normalization of the structure factor. The presence for each loop segment of an antipodal loop segment at separation $`2R`$ is reflected in the structure factor result solely through the fact that the validity of Eq. (19) is restricted to $`k`$ values larger than $`R^{-1}`$. 
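The delta-function reduction leading from Eq. (18) to Eq. (19) can be verified numerically (a sketch; function and variable names are ours): the arc-length integral of $`\delta (k\mathrm{cos}\theta (s))/\mathrm{sin}^4\theta (s)`$ collapses, via the zeros of $`g(s)=k\mathrm{sin}(\zeta )\mathrm{sin}(s/R)`$, to $`2R/(k|\mathrm{sin}\zeta |)`$.

```python
import math

# Delta-composition rule: int ds delta(g(s)) f(s) = sum over zeros s0 of
# f(s0)/|g'(s0)|, applied to the circular-loop integrand of Eq. (18).
def loop_delta_sum(k, R, zeta, nscan=100_000):
    g = lambda s: k * math.sin(zeta) * math.sin(s / R)
    gp = lambda s: (k * math.sin(zeta) / R) * math.cos(s / R)
    h = 2.0 * math.pi * R / nscan
    total = 0.0
    for i in range(nscan):
        a, b = i * h, (i + 1) * h
        if g(a) == 0.0 or g(a) * g(b) < 0.0:
            lo, hi = a, b  # bisect to the zero s0
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if g(lo) * g(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            s0 = 0.5 * (lo + hi)
            sin4 = (1.0 - (math.sin(zeta) * math.sin(s0 / R)) ** 2) ** 2
            total += 1.0 / (abs(gp(s0)) * sin4)
    return total

k, R, zeta = 3.0, 2.0, 0.7
assert abs(loop_delta_sum(k, R, zeta) - 2.0 * R / (k * abs(math.sin(zeta)))) < 1e-6
```

Multiplying this by the $`2\pi A^{(2)}/(Vk^4)`$ prefactor of Eq. (18) reproduces Eq. (19), including the $`1/|\mathrm{sin}\zeta |`$ divergence as the loop normal aligns with $`𝐤`$.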
Now consider the situation, typical during the phase-ordering process following a quench, in which the system contains many vortex loops of random orientations. In this situation, the properties of the system are globally isotropic and the structure factor $`S(𝐤)`$ depends only on the magnitude $`k`$ of the scattering wave-vector. We may obtain $`S(k)`$ by averaging the result in Eq. (17) over all orientations of the defect segment: $`S_{\mathrm{isotr}}(k)\equiv \frac{1}{2}\int _0^\pi d\theta \,\mathrm{sin}\theta \,S_{\mathrm{seg}}(k,\mathrm{cos}\theta )`$. By using the result (17) we obtain the structure factor per unit defect length in a globally isotropic system: $$S_{\mathrm{isotr}}(k)=\frac{\pi A^{(2)}}{V}\frac{1}{k^5}.$$ (20) So far, all arguments in this section have depended only on the scattering geometry (i.e. the shape and orientation of the defect line and the orientation of the wave-vector), and were valid independent of the form of the order parameter. To obtain specific results for the case of the $`O(2)`$ vector model in $`d=3`$, it suffices to substitute the result $`A^{(2)}=4\pi ^2`$ for the Porod amplitude of the point defect in the $`O(2)`$ vector model in $`d=2`$ \[Eq. (14) for $`d=N=2`$\] into the general expressions (17)–(20). For example, using $`A^{(2)}=4\pi ^2`$ in Eq. (20), we obtain the Porod amplitude in a phase-ordering $`O(2)`$ vector system in $`d=3`$ as $`A^{(3)}=4\pi ^3`$. This last result is in agreement with the corresponding result obtained in Ref. .

## IV Uniaxial nematic systems

### A The nematic order parameter

For the case of nematic liquid crystals the local order-parameter field $`Q_{\alpha \beta }(𝐫,t)`$ is a symmetric traceless rank-2 tensor with Cartesian indices $`\alpha ,\beta =1,2,3`$ (see, e.g., Ref. ). 
In the common case of the uniaxial nematic, the order parameter has the form $$Q_{\alpha \beta }=\frac{3}{2}S_1(u_\alpha u_\beta -\frac{1}{3}\delta _{\alpha \beta }),$$ (21) where the order-parameter magnitude $`S_1`$ determines the strength of orientational ordering of the nematogen molecules, and the director $`𝐮`$ gives the local value of the preferred orientation of the molecules. In the more general biaxial nematic case, the order parameter may be written as $$Q_{\alpha \beta }=\frac{3}{2}S_1(u_\alpha u_\beta -\frac{1}{3}\delta _{\alpha \beta })+\frac{1}{2}S_2(b_\alpha b_\beta -v_\alpha v_\beta ),$$ (22) where $`\pm 𝐮`$ is the uniaxial director, $`\pm 𝐛`$ is the biaxial director, $`𝐯\equiv 𝐮\times 𝐛`$, and the amplitudes $`S_1`$ and $`S_2`$ determine, respectively, the strength of uniaxial and biaxial ordering. For the nematic, the real-space correlation function $`C(𝐫)`$ is defined as $$C(𝐫)=\frac{1}{M_{\mathrm{nem}}}\int d^3x\,\mathrm{Tr}[Q(𝐱)Q(𝐱+𝐫)],$$ (23) where $`\mathrm{Tr}[AB]\equiv A_{\alpha \beta }B_{\beta \alpha }`$ \[cf. Eq. (2)\], and the normalization factor $`M_{\mathrm{nem}}`$, which enforces $`C(r=0)=1`$, is given by $$M_{\mathrm{nem}}=\int d^3x\,\mathrm{Tr}[Q(𝐱)Q(𝐱)]=\frac{V}{2}(3S_1^2+S_2^2),$$ (24) in which $`V`$ is the volume of the system. For the nematic, Eq. (4) becomes $$S(𝐤)=\frac{1}{M_{\mathrm{nem}}}\mathrm{Tr}[Q(𝐤)Q(-𝐤)].$$ (25) In this Paper, we restrict our attention to unpolarized light scattering, in which case the structure factor (25) is directly proportional to the measured scattered-light intensity . When present, a given type of defect will adopt an equilibrium configuration of the nematic director that minimizes the nematic free energy. 
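The normalization in Eq. (24) can be verified with a small numerical sketch (helper names are ours): for the biaxial order parameter of Eq. (22) built on any orthonormal triad $`(𝐮,𝐛,𝐯=𝐮\times 𝐛)`$, the local value of $`\mathrm{Tr}[QQ]`$ is $`(3S_1^2+S_2^2)/2`$, which integrates to $`M_{\mathrm{nem}}=(V/2)(3S_1^2+S_2^2)`$.

```python
import math, random

def random_triad(rng):
    """A random orthonormal triad (u, b, v = u x b)."""
    u = [rng.gauss(0, 1) for _ in range(3)]
    nu = math.sqrt(sum(x * x for x in u))
    u = [x / nu for x in u]
    w = [rng.gauss(0, 1) for _ in range(3)]
    dot = sum(a * c for a, c in zip(u, w))
    b = [w[i] - dot * u[i] for i in range(3)]
    nb = math.sqrt(sum(x * x for x in b))
    b = [x / nb for x in b]
    v = [u[1] * b[2] - u[2] * b[1], u[2] * b[0] - u[0] * b[2], u[0] * b[1] - u[1] * b[0]]
    return u, b, v

def q_tensor(S1, S2, u, b, v):
    """The biaxial order parameter of Eq. (22)."""
    d = lambda i, j: 1.0 if i == j else 0.0
    return [[1.5 * S1 * (u[i] * u[j] - d(i, j) / 3.0) + 0.5 * S2 * (b[i] * b[j] - v[i] * v[j])
             for j in range(3)] for i in range(3)]

rng = random.Random(7)
S1, S2 = 0.8, 0.3
u, b, v = random_triad(rng)
Q = q_tensor(S1, S2, u, b, v)
tr_qq = sum(Q[i][j] * Q[j][i] for i in range(3) for j in range(3))
assert abs(tr_qq - 0.5 * (3 * S1 ** 2 + S2 ** 2)) < 1e-12
assert abs(Q[0][0] + Q[1][1] + Q[2][2]) < 1e-12  # traceless, as required
```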
For future reference, we recall that the Frank free energy of a uniaxial nematic has (apart from surface terms) the general form $$E=\frac{1}{2}\int d^3x\left[K_{11}(\nabla \cdot 𝐮)^2+K_{22}(𝐮\cdot \nabla \times 𝐮)^2+K_{33}\left|𝐮\times (\nabla \times 𝐮)\right|^2\right],$$ (26) where the splay ($`K_{11}`$), twist ($`K_{22}`$), and bend ($`K_{33}`$) elastic constants depend on the order parameter magnitude $`S_1`$. Eq. (26) is often simplified by adopting the so-called one-constant approximation, $`K_{11}=K_{22}=K_{33}\equiv K`$, in which case the free energy (apart from surface terms) can be written as $$E=\frac{K}{2}\int d^3x\,(\partial _iu_j)(\partial _iu_j).$$ (27)

### B Hedgehog defects in uniaxial nematic systems

A three-dimensional uniaxial nematic system admits topologically stable point defects (i.e., nematic hedgehogs) as well as line defects (i.e., nematic disclinations). (For a pedagogical discussion of the issue of topological stability, see, e.g., Ref. .) The one-constant approximation to the Frank free energy, Eq. (27), admits (up to global rotations) two minimum-energy point-defect solutions with unit topological charge. In the radial hedgehog configuration, the director $`\pm 𝐮`$ points everywhere radially outwards from the center of the defect, and is given by $`𝐮=𝐫/r`$. The hyperbolic hedgehog configuration can be obtained from the radial hedgehog configuration by inverting one of the components of the director $`𝐮`$, e.g. $`𝐮=(x/r,y/r,-z/r)`$. Note that the two configurations are topologically equivalent . The calculation described below gives identical results for both the radial and the hyperbolic hedgehog; for simplicity, we shall work with the radial hedgehog configuration of the director, $$Q_{\alpha \beta }(𝐫)=\frac{3}{2}S_1\left(\frac{r_\alpha }{r}\frac{r_\beta }{r}-\frac{1}{3}\delta _{\alpha \beta }\right).$$ (28) The following calculation of $`S(𝐤)`$ closely mirrors the steps Eqs. (9)–(14) in our $`O(N)`$ vector model calculation. 
First, we observe that the Fourier transform of $`Q_{\alpha \beta }(𝐫)`$ is itself a symmetric, traceless, rank-2 tensor, and that due to symmetry it can depend only on the direction $`\pm 𝐤`$. The general form of such a tensor is $$Q_{\alpha \beta }(𝐤)=\frac{3}{2}S_1(k_\alpha k_\beta -\frac{1}{3}\delta _{\alpha \beta }k^2)g(k^2),$$ (29) where $`g(k^2)`$ is a scalar function of the scalar $`k^2`$. By contracting Eq. (29) with $`k_\alpha k_\beta `$ and solving for $`g`$ we can rewrite Eq. (29) as $`Q_{\alpha \beta }(𝐤)`$ $`=`$ $`{\displaystyle \frac{9S_1}{4k^4}}\left(k_\alpha k_\beta -{\displaystyle \frac{1}{3}}\delta _{\alpha \beta }k^2\right){\displaystyle \int d^3r\left[\left(\frac{𝐤\cdot 𝐫}{r}\right)^2-\frac{k^2}{3}\right]\mathrm{e}^{i𝐤\cdot 𝐫}}`$ (30) $`=`$ $`{\displaystyle \frac{9S_1}{4k^4}}\left(k_\alpha k_\beta -{\displaystyle \frac{1}{3}}\delta _{\alpha \beta }k^2\right)\left[-(2\pi )^3{\displaystyle \frac{k^2}{3}}\delta (𝐤)-{\displaystyle \frac{\partial ^2}{\partial \lambda ^2}}\Big|_{\lambda =1}G(\lambda ^2k^2)\right].`$ (31) In this expression the $`\delta `$-function part corresponds to the forward-scattered beam, and will be dropped henceforth, as it does not influence the large-$`k`$ behavior. The function $`G`$ is defined via $$G(\lambda ^2k^2)\equiv \int d^3r\,r^{-2}\mathrm{e}^{i\lambda 𝐤\cdot 𝐫},$$ (32) i.e., $`G`$ is the three-dimensional Fourier transform of the potential $`r^{-2}`$, which can readily be shown to be given by $$G(\lambda ^2k^2)=\frac{2\pi ^2}{\lambda k}.$$ (33) Evaluating the second derivative of $`G`$, and inserting it into Eq. (31), leads to $$Q_{\alpha \beta }(𝐤)=-\frac{9\pi ^2S_1}{k^5}\left(k_\alpha k_\beta -\frac{1}{3}\delta _{\alpha \beta }k^2\right).$$ (34) From Eq. (24) we see that the normalization factor $`M_{\mathrm{nem}}`$ takes the value $`3VS_1^2/2`$. Combining Eqs. 
(24), (25) and (34) gives for the structure factor $`S(k)`$ of a single nematic hedgehog defect in a system of volume $`V`$ the result $$S(k)=\frac{36\pi ^4}{V}\frac{1}{k^6}.$$ (35) Note that the Porod amplitude $`36\pi ^4`$ for the nematic hedgehog differs substantially from the Porod amplitude $`12\pi ^3`$ (see Sec. III A) for the $`O(3)`$ monopole in $`d=3`$. ### C Disclination lines in uniaxial nematic systems We now calculate $`S(𝐤)`$ for the case of a disclination line defect in a uniaxial nematic system. In the topologically stable disclination configuration, the director rotates about the core of the disclination by $`\pm 180^{\circ }`$ (see, e.g., Ref. ). Disclinations with $`\pm 360^{\circ }`$ rotations are unstable towards the “escape in the third dimension” and do not have a singular core. We shall concentrate our attention on the topologically stable disclinations (appearing as thin threads in optical observations), as the $`\pm 360^{\circ }`$ disclinations (appearing as thick threads) do not give rise to a power-law contribution to $`S(k)`$. In Section III B, we gave general geometric arguments that allowed us to express the structure factor of a linear defect in $`d=3`$ in several specific configurations in terms of the Porod amplitude $`A^{(2)}`$ of the point defect in $`d=2`$ (obtained as the cross-section of the line defect in $`d=3`$). Equations (17)–(20) are valid regardless of the order parameter in question and may, thus, also be used for nematic systems. The remaining task therefore is to obtain an expression for $`A^{(2)}`$ in the nematic case. As the director rotates about the core by $`180^{\circ }`$, it is clear that the approach of Sec. IV B, which is based on rotational symmetry, cannot be directly applied. It is readily seen, however, that the calculation of the Porod-law amplitude for the $`180^{\circ }`$ nematic defect can be mapped onto the corresponding calculation for the $`360^{\circ }`$ defect in the $`O(2)`$ vector model in $`d=2`$. 
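The key analytic ingredient of the hedgehog calculation above, Eq. (33), can be verified by direct numerical integration. The following sketch is an illustration we add here (the damping factor $e^{-\varepsilon r}$ is a standard regulator for the oscillatory tail, and the particular values of $k$ and $\varepsilon$ are arbitrary):

```python
import numpy as np

# 3D Fourier transform of 1/r^2 at wavenumber k:
#   G(k) = int d^3r e^{-i k.r} / r^2
#        = (4*pi/k) * int_0^inf sin(kr)/r dr = 2*pi^2/k,
# after carrying out the angular integration.  A factor exp(-eps*r)
# regularizes the oscillatory tail of the radial integral.
k, eps = 5.0, 0.01
dr = 0.005
r = np.arange(1e-6, 1500.0, dr)
integrand = 4 * np.pi * np.sin(k * r) / (k * r) * np.exp(-eps * r)

# trapezoid rule
G_num = dr * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
G_exact = 2 * np.pi ** 2 / k
```

Sending $\varepsilon\to 0$ recovers Eq. (33) exactly; at $\varepsilon=0.01$ the regulator shifts the result by about $0.1\%$.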
For the $`180^{\circ }`$ uniaxial nematic defect, in the one-constant Frank free energy approximation, the order-parameter configuration is given (up to a global rotation) by Eq. (21) with $`𝐮(𝐱)=(\mathrm{cos}\frac{1}{2}\varphi (𝐱),\mathrm{sin}\frac{1}{2}\varphi (𝐱),0)`$, where $`\varphi (𝐱)`$ is the polar angle of the radius-vector $`𝐱`$ in the plane perpendicular to the disclination line. The correlation function Eq. (23) then becomes $`C_{180^{\circ }}^{(2)}(r)`$ $`=`$ $`{\displaystyle \frac{(3S_1/2)^2}{M_{\mathrm{nem}}}}{\displaystyle \int d^2x\,\mathrm{cos}^2\left(\frac{1}{2}\varphi (𝐱)-\frac{1}{2}\varphi (𝐱+𝐫)\right)}+c_1`$ (37) $`=`$ $`{\displaystyle \frac{(3S_1/2)^2}{M_{\mathrm{nem}}}}{\displaystyle \int d^2x\,\frac{1}{2}\mathrm{cos}\left(\varphi (𝐱)-\varphi (𝐱+𝐫)\right)}+c_2,`$ (38) where $`c_1`$ and $`c_2`$ are numerical constants. On the other hand, for the $`360^{\circ }`$ defect in the $`O(2)`$ model in $`d=2`$, the order-parameter configuration is given by $`𝚽(𝐱)=(\mathrm{cos}\varphi (𝐱),\mathrm{sin}\varphi (𝐱))`$, and the corresponding correlation function, Eq. (2), is given by $$C_{360^{\circ }}^{(2)}(r)=\frac{1}{M_{O(2)}}\int d^2x\,\mathrm{cos}[\varphi (𝐱)-\varphi (𝐱+𝐫)].$$ (39) After Fourier-transforming Eqs. (38) and (39) and omitting the $`\delta `$-function term arising from the constant $`c_2`$ in Eq. (38), we obtain that the structure factors of the two defects are related simply by $$S_{180^{\circ }}^{(2)}(k)=\frac{(3S_1/2)^2M_{O(2)}}{2M_{\mathrm{nem}}}S_{360^{\circ }}^{(2)}(k).$$ (40) Using the result $`S_{360^{\circ }}^{(2)}(k)=4\pi ^2k^{-4}/V^{(2)}`$ \[i.e., Eq. (14) for $`d=2`$, with the area of the system denoted by $`V^{(2)}`$\], and the normalization factors $`M_{O(2)}=V^{(2)}`$ and $`M_{\mathrm{nem}}=V^{(2)}3S_1^2/2`$, we finally obtain the structure factor $`S_{180^{\circ }}^{(2)}(k)`$ for the $`180^{\circ }`$ point defect in a two-dimensional nematic: $$S_{180^{\circ }}^{(2)}(k)=\frac{3\pi ^2}{V^{(2)}}\frac{1}{k^4}.$$ (41) By using this result (i.e., $`A^{(2)}=3\pi ^2`$) in Eq. 
(19), we obtain the structure factor of a nematic disclination loop of radius $`R`$ $$S_{\mathrm{circ}}(k)=\frac{12\pi ^3R}{V}\frac{1}{k^5|\mathrm{sin}\zeta |}.$$ (42) Likewise, by using Eq. (20) we obtain the orientation-averaged contribution to the structure factor from a segment of length $`L`$ of a nematic disclination $$S(k)=\frac{L}{V}\,3\pi ^3\frac{1}{k^5}.$$ (43) It follows that the structure factor for a globally isotropic system containing disclination lines is given by $`S(k)=\rho \,3\pi ^3k^{-5}`$, with the defect density $`\rho `$ having the meaning of the total disclination length per unit volume of the system. ### D Ring defects in uniaxial nematic systems Topologically stable disclination lines in a uniaxial nematic cannot terminate in the bulk — they must form closed loops, or else extend to the system boundary. Depending on the details of the director configuration, the disclination loop can carry any (integral) monopole charge . In this section, we discuss typical examples of disclination loops with zero and nonzero monopole charges and their contributions to the structure factor. Consider first the case of a “twist disclination” loop, which carries zero monopole charge. The cross-section of the director configuration around a twist disclination is shown in Fig. 1a. The axis of rotation of the director is perpendicular to the direction of the disclination line. The director configuration obtained by forming a closed loop of a disclination that locally has the “twist” structure of Fig. 1a is illustrated in Fig. 1b. This director configuration is homogeneous at large distances from the loop, and consequently does not result in a power-law contribution to the structure factor for $`k\ll R^{-1}`$. For length scales smaller than $`R`$ (i.e., for $`k\gg R^{-1}`$), the twist disclination loop is characterized by the structure factor given in Eq. (42). Next, consider the “wedge disclination” loop. 
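Before proceeding, the chain leading from Eq. (40) to Eq. (41) admits a one-line arithmetic check, which we add here for convenience (the value of $`S_1`$ is arbitrary and drops out): the prefactor in Eq. (40) evaluates to $3/4$, reducing the $O(2)$ amplitude $4\pi^2$ to the nematic amplitude $3\pi^2$.

```python
import math

S1 = 0.7            # arbitrary order-parameter magnitude; it drops out of the ratio
V2 = 1.0            # area V^(2) of the two-dimensional system
M_O2 = V2                      # O(2) normalization factor
M_nem = V2 * 3 * S1 ** 2 / 2   # nematic normalization factor

factor = (3 * S1 / 2) ** 2 * M_O2 / (2 * M_nem)   # prefactor in Eq. (40)
A_360 = 4 * math.pi ** 2       # Porod amplitude of the O(2) vortex [Eq. (14), d=2]
A_180 = factor * A_360         # Porod amplitude of the 180-degree nematic defect
```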
In this configuration, the director rotates about an axis parallel to the disclination line (see Fig. 2a). The resulting director configuration (see Fig. 2b) has the structure of a radial hedgehog (see Sec. IV B) outside of the disclination loop. This results in a power-law contribution to the structure factor characterizing the nematic hedgehog in $`d=3`$, Eq. (35), valid for $`k\ll R^{-1}`$, where $`R`$ is the ring radius. On the other hand, for $`k\gg R^{-1}`$, we have the usual disclination-loop structure factor, Eq. (42). To summarize, the structure factor of a wedge disclination loop is given by $`S(k)=\{\begin{array}{cc}\frac{36\pi ^4}{V}\frac{1}{k^6},\hfill & k\ll R^{-1}\text{;}\hfill \\ \frac{12\pi ^3R}{V}\frac{1}{k^5|\mathrm{sin}\zeta |},\hfill & R^{-1}\ll k\ll \xi ^{-1}\text{ .}\hfill \end{array}`$ (44) Therefore the large-$`k`$ region of the structure factor measured in a system containing a wedge disclination loop of radius $`R`$ is expected to exhibit a crossover between two power-law regimes with exponents $`-6`$ and $`-5`$ (Fig. 3). The crossover point, as defined in Fig. 3, is predicted to occur at the value $`k_\mathrm{c}=3\pi |\mathrm{sin}\zeta |/R`$. Note that measuring the location $`k_\mathrm{c}`$ of the crossover allows one to obtain information about the ring-defect size $`R`$ from un-normalized data for $`S(k)`$; this should be contrasted with the use of the Porod law based on Eq. (43), where the total string length is extracted from the (rarely experimentally available) normalized structure factor. Even though the wedge disclination loop is a topologically admissible configuration, the (energetics-dependent) question of its occurrence in real nematic systems has recently attracted some interest. The energetic stability of the wedge loop configuration was investigated theoretically in Refs. and . 
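The location of the crossover follows from equating the two branches of Eq. (44). The sketch below is our own check (ring radius, tilt angle, and volume are arbitrary): the hedgehog branch and the loop branch intersect at $`k_\mathrm{c}=3\pi |\mathrm{sin}\zeta |/R`$.

```python
import math

R, zeta, V = 1.0e4, 1.1, 1.0   # ring radius, tilt angle, system volume (arbitrary units)

def hedgehog(k):
    """k << 1/R branch of Eq. (44)."""
    return 36 * math.pi ** 4 / (V * k ** 6)

def loop(k):
    """1/R << k << 1/xi branch of Eq. (44)."""
    return 12 * math.pi ** 3 * R / (V * k ** 5 * abs(math.sin(zeta)))

# crossover wave-vector obtained by equating the two branches
k_c = 3 * math.pi * abs(math.sin(zeta)) / R
```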
There, it was found that, provided certain restrictions on the elastic constants in the nematic free energy are satisfied, there does indeed exist a non-zero equilibrium radius of the loop: a ring of a larger (smaller) radius will tend to shrink (expand). Depending on the values of the elastic constants, Ref. predicts $`R_{\mathrm{eq}}`$ in the range $`10\xi `$–$`10^4\xi `$, where $`\xi `$ is the coherence length of the nematic order parameter (i.e., $`\xi `$ is of the order of $`10`$–$`100\,\AA `$). Note that in a nematic with a positive dielectric or diamagnetic susceptibility anisotropy, the loop can be made unstable towards expansion to a larger radius by applying an electric or magnetic field along the axis of the defect ring . Wedge disclination loops with a non-zero equilibrium radius were recently observed in numerical simulations . Experimental observations of such a configuration, however, are currently not available. The form of the structure factor predicted in Eq. (44), with its characteristic crossover, may be used as an experimental signature for the observation of the wedge ring configuration . ### E Thermal fluctuations of director orientation Up to this point, we have ignored thermal fluctuations — we have effectively assumed that the structure factor was measured at zero temperature. Nematic liquid crystalline phases, however, typically occur and are studied in the room-temperature range, and thermal effects can play an important role in the scattering of light. Due to the presence of low-energy fluctuations in the orientation of the nematic director, the system exhibits enhanced turbidity throughout the whole range of temperatures where the nematic phase is stable. These transverse fluctuations of the director result in a power-law contribution to the structure factor $`S(k)`$, even when no defects are present in the system. We now very briefly review the theory of scattering by thermal fluctuations in bulk uniaxial nematic liquid crystals . 
Consider a well-aligned region of the sample with the average director $`𝐮_\mathrm{𝟎}`$ parallel to the $`z`$-axis. The local value of the director can be written (to first order in $`𝐮_{\perp }`$) as $`𝐮(𝐫)=𝐮_\mathrm{𝟎}+𝐮_{\perp }(𝐫)`$, where the transverse fluctuation $`𝐮_{\perp }(𝐫)`$, by definition, satisfies $`𝐮_{\perp }(𝐫)\perp 𝐮_\mathrm{𝟎}`$, and we assume that $`|𝐮_{\perp }(𝐫)|\ll 1`$. The nematic structure factor, Eq. (25), then becomes \[upon dropping terms proportional to $`\delta (k)`$\] $$S(k)=\frac{3}{2V}𝐮_{\perp }(𝐤)\cdot 𝐮_{\perp }(-𝐤).$$ (45) Note that the quantity $`𝐮_{\perp }(𝐤)\cdot 𝐮_{\perp }(-𝐤)`$ must now be regarded as a thermal average. Its value is readily obtained from the equipartition theorem. In the “one-constant” approximation, using $`𝐮(𝐫)=𝐮_\mathrm{𝟎}+𝐮_{\perp }(𝐫)`$ in Eq. (27) results in the free energy $$E=\frac{1}{2}K\int \frac{d^3k}{(2\pi )^3}k^2|𝐮_{\perp }(𝐤)|^2.$$ (46) Furthermore, it is possible to decompose $`𝐮_{\perp }(𝐤)`$ into two orthogonal modes that decouple ; by equipartition, we then have $`|𝐮_{\perp }(𝐤)|^2=k_\mathrm{B}TV/(Kk^2)`$. Equation (45) thus becomes $$S_\mathrm{T}(k)=\frac{3}{2}\frac{k_\mathrm{B}T}{Kk^2}.$$ (47) Taking a typical value $`K=10^{-6}`$ dynes for the elastic constant and $`k_\mathrm{B}T\approx 0.5\times 10^{-13}`$ erg for the thermal energy at room temperatures, one obtains the estimate $$S_\mathrm{T}(k)\approx \frac{8\,\AA }{k^2}.$$ (48) Note that in the light-scattering literature, the structure factor is often defined without the normalization factor $`M_{\mathrm{nem}}`$ in Eq. (25), and then has the dimension of volume squared, rather than of volume as in our case. It should be stressed that Eq. (47) was derived from a free energy appropriate for small fluctuations in a well-aligned part of the nematic sample. Equation (47) can therefore be used to describe scattering from a system with a spatially varying director only provided that there are no appreciable variations in the static director orientation on the scale of $`k^{-1}`$. 
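The numerical estimate in Eq. (48) follows directly from Eq. (47) with the quoted material constants; the following one-line evaluation is added here for convenience:

```python
# Thermal Porod-tail prefactor of Eq. (48): (3/2) k_B T / K
K = 1.0e-6      # Frank elastic constant, dyn (typical nematic value quoted in the text)
kBT = 0.5e-13   # thermal energy at room temperature, erg

prefactor_cm = 1.5 * kBT / K            # in cm
prefactor_angstrom = prefactor_cm * 1.0e8  # convert cm -> Angstrom; approx 8 Angstrom
```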
Even in strongly inhomogeneous samples containing topological defects, thermal fluctuations will dominate the scattering intensity provided either that the defect density is sufficiently low or that $`k`$ is sufficiently high. To illustrate this point, we now consider light scattering from a nematic droplet of radius $`R`$ with normal (homeotropic) boundary conditions on the droplet surface. In the minimum-energy configuration, the droplet contains a radial hedgehog in the center. (As discussed in Sec. IV D, the radial hedgehog is, in fact, expected to take the form of a wedge disclination loop of small radius $`r`$; in the present paragraph, we restrict our attention to length scales exceeding $`r`$ and do not consider the detailed structure of the hedgehog.) Consequently, for a given scattering wave-vector $`k\gg 2\pi /R`$, the central region of the droplet will contribute to the structure factor according to Eq. (35), $`S_{\mathrm{def}}(k)=V^{-1}\,36\pi ^4k^{-6}`$, where $`V=4\pi R^3/3`$. In the region $`2\pi /k<r<R`$, the static order-parameter configuration does not contain variations that contribute substantially to $`S(k)`$; however, thermal fluctuations on the scale $`2\pi /k`$ still exist. The contribution to $`S(k)`$ from these is given by Eq. (48). The resulting ratio $`\gamma (k)`$ of the thermal and static scattering intensities, i.e. $`\gamma (k)\equiv S_\mathrm{T}(k)/S_{\mathrm{def}}(k)=0.04(Rk)^3k`$, where $`k`$ is given in inverse Å, can vary over a wide range. Clearly the static contribution $`S_{\mathrm{def}}(k)`$ dominates when $`k`$ is sufficiently small (i.e. near the forward-scattered beam). Consider, on the other hand, scattering at the experimentally accessible value of $`k=10^{-5}\,\AA ^{-1}`$. For a droplet of radius $`R=1\,\mathrm{cm}`$, we obtain $`\gamma \approx 400`$; for a droplet of smaller radius $`R=100\,\mu \mathrm{m}`$, we obtain $`\gamma \approx 0.0004`$. 
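The two droplet sizes just quoted can be re-evaluated directly from the estimate $`\gamma (k)=0.04(Rk)^3k`$ (with lengths in Å); this short sketch is our own re-evaluation of the numbers in the text:

```python
def gamma(R_angstrom, k_inv_angstrom):
    """Ratio of thermal to static (hedgehog) scattering, per the estimate in the text."""
    return 0.04 * (R_angstrom * k_inv_angstrom) ** 3 * k_inv_angstrom

k = 1.0e-5                   # scattering wave-vector, 1/Angstrom
g_large = gamma(1.0e8, k)    # droplet radius R = 1 cm      -> gamma ~ 400
g_small = gamma(1.0e6, k)    # droplet radius R = 100 um    -> gamma ~ 4e-4
```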
We see that either the thermal or the static contribution to the scattering intensity can completely dominate for $`R`$ and $`k`$ in the experimentally reasonable range. (For the sake of completeness we note that we have not taken into account scattering from the droplet surface in the arguments given above.) ### F Porod tails in the phase-ordering kinetics of uniaxial nematics We are now ready to discuss in detail the short-distance form of the structure factor arising during the phase-ordering process in uniaxial nematic liquid crystals, and to relate our conclusions to the experimental results reported in Refs. . We first briefly recall some general features of phase-ordering kinetics following a quench from a disordered to an ordered phase . Phase-ordering kinetics initially attracted theoretical attention due to the property of dynamical scaling of the order-parameter correlations at late times after the quench. In its simplest form, the corresponding dynamical scaling hypothesis states that all time-dependent length scales in the system have the same asymptotic time dependence (and, consequently, one can define a single “characteristic” length scale). In most systems with non-conserved order-parameter dynamics, this time dependence is given by the power law $`L(t)\sim t^{1/2}`$. In systems containing topological defects, the characteristic length scale $`L(t)`$ can usually be extracted as the typical defect-defect separation at time $`t`$. The process of phase ordering has been successfully studied experimentally in systems described by a scalar order parameter (e.g., in binary alloys—see Ref. ). However, analogous experiments were found to be difficult to perform in most systems with continuous symmetries (such as ferromagnets or liquid He<sup>4</sup>). 
In recent years, a series of experiments has demonstrated that nematic liquid crystals provide a system in which phase ordering (following a quench from the isotropic to the nematic phase) is readily accessible to experimental investigation. In Refs. , the phase-ordering process was studied by measuring the time-dependent nematic structure factor $`S(k,t)`$. The present section includes a discussion of the large-$`k`$ region of the structure factor measured in these experiments; however, we also discuss effects that may be observed under different experimental conditions. As we saw in Secs. IV B–IV D, a uniaxial nematic liquid crystalline system can contain, simultaneously, point defects (hedgehogs) and line defects (disclinations) . If the average separation between hedgehogs is given by $`L_{\mathrm{hedg}}`$ and that between disclinations by $`L_{\mathrm{discl}}`$, a unit volume of the system contains $`\rho _{\mathrm{hedg}}=L_{\mathrm{hedg}}^{-3}`$ hedgehogs and a length $`\rho _{\mathrm{discl}}=L_{\mathrm{discl}}^{-2}`$ of disclinations. Now consider scattering at a wave-vector $`k`$ such that $`k^{-1}\ll L_{\mathrm{discl}}`$ and $`k^{-1}\ll L_{\mathrm{hedg}}`$. The total structure factor per unit volume due to hedgehogs is then given by $`\rho _{\mathrm{hedg}}\,36\pi ^4/k^6`$ \[see Eq. (35)\], and that due to disclinations by $`\rho _{\mathrm{discl}}\,3\pi ^3/k^5`$ \[see Eq. (43)\]. (Here we have assumed that the disclination lines are predominantly of the twist type, as is expected from energetic considerations.) In the parts of the system that are not within distance $`k^{-1}`$ of the nearest disclination or hedgehog (which, by assumption, is most of the volume of the system), the static director configuration is effectively homogeneous on scales of order of $`k^{-1}`$. Consequently, Eq. (47) for the strength of scattering due to thermal fluctuations applies, and per unit volume, we have $`S_\mathrm{T}(k)\approx 8\,\AA /k^2`$. 
(Notice that the length scale occurring in the expression for $`S_\mathrm{T}(k)`$ is microscopic, as opposed to the defect separations occurring in the hedgehog and disclination contributions.) The total structure factor is therefore given by $$S(k)=\rho _{\mathrm{hedg}}\frac{36\pi ^4}{k^6}+\rho _{\mathrm{discl}}\frac{3\pi ^3}{k^5}+\frac{8\,\AA }{k^2}.$$ (49) First consider a system in which disclinations dominate over hedgehogs. We now estimate, for a given density of disclinations $`\rho _{\mathrm{discl}}`$ (and $`\rho _{\mathrm{hedg}}=0`$), the range of scattering wave-vectors $`k`$ in which the generalized Porod law behavior, $`S(k)\sim k^{-5}`$, can be observed. The lower limit on the range of $`k`$ is given by the inverse average separation of defects: $`k_\mathrm{l}=L_{\mathrm{discl}}^{-1}=\rho _{\mathrm{discl}}^{1/2}`$. The upper limit $`k_\mathrm{u}`$ is given by the $`k`$ value at which $`S_\mathrm{T}(k)=8\,\AA \,k^{-2}`$ exceeds the Porod-law contribution, $`S_{\mathrm{discl}}=\rho _{\mathrm{discl}}\,3\pi ^3k^{-5}`$. This gives $`k_\mathrm{u}=(3\pi ^3/8\,\AA )^{1/3}L_{\mathrm{discl}}^{-2/3}`$. We see that although both $`k_\mathrm{l}`$ and $`k_\mathrm{u}`$ decrease when the separation of defects $`L_{\mathrm{discl}}`$ increases, the ratio $`k_\mathrm{u}/k_\mathrm{l}`$ increases as $`(L_{\mathrm{discl}}/0.09\,\AA )^{1/3}`$. Analogous estimates for observing the $`S(k)\sim k^{-6}`$ behavior in a system dominated by hedgehogs yield $`k_\mathrm{l}=L_{\mathrm{hedg}}^{-1}=\rho _{\mathrm{hedg}}^{1/3}`$, $`k_\mathrm{u}=(36\pi ^4/8\,\AA )^{1/4}L_{\mathrm{hedg}}^{-3/4}`$, and $`k_\mathrm{u}/k_\mathrm{l}=(L_{\mathrm{hedg}}/0.002\,\AA )^{1/4}`$. Consequently, the Porod form of the structure factor is predicted to extend over one to three decades in $`k`$ for the experimentally accessible values of defect separation of the order of $`0.1\,\mu \mathrm{m}`$ to $`1\,\mathrm{cm}`$, for either a disclination-dominated or a hedgehog-dominated system . 
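The widths (in decades) of the Porod windows just estimated can be tabulated explicitly. The sketch below is our own evaluation of the formulas above at the two extreme defect separations mentioned in the text:

```python
import math

def decades_discl(L_angstrom):
    """log10 of k_u/k_l = (L/0.09 A)^(1/3), disclination-dominated system."""
    return math.log10((L_angstrom / 0.09) ** (1.0 / 3.0))

def decades_hedg(L_angstrom):
    """log10 of k_u/k_l = (L/0.002 A)^(1/4), hedgehog-dominated system."""
    return math.log10((L_angstrom / 0.002) ** (1.0 / 4.0))

# defect separations L = 0.1 um (1e3 A) and L = 1 cm (1e8 A)
spans = [decades_discl(1.0e3), decades_discl(1.0e8),
         decades_hedg(1.0e3), decades_hedg(1.0e8)]
```

All four spans fall between roughly one and three decades, consistent with the statement in the text.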
Crossover effects between $`k^{-6}`$ and $`k^{-5}`$ scaling in situations when both hedgehogs and disclinations are present may further diminish the range in which these power laws can be clearly observed. (A specific example of such behavior will be seen below.) It should be noted that measuring the location of the crossover wave-vector $`k_\mathrm{u}`$ permits the efficient estimation of the density of defects in the system. Specifically, if one measures the location of the crossover $`k_\mathrm{u}`$ in a disclination-dominated system, the total disclination length per unit volume (expressed in units of $`\AA ^{-2}`$) is given by $`(k_\mathrm{u}/2.3)^3`$, with $`k_\mathrm{u}`$ expressed in $`\AA ^{-1}`$. In a hedgehog-dominated system, the number of hedgehogs per unit volume (expressed in units of $`\AA ^{-3}`$) is given by $`(k_\mathrm{u}/4.6)^4`$. Although it is also possible to estimate the defect density from the position $`k_\mathrm{l}`$ of the crossover at low wave-vectors, it should be noted that this crossover is less steep than the crossover at $`k_\mathrm{u}`$, and its precise character depends on the nature of the correlations amongst the defects. We now relate our results to the measurements of $`S(k)`$ in a uniaxial nematic undergoing phase ordering in the experiments of Wong et al. . The range of scattering wave-vectors investigated in Refs. was $`k=1000\,\mathrm{c}\mathrm{m}^{-1}`$–$`10000\,\mathrm{c}\mathrm{m}^{-1}`$, and the typical defect separation in the system at intermediate and late times after the quench was of the order of $`10\,\mu \mathrm{m}`$ (see Fig. 1 in Ref. ). Direct optical observations in this and related phase-ordering systems indicate that disclinations dominate over hedgehogs; the exact relative proportion of their densities is, however, difficult to measure. In Fig. 4, we plot the structure factor $`S(k)`$ predicted by Eq. (49) for two situations in which the average separation between defects is of the order of $`10\,\mu \mathrm{m}`$. 
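The numerical constants 2.3 and 4.6 quoted above follow from inverting the expressions for $`k_\mathrm{u}`$ derived earlier (all quantities in Å units); this short check is our own addition:

```python
import math

# k_u = c_discl * L_discl^(-2/3)  =>  rho_discl = (k_u / c_discl)^3
c_discl = (3 * math.pi ** 3 / 8) ** (1.0 / 3.0)   # approx 2.3

# k_u = c_hedg * L_hedg^(-3/4)    =>  rho_hedg  = (k_u / c_hedg)^4
c_hedg = (36 * math.pi ** 4 / 8) ** (1.0 / 4.0)   # approx 4.6
```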
The lower solid curve shows $`S(k)`$ for $`\rho _{\mathrm{discl}}=10^6\,\mathrm{cm}^{-2}`$ and $`\rho _{\mathrm{hedg}}=0`$ (i.e., a configuration dominated by disclinations with $`L_{\mathrm{discl}}=10\,\mu \mathrm{m}`$). The upper solid curve corresponds to $`\rho _{\mathrm{discl}}=10^6\,\mathrm{cm}^{-2}`$ and $`\rho _{\mathrm{hedg}}=10^8\,\mathrm{cm}^{-3}`$ (i.e., the system contains, in addition, hedgehogs with average spacing $`L_{\mathrm{hedg}}=22\,\mu \mathrm{m}`$). We restrict the plot to the region $`k>(10\,\mu \mathrm{m})^{-1}=10^3\,\mathrm{cm}^{-1}`$, where Eq. (49) is expected to be applicable. The broken lines in Fig. 4 show the three individual contributions to $`S(k)`$ arising from hedgehogs, disclinations, and thermal fluctuations. It is seen that for both $`\rho _{\mathrm{hedg}}=0`$ and $`\rho _{\mathrm{hedg}}=10^8\,\mathrm{cm}^{-3}`$, the crossover to the thermal-fluctuation-dominated regime \[$`S(k)\sim k^{-2}`$\] occurs at approximately $`k=50000\,\mathrm{cm}^{-1}`$, i.e., well beyond the range of $`k`$ measured in Refs. and at the limit of the range of $`k`$ accessible, in principle, with visible light. Indeed, the structure-factor data in Refs. do not show any sign of such a crossover. It should be noted, however, that had the scattering experiments been performed at a (still later) time when the average defect separation had reached $`300\,\mu \mathrm{m}`$, the crossover value would have decreased to $`k\approx 5000\,\mathrm{cm}^{-1}`$, and the sharp crossover to the thermal regime would have been clearly visible with the experimental setup of Refs. , provided that the scattering probe were sufficiently sensitive. In the case $`\rho _{\mathrm{hedg}}=0`$, the structure factor in Fig. 4 exhibits the Porod behavior $`S(k)\sim k^{-5}`$, characteristic of disclinations, over a range of 1.5 decades ($`k=1000\,\mathrm{cm}^{-1}`$ to $`k=50000\,\mathrm{cm}^{-1}`$). 
In the case $`\rho _{\mathrm{hedg}}=10^8\,\mathrm{cm}^{-3}`$, a crossover between $`S(k)\sim k^{-6}`$ and $`S(k)\sim k^{-5}`$ is spread between $`k=1000\,\mathrm{cm}^{-1}`$ and $`10000\,\mathrm{cm}^{-1}`$, and leaves only a very narrow range of $`k`$ in which the $`S(k)\sim k^{-5}`$ scaling behavior may be observed. The fact that hedgehogs result in a substantial modification of the structure factor even when $`L\rho _{\mathrm{hedg}}/\rho _{\mathrm{discl}}=0.1`$ is rather surprising, and is due to the large ratio, $`A_{\mathrm{hedg}}/A_{\mathrm{discl}}=12\pi `$, of the Porod amplitudes for hedgehogs and disclinations. The structure-factor data reported in Refs. (for three-dimensional samples) were fit in these references to a power law with exponent $`-6\pm 0.3`$. Some of these data (specifically Fig. 7 in Ref. ) were later re-analyzed in Ref. , with the conclusion that the asymptotic slope was closer to $`-5`$, but was approached from above , i.e., through effective exponents lying between $`-5`$ and $`-6`$. Our results show (see the upper solid line in Fig. 4) that such behavior is expected to arise if a sufficient number of hedgehogs is present in the system. The relative density of disclinations and hedgehogs was not examined in the experiments of Refs. . Other experiments investigating phase ordering in nematics, however, indicate that hedgehogs are rare compared to disclinations. (For the sake of completeness we note that it is possible to prepare nematic systems dominated by hedgehog defects: thus, in Ref. , a liquid crystalline system was quenched from the isotropic to the isotropic-nematic biphasic region, and a large number of hedgehogs formed upon coalescence of the nematic droplets.) Specifically, it was found in Ref. that hedgehogs occurred in significant numbers only within a specific intermediate range of times after the quench, and that the ratio $`L\rho _{\mathrm{hedg}}/\rho _{\mathrm{discl}}`$ within this time range was of the order of $`10`$–$`100`$. 
The case $`\rho _{\mathrm{discl}}=10^6\,\mathrm{cm}^{-2}`$, $`\rho _{\mathrm{hedg}}=10^8\,\mathrm{cm}^{-3}`$, chosen above to obtain the upper solid curve in Fig. 4, falls within this range. Consequently, it cannot be ruled out that the observed approach to $`S(k)\sim k^{-5}`$ from above in Fig. 7 of Ref. is due to the presence of hedgehogs, although if the relative proportion of the populations of hedgehogs and disclinations in the experiments of Refs. is similar to that found in the experiments of Refs. (which were performed on a different nematic material), such a possibility should be considered unlikely. A more likely possibility is that a portion of the length of the disclinations in the system is of the wedge type (see Sec. IV D). Any curved configuration of a wedge disclination will lead to an order-parameter configuration in the vicinity of the disclination that resembles a part of a (radial or hyperbolic) hedgehog . (The special case of a disclination loop that is wedge-like everywhere along its length was discussed in Sec. IV D; in this case, a full hedgehog configuration is obtained outside of the loop.) Although the twist configuration of a disclination line is energetically preferable to the wedge configuration for the usual values of the nematic elastic constants , wedge-type segments of disclination loops in a system undergoing phase ordering may be generated dynamically, and lead to a $`k^{-6}`$ contribution to the nematic structure factor. We conclude this section with a brief discussion of the dynamical scaling of the structure factor. Three different length scales — the disclination separation $`L_{\mathrm{discl}}`$, the hedgehog separation $`L_{\mathrm{hedg}}`$, and the microscopic length scale $`8\,\AA `$ associated with thermal fluctuations — appear in the large-$`k`$ form of the nematic structure factor, Eq. (49). 
Is this inconsistent with the property of dynamical scaling of the structure factor, according to which there exists a (time-dependent) length scale $`L(t)`$ such that $`k^3S(k)`$ depends only on the dimensionless scaling variable $`y\equiv kL(t)`$? Let us choose the disclination separation $`L_{\mathrm{discl}}`$ as the characteristic length $`L(t)`$. From Eq. (49), we then have $$k^3S(y)=3\pi ^3\frac{1}{y^2}+\frac{36\pi ^4}{y^3}\left[\frac{L_{\mathrm{discl}}(t)}{L_{\mathrm{hedg}}(t)}\right]^3+y\frac{8\,\AA }{L_{\mathrm{discl}}(t)}.$$ (50) Clearly, $`k^3S(k)`$ does not in general satisfy the scaling form $`f(kL_{\mathrm{discl}}(t))`$. At asymptotically late times, however, $`L_{\mathrm{discl}}\gg 8\,\AA `$ and, consequently, the term in Eq. (50) associated with thermal fluctuations becomes negligible. Experimental evidence indicates that at late times after the quench, the hedgehog separation $`L_{\mathrm{hedg}}(t)`$ grows faster than the disclination separation $`L_{\mathrm{discl}}(t)`$. Consequently, the second term in Eq. (50) (associated with hedgehogs) also becomes negligible, asymptotically. Our analysis is therefore consistent with the dynamical scaling of the structure factor at asymptotically late times. As we saw earlier in this Section, however, the scattering contributions from thermal fluctuations and from nematic hedgehogs can lead to significant transient effects under typical experimental conditions. ## V Biaxial nematic systems Thus far, our discussion of defects in nematics has concentrated on the case of uniaxial systems. It is straightforward to generalize this discussion to the case of biaxial systems. As there are no topologically stable point defects in a three-dimensional biaxial nematic , we need only consider line defects. Biaxial nematics admit four topologically distinct classes of line defects , which are disclination lines distinguished by the angles of rotation of the uniaxial director $`𝐮`$ and the biaxial director $`𝐛`$ \[defined in Eq. 
(22)\] around the defect core. In the $`C_x`$ class of disclinations $`𝐮`$ rotates by $`\pm 180^{\circ }`$ and $`𝐛`$ does not rotate; in $`C_y`$ disclinations $`𝐮`$ does not rotate and $`𝐛`$ rotates by $`\pm 180^{\circ }`$; in $`C_z`$ disclinations both $`𝐮`$ and $`𝐛`$ rotate by $`\pm 180^{\circ }`$; finally, in $`\overline{C}_0`$ disclinations either $`𝐮`$ or $`𝐛`$ (or both) rotate by $`360^{\circ }`$. These four distinct disclination types were observed experimentally in a thermotropic nematic polymer , and their properties were found to be in agreement with the predictions of the topological classification scheme. In the minimum-energy configuration \[with free energy given by Eq. (27)\] for the $`C_x`$, $`C_y`$, and $`C_z`$ defects, the $`180^{\circ }`$ rotations of the $`𝐮`$ and $`𝐛`$ directors are uniform . The configuration of a $`C_x`$ disclination is correspondingly given (up to a global rotation) by Eq. (22) with $`𝐮(𝐱)`$ $`=`$ $`(\mathrm{cos}{\displaystyle \frac{1}{2}}\varphi (𝐱),\mathrm{sin}{\displaystyle \frac{1}{2}}\varphi (𝐱),0),`$ (52) $`𝐛(𝐱)`$ $`=`$ $`(0,0,1),`$ (53) $`𝐯(𝐱)`$ $`=`$ $`(\mathrm{cos}\left({\displaystyle \frac{1}{2}}\varphi (𝐱)+{\displaystyle \frac{\pi }{2}}\right),\mathrm{sin}\left({\displaystyle \frac{1}{2}}\varphi (𝐱)+{\displaystyle \frac{\pi }{2}}\right),0),`$ (54) where $`\varphi (𝐱)`$ is the polar angle in the plane perpendicular to the disclination line. The correlation function (23) for this configuration can be expressed (up to an additive constant) as the correlation function (38) for the uniaxial nematic disclination, multiplied by the biaxiality-strength–dependent factor $`R\,(3S_1+S_2)^2/(9S_1^2)`$, where $`R\equiv 3S_1^2/(3S_1^2+S_2^2)`$ is the ratio of the uniaxial ($`S_2=0`$) and biaxial ($`S_2\ne 0`$) normalization factors $`M_{\mathrm{nem}}`$. The corresponding factors for the case of the $`C_y`$ and $`C_z`$ disclinations are likewise readily evaluated. 
By using the result (43) for the uniaxial disclination, we obtain the structure factors of the biaxial disclinations $`C_x`$, $`C_y`$ and $`C_z`$: $`S(k)_{\mathrm{C}_\mathrm{x}}`$ $`=`$ $`\pi ^3{\displaystyle \frac{L_x}{V}}{\displaystyle \frac{(3S_1+S_2)^2}{3S_1^2+S_2^2}}{\displaystyle \frac{1}{k^5}},`$ (56) $`S(k)_{\mathrm{C}_\mathrm{y}}`$ $`=`$ $`4\pi ^3{\displaystyle \frac{L_y}{V}}{\displaystyle \frac{S_2^2}{3S_1^2+S_2^2}}{\displaystyle \frac{1}{k^5}},`$ (57) $`S(k)_{\mathrm{C}_\mathrm{z}}`$ $`=`$ $`\pi ^3{\displaystyle \frac{L_z}{V}}{\displaystyle \frac{(3S_1-S_2)^2}{3S_1^2+S_2^2}}{\displaystyle \frac{1}{k^5}}.`$ (58) It should be noted that in the case $`3S_1=S_2`$, the amplitude in the Porod law for the $`C_z`$ defect is zero, whereas the amplitudes for the $`C_x`$ and $`C_y`$ defects are non-zero and equal to each other \[see Eqs. (56)–(58)\]. This reflects the fact that for $`3S_1=S_2`$, the order parameter given by Eq. (22) describes a uniaxial discotic phase with uniaxial axis $`𝐯=𝐮\times 𝐛`$. Correspondingly, the $`C_z`$ configuration in this case does not represent a defect (as $`𝐯`$ does not rotate), whereas the $`C_x`$ and $`C_y`$ configurations are equivalent. It remains to consider the $`\overline{C}_0`$ disclination. For the case $`S_1>S_2`$ (i.e., needle-like ordering ), the minimum-energy configuration of type $`\overline{C}_0`$ is given by Eq. (22) with $`𝐮(𝐱)`$ $`=`$ $`(0,0,1),`$ (60) $`𝐛(𝐱)`$ $`=`$ $`(\mathrm{cos}\varphi (𝐱),\mathrm{sin}\varphi (𝐱),0),`$ (61) $`𝐯(𝐱)`$ $`=`$ $`(\mathrm{cos}\left(\varphi (𝐱)+\pi /2\right),\mathrm{sin}\left(\varphi (𝐱)+\pi /2\right),0),`$ (62) where $`\varphi (𝐱)`$ is the polar angle in the plane perpendicular to the disclination line. The correlation function (23) for the point defect in the planar cross-section through this disclination can then be expressed as $`R(2S_2/3S_1)^2`$ times the correlation function (38) of the uniaxial disclination, with the angle difference doubled \[i.e., with $`\mathrm{cos}(\varphi (𝐱)-\varphi (𝐱+𝐫))`$ replaced by $`\mathrm{cos}(2\varphi (𝐱)-2\varphi (𝐱+𝐫))`$\]. By using Eq. 
(20) one obtains the structure factor per unit length of the $`\overline{C}_0`$ disclination: $$S(k)_{\overline{\mathrm{C}}_0}=12\pi ^2\frac{L_0}{V}\frac{S_2^2}{3S_1^2+S_2^2}\frac{1}{k^5}.$$ (63) For the case $`S_1<S_2`$ (i.e., discotic ordering ) the $`𝐯`$ director (and not the $`𝐮`$ director, in contrast with the needle-like case) corresponds to the eigenvalue of the order-parameter tensor $`Q_{\alpha \beta }`$ largest in absolute value, and the lowest-energy configuration that can be taken by the $`\overline{C}_0`$ disclination has directors given by $`𝐮(𝐱)`$ $`=`$ $`(\mathrm{cos}\varphi (𝐱),\mathrm{sin}\varphi (𝐱),0),`$ (65) $`𝐛(𝐱)`$ $`=`$ $`(\mathrm{cos}\left(\varphi (𝐱)+{\displaystyle \frac{\pi }{2}}\right),\mathrm{sin}\left(\varphi (𝐱)+{\displaystyle \frac{\pi }{2}}\right),0),`$ (66) $`𝐯(𝐱)`$ $`=`$ $`(0,0,1).`$ (67) The corresponding structure factor is $$S(k)_{\overline{\mathrm{C}}_0}=\frac{8}{3}\frac{L_0}{V}\pi ^2\frac{(3S_1^2S_2)^2}{3S_1^2+S_2^2}\frac{1}{k^5}.$$ (68) In general, the large-$`k`$ structure factor of a biaxial nematic system is given by the sum of the expressions (56)–(58) and (63) or (68). Finally, we note that in a biaxial nematic film (i.e., a system having two spatial dimensions, but the full $`3\times 3`$ tensorial nematic order parameter), $`C_x`$, $`C_y`$, $`C_z`$ and $`\overline{C}_0`$ are point-like defects (the dynamics of which was studied in ). The Porod-law amplitudes for these defects are obtained from the amplitudes in the results (56)–(58), (63) and (68) by dividing by $`\pi `$ \[see Eq. (20)\], and the corresponding Porod-law exponents take the value 4 instead of 5. 
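The special case $`3S_1=S_2`$ discussed above can be checked by direct arithmetic on the amplitudes in Eqs. (56)–(58). The sketch below (an illustration, not part of the original calculation) evaluates the dimensionless prefactors of $`(L/V)k^{-5}`$, reading the $`C_z`$ numerator as $`S_1^2(3S_1-S_2)^2`$, and verifies that at $`3S_1=S_2`$ the $`C_z`$ amplitude vanishes while the $`C_x`$ and $`C_y`$ amplitudes coincide:

```python
import math

# Porod amplitudes (coefficients of (L/V) k^-5) read off Eqs. (56)-(58);
# S1 = uniaxial order-parameter magnitude, S2 = strength of biaxial ordering.
def A_Cx(S1, S2):
    return (9 * math.pi**2 / 8) * S1**2 * (3*S1 + S2)**2 / (3*S1**2 + S2**2)

def A_Cy(S1, S2):
    return (9 * math.pi**2 / 2) * S1**2 * S2**2 / (3*S1**2 + S2**2)

def A_Cz(S1, S2):
    return (9 * math.pi**2 / 8) * S1**2 * (3*S1 - S2)**2 / (3*S1**2 + S2**2)

# Discotic special case 3*S1 = S2: C_z ceases to be a defect (zero amplitude)
# and the C_x, C_y configurations become equivalent (equal amplitudes).
S1 = 0.4
S2 = 3 * S1
print(A_Cx(S1, S2), A_Cy(S1, S2), A_Cz(S1, S2))
```

The particular values of $`S_1`$ and $`S_2`$ are arbitrary; the vanishing of the $`C_z`$ amplitude and the equality $`A_{C_x}=A_{C_y}`$ hold identically along the line $`3S_1=S_2`$.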
## VI Corrections to Porod’s law In this section we briefly discuss several effects—inter-defect interactions, inequality among the three elastic constants in the nematic free energy, curvature of linear defects, and the presence of the defect core—that were not fully taken into account during our calculations in the previous sections and may, under certain circumstances, lead to modifications of our results. An interesting question, already alluded to in Sec. II, is whether the order-parameter configuration in the vicinity of the defect core is or is not affected by the interactions with the other defects present in the system. \[In order to avoid confusion with the core region, where the order parameter magnitude is reduced, we shall refer to the region close to, but outside, the core, as the “central” region of the defect.\] It is important to distinguish between static and dynamic effects. In the case of the $`O(2)`$ vector model with the standard gradient free energy, given by Eq. (6), the minimum-energy configuration of a collection of defects with fixed locations is obtained simply by taking the superposition of the angles characterizing the order parameter $`𝐦`$ around isolated defects; their central regions are therefore unaffected. A deformation does occur, however, once the defects are allowed to move ; for sufficiently slow defect velocities, this effect may be neglected. The situation is different in the $`O(3)`$ vector-model case. Here, Ostlund showed that in the minimum-energy configuration of a monopole-antimonopole pair separated by a fixed distance, the gradient free energy is concentrated along a string connecting the two point defects. We are not aware of any investigation of how this result might be modified in a dynamical situation. 
Ostlund’s result does suggest, however, that the order-parameter configuration of monopoles and antimonopoles in a phase-ordering $`O(3)`$ vector system in $`d=3`$ may substantially differ from the radially symmetric configuration assumed during the calculation of the corresponding Porod law in Sec. III A of the present Paper. Consider, then, an arbitrary point defect in the $`O(N)`$ vector model in $`d=N`$ dimensions. We denote the corresponding order-parameter configuration by $`𝚽(𝐫)`$, where the radius-vector $`𝐫`$ originates at the core of the defect. As already indicated in Sec. II, the value $`\xi =2d`$ of the Porod exponent for point defects is suggested by simple dimensional analysis, independently of the exact form of $`𝚽(𝐫)`$. This argument can be made more precise if we consider configurations in which $`𝚽(𝐫)`$ depends only on the direction $`\widehat{𝐫}`$, and not the magnitude $`r`$, of the radius-vector $`𝐫`$. We may write $`𝚽(𝐤)\equiv \int d^dr\,e^{i𝐤𝐫}𝚽(𝐫)=\int d^dr\,e^{ik\widehat{𝐤}𝐫}𝚽(𝐫)`$; by assumption, $`𝚽(k^{-1}𝐫)=𝚽(𝐫)`$, which, combined with the substitution $`𝐫=k^{-1}𝐲`$, yields $`𝚽(𝐤)=k^{-d}\int d^dy\,e^{i\widehat{𝐤}𝐲}𝚽(𝐲)\equiv k^{-d}𝚽(\widehat{𝐤})`$. Consequently, the structure factor Eq. (4) is of the form $`S(𝐤)=A(\widehat{𝐤})k^{-2d}`$, where the Porod amplitude $`A(\widehat{𝐤})\equiv M_{\mathrm{O}(\mathrm{N})}^{-1}|𝚽(\widehat{𝐤})|^2`$ now depends on the orientation of $`𝐤`$. The Porod exponent $`\xi =2d`$, expressing how $`S(𝐤)`$ depends on the magnitude of $`𝐤`$, is, however, unchanged compared to the radially symmetric case. 
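The dimensional counting behind the Porod exponents can be illustrated in the simplest setting of all, not treated in this Paper: a scalar (Ising-like) profile in $`d=1`$ with sharp kinks, for which the wall result $`S(k)\sim k^{-(d+1)}`$ quoted in Sec. VI gives $`S(k)\sim k^{-2}`$. The self-contained sketch below (grid size and fit range are arbitrary choices of ours) computes the discrete Fourier transform of a square-wave profile and fits the decay exponent:

```python
import cmath
import math

# Square-wave order-parameter profile in d=1: two sharp kinks (domain walls).
# Porod's law for walls gives S(k) ~ k^-(d+1) = k^-2 in one dimension.
N = 4096
m = [1.0 if n < N // 2 else -1.0 for n in range(N)]

def dft_power(profile, k):
    """|m_k|^2 for a single Fourier mode k (direct DFT; no FFT needed)."""
    Xk = sum(profile[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    return abs(Xk) ** 2

# Fit log S(k) versus log k over odd harmonics (even ones vanish by symmetry
# of the symmetric square wave).
ks = list(range(1, 41, 2))
xs = [math.log(k) for k in ks]
ys = [math.log(dft_power(m, k)) for k in ks]
n = len(ks)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
print(f"fitted Porod exponent: {slope:.3f}")   # close to -2
```

For this profile $`|m_k|^2=4/\mathrm{sin}^2(\pi k/N)\approx 4N^2/(\pi k)^2`$ for $`k\ll N`$, so the fitted exponent sits within a fraction of a percent of the exact value $`-2`$, in direct analogy with the $`k^{-2d}`$ point-defect scaling derived above.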
As for the value of $`A(\widehat{𝐤})`$, we may make the following argument using the identity relating the standard form of the gradient free energy $`E`$ to the structure factor $`S(𝐤)`$: $$E\equiv \frac{\kappa }{2}\int d^dx\,\partial _i\mathrm{\Phi }_j(𝐱)\partial _i\mathrm{\Phi }_j(𝐱)=\frac{\kappa }{2}\int d^dk\,k^2S(𝐤)=\frac{\kappa }{2}\langle A(\widehat{𝐤})\rangle _{\mathrm{ang}}\int _{D^{-1}}^{\xi ^{-1}}d^dk\,k^{2-2d}.$$ (69) Here $`\langle A(\widehat{𝐤})\rangle _{\mathrm{ang}}`$ denotes the angular average of $`A(\widehat{𝐤})`$ over all orientations $`\widehat{𝐤}`$, $`\kappa `$ is a positive elasticity constant, $`\xi `$ is the core size of the defect and $`D`$ is the large-distance cutoff necessary in the calculation of the free energy. Equation (69) implies that $`\langle A(\widehat{𝐤})\rangle _{\mathrm{ang}}`$ reaches its minimum for the order-parameter configuration $`𝚽(\widehat{𝐫})`$ that minimizes the free energy $`E`$ . In the case of the $`O(N)`$ monopole with charge $`+1`$, the configuration minimizing $`E`$ is precisely the radial configuration used by us in Sec. III A, and we conclude that our result Eq. (14) provides a lower bound for the value of the Porod amplitude in a macroscopically isotropic system . Note, however, that the property of $`\langle A(\widehat{𝐤})\rangle _{\mathrm{ang}}`$ formulated above applies equally well for defects that do not possess radial symmetry in the minimum-energy configuration, such as monopoles with higher-integer charges in the $`O(N)`$ vector model, or strength-$`1/2`$ disclinations in nematic liquid crystals \[in the latter case, $`E`$ in Eq. (69) is replaced by the Frank free energy in the one-constant approximation, Eq. (27)\]. We thus reach the conclusion that in nematics for which the three elastic constants (see Sec. IV A) are not equal to each other, so that the configuration of the nematic director around a disclination differs from the configuration minimizing Eq. (27), the angle-averaged structure factor in a phase-ordering system will have Porod amplitudes larger than those derived by us in Secs. 
IV B–IV F, even when the central regions of the defects are not deformed by inter-defect interactions. While deriving the results for linear defects \[$`O(3)`$ vortex lines in Sec. III B and nematic disclinations in Secs. IV C and IV D\], we assumed that the curvature $`1/R`$ of the defect lines was negligible (i.e., $`Rk\gg 1`$). In general, we can expect corrections to the basic Porod law $`k^{-5}`$ that are suppressed by powers of $`(Rk)^{-1}`$, thus resulting in contributions to $`S(k)`$ decaying as higher power laws ($`k^{-\eta }`$ with $`\eta >5`$). The case of curved wedge-type disclinations (discussed near the end of Sec. IV C), for which $`\eta =6`$, serves as an example. Similarly, we can expect higher-power terms to arise from the presence of the core region of the defects. As the crossing of the boundary between the core region and the outside region of a defect is associated with a change in the magnitude of the order parameter, we may obtain an upper-bound estimate of the corresponding contribution $`S_{\mathrm{core}}(k)`$ to the structure factor by regarding the core boundary as a domain wall in a scalar system, leading (in a 3-dimensional system) to $`S_{\mathrm{core}}(k)\sim ak^{-4}`$, where $`a`$ is the domain-wall area. For a monopole $`a\sim \xi ^2`$, where $`\xi `$ is the core size, and the ratio of $`S_{\mathrm{core}}(k)`$ to the standard monopole contribution $`k^{-6}`$ is of the order of $`(\xi k)^2`$. For a string defect of length $`L`$, we have $`a\sim \xi L`$, and the ratio of $`S_{\mathrm{core}}(k)`$ to $`Lk^{-5}`$ is of the order of $`\xi k`$. For typical core sizes $`\xi `$ of the order of $`10\AA `$ and for scattering wave-vector values $`k\sim 10^{-4}\AA ^{-1}`$ accessible with visible light, we have $`\xi k\sim 10^{-3}`$, and the contributions to $`S(k)`$ from the defect core are expected to be negligible for both monopoles and strings. 
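The negligibility of the core contribution can be made concrete with the scales quoted above (core size of the order of $`10\AA `$ and a visible-light scattering wave vector of order $`10^{-4}\AA ^{-1}`$); a minimal arithmetic sketch:

```python
# Order-of-magnitude check of the core corrections discussed above,
# using the representative values from the text.
xi = 10.0      # defect core size, in angstroms
k = 1e-4       # visible-light scattering wave vector, in inverse angstroms

xik = xi * k                   # suppression parameter
monopole_ratio = xik ** 2      # S_core / (monopole k^-6 term) ~ (xi*k)^2
string_ratio = xik             # S_core / (string L k^-5 term) ~ xi*k

print(xik, monopole_ratio, string_ratio)  # roughly 1e-3, 1e-6, 1e-3
```

Both ratios are far below unity, which is the quantitative content of the statement that core contributions are negligible for monopoles and strings alike.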
## VII Conclusions In this Paper we have presented a detailed discussion of the influence of topological defects on the large-wavevector behavior of the structure factor $`S(𝐤)`$ in nematic liquid crystals. The presence of topological defects leads to power-law contributions to $`S(𝐤)`$ of the Porod form $`\rho Ak^{-\xi }`$, where $`\rho `$ is the number density of a given type of defect, $`A`$ is a dimensionless amplitude, and $`\xi `$ is an integer-valued exponent. We have computed the values of the Porod exponents and amplitudes for the various types of topological defects present in uniaxial and biaxial nematics, and have discussed the competition between contributions to $`S(𝐤)`$ due to defects and to thermal fluctuations. Our main results are summarized in Tables I (for nematic films) and II (for bulk nematic systems). To obtain the short-distance structure factor of a nematic system containing topological defects, the expressions given in Tables I or II should be multiplied by the number densities of the corresponding defects and then added. Here, the defect number density is defined as the number of point defects (or, for line defects, the total defect length) per unit area (in a two-dimensional system) or unit volume (in a three-dimensional system). In addition, the total structure factor contains a power-law contribution due to transverse thermal fluctuations of the nematic director, this contribution being given (in a bulk system) approximately by $`8\AA /k^2`$. The resulting form of the structure factor $`S(k)`$ is valid for $`k`$ ranging from the inverse typical separation of defects to the inverse defect-core size. To avoid confusion, we remind the reader that our results were calculated for a structure factor $`S(𝐤)`$ defined so that the real-space correlation function $`C(𝐫)`$ is normalized to unity at $`r=0`$ (and, consequently, our structure factor has the dimension of volume). 
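The assembly prescription just described (multiply each tabulated expression by the corresponding defect density, add the contributions, and include the thermal term) can be sketched as follows. The amplitudes and densities below are placeholders of our own choosing, not values taken from Tables I and II, which are not reproduced here:

```python
# Sketch of the assembly rule for the short-distance structure factor:
# S(k) = sum over defect types of rho * A * k^(-exponent), plus the
# transverse-fluctuation term, approximately 8 angstrom / k^2 in bulk.
# All amplitudes A and densities rho below are illustrative placeholders.

defect_contributions = [
    # (density rho, dimensionless amplitude A, Porod exponent)
    (1e-12, 3500.0, 6),   # point defects (e.g. hedgehogs); rho in A^-3
    (1e-8, 100.0, 5),     # line defects (disclinations); length/volume in A^-2
]

def S_total(k):
    """Total short-distance structure factor at wave vector k (in A^-1)."""
    S = sum(rho * A * k ** (-xi) for rho, A, xi in defect_contributions)
    S += 8.0 / k ** 2   # thermal transverse-fluctuation contribution (bulk)
    return S

print(S_total(1e-4), S_total(1e-2))
```

With these placeholder numbers the defect tails dominate at the smaller wave vector and the $`k^{-2}`$ thermal term dominates at the larger one, reproducing the crossover structure discussed in Secs. IV E and IV F.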
Thus, our results differ from the un-normalized structure factor (with dimension of volume squared) by the normalization factor $`M_{\mathrm{norm}}=(V/2)(3S_1^2+S_2^2)`$ (where $`V`$ is the system volume or area, $`S_1`$ is the (uniaxial) order-parameter magnitude, and $`S_2`$ is the strength of biaxial ordering). Recall, in addition, that the scattered-light intensity is directly proportional to the structure factor $`S(𝐤)`$ only in the case of unpolarized light scattering. Finally, recall (Sec. VI) that the values of the Porod amplitudes (but not those of the Porod exponents) in the general case of unequal elastic constants in the Frank free energy (26) are expected to differ from the values (derived in the one-constant approximation) listed in Tables I and II . For comparison, we also show in Tables I and II the results for the corresponding defects (when they exist) in $`O(2)`$ and $`O(3)`$ symmetric vector-model systems. In these systems, the normalization factor $`M_{\mathrm{norm}}`$ is given by $`Vs^2`$, where $`s`$ is the order-parameter magnitude. Identical results for some of the $`O(N)`$ defect configurations were previously obtained in Ref. . We conclude this Paper by highlighting some features of our results. As discussed in Sec. II, the value of the Porod exponent $`\xi `$ can be correctly anticipated purely on dimensional grounds. It should be noted, however, that the value of the dimensionless Porod amplitude $`A`$ often differs considerably from unity. For example, in the case of the nematic hedgehog defect, we obtained $`A=36\pi ^4\approx 3500`$. In addition, the Porod amplitude depends significantly on the order parameter in question (and not just on the spatial and defect dimensionalities). For example, the Porod amplitude for the nematic hedgehog exceeds the amplitude for the $`O(3)`$ monopole in $`d=3`$ by a factor of $`3\pi `$. In Secs. 
IV E and IV F, we analyzed in detail the conditions under which the Porod tail of the structure factor is not overshadowed by the power-law contribution from transverse thermal fluctuations of the nematic director. \[Similar considerations apply to any system possessing continuous symmetry of the order parameter.\] We concluded that for experimentally accessible defect densities in a nematic phase-ordering experiment, the Porod tail should be observable over a range of 1 to 3 decades in $`k`$; the range of observability is limited by defect interactions at small $`k`$ and by thermal fluctuations at large $`k`$. In the experiments reported in Refs. , the measured structure factor did exhibit a Porod tail over at least 1 decade in $`k`$, but the crossover at high $`k`$ to the fluctuation-dominated regime was not observed. This was found to be consistent with our theoretical predictions under the conditions of these experiments. The crossover from $`S(k)\sim k^{-5}`$ (in a system where disclinations dominate over hedgehogs) to $`S(k)\sim k^{-2}`$ (the thermal-fluctuation-dominated regime) was predicted in Sec. IV F to occur approximately at the scattering wave-vector value $`k_\mathrm{u}=(3\pi ^3/8\AA )^{1/3}L_{\mathrm{discl}}^{2/3}`$, where $`L_{\mathrm{discl}}`$ is the total disclination length per unit volume of the system. As this crossover is very sharp, and is not affected by defect interactions, measuring $`k_\mathrm{u}`$ can give a precise estimate of the total disclination length. We are not aware of any use of this method in the experimental literature to date. The proportionality of the Porod tail to the defect density has been used previously to extract the power law characterizing the decay of disclination length; extracting the absolute defect density in this way, however, would require knowledge of the (rarely experimentally available) normalized structure factor. 
The location of the crossover $`k_\mathrm{u}`$, on the other hand, can be extracted from the unnormalized structure factor. In Sec. IV D, we contrasted the contributions to the structure factor arising from twist-type and wedge-type disclination loops. The wedge-type loop of radius $`R`$ gives a $`k^{-6}`$ contribution at $`k<R^{-1}`$ in addition to the usual $`k^{-5}`$ disclination contribution at $`k>R^{-1}`$, as it has the structure of a nematic hedgehog at large distances from the loop. Likewise, any curved disclination segment of the wedge type gives rise to a $`k^{-6}`$ contribution. In Sec. IV F, we attempted to connect these findings to the observation that the structure factor in phase-ordering uniaxial nematics approaches the $`k^{-5}`$ power law at high $`k`$ through effective exponents larger than 5 (and close to 6) at intermediate $`k`$. Such a behavior of the structure factor had been found to occur in experiments as well as in numerical simulations , but not in approximate analytical theories . We concluded that it was, in principle, possible, but nevertheless unlikely, that the “approach from above” to the exponent $`\chi =5`$ observed in the experiments was due to $`k^{-6}`$ contributions from hedgehog defects. It remains a challenge for future work to determine whether this behavior of the structure factor is caused by hedgehog defects, curved wedge-type disclination segments, or correlations that fall beyond the range of the Porod regime. ###### Acknowledgements. M. Z. wishes to thank A. J. Bray, T. C. Lubensky, A. D. Rutenberg, and B. Yurke for useful discussions. This work was supported by the U. S. National Science Foundation through Grants DMR95-07366 (M. Z.) and DMR94-24511 (P. M. G.).
# The Mott-Hubbard transition and the paramagnetic insulating state in the two-dimensional Hubbard model ## Abstract The Mott-Hubbard transition is studied in the context of the two-dimensional Hubbard model. Analytical calculations show the existence of a critical value $`U_c`$ of the potential strength which separates a paramagnetic metallic phase from a paramagnetic insulating phase. Calculations of the density of states and of the double occupancy show that the ground state in the insulating phase always contains a small fraction of empty and doubly occupied sites. The structure of the ground state is studied by considering the probability amplitudes of intersite hopping. The results indicate that the ground state of the Mott insulator is characterized by a local antiferromagnetic order; the electrons keep some mobility, but this mobility must be compatible with the local ordering. The vanishing of some intersite probability amplitudes at $`U=U_c`$ places a constraint on the electron mobility. It is suggested that such quantities might be taken as the quantities which control the order in the insulating phase. There are several indications that the two-dimensional Hubbard model (HM) can describe a metal-insulator transition and that some kind of order is established in the paramagnetic insulating state. However, there is no clear picture of the structure of the ground state and no indication of the existence of an order parameter. In particular, it is difficult to reconcile the existence of a finite value of the double occupancy, which implies mobility of the electrons, with the existence of some order, which would imply a localization of the electrons. In this article we study the Hubbard model by means of the composite operator method (COM) in the two-pole approximation. 
The main results can be summarized as follows: (i) a Mott-Hubbard transition does exist; (ii) a local antiferromagnetic (AF) order is present in the insulating state; (iii) a quantity which controls the order in the insulating state is identified. According to the band-model approximation several transition-metal oxides should be metals. In practice one finds both metallic and insulating states, with a metal-insulator transition induced by varying the boundary conditions (pressure, temperature, compound composition). Mott pointed out that for narrow bands the electrons are localized on the lattice ions and therefore the correlations among them cannot be neglected. A model to describe these correlations was proposed by Hubbard . In a standard notation his Hamiltonian is given by $$H=\underset{ij}{\sum }(t_{ij}-\mu \delta _{ij})c^{\dagger }(i)c(j)+U\underset{i}{\sum }n_{\uparrow }(i)n_{\downarrow }(i)$$ (1) where $`c(i)`$ and $`c^{\dagger }(j)`$ are annihilation and creation operators of electrons at sites $`i`$ and $`j`$, in the spinor notation; $`t_{ij}`$ describes hopping between different sites and is usually taken as $`t_{ij}=-4t\alpha _{ij}`$, where $`\alpha _{ij}`$ is the projection operator on the first-neighbor sites; the $`U`$-term is the repulsive Coulomb interaction between two electrons at the same site, with $`n_\sigma (i)=c_\sigma ^{\dagger }(i)c_\sigma (i)`$; $`\mu `$ is the chemical potential. The magnitudes of the on-site Coulomb energy $`U`$ and of the one-electron bandwidth $`W=8t`$ control the properties of the system. The most difficult part of the model resides in this competition between the kinetic and the potential energy, and exact solutions do not exist, except in some limiting cases. In particular, an adequate description of the ground state and of the elementary excitations is still missing. In the case of one dimension an exact solution is available, which shows that there is no Mott-Hubbard transition: a gap in the density of states is present for any value of $`U`$. 
In higher dimensions there are several results that indicate the existence of a Mott-Hubbard transition, in the sense that at half filling there is a critical value of the Coulomb potential $`U_c`$ which separates the metallic phase from the insulating phase; but no rigorous results exist. In the Hubbard I approximation and in the work by Roth no transition is observed . In the Hubbard III approximation an opening of the gap is observed for $`U_c=W\sqrt{3/2}`$. By using the Gutzwiller variational method Brinkman and Rice find $`U_c=8|\overline{ϵ}|\approx 1.65W`$, with $`\overline{ϵ}`$ being the average kinetic energy per electron; the vanishing of the double occupancy $`D`$ at this value led them to propose the double occupancy as an order parameter for the metal-insulator transition. However, this result is based on the use of the Gutzwiller approximation, which becomes exact only in infinite dimensions . For finite dimensions theoretical and numerical analyses show that the double occupancy tends to zero only in the limit $`U\to \mathrm{\infty }`$. By using the dynamical mean-field approach, or $`d\to \mathrm{\infty }`$ limit, Georges et al. find that at some critical $`U`$ a gap opens abruptly in the density of states, due to the disappearance of a Kondo-like peak. A recent calculation for the HM in infinite dimensions shows a continuous Mott-Hubbard transition, with a gap opening at $`U_c\approx W`$. The same qualitative result has been found by using Quantum Monte Carlo (QMC) simulation; working at the relatively high temperature $`T=0.33t`$, the authors observe a transition with a gap opening continuously at $`U_c\approx W/2`$. In conclusion, while there are several results indicating the existence of a Mott-Hubbard transition in the 2D Hubbard model, there is no unified picture; the mechanisms that lead to the transition are different, and the value of the critical interaction strength varies from $`U_c\approx 0.5W`$ up to $`U_c\approx 1.65W`$ . A description of the structure of the ground state in the paramagnetic insulating state is also lacking. 
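The spread of critical-coupling estimates quoted above can be collected in one place; the snippet below is a bookkeeping aid rather than a calculation, with the values taken directly from the text:

```python
import math

# Critical coupling U_c in units of the bandwidth W = 8t, as quoted in the
# text for the various approximation schemes.
Uc_over_W = {
    "Hubbard III": math.sqrt(3 / 2),          # ~1.22
    "Brinkman-Rice (Gutzwiller)": 1.65,
    "dynamical mean field (d -> inf)": 1.0,
    "QMC at T = 0.33t": 0.5,
}

for method, uc in sorted(Uc_over_W.items(), key=lambda kv: kv[1]):
    print(f"{method:32s} U_c/W = {uc:.2f}")
```

The factor-of-three spread between the smallest and largest estimates is precisely the lack of a unified picture referred to above.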
In the framework of the COM the Hubbard model has been solved in the two-pole approximation , where the operatorial basis is described by the doublet Heisenberg operator $$\psi (i)=\left(\begin{array}{c}\xi (i)\\ \eta (i)\end{array}\right)$$ (2) and finite-lifetime effects are neglected. The fields $`\xi (i)=[1-n(i)]c(i)`$ and $`\eta (i)=n(i)c(i)`$, with $`n(i)=c^{\dagger }(i)c(i)`$, are the Hubbard operators . In this framework the single-particle propagator is given by $$F.T.\langle R[\psi (i)\psi ^{\dagger }(j)]\rangle =\underset{i=1}{\overset{2}{\sum }}\frac{\sigma ^{(i)}(𝐤)}{\omega -E_i(𝐤)+i\eta }$$ (3) where $`F.T.`$ means Fourier transform. The expressions for the spectral functions $`\sigma ^{(i)}(\mathrm{k})`$ and the energy spectra $`E_i(\mathrm{k})`$ have been reported in previous works . These functions are calculated in a fully self-consistent treatment, where attention is paid to the conservation of the relevant symmetries . In contrast with other approaches, one does not need to resort to different schemes in order to describe the weak- and strong-coupling regimes. Both limits, $`U\to 0`$ and $`U\to \mathrm{\infty }`$, are recovered by Eq. (3). The result (3) has been derived by assuming a paramagnetic phase. It is an open question whether this is the true ground state at half filling and zero temperature. Results of numerical simulations seem to indicate that the paramagnetic phase is unstable against a long-range antiferromagnetic order. However, numerical analysis is severely restricted in cluster size, and it is very hard to conclude that the true solution has an infinite-range AF order. As will be shown later, the calculation of the probability amplitudes for electron transfer shows that in the paramagnetic insulating state a local AF order is established, with a correlation length of the order of a few hundred times the lattice constant. 
At first, we observe that the Mott-Hubbard transition can be studied by looking at the chemical potential, the quantity which most directly controls the single-particle properties. Let us define $$\mu _1=\left(\frac{\partial \mu }{\partial n}\right)_{n=1}=\frac{1}{\kappa (1)}$$ (4) where $`\kappa (n)=(\partial n/\partial \mu )/n^2`$ is the compressibility. Analytical calculations show that at zero temperature there is a critical value of the interaction, fixed by the equation $$U_c=8t\sqrt{4p-1}$$ (5) such that for $`U>U_c`$ the quantity $`\mu _1`$ diverges. The parameter $`p`$ describes a bandwidth renormalization and is defined by $$p=\frac{1}{4}\langle n_\mu ^\alpha (i)n_\mu (i)\rangle -\langle [c_{\uparrow }(i)c_{\downarrow }(i)]^\alpha c_{\downarrow }^{\dagger }(i)c_{\uparrow }^{\dagger }(i)\rangle $$ (6) where $`n_\mu (i)=c^{\dagger }(i)\sigma _\mu c(i)`$ is the charge ($`\mu =0`$) and spin ($`\mu =1,2,3`$) density operator. We use the notation $`A^\alpha (i)=\sum _j\alpha _{ij}A(j)`$ to indicate the operator $`A`$ on the first-neighbor sites of $`i`$. The quantities $`p`$ and $`\mu _1`$ are functions of the external parameters $`n`$, $`T`$, $`U`$ and are calculated self-consistently. Numerical solution of the self-consistent equation (5) gives $`U_c\simeq 1.68W`$. In Fig. 1a $`\mu _1`$ is plotted versus $`U/t`$ for $`k_BT/t=0,0.3,1`$. At finite temperature $`\mu _1`$ increases with increasing $`U`$ and tends to infinity in the limit $`U\to \mathrm{\infty }`$. At zero temperature $`\mu _1`$ exhibits a discontinuity at $`U=U_c`$. When the intensity of the local interaction exceeds the critical value $`U_c`$, the chemical potential exhibits a discontinuity at half filling, showing the opening of a gap in the density of states and therefore a phase transition from the metallic to the insulating phase. Calculations show that the density of states (DOS) is made up of two bands: a lower and an upper band. When $`U<U_c`$ the two bands overlap: metallic phase. The region of overlap is given by $`\mathrm{\Delta }\omega =16tp-\sqrt{U^2+64t^2(2p-1)^2}`$. When $`U>U_c`$ the two bands do not overlap: insulating phase. In Fig. 
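The consistency between Eq. (5) and the band-overlap expression can be verified directly: reading the overlap quoted above as $`\mathrm{\Delta }\omega =16tp-\sqrt{U^2+64t^2(2p-1)^2}`$, the overlap closes exactly at $`U=U_c=8t\sqrt{4p-1}`$, since the argument of the square root then equals $`64t^2[(4p-1)+(2p-1)^2]=(16tp)^2`$. The sketch below treats $`p`$ as a free input rather than solving the self-consistency equations; the value $`p\approx 0.956`$ is simply the one implied by $`U_c\simeq 1.68W`$ and is illustrative:

```python
import math

def U_c(t, p):
    """Critical coupling of Eq. (5)."""
    return 8 * t * math.sqrt(4 * p - 1)

def overlap(U, t, p):
    """Band overlap: positive (metal) for U < U_c, zero at U = U_c."""
    return 16 * t * p - math.sqrt(U**2 + 64 * t**2 * (2 * p - 1) ** 2)

t = 1.0
p = 0.956      # roughly the value implied by U_c ~ 1.68 W (illustrative)
Uc = U_c(t, p)

print(f"U_c = {Uc:.3f} t = {Uc / 8:.3f} W")
print(f"overlap at U_c: {overlap(Uc, t, p):.2e}")        # closes at U_c
print(f"overlap at U_c/2: {overlap(Uc / 2, t, p):.3f}")  # positive: metal
```

This is only an algebraic check of the quoted formulas, not a substitute for the self-consistent determination of $`p`$ described in the text.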
1b the electronic DOS is reported for different values of $`U`$. We see that when $`U`$ increases the central peak splits into two peaks; some of the central weight is transferred to the two peaks, which correspond to the elementary excitations described by the fields $`\xi `$ and $`\eta `$. When $`U`$ reaches the critical value $`U_c\simeq 1.68W`$ the central peak vanishes abruptly; a gap appears and the electronic density of states splits into two separate bands. This is seen in Fig. 1c, where the DOS calculated at the Fermi level is plotted versus $`U`$. We find that the gap develops continuously, following the law $`\mathrm{\Delta }\simeq 1.5W(U/U_c-1)`$. A more detailed study of the density of states can be obtained by considering the contributions of the different channels. Calculations show that both fields $`\xi `$ and $`\eta `$ contribute to the two bands. Only in the limit $`U\to \mathrm{\infty }`$ do the two operators cease to interact and contribute separately to the two bands. Although the lower band is essentially made up of the contribution of the “$`\xi `$-electron”, there is always a contribution coming from the “$`\eta `$-electron”. The vice versa is true for the upper band. In particular, the cross contribution plays an important role in the region around the Fermi value, where $`N_{\xi \eta }(\mu )`$ is comparable with $`N_{\xi \xi }(\mu )`$ and $`N_{\eta \eta }(\mu )`$ \[for $`U>0`$\]. This result shows that in the insulating phase the ground state has a structure different from the simple one where all sites are singly occupied; the competition between the itinerant and local terms leads to a ground state characterized by a small fraction of empty and doubly occupied sites. Some questions arise: (1) what is the structure of the ground state and, in particular, does any order exist? (2) if an ordered state is established, why is this order not destroyed by the mobility of the electrons? (3) can we identify an order parameter describing the transition at $`U=U_c`$? 
An important quantity for the comprehension of the properties of the system is the double occupancy $`D=\langle n_{\uparrow }n_{\downarrow }\rangle `$, which gives the average number of sites occupied by two electrons. Analytical calculations show that at zero temperature the double occupancy, as a function of $`U`$, exhibits a drastic change when the critical value is crossed; however, it remains finite for $`U>U_c`$ and tends to zero only in the limit of infinite $`U`$, as $`D\to J/8U`$, where $`J=4t^2/U`$ is the AF exchange constant. In the case of one dimension our analytical results give $`D\to 3t^2/U^2`$ for $`U\to \mathrm{\infty }`$, which is very close to the Bethe-Ansatz result $`D\to 4\mathrm{ln}2\,t^2/U^2`$ . Doubly occupied sites are used by the system in order to lower its energy. As a matter of fact, this is precisely the origin of the effective spin-spin interaction in the $`t`$-$`J`$ model . To better understand the structure of the ground state, we have to study the matrix element $`\langle c_\sigma (j)c_\sigma ^{\dagger }(i)\rangle `$. This quantity represents the probability amplitude that an electron of spin $`\sigma `$ is created at site $`i`$ and an electron of spin $`\sigma `$ is destroyed at site $`j`$. However, this quantity gives only limited information about the occupation of the sites $`i`$ and $`j`$; there are four possible ways to realize the transition $`j(\sigma )\to i(\sigma )`$, and the quantity $`\langle c_\sigma (j)c_\sigma ^{\dagger }(i)\rangle `$ cannot distinguish among them. 
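The closeness of the two one-dimensional coefficients quoted above can be checked at a glance; a small numeric sketch (reading $`J/8U`$ as $`J/(8U)`$, with all values taken from the text):

```python
import math

# Large-U asymptotics of the double occupancy D, as quoted in the text.
# 2D (COM):          D -> J/(8U) with J = 4 t^2 / U, i.e. D -> t^2 / (2 U^2)
# 1D (COM):          D -> 3 t^2 / U^2
# 1D (Bethe Ansatz): D -> 4 ln(2) t^2 / U^2
coeff_com_1d = 3.0
coeff_bethe_1d = 4 * math.log(2)          # ~2.773

rel_diff = abs(coeff_com_1d - coeff_bethe_1d) / coeff_bethe_1d
print(f"4 ln 2 = {coeff_bethe_1d:.3f}, relative difference {rel_diff:.1%}")

# 2D coefficient of t^2/U^2 implied by D = J/(8U):
coeff_com_2d = 4 / 8
print(coeff_com_2d)   # 0.5
```

The two-pole coefficient 3 overshoots the exact Bethe-Ansatz value $`4\mathrm{ln}2\approx 2.77`$ by less than ten percent, which is the quantitative content of "very close" above.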
By means of the decomposition $`c_\sigma (i)=\xi _\sigma (i)+\eta _\sigma (i)`$, the probability amplitude is written as the sum of four contributions $`\langle c_\sigma (j)c_\sigma ^{\dagger }(i)\rangle =\langle \xi _\sigma (j)\xi _\sigma ^{\dagger }(i)\rangle +\langle \xi _\sigma (j)\eta _\sigma ^{\dagger }(i)\rangle +\langle \eta _\sigma (j)\xi _\sigma ^{\dagger }(i)\rangle +\langle \eta _\sigma (j)\eta _\sigma ^{\dagger }(i)\rangle `$ which correspond to the following transitions (with $`\overline{\sigma }`$ denoting the spin opposite to $`\sigma `$, and $`0`$ an empty site): $`\begin{array}{cccccc}\xi _\sigma (j)\xi _\sigma ^{\dagger }(i):& 0& \sigma & \to & \sigma & 0\\ & i& j& & i& j\\ \xi _\sigma (j)\eta _\sigma ^{\dagger }(i):& \overline{\sigma }& \sigma & \to & \overline{\sigma }\sigma & 0\\ & i& j& & i& j\\ \eta _\sigma (j)\xi _\sigma ^{\dagger }(i):& 0& \overline{\sigma }\sigma & \to & \sigma & \overline{\sigma }\\ & i& j& & i& j\\ \eta _\sigma (j)\eta _\sigma ^{\dagger }(i):& \overline{\sigma }& \overline{\sigma }\sigma & \to & \overline{\sigma }\sigma & \overline{\sigma }\\ & i& j& & i& j\end{array}`$ A study of the probability amplitudes $`\langle \psi (j)\psi ^{\dagger }(i)\rangle `$ will give detailed information about the structure of the ground state. In Fig. 2a the amplitude $`A=\langle \xi ^\alpha (i)\xi ^{\dagger }(i)\rangle =\langle \eta ^\alpha (i)\eta ^{\dagger }(i)\rangle `$ is reported as a function of $`U/t`$ for two different temperatures $`k_BT/t=0,1`$. We see that in the case of zero temperature this quantity vanishes for $`U>U_c`$. This can also be easily seen by analytical methods. The quantity $`B=\langle \eta ^\alpha (i)\xi ^{\dagger }(i)\rangle =\langle \xi ^\alpha (i)\eta ^{\dagger }(i)\rangle `$ is reported in Fig. 2b; we see that this probability amplitude does not vanish above $`U_c`$. Owing to this contribution, for $`U>U_c`$ the hopping of electrons from site $`i`$ to the nearest neighbors is not forbidden, although it is restricted by the fact that $`A=0`$. The hopping amplitudes have been studied up to the third-nearest neighbors, but the analysis is easily extended to any site by symmetry considerations. 
The scheme that emerges from analytical and numerical calculations can be summarized in the following table. Putting all these results together, for $`U>U_c`$ the situation can be summarized as follows: 1. an electron $`\sigma `$ which singly occupies a site (a) can hop to first, third, … neighboring sites if and only if these sites are already occupied by an electron $`\overline{\sigma }`$; (b) can hop to second, fourth, … neighboring sites if and only if these sites are empty; 2. an electron $`\sigma `$ at a doubly occupied site (a) can hop to first, third, … neighboring sites if and only if these sites are empty; (b) can hop to second, fourth, … neighboring sites if and only if these sites are already occupied by an electron $`\overline{\sigma }`$. The picture that emerges from these results is that the paramagnetic ground state in the insulating phase is characterized by a finite-range antiferromagnetic order. Due to the fact that there are empty and doubly occupied sites, the electrons have some mobility, but there are strong constraints on this mobility, such that the local antiferromagnetic order is not destroyed. This result is consistent with the fact that there is a competition between the itinerant and localizing energy terms. A study of the kinetic and potential energies as functions of $`U`$ shows that for any $`t\ne 0`$ there is always some contribution coming from the kinetic energy which allows the hopping among sites. Only in the limit of infinite $`U`$ do the double occupancy and all transition amplitudes go to zero. In conclusion, the two-dimensional single-band Hubbard model at half filling and zero temperature has been studied by means of the composite operator method. Analytical calculations show the existence of a critical value $`U_c`$ which separates the metallic and insulating phases. As soon as $`U`$ increases from zero, a depletion appears in the density of states; some weight of the central region is transferred to the lower and upper Hubbard bands. 
For larger values of U, the DOS develops three separated structures: part of the weight remains in the center around the Fermi value and discontinuously disappears at $`U=U_c`$. Similar results, although based on a different mechanism, have been obtained previously in Ref. for the infinite-dimensional Hubbard model, in Ref. by means of standard perturbation expansions, and in Ref. by Monte Carlo simulations. For $`U>U_c`$ a gap opens and the density of states splits into two separated structures. Our calculations show that even for $`U\gg U_c`$, where the lower and upper bands are well separated, the two contributions coming from $`\xi `$ and $`\eta `$ do not separate. The ground state in the insulating phase always contains a small fraction of empty and doubly occupied sites. This result is confirmed by the study of the matrix element $`c_\sigma (j)c_\sigma ^{}(i)`$, which gives the probability amplitude of hopping from the site $`j`$ to the site $`i`$. When $`j`$ is an odd neighboring site of $`i`$, this quantity is not zero for $`U>U_c`$ and vanishes only for infinite U. However, when we split $`c=\xi +\eta `$ and analyze $`c_\sigma (j_{odd})c_\sigma ^{}(i)`$ in components, we find that for $`U>U_c`$ only the matrix elements $`\xi _\sigma (j_{odd})\eta _\sigma ^{}(i)`$ and $`\eta _\sigma (j_{odd})\xi _\sigma ^{}(i)`$ survive. The probability amplitudes $`\xi _\sigma (j_{odd})\xi _\sigma ^{}(i)`$ and $`\eta _\sigma (j_{odd})\eta _\sigma ^{}(i)`$ vanish at $`U=U_c`$ and remain zero for all $`U>U_c`$. On the other hand, the matrix element $`c_\sigma (j_{even})c_\sigma ^{}(i)`$ is always zero for any value of U, the two contributions $`\xi _\sigma (j_{even})\xi _\sigma ^{}(i)`$ and $`\eta _\sigma (j_{even})\eta _\sigma ^{}(i)`$ compensating each other. 
Summarizing, our calculations suggest that the ground state of the Mott insulator has the following characteristics: (1) a small fraction of sites are empty or doubly occupied; the number of these sites depends on the value of U/t and tends to zero only in the limit of infinite U; (2) a local antiferromagnetic order is established; (3) the electrons keep some mobility, but this mobility must be compatible with the local AF order; (4) the matrix elements $`\xi _\sigma (j_{odd})\xi _\sigma ^{}(i)`$ and $`\eta _\sigma (j_{odd})\eta _\sigma ^{}(i)`$ might be considered as the quantities which control the order in the insulating phase. The author wishes to thank Doctors Adolfo Avella and Dario Villani for valuable discussions. An enlightening discussion with Professor Peter Fulde, which partly motivated the writing of this article, is also gratefully acknowledged.
# Stripe Formation within SO(5) Theory ## Abstract We study the formation of stripe order within the SO(5) theory of high-$`T_c`$ superconductivity. Spin and charge modulations arise as a result of the competition between a local tendency to phase separate and the long-range Coulomb interaction. This frustrated phase separation leads to hole-rich and hole-poor regions which are respectively superconducting and antiferromagnetic. A rich variety of microstructures, ranging from droplet and striped to inverted-droplet phases, is stabilized, depending on the charge carrier concentration. We show that the SO(5) energy functional favors non-topological stripes. One of the most striking features of the cuprates is the proximity between antiferromagnetic (AF) and superconducting (SC) phases as a function of doping. Recently it has been proposed that these two phases are unified by an approximate SO(5) symmetry . A number of experimental consequences of this theory have been worked out . Although SO(5) appears to be a natural framework for understanding the cuprates, no experiment has unequivocally tested the fundamental validity of the theory. One of its most direct predictions is the existence of a first-order transition from the AF to the SC state as the chemical potential $`\mu `$ is increased beyond a critical value. However, this prediction is complicated by the fact that the doping $`x`$ (not $`\mu `$) is the experimentally tunable parameter. Experimentally, it is found that in the vicinity of the AF/SC transition, the cuprates show an increased sensitivity to disorder and inhomogeneity. In this Letter, we study this region of the phase diagram in the presence of the long-range Coulomb interaction within the SO(5) formalism and show how spatially inhomogeneous states can emerge. 
In the $`T-\mu `$ phase diagram of SO(5) theory, there is a first-order line separating the AF and SC phases, across which the charge carrier density $`x`$ jumps discontinuously. In the $`T-x`$ phase diagram, this translates into a two-phase region where AF and SC phases coexist. Phase separation into hole-rich and hole-poor regions was also noticed in studies of the $`t-J`$ model . However, as Emery and Kivelson argued rather successfully, the long-range Coulomb interaction between charge carriers prevents macroscopic phase separation. The competition between the local tendency toward phase separation and the long-range Coulomb interaction leads to modulated domain structures at mesoscopic scales . In the SO(5) theory, the hole-rich and hole-poor regions are respectively identified as having a superconducting and antiferromagnetic character. The spin and charge modulations of the system are interpreted as textures of the SO(5) superspin as it rotates in SO(5) space. There is considerable evidence for modulated microstructure in the oxides. Domain formation has been reported in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LSCO) in muon spin resonance , NMR and neutron diffraction experiments . Neutron scattering measurements in La<sub>1.6-x</sub>Nd<sub>0.4</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LNSCO) provide direct evidence for stripe ordering in which the phase of the AF order shifts by $`\pi `$ across a domain wall . Furthermore, recent inelastic neutron scattering measurements in underdoped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-x</sub> (YBCO) and ARPES measurements in underdoped Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8-x</sub> (BSCCO) are not inconsistent with a striped phase interpretation. 
In the mean field approximation of the SO(5) theory one minimizes the classical energy $$H_1(n_a,p_a)=\frac{1}{4}\underset{ab}{\sum }\frac{L_{ab}^2}{\chi _{ab}}+g(n_1^2+n_5^2)+\frac{\rho _s}{2}(\mathrm{\nabla }\stackrel{}{n})^2+E_c$$ (1) with $`L_{ab}=n_ap_b-n_bp_a`$ and constraints $`\sum _an_a^2=1`$, $`\sum _an_ap_a=0`$. $`L_{ab}`$ and $`\stackrel{}{n}`$ refer respectively to the SO(5) generators of rotation and the 5-component superspin $`\stackrel{}{n}=(Re(\mathrm{\Delta }),N_x,N_y,N_z,Im(\mathrm{\Delta }))`$ . The last term $`E_c`$ is the Coulomb energy. Since, in SO(5) theory, the hole density is given by $`L_{15}`$, the charge density is $`\rho (r)=eL_{15}(r)-ex`$, where $`ex`$ is the charge of the neutralizing counterions, which are assumed to be static and homogeneously distributed with density $`x`$. We then conjecture that, if SO(5) theory remains valid in the presence of the long-range Coulomb interaction , the Coulomb energy in the mean field approximation will be given by $$E_c=\frac{1}{2}\int \int \rho (r)V_C(r-r^{})\rho (r^{})𝑑r𝑑r^{}.$$ (2) It is important to emphasize that for homogeneous phases one recovers the basic SO(5) model, because the charge density vanishes exactly, making the Coulomb interaction irrelevant. Hence the influence of the Coulomb term arises solely in the phase separation regime. Second, the assumption of immobile static counterions is known to fail for La<sub>2</sub>CuO<sub>4+δ</sub> . In this case, the oxygen ions are mobile enough to screen the charge inhomogeneities, which leads to macroscopic phase separation of superconducting and antiferromagnetic domains. This last fact provides strong evidence for the correctness of the SO(5) picture. 
It can be proved that if $`H_1`$ is symmetric with respect to rotations within the AF and SC subspaces ($`\chi _{15}=\chi _s,\chi _{23}=\chi _{34}=\chi _{24}=\chi _a,\chi _{12}=\chi _{13}=\chi _{14}=\chi _{25}=\chi _{35}=\chi _{45}=\chi _\pi `$), the minimal configuration has the form $`\stackrel{}{n}=(n_1,n_2,0,0,0)=(\mathrm{cos}\theta ,\mathrm{sin}\theta ,0,0,0)`$, $`\stackrel{}{p}=(0,0,0,0,p_5)`$, and the constraints are automatically satisfied if we use the variables $`(\theta ,p_5)`$. With the addition of the long-range Coulomb interaction to the Hamiltonian, the behavior of the system is no longer tractable analytically and one must resort to numerical analysis. We note that a classical spin Hamiltonian $$H_2=J\underset{i,j}{\sum }\stackrel{}{S}_i\cdot \stackrel{}{S}_j-2K\underset{i}{\sum }(S_i^z)^2$$ (3) can be transformed into the local part of $`H_1`$ by a Haldane map (see Ref. for details). Here $`J>0`$ and $`K>0`$ are known functions of $`\chi _{ab},g,\rho _s`$. To match the whole expression (1) we add a term $$V=\frac{1}{2}\underset{i,j}{\sum }\frac{(S_i^z-x)(S_j^z-x)}{ϵ\left|\stackrel{}{r}_i-\stackrel{}{r}_j\right|}$$ (4) where $`ϵ`$ is the dielectric constant of the material. As emphasized earlier, since experiments are performed at constant carrier concentration, the Hamiltonian $`E=H_2+V`$ is subject to the doping constraint $`\langle S^z\rangle =x`$. Numerically it is easier to study $`E`$ than $`H_1`$ because the hole density is given explicitly by $`S^z`$ rather than implicitly by $`L_{15}`$. The properties of $`E`$ are studied using Monte Carlo simulations. In order to find the lowest energy state of the system, we perform simulated annealing from high temperature. We assume an $`N\times N`$ 2-dimensional lattice where $`N`$ can be up to 40 unit cells. In the absence of the long-range Coulomb interaction, one can easily show that the system phase separates for densities $`x`$ less than $`x_c=K/(2J-K)`$. 
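As a rough illustration of the procedure described above (not the authors' code), here is a minimal Metropolis simulated-annealing sketch for a classical-spin caricature of $`E=H_2+V`$ on a small periodic lattice; the soft penalty replacing the hard doping constraint, the cooling schedule, and all parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(S, J=1.0, K=0.6, eps=10.0, x=0.3):
    """Caricature of E = H2 + V: nearest-neighbor Heisenberg term,
    easy-axis term, and a minimum-image 1/r Coulomb term for the
    charges (S_z - x) on an N x N periodic lattice."""
    N = S.shape[0]
    Sz = S[..., 2]
    h2 = J * np.sum(S * (np.roll(S, 1, axis=0) + np.roll(S, 1, axis=1)))
    h2 -= 2.0 * K * np.sum(Sz ** 2)
    q = (Sz - x).ravel()
    ii, jj = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    r = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)
    d = np.abs(r[:, None, :] - r[None, :, :])
    d = np.minimum(d, N - d)                      # minimum-image convention
    dist = np.hypot(d[..., 0], d[..., 1])
    np.fill_diagonal(dist, np.inf)                # no self-interaction
    return h2 + 0.5 * np.sum(q[:, None] * q[None, :] / (eps * dist))

def anneal(N=6, x=0.3, steps=2000, mu=50.0):
    """Single-spin Metropolis moves under a linear cooling schedule; the
    doping constraint <S_z> = x is enforced softly via mu*(mean - x)^2."""
    S = rng.normal(size=(N, N, 3))
    S /= np.linalg.norm(S, axis=-1, keepdims=True)
    def total(S):
        return energy(S, x=x) + mu * (S[..., 2].mean() - x) ** 2
    E = total(S)
    for step in range(steps):
        T = max(2.0 * (1.0 - step / steps), 0.01)
        i, j = rng.integers(N, size=2)
        old = S[i, j].copy()
        trial = old + 0.5 * rng.normal(size=3)
        S[i, j] = trial / np.linalg.norm(trial)
        Enew = total(S)
        if Enew > E and rng.random() > np.exp((E - Enew) / T):
            S[i, j] = old                         # reject the move
        else:
            E = Enew
    return S, E
```

A production calculation would instead use constraint-preserving moves and a proper Ewald summation for the $`1/r`$ tail; this sketch only conveys the structure of the annealing loop.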
The addition of the Coulomb term leads to a rich variety of modulated structures, which are shown in the phase diagram of Fig.1. For large dielectric constant, three phases are found to be stabilized: a droplet phase made of SC droplets embedded in an AF background, a striped phase of alternating SC and AF stripes, and an inverted-droplet phase where the droplets are antiferromagnetic. Our numerical solutions (Fig.2) show that the superspin stays in the AF or SC directions inside the domains and changes only in the thin domain walls. The structure represents a collection of solitons rather than a small modulation of the direction of $`\stackrel{}{n}`$. The superconducting density switches between $`0`$ and $`x_c`$ in AF and SC domains, which means that the superconducting area fraction is $`A_{sc}/A=x/x_c`$, leading to a linear relation between the superfluid density and the doping as seen in some experiments. A simple physical argument for the pattern shape can be given in terms of interface energy. As long as one of the phases (AF or SC) is in the minority, the energy of the AF/SC interface predominates over the Coulomb energy and circular domains are preferred, as they minimize the length of this interface. However, the situation is reversed for $`x\sim x_c/2`$, where the repulsive interaction leads to dipole formation which favors elongated domains such as stripes. The striped phase is reminiscent of the domain structure observed in LNSCO, though the rows of charge are superconducting in our model. It is interesting to consider the superspin texture in the striped phases, namely, the relative phase shift of every other stripe. Numerically, it is found that the lowest energy states do not show any winding of the superspin in space. Hence, the phase of the AF order parameter does not shift by $`\pi `$ on crossing a SC stripe. The same results were obtained for a simulation on a spin ladder where all configurations are necessarily one-dimensional. 
This absence of topological phase shift is in striking contrast to experimental data ; however, it can be proved analytically for the minimal periodic 1-D configuration of (1) using the theorem of Pryadko et al. . Theorem: For a functional $$\int \left[\left(\frac{dv}{dr}\right)^2+E_{loc}(v^2(r),r)\right]𝑑r+\int \int \rho (v^2(r),r)V(r-r^{})\rho (v^2(r^{}),r^{})𝑑r𝑑r^{}$$ (5) of a function of one argument $`v(r)`$, the minimal configuration under the constraint $`\int \rho (v^2(r),r)𝑑r=0`$ does not cross zero. Applying it to (1) we note that, as a function of $`\theta `$, both $`\rho `$ and $`H_1`$ can be expanded in even powers around the points $`\theta =0,\pi `$ and thus $`\theta _{min}(r)`$ cannot cross these levels. In addition, let us perform the variable change $`(\theta ,p_5)\to (\theta ,q=p_5/\mathrm{cos}\theta )`$. Its Jacobian is sometimes infinite, but not on the minimal solution, for which minimization of $`H_1`$ with respect to $`p_5`$ gives: $$p_5^{min}=-\frac{\mathrm{cos}\theta }{\mathrm{cos}^2\theta /\chi _s+\mathrm{sin}^2\theta /\chi _\pi }\int V_C(r-r^{})\rho (r^{})𝑑r^{}$$ (7) After the variable change, $`H_1(\theta ,q)`$ can also be expanded in even powers around the points $`\theta =\pi /2,3\pi /2`$, so the minimal solution does not cross these levels either. Altogether, $`\theta _{min}(r)`$ always stays in one of the four quadrants of the circle. For low dielectric constant, i.e. weak screening, we find that phase separation is precluded altogether. The system exhibits a homogeneous mixed phase in which the superspin points neither purely in the AF nor in the SC direction. This state is reminiscent of the putative supersolid phase in <sup>4</sup>He, as both order parameters (AF and SC) are nonzero everywhere in the sample. After the minimum of $`E`$ is found, the chemical potential can be calculated numerically as $`\mu =\partial E/\partial x`$. As shown in Fig. 
3, $`\mu (x)`$ becomes non-monotonic and a region of $`d\mu /dx<0`$ appears. Such a region is prohibited in thermodynamics, but in models with continuous charge density (as opposed to point charges) it is generic. The origin of the region in which the chemical potential is double-valued is illustrated by Fig.4. In the absence of Coulomb interactions and gradients, the energy of a mixture with doping $`x_{min}<x<x_{max}`$ interpolates linearly between $`E_{min}`$ and $`E_{max}`$. The effect of Coulomb interactions and gradients is to increase the total energy of these intermediate states to $`E_{tot}(x)`$, as shown in Fig.4. The dependence of $`\mu `$ on $`x`$ is then given simply by $`\mu =\delta E_{tot}/\delta x`$, which is consistent with the numerical results of Fig.3. Because $`E_{tot}(x_{min,max})=E_{min,max}`$ in models with continuous charge, one has $$\int _{x_{min}}^{x_{max}}(\mu (x)-\mu _c)𝑑x=0,$$ (8) which applies beyond the SO(5) theory. Our numerical results are consistent with (8). Experimentally, investigations of the chemical shifts in LSCO and BSCCO have shown that while the shift is large in overdoped samples, it is strongly suppressed and pinned in underdoped samples, in agreement with the phase separation picture. However, due to poor experimental resolution, it is not possible to ascertain the non-monotonic behavior of $`\mu (x)`$. More experimental work is needed to test this prediction of our model. While this work was motivated by experiment, it should be emphasized that extensions of our model would be needed to make real contact with experiments. The lattice anisotropy of the cuprates will lead to an anisotropic AF/SC interface energy. We expect this anisotropy to enlarge the region of striped phase stability relative to that of the droplet phases, as the former can take best advantage of that anisotropy. Also, disorder will make the coefficients $`J`$ and $`K`$ (or $`\chi _{ab}`$ and $`g`$) and the charge of the counterions position-dependent. 
Although the resulting effects are complex in character, we may speculate that for small disorder, the defects act as pinning centers for the stripes and lead to distortions of the domain structure as well as a loss of long-range order. This may explain the failure to observe droplet phases in the high-$`T_c`$ superconductors. For strong randomness, the size of the domains would be predominantly set by the disorder instead of the long-range interactions . However, we expect that the linear relation between the superfluid density and the doping should still hold in these glassy materials. In summary, we have shown that the interplay between the long-range Coulomb interaction and the local tendency to phase separation of the SO(5) model leads to an interesting and remarkably rich phase diagram for the clean system. We found that the frustrated phase separation between hole-rich and hole-poor regions can provide an explanation for the gross features of the cuprates near the AF/SC transition when lattice anisotropy and impurity effects are taken into account. However, the SO(5) energy functional cannot have topological solutions as its lowest energy state. Therefore we believe that the topological nature of stripes that are observed in experiment must arise from microscopic properties of the coexisting states. Finally, we draw attention to the behavior of the chemical potential in the phase separation regime. We acknowledge many useful discussions with Eugene Demler and Shoucheng Zhang. This work was partially supported by NSERC, by the Ontario Center for Materials Research and by Materials and Manufacturing Ontario.
# Propagating front in an excited granular layer ## I INTRODUCTION Granular materials exhibit phases that resemble yet are distinct from ordinary solids, liquids, and gases. (For reviews see Refs .) Granular materials at rest behave much like solids (e.g. they can sustain stress), but when excited they can behave as a dense liquid or a more dilute gas, with varying degrees of correlated motion. Many phenomena observed in granular materials involve more than one phase simultaneously. For example, in sheared granular materials and in avalanches both a solid phase and a liquid or gas phase are present. The coexistence of phases and the mechanisms of transitions between phases are of great interest. Although there are no attractive interparticle forces, it is provocative to compare and contrast these phenomena to their condensed matter analogs. A critical difference is the presence of inelastic collisions and dissipation of energy in a granular medium . Energy has to be added continuously in order to maintain granular material in a fluid state. Dissipation can also render the excited state unstable against density fluctuations. Regions of elevated density experience an elevated collision rate, which decreases average particle speeds and leads to further local density increases through the granular equivalent of mass diffusion. The excited state collapses locally and clusters of particles at rest appear. This phenomenon has been studied numerically for the cooling of a granular gas without energy input . Clustering has been predicted to persist in the presence of excitation , whether the energy is provided homogeneously or at the boundary . Clustering due to inelasticity has been studied experimentally by several groups . Granular materials can be excited conveniently with a controlled energy input through vertical vibrations of the container. 
Since energy is transferred into horizontal motion through interparticle collisions, the (mean) vertical kinetic energy is always larger than the horizontal kinetic energy. Olafsen and Urbach , in a study of a vertically vibrated partial layer of steel spheres, found spontaneous clustering into a two-dimensional ordered crystal at rest surrounded by a sea of vibrating particles, as the peak acceleration was reduced below 1g. Both phases were found to coexist in steady state. In this article, we study a triggered phase transition that transforms a quiescent disordered phase to an excited granular gas via a propagating front. Starting with a partial two-dimensional layer of spherical beads at rest on a vibrating flat plate below a threshold vibration amplitude, we have observed that a small external perturbation of only one bead can induce a phase transition into a rapidly moving gaseous state that spreads to all beads. We determine the frequency-dependent range of vibration amplitudes where the phase transition can be triggered, and observe a subcritical-to-supercritical transition at a threshold frequency. We also measure the growth rate and interface shape of the gaseous region. We account for our observations using numerical studies of a single periodically forced bouncing particle subjected to random perturbations arising from collisions. The simulation explains the origin of the propagating front and may also help to explain the crystal-fluid coexistence discovered earlier . Other metastable states are known in granular matter. For example, a small perturbation suffices to trigger avalanches and a small (tapping) perturbation can start the flow of granular material from a hopper. The propagating front described here is different from these examples; it arises from the coexistence of two dynamical attractors. 
## II Experimental procedure and qualitative observations The experiments are conducted in a rigid $`32`$ cm diameter circular container made of anodized aluminum that is oscillated vertically with a single frequency between $`40`$ Hz and $`140`$ Hz using a VTS500 vibrator from Vibration Test Systems Inc. A computer controlled feedback loop keeps the vibration amplitude constant and reproducible. Spherical (grade 100) nonmagnetic 316 stainless steel beads with $`1.59\pm 0.02`$ mm diameter are used as granular material. Most experiments were carried out with a single layer of $`18400`$ beads, which corresponds to a fractional coverage of $`c=0.50`$ (half that of a hexagonal close packed layer). The particles are illuminated from four sides and images are captured with a $`512\times 512`$ pixel variable scan CCD camera (CA-D2, Dalsa Inc.) at $`2`$ frames/s. In some experiments a $`512\times 480`$ pixel fast CCD camera (SR-500, Kodak Inc.) operated at $`3060`$ frames/s was used. The coefficient of restitution for collisions with the aluminum plate (i.e. the ratio of velocities after and before a collision), measured from images of successive vertical bounces taken with the fast camera, is $`\alpha _{\mathrm{plate}}=0.95\pm 0.02`$ at commonly observed particle speeds. This result is similar to $`\alpha _{\mathrm{bead}}=0.93\pm 0.02`$, which was measured in Ref. . (A similar value, $`\alpha _{\mathrm{bead}}=0.95\pm 0.03`$, was obtained in Ref. .) We found that $`\alpha _{\mathrm{plate}}`$ decreases with increasing impact velocity (in agreement with Refs ). When rotational motion or a tangential velocity component is present during collisions, the effective restitution coefficient decreases. Conducting nonmagnetic 316 stainless steel was chosen for the beads. We found that the phase transitions occur at the same accelerations when the shaker’s magnetic field was reduced by $`70\%`$, and that the growth rate of the excited phase is unchanged. 
These facts indicate that the external magnetic field generated by the shaker does not influence the results presented here. We also did not find any significant electrostatic effects. ### A Quiescent disordered (amorphous) state In order to create reproducible initial conditions for all experiments, the container is vibrated for more than $`20`$ s at a peak acceleration $`a\approx 2`$ g. Then the vibration is abruptly turned off and the particles come to rest through free cooling (i.e. decrease of particle energies through inelastic collisions). Figure 1 shows the resulting radial autocorrelation function $`C(\delta )`$ of images of the amorphous state after sudden cooling starting from different forcing frequencies; the horizontal axis is scaled by the bead diameter. The correlation function is calculated from the greyscale intensity $`I`$, after subtraction of the average greyscale intensity, as $`C(\delta )=\langle I(r)I(r+\delta )\rangle _r/2\pi \delta \langle I(r)^2\rangle _r`$. We note a sharp peak near $`\delta =0`$ corresponding to the image of a single bead, and a tail that is insensitive to the vibration frequency and amplitude prior to the shut-off. This tail is not present in the gaseous state, also shown. The clustering of beads during sudden cooling, which was anticipated numerically, appears to be insensitive to the frequency at which the beads were excited. When the amplitude is lowered slowly, clustering becomes measurably stronger, as is evident from the upper curve. ### B Propagating front The focus of the experiments presented in this article is the transition to a moving gaseous state, starting from the reproducible quiescent amorphous state produced by sudden cooling. If the peak acceleration is much larger than 1 g, all beads start moving immediately when the vibration is switched on. For peak accelerations much smaller than 1 g, the beads remain quiescent on the plate indefinitely and return to the quiescent state when perturbed externally. 
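Returning to the autocorrelation analysis of Sec. II A: a minimal sketch of how such a radially averaged autocorrelation can be computed from a greyscale image (our own FFT-based illustration with periodic boundaries, not the analysis code behind Fig. 1; the bin count and normalization are arbitrary choices):

```python
import numpy as np

def radial_autocorrelation(img, nbins=30):
    """Radially averaged autocorrelation of a 2-D greyscale image after
    subtracting the mean intensity, normalized so that C(0) = 1."""
    f = img - img.mean()
    # Wiener-Khinchin: the autocorrelation is the inverse FFT of the
    # power spectrum (periodic boundary conditions are implied).
    corr = np.real(np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2)) / f.size
    corr = np.fft.fftshift(corr) / corr.max()
    ny, nx = corr.shape
    y, x = np.indices(corr.shape)
    r = np.hypot(x - nx // 2, y - ny // 2)       # radial lag of each pixel
    bins = np.linspace(0.0, r.max() / 2.0, nbins + 1)
    which = np.digitize(r.ravel(), bins)
    vals = corr.ravel()
    c = np.array([vals[which == k].mean() if np.any(which == k) else 0.0
                  for k in range(1, nbins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), c       # bin centers, C(delta)
```

In the experiment the lag axis would then be rescaled by the bead diameter in pixels, as in Fig. 1.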
In contrast, for a frequency-dependent intermediate range of peak accelerations just below 1 g, the beads remain at rest when the vibration is switched on, but a transition to the moving gaseous state can be triggered by setting at least one bead into motion. Figure 2 shows the typical evolution of a portion of the layer from the amorphous state to the gaseous state when a region of several beads is perturbed. The peak acceleration $`a`$ of the vibrated plate has been set below, but close to, a threshold $`a_c`$ that depends on the driving frequency $`f`$; the beads remain essentially at rest on the plate (Fig. 2a). An external perturbation is then applied to a few beads, either by rolling an additional bead through an inclined cylinder onto the surface or by pushing a few beads manually. The perturbed bead(s) start to bounce and move in the horizontal direction, transmitting energy to their neighbors, which in turn start to move. An area within which all beads are moving develops quickly, surrounded by a denser area that marks the interface between the granular gas and the essentially static amorphous state (Fig. 2b). The area of the moving gaseous phase increases rapidly as the dense front propagates into the amorphous phase (Fig. 2c), until all beads are moving. For accelerations where a crystalline phase can exist (see below), we observe the development of a crystal during the transition and its coexistence with the gaseous phase in the steady state. ### C Interface shape We use the absolute difference of two consecutive frames to calculate the area and the interface shape of the gaseous region. Such a difference image, converted to black and white, produces a white spot at the initial and final positions of a moving bead, leaving all stationary beads and the background black. This allows us to distinguish between the essentially stationary amorphous phase and the moving gaseous phase. Figures 2d-f are difference images corresponding to Figs. 2a-c. 
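The difference-image bookkeeping can be sketched as follows (a toy stand-in for the actual image pipeline; the threshold value and the 4-neighbor perimeter estimate are our own choices):

```python
import numpy as np

def moving_mask(frame1, frame2, thresh=20):
    """Boolean 'difference image': pixels whose greyscale value changed by
    more than `thresh` between consecutive frames mark moving beads."""
    return np.abs(frame2.astype(int) - frame1.astype(int)) > thresh

def area_and_perimeter(mask):
    """Pixel area and a crude 4-neighbor perimeter of the True region
    (np.roll wraps around, so the region should not touch the border)."""
    area = int(mask.sum())
    perim = 0
    for axis in (0, 1):
        for shift in (1, -1):
            # count masked pixels whose neighbor along this direction is unmasked
            perim += int(np.sum(mask & ~np.roll(mask, shift, axis)))
    return area, perim
```

In the experiment the mask would additionally be closed into a continuous white region before measuring the interface, as described below.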
(The difference images reveal that a few beads do move even in the amorphous phase, so small spontaneous perturbations are always present.) In order to extract the size and shape of the gaseous region from the difference images, we create a continuous white region using NIH Image. The perimeter of the gaseous region, extracted with this method from the absolute difference of two consecutive frames, is shown overlaid onto the second frame in Fig. 3. The dense front clearly moves as the gaseous phase expands, so we consider the dense front as part of the gaseous phase . ## III QUANTITATIVE RESULTS ### A Phase Diagram The transition between the amorphous and gaseous phases, triggered by external perturbations, can occur only for a small range of vibrator accelerations. Below a low acceleration limit $`a_l`$, no perturbation is able to initiate a gaseous region that persists for at least one minute. Above a higher acceleration $`a_c`$, the spontaneous perturbations present in the amorphous state (observable in Fig. 2d) are sufficiently strong to trigger a propagating front within two minutes without an external perturbation. Figure 4 shows $`a_l`$ and $`a_c`$ as a function of the driving frequency $`f`$. The hysteretic region between $`a_c`$ and $`a_l`$ was found to be reproducible to within $`1\%`$ of the driving acceleration. It decreases with increasing frequency, and a subcritical-to-supercritical transition takes place at $`f_t\approx 120`$ Hz. At higher frequencies, spontaneous excitation of the gaseous phase is observed for $`a\lesssim a_c`$, but any excitation decays slowly, and even large external perturbations fail to trigger a transition into the gaseous state. For $`a>a_c`$, on the other hand, areas of local excitation develop (often in numerous spots simultaneously) and propagate quickly throughout the system. The freezing and evaporation points of the crystalline phase are also shown in Fig. 4 as open symbols. 
At lower frequencies they fall in the middle of the hysteretic region, while at higher frequencies (roughly above $`f_t`$) they lie above the amorphous-to-gaseous transition. When the freezing point falls within the hysteretic region, the amorphous-to-gaseous transition leads to a gaseous phase for accelerations above the freezing point, and to coexisting crystalline and gaseous regions at accelerations below the freezing point. Additional measurements indicate that at a lower bead coverage $`c=0.25`$, both $`a_l`$ and $`a_c`$ change by less than $`5\%`$. ### B Growth Rates The growth rate of the gaseous region can be measured between $`a_l`$ and $`a_c`$ (and even somewhat above $`a_c`$ by applying an external perturbation quickly, before a spontaneous instability occurs). Figure 5 shows a double logarithmic plot of the area $`A`$ of the gaseous region as a function of time after initiation by a perturbation. The measurements are carried out at $`40`$ Hz and $`80`$ Hz for several peak accelerations within the hysteresis region and slightly above $`a_c`$. A doubling of the driving frequency slows the rate of growth of the gaseous area by approximately a factor of 4. Reducing the coverage does not change the initial growth rate, but leads to faster growth at later times. The total plate area $`A_{\mathrm{max}}=804\mathrm{cm}^2`$ limits the growth eventually. Figure 5 indicates that the growth, after an initial transient during which the dense front forms, can be approximately described by a power law with an exponent that is independent of acceleration and only slightly dependent on coverage. However, the exponent changes significantly with frequency. A realistic description of the dynamics of growth of the gaseous region is complicated by the dynamics of the dense layer of beads ahead of the interface, which appears to slow the growth of the gaseous region. The dense layer is pushed uniformly by the advancing gas under some conditions (see Fig. 
3), but in other cases crystalline regions form that do not move. The advancing front then grows around those regions, leaving behind slowly melting crystalline regions within the gaseous phase. At small coverages ($`c\lesssim 0.05`$) an interface between the amorphous and gaseous phase is not defined, as moving beads can pass many stationary beads between collisions. In this case the rate of increase in the number of moving beads is only limited by the mean interval between collisions with stationary beads. It is nevertheless possible to describe the growth process approximately by noting that the growth rate $`dA/dt`$ increases in proportion to the perimeter $`P(A)`$ between the gaseous and the amorphous regions, as shown in Fig. 6 for areas between $`50`$ and $`400\mathrm{cm}^2`$. We find $$dA/dt=\beta (a,f)P(A),$$ (1) where the front velocity $`\beta `$ depends on $`a`$ and $`f`$. This indicates that the growth rate of the gaseous area is roughly proportional to the rate at which beads collide with the interface once a dense layer has formed. The front velocity $`\beta `$ is shown in Fig. 7. It increases approximately linearly with $`a`$ and goes through zero near $`a_l(f)`$. ## IV BOUNCING BALL MODEL ### A Model of unperturbed bouncing The hysteretic nature of both the amorphous-to-gaseous transition (and to a lesser degree the crystalline-to-gaseous transition ) below $`1\mathrm{g}`$ peak acceleration indicates that an energy gap exists between the states at rest (amorphous and crystalline) and the lowest energy excited steady state. The simplest model that can be used to explain the observed behavior is based on the dynamics of a single bead bouncing on a plate under sinusoidal vibrations $`x_p=X_{\mathrm{max}}\mathrm{cos}(2\pi ft)`$. The peak acceleration then is $`a=X_{\mathrm{max}}4\pi ^2f^2`$. 
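Returning to the growth law (1) above: for a circular gaseous region, $`P=2\sqrt{\pi A}`$, so Eq. (1) predicts a front radius growing linearly in time, $`A(t)=\pi (r_0+\beta t)^2`$. A quick numerical check of this consequence (an illustration of the law, not the authors' analysis):

```python
import numpy as np

def front_area(beta, r0=1.0, t_end=10.0, dt=1e-3):
    """Euler integration of dA/dt = beta * P(A) with P = 2*sqrt(pi*A),
    starting from a circular gaseous region of radius r0."""
    A = np.pi * r0 ** 2
    steps = int(round(t_end / dt))
    for _ in range(steps):
        A += beta * 2.0 * np.sqrt(np.pi * A) * dt
    return A
```

With a constant $`\beta `$, this gives the approximately quadratic growth of $`A(t)`$ that a power-law fit over a limited range can mimic with a non-universal exponent.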
In a collision with the plate, the vertical bead velocity changes from $`v`$ to $`v^{\prime }`$: $$v^{\prime }=(1+\alpha _{\mathrm{plate}})\dot{x}_p-\alpha _{\mathrm{plate}}v.$$ (2) The bead at rest on the plate ($`v^{\prime }=v=\dot{x}_p`$) always represents a stable steady state solution below $`1`$ g peak acceleration with an average bead energy $$E_0=\frac{a^2}{16\pi ^2f^2}$$ (3) per unit mass. Additional steady states appear as the acceleration is increased. The first nonsticking steady state is a periodic bouncing with speed at the instant of collision given by: $$v^{\prime }=-v=\frac{1+\alpha _{\mathrm{plate}}}{1-\alpha _{\mathrm{plate}}}\dot{x}_p.$$ (4) For periodic bouncing every vibration period this requires $`v^{\prime }=g/(2f)`$ and therefore (at the moment of contact) $$E_1=\frac{g^2}{8f^2}$$ (5) per unit mass. Other excited states that are observable under conditions similar to the experiment have higher energy. States with a period $`n/f`$, where the bead hits the plate on average once per vibration period and the impact pattern is repeated after $`n`$ cycles, have an average energy of $`E=(g^2/8n)\sum _k\delta t_k^2`$. The minimum of $`E`$ occurs for the simple periodic state $`n=1`$, where the time intervals $`\delta t_k`$ between contacts with the plate are of equal length. Periodic or chaotic states with more than one plate impact per vibration period would have lower energy if they exist, but no such state was found in several calculations of basins of attraction using the bouncing ball program by Tufillaro et al. with parameters similar to the experiment. Those states therefore either do not exist, are unstable, or have a negligible basin of attraction. For vibrator accelerations below $`1\mathrm{g}`$ an energy gap therefore exists between the lowest energy steady state $`E_0`$, which is the quiescent state, and the first excited state, a periodic bouncing state with energy $`E_1`$. We propose that this energy gap determines the size of the hysteretic region. 
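The size of the gap between Eqs. (3) and (5) is easy to evaluate directly. The sketch below is illustrative only (it assumes $`g=9.81`$ m/s² and the $`f=80`$ Hz drive used later in the paper):

```python
import math

G = 9.81  # gravitational acceleration in m/s^2 (assumed value)

def e_quiescent(a, f):
    """Mean energy per unit mass of a bead at rest on the plate, Eq. (3)."""
    return a**2 / (16.0 * math.pi**2 * f**2)

def e_first_excited(f):
    """Energy per unit mass of the period-1 bouncing state, Eq. (5)."""
    return G**2 / (8.0 * f**2)

f = 80.0  # drive frequency in Hz
for a_over_g in (0.66, 0.80, 1.00):
    a = a_over_g * G
    gap = e_first_excited(f) - e_quiescent(a, f)
    print(f"a = {a_over_g:.2f} g: E0 = {e_quiescent(a, f):.2e}, "
          f"E1 = {e_first_excited(f):.2e}, gap = {gap:.2e} (m/s)^2")
```

Note that even at $`a=1`$ g the ratio is $`E_1/E_0=2\pi ^2\approx 20`$, so the gap persists all the way up to the acceleration at which the quiescent state disappears.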
The measured decrease in the extent of hysteresis with increasing $`f`$ may reflect the dependence of $`E_1`$ and $`E_0`$ on $`1/f^2`$. The discussion so far, which is based on a one dimensional idealized model, has neglected perturbations. In the experiments, however, perturbations are important, as each bead is subjected to frequent interparticle collisions (which create and sustain horizontal motion) and to other perturbations (e.g. due to slight nonuniformities of the vibrator surface). Since a perturbation can trigger a transition into a different steady state, we include them in a model that can be simulated numerically. ### B Simulations We follow the vertical movement of a single bead that is subjected to random perturbations of its vertical speed representing collisions with other beads. Our strategy is to determine the mean energy of the vertical motion and to compare it with that of the first excited state. Therefore, we do not track the horizontal speed of the particle. The time of perturbation events is selected randomly to create an average perturbation rate $`f_p`$. The velocity of the bead after a perturbation is $`v^{^{}}=v\alpha _{\mathrm{bead}}r`$, where $`r`$ is a random number with gaussian distribution and unit variance. To verify that the results do not depend on the choice of the probability distribution, we also tried letting $`r`$ be either $`\pm 1`$ (randomly), with similar results. Energy losses from interparticle collisions are included through $`\alpha _{\mathrm{bead}}`$. Bead and plate positions are calculated at fixed time intervals $`\delta t`$. The time of bead impact on the plate is calculated to within $`O(\delta t^2)`$. The bead is considered to be stuck if more than one bounce per time interval occurs. 
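A stripped-down version of such a simulation can be written in a few lines. The sketch below is not the authors' code: it uses a naive fixed-step integrator, treats a kicked stuck bead as detached whenever its velocity exceeds the plate velocity, reads the perturbation rule as the multiplicative kick $`v^{\prime }=\alpha _{\mathrm{bead}}rv`$ (one possible reading of the prescription above), and adopts the parameter values quoted below ($`f=80`$ Hz, $`\delta t=5\times 10^{-5}`$ s, $`\alpha =0.90`$):

```python
import math
import random

G = 9.81  # m/s^2 (assumed)

def mean_energy(a_over_g, f=80.0, f_p=400.0, alpha=0.90,
                dt=5e-5, t_max=20.0, seed=1):
    """Time-averaged vertical kinetic energy (per unit mass) of one
    randomly kicked bouncing bead.  Valid only for peak accelerations
    below 1 g, where a stuck bead simply rides the plate."""
    a = a_over_g * G
    w = 2.0 * math.pi * f
    amp = a / w**2                    # plate amplitude X_max from a = X_max w^2
    rng = random.Random(seed)
    stuck, x, v = True, amp, 0.0      # start at rest on the plate
    e_sum, n, t = 0.0, 0, 0.0
    while t < t_max:
        xp = amp * math.cos(w * t)    # plate position and velocity
        vp = -amp * w * math.sin(w * t)
        if stuck:
            x, v = xp, vp             # below 1 g a stuck bead never takes off
        else:
            v -= G * dt               # free flight under gravity
            x += v * dt
            if x <= xp and v < vp:    # impact: apply Eq. (2)
                x = xp
                v = (1.0 + alpha) * vp - alpha * v
                if v - vp < 1e-4:     # rebound too weak to resolve: bead sticks
                    stuck = True
        if rng.random() < f_p * dt:   # random kick standing in for a collision
            v = alpha * rng.gauss(0.0, 1.0) * v
            if stuck and v > vp:
                stuck = False         # an upward kick detaches a stuck bead
        e_sum += 0.5 * v * v
        n += 1
        t += dt
    return e_sum / n
```

Comparing `mean_energy(a_over_g)` with $`E_1=g^2/8f^2`$ over a range of kick rates reproduces the qualitative shape of the result described next, though not its exact numbers.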
The coefficients used in the simulations were $`f=80`$ Hz, $`\delta t=5\times 10^{-5}\mathrm{s}`$, and $`\alpha _{\mathrm{plate}}=\alpha _{\mathrm{bead}}=0.90`$ (a rough estimate for the effective coefficient of restitution in the presence of bead rotations during impact). After an initial transient time of $`25`$ s the average behavior of the bead is recorded for $`475`$ s. Since the calculations are carried out for one bead only, it is not determined a priori whether the perturbation rate is sustainable from interbead collisions in a many particle system. Many particles must be in an excited steady state to sustain continued collisions, since two quiescent particles cannot collide. Collisions are only able to maintain or increase the number of excited particles (a necessary condition for sustainability of the collision rate) if the steady state mean energy is comparable to $`E_1`$. Significantly lower computed mean energies would actually not be sustainable in the many particle system, because collisions would lead to a loss of particles from the excited state, and the system would settle into the ground state attractor for all particles. To check the conditions under which $`E\gtrsim E_1`$, we show in Fig. 8 the average energy of the particle as a function of $`f_p`$. As the perturbation rate is increased, the energy of the particle first increases and then decreases. For comparison, the energies of a single bead in the lowest excited state $`E_1`$ and in the quiescent state $`E_0`$ of the unperturbed problem are shown as horizontal lines. At $`a=0.66`$ g, $`E<E_1`$ for all perturbation rates; therefore the particle will be trapped in the stationary ground state. For $`a>1\mathrm{g}`$, the mean particle energy exceeds $`E_1`$ (except for $`f_p/f\gg 1`$); here the quiescent state attractor does not exist. On the other hand, for $`0.8`$ g $`<a<1`$ g, $`E>E_1`$ for a limited range of $`f_p`$; this regime will show hysteresis. 
This explains qualitatively the occurrence of hysteresis for a many particle system that is sufficiently dense to provide the required collision rate. The propagating front is basically a self-sustained chain reaction. ## V DISCUSSION AND CONCLUSIONS In this paper we have described the steady states and phase transitions of a two dimensional layer of beads subjected to vertical vibrations below $`1\mathrm{g}`$ peak acceleration. We start from a nearly static amorphous state created by temporarily stopping the vibration while all particles are moving rapidly. Free cooling of the granular layer creates a reproducible structure, with a correlation function that is not dependent on the driving frequency or acceleration over a wide range. Starting from this quiescent disordered (“amorphous”) state, we observe a striking transition to the gaseous state, via a propagating front, as the vibration amplitude is increased. A region of hysteresis exists for which the amorphous phase remains metastable but a perturbation induces a transition. The final state can consist either of gas, or of crystal and gas, depending on the acceleration and coverage. The velocity of the propagating front $`\beta `$, which remains constant during the transition, increases linearly with peak plate acceleration $`a`$ and decreases with increasing vibration frequency $`f`$. A simple model, based on a single bouncing bead with random perturbations modeling collisions, exhibits behavior that is consistent with the experimental results. An energy gap between the ground and excited states of the single bouncing bead leads to a chain reaction that can sustain the excited state of the many-particle system under the right conditions. The phenomenon is similar in some respects to a flame front. 
The coexistence of the ground state and excited state attractor can be seen most clearly at low coverage, where collisions become sufficiently rare that beads can remain in the ground state attractor long enough to become stationary. In Fig. 9 we show a time average image at $`c=0.03`$. Moving beads appear in this figure as streaks and stationary ones as bright spots. In addition, small clusters of beads at rest are clearly observable. The crystal-gas coexistence is another potential manifestation of the coexistence of the two attractors, in this case spatially separated by an interface. The simulation therefore suggests a possible origin of the crystalline phase discovered by Olafsen and Urbach . As the acceleration is lowered, the maximum energy provided by the plate decreases until more energy is lost in collisions than can be supplied by the plate. Beads then come to rest in the dense regions of the plate where the collision losses are greatest. A crystal forms as the moving beads apply pressure to the cluster at rest, forcing it into the densest possible configuration. Bouncing beads that hit the crystal tend to come to rest at the interface, increasing the size of the crystal but simultaneously decreasing the bead concentration in the remaining gaseous phase. Crystal growth stops when the density of the gaseous phase is lowered sufficiently that the (density-dependent) collision rate can be sustained in steady state. We conclude that the rich variety of steady states that are observed just below $`1`$ g peak vibrator acceleration are related to the coexistence of a ground state attractor that exists up to $`1`$ g and excited state attractors that exist below $`1`$ g. The rapidly propagating front of highly excited beads spreading in a sea of beads at rest, which can be triggered by one moving bead without a change in the acceleration amplitude, is a striking many body consequence of this coexistence. 
## VI Acknowledgments This research was supported by the National Science Foundation under grant No. DMR-9704301. We appreciate helpful discussions with I. Aronson, J.-C. Geminard and J. Urbach.
# A simple attack strategy against the BB84 protocol Guihua Zeng (Email: ghzeng@pub.xaonline.com), National Key Laboratory on ISDN of XiDian University, Xi’an, 710071, P.R. China Abstract A simplified eavesdropping strategy for the BB84 protocol in quantum cryptography is proposed. The scheme is based on ‘indirect copying’. Under this scheme, an eavesdropper can obtain exactly the information exchanged between the legitimate users without being detected. Key words: Eavesdropping strategy, indirect copying, quantum cryptography, BB84 protocol, corresponding reference list. I. Introduction Quantum cryptography, suggested originally by S. Wiesner and then by C.H. Bennett and G. Brassard , employs quantum phenomena such as the uncertainty principle and quantum correlations to protect the distribution of cryptographic keys. Key distribution is a procedure that allows two legitimate users of a communication channel to establish two exact copies, one copy for each user, of a random and secret sequence of bits. In other words, quantum cryptography is a technique that permits two parties, who share no secret information initially, to communicate over an open channel and to establish between themselves a shared secret sequence of bits. Quantum cryptography is provably secure against eavesdropping attacks, in that, as a matter of fundamental principle, the secret data cannot be compromised without the legitimate users of the channel knowing. The BB84 protocol is a protocol for key distribution over an open channel by means of quantum phenomena; it relies on the uncertainty principle of quantum mechanics to provide key security. The security guarantee derives from the fact that each bit of data is encoded at random in one of a conjugate pair of observables of a quantum-mechanical object. Because such a pair of observables is subject to the Heisenberg uncertainty principle, measuring one of the observables necessarily randomizes the other. 
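The randomization underlying this security guarantee can be illustrated with a short Born-rule calculation. The sketch below is illustrative only (real amplitudes suffice for the four BB84 linear-polarization states, so no complex Hilbert space is needed):

```python
import math

def ket(theta):
    """Linear polarization at angle theta, as real amplitudes in the
    rectilinear basis {|0>, |pi/2>}."""
    return (math.cos(theta), math.sin(theta))

def prob(state, outcome):
    """Born-rule probability of projecting `state` onto `outcome`."""
    amp = state[0] * outcome[0] + state[1] * outcome[1]
    return amp * amp

diagonal = ket(math.pi / 4.0)       # a photon from the conjugate (diagonal) basis
p0 = prob(diagonal, ket(0.0))       # measured in the rectilinear basis...
p1 = prob(diagonal, ket(math.pi / 2.0))
print(p0, p1)                       # ...each outcome is equally likely (1/2 each)
```

A rectilinear measurement of a diagonal photon thus yields a uniformly random bit, and the original diagonal information is destroyed.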
Although quantum cryptography is provably secure, several attack strategies against the proposed quantum key distribution protocols have been put forward, such as the intercept/resend scheme , the beamsplitting scheme , the entanglement scheme \[5-7\] and quantum copying . In the intercept/resend scheme, Eve intercepts selected light pulses and reads them in bases of her choosing. When this occurs, Eve fabricates and sends to Bob a pulse of the same polarization as she detected. However, owing to the uncertainty principle, at least 25% of the pulses Eve fabricates will yield the wrong result if later successfully measured by Bob. The other attack, beamsplitting, depends on the fact that transmitted light pulses are not pure single-photon states. In the entanglement scheme, the eavesdropper involves the carrier particle in an interaction with her own quantum system, referred to as a probe, so that the particle and the probe are left in an entangled state, and a subsequent measurement of the probe yields information about the particle. Some investigators are now turning their attention to collective attacks and joint attacks; for a description of these attacks see Ref. and its references. Eve can also use quantum copying to obtain the information exchanged between Alice and Bob; two kinds of quantum copying have been presented . It is appropriate to emphasize the limitation of the above attack strategies: all of them are restricted by the uncertainty principle or by quantum correlations, so Eve cannot break the quantum cryptography protocols. The risk for the eavesdropper is that she disturbs the information and is finally detected by the legitimate users. This is the reason why quantum cryptography is declared to be provably secure. Eve’s aim is to obtain as much information as possible from the open channel set up by the legitimate users, say Alice and Bob, while inducing as little disturbance as possible on the transmitted quantum bits, so that she cannot be detected by Alice and Bob. 
Usually, the uncertainty principle or quantum correlations prevent Eve from eavesdropping on the useful information without being detected. However, if Eve can escape the restriction of the uncertainty principle, her attempt will succeed. In this paper we propose a novel attack strategy for quantum cryptographic protocols. Under this strategy, the security of the BB84 quantum key distribution protocol is completely lost. The scheme works by the following procedure. Eve constructs a prescription function. This function must be one-to-one over the different quantum states used by Alice and Bob; that is, every function value corresponds to a different quantum state. From these correspondences Eve compiles a reference list pairing each function value with its quantum state. While Alice sends a random quantum sequence to Bob, Eve intercepts every state, calculates the corresponding value of the function, and then discards the intercepted state. When this is done, Eve sends a new quantum state to Bob according to the reference list, in which every value corresponds to a correct quantum state. By this method Eve can obtain exactly the information exchanged between Alice and Bob without being detected. We call this method “indirect copying”. Of course, it is different from probabilistic cloning and from inaccurate quantum copying. Obviously, “indirect copying” is not a true copy of quantum bits. II. BB84 quantum key distribution protocol To describe our attack strategy, we first review the BB84 protocol, the first key distribution protocol in quantum cryptography. The protocol proceeds as follows . 1. Alice prepares a random sequence of polarized photons and sends them to Bob. 2. Bob measures each photon using a random sequence of bases. 3. Bob records the results of his measurements. Some photons are shown as not having been received owing to imperfect detector efficiency. 4. 
Bob tells Alice which basis he used for each photon he received. 5. Alice tells him which bases were correct. 6. Alice and Bob keep only the data from these correctly measured photons, discarding all the rest. 7. This data is interpreted as a binary sequence according to the agreed coding scheme. 8. Bob and Alice test their key by publicly choosing a random subset of bit positions and verifying that this subset has the same parity in Bob’s and Alice’s versions of the key (here the parity is odd). If their keys had differed in one or more bit positions, this test would have discovered that fact with probability 1/2. 9. Alice and Bob discard one bit from the chosen subset, to compensate for the information leaked by revealing its parity; the remaining bits form the secret key. Steps 8 and 9 are repeated $`k`$ times with $`k`$ independent random subsets, to certify with probability $`1-2^{-k}`$ that Alice’s and Bob’s keys are identical, at the cost of reducing the key length by $`k`$ bits. 10. The secure key is distilled by privacy amplification . The basic principle of privacy amplification is as follows. Let Alice and Bob share a random variable $`W`$, such as a random $`n`$-bit string, while an eavesdropper Eve learns a correlated random variable $`V`$ providing at most $`t<n`$ bits of information about $`W`$, i.e., $`H\left(W|V\right)\geq n-t`$. Eve is allowed to specify an arbitrary distribution $`P_{VW}`$ (unknown to Alice and Bob) subject to the only constraint that $`R\left(W|V=v\right)\geq n-t`$ with high probability (over values $`v`$), where $`R\left(W|V=v\right)`$ denotes the second-order conditional Renyi entropy of $`W`$ given $`V=v`$. For any $`s<n-t`$, Alice and Bob can distill $`r=n-t-s`$ bits of secret key $`K=G\left(W\right)`$ while keeping Eve’s information about $`K`$ exponentially small in $`s`$ , by publicly choosing the compression function $`G`$ at random from a suitable class of maps into $`\{0,1\}^{n-t-s}`$. III. 
Construction of the Reference List To perform private communication between the legitimate users, Alice and Bob, a set of pre-defined nonorthogonal or noncommuting quantum states is often used. For brevity, we call this set of pre-defined states the basic quantum states (BQS) in the remainder of this paper. Because the BQS are publicly announced by Alice and Bob, Eve can easily obtain them. In the BB84 protocol, the BQS are the four noncommuting states $`|0>,|\frac{\pi }{2}>,|\frac{\pi }{4}>,|\frac{3\pi }{4}>`$. Of course, the rectilinearly polarized states $`|0>,|\frac{\pi }{2}>`$ and the diagonally polarized states $`|\frac{\pi }{4}>,|\frac{3\pi }{4}>`$ are orthogonal within each pair. In the BB84 quantum key distribution protocol, the quantum states $`|0>`$ and $`|\frac{\pi }{2}>`$ are measured by the so-called rectilinear measurement. Representing this rectilinear measurement by the operator $`\widehat{R}`$, we have $$\widehat{R}|0>=\lambda _1|0>,$$ $`\left(1\right)`$ $$\widehat{R}|\frac{\pi }{2}>=\lambda _2|\frac{\pi }{2}>,$$ $`\left(2\right)`$ where $`\lambda _i,i=1,2`$, are the eigenvalues. 
Because the states $`|0>`$ and $`|\frac{\pi }{2}>`$ constitute a basis of the Hilbert space, an arbitrary quantum state can be expanded in this basis, i.e., $$|\psi >=c_1|0>+c_2|\frac{\pi }{2}>.$$ $`\left(3\right)`$ By Eq. (3), it is easy to obtain $$|\frac{\pi }{4}>=\frac{\sqrt{2}}{2}|0>+\frac{\sqrt{2}}{2}|\frac{\pi }{2}>,$$ $`\left(4\right)`$ $$|\frac{3\pi }{4}>=\frac{\sqrt{2}}{2}|0>-\frac{\sqrt{2}}{2}|\frac{\pi }{2}>.$$ $`\left(5\right)`$ Consider a proper ancilla quantum state, for example, $$|\alpha >=\frac{\sqrt{3}}{2}|0>+\frac{1}{2}|\frac{\pi }{2}>.$$ $`\left(6\right)`$ Taking the inner product of the ancilla state $`|\alpha >`$ with each basic quantum state $`|j_k>`$, and defining the observable value $`m_k=|<\alpha |j_k>|^2`$, gives $$<\alpha |0>=\frac{\sqrt{3}}{2},m_1=\frac{3}{4}=0.75,$$ $`\left(7\right)`$ $$<\alpha |\frac{\pi }{2}>=\frac{1}{2},m_2=\frac{1}{4}=0.25,$$ $`\left(8\right)`$ $$<\alpha |\frac{\pi }{4}>=\frac{\sqrt{6}+\sqrt{2}}{4},m_3=\frac{\left(\sqrt{3}+1\right)^2}{8}\approx 0.933,$$ $`\left(9\right)`$ $$<\alpha |\frac{3\pi }{4}>=\frac{\sqrt{6}-\sqrt{2}}{4},m_4=\frac{\left(\sqrt{3}-1\right)^2}{8}\approx 0.067.$$ $`\left(10\right)`$ Obviously, each observable value $`m_k,k=1,2,3,4`$, corresponds to exactly one basic quantum state $`|j_k>`$. These correspondences constitute the reference list: | quantum state $`|j_k>`$ | $`m_k`$ | | --- | --- | | $`|0>`$ | 0.75 | | $`|\frac{\pi }{2}>`$ | 0.25 | | $`|\frac{\pi }{4}>`$ | 0.933 | | $`|\frac{3\pi }{4}>`$ | 0.067 | $`\left(11\right)`$ List (11) defines a one-to-one function between the sorting value and the BQS, i.e., $`S_k=f(|j_k>)`$, where $`k=1,2,3,4`$ and $`|j_k>`$ represents the four basic quantum states. Obviously, the values $`S_1,S_2,S_3,S_4`$ are pairwise distinct. When Alice contacts Bob and exchanges information, Eve intercepts the sequence of quantum bits. For each quantum bit in the intercepted sequence, she measures it and obtains the corresponding sorting value. Comparing the sorting value with the reference list, Eve resends the corresponding quantum bit to Bob. 
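The numbers in list (11) can be reproduced directly from Eqs. (4)-(10). A small sketch (real amplitudes in the {|0>, |pi/2>} basis; it reproduces the arithmetic only and says nothing about whether $`m_k`$ is accessible in a single measurement):

```python
import math

s2, s3 = math.sqrt(2.0), math.sqrt(3.0)

# Basic quantum states as amplitudes in the {|0>, |pi/2>} basis, Eqs. (4)-(5)
bqs = {
    "|0>":     (1.0, 0.0),
    "|pi/2>":  (0.0, 1.0),
    "|pi/4>":  (s2 / 2.0, s2 / 2.0),
    "|3pi/4>": (s2 / 2.0, -s2 / 2.0),
}
alpha = (s3 / 2.0, 0.5)  # the ancilla state of Eq. (6)

def m(state):
    """Sorting value m_k = |<alpha|j_k>|^2, as in Eqs. (7)-(10)."""
    overlap = alpha[0] * state[0] + alpha[1] * state[1]
    return overlap * overlap

table = {name: round(m(st), 3) for name, st in bqs.items()}
print(table)  # {'|0>': 0.75, '|pi/2>': 0.25, '|pi/4>': 0.933, '|3pi/4>': 0.067}
```

The four values are pairwise distinct, which is the only property the list-lookup scheme relies on.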
For example, if the measurement yields $`m_2=1/4`$, Eve resends the quantum state $`|\frac{\pi }{2}>`$ to Bob. Thus, Eve can obtain exactly the complete information exchanged between Alice and Bob, and escapes detection by Alice and Bob. Under the presented attack strategy, therefore, the BB84 protocol is completely insecure. IV. Attack scheme First, Eve constructs a corresponding reference list for every state of the BQS. To determine the intercepted quantum states correctly and resend the correct quantum bits to Bob, every basic quantum state $`|j_k>`$ must correspond to a different reference value (marking the function value as $`S_k,k=1,2,\mathrm{},m`$). So Eve first needs to construct a one-to-one function mapping $`|j_k>`$ to the function value $`m_k`$. Second, Eve intercepts the random sequence of quantum states sent by Alice and calculates the value of every intercepted state by a measurement operation. To distribute the quantum key, Alice randomly chooses quantum states from the basic quantum states $`|j_k>,k=1,2,\mathrm{},m`$, and sends the randomly selected sequence of quantum bits to Bob. The communication between Alice and Bob takes place over an open channel, which Eve can easily access. Eve intercepts the sequence of quantum bits sent by Alice, and measures the observable $`m_k,k=1,2,3,4`$, for every quantum bit. From the measurement values Eve calculates the corresponding sorting value for every intercepted quantum state with her machine. Third, Eve discards the intercepted quantum state. Eve’s operation is limited by the uncertainty principle: her measurement disturbs the quantum state, because she does not know beforehand which random quantum bit state was sent. If Eve resent these intercepted states to Bob, as in the intercept/resend attack strategy proposed in Ref. , she would reveal herself. To avoid this, we let Eve discard all the intercepted states. Finally, Eve resends the corresponding quantum states. 
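The classical bookkeeping of these four steps can be sketched as follows. The sketch deliberately takes the paper's central premise (that a single measurement of an intercepted photon reveals its sorting value $`m_k`$ exactly) as a given black box; everything else is table lookup:

```python
import random

# Reference list (11): sorting value -> state, and its inverse.
reference_list = {0.75: "|0>", 0.25: "|pi/2>", 0.933: "|pi/4>", 0.067: "|3pi/4>"}
sorting_value = {state: val for val, state in reference_list.items()}

def eve_intercept(state):
    """Premise taken as given: one measurement reveals m_k deterministically."""
    return sorting_value[state]

rng = random.Random(7)
alice_sequence = [rng.choice(list(sorting_value)) for _ in range(1000)]

# Eve measures each photon, discards it, and resends the state the list names.
resent = [reference_list[eve_intercept(s)] for s in alice_sequence]

print(resent == alice_sequence)  # True: under the premise, Bob sees no disturbance
```

Since `reference_list` is the exact inverse of `sorting_value`, the resent sequence reproduces Alice's sequence by construction; the entire weight of the scheme therefore rests on whether `eve_intercept` is physically realizable.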
Using the sorting values obtained in step 2, Eve chooses the corresponding quantum bit state from the reference list and resends it to Bob. The resent quantum state is exactly the same as that sent by Alice; it seems that Eve ‘copies’ Alice’s quantum state. However, it is not a real copying: this ‘copying’ is completely different from probabilistic cloning and from inaccurate copying. We call it “indirect copying”. By this method Eve can measure Alice’s signal exactly and resend an exact copy of it, thereby escaping detection. Our attack strategy puts the quantum cryptographic protocol at risk. Of course, our scheme cannot attack every protocol proposed previously. For example, it cannot attack the Ekert protocol , because no information is encoded while the particle transits from the source to the legitimate users. In fact, our scheme is only valid for protocols in which the quantum state is encoded in transit. Meanwhile, Eve must know the BQS. In addition, the interval between two adjacent resent quantum states should be kept almost the same as in Alice’s random sequence of quantum bits, so that Bob cannot detect Eve. V. Conclusion In conclusion, we have proposed an attack strategy for the BB84 key distribution protocol in quantum cryptography, which we call the ‘indirect copying attack’. Under this strategy the BB84 quantum cryptographic protocol is at risk: the eavesdropper can obtain exactly the information exchanged between the legitimate users without being detected. Of course, the presented strategy is only valid when the quantum state is encoded in transit. References 1. S. Wiesner, Conjugate coding, Sigact News, vol. 15, no. 1, 78 (1983); original manuscript written circa 1970. 2. C.H. Bennett, G. Brassard, S. Breidbart and S. Wiesner, Quantum cryptography, or unforgeable subway tokens, Advances in Cryptology: Proceedings of Crypto’82, August 1982, Plenum, New York, pp. 267-275. 3. C.H. Bennett and G. Brassard, Quantum cryptography: Public key distribution and coin tossing, Proceedings of the IEEE International Conference on Computers, Systems, and Signal Processing, Bangalore, India (IEEE, New York, 1984), pp. 175-179. 4. C.H. Bennett, F. Bessett, G. Brassard, L. Salvail, and J. Smolin, Experimental quantum cryptography, J. Cryptology 5, 3 (1992). 5. C.A. Fuchs, N. Gisin, R.B. Griffiths, C.S. Niu, and A. Peres, Optimal eavesdropping in quantum cryptography. I. Information bound and optimal strategy, Physical Review A 56, 1163 (1997). 6. R.B. Griffiths and C.S. Niu, Optimal eavesdropping in quantum cryptography. II. A quantum circuit, Physical Review A 56, 1173 (1997). 7. C.H. Bennett, T. Mor, and J.A. Smolin, Parity bit in quantum cryptography, Physical Review A 54, 2675 (1996). 8. B.A. Slutsky, R. Rao, P.C. Sun, and Y. Fainman, Security of quantum cryptography against individual attacks, Physical Review A 57, 2383 (1998). 9. M. Hillery and V. Buzek, Quantum copying: Fundamental inequalities, Physical Review A 56, 1212 (1997). 10. V. Buzek and M. Hillery, Quantum copying: Beyond the no-cloning theorem, Physical Review A 54, 1844 (1996). 11. L.M. Duan and G.C. Guo, Probabilistic cloning and identification of linearly independent quantum states, Physical Review Letters 80, 4999 (1998). 12. C.H. Bennett, Quantum cryptography using any two nonorthogonal states, Physical Review Letters 68, 3121 (1992). 13. C.H. Bennett, G. Brassard, C. Crepeau and U.M. Maurer, Generalized privacy amplification, IEEE Trans. Inform. Theory 41, 1915 (1995). 14. A.K. Ekert, Quantum cryptography based on Bell’s theorem, Physical Review Letters 67, 661 (1991).
# Kinematics of the helium accretor GP Com ## 1 Introduction GP Com ($`=`$ G61–29) is a high proper motion star \[Giclas, Burnham & Thomas 1961\] with an optical spectrum dominated by broad HeI emission lines \[Burbidge & Strittmatter 1971\]. Warner \[Warner 1972\] and Smak \[Smak 1975\] found rapid photometric variability, similar to the flickering observed in cataclysmic variable stars, and suggested that, like them, it is a binary star, although they found no convincing periodicity. Nather, Robinson & Stover (1981, hereafter NRS) found a narrow emission-line component which varied in radial velocity on a period of $`46.52\mathrm{min}`$, providing strong support for the binary nature of GP Com. The variable component moved between the two outer peaks of the triple-peaked emission lines, similar to phenomena observed in hydrogen-dominated cataclysmic variable stars (e.g. U Gem). Thus, by analogy, NRS identified the moving component with the region where the gas stream hits the accretion disc. The short period and absence of hydrogen qualify GP Com as one of the AM CVn group of accreting double-degenerate systems; it is in fact the longest period of these systems and stands out from the crowd as the only one with a spectrum strongly in emission and as the only one to show an ‘S’-wave. The most prominent element other than helium is nitrogen, in the form of NV in IUE \[Lambert & Slovak 1981\] and HST \[Marsh et al. 1995\] spectra and NI in spectra covering the R- and I-bands \[Marsh, Horne & Rosen 1991\]. The absence of hydrogen and the strong helium and nitrogen emission are consistent with our seeing material from the core of the star that has undergone hydrogen burning and CNO-cycle processing of most of the carbon and oxygen into nitrogen. This is as expected given the very short period of GP Com. The emission lines from GP Com show the characteristic double-peaked, broad profiles of emission from an accretion disc. 
However, unlike hydrogen-dominated cataclysmic variables with such profiles, GP Com also sports a narrow component at the centre of each emission line which we will refer to as the “central spike”. The origin of this component is unclear. NRS could find no radial velocity variability of the central spike and suggested that it might come from a nebula surrounding GP Com, by analogy with old novae. However a subsequent search failed to find any nebula \[Stover 1983\]. On the other hand the mass donor star in GP Com is probably of such a low mass that the spike could originate on the mass accretor and still show little radial velocity variability. We look at this component again in this paper. Another remarkable feature of GP Com was seen in HST spectra \[Marsh et al. 1995\] in which flaring was seen in the emission lines, representing a factor of 5 change in flux from minimum to maximum. This is probably driven by X-ray variability \[van Teeseling & Verbunt 1994\] which might also influence the optical lines. We look for this in the time-resolved data we present here. We begin with a description of the observations and an analysis of orbital variability in our data. ## 2 The Observations and their Reduction We used the $`2.5`$m Isaac Newton Telescope on the island of La Palma to take 414 spectra on three nights from the 21st to the 23rd April 1988. The spectra were taken with the Intermediate Dispersion Spectrograph (IDS) with an Image Photon Counting System (IPCS) detector, and cover the range $`4200`$ to $`5200`$Å in 2040 pixels with a full width half maximum (FWHM) resolution of 2 pixels. Apart from the first six exposures of 120 seconds each, all exposures were 100 seconds long, taken through a $`1.1`$ by 50 arcsecond slit aligned to a position angle of $`4.7^{\circ }`$ to capture simultaneous spectra of the variable GP Com and a nearby comparison star. Arc spectra were taken every half hour or so to track flexure in the spectrograph. 
Spectra of the comparison star and the flux standards BD$`+25\mathrm{°}3941`$ and Feige 34 were taken to provide flux calibration. The sky background was estimated by interpolation from uncontaminated sky regions on each side of the object spectra. After subtraction of the background, the spectra were extracted by summation perpendicular to the dispersion. Flat field corrections were applied at the same time. We fitted 13th order polynomials to the average arc spectrum for each night. The two lowest order terms were then refitted for the individual arc spectra. The root mean square (RMS) of the fits was typically $`0.07`$Å, equivalent to $`5\text{km}\text{s}^{-1}`$ or $`1/7`$th of a pixel at our dispersion. The wavelength scales for GP Com were then interpolated between neighbouring arc spectra. This procedure eliminates the effects of flexure which amounted to $`0.5`$Å during the night. Finally the flux standard spectra were used to correct for the instrumental sensitivity and the comparison star spectra were used to correct for slit losses. ## 3 Results ### 3.1 The Average Spectrum The average of our spectra has already been discussed in Marsh et al. \[Marsh, Horne & Rosen 1991\]. For convenient reference, we show it once more in Fig. 1. The spectrum confirms the earlier investigations of Burbidge & Strittmatter \[Burbidge & Strittmatter 1971\] and NRS. The only strong lines detected are those of HeI and HeII, and these are broad and triple-peaked, a structure seen most clearly in the HeI 5015 line. Smak \[Smak 1975\] and NRS suggest that the lines are made up of a broad, double-peaked profile from an accretion disc plus a separate narrow component near the centre of the line. The variation in the relative strength of the central “spike” from line to line supports this hypothesis. Double-peaked profiles are well known from the hydrogen-dominated cataclysmic variables \[Honeycutt, Kaitchuck & Schlegel 1987\], but the central spike is unique to GP Com. 
### 3.2 The Orbital Period

The first reliable detection of a periodicity in GP Com was made by NRS, who discovered a narrow emission-line component (in addition to the central spike) which varied in velocity with a period of $`46.52\pm 0.02\mathrm{min}`$. The velocity of the component varied from $`-670`$ to $`+670\text{km}\text{s}^{-1}`$, taking it between the two outer peaks of the lines. Such ‘S’-waves are well known from other CVs, and are associated with the region where the gas stream from the secondary star hits the disc. If this is the case in GP Com, then $`46.52\mathrm{min}`$ is also the orbital period of the binary. The ‘S’-wave is also present in our data. In the upper panel of Fig. 2 we show the phase-binned trailed spectrum of GP Com, and in the lower panel we show the result of subtracting the average spectrum from this. Fifty phase bins were equally spaced around the cycle and they are repeated twice for clarity. The ‘S’-wave is clear in all the lines, while there is little other variability. Fig. 2 also shows the lack of radial velocity variation in the central spike. Thus it is the ‘S’-wave that gives the best way of measuring the orbital period in GP Com. In order to measure the period quantitatively, we made a multi-gaussian fit to all the spectra and all lines at once, representing the ‘S’-wave as a single gaussian with the same sinusoidal radial velocity curve for all the lines. The period was one of the free parameters of the fit, and is determined by the need to keep the ‘S’-wave in step over the three nights of our run. We obtained a period of $`0.0323386\pm 0.000002`$ days or $`46.567\pm 0.003\mathrm{min}`$. This differs from NRS’s value by $`2.3\sigma `$, but given uncertainties caused by the round-off implicit in NRS’s value, we doubt this is significant. There is no clear indicator of the conjunction phase for GP Com, and thus we chose an arbitrary zero point close to the middle of our observations to compute phases.
Thus we use the following ephemeris for all phases in this paper: $$\mathrm{HJD}=2447274.7+0.0323386E.$$ This is the ephemeris used in constructing Fig. 2. We will discuss a possible constraint on the conjunction phase in section 4.1.

#### 3.2.1 Superhumps?

The sub-class of cataclysmic variables called the SU UMa stars are known for quasi-periodic flaring behaviour during outburst which occurs on a period somewhat longer than the orbital period. These flares, usually known as “superhumps”, are thought to be the result of a distortion of the outer disc induced by the 3:1 resonance \[Whitehurst 1988\] between the disc and binary orbit. The distortion only occurs if the 3:1 resonance radius is inside the disc, and this requires the mass ratio $`q`$ ($`=M_2/M_1`$) to be less than about $`0.25`$. This neatly explains why the SU UMa stars are all at the short-period end of the cataclysmic variable period distribution, because then the secondary star is of relatively low mass. The restriction on mass ratio is easily satisfied by the even shorter period GP Com, and so it is of interest to see whether it shows any superhump-like behaviour. Since our spectra are slit-corrected we can attempt this. We computed light curves over two regions avoiding the emission lines. The first of these covered 4250 to 4350 and 4510 to 4640 Å while the second covered 4750 to 4880 and 5080 to 5190 Å. Periodograms \[Scargle 1982\] of these light curves are shown in Fig. 3. These do indeed show some evidence for a signal on a slightly longer period than the spectroscopic one; the difference visible in the right-hand panels of Fig. 3 corresponds to a difference of about $`0.4`$ cycles over the 2 day baseline of our observations, which is easily measurable. Thus our data are at least consistent with the possibility of superhump-like behaviour in GP Com.
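A period search of this kind can be sketched with a plain Fourier power spectrum of an unevenly sampled light curve, used here as a simple stand-in for the Scargle (1982) periodogram; the light curve below is synthetic, not our data:

```python
import numpy as np

def periodogram(t, y, periods):
    """Fourier power of unevenly sampled data y(t) at trial periods --
    a simple stand-in for a full Scargle (1982) periodogram."""
    y = y - y.mean()
    omega = 2.0 * np.pi / periods[:, None]
    c = np.cos(omega * t[None, :]) @ y
    s = np.sin(omega * t[None, :]) @ y
    return (c ** 2 + s ** 2) / t.size

# Synthetic light curve: 400 points over a 2-day baseline, known period.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 2.0, 400))          # days
p_true = 0.0323386                                # days
flux = 1.0 + 0.1 * np.sin(2.0 * np.pi * t / p_true) + rng.normal(0.0, 0.05, t.size)

periods = np.linspace(0.025, 0.045, 4000)
p_best = periods[np.argmax(periodogram(t, flux, periods))]
```

With a 2-day baseline the peak width is of order the period squared over the baseline, which is why a 0.4-cycle drift over the run is easily measurable.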
However, given that the signal in the continuum near the spectroscopic period is only one of several peaks of similar power, the evidence can only be regarded as suggestive. It should be noted that superhumps in SU UMa stars are only seen during some outbursts, and so it would not be surprising if GP Com failed to show them.

### 3.3 Doppler Images

The emission lines in GP Com show the broad, Doppler-shifted profiles produced by accretion discs. Doppler tomography provides a powerful method of interpreting such profiles. We imagine that the profiles are built from the sum of many ‘S’-waves which have radial velocity $$V(\varphi )=\gamma -V_X\mathrm{cos}2\pi \varphi +V_Y\mathrm{sin}2\pi \varphi ,$$ at orbital phase $`\varphi `$. The strength of each ‘S’-wave is plotted with coordinates $`V_X`$, $`V_Y`$ to build up an image. The coordinates are chosen for consistency with a coordinate system in which the $`X`$ axis points from the primary to the secondary star and the $`Y`$ axis points in the direction of the secondary star’s motion. A detailed description of the computation of the images is given in Marsh & Horne \[Marsh & Horne 1988\]. Images of the seven lines are presented in Fig. 4. These images were computed in three sets, the first covering HeI 4387 and 4471, the second HeII 4686 and HeI 4713 and the third HeI 4921, 5015 and 5047. In this way blending between lines could be accounted for. The fits to the lines were very good, with reduced $`\chi ^2`$ of order $`0.8`$. All lines show a similar structure with a central spot, matching the central spikes of Figs. 1 and 2, an offset spot equivalent to the ‘S’-wave and a smooth ring-shaped background for the rest of the disc. We have scaled the lines so that the ring from the disc is seen at about the same level in each case, which brings out the marked variation in the relative strengths of the central spike and bright-spot components.
The outer ring visible in HeII 4686 is a consequence of its blending with HeI 4713 (which has a similar but less obvious artefact). It is the inner ring that corresponds to the edge of the HeII 4686 emission region, and it can be seen to be markedly larger than its equivalents in the HeI lines. This indicates that the HeII 4686 emission does not cover all of the disc, a point we will return to in section 3.6 when we discuss flaring behaviour in GP Com.

### 3.4 The ‘S’-wave

The spot equivalent to the ‘S’-wave appears to have some structure in several of the images of Fig. 4, most obviously in HeI 4387. On examining the data more closely (Fig. 2), it turns out that the ‘S’-wave is significantly non-sinusoidal, a phenomenon also seen in the strong ‘S’-wave of WZ Sge (this is best seen by looking at Fig. 2 sideways). In order to assess the departure from a sinusoid, we returned to the multi-gaussian fits, forcing the velocity of the S-wave to be the same in all lines. We first fitted the ‘S’-wave velocity as before when determining the period but now, rather than just a simple sinusoid, we fitted a Fourier series of the form: $$V(\varphi )=\sum _{n=1}^{3}A_n\mathrm{cos}2\pi n\varphi +\sum _{n=1}^{3}B_n\mathrm{sin}2\pi n\varphi .$$ With this as a starting point, we then measured velocities individually from 50 phase-folded spectra (the signal-to-noise being too low to attempt this on the raw data). The results and Fourier fit are plotted in Fig. 5. The parameters of the Fourier fit are listed in Table 1. Both figure and table exhibit the substantial departure from a pure sinusoid. Non-sinusoidal velocity changes of this manner violate one of the basic assumptions of Doppler tomography, but are easily produced by phase-dependent visibility in the emission regions. For example, at one phase we may see emission from the stream, whereas at another emission from the disc near the stream may be easier to see.
In general these will have different velocities and thus we see a non-sinusoidal behaviour. Such visibility variations imply that vertical structure in the disc or stream plays a significant role in GP Com, suggesting that it is of high orbital inclination. The Doppler maps cannot reflect this complexity and instead we see an averaged view of the system with emission spread over all contributing sites. This explanation is nicely supported by the phase-dependent flux variation in the ‘S’-waves of Fig. 2. Similar behaviour can be seen in the ‘S’-wave of the hydrogen-dominated system, WZ Sge \[Spruit & Rutten 1998\]. To illustrate the difference between the stream and disc velocities we have plotted the ‘S’-waves predicted for stream only (dash-dotted line) and disc only (dotted) in Fig. 5. The predicted velocities are based upon a mass ratio of $`q=0.02`$ and a disc of radius $`0.6\mathrm{R}_{\mathrm{L1}}`$. Ideally, the observed velocities should be bracketed by these two paths. This is near enough the case for the above explanation to be credible. The phasing and scaling used in producing the prediction is discussed in the next section. Measurements of the amplitudes for the individual lines are listed in Table 2. In this case a simple sinusoid has been fitted to each line. We will consider these further in the next section.

### 3.5 The Central Spike

The absence of obvious radial velocity variation in the central spike led to suggestions that it arose in a nebula, which however has not been found \[Stover 1983\]. However, although the absence of variations rules out the mass donor, the disc and the bright-spot, as we pointed out earlier, the accreting white dwarf remains a possibility because the extreme mass ratio expected of GP Com (see section 4.2) means that its semi-amplitude is only of order $`10\text{km}\text{s}^{-1}`$. Therefore a measurement of the radial velocity amplitude of the central spike provides an important constraint upon its origin.
Actually measuring the amplitude is difficult. In principle it can be measured by determining the position of the central spot in the Doppler images. However, this is adversely affected by the systemic velocity shifts visible in the mean spectra (which tend to blur the central spot), and it is also difficult to estimate uncertainties with this method. On the other hand one can’t simply measure velocities directly without accounting for the presence of the ‘S’-wave. Therefore we again applied the multi-gaussian fitting method. With the spike and ‘S’-wave modelled as gaussians, and with velocities fitted by the usual $`\gamma -V_X\mathrm{cos}2\pi \varphi +V_Y\mathrm{sin}2\pi \varphi `$ function, the fitted parameters are listed in Table 3. The last entry denoted by “All” was derived from a simultaneous fit to all lines in which they were forced to have the same $`V_X`$ and $`V_Y`$. Individually the signal-to-noise in the lines is too marginal to be confident of a detection, but the simultaneous fit to all of them, which gives $`V_X=10.3\pm 1.6\text{km}\text{s}^{-1}`$, $`V_Y=3.9\pm 1.6\text{km}\text{s}^{-1}`$, is statistically significant. However, it should be remembered that this measurement was made against a background of disc and ‘S’-wave and that the semi-amplitude is only of order 1/10th of the FWHM of the spike. Nevertheless the detection supports the accretor origin for the central spike. After making a small correction for noise-induced bias, our measurement is equivalent to an amplitude of $`10.8\pm 1.6\text{km}\text{s}^{-1}`$. This can be compared to NRS’s measurement of $`14.3\pm 3.7\text{km}\text{s}^{-1}`$ based upon the emission line wings. The values are consistent, although unfortunately NRS do not specify the phasing of their measurement. We have not tried to measure the amplitude from the line wings as they are so broad that the measurement of such small amplitudes cannot be relied upon, as NRS warn.
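The dynamical consistency of this small semi-amplitude can be checked against the spectroscopic mass function; a sketch, taking the measured period and semi-amplitude together with an illustrative extreme mass ratio q = 0.02 (of the order expected for GP Com):

```python
import math

G = 6.674e-11       # gravitational constant [m^3 kg^-1 s^-2]
MSUN = 1.989e30     # solar mass [kg]

P = 46.567 * 60.0   # orbital period [s]
K1 = 10.8e3         # semi-amplitude of the accretor [m/s]
q = 0.02            # illustrative extreme mass ratio M2/M1

# Mass function: f(M) = P K1^3 / (2 pi G) = (M2 sin i)^3 / (M1 + M2)^2,
# which rearranges to M1 sin^3 i = f(M) (1 + q)^2 / q^3.
f = P * K1 ** 3 / (2.0 * math.pi * G)
m1_sin3i = f * (1.0 + q) ** 2 / q ** 3 / MSUN   # ~0.55 solar masses
```

Even a semi-amplitude of only ~10 km/s thus corresponds to a sensible white dwarf mass once the tiny mass ratio is taken into account.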
If the spike truly does reflect the motion of the accretor, and the ‘S’-wave originates from the gas stream/disc impact region, then their relative phases and amplitudes depend upon the mass ratio. The connection is illustrated in Fig. 6, which shows that the S-wave parameters listed in Table 2 (plotted as the cluster of points in the upper-right quadrant of each panel) are consistent with the measured position of the spike (which lies on the straight, dashed lines) for a mass ratio $`q=0.017`$ if the ‘S’-wave represents the velocity of the gas stream, or $`q=0.023`$ if it represents the velocity of the disc along the path of the stream. In the former case the disc would extend to 75% of the way to the inner Lagrangian point, whereas in the latter it would only need reach $`50`$% of the way. Both mass ratios and disc radii are reasonable for GP Com, and provide further circumstantial evidence that the spike may indeed be from the accretor. If this interpretation is correct, then inferior conjunction of the secondary star (the expected phase of eclipse of the disc, if there was one) occurs at orbital phase $`0.19\pm 0.02`$ on our ephemeris. The same method was used to produce the predicted stream and disc ‘S’-waves plotted in Fig. 5.

### 3.6 Stochastic variability

So far we have mainly considered variations with orbital phase. However, strong flaring behaviour has been observed in HST data which does not appear to be related to orbital phase \[Marsh et al. 1995\]. In the HST data the flux of NV 1240 was seen to increase by a factor of up to five during three flares which occurred in the 13 hour observing interval, while the continuum only increased by $`40`$%. It is likely that this variation is driven by X-ray variability through photo-ionisation, and indeed, variations have been seen in ROSAT data on GP Com \[van Teeseling & Verbunt 1994\]. If so, we can expect similar effects at optical wavelengths. That this is so is demonstrated in Fig.
7, which shows the light curves of the continuum and emission lines of GP Com during our run. There is significant, correlated variability in all components, and it has a larger amplitude in the lines than the continuum. We can derive a spectrum for the flaring component using the method employed by Marsh et al. \[Marsh et al. 1995\]. In this the spectra are modelled as the sum of a spectrum representing the mean plus multiples of another spectrum representing the flaring component. Both spectra and the multipliers (which are different for each of the 414 spectra) are optimised by minimising $`\chi ^2`$. The constant and flare components derived from our data are shown in the top two panels of Fig. 8. It has to be remembered that our spectra were taken through a narrow slit, and although they were corrected for slit losses, it is likely that some part of the “flaring” could be artificially induced. This would have the effect of making the flare spectrum look like the mean spectrum. Therefore as a further check we repeated the computation of the flare spectrum after subtracting the continua of the spectra. This allows the variations in the lines to determine the flare spectrum rather than the lines and continuum, and should weaken the influence of poor slit corrections, although not eliminate them entirely. The result of this is shown in the lower panel of Fig. 8 and probably gives the most accurate representation of the line profiles. Expanded views of four of the line profiles are shown in Fig. 9. Looking first at the upper two panels confirms the behaviour seen in the light curves of Fig. 7. The emission lines are relatively stronger than the continuum in the flare compared to the mean. This is exactly as found in the HST data \[Marsh et al. 1995\], although not to such a marked extent (the possible contamination due to poor slit loss correction should be remembered however).
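The fitting procedure just described (each spectrum modelled as a mean spectrum plus a per-spectrum multiple of a flare spectrum, optimised in a chi-squared sense) can be sketched with alternating linear least squares on synthetic data; equal weights and the specific update scheme are simplifying assumptions here, not the authors' actual implementation:

```python
import numpy as np

def flare_decompose(spectra, n_iter=50):
    """Fit spectra[i] ~ const + mult[i] * flare by alternating linear
    least squares (a sketch of the chi-squared optimisation, with
    equal weights assumed)."""
    nspec, npix = spectra.shape
    mult = spectra.mean(axis=1)
    mult = (mult - mult.min()) / np.ptp(mult)       # initial multipliers in [0, 1]
    for _ in range(n_iter):
        # Fixed multipliers: solve [1, mult] @ [const; flare] = spectra per pixel.
        A = np.column_stack([np.ones(nspec), mult])
        comps, *_ = np.linalg.lstsq(A, spectra, rcond=None)
        const, flare = comps
        # Fixed component spectra: each multiplier has an analytic update.
        mult = (spectra - const) @ flare / (flare @ flare)
    return const, flare, mult

# Synthetic demonstration: flat mean spectrum plus a flaring emission line.
rng = np.random.default_rng(0)
pix = np.arange(100)
flare_true = 5.0 * np.exp(-0.5 * ((pix - 50.0) / 4.0) ** 2)
m_true = rng.uniform(0.0, 1.0, 60)
data = 10.0 + np.outer(m_true, flare_true) + rng.normal(0.0, 0.05, (60, 100))

const_fit, flare_fit, m_fit = flare_decompose(data)
```

The decomposition is only defined up to an affine rescaling of the multipliers, so the recovered components are best compared to the truth by correlation rather than by equality.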
The line ratios in the two spectra are very similar with the exception of HeII 4686, which is stronger in the flare component; with hindsight the HeII 4686/HeI 4713 light curve can be seen to be the most modulated in Fig. 7. This supports photo-ionisation as the driver of the flares. Finally there are substantial differences in the line profiles (Fig. 9). The profiles in the flare spectrum have broader wings than they do in the mean spectrum, indicative of a larger contribution from the inner disc, and they also have a weaker central spike component compared to the mean. However, it is important to note that the flare spectrum does appear to have some central spike component, although the caveat about slit loss contamination is worth repeating. The weakening of the central spike leads to profiles with more pronounced double-peaks. Most remarkable of all is the HeII 4686 profile, which in the flare spectrum has a blue-shifted peak at around $`-1400\mathrm{km}\mathrm{s}^{-1}`$ (the blending with HeI 4713 rendering it impossible to be sure of the red-shifted peak). If interpreted as a standard profile from an accretion disc, this suggests that the HeII 4686 emission region only extends out to about 1/4 of the radius of the disc (assuming that the HeI lines come from all of it). This peak is consistent with the radius of the inner ring of the HeII 4686 map of Fig. 4.

## 4 Discussion

### 4.1 Origin of the Central Spike

The persistent narrow emission at the centre of all the lines is a puzzling feature of GP Com. The nearest equivalent we know of is the zero velocity emission observed in IP Peg and SS Cyg \[Steeghs et al. 1996\], but this was only seen during outburst. The parameters in Table 3 provide several constraints upon its origin. The amplitude of any radial velocity variations is low. The spike is seen best in HeI 4713, HeI 5015 and HeII 4686, and these lines indicate a narrow FWHM of $`120\text{km}\text{s}^{-1}`$.
We take these lines as representative since the ambiguity about whether flux is from the spike or from the disc is likely to cause much larger systematic errors for the other lines. Taken together, the small width and the radial velocity amplitude allow only a few possible sites of origin, which are (a) the accreting white dwarf, (b) a nebula or (c) a wind or jet. Any other part of the binary moves too fast (e.g. the donor star) or would produce too broad a profile (e.g. the disc). All the above sites have their own problems however. A wind origin faces difficulties since the winds observed in normal cataclysmic variables are too highly ionised to produce HeI emission and have large velocities characteristic of the escape velocity of the white dwarf \[Drew 1987\]. It is hard to see how expansion velocities of order $`60\text{km}\text{s}^{-1}`$ arise in such circumstances. Moreover a wind origin has no good explanation for the variations in the systemic velocities, most obvious in HeI 5015. The main point in favour of the accreting star is our detection of radial velocity variation, albeit of low amplitude. As we showed in section 3.5, the measured amplitude and phase of the spike are in accord with the ‘S’-wave amplitude and phase. However, once more the systemic velocities are hard to understand, although the primary star has the possibilities of gravitational, pressure \[Koester 1987\] and Zeeman shifts. It is encouraging that the most discrepant line, HeI 5015, is the only singlet line in our wavelength range, but there does not seem to be enough known about these lines for us to say anything more. A nebula origin copes best with the systemic velocities. HeI 5015 is the line most sensitive to optical depth effects. Under case A conditions the HeI 5015/4922 ratio should be of order $`0.1`$ whereas under case B it increases to about $`2.5`$ \[Brocklehurst 1972\].
Thus a correlation between the velocity of the emitting material and its density could be the reason behind the HeI 5015 anomaly. The nebula, if it exists, would have to be asymmetric, which is no great obstacle. There are however other problems with a nebula origin. First, it is not resolved. A nebula expanding at $`60\text{km}\text{s}^{-1}`$ would reach a size of 1 arcsecond in only 10 years if GP Com lies at the upper limit of $`100\text{pc}`$ deduced by Marsh et al. \[Marsh, Horne & Rosen 1991\]. We find no evidence for any extended emission in our long slit spectra and estimate that an extension of order 1 arcsecond should have been detected; Stover \[Stover 1983\] found that what appeared to be extended emission around GP Com is in fact a group of stars. Perhaps the most significant problems are the radial velocity and flux variability that we have detected. The nebula explanation can only survive these if they are dismissed as systematic artefacts. Given the small amplitude of the radial velocity variations, it is hard to rule out this possibility. However, while we cannot be sure that poor slit loss correction has not artificially produced the spike component in the flare spectra, our feeling is that it is too strong for this to be the case. Luckily, it should be possible to improve upon this with better data in the future.

### 4.2 Parameters of GP Com

The evolution that leads to a system such as GP Com has been discussed in several papers \[Savonije, de Kool & van den Heuvel 1982\]. At any particular orbital period there are two possible configurations. In one case the donor may be a helium star of relatively high mass and luminosity. In this case the orbital period decreases with time. Alternatively, at a later stage after the system has passed its minimum period (of order $`10\mathrm{min}`$), the donor adopts a semi-degenerate structure of very low mass and luminosity.
In this state the donor increases in radius as it loses mass, and the orbital period increases as well. It is unable to reach a truly degenerate structure because its thermal timescale becomes so long. The latter case is almost certainly the one that applies to GP Com since there is no sign of the donor star. Our favoured mass ratio of $`q=M_2/M_1\approx 0.02`$ also supports this scenario. Based upon Savonije et al.’s work, Warner \[Warner 1995\] deduces that $$M_2=0.0186P^{-1.274}\mathrm{M}_{\odot },$$ where the orbital period $`P`$ is measured in hours. This gives a mass of order $`0.026\mathrm{M}_{\odot }`$ for the donor star. Given an accretor mass of order $`1\mathrm{M}_{\odot }`$, this is acceptably close to the mass ratio we deduce, and confirms the status of GP Com. Using the relation given by Warner for a fully-degenerate secondary (his equation 9.43) leads to $`M_2=0.009\mathrm{M}_{\odot }`$, still consistent with our mass ratio if the accretor is more like $`0.5\mathrm{M}_{\odot }`$. For $`q=M_2/M_1=0.02`$, $`K_1=10.8\text{km}\text{s}^{-1}`$ and $`P=46.567\mathrm{min}`$, we calculate $`M_1\mathrm{sin}^3i=0.55\mathrm{M}_{\odot }`$, a reasonable value. Uncertainties in $`q`$ and $`K_1`$ are too large to deduce any useful value of the inclination however, and we just note again that the non-sinusoidal ‘S’-wave behaviour suggests that it might be high, although there is no sign of an eclipse.

### 4.3 Flaring in GP Com

The flaring we have found at optical wavelengths, and seen more dramatically still at UV wavelengths, is another unusual feature of GP Com. It strongly suggests that the inner disc is variable in a way not seen in the systems that look most like GP Com in terms of their optical spectra, the quiescent dwarf novae (leaving aside the absence of hydrogen!). GP Com has never been seen to show outbursts, and there has been discussion as to whether this is because it has a very long inter-outburst interval or because it is in a steady state of very low mass transfer rate \[Warner 1995\].
The former possibility is quite reasonable given the decades-long intervals between the outbursts of the short-period dwarf novae known as the WZ Sge stars. On the other hand, it has long been realised that, if the thermal instability model of dwarf nova outbursts is correct, a disc composed largely of helium may behave very differently from one dominated by hydrogen \[Smak 1983, Tsugawa & Osaki 1997\]. The turning points on the thermal equilibrium curve plotted as $`\mathrm{log}T`$ versus $`\mathrm{log}\mathrm{\Sigma }`$ (the “S-curve”) are at a higher temperature and surface density for helium compared to hydrogen \[Tsugawa & Osaki 1997\]. As a result, it is more likely that the entire disc can be on the lower branch of the S-curve, and thus the system will not undergo outbursts. We suggest that the high level of X-ray to optical flux in GP Com \[Verbunt et al. 1997\] and the flaring behaviour support the latter possibility. That is, matter accretes at all radii in the disc rather than accumulating in the outer disc as it is believed to do in quiescent dwarf novae. In this state the disc can be optically thin \[Tsugawa & Osaki 1997\], which explains the strong emission lines displayed by GP Com, in contrast to the rest of the AM CVn group. Of course, some other instability must be occurring in the inner disc to explain the flaring; its cause remains to be determined.

## 5 Conclusions

We have analysed time-resolved spectrophotometry of the double-degenerate binary GP Com. We confirm the presence of the ‘S’-wave feature found by NRS and refine its period to $`46.567\pm 0.003\mathrm{min}`$. We have detected a small radial velocity variation with a semi-amplitude of $`10.8\pm 1.6\text{km}\text{s}^{-1}`$ in the sharp component at the centre of the emission lines, which may indicate that it comes from the accreting primary star.
The amplitude and phase are consistent with such an origin together with the measured parameters of the ‘S’-wave if the mass ratio $`q=M_2/M_1`$ is of order $`0.02`$. This is roughly as expected if GP Com is now increasing in period and has a near-degenerate donor star. However, the systemic velocity of the narrow component, which varies from line to line, remains to be explained. While the orbital variations that we find are consistent with earlier data, we have also discovered erratic flaring which we believe to be analogous to similar but more obvious behaviour seen at UV wavelengths. We suggest that this supports models in which GP Com’s disc is on the lower branch of the thermal instability curve and does not undergo the global outbursts followed by intervals of mass accumulation as occur in dwarf novae.

## Acknowledgements

TRM was supported by a PPARC Advanced Fellowship during the course of part of this work. The data reduction and analysis were carried out on the Southampton node of the UK STARLINK computer network.
no-problem/9812/astro-ph9812117.html
ar5iv
text
# The modified dynamics (MOND) predicts an absolute maximum to the acceleration produced by ‘dark halos’

## 1 Introduction

The acceleration constant of the modified dynamics (MOND), $`a_0`$, appears in various predicted regularities pertinent to galaxies. For example, it features as an upper cutoff to the mean surface density (or mean surface brightness, translated with $`M/L`$) of galaxies, as observed and formulated in the Freeman law for disks and the Fish law for ellipticals. We have now come across another such role of $`a_0`$ that had escaped our notice until recently: in spherical configurations, and in those relevant to rotation-curve analysis of disk galaxies, the excess, $`\mathrm{g}_h\equiv \mathrm{g}-\mathrm{g}_N`$, of the MOND acceleration, $`\mathrm{g}`$, over the Newtonian value for the same mass, $`\mathrm{g}_N`$, is universally bounded from above by a value $`\mathrm{g}_{max}=\eta a_0`$, where $`\eta `$ is of order 1. Thus, if we attribute what are the effects of MOND to the presence of a fictitious dark halo, $`\mathrm{g}_{max}`$ is a universal upper bound to the acceleration produced by the ‘halo’, in all systems, and at all radii. If the ‘halo’ is assumed quasi-spherical, this can be put as a statement on the accumulated (three dimensional) surface density of the ‘halo’, which must obey the universal bound $`M_h(r)/r^2\le \eta a_0G^{-1}`$. Inasmuch as MOND is successful in explaining the rotation curves of disk galaxies with reasonable stellar $`M/L`$ values (Sanders (1996); Sanders and Verheijen (1998); de Blok and McGaugh (1998)), we can deduce that, indeed, ‘halo’ accelerations are bounded by $`\mathrm{g}_{max}`$. This is an important observation regardless of whether MOND entails new physics, or is just an economical way of describing dark halos.
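The surface-density bound can be put into conventional units; a sketch, where the numerical value of a0 (about 1.2e-10 m/s^2) is an assumed standard value not quoted in the text, and eta = 0.3 is used purely for illustration:

```python
A0 = 1.2e-10        # MOND acceleration constant [m/s^2] -- assumed value
G = 6.674e-11       # gravitational constant [m^3 kg^-1 s^-2]
MSUN = 1.989e30     # solar mass [kg]
PC = 3.0857e16      # parsec [m]

# Upper bound M_h(r)/r^2 <= eta * a0 / G, converted to Msun per pc^2.
sigma_eta1 = (A0 / G) * PC ** 2 / MSUN      # eta = 1 case
sigma_eta03 = 0.3 * sigma_eta1              # illustrative eta = 0.3
```

With these assumed numbers the eta = 1 bound is of order a few hundred solar masses per square parsec, so the limit is a concrete, testable target for simulated halos.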
Newtonian, disk-plus-dark-halo decompositions and rotation-curve fits are rather more flexible because they involve two added parameters for the halo, allowing one to maximize the contribution of the halo, minimizing that of the disk. But reasonable fits do give a maximum halo acceleration. For example, Sanders (private communication) finds in the dark-halo best fits of Begeman, Broeils and Sanders (1991) a maximum acceleration of $`0.4a_0`$ for all the galaxies with reasonable fits. We derive this upper bound and explain the assumptions that go into the derivation in section 2. Then, in section 3, we compare this new limit with previous MOND limits on the acceleration in galactic systems.

## 2 derivation of the upper bound

The absolute upper bound on $`\mathrm{g}_h`$ follows simply from the basic MOND relation between the acceleration $`\mathrm{g}`$ and the Newtonian acceleration $`\mathrm{g}_N`$: $$\mu (\mathrm{g}/a_0)\mathrm{g}=\mathrm{g}_N,$$ (1) $`\mu (x)`$ being the interpolating function of MOND. The validity of this relation constitutes part of the underlying assumptions (see below). The excess acceleration $`\mathrm{g}_h=\mathrm{g}-\mathrm{g}_N`$ can be written as a function of $`\mathrm{g}`$: $$\mathrm{g}_h=\mathrm{g}-\mathrm{g}\mu (\mathrm{g}/a_0).$$ (2) Now, $`\mathrm{g}`$ can take any (non-negative) value, but, for all acceptable forms of $`\mu (x)`$, expression (2) has a maximum, which $`\mathrm{g}_h`$ can thus not exceed. Writing $`x=\mathrm{g}/a_0`$, and $`y=\mathrm{g}_h/a_0`$, $`y(x)=x[1-\mu (x)]`$ is non-negative and vanishes at $`x=0`$. Thus, it has a global maximum if and only if it does not diverge as $`x\to \mathrm{\infty }`$; i.e., if $`\mu (x)`$ approaches 1 as $`x\to \mathrm{\infty }`$ (as it must do) no slower than $`x^{-1}`$. The parameter $`\eta `$ defined above is just this maximum value of $`y(x)`$. There are solar-system constraints on how slowly $`\mu (x)`$ can approach 1 in the Newtonian limit (Milgrom (1983)).
Such constraints practically exclude the possibility that $`y(x)`$ diverges at large $`x`$. Some examples: for $`\mu (x)=x/(1+x)`$ the maximum, achieved in the Newtonian limit, is $`\eta =1`$; for the often-used $`\mu (x)=x(1+x^2)^{-1/2}`$, $`\eta =[(\sqrt{5}-1)/2]^{5/2}\approx 0.3`$; for $`\mu (x)=1-e^{-x}`$, $`\eta =e^{-1}\approx 0.37`$. (We see that, in fact, $`\eta `$ tends to be rather smaller than 1.) When is expression (1) valid? MOND may be viewed as either a modification of gravity or as one of inertia. Mondified gravity is described by the generalized Poisson equation discussed in Bekenstein and Milgrom (1984), which is of the form $$\vec{\nabla }\cdot [\mu (|\vec{\nabla }\phi |/a_0)\vec{\nabla }\phi ]=4\pi G\rho ,$$ (3) where $`\phi `$ is the (MOND) potential produced by the mass distribution $`\rho `$. For systems with one-dimensional symmetry (e.g. in spherically symmetric ones) eq.(1) is exact in this theory. It was also shown to be a good approximation for the acceleration in the mid-plane of disk galaxies (Milgrom (1986), Brada and Milgrom (1995)). An exact statement that can be made in this case for an arbitrary mass configuration is that the average value of $`|𝐠_h|`$ over an equipotential surface of the ‘halo’ is bounded by $`\mathrm{g}_{max}`$. To see this note that from eq.(3) $$\vec{\nabla }\cdot 𝐠_h=\vec{\nabla }\cdot [𝐠-\mu (\mathrm{g}/a_0)𝐠]$$ (4) (because $`\vec{\nabla }\cdot 𝐠_N=-4\pi G\rho =\vec{\nabla }\cdot [\mu (\mathrm{g}/a_0)𝐠]`$). Take a Gauss integral for a volume bounded by an equipotential of $`\phi _h\equiv \phi -\phi _N`$. Because $`𝐠_h`$ is perpendicular to the surface we have $$\oint [1-\mu (\mathrm{g}/a_0)]𝐠\cdot \mathrm{𝐝𝐬}=\oint 𝐠_h\cdot \mathrm{𝐝𝐬}=\oint |𝐠_h|𝑑s.$$ (5) Since we proved that $`[1-\mu (\mathrm{g}/a_0)]\mathrm{g}\le \mathrm{g}_{max}`$, the left-hand side is bounded by $`\mathrm{g}_{max}\oint 𝑑s`$, and so $`\langle |𝐠_h|\rangle \equiv \oint |𝐠_h|𝑑s/\oint 𝑑s\le \mathrm{g}_{max}`$.
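The example values of eta are easy to verify numerically by scanning y(x) = x[1 - mu(x)] on a grid; a minimal sketch:

```python
import math

def eta_numeric(mu, x_max=1000.0, n=200000):
    """Grid-search maximum of y(x) = x * (1 - mu(x)); this is finite
    whenever mu approaches 1 no slower than 1/x."""
    best = 0.0
    for i in range(1, n + 1):
        x = x_max * i / n
        best = max(best, x * (1.0 - mu(x)))
    return best

eta_simple = eta_numeric(lambda x: x / (1.0 + x))                 # -> 1, approached at large x
eta_standard = eta_numeric(lambda x: x / math.sqrt(1.0 + x * x))  # -> [(sqrt(5)-1)/2]^(5/2)
eta_exp = eta_numeric(lambda x: 1.0 - math.exp(-x))               # -> 1/e
```

Note the difference between the three cases: for the first form the supremum is only approached asymptotically, while for the other two the maximum is attained at a finite x of order unity.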
There is no concrete theory of mondified inertia yet; but, as was shown in Milgrom (1994), eq.(1) is exact in all such theories for circular orbits in an axisymmetric potential. So our limit here would apply, in both versions of MOND, to the ‘halo’ deduced from rotation-curve analysis.

## 3 comparison with previous MOND acceleration limits

The acceleration constant of MOND, $`a_0`$, has been found before to define a sort of limiting acceleration in two cases. The first case concerns self-gravitating spheres supported by random motions with constant tangential and radial velocity dispersions. The mean acceleration in all such spheres cannot exceed a certain value of order $`a_0`$ (Milgrom (1984)). This was suggested as an explanation of the Fish law, by which the distribution of the central surface brightnesses in ellipticals is sharply cut off above a certain value (which, assuming some typical $`M/L`$ value, translates into a mean surface density $`\mathrm{\Sigma }\sim a_0G^{-1}`$). The second instance concerns self-gravitating disks. In MOND, disks with a mean acceleration much larger than $`a_0`$ are in the Newtonian regime and are less stable than disks in the MOND regime, with mean accelerations smaller than $`a_0`$ (Milgrom (1989); Brada and Milgrom (1998) and references therein). This was suggested as an explanation of the Freeman law in its revised form, whereby the distribution of central surface brightnesses of galactic disks is cut off above a certain value (see a recent review and further references in McGaugh (1996)). The new limit we discuss here is different from those two in several important regards.

1. The previous limits concern the visible part of the galaxy, while the new limit pertains to the fictitious halo and thus lends itself to direct comparison with predictions of structure-formation simulations, which are rather vague as regards the visible galaxy. At the moment such simulations are also equivocal on the exact structure of the halo itself.
Different simulations start with different assumptions, and the effect of the visible galaxy on the halo is also poorly accounted for. Nonetheless, it may be easy to check for a specific structure-formation scenario whether it predicts an absolute upper limit to the acceleration in halos of the order predicted by MOND. For example, the family of halos produced in the simulations of Navarro, Frenk and White (1996) does not seem to have a maximum acceleration, with higher-mass halos having higher accelerations exceeding $`a_0`$ (Stacy McGaugh and Bob Sanders, private communications). 2. The new limit is ‘mathematical’; i.e., it does not make further assumptions on the physical nature of the galaxy. In contrast, the validity of the previous limits rests on additional assumptions. In the first example, quasi-isothermality and a nondegenerate-ideal-gas equation of state are assumed for the spherical system. The limit then applies neither to normal stars, which are not isothermal, nor to white dwarfs, whose equation of state is not that of an ideal gas. These stars have, indeed, mean accelerations much higher than $`a_0`$. In the second example, instability is relied upon to cull out disks with high mean acceleration. 3. The former two acceleration limits apply to the mean acceleration in the system, while the new limit applies to the ‘halo’ acceleration at all radii.
# On Spectrum of Extremely High Energy Cosmic Rays through Decay of Superheavy Particles Yūichi Chikashige<sup>1</sup> and Jun-ichi Kamoshita<sup>2</sup> <sup>1</sup>Faculty of Engineering, Seikei University, Musashino, Tokyo 180-8633, Japan <sup>2</sup>Department of Physics, Ochanomizu University, 2-1-1 Otsuka, Bunkyo, Tokyo 112-8610, Japan ## Abstract We propose a formula for the flux of extremely high energy cosmic rays (EHECR) produced through the decay of superheavy particles. It is shown that the EHECR spectrum reported by AGASA is reproduced by the formula. The presence of EHECR suggests, in this approach, the existence of superheavy particles with mass of about $`7\times 10^{11}`$GeV and lifetime of about $`10^9`$ years. The possibility of obtaining knowledge of $`\mathrm{\Omega }_0`$ of the universe from the spectrum of EHECR is also pointed out. PACS numbers: 13.85.Tp, 14.80.-j, 95.85.Ry, 98.70.-f, 98.80.Es The unexpected energy spectrum of the extremely high energy cosmic rays (EHECR) with energies above $`10^{19.8}`$eV has been reported by the Akeno Giant Air Shower Array (AGASA) collaboration using the updated data set. The existence of EHECR has been known for about 30 years ; the highest energy among them, $`(3.2\pm 0.9)\times 10^{20}`$eV, was recorded by Fly’s Eye . When we regard nucleons or nuclei as constituents of EHECR, the attenuation length is estimated to be less than $`100`$ Mpc due to the Greisen-Zatsepin-Kuz’min (GZK) effect . Thus, if the distances from Earth to the sources of EHECR exceed 100 Mpc, it is difficult to explain EHECR in terms of nucleons or nuclei. One might expect sources of EHECR to exist within about 100 Mpc from Earth, but no such sources have been found. Here we can make a brief list of the yet unsolved problems on EHECR: (1) the chemical composition of EHECR is unknown, (2) the sources of EHECR are unidentified, and (3) the shape of the energy spectrum reported by AGASA is unexplained. 
In order to solve these problems, several ideas have been proposed. There are mainly two approaches, one based on astrophysical aspects and the other on particle physics. From the astrophysical side, a number of acceleration mechanisms for ordinary particles have been proposed. For example, protons may be accelerated to extremely high energies by relativistic jets from AGN . Production of EHECR by Gamma-Ray Bursts has also been considered . There are, however, no particular astronomical objects in the directions of EHECR . On the particle-physics side, production mechanisms of EHECR have been proposed . A basic idea is that EHECR are produced by the decay of superheavy particles. In this case, EHECR consist of either standard particles (protons, photons or neutrinos) or new particles (for example, neutralinos or gluinos). Furthermore, instead of decay products of superheavy particles, quasi-stable superheavy particles themselves can be considered as constituents of EHECR. For example, colored monopoles have been examined in ref. . When decaying superheavy particles are considered as sources of EHECR, it is plausible that the mass of the superheavy particles is larger than about $`10^{12}`$GeV, since the highest energy of EHECR is $`(3.2\pm 0.9)\times 10^{20}`$eV. There are many models in particle physics which introduce such superheavy particles, for example, supersymmetric grand unified theories or the see-saw model for neutrino mass. However, the superheavy particles that decay into EHECR must have a long lifetime, comparable to or greater than the present age of the universe, because if the superheavy particles decay fast, very few of them remain, and as a result we cannot observe EHECR produced via their decay. 
Although the lifetime of a particle is usually inversely related to its mass, some kinds of models permit quasi-stable superheavy particles despite their heaviness. It is attractive to clarify the relation between the EHECR problem and particle physics, because we can expect that the features of EHECR lead us to physics beyond the standard model of particle physics with a large mass scale above about $`10^{12}`$GeV. In this paper, we examine the problem of EHECR from the particle-physics aspect, although we do not reject at present the possibility of explaining EHECR by astronomical origins like quasistellar objects, which are considered seriously in ref. . We begin by proposing a proper formula for the flux of EHECR. This formula should be used when EHECR are produced via the decay of superheavy particles. Reproducing the spectrum of EHECR observed at AGASA with our formula, we show that the lifetime of the superheavy particles is about 10% of the Hubble time when their mass is $`7\times 10^{11}`$GeV. Furthermore, we point out the possibility of obtaining knowledge of the omega parameter, $`\mathrm{\Omega }_0`$, of the universe from the energy spectrum of EHECR. First, we present the formula for the flux of cosmic rays produced as decay products of a superheavy particle $`X`$. Let $`f`$ be a particle among the decay products of $`X`$ which produces Extensive Air Showers (EAS) when it reaches the atmosphere. We denote the number density of the source at time $`t_e`$ as $`n_X(t_e)`$ and the partial decay width for a decay mode including $`f`$ as $`\mathrm{\Gamma }_f`$. Then the production rate of $`f`$ at time $`t_e`$ is given by $`\mathrm{\Gamma }_fn_X(t_e)`$. 
The general expression for the flux of cosmic rays, after averaging over the angular distribution, is given in the Robertson-Walker metric with scale factor $`a(t)`$ as follows: $$J(E_{obs})=\frac{1}{4\pi }\int _{t_{min}}^{t_0}\mathrm{d}t_e\left(\frac{a(t_e)}{a(t_0)}\right)^3n_X(t_e)\frac{\mathrm{d}\mathrm{\Gamma }_f}{\mathrm{d}E_e}\frac{\mathrm{d}E_e}{\mathrm{d}E_{obs}},$$ (1) where $`E_{obs}`$ is the observed primary energy of the cosmic rays, and $`\mathrm{d}\mathrm{\Gamma }_f/\mathrm{d}E_e`$ is the energy distribution of cosmic rays emitted by the source at time $`t_e`$. The present time is denoted $`t_0`$, while $`t_{min}`$ is determined by the physical condition that $`t_0-t_{min}`$ be smaller than the attenuation time of $`f`$. The number density of the superheavy particles, $`n_X(t)`$, evolves according to the Boltzmann equation $$\frac{\mathrm{d}(a(t)^3n_X(t))}{\mathrm{d}t}=-\mathrm{\Gamma }_Xa(t)^3n_X(t),$$ (2) where $`\mathrm{\Gamma }_X`$ is the total decay width of $`X`$. This equation gives $$n_X(t)=\left(\frac{a(t_0)}{a(t)}\right)^3n_X(t_0)\mathrm{exp}[\mathrm{\Gamma }_X(t_0-t)].$$ (3) By substituting eq. (3) into eq. (1), we obtain $$J(E_{obs})=\frac{n_X(t_0)}{4\pi }\int _{t_{min}}^{t_0}\mathrm{d}t_e\,\mathrm{exp}[\mathrm{\Gamma }_X(t_0-t_e)]\frac{\mathrm{d}\mathrm{\Gamma }_f}{\mathrm{d}E_e}\frac{\mathrm{d}E_e}{\mathrm{d}E_{obs}}.$$ (4) Here we would like to make a comment on this formula. In previous works, similar formulae have been used ; however, the effect of the decrease in the number density of the parent particles has been neglected, since their lifetime was assumed to be greater than the age of the universe. On the other hand, in eq. 
(4), this effect is included in the form of the exponential factor $`\mathrm{exp}[\mathrm{\Gamma }_X(t_0-t)]`$, since $`X`$ is supposed to have a lifetime of the same order as the present age of the universe. We should use eq. (4) rather than the formula of previous works when we estimate the flux of EHECR produced by the decay of superheavy particles, because the number density of the parent particles decreases due to their decay. This effect is very important for reproducing the spectrum of EHECR, as shown later in this paper. Our formula, eq. (4), can be applied when $`f`$ travels almost freely through the cosmic background radiation, so that the attenuation due to the GZK effect can be neglected. For example, we can take $`f`$ to be a neutrino or a neutralino. The difference between them is the translation rate $`R_{c2a}`$ of EHECR into EAS when they reach Earth and produce EAS: $$R_{c2a}\equiv \frac{\text{number of EHE air shower events}}{\text{number of EHECR incident on Earth}}.$$ (5) Then the flux of EAS, $`J(E_{obs})|_{\mathrm{EAS}}`$, is derived from the flux of EHECR, $`J(E_{obs})|_{\mathrm{EHECR}}`$, as $$J(E_{obs})|_{\mathrm{EAS}}=J(E_{obs})|_{\mathrm{EHECR}}R_{c2a}.$$ (6) Neutrinos give a larger translation rate, of order $`R_{c2a}=10^{-6}`$, than neutralinos. Now we consider the case that EHECR are neutrinos produced by the two-body decay of $`X`$. Its mass, $`M_X`$, can be expected to be about twice the highest energy of EHECR. Since the highest energy recorded by Fly’s Eye is $`(3.2\pm 0.9)\times 10^{20}`$eV, we take $`M_X`$ to be $`7\times 10^{11}`$GeV in the following. 
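As a consistency check, eq. (3) can be verified against a direct numerical integration of eq. (2). A sketch in Python, assuming the flat matter-dominated scale factor $`a(t)\propto t^{2/3}`$ used below and illustrative values $`t_0=1`$, $`\mathrm{\Gamma }_X=10`$ in units where $`t_0=1`$:

```python
import math

t0, Gamma = 1.0, 10.0                    # illustrative: Gamma_X * t0 = 10
a = lambda t: t ** (2.0 / 3.0)           # flat, matter-dominated scale factor

# Analytic solution, eq. (3), with n_X(t0) = 1:
def n_analytic(t):
    return (a(t0) / a(t)) ** 3 * math.exp(Gamma * (t0 - t))

# Integrate eq. (2) for N = a^3 n_X, dN/dt = -Gamma N, with classical RK4.
t, h = 0.05, 1e-4
N = a(t) ** 3 * n_analytic(t)            # start from the analytic value
while t < t0 - 1e-12:
    h = min(h, t0 - t)
    k1 = -Gamma * N
    k2 = -Gamma * (N + 0.5 * h * k1)
    k3 = -Gamma * (N + 0.5 * h * k2)
    k4 = -Gamma * (N + h * k3)
    N += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h
n_numeric = N / a(t0) ** 3
print(n_numeric, n_analytic(t0))         # both ~1: eq. (3) solves eq. (2)
```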
The energy spectrum at the emission point is monochromatic for two-body decay, $$\frac{\mathrm{d}\mathrm{\Gamma }_f}{\mathrm{d}E_e}=\mathrm{\Gamma }_f\delta (E_e-M_X/2).$$ (7) Hereafter, for simplicity, we limit ourselves to the case that there is a unique channel for the two-body decay, i.e. $`X\to \nu +`$ some particle, and then $`\mathrm{\Gamma }_f`$ is just the total decay width $`\mathrm{\Gamma }_{tot}`$. Since kinematics tells us that the energy spectrum is monochromatic in the present case, we can obtain the energy distribution of EHECR without a detailed description of the interaction. Thus we can discuss the problem of EHECR in a rather model-independent way. The red-shift relation between $`E_e`$ and $`E_{obs}`$ is $`(1+z)E_{obs}=E_e`$. The monochromatic-energy condition then becomes $$\delta (E_e-M_X/2)=\frac{1}{E_{obs}}\frac{1}{|\mathrm{d}f(t)/\mathrm{d}t|}\delta (t-t_\alpha )\theta (z),$$ (8) where the step function $`\theta (z)`$ is necessary to satisfy the condition $`z\ge 0`$. In eq. (8), $`f(t)\equiv 1+z(t)`$, and $`t_\alpha `$ is defined as the solution of $`f(t_\alpha )=M_X/2E_{obs}`$. The expression for $`f(t)`$ is different for each of the three cases: open universe with $`\mathrm{\Omega }_0<1`$, flat universe with $`\mathrm{\Omega }_0=1`$, and closed universe with $`\mathrm{\Omega }_0>1`$. By using eqs. (4)–(8), we can calculate the energy spectrum of EHECR. For the moment, we consider the case $`\mathrm{\Omega }_0=1`$, where $`f(t)=(t_0/t)^{2/3}`$. 
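Carrying out the delta-function integral in eqs. (4)–(8) for this flat case gives $`J\propto \sqrt{E_{obs}}\,\mathrm{exp}[(2r/3)(1-(2E_{obs}/M_X)^{3/2})]`$ with $`r\equiv \mathrm{\Gamma }_X/H_0`$, and one can locate the peak of $`J(E)E^3`$ numerically. A short sketch in Python, with illustrative values $`M_X=7\times 10^{11}`$GeV and $`r=10`$:

```python
import math

M_X, r = 7.0e11, 10.0        # GeV; r = Gamma_X/H0 (both illustrative)

def flux_E3(E):
    """Un-normalized J(E)*E^3 for the flat, two-body case."""
    x = 2.0 * E / M_X        # = 1/(1+z) at emission; spectrum ends at x = 1
    return E ** 3.5 * math.exp((2.0 * r / 3.0) * (1.0 - x ** 1.5))

# Scan for the peak and compare with the analytic (M_X/2)*(3.5/r)^(2/3).
E_grid = [M_X / 2.0 * i / 200000.0 for i in range(1, 200001)]
E_peak = max(E_grid, key=flux_E3)
print(E_peak, M_X / 2.0 * (3.5 / r) ** (2.0 / 3.0))  # the two agree
```

Setting the derivative of $`\frac{7}{2}\mathrm{ln}E-(2r/3)(2E/M_X)^{3/2}`$ to zero reproduces the same closed-form peak location.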
In this case, the spectrum of EAS is given by $$J(E_{obs})_{\mathrm{EAS}}=R_{c2a}n_X(t_0)\frac{\sqrt{2}\mathrm{\Gamma }_X}{2\pi H_0}\frac{\sqrt{E_{obs}}}{M_X^{3/2}}\mathrm{exp}\left\{\frac{2\mathrm{\Gamma }_X}{3H_0}\left[1-\left(\frac{2E_{obs}}{M_X}\right)^{3/2}\right]\right\}.$$ (9) The AGASA data above the GZK cutoff are distributed over the energy region from $`10^{19.8}`$eV to $`10^{20.5}`$eV, which corresponds to a range of 1 to 5 for $`1+z`$. This means that the most remote position of $`X`$ is about 7 Gpc in the flat-universe case. This distance is certainly shorter than the mean free path of the supposed neutrinos for collisions with background photons or neutrinos . Thus we can practically set $`t_{min}=0`$ in eq. (4). We see from eq. (9) that the spectrum of EHECR is determined by the three parameters $`M_X`$, $`r\equiv \mathrm{\Gamma }_X/H_0`$, and $`R_{c2a}n_X(t_0)`$. Since $`R_{c2a}`$ appears only as a product with $`n_X(t_0)`$ in eq. (9), an ambiguity in $`R_{c2a}`$ fortunately does not affect our estimate of the spectrum. To compare with the spectrum reported by AGASA, we must multiply eq. (9) by the factor $`E_{obs}^3`$. Fig. 1 displays the curves of $`\mathrm{log}_{10}(J(E_{obs})E_{obs}^3)`$ for the case that EHECR are neutrinos produced by the two-body decay of $`X`$. The flux $`J(E_{obs})`$ is calculated by eq. (9). The end point of the energy spectrum is located at $`M_X/2`$. The energy of EHECR produced at time $`t_\alpha `$ is lowered to $`E_e/(1+z)`$ by the red shift. The peak around $`10^{20}`$eV is produced by the interplay between the decay factor of $`X`$, $`\mathrm{exp}[\mathrm{\Gamma }_X(t_0-t_e)]`$, and the expansion factor of the universe, $`(t_0/t_e)^{2/3}`$, in eq. (4). We see remarkable agreement of the energy spectrum of EHECR from eq. (9) with the one reported by AGASA. The location of the peak of the spectrum shown in Fig. 1 is derived from eq. 
(9) as $$E_{obs}(\mathrm{peak})=\frac{M_X}{2}\left(\frac{3.5}{r}\right)^{2/3}.$$ (10) In order for the peak to appear in Fig. 1, $`r>3.5`$ is needed. We see from the data points given by AGASA that the peak seems to be located at $`E_{obs}\sim 10^{20}`$eV, though this location is not sharply determined by the data. When the inequalities $`20.1<\mathrm{log}_{10}E_{obs}<20.4`$ are imposed, we obtain $$3.5\left(\frac{M_X}{2\times 10^{11.4}}\right)^{1.5}<r<3.5\left(\frac{M_X}{2\times 10^{11.1}}\right)^{1.5},$$ (11) where $`M_X`$ is given in units of GeV. For $`M_X=7\times 10^{11}`$GeV, we find $`5.8<r<16`$. These inequalities translate into a lifetime of $`X`$ of $`\tau _X=(0.09–0.3)t_0`$. When we examine the AGASA data in detail, there is no event in the energy bin between $`10^{20.2}`$ and $`10^{20.3}`$eV. We anticipate, however, the appearance of about one event in this energy bin from our present consideration. If this vacancy remains in the future, the spectrum should be interpreted, from our standpoint, as a suggestion that two kinds of superheavy particles with different masses exist: the spectrum below $`10^{20.2}`$eV would be due to superheavy particles with mass of $`3\times 10^{11}`$GeV, while the one above $`10^{20.3}`$eV would be due to other superheavy particles with mass of $`7\times 10^{11}`$GeV. In Fig. 2, the curves of $`\mathrm{log}_{10}(J(E_{obs})E_{obs}^3)`$ are shown for $`\mathrm{\Omega }_0=0.5,1`$, and 2. We put $`r=10`$ for $`M_X=7\times 10^{11}`$GeV as an example. The statistics of the current data above $`10^{19.8}`$eV are not enough to derive information on $`\mathrm{\Omega }_0`$. We hope that the statistics of EHECR observations will increase in the future. So far we have considered the two-body decay case. Here we address the question of how the energy spectrum changes in the multi-body decay case. As an example, we show the curves of $`\mathrm{log}_{10}(J(E_{obs})E_{obs}^3)`$ for three-body decay via a four-Fermi interaction. We can see from Fig. 
3 that the curves for the three-body decay case do not reproduce the data as well as those for the two-body decay case in Fig. 1. In summary, we have examined the problem of EHECR from the particle-physics aspect, proposing a suitable formula for the flux of cosmic rays produced through the decay of superheavy particles. The energy spectrum of EHECR has been reproduced satisfactorily as that of neutrinos from the two-body decay of $`X`$ with mass of $`7\times 10^{11}`$GeV and lifetime of around $`0.1t_0`$. $`R_{c2a}\mathrm{\Omega }_X(t_0)\sim 10^{-14}`$ is required to reproduce the energy spectrum of EHECR, where $`\mathrm{\Omega }_X\equiv \rho _X/\rho _{cr}`$ as usually defined. Our analysis is not confined exclusively to the neutrino case, as mentioned before. We can take the neutralino instead of the neutrino. The difference between the neutralino and the neutrino is only the value of $`R_{c2a}`$, which becomes smaller for the neutralino. Thus $`\mathrm{\Omega }_X(t_0)`$ turns out to be much larger in the neutralino case than the $`10^{-8}`$ of the neutrino case. Furthermore, we have the possibility of deriving knowledge of the omega parameter, $`\mathrm{\Omega }_0`$, of the universe from the energy spectrum of EHECR. We expect to acquire further information on the superheavy particles as well as on $`\mathrm{\Omega }_0`$ of the universe from future observations, where more data will be accumulated to give higher statistics. From this aspect, future detectors like HiRes, the Telescope Array, the Pierre Auger Observatory, and OWL are greatly awaited. One of the authors (Y. C.) thanks Tetsuya Hara for his kind correspondence.
# ℎ–analogue of Newton’s binomial formula ## Acknowledgment I would like to thank the DAAD for its financial support and the referee for his remarks. Special thanks go also to Prof. H.-D. Doebner and Dr. R. Häußling for reading the manuscript, and to Prof. F. Scheck for encouragement.
# Temperature- and magnetic-field-dependent thermal conductivity of pure and Zn-doped Bi2Sr2CaCu2O8+δ single crystals ## Abstract The thermal conductivity $`\kappa `$ of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> is measured in pure and Zn-doped crystals as a function of temperature and magnetic field. The in-plane resistivity is also measured on the identical samples. Using these data, we make a crude estimate of the impurity-scattering rate $`\mathrm{\Gamma }`$ of the pure and the Zn-doped crystals. Our measurements show that the “plateau” in the $`\kappa (H)`$ profile is not observed in the majority of our Bi-2212 crystals, including one of the cleanest crystals available to date. The estimated values of $`\mathrm{\Gamma }`$ for the pure and Zn-doped samples allow us to compare the $`\kappa (H)`$ data with the existing theories of the quasiparticle heat transport in $`d`$-wave superconductors under magnetic field. Our analysis indicates that a proper inclusion of the quasiparticle-vortex scattering, which is expected to play the key role in the peculiar behavior of $`\kappa (H)`$, is important for a quantitative understanding of the QP heat transport in the presence of vortices. The thermal conductivity $`\kappa `$ of a superconductor is one of the few probes which allow us to investigate the quasiparticle (QP) density and its scattering rate in the superconducting (SC) state. It is now believed that the SC state of the high-$`T_c`$ cuprates is primarily $`d_{x^2-y^2}`$ , where it has been found that the magnetic field induces extended QPs whose population increases as $`\sqrt{H}`$ . Also, the QP scattering rate in the cuprates has been found to drop very rapidly below $`T_c`$ , which causes a pronounced peak in the temperature dependence of $`\kappa `$ . In 1997, Krishana et al. 
reported an intriguing result from their measurement of the magnetic-field dependence of $`\kappa `$ in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> (Bi-2212) at temperatures below 20 K . They observed, in the profile of $`\kappa (H)`$, a sharp break at a “transition field” $`H_k`$ and a subsequent plateau region where $`\kappa `$ does not change with magnetic field. Krishana et al. proposed the interpretation that $`H_k`$ marks a phase transition from the $`d_{x^2-y^2}`$ state to a fully-gapped $`d_{x^2-y^2}+id_{xy}`$ (or $`d_{x^2-y^2}+is`$) state, so that in the plateau region there are few thermally-excited QPs to contribute to the heat transport. This interpretation appears to be fundamentally related to the high-$`T_c`$ mechanism and therefore attracted much attention both from theorists and from experimentalists . An independent test of this unusual behavior of $`\kappa `$ has been reported by Aubin et al. ; although the plateau-like feature was essentially reproduced in their measurement, when the data were taken with the field swept up and down, Aubin et al. observed a rather large jump in $`\kappa `$ upon field reversal, and consequently the $`\kappa (H)`$ profile had a pronounced hysteresis. The fact that the $`\kappa `$ value in the “plateau” depends on the history of the applied magnetic field casts serious doubt on Krishana et al.’s interpretation. Moreover, Aubin et al. reported that an increase in $`\kappa `$ with magnetic field was observed at subkelvin temperatures, which strongly suggests the presence of a finite density of QPs at low temperatures and is thus incompatible with a fully-gapped state . Although these newer results suggest that a novel phase transition in the gap symmetry is not likely to be taking place, the plateau in the $`\kappa (H)`$ profile and the sensitivity of $`\kappa `$ to the magnetic-field history are still to be understood. 
One interesting piece of information one can draw from these experiments is that phonons are not scattered by vortices in cuprate superconductors . Motivated by these experiments, several theories have appeared that try to capture the essential physics of the QP heat transport in the $`d`$-wave superconducting state. It has become rather clear that a proper inclusion of the QP-vortex scattering rate is necessary for explaining the observed magnetic-field dependence; however, there has been no consensus yet as to how the QP-vortex scattering should be taken into account. To improve our understanding of the QP heat transport under magnetic field, quantitative examinations of the various theories in the light of actual data are indispensable. For Bi-2212, however, previously published data from various groups do not provide enough information; for example, the impurity scattering rate, which is the most important parameter controlling the thermal conductivity, is not known for clean Bi-2212 crystals. (Rather surprisingly, no electrical resistivity data have been supplied for the samples used in recent studies of the $`\kappa (H)`$ profile of Bi-2212 .) In this paper, we present results of our measurements of $`\kappa (T)`$ and $`\kappa (H)`$ of well-characterized Bi-2212 crystals, together with their in-plane resistivity ($`\rho _{ab}`$) data. To look for the effect of a changing impurity-scattering rate, we measured both pure and 0.6%-Zn-doped crystals. The crystals used here are single domained (without any mosaic structure or grain boundaries) and have very good morphology, which was confirmed by x-ray analyses and polarized-light optical microscopy. In the data presented here, neither the pure sample nor the Zn-doped sample shows any plateau in the $`\kappa (H)`$ profile in the temperature and magnetic-field regime where the plateaus have been reported. 
In fact, we have found that the plateau in the $`\kappa (H)`$ profile is not a very reproducible feature (we have thus far observed the plateau-like feature in only 2 samples out of more than 30 samples measured), and we have not yet conclusively sorted out what determines the occurrence of the plateau ; we therefore decided to show only the data that are representative of the majority of the samples. Using all the data of $`\rho _{ab}(T)`$, $`\kappa (T)`$, and $`\kappa (H)`$, we try to estimate the electronic thermal conductivity $`\kappa _e(T)`$ and make a rough estimate of the impurity scattering rate $`\mathrm{\Gamma }`$ for the pure and Zn-doped samples. Our data offer a starting point for a quantitative understanding of the QP heat transport in $`d`$-wave superconductors in magnetic fields, where an interplay between the QP-vortex scattering and the QP-impurity scattering apparently plays an important role. The single crystals of Bi-2212 are grown by a floating-zone method and are carefully annealed and quenched to obtain a uniform oxygen content . Both the pure and the Zn-doped crystals \[Bi<sub>2</sub>Sr<sub>2</sub>Ca(Cu<sub>1-z</sub>Zn<sub>z</sub>)<sub>2</sub>O<sub>8+δ</sub>\] are tuned to optimum doping by annealing at 750 °C for 48 hours, after which a transition width of about 1.5 K (measured by dc magnetic susceptibility) is achieved; this indicates the high homogeneity of these crystals. The Zn-doped crystal contains (0.6$`\pm `$0.1)% of Zn (namely $`z`$=0.006$`\pm `$0.001), which was determined by inductively-coupled plasma spectrometry. The zero-resistance $`T_c`$’s are 92.4 K for the pure crystals and 84.5 K for the Zn-doped crystal. We first measure the temperature dependence of $`\rho _{ab}`$ using a standard four-probe technique, and then measure the thermal conductivity $`\kappa `$ of the same sample. 
The temperature dependence of $`\kappa `$ from 1.6 to 160 K is measured in zero field using calibrated AuFe-Chromel thermocouples. The precise magnetic-field dependence of $`\kappa `$ is measured with a steady-state technique using a small home-made thin-film heater and microchip Cernox thermometers. The bottom end of the sample is anchored to a copper block whose temperature is carefully controlled with 0.01% stability and accuracy. Since we need to measure the change in $`\kappa `$ with very high accuracy, the small magnetic-field dependence of the thermometers was carefully calibrated beforehand using a SrTiO<sub>3</sub> capacitance sensor and a high-resolution capacitance bridge. As a check of the measurement system and the calibrations, we first measured the thermal conductivity of nylon and confirmed that the reading is indeed magnetic-field independent to within 0.1% accuracy. Figure 1(a) shows the $`\rho _{ab}(T)`$ data of the pure and Zn-doped samples. If we define the residual resistivity $`\rho _0`$ by the extrapolation of the $`T`$-linear resistivity to 0 K, we obtain a slightly negative $`\rho _0`$ for the pure sample; this is always the case for the cleanest Bi-2212 crystals grown in our laboratory. As expected, the Zn-doped sample gives a larger $`\rho _0`$, which is about 10 $`\mu \mathrm{\Omega }\mathrm{cm}`$. We note that the uncertainty in the absolute magnitude of $`\rho _{ab}`$ and $`\kappa `$ in our measurements is less than $`\pm `$5% in this work. This is achieved by determining the sample thickness, which is usually the main source of the uncertainty, from the weight of the sample measured with 0.1 $`\mu `$g resolution. We used relatively long samples here (the lengths of the pure and the Zn-doped samples were 4.5 mm and 6.5 mm, respectively), with which the errors in estimating the voltage-contact separation and the thermocouple separation are less than $`\pm `$5%. 
It should also be noted that a sophisticated technique for making the current contacts uniformly on the side faces of the crystals is crucial for reliably measuring $`\rho _{ab}`$ of Bi-based cuprates. Figure 1(b) shows the temperature dependence of $`\kappa `$ of the two samples in zero field. To our knowledge, the size of the peak in $`\kappa (T)`$ of this pure sample is the largest ever reported for Bi-2212 (the enhancement from the minimum near $`T_c`$ to the peak is 16%). This implies that the pure crystal reported here is among the cleanest Bi-2212 crystals available to date. The Zn-doped sample shows not only a smaller magnitude of $`\kappa `$ but also a significantly suppressed peak in $`\kappa (T)`$ below $`T_c`$; this is caused by a larger impurity scattering rate (which limits the enhancement of the QP mean free path below $`T_c`$) and is consistent with the Zn-doping effect in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> (YBCO) . Figure 2 shows the $`\kappa (H)`$ profiles of the two samples at 7.5 and 12.5 K. The data are normalized by the zero-field value of $`\kappa `$. Neither of these samples shows any plateau-like feature below 6 T. (The $`\kappa (H)`$ data of the Zn-doped sample are almost flat within our sensitivity, but we do not call this a plateau.) Note that Ref. reported the plateaus to be observed above 0.9 T at 7.5 K and above 2.6 T at 12.5 K, which are well within the range of our experiment. The data in Fig. 2 were taken in the field-cooled (FC) procedure, in which the sample is cooled in the presence of a magnetic field, so that the magnetic induction in the sample remains homogeneous. The FC data are expected to give information free from complications due to vortex pinning, while the zero-field-cooled (ZFC) data (for which the sample is cooled in zero field and the magnetic field is swept at a constant temperature) are subject to such complications. 
We emphasize that the FC data should be examined to look for the intrinsic effect of vortices on the QP transport. All the previously published data on the plateau are, however, ZFC data . It has been proposed that the magnetic-field dependence of $`\kappa `$ of the high-$`T_c`$ cuprates is described by $$\kappa (H,T)=\frac{\kappa _e(T)}{1+p(T)|H|}+\kappa _{ph}(T),$$ (1) where $`\kappa _e`$ is the electronic part of $`\kappa `$ in zero field and $`\kappa _{ph}`$ is the phonon part . Equation (1) was proposed first by Yu et al. and later by Ong and co-workers . This expression utilizes the finding that the phonon thermal conductivity of the cuprates is independent of the magnetic field. The parameter $`p(T)`$ is proportional to the zero-field value of the QP mean free path. We found that the $`\kappa (H)`$ data of our clean samples are reasonably well described by Eq. (1). The solid lines in Fig. 2 (a) and (b) are fits of the data to Eq. (1). The fitting parameters suggest that the phononic contribution to $`\kappa `$ is as large as 96% and 93% at 7.5 and 12.5 K, respectively; this is essentially a reflection of the fact that the changes in $`\kappa `$ with the magnetic field are very small at these temperatures. Ong et al. reported that $`p(T)`$ is about 2.3 T<sup>-1</sup> for underdoped YBCO at 7.5 K, while it is 0.88 T<sup>-1</sup> for our pure Bi-2212 at 7.5 K. This is an indication that the QP transport is dirtier in Bi-2212 compared to YBCO; we will elaborate on these $`p(T)`$ values later. The magnetic-field dependence of $`\kappa `$ of the Zn-doped sample is too small to make a reliable fit to Eq. (1), but one can infer that the phononic contribution in the Zn-doped sample is even larger than in the pure sample. In the normal state, the electronic thermal conductivity $`\kappa _e`$ and the electrical conductivity $`\sigma `$ are related by $`\kappa _e/T`$ = $`L\sigma `$, where $`L`$ is called the Lorenz number. 
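Fitting $`\kappa (H)`$ to Eq. (1) is a small nonlinear least-squares problem, but it becomes linear in $`\kappa _e`$ and $`\kappa _{ph}`$ once $`p`$ is fixed, so a one-dimensional scan over $`p`$ suffices. A sketch in Python on synthetic data (the numbers 0.16, 0.88, and 3.8 are illustrative stand-ins, not our actual fit parameters):

```python
import numpy as np

def kappa_model(H, kappa_e, p, kappa_ph):
    """Eq. (1): field-independent phonon part plus an electronic part
    suppressed by QP-vortex scattering as 1/(1 + p|H|)."""
    return kappa_e / (1.0 + p * np.abs(H)) + kappa_ph

# Synthetic kappa(H) mimicking the pure crystal at 7.5 K (illustrative
# values only: ~4% electronic weight in zero field, p ~ 0.9 1/T).
H = np.linspace(0.0, 6.0, 25)
kappa_obs = kappa_model(H, 0.16, 0.88, 3.8)
kappa_obs = kappa_obs + np.random.default_rng(0).normal(0.0, 1e-3, H.size)

# Scan p; at each trial value solve the linear least-squares problem
# for (kappa_e, kappa_ph) and keep the best residual.
best = None
for p in np.linspace(0.1, 3.0, 2901):
    A = np.column_stack([1.0 / (1.0 + p * H), np.ones_like(H)])
    coef, res, _, _ = np.linalg.lstsq(A, kappa_obs, rcond=None)
    if best is None or res[0] < best[0]:
        best = (res[0], coef[0], p, coef[1])
_, kappa_e, p_fit, kappa_ph = best
print(kappa_e, p_fit, kappa_ph)   # close to the input (0.16, 0.88, 3.8)
```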
In simple metals, $`L`$ is usually constant at high temperatures (the Wiedemann-Franz law), and the free-electron model gives $`L_0`$ = 2.44$`\times `$10<sup>-8</sup> W$`\mathrm{\Omega }`$/K<sup>2</sup>. When the electron-electron correlation becomes strong, $`L`$ becomes smaller than the free-electron value $`L_0`$; for YBCO, $`L`$ has been estimated to be around 1.0$`\times `$10<sup>-8</sup> W$`\mathrm{\Omega }`$/K<sup>2</sup> near $`T_c`$ . Using this value of $`L`$ and the $`\rho _{ab}`$ data of our crystals, we can roughly estimate $`\kappa _e`$ and $`\kappa _{ph}`$ above $`T_c`$. Such an estimate for the pure sample gives $`\kappa _e`$ and $`\kappa _{ph}`$ values of about 1.6 and 3.6 W/Km, respectively, at 120 K. (120 K is the lower bound of the temperature range where the effect of superconducting fluctuations on $`\rho _{ab}`$ is negligible.) It has already been established that $`\kappa _{ph}`$ does not change much just below $`T_c`$ and that it is mostly $`\kappa _e`$ that causes the peak . We can therefore infer (from the $`\kappa _e`$ value at 120 K and the total $`\kappa `$ at the peak) that $`\kappa _e`$ at the peak is about 1.7 times larger than in the normal state. In the same manner, we estimate $`\kappa _e`$ and $`\kappa _{ph}`$ of the Zn-doped sample to be about 1.0 and 2.9 W/Km at 120 K, respectively, and the enhancement of $`\kappa _e`$ at the peak is inferred to be a factor of 1.2. This analysis indicates that the effect of 0.6%-Zn doping is weaker for $`\kappa _{ph}`$ (which is decreased by 19% at 120 K upon the 0.6%-Zn doping) than for $`\kappa _e`$ (which is decreased by 38% at 120 K). Comparison of the inferred behavior of $`\kappa _e`$ below $`T_c`$ with the theoretical calculations of Ref. gives us an idea of the magnitude of the impurity-scattering rate $`\mathrm{\Gamma }`$ in our samples. 
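The Wiedemann-Franz step of this estimate is simple enough to spell out. In the snippet below the value of $`\rho _{ab}`$ at 120 K is an assumed illustrative number, chosen only to be consistent with the $`\kappa _e\approx `$ 1.6 W/Km quoted above:

```python
# Order-of-magnitude Wiedemann-Franz estimate kappa_e = L*T/rho above T_c.
# rho_ab at 120 K is an assumed illustrative number, not a measured value.
L = 1.0e-8          # W Ohm / K^2, the reduced Lorenz number quoted for YBCO
T = 120.0           # K
rho_ab = 75.0e-8    # Ohm m (= 75 micro-Ohm cm, assumed)

kappa_e = L * T / rho_ab
print(kappa_e)      # 1.6 W/Km, the order quoted for the pure sample
```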
As is discussed above, $`\kappa _e`$ can be inferred to be about 1.7 times enhanced at the peak; comparison of this enhancement factor with the calculations for various $`\mathrm{\Gamma }/T_c`$ values suggests that $`\mathrm{\Gamma }/T_c`$ of our pure sample is about 0.05. The position of the peak (at $`T/T_c\sim 0.7`$) is also consistent with the theoretical calculation for $`\mathrm{\Gamma }/T_c\sim 0.05`$. Similarly, the inferred enhancement factor of $`\kappa _e(T)`$ of the Zn-doped sample suggests $`\mathrm{\Gamma }/T_c`$ to be about 0.2. Although these estimates are very crude, the estimated values of $`\mathrm{\Gamma }/T_c`$ of our Bi-2212 samples imply that the scattering rate increases by $`\mathrm{\Gamma }/T_c\sim 0.25`$ per 1% of Zn, which is of the same order of magnitude as that for YBCO. The estimated $`\mathrm{\Gamma }/T_c`$ of our pure Bi-2212 is still notably larger than that for pure YBCO, for which $`\mathrm{\Gamma }/T_c`$ has been estimated to be $`\sim `$0.01. This is essentially a reflection of the fact that the peak in $`\kappa (T)`$ of Bi-2212 is much smaller than that of YBCO, and is probably caused by the intrinsic disorder of the crystalline lattice of Bi-2212 (the modulation structure along the $`b`$ axis). The above-mentioned difference in $`\mathrm{\Gamma }/T_c`$ between our pure Bi-2212 ($`\mathrm{\Gamma }/T_c\sim 0.05`$) and pure YBCO ($`\mathrm{\Gamma }/T_c\sim 0.01`$) indicates that the QP mean free path in zero field, $`l_0`$, is roughly 5 times longer in pure YBCO than in pure Bi-2212. This observation yields useful information on the QP scattering cross section of the vortices, $`\sigma _{tr}`$, in Bi-2212. It was discussed in Refs. and that the parameter $`p(T)`$ appearing in Eq. (1) can be expressed as $`p(T)=l_0\sigma _{tr}/\varphi _0`$. (Note that $`\sigma _{tr}`$ is the scattering cross section in two dimensions.)
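The relation $`p(T)=l_0\sigma _{tr}/\varphi _0`$ turns directly into numbers. A small consistency sketch using the $`p(T)`$ values quoted earlier (2.3 T<sup>-1</sup> for YBCO and 0.88 T<sup>-1</sup> for pure Bi-2212 at 7.5 K) and the factor-of-5 difference in $`l_0`$; the 9 nm YBCO cross section is the literature value cited in the text:

```python
# p = l_0 * sigma_tr / phi_0, so the cross-section ratio follows from the
# p values and the inferred mean-free-path ratio.  (sigma_tr is a 2D cross
# section, i.e. a length, so the units close: [1/T] * [T m^2] / [m] = [m].)
phi_0 = 2.07e-15   # Wb, superconducting flux quantum
p_ybco = 2.3       # 1/T, underdoped YBCO at 7.5 K
p_bi = 0.88        # 1/T, pure Bi-2212 at 7.5 K
l0_ratio = 5.0     # l_0(YBCO) / l_0(Bi-2212), inferred from the Gamma/T_c values

# sigma_tr(Bi-2212) / sigma_tr(YBCO):
sigma_ratio = (p_bi / p_ybco) * l0_ratio      # ~1.9, i.e. about a factor of 2

# The same relation inverted gives l_0; e.g. for YBCO with sigma_tr = 9 nm:
l0_ybco = p_ybco * phi_0 / 9e-9               # ~0.5 micron
```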
As is already discussed, the $`p(T)`$ value at 7.5 K is 0.88 T<sup>-1</sup> for our pure Bi-2212, while it is 2.3 T<sup>-1</sup> for YBCO; given the indication that $`l_0`$ is roughly 5 times longer in YBCO, the ratio of the $`p(T)`$ values of the two systems suggests that $`\sigma _{tr}`$ should be approximately a factor of 2 larger in Bi-2212. Since $`\sigma _{tr}`$ has been estimated to be 9 nm for YBCO, we obtain $`\sigma _{tr}\sim 18`$ nm for Bi-2212. The origin of this difference in $`\sigma _{tr}`$ between the two systems might be related to the difference in the structure of the vortex lines and should be a subject of future studies. In any case, the estimate of $`\sigma _{tr}\sim 18`$ nm gives $`l_0\sim 0.1`$ $`\mu `$m for pure Bi-2212 at 7.5 K. Now let us discuss the observed magnetic-field dependence of $`\kappa `$ in conjunction with the estimated values of $`\mathrm{\Gamma }/T_c`$ for each sample. Kübert and Hirschfeld (KH) calculated the magnetic-field dependence of $`\kappa _e`$ that comes from the QP’s Doppler shift around the vortex. Good agreement between the KH theory and experiments has been reported for very low temperatures, where an increase of $`\kappa `$ with magnetic field has been observed. The numerical calculations by KH show that $`\kappa (H)`$ is already an increasing function of $`H`$ at $`T`$=0.2$`T_c`$ for a dirty case, $`\mathrm{\Gamma }/T_c`$=0.1; this is clearly in disagreement with our data and indicates the necessity of including QP-vortex scattering in the calculation. The theory proposed by Franz tries to incorporate the effect of QP-vortex scattering; heuristically, Franz supposed that the QP-impurity scattering and the QP-vortex scattering are separable and additive.
In this theory, the zero-field scattering rate $`\sigma _0`$ (which is a sum of the impurity scattering rate and the inelastic scattering rate) is directly related to the total change in $`\kappa `$ with the magnetic field, $`\mathrm{\Delta }\kappa `$, which is expressed as $`\mathrm{\Delta }\kappa `$ = $`(2.58T/\sigma _0-1)\kappa _{00}`$ for $`\sigma _0<T`$, and as $`\mathrm{\Delta }\kappa `$ = $`(2.15T/\sigma _0)^2\kappa _{00}`$ for $`\sigma _0>T`$. In these expressions, $`\kappa _{00}`$ is the universal thermal conductivity. Assuming that the inelastic scattering is negligible ($`\sigma _0\simeq \mathrm{\Gamma }`$) at 7.5 K, we can estimate $`\mathrm{\Delta }\kappa `$ for the pure and Zn-doped samples using these equations. The results are $`\mathrm{\Delta }\kappa \sim 0.4`$ W/Km for the pure sample and $`\mathrm{\Delta }\kappa \sim 0.1`$ W/Km for the 0.6%-Zn-doped sample. (In the calculation, we used $`\kappa _{00}/T`$ = 0.015 W/K<sup>2</sup>m, which is the value reported for Bi-2212.) When we turn to our data, the values of $`\mathrm{\Delta }\kappa `$ actually measured in our experiments are much smaller; at 7.5 K, $`\mathrm{\Delta }\kappa `$ up to 6 T is 0.086 and 0.008 W/Km for the pure and the Zn-doped samples, respectively. Clearly, the theory overestimates $`\mathrm{\Delta }\kappa `$ and the overestimation is particularly large for the Zn-doped sample. This comparison implies that the rather simple treatment of the separable QP-impurity scattering and the QP-vortex scattering is not accurate enough, particularly when the impurity scattering rate becomes comparable to the vortex scattering rate. Vekhter and Houghton (VH) have recently proposed a theory which explicitly considers the interplay between the vortex-lattice scattering and the impurity scattering.
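The two branches of the Franz estimate quoted above are straightforward to evaluate. In the sketch below, $`T_c\sim 90`$ K for Bi-2212 is an assumed value (it is not quoted in this excerpt), and $`\sigma _0`$ is set equal to $`\mathrm{\Gamma }`$ as in the text:

```python
# Franz-theory estimate of the total field-induced change in kappa.
# T_c ~ 90 K is an ASSUMED value for Bi-2212; sigma_0 = Gamma (inelastic
# scattering neglected), with Gamma expressed in temperature units.
T = 7.5                  # K
T_c = 90.0               # K, assumed
kappa00 = 0.015 * T      # W/Km, from the universal value kappa_00/T = 0.015 W/K^2 m

def delta_kappa(gamma_over_Tc):
    sigma0 = gamma_over_Tc * T_c
    if sigma0 < T:
        return (2.58 * T / sigma0 - 1.0) * kappa00
    return (2.15 * T / sigma0) ** 2 * kappa00

dk_pure = delta_kappa(0.05)   # pure sample: sigma_0 < T branch, ~0.4 W/Km
dk_zn = delta_kappa(0.2)      # Zn-doped sample: sigma_0 > T branch, ~0.1 W/Km
```

Both numbers exceed the measured changes (0.086 and 0.008 W/Km), which is the overestimate discussed in the text.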
Unfortunately, numerical calculations for the magnetic-field dependence of $`\kappa `$ at intermediate temperatures are available only for a very clean case ($`\mathrm{\Gamma }/T_c`$=0.006), and we cannot make a direct comparison to our data. Qualitatively, however, the theory predicts that a steep drop in $`\kappa (H)`$ at low fields is expected only when $`T`$ is larger than $`\mathrm{\Gamma }`$. The estimated $`\mathrm{\Gamma }/T_c`$ values of our samples suggest that the condition $`T>\mathrm{\Gamma }`$ is satisfied in the pure sample, but is not satisfied in the 0.6%-Zn-doped sample in the temperature range of Fig. 2. In summary, the temperature and magnetic-field dependences of the thermal conductivity $`\kappa `$ are measured in well-characterized pure and Zn-doped Bi-2212 crystals. The $`\rho _{ab}(T)`$ data taken on the identical samples are used for extracting the electronic thermal conductivity $`\kappa _e`$ above $`T_c`$. The temperature and magnetic-field dependences of $`\kappa `$ clearly reflect the difference in the impurity-scattering rate $`\mathrm{\Gamma }`$ in the crystals. It is found that in the majority of our Bi-2212 crystals (including one of the cleanest crystals available to date) the “plateau” in the $`\kappa (H)`$ profile is not observed and the $`\kappa (H)`$ profile around 10 K is reasonably well described by Eq. (1), which was originally proposed for YBCO. We estimate $`\mathrm{\Gamma }`$ of our crystals by comparing the behaviors of $`\kappa _e`$ below $`T_c`$ with the calculation by Hirschfeld and Putikka. The estimated values of $`\mathrm{\Gamma }`$ for the pure and Zn-doped samples allow us to compare the $`\kappa (H)`$ data with the existing theories of the QP heat transport in $`d`$-wave superconductors in a magnetic field. We thank H. Aubin, K. Behnia, M. Franz, T. Hanaguri, A. Maeda, Y. Matsuda, and N. P. Ong for fruitful discussions.
# Discovery of a Magnetic White Dwarf in the Symbiotic Binary Z Andromedae ## 1. Introduction When the term “symbiotic star” was coined in the early 1940s for the newly discovered peculiar variable stars with combination optical spectra (see Kenyon (1986)), Z Andromedae was one of the prototypes. Today, it remains one of the most frequently observed symbiotic systems (SS). The observations have revealed a complex system that is still not fully understood (Mikołajewska & Kenyon (1996)). Most evidence indicates that the hot star in Z Andromedae is a white dwarf (WD), and the work we present here supports that conclusion. This evidence includes effective temperature estimates of the hot component of approximately $`10^5\mathrm{K}`$ (Fernández-Castro et al. (1988); Murset et al. (1991)), an inferred hot component radius of approximately $`0.07R_{}`$ (Fernández-Castro et al. (1988); Murset et al. (1991)), and a large radio nebula (Seaquist, Taylor, & Button (1984)), which is not expected if mass transfer occurs via Roche lobe overflow onto a main sequence star. The binary has an orbital period of 759 days (Formiggini & Leibowitz (1994); Mikołajewska & Kenyon (1996)), and Schmid and Schild (1997) have used Raman line polarimetry to determine an orbital inclination of $`i=47\pm 12^{}`$ and infer a mass for the hot component of $`0.65\pm 0.28M_{}`$ (assuming a total system mass of between 1.3 and 2.3 $`M_{}`$). According to current theories of binary evolution (Yungelson et al. (1995)), most white dwarfs found in symbiotics should have evolved from stars with main sequence masses greater than about $`1.5M_{}`$. Highly magnetic WDs ($`B_\mathrm{S}\gtrsim 10^6`$ G, where $`B_\mathrm{S}`$ is the field at the stellar surface) appear to be preferentially formed by stars with main sequence masses $`M\sim 2`$–$`4M_{}`$ (Angel, Borra, & Landstreet (1981); Sion et al.
(1988)), and so it is possible that the fraction of WDs that are magnetic is higher in symbiotics than in the field, where it is about 3–5% (Chanmugam (1992)). Given that there are at least 150 known SS, and that most of these contain white dwarfs, we expect some SS to contain white dwarfs that are magnetized at the level seen in DQ Herculis and AM Herculis cataclysmic variables ($`B_\mathrm{S}\gtrsim 10^5`$ G). Mikołajewski et al. (1990a, b) and also Mikołajewski & Mikołajewska (1988) have invoked the presence of a magnetic white dwarf to explain the jets, flickering with possible QPOs, and large changes in the hot component luminosity in the symbiotic star CH Cygni, and this idea was later also adopted in the case of another symbiotic, MWC 560 (Tomov et al. (1992); Michalitsianos et al. (1993)). However, stable and repeatable oscillations like those detected in magnetic cataclysmic variables have until now not been seen in a symbiotic, and the prevalence of magnetic white dwarfs in SS is an important unknown. We are undertaking a long-term observational program to study the minute time scale photometric behavior of symbiotic binaries, expanding upon the work of Dobrzycka, Kenyon, & Milone (1996) and others. In §2, we present the first result from our survey, the discovery of a 28-minute oscillation in the B band emission from Z Andromedae. This was the only strong oscillation found in the preliminary analysis of 20 objects. Results from the complete survey will be presented in a future paper. In §3, we interpret this oscillation as due to accretion onto a magnetic, rotating white dwarf. The fact that the oscillation was detected during a recent outburst, as well as once the source had returned to quiescence, has implications for outburst models, as we outline in our conclusions (§4). ## 2.
Observations and Results We observed Z Andromedae on seven occasions separated by two to four weeks each from 1997 July to 1997 December, and then once again in 1998 June, with the 1-meter Nickel telescope at UCO/Lick Observatory. The observations ranged in length from approximately 4 hours on a single night in 1997 July, to approximately 7 hours per night three nights in a row in 1997 November, for a total of 76 hours of observing on 13 nights spanning 1 year (see Table 1). The $`2048\times 2048`$ pixel, $`6.3\times 6.3`$ arcmin, unthinned LORAL CCD currently in Lick’s dewar #2, and a Johnson B filter were always used. The first observation fortuitously occurred about 1 month after the peak of an optical outburst ($`\mathrm{\Delta }V\sim 1`$), and the subsequent observations in 1997 took place as the optical flux declined. At the time of the 1998 June observation, the optical flux had returned to its pre-outburst quiescent value. A 2.2 year V band light curve is shown in Figure 1, with the times of our observations marked. At the time of the first observation, on 1997 July 8, the binary was oriented with the WD in front of the red giant, from the observer’s perspective (i.e. the orbital phase of the binary was 0.5, where phase 0.0 corresponds to photometric minimum in quiescence and spectroscopic conjunction). By 1997 December 2, the binary had moved through almost one quarter of its orbit to phase 0.7, where both stars are roughly equidistant from the observer, and in 1998 June, the WD was behind the red giant. Data reduction was performed using IDL software based on standard IRAF routines. Source counts for each image were extracted from a circular aperture with a radius of 8 to 14 arcsec, and the background was estimated from a surrounding annulus.
For each light curve, the extraction region was chosen to be much larger than the seeing so that any variability due to source counts falling outside the extraction region (as the seeing or guiding quality changed) was small compared to systematic errors. Z Andromedae is bright enough that even with large extraction regions, the Poisson errors from sky background are usually not significant. Several representative light curves from our observations are shown, in chronological order, in Figure 2. There is one other bright star in the field of Z Andromedae (at J2000 coordinates 23 33 24, +48 45 38), but it is also variable, so it was not used as a comparison star for differential photometry (except for one night in October when its amplitude of variability was low, and one night in November when thick clouds were present). Therefore, although every attempt was made to perform observations on clear nights, some observations were affected by high clouds. Data points affected by radiation events (“cosmic rays”) were removed when they could be identified, and the light curves were corrected for atmospheric extinction. In addition, we divided most of the light curves by a 3rd order polynomial in order to remove residual atmospheric effects ($`\sim `$1% effect). Note that this may have also removed any intrinsic variability on a time scale comparable to the length of the observation. This polynomial fitting was not performed for the Aug. 2 and Aug. 30 data because of the presence of flare-like variability in the light curves (see Figure 2). Power spectra corresponding to the light curves in Figure 2 are shown in Figure 3. The most striking, persistent feature is the peak at 0.6 mHz, corresponding to a period of 28 minutes. A smaller, but still significant peak is also present at twice this frequency in the power spectra of the July 8 (not shown), August 2, August 30, and September 13 light curves. The 28-minute oscillation was significantly detected in all 8 observations.
The power spectrum of the other bright star in the field does not show the feature at 0.6 mHz, confirming that the oscillation detected in Z Andromedae was neither instrumental nor atmospheric in origin. ### 2.1. Timing Analysis Although the feature at 0.6 mHz is detected in the power spectra of Z Andromedae, the precise value of the oscillation period and its uncertainty are best determined in the time domain. Visual inspection of the light curves suggests that the signal is not a simple sinusoid, and the detection of a harmonic in several of the power spectra confirms this impression. By using time domain epoch folding techniques, we need not make any assumptions about the shape of the pulse profile, and all the signal “power” will be located at a single period. To perform this analysis, we used the phase dispersion minimization (PDM) technique originally described by Stellingwerf (1978). The PDM technique consists of folding a light curve at a range of periods and computing the mean pulse profile, and the scatter of the data points about this profile, for each period. Each data point is assigned a phase $`\varphi =t\mathrm{\hspace{0.17em}}\mathrm{mod}\mathrm{\hspace{0.17em}}P`$, where $`t`$ is the time of the measurement from some initial time and $`P`$ is the fold period, and binned accordingly. In our case the number of phase bins ranged from 10 to 20, and was chosen to be as large as possible while still ensuring that each phase bin contained enough data for our statistics to be valid (at least 10 data points). With a large number of points in each bin, the standard PDM statistic is just $`\chi ^2`$, $$\chi ^2=\sum _{i=1}^{n_b}\sum _{j=1}^{n_i}\frac{(x_{ij}-m_i)^2}{\sigma _{ij}^2},$$ (1) where $`n_b`$ is the number of phase bins, $`n_i`$ is the number of points in bin $`i`$, $`x_{ij}`$ is the $`j`$th point in the $`i`$th phase bin, $`m_i=n_i^{-1}\sum _jx_{ij}`$ is the mean of all points in bin $`i`$, and $`\sigma _{ij}`$ is the uncertainty on $`x_{ij}`$.
In our treatment the $`\sigma _{ij}`$ were usually dominated by the Poisson errors on the source counts. The mean pulse profile will be rather flat for fold periods far from the true period, and the points in each phase bin will have a large variance, causing $`\chi ^2`$ to be large. For a fold period close to the true period, the mean profile will approach the true pulse profile, the variance of the points in each bin will be small, and $`\chi ^2`$ will decrease. The best estimate of the true period is found by minimizing $`\chi ^2`$. For a data stream with Gaussian noise and a superimposed oscillation, $`\chi _{min}^2`$, the minimum value of $`\chi ^2`$, should be approximately equal to the number of degrees of freedom (in this case $`N-n_b`$, where $`N`$ is the total number of points). In other words, the reduced $`\chi ^2`$ is approximately equal to 1, indicating that the fit of the data to the mean profile is good. If the errors are normally distributed, there is also a simple relationship between $`\mathrm{\Delta }\chi ^2`$ above the minimum and the level of confidence that the true period lies within the range that produces $`\chi ^2\le \chi _{min}^2+\mathrm{\Delta }\chi ^2`$. For models with only one free parameter, like ours, $`\mathrm{\Delta }\chi ^2=1`$ corresponds to a 68.3% confidence level, and $`\mathrm{\Delta }\chi ^2=2.71`$ corresponds to a 90% confidence level. If underlying red noise is present at the period of interest, the task of identifying an oscillation and measuring its parameters is much more difficult. Very long baselines or repeated observations are then needed to characterize the underlying variability. For Z Andromedae, any red noise is at longer periods than the detected oscillation, so we did not have this added complication.
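The folding-and-$`\chi ^2`$ procedure described above can be sketched compactly. In the sketch below the sampling, amplitude, and noise level are loosely modeled on the observations but are purely illustrative:

```python
import numpy as np

def pdm_chi2(t, x, sigma, period, n_bins=15):
    """Chi-squared PDM statistic: fold at `period`, bin by phase, and sum the
    scatter of the points about the per-bin mean profile (Eq. (1) of the text)."""
    phase = np.mod(t, period) / period
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    chi2 = 0.0
    for i in range(n_bins):
        in_bin = bins == i
        if in_bin.sum() < 2:
            continue
        m_i = x[in_bin].mean()
        chi2 += np.sum((x[in_bin] - m_i) ** 2 / sigma[in_bin] ** 2)
    return chi2

# Illustrative data: a 1683 s sinusoid sampled every 30 s over ~7 hours,
# with Gaussian noise (amplitudes are in relative flux, not mmag).
rng = np.random.default_rng(1)
t = np.arange(0.0, 7 * 3600.0, 30.0)
true_period = 1683.0
x = 1.0 + 0.0025 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.001, t.size)
sigma = np.full(t.size, 0.001)

# The best period estimate minimizes chi^2 over a grid of trial periods.
trial_periods = np.linspace(1500.0, 1900.0, 801)
chi2_vals = np.array([pdm_chi2(t, x, sigma, p) for p in trial_periods])
best_period = trial_periods[np.argmin(chi2_vals)]
```

Note that folding at an integer multiple of the true period also preserves structure, so in practice the trial-period grid and the power spectrum are used together to reject harmonics and subharmonics.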
For data with noise properties that are not precisely Gaussian, the PDM technique can still be used, with the help of simulations to determine the correct relationship between $`\mathrm{\Delta }\chi ^2`$ above the minimum and confidence levels (van der Klis, private communication; Press et al. (1992)). We performed such simulations for each light curve by repeatedly injecting a fake signal with a known period into the data, and then examining the distribution of periods resulting from the PDM method. The period of the oscillation as determined from each observation individually is shown in the last column of Table 1, where the quoted errors are roughly 68% confidence limits. The measurements from observations with more than one night of observing are more precise than single night observations because of the longer baseline. The period measurements from all observations are consistent, indicating that the period was stable to within less than 15 seconds, or 1% of the period, for 1 year and during an outburst of the system. More accurate determination of the oscillation period in Z Andromedae by connecting the data from adjacent observations will allow for important orbital time delay measurements. Given the system inclination of $`i=47^{}`$ and taking the total system mass to be $`M_{\mathrm{t}\mathrm{o}\mathrm{t}}=2M_{}`$, which is typical for a symbiotic (Schmid & Schild (1997), and references therein), the light travel time across the WD orbit is $`(a_{\mathrm{W}\mathrm{D}}\mathrm{sin}i)/c\simeq 12.2\mathrm{min}(\mathrm{sin}i/0.73)(M_{\mathrm{t}\mathrm{o}\mathrm{t}}/2M_{})^{1/3}(1+M_{\mathrm{W}\mathrm{D}}/M_{\mathrm{R}\mathrm{G}})^{-1}`$, where $`a_{\mathrm{W}\mathrm{D}}`$ is the distance from the WD to the center of mass, $`M_{\mathrm{W}\mathrm{D}}`$ is the mass of the WD, and $`M_{\mathrm{R}\mathrm{G}}`$ is the mass of the red giant. The peaks at twice the fundamental frequency in the early data indicate that the pulse profile deviates from a sinusoid.
Pulse profiles created by folding the light curves from observations #3, #4, and #7 at 1683 seconds are shown in Figure 4. The pulse fraction decreased monotonically as the outburst decayed, from $`\sim `$5 mmag peak-to-peak in 1997 July and August (observations #1 – #3) to $`\sim `$2 mmag peak-to-peak in 1997 December (observation #7). In 1998 June the oscillation was detected at the 2 mmag level. ## 3. The Case for Magnetic Accretion We interpret the 28-minute oscillation as the result of rotation of a white dwarf which has a strong enough magnetic field to channel the accretion flow onto its magnetic polar caps, as in the DQ Herculis systems (Patterson (1994)). Non-radial g-mode pulsations of a hot WD that is similar to a planetary nebula nucleus (PNN) are another plausible explanation of the oscillation, especially since g-mode pulsations with periods close to 28 minutes have been observed in several PNN (Ciardullo & Bond 1996). However, these systems are multiperiodic and have frequencies that change on month-long time scales. The Z Andromedae emission oscillated at only a single, constant frequency for an entire year, as well as throughout an outburst during which conditions in the WD envelope presumably changed significantly. Therefore, we conclude that WD g-mode pulsations are unlikely to be the cause of the oscillation. The period of the oscillation is too long to be due to an acoustic (p-mode) pulsation in a WD, and too short to be due to a g-mode pulsation in a main sequence star with $`M\sim 0.65M_{}`$. A p-mode pulsation in a main sequence star is not formally ruled out, but again one would expect more than a single mode to be present. Therefore, the period, its stability and coherence, and the fact that only one period is detected all support the WD magnetic dipole rotator model.
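Before writing down the analytic scalings, the dipole-rotator numbers can be sketched directly. The sketch below works in cgs units and uses the fiducial values adopted in the scaling relations of the text ($`R=10^9`$ cm, $`\dot{M}=10^{-8}M_{}`$ yr<sup>-1</sup>, $`M_{\mathrm{W}\mathrm{D}}=0.65M_{}`$, $`P_\mathrm{s}=28`$ min); it is an order-of-magnitude check, not a measurement.

```python
import math

# Fiducial values (cgs), matching the scaling relations quoted in the text.
G = 6.674e-8                      # cm^3 g^-1 s^-2
M_sun = 1.989e33                  # g
M_wd = 0.65 * M_sun               # g
R = 1e9                           # cm, white dwarf radius
Mdot = 1e-8 * M_sun / 3.156e7     # g/s, for 1e-8 M_sun per year
P_s = 28 * 60.0                   # s, spin period

# Corotation (Kepler) radius for spin equilibrium:
r_mag = (G * M_wd * P_s**2 / (4 * math.pi**2)) ** (1.0 / 3.0)   # ~0.26 R_sun

# Dipole moment that puts the magnetosphere there, inverting
# r_mag = (mu^4 / 2 G M Mdot^2)^(1/7), then the cap field from mu = B R^3 / 2:
mu = (2 * G * M_wd * Mdot**2 * r_mag**7) ** 0.25
B_s = 2 * mu / R**3               # G, of order a few x 10^6

# Spin-up time to reach this equilibrium:
I = M_wd * R**2 / 5.0                       # moment of inertia, uniform-sphere estimate
N = Mdot * math.sqrt(G * M_wd * r_mag)      # accretion torque
t_spinup = 2 * math.pi * I / (N * P_s) / 3.156e7   # years, ~5e4 yr
```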
The minimum magnetic field strength at the dipolar cap, $`B_\mathrm{S}`$, that is needed to funnel the accretion onto the star can be roughly estimated by requiring that the magnetospheric radius, $`r_{\mathrm{mag}}`$, be larger than the white dwarf radius, $`R`$. At the magnetospheric radius, the magnetic field pressure is comparable to the ram pressure of the in-falling material, giving the standard $`r_{\mathrm{mag}}\simeq (\mu ^4/2GM_{\mathrm{W}\mathrm{D}}\dot{M}^2)^{1/7}`$, where $`\mu =B_\mathrm{S}R^3/2`$ is the magnetic dipole moment. This then leads to a minimum field $`B_\mathrm{S}`$ $`\gtrsim `$ $`3\times 10^4\mathrm{G}\left({\displaystyle \frac{10^9\mathrm{cm}}{R}}\right)^{5/4}\left({\displaystyle \frac{\dot{M}}{10^{-8}M_{}\mathrm{yr}^{-1}}}\right)^{1/2}`$ $`\text{ }\times \left({\displaystyle \frac{M_{\mathrm{W}\mathrm{D}}}{0.65M_{}}}\right)^{1/4}.`$ If the WD has been spun up so that it is in a rotational equilibrium, with its spin period, $`P_\mathrm{s}`$, equal to the Kepler period at $`r_{\mathrm{mag}}`$, then the magnetospheric radius would be $`r_{\mathrm{mag}}`$ $`=`$ $`\left({\displaystyle \frac{GM_{\mathrm{W}\mathrm{D}}P_\mathrm{s}^2}{4\pi ^2}}\right)^{1/3}`$ $`\simeq `$ $`0.26R_{}\left({\displaystyle \frac{M_{\mathrm{W}\mathrm{D}}}{0.65M_{}}}\right)^{1/3}\left({\displaystyle \frac{P_\mathrm{s}}{28\mathrm{min}}}\right)^{2/3},`$ and the field strength needed to have the magnetosphere at this radius is roughly $`B_\mathrm{S}`$ $`\simeq `$ $`6\times 10^6\mathrm{G}\left({\displaystyle \frac{\dot{M}}{10^{-8}M_{}\mathrm{yr}^{-1}}}\right)^{1/2}\left({\displaystyle \frac{10^9\mathrm{cm}}{R}}\right)^3`$ $`\text{ }\times \left({\displaystyle \frac{M_{\mathrm{W}\mathrm{D}}}{0.65M_{}}}\right)^{5/6}.`$ The time it takes for the white dwarf to reach this rotational equilibrium is $`t_{\mathrm{spin}\mathrm{up}}=2\pi I/NP_\mathrm{s}\simeq 5\times 10^4\mathrm{yr}(28\mathrm{min}/P_\mathrm{s})(10^{-8}M_{}\mathrm{yr}^{-1}/\dot{M})`$, where $`I\simeq M_{\mathrm{W}\mathrm{D}}R^2/5`$ is the WD moment of inertia and
$`N=\dot{M}(GM_{\mathrm{W}\mathrm{D}}r_{\mathrm{mag}})^{1/2}`$ is the accretion torque. This spin-up time is shorter than the lifetime of the red giant, so it is likely that the system has reached this form of equilibrium. ## 4. Conclusions and Implications for the Outburst Mechanism Our survey has so far yielded one persistent periodic oscillator. The oscillation was detected on all 8 occasions when the source was observed over the course of one year, and the period, $`P=1682.6\pm 0.6`$ s, was stable to within our measurement errors. We interpret this oscillation in terms of magnetic accretion onto a rotating WD. This detection is the first of its kind for a symbiotic, and it comes from an object, Z Andromedae, in which no other phenomena thought to be associated with magnetism have been observed. Outburst mechanisms need to be reconsidered in light of this discovery, and as we now elaborate, accretion disk instabilities look to be a promising source for the outbursts. The detection of an oscillation that originates at the WD surface during an outburst has serious consequences for models of the outburst mechanism in Z Andromedae. Most models (Mikołajewska & Kenyon (1992), and references therein) invoke dramatic expansion of the WD photosphere, for example as the result of a thermonuclear shell flash or a change in $`\dot{M}`$ onto a nearly stably burning hydrogen layer. Evidence for such expansion and the subsequent decrease in the effective temperature of the WD includes a decrease in the strength of high ionization state emission lines, line broadening, increased opacity as measured by line ratios, the appearance of an A-F-type spectrum, and direct luminosity estimates, all during outburst (Fernández-Castro et al. (1995), Mikołajewska & Kenyon (1996)). Mikołajewska & Kenyon (1996) deduced that the radius of the hot component increased by a factor of $`100`$ during previous outbursts of Z Andromedae. 
However, they also noted a few problems with the shell flash/photospheric expansion model. The HeII emission lines evolve in a different manner than other emission lines during outbursts. Therefore, the outburst spectra are inconsistent with an evolving single temperature model for the WD. Another problem for thermonuclear runaway and steady burning shell expansion models is the time scales. It is difficult to reconcile theoretical photospheric expansion time scales and shell flash recurrence time scales with the observations (Mikołajewska & Kenyon (1992), and references therein), especially for a low-mass WD (although see Sion & Ready). Our detection of an oscillation from a region that would be hidden by an expanded photosphere is another phenomenon that is difficult to reconcile with models involving photospheric expansion. Our observations do not provide information about the temperature evolution of the hot component, so it is possible that the 1997 outburst was significantly different from previous outbursts. The 1997 outburst was smaller and more asymmetric than either of two well-studied outbursts in 1984 and 1986, which rose to $`V\sim 9.6`$ and $`V\sim 9.1`$, respectively, compared to $`V\sim 9.7`$ for the 1997 outburst. Based upon multiwavelength observations, Fernández-Castro et al. (1995) suggested that the 1984 and 1986 events were similar, but that a less massive shell was ejected during the 1984 outburst. The 1984 event produced a smaller increase in opacity (Fernández-Castro et al. (1995)), so a correlation between $`V`$ at the outburst peak and the nature of the outburst might exist. Another outburst mechanism that has been discussed for SS, although usually not for systems that contain WDs, is thermal accretion-disk instabilities (DI; Duschl 1986a, b) like those that lead to dwarf nova eruptions in CVs (Osaki (1996)). DI models have not been considered prime candidates for explaining the outbursts in WD SS for several reasons.
First of all, there is little direct evidence for disks around WDs in SS. Disks are not needed in spectral fits (Murset et al. (1991)) and double peaked line profiles cannot be definitively linked to disk emission (Robinson et al. (1994)). Furthermore, disk instabilities alone may not provide sufficient energy to explain the observed flux increases (Kenyon (1986)). Disk instability models can, however, produce time scales that are more in accordance with the durations and recurrence times seen in SS outbursts than thermonuclear runaway models. We will explore the possibility that disk instabilities play an important role in SS outbursts more fully in a separate paper. There are several points that are important to note here, however. Most importantly, if a large disk does exist around the WD in Z Andromedae, it would be thermally unstable (Duschl 1986b; Meyer-Hofmeister (1992)). Secondly, during a DI-induced outburst, the emission region close to the WD could remain exposed, as appears to be the case during the most recent outburst of Z Andromedae. Finally, the presence of a quasi-steady burning layer on the WD may affect the energetics of the outburst resulting from a disk instability. We would like to thank W. Ho for help with the observations, as well as D. Chakrabarty, M. Eracleous, M. van der Klis, and G. Ushomirskiy for useful discussions. The work of W. Deitch modifying the timing system at the Nickel telescope was also greatly appreciated, as was the assistance of T. Misch and R. Stone. This work was supported by the California Space Institute (CS-45-97) and by a Hellman Family Faculty Fund (UC-Berkeley) award to L. B.
# The Ray Bundle method for calculating weak magnification by gravitational lenses ## 1 Introduction Gravitational lensing is the study of the effects of matter on the propagation of light. The most obvious observational results are the production of multiple images (as first seen with the multiply imaged quasar 0957+561 by Walsh, Carswell & Weymann \[Walsh et al. 1979\]), the creation of giant luminous arcs (first identified in the galaxy clusters Abell 370 and Cl 2244 by Lynds & Petrosian \[Lynds & Petrosian 1986\] and, independently, in Abell 370 by Soucail et al. \[Soucail et al. 1987\]) and the large magnifications of source flux seen in microlensing events (for example, brightening of a single image of the multiply-imaged quasar 2237+0305 due to compact objects in the lensing galaxy, first detected by Irwin et al. \[Irwin et al. 1989\]). The paths of light rays from a source to the observer are conveniently described with the gravitational lens equation (Equation (1) below). This equation is highly non-linear, so that, except for a small number of specific cases, there are no analytic solutions. In particular, there is no straightforward result which determines the image locations or magnifications for an ensemble of many lenses. This presents a serious problem when we wish to study the lensing properties of a complex lensing structure such as the Universe. Fortunately, a number of approximate numerical methods have been developed which allow us to calculate magnifications and other properties of a collection of lenses. Foremost amongst these are the Ray Shooting methods, introduced by Paczyński \[Paczyński 1986\] and Kayser, Refsdal & Stabell \[Kayser et al. 1986\] and developed by Schneider & Weiss \[Schneider & Weiss 1987\], Kayser et al. \[Kayser et al. 1989\] and Lewis et al. \[Lewis et al. 1993\]. 
For a discussion of other methods, such as the use of a scalar deflection potential, and the optical scalar equations, see Schneider, Ehlers & Falco \[Schneider et al. 1992\]. In this paper, we present an alternative technique for calculating the magnification properties of an ensemble of lenses – the Ray Bundle method (RBM). The RBM is particularly well suited to studies of the weak lensing limit, where we are not concerned with the creation of multiple (comparably bright) images. Like the Ray Shooting method, the RBM uses backwards propagation of light rays from the observer to the source, which are deflected by the distribution of lenses, and are mapped to the source plane. Whereas the Ray Shooting method collects the deflected light rays within a rectangular grid of pixels, we consider an infinitesimal bundle of rays which form a circular image, and maintain this association to the source plane. At first sight, the Ray Bundle method may seem to be less computationally efficient than the Ray Shooting method, as a grid based technique lends itself to (fast) hierarchical calculations \[Wambsganss 1990\]. However instead of requiring $`\stackrel{>}{}\mathrm{\hspace{0.17em}100}`$ light rays per source grid to reduce the statistical error, we need only use a bundle of $`N_{\mathrm{ray}}=8`$ rays to obtain magnifications which are correct to better than 5 per cent for an equivalent source size. With a Ray Shooting method, we have both image and source pixels, but cannot easily determine the correspondence between them. By keeping track of the individual light bundles with the RBM, we are able to monitor the shape distortions of the beam caused by the shear and convergence of a lens ensemble. This is of particular interest when the RBM is applied to multiple lens plane geometries (as are conventionally used for studying cosmologically distributed lenses). 
Details of the beam shape allow comparisons to be made between results calculated with the gravitational lens equation, and those using the optical scalar equations (which are more easily applied to smooth mass distributions, or approximate mass distributions such as the Swiss cheese model \[Einstein & Straus 1945\]). In Section 2 we introduce various basic results of gravitational lensing, particularly with regard to magnification and the magnification probability distribution. The Ray Bundle method is introduced in Section 3 and compared in detail with analytic solutions for the Schwarzschild lens model. By obtaining the magnification probability distribution with both the Ray Bundle and Ray Shooting methods for a variety of lens geometries, we demonstrate the general applicability of the RBM. We do not discuss applications of the RBM here, but reserve details for Fluke, Webster & Mortlock (in preparation), where the Ray Bundle method is used to investigate weak lensing within realistic cosmological models (generated with N-body simulations). ## 2 Gravitational Lensing We present here a number of important results from gravitational lensing which we will require: the gravitational lens equation, the magnification of source flux and the magnification probability distribution. Several excellent sources exist which describe gravitational lensing far more comprehensively than may be discussed here, for example Schneider et al. \[Schneider et al. 1992\] and Narayan & Bartelmann \[Narayan & Bartelmann 1996\]. ### 2.1 The gravitational lens equation The deflection of a light ray by a massive object is conveniently expressed with the gravitational lens equation (GLE), which may be derived from simple geometrical arguments (as demonstrated in Figure 1).
The GLE relates the impact parameter, $`𝝃`$, in the lens (deflector) plane, the source position, $`𝜼`$, in the source plane and the deflection angle, $`\widehat{𝜶}(𝝃)`$, of the light ray: $$𝜼=\frac{D_{\text{os}}}{D_{\text{od}}}𝝃-D_{\text{ds}}\widehat{𝜶}(𝝃).$$ (1) The distances ($`D_{\text{ij}}`$) in Equation (1) are angular diameter distances between the \[o\]bserver, \[d\]eflector and \[s\]ource planes. For a given source position, an image will occur for each value of $`𝝃`$ which is a solution of Equation (1). We define the lens axis as the line from the observer through a single lens (at the origin of the lens plane), and which is perpendicular to both the lens and source planes. The GLE may be recast into a dimensionless form by introducing the scaling lengths $`\xi _0`$ and $`\eta _0=\xi _0D_{\text{os}}/D_{\text{od}}`$. Defining $`𝒙=𝝃/\xi _0`$, $`𝒚=𝜼/\eta _0`$ and the dimensionless deflection angle $$𝜶(𝒙)=\frac{D_{\text{od}}D_{\text{ds}}}{D_{\text{os}}\xi _0}\widehat{𝜶}(\xi _0𝒙)$$ (2) it follows that $$𝒚=𝒙-𝜶(𝒙).$$ (3) ### 2.2 Magnification For a bundle of light rays passing through a transparent lens, the number of photons is conserved, i.e. gravitational lensing does not change the specific intensity of the source. A change in flux, however, can occur as the cross-sectional area of a bundle of light rays will be affected by a gravitational lens. The change in the apparent luminosity is entirely due to the change in the solid angle that the image covers (at the expense of the rest of the sky) – with a lens present, more photon trajectories are brought to the observer’s eye than if there were no lens \[Dyer & Roeder 1981\].
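For readers who wish to experiment, the dimensionless lens equation (3) can be coded directly. The following is a minimal sketch (hypothetical illustration, not the implementation used in this paper), assuming the Schwarzschild deflection $`𝜶(𝒙)=𝒙/|𝒙|^2`$ derived in Appendix A, with all lengths in units of the Einstein radius.

```python
import numpy as np

def propagate(x):
    """Backward ray propagation, Equation (3): y = x - alpha(x), assuming
    the Schwarzschild deflection alpha(x) = x/|x|^2 (see Appendix A),
    with all lengths in units of the Einstein radius."""
    x = np.asarray(x, dtype=float)
    return x - x / np.dot(x, x)

# A ray striking the image plane on the Einstein ring (|x| = 1) maps back
# to the caustic point y = 0; an image at x = (2, 0) maps to y = (1.5, 0).
y_ring = propagate([1.0, 0.0])
y_off = propagate([2.0, 0.0])
```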
If the flux at a frequency $`\nu `$ is $`S_\nu =I_\nu d\mathrm{\Omega }_{\mathrm{obs}}`$, where $`I_\nu `$ is the specific intensity and $`d\mathrm{\Omega }_{\mathrm{obs}}`$ is the solid angle subtended by the source at the observer’s location, then the magnification is $$|\mu |=\frac{S_\nu }{S_\nu ^{\prime}}=\frac{\mathrm{d}\mathrm{\Omega }_{\mathrm{obs}}}{\mathrm{d}\mathrm{\Omega }_{\mathrm{obs}}^{\prime}}$$ (4) with primes denoting quantities when the lens is absent. It is usual to measure the magnification with respect to either a ‘full beam’ or an ‘empty beam’. The full beam refers to the case where matter is smoothly distributed everywhere (inside and outside of a bundle of light rays), so that there is no shear and the magnification is due entirely to convergence. An empty beam corresponds to the case where there is a smooth distribution of matter (on average) external to the beam, and no matter within the beam. The empty beam is the maximally divergent beam, as there is now no magnification due to convergence, and the minimum magnification is $`\mu _{\mathrm{empty}}=1`$. The addition of material outside of the beam will result in a total magnification $`\mu \ge 1`$ due to shear \[Schneider 1984\]. In the work that follows, we will calculate magnifications with respect to an empty beam. ### 2.3 The magnification probability distribution The (differential) magnification probability, $`p(\mu ,z)\mathrm{d}\mu `$, is the probability that the total magnification of a source at redshift $`z`$ will lie in the range $`\mu `$ to $`\mu +\mathrm{d}\mu `$. The probability is subject to the constraints $$\int _1^{\infty }p(\mu ,z)\,\mathrm{d}\mu =1$$ (5) and $$\int _1^{\infty }p(\mu ,z)\,\mu \,\mathrm{d}\mu =\mu (z),$$ (6) where $`\mu (z)`$ is the mean magnification. For a universe where the matter is smoothly distributed, $`\mu (z)=1`$, while for the empty beam magnifications considered here, $`\mu (z)>1`$.
Equation (5) provides the normalisation of the probability, while Equation (6) expresses the conservation of flux \[Weinberg 1976\]. These constraints are only strictly true when we consider the magnification probability over the whole sky – in the tests we conduct here, we use artificial lens distributions that do not cover the entire celestial sphere. The probability distribution provides a straightforward way of making a comparison between different techniques for solving the GLE. Although we might expect the calculation of the magnification along a particular line-of-sight to vary slightly (depending on source geometry, etc.), the statistical properties of a distribution of sources should be essentially independent of the method. In practice, we will use a histogram of the probability as a function of $`\mu `$. This involves summing over a range of magnifications from $`\mu _1`$ to $`\mu _2=\mu _1+\mathrm{\Delta }\mu `$, so that each histogram bin represents $$p(\mu _1,z)\mathrm{\Delta }\mu =\int _{\mu _1}^{\mu _2}p(\mu ,z)\,\mathrm{d}\mu .$$ (7) In the following, we will refer to this as the magnification probability histogram (MPH). ### 2.4 Solving the gravitational lens equation The GLE is highly non-linear, and in general there are multiple solutions for the image locations for a single source position. As a result, analytic solutions (where we can invert the GLE to solve for all $`𝝃`$ as a function of $`𝜼`$) exist for only special lens geometries and models, which may not always be realistic or practical. Instead, various numerical methods have been developed which enable solutions to the lens equation to be found accurately and efficiently. Some of the most widely used methods are those based on the concept of Ray Shooting. The basic principle of the Ray Shooting method (RSM) is to use Equation (1) to propagate light rays backwards from the observer through a sequence of one or more lens planes to the source plane, where the rays are collected on a rectangular pixel grid.
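As an illustrative sketch (not from the paper's own code), the MPH of Equation (7) may be built by binning sampled magnifications into logarithmically spaced bins; the samples below are synthetic stand-ins with a $`p(\mu )\propto \mu ^{-3}`$-like tail, not the output of either ray-tracing method.

```python
import numpy as np

# Build an MPH as in Equation (7): bin magnification samples into
# logarithmically spaced bins and normalise by the number of samples.
# The samples are synthetic stand-ins (a p(mu) ~ mu^-3 tail with mu >= 1),
# not the output of either ray-tracing method.
rng = np.random.default_rng(0)
mu = 1.0 + rng.pareto(2.0, size=10_000)

bins = np.logspace(0.0, 2.0, 41)        # equally spaced in log(mu)
counts, edges = np.histogram(mu, bins=bins)
mph = counts / mu.size                  # normalised N(mu) per bin
```

With logarithmic bins the ratio $`\mu _1/\mu _2`$ is the same for every bin, which is what makes the expected power-law slope of the histogram well defined (Appendix A).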
Typically $`10^6`$ pixels are required, with an average of $`\overline{N}\approx 100`$ rays per source plane pixel. The magnification in each source pixel $`(i,j)`$ is then proportional to the density of rays collected therein: $$\mu _{\mathrm{pixel}}(i,j)=N_{\mathrm{collected}}(i,j)/\overline{N}.$$ (8) A shooting region in the lens plane is chosen such that only a few rays from outside of this grid are mapped onto one of the sources, otherwise there would be missing flux. By using a uniform grid of image rays, the relative error is approximately $`\overline{N}^{-3/4}`$ \[Kayser et al. 1986\], which is better than the Poisson error if a random distribution of image rays were used (however a regular grid may introduce systematic errors). As a consequence of this error, it is possible to get magnifications $`\mu _{\mathrm{pixel}}(i,j)<1`$, violating the condition on the total magnification (Section 2.2), which is entirely a numerical effect. Early versions of this method were introduced by Paczyński \[Paczyński 1986\] and Kayser et al. \[Kayser et al. 1986\] and developed by Schneider & Weiss \[Schneider & Weiss 1987\] and Kayser et al. \[Kayser et al. 1989\]. Hierarchical tree methods were applied to microlensing scenarios by Wambsganss \[Wambsganss 1990\] and Wambsganss, Paczyński & Katz \[Wambsganss et al. 1990\]. The hierarchical methods approximate the effects of lenses which are far from a light ray, and allow the inclusion of many thousands of lenses at a low computational cost (O($`N\mathrm{log}_2N`$) for a tree-code with N lenses, versus O($`N^2`$) when the contribution of every lens is explicitly calculated). An improvement on conventional Ray Shooting methods for obtaining statistical properties of microlensing light curves can be made with the efficient one-dimensional contour following algorithm of Lewis et al. \[Lewis et al. 1993\]. We now introduce the Ray Bundle method as an alternative to the RSM.
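A toy, non-hierarchical version of the Ray Shooting scheme just described might look as follows in Python (a hypothetical sketch for a single Schwarzschild lens; the grid sizes are scaled-down placeholders, chosen so that $`\overline{N}=100`$).

```python
import numpy as np

# Toy Ray Shooting for a single Schwarzschild lens: shoot a uniform grid of
# rays over a region twice the size of the source plane (to limit missing
# flux near the boundary), map each with y = x - x/|x|^2, and bin on a
# source-plane pixel grid. Grid sizes are scaled-down placeholders.
n_pix, y_max, n_grid = 40, 3.0, 800
g = np.linspace(-2 * y_max, 2 * y_max, n_grid)
x1, x2 = np.meshgrid(g, g)
r2 = x1**2 + x2**2
y1, y2 = x1 - x1 / r2, x2 - x2 / r2

H, _, _ = np.histogram2d(y1.ravel(), y2.ravel(),
                         bins=n_pix, range=[[-y_max, y_max]] * 2)
# Mean ray count per pixel in the absence of the lens:
n_bar = n_grid**2 / (4 * y_max)**2 * (2 * y_max / n_pix)**2   # = 100 rays
mu_pixel = H / n_bar                                          # Equation (8)
```

Pixels far from the lens come out with $`\mu \approx 1`$, while the central pixels (sources near the caustic point) collect many more rays than $`\overline{N}`$ and are strongly magnified.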
## 3 The Ray Bundle Method The Ray Bundle method (RBM) is similar to the RSM, in that the lens equation is used to propagate light rays backwards from the observer to the source plane. However we now consider a bundle consisting of a central ray (the null geodesic) surrounded by $`N_{\mathrm{ray}}`$ light rays, which create an image shape (usually circular). As the ray bundle passes through the lens distribution, its shape will be distorted due to shear (stretching along an axis) and convergence (focusing due to matter within the beam). For an ‘infinitesimal’ ray bundle, the magnification is determined from Equation (4) by calculating the area of the bundle in the image plane ($`d\mathrm{\Omega }_{\mathrm{obs}}`$) and the source plane ($`d\mathrm{\Omega }_{\mathrm{obs}}^{\prime}`$). Since we are using backwards ray-tracing through a single image position, we do not know where other images may occur – hence any measurement of the magnification using the RBM will underestimate the total magnification. However, this is only a significant problem when we consider images located near the critical curves, when the contribution to the total magnification due to any other images becomes important (see below). The RBM was developed for applications in the weak lensing limit, and should be used with caution for strong lensing cases. ### 3.1 Comparison with analytic solutions We can investigate the validity of the RBM by comparison with the various analytic solutions which exist for the Schwarzschild lens (see Appendix A for a summary). Consider first a circular source of radius $`R_\mathrm{s}`$ with centre at $`𝒚_\mathrm{c}=(y_{1,\mathrm{c}},y_{2,\mathrm{c}})`$. The circumference of the source is then described by the set of vectors $`𝒚=(y_1,y_2)`$ with $`y_1`$ $`=`$ $`y_{1,\mathrm{c}}+R_\mathrm{s}\mathrm{cos}(\varphi )`$ $`y_2`$ $`=`$ $`y_{2,\mathrm{c}}+R_\mathrm{s}\mathrm{sin}(\varphi )`$ (9) where $`0\le \varphi <2\pi `$.
For each $`𝒚`$, we can solve for the two solutions, $`𝒙_\pm `$, with Equation (19). In this case we are using the GLE to map from the source plane to the image plane. A source far from the lens axis produces one highly demagnified image ($`\mu _{\mathrm{faint}}`$) located near the lens axis (at $`𝒙_{\mathrm{faint}}`$). The second image will have a magnification $`\mu _{\mathrm{bright}}\approx 1`$, and an angular position near the source (at $`𝒙_{\mathrm{bright}}`$). As the source is moved towards the lens axis, the images are stretched in the tangential direction<sup>1</sup><sup>1</sup>1The Schwarzschild lens has a single (degenerate) caustic point at $`y=0`$, which corresponds to a tangential critical curve at $`x=1`$ (the Einstein ring). and become comparable in brightness ($`|\mu _{\mathrm{faint}}|\approx |\mu _{\mathrm{bright}}|\gg 1`$ as $`𝒚_\mathrm{c}\to 0`$). When $`𝒚_\mathrm{c}=0`$, the two images merge into a highly magnified ring (the Einstein ring) with total magnification given by Equation (24). Now consider a circular image of radius $`R_\mathrm{i}`$ centred on the location of the bright image, $`𝒙_\mathrm{c}=𝒙_{\mathrm{bright}}(𝒚_\mathrm{c})`$, with circumferential points $`𝒙=(x_1,x_2)`$ $`x_1`$ $`=`$ $`x_{1,\mathrm{c}}+R_\mathrm{i}\mathrm{cos}(\varphi )`$ $`x_2`$ $`=`$ $`x_{2,\mathrm{c}}+R_\mathrm{i}\mathrm{sin}(\varphi ).`$ (10) This time, the GLE maps in the opposite direction – from the image plane to the source plane. The source shape we obtain is stretched along the radial direction, and differentially compressed in the tangential direction. Figure 2 demonstrates the differences between the shape and locations of a circular source (solid line), for which the circumferential points may be determined for both images, and a circular image and its corresponding source (short dashed line). In this example, we have considered a source which is near the Einstein radius where strong lensing effects dominate, and there may be a significant contribution to the flux from the second image.
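The source-to-image mapping of Equations (9) and (19) can be sketched as follows (a hypothetical Python illustration, using the fact that for the Schwarzschild lens both images are co-linear with the source and the lens).

```python
import numpy as np

def images_schwarzschild(y_vec):
    """Both image positions for a point source at y_vec, Equation (19).
    The images are co-linear with the lens, so the radial equation is
    solved and the direction of y_vec retained (x_- < 0 places the faint
    image on the opposite side of the lens)."""
    y = np.linalg.norm(y_vec)
    u = y_vec / y
    x_plus = 0.5 * (y + np.sqrt(y**2 + 4.0))
    x_minus = 0.5 * (y - np.sqrt(y**2 + 4.0))
    return x_plus * u, x_minus * u

# Circumference of a circular source, Equation (9), mapped to both images.
phi = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
y_c, R_s = np.array([2.0, 0.0]), 0.05
src = y_c + R_s * np.column_stack([np.cos(phi), np.sin(phi)])
imgs = [images_schwarzschild(p) for p in src]
bright = np.array([i[0] for i in imgs])
faint = np.array([i[1] for i in imgs])
```

Propagating either set of image points back with $`𝒚=𝒙-𝒙/|𝒙|^2`$ recovers the original source circumference, which is a useful internal consistency check.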
In all cases where we will apply the RBM, the image is chosen to be well away from the Einstein radius (critical curve), and so the flux lost from the second image is not important, as we now show. ### 3.2 The magnification deficit We now look at how accurately the RBM approximates the total magnification, even though it includes the contribution of only one image. If we set $`R_\mathrm{s}=R_\mathrm{i}`$, as we expect only small changes to the shape and hence radius of the image in the weak lensing limit ($`|𝒙_\mathrm{c}|\gg 1`$), then this is a two parameter problem ($`R_\mathrm{i},N_{\mathrm{ray}}`$). Defining the RBM magnification in terms of the ray bundle image and source areas ($`A_{\mathrm{i},\mathrm{RBM}},A_{\mathrm{s},\mathrm{RBM}}`$) as $$\mu _{\mathrm{RBM}}=\frac{A_{\mathrm{i},\mathrm{RBM}}}{A_{\mathrm{s},\mathrm{RBM}}}$$ (11) and the true (total) magnification as $$\mu _{\mathrm{true}}=\frac{A_{\mathrm{faint}}+A_{\mathrm{bright}}}{A_\mathrm{s}},$$ (12) where $`A_{\mathrm{faint}}`$, $`A_{\mathrm{bright}}`$ are the areas of the two images, then the relative error in $`\mu _{\mathrm{RBM}}`$ is $$\frac{\mathrm{\Delta }\mu _{\mathrm{RBM}}}{\mu _{\mathrm{true}}}=\frac{|\mu _{\mathrm{true}}-\mu _{\mathrm{RBM}}|}{\mu _{\mathrm{true}}}.$$ (13) Due to the circular symmetry of the Schwarzschild lens model, we need only determine the radius, $`x_{\mathrm{cut}}`$, within which the RBM produces a relative error $`\frac{\mathrm{\Delta }\mu _{\mathrm{RBM}}}{\mu _{\mathrm{true}}}>p`$ per cent. By using $`N_{\mathrm{ray}}`$ rays in the image and source bundles, we are approximating the shape of a circular image/source by a polygon with $`N_{\mathrm{ray}}`$ sides. Clearly, when $`N_{\mathrm{ray}}\gg 1`$, we will have a reasonable approximation to the true shape of the image/source. However, to improve the speed of the ray bundle method (at the cost of a small error), we ideally want to select a small value of $`N_{\mathrm{ray}}`$ ($`\lesssim 20`$).
The areas are calculated as a sum of triangular components within the image/source polygon, where each triangle has a common vertex at $`𝒚_\mathrm{c}`$ or $`𝒙_\mathrm{c}`$ (i.e. the null geodesic). We first need to check that the calculated $`\mu _{\mathrm{true}}`$ is not significantly in error using a particular value of $`N_{\mathrm{ray}}`$. This was achieved by numerically integrating Equation (23), and comparing with $`\mu _{\mathrm{true}}`$ for $`4\le N_{\mathrm{ray}}\le 256`$. The relative error in $`\mu _{\mathrm{true}}`$ was found to be independent of the choice of $`N_{\mathrm{ray}}`$, and was well below the percentage cut-off level selected for $`\mu _{\mathrm{RBM}}`$ at the same radii. For source positions which are near the lens axis, it is difficult to keep track of which solutions $`x_\pm `$ belong to which of the images, particularly when merging of images is taking place. In these cases, $`\mu _{\mathrm{true}}`$ is a misnomer, as it can be significantly higher than the total magnification from Equation (23), as may be seen in Figure 3. $`\mu _{\mathrm{RBM}}`$ is subject to a similar error, as the areas here are also being calculated on the basis of $`N_{\mathrm{ray}}`$ rays. We can proceed under the assumption that $`\mu _{\mathrm{true}}`$ is sufficiently accurate for comparison with $`\mu _{\mathrm{RBM}}`$ when the two images are separable. By randomly selecting $`5000`$ source (RBM image) locations, we calculate $`\mu _{\mathrm{true}}`$ and $`\mu _{\mathrm{RBM}}`$. Table 1 shows the dimensionless radius, $`x_{\mathrm{cut}}`$, within which $`\mathrm{\Delta }\mu _{\mathrm{RBM}}/\mu _{\mathrm{true}}>p`$ per cent for a range of source radii ($`10^{-5}<R_\mathrm{i}<0.01`$). For a given value of the relative error, these results are essentially independent of both the source radii and $`N_{\mathrm{ray}}`$ in the range $`4\le N_{\mathrm{ray}}\le 256`$.
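The triangle-fan area evaluation just described, combined with Equation (11), can be sketched in a few lines; the following hypothetical Python illustration again assumes the dimensionless Schwarzschild deflection $`𝜶(𝒙)=𝒙/|𝒙|^2`$.

```python
import numpy as np

def fan_area(centre, ring):
    """Polygon area as a sum of triangles sharing the vertex `centre`
    (the null geodesic), via the 2-D cross product of the triangle edges."""
    a = ring - centre
    b = np.roll(ring, -1, axis=0) - centre
    z = a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]
    return 0.5 * np.abs(z).sum()

def mu_rbm(x_c, R_i=1e-4, n_ray=8):
    """Ray Bundle magnification, Equation (11): map a circular image bundle
    of n_ray rays about x_c to the source plane with y = x - x/|x|^2 and
    take the ratio of image to source polygon areas."""
    x_c = np.asarray(x_c, dtype=float)
    phi = np.linspace(0.0, 2.0 * np.pi, n_ray, endpoint=False)
    img = x_c + R_i * np.column_stack([np.cos(phi), np.sin(phi)])
    src = img - img / (img**2).sum(axis=1, keepdims=True)
    y_c = x_c - x_c / np.dot(x_c, x_c)
    return fan_area(x_c, img) / fan_area(y_c, src)
```

For an image bundle at $`𝒙_\mathrm{c}=(3,0)`$, well outside the Einstein radius, this recovers the analytic bright-image magnification of Equation (20) to much better than 5 per cent, because for an infinitesimal bundle the polygon approximation error largely cancels in the area ratio.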
The choice of $`N_{\mathrm{ray}}=8`$ appears to be the best compromise between accuracy and speed for our later investigations of the RBM. The smaller $`N_{\mathrm{ray}}`$, the fewer total deflection calculations which need to be made, yet we retain high accuracy for $`\mu _{\mathrm{RBM}}`$ in the weak lensing limit. With 8 rays, we have a symmetric image with a quadrupole component, so it is possible to determine the distribution of image ellipticities. In Figure 3, we plot $`\mu _{\mathrm{RBM}}`$ (thick solid line), $`\mu _{\mathrm{true}}`$ (thin solid line) and $`\mu _{\mathrm{integral}}`$, the solution of Equation (23) (dashed line), as functions of the source impact parameter ($`y=|𝒚|`$). The source/image bundle radius in this case was $`R_\mathrm{s}=R_\mathrm{i}=0.4`$. We have used a larger radius than would be practical for the RBM in order to show in more detail what the high magnification behaviour is like near $`y=0`$. As the source is comparable in size to the Einstein radius, the calculated magnifications near $`y\approx 0.4`$ are noisy. In this case, a circular source should produce a pair of highly distorted arcs, but this is not well represented by the use of triangular components within the ray bundle (i.e. parity changes can occur in the individual triangles which make up the bundle). The vertical dotted line shows where the relative error in $`\mu _{\mathrm{RBM}}`$ reaches $`10`$ per cent, at $`y_{\mathrm{cut}}=1.17`$ or $`x_{\mathrm{cut}}=1.74`$ (note that these values are higher than in Table 1 due to the larger source radius). This test has shown that the Ray Bundle method can give a highly accurate value of the magnification (relative error better than $`5`$ per cent) in the weak lensing limit. However, we have modified the Ray Bundle method slightly by requiring that the image position agrees with the brighter image of the corresponding circular source.
In a true application of the RBM, we randomly select image positions without a priori knowledge of source locations. Figure 4 shows the RBM magnification for bundle positions near, and within, the critical curve of a single Schwarzschild lens (thick solid line). For comparison, the total magnification of a point source with one image at the same location as the RBM image is shown as the dashed line. Bundle positions within $`x=0.5`$ produce magnifications $`\mu _{\mathrm{RBM}}\ll 1`$. For these bundles we actually selected the fainter of the images, as opposed to the case considered previously, where we purposefully selected the bundle corresponding to the brighter image. Solutions of the lens equation, Equation (19), which produce an image near the lens ($`x_{-}\lesssim 1`$) correspond to total magnifications near $`\mu =1`$, as the source is far from the caustic point ($`y\approx 1/x_{-}>1`$). This is demonstrated with the thin solid line in Figure 4, where we plot ray bundles with $`\mu \ll 1`$ within the Einstein radius as $`\mu ^{\prime}=\mu +1`$. It is clear that a small strip of image locations near the critical curve is responsible for the high magnification region of the MPH, for which the Ray Bundle method is not well suited. Images well within the Einstein radius correspond to sources far from the lens axis. Such images are the faint images described in Section 3.1 (at $`𝒙_{\mathrm{faint}}`$), and so there will be a second image at $`𝒙_{\mathrm{bright}}`$ which contributes the majority of the flux. Although it is not possible to solve for $`𝒙_{\mathrm{bright}}`$ given $`𝒙_{\mathrm{faint}}`$, if the entire image plane is well sampled with ray bundles, then we can expect that another bundle will pass through $`𝒙_{\mathrm{bright}}`$ and the source magnification will be calculated on the basis of this second bundle only.
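The behaviour of the faint-image bundles can be checked against the Schwarzschild formulas of Appendix A (a hypothetical numerical check, not from the paper): Equation (20) implies the bright and faint magnifications differ by exactly one, so a faint-image bundle plotted as $`\mu ^{\prime}=\mu +1`$ recovers the bright-image magnification, and hence, far from the caustic, nearly the total.

```python
import numpy as np

# Bright and faint image magnifications from Equation (20): mu_+ - mu_- = 1
# identically, so mu' = mu_faint + 1 equals mu_bright, which in turn is
# close to the total magnification when the source is far from the caustic.
y = np.linspace(2.0, 20.0, 50)              # sources far from the caustic
ratio = (y**2 + 2.0) / (y * np.sqrt(y**2 + 4.0))
mu_bright = 0.5 * (ratio + 1.0)
mu_faint = 0.5 * (ratio - 1.0)
```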
Two restrictions are now imposed on the Ray Bundle method for its later application to ensembles of lenses, which serve to complete the definition of the method. Firstly, image positions within the Einstein radius, or equivalently, any magnifications which are calculated to be $`\mu <1`$ are discarded. Secondly, after selecting a relative error for $`\mu _{\mathrm{RBM}}`$, we do not include images which fall within $`x_{\mathrm{cut}}`$ Einstein radii of any given lens. Having shown that in the ‘weak’ lensing limit ($`|𝒙_\mathrm{c}|\gtrsim 2`$) we can be sure of calculating the magnification to within $`5`$ per cent (or better) using $`N_{\mathrm{ray}}\ge 4`$, we can now proceed to a statistical comparison between the Ray Shooting and Ray Bundle methods. ### 3.3 Comparison of magnification probability histograms Next we compare the MPH obtained with the RBM and the RSM. A number of subtle differences exist between the two methods, even when applied to the same lens model. The source size investigated is limited by the number of pixels in the source plane for the RSM. For a grid of $`N_{\mathrm{pix}}\times N_{\mathrm{pix}}`$ covering a square region $`2y_{\mathrm{max}}\times 2y_{\mathrm{max}}`$, the source ‘radius’ is $`R_\mathrm{s}=y_{\mathrm{max}}/N_{\mathrm{pix}}`$. We choose the RSM sources to be squares with a side-length equal to the diameter of the circular RBM image bundles. Since the magnification decreases as the source area is increased, we expect each RSM source to have a systematically smaller magnification than the corresponding RBM image. It is possible to use a much finer resolution grid for the ray shooting, and then integrate over a larger source size, but we have elected not to do this.
This decision was based on a comparison of the computation time: for our implementation of the RSM, a grid of $`1000\times 1000`$ pixels, with an average of $`\overline{N}=250`$ rays per pixel, took approximately eight hours of computation time. An equivalent number of images (where $`N_{\mathrm{ray}}=8`$) is completed in 1 minute with the RBM<sup>2</sup><sup>2</sup>2More computationally efficient versions of the RSM are available, which can produce the same level of resolution in a time comparable to that of the RBM (J. Wambsganss, private communication). Due to the distortion of rays near the boundary, as shown in Figure 5, we actually shoot rays with the RSM through a larger angular region in the image plane. This prevents us from including source pixels which are not well sampled by rays. The accuracy of the RSM magnifications depends on the average number of rays collected in each pixel, $`\overline{N}`$. Since we are not implementing a (fast) hierarchical method, the time required to obtain the magnification distribution is proportional to the total number of rays, $`N_{\mathrm{RSM}}=\overline{N}N_{\mathrm{pix}}^2`$. The RSM does not fully sample the highest magnification region $`(\mu \approx \mu _{\mathrm{max}})`$ due to the regular placement of sources on a grid. With the RBM image size fixed by $`R_\mathrm{i}=R_\mathrm{s}`$, we are left to choose the number of rays which make up the ray bundle, $`N_{\mathrm{ray}}`$, and the number of images. Ideally we want the images to completely cover the image plane (which again has a slightly larger angular size than the source plane due to the deflection of light rays near the boundaries) which requires $$N_{\mathrm{image}}\approx f\,\frac{(2y_{\mathrm{max}})^2}{\pi R_\mathrm{i}^2},$$ (14) where $`f`$ is (approximately) the fraction of the source plane covered by sources. For the RBM then, the total number of rays required is $`N_{\mathrm{RBM}}=N_{\mathrm{image}}\times N_{\mathrm{ray}}`$.
The comparative computational speed and resulting magnification accuracy may be obtained by requiring $`N_{\mathrm{RBM}}=N_{\mathrm{RSM}}`$. Setting $`f=1`$ and $`R_\mathrm{i}=R_\mathrm{s}`$, we have $$\overline{N}=\frac{4}{\pi }N_{\mathrm{ray}}.$$ (15) We have already seen that $`N_{\mathrm{ray}}=8`$ is a suitable choice, which gives $`\overline{N}\approx 10.2`$. This corresponds to a relative error $`\overline{N}^{-3/4}\approx 17`$ per cent. Although we can relax the constraint on the fraction of the source plane covered somewhat, and still have a well sampled MPH with the Ray Bundle method, we do not have this flexibility with the grid based Ray Shooting method. In addition, as we decrease the source size, the number of pixels required for the RSM increases, and a higher density of rays is necessary. Figure 6 shows the magnification probability histograms obtained for a single Schwarzschild lens using the Ray Shooting (thin line) and Ray Bundle (thick line) methods. The parameters for each method are listed in Table 2. The vertical axis of this (and later histograms) is the normalised number of bundles/sources in each magnification bin, $`N(\mu )`$, which is equivalent to the definition of $`p(\mu _1)\mathrm{\Delta }\mu `$ in Equation (7). A cut-off was imposed on image locations for the RBM at $`x_{\mathrm{cut}}=1.01`$ Einstein radii. The two distributions are qualitatively very similar, when the various caveats described above are considered. Imposing a larger value of $`x_{\mathrm{cut}}`$ serves to reduce the maximum magnification with the RBM. The poor sampling in the highest $`\mu `$ bins for the RSM is clearly demonstrated. The dashed line shows the expected $`\mu ^{-2}`$ power law slope of the MPH at large $`\mu `$ (see Appendix A), and both distributions have this approximate form (although for the RSM the statistical significance of the histogram bins with $`\mu >10`$ is low).
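The numbers quoted above follow directly from Equation (15); a quick arithmetic check (hypothetical helper code, not from the paper):

```python
import math

# Check of Equation (15) and the quoted error: N_ray = 8 bundle rays give an
# equivalent mean RSM ray count per pixel of about 10.2, and the RSM error
# estimate N^(-3/4) then corresponds to roughly 17 per cent.
n_ray = 8
n_bar = 4.0 / math.pi * n_ray
rel_err = n_bar ** (-0.75)
```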
As expected, the RSM produces magnifications which are $`\mu <1`$ (when $`\mu _{\mathrm{empty}}\ge 1`$ is expected) due to numerical effects. The sample mean and variance of the two distributions are $`\langle \mu \rangle =0.98`$ and $`\sigma _\mu ^2=0.03`$ for Ray Shooting, and $`\langle \mu \rangle =1.02`$ and $`\sigma _\mu ^2=0.08`$ for the Ray Bundle method<sup>3</sup><sup>3</sup>3We reiterate that in the definition of the RBM, we do not include images with $`\mu <1`$ (see end of Section 3.2). The Ray Shooting method provides higher accuracy at high magnifications, but at magnifications $`\mu \approx 1`$, the Ray Bundle method is more accurate even though flux from additional images is neglected. One aspect of the MPH we have not yet discussed has to do with the weighting we apply to each ray bundle. For the RSM, every source is a pixel with the same area. For the RBM, the initial ray bundles (images) have the same area, but the resulting sources must have different areas (by the definition of a magnification). It is sufficient to weight each ray bundle in the MPH by the area of the resulting source. Figure 7 shows the RBM (thin line) with no weighting compared with the correct area weighting (thick line). ### 3.4 More complex lens distributions For an ensemble of $`N`$ lenses in the lens plane, each with mass $`M_j`$, the total deflection angle generalises to $$\widehat{𝜶}(𝝃)=\sum _{j=1}^{N}\frac{4GM_j}{c^2}\frac{(𝝃-𝝃_j)}{|𝝃-𝝃_j|^2},$$ (16) where the $`(𝝃-𝝃_j)`$ are the impact parameters to each lens. Consider the case where lenses are restricted to lie within a rectangular region<sup>4</sup><sup>4</sup>4This is the case in common implementations of Ray Shooting as it allows for the easy implementation of fast, hierarchical methods \[Wambsganss 1990\]. Light rays passing through one of the corners of the shooting region will necessarily be deflected inwards by the mass distribution.
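In the dimensionless units used earlier, the ensemble deflection of Equation (16) can be sketched as below (a hypothetical Python illustration; taking the lens masses $`m_j`$ in units of the mass defining the scale length $`\xi _0`$ is an assumption made for this sketch).

```python
import numpy as np

def alpha_ensemble(x, lens_pos, lens_mass):
    """Dimensionless analogue of Equation (16): each lens j at lens_pos[j]
    with mass m_j (in units of the mass defining xi_0) contributes
    m_j (x - x_j) / |x - x_j|^2 to the total deflection of the ray at x."""
    d = x - lens_pos                            # impact parameters (x - x_j)
    r2 = (d**2).sum(axis=1, keepdims=True)
    return (lens_mass[:, None] * d / r2).sum(axis=0)

# A single unit-mass lens at the origin reproduces the Schwarzschild case:
# a ray at x = (2, 0) is deflected by alpha = (0.5, 0).
a = alpha_ensemble(np.array([2.0, 0.0]),
                   np.array([[0.0, 0.0]]), np.array([1.0]))
```

Because every deflection is evaluated explicitly, the set of lenses included can be chosen within any geometry (for example, the circular region of radius $`R_{\mathrm{lens}}`$ discussed below) rather than a fixed rectangular grid.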
This is appropriate for an isolated configuration of lenses, such as in studies of the microlensing effect of many stars which make up a galaxy (where the contribution to the deflection by an external mass distribution may be modelled by adding a shear term to the lens equation). However, for an investigation of the lensing due to large scale structure, where the mass distribution is assumed to be continuous and homogeneous in all directions about a ray, we may introduce an artificial shear on rays near the shooting boundary. For the RBM, we choose to calculate each of the deflection angles explicitly, with an increase in the computational time over an equivalent hierarchical method. By making the direct calculation of the deflection, we are free to choose the geometry of the region within which we include lenses. The most natural choice for a distribution which is homogeneous and isotropic beyond some length scale $`R_\mathrm{H}`$ is to include lenses within a circular region around each ray out to the radius $`R_{\mathrm{lens}}=R_\mathrm{H}`$. For the isolated lens geometries we examine here, we need only set $`R_{\mathrm{lens}}`$ to encompass the RSM image plane. A discussion of the appropriate choice for $`R_\mathrm{H}`$ in cosmological lensing scenarios is reserved for Fluke et al. (in preparation). As a final test here, we now consider a random distribution of $`N_{\mathrm{lens}}=5`$ lenses (with the same lens positions for both the RSM and RBM). The resulting MPH is shown in Figure 8 and various parameters in Table 2. For both methods we use a total of $`10^6`$ bundles/sources, but discarding bundles within $`x_{\mathrm{cut}}=1.01`$ Einstein radii for the RBM reduces this to $`9.3\times 10^5`$ bundles for the histogram. The sample mean and variance are $`\langle \mu \rangle =1.09`$, $`\sigma _\mu ^2=2.09`$ for the RBM and $`\langle \mu \rangle =0.98`$, $`\sigma _\mu ^2=0.23`$ for the RSM.
Using an ensemble of lenses introduces a new length scale to the problem, so that sub-structure at intermediate magnifications ($`3\lesssim \mu \lesssim 30`$) is seen in the MPH using the RSM (but not with the RBM). The ‘bump’ in the MPH occurs for a planar distribution of lenses, and has been studied both numerically \[Rauch et al. 1992\] and analytically \[Kofman et al. 1997\], and is believed to be due to the caustic patterns of pairs of point lenses. If this is a caustic-induced feature, i.e. a region of high magnification, then the RBM will not provide the ‘correct’ magnification. We feel the feature may be in part due to the low resolution with which the complex caustic structure was mapped with the RSM on a regular grid ($`1000\times 1000`$ pixels), however a discussion of this effect is beyond the scope of this paper. ## 4 Conclusions The Ray Bundle method provides a computationally fast, accurate and flexible alternative to the Ray Shooting method for studies of the weak gravitational lensing limit. A wide variety of lens models are easily incorporated – we have considered here only the case of the Schwarzschild lens, but changing to a different model involves modifying only $`\widehat{𝜶}(𝝃)`$ in the GLE. One alternative to ray based methods requires solving for the scalar lensing potential, but it may not always be possible to find an analytic solution for all lens models. The RBM also allows us to avoid artificial shear introduced by grid based methods for light rays which pass near the shooting boundary, when a small portion of an otherwise homogeneous and isotropic distribution is used. The point source limit may be approached as values of $`R_\mathrm{i}`$ may be selected without the restriction of introducing a finer source grid, and a corresponding increase in the density of rays required (provided we can relax the constraint that the source plane must be completely covered by sources with the RBM).
The RBM should only be applied with caution to strong lensing scenarios, where the RSM is far superior. As only one image is followed to the source, there will be an error in the total magnification when the image is near a critical curve. The RBM is, however, particularly well suited to problems where we want to investigate in detail individual lines-of-sight for various lens geometries and models. An important advantage of the RBM is that we can associate a particular image position and shape with the corresponding source position and shape. This provides us with the opportunity of following the development of the shape of a ray bundle through a sequence of lens planes, as used in models of cosmological lensing (for example, Wambsganss, Cen & Ostriker \[Wambsganss et al. 1998\]). An important application of weak lensing is to determine the effect of small changes in the magnification of standard candle sources (such as Type Ia Supernovae) on the derived values of cosmological parameters \[Wambsganss et al. 1997\]. The high accuracy of the RBM in the weak lensing limit makes it a valuable tool for such studies. ## Acknowledgments The authors would like to thank Peter Thomas and Andrew Barber (University of Sussex), and Hugh Couchman (University of Western Ontario) for helpful discussions. The authors are grateful to the referee, Joachim Wambsganss, for his insightful comments. CJF and DJM are funded by Australian Postgraduate Awards. CJF is grateful for financial assistance from the University of Melbourne’s Melbourne Abroad scholarships scheme, and the Astronomical Society of Australia’s travel grant scheme. ## Appendix A The Schwarzschild Lens The simplest lens model is the point-mass or Schwarzschild lens (for example Schneider et al. \[Schneider et al.
1992\], Narayan & Bartelmann \[Narayan & Bartelmann 1996\]), for which the deflection angle due to a mass, $`M`$, is $$\widehat{𝜶}(𝝃)=\frac{4GM}{c^2|𝝃|^2}𝝃.$$ (17) The dimensionless lens equation for a point source with this model is $$y=x-1/x$$ (18) so that there are two images (one located on either side of the lens, and co-linear with the lens) at $$x_\pm =\frac{1}{2}\left(y\pm \sqrt{y^2+4}\right)$$ (19) with corresponding magnifications $$\mu _\pm =\frac{1}{2}\left(\frac{y^2+2}{y\sqrt{y^2+4}}\pm 1\right).$$ (20) The total magnification is the sum of the absolute values of the individual image magnifications: $`\mu _\mathrm{p}=|\mu _+|+|\mu _{-}|`$. When the source is far from the lens axis ($`y\gg 1`$), one of the images (say, $`x_{-}`$) will be significantly demagnified ($`\mu _{-}\ll 1`$). The total magnification is then $`\mu _\mathrm{p}\approx \mu _+`$. It is possible to derive an analytic form for the magnification probability, $`p(\mu ,z)`$, for the Schwarzschild lens for large values of $`\mu _\mathrm{p}`$ \[Schneider et al. 1992\], which is reasonably generic for most lens models \[Peacock 1982\]: $$p(\mu )\propto \mu _\mathrm{p}^{-3}.$$ (21) On integrating to form the MPH we have $$p(\mu )\mathrm{\Delta }\mu =\int _{\mu _1}^{\mu _2}p(\mu )d\mu \propto \frac{1}{\mu _1^2}\left(1-\frac{\mu _1^2}{\mu _2^2}\right).$$ (22) If the histogram bins are equally spaced logarithmically, $`\mu _1/\mu _2`$ is constant, and so we have an additional constraint that the MPH must have a power law slope $`-2`$ for $`\mu \gtrsim 10`$. For an extended source, the total magnification is the integral of $`\mu _\mathrm{p}`$ over the source, weighted by the intensity profile, $`I(𝒚)`$. For a circular source with dimensionless radius $`R_\mathrm{s}`$ (the physical source radius in units of $`\eta _0`$), and a uniform intensity profile we have (eg. Pei \[Pei 1993\], or Schneider et al. \[Schneider et al.
1992\] for an alternative formulation) $$\mu _\mathrm{e}(y)=\frac{1}{\pi R_\mathrm{s}^2}\int _{|y-R_\mathrm{s}|}^{y+R_\mathrm{s}}dt\frac{\sqrt{t^2+4}\left(R_\mathrm{s}^2-y^2+t^2\right)}{\sqrt{R_\mathrm{s}^2-(y-t)^2}\sqrt{(y+t)^2-R_\mathrm{s}^2}}.$$ (23) Equation (23) approaches the point source solution as $`y\to \infty `$ or $`R_\mathrm{s}\to 0`$, and the maximum magnification is $$\mu _{\mathrm{e},\mathrm{max}}=\frac{\sqrt{R_\mathrm{s}^2+4}}{R_\mathrm{s}}$$ (24) at $`y=0`$.
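The point-lens relations (18)–(24) are easy to verify numerically. The sketch below is our own illustrative code (the function names and the midpoint-rule check are not from the paper); the extended-source check uses the fact, stated above, that a uniform-intensity source has a magnification equal to the area average of $`\mu _\mathrm{p}`$ over the disc, so $`\mu _\mathrm{e}(0)`$ should reproduce equation (24).

```python
import math

def images(y):
    """Image positions x_+/- of a point source at dimensionless
    offset y (eq. 19), which solve the lens equation y = x - 1/x."""
    s = math.sqrt(y * y + 4.0)
    return 0.5 * (y + s), 0.5 * (y - s)

def total_magnification(y):
    """Total point-source magnification mu_p = |mu_+| + |mu_-| (eq. 20)."""
    s = math.sqrt(y * y + 4.0)
    mu_plus = 0.5 * ((y * y + 2.0) / (y * s) + 1.0)
    mu_minus = 0.5 * ((y * y + 2.0) / (y * s) - 1.0)
    return abs(mu_plus) + abs(mu_minus)

def mu_extended_centre(r_s, n=100000):
    """Magnification of a uniform disc source centred on the lens:
    the area average of mu_p, evaluated by a radial midpoint rule.
    Should converge to eq. (24), sqrt(r_s^2 + 4)/r_s."""
    total = 0.0
    dr = r_s / n
    for i in range(n):
        r = (i + 0.5) * dr
        total += r * total_magnification(r) * dr
    return 2.0 * total / (r_s * r_s)
```

For $`y\gg 1`$ the total magnification tends to unity, consistent with the demagnification of the $`x_{-}`$ image discussed above.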
no-problem/9812/hep-ph9812517.html
ar5iv
text
# Contact Interactions with Polarized Beams at HERA ## 1 Introduction The study of contact interactions (CI) is a powerful way to search for departures from the Standard Model and to parametrise new physics phenomena. Phenomenologically at an energy scale much lower than the characteristic scale of the underlying theory, very different new physics contributions can give rise to very similar changes in processes involving only SM particles. The presence of new interactions can then be written in terms of an effective CI Lagrangian in a model-independent way. In the context of HERA the lepton-quark CI of the first generation fermions are the most interesting since they can manifest themselves in $`eq\to eq`$ scattering. The most general $`eeqq`$ current-current effective Lagrangian has the form: $$\mathcal{L}_{eq}^{NC}=\underset{q=u,d}{\sum }\underset{i,j=L,R}{\sum }\eta _{ij}^q(\overline{e}_i\gamma _\mu e_i)(\overline{q}_j\gamma ^\mu q_j)$$ (1) with $`\eta _{ij}^q=4\pi ϵ_{ij}^q/(\mathrm{\Lambda }_{ij}^q)^2`$, where $`\mathrm{\Lambda }_{ij}^q`$ is the effective mass scale of the contact interaction. The sign $`ϵ_{ij}^q`$ characterises the nature of the interference of each CI term with the Standard Model $`\gamma `$ and $`Z`$ exchange amplitudes. The subscripts $`LL`$, $`RR`$, $`LR`$ and $`RL`$ refer to the chiral structure of the new interaction. In a purely phenomenological approach all $`\eta `$’s are unknown parameters, whereas they are predicted in a given theoretical framework. For example, the structure of $`\eta _{ij}^q`$ in a generic leptoquark model or in supersymmetry with $`R`$-parity violation can be found in Ref . The overwhelming success of the SM suggests that the CI Lagrangian must be $`SU(2)_L\times U(1)`$ symmetric. The symmetry implies that $`\eta _{RL}^u=\eta _{RL}^d`$, relates $`eq`$ and $`\nu q`$ NC interactions, and relates the difference $`\eta _{LL}^u-\eta _{LL}^d`$ to the CC $`e\nu ud`$ contact term.
The lepton-hadron universality of charged-current data indicates that the latter is small. HERA will be most sensitive to CI terms in measurements at large $`Q^2`$ which favours large $`x`$ where the leading contribution comes from the valence $`up`$-quark. Therefore, for simplicity, we will assume u-d universality, i.e. $`\eta _{ij}^u=\eta _{ij}^d`$. As a result we are left with eight terms in eq. (1) defined by various combinations of chiralities and signs, which we will denote by $`ij^ϵ`$, i.e. $`LR^+`$, $`LL^{-}`$ etc. A global study of the $`eq`$ CI, based on the most relevant existing experimental data, has recently been performed in Ref . Stringent bounds of the order of $`\mathrm{\Lambda }\sim 10`$ TeV for the individual CI terms are found. However, when several terms of different chiralities are involved simultaneously, cancellations occur and the resulting bounds on $`\mathrm{\Lambda }`$ are considerably weaker and of the order of $`3`$–$`4`$ TeV . Present bounds from HERA as well as from other high-energy experiments are of the same order of magnitude and it can therefore be expected that experiments at HERA will be able to improve the limits with increasing luminosity. ## 2 Chiral structure of CI: the case for polarized beams In this note we want to emphasize that the measurement of spin asymmetries, defined in the context of HERA with polarized lepton beams ($`e^+`$ and $`e^{-}`$) and/or with polarized lepton and proton beams, could provide very important tools to disentangle the chiral structure of the new interaction. To illustrate the point we will consider an example with double spin asymmetries defined in eq. (2) below. The details of a more complete phenomenological analysis of the experimental signatures of CI from the measurements of cross sections and spin asymmetries in the NC channel at HERA can be found in .
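The scale-coupling translation $`\eta =4\pi ϵ/\mathrm{\Lambda }^2`$ and the counting of the eight independent $`ij^ϵ`$ terms can be made concrete with a small sketch (our own illustrative code; the function names are not from the paper):

```python
import math

def eta(eps, lam_tev):
    """CI coefficient eta_ij = 4*pi*eps_ij / Lambda_ij^2, in TeV^-2."""
    return 4.0 * math.pi * eps / (lam_tev * lam_tev)

def lam(eta_abs_tev2):
    """Inverse map: effective scale Lambda in TeV for a given |eta|."""
    return math.sqrt(4.0 * math.pi / eta_abs_tev2)

# with u-d universality the remaining terms are the chirality/sign
# combinations ij^eps: LL+, LL-, LR+, LR-, RL+, RL-, RR+, RR-
terms = [i + j + s for i in "LR" for j in "LR" for s in "+-"]
```

For $`\mathrm{\Lambda }=10`$ TeV this gives $`|\eta |\approx 0.13`$ TeV<sup>-2</sup>, the level probed by the global fits quoted above.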
The analysis of has been motivated by the renewed interest in the polarization option, considered already at an early stage of the HERA project (see for example ), and more recently in ; we also refer to the report of the Working Group 6 in these proceedings. When we discuss polarized beams we split equally the expected total integrated luminosity of 1 fb<sup>-1</sup> among various configurations of beams and polarizations, i.e. for both lepton and proton beams polarized, we assume a luminosity of 125 pb<sup>-1</sup> for $`e^+p`$ and $`e^{-}p`$ with longitudinally polarized leptons ($`\lambda _e=\pm `$) and protons ($`\lambda _p=\pm `$). As a result, the “discovery potential” (as far as the sensitivity to the scale $`\mathrm{\Lambda }`$ is concerned) is not significantly improved by running with polarized beams as compared to the unpolarized case, see Table 1. There the 95% CL limits for $`\mathrm{\Lambda }`$’s obtained from the analysis of unpolarized $`e^+p`$ and $`e^{-}p`$ collisions (upper row) are compared to the ones obtained with the help of polarized beams (lower row). In the latter case the following double-spin asymmetries<sup>1</sup><sup>1</sup>1The asymmetries defined in eq. (2) are sensitive to the violation of parity, and are also interesting from the point of view of spin structure functions . have been used $$A_{LL}^{PV}(e^{-})=\frac{\sigma _{-}^{--}-\sigma _{-}^{++}}{\sigma _{-}^{--}+\sigma _{-}^{++}}\text{and}A_{LL}^{PV}(e^+)=\frac{\sigma _{+}^{--}-\sigma _{+}^{++}}{\sigma _{+}^{--}+\sigma _{+}^{++}}$$ (2) which are defined in terms of the polarized differential cross sections $`\sigma _l^{\lambda _e,\lambda _p}\equiv \text{d}\sigma _l^{\lambda _e,\lambda _p}/\text{d}Q^2`$ with $`l=+`$ for $`e^+p`$ and $`l=-`$ for $`e^{-}p`$. Exploiting other spin asymmetries defined in Ref. (see also ) does not improve significantly the limits.
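A minimal sketch of the double-spin asymmetry of eq. (2) and of how its statistical reach scales with event counts (the function names, and the standard Gaussian error formula for a counting asymmetry, are our own additions, not from the paper):

```python
import math

def a_ll_pv(sigma_mm, sigma_pp):
    """Parity-violating double-spin asymmetry, eq. (2):
    (sigma^{--} - sigma^{++}) / (sigma^{--} + sigma^{++})."""
    return (sigma_mm - sigma_pp) / (sigma_mm + sigma_pp)

def delta_a(a, n_events):
    """Statistical uncertainty of a counting asymmetry built from
    n_events events in total (fully polarized beams assumed)."""
    return math.sqrt((1.0 - a * a) / n_events)
```

Equal cross sections give $`A=0`$ and $`|A|\le 1`$ by construction; the error shrinks as $`1/\sqrt{N}`$, consistent with the observation above that splitting the luminosity among polarization states costs little sensitivity.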
Once new physics effects are observed, polarized beams are very useful since the $`Q^2`$ dependence of spin asymmetries contains additional information which is sensitive to the chiral structure. This is exemplified in Fig. 1 where the $`Q^2`$ dependence of $`A_{LL}^{PV}(e^{-})`$ and $`A_{LL}^{PV}(e^+)`$, assuming $`\mathrm{\Lambda }=4`$ TeV, is drawn for several $`ij^ϵ`$ CI terms. The observation of a deviation from the SM in $`e^{-}p`$ will allow us to distinguish between $`LL^+/RR^{-}`$ and $`LL^{-}/RR^+`$, and from $`e^+p`$ data between $`LR^+/RL^{-}`$ and $`LR^{-}/RL^+`$ contact interaction terms. Other spin asymmetries can be exploited to reveal the anatomy of the chiral structure of contact interactions . ## 3 Conclusions The main conclusions are the following: 1) The HERA collider with an integrated luminosity of $`L_{tot}=1`$ fb<sup>-1</sup> will give strong bounds on the energy scale of a possible new CI. For constructive interferences, the limit on $`\mathrm{\Lambda }`$ is of the order of $`6`$ TeV, and for destructive interferences we get $`\mathrm{\Lambda }\sim 5`$ TeV. The availability of polarized lepton and proton beams will not increase significantly these bounds, except for destructive interferences. With only leptons polarized, the sensitivity is strongly reduced. 2) The studies of spin (and charge) asymmetries can give unique information on the chiral structure of the new interaction. The availability of electron and positron beams is mandatory in order to cover all the possible chiralities. 3) Since in the proton the valence $`u`$-quark distribution is dominant, the measurements essentially constrain the presence of a new interaction in the $`eu`$ sector. To constrain a new interaction in the $`ed`$ sector, protons could be replaced by neutrons, for example by using electron-$`He^3`$ collisions, an option which is also under consideration at HERA .
## 4 Acknowledgments JK has been partially supported by the Polish Committee for Scientific Research Grant 2 P03B 030 14.
no-problem/9812/astro-ph9812440.html
ar5iv
text
# The rotation speed of the companion star in V395 Car (=2S0921–630) ## 1 Introduction In spite of its low luminosity, 2S0921-630 has proved to be an interesting and important X-ray source. Discovered by SAS-3 and identified with a $`\sim `$15<sup>m</sup> blue star, V395 Car (Li et al., 1978), subsequent studies (e.g. Branduardi-Raymont et al., 1983) showed that the spectrum was dominated by strong HeII $`\lambda `$4686 and Balmer emission, a characteristic of low-mass X-ray binaries (LMXBs). However, 2S0921-630 has an unusually low $`L_X/L_{opt}`$ of $`\sim `$1 compared to most LMXBs, which was explained by the presence of partial optical and X-ray eclipses (Mason et al., 1987). This demonstrated that 2S0921-630 was a high inclination, accretion-disc corona (ADC) source. Only a handful of these are known (e.g. White et al., 1995) in which the compact object is permanently obscured from our view by the accretion disc, and hence the observed X-rays are scattered into our line-of-sight by a hot corona of gas above and below the disc. By implication, the intrinsic X-ray luminosity is much higher. However, what is remarkable about 2S0921-630 is its long period. The other ADC sources are 2A1822-371, 4U2129+47 and 4U2127+119, which have relatively short orbital periods (5.6, 5.2 and 17.1 hrs respectively). Optical photometry and spectroscopy (Cowley et al., 1982 and Branduardi-Raymont et al., 1983) have indicated that V395 Car has a much longer orbital period of 9.02 days, with kinematic properties implying that it is located in the halo at a distance of $`\sim `$10 kpc (Cowley et al., 1982). The secondary star must then be evolved and intrinsically luminous in its own right, making the system very similar to the well-known halo giant Cyg X-2 (Casares et al., 1997; Orosz & Kuulkers, 1998).
With its high inclination V395 Car is thus one of those rare LMXBs (along with Cyg X-2 and Her X-1) in which the secondary should be visible in spite of the presence of a luminous disc, and hence high resolution optical spectroscopy should reveal both the spectral type of the secondary and its rotation speed, thereby providing significant constraints on the system masses. Here we report the first results from such a study. ## 2 Observations and data reduction High resolution optical spectra of V395 Car were obtained on 1998 May 28/29 with the 3.5-m New Technology Telescope (NTT) at the European Southern Observatory (ESO) in Chile using the ESO Multi Mode Instrument (EMMI). We used the red arm with an order-separating OG 530 filter and grating #6 which gave a dispersion of 0.31 Å per pixel. The TEK 2048$`\times `$2048 CCD was used, binned by a factor two in the spatial direction in order to reduce the readout noise. The dispersion direction was not binned. Very good seeing allowed us to use a slit width of 0.8 arcsec which resulted in a spectral resolution of 0.83 Å. We took 3$`\times `$1200 s exposures of V395 Car (see Table 1) and Cu-Ar arc spectra were taken for wavelength calibration. In addition we observed template field stars with a variety of spectral types, whose rotational velocities are much less than the resolution of our data. The data reduction and analysis was performed using the Starlink figaro package, the pamela routines of K. Horne and the molly package of T. R. Marsh. Removal of the individual bias signal was achieved through subtraction of the mean overscan level on each frame. Small scale pixel-to-pixel sensitivity variations were removed with a flat-field frame prepared from observations of a tungsten lamp. One-dimensional spectra were extracted using the optimal-extraction algorithm of Horne (1986), and calibration of the wavelength scale was achieved using 5th order polynomial fits which gave an rms scatter of 0.02 Å.
The stability of the final calibration was verified with the OH sky line at 6562.8Å whose position was stable to within 0.1 Å. ## 3 The V395 Car spectrum In Fig. 1 we show the variance-weighted average of our V395 Car spectra, which has a signal-to-noise ratio of about 40 in the continuum. The most noticeable features are the double peaked H$`\alpha `$ and He$`\mathrm{i}`$ 6678Å emission lines. The double peaked nature of the emission lines is characteristic of a high inclination X-ray binary (Horne & Marsh 1986). This is consistent with the observed partial eclipse in the optical and X-rays (Mason et al., 1987). ## 4 The spectral type and rotational broadening of the companion star We determine the spectral type of the companion star by minimizing the residuals after subtracting different template star spectra from the Doppler-corrected average spectrum. This method is sensitive to the rotational broadening $`v\mathrm{sin}i`$ and the fractional contribution of the companion star to the total flux ($`f`$; 1-$`f`$ is the “veiling factor”). The template stars we use are in the spectral range F2–K2 $`\mathrm{iii}`$ and were obtained during this observing run but also from previous runs at La Palma and with comparable dispersion. First we determined the velocity shift of the individual spectra of V395 Car with respect to each template star spectrum by the method of cross-correlation (Tonry & Davis 1979). The V395 Car spectra were then interpolated onto a logarithmic wavelength scale (pixel size 14.5 km s<sup>-1</sup>) using a $`\mathrm{sin}x/x`$ interpolation scheme to minimize data smoothing (Stover et al. 1980). The spectra of V395 Car were then Doppler-averaged to the rest frame of the template star. In order to determine the rotational broadening $`v\mathrm{sin}i`$ we follow the standard procedure described by Marsh et al., (1994). 
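The interpolation onto a logarithmic wavelength scale with a fixed velocity pixel can be sketched as follows; this is a minimal illustration with our own function name, not the figaro/molly implementation used for the analysis:

```python
import math

C_KMS = 299792.458  # speed of light in km/s

def log_lambda_grid(lam_min, lam_max, pix_kms=14.5):
    """Logarithmic wavelength grid with a constant velocity step per
    pixel (14.5 km/s here, as in the text), so that a Doppler shift
    becomes a uniform pixel shift suitable for cross-correlation."""
    step = math.log(1.0 + pix_kms / C_KMS)       # d(ln lambda) per pixel
    n = int(math.log(lam_max / lam_min) / step) + 1
    return [lam_min * math.exp(i * step) for i in range(n)]
```

On such a grid the ratio of consecutive wavelengths is constant, which is exactly what makes velocity shifts translation-invariant in pixel space.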
Basically we subtracted a constant representing the fraction of light from the template star, multiplied by a rotationally broadened version of that template star. We broadened the template star spectrum from 0 to 100 km s<sup>-1</sup> in steps of 1 km s<sup>-1</sup> using the Gray rotation profile (Gray 1992). We then performed an optimal subtraction (Marsh et al., 1994) between the broadened template and averaged V395 Car spectra. The optimal subtraction routine adjusts the constant to minimize the residual scatter between the spectra. The scatter is measured by carrying out the subtraction and then computing the $`\chi ^2`$ between this and a smoothed version of itself. The constant, $`f`$, represents the fraction of light arising from the template spectrum, i.e. the secondary star. The optimal values of $`v\mathrm{sin}i`$ and $`f`$ are obtained by minimising $`\chi ^2`$. The above analysis was performed in the spectral range 6380–6520 Å, which excludes H$`\alpha `$ and He$`\mathrm{i}`$ 6678Å. This was the only region common to all the template stars and V395 Car. A linear limb-darkening coefficient of 0.60 was used (Al-Naimiy 1978). Using the template stars covering a range in spectral type (F2–K2$`\mathrm{iii}`$) we found $`v\mathrm{sin}i`$ to be in the range 58–71 km s<sup>-1</sup>. The minimum $`\chi ^2`$ occurred at spectral type K0 with a $`v\mathrm{sin}i`$ of 64$`\pm `$9 km s<sup>-1</sup> (1-$`\sigma `$) and the companion star contributing about 25% to the observed flux at $`\sim `$6500 Å. Fig. 1 shows the results of the optimal subtraction. This analysis assumes that the limb-darkening coefficient appropriate for the radiation in the line is the same as for the continuum. However, in reality this is not the case; the absorption lines in early-type stars will have core limb-darkening coefficients much less than that appropriate for the continuum (Collins & Truax 1995).
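The broadening step can be sketched with the analytic rotation profile of Gray (1992) for a linear limb-darkening law. This is our own minimal implementation, not the authors' code: the kernel discretisation and the edge handling are illustrative choices.

```python
import math

def gray_profile(v, vsini, eps=0.60):
    """Rotational broadening kernel G(v) for a linear limb-darkening
    coefficient eps; identically zero outside |v| < vsini. The kernel
    integrates to unity over velocity."""
    x = v / vsini
    if abs(x) >= 1.0:
        return 0.0
    u = 1.0 - x * x
    return (2.0 * (1.0 - eps) * math.sqrt(u) + 0.5 * math.pi * eps * u) / (
        math.pi * vsini * (1.0 - eps / 3.0))

def broaden(spectrum, dv, vsini, eps=0.60):
    """Convolve a spectrum sampled every dv km/s with the rotation
    kernel, as done to each template before the optimal subtraction."""
    m = int(vsini / dv) + 1
    kern = [gray_profile(k * dv, vsini, eps) * dv for k in range(-m, m + 1)]
    norm = sum(kern)
    kern = [w / norm for w in kern]            # renormalise discrete kernel
    n = len(spectrum)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(kern):
            j = min(max(i + k - m, 0), n - 1)  # clamp at the edges
            acc += w * spectrum[j]
        out.append(acc)
    return out
```

A grid of such broadened templates, each optimally scaled and subtracted from the target, yields the $`\chi ^2(v\mathrm{sin}i,f)`$ surface minimised in the text.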
In order to determine the extreme limits for $`v\mathrm{sin}i`$ we also repeated the above analysis for the K0 template star using zero and full limb-darkening. We found that $`v\mathrm{sin}i`$ changes by 4 km s<sup>-1</sup>. In order to estimate the systematic effects in estimating the spectral type and rotational broadening in V395 Car, we performed the same analysis but now using a template star of known spectral type. We broadened the target spectrum by 60 km s<sup>-1</sup>, added noise and veiled it by 70 % to produce a spectrum of comparable quality to that of V395 Car. We then repeated the broadening and optimal subtraction procedure using the same template stars as were used for the V395 Car analysis, thereby determining the best fit. We found that with our analysis we were able to retrieve the spectral type to within two subclasses, and the enforced rotational broadening and veiling to within 7%. ## 5 Discussion ### 5.1 Nature of the secondary star In the ADC sources the observed X-rays are scattered into our line-of-sight by a hot corona of gas above and below the disc. The intrinsic X-ray emission is permanently obscured from our view by the accretion disc and its extended rim. The observation of partial X-ray eclipses constrains the binary inclination $`i`$ to lie in the range 75°–90° (Mason et al., 1987). Thus, given our observed projected rotational broadening of the secondary star (64$`\pm `$9 km s<sup>-1</sup>) and these limits to $`i`$ we determine the rotation of the secondary star to lie in the range 55.0–75.6 km s<sup>-1</sup> (1-$`\sigma `$ limits).
Furthermore, as a “steady” X-ray source, the secondary star must fill its Roche-lobe, and so its rotational velocity is given by $$v_{\mathrm{rot}}=611\left[\frac{M_1(1+q)}{P_{hr}}\right]^{1/3}\left(\frac{R_{\mathrm{L2}}}{a}\right)\mathrm{km}\mathrm{s}^{-1}$$ (1) where $`P_{hr}`$ is the orbital period in hrs, $`q`$ is the binary mass ratio (=$`M_2/M_1`$), $`M_1`$ is the mass of the neutron star and $`R_{\mathrm{L2}}/a`$ is the Roche-lobe radius of the secondary and depends only on $`q`$ (Eggleton 1983). For a given mass for the compact object, $`M_1`$, we can solve equation (1) for $`q`$ and hence $`M_2`$, and then also determine $`R_{\mathrm{L2}}`$. Assuming that the compact object has the mass of a canonical neutron star, 1.4 M<sub>⊙</sub>, we find $`q`$, $`M_2`$ and $`R_{\mathrm{L2}}`$ to lie in the ranges 1.0–2.2, 1.4–3.1 M<sub>⊙</sub> and 9.7–13.4 R<sub>⊙</sub> respectively. In a long period X-ray binary the nuclear expansion of the secondary star drives the mass transfer, provided $`q\lesssim 1.2`$. In this case, the mass and radius of the secondary star are constrained to lie in the ranges 1.4–1.7 M<sub>⊙</sub> and 9.7–10.3 R<sub>⊙</sub> respectively. Recent theoretical models of King et al. (1997) have concluded that long period persistent X-ray binaries that contain neutron stars must have companion masses $`M_2\gtrsim 0.75`$ M<sub>⊙</sub>. Our limits for $`M_2`$ are consistent with this idea. The mean density ($`\rho `$) of the secondary star is fixed by the orbital period (Frank et al., 1992). For 2S0921–630 we find $`\rho =2.4\times 10^{-3}`$ g cm<sup>-3</sup>, which implies a K0$`\mathrm{iii}`$ spectral type for the secondary star (Gray 1992). Note that this is consistent with our observed estimate (see Sect. 4). ### 5.2 P-Cygni profiles? There is evidence for an outflow in 2S0921–630 arising from an accretion disc wind. The blue spectra of Branduardi-Raymont et al., (1983) show the Balmer emission lines to have P-Cygni type profiles, where the blue wing of the line profile is absorbed.
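The mass constraints quoted above follow from inverting equation (1) together with the Eggleton (1983) Roche-lobe formula. The sketch below is our own code (not the authors'), with the approximate lobe-filling period–mean-density relation $`\overline{\rho }\approx 110P_{hr}^{-2}`$ g cm<sup>-3</sup> of Frank et al. added as a cross-check on the density quoted in the text:

```python
import math

P_HR = 9.02 * 24.0   # orbital period of 2S0921-630 in hours

def roche_lobe(q):
    """Eggleton (1983) approximation to R_L2/a for mass ratio q = M2/M1."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

def v_rot(q, m1=1.4):
    """Rotation speed of a synchronised, lobe-filling secondary, eq. (1)."""
    return 611.0 * (m1 * (1.0 + q) / P_HR) ** (1.0 / 3.0) * roche_lobe(q)

def solve_q(v_target, m1=1.4, lo=0.05, hi=20.0):
    """Invert eq. (1) for q by bisection (v_rot increases with q)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if v_rot(mid, m1) < v_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# approximate mean density of a lobe-filling star [g/cm^3]
mean_density = 110.0 / P_HR**2
```

Feeding in the 1-$`\sigma `$ velocity limits 55.0 and 75.6 km s<sup>-1</sup> recovers mass ratios close to the 1.0–2.2 range given in Sect. 5.1.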
As the binary inclination of 2S0921–630 is high, one would expect the P-Cygni profiles to be stronger at phase 0.5 than at phase 0.0, simply because at phase 0.5 one sees more of the accretion disc. (Note we have used the usual phase convention, i.e. phase 0.0 is defined when the secondary star is in front of the compact object.) The Balmer lines in the spectra of Branduardi-Raymont et al., (1983) taken at phase 0.52 (i.e. phase 0.77 in their convention) do indeed show P-Cygni profiles. Using the orbital ephemeris determined by Mason et al. (1987), we find the orbital phase of our NTT spectrum to be 0.03$`\pm `$0.03. Our spectrum does not show any evidence for a P-Cygni type profile, which is what one would expect at this phase in a high inclination binary system. The spectra of Branduardi-Raymont et al., (1983) taken near orbital phase 0.52 show strong He$`\mathrm{i}`$ 4471Å in absorption. This is clear evidence for irradiation of the secondary star (cf. 2A1822-371; Harlaftis et al., 1997) as late-type stars do not show He$`\mathrm{i}`$ lines. This suggests that the inner face of the secondary star, facing the compact object, has a mean temperature of $`\sim `$20,000 K i.e. a spectrum of an early B type star. Note that since our NTT spectrum was taken near phase 0.0 the effects of irradiation will be the least and it will most resemble the “true” spectral type of the secondary (compared to spectra taken at other orbital phases, where the accretion disc light and the effects of irradiation will contribute more). ### 5.3 Comparison with other systems We can compare 2S0921–630 with the ADC source 4U2127+119. Both systems show Balmer P-Cygni type profiles and He$`\mathrm{i}`$ 4471Å in absorption. The probable high binary mass ratio in 4U2127+119 leads to unstable mass transfer from the secondary star, resulting in a common envelope (Bailyn & Grindlay 1987).
In 4U2127+119 the He$`\mathrm{i}`$ absorption line is blue shifted with respect to the mean velocity of the system, and is believed to arise in a stream of gas leaving the outer Lagrangian point (Bailyn et al., 1989). If the mass transfer in 2S0921–630 is unstable, then we would also expect the He$`\mathrm{i}`$ lines to be blue-shifted and have low non-sinusoidal velocity variations. It is interesting to note the similarities between 2S0921–630 and Cyg X–2. Both systems are long period binaries in the halo of the Galaxy, they are at high inclination angles with evolved secondaries and both contain neutron stars. \[Cyg X-2 must contain a neutron star because of the observed type $`\mathrm{i}`$ X-ray bursts. For 2S0921–630 we cannot unequivocally state that it contains a neutron star, as no bursts have been seen. However, this may be a natural consequence of the ADC geometry, in which the compact object is permanently obscured from our view. It should also be noted that our upper limit to $`M_1`$ suggests that it is a neutron star.\] Also 2S0921–630 has a binary mass ratio $`q\sim 1`$, which is a factor of 3 higher than that of Cyg X–2 ($`q`$=0.34; Casares et al., 1997), suggesting that if the neutron stars in both systems have similar masses, then the secondary star in 2S0921–630 is more massive. However, it should be noted that the secondary star in 2S0921–630 is much cooler (K0$`\mathrm{iii}`$; $`L_2\sim `$50 L<sub>⊙</sub>) and hence less luminous than the secondary star in Cyg X–2 (A9$`\mathrm{iii}`$; $`L_2\sim `$200 L<sub>⊙</sub>), contrary to what we might have expected given their inferred masses. This discrepancy can be reconciled if we postulate that we are seeing 2S0921–630 and Cyg X–2 at very different phases in their evolution. In Cyg X–2 we are probably seeing the secondary star which is high up on the giant branch and has lost its outer envelope due to mass transfer and/or irradiation, leaving just the hot inner core of what had been initially a more massive star.
In 2S0921–630, by contrast, the secondary star is not as evolved and so is near the base of the giant branch. Only detailed stellar evolution calculations will resolve this. ## 6 Conclusion Using high resolution optical spectra, we estimate the spectral type of the companion star in 2S0921–630 (=V395 Car) to be a K0$`\mathrm{iii}`$ star. By optimally subtracting different broadened versions of the companion star spectrum from the average V395 Car spectrum we determine the rotational broadening of the companion star to be 64$`\pm `$9 km s<sup>-1</sup> (1-$`\sigma `$), contributing $`\sim `$25% to the observed flux at 6500Å. ## Acknowledgements J.C. acknowledges support by the Spanish Ministerio de Educacion y Cultura through the grant FPI-070-97.
no-problem/9812/astro-ph9812372.html
ar5iv
text
# Heliospheric and Astrospheric Hydrogen Absorption towards Sirius: No Need for Interstellar Hot Gas Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute which is operated by the Association of Universities for Research in Astronomy Inc., under NASA Contract NASA5-26555 ## 1 Introduction There is a long-standing debate on the properties of the hot ($`T\sim 10^6`$ K) gas which fills the so-called Local Bubble, the soft X-ray emitting region around the Sun (see Breitschwerdt et al. 1997 for updated information). The cold and warm diffuse clouds embedded in the hot gas, like the group of cloudlets in the solar vicinity, are supposed to be surrounded by a shell of semi-hot gas at an intermediate temperature ($`T\sim 10^5`$ K), and one way to detect this gas is through the neutral hydrogen absorption at Ly-$`\alpha `$. It is worthwhile here to recall that a small quantity of hot gas ($`T\sim 10^5`$ K) can produce an absorption equivalent width in Ly-$`\alpha `$ comparable to that of a 3 or 4 orders of magnitude larger quantity of warm gas ($`T\sim 10^4`$ K). Indeed, spectra obtained with the Goddard High Resolution Spectrometer (GHRS) on board the Hubble Space Telescope (HST) show hot neutral H absorption along the line-of-sight to a few nearby stars, including Sirius A and $`ϵ`$ CMa (Bertin et al. 1995, hereafter BVL95; Gry et al. 1995). The origin of this absorption, however, has been a matter of debate. For the Sirius line-of-sight, BVL95 proposed that a hot conductive interface is responsible for excess Ly-$`\alpha `$ seen on the red side of the absorption profile, and Bertin et al. (1995b) proposed that absorption from Sirius’s wind accounted for excess Ly-$`\alpha `$ absorption on the blue side.
In other cases, detected hot H components are undoubtedly linked to the neutral gas formed either around our own heliosphere, due to the interaction of the solar wind with the ambient interstellar medium of the Local Interstellar Cloud (LIC) (Linsky & Wood 1996, hereafter LW96), or to neutral gas around other astrospheres due to the corresponding interaction between the stellar winds and the ambient neutral interstellar gas (Wood et al. 1996). But for Sirius A and $`ϵ`$ CMa, the combination of conductive interface and stellar wind absorption has remained the most likely source. There are 3 different types of heliospheric H atoms, in addition to the unperturbed interstellar neutral H called primary interstellar atoms or PIA’s: i) the compressed, decelerated, and heated interstellar atoms (HIA’s) formed by charge exchange with heated interstellar protons outside the heliopause, ii) the neutralized, decelerated, and heated solar wind atoms (HSWA’s) formed in the heliosheath by charge exchange between the neutral interstellar gas and the hot protons of the decelerated and compressed solar wind, and iii) the neutralized supersonic solar wind atoms (SSWA’s). Only the HIA’s and HSWA’s are of interest here since the SSWA component is flowing radially at very large velocities and will not produce absorption in the central part of the Ly-$`\alpha `$ lines, and the PIA’s are indistinguishable from the normal interstellar gas. The HIA’s on the upwind side of the heliosphere (i.e. the direction from which the interstellar wind flows) collectively make up the so-called “H-wall”, which is the gas that has been detected towards $`\alpha `$ Cen (LW96). The properties of the HSWA’s are the most difficult to calculate, since this hot gas has a very large mean free path and its characteristics at one location in the heliosphere depend on the properties of all the source regions everywhere in the heliosheath. 
Indeed, significant differences between multi-fluid models and kinetic models have been found by Williams et al. (1997). These authors have also suggested that the mixing between the hot and warm populations in the heliospheric tail through H-H collisions could be the origin of the hot gas absorption observed towards Sirius. While recent computations show that H-H collisions are negligible compared to charge-exchange processes (Izmodenov et al. 1999b), our conclusions below will ultimately be similar to their original idea. The goal of this letter is to show that when one uses updated parameters of the circumsolar interstellar medium and a very precise kinetic/gasdynamic self-consistent model of the heliosphere, HSWA’s produce a non-negligible absorption in almost all directions, with a maximum effect on the downwind side. We reconsider the Sirius A HST Ly-$`\alpha `$ spectrum and show that the red wing of the absorption is very well fitted using our model. Then we show using simple analogies that the additional absorption on the blue wing could be produced by HSWA’s and HIA’s around Sirius itself, if the star is embedded in the neighboring cloud detected towards the star by Lallement et al. (1994) and if the star produces a wind, which is likely. ### 1.1 Heliospheric absorption towards Sirius A description of our self-consistent heliospheric model of the solar wind-interstellar gas interaction can be found in Baranov & Malama (1993), Baranov et al. (1998), and Izmodenov et al. (1999a). We have updated the interstellar parameters to take into account recent advances in the field, such as the velocity and temperature determinations of the LIC from in situ helium measurements and stellar spectroscopy (Witte et al. 1993; Lallement & Bertin 1992; Bertin et al. 1993), as well as estimates of the neutral H and electron density in the circumsolar interstellar medium (Lallement et al. 1996; Izmodenov et al. 1999a). 
In what follows, the assumed interstellar parameters are then: $`T=6000`$ K, $`V=25`$ km/s, $`N(HI)=0.2`$ cm<sup>-3</sup>, $`N(e^{-})=0.07`$ cm<sup>-3</sup>. The upwind direction is taken as $`\lambda =254.5^{\circ }`$, $`\beta =7.5^{\circ }`$ (ecliptic coordinates), which translates into $`l_{II}=186^{\circ }`$, $`b_{II}=16^{\circ }`$ (galactic coordinates). The assumed solar wind parameters at 1 AU are: $`n(p)=7`$ cm<sup>-3</sup>, $`V=450`$ km/s. The model does not include an interstellar magnetic field, but our estimates should not be significantly changed in the presence of a moderate field. The boundary of the model grid is at a distance of about 2000 AU in the direction of Sirius. Fig. 1a is a sketch of the heliosphere and shows the direction of Sirius on the downwind side. The predicted absorption by HSWA’s and HIA’s in the direction of Sirius at an angle of $`139^{\circ }`$ from the upwind direction is displayed in Fig. 1b. The absorption is shown in a heliocentric rest frame. It can be seen that the HSWA’s are the main absorbers, and that their absorption is far from negligible.

### 1.2 Heliospheric and interstellar absorption towards Sirius

The 2.7 pc long line-of-sight to Sirius has been shown to cross two clouds: i) the LIC, which in this direction is seen at a positive redshift of 19 km s<sup>-1</sup>, and ii) a second cloud at a Doppler shift of 13 km s<sup>-1</sup> (Lallement et al. 1994), which is probably of the same type as our Local Cloudlet. Using the angularly close star $`ϵ`$ CMa, Gry & Dupin (1998) have argued that the LIC extent in that direction is no longer than $`\sim `$ 0.6 pc. Figure 1c shows the Sirius spectrum around the Ly-$`\alpha `$ line, and a simple polynomial fit to the continuum surrounding the D and H absorption. Superimposed on the data is the expected profile after absorption by the two clouds at $`V=13`$ and 19 km s<sup>-1</sup>, respectively, with an assumed temperature of $`T=6000`$ K.
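As a consistency check on the geometry quoted above, the $`139^{\circ }`$ angle between the upwind direction and the line-of-sight to Sirius can be recovered from the spherical law of cosines. The ecliptic coordinates of Sirius used below ($`\lambda \approx 104.1^{\circ }`$, $`\beta \approx -39.6^{\circ }`$) are an assumption of this sketch, not quoted in the text:

```python
import math

def angular_separation(lon1, lat1, lon2, lat2):
    """Great-circle angle (degrees) between two directions given as
    spherical (longitude, latitude) pairs in degrees."""
    l1, b1, l2, b2 = map(math.radians, (lon1, lat1, lon2, lat2))
    cos_t = (math.sin(b1) * math.sin(b2)
             + math.cos(b1) * math.cos(b2) * math.cos(l1 - l2))
    return math.degrees(math.acos(cos_t))

# Upwind direction of the interstellar flow (ecliptic, from the text)
UPWIND = (254.5, 7.5)
# Ecliptic coordinates of Sirius (assumed here, not given in the text)
SIRIUS = (104.1, -39.6)

theta = angular_separation(*UPWIND, *SIRIUS)
print(f"angle from upwind to Sirius: {theta:.1f} deg")  # close to the quoted 139 deg
```

With these assumed coordinates the separation comes out within a degree of the value used for the model grid.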
The column densities of the two clouds are both $`1.6\times 10^{17}`$ cm<sup>-2</sup>, in agreement with the D absorption and a D/H ratio of $`1.65\times 10^{-5}`$ (Linsky et al. 1995; BVL95). Our conclusions are not sensitive to either the exact value of D/H or to the exact interstellar temperature. The absorption has been convolved with the instrumental profile corresponding to the G160M/SSA settings of the GHRS spectrograph. It is clearly seen that with warm interstellar gas only, absorption is missing on both sides of the line, as already noticed by BVL95. After adding the modeled absorption by the heliosphere, the resulting profile is substantially modified on the red part of the line. To make the heliospheric effect clear in Fig. 1c, the additional absorption is shown as a hatched area. It can be seen that the red part of the observed spectrum is well fitted by the model. Thus, while the heliosphere cannot be held responsible for the absorption in the blue wing, there is no need to propose additional absorption from interstellar hot gas along the line-of-sight to fit the red side of the absorption line.

## 2 Heliospheric, interstellar, and Siriospheric absorption towards Sirius

Bertin et al. (1995b) have suggested that the additional absorption on the blue side is due to neutral gas associated with Sirius’s wind, a counterpart of Mg II absorption detected at this velocity. Here we consider another possibility, that the absorption is from the interaction area between Sirius’s wind and the interstellar gas around the star. In the following, we make a series of assumptions:

- Sirius is embedded in the “blue cloud” seen at the Doppler shift of 13 km s<sup>-1</sup>. This is a very reasonable assumption, owing to the very small length of the line-of-sight.
- Sirius has a wind with a terminal velocity of the same order of magnitude as the solar wind velocity (say 400–1500 km s<sup>-1</sup>), and a mass flux at least that of the solar wind.
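The deuterium column implied by the numbers above is simple arithmetic; a sketch using the quoted values (the factor of 2 accounts for the two clouds):

```python
# Total H I column of the two clouds (cm^-2) and the assumed D/H ratio
N_HI = 2 * 1.6e17
d_over_h = 1.65e-5

# Implied total D I column along the line-of-sight
N_DI = N_HI * d_over_h
print(f"N(D I) = {N_DI:.2e} cm^-2")  # ~5.3e12 cm^-2 for the summed clouds
```

This is the column that must be consistent with the observed D absorption feature.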
These assumptions are compatible with the predictions of radiatively driven wind models or coronal winds (see Bertin et al. 1995b).

- The gas near the Siriopause is not fully ionized by the EUV radiation from Sirius B. Using model results of Paerels et al. (1987) for a 25,000 K pure H white dwarf, the ionization time implied by the EUV flux of Sirius B balances the travel time associated with a star/ISM relative motion of $`\sim `$25 km s<sup>-1</sup> at distances of about 200 AU, which implies that if the size of the siriosphere is of this order or larger, neutral atoms of the cloud can penetrate within it. Such a size is very likely reached, since the Sirius wind is probably stronger than the solar wind and thus equilibrium with the ISM is reached at larger distances.
- The axis of symmetry of the siriosphere, determined by the relative motion between the star and the ambient gas, makes an angle $`\theta \simeq 40^{\circ }`$ with the line-of-sight direction. We know the 3D motion of Sirius A from ephemerides for the orbital system, combined with the radial velocity of Sirius A at the time of the observations, v<sub>r</sub> = -5 km s<sup>-1</sup>, but we do not know the 3D motion of the cloud. Multiple clouds have been observed for many short lines of sight besides Sirius, but their projected velocities are never far from that of the LIC; the separation is only 6 km s<sup>-1</sup> for the non-LIC cloud seen towards Sirius. Thus, assuming the motions of these additional clouds are identical to that of the LIC is a reasonable approximation, and making this assumption for the Sirius cloud leads to an estimated angle of $`\theta \simeq 40^{\circ }`$.

Under these assumptions, we can estimate some characteristics of the HIA and HSWA populations around Sirius. The distance at which pressure equilibrium between the wind and the ISM is reached depends on the ISM pressure and the stellar wind momentum flux.
If the mass flux and/or the velocity of the Sirius wind are larger than the solar wind flux and velocity, which is likely, the HIA component will be created at larger distances from the star in comparison with the solar case. But for the interstellar gas outside the discontinuity, the conditions of deceleration and heating should be about the same as for the heliosphere, since the gas has to decelerate in both cases by about the same quantity to be at rest with the star (the relative velocity between the star and the ISM). In the solar case, the relative velocity is 25.5 km s<sup>-1</sup>. For Sirius, we can estimate this velocity to be about 20–40 km s<sup>-1</sup>. The velocity is $`\simeq 31`$ km s<sup>-1</sup> if the surrounding cloud motion is assumed to be identical to the LIC. However, since its projected velocity (13 km s<sup>-1</sup>) is lower than the LIC’s projected velocity (19 km s<sup>-1</sup>), the actual relative velocity is likely to be lower than 31 km s<sup>-1</sup> and therefore not too different from the solar case. In this instance, the absorption will be found at velocities between 13 km s<sup>-1</sup> (projection of the cloud’s velocity) and $`-5`$ km s<sup>-1</sup> (the projection of the stellar motion). Fig. 1b shows the resulting theoretical absorption. The compressed stellar wind should also have properties similar to the compressed solar wind, although possibly formed at larger distances and possibly hotter. We have computed the absorption in the solar frame for $`\theta =45^{\circ }`$, which should be equivalent to what would be seen by an observer on Sirius. Then, we have changed its sign and added $`-5`$ km s<sup>-1</sup> to represent what would be seen for an observer at rest with the Sun and looking towards Sirius. Fig. 1b shows the predicted absorption. It can be seen that the HIA and especially the HSWA absorptions fall at the location of the “missing” absorption in the blue wing.
Figure 1d shows the consequences of this additional absorption on the simulated spectrum. In order to obtain a complete “filling” of the line we have multiplied the column density of the HIA and HSWA components by 2, which corresponds to a cloud two times denser than the LIC, or distances in the siriosphere two times larger, or any combination. It is beyond the scope of this paper to investigate all solutions since there are too many. However, from this crude estimate we conclude that siriospheric absorption could possibly account for the extra absorption observed in the blue wing.

## 3 Conclusion and discussion

We have used the most precise and updated model of the Sun-LIC interaction to calculate the Ly-$`\alpha `$ absorption by the neutral gas in and around the heliosphere along the line-of-sight towards Sirius. We find that the neutralized solar wind from the heliosheath is mainly responsible for the absorption, and that the red side of the absorption line is very well fitted when adding this absorption to the normal interstellar absorption. Under these conditions, there is no need to propose interstellar hot gas from a conductive interface to explain the red wing absorption, as BVL95 did in their analysis. Using analogies with the solar case, we also show that the remaining missing absorption on the blue side could be explained in the same way by a “siriosphere”, if Sirius is embedded in the neighboring cloud seen towards the star, if it has a stronger wind than the Sun, and if Sirius B does not completely ionize the hydrogen in and around the siriosphere. In this interpretation, there is no need for neutral H associated with a supersonic wind like that proposed by Bertin et al. (1995b). We also point out that the model results show that heliospheric absorption cannot be neglected in any Ly-$`\alpha `$ analysis, whatever the line-of-sight direction, if the interstellar absorption is relatively low (N(HI) $`\lesssim 10^{18.5}`$ cm<sup>-2</sup>).

###### Acknowledgements.
We thank our referee Brian Wood for his valuable comments, his help with the rewording of the paper, and the improvements he suggested. This work has been done in the framework of the INTAS project “The heliosphere in the Local Interstellar Cloud” and partly supported by the ISSI in Bern. V.I. has been supported by the MENESR (France). Yu.M. has been supported by RFBR Grants No. 98-01-00955 and 98-02-16759.
# Anomalous Low Temperature States in CeNi<sub>2</sub>Ge<sub>2</sub>

## I Introduction

Strongly correlated electron materials in general and the heavy fermion compounds in particular exhibit unusual metallic states and low temperature phase transitions that remain only partly understood. The temperature dependences of the thermodynamic and transport properties, and in particular of the resistivity, allow us to identify a number of apparently distinct regimes. At high temperatures, a weakly temperature dependent, large resistivity consistent with scattering from thermally disordered local magnetic moments is observed down to an upper temperature scale, $`T_{sf}`$. Below $`T_{sf}`$, the resistivity drops with decreasing temperature and in the absence of a phase transition the resistivity and other bulk properties follow the predictions of Fermi liquid theory below a lower temperature scale $`T_{FL}`$. The range between $`T_{FL}`$ and $`T_{sf}`$, sometimes described as a spin liquid regime, appears as a narrow cross-over region in most materials. An increasing number of systems are coming to light, however, in which - due to their proximity to magnetic phase transitions - the Fermi liquid regime is suppressed to very low temperatures or even masked by the onset of superconductivity or other forms of order (see e.g. ). The unconventional normal states observed in these metals might be examined in the first instance in terms of phenomenological models for the fluctuations of the local order parameter, i.e. the local magnetisation for ferromagnetism and the local staggered magnetisation for antiferromagnetism , or of some associated variable. If the magnetic ordering temperature is suppressed to absolute zero, these modes soften over large portions of reciprocal space at low temperatures, leading to a strong enhancement of the quasiparticle scattering rate and potentially to a breakdown of the Fermi liquid description in its simplest form.
This breakdown may be expected to be particularly apparent in the temperature dependence of the resistivity $`\rho (T)`$ that may deviate from the usual Fermi liquid form $`\rho (T)\propto T^2`$ in pure samples at low temperatures. To look for a breakdown of the Fermi liquid description in pure materials as opposed to the more extensively studied doped heavy fermion systems, we have selected stoichiometric compounds that are close to being magnetically ordered at low temperature and have used hydrostatic pressure to tune these compounds through quantum ($`T\to 0`$ K) phase transitions. The systems that we have selected, namely the isostructural and isoelectronic relatives CePd<sub>2</sub>Si<sub>2</sub> and CeNi<sub>2</sub>Ge<sub>2</sub>, allow for examinations of an antiferromagnetic quantum critical point in pure metals for the first time in considerable detail. CeNi<sub>2</sub>Ge<sub>2</sub> and CePd<sub>2</sub>Si<sub>2</sub> are isostructural to the heavy fermion superconductor CeCu<sub>2</sub>Si<sub>2</sub> and its larger volume relative CeCu<sub>2</sub>Ge<sub>2</sub> (both with the ThCr<sub>2</sub>Si<sub>2</sub> structure), but differ from CeCu<sub>2</sub>Si<sub>2</sub> in the number of d electrons in the d-metal constituent, and hence in the character of the Fermi surface and in the magnetic properties. At ambient pressure, CePd<sub>2</sub>Si<sub>2</sub> orders in an antiferromagnetic structure with a comparatively small moment of 0.7 $`\mu _B`$ below a Néel temperature $`T_N`$ of about 10 K , which falls with increasing pressure . The spin configuration consists of ferromagnetic (110) planes with spins normal to the planes and alternating in direction along the spin axis. In a recent study , we have elucidated the phase diagram of CePd<sub>2</sub>Si<sub>2</sub> up to hydrostatic pressures of about 30 kbar.
The Néel temperature has been found to drop linearly with pressure above 15 kbar and to extrapolate to zero at a critical pressure, $`p_c\simeq 28`$ kbar, while the shoulder in the resistivity, $`T_{sf}`$, shifts from 10 K at low pressure to about 100 K near $`p_c`$. Superconductivity appears below 430 mK in a limited pressure region of a few kbar on either side of $`p_c`$. This behaviour, and perhaps that in a related system CeRh<sub>2</sub>Si<sub>2</sub> , is believed to be consistent with an anisotropic pairing arising from magnetic interactions . Here, we concentrate on the striking normal state behaviour of the resistivity, which deviates strongly from the $`T^2`$ form usually associated with a Fermi-liquid (upper curve in Fig. 1 and left inset in Fig. 2). The range between $`T_{sf}`$ and $`T_{FL}`$, in many materials a narrow cross-over regime, thus appears to open up to more than two orders of magnitude in temperature and becomes the dominant feature of the system, exposing the intervening spin liquid state for closer scrutiny. The electronically and structurally equivalent compound CeNi<sub>2</sub>Ge<sub>2</sub> , which is of central interest here, has a slightly smaller lattice constant and its zero pressure behaviour may be expected to be similar to that of CePd<sub>2</sub>Si<sub>2</sub> at a pressure close to but higher than $`p_c`$.

## II Results

As shown in Fig. 1, high purity samples of CeNi<sub>2</sub>Ge<sub>2</sub> at ambient pressure and of CePd<sub>2</sub>Si<sub>2</sub> at $`p_c`$ exhibit qualitatively similar anomalous temperature dependences of the resistivity. For CePd<sub>2</sub>Si<sub>2</sub> near $`p_c`$, the resistivity has a form $`\rho =\rho _0+AT^x`$ with an exponent $`x`$ close to 1 over a wide range, from the onset of superconductivity near 0.4 K up to about 40 K (left inset of Fig. 2).
Experiments carried out on various samples and different sample orientations have revealed a variation of the exponents in the range $`1.1<x<1.4`$ and a general trend towards lower values for purer (lower $`\rho _0`$) specimens, indicating a possible limiting value of close to 1 for ideally pure samples (inset of Fig. 1). Our present measurements at ambient pressure show that CeNi<sub>2</sub>Ge<sub>2</sub> follows a similar power law variation of the resistivity over at least an order of magnitude in temperature above about 200 mK and up to pressures of at least 17 kbar (Fig. 1). We have extended this study to lower temperatures under applied magnetic fields in a high quality sample, which showed no sign of a superconducting transition down to 100 mK at 0.5 T and above. At 0.5 T, a detailed analysis of the temperature dependence of the power-law exponent (more precisely, the logarithmic derivative defined in the caption of Fig. 2) reveals a rapid cross-over to the Fermi-liquid value $`x=2`$ below a temperature $`T_{FL}\simeq 200`$ mK (Fig. 2). This cross-over region rises and broadens with increasing magnetic field (Fig. 2). These findings indicate that CeNi<sub>2</sub>Ge<sub>2</sub> is delicately placed close to the critical point studied in CePd<sub>2</sub>Si<sub>2</sub> at high pressure, returning to Fermi liquid behaviour at low temperatures as the spin fluctuations are quenched by an increasing magnetic field. One sample of CeNi<sub>2</sub>Ge<sub>2</sub> shows a complete loss of resistance at ambient pressure below 200 mK (Fig. 3), similar to the occurrence of superconductivity in high pressure CePd<sub>2</sub>Si<sub>2</sub>, while a number of other high quality crystals exhibit a drop in $`\rho (T)`$ of about 85% at low temperatures. We note that a downturn of approximately 10% in the resistivity of CeNi<sub>2</sub>Ge<sub>2</sub> below 100 mK is also evident from data in .
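The exponent analysis mentioned above, the logarithmic derivative $`x(T)=d\mathrm{ln}(\rho -\rho _0)/d\mathrm{ln}T`$, can be illustrated on synthetic data; a minimal sketch with made-up parameter values (not fitted to the real samples):

```python
import math

# Synthetic resistivity data rho = rho0 + A * T**x (arbitrary units;
# parameter values are illustrative, not taken from the measurements)
rho0, A, x_true = 0.5, 1.0, 1.2
T = [0.1 * 1.3**k for k in range(25)]          # log-spaced temperatures
rho = [rho0 + A * t**x_true for t in T]

def effective_exponent(T, rho, rho0):
    """x(T) = d ln(rho - rho0) / d ln T from centered finite differences."""
    y = [math.log(r - rho0) for r in rho]
    t = [math.log(v) for v in T]
    return [(y[i + 1] - y[i - 1]) / (t[i + 1] - t[i - 1])
            for i in range(1, len(T) - 1)]

x_eff = effective_exponent(T, rho, rho0)
print(min(x_eff), max(x_eff))   # both recover x_true = 1.2 for a pure power law
```

For a pure power law the logarithmic derivative is exactly the exponent; on real data it becomes a temperature-dependent effective exponent, which is how the cross-over to $`x=2`$ shows up.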
A study of the shift of this transition in a second sample with magnetic field yields an initial slope $`dB_{c2}/dT\simeq 5`$ T/K (inset of Fig. 3), comparable with the value of $`\simeq 6`$ T/K observed for the superconducting transition in high-pressure CePd<sub>2</sub>Si<sub>2</sub> . The transition is very sensitive to hydrostatic pressure (inset of Fig. 3) and above 4 kbar, no drop of the resistivity is observed. The resulting phase diagram is reminiscent of the behaviour of CePd<sub>2</sub>Si<sub>2</sub> at high pressure and is consistent with our conjecture that CeNi<sub>2</sub>Ge<sub>2</sub> is a smaller volume relative of CePd<sub>2</sub>Si<sub>2</sub>. Further anomalies were discovered at still higher pressure in CeNi<sub>2</sub>Ge<sub>2</sub>, which showed indications of a new ordered phase (labelled $`T_x`$ in Fig. 4) at around 1 K and, again, a drop of resistance of up to 100% below about 0.4 K ($`T_s`$ in Fig. 4), reminiscent of superconductivity. We note that early measurements of the specific heat in the same region of the phase diagram have revealed an anomalous peak, which may be associated with our upper transition at $`T_x`$ . In contrast to the superconducting phase found in high pressure CePd<sub>2</sub>Si<sub>2</sub> and to the corresponding low pressure phase in CeNi<sub>2</sub>Ge<sub>2</sub>, these high pressure states in CeNi<sub>2</sub>Ge<sub>2</sub> are relatively insensitive to variations of lattice density. In this sense they are reminiscent of the behaviour observed in CeCu<sub>2</sub>Si<sub>2</sub>, where a superconducting phase persists over a wide region of pressure up to about 100 kbar .

## III Discussion

In both CeNi<sub>2</sub>Ge<sub>2</sub> and CePd<sub>2</sub>Si<sub>2</sub>, the temperature dependence of the resistivity is characterised over a wide range by a power-law with exponent close to 1, and by rapid cross-overs to the high and low temperature forms.
This behaviour is not limited to a critical lattice density alone, but, at least in CeNi<sub>2</sub>Ge<sub>2</sub>, appears to extend over a considerable range in pressure. These properties of our two tetragonal metals contrast sharply with that of the cubic antiferromagnet CeIn<sub>3</sub> . In the latter the resistivity deviates from the Fermi liquid form only in a very narrow pressure range near the critical pressure $`p_c`$ where $`T_N\to 0`$ K. At $`p_c`$ and in low magnetic fields the resistivity exponent, or more precisely $`d\mathrm{ln}(\rho -\rho _0)/d\mathrm{ln}T`$, grows smoothly with decreasing temperature and tends towards a value of about $`3/2`$ at around 1 K. We now consider to what extent these findings may be understood in terms of the standard model for spin fluctuation scattering. In this model, the resistivity is given essentially by the population of spin excitations that is proportional to an effective volume $`q_T^d`$ in reciprocal space centred on the ordering wavevector $`𝐐`$. Here $`d`$ is the spatial dimension and $`q_T`$ is a characteristic thermal wavevector that is proportional to $`T^{1/z}`$, if the spin fluctuation rate $`\mathrm{\Gamma }_{𝐐+𝐪}`$ at a wavevector $`𝐐+𝐪`$ varies as $`q^z`$ at small $`q`$ and at $`p_c`$. For the simplest case, $`z=2`$ and thus the resistivity is predicted to vary as $`T`$ for $`d=2`$ and $`T^{3/2}`$ for $`d=3`$ far below $`T_{sf}`$ and at $`p_c`$, when antiferromagnetic order vanishes continuously . These results are qualitatively consistent with experiment, if we take $`d=3`$ for cubic CeIn<sub>3</sub>, and if the effective dimension is closer to 2 for tetragonal CePd<sub>2</sub>Si<sub>2</sub> and CeNi<sub>2</sub>Ge<sub>2</sub>. The latter assumption is not necessarily inconsistent with the known magnetic structure of CePd<sub>2</sub>Si<sub>2</sub>, which suggests a frustrated spin coupling along the c-axis and hence a strongly anisotropic spin fluctuation spectrum .
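The counting in the last paragraph reduces to a low-temperature resistivity exponent of $`d/z`$; a trivial sketch of this bookkeeping:

```python
# rho - rho0 ~ q_T**d with q_T ~ T**(1/z), i.e. an exponent d/z
def resistivity_exponent(d, z=2):
    """Spin-fluctuation prediction for the exponent x in rho - rho0 ~ T**x."""
    return d / z

print(resistivity_exponent(2))  # quasi-2D: rho ~ T, as in the tetragonal metals
print(resistivity_exponent(3))  # 3D: rho ~ T**(3/2), as in cubic CeIn3
```

The measured values ($`x`$ close to 1 in the tetragonal compounds, about $`3/2`$ in CeIn<sub>3</sub>) are what motivate the effective-dimension assignments in the text.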
There are at least two major difficulties with this description. Firstly, it assumes that essentially all carriers on the Fermi surface scatter strongly from excited spin fluctuations. This assumption, however, cannot be justified within the usual Born approximation in the presence of harmonic spin fluctuations in an ideally pure system. Under these conditions, those carriers not satisfying the ‘Bragg’ condition for scattering from critical spin fluctuations near $`𝐐`$ are only weakly perturbed and at low T lead to a Fermi liquid $`T^2`$ resistivity and not to the above anomalous exponents . Secondly, for the standard form assumed for the temperature and pressure dependence of $`\mathrm{\Gamma }_{𝐐+𝐪}`$ the model of the last paragraph predicts (i) a gradual increase of $`d\mathrm{ln}\rho /d\mathrm{ln}T`$ with decreasing $`T`$ tending to the limiting exponent only for $`T\ll T_{sf}`$, and (ii) a rapid cross-over to the Fermi liquid exponent at low T as a function of pressure (or when the magnetic transition is not continuous). Neither of these predictions appear to be consistent with our findings in CePd<sub>2</sub>Si<sub>2</sub> and CeNi<sub>2</sub>Ge<sub>2</sub>. The first of these two difficulties may perhaps be cured by including the effects of residual impurities, spin fluctuation anharmonicity, and corrections to the Born approximation that may homogenise the quasiparticle relaxation rate over the Fermi surface. The second problem might be resolved by means of a more realistic model for the temperature and pressure dependences of $`\mathrm{\Gamma }_{𝐐+𝐪}`$ than that currently employed. A first step towards such refinements has recently been proposed, specifically for the effects of residual impurities , but it is too early to tell whether or not it can account for all of the features we observe, both in our tetragonal and the cubic systems, in a consistent way.
We also point out that a description of CePd<sub>2</sub>Si<sub>2</sub> and CeNi<sub>2</sub>Ge<sub>2</sub> based on a more extreme separation of charge and spin degrees of freedom than is present in current approaches cannot be ruled out . A complete theory would have to account not only for the differences between our tetragonal and cubic systems and the unexpectedly wide range of apparent criticality in CePd<sub>2</sub>Si<sub>2</sub> and CeNi<sub>2</sub>Ge<sub>2</sub>, but also for the occurrence of superconductivity on the border of antiferromagnetism in all of these cases, and the higher pressure phases that we observe in CeNi<sub>2</sub>Ge<sub>2</sub>.

## IV Conclusion

The 4f-electron metals CeNi<sub>2</sub>Ge<sub>2</sub> and CePd<sub>2</sub>Si<sub>2</sub> offer the possibility of observing an unconventional normal state over a wide window in temperature and pressure without chemical doping. The similarity between the two materials suggests that CeNi<sub>2</sub>Ge<sub>2</sub> at ambient pressure is conveniently placed very close to the antiferromagnetic quantum critical point, as studied under high pressure in CePd<sub>2</sub>Si<sub>2</sub>, and makes it an attractive material for future investigations. The fixed value of the power-law exponents over a wide range in temperature - reminiscent of the behaviour observed in some of the high T<sub>c</sub> oxides - and the wide pressure range of apparent criticality appear to defy a description in terms of the spin fluctuation model in its simplest form. Superconductivity in rare-earth based heavy fermion metals is an uncommon phenomenon. CeNi<sub>2</sub>Ge<sub>2</sub> may be the second ambient pressure superconductor in this class after CeCu<sub>2</sub>Si<sub>2</sub>, if the zero-resistance state observed at low pressures can be identified as superconductivity.
The pairing mechanism in CePd<sub>2</sub>Si<sub>2</sub> and CeNi<sub>2</sub>Ge<sub>2</sub>, in which superconductivity exists only close to the very edge of antiferromagnetic order, may however be different from that in CeCu<sub>2</sub>Si<sub>2</sub> . On the other hand, the high pressure, zero-resistance state in CeNi<sub>2</sub>Ge<sub>2</sub> could offer a new perspective for our understanding of the first heavy fermion superconductor, CeCu<sub>2</sub>Si<sub>2</sub>.

###### Acknowledgements.

We thank, in particular, P. Coleman, J. Flouquet, P. Gegenwart, C. Geibel, I. Gray, S. Kambe, D. Khmelnitskii, F. Kromer, M. Lang, A. P. Mackenzie, G. J. McMullan, A. J. Millis, P. Monthoux, C. Pfleiderer, A. Rosch, G. Sparn, F. Steglich, A. Tsvelik and I. R. Walker. The research has been supported partly by the Cambridge Research Centre in Superconductivity, headed by Y. Liang, by the EPSRC of the UK, by the EU, and by the Cambridge Newton Trust.
# Quasi-long range order in the random anisotropy Heisenberg model

## Abstract

The large distance behaviors of the random field and random anisotropy Heisenberg models are studied with the functional renormalization group in $`4-ϵ`$ dimensions. The random anisotropy model is found to have a phase with the infinite correlation radius at low temperatures and weak disorder. The correlation function of the magnetization obeys a power law $`\langle 𝐦(𝐫_1)𝐦(𝐫_2)\rangle \sim |𝐫_1-𝐫_2|^{-0.62ϵ}`$. The magnetic susceptibility diverges at low fields as $`\chi \sim H^{-1+0.15ϵ}`$. In the random field model the correlation radius is found to be finite at arbitrarily weak disorder.

The effect of impurities on the order in condensed matter is interesting since disorder is almost inevitably present in any system. If the disorder is weak, the short range order is the same as in the pure system. However, the large distance behavior can be strongly modified by arbitrarily weak disorder. This happens in systems of continuous symmetry in the presence of a random symmetry breaking field . The first experimental example of this kind is the amorphous magnet . During the last decade many other related systems were found. These are liquid crystals in porous media , nematic elastomers , He-3 in aerogel and vortex phases of impure superconductors . The nature of the low-temperature phases of these systems is still unclear. The only reliable statement is that long range order is absent . However, other details of the large distance behavior are poorly understood. Neutron scattering reveals sharp Bragg peaks in impure superconductors at low temperatures and weak external magnetic fields. Since the vortices can not form a regular lattice , it is tempting to assume that there is quasi-long range order (QLRO), that is, the correlation radius is infinite and the correlation functions depend on the distance slowly.
Recent theoretical and numerical studies of the random field XY model, which is the simplest model of the vortex system in the impure superconductor , support this picture. The theoretical advances are afforded by two new technical approaches: the functional renormalization group and the replica variational method . These methods are free from drawbacks of the standard renormalization group and give reasonable results. The variational method regards a possibility of spontaneous replica symmetry breaking and treats the fluctuations approximately. On the other hand, the functional renormalization group provides a subtle analysis of the fluctuations about the replica symmetrical ground state. Surprisingly, the methods suggest close and sometimes even the same results. Both techniques were originally suggested for the random manifolds and then made it possible to obtain information about some other disordered systems with abelian symmetry . Less is known about non-abelian systems. The simplest of them are the random field and random anisotropy Heisenberg models. The latter was introduced as a model of the amorphous magnet . In spite of a long discussion, the question about QLRO in these models is still open. There is experimental evidence in favor of no QLRO . On the other hand, recent numerical simulations support the possibility of QLRO in these systems. The only theoretical approach developed up to now is based on the spherical approximation . However, there is no reason for this approximation to be valid. In this letter we study the random field and random anisotropy Heisenberg models in $`4-ϵ`$ dimensions with the functional renormalization group. The large distance behaviors of the systems are found to be quite different. While in the random field model the correlation radius is always finite, the random anisotropy Heisenberg model has a phase with QLRO.
In this phase the correlation function of the magnetization obeys a power law and the magnetic susceptibility diverges at low fields. To describe the large distance behavior at low temperatures we use the classical nonlinear $`\sigma `$-model with the Hamiltonian $$H=\int d^Dx[J\underset{\mu }{\sum }\partial _\mu 𝐧(𝐱)\partial _\mu 𝐧(𝐱)+V_{\mathrm{imp}}(𝐱)],$$ (1) where $`𝐧(𝐱)`$ is the unit vector of the magnetization, $`V_{\mathrm{imp}}(𝐱)`$ the random potential. In the random field case it has the form $$V_{\mathrm{imp}}=\underset{\alpha }{\sum }h_\alpha (𝐱)n_\alpha (𝐱);\alpha =x,y,z,$$ (2) where the random field $`𝐡(𝐱)`$ has a Gaussian distribution and $`\langle h_\alpha (𝐱)h_\beta (𝐱^{\prime })\rangle =A^2\delta (𝐱-𝐱^{\prime })\delta _{\alpha \beta }`$. In the random anisotropy case the random potential is given by the equation $$V_{\mathrm{imp}}=\underset{\alpha ,\beta }{\sum }\tau _{\alpha \beta }(𝐱)n_\alpha (𝐱)n_\beta (𝐱);\alpha ,\beta =x,y,z,$$ (3) where $`\tau _{\alpha \beta }(𝐱)`$ is a Gaussian random variable, $`\langle \tau _{\alpha \beta }(𝐱)\tau _{\gamma \delta }(𝐱^{\prime })\rangle =A^2\delta _{\alpha \gamma }\delta _{\beta \delta }\delta (𝐱-𝐱^{\prime })`$. Random potential (3) corresponds to the same symmetry as the more conventional choice $`V_{\mathrm{imp}}=-(\mathrm{𝐡𝐧})^2`$, but is more convenient for the further discussion. The Imry-Ma argument suggests that in our problem long range order is absent at any dimension $`D<4`$. One can estimate the Larkin length, up to which there are strong ferromagnetic correlations, with the following qualitative renormalization group (RG) approach. One removes the fast modes and rewrites the Hamiltonian in terms of block spins corresponding to the scale $`L=ba`$, where $`a`$ is the ultraviolet cut-off, and then rescales so that the Hamiltonian restores its initial form with new constants $`A(L),J(L)`$.
Dimensional analysis provides the estimates $$J(L)\sim b^{D-2}J(a);A(L)\sim b^{D/2}A(a)$$ (4) To estimate the typical angle $`\varphi `$ between neighboring block spins, one notes that the effective field acting on each spin has two contributions: the exchange contribution and the random one. The exchange contribution of order $`J(L)`$ is oriented along the local average direction of the magnetization. The random contribution of order $`A(L)`$ may have any direction. This allows one to write at low temperatures that $`\varphi (L)\sim A(L)/J(L)`$. The Larkin length corresponds to the condition $`\varphi (L)\sim 1`$ and equals $`L\sim (J/A)^{2/(4-D)}`$ in agreement with the Imry-Ma argument . If Eq. (4) were exact, the Larkin length could be interpreted as the correlation radius. However, there are two sources of corrections to Eq. (4). Both of them are relevant already at the derivation of the RG equation for the pure system in $`2+ϵ`$ dimensions . The first source is the renormalization due to the interaction and the second one results from the rescaling of the magnetization, which is necessary to ensure the fixed length condition $`𝐧^2=1`$. The leading corrections to Eq. (4) are proportional to $`\varphi ^2J,\varphi ^2A`$. Thus, the RG equation for the combination $`(A(L)/J(L))^2`$ is the following $$\frac{d}{d\mathrm{ln}L}\left(\frac{A(L)}{J(L)}\right)^2=ϵ\left(\frac{A(L)}{J(L)}\right)^2+c\left(\frac{A(L)}{J(L)}\right)^4,ϵ=4-D$$ (5) If the constant $`c`$ in Eq. (5) is positive, the Larkin length is indeed the correlation radius. But if $`c<0`$, the RG equation has a fixed point, corresponding to the phase with the infinite correlation radius. As is seen below, both situations are possible, depending on the system. To derive the RG equations in a systematic way we use the method suggested by Polyakov for the pure system.
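The two scenarios distinguished by the sign of $`c`$ in Eq. (5) can be made concrete by integrating the flow of $`g(L)=(A(L)/J(L))^2`$ numerically; a minimal sketch with arbitrary illustrative values of $`ϵ`$, $`c`$ and $`g(0)`$:

```python
def flow(g0, eps, c, dl=0.01, l_max=100.0):
    """Integrate dg/dl = eps*g + c*g**2 by explicit Euler, g = (A/J)^2.
    Returns the trajectory; stops early once g ~ 1 (the Larkin scale,
    where phi(L) ~ 1 and ferromagnetic correlations are lost)."""
    g, traj = g0, [g0]
    for _ in range(int(l_max / dl)):
        g += dl * (eps * g + c * g * g)
        traj.append(g)
        if g > 1.0:
            break
    return traj

eps = 0.1
runaway = flow(g0=0.01, eps=eps, c=+1.0)         # c > 0: runaway at a finite scale
to_fixed_point = flow(g0=0.01, eps=eps, c=-1.0)  # c < 0: g -> g* = eps/|c|

print(len(runaway), to_fixed_point[-1])  # early cut-off vs g close to 0.1
```

For $`c>0`$ the trajectory is cut off at a finite RG "time" (the Larkin length in these units), while for $`c<0`$ the disorder strength saturates at the fixed point $`g^{*}=ϵ /|c|`$, which is the QLRO scenario.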
The same consideration as in the XY and random manifold models suggests that near a zero-temperature fixed point in $`4-ϵ`$ dimensions there is an infinite set of relevant operators. After replica averaging, the relevant part of the effective replica Hamiltonian can be represented in the following form $$H_R=\int d^Dx[\sum _a\frac{1}{2T}\sum _\mu \partial _\mu 𝐧_a\partial _\mu 𝐧_a-\sum _{ab}\frac{R(𝐧_a𝐧_b)}{T^2}],$$ (6) where $`a,b`$ are replica indices, $`R(z)`$ is some function, and $`T`$ the temperature. In the random anisotropy case the function $`R(z)`$ is even due to the symmetry with respect to changing the sign of the magnetization. The one-loop RG equations in $`4-ϵ`$ dimensions are obtained by a straightforward combination of the methods of Refs. and . The equations below are given for an arbitrary number $`N`$ of components of the magnetization. In the Heisenberg model $`N=3`$. The RG equations become simpler after the substitution $`z=\mathrm{cos}\varphi `$ for the argument of the function $`R(z)`$. In terms of this new variable one has to find even periodic solutions $`R(\varphi )`$. The period is $`2\pi `$ in the random field case and $`\pi `$ in the random anisotropy case. At a zero-temperature fixed point the one-loop equations are $$\frac{d\mathrm{ln}T}{d\mathrm{ln}L}=-(D-2)-2(N-2)R^{\prime \prime }(0)+O(R^2,T);$$ (7) $`0={\displaystyle \frac{dR(\varphi )}{d\mathrm{ln}L}}=ϵR(\varphi )+(R^{\prime \prime }(\varphi ))^2-2R^{\prime \prime }(\varphi )R^{\prime \prime }(0)`$ (8) $`-(N-2)[4R(\varphi )R^{\prime \prime }(0)+2\mathrm{c}\mathrm{t}\mathrm{g}\varphi R^{\prime }(\varphi )R^{\prime \prime }(0)-\left({\displaystyle \frac{R^{\prime }(\varphi )}{\mathrm{sin}\varphi }}\right)^2]+O(R^3,T)`$ (9) The two-spin correlation function is given by $`\langle 𝐧(𝐱)𝐧(𝐱^{\prime })\rangle \sim |𝐱-𝐱^{\prime }|^{-\eta }`$, where $$\eta =-2(N-1)R^{\prime \prime }(0)$$ (10) The same equations (7-10) were derived by a different method in Ref. .
In that paper the critical behavior in $`4+|ϵ|`$ dimensions was studied by considering analytical fixed point solutions $`R(\varphi )`$. In the Heisenberg model analytical solutions are absent, and they are unphysical for $`N\ge 3`$ . In this letter we search for non-analytical $`R(\varphi )`$. As shown below, the random field model can be completely studied by analytical means. In the random anisotropy case, one has to solve Eq. (9) numerically. Since the coefficients of Eq. (9) are large as $`\varphi \to 0`$, it is convenient to use the expansion of $`R(\varphi )`$ in $`|\varphi |`$ at small $`\varphi `$. At larger $`\varphi `$ the equation is integrated by the Runge-Kutta method. The solutions to be found have zero derivatives at $`\varphi =0,\pi /2`$. At $`N=3`$ the solution with the largest $`|R^{\prime \prime }(0)|`$, which corresponds to $`\eta =0.62ϵ`$ (cf. Eq. (10)), has two zeroes in the interval $`[0,\pi ]`$. There are also solutions with 4 and more zeroes. They all correspond to $`\eta <0.5ϵ`$. We shall see below that these solutions are unstable. To test the stability of the solution with two zeroes, we use an approximate method. The instability with respect to a constant shift of the function $`R(\varphi )`$ is of no interest for us, since constant shifts do not change the correlators . To study the stability with respect to the other perturbations, it is convenient to rewrite Eq. (9), substituting $`\omega (R^{\prime \prime }(\varphi ))^2`$ for $`(R^{\prime \prime }(\varphi ))^2`$. The case of interest is $`\omega =1`$, but at $`\omega =0`$ the equation can be solved exactly. The solution at $`\omega =1`$ can then be found by perturbation theory in $`\omega `$. The exact solution at $`\omega =0`$ is $`R_{\omega =0}(\varphi )=ϵ(\mathrm{cos}\varphi /24+1/120)`$. The perturbative expansion provides the following asymptotic series for $`\eta `$: $`\eta =ϵ(0.67-0.08\omega +0.14\omega ^2-\mathrm{})`$. The resulting estimate $`\eta =ϵ(0.67\pm 0.08)`$ agrees well with the numerical result.
This allows us to expect that the stability analysis of the solution $`R_{\omega =0}`$ of the equation with $`\omega =0`$ provides information about the stability of the solution of Eq. (9). A simple calculation shows that $`R_{\omega =0}`$ is stable in the linear approximation. Thus, there is a stable zero-temperature fixed point of the RG equations with the critical exponent of the correlation function $$\eta =0.62ϵ$$ (11) The critical exponent $`\gamma `$ of the magnetic susceptibility $`\chi (H)\sim H^{-\gamma }`$ in a weak uniform field $`H`$ is given by the equation $$\gamma =1+(N-1)R^{\prime \prime }(0)/2=1-0.15ϵ$$ (12) Let us demonstrate the absence of physically acceptable fixed points in the random field case. We first derive an inequality for the critical exponents and then show that it has no solutions. We use the rigorous inequality for the connected and disconnected correlation functions $$\langle 𝐧_a(𝐪)𝐧_a(-𝐪)\rangle -\langle 𝐧_a(𝐪)𝐧_b(-𝐪)\rangle \le \mathrm{const}\sqrt{\langle 𝐧_a(𝐪)𝐧_a(-𝐪)\rangle },$$ (13) where $`𝐧(𝐪)`$ is a Fourier component of the magnetization and $`a,b`$ are replica indices. At a fixed point, Eq. (13) provides an inequality for the critical exponents of the connected and disconnected correlation functions . The large-distance behavior of the connected correlation function at a zero-temperature fixed point can be derived from the expression $`\chi \sim \int \langle 𝐧(\mathrm{𝟎})𝐧(𝐱)\rangle d^Dx`$ and the critical exponent of the susceptibility (12). Finally, one obtains the following relation $$4-D\le \frac{3-N}{N-1}\eta ,$$ (14) where $`\eta `$ is given by Eq. (10). This relation has no solutions at $`N=3`$. At $`N>3`$ Eq. (14) is incompatible with the requirement $`\eta >0`$. Thus, there are no accessible fixed points for $`N\ge 3`$. In the previous paragraph, inequality (14) is derived for the model (1) with the Gaussian random field (2). It can also be extended to the more general situation (6). Adding a weak Gaussian random field (2) to any Hamiltonian suffices for Eq. (13) to become valid.
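Assuming the inequality (14) reads $`4-D\le (3-N)\eta /(N-1)`$, the claim that it has no solutions for $`N=3`$, $`D<4`$, and is incompatible with $`\eta >0`$ for $`N>3`$, can be spot-checked directly:

```python
def inequality_holds(D, N, eta):
    """Check the assumed form of Eq. (14): 4 - D <= (3 - N)/(N - 1) * eta."""
    return 4 - D <= (3 - N) / (N - 1) * eta

# N = 3: the right-hand side vanishes, so the inequality fails for any D < 4
assert not any(inequality_holds(D, 3, eta)
               for D in (3.0, 3.5, 3.9)
               for eta in (0.01, 0.1, 1.0))
# N > 3: the right-hand side is negative whenever eta > 0, so it fails as well
assert not inequality_holds(3.9, 4, 0.1)
```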
The addition of the Gaussian random field corresponds to the transformation $`R(𝐧_a𝐧_b)\to R(𝐧_a𝐧_b)+\mathrm{\Delta }𝐧_a𝐧_b`$ in Eq. (6), where $`\mathrm{\Delta }>0`$ is a constant. Thus, if at some $`\mathrm{\Delta }`$ the function $`\stackrel{~}{R}(𝐧_a𝐧_b)=R(𝐧_a𝐧_b)-\mathrm{\Delta }𝐧_a𝐧_b`$ is possible as a disorder-induced term in Eq. (6), then Eq. (13) is valid for the system with the disorder-induced term $`R(𝐧_a𝐧_b)`$. Finally, we conclude that inequality (14) may be violated only for Hamiltonians (6) that lie outside the physically acceptable region or on its border. This suggests a strong coupling regime with a presumably finite correlation radius. In the random anisotropy case a similar consideration uses the connected and disconnected correlation functions of the field $`(n_x(𝐫)n_y(𝐫))`$ in the presence of Gaussian disorder (3). The resulting condition for the critical exponent, $`\eta \ge (N-1)ϵ/4`$, rules out all but one of the fixed points of RG equation (9). The question of the large-distance behavior of the random field and random anisotropy Heisenberg models was discussed by Aharony and Pytte on the basis of an approximate equation of state . They also obtained QLRO in the random anisotropy case and its absence in the random field model. However, we believe that this is a coincidence, since the equation of state is valid only to first order in the strength of the disorder, while higher orders are crucial for the critical properties . In particular, that approach incorrectly predicts the absence of QLRO in the random field XY model and its presence in the random anisotropy spherical model. It also provides incorrect critical exponents in the Heisenberg case. The random anisotropy Heisenberg model is relevant for amorphous magnets . At the same time, for their large-distance behavior the dipole interaction may be important . Besides, a weak nonrandom anisotropy is inevitably present due to mechanical stresses.
In conclusion, we have found that the random anisotropy Heisenberg model has an infinite correlation radius and a power-law dependence of the correlation function of the magnetization on distance at low temperatures and weak disorder in $`4-ϵ`$ dimensions. On the other hand, the correlation radius of the random field Heisenberg model is always finite. The author is thankful to E. Domany, G. Falkovich, Y. Gefen, S.E. Korshunov, Y.B. Levinson, V.L. Pokrovskiy and A.V. Shytov for useful discussions. This work was supported by RFBR grant 96-02-18985 and by grant 96-15-96756 of the Russian Program of Leading Scientific Schools.
# Heavy Quark Production (hep-ph/9812270, SMU-HEP/98-07)*footnote*: Presented at the 4th Workshop on Heavy Quarks at Fixed Target (HQ 98), Fermilab, Batavia, IL, 10-12 Oct 1998. ## Introduction The production of heavy quarks in high energy processes has become an increasingly important subject of study both theoretically and experimentally. The theory of heavy quark production in perturbative Quantum Chromodynamics (pQCD) is more challenging than that of light parton (jet) production because of the new physics issues brought about by the additional heavy quark mass scale. The correct theory must properly take into account the changing role of the heavy quark over the full kinematic range of the relevant process, from the threshold region (where the quark behaves like a typical “heavy particle”) to the asymptotic region (where the same quark behaves effectively like a massless parton). With steadily improving experimental data on a variety of processes sensitive to the contribution of heavy quarks (including the direct measurement of heavy flavor production), this is a very rich field for studying different aspects of QCD theory, including the problems of multiple scales, summation of large logarithms, subtleties of renormalization, and higher-order corrections. We shall briefly review a limited subset of these issues.<sup>1</sup><sup>1</sup>1For a recent comprehensive review, see: Frixione, Mangano, Nason, and Ridolfi, Ref.
FMNR97a ## The Factorization Theorem Perturbative calculations for heavy quark production are performed in the context of the factorization theorem, expressed below in the commonly used form: $`\sigma _{ac}`$ $`=`$ $`f_{ab}(x,\mu ^2)\otimes \widehat{\sigma }_{bc}(Q^2/\mu ^2,M_H^2/\mu ^2,\alpha _s(\mu ))+𝒪(\mathrm{\Lambda }_{QCD}^2/Q^2)`$ (1) While the factorization was originally proven for massless quarks,CSS85 the theorem has recently been extended by CollinsCollins97 to incorporate quarks of any mass, including “heavy quarks.” (Note, we have explicitly retained the $`M_H^2`$ dependence in $`\widehat{\sigma }`$.) It is important to note that the corrections to the factorization are only of order $`\mathrm{\Lambda }_{QCD}^2/Q^2`$, and not $`M_H^2/Q^2`$, even in the case of general quark masses. The factorization theorem can also be expressed as a composition of $`t`$-channel two-particle-irreducible (2PI) amplitudes:<sup>2</sup><sup>2</sup>2I must necessarily leave out many details here; for a precise treatment, see CollinsCollins97 . $`\sigma _{ac}`$ $`\simeq `$ $`\widehat{\sigma }_{bc}\otimes f_{ab}\simeq \left[C{\displaystyle \frac{1}{1-(1-Z)K}}\right]Z\left[{\displaystyle \frac{1}{1-K}}T\right]`$ (2) Here, C represents the graph for a hard scattering, K represents the graph for a rung, T represents the graph that couples to the target, and Z represents a collinear projection operator. The first term in brackets roughly corresponds to the hard scattering coefficient function $`\widehat{\sigma }`$, and the second term to the parton distribution function (PDF), $`f`$. Note that these two terms communicate only through the collinear projection operator Z. Part of the effort in generalizing the factorization theorem to the case of massive quarks involves constructing the proper Z and demonstrating that terms containing (1-Z) are power-suppressed.
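The $`\otimes `$ in Eq. (1) denotes the usual longitudinal-momentum convolution, $`(f\otimes \widehat{\sigma })(x)=\int _x^1(d\xi /\xi )f(\xi )\widehat{\sigma }(x/\xi )`$. As a minimal numerical sketch, with toy, hypothetical inputs standing in for real PDFs and Wilson coefficients:

```python
import numpy as np

def convolve(f, sigma_hat, x, n=4000):
    """Evaluate (f ⊗ σ̂)(x) = ∫_x^1 (dξ/ξ) f(ξ) σ̂(x/ξ) with the trapezoid rule."""
    xi = np.linspace(x, 1.0, n)
    y = f(xi) * sigma_hat(x / xi) / xi
    return float(((y[:-1] + y[1:]) * np.diff(xi)).sum() / 2.0)

# Illustrative inputs only: a valence-like PDF shape and a hypothetical LO coefficient
f_toy = lambda xi: xi**0.5 * (1.0 - xi)**3
sig_toy = lambda z: 1.0 - z

print(convolve(f_toy, sig_toy, x=0.01))
```

A quick sanity check: with $`f(\xi )=\xi `$ and $`\widehat{\sigma }=1`$ the integral is $`1-x`$ exactly, which the routine reproduces to numerical accuracy.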
However, once Z is determined, Eq. (2) yields an all-orders prescription for computing both the hard scattering coefficient ($`\widehat{\sigma }`$) and the parton distribution function ($`f`$). A calculation using this formalism was first performed by ACOTACOT for the case of heavy quark production in deeply inelastic scattering, and we now examine this process in detail. ## Heavy Quark Production in DIS Several experimental groupsHERA have studied the semi-inclusive deeply inelastic scattering (DIS) process for heavy-quark production, $`\mathrm{\ell }_1+N\to \mathrm{\ell }_2+Q+X.`$ New data from HERA investigate the DIS process in a very different kinematic range from that available at fixed-target experiments. This has changed the way that we compute semi-inclusive DIS heavy quark production. Traditionally, the heavy quark mass was treated as a large scale, and the number of active parton flavors was fixed to the number of quarks lighter than the heavy quark. In this scheme, the perturbation expansion begins with the $`𝒪(\alpha _s^1)`$ heavy quark creation (fusion) process $`\gamma g\to c\overline{c}`$ (cf., Fig. 1b). We refer to this approach as the Fixed Flavor Number (FFN) scheme, since the number of flavors coming from parton distributions is fixed at three for charm production.<sup>3</sup><sup>3</sup>3The necessary diagrams have been computed to $`𝒪(\alpha _S^2)`$ by Smith, van Neerven, and collaborators, cf., Ref. BSV . More recently, a Variable Flavor Number (VFN) scheme (ACOT ACOT ) has been proposed which includes the heavy quark as an active parton flavor with non-zero heavy quark mass. In this case, the perturbation expansion begins with the $`𝒪(\alpha _s^0)`$ heavy quark excitation process $`\gamma c\to c`$ (cf., Fig. 1a). The key advantages of this scheme are:schmidt 1.
By incorporating the heavy quark into the parton framework, the composite scheme yields a result which is valid from threshold to asymptotic energies; in contrast, the FFN scheme contains unsubtracted mass singularities which vitiate the perturbation expansion in the $`m_c\to 0`$ or $`E\to \mathrm{\infty }`$ limit. 2. Because the composite scheme resums the large logarithms appearing in the FFN scheme into the parton distribution functions, it includes the numerically dominant terms of the $`𝒪(\alpha _s^2)`$ FFN scheme calculation in an $`𝒪(\alpha _s^1)`$ calculation. In effect, the VFN scheme subsumes the FFN scheme. To illustrate this fact with a concrete calculation, in Fig. 2 we plot the cross section for “heavy” quark production as a function of the quark mass.<sup>4</sup><sup>4</sup>4To be specific, we have computed single quark production for photon exchange with $`x=0.1`$, $`\mu =Q=10`$ GeV; the cross section is in arbitrary units. This figure clearly shows the three important kinematic regions. 1) In the massless region, where $`m_H\ll Q`$, the ACOT VFN result reduces precisely to the massless $`\overline{\mathrm{MS}}`$ result. 2) In the decoupling region, where $`m_H\gg Q`$, this “heavy quark” decouples and its contribution vanishes. 3) In the transition region, where $`m_H\sim Q`$, this (not-so) “heavy quark” plays an important dynamic role. While the FFN scheme is appropriate only when $`m_H\sim Q`$, we see that the VFN scheme is valid throughout the full kinematic range.<sup>5</sup><sup>5</sup>5Buza et al. have determined the asymptotic form of the heavy quark coefficient functions, which are then used to determine the threshold matching conditions between the three- and four-flavor schemes, Ref. BSV . Thorne and Roberts have a similar scheme with slightly different matching conditions, Ref. thorne . This point is also illustrated in a calculation by Kretzerkretzer (cf., Fig.
3) which shows the partial contributions to the charged current $`F_2^{charm}`$.<sup>6</sup><sup>6</sup>6Kretzer and Schienbein have performed the first calculation of the $`𝒪(\alpha _S)`$ quark-initiated process for general masses and general couplings, Ref. kretzer . In this figure, each line is actually a pair of lines: the thin lines represent the result for $`F_2^{charm}`$ using the ACOT scheme with $`m_s=0.5`$ GeV, and the thick lines regularize the strange quark with the massless $`\overline{\mathrm{MS}}`$ prescription. (The charm mass is, of course, retained.) The fact that these two calculations match so closely (particularly in comparison to the $`\mu `$-variation) indicates: 1) the ACOT scheme smoothly reduces to the desired massless $`\overline{\mathrm{MS}}`$ limit as $`m_H\to 0`$, and 2) for $`m_H\stackrel{<}{\sim }\mathrm{\Lambda }_{QCD}`$ the quark mass no longer plays a dynamic role in the process and becomes purely a regulator. ## Heavy Quarks and the Global PDF Analysis Recent precision data on $`F_2`$ and on $`F_2^{charm}`$ from HERA indicate that the charm contribution can rise to 25% of the total $`F_2`$ at small $`x`$. These results clearly imply the need to perform new global analyses that account for the correct physics behind these measurements. Tung and LaiLaiTun97a have repeated the CTEQ4M global analysis,CTEQ but this time implementing heavy quark leptoproduction within the ACOT formalism to obtain a CTEQ4HQ set of PDF’s. The deviation of the CTEQ4HQ distributions from CTEQ4M is minimal, and is most noticeable at small $`x`$; interestingly, the differences are larger for the light quarks than for the gluon and charm. The effect of these new PDF’s and the comparison with data are shown in Fig. 4. The solid curves show the CTEQ4M distributions convoluted with massless matrix elements.
The dashed curves show the CTEQ4M distributions convoluted with massive matrix elements; while technically this is a mismatch of schemes, the comparison is useful to gauge the magnitude of the heavy quark effects (which we observe are comparable to the experimental uncertainties). Finally, the dotted curves show the CTEQ4HQ distributions convoluted with massive matrix elements. When a consistent scheme is used for both the matrix elements and the PDF’s, the agreement with data is excellent. (This is as expected, since these data were included in the fit.) It is interesting to note that the overall $`\chi ^2`$ for CTEQ4HQ ($`\chi ^2`$=1293) is slightly improved compared to the previous best fit CTEQ4M ($`\chi ^2`$=1320) for 1297 data points. While this difference is small, we find it reassuring that the proper treatment of the heavy quark mass resulted in an improved fit, particularly when compared with a 4-flavor FFN fit ($`\chi ^2`$=1349) or a 3-flavor FFN fit ($`\chi ^2`$=1380). A recent re-analysis of the EMC dataharris concluded that there could be an intrinsic charm component in the proton of $`0.86\pm 0.60\%`$. It would be interesting to repeat this calculation in the context of a global analysis using the VFN ACOT formalism to see if more recent data favor an intrinsic charm component. ## Heavy Quarks and Extraction of $`s(x)`$ A topic closely related to DIS charm production is the extraction of the strange quark distribution.<sup>7</sup><sup>7</sup>7For a comprehensive review, see Conrad, Shaevitz, and Bolton, Ref. conrad . In principle, we can extract $`s(x)`$ by comparing DIS neutral and charged current data. To leading order, we have: $`{\displaystyle \frac{F_2^{NC}}{F_2^{CC}}}`$ $`\approx `$ $`{\displaystyle \frac{5}{18}}\left\{1-{\displaystyle \frac{3}{5}}{\displaystyle \frac{(s+\overline{s})-(c+\overline{c})+\mathrm{\dots }}{q+\overline{q}}}\right\}.`$ (3) While the individual $`F_2`$ structure functions are measured precisely (cf., Fig.
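At leading order, the relation in Eq. (3) can be inverted to extract $`s+\overline{s}`$ from a measured NC/CC ratio. The numbers below are purely illustrative (not fit results); the round trip also makes the point, made in the following paragraph, that small uncertainties in the ratio or in the large denominator are strongly amplified in the extracted strange density:

```python
def nc_cc_ratio(s_plus_sbar, c_plus_cbar, q_plus_qbar):
    """Leading-order Eq. (3): F2^NC/F2^CC ≈ (5/18)·(1 − (3/5)·((s+s̄)−(c+c̄))/(q+q̄)),
    dropping the higher terms indicated by the ellipsis."""
    return (5.0 / 18.0) * (1.0 - 0.6 * (s_plus_sbar - c_plus_cbar) / q_plus_qbar)

def extract_strange(ratio, c_plus_cbar, q_plus_qbar):
    """Invert the ratio for s+s̄."""
    return c_plus_cbar + q_plus_qbar * (1.0 - ratio * 18.0 / 5.0) / 0.6

# Round trip with hypothetical parton densities
r = nc_cc_ratio(0.04, 0.01, 1.0)
assert abs(extract_strange(r, 0.01, 1.0) - 0.04) < 1e-12

# A 1% error on the ratio shifts the extracted s+s̄ by far more than 1%
shift = abs(extract_strange(r * 1.01, 0.01, 1.0) - 0.04)
```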
5),seligman this approach is indirect in the sense that small uncertainties in the larger valence distributions magnify the uncertainty on the extracted $`s(x)`$. A direct method of obtaining $`s(x)`$ is to use the neutrino-induced di-muon process $`\nu _\mu N\to \mu ^{-}cX`$ with the subsequent decay $`c\to s\mu ^+\nu _\mu `$. Here, the di-muon signal is directly related to the charm production rate, which proceeds via the process $`W^+s\to c`$ at leading order. The method has the advantage that the signal from the $`s`$-quark is not a small effect beneath the valence process. A complete NLO experimental analysis was performed using the CCFR data set.CCFRcharm The recently collected data from the NuTeV experiment will provide an opportunity to extend the precision of these investigations still further.nutev Their high-intensity sign-selected neutrino beam and the new calibration beam allow for a large improvement in the systematic uncertainty while minimizing statistical errors. (See the paper by T. Adams, this meeting.adams ) ## Hadroproduction of Heavy Quarks We now turn to the hadroproduction of heavy quarks and discuss how the method of ACOTACOT ; ost is used to provide a dynamic role for the heavy quark parton. We concentrate mostly on b-production at the Tevatron for definiteness, and present typical results for $`b`$ quark production.FMNR97a ; cdf ; dzero (See the paper by A. Zieminski, this meeting.Zieminski ) Fig. 6a shows the scaled differential cross section vs. $`p_T`$ for $`b`$ production at 1800 GeV for the leading order (LO) calculations. The heavy creation (HC) process<sup>8</sup><sup>8</sup>8In this section we let $`g`$ represent both gluons and light quarks, where applicable. Therefore, the HC process described as $`gg\to b\overline{b}`$ also includes $`q\overline{q}\to b\overline{b}`$. ($`gg\to b\overline{b}`$) represents the LO contribution to the fixed-flavor-number (FFN) scheme result.
The heavy excitation (HE) process ($`gb\to gb`$) plus the HC term represents the LO contribution to the variable-flavor-number (VFN) scheme result. The pair of lines for each result shows the effect of varying $`\mu `$. In a similar manner, Fig. 6b shows the total FFN and VFN results.<sup>9</sup><sup>9</sup>9The formidable calculations of the NLO $`gg\to b\overline{b}`$ process were computed by Nason, Dawson, and Ellis (Ref. NDE ), and also by Beenakker et al. (Ref. Smithetal ). These calculations were implemented in a Monte Carlo framework (including correlations) by Mangano, Nason, and Ridolfi (Ref. MNR ). Two interesting features are worth noting. 1) Examining Fig. 6a, we observe that the HE contribution is comparable to the HC one, in spite of the smaller $`b`$-quark PDF compared to the gluon distribution. Closer examination reveals that two effects contribute to this unexpected result: a larger color factor and the presence of $`t`$-channel gluon exchange diagrams in the HE process, as compared to the HC process. 2) The LO-VFN (=HC+HE) contributions (Fig. 6a) (tree processes) give a reasonable approximation to the full cross section TOT-VFN (Fig. 6b); thus, the NLO-VFN correction is relatively small. This is in sharp contrast to the familiar FFN scheme, where the TOT-FFN term is more than twice as large as the LO-FFN (=HC). This is, of course, an encouraging result, suggesting that the VFN-scheme heavy quark parton picture represents an efficient way to organize the perturbative QCD series. In Fig. 6a, we also observe that while the TOT-VFN result provides minimal $`\mu `$-variation at low $`p_T`$, the improvement is decreased at large $`p_T`$. This may be, in part, due to the fact that the TOT-VFN result shown here is missing the NLO-HE process $`gb\to ggb`$, since this calculation, with masses retained, does not exist.
In a separate effort, Cacciari and GrecoCacGre have used a NLO fragmentation formalism to resum the heavy quark contributions in the limit of large $`p_T`$. This calculation effectively includes the massless limit of the $`gb\to ggb`$ contribution (omitted above); the result is a decreased $`\mu `$-variation in the large $`p_T`$ region. Recently, this calculation has been merged with the massive FFN calculation by Cacciari, Greco, and Nason (Ref. CGN ); the result is a calculation which matches the FFN calculation at low $`p_T`$ and takes advantage of the NLO fragmentation formalism in the high $`p_T`$ region. ## Massive vs. Massless Evolution In a consistently formulated pQCD framework incorporating non-zero-mass heavy quark partons, there is still the freedom to define parton distributions obeying either mass-independent or mass-dependent evolution equations. With properly matched hard cross sections, different choices merely correspond to different factorization schemes, and they yield the same physical cross sections. We demonstrate this principle in a concrete order-$`\alpha _s`$ calculation of the DIS charm structure function.dis97oln In Fig. 7 we display the separate contributions to $`F_2^{charm}`$ for both mass-independent and mass-dependent evolution. The matching properties are best examined by comparing the (scheme-dependent) heavy excitation $`F_2^{HE}`$ and subtraction $`F_2^{SUB}`$ contributions of Fig. 7a. We observe the following. 1) Within each scheme, $`F_2^{HE}`$ and $`F_2^{SUB}`$ are well matched near threshold, cf., Fig. 7a. Above threshold, they begin to diverge, but the difference $`(F_2^{HE}-F_2^{SUB})`$, which contributes to $`F_2^{TOT}`$, is insensitive to the different schemes. 2) It is precisely this matching of $`F_2^{HE}`$ and $`F_2^{SUB}`$ which ensures that the scheme dependence of $`F_2^{TOT}`$ is properly of higher order in $`\alpha _s`$ (cf., Fig. 7b).
This matching is not accidental, but simply a result of using a consistent renormalization scheme for both $`F_2^{HE}`$ and $`F_2^{SUB}`$. To understand this, we expand these terms near threshold ($`\mu \sim m_H`$), where the $`m_H/Q`$ terms are relevant: $`\sigma _{SUB}`$ $`=`$ $`{}_{}{}^{R}f_{g/P}^{}\otimes {}_{}{}^{R}\widehat{\sigma }_{g\gamma ^{*}\to c\overline{c}}^{(1)}={}_{}{}^{R}f_{g/P}^{}\otimes {\displaystyle \frac{\alpha _s}{2\pi }}{\displaystyle \int _{m_H^2}^{\mu ^2}}{\displaystyle \frac{d\mu ^2}{\mu ^2}}{}_{}{}^{R}P_{g\to c}^{(1)}\otimes \sigma _{c\gamma ^{*}\to c}^{(0)}+0`$ $`\sigma _{HE}`$ $`\approx `$ $`{}_{}{}^{R}f_{c/P}^{}\otimes {}_{}{}^{R}\widehat{\sigma }_{c\gamma ^{*}\to c}^{(0)}\approx {}_{}{}^{R}f_{g/P}^{}\otimes {\displaystyle \frac{\alpha _s}{2\pi }}{\displaystyle \int _{m_H^2}^{\mu ^2}}{\displaystyle \frac{d\mu ^2}{\mu ^2}}{}_{}{}^{R}P_{g\to c}^{(1)}\otimes \sigma _{c\gamma ^{*}\to c}^{(0)}+𝒪(\alpha _s^2)`$ Here, the prescript $`R`$ specifies the renormalization scheme. From these relations, it is evident that $`F_2^{HE}`$ and $`F_2^{SUB}`$ will match to $`𝒪(\alpha _s^2)`$ so long as a consistent choice of renormalization scheme $`R`$ is made for the splitting kernels $`{}_{}{}^{R}P_{g\to c}^{(1)}`$. This is the key mechanism that compensates for the different effects of mass-independent vs. mass-dependent evolution and yields a $`\sigma _{TOT}`$ which is identical up to higher-order terms. The lesson is clear: the choice of mass-independent $`\overline{\mathrm{MS}}`$ or mass-dependent (non-$`\overline{\mathrm{MS}}`$) evolution is purely a choice of scheme, and becomes simply a matter of convenience; there is no physically new information gained from the mass-dependent evolution. ## Conclusions We have provided a brief overview of some current experimental and theoretical issues in heavy quark production.
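The cancellation can be mimicked with a toy model (our illustration, not the actual ACOT expressions): let the "resummed" charm distribution sum the log series as $`e^{aL}-1`$ with $`a=\alpha _s/2\pi `$ and $`L=\mathrm{ln}(\mu ^2/m_H^2)`$, standing in for DGLAP evolution from the gluon, while the subtraction keeps only the single log $`aL`$. Near threshold, where $`L=O(1)`$, their difference starts at $`O(a^2)`$, so it contributes to the total only at higher order:

```python
import math

a = 0.05  # stands in for alpha_s/(2*pi); illustrative value

def f_c(L):
    """Toy resummed charm distribution: sums (a*L)^n/n! for n >= 1."""
    return math.exp(a * L) - 1.0

def sub(L):
    """Toy fixed-order subtraction term: the single log a*L."""
    return a * L

L = 1.0  # near threshold ln(mu^2/m_H^2) is O(1)
# Both vanish at threshold (L = 0), and their mismatch is a^2*L^2/2 + O(a^3)
mismatch = f_c(L) - sub(L)
```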
The wealth of recent heavy quark production data from both fixed-target and collider experiments will allow us to extract precision measurements of structure functions, which can provide important constraints for searches for new physics at the highest energy scales. As an important physical process involving the interplay of several large scales, heavy quark production poses a significant challenge for the further development of QCD theory. We thank J.C. Collins, R.J. Scalise, and W.-K. Tung for valuable discussions, and the Fermilab Theory Group for their kind hospitality during the period in which part of this research was carried out. This work is supported by the U.S. Department of Energy and the Lightner-Sams Foundation.
# Identifying the protein folding nucleus using molecular dynamics Molecular dynamics simulations of folding in an off-lattice protein model reveal a nucleation scenario, in which a few well-defined contacts are formed with high probability in the transition state ensemble of conformations. Their appearance determines folding cooperativity and drives the model protein into its folded conformation. Thermodynamically, the folding transition in small proteins is analogous to a first-order transition, whereby the two thermodynamic states (folded and unfolded) are free energy minima while intermediate states are unstable. The kinetic mechanism of transitions from the unfolded state to the folded state is nucleation . A folding nucleus can be defined as the minimal stable element of structure whose existence results in subsequent rapid assembly of the native state. This definition corresponds to a “post-critical nucleus” related to the first stable structures that appear immediately after the transition state is overcome . The thermal probability of a transition state conformation is low compared to the folded and unfolded states, which are both accessible at the folding transition temperature $`T_f`$ (see Fig. 1). Kinetic analyses for a number of lattice model chains of different lengths and degrees of sequence design (optimization) point to a specific protein folding nucleus scenario. Passing through the transition state, with subsequent rapid assembly of the native conformation, requires the formation of a (small) number of specific obligatory contacts (the protein folding nucleus). This result has been verified for sequences designed in the lattice model using different sets of potentials: the nucleus location was identical for two different sequences designed with different potentials to fold into the same structure of a lattice 48-mer.
This finding and related results suggest that the folding nucleus location depends more on the topology of the native structure than on the particular sequence that folds into that structure. The dominance of geometrical/topological factors in determining the folding nucleus is a remarkable property that has evolutionary implications (see below). It is important to understand the physical origin of this property of folding proteins and to assess its generality. To this end, it is important to study models other than lattice models and dynamics other than Monte Carlo algorithms. Here we employ the discrete molecular dynamics (MD) simulation technique (the Gō model with a square-well inter-residue interaction potential) to search for the nucleus in a continuous off-lattice model . The transition region Our proposed method to search for a folding nucleus is based on the observation that equilibrium fluctuations around the native conformation can be separated into “local” unfolding (followed by immediate refolding) and “global” unfolding that leads to a transition into an unfolded state and requires a longer time to refold. Local unfolding fluctuations are the ones that do not reach the top of the free energy barrier and, hence, are committed to moving quickly back to the native state. In contrast, global unfolding fluctuations are the ones that overcome the barrier and are committed to descending further to the unfolded state. Similarly, the fluctuations from the unfolded state can be separated into those that descend back to the unfolded state and those that result in productive folding. The difference between the two modes of fluctuation is whether or not the major free energy barrier is overcome. This means that the nucleation contacts (i. e.
the ones that are formed at the “top” of the free energy barrier as the chain passes it upon folding) should be identified as contacts that are present in the “maximally locally unfolded” conformations but are lost in the globally unfolded conformations of comparable energy. Thus, in order to identify the folding nucleus, we study the conformations of the 46-mer that appear in various kinds of folding $`\rightleftharpoons `$ unfolding fluctuations. First, consider the time behavior of the potential energy at $`T_f`$ (see Fig. 3a). The transition state conformations belong to the transition region TR between the folded state and the unfolded state, which lies in the energy range $`\{-110<E<-90\}`$ (see Fig. 1). The region TR corresponds to the minimum of the histogram of the energy distribution. If we know the past and the future of a certain conformation that belongs to the TR, we can distinguish four types of such conformations (see Figs. 2 and 3a): (i) UU conformations that originate in and return to the unfolded region without ascending to the folded region; (ii) FF conformations that originate in and return to the folded region without descending to the unfolded region; (iii) UF conformations that originate in the unfolded region and descend to the folded region; and (iv) FU conformations that originate in the folded region and descend to the unfolded region. If the nucleus exists, then the UF, FU, FF, and UU conformations must have different properties depending on their history. For example, the difference between UF, FU, FF, and UU conformations is pronounced for the rms displacements from the native state of the residues in the vicinity of residues 10 and 40, as illustrated in Fig. 3b. One difference between the FF conformations and the UU conformations is that the protein folding nucleus is more likely to be retained in the FF conformations than in the UU conformations.
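The four-way bookkeeping of TR visits is straightforward to implement. The sketch below (our illustration, not the authors' analysis code) scans an energy time series, finds maximal excursions into the assumed transition window $`-110<E<-90`$, and labels each one by the state occupied immediately before and after it:

```python
def classify_tr_visits(E, folded_max=-110.0, unfolded_min=-90.0):
    """Label each maximal visit to the transition region (folded_max < E < unfolded_min)
    as 'FF', 'FU', 'UF', or 'UU' from the states just before and after the visit.
    E is a sequence of potential energies sampled along an MD trajectory."""
    def state(e):
        if e <= folded_max:
            return 'F'
        if e >= unfolded_min:
            return 'U'
        return 'T'  # transition region
    labels, i, n = [], 0, len(E)
    while i < n:
        if state(E[i]) == 'T':
            j = i
            while j < n and state(E[j]) == 'T':
                j += 1
            if i > 0 and j < n:  # require a well-defined origin and destination
                labels.append(state(E[i - 1]) + state(E[j]))
            i = j
        else:
            i += 1
    return labels

# toy trajectory: folded -> TR -> folded (FF), then folded -> TR -> unfolded (FU)
traj = [-120, -100, -95, -115, -112, -100, -92, -85]
print(classify_tr_visits(traj))  # -> ['FF', 'FU']
```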
The contacts belonging to the critical nucleus (“nucleation contacts”) start appearing in the UF conformations, and start disappearing in the FU conformations, so that the frequencies of nucleation contacts in UF and FU conformations should be in between FF and UU. Our goal is to select the contacts that are crucial for the folding $`\leftrightarrow `$ unfolding transition. To this end we select the contacts that appear much more often in the FF conformations than in the UU conformations. We discover that if we set the threshold for the difference in contact frequencies between FF and UU conformations to be 0.2, then there are only five contacts that persist: $`(\text{residue }11,\text{residue }39)`$, $`(10,40)`$, $`(11,40)`$, $`(10,41)`$, and $`(11,41)`$ (see Fig. 3c,d). These contacts can serve as evidence for the protein folding nucleus in the folding $`\leftrightarrow `$ unfolding transition in our model. Next, we demonstrate that these five selected contacts belong to the protein folding nucleus. Suppose we fix just one of them, e. g. $`(10,40)`$, i. e. we impose a covalent (“permanent”) link between residue $`10`$ and residue $`40`$. If this contact belongs to the protein folding nucleus, its fixation by a covalent bond would eliminate the barrier between the folded and unfolded states, i. e. only the native basin of attraction will remain. Hence, we hypothesize that the cooperative transition between the unfolded and folded states will be eliminated and the energy histogram (Fig. 1) should change qualitatively from bimodal to unimodal. Our MD simulations support this hypothesis (Fig. 4): fixation of only one nucleation contact, $`(10,40)`$, gives rise to a qualitative change in the energy distribution from bimodal to unimodal.
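The contact-frequency criterion just described (keep contacts whose formation frequency in FF conformations exceeds that in UU conformations by more than 0.2) is simple to express in code. The sketch below assumes contact maps are stored as boolean arrays of shape (frames, N, N); the function and argument names are ours.

```python
import numpy as np

def nucleation_candidates(ff_contacts, uu_contacts, threshold=0.2):
    """Select contacts formed much more often in FF than in UU
    conformations.

    ff_contacts, uu_contacts -- boolean arrays (n_frames, N, N),
    symmetric contact maps for the FF and UU conformation ensembles.
    Returns the list of (i, j) pairs (i < j) whose FF - UU frequency
    difference exceeds `threshold`.
    """
    f_ff = ff_contacts.mean(axis=0)   # per-contact frequency in FF
    f_uu = uu_contacts.mean(axis=0)   # per-contact frequency in UU
    diff = f_ff - f_uu
    i, j = np.where(np.triu(diff, k=1) > threshold)
    return list(zip(i.tolist(), j.tolist()))
```

Applied to the trajectory of the 46-mer, this kind of filter is what singles out the five contacts around residues 10–11 and 39–41 quoted in the text.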
Indeed, the probability to find an unfolded state with a fixed link, $`(10,40)`$, which belongs to the protein folding nucleus, is drastically reduced compared to the probability of the unfolded state of the original 46-mer, indicating the importance of the selected contact $`(10,40)`$. To provide a “control” showing that only a specific contact plays such a dramatic role in changing the character of the energy landscape, we fix a randomly-chosen contact, $`(19,37)`$, which is not predicted by our analysis to belong to the critical nucleus. Our hypothesis predicts no qualitative change in the energy distribution histogram, since the barrier, determined primarily by nucleation contacts, should not change dramatically for this control. Fig. 4 shows that this is indeed the case. (The stability of the folded state is somewhat increased for the control because any preformed native contacts decrease the entropy of the unfolded state, i. e. they stabilize the folded state.) We also find that, for the UF conformations, the rms displacements of the residues from their native positions are smaller than those for the FU conformations (Fig. 3b). This observation is consistent with the fact that the nucleation contacts are formed first upon entering productive folding and are destroyed last upon unfolding.

## Discussion

Our main conclusion is that the existence of a few ($`\sim 5`$) specific contacts is a signature of the transition state conformations. Those contacts can be defined as the protein folding nucleus. Other contacts may also be present in transition state conformations. However, they are optional and vary from conformation to conformation, while nucleation contacts are present in transition state conformations with high probability.
Formation of nucleation contacts can be considered as an obligatory step in the folding process: after they are formed the major barrier is overcome and subsequent folding proceeds “downhill” in the free energy landscape without encountering any further major free energy barriers. This is illustrated by our results that show that even one nucleation contact eliminates the free energy barrier and, hence, leads to fast “downhill” motion to the folded state. As a control our results show that fixation of an arbitrary non-nucleation contact does not result in a similar effect. The protein folding nucleus scenario of the transition state was initially derived from Monte-Carlo studies of lattice models and was consistent with protein engineering experiments with several small proteins . Here, for the first time, we confirm this scenario in the off-lattice MD simulations. The consistency between conclusions made in different simulations and in experiments is remarkable, and supports the possibility that the protein folding nucleus formation is a generic scenario to describe the protein folding transition state. Our present study buttresses the point that the location of a protein folding nucleus is determined by the geometry of the native state rather than the energetics of interactions in the native state (the two factors are not entirely independent, since native contacts must be generally more stable to provide stability to the native conformation). In the present study, we used the Gō model (where all native contacts have the same energy). Nonetheless, it turns out that some contacts (nucleation) are “more equal than others” in terms of their role in shaping the free energy landscape of the chain and determining folding kinetics. This fact has implications for protein evolution, raising the possibility that proteins that have similar structures but different sequences may have similarly located protein folding nuclei. 
This prediction was verified for SH3 domains and for cold-shock proteins . In terms of the evolutionary selection of protein sequences, the robustness of the folding nucleus suggests that any additional evolutionary pressure that controls the folding rate may have been applied selectively to nucleus residues, so that nucleation positions may have been under double (stability + kinetics) pressure in all proteins that fold into a given structure. Such additional evolutionary pressure has indeed been found in the analysis of several protein superfamilies .

## Methods

We study a “beads on a string” model of a protein. We model the residues as hard spheres of unit mass. The potential of interaction between residues is “square-well”. We follow the Gō model , where an attractive potential is assigned to the pairs of residues that are in contact in the native state and a repulsive potential is assigned to the pairs that are not in contact in the native state. Thus, the potential energy is given by $$E=\frac{1}{2}\sum _{i,j=1}^{N}U_{ij}$$ (1) where $`i`$ and $`j`$ denote residues $`i`$ and $`j`$. $`U_{ij}`$ is the matrix of pair interactions $$U_{ij}=\{\begin{array}{cc}+\mathrm{\infty },\hfill & |r_i-r_j|\le a_0\hfill \\ -\mathrm{\Delta }_{ij}ϵ,\hfill & a_0<|r_i-r_j|\le a_1\hfill \\ 0,\hfill & |r_i-r_j|>a_1.\hfill \end{array}$$ (2) Here, $`a_0/2`$ is the radius of the hard sphere, $`a_1/2`$ is the radius of the attractive sphere, and $`ϵ`$ sets the energy scale. $`\mathrm{\Delta }`$ is a matrix of contacts with elements $$\mathrm{\Delta }_{ij}=\{\begin{array}{cc}1,\hfill & |r_i^{NS}-r_j^{NS}|\le a_1\hfill \\ -1,\hfill & |r_i^{NS}-r_j^{NS}|>a_1,\hfill \end{array}$$ (3) where $`r_i^{NS}`$ is the position of the $`i^{th}`$ residue when the protein is in the native conformation. Note that we penalize the non-native contacts by imposing $`\mathrm{\Delta }_{ij}<0`$. The parameters are chosen as follows: $`ϵ=1`$, $`a_0=9.8`$ and $`a_1=19.5`$.
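As a concrete reading of the square-well Gō potential, the sketch below evaluates the pair and total energies in Python. We assume the well contributes $`-ϵ`$ for native contacts and $`+ϵ`$ for non-native ones (i. e. the middle case of the pair potential is $`-\mathrm{\Delta }_{ij}ϵ`$), which is consistent with the penalization of non-native contacts stated in the text and gives one unit of energy per native contact; the function names are ours.

```python
import numpy as np

def pair_energy(r, is_native, a0=9.8, a1=19.5, eps=1.0):
    """Square-well pair potential of the Go-type model:
    hard core for r <= a0; -eps (native contact, Delta = +1) or
    +eps (non-native, Delta = -1) inside the well a0 < r <= a1;
    zero beyond a1."""
    delta = 1.0 if is_native else -1.0
    if r <= a0:
        return np.inf            # hard-core overlap
    if r <= a1:
        return -delta * eps      # attractive for native, repulsive otherwise
    return 0.0

def total_energy(coords, native_map, **kw):
    """E = (1/2) sum over pairs of U_ij, written as a sum over i < j."""
    n = len(coords)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            e += pair_energy(r, native_map[i][j], **kw)
    return e
```

With $`ϵ=1`$, summing $`-1`$ over the 212 native contacts of the 46-mer reproduces a native-state energy of $`-212`$.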
The covalent bonds are also modeled by a square-well potential: $$V_{i,i+1}=\{\begin{array}{cc}0,\hfill & b_0<|r_i-r_{i+1}|<b_1\hfill \\ +\mathrm{\infty },\hfill & |r_i-r_{i+1}|\le b_0,\text{ or }|r_i-r_{i+1}|\ge b_1.\hfill \end{array}$$ (4) The values of $`b_0=9.9`$ and $`b_1=10.1`$ are chosen so that the average covalent bond length is equal to 10. The original configuration of the protein ($`N=46`$ residues) was designed by collapse of a homopolymer at low temperature . It contains $`n^{\ast }=212`$ native contacts, so the native state energy is $`E_{NS}=-212`$. The radius of gyration of the globule in the native state is $`R_G\approx 20`$. The folding transition temperature $`T_f=1.44`$ is determined by the location of the peak in the temperature dependence of the heat capacity. Our simulations employ the discrete MD algorithm . To control the temperature of the protein we introduce 954 particles that interact with the protein and with each other via hard-core collisions and thus serve as a “heat bath”. By changing the kinetic energy of those heat bath particles we are able to control the temperature of the environment. The heat bath particles are hard spheres of the same radii as the chain residues and have unit mass. Temperature is measured in units of $`ϵ/k_B`$. The variable time step is defined by the shortest time between two consecutive collisions.

## Acknowledgments

We thank R. S. Dokholyan for careful reading of the manuscript. NVD is supported by an NIH NRSA molecular biophysics predoctoral traineeship (GM08291-09). EIS is supported by NIH grant RO1-52126. The Center for Polymer Studies acknowledges the support of the NSF. Nikolay V. Dokholyan<sup>1</sup>, Sergey V. Buldyrev<sup>1</sup>, H. Eugene Stanley<sup>1</sup> and Eugene I. Shakhnovich<sup>2</sup> <sup>1</sup>Center for Polymer Studies, Physics Department, Boston University, Boston, MA 02215, USA <sup>2</sup>Department of Chemistry, Harvard University, 12 Oxford Street, Cambridge, MA 02138, USA Correspondence should be addressed to N.V.D.
email: dokh@bu.edu
# Exploiting $`h\to W^{\ast }W^{\ast }`$ Decays at the Upgraded Fermilab Tevatron

## I Introduction

The mass generation mechanisms for electroweak gauge bosons and for fermions are among the most prominent mysteries in contemporary high energy physics. In the Standard Model (SM) and its supersymmetric (SUSY) extensions, elementary scalar doublets of the SU<sub>L</sub>(2) interactions are responsible for the mass generation. The scalar Higgs bosons are thus crucial ingredients in the theory, and searching for the Higgs bosons has been one of the major motivations in the current and future collider programs . Although the masses of Higgs bosons are free parameters in the models, they are subject to generic bounds based on theoretical arguments. The triviality bound indicates that the Higgs boson mass ($`m_h`$) should be less than about 800 GeV for the SM to be a consistent low-energy effective theory . The vacuum stability argument, on the other hand, suggests a correlation between the lower bound on $`m_h`$ and the new physics scale $`\mathrm{\Lambda }`$ beyond which the SM is no longer valid . In other words, the discovery of a SM-like Higgs boson implies a scale $`\mathrm{\Lambda }`$ at which new physics beyond the SM must set in; and the smaller $`m_h`$ is, the lower $`\mathrm{\Lambda }`$ would be. In the minimal supersymmetric Standard Model (MSSM), it has been shown that the mass of the lightest neutral Higgs boson must be less than about 130 GeV , and in any weakly coupled SUSY theory $`m_h`$ should be lighter than about 150 GeV . On the experimental side, the non-observation of a Higgs signal in the LEP2 experiments has established a lower bound on the SM Higgs boson mass of 89.8 GeV at a 95% Confidence Level (CL) . Future searches at LEP2 will eventually be able to discover a SM Higgs boson with a mass up to 105 GeV .
The CERN Large Hadron Collider (LHC) is believed to be able to cover the full $`m_h`$ range of theoretical interest, up to about 1000 GeV , although it will be challenging to discover a Higgs boson in the “intermediate” mass region 110 GeV $`<m_h<`$ 150 GeV, due to the huge SM background to $`h\to b\overline{b}`$ and the requirement of excellent di-photon mass resolution for the $`h\to \gamma \gamma `$ signal. More recently, it has been discussed intensively how much the Fermilab Tevatron upgrade can do for the Higgs boson search . It appears that the most promising processes extending beyond the LEP2 reach would be the associated production of an electroweak gauge boson and the Higgs boson $$p\overline{p}\to WhX,ZhX.$$ (1) The leptonic decays of $`W,Z`$ provide a good trigger and $`h\to b\overline{b}`$ may be reconstructible with adequate $`b`$-tagging. It is now generally believed that for an upgraded Tevatron with a c. m. energy $`\sqrt{s}=2`$ TeV and an integrated luminosity of $`𝒪(10\text{–}30)\mathrm{fb}^{-1}`$ a SM-like Higgs boson can be observed at a $`3\text{–}5\sigma `$ level up to a mass of about 120 GeV . The Higgs discovery through these channels depends crucially upon the $`b`$-tagging efficiency and the $`b\overline{b}`$ mass resolution. It is also limited by the event rate for $`m_h>120`$ GeV. It may be possible to extend the mass reach to about 130 GeV by combining leptonic $`W,Z`$ decays, and slightly beyond via the decay mode $`h\to \tau ^+\tau ^{-}`$ . It is interesting to note that this mass reach is just near the theoretical upper bound in the MSSM. In the context of a general weakly-coupled SUSY model, it would be of great theoretical significance for the upgraded Tevatron to extend the Higgs boson coverage to $`m_h\approx 150`$ GeV.
Moreover, it would have interesting implications for our knowledge of a new physics scale $`\mathrm{\Lambda }`$ if we do find a SM-like Higgs boson or exclude its existence in the mass range 130 GeV–180 GeV, the so-called “chimney region” between the triviality upper bound and the vacuum stability lower bound . It is important to note that the leading production mechanism for a SM-like Higgs boson at the Tevatron is the gluon-fusion process via heavy quark triangle loops $$p\overline{p}\to ggX\to hX.$$ (2) There are also contributions to $`h`$ production from the vector boson fusion processes<sup>*</sup><sup>*</sup>*Here and henceforth, $`W^{\ast }(Z^{\ast })`$ generically denotes a $`W(Z)`$ boson either on- or off-mass-shell. $$W^{\ast }W^{\ast },Z^{\ast }Z^{\ast }\to h,$$ (3) where the $`W^{\ast }W^{\ast }`$ and $`Z^{\ast }Z^{\ast }`$ are radiated off the quark partons. In Fig. 1, we present cross sections for SM Higgs boson production at the Tevatron for processes (1), (2) and (3). We see that the gluon fusion process yields the largest cross section, typically a factor of four above the associated production (1). For $`m_h>160`$ GeV, the $`WW,ZZ`$ fusion processes become comparable to (1). In calculating the total cross sections, the QCD corrections have been included for all the processes , and we have used the CTEQ4M parton distribution functions . Although the decay mode $`h\to b\overline{b}`$ in Eqs. (2) and (3) would be swamped by the QCD background, the decay modes to vector boson pairs $$h\to W^{\ast }W^{\ast },Z^{\ast }Z^{\ast }$$ (4) will have increasingly large branching fractions for $`m_h>130`$ GeV and are natural channels to consider for a heavier Higgs boson. In Fig. 2(a), we show the cross sections for $`gg\to h`$ with $`h\to W^{\ast }W^{\ast }`$ and $`Z^{\ast }Z^{\ast }`$ versus $`m_h`$ at $`\sqrt{s}=2`$ TeV.
The leptonic decay channels are also separately shown by solid and dashed curves, respectively, for $`h`$ $`W^{}W^{}\mathrm{}\nu jj\mathrm{and}\mathrm{}\overline{\nu }\overline{\mathrm{}}\nu ,`$ (6) $`Z^{}Z^{}\mathrm{}\overline{\mathrm{}}jj\mathrm{and}\mathrm{}\overline{\mathrm{}}\nu \overline{\nu },`$ where $`\mathrm{}=e,\mu `$ and $`j`$ is a quark jet. The scale on the right-hand side gives the number of events expected for 30 $`\mathrm{fb}^1`$. We see that for the $`m_h`$ range of current interest, there will be about 1000 events produced for the semi-leptonic mode $`W^{}W^{}\mathrm{}\nu jj`$ and about 300 events for the pure leptonic mode $`W^{}W^{}\mathrm{}\overline{\nu }\overline{\mathrm{}}\nu `$. Although the $`\mathrm{}\nu jj`$ mode has a larger production rate, the $`\mathrm{}\overline{\nu }\overline{\mathrm{}}\nu `$ mode is cleaner in terms of the SM background contamination. The corresponding modes from $`Z^{}Z^{}`$ are smaller by about an order of magnitude. It is natural to also consider the $`hW^{}W^{}`$ mode from the $`Wh`$ associated production in Eq. (1). This is shown in Fig. 2(b) by the solid curves for $`W^\pm h\mathrm{}^\pm \nu W^{}W^{}`$ $`\mathrm{}\nu \mathrm{}\nu \mathrm{}\nu ,`$ (8) $`\mathrm{}^\pm \nu \mathrm{}^\pm \nu jj.`$ The trilepton signal is smaller than the like-sign lepton plus jets signal by about a factor of three due to the difference of $`W`$ decay branching fractions to $`\mathrm{}=e,\mu `$ and to jets. For comparison, also shown in Fig. 2(b) are $`Whb\overline{b}\mathrm{}\nu `$ (solid) and $`Zhb\overline{b}\mathrm{}\overline{\mathrm{}}`$ (dashed) via $`hb\overline{b}`$. We see that the signal rates for these channels drop dramatically for a higher $`m_h`$. Comparing the $`h`$ decays in Fig. 2(a) and (b), it makes the gauge boson pair modes of Eq. (4) a clear choice for Higgs boson searches beyond 130 GeV. In fact, the pure leptonic channel in Eq. (6) has been studied at the SSC and LHC energies and at a 4 TeV Tevatron . 
Despite the difficulty in reconstructing $`m_h`$ from this mode due to the two missing neutrinos, the obtained results for the signal identification over the substantial SM backgrounds were all encouraging. In a more recent paper , two of the current authors carried out a parton-level study for the $`W^{}W^{}`$ channels of Eq. (6) for the 2 TeV Tevatron upgrade. We found that the di-lepton mode in Eq. (6) is more promising than that of $`\mathrm{}\nu jj`$ due to the much larger QCD background to the latter. While the results were encouraging, realistic simulations including detector effects were called for to draw further conclusions. In this paper, we concentrate on the pure leptonic channel and carry out more comprehensive analyses for the signal and their SM backgrounds. We perform detector level simulations by making use of a Monte Carlo program SHW developed for the Run-II SUSY/Higgs Workshop . We present optimized kinematic cuts which can adequately suppress the large SM backgrounds and, moreover, have been structured so as to provide a statistically robust background normalization. For the $`WhWW^{}W^{}`$ channel, although the trilepton signal of Eq. (8) is rather weak, the like-sign leptons plus two jets in Eq. (8) can be useful to enhance the signal observability. For completeness, we have also included the contributions from the vector boson fusion of Eq. (3) and $`W\tau \nu \nu \mathrm{}\nu _{\mathrm{}}`$ decay mode, although they are small. We also comment on the systematic effects on the signal and background measurements which would degrade signal observability. After combining all the channels studied, we find that with a c. m. energy of 2 TeV and an integrated luminosity of 30 fb<sup>-1</sup>, the signal of $`hW^{}W^{}`$ can be observable at a 3$`\sigma `$ level or better for the mass range of $`145\mathrm{GeV}<m_h<180`$ GeV. For 95% CL exclusion, the mass reach is $`135\mathrm{GeV}<m_h<190`$ GeV. 
We thus conclude that the upgraded Fermilab Tevatron will have the potential to significantly advance our knowledge of Higgs boson physics. This provides strong motivation for luminosity upgrade of the Fermilab Tevatron beyond the Main Injector plan. Our signal and background Monte Carlo simulation was performed using the PYTHIA package interfaced with the SHW detector simulation . For pair production of resonances, e.g. $`WW`$, PYTHIA incorporates the full $`224`$ matrix elements thereby insuring proper treatment of the final state angular correlations. Similarly for $`hWW`$, the angular correlations between the four final state fermions have been taken into account. The full $`Z/\gamma ^{}`$ interference is simulated for $`ZZ`$ production; however, the $`WZ`$ process considers only the pure $`Z`$ contribution. For Higgs boson production in association with a gauge boson in Eq (1), the associated $`W`$ and $`Z`$ decay angular distributions are treated properly. The production cross-sections for the principal background processes were normalized to $`\sigma (WW)=10.4`$ pb, $`\sigma (t\overline{t})=6.5`$ pb, $`\sigma (WZ)=3.1`$ pb, and $`\sigma (ZZ)=1.4`$ pb. The rest of the paper is organized as follows. In sections II and III, we present in details our studies for the pure leptonic and like-sign leptons plus jets signals, respectively. In section IV, we first summarize our results. We then present a study of these channels with a model-independent parameterization for the signal cross section. We conclude with a few remarks. ## II Di-leptons plus Missing Transverse Energy Signal For the pure leptonic channel in Eq. (6), we identify the final state signal as two isolated opposite-sign charged leptons and large missing transverse energy. 
The leading SM background processes are $`p\overline{p}`$ $``$ $`W^+W^{}\mathrm{}\overline{\nu }\overline{\mathrm{}}\nu ,ZZ(\gamma ^{})\nu \overline{\nu }\mathrm{}\overline{\mathrm{}},WZ(\gamma ^{})\mathrm{}\overline{\nu }\mathrm{}\overline{\mathrm{}},`$ (9) $`p\overline{p}`$ $``$ $`t\overline{t}\mathrm{}\overline{\nu }\overline{\mathrm{}}\nu b\overline{b},p\overline{p}Z(\gamma ^{})\tau ^+\tau ^{}\mathrm{}\overline{\nu }\overline{\mathrm{}}\nu \nu _\tau \overline{\nu }_\tau .`$ (10) We first impose basic acceptance cuts for the leptonsThe cuts for leptons were chosen to reflect realistic trigger considerations. It is desirable to extend the acceptance in $`\eta _{\mathrm{}}`$. $`p_T(e)>10\mathrm{GeV},|\eta _e|<1.5,`$ (11) $`p_T(\mu _1)>10\mathrm{GeV},p_T(\mu _2)>5\mathrm{GeV},|\eta _\mu |<1.5,`$ (12) $`m(\mathrm{}\mathrm{})>10\mathrm{GeV},\mathrm{\Delta }R(\mathrm{}j)>0.4,\text{ / }E_T>10\mathrm{GeV},`$ (13) where $`p_T`$ is the transverse momentum and $`\eta `$ the pseudo-rapidity. The cut on the invariant mass $`m(\mathrm{}\mathrm{})`$ is to remove the photon conversions and leptonic $`J/\psi `$ and $`\mathrm{{\rm Y}}`$ decays. The isolation cut on $`\mathrm{\Delta }R(\mathrm{}j)`$ removes the muon events from heavy quark ($`c,b`$) decays.The electron identification in the SHW simulation imposes strict isolation requirements already. At this level, the largest background comes from the Drell-Yan process for $`\tau ^+\tau ^{}`$ production. However, the charged leptons in this background are very much back-to-back and this feature is also true, although to a lesser extent, for other background processes as well. On the other hand, due to the spin correlation of the Higgs boson decay products, the two charged leptons tend to move in parallel. We demonstrate this point in Figs. 
3 and 4 where the distributions of the azimuthal angle in the transverse plane \[$`\varphi (\mathrm{}\mathrm{})`$\] and the three-dimensional opening-angle between the two leptons \[$`\theta (\mathrm{}\mathrm{})`$\] for the signal and backgrounds are shown.<sup>§</sup><sup>§</sup>§Since we are mainly interested in the shapes of the kinematic distributions, we present them normalized to unity with respect to the total cross section with appropriate preceeding cuts. This comparison motivates us to impose the cuts $$\varphi (\mathrm{}\mathrm{})<160^{},\theta (\mathrm{}\mathrm{})<160^{}.$$ (14) The $`\tau ^+\tau ^{}`$ background can be essentially eliminated with the help of additional cuts $$p_T(\mathrm{}\mathrm{})>20\mathrm{GeV},\mathrm{cos}\theta _{\mathrm{}\mathrm{}\text{ / }E_T}<0.5,M_T(\mathrm{}\text{ / }E_T)>20\mathrm{GeV},$$ (15) where $`\theta _{\mathrm{}\mathrm{}\text{ / }E_T}`$ is the relative angle between the lepton pair transverse momentum and the missing transverse momentum, which is close to 180 for the signal and near 0 for the Drell-Yan $`\tau ^+\tau ^{}`$ background. The two-body transverse-mass is defined for each lepton and the missing energy as $$M_T^2(\mathrm{}\text{ / }E_T)=2p_T(\mathrm{})\text{ / }E_T(1\mathrm{cos}\theta _{\mathrm{}\text{ / }E_T}),$$ (16) and the distributions are shown in Fig. 5. We can further purify the signal by removing the high $`m(\mathrm{}\mathrm{})`$ events from $`Z\mathrm{}\overline{\mathrm{}}`$ as well as from $`t\overline{t},W^+W^{}`$, as demonstrated in Fig. 6. We therefore impose $`m(\mathrm{}\mathrm{})`$ $`<`$ $`78\mathrm{GeV}\mathrm{for}e^+e^{},\mu ^+\mu ^{},`$ (17) $`m(\mathrm{}\mathrm{})`$ $`<`$ $`110\mathrm{GeV}\mathrm{for}e\mu .`$ (18) As suggested in Ref. 
, the lepton correlation angle between the momentum vector of the lepton pair and the momentum of the higher $`p_T`$ lepton ($`\mathrm{}_1`$) in the lepton-pair rest frame, $`\theta _\mathrm{}_1^{}`$, also has discriminating power between the signal and backgrounds. This is shown in Fig. 7. We thus select events with $$0.3<\mathrm{cos}\theta _\mathrm{}_1^{}<0.8.$$ (19) A characteristic feature of the top-quark background is the presence of hard $`b`$-jets. We thus devise the following jet-veto criteriaThe previous study at the parton-level suggested a more stringent jet-veto cut. It turns out that it would be too costly for the signal and the more sophisticated jet-veto criteria of Eq. (22) is thus desirable. $`\mathrm{veto}\mathrm{if}p_T^{j_1}>95\mathrm{GeV},|\eta _j|<3,`$ (20) $`\mathrm{veto}\mathrm{if}p_T^{j_2}>50\mathrm{GeV},|\eta _j|<3,`$ (21) $`\mathrm{veto}\mathrm{if}p_T^{j_3}>15\mathrm{GeV},|\eta _j|<3.`$ (22) Furthermore, if either of the two hard jets ($`j_1,j_2`$) is identified as a $`b`$ quark, the event will be also vetoed. The $`b`$-tagging efficiency is taken to be $$ϵ_b=1.1\times 57\%\mathrm{tanh}(\frac{\eta _b}{36.05}),$$ (23) where the factor 1.1 reflects the 10% improvement for a lepton impact parameter tag. The results up to this stage are summarized in Table I for the signal $`m_h=140`$190 GeV as well as the SM backgrounds. The acceptance cuts discussed above are fairly efficient, approximately 35% in retaining the signal while suppressing the backgrounds substantially. We see that the dominant background comes from the electroweak $`WW`$ production, about a factor of 30 higher than the signal rate. The sub-leading backgrounds $`t\overline{t}`$ and $`W`$+fake (the background where a jet mimics an electron with a probability of $`P(je)=10^4`$ ) are also bigger than the signal. 
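The two-body transverse mass of Eq. (16) and the Drell-Yan $`\tau ^+\tau ^{}`$ rejection cuts of Eq. (15) are straightforward to compute. The sketch below is illustrative (function names are ours), and we read the transverse-mass requirement of Eq. (15) as applying to each lepton.

```python
import math

def transverse_mass(pt_lep, met, dphi):
    """Two-body transverse mass of one lepton and the missing ET,
    Eq. (16): M_T^2 = 2 pT(l) MET (1 - cos dphi)."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

def passes_tau_rejection(pt_ll, cos_ll_met, mt_l1, mt_l2):
    """Cuts of Eq. (15): di-lepton pT > 20 GeV, cos(theta between the
    di-lepton pT and the missing pT) < 0.5, and M_T(l, MET) > 20 GeV
    for each lepton (our reading of the text)."""
    return pt_ll > 20.0 and cos_ll_met < 0.5 and min(mt_l1, mt_l2) > 20.0
```

For a lepton back-to-back with the missing momentum, the transverse mass reduces to $`2\sqrt{p_T\text{ / }E_T}`$, which is where the Jacobian edge of Fig. 5 comes from.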
We note that although the $`b`$-jet veto is effective against the $`t\overline{t}`$ background, the final results are not affected if the veto efficiency is significantly worse. One can improve the signal observability by constructing a likelihood based on some characteristic kinematical variables. We choose the variables as * $`\mathrm{cos}\theta _{\mathrm{}\mathrm{}}`$, the polar angle with respect to the beam axis of the di-lepton ; * $`\varphi (\mathrm{}\mathrm{})`$ as in Eq. (14); * $`\theta (\mathrm{}\mathrm{})`$ as in Eq. (14); * $`\mathrm{cos}\theta _{\mathrm{}\mathrm{}\text{ / }E_T}`$ as in Eq. (15); * $`p_T^{j1}`$ as in Eq. (22); * $`p_T^{j2}`$ as in Eq. (22). We wish to evaluate the likelihood for a candidate event to be consistent with one of five event classes: a Higgs boson signal ($`140<m_h<`$ 190 GeV), $`WW`$, $`t\overline{t}`$, $`WZ`$ or $`ZZ`$. For a single variable $`x_i`$, the probability for an event to belong to class $`j`$ is given by $$P_i^j(x_i)=\frac{f_i^j(x_i)}{\mathrm{\Sigma }_{k=1}^5f_i^k(x_i)},$$ (24) where $`f_i^j`$ denotes the probability density for class $`j`$ and variable $`i`$. The likelihood of an event to belong to class $`j`$ is given by the normalized products of the individual $`P_i^j(x_i)`$ for the $`n=6`$ kinematical variables: $$^j=\frac{\mathrm{\Pi }_{i=1}^nP_i^j(x_i)}{\mathrm{\Sigma }_{k=1}^5\mathrm{\Pi }_{i=1}^nP_i^k(x_i)},$$ (25) The value of $`^j`$ for a Higgs boson signal hypothesis ($`j=1`$) is shown in Fig. 8 where it can be seen that a substantial fraction of the $`t\overline{t}`$ and $`WW`$ background can be removed for a modest loss of acceptance. The $`WZ`$ and $`ZZ`$ backgrounds have similar distributions to the $`WW`$ and have been omitted for clarity. We thus impose the requirement $$^{j=1}>0.10.$$ (26) The improved results are summarized in Table II. In identifying the signal events, it is crucial to reconstruct the mass peak of $`m_h`$. 
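The likelihood combination of Eqs. (24)–(25) can be sketched compactly. Note that the per-variable normalization of Eq. (24) cancels in the ratio of Eq. (25), so it suffices to work with the raw densities; the code below is an illustrative sketch with names of our choosing, not the analysis code of the Run-II workshop tools.

```python
def class_likelihoods(event, densities):
    """Normalized product likelihood over event classes, Eq. (25).

    event     -- sequence of n kinematic variable values x_i
    densities -- densities[j][i] is a callable f_i^j returning the
                 probability density for class j and variable i
                 (here, 5 classes: h signal, WW, ttbar, WZ, ZZ)
    Returns the list of L^j, which sums to one over classes.
    """
    prods = []
    for cls in densities:
        p = 1.0
        for f, x in zip(cls, event):
            p *= f(x)      # product of f_i^j(x_i) over the n variables
        prods.append(p)
    total = sum(prods)
    return [p / total for p in prods]
```

A candidate then passes the selection of Eq. (26) if the first (Higgs-hypothesis) entry exceeds 0.10.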
Unfortunately, the $`W^{}W^{}`$ mass from the $`h`$ decay cannot be accurately reconstructed due to the two undetectable neutrinos. However, both the transverse mass $`M_T`$ and the cluster transverse mass $`M_C`$ , defined as $`M_T`$ $`=`$ $`2\sqrt{p_T^2(\mathrm{}\mathrm{})+m^2(\mathrm{}\mathrm{})},`$ (27) $`M_C`$ $`=`$ $`\sqrt{p_T^2(\mathrm{}\mathrm{})+m^2(\mathrm{}\mathrm{})}+\text{ / }E_T,`$ (28) yield a broad peak near $`m_h`$ and have a long tail below. The cluster transverse mass $`M_C`$ has a Jacobian structure with a well defined edge at $`m_h`$. We show the nature of these two variables for the signal with $`m_h=170`$ GeV and the leading $`WW`$ background in Fig. 9(a) for $`M_T`$ and (b) for $`M_C`$ after application of the likelihood cut. For a given $`m_h`$ to be studied, one can perform additional cut optimization. In Table III, we list $`m_h`$-dependent criteria for the signal region defined as $$m_h60<M_C<m_h+5\mathrm{GeV}.$$ (29) We illustrate the effect of the optimized cuts of Table III in Fig. 10, where the cluster tranverse mass distribution for a $`m_h=170`$ GeV signal and the summed backgrounds, normalized to 30 $`\mathrm{fb}^1`$, are shown before (a) and after the final cuts (b). A clear excess of events from the Higgs signal can be seen in Fig. 10(b). It is important to note that before application of the final cuts, the dominant backgrounds are $`WW`$ and the $`W`$+fake with other sources accounting for less than 10% of the total. Moreover, for 30 $`\mathrm{fb}^1`$ integrated luminosity, the statistical error in the background is less than 2% before application of the final cuts. We therefore argue that one should be able to normalize the SM background curve ($`WW`$) with sufficient precision to unambiguously identify a significant excess attributable to Higgs boson signal. 
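The transverse and cluster transverse masses of Eqs. (27)–(28), and the $`m_h`$-dependent signal window of Eq. (29), can be sketched as follows (an illustrative fragment; function names are ours):

```python
import math

def cluster_masses(pt_ll, m_ll, met):
    """Transverse mass M_T and cluster transverse mass M_C of the
    dilepton + missing-ET system, Eqs. (27)-(28):
      M_T = 2 sqrt(pT^2(ll) + m^2(ll)),
      M_C = sqrt(pT^2(ll) + m^2(ll)) + MET."""
    e_t = math.sqrt(pt_ll ** 2 + m_ll ** 2)
    return 2.0 * e_t, e_t + met

def in_signal_region(m_c, m_h):
    """Signal window of Eq. (29): m_h - 60 < M_C < m_h + 5 (GeV)."""
    return m_h - 60.0 < m_c < m_h + 5.0
```

The upper edge at $`m_h+5`$ GeV reflects the Jacobian structure of $`M_C`$, whose distribution ends near $`m_h`$ for the signal.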
It should also be noted that by selectively loosening the final cuts, it is possible to maintain the same $`S/\sqrt{B}`$ while increasing the accepted background by up to factor of 5, and the accepted signal by a factor of 2.5. This can provide a powerful cross-check of the predicted background $`M_C`$ shape and can be used to demonstrate the stability of any observed excess. Our final results for the channel $`hW^{}W^{}\mathrm{}\overline{\nu }\overline{\mathrm{}}\nu `$ are summarized in Table IV. We have included the contributions to $`hW^{}W^{}`$ from the signal channels in Eqs. (1) and (3). Although they are small to begin with, they actually increase the accepted signal cross section by $`1218\%`$. We have also included the contribution from $`W\tau \nu \mathrm{}\nu _{\mathrm{}}\nu `$.From consideration of the $`W`$+$`(je)`$ background, it should be clear that improving the sensitivity by incorporating hadronic tau decays will be a difficult task. We nonetheless encourage the effort. It can be seen that one may achieve a $`S/B`$ of at least 6% for 140 GeV$`<m_h<190`$ GeV and reach 45% for $`m_h=170`$ GeV. The statistical significance, $`S/\sqrt{B}`$, for 30 fb<sup>-1</sup> integrated luminosity, is 3$`\sigma `$ or better for $`150<m_h<180`$ GeV. In Fig. 11(a), we present the integrated luminosities needed to reach a 3$`\sigma `$ significance and 95% CL exclusion computed assuming Poisson probabilities for $`m_h`$. To assess the effect of inherent systematic uncertainties, we re-evaluate the corresponding curves in Fig. 11(b) assuming a 10% systematic error for the signal and SM backgrounds.<sup>\**</sup><sup>\**</sup>\**For the purposes of computing the effects of systematic errors on the sensitivity to a Higgs signal, we have scaled the expected background upward by a given percentage and the expected signal downward by the same percentage simulataneously. The results are somewhat degraded, but they are still encouraging. 
## III Like-sign Di-lepton Plus Jets Signal When considering the $`hW^{}W^{}`$ mode in the associated production channels of Eq. (1), it is natural to consider the trilepton mode of Eq. (8) . However, the leptonic branching fractions for the $`W`$ decays limit the signal rate. Also, the leading irreducible SM background $`WZ(\gamma ^{})3\mathrm{}`$ is difficult to suppress to a sufficient level. On the other hand, the $`W^{}W^{}\mathrm{}\nu jj`$ mode gives like-sign leptons plus two-jets events as in Eq. (8) with a three times larger rate than the trilepton mode; while the leading background is higher order than $`WZ(\gamma ^{})`$. In this case, the contributing channels include $`WhWW^{}W^{}\mathrm{}^\pm \nu \mathrm{}^\pm \nu jj,`$ (30) $`WhWZ^{}Z^{}\mathrm{}^\pm \nu \mathrm{}^\pm \mathrm{}^{}jj,`$ (31) $`ZhZW^{}W^{}\mathrm{}^\pm \mathrm{}^{}\mathrm{}^\pm \nu jj,`$ (32) $`ZhZZ^{}Z^{}\mathrm{}^\pm \mathrm{}^{}\mathrm{}^\pm \mathrm{}^{}jj.`$ (33) We identify the final state signal as two isolated like-sign charged leptons plus jets. A soft third lepton may be present. The SM backgrounds are $`p\overline{p}`$ $``$ $`WWW,WWZ,WZZ,ZZZ,t\overline{t}W,t\overline{t}Z\mathrm{}^\pm \mathrm{}^\pm jjX,`$ (34) $`p\overline{p}`$ $``$ $`W^\pm Z(\gamma ^{})+jj\mathrm{}^\pm \mathrm{}^\pm jjX,ZZ(\gamma ^{})+jj\mathrm{}^\pm \mathrm{}^\pm jjX,t\overline{t}\mathrm{}\overline{\nu }jjb\overline{b},`$ (35) $`p\overline{p}`$ $``$ $`Wjj,Z(\gamma ^{})jj+\mathrm{fake}.`$ (36) Although the triple gauge boson production in Eq. (34) constitutes the irreducible backgrounds, the $`WZjj`$, $`t\overline{t}`$ through $`b`$ or $`c`$ semileptonic decay and the background from $`je`$ fakes turn out to be larger. 
The basic acceptance cuts required for the leptons are $`p_T(\mathrm{})>10\mathrm{GeV},|\eta _{\mathrm{}}|<1.5,m(\mathrm{}\mathrm{})>10\mathrm{GeV},`$ (37) $`0.3<\mathrm{\Delta }R(\mathrm{}j)<6,\text{ / }E_T>10\mathrm{GeV}.`$ (38) For a muon, we further demand that the scalar sum of additional track momenta within 30 be less than 60% of the muon momentum. We require that there are at least two jets with $$p_T^j>15\mathrm{GeV},|\eta _j|<3.$$ (39) To suppress the $`WZ`$ background, we require the leading jet to be within $`|\eta _{j_1}|<1.5`$ and to have a charged track multiplicity satisfying $`2\le N\le 12`$, and the sub-leading jet to be within $`|\eta _{j_2}|<2.0`$. The $`t\overline{t}`$ background typically exhibits greater jet activity; we therefore veto events having $$p_T^{j_3}>30\mathrm{GeV},$$ (40) and events with a fourth jet satisfying Eq. (39). To suppress backgrounds associated with heavy flavor jets, we veto the event if any of the jets has a $`b`$-tag. In Fig. 12, we present the di-jet mass distributions for the signal and backgrounds. Since the di-jets in the signal are mainly from a $`W^{}`$ decay, $`m(jj)`$ is close to or lower than $`M_W`$. This motivates us to further require $$m(jj)<110\mathrm{GeV},\mathrm{\Sigma }_j|p_T^j|<150\mathrm{GeV}.$$ (41) Finally, it is interesting to note that the lepton correlation angle introduced in Eq. (19) has strong discriminating power to separate the signal from the backgrounds, as shown in Fig. 13. We then impose a final cut $$\mathrm{cos}\theta _\mathrm{}_1^{}<0.95.$$ (42) With these cuts, we present the results for the signal and backgrounds in Table V. We can see that for a given $`m_h`$, the $`S/B`$ is larger than that for the di-lepton plus $`\text{ / }E_T`$ signature, reaching as high as 63%. One can consider further optimization of the cuts with $`m_h`$ dependence. However, the rather small signal rate for a 30 fb<sup>-1</sup> luminosity limits the statistical significance. 
Also, the systematic uncertainty in the background may be worse than in the pure leptonic channel. ## IV Discussions and Conclusion We have carried out comprehensive studies for $`hW^{}W^{}`$ via the two channels $`p\overline{p}hW^{}W^{}\mathrm{}\overline{\nu }\overline{\mathrm{}}\nu ,`$ (43) $`p\overline{p}W^\pm h,ZhW^\pm (Z)W^{}W^{}\mathrm{}^\pm \mathrm{}^\pm jj.`$ (44) Combining both channels, we present our summary figure in Fig. 14, again for (a) statistical effects only; (b) a 10% systematic error for the signal and SM backgrounds included for both channels. We conclude that with a c.m. energy of 2 TeV and an integrated luminosity of 30 fb<sup>-1</sup> the Higgs boson signal via $`hW^{}W^{}`$ should be observable at a 3$`\sigma `$ level or better for the mass range of $`145\mathrm{GeV}<m_h<180`$ GeV. For 95% CL exclusion, the mass reach is $`135\mathrm{GeV}<m_h<190`$ GeV. Our results presented here are valid not only for the SM Higgs boson, but also for SM-like ones such as the lightest supersymmetric Higgs boson in the decoupling limit . A Higgs mass bound can be translated into exploring fundamental parameters for a given theoretical model, as shown in Ref. . Furthermore, if there is an enhancement for $`\mathrm{\Gamma }(hgg)\times BR(hWW,ZZ)`$ over the SM expectation, or if $`BR(hb\overline{b})`$ is suppressed, such as in certain parameter regions in SUSY , the signals of Eq. (4) would be more substantial and more valuable to study. We can make our study more general in this regard by considering the quantity $`\sigma (h)\times BR(hW^{}W^{})`$ as a free parameter. Define the ratio of this parameter to the SM expectation for the signal to be $$R=\frac{\sigma (h)\times BR(hW^{}W^{})_{NewPhysics}}{\sigma (h)\times BR(hW^{}W^{})_{SM}}.$$ (45) Measuring $`R`$ would represent a generic Higgs boson search in a model-independent way. 
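As a rough illustration of how the exclusion on $`R`$ scales with luminosity (the paper itself uses full Poisson probabilities, which matter at the small event counts involved), one can use the Gaussian no-excess approximation, in which the one-sided 95% CL upper limit on $`R`$ is about $`1.645\sqrt{B}/S_{SM}`$. The event counts below are hypothetical, chosen only to show the scaling.

```python
import math

def r_limit_gaussian(s_sm, b, z=1.645):
    """One-sided 95% CL upper limit on R = sigma*BR / (sigma*BR)_SM in the
    Gaussian, no-observed-excess approximation: R * s_sm = z * sqrt(b)."""
    return z * math.sqrt(b) / s_sm

# Hypothetical counts at some reference luminosity (not from the tables):
s_sm, b = 10.0, 50.0
print(r_limit_gaussian(s_sm, b))        # ~1.16: cannot yet exclude R = 1

# Both s_sm and b scale linearly with luminosity, so the limit
# tightens as 1/sqrt(L): quadrupling L halves the limit.
print(r_limit_gaussian(4 * s_sm, 4 * b))
```

This $`1/\sqrt{L}`$ behavior is why the exclusion contours for $`R`$ move down only slowly with increasing integrated luminosity.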
Figure 15 gives the 95% CL exclusion for the ratio $`R`$ versus $`m_h`$ for several values of the integrated luminosity, where $`R=1`$ corresponds to the SM expectation. Figure 15(a) is for the channel Eq. (43) where we have only included the gluon-fusion contribution, and Fig. 15(b) for Eq. (44). On the other hand, once a Higgs boson signal is established, a careful examination of $`R`$ would help confirm the SM or identify possible new physics. Finally, we would like to point out that further improvement on our results is still possible by including other channels. Although there would be even larger SM backgrounds, the channel $`hW^{}W^{}\mathrm{}\nu jj`$ was found to be helpful in improving the Higgs boson coverage. Combining with $`hZ^{}Z^{}\mathrm{}\mathrm{}jj`$ as shown in Fig. 2, we would expect some possible improvement which deserves further study. The channels $`hZ^{}Z^{}\mathrm{}\overline{\mathrm{}}\nu \overline{\nu },4\mathrm{}`$ may have smaller SM backgrounds, especially for the $`4\mathrm{}`$ mode. Unfortunately, the signal rate would be very low for the anticipated luminosity at the Tevatron. It is nevertheless prudent to keep them in mind in searching for the difficult Higgs boson signal. In summary, we have demonstrated the feasibility for an upgraded Tevatron to significantly extend the Higgs boson mass coverage. The Fermilab Tevatron with luminosity upgrade will have the potential to significantly advance our knowledge of Higgs boson physics. Acknowledgments: We would like to thank the participants of the Tevatron Run-II Higgs Working Group, especially M. Carena, J. Conway, H. Haber, J. Hobbs, J.-M. Qian, D. Rainwater, S. Willenbrock and D. Zeppenfeld for discussions. We would also like to thank T. Junk for making available the programs used to calculate the sensitivity in the Poissonian limit, and E. Flattum for developing the SHW/PAW interface. This work was supported in part by a DOE grant No. 
DE-FG02-95ER40896 and in part by the Wisconsin Alumni Research Foundation. A.S.T. acknowledges support from the University of Michigan under DOE grant No. DE-FG02-95ER40899.
# Detection of Pre-Shock Dense Circumstellar Material of SN 1978K ## 1 Introduction Massive stars lose mass via stellar winds throughout their lifetime. Stellar winds expand away from the stars and form circumstellar envelopes. As a massive star ends its life in a supernova (SN) explosion, the SN ejecta plows through the circumstellar material, driving a forward shock into the circumstellar material and a reverse shock into the SN ejecta. Optical emission is generated in the ionized SN ejecta, cooled SN ejecta behind the reverse shock, shocked circumstellar material, and the ambient ionized circumstellar material (Chevalier & Fransson 1994). These four regions have different physical conditions and velocity structures. Consequently, optical luminosities and spectral characteristics of Type II SNe not only vary rapidly for individual SNe, but also differ widely among SNe with different progenitors. Optical spectra of Type II SNe older than a few years are characterized by broad hydrogen Balmer lines and oxygen forbidden lines, with FWHM greater than a few thousand km s<sup>-1</sup>, reflecting the rapid expansion of the SN ejecta (e.g., SN 1979C and SN 1980K - Fesen et al. 1998; SN 1986E - Cappellaro, Danziger, & Turatto 1995; SN 1987F - Filippenko 1989; SN 1994aj - Benetti et al. 1998). Some Type II SNe, however, do not seem to show such broad emission lines. The most notable case is SN 1978K. SN 1978K in NGC 1313 was discovered in 1990 during a spectrophotometric survey of extragalactic H ii regions (Ryder & Dopita 1993). Ryder et al. (1993) examined archival optical images of NGC 1313 and established that the optical maximum of the supernova occurred in 1978, possibly two months before July 31. However, the optical spectra of SN 1978K obtained in 1990–1992 do not show any emission line broader than 600 km s<sup>-1</sup> (Ryder et al. 1993; Chugai, Danziger, & Della Valle 1995). 
This is in sharp contrast to SN 1980K, which shows broad, 6000 km s<sup>-1</sup> emission lines in spectra obtained in 1988 and 1997 (Fesen et al. 1998). SN 1978K is intriguing at radio wavelengths as well. While its radio flux shows temporal variations consistent with the expectation of a typical Type II SN, its radio spectrum shows a low-frequency turnover that is most plausibly caused by free-free absorption from an H ii region along the line of sight (Ryder et al. 1993). Montes, Weiler, & Panagia (1997) re-analyzed the radio observations of SN 1978K, and found that the intervening H ii region has an emission measure EM = $`8.5\times 10^5(T_e/10^4K)^{1.35}`$ cm<sup>-6</sup> pc, where $`T_e`$ is the electron temperature. To determine the nature of this “H ii region” toward SN 1978K, we have obtained a high-dispersion echelle spectrum in the wavelength range 6530–6610 Å. This spectrum clearly resolves the narrow \[N ii\]$`\lambda \lambda `$6548, 6583 lines and a narrow H$`\alpha `$ component from a moderately broad H$`\alpha `$ component. The narrow H$`\alpha `$ and \[N ii\] lines must arise from the “H ii region”, and the broad H$`\alpha `$ component from the SN ejecta. In this paper, we report the echelle observation (§2), compare our spectrum with previous low-dispersion spectra (§3), argue that the “H ii region” toward SN 1978K is circumstellar, and suggest a feasible explanation for SN 1978K’s apparent lack of very broad emission lines (§4). ## 2 High-Dispersion Spectrum of SN 1978K We obtained a high-dispersion spectrum of SN 1978K using the echelle spectrograph on the 4-m telescope at Cerro Tololo Inter-American Observatory (CTIO) on 1997 February 27. The spectrograph was used in a long-slit, single-order mode; the cross disperser was replaced by a flat mirror and a broad H$`\alpha `$ filter (FWHM = 75 Å) was inserted behind the slit. The slit width was 250 $`\mu `$m, or 1.64 arcsec. 
The data were recorded with the red long-focus camera and a Tektronix 2048 $`\times `$ 2048 CCD. The pixel size was 0.08 Å pixel<sup>-1</sup> along the dispersion and 0.26 arcsec pixel<sup>-1</sup> in the spatial axis. The instrumental FWHM was 14$`\pm 1`$ km s<sup>-1</sup>. The data were wavelength-calibrated but not flux-calibrated. The echelle observation of SN 1978K was made with a 10-min exposure. SN 1978K and two unrelated H ii regions are detected. No spatially extended H ii features exist at the position of SN 1978K. A spectrum extracted from a 5<sup>′′</sup> slit length<sup>1</sup><sup>1</sup>1This large slit length is necessary to include all the light from SN 1978K, as the telescope was slightly out of focus for this observation. When the focus problem was resolved (2 hours later), SN 1978K had already set. centered on SN 1978K is presented in Figure 1. The high-dispersion spectrum shows three sets of lines with distinct velocity widths. The narrowest (unresolved) are the telluric H$`\alpha `$ and OH $`\lambda `$6553.617 and $`\lambda `$6577.285 lines (Osterbrock et al. 1996). The broadest is the H$`\alpha `$ emission from the supernova ejecta. It is centered at 6572.76$`\pm `$0.22 Å, corresponding to a heliocentric velocity (V<sub>hel</sub>) of 455$`\pm `$10 km s<sup>-1</sup>; its FWHM is $`\sim `$450 km s<sup>-1</sup> and FWZI $`\sim `$1,100 km s<sup>-1</sup>. The third set of lines consists of the narrow \[N ii\]$`\lambda \lambda `$6548, 6583 lines and a narrow H$`\alpha `$ component. The narrow H$`\alpha `$ component is superimposed near the peak of the broad H$`\alpha `$ emission of the supernova, hence its central velocity, V<sub>hel</sub> $`\sim `$ 419 km s<sup>-1</sup>, and FWHM, 75–100 km s<sup>-1</sup>, are somewhat uncertain. The \[N ii\]$`\lambda `$6583 line, at V<sub>hel</sub> = 419$`\pm `$5 km s<sup>-1</sup>, shows a line split of $`\sim `$70 km s<sup>-1</sup>; its FWHM is $`\sim `$125 km s<sup>-1</sup>. 
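The quoted heliocentric velocity of the broad component follows from the measured line centroid via the non-relativistic Doppler formula. A minimal numerical check (ignoring the small heliocentric correction, and taking the H$`\alpha `$ rest wavelength as 6562.80 Å):

```python
C_KM_S = 299792.458          # speed of light in km/s
HALPHA_REST = 6562.80        # Halpha rest wavelength in Angstroms

def doppler_velocity(lambda_obs, lambda_rest=HALPHA_REST):
    """Non-relativistic Doppler velocity (km/s) for an observed line centroid."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

# Broad Halpha component of SN 1978K, centered at 6572.76 +/- 0.22 A:
print(doppler_velocity(6572.76))   # ~455 km/s, as quoted in the text
```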
The \[N ii\]$`\lambda `$6548 line, being weaker, does not show an obvious line split; however, its asymmetric line profile indicates the presence of a brighter red component and a weaker blue component, consistent with those seen in the \[N ii\]$`\lambda `$6583 line. The narrow H$`\alpha `$ component and the narrow \[N ii\] lines most likely originate from the same emitting region, and will be referred to as “nebular” emission. We have measured the nebular \[N ii\]$`\lambda `$6583/H$`\alpha `$ ratio to be 0.8–1.3. The large uncertainty in this ratio is caused by the uncertainty in the nebular H$`\alpha `$ flux, as it is difficult to separate the nebular and supernova contributions to the observed H$`\alpha `$ emission. The possible range of nebular \[N ii\]/H$`\alpha `$ ratio is derived from the lower and upper limits of the nebular H$`\alpha `$ flux, estimated by assuming high and low peaks of supernova emission, respectively. ## 3 Comparison with Previous Low-Dispersion Spectra A relatively low-dispersion spectrum of SN 1978K was obtained on 1990 Jan 23 by Ryder et al. (1993). That spectrum showed an H$`\alpha `$ line centered at 6570.2$`\pm `$0.6 Å with a FWHM of 563 km s<sup>-1</sup>. It also detected the \[N ii\]$`\lambda `$6583 line at 6589.6$`\pm `$1.0 Å. As this spectrum has a resolution of $``$5 Å and a pixel size of 1.5 Å pixel<sup>-1</sup>, the \[N ii\] lines are not well resolved from the H$`\alpha `$ line and consequently the velocity and flux measurements might not be very accurate. The \[N ii\]$`\lambda `$6583/H$`\alpha `$ flux ratio, 0.049, derived from this low-dispersion spectrum is really the ratio of nebular \[N ii\]$`\lambda `$6583 flux to the combined supernova and nebular H$`\alpha `$ flux. The \[N ii\]$`\lambda `$5755 line is also detected and the \[N ii\]$`\lambda `$5755/H$`\alpha `$ flux ratio is 0.025. Another low-dispersion spectrum of SN 1978K was obtained on 1992 October 22 by Chugai et al. (1995). 
The resolution of this spectrum is 10 Å. Thus the redshifts and widths of spectral lines cannot be reliably determined. The \[N ii\]$`\lambda `$6583/H$`\alpha `$ flux ratio is 0.072, and the \[N ii\]$`\lambda `$5755/H$`\alpha `$ flux ratio is 0.016. Using our echelle spectrum, we have measured the ratio of nebular \[N ii\]$`\lambda `$6583 flux to the combined supernova and nebular H$`\alpha `$ flux to be 0.06. This is different from the previous measurements, 0.049 and 0.072. While our measurement should be more accurate because of our higher spectral resolution, the supernova H$`\alpha `$ flux might have varied from 1990 to 1997 (Chugai et al. 1995). It is not clear whether the \[N ii\] flux itself has changed. Nebular lines toward SN 1978K are also detected in the UV spectra of SN 1978K obtained with the Faint Object Spectrograph on board the Hubble Space Telescope on 1994 September 26 and 1996 September 22–23 (Schlegel et al. 1998). The Ly$`\alpha `$ line and the blended \[Ne iv\]$`\lambda \lambda `$2421, 2424 doublet are detected. Both lines have FWHMs comparable to the instrumental resolution, 7 Å, corresponding to 1727 km s<sup>-1</sup> at Ly$`\alpha `$ and 866 km s<sup>-1</sup> at \[Ne iv\]. These \[Ne iv\] lines have critical densities of 8$`\times `$10<sup>4</sup> and 2.5$`\times `$10<sup>5</sup> cm<sup>-3</sup>, respectively (Zheng 1988); therefore, these \[Ne iv\] lines must originate from the nebula. The Ly$`\alpha `$ line emission, like the H$`\alpha `$ emission, contains both the supernova ejecta and nebular components. ## 4 Discussion ### 4.1 Origin of the Narrow H$`\alpha `$ and \[N ii\] Lines The most intriguing features detected in our high-dispersion spectrum of SN 1978K are the narrow nebular H$`\alpha `$ and \[N ii\] lines, which are presumably emitted by the “H ii region along the line of sight” implied by the radio spectrum of SN 1978K (Ryder et al. 1993). 
However, as we argue below, the \[N ii\] line strengths suggest that this “H ii region” is circumstellar, rather than interstellar. The nebular \[N ii\]$`\lambda `$6583/H$`\alpha `$ line ratio, 0.8–1.3, is unusually high for normal interstellar H ii regions in a spiral galaxy. For example, H ii regions in M101 have \[N ii\]$`\lambda `$6583/H$`\alpha `$ ratios $``$0.3 (Kennicutt & Garnett 1996). SN 1978K is at the outskirts of NGC 1313, where abundances are expected to be low and the H ii excitation is expected to be high. If the nebular H$`\alpha `$ and \[N ii\] lines toward SN 1978K originate in an interstellar H ii region, we would expect the \[N ii\]$`\lambda `$6583/H$`\alpha `$ ratio to be $``$0.1 or lower. A low interstellar \[N ii\]/H$`\alpha `$ ratio is confirmed by the bright H ii region detected along the slit at $`90^{\prime \prime }`$ east of SN 1978K. This H ii region is brighter than the nebula toward SN 1978K in the H$`\alpha `$ line, but its \[N ii\]$`\lambda `$6583 line is not detected. We may rule out an interstellar H ii region explanation for the narrow nebular lines seen in SN 1978K. The high \[N ii\]$`\lambda `$6583/H$`\alpha `$ ratio may be caused by a high electron temperature or a high nitrogen abundance. These conditions can be easily provided by SN 1978K and its progenitor. If the nebula was ionized by the UV flash of SN 1978K, the electron temperature may be higher than that of a normal H ii region, as in the case of SN 1987A’s outer rings (Panagia et al. 1996). However, the \[N ii\]$`\lambda `$6583 line intensity increases by only a factor of 2 for an electron temperature increase from 10,000 K to 15,000 K. This increase cannot explain fully the observed high \[N ii\]/H$`\alpha `$ ratio. A higher nitrogen abundance is needed. 
An elevated nitrogen abundance is characteristic of ejecta nebulae around evolved massive stars, such as luminous blue variables (LBVs) and Wolf-Rayet (WR) stars; the \[N ii\]$`\lambda `$6583/H$`\alpha `$ ratios of these ejecta nebulae are frequently observed to be $`\sim `$1 (Esteban et al. 1992; Smith et al. 1998). Therefore, the most reasonable origin of the nebular emission lines toward SN 1978K would be a circumstellar ejecta nebula. The observed high \[N ii\]/H$`\alpha `$ ratio may be caused by the combination of a high nitrogen abundance and a high electron temperature. SN 1978K’s circumstellar ejecta nebula has a very high density, as a strong \[N ii\]$`\lambda `$5755 line is observed in SN 1978K’s spectrum. The \[N ii\] ($`\lambda `$6548+$`\lambda `$6583)/$`\lambda `$5755 ratio is measured to be 2.55 by Ryder et al. (1993), and 6.0 by Chugai et al. (1995), indicating that collisional de-excitation is significant for the $`{}_{}{}^{1}D_{2}^{}`$ level of N<sup>+</sup>. If we assume an electron temperature of 1–1.5$`\times 10^4`$ K, the observed \[N ii\] line ratios imply electron densities of 3–12$`\times 10^5`$ cm<sup>-3</sup>. The circumstellar ejecta nebula of SN 1978K can be compared to those observed around LBVs and WR stars. The density of SN 1978K’s nebula is higher than those of WR nebulae, but within the range for LBV nebulae (Stahl 1989; Esteban et al. 1992). We adopt the emission measure EM = $`8.5\times 10^5(T_e/10^4K)^{1.35}`$ cm<sup>-6</sup> pc determined from the radio observations (Montes et al. 1997) for SN 1978K’s nebula. This emission measure is much higher than those observed in ejecta nebulae around WR stars, typically a few $`\times `$10<sup>2</sup> to 10<sup>3</sup> cm<sup>-6</sup> pc (Esteban et al. 1992; Esteban & Vílchez 1992), but lies toward the high end of the range typically seen in LBV nebulae, a few $`\times `$10<sup>3</sup> to 10<sup>5</sup> cm<sup>-6</sup> pc (Hutsemékers 1994; Smith et al. 1998). 
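A density estimate of this kind can be sketched with the commonly quoted analytic approximation for the \[N ii\] ($`\lambda `$6548+$`\lambda `$6583)/$`\lambda `$5755 ratio (Osterbrock-style five-level-atom fit). The coefficients below are the textbook values of that approximation, not numbers from this paper, so the result is only as accurate as the approximation itself; it reproduces the few $`\times 10^5`$ cm<sup>-3</sup> scale quoted above.

```python
import math

def nii_electron_density(ratio, t_e):
    """Electron density (cm^-3) from the [N II] (6548+6583)/5755 flux ratio,
    inverting the common analytic approximation
        ratio = 8.23 * exp(2.50e4 / T) / (1 + 4.4e-3 * n_e / sqrt(T)),
    which is only meaningful where collisional de-excitation matters."""
    return (math.sqrt(t_e) / 4.4e-3) * (8.23 * math.exp(2.50e4 / t_e) / ratio - 1.0)

# Observed ratios: 2.55 (Ryder et al. 1993) and 6.0 (Chugai et al. 1995),
# for assumed electron temperatures of 1.0-1.5e4 K:
for ratio in (2.55, 6.0):
    for t_e in (1.0e4, 1.5e4):
        print(ratio, t_e, nii_electron_density(ratio, t_e))
# All four combinations land in the few x 1e5 cm^-3 range.
```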
Finally, the H$`\alpha `$ and \[N ii\] velocity profiles seen in our SN 1978K spectrum suggest an expansion velocity<sup>2</sup><sup>2</sup>2The expansion velocity implied by the line split in the \[N ii\] line is $`>`$35 km s<sup>-1</sup>. The expansion velocity can also be approximated by the HWHM of the H$`\alpha `$ and \[N ii\] lines, 40–55 km s<sup>-1</sup>. of 40–55 km s<sup>-1</sup>, which is lower than those of most ejecta nebulae around WR stars but is within the range for LBV nebulae (Nota et al. 1995; Chu, Weis, & Garnett 1999). It is thus likely that the observed nebula toward SN 1978K was ejected by the progenitor during an LBV phase before the SN explosion. This ejecta nebula could be either part of the circumstellar envelope that the SN ejecta expands into, or a shell that is detached from the circumstellar envelope. We will demonstrate below that the latter is unlikely. If the ejecta nebula is a detached shell, the observed emission measure and density imply that the shell thickness is only 4$`\times `$10<sup>-5</sup> to 4$`\times `$10<sup>-6</sup> pc. The thickness of a detached, dense shell will be broadened by diffusion, and may be crudely approximated by $`(c/V_{exp})R`$, where $`c`$ is the isothermal sound velocity, $`V_{exp}`$ is the expansion velocity, and $`R`$ is the radius. We find that the radius of SN 1978K’s freely expanding ejecta shell would have to be no greater than $`2\times 10^{-4}`$ pc, which is smaller than the expected radius of the SN ejecta. This is impossible. Therefore, we conclude that the narrow H$`\alpha `$ and \[N ii\] lines must originate in the pre-shock, ionized circumstellar envelope of SN 1978K. The narrow nebular lines from the pre-shock, ionized circumstellar envelope of SN 1978K are not unique among SNe. 
The high-dispersion spectrum of SN 1997ab shows narrow P-Cygni H$`\alpha `$ and narrow \[N ii\]$`\lambda `$6583 lines, and the FWZI of the P-Cygni H$`\alpha `$ line, 180 km s<sup>-1</sup>, is comparable to that of SN 1978K’s H$`\alpha `$ line (Salamanca et al. 1998). The P-Cygni profile of SN 1997ab’s narrow H$`\alpha `$ line indicates a high density, $`\sim 10^7`$ cm<sup>-3</sup>. This density exceeds the critical density of the $`{}_{}{}^{1}D_{2}^{}`$ level of N<sup>+</sup>, and causes a weak \[N ii\]$`\lambda `$6583 line (see Figure 1 of Salamanca et al. 1998). If SN 1997ab’s circumstellar material is nitrogen-rich like that of SN 1978K, we predict that its \[N ii\]$`\lambda `$5755 line is strong and should be detectable as well. SN 1997ab is very likely a younger version of SN 1978K, and SN 1978K’s nebular H$`\alpha `$ line may have exhibited a P-Cygni profile in 1979–1980. ### 4.2 SN Evolution in a Very Dense Circumstellar Envelope The most notable characteristic of SN 1978K is its apparent lack of very broad (a few thousand km s<sup>-1</sup>) emission lines. Adopting canonical expansion velocities and sizes for SN 1978K, Ryder et al. (1993) derived a mass of $`>`$80 M<sub>☉</sub> for the circumstellar envelope. This mass is too large to reconcile with the current understanding of massive stellar evolution. To lower the circumstellar mass, Chugai et al. (1995) propose that the circumstellar envelope is clumpy. We consider that the large size $`\sim `$0.1 pc adopted by Ryder et al. (1993) is overestimated and inconsistent with the expansion velocity implied by our observed H$`\alpha `$ FWHM of 450 km s<sup>-1</sup>. There is no need to assume an unseen, larger expansion velocity. We suggest that the small expansion velocity of SN 1978K is caused by the dense circumstellar envelope, which has quickly decelerated the expansion of the SN ejecta. If optical spectra had been obtained immediately after the SN explosion in 1978, very broad emission lines would have been detected. 
Rapid deceleration of SN ejecta has been observed in two other SNe, SN 1986J and SN 1997ab. SN 1986J has been noted to have spectral properties very similar to those of SN 1978K<sup>3</sup><sup>3</sup>3We have examined a large number of SN spectra reported in the literature. SN 1986J appears to be the only SN besides SN 1978K that shows a strong \[N ii\]$`\lambda `$5755 line, indicating a very high density and possibly an enhanced nitrogen abundance.. SN 1986J probably exploded four years before its initial discovery in 1986 (Rupen et al. 1987; Chevalier 1987). Its optical spectra obtained soon after the discovery show narrow hydrogen Balmer lines and nitrogen forbidden lines, indicating an expansion velocity $`<`$ 600 km s<sup>-1</sup> (Leibundgut et al. 1991). SN 1997ab is the only other SN for which narrow nebular emission lines from the dense circumstellar envelope have been unambiguously resolved and detected. SN 1997ab’s light curve peaked in 1996; the FWHM of its H$`\alpha `$ line decreased rapidly from 2500 km s<sup>-1</sup> on 1997 March 2 to 1800 km s<sup>-1</sup> on 1997 May 30 (Hagen et al. 1997; Salamanca et al. 1998). Clearly, SN 1978K, SN 1986J, and SN 1997ab all possess very dense circumstellar envelopes, and we may expect them to evolve similarly. The expansion of SN 1978K might have slowed down to below 1000 km s<sup>-1</sup> within the first $`\sim `$2 years after the explosion, and the SN ejecta could not have reached a radius greater than $`\sim `$0.02 pc in 1990. A factor of 5 reduction in the radius would lower Ryder et al.’s (1993) estimate of the mass to a reasonable value, and the hypothesis of a clumpy circumstellar envelope would no longer be necessary. ### 4.3 Future Work Previous spectrophotometric observations of SNe were rarely made with spectral resolutions better than 2 Å. Our echelle observation of SN 1978K has demonstrated that high-dispersion spectroscopy is powerful in resolving pre-shock, ionized circumstellar material. 
A high-dispersion spectroscopic survey of young SNe in nearby galaxies may detect more circumstellar envelopes and even detached ejecta ring nebulae, such as the rings around SN 1987A (Burrows et al. 1995). The density and velocity structures of these circumstellar envelopes would shed light on the mass loss history as well as physical properties of the massive progenitors. Our spectrum of SN 1978K unfortunately covers only the \[N ii\] and H$`\alpha `$ lines. In order to measure the density, temperature, and abundances of the circumstellar material, it is necessary to obtain high-dispersion spectra covering a large wavelength range. It is also important to monitor the spectral changes indicative of density changes in the circumstellar envelope. A large change at all wavelengths is expected when the SN ejecta expands past the outer edge of the circumstellar envelope. We would like to thank the referee for useful suggestions to improve this paper. YHC acknowledges the support of NASA LTSA grant NAG 5-3246. MJM and KWW wish to thank the Office of Naval Research (ONR) for the 6.1 funding supporting this research. Figure Captions
# Superheavy Dark Matter from Thermal Inflation ## I Introduction A ‘model-independent’ bound of about $`10^5\mathrm{GeV}`$ has often been invoked (see and references therein) as the largest possible mass the dark matter particle can have. The derivation of this bound uses the unitarity bound on the annihilation cross-section, and makes one crucial assumption: thermal equilibrium in the early universe. The unitarity bound on the annihilation cross-section tells us $$\sigma _Av\lesssim \frac{1}{M^2}$$ (1) where $`M`$ is the mass of the dark matter particle, and $`\sigma _Av`$ is the thermally averaged annihilation cross-section that appears in the relevant Boltzmann equation (i.e. the annihilation rate per unit time is given by $`n\sigma _Av`$, where $`n`$ is the proper number density of the dark matter particle). The assumption of thermal equilibrium, on the other hand, tells us that the freeze-out abundance is given by the thermal distribution $$n\propto \left(MT_f\right)^{\frac{3}{2}}\mathrm{exp}\left(-\frac{M}{T_f}\right)$$ (2) where $`T_f`$ is the freeze-out temperature. This assumes a cold relic. For a hot relic that freezes out at $`T_f\gtrsim M`$, the number density will be higher, leading to a stronger bound on its mass . The exponential suppression implies that $`T_f`$ cannot be too much smaller than $`M`$; hence, combining this with the unitarity bound, it can be shown that $$\mathrm{\Omega }h^2\gtrsim 0.1\left(\frac{M}{10^5\mathrm{GeV}}\right)^2$$ (3) where $`\mathrm{\Omega }`$ is the ratio of the mass density in the dark matter particle to the critical density today, and $`h`$ is the Hubble constant in units of $`100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$. This bound naturally has important implications for dark matter searches. Recently, it has been shown that this bound can be evaded by violating the assumption of thermal equilibrium in the early universe, for instance by producing the dark matter particle at the end of inflation via preheating/reheating or gravitational particle-creation . 
The masses required for a present abundance of $`\mathrm{\Omega }\sim 1`$ are within a few orders of magnitude of the Hubble parameter at the end of inflation, i.e. $`10^{12}`$–$`10^{16}\mathrm{GeV}`$. For this reason, they have been called superheavy dark matter or WIMPZILLAs. Here, we propose a different production mechanism that can also evade the $`10^5\mathrm{GeV}`$ upper bound, and naturally achieves $`\mathrm{\Omega }\sim 1`$. It makes use of a late period of inflation called thermal inflation that occurs at an energy scale of around $`10^6\mathrm{GeV}`$, with a Hubble parameter of the order of $`1\mathrm{keV}`$, which is to be compared with the GUT-scale ordinary inflation having $`V^{1/4}\sim 10^{16}\mathrm{GeV}`$ and $`H\sim 10^{13}\mathrm{GeV}`$ used in . Thermal inflation provides a natural solution to the Polonyi or moduli problem that generically arises in string theory (see also ). It occurs when a ‘flaton’, a scalar field with a small mass and a large vacuum expectation value, is held at the origin by its finite-temperature effective potential. One gets a few $`e`$-folds of inflation because the flaton’s potential energy dominates the thermal energy density well before the temperature drops below the critical temperature for the flaton to start rolling away from the origin. The prototypical flaton potential is described in §II A. Thermal inflation occurs at a very low energy scale, which is the reason it can successfully dilute the potentially harmful moduli produced after ordinary inflation. Note that it will also dilute any superheavy dark matter produced at the end of ordinary inflation. Our idea for dark matter production works roughly as follows. A particle $`\psi `$ and its antiparticle $`\overline{\psi }`$, which carry some conserved charge to make them stable, are coupled to the flaton. They are initially massless during the thermal inflation when the flaton is held at the origin by the finite-temperature effects of $`\psi `$ and $`\overline{\psi }`$. 
After the temperature of the universe drops below the critical temperature, the flaton begins to fast-roll down its potential. The $`\psi `$ particle quickly gains mass in the process, which reduces its annihilation cross-section, and its abundance quickly freezes out. After that, the flaton continues to roll until it reaches the true vacuum, acquiring a large expectation value and giving $`\psi `$ a large mass.<sup>§</sup><sup>§</sup>§A related mechanism has also been considered in . The parameters for thermal inflation give a mass for $`\psi `$ of around $`10^{10}\mathrm{GeV}`$, and work out naturally to give an abundance of $`\mathrm{\Omega }_{\psi \overline{\psi }}\sim 1`$. The details are explained in §II. This production mechanism evades the conventional upper bound of $`10^5\mathrm{GeV}`$ by giving the particle $`\psi `$ a larger annihilation cross-section at freeze-out than what one would expect based on its final mass, and by entropy production after the freeze-out. Since thermal inflation provides a natural solution to the moduli problem, and since the prediction of $`\mathrm{\Omega }_{\psi \overline{\psi }}\sim 1`$ follows rather naturally from its parameters (assuming $`\psi `$ is stable), the possibility of a significant fraction of the universe being composed of superheavy dark matter should be taken seriously. In §III, we discuss the observational consequences and detectability of such dark matter, which could be completely inert, weakly-interacting, or even electromagnetically or strongly-charged. Finally, we conclude in §IV with discussions of other plausible realizations of the mechanism outlined above, which include the possibility of producing near Planck mass relics that could perhaps be electromagnetically charged. 
## II Superheavy dark matter from thermal inflation ### A Particle physics A simple model of thermal inflation is provided by the superpotential $$W\left(\varphi \right)=\frac{\lambda _\varphi \varphi ^4}{4M_{\mathrm{Pl}}}$$ (4) where $`\varphi `$ is the flaton, and $`M_{\mathrm{Pl}}\simeq 2.4\times 10^{18}\mathrm{GeV}`$. This form for the superpotential can be guaranteed by a $`Z_4`$ discrete gauge symmetry because the superpotential is a holomorphic function of $`\varphi `$, i.e. does not depend on $`\varphi `$’s complex conjugate $`\varphi ^{*}`$. The supersymmetric part of the scalar potential is then given by $$V_{\mathrm{susy}}(\varphi ,\varphi ^{*})=\left|\frac{\partial W}{\partial \varphi }\right|^2=\frac{\left|\lambda _\varphi \right|^2\left|\varphi \right|^6}{M_{\mathrm{Pl}}^2}$$ (5) In addition, one requires $`\varphi `$’s soft supersymmetry-breaking mass-squared to be negative. The scalar potential is then $$V(\varphi ,\varphi ^{*})=V_{\mathrm{ti}}-m_\varphi ^2\left|\varphi \right|^2+\left(\frac{A\lambda _\varphi \varphi ^4}{M_{\mathrm{Pl}}}+\text{c.c.}\right)+\frac{\left|\lambda _\varphi \right|^2\left|\varphi \right|^6}{M_{\mathrm{Pl}}^2}$$ (6) where $`V_{\mathrm{ti}}`$ and $`A`$ are other soft supersymmetry-breaking parameters. One would expect $`\left|A\right|\sim m_\varphi `$. The scale of the soft supersymmetry-breaking parameters of the Minimal Supersymmetric Standard Model (MSSM) is expected to be around $`100\text{GeV}`$ to $`1\text{TeV}`$. This potential has four degenerate minima The domain walls associated with this degeneracy are most likely harmless, either because the vacua are identified because of a discrete gauge symmetry, or because higher order terms, that would generically be present in the absence of a gauge symmetry, break the degeneracy. 
with $`\left|\varphi \right|=\varphi _{\mathrm{vac}}`$ where $$\varphi _{\mathrm{vac}}^2=\frac{m_\varphi M_{\mathrm{Pl}}}{\sqrt{3}\left|\lambda _\varphi \right|}\left(\sqrt{1+\frac{4\left|A\right|^2}{3m_\varphi ^2}}+\frac{2\left|A\right|}{\sqrt{3}m_\varphi }\right)$$ (7) For $`m_\varphi =100\text{GeV}`$, $`\left|\lambda _\varphi \right|=1`$, and neglecting $`A`$, this gives $`\varphi _{\mathrm{vac}}\simeq 10^{10}\text{GeV}`$. Requiring zero cosmological constant at the minima gives $$V_{\mathrm{ti}}=\frac{2}{3}m_\varphi ^2\varphi _{\mathrm{vac}}^2\left[1+\frac{\left|A\right|}{\sqrt{3}m_\varphi }\left(\sqrt{1+\frac{4\left|A\right|^2}{3m_\varphi ^2}}+\frac{2\left|A\right|}{\sqrt{3}m_\varphi }\right)\right]$$ (8) Thermal inflation starts when the energy density of the universe starts to be dominated by $`V_{\mathrm{ti}}`$, and ends a few $`e`$-folds later when the temperature drops below that required to hold $`\varphi `$ at $`\varphi =0`$. $`\varphi `$ then rapidly rolls towards, and oscillates about, the minima of its potential. It eventually decays, leaving a radiation dominated universe, at a temperature $`T_{\mathrm{dec}}`$. We require $$T_{\mathrm{dec}}\gtrsim 10\text{MeV}$$ (9) to avoid interfering with nucleosynthesis. In order for $`\varphi `$ to be held at $`\varphi =0`$ by thermal effects during the thermal inflation, $`\varphi `$ must have unsuppressed interactions with other fields in the thermal bath. We will assume these interactions include a coupling of the form A flaton $`\varphi `$ cannot have a coupling of the form $`W\sim \varphi ^2\psi `$ since it would lead to an unsuppressed quartic self-coupling in the potential for $`\varphi `$. Such a coupling can be forbidden by an appropriate gauge symmetry. The only other possibility would be for $`\varphi `$ to be charged under some continuous gauge symmetry, which in the vacuum would be broken at a scale $`\varphi _{\mathrm{vac}}\sim 10^{10}\text{GeV}`$. We do not consider this possibility here. 
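As a quick numerical sanity check of Eqs. (7) and (8) in the simplest limit (a sketch; the fiducial values m_φ = 100 GeV, |λ_φ| = 1, A → 0 are the ones quoted above), one can verify that the vacuum expectation value comes out at the quoted 10^10 GeV, and that V_ti^(1/4) lands at the ~10^6 GeV thermal-inflation scale mentioned in the introduction:

```python
import math

# Fiducial parameters from the text: m_phi = 100 GeV, |lambda_phi| = 1,
# A neglected, reduced Planck mass 2.4e18 GeV.
m_phi = 100.0          # GeV
lam_phi = 1.0
M_pl = 2.4e18          # GeV

# Eq. (7) with A -> 0: phi_vac^2 = m_phi * M_pl / (sqrt(3) * |lambda_phi|)
phi_vac = math.sqrt(m_phi * M_pl / (math.sqrt(3.0) * lam_phi))

# Eq. (8) with A -> 0: V_ti = (2/3) * m_phi^2 * phi_vac^2
V_ti = (2.0 / 3.0) * m_phi**2 * phi_vac**2

print(f"phi_vac   = {phi_vac:.2e} GeV")      # ~1e10 GeV, as quoted
print(f"V_ti^1/4  = {V_ti**0.25:.2e} GeV")   # ~1e6 GeV inflation scale
```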
$$W=\lambda _\psi \varphi \overline{\psi }\psi $$ (10) with $`|\lambda _\psi |\lesssim 1`$. After thermal inflation, $`\psi `$ and $`\overline{\psi }`$ will acquire masses $$M=\left|\lambda _\psi \right|\varphi _{\mathrm{vac}}$$ (11) from this coupling. For $`|\lambda _\psi |=1`$ and $`\varphi _{\mathrm{vac}}=10^{10}\text{GeV}`$, this gives $`M=10^{10}\text{GeV}`$. In order to satisfy the decay constraint, Eq. (9), we require $`\varphi `$ to have some, possibly indirect, couplings to the MSSM, beyond the ever present gravitational couplings. We will assume this is achieved by having $`\psi `$ and $`\overline{\psi }`$ charged under SU(3)$`\times `$SU(2)$`\times `$U(1). Other possibilities were considered in Ref. . In order not to interfere with gauge coupling unification, $`\psi `$ and $`\overline{\psi }`$ should form complete representations of SU(5). These couplings give a decay rate $$\mathrm{\Gamma }_\varphi \simeq 3\times 10^{-9}\left(\frac{g_\psi ^2m_\varphi ^3}{\varphi _{\mathrm{vac}}^2}\right)$$ (12) where $`g_\psi `$ is the number of internal degrees of freedom of $`\psi `$ and $`\overline{\psi }`$. For example, if $`\psi =\mathrm{𝟓}`$ and $`\overline{\psi }=\overline{\mathrm{𝟓}}`$ then $`g_\psi =40`$. The decay temperature is therefore $`T_{\mathrm{dec}}`$ $`\simeq `$ $`g_{*}^{-\frac{1}{4}}\mathrm{\Gamma }_\varphi ^{\frac{1}{2}}M_{\mathrm{Pl}}^{\frac{1}{2}}\simeq 2\times 10^{-5}\left({\displaystyle \frac{g_\psi m_\varphi ^{3/2}M_{\mathrm{Pl}}^{1/2}}{\varphi _{\mathrm{vac}}}}\right)`$ (13) $`\simeq `$ $`300\text{MeV}\left({\displaystyle \frac{g_\psi }{100}}\right)\left({\displaystyle \frac{m_\varphi }{100\text{GeV}}}\right)^{\frac{3}{2}}\left({\displaystyle \frac{10^{10}\text{GeV}}{\varphi _{\mathrm{vac}}}}\right)`$ (14) where $`g_{*}`$ is the effective number of massless degrees of freedom in the universe at temperature $`T_{\mathrm{dec}}`$. Note that parametric resonance is unlikely to be important because $`\varphi `$ oscillates around a large vacuum expectation value rather than the origin. 
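With the negative exponents restored, the decay-temperature estimate of Eqs. (13) and (14) can be checked directly (a sketch with fiducial values; the quoted 2×10⁻⁵ coefficient is taken to already absorb the g_* factor):

```python
import math

# Fiducial inputs (illustrative): g_psi = 100, m_phi = 100 GeV,
# phi_vac = 1e10 GeV, reduced Planck mass 2.4e18 GeV.
g_psi = 100.0
m_phi = 100.0          # GeV
M_pl = 2.4e18          # GeV
phi_vac = 1.0e10       # GeV

# Eq. (13): T_dec ~ 2e-5 * g_psi * m_phi^(3/2) * M_pl^(1/2) / phi_vac
T_dec = 2e-5 * g_psi * m_phi**1.5 * math.sqrt(M_pl) / phi_vac
print(f"T_dec = {T_dec*1e3:.0f} MeV")   # ~300 MeV, comfortably above 10 MeV
```

The result sits well above the nucleosynthesis bound of Eq. (9), as the text asserts.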
Renormalization, amongst other things, will split the degeneracy of Eq. (11) amongst the various components of $`\psi `$ and $`\overline{\psi }`$. The renormalization will tend to make SU(3) charged components the heaviest, and SU(3)$`\times `$SU(2)$`\times `$U(1) singlet components, if they exist, the lightest . Our main unjustified assumption is that $`\psi `$ and $`\overline{\psi }`$ carry opposite charge under some discrete (or continuous) gauge symmetry under which all other fields are neutral. This symmetry is, however, helpful in avoiding unwanted superpotential couplings to MSSM fields. The lightest component of $`\psi `$ and $`\overline{\psi }`$ will then be absolutely stable and so potentially a dark matter candidate. <sup>\**</sup><sup>\**</sup>\**Throughout this paper, we use the same notation $`\psi `$ for the complete representation and its lightest component (the dark matter). For example, we could have a $`Z_8`$ discrete gauge symmetry, under which $`\varphi `$, $`\psi `$, and $`\overline{\psi }`$ have charges $`2`$, $`-1`$, and $`-1`$, respectively. This would guarantee Eqs. (4) and (10), and after $`\varphi `$ acquires its vacuum expectation value, the $`Z_8`$ will be broken down to a $`Z_2`$ under which only $`\psi `$ and $`\overline{\psi }`$ are charged. A simple choice of representations that satisfies the discrete anomaly cancellation conditions is $`\psi =\mathrm{𝟏𝟔}+\mathrm{𝟏}`$ and $`\overline{\psi }=\overline{\mathrm{𝟏𝟔}}+\mathrm{𝟏}`$ of SO(10). ### B Abundance During thermal inflation, $`\psi `$ and $`\overline{\psi }`$ will be relativistic and in thermal equilibrium. Their number density will therefore be $$n\left(T\right)=\frac{7\zeta (3)g_\psi T^3}{8\pi ^2}$$ (15) where $`\zeta (3)\simeq 1.202`$, and $`g_\psi `$ is the number of internal degrees of freedom of $`\psi `$ and $`\overline{\psi }`$. 
<sup>††</sup><sup>††</sup>††We have assumed, as is appropriate for a supersymmetric theory, that there are equal numbers of bosonic and fermionic degrees of freedom ($`7/8=(1+3/4)/2`$). If $`\psi =\mathrm{𝟏𝟔}+\mathrm{𝟏}`$ and $`\overline{\psi }=\overline{\mathrm{𝟏𝟔}}+\mathrm{𝟏}`$ of SO(10), as in the example of the previous section, then $`g_\psi =136`$. Through the coupling of Eq. (10), $`\psi `$ and $`\overline{\psi }`$ will generate the finite temperature effective potential for $`\varphi `$ $$V_T\left(\varphi \right)=V_{\mathrm{ti}}+\left(\frac{g_\psi }{32}\left|\lambda _\psi \right|^2T^2-m_\varphi ^2\right)\left|\varphi \right|^2+\mathrm{\cdots }$$ (16) Thermal inflation will therefore end at the temperature $$T_\mathrm{c}=\frac{4\sqrt{2}m_\varphi }{\sqrt{g_\psi }\left|\lambda _\psi \right|}$$ (17) when $`\varphi `$ begins to roll away from $`\varphi =0`$. Shortly afterwards, the abundance of $`\psi `$ and $`\overline{\psi }`$ freezes out. Meanwhile, $`\varphi `$ will continue to roll towards its vacuum expectation value $`\left|\varphi \right|=\varphi _{\mathrm{vac}}`$. Once $`\varphi `$ acquires its vacuum expectation value $`\left|\varphi \right|=\varphi _{\mathrm{vac}}`$, the coupling of Eq. (10) will give $`\psi `$ and $`\overline{\psi }`$ masses $$M=\left|\lambda _\psi \right|\varphi _{\mathrm{vac}}$$ (18) The freeze-out abundance of $`\psi `$ and $`\overline{\psi }`$ can be estimated as follows. First, it is important to keep in mind that the temperature does not drop as $`\varphi `$ rolls from $`0`$ to $`\varphi _{\mathrm{vac}}`$. This is because the time-scale for the roll-over is $`m_\varphi ^{-1}`$, which is much smaller than the Hubble time $`H_{\mathrm{ti}}^{-1}\sim m_\varphi ^{-1}\varphi _{\mathrm{vac}}^{-1}M_{\mathrm{Pl}}\gg m_\varphi ^{-1}`$. Hence $`T=T_\mathrm{c}`$ throughout the roll-over. 
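The critical temperature of Eq. (17) is simply the point where the coefficient of $`\left|\varphi \right|^2`$ in Eq. (16) changes sign; a one-line numerical check (parameter values are illustrative, with g_ψ = 136 corresponding to the 16+1 of SO(10) example):

```python
import math

# Verify that the |phi|^2 coefficient of Eq. (16) vanishes at T = T_c, Eq. (17).
m_phi, g_psi, lam_psi = 100.0, 136.0, 1.0        # GeV; illustrative values

T_c = 4.0 * math.sqrt(2.0) * m_phi / (math.sqrt(g_psi) * lam_psi)  # Eq. (17)
coeff = (g_psi / 32.0) * lam_psi**2 * T_c**2 - m_phi**2            # Eq. (16)

print(f"T_c = {T_c:.1f} GeV, |phi|^2 coefficient at T_c = {coeff:.2e}")
```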
The freeze-out occurs as $`\psi `$ gains mass and begins to become non-relativistic when its thermal abundance is given by $$n(m_\psi )=g_\psi \left(\frac{m_\psi T_\mathrm{c}}{2\pi }\right)^{\frac{3}{2}}\mathrm{exp}\left(-\frac{m_\psi }{T_\mathrm{c}}\right)$$ (19) where $`m_\psi `$ denotes the $`\varphi `$ dependent mass of $`\psi `$, $`m_\psi =\left|\lambda _\psi \varphi \right|`$, which increases as $`\varphi `$ rolls down the potential. In other words, in contrast with the usual freeze-out calculation, it is $`m_\psi `$ that is changing with time rather than the temperature. The freeze-out abundance is determined by equating the annihilation rate $`\mathrm{\Gamma }_{\psi \overline{\psi }}`$ with the inverse-time-scale of the problem at hand, i.e. $`m_\varphi `$. $$\mathrm{\Gamma }_{\psi \overline{\psi }}\left(m_\psi \right)\equiv n\left(m_\psi \right)\sigma \left|v\right|\left(m_\psi \right)\sim m_\varphi $$ (20) Using the fact that $`\sigma \left|v\right|\sim 1/m_\psi ^2`$, and that $`m_\varphi \sim \left|\lambda _\psi \right|T_\mathrm{c}`$, it is not hard to see that the freeze-out occurs when $`m_\psi \sim T_\mathrm{c}`$, and that the freeze-out abundance is given by Eq. (15) with $`T=T_\mathrm{c}`$, suppressed by at most a factor $`\left|\lambda _\psi \right|`$. So for $`\left|\lambda _\psi \right|\sim 1`$, and to within an order of magnitude, there is no significant net annihilation of $`\psi `$ and $`\overline{\psi }`$ after the beginning of the roll-over, and the freeze-out occurs well before $`\varphi `$ reaches $`\varphi _{\mathrm{vac}}`$. After the freeze-out, $`\varphi `$ will continue to roll towards its vacuum expectation value, increasing the mass of $`\psi `$ to a large value. 
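The claim that freeze-out happens at m_ψ ∼ T_c can be made concrete with a rough numerical solve of Eq. (20). This is only a sketch: it assumes σ|v| = 1/m_ψ² exactly (the coefficient is unknown and of order one) and ignores the boson/fermion statistics factors:

```python
import math

# Solve n(m_psi) * sigma|v| = m_phi, Eq. (20), for the rising mass m_psi
# at fixed T = T_c.  Assumption: sigma|v| = 1/m_psi^2 with unit coefficient.
m_phi, g_psi, lam_psi = 100.0, 136.0, 1.0
T_c = 4.0 * math.sqrt(2.0) * m_phi / (math.sqrt(g_psi) * lam_psi)   # Eq. (17)

def gamma_ann(m):
    # Eq. (19) number density times sigma|v| ~ 1/m^2
    n = g_psi * (m * T_c / (2.0 * math.pi))**1.5 * math.exp(-m / T_c)
    return n / m**2

# gamma_ann decreases with m; bisect for gamma_ann(m) = m_phi on [T_c, 10 T_c]
lo, hi = T_c, 10.0 * T_c
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if gamma_ann(mid) > m_phi:
        lo = mid
    else:
        hi = mid
m_freeze = 0.5 * (lo + hi)
print(f"T_c = {T_c:.1f} GeV, freeze-out at m_psi / T_c = {m_freeze / T_c:.2f}")
```

The root comes out at m_ψ of order T_c, in line with the argument in the text.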
It can be checked that the annihilation rate by the time $`\varphi `$ reaches the minimum is negligible <sup>‡‡</sup><sup>‡‡</sup>‡‡If $`\left|\lambda _\psi \right|`$ is small, the fifth power of $`\left|\lambda _\psi \right|`$ in this formula could make $`\mathrm{\Gamma }_{\psi \overline{\psi }}`$ significant which means that there would be some annihilation of $`\psi `$ and $`\overline{\psi }`$. However, in this case the cosmological abundance will also be boosted up by a higher $`T_c`$ (see Eqs. (17) and (26)). $$\mathrm{\Gamma }_{\psi \overline{\psi }}\sim \frac{T_\mathrm{c}^3}{M^2}\sim \frac{m_\varphi ^3}{\left|\lambda _\psi \right|^5\varphi _{\mathrm{vac}}^2}\sim \left(\frac{\left|\lambda _\varphi \right|^{3/2}m_\varphi ^{1/2}}{\left|\lambda _\psi \right|^5M_{\mathrm{Pl}}^{1/2}}\right)H_{\mathrm{ti}}\ll H_{\mathrm{ti}}$$ (21) Subsequently, $`\varphi `$ will oscillate around $`\varphi _{\mathrm{vac}}`$. <sup>\**</sup><sup>\**</sup>\**Note that the backreaction of the finite density of $`\psi `$ particles on $`\varphi `$’s potential is negligible. One might worry that during the oscillation, a significant amount of annihilation or production of $`\psi `$ and $`\overline{\psi }`$ might occur if $`\varphi `$ returns to small values. The rapid build up of gradient energy will prevent $`\varphi `$ from returning to small values except in a few isolated places. In addition, $`\varphi `$ would eventually be prevented from returning to small values by the Hubble expansion. Strings with walls attached are also formed, which will likely disappear quickly . Their direct radiation into the heavy $`\psi `$ particles is heavily suppressed because $`M_\psi \gg m_\varphi `$. On the other hand, the $`\psi `$ particles are light in the cores of the strings, and could be created and trapped there. If a string loop carries a net $`\psi `$-charge, it will be released in the form of (heavy) $`\psi `$ particles when the loop annihilates. 
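A quick numerical illustration of Eq. (21) for the fiducial parameters (a sketch; order-one coefficients are dropped, and H_ti is estimated as sqrt(V_ti/3)/M_Pl):

```python
import math

# Late-time annihilation rate vs. Hubble rate during thermal inflation.
m_phi, lam_phi, lam_psi, g_psi = 100.0, 1.0, 1.0, 136.0   # illustrative
M_pl = 2.4e18                                             # GeV

phi_vac = math.sqrt(m_phi * M_pl / (math.sqrt(3.0) * lam_phi))      # Eq. (7)
T_c = 4.0 * math.sqrt(2.0) * m_phi / (math.sqrt(g_psi) * lam_psi)   # Eq. (17)
M = lam_psi * phi_vac                                               # Eq. (18)

Gamma_ann = T_c**3 / M**2                    # ~ n(T_c) * sigma|v| at m_psi = M
V_ti = (2.0 / 3.0) * m_phi**2 * phi_vac**2   # Eq. (8), A -> 0
H_ti = math.sqrt(V_ti / 3.0) / M_pl          # Hubble rate during thermal inflation

print(f"Gamma_ann / H_ti = {Gamma_ann / H_ti:.1e}")   # << 1: no late annihilation
```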
If we assume each string produces of the order of one $`\psi `$ particle, our rough estimate of the $`\psi `$ abundance should still be valid. The energy density in $`\psi `$ and $`\overline{\psi }`$ will then scale with that of the oscillating flaton until the flaton finally decays, leaving a radiation dominated universe at a temperature $`T_{\mathrm{dec}}`$. The energy density of $`\psi `$ and $`\overline{\psi }`$ will then scale with the entropy density of the universe, $`s`$. The current value of the entropy density is $$s_0=2.2\times 10^{-38}\text{GeV}^3$$ (22) Finally, we wish to compare the current energy density of $`\psi `$ and $`\overline{\psi }`$ with the critical density $$3H_0^2=3\times 10^{-47}\text{GeV}^4$$ (23) For $`\psi `$ and $`\overline{\psi }`$ to be a viable dark matter candidate we require $$\mathrm{\Omega }_{\psi \overline{\psi }}\equiv \frac{\rho _{\psi \overline{\psi }}}{3H_0^2}\simeq 0.3$$ (24) Putting everything together we get $$\mathrm{\Omega }_{\psi \overline{\psi }}=n\left(T_\mathrm{c}\right)M\left(\frac{\rho _{\mathrm{dec}}}{V_{\mathrm{ti}}}\right)\left(\frac{s_0}{s_{\mathrm{dec}}}\right)\left(\frac{1}{3H_0^2}\right)$$ (25) where $`\rho _{\mathrm{dec}}`$ is the energy density of the universe at the end of the flaton decay. 
Therefore, using $`\rho _{\mathrm{dec}}=\frac{3}{4}T_{\mathrm{dec}}s_{\mathrm{dec}}`$, $$\mathrm{\Omega }_{\psi \overline{\psi }}=\left|\lambda _\varphi \right|\left|\lambda _\psi \right|^2\left(\frac{g_\psi }{100}\right)^{\frac{1}{2}}\left(\frac{m_\varphi }{100\text{GeV}}\right)^{\frac{3}{2}}\left(\frac{5\times 10^4\varphi _{\mathrm{vac}}T_{\mathrm{dec}}}{g_\psi m_\varphi ^{3/2}M_{\mathrm{Pl}}^{1/2}}\right)$$ $$\times \left(\frac{8\pi ^2n\left(T_\mathrm{c}\right)}{7\zeta (3)g_\psi T_\mathrm{c}^3}\right)\left(\frac{\sqrt{g_\psi }\left|\lambda _\psi \right|T_\mathrm{c}}{4\sqrt{2}m_\varphi }\right)^3\left(\frac{M}{\left|\lambda _\psi \right|\varphi _{\mathrm{vac}}}\right)\left(\frac{2m_\varphi ^2\varphi _{\mathrm{vac}}^2}{3V_{\mathrm{ti}}}\right)\left(\frac{m_\varphi M_{\mathrm{Pl}}}{\sqrt{3}\left|\lambda _\varphi \right|\varphi _{\mathrm{vac}}^2}\right)$$ (26) where all factors in brackets are of order $`1`$. We have displayed explicitly the assumed relations between the various quantities, e.g. $`M=|\lambda _\psi |\varphi _{\mathrm{vac}}`$, etc. ## III Observational Constraints and Detection The above simple model leaves open the question of what kind of interaction $`\psi `$ has with ordinary matter. The same is true of other production mechanisms of superheavy dark matter . Let us go through the different possibilities one by one. Electromagnetically charged. These have been referred to as CHAMPs in the literature . At late times, they primarily take the form of $`p^+\psi ^-`$ (hydrogen with a heavy “electron” which has very low cross-section with other atoms), $`\psi ^+e^-`$ (heavy hydrogen) or $`\psi ^-\mathrm{He}^{++}e^-`$ ($`\psi ^-`$ bound to the helium nucleus to make another kind of heavy hydrogen). Various constraints exist on such particles, ranging from the absence of heavy-hydrogen-like atoms in water to nondetection in $`\gamma `$-ray and cosmic-ray detectors . 
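The whole chain leading to Eq. (25) can be evaluated end-to-end. This is a sketch: all inputs are the fiducial values used above, the negative exponents in s_0 and 3H_0² are assumed restored, and order-one factors are kept only where the text gives them explicitly, so the output should only be trusted to within an order of magnitude:

```python
import math

# End-to-end estimate of Omega_{psi psibar} from Eq. (25).
m_phi, lam_phi, lam_psi, g_psi = 100.0, 1.0, 1.0, 100.0   # fiducial values
M_pl = 2.4e18                                             # GeV
s_0 = 2.2e-38                                             # GeV^3, Eq. (22)
rho_crit = 3.0e-47                                        # 3 H_0^2, GeV^4, Eq. (23)

phi_vac = math.sqrt(m_phi * M_pl / (math.sqrt(3.0) * lam_phi))     # Eq. (7), A -> 0
V_ti = (2.0 / 3.0) * m_phi**2 * phi_vac**2                         # Eq. (8), A -> 0
T_c = 4.0 * math.sqrt(2.0) * m_phi / (math.sqrt(g_psi) * lam_psi)  # Eq. (17)
n_Tc = 7.0 * 1.202 * g_psi * T_c**3 / (8.0 * math.pi**2)           # Eq. (15)
M = lam_psi * phi_vac                                              # Eq. (18)
T_dec = 2e-5 * g_psi * m_phi**1.5 * math.sqrt(M_pl) / phi_vac      # Eq. (13)

# rho_dec = (3/4) T_dec s_dec, so (rho_dec/V_ti)(s_0/s_dec) = (3/4) T_dec s_0 / V_ti
Omega = n_Tc * M * 0.75 * T_dec * s_0 / (V_ti * rho_crit)
print(f"Omega_psi-psibar = {Omega:.1f}")   # comes out O(1), as claimed
```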
By far, the strongest constraint appears to come from the existence of old neutron stars , where only $`M\gtrsim 10^{16}\mathrm{GeV}`$ is allowed. Otherwise, a sufficient net number of $`\psi ^+`$ particles collects in the neutron star, forms a black hole in the center and eats up the star on a short time scale; this is in part because the hydrogen-heavy-hydrogen scattering cross-section is high, given by the square of the Bohr radius in the low velocity limit $`\sigma \sim 10^{-17}\mathrm{cm}^2`$. However, if the abundance of $`\psi ^+`$ and that of $`\psi ^-`$ bound to helium are the same in the halo, the constraint is weakened to $`M\gtrsim 10^{10}\mathrm{GeV}`$. The conventional bound of $`M\lesssim 10^5\mathrm{GeV}`$ would then be fatal to the existence of such particles. The kind of production mechanism like the one proposed here, or elsewhere, which evades the conventional bound, could resurrect the intriguing idea that the dark matter can be charged. But as we can see, significant astrophysical constraints already exist. It should be noted that the near Planck mass relic that will be discussed in §IV satisfies even the demanding bound of $`M\gtrsim 10^{16}\mathrm{GeV}`$. The economic importance of such stable massive electromagnetically charged particles cannot be overestimated. For example, they could be used to catalyze nuclear fusion . Strongly charged. These have been referred to as SIMPs in the literature . Significant bounds on their masses, if they have significant cosmological abundances, come from nucleosynthesis as well as the absence of anomalously heavy isotopes of familiar nuclei . A systematic study of constraints from direct detection and the existence of old neutron stars and the Earth was made in Ref. . 
Assuming there is no $`\psi `$-$`\overline{\psi }`$ asymmetry, the neutron-star argument and underground plus balloon experiments provide competitive bounds: only $`M\gtrsim 10^8`$–$`10^{10}\mathrm{GeV}`$ is allowed for a $`\psi `$-proton cross-section of $`\sigma \sim 10^{-30}`$–$`10^{-25}\mathrm{cm}^2`$. As in the case of electromagnetically charged dark matter, the possibility of producing them without overclosing the universe gives such dark matter candidates a new life. Weakly charged. Naturally, no significant constraints exist if $`\psi `$ has only weak-scale interactions like the neutralino (i.e. $`\sigma \sim 10^{-44}(m_n/\mathrm{GeV})^4\mathrm{cm}^2`$ in the large $`M`$ limit, where $`m_n`$ is the mass of the relevant nucleon). Direct detection appears rather difficult simply because the halo number density scales as $`M^{-1}`$, and the neutralino with mass $`\sim 100\mathrm{GeV}`$ is already difficult to detect. Note that a large mass does not increase significantly the nuclear recoil: $`\mathrm{\Delta }E\sim M^2m_nv^2/(M+m_n)^2`$ where $`v\sim 200\mathrm{km}\mathrm{s}^{-1}`$ is the average halo velocity of these particles; in the large $`M`$ limit, $`\mathrm{\Delta }E`$ is asymptotically $`M`$-independent. Indirect detection might seem even more hopeless. Not only does the halo number density drop by a factor of $`M`$, the annihilation (which gives rise to neutrinos) rate is suppressed by $`M^{-2}`$ according to the unitarity bound. However, three opposing factors help us here. First, on a sufficiently long time scale, the neutrino flux from $`\psi `$-$`\overline{\psi }`$ annihilation in the core of the Sun or the Earth is determined not by the annihilation rate, but by the capture rate, which depends on the scattering cross-section with nucleons and is not heavily suppressed. Second, indirect detection works by observing muons that result from the interaction of the neutrinos with the rocks of the Earth. 
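The claimed M-independence of the nuclear recoil is immediate from the quoted formula; a short numerical illustration (values are illustrative, and ΔE is evaluated up to an order-one kinematic factor):

```python
# Recoil Delta-E ~ M^2 * m_n * v^2 / (M + m_n)^2: the recoil saturates at
# m_n * v^2 for M >> m_n, so a superheavy psi deposits no more energy per
# scatter than a ~100 GeV WIMP does.
m_n = 0.94            # GeV, nucleon mass
v = 200.0 / 3.0e5     # halo velocity ~200 km/s, in units of c

def recoil(M):
    return M**2 * m_n * v**2 / (M + m_n)**2   # GeV

dE_wimp = recoil(100.0)    # neutralino-like mass
dE_heavy = recoil(1e10)    # superheavy dark matter
print(f"recoil ratio (superheavy / WIMP) = {dE_heavy / dE_wimp:.3f}")  # ~1
```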
The cross-section for producing muons and the range of the muons both scale up with energy, and hence the mass of $`\psi `$. A detailed calculation taking into account these effects as well as other relevant ones will be presented in a forthcoming paper. Completely neutral. $`\psi `$ could be a singlet under SU(3)$`\times `$SU(2)$`\times `$U(1). It is of course virtually impossible to detect such particles, except by their gravitational effects. Lastly, superheavy dark matter has been postulated as the origin of ultra-high-energy cosmic-ray events at energies $`\gtrsim 10^{10}\mathrm{GeV}`$, above the Greisen-Zatsepin-Kuzmin cut-off . The parameters of the model presented in the last section are sufficiently flexible to allow a mass of $`M\sim 10^{11}\mathrm{GeV}`$ to explain such events. The $`\psi `$ particles cannot by themselves be the primaries because of the large mass, even if they have hadronic interactions . The simplest way is to have them decay into hadrons, e.g. protons, which reach the Earth’s atmosphere. But then, one has to invoke special reasons to explain why they are not stable but sufficiently long-lived. ## IV Discussion We have shown that a simple and well-motivated model of thermal inflation naturally produces dark matter particles $`\psi `$ and $`\overline{\psi }`$ of mass $`M\sim 10^{10}\mathrm{GeV}`$ with a cosmological abundance in the correct range. As our mechanism is implemented by thermal inflation, which solves the moduli problem by late entropy production, it is robust against such dilution. The same general mechanism could also be applied to what one might call ‘moduli thermal inflation’ to produce near Planck mass particles at low energy scales. Moduli thermal inflation is a limit of thermal inflation in which the flaton is replaced by a modulus, so that roughly speaking $`\left|\lambda _\varphi \right|\sim m_\varphi /M_{\mathrm{Pl}}`$ in Eq. (6). $`\varphi _{\mathrm{vac}}`$ will then be of the order of the Planck scale. 
In the context of string theory, it is then very reasonable to assume that the vacuum expectation value of the modulus, $`\varphi =\varphi _{\mathrm{vac}}`$, corresponds to another ‘origin’ in field space where new fields become light; for example one could have superpotential couplings of the form $`W=\lambda _\chi \left(\varphi -\varphi _{\mathrm{vac}}\right)\overline{\chi }\chi `$. Such a ‘coupled’ modulus would appear like an ordinary scalar field (e.g. a squark or slepton field) in the true vacuum. <sup>\*†</sup><sup>\*†</sup>\*†It would be an excellent candidate for an Affleck-Dine field . The decay temperature would then no longer scale as in Eq. (14) but instead could be as high as $`V_{\mathrm{ti}}^{1/4}`$. Putting these modifications into Eq. (26) would also give us a value of $`\mathrm{\Omega }_{\overline{\psi }\psi }`$ in the neighborhood of 1. However, another consequence of having $`\varphi _{\mathrm{vac}}\sim M_{\mathrm{Pl}}`$ is that one could get a significant number of $`e`$-folds of non-slow-roll inflation as $`\varphi `$ rolls from $`\varphi \approx 0`$ to $`\varphi \approx \varphi _{\mathrm{vac}}`$. To avoid this inflation diluting the $`\psi `$ particles too much, one would require $`m_\varphi \gtrsim 10V_{\mathrm{ti}}^{1/2}/M_{\mathrm{Pl}}`$ and so $`\varphi _{\mathrm{vac}}\lesssim M_{\mathrm{Pl}}/10`$. This would limit the final mass of the $`\psi `$ particles to be $`M\lesssim \text{few}\times 10^{17}\mathrm{GeV}`$. However, this is still above even the stringent limit on electromagnetically charged dark matter obtained in Ref. . Note that because moduli thermal inflation occurs at too high an energy scale to solve the moduli problem, this scenario would only be viable if there were no moduli problem <sup>\*‡</sup><sup>\*‡</sup>\*‡ For example, because all the moduli are of this coupled type, rather than the decoupled type that give rise to the moduli problem. How one fits the dilaton into such a picture is unclear though. 
because otherwise the $`\psi `$ particles would be diluted by another epoch of thermal inflation, or some other late entropy production, which would be needed to dilute the decoupled moduli produced at the end of the moduli thermal inflation. A related scenario could emerge from some of the more plausible models of inflation . Here $`\psi `$ or $`\overline{\psi }`$ is the inflaton, which holds $`\varphi `$ at zero by the hybrid inflation mechanism rather than thermal effects. One could then get dark matter in the form of charged, near Planck mass, inflatons! ### Acknowledgements We thank Rocky Kolb, Dan Chung, Josh Frieman, and Andrew Sornborger for useful discussions. LH thanks the German-American Young Scholars’ Institute on Astroparticle Physics and the Aspen Center for Physics for hospitality. This work was supported by the DOE and the NASA grant NAG 5-7092 at Fermilab.
# Feedback-control of quantum systems using continuous state-estimation ## I Introduction The continuous measurement of quantum systems has been a topic of considerable activity in recent years , and is particularly relevant at this time because experimental technology is now at the point where individual quantum systems can be monitored continuously . With these developments it should be possible in the near future to control quantum systems in real time by using the results of the measurement in the process of continuous feedback. A theory describing the dynamics that results from feeding back the measurement signal (usually a photocurrent) at each instant to control the Hamiltonian of a quantum system has been developed by Wiseman and Milburn . They have shown how to derive the resulting Stochastic Master Equation (SME) for the conditioned evolution, and the corresponding unconditioned master equation, both of which are Markovian in the limit of instantaneous feedback. This kind of feedback has already been used to reduce laser noise below the shot-noise level . However, there are many ways in which the measurement signal may be fed back to affect the system. In general, at a given time, any integral of the measurement record up until that time may be used to alter the system Hamiltonian and affect the dynamics. This leads, however, to a non-Markovian master equation, and the resulting dynamics cannot therefore be easily investigated, and, more importantly, understood. Nevertheless, as we shall examine in this paper, certain integrals of the measurement record provide specific information, such as the best estimates of dynamical variables. 
These may be fed back to alter the system evolution in a desired way, and while the unconditioned evolution of the system is no longer Markovian, simple equations may be derived for the selective evolution of system variables, and correspondingly simple non-linear (but Markovian) SME’s describe the evolution of the system in the limit of instantaneous feedback. This approach to quantum feedback has close analogies to that used in classical control theory, in particular that control is broken down into a state-estimation step and a feedback step. Because of this, classical results regarding the design of feedback loops can be applied, opening up new possibilities for controlling quantum systems. In this paper we consider a single quantum degree of freedom, which could be, for example, a trapped atom , ion or a moving mirror forming one end of an optical cavity , subjected to continuous position observation. Naturally the continuous observation of position actually corresponds to a continuous joint measurement of both position and momentum, because momentum information is implicit in the observed change in the position over time. We show how the best estimate of both the position and momentum at each point in time may be obtained from an integral of the measurement signal when the initial state of the system is Gaussian. We examine the dynamics which results from using the best estimates of the system variables in a feedback loop, and in particular investigate cooling and confinement using this mechanism. We also apply the Wiseman-Milburn direct feedback theory to investigate the implementation of cooling and confinement by feeding back the measurement signal at each time, a technique which has been considered by Dunningham et al. and Mancini et al. , and contrast this with the method involving estimation. In the next section we review briefly how the SME describing a continuous measurement of position results from real physical measurement schemes. 
In section III we review the solution of this master equation, which may be obtained in a simple manner for initially Gaussian states, and give the integrals that are required to measure both position and momentum (i.e. to obtain the best estimates of position and momentum at each time). We then examine the dynamical equations which result from using the best estimates for the purposes of feedback, and present the classical control theory which may be applied to this quantum feedback process due to an equivalence with classical estimation theory. In section IV we consider the problem of cooling and confinement using feedback. We apply both feedback by estimation and direct feedback to this problem. Section V concludes. ## II Continuous position measurements ### A Two physical position measurement systems A continuous measurement of the position of a macroscopic object may be obtained by observing continuously the phase of a light beam reflected from it. If we allow the object in question to form one end-mirror in an optical cavity, then in the limit in which one of the cavity mirrors is very lossy (the bad-cavity limit), the phase of the light output from the cavity provides a continuous measurement of the position of the moving mirror, since the light spends little time in the cavity . This is a simple way to treat position measurement by light reflection, and what is more, the position of a single atom may also be monitored continuously in the same manner. To monitor the position of a single atom, the atom is allowed to interact off-resonantly with the optical cavity mode, and this interaction is such that the atom generates a phase shift of the output light in a manner similar to the moving end-mirror. We now examine briefly these two situations, and derive the SME describing the measured systems. 
The Hamiltonian describing an optical cavity in which one of the mirrors is free to move along the axis of the cavity is $$H=H_m-\mathrm{}g_ma^{\dagger }ax+H_d$$ (1) where $`a`$ is the annihilation operator for the cavity mode, $`g_m=\omega _0/L`$ is the coupling constant describing the interaction between the cavity mode and the moving mirror (in which $`\omega _0`$ is the mode frequency, and $`L`$ is the length of the cavity), $`H_m`$ is the Hamiltonian for the mechanical motion of the mirror, and $`H_d`$ describes the coherent driving of the cavity mode. Note that we have moved into the interaction picture with respect to the free Hamiltonian of the cavity mode. In deriving this Hamiltonian it is assumed that the cavity mode follows the motion of the mirror adiabatically, and in particular that the change in the cavity length due to the motion of the mirror is small compared to the cavity length itself . That is, $`|x|\ll L`$. One of the end-mirrors is chosen to be lossy so as to provide output coupling, and the cavity is driven through this mirror. The part of the Hamiltonian which describes coherent driving of the cavity is given by $$H_d=i\mathrm{}E(a-a^{\dagger }),$$ (2) where $`E`$ is related to the laser power $`P`$ by $`E=\sqrt{\gamma P/(\mathrm{}\omega _0)}`$, and $`\gamma `$ is the decay rate of the cavity due to the output coupling mirror . We see that the Hamiltonian describing the interaction between the mirror and the cavity field is of the form $`a^{\dagger }ax`$. This is exactly what we need in order to obtain a continuous position measurement by monitoring the phase of the output light. This is because $`a^{\dagger }a`$ is the generator of a phase shift for the light, and therefore a Hamiltonian of this form produces a phase shift proportional to the position of the mirror, which is exactly what is required. 
The Hamiltonian describing the off-resonant interaction between a two-level atom, and an optical cavity in which it is trapped, is $$H=H_a-\mathrm{}\frac{g_0^2}{\mathrm{\Delta }}a^{\dagger }a\mathrm{cos}^2(k_0x)+H_d$$ (3) where $`k_0=\omega _0/c`$ is the wavenumber of the cavity mode, $`\mathrm{\Delta }`$ is the detuning between the cavity mode and the two level atom ($`\mathrm{\Delta }=\omega _a-\omega _0`$, where $`\omega _a`$ is the frequency of the atomic transition), $`x`$ is the atomic position operator, $`g_0`$ is the cavity-QED coupling constant giving the strength of the interaction between the cavity mode and the atom and $`H_a`$ is the Hamiltonian for the mechanical motion of the atom. We will assume the atom to be harmonically trapped, which might be achieved by using a second light field , or by ion trapping . To obtain a continuous position measurement by monitoring the phase of the output light, we require the interaction of the atomic motion and the cavity mode to be of the same form as that for the mirror. To achieve this we need simply ensure that the atom is trapped in a region small compared to the wavelength of the light, about a region halfway between a node and an anti-node so that we may approximate $`\mathrm{cos}^2(k_0x)=\mathrm{cos}^2(k_0x_0+k_0x^{\prime })\approx 1/2+k_0x^{\prime }`$. Renaming $`x^{\prime }`$ as $`x`$ (merely a shift in which the resulting extra term in the Hamiltonian is unimportant), we obtain the correct interaction Hamiltonian. To realize a position measurement the phase quadrature of the output light must be monitored, and we choose homodyne detection since it provides the simplest treatment. 
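The quality of the linearization of cos²(k₀x) about the point halfway between a node and an antinode can be checked numerically. This is only an illustration: the 852 nm wavelength and the ±1 nm excursion are assumed values, not taken from the text, and the expansion point k₀x₀ = 3π/4 is one of the points where cos² = 1/2 with slope +k₀:

```python
import numpy as np

# Linearization cos^2(k0*x) ~ 1/2 + k0*x' about k0*x0 = 3*pi/4, for an
# atom confined to a region much smaller than the optical wavelength.
k0 = 2.0 * np.pi / 852e-9            # 1/m; 852 nm is an illustrative wavelength
x0 = (3.0 * np.pi / 4.0) / k0        # expansion point
xp = np.linspace(-1e-9, 1e-9, 201)   # +-1 nm excursion (<< wavelength)

exact = np.cos(k0 * (x0 + xp))**2
linear = 0.5 + k0 * xp
err = np.max(np.abs(exact - linear))
print(f"max linearization error over +-1 nm: {err:.2e}")
```

The quadratic term vanishes at this expansion point, so the error is cubic in k₀x′ and tiny for sub-wavelength confinement.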
Performing homodyne detection of the phase quadrature with a detector efficiency $`\eta `$, the SME describing the evolution of the system conditioned on the continuous measurement record is (see also ), $$d\rho _\text{c}=\frac{i}{\mathrm{}}[H,\rho _\text{c}]dt+\gamma 𝒟[a]\rho _\text{c}dt+\sqrt{\eta \gamma }[ia]\rho _\text{c}dW,$$ (4) where $`\rho _\text{c}`$ is the system density matrix conditioned on the measurement record, $`dW`$ is the Wiener increment, satisfying the Ito calculus relation $`(dW)^2=dt`$, and the superoperators $`𝒟`$ and $``$ are given by $`2𝒟[c]\rho _\text{c}`$ $`=`$ $`2c\rho _\text{c}c^{}c^{}c\rho _\text{c}\rho _\text{c}c^{}c,`$ (5) $`[c]\rho _\text{c}`$ $`=`$ $`c\rho _\text{c}+\rho _\text{c}c^{}\text{Tr}[c\rho _\text{c}+\rho _\text{c}c^{}]\rho _\text{c},`$ (6) for an arbitrary operator $`c`$. ### B Adiabatic elimination of cavity dynamics We are interested only in the dynamics of the atom (or equivalently the mirror), and we are also interested purely in the bad-cavity limit (large $`\gamma `$) which corresponds to good position measurement . In this limit, due directly to the high cavity damping rate, the cavity mode is slaved to the atom dynamics, and can therefore be adiabatically eliminated to obtain a SME purely for the atom. To do this we proceed by following essentially the treatment in reference . Noting first that in the absence of the interaction with the atom, the steady state of the cavity mode is the coherent state $`|\alpha =2E/\gamma `$, we transform the system using $$\rho _\text{c}^{}=D(\alpha )\rho _\text{c}D^{}(\alpha )$$ (7) where $`D(\alpha )`$ is the displacement operator, such that $`D(\alpha )|0=|\alpha `$ . 
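The conditioned evolution generated by an SME of the form of Eq. (4) can be checked numerically. Below is a minimal Euler–Maruyama sketch for the driven, damped cavity mode alone (mechanical coupling omitted), in a truncated Fock basis; the parameter values and the sign convention inside the measurement superoperator are assumptions made for the sketch.

```python
import numpy as np

def D(c, rho):
    """D[c]rho = c rho c^dag - (c^dag c rho + rho c^dag c)/2."""
    cd = c.conj().T
    return c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c)

def Hsup(c, rho):
    """H[c]rho = c rho + rho c^dag - Tr[c rho + rho c^dag] rho."""
    cd = c.conj().T
    t = c @ rho + rho @ cd
    return t - np.trace(t).real * rho

rng = np.random.default_rng(0)
N = 10                                    # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
gamma, eta, E = 10.0, 1.0, 2.0            # assumed cavity decay, efficiency, drive
H = 1j * E * (a - a.conj().T)             # driving Hamiltonian, hbar = 1

rho = np.zeros((N, N), dtype=complex)
rho[0, 0] = 1.0                           # start in the vacuum
dt = 1e-4
for _ in range(2000):
    dW = rng.normal(0.0, np.sqrt(dt))     # Wiener increment, (dW)^2 = dt
    drho = (-1j * (H @ rho - rho @ H) + gamma * D(a, rho)) * dt
    drho += np.sqrt(eta * gamma) * Hsup(-1j * a, rho) * dW
    rho = rho + drho
    rho = 0.5 * (rho + rho.conj().T)      # re-symmetrize against roundoff

n = np.trace(a.conj().T @ a @ rho).real   # photon number builds toward |2E/gamma|^2
print(np.trace(rho).real, n)
```

Both superoperators are trace-free by construction, so the trace stays at unity up to roundoff even in the presence of the stochastic term.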
In this ‘displacement picture’, the steady-state of the cavity is now close to the vacuum, with the SME being $`d\rho _\text{c}^{}`$ $`=`$ $`{\displaystyle \frac{i}{\mathrm{}}}[H_\text{m}\mathrm{}g(a^{}a+\alpha (a+a^{})+|\alpha |^2)x,\rho _\text{c}^{}]dt`$ (9) $`+\gamma 𝒟[a]\rho _\text{c}^{}dt+\sqrt{\eta \gamma }[ia]\rho _\text{c}^{}dW.`$ The regime required for adiabatic elimination is $$\left|\frac{H_m}{\gamma }\right|\frac{g(|\alpha |^2+1)|x|}{\gamma }=ϵ1,$$ (10) where $`ϵ`$ will be our small parameter governing the approximation. Here $`g=g_\text{m}`$ for the case of the mirror, or $`g=k_0g_0^2/\mathrm{\Delta }`$ for the atom. Relation (10) also implies that $`(g\alpha |x|/\gamma )<ϵ`$, so this quantity also serves as the small parameter. To proceed we assume that the elements of the cavity mode density matrix in the number basis, $`\rho _\text{c}^{nm}`$, scale with the small parameter $`ϵ`$ as $`\rho _\text{c}^{nm}ϵ^{\left(n+m\right)}`$, and we will show that this is consistent with the regime (10). Under this assumption, the state of the cavity+(atom/mirror) may then be expanded up to second order in $`ϵ`$ as $`\rho _\text{c}^{}`$ $`=`$ $`\rho _{\text{00}}^\text{a}|00|+(\rho _{\text{10}}^\text{a}|10|+\text{H.c.})`$ (12) $`+\rho _{\text{11}}^\text{a}|11|+(\rho _{\text{20}}^\text{a}|20|+\text{H.c.})+O(ϵ^3),`$ so that $$\rho _\text{a}=\text{Tr}_\text{c}[\rho _\text{c}^{}]=\rho _{\text{00}}^\text{a}+\rho _{\text{11}}^\text{a}+O(ϵ^3).$$ (13) where $`\text{Tr}_\text{c}`$ represents a trace over the cavity mode. 
The equations of motion for the density matrix elements $`\rho _{\text{ij}}^\text{a}`$ may then be obtained from the master equation for $`\rho _\text{c}^{}`$, giving $`d\rho _{\text{00}}^\text{a}`$ $`=`$ $`_\text{m}^0\rho _{\text{00}}^\text{a}dt+ig\alpha (x\rho _{\text{10}}^\text{a}\rho _{\text{10}}^\text{a}x)dt+\gamma \rho _{\text{11}}^\text{a}dt`$ (16) $`i\sqrt{\eta \gamma }(\rho _{\text{10}}^\text{a}\rho _{\text{10}}^\text{a}\text{Tr}[\rho _{\text{10}}^\text{a}\rho _{\text{10}}^\text{a}]\rho _{\text{00}}^\text{a})dW`$ $`+O(ϵ^3),`$ $`d\rho _{\text{10}}^\text{a}`$ $`=`$ $`_\text{m}^0\rho _{\text{10}}^\text{a}dt\frac{\gamma }{2}\rho _{\text{10}}^\text{a}dt`$ (20) $`+ig[x(\alpha \rho _{\text{00}}^\text{a}+\rho _{\text{10}}^\text{a}+\sqrt{2}\alpha \rho _{\text{20}}^\text{a})\alpha \rho _{\text{11}}^\text{a}x]dt`$ $`i\sqrt{\eta \gamma }(\sqrt{2}\rho _{\text{20}}^\text{a}\rho _{\text{11}}^\text{a}\text{Tr}[\rho _{\text{10}}^\text{a}\rho _{\text{10}}^\text{a}]\rho _{\text{10}}^\text{a})dW,`$ $`+O(ϵ^3)`$ $`d\rho _{\text{11}}^\text{a}`$ $`=`$ $`_\text{m}^1\rho _{\text{11}}^\text{a}dt+ig\alpha (x\rho _{\text{10}}^\text{a}\rho _{\text{10}}^\text{a}x)dt\gamma \rho _{\text{11}}^\text{a}dt`$ (22) $`+i\sqrt{\eta \gamma }\text{Tr}[\rho _{\text{10}}^\text{a}\rho _{\text{10}}^\text{a}]\rho _{\text{11}}^\text{a}dW+O(ϵ^3),`$ $`d\rho _{\text{20}}^\text{a}`$ $`=`$ $`_\text{m}^0\rho _{\text{20}}^\text{a}dt\gamma \rho _{\text{20}}^\text{a}dt+igx(2\rho _{\text{20}}^\text{a}+\sqrt{2}\alpha \rho _{\text{10}}^\text{a})dt`$ (24) $`+i\sqrt{\eta \gamma }\text{Tr}[\rho _{\text{10}}^\text{a}\rho _{\text{10}}^\text{a}]\rho _{\text{20}}^\text{a}dW+O(ϵ^3),`$ where we have used $$\text{Tr}[i(aa^{})\rho _\text{c}^{}]=\text{Tr}[i(\rho _{\text{10}}^\text{a}\rho _{\text{10}}^\text{a})]+O(ϵ^3).$$ (25) and defined $$_\text{m}^l\rho _{\text{ij}}^\text{a}dt\frac{i}{\mathrm{}}[H_\text{m}\mathrm{}g(|\alpha |^2+l)x,\rho _{\text{ij}}^\text{a}]dt.$$ (26) In order to write a SME for the motion of the atom we 
need to find a closed form equation for $`\rho _\text{a}\simeq \rho _{\text{00}}^\text{a}+\rho _{\text{11}}^\text{a}`$, but the differential equations for the diagonal elements of the cavity mode density operator involve the off-diagonal elements. The adiabatic elimination exploits the difference in time-scales between the cavity and the motional dynamics by assuming that the heavily damped off-diagonal elements have reached steady state values determined by the motional state. This will allow the off-diagonal elements to be written in terms of the diagonal elements, which will result in the desired SME. This is a little more complicated than the usual adiabatic elimination, because the off-diagonal elements are not merely strongly damped, but also contain a stochastic driving term. To adiabatically eliminate $`\rho _{\text{20}}^\text{a}`$, we drop terms proportional to $`\rho _{\text{20}}^\text{a}`$ which are insignificant compared to the damping term, and obtain to leading order $`d\rho _{\text{20}}^\text{a}`$ $`=`$ $`-\gamma \rho _{\text{20}}^\text{a}dt+i\sqrt{2}g\alpha x\rho _{\text{10}}^\text{a}dt`$ (28) $`+i\sqrt{\eta \gamma }\text{Tr}[\rho _{\text{10}}^\text{a}\rho _{\text{10}}^\text{a}]\rho _{\text{20}}^\text{a}dW.`$ For the purposes of showing that the contribution from the stochastic driving is insignificant (i.e., that it is not of leading order in $`ϵ`$), we assume now that $`\rho _{\text{10}}^\text{a}`$ is constant. Since $`\rho _{\text{10}}^\text{a}`$ is actually stochastically driven as well, this is not exactly correct; in the steady state both off-diagonal elements will be fluctuating. However, the resulting analysis demonstrates that these fluctuations are higher order in $`ϵ`$, so that setting $`\rho _{\text{10}}^\text{a}`$ to its mean value is self-consistent. 
With this assumption Eq.(28) describes just linear multiplicative white noise , with the steady-state solution $`\rho _{\text{20}}^\text{a}`$ $`=`$ $`i\sqrt{2}\left({\displaystyle \frac{g\alpha }{\gamma }}\right)x\rho _{\text{10}}^\text{a},`$ (29) $`\sigma _{\text{20}}^\text{a}`$ $`=`$ $`\sqrt{\frac{\eta }{2}}|i\text{Tr}[\rho _{\text{10}}^\text{a}\rho _{\text{10}}^\text{a}]\rho _{\text{20}}^\text{a}|,`$ (30) where $`\sigma _{\text{20}}^\text{a}`$ is the standard deviation of $`\rho _{\text{20}}^\text{a}`$ in the steady-state. Since $`\rho _{\text{10}}^\text{a}`$ is first order in $`ϵ`$, the fluctuations about the steady-state are of third order in $`ϵ`$, while the average value is of second order. We can therefore ignore the fluctuations to leading order in $`ϵ`$, and write the result of the adiabatic elimination as $$\rho _{\text{20}}^\text{a}=i\sqrt{2}\left(\frac{g\alpha }{\gamma }\right)x\rho _{\text{10}}^\text{a}+O(ϵ^3).$$ (31) Proceeding in the same way to adiabatically eliminate $`\rho _{\text{10}}^\text{a}`$, we find again that the fluctuations may be ignored to leading order, and obtain $$\rho _{\text{10}}^\text{a}=2i\left(\frac{g\alpha }{\gamma }\right)[x\rho _{\text{00}}^\text{a}-\rho _{\text{11}}^\text{a}x]+O(ϵ^2).$$ (32) Note that we have retained the term proportional to $`\rho _{\text{11}}^\text{a}`$, which is only third order in $`ϵ`$. However, retaining this term is useful as this allows us to most easily recover the SME for the mirror. This result (Eq. (32)) confirms that $`\rho _{\text{10}}^\text{a}`$ is indeed first order in $`ϵ`$. The fact that $`\rho _{\text{20}}^\text{a}`$ is second order then follows immediately from Eq. (31), and that $`\rho _{\text{11}}^\text{a}`$ is second order follows from the fact that $`\rho _{\text{10}}^\text{a}`$ is first order. Thus the assertion made above regarding the scaling of the elements of $`\rho _\text{c}`$ is seen to be consistent with the regime (10). 
To obtain the stochastic master equation we now substitute Eqs. (31) and (32) into Eqs. (16) and (22) and combine them to obtain an equation for $`\rho _\text{a}\rho _{\text{00}}^\text{a}+\rho _{\text{11}}^\text{a}`$. To leading order in $`ϵ`$ the resulting stochastic master equation for the atom is $`d\rho _\text{a}`$ $`=`$ $`{\displaystyle \frac{i}{\mathrm{}}}[H_m\mathrm{}g|\alpha |^2x,\rho _\text{a}]dt`$ (34) $`+2k𝒟[x]\rho _\text{a}dt+\sqrt{2\eta k}[x]\rho _\text{a}dW,`$ This is the expected form for a process describing the continuous measurement of position . Note, however, that an extra term proportional to $`x`$ appears in the effective Hamiltonian. This is just the radiation pressure force on the mirror or the dipole force on the atom. As this is simply a classical linear force on the mirror, it may be cancelled by applying an equal and opposite linear potential along with the trapping potential, and we will assume that this is the case in all further analysis. For the case of a measurement on an atom, $`k=2k_0^2g_0^4|\alpha |^2/(\gamma \mathrm{\Delta }^2)`$, while for the moving mirror $`k=2g_m^2|\alpha |^2/\gamma `$. This quantity may be referred to as the measurement constant, as it describes the rate at which information is obtained about the atomic position, and the corresponding rate at which noise is fed into the atomic momentum as a result of the measurement. Note that for unit efficiency detection and a pure initial state, the stochastic master equation is equivalent to a stochastic Schrödinger equation for the state vector . The measurement signal is the photocurrent from the homodyne detection, being given by $$d\stackrel{~}{Q}=\beta [\eta \gamma i(aa^{})dt+\sqrt{\eta \gamma }dW],$$ (35) where $`\beta `$ is determined by the strength of the local oscillator and the reflectivity of the beam splitter in the homodyne detection setup . 
Using Eqs.(25) and (32) in Eq.(35), we may write this as $`d\stackrel{~}{Q}`$ $`=`$ $`\beta [2\eta \sqrt{2\gamma k}xdt+\sqrt{\eta \gamma }dW]`$ (36) $`=`$ $`\beta \sqrt{\gamma /(2k)}[4\eta kxdt+\sqrt{2\eta k}dW],`$ (37) and defining a scaled measurement signal by $`dQ=d\stackrel{~}{Q}(\sqrt{2k/(\beta ^2\gamma )})`$, we may write $$dQ=4\eta kxdt+\sqrt{2\eta k}dW.$$ (38) The scaled photocurrent, $`I(t)=dQ/dt`$, may then be written as $$I(t)=4\eta kx+\sqrt{2\eta k}\epsilon (t),$$ (39) where $`\epsilon (t)`$ is the delta correlated noise source corresponding to $`dW`$. ## III Estimation and Feedback We come now to the central part of this paper, the question of how to employ a continuous measurement in the control of a quantum system. As we noted in the introduction an arbitrary integral over the whole photocurrent could be used to modulate an arbitrary feedback Hamiltonian. This large number of degrees of freedom makes it difficult to motivate any particular feedback scheme in real systems. Perhaps as a result both theoretical and experimental efforts in quantum feedback have focussed on feeding back the photocurrent at each moment in time. In this paper we refer to this as direct feedback. However, classical control theory faces exactly the same problems and in this paper we propose a family of feedback algorithms which is adapted from strategies employed in analogous classical systems. Classical strategies often break the search for useful control down into an estimation step and a control step. The powerful technique of dynamic programming is able to find optimal algorithms for both tasks in sufficiently simple systems . In general we propose that the estimation step for a quantum mechanical system will involve the solution in real time of an appropriate SME which accounts for realistic levels of measurement and process noise, using an initially mixed state reflecting lack of knowledge of the system. 
The state estimate which results from the SME can then be used to modify the system Hamiltonian in order to achieve the desired control of the system. In the previous section we reviewed how the stochastic master equation (Eq. (34)) for the continuous position measurement of a single quantum degree of freedom may be derived from a real measurement process. Fortunately, if the Hamiltonian for the mechanical dynamics is no more than quadratic in the position and momentum, and the initial state of the system is Gaussian, the SME may be solved analytically, since it remains Gaussian at all times . Taking the initial state to be Gaussian is also sensible, because there is reason to believe that non-classical states evolve rapidly to Gaussians due to environmental interactions, of which the measurement process is one example . A quantum mechanical Gaussian state is uniquely determined by its mean values and covariance matrix (see for example ), just as is the case for classical probability distributions, and so we only need to find equations for these variables in order to fully describe the evolution of the conditioned state. In what follows, we will refer to the elements of the covariance matrix, being the position and momentum variance and their joint covariance, simply as the covariances. The expectation values of operators are found from $$d\langle c\rangle =\text{Tr}(c\,d\rho )$$ (40) as for any master equation, see for example . The Itô rules for stochastic differential equations result in equations for the covariances. 
Performing this calculation gives the equations for the means as $`dx`$ $`=`$ $`{\displaystyle \frac{i}{\mathrm{}}}[x,H_m]dt+2\sqrt{2\eta k}V_xdW,`$ (41) $`dp`$ $`=`$ $`{\displaystyle \frac{i}{\mathrm{}}}[p,H_m]dt+2\sqrt{2\eta k}CdW,`$ (42) and the equations for the covariances are $`\dot{V}_x`$ $`=`$ $`{\displaystyle \frac{i}{\mathrm{}}}[x^2,H_m]+{\displaystyle \frac{2i}{\mathrm{}}}x[x,H_m]8\eta kV_x^2,`$ (43) $`\dot{V}_p`$ $`=`$ $`{\displaystyle \frac{i}{\mathrm{}}}[p^2,H_m]+{\displaystyle \frac{2i}{\mathrm{}}}p[p,H_m]+2k\mathrm{}^28\eta kC^2,`$ (44) $`\dot{C}`$ $`=`$ $`{\displaystyle \frac{i}{2\mathrm{}}}[xp+px,H_m]8\eta kV_xC`$ (46) $`+{\displaystyle \frac{i}{\mathrm{}}}x[p,H_m]+{\displaystyle \frac{i}{\mathrm{}}}p[x,H_m].`$ In these equations, $`V_x`$ and $`V_p`$ are the variances in position and momentum respectively, and $$C=\frac{1}{2}xp+pxxp$$ (47) is the symmetrized covariance. The Gaussian assumption is required to obtain Eqs. (46) but not Eqs. (42). These two systems of equations are precisely equivalent to the SME (34) under the assumption that the initial state is Gaussian. First of all it should be noted that, while it is not explicit, the equations for the covariances are closed, in that they do not depend on the means, and, in addition, they do not depend upon the measurement signal. As a consequence the covariances at any point in time depend only upon the duration of the measurement, and not the specific measurement record. These equations are instances of coupled Riccati equations, and may be solved analytically . The full solutions are fairly cumbersome, and we do not need to give them here. Once these equations have been solved, the solutions may be substituted into the equations for the means, and these are readily solved since they are merely linear equations with (stochastic) driving. 
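For a concrete check, take $`H_m=p^2/2m+m\omega ^2x^2/2`$; the covariance equations then reduce to three coupled Riccati equations that can be integrated directly. Below is a minimal sketch in units with ħ = m = ω = 1 and assumed values of $`k`$ and $`\eta `$; for unit-efficiency detection the conditioned state purifies, so the steady state should satisfy $`V_xV_p-C^2=\mathrm{}^2/4`$.

```python
import numpy as np

hbar = m = w = 1.0                 # units with hbar = m = omega = 1
k, eta = 0.5, 1.0                  # measurement strength and efficiency (assumed)

def derivs(Vx, Vp, C):
    # Riccati equations for the conditioned covariances, specialized to
    # H_m = p^2/2m + m w^2 x^2/2:
    dVx = 2.0 * C / m - 8.0 * eta * k * Vx**2
    dVp = -2.0 * m * w**2 * C + 2.0 * k * hbar**2 - 8.0 * eta * k * C**2
    dC = Vp / m - m * w**2 * Vx - 8.0 * eta * k * Vx * C
    return dVx, dVp, dC

# start from a broad, highly mixed Gaussian state and integrate to steady state
Vx, Vp, C = 10.0, 10.0, 0.0
dt = 1e-3
for _ in range(100000):
    dVx, dVp, dC = derivs(Vx, Vp, C)
    Vx += dVx * dt; Vp += dVp * dt; C += dC * dt

purity = Vx * Vp - C**2            # equals hbar^2/4 for a pure Gaussian state
print(Vx, Vp, C, purity)
```

Note that, as stated in the text, the covariance evolution is deterministic: no measurement record enters these equations, only the elapsed measurement time.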
Writing them in the form $$d𝐱=A𝐱dt+2\sqrt{2\eta k}d𝐘(t)$$ (48) where $`𝐱=(x,p)^T`$ and $`d𝐘(t)=(V_x,C)^TdW`$, the solution is naturally just $`𝐱(t)=e^{At}𝐱(0)+2\sqrt{2\eta k}e^{At}{\displaystyle _0^t}e^{At^{}}𝑑𝐘(t^{}).`$ (49) During the measurement process two things happen. The first is that the mean position and momentum obey not only the evolution dictated by the Hamiltonian, but also suffer continual random kicks due to the measurement process. This is because, at each time step, the effect of the measurement process is to perform a ‘weak’ measurement of position, and since the result of the measurement is necessarily random, the position of the state in phase space changes in a random fashion . The second effect of the measurement process, and the part that is governed by the deterministic equations for the covariances, is to narrow the width of the state in phase space. For ideal (unit efficiency) detection, an initial mixture is reduced, over time, to a completely pure state. At such a time there is, in that sense, no uncertainty, as the quantum state is completely determined, and remains so. For inefficient detection, the degree to which the state is mixed is also reduced during the measurement, but, in general, to a non-zero level determined by the detection efficiency . Now let us consider how we might control the system by the process of feedback. In classical control theory one attempts to obtain, at each point in time, the best possible estimate of the state of the system, and then uses the resulting estimates for the system variables in a feedback loop to control the dynamics. Now, in the quantum example we are considering here, since the distributions for all the variables are always Gaussian, the mean position and momentum are also our best estimates of these variables. In fact, it would be quite reasonable to define a continuous measurement of a system variable as a process by which we obtain an estimate of that variable at each point in time. 
Hence, by this definition, what we need to do to achieve a continuous measurement of a system variable is to write down that integral of the measurement signal which gives us continuously our best estimate of that variable. The equations for the means are written above in terms of the Wiener process, rather than the actual measurement signal $`dQ`$. Rewriting them in terms of the measurement signal we have $`d\langle x\rangle `$ $`=`$ $`-{\displaystyle \frac{i}{\hbar }}\langle [x,H_m]\rangle dt-8\eta k\langle x\rangle V_xdt+2V_xdQ,`$ (50) $`d\langle p\rangle `$ $`=`$ $`-{\displaystyle \frac{i}{\hbar }}\langle [p,H_m]\rangle dt-8\eta k\langle x\rangle Cdt+2CdQ,`$ (51) so that the continuous position measurement does indeed provide us with a continuous measurement of both position and momentum, in the sense introduced above. The strategy we have outlined here of only employing the mean values and not the variances of the conditioned state in the control law turns out to be optimal in some classical systems which are termed *separable*. The above equations require that we put in *a priori* estimates of the state as the initial conditions. While we expect the initial state to be highly mixed (so that in that sense we initially have very little knowledge regarding the state), it is assumed that one can obtain sensible estimates for the initial means and covariances. This assumption is certainly reasonable, for one can almost always obtain good estimates of the initial density matrix from a knowledge of the way in which the system is prepared. This is just as true in classical estimation theory. In that case the initial values of the variables will be, in general, poorly known, but good estimates for the initial probability distributions for the variables (being analogous to the density matrix in the quantum case) may be obtained from a knowledge of the initial preparation. A further point to note is that after a sufficient time the best estimates of the variables actually do not depend upon the initial estimates, but only upon the measurement record. 
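This estimation loop is easy to demonstrate numerically. In the sketch below (ħ = m = ω = 1, k = 1/2, η = 1, all assumed values), a "true" set of conditioned means is propagated with Eqs. (41)-(42), the corresponding measurement record $`dQ`$ is generated via Eq. (38), and an observer who starts with badly wrong initial estimates integrates Eqs. (50)-(51) driven only by $`dQ`$. Because the error dynamics are contracting, the estimates lock on to the true means; the steady-state covariances are hard-coded from the Riccati fixed point for these parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
m = w = 1.0
k, eta = 0.5, 1.0                     # assumed measurement strength and efficiency
# steady-state conditioned covariances for these parameters (Riccati fixed point):
C = (np.sqrt(5.0) - 1.0) / 4.0        # symmetrized covariance
Vx = np.sqrt(C / 2.0)                 # position variance

dt, steps = 1e-3, 20000
x, p = 0.0, 0.0                       # "true" conditioned means
xh, ph = 5.0, -3.0                    # observer's estimates, deliberately wrong

for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    dQ = 4*eta*k*x*dt + np.sqrt(2*eta*k)*dW            # measurement record, Eq. (38)
    # true conditioned means (harmonic oscillator), Eqs. (41)-(42):
    x, p = (x + (p/m)*dt + 2*np.sqrt(2*eta*k)*Vx*dW,
            p - m*w**2*x*dt + 2*np.sqrt(2*eta*k)*C*dW)
    # estimator driven only by the measurement signal, Eqs. (50)-(51):
    xh, ph = (xh + (ph/m)*dt - 8*eta*k*xh*Vx*dt + 2*Vx*dQ,
              ph - m*w**2*xh*dt - 8*eta*k*xh*C*dt + 2*C*dQ)

print(abs(xh - x), abs(ph - p))       # estimation errors decay toward zero
```

Subtracting the two update rules shows the estimation error obeys a deterministic, stable linear system: the noise terms cancel exactly, which is why the lock-on does not depend on the particular noise realization.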
Hence, even an observer with very imprecise knowledge of the initial state will have obtained accurate information after a time, and the resulting feedback, while perhaps initially of no great advantage, will eventually produce the desired effect. This property of the SME is shown in reference , where the question of estimation is considered in detail. As a final point regarding the question of estimation and initial states, it is worth noting that one can always wait a sufficient time for the covariances to attain their steady-state values before initiating feedback, thus obviating to a certain extent the need to use initial estimates for the covariances. In classical control theory this concern about the errors in state estimates is termed caution, and in linear systems with quadratic costs and Gaussian noises, caution turns out not to be optimal. Curiously enough, Eqs. (50) and (51) do not admit of an analytic solution in terms of $`dQ`$ unless the covariances have their steady state values. This is because the linear equation now has an explicit time dependence due to the fact that the time-dependent covariances multiply the means. However, it is a simple matter to integrate these equations numerically, and a computer could perform the necessary calculations to obtain the best estimates of the variables in real time, and hence track the evolution of the system. This process of estimation is not only interesting because we can monitor the system evolution, but because the estimates may be used in a feedback loop to control the dynamics. Now that we know how to obtain the best estimates of the system variables, the process of feedback involves continually adjusting the Hamiltonian so that one or more of its terms are proportional to some function of these estimates. In treating this process of feedback we must be careful to ensure that the act of feedback (the act of adjusting the Hamiltonian) happens after the measurement at each time step. 
This is essential because the measurement must be obtained before any adjustment based on that measurement can take place. However, in the limit of instantaneous feedback this is very simple. First we consider the measurement step, in which the system evolves for a time $`dt`$, and the measurement signal is incremented by the amount $`dQ`$. At this point the feedback is allowed to act, and in the limit in which it is instantaneous (that is, much faster than any of the time scales which characterize the system dynamics), the Hamiltonian is updated before the next time step. At the next time step the equations of motion for the estimates (and, in fact, all system variables) have the new Hamiltonian with the new values for the estimates, so that, effectively, the Hamiltonian has the desired value at all times. In the limit of instantaneous feedback, the SME describing the evolution of the system is therefore just as it was before, but with the Hamiltonian, $`H_m`$, replaced with a new Hamiltonian, having specific dependencies on the estimates of $`x`$ and $`p`$, which are simply $`\text{Tr}[x\rho _\text{a}]`$ and $`\text{Tr}[p\rho _\text{a}]`$. The SME describing the conditional evolution of the system, for general instantaneous feedback via estimation from a continuous measurement of position, is, therefore, given by Eq. (34), where $`H_m`$ is now a function of the average position and momentum: $$H_m=f(x,p,\text{Tr}[x\rho _\text{a}],\text{Tr}[p\rho _\text{a}]).$$ (52) While the SME for the conditional evolution is therefore rather simple, particularly in that it is Markovian, an equation that would describe the overall average (non-selective) evolution would not be. This is because the average evolution at any given time is not simply a function of the average density operator at that time, but depends on the previous history. 
We have, however, provided a recipe to calculate the unconditioned state at all times since this only requires averaging over all the trajectories generated by our conditional feedback SME. We will now show that there is a precise analogy to be made between linear quantum mechanical systems (and in particular those subjected to continuous position measurement) and classical systems which are driven by a certain specified noise process. Once we define a quantum mechanical cost function, this precise analogy will allow us to identify the optimal feedback strategy by using classical LQG theory. In the estimation step of classical control, a system called a Kalman filter is often used to obtain an estimate of the state of the system from the measurement record. When the system is linear and the noise on the system (often referred to as plant or process noise) and the noise on the measurement are both Gaussian, this is the optimal state observer. There is, in fact, a noise-driven classical system with noisy measurement for which the Kalman filter turns out to be precisely Eqs. (42) and (46). Consider a classical harmonic oscillator with dynamical variables $`x_\text{c}`$ and $`p_\text{c}`$, obeying the equations $`\dot{x}_\text{c}`$ $`=`$ $`p_\text{c}/m`$ (53) $`\dot{p}_\text{c}`$ $`=`$ $`-m\omega ^2x_\text{c}+\sqrt{2\eta k}\hbar \zeta _1(t),`$ (54) where the noise driving the classical system is delta correlated so that $`\langle \zeta _1(t)\zeta _1(t^{\prime })\rangle =\delta (t-t^{\prime })`$, and the classical measurement result, being also noisy, is given by $$\dot{Q}_\text{c}=4\eta kx_\text{c}+\sqrt{2\eta k}\zeta _2(t),$$ (55) where $`\langle \zeta _2(t)\zeta _2(t^{\prime })\rangle =\delta (t-t^{\prime })`$ and $`\zeta _1`$ and $`\zeta _2`$ are uncorrelated. 
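The analogous classical system can be simulated directly. The sketch below propagates an ensemble of classical oscillators driven by the momentum noise of Eq. (54), with assumed parameters ħ = m = ω = 1, k = 1/2, η = 1. Because the noise enters only the momentum equation and there is no damping, the ensemble-mean energy heats linearly at rate σ²/2m, which for these values gives a mean energy near 5 at t = 10.

```python
import numpy as np

rng = np.random.default_rng(2)
hbar = m = w = 1.0
k, eta = 0.5, 1.0                       # assumed parameters
sigma = np.sqrt(2*eta*k) * hbar         # momentum-noise amplitude, as in Eq. (54)

ntraj, dt, steps = 4000, 1e-3, 10000    # ensemble of trajectories out to t = 10
x = np.zeros(ntraj)
p = np.zeros(ntraj)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), ntraj)
    x, p = x + (p/m)*dt, p - m*w**2*x*dt + sigma*dW

E = np.mean(p**2/(2*m) + m*w**2*x**2/2)
print(E)                                # ensemble-mean energy, near sigma^2 t/(2m) = 5
```

This makes the point in the text concrete: stronger measurement means a stronger fictitious noise, so higher measurement accuracy comes at the price of heating.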
The equations for the best estimates and their covariances provided by the Kalman filter are then exactly the same as for the quantum system, with the identification $$dW=2\sqrt{2\eta k}(x_\text{c}-\langle x_\text{c}\rangle )dt+\zeta _2dt,$$ (56) which is referred to as the innovation or the residual. Hence, when the quantum states are Gaussian (and in that sense classical), the quantum measurement process may be viewed as a classical estimation process in which noise, $`\zeta _1(t)`$, is continually fed into the system to maintain the uncertainty relations. It is important to note that the strength of the noise in this analogous classical process is determined by the accuracy of the measurement. Unlike classical systems where higher accuracy always results in better state estimation, high accuracy in the position measurement will result in large momentum variance, and hence large average energy, of the conditioned states. The existence of an analogous classical system is very nice because it allows us to use results from classical control theory when considering the quantum system. In particular, when the cost function is quadratic in the system variables, a separation theorem applies to the classical system. This states that, given that it is the values of optimal estimates that are being fed back, when calculating the feedback required for optimal control we may assume that the dynamical variables are known exactly: there is no advantage in considering the accuracy of our estimate. In this case the restriction of feeding back only best estimates is justified as leading to the optimal strategy. Moreover, another result which applies to the classical system states that the optimal control law will be the one which would be calculated by assuming there is no noise either on the plant or on the measurement. This stronger property is termed certainty equivalence. 
All that remains to be done to allow us to match the quantum with the classical theory is to consider a class of quantum mechanical feedback Hamiltonians and a quantum mechanical cost function, so as to complete the precise analogy between quantum and classical systems which exists for the Kalman filter and the SME. We now examine briefly the relevant results from classical control theory. We note that classical optimal control theory has been applied in the past to closed (unmonitored) quantum systems by Rabitz and co-workers . The classical system for which the Kalman filter is equivalent to the SME given by Eqs.(54) may be written $$d𝐱_\text{c}=A𝐱_\text{c}+\sqrt{2\eta k}(0,1)^T\zeta _1(t)dt+B𝐮,$$ (57) where $`𝐱_\text{c}=(x_\text{c},p_\text{c})^T`$ are the classical variables. Here we have added feedback variables $`𝐮`$, which will be chosen to be some function of the dynamical variables $`𝐱_\text{c}`$ so as to implement control. Note that ‘optimal’ control is defined as a feedback algorithm which minimizes a cost function, which is usually a function of the system state. The role of the cost function is to define how far the system state is from the desired state during the process of feedback-control. Classical LQG control theory tells us that for linear systems, driven by Gaussian noise, and in which the cost function is quadratic in the system variables (LQG stands for Linear, Quadratic, Gaussian), the optimal feedback is obtained by choosing $`𝐮=K𝐱`$. The form of the cost function is chosen to be $$I=_o^t\left(𝐱_\text{c}^TP𝐱_\text{c}+𝐮^TQ𝐮\right)𝑑t^{},$$ (58) and the optimal solution is the one that minimizes the expectation value $`J=\text{E}[I]`$ of this cost function over the time that the feedback acts. 
It turns out that $`K=Q^{-1}B^TU`$; although $`U`$ is in general time-dependent, its steady-state value is often all that is used in practice, and it obeys the equation $$0=P+A^TU+UA-UBQ^{-1}B^TU.$$ (59) We will use these results in the next section when we consider cooling a single quantum particle. Note that the choice of cost function is crucial in determining the optimal strategy. For example, placing boundaries on the available strength of feedback, rather than using a quadratic cost function, typically implies that some form of bang-bang control (a control algorithm in which $`𝐮`$ takes one of two values) is optimal. In this case the optimal strategy is therefore not linear in the estimated state. So long as the feedback Hamiltonian given by Eq. (52) is linear in the position and momentum operators, so that it has the form $$H_m=f(\text{Tr}[x\rho _\text{a}],\text{Tr}[p\rho _\text{a}])x+g(\text{Tr}[x\rho _\text{a}],\text{Tr}[p\rho _\text{a}])p$$ (60) where $`f`$ and $`g`$ are arbitrary functions, then the dynamic equations for the covariances remain decoupled from the equations for the means, and remain deterministic. If this is not the case, then the equations for the means become coupled to those for the covariances, and the situation becomes more complex. Furthermore, if the feedback Hamiltonian has terms that are higher than quadratic in $`x`$ and $`p`$ then the correspondence between quantum feedback and LQG control will be lost since it will not preserve the Gaussian property of the states and since the quantum Hamiltonian will affect the state in a way that is distinct from the evolution of a classical probability distribution. It remains to define a physically motivated quantum mechanical cost function which maps onto the classical system we are considering. Clearly the part of the cost function which refers to the performance of the system should be an expectation value with respect to the unconditioned density operator. 
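Eq. (59) is a standard algebraic Riccati equation, and for the harmonic oscillator it can be solved with a few lines of linear algebra. The sketch below uses the textbook Hamiltonian-matrix method; the actuator matrix B (a force acting on the momentum) and the cost matrices P and Q are illustrative assumptions, with P chosen so that the state cost is the oscillator energy.

```python
import numpy as np

m = w = 1.0
A = np.array([[0.0, 1.0/m], [-m*w**2, 0.0]])   # free oscillator dynamics
B = np.array([[0.0], [1.0]])                   # feedback force enters the momentum equation
P = np.diag([m*w**2/2, 1/(2*m)])               # state cost = mean oscillator energy
Q = np.array([[0.1]])                          # control-effort cost (assumed value)

# Solve 0 = P + A^T U + U A - U B Q^{-1} B^T U via the Hamiltonian-matrix method:
S = B @ np.linalg.inv(Q) @ B.T
Z = np.block([[A, -S], [-P, -A.T]])
evals, evecs = np.linalg.eig(Z)
stable = evecs[:, evals.real < 0]              # basis of the stable invariant subspace
U = np.real(stable[2:, :] @ np.linalg.inv(stable[:2, :]))

K = np.linalg.inv(Q) @ B.T @ U                 # optimal steady-state gain, u = -K x
resid = P + A.T @ U + U @ A - U @ S @ U
print(K, np.max(np.abs(resid)))
```

The closed-loop matrix A − BK produced this way is guaranteed to be stable, which can be verified directly from its eigenvalues.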
It should also be quadratic in the position and momentum, since this ensures that it is straightforward to minimize a typical measure of control such as the average energy, just as it does for a classical system. On the other hand, when assessing the cost of control, it will be sufficient to consider classical quantities since the feedback Hamiltonian will typically be modulated by essentially classical quantities such as electric currents or laser powers. With these considerations, we can define a sensible quantum mechanical cost function as $$J_q=\int _0^t\left(\text{Tr}\left(𝐱^TP𝐱\rho \right)+\langle 𝐮^TQ𝐮\rangle _\text{c}\right)𝑑t$$ (61) where $`\langle \cdot \rangle _\text{c}`$ indicates an average over the classical random variables $`𝐮`$. As noted above, the density matrix average can be performed by first taking expectation values over the conditioned states given by the SME and then averaging over the trajectories. The cost function for a given trajectory is $$I_q=\int _0^t\left(\langle 𝐱\rangle ^TP\langle 𝐱\rangle +\text{Tr}\left(PV\right)+𝐮^TQ𝐮\right)𝑑t$$ (62) where the angle brackets indicate that we are talking about the mean values and $`V`$ is the covariance matrix of the conditioned states. The second term in the integral is the expectation value of $`P`$ for the conditioned state if the mean $`\langle 𝐱\rangle `$ is zero. This term is independent of both the mean values and the feedback, so long as we use a linear feedback Hamiltonian as discussed above. It represents a minimum cost due to the finite width of the conditioned states. For a given trajectory, $`𝐮`$ is not random, since it is a deterministic function of the current, and possibly of the past, values of $`\langle 𝐱\rangle `$. We can now re-express the quantum cost by averaging Eq.(62) over the trajectories, which gives $$J_q=\int _0^t\left(\langle \langle 𝐱\rangle ^TP\langle 𝐱\rangle \rangle _\text{c}+\text{Tr}\left(PV\right)+\langle 𝐮^TQ𝐮\rangle _\text{c}\right)𝑑t.$$ (63) On the other hand, we have identified a classical problem for which $`\langle 𝐱\rangle `$ and $`V`$ obey the Kalman filter equations for the estimate of the noisy classical state $`𝐱_\text{c}`$. 
Since the Kalman filter is merely a set of sufficient statistics for a posterior probability distribution for $`𝐱_\text{c}`$, we can write the average over $`𝐱_\text{c}`$ in $`J`$ in terms of this mean and covariance, giving $$J=\int _0^t\left(\langle \langle 𝐱\rangle ^TP\langle 𝐱\rangle \rangle _c+\text{Tr}\left(PV\right)+\langle 𝐮^TQ𝐮\rangle _c\right)𝑑t.$$ (64) With this we see that the classical and quantum cost functions are identical. Thus the quantum mechanically optimal strategy (given that the feedback Hamiltonians are no more than quadratic) for the cost function we have introduced will be the classically optimal strategy for the fictitious classical system, whose Kalman filter equations reproduce the SME, under the analogous classical cost function. Moreover, it is clear that for each SME describing a linear quantum system subjected to a linear measurement, there will be some classical LQG model which can be constructed to find the optimal feedback algorithm for a similarly defined quadratic cost. It is important to note that this is merely the optimal strategy for a given strength of measurement. For example, if the aim is to minimize the energy of an oscillator, overly strong position measurement will result in states of high average momentum and therefore energy. It is, however, straightforward to find the optimal measurement strength, if desired. One simply uses the procedure above to find the optimal strategy for a given measurement strength, $`k`$, and takes the extra step of optimizing the result over $`k`$. For the physical position measurements we discuss in section II, changing the measurement strength corresponds to changing the laser power driving the cavity. Up to this point we have not considered how particular feedback Hamiltonians could be implemented, and so we complete this section with a discussion of this important question. Clearly terms in the feedback Hamiltonian proportional to functions of $`x`$ are implemented by applying the required force to the system.
By the use of estimation the forces can be adjusted so that they are proportional to any particular function of the average momentum and position as indicated above. This allows terms to be added to the dynamical equation for the momentum, but not to those for the position. We will show in the following section that in order to achieve the best results for phase-space localization we must add terms to the dynamical equation for the position, and therefore it is important to be able to implement a term in the feedback Hamiltonian proportional to momentum. This is not so straightforward, but we suggest two possible ways in which it might be achieved. If the exact location of the trap is not an important consideration, then shifts in the position (being strictly equivalent to a linear momentum term in the Hamiltonian), are achieved simply by shifting all the position dependent terms in the Hamiltonian, in particular the trapping potential. This is a shift in the origin of the coordinates, and, being a virtual shift in the position, produces a term in the dynamical equation for the position proportional to the rate at which the trap is being shifted. When the experimental arrangement is such that the distance covered by the particle during the cooling is negligibly small compared to the trapping apparatus this may prove to be a very effective way of implementing a feedback Hamiltonian linear in momentum. A second method would be to apply a large impulse to the particle so that during one feedback time-step the particle is moved the desired distance, and an equal and opposite impulse is then applied to reset the momentum. Naturally the feasibility of this method will also depend upon the practicalities of a given experimental arrangement. ## IV Cooling and Confinement via feedback ### A Using feedback by estimation Cooling and localization of individual quantum systems is an important first step in the process of control. 
This is certainly true for trapped atoms, ions, and cavity mirrors which we used as our examples in section II. By cooling, we mean localization in momentum space, and by confinement we mean localization in position space. When these two processes are combined, then we may speak of phase-space localization. We now apply the formulation of quantum feedback introduced in the previous section to the problem of phase space localization. As indicated in that section, we can use classical control theory to find the optimal feedback. First, however, let us examine the steady state solutions for the covariances in the absence of feedback. For a harmonically trapped particle the equations for the covariances become $`\dot{V}_x`$ $`=`$ $`(2/m)C-8k\eta V_x^2`$ (65) $`\dot{V}_p`$ $`=`$ $`-2m\omega ^2C-8k\eta C^2+2k\hbar ^2`$ (66) $`\dot{C}`$ $`=`$ $`V_p/m-m\omega ^2V_x-8k\eta CV_x`$ (67) The resulting steady-state covariances are $`V_x`$ $`=`$ $`\left({\displaystyle \frac{\hbar }{\sqrt{2\eta }m\omega }}\right){\displaystyle \frac{1}{\sqrt{\xi +1}}}`$ (68) $`V_p`$ $`=`$ $`\left({\displaystyle \frac{\hbar m\omega }{\sqrt{2\eta }}}\right){\displaystyle \frac{\xi }{\sqrt{\xi +1}}}`$ (69) $`C`$ $`=`$ $`\left({\displaystyle \frac{\hbar }{2\sqrt{\eta }}}\right){\displaystyle \frac{\sqrt{\xi -1}}{\sqrt{\xi +1}}}.`$ (70) where $$\xi =\sqrt{1+\frac{4}{\eta r^2}},r=\frac{m\omega ^2}{2\hbar \eta k}$$ (71) Clearly the final state is in general a mixed Gaussian state, with the exact orientation and squeezing determined by the measurement constant, oscillation frequency, particle mass and detection efficiency. From these covariances the purity of the final state is readily obtained by using $$Tr[\rho _\text{c}^2]=(\hbar /2)(V_xV_p-C^2)^{-1/2}.$$ (72) We find that the steady state purity of the monitored state is $$Tr[\rho _\text{c}^2]=\sqrt{\eta }.$$ (73) For perfect detection efficiency ($`\eta =1`$), the state is therefore pure, and perfectly determined at each point in time.
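These steady-state results are easy to verify numerically. The sketch below (not from the paper; the parameter values are arbitrary and $`\hbar `$ is set to 1) evaluates the steady-state covariances, confirms that the three covariance rate equations vanish there, and recovers the $`\sqrt{\eta }`$ purity:

```python
import math

hbar = 1.0                       # work in units where hbar = 1
m, w, k, eta = 1.0, 2.0, 0.3, 0.8  # mass, frequency, measurement strength, efficiency

r = m * w**2 / (2 * hbar * eta * k)
xi = math.sqrt(1 + 4 / (eta * r**2))

# steady-state conditional covariances
Vx = (hbar / (math.sqrt(2 * eta) * m * w)) / math.sqrt(xi + 1)
Vp = (hbar * m * w / math.sqrt(2 * eta)) * xi / math.sqrt(xi + 1)
C = (hbar / (2 * math.sqrt(eta))) * math.sqrt(xi - 1) / math.sqrt(xi + 1)

# the three covariance rate equations should be stationary here
dVx = (2 / m) * C - 8 * k * eta * Vx**2
dVp = -2 * m * w**2 * C - 8 * k * eta * C**2 + 2 * k * hbar**2
dC = Vp / m - m * w**2 * Vx - 8 * k * eta * C * Vx

purity = (hbar / 2) / math.sqrt(Vx * Vp - C**2)
```

At these values all three time derivatives vanish and the purity equals $`\sqrt{\eta }`$, as claimed.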
For imperfect detection the state is not completely pure, and is increasingly mixed as the detection becomes less efficient. Inefficient detection also models environmental noise, which in the case of a cavity mirror would be coupling to a thermal bath, and in the case of an atom would be spontaneous emission . The equations of motion for the conditioned covariances are unchanged by linear feedback, whether direct or using estimation, and Eq. (73) therefore gives the lower limit on the purity of the final cooled state achievable for a given detector efficiency. Now that we know the covariances and resulting purity of the conditioned state, we want to know how well we can localize the mean position and momentum of this state in phase space, by feeding back the estimated values. The stochastic equations for the means are $`dx`$ $`=`$ $`(p/m)dt+2\sqrt{2\eta k}V_xdW,`$ (74) $`dp`$ $`=`$ $`-m\omega ^2xdt+2\sqrt{2\eta k}CdW.`$ (75) We wish to minimize the distribution of $`x`$ and $`p`$ about the origin, and so it is sensible to take the cost function to be minimized as in the previous section $$J_q=\int _0^t\left(\text{Tr}\left(𝐱^TP𝐱\rho \right)+q^2\langle 𝐮^TQ𝐮\rangle _\text{c}\right)𝑑t$$ (76) where $`q`$ is a weighting constant which in this case has units of time, and $$P=Q=\left(\begin{array}{cc}m\omega ^2& 0\\ 0& 1/m\end{array}\right).$$ (77) With this choice of $`P`$ and $`Q`$ the cost function is a weighted sum of the energy of the oscillator and a fictitious energy one could associate with the feedback variable $`𝐮`$. In general one can choose any quadratic function of the feedback variables, and a particular choice would be made to suit a given situation. Note that the feedback variable $`𝐮`$ appears in the cost function to reflect the fact that we are not unrestricted in the magnitude of the feedback we bring to bear.
If this consideration is relatively unimportant, $`q`$ is chosen to be small, and so the cost function reduces essentially to the energy of the oscillator, which is certainly the quantity we wish to minimize in the process of phase-space localization. With this form for the cost function, classical LQG control theory tells us that linear feedback will provide optimal control. Choosing $`B=I`$ in the feedback equation, so as to allow feedback in the dynamical equations for both variables (the most general case), we need merely solve Eq.(59) for $`U`$ to find the optimal value of the feedback matrix $`K`$. Performing this calculation we find that an optimal solution is $`K=(1/q)I`$. That is, feedback to provide an equal damping rate on both the position and momentum. Note that the smaller we make the weighting constant $`q`$, the larger the damping rate, being $`\mathrm{\Gamma }_x=\mathrm{\Gamma }_p=\mathrm{\Gamma }=1/q`$. With this feedback, the dynamical equations for the means become $`dx`$ $`=`$ $`-\mathrm{\Gamma }_xxdt+(1/m)pdt+2\sqrt{2\eta k}V_xdW,`$ (78) $`dp`$ $`=`$ $`-m\omega ^2xdt-\mathrm{\Gamma }_ppdt+2\sqrt{2\eta k}CdW.`$ (79) It is now the mean and variance of the conditioned means which are of interest, as they tell us how well localized the particle is, and about what point in phase-space. We will denote the means of the conditioned means as $`x`$ and $`p`$, and the covariances of the means as $`V_x^\text{e}`$, $`V_p^\text{e}`$ and $`C^\text{e}`$, where the ‘e’ refers to the fact that they are excess to the quantum conditional covariances resulting from the measurement process.
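The damping action of this feedback can be illustrated with a quick numerical sketch (not from the paper). With the noise terms switched off and the damping signs written out explicitly, the mean equations describe a damped oscillator whose means spiral into the origin at rate $`\mathrm{\Gamma }`$; a simple Euler integration, with arbitrary illustrative parameters, shows the energy of the means decaying to zero:

```python
m, w, gamma = 1.0, 2.0, 0.7      # mass, trap frequency, damping Gamma = 1/q
x, p = 1.0, 0.0                  # initial conditioned means
dt, steps = 1e-3, 20_000         # integrate to t = 20

for _ in range(steps):           # Euler step of the noise-free mean equations
    x, p = (x + (-gamma * x + p / m) * dt,
            p + (-m * w**2 * x - gamma * p) * dt)

energy = p**2 / (2 * m) + 0.5 * m * w**2 * x**2
```

Starting from an energy of 2 in these units, the residual energy after twenty damping times is negligible, as expected for the equal-rate damping of both variables.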
Clearly the steady state values for the means $`x`$ and $`p`$ are at the origin of phase space, while the equations for the covariances are $`\dot{\stackrel{~}{V}}_x^\text{e}`$ $`=`$ $`-2\mathrm{\Gamma }_x\stackrel{~}{V}_x^\text{e}+2\omega \stackrel{~}{C}^\text{e}+{\displaystyle \frac{2\omega }{r}}\stackrel{~}{V}_x^2`$ (80) $`\dot{\stackrel{~}{V}}_p^\text{e}`$ $`=`$ $`-2\mathrm{\Gamma }_p\stackrel{~}{V}_p^\text{e}-2\omega \stackrel{~}{C}^\text{e}+{\displaystyle \frac{2\omega }{r}}\stackrel{~}{C}^2`$ (81) $`\dot{\stackrel{~}{C}}^\text{e}`$ $`=`$ $`-(\mathrm{\Gamma }_x+\mathrm{\Gamma }_p)\stackrel{~}{C}^\text{e}-\omega (\stackrel{~}{V}_x^\text{e}-\stackrel{~}{V}_p^\text{e})+{\displaystyle \frac{2\omega }{r}}\stackrel{~}{C}\stackrel{~}{V}_x,`$ (82) and the tildes denote dimensionless scaled covariances given by $$\stackrel{~}{V}_x=\frac{2m\omega }{\hbar }V_x,\stackrel{~}{V}_p=\frac{2}{\hbar m\omega }V_p,\stackrel{~}{C}=\frac{2}{\hbar }C.$$ (83) Putting $`\mathrm{\Gamma }_x=\mathrm{\Gamma }_p=\mathrm{\Gamma }`$, and solving for the steady-state covariances, we obtain $`\stackrel{~}{V}_x^\text{e}`$ $`=`$ $`{\displaystyle \frac{2𝒬}{r(1+4𝒬^2)}}\left[\left(1+2𝒬^2\right)\stackrel{~}{V}_x^2+2𝒬^2\stackrel{~}{C}^2+2𝒬\stackrel{~}{C}\stackrel{~}{V}_x\right]`$ (84) $`\stackrel{~}{V}_p^\text{e}`$ $`=`$ $`{\displaystyle \frac{2𝒬}{r(1+4𝒬^2)}}\left[2𝒬^2\stackrel{~}{V}_x^2+\left(1+2𝒬^2\right)\stackrel{~}{C}^2-2𝒬\stackrel{~}{C}\stackrel{~}{V}_x\right]`$ (85) $`\stackrel{~}{C}^\text{e}`$ $`=`$ $`{\displaystyle \frac{2𝒬}{r(1+4𝒬^2)}}\left[-𝒬\stackrel{~}{V}_x^2+𝒬\stackrel{~}{C}^2+\stackrel{~}{C}\stackrel{~}{V}_x\right],`$ (86) where $`𝒬\equiv \omega /(2\mathrm{\Gamma })`$ . The total average covariances resulting from the localization process are simply the sum of the conditional covariances and these excess covariances. The overall resulting purity may then be calculated using Eq.(72), if so desired.
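As a consistency check (not part of the paper), one can substitute the closed forms above back into the stationary versions of the three rate equations for the excess covariances, writing the damping and coupling signs out explicitly. The sketch below does this for $`\mathrm{\Gamma }_x=\mathrm{\Gamma }_p=\mathrm{\Gamma }`$ with arbitrary illustrative inputs:

```python
w, gamma, r = 1.3, 0.4, 5.0    # omega, Gamma, and the measurement parameter r
Vx, C = 0.9, 0.2               # scaled conditional covariances (inputs)
Q = w / (2 * gamma)            # the ratio defined in the text

# closed-form steady-state excess covariances
pref = 2 * Q / (r * (1 + 4 * Q**2))
Vxe = pref * ((1 + 2 * Q**2) * Vx**2 + 2 * Q**2 * C**2 + 2 * Q * C * Vx)
Vpe = pref * (2 * Q**2 * Vx**2 + (1 + 2 * Q**2) * C**2 - 2 * Q * C * Vx)
Ce = pref * (-Q * Vx**2 + Q * C**2 + C * Vx)

# stationary versions of the rate equations for the excess covariances
res_x = -2 * gamma * Vxe + 2 * w * Ce + (2 * w / r) * Vx**2
res_p = -2 * gamma * Vpe - 2 * w * Ce + (2 * w / r) * C**2
res_c = -2 * gamma * Ce - w * (Vxe - Vpe) + (2 * w / r) * C * Vx
```

All three residuals vanish to machine precision, confirming that the closed forms solve the linear steady-state system.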
In the previous section we noted that terms in the feedback Hamiltonian proportional to momentum are harder to generate than those proportional to position, and since the optimal feedback we have used above requires both kinds of terms, it is of interest to examine what may be achieved with a position term alone. This imposes the condition that $`K_{11}=K_{12}=0`$, where $`K_{ij}`$ are the elements of the feedback matrix $`K`$. To derive the optimal solution under this condition we solve Eq.(59) as before, but this time set $$B=\left(\begin{array}{cc}0& 0\\ 0& 1\end{array}\right).$$ (87) This time taking the small $`q`$ limit, $`q\omega =2𝒬\ll 1`$, an optimal feedback strategy is given by $`K_{21}=m\omega /q`$ and $`K_{22}=1/q`$. In this case the steady-state solutions for the excess variances are $`\stackrel{~}{V}_x^\text{e}`$ $`=`$ $`{\displaystyle \frac{1}{r}}\left[\stackrel{~}{V}_x^2+4𝒬^2\stackrel{~}{C}^2+4𝒬\stackrel{~}{C}\stackrel{~}{V}_x\right]`$ (88) $`\stackrel{~}{V}_p^\text{e}`$ $`=`$ $`{\displaystyle \frac{1}{r}}\left[\stackrel{~}{V}_x^2+2𝒬\stackrel{~}{C}^2\right]`$ (89) $`\stackrel{~}{C}^\text{e}`$ $`=`$ $`{\displaystyle \frac{1}{r}}\stackrel{~}{V}_x^2,`$ (90) We see, therefore, that using feedback by estimation, it is indeed possible to obtain phase-space localization with only a position dependent term in the feedback Hamiltonian, although this is clearly not as good as using a combination of position and momentum damping. To summarize our results so far, we see that when using feedback by estimation, and when we average over the conditional evolution, there is an additional uncertainty in the final localized state over that due to measurement inefficiency, and that this excess uncertainty decreases with the magnitude of the feedback. This additional uncertainty is due to the noise which is continually fed into the system as the result of the measurement. The effect of this noise is decreased as the damping constant, $`\mathrm{\Gamma }`$, is increased.
However, there is ultimately a limit upon the magnitude of the feedback, and hence upon $`\mathrm{\Gamma }`$, and this is reflected in the choice of the weighting constant $`q`$ in the cost function. We will see in the next section that direct feedback provides an alternative strategy for dealing with the measurement noise. ### B Adding direct feedback The beauty of direct quantum feedback, formulated by Wiseman and Milburn , is that it may be used to cancel the noise which drives the mean values of the dynamical variables. This is possible because the noise in the measurement signal is the same noise that drives the system. Feeding back the measurement signal itself (by choosing a feedback Hamiltonian directly proportional to this signal) essentially allows the noise driving the system to drive it twice at each step. If the feedback Hamiltonian is chosen in the right way, then the effect of the noise at the first step may be canceled by that at the second step. The result is that the steady state of the feedback master equation can have the same variances as the conditioned states since all of the fluctuations in the mean values are overcome. We note that direct feedback would be analogous to the use of residual feedback in classical control theory (in which the innovation is fed back to drive the system) to cancel the noise driving the Kalman filter. As in the previous section, we choose the feedback Hamiltonian to be linear in $`x`$ and $`p`$, as this is sufficient for our purposes. 
For direct feedback, the feedback Hamiltonian is proportional to the measurement signal $`I(t)`$, so we may write $$H_D=I(t)(\alpha x+\beta p).$$ (91) The stochastic master equation that results is $`d\rho _\text{a}`$ $`=`$ $`-{\displaystyle \frac{i}{\hbar }}[H_m,\rho _\text{a}]dt+2k𝒟[x]\rho _\text{a}dt+{\displaystyle \frac{1}{\eta }}𝒟[F]\rho _\text{a}dt`$ (94) $`-i\sqrt{2k}[F,x\rho _\text{a}+\rho _\text{a}x]dt`$ $`+[\sqrt{2\eta k}x-{\displaystyle \frac{i}{\sqrt{\eta }}}F]\rho _\text{a}dW,`$ where $`F=(\sqrt{2k}\eta )(\alpha x+\beta p)/\hbar `$. This is precisely the model that has been used previously to discuss the manipulation of the motion of atoms and mirrors through feedback . Applied to a harmonic oscillator and initially Gaussian states it is possible to rewrite this master equation in terms of the mean values and covariances exactly as was done above. The equations for the covariances are just as before (Eqs. (65)–(67)), but the equations for the means are now $`dx`$ $`=`$ $`(p/m)dt+4\eta k\beta xdt+\sqrt{2\eta k}(2V_x+\beta )dW,`$ (95) $`dp`$ $`=`$ $`-m\omega ^2xdt-4\eta k\alpha xdt+\sqrt{2\eta k}(2C-\alpha )dW.`$ (96) We see that in order to cancel the noise driving the means we merely need choose the feedback Hamiltonian such that $`\alpha =2C`$ and $`\beta =-2V_x`$. However, it is also clear that direct feedback from a continuous position measurement is limited in a way in which feedback by estimation is not. Using direct feedback alone it is not possible to provide a damping term for the mean momentum, or, in fact, any term in the equations for the mean values which is proportional to the mean momentum. This is because we are using continuous position measurement, so that the measurement signal is proportional to the mean position, and not the mean momentum. It is this limitation that feedback using estimation allows us to overcome.
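The cancellation condition can be checked in one line: assuming the sign convention in which the $`dW`$ coefficients of the means are $`\sqrt{2\eta k}(2V_x+\beta )`$ and $`\sqrt{2\eta k}(2C-\alpha )`$, the choice $`\alpha =2C`$, $`\beta =-2V_x`$ nulls both. A sketch with arbitrary illustrative covariances:

```python
import math

eta, k = 0.9, 0.5          # detection efficiency and measurement strength
Vx, C = 0.4, 0.1           # illustrative conditional covariances

alpha = 2 * C              # coefficient of x in H_D
beta = -2 * Vx             # coefficient of p in H_D (note the sign)

noise_x = math.sqrt(2 * eta * k) * (2 * Vx + beta)   # dW coefficient of dx
noise_p = math.sqrt(2 * eta * k) * (2 * C - alpha)   # dW coefficient of dp
```

With these coefficients the means evolve deterministically, which is precisely the noise-cancelling feature of direct feedback described above.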
Further, it is also clear that while feedback by estimation allows us to achieve phase space localization even in cases where it was not possible to provide the feedback Hamiltonian with a term involving the momentum operator, direct feedback alone will not provide either cooling or confinement without the use of a momentum term. Consequently, a momentum term in the Hamiltonian is crucial for the cooling achieved in this system by the scheme of Mancini et al. . Alternatively, in the absence of the momentum term, these equations are clearly well adapted to modifying the effective potential seen by the atom, and this is discussed at length by Dunningham et al. . It is important to note that the feedback Hamiltonian Eq. (91) could only be realized in the limit of an ideal (infinitely broad band) feedback signal, due to the fact that the measurement noise is white noise (at least to an excellent approximation). The cost, $`J_q`$, of such a signal is therefore also infinite. The use of linear direct feedback cannot improve the results obtained using feedback by estimation (since this is already optimal), and this is reflected in the fact that it eliminates the noise only in the limit of infinite cost, a statement which is also true of the optimal algorithm using feedback by estimation. However, direct feedback does provide an alternative strategy, and, depending on the limitations imposed by a specific implementation, it might well prove advantageous to use it in combination with feedback by estimation. To summarize, the lowest temperature available is given by the steady-state covariance matrix of the conditional states (Eqs.(70)), and in the limit in which the cost of control can be disregarded, either direct feedback or LQG control, or some combination (being analogous to classical LQG control plus residual feedback), give a means of achieving this as a limiting case. 
The result is that the final cooled, localized state has the covariances given by Eqs.(70), and the resulting purity, given by Eq.(73), would be limited only by the detection efficiency and environmental noise. ## V Conclusion In this paper we have shown that it is possible to formulate, in a simple manner, feedback in linear quantum systems such that the best estimates of system variables are used to control the system. This significantly extends the range of available possibilities for the control of quantum systems using feedback. Due to the fact that in linear systems the estimation process may be modeled by its classical analogue, Kalman filtration, classical LQG control theory may be applied to quantum feedback by estimation. While we have focused on applying results from LQG theory to linear systems, there are many other techniques from classical state observer based control which could be applied to control quantum systems. For example, the techniques of adaptive control, where system parameters are estimated on-line in order to cope with non-linearities or uncertainties about the system to be controlled, could be employed. Another problem to be faced in more complicated systems is the computational overhead in propagating the state estimate. Linear systems are tractable classically because the mean and covariance matrix provide all the necessary information about the posterior probability distribution with the result that the whole distribution need not be propagated. Propagating the SME’s of non-linear quantum systems in real time will require extensive computational resources. Approximations to the full SME, perhaps along the lines of classical extended Kalman filters, may well be useful or necessary in real near-future experiments. ## Acknowledgments ACD would like to thank Hideo Mabuchi, Sze Tan and Dan Walls for helpful discussions, and the University of Auckland for financial support. KJ would like to thank Prof. 
Dan Walls for hospitality during a stay at the University of Auckland where this work was carried out.
no-problem/9812/hep-ph9812253.html
## 1 In the study of possible virtual effects in B decays which are due to physics beyond the Standard Model (SM), a crucial task is to subtract reliably the SM contributions. For the best studied rare $`b`$ decay process, $`b\to s+\gamma `$, many efforts have been made in the last decade to calculate it as accurately as possible within the SM. However, since the signals of new physics in $`b`$ decays unfortunately emerge with branching ratios at the level of $`10^{-7}`$ or less , the commonly studied processes of which $`b\to s+\gamma `$ is the prototype are very problematic for testing new physics against the SM background. Moreover, other yet unobserved B decays, like various $`B\to \tau `$ processes, have also been shown to be rather insensitive to a large class of new physics models . In a recent letter , we have emphasized an alternative approach to the challenge of identifying virtual effects from new physics in $`b`$ decays, by the consideration of processes which have negligible strength in the SM. Such processes could thus serve as sensitive probes for new physics, relatively free of SM “pollution”. There we focused on the $`b\to ss\overline{d}`$ transition, which is a box-diagram induced process in the SM with a very small branching ratio of below $`10^{-11}`$, and we studied possible effects from the minimal supersymmetric standard model (MSSM) and from a supersymmetric model with R-parity violation. We have shown that the existing limits on parameters of MSSM and of R-parity violating interactions allow this process to occur well in excess of the SM rate, thus providing an unusual opportunity for stricter limits or, hopefully, for discovering new physics effects. In the present letter we undertake a study of the occurrence of the $`b\to ss\overline{d}`$ transition in Two Higgs Doublet Models (THDMs), which are frequently considered as likely candidates for the extension of the SM.
We shall study two models in which the charged Higgs exchange is contributing to box diagrams, which we denote as Model I and Model II , respectively, as well as a third model allowing a tree level transition mediated by neutral Higgs bosons . We begin with some comments on the calculation of the W-box diagrams in the SM. Due to strong cancellations between the contributions from the top, the charm and the up quarks in the loops, the leading order SM result for the $`b\to ss\overline{d}`$ decay rate is $`\mathrm{\Gamma }={\displaystyle \frac{m_b^5}{48(2\pi )^3}}\left|{\displaystyle \frac{G_F^2}{2\pi ^2}}m_W^2V_{tb}V_{ts}^{*}\right|^2\left|V_{td}V_{ts}^{*}f\left(x\right)+yV_{cd}V_{cs}^{*}g(x,y)\right|^2,`$ (1) where $`f(x)={\displaystyle \frac{1-11x+4x^2}{4x(1-x)^2}}-{\displaystyle \frac{3}{2(1-x)^3}}\mathrm{ln}x,`$ (2) $`g(x,y)=\mathrm{ln}y+{\displaystyle \frac{8x-4x^2-1}{4(1-x)^2}}\mathrm{ln}x+{\displaystyle \frac{4x-1}{4(1-x)}},`$ (3) with $`x=m_W^2/m_t^2`$, $`y=m_c^2/m_W^2`$. The first term in (1) is suppressed by the small Cabibbo-Kobayashi-Maskawa (CKM) matrix elements $`V_{td}V_{ts}^{*}`$, while the second term is suppressed by $`y=m_c^2/m_W^2`$ and contributes about a half of the CKM suppressed term. In this second contribution, we have neglected a kinematics dependent term when we perform the integral over the loop momentum. This amounts to neglecting a small $`(m_c^2/m_W^2)\mathrm{ln}(f(p)/m_c)`$ contribution with $`f(p)`$ a function depending on the external momenta $`p`$.
We have checked the effect of the neglected dependence numerically and found that it never exceeds $`10\%`$ of the $`m_c^2/m_W^2`$ term in the whole kinematic region<sup>2</sup><sup>2</sup>2 We note that this dependence does not appear in the calculation of the $`K\overline{K}`$ mixing, where the external momentum squares are of the order of $`m_s^2`$ and can be safely neglected compared to $`m_c^2`$, nor in the case of $`B\overline{B}`$ mixing, where even the contributions of the order of $`m_b^2/m_W^2`$ are sub-dominant.. Since the involved CKM matrix elements are not well bounded and the relative phase of the two terms is not fixed, we can only determine a range for the branching ratio of $`b\to ss\overline{d}`$, which turns out to be always below $`10^{-11}`$ in the SM. Note that QCD corrections may not change the value by orders of magnitude, if compared with the analogous processes $`B^0\overline{B}^0`$ and $`K^0\overline{K}^0`$ mixing . All these features combine to single out this process as a very sensitive one to new physics. The THDMs are the minimal extensions of the SM. With one more Higgs doublet, one has to suppress the tree level flavor changing neutral current (FCNC) interactions due to neutral Higgs bosons to be consistent with the data. The Model I allows only one Higgs doublet to couple to both the up- and the down-type quarks . In the Model II one Higgs doublet is coupled only to up-type quarks while the other doublet is coupled only to down-type quarks , and thus the Higgs content in the Model II is the same as in the MSSM. In another model, the Model III , the tree level FCNC is assumed to be small enough to lie within the experimental bounds.
## 2 The relevant Lagrangian in the Model I and II is $`ℒ`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}{\displaystyle \frac{g_2}{M_W}}\left[\mathrm{cot}\beta \overline{u}_{i,R}(M_UV)_{ij}d_{j,L}-\xi \overline{u}_{i,L}(M_DV)_{ij}d_{j,R}\right]H^++h.c.,`$ (4) where $`V`$ represents the $`3\times 3`$ unitary CKM matrix, $`M_U`$ and $`M_D`$ denote the diagonalized quark mass matrices, the subscripts $`L`$ and $`R`$ denote left-handed and right-handed quarks, and $`i,j=1,2,3`$ are the generation indices. For model I, $`\xi =\mathrm{cot}\beta `$; while for Model II, $`\xi =\mathrm{tan}\beta `$. In the charged Higgs mediated box diagrams, the couplings proportional to the up quark masses dominate the contribution to $`b\to ss\overline{d}`$ decay. In both models we have $`\mathrm{\Gamma }={\displaystyle \frac{m_b^5}{48(2\pi )^3}}\left|{\displaystyle \frac{G_F^2}{2\pi ^2}}m_W^2V_{tb}V_{ts}^{*}\right|^2\left|V_{td}V_{ts}^{*}\left[f(x)+A(x,z)\right]+yV_{cd}V_{cs}^{*}\left[g(x,y)+B(x,z)\right]\right|^2,`$ (5) where $`A(x,z)`$ $`=`$ $`{\displaystyle \frac{\mathrm{cot}^2\beta }{2}}\left({\displaystyle \frac{1-4x}{x(1-z)(1-x)}}+{\displaystyle \frac{3x}{(1-x)^2(z-x)}}\mathrm{ln}x+{\displaystyle \frac{z^2-4xz}{x(1-z)^2(z-x)}}\mathrm{ln}z\right)`$ (6) $`+`$ $`{\displaystyle \frac{\mathrm{cot}^4\beta }{4x}}\left({\displaystyle \frac{1+z}{(1-z)^2}}+{\displaystyle \frac{2z}{(1-z)^3}}\mathrm{ln}z\right),`$ $`B(x,z)`$ $`=`$ $`\mathrm{cot}^2\beta \left({\displaystyle \frac{4xz}{2(z-x)(1-z)}}\mathrm{ln}z-{\displaystyle \frac{3x}{2(1-x)(z-x)}}\mathrm{ln}x\right)`$ (7) $`-{\displaystyle \frac{1}{4}}\mathrm{cot}^4\beta \left({\displaystyle \frac{1}{1-z}}+{\displaystyle \frac{1}{(1-z)^2}}\mathrm{ln}z\right),`$ with $`z=m_{H^+}^2/m_t^2`$. The functions $`A(x,z)`$ and $`B(x,z)`$ denote the new contributions from the charged Higgs. Because we have dropped in (5) the terms which are proportional to a factor less than $`m_bm_s/M_W^2`$ at the amplitude level, the limit $`\mathrm{cot}\beta \to 0`$ corresponds to the SM case.
As in the SM, the result depends on the relative CKM phase between the terms with the $`m_c^2/m_W^2`$ and the CKM suppressed contributions. In Fig. 1 we plot $`R`$ as a function of $`\mathrm{cot}\beta `$, where $$R=\frac{|yV_{cd}V_{cs}^{*}\left[g(x,y)+B(x,z)\right]|}{|V_{td}V_{ts}^{*}\left[f(x)+A(x,z)\right]|}$$ (8) is the ratio between the amplitudes induced by the $`m_c^2/m_W^2`$ and by the CKM suppressed terms. It is clear that the SM is the limit with the maximal importance of the $`m_c^2/m_W^2`$ contribution, while in the limit of large $`\mathrm{cot}\beta `$ this contribution is negligible. In the numerical calculation, we use $`m_b=4.5`$ GeV, $`m_c=1.5`$ GeV, $`|V_{ts}|=0.04`$, $`|V_{td}|=0.01`$ and $`|V_{cd}|=0.22`$. We set the relative phase between the two terms in (5) to zero, which corresponds to a maximal constructive interference. In Fig. 2, we show the numerical results for the branching ratio of the $`b\to ss\overline{d}`$ decay as a function of $`\mathrm{cot}\beta `$ for different charged Higgs masses. We have taken $`m_{H^+}`$ in the range $`100`$–$`800`$ GeV, with $`\mathrm{cot}\beta `$ ranging from 0.1 to 5. Some semi-quantitative considerations suggest a small $`\mathrm{cot}\beta `$ ($`<2`$),<sup>3</sup><sup>3</sup>3We refer the reader to for more detailed discussions. in which region the branching ratio for the $`b\to ss\overline{d}`$ decay is unobservable. However, phenomenologically a $`\mathrm{cot}\beta `$ as large as 5 is still consistent with the low energy data and the direct search for the Higgs boson in top decays. In this parameter space of large $`\mathrm{cot}\beta `$ the charged Higgs box diagrams are sizable. We note that in the MSSM $`\mathrm{cot}\beta <1`$ is preferred in model building, corresponding to an unobservable contribution from the charged Higgs box diagrams in this model . As can be seen in Fig. 2, the branching ratio in the Model I and II can be of the order of $`10^{-8}`$ in the large $`\mathrm{cot}\beta `$ region.
The search for the decay $`b\to ss\overline{d}`$ and its hadronic channels $`B^\pm \to K^\pm K^\pm X`$ in B experiments will further constrain the parameters in the THDMs. ## 3 In the THDM III, there exists the Yukawa interaction $$ℒ=\xi _{ij}\overline{Q}_{i,L}\varphi _2D_{j,R}+(\mathrm{the}\mathrm{up}\mathrm{quark}\mathrm{sector})+h.c..$$ (9) Here, the scalar Higgs doublet $`\varphi _2`$ mediates the FCNC transitions $`d_i\to d_j`$ at the tree level, if the coupling $`\xi _{ij}`$ is nonzero. As discussed in the literature, FCNC effects induced by the loop diagrams are always negligible compared to the tree level ones in the Model III , hence the box diagrams with charged Higgs can be safely dropped for the present decay. We will neglect the unimportant QCD correction since no GIM-like cancellation happens in the process. The couplings in (9) are constrained strongly from the neutral meson mixing by the requirement that the FCNC contribution to the mixing from the interactions in (9) does not exceed its experimental value. In the $`K\overline{K}`$ and $`B_s\overline{B}_s`$ mixing, the dominant contributions are proportional to $`\xi _{sd}^2`$ and $`\xi _{sb}^2`$, respectively $$\mathrm{\Delta }M_F=2\xi _{sq}^2\left(\frac{M_S^F}{m_h^2}+\frac{M_P^F}{m_A^2}\right),$$ (10) with $`F=K,B_s`$, and $`q=d,b`$, respectively, and $`M_S^F`$ $`=`$ $`{\displaystyle \frac{1}{6}}\left(f_F^2M_F+{\displaystyle \frac{f_F^2M_F^3}{(m_s+m_q)^2}}\right),`$ $`M_P^F`$ $`=`$ $`{\displaystyle \frac{1}{6}}\left(f_F^2M_F+{\displaystyle \frac{11f_F^2M_F^3}{(m_s+m_q)^2}}\right).`$ (11) Here $`m_h`$ and $`m_A`$ are the masses of the neutral scalar and pseudoscalar Higgs bosons. Note that we have taken $`\xi _{ij}=\xi _{ji}`$ while it is not difficult to generalize to the case without this assumption. In numerical calculations, we use $`f_K=160`$ MeV, $`f_{B_s}=200`$ MeV, $`\mathrm{\Delta }M_K=3.491\times 10^{-15}`$ GeV .
Taking $`m_A=m_h\equiv m_H`$, we obtain the bound $`{\displaystyle \frac{\xi _{sd}}{m_H}}<8.3\times 10^{-8}\mathrm{GeV}^{-1}`$ (12) from $`\mathrm{\Delta }M_K`$. The experimental lower limit from $`B_s\overline{B}_s`$ mixing $`\mathrm{\Delta }M_{B_s}>5.2\times 10^{-12}\mathrm{GeV}\text{[12]},`$ (13) gives no constraint on $`\xi _{sb}/m_H`$, since the contribution in the SM itself exceeds this number. Furthermore, free from any assumption made for the FCNC Higgs couplings in the lepton sector, the bounds from $`B_s\to l_il_j`$ and $`B\to Kl_il_j`$ ($`l_i,l_j=e,\mu \mathrm{or}\tau `$) do not exclude even $`\xi _{sb}/m_H`$ as large as $`10^{-1}`$ . In the presence of the interactions (9), $`b\to ss\overline{d}`$ can be induced by a tree diagram exchanging the neutral Higgs bosons $`h`$ (scalar) and $`A`$ (pseudo-scalar), with the amplitude $$𝒜=\frac{i}{2}\xi _{sb}\xi _{sd}\left(\frac{1}{m_h^2}(\overline{s}b)(\overline{s}d)-\frac{1}{m_A^2}(\overline{s}\gamma _5b)(\overline{s}\gamma _5d)\right).$$ (14) The decay rate is thus $$\mathrm{\Gamma }=\frac{m_b^5}{3072(2\pi )^3}|\xi _{sb}\xi _{sd}|^2\left\{11\left(\frac{1}{m_h^4}+\frac{1}{m_A^4}\right)+\frac{2}{m_h^2m_A^2}\right\}.$$ (15) The numerical results are shown in Fig.3 as a function of $`|\xi _{sb}\xi _{sd}|/m_H^2`$. From Fig. 3, it can be seen that the decay $`b\to ss\overline{d}`$ is observable in the Model III, if $`|\xi _{sb}\xi _{sd}|/m_H^2>10^{-10}`$ GeV<sup>-2</sup>, which corresponds to a branching ratio around $`10^{-9}`$. This requires at least $`|\xi _{sb}/m_H|>10^{-3}`$ GeV<sup>-1</sup>. Corresponding to this number, the neutral Higgs contribution to $`\mathrm{\Delta }M_{B_s}`$ is $`10^6`$ times larger than its present lower limit. To our knowledge, such a large $`\mathrm{\Delta }M_{B_s}`$ is difficult to exclude. ## 4 In summary, we have presented a study of the $`b\to ss\overline{d}`$ process in several THDMs.
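With degenerate neutral Higgs masses ($`m_h=m_A=m_H`$), Eq. (15) reduces to $`\mathrm{\Gamma }\propto (|\xi _{sb}\xi _{sd}|/m_H^2)^2`$, so the branching ratio scales quadratically in that combination. A hedged sketch (the anchor point is the one quoted in the text, a branching ratio of about $`10^{-9}`$ at $`|\xi _{sb}\xi _{sd}|/m_H^2=10^{-10}`$ GeV<sup>-2</sup>; nothing here is recomputed from first principles):

```python
def br_bssd_model3(coupling_over_mH2, br_ref=1e-9, x_ref=1e-10):
    """Quadratic scaling of BR(b -> s s dbar) in Model III with
    |xi_sb * xi_sd| / m_H^2 (in GeV^-2), anchored to the benchmark
    point quoted in the text (an assumption, not a calculation)."""
    return br_ref * (coupling_over_mH2 / x_ref) ** 2

br = br_bssd_model3(3e-10)   # three times the benchmark coupling combination
```

Tripling the coupling combination thus raises the estimated branching ratio by roughly an order of magnitude, which is the sensitivity driver behind the Fig. 3 curves.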
Firstly, we confirmed that the charged Higgs box contribution in the MSSM is indeed negligible , while on the other hand in Models I and II this contribution can induce observable effects at the $`10^{-8}`$ level for the branching ratio. A large $`B_s\overline{B}_s`$ mixing, at least $`10^6`$ times larger than its present lower limit, is required in Model III for this process to be observable. Combining the above results with those of Ref. , we reemphasize that the search for the processes we recommended , like $`B^-\to K^-K^-\pi ^+`$, will serve immediately to set better limits on the parameters of R-parity violating theories; at a later stage, when stronger experimental limits become available, it would constitute a direct check of the various non-Standard-Model theories investigated here and in Ref. . The work of KH and DXZ is partially supported by the Academy of Finland (no. 37599). The work of PS is partially supported by the Fund for Promotion of Research at the Technion.
# Cosmology from Type Ia Supernovae ## Abstract This Lawrence Berkeley National Laboratory reprint is a reduction of a poster presentation from the Cosmology Display Session #85 on 9 January 1998 at the American Astronomical Society meeting in Washington D.C. It is also available on the World Wide Web at http://www-supernova.LBL.gov/ This work has also been referenced in the literature by the pre-meeting abstract citation: Perlmutter et al., B.A.A.S., volume 29, page 1351 (1997). This presentation reports on first evidence for a low-mass-density/positive-cosmological-constant universe that will expand forever, based on observations of a set of 40 high-redshift supernovae. The experimental strategy, data sets, and analysis techniques are described. More extensive analyses of these results with some additional methods and data are presented in the more recent LBNL report #41801 (Perlmutter et al., 1998; Ap.J., in press), astro-ph/9812133. LBNL-42230. Presented at the January 1998 Meeting of the AAS
# References Is the Statistical Interpretation of Quantum Mechanics Implied by the Correspondence Principle ? Kurt Gottfried<sup>1</sup><sup>1</sup>1 Presented at the conference “Epistemological and Experimental Perspectives on Quantum Physics,” Vienna, September 3-6, 1998; to appear in 7th. Yearbook, Institute Vienna Circle, D. Greenberger, W.L. Reiter and A. Zeilinger (eds.), Kluwer: Dordrecht 1999. Laboratory for Nuclear Studies, Cornell University, Ithaca NY 14853 November 29, 1998 Our impresario, Anton Zeilinger, ignored my pleas of ignorance and prevailed on me to talk about my discussions with John Bell about the foundations of quantum mechanics. This ‘debate’ was aborted by John’s tragic death shortly after we last met at a wonderful workshop in Amherst attended by several people in this audience. At that time, John’s last paper “Against Measurement” was about to be published. It featured a wonderfully barbed attack on the treatment of measurement in my 1966 textbook. I was delighted that the most profound student of quantum mechanics since the Founding Fathers, and an old friend from CERN, had paid close attention to what I had written, because with but one exception, no one publishing in the field had ever mentioned my work even when espousing a position that I had taken long before. John would not have changed his views had he been able to hear my response at the CERN memorial symposium. Furthermore, I was far from satisfied with what I published back then. On the other hand, I continue to find John’s critique of orthodoxy to be rather less overwhelming than his superb rhetoric. Nevertheless, what I will say was stimulated by reflecting on the views John espoused in our last conversations and in his last paper. I In large part, the interpretation of quantum mechanics is so controversial because the basic equations of the theory seem not to say what they mean in terms of pre-existing concepts.
In retrospect, the successful developments of physics followed a clear path from the Principia until it disappeared into a fog in 1925. Whatever Thomas Kuhn and his disciples may say, before the advent of quantum mechanics physics had ultimately proven to be a cumulative pursuit, with new knowledge built on prior concepts. Newton’s equations of motion defined all its new concepts in terms of then accepted conceptions of space and time. The next giant steps concerned heat and electromagnetism. Statistical mechanics, while full of subtleties, nevertheless demonstrated how thermodynamics could be related to an underlying Newtonian description. In electrodynamics, the meaning of the new concept, the electromagnetic field, is defined by Maxwell’s equations and the Lorentz force law by means of Newton’s equations. Admittedly, special relativity introduced a new conception of space and time, but their meaning was defined by a more penetrating examination of pre-existing concepts using familiar tools: measuring rods and clocks. General relativity, while profoundly new, is, if you pardon the expression, trivial, provided you are freely falling and short-sighted. Furthermore, Einstein’s equations tell you that – you do not need him whispering in your ear. Quantum mechanics does not fit this pattern, it would appear. On its own, the formalism does not seem to disclose what the wave function means, and gurus such as Born, Bohr and Heisenberg were needed to translate the equations into our language. I acknowledged this in my response to John: > If one were to hand the Schrödinger equation to Maxwell, and tell him that it describes the structure of matter in terms of various point particles whose masses and charges are to be seen in the equation, this knowledge would not, by itself, enable Maxwell to figure out what is meant by the wave function.
Eventually he would need help: “Oh, I forgot to tell you that according to Rabbi Born, a great thinker in the yeshiva that flourished in Göttingen in the early part of the 20th century, $`|\mathrm{\Psi }(𝒓_1,\mathrm{},t)|^2`$ is …” My aim is to explore whether the situation is really this bleak – to ask to what extent, if any, the statistical interpretation of quantum mechanics is revealed by the theory’s formalism. I make no claim to having a fully satisfactory answer to this question. What I have to say is really the reading of an old book from the back towards the front. I now put myself into the shoes of this fictional Maxwell. The task of fathoming the physical content of the Schrödinger equation leads me to adopt the following assumptions and ground rules: * The mathematical formalism of orthodox quantum mechanics provides a complete and consistent description of Nature as it stands. The implications of the formalism, such as the superposition principle, are not to be tampered with. * It is not permissible to invoke the statistical interpretation, the collapse of the wave function, or the influence of some dynamical environment. * If quantum mechanics is indeed complete, there must exist conditions under which classical mechanics provides an essentially exact approximation to quantum mechanics for systems having properties that are defined by this very limit. The goal is to examine the classical limit of the quantum mechanical formalism to learn the extent to which this limit compels the statistical interpretation. If such a link could be established it would continue the tradition of connecting new developments in physics to the conceptual roots of the discipline. I will present an argument which claims that the Schrödinger equation, when examined in the classical limit, leads to the statistical interpretation for degrees of freedom described by finite-dimensional Hilbert spaces having no classical counterpart. 
In contrast, the argument, as it stands here, does not lead to the statistical interpretation for the degrees of freedom that have a classical counterpart. Incidentally, my title is purposely close to that of a remarkable paper by Van Vleck written 70 years ago, “The Correspondence Principle in the Statistical Interpretation of Quantum Mechanics,” which had a somewhat similar goal. So I am singing for my supper from an old though unfinished score. II Schrödinger’s very first equation in his first paper on wave mechanics is the classical Hamilton-Jacobi equation; then he made the great leap to his wave equation. Is there a return path? The first step is to note that the Schrödinger equation implies that in the naive classical limit $`\hbar \to 0`$, the wave function $`\mathrm{\Psi }`$ has an essential singularity, which then motivates the WKB Ansatz $$\mathrm{\Psi }=e^{i\mathrm{\Theta }(𝒒,t)/\hbar },\mathrm{\Theta }=S+\frac{\hbar }{i}U+O(\hbar ^2),$$ (1) where $`𝒒\equiv (𝒒_1,\mathrm{\dots },𝒒_N)`$ is a point in the configuration space $`𝒞`$ of an $`N`$ particle system. At this first stage, I ignore “internal” variables, such as spin, inhabiting finite dimensional Hilbert spaces with no classical counterpart. The Schrödinger equation then produces the following familiar facts: 1. Ansatz (1) is only legitimate – i.e., only produces a respectable asymptotic series – if $$\frac{|\partial \mathrm{\Theta }/\partial 𝒒_k|^2}{|\partial ^2\mathrm{\Theta }/\partial 𝒒_l^2|}\gg \hbar ;$$ (2) when this condition is violated the approximation (1) is not valid, and the Schrödinger equation itself must be used. 2. The leading term $`S`$ satisfies the classical Hamilton-Jacobi equation in the region accessible to classical motion. Recall that the classical trajectories are the curves in $`𝒞`$ everywhere normal to the surfaces of constant $`S`$. Thus $`\mathrm{\Psi }`$ is related not to one classical trajectory, but to a family or set of such trajectories, $`\{𝒒(t)\}`$. 3.
The next order term $`U`$ is related to $`S`$ by $$\frac{\partial U}{\partial t}+\underset{k=1}{\overset{N}{\sum }}\frac{1}{2m_k}\left(\frac{\partial ^2S}{\partial 𝒒_k^2}+2\frac{\partial S}{\partial 𝒒_k}\cdot \frac{\partial U}{\partial 𝒒_k}\right)=0.$$ (3) Thus $`U`$, which is the $`\hbar `$-independent factor in $`\mathrm{\Psi },`$ is real in the classically accessible regions, and therefore $$w(𝒒,t)\equiv \mathrm{exp}[2U(𝒒,t)]=\underset{\hbar \to 0}{lim}|\mathrm{\Psi }|^2.$$ (4) When rephrased in terms of $`w`$, (3) becomes a classical continuity equation in $`𝒞`$: $$\frac{\partial w}{\partial t}+\underset{k}{\sum }\frac{\partial }{\partial 𝒒_k}(w\dot{𝒒}_k)=0,$$ (5) where $`\dot{𝒒}_k`$ is the velocity at time $`t`$ of the $`k^{\mathrm{th}}`$ particle as determined by the classical equations of motion. Thus $`w`$ plays the role of a density and $`w\dot{𝒒}`$ that of a 3$`N`$-dimensional current vector in $`𝒞`$; i.e., Eq. 5 is the $`\hbar \to 0`$ limit of the Schrödinger continuity equation. 4. The quantity $`w(𝒒,t)`$, apart from an overall arbitrary constant, is Van Vleck’s determinant $`D`$, a purely classical quantity (involving derivatives of $`S`$). For a given $`|\mathrm{\Psi }|^2`$, $`D`$ determines how the trajectories that form the set $`\{𝒒(t)\}`$ are “populated.” In short, in the classical limit the Schrödinger equation does not describe a single system, but a population of replicas of such a system moving along the trajectories $`\{𝒒(t)\}`$. As $`\mathrm{\Psi }`$ depends only on the degrees of freedom of a single system, and because experiments can be done on individual systems, this population is to be visualized as a set of identical specimens which, one at a time, follow one or another of the allowed classical trajectories $`\{𝒒(t)\}`$, and which, in retrospect, produced a population of these trajectories as specified by $`w(𝒒,t).`$ For any specific solution $`\mathrm{\Psi }`$ of a specific Schrödinger equation, that is all that can be said about the trajectories and their population.
To have better knowledge of which trajectories are being followed given this $`\mathrm{\Psi }`$, one must intervene by changing the Hamiltonian in the Schrödinger equation, e.g., with a potential that does not allow trajectories to proceed unless they pass through an aperture of dimension $`a`$ with edges smooth enough to leave the WKB approximation valid. Thereafter, the trajectories and their population will change; i.e., $`\mathrm{\Psi }`$ will change. The preceding discussion does not purport to be a derivation of the Born interpretation of $`\mathrm{\Psi }(𝒒,t)`$. To avert such a misperception, I have used the word “population” instead of probability. Furthermore, the classical description of what can be extracted from $`\mathrm{\Psi }`$ is only valid as long as the inequality (2) is satisfied, i.e., if the system is sufficiently heavy, the forces sufficiently smooth, and the time interval over which the semiclassical description is needed is sufficiently short. The Born interpretation has no such restrictions, quite aside from the profound difference between quantum mechanical probabilities and probability distributions for a population of identical systems obeying the laws of classical mechanics. The restrictions imposed by the semiclassical approximation do not emasculate the argument being made here. For the purpose at hand, it suffices that systems exist that, on the one hand, have properties which allow some of their degrees of freedom to be described semiclassically under appropriate circumstances, and on the other, have inherently nonclassical degrees of freedom that must be treated in strict accordance with the laws of quantum mechanics. III Consider, then, such a system. The degrees of freedom with a classical counterpart will be the position $`𝒒`$ in a configuration space $`𝒞`$, while the degrees of freedom with no such counterpart live in a finite dimensional “internal” Hilbert space $`ℋ`$.
I should explain that the rules of my game allowed the fictional Newton to know the physical phenomena of electrodynamics when he was asked to unravel the meaning of the new concepts defined by Maxwell’s equations. In the same spirit, the fictional Maxwell is to know the phenomena that quantum mechanics supposedly accounts for. For example, he would be aware of the Stern-Gerlach experiment, and thus know that if an atomic species has a magnetic moment, this moment only displays a discrete set of orientations, while from the Schrödinger equation he would know that angular momentum is quantized. In this scenario, therefore, the internal space of the systems of interest, $`ℋ`$, is spanned by a finite set of states, which without loss of anything essential can be taken to be the spinors $`\chi _\mu (𝒎)`$, where $`𝒎`$ is the direction of the quantization axis and $`\mu =j,j-1,\mathrm{\dots },-j`$. As in the paradigmatic example of the Stern-Gerlach experiment, the Hamiltonian is assumed to contain a time-independent perturbation $`H_{\mathrm{ext}}(𝒒)`$ which is an operator both in $`𝒞`$ and $`ℋ`$, is nonzero only in some bounded region $`𝒞_{\mathrm{ext}}`$ of configuration space, and is brought to diagonal form by the basis whose quantization axis is $`𝒏`$. Let $`\psi _{\mathrm{in}}(𝒒,t)`$ be a wave function that is accurately approximated by the WKB Ansatz, and which, for $`t<t_1`$, describes a set of classical trajectories $`\{𝒒_{\mathrm{in}}(t)\}`$ that form a beam moving towards $`𝒞_{\mathrm{ext}}`$, which they enter at $`t=t_1.`$ Furthermore, let $`\chi _\mu (𝒎)`$ be the internal state of this beam, so that the incident state is represented by $$\mathrm{\Psi }_{\mathrm{in}}(𝒒,t)=\psi _{\mathrm{in}}(𝒒,t)\chi _\mu (𝒎).$$ (6) For $`t_1<t<t_2`$ this beam traverses the region $`𝒞_{\mathrm{ext}}`$ where the perturbation $`H_{\mathrm{ext}}(𝒒)`$ exists, and provided this is sufficiently smooth, the WKB approximation will continue to be valid.
The state (6) is then expanded in the diagonal basis, and for $`t>t_1`$ evolves into $$\mathrm{\Psi }(𝒒,t)=\underset{\sigma =-j}{\overset{j}{\sum }}c_\sigma \psi _\sigma (𝒒,t)\chi _\sigma (𝒏),c_\sigma =(\chi _\sigma ^{\dagger }(𝒏),\chi _\mu (𝒎)).$$ (7) Each $`\psi _\sigma `$ has a phase which is a solution of a separate Hamilton-Jacobi equation with a potential that depends on the quantum number $`\sigma `$, and describes a set of trajectories $`\{𝒒_\sigma (t)\}`$ that move in distinct directions. Each such set has a density $`w_\sigma (𝒒,t)`$ satisfying a separate continuity equation, which means that $$\int 𝑑𝒒w_\sigma (𝒒,t)=C_\sigma ,t_1<t<t_2,$$ (8) where the constants $`C_\sigma `$ could depend on $`\sigma `$. But $`\psi _\sigma (𝒒,t)\to \psi _{\mathrm{in}}(𝒒,t_1)`$ as $`t\to t_1`$ from above, and therefore all the $`C_\sigma `$ are equal: $$\int 𝑑𝒒w_\sigma (𝒒,t)=\int 𝑑𝒒w_{\mathrm{in}}(𝒒,t)=1,$$ (9) where the last equality is just a convention. By design, the region in which the perturbation $`H_{\mathrm{ext}}`$ acts is large enough to produce beams that are well separated for $`t>t_2`$, so that for such times the density is $$w(𝒒,t)=\underset{\sigma }{\sum }|c_\sigma |^2w_\sigma (𝒒,t).$$ (10) Because of (10), the fraction of the incident population that ends up in the beam bearing the label $`\sigma `$ is $`|c_\sigma |^2.`$ All but one of these beams, with $`\sigma =\rho `$, can then be eliminated by another interaction term in $`H`$. This filtered beam can be sent through a second arrangement of the Stern-Gerlach variety with any other orientation $`𝒌`$, which will produce a set of beams with population fractions $$|(\chi _\rho ^{\dagger }(𝒏),\chi _\sigma (𝒌))|^2.$$ (11) All the familiar statements about the complex coefficients $`c_\sigma `$ as probability amplitudes emerge, therefore, though thus far only expressed as fractions of various populations that pass through some combination of filters and fields.
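The population fractions (11) can be checked explicitly in the simplest case $`j=\frac{1}{2}`$, where the overlap of eigenspinors along two axes separated by an angle $`\theta `$ gives $`\mathrm{cos}^2(\theta /2)`$ and $`\mathrm{sin}^2(\theta /2)`$. A minimal sketch (the spinor conventions are the standard ones, not taken from the text):

```python
import math
import cmath

def spinor_up(theta, phi):
    # Eigenspinor of sigma.n with eigenvalue +1,
    # n = (sin t cos p, sin t sin p, cos t)
    return (math.cos(theta / 2.0), cmath.exp(1j * phi) * math.sin(theta / 2.0))

def spinor_down(theta, phi):
    # The orthogonal eigenspinor (eigenvalue -1)
    return (-math.sin(theta / 2.0), cmath.exp(1j * phi) * math.cos(theta / 2.0))

def population(chi_a, chi_b):
    # |(chi_a, chi_b)|^2 -- the population fraction of eq. (11)
    amp = (complex(chi_a[0]).conjugate() * chi_b[0]
           + complex(chi_a[1]).conjugate() * chi_b[1])
    return abs(amp) ** 2

theta = 1.1  # angle between the two quantization axes (phi = 0)
p_up = population(spinor_up(0.0, 0.0), spinor_up(theta, 0.0))
p_dn = population(spinor_up(0.0, 0.0), spinor_down(theta, 0.0))
print(p_up, p_dn)  # cos^2(theta/2) and sin^2(theta/2); they sum to 1
```

The two fractions always sum to one, which is the spin-1/2 version of the statement that the populations (10) exhaust the incident beam.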
This then leads to the question of whether a label “$`\sigma `$ along $`𝒏`$” can be assigned to a specimen following a trajectory in the initial set $`\{𝒒_{\mathrm{in}}(t)\}`$, before it enters the force field that produces the separation into the distinct sets $`\{𝒒_\sigma (t)\}`$. If this could be done, then each member of the population would have an inherent property called $`\sigma `$, which is revealed by the subsequent segregation into the distinct sets $`\{𝒒_\sigma (t)\}`$. But this cannot be done, because the initial internal state $`\chi _\mu (𝒎)`$ in (6) can be expanded in any of the infinity of bases in $`ℋ`$, and the appropriate choice is only revealed after the beam has entered the separating field. > Thus an intrinsic property “$`\sigma `$ along $`𝐧`$” cannot be assigned to individual specimens; all that can be said is that if a specimen passed through the field oriented along $`𝐧`$, the probability that it will emerge in the population “$`\sigma `$” is $`|c_\sigma |^2.`$ This is just the orthodox meaning of probabilities in quantum mechanics: the probability of a specific outcome as revealed by measurement, John Bell notwithstanding. The same combination of semiclassical and quantum mechanical descriptions can be given for experiments of the Bohm-EPR type – for example, a system at rest that disintegrates in two fragments which then follow opposed classical trajectories $`\{𝒒_1(t)\}`$ and $`\{𝒒_2(t)\}`$, and a suitably correlated state in the joint internal Hilbert space $`ℋ_1\otimes ℋ_2`$. When the widely separated fragments are passed through fields that produce distinct trajectories for the various eigenvalues $`(\sigma _1,\sigma _2)`$ along the directions $`(𝒏_1,𝒏_2),`$ they will produce correlations that violate the Bell inequalities, with all the familiar implications that follow therefrom. I thank David Mermin for asking several pointed questions.
# Hierarchy Problem in the Shell-Universe Model ## Abstract In the model where the Universe is considered as a thin shell expanding in 5-dimensional hyper-space, it is possible to obtain a single scale for particle theory, corresponding to the 5-dimensional cosmological constant and the Universe thickness. PACS numbers: 04.50.+h, 98.80.Cq Several authors in the physics literature have speculated about the possibility that our Universe may be a thin membrane in a large dimensional hyper-Universe (for simplicity here we consider the case of five dimensions). This approach is an alternative to the conventional Kaluza-Klein picture that extra dimensions are curled up to an unobservable size. In this paper we want to consider the Universe as a bubble expanding in five-dimensional space-time. The shell-Universe model does not contradict present-day experiments and is supported by at least two observed facts. The first is the isotropic recession of galaxies, which in the closed-Universe model is usually explained as the expansion of a bubble in five dimensions. The second is the existence of a preferred frame in the Universe in which the relic background radiation is isotropic. In the framework of the closed-Universe model without boundaries this can also be explained if the Universe is a 3-dimensional sphere and the mean velocity of the background radiation is zero with respect to its center in the fifth dimension. In shell-Universe models the expansion rate of the Universe should depend not only on the matter density on the shell, but also on the properties of matter in the inner and outer regions. This can give rise to an effect similar to that of the hypothetical dark matter. Also, some authors want to introduce action at a distance without ultrafast communication as a possible connection of matter through the fifth dimension . In the case of non-compact extra dimensions one needs a mechanism to confine matter inside the 4-dimensional manifold.
Usually it is considered that trapping is the result of the existence of a special solution of the 5-dimensional Einstein equations. This trapping has to be gravitationally repulsive in nature and can be produced by a large cosmological constant or by some matter with negative energy possibly filling the space interior and exterior to the shell. In these models the zero-zero component (the only component we need here) of the metric tensor slightly generalizes the standard Kaluza-Klein one, $$\stackrel{~}{g}_{00}=\sigma (y)g_{00}(x^\mu ),$$ (1) where $`x^\mu `$ are ordinary coordinates of 4-dimensional space-time and $`y`$ is the fifth coordinate. Trapping is ruled by the potential $$\sigma (y)\sim \mathrm{exp}(E^2|y|),$$ (2) which is a solution of the Einstein equations and grows rapidly towards the fifth dimension $`y`$. The integration constant $`E^2`$ (corresponding to the width of the 4-dimensional world $`ϵ`$) is usually taken proportional to the 5-dimensional cosmological constant, $$E^2\mathrm{\Lambda }ϵ^2,$$ (3) to solve the cosmological constant problem in four dimensions. Indeed, let us consider the Newtonian approximation of the 5-dimensional Einstein equations with cosmological term for a trapped point-like source, $$(\mathrm{\Delta }-\mathrm{\Lambda })\stackrel{~}{g}_{00}=6\pi ^2GM\delta (r)\delta (y),$$ (4) where $`\mathrm{\Delta }`$ is the 4-dimensional Laplacian. Using (2) and (3) and separating variables we obtain the ordinary Newton formula without the cosmological term, $$g_{00}=1-2gM/r,$$ (5) where $$g\equiv G/ϵ$$ (6) is the 4-dimensional gravitational constant. It seems that the only new idea of the last decade in the subject of multidimensional models is an attempt to solve the hierarchy problem . The hierarchy problem is a puzzle concerning the masses of scalar fields. A light Higgs with mass $`m_H\lesssim 10^3GeV`$ is needed in the Standard Model on the electroweak scale.
But masses of scalar fields are quadratically divergent in the loop expansion in quantum field theory, and renormalization effects should drive these masses up to the very large Planck scale $$M_P\sim g^{-1/2}\sim 10^{19}GeV,$$ (7) the natural cut-off of any quantum field theory. In the case of higher-dimensional theories there is a possibility to obtain a very large Planck scale of the 4-dimensional world if the higher-dimensional fundamental Planck scale is, for example, the gauge unification scale $`m`$. Recently it was shown that in higher-dimensional models with compact internal dimensions of size $`\rho `$, using Newton’s law in D+4 dimensions, at distances much larger than the size of the internal dimensions $$M_P^2\sim \rho ^Dm^{2+D}.$$ (8) It was claimed that this consideration can be physical only in more than five dimensions, since for $`D=1`$ and $`m\sim 10^3GeV`$ the size of the internal dimension will be huge, $`\rho \sim 10^{13}cm`$. We would like to note here that this mechanism also takes place in models with extended extra dimensions. The only thing we need is to replace the internal dimension radius $`\rho `$ in (8) with the Universe thickness $`ϵ`$. Then the formula (6) is just the same as (8) for the case of five dimensions. Of course this does not mean that the thickness of our world in the fifth dimension $`ϵ`$ is of the order of $`10^{13}cm`$. We must take into account that the measured mass of the Higgs particle in four dimensions $`m_H`$ itself may be very different from the mass in five dimensions, $$m_H^2\sim m^2-\mathrm{\Lambda }.$$ (9) For the case of $`m^2\approx \mathrm{\Lambda }`$ we can obtain an arbitrarily small value for $`m_H`$, in particular the $`m_H\lesssim 10^3GeV`$ needed in the Standard Model. So even in the 5-dimensional model we can have only one scale, $$m^3\sim \mathrm{\Lambda }^{3/2}\sim M_P^2/\mathrm{\Lambda }ϵ^3,$$ (10) which corresponds to the thickness of the Universe.
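A rough numerical reading of eq. (8) for $`D=1`$, with the fundamental scale set to $`m\sim 10^3`$ GeV as in the text (the conversion factor $`\mathrm{}c`$ is standard; the order-one prefactors are dropped):

```python
# Eq. (8) with D = 1, solved for the size: eps ~ M_P^2 / m^3
M_Planck = 1.2e19     # GeV, 4-dimensional Planck scale
m = 1.0e3             # GeV, assumed fundamental (gauge-unification-like) scale
hbar_c = 1.973e-14    # GeV*cm, conversion from GeV^-1 to cm

eps_GeV = M_Planck**2 / m**3   # size in GeV^-1
eps_cm = eps_GeV * hbar_c      # size in cm
print(f"eps ~ {eps_cm:.0e} cm")
```

This naive estimate lands at roughly $`10^{15}`$ cm; dropped prefactors and the choice of Planck-mass convention account for the gap to the $`\rho \sim 10^{13}`$ cm quoted above — either way, a macroscopic size, which is why the single-scale interpretation via (9)–(10) is needed.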
The only parameter of the model, $`\mathrm{\Lambda }`$, can be measured, for example, in the planned sub-millimeter measurements of gravity, since at distances of the order of the shell thickness Newton’s law must change. For these distances, in the right-hand side of equation (4) we have the 4-dimensional delta function $`\delta (R)`$, where $`R`$ is the 4-dimensional radial coordinate. So we cannot separate the variables, and the cosmological constant $`\mathrm{\Lambda }`$ can no longer be hidden. The solution is $$\stackrel{~}{g}_{00}=\frac{2}{z}\left[I_1(z)-\mathrm{\Lambda }GMK_1(z)\right],$$ (11) where $`z=\mathrm{\Lambda }^{1/2}R`$ and $`I_1,K_1`$ are modified Bessel functions of order one. In (11) the integration constants are chosen to give Newton’s 5-dimensional law $$\stackrel{~}{g}_{00}=1-2GM/R^2$$ (12) in the limit $`z\ll 1`$. So in the five-dimensional shell-Universe model, besides addressing some principal problems of cosmology, there is a very attractive possibility to have just one fundamental scale, corresponding to the thickness of the shell and ruled by the 5-dimensional cosmological constant.
# Model-Independent Analysis of 𝑩→𝝅𝑲 Decays and Bounds on the Weak Phase 𝜸 ## 1 Introduction The CLEO Collaboration has recently reported the observation of some rare two-body decays of the type $`B\to \pi K`$, as well as interesting upper bounds for the decays $`B\to \pi \pi `$ and $`B\to K\overline{K}`$ . In particular, they find the CP-averaged branching ratios $`{\displaystyle \frac{1}{2}}\left[\text{Br}(B^0\to \pi ^-K^+)+\text{Br}(\overline{B}^0\to \pi ^+K^-)\right]`$ $`=`$ $`(1.4\pm 0.3\pm 0.1)\times 10^{-5},`$ $`{\displaystyle \frac{1}{2}}\left[\text{Br}(B^+\to \pi ^+K^0)+\text{Br}(B^-\to \pi ^-\overline{K}^0)\right]`$ $`=`$ $`(1.4\pm 0.5\pm 0.2)\times 10^{-5},`$ $`{\displaystyle \frac{1}{2}}\left[\text{Br}(B^+\to \pi ^0K^+)+\text{Br}(B^-\to \pi ^0K^-)\right]`$ $`=`$ $`(1.5\pm 0.4\pm 0.3)\times 10^{-5}.`$ (1) This observation caused a lot of excitement, because these decays offer interesting insights into the relative strength of various contributions to the decay amplitudes, whose interference can lead to CP asymmetries in the decay rates. It indeed appears that there may be potentially large interference effects, depending on the magnitude of some strong interaction phases (see, e.g., ). Thus, although at present only measurements of CP-averaged branching ratios have been reported, the prospects are good for observing direct CP violation in some of the $`B\to \pi K`$ or $`B\to K\overline{K}`$ decay modes in the near future. It is fascinating that some information on CP-violating parameters can be extracted even without observing a single CP asymmetry, from measurements of CP-averaged branching ratios alone. This information concerns the angle $`\gamma `$ of the so-called unitarity triangle, defined as $`\gamma =\text{arg}[-(V_{ub}^{*}V_{ud})/(V_{cb}^{*}V_{cd})]`$. With the standard phase conventions for the Cabibbo–Kobayashi–Maskawa (CKM) matrix, $`\gamma =\text{arg}(V_{ub}^{*})`$ to excellent accuracy.
There have been proposals for deriving bounds on $`\gamma `$ from measurements of the ratios $`R`$ $`=`$ $`{\displaystyle \frac{\tau (B^+)}{\tau (B^0)}}{\displaystyle \frac{\text{Br}(B^0\to \pi ^-K^+)+\text{Br}(\overline{B}^0\to \pi ^+K^-)}{\text{Br}(B^+\to \pi ^+K^0)+\text{Br}(B^-\to \pi ^-\overline{K}^0)}},`$ $`R_{*}`$ $`=`$ $`{\displaystyle \frac{\text{Br}(B^+\to \pi ^+K^0)+\text{Br}(B^-\to \pi ^-\overline{K}^0)}{2[\text{Br}(B^+\to \pi ^0K^+)+\text{Br}(B^-\to \pi ^0K^-)]}},`$ (2) whose current experimental values are $`R=1.07\pm 0.45`$ (we use $`\tau (B^+)/\tau (B^0)=1.07\pm 0.03`$) and $`R_{*}=0.47\pm 0.24`$. The Fleischer–Mannel bound $`R\ge \mathrm{sin}^2\gamma `$ excludes values around $`|\gamma |=90^{\circ }`$ provided that $`R<1`$. However, this bound is subject to theoretical uncertainties arising from electroweak penguin contributions and strong rescattering effects, which are difficult to quantify . The bound $$1-\sqrt{R_{*}}\le \overline{\epsilon }_{3/2}|\delta _{\mathrm{EW}}-\mathrm{cos}\gamma |+O(\overline{\epsilon }_{3/2}^2)$$ (3) derived by Rosner and the present author , where $`\delta _{\mathrm{EW}}=0.64\pm 0.15`$ accounts for electroweak penguin contributions, is less affected by such uncertainties; however, it relies on an expansion in the small parameter $$\overline{\epsilon }_{3/2}=\sqrt{2}R_{\mathrm{SU}(3)}\mathrm{tan}\theta _C\left[\frac{\text{Br}(B^+\to \pi ^+\pi ^0)+\text{Br}(B^-\to \pi ^-\pi ^0)}{\text{Br}(B^+\to \pi ^+K^0)+\text{Br}(B^-\to \pi ^-\overline{K}^0)}\right]^{1/2},$$ (4) whose value has been estimated to be $`\overline{\epsilon }_{3/2}=0.24\pm 0.06`$. Here $`\theta _C`$ is the Cabibbo angle, and the factor $`R_{\mathrm{SU}(3)}\approx f_K/f_\pi `$ accounts for SU(3)-breaking corrections.
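The central values of $`R`$ and $`R_{*}`$ follow directly from the CP-averaged CLEO branching fractions in eq. (1) and the lifetime ratio; a one-line check (the experimental errors are not propagated here):

```python
# CP-averaged branching fractions of eq. (1), in units of 1e-5
br_pimKp = 1.4   # (B0 -> pi- K+) average
br_pipK0 = 1.4   # (B+ -> pi+ K0) average
br_pi0Kp = 1.5   # (B+ -> pi0 K+) average
tau_ratio = 1.07  # tau(B+)/tau(B0)

R = tau_ratio * br_pimKp / br_pipK0       # first ratio of eq. (2)
Rstar = br_pipK0 / (2.0 * br_pi0Kp)       # second ratio of eq. (2)
print(R, Rstar)  # 1.07 and ~0.47, matching the values quoted in the text
```

The ~30–50% errors quoted on these ratios come from combining the statistical and systematic uncertainties of the individual branching fractions, which dominate over the lifetime-ratio error.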
Assuming the smallness of certain rescattering effects, higher-order terms in the expansion in $`\overline{\epsilon }_{3/2}`$ can be shown to strengthen the bound (3) provided that the value of $`R_{*}`$ is not much larger than indicated by current data, i.e., if $`R_{*}<(1-\overline{\epsilon }_{3/2}/\sqrt{2})^2\approx 0.7`$ . Our main goal in the present work is to address the question to what extent these bounds can be affected by hadronic uncertainties such as final-state rescattering effects, and whether the theoretical assumptions underlying them are justified. To this end, we perform a general analysis of the various $`B\to \pi K`$ decay modes, pointing out where theoretical information from isospin and SU(3) flavour symmetries can be used to eliminate hadronic uncertainties. Our approach will be to vary parameters not constrained by theory (strong-interaction phases, in particular) within conservative ranges so as to obtain a model-independent description of the decay amplitudes. An analysis pursuing a similar goal has recently been presented by Buras and Fleischer . Where appropriate, we will point out the relations of our work with theirs and provide a translation of notations. We stress, however, that although we take a similar starting point, some of our conclusions will be rather different from the ones reached in their work. In Section 2, we present a general parametrization of the various isospin amplitudes relevant to $`B\to \pi K`$ decays and discuss theoretical constraints resulting from flavour symmetries of the strong interactions and the structure of the low-energy effective weak Hamiltonian. We summarize model-independent results derived recently for the electroweak penguin contributions to the isovector part of the effective Hamiltonian and point out constraints on certain rescattering contributions resulting from $`B\to K\overline{K}`$ decays .
The main results of this analysis are presented in Section 2.6, which contains numerical predictions for the various parameters entering our parametrization of the decay amplitudes. The remainder of the paper deals with phenomenological applications of these results. In Section 3, we discuss corrections to the Fleischer–Mannel bound resulting from final-state rescattering and electroweak penguin contributions. In Section 4, we show how to include rescattering effects in the bound (3) at higher orders in the expansion in $`\overline{\epsilon }_{3/2}`$. Detailed predictions for the direct CP asymmetries in the various $`B\to \pi K`$ decay modes are presented in Section 5, where we also present a prediction for the CP-averaged $`B^0\to \pi ^0K^0`$ branching ratio, for which at present only an upper limit exists. In Section 6, we discuss how the weak phase $`\gamma `$, along with a strong-interaction phase difference $`\varphi `$, can be determined from measurements of the ratio $`R_{*}`$ and of the direct CP asymmetries in the decays $`B^\pm \to \pi ^0K^\pm `$ and $`B^\pm \to \pi ^\pm K^0`$ (here $`K^0`$ means $`K^0`$ or $`\overline{K}^0`$, as appropriate). This generalizes a method proposed in to include rescattering corrections to the $`B^\pm \to \pi ^\pm K^0`$ decay amplitudes. Section 7 contains a summary of our results and the conclusions. ## 2 Isospin decomposition ### 2.1 Preliminaries The effective weak Hamiltonian relevant to the decays $`B\to \pi K`$ is $$\mathcal{H}_{\mathrm{eff}}=\frac{G_F}{\sqrt{2}}\left\{\underset{i=1,2}{\sum }C_i\left(\lambda _uQ_i^u+\lambda _cQ_i^c\right)-\lambda _t\underset{i=3}{\overset{10}{\sum }}C_iQ_i\right\}+\text{h.c.},$$ (5) where $`\lambda _q=V_{qb}^{*}V_{qs}`$ are products of CKM matrix elements, $`C_i`$ are Wilson coefficients, and $`Q_i`$ are local four-quark operators. Relevant to our discussion are the isospin quantum numbers of these operators.
The current–current operators $Q_{1,2}^u\sim\bar b s\,\bar u u$ have components with $\Delta I=0$ and $\Delta I=1$; the current–current operators $Q_{1,2}^c\sim\bar b s\,\bar c c$ and the QCD penguin operators $Q_{3,\dots,6}\sim\bar b s\,\bar q q$ have $\Delta I=0$; the electroweak penguin operators $Q_{7,\dots,10}\sim\bar b s\,e_q\bar q q$, where $e_q$ are the electric charges of the quarks, have $\Delta I=0$ and $\Delta I=1$. Since the initial $B$ meson has $I=\frac{1}{2}$ and the final states $(\pi K)$ can be decomposed into components with $I=\frac{1}{2}$ and $I=\frac{3}{2}$, the physical $B\to\pi K$ decay amplitudes can be described in terms of three isospin amplitudes. They are called $B_{1/2}$, $A_{1/2}$, and $A_{3/2}$ referring, respectively, to $\Delta I=0$ with $I_{\pi K}=\frac{1}{2}$, $\Delta I=1$ with $I_{\pi K}=\frac{1}{2}$, and $\Delta I=1$ with $I_{\pi K}=\frac{3}{2}$ . The resulting expressions for the decay amplitudes are
$$\begin{aligned}
\mathcal{A}(B^+\to\pi^+K^0)&=B_{1/2}+A_{1/2}+A_{3/2},\\
\sqrt{2}\,\mathcal{A}(B^+\to\pi^0K^+)&=B_{1/2}+A_{1/2}-2A_{3/2},\\
\mathcal{A}(B^0\to\pi^-K^+)&=B_{1/2}-A_{1/2}-A_{3/2},\\
\sqrt{2}\,\mathcal{A}(B^0\to\pi^0K^0)&=B_{1/2}-A_{1/2}+2A_{3/2}.
\end{aligned}$$ (6)
From the isospin decomposition of the effective Hamiltonian it is obvious which operator matrix elements and weak phases enter the various isospin amplitudes. Experimental data as well as theoretical expectations indicate that the amplitude $B_{1/2}$, which includes the contributions of the QCD penguin operators, is significantly larger than the amplitudes $A_{1/2}$ and $A_{3/2}$ . Yet, the fact that $A_{1/2}$ and $A_{3/2}$ are different from zero is responsible for the deviations of the ratios $R$ and $R_*$ in (2) from 1. 
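As a quick numerical sanity check, the decomposition (6) implies that $\mathcal{A}(B^+\to\pi^+K^0)+\mathcal{A}(B^0\to\pi^-K^+)$ and $\sqrt{2}\,[\mathcal{A}(B^+\to\pi^0K^+)+\mathcal{A}(B^0\to\pi^0K^0)]$ both equal $2B_{1/2}$ for arbitrary complex isospin amplitudes. A minimal sketch (the amplitude values below are arbitrary placeholders, not fits to data):

```python
# Arbitrary complex isospin amplitudes (placeholders, not fits to data)
B12 = 1.00 + 0.30j   # B_{1/2}
A12 = 0.15 - 0.05j   # A_{1/2}
A32 = 0.10 + 0.08j   # A_{3/2}

sqrt2 = 2 ** 0.5

# Decay amplitudes according to the isospin decomposition (6)
A_pip_K0 = B12 + A12 + A32               # A(B+ -> pi+ K0)
A_pi0_Kp = (B12 + A12 - 2*A32) / sqrt2   # A(B+ -> pi0 K+)
A_pim_Kp = B12 - A12 - A32               # A(B0 -> pi- K+)
A_pi0_K0 = (B12 - A12 + 2*A32) / sqrt2   # A(B0 -> pi0 K0)

# Both combinations reduce to 2*B_{1/2}
lhs = A_pip_K0 + A_pim_Kp
rhs = sqrt2 * (A_pi0_Kp + A_pi0_K0)
print(abs(lhs - rhs))  # vanishes up to rounding
```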
Because of the unitarity relation $\lambda_u+\lambda_c+\lambda_t=0$ there are two independent CKM parameters entering the decay amplitudes, which we choose to be $\lambda_c=e^{i\pi}|\lambda_c|$ and $\lambda_u=e^{i\gamma}|\lambda_u|$. (Taking $\lambda_c$ to be real is an excellent approximation.) Each of the three isospin amplitudes receives contributions proportional to both weak phases. In total, there are thus five independent strong-interaction phase differences (an overall phase is irrelevant) and six independent real amplitudes, leaving as many as eleven hadronic parameters. Even perfect measurements of the eight branching ratios for the various $B\to\pi K$ decay modes and their CP conjugates would not suffice to determine these parameters. Facing this problem, previous authors have often relied on some theoretical prejudice about the relative importance of various parameters. For instance, in the invariant SU(3)-amplitude approach based on flavour-flow topologies , the isospin amplitudes are expressed as linear combinations of a QCD penguin amplitude $P$, a tree amplitude $T$, a colour-suppressed tree amplitude $C$, an annihilation amplitude $A$, an electroweak penguin amplitude $P_{\rm EW}$, and a colour-suppressed electroweak penguin amplitude $P_{\rm EW}^C$, which are expected to obey the following hierarchy: $|P|\gg|T|\gg|P_{\rm EW}|\sim|C|\gg|P_{\rm EW}^C|>|A|$. These naive expectations could be upset, however, if strong final-state rescattering effects were to turn out to be important , a possibility which at present is still under debate. Whereas the colour-transparency argument suggests that final-state interactions are small in $B$ decays into a pair of light mesons, the opposite behaviour is exhibited in a model based on Regge phenomenology . 
For comparison, we note that in the decays $B\to D^{(*)}h$, with $h=\pi$ or $\rho$, the final-state phase differences between the $I=\frac{1}{2}$ and $I=\frac{3}{2}$ isospin amplitudes are found to be smaller than $30^\circ$–$50^\circ$ . Here we follow a different strategy, making maximal use of theoretical constraints derived using flavour symmetries and the knowledge of the effective weak Hamiltonian in the Standard Model. These constraints help to simplify the isospin amplitude $A_{3/2}$, for which the two contributions with different weak phases turn out to have the same strong-interaction phase (to an excellent approximation) and magnitudes that can be determined without encountering large hadronic uncertainties . Theoretical uncertainties enter only at the level of SU(3)-breaking corrections, which can be accounted for using the generalized factorization approximation . Effectively, these simplifications remove three parameters (one phase and two magnitudes) from the list of unknown hadronic quantities. There is at present no other clean theoretical information about the remaining parameters, although some constraints can be derived using measurements of the branching ratios for the decays $B^\pm\to K^\pm\overline{K}^0$ and invoking SU(3) symmetry . Nevertheless, interesting insights can be gained by fully exploiting the available information on $A_{3/2}$. 
Before discussing this in more detail, it is instructive to introduce certain linear combinations of the isospin amplitudes, which we define as
$$\begin{aligned}
B_{1/2}+A_{1/2}+A_{3/2}&=P+A-\frac{1}{3}P_{\rm EW}^C,\\
3A_{3/2}&=T+C+P_{\rm EW}+P_{\rm EW}^C,\\
2(A_{1/2}+A_{3/2})&=T-A+P_{\rm EW}^C.
\end{aligned}$$ (7)
In the latter two relations, the amplitudes $T$, $C$ and $A$ carry the weak phase $e^{i\gamma}$, whereas the electroweak penguin amplitudes $P_{\rm EW}$ and $P_{\rm EW}^C$ carry the weak phase $e^{i\pi}$. (Because of their smallness, it is a safe approximation to set $\lambda_t=-\lambda_c$ for the electroweak penguin contributions, and to neglect electroweak penguin contractions in the matrix elements of the four-quark operators $Q_i^u$ and $Q_i^c$.) Decomposing the QCD penguin amplitude as $P=\sum_q\lambda_q P_q$, and similarly writing $A=\lambda_u A_u$ and $P_{\rm EW}^C=\lambda_t P_{{\rm EW},t}^C$, we rewrite the first relation in the form
$$B_{1/2}+A_{1/2}+A_{3/2}=\lambda_c\left(P_t-P_c-\tfrac{1}{3}P_{{\rm EW},t}^C\right)+\lambda_u\left(A_u-P_t+P_u\right)\equiv|P|\,e^{i\varphi_P}\left(e^{i\pi}+\epsilon_a\,e^{i\gamma}e^{i\eta}\right).$$ (8)
By definition, the term $|P|e^{i\varphi_P}e^{i\pi}$ contains all contributions to the $B^+\to\pi^+K^0$ decay amplitude not proportional to the weak phase $e^{i\gamma}$. We will return to a discussion of the remaining terms below. It is convenient to adopt a parametrization of the other two amplitude combinations in (7) in units of $|P|$, so that this parameter cancels in predictions for ratios of branching ratios. 
We define
$$\begin{aligned}
\frac{3A_{3/2}}{|P|}&=\epsilon_{3/2}\,e^{i\varphi_{3/2}}\left(e^{i\gamma}-q\,e^{i\omega}\right),\\
\frac{2(A_{1/2}+A_{3/2})}{|P|}&=\epsilon_T\,e^{i\varphi_T}\left(e^{i\gamma}-q_C\,e^{i\omega_C}\right),
\end{aligned}$$ (9)
where the terms with $q$ and $q_C$ arise from electroweak penguin contributions. In the above relations, the parameters $\eta$, $\varphi_{3/2}$, $\varphi_T$, $\omega$, and $\omega_C$ are strong-interaction phases. For the benefit of the reader, it may be convenient to relate our definitions in (8) and (9) with those adopted by Buras and Fleischer . The identifications are: $|P|e^{i\varphi_P}\leftrightarrow\lambda_c|P_{tc}|e^{i\delta_{tc}}$, $\epsilon_a e^{i\eta}\leftrightarrow\rho\,e^{i\theta}$, $\varphi_{3/2}\leftrightarrow\delta_{T+C}$, and $\varphi_T\leftrightarrow\delta_T$. The notations for the electroweak penguin contributions coincide. Moreover, if we define
$$\overline{\epsilon}_{3/2}\equiv\frac{\epsilon_{3/2}}{\sqrt{1-2\epsilon_a\cos\eta\cos\gamma+\epsilon_a^2}},$$ (10)
then $\overline{\epsilon}_{3/2}\leftrightarrow r_{\rm c}$ and $\epsilon_T/\epsilon_{3/2}\leftrightarrow r/r_{\rm c}$. With this definition, the parameter $\overline{\epsilon}_{3/2}$ is precisely the quantity that can be determined experimentally using the relation (4).

### 2.2 Isovector part of the effective weak Hamiltonian

The two amplitude combinations in (9) involve isospin amplitudes defined in terms of the strong-interaction matrix elements of the $\Delta I=1$ part of the effective weak Hamiltonian. (This statement implies that QED corrections to the matrix elements are neglected, which is an excellent approximation.) This part contains current–current as well as electroweak penguin operators. 
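To illustrate that $\overline{\epsilon}_{3/2}$ defined in (10) coincides with $\epsilon_{3/2}$ up to terms of first order in $\epsilon_a$, one can scan the unknown product $\cos\eta\cos\gamma$ over its full range. The numbers below are illustrative inputs, not fitted values:

```python
import math

eps_a = 0.10   # illustrative rescattering parameter (assumed value)

# Eq. (10): eps32_bar/eps32 = 1/sqrt(1 - 2 eps_a cos(eta) cos(gamma) + eps_a^2)
ratios = [1.0 / math.sqrt(1 - 2 * eps_a * (x / 100) + eps_a**2)
          for x in range(-100, 101)]   # x/100 = cos(eta)*cos(gamma)

# The deviation of the ratio from 1 is of order eps_a (here ~10%)
print(min(ratios), max(ratios))
```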
A trivial but relevant observation is that the electroweak penguin operators $Q_9$ and $Q_{10}$, whose Wilson coefficients are enhanced by the large mass of the top quark, are Fierz-equivalent to the current–current operators $Q_1$ and $Q_2$ . As a result, the $\Delta I=1$ part of the effective weak Hamiltonian for $B\to\pi K$ decays can be written as
$$\mathcal{H}_{\Delta I=1}=\frac{G_F}{\sqrt{2}}\left\{\left(\lambda_u C_1-\frac{3}{2}\lambda_t C_9\right)\overline{Q}_1+\left(\lambda_u C_2-\frac{3}{2}\lambda_t C_{10}\right)\overline{Q}_2+\dots\right\}+\text{h.c.},$$ (11)
where $\overline{Q}_i=\frac{1}{2}(Q_i^u-Q_i^d)$ are isovector combinations of four-quark operators. The dots represent the contributions from the electroweak penguin operators $Q_7$ and $Q_8$, which have a different Dirac structure. In the Standard Model, the Wilson coefficients of these operators are so small that their contributions can be safely neglected. It is important in this context that for heavy mesons the matrix elements of four-quark operators with Dirac structure $(V-A)(V+A)$ are not enhanced with respect to those of operators with the usual $(V-A)(V-A)$ structure. To an excellent approximation, the net effect of electroweak penguin contributions to the $\Delta I=1$ isospin amplitudes in $B\to\pi K$ decays thus consists of the replacement of the Wilson coefficients $C_1$ and $C_2$ of the current–current operators with the combinations shown in (11). 
Introducing the linear combinations $C_\pm=(C_2\pm C_1)$ and $\overline{Q}_\pm=\frac{1}{2}(\overline{Q}_2\pm\overline{Q}_1)$, which have the advantage of being renormalized multiplicatively, we obtain
$$\mathcal{H}_{\Delta I=1}\approx\frac{G_F}{\sqrt{2}}\,|V_{ub}^{*}V_{us}|\left\{C_+\left(e^{i\gamma}-\delta_+\right)\overline{Q}_++C_-\left(e^{i\gamma}-\delta_-\right)\overline{Q}_-\right\}+\text{h.c.},$$ (12)
where
$$\delta_\pm=-\frac{3\cot\theta_C}{2|V_{ub}/V_{cb}|}\,\frac{C_{10}\pm C_9}{C_2\pm C_1}.$$ (13)
We have used $\lambda_u/\lambda_t\approx-\lambda_u/\lambda_c\approx\tan\theta_C\,|V_{ub}/V_{cb}|\,e^{i\gamma}$, with the ratio $|V_{ub}/V_{cb}|=0.089\pm0.015$ determined from semileptonic $B$ decays . From the fact that the products $C_\pm\overline{Q}_\pm$ are renormalization-group invariant, it follows that the quantities $\delta_\pm$ themselves must be scheme- and scale-independent (in a certain approximation). Indeed, the ratios of Wilson coefficients entering in (13) are, to a good approximation, independent of the choice of the renormalization scale. Taking the values $C_1=-0.308$, $C_2=1.144$, $C_9=-1.280\,\alpha$ and $C_{10}=0.328\,\alpha$, which correspond to the leading-order coefficients at the scale $\mu=m_b$ , we find $(C_{10}+C_9)/(C_2+C_1)\approx-1.14\,\alpha$ and $(C_{10}-C_9)/(C_2-C_1)\approx1.11\,\alpha$, implying that $\delta_-\approx-\delta_+$ to a good approximation. The statement of the approximate renormalization-group invariance of the ratios $\delta_\pm$ can be made more precise by noting that the large values of the Wilson coefficients $C_9$ and $C_{10}$ at the scale $\mu=m_b$ predominantly result from large matching contributions to the coefficient $C_9(m_W)$ arising from box and $Z$-penguin diagrams, whereas the $O(\alpha)$ contributions to the anomalous dimension matrix governing the mixing of the local operators $Q_i$ lead to very small effects. 
If these are neglected, then to next-to-leading order in the QCD evolution the coefficients $(C_{10}\pm C_9)$ are renormalized multiplicatively and in precisely the same way as the coefficients $(C_2\pm C_1)$. We have derived this result using the explicit expressions for the anomalous dimension matrices compiled in . (The equivalence of the anomalous dimensions at next-to-leading order is nontrivial, because the operators $Q_9$ and $Q_{10}$ are related to $Q_1$ and $Q_2$ by Fierz identities, which are valid only in four dimensions. The corresponding two-loop anomalous dimensions are identical in the naive dimensional regularization scheme with anticommuting $\gamma_5$.) Hence, in this approximation the ratios of coefficients entering the quantities $\delta_\pm$ are renormalization-scale independent and can be evaluated at the scale $m_W$, so that
$$\frac{C_{10}\pm C_9}{C_2\pm C_1}\approx\pm C_9(m_W)=\mp\frac{\alpha}{12\pi}\,\frac{x_t}{\sin^2\theta_W}\left(1+\frac{3\ln x_t}{x_t-1}\right)+\dots\approx\mp1.18\,\alpha,$$ (14)
where $\theta_W$ is the Weinberg angle, and $x_t=(m_t/m_W)^2$. This result agrees with an equivalent expression derived by Fleischer . The dots in (14) represent renormalization-scheme dependent terms, which are not enhanced by the factor $1/\sin^2\theta_W$. These terms are numerically very small and of the same order as the coefficients $C_7$ and $C_8$, whose values have been neglected in our derivation. The leading terms given above are precisely the ones that must be kept to get a consistent, renormalization-group invariant result. 
We thus obtain
$$\delta_+=-\delta_-=\frac{\alpha}{8\pi}\,\frac{\cot\theta_C}{|V_{ub}/V_{cb}|}\,\frac{x_t}{\sin^2\theta_W}\left(1+\frac{3\ln x_t}{x_t-1}\right)=0.68\pm0.11,$$ (15)
where we have taken $\alpha=1/129$ for the electromagnetic coupling renormalized at the scale $m_b$, and $m_t=\overline{m}_t(m_t)=170$ GeV for the running top-quark mass in the $\overline{\rm MS}$ renormalization scheme. Assuming that there are no large $O(\alpha_s)$ corrections with this choice, the main uncertainty in the estimate of $\delta_+$ in the Standard Model results from the present error on $|V_{ub}|$, which is likely to be reduced in the near future. We stress that the sensitivity of the $B\to\pi K$ decay amplitudes to the value of $\delta_+$ provides a window to New Physics, which could alter the value of this parameter significantly. A generic example is provided by extensions of the Standard Model with new charged Higgs bosons, such as supersymmetry, for which there are additional matching contributions to $C_9(m_W)$. We will come back to this point in Section 4.

### 2.3 Structure of the isospin amplitude $A_{3/2}$

$U$-spin invariance of the strong interactions, which is a subgroup of flavour SU(3) symmetry corresponding to transformations exchanging $d$ and $s$ quarks, implies that the isospin amplitude $A_{3/2}$ receives a contribution only from the operator $\overline{Q}_+$ in (12), but not from $\overline{Q}_-$ . 
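The central value in (15) can be reproduced directly from the formula. In the sketch below, $m_W$ and $\sin^2\theta_W$ are assumed inputs (they are not quoted in the text); the remaining numbers are those given above:

```python
import math

alpha    = 1 / 129      # electromagnetic coupling at mu = m_b (as in the text)
m_t      = 170.0        # running top-quark mass in GeV (as in the text)
m_W      = 80.4         # W mass in GeV -- assumed value
sin2_thW = 0.23         # sin^2(theta_W) -- assumed value
Vub_Vcb  = 0.089        # |V_ub/V_cb| from semileptonic B decays
lam      = 0.22         # sin(theta_C)

x_t   = (m_t / m_W) ** 2
cot_C = math.sqrt(1 - lam**2) / lam   # cot(theta_C)

# Eq. (15)
delta_plus = (alpha / (8 * math.pi)) * (cot_C / Vub_Vcb) \
             * (x_t / sin2_thW) * (1 + 3 * math.log(x_t) / (x_t - 1))
print(delta_plus)  # close to the central value 0.68 quoted in (15)
```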
In order to investigate the corrections to this limit, we parametrize the matrix elements of the local operators $C_\pm\overline{Q}_\pm$ between a $B$ meson and the $(\pi K)$ isospin state with $I=\frac{3}{2}$ by hadronic parameters $K_{3/2}^\pm\,e^{i\varphi_{3/2}^\pm}$, so that
$$3A_{3/2}=K_{3/2}^+e^{i\varphi_{3/2}^+}\left(e^{i\gamma}-\delta_+\right)+K_{3/2}^-e^{i\varphi_{3/2}^-}\left(e^{i\gamma}+\delta_+\right)\equiv\left(K_{3/2}^+e^{i\varphi_{3/2}^+}+K_{3/2}^-e^{i\varphi_{3/2}^-}\right)\left(e^{i\gamma}-q\,e^{i\omega}\right).$$ (16)
In the SU(3) limit $K_{3/2}^-=0$, and hence SU(3)-breaking corrections can be parametrized by the quantity
$$\kappa\,e^{i\Delta\phi_{3/2}}\equiv\frac{2K_{3/2}^-e^{i\varphi_{3/2}^-}}{K_{3/2}^+e^{i\varphi_{3/2}^+}+K_{3/2}^-e^{i\varphi_{3/2}^-}}=2\left[\frac{K_{3/2}^+}{K_{3/2}^-}\,e^{i(\varphi_{3/2}^+-\varphi_{3/2}^-)}+1\right]^{-1},$$ (17)
in terms of which
$$q\,e^{i\omega}=\left(1-\kappa\,e^{i\Delta\phi_{3/2}}\right)\delta_+.$$ (18)
This relation generalizes an approximate result derived in . The magnitude of the SU(3)-breaking effects can be estimated by using the generalized factorization hypothesis to calculate the matrix elements of the current–current operators . This gives
$$\kappa\approx2\left[\frac{a_1+a_2}{a_1-a_2}\,\frac{A_K+A_\pi}{A_K-A_\pi}+1\right]^{-1}=(6\pm6)\%,\qquad\Delta\phi_{3/2}\approx0,$$ (19)
where $A_K=f_K(m_B^2-m_\pi^2)F_0^{B\pi}(m_K^2)$ and $A_\pi=f_\pi(m_B^2-m_K^2)F_0^{BK}(m_\pi^2)$ are combinations of hadronic matrix elements, and $a_1$ and $a_2$ are phenomenological parameters defined such that they contain the leading corrections to naive factorization. For a numerical estimate we take $a_2/a_1=0.21\pm0.05$, as determined from a global analysis of nonleptonic two-body decays of $B$ mesons , and $A_\pi/A_K=0.9\pm0.1$, which is consistent with form factor models (see, e.g., ) as well as the most recent predictions obtained using light-cone QCD sum rules . 
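The central value in (19) follows from a two-line evaluation with the inputs quoted above:

```python
r_a = 0.21   # a2/a1 from the global fit quoted in the text
r_A = 0.9    # A_pi/A_K as quoted in the text

# Eq. (19): kappa ~ 2 / [ (a1+a2)/(a1-a2) * (A_K+A_pi)/(A_K-A_pi) + 1 ]
kappa = 2 / ((1 + r_a) / (1 - r_a) * (1 + r_A) / (1 - r_A) + 1)
print(kappa)  # ~0.066, i.e. the central value of (6 +- 6)% in (19)
```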
Despite the fact that nonfactorizable corrections are not fully controlled theoretically, the estimate (19) suggests that the SU(3)-breaking corrections in (18) are small. More importantly, such effects cannot induce a sizable strong-interaction phase $\omega$. Since $\overline{Q}_+$ and $\overline{Q}_-$ are local operators whose matrix elements are taken between the same isospin eigenstates, it is very unlikely that the strong-interaction phases $\varphi_{3/2}^+$ and $\varphi_{3/2}^-$ could differ by a large amount. If we assume that these phases differ by at most $20^\circ$, and that the magnitude of $\kappa$ is as large as 12% (corresponding to twice the central value obtained using factorization), we find that $|\omega|<2.7^\circ$. Even for a phase difference $\Delta\phi_{3/2}\equiv|\varphi_{3/2}^+-\varphi_{3/2}^-|=90^\circ$, which seems totally unrealistic, the phase $|\omega|$ would not exceed $7^\circ$. It is therefore a safe approximation to work with the real value
$$\delta_{\rm EW}\equiv(1-\kappa)\,\delta_+=0.64\pm0.15,$$ (20)
where to be conservative we have added linearly the uncertainties in the values of $\kappa$ and $\delta_+$. We believe the error quoted above is large enough to cover possible small contributions from a nonzero phase difference $\Delta\phi_{3/2}$ or deviations from the factorization approximation. For completeness, we note that our general results for the structure of the electroweak penguin contributions to the isospin amplitude $A_{3/2}$, including the pattern of SU(3)-breaking effects, are in full accord with model estimates by Deshpande and He . Generalizations of our results to the case of $B\to\pi\pi$, $K\overline{K}$ decays and the corresponding $B_s$ decays are possible using SU(3) symmetry, as discussed in . 
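The quoted limits on $\omega$ follow from taking the argument of the factor $(1-\kappa\,e^{i\Delta\phi_{3/2}})$ in (18), since the multiplicative $\delta_+$ is real. A short check of the two scenarios mentioned in the text:

```python
import cmath
import math

kappa = 0.12   # twice the central factorization value, as in the text

# |omega| = |arg(1 - kappa e^{i Delta phi})| for the two assumed phase differences
omega = {dphi: abs(math.degrees(cmath.phase(1 - kappa * cmath.exp(1j * math.radians(dphi)))))
         for dphi in (20.0, 90.0)}
print(omega)  # roughly 2.6 and 6.8 degrees, below the quoted limits
```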
In the last step, we define $K_{3/2}^+e^{i\varphi_{3/2}^+}+K_{3/2}^-e^{i\varphi_{3/2}^-}\equiv|P|\,\epsilon_{3/2}\,e^{i\varphi_{3/2}}$, so that
$$\frac{3A_{3/2}}{|P|}=\epsilon_{3/2}\,e^{i\varphi_{3/2}}\left(e^{i\gamma}-\delta_{\rm EW}\right).$$ (21)
The complex quantity $q\,e^{i\omega}$ in our general parametrization in (9) is now replaced with the real parameter $\delta_{\rm EW}$, whose numerical value is known with reasonable accuracy. The fact that the strong-interaction phase $\omega$ can be neglected was overlooked by Buras and Fleischer, who considered values as large as $|\omega|=45^\circ$ and therefore assigned a larger hadronic uncertainty to the isospin amplitude $A_{3/2}$ . In the SU(3) limit, the product $|P|\epsilon_{3/2}$ is determined by the decay amplitude for the process $B^\pm\to\pi^\pm\pi^0$ through the relation
$$|P|\,\epsilon_{3/2}=\sqrt{2}\,\frac{R_{\rm SU(3)}}{R_{\rm EW}}\,\tan\theta_C\,|\mathcal{A}(B^\pm\to\pi^\pm\pi^0)|,$$ (22)
where (we disagree with the result for this correction presented in )
$$R_{\rm EW}=\left|e^{i\gamma}-\frac{V_{td}}{V_{ud}}\,\frac{V_{us}}{V_{ts}}\,\delta_{\rm EW}\right|\approx\left|1-\lambda^2R_t\,\delta_{\rm EW}\,e^{i\alpha}\right|$$ (23)
is a tiny correction arising from the very small electroweak penguin contributions to the decays $B^\pm\to\pi^\pm\pi^0$. Here $\lambda=\sin\theta_C\approx0.22$ and $R_t=[(1-\rho)^2+\eta^2]^{1/2}\approx1$ are Wolfenstein parameters, and $\alpha$ is another angle of the unitarity triangle, whose preferred value is close to $90^\circ$ . It follows that the deviation of $R_{\rm EW}$ from 1 is of order 1–2%, and it is thus a safe approximation to set $R_{\rm EW}=1$. 
More important are SU(3)-breaking corrections, which can be included in (22) in the factorization approximation, leading to
$$R_{\rm SU(3)}\approx\frac{a_1}{a_1+a_2}\,\frac{f_K}{f_\pi}+\frac{a_2}{a_1+a_2}\,\frac{F_0^{BK}(m_\pi^2)}{F_0^{B\pi}(m_\pi^2)}\approx\frac{f_K}{f_\pi}\approx1.2,$$ (24)
where we have neglected a tiny difference in the phase space for the two decays. Relation (22) can be used to determine the parameter $\overline{\epsilon}_{3/2}$ introduced in (10), which coincides with $\epsilon_{3/2}$ up to terms of $O(\epsilon_a)$. To this end, we note that the CP-averaged branching ratio for the decays $B^\pm\to\pi^\pm K^0$ is given by
$$\text{Br}(B^\pm\to\pi^\pm K^0)\equiv\frac{1}{2}\left[\text{Br}(B^+\to\pi^+K^0)+\text{Br}(B^-\to\pi^-\overline{K}^0)\right]=|P|^2\left(1-2\epsilon_a\cos\eta\cos\gamma+\epsilon_a^2\right).$$ (25)
Combining this result with (22) we obtain relation (4), which expresses $\overline{\epsilon}_{3/2}$ in terms of CP-averaged branching ratios. Using preliminary data reported by the CLEO Collaboration combined with some theoretical guidance based on factorization, one finds $\overline{\epsilon}_{3/2}=0.24\pm0.06$ . To summarize, besides the parameter $\delta_{\rm EW}$ controlling electroweak penguin contributions, also the normalization of the amplitude $A_{3/2}$ is known from theory, albeit with some uncertainty related to nonfactorizable SU(3)-breaking effects. The only remaining unknown hadronic parameter in (21) is the strong-interaction phase $\varphi_{3/2}$. The various constraints on the structure of the isospin amplitude $A_{3/2}$ discussed here constitute the main theoretical simplification of $B\to\pi K$ decays, i.e., the only simplification rooted in first principles of QCD. 
### 2.4 Structure of the amplitude combination $B_{1/2}+A_{1/2}+A_{3/2}$

The above result for the isospin amplitude $A_{3/2}$ helps in better understanding the structure of the sum of amplitudes introduced in (8). To this end, we introduce the following exact parametrization:
$$B_{1/2}+A_{1/2}+A_{3/2}=|P|\left[e^{i\pi}e^{i\varphi_P}-\frac{\epsilon_{3/2}}{3}\,e^{i\gamma}\left(e^{i\varphi_{3/2}}-\xi\,e^{i\varphi_{1/2}}\right)\right],$$ (26)
where we have made explicit the contribution proportional to the weak phase $e^{i\gamma}$ contained in $A_{3/2}$. From a comparison with the parametrization in (8) it follows that
$$\epsilon_a\,e^{i\eta}=\frac{\epsilon_{3/2}}{3}\,e^{i\varphi}\left(\xi\,e^{i\Delta}-1\right),$$ (27)
where $\varphi=\varphi_{3/2}-\varphi_P$ and $\Delta=\varphi_{1/2}-\varphi_{3/2}$. Of course, this is just a simple reparametrization. However, the intuitive expectation that $\epsilon_a$ is small, because this term receives contributions only from the penguin difference $(P_u-P_t)$ and from annihilation topologies, now becomes equivalent to saying that $\xi\,e^{i\Delta}$ is close to 1, so as to allow for a cancellation between the contributions corresponding to final-state isospin $I=\frac{1}{2}$ and $I=\frac{3}{2}$ in (27). But this can only happen if there are no sizable final-state interactions. The limit of elastic final-state interactions can be recovered from (27) by setting $\xi=1$, in which case we reproduce results derived previously in . Because of the large energy release in $B\to\pi K$ decays, however, one expects inelastic rescattering contributions to be important as well . They would lead to a value $\xi\ne1$. 
From (27) it follows that
$$\epsilon_a=\frac{\epsilon_{3/2}}{3}\sqrt{1-2\xi\cos\Delta+\xi^2}=\frac{2\sqrt{\xi}}{3}\,\epsilon_{3/2}\sqrt{\left(\frac{1-\xi}{2\sqrt{\xi}}\right)^2+\sin^2\frac{\Delta}{2}},$$ (28)
where without loss of generality we define $\epsilon_a$ to be positive. Clearly, $\epsilon_a\ll\epsilon_{3/2}$ provided the phase difference $\Delta$ is small and the parameter $\xi$ close to 1. There are good physics reasons to believe that both of these requirements may be satisfied. In the rest frame of the $B$ meson, the two light particles produced in $B\to\pi K$ decays have large energies and opposite momenta. Hence, by the colour-transparency argument their final-state interactions are expected to be suppressed unless there are close-by resonances, such as charm–anticharm intermediate states ($D\overline{D}_s$, $J/\psi\,K$, etc.). However, these contributions could only result from the charm penguin and are thus included in the term $|P|e^{i\varphi_P}$ in (8). As a consequence, the phase difference $\varphi=\varphi_{3/2}-\varphi_P$ could quite conceivably be sizable. On the other hand, the strong phases $\varphi_{3/2}$ and $\varphi_{1/2}$ in (26) refer to the matrix elements of local four-quark operators of the type $\bar b s\,\bar u u$ and differ only in the isospin of the final state. We believe it is realistic to assume that $|\Delta|=|\varphi_{1/2}-\varphi_{3/2}|<45^\circ$. Likewise, if the parameter $\xi$ were very different from 1, this would correspond to a gross failure of the generalized factorization hypothesis (even in decays into isospin eigenstates), which works so well in the global analysis of hadronic two-body decays of $B$ mesons . In view of this empirical fact, we think it is reasonable to assume that $0.5<\xi<1.5$. With this set of parameters, we find that $\epsilon_a<0.35\,\epsilon_{3/2}<0.1$. 
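The bound $\epsilon_a<0.35\,\epsilon_{3/2}$ can be checked by scanning (28) over the stated parameter ranges, $0.5<\xi<1.5$ and $|\Delta|<45^\circ$:

```python
import math

eps32 = 0.24   # epsilon_{3/2}, as quoted in the text

# Eq. (28): eps_a = (eps32/3) sqrt(1 - 2 xi cos(Delta) + xi^2), scanned on a grid
eps_a_max = max(
    (eps32 / 3) * math.sqrt(1 - 2 * xi * math.cos(math.radians(d)) + xi**2)
    for xi in (0.5 + 0.01 * i for i in range(101))   # 0.5 <= xi <= 1.5
    for d in range(-45, 46))                          # |Delta| <= 45 degrees
print(eps_a_max, eps_a_max / eps32)  # maximal ratio ~0.35, as stated in the text
```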
Thus, we expect that the rescattering effects parametrized by $\epsilon_a$ are rather small. A constraint on the parameter $\epsilon_a$ can be derived assuming $U$-spin invariance of the strong interactions, which relates the decay amplitudes for the processes $B^\pm\to\pi^\pm K^0$ and $B^\pm\to K^\pm\overline{K}^0$ up to the substitution
$$\lambda_u\to V_{ub}^{*}V_{ud}\approx\frac{\lambda_u}{\lambda},\qquad\lambda_c\to V_{cb}^{*}V_{cd}\approx-\lambda\,\lambda_c,$$ (29)
where $\lambda\approx0.22$ is the Wolfenstein parameter. Neglecting SU(3)-breaking corrections, the CP-averaged branching ratio for the decays $B^\pm\to K^\pm\overline{K}^0$ is then given by
$$\text{Br}(B^\pm\to K^\pm\overline{K}^0)\equiv\frac{1}{2}\left[\text{Br}(B^+\to K^+\overline{K}^0)+\text{Br}(B^-\to K^-K^0)\right]=|P|^2\left[\lambda^2+2\epsilon_a\cos\eta\cos\gamma+(\epsilon_a/\lambda)^2\right],$$ (30)
which should be compared with the corresponding result for the decays $B^\pm\to\pi^\pm K^0$ given in (25). The enhancement (suppression) of the subleading (leading) terms by powers of $\lambda$ implies potentially large rescattering effects and a large direct CP asymmetry in $B^\pm\to K^\pm\overline{K}^0$ decays. 
In particular, comparing the expressions for the direct CP asymmetries,
$$\begin{aligned}
A_{\rm CP}(\pi^+K^0)&\equiv\frac{\text{Br}(B^+\to\pi^+K^0)-\text{Br}(B^-\to\pi^-\overline{K}^0)}{\text{Br}(B^+\to\pi^+K^0)+\text{Br}(B^-\to\pi^-\overline{K}^0)}=\frac{2\epsilon_a\sin\eta\sin\gamma}{1-2\epsilon_a\cos\eta\cos\gamma+\epsilon_a^2},\\
A_{\rm CP}(K^+\overline{K}^0)&=-\frac{2\epsilon_a\sin\eta\sin\gamma}{\lambda^2+2\epsilon_a\cos\eta\cos\gamma+(\epsilon_a/\lambda)^2},
\end{aligned}$$ (31)
one obtains the simple relation
$$\frac{A_{\rm CP}(K^+\overline{K}^0)}{A_{\rm CP}(\pi^+K^0)}=-\frac{\text{Br}(B^\pm\to\pi^\pm K^0)}{\text{Br}(B^\pm\to K^\pm\overline{K}^0)}.$$ (32)
In the future, precise measurements of the branching ratio and CP asymmetry in $B^\pm\to K^\pm\overline{K}^0$ decays may thus provide valuable information about the role of rescattering contributions in $B^\pm\to\pi^\pm K^0$ decays. In particular, upper and lower bounds on the parameter $\epsilon_a$ can be derived from a measurement of the ratio
$$R_K=\frac{\text{Br}(B^\pm\to K^\pm\overline{K}^0)}{\text{Br}(B^\pm\to\pi^\pm K^0)}=\frac{\lambda^2+2\epsilon_a\cos\eta\cos\gamma+(\epsilon_a/\lambda)^2}{1-2\epsilon_a\cos\eta\cos\gamma+\epsilon_a^2}.$$ (33)
Using the fact that $R_K$ is minimized (maximized) by setting $\cos\eta\cos\gamma=-1$ ($+1$), we find that
$$\frac{\lambda(\sqrt{R_K}-\lambda)}{1+\lambda\sqrt{R_K}}\le\epsilon_a\le\frac{\lambda(\sqrt{R_K}+\lambda)}{1-\lambda\sqrt{R_K}}.$$ (34)
This generalizes a relation derived in . Using data reported by the CLEO Collaboration , one can derive the upper bound $R_K<0.7$ (at 90% CL), implying $\epsilon_a<0.28$, which is not yet a very powerful constraint. 
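Relation (34) is easy to evaluate numerically. The sketch below reproduces the bound quoted for the CLEO limit; since $R_K<0.7$ is only an upper limit on $R_K$, only the upper bound on $\epsilon_a$ applies in that case:

```python
import math

lam = 0.22  # Wolfenstein lambda = sin(theta_C)

def eps_a_bounds(R_K):
    """Eq. (34): allowed range of eps_a for a given value of R_K."""
    s = math.sqrt(R_K)
    lower = lam * (s - lam) / (1 + lam * s)   # negative would mean no lower bound
    upper = lam * (s + lam) / (1 - lam * s)
    return max(0.0, lower), upper

print(eps_a_bounds(0.7))   # upper bound ~0.28, as quoted in the text
print(eps_a_bounds(0.15))  # roughly (0.03, 0.15)
```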
However, a measurement of the branching ratio for $B^\pm\to K^\pm\overline{K}^0$ could improve the situation significantly. For the purpose of illustration, we note that from the preliminary results quoted for the observed event rates one may deduce the “best fit” value $R_K\approx0.15$ (with very large errors!). Taking this value literally would give the allowed range $0.03<\epsilon_a<0.14$. Based on a detailed analysis of individual rescattering contributions, Gronau and Rosner have argued that one expects a similar pattern of final-state interactions in the decays $B^\pm\to K^\pm\overline{K}^0$ and $B^0\to K^\pm K^\mp$ . One could then use the tighter experimental bound $\text{Br}(B^0\to K^\pm K^\mp)<2\times10^{-6}$ to obtain $\epsilon_a<0.16$. However, this is not a model-independent result, because the decay amplitudes for $B^0\to K^\pm K^\mp$ are not related to those for $B^\pm\to\pi^\pm K^0$ by any symmetry of the strong interactions. Nevertheless, this observation may be considered a qualitative argument in favour of a small value of $\epsilon_a$.

### 2.5 Structure of the amplitude combination $A_{1/2}+A_{3/2}$

None of the simplifications we found for the isospin amplitude $A_{3/2}$ persist for the amplitude $A_{1/2}$. Therefore, the sum $A_{1/2}+A_{3/2}$ suffers from larger hadronic uncertainties than the amplitude $A_{3/2}$ alone. Nevertheless, it is instructive to study the structure of this combination in more detail. 
In analogy with (16), we parametrize the matrix elements of the local operators $C_\pm\overline{Q}_\pm$ between a $B$ meson and the $(\pi K)$ isospin state with $I=\frac{1}{2}$ by hadronic parameters $K_{1/2}^\pm\,e^{i\varphi_{1/2}^\pm}$, so that
$$3A_{1/2}=K_{1/2}^+e^{i\varphi_{1/2}^+}\left(e^{i\gamma}-\delta_+\right)+K_{1/2}^-e^{i\varphi_{1/2}^-}\left(e^{i\gamma}+\delta_+\right).$$ (35)
Next, we define parameters $\epsilon'$ and $r$ by
$$\frac{\epsilon'}{2}\,(1\pm r)\equiv\frac{2}{3|P|}\left(K_{1/2}^\pm+K_{3/2}^\pm\right).$$ (36)
This general definition is motivated by the factorization approximation, which predicts that $r\approx a_2/a_1=0.21\pm0.05$ is the phenomenological colour-suppression factor , and
$$\frac{\epsilon'}{\epsilon_{3/2}}\approx\frac{a_1A_K}{a_1A_K+a_2A_\pi}=0.84\pm0.04.$$ (37)
With the help of these definitions, we obtain
$$\frac{2(A_{1/2}+A_{3/2})}{|P|}\approx\frac{\epsilon'}{2}\left[(1+r)\,e^{i\varphi_{1/2}^+}\left(e^{i\gamma}-\delta_+\right)+(1-r)\,e^{i\varphi_{1/2}^-}\left(e^{i\gamma}+\delta_+\right)\right]+\frac{2\epsilon_{3/2}}{3}\left(e^{i\varphi_{3/2}}-e^{i\varphi_{1/2}^+}\right)\left(e^{i\gamma}-\delta_+\right),$$ (38)
where we have neglected some small, SU(3)-breaking corrections to the second term. Nevertheless, the above relation can be considered a general parametrization of the sum $A_{1/2}+A_{3/2}$, since it still contains two undetermined phases $\varphi_{1/2}^\pm$ and magnitudes $\epsilon'$ and $r$. With the explicit result (38) at hand, it is a simple exercise to derive expressions for the quantities entering the parametrization in (9). 
We find $`\epsilon _Te^{i\varphi _T}`$ $`=`$ $`{\displaystyle \frac{\epsilon ^{\prime }}{2}}e^{i\varphi _{1/2}^+}\left[(e^{i\mathrm{\Delta }\varphi _{1/2}}+1)+r(e^{i\mathrm{\Delta }\varphi _{1/2}}-1)\right]+{\displaystyle \frac{2\epsilon _{3/2}}{3}}\left(e^{i\varphi _{3/2}}-e^{i\varphi _{1/2}^+}\right),`$ $`q_Ce^{i\omega _C}`$ $`=`$ $`\delta _+{\displaystyle \frac{r(e^{i\mathrm{\Delta }\varphi _{1/2}}+1)+(e^{i\mathrm{\Delta }\varphi _{1/2}}-1)+{\displaystyle \frac{4\epsilon _{3/2}}{3\epsilon ^{\prime }}}\left[e^{i(\varphi _{3/2}-\varphi _{1/2}^+)}-1\right]}{(e^{i\mathrm{\Delta }\varphi _{1/2}}+1)+r(e^{i\mathrm{\Delta }\varphi _{1/2}}-1)+{\displaystyle \frac{4\epsilon _{3/2}}{3\epsilon ^{\prime }}}\left[e^{i(\varphi _{3/2}-\varphi _{1/2}^+)}-1\right]}},`$ (39) where $`\mathrm{\Delta }\varphi _{1/2}=\varphi _{1/2}^{-}-\varphi _{1/2}^+`$. This result, although rather complicated, exhibits in a transparent way the structure of possible rescattering effects. In particular, it is evident that the assumption of “colour suppression” of the electroweak penguin contribution, i.e., the statement that $`q_C=O(r)`$ , relies on the smallness of the strong-interaction phase differences between the various terms. More specifically, this assumption would only be justified if $$|\mathrm{\Delta }\varphi _{1/2}|<2r\widehat{=}25^{\circ },|\varphi _{3/2}-\varphi _{1/2}^+|<\frac{3r}{2}\frac{\epsilon ^{\prime }}{\epsilon _{3/2}}\widehat{=}15^{\circ }.$$ (40) We believe that, whereas the first relation may be a reasonable working hypothesis, the second one constitutes a strong constraint on the strong-interaction phases, which cannot be justified in a model-independent way. 
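As a numerical sanity check (not part of the original analysis; the parameter values below are illustrative), one can verify that the general result (39) collapses to the simplified relations given below in (41) once the phase difference $`\mathrm{\Delta }\varphi _{1/2}`$ is set to zero:

```python
import cmath
import math
import random

def eps_T_qC(eps_p, r, eps32, delta_p, dphi12, dphi):
    """Complex parameters eps_T*e^{i phi_T} and q_C*e^{i omega_C} from Eq. (39),
    evaluated in the frame phi_{1/2}^+ = 0, so that dphi12 = phi_{1/2}^- - phi_{1/2}^+
    and dphi = phi_{3/2} - phi_{1/2}^+."""
    a = cmath.exp(1j * dphi12)
    b = cmath.exp(1j * dphi) - 1.0
    x = 4.0 * eps32 / (3.0 * eps_p)
    eps_T = eps_p / 2.0 * ((a + 1.0) + r * (a - 1.0)) + 2.0 * eps32 / 3.0 * b
    q_C = delta_p * (r * (a + 1.0) + (a - 1.0) + x * b) / ((a + 1.0) + r * (a - 1.0) + x * b)
    return eps_T, q_C

def eps_T_qC_simplified(eps_p, r, eps32, delta_p, dphi):
    """Approximate relations (41), i.e. Eq. (39) with dphi12 = 0."""
    y = 2.0 * eps32 / (3.0 * eps_p) * (cmath.exp(1j * dphi) - 1.0)
    eps_T = eps_p + 2.0 * eps32 / 3.0 * (cmath.exp(1j * dphi) - 1.0)
    q_C = delta_p * (r + y) / (1.0 + y)
    return eps_T, q_C
```

For $`\mathrm{\Delta }\varphi _{1/2}=0`$ both functions return identical values; for nonzero $`\mathrm{\Delta }\varphi _{1/2}`$ the difference quantifies the error of the simplified model.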
As a simple but not unrealistic model we may thus consider the approximate relations obtained by setting $`\mathrm{\Delta }\varphi _{1/2}=0`$ , which have been derived previously in : $`\epsilon _Te^{i\varphi _T}`$ $`\simeq `$ $`\epsilon ^{\prime }e^{i\varphi _{1/2}^+}+{\displaystyle \frac{2\epsilon _{3/2}}{3}}\left(e^{i\varphi _{3/2}}-e^{i\varphi _{1/2}^+}\right),`$ $`q_Ce^{i\omega _C}`$ $`\simeq `$ $`\delta _+{\displaystyle \frac{r+{\displaystyle \frac{2\epsilon _{3/2}}{3\epsilon ^{\prime }}}\left[e^{i(\varphi _{3/2}-\varphi _{1/2}^+)}-1\right]}{1+{\displaystyle \frac{2\epsilon _{3/2}}{3\epsilon ^{\prime }}}\left[e^{i(\varphi _{3/2}-\varphi _{1/2}^+)}-1\right]}}.`$ (41) The fact that in the case of a sizable phase difference between the $`I=\frac{1}{2}`$ and $`I=\frac{3}{2}`$ isospin amplitudes the electroweak penguin contribution may no longer be as small as $`O(r)`$ has been stressed in but was overlooked in . Likewise, there is some uncertainty in the value of the parameter $`\epsilon _T`$, which in the topological amplitude approach corresponds to the ratio $`|TA|/|P|`$ . Unlike the parameter $`\epsilon _{3/2}`$, the quantities $`\epsilon ^{\prime }`$ and $`r`$ cannot be determined experimentally using SU(3) symmetry relations. But even if we assume that the factorization result (37) is valid and take $`\epsilon _{3/2}=0.24`$ and $`\epsilon ^{\prime }=0.20`$ as fixed, we still obtain $`0.12<\epsilon _T<0.20`$ depending on the value of the phase difference $`(\varphi _{3/2}-\varphi _{1/2}^+)`$. Note that from the approximate expression (41) it follows that $`\epsilon _T<\epsilon ^{\prime }`$ provided that $`\epsilon ^{\prime }/\epsilon _{3/2}>2/3`$, as indicated by the factorization result. This observation may explain why previous authors find the value $`\epsilon ^{\prime }=0.15\pm 0.05`$ , which tends to be somewhat smaller than the factorization prediction $`\epsilon ^{\prime }\approx 0.20`$. 
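The quoted range is easy to reproduce numerically (an illustrative sketch, not from the paper): with $`\epsilon ^{\prime }=0.20`$ and $`\epsilon _{3/2}=0.24`$ fixed, the magnitude of $`\epsilon _T`$ from (41) is $`|\epsilon ^{\prime }+(2\epsilon _{3/2}/3)(e^{i\mathrm{\Delta }}-1)|`$, which indeed varies between 0.12 (at $`\mathrm{\Delta }=180^{\circ }`$) and 0.20 (at $`\mathrm{\Delta }=0^{\circ }`$):

```python
import cmath
import math

def eps_T_mag(dphi_deg, eps_p=0.20, eps32=0.24):
    """|eps_T| from the simplified relation (41), as a function of the
    phase difference phi_{3/2} - phi_{1/2}^+ given in degrees."""
    d = math.radians(dphi_deg)
    return abs(eps_p + 2.0 * eps32 / 3.0 * (cmath.exp(1j * d) - 1.0))

vals = [eps_T_mag(d) for d in range(0, 361)]
lo, hi = min(vals), max(vals)
```

The extremes of the scan reproduce the range $`0.12<\epsilon _T<0.20`$ quoted in the text.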
### 2.6 Numerical results Before turning to phenomenological applications of our results in the next section, it is instructive to consider some numerical results obtained using the above parametrizations. Since our main concern in this paper is to study rescattering effects, we will keep $`\epsilon _{3/2}=0.24`$ fixed and assume that $`\epsilon ^{\prime }/\epsilon _{3/2}=0.84\pm 0.04`$ and $`r=0.21\pm 0.05`$ as predicted by factorization. Also, we shall use the factorization result for the parameter $`\kappa `$ in (19). For the strong-interaction phases we consider two sets of parameter choices: one which we believe is realistic and one which we think is very conservative. For the realistic set, we require that $`0.5<\xi <1.5`$, $`|\varphi _{3/2}-\varphi _{1/2}^{(+)}|<45^{\circ }`$, and $`|\varphi _I^+-\varphi _I^{-}|<20^{\circ }`$ (with $`I=\frac{1}{2}`$ or $`\frac{3}{2}`$). For the conservative set, we increase these ranges to $`0<\xi <2`$, $`|\varphi _{3/2}-\varphi _{1/2}^{(+)}|<90^{\circ }`$, and $`|\varphi _I^+-\varphi _I^{-}|<45^{\circ }`$. In our opinion, values outside these ranges are quite inconceivable. Note that, for the moment, no assumption is made about the relative strong-interaction phases of tree and penguin amplitudes. We choose the various parameters randomly inside the allowed intervals and present the results for the quantities $`\epsilon _ae^{i\eta }`$ in units of $`e^{i\varphi }`$, $`\epsilon _Te^{i\varphi _T}`$ in units of $`e^{i\varphi _{3/2}}`$, and $`q_{(C)}e^{i\omega _{(C)}}`$ in units of $`\delta _+`$ in the form of scatter plots in Figures 1 and 2. The black and the gray points correspond to the realistic and to the conservative parameter sets, respectively. The same colour coding will be used throughout this work. The left-hand plot in Figure 1 shows that the parameter $`\epsilon _a`$ generally takes rather small values. For the realistic parameter set we find $`\epsilon _a<0.08`$, whereas values up to 0.15 are possible for the conservative set. 
There is no strong correlation between the strong-interaction phases $`\eta `$ and $`\varphi `$. An important implication of these observations is that, in general, there will be a very small difference between the quantities $`\epsilon _{3/2}`$ and $`\overline{\epsilon }_{3/2}`$ in (10). We shall therefore consider the same range of values for the two parameters. From the right-hand plot we observe that for realistic parameter choices $`0.15<\epsilon _T<0.22`$; however, values between 0.08 and 0.24 are possible for the conservative parameter set. Note that there is a rather strong correlation between the strong-interaction phases $`\varphi _T`$ and $`\varphi _{3/2}`$, which differ by less than $`20^{\circ }`$ for the realistic parameter set. We will see in Section 5 that this implies a strong correlation between the direct CP asymmetries in the decays $`B^\pm \to \pi ^0K^\pm `$ and $`B^0\to \pi ^{\mp }K^\pm `$. Figure 2 shows that, even for the realistic parameter set, the ratio $`q_C/\delta _+`$ can be substantially larger than the naive expectation of about 0.2. Indeed, values as large as 0.7 are possible, and for the conservative set the wide range $`0<q_C/\delta _+<1.4`$ is allowed. Likewise, the strong-interaction phase $`\omega _C`$ can naturally be large and take values of up to $`75^{\circ }`$ even for the realistic parameter set. (Note that, without loss of generality, only points with positive values of $`\omega _{(C)}`$ are displayed in the plot. The distribution is invariant under a change of the sign of the strong-interaction phase.) This is in stark contrast to the case of the quantity $`qe^{i\omega }`$ entering the isospin amplitude $`A_{3/2}`$, where both the magnitude $`q`$ and the phase $`\omega `$ are determined within very small uncertainties, as is evident from the figure. 
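A coarse, deterministic version of this parameter scan can be sketched by evaluating Eq. (39) at the corner points of the realistic ranges (illustrative only; the paper samples the full intervals randomly, and we use the central values $`\epsilon _{3/2}=0.24`$ and $`\delta _+=0.68`$ from the text, with $`\epsilon ^{\prime }`$ and $`r`$ varied within their factorization ranges):

```python
import cmath
import itertools
import math

def eps_T_qC(eps_p, r, eps32, delta_p, dphi12, dphi):
    """Magnitudes |eps_T| and |q_C| from Eq. (39) in the frame phi_{1/2}^+ = 0."""
    a = cmath.exp(1j * dphi12)
    b = cmath.exp(1j * dphi) - 1.0
    x = 4.0 * eps32 / (3.0 * eps_p)
    eps_T = eps_p / 2.0 * ((a + 1.0) + r * (a - 1.0)) + 2.0 * eps32 / 3.0 * b
    q_C = delta_p * (r * (a + 1.0) + (a - 1.0) + x * b) / ((a + 1.0) + r * (a - 1.0) + x * b)
    return abs(eps_T), abs(q_C)

eps_T_vals, qC_over_delta = [], []
for eps_p, r, d12, dp in itertools.product(
        [0.192, 0.2112],                          # eps' = 0.24 * (0.84 -/+ 0.04)
        [0.16, 0.26],                             # r = 0.21 -/+ 0.05
        [math.radians(v) for v in (-20, 0, 20)],  # dphi12, realistic set
        [math.radians(v) for v in (-45, 0, 45)]): # phi_{3/2} - phi_{1/2}^+
    eT, qC = eps_T_qC(eps_p, r, 0.24, 0.68, d12, dp)
    eps_T_vals.append(eT)
    qC_over_delta.append(qC / 0.68)
```

Already at these corner points one finds $`\epsilon _T`$ roughly between 0.15 and 0.22, and values of $`q_C/\delta _+`$ well above the naive estimate of about 0.2, in line with the scatter plots.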
## 3 Hadronic uncertainties in the Fleischer–Mannel bound As a first phenomenological application of the results of the previous section, we investigate the effects of rescattering and electroweak penguin contributions on the Fleischer–Mannel bound on $`\gamma `$ derived from the ratio $`R`$ defined in (2). In general, $`R\ne 1`$ because the parameter $`\epsilon _T`$ in (9) does not vanish. To leading order in the small quantities $`\epsilon _i`$, we find $$R\approx 1-2\epsilon _T\left[\mathrm{cos}\stackrel{~}{\varphi }\mathrm{cos}\gamma -q_C\mathrm{cos}(\stackrel{~}{\varphi }+\omega _C)\right]+O(\epsilon _i^2),$$ (42) where $`\stackrel{~}{\varphi }=\varphi _T-\varphi _P`$. Because of the uncertainty in the values of the hadronic parameters $`\epsilon _T`$, $`q_C`$ and $`\omega _C`$, it is difficult to convert this result into a constraint on $`\gamma `$. Fleischer and Mannel have therefore suggested deriving a lower bound on the ratio $`R`$ by eliminating the parameter $`\epsilon _T`$ from the exact expression for $`R`$. In the limit where $`\epsilon _a`$ and $`q_C`$ are set to zero, this yields $`R\ge \mathrm{sin}^2\gamma `$ . However, this simple result must be corrected in the presence of rescattering effects and electroweak penguin contributions. The generalization is $$R\ge \frac{1-2q_C\epsilon _a\mathrm{cos}(\omega _C+\eta )+q_C^2\epsilon _a^2}{(1-2q_C\mathrm{cos}\omega _C\mathrm{cos}\gamma +q_C^2)(1-2\epsilon _a\mathrm{cos}\eta \mathrm{cos}\gamma +\epsilon _a^2)}\mathrm{sin}^2\gamma .$$ (43) The most dangerous rescattering effects arise from the terms involving the electroweak penguin parameter $`q_C`$. As seen from Figure 2, even restricting ourselves to the realistic parameter set we can have $`2q_C\mathrm{cos}\omega _C\approx 0.7\delta _+`$ and $`q_C^2\approx 0.5\delta _+^2\approx 0.2`$, implying that the quadratic term in the denominator by itself can give a 20% correction. The rescattering effects parametrized by $`\epsilon _a`$ are presumably less important. 
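The content of the generalized bound (43) is easy to explore numerically. Below is a small sketch (illustrative, with hypothetical parameter values): for $`q_C=\epsilon _a=0`$ the right-hand side reduces to $`\mathrm{sin}^2\gamma `$, while a sizable electroweak penguin term visibly weakens the bound:

```python
import math

def fm_bound_rhs(gamma, q_C=0.0, omega_C=0.0, eps_a=0.0, eta=0.0):
    """Right-hand side of the generalized Fleischer-Mannel inequality (43),
    so that R >= fm_bound_rhs(gamma, ...)."""
    num = 1.0 - 2.0 * q_C * eps_a * math.cos(omega_C + eta) + (q_C * eps_a) ** 2
    d1 = 1.0 - 2.0 * q_C * math.cos(omega_C) * math.cos(gamma) + q_C ** 2
    d2 = 1.0 - 2.0 * eps_a * math.cos(eta) * math.cos(gamma) + eps_a ** 2
    return num / (d1 * d2) * math.sin(gamma) ** 2
```

For example, at $`\gamma =90^{\circ }`$ a value $`q_C\approx 0.48`$ alone lowers the bound from 1 to about 0.81, consistent with the 20% correction quoted above.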
The results of the numerical analysis are shown in Figure 3. In addition to the parameter choices described in Section 2.6, we vary $`\epsilon _{3/2}`$ and $`\delta _+`$ in the ranges $`0.24\pm 0.06`$ and $`0.68\pm 0.11`$, respectively. Now also the relative strong-interaction phase $`\varphi `$ between the penguin and $`I=\frac{3}{2}`$ tree amplitudes enters. We allow values $`|\varphi |<90^{\circ }`$ for the realistic parameter set, and impose no constraint on $`\varphi `$ at all for the conservative parameter set. The figure shows that the corrections to the Fleischer–Mannel bound are not as large as suggested by the result (43), the reason being that this result is derived allowing arbitrary values of $`\epsilon _T`$, whereas in our analysis the allowed values for this parameter are constrained. However, there are sizable violations of the naive bound $`R<\mathrm{sin}^2\gamma `$ for $`|\gamma |`$ in the range between $`65^{\circ }`$ and $`125^{\circ }`$, which includes most of the region $`47^{\circ }<\gamma <105^{\circ }`$ preferred by the global analysis of the unitarity triangle . Whereas these violations are numerically small for the realistic parameter set, they can become large for the conservative set, because then a large value of the phase difference $`|\varphi _{3/2}-\varphi _{1/2}^+|`$ is allowed . We conclude that under conservative assumptions only for values $`R<0.8`$ can a constraint on $`\gamma `$ be derived. Fleischer has argued that one can improve upon the above analysis by extracting some of the unknown hadronic parameters $`q_C`$, $`\epsilon _a`$, $`\omega _C`$ and $`\eta `$ from measurements of other decay processes . The idea is to combine information on the ratio $`R`$ with measurements of the direct CP asymmetries in the decays $`B^0\to \pi ^{\mp }K^\pm `$ and $`B^\pm \to \pi ^\pm K^0`$, as well as of the ratio $`R_K`$ defined in (33). 
One can then derive a bound on $`R`$ that depends, besides the electroweak penguin parameters $`q_C`$ and $`\omega _C`$, only on a combination $`w=w(\epsilon _a,\eta )`$, which can be determined up to a two-fold ambiguity assuming SU(3) flavour symmetry. Besides the fact that this approach relies on SU(3) symmetry and involves significantly more experimental input than the original Fleischer–Mannel analysis, it does not allow one to eliminate the theoretical uncertainty related to the presence of electroweak penguin contributions. ## 4 Hadronic uncertainties in the $`𝑹_{\mathbf{}}`$ bound As a second application, we investigate the implications of rescattering effects on the bound on $`\mathrm{cos}\gamma `$ derived from a measurement of the ratio $`R_{}`$ defined in (2). In this case, the theoretical analysis is cleaner because there is model-independent information on the values of the hadronic parameters $`\epsilon _{3/2}`$, $`q`$ and $`\omega `$ entering the parametrization of the isospin amplitude $`A_{3/2}`$ in (9). The important point noted in is that the decay amplitudes for $`B^\pm \to \pi ^\pm K^0`$ and $`B^\pm \to \pi ^0K^\pm `$ differ only in this single isospin amplitude. Since the overall strength of $`A_{3/2}`$ is governed by the parameter $`\overline{\epsilon }_{3/2}`$ and thus can be determined from experiment without much uncertainty, we have suggested deriving a bound on $`\mathrm{cos}\gamma `$ without eliminating this parameter. In this respect, our strategy is different from the Fleischer–Mannel analysis. 
The exact theoretical expression for the inverse of the ratio $`R_{}`$ is given by $`R_{}^{-1}`$ $`=`$ $`1+2\overline{\epsilon }_{3/2}{\displaystyle \frac{\mathrm{cos}\varphi (\delta _{\mathrm{EW}}-\mathrm{cos}\gamma )+\epsilon _a\mathrm{cos}(\varphi -\eta )(1-\delta _{\mathrm{EW}}\mathrm{cos}\gamma )}{\sqrt{1-2\epsilon _a\mathrm{cos}\eta \mathrm{cos}\gamma +\epsilon _a^2}}}`$ (44) $`+\overline{\epsilon }_{3/2}^2(1-2\delta _{\mathrm{EW}}\mathrm{cos}\gamma +\delta _{\mathrm{EW}}^2),`$ where $`\overline{\epsilon }_{3/2}`$ has been defined in (10). Relevant for the bound on $`\mathrm{cos}\gamma `$ is the maximal value $`R_{}^{-1}`$ can take for fixed $`\gamma `$. In , we have worked to linear order in the parameters $`\epsilon _i`$, so that terms proportional to $`\epsilon _a`$ could be neglected. Here, we shall generalize the discussion and keep all terms exactly. Varying the strong-interaction phases $`\varphi `$ and $`\eta `$ independently, we find that the maximum value of $`R_{}^{-1}`$ is given by $$R_{}^{-1}\le 1+2\overline{\epsilon }_{3/2}\frac{|\delta _{\mathrm{EW}}-\mathrm{cos}\gamma \pm \epsilon _a(1-\delta _{\mathrm{EW}}\mathrm{cos}\gamma )|}{\sqrt{1-2\epsilon _a\mathrm{cos}\gamma +\epsilon _a^2}}+\overline{\epsilon }_{3/2}^2(1-2\delta _{\mathrm{EW}}\mathrm{cos}\gamma +\delta _{\mathrm{EW}}^2),$$ (45) where the upper (lower) signs apply if $`\mathrm{cos}\gamma <c_0`$ ($`\mathrm{cos}\gamma >c_0`$) with $$c_0=\frac{(1+\epsilon _a^2)\delta _{\mathrm{EW}}}{1+\epsilon _a^2\delta _{\mathrm{EW}}^2}\approx \delta _{\mathrm{EW}}.$$ (46) Keeping all terms in $`\overline{\epsilon }_{3/2}`$ exactly, but working to linear order in $`\epsilon _a`$, we find the simpler result $$R_{}^{-1}\le \left(1+\overline{\epsilon }_{3/2}|\delta _{\mathrm{EW}}-\mathrm{cos}\gamma |\right)^2+\overline{\epsilon }_{3/2}(\overline{\epsilon }_{3/2}+2\epsilon _a)\mathrm{sin}^2\gamma +O(\overline{\epsilon }_{3/2}\epsilon _a^2).$$ (47) The higher-order terms omitted here are of order 1% and thus negligible. 
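The consistency of (45) and (47) can be checked numerically (an illustrative sketch, not the paper's code): for $`\epsilon _a=0`$ the two expressions agree identically, and for $`\epsilon _a=0.1`$ they differ only by the neglected higher-order terms, provided one stays on the $`\mathrm{cos}\gamma <c_0`$ branch where the linearization applies:

```python
import math

def rstar_inv_max(gamma, ebar, dEW, eps_a):
    """Maximal value of 1/R_* from Eq. (45), with the sign chosen as in Eq. (46)."""
    c = math.cos(gamma)
    c0 = (1.0 + eps_a ** 2) * dEW / (1.0 + (eps_a * dEW) ** 2)
    sign = 1.0 if c < c0 else -1.0
    num = abs(dEW - c + sign * eps_a * (1.0 - dEW * c))
    den = math.sqrt(1.0 - 2.0 * eps_a * c + eps_a ** 2)
    return 1.0 + 2.0 * ebar * num / den + ebar ** 2 * (1.0 - 2.0 * dEW * c + dEW ** 2)

def rstar_inv_max_lin(gamma, ebar, dEW, eps_a):
    """Eq. (47): exact in ebar, linear in eps_a."""
    c = math.cos(gamma)
    return (1.0 + ebar * abs(dEW - c)) ** 2 + ebar * (ebar + 2.0 * eps_a) * math.sin(gamma) ** 2
```

With $`\overline{\epsilon }_{3/2}=0.24`$ and $`\delta _{\mathrm{EW}}=0.64`$ the residual difference for $`\epsilon _a=0.1`$ and $`\gamma `$ between $`60^{\circ }`$ and $`150^{\circ }`$ stays at the few-per-mille level, consistent with the quoted 1% estimate.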
The annihilation contribution $`\epsilon _a`$ enters this result in a very transparent way: increasing $`\epsilon _a`$ increases the maximal value of $`R_{}^{-1}`$ and therefore weakens the bound on $`\mathrm{cos}\gamma `$. In , we have introduced the quantity $`\mathrm{\Delta }_{}`$ by writing $`R_{}=(1-\mathrm{\Delta }_{})^2`$, so that $`\mathrm{\Delta }_{}=1-\sqrt{R_{}}`$ obeys the bound shown in (3). Note that to first order in $`\overline{\epsilon }_{3/2}`$ the rescattering contributions proportional to $`\epsilon _a`$ do not enter.<sup>6</sup> (Contrary to what has been claimed in , this does not mean that we were ignoring rescattering effects altogether. At linear order, these effects enter only through the strong-interaction phase difference $`\varphi `$, which we kept arbitrary in deriving the bound on $`\mathrm{cos}\gamma `$.) Armed with the result (45), we can now derive the exact expression for the maximal value of the quantity $`\mathrm{\Delta }_{}`$, corresponding to the minimal value of $`R_{}`$. It is advantageous to consider the ratio $`\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2}`$, the bound for which is to first order independent of the parameter $`\overline{\epsilon }_{3/2}`$. We recall that this ratio can be determined experimentally up to nonfactorizable SU(3)-breaking corrections. Its current value is $`\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2}=1.33\pm 0.78`$. In the left-hand plot in Figure 4, we show the maximal value for the ratio $`\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2}`$ for different values of the parameters $`\overline{\epsilon }_{3/2}`$ and $`\epsilon _a`$. The upper (red) and lower (blue) pairs of curves correspond to $`\overline{\epsilon }_{3/2}=0.18`$ and 0.30, respectively, and span the allowed range of values for this parameter. For each pair, the dashed and solid lines correspond to $`\epsilon _a=0`$ and 0.1, respectively. 
To saturate the bound (45) requires $`\eta -\varphi =0^{\circ }`$ or $`180^{\circ }`$, in which case $`\epsilon _a=0.1`$ is a conservative upper limit (see Figure 1). The dotted curve shows for comparison the linearized result obtained by neglecting the higher-order terms in (3). The parameter $`\delta _{\mathrm{EW}}=0.64`$ is kept fixed in this plot. As expected, the bound on the ratio $`\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2}`$ is only weakly dependent on the values of $`\overline{\epsilon }_{3/2}`$ and $`\epsilon _a`$. In particular, not much is lost by using the conservative value $`\epsilon _a=0.1`$. Note that for values $`\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2}>0.8`$ the linear bound (3) is conservative, i.e., weaker than the exact bound, and even for smaller values of $`\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2}`$ the violations of this bound are rather small. Expanding the exact bound to next-to-leading order in $`\overline{\epsilon }_{3/2}`$, we obtain $$\frac{\mathrm{\Delta }_{}}{\overline{\epsilon }_{3/2}}\le |\delta _{\mathrm{EW}}-\mathrm{cos}\gamma |-\overline{\epsilon }_{3/2}\left[\left(\frac{\mathrm{\Delta }_{}}{\overline{\epsilon }_{3/2}}\right)^2-\left(\frac{1}{2}+\frac{\epsilon _a}{\overline{\epsilon }_{3/2}}\right)\mathrm{sin}^2\gamma \right]+O(\overline{\epsilon }_{3/2}^2),$$ (48) showing that $`\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2}>(1/2+\epsilon _a/\overline{\epsilon }_{3/2})^{1/2}`$ is a criterion for the validity of the linearized bound. This generalizes a condition derived, for the special case $`\epsilon _a\ll \overline{\epsilon }_{3/2}`$, in . To obtain a reliable bound on the weak phase $`\gamma `$, we must account for the theoretical uncertainty in the value of the electroweak penguin parameter $`\delta _{\mathrm{EW}}`$ in the Standard Model, which is however straightforward to do by lowering (increasing) the value of this parameter used in calculating the right (left) branch of the curves defining the bound. 
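The next-to-leading-order expansion (48) can be verified against the exact bound by making $`\overline{\epsilon }_{3/2}`$ artificially small (an illustrative numerical check with hypothetical parameter values; $`\gamma `$ is kept on the branch $`\mathrm{cos}\gamma <c_0`$, and $`(\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2})^2`$ is replaced by its leading-order value $`(\delta _{\mathrm{EW}}-\mathrm{cos}\gamma )^2`$, which is legitimate at this order):

```python
import math

def exact_delta_ratio(gamma, ebar, eps_a, dEW=0.64):
    """Maximal Delta_*/ebar from the exact bound (45), upper-sign branch."""
    c = math.cos(gamma)
    num = abs(dEW - c + eps_a * (1.0 - dEW * c))
    den = math.sqrt(1.0 - 2.0 * eps_a * c + eps_a ** 2)
    r_inv = 1.0 + 2.0 * ebar * num / den + ebar ** 2 * (1.0 - 2.0 * dEW * c + dEW ** 2)
    return (1.0 - r_inv ** -0.5) / ebar          # Delta_* = 1 - sqrt(R_*)

def nlo_delta_ratio(gamma, ebar, eps_a, dEW=0.64):
    """Expansion (48), with (Delta_*/ebar)^2 -> (dEW - cos(gamma))^2."""
    c = math.cos(gamma)
    d = abs(dEW - c)
    return d - ebar * (d ** 2 - (0.5 + eps_a / ebar) * math.sin(gamma) ** 2)
```

For small $`\overline{\epsilon }_{3/2}`$ the two expressions agree up to the $`O(\overline{\epsilon }_{3/2}^2)`$ remainder.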
The solid line in the right-hand plot in Figure 4 shows the most conservative bound obtained by using $`\epsilon _a=0.1`$ and varying the other two parameters in the ranges $`0.18<\overline{\epsilon }_{3/2}<0.30`$ and $`0.49<\delta _{\mathrm{EW}}<0.79`$. The scatter plot shows the distribution of values of $`\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2}`$ obtained by scanning the strong-interaction parameters over the same ranges as we did for the Fleischer–Mannel case in the previous section. The horizontal band shows the current central experimental value with its $`1\sigma `$ variation. Unlike the Fleischer–Mannel bound, there is no violation of the bound (by construction), since all parameters are varied over conservative ranges. Indeed, for the points close to the right branch of the bound $`\eta -\varphi =0^{\circ }`$, so that according to Figure 1 almost all of these points have $`\epsilon _a<0.03`$, which is smaller than the value we used to obtain the theoretical curve. The dashed curve shows the bound for $`\epsilon _a=0`$, which is seen not to be violated by any point. This shows that the rescattering effects parametrized by the quantity $`\epsilon _a`$ play a very minor role in the bound derived from the ratio $`R_{}`$. We conclude that, if the current experimental value is confirmed to within one standard deviation, i.e., if future measurements find that $`\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2}>0.55`$, this would imply the bound $`|\gamma |>75^{\circ }`$, which is very close to the value of $`77^{\circ }`$ obtained in . Given that the experimental determination of the parameter $`\overline{\epsilon }_{3/2}`$ is limited by unknown nonfactorizable SU(3)-breaking corrections, one may want to be more conservative and derive a bound directly from the measured ratio $`R_{}`$ rather than the ratio $`\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2}`$. 
In the left-hand plot in Figure 5, we show the same distribution as in the right-hand plot in Figure 4, but now for the ratio $`R_{}`$. The resulting bound on $`\gamma `$ is slightly weaker, because now there is a stronger dependence on the value of $`\overline{\epsilon }_{3/2}`$, which we vary as previously between 0.18 and 0.30. If the current value of $`R_{}`$ is confirmed to within one standard deviation, i.e., if future measurements find that $`R_{}<0.71`$, this would imply the bound $`|\gamma |>72^{\circ }`$. Besides providing interesting information on $`\gamma `$, a measurement of $`R_{}`$ or $`\mathrm{\Delta }_{}/\overline{\epsilon }_{3/2}`$ can yield information about the strong-interaction phase $`\varphi `$. In the right-hand plot in Figure 5, we show the distribution of points obtained for fixed values of the strong-interaction phase $`|\varphi |`$ between $`0^{\circ }`$ and $`180^{\circ }`$ in steps of $`30^{\circ }`$. For simplicity, the parameters $`\epsilon _{3/2}=0.24`$ and $`\delta _{\mathrm{EW}}=0.64`$ are kept fixed in this plot, while all other hadronic parameters are scanned over the realistic parameter set. We observe that, independently of $`\gamma `$, a value $`R_{}<0.8`$ requires that $`|\varphi |<90^{\circ }`$. This conclusion remains true if the parameters $`\epsilon _{3/2}`$ and $`\delta _{\mathrm{EW}}`$ are varied over their allowed ranges. We shall study the correlation between the weak phase $`\gamma `$ and the strong phase $`\varphi `$ in more detail in Section 6. Finally, we emphasize that a future, precise measurement of the ratio $`R_{}`$ may also yield a surprise and indicate physics beyond the Standard Model. The global analysis of the unitarity triangle requires that $`|\gamma |<105^{\circ }`$ , for which the lowest possible value of $`R_{}`$ in the Standard Model is about 0.55. If the experimental value were to turn out to be less than that, this would be strong evidence for New Physics. 
In particular, in many extensions of the Standard Model there would be additional contributions to the electroweak penguin parameter $`\delta _{\mathrm{EW}}`$ arising, e.g., from penguin and box diagrams containing new charged Higgs bosons. This could explain a smaller value of $`R_{}`$. Indeed, from (47) we can derive the bound $`\delta _{\mathrm{EW}}`$ $`\ge `$ $`{\displaystyle \frac{\sqrt{R_{}^{-1}-\overline{\epsilon }_{3/2}(\overline{\epsilon }_{3/2}+2\epsilon _a)\mathrm{sin}^2\gamma _{\mathrm{max}}}-1}{\overline{\epsilon }_{3/2}}}`$ (49) $`+\mathrm{cos}\gamma _{\mathrm{max}},`$ where $`\gamma _{\mathrm{max}}`$ is the maximal value allowed by the global analysis (assuming that $`\gamma _{\mathrm{max}}>\mathrm{arccos}(c_0)\approx 50^{\circ }`$). In Figure 6, we show this bound for the current value $`\gamma _{\mathrm{max}}=105^{\circ }`$ and three different values of $`\overline{\epsilon }_{3/2}`$ as well as two different values of $`\epsilon _a`$. The gray band shows the allowed range for $`\delta _{\mathrm{EW}}`$ in the Standard Model. In the hypothetical situation where the current central values $`R_{}=0.47`$ and $`\overline{\epsilon }_{3/2}=0.24`$ would be confirmed by more precise measurements, we would conclude that the value of $`\delta _{\mathrm{EW}}`$ is at least twice as large as predicted by the Standard Model. ## 5 Prospects for direct CP asymmetries and prediction for the $`B^0\to \pi ^0K^0`$ branching ratio ### 5.1 Decays of charged $`𝑩`$ mesons We will now analyse the potential of the various $`B\to \pi K`$ decay modes for showing large direct CP violation, starting with the decays of charged $`B`$ mesons. The smallness of the rescattering effects parametrized by $`\epsilon _a`$ (see Figure 1) combined with the simplicity of the isospin amplitude $`A_{3/2}`$ (see Section 2.3) makes these processes particularly clean from a theoretical point of view. 
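The quoted conclusion can be reproduced directly from (49) (a sketch using the central values given in the text; illustrative only):

```python
import math

def dEW_lower_bound(R_star, ebar, eps_a, gamma_max):
    """Lower bound (49) on the electroweak penguin parameter delta_EW implied
    by a measured value of R_* and the maximal allowed weak phase gamma_max."""
    s2 = math.sin(gamma_max) ** 2
    root = math.sqrt(1.0 / R_star - ebar * (ebar + 2.0 * eps_a) * s2)
    return (root - 1.0) / ebar + math.cos(gamma_max)

bound = dEW_lower_bound(0.47, 0.24, 0.1, math.radians(105))
```

With $`R_{}=0.47`$, $`\overline{\epsilon }_{3/2}=0.24`$, $`\epsilon _a=0.1`$ and $`\gamma _{\mathrm{max}}=105^{\circ }`$, the bound comes out near 1.5, i.e. more than twice the Standard-Model value $`\delta _{\mathrm{EW}}=0.64`$.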
Explicit expressions for the CP asymmetries in the various decays can be derived in a straightforward way starting from the isospin decomposition in (6) and inserting the parametrizations for the isospin amplitudes derived in Section 2. The result for the CP asymmetry in the decays $`B^\pm \to \pi ^\pm K^0`$ has already been presented in (31). The corresponding expression for the decays $`B^\pm \to \pi ^0K^\pm `$ reads $$A_{\mathrm{CP}}(\pi ^0K^+)=2\mathrm{sin}\gamma R_{}\frac{\epsilon _{3/2}\mathrm{sin}\varphi +\epsilon _a\mathrm{sin}\eta -\epsilon _{3/2}\epsilon _a\delta _{\mathrm{EW}}\mathrm{sin}(\varphi -\eta )}{1-2\epsilon _a\mathrm{cos}\eta \mathrm{cos}\gamma +\epsilon _a^2},$$ (50) where the theoretical expression for $`R_{}`$ is given in (44), and we have not replaced $`\epsilon _{3/2}`$ in terms of $`\overline{\epsilon }_{3/2}`$. Neglecting terms of order $`\epsilon _a`$ and working to first order in $`\epsilon _{3/2}`$, we find the estimate $`A_{\mathrm{CP}}(\pi ^0K^+)\approx 2\epsilon _{3/2}\mathrm{sin}\gamma \mathrm{sin}\varphi \approx 0.5\mathrm{sin}\gamma \mathrm{sin}\varphi `$, indicating that potentially there could be a very large CP asymmetry in this decay (note that $`\mathrm{sin}\gamma >0.73`$ is required by the global analysis of the unitarity triangle). In Figure 7, we show the results for the two direct CP asymmetries in (31) and (50), both for the realistic and for the conservative parameter sets. These results confirm the general observations made above. For the realistic parameter set, and with $`\gamma `$ between $`47^{\circ }`$ and $`105^{\circ }`$ as indicated by the global analysis of the unitarity triangle , we find CP asymmetries of up to 15% in $`B^\pm \to \pi ^\pm K^0`$ decays, and of up to 50% in $`B^\pm \to \pi ^0K^\pm `$ decays. Of course, to have large asymmetries requires that the sines of the strong-interaction phases $`\eta `$ and $`\varphi `$ are not small. However, this is not unlikely to happen. 
According to the left-hand plot in Figure 1, the phase $`\eta `$ can take any value, and the phase $`\varphi `$ could quite conceivably be large due to the different decay mechanisms of tree- and penguin-initiated processes. We stress that there is no strong correlation between the CP asymmetries in the two decay processes, because as shown in Figure 1 there is no such correlation between the strong-interaction phases $`\eta `$ and $`\varphi `$. ### 5.2 Decays of neutral $`𝑩`$ mesons Because of their dependence on the hadronic parameters $`\epsilon _T`$, $`q_C`$ and $`\omega _C`$ entering through the sum $`A_{1/2}+A_{3/2}`$ of isospin amplitudes, the theoretical analysis of neutral $`B\to \pi K`$ decays is affected by larger hadronic uncertainties than that of the decays of charged $`B`$ mesons. Nevertheless, some interesting predictions regarding neutral $`B`$ decays can be made and tested experimentally. The expression for the direct CP asymmetry in the decays $`B^0\to \pi ^{\mp }K^\pm `$ is $$A_{\mathrm{CP}}(\pi ^{-}K^+)=\frac{2\mathrm{sin}\gamma }{R}\frac{\epsilon _T(\mathrm{sin}\stackrel{~}{\varphi }-\epsilon _Tq_C\mathrm{sin}\omega _C)+\epsilon _a[\mathrm{sin}\eta -\epsilon _Tq_C\mathrm{sin}(\stackrel{~}{\varphi }-\eta +\omega _C)]}{1-2\epsilon _a\mathrm{cos}\eta \mathrm{cos}\gamma +\epsilon _a^2},$$ (51) where $`\stackrel{~}{\varphi }=\varphi _T-\varphi _P`$. This result reduces to (50) under the replacements $`q_C\to \delta _{\mathrm{EW}}`$, $`\omega _C\to 0`$, $`R\to R_{}^{-1}`$, and $`\epsilon _T\to \epsilon _{3/2}`$. The corresponding expression for the direct CP asymmetry in the decays $`B^0\to \pi ^0K^0`$ and $`\overline{B}^0\to \pi ^0\overline{K}^0`$ is more complicated and will not be presented here. Below, we shall derive an exact relation between the various asymmetries, which can be used to compute $`A_{\mathrm{CP}}(\pi ^0K^0)`$. 
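The stated reduction of (51) to (50) can be confirmed numerically (an illustrative check; the parameter values are arbitrary):

```python
import math
import random

def acp_pi0_Kplus(gamma, R_star, eps32, phi, eps_a, eta, dEW):
    """Direct CP asymmetry (50) for B+- -> pi0 K+-."""
    num = (eps32 * math.sin(phi) + eps_a * math.sin(eta)
           - eps32 * eps_a * dEW * math.sin(phi - eta))
    den = 1.0 - 2.0 * eps_a * math.cos(eta) * math.cos(gamma) + eps_a ** 2
    return 2.0 * math.sin(gamma) * R_star * num / den

def acp_piminus_Kplus(gamma, R, eps_T, phi_t, q_C, omega_C, eps_a, eta):
    """Direct CP asymmetry (51) for B0 -> pi-+ K+-."""
    num = (eps_T * (math.sin(phi_t) - eps_T * q_C * math.sin(omega_C))
           + eps_a * (math.sin(eta) - eps_T * q_C * math.sin(phi_t - eta + omega_C)))
    den = 1.0 - 2.0 * eps_a * math.cos(eta) * math.cos(gamma) + eps_a ** 2
    return 2.0 * math.sin(gamma) / R * num / den
```

Substituting $`q_C\to \delta _{\mathrm{EW}}`$, $`\omega _C\to 0`$, $`R\to R_{}^{-1}`$ and $`\epsilon _T\to \epsilon _{3/2}`$ makes the two expressions coincide for arbitrary phases.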
Gronau and Rosner have emphasized that one expects $`A_{\mathrm{CP}}(\pi ^{-}K^+)\approx A_{\mathrm{CP}}(\pi ^0K^+)`$, and that one could thus combine the data samples for these decays to enhance the statistical significance of an early signal of direct CP violation . We can easily understand the argument behind this observation using our results. Neglecting the small rescattering contributions proportional to $`\epsilon _a`$ for simplicity, we find $$\frac{A_{\mathrm{CP}}(\pi ^{-}K^+)}{A_{\mathrm{CP}}(\pi ^0K^+)}\approx \frac{1}{R_{}R}\frac{\epsilon _T(\mathrm{sin}\stackrel{~}{\varphi }-\epsilon _Tq_C\mathrm{sin}\omega _C)}{\epsilon _{3/2}\mathrm{sin}\varphi }\approx \frac{1}{R_{}R}\frac{\epsilon _T}{\epsilon _{3/2}}.$$ (52) In the last step, we have used that the electroweak penguin contribution is very small because it is suppressed by an additional factor of $`\epsilon _T`$, and that the strong-interaction phases $`\varphi `$ and $`\stackrel{~}{\varphi }`$ are strongly correlated, as follows from the right-hand plot in Figure 1. Numerically, the right-hand side turns out to be close to 1 for most of parameter space. This is evident from the left-hand plot in Figure 8, which confirms that there is indeed a very strong correlation between the CP asymmetries in the decays $`B^0\to \pi ^{\mp }K^\pm `$ and $`B^\pm \to \pi ^0K^\pm `$, in agreement with the argument given in . Combining the data samples for these decays collected by the CLEO experiment, one may have a chance for observing a statistically significant signal for the first direct CP asymmetry in $`B`$ decays before the operation of the asymmetric $`B`$ factories. The decays $`B^0\to \pi ^0K^0`$ and $`\overline{B}^0\to \pi ^0\overline{K}^0`$ have not yet been observed experimentally, but the CLEO Collaboration has presented an upper bound on their CP-averaged branching ratio of $`4.1\times 10^{-5}`$ . 
In analogy with (2), we define the ratios $`R_0`$ $`=`$ $`{\displaystyle \frac{\tau (B^+)}{\tau (B^0)}}{\displaystyle \frac{2[\text{Br}(B^0\to \pi ^0K^0)+\text{Br}(\overline{B}^0\to \pi ^0\overline{K}^0)]}{\text{Br}(B^+\to \pi ^+K^0)+\text{Br}(B^{-}\to \pi ^{-}\overline{K}^0)}},`$ $`\stackrel{~}{R}_0`$ $`=`$ $`{\displaystyle \frac{2[\text{Br}(B^0\to \pi ^0K^0)+\text{Br}(\overline{B}^0\to \pi ^0\overline{K}^0)]}{\text{Br}(B^0\to \pi ^{-}K^+)+\text{Br}(\overline{B}^0\to \pi ^+K^{-})}}={\displaystyle \frac{R_0}{R}}.`$ (53) Using our parametrizations for the different isospin amplitudes, we find that the ratios $`R`$, $`R_{}`$ and $`R_0`$ obey the relations $$R_0-R+R_{}^{-1}-1=\mathrm{\Delta }_1,R_0-RR_{}=\mathrm{\Delta }_2+O(\overline{\epsilon }_i^3),$$ (54) where $`\mathrm{\Delta }_1`$ $`=`$ $`2\overline{\epsilon }_{3/2}^2(1-2\delta _{\mathrm{EW}}\mathrm{cos}\gamma +\delta _{\mathrm{EW}}^2)-2\overline{\epsilon }_{3/2}\overline{\epsilon }_T(1-\delta _{\mathrm{EW}}\mathrm{cos}\gamma )\mathrm{cos}(\varphi _T-\varphi _{3/2})`$ $`-2\overline{\epsilon }_{3/2}\overline{\epsilon }_Tq_C(\delta _{\mathrm{EW}}-\mathrm{cos}\gamma )\mathrm{cos}(\varphi _T-\varphi _{3/2}+\omega _C),`$ $`\mathrm{\Delta }_2`$ $`=`$ $`\mathrm{\Delta }_1-4\overline{\epsilon }_{3/2}^2(\delta _{\mathrm{EW}}-\mathrm{cos}\gamma )^2\mathrm{cos}^2\varphi `$ (55) $`+4\overline{\epsilon }_{3/2}\overline{\epsilon }_T(\delta _{\mathrm{EW}}-\mathrm{cos}\gamma )\mathrm{cos}\varphi \left[q_C\mathrm{cos}(\stackrel{~}{\varphi }+\omega _C)-\mathrm{cos}\gamma \mathrm{cos}\stackrel{~}{\varphi }\right],`$ and $`\overline{\epsilon }_T`$ is defined in analogy with $`\overline{\epsilon }_{3/2}`$ in (10), so that $`\overline{\epsilon }_T/\epsilon _T=\overline{\epsilon }_{3/2}/\epsilon _{3/2}`$. The first relation in (54) generalizes a sum rule derived by Lipkin, who neglected the terms of $`O(\epsilon _i^2)`$ on the right-hand side as well as electroweak penguin contributions . The second relation is new. 
It follows from the fact that $`\stackrel{~}{R}_0=R_{}+O(\epsilon _i^2)`$, which is evident since the pairs of decay amplitudes entering the definition of the two ratios differ only in the isospin amplitude $`A_{3/2}`$. The left-hand plot in Figure 9 shows the results for the ratio $`R_0`$ versus $`|\gamma |`$. The dependence of this ratio on the weak phase turns out to be much weaker than in the case of the ratios $`R`$ and $`R_{}`$. For the realistic parameter set we find that $`0.7<R_0<1.0`$ for most choices of strong-interaction parameters. Combining this with the current value of the $`B^\pm \to \pi ^\pm K^0`$ branching ratio, we obtain values between $`(0.47\pm 0.18)\times 10^{-5}`$ and $`(0.67\pm 0.26)\times 10^{-5}`$ for the CP-averaged $`B^0\to \pi ^0K^0`$ branching ratio. The right-hand plot in Figure 9 shows the strong correlation between the ratios $`R_{}`$ and $`\stackrel{~}{R}_0=R_0/R`$, which holds with a remarkable accuracy over all of parameter space. In Figure 10, we show the estimates of $`R_0`$ obtained by neglecting the terms of $`O(\overline{\epsilon }_i^2)`$ and higher in the two sum rules in (54). Using the present data for the various branching ratios yields the estimates $`R_0=(0.1\pm 0.9)`$ from the first and $`R_0=(0.5\pm 0.2)`$ from the second sum rule. Both results are consistent with the theoretical expectations for $`R_0`$ exhibited in the left-hand plot in Figure 9; however, the second estimate has a much smaller experimental error and, according to Figure 10, it is likely to have a higher theoretical accuracy. We can rewrite this estimate as $$\frac{1}{2}\left[\text{Br}(B^0\to \pi ^0K^0)+\text{Br}(\overline{B}^0\to \pi ^0\overline{K}^0)\right]\approx \frac{\text{Br}(B^\pm \to \pi ^\pm K^0)\text{Br}(B^0\to \pi ^{\mp }K^\pm )}{4\text{Br}(B^\pm \to \pi ^0K^\pm )},$$ (56) where the branching ratios on the right-hand side are averaged over CP-conjugate modes. With current data, this relation yields the value $`(0.33\pm 0.18)\times 10^{-5}`$. 
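Relation (56) is a simple product formula and can be wrapped as a one-liner (a sketch; the branching-ratio inputs below are hypothetical placeholders of the right order of magnitude, not the measured values of Eq. (1)):

```python
def br_pi0K0_estimate(br_piK0, br_piK, br_pi0K):
    """CP-averaged Br(B0 -> pi0 K0) from the sum-rule estimate (56):
    Br(pi+- K0) * Br(pi-+ K+-) / (4 * Br(pi0 K+-))."""
    return br_piK0 * br_piK / (4.0 * br_pi0K)

# hypothetical CP-averaged branching ratios, all of order 1e-5:
estimate = br_pi0K0_estimate(1.4e-5, 1.4e-5, 1.5e-5)
```

For inputs of comparable size the estimate comes out roughly a factor of 3 to 4 below the input branching ratios, in line with the suppression of the $`\pi ^0K^0`$ mode discussed in the text.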
Combining the three estimates for the CP-averaged $`B^0\pi ^0K^0`$ branching ratio presented above we arrive at the value $`(0.5\pm 0.2)\times 10^{-5}`$, which is about a factor of 3 smaller than the other three $`B\pi K`$ branching ratios quoted in (1). We now turn to the study of the direct CP asymmetry in the decays $`B^0\pi ^0K^0`$ and $`\overline{B}^0\pi ^0\overline{K}^0`$. Using our general parametrizations, we find the sum rule $`A_{\mathrm{CP}}(\pi ^+K^0)R_{}^{-1}A_{\mathrm{CP}}(\pi ^0K^+)+RA_{\mathrm{CP}}(\pi ^{}K^+)R_0A_{\mathrm{CP}}(\pi ^0K^0)`$ $`=2\mathrm{sin}\gamma \overline{\epsilon }_{3/2}\overline{\epsilon }_T\left[\delta _{\mathrm{EW}}\mathrm{sin}(\varphi _T-\varphi _{3/2})q_C\mathrm{sin}(\varphi _T-\varphi _{3/2}+\omega _C)\right].`$ (57) By scanning all strong-interaction parameters, we find that for the realistic (conservative) parameter set the right-hand side takes values of less than 4% (7%) times $`\mathrm{sin}\gamma `$ in magnitude. Neglecting these small terms, and using the approximate equality of the CP asymmetries in $`B^\pm \pi ^0K^\pm `$ and $`B^0\pi ^{}K^\pm `$ decays as well as the second relation in (54), we obtain $$A_{\mathrm{CP}}(\pi ^0K^0)\approx \frac{1RR_{}}{RR_{}^2}A_{\mathrm{CP}}(\pi ^0K^+)+\frac{A_{\mathrm{CP}}(\pi ^+K^0)}{RR_{}}.$$ (58) The first term is negative for most choices of parameters and would dominate if the CP asymmetry in $`B^\pm \pi ^0K^\pm `$ decays should turn out to be large. We therefore expect a weak anticorrelation between $`A_{\mathrm{CP}}(\pi ^0K^0)`$ and $`A_{\mathrm{CP}}(\pi ^+K^0)`$, which is indeed exhibited in the right-hand plot in Figure 9. For completeness, we note that in the decays $`B^0`$, $`\overline{B}^0\pi ^0K_S`$ one can also study mixing-induced CP violation, as has been emphasized recently in . Because of the large hadronic uncertainties inherent in the calculation of this effect, we do not study this possibility further.
## 6 Determination of $`𝜸`$ from $`𝑩^\mathbf{\pm }\mathbf{}𝝅𝑲`$, $`𝝅𝝅`$ decays Ultimately, one would like not only to derive bounds on the weak phase $`\gamma `$, but to measure this parameter from a study of CP violation in $`B\pi K`$ decays. However, as we have pointed out in Section 2, this is not a trivial undertaking, because even perfect measurements of all eight $`B\pi K`$ branching ratios would not suffice to eliminate all hadronic parameters entering the parametrization of the decay amplitudes. Because of their theoretical cleanness, the decays of charged $`B`$ mesons are best suited for a measurement of $`\gamma `$. In , we have described a strategy for achieving this goal, which relies on the measurements of the CP-averaged branching ratios for the decays $`B^\pm \pi ^\pm K^0`$ and $`B^\pm \pi ^\pm \pi ^0`$, as well as of the individual branching ratios for the decays $`B^+\pi ^0K^+`$ and $`B^{}\pi ^0K^{}`$, i.e., the direct CP asymmetry in this channel. This method is a generalization of the Gronau–Rosner–London (GRL) approach for extracting $`\gamma `$ . It includes the contributions of electroweak penguin operators, which had previously been argued to spoil the GRL method . The strategy proposed in relies on the dynamical assumption that there is no CP-violating contribution to the $`B^\pm \pi ^\pm K^0`$ decay amplitudes, which is equivalent to saying that the rescattering effects parametrized by the quantity $`\epsilon _a`$ in (8) are negligibly small. It is evident from the left-hand plot in Figure 1 that this assumption is indeed justified in a large region of parameter space. Here, we will refine the approach and investigate the theoretical uncertainty resulting from $`\epsilon _a\ne 0`$. As a by-product, we will show how nontrivial information on the strong-interaction phase difference $`\varphi =\varphi _{3/2}-\varphi _P`$ can be obtained along with information on $`\gamma `$.
To this end, we consider in addition to the ratio $`R_{}`$ the CP-violating observable $$\stackrel{~}{A}\equiv \frac{A_{\mathrm{CP}}(\pi ^0K^+)}{R_{}}-A_{\mathrm{CP}}(\pi ^+K^0)=2\mathrm{sin}\gamma \overline{\epsilon }_{3/2}\frac{\mathrm{sin}\varphi -\epsilon _a\delta _{\mathrm{EW}}\mathrm{sin}(\varphi -\eta )}{\sqrt{1-2\epsilon _a\mathrm{cos}\eta \mathrm{cos}\gamma +\epsilon _a^2}}.$$ (59) The purpose of subtracting the CP asymmetry in the decays $`B^\pm \pi ^\pm K^0`$ is to eliminate the contribution of $`O(\epsilon _a)`$ in the expression for $`A_{\mathrm{CP}}(\pi ^0K^+)`$ given in (50). A measurement of this asymmetry is the new ingredient in our approach with respect to that in . With the definition of $`\stackrel{~}{A}`$ as given above, the rescattering effects parametrized by $`\epsilon _a`$ are suppressed by an additional factor of $`\overline{\epsilon }_{3/2}`$ and are thus expected to be very small. As shown in Section 4, the same is true for the ratio $`R_{}`$. Explicitly, we have $`R_{}^{-1}`$ $`=`$ $`1+2\overline{\epsilon }_{3/2}\mathrm{cos}\varphi (\delta _{\mathrm{EW}}-\mathrm{cos}\gamma )+\overline{\epsilon }_{3/2}^2(1-2\delta _{\mathrm{EW}}\mathrm{cos}\gamma +\delta _{\mathrm{EW}}^2)+O(\overline{\epsilon }_{3/2}\epsilon _a),`$ $`\stackrel{~}{A}`$ $`=`$ $`2\mathrm{sin}\gamma \overline{\epsilon }_{3/2}\mathrm{sin}\varphi +O(\overline{\epsilon }_{3/2}\epsilon _a).`$ (60) These equations define contours in the $`(\gamma ,\varphi )`$ plane. When higher-order terms are kept, these contours become narrow bands, the precise shape of which depends on the values of the parameters $`\overline{\epsilon }_{3/2}`$ and $`\delta _{\mathrm{EW}}`$. In the limit $`\epsilon _a=0`$ the procedure described here is mathematically equivalent to the construction proposed in . There, the errors on $`\mathrm{cos}\gamma `$ resulting from the variation of the input parameters have been discussed in detail.
For a typical example, where $`\gamma =76^{}`$ and $`\varphi =20^{}`$, we found that the uncertainties resulting from a 15% variation of $`\overline{\epsilon }_{3/2}`$ and $`\delta _{\mathrm{EW}}`$ are $`\mathrm{cos}\gamma =0.24\pm 0.09\pm 0.09`$, corresponding to errors of $`\pm 5^{}`$ each on the extracted value of $`\gamma `$. Our focus here is to evaluate the additional uncertainty resulting from the rescattering effects parametrized by $`\epsilon _a`$ and $`\eta `$. For given values of $`\overline{\epsilon }_{3/2}`$, $`\delta _{\mathrm{EW}}`$, $`\epsilon _a`$, $`\eta `$, and $`\gamma `$, the exact results for $`R_{}`$ in (44) and $`\stackrel{~}{A}`$ in (59) can be brought into the generic form $`A\mathrm{cos}\varphi +B\mathrm{sin}\varphi =C`$, where in the case of $`R_{}`$ $`A`$ $`=`$ $`2\overline{\epsilon }_{3/2}{\displaystyle \frac{\delta _{\mathrm{EW}}-\mathrm{cos}\gamma +\epsilon _a\mathrm{cos}\eta (1-\delta _{\mathrm{EW}}\mathrm{cos}\gamma )}{\sqrt{1-2\epsilon _a\mathrm{cos}\eta \mathrm{cos}\gamma +\epsilon _a^2}}},`$ $`B`$ $`=`$ $`2\overline{\epsilon }_{3/2}{\displaystyle \frac{\epsilon _a\mathrm{sin}\eta (1-\delta _{\mathrm{EW}}\mathrm{cos}\gamma )}{\sqrt{1-2\epsilon _a\mathrm{cos}\eta \mathrm{cos}\gamma +\epsilon _a^2}}},`$ $`C`$ $`=`$ $`R_{}^{-1}-1-\overline{\epsilon }_{3/2}^2(1-2\delta _{\mathrm{EW}}\mathrm{cos}\gamma +\delta _{\mathrm{EW}}^2),`$ (61) whereas for $`\stackrel{~}{A}`$ $`A`$ $`=`$ $`2\overline{\epsilon }_{3/2}{\displaystyle \frac{\epsilon _a\delta _{\mathrm{EW}}\mathrm{sin}\eta }{\sqrt{1-2\epsilon _a\mathrm{cos}\eta \mathrm{cos}\gamma +\epsilon _a^2}}},`$ $`B`$ $`=`$ $`2\overline{\epsilon }_{3/2}{\displaystyle \frac{1-\epsilon _a\delta _{\mathrm{EW}}\mathrm{cos}\eta }{\sqrt{1-2\epsilon _a\mathrm{cos}\eta \mathrm{cos}\gamma +\epsilon _a^2}}},`$ $`C`$ $`=`$ $`{\displaystyle \frac{\stackrel{~}{A}}{\mathrm{sin}\gamma }}.`$ (62) The two solutions for $`\mathrm{cos}\varphi `$ are given by $$\mathrm{cos}\varphi =\frac{AC\pm B\sqrt{A^2+B^2-C^2}}{A^2+B^2}.$$
(63) The physical solutions must be such that $`\mathrm{cos}\varphi `$ is real and its magnitude less than 1. In Figure 12, we show the resulting contour bands obtained by keeping $`\overline{\epsilon }_{3/2}=0.24`$ and $`\delta _{\mathrm{EW}}=0.64`$ fixed to their central values, while the rescattering parameters are scanned over the ranges $`0<\epsilon _a<0.08`$ and $`180^{}<\eta <180^{}`$. Assuming that $`\mathrm{sin}\gamma >0`$ as suggested by the global analysis of the unitarity triangle, the sign of $`\stackrel{~}{A}`$ determines the sign of $`\mathrm{sin}\varphi `$. In the plot, we assume without loss of generality that $`0^{}\varphi 180^{}`$. For instance, if $`R_{}=0.7`$ and $`\stackrel{~}{A}=0.2`$, then the two solutions are $`(\gamma ,\varphi )(98^{},25^{})`$ and $`(\gamma ,\varphi )(153^{},67^{})`$, only the first of which is allowed by the upper bound $`\gamma <105^{}`$ following from the global analysis of the unitarity triangle . It is evident that the contours are rather insensitive to the rescattering effects parametrized by $`\epsilon _a`$ and $`\eta `$. The error on $`\gamma `$ due to these effects is about $`\pm 5^{}`$, which is similar to the errors resulting from the theoretical uncertainties in the parameters $`\overline{\epsilon }_{3/2}`$ and $`\delta _{\mathrm{EW}}`$. The combined theoretical uncertainty is of order $`\pm 10^{}`$ on the extracted value of $`\gamma `$. To summarize, the strategy for determining $`\gamma `$ would be as follows: From measurements of the CP-averaged branching ratio for the decays $`B^\pm \pi ^\pm \pi ^0`$, $`B^\pm \pi ^\pm K^0`$ and $`B^\pm \pi ^0K^\pm `$, the ratio $`R_{}`$ and the parameter $`\overline{\epsilon }_{3/2}`$ are determined using (2) and (4), respectively. Next, from measurements of the rate asymmetries in the decays $`B^\pm \pi ^\pm K^0`$ and $`B^\pm \pi ^0K^\pm `$ the quantity $`\stackrel{~}{A}`$ is determined. 
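The inversion in (63) is elementary to implement. The sketch below is an illustration rather than the authors' code: it solves the generic contour equation $`A\mathrm{cos}\varphi +B\mathrm{sin}\varphi =C`$ for $`\mathrm{cos}\varphi `$ and keeps only the physical solutions with magnitude at most 1; a negative discriminant signals that the inputs admit no real phase.

```python
import math

def cos_phi_solutions(A, B, C):
    """Solve A*cos(phi) + B*sin(phi) = C for cos(phi), as in eq. (63).

    Squaring B*sin(phi) = C - A*cos(phi) gives a quadratic in cos(phi);
    the discriminant A^2 + B^2 - C^2 must be non-negative for a real phi.
    Returns the solutions with |cos(phi)| <= 1 (possibly a coincident pair).
    """
    disc = A * A + B * B - C * C
    if disc < 0.0:
        return []  # no real solution for phi
    root = B * math.sqrt(disc)
    denom = A * A + B * B
    sols = [(A * C + root) / denom, (A * C - root) / denom]
    return [c for c in sols if abs(c) <= 1.0]
```

In the strategy described in the text, such a routine would be called twice, once with the $`(A,B,C)`$ coefficients built from $`R_{}`$ and once with those built from $`\stackrel{~}{A}`$, and the overlap of the resulting $`(\gamma ,\varphi )`$ contours picked out.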
From the contour plots for the quantities $`R_{}`$ and $`\stackrel{~}{A}`$ the phases $`\gamma `$ and $`\varphi `$ can then be extracted up to discrete ambiguities. In this determination one must account for theoretical uncertainties in the values of the parameters $`\overline{\epsilon }_{3/2}`$ and $`\delta _{\mathrm{EW}}`$, as well as for rescattering effects parametrized by $`\epsilon _a`$ and $`\eta `$. Quantitative estimates for these uncertainties have been given above. ## 7 Conclusions We have presented a model-independent, global analysis of the rates and direct CP asymmetries for the rare two-body decays $`B\pi K`$. The theoretical description exploits the flavour symmetries of the strong interactions and the structure of the low-energy effective weak Hamiltonian. Isospin symmetry is used to introduce a minimal set of three isospin amplitudes. The explicit form of the effective weak Hamiltonian in the Standard Model is used to simplify the isovector part of the interaction. Both the numerical smallness of certain Wilson coefficient functions and the Dirac and colour structure of the local operators are relevant in this context. Finally, the $`U`$-spin subgroup of flavour SU(3) symmetry is used to simplify the structure of the isospin amplitude $`A_{3/2}`$ referring to the decay $`B(\pi K)_{I=3/2}`$. In the limit of exact $`U`$-spin symmetry, two of the four parameters describing this amplitude (the relative magnitude and strong-interaction phase of electroweak penguin and tree contributions) can be calculated theoretically, and one additional parameter (the overall strength of the amplitude) can be determined experimentally from a measurement of the CP-averaged branching ratio for $`B^\pm \pi ^\pm \pi ^0`$ decays. What remains is a single unknown strong-interaction phase. 
The SU(3)-breaking corrections to these results can be calculated in the generalized factorization approximation, so that theoretical limitations enter only at the level of nonfactorizable SU(3)-breaking effects. However, since we make use of SU(3) symmetry only to derive relations for amplitudes referring to isospin eigenstates, we do not expect gross failures of the generalized factorization hypothesis. We stress that the theoretical simplifications used in our analysis are the only ones rooted in first principles of QCD. Any further simplification would have to rest on model-dependent dynamical assumptions, such as the smallness of certain flavour topologies with respect to others. We have introduced a general parametrization of the decay amplitudes, which makes maximal use of these theoretical constraints but is otherwise completely general. In particular, no assumption is made about strong-interaction phases. With the help of this parametrization, we have performed a global analysis of the branching ratios and direct CP asymmetries in the various $`B\pi K`$ decay modes, with particular emphasis on the impact of hadronic uncertainties on methods to learn about the weak phase $`\gamma =\text{arg}(V_{ub}^{*})`$ of the unitarity triangle. The main phenomenological implications of our results can be summarized as follows: * There can be substantial corrections to the Fleischer–Mannel bound on $`\gamma `$ from enhanced electroweak penguin contributions, which can arise in the case of a large strong-interaction phase difference between $`I=\frac{1}{2}`$ and $`I=\frac{3}{2}`$ isospin amplitudes. Whereas these corrections stay small (but not negligible) if one restricts this phase difference to be less than $`45^{}`$, there can be large violations of the bound if the phase difference is allowed to be as large as $`90^{}`$.
* On the contrary, rescattering effects play a very minor role in the bound on $`\gamma `$ derived from a measurement of the ratio $`R_{}`$ of CP-averaged $`B^\pm \pi K`$ branching ratios. They can be included exactly in the bound and enter through a parameter $`\epsilon _a`$, whose value is less than 0.1 even under very conservative conditions. Including these effects weakens the bounds on $`\gamma `$ by less than $`5^{}`$. We have generalized the result of our previous work , where we derived a bound on $`\mathrm{cos}\gamma `$ to linear order in an expansion in the small quantity $`\overline{\epsilon }_{3/2}`$. Here we refrain from making such an approximation; however, we confirm our previous claim that to make such an expansion is justified (i.e., it yields a conservative bound) provided that the current experimental value of $`R_{}`$ does not change by more than one standard deviation. The main result of our analysis is given in (45), which shows the exact result for the maximum value of the ratio $`R_{}`$ as a function of the parameters $`\delta _{\mathrm{EW}}`$, $`\overline{\epsilon }_{3/2}`$, and $`\epsilon _a`$. The first parameter describes electroweak penguin contributions and can be calculated theoretically. The second parameter can be determined experimentally from the CP-averaged branching ratios for the decays $`B^\pm \pi ^\pm \pi ^0`$ and $`B^\pm \pi ^\pm K^0`$. We stress that the definition of $`\overline{\epsilon }_{3/2}`$ is such that it includes exactly possible rescattering contributions to the $`B^\pm \pi ^\pm K^0`$ decay amplitudes. The third parameter describes a certain class of rescattering effects and can be constrained experimentally once the CP-averaged $`B^\pm K^\pm \overline{K}^0`$ branching ratio has been measured. However, we have shown that under rather conservative assumptions $`\epsilon _a<0.1`$. 
* The calculable dependence of the $`B^\pm \pi K`$ decay amplitudes on the electroweak penguin contribution $`\delta _{\mathrm{EW}}`$ offers a window to New Physics. In many generic extensions of the Standard Model such as multi-Higgs models, we expect deviations from the value $`\delta _{\mathrm{EW}}=0.64\pm 0.15`$ predicted by the Standard Model. We have derived a lower bound on $`\delta _{\mathrm{EW}}`$ as a function of the value of the ratio $`R_{}`$ and the maximum value for $`\gamma `$ allowed by the global analysis of the unitarity triangle. If this value turned out to exceed the Standard Model prediction by a significant amount, it would be strong evidence for New Physics. In particular, we note that if the current central value $`R_{}=0.47`$ were confirmed, the value of $`\delta _{\mathrm{EW}}`$ would have to be at least twice its standard value. * We have studied in detail the potential of the various $`B\pi K`$ decay modes for showing large direct CP violation and investigated the correlations between the various asymmetries. Although in general the theoretical predictions suffer from the fact that an overall strong-interaction phase difference is unknown, we conclude that there is a fair chance of observing large direct CP asymmetries in at least some of the decay channels. More specifically, we find that the direct CP asymmetries in the decays $`B^\pm \pi ^0K^\pm `$ and $`B^0\pi ^{}K^\pm `$ are almost fully correlated and can be up to 50% in magnitude for realistic parameter choices. The direct CP asymmetry in the decays $`B^0\pi ^0K^0`$ and $`\overline{B}^0\pi ^0\overline{K}^0`$ tends to be smaller by about a factor of 2 and anticorrelated in sign. Finally, the asymmetry in the decays $`B^\pm \pi ^\pm K^0`$ is smaller and uncorrelated with the other asymmetries. For realistic parameter choices, we expect values of up to 15% for this asymmetry.
* We have derived sum rules for the branching ratio and direct CP asymmetry in the decays $`B^0\pi ^0K^0`$ and $`\overline{B}^0\pi ^0\overline{K}^0`$. A rather clean prediction for the CP-averaged branching ratio for these decays is given in (56). We expect a value of $`(0.5\pm 0.2)\times 10^{-5}`$ for this branching ratio, which is about a factor of 3 less than the other $`B\pi K`$ branching ratios. * Finally, we have presented a method for determining the weak phase $`\gamma `$ along with the strong-interaction phase difference $`\varphi `$ from measurements of $`B^\pm \pi K`$, $`\pi \pi `$ branching ratios, all of which are of order $`10^{-5}`$. This method generalizes an approach proposed in to include rescattering corrections to the $`B^\pm \pi ^\pm K^0`$ decay amplitudes. We find that the uncertainty due to rescattering effects is about $`\pm 5^{}`$ on the extracted value of $`\gamma `$, which is similar to the errors resulting from the theoretical uncertainties in the parameters $`\overline{\epsilon }_{3/2}`$ and $`\delta _{\mathrm{EW}}`$. The combined theoretical uncertainty in our method is of order $`\pm 10^{}`$. A global analysis of branching ratios and direct CP asymmetries in rare two-body decays of $`B`$ mesons can yield interesting information about fundamental parameters of the flavour sector of the Standard Model, and at the same time provides a window to New Physics. Such an analysis should therefore be a central focus of the physics program of the $`B`$ factories, which in many respects is complementary to the time-dependent studies of CP violation in neutral $`B`$ decays into CP eigenstates. ###### Acknowledgments. This is my last paper as a member of the CERN Theory Division. It is a pleasure to thank my colleagues for enjoyable interactions during the past five years. I am very grateful to Guido Altarelli, Martin Beneke, Gian Giudice, Michelangelo Mangano, Paolo Nason and, especially, to Alex Kagan for their help in a difficult period.
I also wish to thank Andrzej Buras, Guido Martinelli, Chris Sachrajda, Berthold Stech, Jack Steinberger and Daniel Wyler for their support. It is a special pleasure to thank Elena, Jeanne, Marie-Noelle, Michelle, Nannie and Suzy for thousands of smiles, their friendliness, patience and help. Finally, I wish for the CERN Theory Division that its structure may change in such a way that one day it can be called a Theory Group.
ITEP/TH-77/98 hep-th/9812519

# Low energy theorems and the Dirac operator spectral density in QCD

A.Gorsky

ITEP, Moscow, 117259, B.Cheryomushkinskaya 25

## Abstract We discuss the behaviour of the spectral density of the massless Dirac operator at small eigenvalues and quark masses compatible with the restrictions imposed by the low energy theorems in QCD. A sum rule for its derivative with respect to the quark mass is found. 1. The search for universal characteristics of QCD in the strong coupling regime remains an important problem in clarifying the structure of the QCD vacuum. One of the most important universal objects is the spectral density of the massless Dirac operator, whose behaviour near zero eigenvalue provides the pattern of the spontaneous symmetry breaking. The Banks-Casher relation states that the density at the origin determines the fermionic condensate, $`<\overline{q}q>=-\pi \rho (0)`$, where $$\begin{array}{c}\widehat{D}q=\lambda q\\ \rho (\lambda )=<V^{-1}\mathrm{\Sigma }_n\delta (\lambda -\lambda _n)>_A\end{array}$$ (1) $`V`$ is the Euclidean volume and averaging over the gluon ensemble is assumed. A generalization of the Banks-Casher relation for the moments of the density at finite volume has been found in . The obvious question concerns the behaviour of the spectral density at small masses and eigenvalues. In perturbation theory the density behaves as $`\rho (\lambda )=c\lambda ^3`$; therefore the linear and quadratic terms are of nonperturbative nature. They would describe the critical behaviour of the system at zero temperature and possibly fix a universality class. Let us remark that the universality properties of the spectral density allow one to apply the matrix model technique to analyse its behaviour at finite volume (see for a review). It is natural to relate the characteristics of the spectral density to other universal objects in QCD.
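The Banks-Casher relation can be checked numerically for a toy spectral density. The sketch below is an illustration only: it assumes a density $`\rho (\lambda )=\rho _0+c\lambda ^3`$ (a constant piece plus a perturbative-like tail, with made-up parameters), evaluates the finite-mass condensate integral $`\mathrm{\Sigma }(m)=2m\rho (\lambda )/(\lambda ^2+m^2)𝑑\lambda `$, and verifies that it tends to $`\pi \rho _0`$ as $`m\to 0`$.

```python
import math

def condensate(m, rho0=1.0, c=0.05, cutoff=10.0, n=200000):
    """Midpoint-rule evaluation of Sigma(m) = int_0^cutoff
    2*m*rho(lam)/(lam^2 + m^2) dlam for the toy density rho = rho0 + c*lam^3.

    Toy parameters (rho0, c, cutoff) are illustrative assumptions."""
    h = cutoff / n
    total = 0.0
    for i in range(n):
        lam = (i + 0.5) * h
        rho = rho0 + c * lam ** 3
        total += 2.0 * m * rho / (lam * lam + m * m) * h
    return total

# As m -> 0 the rho0 piece gives 2*rho0*atan(cutoff/m) -> pi*rho0,
# while the lam^3 tail only contributes at O(m).
```

The chiral limit of this integral reproduces the Banks-Casher value, while at larger $`m`$ the tail of the density contaminates the result.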
The best candidates are the low energy theorems for the zero-momentum QCD correlators in the different channels, which follow from the chiral Ward identities . The first attempt to get information about the spectral density behaviour involved the isovector scalar correlator $`𝑑x<S^i(x)S^j(0)>`$ . It was claimed that it yields a term linear in $`\lambda `$ that vanishes for $`N_f=2`$. Recently two other sum rules arising from the correlators $$𝑑x<A^i(x)A^j(0)-V^i(x)V^j(0)>$$ and $$𝑑x<S^i(x)S^j(0)-\delta ^{ij}P^0(x)P^0(0)>$$ were obtained but no additional information on the spectral density has been extracted. In this note we discuss the complete set of restrictions imposed by the low energy theorems for the two point and three point correlators on the spectral density. Correlators in the scalar and pseudoscalar channels yield information on the spectral density itself, while those in the vector and axial channels are relevant only for the correlations of the eigenvalues. 2. Let us consider the correlators in the isovector and isoscalar scalar and pseudoscalar channels in the $`N_f=2`$ case. In what follows we shall discuss only correlators which are free from the nonuniversal high energy contributions. The complete list of such low-energy theorems for two , three and four point correlators can be found in .
The corresponding correlators can be expressed in terms of the spectral density as follows: $$\begin{array}{c}𝑑x<\delta ^{ij}S^0(x)S^0(0)-P^i(x)P^j(0)>=\frac{G_\pi ^2\delta _{ij}}{M_\pi ^2}+\delta _{ij}\frac{B^2}{8\pi ^2}(l_3-4l_4+3)=\\ \delta _{ij}(\frac{8m^2\rho (\lambda ,m)}{(\lambda ^2+m^2)^2}-4m\frac{\partial _m\rho (\lambda ,m)}{(\lambda ^2+m^2)})\end{array}$$ (2) $$\begin{array}{c}𝑑x<S^i(x)S^j(0)-\delta _{ij}P^0(x)P^0(0)>=\delta _{ij}8B^2l_7=\\ \delta _{ij}(\frac{8m^2\rho (\lambda ,m)}{(\lambda ^2+m^2)^2}-4\frac{<{\scriptscriptstyle 𝑑xQ(x)Q(0)}>}{m^2V})\end{array}$$ (3) $$\begin{array}{c}𝑑x<P^3(x)P^0(0)>=\frac{G_\pi \stackrel{~}{G}_\pi }{M_\pi ^2}=\\ 8(m_u-m_d)m\frac{\rho (\lambda ,m)}{(\lambda ^2+m^2)^2}-\frac{4(m_u-m_d)<{\scriptscriptstyle 𝑑xQ(x)Q(0)}>}{m^3V}\end{array}$$ (4) where $`G_\pi =2F_\pi B=\frac{F_\pi m_\pi ^2}{m};\stackrel{~}{G}_\pi =(m_u-m_d)\frac{4B^2l_7}{F_\pi }`$ and $`Q(x)`$ is the topological charge density. The sum rule arising from the last low energy theorem exactly coincides with the one coming from correlator (3). The low energy constants $`l_3,l_4`$ behave as $`logm`$ at small quark masses, while the constant $`l_7`$ contains no chiral logarithms . We can add here the low energy theorem for the three point correlator $$\begin{array}{c}𝑑x𝑑y<S^0(x)P^i(y)P^k(0)>=\frac{\delta _{ik}G_\pi ^3}{M_\pi ^4F_\pi }-\delta _{ik}\frac{B^2G_\pi F_\pi }{8\pi ^2M_\pi ^2}(l_3-4l_4+3)=\\ \delta _{ik}\frac{d}{dm}\frac{2\rho (\lambda ,m)}{\lambda ^2+m^2}.\end{array}$$ (5) It appears that the sum rules resulting from the low energy theorems (2) and (5) are identical.
The last potentially important low energy theorem follows from the four point pseudoscalar correlator $$\begin{array}{c}𝑑x𝑑y𝑑z<P^i(x)P^j(y)P^k(z)P^l(0)>=\frac{G_\pi ^4}{F_\pi ^2M_\pi ^8}(\delta _{ij}\delta _{kl}+\delta _{ik}\delta _{jl}+\delta _{il}\delta _{jk})(\frac{M_\pi ^2}{F^2}+\frac{M_\pi ^4(24l_4-9)}{96\pi ^2F^4}+\mathrm{})=\\ (\delta _{ij}\delta _{kl}+\delta _{ik}\delta _{jl}+\delta _{il}\delta _{jk})\frac{4\rho (\lambda ,m)}{(\lambda ^2+m^2)^2}+R_{ijkl}.\end{array}$$ (6) Unfortunately it is more difficult to extract information from the four point correlator, since there is the contribution $`R_{ijkl}`$ corresponding to the diagram with at least two fermionic loops, which cannot be expressed in terms of the spectral density . Moreover, the sum rules are sensitive to the two loop contribution to the four point correlator in the chiral theory, which is unknown at the moment. Therefore there are only two rigorous independent sum rules for the spectral density. The combination of the sum rules (2) and (3) yields the following model-independent sum rule for the mass derivative of the spectral density $$\begin{array}{c}m\frac{4\partial _m\rho (\lambda ,m)}{(\lambda ^2+m^2)}=\\ \frac{B^2}{8\pi ^2}(l_3-4l_4+3)-\frac{G_\pi ^2}{M_\pi ^2}+8B^2l_7+\frac{4<{\scriptscriptstyle 𝑑xQ(x)Q(0)}>}{m^2V}.\end{array}$$ (7) 3. Let us now turn to the discussion of the restrictions imposed by the sum rules on the behaviour of the density near the origin. We would like to look for the following ansatz $$\begin{array}{c}\rho (\lambda ,m)=\rho (0)+c_1\lambda log\lambda +c_2\lambda +c_3m+c_4mlogm+c_5mlog^2m+O(m^2,\lambda ^2,m\lambda ),\end{array}$$ (8) where $`c_i`$ are the constants to be found. Let us first assume that the spectral density is regular as $`\lambda \to 0`$ or $`m\to 0`$, so that there are no terms like $`(\frac{\lambda }{m})^n`$ or $`mlog\lambda `$ in the expansion above.
It appears that this ansatz is consistent with the low-energy theorems for the correlators which are free from the high energy contributions. Apart from the sum rules above, we assume two additional model-independent restrictions on the spectral density. First, we use the universality of the $`mlogm`$ correction to the chiral condensate . Secondly, there is the unambiguous statement that there are no $`logm`$ contributions to the correlator $`<S^jS^i>`$. This fact has already been used to show that $`c_2=0`$ . Consider first the correlators (2),(5), which give rise to identical sum rules. The leading $`m^{-1}`$ singularity immediately comes from the $`\rho (0)`$ term, but the matching of the $`logm`$ term appears to be a subtle point. It is easily seen that the constants $`c_4,c_1`$ do not contribute; hence one has to assume that $`c_5\ne 0`$. The correction to the condensate reads $$\begin{array}{c}<\overline{q}q>_m=<\overline{q}q>_0(1-\frac{3M_\pi ^2logM_\pi ^2}{32\pi ^2F_\pi ^2}+\mathrm{})\end{array}$$ (9) and it is supposed that there is no $`mlog^2m`$ term. Hence, using the Banks-Casher formula for the condensate, we can claim that to cancel the $`mlog^2m`$ correction we have to assume that $`c_1\ne 0`$. However, this term in the spectral density yields a divergence in the integral in the UV region, raising the question of the subtraction of the perturbative contribution. The proper version of this procedure, which would allow one to make a prediction for $`c_1,c_4`$, deserves further investigation. To discuss the restriction on $`c_3`$ we have to expand the topological susceptibility up to second order in the quark mass $$\begin{array}{c}\frac{<{\scriptscriptstyle 𝑑xQ(x)Q(0)}>}{V}=\frac{mBF_\pi ^2}{2}+dB^2m^2+\mathrm{},\end{array}$$ (10) hence the coefficient $`d`$ enters our sum rules.
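The claim that the leading $`m^{-1}`$ singularity comes from the $`\rho (0)`$ term can be made explicit: the $`\rho (0)`$ piece of the integrand in sum rule (2) integrates to $`𝑑\lambda \mathrm{}8m^2\rho _0/(\lambda ^2+m^2)^2=4\pi \rho _0/m`$. The sketch below is an illustrative numerical check of this elementary integral (the parameters are arbitrary).

```python
import math

def rho0_term(m, rho0=1.0, cutoff=50.0, n=400000):
    """Midpoint-rule evaluation of int_{-L}^{L} 8*m^2*rho0/(lam^2+m^2)^2 dlam,
    i.e. the rho(0) piece of the integrand in sum rule (2).

    Analytically the full-line integral equals 4*pi*rho0/m."""
    h = 2.0 * cutoff / n
    total = 0.0
    for i in range(n):
        lam = -cutoff + (i + 0.5) * h
        total += 8.0 * m * m * rho0 / (lam * lam + m * m) ** 2 * h
    return total

# Halving m doubles the integral: the m^{-1} singularity of the sum rule.
```

Any smooth correction to $`\rho (0)`$, by contrast, produces only subleading powers and logarithms of $`m`$, which is what the matching argument in the text exploits.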
If we substitute the expression for the spectral density and assume that all logarithmic contributions cancel, the following model-independent relation arises $$\begin{array}{c}c_3=8l_7+2d.\end{array}$$ (11) Note that an analogous relation has been considered in , but the term on the lhs was missed. Since the constants $`c_1`$ and $`c_5`$ cannot vanish simultaneously, the $`m^2logm`$ and $`m^2log^2m`$ terms have to be present in the expansion of the topological susceptibility. While the $`logm`$ term can be attributed to the first correction to the condensate, the origin of the $`log^2m`$ term is unclear. Certainly it is desirable to consider the $`N_f>2`$ case, where the dependence of the topological susceptibility on the quark masses is more transparent. Recently the behaviour of the spectral density at small quark masses with an arbitrary number of flavours has been found within the matrix theory approach . It was claimed that singular terms are present in the diffusive domain, where the spectral density exhibits an $`mlog\lambda `$ contribution. It can be shown that such a term is consistent with the sum rule found above. However, it is not clear how a possible singularity can be reconciled with the nonzero quark condensate. 4. In this note we considered the behaviour of the Dirac operator spectral density around the origin. It appeared that the low energy theorems impose strong restrictions on it but do not determine it unambiguously. It would be crucial to derive the complete list of the two loop contributions to the off-shell correlators in the chiral theory, since this information provides additional restrictions on the linear terms in the spectral density as well as on the $`\lambda ^2`$ terms. We have shown that the nonsingular spectral density exhibits an unexpected $`mlog^2m`$ term, which possibly can be attributed to the renormalization of the mass in the strong external field, similar to the QED renormalization of the mass .
Otherwise some singular terms in the spectral density have to be admitted. This work is partially motivated by an attempt to develop the interpretation of the spectral density of the Dirac operator within the brane approach. This issue was addressed for the N=1 SUSY theory in , where quark masses fix the positions of D6 branes representing the fundamental matter in the ”momentum” space. Therefore, to clarify the brane picture it is necessary to elaborate the $`N_f`$ dependence of the spectral density as well as the case of a generic mass matrix. These questions will be discussed elsewhere. I am grateful to H.Leutwyler for the hospitality at the ITP at Bern University and for discussions which initiated this work. I would like to thank A.Smilga for useful comments and D.Toublan for bringing the reference to my attention. The work was supported in part by grants INTAS-96-0482 and RFBR 97-02-16131.
# Dissipative Process as a Mechanism of Differentiating Internal Structures between Dwarf and Normal Elliptical Galaxies in a CDM Universe ## 1 INTRODUCTION Recent progress in observational devices and techniques has enhanced our knowledge of the formation and evolution of galaxies on a firm statistical basis. The luminosity function of galaxies observed in general fields and in nearby clusters shows a steep slope at both the bright and faint ends, whereas its slope in between exhibits a plateau or a slight dip from $`M_\mathrm{B}=-19`$ to $`-13`$ mag (Loveday 1997; Trentham 1998). Hierarchical clustering models of galaxy formation in a standard cold dark matter (CDM) universe predict that the number of galaxies monotonically increases with decreasing mass, yielding a power-law mass function at the low-mass end (Davis et al. 1985; White & Frenk 1991; Cole et al. 1994). This implies that, unless the mass-to-light ratio is very different from what we usually expect, an excessive number of dwarf galaxies is predicted beyond that observed in the luminosity function. In order to remove this serious discrepancy, several mechanisms for suppressing the formation of dwarf galaxies have been proposed so far. Efstathiou (1992) proposed an effect of photoionization by ultraviolet background radiation that keeps the gas hot and unable to collapse (see also Chiba & Nath 1994; Thoul & Weinberg 1995). Dekel & Silk (1986) proposed an energy feedback that induces a significant mass loss from low-mass galaxies having a shallow gravitational potential (see also Saito 1979; Yoshii & Arimoto 1987; Lacey & Silk 1991). A clear distinction is seen in structural and photometric quantities between dwarf and normal ellipticals in spite of their morphological similarity. Faber & Lin (1983) recognized that dwarf ellipticals have an exponential surface brightness profile rather than de Vaucouleurs’ $`r^{1/4}`$-profile, which applies to normal ellipticals (de Vaucouleurs 1948; Kormendy 1977).
Bright dwarfs with $`M_\mathrm{B}<-16`$ mag show a distinct luminosity spike in their centers, commonly referred to as nucleated dwarfs (Caldwell & Bothun 1987; Binggeli & Cameron 1991; Ichikawa, Wakamatsu & Okamura 1986), whereas faint dwarfs do not usually show such a nucleus (Sandage et al. 1985). Young & Currie (1994) and Binggeli & Jerjen (1998) showed that the surface brightness profile of dwarf ellipticals changes from an exponential to an $`r^{1/4}`$-like form as their luminosity increases. Vader et al. (1988) analysed the data from Vigroux et al. (1988) and found that many dwarf ellipticals show systematically redder colors at larger radii away from the galaxy center (Vader et al. 1988; Kormendy & Djorgovski 1989; Chaboyer 1994; Durrell et al. 1996), which is opposite to the trend observed in normal ellipticals. Vader et al. (1988) interpreted this inverse color gradient in dwarf galaxies in terms of a positive gradient of stellar age across the system. However, because of their very low metallicities, stars in dwarf galaxies must have been formed on very short time scales, and therefore no appreciable age difference results. Recent high-resolution observations of nearby dwarf galaxies show a web of filaments, loops and expanding super-giant shells imprinted in the gas around the individual galaxies, and such striking features likely stem from the interaction of the interstellar medium with energetic stellar winds from massive stars and/or supernova explosions (Meurer, Freeman & Dopita 1992; Puche & Westpfahl 1994; Westpfahl & Puche 1994; Marlowe, Heckman & Wyse 1995; Hunter 1996). All these observations motivate us to consider that the energy feedback from supernovae may be of prime importance in understanding the discrepancy between the mass function of galaxies predicted from hierarchical models and the luminosity function obtained from recent redshift surveys. 
From the theoretical viewpoint, while collisionless $`N`$-body simulations are successful in producing an $`r^{1/4}`$-profile through the merging of galaxies (White 1979) as well as their monolithic collapse (van Albada 1982), only a few simulations incorporating star formation and energy feedback from supernovae have been performed (e.g. Theis, Burkert & Hensler 1992; Mori et al. 1997), and in most cases a simple one-zone chemical evolution model has instead been used to interpret the data (e.g. Dekel & Silk 1986; Yoshii & Arimoto 1987; Babul & Ferguson 1996). Navarro, Eke & Frenk (1996) recently studied the dynamical response of a virialized system to an impulsive mass loss from the galaxy center based on collisionless $`N`$-body simulations. However, their assumption of impulsive mass loss needs to be justified, because it is not known a priori whether the time scale of mass loss is shorter than the dynamical time of the system. Therefore, more realistic simulations are necessary in order to model the evolution of galaxies and properly interpret the accumulated data, including the spatial gradients of structural and photometric quantities. In this paper, using a powerful hybrid code for three-dimensional $`N`$-body and hydrodynamical simulations, we present a unified view of the dynamical, chemical and spectro-photometric evolution of dwarf and normal elliptical galaxies from cosmologically motivated initial conditions. In §2 we briefly describe the basic equations of the model and the method of incorporating the cooling, star formation, and energy feedback processes in the model. A more detailed description of the numerical method is deferred to the paper by Mori, Nakasato & Nomoto (1998). In §3 we carry out the simulations for dwarf and normal elliptical galaxies originating from a $`1\sigma `$ density perturbation in a CDM universe. In §4 we summarise the results of this paper. 
## 2 MODELS

Our three-dimensional simulation method combines collisionless dynamics with hydrodynamics in a consistent manner. The evolution of the collisionless system consisting of dark matter and stars is followed by an $`N`$-body code, and the hydrodynamical properties of the gaseous component are calculated using Smoothed Particle Hydrodynamics (SPH) (Lucy 1977; Gingold & Monaghan 1977). The SPH equations have a structure similar to that of the self-gravitating $`N`$-body system. SPH imposes no restriction on spatial resolution or deviation from any symmetry, and in some applications it provides surprisingly accurate results even with a small number of particles. These characteristics make SPH well suited to the study of the formation and evolution of galaxies.

### 2.1 Basic equations

The gaseous component is described by the fluid equations for a perfect gas in Lagrangian form. The smoothed average of a hydrodynamical quantity $`f(𝒓)`$ is given by $`<f(𝒓)>={\displaystyle \int f(𝒓^{})W(𝒓-𝒓^{},h)d^3𝒓^{}},`$ (1) provided that $`\int W(𝒓-𝒓^{},h)d^3𝒓^{}=1`$ and $`lim_{h\to 0}W(𝒓-𝒓^{},h)=\delta (𝒓-𝒓^{})`$, where $`W(𝒓,h)`$ is the smoothing kernel and $`h`$ is the smoothing length. For nearby gas particles, we replace the integration by a summation over the finite number of fluid elements. For example, the gas density $`\rho _\mathrm{g}`$ at position $`𝒓`$ is given by $`\rho _\mathrm{g}(𝒓)={\displaystyle \underset{j}{\sum }}m_jW(|𝒓-𝒓_j|,h),`$ (2) where $`m_j`$ is the mass of the gas particle located at position $`𝒓_j`$. In this paper, following Monaghan & Lattanzio (1985), we use the standard spherically symmetric cubic-spline kernel. 
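As a concrete illustration of the density estimate in equation (2), here is a minimal sketch (not the authors' code; the particle data and smoothing length are invented) using the Monaghan & Lattanzio (1985) cubic-spline kernel:

```python
import math

def spline_kernel(r, h):
    """3D cubic-spline kernel W(r, h) of Monaghan & Lattanzio (1985),
    normalized so that its volume integral equals one."""
    q = r / h
    sigma = 1.0 / (math.pi * h**3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0  # compact support: the kernel vanishes beyond 2h

def sph_density(r, particles, h):
    """Equation (2): rho_g(r) = sum_j m_j W(|r - r_j|, h).
    `particles` is a list of (mass, (x, y, z)) tuples."""
    rho = 0.0
    for m, pos in particles:
        rho += m * spline_kernel(math.dist(r, pos), h)
    return rho
```

The compact support of the kernel is what makes the sum in equation (2) a local operation over neighbours within $`2h`$ rather than over all particles.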
The momentum equation is given by $`{\displaystyle \frac{d𝒗_\mathrm{g}}{dt}}=-{\displaystyle \frac{1}{\rho _\mathrm{g}}}\nabla P+𝒈,`$ (3) and the thermal energy equation associated with the rates of cooling $`\mathrm{\Lambda }`$ and heating $`\mathrm{\Gamma }`$ is given by $`{\displaystyle \frac{d\epsilon }{dt}}=-{\displaystyle \frac{P}{\rho _\mathrm{g}}}\nabla \cdot 𝒗_\mathrm{g}+{\displaystyle \frac{\mathrm{\Gamma }-\mathrm{\Lambda }}{\rho _\mathrm{g}}},`$ (4) with $`P=(\gamma -1)\rho _\mathrm{g}\epsilon ,`$ (5) where $`𝒗_\mathrm{g}`$ is the gas velocity, $`𝒈`$ is the gravitational force, $`P`$ is the gas pressure, $`\gamma `$ (=5/3) is the adiabatic index, and $`\epsilon `$ is the specific internal energy. We assume that the gas is optically thin and in ionization equilibrium. In order to focus mostly on the heating by energy feedback from supernovae to the interstellar medium, we neglect the effect of photoionization by UV background radiation for simplicity. We use the radiative cooling rates taken from Table 6 of Sutherland & Dopita (1993). In Fig. 1 the cooling rate is plotted against the temperature for different metallicities. The top curve shows the cooling rate for \[Fe/H\]=+0.5, and the lower curves show those with the metallicity decreasing at intervals of 0.5 dex. The bottom curve corresponds to zero metallicity, i.e. the primordial gas composition. The equation of motion for a collisionless particle of either dark matter or stars at $`𝒓`$ is given by $`{\displaystyle \frac{d𝒗}{dt}}`$ $`=`$ $`𝒈,`$ (6) with $`𝒈`$ $`=`$ $`-G{\displaystyle \underset{j}{\overset{N}{\sum }}}{\displaystyle \frac{m_j(𝒓-𝒓_j)}{\{(𝒓-𝒓_j)^2+ϵ^2\}^{3/2}}},`$ (7) where $`G`$ is the gravitational constant and $`ϵ`$ is the softening parameter. We calculate the gravitational force $`𝒈`$ by using the “Remote-GRAPE” system (Nakasato, Mori & Nomoto 1997), where GRAPE refers to a special-purpose computer for efficiently calculating the gravitational force and potential (Sugimoto et al. 1990). 
Self-gravity calculations can be performed in parallel with other calculations such as gas dynamics, star formation and feedback, so that the total calculation time is considerably shortened. The performance analysis is reported by Nakasato, Mori & Nomoto (1997).

### 2.2 Physical processes

In each time step we calculate at each fluid point three time scales: the local gas dynamical time ($`t_{\mathrm{dyn}}`$), the local cooling time ($`t_{\mathrm{cool}}`$), and the sound crossing time ($`t_{\mathrm{sound}}`$): $`t_{\mathrm{dyn}}=\sqrt{{\displaystyle \frac{3\pi }{32G\rho _\mathrm{g}}}},`$ (8) $`t_{\mathrm{cool}}={\displaystyle \frac{3}{2}}{\displaystyle \frac{1}{\mu ^2(1-Y)^2}}{\displaystyle \frac{k_\mathrm{B}T}{n_\mathrm{g}\mathrm{\Lambda }}},`$ (9) and $`t_{\mathrm{sound}}={\displaystyle \frac{l}{c_\mathrm{s}}},`$ (10) where $`\rho _\mathrm{g}=\mu m_\mathrm{H}n_\mathrm{g}`$, $`\mu `$ is the mean molecular weight, $`m_\mathrm{H}`$ is the hydrogen mass, $`n_\mathrm{g}`$ is the gas number density, $`T`$ is the gas temperature, $`Y`$ (=0.25) is the helium mass fraction, $`k_\mathrm{B}`$ is the Boltzmann constant, $`c_\mathrm{s}`$ is the local sound speed, and $`l`$ is the local scale length of the fluid. Here we set $`l`$ equal to the smoothing length of a gas particle. In our simulation stars are assumed to form in rapidly cooling, Jeans-unstable, converging regions, subject to $`t_{\mathrm{cool}}<t_{\mathrm{dyn}}<t_{\mathrm{sound}}\mathrm{and}\nabla \cdot 𝒗<0.`$ (11) Once a region simultaneously satisfying these criteria has been identified, we create new collisionless star particles there at a rate determined by the local gas properties. The subsequent motion of star particles thus formed is determined only by gravity. The criteria in equation (11) are similar to those used by Katz (1992), Navarro & White (1993), and Steinmetz & Müller (1994). 
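The star formation criteria of equations (8)–(11) can be sketched as a small self-contained check. The constants below are standard cgs values; the temperature, density, cooling rate, mean molecular weight and smoothing length are illustrative choices for this sketch, not values taken from the simulations:

```python
import math

K_B = 1.381e-16   # Boltzmann constant, erg/K
M_H = 1.673e-24   # hydrogen mass, g
G   = 6.674e-8    # gravitational constant, cgs

def t_dyn(rho_g):
    """Equation (8): local gas dynamical (free-fall) time, s."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho_g))

def t_cool(T, n_g, lam, mu=0.6, Y=0.25):
    """Equation (9): local cooling time, s; lam is the cooling
    rate in erg cm^3 s^-1 (here an assumed constant, not the
    Sutherland & Dopita table)."""
    return 1.5 / (mu**2 * (1.0 - Y)**2) * K_B * T / (n_g * lam)

def t_sound(l, T, mu=0.6, gamma=5.0 / 3.0):
    """Equation (10): sound crossing time over the scale length l (cm)."""
    c_s = math.sqrt(gamma * K_B * T / (mu * M_H))
    return l / c_s

def forms_stars(T, n_g, lam, l, div_v, mu=0.6):
    """Equation (11): t_cool < t_dyn < t_sound and div(v) < 0."""
    rho_g = mu * M_H * n_g
    return (t_cool(T, n_g, lam, mu) < t_dyn(rho_g) < t_sound(l, T, mu)
            and div_v < 0.0)
```

For gas at $`10^4`$ K and $`1`$ cm<sup>-3</sup>, the cooling time is far shorter than the free-fall time, so the converging-flow condition and the Jeans-type condition $`t_{\mathrm{dyn}}<t_{\mathrm{sound}}`$ decide whether the region forms stars.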
However, they prescribed that about one third or half of the mass of a gas particle forms a new collisionless star particle and the rest is heated by supernovae. In contrast, we do not fix the mass fraction of a gas particle that is converted into a new star particle. We assume that the star formation rate (SFR) is proportional to the local gas density and inversely proportional to the local dynamical time: $`{\displaystyle \frac{d\rho _\mathrm{s}}{dt}}=C{\displaystyle \frac{\rho _\mathrm{g}}{t_{\mathrm{dyn}}}},`$ (12) where the SFR coefficient $`C`$ is treated as a free parameter according to the analysis by Katz, Weinberg & Hernquist (1995). We note that our simulation is insensitive to the adopted value of this parameter (cf. §3.1). We estimate the mass of a newly born star particle as $`m_\mathrm{s}=\left\{1-\mathrm{exp}\left(-C{\displaystyle \frac{\mathrm{\Delta }t}{t_{\mathrm{dyn}}}}\right)\right\}\pi l^3\rho _\mathrm{g},`$ (13) where $`\mathrm{\Delta }t`$ is the time step used (Mori, Nakasato & Nomoto 1998). Given the slope index $`x`$ and the lower and upper mass limits ($`m_\mathrm{l},m_\mathrm{u}`$) of the initial stellar mass function (IMF), the number of Type II supernova (SNe II) progenitors for a star particle with mass $`m_\mathrm{s}`$ is calculated as $`N_{\mathrm{SN}}={\displaystyle \frac{x-1}{x}}{\displaystyle \frac{1-(m_{\mathrm{SN},\mathrm{l}}/m_\mathrm{u})^{-x}}{1-(m_\mathrm{l}/m_\mathrm{u})^{1-x}}}{\displaystyle \frac{m_\mathrm{s}}{m_\mathrm{u}}},`$ (14) where $`m_{\mathrm{SN},\mathrm{l}}=8M_{\odot }`$ is the lower mass limit of stars that explode as SNe II. These IMF parameters affect the heating rate of the interstellar medium and the ejection rate of heavy elements from the star particle. Among them, the number of SNe II is most sensitive to the IMF slope. The current resolution of our simulation gives $`m_\mathrm{s}\simeq 10^4`$–$`10^6M_{\odot }`$, far exceeding the typical mass of single stars. 
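The closed form of equation (14) follows from integrating a power-law IMF (number of stars $`\propto m^{-(1+x)}dm`$, normalized so that the total stellar mass equals $`m_\mathrm{s}`$) from $`m_{\mathrm{SN},\mathrm{l}}`$ to $`m_\mathrm{u}`$. A hedged numerical sketch with the Salpeter parameters quoted in the text:

```python
def n_sn(m_s, x=1.35, m_l=0.1, m_u=60.0, m_snl=8.0):
    """Equation (14): number of SN II progenitors in a star particle of
    mass m_s (solar masses) for a power-law IMF with slope x and mass
    limits (m_l, m_u); m_snl is the lower mass limit for SNe II.
    This re-derives the closed form from the mass-normalized IMF."""
    return ((x - 1.0) / x
            * (1.0 - (m_snl / m_u) ** (-x))
            / (1.0 - (m_l / m_u) ** (1.0 - x))
            * m_s / m_u)
```

For the quoted parameters this gives roughly 0.007 SN II progenitors per solar mass of stars formed, i.e. of order $`10^2`$–$`10^4`$ SNe II for star particles of $`10^4`$–$`10^6M_{\odot }`$.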
Therefore, we distribute the associated mass of the star particle over approximately $`10^2`$–$`10^4`$ single stars according to Salpeter’s (1955) IMF ($`x=1.35`$), provided that the lower and upper mass limits are taken as $`m_\mathrm{l}=0.1M_{\odot }`$ and $`m_\mathrm{u}=60M_{\odot }`$, respectively. Stars, once formed, act as sources transferring energy, synthesized heavy elements, and materials (H and He) to the interstellar medium through supernovae or stellar winds from massive stars. This feedback process is most critical in simulations of galaxy formation. However, previous authors adopted different treatments, mostly because there is no good appreciation of how it should be modeled in the SPH algorithm (Katz 1992; Navarro & White 1993; Mihos & Hernquist 1994). In this paper, when a star particle is formed and identified with a stellar assemblage as described above, stars more massive than 8 $`M_{\odot }`$ start to explode as SNe II with an explosion energy of $`10^{51}`$ ergs each, and their outer layers are blown out with synthesized heavy elements into the interstellar medium, leaving 1.4 $`M_{\odot }`$ remnants. We can regard this stellar assemblage as a continuous energy release at an average rate of $`L_{\mathrm{SN}}=8.44\times 10^{35}`$ ergs s<sup>-1</sup> per star during the explosion period from $`\tau (m_\mathrm{u})=5.4\times 10^6`$ yr until $`\tau (8M_{\odot })=4.3\times 10^7`$ yr, where $`\tau (m)`$ is the lifetime of a star of mass $`m`$ (David, Forman & Jones 1990). Prior to the onset of SNe II explosions, however, their progenitors develop stellar winds and also release an energy of $`10^{50}`$ ergs into the interstellar medium at an average rate of $`L_{\mathrm{SW}}=7.75\times 10^{34}`$ ergs s<sup>-1</sup> per star. The released energy from stellar winds is supplied to the gas particles within a sphere of radius $`R_{\mathrm{snr}}`$, and the energy, heavy elements and materials from SNe II are subsequently supplied to the same region. 
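The quoted average rate $`L_{\mathrm{SN}}`$ spread over the quoted explosion period should recover the canonical $`10^{51}`$ ergs per supernova. A quick consistency check (the seconds-per-year conversion is an assumption of this sketch):

```python
YEAR = 3.156e7                 # seconds per year (assumed conversion)

L_SN = 8.44e35                 # erg/s per SN II progenitor (quoted average rate)
t_start = 5.4e6                # yr: lifetime of an m_u star, onset of SNe II
t_end = 4.3e7                  # yr: lifetime of an 8-solar-mass star, end of SNe II

# total energy released per progenitor over the explosion period
E_per_SN = L_SN * (t_end - t_start) * YEAR
```

Numerically this comes out within a fraction of a percent of $`10^{51}`$ ergs, confirming that the quoted rate is simply the explosion energy averaged over the SNe II phase.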
The radius $`R_{\mathrm{snr}}`$ is set equal to the maximum extension of the shock front in the adiabatic phase of a supernova remnant, and is given by Shull & Silk (1979) as $`R_{\mathrm{snr}}=32.9\left({\displaystyle \frac{E}{10^{51}\mathrm{ergs}}}\right)^{1/4}\left({\displaystyle \frac{n_\mathrm{g}}{1\mathrm{cm}^{-3}}}\right)^{-1/2}\mathrm{pc},`$ (15) where $`E`$ is the released energy. In each time step we set $`n_\mathrm{g}`$ equal to the number density of the gas surrounding the star particle, with a minimum limit of $`10^{-4}`$ cm<sup>-3</sup>. The gas within $`R_{\mathrm{snr}}`$ remains adiabatic until the multiple SNe II phase ends at $`\tau (8M_{\odot })`$, and then it cools according to the adopted cooling rate of the gas. Tsujimoto et al. (1996) tabulated the masses of 27 heavy elements synthesized in the progenitors of SNe Ia and II, and determined the number ratio of Type Ia supernovae (SNe Ia) relative to SNe II that best reproduces the observed abundance pattern among heavy elements in the solar neighborhood as well as in the Large and Small Magellanic Clouds. Given the IMF, the total number of SNe Ia and II from a star particle is easily estimated, and this number can then be related to the mass of heavy elements ejected from a star particle by making use of Table 2 of Tsujimoto et al. (1996). We note that our analysis in this paper is restricted to the metal enrichment by SNe II only, and the full analysis including the contribution from SNe Ia will be given elsewhere. We compute the evolution of the spectral energy distribution (SED) of a star particle based on the method of stellar population synthesis, which utilizes the stellar evolutionary tracks for various masses and metallicities of stars (Arimoto & Yoshii 1986). Using the updated version by Kodama & Arimoto (1997), we calculate the SED from $`\lambda =300`$ Å to $`40000`$ Å as a function of the elapsed time from the formation of each star particle. 
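Equation (15) is straightforward to evaluate; the sketch below (not the authors' code) assumes the quoted coefficient and exponents, with the ambient density entering to the $`-1/2`$ power:

```python
def r_snr(E_erg, n_cm3):
    """Equation (15), after Shull & Silk (1979): maximum shock-front
    radius (pc) of a supernova remnant in the adiabatic phase, for a
    released energy E_erg (erg) and ambient number density n_cm3 (cm^-3)."""
    return 32.9 * (E_erg / 1.0e51) ** 0.25 * n_cm3 ** -0.5
```

For a single $`10^{51}`$-erg explosion in gas at $`1`$ cm<sup>-3</sup> this gives 32.9 pc; the quoted density floor of $`10^{-4}`$ cm<sup>-3</sup> caps the feedback radius at a factor of 100 larger, about 3.3 kpc.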
The SED of a whole galaxy is then obtained by summing up the SEDs of all star particles ever formed at different times with different metallicities. In this paper the response functions for the $`UBVRI`$ passbands are taken from Bessell (1990) and those for the $`JHKL`$ magnitudes from Bessell & Brett (1988).

### 2.3 Virialized protogalaxy

Following a standard scenario of the CDM universe ($`\mathrm{\Omega }_0=1`$, $`H_0=50`$ km s<sup>-1</sup>Mpc<sup>-1</sup>), we consider a protogalaxy as originating from a CDM density fluctuation with $`\delta M/M`$ equal to $`\nu `$ times the rms value $`\sigma `$. The fluctuation is normalized to unity for a spherical top-hat window of comoving radius 16 Mpc with a bias parameter of $`b=1`$. We assume that this protogalaxy is composed of 10 % baryons and 90 % dark matter in mass and is initially in virial equilibrium. A protogalaxy with total mass $`M`$ is assumed to have the density profile of a singular isothermal sphere with the truncation radius equal to $`R_{\mathrm{vir}}`$. The locus of $`R_{\mathrm{vir}}`$ versus $`M`$ for a $`1\sigma `$ density peak is shown by the thin line in Fig. 2 (cf. Bardeen et al. 1986). The local dynamical time of the singular isothermal sphere at radius $`r`$ is given by $`\tau _{\mathrm{dyn}}(r)=\left({\displaystyle \frac{3\pi ^2R_{\mathrm{vir}}}{8GM}}\right)^{\frac{1}{2}}r,`$ (16) and the local gas cooling time is given by $`\tau _{\mathrm{cool}}(r)={\displaystyle \frac{2\pi Gm_\mathrm{H}^2}{(1-Y)^2F}}{\displaystyle \frac{1}{\mathrm{\Lambda }}}r^2,`$ (17) where $`F(=0.1)`$ is the baryonic fraction. 
Equating $`\tau _{\mathrm{dyn}}(r)`$ to $`\tau _{\mathrm{cool}}(r)`$, we define the cooling radius as $`R_{\mathrm{cool}}(M)=\left({\displaystyle \frac{3R_{\mathrm{vir}}}{32}}\right)^{\frac{1}{2}}{\displaystyle \frac{(1-Y)^2F\mathrm{\Lambda }}{m_\mathrm{H}^2G^{\frac{3}{2}}}}M^{-\frac{1}{2}}.`$ (18) The $`R_{\mathrm{cool}}`$ versus $`M`$ relation for the solar-abundance gas is shown by the upper thick line and the relation for the primordial gas is shown by the lower thick line in Fig. 2. In a region above (below) the locus of $`R_{\mathrm{cool}}(M)`$, the cooling time is longer (shorter) than the dynamical time. For a less massive protogalaxy with $`M<7\times 10^{10}M_{\odot }`$, the cooling is efficient over the whole range of $`r`$, so that these galaxies can cool and condense, leading to a burst of star formation. On the other hand, for $`M>7\times 10^{10}M_{\odot }`$, star formation occurs only in the cooling region inside the radius $`R_{\mathrm{cool}}`$ $`(<R_{\mathrm{vir}})`$. In the particular case of $`M=2\times 10^{12}M_{\odot }`$, the cooling region inside $`R_{\mathrm{cool}}`$ contains about 5% of the total mass. It is clear from Fig. 2 that the cooling is more efficient for higher metallicity. Since heavy elements synthesized by SNe II spread out due to the feedback process and stellar motions, the cooling region in which star formation occurs expands towards the outer part of the system. Accordingly, this expansion of the cooling region induces the formation of massive ellipticals even from initial conditions of low density like $`1\sigma `$-peaks.

## 3 SIMULATIONS

The formation of CDM halos through hierarchical clustering has been investigated with $`N`$-body simulations. Dubinski & Carlberg (1991) argued that their equilibrium density profile is fitted by $`\rho \propto r^{-1}(r+a)^{-3}`$, or Hernquist’s (1990) profile, which has a central cusp and resembles de Vaucouleurs’ $`r^{1/4}`$-profile in projection. 
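The algebra leading from equations (16) and (17) to the cooling radius of equation (18) can be verified numerically. The sketch below (illustrative mass, virial radius and cooling rate, not simulation values) checks that $`\tau _{\mathrm{dyn}}=\tau _{\mathrm{cool}}`$ at $`R_{\mathrm{cool}}`$:

```python
import math

G   = 6.674e-8        # gravitational constant, cgs
M_H = 1.673e-24       # hydrogen mass, g
Y, F = 0.25, 0.1      # helium mass fraction and baryonic fraction (quoted)

def tau_dyn(r, M, R_vir):
    """Equation (16): local dynamical time of a singular isothermal sphere."""
    return math.sqrt(3.0 * math.pi**2 * R_vir / (8.0 * G * M)) * r

def tau_cool(r, lam):
    """Equation (17): local gas cooling time; lam in erg cm^3 s^-1."""
    return 2.0 * math.pi * G * M_H**2 / ((1.0 - Y)**2 * F * lam) * r**2

def r_cool(M, R_vir, lam):
    """Equation (18): radius where the two time scales are equal."""
    return (math.sqrt(3.0 * R_vir / 32.0)
            * (1.0 - Y)**2 * F * lam / (M_H**2 * G**1.5) * M**-0.5)
```

Because $`\tau _{\mathrm{dyn}}\propto r`$ while $`\tau _{\mathrm{cool}}\propto r^2`$, the gas inside $`R_{\mathrm{cool}}`$ always cools faster than it collapses, which is what makes the cooling radius a sharp boundary for star formation.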
Navarro, Frenk & White (1997), however, pointed out that the structure can be approximated as $`\rho (r)\propto r^{-1}(r+a)^{-2}`$, which is more extended than the Hernquist profile. Fukushige & Makino (1997) used a high-resolution simulation and showed that the CDM halos have a steeper central cusp than quoted above. The resulting structure of the CDM halos depends on the number of particles used in the simulation. Besides, there is no definitive view about whether the baryonic component has the same equilibrium profile as the dark matter halo. Thus, in this paper, we assume that both baryons and dark matter initially have a King profile with a central concentration index of $`c=2`$ (Fig. 3; cf. Binney & Tremaine 1987). Since our prime motivation is to clarify the effect of energy feedback on the evolution of galaxies, we simply neglect the possible mass-dependent profile of dark halos (Navarro, Frenk & White 1997) and their angular momentum distribution (Mao & Mo 1998). This two-component system is made to settle into virial equilibrium, from which the gas temperature and the velocity dispersion of dark matter are estimated. Our simulation uses $`3\times 10^4`$ gas particles and the same number of dark matter particles to set up the initial condition. As the number of star particles increases due to star formation, the total number of particles grows to about $`10^5`$ by the end of our simulation.

### 3.1 Dwarf elliptical galaxies

We consider a less massive protogalaxy having a total mass of $`10^{10}M_{\odot }`$ with a baryon to dark matter ratio of 1/9. The tidal radius of the King profile is 8.45 kpc. The mean density of the total system is $`2.7\times 10^{-25}`$ g cm<sup>-3</sup>, the mean temperature of the gas is $`10^{5.1}`$ K, the mean velocity dispersion of dark matter is 72 km s<sup>-1</sup>, the mean dynamical time is $`1.3\times 10^8`$ yrs, and the mean cooling time is $`7.6\times 10^7`$ yrs. 
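The quoted mean densities and mean dynamical times can be cross-checked against equation (8); a minimal sketch (cgs constants and the seconds-per-year conversion are assumptions of this sketch):

```python
import math

G = 6.674e-8          # gravitational constant, cgs
YEAR = 3.156e7        # seconds per year

def mean_dynamical_time(rho):
    """Equation (8) applied to the mean density rho (g cm^-3); returns yr."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / YEAR

t_dyn_dwarf = mean_dynamical_time(2.7e-25)    # quoted mean density, dwarf model
t_dyn_normal = mean_dynamical_time(2.8e-26)   # quoted mean density, normal model (§3.2)
```

This reproduces the quoted $`1.3\times 10^8`$ yrs for the dwarf model and, with the ten-times-lower mean density of the normal-elliptical model of §3.2, the quoted $`4.0\times 10^8`$ yrs.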
The gravitational softening parameter is adopted as 0.02 kpc for gas particles, 0.05 kpc for dark matter, and 0.03 kpc for star particles. Owing to the efficient radiative cooling, mainly through collisional excitation of H and He<sup>+</sup>, the gas temperature rapidly drops, which induces a dynamical contraction of gas and dark matter under their self-gravity. When the gas temperature becomes close to $`10^4`$ K and stops decreasing, a quasi-isothermal contraction is established. The density in the central region increases by the accretion of the surrounding gas, and eventually intense star formation is triggered. When massive stars begin to explode as SNe II, the gas in the vicinity of the SNe II acquires the thermal energy and synthesized heavy elements released from them. The gas temperature then locally increases up to about $`10^{7.5}`$ K, and subsequent formation of stars is virtually halted. About 5% of the initial gas mass is used up in this formation of the first-generation stars. Figure 4 shows the snapshots of the projected particle positions and the integrated energy spectra as a function of elapsed time from $`0.5\times 10^7`$ to $`1.0\times 10^9`$ yrs. The top three panels in each column show the spatial distributions of dark matter, gas, and stars, respectively, projected onto the $`xy`$ plane. The bottom panel shows the spectral energy distribution (SED) from stellar populations in the evolving galaxy. A supernova-driven gas flow is generated outwards from the center of the protogalaxy. This outflow collides with the inflow of the gas accreting from outside, and a high-density super-shell is eventually formed. Figure 5 shows the radial profiles of the gas density (top panel), the temperature (middle panel), and the radial velocity (bottom panel) at the elapsed time of $`1.0\times 10^7`$ yrs. We find from this figure that the shock waves propagate outwards with the shock front at $`\sim 0.5`$ kpc, and that a hot cavity is created inside the super-shell. 
While the gas is continuously swept up by the super-shell, the gas density further increases due to the enhanced cooling rate in the already dense shell. Then the secondary star formation begins within the super-shell, and subsequent SNe further accelerate the outward expansion of the shell. This situation is clearly seen in the second panels of Fig. 4, in the time sequence from $`1.0\times 10^7`$ to $`3.0\times 10^7`$ yrs. Star formation in the expanding shell continues for $`5.0\times 10^7`$ yrs, until the gas density in the shell becomes too low to form new stars. About 20% of the initial gas mass is turned into stars in this stage. Finally, the outflowing gas escapes from the gravitational potential of the whole system and is ejected into intergalactic space. The final stellar system contains $`\sim 25\%`$ of the initial gas mass. The stars initially have the velocity vectors of the gas from which they were formed. Therefore, the first-generation stars have zero systematic velocity, but the later-generation stars have a large outward radial velocity component. The oscillation of swelling and contraction of the stellar system continues for several $`10^8`$ yrs, and the system settles into a quasi-steady state by $`\sim 1`$ Gyr. Consequently, the system has a large velocity dispersion and a large core. The surface mass distribution is approximately exponential and is more extended than de Vaucouleurs’ $`r^{1/4}`$-profile. Stars are formed for the most part before the gas is fully polluted to the yield value of the synthesized heavy elements. The average metallicity of the stars in the system is as low as \[Fe/H\] $`\sim -2.4`$. This metallicity is consistent with the range covered by the observations, but is much lower than those of normal galaxies (Dekel & Silk 1986; Yoshii & Arimoto 1987). 
One outstanding feature discovered by our simulation is a positive metallicity gradient in this system, which is in sharp contrast to the observed negative gradient of massive galaxies (Carollo, Danziger & Buson 1993). The star-forming site moves outwards with the expanding shell, and the gas in this shell is gradually enriched with synthesized heavy elements from SNe II. Stars of later generations are necessarily born at larger radii with larger metallicities, leading to the emergence of a positive metallicity gradient in the resulting stellar system. Figure 6 shows the projected distribution of stellar metallicity at the different elapsed times of $`1.0\times 10^7,2.0\times 10^7,3.0\times 10^7`$, and $`1.0\times 10^9`$ yrs. The distribution does not change with time, and the system should keep such a metallicity gradient during the age of the universe. Figure 7 shows the projected surface brightness distribution at 15 Gyrs in the B, V, R, I and K bands. These profiles are plotted against a linear scale of the radius in kpc and against the quartic root of the radius in kpc. The resulting system is characterized by an exponential profile rather than de Vaucouleurs’ $`r^{1/4}`$-profile. Figure 8 shows the projected color profiles at 15 Gyrs for V–K (circles), B–R (squares), B–V (diamonds), and V–R (triangles). These profiles are scaled vertically in this figure to coincide with each other at the galaxy center. The result is consistent with the observed trend of redder colors at larger radii for dwarf galaxies (Vader et al. 1988; Kormendy & Djorgovski 1989; Chaboyer 1994). In our simulation of dwarf ellipticals, the total stellar mass is smaller when a flatter IMF is adopted. However, the stellar metallicity is not so sensitive to the IMF slope and remains about one tenth of the solar value, in agreement with observations of nearby dwarf galaxies. 
This is because for the flatter IMF the number of supernovae is larger, so that the heated gas and the heavy elements therein are blown out of the system at a larger rate. We also examined how our simulation depends on the SFR coefficient $`C`$ in equation (12). For $`C=0.1`$ and $`C=1.0`$, we carried out the simulations with the same initial condition. Our simulation gives a total stellar mass of $`3.12\times 10^8M_{\odot }`$ for $`C=0.1`$ and $`4.27\times 10^8M_{\odot }`$ for $`C=1.0`$, indicating that the result is insensitive to the adopted value of $`C`$, as noted by Katz (1992) and Katz, Weinberg & Hernquist (1995).

### 3.2 Normal elliptical galaxies

In this section we study the formation of a more massive system for the purpose of comparison with the less massive system of dwarf elliptical galaxies. We consider a protogalaxy having a total mass of $`10^{12}M_{\odot }`$ with a baryon to dark matter ratio of 1/9. The tidal radius of the King profile is 82.8 kpc. The mean density of the total system is $`2.8\times 10^{-26}`$ g cm<sup>-3</sup>, the mean temperature of the gas is $`10^{6.1}`$ K, the mean velocity dispersion of dark matter is 228 km s<sup>-1</sup>, the mean dynamical time is $`4.0\times 10^8`$ yrs, and the mean cooling time is $`5.7\times 10^{10}`$ yrs. The gravitational softening parameter is adopted as 0.2 kpc for gas particles, 0.5 kpc for dark matter, and 0.3 kpc for star particles. Figure 9 shows the snapshots of the projected particle positions and the integrated energy spectra as a function of elapsed time from $`0.1\times 10^9`$ to $`2.0\times 10^9`$ yrs. The top three panels in each column show the spatial distributions of dark matter, gas, and stars, respectively, projected onto the $`xy`$ plane. The bottom panel shows the SED from the evolving galaxy. Contrary to dwarf ellipticals, a massive protogalaxy evolves from a relatively low gas density and a high virial temperature, so that the gas does not cool rapidly. 
The protogalaxy, initially in virial equilibrium, shrinks quasi-statically, and the gas density near the galaxy center gradually increases. When the gas density rises sufficiently, the temperature suddenly drops due to the thermal instability. Figure 10 shows that the gas temperature drops down to about $`10^4`$ K and the gas density further increases in the central region, where active star formation starts to occur. Newly born massive stars heat up the surrounding gas, and at the same time the rate of their formation is suppressed in the heated gas. Therefore, the star formation rate is stabilized and self-regulated when all star-forming activity is confined within the deep gravitational potential of a massive galaxy. Figure 11 shows the projected surface brightness distribution at 15 Gyrs, plotted against a linear scale of the radius in kpc and against the quartic root of the radius in kpc. The surface brightness distribution at 15 Gyr is similar to an $`r^{1/4}`$-profile, and the integrated blue luminosity is $`M_B=-21.5`$ mag. Since massive stars enrich the gas with synthesized heavy elements from which new stars are subsequently born, the stellar metallicity increases in proportion to the heavy element abundances in the gas. In the central region of the galaxy, cycles of stellar birth and death continue until the metallicity approaches the yield value, whereas in the outer part of the galaxy the gas is used up in star formation before it is significantly polluted. Consequently, as shown in Fig. 12, there appears a negative gradient of stellar metallicity across the system, corresponding to bluer colors at larger radii as observed in normal galaxies (e.g. Carollo, Danziger & Buson 1993). 
We note, however, that the average stellar metallicity is \[Fe/H\] $`\sim -0.2`$, which is about three times lower than the observed metallicity of \[Fe/H\] $`+0.3`$, either because Salpeter’s IMF adopted in our simulation is too steep or because the initial density of the protogalaxy taken from a $`1\sigma `$ density peak is too low to form massive elliptical galaxies. Our numerical experiments indicate that a flatter IMF with $`x=1.05`$ is able to reproduce the observed metallicity. In order to refine the model, we need more extensive simulations from different initial conditions, including the iron supply from SNe Ia in the gas. A comprehensive analysis along this line will be reported elsewhere.

## 4 SUMMARY AND CONCLUSION

We have developed a three-dimensional $`N`$-body/SPH simulation code combined with stellar population synthesis and applied it to the study of the dynamical, chemical and spectro-photometric evolution of dwarf and normal elliptical galaxies embedded in a dark matter halo. A protogalaxy is assumed to be a virialized non-rotating sphere in a $`1\sigma `$ CDM perturbation enclosing a total mass of $`10^{10}M_{\odot }`$ or $`10^{12}M_{\odot }`$ with 10 % baryonic mass. For dwarf ellipticals, the gas temperature rapidly decreases by the efficient cooling, and the protogalaxy starts to collapse due to the self-gravity of gas and dark matter. When the gas density near the galaxy center becomes large, a star formation burst takes place (SFR $`\sim 2`$–$`3M_{\odot }`$ yr<sup>-1</sup>) and the collapse is halted by the supernova-driven wind from the central star-forming region. About 75 % of the initial gas mass is then lost from the system on a time scale much shorter than the mean crossing time of a star in the system. Thereby the system expands and recovers a new equilibrium state as a loosely bound stellar system exhibiting an exponential structure. On the other hand, normal ellipticals are considered to evolve from a ten times lower gas density and a ten times higher virial temperature. 
The gas is thermally stable, having a relatively longer cooling time. In our simulation, most of the gas is used up in the formation of stars and only a small fraction of the gas is evaporated out of the system in the form of irregular gaseous blobs. Since the time scale of this gas removal is much longer than the crossing time of a star, the stellar system remains in quasi-equilibrium and evolves into an $`r^{1/4}`$-structure as observed. The dynamical evolution of the system is determined not only by the amount of gas removed but also by its time scale. These decisive factors of gas removal depend critically on the cooling efficiency, which is systematically higher for less massive protogalaxies in the CDM universe (see Fig. 2). In particular, their efficient cooling causes a significant dynamical impact in making the exponential structure of dwarf galaxies. However, at the low-mass end below $`10^8M_{\odot }`$, protogalaxies have much more efficient cooling, so that a large amount of gas is locked up in the formation of the first-generation stars before they start to explode as supernovae. Consequently, supernovae no longer affect the subsequent evolution of the stellar system, and very low-mass galaxies could survive, as first demonstrated by Yoshii & Arimoto (1987). It is therefore intriguing to simulate spheroidal stellar systems along their mass sequence, from giant elliptical galaxies to compact globular clusters, in order to see whether the energy feedback works to suppress the formation of dwarf galaxies only and modifies the shape of the luminosity function as observed. We are grateful to T. Shigeyama, M. Chiba and T. Tsujimoto for many fruitful discussions, and to T. Kodama for providing us the tables of population synthesis. 
This work has been supported in part by the Research Fellowship of the Japan Society for the Promotion of Science for Young Scientists (6867), the Grants-in-Aid for Scientific Research (05242102, 06233101), and the Center-of-Excellence Research (07CE2002) of the Ministry of Education, Science, Sports, and Culture in Japan. The computations were mainly carried out on the “Remote-GRAPE” system at the University of Tokyo, and partly on the Fujitsu VPP-300 at the National Astronomical Observatory in Japan and the Fujitsu VPP-500 at the Institute of Physical and Chemical Research (RIKEN).
# On the model of dust in the Small Magellanic Cloud ## 1. Introduction The Small Magellanic Cloud (SMC), one of our nearest neighbours, contains considerably lower amounts of heavy elements and dust than the Galaxy (Bouchet et al. bouchet (1985); Welty et al. welty (1997)). In addition, dust in the SMC seems to be different from dust in the Galaxy (Prévot et al. prevot (1984); Pei pei (1992)). For example, a typical extinction curve of the SMC is almost linear with inverse wavelength and does not show the UV bump at 2175 Å (Prévot et al. prevot (1984); Thompson et al. thomp (1988)). Recently, Rodrigues et al. (rodr (1997)) have measured the linear polarization for a sample of SMC stars in the optical and found that the wavelength of maximum polarization is generally smaller than in the Galaxy. The low metallicity of the SMC implies that it should be at an early stage of its chemical evolution, thus resembling in this respect galaxies at high redshifts. Strong support for this view has come from the discovery that the dust in starburst galaxies, apparently the only type of galaxy found so far at redshifts $`z>2.5`$, has an extinction law remarkably similar to that in the SMC (Gordon, Calzetti, & Witt gcw (1997)). Several attempts have been made to model the dust in the SMC. Bromage & Nandy (bn83 (1983)) and Pei (pei (1992)) modelled some SMC mean extinction curves using a dust mixture of spherical graphite and silicate grains with a power-law size distribution (Mathis, Rumpl & Nordsieck mrn (1977), hereafter MRN; Draine & Lee dl84 (1984)). They showed that an MRN-like mixture with a lower fractional mass of graphite grains relative to silicate grains, and with other parameters as in the Galaxy, can satisfactorily explain the SMC extinction. Pei (pei (1992)) even succeeded in fitting the SMC extinction law with silicate grains alone. Rodrigues et al.
(rodr (1997)) made model fits to both the extinction and the polarization for two stars in the SMC, AzV 398 and AzV 456, representing lines of sight with different properties. They found that a mixture of bare silicate and amorphous carbon grains, or of graphite spheres together with silicate cylinders, with an MRN-like power-law size distribution, explains both the extinction and the polarization quite well. However, the SMC dust models proposed so far are too simplistic concerning both the choice of grain constituents and the grain-size distributions. Recently, Mathis (m96 (1996)), Li & Greenberg (lg97 (1997)) and Zubko, Krełowski, & Wegner (zkw2 (1998)) have presented more sophisticated models of Galactic dust using core-mantle, multilayer and composite grains, which are much more physically justified than the models previously considered. Note that the models by Zubko et al. (zkw2 (1998)) were calculated with the regularization approach (Zubko zubko (1997)), a very efficient method capable of deriving optimal and unique size distributions in a general form for any predefined mixture of grains by simultaneously fitting the extinction curve, the elemental abundances and the mass fraction constraints. The uniqueness of the grain size distributions follows from the mathematical nature of the problem: one has to solve a Fredholm integral equation of the first kind, a typical ill-posed problem. The regularization approach reduces this problem to the minimization of a strongly convex quadratic functional, which has been rigorously proved to have a unique solution. See, e.g., Groetsch (groe84 (1984)), Tikhonov et al. (tikh90 (1990)) or Zubko (zubko (1997)) for more details. The method can be extended to allow one to deduce the uncertainty in the solution from the data uncertainty, but this is beyond the scope of this Letter (Zubko 1999, in preparation).
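To make the regularization idea concrete, here is a minimal numerical sketch: a discretized Fredholm equation of the first kind is solved by Tikhonov regularization, i.e. by minimizing a strictly convex quadratic functional with a unique minimizer. The kernel, grids, "true" size distribution and regularization parameter below are synthetic choices for illustration only, not those of the actual method of Zubko (1997).

```python
import numpy as np

# Discretized Fredholm equation of the first kind, K g = e:
# a smooth kernel makes the inverse problem ill-posed.
n = 60
a = np.linspace(0.01, 1.0, n)                  # "grain sizes" (arbitrary units)
w = np.linspace(1.0, 10.0, n)                  # "inverse wavelengths"
K = np.exp(-np.subtract.outer(w, 5.0 * a)**2)  # smooth, hence ill-conditioned, kernel
g_true = np.exp(-((a - 0.3) / 0.1)**2)         # synthetic "true" size distribution
e = K @ g_true                                 # synthetic "extinction" data

# Tikhonov regularization: minimize ||K g - e||^2 + lam * ||g||^2.
# The minimizer solves the regularized normal equations below.
lam = 1e-6
g = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ e)

rel_residual = np.linalg.norm(K @ g - e) / np.linalg.norm(e)
print(rel_residual)                            # small: data reproduced stably
```

The point of the sketch is only that the regularized problem is well-posed: the quadratic functional has a single minimizer even though the unregularized system is numerically singular.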
Recently, Gordon & Clayton (gc98 (1998)) have derived the extinction curves for four SMC stars with several improvements over previous studies: higher S/N IUE spectra and a more careful choice of the pairs of reddened and comparison stars. The sightlines toward three stars, AzV 18, 214, and 398, located in the SMC bar, pass through regions of active star formation and exhibit similar extinction laws. The sightline toward the star AzV 456, located in the SMC wing, passes through a much more quiescent star-forming region and shows a Galaxy-like extinction with the 2175 Å UV bump. The purpose of the present Letter is to report the first SMC dust models obtained by applying the regularization approach to the new high-quality extinction curves. We modelled the extinction toward the star AzV 398, which is thought to represent a typical SMC bar sightline (Rodrigues et al. rodr (1997); Gordon & Clayton gc98 (1998)). The results of the study including all four stars will be presented in a forthcoming paper. ## 2. Empirical data We transform the extinction curve for AzV 398, derived by Gordon & Clayton (gc98 (1998)) in the standard form $`E(\lambda )=[\tau (\lambda )-\tau (V)]/[\tau (B)-\tau (V)]`$, to the extinction cross section per H atom: $$\frac{\tau (\lambda )}{N_\mathrm{H}}=0.921\frac{E(\mathrm{B}-\mathrm{V})}{N_\mathrm{H}}[E(\lambda )+R_V]$$ (1) where $`\tau `$ is the optical thickness, $`E(\mathrm{B}-\mathrm{V})`$ is the $`\mathrm{B}-\mathrm{V}`$ colour excess, $`R_V`$ is the total-to-selective extinction ratio, and $`N_\mathrm{H}`$ is the column number density of hydrogen. For $`N_\mathrm{H}`$ we take the value 1.5$`\times `$10<sup>22</sup> cm<sup>-2</sup> from Bouchet et al. (bouchet (1985)), corresponding to atomic hydrogen.
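Eq. (1) amounts to a one-line conversion. A minimal sketch, using the AzV 398 values quoted in the text (E(B−V) = 0.37, R_V = 2.87, N_H = 1.5×10²² cm⁻²):

```python
# Conversion of Eq. (1): normalized extinction E(lambda) to extinction
# cross section per H atom.  Numerical values are those quoted in the
# text for AzV 398 (Bouchet et al. 1985; Gordon & Clayton 1998).

E_BV = 0.37        # E(B-V) colour excess, mag
R_V = 2.87         # total-to-selective extinction ratio
N_H = 1.5e22       # column density of (atomic) hydrogen, cm^-2

def tau_per_H(E_lambda):
    """tau(lambda)/N_H in cm^2 per H atom, Eq. (1)."""
    return 0.921 * (E_BV / N_H) * (E_lambda + R_V)

# At the V band E(lambda) = 0 by construction:
print(tau_per_H(0.0))   # ~6.5e-23 cm^2 per H atom
```

The factor 0.921 = 1/[2.5 log₁₀ e] converts magnitudes of extinction to optical depth.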
Since the sightline toward AzV 398 is associated with an H II region (Gordon & Clayton gc98 (1998)), it is likely that the contribution of molecular hydrogen to $`N_\mathrm{H}`$ is negligibly small (for sightlines not passing through the SMC bar this may not be the case, see Lequeux lequeux (1994)). The value $`E(\mathrm{B}-\mathrm{V})`$=0.37 was also taken from Bouchet et al. (bouchet (1985)) and $`R_V`$=2.87 from Gordon & Clayton (gc98 (1998)). The elemental abundances (gas + dust) currently adopted for the SMC were taken from Welty et al. (welty (1997)): C/H=46 p.p.m. (atoms per 10<sup>6</sup> H atoms) or 7.66$`\pm `$0.13 dex, O/H=107 p.p.m. or 8.03$`\pm `$0.10 dex, Si/H=10 p.p.m. or 7.00$`\pm `$0.18 dex, Mg/H=9.1 p.p.m. or 6.96$`\pm `$0.12 dex, and Fe/H=6.6 p.p.m. or 6.82$`\pm `$0.13 dex. Note that these values are 2–5 times lower than the respective Galactic abundances after their recent revision (Snow & Witt sw (1996); Cardelli et al. cardelli (1996)). Since we have no information on the amounts of elements in dust and gas separately in the SMC, we simply assume in this study that, as in the Galaxy, 42% of carbon ($`\sim `$20 p.p.m.), 37% of oxygen ($`\sim `$40 p.p.m.), and all silicon, magnesium and iron are locked up in dust (Cardelli et al. cardelli (1996); Zubko et al. zkw2 (1998)). The actual amount of elements locked up in dust is uncertain, but one may expect even lower amounts because the SMC is less chemically processed than our Galaxy. Recently, Witt, Gordon, & Furton (wgf (1998)) and Ledoux et al. (ledoux (1998)) proposed that silicon nanoparticles might be the source of the extended red emission (ERE) in our Galaxy. Zubko, Smith, & Witt (zsw (1999)) have modeled the mean Galactic extinction curve with silicon nanoparticles included and have shown that this hypothesis is consistent with the available data on extinction, elemental abundances and the ERE.
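The assumed dust-phase budget can be made explicit with a trivial calculation (Galactic depletion fractions applied to the Welty et al. 1997 SMC abundances, exactly as stated above):

```python
# Dust-phase abundance budget assumed in the text:
# 42% of C and 37% of O locked in dust, Si/Mg/Fe fully depleted.
C_total, O_total = 46.0, 107.0               # gas + dust, p.p.m.

C_dust = 0.42 * C_total                      # carbon in dust
O_dust = 0.37 * O_total                      # oxygen in dust
Si_dust, Mg_dust, Fe_dust = 10.0, 9.1, 6.6   # assumed fully in dust

print(round(C_dust, 1), round(O_dust, 1))    # ~19.3 and ~39.6 p.p.m.
```

These are the "∼20 p.p.m." and "∼40 p.p.m." figures quoted above.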
On the other hand, Perrin, Darbon, & Sivan (pds (1995)) and Darbon, Perrin, & Sivan (dps (1998)) have revealed ERE in extragalactic objects showing active star formation: the starburst galaxy M82 and the nebula 30 Doradus in the Large Magellanic Cloud, respectively. Since the sightline toward AzV 398 passes through a star-forming region, we may expect to observe ERE from there as well. We thus included silicon nanoparticles in our modeling. As in Zubko et al. (zsw (1999)), we used the silicon core–SiO<sub>2</sub> mantle model of silicon nanoparticles with optical constants of nanosized silicon from Koshida et al. (koshida (1993)). We also included in the present study the grain constituents (graphite, silicate, SiC, organic refractory, amorphous carbon, water ice and others) and the respective optical constants previously used by Zubko et al. (zkw2 (1998)). ## 3. Models of extinction We performed extensive work on modeling the extinction curve for AzV 398, searching for physically reasonable mixtures of dust constituents. Our goal was to find models which would simultaneously fit the extinction, consume the allowed amounts of chemical elements and include silicon nanoparticles (by analogy with the Galactic case, Zubko et al. zsw (1999)). We report in this Letter three simple models which fulfil all the above requirements. The results are presented in the accompanying figures and in Table 1. The model grains are mostly made up of two species, silicate (MgFeSiO<sub>4</sub>) and organic refractory residue, which coexist either in core(silicate)–mantle(organic refractory) grains or in spherical porous composite grains, the latter also containing small amounts of amorphous carbon. The total mass fraction of silicate + organic refractory is about 0.9. The other important model component is the silicon nanoparticles, which are found to have a mass fraction of about 0.07–0.085.
The fact that organic refractory is among the major grain constituents is in good agreement with the expectation that the interstellar radiation field (ISRF) in the SMC is stronger than in the Galaxy (Lequeux et al. leqetal (1994)), since the icy mantles on silicate grains formed in molecular clouds can be processed by the UV radiation into organic refractory (Greenberg & Li gl96 (1996)). Note especially that our attempts to include silicate and SiC grains coated by either amorphous carbon or water ice mantles, and also bare carbonaceous (graphite, amorphous carbon), silicate and SiC grains, resulted in very low mass fractions of such grains, typically less than 1 per cent. This means that the conditions in the SMC (ISRF intensity, duration of the exposure to UV radiation) are probably favourable for converting icy mantles into organic refractory, but not for the further processing of organic refractory into amorphous carbon. In contrast to the SMC, the presence of icy mantles may be allowed for the dust grains in the Galactic diffuse medium (Zubko et al. zkw2 (1998)). Following Zubko et al. (zkw2 (1998)), the models based on the silicate core–organic refractory mantle (composite) grains are referred to as G (M) models. The GM model is a combination of the G and M models and contains both core-mantle and composite grains. As shown in the corresponding figure, all the above models fit the extinction curve quite well. The size distributions of both core-mantle and composite grains are quite wide and cover both small and large grains, with a preference for sizes of 10–100 nm. Silicon nanoparticles have a diameter of 3.0 nm by definition. All the models consume the maximum amounts of carbon, oxygen and silicon allowed for dust, and slightly less of magnesium and iron. All the carbon consumed is contained in organic refractory. Approximately equal amounts of silicon are locked up in the silicon nanoparticles (4–5 p.p.m.)
and in the other components (5–6 p.p.m.). Silicate core–organic refractory mantle grains prevail by mass in all cases. Note that the dust composition in our models is drastically different from that in previous SMC models, which were based on modifications of the standard MRN model (Pei pei (1992); Rodrigues et al. rodr (1997)). The grain size distributions in our models are not a simple power law, as assumed in the previous models, but are instead optimized to reproduce the extinction curve and to simultaneously obey the abundance constraints. Since we do not presently know the wavelength-dependent intensity of the ISRF in the SMC and, in addition, the existing extinction curves for the SMC have a UV boundary at around 0.13 $`\mu `$m, we are unable to estimate the fraction of the UV photons absorbed by the silicon nanoparticles. In the Galaxy, Gordon et al. (gwf (1998)) found this fraction to be 0.10$`\pm `$0.03. However, as was found by Zubko et al. (zsw (1999)), the mass fraction of silicon nanoparticles may serve as a good indicator in this case. The values of this quantity derived for the models presented above, 0.07, 0.071, and 0.085, suggest that the SMC is similar to the Galaxy in this respect, and should therefore also be a source of significant ERE. Moreover, the proximity of the silicon nanoparticle mass fractions obtained by modeling rather different extinction laws (Galactic and SMC), with different chemical constraints and different dust models, suggests that silicon nanoparticles and ERE may be a universal phenomenon in galaxies. It is evident from the figures and Table 1 that each of the models presented is almost equally good (as indicated by $`\overline{\mu }`$) in fulfilling the requirements formulated above, with the G model being slightly preferable. In order to discriminate between the models, we calculated the model scattering properties, the albedo and the asymmetry parameter, which are displayed in a separate figure.
We compare our results with the observational data for the Galaxy taken from Gordon et al. (gcw (1997)) to illustrate the significantly different predictions for the SMC. The model albedos are quite close to one another at all wavelengths, whereas the model asymmetry parameters show large differences, especially in the UV. In general, the model albedos are lower than the respective Galactic ones, except in the near IR, where we may see the opposite effect. Only the asymmetry parameter of the M model is close to the respective Galactic values. Note that the albedos of Pei’s (pei (1992)) model of the SMC dust significantly exceed the albedos calculated in our models, and also the expected Galactic values (see the same figure). Another possible means of choosing the most appropriate model may be the polarization data. We hope to include these data in a self-consistent analysis in forthcoming papers. In summary, we report here for the first time more refined models of the SMC bar dust which are in good agreement with the observed extinction, the elemental abundances, and the strength of the ISRF. The models were calculated using the regularization approach. The major grain constituents were found to be silicates, organic refractory and nanosized silicon. This conclusion is subject to some uncertainty due to the poorly known element depletion patterns in the SMC. We predicted the scattering properties of our models, which are significantly different from the Galactic values. More observational constraints, e.g. the polarization data, are to be included in the analysis to choose the most appropriate dust model. I thank Karl Gordon and Geoff Clayton for providing me with the extinction curve for AzV 398 as an electronic table. During my work, I benefited from many stimulating discussions with Ari Laor. This research was supported by a grant from the Israel Science Foundation.
# Improving the false nearest neighbors method with graphical analysis ## I Introduction One of the main tasks of time series analysis is to determine from a given time series the basic properties of the underlying process, such as nonlinearity, complexity, chaos etc. Among the most widely used approaches is state space reconstruction by time delay embedding. After this step has been taken one can calculate correlation dimensions, various entropy quantities and estimates for Lyapunov exponents. The crucial problem is how to select a minimal embedding dimension for the pseudo phase-space. If the embedding dimension is too small, one cannot unfold the geometry of the (possibly strange) attractor, and if one uses a too high embedding dimension, most numerical methods characterizing the basic dynamical properties can produce unreliable or spurious results. The false-nearest-neighbors (FNN) algorithm is one of the tools that can be used to determine the number of time-delay coordinates needed to reconstruct the dynamics. In this method one forms a collection $$𝐲(k)=[x(k),x(k+1),\mathrm{\ldots },x(k+d-1)]$$ (1) of $`d`$-dimensional vectors for a given time delay (here normalized to 1), where $`x(1),x(2),\mathrm{\ldots },x(N)`$ is a scalar time series. If the number $`d`$ of time-delay coordinates in (1) is too small, then two time-delay vectors $`𝐲(k)`$ and $`𝐲(l)`$ may be close to each other due to projection rather than to the inherent dynamics of the system. When this is the case, points close to each other may have very different time evolutions, and may actually belong to different parts of the underlying attractor. In order to determine a sufficient number $`d`$ of time-delay coordinates one next looks at the nearest neighbor of each vector (1) with respect to the Euclidean metric. We denote the nearest neighbor of $`𝐲(k)`$ by $`𝐲(n(k))`$. We then compare the “$`(d+1)`$”st coordinates of $`𝐲(k)`$ and $`𝐲(n(k))`$, i.e., $`x(k+d)`$ and $`x(n(k)+d)`$.
If the distance $`|x(k+d)-x(n(k)+d)|`$ is large, the points $`𝐲(k)`$ and $`𝐲(n(k))`$ are close just by projection. They are false nearest neighbors and they will be pulled apart by increasing the dimension $`d`$. If the distances $`|x(k+d)-x(n(k)+d)|`$ are predominantly small, then only a small portion of the neighbors are false and $`d`$ can be considered a sufficient embedding dimension. In the FNN algorithm the neighbor is declared false if $$\frac{|x(k+d)-x(n(k)+d)|}{\|𝐲(k)-𝐲(n(k))\|}>R_{tol},$$ (2) or if $$\frac{\|𝐲(k)-𝐲(n(k))\|^2+[x(k+d)-x(n(k)+d)]^2}{R_A^2}>A_{tol}^2,$$ (3) where $$R_A^2=\frac{1}{N}\underset{k=1}{\overset{N}{\sum }}[x(k)-\overline{x}]^2,$$ (4) and $`\overline{x}`$ is the mean of all points. The parameter $`R_{tol}`$ in the first threshold test (2) is fixed beforehand, and in most studies it has been set to $`10`$–$`20`$. The second criterion (3) was proposed in order to provide correct diagnostics for noise, and usually one takes $`A_{tol}\approx 2`$. If this test fails, then even the nearest neighbors themselves are far apart in the extended $`(d+1)`$-dimensional space and should be considered false neighbors. Using tests (2) and (3) one can check all $`d`$-dimensional vectors in the data set, and compute the percentage of false nearest neighbors. By increasing the dimension $`d`$ this percentage should drop to zero or to some acceptably small number. In that case the embedding dimension is large enough to represent the dynamics. This method works quite well with noise-free data, and the percentage of false neighbors does not depend on the number of data points, provided this number is sufficient. However, if the data is corrupted with noise, the percentage of false nearest neighbors for a given embedding dimension increases as the amount of data is increased, and therefore a longer time series leads to erroneous false nearest neighbors as a result of noise corruption rather than of an incorrect embedding dimension.
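The tests (2)–(4) translate directly into code. The sketch below is an illustrative brute-force implementation (the thresholds R_tol = 15 and A_tol = 2 are choices within the ranges quoted above), applied to the standard Hénon map used later in the text; for noise-free data the FNN fraction should drop essentially to zero once d = 2.

```python
import numpy as np

def false_nearest_neighbors(x, d, R_tol=15.0, A_tol=2.0):
    """Fraction of false nearest neighbors at embedding dimension d,
    using criteria (2) and (3) with the attractor size (4)."""
    N = len(x)
    M = N - d                                    # vectors with a "(d+1)st" coordinate
    Y = np.array([x[k:k + d] for k in range(M)]) # delay vectors y(k), Eq. (1)
    R_A2 = np.var(x)                             # Eq. (4)
    n_false = 0
    for k in range(M):
        # nearest neighbor of y(k) in the Euclidean metric (excluding itself)
        dists = np.linalg.norm(Y - Y[k], axis=1)
        dists[k] = np.inf
        nk = int(np.argmin(dists))
        Rd = dists[nk]
        Rdelta = abs(x[k + d] - x[nk + d])       # distance of the next coordinates
        test2 = Rd > 0 and Rdelta / Rd > R_tol   # criterion (2)
        test3 = (Rd**2 + Rdelta**2) / R_A2 > A_tol**2   # criterion (3)
        if test2 or test3:
            n_false += 1
    return n_false / M

# Noise-free data from the standard Henon map (cf. Eq. (6)), transient discarded.
X, Y = 0.1, 0.1
vals = []
for _ in range(1200):
    X, Y = 1.0 - 1.4 * X**2 + Y, 0.3 * X
    vals.append(X)
series = np.array(vals[200:])

for d in (1, 2, 3):
    print(d, false_nearest_neighbors(series, d))
```

The O(N²) neighbor search is kept deliberately naive; for long series one would use a k-d tree instead.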
One possible solution to this problem is to modify the threshold test (2) to account for the additional noise effects. For example, instead of test (2) the threshold could be determined by $$\frac{|x(k+d)-x(n(k)+d)|}{\|𝐲(k)-𝐲(n(k))\|}>R_{tol}+\frac{2ϵR_{tol}\sqrt{d}+2ϵ}{\|𝐲(k)-𝐲(n(k))\|}.$$ (5) Here the new parameter $`ϵ`$ must be chosen properly. The optimal value for $`ϵ`$ should obviously be determined by the noise level, but unfortunately we usually have very limited information on the amplitude of the noise in a given time series. ## II Graphical representation of nearest neighbor distributions Without a clear understanding of the distribution of neighboring points in the time-delay coordinates, the original test (2) or the modified test (5) cannot guarantee that we have reached a sufficient embedding dimension, even if the percentage of false nearest neighbors is low. We have therefore constructed a simple graphical presentation which simultaneously displays all the essential features. The basic idea is to show the distance $`R_\mathrm{\Delta }=|x(k+d)-x(n(k)+d)|`$ as a function of the original distance $`R_d=\|𝐲(k)-𝐲(n(k))\|`$ for all $`d`$-dimensional vectors in the data set. The $`x`$-variable $`R_d`$ should be scaled with the normalization coefficient $`\sqrt{d}`$ in order to remove unessential changes in the graphs due to changes in the embedding dimension (see Appendix). As the first example we have chosen the Henon system $$X_{n+1}=1-1.4X_n^2+Y_n,Y_{n+1}=0.3X_n$$ (6) The parameters of this system were selected from the chaotic region (the dimension of the attractor is $`1.26`$), and the total number of data points is $`1000`$. In Figure 1 we have plotted $`(\stackrel{~}{R}_d,R_\mathrm{\Delta })`$ pairs ($`\stackrel{~}{R}_d=R_d/\sqrt{d}`$) for all vectors $`𝐲`$. The displayed box size is $`0.024\times 0.024`$ units.
Two distributions are also presented in each graph: the $`\stackrel{~}{R}_d`$ distribution on the bottom part of the graphs, and the radial distribution plotted on the quarter arc. The embedding dimension $`d`$ is scanned from $`1`$ to $`4`$, and each set of four graphs is presented in four different cases where the amplitude of the additional uniformly distributed (measurement) noise is 0%, 0.1%, 1% and 10% of the total amplitude. According to (2) a neighbor is false if it lies above the straight line going through the origin with slope $`R_{tol}`$. If we use the test (5) the line has the same slope but there is an intercept equal to the noise correction term (scaled with $`\sqrt{d}`$). Normally we must know the slope a priori, but with these graphs this is not necessary. If there is no noise we clearly see that with embedding dimension $`>1`$ all points lie in the sector determined by the $`x`$-axis and a line with slope angle well below 90 degrees. This important feature can be understood if we assume that the dynamics is given by $$x(k+d)=f(x(k),x(k+1),\mathrm{\ldots },x(k+d-1)).$$ (7) Then we can write $$\left|x(k+d)-x(l+d)\right|\le \|\mathrm{\nabla }f(\xi )\|\|𝐲(k)-𝐲(l)\|$$ (8) for some $`\xi `$, which implies that $$\frac{R_\mathrm{\Delta }}{R_d}\le \|\mathrm{\nabla }f(\xi )\|.$$ (9) Therefore all points in the $`(\stackrel{~}{R}_d,R_\mathrm{\Delta })`$ plots must lie under a line whose slope depends on the specific system. The bound (9) holds only when the embedding dimension is sufficient; for noise no such bound exists. If the time series includes some additional noise we see its effect as a blurred border line. If the embedding dimension is too low the points accumulate close to the $`y`$-axis. The radial distribution plot confirms this result. If $`d=1`$ the distribution has significant values only at angles close to $`90`$ degrees, but if $`d>1`$ the distribution is almost zero within a distinct range at high angles. The $`\stackrel{~}{R}_d`$ distribution is high only in the vicinity of zero.
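The (R̃_d, R_Δ) pairs underlying the figures are equally simple to compute. The sketch below (again brute force, on noise-free Hénon data) illustrates the sector bound (9): for d = 2 the ratio R_Δ/R̃_d stays below a system-dependent slope, while for d = 1 it does not.

```python
import numpy as np

def rd_rdelta(x, d):
    """(R~_d, R_Delta) pairs of the graphical method at embedding dimension d."""
    M = len(x) - d
    Yv = np.array([x[k:k + d] for k in range(M)])   # delay vectors
    Rd, Rdelta = np.empty(M), np.empty(M)
    for k in range(M):
        dists = np.linalg.norm(Yv - Yv[k], axis=1)
        dists[k] = np.inf
        nk = int(np.argmin(dists))
        Rd[k] = dists[nk] / np.sqrt(d)              # scaled abscissa R~_d
        Rdelta[k] = abs(x[k + d] - x[nk + d])       # ordinate R_Delta
    return Rd, Rdelta

# Noise-free Henon data as in Figure 1, transient discarded.
X, Y = 0.1, 0.1
vals = []
for _ in range(1200):
    X, Y = 1.0 - 1.4 * X**2 + Y, 0.3 * X
    vals.append(X)
xs = np.array(vals[200:])

for d in (1, 2):
    Rd, Rdelta = rd_rdelta(xs, d)
    # d=1: huge ratios (false neighbors); d=2: bounded, sector-shaped cloud
    print(d, np.max(Rdelta / Rd))
```

Feeding the two returned arrays into a scatter plot reproduces the qualitative content of the figures described in the text.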
A small amount of noise (0.1%, the second row from the bottom in Figure 1) does not change the picture much. If the level of additional noise is increased to 1% the points no longer show as well-formed a pattern. Also the radial distribution is quite broad, but it nevertheless has a clear zero range at high angles if the embedding dimension is $`3`$, which can be regarded as an indication of underlying chaotic (or at least deterministic) dynamics. The maximum of the $`\stackrel{~}{R}_d`$ distribution has clearly shifted towards large values, which is typical of pure noise. In the case of noisier data (10%, the top row of Figure 1) the distribution of points is totally different. Increasing the embedding dimension does not really change the overall shape of the point distribution. The radial distribution is fairly even, and the $`\stackrel{~}{R}_d`$ distribution is well centered and its maximum shifts toward higher values when the embedding dimension is increased. (With this kind of distribution the modified test (5) does not really take noise effects into account.) In Figure 2 we have presented the corresponding graphs for the Lorenz system $`\dot{X}`$ $`=`$ $`16(Y-X)`$ (10) $`\dot{Y}`$ $`=`$ $`X(45.92-Z)-Y`$ (11) $`\dot{Z}`$ $`=`$ $`XY-4Z`$ (12) using $`10000`$ data points and a sampling delay of $`0.05`$. For these parameter values the dimension of the attractor is $`2.07`$. Here we observe behavior of the various distributions similar to that in the case of the Henon system. Since the true dimension of the attractor is greater than $`2`$, a clearly bounded sector pattern of points can only be seen in the graphs with embedding dimension $`3`$.
For $`d=2`$ most of the points lie under a line with slope under $`90`$ degrees, which is also reflected in the noticeable maximum of the radial distribution, and since there is only a small portion of points between this maximum and the $`y`$-axis we can estimate that the true dimension of the attractor is not much greater than $`2`$. The effect of even a small amount of noise can be clearly seen in Figure 2. Already with 1% of noise the sector pattern has changed to a vertical one. This is shown clearly by the regression lines (corresponding to the first principal component of the points $`(\stackrel{~}{R}_d,R_\mathrm{\Delta })`$) plotted in Figure 2. In the two bottom rows the regression lines have a slope well below $`90`$ degrees, and this can be taken as evidence of deterministic dynamics. For the two top rows the regression line is almost vertical (see also Figure 3), indicating noise contamination. Furthermore, we see that the $`\stackrel{~}{R}_d`$ distribution has an approximately Gaussian shape, which spreads out and moves further and further away from the origin as the noise level or embedding dimension increases. The radial distribution, on the other hand, moves closer to the $`90`$-degrees line as noise contamination increases, which means that the height/width ratio of the point distribution increases, and therefore that it is more and more difficult to predict the next point. In the standard procedure noise effects are taken into account by the condition (3), which means that points outside a circle of radius $`A_{tol}R_A`$ are counted as false (actually an ellipse, due to the scaling of $`\stackrel{~}{R}_d`$). For Figures 2 and 4 this radius is 500 times the box size (and for Figures 1 and 5 the factor is about 20). Although the boundary is quite far away, one can imagine that higher levels of noise and higher embedding dimensions both increase the number of false neighbors, as has been reported.
If the total number of data points of the preceding system is decreased to $`1000`$ the graphs are not so simple to interpret (Figure 4). There is no significant difference between the graphs with embedding dimensions 2 and 3. As usual, reliable estimation of the underlying dynamical dimension requires a sufficient number of data points. However, by using this graphical representation we can nevertheless make a rough estimate of the dimension even when only relatively few data points are available. As a final example we have analyzed the Mackey-Glass system $$\dot{X}=\frac{0.2X(t-31.8)}{1+[X(t-31.8)]^{10}}-0.1X(t)$$ (13) using a sampling delay of 2. As the dimension of the attractor with these parameter values is about $`3.6`$, the embedding dimension must be at least 4. This can be seen in Figure 5: only in the rightmost graph is there a clear sector type of pattern, and the radial distribution is zero over a nonzero range of angles near $`90`$ degrees. ## III Conclusions We have presented a graphical method to analyze time series in order to estimate the sufficient embedding dimension and the portion of additional noise. This tool consists of a $`(\stackrel{~}{R}_d,R_\mathrm{\Delta })`$ plot augmented with two distributions. Furthermore, the slope of the regression line of the points in the $`(\stackrel{~}{R}_d,R_\mathrm{\Delta })`$ graphs can be used to recognize noise in deterministic systems. The advantage of the present method is that even a small amount of noise contamination can be distinguished from deterministic chaos. This also means that we now see how the problem of determining the correct embedding dimension becomes more difficult with even a small amount of noise, and that for a deterministic system where the proportion of noise is substantial one should use the conditions (2) or (5) with great caution. If the FNN algorithm is used to estimate the embedding dimension, our presentation should be used in parallel in order to get relevant and reliable results.
To summarize our method we present a list of guidelines on how to distinguish a deterministic time series from sources with noise. The time series is produced by a deterministic system if: 1. the points in the $`(\stackrel{~}{R}_d,R_\mathrm{\Delta })`$ plot form a clear sector pattern with a zero radial distribution over a distinct range below $`90`$ degrees, 2. the $`\stackrel{~}{R}_d`$ distribution is centered close to zero, 3. the slope of the regression line is well below 90 degrees. The noise level in the time series is substantial if: 1. the radial distribution is spread out over the whole range from 0 to 90 degrees, 2. the $`\stackrel{~}{R}_d`$ distribution has a clear maximum far away from zero, 3. the slope of the regression line is close to $`90`$ degrees. ## Appendix Let $`f`$ be a function which has been sampled very densely. Then we can assume that the nearest neighbor of the $`d`$-dimensional vector starting at $`t_0`$ is the vector that starts at the next (or previous) sample point, $$(f(t_0+\delta ),f(t_0+2\delta ),f(t_0+3\delta ),\mathrm{\ldots },f(t_0+d\delta )).$$ (14) The distance between these two points is therefore $`R_d`$ $`=\sqrt{{\displaystyle \underset{i=1}{\overset{d}{\sum }}}\left(f(t_0+i\delta )-f(t_0+(i-1)\delta )\right)^2}`$ (16) $`\approx \sqrt{{\displaystyle \underset{i=1}{\overset{d}{\sum }}}\delta ^2f^{\prime }(t_0+i\delta )^2}\approx \delta \sqrt{d}\,|f^{\prime }(t_0)|,`$ where we have assumed that the function $`f`$ changes relatively slowly (or that it is linear). The distance between the targets is $$R_\mathrm{\Delta }=\left|f(t_0+(d+1)\delta )-f(t_0+d\delta )\right|\approx \delta \left|f^{\prime }(t_0)\right|,$$ (17) and by combining the results (16) and (17) we conclude that the ratio $`R_\mathrm{\Delta }/R_d`$ is approximately $`1/\sqrt{d}`$; it is therefore reasonable in all cases to normalize $`R_d`$ by $`\sqrt{d}`$.
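The appendix estimate is easy to verify numerically; in the sketch below the test function, sampling step and starting index are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the appendix result R_Delta/R_d ~ 1/sqrt(d) for a
# densely sampled, slowly varying signal whose nearest neighbor is the
# vector starting at the next sample point.
delta = 1e-4
t = np.arange(0.0, 1.0, delta)
f = np.sin(2 * np.pi * 0.3 * t)       # slowly varying test function

def ratio(d, k=1000):
    """R_Delta / R_d for the vector starting at index k and its next-sample neighbor."""
    y1 = f[k:k + d]                   # vector starting at t0
    y2 = f[k + 1:k + 1 + d]           # neighbor starting one sample later
    R_d = np.linalg.norm(y2 - y1)
    R_delta = abs(f[k + 1 + d] - f[k + d])
    return R_delta / R_d

for d in (1, 4, 9, 16):
    print(d, ratio(d) * np.sqrt(d))   # all close to 1, confirming the 1/sqrt(d) scaling
```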
# On the Implications of Discrete Symmetries for the 𝛽-function of Quantum Hall Systems ## I Introduction It has been conjectured that the large group of discrete symmetries enjoyed by quantum Hall systems might permit the complete determination of the $`\beta `$-function of these materials. This $`\beta `$-function describes the renormalization-group (RG) flow of the conductivities $`\sigma _{xx}`$ and $`\sigma _{xy}`$ in the delocalization-scaling theory of these systems. The question addressed in ref. was whether a few physically well-motivated constraints on the $`\beta `$-function, exploiting available data both from the weakly and the strongly coupled domains of the system, suffice to completely fix the full non-perturbative form of the $`\beta `$-function in an appropriate scheme, without actually deriving it from a microscopic theory. More precisely, the main idea was that the consistency of the RG flow with the discrete symmetry group — which relates large and small $`\sigma _{xx}`$ — might be sufficient to completely determine the $`\beta `$-function if combined with the known weak-coupling results in the asymptotic domain of large $`\sigma _{xx}`$. Such a construction would specify, among other quantities, the universality class (and critical exponents) of the quantum critical “anyon delocalization” points which are believed to exist in this system. Since this goal was largely thwarted by the existence of more than one $`\beta `$-function which satisfied these conditions, the main focus of ref. is the establishment of further conditions which would uniquely determine a solution. There has been a resurgence of interest in this topic of late, with the recent appearance of two ansätze for the nonperturbative form of this exact $`\beta `$-function. In this letter we add our own multiple-parameter ansatz to this list. 
We revive this ansatz not because we believe it to be the final word on this subject, but rather to emphasize that the determination of the $`\beta `$-function requires more than simply symmetry and asymptotic information, as well as to clarify the relation of these ansätze with the conditions outlined in ref. . In ref. it was the necessity for extra information which drove the formulation of quasi-holomorphy, while in ref. it is holomorphy which is invoked for the same reasons. It would be interesting to understand what this information might be for the ansatz of ref. , which is sufficiently narrow to produce specific values for the delocalization critical exponents. It is otherwise difficult to know why this ansatz should be preferable over others which may also do so. We present our ideas in the following way. First, we briefly recap the symmetry and asymptotic conditions which we demand of the $`\beta `$-function. In so doing we expand on the consequences of holomorphy, in order to make contact with ref. . Next, we describe our ansatz, which contains that of ref. as a special case. We use this ansatz to explore in more detail the form taken by $`\beta `$ at large $`\sigma _{xx}`$ and near the critical points. ## II Symmetries and Asymptotics The precise form of the constraints identified in ref. are: 1. Asymptotics: The $`\beta `$-function which is predicted by the effective sigma-model of weak localization in a magnetic field (see ref. for a comprehensive review) may be explicitly computed for large $`\sigma _{xx}`$, since this corresponds to weak coupling in the sigma-model. 
Writing $`\sigma _{xy}+i\sigma _{xx}=(e^2/\mathrm{\hbar })z`$ where $`z=x+iy`$ and $`\overline{z}=x-iy`$, we have: $$\beta ^z=\frac{dz}{dt}=b_0+\frac{b_1}{y}+\frac{b_2}{y^2}+\mathrm{\cdots }+(q,\overline{q}\mathrm{expansion}).$$ (1) Here $`t`$ is a logarithmic scale parameter, $`q=\mathrm{exp}(2\pi iz)`$, $`\overline{q}=\mathrm{exp}(-2\pi i\overline{z})`$, and the “$`q,\overline{q}`$-expansion” refers to the leading non-perturbative (dilute instanton gas) corrections to the perturbative loop expansion in $`1/y`$. The coefficients $`b_i`$ of the perturbative sigma-model $`\beta `$-function are known through six-loop order ($`i\le 4`$) , with values: $$b_0=0,b_1=-\frac{i}{2\pi ^2},b_2=0,b_3=-\frac{3i}{8\pi ^4},b_4=0.$$ (2) The even powers of $`1/y`$ vanish in this renormalization scheme (to the order known). Note, however, that only the leading non-vanishing coefficient $`b_1`$ is scheme independent. By contrast, only the leading term in the instanton expansion is available , and is proportional to the anti-instanton result: $`y^k\overline{q}`$, for $`k`$ positive. There are three senses in which the $`\beta `$-function might be asked to agree with eqs. (1) and (2). 1. Weak Agreement: The weakest condition simply requires agreement with the scheme-independent perturbative part of these results — i.e. with only $`b_0`$ and $`b_1`$. 2. Strong Agreement: A stronger requirement is agreement with the entire perturbative expansion, as computed in the sigma-model. 3. Very Strong Agreement: The strongest requirement would be agreement with all known terms, including the dilute instanton gas expansion. None of the ansätze which have been proposed to date satisfy this condition, since they do not properly reproduce the power of $`y`$ in the leading instanton result. 
We shall not focus much attention on this condition here, however, since existing instanton calculations for disordered systems are performed using the replica trick, which is known to sometimes fail even for systems where the perturbative results they give are accurate . 2. Scaling: We require the $`\beta `$-function to share the (real) analytic structure to be expected of any RG flow. That is, since $`\beta ^z`$ describes the differential elimination of degrees of freedom at a particular scale, it does not contain singularities, apart from those places where the number of relevant degrees of freedom changes. We therefore demand that the $`\beta `$-function should be non-singular throughout the upper half of the complex $`z`$-plane ($`\mathrm{IH}:\mathrm{Im}z>0`$). The continuous (second order) quantum critical delocalization transitions in this system (see ref. for a review) are then identified with the zeros of $`\beta `$. From eqs. (1) and (2) it is clear that $`\beta `$ also goes to zero as $`z\to i\mathrm{\infty }`$. Because $`\sigma _{xy}=(e^2/\mathrm{\hbar })x`$ enters the sigma model as the coefficient of a topological term, $`\beta `$ cannot depend on $`x`$ to any order in perturbation theory, implying that the leading asymptotic behaviour of $`\beta `$ is a function of $`y`$ alone. Thus, unless all $`b_i`$ except $`b_0`$ vanish (as happens in systems with unbroken complex supersymmetry), agreement with the sigma-model large-$`y`$ form precludes the $`\beta `$-function being a complex analytic (holomorphic) function of $`z`$ only. The exploitation of scaling ideas and data is at the very heart of the “phenomenological” approach proposed in ref. , and further pursued in ref. . It is universality which allows us to entertain the idea that the $`\beta `$-function could be determined up to asymptotics by macroscopic scaling properties. 
Conversely, should the $`\beta `$-function be determined, it need not shed much light on the nature of the microscopic physics responsible for the scaling laws in this system. It can, however, show that all the scaling data, as well as the phase diagram and Hall quantization, are encoded in the low-energy effective theory as a global discrete symmetry of the (complexified) Kramers-Wannier type. It is this alleged symmetry which provides the final physically-motivated constraint on the $`\beta `$-function. It also is what endows our conjecture with most of its power, relating properties which are perturbative within the sigma-model context to those which are not. 3. Automorphy: As explained at length elsewhere , the observed “superuniversality” of the critical exponents in the hierarchy of delocalization transitions that take place in the quantum Hall system was the original motivation for conjecturing that the low-energy theory respects, in the fully spin-polarized case, a global discrete symmetry $`\mathrm{\Gamma }=\mathrm{\Gamma }_0(2)`$. ($`\mathrm{\Gamma }_0(2)`$, which is also denoted $`\mathrm{\Gamma }_\mathrm{T}(2)`$ in refs. , is a well-known sub-group of the modular group, $`SL(2,\text{ZZ})`$. ) The mathematical fact that this subgroup automatically ensures Hall quantization on odd-denominator fractions (when $`\sigma _{xx}\to 0`$) is also encouraging. Independent arguments arose at about the same time in the form of the “law of corresponding states”, from a mean-field treatment of the microscopic theory . The modular group (and its subgroups) act on the complex conductivity as special Möbius transformations: $`\gamma (z)=(az+b)/(cz+d)`$ where $`a,b,c,d`$ are integers satisfying $`ad-bc=1`$. The subgroup $`\mathrm{\Gamma }_0(2)`$ is defined by the additional condition that $`c`$ be even. If such a symmetry is present at low energy the $`\beta `$-function must respect it in a very specific sense. 
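Before defining automorphy precisely, the group action itself is easy to exercise numerically. The fragment below is our own check, not taken from the paper: it applies a few sample $`\mathrm{\Gamma }_0(2)`$ elements and verifies that the upper half-plane is preserved and that odd-denominator fractions (the quantized plateau values reached when $`\sigma _{xx}\to 0`$) map to odd-denominator fractions.

```python
from fractions import Fraction

def mobius(gamma, z):
    """Apply gamma = (a, b, c, d), an element of Gamma_0(2), to z."""
    a, b, c, d = gamma
    assert a * d - b * c == 1 and c % 2 == 0   # det 1, lower-left entry even
    return (a * z + b) / (c * z + d)

# a few sample elements of Gamma_0(2)
gammas = [(1, 1, 0, 1), (1, 0, 2, 1), (3, 1, 2, 1), (1, -1, 2, -1)]

z = complex(0.3, 0.8)                      # a point with sigma_xx > 0
plateaus = [Fraction(0, 1), Fraction(1, 1), Fraction(1, 3), Fraction(2, 5)]

for g in gammas:
    assert mobius(g, z).imag > 0           # upper half-plane preserved
    for p in plateaus:
        # c*p is even and d*q is odd whenever q is odd, so the
        # image denominator stays odd
        assert mobius(g, p).denominator % 2 == 1
```

The parity argument in the last comment is the elementary reason the subgroup singles out odd-denominator fractions: since $`ad-bc=1`$ with $`c`$ even, $`d`$ must be odd.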
A function $`f(z,\overline{z})`$ is called automorphic of weight $`(u,v)`$ under $`\mathrm{\Gamma }`$ iff it transforms like a generalized tensor: $`f(\gamma (z),\gamma (\overline{z}))`$ $`=`$ $`\left({\displaystyle \frac{d\gamma }{dz}}\right)^{-u/2}\left({\displaystyle \frac{d\overline{\gamma }}{d\overline{z}}}\right)^{-v/2}f(z,\overline{z})`$ (3) $`=`$ $`(cz+d)^u(c\overline{z}+d)^vf(z,\overline{z})`$ (4) for every $`\gamma `$ in $`\mathrm{\Gamma }`$. It was shown in ref. that if the RG commutes with $`\mathrm{\Gamma }`$, then the physical (contravariant) $`\beta `$-function $`\beta ^z`$ is a negative weight $`(-2,0)`$ function, while the complex conjugate function, $`\beta ^{\overline{z}}`$, has weight $`(0,-2)`$. Similarly, given a metric $`G_{ij}`$, the covariant $`\beta `$-function, $`\beta _i=G_{ij}\beta ^j`$ must have positive weights: $`\beta _z(2,0)`$ and $`\beta _{\overline{z}}(0,2)`$. Because the constraints 1 through 3 are extracted from experimental data and/or general knowledge about scaling and perturbation theory, they would seem to be a reasonable starting point for the search for the exact quantum-Hall $`\beta `$-function. Since we display many solutions to these conditions below they cannot be sufficient in themselves to uniquely determine the result. Before presenting these solutions we pause to discuss the holomorphy and quasiholomorphy assumptions. ## III Holomorphy A natural guess for $`\beta ^z`$ is that it is a holomorphic (or anti-holomorphic) function: $`\beta ^z=\beta ^z(z)`$ (or $`\beta ^z=\beta ^z(\overline{z})`$), a proposal recently revived in ref. . This is a very predictive ansatz because it permits the use of powerful results from complex analysis. As discussed above, this ansatz is inconsistent with even the weak form of the sigma-model behaviour at large $`y`$, and so it necessarily implies the breakdown of this sigma-model description of weak localization in magnetic fields. 
The purpose of the present section is to establish that a holomorphic (or anti-holomorphic) $`\beta `$-function must also have a singularity somewhere in $`\overline{\mathrm{IH}}=\mathrm{IH}\cup \mathrm{IQ}\cup \{\mathrm{\infty }\}`$, where $`\mathrm{IQ}`$ are the rational numbers. One could conceivably tolerate a pole on the real axis, but it is then difficult to obtain an acceptable flow. For these two reasons the holomorphic option was rejected in ref. . A particularly useful fact for any meromorphic function $`f`$ of weight $`(k,0)`$ with respect to $`\mathrm{\Gamma }_0(2)`$, relates the ‘index’ of its zeros and poles within a fundamental domain $`\overline{\mathrm{IF}}`$ of $`\overline{\mathrm{IH}}`$ : $$n_{\mathrm{\infty }}+n_0+\frac{n_{\ast }}{2}+\underset{p}{\sum }n_p=\frac{k}{4}.$$ (5) Here $`n_p`$ is the leading power of $`z-z_p`$ which appears in a Laurent expansion of $`f`$ about the pole or zero at $`z_p`$ in the interior or boundary of $`\overline{\mathrm{IF}}`$. $`n_{\ast }`$ is the same quantity in the expansion of $`f`$ about the fixed point, $`z_{\ast }=(1+i)/2`$, of the group $`\mathrm{\Gamma }_0(2)`$. Similarly, $`n_{\mathrm{\infty }}`$ is the leading power in a Laurent expansion of $`f`$ in powers of $`q`$ about $`z=i\mathrm{\infty }`$, while $`n_0`$ counts the leading power of $`\stackrel{~}{q}=\mathrm{exp}(-i\pi /z)`$ in an expansion of $`z^{-k}f(z)`$ in powers of $`\stackrel{~}{q}`$ about $`z=0`$ . Eq. (5) implies, in particular, that no weight $`(-2,0)`$ function like $`\beta ^z`$ can be holomorphic without acquiring singularities somewhere in $`\overline{\mathrm{IH}}`$. For example, as was observed in ref. , the $`\beta `$-function for the Seiberg-Witten $`N=2`$ supersymmetric $`SU(2)`$ gauge theory is the unique weight $`(-2,0)`$ function (up to normalization) which has a simple zero only at $`z_{\ast }`$ (and its images under $`\mathrm{\Gamma }_0(2)`$) and which approaches a constant as $`z\to i\mathrm{\infty }`$. 
The complex quantity $`z`$ in this case is related to the gauge coupling ($`g`$) and vacuum angle ($`\theta `$) by: $`z=(\theta /2\pi )+(4\pi i/g^2)`$. Eq. (5) forces $`\beta (z)`$ to have a simple pole at $`z=0`$ (as well as at the other integers on the real axis), corresponding to the infinite-coupling limit in the sigma-model. Similar considerations apply if the quantum Hall $`\beta `$-function were to be holomorphic, and in particular it would also be singular somewhere. A simple proposal is to choose $`\beta ^z`$ or $`\beta ^{\overline{z}}`$ to be proportional to the Seiberg-Witten $`N=2`$ supersymmetric $`\beta `$-function. Unfortunately, the flow in this case is repelled by the odd-denominator fractions on the real axis, instead flowing towards $`z_{\ast }`$, which is an attractive fixed point, with no irrelevant directions. Clearly this flow cannot describe the second-order transitions of the quantum Hall systems. One might imagine making more complicated choices, such as to force the $`\beta `$-function to have simple zeroes (i.e. $`n_{\mathrm{\infty }}=n_{\ast }=1`$) both at $`i\mathrm{\infty }`$ and $`z_{\ast }`$, with no poles or singularities elsewhere for nonzero $`\sigma _{xx}`$ — thus making it at least qualitatively similar to the perturbative sigma-model result. Such a $`\beta `$-function must then have a double pole at $`z=0`$. We now argue that any holomorphic $`\beta `$-function having a simple zero at $`z=z_{\ast }`$ cannot have both a relevant and an irrelevant direction there, giving an unacceptable flow. To establish this result proceed as follows. The critical exponents are related to the derivative of the $`\beta `$-function at its zeroes, and a simple argument shows that holomorphy dictates that these must have the same sign. 
This is because the matrix of derivatives for holomorphic $`\beta ^z`$ necessarily has the following form: $$\left(\begin{array}{cc}\frac{d\beta ^z}{dz}& \frac{d\beta ^z}{d\overline{z}}\\ \frac{d\beta ^{\overline{z}}}{dz}& \frac{d\beta ^{\overline{z}}}{d\overline{z}}\end{array}\right)=\left(\begin{array}{cc}B& 0\\ 0& \overline{B}\end{array}\right),$$ (6) from which we see that the product of the eigenvalues of this matrix is $`B\overline{B}\ge 0`$, implying: (i) both eigenvalues have the same sign; or (ii) one (or both) is zero. Neither of these cases describes the observed flow near the quantum Hall critical points. As observed in refs. , qualitatively acceptable flows are obtained by moving the pole of the $`\beta `$-function to the fixed point $`z_{\ast }`$, in which case the $`\beta `$-function can approach a constant as $`z\to i\mathrm{\infty }`$ and $`z\to 0`$. The simplest such $`\beta `$-function — which has $`n_{\ast }=-1`$ and all others zero — turns out to be just the inverse of the holomorphic weight $`(2,0)`$ function $`\mathcal{H}(z)`$ defined below. As discussed in ref. , the pole makes it problematic to identify the universal critical exponents at $`z_{\ast }`$, and ref. instead explicitly exhibits the flow near this point in order to make comparisons with the data. ## IV Quasi-holomorphy The alternative followed in ref. was to start with the observation that eq. (5) is much more kind in its implications for the covariant function, $`\beta _z`$, than it is for $`\beta ^z`$. This is because $`\beta _z`$ transforms under $`\mathrm{\Gamma }_0(2)`$ as an automorphic function of positive weight ($`k=2`$). In fact, for $`\mathrm{\Gamma }_0(2)`$ — but not for $`SL(2,\text{ZZ})`$ — there is a unique (up to normalization) holomorphic $`(2,0)`$ function $`\mathcal{H}(z)`$, which is nowhere singular on $`\overline{\mathrm{IH}}`$. 
It can be expressed in terms of the famous modular discriminant function $`\mathrm{\Delta }=q\prod _{n=1}^{\mathrm{\infty }}(1-q^n)^{24}`$ which generates the holomorphic (but not automorphic) Eisenstein function: $$E_2(z)=\frac{1}{2\pi i}\partial _z\mathrm{log}\mathrm{\Delta }=1-24\underset{n=1}{\overset{\mathrm{\infty }}{\sum }}\frac{nq^n}{1-q^n},$$ (7) as follows: $$\mathcal{H}(z)=2E_2(2z)-E_2(z)=1+24\underset{n=1}{\overset{\mathrm{\infty }}{\sum }}\frac{nq^n}{1+q^n}.$$ (8) $`\mathcal{H}(z)`$ transforms as a weight $`(2,0)`$ function with respect to $`\mathrm{\Gamma }_0(2)`$ even though $`E_2(z)`$ does not. Unfortunately, since $`\mathcal{H}(z)`$ does not vanish as $`z\to i\mathrm{\infty }`$ it is inconsistent with the perturbative expression, eq. (1). The inconsistency between $`\mathcal{H}(z)`$ and the perturbative result in powers of $`1/y`$ motivates the search for more general weight $`(2,0)`$ quantities which are not holomorphic but have the more general form: $`1/y+g(z)`$, for holomorphic $`g(z)`$. Such functions were called quasi-holomorphic in ref. . The most general quasi-holomorphic $`(2,0)`$ form which is nowhere singular in $`\mathrm{IH}`$ is a linear combination of $`\mathcal{H}(z)`$ and: $`\mathcal{E}(z,\overline{z})`$ $`=`$ $`{\displaystyle \frac{1}{\pi y}}+{\displaystyle \frac{2}{3}}\left[E_2(2z)-E_2(z)\right]`$ (9) $`=`$ $`{\displaystyle \frac{1}{\pi y}}+16{\displaystyle \underset{n=1}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{nq^n}{1-q^{2n}}}.`$ (10) Both $`\mathcal{H}`$ and $`\mathcal{E}`$ vanish at $`z_{\ast }=(1+i)/2`$. It is the unlikely existence of this quasi-holomorphic “Hecke” function which makes the idea of quasi-holomorphy useful. Further motivations for restricting attention to quasi-holomorphic building blocks are discussed in ref. . 4. Quasi-holomorphy: Ref. therefore proposed that $`\beta _z`$ is quasi-holomorphic. 
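The q-expansions of these functions are easy to verify numerically. The following sketch is our own check (the truncation depth and sample points are arbitrary choices): it confirms that $`2E_2(2z)-E_2(z)`$ equals $`1+24\sum nq^n/(1+q^n)`$, that this combination transforms with weight two under the $`\mathrm{\Gamma }_0(2)`$ element $`z\to z/(2z+1)`$, and that the two forms of the quasi-holomorphic combination in eqs. (9)-(10) agree.

```python
import cmath
import math

def E2(z, N=600):
    """Truncated q-expansion of the Eisenstein function, eq. (7)."""
    q = cmath.exp(2j * math.pi * z)
    return 1 - 24 * sum(n * q ** n / (1 - q ** n) for n in range(1, N))

def hecke(z):
    """The holomorphic weight-(2,0) combination of eq. (8)."""
    return 2 * E2(2 * z) - E2(z)

z = complex(0.3, 0.8)
q = cmath.exp(2j * math.pi * z)

# series identity in eq. (8)
rhs = 1 + 24 * sum(n * q ** n / (1 + q ** n) for n in range(1, 600))
assert abs(hecke(z) - rhs) < 1e-8

# weight-two automorphy under gamma(z) = z/(2z+1), an element of Gamma_0(2)
gz = z / (2 * z + 1)
assert abs(hecke(gz) - (2 * z + 1) ** 2 * hecke(z)) < 1e-6

# eqs. (9) and (10): both forms of the quasi-holomorphic combination agree
lhs = 1 / (math.pi * z.imag) + (2 / 3) * (E2(2 * z) - E2(z))
rhs2 = (1 / (math.pi * z.imag)
        + 16 * sum(n * q ** n / (1 - q ** (2 * n)) for n in range(1, 600)))
assert abs(lhs - rhs2) < 1e-8
```

All three checks pass to well below the stated tolerances with a few hundred terms of the series.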
The unique quasi-holomorphic choice which is consistent with the weak form of agreement with sigma-model perturbation theory then is (if $`G_{ij}\to 1`$ as $`z\to i\mathrm{\infty }`$): $$\beta _z=\frac{i}{2\pi }\mathcal{E}.$$ (11) With this proposal, the scheme dependence of $`\beta ^z`$ enters through the definition of the metric, $`G_{ij}`$. ## V More Ansätze If the $`\beta `$-function is not holomorphic then the previously-mentioned conditions are insufficient to completely pin it down. To establish this point, we now exhibit many more ansätze which satisfy all but the very-strong asymptotic condition. Since the ratio of any two $`(0,-2)`$ functions is a $`(0,0)`$ function, any $`(0,-2)`$ function $`\beta ^{\overline{z}}`$ can be written as: $$\beta ^{\overline{z}}=\frac{i}{2\pi }WR,$$ (12) where, $`R(z,\overline{z})`$ is a weightless (weight $`(0,0)`$) function to be specified and, in the spirit of quasi-holomorphy, we choose to write the weight $`(0,-2)`$ factor as $`W=\mathcal{E}/𝒟`$ with: $$𝒟=|\mathcal{H}|^2+a\pi ^2|\mathcal{E}|^2+\pi b\mathcal{E}\overline{\mathcal{H}}+\pi c\mathcal{H}\overline{\mathcal{E}}+\frac{d}{y^2}.$$ (13) Here $`a,b,c`$ and $`d`$ are constants, and $`𝒟`$ transforms under $`\mathrm{\Gamma }_0(2)`$ as a weight $`(2,2)`$ function. In order to make the invariance of $`R`$ with respect to $`\mathrm{\Gamma }_0(2)`$ explicit, it is convenient to change variables from $`z`$ to $`f=\vartheta _3^4(z)\vartheta _4^4(z)/\vartheta _2^8(z)`$, and write $`R=R(f,\overline{f})`$. This may always be done since this $`f`$ plays the same role for $`\mathrm{\Gamma }_0(2)`$ as Klein’s famous $`j`$-function does for the full modular group. In particular, it is invariant under $`\mathrm{\Gamma }_0(2)`$ and uniquely labels every point in the fundamental domain $`\overline{\mathrm{IF}}`$ of the group (i.e. it is a one-to-one map of $`\overline{\mathrm{IF}}`$ onto the complex sphere). So far we have made no assumptions beyond automorphy. 
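The coordinate $`f`$ can likewise be exercised numerically. The sketch below is ours; it uses the standard Jacobi theta constants with nome $`e^{i\pi z}`$, so the normalization of $`f`$ may differ from the paper's by a convention-dependent factor, but $`\mathrm{\Gamma }_0(2)`$-invariance is convention independent.

```python
import cmath
import math

def theta2(z, N=12):
    return 2 * sum(cmath.exp(1j * math.pi * z * (n + 0.5) ** 2) for n in range(N))

def theta3(z, N=12):
    return 1 + 2 * sum(cmath.exp(1j * math.pi * z * n * n) for n in range(1, N))

def theta4(z, N=12):
    return 1 + 2 * sum((-1) ** n * cmath.exp(1j * math.pi * z * n * n)
                       for n in range(1, N))

def f_inv(z):
    """The Gamma_0(2)-invariant coordinate built from theta constants."""
    return theta3(z) ** 4 * theta4(z) ** 4 / theta2(z) ** 8

z = complex(0.1, 1.2)
# invariance under three sample Gamma_0(2) elements:
# z -> z + 1, z -> z/(2z + 1), and z -> (z - 1)/(2z - 1)
assert abs(f_inv(z + 1) - f_inv(z)) < 1e-6
assert abs(f_inv(z / (2 * z + 1)) - f_inv(z)) < 1e-6
assert abs(f_inv((z - 1) / (2 * z - 1)) - f_inv(z)) < 1e-6
```

A dozen terms of each theta series suffice at these points, since the terms decay like $`|e^{i\pi z}|^{n^2}`$.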
In choosing our ansatz for $`a,b,c,d`$ and $`R`$ our guidance is (strong) agreement with the large-$`y`$ limit, as well as requiring a zero of $`\beta ^{\overline{z}}`$ at $`z_{\ast }`$, and the absence of singularities and zeros elsewhere in $`\mathrm{IH}`$. Ansatz 1: The simplest case is to choose $`d\ne 0`$, in which case $`𝒟>0`$ throughout $`\mathrm{IH}`$ so long as $`b`$ and $`c`$ are sufficiently small. In this case all assumptions are satisfied with the choice $`R=1`$. Thus: $`\beta ^{\overline{z}}(z,\overline{z})`$ $`=`$ $`{\displaystyle \frac{i}{2\pi }}{\displaystyle \frac{\mathcal{E}(z,\overline{z})}{𝒟(z,\overline{z})}}`$ (14) $`=`$ $`{\displaystyle \frac{i}{2\pi ^2y}}\left(1+{\displaystyle \frac{b+c}{y}}+{\displaystyle \frac{a+d}{y^2}}\right)^{-1}+O(q,\overline{q}).`$ (15) Notice that $`b_2=0`$ in agreement with eq. (2) if we set $`b=-c`$ in the ansatz, in which case $`b_{2n}=0`$ for all $`n`$. Similarly $`b_3`$ is properly reproduced if $`a+d=-3/(4\pi ^2)\approx -0.076`$. This ansatz then predicts all other terms in the perturbative series: $$\beta _{\mathrm{pert}}^{\overline{z}}=\frac{i}{2\pi ^2y}\left(1-\frac{3}{4\pi ^2y^2}\right)^{-1}$$ (16) which gives the following coefficients $`b_i`$: $$b_{2n}=0,b_{2n+1}=\frac{3^ni}{2^{2n+1}\pi ^{2n+2}}.$$ (17) In principle, the values of $`a,b,c`$ and $`d`$ can be separately extracted by comparison with the leading non-perturbative terms proportional to $`q`$ and $`\overline{q}`$. Writing $`D=1-3/(4\pi ^2y^2)`$ this gives: $`\beta _{q,\overline{q}}^{\overline{z}}`$ $`=`$ $`{\displaystyle \frac{8i}{\pi D}}\left[1-{\displaystyle \frac{1}{2\pi yD}}\left(3+2\pi c+{\displaystyle \frac{3b+2\pi a}{y}}\right)\right]q`$ (19) $`-{\displaystyle \frac{4i}{\pi ^2yD^2}}\left(3+2\pi b+{\displaystyle \frac{3c+2\pi a}{y}}\right)\overline{q}.`$ Notice, however, that the leading powers are $`y^0q`$ and $`y^{-1}\overline{q}`$, which does not agree with ref. , (who finds a positive power of $`y`$ premultiplying $`q`$). The ansatz of eq. 
(15) has a simple zero at the fixed point, $`z_{\ast }=(1+i)/2`$. The critical exponents at this point are found by diagonalizing the matrix of derivatives of the $`\beta `$-function at this point. Using $`\mathcal{H}(z)\approx 6.10i(z-z_{\ast })+O((z-z_{\ast })^2)`$ and $`\mathcal{E}(x,y)\approx 3.69i(x-x_{\ast })+2.41(y-y_{\ast })+\mathrm{\cdots }`$, we find the localization length exponent $`\nu \approx d/0.147`$ and irrelevant exponent (see ref. for definitions and a review of experimental results) $`y\approx 0.096/d`$ . Choosing the parameter $`d\approx 0.34`$ puts the prediction for $`\nu `$ in agreement with experimental results $`\nu _{\mathrm{exp}}=2.3\pm 0.1`$ ($`2.4\pm 0.2`$ ). This choice for $`d`$ permits an absolute prediction from this ansatz for the irrelevant exponent: $`y\approx 0.29`$, which does not seem to reproduce the results of numerical simulations, which give $`\nu _{\mathrm{num}}=2.35\pm 0.03`$ , and $`y_{\mathrm{num}}=0.38\pm 0.02`$ ($`0.42\pm 0.04`$ ). Ansatz 2: More complicated ansätze are also possible. For example, if $`d=0`$, then $`R`$ must be chosen to vanish at $`z_{\ast }`$ in order to cancel the pole in $`\mathcal{E}/𝒟|_{d=0}`$. This is easily arranged since $`f-1/4`$ has its only (double) zero at this point. This type of ansatz contains the one proposed in ref. as the special case $`b=-c=A`$ and $`a=-A^2`$, with $`R`$ given by the rational function $`R=\left(Q-1/2\right)/Q`$, and $`Q=f+\overline{f}+2(f-\overline{f})/(\pi A)`$. The value of the parameter $`A\approx 0.623`$ is chosen to ensure the cancellation of the pole at $`z_{\ast }`$. Although for holomorphic functions such a rational form for $`R`$ follows on general grounds , we are not aware of any similar result for the nonholomorphic functions considered here. This particular ansatz also does not agree, in the strong sense, with the sigma-model result at large $`y`$. This is because $`R=1+O(q,\overline{q})`$ as $`z\to i\mathrm{\infty }`$, and so its prediction for the perturbative $`\beta `$-function is the same as the perturbative part of eq. (15), with $`b+c=0`$ and $`a+d=a=-A^2\approx -0.388`$. 
This clearly disagrees with the perturbative theory already at order $`O(1/y^3)`$. If only agreement in the weak sense is desired, then there is no reason to set $`b+c=0`$, exhibiting this ansatz as one of a several-parameter family. Putting aside agreement with sigma-model perturbation theory, the predictions for the critical exponents obtained from this ansatz in ref. become $`\nu =2.12`$ and $`y=0.31`$, which is consistent (within the roughly 10% errors) with experimental scaling data, but has difficulty with the numerical simulations of the quantum Hall system . As was already pointed out in ref., this ansatz varies as $`y^0q`$ and so, like all of the previous ansätze, disagrees with the leading nonperturbative terms predicted by the sigma-model. One might wonder if the requirement of agreement with the sigma model in the very strong sense could itself be the remaining condition which uniquely determines the form of $`\beta ^z`$. A leading instanton correction of the form $`y^kq`$ for $`k=1`$ or $`k=2`$, as required by the very strong asymptotic condition if the sigma-model instanton calculation is taken seriously, can be achieved within the framework of the ansätze considered here by adding terms like $`F=y^2\mathcal{H}/f\sim y^2q+\mathrm{\cdots }`$ or $`G=y^2\mathcal{E}/f\sim yq+\mathrm{\cdots }`$ to the product $`WR`$. Both $`F`$ and $`G`$ are weight $`(0,-2)`$ functions with simple zeros at $`z_{\ast }`$ (and at $`z=0`$), and so would change the values which are inferred for the critical exponents. It would be encouraging to hope that concentration on this condition, or a solution to the consistency conditions for the metric $`G_{ij}`$ discussed in ref., might lead to further progress in finding sufficient conditions for the determination of the exact scaling properties of the quantum Hall system. ## VI Acknowledgements This research has been supported in part by NSERC (Canada), FCAR (Québec) and the Norwegian Research Council. We thank Dan Arovas, Brian Dolan and Cyril Furtlehner for useful discussions.
no-problem/9812/quant-ph9812058.html
ar5iv
text
# Constructive Inversion of Energy Trajectories in Quantum Mechanics Richard L. Hall, Department of Mathematics and Statistics, Concordia University, 1455 de Maisonneuve Boulevard West, Montréal, Québec, Canada H3G 1M8. Abstract We suppose that the ground-state eigenvalue $`E=F(v)`$ of the Schrödinger Hamiltonian $`H=-\mathrm{\Delta }+vf(x)`$ in one dimension is known for all values of the coupling $`v>0.`$ The potential shape $`f(x)`$ is assumed to be symmetric, bounded below, and monotone increasing for $`x>0.`$ A fast algorithm is devised which allows the potential shape $`f(x)`$ to be reconstructed from the energy trajectory $`F(v).`$ Three examples are discussed in detail: a shifted power-potential, the exponential potential, and the sech-squared potential are each reconstructed from their known exact energy trajectories. PACS 03 65 Ge 1. Introduction This paper is concerned with what may be called ‘geometric spectral inversion’. We suppose that a discrete eigenvalue $`E=F(v)`$ of the Schrödinger Hamiltonian $$H=-\mathrm{\Delta }+vf(x)$$ $`(1.1)`$ is known for all sufficiently large values of the coupling parameter $`v>0`$ and we try to use this data to reconstruct the potential shape $`f.`$ The usual ‘forward’ problem would be: given the potential (shape) $`f(x),`$ find the energy trajectory $`F(v);`$ the problem we now consider is the inverse of this, $`F\to f.`$ This problem must at once be distinguished from the ‘inverse problem in the coupling constant’ discussed, for example, by Chadan and Sabatier . In this latter problem, the discrete part of the ‘input data’ is a set $`\{v_i\}`$ of values of the coupling constant that all yield the identical energy eigenvalue $`E.`$ The index $`i`$ might typically represent the number of nodes in the corresponding eigenfunction. 
In contrast, for the problem discussed in the present paper, $`i`$ is kept fixed and the input data is the graph $`(F(v),v),`$ where the coupling parameter has any value $`v>v_c,`$ and $`v_c`$ is the critical value of $`v`$ for the support of a discrete eigenvalue with $`i`$ nodes. We shall mainly discuss the bottom of the spectrum $`i=0`$ in this paper. However, on the basis of results we have obtained for the inversion IWKB of the WKB approximation , there is good reason to believe that constructive inversion may also be possible starting from any discrete eigenvalue trajectory $`F_i(v),`$ $`i>0.`$ In fact, perhaps not surprisingly, IWKB yields better results starting from higher trajectories; moreover, they become asymptotically exact as the eigenvalue index is increased without limit. By making suitable assumptions concerning the class of potential shapes, theoretical progress has already been made with this inversion problem \[3-5\]. The most important assumptions that we retain throughout the present paper are that $`f(x)`$ is symmetric, monotone increasing for $`x>0,`$ and bounded below: consequently the minimum value is $`f(0).`$ We assume that our spectral data, the energy trajectory $`F(v),`$ derives from a potential shape $`f(x)`$ with these features. We have discussed how two potential shapes $`f_1`$ and $`f_2`$ can cross over and still preserve spectral ordering $`F_1<F_2.`$ It is known that the lowest point $`f(0)`$ of $`f`$ is given by the limit $$f(0)=\underset{v\to \mathrm{\infty }}{lim}\frac{F(v)}{v}.$$ $`(1.2)`$ We have proved that a potential shape $`f`$ has a finite flat portion ($`f^{\prime }(x)=0`$) in its graph starting at $`x=0`$ if and only if the mean kinetic energy is bounded. 
That is to say, $`s=F(v)-vF^{\prime }(v)<K,`$ for some positive number $`K.`$ More specifically, the size $`b`$ of this patch can be estimated from $`F`$ by means of the inequality: $$s\le K\Rightarrow f(x)=f(0),|x|\le b,\mathrm{and}b=\frac{\pi }{2}K^{-\frac{1}{2}}.$$ $`(1.3)`$ The monotonicity of the potential, which allows us to prove results like this, also yields the Concentration Lemma $$q(v)=\int _{-a}^a\psi ^2(x,v)dx>\frac{f(a)-F^{\prime }(v)}{f(a)-f(0)}\to 1,v\to \mathrm{\infty },$$ $`(1.4)`$ where $`\psi (x,v)`$ is the normalized eigenfunction satisfying $`H\psi =F(v)\psi .`$ More importantly, perhaps, if $`F(v)`$ derives from a symmetric monotone potential shape $`f`$ which is bounded below, then $`f`$ is uniquely determined . The significance of this result can be appreciated more clearly upon consideration of an example. Suppose the bottom of the spectrum of $`H`$ is given by $`F(v)=\sqrt{v},`$ what is $`f`$? It is well known, of course, that $`f(x)=x^2\Rightarrow F_o(v)=\sqrt{v};`$ but are there any others? Are scaling arguments reversible? A possible source of disquiet for anyone who ponders such questions is the uncountable number of (unsymmetric) perturbations of the harmonic oscillator all of which have the identical spectrum to that of the unperturbed oscillator $`f(x)=x^2`$. If, in addition to symmetry and monotonicity, we also assume that a potential shape $`f_1(x)`$ vanishes at infinity and that $`f_1(x)`$ has area, then a given trajectory function $`F_1(v)`$ corresponding to $`f_1(x)`$ can be ‘scaled’ to a standard form in which the new function $`F(v)=\alpha F_1(\beta v)`$ corresponds to a potential shape $`f(x)`$ with area $`2`$ and minimum value $`f(0)=-1.`$ Thus square-well potentials, which of course are completely determined by depth and area, are immediately invertible; moreover it is known that, amongst all standard potentials, the square well is ‘extremal’ for it has the lowest possible energy trajectory. In Ref. 
an approximate variational inversion method is developed; it is also demonstrated constructively that all separable potentials are invertible. However, these results and additional constraints are not used in the present paper. When a potential has area $`2A`$, we first assumed, during our early attempts at numerical inversion, that it would be very useful to determine $`A`$ from $`F(v)`$ and then appropriately constrain the inversion process. However, the area constraint did not turn out to be useful. Thus the numerical method we have established for constructing $`f(x)`$ from $`F(v)`$ does not depend on use of this constraint, and is therefore not limited to the reconstruction of potentials which vanish at infinity and have area. Much of numerical analysis assumes that errors arising from arithmetic computations or from the computation of elementary functions are negligibly small. The errors usually studied in depth are those that arise from the discrete representation of continuous objects such as functions, or from operations on them, such as derivatives or integrals. In this paper we shall take this separation of numerical problems to a higher level. We shall assume that we have a numerical method for solving the eigenvalue problem in the forward direction $`f(x)\to F(v)`$ that is reliable and may be considered for our purposes to be essentially error free. Our main emphasis will be on the design of an effective algorithm for the inverse problem assuming that the forward problem is numerically soluble. The forward problem is essential to our methods because we shall need to know not only the given exact energy trajectory $`F(v)`$ but also, at each stage of the reconstruction, what eigenvalue a partly reconstructed potential generates. 
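As an illustration of the kind of forward solver assumed here (our own sketch, not the author's code), the following shooting method computes the even-parity ground state of $`-\mathrm{\Delta }+vf(x)`$ by bisecting on the sign of $`\psi `$ at a far boundary; the even-parity, node-free ground state is guaranteed by the symmetry and monotonicity assumptions on $`f`$.

```python
import math

def shoot(E, f, v, L=6.0, n=3000):
    """Integrate psi'' = (v f(x) - E) psi from x = 0 with even-parity
    data psi(0) = 1, psi'(0) = 0 (velocity-Verlet steps); return psi(L)."""
    h = L / n
    psi, dpsi = 1.0, 0.0
    a = (v * f(0.0) - E) * psi
    for i in range(n):
        psi += h * dpsi + 0.5 * h * h * a
        a_new = (v * f((i + 1) * h) - E) * psi
        dpsi += 0.5 * h * (a + a_new)
        a = a_new
    return psi

def ground_energy(f, v, Elo, Ehi, tol=1e-6):
    """Lowest even-parity eigenvalue of -Delta + v f(x) by bisection.
    Elo must lie below it, Ehi between it and the next even level."""
    while Ehi - Elo > tol:
        Emid = 0.5 * (Elo + Ehi)
        if shoot(Emid, f, v) > 0.0:    # no node yet: energy too low
            Elo = Emid
        else:
            Ehi = Emid
    return 0.5 * (Elo + Ehi)

# harmonic oscillator, f(x) = x^2: exact ground energy is sqrt(v)
E0 = ground_energy(lambda x: x * x, 1.0, 0.0, 3.0)
assert abs(E0 - 1.0) < 1e-2
```

For $`f(x)=x^2`$ this reproduces the trajectory $`F(v)=\sqrt{v}`$ used as an example in the text; the box size and step count are arbitrary choices adequate for low-lying states.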
This line of thought immediately indicates that we shall also need a way of temporarily extrapolating a partly reconstructed potential to all $`x.`$ Our constructive inversion algorithm hinges on the assumed symmetry and monotonicity of $`f(x).`$ This allows us to start the reconstruction of $`f(x)`$ at $`x=0,`$ and sequentially increase x. In Section (2) it is shown how numerical estimates can be made for the shape of the potential near $`x=0,`$ that is for $`x<b,`$ where $`b`$ is a parameter of the algorithm. In Section (3) we explore the implications of the potential’s monotonicity for the ‘tail’ of the wave function. In Section (4) we establish a numerical representation for the form of the unknown potential for $`x>b`$ and construct our inversion algorithm. In Section (5) the algorithm is applied to three test problems. 2. The reconstruction of $`f(x)`$ near $`x=0.`$ Since the energy trajectory $`F(v)`$ which we are given is assumed to arise from a symmetric monotone potential, and since the spectrum generated by the potential is invariant under shifts along the $`x`$-axis, we may assume without loss of generality that the minimum value of the potential occurs at $`x=0.`$ We now investigate the behaviour of $`F(v),`$ either analytically or numerically, for large values of $`v.`$ The purpose is to establish a value for the starting point $`x=b>0`$ of our inversion algorithm and the shape of the potential in the interval $`x\in [0,b]`$. First of all, the minimum value $`f(0)`$ of the potential is provided by the limit (1.2). Now, if the mean kinetic energy $`s=(\psi ,-\mathrm{\Delta }\psi )=F(v)-vF^{\prime }(v)`$ is found to be bounded above by a positive number $`K,`$ then we know that the potential shape $`f(x)`$ satisfies $`f(x)=f(0),x\in [0,b],`$ where b is given by (1.3). 
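Which branch applies, flat patch versus power-law model, can be read off from the trajectory itself. A small illustration of ours, using the exactly known oscillator trajectory $`F(v)=\sqrt{v}`$:

```python
import math

def mean_kinetic(F, v, h=1e-5):
    """s(v) = F(v) - v F'(v); bounded s signals a flat patch, eq. (1.3)."""
    Fp = (F(v + h) - F(v - h)) / (2 * h)
    return F(v) - v * Fp

F = lambda v: math.sqrt(v)     # trajectory of f(x) = x^2: no flat patch

# limit (1.2): f(0) = lim F(v)/v = 0
assert F(1e8) / 1e8 < 1e-3
# s(v) = sqrt(v)/2 grows without bound, so the power-law model is used
assert mean_kinetic(F, 1e4) > 10 * mean_kinetic(F, 1.0)
```

Here the growth of $`s(v)`$ correctly rules out a flat patch at the origin, so the algorithm would proceed with the shifted power model of Section (2).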
In this case we have a value for $`b`$ and also the shape $`f(x)`$ inside the interval $`[0,b].`$ If the mean kinetic energy $`s`$ is (or appears numerically to be) unbounded, then we adopt another strategy: we model $`f(x)`$ as a shifted power potential near $`x=0.`$ Since we never know $`f(x)`$ exactly, we shall need another symbol for the approximation we are currently using for $`f(x).`$ We choose this to be $`g(x)`$ and we suppose that the bottom of the spectrum of $`-\mathrm{\Delta }+vg(x)`$ is given by $`G(v).`$ The goal is to adjust $`g(x)`$ until $`G(v)`$ is close to the given $`F(v).`$ Thus we write $$f(x)\approx g(x)=f(0)+Ax^q,\quad x\in [0,b].$$ $`(2.1)`$ Therefore we have three positive parameters to determine: $`b,A,`$ and $`q.`$ We first suppose that $`g(x)`$ has the form (2.1) for all $`x\ge 0.`$ We now choose a ‘large’ value $`v_1`$ of $`v.`$ This is related to the later choice of $`b`$ by a bootstrap argument: the idea is that we choose $`v_1`$ so large that the turning point determined by $$\psi _{xx}(x,v_1)/\psi (x,v_1)=v_1f(x)-F(v_1)=0$$ $`(2.2)`$ is equal to $`b.`$ The concentration lemma guarantees that this is possible. 
By scaling arguments we have $$G(v)=f(0)v+E(q)(vA)^{\frac{2}{2+q}},$$ $`(2.3)`$ where $`E(q)`$ is the bottom of the spectrum of the pure-power Hamiltonian $`-\mathrm{\Delta }+|x|^q.`$ We now ‘fit’ $`G(v)`$ to $`F(v)`$ by the equations $`G(v_1)=F(v_1)`$ and $`G(2v_1)=F(2v_1),`$ which yield the estimate for $`q`$ given by $$\eta =\frac{2}{2+q}=\frac{\mathrm{log}(F(2v_1)-2v_1f(0))-\mathrm{log}(F(v_1)-v_1f(0))}{\mathrm{log}(2)}.$$ $`(2.4)`$ Thus $`A`$ is given by $$A=\left((F(v_1)-v_1f(0))/E(q)\right)^{\frac{1}{\eta }}/v_1.$$ $`(2.5)`$ We choose $`b`$ to be equal to the turning point corresponding to the model potential $`g(x)`$ with the smaller value of $`v,`$ that is to say so that $`f(0)+Ab^q=F(v_1)/v_1,`$ or $$b=\left(\frac{F(v_1)-v_1f(0)}{Av_1}\right)^{\frac{1}{q}}.$$ $`(2.6)`$ Thus we have determined the three parameters which define the potential model $`g(x)`$ for $`x\in [-b,b].`$ 3. The tail of the wavefunction Let us suppose that the ground-state wave function is $`\psi (x,v).`$ Thus the turning point $`\psi _{xx}(x,v)=0`$ occurs for a given $`v`$ when $$x=x_t(v)=f^{-1}(R(v)),\quad R(v)=\frac{F(v)}{v}.$$ $`(3.1)`$ The concentration lemma (1.4) quantifies the tendency of the wave function to become, as the coupling $`v`$ is increased, progressively more concentrated on the patch $`[-c,c],`$ where $`x=c`$ is the point (perhaps zero) where $`f(x)`$ first starts to increase. This allows us to think in terms of the wave function having a ‘tail’. We think of a symmetric potential as having been determined from $`x=0`$ up to the current point $`x.`$ The question we now ask is: what value of $`v`$ should we use to determine how $`f(x),`$ or, more particularly, our approximation $`g(x)`$ for $`f(x),`$ continues beyond the current point? We have found that a good choice is to choose $`v`$ so that the turning point $`x_t(v)=x/2,`$ or some other similar fixed fraction $`\sigma <1`$ of the current $`x`$ value. The algorithm seems to be insensitive to this choice. 
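The two-sample fit just described is easy to sketch numerically. Below is a minimal Python version (the function names are ours, not from the paper; $`E(q)`$ is supplied by the caller, and here we pass the constant $`E(3/2)`$ since the recovered exponent turns out to be $`q=3/2`$), exercised on the shifted power pair $`f(x)=1+|x|^{3/2}`$ used as a test problem in Section (5):

```python
import math

def fit_small_x_model(F, f0, v1, E_of_q):
    """Fit the small-x model g(x) = f0 + A*|x|**q on [0, b] from the two
    trajectory samples F(v1) and F(2*v1), following the fitting equations
    of Section 2."""
    # eta = 2/(2+q) from the ratio of the shifted trajectory values.
    eta = (math.log(F(2*v1) - 2*v1*f0)
           - math.log(F(v1) - v1*f0)) / math.log(2.0)
    q = 2.0/eta - 2.0
    # A from matching G(v1) = F(v1).
    A = ((F(v1) - v1*f0) / E_of_q(q)) ** (1.0/eta) / v1
    # b is the turning point of the model at the coupling v1.
    b = ((F(v1) - v1*f0) / (A*v1)) ** (1.0/q)
    return q, A, b

# Synthetic input: the exact pair f(x) = 1 + |x|^{3/2},
# F(v) = v + E(3/2) v^{4/7}, with E(3/2) ~ 1.001184.
E32 = 1.001184
q, A, b = fit_small_x_model(lambda v: v + E32 * v**(4.0/7.0),
                            1.0, 1.0e4, lambda q: E32)
```

With $`v_1=10^4`$ this recovers $`q=3/2,`$ $`A=1`$ and $`b\approx 0.072,`$ consistent with the starting interval quoted for this example in Section (5).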
Since $`g(x)`$ has been constructed up to the current point, and $`F(v)`$ is known, the value of $`v`$ required follows by inverting (3.1). It has been proved that $`R(v)`$ is monotone and therefore invertible. Hence we have the following general recipe for $`v:`$ $$v=R^{-1}(g(\sigma x)),\quad \sigma =\frac{1}{2}.$$ $`(3.2)`$ Since we can only determine Schrödinger eigenvalues of $`H=-\mathrm{\Delta }+vg(x)`$ if the potential is defined for all $`x,`$ we must have a policy about temporarily extending $`g(x).`$ We have tried many possibilities and found that the simplest and most effective method is to extend $`g(x)`$ in a straight line, with slope to be determined. In Figure (1) we illustrate the ideas just discussed for the case of the sech-squared potential. The inset graph shows the sech-squared potential perturbed from $`x=x_a`$ by five straight-line extensions; meanwhile the main graph shows the corresponding set of five wave functions which agree for $`0\le x\le x_a`$ and then continue with different ‘tails’ dictated by the corresponding potential extensions. The value of the coupling $`v`$ is the value that makes the turning point of the wave function occur at $`x=x_a/2.`$ This figure illustrates the sort of graphical study that has led to the algorithm described in this paper. 4. The inversion algorithm We must first define the ‘current’ approximation $`g(x)`$ for the potential $`f(x)`$ sought. For values of $`x`$ less than $`b,`$ $`g(x)`$ is defined either as the horizontal line $`g(x)=f(0)`$ or as the shifted power potential (2.1). For values of $`x`$ greater than $`b,`$ the $`x`$-axis is divided into steps of length $`h.`$ Thus the ‘current’ value of $`x`$ would be of the form $`x=x_k=b+kh,`$ where $`k`$ is a positive integer. The idea is that $`g(x_k)`$ is determined sequentially and $`g(x)`$ is interpolated linearly between the $`x_k`$ points. 
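Because $`R(v)=F(v)/v`$ is monotone, the inversion required in (3.2) can be carried out by simple bisection. A minimal sketch, using the exact sech-squared trajectory of Section (5) as the data (the bracket and iteration count are arbitrary choices of ours, not taken from the paper):

```python
def F_sech2(v):
    # Exact ground-state trajectory for f(x) = -sech^2(x).
    return -((v + 0.25)**0.5 - 0.5)**2

def R(v):
    # R(v) = F(v)/v; monotone decreasing from 0 towards -1 here.
    return F_sech2(v) / v

def invert_R(target, lo=1e-9, hi=1e9):
    """Solve R(v) = target by bisection, exploiting monotonicity."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if R(mid) > target:   # R decreasing: mid is still too small
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: recover a known coupling from its R value.
v0 = 7.3
v_rec = invert_R(R(v0))
```

The same routine serves for any monotone trajectory; only `F` changes.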
We suppose that $`\{g(x_k)\}`$ have already been determined up to $`k`$ and we need to find $`y=g(x_{k+1}).`$ For $`x\ge x_k`$ we let $$g(x)=g(x_k)+(y-g(x_k))\frac{x-x_k}{h}.$$ $`(4.1)`$ If, from a study of $`F(v),`$ the underlying potential $`f(x)`$ has been shown to be bounded above, it is convenient to rescale $`F(v)`$ so that it corresponds to a potential shape $`f(x)`$ which vanishes at infinity. In this case it is slightly more efficient to modify (4.1) so that for large $`x`$ the straight-line extrapolation of $`g(x)`$ is ‘cut’ to zero instead of becoming positive. In either case we now have for the current point $`x_k`$ an approximate potential $`g(x)`$ parameterized by the ‘next’ value $`y=g(x_{k+1}).`$ The task of the inversion algorithm is simply to choose this value of $`y.`$ Let us suppose that, for given values of $`k`$ and $`y,`$ the bottom of the spectrum of $`H=-\mathrm{\Delta }+vg(x)`$ is given by $`G(v,k,y);`$ then the inversion algorithm may be stated in the following succinct form, in which $`\sigma <1`$ is a fixed parameter. Find $`y`$ such that $$vg(\sigma x_k)=F(v)=G(v,k,y);\quad \mathrm{then}\quad g(x_{k+1})=y.$$ $`(4.2)`$ The value of $`v`$ is first chosen so that the turning point of the wave function generated by $`g`$ occurs at $`\sigma x_k;`$ after this, the value of $`y`$ is chosen so that $`G`$ ‘fits’ $`F`$ for this value of $`v.`$ The value of the parameter $`\sigma `$ chosen for the examples discussed in Section (5) below is $`\sigma =\frac{1}{2}.`$ The idea behind this choice can best be understood from a study of Figure (1): the value of the coupling $`v`$ must be such that the current value of $`x`$ for which $`y`$ is sought is in the ‘tail’ of the corresponding wave function; that is to say, the turning point $`\sigma x`$ should be before $`x,`$ but not too far away. Fortunately the inversion algorithm seems to be insensitive to the choice of $`\sigma .`$ 5. 
Three examples The first example we consider is the unbounded potential whose shape $`f(x)`$ and corresponding exact energy trajectory $`F(v)`$ are given by the $`\{f,F\}`$ pair $$f(x)=1+|x|^{\frac{3}{2}},\quad F(v)=v+E(3/2)v^{\frac{4}{7}},$$ $`(5.1)`$ where $`E(3/2)`$ is the bottom of the spectrum of $`H=-\mathrm{\Delta }+|x|^{\frac{3}{2}}`$ and has the approximate value $`E(3/2)\approx 1.001184.`$ Applying the inversion algorithm to $`F(v)`$ we obtain the reconstructed potential shown in Figure (2). We first set $`v_1=10^4`$ and find that the initial shape is determined (as described in Section (2)) to be $`1+x^{1.5}`$ for $`x<b=0.072.`$ For larger values of $`x`$ the step size is chosen to be $`h=0.05`$ and $`40`$ iterations are performed by the inversion algorithm. The results are plotted as hexagons on top of the exact potential shape shown as a smooth curve. This entire computation takes less than $`20`$ seconds with a program written in C++ running on a $`200`$MHz Pentium Pro. The following two examples are bounded potentials, both having large-$`x`$ limit zero, lowest point $`f(0)=-1,`$ and area $`-2.`$ The exponential potential has the $`\{f,F\}`$ pair $$f(x)=-e^{-|x|},\quad J_{2|E|^{\frac{1}{2}}}^{\prime }(2v^{\frac{1}{2}})=0,\quad E=F(v),$$ $`(5.2)`$ where $`J_\nu ^{\prime }(x)`$ is the derivative of the Bessel function of the first kind of order $`\nu .`$ For the sech-squared potential we have $$f(x)=-\mathrm{sech}^2(x),\quad F(v)=-\left[\left(v+\frac{1}{4}\right)^{\frac{1}{2}}-\frac{1}{2}\right]^2.$$ $`(5.3)`$ In Figure (3) the two energy trajectories are plotted. 
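The exact pairs above can be spot-checked with a crude finite-difference forward solver (this is not the authors' C++ code; the grid size, domain, and test coupling below are arbitrary choices of ours):

```python
import numpy as np

def ground_energy(V, L=8.0, n=1200):
    """Smallest eigenvalue of H = -d^2/dx^2 + V(x) on [-L, L], using
    second-order central differences with Dirichlet walls at +/- L."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    H = (np.diag(2.0/h**2 + V(x))
         + np.diag(-np.ones(n - 1)/h**2, 1)
         + np.diag(-np.ones(n - 1)/h**2, -1))
    return np.linalg.eigvalsh(H)[0]

# E(3/2): bottom of the spectrum of -Delta + |x|^{3/2},
# quoted in the text as approximately 1.001184.
E32 = ground_energy(lambda x: np.abs(x)**1.5)

# One point on the sech-squared trajectory at v = 40,
# compared with the closed form F(v) = -[(v + 1/4)^{1/2} - 1/2]^2.
v = 40.0
F_exact = -((v + 0.25)**0.5 - 0.5)**2
F_num = ground_energy(lambda x: -v / np.cosh(x)**2)
```

Agreement to a few parts in $`10^4`$ is typical at this resolution; a production forward solver would of course need to be both faster and more accurate than this dense-matrix sketch.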
Since the two potentials have lowest value $`-1`$ and area $`-2,`$ it follows that the corresponding trajectories both have the form $`F(v)\approx -v^2`$ for small $`v`$ and they both satisfy the large-$`v`$ limit $`\mathrm{lim}_{v\to \mathrm{}}\left(F(v)/v\right)=-1.`$ Thus the differences between the potential shapes are somehow encoded in the fine differences between these two similar energy curves for intermediate values of $`v:`$ it is the task of our inversion theory to decode this information and reveal the underlying potential shape. If we apply the inversion algorithm to these two problems we obtain the results shown in Figures (4) and (5). The parameters used are exactly the same as for the first problem described above. The time taken to perform the inversions is again less than $`20`$ seconds if we discount, in the case of the exponential potential, the extra time taken to compute $`F(v)`$ itself. 6. Conclusion Once we suspect (or know) that an energy trajectory $`F(v)`$ derives from a potential shape $`f(x),`$ it is certainly possible in principle to model the potential discretely as $`g(x)`$ and then find $`g`$ approximately by a least-squares fit of $`G(v)`$ to $`F(v).`$ Such a ‘brute force’ method would not be easy or fast, even for problems in one dimension. In terms of the reconstructions presented in this paper, one would have to consider minimizing a function of the form $`\sum _{i=1}^{40}|G(v_i;\mathrm{Y})-F(v_i)|^2,`$ where the vector $`\mathrm{Y}`$ represents the $`40`$ values of $`g(x_k)`$ to be determined. We have found that such a function of $`\mathrm{Y}`$ has very erratic behaviour unless the starting point can be chosen quite close to the critical point. The purpose of the approach discussed in this paper is however not so much to do with efficiency as with understanding. 
The method we have found is intimately linked to the basic properties of the problem: the implications of monotonicity, the relation between the position of the turning point of the wave function and the value of $`v,`$ and the tail behaviour. The effectiveness of the resulting algorithm stems from its systematic use of all this information. If a potential shape $`f(x)`$ is symmetric but not monotonic (on the half axis), then for large values of the coupling $`v`$ the problem will necessarily split into regimes that become more and more isolated as $`v`$ increases. The situation could become arbitrarily complicated, perhaps involving resonances, and we have no idea at present whether the reconstruction $`F\to f`$ would in principle be possible in the general case. If the potential were unimodal and monotonic away from the minimum point, but not necessarily symmetric, we do not at present know what might be the spectral inheritance of the additional property of the symmetry of $`f(x).`$ Is there non-uniqueness in this case? Could a symmetric potential be constructed that would have the same energy trajectory $`F(v)`$ as that of a given non-symmetric unimodal potential shape $`f(x)`$? Many interesting questions such as this, which are simple to pose, nevertheless appear at present to be very difficult to answer. In our earlier papers on this topic we discussed some suggestions for applications of this form of spectral inversion. The situations that are most strongly suggestive are those such as the screened-Coulomb potentials used in atomic physics, where the coupling varies with the atomic number. In such a case $`F_n(v)`$ or, more accurately, pair differences between such functions, would only be known at certain isolated points. Now that an effective form of constructive inversion is available, it will be possible to consider this more physically important type of application. Another approach which has not yet been applied to geometric spectral inversion is via control theory. 
Rabitz et al. have successfully used ideas from control theory to reconstruct molecular potentials from sets of data that are directly measurable. This is the ultimate goal of the present work on geometric spectral inversion. Acknowledgment Partial financial support of this work under Grant No. GP3438 from the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged. References K. Chadan and P. C. Sabatier, Inverse Problems in Quantum Scattering Theory (Springer, New York, 1989). The ‘inverse problem in the coupling constant’ is discussed on p. 406. R. L. Hall, Phys. Rev. A 51, 1787 (1995). R. L. Hall, J. Phys. A: Math. Gen. 25, 4459 (1992). R. L. Hall, Phys. Rev. A 50, 2876 (1995). R. L. Hall, J. Phys. A: Math. Gen. 28, 1771 (1995). O. L. De Lange and R. E. Raab, Operator Methods in Quantum Mechanics (Oxford University Press, Oxford, 1991). Perturbed harmonic oscillators with identical spectra to that generated by $`f(x)=x^2`$ are given on p. 71. H. S. W. Massey and C. B. O. Mohr, Proc. Roy. Soc. 148, 206 (1934). S. Flügge, Practical Quantum Mechanics (Springer, New York, 1974). The exponential potential is discussed on p. 196 and the sech-squared potential on p. 94. P. Gross, H. Singh, and H. Rabitz, Phys. Rev. A 47, 4593 (1993). Z. Lu and H. Rabitz, Phys. Rev. A 52, 1961 (1995). Figure (1) The potential $`f(x)=-\mathrm{sech}^2(x)`$ is perturbed from $`x=x_a`$ by straight-line segments. Each segment leads to a perturbation in the tail of the corresponding wave function. The coupling $`v`$ is chosen so that the turning point $`x_t`$ of the wave function occurs at $`x_t=x_a/2.`$ Figure (2) Constructive inversion of the energy trajectory $`F(v)`$ for the shifted power potential $`f(x)=1+|x|^{\frac{3}{2}}.`$ For $`x\le b=0.072,`$ the algorithm correctly generates the model $`f(x);`$ for larger values of $`x,`$ in steps of size $`h=0.05,`$ the hexagons indicate the reconstructed values for the potential $`f(x),`$ shown exactly as a smooth curve. 
The unnormalized wave functions are also shown. Figure (3) The ground-state energy trajectories $`F(v)`$ for the exponential potential (E) and the sech-squared potential (S). For small $`v,`$ $`F(v)\approx -v^2;`$ for large $`v,`$ $`\mathrm{lim}_{v\to \mathrm{}}\left(F(v)/v\right)=-1.`$ The shapes of the underlying potentials are buried in the details of $`F(v)`$ for intermediate values of $`v.`$ Figure (4) Constructive inversion of the energy trajectory $`F(v)`$ for the exponential potential $`f(x)=-\mathrm{exp}(-|x|).`$ For $`x\le b=0.048,`$ the algorithm correctly generates the model $`f(x)=-1+|x|;`$ for larger values of $`x,`$ in steps of size $`h=0.05,`$ the hexagons indicate the reconstructed values for the potential $`f(x),`$ shown exactly as a smooth curve. The unnormalized wave functions are also shown. Figure (5) Constructive inversion of the energy trajectory $`F(v)`$ for the sech-squared potential $`f(x)=-\mathrm{sech}^2(x).`$ For $`x\le b=0.1,`$ the algorithm correctly generates the model $`f(x)=-1+x^2;`$ for larger values of $`x,`$ in steps of size $`h=0.05,`$ the hexagons indicate the reconstructed values for the potential $`f(x),`$ shown exactly as a smooth curve. The unnormalized wave functions are also shown.
# A NEW APPROACH TO QUASARS AND TWINS BY COSMOLOGICAL WAVEGUIDES
## 1 The lithium plateau enigma may be summarized as follows: Main-sequence Pop II stars with effective temperatures between 5500 K and 6500 K show a remarkably constant value for the lithium abundance (the so-called “lithium plateau”). Furthermore the dispersion around this value is very small (Bonifacio & Molaro 1997). However, from the observations of Pop I stars, there is strong evidence that the lithium abundance varies strongly from star to star. The variation with T<sub>eff</sub> and age clearly appears in the galactic cluster data. From the theory and modelling of stellar internal structure, lithium is expected to vary from star to star due to both nuclear destruction and/or element settling. These effects account well for the observations of Pop I stars, although more quantitative comparisons between observational and theoretical results are still needed. Helioseismology now provides a spectacular confirmation of the precision we have attained in the modelling of the solar internal structure, including element settling (Richard et al. 1996). So why is the lithium abundance constant in the lithium plateau while all predictions suggest that it should vary from star to star? Is there an “abundance attractor” which would work in Pop II stars but not in Pop I stars? ## 2 Hints for a solution Several models have been proposed to account for the lithium plateau. The “old standard model” in which no settling was introduced is excluded as unphysical. In the “mass loss model” (Vauclair & Charbonnel 1995), a stellar wind is supposed to prevent element settling during the stellar lifetime. In the “rotation model” (Pinsonneault et al. 1990, Charbonnel & Vauclair 1999), the Pop II stars are supposed to have been mildly mixed below the convection zone due to rotation-induced shears. 
In any case the solution seems somewhat “ad hoc”, as it assumes that some parameters are fixed in all stars (the mass loss rate or the rotation rate) for the lithium value to remain constant along the plateau. It would be much more satisfying to find a “lithium abundance attractor” which would remain stable in halo stars while fundamental parameters (M, T<sub>eff</sub>, \[Fe/H\]) vary. ## 3 Lithium abundance attractor Such an attractor may exist (Vauclair & Charbonnel 1998): indeed, the lithium profiles inside the Pop II standard stellar models including element segregation present a maximum value, Li<sub>max</sub>, which remains constant all over the range in T<sub>eff</sub> and metallicity of the plateau, while the surface value is expected to change. This result leads to the idea that the observed lithium abundances may be related to Li<sub>max</sub>. Since the observations of lithium in the plateau reveal a very small dispersion around a stable value, this value must indeed lie close to Li<sub>max</sub>. In this case the derived primordial value is 2.35. When compared to BBN computations (Copi et al. 1995) this result leads to a baryon-to-photon ratio between $`1.2\times 10^{-10}`$ and $`5\times 10^{-10}`$. For H<sub>0</sub> = 50 km s<sup>-1</sup> Mpc<sup>-1</sup>, this value corresponds to $`0.018<\mathrm{\Omega }_b<0.075`$. The macroscopic process which would act in the way of moving this lithium up to the surface is still to be found.
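The quoted $`\mathrm{\Omega }_b`$ range follows from the commonly used BBN conversion $`\mathrm{\Omega }_bh^2\approx \eta _{10}/273`$ (where $`\eta _{10}`$ is the baryon-to-photon ratio in units of $`10^{-10}`$); the sketch below assumes that approximation, which is not stated in the paper itself:

```python
def omega_b(eta10, h):
    """Baryon density parameter from the baryon-to-photon ratio,
    assuming the standard approximation Omega_b * h^2 ~ eta10 / 273."""
    return eta10 / 273.0 / h**2

h = 0.5                  # H0 = 50 km/s/Mpc
lo = omega_b(1.2, h)     # lower end of the BBN range, ~0.018
hi = omega_b(5.0, h)     # upper end, ~0.073
```

These endpoints reproduce the 0.018-0.075 interval quoted above to within rounding.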
# Environmental Effects on the Faint End of the Luminosity Function ## 1. Introduction Much recent work has been devoted to measuring the galaxy luminosity function (LF) within rich clusters, particularly with regard to the faint end, which has become accessible to detailed study through various technical and observational improvements (see the paper by Smith et al. in these proceedings). These studies suggest that the LF becomes steep (Schechter (1976) slope $`\alpha \approx -1.5`$) in many clusters, faintwards of about $`M_B=-17.5`$ or $`M_R\approx -19`$ (for $`H_0`$ = 50 km s<sup>-1</sup> Mpc<sup>-1</sup>), where (generally low surface brightness) dwarfs begin to dominate (e.g. Smith, Driver & Phillipps 1997; Trentham 1997a,b). Using deep CCD imaging from the Anglo-Australian Telescope, we have now extended this work (see Driver, Couch & Phillipps 1998), in order to examine the luminosity distribution in and across a variety of Abell and ACO clusters. In particular, we were interested in any possible dependence of the dwarf population (specifically the ratio of the number of dwarfs to the number of giants) on cluster type or on position within the cluster. ## 2. Dwarfs in Rich Clusters A number of papers (e.g. Driver et al. 1994; Smith et al. 1997; Wilson et al. 1997) have demonstrated remarkably similar dwarf populations in a number of morphologically similar, dense rich clusters like (and including) Coma. This similarity appears not only in the faint end slope of the LF, around $`\alpha =-1.8`$, but also in the point at which the steep slope cuts in, $`M_R\approx -19`$ (i.e. about $`M^{\ast }+3.5`$). The latter implies equal ratios of dwarf to giant galaxy numbers in the different clusters. However, there clearly do exist differences between some clusters. For example, several of the clusters in the Driver et al. (1998) sample do not show a conspicuous turn up at the faint end (see also Lopez-Cruz et al. 1997 for further examples). 
Either these clusters contain completely different types of dwarf galaxy population or, as we suggest, the turn up occurs at fainter magnitudes. For a composite giant plus dwarf LF, this is equivalent to a smaller number of dwarfs relative to giants. To simplify the discussion, we will define the dwarf to giant ratio DGR as the number of galaxies with $`-19.5<M_R<-16.5`$ compared to those with $`M_R<-19.5`$, i.e. $`DGR=\frac{N(-19.5<M_R<-16.5)}{N(-23.5<M_R<-19.5)}`$ . The DGR does not have any obvious dependence on cluster richness (Driver et al. 1998; see also Turner et al. 1993), but we can also check for variations with morphological characteristics of the clusters. For giant galaxies, it is well known that a cluster’s structural and population characteristics are well correlated. For example, dense regular clusters are of early Bautz-Morgan type (dominated by cD galaxies) and have the highest fractions of giant ellipticals (Dressler 1980). In a similar way, we find that the DGR (i.e. the fraction of dwarfs) is smallest in these early Bautz-Morgan type clusters (Driver et al. 1998). Next consider the galaxy density. We can characterise the clusters by their central (giant) galaxy number densities, for instance the number of galaxies brighter than $`M_R=-19.5`$ within the central 1 Mpc<sup>2</sup> area. An alternative would be to use Dressler’s (1980) measure of the average number of near neighbours. We then find (solid squares in Figure 1) that the clusters with the least prominent dwarf populations (low DGRs $`\approx 1`$) are just those with the highest projected galaxy densities (e.g. the Bautz-Morgan Type I-II cluster A3888). Previously, Turner et al. (1993) had noted that the rich but low density cluster A3574, which is very spiral rich (Willmer et al. 1991), had a very high ratio of low surface brightness (LSB) dwarfs to giants. 
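As a rough consistency check on numbers of this kind, the DGR implied by a single Schechter LF can be obtained by integrating the LF over the two magnitude bins. The sketch below is illustrative only: it assumes a pure Schechter form (whereas real cluster LFs are composite), adopts $`M^{\ast }=-22.5`$ (suggested by the $`M^{\ast }+3.5\approx -19`$ statement in the Introduction), and the function names are ours, not from the paper:

```python
import math
from scipy.integrate import quad

def schechter_counts(alpha, M_star, M_bright, M_faint):
    """Galaxy counts (arbitrary normalization) between two absolute
    magnitudes for a Schechter LF with faint-end slope alpha."""
    def phi(M):
        x = 10.0 ** (-0.4 * (M - M_star))       # L / L*
        return x ** (alpha + 1) * math.exp(-x)  # phi(M), up to a constant
    return quad(phi, M_bright, M_faint)[0]

def schechter_dgr(alpha, M_star=-22.5):
    """DGR as defined in the text, for a single Schechter LF."""
    dwarfs = schechter_counts(alpha, M_star, -19.5, -16.5)
    giants = schechter_counts(alpha, M_star, -23.5, -19.5)
    return dwarfs / giants

flat_dgr = schechter_dgr(-1.1)    # 'conventional' field slope
steep_dgr = schechter_dgr(-1.5)   # steep faint-end slope
```

Under these assumptions the DGR rises from of order 1.5 for $`\alpha \approx -1.1`$ to of order 4-5 for $`\alpha \approx -1.5`$, broadly in line with the field values quoted later in this section.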
This is now backed up by the observations of clusters like A204 which are dwarf rich (DGR $`\approx 3`$), have low central densities and late B-M types (A204 is B-M III). To extend the range of environments studied, we can add in further LF results from the literature (Figure 2). A problem here, of course, is the lack of homogeneity due to different observed wavebands, different object detection techniques and so forth. Nevertheless, we can explore the general trends. Several points are shown for surveys of Coma (hexagons). These surveys (Thompson & Gregory 1993, Lobo et al. 1997, Secker & Harris 1996 and Trentham 1998) cover different areas and hence different mean projected densities (see also the next section). All these lie close to the relation defined by our original data, with the larger area surveys having higher DGRs. Points (filled triangles) representing the rich B-M type I X-ray selected clusters studied by Lopez-Cruz et al. (1997) fall at somewhat lower DGR than most of our clusters at similar densities. However, we should note that these clusters were selected (from a larger unpublished sample) only if they had LFs well fitted by a single Schechter function. This obviously precludes clusters with steep LF turn-ups at intermediate magnitudes and hence rules out high DGRs. The one comparison cluster they do show with a turn up (A1569 at DGR $`\approx 4.2`$) clearly supports our overall trend. Ferguson & Sandage (1991 = FS), on the other hand, deduced a trend in the opposite direction, from a study of fairly poor groups and clusters, with the early type dwarf-to-giant ratio increasing for denser clusters. However, this is not necessarily as contradictory to the present result as it might initially appear. For instance, FS select their dwarfs morphologically, not by luminosity (morphologically classified dwarfs and giants significantly overlap in luminosity) and they also concentrate solely on early type dwarfs. 
If, as we might expect, low density regions have significant numbers of late type dwarf irregulars (e.g. Thuan et al. 1991), then the FS definition of DGR may give a lower value than ours for these regions. Furthermore FS calculate their projected densities from all detected galaxies, down to very faint dwarfs. Regions with high DGR will therefore be forced to much higher densities than we would calculate for giants only. These two effects may go much of the way to reconciling our respective results. This is illustrated by the open triangles in Figure 2, which are an attempt to place the FS points on our system; magnitudes have been adjusted approximately for the different wavebands, DGRs have been estimated from the LFs and the cluster central densities (from Ferguson & Sandage 1990) have been scaled down by the fraction of their overall galaxy counts which are giants (by our luminosity definition). Given the uncertainties in the translation, most of the FS points then lie close to those of our overall distribution. Finally, a field LF with a steep faint end tail ($`\alpha \approx -1.5`$; e.g. Marzke, Huchra & Geller 1994, Zucca et al. 1997, Morgan, Smith & Phillipps 1998) would also give a point (filled pentagon) at DGR $`\approx 4`$, again consistent with the trend seen in the clusters. Nevertheless, there are exceptions. The FS points of lowest density (the Leo and Dorado groups) also have low DGR (and lie close to our main ‘outlier’, the point for the outer region of A22). The Local Group (shown by the star) would also be in this regime, at low density and DGR = 2, as would the ‘conventional’ field with $`\alpha \approx -1.1`$ (Efstathiou, Ellis & Peterson 1988; Loveday et al. 1992) and hence DGR $`\approx 1.5`$ (open pentagon). This may suggest that at very low density the trend is reversed (i.e. is in the direction seen by FS), or that the cosmic (and/or statistical) scatter becomes large. 
More data in the very low density regime are probably required before we can make a definitive statement on a possible reversal of the slope of the DGR versus density relation. In particular, the scatter in the derived faint end of the field LF between different surveys (see, e.g., the recent discussion in Metcalfe et al. 1998) precludes using this to tie down the low density end of the plot. ### 2.1. Population Gradients It was suggested by the results on A2554 (Smith et al. 1997) that the dwarf population was more spatially extended than that of the giants, i.e. the dwarf to giant ratio increased outwards. This type of population gradient has now been confirmed by the results in Driver et al. (1998) illustrated in Figure 1, where we contrast the inner 1 Mpc<sup>2</sup> areas (solid symbols) with the outer regions of the same clusters (open symbols). The triangles show in slightly more detail the run of DGR with radius (and hence density) across A2554. A similar effect can be seen for Coma in Figure 2, and can explain the discrepancy between the LFs derived for the core as against larger areas. It is found, too, in Virgo (Phillipps et al. 1998a; Jones et al., these proceedings), where the dwarf LSBG population has almost constant number density across the central areas while the giant density drops by a factor of $`\approx 3`$. ## 3. A Dwarf Population Density Relation The obvious synthesis of the above results is a relationship between the local galaxy density and the fraction of dwarfs (i.e. the relative amplitude of the dwarf LF). The inner, densest parts of rich clusters have the smallest fraction of dwarfs, while loose clusters and the outer parts of regular clusters, where the density is low, have high dwarf fractions. 
It is particularly interesting to note the clear overlap region in Figure 1, where regions of low density on the outskirts of dense clusters (open squares) have similar DGRs to the regions of the same density at the centres of looser clusters (solid squares). The proposed relation of course mimics the well known morphology - density relation (Dressler 1980), wherein the central parts of rich clusters have the highest early type galaxy fraction, this fraction then declining with decreasing local galaxy density. Putting the two relations together, it would also imply that dwarfs preferentially occur in the same environments as spirals. This would be in agreement with the weaker clustering of low luminosity systems in general (e.g. Loveday et al. 1995), as well as for spirals compared to ellipticals (Geller & Davis 1976). Thuan et al. (1991) have previously discussed the similar spatial distributions of dwarfs (in particular dwarf irregulars) and larger late type systems. ## 4. The Origin of the Relation As with the corresponding morphology - density relation for giant galaxies, the cause of our population - density relation could be either ‘nature’ or ‘nurture’, i.e. initial conditions or evolution. Some clues may be provided by the most recent semi-analytic models of galaxy formation, which have been able to account successfully for the excess of (giant) early type galaxies in dense environments (e.g. Baugh, Cole & Frenk 1996), basically through different merging histories for different types of galaxy. Does this also work for the dwarfs? The steep faint end slope of the LF appears to be a generic result of hierarchical clustering models (e.g. White & Frenk 1991; Frenk et al. 1996; Kauffmann, Nusser & Steinmetz 1997 = KNS), so is naturally accounted for in the current generation of models. The general hierarchical formation picture envisages (mainly baryonic) galaxies forming at the cores of dark matter halos. 
The halos themselves merge according to the general Press-Schechter (1974) prescription to generate the present day halo mass function. However the galaxies can retain their individual identities within the growing dark halos, because of their much longer merging time scales. The accretion of small halos by a large one then results in the main galaxy (or cluster of galaxies, for very large mass halos) acquiring a number of smaller satellites (or the cluster gaining additional, less tightly bound, members). KNS have presented a detailed study of the distribution of the luminosities of galaxies expected to be associated with a single halo of given mass. We can thus easily compare the theoretically expected numbers of dwarf galaxies per unit giant galaxy luminosity with our empirical results (Phillipps et al. 1998b). The KNS models mimic a “Milky Way system” (halo mass $`5\times 10^{12}M_{\odot }`$), a sizeable group (halo mass $`5\times 10^{13}M_{\odot }`$) and a cluster mass halo ($`10^{15}M_{\odot }`$). Their results imply that the Milky Way and small group halos have similar numbers of dwarf galaxies per unit giant galaxy light, whereas the dense cluster environment has a much smaller number of dwarfs for a given total giant galaxy luminosity. Thus the predictions of the hierarchical models (which depend, of course, on the merger history of the galaxies) are in qualitative agreement with our empirical results if we identify loose clusters and the outskirts of rich clusters with a population of (infalling?) groups (cf. Abraham et al. 1996), whereas the central dense regions of the clusters originate from already massive dark halos. If we renormalise from unit galaxy light to an effective giant galaxy LF amplitude (see Phillipps et al. 1998b) then the actual expected ratios ($`\approx 1`$ to a few) are also consistent with our observational results. 
By inputting realistic star formation laws etc., KNS could further identify the galaxies in the most massive halos with old elliptical galaxies, and those in low mass halos with galaxies with continued star formation. This would imply the likelihood that our dwarfs in low density regions may still be star forming, or at least have had star formation in the relatively recent past (cf. Phillipps & Driver 1995 and references therein). Note, too, that these galaxy formation models would also indicate that the usual (giant) morphology - density relation and our (dwarf) population - density relation do arise in basically the same way. Finally, we can see that if these semi-analytic models are reasonably believable, then we need not necessarily expect the field to be even richer in dwarfs than loose clusters; the dwarf to giant ratio seems to level off at the densities reached in fairly large groups. ## 5. Summary To summarise, then, we suggest that the current data on the relative numbers of dwarf galaxies in different clusters and groups can be understood in terms of a general dwarf population versus local galaxy density relation, similar to the well known morphology - density relation for giants. Low density environments are the preferred habitat of low luminosity galaxies; in dense regions they occur in similar numbers to giants, but at low densities dwarfs dominate numerically by a large factor. This fits in with the general idea that low luminosity galaxies are less clustered than high luminosity ones (particularly giant ellipticals). Plausible theoretical justifications for the population - density relation can be found within the context of current semi-analytic models of hierarchical structure formation. 
## References
Abraham R.G., et al., 1996, ApJ, 471, 694
Baugh C.M., Cole S., Frenk C.S., 1996, MNRAS, 283, 1361
Davis M., Geller M.J., 1976, ApJ, 208, 13
Dressler A., 1980, ApJ, 236, 351
Driver S.P., Phillipps S., Davies J.I., Morgan I., Disney M.J., 1994, MNRAS, 268, 393
Driver S.P., Couch W.J., Phillipps S., 1998, MNRAS, 301, 369
Efstathiou G., Ellis R.S., Peterson B.A., 1988, MNRAS, 232, 431
Ferguson H.C., Sandage A., 1990, AJ, 100, 1
Ferguson H.C., Sandage A., 1991, AJ, 96, 1520
Frenk C.S., Evrard A.E., White S.D.M., Summers F.J., 1996, ApJ, 472, 460
Jones J.B., Phillipps S., Schwartzenberg J.M., Parker Q.A., 1998, The Low Surface Brightness Universe, in press
Kauffmann G., Nusser A., Steinmetz M., 1997, MNRAS, 286, 795
Lobo C., et al., 1997, A&A, 317, 385
Lopez-Cruz O., Yee H.K.C., Brown J.P., Jones C., Forman W., 1997, ApJL, 475, L97
Loveday J., Peterson B.A., Efstathiou G., Maddox S.J., 1992, ApJ, 390, 338
Loveday J., Maddox S.J., Efstathiou G., Peterson B.A., 1995, ApJ, 442, 457
Marzke R., Huchra J.P., Geller M.J., 1994, ApJ, 428, 43
Metcalfe N., Ratcliffe A., Shanks T., Fong R., 1998, MNRAS, 294, 147
Morgan I., Smith R.M., Phillipps S., 1998, MNRAS, 295, 99
Phillipps S., Driver S.P., 1995, MNRAS, 274, 832
Phillipps S., Parker Q.A., Schwartzenberg J.M., Jones J.B., 1998a, ApJ, 493, L59
Phillipps S., Driver S.P., Couch W.J., Smith R.M., 1998b, ApJ, 498, L119
Press W.H., Schechter P.L., 1974, ApJ, 187, 425
Schechter P., 1976, ApJ, 203, 297
Secker J., Harris W.E., 1996, ApJ, 469, 623
Smith R.M., Driver S.P., Phillipps S., 1997, MNRAS, 287, 415
Smith R.M., Phillipps S., Driver S.P., Couch W.J., 1998, The Low Surface Brightness Universe, in press
Thompson L.A., Gregory S.A., 1993, AJ, 106, 2197
Thuan T.X., Alimi J.M., Gott J.R., Schneider S.E., 1991, ApJ, 370, 25
Trentham N., 1997a, MNRAS, 286, 133
Trentham N., 1997b, MNRAS, 290, 334
Trentham N., 1998, MNRAS, 293, 71
Turner J.A., Phillipps S., Davies J.I., Disney M.J., 1993, MNRAS, 261, 39
White S.D.M., Frenk C.S., 1991, ApJ, 379, 52
Willmer C., Focardi P., Chan R., Pellegrini P., da Costa L., 1991, AJ, 101, 57
Wilson G., Smail I., Ellis R.S., Couch W.J., 1997, MNRAS, 284, 915
Zucca E., et al., 1997, in Wide-Field Spectroscopy, eds. Kontizas E. et al., Dordrecht: Reidel, p. 247
no-problem/9812/hep-ph9812389.html
ar5iv
text
# Are Right-Handed Mixings Observable?<sup>1</sup><sup>1</sup>1Talk at the “Corfu Summer Inst. on Elementary Particle Physics”, Sept. 1998. (December 1998) Asymmetric mass matrices can induce large RH mixings. Those are not measurable in the SM but are there and play an important role in its extensions. The RH rotations are in particular relevant for proton decay, neutrino properties and the baryon asymmetry. E.g. large RH mixings lead to kaon dominated proton decay even without SUSY and could be the reason for a large neutrino mixing. By studying those phenomena one can learn about the RH rotation matrices, and this can reduce considerably the arbitrariness in the present study of fermionic masses. Right-handed (RH) mixings are not relevant in the framework of the standard model (SM). Also, RH currents have not been observed experimentally (yet?). So, why are RH mixings interesting? What are RH mixings? To diagonalize a general complex (mass) matrix M one needs a bi-unitary transformation, i.e. two unitary matrices $`U_{L,R}`$, such that $$U_L^{\dagger }MU_R=M_{diagonal}$$ (1) or $$U_L^{\dagger }MM^{\dagger }U_L=(M_{diag.})^2=U_R^{\dagger }M^{\dagger }MU_R.$$ (2) Only in the case of hermitian (symmetric) matrices is $`U_R`$ related to $`U_L`$ $$M=M^{\dagger }(M^T)\Rightarrow U_R=U_L(U_L^{*}).$$ (3) RH fermions are singlets in the SM and only LH charged currents are involved in the weak interactions $$\mathcal{L}_W=W_\mu ^{+}\overline{u_L}\gamma ^\mu V_{CKM}d_L+h.c.$$ (4) where $$V_{CKM}=U_L^{u\dagger }U_L^d.$$ The $`U_R`$’s do not play a role in the SM. However, the fermionic mass matrices are generated here by unknown Yukawa couplings and therefore are completely arbitrary. 
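The bi-unitary diagonalization of Eqs. (1)-(2) is exactly the singular value decomposition, which can be checked numerically. In the sketch below the matrix entries are arbitrary illustrative numbers, not physical Yukawa couplings:

```python
import numpy as np

# An arbitrary (asymmetric) complex "mass matrix"; illustrative values only.
M = np.array([[0.0, 0.2], [1.0, 0.0]], dtype=complex)

# SVD gives M = U_L diag(s) U_R^dagger, i.e. U_L^dagger M U_R = diag(s).
U_L, s, U_R_dag = np.linalg.svd(M)
U_R = U_R_dag.conj().T

M_diag = U_L.conj().T @ M @ U_R
assert np.allclose(M_diag, np.diag(s))  # Eq. (1)

# Eq. (2): U_L diagonalizes M M^dagger, U_R diagonalizes M^dagger M.
assert np.allclose(U_L.conj().T @ M @ M.conj().T @ U_L, np.diag(s**2))
assert np.allclose(U_R.conj().T @ M.conj().T @ M @ U_R, np.diag(s**2))

# Because this M is not hermitian, the left and right rotations differ:
assert not np.allclose(np.abs(U_L), np.abs(U_R))
```

For a hermitian or symmetric M the two rotations would coincide up to conjugation, as in Eq. (3); for an asymmetric M they differ, which is the point of the discussion.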
Hence, the SM must be extended to “explain” the fermionic masses and mixings, an extension which is already suggested by * Grand Unification: $`\alpha _1(M_W),\alpha _2(M_W),\alpha _3(M_W)\to \alpha (M_{GUT})`$ * Yukawa Unification: $`m_\tau (M_{GUT})\simeq m_b(M_{GUT})`$ * L-R restoration at $`M_R\gg M_W`$ * Mixed massive neutrinos (seesaw) with: $`M_{\nu _R}\gg M_W`$ etc. Many different “models” are known to give the right masses of the charged fermions and $`V_{CKM}`$ (within the experimental errors), and this is an indication that the mass problem is far from being solved. Part of this freedom is due to the fact that these suggestions disregard the RH rotations. Most models use hermitian mass matrices for no other reason than simplicity. However, recently more and more asymmetric mass matrices are used (mainly to have additional freedom for the neutrino sector). Asymmetric mass matrices imply $`U_L\ne U_R`$, so that here the $`U_R`$’s are a clue to distinguish between different models. It is true that RH currents have not been observed till now<sup>2</sup><sup>2</sup>2There is a certain indication that RH currents can be observed in bottom decays. but this means only that the relevant gauge bosons are heavy and/or mix very little with the observed LH ones and/or the RH neutrinos are very heavy. The limits on RH gauge bosons are clearly very model dependent. Our main point is however that even if RH currents are not directly observed at low energies they play an important role at energies where the L-R symmetries are restored. RH mixings therefore affect phenomena like: * Proton decay * Neutrino seesaw * Leptogenesis via decays of RH neutrinos as the origin of the baryon asymmetry etc., which are indirectly observable. Now, it is clear that the symmetries which dictate the mass matrices are effective at scales relevant for the theories beyond the SM. In those theories the RH mixings are not arbitrary any more, and there is also no reason to assume that they are small. 
Actually, even large RH mixings are not unnatural and are standard in $`P_{LR}`$ invariant theories. We claim also that the large leptonic mixing (recently observed by Super-Kamiokande) may be related to large RH rotations. What is $`P_{LR}`$? In the framework of Current Algebra it is common to assign the baryons to a $`P`$-invariant $`(3,\overline{3})\oplus (\overline{3},3)`$ representation under the global chiral group: $`SU_L(3)\times SU_R(3)\times P`$. The baryons acquire their masses when the chiral group is broken into its diagonal subgroup $`SU_{L+R}(3)`$, under which the baryons constitute $`\mathbf{8}\oplus \mathbf{1}`$ Dirac spinors. An analogous symmetry can be applied to fermions in $`LR`$ symmetric gauge theories. As an example, let us consider the leptons in the $`E_6`$ GUT. Those are LH Weyl spinors that transform like $`(1,3,\overline{3})`$ under the maximal subgroup of $`E_6`$, $$E_6\supset SU_C(3)\times SU_L(3)\times SU_R(3).$$ Whereas $`P`$-reflection for the global symmetry leads per definition to $`SU_L(3)\leftrightarrow SU_R(3)`$ exchange, in the gauge theories $`L,R`$ are only a historical notation. The chirality of the local currents is fixed by the representation content of the fermions under $`SU_L(3)\times SU_R(3)`$. Hence, for gauge theories we have to require, in addition to Parity exchange, also $`SU_L(3)\leftrightarrow SU_R(3)`$. The irreducible representation of the leptons under $`SU_C(3)\times SU_L(3)\times SU_R(3)\times P_{LR}`$ is $$(1,3,\overline{3})_{LH}\oplus (1,\overline{3},3)_{RH},$$ which requires two families. Under the diagonal $`SU_C(3)\times SU_{L+R}(3)`$ one obtains then $`\mathbf{8}\oplus \mathbf{1}`$ of Dirac spinors. Applying this to the $`e`$ and $`\mu `$ families, this is realized in analogy with the hadrons. Such a model was actually constructed in 1977 when the third heavy family was not yet observed. 
It is quite a general belief now that this top-family is the only one acquiring masses through direct coupling to the Higgs representation, while the light families get their masses through second order “corrections”. It is then natural that these two light families obey symmetries like $`P_{LR}`$. When those symmetries are broken, the particles gain their physical masses and mixings.<sup>3</sup><sup>3</sup>3We know that in SUSY theories as well, sfermions of the two light families must be quite degenerate to avoid FCNCs. The $`P_{LR}`$ operation can be formally defined in terms of two families $$P_{LR}f^i(x)P_{LR}^{-1}=\epsilon ^{ij}\sigma _2\widehat{f}^j(\overline{x}).$$ (5) The $`P_{LR}`$ invariant Lagrangian then looks as follows $$\mathcal{L}_Y=y_{12}\overline{\Psi ^{1c}}\Phi _{12}\Psi ^2+y_{21}\overline{\Psi ^{2c}}\Phi _{21}\Psi ^1+h.c.$$ (6) The corresponding mass matrices are hence purely off-diagonal in this limit $$\begin{array}{cc}\begin{array}{cc}M_2^u=\left(\begin{array}{cc}0& m_u\\ m_c& 0\end{array}\right)& M_2^d=\left(\begin{array}{cc}0& m_d\\ m_s& 0\end{array}\right)\end{array}& \\ \begin{array}{cc}M_2^e=\left(\begin{array}{cc}0& m_e\\ m_\mu & 0\end{array}\right)& M_2^\nu =\left(\begin{array}{cc}0& m_{\nu _e}\\ m_{\nu _\mu }& 0\end{array}\right)\end{array}.& \end{array}$$ These matrices can be diagonalized by the transformations $$\begin{array}{ccccc}\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)& \left(\begin{array}{cc}0& m_1\\ m_2& 0\end{array}\right)& \left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right)& =\left(\begin{array}{cc}m_1& 0\\ 0& m_2\end{array}\right)& \end{array}.$$ and those are equivalent to the exchanges $$u_{LH}^c\leftrightarrow c_{LH}^c,\quad d_{LH}^c\leftrightarrow s_{LH}^c,\quad e_{LH}^+\leftrightarrow \mu _{LH}^+,$$ (7) which mean full RH rotations. 
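That a purely off-diagonal 2x2 mass matrix needs no left rotation but a full (90 degree) right rotation can be verified directly. Here m1 and m2 stand for any of the mass pairs above; the numerical values are arbitrary:

```python
import numpy as np

m1, m2 = 0.005, 1.3  # illustrative mass pair in arbitrary units
M = np.array([[0.0, m1], [m2, 0.0]])

U_L = np.eye(2)                           # identity: no left-handed mixing
U_R = np.array([[0.0, 1.0], [1.0, 0.0]])  # full exchange: 90-degree RH rotation

# U_L^T M U_R is diagonal with the physical masses:
assert np.allclose(U_L.T @ M @ U_R, np.diag([m1, m2]))
```

The matrix U_R simply exchanges the two right-handed states, which is the matrix form of the exchanges in Eq. (7).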
Applying this to the effective dim.6 B-violating Lagrangian of $`SO(10)`$ and noting that only the two light families are relevant for proton decay, two decay modes result $$P\to \overline{\nu }_\mu K^+\quad \text{and}\quad P\to \mu ^+K^0.$$ Now, to make such a model realistic one must break $`P_{LR}`$ by a small amount, to allow for Cabibbo mixing, and add the heavy t-family. Also, to induce gauge unification (without SUSY) an intermediate breaking scale, $`M_I\sim 10^{12}`$ GeV, is required. This is however also the right RH neutrino mass scale for the seesaw mechanism and leptogenesis as well as the scale of the invisible Axion window. In this talk I would like to report on a systematic study of models with large RH rotations and their possible effects. I will give an example in terms of a “realistic” $`SO(10)`$ Model with such mixings. By this I mean a conventional $`SO(10)`$ theory that reproduces all the observed fermionic masses and LH mixings but at the same time generates large RH angles. This can be obtained by requiring small deviations from the $`P_{LR}`$ invariant case. E.g. 
consider at the high unification scale the following mass matrices (those can be obtained using a global $`U_f(1)`$ or a discrete symmetry) $$\begin{array}{ccc}m_d=\left(\begin{array}{ccc}0& m_d& 0\\ m_s& 0& 0\\ 0& 0& m_b\end{array}\right)& m_u=\left(\begin{array}{ccc}a& m_1& b\\ m_2& 0& 0\\ c& 0& m_3\end{array}\right)& m_{\ell }=\left(\begin{array}{ccc}0& m_e& 0\\ m_\mu & 0& 0\\ 0& 0& m_\tau \end{array}\right)\end{array}.$$ These matrices give the following RH angles, in the u-sector, at the high scale $$\begin{array}{ccc}\mathrm{\Theta }_{12}^R=1.57rad.& \mathrm{\Theta }_{23}^R=0.0rad.& \mathrm{\Theta }_{13}^R=1.50rad.\end{array}$$ (8) We studied in detail the embedding of those matrices in the framework of an $`SO(10)`$ model broken at $`M_U`$ to the Pati-Salam group and in a second step to the SM at $`M_I`$ $$SO(10)\stackrel{M_U}{\to }SU_C(4)\times SU_L(2)\times SU_R(2)\stackrel{M_I}{\to }SM$$ (9) The Higgs representations needed for the local breaking and the generation of the fermionic mass matrices fix the two loop renormalization group equations (RGEs). Those are used for two cases, one with D-Parity ($`g_L=g_R`$) and the other without it ($`g_L\ne g_R`$). We found: with D-Parity: $$\begin{array}{ccc}M_U=1.04\times 10^{15}GeV& M_I=5.66\times 10^{13}GeV& \alpha _U=0.02841\end{array}$$ (10) and without D-Parity: $$\begin{array}{ccc}M_U=5.68\times 10^{15}GeV& M_I=2.09\times 10^{11}GeV& \alpha _U=0.04207\end{array}$$ (11) Using then the fermionic mass matrices and $`V_{CKM}`$ at $`M_Z`$ we evaluated the values of the matrix elements at $`M_I`$ and also give the RH mixing angles at this scale. Those values were used to calculate the proton and neutron B-violating branching ratios (see tab. 1 and tab. 2). We obtained very similar results in those two cases; only the absolute rates depend on the details of the local breaking. 
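For orientation only: the model above uses two-loop RGEs with an intermediate scale, but already at one loop one can see why a single-step breaking fails without SUSY. Run up with the standard one-loop SM beta coefficients, the three gauge couplings cross pairwise at widely different scales. The inverse couplings at M_Z below are approximate textbook inputs, and the whole exercise is a sketch, not the model's actual two-loop calculation:

```python
import math

# One-loop SM beta coefficients (GUT normalization for U(1)_Y) and
# approximate inverse couplings at M_Z; illustrative inputs only.
b = {1: 41 / 10, 2: -19 / 6, 3: -7.0}
inv_alpha = {1: 59.0, 2: 29.6, 3: 8.4}
M_Z = 91.2  # GeV

def crossing_scale(i, j):
    """Scale (GeV) where alpha_i = alpha_j under one-loop running."""
    t = 2 * math.pi * (inv_alpha[i] - inv_alpha[j]) / (b[i] - b[j])
    return M_Z * math.exp(t)

scales = [crossing_scale(1, 2), crossing_scale(1, 3), crossing_scale(2, 3)]
# The three pairwise crossings differ by orders of magnitude, so the SM
# couplings do not meet at a single point:
spread = max(scales) / min(scales)
assert spread > 1e2
```

The pairwise crossings span several decades in energy, which is the standard argument for an intermediate breaking scale such as $`M_I`$ to bring the couplings together at a single $`M_U`$.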
Without D-Parity we obtain: $$\tau _{total}^{proton}=1.1\times 10^{34\pm 0.7\pm 1.0_{-5.0}^{+0.5}}yrs.$$ (12) For the uncertainties and threshold corrections we used the estimates of Langacker and Lee et al. Our main predictions are the branching ratios, which are independent of those uncertainties and of the details of the local breaking. The absolute rates indicate, however, that the results of the model are well within the range of observability of the new proton decay experiments. The branching ratios are very similar to the “smoking gun” predictions of the SUSY GUTs and in contradiction with the conventional GUTs, where $`P\to e^+\pi ^0`$ dominates. Using a $`U(1)_F`$ one can obtain naturally large leptonic mixings induced by the large RH rotations. We will study also effects of large RH mixings on the proton decay in SUSY SO(10). Those could play an important role in view of the fact that it was shown recently that RRRR and RRLL effective dim.5 operators can dominate proton decay in such models. Also, effects of SUSY and non-SUSY leptogenesis as the origin of the baryon asymmetry will be considered. Part of this work was done in collaboration with Carsten Merten. I would like to thank also M. K. Parida for discussions and for pointing out a mistake to us.
no-problem/9812/astro-ph9812034.html
ar5iv
text
## 1 Introduction We care about the bulge of the Milky Way both because of what it tells us about the formation and evolution of our own galaxy and because its structure and stellar content are often used as proxies in the study of other galactic bulges and elliptical galaxies. Thus, in the spirit of this meeting, it serves as a vital link between the near and the far, between the present and the past. Buried within the Galactic bulge is the center of the Galaxy, a region which, on a small scale, has some properties in common with luminous AGNs and starburst galaxies. Because of its proximity, it can be studied in greater detail than any other such galactic nucleus. On the other hand, between us and it lie clouds of interstellar dust of great enough optical depth that the average visual extinction is about 30 magnitudes. Thus it is only at infrared and longer wavelengths that the central few arc minutes of the Galaxy can be studied. Indeed, out to a radius of about $`2^{\circ }`$ (and much farther if observing on or close to the major axis) the visual extinction is still great enough that optical observations are difficult to impossible along most lines of sight. This review of our current state of knowledge of the star formation and chemical enrichment history of the Galactic bulge will be brief. I will make no attempt to cite the many research papers that have appeared over the past decade or so that deal with these topics. The papers that are referred to, though, do contain extensive references to the literature. Furthermore, the review will be restricted to stars and their optical and infrared photospheric radiation. Simply put, the history of star formation can be traced by a survey of either hot blue stars or cool red ones. The former, which will primarily be massive main sequence stars, are effective tracers of the most recent epoch of star formation. 
The latter will not only be effective tracers of the most recent epoch – the late-type supergiants – but will also trace out older epochs of star formation via the presence of luminous AGB stars. This review will deal primarily with surveys of the cool stellar population in the central part of the Milky Way so that star formation can be investigated over a broader period of time. The next section, though, will briefly consider some work on the hot stellar component in the immediate vicinity of the Galactic Center itself. ## 2 The Central Few Arc Minutes Of The Galaxy Krabbe et al. (1995) have reported on an extensive survey of the central few arc seconds of the Galaxy. They identified more than 20 luminous blue supergiants and Wolf-Rayet stars in a region not more than a parsec in radius. The inferred masses of some of these stars approach 100 $`M_{\odot }`$. From this they conclude that between 3 and 7 Myr ago there was a burst of star formation in the central region. They also identified a small population of cool luminous AGB stars from which one can conclude that there was significant star formation activity a few 100 Myr ago as well. Blum et al. (1996a) carried out a K-band survey of the central 2 arc minutes of the Galaxy and drew renewed attention to the presence of a significant excess of luminous stars ($`K_0<6`$) when compared to a typical old stellar population such as is found in Baade’s Window, for example. Most of these stars were found by Blum and others to be M stars, presumably a mixture of supergiants and AGB stars. However, as Blum et al. (1996a,b) pointed out, the distinction between an M supergiant and a luminous M-type AGB star cannot be made on the basis of luminosity alone since there is a two magnitude range in which the luminosities of the two very different classes of stars overlap (see Fig. 5 of Blum et al. 1996b). But assigning stars to one class or the other is critical in deciphering the star formation history of this region. 
With K-band spectra, though, it becomes straightforward to make this distinction for almost all cases (Fig. 1 of Blum et al. 1996b). As first quantified by Baldwin et al. (1973) M-type supergiants can be easily distinguished from ordinary giants of the same temperature (or color) via the strengths of the H<sub>2</sub>O and CO absorption bands in K-band spectra. Blum et al. (1996b) analyzed K-band spectra for a representative sample of 19 of the luminous stars identified in their survey area. Only 3 of these stars were found to be supergiants; one of these is the well known IRS 7. The remainder are AGB stars, some of which could be long period variables as well. From the spectra and the multi-color photometry they were able to calculate effective temperatures and bolometric luminosities for the stars. With the assumption that the abundance of the stars they observed is comparable to that of disk stars in the solar neighborhood, they were able to estimate ages for the stars from a comparison with stellar interior models. Rather than continuous star formation, they concluded that there have been multiple epochs of star formation in the central few parsecs of the Galaxy. The most recent epoch, less than 10 Myr ago, corresponds with that found by Krabbe et al. (1995). Blum et al. also identified significant periods of star formation as having occurred about 30 Myr, between 100 and 200 Myr, and more than about 400 Myr in the past. The majority of stars they observed are associated with the oldest epoch of star formation. What about the abundances of stars in the central few parsecs? Ramírez et al. (1998) have obtained high resolution K band spectra for 10 M giants in this region and did a full spectral synthesis analysis of them. They were able to measure a true \[Fe/H\] with their observations of iron lines and thus remove any ambiguity that could arise by inferring \[Fe/H\] from measurements of elements that are often used as proxies (e.g. Mg or Ca). 
For these 10 stars they derive a mean \[Fe/H\] of 0.0 with a dispersion no larger than their uncertainties, about $`\pm 0.2`$ dex. Their mean value is a few tenths of a dex greater than the mean \[Fe/H\] determined for Baade’s Window K giants (Sadler et al. 1996; McWilliam & Rich 1994). While it may be surprising that the mean \[Fe/H\] at the Galactic Center is not super-solar, the small increase in the mean value of \[Fe/H\] compared with Baade’s Window is consistent with estimates for the gradient in \[Fe/H\] in the inner Galactic bulge (Tiede et al. 1995; Frogel et al. 1999). On the other hand, a non-detectable dispersion in \[Fe/H\] stands in contrast to a dispersion that is more than an order of magnitude in size for the K giants in Baade’s Window (Sadler et al. 1996; McWilliam & Rich 1994). It is, however, consistent with the lack of dispersion found for the M giants in Baade’s Window (Frogel & Whitford 1987; Terndrup et al. 1991). The fact that \[Fe/H\] is near solar at the Galactic Center, while the star formation rate per unit mass is, at least at present, considerably in excess of the solar neighborhood value, suggests that the rate of chemical enrichment has been quite different at the two locations. The difference in the measured dispersions between K and M giants remains to be explained. In Baade’s Window there is no detectable population of K giants with luminosities great enough to place them near the top of the giant branch (DePoy et al. 1993). At the same time, it is generally thought that in a stellar population most of whose stars have \[Fe/H\] greater than –1.0, nearly all K giants eventually evolve into M giants. Thus the observed dispersions should be similar for the two groups. That they are not could imply that estimates for evolutionary rates and lifetimes near the upper end of the RGB and AGB are wrong. It could also point to problems with the analysis of the M giants, although in the case of the Ramírez et al. 
work this seems unlikely since the underlying principles of their analysis are basically the same as that employed for the optical studies of the K giants. ## 3 The Inner Galactic Bulge Now we turn our attention to the inner $`3^{\circ }`$ of the Galactic bulge. This region, which is interior to Baade’s Window, will be referred to as the inner Galactic bulge. With the 2.5 meter duPont Telescope at Las Campanas Observatory I have obtained JHK images of 11 fields within the inner bulge, three of which are within $`1^{\circ }`$ of the Galactic Center. The two questions that are being addressed are: What is the abundance of the stellar population in this region and is there any evidence that a detectable component of the population is relatively young, i.e. significantly younger than globular clusters? To answer the question about stellar abundances my collaborators and I are taking two independent approaches. The first is based on the finding of Kuchinski et al. (1995) that the giant branch of a metal rich globular cluster in a K, J-K color magnitude diagram is linear over 5 magnitudes and has a slope that is proportional to its optically determined \[Fe/H\]. Results from this part of the study, based on the LCO data, will be summarized here. The second approach, which is expected to give a more detailed and precise answer to the abundance question, is based on the analysis of K-band spectra obtained at CTIO of about one dozen M stars in each of the 11 fields. This is a work in progress. We have used two indicators to test for the presence of intermediate age stars in the bulge (i.e. an age not more than a few Gyr as opposed to closer to 10 Gyr). The first is a determination of the luminosity of the brightest stars on the giant branch of each of the fields. A sign of a relatively young age would be the presence of stars brighter than those found in Baade’s Window. 
The second indicator involves a comparison of the properties of long period variables in the bulge with their counterparts in Galactic globular clusters. ### 3.1 ABUNDANCES IN THE INNER GALACTIC BULGE The best “fixed reference point” in any attempt to determine abundances within the inner bulge is the determination by McWilliam & Rich (1994) of the mean abundance of a sample of K giants in Baade’s Window based on high resolution spectroscopy. They found a mean \[Fe/H\] of about –0.2. A similar result was found by Sadler et al. (1996) based on spectroscopy of several hundred K giants in Baade’s Window. Furthermore, both of these independent analyses agreed that the spread in \[Fe/H\] in Baade’s Window was considerably greater than an order of magnitude and could be as large as two orders of magnitude. Observations of Baade’s Window M giants, on the other hand, both in the near IR and of red TiO bands (e.g. Frogel & Whitford 1987; Terndrup et al. 1991) consistently pointed to a greater than solar abundance with no measurable dispersion. The independent estimate of \[Fe/H\] for the Baade’s Window giants based on the near-IR slope method (Tiede et al. 1995) differed from the previous determinations in that they found an \[Fe/H\] close to the value based on the optical spectra of K giants. The near-IR survey of inner bulge fields has yielded color-magnitude diagrams that, except for the fields with the highest extinction, reach as faint as the horizontal branch. Thus, with data for the entire red giant branch above the level of the HB we can apply the technique developed by Kuchinski et al. (1995) which derives an estimate for \[Fe/H\] based on the slope of the RGB above the HB. Although the calibration of this technique is based on observations of globular clusters, the applicability of the method to stars in the bulge was demonstrated by Tiede et al. (1995) in their analysis of stars in Baade’s Window. 
Although we were able to estimate, statistically, the reddening to each field, the method itself is reddening independent since it depends only on a slope measurement. Based on 7 fields on or close to the minor axis of the bulge at galactic latitudes between $`+0.1^{\circ }`$ and $`-2.8^{\circ }`$ we derive a dependence of $`\langle `$\[Fe/H\]$`\rangle `$ on latitude for $`|b|`$ between $`0.8^{\circ }`$ and $`2.8^{\circ }`$ of $`-0.085\pm 0.033`$ dex/degree. When combined with the data from Tiede et al. we find for $`0.8^{\circ }\le |b|\le 10.3^{\circ }`$ the slope in $`\langle `$\[Fe/H\]$`\rangle `$ is $`-0.064\pm 0.012`$ dex/degree, somewhat smaller than the admittedly crude value derived by Minniti et al. (1995). An extrapolation to the Galactic Center predicts \[Fe/H\] $`=+0.034\pm 0.053`$ dex, in close agreement with the result of Ramírez et al. (1998). Also in agreement with Ramírez et al., we find no evidence for a dispersion in \[Fe/H\]. Details of this work are in Frogel et al. (1999). Analysis of the K-band spectra of the brightest M giants in each of the fields surveyed is nearing completion; the results appear to be consistent with those based on the RGB slope method, namely, an \[Fe/H\] for Baade’s Window M giants close to the McWilliam & Rich value but with little or no gradient as one goes into the central region. Also, little or no dispersion in \[Fe/H\] within each field is visible in the spectroscopic data. Further work on the calibration of these data must be done before definitive conclusions can be drawn. In summary, several independent lines of evidence point to an \[Fe/H\] for stars within a few parsecs of the Galactic Center of close to solar. The gradient in \[Fe/H\] between Baade’s Window and the Center is small – not more than a few tenths of a dex. Exterior to Baade’s Window there is a further small decline in mean \[Fe/H\] (e.g. Terndrup et al. 1991; Frogel et al. 1990; Minniti et al. 1995). 
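The extrapolation to the Galactic Center is simple arithmetic on the fitted gradient. As a rough consistency check, the sketch below takes Baade's Window at |b| of about 3.9 degrees with a mean \[Fe/H\] of about -0.2; both anchor values are approximations assumed here for illustration, not the fit actually performed in the survey:

```python
slope = -0.064           # dex per degree of |b| (fitted value quoted in the text)
feh_bw, b_bw = -0.2, 3.9  # approximate Baade's Window anchor (assumed here)

# Extrapolate the linear trend back to |b| = 0 (the Galactic Center):
feh_center = feh_bw - slope * b_bw
# Roughly +0.05, consistent within the errors with the quoted +0.034 +/- 0.053:
assert abs(feh_center - 0.034) < 0.053
```

The agreement within the quoted uncertainty is the point; a proper determination uses the full set of fields, not a single anchor.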
It remains to be seen whether this gradient arises from a change in the mean \[Fe/H\] of a single population or a change in the relative mix of two populations, one relatively metal rich and identifiable with the bulge, the other relatively metal poor and more closely associated with the halo. Support for the latter interpretation is found in the survey of TiO band strengths in M giants in outer bulge fields by Terndrup et al. (1990) for which they found a bimodal distribution. McWilliam & Rich (1994) proposed an explanation based on selective elemental enhancements as to why earlier abundance estimates of bulge M giants seemed to consistently yield \[Fe/H\] values in excess of solar. What still remains to be understood is why even recent measurements of the M giant abundances do not reveal any evidence for an intrinsic dispersion in \[Fe/H\] in any given field. Finally, an issue that needs further investigation is the degree to which the indirect methods used for getting at \[Fe/H\] are influenced by selective element enhancements. ### 3.2 STELLAR AGES IN THE INNER GALACTIC BULGE If a stellar population has an age significantly younger than 10 Gyr, say not more than a few Gyr, then stars on the AGB can reach luminosities several magnitudes brighter than they would in an older population. After correction for extinction we noted that our fields closest to the Galactic Center had significant numbers of bright, red stars. With the stars in Baade’s Window as a guide we chose a reddening corrected K magnitude of 8.0 as the limit to the brightest magnitude obtainable in an old population and counted the number of stars in each surveyed field brighter than this relative to the number in a predefined, fainter magnitude interval. We found that at radial distances greater than $`1.3^{\circ }`$ the ratio was constant with a value equal to that for Baade’s Window. 
On the other hand, for the fields closer to the center than $`1.0^{\circ }`$ this ratio was significantly larger, implying the presence of a relatively young population of stars, not more than a couple of Gyr old. This is consistent with Blum et al.’s work on the inner few arc minutes of the bulge. Details are in Frogel et al. (1999). The second test we applied to see if there is evidence for a young population in the Galactic bulge was to compare the luminosities and periods of bulge long period variables (LPVs) with those found in globular clusters (Frogel & Whitelock 1998). For LPVs of the same age, those with greater \[Fe/H\] will have longer periods. LPVs with longer periods also have higher mean luminosities. In the past, claims have been made for the presence of a significant intermediate age population of stars in the bulge based on the finding of some LPVs with periods in excess of 500–600 days. It is necessary, however, to have a well defined sample of stars if one is going to draw conclusions based on the rare occurrence of one type of star. The M giants in Baade’s Window are just such a well defined sample (e.g. Frogel & Whitford 1987). Frogel & Whitelock (1998) presented a detailed comparison of LPVs in the bulge and in metal rich globular clusters. They demonstrated that with the exception of a few of the LPVs in Baade’s Window with the longest periods, the distributions in bolometric magnitude of the LPVs from the two populations overlap completely. Furthermore, because of the dependence of period and luminosity on \[Fe/H\] and the fact that there has been no reliable survey for LPVs in globulars with \[Fe/H\] $`>-0.25`$, while a significant fraction of the giants in Baade’s Window have \[Fe/H\] $`>0.0`$ (McWilliam & Rich 1994), the brightest Baade’s Window LPVs can be understood as a result of this higher \[Fe/H\] compared with globular clusters. 
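The bright-star test described above reduces to a ratio of counts in two dereddened magnitude bins. A schematic version with synthetic magnitudes is sketched below; the fainter comparison bin chosen here is an assumption for illustration (the actual interval is defined in Frogel et al. 1999), and the simulated fields are not real data:

```python
import numpy as np

def bright_star_ratio(k0, bright_cut=8.0, faint_bin=(9.0, 10.0)):
    """N(K0 < bright_cut) over N(K0 in faint_bin) for one field."""
    k0 = np.asarray(k0)
    n_bright = np.sum(k0 < bright_cut)
    n_faint = np.sum((k0 >= faint_bin[0]) & (k0 < faint_bin[1]))
    return n_bright / n_faint

rng = np.random.default_rng(0)
# Synthetic dereddened K magnitudes: an "old" field, and the same field with
# an extra sprinkling of luminous (K0 < 8) stars mimicking a younger component.
old_field = rng.uniform(7.5, 10.0, 500)
young_field = np.concatenate([old_field, rng.uniform(6.0, 8.0, 50)])

# The field with the young component shows an elevated ratio:
assert bright_star_ratio(young_field) > bright_star_ratio(old_field)
```

Normalizing by a fainter bin makes the statistic insensitive to the total number of stars sampled in each field, which is why a ratio rather than a raw bright-star count is used.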
Finally, observations with the Infrared Astronomical Satellite (IRAS) at 12$`\mu `$m were used to estimate the integrated flux at this wavelength from the Galactic bulge as a function of galactic latitude along the minor axis. Galactic disk emission was removed from the IRAS measurements with the aid of a simple model. These fluxes were then compared with predictions for the 12$`\mu `$m bulge surface brightness based on observations of complete samples of optically identified M giants in minor axis bulge fields (Frogel & Whitford 1987; Frogel et al. 1990). No evidence was found for any significant component of 12$`\mu `$m emission in the bulge other than that expected from the optically identified M star sample plus normal, lower luminosity stars. Since these stars are themselves fully accounted for by an old population, the conclusion from this study was, again, that most of the Galactic bulge has no detectable population of stars younger than those in Baade’s Window, i.e. younger than an age comparable to that of globular clusters.
# Gamma Ray Bursts vs. Afterglows <sup>1</sup><sup>1</sup>institutetext: Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, Mo. 63130 email: katz@wuphys.wustl.edu (Received December 15, 1998; accepted ) ## Abstract When does a GRB stop and its afterglow begin? A GRB may be defined as emission by internal shocks and its afterglow as emission by an external shock, but it is necessary to distinguish them observationally. With these definitions irregularly varying emission (at any frequency) must be the GRB, but smoothly varying intensity is usually afterglow. The GRB itself and its afterglow may overlap in time and in frequency, and distinguishing them will, in general, require detailed modeling. ###### Key Words.: gamma-ray bursts – afterglows – shocks offprints: J. I. Katz At first glance there appears to be little difficulty in distinguishing between GRB and their afterglows. GRB were discovered in 1972, and are observed in gamma-rays with detectors most sensitive to photons in the range 100–1000 keV. The observed durations of GRB range from milliseconds to $`\sim 1000`$ s. Afterglows were first observed in 1997 in the radio, visible and X-ray bands, and have durations of hours to months. These appear to be very different phenomena, although causally associated—afterglows follow GRB. This phenomenological distinction between GRB and their afterglows is likely to become insufficient. It is based on the observations of GRB and of afterglows by two distinct classes of instruments with distinct (and largely non-overlapping) sensitivities: 1. GRB detectors have broad angular acceptance and little sensitivity below 30 keV. Their angular acceptance implies high background levels. This makes them comparatively insensitive to steady sources of low flux, although very sensitive to transients of low fluence.
These properties are not failures of instrument design; rather, they represent the optimal adaptation of detector technology to the observation of unpredictable brief transients of high peak flux but low fluence (compared to a steady source integrated over a long time). 2. Afterglows are detected with instruments which are sensitive to steady sources of low flux and known position. Observing such sources is the usual problem in astronomy, and these are conventional astronomical instruments, whose sensitivity depends on long temporal integration and high angular resolution. Their resolution discriminates against background. They cannot find unpredictable transients because their high resolution limits their angular acceptance, but once steered to an afterglow by a GRB detector they are sensitive to sources of low flux but long duration (and therefore of comparatively large fluence). This instrumental distinction between GRB and their afterglows will not survive when the gap between the two classes of instruments is bridged. To some extent, this has already happened, as the X-ray detectors on BeppoSAX are able to detect X-ray emission during the brief gamma-ray duration of bright GRB within their field of view. The X-ray and soft gamma-ray bands overlap around 10–30 keV (the distinction between them is largely a matter of definition), and X-ray emission during the detected gamma-ray emission is generally considered to be just the low frequency extrapolation of the gamma-ray emission (although there is a lively controversy concerning the validity and interpretation of this extrapolation; Katz 1994a , Preece, et al. P96 (1996), Cohen, et al. C97 (1997), Preece, et al. P98 (1998)). The detection of simultaneous visible counterparts has long been the holy grail of GRB research. 
Instruments are rapidly improving in sensitivity and speed of response, and the prospect of rapid ($`<10`$ s) dissemination of accurate GRB coordinates (for example, from HETE-2) makes it likely that they will be observed soon. The phenomenological distinction between GRB and their afterglows will disappear when the temporal gap between simultaneous and delayed (by hours to days) observations is bridged. Once the simultaneous radiation of a GRB is detected, continuing to stare at that point on the sky will produce a continuous intensity history interrupted only by dawn, bad weather or Earth occultation. Will there then be any possibility of distinguishing between GRB and their afterglows, or any purpose in doing so? I wish to argue that the answer to both these questions is yes, provided the terms GRB and afterglow are redefined as indicating the physical processes which produce the radiation. There is no doubt that GRB involve relativistic outflow from a central source of energy, and that the observed radiation is produced in optically thin (except at radio frequencies) regions far from the central source. In order to tap the kinetic energy of relativistic matter it must exchange momentum with some other matter or radiation, either nearly at rest in a local observer’s frame or also moving relativistically. The former case is called an external shock, and the matter at rest is generally assumed to be either the surrounding interstellar medium or a non-relativistic outflow produced by the GRB’s progenitor. The latter case is called an internal shock, and the interaction is assumed to be between different portions of the relativistic outflow, produced at different times and with differing Lorentz factors.
Although the term “shock” is generally used, it is neither necessary nor demonstrated that a shock occurs; streams of low density matter may interpenetrate, exchanging momentum more gradually as a consequence of plasma instability (such an instability is also required for a shock, which must be collisionless because of the low densities). In at least some GRB most of the early gamma-ray emission is produced by internal shocks. These GRB consist of several sharp and clearly separated subpulses, often with intensity dropping to background levels between the subpulses. Fenimore, Madras and Nayakshin F96 (1996) and Sari and Piran SP97 (1997) showed that such temporal behavior cannot be produced by an external shock of plausible efficiency, no matter how clumpy the external medium, thus refuting the original argument (Rees and Mészáros RM92 (1992), Katz 1994b ) for external shock models, that interaction with a heterogeneous medium can explain how a single class of similar collapse or coalescence events could produce the observed “zoo” of diverse GRB pulse profiles. The radiation of these multipeaked GRB can only be explained by internal shock (Rees and Mészáros RM94 (1994)) models, in which the variability is attributed to modulation of a longer-lived source of energy (Katz K97 (1997)). Additional evidence for the internal shock origin of multi-peaked GRB was presented at this meeting by Fenimore and Ramirez-Ruiz (F99 (1999)) and by Quilligan, et al. (Q99 (1999)) who found that the properties of their subpulses do not evolve through a burst, suggesting that subpulses are independent events, in effect individual GRB. This is inconsistent with external shock models, in which the characteristic radii and time scales monotonically increase. Internal shocks are rather inefficient ($`\sim 20`$% for typical parameters) in dissipating (a precondition for radiating) the kinetic energy of relativistic outflows.
Because no GRB occurs in perfect vacuum, even if it were to occur in an intergalactic medium, there is always matter for external interaction. The efficiency of radiation by this external interaction may be low, particularly if the ambient density is low. It is natural to associate this external interaction (or shock) with the phenomenologically defined afterglow because its duration is predicted to be much longer than that of internal shocks, and because the smooth single-peaked behavior of afterglows observed (to date) is consistent with that predicted (Katz 1994a ; 1994b ) for external shocks. I suggest that it is useful to define GRB as the radiation produced by internal shocks and afterglows as the radiation produced by external shocks. Then, instead of arguing about nomenclature, we can engage in a more scientifically productive discussion about the physical origin of the various components of the observed radiation. How can they be distinguished? A spiky temporal profile is an unambiguous indicator of an internal shock, but that rule does not answer all questions. There are GRB with smooth single-peaked profiles, which can be explained by either internal or external shocks. The observed duration of some GRB is $`\sim 1`$ hour, and others may last a day or longer (Katz K97 (1997)), so that duration is also not sufficient to distinguish GRB from their afterglows. In some bursts, the internal and external shock emission may overlap both in spectrum and in time. Unfortunately, there appears to be no general rule for distinguishing external from internal shock emission. The predicted asymptotic spectra with and without radiative cooling (Cohen, et al. C97 (1997)) do not distinguish between internal and external shocks. Making this distinction will require detailed spectral and temporal modeling to associate different spectral components which are produced by the same physical process.
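The idea that spiky profiles flag internal shocks while smooth single-peaked profiles suggest an external shock can be illustrated with a toy statistic. This sketch is not from the paper: the rms-derivative measure and the synthetic light curves below are assumptions chosen only to make the qualitative point.

```python
import numpy as np

def spikiness(t, flux):
    """Toy variability statistic: rms of the time derivative, normalized by
    the mean slope scale f_max / T.  Large values indicate a spiky,
    multi-peaked profile; this measure is an assumption, not a standard."""
    f = np.asarray(flux, float)
    df = np.gradient(f, t)
    return np.sqrt(np.mean(df ** 2)) / (f.max() / (t[-1] - t[0]))

t = np.linspace(0.0, 100.0, 2001)

# Smooth, single-peaked "afterglow-like" profile.
smooth = np.exp(-0.5 * ((t - 50.0) / 15.0) ** 2)

# Spiky "internal-shock-like" profile: several narrow, separated subpulses.
centers = [15.0, 32.0, 48.0, 61.0, 80.0]
spiky = sum(np.exp(-0.5 * ((t - c) / 1.0) ** 2) for c in centers)

print(spikiness(t, smooth), spikiness(t, spiky))
```

The multi-peaked curve scores far higher, mirroring the text's point that irregular variability is the internal-shock signature while a smooth profile is ambiguous without further modeling.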
Identifying the physical origin of each will require hydrodynamic modeling of the source. This is likely to be a formidable task. ###### Acknowledgements. This work was supported in part by NSF AST 94-16904.
# Impurity corrections to the thermodynamics in spin chains using a transfer-matrix DMRG method ## I Introduction The study of quantum impurities remains a large part of condensed matter physics. The Kondo effect is perhaps the most famous example of an impurity effect, but more recently much effort has been devoted to impurities in low-dimensional magnetic systems in connection with high temperature superconductivity. For the particular case of impurities in quasi one-dimensional systems, much progress has been made with field theory descriptions, e.g. for the Kondo model, quantum wires, and spin chains. In those cases, the impurity behaves effectively as a boundary condition at low temperatures and the behavior can be described in terms of a renormalization crossover between fixed points as a function of temperature. Numerically this renormalization picture has been confirmed, but so far it has only been possible to examine a limited number of energy eigenvalues in the spectrum. While some efforts have been made to extract thermodynamic properties from the energy spectrum directly, such an approach is tedious and remains limited by finite system sizes. Monte Carlo simulations appear to be well suited for determining thermodynamic properties, but for the particular case of impurity properties it turns out to be difficult to accurately determine a correction which is of order $`1/N`$, where $`N`$ is the system size. We now apply the transfer matrix DMRG to impurity systems. This overcomes those problems by explicitly taking the thermodynamic limit $`N\rightarrow \infty `$, while still being able to probe impurity corrections and local properties at finite temperatures even for frustrated systems (which are not suitable for Monte Carlo simulations due to the minus sign problem). There are two separate impurity effects that we wish to address.
The first is the impurity correction $`F_{\mathrm{imp}}`$ to the total free energy of a one dimensional system $$F_{\mathrm{imp}}=\underset{N\rightarrow \infty }{lim}(F_{\mathrm{total}}-NF_{\mathrm{pure}}),$$ (1) where $`F_{\mathrm{pure}}`$ is the free energy per site for an infinite system without impurities. In other words, the impurity contribution is that part of the total free energy that does not scale with the system size $`N`$ $$F_{\mathrm{total}}=NF_{\mathrm{pure}}+F_{\mathrm{imp}}+𝒪(1/N).$$ (2) Therefore, the impurity free energy $`F_{\mathrm{imp}}`$ is directly proportional to the impurity density of the system, and it immediately determines the corresponding impurity specific heat and impurity susceptibility, i.e. quantities that can be measured by experiments as a function of temperature and impurity density. Despite the obvious importance of this impurity contribution we are not aware of any numerical studies that considered this quantity for any non-integrable impurity system. Traditional methods would require an extensive finite size scaling analysis to track down the $`1/N`$ correction to the total free energy per site, but our approach allows us to calculate $`F_{\mathrm{imp}}`$ directly in the thermodynamic limit. We would like to point out that in other studies the response of an impurity spin to a local magnetic field is often termed “impurity susceptibility”, but we prefer to reserve this expression for the impurity contribution $$\chi _{\mathrm{imp}}=-\frac{\partial ^2}{\partial B^2}F_{\mathrm{imp}},$$ (3) where $`B`$ is a global magnetic field on the total system. The second aspect of impurity effects concerns local properties of individual sites near the impurity location, e.g. correlation functions and the response to a local magnetic field. Local properties have been the central part of a number of works for many impurity models.
Our approach is now able to calculate these impurity effects directly in the thermodynamic limit and we get quick and accurate results to extremely low temperatures. It turns out that the local impurity effects can be determined much more accurately and to lower temperatures than the impurity contribution $`F_{\mathrm{imp}}`$, which remains limited by accuracy problems even with this method. The Density Matrix Renormalization Group (DMRG) has had a tremendous success in describing low energy static properties of many one-dimensional (1D) quantum systems such as spin chains and electron systems. More recently many useful extensions to the DMRG have been developed. Nishino showed how to successfully apply the density matrix idea to two-dimensional (2D) classical systems by determining the largest eigenvalue of a transfer matrix. Bursill, Xiang and Gehring have then shown that the same idea can be used to calculate thermodynamic properties of the quantum spin-1/2 $`XY`$-chain. The method has later been improved by Wang and Xiang as well as Shibata and been applied to the anisotropic spin-1/2 Heisenberg chain with great success. In this paper we apply a generalization of the method to impurity systems, which is presented in section II. In Section III we study two different impurity models in the spin-1/2 chain and are able to confirm predictions from field theory calculations. Section IV concludes this work with a discussion of the results and a critical analysis of accuracy and applicability to other systems. ## II The Method The method of the transfer matrix DMRG can in principle be applied to any one-dimensional system for which a transfer matrix can be defined. As a concrete example we will consider the antiferromagnetic spin-1/2 chain, since this model is well understood in terms of field theory treatments and has direct experimental relevance. 
The Heisenberg Hamiltonian can be written as $$H=\underset{i=1}{\overset{N}{\sum }}h_i,h_i=J_i𝐒_i\cdot 𝐒_{i+1}+B_iS_i^z,$$ (4) where $`J_i`$ is the exchange coupling between sites $`i`$ and $`i+1`$, and $`B_i`$ is an external magnetic field in the $`z`$-direction at site $`i`$. Periodic boundary conditions, $`𝐒_{N+1}\equiv 𝐒_1`$, are assumed. The partition function is defined by $$Z=\text{tr}e^{-\beta H}=\text{tr}e^{-\beta (H_o+H_e)},$$ (5) where $`\beta =\frac{1}{k_BT}`$ and where we in the last step have partitioned $`H`$ into odd and even site terms, $$H_o=\underset{i=1}{\overset{N/2}{\sum }}h_{2i-1},H_e=\underset{i=1}{\overset{N/2}{\sum }}h_{2i}.$$ (6) ### A The transfer matrix method The quantum transfer matrix for this system is defined as usual via the Trotter-Suzuki decomposition $$Z_M=\text{tr}\left(e^{-\frac{\beta H_o}{M}}e^{-\frac{\beta H_e}{M}}\right)^M,$$ (7) where $`M`$ is the Trotter number. This expression approximates the partition function up to an error of order $`(\beta /M)^2`$ and becomes exact in the limit $`M\rightarrow \infty `$. By inserting a complete set of states between each of the exponentials in Eq. (7) and rearranging the resulting matrix elements, the partition function can be written as a trace over a product of transfer matrices, $$Z_M=\text{tr}\underset{i=1}{\overset{N/2}{\prod }}T_M(2i-1),$$ (8) where $`T_M(2i-1)`$ is the $`2^{2M}\times 2^{2M}`$ dimensional quantum transfer matrix from lattice site $`2i-1`$ to site $`2i+1`$. Note that $`T_M`$ is in general not symmetric. However, if the two-site Hamiltonians $`h_{2i-1}`$ and $`h_{2i}`$ of Eq. (4) are invariant under the exchanges $`2i-1\leftrightarrow 2i`$ and $`2i\leftrightarrow 2i+1`$ respectively, as is the case unless we have applied a non-uniform magnetic field, $`T_M`$ is a product of two symmetric transfer matrices, one from site $`2i-1`$ to site $`2i`$ and the other from site $`2i`$ to $`2i+1`$.
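The elementary building block of this decomposition is the local imaginary-time weight $e^{-\beta h/M}$ for a single two-site term. The following sketch constructs it for the isotropic Heisenberg bond; the use of numpy and of an eigendecomposition (valid because $h$ is Hermitian) is an implementation choice for illustration, not the authors' code.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1).
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def local_boltzmann(J, beta, M):
    """exp(-beta*h/M) for the two-site Heisenberg term h = J S_i . S_{i+1},
    i.e. one Boltzmann factor of the Trotter-decomposed transfer matrix."""
    h = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
    w, v = np.linalg.eigh(h)                 # h is Hermitian (4x4)
    return (v * np.exp(-beta * w / M)) @ v.conj().T

W = local_boltzmann(J=1.0, beta=2.0, M=8)
print(np.allclose(W, W.conj().T))            # the local weight is Hermitian
```

The spectrum of $h$ is the singlet at $-3J/4$ and the triplet at $J/4$, so the trace of the weight is $3e^{-\beta J/4M}+e^{3\beta J/4M}$, a quick check on the construction.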
For a uniform system, the transfer matrix is independent of lattice site, $`T_M(2i-1)\equiv T_M`$, and the partition function is then given by $$Z_M=\text{tr}T_M^{N/2}.$$ (9) In the thermodynamic limit of the uniform system, $`Z_M`$ is given by $$\underset{N\rightarrow \infty }{lim}Z_M=\lambda ^{N/2},$$ (10) where $`\lambda `$ is the largest eigenvalue of $`T_M`$. The largest eigenvalue $`\lambda `$ can be found exactly only for small Trotter numbers $`M`$. As $`M`$ increases, the dimension of $`T_M`$ grows exponentially, and we have to use some approximation technique to find $`\lambda `$. Analogous to the case where the DMRG can be used to find a certain eigenstate of a Hamiltonian as the number of lattice sites increases, we can use the DMRG to find the largest eigenvalue of $`T_M`$ as the Trotter number $`M`$ increases. The strategy is thus to start with a system block, $`T_{M/2}^s`$, and an environment block, $`T_{M/2}^e`$, with a small $`M`$. The superblock transfer matrix, $`T_M`$, with Trotter number $`M`$, is constructed by “gluing” together the system block with the environment block. Periodic boundary conditions in the Trotter direction must be used. The reduced density matrix is constructed from the target state, i.e. the eigenstate of $`T_M`$ with largest eigenvalue. Since the transfer matrix is non-symmetric, the left and right eigenvectors will not be complex conjugates of each other. A reduced density matrix for the system as part of the superblock can be constructed by taking a partial trace of $`T_M^{N/2}`$ over the environment degrees of freedom, $$\rho =\frac{1}{Z_M}\text{tr}_{\mathrm{env}}T_M^{N/2}.$$ (11) In the thermodynamic limit only the state with the largest eigenvalue will contribute, $$\rho \stackrel{N\rightarrow \infty }{\longrightarrow }\text{tr}_{\mathrm{env}}|\psi ^R\rangle \langle \psi ^L|,$$ (12) where $`|\psi ^R\rangle `$ and $`\langle \psi ^L|`$ are the right and left eigenvectors of the superblock transfer matrix, $`T_M`$, corresponding to the largest eigenvalue, $`\lambda `$.
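The construction of $\rho$ from the left and right dominant eigenvectors can be sketched as follows. This is a toy illustration with a random positive transfer matrix; the 4-state system times 4-state environment split and the use of plain power iteration are assumptions made for the example, not the authors' DMRG code. It checks that the partial trace of $|\psi^R\rangle\langle\psi^L|$ has unit trace once $\langle\psi^L|\psi^R\rangle=1$.

```python
import numpy as np

def dominant_pair(T, iters=2000, tol=1e-12):
    """Power iteration for the dominant eigenvalue lam and the right/left
    eigenvectors of a (generally nonsymmetric) transfer matrix T."""
    n = T.shape[0]
    vr = np.ones(n) / np.sqrt(n)
    vl = np.ones(n) / np.sqrt(n)
    lam = 0.0
    for _ in range(iters):
        vr_new = T @ vr
        vl_new = T.T @ vl
        lam_new = np.linalg.norm(vr_new)
        vr, vl = vr_new / lam_new, vl_new / np.linalg.norm(vl_new)
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return lam, vl / (vl @ vr), vr          # normalized so <psi^L|psi^R> = 1

# Toy nonsymmetric transfer matrix with positive entries (16-dim superblock).
rng = np.random.default_rng(1)
T = rng.uniform(0.1, 1.0, (16, 16))
lam, psi_L, psi_R = dominant_pair(T)

# Partial trace over the environment index j of |psi^R><psi^L|:
# rho_{i',i} = sum_j psi^R_{i',j} psi^L_{i,j}.
R = psi_R.reshape(4, 4)
L = psi_L.reshape(4, 4)
rho = R @ L.T
print(np.trace(rho))                        # unit trace by the normalization
```

For a positive matrix the Perron eigenvalue is simple, so both iterations converge; in the actual algorithm one would of course use the superblock transfer matrix rather than a random one.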
The matrix elements are given by $`\rho _{i^{},i}=\sum _j\psi _{i^{},j}^R\psi _{i,j}^L`$, where $`i`$ and $`j`$ label the degrees of freedom of the system and the environment respectively, and the target states are given by $`\langle \psi ^L|=\sum _{i,j}\psi _{i,j}^L\langle i|\langle j|`$ and $`|\psi ^R\rangle =\sum _{i,j}\psi _{i,j}^R|i\rangle |j\rangle `$. The left and right eigenvectors of the density matrix with the largest eigenvalues are then used to define the projection operators onto the truncated basis. After the first iterations we will keep $`m`$ states for the system and the environment and the superblock will be $`4m^2`$ dimensional. More details on the transfer matrix DMRG algorithm for quantum systems have been presented in Refs. . Since $`\rho `$ is non-symmetric it is not obvious that the eigenvalues are real, but because $`\rho `$ represents a density matrix we expect it to be positive definite. However, because of numerical inaccuracies complex eigenvalues tend to appear in the course of iterations. This is usually connected to level crossings in the eigenvalue spectra of the density matrix as $`M`$ is increased. In addition, we also observe that just before the complex eigenvalues appear, multiplet symmetries in the eigenvalue spectrum of $`\rho `$ may split up (often due to “level repulsion” from lower lying states which were previously neglected). To overcome this problem it is important to keep all original symmetries (i.e. eigenvalue multiplets of $`\rho `$) even as $`M`$ increases. For that purpose it is often necessary to increase the number of states $`m`$ just before a multiplet tends to split up, which avoids the numerical error that leads to the symmetry breaking. In case of a level crossing of two multiplets that do not split up, complex eigenvalues may still appear and it is then possible to numerically transform the complex eigenstate pair into a real pair spanning the same space.
Since the transformed real pair is still orthonormal to every other eigenvector, this transformation does not cause any troubles for later iterations. With this method the eigenvalues stay complex until the two levels have crossed and moved enough apart, at which point the eigenvalues become real again. The renormalization procedure is therefore not as straightforward as in ordinary DMRG runs, since it is essential to track all eigenvalues of $`\rho `$ and to dynamically adjust the number of states $`m`$, thereby sometimes repeating previous iteration steps. ### B Impurities and local properties The renormalization scheme also allows the calculation of thermal expectation values of local operators. The magnetization of the spin at site $`i`$ is determined by $$\langle S_i^z\rangle =\frac{1}{Z}\text{tr}S_i^ze^{-\beta H}.$$ (13) The operator $`S_i^z`$ thus only has to be incorporated in the corresponding transfer matrix at site $`i`$. For the pure system, we then arrive at the formula $$\langle S_i^z\rangle =\frac{\langle \psi ^L|T_M^{sz}(i)|\psi ^R\rangle }{\lambda },$$ (14) where $`T_M^{sz}(i)`$ is defined similarly to $`T_M(i)`$ but with the operator $`S_i^z`$ included in addition to the Boltzmann weights. To measure the local bond energy, $`h(i)=𝐒_i\cdot 𝐒_{i+1}`$, a similar construction can be used. Let us now assume that the system has a single impurity. The systems we will study are the periodic spin-1/2 chain with one weakened link and the periodic chain with an external spin-1/2 coupled to one spin in the chain, $`H_1`$ $`=`$ $`H_0-\delta J𝐒_N\cdot 𝐒_1`$ (15) $`H_2`$ $`=`$ $`H_0+J^{\prime }𝐒_1\cdot 𝐒_f,`$ (16) where $$H_0=\underset{i=1}{\overset{N-1}{\sum }}J𝐒_i\cdot 𝐒_{i+1}+J𝐒_N\cdot 𝐒_1$$ (17) is the periodic chain and $`𝐒_f`$ is the spin operator of an external spin-1/2. The models are depicted in Fig. 1. For systems with such a local impurity, which is contained within two neighboring links, only one of the $`T_M(i)`$ in Eq.
(8) will differ from a common $`T_M`$ $$Z_M=\text{tr}\left(T_M^{N/2-1}T_{\mathrm{imp}}\right),$$ (18) where $`T_{\mathrm{imp}}`$ is the transfer matrix of the two links containing the impurity and $`T_M`$ is the transfer matrix describing the bulk. In the thermodynamic limit the partition function will still be dominated by the largest eigenvalue of the “pure” transfer matrix. From Eq. (18) we have $$\underset{N\rightarrow \infty }{lim}Z_M=\lambda ^{N/2-1}\langle \psi ^L|T_{\mathrm{imp}}|\psi ^R\rangle ,$$ (19) where $`\lambda `$, $`\langle \psi ^L|`$ and $`|\psi ^R\rangle `$ all correspond to the pure system. The generalization of Eqs. (18) and (19) to impurity configurations ranging over more than two links is straightforward. In this case more than one impurity transfer matrix has to be introduced and this could be used to study e.g. multiple impurities and impurity-impurity interactions. Let us define $`\lambda _{\mathrm{imp}}\equiv \langle \psi ^L|T_{\mathrm{imp}}|\psi ^R\rangle `$. The total free energy of the system is then given by $`F`$ $`=`$ $`-{\displaystyle \frac{1}{\beta }}\mathrm{ln}Z`$ (20) $`=`$ $`-{\displaystyle \frac{1}{\beta }}\mathrm{ln}\left(\lambda ^{N/2-1}\lambda _{\mathrm{imp}}\right)`$ (21) $`=`$ $`-{\displaystyle \frac{N}{2\beta }}\mathrm{ln}\lambda -{\displaystyle \frac{1}{\beta }}\mathrm{ln}{\displaystyle \frac{\lambda _{\mathrm{imp}}}{\lambda }}.`$ (22) By comparing with Eq. (2) we can retrieve the pure and impurity parts $$F_{\mathrm{pure}}=-\frac{1}{2\beta }\mathrm{ln}\lambda ,F_{\mathrm{imp}}=-\frac{1}{\beta }\mathrm{ln}\frac{\lambda _{\mathrm{imp}}}{\lambda }.$$ (23) The impurity susceptibility can be found from the change of $`F_{\mathrm{imp}}`$ in a small magnetic field from Eq.
(3) $$\chi _{\mathrm{imp}}=-\frac{\partial ^2}{\partial B^2}F_{\mathrm{imp}}.$$ (24) Local properties such as the magnetization of the impurity spin can be determined by $$\langle S_{\mathrm{imp}}^z\rangle =\frac{\langle \psi ^L|T_{\mathrm{imp}}^{sz}|\psi ^R\rangle }{\lambda _{\mathrm{imp}}}.$$ (25) The magnetization of spins close to the impurity is readily obtained by $$\langle S^z\rangle =\frac{\langle \psi ^L|T_M^{sz}(T_M)^xT_{\mathrm{imp}}|\psi ^R\rangle }{\lambda ^{x+1}\lambda _{\mathrm{imp}}},$$ (26) where $`2x`$ is the number of sites between the impurity and the spin of interest. Note that since a transfer matrix involves a total of three lattice sites, $`T_M^{sz}`$ can be constructed to measure the spin at any of these sites (or the mean value). The actual site of the measurement in Eq. (26) is thus determined both by $`x`$ and by how $`T_M^{sz}`$ is set up. The expectation value in Eq. (26) is most easily calculated by first computing the vectors $`\langle \psi ^L|T_M^{sz}`$ and $`T_{\mathrm{imp}}|\psi ^R\rangle `$, then acting with $`T_M`$ on one of these states, and finally calculating the inner product of the resulting states. Eq. (26) can be generalized to measure any equal time correlation function with or without an impurity, e.g. by replacing $`T_{\mathrm{imp}}`$ by $`T_M^{sz}`$. The reduced density matrix for the impurity system can be constructed by taking the thermodynamic limit of the impurity version of Eq. (11), $$\rho =\frac{1}{Z_M}\text{tr}_{\mathrm{env}}\left(T_M^{N/2-1}T_{\mathrm{imp}}\right),$$ (27) with $`Z_M`$ as in Eq. (18). In our calculations we have used the same density matrix as for the pure case, i.e. Eq. (12), and we have found it to give good results. This form can most easily be motivated by writing Eq. (27) in the form $`\rho =\text{tr}_{\mathrm{env}}\left(T_M^{N/4-1}T_{\mathrm{imp}}T_M^{N/4}\right)/Z_M`$.
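The structure of Eqs. (18)–(23) can be sanity-checked on the simplest system with an exact 2 × 2 transfer matrix: a classical Ising chain with one weakened bond. This stands in for the quantum transfer matrix only as an analogy; the couplings and inverse temperature below are arbitrary toy values, and for a symmetric transfer matrix the left and right eigenvectors coincide.

```python
import numpy as np

def ising_bond_T(K):
    """Symmetric 2x2 transfer matrix for one bond of a classical Ising
    chain with dimensionless coupling K = beta*J."""
    return np.array([[np.exp(K), np.exp(-K)],
                     [np.exp(-K), np.exp(K)]])

beta, J, dJ = 1.0, 1.0, 0.6           # one bond weakened to J - dJ (toy values)
T = ising_bond_T(beta * J)
T_imp = ising_bond_T(beta * (J - dJ))

# Dominant eigenpair of the pure transfer matrix.
w, v = np.linalg.eigh(T)
lam = w[-1]
psi = v[:, -1]                        # psi^L = psi^R here, since T is symmetric

lam_imp = psi @ T_imp @ psi           # analog of <psi^L| T_imp |psi^R>
F_imp = -np.log(lam_imp / lam) / beta # impurity free energy in the spirit of Eq. (23)

# Cross-check against the finite-N definition F_total - N*F_pure:
N = 200
F_total = -np.log(np.trace(np.linalg.matrix_power(T, N - 1) @ T_imp)) / beta
F_pure = -np.log(lam) / beta
print(F_imp, F_total - N * F_pure)    # agree up to exponentially small terms
```

The two numbers match because the subdominant eigenvalue contributes only at relative order $(\lambda_2/\lambda)^{N-1}$, which is the same mechanism that makes the thermodynamic-limit formula exact in the text.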
From a computational point of view, this method is also very convenient; by storing all target states and projection operators from the DMRG run for the pure system, all local impurities can be studied by simply using the same projection operators and target states. This makes subsequent DMRG runs for different impurity parameters very fast. There are also other choices of density matrices that can be made. The thermodynamic limit of Eq. (27) can also be interpreted as $`\rho \rightarrow |\psi ^R\rangle \langle \psi ^L|T_{\mathrm{imp}}/\lambda _{\mathrm{imp}}`$, in which case the impurity transfer matrix would be taken into account in the density matrix. This would in some sense be analogous to including an operator different from the Hamiltonian in the density matrix of the ordinary “zero-temperature” DMRG, which is usually not necessary to measure e.g. correlation functions in the ordinary DMRG. This approach would destroy the computational advantage of using the pure density matrix, because the pure projection operators and target states could not be used; instead, complete DMRG runs would have to be done for each impurity configuration and coupling. ## III Results ### A Field theory predictions To make a meaningful analysis of the numerical results, we first need to understand the spin-1/2 chain in the framework of the quantum field theory treatment. This turns out to give a good description of the impurity behavior in terms of a renormalization flow between fixed points, and we will be able to set up concrete expectations for the impurity susceptibility in Eq. (3) as well as local properties. The effective low energy spectrum of the spin-1/2 chain is well described by a free boson Hamiltonian density $$\mathcal{H}=\frac{v}{2}\left[(\mathrm{\Pi }_\varphi )^2+(\partial _x\varphi )^2\right],$$ (28) plus a marginally irrelevant operator $`\mathrm{cos}\sqrt{8\pi }\varphi `$ and other higher order operators which we have neglected. Here $`\mathrm{\Pi }_\varphi `$ is the momentum variable conjugate to $`\varphi `$.
In the long wave-length limit the spin operators can be expressed in terms of the bosonic fields using the notation of Ref. $`S_j^z`$ $`\sim `$ $`{\displaystyle \frac{1}{\sqrt{2\pi }}}{\displaystyle \frac{\partial \varphi }{\partial x}}+(-1)^j\text{const.}\mathrm{cos}\sqrt{2\pi }\varphi `$ (29) $`S_j^{-}`$ $`\sim `$ $`e^{i\sqrt{2\pi }\stackrel{~}{\varphi }}[\text{const.}\mathrm{cos}\sqrt{2\pi }\varphi +(-1)^j\text{const.}],`$ (30) At this point we can introduce the impurities in Eqs. (15) and (16) in a straightforward way as perturbations. The field theoretical expressions for these perturbations can then be analyzed in terms of their leading scaling dimensions. Local perturbations with a scaling dimension of $`d>1`$ are considered irrelevant, while perturbations with a small scaling dimension $`d<1`$ are relevant and drive the system to a different fixed point. Hence, we can predict a systematic renormalization flow towards or away from the corresponding fixed point, respectively. Such an analysis has been made in Ref. and the renormalization flows have been confirmed by determining the finite size corrections to the low energy spectrum. In particular, a small weakening of one link in the chain $$H_1=H_0-\delta J𝐒_N\cdot 𝐒_1$$ (31) has been found to be a relevant perturbation described by the operator $`\mathrm{sin}\sqrt{2\pi }\varphi `$ with scaling dimension $`d=1/2`$, so that the periodic chain $`(\delta J=0)`$ is an unstable fixed point. The open chain $`(\delta J=J)`$ on the other hand is a stable fixed point where the perturbation is described by the leading irrelevant operator $`\partial _x\varphi (N)\partial _x\varphi (0)`$ with scaling dimension of $`d=2`$. Hence we expect a renormalization flow between the two fixed points as the temperature is lowered, and the temperature dependence of the impurity susceptibility as well as local properties will be described by a crossover function.
Below a certain crossover temperature $`T_K`$ this crossover function describes the behavior of the stable fixed point (the open chain), while above $`T_K`$ the system may exhibit a completely different behavior. The crossover temperature $`T_K`$ itself is determined by the initial coupling strength $`\delta J`$ $`\underset{\delta J\rightarrow 0}{lim}T_K`$ $`\rightarrow `$ $`0`$ (32) $`\underset{\delta J\rightarrow J}{lim}T_K`$ $`\rightarrow `$ $`\infty .`$ (33) In other words, close to the unstable fixed point the crossover temperature is very small, indicating that we have to go to extremely low temperatures before we can expect to observe the behavior of the stable fixed point. A similar scenario holds for the impurity model with one external spin $`𝐒_f`$, $$H_2=H_0+J^{\prime }𝐒_1\cdot 𝐒_f.$$ (34) In this case, the periodic chain $`(J^{\prime }=0)`$ is also the unstable fixed point, while the open chain with a decoupled singlet ($`J^{\prime }\rightarrow \infty `$) is the stable fixed point. As mentioned above, the renormalization flow has been confirmed for the energy corrections of individual eigenstates, but we now seek to extend this analysis to thermodynamic properties, which will allow us to determine the crossover behavior and $`T_K`$ directly. ### B One weak link Our first task is to establish the renormalization behavior from a periodic chain fixed point to the open chain fixed point as a function of temperature. To examine the effective boundary condition on the spin operators, it is instructive to look at the correlation functions at spin sites close to the impurity. For periodic boundary conditions, the leading operator for the spin operator $`S_1^z`$ is given by $`\mathrm{cos}\sqrt{2\pi }\varphi `$ with scaling dimension $`d=1/2`$ according to Eq. (30). On the other hand, open boundary conditions restrict the allowed operators, and the leading operator for $`S_1^z`$ is found to be $`\partial _x\varphi (0)`$ with scaling dimension $`d=1`$.
Hence, the autocorrelation function at the impurity behaves differently, depending on the effective boundary condition $$\langle S_1^z(\tau )S_1^z(0)\rangle \sim \{\begin{array}{ccc}1/\tau \hfill & & \mathrm{periodic}\mathrm{b}.\mathrm{c}.\hfill \\ 1/\tau ^2\hfill & & \mathrm{open}\mathrm{b}.\mathrm{c}.\hfill \end{array}$$ (35) Therefore, a useful quantity to consider is the response $`\chi _1`$ to a local magnetic field given by the Kubo formula $`\chi _1(T)`$ $`=`$ $`{\displaystyle \int _0^{1/T}}\langle S_1^z(\tau )S_1^z(0)\rangle 𝑑\tau `$ (36) $`\stackrel{T\to 0}{\to }`$ $`\{\begin{array}{ccc}\mathrm{log}(1/T)\hfill & & \mathrm{periodic}\mathrm{b}.\mathrm{c}.\hfill \\ \mathrm{const}.+𝒪(T)\hfill & & \mathrm{open}\mathrm{b}.\mathrm{c}.\hfill \end{array}`$ (39) In Fig. 2 we have presented the results of $`\chi _1`$ for different impurity strengths $`\delta J`$ on a logarithmic temperature scale. For the periodic chain ($`\delta J=0`$) we clearly observe the logarithmic scaling, but for any finite $`\delta J`$ a turnover to a constant behavior is observed as $`T\to 0`$, i.e. the behavior of the open chain. The turnover temperature $`T_K`$ occurs at larger and larger values as we approach the stable fixed point \[see Eq. (33)\]. An interesting aspect is that the curves actually cross: At very high temperatures the high temperature expansion always dictates a larger response for a weakened link, while at very low temperatures this relation is reversed by quantum mechanical effects and the renormalization flow. We now turn to the true impurity susceptibility of Eq. (3), which is the experimentally more relevant quantity. Perhaps the simplest non-trivial case to consider is the open chain $`\delta J=J`$. At low temperatures the impurity susceptibility can be calculated from the leading irrelevant local operator which is allowed in the Hamiltonian.
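The two limiting behaviors of the local response follow from integrating the assumed decay of the autocorrelator; a minimal numerical sketch (our own illustration, with a hypothetical short-time cutoff `tau0` standing in for the non-universal part of the integral):

```python
import math

def chi_local(T, power, tau0=1.0):
    """chi_1(T) = integral of C(tau) from tau0 to 1/T, for C(tau) = 1/tau^power."""
    beta = 1.0 / T
    if power == 1:                       # periodic b.c.: C ~ 1/tau
        return math.log(beta / tau0)     # grows like log(1/T)
    # open b.c.: C ~ 1/tau^2 -> the integral saturates as T -> 0
    return (tau0**(1 - power) - beta**(1 - power)) / (power - 1)

chi_per_a = chi_local(1e-2, 1)
chi_per_b = chi_local(1e-3, 1)   # keeps growing as T is lowered
chi_open_a = chi_local(1e-3, 2)
chi_open_b = chi_local(1e-4, 2)  # approaches a constant
```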
This operator turns out to be $`(_x\varphi (0))^2`$ which gives a constant impurity susceptibility with a logarithmic correction similar to the pure susceptibility, as follows from a dimensional analysis. (In fact this operator can be absorbed in the free Hamiltonian by defining a velocity $`v`$ that depends on the system size. An explicit calculation of integrals over the correlation functions also comes to the same conclusion.) $$\chi _{\mathrm{imp}}^{\mathrm{open}}(T)\stackrel{T\to 0}{\to }\mathrm{const}.+𝒪\left(1/\mathrm{log}(T/T_0)\right)$$ (40) While it is possible to calculate the impurity susceptibility numerically according to Eq. (23), the use of a second derivative in Eq. (24) causes large problems with the accuracy at lower temperatures since it involves taking the differences of large numbers. Luckily, the excess local susceptibility $`\chi _{\mathrm{local}}`$ of the first site under a global magnetic field turns out to give a good estimate of the true impurity susceptibility $$\chi _{\mathrm{local}}=\frac{d\langle S_1^z\rangle }{dB}-\chi _{\mathrm{pure}}\approx \chi _{\mathrm{imp}}+\mathrm{const}.$$ (41) where $`B`$ is a global magnetic field and the constant is due to the alternating part. The results for this quantity are shown in Fig. 3 and are consistent with Eq. (40). The second derivative in Eq. (24) has a similar behavior in the intermediate temperature range, but is not accurate enough to extrapolate to the $`T\to 0`$ limit as explained above. Now we are in a position to consider the impurity susceptibility of one weak link in the chain. By reducing $`\delta J`$ it is possible to tune the system all the way from the open chain fixed point to the periodic chain. The operator $`(_x\varphi (0))^2`$, which was responsible for the open chain impurity susceptibility, is thereby reduced continuously. However, it is an entirely different operator, corresponding to $`𝐒_N𝐒_1`$, which is responsible for the renormalization.
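The accuracy problem with the second derivative in Eq. (24) is ordinary catastrophic cancellation: data errors in the free energy are amplified by $`1/h^2`$. A toy illustration (cosh stands in for a generic smooth free energy; the $`10^{-9}`$ perturbations mimic the finite accuracy of the numerically computed values):

```python
import math

def second_difference(fm, f0, fp, h):
    """Central second difference, as used to extract chi_imp from F(B)."""
    return (fp - 2.0 * f0 + fm) / h**2

x, h = 0.3, 1e-4
fm, f0, fp = math.cosh(x - h), math.cosh(x), math.cosh(x + h)

# exact function values: the second difference is accurate
err_clean = abs(second_difference(fm, f0, fp, h) - math.cosh(x))

# perturb each value by ~1e-9: the data error is amplified by
# 1/h^2 = 1e8, turning a 1e-9 error into an O(1) error in the result
err_noisy = abs(second_difference(fm + 3e-9, f0 - 4e-9, fp + 1e-9, h)
                - math.cosh(x))
```

This amplification is why the excess local susceptibility of Eq. (41), which needs only a first derivative, is the more reliable estimator at low temperatures.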
This operator changes scaling dimension as we go from periodic boundary conditions towards the open chain $$𝐒_N𝐒_1\sim \{\begin{array}{ccc}\mathrm{sin}\sqrt{2\pi }\varphi \hfill & d=1/2\hfill & \mathrm{periodic}\mathrm{b}.\mathrm{c}.\hfill \\ _x\varphi (N)_x\varphi (0)\hfill & d=2\hfill & \mathrm{open}\mathrm{b}.\mathrm{c}.\hfill \end{array}$$ (42) Since we wish to study the effect of this renormalization, we choose to subtract the open contribution systematically from $`\chi _{\mathrm{imp}}`$, so we obtain exactly the part which will exhibit the crossover of the renormalization $$\chi _{\mathrm{imp}}(\delta J)-\delta J\chi _{\mathrm{imp}}^{\mathrm{open}}=f(T/T_K)/T_K.$$ (43) We see that this difference is zero at either fixed point. After subtracting $`\delta J\chi _{\mathrm{imp}}^{\mathrm{open}}`$ the impurity susceptibility comes only from the operator in Eq. (42), which allows us to postulate the scaling form in Eq. (43). Below $`T_K`$ the difference in Eq. (43) will asymptotically go to a constant for a very weak link $`\delta J\approx J`$, which comes from the $`d=2`$ operator in Eq. (42). Indeed we observe in Fig. 4 that this difference is negative and proportional to $`J-\delta J\propto 1/T_K`$ and largely temperature independent as $`T\to 0`$. On the other hand, as $`\delta J`$ becomes small enough, the behavior is quite different: The expression in Eq. (43) even decreases with $`\delta J`$ and the turn-over to constant behavior happens at much lower temperatures (outside the plot range).
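The scaling form of Eq. (43) means that curves for different couplings collapse onto a single function once the temperature is measured in units of $`T_K`$ and the susceptibility is rescaled by $`T_K`$; schematically (with a toy stand-in for the universal function, not the actual $`f`$ of the paper):

```python
def collapse(Ts, chis, TK):
    """Rescale a curve chi(T) assumed to obey chi = f(T/TK)/TK."""
    return [(T / TK, TK * chi) for T, chi in zip(Ts, chis)]

f = lambda x: 1.0 / (1.0 + x)              # toy universal function
Ts = [0.1 * k for k in range(1, 20)]
curve_a = [f(T / 0.5) / 0.5 for T in Ts]   # "data" with TK = 0.5
curve_b = [f(T / 2.0) / 2.0 for T in Ts]   # "data" with TK = 2.0

pts = collapse(Ts, curve_a, 0.5) + collapse(Ts, curve_b, 2.0)
# after rescaling, both data sets lie on the same universal curve f(x)
```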
Interestingly, this results in a highly nontrivial behavior as a function of $`T`$ and $`T_K`$ close to the unstable fixed point $`\underset{T\to 0}{lim}\underset{T_K\to 0}{lim}f(T/T_K)/T_K`$ $`\sim `$ $`T_K/T^2\propto \delta J^2\to 0`$ (44) $`\underset{T_K\to 0}{lim}\underset{T\to 0}{lim}f(T/T_K)/T_K`$ $`\sim `$ $`1/T_K\propto 1/\delta J^2\to \mathrm{\infty }.`$ (45) This means that at $`T=0`$ a minute perturbation $`\delta J`$ results in an extremely large negative $`\chi _{\mathrm{imp}}\sim -1/\delta J^2`$ (although this behavior occurs in an “unphysical” limit). The reason that the two limits do not commute is of course because one describes the behavior of the stable fixed point $`T\ll T_K`$, while the other one describes the behavior at the unstable fixed point. While our numerical results cannot show the entire crossover of Eq. (43), the increase below $`T_K`$ for small $`J-\delta J`$ is clearly observed as well as the decrease and change of curvature above $`T_K`$ for small $`\delta J`$. Therefore, our data in Fig. 4 supports the renormalization scenario and the nontrivial behavior of Eq. (45). ### C One external spin The model of an external spin antiferromagnetically coupled to the chain in Eq. (16) is perhaps a little more exotic, but is still of great interest in a number of studies. In Ref. it was first shown that the stable fixed point corresponds to open boundary conditions with a decoupled singlet. This was confirmed numerically, but more recently Dr. Liu postulated a completely different behavior using some non-local transformations on fermion fields which mysteriously were rearranged to form a solvable model. While we cannot trust or understand many of his calculations, the predictions are in strong contrast to any previous expectations and should be tested explicitly. In particular, he predicted the response of the impurity spin to a local magnetic field $$\chi _f(T)=\int _0^{1/T}\langle S_f^z(\tau )S_f^z(0)\rangle 𝑑\tau $$ (46) to be proportional to $`T^{5/2}`$ at the Heisenberg point.
We predict, however, that this response is described by the autocorrelation function of the leading operator for $`𝐒_f`$. By a symmetry analysis we find that for open boundary conditions this operator is given by $`_x\varphi (0)`$ with scaling dimension $`d=1`$. The local response is therefore a constant as $`T\to 0`$ with a linear term $$\chi _f(T)\stackrel{T\to 0}{\to }\mathrm{const}.+𝒪(T).$$ (47) This also agrees with the findings in Ref. which had similar reservations about Ref. . We now explicitly calculate $`\chi _f`$ for several coupling strengths $`J^{}`$ as shown in Fig. 5. Our data fits well to the predicted form and we can certainly rule out any $`T^{5/2}`$ behavior. Moreover, we find a scaling behavior which holds for all coupling strengths $$\chi _f(T)=g(T/T_K)/T_K.$$ (48) A similar scaling relation was observed before. Our results are consistent with previous numerical studies, but distinctly larger in the low-temperature region. We attribute this to the finite size method used in Ref. , which becomes unreliable when the temperature falls below the finite size gap of the system. At large $`J^{}`$ our results can be compared to that of two coupled spins forming a singlet. Note that the response to a local magnetic field on one spin in a singlet is finite as $`T\to 0`$ and proportional to $`1/J^{}`$ (and does not show activated behavior, as a simple calculation shows). Therefore, our findings are completely consistent with the expectation that the impurity spin is locked into a singlet at the stable fixed point. ## IV Discussion To accurately calculate impurity properties we have to determine not only the largest eigenvalue of the transfer matrix, $`T_M`$, but also the corresponding eigenvectors to high accuracy. Estimating the error of the impurity properties is difficult. Errors come both from the finite Trotter number, $`M`$, and from the finite number of states, $`m`$, in the DMRG.
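The "simple calculation" for a local field on one spin of a singlet can be written out explicitly. For two spin-1/2's with $`H=J^{}𝐒_1𝐒_f-BS_f^z`$, the $`S_{\mathrm{tot}}^z=0`$ block diagonalizes analytically, and the curvature of the ground-state energy in $`B`$ gives $`\chi _f=1/(2J^{})`$: finite at $`T=0`$ and proportional to $`1/J^{}`$. A self-contained toy check (independent of the paper's DMRG):

```python
import math

def ground_energy(Jp, B):
    """Ground-state energy of H = Jp * S1.Sf - B * Sf^z for two spin-1/2's.
    In the Sz_total = 0 sector the 2x2 block has eigenvalues
    -Jp/4 -+ sqrt(Jp^2 + B^2)/2; the ground state takes the lower sign."""
    return -Jp / 4.0 - 0.5 * math.sqrt(Jp**2 + B**2)

def chi_f(Jp, h=1e-4):
    """Local susceptibility chi_f = -d^2 E0 / dB^2 at B = 0 (finite diff)."""
    return -(ground_energy(Jp, h) - 2.0 * ground_energy(Jp, 0.0)
             + ground_energy(Jp, -h)) / h**2

# chi_f(Jp) = 1/(2*Jp): finite, non-activated, and proportional to 1/Jp
```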
The scaling of pure properties with $`M`$ and $`m`$ usually turns out to be simpler than the scaling of impurity properties, which show a less clear form of the errors. We have used the value $`\beta /M=0.05`$ for all calculations presented in this article. With this value, the error due to the finite $`M`$ should be small, which is also confirmed by test runs. We have tested the error due to the finite Trotter number $`M`$ by doing separate DMRG runs for different values of $`\beta /M`$. For the pure case at moderate temperatures we find that the eigenvalue $`\lambda `$ scales as $`1/M^2`$, as is expected, while at the lowest temperatures the error due to the finite $`m`$ is larger, making it difficult to see the expected $`1/M^2`$ scaling. For the impure case, the convergence of $`\lambda _{\text{imp}}`$ with $`M`$ is more complicated, but the overall scaling is still roughly $`1/M^2`$. We have also tested the convergence with the number of basis states $`m`$. For both the pure and the impure cases we find a rapid convergence with increasing $`m`$. We have used a maximum of $`m=65`$ for the calculations on the weakened link impurity and $`m=38`$ in the external spin case. However, we found no noticeable difference between $`m=38`$ and $`m=65`$ down to $`T=0.02`$ in a test run for the external spin. The truncation error, i.e. $`1-\sum _{i=1}^mw_i`$, where $`w_i`$ are the $`m`$ largest eigenvalues of the density matrix, is less than about $`10^{-5}`$ for $`m=65`$ at the lowest temperatures. Note that the truncation error is determined during the pure sweep in which also the projection operators and target states are determined. It could thus be used as an estimate of an upper limit of the error of the pure properties, but it is difficult to say how good an estimate it yields for the impurity properties. To test our results we have also done Quantum Monte Carlo (QMC) simulations for a few temperatures and couplings.
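The truncation error quoted above is simply the discarded density-matrix weight; a minimal sketch of the bookkeeping (with a toy exponentially decaying spectrum standing in for the actual DMRG density matrix):

```python
def truncation_error(weights, m):
    """DMRG truncation error: 1 minus the sum of the m largest
    density-matrix eigenvalues w_i (which sum to 1)."""
    w = sorted(weights, reverse=True)
    return 1.0 - sum(w[:m])

# toy spectrum with exponentially decaying weights
raw = [2.0**(-k) for k in range(1, 31)]
norm = sum(raw)
weights = [w / norm for w in raw]        # normalize: Tr(rho) = 1

eps_10 = truncation_error(weights, 10)
eps_20 = truncation_error(weights, 20)   # keeping more states: smaller error
```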
The DMRG data was well within the error bars of the QMC results. Local properties converge much faster with $`m`$ than the impurity susceptibility. This fact might be explained by the difficulty of numerically taking a second derivative, since we have to subtract two large numbers to find $`\delta F`$ in Eq. (24). This is however not the case for local properties, since we know that the local magnetization is zero in the absence of a magnetic field. Another source of inaccuracy in $`\lambda _{\text{imp}}`$ comes from the error of the target states. Let us assume that the target state is determined up to some error $`ϵ`$; $`|\psi _{\text{exact}}\rangle =|\psi _{\text{DMRG}}\rangle +|ϵ\rangle `$. Since the target state is, to numerical accuracy, an eigenstate of $`T_M`$, the eigenvalue will be determined to order $`ϵ^2`$. Expectation values of other operators, for example the impurity transfer matrix, will however in general only be accurate to order $`ϵ`$. The local properties do not seem to suffer too much from this effect; the reason might be that there is some cancellation of errors in the quotient $`\langle \psi ^L|T_{\text{imp}}^{sz}|\psi ^R\rangle /\lambda _{\mathrm{imp}}`$. While the accuracy of the second derivative is good enough for the pure susceptibility down to about $`T=0.01`$, we cannot trust the impurity susceptibility below temperatures roughly an order of magnitude larger. Local impurity properties on the other hand seem to be well represented down to about $`T=0.02`$. In summary, we have shown that the transfer matrix DMRG is a useful method for calculating finite temperature impurity properties of a spin chain in the thermodynamic limit. We have considered two impurity models: one weakened link and one external spin, but the method can be applied to other impurity configurations and electron systems. We find that the local response of the spin next to a weakened link always crosses over to a constant below some $`T_K`$, i.e. to the behavior of the open chain fixed point.
According to our calculations, the impurity susceptibility shows an exotic cross-over behavior with non-commuting limits. For the external spin impurity, we have found that the data for the local response shows the expected cross-over to open chain behavior as the temperature is lowered. The response has the scaling form in Eq. (48) and we can explicitly show the data collapse and determine $`T_K`$ (Fig. 5). ## V Acknowledgment We would like to thank H. Johannesson, A. Klümper, T. Nishino, I. Peschel, S. Östlund, N. Shibata and X. Wang for valuable contributions. This research was supported in part by the Swedish Natural Science Research Council (NFR).
# Untitled Document EFI-98-58, ILL-(TH)-98-06 hep-th/9812027 String Theory in Magnetic Monopole Backgrounds David Kutasov<sup>1</sup>, Finn Larsen<sup>1</sup>, and Robert G. Leigh<sup>2</sup> <sup>1</sup>Enrico Fermi Institute, University of Chicago, 5640 S. Ellis Av., Chicago, IL 60637 <sup>2</sup>Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801 We discuss string propagation in the near-horizon geometry generated by Neveu-Schwarz fivebranes, Kaluza-Klein monopoles and fundamental strings. When the fivebranes and KK monopoles are wrapped around a compact four-manifold $``$, the geometry is $`AdS_3\times S^3/\text{ZZ}_N\times `$ and the spacetime dynamics is expected to correspond to a local two dimensional conformal field theory. We determine the moduli space of spacetime CFT’s, study the spectrum of the theory and compare the chiral primary operators obtained in string theory to supergravity expectations. 12/98 1. Introduction It is currently believed that many (perhaps all) vacua of string theory have the property that their spacetime dynamics can be alternatively described by a theory without gravity \[1,,2,,3,,4\]. This theory is in general non-local, but in certain special cases it is expected to become a local quantum field theory (QFT). It is surprising that string dynamics can be equivalent to a local QFT. A better understanding of this equivalence would have numerous applications to strongly coupled gauge theory, black hole physics and a non-perturbative formulation of string theory. An important class of examples for which string dynamics is described by local QFT is string propagation on manifolds that include an anti-de-Sitter spacetime $`AdS_{p+1}`$\[3,,5,,6\]. In this case the corresponding theory without gravity is a $`p`$ dimensional conformal field theory (CFT). In general, solving the string equations of motion on $`AdS_{p+1}`$ requires turning on Ramond-Ramond (RR) backgrounds, which are not well understood. 
This makes it difficult to study the $`AdS/CFT`$ correspondence in string theory and most of the work on this subject is restricted to situations where the curvature on $`AdS_{p+1}`$ is small and the low energy supergravity approximation is reliable. String theory on $`AdS_3`$ is special in several respects. First, in this case the “dual” CFT is two dimensional and the corresponding conformal symmetry is infinite dimensional. In general, two dimensional CFT’s are better understood than their higher dimensional analogs and one may hope that this will also be the case here. Second, string theory on $`AdS_3`$ can be defined without turning on RR fields and thus should be more amenable to traditional worldsheet methods. Perturbative string theory on $`AdS_3`$ was studied in . The purpose of this paper is to continue this study and to apply it to some additional examples that are of interest in the different contexts mentioned above. Another application of the results of appears in . In section 2 we introduce the brane configuration whose near-horizon geometry will serve as the background for string propagation later in the paper. The configuration of interest includes $`NS5`$-branes, Kaluza-Klein (KK) monopoles and fundamental strings. We describe the supergravity solution and its near-horizon limit, and review some earlier results on chiral primary operators that are visible in supergravity. We also determine the moduli space of vacua in this geometry; by the $`AdS/CFT`$ correspondence this gives the moduli space of dual CFT’s. The resulting moduli space, which can be thought of as the moduli space of M-theory on $`AdS_3\times S^2\times T^6`$, is given in (2.17). The duality group is $`F_{4(4)}(\text{ZZ})`$, a discrete, non-compact version of the exceptional group $`F_4`$. In section 3 we review the work of on string theory on $`AdS_3\times S^3\times T^4`$, and elaborate on some aspects of it. 
We find the spectrum of chiral primaries in string theory and compare it to supergravity. We also comment on a proposed identification of string theory on $`AdS_3\times S^3\times `$ with CFT on the symmetric product $`^n/S_n`$, and show that the spectra of $`U(1)^4`$ affine Lie algebras in string theory and in CFT on the symmetric product disagree. We also briefly describe the extension of the work of to heterotic string theory and show that the resulting spacetime CFT is “heterotic” as well (i.e. its left and right central charges are different). In sections 4, 5 we discuss string propagation in the magnetic monopole background of section 2. The near-horizon geometry includes a Lens space $`S^3/\text{ZZ}_N`$; therefore we start in section 4 with a description of CFT on Lens spaces. In section 5 we turn to string theory on such spaces and discuss in turn bosonic, heterotic and type II strings. We discuss the spectrum, obtain the moduli space of vacua, and compare the resulting structure to the supergravity analysis of section 2. We show that the set of chiral operators in string theory on $`AdS_3\times S^3/\text{ZZ}_N\times T^4`$ is much larger than that in the corresponding low energy supergravity theory. In particular, it includes an exponentially large density of perturbative string states which carry momentum and winding around an $`S^1`$ in $`S^3/\text{ZZ}_N`$. We also show that string theory in a monopole background exhibits an effect familiar from quantum mechanics, the shift of angular momenta of electrically charged particles in the background of a magnetic monopole. Two appendices contain conventions, results and derivations used in the text. 2. Supergravity Analysis 2.1. Brane Configuration Consider M-theory compactified on a six dimensional manifold $`𝒩`$ parametrized by $`(x^4,x^5,x^6,x^7,x^8,x^{11})`$, down to five non-compact dimensions $`(x^0,x^1,x^2,x^3,x^9)`$. 
We will concentrate on the case $`𝒩=T^6`$, but will comment briefly on the cases where $`𝒩`$ is $`K3\times T^2`$ or a Calabi-Yau manifold. Since we would like to use weak coupling techniques, we identify $`x^{11}`$ with the M-theory direction, and send its radius to zero. In the resulting weakly coupled string theory we consider the following brane configuration \[9\]: (a) $`N`$ KK monopoles wrapped around the $`T^4`$ labeled by $`(x^5,\mathrm{},x^8)`$, infinitely extended in $`x^9`$ and charged under the gauge field $`A_\mu =G_{\mu 4}`$ $`(\mu =0,1,2,3)`$. (b) $`N^{}`$ $`NS5`$-branes wrapped around the above $`T^4`$ and extended in $`x^9`$. (c) $`p`$ fundamental strings infinitely stretched in $`x^9`$. The configuration (a) – (c) preserves four supercharges which form a chiral $`(4,0)`$ supersymmetry algebra in the $`1+1`$ dimensional non-compact spacetime $`(x^0,x^9)`$ shared by all the branes. It will be useful later to note that all the unbroken supercharges originate from the same worldsheet chirality. At low energies the theory on the branes decouples from bulk string dynamics and approaches a $`(4,0)`$ superconformal field theory. M-theory on $`T^6`$ has a large U-duality group which can be used to relate the above brane configuration to many others, such as that of three $`M5`$-branes wrapped around different four-cycles in $`T^6`$ and intersecting along the $`x^9`$ direction . The specific realization (a) – (c) is special in that only Neveu-Schwarz (NS) sector fields are excited; therefore we will be able to use the results of to study the near-horizon dynamics. The classical supergravity fields around the above collection of $`NS5`$-branes, KK monopoles and fundamental strings are as follows. 
The string frame metric is $$\begin{array}{cc}\hfill ds^2=& H_5\left[H_K^1\left(dx_4+P_K(1\mathrm{cos}\theta )d\varphi \right)^2+H_K\left(dr^2+r^2(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)\right)\right]+\hfill \\ \hfill +& F^1(dt^2+dx_9^2)+\underset{i=5}{\overset{8}{}}dx_i^2\hfill \end{array}$$ where the harmonic functions $$\begin{array}{cc}\hfill H_5=& 1+\frac{P_5}{r}\hfill \\ \hfill H_K=& 1+\frac{P_K}{r}\hfill \\ \hfill F=& 1+\frac{Q}{r}\hfill \end{array}$$ are associated with $`NS5`$-branes, KK monopoles, and fundamental strings, respectively, and we have parametrized the space transverse to the branes $`(x^1,x^2,x^3)`$ by spherical coordinates $`(r,\theta ,\varphi )`$. The dilaton and NS $`B_{\mu \nu }`$ field are: $$\begin{array}{cc}\hfill B_{t9}=& F\hfill \\ \hfill B_{\varphi 4}=& P_5(1\mathrm{cos}\theta )\hfill \\ \hfill e^{2[\mathrm{\Phi }_{10}(r)\mathrm{\Phi }_{10}(\mathrm{})]}=& \frac{F}{H_5}\hfill \end{array}$$ The charges $`P_5`$, $`P_K`$, $`Q`$ in (2.1) are related to the numbers of branes $`N^{}`$, $`N`$ and $`p`$ via: $$\begin{array}{cc}\hfill P_5=& \frac{\alpha ^{}}{2R}N^{}\hfill \\ \hfill P_K=& \frac{R}{2}N\hfill \\ \hfill Q=& \frac{\alpha ^3g_\mathrm{s}^2}{2RV}p\hfill \end{array}$$ where $`R`$ is the radius of $`x^4`$ (asymptotically far from the branes), $`g_s\mathrm{exp}\mathrm{\Phi }_{10}(\mathrm{})`$ and $`(2\pi )^4V`$ is the volume of the $`T^4`$. Note that: (a) From the point of view of the $`3+1`$ dimensional non-compact spacetime labeled by $`x^\mu `$, $`\mu =0,1,2,3`$, the vacuum (2.1) – (2.1) is magnetically charged under two gauge fields, $`G_{\mu 4}`$, $`B_{\mu 4}`$. The magnetic charges are $`N`$ and $`N^{}`$, respectively. (b) One can make the vacuum (2.1) – (2.1) arbitrarily weakly coupled everywhere by sending $`g_s0`$ and $`p\mathrm{}`$ with $`Q`$ fixed. (c) T-duality in $`x^4`$ exchanges $`NS5`$-branes and KK monopoles. Thus it exchanges $`N`$ and $`N^{}`$ as well as type IIA and IIB. 
(d) The fields that are excited in (2.1) – (2.1) exist in all closed string theories, including the bosonic string, the heterotic string, and the type II superstring. Therefore, one can study this solution in all these theories. 2.2. The near-horizon limit A dual description of the decoupled low energy dynamics on the branes is obtained by studying string dynamics in the background (2.1) – (2.1) in the near-horizon limit $`r0`$ . In this limit the metric reduces to<sup>1</sup> Many similar examples are discussed in .: $$\begin{array}{cc}\hfill ds^2=& \frac{P_5}{P_K}\left[dx_4+P_K(1\mathrm{cos}\theta )d\varphi \right]^2+P_5P_K(d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)\hfill \\ \hfill +& \frac{P_5P_K}{r^2}dr^2+\frac{r}{Q}(dt^2+dx_9^2)+\underset{i=5}{\overset{8}{}}dx_i^2\hfill \end{array}$$ The $`B_{\mu \nu }`$ field is given by (2.1). The dilaton, which far from the branes is arbitrary, becomes a fixed scalar in the near-horizon geometry (2.1): $$e^{2\mathrm{\Phi }_6^{\mathrm{hor}}}=e^{2\mathrm{\Phi }_{10}}\frac{V}{\alpha ^2}=\frac{Q}{P_5}\frac{V}{\alpha ^2}e^{2\mathrm{\Phi }_{10\mathrm{}}}=\frac{p}{N^{}}$$ The three dimensional space parametrized by $`t`$, $`r`$, and $`x^9`$ becomes after the coordinate change $`\rho ^2=4P_5P_Kr/Q`$: $$ds^2=\frac{l^2}{\rho ^2}d\rho ^2+\frac{\rho ^2}{l^2}(dt^2+dx_9^2)$$ where $$l^2=4P_5P_K=\alpha ^{}NN^{}$$ The metric (2.1) is that of $`AdS_3`$ with curvature $`\mathrm{\Lambda }=1/l^2`$. We next turn to the three dimensional space parametrized by $`\theta `$, $`\varphi `$, and $`x_4`$. Far from the branes, the radius of $`x^4`$, $`R`$, is a free parameter, the expectation value of a massless scalar field. In the near-horizon geometry (2.1) $`R`$ is fixed: $$R_4^{\mathrm{hor}}=\sqrt{G_{44}}R=R\sqrt{\frac{P_5}{P_K}}=\sqrt{\frac{N^{}\alpha ^{}}{N}}$$ The corresponding scalar field is massive. The radius of $`x^4`$ (2.1) is typically of string size (if $`N`$ and $`N^{}`$ are comparable). 
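The fixed near-horizon quantities can be checked directly from the charge relations quoted earlier: $`l^2=4P_5P_K=\alpha ^{}NN^{}`$ and $`R_4^{\mathrm{hor}}=R\sqrt{P_5/P_K}=\sqrt{N^{}\alpha ^{}/N}`$, both independent of the asymptotic radius $`R`$. A small numerical check (illustrative only; the function and variable names are ours):

```python
import math

def near_horizon(R, alpha, N, Nprime):
    """Near-horizon AdS3 radius^2 and fixed x^4 radius from the charges."""
    P5 = alpha * Nprime / (2.0 * R)   # NS5-brane charge
    PK = R * N / 2.0                  # KK-monopole charge
    l_sq = 4.0 * P5 * PK              # = alpha' * N * N'
    R4 = R * math.sqrt(P5 / PK)       # = sqrt(N' * alpha' / N)
    return l_sq, R4

# both quantities are independent of the asymptotic radius R:
l_a, r_a = near_horizon(R=1.0, alpha=1.0, N=3, Nprime=5)
l_b, r_b = near_horizon(R=7.0, alpha=1.0, N=3, Nprime=5)
```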
Hence, at low energies we can dimensionally reduce along $`x^4`$. This leaves an $`S^2`$ labeled by $`\theta `$, $`\varphi `$, which can be interpreted as the horizon of a four dimensional black hole, or a five dimensional black string. In the full theory $`x^4`$ is retained and the resulting three dimensional space can be identified with the manifold $`S^3/\text{ZZ}_N`$. Altogether, we are led to study string theory on $$AdS_3\times S^3/\text{ZZ}_N\times T^4$$ Another difference between the asymptotic and near-horizon geometries is that while, as we saw, the asymptotic background (2.1) – (2.1) is magnetically charged under both $`G_{\mu 4}`$ and $`B_{\mu 4}`$ (with different magnetic charges $`N`$, $`N^{}`$), in the near-horizon geometry (2.1) $`G_{\varphi 4}B_{\varphi 4}=0`$. Thus, the vacuum (2.1) is magnetically charged under $`G_{\mu 4}+B_{\mu 4}`$ only, $$G_{\varphi 4}+B_{\varphi 4}=2P_5(1\mathrm{cos}\theta )$$ Recalling that $`G+B`$ and $`GB`$ couple to different chiralities on the worldsheet, we see that the background in question is $`S^2\times S^1`$ with the $`SO(3)`$ isometry of the two-sphere arising from the worldsheet chirality<sup>2</sup> The foregoing discussion is valid for large $`N`$, $`N^{}`$, when supergravity is reliable. We will see later that when $`N`$ or $`N^{}`$ are $`2`$, a second $`SO(3)`$ arises from the other worldsheet chirality. that does not couple to $`G+B`$, which we will refer to as right-moving. We conclude that the $`\text{ZZ}_N`$ orbifold in (2.1) is asymmetric. 2.3. Black Holes and Spacetime CFT One of the motivations for the present work is the relation to black holes. The configuration given in (2.1) can be generalized by adding one more charge, corresponding to momentum along the 9th direction. If $`x^9`$ is compact, the resulting metric is that of a regular black hole in four dimensions, the Cvetič-Youm dyon \[12\]. Such black holes can be interpreted as excitations of the configuration (2.1). 
The black hole entropy follows from the large degeneracy of these excitations. Understanding the spacetime dynamics of strings on (2.1) involves understanding precisely these excitations, and thus is important for black hole physics. Brown and Henneaux have shown that any theory of gravity on $`AdS_3`$ has a large symmetry algebra containing two copies of the Virasoro algebra with central charge $$c_{\mathrm{st}}\frac{3l}{2l_p}$$ where $`l`$ is the radius of curvature of $`AdS_3`$ (2.1), and $`l_p`$ is the three dimensional Newton constant. The calculation of is semiclassical and is expected in general to receive both quantum gravity corrections which are suppressed by powers of $`l_p/l`$, and string corrections suppressed by powers of $`l_s/l`$ ($`l_s^2\alpha ^{}`$). We will see examples of such corrections below. In our case, the semiclassical computation gives $$c_{\mathrm{st}}=6NN^{}p$$ Strominger pointed out that if the spacetime dynamics of a theory of gravity on $`AdS_3`$ corresponds to a unitary and modular invariant CFT with the central charge (2.1) (and satisfies certain additional mild assumptions), then the standard CFT degeneracy of states agrees with the Bekenstein-Hawking entropy of the corresponding black holes. For the case of interest here the details were discussed in . Thus in our study of string theory on (2.1) we will be interested in computing the central charge (2.1) and any stringy corrections to it, and understanding the spectrum of operators that contribute to the BH<sup>3</sup> $`=`$ Black Hole, Bekenstein-Hawking, Brown-Henneaux entropy. 2.4. The Spectrum of Perturbations Later we will study the spectrum of perturbations of string theory in the background (2.1). Some of these perturbations are visible already in the supergravity approximation. In this subsection we summarize the results of \[16,,17\] regarding the spectrum of chiral operators in supergravity. 
Consider eleven dimensional supergravity compactified on $$AdS_3\times S^2\times 𝒩$$ where $`𝒩`$ is a Calabi-Yau manifold, $`K3\times T^2`$ or $`T^6`$. We will assume that the manifold $`𝒩`$ has Planck (or string) scale size, and dimensionally reduce all supergravity fields on it. The size of $`S^2`$ is assumed to be large; therefore we will keep Kaluza – Klein harmonics on the sphere. The theory preserves $`(4,0)`$ supersymmetry and, as explained in the previous subsection, is equivalent to a two dimensional CFT. States are labeled by the quantum numbers $`h`$, $`\overline{h}`$, $`\overline{j}`$, where $`h`$ is the scaling dimension with respect to the left-moving Virasoro algebra of , while $`\overline{h}`$ and $`\overline{j}`$ are the quantum numbers under the right-moving $`N=4`$ superconformal algebra (i.e. under right-moving Virasoro and $`SU(2)_R`$). Note that an $`SL(2)_R\times SL(2)_L`$ subalgebra of Virasoro is identified with the isometry of $`AdS_3`$ in (2.1) while the $`SU(2)_R`$ symmetry is the $`SO(3)`$ isometry of the two-sphere. Reducing the eleven dimensional supergravity fields on the manifold (2.1) gives rise to short representations of the $`N=4`$ superconformal algebra. To analyze the resulting spectrum it is convenient to perform the reduction in two steps, first reducing to five dimensions on the manifold $`𝒩`$, and then further reducing on $`AdS_3\times S^2`$. After reducing on $`𝒩`$, one finds the following spectrum: (a) A gravity multiplet, whose bosonic components are the five dimensional graviton and a gauge field (the graviphoton). This multiplet contains eight bosonic and eight fermionic degrees of freedom. (b) $`n_H`$ hypermultiplets, consisting of two real scalars and fermions. Each multiplet thus has two bosonic and two fermionic degrees of freedom. (c) $`n_V`$ vectormultiplets consisting of a vector field, a scalar and fermions ($`4+4`$ physical degrees of freedom). 
(d) $`n_S`$ gravitino multiplets consisting of two vectors and fermions ($`6+6`$ degrees of freedom). The values of $`n_H`$, $`n_V`$, and $`n_S`$ for different choices of the manifold $`𝒩`$ are : (a) On a Calabi-Yau manifold, at a generic point in moduli space, $`n_H=2(h_{21}+1)`$, $`n_V=h_{12}1`$, and $`n_S=0`$. (b) On $`K3\times T^2`$, generically $`n_V=22`$, $`n_H=2(n_V1)`$, and $`n_S=2`$. At points in moduli space where the gauge symmetry is enhanced, $`n_V`$ increases accordingly. Note that this problem is dual to the heterotic string on a torus, a case we will discuss below. (c) On $`T^6`$, $`n_H=n_V=14`$ and $`n_S=6`$. The further reduction on $`AdS_3\times S^2`$ gives the following spectrum of chiral primaries: $`h\overline{h}`$ degeneracy range of $`\overline{h}=\overline{j}`$ $`1/2`$ $`n_H`$ $`1/2,3/2,\mathrm{}`$ $`0`$ $`n_V`$ $`1,2,\mathrm{}`$ $`1`$ $`n_V`$ $`1,2,\mathrm{}`$ $`1/2`$ $`n_S`$ $`3/2,5/2,\mathrm{}`$ $`1/2`$ $`n_S`$ $`3/2,5/2,\mathrm{}`$ $`3/2`$ $`n_S`$ $`1/2,3/2,\mathrm{}`$ $`1`$ $`1`$ $`2,3,\mathrm{}`$ $`0`$ $`1`$ $`2,3,\mathrm{}`$ $`1`$ $`1`$ $`1,2,\mathrm{}`$ $`2`$ $`1`$ $`1,2,\mathrm{}`$ Table 1: Spectrum of chiral primaries for $`AdS_3\times S^2\times 𝒩`$. In section 5 we will find a stringy generalization of the spectrum of Table 1 for toroidally compactified heterotic and type II string theories, and will see that the above table is indeed obtained in the supergravity limit. 2.5. Moduli Space of Vacua The purpose of this subsection is to determine the moduli space of vacua of M-theory on the manifold (2.1) for the case $`𝒩=T^6`$. The moduli space of M-theory on $`T^6\times \mathrm{IR}^{4,1}`$ is $$E_{6(6)}(\text{ZZ})\backslash E_{6(6)}/USp(8)$$ $`E_{6(6)}`$ is a non-compact form of $`E_6`$ with maximal compact subgroup $`USp(8)`$. Black strings in the five non-compact dimensions are charged under the various $`B_{\mu \nu }`$ fields (which in five dimensions are dual to gauge fields). 
There are $`27`$ independent strings transforming in the $`\mathrm{𝟐𝟕}`$ of the $`E_{6(6)}(\text{ZZ})`$ U-duality group (2.1): the $`M5`$-branes with four of their dimensions wrapped on a $`T^4`$ (15 possibilities), the $`M2`$-branes with one dimension wrapped on an $`S^1`$ (6 of these), and the KK monopoles charged under the six gauge fields $`G_{\mu ,i}`$ and wrapped around the remaining $`T^5`$. It is convenient to organize these 27 charges into an $`8\times 8`$ symplectic-traceless antisymmetric matrix (utilizing the maximal $`USp(8)`$ subgroup of $`E_6`$): $$\mathrm{𝟐𝟕}=\left(\begin{array}{cc}bJ_{(1)}& Z\\ -Z^T& -\frac{1}{3}bJ_{(3)}+A_{ij}T^{ij}\end{array}\right)$$ where $`J_{(i)}`$ are the symplectic invariants of $`USp(2i)`$, and $`T^{ij}`$ are a basis of traceless antisymmetric $`6\times 6`$ matrices. One can choose the $`6\times 2`$ charges $`Z`$ in (2.1) to correspond to the $`M2`$-brane and KK monopole charges described above, while $`A_{ij}`$ and $`b`$ parametrize the $`M5`$-brane charges. Eq. (2.1) makes manifest the decomposition of the $`\mathrm{𝟐𝟕}`$ of $`E_6`$ in terms of representations of its $`USp(2)\times USp(6)`$ subgroup: $`b\in (\mathrm{𝟏},\mathrm{𝟏})`$, $`Z\in (\mathrm{𝟐},\mathrm{𝟔})`$, $`A\in (\mathrm{𝟏},\mathrm{𝟏𝟒})`$. The near-horizon geometry (2.1) is U-dual to a configuration of three $`M5`$-branes with charges $`b\ne 0`$, $`Z=A_{ij}=0`$. To find the subgroup of $`E_{6(6)}(\text{ZZ})`$ that is left unbroken by $`b`$, one notes that $`E_6`$ has a maximal $`F_4`$ subgroup, under which the $`\mathrm{𝟐𝟕}`$ decomposes as $`\mathrm{𝟐𝟔}+\mathrm{𝟏}`$. The $`USp(2)\times USp(6)`$ discussed above is the maximal compact subgroup of $`F_4`$. Therefore, $`b`$ is a singlet under $`F_4`$, and the subgroup of $`E_{6(6)}(\text{ZZ})`$ U-duality preserved by the near-horizon geometry is $`F_{4(4)}(\text{ZZ})`$.
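The various counts in this subsection can be cross-checked with elementary arithmetic. A minimal sketch (the bookkeeping is ours, not the paper's): the 27 strings split as 15+6+6, matching the decomposition $`b\in (\mathrm{𝟏},\mathrm{𝟏})`$, $`Z\in (\mathrm{𝟐},\mathrm{𝟔})`$, $`A\in (\mathrm{𝟏},\mathrm{𝟏𝟒})`$; the five dimensional $`T^6`$ content of the previous subsection assembles into the 128+128 degrees of freedom of maximal supergravity; and since $`USp(2)\times USp(6)`$ is the maximal compact subgroup of $`F_{4(4)}`$, the corresponding coset has dimension $`52-24=28`$.

```python
from math import comb

# (1) 27 strings: M5-branes on 4-cycles of T^6, M2-branes on 1-cycles, KK monopoles
assert comb(6, 4) + 6 + 6 == 27
# ... matching 27 = (1,1) + (2,6) + (1,14) under USp(2) x USp(6):
assert 1 * 1 + 2 * 6 + 1 * 14 == 27

# (2) 5d content for N = T^6: (number of multiplets, bosonic dof, fermionic dof)
multiplets = [(1, 8, 8), (14, 2, 2), (14, 4, 4), (6, 6, 6)]  # gravity, n_H, n_V, n_S
bosons = sum(n * b for n, b, _ in multiplets)
fermions = sum(n * f for n, _, f in multiplets)
assert bosons == fermions == 128  # maximal 5d supergravity

# (3) adjoint of F4: 52 = (3,1) + (1,21) + (2,14); compact part is USp(2) x USp(6)
dim_coset = (3 * 1 + 1 * 21 + 2 * 14) - (3 + 21)
print(bosons, dim_coset)  # 128 28
```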
The moduli space is the coset: $$F_{4(4)}(\text{ZZ})\backslash F_{4(4)}/USp(2)\times USp(6)$$ The adjoint of $`F_4`$, the $`\mathrm{𝟓𝟐}`$, decomposes under $`USp(2)\times USp(6)`$ as $`[(\mathrm{𝟑},\mathrm{𝟏})+(\mathrm{𝟏},\mathrm{𝟐𝟏})]+(\mathrm{𝟐},\mathrm{𝟏𝟒})`$. The noncompact form thus has signature $`(28,24)`$, and the coset has dimension 28. The $`2\times 14`$ moduli correspond to the $`n_H=14`$ hypermultiplets with $`h=1`$, $`\overline{h}=1/2`$ in Table 1. In Section 5 we will reproduce aspects of the moduli space (2.1) in string theory.

3. String Theory on $`AdS_3\times S^3\times T^4`$

The near-horizon geometry of a system of $`k`$ $`NS5`$-branes and $`p`$ fundamental strings in type II string theory on a four-manifold $`\mathcal{M}`$ ($`\mathcal{M}=T^4`$ or $`K3`$) is $$AdS_3\times S^3\times \mathcal{M}$$ Type II string theory on the manifold (3.1) has $`(4,4)`$ superconformal symmetry when $`\mathcal{M}=T^4`$ and in the type IIB case also when $`\mathcal{M}=K3`$. It has $`(4,0)`$ superconformal symmetry for the heterotic string on $`\mathcal{M}=T^4`$, $`K3`$, and for type IIA on $`\mathcal{M}=K3`$ (which is dual to the heterotic theory on $`T^4`$). String theory on manifolds including an $`AdS_3`$ factor was discussed in the literature, and the case of type II string propagation on (3.1) with $`\mathcal{M}=T^4`$ was described in detail. In this section we will review this construction, as a warmup for the case (2.1). We will also make some comments on the spectrum of the theory, and briefly discuss the heterotic case. Some of the conventions and other details appear in Appendix A.

3.1. Symmetries of string theory on $`AdS_3\times S^3\times T^4`$

The worldsheet and spacetime symmetries of string theory on (3.1) act separately on the left and right-movers on the worldsheet. Therefore in this subsection we will discuss the chiral (holomorphic) symmetry structure, both on the worldsheet and in spacetime. This will also be useful when we turn to string theory on (2.1), whose (anti-) holomorphic structure is identical to that discussed here.
The affine worldsheet symmetry of string theory on $`AdS_3\times S^3\times T^4`$ is $`\widehat{SL(2)}\times \widehat{SU(2)}\times \widehat{U(1)}^4`$. It is realized as follows. There are three bosonic currents $`j^A`$, realizing $`\widehat{SL(2)}`$ at level $`k+2`$, and three fermions $`\psi ^A`$, forming an $`\widehat{SL(2)}`$ at level $`-2`$. Similarly, there are three bosonic currents $`k^a`$, realizing an $`\widehat{SU(2)}`$ at level $`k-2`$, and three fermions $`\chi ^a`$, giving an $`\widehat{SU(2)}`$ at level 2. The total currents $$\begin{array}{cc}\hfill J^A=& j^A-\frac{i}{k}ϵ_{BC}^A\psi ^B\psi ^C,A,B=+,-,3\hfill \\ \hfill K^a=& k^a-\frac{i}{k}ϵ_{bc}^a\chi ^b\chi ^c,a,b=+,-,3\hfill \end{array}$$ thus have the same level $`k`$. The total worldsheet central charge of the $`SL(2)\times SU(2)`$ theory is identical to its flat space value ($`c=9`$), for all values of $`k`$. The $`\widehat{U(1)}^4`$ is realized in terms of free bosonic currents $`i\partial Y^j`$ and free fermions $`\lambda ^j`$, $`j=1,2,3,4`$. The worldsheet theory is superconformal; the supercurrent is $$T_F(z)=\frac{2}{k}(\eta _{AB}\psi ^Aj^B-\frac{i}{3k}ϵ_{ABC}\psi ^A\psi ^B\psi ^C)+\frac{2}{k}(\chi ^ak_a-\frac{i}{3k}ϵ_{abc}\chi ^a\chi ^b\chi ^c)+\lambda ^i\partial Y_i$$ Generally in string theory affine symmetries on the worldsheet give rise to gauge symmetries in spacetime. Contour integrals of the worldsheet generators give global charges, corresponding to gauge transformations that do not vanish rapidly enough at infinity. The analysis of Brown and Henneaux shows that in gravity on $`AdS_3`$ one should allow a rich set of non-trivial behaviors of the metric at infinity. This leads to a large spacetime global symmetry group, the $`2d`$ conformal group. Similarly, allowing non-trivial behaviors of gauge fields at the boundary of $`AdS_3`$ leads to an enhancement of global spacetime gauge symmetries to the corresponding affine Lie algebras.
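The claim that the total central charge equals its flat space value $`c=9`$ for all $`k`$ is a one-line arithmetic identity, $`\frac{3(k+2)}{k}+\frac{3(k-2)}{k}+3=9`$. A sketch of the check (the bookkeeping is ours), using the standard WZW central charges for the bosonic $`SL(2)`$ and $`SU(2)`$ factors plus six free fermions:

```python
from fractions import Fraction as F

def worldsheet_c(k):
    """Total central charge of the SL(2) x SU(2) worldsheet theory at level k."""
    c_sl2 = F(3 * (k + 2), (k + 2) - 2)  # bosonic SL(2) WZW at level k+2: 3k'/(k'-2)
    c_su2 = F(3 * (k - 2), (k - 2) + 2)  # bosonic SU(2) WZW at level k-2: 3k''/(k''+2)
    c_fermions = F(6, 2)                 # six free Majorana fermions psi^A, chi^a
    return c_sl2 + c_su2 + c_fermions

print([worldsheet_c(k) == 9 for k in (3, 5, 10, 100)])  # [True, True, True, True]
```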
To obtain a worldsheet description of the above infinite spacetime symmetry algebras it is convenient to use the Wakimoto representation of the conformal $`\sigma `$-model on $`AdS_3`$. This representation, which is summarized in Appendix A, parametrizes the manifold by the coordinates $`(\varphi ,\gamma ,\overline{\gamma })`$. The radial direction is $`\varphi `$, with the boundary of $`AdS_3`$ corresponding to $`\varphi =\infty `$. The Wakimoto description is particularly useful for studying the structure of string theory on $`AdS_3`$ in the limit $`\varphi \to \infty `$. In this limit the following two important simplifications occur:

(a) The worldsheet $`\sigma `$-model becomes free.

(b) The string coupling goes to zero.

Thus the full string theory becomes weakly coupled both on the worldsheet and in spacetime in this limit, regardless of the fixed coupling in the original description. This allows one to study all aspects of string theory on $`AdS_3`$ which are observable at $`\varphi \to \infty `$ using free field theory on the worldsheet. This includes the infinite spacetime symmetries, since these correspond to gauge transformations which are completely specified by the behavior at infinity.
The form of the Virasoro $`(L_n)`$, $`\widehat{SU(2)}`$ $`(T_n^a)`$ and $`\widehat{U(1)}^4`$ $`(\alpha _n^i)`$ charges as $`\varphi \to \infty `$ is: $$\begin{array}{cc}\hfill L_n=& \oint dz\left[(1-n^2)J^3\gamma ^n+\frac{n(n-1)}{2}J^{-}\gamma ^{n+1}+\frac{n(n+1)}{2}J^{+}\gamma ^{n-1}\right]\hfill \\ \hfill T_n^a=& \oint dz\{G_{-1/2},\chi ^a(z)\gamma ^n(z)\}\hfill \\ \hfill \alpha _n^i=& \oint dz\{G_{-1/2},\lambda ^i(z)\gamma ^n(z)\}\hfill \end{array}$$ The algebra satisfied by (3.1) is: $$\begin{array}{cc}\hfill [L_n,L_m]=& (n-m)L_{n+m}+\frac{c_{\mathrm{st}}}{12}(n^3-n)\delta _{n+m,0}\hfill \\ \hfill [T_n^a,T_m^b]=& iϵ_c^{ab}T_{n+m}^c+\frac{k_{\mathrm{st}}}{2}n\delta ^{ab}\delta _{n+m,0}\hfill \\ \hfill [L_m,T_n^a]=& -nT_{n+m}^a\hfill \\ \hfill [\alpha _n^i,\alpha _m^j]=& pn\delta ^{ij}\delta _{n+m,0}\hfill \\ \hfill [L_m,\alpha _n^i]=& -n\alpha _{n+m}^i\hfill \end{array}$$ where $$c_{\mathrm{st}}=6kp;k_{\mathrm{st}}=kp$$ and $`p`$ is a certain winding number that characterizes the embedding of the worldsheet into spacetime, reflecting the presence of $`p`$ fundamental strings in the vacuum, $$p\equiv \oint \frac{dz}{2\pi i}\frac{\partial _z\gamma (z)}{\gamma (z)}$$ As mentioned above, string theory on (3.1) exhibits $`N=4`$ superconformal invariance in spacetime. The superconformal generators are given by<sup>4</sup> More precisely, the supercharges $`Q`$ form the global $`N=4`$ superconformal algebra. As explained in the references, the full infinite superconformal symmetry can be obtained by acting on the global supercharges with the generators (3.1). This has been analyzed in the literature. $$Q=\oint dz\,e^{-\frac{\varphi }{2}}S(z);S(z)=e^{\frac{i}{2}\sum _Iϵ_IH_I(z)}$$ where $`H_I`$, $`I=1,\dots ,5`$ are the scalar fields, defined in Appendix A, which are obtained by bosonizing the fermions, and $`ϵ_I=\pm 1`$.
Mutual locality of the supercharges and BRST invariance lead to the constraints: $$\prod _{I=1}^{5}ϵ_I=\prod _{I=1}^{3}ϵ_I=1$$ These projections leave eight spacetime supercharges, which together with $`L_{\pm 1}`$, $`L_0`$ and $`T_0^a`$ form the global $`N=4`$ superconformal algebra.

3.2. Comments on the spectrum

The construction of string excitations in the background (3.1) is very similar to that of superstrings in flat spacetime. The plane wave zero mode wave function familiar from flat spacetime is replaced by $`V_{j;m,\overline{m}}V_{j^{\prime };m^{\prime },\overline{m}^{\prime }}^{\prime }\mathrm{exp}(i\vec{p}\cdot \vec{Y}+i\vec{\overline{p}}\cdot \vec{\overline{Y}})`$ where $`V`$, $`V^{\prime }`$ are the wave functions on $`AdS_3`$, $`S^3`$ respectively. Their transformation properties under the worldsheet affine Lie algebra are described in Appendix A. $`(\vec{p},\vec{\overline{p}})`$ is a vector in an even, self-dual Narain lattice $`\mathrm{\Gamma }^{4,4}`$. The towers of string states are obtained by multiplying this zero mode wave function by a polynomial in the fermionic oscillators<sup>5</sup> For simplicity we restrict here to NS-NS sector excitations. The generalization to other sectors is straightforward. We also work at generic values of the momenta; for special values there might be physical states at other ghost numbers. $`\psi ^A`$, $`\chi ^a`$, $`\lambda ^j`$, and the bosonic oscillators $`j^A`$, $`k^a`$, $`\partial Y^j`$ (and their derivatives), and a similar polynomial in the antiholomorphic oscillators, and restricting to the BRST cohomology.
The most general state in the $`(-1,-1)`$ picture of the NS-NS sector has the form $$V_{NS}=e^{-\varphi -\overline{\varphi }}P_N(\psi ^A,\partial \psi ^A,\dots ,j^A,\partial j^A,\dots )\overline{P}_{\overline{N}}(\overline{\psi }^A,\dots )V_{j;m,\overline{m}}V_{j^{\prime };m^{\prime },\overline{m}^{\prime }}^{\prime }\mathrm{exp}(i\vec{p}\cdot \vec{Y}+i\vec{\overline{p}}\cdot \vec{\overline{Y}})$$ where $`P_N`$ is a polynomial in the bosonic and fermionic worldsheet fields and their derivatives with scaling dimension $`N`$, and similarly for $`\overline{P}_{\overline{N}}`$. BRST invariance is the requirement that the matter part of (3.1) is a lower component of an $`N=1`$ worldsheet superfield with $`(\mathrm{\Delta },\overline{\mathrm{\Delta }})=(\frac{1}{2},\frac{1}{2})`$. Thus, one must have: $$N-\frac{j(j+1)}{k}+\frac{j^{\prime }(j^{\prime }+1)}{k}+\frac{|\vec{p}|^2}{2}=\frac{1}{2}$$ and a similar relation with $`N\to \overline{N}`$ and $`\vec{p}\to \vec{\overline{p}}`$ coming from the other chirality. One can think of (3.1) as determining $`j`$ (the “energy”) in terms of $`\vec{p}`$, $`j^{\prime }`$ (the “momentum”) and $`N`$, the excitation level of the string. $`V_{NS}`$ is null if it can be written as the higher component of a worldsheet superfield with $`(\mathrm{\Delta },\overline{\mathrm{\Delta }})=(\frac{1}{2},0)`$ or $`(0,\frac{1}{2})`$. The free spectrum can be summarized by writing down the torus partition sum. The partition sum of CFT on $`AdS_3`$ (more precisely on its Euclidean version $`H_3^+`$) appears in the literature; the rest is standard.
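The role of (3.1) as an on-shell condition can be made concrete by solving it for $`j`$. A small helper (ours, with a hypothetical name), solving the quadratic on the branch $`j\ge -1/2`$:

```python
import math

def sl2_spin(N, j_prime, p_sq, k):
    """Solve N - j(j+1)/k + j'(j'+1)/k + p_sq/2 = 1/2 for j on the branch j >= -1/2."""
    rhs = k * (N + j_prime * (j_prime + 1) / k + p_sq / 2 - 0.5)  # = j(j+1)
    return (-1 + math.sqrt(1 + 4 * rhs)) / 2

# For the chiral primaries discussed below (N = 1/2, p = 0) one finds j = j':
print(sl2_spin(0.5, 1.0, 0.0, 6))  # 1.0
```

The spacetime dimension of the corresponding operators is then $`h=j+1`$, as discussed below.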
Denoting the $`SU(2)`$ characters with spin $`j`$ at level $`k`$ by $`\chi _j^{(k)}(\tau )`$, the string partition sum is ($`q\equiv \mathrm{exp}(2\pi i\tau )`$): $$\begin{array}{cc}\hfill Z(\tau ,\overline{\tau })=& \frac{1}{\sqrt{\tau _2}|\eta (\tau )|^2}\sum _{j,\overline{j}\le \frac{k}{2}-1}𝒩_{j,\overline{j}}\chi _j^{(k-2)}(\tau )\chi _{\overline{j}}^{(k-2)}(\overline{\tau })\frac{1}{|\eta (\tau )|^8}\sum _{(\vec{p},\vec{\overline{p}})\in \mathrm{\Gamma }^{4,4}}q^{\frac{1}{2}\vec{p}^2}\overline{q}^{\frac{1}{2}\vec{\overline{p}}^2}\hfill \\ & \left|\left(\frac{\theta _3}{\eta }\right)^4-\left(\frac{\theta _4}{\eta }\right)^4-\left(\frac{\theta _2}{\eta }\right)^4\right|^2\hfill \end{array}$$ where the matrix $`𝒩_{j,\overline{j}}`$ parametrizes the bosonic $`\widehat{SU(2)}_{k-2}`$ modular invariant (see e.g. the literature); we will mainly discuss the simplest case, the $`A`$-series modular invariant for which $`𝒩_{j,\overline{j}}=\delta _{j,\overline{j}}`$. We have factored out an infinite overall constant from the $`AdS_3`$ partition sum, which has a clear physical meaning – it is the infinite volume of the $`\varphi `$ direction (see Appendix A). The transformation properties of physical states such as (3.1) under the spacetime symmetries are obtained by computing the commutators of the generators (3.1), (3.1) with the vertices (3.1). As an example, under the spacetime Virasoro algebra, the states (3.1) transform either as primaries or as descendants. The primaries, $`V_{\mathrm{phys}}(h;m,\overline{m})`$, satisfy $$[L_n,V_{\mathrm{phys}}(h;m,\overline{m})]=\left(n(h-1)-m\right)V_{\mathrm{phys}}(h;m+n,\overline{m})$$ where $`h`$ is the scaling dimension of $`V_{\mathrm{phys}}`$. A state of the form (3.1) is primary under spacetime Virasoro if it is primary (in the sense of Appendix A) under $`\widehat{SL(2)}`$; however, the latter is not a necessary condition.
A large class of Virasoro primaries that are also $`\widehat{SL(2)}`$ primaries is obtained by taking the polynomial $`P_N`$ in (3.1) to be independent of $`\psi ^A`$, $`j^A`$ (and their derivatives). The resulting operators have spacetime scaling dimension $`h=j+1`$ with $`j`$ determined by (3.1). Similarly, one can study the transformation properties of physical states under $`\widehat{SU(2)}`$, $`\widehat{U(1)}^4`$ (see the references for details). We would like to point out an interesting property of the spectrum of the $`\widehat{U(1)}^4`$ symmetry in this model. From the form of the generators $`\alpha _n^i`$ (3.1) it is clear that physical states which carry right-moving momentum $`\vec{p}`$ along the $`T^4`$, such as (3.1), and are primary under affine $`U(1)^4`$, transform as: $$[\alpha _n^i,V_{\mathrm{phys}}^{\vec{p}}(j;m,\overline{m})]=p^iV_{\mathrm{phys}}^{\vec{p}}(j;m+n,\overline{m})$$ i.e. the charges of primaries under $`U(1)^4`$ in spacetime and on the worldsheet are the same. However, we saw in eq. (3.1) that the spacetime $`U(1)^4`$ generators are not normalized canonically. In terms of canonically normalized generators, the charges of physical states are in fact $`p^i/\sqrt{p}`$, where $`p`$ is given in (3.1). For example, if the worldsheet $`T^4`$ on which the theory lives is a product of four circles with radii $`R_il_s`$ and the $`B`$ field on $`T^4`$ vanishes, the spectrum of $`U(1)^4`$ charges becomes $$Q^i=\frac{1}{\sqrt{p}}\left(\frac{n}{R_i}+\frac{mR_i}{2}\right)$$ It has been proposed that the spacetime SCFT corresponding to string theory on (3.1) is a blowing up deformation of the $`(4,4)`$ superconformal $`\sigma `$–model on the symmetric product $`\mathcal{M}^L/S_L`$ with $`L=kp`$. For $`\mathcal{M}=T^4`$ the symmetric product SCFT has a $`\widehat{U(1)}_R^4\times \widehat{U(1)}_L^4`$ affine symmetry, like string theory on (3.1).
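The comparison below can be previewed numerically: if two charge lattices of the form (3.1), with overall normalizations $`1/\sqrt{p}`$ and $`1/\sqrt{kp}`$, are to coincide, matching the pure-momentum charges fixes the radius, after which the pure-winding charges disagree by a factor of $`k`$. A sketch (the radii and levels below are hypothetical sample values of ours):

```python
import math

def charge(radius, L, n, m):
    """Canonically normalized U(1) charge (n/radius + m*radius/2) / sqrt(L)."""
    return (n / radius + m * radius / 2) / math.sqrt(L)

p, R, k = 3, 1.7, 4
r = R / math.sqrt(k)  # fixes the momentum (m = 0) charges to agree
mom_match = math.isclose(charge(R, p, 1, 0), charge(r, k * p, 1, 0))
win_match = math.isclose(charge(R, p, 0, 1), charge(r, k * p, 0, 1))
print(mom_match, win_match)  # True False  (winding charges differ by a factor of k)
```

For $`k=1`$ both charges match with $`r=R`$, consistent with the conclusion drawn in the text.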
The spectrum of charges under this affine symmetry in the symmetric product SCFT is independent of the blowing up deformations. Therefore, it is interesting to compare it to the spectrum (3.1) in string theory. As there, we will for simplicity take the spacetime $`T^4`$ to be a product of four circles with radii $`r_i`$ $`(i=1,\dots ,4)`$ (which may in general be different from $`R_i`$), and set the $`B`$ field to zero. The generalization to an arbitrary $`T^4`$ is straightforward. To compare to (3.1) we have to normalize the $`\widehat{U(1)}^4`$ generators in the symmetric product canonically, and then compute their spectrum at the orbifold point. The resulting spectrum of charges $`q_i`$ is: $$q^i=\frac{1}{\sqrt{L}}\left(\frac{n}{r_i}+\frac{mr_i}{2}\right)$$ Since $`L=kp`$ and both in (3.1) and in (3.1) states with all possible values of the integers $`n,m`$ appear, we conclude that the string theory spectrum is incompatible with the symmetric product at any point in its moduli space (for $`k>1`$). We view this as evidence that the spacetime SCFT relevant for string theory on (3.1) is not on the moduli space of the symmetric product SCFT (at least for $`\mathcal{M}=T^4`$ and $`k>1`$). We next turn to a discussion of the chiral primaries. Unitarity of the spacetime $`N=4`$ superconformal algebra implies that the scaling dimension $`h`$ and $`SU(2)`$ spin $`j^{\prime }`$ satisfy the inequality $`h\ge j^{\prime }`$. Chiral primaries are states that saturate this bound. One can prove that the spectrum (3.1) satisfies the $`N=4`$ unitarity bound. The proof uses the worldsheet unitarity bounds $`j<k/2`$, $`j^{\prime }\le k/2`$. One also finds that the states that saturate the bound are those with $`N=1/2`$ and $`\vec{p}=0`$ in (3.1). There are analogous statements in the Ramond sector. The chiral primaries can be analyzed holomorphically; then the two chiralities are combined in all possible ways consistent with modular invariance. Consider first the NS sector.
For $`N=1/2`$, $`\vec{p}=0`$, (3.1), (3.1) imply that $`j=j^{\prime }`$. For generic $`j`$ there are eight physical states (with given $`m`$, $`m^{\prime }`$): $$\begin{array}{cc}\hfill 𝒱_j^i=& e^{-\varphi }\lambda ^iV_jV_j^{\prime }\hfill \\ \hfill 𝒲_j^\pm =& e^{-\varphi }(\psi V_j)_{j\pm 1}V_j^{\prime }\hfill \\ \hfill 𝒳_j^\pm =& e^{-\varphi }V_j(\chi V_j^{\prime })_{j\pm 1}\hfill \end{array}$$ where we have used results and notations from Appendix A for combining fermions and bosons into representations of the total current algebras, and suppressed the indices $`m,m^{\prime },\dots `$. One can show that the states (3.1) are BRST invariant and thus physical, while the two remaining combinations, which involve $`(\psi V_j)_j`$ and $`(\chi V_j^{\prime })_j`$, are not physical (one combination is not BRST invariant, and the other is null). The four operators $`𝒱_j`$ have $`h=j+1`$, $`j^{\prime }=j`$; $`𝒲^\pm `$ have $`h=j+1\pm 1`$, $`j^{\prime }=j`$; and $`𝒳^\pm `$ have $`h=j+1`$, $`j^{\prime }=j\pm 1`$. The chiral operators are thus $`𝒲_j^{-}`$, $`𝒳_j^{+}`$. The former are physical for $`j\ge 1/2`$ (for $`j=0`$ the corresponding operator is null); the latter exist for all $`j\ge 0`$. One can similarly analyze the spectrum of chiral primaries in the Ramond sector. It is convenient to split the spinors into six dimensional spinors $`S`$, which transform in the $`(\mathrm{𝟐},\mathrm{𝟐})`$ of $`SL(2)\times SU(2)`$, and spinors on the $`T^4`$, $`\mathrm{exp}\frac{i}{2}(ϵ_4H_4+ϵ_5H_5)`$ (see Appendix A). Equating the spacetime scaling dimension and $`SU(2)`$ spin leads to two BRST invariant chiral primaries with $`h=j^{\prime }=j+1/2`$, $`j=0,1/2,\dots `$: $$𝒴_j^\pm =e^{-\frac{\varphi }{2}}e^{\pm \frac{i}{2}(H_4-H_5)}(SV_jV_j^{\prime })_{j-\frac{1}{2},j+\frac{1}{2}}$$ By applying the supercharges (3.1) one finds that the upper components of these superfields are $`𝒱_j^i`$ (3.1).
To summarize, the holomorphic analysis leads to the following spectrum of chiral primaries:

$$\begin{array}{ccc}𝒱& h& \text{range of }j\\ 𝒳_j^{+}& j+1& 0,1/2,\dots \\ 𝒲_j^{-}& j& 1/2,1,\dots \\ 𝒴_j^\pm & j+1/2& 0,1/2,\dots \end{array}$$

Table 2: Holomorphic chiral primary operators.

Tensoring Table 2 with its antiholomorphic analog leads to the following list of chiral primaries:

$$\begin{array}{cc}(h,\overline{h})& \text{degeneracy}\\ (\frac{l+3}{2},\frac{l+1}{2})& 1\\ (\frac{l+1}{2},\frac{l+3}{2})& 1\\ (\frac{l+1}{2},\frac{l+1}{2})& 5\\ (\frac{l+2}{2},\frac{l+2}{2})& 1\\ (\frac{l+2}{2},\frac{l+1}{2})& 4\\ (\frac{l+1}{2},\frac{l+2}{2})& 4\end{array}$$

Table 3: Spectrum of chiral primaries for $`AdS_3\times S^3\times T^4`$.

Here the index $`l=0,1,\dots `$ up to an upper bound that is determined by the fact that in Table 2, $`j`$ satisfies the constraint $`j\le k/2-1`$. The five $`(h,\overline{h})=(\frac{1}{2},\frac{1}{2})`$ chiral primaries in Table 3 give rise to twenty moduli parametrizing the space $$SO(5,4;\text{ZZ})\backslash SO(5,4)/SO(5)\times SO(4)$$ Sixteen of the moduli arise from the NS-NS sector on the worldsheet; they correspond to the metric and $`B`$ field on the $`T^4`$. The remaining four are RR moduli. The spectrum of spacetime chiral primaries, Table 3, is in agreement with expectations from supergravity \[23,24,16,17\]. In fact, Table 3 is identical to a table that appears in \[16\]. The upper bound on $`h,\overline{h}`$ that arises from string theory cannot be seen in the supergravity analysis since it is due to string scale physics. Of course, the operators in Table 3 are only those that correspond to the single particle chiral states, and there are many additional chiral operators that correspond to multiparticle states.
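The degeneracies of Table 3 follow from Table 2 by pairing holomorphic and antiholomorphic chiral primaries built on the same $`V_jV_j^{\prime }`$ (the diagonal, $`A`$-series modular invariant). An enumeration sketch (the function name and labels are our own):

```python
from fractions import Fraction as F
from collections import Counter

def holomorphic(j):
    """Chiral primaries of Table 2 at spin j, as (h, multiplicity) pairs."""
    ops = [(j + 1, 1), (j + F(1, 2), 2)]  # X_j^+ and the two Y_j^+-
    if j >= F(1, 2):
        ops.append((j, 1))                # W_j^-
    return ops

counts = Counter()
for twoj in range(8):                     # j = 0, 1/2, ..., 7/2 (well below k/2)
    j = F(twoj, 2)
    for h, dl in holomorphic(j):
        for hb, dr in holomorphic(j):
            counts[(h, hb)] += dl * dr

# Degeneracy 5 at (1/2, 1/2), 4 at (1, 1/2), 1 at (3/2, 1/2), as in Table 3:
print(counts[(F(1, 2), F(1, 2))], counts[(F(1), F(1, 2))], counts[(F(3, 2), F(1, 2))])  # 5 4 1
```

Note that at a generic point $`(m/2,m/2)`$ on the diagonal the rows of Table 3 overlap, e.g. the count at $`(1,1)`$ is $`5+1=6`$.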
Finally, general considerations lead one to expect the spectrum of chiral primaries to be truncated at $`h,\overline{h}`$ of order the central charge, but checking this directly may be beyond the reach of our perturbative analysis.

3.3. Heterotic strings on $`AdS_3\times S^3\times T^4`$

The discussion of the previous subsections generalizes easily to the heterotic case. The right-moving worldsheet theory is the same as before, while the left-movers are purely bosonic and live on the manifold $`AdS_3\times S^3\times T^4\times \mathrm{\Gamma }^{16}`$, where $`\mathrm{\Gamma }^{16}`$ is the $`E_8\times E_8`$ or $`\mathrm{Spin}(32)/\text{ZZ}_2`$ torus<sup>6</sup> Of course, as usual, one can turn on moduli that mix the $`T^4`$ with $`\mathrm{\Gamma }^{16}`$ and give rise to the standard Narain moduli space of vacua. The symmetry structure in the right-moving sector is as before, while the left-movers give rise to non-supersymmetric conformal symmetry, an $`SU(2)`$ affine Lie algebra from the sphere, and some additional affine symmetries from the $`T^4`$ and $`\mathrm{\Gamma }^{16}`$. At generic points in the Narain moduli space there is a $`U(1)^{20}\times SU(2)`$ left-moving affine symmetry. At points with enhanced gauge symmetry, the affine symmetry in spacetime is enhanced. The spacetime theory is thus a $`(4,0)`$ superconformal field theory. As we saw in (3.1)–(3.1), the spacetime central charge is determined by the total $`\widehat{SL(2)}`$ level on the worldsheet, which is $`k=(k+2)-2`$ for the right-movers, and $`k+2`$ for the left-movers. Therefore, the central charges of the spacetime theory are: $`\overline{c}_{\mathrm{st}}=6pk`$ and $`c_{\mathrm{st}}=6p(k+2)`$. The difference $$c_{\mathrm{st}}-\overline{c}_{\mathrm{st}}=12p$$ between the left and right central charges is an example of a stringy correction to the semiclassical results of (2.1).
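The mismatch $`c_{\mathrm{st}}-\overline{c}_{\mathrm{st}}=12p`$ is independent of $`k`$; a two-line sketch of the bookkeeping (the sampled values of $`k`$ and $`p`$ are arbitrary choices of ours):

```python
# left-movers see SL(2) level k+2, right-movers (k+2) - 2 = k
for k in (2, 5, 11):
    for p in (1, 3):
        c_left, c_right = 6 * p * (k + 2), 6 * p * k
        assert c_left - c_right == 12 * p
print("c_st - cbar_st = 12p for all sampled k, p")
```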
In the supergravity limit $`c_{\mathrm{st}}=\overline{c}_{\mathrm{st}}`$; (3.1) is suppressed relative to the leading contribution by two powers of $`l_s/l`$. Similarly, the levels of the left and right-moving $`\widehat{SU(2)}`$ algebras in spacetime differ: $`\overline{k}_{\mathrm{st}}-k_{\mathrm{st}}=2p`$. One can repeat the discussion of the previous subsections rather closely; we will not describe the details here. As mentioned above, the theory described in this subsection is S-dual to type IIA on $`AdS_3\times S^3\times K3`$. It might be interesting to study this duality further using the techniques of this paper.

4. Conformal Field Theory on $`S^3/\text{ZZ}_N`$

As we saw in section 2, the near-horizon geometry of $`NS5`$-branes and KK monopoles includes a Lens space. In order to study string propagation in this geometry we need to construct worldsheet CFT on $`SU(2)_R\times SU(2)_L/\text{ZZ}_N`$, where the $`\text{ZZ}_N`$ orbifold acts on the left-movers only. These CFT’s were studied in \[25,26\]; in this section we will review their properties, first in the bosonic and then in the supersymmetric case.

4.1. Bosonic CFT on Lens spaces

Consider $`SU(2)`$ WZW theory at level $`k`$. The theory has two affine $`SU(2)`$ symmetries acting on the left and right-movers. Denoting the left-moving currents<sup>7</sup> Whose OPE is given in Appendix A. by $`K^a(z)`$, and the right-moving ones by $`\overline{K}^a(\overline{z})`$, the action of the $`\text{ZZ}_N`$ symmetry by which we mod out to get the Lens space is as follows. We define a $`2\times 2`$ (hermitian) matrix of currents $`K\equiv K^a\sigma ^a`$, $`\overline{K}\equiv \overline{K}^a\sigma ^a`$ and $`G\equiv \mathrm{exp}(\frac{2\pi i}{N}\sigma _3)`$. The $`\text{ZZ}_N`$ then acts on the currents as: $$\begin{array}{cc}& K\to GKG^{-1};\overline{K}\to \overline{K}\hfill \\ & K^\pm \to e^{\pm \frac{4\pi i}{N}}K^\pm ,K^3\to K^3;\overline{K}^a\to \overline{K}^a\hfill \end{array}$$ Thus, for $`N>2`$ the projection breaks $`SU(2)_L\to U(1)`$.
One can think of the resulting asymmetric orbifold CFT in the following way. It is well known (see e.g. the literature) that one can decompose $`SU(2)`$ WZW CFT into a product of the $`SU(2)/U(1)`$ parafermion coset CFT and a free scalar field $`\xi `$, normalized as $`\xi (z)\xi (0)=-\frac{2}{k}\mathrm{ln}z`$, which is related to $`K^3`$ via: $$K^3=\frac{ik}{2}\partial \xi $$ Since the eigenvalues of $`K^3`$ are half-integer, $`\xi `$ is a compact scalar field, $$\xi \sim \xi +4\pi $$ More precisely, $`\xi `$ has period $`2\pi `$, but wavefunctions with $`j\in \text{ZZ}+1/2`$ go to minus themselves as $`\xi \to \xi +2\pi `$. Under the above decomposition, the $`SU(2)`$ primaries can be written as: $$V_{j;m,\overline{m}}^{\prime }=f_{j;m,\overline{m}}e^{im\xi }$$ $`m`$, $`\overline{m}`$ take values in the range $`|m|,|\overline{m}|\le j`$, with $`j-m,j-\overline{m}\in \text{ZZ}`$ and $`k-2j\in \text{ZZ}_+`$. The chiral parafermion fields $`f_{j;m,\overline{m}}`$ commute with the current $`K^3`$ (but not with $`\overline{K}^3`$) and have scaling dimensions $$\begin{array}{cc}\hfill \mathrm{\Delta }=& \frac{j(j+1)}{k+2}-\frac{m^2}{k}\hfill \\ \hfill \overline{\mathrm{\Delta }}=& \frac{j(j+1)}{k+2}\hfill \end{array}$$ The left-moving parts of the operators $`f_{j;m,\overline{m}}`$ (4.1) with $`|m|=j`$ are further identified, $`(j,m)\sim (\frac{k}{2}-j,m\pm \frac{k}{2})`$. The currents $`K^\pm `$ take the form: $$K^\pm =e^{\pm i\xi }\psi _{\mathrm{para}}^\pm $$ where $`\psi _{\mathrm{para}}^\pm `$ are parafermionic fields with scaling dimension $`(k-1)/k`$. Comparing (4.1) to (4.1) we see that the $`\text{ZZ}_N`$ acts on the chiral scalar field $`\xi `$ as $`\xi \to \xi +4\pi /N`$, decreasing the radius of $`\xi `$ (4.1) by $`N`$ units. Out of the original operators in the $`SU(2)`$ WZW model (4.1) only those with $`2m\in N\text{ZZ}`$ survive the $`\text{ZZ}_N`$ projection. These form the untwisted sector of the orbifold.
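The split above can be checked dimension by dimension: with $`\xi (z)\xi (0)=-\frac{2}{k}\mathrm{ln}z`$, the vertex operator $`e^{im^{\prime }\xi }`$ has dimension $`m^{\prime 2}/k`$, so in $`K^\pm =e^{\pm i\xi }\psi _{\mathrm{para}}^\pm `$ the dimensions $`1/k`$ and $`(k-1)/k`$ add up to $`1`$, as they must for a current. A sketch of the count (function names are ours):

```python
from fractions import Fraction as F

def vertex_dim(m_prime, k):
    # dimension of exp(i m' xi) for <xi xi> = -(2/k) ln z
    return F(m_prime ** 2, k)

def parafermion_dim(k):
    # dimension of psi^+-_para
    return F(k - 1, k)

for k in (3, 7, 12):
    print(k, vertex_dim(1, k) + parafermion_dim(k))  # the sum is always 1
```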
The twisted sector of the orbifold includes operators of the form $$V_{j;m,m^{\prime },\overline{m}}^{\prime }=f_{j;m,\overline{m}}e^{im^{\prime }\xi }$$ with $`m\ne m^{\prime }`$ (compare to (4.1)). These must have the same periodicity in $`\xi `$ as the untwisted ones: as $`\xi \to \xi +2\pi `$ they must be multiplied by $`(-)^{2j}`$. Therefore, they must satisfy: $$m-m^{\prime }\in \text{ZZ}$$ Modular invariance requires the inclusion in the theory of operators of the form (4.1) that satisfy level matching and are mutually local with respect to all operators in the untwisted sector (4.1) that survive the $`\text{ZZ}_N`$ projection. Requiring mutual locality of (4.1) (with $`2m\in N\text{ZZ}`$) and (4.1) leads to the constraint $$m-m^{\prime }\in \frac{k}{N}\text{ZZ}$$ Since completeness of the OPE requires us to include all operators (4.1) that satisfy (4.1), comparing to (4.1) we conclude that the asymmetric orbifold we are studying can only be consistent when $`\frac{k}{N}\equiv N^{\prime }\in \text{ZZ}`$. Level matching of the operators (4.1) further leads to the requirement, $$\frac{m^2}{k}-\frac{m^{\prime 2}}{k}\in \text{ZZ}$$ which together with (4.1) means that $`m`$, $`m^{\prime }`$ satisfy the constraints: $$\begin{array}{cc}\hfill m^{\prime }+m& \in N\text{ZZ}\hfill \\ \hfill m^{\prime }-m& \in N^{\prime }\text{ZZ}\hfill \end{array}$$ Clearly, the constraint (4.1) must hold for all operators in the orbifold theory, since we can generate all such operators by multiplying operators of the form (4.1). As a check on (4.1) one can compute the resulting torus partition sum and check its modular invariance. This is done in Appendix B.

4.2. $`N=1`$ SCFT on Lens spaces

The starting point of the construction is the $`N=1`$ superconformal $`SU(2)`$ WZW model (discussed around eq. (3.1)). The total left-moving $`SU(2)`$ current $`K^a`$ has level $`k`$; the $`\text{ZZ}_N`$ orbifold acts on $`K^a`$ as before (4.1).
To preserve the worldsheet $`N=1`$ superconformal algebra (3.1) the orbifold must also act on the left-moving fermions $`\chi ^a`$ in a similar way to (4.1): $$\chi ^\pm \to e^{\pm \frac{4\pi i}{N}}\chi ^\pm ,\chi ^3\to \chi ^3$$ and, as before, the right-movers are invariant. To perform the orbifold (4.1), (4.1), we decompose the $`N=1`$ superconformal $`SU(2)`$ WZW model into $`SU(2)/U(1)\times U(1)`$. The first factor is well known to have an accidental $`N=2`$ superconformal symmetry; it gives rise to an $`N=2`$ minimal model (see e.g. \[27,28\] for reviews). The second factor corresponds to a free superfield $`(\xi ,\chi ^3)`$ where $`\xi `$ is related to $`K^3`$ by (4.1). The analogs of the spectrum generating operators (4.1) in this case are: $$V_{j;m,m^{\prime },\overline{m}}^{\prime }=V_{j;m,\overline{m}}^{N=2}e^{im^{\prime }\xi }$$ where $`m`$, $`m^{\prime }`$ satisfy (4.1), and $`V_{j;m,\overline{m}}^{N=2}`$ are minimal model primaries on the left and $`\widehat{SU(2)}`$ primaries on the right. Focusing on the left-moving structure, there are two kinds of such operators: NS sector operators which are mutually local with respect to the supercurrent (3.1), and Ramond sector fields which have branch cuts with respect to $`G`$. They have the spectrum of scaling dimensions $`(\mathrm{\Delta })`$ and $`U(1)`$ charges $`(Q)`$: $$\begin{array}{cc}\hfill \mathrm{\Delta }_{NS}=& \frac{j(j+1)}{k}-\frac{m^2}{k};Q_{NS}=\frac{2m}{k}\hfill \\ \hfill \mathrm{\Delta }_R=& \frac{j(j+1)}{k}-\frac{m^2}{k}+\frac{1}{8};Q_R=\frac{2m}{k}-\frac{1}{2}\hfill \end{array}$$ where the range of the indices $`j,m`$ is as in (4.1) in the NS sector, while in the Ramond sector $`|m-\frac{1}{2}|\le j`$ (with the same identifications as before). The two kinds of Ramond operators in (4.1) are related by application of $`G_0^\pm `$, the zero modes of the $`N=2`$ superconformal generators. One can again verify using the appropriate characters that the resulting theory is modular invariant. This is discussed in Appendix B.

5.
String Theory on $`AdS_3\times S^3/\text{ZZ}_N\times T^4`$

Armed with an understanding of string theory on $`AdS_3`$ (section 3), and CFT on $`S^3/\text{ZZ}_N`$ (section 4), we are now ready to tackle the problem of interest, string theory on the manifold (2.1). Some interesting aspects of the problem appear already in the bosonic case, which we therefore discuss first.

5.1. Bosonic string theory on $`AdS_3\times S^3/\text{ZZ}_N\times T^{20}`$

The right-movers are described by CFT on $`AdS_3\times S^3\times T^{20}`$; in the left-moving sector we replace the $`S^3`$ by the monopole background described in section 4.1. To conform with the supersymmetric case, we take the level of $`\widehat{SL(2)}`$ to be $`k+2`$ and that of $`\widehat{SU(2)}`$ to be $`k-2`$, so that the total worldsheet central charge is $`k`$ independent. The spacetime central charge (3.1) is in this case $`c_{\mathrm{st}}=\overline{c}_{\mathrm{st}}=6(k+2)p`$. The torus partition sum from which one can read off the spectrum of physical states is (see (3.1) for the notation): $$Z(\tau ,\overline{\tau })=\frac{1}{\sqrt{\tau _2}|\eta (\tau )|^2}\sum _{j,\overline{j}\le \frac{k}{2}-1}𝒩_{j,\overline{j}}\chi _j^{(k-2)}(\tau )\overline{\chi }_{\overline{j}}^{(k-2)}(\overline{\tau })\frac{1}{|\eta (\tau )|^{40}}\sum _{(\vec{p},\vec{\overline{p}})\in \mathrm{\Gamma }^{20,20}}q^{\frac{1}{2}\vec{p}^2}\overline{q}^{\frac{1}{2}\vec{\overline{p}}^2}$$ where the left-moving characters $`\chi _j^{(k-2)}`$ are the bosonic monopole characters $`\chi _j^{\mathrm{monopole}}`$ described in Appendix B, while the right-moving ones $`\overline{\chi }_{\overline{j}}^{(k-2)}`$ are $`SU(2)`$ WZW characters. Of course, the bosonic theory is tachyonic, but it is still useful for studying in a simpler setting aspects of the theory which survive in the supersymmetric examples.
One such aspect is the transformation of string excitations under the $`SO(3)`$ rotation symmetry of the two-sphere discussed in section 2. The sum over $`j`$ in the partition sum (5.1) runs over both half-integer and integer values; at the same time the theory only has bosonic excitations – all states contribute with positive sign to the vacuum energy (5.1). Naively this implies that this background violates the spin-statistics theorem. To resolve this puzzle we need to recall some facts about the behavior of magnetic monopoles and dyons in quantum mechanics (see for a discussion). Consider a quantum mechanical particle with spin zero and electric charge $`e`$ in the background of a magnetic monopole with magnetic charge $`g`$. Dirac quantization is the statement that $`eg\text{ZZ}/2`$. The system has a conserved angular momentum, but the allowed values that this angular momentum can take are: $$j=eg,eg+1,\mathrm{}$$ Thus, the spin $`j`$ is bounded from below by $`eg`$ and can even take half-integer values (if $`eg\text{ZZ}+1/2`$). This is consistent with spin-statistics since, as explained in , dyons with electric charge $`e`$ and magnetic charge $`g`$ with half integer $`eg`$ indeed behave as fermions. In particular the wavefunction of the system is antisymmetric under interchange of two such objects. This seems to explain half of the puzzle raised above. The fact that the partition sum (5.1) has contributions with half-integer $`j`$ despite the fact that we expect all excitations of a bosonic string to have integer spin must be due to the effect of the monopole field (5.1). In essence, some of the angular momentum $`j`$ is due to the electromagnetic field of the dyon; it is thus a property of the string vacuum and is not intrinsic to a particular excitation with charge $`e`$. In fact, we can use this interpretation to compute the electric charge of the different string excitations. Recall (2.1) that the magnetic charge of the vacuum is $`P_5`$. 
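The monopole spin spectrum (5.1) above, $`j=eg,eg+1,\mathrm{}`$, can be tabulated mechanically. The sketch below is our own illustration (the function name is ours, not from the text); it uses exact rational arithmetic so that half-integer spins come out exactly:

```python
from fractions import Fraction

def allowed_spins(e, g, j_max):
    """Allowed angular momenta j = eg, eg+1, ... of a spin-zero particle
    of electric charge e in a monopole background of magnetic charge g.
    Dirac quantization requires eg to lie in Z/2."""
    eg = Fraction(e) * Fraction(g)
    if (2 * eg).denominator != 1:
        raise ValueError("Dirac quantization violated: eg not in Z/2")
    spins, j = [], abs(eg)
    while j <= j_max:
        spins.append(j)
        j += 1
    return spins

# eg = 1/2: the lowest spin is 1/2, so such a dyon behaves as a fermion
half_integer = allowed_spins(Fraction(1, 2), 1, 3)
```

For $`eg=1/2`$ this gives $`j=1/2,3/2,5/2,\mathrm{}`$, the half-integer spins that appear in the partition sum (5.1).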
The angular momentum $`j`$ of string states is bounded by the quantum number $`m`$ (4.1), (4.1) by $`jm`$. Comparing to (5.1) we see that the electric charge $`e`$ of states like (4.1), (4.1) is $`e=m/P_5`$. There still seems to be some tension between the fact that dyons with half integer $`eg`$ are fermions, and the fact that excitations with half integer $`j`$ contribute with positive sign to the string partition sum (5.1). The resolution is that string theory on $`AdS_3\times S^3/\text{ZZ}_N`$ is not really quantizing these dyons. Instead it is quantizing bosonic particles in the background of a monopole with a given magnetic charge. Thus, the relevant statistics is not that associated with exchange of dyons, but rather the one that has to do with exchanging fluctuations of the bosonic fields that correspond to string modes in the fixed background of a monopole. The latter statistics is that of the basic quanta of the string field, which are bosonic. Hence, there is no contradiction between (5.1) and the spin-statistics theorem. 5.2. Heterotic string theory on $`AdS_3\times S^3/\text{ZZ}_N\times T^4`$ We next turn to the heterotic theory. Our main task will be to reproduce and extend the supergravity result, Table 1, for the spectrum of chiral primaries. The basic structure is very similar to that of section 3.3. The worldsheet right-movers are described by a superconformal $`\sigma `$-model on $`AdS_3\times S^3\times T^4`$ and give rise to $`N=4`$ superconformal symmetry in spacetime. The left-movers are described by a bosonic worldsheet $`\sigma `$-model on $`AdS_3\times S^3/\text{ZZ}_N\times T^{20}`$ and give rise to conformal symmetry in spacetime. The lattice of momenta corresponding to $`T_R^4\times T_L^{20}`$ is an even, self-dual Narain lattice $`\mathrm{\Gamma }^{4,20}`$. 
The left and right moving spacetime central charges are again determined by the total levels of $`\widehat{SL(2)}`$ and are equal to: $`c_{\mathrm{st}}=6p(k+2)`$, $`\overline{c}_{\mathrm{st}}=6pk`$. To determine the relation between $`k`$ and the number of branes in the background, we recall that according to section 4.1 the level of the bosonic $`\widehat{SU(2)}`$, $`k2`$, must be a product of two integers $$k2=NN^{}$$ in order for the asymmetric $`\text{ZZ}_N`$ orbifold to be consistent. We readily identify $`N`$ with the number of KK monopoles, $`N^{}`$ with the number of $`NS5`$-branes (see (2.1) – (2.1)). Thus, the central charge is in this case $$\begin{array}{cc}\hfill c_{\mathrm{st}}=& 6p(NN^{}+4)\hfill \\ \hfill \overline{c}_{\mathrm{st}}=& 6p(NN^{}+2)\hfill \end{array}$$ Comparing to the Brown and Henneaux semiclassical gravity analysis (2.1) we see that this is another example of stringy corrections modifying the semiclassical formula (2.1). We next turn to the spectrum of chiral primaries under the $`(4,0)`$ superconformal spacetime symmetry. The spacetime supersymmetry in this model arises purely from the right-movers. As in section 3, the most convenient way to analyze the spectrum of chiral operators is to construct it holomorphically and then combine the two worldsheet chiralities. Short representations of the right-moving $`N=4`$ superconformal algebra are given in Table 2. What remains to do is to analyze the left-moving Virasoro primaries that can be tensored with Table 2 to give physical states. We will next do that for the case of the $`A`$-series modular invariant, $`𝒩_{j,\overline{j}}=\delta _{j,\overline{j}}`$. The generalization to other modular invariants is straightforward. Since we want to compare to Table 1, we start with chiral primaries arising from the untwisted sector (4.1). The right-moving structure constrains $`j=j^{}`$ (the bosonic $`SL(2)`$ and $`SU(2)`$ spins are the same). 
One finds the following physical operators: $$\begin{array}{cc}& (J^AV_j)_{j\pm 1}V_{j;m}^{},\mathrm{\hspace{0.33em}\hspace{0.33em}2}mN\text{ZZ};h=j+1\pm 1\hfill \\ & V_j(K^aV_j^{})_{j\pm 1;m},\mathrm{\hspace{0.33em}\hspace{0.33em}2}mN\text{ZZ};h=j+1\hfill \\ & B^\alpha V_jV_{j;m}^{},\mathrm{\hspace{0.33em}\hspace{0.33em}2}mN\text{ZZ},\alpha =1,\mathrm{},n_V2;h=j+1\hfill \end{array}$$ The operators with $`h=j`$ in the first line of (5.1) start only at $`j=1/2`$. The same is true for those with $`SU(2)`$ spin $`j1`$ in the second line. The rest of the towers of operators start at $`j=0`$. The notation in the last line of (5.1) is as follows. $`B^\alpha `$ are purely left-moving worldsheet currents. At generic points in the Narain moduli space of $`\mathrm{\Gamma }^{4,20}`$ there are twenty such currents, corresponding to $`X^i`$, with $`i`$ running over the directions of the $`T^4`$, and the sixteen Cartan subalgebra generators of the heterotic gauge group in ten dimensions. Thus, generically, $`n_V=22`$. At points in the Narain moduli space where the gauge symmetry is enhanced to a non-abelian group $`G`$, $`n_V`$ increases accordingly. At such points the spacetime theory has a current algebra $`\widehat{G}`$ at level $`p`$. The holomorphic operators (5.1) can be arranged into the following towers: $`h`$ degeneracy range of $`j`$ $`j+2`$ $`1`$ $`0,1,\mathrm{}`$ $`j`$ $`1`$ $`1,2,\mathrm{}`$ $`j+1`$ $`n_V1`$ $`0,1,\mathrm{}`$ $`j+1`$ $`1`$ $`1,2,\mathrm{}`$ Table 4: Operators of bosonic sector of heterotic string. The spectrum of chiral primaries of the heterotic string is now obtained by tensoring Table 4 with Table 2. Furthermore, to compare to supergravity we put the index $`m`$ in Table 4 to zero, since the states with $`m0`$ are not low energy states in the supergravity limit $`N,N^{}\mathrm{}`$. This implies that the index $`j`$ (which is the same for left and right-movers) is integer. 
It is not difficult to see that the resulting spectrum is precisely of the form indicated in Table 1 with $`n_H=2(n_V1)`$ and $`n_S=2`$. Moduli of the spacetime CFT arise from chiral primaries with $`\overline{h}=1/2`$, $`h=1`$. In Table 1 they arise from the $`n_H=42`$ hypermultiplets that exist generically in moduli space; these give rise to an $`84`$ dimensional moduli space. In the string description the spacetime moduli are directly related to the worldsheet moduli $`x^a\overline{}\overline{x}^{\overline{b}}`$, where $`a`$ runs over the sixteen chiral scalars which exist in the heterotic string already in ten dimensions, the four scalars parametrizing the $`T^4`$, and $`\xi `$ defined in (4.1). The index $`\overline{b}`$ runs over the four antiholomorphic scalars parametrizing the right-movers on $`T^4`$. From the string point of view it is obvious that the moduli space of spacetime SCFT’s is $$SO(4,21;\text{ZZ})\backslash SO(4,21)/SO(4)\times SO(21)$$ since this is the moduli space of worldsheet theories. Note that the moduli space (5.1) is consistent with the fact that, as explained in section 2, the radius of $`x^4`$, $`R_4`$ (2.1), is a fixed scalar in the near-horizon geometry (2.1). While the string theory analysis reproduces the supergravity result of Table 1 in the region where the latter is expected to be applicable, it also generalizes it significantly. We have already seen an example of this above: the states visible in supergravity have $`m=m^{}=0`$ in the notation of (4.1) (and thus integer $`j`$) while the full string spectrum has in addition states with $`m=m^{}\frac{N}{2}\text{ZZ}`$, some of which have half-integer $`j`$ if $`N`$ is odd. Since T-duality in $`x^4`$ exchanges $`N`$ and $`N^{}`$ (see section 2), we also expect to find similar states in the twisted sector with $`m=m^{}\frac{N^{}}{2}\text{ZZ}`$. In fact, the twisted sectors have in this case a rich spectrum of stringy chiral operators. 
Consider, for example, vertex operators of the general form: $$(\overline{\psi }V_j)_{j1}V_{j;m,m^{},\overline{m}}^{}P_n(\psi ^A,J^A,\mathrm{})$$ In the right-moving sector this is an operator of the type $`𝒲_j^{}`$, as in Table 2; hence it is chiral under the right-moving spacetime $`N=4`$ superconformal algebra. $`V^{}`$ is given by (4.1). The left-moving worldsheet scaling dimension $`n`$ of the polynomial $`P_n`$ is determined by level matching: $$n1=\frac{m^2m^2}{NN^{}}$$ Recall that the r.h.s. of (5.1) is integer (4.1), since $`m`$, $`m^{}`$ must satisfy (4.1). For every solution of the level matching condition (5.1) one finds an operator in a short multiplet of the spacetime $`(4,0)`$ superconformal symmetry. For large $`n`$ the growth of the number of such solutions is exponential (in $`\sqrt{n}`$). These states are curved spacetime analogs of Dabholkar-Harvey states . Clearly, there are other states of the general form (5.1); their construction is a straightforward application of the methods of this paper and we will not discuss them in detail. The string construction also allows one to study in a simple way the transformation properties of various operators under the full Virasoro algebra. The supergravity analysis that leads to Table 1 is only sensitive to the transformation properties of the different states under the global superconformal algebra. While on the supersymmetric side this is not a serious restriction – an operator with $`h=j`$ must be a primary under the full superconformal algebra since (at least in a unitary SCFT) it cannot be a descendant – on the left-moving bosonic side one could ask whether the states in Table 1 are primary under the full Virasoro algebra or only under its $`SL(2)`$ subgroup; the latter are known as quasi-primaries in $`2d`$ CFT. 
This question can be answered in string theory by applying the Virasoro generators (3.1) to the vertex operators (5.1) and asking whether (3.1) is satisfied for all $`n`$ or only for $`n=0,\pm 1`$. One finds that all operators are primary under the full Virasoro algebra except for those on the first line of (5.1) that involve $`(J^AV_j)_{j+1}`$; these are quasi-primary. This is not surprising given the fact that for $`j=0`$ this combination appears in the Virasoro generator itself, and the stress tensor is a quasi-primary descendant of the identity in $`2d`$ CFT. 5.3. Type II string theory on $`AdS_3\times S^3/\text{ZZ}_N\times T^4`$ The main difference with respect to the analysis of the previous subsection is that the left-movers now live on $`AdS_3\times S^3/\text{ZZ}_N\times T^4`$ and are described by a superconformal worldsheet theory. The spacetime central charge is in this case given exactly by the semiclassical results (2.1), $`c_{\mathrm{st}}=\overline{c}_{\mathrm{st}}=6pk=6pNN^{}`$. In order to define the worldsheet theory we have to specify the GSO projection on the worldsheet. Performing a chiral projection, which acts separately on left and right-movers, $`()^{F_L}=()^{F_R}=1`$, leads to spacetime supersymmetry, which is still coming only from the right-moving sector (in agreement with the brane picture of section 2). The fact that the left-moving supercharges (3.1) are projected out can be seen by noting that their values of $`m,m^{}=\pm \frac{1}{2}`$ are inconsistent with the projection (4.1) when $`N,N^{}>1`$. To see the problem in more detail, trying to enforce mutual locality of (3.1) with the twisted NS sector states (4.1) leads to the condition $`(mm^{})/k\text{ZZ}`$ or, using (4.1), $`1/N\text{ZZ}`$, which implies $`N=1`$ ($`N^{}=1`$ also has enhanced SUSY, arising from the twisted sector). It is not difficult to repeat the analysis of chiral primaries performed in the previous subsection for the type II case. 
We start again with states that survive in the supergravity limit. Their holomorphic, left-moving structure is closely related to (3.1), (5.1). In the NS sector one has (in the $`1`$ picture): $$\begin{array}{cc}& e^\varphi (\psi ^AV_j)_{j\pm 1}V_{j;m}^{},\mathrm{\hspace{0.33em}\hspace{0.33em}2}mN\text{ZZ};h=j+1\pm 1\hfill \\ & e^\varphi V_j(\chi ^aV_j^{})_{j\pm 1;m},\mathrm{\hspace{0.33em}\hspace{0.33em}2}mN\text{ZZ};h=j+1\hfill \\ & e^\varphi \lambda ^iV_jV_{j;m}^{},\mathrm{\hspace{0.33em}\hspace{0.33em}2}mN\text{ZZ},i=1,\mathrm{},4;h=j+1\hfill \end{array}$$ where $`V^{}`$ is given by (4.1) with $`m=m^{}`$, and we are temporarily suppressing the right-moving index $`\overline{m}`$ (since the analysis is chiral). As in the previous subsection, one of the towers on the first line of (5.1) and one of the towers on the second line (the ones corresponding to spin $`j1`$) start at $`j=\frac{1}{2}`$. The other towers exist for all $`j0`$ (bounded from above by the worldsheet unitarity bound). The Ramond sector states are (in the $`1/2`$ picture): $$e^{\frac{\varphi }{2}}e^{\pm \frac{i}{2}(H_4H_5)}(SV_jV_j^{})_{j\pm \frac{1}{2},j\pm \frac{1}{2};m},\mathrm{\hspace{0.33em}\hspace{0.33em}2}mN\text{ZZ}$$ Altogether, the quantum numbers and degeneracies of the left-moving parts of the chiral operators which survive in the supergravity $`(m=0)`$ limit are: $`h`$ degeneracy range of $`j`$ $`j+2`$ $`1`$ $`0,1,\mathrm{}`$ $`j`$ $`1`$ $`1,2,\mathrm{}`$ $`j+1`$ $`5`$ $`0,1,\mathrm{}`$ $`j+1`$ $`1`$ $`1,2,\mathrm{}`$ $`j+1/2`$ $`4`$ $`1/2,3/2,\mathrm{}`$ $`j+3/2`$ $`4`$ $`1/2,3/2,\mathrm{}`$ Table 5: Left-moving primaries for $`AdS_3\times S^3/\text{ZZ}_N\times T^4`$. The physical spectrum of chiral primaries is obtained by tensoring Table 5 with Table 2. The result is precisely the one given in Table 1 with $`n_V=n_H=14`$ and $`n_S=6`$, in agreement with supergravity. 
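Stepping back to the twisted sectors of section 5.2: the level-matching condition (5.1), combined with the orbifold constraints of (4.1), lends itself to a quick numerical scan. The sketch below is our own illustration (function name ours), restricted to integer $`m`$, $`m^{}`$ for simplicity:

```python
def twisted_dh_states(N, Nprime, m_range, n_max):
    """Scan integer (m, m') obeying the orbifold constraints
    m + m' in N*Z and m' - m in N'*Z, and solve the level-matching
    condition n - 1 = (m^2 - m'^2)/(N N') of section 5.2.
    Returns triples (m, m', n) with 0 <= n <= n_max."""
    k = N * Nprime
    sols = []
    for m in range(-m_range, m_range + 1):
        for mp in range(-m_range, m_range + 1):
            if (m + mp) % N != 0 or (mp - m) % Nprime != 0:
                continue
            num = m * m - mp * mp
            # the constraints make num a multiple of N*N', as noted in the text
            assert num % k == 0
            n = 1 + num // k
            if 0 <= n <= n_max:
                sols.append((m, mp, n))
    return sols
```

The built-in assertion checks the claim in the text that the right-hand side of the level-matching condition is automatically an integer.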
In section 2.5 we saw that the moduli space of vacua of string theory on $`AdS_3\times S^3/\text{ZZ}_N\times T^4`$ is the twenty eight dimensional coset (2.1). It is interesting to ask how this space arises in the worldsheet construction. The twenty eight moduli are the upper components of chiral primaries with $`h=1`$, $`\overline{h}=\frac{1}{2}`$. Twenty of them arise from the NS sector. These are the obvious moduli of the worldsheet theory; they parametrize the coset (3.1). The remaining eight moduli come from the RR sector, and enlarge the moduli space from (3.1) to (2.1). As a check, note that $`SO(5,4)`$ is a subgroup of $`F_{4(4)}`$ and $`SO(5)\times SO(4)`$ is a subgroup of $`USp(2)\times USp(6)`$. The situation is in fact very similar to that in type II string theory on $`T^5\times \mathrm{IR}^{4,1}`$, where the full moduli space is given by the coset (2.1) but only the subspace $`SO(5,5;\text{ZZ})\backslash SO(5,5)/SO(5)\times SO(5)`$ comes from NS fields (the Narain moduli space) while the rest of the moduli come from the RR sector. $`SO(5,5)`$ is a subgroup of $`E_{6(6)}`$ and $`SO(5)\times SO(5)`$ a subgroup of $`USp(8)`$. Comments: (a) As in the previous subsection, the string analysis produces a rich spectrum of stringy chiral operators in addition to those seen in supergravity. (b) Both here and in the two previous subsections we have implicitly assumed that $`N,N^{}>2`$. When either $`N`$ or $`N^{}`$ is two, the theory in fact has more symmetry, both on the worldsheet and in spacetime. When either $`N`$ or $`N^{}`$ is one, we get back the theory discussed in . Acknowledgements: We thank J. Harvey, C. Johnson, E. Martinec, N. Seiberg and F. Wilczek for discussions. The work of DK and FL is supported in part by DOE grant #DE-FG02-90ER40560. FL is also supported by a McCormick fellowship through the EFI. RGL is supported by a DOE Outstanding Junior Investigator Award, under grant #DE-FG02-91ER40677. Appendix A. 
Some Properties of String Theory on $`AdS_3\times S^3\times T^4`$ In this Appendix we collect some conventions and results about CFT and string theory on $`AdS_3\times S^3\times T^4`$. 1. Worldsheet fermions: The fermions $`\psi ^A`$, $`\chi ^a`$, $`\lambda ^j`$ defined in the text are normalized as: $$\begin{array}{cc}\hfill \psi ^A(z)\psi ^B(w)=& \frac{k\eta ^{AB}/2}{zw},A,B=+,,3\hfill \\ \hfill \chi ^a(z)\chi ^b(w)=& \frac{k\delta ^{ab}/2}{zw},a,b=+,,3\hfill \\ \hfill \lambda ^i(z)\lambda ^j(w)=& \frac{\delta ^{ij}}{zw},i,j=1,2,3,4\hfill \end{array}$$ The metrics are $`\eta ^{33}=1,\eta ^+=2`$; $`\delta ^{33}=1,\delta ^+=2`$. To study the Ramond sector of the worldsheet theory one needs to construct the spin fields for $`\psi ^A`$, $`\chi ^a`$, $`\lambda ^j`$. It is convenient to choose a complex structure by pairing the ten free fermions into five complex ones, and bosonizing them into canonically normalized scalar fields, $`H_I`$, (normal ordering implied): $$\begin{array}{cc}\hfill iH_1=& \frac{1}{k}\psi ^+\psi ^{}\hfill \\ \hfill iH_2=& \frac{1}{k}\chi ^+\chi ^{}\hfill \\ \hfill iH_3=& \frac{2}{k}\psi ^3\chi ^3\hfill \\ \hfill H_4=& \lambda ^1\lambda ^2\hfill \\ \hfill H_5=& \lambda ^3\lambda ^4\hfill \end{array}$$ The spin fields take the form $`S=\mathrm{exp}\frac{i}{2}_{I=1}^5ϵ_IH_I`$, where $`ϵ_I=\pm 1`$. 2. Worldsheet current algebra: The currents (3.1) satisfy the OPE’s: $$\begin{array}{cc}\hfill J^A(z)J^B(w)=& \frac{k\eta ^{AB}/2}{(zw)^2}+\frac{iϵ_C^{AB}J^C}{zw}+\mathrm{},A,B,C=+,,3\hfill \\ \hfill K^a(z)K^b(w)=& \frac{k\delta ^{ab}/2}{(zw)^2}+\frac{iϵ_c^{ab}K^c}{zw}+\mathrm{};a,b,c=+,,3\hfill \end{array}$$ where $`iϵ^{+3}=2`$ in both $`SL(2)`$ and $`SU(2)`$ (thus, lowering a $`3`$ index gives a relative minus sign between the two). As in the text, we denote primaries of the bosonic $`\widehat{SL(2)}`$ by $`V_{j;m,\overline{m}}`$, and those of the bosonic $`\widehat{SU(2)}`$ by $`V_{j;m,\overline{m}}^{}`$. 
In both cases $`j`$ is the quadratic Casimir, while $`(m,\overline{m})`$ are the eigenvalues of $`(j^3,\overline{j}^3)`$ or $`(k^3,\overline{k}^3)`$, as appropriate. The normalizations of the operators $`V`$, $`V^{}`$ are such that: $$\begin{array}{cc}\hfill j^\pm (z)V_{j;m,\overline{m}}(w)=& (mj)V_{j;m\pm 1,\overline{m}}(w)/(zw)+\mathrm{}\hfill \\ \hfill j^3(z)V_{j;m,\overline{m}}(w)=& mV_{j;m,\overline{m}}(w)/(zw)+\mathrm{}\hfill \\ \hfill k^\pm (z)V_{j;m,\overline{m}}^{}(w)=& (jm)V_{j;m\pm 1,\overline{m}}(w)/(zw)+\mathrm{}\hfill \\ \hfill k^3(z)V_{j;m,\overline{m}}^{}(w)=& mV_{j;m,\overline{m}}^{}(w)/(zw)+\mathrm{}\hfill \end{array}$$ The scaling dimensions corresponding to $`SL(2)_{k+2}`$ and $`SU(2)_{k2}`$ are: $$\mathrm{\Delta }(V_j)=\frac{j(j+1)}{k};\mathrm{\Delta }(V_j^{}^{})=\frac{j^{}(j^{}+1)}{k}$$ The operators $`V_j`$ are left-right symmetric; thus their left and right scaling dimensions are equal. For $`SU(2)`$, there are non-diagonal modular invariants with $`j^{}\overline{j}^{}`$. The fermions $`\psi ^A`$, $`\chi ^a`$ transform in the spin $`(1,0)`$ and $`(0,1)`$ representations of $`SL(2)\times SU(2)`$. It is useful to decompose the products of fermions with bosonic primaries $`V`$, $`V^{}`$ into representations of the total currents (3.1). 
The relevant decompositions are: $$\begin{array}{cc}\hfill (\psi V_j)_{j+1;m,\overline{m}}=& (j+1m)(j+1+m)\psi ^3V_{j;m,\overline{m}}+\frac{1}{2}(j+m)(j+1+m)\psi ^+V_{j;m1,\overline{m}}\hfill \\ \hfill +& \frac{1}{2}(jm)(j+1m)\psi ^{}V_{j;m+1,\overline{m}}\hfill \\ \hfill (\psi V_j)_{j;m,\overline{m}}=& m\psi ^3V_{j;m,\overline{m}}\frac{1}{2}(j+m)\psi ^+V_{j;m1,\overline{m}}+\frac{1}{2}(jm)\psi ^{}V_{j;m+1,\overline{m}}\hfill \\ \hfill (\psi V_j)_{j1;m,\overline{m}}=& \psi ^3V_{j;m,\overline{m}}\frac{1}{2}\psi ^+V_{j;m1,\overline{m}}\frac{1}{2}\psi ^{}V_{j;m+1,\overline{m}}\hfill \\ \hfill (\chi V_j^{})_{j+1;m,\overline{m}}=& (j+1m)(j+m+1)\chi ^3V_{j;m,\overline{m}}^{}\frac{1}{2}(j+m)(j+m+1)\chi ^+V_{j;m1,\overline{m}}^{}\hfill \\ \hfill +& \frac{1}{2}(jm)(j+1m)\chi ^{}V_{j;m+1,\overline{m}}^{}\hfill \\ \hfill (\chi V_j^{})_{j;m,\overline{m}}=& m\chi ^3V_{j;m,\overline{m}}^{}+\frac{1}{2}(j+m)\chi ^+V_{j;m1,\overline{m}}^{}+\frac{1}{2}(jm)\chi ^{}V_{j;m+1,\overline{m}}^{}\hfill \\ \hfill (\chi V_j^{})_{j1;m\overline{m}}=& \chi ^3V_{j;m,\overline{m}}^{}+\frac{1}{2}\chi ^+V_{j;m1,\overline{m}}^{}\frac{1}{2}\chi ^{}V_{j;m+1,\overline{m}}^{}\hfill \end{array}$$ Similarly, the eight spin fields $`S=\mathrm{exp}\frac{i}{2}(ϵ_1H_1+ϵ_2H_2+ϵ_3H_3)`$ associated with the fermions $`\psi ^A`$, $`\chi ^a`$ transform as two copies (corresponding to $`ϵ_1ϵ_2ϵ_3=\pm 1`$) of $`(1/2,1/2)`$ under $`SL(2)\times SU(2)`$. The product $`SV_jV_j^{}^{}`$ can be decomposed into representations of the total currents (3.1). This gives rise to the four representations $`(j\pm 1/2,j^{}\pm 1/2)`$ for each of the two chiralities of $`S`$ above. 3. Wakimoto representation of $`SL(2)`$ CFT: A convenient representation of CFT on $`AdS_3`$ is given by the Wakimoto representation . 
The worldsheet Lagrangian is: $$=\varphi \overline{}\varphi \frac{2}{\alpha _+}\widehat{R}^{(2)}\varphi +\beta \overline{}\gamma +\overline{\beta }\overline{\gamma }\beta \overline{\beta }\mathrm{exp}\left(\frac{2}{\alpha _+}\varphi \right)$$ where<sup>8</sup> Recall that the level of the bosonic $`\widehat{SL(2)}`$ algebra is $`k+2`$. $`\alpha _+^2=2k`$ is related to $`l`$, the radius of curvature of $`AdS_3`$, via: $$l^2=l_s^2k.$$ Integrating out $`\beta ,\overline{\beta }`$ leads to the standard description of CFT on $`AdS_3`$. The description with $`\beta ,\overline{\beta }`$ (A.1) is convenient since it simplifies in the limit $`\varphi \mathrm{}`$: (a) The interaction term $`\beta \overline{\beta }\mathrm{exp}\left(\frac{2\varphi }{\alpha _+}\right)`$ goes to zero; thus the worldsheet theory becomes weakly coupled. (b) The linear dilaton term implies that the string coupling goes to zero exponentially in $`\varphi `$; thus the spacetime theory becomes weakly coupled. In the free field region $`\varphi \mathrm{}`$ the propagators that follow from (A.1) are: $`\varphi (z)\varphi (0)=\mathrm{log}|z|^2`$, $`\beta (z)\gamma (0)=1/z`$. The current algebra is represented by (normal ordering is implied): $$\begin{array}{cc}\hfill j^3=& \beta \gamma +\frac{\alpha _+}{2}\varphi \hfill \\ \hfill j^+=& \beta \gamma ^2+\alpha _+\gamma \varphi +k\gamma \hfill \\ \hfill j^{}=& \beta .\hfill \end{array}$$ Bosonic primary vertex operators are given by: $$V_{jm\overline{m}}=\gamma ^{j+m}\overline{\gamma }^{j+\overline{m}}\mathrm{exp}\left(\frac{2j}{\alpha _+}\varphi \right).$$ States (A.1) with $`j>1/2`$ correspond to wavefunctions on $`AdS_3`$ that are exponentially supported at $`\varphi \mathrm{}`$. Thus many of their properties, such as the transformation properties under the infinite symmetry algebras (3.1), (3.1) can be studied using the above weakly coupled representation. Appendix B. 
Modular properties of monopole characters In this Appendix we discuss the modular transformation properties of the characters of the monopole CFT discussed in section 4. We start with the bosonic theory and then turn to the fermionic one. 1. Bosonic case: The $`SU(2)_k`$ character in the representation with spin $`j`$, $`\chi _j^{(k)}(\tau )`$, is decomposed in terms of parafermion characters $`\mathrm{\Omega }_{jm}^{(k)}(\tau )`$ and $`U(1)`$ characters<sup>9</sup> More precisely $`L_m^{(k)}=\frac{1}{\eta (q)}_{n\text{ZZ}}q^{k(n+m/k)^2}`$ is the character of an extended algebra that exists in $`c=1`$ CFT for rational $`R^2/2`$. See for a more detailed discussion. $`L_m^{(k)}(\tau )`$ as: $$\chi _j^{(k)}(\tau )=\frac{1}{2}\underset{2m=k+1}{\overset{k}{}}\mathrm{\Omega }_{jm}^{(k)}(\tau )L_m^{(k)}(\tau )$$ where it is understood that $`jm\text{ZZ}`$, and we have extended the range of $`m`$ for convenience to $`m=k+1,\mathrm{},k`$. By using the identification $$\mathrm{\Omega }_{j,m}^{(k)}=\mathrm{\Omega }_{\frac{k}{2}j,m\pm \frac{k}{2}}^{(k)}$$ one can always map $`(j,m)`$ to the fundamental range discussed around eq. (4.1). Thus each distinct state contributes twice to the r.h.s. of (B.1); hence the $`1/2`$. The monopole characters obtained after performing the chiral $`\text{ZZ}_N`$ twist (4.1) can be written as $$\chi _j^{\mathrm{monopole}}(\tau )=\frac{1}{2}\underset{m^{}}{}\underset{2m=k+1}{\overset{k}{}}\mathrm{\Omega }_{jm}^{(k)}(\tau )L_m^{}^{(k)}(\tau )$$ where (4.1) $`m^{}+mN\text{ZZ}`$, $`m^{}mN^{}\text{ZZ}`$. We wish to determine the modular properties of the monopole characters (B.1). Since we have imposed level matching, the transformation under $`\tau \tau +1`$ is the same as that of the corresponding $`SU(2)_k`$ character. We next show that the same is true for the transformation $`\tau 1/\tau `$. 
The $`SU(2)_k`$ characters satisfy \[22\] $$\chi _j^{(k)}(\frac{1}{\tau })=\sqrt{\frac{2}{k+2}}\underset{j^{}=0}{\overset{\frac{k}{2}}{}}\mathrm{sin}\left[\frac{\pi (2j+1)(2j^{}+1)}{k+2}\right]\chi _j^{}^{(k)}(\tau )$$ while for the $`U(1)`$ characters: $$L_m^{(k)}(\frac{1}{\tau })=\frac{1}{\sqrt{2k}}\underset{2l=k+1}{\overset{k}{}}e^{\frac{4\pi iml}{k}}L_l^{(k)}(\tau )$$ These transformation properties, and the definition of the parafermionic characters (B.1) give $$\mathrm{\Omega }_{jm}^{(k)}(\frac{1}{\tau })=\frac{1}{\sqrt{k(k+2)}}\underset{j^{}=0}{\overset{\frac{k}{2}}{}}\underset{m^{}=k+1}{\overset{k}{}}e^{\frac{4\pi imm^{}}{k}}\mathrm{sin}\left[\frac{\pi (2j+1)(2j^{}+1)}{k+2}\right]\mathrm{\Omega }_{j^{}m^{}}^{(k)}(\tau )$$ With these formulae in hand it is straightforward to compute the transformation properties of the monopole characters from their definition (B.1). After performing the various sums we find $$\chi _j^{\mathrm{monopole}}(\frac{1}{\tau })=\sqrt{\frac{2}{k+2}}\underset{2j^{}=0}{\overset{k}{}}\mathrm{sin}\left[\frac{\pi (2j+1)(2j^{}+1)}{k+2}\right]\chi _j^{}^{\mathrm{monopole}}(\tau )$$ Comparing to (B.1) we see that the transformation properties of the monopole characters at a given level are identical to those of the $`SU(2)`$ characters at the same level. Therefore, we can take any modular invariant theory with $`SU(2)_R\times SU(2)_L`$ affine symmetry, and replace the $`SU(2)`$ characters by monopole characters for one of the two worldsheet chiralities without spoiling modular invariance. 2. Supersymmetric case: In the supersymmetric case, characters depend on the spin structure and $`()^F`$ projection for the fermions. They can be distinguished<sup>10</sup> In the literature on $`N=2`$ models (see e.g. \[27,28\]) sectors labeled by $`s`$ have insertions of $`1\pm ()^F`$; they are obvious linear combinations of ours. by an index $`s=0,1,2,3`$; $`s=0,2`$ correspond to the Ramond sector with an insertion of $`()^{sF/2}`$. 
$`s=1,3`$ correspond to the NS sector with an insertion of $`()^{(s-1)F/2}`$. The monopole characters are $$\chi _{js}^{\mathrm{monopole}}(\tau )=\frac{1}{2}\underset{m,m^{}}{}\chi _{jms}^{N=2}(\tau )L_m^{}^{(k)}(\tau )\left(\frac{\mathrm{\Theta }_s}{\eta }\right)^{\frac{1}{2}}$$ with the sum over $`m`$, $`m^{}`$ running over the same range (and with the same identifications) as in (B.1). $`\chi _{jms}^{N=2}`$ is the $`N=2`$ minimal model character with the appropriate boundary conditions. $`\mathrm{\Theta }_s(\tau )`$ are related to the standard Jacobi $`\theta `$-functions by relabeling of indices: $`s=0,2`$ correspond to $`\theta _2`$, $`\theta _1`$, while $`s=1,3`$ correspond to $`\theta _3`$, $`\theta _4`$. The last two terms on the r.h.s. of (B.1) are the contributions of $`K^3`$ and $`\chi ^3`$. Again, level matching implies that (B.1) transforms under $`\tau \tau +1`$ in the same way as the untwisted theory. The transformation under $`\tau 1/\tau `$ is discussed next. In the untwisted $`SU(2)`$ WZW theory the characters are $$\chi _{js}^{SU(2)}(\tau )=\chi _j^{(k2)}(\tau )\chi _s^{(2)}(\tau )$$ where $`\chi _j^{(k2)}`$ are the $`SU(2)_{k2}`$ characters and $$\chi _s^{(2)}(\tau )=\left(\frac{\mathrm{\Theta }_s}{\eta }\right)^{3/2}$$ Using (B.1) and the transformation of the $`\mathrm{\Theta }`$-functions $$\frac{\mathrm{\Theta }_s}{\eta }(\frac{1}{\tau })=\underset{s^{}}{}𝒞_{ss^{}}\frac{\mathrm{\Theta }_s^{}}{\eta }(\tau ),$$ where the matrix elements of the symmetric matrix $`𝒞`$ are $`𝒞_{ss^{}}=1`$ for $`(s,s^{})=(0,3)`$, $`(2,2)`$, $`(1,1)`$ and $`𝒞_{ss^{}}=0`$ otherwise, one finds for the untwisted theory $$\chi _{js}^{\mathrm{SU}\left(2\right)}(\frac{1}{\tau })=\frac{1}{\sqrt{2k}}\underset{j^{}s^{}}{}𝒞_{ss^{}}\mathrm{sin}\left[\frac{\pi (2j+1)(2j^{}+1)}{k}\right]\chi _{j^{}s^{}}^{\mathrm{SU}\left(2\right)}(\tau ).$$ Returning to the monopole characters (B.1), the $`N=2`$ minimal model characters transform as $$\chi _{jms}^{N=2}(\frac{1}{\tau 
})=\frac{1}{\sqrt{2k}}\underset{j^{}m^{}s^{}}{}𝒞_{ss^{}}e^{\frac{4\pi imm^{}}{k}}\mathrm{sin}\left[\frac{\pi (2j+1)(2j^{}+1)}{k}\right]\chi _{j^{}m^{}s^{}}^{N=2}(\tau ).$$ Combining this with (B.1), (B.1) one finds that $$\chi _{js}^{\mathrm{monopole}}(\frac{1}{\tau })=\frac{1}{\sqrt{2k}}\underset{j^{}s^{}}{}𝒞_{ss^{}}\mathrm{sin}\left[\frac{\pi (2j+1)(2j^{}+1)}{k}\right]\chi _{j^{}s^{}}^{\mathrm{monopole}}(\tau )$$ in agreement with the transformation of the untwisted characters (B.1). Therefore, just like in the bosonic case, in any SCFT with an $`\widehat{SU(2)}`$ symmetry we can replace the left-moving supersymmetric affine $`SU(2)`$ sector with a monopole SCFT without spoiling modular invariance. References \[1\] T. Banks, W. Fischler, S. Shenker, and L. Susskind, hep-th/9610043, Phys. Rev. D55 (1997) 5112. \[2\] N. Seiberg, hep-th/9705221, Phys. Lett. 408B (1997) 98. \[3\] J. Maldacena, hep-th/9711200, Adv. Theor. Math. Phys. 2 (1998) 231. \[4\] O. Aharony, M. Berkooz, D. Kutasov, and N. Seiberg, hep-th/9808149. \[5\] S. Gubser, I. Klebanov, and A. Polyakov, hep-th/9802109, Phys. Lett. 428B (1998) 104. \[6\] E. Witten, hep-th/9802150, Adv. Theor. Math. Phys. 2 (1998) 253. \[7\] A. Giveon, D. Kutasov, and N. Seiberg, hep-th/9806194. \[8\] S. Elitzur, O. Feinerman, A. Giveon and D. Tsabar, hep-th/9811245. \[9\] M. Cvetič and A. Tseytlin, hep-th/9510097, Phys. Lett. 366B (1996) 95; hep-th/9512031, Phys. Rev. D53 (1997) 5619; K. Behrndt, I. Brunner, and I. Gaida, hep-th/9804159, Phys. Lett. 432B (1998) 310; hep-th/9806195. \[10\] I. Klebanov and A. Tseytlin, hep-th/9604166, Nucl. Phys. B475 (1996) 179; V. Balasubramanian and F. Larsen, hep-th/9604189, Nucl. Phys. B478 (1996) 199. \[11\] H. Boonstra, B. Peeters and K. Skenderis, hep-th/9803231, Nucl. Phys. B533 (1998) 127; M. Cvetič, H. Lu, and C.N. Pope, hep-th/9811107. \[12\] M. Cvetič and D. Youm, hep-th/9507090, Phys. Rev. D53 (1996) 584. \[13\] J. D. Brown and M. Henneaux, Comm. Math. Phys. 104 (1986) 207. \[14\] A. 
Strominger, hep-th/9712251, J. High Energy Phys. 02 (1998) 009. \[15\] V. Balasubramanian and F. Larsen, hep-th/9802198, Nucl. Phys. B528 (1998) 229. \[16\] F. Larsen, hep-th/9805208. \[17\] J. de Boer, hep-th/9806104. \[18\] A. Cadavid, A. Ceresole, R. D’Auria, and S. Ferrara, hep-th/9506144, Phys. Lett. 357B (1995) 76; G. Papadopoulos and P. Townsend, hep-th/9506150, Phys. Lett. 357B (1995) 300. \[19\] M. Wakimoto, Comm. Math. Phys. 104 (1986) 605. \[20\] K. Ito, hep-th/9811002; M. Bershadsky, unpublished. \[21\] K. Gawedzki, hep-th/9110076, in NATO ASI: Cargese 1991: 247-274. \[22\] P. Di Francesco, P. Mathieu, and D. Senechal, Conformal Field Theory, Springer, New York (1997). \[23\] J. Maldacena and A. Strominger, hep-th/9804085. \[24\] S. Deger, A. Kaya, E. Sezgin, and P. Sundell, hep-th/9804166. \[25\] S. Giddings, J. Polchinski, and A. Strominger, hep-th/9305083, Phys. Rev. D48 (1993) 5784. \[26\] C. Johnson, hep-th/9403192, Phys. Rev. D50 (1994) 4032; hep-th/9406069, Phys. Rev. D50 (1994) 6512. \[27\] D. Gepner, in proceedings, Superstrings, 238-302, Trieste, 1989. \[28\] B. Greene, hep-th/9702155, in proceedings of TASI 1996, Boulder, CO. \[29\] S. Coleman, The Magnetic Monopole Fifty Years Later, in proceedings, Int. School of Subnuclear Physics, Erice 1981; Summer School in Theoretical Physics, Les Houches 1981; Summer Institute on Particles and Fields, Banff 1981. \[30\] A. Dabholkar and J. Harvey, Phys. Rev. Lett. 63 (1989) 478; A. Dabholkar, G. Gibbons, J. Harvey and F. Ruiz Ruiz, Nucl. Phys. B340 (1990) 33.
no-problem/9812/physics9812019.html
ar5iv
text
# Application of time-dependent density functional theory to optical activity ## I Introduction The time-dependent local density approximation (TDLDA) as well as the time-dependent Hartree-Fock theory has been applied to the optical absorption of atomic and molecular systems with considerable success [1-14]. Here we want to see how well the TDLDA method does on a more subtle aspect of the optical response, the optical activity of chiral molecules. Calculation of circular dichroism and especially optical rotatory power is more challenging because the operators that must be evaluated are more sensitive to the approximations on the wave function than the electric dipole in the usual form $`e\stackrel{}{r}`$. Nevertheless, we anticipate that TDLDA may be a useful theory because the operators are still of the form of single-electron operators. The TDLDA is derived by optimizing a wave function constructed from (time-dependent) single-particle wave functions, so its domain of validity is the one-particle observables. In our implementation of TDLDA, we represent the electron wave function on a uniform spatial grid. The real-time evolution of the wave function is directly calculated and the response functions are calculated by the time-frequency Fourier transformation. The method respects sum rules and the Kramers-Kronig relation between the circular dichroism and optical rotatory power. Since the grid representation is bias-free with respect to electromagnetic gauge, it is not subject to the gauge difficulties encountered when the space of the wave function is constructed from an atomic orbital representation. Optical activity has been a challenging problem for computational chemistry, but there has been considerable progress in recent years. For example, Carnell et al. present a good description of the circular dichroism of excited states of R-methyloxirane using a standard Gaussian representation of the wave function.
The optical rotatory power is a much more difficult observable, since the whole spectrum contributes. Only very recently have ab initio calculations been reported for this property. After presenting our calculational method, we report our exploratory study on optical activities of two chiral molecules: R-methyloxirane, a simple 10-atom molecule with known chiroptical properties up to the first few excited states, and C<sub>76</sub>, a fullerene with very large optical rotatory power and significant circular dichroism in the visible and UV. ## II Formalism ### A Some definitions Polarization of a chiral molecule in an applied electromagnetic field is expressed using two coefficients $`\alpha `$ and $`\beta `$ as $$\stackrel{}{p}=\alpha \stackrel{}{E}-\frac{\beta }{c}\frac{\partial \stackrel{}{H}}{\partial t}.$$ (1) Here $`\alpha `$ is the usual polarizability and is given microscopically as $$\alpha (E)=e^2\sum _n\left(\frac{1}{E_{n0}-E-i\delta }+\frac{1}{E_{n0}+E+i\delta }\right)\frac{1}{3}|\langle \mathrm{\Phi }_0|\sum _i\stackrel{}{r}_i|\mathrm{\Phi }_n\rangle |^2,$$ (2) where $`\mathrm{\Phi }_n`$ and $`E_n`$ are the eigenvector and eigenvalue of the $`n`$-th eigenstate of the many-body Hamiltonian $`H`$, $`H\mathrm{\Phi }_n=E_n\mathrm{\Phi }_n`$, and $`E_{n0}=E_n-E_0`$. Here $`\delta `$ is an infinitesimal positive quantity.
Employing the oscillator strength $$f_n=\frac{2mE_n}{\hbar ^2}\frac{1}{3}|\langle \mathrm{\Phi }_0|\sum _i\stackrel{}{r}_i|\mathrm{\Phi }_n\rangle |^2,$$ (3) we define the optical absorption strength whose integral is normalized to the active electron number, $$S(E)=\sum _n\delta (E-E_n)f_n.$$ (4) It is related to the imaginary part of the polarizability, $$S(E)=\frac{2mE}{\hbar ^2e^2}\frac{\mathrm{Im}\alpha (E)}{\pi }.$$ (5) The basic quantity which characterizes the chiroptical transition is the rotational strength defined by $$R_n=\frac{e^2\hbar }{2mc}\langle \mathrm{\Phi }_0|\sum _i\stackrel{}{r}_i|\mathrm{\Phi }_n\rangle \cdot \langle \mathrm{\Phi }_n|\sum _i\stackrel{}{r}_i\times \stackrel{}{\nabla }_i|\mathrm{\Phi }_0\rangle .$$ (6) We define the complex rotational strength function, $$\mathcal{R}(E)=\sum _n\left(\frac{1}{E_{n0}-E-i\delta }-\frac{1}{E_{n0}+E+i\delta }\right)R_n.$$ (7) The beta function in eq.(1) is related to $`\mathcal{R}(E)`$ by $`\beta (E)=\frac{\hbar c}{3E}\mathcal{R}(E)`$. We will also use the rotational strength function $`R(E)`$ defined by $$R(E)=\sum _n\delta (E-E_n)R_n=\frac{\mathrm{Im}\mathcal{R}(E)}{\pi }.$$ (8) As is seen below, the optical rotatory power is proportional to the real part of $`\beta (E)`$, and the circular dichroism to $`R(E)`$. They are related to each other by Moscowitz's generalized Kramers-Kronig relation. The difference of complex indices of refraction for left and right circularly polarized light is proportional to $`\mathcal{R}`$ in dilute media; the relation is $$n_L-n_R=\frac{8\pi N_1}{3}\mathcal{R}(E),$$ (9) where $`N_1`$ is the number of molecules per unit volume. For comparison with experiment, the common measure of circular dichroism is the decadic extinction coefficient, given by $$\mathrm{\Delta }ϵ=\frac{4\pi }{\lambda _{cm}C\mathrm{log}_e10}\mathrm{Im}(n_L-n_R),$$ (10) where $`C`$ is the concentration of molecules in moles/liter and the subscript on the wavelength $`\lambda `$ is a reminder to express it in centimeters.
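The strength functions (4) and (8) are formally sums of delta functions; in a numerical realization each delta is broadened into a normalized Lorentzian, and the integral of $`S(E)`$ then recovers the total oscillator strength, as stated below eq. (4). A toy sketch of this (all energies and strengths below are invented for illustration):

```python
import math

# Toy spectrum: excitation energies (eV) and oscillator strengths.
# All values are invented for illustration.
E_n = [6.0, 6.5]
f_n = [0.05, 0.08]
gamma = 0.2  # Lorentzian broadening width (eV)

def S(E):
    """Eq. (4) with each delta function replaced by a normalized Lorentzian."""
    return sum(f * (gamma / (2.0 * math.pi)) / ((E - En) ** 2 + gamma ** 2 / 4.0)
               for En, f in zip(E_n, f_n))

# Integrating S(E) over the spectrum recovers the total oscillator strength.
dE = 0.001
total = sum(S(i * dE) for i in range(int(100.0 / dE))) * dE
print(abs(total - sum(f_n)) < 0.005)  # True
```

The same broadening applied to $`R_n`$ gives the rotational strength function $`R(E)`$ of eq. (8).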
The optical rotatory power is conventionally reported as $$[\alpha ]=180^{\circ }\frac{10}{C_{gm}\lambda }\mathrm{Re}(n_L-n_R),$$ (11) where $`C_{gm}`$ is the concentration of molecules in gm/cm<sup>3</sup>. ### B Real-time TDLDA We first rewrite the above strength functions as time integrations. We employ the time-dependent wave function $`\mathrm{\Psi }(t)=\mathrm{exp}[-iHt/\hbar ]\mathrm{\Psi }(0)`$ with the initial wave function at $`t=0`$ given by $`\mathrm{\Psi }(0)=\mathrm{exp}[ik\sum _iz_i]\mathrm{\Phi }_0`$, where $`k`$ is a small wave number. In the linear response regime, the time-dependent polarizability is proportional to the dipole matrix element, $`z(t)=\langle \mathrm{\Psi }(t)|\sum _iz_i|\mathrm{\Psi }(t)\rangle `$. The frequency-dependent polarizability in the $`z`$ direction is then obtained as the time-frequency Fourier transformation of $`z(t)`$, $$\alpha _z(E)=\frac{e^2}{k}\int _0^{\infty }\frac{dt}{\hbar }e^{i(E+i\delta )t/\hbar }z(t).$$ (12) The polarizability $`\alpha (E)`$ is given by the orientational average, $`\alpha =(\alpha _x+\alpha _y+\alpha _z)/3`$. Similarly, we denote the angular momentum expectation value as $`L_z(t)=\langle \mathrm{\Psi }(t)|i(\stackrel{}{r}\times \stackrel{}{\nabla })_z|\mathrm{\Psi }(t)\rangle `$.
To linear order in $`k`$, we may express it as $$L_z(t)=2k\sum _n\mathrm{cos}\left(\frac{E_{n0}t}{\hbar }\right)\langle \mathrm{\Phi }_0|\sum _iz_i|\mathrm{\Phi }_n\rangle \langle \mathrm{\Phi }_n|\sum _i(\stackrel{}{r}_i\times \stackrel{}{\nabla }_i)_z|\mathrm{\Phi }_0\rangle .$$ (13) The complex rotatory strength function $`\mathcal{R}(E)`$ is expressed as, $`\mathcal{R}_z(E)`$ $`=`$ $`{\displaystyle \frac{e^2\hbar }{2mc}}{\displaystyle \frac{i}{k}}{\displaystyle \int _0^{\infty }}{\displaystyle \frac{dt}{\hbar }}e^{i(E+i\delta )t/\hbar }L_z(t)`$ (14) $`=`$ $`{\displaystyle \frac{e^2\hbar }{2mc}}{\displaystyle \sum _n}\left({\displaystyle \frac{1}{E_{n0}-E-i\delta }}-{\displaystyle \frac{1}{E_{n0}+E+i\delta }}\right)\langle \mathrm{\Phi }_0|{\displaystyle \sum _i}z_i|\mathrm{\Phi }_n\rangle \langle \mathrm{\Phi }_n|{\displaystyle \sum _i}(\stackrel{}{r}\times \stackrel{}{\nabla })_z|\mathrm{\Phi }_0\rangle .`$ (15) $`\mathcal{R}(E)`$ of eq.(7) is the sum over three directions, $`\mathcal{R}=\mathcal{R}_x+\mathcal{R}_y+\mathcal{R}_z`$. In the time-dependent local density approximation, the time-dependent wave function $`\mathrm{\Psi }(t)`$ is approximated by a single Slater determinant. We prepare the initial single-electron orbitals as $`\psi _i(0)=\mathrm{exp}[ikz]\varphi _i`$, where $`\varphi _i`$ are the static Kohn-Sham orbitals in the ground state. The $`\psi _i(t)`$ follow the time-dependent Kohn-Sham equation, $$i\hbar \frac{\partial }{\partial t}\psi _i(t)=\left\{-\frac{\hbar ^2}{2m}\nabla ^2+\sum _aV_{ion}(\stackrel{}{r}-\stackrel{}{R}_a)+e^2\int d\stackrel{}{r}^{}\frac{\rho (\stackrel{}{r}^{},t)}{|\stackrel{}{r}-\stackrel{}{r}^{}|}+\mu _{xc}(\rho (\stackrel{}{r},t))\right\}\psi _i(t),$$ (16) where $`V_{ion}`$ is the electron-ion potential and $`\mu _{xc}`$ is the exchange-correlation potential. The time-dependent density is given by $`\rho (\stackrel{}{r},t)=\sum _i|\psi _i(\stackrel{}{r},t)|^2`$. The time-dependent dipole moment may be evaluated as $`z(t)=\sum _i\langle \psi _i|z|\psi _i\rangle `$ and similarly for $`L_z(t)`$. The strength functions are then evaluated with eqs.(12) and (14).
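The time-to-frequency step of eq. (12) can be illustrated with a synthetic one-mode dipole signal. Everything below (amplitude, excitation energy, units with $`\hbar =e=1`$) is invented for illustration, and the grid is coarser than the production values quoted later:

```python
import math, cmath

# Synthetic one-mode dipole signal: after a weak boost exp(ikz), the
# linear response oscillates at the excitation energy of the system.
E0 = 6.0              # toy excitation energy (eV)
k = 1.0e-3            # boost wave number
A = 0.1 * k           # toy oscillation amplitude, proportional to k
dt, T = 0.005, 50.0   # time step and propagation length (hbar/eV)

def z(t):
    return A * math.sin(E0 * t)

def alpha(E, delta=0.1):
    """Eq. (12) as a Riemann sum: (1/k) * sum_t exp(i(E+i*delta)t) z(t) dt."""
    return sum(cmath.exp(1j * (E + 1j * delta) * (i * dt)) * z(i * dt) * dt
               for i in range(int(T / dt))) / k

# Im alpha(E) peaks at the excitation energy carried by the signal.
energies = [4.0 + 0.05 * i for i in range(81)]
peak = max(energies, key=lambda E: alpha(E).imag)
print(abs(peak - E0) < 0.1)  # True
```

The small positive `delta` plays the role of the filter function mentioned later: it damps the finite-time signal and gives the spectral peaks a finite width.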
### C Sum rules According to the TRK sum rule, the integral of $`S(E)`$ is equal to the number of active electrons $`N`$. This sum rule is respected by the TDLDA. It also appears in the short-time behavior of $`z(t)`$ as $$z(t)=N\frac{\hbar k}{m}t\quad (t\text{ small}).$$ (17) For the rotational strength, we define energy-weighted sums as $$R^{(n)}=\int _0^{\infty }dE\,E^nR(E).$$ (18) It is known that $`R^{(n)}`$ for $`n\le 4`$ vanishes identically in the exact dynamics . The vanishing of $`R^{(0)}`$ in the time-dependent Hartree-Fock theory was first noticed in ref. . The short-time behavior of $`L(t)=L_x(t)+L_y(t)+L_z(t)`$ is related to $`R^{(n)}`$ as $$\frac{e^2\hbar }{2mc}L(t)=2k\left\{R^{(0)}-\frac{R^{(2)}}{2!}\left(\frac{t}{\hbar }\right)^2+\frac{R^{(4)}}{4!}\left(\frac{t}{\hbar }\right)^4-\frac{R^{(6)}}{6!}\left(\frac{t}{\hbar }\right)^6+\mathrm{\dots }\right\}$$ (19) Here we note that $`L(t)`$ is an even function of $`t`$ as seen in eq.(13). Since $`R^{(n)}=0`$ for $`n\le 4`$, we see that $`L(t)`$ behaves as $`t^6`$ for small time. $`L_i(t)(i=x,y,z)`$ behave as $`t^2`$ at small $`t`$ and the cancelation up to $`t^4`$ order occurs after summing up three directions. In the TDLDA dynamics, we confirmed that at least the $`t^0`$ and $`t^2`$ coefficients of $`L(t)`$, namely $`R^{(0)}`$ and $`R^{(2)}`$, vanish identically. ### D Numerical detail The TDLDA uses the same Kohn-Sham Hamiltonian as is used in ordinary static LDA calculations. As is common in condensed matter theory, we use pseudopotentials that include the effects of $`K`$-shell electrons rather than include these electrons explicitly. The pseudopotentials are calculated by the prescriptions of ref. and . We employ the simple exchange-correlation term proposed in ref. . There are improved terms now in use, but it was not deemed important for our application. There are many numerical methods to solve the equations of TDLDA.
Ours uses a Cartesian coordinate space mesh to represent the electron wave functions, and the time evolution is calculated directly. There are only four important numerical parameters in this approach: the mesh spacing, the number of mesh points, the time step in the time integration algorithm, and the total length of time that is integrated. We have previously found that the carbon molecules can be well treated using a mesh spacing of 0.3 Å. We find 0.25 Å is necessary for methyloxirane to represent accurately the orbitals around oxygen. We take the spatial volume to be a sphere of radius 8 Å both for methyloxirane and C<sub>76</sub> presented below. The total number of mesh points, defining the size of the vector space, is about $`4\pi R^3/(3\mathrm{\Delta }x^3)\approx 80,000`$ (140,000) for a mesh size of 0.3 Å (0.25 Å). The algorithm for the time evolution is quite stable as long as the time step $`\mathrm{\Delta }t`$ satisfies $`\mathrm{\Delta }t<\hbar /|H|`$, where $`|H|`$ is the maximum eigenvalue of the Hamiltonian. This is mainly dependent on the mesh size. For $`\mathrm{\Delta }x=0.3`$ Å, we find that $`\mathrm{\Delta }t=0.002\hbar `$/eV is adequate. We integrate the equation of motion for a length of time $`T=60\hbar `$/eV for C<sub>76</sub> ($`50\hbar `$/eV for methyloxirane). Then individual states can be resolved if their energy separation satisfies $`\mathrm{\Delta }E>2\pi \hbar /T\approx 0.1`$ eV. Our numerical implementation, grid representation of the wave function and the time-frequency Fourier transformation for the response calculation, has several advantages over the usual approach using basis functions centered at the ions. They include: (1) The full spectrum over a wide energy region may be calculated at once, and it respects sum rules. The non-locality of the pseudopotential may cause a violation of the sum rule, but the effect is small in the present systems.
(2) Since the circular dichroism $`R(E)`$ and the optical rotatory power, the real part of $`\beta (E)`$, are calculated as the Fourier transformation of a single function $`L(t)`$, the Kramers-Kronig relation is automatically satisfied. (3) The gauge independence of the results is satisfied to high accuracy. Employing the commutation relation $`[H,\sum _i\stackrel{}{r}_i]=-\frac{\hbar ^2}{m}\sum _i\stackrel{}{\nabla }_i`$, there are alternative expressions for optical transitions with the gradient operator instead of the coordinate. For the rotational strength, for example, $$R_n=\frac{e^2\hbar ^3}{2m^2cE_{n0}}\langle \mathrm{\Phi }_0|\sum _i\stackrel{}{\nabla }_i|\mathrm{\Phi }_n\rangle \cdot \langle \mathrm{\Phi }_n|\sum _i\stackrel{}{r}_i\times \stackrel{}{\nabla }_i|\mathrm{\Phi }_0\rangle .$$ (20) The strength function with this expression may be calculated with the initial wave function $`\psi _i(0)=\mathrm{exp}[d\partial _z]\varphi _i`$ with small displacement parameter $`d`$. Since the grid representation of the wave function does not have any preference on the gauge condition, our method gives almost identical results for the coordinate and gradient expressions of the dipole matrix elements. ## III R-methyloxirane The geometry of R-methyloxirane is shown in Fig. 1. We use the same nuclear distances as in ref. . We show in Fig. 2 the results of the static calculation for the orbital energies. We find a LUMO 6.0 eV above the HOMO, and a triplet of unoccupied states 0.5 eV higher. In our calculation the lowest unoccupied orbitals have a diffuse, Rydberg-like character, $`s`$-wave for the lower and $`p`$-wave for the upper, as in previous calculations. The HOMO is localized in the vicinity of the oxygen atom, and the measured absorption strength seen at 7.1 and 7.7 eV is attributed to the excitation of a HOMO electron to the diffuse states. In the TDLDA, the excitation energy comes out close to the orbital difference energies, except for strongly collective states.
Indeed we find in the TDLDA calculation that the excitations are within 0.1 eV of the HOMO-LUMO energy and the energy difference for the next state above the LUMO. This is one eV less than the experimental values. It is known that the LDA energy functional that we use tends to overbind excitations close to the ionization threshold. There are improved energy functionals that rectify this problem, but for this work we judged the error not important. The next property we examine is related to the electric dipole matrix element, namely the oscillator strength $`f`$ associated with the transition. The optical absorption strength is shown in Fig. 3. The total strength up to 100 eV excitation is f=22.4, which is 93% of the sum rule for the 24 active electrons. Notice the lowest two peaks, centered at 6.0 and 6.5 eV. These are the states we are interested in. Their oscillator strengths are given in Table I. We see that the states are both weak, less than a tenth of a unit. The effect of the time-dependent treatment is to lower the strengths by 30-50%. This is the well-known screening effect associated with virtual transitions of more deeply bound orbitals. We find that the computed transition strengths are within a factor of two of the measured ones. Typically, the TDLDA does somewhat better than this, but most of the experience has been with transitions carrying at least a tenth of a unit of oscillator strength. The original theoretical calculation gave very poor transition strengths, off by more than an order of magnitude. Unfortunately, the more recent study did not include theoretical transition strengths. We numerically confirmed that our method gives almost identical results with the coordinate and gradient expressions of the dipole matrix elements, as we noted in the previous section. However, exceptionally, the oscillator strengths of the very weak features discussed above suffer a substantial dependence on the expression.
With the gradient formula for the transition matrix elements, the strengths of both the first and second transitions are larger by about a factor of two than with the coordinate expression. Since the gradient formula emphasizes high-momentum components more heavily, we think the results with coordinate matrix elements may be more reliable for low transitions, and we quote them in Table I. We now turn to the chiroptical response. Fig. 4 shows the short-time behavior of $`L_x(t)`$ and the sum of the three Cartesian components $`L(t)=\sum _iL_i(t)`$. An initial perturbation of $`k=0.001`$ Å<sup>-1</sup> is employed. To within numerical precision, $`L_x(t)`$ (solid line) grows with time as $`t^2`$, as discussed below eq.(19). The same is true for the other two components, $`L_y`$ and $`L_z`$. This shows that the numerical algorithm respects the first sum rule. The combined response, $`L(t)`$ (dashed line) shows an extreme cancelation at short times, as required by the additional sum rules. However our numerical accuracy does not allow us to determine the order of the power behavior. The evolution of $`L(t)`$ for larger times is shown in Fig. 5. Physically, the TDLDA breaks down at long times because of coupling to other degrees of freedom. A typical width associated with such couplings is of the order of a tenth of an eV, implying that the responses damp out on a time scale of $`T\approx 10\hbar `$/eV. We note that the TDLDA algorithm itself is very stable, and allows us to integrate to much larger times and obtain very sharp theoretical spectra. We next show the Fourier transform of the chiroptical response. The circular dichroism spectrum calculated with eq. (14) is shown in Fig. 6. Here we have integrated $`L`$ to $`T=50\hbar `$/eV, including a filter function in the integration to smooth the peaks. One can see that the $`s`$- and the $`p`$-transitions are clearly resolved, although the three $`p`$-transitions are not resolved from each other (as is the experimental case).
The $`s`$-transition has a negative circular dichroism and the $`p`$-transition a positive one. Integrating over the peaks, the strengths of the two peaks are -0.0014 and +0.0014 Å<sup>3</sup>-eV, respectively. The strengths are commonly quoted in cgs units; the conversion factor is 1 eV-Å<sup>3</sup> = 1.609 $`\times 10^{-36}`$ erg-cm<sup>3</sup>. The values in cgs units are given in Table I, compared to experiment and previous calculations. We find the signs are correctly given, but the values are somewhat too high, by a factor of 2 or 3. The calculation of ref. gave a result within the experimental range for the $`p`$-multiplet but too small (by a factor of 2) for the $`s`$-transition. Thus we find that the TDLDA has a somewhat poorer accuracy in this case. Next we consider the optical rotatory power. It could be calculated as the real part of the Fourier transformation, eq.(14). In practice, however, we found that the calculation employing the Kramers-Kronig relation to the rotational strength function, $$\mathrm{Re}\beta (E)=\frac{2}{3}\hbar c\int _0^{\infty }dE^{\prime }\frac{R(E^{\prime })}{E^{\prime 2}-E^2},$$ (21) gives a more accurate result, especially at energies below the lowest transition. The measurement is available at the sodium D-line, 2.1 eV: $`[\alpha ]_D=+14.65^{\circ }`$, which gives $`\beta =+0.0017`$ Å<sup>4</sup>. The calculated value at low energy is very sensitive to the number of states included in the sum. Fig. 7 shows the calculated value as a function of a cutoff energy, the upper bound in the integration in eq.(21). The value taking only the contribution of the lowest transition is -0.06. Including more states produces values that fluctuate in sign and magnitude within that range. Including all states below 100 eV gives a cancelation by a factor of 60 to yield a value $`\beta =-0.001`$ Å<sup>4</sup>. This has the opposite sign but the same order of magnitude as the measured $`\beta `$.
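The truncation sensitivity described above is easy to reproduce with a toy spectrum whose $`R_n`$ sum to zero, mimicking the $`R^{(0)}`$ sum rule; all numbers below are invented:

```python
# Toy illustration of the cancelation in the truncated sum over states:
# alternating rotational strengths that sum to zero, evaluated at an
# energy below the lowest transition.
E_n = [2.0 + 0.25 * i for i in range(80)]   # state energies (eV)
R_n = [(-1) ** i for i in range(80)]        # R^(0) = sum R_n = 0
E = 1.0

terms = [r / (En ** 2 - E ** 2) for En, r in zip(E_n, R_n)]
full = sum(terms)                 # all states included
lowest_only = terms[0]            # truncating after the first transition
cancel = sum(abs(t) for t in terms) / abs(full)

# The converged value is much smaller than the lowest-state estimate and
# arises from a strong cancelation between positive and negative terms.
print(cancel > 5, abs(full) < abs(lowest_only))  # True True
```

In such a sum, relative accuracy of the final value degrades in proportion to the cancelation factor, which is the difficulty noted in the text.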
Clearly, to get high relative accuracy with such a strong cancelation is more demanding than our TDLDA can provide. ## IV C<sub>76</sub> Remarkably, it is possible to separate the chiral partners of the double-helical fullerene C<sub>76</sub> using stereospecific chemistry. The molecule shows a huge optical rotatory power, $`[\alpha ]_D=4000^{\circ }`$, and a complex circular dichroism spectrum between 2 and 4 eV excitation. Only a semi-empirical quantum chemistry calculation has been reported for the optical activity of this molecule. We first remark on the geometry of the molecule, which has a chiral $`D_2`$ symmetry. The accepted geometry is depicted in ref.; it may be understood as follows. We start with C<sub>60</sub>, in which all carbons lie on pentagons. Group the pentagons into triangles and divide the fullerene in half, keeping two adjacent triangles of pentagons intact in each half. The “peel” of six pentagons already has a chiral geometry, dependent on the relative orientation of its two triangles of pentagons. The C<sub>76</sub> is constructed by adding 16 carbon atoms between the split halves of the C<sub>60</sub>. The added carbon atoms lie entirely on hexagons which form a complete band around the fullerene. The inserted band has the geometry of an (8,2) chiral buckytube. The result is then the chiral C<sub>76</sub>. Our calculations are performed on a right-handed C<sub>76</sub>, in the sense that the band of hexagons corresponds to a right-handed buckytube. This is the same convention as used in ref. , their Fig. 3d. The C<sub>76</sub> has 152 occupied spatial orbitals. We show the orbitals near the Fermi level in Fig. 8. The HOMO-LUMO gap is only 0.9 eV, and there are many transitions in the optical region. In Fig. 9 we show the optical absorption strength function for the range 0-50 eV. Smoothing is applied with a width of 0.2 eV in the Fourier transformation.
A concentration of strength is apparent at 6 eV excitation; there is a similar peak in graphite and C<sub>60</sub> which is associated with $`\pi \pi ^{\ast }`$ transitions. The strong, broad peak centered near 20 eV is associated with $`\sigma \sigma ^{\ast }`$ transitions and is also present in C<sub>60</sub>. The feature at 13 eV is not present in C<sub>60</sub>, however. In the next figure, Fig. 10, we show a magnified view of the absorption at low energy. We also compare the TDLDA strength with the single-electron strength, smoothed also by convolution with a Breit-Wigner function of width $`\mathrm{\Gamma }=0.2`$ eV. The TDLDA has a strong influence on the strength distribution, decreasing the total strength in the low-energy region and concentrating it in the 6 eV peak. The experimental absorption strength (with arbitrary normalization) is shown as the dashed line. It agrees with the TDLDA quite well. We next examine the circular dichroism spectrum. Fig. 11 shows the rotatory strength function between 0 and 50 eV. As in the case of methyloxirane, it is irregular, without any large-scale structures. Its integral is zero to an accuracy of 0.001 eV-Å<sup>3</sup>. The low-energy region is shown in Fig. 12. Here one sees qualitative similarities between theory and experiment. The sharp negative theoretical peak at 1.8 eV corresponds to an experimental peak at 2.2 eV. Shifting the higher spectra by that amount (0.6 eV), one sees a correspondence between the next positive and negative excursions. We note that a similar shift in the excitation energy was also seen in the optical absorption of C<sub>60</sub> between the measurement and the TDLDA calculation. Our theoretical optical rotatory power is plotted in Fig. 13. The situation here is different from the methyloxirane, in that the rotatory power is large in a region where there are allowed transitions.
The measured optical rotatory power, $`[\alpha ]_D=4000^{\circ }`$ at 2.1 eV, corresponding to $`\beta =7.3`$ Å<sup>4</sup>, is shown as the star. It does not agree with theory, but we should remember that the spectrum needs to be shifted by 0.6 eV to reproduce the circular dichroism. Adjusting the theoretical spectrum upward by that amount, we find that it is consistent in sign and order of magnitude with the measurement. Since the optical rotatory power in the region of allowed transitions changes rapidly with excitation energy, a measurement of the energy dependence would be very desirable, and would allow a more rigorous comparison with the theory. ## V Concluding remarks We have presented an application of the time-dependent local density approximation to the optical activities of chiral molecules. Our method is based on the uniform grid representation of the wave function, the real-time solution of the time-dependent Kohn-Sham equation, and the time-frequency Fourier transformation to obtain the response functions. In this way we can calculate the optical absorption, circular dichroism, and the optical rotatory power over a wide energy region, respecting sum rules and the Kramers-Kronig relation. We applied our method to two molecules, methyloxirane and C<sub>76</sub>. For the lowest two transitions of methyloxirane, the TDLDA reproduces the absorption and rotational strengths to an accuracy within a factor of two. The qualitative features of the circular dichroism spectrum of C<sub>76</sub> are also reproduced rather well. However, the optical rotatory power is found to be a very sensitive function with strong cancelation. Even though we obtained the rotational strength over the full spectral region, it is still difficult to make a quantitative prediction of the optical rotatory power at low energies in our present approach. ## VI Acknowledgment We thank S. Saito for providing us with the coordinates of C<sub>76</sub>. We also thank him, S. Berry, and B. Kahr for discussions.
This work is supported in part by the Department of Energy under Grant DE-FG-06-90ER40561, and by the Grant-in-Aid for Scientific Research from the Ministry of Education, Science and Culture (Japan), No. 09740236. Numerical calculations were performed on the FACOM VPP-500 supercomputer at the Institute for Solid State Physics, University of Tokyo, and on the NEC SX4 supercomputer at the Research Center for Nuclear Physics (RCNP), Osaka University. ## Figure Captions Fig. 1. View of R-methyloxirane with hydrogen on the chiral carbon in the back (and not seen). The chirality is R because the three other bonds are arranged clockwise in the sequence O, CH<sub>3</sub>, CH<sub>2</sub>. Fig. 2. Static LDA orbitals in methyloxirane. Fig. 3. Optical absorption strength of methyloxirane in the energy region 0-50 eV. Fig. 4. Short-time chiroptical response of R-methyloxirane. Solid line is $`L_x(t)`$, dashed line is $`\sum _iL_i(t)`$. Fig. 5. Chiroptical response of R-methyloxirane $`L(t)`$ for longer times. Fig. 6. $`R`$ in R-methyloxirane in the interval 0-20 eV. Fig. 7. Optical rotatory power Re$`\beta `$ at $`E=2.1`$ eV as a function of the cutoff energy $`E_{max}`$ in the integration of eq. (21). Fig. 8. Static LDA orbitals in C<sub>76</sub> near the Fermi level. Fig. 9. Optical absorption spectrum of C<sub>76</sub> in the range 0-50 eV. Fig. 10. Optical absorption spectrum of C<sub>76</sub> in the range 0-8 eV. Dotted line is the single-electron strength, solid line the TDLDA, and dashed line experiment. Fig. 11. Circular dichroism spectrum of C<sub>76</sub>. Fig. 12. Circular dichroism spectrum of C<sub>76</sub> comparing theory (solid line) and experiment (dashed line). The experimental data are from ref. and are shown with arbitrary normalization. Fig. 13. Optical rotatory power of C<sub>76</sub>, given as Re$`\beta `$ in units of Å<sup>4</sup>. The cross is the measured value from $`[\alpha ]_D`$.
no-problem/9812/astro-ph9812360.html
ar5iv
text
# Calibrating UV Emissivity And Dust Absorption At 𝑧≈3 ## Introduction Most of the light from high mass stars is emitted in the ultraviolet (UV; $`\lambda \approx 1100`$-$`3000`$ Å), making it an attractive passband for tracing cosmic star formation evolution. This utility is accentuated with increasing redshift as the rest-frame UV emission enters the optical where modern detectors have quantum efficiencies approaching unity. Unfortunately, star formation occurs in a dusty environment, and dust efficiently absorbs and scatters UV radiation. This must also be the case in the early universe since dust has been observed in objects with $`z>4`$ (e.g. G97). The challenge of interpreting rest-frame UV emissivities is to devise an adequate prescription to account for dust absorption. Currently there is much debate in the literature on what the proper dust correction prescription is, resulting in different groups estimating $`\lambda =1600`$Å dust absorption factors ranging from a factor of about 3 (e.g. P98) to 20 SY97 at $`z\approx 3`$. The amount of high-$`z`$ dust absorption has a direct bearing on interpreting how galaxies evolve. Small dust corrections favor hierarchical models of galaxy formation, while large corrections favor monolithic collapse models MPD98. Here we consider the UV luminosity density at $`z\approx 3`$ derived mainly from the $`U`$-dropouts in the Hubble Deep Field (HDF) W96. Our technique M98 is based on the strong similarity between local starburst galaxies and Lyman-break systems (e.g. L97). Throughout this paper we adopt $`H_0=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`q_0=0.5`$. ## Method In earlier works M95 ; M97, we showed that for local UV-selected starburst galaxies, the ratio of far infrared (FIR) to UV fluxes correlates with UV spectral slope $`\beta `$ ($`f_\lambda \propto \lambda ^\beta `$ - $`\beta `$ is essentially an ultraviolet color). This is illustrated in Fig. 1.
Since $`F_{\mathrm{FIR}}`$ is dust-reprocessed UV flux, this empirical correlation can be used to recover the intrinsic UV flux from UV quantities alone. In addition, for starbursts, the $`y`$ axis can be transformed directly into a UV absorption M98. The fitted line is a simple linear fit to the transformed data of the form: $`A_{1600}\propto \beta -\beta _0`$. We selected our HDF $`U`$-dropout sample from a corner cut out of the $`U_{300}-B_{450}`$ versus $`V_{606}-I_{814}`$ color-color diagram ($`V_{606}-I_{814}<0.5`$; $`U_{300}-B_{450}\ge 1.3`$) and adopted the same magnitude limits as Madau et al. M96. We select in $`V_{606}-I_{814}`$ instead of $`B_{450}-I_{814}`$ M96 because (1) $`V_{606}`$ is less affected by the Lyman forest and edge than $`B_{450}`$, and (2) this selection yields a fairly even cutoff in $`\beta `$, and hence in $`A_{1600}`$. Note that our selection recovers high-$`z`$ galaxies in the “clipped corner” of the Madau et al. M96 selection area, and includes no known low-$`z`$ interlopers. We applied our absorption law fit to broad-band $`V_{606}-I_{814}`$ colors transformed into $`\beta `$. The transformation was derived from high-quality IUE spectra that were “redshifted” through the $`z=2`$ to 4 range of $`U`$-dropouts. The transformation is linear in color with a quadratic $`z`$ correction. The $`z`$ correction is needed to account for the Lyman forest and Lyman edge creeping into the $`V_{606}`$ band at high $`z`$. Figure 2 shows a test of our technique. It compares the ratio of (rest frame) optical emission line flux to UV continuum flux density for local starbursts, and seven $`U`$-dropouts P98 ; WP98 ; B97 ; B98. These ratios are not corrected for dust absorption. The overlap of the two samples indicates that $`U`$-dropouts are ionizing populations to the same degree as local starbursts. Hence their intrinsic UV spectrum should be similar. Pettini et al.
P98 claim that $`U`$-dropouts probably suffer from little dust absorption since they tend to have fairly low $`F_{\mathrm{H}\beta }/f_{1600}`$ values. However, this ratio can be misleading. In fact, $`F(\mathrm{line})/f_{1600}`$ is not a good indicator of dust absorption: it does not correlate strongly with $`\beta `$, which we know to be a good indicator of dust absorption (Fig. 1). This is the case for both the local and $`U`$-dropout samples. The reason for this was first proposed by Fanelli et al. FOT88: H II emission lines are seen through a larger column of dust than the general UV continuum, thus cancelling the expected benefit in opacity of observing in the optical instead of the UV. ## Results Figure 3 plots the absorption-corrected absolute AB magnitude of the HDF $`U`$-dropouts versus $`\beta `$. The broken lines show $`M_{1600\mathrm{\AA }}`$ in the absence of absorption correction. The data show an apparent color-luminosity correlation. This is in part due to the selection limits, but the lack of very luminous blue galaxies is real. This implies that there is a mass-metallicity relationship at $`z\approx 3`$. It also shows that the most luminous galaxies tend to have the most dust absorption. A similar color-luminosity correlation is seen in local starbursts H98. Summing the results for the HDF $`U`$-dropouts yields lower limits to the intrinsic UV emissivity, and hence the star formation rate density: $$\rho _{1600,0}\ge 1.5\times 10^{27}\mathrm{erg}\mathrm{s}^{-1}\mathrm{Hz}^{-1}\mathrm{Mpc}^{-3}$$ $$\rho _{\mathrm{SFR}}\ge 0.19M_{\odot }\mathrm{yr}^{-1}\mathrm{Mpc}^{-3}$$ We find that $`\rho _{1600,0}`$ is a factor of 9.2 higher than $`\rho _{1600}`$ first estimated by Madau et al. M96. This difference is due to two effects: the dust absorption correction (factor of 5.5), and the improved $`U`$-dropout selection (factor of 1.7).
These emissivities are still lower limits because we have made no completeness corrections, and because our $`V_{606}-I_{814}`$ selection is only sensitive to galaxies with $`A_{1600}\le 3.4`$ mag. Recently, Madau et al. MPD98 (see also Madau, this volume) have fit models to cosmological emissivity data covering rest wavelengths from the FIR to the UV and redshifts out to $`\sim 4`$. Their HDF $`U`$-dropout sample now has a selection similar to ours. Our $`\rho _{1600,0}`$ estimate for the $`U`$-dropouts is a factor of 2.5 larger than their preferred model, which simulates the hierarchical collapse scenario and which includes a small amount of dust absorption. However, it is only 30% larger than their “monolithic collapse” model. Hence, the initial phase of galaxy collapse was probably more rapid than predicted by hierarchical models and somewhat obscured from our view by at least modest amounts of dust.
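The broad-band color-to-$`\beta `$ transformation used in this analysis (linear in $`V_{606}-I_{814}`$ with a quadratic redshift correction) can be sketched as follows. The functional form follows the text; all coefficients below are hypothetical placeholders, not the values fitted from the redshifted IUE spectra, which are not reproduced here.

```python
# Sketch of the broad-band color -> UV slope transformation described in
# the text: linear in V606 - I814, with a quadratic z correction for the
# Lyman forest/edge entering the V606 band at high z. The coefficients
# a, b, c1, c2 are hypothetical placeholders, NOT the paper's fit.

def beta_from_color(v_minus_i, z, a=-2.0, b=2.5, c1=0.3, c2=0.2, z0=3.0):
    """Estimate the UV spectral slope beta for a U-dropout at redshift z."""
    dz = z - z0  # expand the correction around the middle of the z = 2-4 range
    return a + b * v_minus_i + c1 * dz + c2 * dz ** 2
```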
# Can streamer blobs prevent the buildup of the interplanetary magnetic field?

## 1 Introduction

Using the LASCO C2 and C3 coronagraphs on the SOHO spacecraft (Brueckner et al. 1995), Sheeley et al. (1997) discovered blobs of material moving outward in coronal helmet streamers. Helmet streamers consist of a bubble-like or arch-like wide body (the helmet) of closed magnetic field. Above the cusp, the pointed top of the helmet, lies a current sheet which extends radially outward, separating the two directions of open magnetic field around the helmet (see Fig. 1, top panel). We adopt the view that this streamer structure is formed by reconnection of open field lines in the current sheet that is left behind by a CME, as suggested by, e.g., Kopp (1992) and Kahler & Hundhausen (1992). Sheeley et al. (1997) observed that the blobs originate in the streamer current sheet, right above the cusp. They have an initial size of about $`1R_{\odot }`$ in the radial and $`0.1R_{\odot }`$ in the transverse direction. The blobs move radially outward along the streamer, with velocities increasing from about $`150\mathrm{km}\mathrm{s}^{-1}`$ near $`5R_{\odot }`$ to $`300\mathrm{km}\mathrm{s}^{-1}`$ near $`25R_{\odot }`$. Because of their relatively small initial sizes, low intensities ($`\mathrm{\Delta }I/I\sim 0.1`$), radial motions, slow but increasing velocities, and location in the streamer belt, Sheeley et al. (1997) conclude that these features passively trace the solar wind. Wang et al. (1998) carried out more detailed observations of the spatial distribution, relative intensities, and shapes of the blobs, and observed a rather steady occurrence rate of about four blobs per day. We believe that the creation of these blobs is connected to a longstanding problem in solar physics: maintaining a roughly constant amount of flux in the interplanetary magnetic field (IMF). Coronal Mass Ejections (CMEs) generally originate in regions of closed magnetic field.
These field lines are torn away from the Sun and stretched out into distended loops that extend well into interplanetary space (Hundhausen 1997). In principle, each consecutive CME thus introduces new flux into the heliosphere. This would result in an indefinite growth of the IMF, which is not observed (e.g. Gold 1962; Gosling 1975; MacQueen 1980; McComas, Gosling, & Phillips 1992). Apparently, some kind of reconnection takes place on the field lines that are torn away from the Sun, disconnecting the tops of the loops and returning new closed field lines to the Sun. The reconnection must start shortly after the CME’s departure, but it cannot be restricted to that first period. In situ measurements near and beyond 1 AU (e.g. Gosling, Birn, & Hesse 1995; Gosling 1996) show that most field lines within CMEs are still connected to the Sun at both ends. However, a few field lines are connected on only one end, while others are completely disconnected. Additionally, about one third of the CMEs exhibit a flux-rope topology, characterized by a series of helical field lines wrapped around a central axis. Both this helicity and the disconnected field lines can only be explained by magnetic reconnection behind the CME during the days before it reaches 1 AU (e.g. Gosling et al. 1995). On the other hand, to prevent an indefinite buildup of the IMF by field lines that are still connected to the Sun, reconnection must also keep taking place after the CME has passed 1 AU, several days after it has left the Sun. Three types of events show evidence for reconnection directly behind a departing CME. First, about one third of all CMEs are accompanied by long-duration (many hours) X-ray emission from expanding loops or arcades of loops (see Gosling 1993). Non-thermal emission at X-ray and radio wavelengths is often observed in conjunction with these events (Webb & Cliver 1995). A beautiful example of a reforming helmet streamer is presented by Hiei, Hundhausen, & Sime (1993). 
All these observations confirm the model of Kopp & Pneuman (1976) which explains hot loop formation by magnetic reconnection behind the CME. Second, there are some observations of moving metric type IV radio events which are interpreted as emission from electrons that have become trapped in disconnected plasmoids within CMEs (e.g. Kundu et al. 1989). Third, there are many coronagraph observations of large circular, ovoidal or outward U- or V-shaped structures that are usually interpreted as disconnected CMEs (e.g. Illing & Hundhausen 1983). Webb & Cliver (1995), analyzing all space-borne coronagraph and eclipse data up to then, concluded that possibly over 10% of all CMEs fall in this category. Unfortunately, due to their infrequency compared to CMEs, even these three mechanisms combined cannot explain the necessary amount of flux disconnection. An even bigger problem is posed by reconnection long after the CME. There have been suggestions that such reconnection would occur in interplanetary space itself (e.g. Wilcox 1971), but the new closed loops that would have to return to the Sun have not been observed (Gosling 1975; McComas et al. 1989). McComas et al. (1989, 1991) suggested that the large U-shaped disconnections described above are not related to a departing CME, but occur all by themselves across the heliospheric current sheet. However, triggering these large reconnections by emerging flux elsewhere in the corona seems rather ad hoc (Webb & Cliver 1995). Additionally, these events are still quite infrequent compared to CMEs. Evidently, the mechanism, or combination of mechanisms, that can disconnect enough flux to offset the constant flow of CMEs has not yet been identified. In this Letter, we suggest that the creation of blobs in coronal streamers is a mechanism for ongoing, small-scale reconnection that can disconnect open field lines and reform closed magnetic loops on the Sun.
Especially since the blobs occur not just shortly after CMEs, but also rather steadily under quiet coronal conditions, they may play an important role in maintaining the roughly constant IMF.

## 2 Mechanism

We suggest that the blobs are the result of reconnection between two open field lines from both sides of a streamer current sheet. Naturally, we speak about field lines only to visualize what is going on; in reality, a finite amount of magnetic field will be involved in such a reconnection event. The field line topologies before and after the reconnection are outlined in Fig. 1. Reconnection between the two innermost open field lines in the top panel creates two new field lines. The first, with both ends connected to the Sun, is a new closed loop which becomes part of the helmet (as in e.g. Kahler & Hundhausen 1992). The other one, with its ends extending into interplanetary space, will move outward. The blobs are formed by plasma from the streamer that is swept up in the trough of this outward moving field line (middle and bottom panels). Considering the exponential decay of the density in the streamer with radial distance, almost all of the material in the blob is collected right away, just above the cusp. The loop that is moving outward will at first attempt to do so at the Alfvén speed. However, it is immediately slowed down when it sweeps up the plasma on its way. Nevertheless, the blob of material in the field line will accelerate faster than the surrounding wind, at least until the Lorentz force is balanced by the pressure differential that is built up when the plasma is swept up into a blob or, alternatively, until $`\beta \sim 1`$ and the surrounding gas pressure starts to dominate the magnetic field. From then on, the field lines, and therefore the blobs, more or less flow along with their surroundings, tracing, as Sheeley et al. (1997) suggested, the slow solar wind. Fig. 1. The formation of a blob.
The top panel shows the initial situation: a coronal streamer consisting of a helmet of closed loops surrounded by open field lines separated by a current sheet. In the lower two panels, reconnection has taken place between the two innermost field lines from the top panel. One new field line is a closed loop which becomes part of the helmet. The other one is now disconnected and moves outward, as shown in the bottom panels. On its way, it sweeps up plasma from the current sheet, which forms a blob. Einaudi et al. (1998) and Dahlburg, Boncinelli, & Einaudi (1998) performed numerical simulations of a current sheet, suggesting that the fast solar wind flowing along a current sheet can trigger, among other things, a resistive instability, resulting in plasmoid formation. The initial situation in these simulations was a one-dimensional equilibrium, a plane current sheet, which is then destabilized by two-dimensional disturbances. The model does not incorporate the helmet and the beginning of the current sheet, i.e. the cusp. We believe, however, that reconnection will occur preferentially just there. First, by virtue of the magnetic topology, the cusp is the most natural location for reconnection. Second, we notice that when plasma flows outward along the open field lines on the side of the helmet, an inward “inertial pressure” will arise at the cusp, where the plasma has to change direction to follow the field lines, which bend to become aligned with the streamer stalk. This pressure pushes the open field lines a little closer together right above the cusp. The reconnection that follows produces the new loops, as described above. Further study of both the location of reconnection and the subsequent movement of the blobs is needed. Numerical simulations would have to include, among other things, the complete magnetic topology, the radially decreasing background density, and the acceleration of the surrounding solar wind.
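The statement in Section 2 that almost all of the blob material is collected right above the cusp follows directly from the exponential density profile: the swept-up column saturates within a few density scale heights. A minimal illustration (the scale height value below is arbitrary, not taken from the observations):

```python
import math

# Fraction of the total sweepable column of a density profile
# n(r) = n0 * exp(-(r - r0)/H) collected within delta_r above the cusp
# at r0. H is an arbitrary illustrative scale height.

def swept_fraction(delta_r, H):
    return 1.0 - math.exp(-delta_r / H)

H = 0.5  # illustrative scale height (same units as delta_r)
print(round(swept_fraction(0.5, H), 2))  # one scale height    -> 0.63
print(round(swept_fraction(1.5, H), 2))  # three scale heights -> 0.95
```

So roughly two thirds of the sweepable material is picked up within a single scale height of the cusp, consistent with the claim that the blob is assembled essentially at its point of origin.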
## 3 Evidence

Several features of the blobs support the proposed mechanism. The first observation is the acceleration pattern of the blobs. Sheeley et al. (1997) observed that, in general, the blobs exhibit a fairly constant acceleration. However, in a speed-height plot representing measurements of about 65 independent blobs, they observed a peculiar “corner” at about $`6`$ or $`7R_{\odot }`$. Although this corner could be an artifact, Sheeley et al. (1997) remark that it might be a valid indication of a somewhat steeper initial acceleration. We suggest that the outward magnetic forces of the newly formed field line which sweeps up the plasma could account for this extra acceleration. As soon as $`\beta >1`$ and the magnetic field no longer dominates the gas pressure, the field line just moves along with the slow solar wind, as Sheeley et al. (1997) suggested. Second, the blobs often exhibit a concave-outward V-shape when the plasma sheet in which they move outward is slightly inclined to the line of sight (Wang et al. 1998). This supports the idea that a field line, with a V- (or U-) shaped trough at its bottom, is sweeping up plasma. However, the fact that the V-shape only becomes visible when the sheet is seen at an angle calls for a further explanation, which is illustrated in Fig. 2. This figure represents the blob creation, but now seen both from our viewpoint, in the plane of the sky (bottom row), and from the solar pole, in the plane of the current sheet (top row). The two field lines shown in the top left panel correspond to the innermost open field lines in the bottom left panel. The outgoing magnetic flux rope has little room to expand in the direction perpendicular to the current sheet, which, by its very nature, is very thin and bounded by magnetic fields. Thus, when the current sheet is seen side-on, the swept-up plasma looks like a narrow blob, as shown in the bottom right panel.
However, there probably is a substantial azimuthal component to the magnetic field of the streamer (top left panel), e.g. due to differential rotation. This leads to a V-shape of the newly reconnected field line when seen from above. Consequently, from this point of view, the plasma that is swept up will be V-shaped as well (top right panel). As long as the current sheet is perpendicular to the plane of the sky, this does not affect our image of the blob; we simply see a projection that is still shaped like an elongated blob (as in the bottom right panel). However, when the sheet is inclined to the line of sight, the V-shape will become visible. We notice that an ESA Polar Orbiter at about 0.5 AU, a candidate mission for around 2007 (Priest et al. 1998), could observe the structures drawn in the top panels of Fig. 2. Fig. 2. The formation of a blob, seen from two directions. The bottom panels show the streamer in the plane of the sky, as we see it. The top panels show the same picture, but now seen from above. The two field lines in the top left panel correspond to the innermost open field lines in the bottom left panel. The two left panels show the situation before, the two right panels after the reconnection. Third, it should be noted that the blobs remain coherent all through the LASCO coronagraph’s field of view, out to $`30R_{\odot }`$. According to Sheeley et al. (1997), they maintain constant angular spans and increase their lengths in rough accord with their speeds. This appearance is consistent with our model of a density enhancement that is bounded on both sides by magnetic field lines, which maintain a roughly constant angular span when they extend more or less radially from the Sun (in the plane of the sky), and from below by the magnetic trough which initially sweeps up the material in the blob. Fourth, the blobs are not very bright, but do contain a considerable amount of material right from the moment they appear above the helmet.
Sweeping up the material in a field line that is moving outward in an exponentially decreasing density is a very effective way to accomplish that. Another way would be to collect the material in a closed field line in the helmet. Considering that there is no observational evidence for the complete eruption of such a closed field line, Wang et al. (1998) have suggested that the material is released along an open field line after a reconnection between a closed field line and a nearby open one. However, such a mechanism makes it much harder to explain the three observations mentioned above. Recent measurements of abundances in streamers by Raymond et al. (1997) with the UVCS provide a definite test for whether the blob material comes out of the closed helmet or, as we propose, from the current sheet. These measurements show that elements with a high first ionization potential, like oxygen, are underabundant by an order of magnitude in the closed helmet, but only by a factor three on the sides of the helmet, where the field lines are open. Measurements of the abundances in blobs would thus allow us to determine whether their plasma originates in the closed helmet or in the open field lines adjoining the current sheet. In situ measurements of the slow solar wind provide another test: if the blobs are formed in the helmet instead of above it, the slow solar wind should contain a component exhibiting the specific characteristics of that material (Wang et al. 1998). Fifth, Wang et al. (1998) observed that the creation of the blobs is sometimes accompanied by a downflow of material. They compare this with downflows in the aftermath of CMEs, accompanying the closing down of fields blown open during the event. Clearly, this fits our model very well. While the outward moving loop will usually be most visible, the new closed loop will also collect some material while it is moving downward to become part of the helmet. 
Finally, we suggest that additional observational evidence for this mechanism can be collected by looking at the development of a helmet after the departure of a blob. The reconnection between open field lines not only releases a field line, which then forms a blob, but also adds a new closed field line to the helmet, which should grow in size (with a little delay to allow it to be filled up with material from below). We notice that a measurement of this growth, combined with an estimate of the field strength at the bottom of the helmet, immediately yields the flux involved in the reconnection. Unfortunately, the solar rotation complicates this otherwise straightforward observational test. The additional field lines would presumably add at most a few percent to the size of the streamer. In coronagraph observations, such changes may just as well be the result of projection effects of structures that rotate into (or out of) view, aggravated by the contribution function for the visibility of material out of the plane of the sky (Hundhausen 1993). However, this problem can be overcome with a thorough statistical study of the relation between the expulsion of the blobs and the size and intensity of the helmet in a large number of streamers. We have not yet performed such a study, but preliminary observations of a smaller number of streamers yielded promising results. As an alternative to statistical studies, some of the problems of the solar rotation could also be overcome with the proposed NASA STEREO mission, to be flown from mid-2003 (Rust 1997), which would observe the Sun from two different angles.

## 4 Summary

We have suggested that the origin of the blobs in coronal streamers that were observed by Sheeley et al. (1997) is related to the longstanding problem of maintaining a roughly constant flux in the interplanetary magnetic field. Two open field lines, from both sides of the current sheet, reconnect to form two new loops.
One is a closed field line and becomes part of the helmet. The other is now disconnected from the Sun and moves outward, sweeping up the material that forms the blob. We have reviewed the observational evidence that supports this idea: the acceleration pattern of the blobs, the V-shape that some of them exhibit, the collection of the material, the subsequent coherence of the blobs, and the observed retraction of the inner loop. Finally, we have suggested several observational tests for this theory. Numerical models should also be able to reproduce the proposed mechanism. The authors thank S. Hill for his help in preparing the graphics, J. Kuijpers for many discussions and helpful comments, and the anonymous referee. M. K. van Aalst’s work at the SOHO Experiment Analysis Facility at Goddard Space Flight Center was supported by ESA, the Olga Koningsfonds, the Leids Kerkhoven Bosscha Fonds, and Utrecht University. A. J. C. Beliën carried out this work on an ESA Fellowship.
# Individual addressing and state readout of trapped ions utilizing rf-micromotion

## Abstract

A new scheme for the individual addressing of ions in a trap is described that does not rely on light beams tightly focused onto only one ion. The scheme utilizes ion micromotion that may be induced in a linear trap by dc offset potentials. Thus coupling an individual ion to the globally applied light fields corresponds to a mere switching of voltages on a suitable set of compensation electrodes. The proposed scheme is especially suitable for miniaturized rf (Paul) traps with typical dimensions of about 20-40 microns.

Even the realization of elementary quantum information processing operations puts severe demands on the experimental techniques. A single two-qubit quantum gate, for instance, requires two strongly interacting quantum systems, highly isolated from environmental disturbances. In 1995 Cirac and Zoller proposed a realization of quantum logic gates using a string of ions trapped in a linear rf (Paul) trap. In the meantime several experimental steps towards quantum logic gates implemented with trapped ions have been demonstrated, namely cooling one and two ions to the ground state, a quantum CNOT gate connecting the internal and motional states of one ion, and the deterministic creation of entangled states. In the Cirac-Zoller proposal, addressing individual ions is done by tight focussing of a laser beam on one and only one ion. In this case inter-ion distances must be bigger than the diffraction limit of the addressing laser beam and cannot become smaller than roughly one micron. This requirement limits the minimum size of the trap and also the maximum level spacing of the harmonic trapping potential, since the inter-ion distance is determined by the balance of mutual Coulomb repulsion and the strength of the external potential.
The trap level spacing in turn limits the maximum speed of gate operations because all laser pulses must be long enough to discriminate between different motional levels. The limitation in inter-ion distances also rules out motional frequencies beyond the linewidth of allowed dipole transitions commonly used for Doppler cooling in ion traps. Such motional frequencies would allow for very efficient cooling without the need for special techniques like Raman cooling. An alternative addressing technique that does not rely on focussed laser beams could therefore be very advantageous because it lifts restrictions in trap size, cooling and processing speed inherent in the Cirac-Zoller scheme. In , state creation was based on the dependence of transition frequencies of individual ions on the micromotion induced by the trapping ac field. This letter describes how this basic idea might be extended to individually address and read out single ions in a trap without the need for tightly focused laser beams. After a good compensation of micromotion in a linear trap, the whole string of ions will reside on the rf-node line and will therefore not undergo forced oscillations in the rf trapping field. With a suitable static electric field, one and only one ion may be pushed from the rf-node line, while leaving the others there. Since this one ion now undergoes forced oscillations at the rf drive frequency $`\mathrm{\Omega }_{\mathrm{rf}}`$, its two internal levels may be coupled on one of the micromotion sidebands with a Rabi frequency $`\mathrm{\Omega }_\mathrm{m}`$ proportional to the first Bessel function $`J_1(𝐤\cdot \xi )`$, where $`\xi `$ is the amplitude vector of the driven motion and $`𝐤`$ is the wavevector (or wavevector difference for Raman transitions) of the driving light field. The other ions reside at the node line and are therefore not coupled, although the coupling laser beam might illuminate them.
Similarly, the final state of individual ions may be read out by detecting fluorescence induced on a micromotion sideband of a dipole-allowed cycling transition. The method turns out to be especially suitable for micro-fabricated traps with typical dimensions of 20 to 40 $`\mu `$m. A linear quadrupole trap of this type with a distance between diagonally opposite electrodes of 600 $`\mu `$m was successfully demonstrated by the NIST group, and no technical limits seem to forbid further miniaturization by a factor of 10-20. One way to create enough degrees of freedom for the ’individual compensation’ sketched above is to split one of the ground rods of the linear quadrupole trap into a number $`N_s`$ of sections that is greater than or equal to the number $`N_i`$ of ions trapped (see Fig. 1 a). The sections are individually wired to control their voltages independently. As specified below, in the case $`N_s=N_i=N`$ with equally spaced sections of length $`d`$, one can balance the individual compensation voltages in such a way that only one of the ions is pushed into a position where the rf-field is nonzero, while all the other ions remain on the rf field-node line. To further elaborate how this might be done, it will be assumed that all ions are initially on the rf-node line. In the experiment this means the micromotion of all ions has to be carefully nulled with the same set of electrodes, and then the displacement voltages introduced below will be added to the null voltages. The electric field created by an individual compensation electrode will of course depend on its actual shape. For the purpose of this letter it will be sufficient to approximate compensation electrode $`j`$, held at voltage $`U_j`$, by the field created by a conducting sphere of radius $`d/2`$ held at that voltage (see Figs. 1 b, c).
The magnitude of the electric field $`E_j(r)`$ at distance $`r`$ is then $$E_j(r)=\frac{U_jd}{2r^2}.$$ (1) Inter-ion spacing can be controlled to some extent by the dc voltages applied to the trap endcaps . Here it will be assumed that the ions are equally spaced with an inter-ion distance $`d`$ and are a distance $`r`$ away from the electrodes (see Fig. 1 a, b). This approximation is valid for 3 ions and for the center part of a long string. One could use the exact inter-ion distances , but this would only complicate the following discussion without radically changing the underlying physics. The ions and electrodes are labeled from left to right. The electric field of electrode $`j`$ at the position of ion $`i`$ can then be decomposed into a part parallel to the trap axis $$E_{ij}^{(z)}=\frac{U_jd^2(i-j)}{2(r^2+[(i-j)d]^2)^{3/2}},$$ (2) and the perpendicular part that will actually push the ion into the rf-field $$E_{ij}=\frac{U_jdr}{2(r^2+[(i-j)d]^2)^{3/2}}=m_{ij}\frac{U_jd}{2r^2},$$ (3) with relative distance factors $`m_{ij}=(1+[(i-j)d/r]^2)^{-3/2}`$. The total field perpendicular to the trap axis experienced by ion $`i`$ is therefore $$E_i=\underset{j=1}{\overset{N}{\sum }}E_{ij}.$$ (4) With respect to the slow (secular) motion, the field will just change the equilibrium positions of the ions but will have no effect on the vibrational modes (in the limit of small amplitudes) . The perpendicular part will displace an ion of mass $`m`$ and charge $`Q`$ off the trap axis by an amount $$y_i=\frac{8QE_i}{mq^2\mathrm{\Omega }_{\mathrm{rf}}^2},$$ (5) where $`q=(2QV)/(mr^2\mathrm{\Omega }_{\mathrm{rf}}^2)`$ and $`V`$ is the amplitude of the rf-field.
At this position the ion will undergo micromotion with an amplitude of $$\xi _i=y_i\frac{q}{2}=\frac{2E_ir^2}{V}.$$ (6) The modulation index $`\kappa _i=𝐤\cdot \widehat{𝐲}\xi _i`$ ($`\widehat{𝐲}`$ is a unit vector along the y-direction) with which an incoming laser beam will now be frequency modulated in the rest frame of ion $`i`$ depends on its wave vector $`𝐤`$ (or the wave vector difference $`\delta 𝐤=𝐤_\mathrm{𝟏}-𝐤_\mathrm{𝟐}`$ for Raman excitation) and will therefore be determined by the actual experimental geometry. In the remainder, $`𝐤`$ is assumed to be parallel to $`\widehat{𝐲}`$, $$\kappa _i=k\xi _i=ky_i\frac{q}{2}.$$ (7) Reexpressing Eq. (7) in terms of the compensation voltages in Eq. (3) yields: $$\kappa _i=\frac{kd}{V}\underset{j=1}{\overset{N}{\sum }}m_{ij}U_j.$$ (8) This defines a system of linear equations for the compensation voltages $`U_i/V`$, expressed in multiples of the rf amplitude $`V`$, that can be written in matrix form: $$𝐦\frac{𝐔}{V}=\frac{\kappa }{kd}𝐞,$$ (9) with $`\kappa _i=\kappa e_i`$. The condition that only ion $`l`$ is modulated with index $`\kappa `$ while all other ions are unmodulated corresponds to a solution of Eq. (9) with $`e_i=\delta _{il}`$. Small corrections to the actual ion positions due to the compensation field component along $`z`$ \[Eq. (2)\] and the slight change in the mutual repulsion of the ions could be incorporated in a self-consistent way, but are neglected in the first-order approximation presented here. Solutions of Eq. (9) may be found analytically for small ion numbers or numerically. As one might expect, the scaled amplitudes $`U_j/V`$ become very large if $`r\gg d`$, because then the difference in magnitude of the matrix elements $`m_{ij}`$ is very small. The method is therefore better suited for traps where $`r`$ is within an order of magnitude of the ion spacing $`d`$. In the examples calculated below, $`r=`$15 $`\mu `$m and an ion spacing $`d`$ of 3 $`\mu `$m is used.
These parameters could be realized by scaling the existing micro-fabricated trap of the NIST group down by a factor of 20, and lowering the rf amplitude $`V`$ by a factor of 400 compared to the values used in that trap. The Rabi frequency on one of the first micromotion sidebands will be $`\mathrm{\Omega }_m=J_1(\kappa )\mathrm{\Omega }_0`$, where $`\mathrm{\Omega }_0`$ is the unmodulated carrier Rabi frequency. Results calculated with $`J_1(\kappa )=`$0.1, which corresponds to $`\kappa \sim `$0.2, and $`k=2\pi /(313\mathrm{n}\mathrm{m})`$ resonant with the <sup>9</sup>Be<sup>+</sup> S-P transition, are shown in Fig. 2 for 3 ions and Fig. 3 for 10 and 51 ions. Figure 2 shows the scaled voltages $`U_j/V`$ ($`j`$=1,2,3) for the aforementioned parameters, while in Fig. 3 a) $`U_j/V`$ for addressing ion 5 of 10 ions is displayed. Fig. 3 b) is a plot of the magnitude of the electric field perpendicular to the trap axis generated by the 10 electrodes held at the voltages given in Fig. 3 a). As expected, there are 9 positions where this field is zero, coinciding with the positions of all ions but ion 5. At this position the field exhibits a maximum. Finally, Fig. 3 c) shows the electrode voltages necessary to address the middle ion of a string of 51 ions. The voltages seem to be easy to realize in the case of three ions (peak voltage 0.15 V for an rf amplitude of 2.5 V, roughly 1/400 of the voltages used in the Boulder micro-trap ) and manageable in the case of 51 ions (peak voltage 30 V for the same rf amplitude). Ion numbers between 4 and 50 will require voltages between 0.25 and 30 V. The maximum ion displacements are on the order of 40 nm along the trap axis, and the addressed ion $`i`$ is displaced roughly 200 nm perpendicular to the trap axis. These displacements could be further diminished by operating the trap at a higher $`q`$ parameter (e.g. $`q=`$0.6).
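The voltage patterns of Figs. 2 and 3 follow from solving the linear system of Eq. (9). A minimal numerical sketch with the parameters quoted above (r = 15 μm, d = 3 μm, κ = 0.2, k = 2π/313 nm), using the spherical-electrode factors $`m_{ij}`$ from Eq. (3):

```python
import numpy as np

def compensation_voltages(N, target, d=3e-6, r=15e-6,
                          kappa=0.2, k=2 * np.pi / 313e-9):
    """Scaled voltages U_j/V solving m (U/V) = (kappa/(k d)) e  [Eq. (9)]."""
    idx = np.arange(N)
    # relative distance factors m_ij = (1 + ((i-j) d / r)^2)^(-3/2)
    m = (1.0 + ((idx[:, None] - idx[None, :]) * d / r) ** 2) ** -1.5
    e = np.zeros(N)
    e[target] = 1.0           # modulate only the target ion (e_i = delta_il)
    return np.linalg.solve(m, kappa / (k * d) * e)

# Address ion 5 of 10 (0-based index 4), as in Fig. 3 a).
u = compensation_voltages(N=10, target=4)
```

Since the perpendicular field at ion $`i`$ is proportional to $`\mathrm{\Sigma }_jm_{ij}U_j`$, the solve makes it vanish at the nine other ions and peak at ion 5, as in Fig. 3 b). For the quoted modulation index, $`J_1(0.2)\approx 0.0995\approx 0.1`$, consistent with $`\mathrm{\Omega }_m=0.1\mathrm{\Omega }_0`$ as stated in the text.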
In conclusion, a new way to individually address single ions in a linear trap that is suitable for quantum information processing was proposed. A simple calculation yielded the order of magnitude of all relevant parameters. All required conditions could be met with existing technologies. The proposed mechanism of addressing a quantum gate reduces a severe optics and engineering problem to a mere changing of voltages on compensation electrodes. No diffraction-limited focussing on single ions and no complicated schemes to switch the beam on and off or shuttle it around in the trap are necessary. In principle the coupling laser could illuminate the ions all the time during a computation, while the ’pulse length’ of the interaction is solely controlled by switching the compensation voltages. The scheme renders the distance limit for inter-ion spacing set by the laser wavelength in the Cirac-Zoller scheme obsolete, and the modifications to push e.g. a special pair or more ions off the rf-node line and thus realize multi-ion gates are obvious. The read-out of individual ions can be done the same way as the addressing, since the micromotion sideband frequency may be chosen much higher than the natural linewidth of typical cycling transitions used in ion traps. Motional heating was found to be substantially suppressed in the non-center-of-mass vibrational modes . In those modes Rabi frequencies of individual ions depend on their position. These dependencies could readily be incorporated into the definition of $`\kappa _i`$ so operations on all ions would require the same interaction time and laser intensity. Finally, reaching the Lamb-Dicke regime with Doppler cooling is straightforward in miniaturized traps, so one could use the gate operations proposed by Sørensen and Mølmer, which also work with the ions in thermal motion and in the presence of moderate motional heating.
This implies that the experimentally most demanding preconditions of the Cirac-Zoller proposal, namely ground state cooling and addressing by focussed laser beams, may be circumvented in future quantum logic experiments.

The author is grateful for discussions with D. J. Wineland, J. C. Bergquist, C. Monroe, C. Myatt, Q. Turchette, H. C. Nägerl, C. Roos and R. Blatt. He also acknowledges an EC TMR postdoctoral fellowship under contract ERB-FMRX-CT96-0087.
# Kilohertz QPO Frequency and Flux Decrease in AQL X-1 and Effect of Soft X-ray Spectral Components

## 1 Introduction

Kilohertz QPOs have been observed in about 20 low-mass X-ray binaries by the Rossi X-ray Timing Explorer (RXTE) since its launch at the end of 1995 (van der Klis (1998)). Although the detailed production mechanism of these QPOs is not fully understood, there is little doubt that they correspond to the dynamical time scale near the neutron star surface, and as such they enable us to probe strong-gravity effects and the equation of state of neutron stars (Kaaret et al. (1997); Zhang, W. et al. (1996); van der Klis (1998); Miller et al. (1997); Lamb et al. (1998)). The kHz QPO frequency has been found to correlate with at least two quantities: source count rate or flux, and energy spectral shape or hardness ratios (van der Klis (1998) and references therein; Kaaret et al. (1998); Zhang, W. et al. 1998b; Méndez et al. (1999)). Soft X-ray transients like 4U 1608-52 and Aql X-1, with their large swings in both count rates and energy spectral shapes, are ideal for the study of these kinds of correlations. 4U 1608-52 and Aql X-1 have both been observed with RXTE/PCA during their outburst decay. A similar correlation between the QPO frequency and the X-ray flux has been observed in different flux ranges (Yu et al. (1997); Méndez et al. (1998); Zhang, W. et al. 1998a). In each flux range, the QPO frequency appears correlated with the X-ray flux. Spectral studies suggest that there is a significant correlation between the QPO frequency and the spectral shape even among data from different flux ranges in 4U 1608-52 (Kaaret et al. (1998)). A correlation between the QPO frequency and the spectral shape has also been found in other QPO sources. For example, a correlation between the QPO frequency and the flux of the blackbody component was observed in 4U 0614+091 (Ford et al. (1997)). 
The QPO frequency is usually regarded as the Keplerian frequency at the inner disk radius, or its beat frequency with the neutron star spin in the beat-frequency model (Alpar and Shaham (1985); Lamb et al. (1985); Strohmayer et al. (1996); Zhang, S. N. et al. (1998); Miller et al. (1996)). The higher the mass accretion rate, the higher should be the QPO frequency. This will lead to a correlation between the QPO frequency and the flux. However, when a transition of the accretion state happens, the same mass accretion rate may correspond to different inner disk radii and different fluxes before and after the transition. A spectral variation is probably associated with a variation of the accretion state, and might lead to a new flux range or frequency range. Comparison of the spectral components before and after the spectral variation, and study of the correlation between the QPO frequency and the spectral shape, will help to reveal how the kHz QPOs are generated. In this letter, we present our study with the RXTE/PCA data before and after an X-ray burst in Aql X-1.

## 2 Observations and Data Analysis

An X-ray burst with a peak PCA count rate above 40,000 cps was observed by RXTE/PCA on March 1, 1997, when the daily-averaged ASM count rate was $``$ 4 cps. A nearly coherent oscillation at about 549 Hz was detected in the X-ray burst. A $``$10% flux decrease and a corresponding QPO frequency decrease were observed after the X-ray burst (Zhang, W. et al. 1998a). The count rate decrease is $``$ 64 cps in the entire PCA band. The data used for timing analysis in the kHz frequency range were taken in the event mode E\_125us\_64M\_0\_1s. The energy range 2-10 keV was selected, the same as that used in Zhang et al. (1998a). The Standard 1 data are used to study the frequency range below 0.5 Hz. The Standard 2 data are used in the spectral analysis. All the errors quoted in the following are 1 $`\sigma `$ errors. 
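Under the beat-frequency interpretation sketched above, the inner-disk Keplerian frequency, the lower kHz QPO, and the neutron star spin are tied by $`\nu _{Kep}=\nu _{QPO}+\nu _{spin}`$. A minimal numerical sketch follows; the assumption that the 549 Hz burst oscillation is twice the spin frequency is mine, chosen for consistency with the $``$1075 Hz value quoted in Sect. 3:

```python
def keplerian_from_beat(nu_qpo_lower, nu_spin):
    """Beat-frequency model: inner-disk Keplerian frequency = lower kHz QPO + spin."""
    return nu_qpo_lower + nu_spin

nu_spin = 549.0 / 2   # assumption: 549 Hz burst oscillation is 2x the spin frequency
nu_qpo = 800.0        # representative lower kHz QPO frequency (Hz)
print(keplerian_from_beat(nu_qpo, nu_spin))   # ~1074.5 Hz, close to the 1075 Hz quoted in Sect. 3
```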
The low energy response of the RXTE/PCA detectors is not sufficient to constrain the absorption column density $`N_H`$. We have fixed it at 3.0$`\times `$$`10^{21}`$ cm<sup>-2</sup> in our spectral fittings, according to previous studies (Czerny et al. (1987); Verbunt et al. (1994); Zhang, S. N. et al. (1998)).

### 2.1 Timing Analysis

We first calculate the dynamical power spectra around the X-ray burst. Then a model composed of a constant and a Lorentzian function is fit to the 200-2000 Hz power spectra to determine the QPO frequency. In Fig. 1, we show the light curve around the X-ray burst and how the QPO frequency evolved. The QPO frequency decreased from about 813$`\pm `$3 Hz before the burst to 776$`\pm `$4 Hz after the burst. In Fig. 2, the QPO frequency vs. PCA count rate (CR) in the energy range 2-10 keV is shown for the observation of two consecutive RXTE orbits on March 1. The data points on the lower-left corner are taken after the X-ray burst, and those points on the right-side branch are taken before the X-ray burst. We apply a linear model to fit the right-side branch and get $`f_{qpo}=(-219.4\pm 32.0)+(1.954\pm 0.002)\times CR`$. The average count rate after the X-ray burst in the 2-10 keV band is $``$ 478 cps; the QPO frequency inferred from the above QPO frequency vs. count rate relation at this count rate is 714$`\pm `$30 Hz. Thus the inferred QPO frequency decrease is about 99$`\pm `$30 Hz, more than twice the observed frequency decrease of 37$`\pm `$5 Hz. In order to study the source variability at frequencies lower than 0.5 Hz, we divide the light curve into 256 s segments, then calculate the power spectrum of each segment. We average 16 power spectra before the X-ray burst and 4 power spectra after the burst. In Fig. 3, we show the average Leahy-normalized power spectra in the frequency range 0.002-0.5 Hz before and after the X-ray burst. 
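The inferred post-burst frequency above follows directly from the fitted linear relation; a minimal check (note the intercept must be negative, −219.4 Hz, for the quoted 714 Hz to follow):

```python
def f_qpo(cr):
    """Fitted QPO frequency vs. 2-10 keV PCA count rate relation (negative intercept)."""
    return -219.4 + 1.954 * cr

f_after = f_qpo(478)            # inferred frequency at the post-burst count rate
print(round(f_after))           # ~715 Hz, matching the quoted 714 +/- 30 Hz
print(round(813 - f_after))     # inferred decrease ~98-99 Hz vs. the observed 37 Hz
```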
The power spectrum before the X-ray burst shows a very low frequency noise (VLFN) component of a power law form (Hasinger & van der Klis (1989)). A model composed of a white noise level of 2 and a power law component is fit to the average power spectra. We obtain a power law index $`\alpha `$=$`1.5\pm 0.2`$ and a normalization $`A`$=$`0.0024\pm 0.0015`$. The fractional root-mean-square (rms) variability of the VLFN is about 1.2$`\pm `$0.4%. The power spectrum after the X-ray burst is consistent with a white noise.

### 2.2 Spectral Analysis

The energy spectrum of Aql X-1 changed little during the outburst before March 1 (Zhang, W. et al. 1998a) and was of a blackbody type. In subsequent observations, the energy spectra gradually changed from a blackbody type to a blackbody plus power-law type (see Zhang, S. N. et al. (1998)). The spectra before and after the X-ray burst are shown in Fig. 4. In Fig. 4(a), we plot the ratio between the spectra (2-10 keV) before and after the X-ray burst, namely spectrum $`N_1(E)`$ and spectrum $`N_2(E)`$, respectively. The ratio as a function of energy is not a constant. The flux decrease is more severe above 5 keV. Thus a variation of the spectral shape was associated with the flux decrease. Spectral models of a single blackbody (BB), a single power law (PL) and their combination (BB+PL) cannot yield an acceptable fit to both spectra $`N_1(E)`$ and $`N_2(E)`$ in the energy range 3-20 keV. The model composed of two BB components and a PL component gives an acceptable fit. The inclusion of two BB components, one of which may be a multi-color disk (MCD) component, has been applied previously to the soft X-ray transient 4U 1608-52 during an outburst (Mitsuda et al. (1984)) and Aql X-1 during an outburst rise (Cui et al. (1998)). In Table 1, we list the parameters of the spectral fit to both spectra with the 2BB+PL and BB+MCD+PL models. 
The emission area of the $``$ 0.26 keV BB component in the 2BB+PL model fit is much larger than the surface area of a neutron star. This suggests that it may be a disk component and that the fit with the BB+MCD+PL model may be reasonable. Assuming a disk inclination angle of zero, we find that the apparent inner disk radius is in the 1 $`\sigma `$ range 200-470 km. In both spectra, the PL components contribute less than 1/20 of the total flux in the energy range 2-10 keV. The difference between the average PCA count rates ($`>`$ 10 keV) before and after the X-ray burst is 0.64$`\pm `$0.38 cps. This indicates that the PCA flux variation mainly comes from the spectral components in the soft X-ray range below 10 keV. The two BB temperatures are stable during the flux decrease, as shown in Table 1. Thus it is reasonable to study the flux decrease by subtracting the spectrum $`N_2(E)`$ from the spectrum $`N_1(E)`$. In Fig. 4(b), we plot the subtracted spectrum ($`N_1(E)`$-$`N_2(E)`$) in the energy range 2-10 keV. A model composed of two BB components yields an acceptable fit to this spectrum, which is also plotted. In Table 2, we list the best-fit parameters and the fluxes of the two components. The 2-10 keV energy flux of the residual spectrum is about (7$`\pm `$2)% of the total before the X-ray burst. The subtraction of the PL components in Table 1 mainly affects the 0.28 keV component shown in Fig. 4(b), i.e. a flux of 0.7$`\pm `$2.5 photons cm<sup>-2</sup> s<sup>-1</sup> in 2-4 keV.

## 3 Discussion and Conclusion

We have reported that there was a simultaneous decrease of the X-ray flux, the QPO frequency, and the VLFN component in Aql X-1 immediately following a Type-I X-ray burst. The decrease lasted at least 400 s, until the observation was stopped. The flux decrease was mainly due to a decrease of the spectral components in the energy range below 10 keV. 
As a type I X-ray burst only occurs when the condition for the thermonuclear instability is met, the X-ray burst and the flux decrease may be causally related. The X-ray flux derived from the spectral fit in Table 1 has an uncertainty as large as 20%, which is insufficient to determine the decrease of the X-ray luminosity. However, the spectral fit shown in Table 2 yields better-constrained parameters, suggesting that there is a (7$`\pm `$2)% decrease of the X-ray luminosity in 2-10 keV. The index of the PL component decreases together with the decrease of the X-ray luminosity. This behavior is similar to that found in 4U 1608-52 and 4U 0614+091 (Kaaret et al. (1998)). The spectral parameters of the two BB components in Table 1 seem consistent with the trend shown in Fig. 2 of Cui et al. 1998, i.e. the lower the ASM count rate of Aql X-1, the larger the inner disk radius and the lower the BB temperatures. The $``$ 1.06 keV BB is probably related to a part of the neutron star surface. The decrease of the VLFN after the burst supports this idea. The VLFN in LMXBs is an indication of the time-dependent fusion reactions (fires) on the neutron star surface (Bildsten (1993)), which correspond to a mass accretion rate larger than that of the bursting stage. The disappearance of the VLFN in Aql X-1 then not only suggests that at least a few percent of the observed X-ray flux came from fusion reactions on the neutron star surface before the X-ray burst, but also indicates that after the X-ray burst the time-dependent fusion reactions probably stopped. This may indicate that the X-ray burst had consumed almost all the nuclear fuel on the neutron star, and that the cessation of the time-dependent fusion reactions accounts for a significant part of the 7% X-ray flux decrease. The inner disk radii before and after the X-ray burst, derived from Table 1, are about 300 km and 360 km, respectively. 
For an accreting neutron star of 2.0 $`M_{}`$ and taking $`cos(\theta )`$=1, the upper limits on the Keplerian frequencies corresponding to the two radii are 15 and 12 Hz. The spectral hardening would lead to an even larger inner disk radius thus an even lower frequency (Shimura & Takahara (1995)). On the other hand, the QPO frequency around 800 Hz is probably the lower of the twin peaks, which suggests that the Keplerian frequency at the inner edge of the disk should be around 1075 Hz (Zhang, W. et al. 1998a ). Thus the derived inner disk radii are inconsistent with the beat-frequency model. The difference between the observed QPO frequency and those inferred from the spectral fit probably comes from a lack of detailed knowledge of the spectral components in neutron star X-ray binaries, and the lack of sensitivity below 2 keV of the PCA to determine the parameters of a 0.26 keV blackbody spectral component. Conversely, it is also possible that the identification of the kHz QPOs as the Keplerian frequency at the inner edge of the accretion disk or its beat frequency against the neutron star spin is incorrect. The observed QPO frequency decrease is 37$`\pm `$5 Hz, about 1/3 of that inferred from the QPO frequency vs. count rate relation. In principle, the decrease of the disk BB flux should originate from a decrease of the mass accretion rate. This would introduce a decrease of the QPO frequency. Taking the spectral parameters in Table 2 and using the PCA instrumental information, we estimate that the 2-10 keV PCA count rate of the disk BB is 23$`\pm `$8 cps, and that of the neutron star BB component is 47$`\pm `$6 cps. So the QPO frequency decrease is consistent with the 23 cps count rate decrease of the disk BB component, which is about 1/3 of the total decrease of $``$ 70 cps. 
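The quoted upper limits follow from the Keplerian orbital frequency $`f=\sqrt{GM/r^3}/2\pi `$; a minimal check for a 2.0 $`M_{}`$ star at the two inner-disk radii:

```python
from math import pi, sqrt

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def keplerian_freq_hz(mass_msun, radius_km):
    """Keplerian orbital frequency at radius r around a point mass: sqrt(GM/r^3)/(2*pi)."""
    r = radius_km * 1e3
    return sqrt(G * mass_msun * M_SUN / r**3) / (2 * pi)

for r_km in (300, 360):   # inner disk radii before/after the burst
    # ~15-16 Hz and ~12 Hz: consistent with the quoted upper limits to rounding
    print(r_km, round(keplerian_freq_hz(2.0, r_km), 1))
```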
Because the PCA effective area is not constant with photon energy, and the two BB components have quite different BB temperatures, the incident BB photon fluxes (Table 2) cannot replace the above count rate estimates when we study the frequency decrease inferred from the PCA count rate. The central radiation force may also affect the QPO frequency. A decrease of the central BB emission from near the neutron star surface would probably introduce an increase of the QPO frequency. Thus two mechanisms could account for the correlation between the QPO frequency and the count rate in Fig. 2. One is that it is the disk BB flux, instead of the total flux, that is correlated with the QPO frequency. The QPO frequency variations between the data points on the right-side branch in Fig. 2 originate from the disk BB flux variations. Thus the data points on the lower-left corner may join the data points of the right-side branch in a plot of the QPO frequency vs. the disk BB flux. The other is that a decrease of the central BB radiation force after the X-ray burst would increase the QPO frequency, so the QPO frequency deviated from the correlation relation, which holds only while the central BB radiation force is nearly constant. Our study indicates that a comprehensive investigation of the correlation between the QPO frequency and each spectral component is needed, especially when a flux decrease or spectral transition occurs. We thank an anonymous referee for helpful comments and suggestions, which certainly improved this article. This work was supported by the National Natural Science Foundation of China under grants 19673010 and 19733003.
# ISO–SWS spectroscopy of IC443 and the origin of the IRAS 12 and 25 $`\mu `$m emission from radiative supernova remnants

Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA. The SWS is a joint project of SRON and MPE.

## 1 Introduction

A significant fraction of the galactic supernova remnants were detected by the IRAS satellite (e.g. Arendt 1989, hereafter A (89)) and their relatively strong FIR emission has normally been interpreted as thermal radiation from dust which is heated by collisions with the hot (million degrees) plasma in the post–shock region (e.g. Braun & Strom braun86 (1986), Dwek et al. dwek\_etal87 (1987)). This emission could be an important cooling mechanism for non–radiative shocks in young SNRs (e.g. Ostriker & Silk ostriker73 (1973)). The spectral shape of the FIR emission reflects the temperature distribution of the dust, which in turn depends on the shock speed and on the density, size and composition of the grains (e.g. Draine draine91 (1991)). Young SNRs are expected to have a FIR spectrum characterized by a single temperature because, in the hot ($`\mathrm{2\hspace{0.17em}10}^7`$ K) plasma typical of these objects, the temperatures of collisionally heated dust grains approach the same value independent of grain size (e.g. Dwek dwek87 (1987)). This is in good agreement with the IRAS data of young SNRs (e.g. Cas A, Tycho, Kepler) whose spectral distributions are fairly well fitted by a single temperature blackbody (e.g. A (89)). The IRAS colours of radiative SNRs are much more complex and require an ad hoc combination of dust components of different sizes and temperatures. In particular, the S(12)/S(25) IRAS colours are much bluer than those of young SNRs and require a very large population of very small ($`<50`$ Å) grains (e.g. Arendt et al. arendt92 (1992)). 
An alternative explanation is that the flux seen in the “blue” IRAS bands is dominated by line emission from the radiative filaments, but this possibility could not be verified directly because of the difficulty of measuring the FIR lines prior to ISO. Attempts to estimate the line contamination by extrapolating optical spectral measurements using shock model predictions have led to contradictory conclusions (e.g. Mufson et al. mufson (1986), Arendt et al. arendt92 (1992)). In this Letter we present FIR spectral observations of a prototype radiative SNR which allow a direct comparison between the IRAS and line fluxes. The data are presented in Sect. 2, the results are discussed in Sect. 3, and in Sect. 4 we draw our conclusions.

## 2 Observations and results

A complete SWS01 (speed 4, 6700 sec total integration time) spectrum centered at $`\alpha `$=06<sup>h</sup>14<sup>m</sup>32<sup>s</sup>.6 $`\delta `$=22<sup>o</sup>54’05” (1950 coordinates) and roughly corresponding to position 1 of Fesen & Kirshner (fesen80 (1980)) was obtained on October 16, 1997. The data were reduced using standard routines of the SWS interactive analysis system (IA) using calibration tables as of October 1998. Reduction relied mainly on the default pipeline steps, plus removal of signal spikes, elimination of the most noisy band 3 detectors, and flat–fielding. Note that SWS has a quite high spectral resolution ($`R=\lambda /\delta \lambda \approx 1500`$) and is not well suited for measurements of faint continuum fluxes. For the SWS01 mode, in particular, s/n and detector drifts between dark current measurements do not allow measurements of continuum emission at levels much below 1 Jy. The measured IRAS 12 and 25 $`\mu `$m “continuum” fluxes from the SNR would correspond to only $``$0.01 Jy in the SWS aperture and cannot therefore be detected with the spectral observations presented here. The spectral sections including well-detected lines are displayed in Fig. 
2 and the derived line fluxes are listed in Table 1, together with their contribution to the IRAS fluxes, estimated using the spectral response shown in Fig. 3. The line profiles are not resolved within the noise (i.e. FWHM$`<`$400 km/s), and their centroids are not significantly red/blue–shifted. The most striking result is that the surface brightness of \[NeII\]$`\lambda `$12.8 alone is larger than that observed by IRAS through the 12 $`\mu `$m filter. This apparent contradiction reflects the somewhat higher spatial resolution of ISO–SWS relative to the undersampled IRAS maps. More specifically, the line emitting filaments visible in the H$`\alpha `$ and \[SII\] images of Mufson et al. (mufson (1986)) are quite uniformly distributed over many arcmin, i.e. an area larger than the IRAS map resolution, and the average line surface brightness over this area is roughly a factor of 2–4 lower than that within the ISO–SWS aperture. In other words, the ISO spectrum, although centered on a bright optical filament, does not sample a spot of exceptionally large line surface brightness, but rather a “typical” line emitting region. This indicates, therefore, that the IRAS 12 $`\mu `$m “continuum” is indeed dominated by \[NeII\] line emission, i.e. that this line accounts for at least 50% of the observed IRAS flux. A similar reasoning applies to the IRAS 25 $`\mu `$m flux, which is strongly contaminated by emission in the \[FeII\]$`\lambda `$26.0 ground state transition plus significant contributions from \[OIV\]$`\lambda `$25.9, \[SIII\]$`\lambda `$18.7 and \[FeII\]$`\lambda `$17.9 (cf. Table 1). Another interesting result is that \[FeII\]$`\lambda `$26.0/\[NeII\]$`\lambda `$12.8 and \[FeII\]$`\lambda `$26.0/\[SiII\]$`\lambda `$34.8 are both a factor of $``$1.7 lower than those observed in RCW103 (Oliva et al. rcw103\_iso (1998)), while the density sensitive \[FeII\]$`\lambda `$17.9/$`\lambda `$26.0 ratio is similar in the two objects. 
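The upper bound on the line widths follows from the SWS resolving power quoted above: at $`R\approx 1500`$ the velocity resolution is $`c/R\approx 200`$ km/s, so unresolved lines can only be bounded at a few resolution elements. A minimal check:

```python
C_KM_S = 299_792.458   # speed of light, km/s

def velocity_resolution(resolving_power):
    """Velocity resolution corresponding to a spectral resolving power R = lambda/dlambda."""
    return C_KM_S / resolving_power

print(round(velocity_resolution(1500)))   # ~200 km/s for the SWS grating
```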
This result indicates that IC443 has a lower Fe gas-phase abundance than RCW103, where the Fe/Ne and Fe/Si relative abundances were found to be close to their solar values. This difference may reflect intrinsic differences in the ISM total element abundances or, more likely, imply that the shock in IC443 is slower and therefore less effective in destroying the Fe–bearing grains. This last possibility is also supported by other “speedometers”, i.e. the line surface brightness and the \[NeIII\]/\[NeII\] ratio, which are lower by factors of 7 and 1.6, respectively.

## 3 Discussion

The data presented above strongly suggest that most of the 12 and 25 $`\mu `$m flux from the NE rim of IC443 is due to ionic line emission rather than continuum emission from warm dust. We address here the following questions. Can this result be generalized to other radiative SNRs? Are there other lines which may severely contaminate IRAS measurements of this and other remnants?

### 3.1 The IRAS 12 and 25 $`\mu `$m emission from other SNRs

The easiest and most direct estimate of the line contribution to the IRAS fluxes requires a measurement of the FIR line intensity from the whole SNR. This is virtually impossible in IC443 and other radiative remnants in the Galaxy because of their very large projected sizes, but is feasible in the LMC where e.g. the remnant N49 has been mapped with SWS by Oliva et al. (in preparation). They find a total \[NeII\]$`\lambda `$12.8 line intensity of $`\mathrm{2\hspace{0.17em}10}^{-14}`$ W m<sup>-2</sup>, equal, within the errors, to the F(12$`\mu `$m)=$`\mathrm{2.2\hspace{0.17em}10}^{-14}`$ W m<sup>-2</sup> IRAS flux reported by Graham et al. (graham87 (1987)). For remnants where direct measurements of FIR lines are not available, a reasonable estimate of their fluxes can be obtained by scaling optical line measurements using available ISO spectral observations of radiative SNRs, namely IC443 (this paper), RCW103 (Oliva et al. 
rcw103\_iso (1998)) and N49 (Oliva et al. in preparation). These indicate that the \[NeII\] and \[FeII\]+\[SIII\]+\[OIV\] line contributions to the 12 and 25 $`\mu `$m filters are both roughly equal to the flux of H$`\beta `$. Unfortunately, relatively few optical spectrophotometric observations of SNRs are available in the literature, and they are altogether missing for several important radiative remnants such as W44 and W49B. Nevertheless, the SW filament of RCW86 has $`\mathrm{\Sigma }(\text{H}\beta )\mathrm{4\hspace{0.17em}10}^{-7}`$ (Leibowitz & Danziger leibowitz (1983)), very similar to the $`\mathrm{\Sigma }\mathrm{3\hspace{0.17em}10}^{-7}`$ W m<sup>-2</sup> sr<sup>-1</sup> IRAS 12 and 25 $`\mu `$m fluxes found by A (89). Similarly, the NE filament of the Cygnus Loop has $`\mathrm{\Sigma }(\text{H}\beta )\mathrm{1\hspace{0.17em}10}^{-7}`$ (Fesen et al. fesen82 (1982)), close to the IRAS surface brightness $`\mathrm{\Sigma }\mathrm{8\hspace{0.17em}10}^{-8}`$ W m<sup>-2</sup> sr<sup>-1</sup> (A (89)). These results suggest, therefore, that lines account for most of the IRAS 12 and 25 $`\mu `$m emission from line emitting filaments of radiative SNRs. Another interesting exercise is to compare optical line and IRAS fluxes in a younger remnant such as the Kepler SNR, for which accurate line photometric measurements are available (D’Odorico et al. dodorico (1986)). The total H$`\beta `$ emission from the whole remnant is $`\mathrm{2.9\hspace{0.17em}10}^{-15}`$ W m<sup>-2</sup>, i.e. only 2% and 0.6% of the IRAS flux in the 12 and 25 $`\mu `$m bands, respectively. This indicates, therefore, that line contamination is negligible in this object.

### 3.2 Contamination from other lines

The positions of the most important lines are marked in Fig. 3, where one can see that \[SiII\]$`\lambda `$34.8, although very prominent in SNRs, does not contribute significantly because it falls at a wavelength where IRAS was virtually blind. 
The same applies to \[NeIII\]$`\lambda `$15.6, which has an intensity comparable to \[NeII\] but falls in the hole between the 12 and 25 $`\mu `$m filters. The possible contribution of \[OI\]$`\lambda `$63.0 to the IRAS 60 $`\mu `$m band was already pointed out by Burton et al. (burton90 (1990)), who observed this line in the southern (“molecular”) rim of IC443 using a 33” aperture spectrometer on the KAO and found line surface brightnesses a factor of $``$2 larger than those measured by IRAS. On the other hand, however, ISO–LWS measurements of \[OI\]$`\lambda `$63.0 through a $``$ 80” aperture centered on the optical filaments of W44 (Reach & Rho reach (1996)) yield a line surface brightness a factor of $``$2.5 lower than the IRAS 60 $`\mu `$m flux (note that the “continuum” in the LWS spectra of W44 by Reach & Rho (reach (1996)) is most probably an instrumental artifact, its level being 10 times brighter than the IRAS 60 $`\mu `$m background+source flux), indicating, therefore, that the contamination by \[OI\] is relatively unimportant, at least in this remnant. The H<sub>2</sub> rotational lines could strongly affect the 12 $`\mu `$m band and may explain why the S(12)/S(25) ratio is a factor of $``$2.5 larger on the southern rim of IC443, i.e. at $``$ 06<sup>h</sup>14<sup>m</sup>40<sup>s</sup> +22<sup>o</sup>23’ (cf. Fig. 1). This region is a well known powerful source of H<sub>2</sub> (1,0)S(1)$`\lambda `$2.12 line emission (Burton et al. burton88 (1988)) whose spatial distribution closely resembles the IRAS 12 $`\mu `$m contours. Ground–based observations of the (0,0)S(2)$`\lambda `$12.3 rotational transition were obtained by Richter et al. (richter (1995)), who measured a line flux $``$10 times brighter than (1,0)S(1)$`\lambda `$2.12. Assuming a constant I($`\lambda `$12.3)/I($`\lambda `$2.12) ratio over the large area mapped in the latter transition by Burton et al. 
(burton88 (1988)) yields an average line surface brightness of about $`\mathrm{1.5\hspace{0.17em}10}^{-7}`$ W m<sup>-2</sup> sr<sup>-1</sup>, or 1/3 of the IRAS 12 $`\mu `$m peak surface brightness. Considering that the H<sub>2</sub> spectrum of IC443 is remarkably similar to that of Orion peak1 (e.g. Richter et al. richter (1995)), one then expects similar fluxes in the (0,0)S(3)$`\lambda `$9.66 and (0,0)S(2)$`\lambda `$12.3 lines (Parmar et al. parmar (1994)) or, equivalently, that the two lines should account for about 2/3 of the IRAS 12 $`\mu `$m flux. Finally, it should be noted that IRAS was virtually blind to the H<sub>2</sub>(0,0)S(1)$`\lambda `$17.0 line (cf. Fig. 3), while the highly forbidden ground state transition (0,0)S(0)$`\lambda `$28.2 is most probably too weak to significantly contaminate the 25 $`\mu `$m IRAS band (cf. e.g. Fig. 2 of Oliva et al. rcw103\_iso (1998)).

## 4 Conclusions

An ISO–SWS spectrum of IC443 has revealed prominent \[NeII\] and \[FeII\] line emission which, together with \[SIII\] and \[OIV\], accounts for most of the observed IRAS flux in the 12 and 25 $`\mu `$m bands. Simple arguments indicate that this is probably the case in other radiative SNRs, and this result suggests that the unusually blue IRAS colours of radiative SNRs simply reflect line contamination, rather than a large population of small grains which are otherwise required to explain the warmer “continuum” emission. Available ground based data also indicate that the S(2) and S(3) rotational lines of H<sub>2</sub> contribute a large fraction of the IRAS 12 $`\mu `$m emission from the southern rim of IC443. The relative fluxes of the ionic lines detected by ISO yield a $``$0.6 $`\times `$ solar Fe gas–phase relative abundance, which is significantly lower than that found in the much more powerful RCW103 supernova remnant. This may imply that the shock in IC443 is slower and thus less effective in destroying the Fe–bearing grains. 
This scenario is also supported by the lower line surface brightness and \[NeIII\]/\[NeII\] ratio, which both indicate a lower shock speed in IC443.

###### Acknowledgements.

We thank the referee, R. Arendt, for useful comments and criticisms. E. Oliva acknowledges the partial support of the Italian Space Agency (ASI) through the grant ARS–98–116/22. SWS and the ISO Spectrometer Data Center at MPE are supported by DLR (formerly DARA) under grants 50–QI–8610–8 and 50–QI–9402–3.
# Redshifts of New Galaxies

## 1 Introduction

Evidence for the association of high redshift objects with low redshift galaxies emerged in 1966 with completion of the systematic study in the Atlas of Peculiar Galaxies. For the most recent summary of the evidence to date see Arp (1998b). In the present report, however, we concentrate on the most recent observations and those which best summarize the empirical properties which characterize the associations. These latest discoveries reinforce a picture of empirical evolution which progresses from newly born, high redshift protogalaxies (quasars) to older, more normal, low redshift galaxies.

## 2 The Quasars around NGC5985

Fig. 1 shows one of the most exact alignments of quasars and galaxies known. Attention was drawn to this region when it was discovered that a very blue galaxy in the Second Byurakan Survey (Markarian et al. 1986) had a quasar of redshift z = .81 only 2.4 arcsec from its nucleus (Reimers and Hagen 1998). Even multiplying by the 3 $`\times `$ 10<sup>4</sup> galaxies of this apparent magnitude or brighter in their survey, they estimated a chance proximity of only $`10^{-3}`$. (Nevertheless they took this as proof that it was a chance projection! Also it was not referenced that G. Burbidge, in 1996 in the same Journal, had published an extensive list of other quasars improbably close to low redshift galaxies). But the galaxy was a dwarfish spiral, showing no active nucleus from which the quasar could have emerged. Proceeding on the by now overwhelming evidence that Seyfert galaxies eject quasars (Radecke 1997; Arp 1997a), I looked in the vicinity for a Seyfert. There was NGC5985, only 36.9 arcmin away on the sky! The chance of finding a Seyfert as bright as V = 11.0 mag. this close to the z = .81 quasar was less than $`10^{-3}`$. The next obvious question was: Were there other quasars in the field? Fig. 1 shows that there are 6 catalogued quasars discovered in a uniform search of the area in the Second Byurakan Survey. (P. 
Veron, in this Symposium, reports 66 - 88 % completeness for this survey - probably typical for quasar surveys). But five of these six quasars fall on a line through the Seyfert, with the dwarf galaxy along the same line. What is the probability that three or four objects would accidentally align within a degree or so of a straight line through the Seyfert? Conservatively this can be computed to be of the order of $`10^{-4}`$ to $`10^{-5}`$. But the most astonishing result of all is that if one looks up the position of the minor axis of NGC5985 it turns out, as shown in Figs. 1 and 2, to be a line that looks as though it were drawn through the positions of the objects! Just a simple visual evaluation of Fig. 1 would lead to the conclusion that the configuration is physically significant. A combined numerical probability of the configuration gives a chance of around $`10^{-9}`$ to $`10^{-10}`$ of being accidental (Arp 1998). Nevertheless several peer reviewers recommended against publication on the grounds that the accidental probability was ”greater” than this. But, of course, several dozens of cases of anomalous associations had been reported since 1966, with chance probabilities running from $`10^{-4}`$ to $`10^{-5}`$. What is the combined probability of all these previous cases? And what is the motivation to claim each new case is ”a posteriori”?

## 3 The Companion Galaxies Around NGC5985

We have seen that the quasars are aligned along the minor axis of the central Seyfert. The sections that follow summarize that generally both quasars and companion galaxies are aligned along the minor axes. Is this true of the NGC5985 family? In Fig. 2 I have enlarged the central regions of Fig. 1 and plotted all the bright NGC galaxies in the region. It is apparent that these NGC galaxies, which are fainter than NGC5985 as companions should be, are almost exactly aligned along the WNW extension of the minor axis! 
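The order of magnitude of such an alignment probability can be illustrated with a simple binomial model: if position angles of unrelated objects were uniform, the chance that one lies within $`\pm \delta `$ of a fixed line through the galaxy (on either side) is $`p=4\delta /360`$. The tolerance and counts below are illustrative assumptions of mine, not the actual calculation of Arp (1998):

```python
from math import comb

def align_probability(n, k, delta_deg):
    """Chance that at least k of n uniformly random position angles fall
    within +/- delta_deg of a fixed line through the center (both directions)."""
    p = 4 * delta_deg / 360.0
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# e.g. 4 of 6 objects within ~3 degrees of the axis: of order 1e-5,
# comparable to the conservative estimate quoted in the text
print(align_probability(6, 4, 3.0))
```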
Taken together with the dwarf companion on the other side of the minor axis, this leaves no doubt that in the case of NGC5985 the companion galaxies and quasars are aligned exactly along the same minor axis. This then constitutes another proof that these objects of variously higher redshift have some physical relation to the low redshift, central galaxy. There is confirming evidence in the measured redshifts of these companions. As Fig. 2 shows, the one to the ESE is about +230 km/sec with respect to NGC5985 and the one to the WNW is about +400 km/sec. These redshifts are close enough to that of the parent galaxy to conventionally confirm their status as physical companions. But at the same time they exhibit the systematic excess redshift of younger generation galaxies in groups (See Fig. 1 of preceding paper in this volume and Arp 1997). In the following sections we will interpret this excess redshift as indicating that they have more recently evolved from quasars. Since they are older than the quasars, they have had time to fall back from apogalacticon to within a few diameters of the parent galaxy.

## 4 The Empirical Model of Ejection and Evolution

As Fig. 3 shows, we can now combine the data from the last 32 years of study of physical groups of extragalactic objects. What results is a sequence of quasars emerging from a large galaxy along its minor axis, increasing in luminosity and decreasing in redshift as they move outward. When they reach a maximum distance of about 500 kpc they have started to turn into compact, active galaxies. As they age further into more normal galaxies they may fall back toward the parent galaxy. In that case they fall back along the minor axis because they emerged with little or no angular momentum component. They move on plunging orbits. Ambartsumian (1958) noted that companion galaxies seemed to be ejected from larger galaxies. Arp (1967; 1968) showed evidence that quasars were also ejected from galaxies.
The astronomer most knowledgeable about disk galaxies, Erik Holmberg (1969), showed companion galaxies aligned along minor axes. He concluded they must be formed from gas ejected from the nucleus. Burbidge and Burbidge (1997) showed gas ejected along the minor axis of the strong Seyfert, NGC4258. Arp (1997b) showed a number of pairs of X-ray quasars ejected closely along the minor axis of Seyferts (e.g. NGC4235, NGC2639). These observational results are now summarized in Fig. 4. It is seen that the quasars preferentially come out in a cone angle of about $`\pm 20^o`$. The companion galaxies are preferentially confined within about $`\pm 35^o`$. This difference is in the expected direction because as the quasars reach their maximum extension and slow down, they are vulnerable to perturbation by objects at that distance and hence will fall in again along slightly deviated orbits. Also since the companions are older, the more recent ejection axes of quasars in some cases have moved because of precession of the galaxy or the spin axis of the nucleus.

## 5 Verification of the Model

The above model combines many cases, each one of which may only contain a few elements. But it is now possible to show in Fig. 5 the active Seyfert NGC3516. This galaxy is exceptional because it confirms the essentials of the model in a single case. The objects are drawn from a complete sample of bright X-ray objects within about 22 arc min of the Seyfert (Chu et al. 1998). Notice there are 5 quasars and a BL Lac-type object, the latter object representing a transition between a quasar and a more normal galaxy. The redshifts of these six objects are strictly ordered with the highest closest to the Seyfert and the lowest furthest away. They define very well the minor axis of the galaxy. As a side note it is clear that each of the six redshifts fits very closely to the quantized redshift peaks observed in quasars in general. Those peaks are: .06, .30, .60, .96, 1.41, 1.96 (Arp et al. 1990).
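The six quoted peak values form, to good accuracy, a geometric progression in (1 + z): each step multiplies (1 + z) by roughly 1.23 (the periodicity often associated with Karlsson). A quick numerical check:

```python
# quoted quasar redshift peaks from Arp et al. (1990)
peaks = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96]

# successive ratios of (1 + z); a geometric series has a constant ratio
ratios = [(1 + b) / (1 + a) for a, b in zip(peaks, peaks[1:])]
print(["%.3f" % r for r in ratios])  # every ratio is close to 1.23
```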
If these redshifts represented Doppler velocities, of course, quantization would be destroyed by random orientations to the line of sight. So quantization is direct proof that extragalactic redshifts are non-velocity. The intrinsic redshifts are indicated to evolve in discrete steps as the quasars evolve into galaxies. The quantization of redshifts in smaller steps in the companion galaxies further supports their being end products of this evolution. The NGC5985 case shown in Fig. 2 confirms very well the later stages of smaller redshift where the evolved companion galaxies have fallen back toward the parent galaxy. The very exact alignment of quasars and companion galaxies must mean that the minor axis of NGC5985 has stayed unusually well fixed in space over the time required to evolve from a high redshift quasar to a reasonably normal galaxy, or about $`7\times 10^9`$ years (Arp 1991). Another case of a very narrow alignment of companion galaxies can be seen around our Local Group center, M31 (Arp 1998a). The length of the NGC5985 line and the brightness of its components suggests that the group is closer than most Seyferts so far investigated. In particular the distribution of quasars is much more compact around NGC3516, which appears to be more distant. It is also clear, however, from the measured ellipticities of the images, that the minor axis of NGC3516 is tipped much closer to our line of sight than in NGC5985. That accounts for some foreshortening of the NGC3516 line as well as an apparently greater spread off the line relative to the length of the line.

## 6 Statistics

From 1966 onward statistical associations of high redshift quasars with low redshift galaxies were reported (Arp 1987). Significances ranged up to greater than 7.5 sigma. We will give only the latest example here as shown in Fig. 6.
There it is shown that a nearly complete sample of Seyfert galaxies has a conspicuous excess of bright X-ray point sources within about 50 arcmin radius (Radecke 1997; Arp 1997a). Since these X-ray sources are essentially all quasars or quasar-like, one can see at a glance that there are a large number of quasars physically associated with nearby Seyfert galaxies.

## 7 Theory

What would give an intrinsic (non-velocity) redshift to a galaxy - high when it is first born - and decreasing as it ages? The answer was supplied by Narlikar in 1977. It rests on the fact that the standard solution of the Einstein field equations as made by Friedmann in 1922 made one key assumption. It assumed that the elementary particles which constitute matter remain forever constant in time. A more general solution, as Narlikar showed, had the mass of particles increasing with time. This would furnish a series of answers to paradoxes which currently falsify the standard Big Bang solution:

1) Episodic creation near zero mass, giving initial outflow at signal speed, c.
2) Condensation into proto galaxies.
3) An intrinsic redshift initially high and decreasing with time.
4) Evolution into normal galaxies moving with small velocities.
5) Possibility of understanding quantized redshifts.
6) Transformation to local physics using local time scales.

The details of this theory are explained in recent publications (Arp 1998b). But here it should be emphasized that the observational disproofs of the conventional, singular creation theory can no longer be discarded with the claim that there is no viable alternative theory. The empirical conclusions of Ambartsumian and many others who followed have now been supported by numerous amplifying observations, and a unifying physical theory has been adduced.
It is clear from the experience of the last 40 years that influential astronomers will accept neither the observations which falsify the current beliefs nor the theory which enables these observations to be understood. Therefore it is of the utmost importance for each individual researcher to examine and judge the facts for themselves. The usefulness of each person’s career labors and their usefulness to science will depend on their choosing the correct fundamental assumptions.

## References

Ambartsumian, V.A. 1958, Onzieme Conseil de Physique Solvay, ed. R. Stoops, Bruxelles.
Arp, H. 1967, Ap.J. 148, 321.
Arp, H. 1968, Astrofizika 4, 59.
Arp, H. 1987, "Quasars, Redshifts and Controversies", Interstellar Media, Berkeley.
Arp, H. 1991, Apeiron Nos. 9-10, 18.
Arp, H. 1997, Astrophysics and Space Sciences 250, 163.
Arp, H. 1997a, Astron. Astrophys. 319, 33.
Arp, H. 1997b, Astron. Astrophys. 328, L17.
Arp, H. 1998a, Ap.J. 496, 661.
Arp, H. 1998b, "Seeing Red: Redshifts, Cosmology and Academic Science", Apeiron, Montreal.
Arp, H., Bi, H., Chu, Y., Zhu, X. 1990, Astron. Astrophys. 239, 33.
Burbidge, G.R., Burbidge, E.M. 1997, Ap.J. 477, L13.
Chu, Y., Wei, J., Hu, J., Zhu, X. and Arp, H. 1998, Ap.J. 500, 596.
Holmberg, E. 1969, Arkiv för Astronomi, Band 5, 305.
Markarian, B.E., Stepanyan, D.A., Erastova, L.K. 1986, Astrophysics 25, 51.
Narlikar, J.V. and Arp, H. 1993, Ap.J. 405, 51.
Radecke, H.-D. 1997, Astron. Astrophys. 319, 18.
Reimers, D., Hagen, H.-J. 1998, Astron. Astrophys. 329, L25.
Sulentic, J.W., Arp, H. and Di Tullio, G.A. 1978, Ap.J. 220, 47.
Zaritsky, D., Smith, R., Frenk, C.S. and White, S.D.M. 1997, Ap.J. 478, L53.
# Search for a possible space-time correlation between high energy neutrinos and gamma-ray bursts

## 1 Neutrino astronomy with MACRO

Besides high energy gamma ray production resulting from $`\pi ^0`$ decay, astrophysical beam dump models predict neutrino emission from $`\pi ^\pm `$ decay. Mesons are produced by accelerated protons interacting with matter or photons in an accretion disk (Gaisser Gaisser (1995)). The discovery of TeV gamma-ray emissions has enhanced the potential possibilities of this mechanism and the possible existence of such sources, but energies are not high enough to exclude synchrotron radiation or bremsstrahlung and inverse Compton production mechanisms. Neglecting photon absorption, it is expected that neutrino fluxes are almost equal to gamma ray ones and that the spectrum has the typical form due to the Fermi acceleration mechanism: $`\frac{dN}{dE}\propto E^{-(2.0÷2.2)}`$. Neutrinos produced in atmospheric cascades are a background to the search for astrophysical neutrinos for energies $`\lesssim 10`$ TeV. In fact, atmospheric neutrinos have a softer spectrum than astrophysical neutrinos, since at energies $`\gtrsim 100`$ GeV the decay length of mesons in the atmosphere becomes longer than the atmospheric depth and the spectrum steepens (differential spectral index $`\gamma \simeq 3.7`$). GRBs are possible sources of high energy $`\nu `$s: in the fireball scenario the beam dump mechanism can lead to $`\nu `$ emission (Halzen Halzen (1996), Waxman Bahcall (1997)). The transience of the GRB emission, even though it constrains the maximum neutrino energy to $`\sim 10^{19}`$ eV, improves the association of GRBs with measured neutrino events in underground detectors using arrival directions and times, even though expected fluxes are much lower than the atmospheric $`\nu `$ background.
The MACRO detector, located in the Hall B of the Gran Sasso underground laboratories, with a surface of $`76.6\times 12`$ m<sup>2</sup> and a height of 9 m, can indirectly detect neutrinos using a system of $`600`$ ton of liquid scintillator to measure the time of flight of particles (resolution $`\sim 500`$ ps) and $`20000`$ m<sup>2</sup> of streamer tubes for tracking (angular resolution better than $`1^{\circ }`$ and pointing accuracy checked using moon shadow detection (Ambrosio 1998a)). The time of flight technique allows the discrimination between downward-going atmospheric muons and upward-going events produced in the rock below (average atmospheric neutrino energy $`E_\nu \simeq 100`$ GeV) and inside ($`E_\nu \simeq 4`$ GeV) the detector by neutrinos which have crossed the Earth. Among $`31\times 10^6`$ atmospheric muons, a sample of 909 upward-going muons is selected with an automated analysis. Data taking began in March 1989 with the incomplete detector (Ahlen macro95 (1995)) and in April 1994 with the full detector (Ambrosio 1998b). In our convention $`1/\beta =c\mathrm{\Delta }T/L`$, calculated from the measured time of flight $`\mathrm{\Delta }T`$ and the track length $`L`$ between the scintillator layers, is $`+1`$ for downward-going muons and $`-1`$ for upward-going muons. Events with $`-1.25<1/\beta <-0.75`$ are selected. We look for a statistically significant excess of $`\nu `$ events in the direction of known $`\gamma `$ and $`X`$-ray sources (a list of 40 selected sources, 129 sources of the $`2^{nd}`$ Egret Catalogue, 2233 Batse GRBs, 220 SN remnants, 7 sources with $`\gamma `$ emission above 1 TeV) with respect to fluctuations of the atmospheric $`\nu `$ background. The expected background from atmospheric $`\nu `$s is calculated in declination bands of $`\pm 5^{\circ }`$ around the declination of the source by mixing 100 times the local coordinates and times of the upward-going $`\mu `$s.
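The time-of-flight discrimination just described is simple to state in code. The sketch below (the sign convention is assumed here: $`\mathrm{\Delta }T`$ is measured so that upward-going particles give negative values) computes $`1/\beta `$ and applies the quoted selection window:

```python
C = 0.299792458  # speed of light in m/ns

def one_over_beta(dt_ns, track_len_m):
    """1/beta = c * DeltaT / L: about +1 for downward-going and -1 for
    upward-going relativistic muons, under the assumed sign convention."""
    return C * dt_ns / track_len_m

def is_upward(dt_ns, track_len_m):
    """Selection window for upward-going candidates: -1.25 < 1/beta < -0.75."""
    x = one_over_beta(dt_ns, track_len_m)
    return -1.25 < x < -0.75

# a relativistic muon crossing L = 10 m takes |DeltaT| = L/c ~ 33.4 ns
dt = 10.0 / C
print(one_over_beta(dt, 10.0), is_upward(-dt, 10.0))
```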
We calculate flux limits in half-cones of $`3^{\circ }`$ taking into account reduction factors due to the signal fraction lost outside the cone (which depends on the $`\nu `$ spectrum, the kinematics of the CC interaction, $`\mu `$ propagation in the rock, and the MACRO angular resolution). We do not find any evidence of a signal from known sources or of clustering of events (we measure 89 clusters of $`3`$ events and expect 81.2 of them in a cone of $`3^{\circ }`$). Muon flux limits for some sources are: 2.5 for Crab Nebula, 5.6 for MRK421, 3.71 for Her X-1, 0.45 for Vela Pulsar, 0.65 for SN1006, in units of $`10^{-14}`$ cm<sup>-2</sup> s<sup>-1</sup>. For most of the considered sources MACRO gives the best flux limits compared to other underground experiments.

## 2 Space-time correlations between GRBs and upward-going muons

We look for correlations with 2233 GRBs in the Batse Catalogs 3B and 4B (Meegan Meegan (1997)) collected from 21 Apr. 1991 to 5 Oct. 1998 and 894 of the 909 upward-going muons detected by MACRO during this period (see Fig. 1). Considering the Batse angular accuracy, we estimate that a half-cone of $`20^{\circ }`$ ($`10^{\circ }`$) contains 99.8$`\%`$ (96.8$`\%`$) of the $`\nu `$ sources (if GRB sources are $`\nu `$ sources, too). The same percentages of upward-going muons are contained in these half-cones from the GRB sources. As a matter of fact, we calculate via Monte Carlo the fraction of signal lost, which depends on the $`\nu `$ spectral index, the multiple scattering of muons during propagation in the rock and the MACRO angular resolution. Using various cone apertures, we estimate that this fraction is negligible for $`10^{\circ }`$. The area for upward-going muon detection in the direction of the GRBs averaged over all the bursts is 119 m<sup>2</sup>. Its value is small because MACRO is sensitive to neutrinos only in the lower hemisphere and because the detector was incomplete in the period 1991-1994. We find no statistically significant correlation between neutrino event and GRB directions and detection times.
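Flux limits of the kind quoted in this paper follow from classical Poisson statistics: a 90% c.l. upper limit on the number of signal events, divided by the exposure. The sketch below reproduces the numbers of the GRB search (average area 119 m², 2233 bursts, 0 events inside 10° and 1 event inside 20°), ignoring the small expected background; method details such as background subtraction shift the result by a few per cent, so this is a consistency check rather than the exact procedure used.

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson variable with mean mu."""
    return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(n + 1))

def poisson_upper_limit(n_obs, cl=0.90):
    """Classical upper limit mu_up: P(N <= n_obs | mu_up) = 1 - cl (bisection)."""
    lo, hi = 0.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid) > 1.0 - cl:
            lo = mid  # mu still too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

# exposure per average burst: 119 m^2 (in cm^2) times 2233 bursts
exposure_cm2 = 119 * 1e4 * 2233
f10 = poisson_upper_limit(0) / exposure_cm2   # 0 events inside 10 deg
f20 = poisson_upper_limit(1) / exposure_cm2   # 1 event inside 20 deg
print(f"{f10:.2e}  {f20:.2e}")  # close to the quoted limits per average burst
```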
As shown in Fig. 2, we find no events in a window of $`\pm 200`$ s inside $`10^{\circ }`$ from the GRB directions and 1 event inside $`20^{\circ }`$, which was measured 39.4 s after the Batse GRB of 22 Sep. 1995 (4B 950922). For this burst the radius of the positional error box in the Batse catalog is 3.86$`^{\circ }`$, much smaller than the angular distance of 17.6$`^{\circ }`$ at which we find the neutrino event. The expected number of atmospheric $`\nu `$ events is computed with the delayed coincidence technique. We expect 0.035 (0.075) events in $`10^{\circ }`$ ($`20^{\circ }`$). The corresponding upper limits (90$`\%`$ c.l.) for the upward-going muon flux are $`0.87\times 10^{-9}`$ cm<sup>-2</sup> ($`1.44\times 10^{-9}`$ cm<sup>-2</sup>) upward-going muons per average burst. These limits exclude an extreme cosmic string-type model reported in (Halzen Halzen (1996)), which results in $`10^{-1}`$ $`\mu `$ cm<sup>-2</sup>, while according to a fireball scenario model in (Waxman Bahcall (1997)), a burst at a distance of 100 Mpc producing $`0.4\times 10^{51}`$ erg in neutrinos of around $`10^{14}`$ eV would produce $`6\times 10^{-11}`$ cm<sup>-2</sup> upward-going muons.

(\*) The MACRO Collaboration

M. Ambrosio<sup>12</sup>, R. Antolini<sup>7</sup>, C. Aramo<sup>7,n</sup>, G. Auriemma<sup>14,a</sup>, A. Baldini<sup>13</sup>, G. C. Barbarino<sup>12</sup>, B. C. Barish<sup>4</sup>, G. Battistoni<sup>6,b</sup>, R. Bellotti<sup>1</sup>, C. Bemporad <sup>13</sup>, E. Bernardini <sup>2,7</sup>, P. Bernardini <sup>10</sup>, H. Bilokon <sup>6</sup>, V. Bisi <sup>16</sup>, C. Bloise <sup>6</sup>, C. Bower <sup>8</sup>, S. Bussino<sup>14</sup>, F. Cafagna<sup>1</sup>, M. Calicchio<sup>1</sup>, D. Campana <sup>12</sup>, M. Carboni<sup>6</sup>, M. Castellano<sup>1</sup>, S. Cecchini<sup>2,c</sup>, F. Cei <sup>11,13</sup>, V. Chiarella<sup>6</sup>, B. C. Choudhary<sup>4</sup>, S. Coutu <sup>11,o</sup>, L. De Benedictis<sup>1</sup>, G. De Cataldo<sup>1</sup>, H. Dekhissi <sup>2,17</sup>, C. De Marzo<sup>1</sup>, I. De Mitri<sup>9</sup>, J.
Derkaoui <sup>2,17</sup>, M. De Vincenzi<sup>14,e</sup>, A. Di Credico <sup>7</sup>, O. Erriquez<sup>1</sup>, C. Favuzzi<sup>1</sup>, C. Forti<sup>6</sup>, P. Fusco<sup>1</sup>, G. Giacomelli <sup>2</sup>, G. Giannini<sup>13,f</sup>, N. Giglietto<sup>1</sup>, M. Giorgini <sup>2</sup>, M. Grassi <sup>13</sup>, L. Gray <sup>7</sup>, A. Grillo <sup>7</sup>, F. Guarino <sup>12</sup>, P. Guarnaccia<sup>1</sup>, C. Gustavino <sup>7</sup>, A. Habig <sup>3</sup>, K. Hanson <sup>11</sup>, R. Heinz <sup>8</sup>, Y. Huang<sup>4</sup>, E. Iarocci <sup>6,g</sup>, E. Katsavounidis<sup>4</sup>, E. Kearns <sup>3</sup>, H. Kim<sup>4</sup>, S. Kyriazopoulou<sup>4</sup>, E. Lamanna <sup>14</sup>, C. Lane <sup>5</sup>, D. S. Levin <sup>11</sup>, P. Lipari <sup>14</sup>, N. P. Longley <sup>4,l</sup>, M. J. Longo <sup>11</sup>, F. Maaroufi <sup>2,17</sup>, G. Mancarella <sup>10</sup>, G. Mandrioli <sup>2</sup>, S. Manzoor <sup>2,m</sup>, A. Margiotta Neri <sup>2</sup>, A. Marini <sup>6</sup>, D. Martello <sup>10</sup>, A. Marzari-Chiesa <sup>16</sup>, M. N. Mazziotta<sup>1</sup>, C. Mazzotta <sup>10</sup>, D. G. Michael<sup>4</sup>, S. Mikheyev <sup>4,7,h</sup>, L. Miller <sup>8</sup>, P. Monacelli <sup>9</sup>, T. Montaruli<sup>1</sup>, M. Monteno <sup>16</sup>, S. Mufson <sup>8</sup>, J. Musser <sup>8</sup>, D. Nicoló<sup>13,d</sup>, C. Orth <sup>3</sup>, G. Osteria <sup>12</sup>, M. Ouchrif <sup>2,17</sup>, O. Palamara <sup>10</sup>, V. Patera <sup>6,g</sup>, L. Patrizii <sup>2</sup>, R. Pazzi <sup>13</sup>, C. W. Peck<sup>4</sup>, S. Petrera <sup>9</sup>, P. Pistilli <sup>14,e</sup>, V. Popa <sup>2,i</sup>, A. Rainò<sup>1</sup>, A. Rastelli <sup>2,7</sup>, J. Reynoldson <sup>7</sup>, F. Ronga <sup>6</sup>, C. Satriano <sup>14,a</sup>, L. Satta <sup>6,g</sup>, E. Scapparone <sup>7</sup>, K. Scholberg <sup>3</sup>, A. Sciubba <sup>6,g</sup>, P. Serra-Lugaresi <sup>2</sup>, M. Severi <sup>14</sup>, M. Sioli <sup>2</sup>, M. Sitta <sup>16</sup>, P. Spinelli<sup>1</sup>, M. 
Spinetti <sup>6</sup>, M. Spurio <sup>2</sup>, R. Steinberg<sup>5</sup>, J. L. Stone <sup>3</sup>, L. R. Sulak <sup>3</sup>, A. Surdo <sup>10</sup>, G. Tarlè<sup>11</sup>, V. Togo <sup>2</sup>, D. Ugolotti <sup>2</sup>, M. Vakili <sup>15</sup>, C. W. Walter <sup>3</sup>, and R. Webb <sup>15</sup>. 1. Dipartimento di Fisica dell’Università di Bari and INFN, 70126 Bari, Italy 2. Dipartimento di Fisica dell’Università di Bologna and INFN, 40126 Bologna, Italy 3. Physics Department, Boston University, Boston, MA 02215, USA 4. California Institute of Technology, Pasadena, CA 91125, USA 5. Department of Physics, Drexel University, Philadelphia, PA 19104, USA 6. Laboratori Nazionali di Frascati dell’INFN, 00044 Frascati (Roma), Italy 7. Laboratori Nazionali del Gran Sasso dell’INFN, 67010 Assergi (L’Aquila), Italy 8. Depts. of Physics and of Astronomy, Indiana University, Bloomington, IN 47405, USA 9. Dipartimento di Fisica dell’Università dell’Aquila and INFN, 67100 L’Aquila, Italy 10. Dipartimento di Fisica dell’Università di Lecce and INFN, 73100 Lecce, Italy 11. Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA 12. Dipartimento di Fisica dell’Università di Napoli and INFN, 80125 Napoli, Italy 13. Dipartimento di Fisica dell’Università di Pisa and INFN, 56010 Pisa, Italy 14. Dipartimento di Fisica dell’Università di Roma “La Sapienza” and INFN, 00185 Roma, Italy 15. Physics Department, Texas A&M University, College Station, TX 77843, USA 16. Dipartimento di Fisica Sperimentale dell’Università di Torino and INFN, 10125 Torino, Italy 17. L.P.T.P., Faculty of Sciences, University Mohamed I, B.P. 
524 Oujda, Morocco $`a`$ Also Università della Basilicata, 85100 Potenza, Italy $`b`$ Also INFN Milano, 20133 Milano, Italy $`c`$ Also Istituto TESRE/CNR, 40129 Bologna, Italy $`d`$ Also Scuola Normale Superiore di Pisa, 56010 Pisa, Italy $`e`$ Also Dipartimento di Fisica, Università di Roma Tre, Roma, Italy $`f`$ Also Università di Trieste and INFN, 34100 Trieste, Italy $`g`$ Also Dipartimento di Energetica, Università di Roma, 00185 Roma, Italy $`h`$ Also Institute for Nuclear Research, Russian Academy of Science, 117312 Moscow, Russia $`i`$ Also Institute for Space Sciences, 76900 Bucharest, Romania $`l`$ Swarthmore College, Swarthmore, PA 19081, USA $`m`$ RPD, PINSTECH, P.O. Nilore, Islamabad, Pakistan $`n`$ Also INFN Catania, 95129 Catania, Italy $`o`$ Also Department of Physics, Pennsylvania State University, University Park, PA 16801, USA
# Estimating Vaccine Coverage by Using Computer Algebra

## 1 Introduction

(1.1) Nigel Gay \[Ga\] has estimated the coverage of MMR (measles, mumps, rubella) multivalent vaccination in a fixed age cohort by the following method: The rates $`p(\pm ,\pm ,\pm )`$ of being seropositive with each of the three diseases depend, via a polynomial system $`F`$, on the MMR coverage $`v`$, the exposition factors $`e_i`$, and the rates $`s_i`$ of seroconversion; the index $`i=1,2,3`$ stands for measles, mumps and rubella, respectively. On the other hand, it is the $`p(\pm ,\pm ,\pm )`$ which can be obtained from the available data. Hence, a maximum likelihood approach provides estimations of $`v`$, $`e_i`$, and $`s_i`$. Gay’s approach leads to numerical methods of finding values $`v,e_i,s_i`$ that minimize the distance between $`F_{\pm ,\pm ,\pm }(v,e_i,s_i)`$ and the measured $`p(\pm ,\pm ,\pm )`$. The present paper replaces this part by providing exact formulas describing the inverse of the polynomial map $`F:IR^7\to IR^8`$. Note that the image of $`F`$ is contained in the hyperplane $`[\mathrm{\Sigma }p(\pm ,\pm ,\pm )=1]`$, i.e. it is 7-dimensional like the source space of $`F`$. The final result providing our estimation of $`v,e_i,s_i`$ may be found in Theorem (3.3).

(1.2) We make the same three assumptions used by Gay \[Ga\]:

* Vaccinated children who do not seroconvert as a result of vaccination have the same probability of being seropositive as an unvaccinated child of the same age (i.e., $`e_i`$).
* In a single individual, seroconversion to each vaccine component is independent.
* Risk of exposure to infection is homogeneous within each age cohort and infection with each disease is independent.

However, we eliminate another assumption which is silently made in \[Ga\] in that we do not assume that the seroconversion $`s_i`$ for the $`i`$-th disease is independent of age.
(1.3) We would like to thank Duco van Straten for the useful discussions concerning the exciting mathematical pattern hidden in the MMR problem and its solution. Moreover, we are grateful to Nigel Gay for sending us his manuscript including the data of the ESEN (European Seroepidemiological Network) Project.

## 2 The MMR system

(2.1) First, let us recall from \[Ga\] the involved variables and their mutual relationship. Fixing one of the age cohorts, we denote by

* $`v`$ the proportion of children who have received the multivalent vaccine (“MMR coverage”),
* $`e_i`$ the rate measuring the exposure to natural infection with disease $`i`$ (“exposition factor”),
* $`s_i`$ the proportion of children previously with no detectable antibody to disease $`i`$ who acquire detectable antibody to disease $`i`$ when vaccinated (“seroconversion”).

The rate $`q_i`$ measuring the presence of antibodies to disease $`i`$ under the condition of being vaccinated may be easily expressed as

$$q_i=e_i+(1-e_i)s_i\text{with }i=1,2,3.$$

From these data it is possible to obtain information about the expected antibody prevalence in general. It is encoded in the $`8`$ variables $`p(\pm ,\pm ,\pm )`$ with “$`+`$” at the $`i`$-th place standing for the presence and “$`-`$” for the absence of antibodies to the $`i`$-th disease. Likewise, we may think about the sign triples as numbers between $`0`$ (meaning “$`---`$”) and $`7`$ (meaning “$`+++`$”); this allows the shorter description $`p(\pm ,\pm ,\pm )=p(k)=p_k`$.
The equations are

$$\begin{array}{ccccccc}\hfill p_7& =& p(+,+,+)& =& \hfill vq_1q_2q_3& +& (1-v)e_1e_2e_3\hfill \\ \hfill p_6& =& p(+,+,-)& =& \hfill vq_1q_2(1-q_3)& +& (1-v)e_1e_2(1-e_3)\hfill \\ \hfill p_5& =& p(+,-,+)& =& \hfill vq_1(1-q_2)q_3& +& (1-v)e_1(1-e_2)e_3\hfill \\ \hfill p_4& =& p(+,-,-)& =& \hfill vq_1(1-q_2)(1-q_3)& +& (1-v)e_1(1-e_2)(1-e_3)\hfill \\ \hfill p_3& =& p(-,+,+)& =& \hfill v(1-q_1)q_2q_3& +& (1-v)(1-e_1)e_2e_3\hfill \\ \hfill p_2& =& p(-,+,-)& =& \hfill v(1-q_1)q_2(1-q_3)& +& (1-v)(1-e_1)e_2(1-e_3)\hfill \\ \hfill p_1& =& p(-,-,+)& =& \hfill v(1-q_1)(1-q_2)q_3& +& (1-v)(1-e_1)(1-e_2)e_3\hfill \\ \hfill p_0& =& p(-,-,-)& =& \hfill v(1-q_1)(1-q_2)(1-q_3)& +& (1-v)(1-e_1)(1-e_2)(1-e_3).\hfill \end{array}$$

Remark: In \[Ga\], the variables $`v`$, $`e_i`$, $`q_i`$, and $`p_k`$ carry a second index pointing to the special age cohort; $`s_i`$ does not because of Gay’s assumption mentioned at the end of (1.2).

(2.2) The previous equations express the variables $`p_k`$ in terms of $`v,e_i,q_i`$ or, since $`s_i=(q_i-e_i)/(1-e_i)`$, in terms of $`v,e_i,s_i`$. Our goal is to describe the inverse dependencies, and we proceed in two steps: First, using elimination theory, we produce in (2.3) and (2.4) for each of the variables $`v,e_i,q_i`$ a separate equation with coefficients in the polynomial ring $`IQ[p_0,\mathrm{},p_7]`$. The surprising fact will be that all these equations are quadratic ones. Then, as a second step, we will check in (2.5) which of the $`2^7`$ combinations actually provide a solution to our system. The results of these investigations are gathered in Theorem (2.5). Before we start this program, we would like to introduce an easy technical trick in which we replace the variables $`p_k`$ by symbolic fractions $`a_k/n`$. By doing so, it changes the above equations in the obvious way. For instance, the first one becomes

$$a_7=a(+,+,+)=nvq_1q_2q_3+n(1-v)e_1e_2e_3.$$

Since this manipulation increases both the degree and the number of variables, it seemingly complicates the problem.
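For concreteness, the forward system above is easy to code directly. The sketch below (variable names are mine) maps $`(v,e_i,s_i)`$ to the eight probabilities $`p_k`$, reading the sign triple as a 3-bit number exactly as in the text; the example parameters are chosen so that $`q=(0.9,0.8,0.7)`$.

```python
def mmr_probs(v, e, s):
    """Forward MMR map: coverage v, exposure e[i], seroconversion s[i]
    -> probabilities p[0..7]; bit 2 of k is disease 1, bit 0 is disease 3."""
    q = [ei + (1 - ei) * si for ei, si in zip(e, s)]  # q_i = e_i + (1-e_i)s_i
    p = []
    for k in range(8):
        bits = [(k >> 2) & 1, (k >> 1) & 1, k & 1]
        term_q = term_e = 1.0
        for b, qi, ei in zip(bits, q, e):
            term_q *= qi if b else (1 - qi)
            term_e *= ei if b else (1 - ei)
        p.append(v * term_q + (1 - v) * term_e)
    return p

# v = 0.4, e = (0.2, 0.3, 0.5); the s values give q = (0.9, 0.8, 0.7)
p = mmr_probs(0.4, [0.2, 0.3, 0.5], [0.875, 0.5 / 0.7, 0.4])
print(p[7], sum(p))  # p_7 = 0.2196 here, and the p_k sum to 1
```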
However, using computer algebra systems, the computational time decreases substantially. Moreover, another advantage of our approach is that $`_{k=0}^7p_k=1`$ translates into $`_{k=0}^7a_k=n`$. In particular, when finally applying our formulas, we may directly substitute the number of observed probands in each category for the corresponding variables $`a_k`$. The number $`n`$ equals the size of the cohort.

(2.3) Let us start with eliminating $`n,e_i,q_i`$ to obtain an equation for the variable $`v`$ which is, by the way, of major interest. We work with the computer algebra system Singular developed at the University of Kaiserslautern, \[GPS\]. Let $`R`$ be a polynomial ring of characteristic zero with 16 variables $`a_k,n,v,e_i,q_i`$. For the monomial order we have to choose a global one, e.g. dp(16). Transforming the 8 equations into an ideal $`I\subseteq R`$, the command “eliminate(I,n\*e(1)\*e(2)\*e(3)\*q(1)\*q(2)\*q(3))” produces a quadratic equation

$$c_1(a_0,\mathrm{},a_7)v^2-c_1(a_0,\mathrm{},a_7)v+c_0(a_0,\mathrm{},a_7)=0$$

with huge polynomials $`c_1,c_0`$ of degree 6 in the variables $`a_0,\mathrm{},a_7`$. We may also use Singular for the factorization of polynomials. Applied to the coefficient $`c_1`$ as well as to the discriminant of our quadratic polynomial, this yields nice results.
With

$$\begin{array}{ccc}\hfill f_1& :=& n=\left(a_0+a_3+a_5+a_6\right)+\left(a_7+a_4+a_2+a_1\right)\hfill \\ \hfill f_3& :=& \left(\left(a_0+a_3+a_5+a_6\right)-\left(a_7+a_4+a_2+a_1\right)\right)\left(a_0a_7+a_3a_4+a_5a_2+a_6a_1\right)\hfill \\ & & -\mathrm{\hspace{0.17em}2}\left(a_0a_7(a_0-a_7)+a_3a_4(a_3-a_4)+a_5a_2(a_5-a_2)+a_6a_1(a_6-a_1)\right)\hfill \\ & & +\mathrm{\hspace{0.17em}2}\left(\left(a_3a_5a_6+a_0a_5a_6+a_0a_3a_6+a_0a_3a_5\right)-\left(a_4a_2a_1+a_7a_2a_1+a_7a_4a_1+a_7a_4a_2\right)\right)\hfill \\ \hfill f_4& :=& \left(a_0^2a_7^2+a_3^2a_4^2+a_5^2a_2^2+a_6^2a_1^2\right)+\mathrm{\hspace{0.17em}4}\left(a_0a_3a_5a_6+a_7a_4a_2a_1\right)\hfill \\ & & -\mathrm{\hspace{0.17em}2}\left(a_0a_7a_3a_4+a_0a_7a_5a_2+a_0a_7a_6a_1+a_3a_4a_5a_2+a_3a_4a_6a_1+a_5a_2a_6a_1\right),\hfill \end{array}$$

we obtain

$$c_1=f_1^2f_4\text{and}c_1-4c_0=f_3^2.$$

In particular, the two solutions for $`v`$ are

$$v_{1,2}=\frac{1}{2}\left(1\pm \sqrt{\frac{c_1-4c_0}{c_1}}\right)=\frac{1}{2}\left(1\pm \frac{f_3(a_0,\mathrm{},a_7)}{f_1(a_0,\mathrm{},a_7)\sqrt{f_4(a_0,\mathrm{},a_7)}}\right).$$

Remarks:

* Note that whenever $`v`$ solves the equation, then so does $`(1-v)`$. This symmetry may easily be seen in the original 8 equations by switching the variables $`e_i`$ and $`q_i`$.
* The formulas for $`f_1,f_3`$, and $`f_4`$ become very natural if we recall that $`a_0,a_3,a_5,a_6`$ correspond to $`a(-,-,-)`$, $`a(-,+,+)`$, $`a(+,-,+)`$, $`a(+,+,-)`$, respectively. These variables are those which have an even number of plus signs. This fact may be illustrated by imagining the variables $`a(\pm ,\pm ,\pm )`$ as sitting in the corners of a cube. Then, $`a_0,a_3,a_5,a_6`$ correspond to the vertices of one of the two inscribed regular tetrahedra. The remaining $`a_7,a_4,a_2,a_1`$ are contained in the opposite corners, respectively.
* It has been observed by Duco van Straten that $`f_4`$ equals the hyperdeterminant of the three-dimensional matrix $`A_{\bullet }`$ formed by the variables $`a(\pm ,\pm ,\pm )`$, cf. Proposition 14.1.7 in \[GKZ\]. Moreover, $`f_3`$ is a linear combination of the derivatives of $`f_4`$ which follows the usual pattern,

$$2f_3=\left(\frac{\partial f_4}{\partial a_0}+\frac{\partial f_4}{\partial a_3}+\frac{\partial f_4}{\partial a_5}+\frac{\partial f_4}{\partial a_6}\right)-\left(\frac{\partial f_4}{\partial a_7}+\frac{\partial f_4}{\partial a_4}+\frac{\partial f_4}{\partial a_2}+\frac{\partial f_4}{\partial a_1}\right).$$

Finally, we would like to note that the coefficient $`c_0`$ itself does split into a product of three quadrics:

$$c_0=f_{21}f_{22}f_{23}\text{with}\begin{array}{ccc}\hfill f_{21}& :=& (a_0+a_4)(a_7+a_3)-(a_1+a_5)(a_6+a_2)\hfill \\ \hfill f_{22}& :=& (a_0+a_2)(a_7+a_5)-(a_4+a_6)(a_3+a_1)\hfill \\ \hfill f_{23}& :=& (a_0+a_1)(a_7+a_6)-(a_2+a_3)(a_5+a_4).\hfill \end{array}$$

(2.4) Now, we focus on the remaining six variables $`e_i`$ and $`s_i`$. Following the above recipe, we obtain again quadratic equations for each of them, but with much smaller coefficients. They are no longer of degree 6, but quadratic themselves.

Notation: With $`A_{\bullet }`$ being the three-dimensional matrix formed by the variables $`a(\pm ,\pm ,\pm )`$, we derive the following ordinary $`(2\times 2)`$ matrices from it:

* $`A_+(1)`$ denotes the layer consisting of the entries $`a(+,\cdot ,\cdot )`$, i.e., the right hand face of the cube depicted above; the remaining (left) one forms the matrix $`A_{-}(1)`$. Similarly, we may define $`A_\pm (2)`$ and $`A_\pm (3)`$ as the layer pairs with respect to the second and the third index, respectively.
* Considering the sum of the layers, we obtain $`A_\mathrm{\Sigma }(i):=A_+(i)+A_{-}(i)`$ for $`i=1,2,3`$.

Using this new terminology, we may recover the quadratic $`c_0`$-factors $`f_{2i}`$ from the end of (2.3) as

$$f_{2i}=detA_\mathrm{\Sigma }(i)\text{with}i=1,2,3.$$

Fixing a disease index $`i`$, the elimination done by Singular tells us that $`e_i`$ and $`q_i`$ both obey the same quadratic equation.
It is

$$\left(detA_\mathrm{\Sigma }(i)\right)x^2-\left(detA_\mathrm{\Sigma }(i)+detA_+(i)-detA_{-}(i)\right)x+detA_+(i)=0.$$

The discriminant is the hyperdeterminant $`detA_{\bullet }=f_4`$ again. Hence, the solutions for $`e_i`$ and $`q_i`$ are

$$\left[(e_i)_{1,2}\text{and}(q_i)_{1,2}\right]=\frac{1}{2}\left(1+\frac{detA_+(i)-detA_{-}(i)\pm \sqrt{detA_{\bullet }}}{detA_\mathrm{\Sigma }(i)}\right)=\frac{1}{2}\left(1+\frac{g_{2i}\pm \sqrt{f_4}}{f_{2i}}\right)$$

with $`g_{2i}`$ being the quadratic polynomials

$$g_{2i}:=detA_+(i)-detA_{-}(i)=\{\begin{array}{cc}-a_0a_3+a_1a_2+a_4a_7-a_5a_6\hfill & (\text{for }i=1)\hfill \\ -a_0a_5+a_1a_4+a_2a_7-a_3a_6\hfill & (\text{for }i=2)\hfill \\ -a_0a_6+a_1a_7+a_2a_4-a_3a_5\hfill & (\text{for }i=3).\hfill \end{array}$$

(2.5) Assuming the general case of $`f_1\ne 0`$, $`detA_{\bullet }\ne 0`$, and $`detA_\mathrm{\Sigma }(i)\ne 0`$ for each $`i=1,2,3`$, we have narrowed the number of possible values for each of the variables $`v,e_i`$, and $`q_i`$ down to two. It remains to check which of the $`2^7`$ combinations survive to provide an actual solution of the original system (2.1). This can easily be done by considering the sum of those equations out of the original system that correspond to a certain face of the cube depicted in (2.3). For instance, adding up the equations for $`a_7,a_6,a_5`$, and $`a_4`$ provides

$$a_7+a_6+a_5+a_4=f_1vq_1+f_1(1-v)e_1.$$

All variables have been eliminated except $`v`$, $`q_1`$, and $`e_1`$. This allows us to show that the $`e`$’s must not equal the $`q`$’s. (Assuming $`e_1=q_1`$, we would obtain $`a_7+a_6+a_5+a_4=f_1e_1`$. However, substituting this value of $`e_1`$ into the quadratic equation of (2.4) yields

$$f_1^2\left(f_{21}e_1^2-(f_{21}+g_{21})e_1+detA_+(1)\right)=-f_{22}f_{23},$$

which is generally different from zero.) Now, by Remark (2.3)(1), we may assume that, w.l.o.g., $`v=(f_1\sqrt{f_4}+f_3)/(2f_1\sqrt{f_4})`$.
Hence, with $`e_1=(f_{21}+g_{21}\mp \sqrt{f_4})/(2f_{21})`$ and $`q_1=(f_{21}+g_{21}\pm \sqrt{f_4})/(2f_{21})`$, the above equation multiplied with $`4f_{21}\sqrt{f_4}`$ becomes $$\begin{array}{ccc}\hfill 4f_{21}\sqrt{f_4}\left(\underset{k=4}{\overset{7}{\sum }}a_k\right)& =& \left(f_1\sqrt{f_4}+f_3\right)\left(f_{21}+g_{21}\pm \sqrt{f_4}\right)+\left(f_1\sqrt{f_4}-f_3\right)\left(f_{21}+g_{21}\mp \sqrt{f_4}\right)\hfill \\ & =& 2f_1\sqrt{f_4}\left(f_{21}+g_{21}\right)\pm \mathrm{\hspace{0.17em}2}f_3\sqrt{f_4}.\hfill \end{array}$$ In particular, since $`\mathrm{\hspace{0.17em}2}f_{21}\left(\sum _{k=4}^7a_k\right)=f_1\left(f_{21}+g_{21}\right)+f_3`$, only the signs on top survive in the formulas of $`e_1`$ and $`q_1`$. Finally, one may use Singular again for checking that these values, together with the similar ones for the remaining variables, indeed yield a solution of the original system. This means that we have shown the following Theorem: If $`f_1,f_4,f_{2i}\ne 0`$ for $`i=1,2,3`$, then the polynomial system of (2.2), with the adaptation $`p_k=a_k/n`$ made in (2.2), has exactly two solutions. They are $$v=\frac{f_1\sqrt{f_4}\pm f_3}{2f_1\sqrt{f_4}},e_i=\frac{f_{2i}+g_{2i}\mp \sqrt{f_4}}{2f_{2i}},q_i=\frac{f_{2i}+g_{2i}\pm \sqrt{f_4}}{2f_{2i}}(i=1,2,3).$$ If some of the above polynomials $`f_{\bullet }`$ do vanish, then the system (2.2) might have infinitely many solutions or no solution at all. ## 3 The MMR coverage (3.1) If we apply the previous theory to our statistical problem of estimating the MMR coverage, then $`a_k`$ stands for the number of persons of a prefixed age group observed to have antibody status $`k`$ ($`k=0,\mathrm{\dots },7`$). Thus, $`f_1`$ is the size of the cohort, and this number is automatically positive. On the other hand, we would like to interpret the solutions $`v,e_i,q_i`$, and $`s_i`$ of the MMR system as estimations of the probabilities described in (2.2). In particular, they should be real numbers and, moreover, be contained in the interval $`[0,1]`$. 
While in \[Ga\] the latter is forced by the numerical program used to solve the system, our solutions may not have these properties. However, this should not be considered problematic, but a feature of our method. If the solutions fall outside the meaningful range, this is a strong hint that the input data $`a_k`$ are of poor quality. (3.2) In the following, we will formulate the conditions the input data have to fulfill for yielding appropriate results. Moreover, we will see that, in the statistical context, only one of the two solutions mentioned in Theorem (2.2) survives. Theorem: Let $`a_k`$ be the observed number of people in a fixed age group with antibody status $`k`$. Then, the MMR system has a good statistical solution if and only if $$f_4(\underset{¯}{a})>0\text{and}f_{2i}(\underset{¯}{a})\ge \sqrt{f_4(\underset{¯}{a})}+\left|g_{2i}(\underset{¯}{a})\right|(i=1,2,3).$$ If these conditions are satisfied, then the estimation for $`v,e_i,s_i`$ is $$v=\frac{f_1\sqrt{f_4}+f_3}{2f_1\sqrt{f_4}},e_i=\frac{f_{2i}+g_{2i}-\sqrt{f_4}}{2f_{2i}},s_i=\frac{2\sqrt{f_4}}{f_{2i}-g_{2i}+\sqrt{f_4}}(i=1,2,3).$$ Proof: Positivity of $`f_4`$ means that the solutions described in Theorem (2.2) are real. Assuming this, we have $$v\in [0,1]\iff f_1\sqrt{f_4}\pm f_3\ge 0\iff f_1^2f_4\ge f_3^2.$$ On the other hand, we have seen in (2.2) that $$f_1^2f_4=c_1=(c_1-4c_0)+4c_0=f_3^2+\mathrm{\hspace{0.17em}4}f_{21}f_{22}f_{23}.$$ Hence, the condition “$`v\in [0,1]`$” is equivalent to $`f_{21}f_{22}f_{23}>0`$. Since $`s_i=(q_i-e_i)/(1-e_i)`$, we know that $$e_i,s_i\in [0,1]\iff 0\le e_i\le q_i\le \mathrm{\hspace{0.17em}1}.$$ From Theorem (2.2) we obtain, depending on the choice of the solution, that $`q_i-e_i=\sqrt{f_4}/f_{2i}`$ for $`i=1,2,3`$ or that $`q_i-e_i=-\sqrt{f_4}/f_{2i}`$ for $`i=1,2,3`$. Anyway, for $`q_i\ge e_i`$, the polynomials $`f_{21},f_{22},f_{23}`$ must have the same sign. Together with $`f_{21}f_{22}f_{23}>0`$ obtained above, this means that $`f_{21},f_{22},f_{23}>0`$. 
In particular, looking at Theorem (2.2), only the solution with the top sign survives. Finally, it is easy to see that the conditions $`e_i\ge 0`$ and $`q_i\le 1`$ translate into $`f_{2i}\ge \sqrt{f_4}-g_{2i}`$ and $`f_{2i}\ge \sqrt{f_4}+g_{2i}`$, respectively. $`\mathrm{}`$ (3.3) Remark: If one is only interested in the MMR coverage $`v`$, then the conditions ensuring a meaningful result may be weakened. It follows from the proof of the previous theorem that $$f_4(\underset{¯}{a})>0\text{and}f_{2i}(\underset{¯}{a})>0(i=1,2,3)$$ will do. ## 4 Data (4.1) To illustrate our results, we have chosen data from one country of the ESEN Project, \[Ga\]. These data have not yet been finalized as they might be changed according to a new standardization between the European countries. For that reason, the use of these data here is for illustrative purposes only. The input, i.e., the sampled variables $`a_k`$, may be found in the table (4.4). The first table compares our estimation of $`v,e_1,e_2`$, and $`e_3`$ by age groups (AG) with that obtained by Gay in \[Ga\]; the variables pointing to his values carry a tilde. 
| AG | $`\stackrel{~}{v}`$ | $`v`$ | $`\stackrel{~}{e}_1`$ | $`e_1`$ | $`\stackrel{~}{e}_2`$ | $`e_2`$ | $`\stackrel{~}{e}_3`$ | $`e_3`$ | $`s_1`$ | $`s_2`$ | $`s_3`$ | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | 1 | 0.227 | 0.227 | 0.003 | 0.005 | 0.019 | 0.019 | 0.014 | 0.011 | 0.950 | 0.861 | 0.974 | | 2 | 0.642 | 0.642 | 0.122 | 0.144 | 0.020 | 0.017 | 0.090 | 0.090 | 0.976 | 0.878 | 0.922 | | 3 | 0.715 | 0.710 | 0.122 | 0.112 | 0.041 | 0.046 | 0.090 | 0.087 | 1.002 | 0.912 | 0.930 | | 4 | 0.837 | 0.824 | 0.251 | 0.279 | 0.041 | 0.054 | 0.106 | 0.219 | 1.003 | 0.886 | 0.922 | | 5 | 0.859 | 0.863 | 0.292 | 0.252 | 0.241 | 0.227 | 0.106 | 0.000 | 1.000 | 0.886 | 0.921 | | 6 | 0.794 | 0.889 | 0.621 | 0.427 | 0.324 | 0.094 | 0.106 | -0.037 | 0.961 | 0.855 | 0.830 | | 7 | 0.645 | 0.847 | 0.756 | 0.550 | 0.502 | 0.006 | 0.256 | 0.258 | 0.949 | 0.938 | 0.678 | | 8 | 0.662 | 0.794 | 0.764 | 0.652 | 0.502 | 0.285 | 0.411 | 0.356 | 0.969 | 0.877 | 0.798 | | 9 | 0.576 | 0.900 | 0.764 | 0.588 | 0.665 | 0.279 | 0.481 | -0.007 | 0.833 | 0.857 | 0.838 | | 10 | 0.478 | 0.940 | 0.906 | 0.667 | 0.734 | 0.049 | 0.631 | 0.450 | 0.906 | 0.892 | 0.660 | The main difference between Gay’s and our results can be found in the values of $`v,e_1,e_2,e_3`$ in the higher age groups. Moreover, while Gay has assumed age independent seroconversion rates, our solutions $`s_1,s_2,s_3`$ do vary with age; the most striking example is the rubella seroconversion $`s_3`$. The comparison of Gay’s values with the age average of our solutions for $`s_1,s_2,s_3`$ is as follows: | Seroconversion by N. Gay: | 0.989 | 0.880 | 0.910 | | --- | --- | --- | --- | | Average of our $`s_1,s_2,s_3`$: | 0.955 | 0.884 | 0.847 | (4.2) We can use the equations of (2.2) to re-calculate the expected antibody prevalence out of the solutions obtained for $`v,e_i,s_i`$. 
In other words, for each antibody status $`(\pm ,\pm ,\pm )`$ we are looking for the number of people that should have been observed to yield the desired result. Because we used an exact method, it is no surprise that our solutions give exactly back the input data; they fill the $`a_k`$-columns in the following table. On the other hand, using Gay’s solutions, we obtain different values which are contained in the $`\stackrel{~}{a}_k`$-columns: | $`---`$ | | $`--+`$ | | $`-+-`$ | | $`-++`$ | | $`+--`$ | | $`+-+`$ | | $`++-`$ | | $`+++`$ | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $`\stackrel{~}{a}_0`$ | $`a_0`$ | $`\stackrel{~}{a}_1`$ | $`a_1`$ | $`\stackrel{~}{a}_2`$ | $`a_2`$ | $`\stackrel{~}{a}_3`$ | $`a_3`$ | $`\stackrel{~}{a}_4`$ | $`a_4`$ | $`\stackrel{~}{a}_5`$ | $`a_5`$ | $`\stackrel{~}{a}_6`$ | $`a_6`$ | $`\stackrel{~}{a}_7`$ | $`a_7`$ | | 155.8 | 156 | 2.3 | 2 | 3.1 | 3 | 0.5 | 2 | 1.0 | 1 | 5.0 | 6 | 3.7 | 1 | 37.7 | 38 | | 49.1 | 48 | 5.0 | 5 | 1.1 | 1 | 1.0 | 2 | 7.9 | 9 | 12.7 | 13 | 8.2 | 7 | 90.2 | 90 | | 40.8 | 42 | 4.2 | 4 | 1.8 | 2 | 1.2 | 0 | 6.9 | 6 | 14.6 | 11 | 9.8 | 8 | 107.6 | 114 | | 20.1 | 18 | 2.5 | 5 | 1.0 | 1 | 1.2 | 0 | 8.2 | 8 | 17.7 | 18 | 11.6 | 9 | 129.7 | 133 | | 14.6 | 17 | 1.8 | 0 | 4.7 | 5 | 1.8 | 0 | 7.3 | 7 | 16.1 | 15 | 15.3 | 15 | 153.4 | 156 | | 10.2 | 13 | 1.3 | 0 | 5.0 | 2 | 1.2 | 3 | 17.9 | 14 | 14.8 | 20 | 20.7 | 30 | 145.9 | 135 | | 6.9 | 11 | 2.4 | 4 | 7.0 | 1 | 2.7 | 3 | 21.9 | 16 | 15.1 | 13 | 30.3 | 40 | 128.7 | 127 | | 5.0 | 7 | 3.5 | 4 | 5.0 | 3 | 3.8 | 3 | 16.5 | 15 | 19.1 | 20 | 23.2 | 25 | 135.9 | 135 | | 3.4 | 6 | 3.1 | 1 | 6.7 | 4 | 6.5 | 9 | 11.1 | 11 | 14.4 | 14 | 26.7 | 27 | 122.1 | 122 | | 0.9 | 2 | 1.5 | 2 | 2.4 | 1 | 4.2 | 4 | 8.5 | 7 | 17.1 | 17 | 26.1 | 28 | 121.2 | 121 | (4.3) In the following, we will discuss some of the properties of our solutions. 
* One should not worry too much about the negative rates or rates above 1 that appear among the $`e_i`$ or $`s_i`$. In all those cases, the values are very close to the allowed range. * Our major concern is caused by the exposition factors $`e_2`$ and $`e_3`$. They seem to be very small in the higher age groups and, additionally, they do not increase with age. For the latter, however, we may use the same explanation as Gay did for the decline of his $`v`$ in older cohorts, namely that the data arise from different cohorts in each age group. * As already mentioned before, we did not assume ad hoc that the seroconversions $`s_i`$ are age independent. However, as a result of our calculations, we obtained values for mumps and measles that did not vary greatly, and the averages are quite close to Gay’s values. On the other hand, the seroconversion factor for rubella shows an unusual behavior in the higher age groups, and we would be interested in an explanation for it. The major difference between Gay’s and our approach is the following: Altmann: We consider each age group separately; this yields a system of 7 equations in 7 variables for each group, allowing exact solutions with easy formulas. Gay: He considers 10 age groups at once, yielding a system with 70 equations in 70 variables. Moreover, he creates additional restrictions by * assuming that the seroconversion $`s_i`$ is age independent (meaning a loss of $`27`$ variables), * and by forcing the exposition factors $`e_i`$ to increase with age (meaning the introduction of additional inequalities). For the remaining system, Gay uses a numerical approach to find values for $`v(\text{age})`$, $`e_i(\text{age})`$, and $`s_i`$ that fit the system as well as possible. Exact solutions are of course out of reach. Thus, the fact that the above problem (2) does not occur in Gay’s solutions is no surprise at all. It was part of his method to enforce all these properties, which are, however, biologically plausible. 
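For concreteness, the age-group-wise estimation can be sketched in code. This is only an illustration of the closed formulas above: the function name and the status-bit convention (component $`i`$ of status $`k`$ is positive iff bit $`4,2,1`$ of $`k`$ is set, so that $`a_4,\mathrm{\dots },a_7`$ form the layer $`A_+(1)`$) are our own, $`f_4`$ is obtained as the slice-independent discriminant of the $`e_i/q_i`$ quadratic, and $`f_3`$ via the identity $`2f_{21}(a_4+\mathrm{\dots }+a_7)=f_1(f_{21}+g_{21})+f_3`$ used in the sign discussion above.

```python
from math import sqrt

BITS = {1: 4, 2: 2, 3: 1}   # disease index i -> bit of the status number k

def solve_group(a):
    """Closed-form estimates (v, e, q, s) for one age group from counts a[0..7]."""
    f1 = sum(a)
    f2, g2, dplus = {}, {}, {}
    for i, b in BITS.items():
        r, c = sorted({4, 2, 1} - {b}, reverse=True)
        # determinant of the 2x2 layer with bit b fixed (base 0: A_-, base b: A_+)
        det = lambda base: a[base] * a[base + r + c] - a[base + c] * a[base + r]
        dminus, dplus[i] = det(0), det(b)
        sig = lambda k: a[k] + a[k + b]                 # entries of A_Sigma(i)
        f2[i] = sig(0) * sig(r + c) - sig(c) * sig(r)   # f_2i = det A_Sigma(i)
        g2[i] = dplus[i] - dminus                       # g_2i = det A_+(i) - det A_-(i)
    # hyperdeterminant f4 = discriminant of the e_i/q_i quadratic (same for each i)
    f4 = (f2[1] + g2[1]) ** 2 - 4 * f2[1] * dplus[1]
    # f3 from 2 f21 (a_4 + ... + a_7) = f1 (f21 + g21) + f3
    f3 = 2 * f2[1] * sum(a[4:8]) - f1 * (f2[1] + g2[1])
    rt = sqrt(f4)                                       # requires f4 > 0
    v = (f1 * rt + f3) / (2 * f1 * rt)
    e = {i: (f2[i] + g2[i] - rt) / (2 * f2[i]) for i in BITS}
    q = {i: (f2[i] + g2[i] + rt) / (2 * f2[i]) for i in BITS}
    s = {i: (q[i] - e[i]) / (1 - e[i]) for i in BITS}
    return v, e, q, s
```

Applied to the age group 1 counts of the data table above, this reproduces the first row of the comparison table: $`v0.227`$, $`e(0.005,0.019,0.011)`$, $`s(0.950,0.861,0.974)`$.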
An advantage of Gay’s method is that imperfect data in single age groups might be corrected by the better ones. On the other hand, our method tells us which data are better or worse, i.e., it gives information about their quality. Moreover, besides exactness, the main advantage of our approach seems to be that the formulas for $`v,e_i,s_i`$ are mutually independent. Hence, even if one dislikes the results for the $`e_i`$’s or $`s_i`$’s, one still has an explicit formula for the MMR coverage $`v`$ which works well. Doris Altmann Robert Koch Institut Stresemannstr. 90-102 D-10963 Berlin, Germany e-mail: altmannd@rki.de Klaus Altmann Institut für Reine Mathematik Humboldt-Universität zu Berlin Ziegelstr. 13A D-10099 Berlin, Germany e-mail: altmann@mathematik.hu-berlin.de
# Dynamic properties of solitons in the Frenkel-Kontorova Model. Application to incommensurate CDW conductors ## I Introduction A system of interacting particles in a sinusoidal external potential (the Frenkel-Kontorova (FK) model ) is widely used to describe a broad variety of physical phenomena, such as the statics and dynamics of incommensurate phases (see, e.g. ), transport properties in quasi-one dimensional conductors (see Ref. 3 and references therein), adatom diffusion on a metal surface , etc. The peculiar features of the FK model usually explored are related to the kink-like solitons. Properties of the kinks have been described in a number of publications . The dynamics of the FK model has also been extensively studied, but mostly in relation to the dynamics of incommensurate superstructures rather than to the single kink . However, it is not yet completely clear for which system parameters the single-kink effects remain important. Incommensurate charge density wave (CDW) conductors are physical systems for which the single-kink effects can be very important. Vibration and transport properties of these systems attract great interest due to numerous peculiarities. The most striking among them are: i) the giant peak of unknown origin in the low frequency infrared (IR) spectrum of a number of inorganic CDW conductors such as $`K_{0.3}MoO_3`$, $`(TaSe_4)_2I`$ and $`TaS_3`$ ; ii) nonlinear conductivity and noise generation when these materials conduct dc current. The aim of the present study is to investigate the impact of both a single kink and a kink lattice on the IR-active phonon spectrum and to specify the range of the model parameters in which the system can be treated in terms of nearly independent kinks rather than in terms of the superstructure associated with the kink lattice. 
I believe that after comparison of the numerical results with the experimental data and proper justification of the model it can serve as a basis for understanding the microscopic nature of the aforementioned peculiar features of CDWs. The investigations were performed in two approaches: i) molecular dynamics (MD) simulation was used for the system to reach an equilibrium state according to the method proposed in Ref., after which all the particles were subjected to a small uniform step-like displacement and the subsequent vibrations were analyzed via Fourier transformation; ii) the eigenvector problem (EVP) was solved in the harmonic approximation to study the vibration spectrum of the system. The kinks in this case were taken into account through expansion of the potential energy around the particle equilibrium positions obtained from MD simulation. ## II Vibration spectrum in the presence of kinks Let us consider a chain of particles of mass $`m`$ and charge $`e`$ with nearest neighbor interaction in the sinusoidal external potential $`V\left(x\right)=-\frac{Va^2}{4\pi ^2}\mathrm{cos}\left(2\pi \frac{x}{a}\right)`$ where $`a`$ is the potential period. In the case of harmonic interparticle interaction the equation of motion for the $`n`$-th particle is $$m\frac{\partial ^2U_n}{\partial t^2}+\gamma \frac{\partial U_n}{\partial t}+K_2(2U_n-U_{n-1}-U_{n+1})+\frac{Va}{2\pi }\mathrm{sin}\left(2\pi \frac{U_n}{a}\right)=eE(t),$$ (1) where $`\gamma `$ is a phenomenological damping and $`E(t)`$ is the external electric field. Suppose the time-dependent position $`U_n`$ of the particle can be represented as $`U_n(t)=na+U_n^0+\delta _n(t),`$ where $`U_n^0`$ is a quasistatic variable describing a shift of the equilibrium position of the particle with respect to the corresponding potential minimum, and $`\delta _n(t)`$ describes a vibration of the particle around the new equilibrium position $`U_n^0`$. Then, assuming $`\delta _n(t)=\delta _n(\omega )\mathrm{exp}(i\omega t)`$ and $`E(t)=E_0\mathrm{exp}(i\omega t)`$, Eq. 
(1) can be split into two equations $$K_2(2U_n^0-U_{n-1}^0-U_{n+1}^0)+\frac{Va}{2\pi }\mathrm{sin}\left(2\pi U_n^0\right)=0,$$ (2) $$\delta _n(\omega )\left[V\mathrm{cos}\left(2\pi U_n^0\right)-\omega ^2+i\omega \gamma \right]+\frac{K_2}{m}(2\delta _n(\omega )-\delta _{n-1}(\omega )-\delta _{n+1}(\omega ))=\frac{e}{m}E_0.$$ (3) Disregarding the trivial case $`U_n^0=0`$, when the number of particles $`N_{part}`$ is equal to the number of potential minima $`N_{pot}`$, Eq. (2) describes a quasistatic kink-like deformation of the chain (since we neglect the dynamical term, we restrict our consideration to standing kinks only). Eq. (3) describes the particle vibration around the new equilibrium position. In the continuum limit Eq. (2) reduces to the sine-Gordon equation with the single-kink solution : $`U_n^0(i)=2a\pi ^{-1}\mathrm{arctan}\left\{\mathrm{exp}\left[\pm \frac{2(n-i)a}{R_k}\right]\right\},`$ where $`R_k=2a\sqrt{\frac{K_2}{V}}`$ can be considered as the kink radius and $`i`$ is the kink position. Substituting this solution into Eq. (3) one can obtain the complex susceptibility $`\chi (\omega )=\frac{1}{E_0}\delta _n(\omega )`$, where the peaks in $`Im\left(\chi (\omega )\right)`$ correspond to resonances $`\omega _r`$ and $`Re\left(\delta _n(\omega _r)\right)`$ corresponds to a suitably normalized eigenvector of the mode at $`\omega _r`$. Fig.1 shows the $`\omega (k)`$ plot without (a) and with (b-c) kinks in the model system. One can see the smearing of the resonances in the presence of a single kink (Fig.1b), while this smearing is gone in the system with the kink lattice (Fig.1c). Instead, phonon band folding due to the kink lattice appears, together with an additional low frequency vibration. The latter is a so-called phason, which is related to translational motion of the kink(s) (domain wall(s)). In the case of a negligible Peierls-Nabarro potential barrier the phason frequency tends to zero. 
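The linear response defined by Eq. (3) is straightforward to evaluate numerically. The following sketch (units $`m=e=a=1`$; the chain length, damping, and frequency grid are arbitrary illustrative choices of ours) solves the cyclic tridiagonal system for a given quasistatic profile $`U_n^0`$:

```python
import numpy as np

def susceptibility(omega, U0, V=1.0, K2=4.0, gamma=0.05, E0=1.0):
    """Average response chi(omega) from Eq. (3); units m = e = a = 1, cyclic chain."""
    N = len(U0)
    diag = V * np.cos(2 * np.pi * U0) - omega ** 2 + 1j * omega * gamma + 2 * K2
    M = np.diag(diag) - K2 * (np.eye(N, k=1) + np.eye(N, k=-1))
    M[0, -1] -= K2
    M[-1, 0] -= K2                          # cyclic boundary conditions
    delta = np.linalg.solve(M, np.full(N, E0, dtype=complex))
    return delta.mean() / E0

def kink(N, i, Rk):
    """Single-kink profile U_n^0 = (2/pi) arctan(exp(2(n - i)/Rk))."""
    return (2 / np.pi) * np.arctan(np.exp(2 * (np.arange(N) - i) / Rk))

omegas = np.linspace(0.4, 1.6, 241)
chi_flat = np.array([susceptibility(w, np.zeros(32)) for w in omegas])
chi_kink = np.array([susceptibility(w, kink(32, 16, 4.0)) for w in omegas])
sigma_kink = omegas * chi_kink.imag     # optical conductivity sigma(omega), cf. text
```

Without a kink the resonance (the extremum of $`Im\chi `$) sits at the bottom of the optical band, $`\omega _0=\sqrt{V}`$; with a kink present, additional low-frequency weight appears, in line with the phason behaviour described above.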
The phason is IR active and looks like a steep increase at $`\omega \to 0`$ in the optical conductivity spectrum $`\sigma \left(\omega \right)=\omega Im\left(\chi (\omega )\right)`$ (see Fig.2). The high frequency peaks in Fig.2 correspond to phonons, the strongest one being related to the in-phase vibration of the particles situated near the bottoms of the potential wells. ## III Similarity between kink and the force constant defect The smearing of the vibrational states in the presence of a single kink shown in Fig.1b suggests that the kink acts like a point defect. Moreover, the particles involved in the kink formation are almost completely eliminated from the high frequency phonon-like normal modes, while these particles obviously possess a higher vibration amplitude (local vibration density) at low frequencies (see Fig.3). This is quantitatively illustrated in Fig.4, where the eigenvectors of the phason mode and of the IR phonon mode are shown. Accordingly, one could try to describe the vibration properties of kinks in terms of localized vibrations around a force constant defect. That means the original incommensurate ($`N_{part}\ne N_{pot}`$) FK model is replaced by a commensurate one (i.e., an ordinary chain of particles with harmonic on-site and interparticle potentials) in which some particle sites possess a defect force constant. The strength of the defect has been determined from the equation 
Note, that the localization length of the gap mode $`S_{gap}`$ (the halfwidth of the peak shown by dashed line in Fig.4) is equal to $`R_k/\sqrt{2}`$ in a wide range of $`R_k`$ values (see insert in Fig.4). Thus, one may consider the system with kinks as a defect, or impurity crystal taking for the description of its vibration properties all the results already known. For instance, it is well understood that $`S_{gap}`$ is determined basically by splitting of the gap mode from the bottom of the optical band $`\omega _0=\sqrt{V}`$ and by the optical bandwidth $`2\sqrt{K_2}`$. One may argue therefore that the similarity between phason and the gap mode eigenvectors and the phason eigenvector itself does not depend on the potential anharmonicity provided that its influence on the above mentioned parameters is small enough. It is thought therefore that the obtained results will be applicable for a more realistic interparticle potential too. From the analogy between the kinks and the defects it follows also that the IR phonon mode intensity will show a linear decrease versus $`n_k`$ ($`n_k=\frac{N_k}{N_{part}}`$ is the kink concentration and $`N_k=|N_{part}N_{pot}|`$ is the number of kinks) for low kink concentration. This indeed does take place in certain range of $`R_k`$ values. ## IV When the kink effects are important Although the N-kink solution of Eq. (2) is also available it is more convenient to approximate it with the sum of single-kink solutions. Our MD study of the ground state of a system consisting of 128 particles arranged in 128-$`N_k`$ potential wells with cyclic boundary conditions showed that even for $`N_k1`$ the kink lattice can be perfectly described as a sum of the single-kink solutions with $`R_k2a\sqrt{\frac{K_2}{V}}`$. Namely, for $`N_k=8`$ and $`K_2=4V`$ ($`R_k^{theor}=4.0a`$) the value of $`R_k^{\mathrm{exp}}`$ $`3.94a`$ has been obtained. 
Similar results have been obtained also for the case when the number of potential wells exceeds that of the particles. The dipole moment spectrum $`I\left(\omega \right)=Im\left[\frac{\sum _n\delta _n(\omega )}{E_0}\right]`$ has been both calculated from (3), substituting $`U_n^0=\underset{i=1}{\overset{N_k}{\sum }}U_n^0(i)`$ with $`R_k=R_k^{\mathrm{exp}}`$, and obtained from MD simulation via the fluctuation-dissipation approach for various values of $`\eta =R_kn_k`$. Both approaches agree rather well even at very low frequencies, although the harmonic approximation obviously fails at $`\omega =0`$. Two examples of the particle arrangement and the corresponding eigenvectors of the IR vibrations are presented in Fig.5. The eigenvectors for $`\eta =0.25`$ can be represented as a superposition of the single-kink eigenvectors, while for $`\eta =0.5`$ they look quite different: even those particles which still occupy the potential wells and are not involved in the kink formation are strongly involved in the characteristic IR vibration (compare (a) and (b) in Fig.5). It should be pointed out that there is no noticeable difference between the commensurate and incommensurate cases (the kink lattice period being equal or not equal to an integer multiple of $`a`$, respectively) if the kink concentration is not too high. Otherwise the difference manifests itself in a small shift of the zero-frequency peak in Fig.2 from its position. I concentrate here on the intensity of the phonon peaks in Fig.2 as a function of the parameter $`\eta `$. The EVP approach (Eq. (3) with various values of $`\frac{K_2}{V}`$ and $`n_k`$) was used for this study. The results are presented in Fig.6. The integrated intensity $`I_\mathrm{\Sigma }=\int I\left(\omega \right)d\omega `$ of the phonon peaks reveals a universal dependence on the parameter $`\eta `$.
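The multi-kink ansatz $`U_n^0=_iU_n^0(i)`$ used above can be written down directly. In the sketch below the equidistant kink centres and the parameter values are illustrative choices of ours (units $`a=1`$):

```python
import numpy as np

def kink_profile(n, centre, Rk, a=1.0):
    """Single-kink solution U_n^0 = (2a/pi) arctan(exp(2(n - centre)a/Rk))."""
    return (2 * a / np.pi) * np.arctan(np.exp(2 * (n - centre) * a / Rk))

def kink_lattice(N, Nk, Rk, a=1.0):
    """Kink-lattice ansatz: sum of Nk equidistant single-kink profiles."""
    n = np.arange(N, dtype=float)
    centres = (np.arange(Nk) + 0.5) * N / Nk
    return sum(kink_profile(n, c, Rk, a) for c in centres)

U0 = kink_lattice(N=128, Nk=8, Rk=4.0)   # N_k = 8, K_2 = 4V  ->  R_k ~ 4a
```

Each kink advances the chain by one potential period, so the resulting profile is a monotone staircase whose total advance is close to $`N_ka`$ (slightly less, because of the tails cut off at the chain ends).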
It was also found that the eigenvector of the strongest IR vibration obeys some sort of scaling invariance: the vectors obtained at different $`n_k`$ but for one and the same $`\eta `$ can be transformed into each other by a proper scaling of $`a`$. Note that the parameter $`\eta `$ is the volume fraction (in the 1D case) occupied by the kinks, and the observed decrease in $`I_\mathrm{\Sigma }`$ at low $`\eta `$ values can be interpreted as a washing out of the high frequency density of states by the gap modes associated with kinks. At higher $`\eta `$, when the kinks form a real lattice and eventually a sinusoidal superstructure due to interaction with each other, the decrease in $`I_\mathrm{\Sigma }(\eta )`$ slows down because the real kink radius cannot exceed one half of the kink lattice period. Indeed, the linear decrease in $`I_\mathrm{\Sigma }`$ shown in Fig.6 ends at a cut-off value of $`\eta \approx 0.4`$ (for $`a=1`$), which implies that the above mentioned restriction on the kink radius is $`R_k\le 0.4k_s^{-1}`$ ($`k_s=n_k`$), where $`k_s`$ is the kink lattice (or superstructure) wave vector measured in units of $`\frac{\pi }{a}`$. Thus, one can display a range of parameters $`k_s\sqrt{\frac{K_2}{V}}\le 0.2`$ in which the single-kink effects are important, i.e. the system cannot be explicitly treated in terms of some sinusoidal superstructure related to the kink lattice. Since the IR eigenvectors have been argued to be not very sensitive to anharmonicity, one might expect that this criterion holds for more realistic potentials too. ## V CDW collective dynamics The low frequency excitation spectrum of the CDW ground state has been widely investigated both theoretically and experimentally (see, for example, reviews and references therein). It is well established that the incommensurate CDW ground state is characterized by two specific collective excitations: the IR active phase mode and the Raman active amplitude mode. 
The frequency of the former, $`\omega _p`$, in the incommensurate CDW conductors is of the order of $`1cm^{-1}`$, while the amplitude mode frequency $`\omega _a`$ is about one to two orders of magnitude higher. These vibrations have been observed experimentally in such model CDW conductors as $`K_{0.3}MoO_3`$, $`(TaSe_4)_2I`$ and $`TaS_3`$ . Besides the phase and the amplitude modes, an additional vibration exhibiting giant IR activity has been observed in all the above mentioned compounds . The frequency of this additional feature in $`(TaSe_4)_2I`$ is about $`38cm^{-1}`$, in between the phase ($`1cm^{-1}`$) and the amplitude ($`90cm^{-1}`$) mode frequencies. Several explanations have been proposed to account for the additional giant IR peak, but the microscopic origin of this vibration is still not clear (see, for instance, the discussions in Refs.1 and 15). In a phenomenological model the additional giant IR peak was thought to result from a bound collective-mode resonance localized around an impurity, but again without specifying the microscopic origin of the model parameters. Below it is shown that the giant IR resonance occurs in the incommensurate CDW system even in the absence of any impurities, provided that the dynamical charge transfer between adjacent CDW periods is taken into account and the CDW possesses a kink lattice structure rather than a sinusoidal one. Since the lattice deformation coupled to the CDW is much smaller than the crystal lattice constant, it is reasonable to describe the CDW system within the FK model. The CDW periods in this case are associated with particles of mass $`m`$ and charge $`e`$ which are placed into the sinusoidal external (crystal lattice) potential $`V\left(x\right)=-\frac{Va^2}{4\pi ^2}\mathrm{cos}\left(2\pi \frac{x}{a}\right)`$. The interparticle distance in the commensurate phase is taken equal to $`2a`$, which means the CDW is formed by dimerization. 
In the case $`2N_{part}\ne N_{pot}`$ one again obtains an incommensurate structure (kink lattice), and the time dependent position $`U_n`$ of the particle can be represented as $`U_n(t)=2na+U_n^0+\delta _n(t)`$. ### A Dynamic charge transfer As it has been demonstrated by Itkis and Brill, spatial redistribution of the charge condensed in the CDW takes place under the action of a static electric field. Obviously, a characteristic time for the charge redistribution or, in other words, for the charge transfer from one CDW period to another is determined by the amplitude mode frequency ($`90cm^{-1}`$). Therefore, at the giant peak frequency the adiabatic condition is fulfilled. To take into account the charge transfer contribution to the IR intensity of any mode, let us suppose that the particle charge in our model is determined as $$\stackrel{~}{e}_n(t)=e(1+\beta (\delta _{n+1}(t)-\delta _{n-1}(t))),$$ (5) which means that the charge is transferred from a region of local compression of the CDW to a region of local dilatation. The factor $`\beta `$ determines the fraction of the particle charge transferred during the vibration. The dipole moment $`P(t)`$ is determined as a sum of the part related to the particle displacements, $`P_0(t)`$, and the part related to the charge transfer between adjacent unit cells, $`P_{ct}(t)`$: $$\begin{array}{c}P(t)=P_0(t)+P_{ct}(t)=\\ \underset{n}{\sum }e\left[\delta _n(t)+\beta \left(U_n^0-U_{n-1}^0\right)(\delta _{n+1}(t)-\delta _{n-1}(t)-\delta _n(t)+\delta _{n-2}(t))\right].\end{array}$$ (6) In the commensurate phase $`U_n^0-U_{n-1}^0=a`$ for all $`n`$, and the charge transfer dipole moment $`P_{ct}(t)`$ vanishes according to (6). In the incommensurate phase the $`P_{ct}(t)`$ value can be rather high. It will be shown that the charge transfer effect is essentially determined by the kink-related disturbance of the periodicity in the particle arrangement. 
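Taking Eq. (6) at face value, the dipole moment can be evaluated for any configuration. The sketch below uses our own simplification of dropping the chain-end terms (adequate for a long chain); for a uniform profile $`U_n^0`$ the charge-transfer factor vanishes termwise, and setting $`\beta =0`$ switches the charge-transfer channel off entirely:

```python
import numpy as np

def dipole(U0, delta, beta, e=1.0):
    """Dipole moment P = P_0 + P_ct of Eq. (6); chain-end terms are dropped
    for simplicity (our own choice)."""
    P0 = e * delta.sum()
    strain = U0[2:-1] - U0[1:-2]                     # U_n^0 - U_{n-1}^0, n = 2..N-2
    ct = delta[3:] - delta[1:-2] - delta[2:-1] + delta[:-3]
    return P0 + e * beta * (strain * ct).sum()
```

For a kink-lattice profile, by contrast, the local strain is concentrated at the kinks, which is what produces the large $`P_{ct}`$ discussed below.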
Using the criterion for the importance of single-kink effects obtained in the preceding section, one can examine whether the kinks are important for the description of the charge density wave conductor $`(TaSe_4)_2I`$. The superstructure wave vector in this system is $`k_s\approx 0.085`$ , $`\sqrt{V/m}`$ can be associated with the giant IR peak frequency $`\omega \approx 0.005eV`$ and $`\sqrt{K_2}`$ can be estimated from the phason dispersion $`\sqrt{K_2/m}<0.001eV`$ . Thus, one obtains $`k_s\sqrt{\frac{K_2}{V}}\ll 1`$, which implies that the kink effects can be important in this compound. Fig.7a shows a fragment of the MD-simulated arrangement of 128 particles over 264 potential wells, i.e. a CDW with superstructure. The 51st particle (marked by an arrow in the figure) is pinned. The conductivity spectra $`I\left(\omega \right)=\omega Im\left[\frac{\sum _n\delta _n(\omega )}{E_0}\right]`$ obtained from MD simulation are shown in Fig.8 for both the pinned and the depinned system. The features of interest are the phase mode (PM) and the peak of the CT mode (charge transfer mode), marked by the PM- and CT-arrows in Fig.8, respectively. The latter peak originates from the vibration with wave vector equal to that of the superstructure. The corresponding eigenvectors are shown in Fig.7b. Taking into account that the CDW internal deformation can be adiabatically accompanied by charge redistribution, one finds that the CT peak acquires giant IR intensity. The conductivity spectra in which the charge transfer effect has been taken into account according to Eqs (5) and (6) are shown in Fig.8 by symbols (depinned chain) and a thin solid line (pinned chain). The phase mode intensity is almost independent of the charge transfer effect, while the CT mode intensity increases by several orders of magnitude. 
Fig.9 shows that the CT mode intensity $`\left(Im\left[\frac{\sum _n\delta _n(\omega )}{E_0}\right]\right)`$ with the charge transfer contribution decreases with the increase of the parameter $`4a\sqrt{\frac{K_2}{V}}`$ and possesses a universal dependence regardless of the superstructure wave vector. Physically this feature can be understood in the following way. The charge transfer dipole moment consists of a sum of elementary dipole moments resulting from the charge transfer over the inter-kink distance. The longer the latter, the higher the elementary dipole moment, but the smaller the number of these dipole moments. Thus, the total charge transfer dipole moment does not depend on the kink concentration (probably unless $`4a\sqrt{\frac{K_2}{V}}<a/n_k`$). On the other hand, this dipole moment strongly depends on the kink-mediated distortion of the chain. The latter decreases with the increase of the kink radius, resulting in the observed decrease in the CT mode intensity in Fig.9. It was experimentally proved that the giant IR peak intensity in $`(TaSe_4)_2I`$ increases with the increase of the sample temperature . This unusual feature can be naturally explained in terms of the charge transfer effect discussed above. Indeed, as is clear from Fig.9, the integrated intensity of the CT mode as a function of the system parameters can be approximated as (see the dashed line in Fig.9) $$I\approx C_0e^2\left[\sqrt{\frac{K_2}{V}}\right]^{-2.25},$$ (7) where $`C_0`$ is a constant. The interparticle force constant obviously depends on the particle charge, $`K_2=e^2K_2^{\prime }`$, since the particles are associated with the charges condensed in the CDW. Then (7) can be rewritten as $$I\approx C_0e^{-0.25}\left[\sqrt{\frac{K_2^{\prime }}{V}}\right]^{-2.25}.$$ (8) Thus, despite the decrease of the particle charge (the CDW amplitude), the CT mode integrated intensity increases upon approaching $`T_p\approx 261K`$ from below! 
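The step from (7) to (8) is a direct substitution: with $`K_2=e^2K_2^{\prime }`$ one has $`e^2(e^2)^{-2.25/2}=e^{-0.25}`$. A numerical sanity check of this bookkeeping (all values are arbitrary placeholders):

```python
def intensity_7(e, K2, V, C0=1.0):
    # Eq. (7): I ~ C0 * e^2 * (K2/V)^(-2.25/2)
    return C0 * e ** 2 * (K2 / V) ** (-2.25 / 2)

def intensity_8(e, K2p, V, C0=1.0):
    # Eq. (8): the same quantity after substituting K2 = e^2 * K2'
    return C0 * e ** (-0.25) * (K2p / V) ** (-2.25 / 2)
```

Since the exponent $`-0.25`$ of $`e`$ is negative, a decreasing charge indeed increases the predicted intensity, as stated in the text.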
Due to short-range order the particle charge (CDW amplitude) remains finite even at very high temperatures and accounts for the high CT mode intensity well above $`T_p`$. Note one more peculiarity of the CT peak. It does not depend on the number of particles in the coherent CDW domain or, in other words, on the effective mass of the CDW condensate. The latter can explain why the corresponding frequency has nearly the same value in such different compounds as $`K_{0.3}MoO_3`$ and $`(TaSe_4)_2I`$ . ## VI Summary The kink-like solitons in the incommensurate Frenkel-Kontorova model are investigated with regard to their impact on the vibration spectrum. It is found that the IR phonon intensity possesses a universal dependence on the product of the kink radius and the kink concentration, suggesting some sort of scaling invariance for the corresponding eigenvector. A model accounting for the giant IR peak in the incommensurate inorganic CDW conductors is proposed. It is shown that the giant IR peak is related to the fundamental vibration with the wave vector equal to that of the superstructure, and that the giant IR intensity is caused by the dynamical charge transfer accompanying the CDW internal motion. ## VII Acknowledgments This work was partially supported by the Russian Ministry of Science through the program ”Fundamental Spectroscopy”. FIGURE CAPTIONS Fig.1. Dispersion of the vibration band obtained via the EVP solution (system of linear equations (3)) for the FK model containing 64 particles arranged over: (a) 64 potential wells (no kinks); (b) 63 potential wells (one kink); (c) 56 potential wells (four kinks). The dark regions correspond to the higher vibration amplitude of particles excited by the external field $`E=E_0\mathrm{cos}(kn)\mathrm{cos}(\omega t)`$. Fig.2. Conductivity spectra of the FK model with one kink. 
(1) is the calculated spectrum; (3) is that obtained by MD simulation (32 particles with cyclic boundary conditions) for $`K_2=4V,`$ $`\sqrt{V}=72`$ $`arb.un`$; (2) is the spectrum corresponding to the force constant defect $`\mathrm{\Delta }V=4.1231V`$ (see text) in the position of particle no.16. Fig.3. MD study of the local density-of-states distribution over the particles in the commensurate (a) and one-kink (b) FK model. The particles were initially subjected to random displacements and the temporal evolution of the spatial harmonics was analysed via Fourier transformation. The dark regions correspond to the maxima in the Fourier spectrum. Fig.4. The kink and the phonon (the strongest peak in Fig.1) eigenvectors obtained as described in the caption to Fig.1. The symbols in the inset show the dependence of the gap-mode radius on the kink radius $`R_k=2\sqrt{\frac{K_2}{V}}`$. Fig.5. (a) The particle arrangement in the FK model of 128 particles (shown by symbols) in 120 potential wells (solid line). (b) The eigenvectors of the kink-like (1,3) and phonon-like (2,4) modes. $`K_2=4V`$ (solid symbols) and $`K_2=16V`$ (open symbols), $`\sqrt{V}=72`$ $`arb.un`$. Fig.6. Integrated intensity of the phonon-like modes as a function of $`\eta =R_kn_k`$, calculated using Eq. (4) for the FK model of 128 particles arranged over: (1) 112 potential wells ($`n_k=1/8`$); (2) 120 potential wells ($`n_k=1/16`$); and (3) 124 potential wells ($`n_k=1/32`$). Fig.7. (a) Fragment of the particle arrangement obtained via MD simulation in the FK model containing 128 particles arranged over 272 potential wells ($`n_k=1/16`$, see text) for $`K_2=16V`$. Particle No.51 is pinned by an extra local potential. (b) Eigenvectors of the phason mode without pinning (dotted line) and with pinning (thick solid line), and those of the CT mode (see Fig.2) shown by a thin solid line for both the pinned and the depinned chain. Fig.8. IR conductivity spectra of the FK model (see caption to Fig.7), calculated using Eqs. 
(3)-(5) for $`4\sqrt{\frac{K_2}{V}}=8`$ ($`a=1`$). The thin solid line is for $`\beta =0`$ and the thick solid line for $`\beta =30`$ in the case of the pinned chain. The dashed line is for $`\beta =0`$ and the symbols are for $`\beta =30`$ in the case of the depinned chain. PM marks the phase modes and CT the modes whose intensity may contain the charge transfer contribution. For $`\beta =30`$, $`0.03e`$ of the particle charge is transferred during the CT mode vibration, while for $`\beta =0`$ it is $`0`$. Fig.9. Dependence of the CT mode integrated intensity on the potential parameters of the FK model containing 128 particles arranged over 264 minima ($`n_k=1/32`$): (1) is for $`\beta =30`$ and (4) is for $`\beta =0`$; over 272 minima ($`n_k=1/16`$): (2) is for $`\beta =30`$ and (5) is for $`\beta =0`$. (3) is the dependence $`y=0.45/x^{2.25}`$. The arrow shows the parameters for the spectra presented in Fig.8.
# PHOTOMETRY AND PHOTOMETRIC REDSHIFTS OF GALAXIES IN THE HUBBLE DEEP FIELD SOUTH NICMOS FIELD We present an electronic catalog of infrared and optical photometry and photometric redshifts of 323 galaxies in the Hubble Deep Field South NICMOS field at http://www.ess.sunysb.edu/astro/hdfs/home.html. The analysis is based on infrared images obtained with the Hubble Space Telescope using the Near Infrared Camera and Multi-Object Spectrograph and the Space Telescope Imaging Spectrograph together with optical images obtained with the Very Large Telescope. The infrared and optical photometry is measured by means of a new quasi-optimal photometric technique that fits model spatial profiles of the galaxies determined by Pixon image reconstruction techniques to the images. In comparison with conventional methods, the new technique provides higher signal-to-noise-ratio measurements and accounts for uncertainty correlations between nearby, overlapping neighbors. The photometric redshifts are measured by means of our redshift likelihood technique, incorporating six spectrophotometric templates which, by comparison with spectroscopic redshifts of galaxies identified in the Hubble Deep Field North, are known to provide redshift measurements accurate to within an RMS relative uncertainty of $`\mathrm{\Delta }z/(1+z)<0.1`$ at all redshifts $`z<6`$. The analysis reaches a peak $`H`$-band sensitivity threshold of $`AB(16000)=28.3`$ and covers 1.02 arcmin<sup>2</sup> to $`AB(16000)=27`$, 1.27 arcmin<sup>2</sup> to $`AB(16000)=26`$, and 1.44 arcmin<sup>2</sup> to $`AB(16000)=25`$. The analysis identifies galaxies at redshifts ranging from $`z\approx 0`$ through $`z>10`$, including 17 galaxies of redshift $`5<z<10`$ and five candidate galaxies of redshift $`z>10`$. 
(The redshift likelihood functions are given, allowing high-redshift galaxies with sharply peaked redshift likelihood functions to be distinguished from candidate high-redshift galaxies with broad or bimodal redshift likelihood functions.) The analysis can also be used to establish firm upper limits to the surface densities of galaxies as a function of brightness and redshift to redshifts as large as $`z=14`$.
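The template-fitting idea behind a redshift likelihood technique of this kind can be sketched in a few lines (a toy illustration only, not the authors' pipeline: the band wavelengths, the single Gaussian-bump template, and the photometric errors below are all invented):

```python
import numpy as np

# Toy redshift-likelihood estimate: redshift a template, synthesize band
# fluxes, fit the amplitude analytically at each trial z, and form
# L(z) ~ exp(-chi2(z)/2). A sharply peaked L(z) means a secure redshift.

bands = np.array([4500.0, 6000.0, 8000.0, 16000.0])   # band wavelengths (A)
err = np.array([0.05, 0.05, 0.05, 0.05])              # photometric errors

def template_flux(z):
    """Band fluxes of a template with a Gaussian bump at rest-frame 4000 A."""
    rest = bands / (1.0 + z)
    return np.exp(-((rest - 4000.0) / 1500.0) ** 2)

obs = template_flux(1.0)          # noiseless 'observed' galaxy at z_true = 1

def chi2(z):
    t = template_flux(z)
    w = 1.0 / err ** 2
    a = np.sum(w * obs * t) / np.sum(w * t * t)   # analytic best amplitude
    return np.sum(w * (obs - a * t) ** 2)

zgrid = np.linspace(0.0, 6.0, 601)
like = np.exp(-0.5 * np.array([chi2(z) for z in zgrid]))
z_best = zgrid[np.argmax(like)]
print(f"best photometric redshift: z = {z_best:.2f}")
```

In a real analysis one would marginalize the likelihood over several templates and retain the full `like` array, which is exactly what makes broad or bimodal likelihood functions distinguishable from sharply peaked ones.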
## 1 Introduction The description of the dynamics in the high density parton regime is one of the main open questions in the theory of the strong interactions. While in the region of moderate Bjorken $`x`$ ($`x>10^{-2}`$) the well-established methods of operator product expansion and renormalization group equations have been applied successfully, the small $`x`$ region still lacks a consistent theoretical framework (for a review see ). Basically, the use of the DGLAP equations , which reflect the dynamics at moderate $`x`$, is questionable in the region of small values of $`x`$. The traditional procedure of using the DGLAP equations to calculate the gluon distribution at small $`x`$ and large momentum transfer $`Q^2`$ is by summing the leading powers of $`\alpha _slnQ^2ln(\frac{1}{x})`$, where $`\alpha _s`$ is the strong coupling constant, known as the double-leading-logarithm approximation (DLLA). In axial gauges, these leading double logarithms are generated by ladder diagrams in which the emitted gluons have strongly ordered transverse momenta, as well as strongly ordered longitudinal momenta. Therefore the DGLAP approach must break down at small values of $`x`$: firstly, because this framework does not account for the contributions to the cross section which are leading in $`\alpha _sln(\frac{1}{x})`$ ; secondly, because the parton densities become large and there is a need to develop a high density formulation of QCD . There has been intense debate on the extent to which non-conventional QCD evolution is required by the deep inelastic $`ep`$ HERA data . Good fits to the $`F_2`$ data for $`Q^2\ge 1GeV^2`$ can be obtained from distinct approaches, which consider DGLAP and/or BFKL evolution equations . In particular, the conventional perturbative QCD approach is very successful in describing the main features of the HERA data and, hence, any signal of non-conventional QCD dynamics is hidden or mimicked by a strong background of conventional QCD evolution. 
Our goal in this paper is to study the role of the shadowing corrections (SC) in $`F_2`$ and its slope. In the last twenty years, several authors (see for some phenomenological analyses) have performed detailed studies of the shadowing effect, although without strong experimental evidence of this effect in the data, mainly because the main observable, the $`F_2`$ structure function, is largely insensitive to the effects in the gluon distribution. Recently we have estimated the shadowing corrections to $`F_2^c`$ and $`F_L`$ in the HERA kinematic region using the eikonal approach . These observables are directly dependent on the behavior of the gluon distribution. We have shown that the shadowing corrections to these observables are important; however, the experimental errors in these observables are still too large to allow a discrimination between our predictions and the DGLAP predictions. Here we estimate the shadowing corrections to the scaling violations of the proton structure function. Basically, there are two possibilities to estimate the SC using the eikonal approach. We can calculate damping factors, which represent the ratio between the observable with and without shadowing, and subsequently apply these factors to the conventional DGLAP predictions. This procedure was used in refs. , also considering a two-radius model for the nucleon. In this paper we propose a second procedure to estimate the SC in DIS, where the observables are directly calculated in the eikonal approach and the distinct contributions to the SC are analysed within the same approach, reducing the number of free parameters. A more detailed discussion of the distinct procedures is given in section II. The recent HERA data on the slope of the $`F_2`$ structure function present, at small values of $`x`$ and $`Q^2`$, a behavior different from that predicted by the standard DGLAP framework. 
Basically, the HERA data present a ‘turn over’ of the slope around $`x\approx 10^{-4}`$, which cannot be described using the GRV94 parametrization and the DGLAP evolution equations. We show that this behavior is predicted by the eikonal approach considering the shadowing corrections in the gluon and quark sectors. The value of the shadowing corrections depends crucially on the size of the target $`R`$. The value of the effective radius $`R`$ depends on how the gluon ladders couple to the proton; i.e., on how the gluons are distributed within the proton . In this paper we estimate the $`R`$ dependence of the SC. We show that the HERA data on $`F_2`$ and its slope can be described consistently using $`R^2=5GeV^{-2}`$. This value agrees with the HERA results on diffractive $`J/\mathrm{\Psi }`$ photoproduction . The steep increase of the gluon distribution predicted by the DGLAP and BFKL equations at high energies would eventually violate the Froissart bound , which restricts the rate of growth of the total cross section to $`ln^2(\frac{1}{x})`$. This bound may not be applicable in the case of particles off the mass shell , but in this paper we present an approach to this problem. Basically, we estimate a limit below which the unitarity corrections may be disregarded and show that the recent HERA data surpass this boundary at small values of $`x`$ and $`Q^2`$, as predicted in . This paper is organized as follows. In section II, the eikonal approach and the shadowing corrections for $`F_2`$ and its slope are considered. We estimate the distinct contributions to the SC and demonstrate that the $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$ data may be described considering the shadowing in the gluon and quark sectors. In section III, we estimate the $`R`$ dependence of the shadowing corrections. In section IV, we present a boundary related to unitarity for $`F_2`$ and $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$ and show that the actual HERA data for small $`x`$ and $`Q^2`$ surpass this boundary. 
Therefore, the shadowing corrections should be considered in the calculation of the observables in this kinematic region. Finally, in section V, we present a summary of our results. ## 2 The Shadowing Corrections in pQCD Deep inelastic scattering (DIS) is usually described in a frame where the proton moves very fast. In this case the shadowing effect is a result of an overlap of the parton clouds in the longitudinal direction. Another interpretation of DIS is the intuitive view proposed by V. N. Gribov many years ago for DIS on nuclear targets . Gribov’s assumption is that at small values of $`x`$ the virtual photon fluctuates into a $`q\overline{q}`$ pair well before the interaction with the target, and this system then interacts with the target. This formalism has been established in recent years as a useful tool for calculating deep inelastic and related diffractive cross sections for $`\gamma ^{}p`$ scattering . The Gribov factorization follows from the fact that the lifetime of the $`q\overline{q}`$ fluctuation is much larger than the time of its interactions with partons. According to the uncertainty principle, the fluctuation time is $`\frac{1}{mx}`$, where $`m`$ denotes the target mass. The space-time picture of DIS in the target rest frame can thus be viewed as the decay of the virtual photon at high energy (small $`x`$) into a quark-antiquark pair long before the interaction with the target. The $`q\overline{q}`$ pair subsequently interacts with the target. In the small $`x`$ region, where $`x\ll \frac{1}{2mR}`$ ($`R`$ is the size of the target), the $`q\overline{q}`$ pair crosses the target with fixed transverse distance $`r_t`$ between the quarks. This allows one to factorize the total cross section into the wave function of the photon and the interaction cross section of the quark-antiquark pair with the target. The photon wave function is calculable and the interaction cross section is modeled. 
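To make the scales concrete, here is a rough numerical estimate of the fluctuation length $`l_c\sim 1/(mx)`$ quoted above (the proton mass is standard; the proton-size value used for comparison is an assumed nominal number):

```python
# Fluctuation length l_c ~ 1/(m x), converted to fm with hbar*c ~ 0.197 GeV fm,
# compared with a nominal proton size R_p (assumed value).
hbar_c = 0.197   # GeV fm
m_p = 0.938      # GeV, proton mass
R_p = 0.9        # fm, nominal proton radius (assumption for illustration)

for x in (1e-2, 1e-3, 1e-4):
    l_c = hbar_c / (m_p * x)
    print(f"x = {x:.0e}: l_c ~ {l_c:,.0f} fm  (l_c / R_p ~ {l_c / R_p:,.0f})")
```

Already at $`x=10^{-2}`$ the pair forms well outside the target, which is the regime where the factorized dipole picture applies.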
Therefore the proton structure function is given by $`F_2(x,Q^2)={\displaystyle \frac{Q^2}{4\pi \alpha _{em}}}{\displaystyle \int dzd^2r_t|\mathrm{\Psi }(z,r_t)|^2\sigma ^{q\overline{q}}(z,r_t)},`$ (1) where $`|\mathrm{\Psi }(z,r_t)|^2={\displaystyle \frac{6\alpha _{em}}{(2\pi )^2}}{\displaystyle \sum _i^{n_f}}e_f^2\{[z^2+(1-z)^2]ϵ^2K_1^2(ϵr_t)+m_f^2K_0^2(ϵr_t)\},`$ (2) $`\alpha _{em}`$ is the electromagnetic coupling constant, $`ϵ^2=z(1-z)Q^2+m_f^2`$, $`m_f`$ is the quark mass, $`n_f`$ is the number of active flavors, $`e_f^2`$ is the square of the parton charge (in units of $`e`$), $`K_{0,1}`$ are the modified Bessel functions and $`z`$ is the fraction of the photon’s light-cone momentum carried by one of the quarks of the pair. In the leading log$`(1/x)`$ approximation we can neglect the change of $`z`$ during the interaction and describe the cross section $`\sigma ^{q\overline{q}}(z,r_t^2)`$ as a function of the variable $`x`$. Considering only light quarks ($`i=u,d,s`$), $`F_2`$ can be expressed as $`F_2(x,Q^2)={\displaystyle \frac{1}{4\pi ^3}}{\displaystyle \sum _{u,d,s}}e_f^2{\displaystyle \int _{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}}}{\displaystyle \frac{d^2r_t}{r_t^4}}\sigma ^{q\overline{q}}(x,r_t).`$ (3) We have introduced a cutoff in the upper limit of the integration in order to eliminate the long distance (non-perturbative) contribution from our calculations. In this paper we assume $`Q_0^2=0.4GeV^2`$, as in our previous works on this subject. We estimate the shadowing corrections considering the eikonal approach , which is formulated in impact parameter space. Here we review the main assumptions of the eikonal approach. 
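As a numerical sketch of the squared wave function in Eq. (2) (the overall constant $`6\alpha _{em}e_f^2/(2\pi )^2`$ is dropped; SciPy's modified Bessel functions are used), one can verify that the $`K_1`$ tail suppresses large dipoles at large $`Q^2`$:

```python
import numpy as np
from scipy.special import k0, k1

def psi2(z, r_t, Q2, m_f=0.0):
    """|Psi(z, r_t)|^2 of Eq. (2), up to the constant 6*alpha_em*e_f^2/(2 pi)^2.
    r_t is in GeV^-1, Q2 and m_f^2 in GeV^2."""
    eps = np.sqrt(z * (1.0 - z) * Q2 + m_f**2)
    return ((z**2 + (1.0 - z)**2) * eps**2 * k1(eps * r_t)**2
            + m_f**2 * k0(eps * r_t)**2)

# K_1(eps*r_t) falls off exponentially for r_t >> 1/eps, so at large Q^2
# only small dipoles contribute appreciably:
Q2 = 10.0
print(psi2(0.5, 0.1, Q2) > psi2(0.5, 5.0, Q2))   # small dipole dominates
```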
In the impact parameter representation, the scattering amplitude $`A(s,t)`$, where $`t=-q_t^2`$ is the momentum transfer squared, is given by $`a(s,b_t)={\displaystyle \frac{1}{2\pi }}{\displaystyle \int d^2q_te^{i\stackrel{}{q_t}\cdot \stackrel{}{b_t}}A(s,t)}.`$ (4) The total cross section is written as $`\sigma _{tot}(s)=2{\displaystyle \int d^2b_tIma(s,b_t)},`$ (5) and the unitarity constraint reads $`2Ima(s,b_t)=|a(s,b_t)|^2+G_{in}(s,b_t)`$ (6) at fixed $`b_t`$, where $`G_{in}`$ is the sum over all inelastic channels. At high energies the general solution of Eq. (6) is $`a(s,b_t)=i\left[1-e^{-\frac{\mathrm{\Omega }(s,b_t)}{2}}\right],`$ (7) where the opacity $`\mathrm{\Omega }(s,b_t)`$ is a real arbitrary function, which is modeled in the eikonal approach. Using the $`s`$-channel unitarity constraint (7) in expression (3), the $`F_2`$ structure function can be written in the eikonal approach as $`F_2(x,Q^2)={\displaystyle \frac{1}{2\pi ^3}}{\displaystyle \sum _{u,d,s}}e_f^2{\displaystyle \int _{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}}}{\displaystyle \frac{d^2r_t}{r_t^4}}{\displaystyle \int d^2b_t\{1-e^{-\frac{1}{2}\mathrm{\Omega }_{q\overline{q}}(x,r_t,b_t)}\}},`$ (8) where the opacity $`\mathrm{\Omega }_{q\overline{q}}(x,r_t,b_t)`$ describes the interaction of the $`q\overline{q}`$ pair with the target. In the region where $`\mathrm{\Omega }_{q\overline{q}}`$ is small $`(\mathrm{\Omega }_{q\overline{q}}\ll 1)`$ the $`b_t`$ dependence can be factorized as $`\mathrm{\Omega }_{q\overline{q}}=\overline{\mathrm{\Omega }_{q\overline{q}}}S(b_t)`$ , with the normalization $`\int d^2b_tS(b_t)=1`$. The eikonal approach assumes that this factorization of the $`b_t`$ dependence, which is valid in the region where $`\mathrm{\Omega }_{q\overline{q}}`$ is small, holds in the whole kinematical region. 
The main assumption of the eikonal approach in pQCD is the identification of the opacity $`\overline{\mathrm{\Omega }_{q\overline{q}}}`$ with the gluon distribution. The opacity is given by $`\overline{\mathrm{\Omega }_{q\overline{q}}}={\displaystyle \frac{\alpha _s}{3}}\pi ^2r_t^2xG(x,Q^2),`$ (9) where $`xG(x,Q^2)`$ is the gluon distribution. Therefore the behavior of the $`F_2`$ structure function (8) in the small-$`x`$ region is mainly determined by the behavior of the gluon distribution in this region. The use of the Gaussian parametrization for the nucleon profile function, $`S(b_t)=\frac{1}{\pi R^2}e^{-\frac{b_t^2}{R^2}}`$, where $`R`$ is a free parameter, simplifies the calculations. In general this parameter is identified with the proton radius. However, $`R`$ is associated with the spatial gluon distribution within the proton, which may be smaller than the proton radius (see the discussion in the next section). Using expression (9) in (8) and performing the integral over $`b_t`$, the master equation for $`F_2`$ is obtained: $`F_2(x,Q^2)={\displaystyle \frac{2R^2}{3\pi ^2}}{\displaystyle \sum _{u,d,s}}e_f^2{\displaystyle \int _{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}}}{\displaystyle \frac{d^2r_t}{\pi r_t^4}}\{C+ln(\kappa _q(x,r_t^2))+E_1(\kappa _q(x,r_t^2))\},`$ (10) where $`C`$ is the Euler constant, $`E_1`$ is the exponential integral function, and $`\kappa _q(x,r_t^2)=\frac{\alpha _s}{3R^2}\pi r_t^2xG(x,\frac{1}{r_t^2})`$. If equation (10) is expanded for small $`\kappa _q`$, the first term (Born term) corresponds to the usual DGLAP equation in the small $`x`$ region, while the other terms take into account the shadowing corrections. The slope of the $`F_2`$ structure function in the eikonal approach follows straightforwardly from expression (10). 
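The statement about the Born term can be checked numerically: since $`E_1(\kappa )=-C-ln\kappa +\kappa -\kappa ^2/4+\mathrm{}`$, the bracket in Eq. (10) reduces to $`\kappa _q`$ at small $`\kappa _q`$, and the higher terms screen it. A quick sketch:

```python
import numpy as np
from scipy.special import exp1   # E_1(x), the exponential integral

C = np.euler_gamma               # the Euler constant C of Eq. (10)

def gm_bracket(kappa):
    """C + ln(kappa) + E_1(kappa): the Glauber-Mueller factor of Eq. (10)."""
    return C + np.log(kappa) + exp1(kappa)

for kappa in (0.01, 0.1, 1.0):
    full = gm_bracket(kappa)
    # the Born term of the small-kappa expansion is just kappa itself:
    print(f"kappa = {kappa}: full = {full:.4f}, Born = {kappa}, "
          f"suppression = {1.0 - full / kappa:.1%}")
```

The suppression relative to the Born term grows with $`\kappa _q`$, which is exactly the shadowing correction discussed in the text.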
We obtain $`{\displaystyle \frac{dF_2(x,Q^2)}{dlogQ^2}}={\displaystyle \frac{2R^2Q^2}{3\pi ^2}}{\displaystyle \sum _{u,d,s}}e_f^2\{C+ln(\kappa _q(x,\frac{1}{Q^2}))+E_1(\kappa _q(x,\frac{1}{Q^2}))\}.`$ (11) Expressions (10) and (11) predict the behavior of the shadowing corrections to $`F_2`$ and its slope, considering the eikonal approach for the interaction of the $`q\overline{q}`$ pair with the target. In this case we are calculating the SC associated with the passage of the $`q\overline{q}`$ pair through the target. In what follows we denote this contribution as the quark sector contribution to the SC. The behavior of $`F_2`$ and its slope is determined by the gluon distribution used as input in (10) and (11). In general, it is assumed that the gluon distribution is described by a parametrization of the parton distributions (for example GRV, MRS, CTEQ) . In this case the shadowing in the gluon distribution is not included explicitly. In the general case we must also estimate the shadowing corrections to the gluon distribution, i.e. in both the quark and the gluon sectors. We must then estimate the SC for the gluon distribution using the eikonal approach, similarly to the $`F_2`$ case. This was done in a previous work, and here we present only the main steps of the approach. The gluon distribution can be obtained in the target rest frame considering the decay of a virtual gluon at high energy (small $`x`$) into a gluon-gluon pair long before the interaction with the target. The $`gg`$ pair subsequently interacts with the target, with the transverse distance $`r_t`$ between the gluons assumed fixed. 
In this case the cross section for the absorption of a gluon $`g^{}`$ with virtuality $`Q^2`$ can be written as $`\sigma ^{g^{}+\mathrm{nucleon}}(x,Q^2)={\displaystyle \int _0^1}dz{\displaystyle \int \frac{d^2r_t}{\pi }|\mathrm{\Psi }_t^g^{}(Q^2,r_t,x,z)|^2\sigma ^{gg+\mathrm{nucleon}}(z,r_t^2)},`$ (12) where $`z`$ is the fraction of energy carried by the gluon and $`\mathrm{\Psi }_t^g^{}`$ is the wave function of the transversely polarized gluon in the virtual probe. Furthermore, $`\sigma ^{gg+\mathrm{nucleon}}(z,r_t^2)`$ is the cross section for the interaction of the $`gg`$ pair with the nucleon. Considering $`s`$-channel unitarity and the eikonal model, equation (12) can be written as $`\sigma ^{g^{}+\mathrm{nucleon}}(x,Q^2)={\displaystyle \int _0^1}dz{\displaystyle \int \frac{d^2r_t}{\pi }\int \frac{d^2b_t}{\pi }|\mathrm{\Psi }_t^g^{}(Q^2,r_t,x,z)|^2\left(1-e^{-\frac{1}{2}\overline{\mathrm{\Omega }_{gg}}S(b_t)}\right)},`$ where the factorization of the $`b_t`$ dependence of the opacity $`\mathrm{\Omega }_{gg}(x,r_t,b_t)`$ was assumed. Using the relation $`\sigma ^{g^{}+\mathrm{nucleon}}(x,Q^2)=\frac{4\pi ^2\alpha _s}{Q^2}xG(x,Q^2)`$ and the expression for the wave function $`\mathrm{\Psi }^g^{}`$ calculated previously, the Glauber-Mueller formula for the gluon distribution is obtained: $`xG(x,Q^2)={\displaystyle \frac{4}{\pi ^2}}{\displaystyle \int _x^1}{\displaystyle \frac{dx^{}}{x^{}}}{\displaystyle \int _{\frac{4}{Q^2}}^{\mathrm{}}}{\displaystyle \frac{d^2r_t}{\pi r_t^4}}{\displaystyle \int _0^{\mathrm{}}}{\displaystyle \frac{d^2b_t}{\pi }}\mathrm{\hspace{0.17em}2}\left[1-e^{-\frac{1}{2}\sigma _N^{gg}(x^{},\frac{r_t^2}{4})S(b_t)}\right],`$ (13) where $`\overline{\mathrm{\Omega }_{gg}}=\sigma _N^{gg}`$ describes the interaction of the $`gg`$ pair with the target. 
Using the Gaussian parametrization for the nucleon profile function and performing the integral over $`b_t`$, the master equation for the gluon distribution is obtained: $`xG(x,Q^2)={\displaystyle \frac{2R^2}{\pi ^2}}{\displaystyle \int _x^1}{\displaystyle \frac{dx^{}}{x^{}}}{\displaystyle \int _{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}}}{\displaystyle \frac{d^2r_t}{\pi r_t^4}}\{C+ln(\kappa _G(x^{},r_t^2))+E_1(\kappa _G(x^{},r_t^2))\},`$ (14) where $`\kappa _G(x,r_t^2)=\frac{3\alpha _s}{2R^2}\pi r_t^2xG(x,\frac{1}{r_t^2})`$. Again, if equation (14) is expanded for small $`\kappa _G`$, the first term (Born term) corresponds to the usual DGLAP equation in the small $`x`$ region, while the other terms take into account the shadowing corrections. Expressions (10), (11) and (14) are correct in the double leading logarithmic approximation (DLLA). As shown in previous work, the DLLA does not work well in the accessible kinematic region ($`Q^2>0.4GeV^2`$ and $`x>10^{-6}`$). Consequently, a more realistic approach must be considered to calculate the observables. The subtraction of the Born term and the addition of the GRV parametrization were proposed for the $`F_2`$ and $`xG`$ cases. In these cases we have $`F_2(x,Q^2)=F_2(x,Q^2)\text{[Eq. (}\text{10}\text{)]}-F_2(x,Q^2)\text{[Born]}+F_2(x,Q^2)\text{[GRV]},`$ (15) and $`xG(x,Q^2)=xG(x,Q^2)\text{[Eq. (}\text{14}\text{)]}-xG(x,Q^2)\text{[Born]}+xG(x,Q^2)\text{[GRV]},`$ (16) where the Born term is the first term in the expansion in $`\kappa _q`$ and $`\kappa _G`$ of equations (10) and (14), respectively. Here we apply this procedure to the $`F_2`$ slope. In this case $`{\displaystyle \frac{dF_2(x,Q^2)}{dlogQ^2}}={\displaystyle \frac{dF_2(x,Q^2)}{dlogQ^2}}\text{[Eq. (}\text{11}\text{)]}-{\displaystyle \frac{dF_2(x,Q^2)}{dlogQ^2}}\text{[Born]}+{\displaystyle \frac{dF_2(x,Q^2)}{dlogQ^2}}\text{[GRV]},`$ (17) where the Born term is the first term in the expansion in $`\kappa _q`$ of equation (11). 
The last term is associated with the traditional DGLAP framework, which at small values of $`x`$ predicts $`{\displaystyle \frac{dF_2(x,Q^2)}{dlogQ^2}}={\displaystyle \frac{10\alpha _s(Q^2)}{9\pi }}{\displaystyle \int _0^{1-x}}dzP_{qg}(z){\displaystyle \frac{x}{1-z}}g({\displaystyle \frac{x}{1-z}},Q^2),`$ (18) where $`\alpha _s(Q^2)`$ is the running coupling constant and the splitting function $`P_{qg}(z)`$ gives the probability of finding a quark with momentum fraction $`z`$ inside a gluon. This equation describes the scaling violations of the proton structure function in terms of the gluon distribution. We use the GRV parametrization as input in expression (18). In the general approach proposed in this paper we use the solution of equation (16) as input in the first terms of (15) and (17). As expression (16) estimates the gluon shadowing, the use of this distribution in expressions (15) and (17), which consider the contribution to the SC associated with the passage of the $`q\overline{q}`$ pair through the target, allows one to estimate the SC in both sectors (quark + gluon) of the observables. Our goal is the discrimination of the distinct contributions to the SC in $`F_2`$ and $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$. In Fig. 1 we present our results for the $`F_2`$ structure function as a function of the variable $`ln(\frac{1}{x})`$ for different virtualities. We have used $`R^2=5GeV^{-2}`$ in these calculations. In the next section the $`R`$ dependence of our results is analysed. We present our results using expression (15) (quark sector) and using the solution of equation (16) as input in the first term of (15) (quark + gluon sector). The predictions of the GRV parametrization are also shown. We consider the HERA data at low $`Q^2`$, since for $`Q^2>6GeV^2`$ the SC start to decrease (for a discussion of the SC to $`F_2`$ considering the quark sector see ). 
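A minimal numerical version of the convolution in Eq. (18) looks as follows (an illustration only: the gluon distribution below is an invented toy standing in for GRV, and a fixed coupling is used):

```python
import numpy as np

# Toy numerical version of Eq. (18):
#   dF2/dlogQ2 = (10 alpha_s / 9 pi) * int_0^{1-x} dz P_qg(z) [x/(1-z)] g(x/(1-z))
# Note that [x/(1-z)] g(x/(1-z)) is just xg evaluated at y = x/(1-z).
def P_qg(z):
    return 0.5 * (z**2 + (1.0 - z)**2)   # LO splitting function (T_R = 1/2)

def xg(y):
    return y**(-0.3) * (1.0 - y)**5      # toy xg(y), steep at small y (NOT GRV)

def dF2_dlogQ2(x, alpha_s=0.3):
    z = np.linspace(0.0, 1.0 - x, 2001)
    f = P_qg(z) * xg(x / (1.0 - z))
    dz = z[1] - z[0]
    integral = (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * dz   # trapezoid rule
    return 10.0 * alpha_s / (9.0 * np.pi) * integral

for x in (1e-2, 1e-3, 1e-4):
    print(f"x = {x:.0e}: dF2/dlogQ2 ~ {dF2_dlogQ2(x):.3f}")
```

With a steep input gluon the slope grows monotonically toward small $`x`$, which is the DGLAP behavior that the shadowing corrections of Eq. (17) then tame.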
We can see that at small values of $`Q^2`$ the predictions for $`F_2`$ considering the quark and the quark-gluon sectors are approximately identical. However, for larger values of $`Q^2`$ the predictions of the quark-gluon sector disagree with the H1 data . Therefore, the contribution of the gluon shadowing to $`F_2`$ in the eikonal approach overestimates the shadowing corrections at large $`Q^2`$ values. In Fig. 2 we present our results for the SC in $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$ as a function of $`x`$. The ZEUS data points correspond to different $`x`$ and $`Q^2`$ values. The $`(x,Q^2)`$ points are averaged values obtained from each of the experimental data distribution bins. Only the data points with $`<Q^2>\ge 0.52GeV^2`$ and $`x<10^{-1}`$ were used here. The SC are estimated considering expression (17) (quark sector) and using the solution of equation (16) as input in the first term of (17) (quark + gluon sector). Moreover, the predictions of the traditional DGLAP framework, which at small values of $`x`$ are given by expression (18), are also presented. We can see that the DGLAP predictions fail to describe the ZEUS data at small values of $`x`$ and $`Q^2`$. Note, however, that the traditional framework (DGLAP + GRV94) also presents a ‘turn over’ at small values of $`x`$ and $`Q^2`$. Basically, this occurs because the smallest $`Q^2`$ value used ($`<Q^2>=0.52GeV^2`$) is very near the initial virtuality of the GRV parametrization, where the gluon distribution is ‘valence like’. Therefore the gluon distribution and the $`F_2`$ slope are approximately flat in this region. For the second smallest value of $`Q^2`$ used ($`<Q^2>=1.1GeV^2`$) the evolution length is larger, which implies that the gluon distribution (and the $`F_2`$ slope) already presents a steep behavior. Connecting these points produces the ‘turn over’ presented in Fig. 2. The main problem is that this ‘turn over’ is higher than that observed in the ZEUS data. 
This implies that $`xG(x,Q^2)`$ differs from the previous standard expectations in the limit of small $`x`$ and $`Q^2`$. This effect is not observed in the $`F_2`$ structure function, since $`F_2`$ is largely insensitive to the behavior of the gluon distribution, as can be verified by analysing the predictions of the distinct parametrizations. The gluon distributions predicted by these parametrizations differ by approximately 50 $`\%`$. The prediction of the gluon sector, obtained using the solution of expression (16) as input in (18), is also presented. We can see that at larger values of $`Q^2`$ and $`x`$ all the predictions are approximately identical. However, at small values of $`x`$ and $`Q^2`$, the ZEUS data are not well described considering only the quark or only the gluon sector of the SC. The contribution of the gluon shadowing is essential in the region of small values of $`x`$ and $`Q^2`$, i.e. a shadowed gluon distribution should be used as input in the eikonalized expression (17) in this kinematic region. Our conclusion is that at small values of $`x`$ and $`Q^2`$ the contribution of the gluon shadowing should be considered when estimating the SC to $`F_2`$ and its slope in the eikonal approach. While for $`F_2`$ the contribution of the gluon shadowing may be disregarded, it is essential for the $`F_2`$ slope. The $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$ data show that a consistent approach should consider both contributions at small $`x`$ and $`Q^2`$. Before we conclude this section some comments are in order. We have shown that the $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$ data can be successfully described considering the shadowing corrections in the quark and gluon sectors. A similar conclusion was obtained in a previous work, where the eikonal approach was also used to estimate the SC in the quark and gluon sectors, but a distinct procedure was used to estimate the SC for the $`F_2`$ slope. There, damping factors are calculated separately for both sectors and applied to the standard DGLAP predictions. 
The behavior of the gluon distribution at small values of $`Q^2`$ was modeled separately there, since the gluon distribution (14) vanishes for $`Q^2=Q_0^2`$. This procedure introduces a free parameter $`\mu ^2`$, beyond the usual ones of the eikonal approach ($`Q_0^2,R^2`$). The distinct procedure proposed here computes the observables directly within the eikonal approach, and the shadowing corrections in the different sectors are calculated within the same approach. In our calculations there are only two free parameters: (i) the cutoff ($`Q_0^2=0.4GeV^2`$), introduced in order to eliminate the long distance contribution, and (ii) the radius $`R`$ ($`R^2=5GeV^{-2}`$). The choices of these parameters are associated, respectively, with the initial virtuality of the GRV parametrization used in our calculations and with the estimates obtained using the HERA data on diffractive photoproduction of the $`J/\mathrm{\Psi }`$ vector meson (see the discussion in the next section) . In our procedure the region of small values of $`Q^2\approx Q_0^2`$ is determined by the behavior of the GRV parametrization in this region, since we are using eq. (16) to calculate the gluon distribution. For $`Q^2=Q_0^2`$ the two first terms of (16) vanish and the gluon distribution is described by the GRV parametrization, i.e. $`xG(x,Q_0^2)=xG(x,Q_0^2)\text{[GRV]}`$. The eikonal approach describes the ZEUS data, as do the DGLAP evolution equations with modified parton distributions. Recently, the MRST group has proposed a different set of parton parametrizations which consider an initial ‘valence-like’ gluon distribution. This parametrization allows one to describe the $`F_2`$ slope data without invoking an unconventional effect. This occurs because there is large freedom in the initial parton distributions and in the initial virtuality used in these parametrizations. 
We believe that only a comprehensive analysis of distinct observables ($`F_L,F_2^c,\frac{dF_2(x,Q^2)}{dlogQ^2}`$) will allow a more careful evaluation of the shadowing corrections at small $`x`$ . ## 3 The radius dependence of the shadowing corrections The value of the SC crucially depends on the size of the target . In pQCD the value of $`R`$ is associated with the coupling of the gluon ladders to the proton or, to put it another way, with how the gluons are distributed within the proton. $`R`$ may be of the order of the proton radius if the gluons are distributed uniformly over the whole proton disc, or much smaller if the gluons are concentrated, i.e. if the gluons in the proton are confined in a disc of smaller radius than the size of the proton. Considering the expression (8), assuming $`\mathrm{\Omega }_{q\overline{q}}<1`$ and expanding the expression to $`𝒪(\mathrm{\Omega }^2)`$ we obtain $`F_2(x,Q^2)={\displaystyle \frac{1}{2\pi ^3}}{\displaystyle \sum _{u,d,s}}e_f^2{\displaystyle \int _{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}}}{\displaystyle \frac{d^2r_t}{r_t^4}}{\displaystyle \int d^2b_t\left\{\frac{1}{2}\mathrm{\Omega }_{q\overline{q}}-\frac{1}{8}\mathrm{\Omega }_{q\overline{q}}^2\right\}}.`$ (19) Using the factorization of the opacity and the normalization of the profile function we can write $`F_2`$ as $`F_2(x,Q^2)={\displaystyle \frac{1}{2\pi ^3}}{\displaystyle \sum _{u,d,s}}e_f^2{\displaystyle \int _{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}}}{\displaystyle \frac{d^2r_t}{r_t^4}}\left\{{\displaystyle \frac{1}{2}}\overline{\mathrm{\Omega }_{q\overline{q}}}-{\displaystyle \frac{1}{8}}\overline{\mathrm{\Omega }_{q\overline{q}}}^2{\displaystyle \int d^2b_tS^2(b_t)}\right\}.`$ (20) The second term of the above equation represents the first shadowing correction to the $`F_2`$ structure function. Assuming a Gaussian parametrization for the profile function, we find that the screening is inversely proportional to the radius. 
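The inverse radius dependence of the screening term can be checked symbolically. In the sketch below (our illustration, not part of the original analysis) we take a normalized Gaussian profile $`S(b_t)=e^{-b_t^2/R^2}/(\pi R^2)`$ — the normalization convention is our assumption — and evaluate the two impact-parameter integrals that enter eq. (20):

```python
import sympy as sp

b, R = sp.symbols("b R", positive=True)
# Gaussian profile function; the 1/(pi R^2) prefactor is an assumed
# normalization, chosen so that the 2-D integral of S over b_t is 1.
S = sp.exp(-b**2 / R**2) / (sp.pi * R**2)

# 2-D integrals written in polar form: d^2 b_t -> 2 pi b db
norm = sp.integrate(2 * sp.pi * b * S, (b, 0, sp.oo))        # ∫ S d^2 b_t
overlap = sp.integrate(2 * sp.pi * b * S**2, (b, 0, sp.oo))  # ∫ S^2 d^2 b_t

print(norm)     # -> 1
print(overlap)  # -> 1/(2*pi*R**2)
```

The screening (second) term in eq. (20) therefore carries a factor $`1/(2\pi R^2)`$, which is the origin of the inverse dependence on the radius quoted above.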
Therefore the shadowing corrections are strongly associated with the distribution of the gluons within the proton. In this section we estimate the radius dependence of the shadowing corrections, considering the $`F_2`$ and $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$ data. First we explain why the radius is expected to be smaller than the proton radius. Consider the first order contribution to the shadowing corrections, where two ladders couple to the proton. The ladders may be attached to different constituents of the proton or to the same constituent. In the first case the shadowing corrections are controlled by the proton radius, while in the second case these corrections are controlled by the constituent radius, which is smaller than the proton radius. Therefore, on average, we expect the radius to be smaller than the proton radius. Theoretically, $`R^2`$ reflects the integration over $`b_t`$ in the first diagrams for the SC. In Fig. 3 we present the ratio $`R_2={\displaystyle \frac{F_2(x,Q^2)\text{[Eq. (}\text{15}\text{)]}}{F_2(x,Q^2)\text{[GRV]}}},`$ (21) where $`F_2(x,Q^2)[\text{GRV}]=\sum _{u,d,s}e_f^2[xq(x,Q^2)+x\overline{q}(x,Q^2)]+F_2^c(x,Q^2)`$ is calculated using the GRV parametrization. For the treatment of the charm component of the structure function we consider charm production via boson-gluon fusion . In this paper we assume $`m_c=1.5GeV`$. In Fig. 4 we present the ratio $`R_s={\displaystyle \frac{\frac{dF_2(x,Q^2)}{dlogQ^2}\text{[Eq. (}\text{17}\text{)]}}{\frac{dF_2(x,Q^2)}{dlogQ^2}\text{[GRV]}}}.`$ (22) The function $`\frac{dF_2(x,Q^2)}{dlogQ^2}\text{[GRV]}`$ was calculated using the expression (18) and the GRV parametrization. Our results are presented as a function of $`ln(\frac{1}{x})`$ at different virtualities. We can see that the SC are larger in the ratio $`R_s`$ and that our predictions of the SC are strongly dependent on the radius $`R`$. Moreover, we clearly see the inverse dependence of the SC on the radius. In Fig. 
5 we compare our predictions for the SC in the $`F_2`$ structure function with the H1 data as a function of $`ln(\frac{1}{x})`$ at different virtualities and for several values of the radius. Our goal is not a best fit of the radius, but to rule out some values of the radius by comparing the predictions of the eikonal approach with the HERA data. We consider only the quark sector in the calculation of the SC, which is a good approximation for this observable, as shown in the previous section. The choice $`R^2=1.5GeV^{-2}`$ does not describe the data, i.e. the data rule out the possibility of very large SC in the HERA kinematic region. However, there are still two possibilities for the radius which reasonably describe the $`F_2`$ data. To discriminate between these possibilities we must consider the behavior of the $`F_2`$ slope. In Fig. 6 we present our results for $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$ considering the SC only in the quark sector. Although in the previous section we demonstrated that the contributions of the quark and gluon sectors should both be considered, here we test another possibility for describing the data: the dependence on the radius $`R`$. Our results show that the best fit of the data occurs at small values of $`R^2`$, which are ruled out by the $`F_2`$ data. Therefore, in agreement with our previous conclusions, we must consider a general approach to describe consistently the $`F_2`$ and $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$ data. In Fig. 7 we present our results for $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$ considering the SC in the gluon and quark sectors for different values of $`R^2`$, calculated using the general approach proposed in the previous section. The best result occurs for $`R^2=5GeV^{-2}`$, which also describes the $`F_2`$ data. The value of the squared radius $`R^2=5GeV^{-2}`$ obtained in our analysis agrees with the estimates obtained using the HERA data on diffractive photoproduction of the $`J/\mathrm{\Psi }`$ meson . 
Indeed, the experimental values for the slopes are $`B_{el}=4GeV^{-2}`$ and $`B_{in}=1.66GeV^{-2}`$, and the cross sections for $`J/\mathrm{\Psi }`$ diffractive production with and without proton dissociation are equal. Neglecting the $`t`$ dependence of the pomeron-vector meson coupling, the value of $`R^2`$ can be estimated . It turns out that $`R^2\approx 5GeV^{-2}`$, i.e., approximately 2 times smaller than the radius of the proton. As an additional comment, let us note that the SC to $`F_2`$ and its slope may also be analysed using a two radii model for the proton . This analysis is motivated by the large difference between the measured slopes in elastic and inelastic diffractive leptoproduction of vector mesons in DIS. An analysis using the two radii model for the proton is not a goal of this paper, since a definite conclusion on the correct model is still under debate. The summary of this point is that the analysis of the $`F_2`$ and $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$ data using the eikonal model implies that the gluons are not distributed uniformly over the whole proton disc, but are concentrated in smaller regions. This conclusion motivates an analysis of jet production, which probes smaller regions within the proton, using an approach which considers the shadowing corrections. ## 4 A screening boundary The common feature of the BFKL and DGLAP equations is the steep increase of the cross sections as $`x`$ decreases. This steep increase cannot persist down to arbitrarily low values of $`x`$ since it violates a fundamental principle of quantum theory, unitarity. In the context of a relativistic quantum field theory of the strong interactions, unitarity implies that the cross section of a hadronic scattering reaction cannot grow with increasing energy $`s`$ faster than $`log^2s`$: Froissart’s theorem . The Froissart bound cannot be proven for off-mass-shell amplitudes , which is the case for deep inelastic scattering . 
Our goal in this section is to use $`s`$-channel unitarity (6) and the eikonal approach to estimate an upper limit beyond which the shadowing corrections cannot be disregarded in $`F_2`$ and its slope. Considering the expression (8) for the $`F_2`$ structure function, we can write a $`b_t`$ dependent structure function given by $`F_2(x,Q^2,b_t)={\displaystyle \frac{1}{2\pi ^3}}{\displaystyle \sum _{u,d,s}}e_f^2{\displaystyle \int _{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}}}{\displaystyle \frac{dr_t^2}{r_t^4}}\left\{1-e^{-\frac{1}{2}\mathrm{\Omega }_{q\overline{q}}(x,r_t,b_t)}\right\}.`$ (23) The relation between the opacity and the gluon distribution (9), obtained in , is valid in the kinematical region where $`\mathrm{\Omega }<1`$. In the eikonal approach for pQCD we make the assumption that the relation (9) is valid over the whole kinematic region. To obtain an estimate of the region where the SC are important we consider an upper limit for the expression (23), which is reached for $`\mathrm{\Omega }\gg 1`$. In this limit the exponential term in the above equation can be disregarded. As the shadowing terms are negative and reduce the growth of the $`F_2`$ structure function, by disregarding them we are estimating an upper limit for the region where these terms are unimportant, i.e. a screening boundary which delimits the region where the shadowing corrections are required to calculate the observables. 
The $`b_t`$ dependent structure function in the limit $`\mathrm{\Omega }\gg 1`$ is such that $`F_2(x,Q^2,b_t)<{\displaystyle \frac{1}{2\pi ^3}}{\displaystyle \sum _{u,d,s}}e_f^2{\displaystyle \int _{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}}}{\displaystyle \frac{dr_t^2}{r_t^4}}.`$ (24) Making the assumption that the $`b_t`$ dependence of the structure function factorizes : $`F_2(x,Q^2,b_t)=F_2(x,Q^2)S(b_t),`$ and considering a Gaussian parametrization for the profile function and its value at $`b_t=0`$, we get ($`n_f=3`$) $`F_2(x,Q^2)<{\displaystyle \frac{R^2}{3\pi ^2}}{\displaystyle \int _{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}}}{\displaystyle \frac{dr_t^2}{r_t^4}}.`$ (25) As a result $`F_2(x,Q^2)`$ $`<`$ $`{\displaystyle \frac{R^2}{3\pi ^2}}(Q^2-Q_0^2)`$ (26) $`<`$ $`{\displaystyle \frac{R^2Q^2}{3\pi ^2}}.`$ The above limit is our estimate of the screening boundary for the $`F_2`$ structure function. The screening boundary for the $`F_2`$ slope follows straightforwardly from the expression (24). We get $`{\displaystyle \frac{dF_2(x,Q^2)}{dlogQ^2}}<{\displaystyle \frac{R^2Q^2}{3\pi ^2}}.`$ (27) This expression agrees with the expression obtained in . Clearly, expressions (26) and (27) serve only as a rough prescription for estimating the region where the corrections required by unitarity cannot be disregarded. A more rigorous treatment would be desirable, but remains to be developed. Using the above expressions we can analyse the HERA data. We use $`R^2=5GeV^{-2}`$ in the calculations. In Fig. 8 we compare our predictions with the $`F_2`$ data from the H1 collaboration. We can see that the data at larger values of $`Q^2`$ ($`Q^2\geq 8.5GeV^2`$) do not violate the limit (26). However, the data at smaller values of $`Q^2`$ and $`x`$ violate this limit. This indicates that we should consider the SC in this kinematical region. In Fig. 9 we present our results for the $`F_2`$ slope. 
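For orientation, the bound (26) is easy to evaluate numerically. The sketch below (our illustration) tabulates its right-hand side at a few virtualities, taking $`R^2=5`$ GeV<sup>-2</sup> so that the bound is dimensionless; since measured $`F_2`$ at small $`x`$ is of order unity, the low-$`Q^2`$ entries show why the small-$`Q^2`$ data run up against the limit:

```python
import math

def f2_bound(Q2, R2=5.0):
    """Right-hand side of bounds (26)/(27): R^2 Q^2 / (3 pi^2).

    Q2 in GeV^2 and R2 in GeV^-2, so the bound is dimensionless."""
    return R2 * Q2 / (3 * math.pi ** 2)

for Q2 in (1.5, 2.5, 8.5, 15.0):
    print(f"Q^2 = {Q2:4.1f} GeV^2 -> F2 bound = {f2_bound(Q2):.3f}")
# Q^2 =  1.5 GeV^2 -> F2 bound = 0.253
# Q^2 =  2.5 GeV^2 -> F2 bound = 0.422
# Q^2 =  8.5 GeV^2 -> F2 bound = 1.435
# Q^2 = 15.0 GeV^2 -> F2 bound = 2.533
```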
We see that the data at small $`Q^2`$ and $`x`$ ($`Q^2\leq 2.5GeV^2`$, $`x\leq 10^{-4}`$) violate the limit (27), stressing the need for the shadowing corrections. Therefore, for small values of $`x`$ and $`Q^2`$ the observables must be calculated using an approach which takes them into account. ## 5 Summary In this paper we have presented our analysis of the shadowing corrections to the scaling violations using the eikonal approach. We have shown that the $`\frac{dF_2(x,Q^2)}{dlogQ^2}`$ data can be described successfully considering the shadowing corrections in the quark and gluon sectors. Furthermore, we have considered the radius dependence of these corrections and a unitarity boundary. From the analysis of the $`R`$ dependence of the SC in the eikonal approach we have shown that the value $`R^2=5GeV^{-2}`$ allows a description of the HERA data. This value agrees with the estimate obtained independently from diffractive $`J/\mathrm{\Psi }`$ photoproduction. Using the eikonal approach and the assumption of $`b_t`$ factorization of the $`F_2`$ structure function, a screening boundary is analysed. This boundary delimits the region where the corrections required by unitarity may be disregarded; in other words, it is a limit on the applicability of the standard perturbative QCD framework. We have shown that the HERA data at small $`x`$ and $`Q^2`$ violate this limit, which implies that the shadowing corrections are already important in the HERA kinematic region. We believe that the analysis of distinct observables ($`F_L,F_2^c,\frac{dF_2(x,Q^2)}{dlogQ^2}`$) at small values of $`x`$ and $`Q^2`$ will make it possible to pin down the shadowing corrections. ## Acknowledgments MBGD acknowledges enlightening discussions with F. Halzen at University of Wisconsin, S. J. Brodsky at SLAC and E. M. Levin during the completion of this work.
# Quasars and Ultraluminous Infrared Galaxies: At the Limit? ## 1 Introduction Although active quasars constitute only a small fraction of galaxies today, it appears that most large galaxies harbor central massive dark objects (MDOs) with $`M_{MDO}/M_{spheroid}\approx 0.006`$ (Kormendy & Richstone (1995); Faber et al. (1997); Magorrian et al. (1998)). These MDOs are plausibly the supermassive black holes required by the current paradigm for active galactic nuclei. This result has been predicted by studies of quasar demographics that show quasar activity is more likely a short-lived phenomenon in a majority of large galaxies than a long-lived one in a small fraction of galaxies (Soltan (1982); Haehnelt & Rees (1993)). This conclusion is based on the assumptions that quasars are found in massive spheroids and radiate at the Eddington rate. Recently, quasar host-galaxy studies have added several pieces of evidence in support of this picture. First, near-infrared and HST imaging programs show that luminous quasars do in fact reside mainly in luminous, early-type hosts (McLeod & Rieke 1995b; Hutchings (1995); Taylor et al. (1996); McLeod (1997); Bahcall et al. (1997); Boyce et al. (1998); McLure et al. (1998)). Second, there appears to be an upper bound to the quasar luminosity as a function of host galaxy stellar mass (McLeod & Rieke 1995a and references therein). McLeod (1997) pointed out that if the quasar nuclei at this “luminosity/host-mass limit” are emitting at a significant fraction of the Eddington limit, then the black holes must obey a relation similar to the one for normal galaxies with MDOs. This result supports the notion that present day quiescent galaxies harbor “dead quasars” in their nuclei, a conclusion that has recently been supported by McLure et al. (1998). The nature of ultraluminous infrared galaxies (ULIRGs) has been of great interest since their discovery by Rieke & Low (1972), and particularly since IRAS found them in substantial numbers (e.g. 
Sanders et al. (1988)). It has been widely proposed that infrared galaxies with luminosity of $`10^{12}L_{\odot }`$ or greater derive their energy predominantly from heavily dust-embedded AGNs (e.g. Sanders & Mirabel (1996) and references therein). If this model is correct, virtually all the blue, ultraviolet, and soft X-ray energy from the quasar will be absorbed by the dust and degraded to the far infrared, where the huge luminosity will emerge unavoidably. Comparison with the nuclear luminosity/host-mass limit relation therefore can test the connection between ULIRGs and quasars. The quasar luminosity/host-mass limit was determined first from our groundbased data (McLeod & Rieke 1994a,b, 1995a,b and references therein), where relatively poor resolution precluded a detailed study of the hosts. We report here higher resolution imaging using HST’s Near Infrared Camera and MultiObject Spectrometer (NICMOS). We combine the results with previous data to improve the definition of the relation. We then use data from the literature to compare this relation with its counterpart for ULIRGs. ## 2 NICMOS Observations and Data Reduction To test the robustness of our luminosity/host-mass limit, we observed from our “high-luminosity sample” (McLeod & Rieke 1994b) all 10 quasars that had not been previously observed with HST. To this we added 6 luminous quasars for which ground-based attempts to resolve a host galaxy had failed. All 16 objects are in the redshift range $`0.13<z<0.40`$ with an average $`z=0.25`$. Each quasar was observed for a single orbit using the NIC2 MULTIACCUM mode in a four position dither pattern. Because this mode allows the observer to use intermediate readouts, we were able to build up an image that was both deep and linear over the entire quasar field. At the end of each orbit, we used the same dither pattern to observe a bright star near the quasar. This star provided a measurement of the point-spread function (PSF). 
The quasar and PSF star images were reduced using the NICRED package provided by B. McLeod. Full details will be provided in McLeod et al. 1999. ## 3 Determining Host-galaxy Magnitudes We determined host-galaxy magnitudes using a 1-D analysis technique that allowed us to compare with our ground-based results and provided a graphical way to judge the goodness-of-fit of various galaxy models. First, we generated a 1-D radial intensity profile of each quasar, each PSF star, and a “combined PSF” made from a noise-weighted average of all of the PSF stars. The profiles extended to a surface brightness limit of $`\mathrm{m}_\mathrm{H}\approx 23.2`$ mag arcsec<sup>-2</sup> and excluded light from companions. Second, we normalized each PSF to have the same central intensity as the quasar. Third, we removed the nuclear contribution by subtracting the highest fraction of the normalized PSFs for which the resulting intensity in the first Airy minimum was non-negative. On average, 72% of the light in the central peak was attributed to the nucleus, whereas the full range for the sample was 65-90%. The combined PSF gave comparable results to each quasar’s own PSF. Finally, we numerically integrated the resulting profile to obtain a magnitude for the host. Figure 1 shows the 1-D profiles for one of the hosts to illustrate the effect of subtracting different amounts of the normalized PSF. The host magnitudes derived by integrating under the three profiles shown are H=14.0, 15.0, and 15.4 (top to bottom respectively). As seen in the Figure, most of the difference is in the nuclear contribution; there is very little effect on most of the galaxy. We estimate from tests like these that our host galaxy magnitudes generally have uncertainties of $`\sim 0.2`$ mag from the PSF subtraction. 
Despite the order-of-magnitude difference in the spatial resolutions of the ground-based and NICMOS data, we found excellent agreement in the host magnitudes (within 0.2 mag) for 8 of the 10 quasars that we had imaged previously. For the other two, the ground-based intensities were apparently too high due to contamination from close companions. The resulting magnitudes for all 16 quasars in our sample are plotted in Figure 2a, along with (i) WFPC2 host magnitudes from the literature for the rest of the McLeod & Rieke (1994b) sample; (ii) our own and other ground-based data for other samples previously shown in McLeod & Rieke (1995a,b); and (iii) high-quality data recently published for two additional samples. All magnitudes have been converted to rest-frame values assuming nuclear k-corrections and colors from Cristiani & Vio (1990); galaxy k-corrections and colors appropriate for early-type galaxies, computed from a galaxy spectrum provided by M. Rieke (and shown in Figure 1 of McLeod & Rieke 1995b); and $`\mathrm{H}_0=80`$ km/s/Mpc, $`\mathrm{q}_0=0`$. The redshifts of the $`\mathrm{M}_\mathrm{B}<-22`$ quasars on the plot range from $`0.06<z<0.8`$ with an average $`z\approx 0.3`$ (90% have $`z<0.5`$). In addition to uncertainties due to PSF subtraction, the host $`\mathrm{M}_\mathrm{H}`$ values carry uncertainties due to the underlying galaxy energy distribution. For the galaxies observed in H, the k-corrections themselves are $`<0.1`$ mag over this redshift range. For the galaxies observed in the visible, however, the uncertainties in the k-corrections and colors can total several tenths of a magnitude. From the 1-D profiles, we also determined that approximately half of these radio-quiet quasars have hosts that are better described by de Vaucouleurs laws than by exponentials, in agreement with the other recent results mentioned above. 
Full details of our 1-D analysis, including the radial profiles, will be published along with the images and a 2-D morphological analysis in McLeod et al. 1999. However, the addition of the new data to Figure 2a allows us to take a new look at the luminosity/host-mass limit now. ## 4 Results ### 4.1 The Eddington Limit for Quasars Figure 2a allows us to test the combined assumptions that all large galaxies contain black holes with the Magorrian et al. (1998) mass fraction and that quasars radiate near the Eddington limit. For quasars near the luminosity/host-mass limit, the galaxy contribution to the B-band light is negligible. Thus, we can convert $`\mathrm{M}_\mathrm{B}`$ to a quasar bolometric luminosity through a bolometric correction $`\mathrm{BC}\equiv \mathrm{L}_{\mathrm{bol}}/\nu \mathrm{L}_\nu (\mathrm{B})`$. We adopt as a reference the rest-frame value BC=12 (Elvis et al. (1994)). To estimate black hole masses, we adopt the average value of $`f\equiv M_{MDO}/M_{spheroid}\approx 0.006`$ from Magorrian et al. (1998). We convert spheroid masses to galaxy absolute magnitudes assuming $`V-H=3.0`$ for bulge stellar populations and a mass-to-light ratio $`\mathrm{\Upsilon }_\mathrm{V}=7.2M_{\odot }/L_{\odot }`$, which is an average for the 24 most luminous galaxies ($`\mathrm{M}_\mathrm{V}\leq -20`$) in their sample. Thus, both our bulge and black hole masses are traceable to Magorrian et al. (1998), which should reduce systematic errors in the conversion between them. Combining the assumptions described above, the relation between nuclear absolute B magnitude and bulge absolute H magnitude is: $$M_B=M_H-2.1-2.5[log_{10}(ϵ)+log_{10}(\frac{\mathrm{\Upsilon }_\mathrm{V}}{7.2M_{\odot }/L_{\odot }})+log_{10}(\frac{f}{0.006})-log_{10}(\frac{BC}{12})]$$ where $`ϵ\equiv L/L_{Edd}`$. The diagonal lines in Figure 2a show the positions of $`ϵ=0.1`$ and $`1.0`$. For our default values of $`\mathrm{\Upsilon }_\mathrm{V}`$, f, and $`\mathrm{BC}`$, most quasars fall within the $`ϵ\lesssim 0.20`$ envelope. 
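The relation above is straightforward to evaluate numerically. The sketch below is ours: the $`M_H=-26`$ bulge is a hypothetical example, and the defaults are the Magorrian et al. (1998) and Elvis et al. (1994) values quoted in the text.

```python
import math

def nuclear_MB(MH_bulge, eps, ml_ratio=7.2, f=0.006, bc=12.0):
    """Nuclear M_B from the relation in the text, for a bulge of absolute
    magnitude M_H radiating at Eddington fraction eps."""
    return (MH_bulge - 2.1
            - 2.5 * (math.log10(eps)
                     + math.log10(ml_ratio / 7.2)
                     + math.log10(f / 0.006)
                     - math.log10(bc / 12.0)))

# Hypothetical bulge of M_H = -26: positions of the two diagonal lines
print(round(nuclear_MB(-26.0, 1.0), 1))  # -28.1  (eps = 1, Eddington limit)
print(round(nuclear_MB(-26.0, 0.1), 1))  # -25.6  (eps = 0.1)
```

Each dex in $`ϵ`$ corresponds to 2.5 mag in nuclear brightness, which is why the $`ϵ=0.1`$ and $`ϵ=1`$ lines in Figure 2a are parallel and 2.5 mag apart.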
A complementary analysis by Laor (1998) indicates that quasars like the ones in our samples do in fact follow the Magorrian et al. (1998) relation. If this is true, our results imply that the quasars are radiating at up to 20% of the Eddington rate. These results are consistent with the recent study of McLure et al. (1998), who carried out a similar analysis to ours using visible data and found most of their objects to be radiating at a few percent of Eddington. ### 4.2 Bulge/Luminosity Relation for Ultraluminous Infrared Galaxies The discovery that classical quasars emit up to a significant fraction ($`\sim 20\%`$) of their Eddington luminosities suggests that ultraluminous galaxies might emit at a similar level. This hypothesis is relatively easily tested because the integrated H-band fluxes of the ULIRGs tend to be dominated by the bulge component of the galaxy (see e.g. the imaging atlases of Smith et al. (1996) and Murphy et al. (1996); also argued by Surace & Sanders (1997)). We therefore estimate the central black hole masses from the integrated absolute H magnitudes, using photometry from McAlary et al. (1979), Carico et al. (1988), Carico et al. (1990), Goldader et al. (1995), Smith et al. (1996), and Murphy et al. (1996). For the latter reference, calibrated near infrared measures are not available, so we computed the bulge H magnitude from m<sub>r</sub> and a standard color correction. In all cases, absolute magnitudes are computed as in Murphy et al. (1996). Far infrared luminosities are taken from the same references as the bulge magnitudes. The results are shown in Figure 2b, along with an Eddington limit computed exactly analogously to the limit for the quasars in Figure 2a. While they do not span a wide enough luminosity range to define clearly a luminosity/host-mass relation, the $`10^{12}L_{\odot }`$ ULIRGs radiate at a rate nearly identical to that of quasars of the same luminosity. 
Most fall within an envelope of $`ϵ\lesssim 0.1`$, and a detailed comparison of the $`ϵ`$ distributions over the whole QSO range indicates a factor of $`\sim 2`$ difference in the average Eddington fraction. This factor is likely within the uncertainties due to bolometric corrections, assumptions about measuring the bulge luminosity in ULIRGs, and other causes. It is conceivable that the two types of source have not just very similar but identical behavior relative to the Eddington limit. The close similarity supports the view that a significant portion of the most luminous ULIRGs derive much of their luminosity from embedded AGNs. About 30% of the ULIRGs above Seyfert luminosity fall within a factor of two of $`ϵ=0.1`$ and are candidates to be dominated by embedded AGNs. ### 4.3 Nature of ULIRGs The controversy regarding the nature of ULIRGs is fed by the contradictory results given by the various indicators. Part of the difficulty is that many indicators can show the presence of an AGN but do not constrain its role in the energetics of the galaxy - an example is high excitation optical emission lines, seen either directly or scattered. Recently, Genzel et al. (1998) have used ISO spectroscopy to argue that the majority (70-80%) of these objects are dominated energetically by star formation, both because the high excitation infrared fine structure lines are weak and because the low excitation lines imply adequate energy generation by starbursts. Rieke (1988) concluded from the lack of hard X-rays from this class of object that most of them were powered by starbursts. Although hard X-rays could be blocked by very thick accretion tori in some cases (Sanders & Mirabel (1996)), it is improbable that such heavy columns would lie along our line of sight in all cases. Surace et al. (1998) detect knots of star formation in HST images, but they also detect putative nuclei and argue that the star forming knots are not energetically important. 
Veilleux, Sanders, & Kim (1997) find evidence for energetically important AGNs in at least 20-30% of the ULIRGs from the presence of near infrared broad lines. Lonsdale, Smith, & Lonsdale (1995) find VLBI radio sources at levels consistent with the theory that an AGN dominates the luminosity in 55% of ULIRGs. On the other hand, in followup VLBI observations of one such source, Arp 220, Smith et al. (1998) show that the compact radio emission actually originates in multiple radio supernovae. All four indicators are consistent with $`\sim 75\%`$ of the $`10^{12}L_{\odot }`$ ULIRGs being powered predominantly by starbursts, with $`\sim 25\%`$ powered by AGNs. Although this general agreement is encouraging, a number of individual galaxies yield contradictory indications of their underlying energy source using differing methods. ## 5 Conclusions Our study of quasar host galaxies supports the hypothesis that all galaxy spheroids contain black holes with $`\sim 0.6\%`$ of the stellar mass. In QSOs, the black holes accrete at up to $`20\%`$ of the Eddington rate. In ULIRGs, the nature of the power source remains unclear; however, if they are powered by embedded quasars then they accrete at a similar rate. The large Eddington fractions in both kinds of objects imply a small duty cycle for activity over the Hubble time. Otherwise, the accretion process would produce higher-mass black holes than we infer today. At high redshift, quasars are very luminous but large galaxies might not yet have been assembled. Therefore, we expect that the luminosity/host-mass limit must ultimately break down. This may indicate that rather than the stellar mass, it is the depth of the large-scale potential well of the dark matter halo that is fundamentally related to the mass of the supermassive black hole. 
Support for this work was provided by NASA through grant number GO-07421.01-96A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. We also gratefully acknowledge support from the Keck Northeast Astronomy Consortium. We especially thank the NICMOS team for putting up a great instrument, Brian McLeod for putting together the NICRED reduction package, and Erin Condy for assistance with the data reduction. We are grateful to the referee for a concise, constructive report that improved the quality of the manuscript.
# THE HST KEY PROJECT ON THE EXTRAGALACTIC DISTANCE SCALE. XV. A CEPHEID DISTANCE TO THE FORNAX CLUSTER AND ITS IMPLICATIONS BARRY F. MADORE<sup>1</sup>, WENDY L. FREEDMAN<sup>2</sup>, N. SILBERMANN<sup>3</sup> PAUL HARDING<sup>4</sup>, JOHN HUCHRA<sup>5</sup>, JEREMY R. MOULD<sup>6</sup> JOHN A. GRAHAM<sup>7</sup>, LAURA FERRARESE<sup>8</sup> BRAD K. GIBSON<sup>6,13</sup>, MINGSHENG HAN<sup>9</sup>, JOHN G. HOESSEL<sup>9</sup> SHAUN M. HUGHES<sup>10</sup>, GARTH D. ILLINGWORTH<sup>11</sup>, DAN KELSON<sup>7</sup> RANDY PHELPS<sup>2</sup>, SHOKO SAKAI<sup>3</sup>, PETER STETSON<sup>12</sup> ——————————– $`1`$ NASA/IPAC Extragalactic Database, Infrared Processing and Analysis Center, Jet Propulsion Laboratory, California Institute of Technology, MS 100-22, Pasadena, CA 91125 $`2`$ Observatories of the Carnegie Institution of Washington, 813 Santa Barbara St., Pasadena, CA 91101 $`3`$ Jet Propulsion Laboratory, California Institute of Technology, MS 100-22, Pasadena, CA 91125 $`4`$ Steward Observatory, University of Arizona, Tucson, AZ 85721 $`5`$ Harvard Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 $`6`$ Mt. Stromlo and Siding Spring Observatories, Institute of Advanced Studies, Private Bag, Weston Creek Post Office, ACT 2611, Australia $`7`$ Department of Terrestrial Magnetism, Carnegie Institution of Washington, 5241 Broad Branch Rd. N.W., Washington D.C. 20015 $`8`$ Hubble Fellow, California Institute of Technology, MS 105-24 Robinson Lab, Pasadena, CA 91125 $`9`$ Department of Astronomy, University of Wisconsin, 475 N. Charter St., Madison, WI 53706 $`10`$ Royal Greenwich Observatory, Madingley Road, Cambridge, UK CB3 OHA $`11`$ Lick Observatory, University of California, Santa Cruz, CA 95064 $`12`$ Dominion Astrophysical Observatory, 5071 W. 
Saanich Rd., Victoria, BC, Canada V8X 4M6 $`13`$ Center for Astrophysics & Space Astronomy, University of Colorado, Boulder CO 80309-0389 ABSTRACT Using the Hubble Space Telescope (HST) thirty-seven long-period Cepheid variables have been discovered in the Fornax Cluster spiral galaxy NGC 1365 (Silbermann et al. 1999). The resulting V and I period-luminosity relations yield a true distance modulus of $`\mu _o=`$ 31.35 $`\pm `$ 0.07 mag, which corresponds to a distance of 18.6$`\pm `$0.6 Mpc. This measurement provides several routes for estimating the Hubble Constant. (1) Assuming this distance for the Fornax Cluster as a whole yields a local Hubble Constant of 70 ($`\pm `$18)<sub>random</sub> \[$`\pm `$7\]<sub>systematic</sub> km/s/Mpc. (2) Nine Cepheid-based distances to groups of galaxies out to and including the Fornax and Virgo clusters yield H<sub>o</sub> = 73 ($`\pm `$16)<sub>r</sub> \[$`\pm `$7\]<sub>s</sub> km/s/Mpc. (3) Recalibrating the I-band Tully-Fisher relation using NGC 1365 and six nearby spiral galaxies, and applying it to 15 galaxy clusters out to 100 Mpc gives H<sub>o</sub> = 76 ($`\pm `$3)<sub>r</sub> \[$`\pm `$8\]<sub>s</sub> km/s/Mpc. (4) Using a broad-based set of differential cluster distance moduli ranging from Fornax to Abell 2147 gives H<sub>o</sub> = 72 ($`\pm `$3)<sub>r</sub> \[$`\pm `$6\]<sub>s</sub> km/s/Mpc. And finally, (5) Assuming the NGC 1365 distance for the two additional Type Ia supernovae in Fornax and adding them to the SNIa calibration (correcting for light curve shape) gives H<sub>o</sub> = 67 ($`\pm `$6)<sub>r</sub> \[$`\pm `$7\]<sub>s</sub> km/s/Mpc out to a distance in excess of 500 Mpc. All five of these H<sub>o</sub> determinations agree to within their statistical errors. The resulting estimate of the Hubble Constant combining all of these determinations is H<sub>o</sub> = 72 ($`\pm `$5)<sub>r</sub> \[$`\pm `$7\]<sub>s</sub> km/s/Mpc. 
An extensive tabulation of identified systematic and statistical errors, and their propagation, is given.

1. INTRODUCTION

Hubble (1929) announced his discovery of the expansion of the Universe nearly 70 years ago. Despite decades of effort, and continued improvements in the actual measurement of extragalactic distances, convergence on a consistent value for the absolute expansion rate, the Hubble constant, $`H_o`$, has been elusive. However, progress on the absolute calibration of the extragalactic distance scale in the last few years has been rapid and dramatic (see, for instance, the recent proceedings “The Extragalactic Distance Scale” edited by Livio, Donahue & Panagia 1997, containing Freedman, Madore & Kennicutt 1997; Mould et al. 1997; Tammann & Federspiel 1997; and also see Jacoby et al. 1992 and Riess, Press & Kirshner 1996). This accelerated pace has occurred primarily as a result of the improved resolution of the Hubble Space Telescope (HST) and its consequent ability to discover classical Cepheid variables at distances a factor of ten further than can routinely be achieved from the ground. As a result, accurate zero points to a number of recently refined methods which can measure precise relative distances beyond the realm of the Cepheids have become available. These combined efforts are providing a more accurate distance scale for local galaxies, and are indicating a convergence among various secondary distance indicators in establishing an absolute calibration of the far-field Hubble flow. The discovery of Cepheids with HST has proven to be very efficient out to and even somewhat beyond distances of $`\sim `$20 Mpc. Soon after the December 1993 HST servicing mission the measurement of Cepheids in the Virgo cluster (part of the original design specifications for the telescope) became feasible (Freedman et al. 1994a). The subsequent discovery of Cepheids in the Virgo galaxy M100 (Freedman et al. 1994b; Ferrarese et al.
1996) was an important step in resolving outstanding differences in the extragalactic distance scale (Mould et al. 1995). The Virgo cluster is complex both in its geometric and its kinematic structure, and there still remain large uncertainties in both the velocity and distance to this cluster. Hence, the Virgo cluster is not an ideal test site for an unambiguous determination of the cosmological expansion rate or the calibration of secondary distance indicators. In this paper we discuss the implications of a Cepheid distance to the next major cluster of galaxies, Fornax, which is a simpler system than Virgo. In the companion paper to this one (Silbermann et al. 1999) we present the Cepheid photometry and PL relations for the Cepheids in NGC 1365. In Madore et al. (1998) we briefly discussed the determination of $`H_o`$ based on the distance of NGC 1365 and the Fornax Cluster, in addition to a calibration of a local Hubble expansion-rate plot. The Fornax cluster is comparable in distance to the Virgo cluster (de Vaucouleurs 1975), but it is found almost opposite to Virgo in the skies of the southern hemisphere. The Fornax cluster is less rich in galaxies than Virgo (Ferguson & Sandage 1988), but it is also substantially more compact than its northern counterpart (Figure 1). As a result of its lower mass, the influence of Fornax on the local velocity field is less dramatic than that of the Virgo cluster. And because of its compact nature, questions concerning the membership and location in the cluster of individual galaxies are significantly less problematic; the back-to-front geometry is far simpler and less controversial than that of the Virgo cluster. Clearly, Fornax provides a much more interesting site for a test of the local expansion rate. In the context of the Key Project on the Extragalactic Distance Scale (Kennicutt, Freedman & Mould 1995), there are several important reasons to secure a distance to the Fornax cluster. 
The Fornax cluster serves as both a probe of the local velocity field and a major jumping-off point for several secondary distance indicators, which can be used to probe a volume of space at least 1,000 times larger. To obtain a distance to the Fornax cluster, the H<sub>0</sub> Key Project sample includes three member galaxies; the first of these, discussed here, is the Seyfert 1 galaxy NGC 1365, a striking, two-armed, barred-spiral galaxy with an active galactic nucleus. Two additional galaxies, NGC 1425 and NGC 1326A, have also been imaged with HST and those data are being processed; preliminary reduction shows that the distance to NGC 1326A (Prosser et al., in preparation) lies within the uncertainties quoted here for NGC 1365.

2. NGC 1365 AND THE FORNAX CLUSTER

Three lines of evidence independently suggest that NGC 1365 is a representative, physical member of the Fornax cluster. First, NGC 1365 is almost directly along our line of sight to Fornax: the galaxy is projected only $`\sim `$70 arcmin (380 kpc) from the geometric center of the cluster, whereas the radius of the cluster is $`\sim `$100 arcmin (540 kpc; Ferguson 1989, and see also Figure 1). In addition, NGC 1365 is also coincident with the Fornax cluster in velocity space. The observed velocity of NGC 1365 (+1,636 km/sec) is only +234 km/sec larger than the cluster mean, and is well inside the cluster velocity dispersion (see below). Finally, we note that for its rotational velocity, NGC 1365 sits within 0.02 mag of the central ridge line of the apparent Tully-Fisher relation defined by other cluster members in recent studies of the Fornax cluster (Bureau, Mould & Staveley-Smith 1996; Schroder 1995). NGC 1365 is large in angular size, and it is very bright in apparent luminosity as compared to any other galaxy in the immediate vicinity of the Fornax cluster. One might question whether, on this basis, NGC 1365 is a true member of the Fornax cluster. Correcting for an inclination of 44° (see Bureau et al.
1996) the 21cm neutral hydrogen line width of NGC 1365 is found to be $`\sim `$575 km/sec (Bureau et al. 1996; Mathewson, Ford & Buchhorn 1992). A $`\pm 5^{\circ }`$ error in this determination would result in a 10% uncertainty in the derived line width. Using the Tully-Fisher relation as a relative guide to intrinsic size and luminosity, this rotation rate places NGC 1365 among the most luminous galaxies in the local Universe; brighter than M31 or M81, and comparable to NGC 4501 in the Virgo cluster or NGC 3992 in the Ursa Major cluster. Thus, the Tully-Fisher relation predicts that NGC 1365 is expected to be apparently bright, even at the distance of the Fornax cluster, and that its observed global properties are consistent with membership in that cluster.

3. THE MEAN VELOCITY AND VELOCITY DISPERSION OF FORNAX

The systemic (heliocentric) velocity and velocity dispersion of the main population of galaxies in Fornax are well defined. A search of the NASA/IPAC Extragalactic Database (NED: http://nedwww.ipac.caltech.edu, version release date 01/98) for galaxies within 6° of the Fornax cluster center and having published redshifts $`\le `$2,500 km/sec produced a sample of 106 galaxies; this was then supplemented with 4 additional redshifts from ZCAT (Huchra, Geller, Clemens, Tokarz & Michel 1992; the 1998 edition of ZCAT is available via anonymous ftp from fang.harvard.edu) and 7 recently published dwarf galaxy redshifts from Drinkwater & Gregg (1998), giving a total of 117 redshifts. The distribution of these 117 objects projected on the sky is shown in Figure 2; two ‘pie diagrams’ illustrating the sample distribution in position-velocity space are shown in Figure 3. In all three representations, ellipticals are shown as filled circles, spirals as open circles. While the core of the cluster is demonstrably dominated by E/S0 galaxies, there is no other obvious segregation of the two populations: spirals and ellipticals are coincident and largely co-spatial.
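Two of the numbers quoted above, the projected separations in kpc and the roughly 10% line-width sensitivity to a ±5° inclination error, follow from elementary geometry. The sketch below checks them; the observed (uncorrected) line width of 400 km/sec is an illustrative value chosen to reproduce the quoted ~575 km/sec, not a figure from the paper.

```python
import math

def arcmin_to_kpc(theta_arcmin, distance_mpc):
    """Small-angle conversion of a projected angular separation to kpc."""
    return math.radians(theta_arcmin / 60.0) * distance_mpc * 1000.0

def deprojected_width(w_obs, incl_deg):
    """Inclination-corrected HI line width: Delta V = W_obs / sin(i)."""
    return w_obs / math.sin(math.radians(incl_deg))

# NGC 1365's offset from the cluster centre, and the cluster radius, at 18.6 Mpc
print(round(arcmin_to_kpc(70.0, 18.6)))     # ~380 kpc
print(round(arcmin_to_kpc(100.0, 18.6)))    # ~540 kpc

# Sensitivity of the de-projected width to a 5-degree inclination error
w44 = deprojected_width(400.0, 44.0)        # ~575 km/sec (w_obs = 400 is illustrative)
w39 = deprojected_width(400.0, 39.0)
print(round(100.0 * (w39 - w44) / w44, 1))  # ~10% change
```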
After subdividing the sample by morphological type, 39 spiral/irregular galaxies give V = 1,399 km/sec and $`\sigma =\pm 334`$ km/sec, while 78 E/S0 galaxies give V = 1,463 km/sec with $`\sigma =\pm 347`$ km/sec. The mean velocity of the spirals agrees with the mean for the ellipticals to within 0.2 $`\times \sigma `$, the velocity dispersion of the system. The combined sample of 117 galaxies has an unweighted mean of V = 1,441 km/sec and $`\sigma =`$ $`\pm 342`$ km/sec, which we adopt hereafter (see also Schroder 1995; Han & Mould 1990). The velocity offset of +195 km/sec for NGC 1365 with respect to this mean is less than 2/3 of the cluster velocity dispersion. <sup>1</sup> Given that the redshifts are of mixed quality with regard to reported uncertainties, differing by up to two orders of magnitude, we also calculated the systemic velocity of the cluster weighting the individual velocities by the inverse square of the internal errors. That solution gives V = 1,405 km/sec, agreeing with the adopted value to within 3%, despite the fact that it is heavily weighted by only a dozen or so high-precision points in the distribution (see Figure 4).

4. HST OBSERVATIONS AND THE CEPHEIDS IN NGC 1365

Using the Wide Field and Planetary Camera 2 on HST, we have obtained a set of 12-epoch observations of NGC 1365. The observing window of 44 days, beginning August 6 and continuing until September 24, 1995, was selected to maximize target visibility without necessitating any roll of the targeted field of view. Sampling within the window was prescribed by a power-law distribution, tailored to optimally cover the light and color curves of Cepheids with anticipated periods in the range 10 to 60 days (see (3) for additional details). Contiguous with 4 of the 12 V-band epochs (5,100 sec each through the F555W filter), I-band exposures (5,400 sec each through the F814W filter) were also obtained so as to allow a determination of reddening corrections for the Cepheids.
All frames were pipeline pre-processed at the Space Telescope Science Institute in Baltimore and subsequently analyzed using two stellar photometry packages, ALLFRAME (Stetson 1994) and DoPhot (Schechter et al. 1993), in order to quantify potential systematic differences in the two reduction programs. Zero-point calibrations for the photometry were adopted from Holtzmann et al. (1995) and Hill et al. (1998), which agree to 0.05 mag on average. Details on the DoPhot and ALLFRAME reduction and analysis of this data set are presented elsewhere (Silbermann et al. 1999). We are also currently undertaking artificial star tests on these frames to quantify the uncertainty due to crowding (Ferrarese et al., in preparation). Detailed information on the 52 Cepheid candidates discovered in NGC 1365 can be found in Silbermann et al. (1999). The phase coverage in all cases is sufficiently dense and uniform that the form of the light curves is clearly delineated. We have adopted a sample of 37 of these variables as being unambiguously classified as high-quality Cepheids on the basis of their distinctively rapid brightening, followed by a long, linear decline phase (for both the DoPhot and ALLFRAME variable-star candidates). Periods, obtained using a modified Lafler-Kinman algorithm (Lafler & Kinman 1965), are statistically good to a few percent, although in some cases ambiguities larger than this do exist as a consequence of the narrow observing window and the restricted number of cycles (between 1 and 5) covered within the 44-day window. A variety of other samples, selected and reduced in a number of different ways, are discussed in Silbermann et al. To the degree that the distances all agree to within their quoted errors, the broad conclusions resulting from this paper are not affected by the choice of sample.
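The Lafler-Kinman period search mentioned above minimizes a phase-dispersion statistic over trial periods: fold the data at each trial period, sort by phase, and compare the summed squared magnitude differences between phase-neighbours to the total variance. A minimal sketch on synthetic data (the refinements of the modified algorithm used in the paper are not reproduced here):

```python
import math
import random

def lafler_kinman_theta(times, mags, period):
    """L-K statistic: squared differences between phase-adjacent magnitudes,
    normalized by the total variance. It is smallest at the true period."""
    phases = [(t / period) % 1.0 for t in times]
    order = sorted(range(len(times)), key=lambda i: phases[i])
    m = [mags[i] for i in order]
    mbar = sum(m) / len(m)
    num = sum((m[i + 1] - m[i]) ** 2 for i in range(len(m) - 1))
    den = sum((mi - mbar) ** 2 for mi in m)
    return num / den

def best_period(times, mags, trial_periods):
    return min(trial_periods, key=lambda p: lafler_kinman_theta(times, mags, p))

# Synthetic Cepheid-like light curve: period 12 d, sampled at random
# epochs within a 44-day window (as for the real observing campaign)
random.seed(42)
times = [44.0 * random.random() for _ in range(40)]
mags = [math.sin(2 * math.pi * t / 12.0) for t in times]
trials = [5.0 + 0.05 * k for k in range(501)]  # trial periods, 5 to 30 d
print(best_period(times, mags, trials))        # recovers a period close to 12 d
```

Harmonics of the true period fold to multi-cycle or scrambled curves and so score a larger statistic, which is why the minimum singles out the fundamental period.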
The resulting V and I period-luminosity relations for the select set of 37 Cepheids (using intensity-averaged magnitudes) are shown in the upper and lower panels of Figure 5, respectively. This sample differs slightly from that adopted in Silbermann et al. (1999) only in that the three 50-day Cepheids are retained in this analysis. The derived apparent moduli are $`\mu _V=`$ 31.68$`\pm (0.05)_r`$ mag and $`\mu _I=`$ 31.55$`\pm (0.05)_r`$ mag. Correcting for a derived total line-of-sight reddening of $`E(V-I)_{N1365}=`$ 0.14 mag (derived from the Cepheids themselves) gives a true distance modulus of $`\mu _0=`$ 31.35$`\pm (0.07)_r`$ mag. This corresponds to a distance to NGC 1365 of 18.6$`\pm (0.6)_r`$ Mpc, which is within 2% of the value derived from phase-weighted magnitudes of the slightly smaller Cepheid dataset as given in Silbermann et al. The quoted error at this step in the discussion quantifies only the statistical (random) uncertainty generated by photometric errors in the ALLFRAME data combined with the intrinsic magnitude and color width of the Cepheid instability strip. Extensive reviews of the distance to the Fornax cluster (especially in the context of a differential comparison with the distance to the Virgo cluster) can be found in two recent publications (Bureau, Mould & Staveley-Smith 1996, Table 3; Schroder 1996, Table 6.1). The former authors quote a distance of 16.6 $`\pm `$ 3.4 Mpc, which in turn is consistent with the value of 16.9 $`\pm `$ 1.1 Mpc reported in the same year by McMillan, Ciardullo & Jacoby (1996).

5. THE HUBBLE CONSTANT

We now discuss the impact of a Cepheid distance to the Fornax cluster in estimating the Hubble constant. Before doing so we must make clear the limited context and focussed nature of this paper.
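For reference, the arithmetic behind the modulus and distance quoted in Section 4 is compact. The reddening coefficient R = A<sub>V</sub>/E(V-I) of about 2.45 used below is the value generally adopted in Key Project work; it is an assumption here, since this excerpt does not state it.

```python
def true_modulus(mu_v, mu_i, r_vi=2.45):
    """Correct the apparent V modulus for reddening using E(V-I) = mu_V - mu_I.
    R = A_V / E(V-I) ~ 2.45 is assumed (standard Key Project choice)."""
    e_vi = mu_v - mu_i
    return mu_v - r_vi * e_vi

def modulus_to_mpc(mu):
    """d[pc] = 10**((mu + 5)/5), converted to Mpc."""
    return 10 ** ((mu + 5.0) / 5.0) / 1.0e6

mu0 = true_modulus(31.68, 31.55)      # ~31.35 (small offsets reflect rounding of the inputs)
print(round(mu0, 2))
print(round(modulus_to_mpc(mu0), 1))  # ~18.6-18.7 Mpc
```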
We are interested in exploring the consequences of Cepheid-based distances in general, and the impact of a Cepheid distance to Fornax in particular, on the determinations of the extragalactic distance scale (and the Hubble constant) directly dependent upon the Cepheids. This is not intended to be a review of all measures of the Hubble constant. Nor do we revisit methods that do not penetrate the flow any further than the Cepheids themselves now do. At the time of writing, this latter exclusion applied to the planetary nebula luminosity function (PNLF) method and to the surface brightness fluctuation (SBF) method, neither of which (see Jacoby et al. 1992 for an extensive discussion and review) extended further than Fornax, the subject of this Cepheid paper. In the meantime, HST observations by Lauer, Tonry, Postman, Ajhar & Holtzmann (1998) and by Jensen, Tonry & Luppino (1998) have extended the SBF method out to the far field, and they determine values of $`H_o`$ = 89 and 87 $`\pm `$ 10 km/sec/Mpc. A comprehensive re-analysis of SBF and other methods will be presented in a later series of Key Project team papers. This is an interim report. Below we present and discuss several independent estimates of the local expansion rate, where the analysis is based both on the new Fornax distance and the distances to other Key Project galaxies, consistently scaled to a true distance modulus of 18.50 mag (50 kpc) for the Large Magellanic Cloud. At the end we intercompare the results for convergence and consistency. The first estimate is based solely on the Fornax cluster, its velocity and the Cepheid-based distance to one of its members. It samples the flow in one particular direction at a distance of $`\sim `$20 Mpc. We then examine the inner volume of space, leading up to and including both the Virgo and Fornax clusters.
This has the added advantage of averaging over different samples and a variety of directions, but it is still limited in volume (to an average distance of $`\sim `$10 Mpc), and it is subject to the usual caveats concerning bulk flows and the adopted Virgocentric flow model (Table 1). The third estimate comes from using the Cepheid distance to Fornax to lock into secondary distance indicators, thereby allowing us to step out to cosmologically significant velocities (10,000 km/sec and beyond) corresponding to distances greater than 100 Mpc. Averaging over the sky, and working at large redshifts, alleviates the flow problems. Examining consistency between the independent secondary distance estimates, and then averaging over their far-field estimates, should provide a more systematically secure value of $`H_o`$ and, more importantly, a measure of its external error. Comparison of the three ‘regional’ estimates (Fornax, local and far-field) can then be used to provide a check on the systematics resulting from the various assumptions made independently at each step.

6. UNCERTAINTIES IN THE FORNAX CLUSTER DISTANCE AND VELOCITY

The two panels of Figure 1 show a comparison of the Virgo and Fornax clusters of galaxies drawn to scale, as seen projected on the sky. The comparison of apparent sizes is appropriate given that the two clusters are at approximately the same distance from us. In the extensive Virgo cluster (right panel), the galaxy M100 can be seen marked $`\sim `$4° to the north-west of the elliptical-galaxy-rich core; this corresponds to an impact parameter of 1.3 Mpc, or 8% of the distance from the Local Group to the Virgo cluster. The Fornax cluster (left panel) is more centrally concentrated than Virgo, so that the back-to-front uncertainty associated with its three-dimensional spatial extent is reduced for any randomly selected member.
Roughly speaking, converting the total angular extent of the cluster on the sky ($`\sim `$3° in diameter; Ferguson & Sandage 1988) into a back-to-front extent, the distance error associated with any randomly chosen galaxy in the Fornax cluster is a few percent; this uncertainty can be reduced as distances to more spirals in Fornax are measured. Here, we note that the infall-velocity correction for the Local Group motion with respect to the Virgo cluster (and its associated uncertainty) becomes a minor issue for the Fornax cluster. This is the result of a fortuitous combination of geometry and kinematics. We now have Cepheid distances from the Local Group to both the Fornax and Virgo clusters. Combined with their angular separation on the sky, this immediately leads to the physical separation between the two clusters. Under the assumption that the Virgo cluster dominates the local velocity perturbation field at the Local Group and at Fornax, we can calculate the velocity perturbation at Fornax (assuming that the flow field amplitude scales with $`1/R_{Virgo}`$, as characterized by an $`R^{-2}`$ density distribution; Schechter 1980). From this we then derive the flow contribution to the measured line-of-sight radial velocity, as seen from the Local Group. Figure 6 shows the distance scale structure (left panel) and the velocity-field geometry (right panel) of the Local Group–Virgo–Fornax system. An infall velocity of the Local Group toward Virgo of +200 km/sec is obtained by minimizing the velocity residuals for the galaxies with Cepheid-based distances. This value is in good agreement with that estimated by Han & Mould (1990). We adopt 200$`\pm `$100 km/sec, which results in a projected Virgocentric flow correction for Fornax of –45$`\pm `$23 km/sec.

7.
$`H_o`$ AT FORNAX, AND ITS UNCERTAINTIES

Correcting to the barycentre of the Local Group ($`-`$90 km/sec) and for the $`-`$45 km/sec component of the Virgocentric flow derived above, we calculate that the cosmological expansion velocity of Fornax is 1,306 km/sec. Using our Cepheid distance of 18.6 Mpc for Fornax gives $`H_o=70(\pm 18)_r[\pm 7]_s`$ km/sec/Mpc. The first uncertainty (in parentheses) includes random errors in the distance derived from the PL fit to the Cepheid data (see Table 1), as well as random velocity errors in the adopted Virgocentric flow, combined with the distance uncertainties to Virgo propagated through the flow model. The second uncertainty (in square brackets) quantifies the currently identifiable systematic errors associated with the adopted mean velocity of Fornax, and the adopted zero point of the PL relation (combining in quadrature the LMC distance error, a measure of the metallicity uncertainty, and a conservative estimate of the stellar photometry errors). Finally, we note that according to the Han-Mould model (Han & Mould 1990), the so-called “Local Anomaly” gives the Local Group an extra velocity component of approximately $`+`$73 km/sec towards Fornax. If we were to add that correction to our local estimate, the Hubble constant would increase to $`H_o`$ = 74 km/sec/Mpc. Given the highly clumped nature of the local universe and the existence of large-scale streaming velocities, there is still a lingering uncertainty about the total peculiar motion of the Fornax cluster with respect to the cosmic microwave background restframe. Observations of flows, and the determination of the absolute motion of the Milky Way with respect to the background radiation, suggest that line-of-sight velocities of $`\sim `$300 km/sec are not uncommon (e.g. Coles & Lucchin 1995 and references therein).
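The Fornax value of H<sub>o</sub> above is just the corrected velocity divided by the Cepheid distance, and the size of the Virgocentric term can be roughly reproduced from the geometry of Section 6. The Virgo distance (16.1 Mpc) and the Virgo-Fornax angular separation (~132°) used below are assumed values, not read from Figure 6, so the sketch recovers only the approximate magnitude of the correction:

```python
import math

# --- Hubble constant from Fornax alone: H0 = v_cosmic / d ---
v_helio = 1441.0        # adopted mean heliocentric cluster velocity (km/s)
v_lg = v_helio - 90.0   # correction to the Local Group barycentre (as in the text)
v_cosmic = v_lg - 45.0  # minus the projected Virgocentric infall component
d_fornax = 18.6         # Cepheid distance (Mpc)
print(round(v_cosmic / d_fornax))  # -> 70 km/sec/Mpc

# --- Rough check of the flow term (assumed geometry) ---
d_v, d_f, sep = 16.1, 18.6, math.radians(132.0)
r_vf = math.sqrt(d_v**2 + d_f**2 - 2 * d_v * d_f * math.cos(sep))  # Virgo-Fornax separation
v_infall_lg = 200.0                    # LG infall toward Virgo (km/s)
v_infall_f = v_infall_lg * d_v / r_vf  # 1/R_Virgo scaling of the flow amplitude
# project both peculiar velocities onto the LG -> Fornax line of sight
pec_f = v_infall_f * (d_v * math.cos(sep) - d_f) / r_vf
pec_lg = v_infall_lg * math.cos(sep)
# the infall pattern inflates the observed Fornax velocity by roughly this much,
# comparable in size to the adopted -45 km/sec correction
print(round(pec_f - pec_lg))
```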
The uncertainty in the absolute motion of Fornax with respect to the Local Group then becomes the largest outstanding uncertainty at this point in our error analysis: a 300 km/sec flow velocity for Fornax would result in a systematic error in the Hubble constant of $`\sim `$20%. We can revisit this issue, however, following an analysis of more distant galaxies made later in this section.

8. THE NEARBY FLOW FIELD

We now step back somewhat and investigate the Hubble flow between us and Fornax, derived from galaxies and groups of galaxies inside 20 Mpc, each having Cepheid-based distances and expansion velocities individually corrected for a Virgocentric flow model (see Kraan-Korteweg 1986, for example). These data are presented in Figure 7. At 3 Mpc the M81-NGC 2403 Group (for which both galaxies of this pair have Cepheid distance determinations) gives $`H_o=`$ 75 km/sec/Mpc after averaging their two velocities. <sup>2</sup> If instead we define the M81-NGC 2403 group velocity by the simple average of 16 members (falling within 5 degrees of the primaries and having radial velocities in NED less than +350 km/sec), then the calculated Hubble constant for that group increases by 20% to 90 km/sec/Mpc. Other nearby galaxies with Cepheid distances are problematic: several, like NGC 3109, are borderline Local Group members whose velocities are undoubtedly dominated by M31 and the Milky Way. NGC 300 and other members of the South Polar (Sculptor) Group are appreciably strung out (1.7 to 4.4 Mpc along our line of sight according to Jerjen, Freeman & Binggeli 1998), and so averaging velocities makes no sense until Cepheid distances to the other (significant) members of the group are obtained. Similar reasons were invoked for omitting NGC 5253 until such time as other members of the extended M83 Group have Cepheid distances. Working further out to M101, the NGC 1023 Group and the Leo Group, the calculated values of $`H_o`$ range from 62 to 99 km/sec/Mpc.
An average of these independent determinations, including Virgo and Fornax, gives $`H_o=`$ 73 $`(\pm 16)_r`$ km/sec/Mpc, where flow uncertainties are added to the random error estimate for later intercomparisons of Hubble constants derived from independent methods and volumes of space. This determination, as before, uses a Virgocentric flow model with a $`1/R_{Virgo}`$ infall velocity fall-off, scaled to a Local Group infall velocity of +200 km/sec. The foregoing determination of $`H_o`$ is again predicated on the assumption that the infall flow-corrected velocities of both Fornax and Virgo are not further perturbed by other mass concentrations or large-scale flows, and that the 25,000 Mpc<sup>3</sup> volume of space delineated by them is at rest with respect to the distant galaxy frame. To avoid these local uncertainties we now step out from Fornax to the distant flow field. There we explore three applications: (i) use of the Tully-Fisher relation calibrated by published Cepheid distances locally, now including NGC 1365 and about two dozen additional galaxies in the Fornax cluster (ultimately these calibrators are tied into the distant flow field at 10,000 km/sec defined by the Tully-Fisher sample of galaxies in clusters; Aaronson et al. 1980; Han 1992); (ii) use of the distance to Fornax to tie into averages over previously published differential moduli for independently selected distant-field clusters; and (iii) recalibration of the Type Ia supernova luminosities at maximum light, applying that calibration to events as distant as 30,000 km/sec.

9. BEYOND FORNAX: THE TULLY-FISHER RELATION

Quite independent of its association with the Fornax cluster as a whole, NGC 1365 provides an important calibration point for the Tully-Fisher relation, which links the (distance-independent) peak rotation rate of a galaxy to its intrinsic luminosity. In the left panel of Figure 8 we show NGC 1365 (in addition to NGC 925 (Silbermann et al. 1996), NGC 4536 (Saha et al.
1996) and NGC 4639 (Sandage et al. 1996)) added to the ensemble of calibrators having published Cepheid distances from ground-based data (Freedman 1990), and I-band magnitudes and line widths, measured at 20% of the peak height (Pierce 1994, Pierce & Tully 1992 and references therein). As mentioned earlier, NGC 1365 provides the brightest data point in the relation; additional galaxies with recently measured Cepheid distances include NGC 3621 (Rawson 1997), NGC 3351 (Graham 1997) and NGC 2090 (Phelps 1998), and these will be included once I-band magnitudes become available. Although we have only the Fornax cluster for comparison at the present time, it is interesting to note that there is no obvious discrepancy in the Tully-Fisher relation between galaxies in the (low-density) field and galaxies in this (high-density) cluster environment. The NGC 1365 data point is consistent with the data for other Cepheid calibrators. Adding in all of the other Fornax galaxies for which there are published I-band magnitudes and inclination-corrected HI line widths provides us with another comparison of field and cluster spirals. In the right panel of Figure 8 we see that the 21 Fornax galaxies (shifted by the true modulus of NGC 1365) agree extremely well with the 9 brightest Cepheid-based calibrators. The slope of the relation is virtually unchanged by this augmentation. The scatter in the individually Cepheid-calibrated data (left panel) is $`\pm `$0.35 mag. This increases to $`\pm `$0.48 mag if the entire Fornax cluster sample is included (right panel). In the following applications we adopt $`M_I=-8.80(log(\mathrm{\Delta }V)-2.445)-20.47`$ as the best-fitting least-squares solution (derived from equal weighting of all galaxies and minimizing magnitude residuals) for the calibrating galaxies.
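The minus signs in the adopted relation appear to have been lost in transcription; the form that behaves correctly (brighter, i.e. more negative, M<sub>I</sub> for larger line width) is M<sub>I</sub> = -8.80(log ΔV - 2.445) - 20.47, and that reading is assumed in the sketch below:

```python
import math

def tully_fisher_abs_mag(delta_v):
    """I-band TF relation of the text, with the signs restored (assumed):
    M_I = -8.80 * (log10(DeltaV) - 2.445) - 20.47"""
    return -8.80 * (math.log10(delta_v) - 2.445) - 20.47

def tf_distance_mpc(m_i, delta_v):
    """Distance from an apparent I magnitude plus a corrected line width,
    via the (apparent, extinction-uncorrected) TF distance modulus."""
    mu = m_i - tully_fisher_abs_mag(delta_v)
    return 10 ** ((mu + 5.0) / 5.0) / 1.0e6

# NGC 1365's corrected line width of ~575 km/sec makes it one of the most
# luminous calibrators, consistent with the discussion in Section 2
print(round(tully_fisher_abs_mag(575.0), 2))          # ~ -23.2
print(round(tully_fisher_abs_mag(575.0) + 31.55, 2))  # implied apparent I mag, ~8.3
```

For a cluster galaxy with measured m<sub>I</sub> and line width, m<sub>I</sub> - M<sub>I</sub> gives the distance modulus entering the Hubble diagram of Figure 9.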
Han (1992) has presented I-band photometry and neutral-hydrogen line widths for the determination of Tully-Fisher distances to individual galaxies in 16 clusters out to redshifts exceeding 10,000 km/sec. We have rederived distances and uncertainties to each of these clusters using the above-calibrated expression for the Tully-Fisher relation. The results are contained in Figure 9. A linear fit to the data in Figure 9 gives a Hubble constant of $`H_o=`$ 76 km/sec/Mpc, with a total observed scatter giving a formal (random) uncertainty on the mean of only $`\pm `$2 km/sec/Mpc, increasing to $`\pm `$3 (Table 2) when flow uncertainties are added in quadrature. It is significant that neither Fornax nor Virgo deviates to any significant degree from an inward extrapolation of this far-field solution. At face value, these results provide evidence that both of these clusters have only small motions with respect to their local Hubble flow. This value compares favorably with other recent calibrations of the Tully-Fisher relation by Giovanelli et al. (1997), who obtain $`H_o=69\pm 5`$ km/sec/Mpc (one sigma), and by Tully (1998), who finds $`H_o=82\pm 16`$ km/sec/Mpc (95% confidence).

10. OTHER RELATIVE DISTANCE DETERMINATIONS

In addition to the relative distances using the Tully-Fisher relation discussed above, a set of relative distance moduli based on a number of independent secondary distance indicators, including brightest cluster galaxies, Tully-Fisher and supernovae, is also available (Jerjen & Tammann 1993). We adopt, without modification, their differential distance scale and tie it into the Cepheid distance to the Fornax cluster, which was part of their cluster sample. The results are shown in Figure 10, which extends the velocity-distance relation out to more than 160 Mpc. No error bars are given in the published compilation. For a discussion of uncertainties in this sample see Huchra (1995).
This sample yields a value of $`H_o=`$ 72 $`(\pm 3)_r`$ km/sec/Mpc (random), with a systematic error of 9% being associated with the distance (but not the velocity) of the Fornax cluster.

11. BEYOND FORNAX: TYPE IA SUPERNOVAE

The Fornax cluster elliptical galaxies NGC 1316 and NGC 1380 are host to the well-observed type Ia supernovae 1980N and 1992A, respectively. Although the distances to these galaxies are not measured directly, the new Cepheid distance to NGC 1365, and the associated estimate of the distance to the Fornax cluster, permit two additional very high-quality objects to be added to the calibration of type Ia supernovae, after allowing for the uncertainty in their distances. A preliminary discussion of these objects was given by Freedman et al. (1997). A more extensive discussion of the type Ia supernova distance scale (including a re-analysis of all of the Cepheid data for type Ia supernova-host galaxies) will be presented in Gibson et al. (1999, in preparation). The galaxies hosting type Ia supernovae for which Cepheid distances have been measured to date include: IC 4182 (1937C), NGC 5253 (1895B, 1972E), NGC 4536 (1981B), NGC 4496 (1960F), NGC 4639 (1990N) (see Sandage et al. 1996), and NGC 4414 (1974G) (Turner et al. 1998). NGC 3627, a galaxy in the Leo Triplet, was host to 1989B, and was assumed by Sandage et al. to be at the same distance as the Leo I Group (given by Cepheids). The quality of the supernova observations for this sample is quite mixed; with the exception of 1972E, the (mainly photographic) photometry for the earlier, historical supernovae is of significantly lower quality than that for the more recent supernovae. We have undertaken a preliminary recalibration of the type Ia supernovae, including SN 1980N and SN 1992A in the analysis.
We assume for this purpose that the Cepheid distance to NGC 1365 is representative of the Fornax cluster, and give these two supernovae only half-weight compared to the other objects in the sample, to reflect the additional uncertainty in the distance. For consistency, we do this also for SN 1989B. NGC 3627 was also host to SN 1973R, but only photographic observations are available for this object; its data, along with those of SN 1895B, SN 1937C, SN 1960F, and SN 1974G, are not included in the present analysis. This procedure differs markedly from that of Sandage et al. (1996), but is consistent with that of Hamuy et al. (1995, 1996). Published supernova magnitudes, errors, and decline rates, plus the Cepheid distances, are adopted from the above sources. Decline rates and supernova magnitudes for SN 1980N and SN 1992A were obtained from Hamuy et al. (1991) and Phillips (private communication). For SN 1980N, we adopt peak supernova magnitudes of B = 12.60$`\pm `$ 0.03 mag, V = 12.44$`\pm `$0.03 mag, and $`\delta m_{15}`$ = 1.28. For SN 1992A, we adopt B = 12.49$`\pm `$0.03 mag, V = 12.55$`\pm `$0.03 mag and $`\delta m_{15}`$ = 1.47. These two supernovae are amongst the fastest decliners in the Cepheid-calibrating sample. A decline-rate absolute-magnitude relation for the Cepheid calibrators has been presented by Freedman (1997); it is consistent with that observed for distant supernovae (e.g. Hamuy et al. 1995, 1996). Applying the Cepheid-calibrated sample, including the best-observed supernovae SN 1972E, SN 1981B and SN 1990N, and giving half-weight to SN 1989B, SN 1980N and SN 1992A, to the distant Type Ia supernovae of Hamuy et al. (1995) gives H<sub>o</sub> = 67 km/sec/Mpc. We have also experimented with various weighting schemes (e.g., including the photographic data while allowing for its larger uncertainty, excluding the Fornax data completely, or analyzing the B and V data alone). Resulting values of H<sub>o</sub> lie in the range of 63-67 km/sec/Mpc.
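The far-field step here is the standard one: a decline-rate-corrected peak magnitude plus a Cepheid-calibrated absolute magnitude gives a distance modulus, and cz/d then gives H<sub>o</sub>. The numbers below (M<sub>B</sub> near -19.4, and a distant, corrected event at cz = 10,000 km/sec with m<sub>B</sub> = 16.42) are illustrative placeholders chosen to be internally consistent with H<sub>o</sub> of about 67; they are not values from the paper.

```python
def h0_from_snia(m_peak, cz, m_abs):
    """H0 = cz / d, with d in Mpc from the distance modulus mu = m - M."""
    d_mpc = 10.0 ** ((m_peak - m_abs + 5.0) / 5.0) / 1.0e6
    return cz / d_mpc

# illustrative, internally consistent inputs (not measured values)
print(round(h0_from_snia(16.42, 10000.0, -19.45)))  # -> 67
```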
Half of the difference between our result and that of Sandage et al. (1996) (who obtain H<sub>o</sub> = 57 km/sec/Mpc) is due to the lower weight placed in the current analysis on the (poorer quality) photographic data. The remaining difference is due to our adoption of the Hamuy et al. (1995) decline-rate correction, and the inclusion of the Fornax supernovae. Sandage et al. adopt no decline-rate correction. The relation that we have adopted is consistent with that of Phillips (1993), Hamuy et al. (1996) and Riess, Press & Kirshner (1996). Finally, we note that eliminating the new Fornax calibrators from the analysis changes the Hubble constant by $`\sim `$1-3 km/sec/Mpc, for a variety of calibrating samples and weighting schemes. 12. COMPARING AND COMBINING THE RESULTS The results of the previous five sections are presented in Table 3. What is the summary conclusion? In the first instance we can simply state that, based on a number of different methods calibrated here, $`H_o`$ falls within the range 67$`(\pm 6)_r`$ to 76$`(\pm 3)_r`$ km/sec/Mpc, with no obvious dependence on the indicative volume of space being probed. Hence, a variety of independent distance determination methods are yielding agreement at the 10% level. With the exception of the common Cepheid PL relation zero point, these various determinations are largely independent; thus their differences are indicative of the true systematic errors affecting each of the methods and their individual underlying assumptions. No single determination stands out as either markedly anomalous or as undeniably superior. How then do we combine these individual results into a summary number with its own uncertainty? We have undertaken two types of approach: a Frequentist approach and a Bayesian one. In the end they differ only in their resulting confidence intervals. We begin by first considering the random errors.
In our application of the Frequentist approach (e.g., Wall 1997 and references therein) we simply represent each determination as a probability distribution having its mean at $`H(i)`$, a dispersion of $`\sigma (i)_{random}`$ and unit integral (i.e., equal total weight in the sum). These are shown as the connected dotted lines in the left panel of Figure 11. The solid enveloping line is the resulting sum of the five probability density distributions. The composite probability distribution is somewhat non-Gaussian, but it is still centrally peaked, with both the mode and the median coinciding at 72-73 km/sec/Mpc. An estimate of the traditional ($`\pm `$one-sigma) errors can be easily obtained from this distribution by identifying where the cumulative probabilities hit 0.16 and 0.84, respectively. This procedure gives the quoted error on the mean of the five estimates, $`\pm 5`$ km/sec/Mpc. \[At the suggestion of the referee this exercise was repeated using identical errors of 10% on each of the five estimates. The result is 71$`\pm `$4 km/sec/Mpc.\] The Bayesian estimate is equally straightforward (see Press 1997). Again taking the individual Hubble constant estimates to be represented by Gaussians, we combine them by multiplication, assuming a minimum-bias (flat) prior (Sivia 1996). We note that the Bayesian approach assumes statistical independence; strictly speaking, the samples considered here are not completely independent, given that they explicitly share a common (Cepheid) zero point, and the Jerjen & Tammann hybrid sample overlaps in part with the pure Tully-Fisher application. Nevertheless, we have done the exercise of computing the posterior probability distribution with these caveats clearly stated. Because of the strong overlap in the various estimates, the combined solution is both very strongly peaked and symmetric, giving a value of $`H_o=74(\pm 3)_r`$ km/sec/Mpc as depicted in the right panel of Figure 11.
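Both combination schemes are easy to reproduce numerically. In this sketch the five (H₀, σ) pairs are hypothetical placeholders in the quoted range, not the actual entries of Table 3; the Frequentist estimate sums unit-area Gaussians and reads off the 0.16/0.50/0.84 points of the cumulative distribution, while the flat-prior Bayesian product of Gaussians reduces to the inverse-variance weighted mean.

```python
import numpy as np

# Hypothetical stand-ins for the five (H0, sigma_random) determinations.
estimates = [(70.0, 5.0), (76.0, 3.0), (72.0, 6.0), (67.0, 6.0), (74.0, 4.0)]

grid = np.linspace(40.0, 110.0, 7001)

# Frequentist: equal-weight sum of Gaussians; quote the 16/50/84 percentiles.
pdf = sum(np.exp(-0.5 * ((grid - h) / s) ** 2) / s for h, s in estimates)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]
lo, med, hi = np.interp([0.16, 0.50, 0.84], cdf, grid)

# Bayesian, flat prior: multiplying Gaussians gives the inverse-variance
# weighted mean with the corresponding combined dispersion.
w = np.array([1.0 / s**2 for _, s in estimates])
h = np.array([h for h, _ in estimates])
h_bayes, s_bayes = (w * h).sum() / w.sum(), 1.0 / np.sqrt(w.sum())

print(f"Frequentist: {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f}) km/sec/Mpc")
print(f"Bayesian:    {h_bayes:.1f} +/- {s_bayes:.1f} km/sec/Mpc")
```

As in the text, the Bayesian interval comes out tighter than the Frequentist one, since the product of overlapping Gaussians is always sharper than their sum.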
The one-sigma error on the mean was again determined from the 0.16 and 0.84 cumulative probability distribution points. As already anticipated above, the results of the two analyses are indistinguishable, except for the higher confidence attributed to the number by the Bayesian analysis. These results are summarized graphically in Figure 11 and numerically in Table 3. Systematic errors must be dealt with separately and independently from the random error discussion. While some of the identified systematic errors affect all of the above determinations equally and in the same sense (the LMC distance modulus, for instance), others are more ‘randomly’ distributed among the methods and their contributing galaxies. For instance, the (as yet unknown) effects of flows are estimated and scaled for each of the methods here; they may be large for the Fornax cluster, but smaller, and perhaps of different sign, for the ensemble of type Ia supernovae. It seems prudent therefore to simply average the systematic errors while listing the main components individually. The main systematic errors on the finally adopted value of the Hubble constant are: (1) Large-scale flow fields. These contribute large fractional uncertainties to the nearest estimates of H<sub>o</sub>, but they progressively drop to only a few percent at large distances and/or for samples averaged over many directions (see Table 2 and further discussion below). (2) The zero point of the adopted PL relation. In our case, this is tied directly to the adopted true distance modulus of the LMC. A variety of independent estimates are reviewed by Westerlund (1996) and more recently by Walker (1999); they each conclude that the uncertainty is at the 5% level in distance. Westerlund prefers a true distance modulus of 18.45$`\pm 0.10`$mag, Walker adopts 18.55$`\pm 0.10`$mag. We have consistently used 18.50$`\pm 0.10`$mag throughout this series of papers.
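The leverage of this single zero point is just distance-modulus arithmetic, and can be sketched directly from the moduli quoted above (no assumptions beyond the standard definition of the true modulus):

```python
def modulus_to_kpc(mu):
    # True distance modulus mu = 5 log10(d / 10 pc), converted to kpc.
    return 10 ** ((mu + 5.0) / 5.0) / 1000.0

for mu in (18.45, 18.50, 18.55):   # Westerlund / adopted here / Walker
    print(f"mu = {mu:.2f} -> d(LMC) = {modulus_to_kpc(mu):.1f} kpc")

# A systematic 0.2 mag shift in the adopted modulus rescales every distance
# on the ladder (and hence H0, inversely) by the same ~10% factor:
factor = 10 ** (0.2 / 5.0)
print(f"0.2 mag <-> {100 * (factor - 1):.0f}% in distance")
```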
Therefore, if the distance to the LMC systematically changes downward/upward (by 10% or 0.2 mag, say) from that adopted here, then our entire distance scale shifts by the same amount, and the derived value of $`H_o`$ would increase/decrease by the same (10%) factor. And, finally, (3) the metallicity dependence of the Cepheid PL relation. This is a complex and much-debated topic, and the interested reader is referred to Sasselov et al. (1997), Kochanek (1997), Kennicutt et al. (1998), and earlier reviews by Freedman & Madore (1990) for an introduction to the literature. The metallicity of the galaxies for which Cepheid searches have been undertaken spans a range in \[O/H\] abundance of almost an order of magnitude, with a median value of –0.3 dex. The calibrating sample of Cepheids in the LMC has a very similar abundance of \[O/H\] = –0.4 dex. These results suggest that even if in individual cases the metallicity effect amounted to 10-20%, the overall effect on the calibration of secondary distance indicators will be less than 5%. Recently concluded observations with NICMOS on HST should further help to constrain the magnitude of this effect. The importance of bulk flow motions on the determination of H<sub>o</sub> varies significantly depending on how far a particular distance indicator can be extended (see Table 2). For local distance indicators, the uncertainties due to unknown bulk motions are the largest contributing source of systematic error in the determination of H<sub>o</sub>, amounting to an uncertainty of 20-25% in the local value of the Hubble constant. The Tully-Fisher relation extends to a velocity distance of about 10,000 km/sec, although most of the observed clusters are not this remote. At 6,000 km/sec, peculiar motions of $`\sim `$300 km/sec would individually contribute 5% perturbations; however, with many clusters distributed over the sky, peculiar motions of this magnitude will give only a few percent uncertainty.
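A rough sketch of this scaling, under the hypothetical assumption that peculiar motions average down as the square root of the number of independently distributed clusters:

```python
import math

def h0_flow_error(v_pec, cz, n_clusters=1):
    # Fractional H0 uncertainty from peculiar motions of typical size v_pec
    # (km/sec) at velocity distance cz, averaged over n_clusters directions.
    return (v_pec / cz) / math.sqrt(n_clusters)

print(f"{100 * h0_flow_error(300.0, 6000.0):.1f}%")       # one cluster: 5.0%
print(f"{100 * h0_flow_error(300.0, 6000.0, 16):.1f}%")   # 16 clusters: 1.2%
print(f"{100 * h0_flow_error(300.0, 30000.0):.1f}%")      # SNe Ia scale: 1.0%
```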
For type Ia supernovae, which extend to beyond 30,000 km/sec, the problem is even less severe. These estimates are consistent with recent studies by Shi & Turner (1998) and Zehavi et al. (1998), which place limits on the variation of H<sub>o</sub> with distance based on theoretical and empirical considerations, respectively. 14. COSMOLOGICAL IMPLICATIONS A value of the Hubble constant, in combination with an independent estimate of the average density of the Universe, can be used to estimate a dynamical age for the Universe (e.g., see Figure 12). For a value of $`H_o`$ = 72 $`(\pm 5)_r`$ km/sec/Mpc, the age ranges from a high of $`\sim `$12 Gyr for a low-density ($`\mathrm{\Omega }=0.15`$) Universe, to a low of $`\sim `$9 Gyr for a critical-density ($`\mathrm{\Omega }=1.0`$) Universe. These ages change to 15 and 7.5 Gyr, respectively, allowing for an error of $`\pm 10`$ km/sec/Mpc. The ages of Galactic globular clusters have until recently tended to fall in the range of 14$`\pm `$2 Gyr (Chaboyer, Demarque, Kernan & Krauss 1996); however, the subdwarf parallaxes obtained by the Hipparcos satellite (e.g. Reid 1997) may reduce these ages considerably. For $`\tau =`$ 14 Gyr and $`\mathrm{\Omega }=1.0`$, $`H_o`$ would have to be $`\sim `$45 km/sec/Mpc; interpreted within the context of the standard Einstein-de Sitter model, our value of $`H_o`$ = 72 km/sec/Mpc is incompatible with a high-density ($`\mathrm{\Omega }=1.0`$) model universe without a cosmological constant (at the 2-sigma level defined by the identified systematic errors). If, however, $`\tau =`$ 11 Gyr, then the globular cluster and the expansion ages would be consistent to within their mutually quoted uncertainties. Acknowledgements This research was supported by the National Aeronautics and Space Administration (NASA) and the National Science Foundation (NSF), and benefited from the use of the NASA/IPAC Extragalactic Database (NED).
Observations are based on data obtained using the Hubble Space Telescope which is operated by the Space Telescope Science Institute under contract from the Association of Universities for Research in Astronomy. LF acknowledges support by NASA through Hubble Fellowship grant HF-01081.01-96A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. We thank the referee, George Jacoby, and the Editor, Greg Bothun, for numerous detailed and insightful comments on the paper. REFERENCES Aaronson, M., Mould, J., Huchra, J., Sullivan, W. T., Schommer, R. A., & Bothun, G. D. 1980, ApJ, 239, 12 Bureau, M., Mould, J. R., & Staveley-Smith, L. 1996, ApJ, 463, 60 Chaboyer, B., Demarque, P., Kernan, P. J., & Krauss, L. M. 1996, Science, 271, 957 Coles, P., & Lucchin, F. 1995, Cosmology, Wiley, p. 399 de Vaucouleurs, G. 1975, in Stars and Stellar Systems, 9, eds. A. R. Sandage, M. Sandage, J. Kristian, Univ Chicago Press: Chicago, p. 557 Drinkwater, M. J., & Gregg, M. D. 1998, MNRAS, 296, L15 Feast, M. W., & Catchpole, R. M. 1997, MNRAS, 286, L1 Ferguson, H. C. 1989, AJ, 98, 367 Ferguson, H. C., & Sandage, A. R. 1988, AJ, 96, 1520 Ferrarese, L. et al. 1996, ApJ, 464, 568 Freedman, W. L. 1990, ApJL, 355, L35 Freedman, W. L., & Madore, B. F. 1990, ApJ, 365, 186 Freedman, W. L. 1997, in “Critical Dialogs in Cosmology”, ed. N. Turok, Princeton 250th Anniversary Conference, June 1996, (World Scientific), pp. 92-129 Freedman, W. L., Madore, B. F. & Kennicutt, R. 1997, in The Extragalactic Distance Scale, eds. M. Livio, M. Donahue & N. Panagia, Cambridge Univ. Press: Cambridge, 171 Freedman, W. L., et al. 1994a, ApJ, 435, L31 Freedman, W. L., et al. 1994b, Nature, 371, 757 Freedman, W. L., et al. 1998, in preparation Giovanelli, R., Haynes, M. P., Da Costa, L. N., Freudling, W., Salzer, J. J., & Wegner, G. 1997, ApJ, 477, L1 Graham, J. A., et al. 
1997, ApJ, 477, 535 Hamuy, M., et al. 1991, AJ, 102, 208 Hamuy, M., et al. 1995, AJ, 109, 1 Han, M. S. 1992, ApJS, 81, 35 Han M., & Mould, J. R. 1990, ApJ, 360, 448 Hill, R., et al. 1998, ApJ, 496, 648 Holtzmann, J., et al. 1995, PASP, 107, 156 Hubble, E. P. 1929, Proc. Nat. Acad. Sci., 15, 168 Huchra, J., Geller, M., Clemens, C., Tokarz, S. & Michel, A. 1992, Bull. CDS, 41, 31. Huchra, J. 1995, Heron Island Conference on Peculiar Velocities in the Universe, P. Quinn and W. Zurek, eds. (published on the World Wide Web at http://msowww.anu.edu.au/ heron). Jacoby, G., et al. 1992, PASP, 104, 599 Jensen, J.B., Tonry, J.L., & Luppino, G.A. 1998, ApJ, 505, 111 Jerjen, H., Freeman, K.C., & Binggeli, B. 1998, AJ, in press Jerjen, H., & Tammann, G. A. 1993, A&A, 276, 1 Kennicutt, R. C., Freedman, W. L., & Mould, J. R. 1995, AJ, 110, 1476 Kennicutt, R. C., et al. 1998, ApJ, 498, 181 Kochanek, C. S. 1997, ApJ, 491, 13 Kraan-Korteweg, R. 1986, A&AS, 66, 255 Lafler, J., & Kinman, T. D. 1965, ApJS, 11, 216 Lauer, T.D., Tonry, J.L., Postman, M., Ajhar, E.A., & Holtzmann, J.A. 1998, ApJ., 499, 577 Madore, B. F., Freedman, W. L. & Sakai, S. 1997, in The Extragalactic Distance Scale, eds. M. Livio, M. Donahue & N. Panagia, Cambridge Univ. Press: Cambridge, 239 Madore, B. F., et al. 1998, Nature, 395, 47 Madore, B. F., & Freedman, W. L. 1991, PASP, 103, 933 Madore, B. F., & Freedman, W. L. 1998, ApJ, 492, 110 Mathewson, D. S., Ford, V. L., & Buchhorn, M. 1992, ApJS, 81, 413 McMillan, R., Ciardullo, R., & Jacoby, G.H. 1996, ApJ, 416, 62 Mould, J. R., Sakai, S., Hughes, S., & Han, M. 1997, in The Extragalactic Distance Scale, eds. M. Livio, M. Donahue & N. Panagia, Cambridge Univ. Press: Cambridge, 158 Mould, J. R., et al. 1995, ApJ, 449, 413 Phelps, R., et al. 1998, ApJ, 500, 763 Pierce, M. 1994, ApJ, 430, 53 Pierce, M., & Tully, R.B. 1992, ApJ, 387, 47 Press, W. 1997, in Unsolved Problems in Astrophysics, eds J.P. Ostriker & J.N. Bahcall, Princeton Univ. 
Press: Princeton, p.49 Rawson, D. M., et al. 1997, ApJ, 490, 517 Reid, N. 1997, AJ, 114, 161 Riess, A. C., Press, W. H., & Kirshner, R. P. 1996, ApJ, 473, 88 Saha, A., Sandage, A. R., Labhardt, L., Tammann, G. A., Macchetto, F. D., & Panagia, N. 1996, ApJ, 466, 55 Sandage, A. R., Saha, A., Tammann, G. A., Labhardt, L., Panagia, N., & Macchetto, F. D. 1996, ApJL, 460, L15 Sasselov, D. D., et al. 1997, A&A, 324, 471 Schechter, P. 1980, AJ, 85, 801 Schechter, P., Mateo, M., & Saha, A. 1993, PASP, 105, 1342 Schroder, A. 1995, Doctoral Thesis, University of Basel Shi, X., & Turner, M. 1998, ApJ, 493, 519 Silbermann, N. A., et al. 1996, ApJ, 470, 1 Silbermann, N. A., et al. 1999, ApJ, in press Sivia, D. S. 1996, Data Analysis: A Bayesian Tutorial, Clarendon Press: Oxford Stetson, P. B. 1994, PASP, 106, 250 Tammann, G. A. & Federspiel, M. 1997, in The Extragalactic Distance Scale, eds. M. Livio, M. Donahue & N. Panagia, Cambridge Univ. Press: Cambridge, 137 Tully, R. B. 1998, in Cosmological Parameters of the Universe, IAU Symp. 183, ed. K. Sato, Reidel: Dordrecht, (in press) Turner, A., et al. 1998, ApJ, 505, 87 Wall, J. V. 1997, QJRAS, 37, 519 Westerlund, B. 1996, in The Magellanic Clouds, Cambridge Univ. Press: Cambridge Zehavi, I., Riess, A., Kirshner, R. P., & Dekel, A. 1998, ApJ, submitted (astro-ph/9802252) FIGURE CAPTIONS Fig. 1. – A comparison of the distribution of galaxies as projected on the sky for the Virgo cluster (right panel) and the Fornax cluster (left panel). M100 and NGC 1365 are each individually marked by arrows showing their relative disposition with respect to the main body and cores of their respective clusters. Units are arcmin. Fig. 2. – Fornax galaxies with published radial velocities within 6° of the cluster center and having apparent velocities less than 2,500 km/sec. All 117 galaxies used to define the mean velocity (and velocity dispersion) for the Fornax cluster are shown plotted as they appear on the sky. 
The 78 early-type galaxies are depicted by filled circles; the 39 late-type galaxies are shown as open circles. NGC 1365, near the center of the cluster, is individually marked. Fig. 3. – Velocity-position plots for 117 Fornax galaxies. The right-hand portion of the figure shows the galaxies projected in declination down onto a right ascension slice of the sky. NGC 1365 is marked, and seen to be centrally located in both position and velocity. Open circles are spiral/irregular galaxies; filled circles represent E/S0 galaxies. Fig. 4. – The velocity structure of the Fornax cluster. The upper left inset shows a simple binned histogram of the 117 velocities for galaxies in the Fornax cluster. The distribution is symmetric about 1,400 km/sec and is closely approximated by a Gaussian with a one-sigma width of $`\sim `$340 km/sec. Another representation of the velocity density distribution is given in the main portion of the left panel. Here we have represented each galaxy as an individual Gaussian of unit weight centered at its quoted velocity and widened by its published uncertainty (tall spikes represent high-precision velocities; low, broad smears represent uncertain observations). The solid curve is the sum of the individual Gaussians. To obtain a mean and sigma from this probabilistic distribution we refer to the right panel, where the cumulative probability density (CPD) distribution is plotted. Horizontal lines at CPD = 0.50, 0.16 and 0.84 cross the distribution curve at the mean velocity and at $`\pm `$one-sigma, respectively. These are to be compared to the simple average and standard deviation shown by the centrally plotted error bar. The close coincidence of the two estimates is a direct reflection of the highly Gaussian nature of the Fornax velocity distribution. At the base of each of the plots the velocity of NGC 1365 is shown, fitting well within the one-sigma velocity dispersion. Fig. 5. 
– V and I-band Period-Luminosity relations for the 37 Cepheids discovered in NGC 1365. The fits are to the fiducial relations given by Madore & Freedman (1991), shifted to the apparent distance modulus of NGC 1365. Dashed lines indicate the expected intrinsic ($`\pm `$2-sigma) width of the relationship due to the finite temperature width of the Cepheid instability strip. The solid line is a minimum $`\chi ^2`$ fit to the fiducial PL relation for LMC Cepheids, corrected for $`E(B-V)_{LMC}=`$ 0.10 mag, scaled to an LMC true distance modulus of $`\mu _o`$ = 18.50 mag, and shifted into registration with the Fornax data. \[Note: Recent results from the Hipparcos satellite bearing on the Galactic calibration of the Cepheid zero point (Feast & Catchpole 1997; Madore & Freedman 1998) indicate that the LMC calibration is confirmed at the level of uncertainty indicated in Table 1, with the possibility that a small (upward) correction to the LMC reddening may be indicated.\] Fig. 6. – Relative geometry (left panel), and the corresponding velocity vectors (right panel), for the disposition and flow of Fornax and the Local Group with respect to the Virgo cluster. The circles plotted at the positions of the Virgo and Fornax clusters have the same angular size as the circles enclosing M100 and NGC 1365 in the two panels of Figure 1. Fig. 7. – The velocity-distance relation for local galaxies having Cepheid-based distances. Circled dots mark the velocities and distances of the parent groups or clusters. The one-sided “error” bars with galaxy names attached mark the velocities associated with the individual galaxies having direct Cepheid distances. The heavy broken line represents a fit to the data giving $`H_o=73`$ km/sec/Mpc. The observed scatter is $`\pm `$12 km/sec/Mpc, and is shown by the thin diverging broken lines. Fig. 8. – Tully-Fisher relations. 
The left panel shows the absolute I-band magnitude, $`M_I`$ versus the inclination-corrected 21-cm line widths (measured at 20% of the peak) for galaxies having individually determined Cepheid distances. NGC 1365 is the brightest object in this sample; the position of this cluster spiral is consistent with an extrapolation of the relation defined by the lower luminosity field galaxy sample. The right panel shows the calibrating sample (filled circles) superimposed on the entire population of Fornax spiral galaxies for which I-band observations and line widths are available (Bureau, Mould & Staveley-Smith 1996); the latter being shifted to absolute magnitudes by the Cepheid distance to NGC 1365. No errors are tabulated for the field galaxy calibrators; error bars for the Fornax sample are as given in Table 1 in Bureau, Mould & Staveley-Smith. Fig. 9. – The velocity–distance relation for 16 clusters of galaxies out to 11,000 km/sec, having distance moduli determined from the I-band Tully-Fisher relation. A fit to the data gives a Hubble constant of $`H_o=76`$ km/sec/Mpc. The solid lines mark one-sigma bounds on the observed internal scatter. The range of distance and velocity probed directly by Cepheids, as illustrated in Figure 7, is outlined at the bottom left corner of this figure. Fig. 10. – The velocity–distance relation for 17 clusters of galaxies, having published (Jerjen & Tammann 1993) differential distance moduli scaled to the Fornax cluster. A fit to the data gives a Hubble constant of $`H_o=`$72 km/sec/Mpc. As in Figure 7, the solid lines mark one-sigma bounds on the observed internal scatter. Fig. 11. – A graphical representation of Table 3 is given in the left panel, showing the various determinations of the Hubble constant, and the adopted mean. Each value of $`H_o`$ and its statistical uncertainty is represented by a Gaussian of unit area (linked dotted line) centered on its determined value and having a dispersion equal to the quoted random error. 
Superposed immediately above each Gaussian is a horizontal bar representing the one-sigma limits of the calculated systematic errors derived for that determination. The adopted average value and its probability distribution function (continuous solid line) is the arithmetic sum of the individual Gaussians. (This Frequentist representation treats each determination as independent, and assumes no a priori reason to prefer one solution over another.) A Bayesian representation of the products of the various probability density distributions is shown in the right panel. Because of the close proximity and strong overlap in the various independent solutions, the Bayesian estimator is very similar to, while more sharply defined than, the Frequentist solution. Fig. 12. – Lines of fixed time representing the theoretical ages of the oldest globular cluster stars are shown for 12, 14 and 16 Gyr, plotted as a function of the expansion rate $`H_o`$ and density parameter $`\mathrm{\Omega }_o`$, for an Einstein-de Sitter universe with the cosmological constant $`\mathrm{\Lambda }=0`$. The thick dashed horizontal line at $`H=72(\pm 5)_r[\pm 7]_s`$ km/sec/Mpc is the average value of the Hubble constant given in Table 3. The parallel (solid) lines on either side of that solution represent the one-sigma random errors on that solution. Systematic errors on the solution for $`H_0`$ are represented by thin dashed lines at 65 and 79 km/sec/Mpc. The only region of (marginal) overlap between these two constraints is in the low density ($`\mathrm{\Omega }<`$ 0.2) regime, unless $`\mathrm{\Lambda }\ne 0`$. If the globular cluster ages are assumed to place a lower bound on the age of the Universe, the region of plausible overlap between the two solutions is more severely restricted to even lower density models.
# Irrelevance of memory in the minority game ## Abstract By means of extensive numerical simulations we show that all the distinctive features of the minority game introduced by Challet and Zhang (1997) are completely independent of the memory of the agents. The only crucial requirement is that all the individuals must possess the same information, irrespective of whether this information is true or false. Originally inspired by the El Farol problem stated by Arthur, a model system for the adaptive evolution of a population of interacting agents, the so-called minority game, has been introduced. This is a toy model where inductive, rather than deductive, thinking, in a population of bounded rationality, gives rise to cooperative phenomena. The setup of the minority game is the following: $`N`$ agents have to choose at each time step whether to go to room $`0`$ or $`1`$. Those agents who have chosen the less crowded room (minority room) win, the others lose, so that the system is intrinsically frustrated. A crucial feature of the model is the way in which agents choose. In order to decide which room to go to, agents use strategies. A strategy is a choosing device, that is, an object that processes the outcomes of the winning room in the last $`m`$ time steps (each outcome being $`0`$ or $`1`$) and, according to this information, prescribes which room to go to at the next step. The so-called memory $`m`$ defines $`2^m`$ potential past histories (for instance, with $`m=2`$ there are four possible pasts, $`11`$, $`10`$, $`01`$ and $`00`$). A strategy is thus formally a vector $`R_\mu `$, with $`\mu =1,\dots ,2^m`$, whose elements can be $`0`$ or $`1`$. The space $`\mathrm{\Gamma }`$ of the strategies is a hypercube of dimension $`D=2^m`$ and the total number of strategies is $`2^D`$. 
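The strategy bookkeeping can be made concrete in a few lines of code (a minimal sketch; the bit-ordering used to index histories is an arbitrary choice):

```python
import random

def random_strategy(m):
    # One 0/1 prescription for each of the 2**m possible m-bit histories,
    # indexed by the integer value of the history string.
    return tuple(random.randint(0, 1) for _ in range(2 ** m))

m = 2
D = 2 ** m                          # dimension of the strategy space Gamma
strategy = random_strategy(m)

history = (1, 0)                    # the last m outcomes, i.e. the past '10'
mu = history[0] * 2 + history[1]    # index of this history: 2
choice = strategy[mu]               # room prescribed for the next step

print(f"D = {D}, total number of strategies = {2 ** D}")
# D = 4, total number of strategies = 16
```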
At the beginning of the game each agent draws randomly a number $`s`$ of strategies from the space $`\mathrm{\Gamma }`$ and keeps them forever, as a genetic heritage. The problem is now to fix which one, among these $`s`$ strategies, the agent is going to use (we will consider only the non-trivial case $`s>1`$). The rule is the following. During the game the agent gives points to all his/her strategies according to their potential success: at each time step a strategy gets a point only if it has forecast the correct winning room, regardless of having been actually used or not. At a given time the agent chooses among his/her $`s`$ strategies the most successful one up to that moment (i.e. the one with the highest number of points) and uses it in order to choose the room. The adaptive nature of the game lies in the time evolution of the best strategy of each single agent. In this way the game has a well defined deterministic time evolution, which depends only on the initial distribution of strategies and on the random initial string of $`m`$ bits necessary to start the game. Among all the possible observables, a special role is played by the variance $`\sigma `$ of the attendance $`A`$ in a given room. We can consider, for instance, room $`0`$ and define $`A(t)`$ as the number of agents in this room at time $`t`$. We have, $$\sigma ^2=\lim_{t\to \infty }\frac{1}{t}\int _{t_0}^{t}dt^{\prime }\left(A(t^{\prime })-\frac{N}{2}\right)^2,$$ (1) where $`N/2`$ is the average attendance in the room and $`t_0`$ is a transient time after which the process is stationary. In all the simulations presented in this Letter we have taken $`t=t_0=10,000`$ for a maximum value of $`N=101`$, and we have verified that the averages were saturated over these times. The importance of $`\sigma `$ (called volatility in a financial context) is simple to understand: the larger is $`\sigma `$, the larger is the global waste of resources by the community of agents. 
Indeed, only with an attendance $`A`$ as near as possible to its average value is the maximum number of points distributed to the whole population. Moreover, from a financial point of view, it is clear that a low volatility $`\sigma `$ is of great importance in order to minimize the risk. If all the agents were choosing randomly, the variance would simply be $`\sigma _r^2=N/4`$. An important issue is therefore: under what conditions is the variance $`\sigma `$ smaller than $`\sigma _r`$? In other words, is it possible for a population of selfish individuals to collectively behave in a better-than-random way? What was first found is that the volatility $`\sigma `$ as a function of $`m`$ has a remarkable behaviour, since there is in fact a regime where $`\sigma `$ is smaller than the random value $`\sigma _r`$. In this phase the collective behaviour is such that fewer resources are globally wasted by the population of agents. A deep understanding of this feature is therefore important. From the very definition of the model and from the behaviour of $`\sigma (m)`$ described above, it seems clear that the memory $`m`$ is a crucial quantity, for the following two reasons. First, from a geometrical point of view, $`m`$ defines the dimension of the space of strategies $`\mathrm{\Gamma }`$ and is therefore related to the probability that strategies drawn randomly by different agents could give similar predictions: the larger is $`m`$, the bigger is $`\mathrm{\Gamma }`$ and the lower is the probability that different players have some strategies in common. Since the non-random nature of the game lies in the presence of correlated choices, that is, exactly in the possibility that different agents use the same strategies, it follows that for very large $`m`$ the game proceeds in a random way (this argument works at a fixed number of agents $`N`$; otherwise the relevant variable will be $`2^m/N`$, a point we discuss later). 
Secondly, $`m`$ is supposed to be a real memory. Actually, the whole game is constructed around the role of $`m`$ as a memory: at time $`t`$ agents use strategies which process the last $`m`$ events in the past. As a consequence of this, a new minority room will come out, and at time $`t+1`$ there will be a new $`m`$-bit past which will differ from the old one for the outcome at time $`t`$. Thus, agents, or better, strategies, choose by remembering the last $`m`$ steps of the time history, so that $`m`$ is a natural time scale of the system. Due to this, an explanation of the behaviour of $`\sigma (m)`$ has been proposed in which the decay rate of the time correlations in the system is compared and related to $`m`$, thus supporting the key interpretation of $`m`$ as a real memory. This memory role of $`m`$ greatly complicates the nature of the problem, since it induces an explicit dynamical feedback in the evolution of the system, such that the process is not local in time. The purpose of this Letter is to show that the memory of the agents is irrelevant. We shall prove that there is no need for an explicit time feedback to obtain all the distinctive features of the model. In order to prove this statement we consider the same model described above, but with the following important difference: at each time step, the past history is just invented, that is, a random sequence of $`m`$ bits is drawn to play the role of a fake time history. This is the information that all the agents process with their best strategies to choose the room. As we are going to show, this oblivious version of the model gives exactly the same results as the original one, thus proving that the role of $`m`$ is purely geometrical. In Fig.1, the variance $`\sigma `$ as a function of $`m`$ is plotted both for the case with and without memory. The two models give the same results, not only qualitatively but also quantitatively. 
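A minimal simulation of the oblivious version is straightforward. The sketch below is a simplified re-implementation under stated assumptions (deterministic first-index tie-breaking among equally scored strategies, and odd $`N`$ so that the minority room is always defined), not the authors' code:

```python
import random

def memoryless_minority_game(N=101, m=5, s=2, T=5000, t0=1000, seed=0):
    # Memoryless version: the 'history' index mu is freshly invented at each
    # step, but shared by all agents.
    rng = random.Random(seed)
    D = 2 ** m
    strategies = [[[rng.randint(0, 1) for _ in range(D)] for _ in range(s)]
                  for _ in range(N)]
    scores = [[0] * s for _ in range(N)]
    attendance = []
    for t in range(T):
        mu = rng.randrange(D)                # common fake past history
        choices = []
        for i in range(N):
            best = max(range(s), key=lambda k: scores[i][k])
            choices.append(strategies[i][best][mu])
        A = choices.count(0)                 # attendance in room 0
        winner = 0 if A < N / 2 else 1       # the minority room
        for i in range(N):                   # virtual points for all strategies
            for k in range(s):
                if strategies[i][k][mu] == winner:
                    scores[i][k] += 1
        if t >= t0:
            attendance.append(A)
    return sum((A - N / 2) ** 2 for A in attendance) / len(attendance)

sigma2 = memoryless_minority_game()
print(sigma2, 101 / 4)   # game variance vs. the random benchmark N/4
```

Sweeping m at fixed N in this sketch should reproduce, qualitatively, the shape of the σ(m) curve in Fig.1.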
In particular, the minimum of $`\sigma `$ as a function of $`m`$ is found even without memory, and cannot therefore be related to it. The dependence of the whole function $`\sigma (m)`$ on the individual number of strategies $`s`$ is another important point. It has first been shown that the minimum of this curve is shallower the larger is the value of $`s`$. In Fig.2 we show that this same phenomenon occurs for the model without memory. From a technical point of view, note that, once the role of $`m`$ as a memory is eliminated, the only quantity involved in the actual implementation of the model is $`D`$, the dimension of the space of strategies $`\mathrm{\Gamma }`$. Therefore, instead of drawing a random sequence of $`m`$ bits, it is much easier to draw a random component $`\mu \in [1,D]`$ to mimic the past history: each agent uses component $`\mu `$ of his/her best strategy to choose the room. The main consequence of this is that there is no need for $`D=2^m`$, since we can choose any integer value of $`D`$. A method has been introduced by which it is possible to consider non-integer values of $`m`$ in the model with memory. This is useful, since it permits one to study the shape of $`\sigma (m)`$ around its minimum with a better resolution in $`m`$. In the present context, it is trivial to consider non-integer values of $`m`$, since we simply have $`m=\mathrm{log}_2D`$. In this way identical results are obtained. Once $`s`$ is fixed, let $`m_c`$ be the value of $`m`$ where the minimum of $`\sigma (m)`$ occurs. It has been pointed out that for $`m<m_c`$ the variance $`\sigma `$ grows as $`N`$, where $`N`$ is the number of agents, while for $`m>m_c`$ it grows as $`N^{1/2}`$. In Fig.3, $`\sigma `$ as a function of $`N`$ is plotted for the model without memory. The same behaviour as in the model with memory is found. 
An interesting question is whether $`\sigma `$ is a function of a single scaling variable $`z`$ constructed from $`m`$, $`N`$ and $`s`$. It has been shown in that, by taking $`z=2^m/N=D/N`$ as the scaling variable, all the data for $`\sigma `$ at various $`m`$ and $`N`$ collapse on the same curve. In this case the relevant parameter is thus the dimension $`D`$ of $`\mathrm{\Gamma }`$ over the number $`N`$ of playing strategies. On the other hand, a different scaling variable has been proposed in , namely $`z^{\prime }=2\cdot 2^m/sN=2D/sN`$. In this way, the relevant parameter would be the density on $`\mathrm{\Gamma }`$ of the total number of strategies $`sN`$. In Fig.4 we plot $`\sigma ^2/N`$ as a function of $`z^{\prime }`$, at different values of $`D`$, $`N`$ and $`s`$, for the model without memory. We see that the correct scaling parameter is $`z`$ and not $`z^{\prime }`$, since the data with different values of $`s`$ collapse on different curves. The same result is obtained if we perform the simulation with the memory (see ). The two models give once again the same results. Note from Fig.4 that the scaling is not perfect at very low values of $`z^{\prime }`$, that is, for very small $`D`$. This is just a trace of the integer nature of the model. From what has been shown above it is reasonable to conclude that, in order to obtain all the crucial features of the minority game, the presence of an individual memory of the agents is irrelevant. The parameter $`m`$ still plays a major role, but only through its relation to the dimension $`D=2^m`$ of the strategies space $`\mathrm{\Gamma }`$. A consequence of this fact is that any attempt to explain the properties of this model by relying on the role of $`m`$ as a memory can hardly be correct. On the other hand, as already said, the geometrical role of $`m`$ remains.
Indeed, some recent attempts to give an analytic description of the model (see ) are grounded only in geometrical considerations about the distribution of strategies in the space $`\mathrm{\Gamma }`$ and therefore go, in our opinion, in the correct direction. The most important result of the present Letter is the existence of a regime where the whole population of agents still behaves in a better-than-random way, even if the information they process is completely random, that is, wrong when compared to the real time history. The crucial point is that everyone must possess the same information. Indeed, if we invent a different past history for each different agent, no coordination emerges at all and the results are the same as if the agents were behaving randomly (this can be easily verified numerically). In other words, if each individual processes a different piece of information, the features of the system are completely identical to the random case, irrespective of the values of $`m`$ and $`s`$. The conclusion is the following: the crucial property is not at all the agents’ memory of the real time history, but rather the fact that they all share the same information, whether this is false or true. As a consequence, there is no room in this model for any kind of forecasting of the future based on the “understanding” of the past. We hope this result will be useful for a future, deeper understanding of this kind of adaptive system. Indeed, before trying to explain the rich structure of a quite complicated model, it is important, in our opinion, to clear up what the truly necessary ingredients of such a model are and what, on the contrary, is just an irrelevant complication that can be dropped.
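The numerical check mentioned in parentheses can be sketched as follows: feeding every agent its own independent fake history index should destroy the coordination and push $`\sigma ^2/N`$ toward the coin-flipping value of about 1. All parameter values below are illustrative assumptions.

```python
import numpy as np

def sigma2_over_N_private_histories(N=101, m=5, s=2, T=10_000, seed=0):
    """Variant in which every agent is fed its *own* independent fake
    history index at each step; coordination should then disappear and
    sigma^2/N should sit near the random value 1."""
    rng = np.random.default_rng(seed)
    D = 2 ** m
    strat = rng.choice([-1, 1], size=(N, s, D))
    score = np.zeros((N, s))
    A = np.empty(T)
    agents = np.arange(N)
    for t in range(T):
        mu = rng.integers(D, size=N)          # one private index per agent
        best = score.argmax(axis=1)
        a = strat[agents, best, mu]
        A[t] = a.sum()
        score -= strat[agents, :, mu] * np.sign(A[t])
    return A.var() / N

print(sigma2_over_N_private_histories())      # stays close to 1
```

In contrast with the shared-history version, changing $`m`$ or $`s`$ here leaves the result pinned near the random-agents value, in line with the claim of the text.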
In the case of the so-called memory (or brain size, or intelligence) $`m`$, there has also been a problem of terminology: given the original formulation of the model, it seemed that the very nature of a variable encoding the memory or the intelligence of the agents could by itself warrant its relevance , a relevance which, as we have seen, was not deserved. Notwithstanding this, we consider the present model still to be very interesting and far from trivial. Finally, let us note that the passage from a model with memory to a model without memory is equivalent to replacing a deterministic, but very complicated, system with a stochastic, but much simpler, one, which nevertheless gives the same results as the original case and is therefore indistinguishable from it for all practical purposes. The use of a stochastic/disordered model to mimic a deterministic/ordered one is similar in spirit to what happens in the context of glassy systems, where disordered models of spin glasses are often used in order to reach a better understanding of structural glasses, which in principle contain no quenched disorder . ###### Acknowledgements. I wish to thank Erik Aurell, Francesco Bucci, Juan P. Garrahan, John Hertz and David Sherrington for useful discussions and in particular Irene Giardina for many suggestions and for reading the manuscript. I also thank NORDITA (Copenhagen), where part of this work was done, for their kind hospitality. This work is supported by EPSRC Grant GR/K97783.
# Two-fluid dynamics for a Bose-Einstein condensate out of local equilibrium with the non-condensate \[ ## Abstract We extend our recent work on the two-fluid hydrodynamics of a Bose-condensed gas by including collisions involving both condensate and non-condensate atoms. These collisions are essential for establishing a state of local thermodynamic equilibrium between the condensate and non-condensate. Our theory is more general than the usual Landau two-fluid theory, to which it reduces in the appropriate limit, in that it allows one to describe situations in which a state of complete local equilibrium between the two components has not been reached. The exchange of atoms between the condensate and non-condensate is associated with a new relaxational mode of the gas. \] Recently the authors have given a microscopic derivation of the coupled two-fluid hydrodynamic equations for a trapped Bose-condensed gas . In the present Letter, we report the results of a major extension of the ZGN work which takes into account the effects of collisions between the condensate and non-condensate atoms. These new equations of motion (which we shall call ZGN) allow us to discuss the dynamics when the non-condensate atoms are in local thermal equilibrium with each other (due to collisions between the excited atoms) but are not yet in complete equilibrium with the Bose condensate order parameter. This results in the appearance of a new relaxational collective mode related to the transfer of atoms between the condensate and non-condensate. In order to bring out the new physics in a clear fashion, these equations are solved for a uniform gas. A more complete derivation and discussion is given in Ref.. 
The non-condensate atoms are described by the distribution function $`f(𝐫,𝐩,t)`$, which obeys the kinetic equation (we take $`\mathrm{}=1`$ throughout) $`{\displaystyle \frac{f}{t}}+{\displaystyle \frac{𝐩}{m}}\mathbf{}f\mathbf{}U\mathbf{}_𝐩f=C_{12}[f]+C_{22}[f],`$ (1) where the effective potential $`U(𝐫,t)U_{\mathrm{ext}}(𝐫)+2g[n_c(𝐫,t)+\stackrel{~}{n}(𝐫,t)]`$ involves the self-consistent Hartree-Fock (HF) mean field. As usual, we treat the interactions in the $`s`$-wave approximation with $`g=4\pi a/m`$. Here $`n_c(𝐫,t)`$ is the condensate density and $`\stackrel{~}{n}(𝐫,t)`$ is the non-condensate density given by $$\stackrel{~}{n}(𝐫,t)=\frac{d𝐩}{(2\pi )^3}f(𝐫,𝐩,t).$$ (2) A kinetic equation essentially equivalent to (1) in the case of a uniform system was given in Refs. . The two collision terms in (1) are given by $`C_{22}[f]4\pi g^2{\displaystyle \frac{d𝐩_2}{(2\pi )^3}\frac{d𝐩_3}{(2\pi )^3}𝑑𝐩_4}`$ (3) $`\times \delta (𝐩+𝐩_2𝐩_3𝐩_4)\delta (\stackrel{~}{\epsilon }_p+\stackrel{~}{\epsilon }_{p_2}\stackrel{~}{\epsilon }_{p_3}\stackrel{~}{\epsilon }_{p_4})`$ (4) $`\times \left[(1+f)(1+f_2)f_3f_4ff_2(1+f_3)(1+f_4)\right],`$ (5) $`C_{12}[f]4\pi g^2n_c{\displaystyle \frac{d𝐩_1}{(2\pi )^3}𝑑𝐩_2𝑑𝐩_3}`$ (6) $`\times \delta (m𝐯_c+𝐩_1𝐩_2𝐩_3)\delta (\epsilon _c+\stackrel{~}{\epsilon }_{p_1}\stackrel{~}{\epsilon }_{p_2}\stackrel{~}{\epsilon }_{p_3})`$ (7) $`\times [\delta (𝐩𝐩_1)\delta (𝐩𝐩_2)\delta (𝐩𝐩_3)]`$ (8) $`\times [(1+f_1)f_2f_3f_1(1+f_2)(1+f_3)],`$ (9) with $`ff(𝐫,𝐩,t),f_if(𝐫,𝐩_i,t)`$. Eq. (9) takes into account the fact that a condensate atom locally has energy $`\epsilon _c(𝐫,t)=\mu _c(𝐫,t)+\frac{1}{2}mv_c^2(𝐫,t)`$ and momentum $`m𝐯_c`$, where the condensate chemical potential $`\mu _c`$ and velocity $`𝐯_c`$ will be defined in the next paragraph. On the other hand, a non-condensate atom locally has the HF energy $`\stackrel{~}{\epsilon }_p(𝐫,t)=\frac{p^2}{2m}+U(𝐫,t)`$. This particle-like dispersion relation means that our analysis is limited to finite temperatures. 
It follows from this excitation spectrum that in the Landau limit (see later) of our microscopic model, the condensate density is equal to the superfluid density and the non-condensate density is the normal fluid density. To complete our microscopic model, we need to have an equation of motion for the complex condensate order parameter $`\mathrm{\Phi }(𝐫,t)\sqrt{n_c(𝐫,t)}e^{i\theta (𝐫,t)}`$. This equation can be rewritten in terms of $`n_c(𝐫,t)`$ and the condensate velocity $`𝐯_c=\mathbf{}\theta (𝐫,t)/m`$ as $`{\displaystyle \frac{n_c}{t}}+\mathbf{}(n_c𝐯_c)`$ $`=`$ $`\mathrm{\Gamma }_{12}[f],`$ (11) $`m\left({\displaystyle \frac{}{t}}+𝐯_c\mathbf{}\right)𝐯_c`$ $`=`$ $`\mathbf{}\mu _c,`$ (12) where the condensate chemical potential (in the Thomas-Fermi approximation) is given by $$\mu _c(𝐫,t)=U_{\mathrm{ext}}(𝐫)+g[n_c(𝐫,t)+2\stackrel{~}{n}(𝐫,t)].$$ (13) The “source” term $`\mathrm{\Gamma }_{12}[f]`$ in (11) is defined in terms of the $`C_{12}`$ collision term in (9) as $$\mathrm{\Gamma }_{12}[f]\frac{d𝐩}{(2\pi )^3}C_{12}[f(𝐫,𝐩,t)].$$ (14) We observe that $`C_{12}`$ collisions do not conserve the number of atoms in the condensate. The detailed derivation of Eqs. (1-14) is based on a field-theoretic formulation of an interacting Bose fluid with a Bose broken symmetry. Closely related work can be found in Refs. and . The quantum field operators $`\widehat{\psi }(𝐫)`$ are split into a condensate ($`\mathrm{\Phi }\widehat{\psi }`$) and non-condensate ($`\stackrel{~}{\psi }`$) part. The key approximation made in obtaining (1) and (Two-fluid dynamics for a Bose-Einstein condensate out of local equilibrium with the non-condensate) is the neglect of the anomalous pair correlations such as $`\stackrel{~}{\psi }(𝐫)\stackrel{~}{\psi }(𝐫)`$, which we shall refer to as the Popov approximation. In , the $`C_{12}`$ collisions (involving one condensate atom) were not included and thus the source term $`\mathrm{\Gamma }_{12}`$ in (11) was not present. 
In the extended set of ZGN hydrodynamic equations which follow from (1) and (Two-fluid dynamics for a Bose-Einstein condensate out of local equilibrium with the non-condensate), $`\mathrm{\Gamma }_{12}`$ plays a crucial role. Above the Bose-Einstein transition temperature ($`T_{\mathrm{BEC}}`$), where the Bose condensate order parameter vanishes, the kinetic equation (1) only involves the Uehling-Uhlenbeck collision term $`C_{22}`$ given by (5). This kinetic equation has been discussed extensively in recent years . In this Letter, we restrict ourselves to the region in which $`C_{22}`$ collisions are sufficiently rapid to justify the assumption that the excited-atom distribution function is approximately described by the local equilibrium Bose distribution $$\stackrel{~}{f}(𝐫,𝐩,t)=\frac{1}{e^{\beta [\frac{1}{2m}(𝐩-m𝐯_n)^2+U-\stackrel{~}{\mu }]}-1}.$$ (15) Here, the temperature parameter $`\beta `$, normal fluid velocity $`𝐯_n`$, chemical potential $`\stackrel{~}{\mu }`$, and mean field $`U`$ are all functions of $`𝐫`$ and $`t`$. It is important to appreciate that the local non-condensate chemical potential $`\stackrel{~}{\mu }`$ which appears in (15) is distinct from the local condensate chemical potential $`\mu _c`$, as defined in (13). One may immediately verify that $`\stackrel{~}{f}`$ satisfies $`C_{22}[\stackrel{~}{f}]=0`$, independent of the value of $`\stackrel{~}{\mu }`$. In contrast, one finds from (9) that, in general, $`C_{12}[\stackrel{~}{f}]\ne 0`$. This means that even if the excited atoms are in dynamic local equilibrium described by (15), the source term $`\mathrm{\Gamma }_{12}[f]`$ in (Two-fluid dynamics for a Bose-Einstein condensate out of local equilibrium with the non-condensate) will, in general, be finite. More specifically, we have (see also Ref. 
) $$\mathrm{\Gamma }_{12}[\stackrel{~}{f}]=\left\{1e^{\beta [\stackrel{~}{\mu }\frac{1}{2}m(𝐯_n𝐯_c)^2\mu _c]}\right\}\frac{n_c}{\tau _{12}},$$ (16) where we have introduced a collision time associated with the $`C_{12}`$ term in (9), $`{\displaystyle \frac{1}{\tau _{12}}}4\pi g^2{\displaystyle \frac{d𝐩_1}{(2\pi )^3}\frac{d𝐩_2}{(2\pi )^3}𝑑𝐩_3(1+\stackrel{~}{f}_1)\stackrel{~}{f}_2\stackrel{~}{f}_3}`$ (17) $`\times \delta (m𝐯_c+𝐩_1𝐩_2𝐩_3)\delta (\epsilon _c+\stackrel{~}{\epsilon }_{p_1}\stackrel{~}{\epsilon }_{p_2}\stackrel{~}{\epsilon }_{p_3}).`$ (18) We note that $`\mathrm{\Gamma }_{12}[\stackrel{~}{f}]`$ in (16) vanishes when $`\stackrel{~}{\mu }=\mu _c+\frac{1}{2}m(𝐯_n𝐯_c)^2`$. However, as we shall see, a state of complete local equilibrium cannot be treated simply by setting $`\mathrm{\Gamma }_{12}=0`$, which was the implicit assumption made in deriving the Landau two-fluid equations in earlier work . With the assumption $`f\stackrel{~}{f}`$, one can derive hydrodynamic equations for the non-condensate by taking moments of (1) in the usual way. These are the analogue of the condensate equations in (Two-fluid dynamics for a Bose-Einstein condensate out of local equilibrium with the non-condensate). 
Linearizing around the static thermal equilibrium solutions, these hydrodynamic equations are given by $`{\displaystyle \frac{\delta \stackrel{~}{n}}{t}}`$ $`=`$ $`\mathbf{}(\stackrel{~}{n}_0\delta 𝐯_n)+\delta \mathrm{\Gamma }_{12},`$ (20) $`m\stackrel{~}{n}_0{\displaystyle \frac{\delta 𝐯_n}{t}}`$ $`=`$ $`\mathbf{}\delta \stackrel{~}{P}\delta \stackrel{~}{n}\mathbf{}U_0`$ (22) $`2g\stackrel{~}{n}_0\mathbf{}(\delta \stackrel{~}{n}+\delta n_c),`$ $`{\displaystyle \frac{\delta \stackrel{~}{P}}{t}}`$ $`=`$ $`{\displaystyle \frac{5}{3}}\stackrel{~}{P}_0\mathbf{}\delta 𝐯_n\delta 𝐯_n\mathbf{}\stackrel{~}{P}_0`$ (24) $`{\displaystyle \frac{2}{3}}gn_{c0}\delta \mathrm{\Gamma }_{12}.`$ One also has $$\stackrel{~}{n}(𝐫,t)=\frac{d𝐩}{(2\pi )^3}\stackrel{~}{f}(𝐫,𝐩,t)=\frac{1}{\mathrm{\Lambda }^3}g_{3/2}(z),$$ (25) $`\stackrel{~}{P}(𝐫,t)`$ $`=`$ $`{\displaystyle \frac{d𝐩}{(2\pi )^3}\frac{p^2}{3m}\stackrel{~}{f}(𝐫,𝐩,t)}|_{𝐯_n=0}`$ (26) $`=`$ $`{\displaystyle \frac{1}{\beta \mathrm{\Lambda }^3}}g_{5/2}(z),`$ (27) where $`ze^{\beta (\stackrel{~}{\mu }U)}`$ is the local fugacity, $`\mathrm{\Lambda }\sqrt{2\pi /mk_\mathrm{B}T}`$ is the local thermal de Broglie wavelength and $`g_n(z)_{l=1}^{\mathrm{}}z^l/l^n`$. In static equilibrium (denoted by $`0`$), we of course have $`𝐯_{n0}=𝐯_{c0}=0`$ and $`\mu _{c0}=\stackrel{~}{\mu }_0`$. Thus it follows that $`\mathrm{\Gamma }_{12}[\stackrel{~}{f}^0]=0`$. 
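The Bose functions $`g_n(z)`$ entering the equilibrium density and pressure above are straightforward to evaluate from their series definition. A minimal numerical sketch (the truncation length is an arbitrary assumption, chosen so that the slowly convergent $`z=1`$ tail is small):

```python
def bose_g(n, z, terms=200_000):
    """Truncated Bose function g_n(z) = sum_{l=1}^infty z^l / l^n,
    entering the non-condensate density (25) and pressure (27)."""
    return sum(z**l / l**n for l in range(1, terms + 1))

# At z = 1 these reduce to Riemann zeta values; g_{3/2}(1) = 2.612... is
# the familiar ideal-gas BEC condition  n * Lambda^3 = 2.612.
print(bose_g(1.5, 1.0))   # ~ 2.61 (the z = 1 tail converges slowly)
print(bose_g(2.5, 1.0))   # zeta(5/2) ~ 1.341
```

For $`z<1`$ the series converges geometrically, so far fewer terms suffice; only the $`z\to 1`$ limit relevant near the transition needs a long truncation.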
The analogous linearized condensate equations of motion are $`{\displaystyle \frac{\delta n_c}{t}}`$ $`=`$ $`\mathbf{}(n_{c0}\delta 𝐯_c)\delta \mathrm{\Gamma }_{12},`$ (29) $`m{\displaystyle \frac{\delta 𝐯_c}{t}}`$ $`=`$ $`g\mathbf{}(\delta n_c+2\delta \stackrel{~}{n}).`$ (30) Finally, the source term $`\delta \mathrm{\Gamma }_{12}`$ in these equations can be expressed in terms of the fluctuation in the chemical potential difference $`\mu _{\mathrm{diff}}\stackrel{~}{\mu }\mu _c`$, $$\delta \mathrm{\Gamma }_{12}=\frac{\beta _0n_{c0}}{\tau _{12}^0}\delta \mu _{\mathrm{diff}},$$ (31) where $`\tau _{12}^0`$ is the equilibrium collision time obtained from (18) with $`𝐯_c=0`$, $`\epsilon _c=\mu _{c0}`$ and $`\stackrel{~}{f}`$ equal to the absolute equilibrium Bose distribution. We note that adding (20) and (29) gives the usual continuity equation for the total density. We now turn to a discussion of our linearized hydrodynamic equations given by (Two-fluid dynamics for a Bose-Einstein condensate out of local equilibrium with the non-condensate)-(31) for a uniform Bose-condensed gas ($`U_{\mathrm{ext}}(𝐫)=0`$). This means $`\stackrel{~}{n}_0`$, $`n_{c0}`$ and $`\stackrel{~}{P}_0`$ are independent of position. 
By straightforward calculations , one can then reduce our two-fluid equations to three coupled equations of motion for the three variables $`\delta 𝐯_c`$, $`\delta 𝐯_n`$ and $`\delta \mu _{\mathrm{diff}}`$: $`m{\displaystyle \frac{^2\delta 𝐯_c}{t^2}}=gn_{c0}\mathbf{}(\mathbf{}\delta 𝐯_c)+2g\stackrel{~}{n}_0\mathbf{}(\mathbf{}\delta 𝐯_n)`$ (33) $`+{\displaystyle \frac{\beta _0gn_{c0}}{\tau _{12}^0}}\mathbf{}\delta \mu _{\mathrm{diff}},`$ (34) $`m{\displaystyle \frac{^2\delta 𝐯_n}{t^2}}=\left({\displaystyle \frac{5\stackrel{~}{P}_0}{3\stackrel{~}{n}_0}}+2g\stackrel{~}{n}_0\right)\mathbf{}(\mathbf{}\delta 𝐯_n)`$ (35) $`+2gn_{c0}\mathbf{}(\mathbf{}\delta 𝐯_c){\displaystyle \frac{2n_{c0}}{3\stackrel{~}{n}_0}}{\displaystyle \frac{\beta _0gn_{c0}}{\tau _{12}^0}}\mathbf{}\delta \mu _{\mathrm{diff}},`$ (36) $`{\displaystyle \frac{\delta \mu _{\mathrm{diff}}}{t}}=gn_{c0}\left({\displaystyle \frac{2}{3}}\mathbf{}\delta 𝐯_n\mathbf{}\delta 𝐯_s\right){\displaystyle \frac{\delta \mu _{\mathrm{diff}}}{\tau _\mu }}.`$ (37) Here the relaxation time for the chemical potential difference ($`\mu _{\mathrm{diff}}`$) due to $`C_{12}`$ collisions between the condensate and non-condensate atoms is given by the expression $$\frac{1}{\tau _\mu }\frac{\beta _0gn_{c0}}{\tau _{12}^0}\left(\frac{\frac{5}{2}\stackrel{~}{P}_0+2g\stackrel{~}{n}_0n_{c0}+\frac{2}{3}\stackrel{~}{\gamma }_0gn_{c0}^2}{\frac{5}{2}\stackrel{~}{\gamma }_0\stackrel{~}{P}_0\frac{3}{2}g\stackrel{~}{n}_0^2}1\right),$$ (38) where we have introduced the dimensionless function $`\stackrel{~}{\gamma }_0(g\beta _0/\mathrm{\Lambda }_0^3)g_{1/2}(z_0)`$. If we simply omit the terms involving $`\delta \mu _{\mathrm{diff}}`$ in (34) and (36), we are left with the two coupled ZGN equations for $`\delta 𝐯_n`$ and $`\delta 𝐯_c`$ given in Ref.. 
We see that our new generalized ZGN equations give rise to a coupling between $`\delta 𝐯_n`$ and $`\delta 𝐯_c`$ and the local variable $`\delta \mu _{\mathrm{diff}}`$, which describes the relative fluctuation in the chemical potentials of the two components. This is the most important new result in the present Letter. It is convenient to solve the three coupled equations in (Two-fluid dynamics for a Bose-Einstein condensate out of local equilibrium with the non-condensate) by introducing velocity potentials $`\delta 𝐯_c\mathbf{}\varphi _c`$ and $`\delta 𝐯_n\mathbf{}\varphi _n`$, and looking for plane wave solutions. We obtain from (37) $$(1i\omega \tau _\mu )\delta \mu _{\mathrm{diff}}=gn_{c0}\tau _\mu \left(\varphi _c\frac{2}{3}\varphi _n\right)k^2.$$ (39) Inserting this expression into (34) and (36) gives two coupled equations for $`\varphi _n`$ and $`\varphi _c`$, $`m\omega ^2\varphi _c=gn_{c0}\left[1{\displaystyle \frac{\beta _0gn_{c0}\tau _\mu }{\tau _{12}^0(1i\omega \tau _\mu )}}\right]k^2\varphi _c`$ (41) $`+2g\stackrel{~}{n}_0\left[1+{\displaystyle \frac{\beta _0gn_{c0}\tau _\mu }{3\tau _{12}^0(1i\omega \tau _\mu )}}{\displaystyle \frac{n_{c0}}{\stackrel{~}{n}_0}}\right]k^2\varphi _n,`$ (42) $`m\omega ^2\varphi _n=2gn_{c0}\left[1+{\displaystyle \frac{\beta _0gn_{c0}\tau _\mu }{3\tau _{12}^0(1i\omega \tau _\mu )}}{\displaystyle \frac{n_{c0}}{\stackrel{~}{n}_0}}\right]k^2\varphi _c`$ (43) $`+\left[{\displaystyle \frac{5\stackrel{~}{P}_0}{3\stackrel{~}{n}_0}}+2g\stackrel{~}{n}_0{\displaystyle \frac{4\beta _0(gn_{c0})^2\tau _\mu }{9\tau _{12}^0(1i\omega \tau _\mu )}}{\displaystyle \frac{n_{c0}}{\stackrel{~}{n}_0}}\right]k^2\varphi _n.`$ (44) Taking the limit $`\omega \tau _\mu 1`$, one finds that $`\delta \mu _{\mathrm{diff}}`$ is decoupled from the velocity potentials $`\varphi _{c,n}`$ and we recover the ZGN results . 
As expected, in the extreme limit $`\omega \tau _\mu \gg 1`$, the effect of $`C_{12}`$ collisions is negligible and one can simply omit $`\mathrm{\Gamma }_{12}`$. In the opposite limit $`\omega \tau _\mu \to 0`$, the equations in (Two-fluid dynamics for a Bose-Einstein condensate out of local equilibrium with the non-condensate) yield two phonon-like solutions, $`\omega _{1,2}=u_{1,2}k`$, where the velocities are given by the roots of the equation $`u^4-Au^2+B=0`$. It can be shown that the coefficients $`A`$ and $`B`$ are in exact agreement with the analogous coefficients obtained from the usual Landau two-fluid equations . The latter theory uses quite different thermodynamic variables from those used in the present formulation, and the explicit proof of this equivalence requires a lengthy (but straightforward) calculation. It also turns out that the first and second sound velocities ($`u_1`$ and $`u_2`$) given by these results (valid for $`\omega \tau _\mu \ll 1`$) are numerically very close to the velocities given by the ZGN approximation (valid for $`\omega \tau _\mu \gg 1`$). The small differences involve terms of order $`g^2`$ and thus were not picked up in the comparison given in Ref. . The interesting feature of the linearized ZGN equations in (39) and (Two-fluid dynamics for a Bose-Einstein condensate out of local equilibrium with the non-condensate) is the existence of a new mode, associated with the condensate and non-condensate being out of equilibrium ($`\delta \mu _c\ne \delta \stackrel{~}{\mu }`$). To a good approximation (and exact in the $`k\to 0`$ limit), it corresponds to a mode in which $`\delta 𝐯_n=\delta 𝐯_c=0`$, with a frequency given by $`\omega =-i/\tau _\mu `$. In the ZGN limit ($`C_{12}=0`$), this reduces to a zero frequency mode. In the Landau limit ($`C_{12}`$ large), it is a heavily damped relaxational mode. 
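In the $`\omega \tau _\mu \gg 1`$ (ZGN) limit, the relaxational corrections drop out of the coupled equations for $`\varphi _c`$ and $`\varphi _n`$, which then reduce to a $`2\times 2`$ eigenvalue problem for the two phonon velocities. The sketch below solves it numerically; the parameter values are illustrative assumptions chosen only to give two well-separated real branches, not numbers taken from the Letter.

```python
import numpy as np

# Illustrative parameter values (assumptions, not from the Letter):
g, n_c0, n_t0, P_t0, mass = 1.0, 0.5, 0.5, 0.4, 1.0

# In the omega*tau_mu >> 1 (ZGN) limit the relaxational corrections drop
# out, leaving  mass * omega^2 * phi = k^2 * M * phi  with the matrix
M = np.array([[g * n_c0,     2 * g * n_t0],
              [2 * g * n_c0, 5 * P_t0 / (3 * n_t0) + 2 * g * n_t0]])

u_sq = np.sort(np.linalg.eigvals(M / mass).real)[::-1]
u1, u2 = np.sqrt(u_sq)        # first and second sound velocities
print("u1 =", u1, " u2 =", u2)
```

Equivalently, $`u_1^2`$ and $`u_2^2`$ are the roots of $`u^4-Au^2+B=0`$ with $`A`$ the trace and $`B`$ the determinant of the matrix divided by the mass.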
According to (Two-fluid dynamics for a Bose-Einstein condensate out of local equilibrium with the non-condensate), this equilibration process also gives rise to a damping of second sound whose magnitude relative to the mode frequency is peaked at $`\omega \tau _\mu \sim 1`$. In Fig. 1, we plot the reciprocals of various relaxation times involved in our linearized ZGN equations, as a function of temperature. We note that $`\tau _{12}^0`$ goes to zero at $`T_{\mathrm{BEC}}`$ and is much smaller than the mean collision time expected for a Maxwell-Boltzmann gas. The extra factors multiplying $`1/\tau _{12}^0`$ in (38) ensure that $`1/\tau _\mu `$ starts to decrease as we approach $`T_{\mathrm{BEC}}`$ from below. This is the expected “critical slowing down” seen in all second order phase transitions involving an order parameter. Our HF mean-field approximation is inadequate in the critical region close to $`T_{\mathrm{BEC}}`$, and leads to a spurious finite limiting value of $`n_{c0}`$ at $`T_{\mathrm{BEC}}`$ . This removes the divergence in $`1/\tau _{12}^0`$, which is also why $`\tau _\mu `$ in Fig. 1 is finite at $`T_{\mathrm{BEC}}`$. If we simply put $`n_{c0}=0`$ in our calculations, we would find that both $`\tau _\mu `$ and $`1/\tau _{12}^0`$ would diverge at $`T_{\mathrm{BEC}}`$. Our discussion in the present Letter is based on the assumption that $`C_{22}`$ collisions produce the local equilibrium distribution $`\stackrel{~}{f}`$ in (15). As a result, our hydrodynamic equations (Two-fluid dynamics for a Bose-Einstein condensate out of local equilibrium with the non-condensate) do not explicitly depend on a relaxation time associated with $`C_{22}`$. This could be included using the standard Chapman-Enskog approach to deal with the deviation of $`f`$ from the local equilibrium function $`\stackrel{~}{f}`$ . 
However, an estimate of the collision time associated with $`C_{22}`$ is given by the scattering out term in (5), $`{\displaystyle \frac{1}{\tau _{22}^0}}={\displaystyle \frac{4\pi g^2}{\stackrel{~}{n}_0}}{\displaystyle \frac{d𝐩_1}{(2\pi )^3}\frac{d𝐩_2}{(2\pi )^3}\frac{d𝐩_3}{(2\pi )^3}𝑑𝐩_4}`$ (45) $`\times \delta (𝐩_1+𝐩_2-𝐩_3-𝐩_4)\delta (\stackrel{~}{\epsilon }_{p_1}+\stackrel{~}{\epsilon }_{p_2}-\stackrel{~}{\epsilon }_{p_3}-\stackrel{~}{\epsilon }_{p_4})`$ (46) $`\times \stackrel{~}{f}_1^0\stackrel{~}{f}_2^0(1+\stackrel{~}{f}_3^0)(1+\stackrel{~}{f}_4^0).`$ (47) This collision time is plotted in Fig. 1, both above and below $`T_{\mathrm{BEC}}`$. The fact that $`\tau _{22}^0\ll \tau _\mu `$ at temperatures $`T\lesssim 0.8T_{\mathrm{BEC}}`$ is very important, since it allows for the possibility that the non-condensate atoms are in local equilibrium with each other but not with the condensate. Above the transition, the value for $`\tau _{22}^0`$ we obtain is in close agreement with the collision time obtained in Ref. . These results for the relaxation times are quite interesting in their own right. The divergence at $`T_{\mathrm{BEC}}`$ (see above remarks) is a consequence of the Bose distribution function being used for $`\stackrel{~}{f}_i^0`$ in (18) and (47). If a Maxwell-Boltzmann distribution function were used, the “divergence” shown in Fig. 1 would be removed (the importance of calculating collision times using the correct Bose distribution has also been noted in Ref. ). Moreover, we see that one should not use the classical gas approximation for the collision times when determining the cross-over from the collisionless (or mean-field) to the hydrodynamic regime. The results in Fig. 1 imply that the hydrodynamic domain is much easier to reach at finite temperatures than might have been expected, since the collision time can be much smaller than the analogous classical gas collision time. 
In summary, starting from a microscopic model, we find that the dynamics of a Bose-condensed gas at finite temperatures can be divided into three distinct regimes: (1) The collisionless regime in which no collision terms are included in the kinetic equation ($`C_{12}=C_{22}=0`$). (2) An intermediate regime in which $`C_{22}`$ collisions between the excited atoms establish local thermal equilibrium ($`\omega \tau _{22}^0\ll 1`$) but the $`C_{12}`$ collisions do not keep the condensate in equilibrium with the non-condensate . The relaxation time $`\tau _\mu `$ for this equilibration is found to be much larger than the collision time $`\tau _{22}^0`$ for reaching local equilibrium in the non-condensate (see Fig. 1). (3) Complete local equilibrium of the condensate and non-condensate, which arises in the limit $`\omega \tau _\mu \ll 1`$. This is the regime conventionally described by the Landau two-fluid equations . As stated earlier, the ZGN equations exactly reproduce the results of the Landau equations, even though the local dynamical variables used are quite different in the two approaches. In this Letter, we have only discussed the normal modes of our linearized equations in a uniform Bose gas, but similar conclusions are obtained for trapped gases . Our general equations can also be used to discuss the growth and decay of atomic condensates , taking the dynamics of the non-condensate into account. We thank H.T.C. Stoof for first emphasizing the existence of an additional mode in the ZGN equations. T.N. is supported by a JSPS fellowship, while E.Z. and A.G. are supported by research grants from NSERC. E.Z. also acknowledges the financial support of the Dutch Foundation FOM.
# Supersymmetric Field Theory of Non–Equilibrium Thermodynamic System ## 1 Introduction Recently, the microscopic theory of non–equilibrium thermodynamic systems with broken ergodicity and exhibiting memory effects has been a subject of major interest. Spin glasses and random heteropolymers , which have received much consideration, are well-known examples of such systems. Although the bulk of theoretical studies have employed the replica method to approach the problem analytically, there is increasing interest in alternative methods that go beyond the replica trick. The supersymmetry (SUSY) approach, developed within the theory of stochastic dynamics governed by the Langevin equation, is a good example of such a method. According to this method, the generating functional of a Langevin dynamical system is represented as a functional integral over superfields with a Euclidean action, by means of introducing anticommuting Grassmann variables. These variables and their products serve as a basis for superfields whose components involve Grassmann fields along with real(complex)–valued ones. As was shown in , the static replica treatment of spin systems bears a striking similarity to the dynamics expressed in terms of superspace within the framework of the SUSY method. The latter is based on the use of nilpotent variables. The two–point correlator of the SUSY field can be written as an expansion whose coefficients give the correlators of observables such as the structure factor $`S`$ and the retarded and advanced Green functions $`G_\pm `$. The memory and nonergodicity effects are allowed for by incorporating additional terms $`q`$, $`\mathrm{\Delta }`$ into the correlators. The resulting self–consistent SUSY scheme gives a set of equations for the memory parameter $`q`$ and the nonergodicity parameter $`\mathrm{\Delta }`$, to be determined as functions of the temperature $`T`$ and the quenched disorder $`h`$. 
To formulate the SUSY scheme, the SUSY field, being a gauge field, needs to be reduced to irreducible components. By analogy with the electromagnetic field, which can be split into vector and scalar fields, the 4–component SUSY field can be divided into chiral components consisting of regular and Grassmann constituents . In Sections 2 and 3 the SUSY field will be reduced to a $`2`$–component nilpotent field. The latter has an advantage over the conventional SUSY representation because its components have an explicit physical meaning: the order parameter and the conjugate field (or the amplitude of its fluctuation). This raises the question of the optimal choice of basis for the expansion of the SUSY correlators. Currently, two types of such basis are known . The first one contains 3 components: the advanced and retarded Green functions $`G_\pm `$ and the structure factor $`S`$. The second basis corresponds to the proper $`4`$–component SUSY field and contains $`5`$ components which, in addition to the above-mentioned ones, include a pair of mutually conjugated correlators of the Grassmann fields. In Sect.4 it will be shown that the second basis can be reduced to the first one. The work is organized as follows. In Sect.2, the simplest field scheme is formulated in terms of the $`2`$–component nilpotent fields, whose second component is taken to be either an amplitude of fluctuation or a conjugate force (see subsections 2.1 and 2.2). In Sect.3 the above–discussed method for the reduction of the $`4`$–component proper SUSY field to different $`2`$–component forms is presented. In Sect.4 we show that the reduction results in a decrease of the number of components in the SUSY correlator basis, due to the fact that the conjugate correlators of the Grassmann fields are equal to the retarded Green function, in accordance with Ward identities. The SUSY perturbation theory, stated for both cubic and quadratic anharmonicities in Sect.5, makes expressions for the SUSY self–energy function simple to calculate. 
This function enters the SUSY Dyson equation derived in Sect.6 on the basis of effective Lagrangians for both thermodynamic systems with quenched disorder and random heteropolymers. Nonergodicity and memory effects are investigated in Sect.7. The corresponding self–consistent equations are obtained. Behavior of a non–equilibrium thermodynamic system for various values of temperature and quenched disorder intensity is analyzed in Sect.8. Appendices A, B, C provide details concerning the SUSY formalism under consideration. ## 2 Two–component SUSY representation Let us start with the simplest stochastic Langevin equation governing the spatiotemporal evolution of order parameter $`\eta (𝐫,t)`$: $$\dot{\eta }(𝐫,t)D^2\eta =\gamma (V/\eta )+\zeta (𝐫,t),$$ (1) where the dot stands for derivative with respect to time, $`/𝐫`$, $`D`$ is the diffusion–like coefficient, $`\gamma `$ is the kinetic coefficient, $`V(\eta )`$ is the synergetic potential (Landau free energy), $`\zeta (𝐫,t)`$ is a Gaussian stochastic function subjected to the white noise conditions $$\zeta (𝐫,t)_0=0,\zeta (𝐫,t)\zeta (\mathrm{𝟎},0)_0=\gamma T\delta (𝐫)\delta (t),$$ (2) where the angular brackets with subscript $`0`$ denote averaging over the Gaussian probability distribution of $`\zeta `$, $`T`$ is the intensity of the noise (the temperature of thermostat). Further, it is convenient to measure time $`t`$, coordinate $`𝐫`$, synergetic potential $`V`$, and stochastic variable $`\zeta `$, in units $`t_s(\gamma T)^2/D^3`$, $`r_s\gamma T/D`$, $`V_sD^3/\gamma ^3T^2`$, $`\zeta _sD^3/(\gamma T)^2`$ respectively. 
The equation of motion (1) then reads $$\dot{\eta }(𝐫,t)=-\delta V/\delta \eta +\zeta (𝐫,t),$$ (3) where a short notation is used for the variational derivative $$\delta V/\delta \eta \equiv \delta V\{\eta \}/\delta \eta =\partial V(\eta )/\partial \eta -\nabla ^2\eta ,\qquad V\{\eta \}\equiv \int \left[V(\eta )+\frac{1}{2}\left(\nabla \eta \right)^2\right]\mathrm{d}𝐫,$$ (4) the coefficient $`\gamma T`$ in Eq.(2) becomes unity and the distribution of the variable $`\zeta `$ takes the Gaussian form $$P_0\{\zeta \}\propto \mathrm{exp}\left(-\frac{1}{2}\int \zeta ^2(𝐫,t)\,\mathrm{d}𝐫\mathrm{d}t\right).$$ (5) The basis for the construction of the field scheme is the generating functional $`Z\{u(𝐫,t)\}={\displaystyle \int Z\{\eta \}\mathrm{exp}\left(\int u\eta \,\mathrm{d}𝐫\mathrm{d}t\right)\mathrm{D}\eta },`$ (6) $`Z\{\eta (𝐫,t)\}\equiv {\displaystyle \left\langle \underset{(𝐫,t)}{\prod }\delta \left\{\dot{\eta }+{\displaystyle \frac{\delta V}{\delta \eta }}-\zeta \right\}\mathrm{det}\left|{\displaystyle \frac{\delta \zeta }{\delta \eta }}\right|\right\rangle _0},`$ (7) so that its variational derivatives with respect to the auxiliary field $`u(𝐫,t)`$ give the correlators of observables (see Eq.(72)). Obviously, $`Z\{u\}`$ represents the functional Laplace transformation of the dependence $`Z\{\eta \}`$, the $`\delta `$–function reflects the condition (3), and the determinant is the Jacobian of the change of the integration variable from $`\zeta `$ to $`\eta `$. ### 2.1 Fluctuation amplitude as a component of nilpotent field Further development of the field scheme proceeds depending on the type of connection between the stochastic variable $`\zeta `$ and the order parameter $`\eta `$. For a thermodynamic system, where the thermostat state does not depend on $`\eta `$, the determinant in Eq.(7) assumes a constant value that can be chosen as unity.
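Before passing to the field formulation, we note that the dimensionless Langevin equation (3) is easy to integrate numerically. The following is a minimal zero–dimensional sketch (no spatial gradients, with the illustrative choice of a harmonic potential $`V=\eta ^2/2`$, which is an assumption of the example, not of the text): with the white–noise normalization of Eq.(5), the stationary variance of $`\eta `$ must equal 1/2.

```python
import math
import random

def simulate_ou(steps=200_000, dt=1e-2, seed=0):
    """Euler-Maruyama integration of eta' = -dV/deta + zeta with V = eta^2/2.
    The white noise has <zeta(t)zeta(0)> = delta(t), so its increments over
    dt have variance dt, and the stationary distribution is P ~ exp(-2V),
    i.e. Gaussian with variance 1/2."""
    rng = random.Random(seed)
    eta, samples = 0.0, []
    for n in range(steps):
        eta += -eta * dt + rng.gauss(0.0, math.sqrt(dt))
        if n > steps // 10:          # discard the burn-in transient
            samples.append(eta)
    mean = sum(samples) / len(samples)
    return sum((x - mean) ** 2 for x in samples) / len(samples)

print(simulate_ou())  # close to 0.5
```

Replacing $`V`$ by a double–well potential turns the same loop into a simulation of the phase–transition kinetics discussed below.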
Then, by using the integral representation for the $`\delta `$–function $$\delta \{x(𝐫,t)\}=\int _{-\mathrm{i}\mathrm{\infty }}^{\mathrm{i}\mathrm{\infty }}\mathrm{exp}\left(\int \phi x\,\mathrm{d}𝐫\mathrm{d}t\right)\mathrm{D}\phi $$ (8) with the ghost field $`\phi (𝐫,t)`$ and averaging over the distribution (5), we have the functional (7) in the standard form $$Z\{\eta (𝐫,t)\}=\int \mathrm{exp}\left[-S\{\eta (𝐫,t),\phi (𝐫,t)\}\right]\mathrm{D}\phi ,$$ (9) where the action $`S=\int L\,\mathrm{d}𝐫\mathrm{d}t`$ is measured in units $`S_s=\gamma ^2(T/D)^3`$ with the Lagrangian given by $$L(\eta ,\phi )=(\phi \dot{\eta }-\phi ^2/2)+\phi (\delta V/\delta \eta ).$$ (10) In order to obtain a canonical form of the Lagrangian (10) let us introduce the nilpotent field $$\varphi _\phi =\eta +\vartheta \phi $$ (11) with Bose components $`\eta `$, $`\phi `$, and the nilpotent coordinate $`\vartheta `$ obeying the relations $$\vartheta ^2=0,\qquad \int \mathrm{d}\vartheta =0,\qquad \int \vartheta \,\mathrm{d}\vartheta =1.$$ (12) As is shown in Appendix A, the first bracketed expression in the Lagrangian (10) takes the form of the kinetic energy in the Dirac field scheme : $$\kappa =\frac{1}{2}\int \varphi D\varphi \,\mathrm{d}\vartheta .$$ (13a) Hereafter indexes are suppressed. The Hermitian operator $`D`$ is defined by the equality $$D_\phi =\frac{\partial }{\partial \vartheta }+\left(1-2\vartheta \frac{\partial }{\partial \vartheta }\right)\frac{\partial }{\partial t}$$ (14) and enjoys the property (136). On the other hand, the algebraic properties (12) of the coordinate $`\vartheta `$ allow us to rewrite the last term in Eq.(10) in the standard form of a potential energy (see Appendix A) $$\pi =\int V(\varphi )\,\mathrm{d}\vartheta .$$ (13b) The resulting Lagrangian (10) of the Euclidean field theory is $$L\equiv \kappa +\pi =\int \lambda \,\mathrm{d}\vartheta ,\qquad \lambda (\varphi )\equiv \frac{1}{2}\varphi D\varphi +V(\varphi ).$$ (15) According to Appendix A, the expressions (10), (15) become invariant with respect to the transformation $`e^{\epsilon D}`$ given by the operator (14) only if the parameter $`\epsilon \ne 0`$ is pure imaginary and the fields $`\eta (𝐫,t)`$, $`\phi (𝐫,t)`$ are complex–valued.
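The nilpotent algebra (12) is exactly the algebra of dual numbers, so the identity behind the potential term (13b), $`\int V(\eta +\vartheta \phi )\,\mathrm{d}\vartheta =\phi \,\partial V/\partial \eta `$, can be checked mechanically. A minimal sketch (the class below is an illustration of the algebra, not part of the formalism; the sample quartic potential is an assumption of the example):

```python
# Dual numbers a + theta*b with theta^2 = 0 realize the nilpotent
# coordinate of Eq.(12); "integration" over theta extracts the coefficient
# of theta, so that  int V(eta + theta*phi) dtheta = phi * V'(eta).
class Nil:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b                 # regular + nilpotent part
    def __add__(self, o):
        o = o if isinstance(o, Nil) else Nil(o)
        return Nil(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Nil) else Nil(o)
        return Nil(self.a * o.a, self.a * o.b + self.b * o.a)  # theta^2 = 0
    __rmul__ = __mul__

def integrate_dtheta(x):          # int dtheta = 0, int theta dtheta = 1
    return x.b

V  = lambda x: 0.25 * x * x * x * x   # sample potential V = x^4/4
dV = lambda x: x ** 3                 # its derivative

eta, phi = 1.5, 0.7
lhs = integrate_dtheta(V(Nil(eta, phi)))
print(lhs, phi * dV(eta))  # identical up to rounding (2.3625)
```

The same mechanism is what turns the $`\vartheta `$–component of the field equation (18) into the pair (19), (20).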
Then the operator $`D`$ is the generator of the nilpotent group. After equating to zero the first variation of the functional $$s\{\varphi (\zeta )\}=\int \lambda (\varphi (\zeta ))\,\mathrm{d}\zeta ,\qquad \zeta \equiv \{𝐫,t,\vartheta \},$$ (16) we obtain the Euler–Lagrange equation $$D\frac{\delta \lambda }{\delta D\varphi }+\frac{\delta \lambda }{\delta \varphi }=0.$$ (17) Substituting the expression (15) in Eq.(17) yields the equation of motion $$D\varphi +\delta V/\delta \varphi =0.$$ (18) Projection along the axes of the usual and nilpotent variables gives the system of equations $$\dot{\eta }=-\delta V/\delta \eta +\phi ,$$ (19) $$\dot{\phi }=\frac{\delta ^2V}{\delta ^2\eta }\phi ,$$ (20) that determines the kinetics of the phase transition. Being obtained from the extremum condition for the Lagrangian (10), these equations determine the maximum value of the probability distribution $$P\{\eta (𝐫,t),\phi (𝐫,t)\}=Z^{-1}\mathrm{exp}\left(-\int L(\eta ,\phi )\,\mathrm{d}𝐫\mathrm{d}t\right),$$ (21) that specifies the partition function $`Z\equiv Z\{u=0\}`$ in Eq.(6). Comparison of the expression (19) with the Langevin equation (3) leads to the conclusion that the quantity $`\phi `$ determines the most probable value of the fluctuation of the field conjugated to the order parameter. On the other hand, it means that the initial one–modal distribution (5) transforms into the final two–modal form (21). ### 2.2 Conjugate field as a component of nilpotent field It is well to bear in mind that there is another representation of the two–component nilpotent field.
Let us introduce the field $`f(𝐫,t)`$ defined by the relation $$\dot{\eta }=f+\phi .$$ (22) Then the Lagrangian (10) takes the form $$L(\eta ,f)=\frac{1}{2}\left(\dot{\eta }^2-f^2\right)-\frac{\delta V}{\delta \eta }f+\frac{\delta V}{\delta \eta }\dot{\eta }.$$ (23) Since the last term of Eq.(23) is the total derivative of $`V`$ with respect to time, its contribution to the partition function gives a factor that is an integral over the initial and final fields $`\eta _i(𝐫)\equiv \eta (𝐫,t_i)`$, $`\eta _f(𝐫)\equiv \eta (𝐫,t_f)`$ (here we return to the dimensional magnitude of the potential $`V`$): $$Z\propto \int \mathrm{exp}\left(-\frac{V\{\eta _f\}-V\{\eta _i\}}{T}\right)\mathrm{D}\eta _i\mathrm{D}\eta _f.$$ (24) The remaining part of the Lagrangian (23) yields the Euler equations $$\ddot{\eta }=-\frac{\delta ^2V}{\delta ^2\eta }f,$$ (25) $$f=-\delta V/\delta \eta .$$ (26) Differentiating Eq.(19) with respect to time and taking into account Eqs.(20), (22), it is not difficult to derive Eq.(25). As for Eq.(26), it defines $`f(𝐫,t)`$ as the field conjugated to the order parameter $`\eta (𝐫,t)`$. Note that Eq.(26) implies that the force $`f`$ does not depend on the time $`t`$ explicitly. By analogy with the definition (11) let us now introduce another nilpotent field $$\varphi _f=\eta -\vartheta f,$$ (27) where the Bose components are the order parameter $`\eta `$ and the force $`f`$ with opposite sign. As is shown in Appendix A, the expression for the Lagrangian in terms of $`\varphi _f`$ has the same form as in Eq.(15) with the generator of the nilpotent group given by $$D_f=-\left(\frac{\partial }{\partial \vartheta }+\vartheta \frac{\partial ^2}{\partial t^2}\right).$$ (28) Note that $`D_f`$ obeys the algebraic relation (136). ### 2.3 Connection between two–component nilpotent representations In this subsection we discuss the relation between the two above two–component nilpotent fields (11) and (27), which makes the use of the two fields algebraically equivalent.
Let us introduce the operators $`\tau _\pm =e^{\pm \vartheta \partial _t}`$, $`\partial _t\equiv \partial /\partial t`$, that induce the following transformations of the fields $`\varphi _\phi `$ and $`\varphi _f`$: $$\tau _{-}\varphi _\phi (t)=\varphi _f(t),\qquad \tau _+\varphi _f(t)=\varphi _\phi (t).$$ (29) Eq.(29) shows that the operators $`\tau _\pm `$ transform each field into its counterpart. So we have the mappings relating the two representations. By making an expansion in a power series over $`\vartheta `$, with the help of Eqs.(12), (22) one obtains $$\varphi _\phi (t-\vartheta )=\varphi _f(t),\qquad \varphi _f(t+\vartheta )=\varphi _\phi (t),$$ (30) which shows that the operators $`\tau _\pm `$ shift the physical time $`t`$ by the nilpotent values $`\pm \vartheta `$: $$\tau _\pm \varphi _{\phi ,f}(t)=\varphi _{\phi ,f}(t\pm \vartheta ).$$ (31) The same results can be obtained by using the matrix representation defined by Eqs. (137), (139) and (140). On the other hand, the above mappings $`\tau _+\varphi _f=\varphi _\phi `$, $`\tau _{-}\varphi _\phi =\varphi _f`$ induce the corresponding transformations of the generators (14), (28): $$D_f=\tau _{-}D_\phi \tau _+,\qquad D_\phi =\tau _+D_f\tau _{-}.$$ (32) Note that the action with the Lagrangian (15) is covariant with respect to the transformations (29), provided $`\partial f/\partial t=0`$ (the potential $`V`$ does not depend on time explicitly). ## 3 Reduction of proper SUSY fields to the two–component forms The considerations given in the previous section rest on the assumption that the Jacobian of the variable change from $`\zeta `$ to the order parameter $`\eta `$ is constant.
However, in the general case the determinant of an arbitrary matrix $`A`$ can be expressed as an integral over the Grassmann conjugate fields $`\psi (𝐫,t)`$, $`\overline{\psi }(𝐫,t)`$, which meet conditions of the type of Eqs.(12): $$\mathrm{det}|A|=\int \mathrm{exp}\left(-\overline{\psi }A\psi \right)\mathrm{d}^2\psi ,\qquad \mathrm{d}^2\psi =\mathrm{d}\psi \mathrm{d}\overline{\psi }.$$ (33) Physically, the appearance of the new degrees of freedom $`\psi `$, $`\overline{\psi }`$ means that the state of the thermostat turns out to be dependent on the order parameter — as is inherent in a self–organized system . As a result, the Lagrangian (10) supplemented with the Grassmann fields $`\psi `$, $`\overline{\psi }`$ takes the form $$L(\eta ,\phi ,\psi ,\overline{\psi })=\left(\phi \dot{\eta }-\frac{\phi ^2}{2}+\frac{\delta V}{\delta \eta }\phi \right)-\overline{\psi }\left(\frac{\partial }{\partial t}+\frac{\delta ^2V}{\delta \eta ^2}\right)\psi .$$ (34) Introducing the four–component SUSY field $$\mathrm{\Phi }_\phi =\eta +\overline{\theta }\psi +\overline{\psi }\theta +\overline{\theta }\theta \phi ,$$ (35) by analogy with the previous section the SUSY Lagrangian is $$L=\int \mathrm{\Lambda }\,\mathrm{d}^2\theta ,\qquad \mathrm{\Lambda }(\mathrm{\Phi }_\phi )\equiv \frac{1}{2}(\overline{𝒟}_\phi \mathrm{\Phi }_\phi )\left(𝒟_\phi \mathrm{\Phi }_\phi \right)+V(\mathrm{\Phi }_\phi ),\qquad \mathrm{d}^2\theta \equiv \mathrm{d}\theta \mathrm{d}\overline{\theta },$$ (36) where $`\theta `$, $`\overline{\theta }`$ are the Grassmann conjugate coordinates that replace the nilpotent one $`\vartheta `$. As compared with Eq.(15), where the kernel $`\lambda `$ is linear in the generator (14), a couple of the Grassmann non–conjugated operators $$𝒟_\phi =\frac{\partial }{\partial \overline{\theta }}-2\theta \frac{\partial }{\partial t},\qquad \overline{𝒟}_\phi =\frac{\partial }{\partial \theta }$$ (37) enters the expression for the SUSY Lagrangian. The Euler equation for the SUSY action reads $$\frac{1}{2}[\overline{𝒟},𝒟]\mathrm{\Phi }+\frac{\delta V}{\delta \mathrm{\Phi }}=0,$$ (38) where the square brackets denote the commutator.
Projection of Eq.(38) along the SUSY axes $`1`$, $`\overline{\theta }`$, $`\theta `$, $`\overline{\theta }\theta `$ gives the equations of motion $$\dot{\eta }-\nabla ^2\eta =-\partial V/\partial \eta +\phi ,$$ (39a) $$\dot{\phi }+\nabla ^2\phi =(\partial ^2V/\partial \eta ^2)\phi -(\partial ^3V/\partial \eta ^3)\overline{\psi }\psi ,$$ (39b) $$\dot{\psi }-\nabla ^2\psi =-(\partial ^2V/\partial \eta ^2)\psi ,$$ (39c) $$\dot{\overline{\psi }}+\nabla ^2\overline{\psi }=(\partial ^2V/\partial \eta ^2)\overline{\psi },$$ (39d) that give Eqs.(19), (20) at $`\psi =\overline{\psi }=0`$. It can be readily shown that this system can be obtained from the Lagrangian (34). From Eqs.(39c) and (39d) we obtain the conservation law $`\dot{S}+\nabla 𝐣=0`$ for the quantities $$S=\overline{\psi }\psi ,\qquad 𝐣=(\nabla \overline{\psi })\psi -\overline{\psi }(\nabla \psi ).$$ (40) For inhomogeneous thermodynamic systems $`S`$ is a density of sharp boundaries and $`𝐣`$ is the corresponding current . In particular, the approach of the four–component SUSY field complies with the strong segregation limit requirement of copolymer theory . For a self–organized system the magnitude $`S`$ gives the entropy and $`𝐣`$ is the probability current . So, for a thermodynamic system, where the entropy is conserved, we can disregard the Grassmann fields, setting $`\psi (𝐫,t)`$, $`\overline{\psi }(𝐫,t)=\mathrm{const}`$. As a result, the four–component SUSY field (35) is reduced to the two–component form (11). In order to justify this, let us write the kinetic term of the Lagrangian (36) in the form $`(1/4)\mathrm{\Phi }_\phi [\overline{𝒟}_\phi ,𝒟_\phi ]\mathrm{\Phi }_\phi `$, where $$\frac{1}{2}[\overline{𝒟}_\phi ,𝒟_\phi ]=\frac{\partial ^2}{\partial \theta \partial \overline{\theta }}+\left(1-2\theta \frac{\partial }{\partial \theta }\right)\frac{\partial }{\partial t}.$$ (41) The expression (41) restricted to the two–component form with $`\vartheta \equiv \overline{\theta }\theta `$ yields the generator (14) as is needed. It is of interest to note that the variable $`\vartheta `$ satisfies (12). In addition, since the self–conjugated value $`\vartheta =\overline{\vartheta }`$ is a commuting quantity, it is nilpotent rather than Grassmannian.
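The conservation law $`\dot{S}+\nabla 𝐣=0`$ following from Eqs.(39c,d) can be spot–checked numerically by evolving c–number solutions $`\psi `$, $`\overline{\psi }`$ of these linear equations on a periodic one–dimensional grid: the total "entropy" $`\sum \overline{\psi }\psi `$ stays constant although both fields change in time. A sketch with an assumed frozen profile for $`\partial ^2V/\partial \eta ^2`$ (which is all the pointwise cancellation requires):

```python
import cmath
import math

# Eqs.(39c,d) in 1-d: psi' = lap(psi) - v*psi, psibar' = -lap(psibar) + v*psibar,
# with the same (here frozen) profile v(x) ~ d^2V/deta^2.  Because the discrete
# Laplacian is symmetric and the v-terms cancel pointwise, sum(psibar*psi) is
# conserved up to the O(dt^2) error of Euler stepping.
n, dt, steps = 32, 1e-3, 500
lap = lambda a: [a[(i - 1) % n] - 2 * a[i] + a[(i + 1) % n] for i in range(n)]
v = [1 + 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]
psi    = [cmath.exp( 2j * math.pi * i / n) for i in range(n)]
psibar = [0.5 * cmath.exp(-2j * math.pi * i / n) + 0.2 for i in range(n)]

S0 = sum(pb * p for pb, p in zip(psibar, psi))
for _ in range(steps):
    dpsi    = [ lp - vi * p  for lp, vi, p  in zip(lap(psi),    v, psi)]
    dpsibar = [-lb + vi * pb for lb, vi, pb in zip(lap(psibar), v, psibar)]
    psi    = [p  + dt * d for p,  d in zip(psi,    dpsi)]
    psibar = [pb + dt * d for pb, d in zip(psibar, dpsibar)]
S1 = sum(pb * p for pb, p in zip(psibar, psi))
print(abs(S1 - S0))   # small: conservation up to the stepping error
```

Note that the conservation holds through compensating growth of $`\overline{\psi }`$ and decay of $`\psi `$, in line with the response–field character of the Grassmann sector.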
As in the case of the two–component nilpotent fields in Section II, one can go over from the fluctuation amplitude $`\phi `$ to the conjugate force $`f`$ by using Eq.(22). Then, the first bracket in the Lagrangian (34) takes the form (23) and instead of the system (39) one obtains the equation (cf. Eq.(25)) $$\ddot{\eta }=-(\delta ^2V/\delta \eta ^2)f-(\delta ^3V/\delta \eta ^3)\overline{\psi }\psi $$ (42) supplemented with the definition of the force (26) and the equations (39c,d) for the Grassmann fields $`\psi (𝐫,t)`$, $`\overline{\psi }(𝐫,t)`$. As above, the equation of motion (42) can be derived by differentiating Eq.(39a) with respect to time and taking into account Eqs.(22), (39b). The corresponding Lagrangian $`L(\eta ,f,\psi ,\overline{\psi })`$ takes the SUSY form (cf. Eqs.(36)) $$L=\int \mathrm{\Lambda }\,\mathrm{d}^2\theta ,\qquad \mathrm{\Lambda }(\mathrm{\Phi }_f)\equiv \frac{1}{2}\mathrm{\Phi }_f\overline{𝒟}_f𝒟_f\mathrm{\Phi }_f+V(\mathrm{\Phi }_f)$$ (43) with the SUSY field (cf. Eq.(27)) $`\mathrm{\Phi }_f=\eta +\overline{\theta }\psi +\overline{\psi }\theta -\overline{\theta }\theta f=\mathrm{\Phi }_\phi -\overline{\theta }\theta \dot{\mathrm{\Phi }}_\phi \equiv T_{-}\mathrm{\Phi }_\phi ,`$ $`T_\pm \equiv e^{\pm \overline{\theta }\theta \partial _t},\qquad \partial _t\equiv \partial /\partial t`$ (44) and the Grassmann conjugated operators (cf. Eqs.(37)) $$𝒟_f=\frac{\partial }{\partial \overline{\theta }}-\theta \frac{\partial }{\partial t},\qquad \overline{𝒟}_f=\frac{\partial }{\partial \theta }-\overline{\theta }\frac{\partial }{\partial t}.$$ (45) By analogy with Eqs.(29)–(31), with the operators $`\tau _\pm =e^{\pm \vartheta \partial _t}`$ replaced by $`T_\pm \equiv e^{\pm \overline{\theta }\theta \partial _t}`$, the SUSY fields (35), (44) can be transformed into each other, and the couples of the SUSY operators (37), (45) are related by means of the transformations (cf. Eqs.(32)): $$𝒟_f=T_{-}𝒟_\phi T_+,\qquad \overline{𝒟}_f=T_{-}\overline{𝒟}_\phi T_+.$$ (46) According to Eq.(45), the kernel of the kinetic part of the SUSY Lagrangian (43) is (cf.
Eq.(41)) $$\overline{𝒟}_f𝒟_f=\left(\frac{\partial ^2}{\partial \theta \partial \overline{\theta }}+\overline{\theta }\theta \frac{\partial ^2}{\partial t^2}\right)+\left(\overline{\theta }\frac{\partial }{\partial \overline{\theta }}-\theta \frac{\partial }{\partial \theta }\right)\frac{\partial }{\partial t}.$$ (47) Note that the operator (28) can be obtained from Eq.(47) by taking into account the condition of the Fermion number conservation $`\overline{\theta }(\partial /\partial \overline{\theta })=\theta (\partial /\partial \theta )`$ and by setting $`\overline{\theta }\theta `$ equal to $`\vartheta `$. So, both four–component Grassmann fields (35), (44) with the SUSY generators given by Eqs.(37), (45) can be reduced to the corresponding two–component fields, Eqs.(11), (27), with the operators (14), (28), respectively. It is worthwhile to mention that such a reduction can be obtained according to the SUSY gauge conditions $$𝒟\mathrm{\Phi }=0;\qquad \overline{𝒟}\mathrm{\Phi }=0.$$ (48) Indeed, according to the definitions (35), (37), (44), (45) the equalities (48) give the relation $$\overline{\theta }\psi +\overline{\psi }\theta -2\overline{\theta }\theta f=0,$$ (49) that reduces the SUSY field (44) to the form (27) with the opposite sign before $`f`$, provided $`\vartheta \equiv \overline{\theta }\theta `$. Despite the same number of components, one has to keep in mind that the reduced SUSY field of Eq.(27) and the couple of Grassmann conjugate chiral SUSY fields (180), whose appearance is also a consequence of the SUSY gauge invariance (see Appendix B), have different physical meanings. The main distinction is that the first field consists of two Bose components $`\eta `$, $`f`$, whereas the chiral SUSY fields $`\varphi _+`$, $`\varphi _{-}`$ are combinations of Bose $`\eta `$ and Fermi $`\psi `$, $`\overline{\psi }`$ components.
Formally, this is due to the fact that for the separation of the chiral SUSY fields the conditions (178) of the SUSY gauge invariance are fulfilled not for the initial SUSY field $`\mathrm{\Phi }`$, which satisfies the conditions (48), but for the components $`\mathrm{\Phi }_\pm `$ resulting from $`\mathrm{\Phi }`$ under the action of the operators $`T_\pm =\mathrm{exp}\left(\pm \overline{\theta }\theta \partial _t\right)`$ (see Eq.(172)). According to the above considerations, the transformation operators $`T_\pm `$, which shift the physical time $`t`$ by the Grassmann values $`\pm \overline{\theta }\theta `$, relate the SUSY fields (35), (44) and the corresponding generators (37), (45). It should be emphasized that only the latter form a pair of Grassmann conjugated operators. The physical reason for this symmetry is that the corresponding equation (42) is invariant with respect to time inversion, whereas the equations (39a), (39b) for the components of the SUSY field (35) are not. However, in addition to the field $`\mathrm{\Phi }_\phi \equiv \mathrm{\Phi }_+`$ obtained from the initial field $`\mathrm{\Phi }_f`$ under the action of the operator $`T_+`$, another SUSY field $`\mathrm{\Phi }_{-}`$ emerges under the action of the operator $`T_{-}`$ that shifts the time $`t`$ in the opposite direction. From Eqs.(176), (22) it can be seen that the fields $`\mathrm{\Phi }_\pm \equiv \mathrm{\Phi }_\phi (\pm t)`$ correspond to opposite directions of time. However, the equations (39c), (39d) for the Grassmann components $`\psi (𝐫,t)`$, $`\overline{\psi }(𝐫,t)`$ are invariant under the action of $`T_\pm `$. To break the invariance let us introduce additional operators of transformation $$\stackrel{~}{T}_\pm =\mathrm{exp}\left[\epsilon \left(\delta _\pm \overline{\theta }\psi +\delta _{\mp }\overline{\psi }\theta \right)\right],$$ (50) where the source parameter $`\epsilon \to 0`$; $`\delta _+=1`$, $`\delta _{-}=0`$ for the positive time direction and $`\delta _+=0`$, $`\delta _{-}=1`$ otherwise.
The Euler SUSY equation (38) for the transformed superfield $`\stackrel{~}{\mathrm{\Phi }}_\pm \equiv \stackrel{~}{T}_\pm \mathrm{\Phi }_\phi `$ is reduced to the components $$\dot{\eta }-\nabla ^2\eta =-\partial V/\partial \eta +\phi -\epsilon \overline{\psi }\psi ,$$ (51a) $$\dot{\phi }+\nabla ^2\phi =(\partial ^2V/\partial \eta ^2)\phi -(\partial ^3V/\partial \eta ^3)\overline{\psi }\psi +\epsilon \overline{\psi }\dot{\psi },$$ (51b) $$\dot{\psi }-\nabla ^2\psi =-(\partial ^2V/\partial \eta ^2)\psi -\epsilon \left\{\delta _{-}(\dot{\psi }/\psi )\eta +\delta _+\left[(\dot{\eta }-\phi )+(\partial ^2V/\partial \eta ^2)\eta \right]\right\}\psi ,$$ (51c) $$\dot{\overline{\psi }}+\nabla ^2\overline{\psi }=(\partial ^2V/\partial \eta ^2)\overline{\psi }-\epsilon \left\{\delta _+(\dot{\overline{\psi }}/\overline{\psi })\eta -\delta _{-}\left[(\dot{\eta }-\phi )+(\partial ^2V/\partial \eta ^2)\eta \right]\right\}\overline{\psi },$$ (51d) where only the terms of first order in $`\epsilon `$ are kept. These equations give Eqs.(39) at $`\epsilon =0`$, but the combination of Eqs.(51c), (51d) at $`\epsilon \ne 0`$ leads to the following equation for the quantities (40): $$\dot{S}+\nabla 𝐣=\pm \epsilon FS,\qquad F\equiv \partial V/\partial \eta -2(\partial ^2V/\partial \eta ^2)\eta ,$$ (52) instead of the law of entropy conservation. Since the entropy $`S`$ of a closed system ($`𝐣=0`$) increases in time provided $`F>0`$, in Eq.(52) one has to choose the upper sign, corresponding to the positive time direction. So, the operator (50) breaks the symmetry with respect to time reversal. The above–mentioned condition of positiveness of the effective force $`F\equiv \partial V/\partial \eta -2(\partial ^2V/\partial \eta ^2)\eta `$ means that the effective potential $`V`$ is an increasing convex function of $`\eta `$, as is inherent in an unstable system. It is of interest to note that near the equilibrium state, where $`\partial V/\partial \eta =0`$, $`\eta \ll 1`$, the force $`F\approx -2(\partial ^2V/\partial \eta ^2)\eta `$ is always positive for an unstable system.
Finally, in order to visualize the difference between the two–component nilpotent fields (11), (27) and the chiral fields (180), let us represent the SUSY field (44) as a vector in a four–dimensional space with the axes $`\theta ^0=\overline{\theta }^0\equiv 1`$, $`\overline{\theta }`$, $`\theta `$, $`\overline{\theta }\theta \equiv \vartheta `$. Then the conditions (48) of the SUSY gauge invariance mean that the field (44) is reduced to the vector (27) belonging to the plane formed by the axes 1, $`\vartheta `$. Accordingly, the conditions (178) of the chiral gauge invariance split the total SUSY space into a couple of orthogonal subspaces, the first of which has the axes 1, $`\theta `$ and contains the vector $`\varphi _{-}`$, while the second has the axes 1, $`\overline{\theta }`$ and the vector $`\varphi _+`$. Since these subspaces are Grassmann conjugated, $`\overline{\varphi }_{-}=\varphi _+`$, it is enough to use one of them, considering either the vector $`\varphi _{-}`$ or $`\varphi _+`$ (see Appendix B). Such a program was realized in Ref., whereas the nilpotent field (27) used above is derived by projecting the chiral vectors $`\varphi _\pm `$ onto the plane formed by the axes 1, $`\vartheta `$. It follows that our approach, based on the use of the nilpotent fields (11), (27), and that theory are equivalent. The SUSY method presented in the book is also based on the usage of the chiral fields $`\varphi _{-}=\phi -\mathrm{i}\overline{\psi }\theta `$, $`\varphi _+=\eta +\overline{\theta }\psi `$ (cf. with (180)) that contain the fluctuation $`\phi `$ as a Bose component of the field $`\varphi _{-}`$ and the order parameter $`\eta `$ in the field $`\varphi _+`$. ## 4 SUSY correlation techniques In this section the correlators of the proper SUSY fields (35), (44) will be studied. It will be shown how the relevant correlation techniques can be reduced to the simplest scheme by making use of the two–component field (11).
To begin with, let us introduce the SUSY correlator $$C(z,z^{})=\left\langle \mathrm{\Phi }(z)\mathrm{\Phi }(z^{})\right\rangle ,\qquad z\equiv \{𝐫,t,\overline{\theta },\theta \}.$$ (53) From the equation of motion (38) we have the equation for the Fourier transform of the bare SUSY correlator $`C^{(0)}(z,z^{})`$ with the potential $`V_0=(1/2)\mathrm{\Phi }^2`$ in the following form $$L_{𝐤\omega }(\theta )C_{𝐤\omega }^{(0)}(\theta ,\theta ^{})=\delta (\theta ,\theta ^{}),\qquad L\equiv 1-(1/2)[\overline{𝒟},𝒟],$$ (54) where $`\delta (\theta ,\theta ^{})`$ is the Grassmann $`\delta `$–function $$\delta (\theta ,\theta ^{})=(\overline{\theta }-\overline{\theta }^{})(\theta -\theta ^{}),$$ (55) $`\omega `$ is the frequency and $`𝐤`$ is the wave vector. The solution of Eq.(54) reads $$C^{(0)}(\theta ,\theta ^{})=\frac{\left(1+(1/2)[\overline{𝒟},𝒟]\right)\delta (\theta ,\theta ^{})}{1-(1/4)[\overline{𝒟},𝒟]^2},$$ (56) where the indexes $`\omega `$, $`𝐤`$ are suppressed for brevity. From the definitions (37), (55) and the equality $`[\overline{𝒟},𝒟]^2=-4\omega ^2`$ (see Eqs.(141)), the bare SUSY correlator for the SUSY field (35) can be written in the explicit form $$C_\phi ^{(0)}(\theta ,\theta ^{})=\frac{1+(1-\mathrm{i}\omega )(\overline{\theta }+\overline{\theta }^{})\theta +(1+\mathrm{i}\omega )(\overline{\theta }+\overline{\theta }^{})\theta ^{}}{1+\omega ^2}.$$ (57) In the case of the SUSY field (44), by using the transformation (46) the above result is found to be modified by adding the term $`\mathrm{i}\omega (\overline{\theta }\theta -\overline{\theta }^{}\theta ^{})`$ to the numerator of Eq.(57).
It is convenient to introduce the following components as a basis for the expansion of SUSY correlators: $`T(\theta ,\theta ^{})=1,\qquad B_0(\theta ,\theta ^{})=\overline{\theta }\theta ,\qquad B_1(\theta ,\theta ^{})=\overline{\theta }^{}\theta ^{},`$ (58) $`F_0(\theta ,\theta ^{})=\overline{\theta }^{}\theta ,\qquad F_1(\theta ,\theta ^{})=\overline{\theta }\theta ^{}.`$ Let us define the operator product $$X(\theta ,\theta ^{})=\int Y(\theta ,\theta ^{\prime \prime })Z(\theta ^{\prime \prime },\theta ^{})\,\mathrm{d}^2\theta ^{\prime \prime }$$ (59) for superspace functions $`Y`$, $`Z`$. Eq.(59) immediately provides the multiplication rules for the basis operators (58) summarized in Table I: Table I | $`l\backslash r`$ | $`𝐓`$ | $`𝐁_0`$ | $`𝐁_1`$ | $`𝐅_0`$ | $`𝐅_1`$ | | --- | --- | --- | --- | --- | --- | | | | | | | | | $`𝐓`$ | 0 | $`𝐓`$ | 0 | 0 | 0 | | $`𝐁_0`$ | 0 | $`𝐁_0`$ | 0 | 0 | 0 | | $`𝐁_1`$ | $`𝐓`$ | 0 | $`𝐁_1`$ | 0 | 0 | | | | | | | | | $`𝐅_0`$ | 0 | 0 | 0 | $`𝐅_0`$ | 0 | | $`𝐅_1`$ | 0 | 0 | 0 | 0 | $`𝐅_1`$ | The operators $`𝐓`$, $`𝐁_{0,1}`$, $`𝐅_{0,1}`$ then form a closed basis, so that the expansions for the correlators are (see Eqs.(187), (190)) $`𝐂_\phi =S𝐓+G_+\left(𝐁_0+𝐅_0\right)+G_{-}\left(𝐁_1+𝐅_1\right),`$ (60) $`𝐂_f=S𝐓+m_+𝐁_0+m_{-}𝐁_1+G_+𝐅_0+G_{-}𝐅_1,`$ where, in accordance with the Ward identity (186) corresponding to the first generator (185), the term proportional to $`\overline{\theta }\theta \overline{\theta }^{}\theta ^{}`$ is dropped. Inserting the SUSY fields (35), (44) into Eq.(53) provides the coefficients of the expansions (60) (cf. Eqs.(188), (191)): $`S=\langle |\eta |^2\rangle ;\qquad m_+=\langle \eta \rangle ^{}/f_{\mathrm{ext}},\qquad m_{-}=\langle \eta \rangle /f_{\mathrm{ext}}^{},\qquad f_{\mathrm{ext}}\equiv -f;`$ (61) $`G_+=\langle \phi \eta ^{}\rangle =\langle \overline{\psi }\psi ^{}\rangle ,\qquad G_{-}=\langle \eta \phi ^{}\rangle =\langle \overline{\psi }^{}\psi \rangle .`$ So, the quantity $`S`$ is the autocorrelator of the order parameter $`\eta `$, and the magnitudes $`m_{\mp }`$ meet the condition $`m_+^{}=m_{-}`$ and determine the averaged order parameter $`\langle \eta \rangle `$ corresponding to the external force $`f_{\mathrm{ext}}\equiv -f`$.
The retarded and advanced Green functions $`G_{\mp }`$ give the response of the order parameter $`\eta `$ to the fluctuation amplitude $`\phi `$ and vice versa (moreover, the functions $`G_\pm `$ determine the correlation of the Grassmann fields $`\overline{\psi }`$, $`\psi `$). As is known , the Fourier transforms $`G_{\mp }(\omega )`$ of the retarded and advanced Green functions are analytical in the upper and lower half–planes of the complex frequency $`\omega `$, with a cut along the real axis $`\omega ^{}`$. There is the jump $`G_{-}(\omega ^{})-G_+(\omega ^{})=4\mathrm{i}\mathrm{Im}G_{-}(\omega ^{})`$, so that the relations (189), (192) assume the usual form of the fluctuation–dissipation theorem: $$G_\pm (\omega )=m_\pm (\omega )\mp \mathrm{i}\omega S(\omega ),\qquad S(\omega ^{})=(2/\omega ^{})\mathrm{Im}G_{-}(\omega ^{}),$$ (62) where the frequency $`\omega ^{}`$ is real. The expression for the bare correlator (57) gives: $`S^{(0)}=m_\pm ^{(0)}=(1+\omega ^2)^{-1},\qquad G_\pm ^{(0)}=(1\pm \mathrm{i}\omega )^{-1}.`$ (63) Integrating the last equation of (62) and taking into account the spectral representation $$C(\omega )=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\frac{\mathrm{d}\omega ^{}}{\pi }\frac{\mathrm{Im}C(\omega ^{})}{\omega ^{}-\omega },$$ (64) we arrive at the useful relation $$S(t=0)=G_\pm (\omega =0)\equiv \chi ,$$ (65) where the last identity is the definition of the susceptibility $`\chi `$. The expansions (60) make it possible to handle the SUSY correlator (53) as a vector of the space constructed as the direct product of the SUSY fields (35) or (44).
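The bare values (63) satisfy the first of the relations (62) identically; a numerical spot check over a frequency grid (assuming, as usual, that $`G_{-}`$ is the complex conjugate of $`G_+`$ on the real axis):

```python
# The bare correlators of Eq.(63) satisfy G_pm = m_pm -/+ i*omega*S
# identically, with m_pm equal to the bare structure factor S.
for k in range(1, 200):
    w = 0.05 * k
    S = 1 / (1 + w * w)        # bare structure factor, equal to m_pm
    Gp = 1 / (1 + 1j * w)      # G_+
    Gm = 1 / (1 - 1j * w)      # G_-
    assert abs(Gp - (S - 1j * w * S)) < 1e-12
    assert abs(Gm - (S + 1j * w * S)) < 1e-12
    assert abs(Gm - Gp.conjugate()) < 1e-12
print("ok")
```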
The representation (35) is of special convenience because it allows the use of the reduced basis $$𝐀\equiv 𝐁_0+𝐅_0,\qquad 𝐁\equiv 𝐁_1+𝐅_1.$$ (66) Along with $`𝐓`$, they form a more compact basis and obey the following multiplication rules: Table II | $`l\backslash r`$ | $`𝐓`$ | $`𝐀`$ | $`𝐁`$ | | --- | --- | --- | --- | | | | | | | $`𝐓`$ | $`0`$ | $`𝐓`$ | $`0`$ | | $`𝐀`$ | $`0`$ | $`𝐀`$ | $`0`$ | | $`𝐁`$ | $`𝐓`$ | $`0`$ | $`𝐁`$ | The expansion (60) then takes the form $$𝐂_\phi =S𝐓+G_+𝐀+G_{-}𝐁.$$ (67) So, using the Ward identities allows one to get rid of the autocorrelators of the Grassmann fields $`\psi `$, $`\overline{\psi }`$ (see relations (61)). As a result, there are three basic correlators: the advanced and retarded Green functions $`G_\pm `$ and the structure factor $`S`$. They yield the most compact expansion (67) for an arbitrary SUSY correlator of the fields (35). It is easy to show that an expansion of the same form can be obtained on the basis of the two–component field (11) representation. Indeed, in this case, by comparison between the equations of motion (18) and (38), the commutator $`(1/2)[\overline{𝒟},𝒟]`$ in the expression (56) should be replaced by the generator (14) and the nilpotent $`\delta `$–function should be $`\delta (\vartheta ,\vartheta ^{})=\vartheta +\vartheta ^{}`$. So the resulting bare correlator is $$C^{(0)}(\vartheta ,\vartheta ^{})=\frac{1+(1-\mathrm{i}\omega )\vartheta +(1+\mathrm{i}\omega )\vartheta ^{}}{1+\omega ^2}$$ (68) instead of Eq.(57). It is easy to see that using the definitions (cf. Eqs.(58)) $$T(\vartheta ,\vartheta ^{})=1,\qquad A(\vartheta ,\vartheta ^{})=\vartheta ,\qquad B(\vartheta ,\vartheta ^{})=\vartheta ^{}$$ (69) gives the relevant expansion (67). As a result, in what follows we can use the two–component field (11).
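The operator product (59), restricted to the nilpotent basis (69), can be checked mechanically. The sketch below encodes a two–point function as a 2×2 array of coefficients of $`\vartheta ^i\vartheta ^{j}`$ and reproduces the multiplication rules of Table II, the unit role of $`\delta (\vartheta ,\vartheta ^{})=\vartheta +\vartheta ^{}`$, and (with sample bare values) the form that the product rules force on the inverse correlator, whose $`𝐓`$–component carries an overall minus sign:

```python
# Two-point functions of the nilpotent pair (theta, theta') are encoded as
# 2x2 arrays F[i][j] of coefficients of theta^i theta'^j (squares vanish).
# The operator product of Eq.(59) integrates out the middle nilpotent
# variable, which keeps only the terms linear in it:
def op(Y, Z):
    return [[Y[i][1] * Z[0][k] + Y[i][0] * Z[1][k] for k in (0, 1)]
            for i in (0, 1)]

T = [[1, 0], [0, 0]]   # T(theta, theta') = 1
A = [[0, 0], [1, 0]]   # A(theta, theta') = theta
B = [[0, 1], [0, 0]]   # B(theta, theta') = theta'

# Table II: the only nonzero products are T*A = T, A*A = A, B*T = T, B*B = B.
assert op(T, A) == T and op(A, A) == A and op(B, T) == T and op(B, B) == B
assert op(T, T) == op(T, B) == op(A, T) == op(A, B) == op(B, A) == [[0, 0], [0, 0]]

# delta = A + B is the unit of this algebra,
delta = [[0, 1], [1, 0]]
assert all(op(delta, X) == X == op(X, delta) for X in (T, A, B))

# and for C = S*T + G+*A + G-*B the two-sided inverse is
# C^{-1} = -(S/(G+ G-))*T + (1/G+)*A + (1/G-)*B  (sample bare values):
S, Gp, Gm = 1.0, 1 / (1 + 1j), 1 / (1 - 1j)
C    = [[S, Gm], [Gp, 0]]
Cinv = [[-S / (Gp * Gm), 1 / Gm], [1 / Gp, 0]]
assert op(C, Cinv) == delta == op(Cinv, C)
```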
In particular, for the inverse of the SUSY correlator (67) we have $$𝐂^{-1}=-G_+^{-1}SG_{-}^{-1}𝐓+G_+^{-1}𝐀+G_{-}^{-1}𝐁.$$ (70) It is worthwhile to note that, according to the definitions (66), (58), the basis operators $`𝐀`$, $`𝐁`$ give vanishing contributions at $`\theta =\theta ^{}`$, so that $`C(\theta ,\theta )=C(\vartheta ,\vartheta )=S`$ and $`{\displaystyle \int C(z,z)\,\mathrm{d}z}={\displaystyle \int S(𝐫,t;𝐫,t)\,\mathrm{d}𝐫\mathrm{d}t\mathrm{d}^2\theta }=0,`$ (71) $`{\displaystyle \int C(\zeta ,\zeta )\,\mathrm{d}\zeta }={\displaystyle \int S(𝐫,t;𝐫,t)\,\mathrm{d}𝐫\mathrm{d}t\mathrm{d}\vartheta }=0,`$ where $`z`$, $`\zeta `$ are the sets of variables $`\{𝐫,t,\overline{\theta },\theta \}`$, $`\{𝐫,t,\vartheta \}`$, respectively. In the diagrammatic representation the identities (71) imply the absence of the contribution of bubble graphs. The latter considerably reduces the number of graphs contributing to the expansion of the perturbation theory (see below). ## 5 SUSY Perturbation Theory Let us begin with the formula $$C(\zeta ,\zeta ^{})=\frac{\delta ^2Z\{u(\zeta )\}}{\delta u(\zeta )\delta u(\zeta ^{})}|_{u=0},$$ (72) where the generating functional (see Eqs.(6), (9)) $$Z\{u\}=\left\langle \mathrm{exp}\left(\int \varphi u\,\mathrm{d}\zeta \right)\right\rangle $$ (73) has the form of an average over the distribution (cf. Eq.(21)) $$P\{\varphi \}=Z^{-1}\mathrm{exp}\left(-S\{\varphi \}\right),\qquad S\{\varphi \}=\int \lambda (\varphi )\,\mathrm{d}\zeta ,$$ (74) with the Lagrangian $`\lambda `$ defined by Eq.(15). In the zero–order approximation the action is quadratic: $$S_0=\frac{1}{2}\int \varphi L\varphi \,\mathrm{d}\zeta ,\qquad L\equiv 1+D,$$ (75) where the generator $`D`$ is given by Eq.(14). The corresponding distribution takes the SUSY Gaussian form (cf. Eq.(5)) $$P_0\{\varphi \}=\left(\frac{\mathrm{det}|L|}{2\pi }\right)^{1/2}\mathrm{exp}\left\{-\frac{1}{2}\int \varphi L\varphi \,\mathrm{d}\zeta \right\}.$$ (76) So for the bare supercorrelator we have the expression $$C^{(0)}(\zeta ,\zeta ^{})=L^{-1}\delta (\zeta ,\zeta ^{}),\qquad \delta (\vartheta ,\vartheta ^{})\equiv \vartheta +\vartheta ^{},$$ (77) that leads to Eq.(56) with $`(1/2)[\overline{𝒟},𝒟]`$ replaced by $`D`$ if Eqs.(75), (14) are taken into account.
The linear operator $`𝐋\equiv (𝐂^{(0)})^{-1}`$, in accordance with Eq.(70), takes the form: $`𝐋=L𝐓+L_+𝐀+L_{-}𝐁;`$ (78) $`L=1,\qquad L_\pm =1\pm \mathrm{i}\omega .`$ To proceed, one needs to separate out the anharmonic part $`S_1\{\varphi \}`$ of the exponent in the distribution (74) as a perturbation and to make an expansion in a power series over $`S_1`$. Insertion of this series into Eq.(72) gives $`𝐂(\zeta ,\zeta ^{})={\displaystyle \sum _{n=0}^{\mathrm{\infty }}}{\displaystyle \frac{(-1)^n}{n!}}\left\langle \varphi (\zeta )\left(S_1\{\varphi \}\right)^n\varphi (\zeta ^{})\right\rangle _0,`$ (79) where the subscript ”0” means averaging over the bare distribution (76). Further, one has to make the factorization by making use of the Wick theorem. Then, within the $`n`$–th order of the perturbation theory, the expression (79) takes the form $$C^{(n)}(\zeta ,\zeta ^{})=\int C^{(0)}(\zeta ,\zeta _1)\mathrm{\Sigma }^{(n)}(\zeta _1,\zeta _2)C^{(0)}(\zeta _2,\zeta ^{})\,\mathrm{d}\zeta _1\mathrm{d}\zeta _2,$$ (80) where $`\mathrm{\Sigma }^{(n)}(\zeta _1,\zeta _2)`$ is the SUSY self–energy function of $`n`$–th order that should be calculated. The result essentially depends on the form of $`V_1(\varphi )`$ that describes the self–action effects. In what follows we will analyse two widely used models. ### 5.1 $`\varphi ^4`$–model Let the self–action potential be defined by the quartic dependence $$V_1(\zeta )=\frac{\lambda }{4!}\varphi ^4(\zeta ),\qquad \zeta =\{𝐫,t,\vartheta \},$$ (81) with the anharmonicity constant $`\lambda >0`$. Then the terms of the first and second orders of the series (79) are $$C^{(1)}(\zeta ,\zeta ^{})=-\frac{\lambda }{4!}\int \left\langle \varphi (\zeta )(\varphi (\zeta _1))^4\varphi (\zeta ^{})\right\rangle _0\,\mathrm{d}\zeta _1,$$ (82a) $$C^{(2)}(\zeta ,\zeta ^{})=\frac{1}{2!}\left(\frac{\lambda }{4!}\right)^2\int \left\langle \varphi (\zeta )(\varphi (\zeta _1))^4(\varphi (\zeta _2))^4\varphi (\zeta ^{})\right\rangle _0\,\mathrm{d}\zeta _1\mathrm{d}\zeta _2.$$ (82b) Now one has to count the number of possible pairings when using the Wick theorem.
In Eq.(82a) the total number of pairings is 12, and the formula (82a) reads $$C^{(1)}(\zeta ,\zeta ^{})=\frac{\lambda }{2}C^{(0)}(\zeta ,\zeta _1)C^{(0)}(\zeta _1,\zeta _1)C^{(0)}(\zeta _1,\zeta ^{})d\zeta _10,$$ (83) where Eqs.(53), (71) are taken into account. In Eq.(82b) the total number of pairings equals 192, and the Wick theorem gives $$C^{(2)}(\zeta ,\zeta ^{})=\frac{\lambda ^2}{6}C^{(0)}(\zeta ,\zeta _1)\left(C^{(0)}(\zeta _1,\zeta _2)\right)^3C^{(0)}(\zeta _2,\zeta ^{})d\zeta _1d\zeta _2.$$ (84) Then, in accord with Eq.(80), the SUSY self–energy function in the second order of perturbation theory reads $$\mathrm{\Sigma }(\zeta ,\zeta ^{})=\frac{\lambda ^2}{6}\left(C(\zeta ,\zeta ^{})\right)^3.$$ (85) Here, as is usual in the diagram technique, the bare correlator is replaced by the exact one. In the diagrammatic representation the terms (82a,b) correspond to the following graphs: According to the rule (71) the former does not contribute to the correlator, whereas the latter yields Eq.(85). By analogy with the SUSY correlator (67) it is convenient to expand the SUSY self–energy: $$𝚺=\mathrm{\Sigma }𝐓+\mathrm{\Sigma }_+𝐀+\mathrm{\Sigma }_{}𝐁.$$ (86) To determine the coefficients $`\mathrm{\Sigma }_\pm `$, $`\mathrm{\Sigma }`$, it should be taken into account that the multiplication rules in Eq.(85) differ from those given by Table II. The reason is that Eq.(85) contains ”element–to–element” products of nilpotent quantities instead of the above operator product. 
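The pairing counts quoted above (12 for the first-order term and 192 for the second-order sunset graph) can be verified by brute force, independently of the model itself: enumerate all Wick pairings of the external and vertex legs and classify them by topology. A minimal sketch (leg labels and helper names are illustrative only):

```python
def perfect_matchings(legs):
    """Enumerate all Wick pairings (perfect matchings) of a list of leg labels."""
    if not legs:
        yield []
        return
    first, rest = legs[0], legs[1:]
    for i, partner in enumerate(rest):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + m

def topology(matching):
    """Reduce a pairing to the multiset of (endpoint, endpoint) lines it draws."""
    return sorted(tuple(sorted(leg[0] for leg in pair)) for pair in matching)

# First order, Eq.(82a): <phi(z) phi^4(z1) phi(z')>; externals 'x', 'y',
# vertex legs '1a'..'1d'.  Connected pairings avoid the direct x-y contraction.
legs1 = ['x', '1a', '1b', '1c', '1d', 'y']
n1 = sum(1 for m in perfect_matchings(legs1)
         if not any(set(p) == {'x', 'y'} for p in m))

# Second order, Eq.(82b): sunset topology, i.e. each external leg attaches to a
# vertex and the two vertices are joined by three internal lines (no bubbles).
legs2 = ['x', '1a', '1b', '1c', '1d', '2a', '2b', '2c', '2d', 'y']
sunsets = [sorted([('1', 'x'), ('2', 'y'), ('1', '2'), ('1', '2'), ('1', '2')]),
           sorted([('2', 'x'), ('1', 'y'), ('1', '2'), ('1', '2'), ('1', '2')])]
n2 = sum(1 for m in perfect_matchings(legs2) if topology(m) in sunsets)

print(n1, n2)  # 12 192
```

Combined with the symmetry factors of Eqs.(82a,b), these counts reproduce the prefactors of Eqs.(83), (84): 12·(1/4!) = 1/2 and 192·(1/2!)(1/4!)² = 1/6.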
Hence one has to use the multiplication rules given by Table III: Table III | $`l\backslash r`$ | $`T(\vartheta ,\vartheta ^{})`$ | $`A(\vartheta ,\vartheta ^{})`$ | $`B(\vartheta ,\vartheta ^{})`$ | | --- | --- | --- | --- | | | | | | | $`T(\vartheta ,\vartheta ^{})`$ | $`T(\vartheta ,\vartheta ^{})`$ | $`A(\vartheta ,\vartheta ^{})`$ | $`B(\vartheta ,\vartheta ^{})`$ | | $`A(\vartheta ,\vartheta ^{})`$ | $`A(\vartheta ,\vartheta ^{})`$ | $`0`$ | $`0`$ | | $`B(\vartheta ,\vartheta ^{})`$ | $`B(\vartheta ,\vartheta ^{})`$ | $`0`$ | $`0`$ | As a result, the coefficients of the expansion (86) take the form: $$\mathrm{\Sigma }(t)=(\lambda ^2/6)S^3(t),$$ (87a) $$\mathrm{\Sigma }_\pm (t)=(\lambda ^2/2)S^2(t)G_\pm (t).$$ (87b) In the frequency representation, which will be needed below, we have $$\mathrm{\Sigma }(\omega )=\frac{\lambda ^2}{6}\frac{\mathrm{d}\omega _1\mathrm{d}\omega _2}{(2\pi )^2}S(\omega \omega _1\omega _2)S(\omega _1)S(\omega _2),$$ (88a) $$\mathrm{\Sigma }_\pm (\omega )=\frac{\lambda ^2}{2}\frac{\mathrm{d}\omega _1\mathrm{d}\omega _2}{(2\pi )^2}G_\pm (\omega \omega _1\omega _2)S(\omega _1)S(\omega _2).$$ (88b) The obvious inconvenience of these expressions is the presence of convolutions. To get rid of them, let us use the fluctuation–dissipation theorem $$\mathrm{\Sigma }(t=0)=\mathrm{\Sigma }_\pm (\omega =0)$$ (89) in the form of Eq.(65). Then from Eqs.(87a), (65) one obtains: $$\mathrm{\Sigma }_\pm (\omega =0)=(\lambda ^2/6)\chi ^3.$$ (90a) ### 5.2 Cubic anharmonicity Apart from the $`\varphi ^4`$–model studied above, there are a number of physical systems, such as copolymers, where the cubic anharmonicity $$V_1(\zeta )=\frac{\mu }{3!}\varphi ^3(\zeta ),\zeta =\{𝐫,t,\vartheta \}$$ (91) plays a dominant role ($`\mu `$ is the anharmonicity parameter). 
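As a side check of the convolution structure noted above: Eqs.(88) are simply the Fourier images of the time-domain products (87), which is easy to confirm numerically on a periodic grid. In the sketch below the structure factor is an arbitrary smooth test function, not a solution of the model:

```python
import numpy as np

lam = 1.3                                   # arbitrary anharmonicity constant
N = 256
t = np.linspace(0.0, 2*np.pi, N, endpoint=False)
S_t = np.exp(-2*(1 - np.cos(t)))            # illustrative periodic "correlator"

# Time domain, Eq.(87a): Sigma(t) = (lambda^2/6) S(t)^3
Sigma_t = (lam**2/6) * S_t**3
lhs = np.fft.fft(Sigma_t)                   # Sigma(omega)

# Frequency domain, Eq.(88a): double (here circular) self-convolution of
# S(omega).  For DFTs the product theorem reads FFT(S^3) = N^{-2} (S*S*S)(omega).
S_w = np.fft.fft(S_t)
triple_conv = np.fft.ifft(np.fft.fft(S_w)**3)   # circular S_w * S_w * S_w
rhs = (lam**2/6) * triple_conv / N**2

print(np.allclose(lhs, rhs))  # True
```

This is also why the fluctuation–dissipation relation (89) is convenient: it trades the frequency-space convolutions for a single time-domain evaluation.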
By analogy with Eqs.(82) it can be shown that the first non–vanishing contribution to the SUSY correlator (80) is $$C^{(2)}(\zeta ,\zeta ^{})=\frac{1}{2!}\left(\frac{\mu }{3!}\right)^2\varphi (\zeta )(\varphi (\zeta _1))^3(\varphi (\zeta _2))^3\varphi (\zeta ^{})_0d\zeta _1d\zeta _2.$$ (92) To facilitate the factorization of these products, let us depict the possible graphs of the second order in the cubic anharmonicity $`\mu `$: The first of these graphs contains a bubble and does not contribute to the correlator. The contribution of the second graph is $$\frac{\mu ^2}{2}C^{(0)}(\zeta ,\zeta _1)\left(C^{(0)}(\zeta _1,\zeta _2)\right)^2C^{(0)}(\zeta _2,\zeta ^{})d\zeta _1d\zeta _2.$$ (93) As a result, the SUSY self–energy function reads $$\mathrm{\Sigma }(\zeta ,\zeta ^{})=\frac{\mu ^2}{2}\left(C(\zeta ,\zeta ^{})\right)^2,$$ (94) where the bare SUSY correlators are replaced by exact ones. By using the multiplication rules from Table III, the coefficients of the expansion (86) are derived: $$\mathrm{\Sigma }(t)=(\mu ^2/2)S^2(t),$$ (95a) $$\mathrm{\Sigma }_\pm (t)=\mu ^2S(t)G_\pm (t).$$ (95b) These expressions, combined with Eqs.(87), determine the SUSY self–energy function completely. 
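The μ²/2 prefactor of Eq.(93) can be checked the same way as in the φ⁴ case: of all Wick pairings of ⟨φ(ζ)φ³(ζ₁)φ³(ζ₂)φ(ζ′)⟩₀, exactly 36 have the bubble-free two-line topology of the second graph, and 36·(1/2!)(1/3!)² = 1/2. A brute-force sketch (leg labels are illustrative):

```python
from fractions import Fraction

def perfect_matchings(legs):
    """Enumerate all Wick pairings (perfect matchings) of a list of leg labels."""
    if not legs:
        yield []
        return
    first, rest = legs[0], legs[1:]
    for i, partner in enumerate(rest):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + m

def topology(matching):
    """Reduce a pairing to the multiset of (endpoint, endpoint) lines it draws."""
    return sorted(tuple(sorted(leg[0] for leg in pair)) for pair in matching)

# Externals 'x', 'y'; two cubic vertices with legs '1a'..'1c' and '2a'..'2c'.
legs = ['x', '1a', '1b', '1c', '2a', '2b', '2c', 'y']
targets = [sorted([('1', 'x'), ('2', 'y'), ('1', '2'), ('1', '2')]),
           sorted([('2', 'x'), ('1', 'y'), ('1', '2'), ('1', '2')])]
n = sum(1 for m in perfect_matchings(legs) if topology(m) in targets)

prefactor = Fraction(1, 2) * Fraction(1, 6)**2 * n   # (1/2!)(1/3!)^2 * n
print(n, prefactor)  # 36 1/2
```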
By analogy with Eq.(90a) we have the relation $$\mathrm{\Sigma }_\pm (\omega =0)=\mathrm{\Sigma }(t=0)(\mu ^2/2)\chi ^2$$ (90b) Finally, the resulting expressions for the coefficients of the expansion (86) with both cubic and quartic anharmonicities included are $$\mathrm{\Sigma }(t)=\frac{1}{2}\left(\mu ^2+\frac{\lambda ^2}{3}S(t)\right)S^2(t),$$ (96a) $$\mathrm{\Sigma }_\pm (t)=\left(\mu ^2+\frac{\lambda ^2}{2}S(t)\right)S(t)G_\pm (t),$$ (96b) $$\mathrm{\Sigma }_\pm (\omega =0)=\frac{1}{2}\left(\mu ^2+\frac{\lambda ^2}{3}\chi \right)\chi ^2$$ (96c) ## 6 Self–consistent approach ### 6.1 Effective SUSY Lagrangian Let us start with the total SUSY action taken in the site representation: $$S=S_0+S_1+S_{int};$$ (97) $$S_0\frac{1}{2}\underset{l}{}\varphi _l(t,\vartheta )\left[1+D(\vartheta )\right]\varphi _l(t,\vartheta )dtd\vartheta ,$$ (97a) $$S_1\underset{l}{}V_1\left(\varphi _l(t,\vartheta )\right)dtd\vartheta ,$$ (97b) $$S_{int}V_{int}\{\varphi _l(t,\vartheta ),\varphi _m(t^{},\vartheta ^{})\}\delta (tt^{})dtdt^{}d\vartheta d\vartheta ^{},V_{int}V+W.$$ (97c) where the sites are labeled by $`l`$ and the self–action term $`V_1(\varphi _l)`$ (97b), which is given by Eqs.(81) and (91), is separated out. The last term $`S_{int}`$ describes the two–particle interaction $`V`$, while the effective potential $`W`$ is caused by averaging over the quenched disorder. 
The potential $`V`$ is assumed to be attractive and takes the standard form $`V={\displaystyle \frac{1}{2}}{\displaystyle \underset{lm}{}}v_{lm}\varphi _m(t,\vartheta )\varphi _l(t^{},\vartheta ^{})\varphi _l(t^{},\vartheta ^{})\varphi _m(t,\vartheta )`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \underset{lm}{}}v_{lm}\varphi _l(t,\vartheta )\varphi _l(t^{},\vartheta ^{})\varphi _m(t,\vartheta )\varphi _m(t^{},\vartheta ^{})`$ (98) that, in the mean–field approximation, provides the following expression $`V{\displaystyle \frac{v}{2}}C(t,\vartheta ;t,\vartheta ){\displaystyle \underset{l}{}}\varphi _l(t^{},\vartheta ^{})\varphi _l(t^{},\vartheta ^{})`$ $`{\displaystyle \frac{v}{2}}C(t,\vartheta ;t^{},\vartheta ^{}){\displaystyle \underset{l}{}}\varphi _l(t,\vartheta )\varphi _l(t^{},\vartheta ^{}).`$ (99) Hereafter $`v_mv_{lm}>0`$ is the interaction constant, $`C(t,\vartheta ;t^{},\vartheta ^{})\varphi _m(t,\vartheta )\varphi _m(t^{},\vartheta ^{})`$ is the SUSY correlator in the site representation. Averaging over quenched disorder in intersite couplings results in the effective attractive interaction $$W=\frac{1}{2}\underset{lm}{}w_{lm}\varphi _l(t,\vartheta )\varphi _l(t^{},\vartheta ^{})\varphi _m(t,\vartheta )\varphi _m(t^{},\vartheta ^{}).$$ (100) By analogy with Eq.(99) it is supposed that $$W\frac{w}{2}C(t,\vartheta ;t^{},\vartheta ^{})\underset{l}{}\varphi _l(t,\vartheta )\varphi _l(t^{},\vartheta ^{}),w\underset{m}{}w_{lm}>0.$$ (101) So, the real interaction (99) contains both diagonal and non–diagonal in $`\vartheta `$ and $`\vartheta ^{}`$ SUSY correlators, whereas the quenched disorder averaging results in non–diagonal expression (101) only. Obviously, within the framework of the replica approach such SUSY structure corresponds to the inter–replica overlapping that is responsible for the specific spin–glass behaviour . 
Apart from the above contributions to the SUSY action (97), it should be taken into account that the quenched disorder in the force dispersion results in the additional interaction $$\mathrm{\Delta }S_0=\frac{h^2}{2}\underset{l}{}\varphi _{l\omega }(\vartheta )\delta (\omega )\varphi _{l\omega }(\vartheta )d\omega d\vartheta .$$ (102) where $`\omega `$ is the frequency and the intensity of the quenched disorder $$h^2=\frac{\overline{\left(f_l\overline{f}\right)^2}(\mathrm{\Delta }\phi )^2}{(\mathrm{\Delta }\phi )^2}$$ (103) characterizes the site dispersion of the force $`f_l`$ (the overbar denotes the volume average), and $`(\mathrm{\Delta }\phi )^2\phi _{\omega =0}^2`$ is the mean–squared fluctuation of this force. Then the mean–field SUSY action takes, in the site–frequency representation, the final form $$S=\underset{l}{}\lambda _{l\omega }(\vartheta )\frac{\mathrm{d}\omega }{2\pi }d\vartheta +\underset{l}{}\lambda _{l\omega }(\vartheta ,\vartheta ^{})\frac{\mathrm{d}\omega }{2\pi }d\vartheta d\vartheta ^{}$$ (104) with the SUSY Lagrangian $$\lambda (\vartheta )\frac{1}{2}\varphi (\vartheta )\left\{\left[1+D(\vartheta )\right]+2\pi h^2\delta (\omega )vS\right\}\varphi (\vartheta )+V_1(\varphi (\vartheta )),$$ (105a) $$\lambda (\vartheta ,\vartheta ^{})\frac{1}{2}(v+w)\varphi (\vartheta )C(\vartheta ,\vartheta ^{})\varphi (\vartheta ^{})$$ (105b) where the indexes $`l`$, $`\omega `$ are suppressed for brevity and the generator $`D`$ is given by Eq.(14). In the important case of a random heteropolymer the interaction kernels appear in the form (98), (100), but with the indexes $`l`$, $`m`$ denoting wave vectors rather than site numbers (see ). So, in this case, the expressions (104), (105) can be modified by replacing site indexes by wave ones. 
### 6.2 SUSY Dyson equation The Dyson equation for the above SUSY Lagrangian is $$𝐂^1=𝐋𝚺(v+w)𝐂.$$ (106) Here L is defined by Eq.(78), where the first component is $$L=L_0+vS,L_0=(1+2\pi h^2\delta (\omega )).$$ (107) Projecting Eq.(106) along the ”axes” (69), we come to the key equations written in the frequency representation: $$S=\frac{(\mathrm{\Sigma }L_0)G_+G_{}}{1wG_+G_{}},$$ (108a) $$G_\pm ^1+(v+w)G_\pm =L_\pm \mathrm{\Sigma }_\pm $$ (108b) where Eq.(70) is used. These equations, accompanied by Eqs.(96) for the components $`\mathrm{\Sigma }`$, $`\mathrm{\Sigma }_\pm `$ of the SUSY self–energy function, form the closed system of equations for the self–consistent analysis of a non–equilibrium thermodynamic system. ## 7 Non–ergodicity and memory effects As is well known, the memory is characterized by the Edwards–Anderson parameter $$q=\eta (\mathrm{})\eta (0)$$ (109) which, being the late–time asymptotics of the correlator, results in elongation of the structure factor: $$S(t)=q+S_0(t)$$ (110) where the component $`S_0(t)0`$ at $`t\mathrm{}`$. By analogy with the elongated structure factor (110), the ergodicity breaking is allowed for by adding a term to the retarded Green function: $$G_{}(\omega )=\mathrm{\Delta }+G_0(\omega ).$$ (111) The non–ergodicity parameter (irreversible response) in Eq.(111) $$\mathrm{\Delta }=\chi _0\chi $$ (112) is determined by the adiabatic Kubo susceptibility $`\chi _0G_{}(\omega =0)`$ and the thermodynamic one $`\chi G_0(\omega =0)`$.<sup>1</sup><sup>1</sup>1 It is convenient to use the single response function $`G_{}(\omega )`$ for the definition of both susceptibilities $`\chi _0`$ and $`\chi `$, taking into account that the quantities $`\chi _0G_{}(\omega =0)`$ and $`\chi G_{}(\omega 0)`$ correspond to the equilibrium (macroscopic) and non–equilibrium (microscopic) states, respectively. Then Eqs.(65), (89), (90), (96), where the correlators should be labeled by the index 0, imply the limit $`\omega 0`$ instead of the exact equality $`\omega =0`$. 
If the latter is defined by the standard formula $`\chi =\delta \eta /\delta f_{ext}`$ with external force $`f_{ext}f`$, for determination of the former one has to use the correlation techniques discussed in Sect.IV. To do this let us insert the elongated correlators (110), (111) into expressions (96). Then the renormalized components of the self–energy function take the form $`\mathrm{\Sigma }(t)={\displaystyle \frac{1}{2}}\left(\mu ^2+{\displaystyle \frac{\lambda ^2}{3}}q\right)q^2+\left(\mu ^2+{\displaystyle \frac{\lambda ^2}{2}}q\right)qS_0(t)+\mathrm{\Sigma }_0(t),`$ $`\mathrm{\Sigma }_0(t){\displaystyle \frac{1}{2}}\left(\mu ^2+\lambda ^2q\right)S_0^2(t)+{\displaystyle \frac{\lambda ^2}{6}}S_0^3(t);`$ (113a) $`\mathrm{\Sigma }_\pm (t)=\left(\mu ^2+{\displaystyle \frac{\lambda ^2}{2}}q\right)q\left(\mathrm{\Delta }+G_{\pm 0}(t)\right)+\mathrm{\Sigma }_{\pm 0}(t),`$ $`\mathrm{\Sigma }_{\pm 0}(t)\left(\mu ^2+\lambda ^2q\right)S_0(t)G_{\pm 0}(t)+{\displaystyle \frac{\lambda ^2}{2}}S_0^2(t)G_{\pm 0}(t),`$ (113b) where $`\mathrm{\Sigma }_0`$, $`\mathrm{\Sigma }_{\pm 0}`$ consist of the terms nonlinear in correlators $`S_0`$, $`G_{\pm 0}`$ and the terms proportional to $`S_0\mathrm{\Delta }0`$ are disregarded. In the $`\omega `$–representation, inserting the Fourier–transform of Eqs.(110), (113a) in the Dyson equation (108a), and taking into account Eq.(107) we have $`q_0\left[1w\chi _0^2{\displaystyle \frac{1}{2}}\left(\mu ^2+{\displaystyle \frac{\lambda ^2}{3}}q_0\right)q_0\chi _0^2\right]=h^2\chi _0^2,`$ (114) $`S_0={\displaystyle \frac{(1+\mathrm{\Sigma }_0)G_+G_{}}{1\left[w+(\mu ^2+\lambda ^2q/2)q\right]G_+G_{}}}.`$ (115) The first of these equations corresponds to $`\delta `$–terms ($`\omega =0`$) that are caused by the memory effects, whereas the second one — to non–zero frequencies $`\omega 0`$. 
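The decomposition (113a) obtained by inserting the elongated correlator S(t) = q + S₀(t) into Eq.(96a) is a purely algebraic identity, which can be confirmed symbolically. A minimal check (symbol names chosen here for convenience):

```python
import sympy as sp

q, S0, lam, mu = sp.symbols('q S_0 lambda mu', positive=True)

# Eq.(96a) with the elongated correlator S = q + S_0 inserted
S = q + S0
Sigma_full = sp.Rational(1, 2)*(mu**2 + lam**2*S/3)*S**2

# The split claimed in Eq.(113a)
Sigma0 = sp.Rational(1, 2)*(mu**2 + lam**2*q)*S0**2 + lam**2*S0**3/6
Sigma_split = (sp.Rational(1, 2)*(mu**2 + lam**2*q/3)*q**2
               + (mu**2 + lam**2*q/2)*q*S0
               + Sigma0)

print(sp.simplify(Sigma_full - Sigma_split))  # 0
```

The analogous check for Σ±(t) of Eq.(113b) proceeds the same way, keeping the terms proportional to S₀Δ that are discarded in the text.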
In the limit $`\omega 0`$, the product $`G_+G_{}`$ tends to $`\chi ^2`$, so that the pole of the structure factor (115) determines the point of ergodicity breaking for the thermodynamic system $$\chi _0^2=w+\left(\mu ^2+\frac{\lambda ^2}{2}q_0\right)q_0.$$ (116) Substituting Eq.(113b) into the Dyson equation (108b) yields the relation for retarded Green function $$G_{}^1+\left[(v+w)+\left(\mu ^2+\frac{\lambda ^2}{2}q\right)q\right]G_{}+\mathrm{\Sigma }_0(1\mathrm{i}\omega )=0$$ (117) where the $`\omega `$–representation is used. Then, from Eq.(96c) the equation for the thermodynamic susceptibility $`\chi G_{}(\omega 0)`$ is derived $$1\chi +(v+w)\chi ^2+\frac{\mu ^2}{2}\chi \left[(\chi +q)^2q^2\right]+\frac{\lambda ^2}{6}\chi \left[(\chi +q)^3q^3\right]=0.$$ (118) The macroscopic memory parameter $`q_0`$ is given by the equation $$\left(\frac{\mu ^2}{2}+\frac{\lambda ^2}{3}q_0\right)q_0^2=h^2,$$ (119) which is obtained from Eqs.(114), (116) in the limit $`\omega =0`$. ## 8 Discussion Within the framework of the model under consideration the system of Eqs.(114), (118), (116), (119), (112) provides the complete analytical description of the non–ergodic thermodynamic system with quenched disorder. Eqs.(114) and (118) are similar to the equations obtained by Sherrington and Kirkpatrick for determination of isothermal $`\chi _0`$ and thermodynamic $`\chi `$ susceptibilities and corresponding memory parameters $`q_0`$, $`q`$ . The equation (116) defines the point $`T_0`$ of ergodicity breaking and Eq.(112) gives non-ergodicity parameter $`\mathrm{\Delta }`$. The above consideration implies that one should distinguish the macroscopic quantity $`q_0`$, $`\chi _0`$ and microscopic ones $`q`$, $`\chi `$ (the former correspond to frequency $`\omega =0`$, the latter — to the limit $`\omega 0`$). 
The peculiarity of such a hierarchy is that the macroscopic values $`q_0`$, $`\chi _0`$ depend only on the amplitude $`h`$ of the quenched disorder, whereas the microscopic ones $`q`$, $`\chi `$ depend on the temperature $`T`$. Respectively, Eqs. (114), (116), where the temperature $`T`$ should be taken equal to its value on the ergodicity breaking curve $`T_0(h)`$, give the macroscopic values $`q_0`$, $`\chi _0`$. As for the determination of the microscopic ones $`q`$, $`\chi `$, Eq.(118) must be supplemented by an equation of the type of Eq.(114), $`q\left[1w\chi _0^2{\displaystyle \frac{1}{2}}\left(\mu ^2+{\displaystyle \frac{\lambda ^2}{3}}q\right)q\chi _0^2\right]=h^2\chi _0^2,`$ (120) where the memory parameter $`q`$ is taken to be microscopic in character. It is worth bearing in mind that the field $`h`$, the anharmonicity parameters $`\lambda `$, $`\mu `$, and the interaction parameter $`w`$, as well as the inverse values of the susceptibilities $`\chi _0`$, $`\chi `$, are measured in units of the temperature $`T`$. Further, it is convenient to choose the following units of measurement: $`T_s=\left({\displaystyle \frac{3}{2}}\right)^{3/2}{\displaystyle \frac{\mu ^4}{\lambda ^3}},h_s={\displaystyle \frac{3}{2}}{\displaystyle \frac{\mu ^3}{\lambda ^2}},v_s=w_s=\left({\displaystyle \frac{3}{2}}\right)^{1/2}\lambda ,`$ $`\chi _s=\left({\displaystyle \frac{3}{2}}\right)^{1/2}{\displaystyle \frac{\lambda }{\mu ^2}},q_s={\displaystyle \frac{3}{2}}{\displaystyle \frac{\mu ^2}{\lambda ^2}}u`$ (121) for the quantities $`T`$, $`h`$, $`v`$, $`w`$, $`\chi `$, $`q`$, respectively. 
As a result, the key equations take the final form: $`\left(1uT\chi \right)+(v+w)T\chi ^2+(\chi /2T)\left[(T\chi +q)^2q^2\right]+`$ $`(\chi /4T)\left[(T\chi +q)^3q^3\right]=0,`$ (122a) $$wT+\left(1+q/2\right)q/2+h^2/q=\chi _0^2,$$ (122b) $$wT+\left(1+q_0/2\right)q_0/2+h^2/q_0=\chi _0^2,$$ (122c) $$\left[1+(3/4)q_0\right]q_0+wT_0=\chi _0^2,$$ (122d) $$(1+q_0)q_0^2=2h^2,$$ (122e) $$\mathrm{\Delta }=uT(\chi _0\chi ).$$ (122f) The behaviour of the system is specified by the last parameter $`u`$ in Eqs.(121), which determines the relation between the cubic and quartic anharmonicities. In the case $`u1`$ the main contribution is due to the quartic term (81), whereas at $`u1`$ the cubic anharmonicity (91) dominates. The former limit corresponds to strong quenched disorder $`hh_s`$, the latter to the weak field $`hh_s`$. According to Eq.(122e), the dependence of the macroscopic memory parameter $`q_0`$ on the quenched disorder amplitude $`h`$ is governed by the ratio between the magnitude $`h`$ and the characteristic field $`h_s`$. The linear dependence $$q_0=2^{1/2}(h/\mu ),u1,$$ (123) is realized in the limit $`hh_s`$, whereas the power relation $$q_0=3^{1/3}(h/\lambda )^{2/3},u1$$ (124) corresponds to the case $`hh_s`$ (in Eqs.(123), (124) units of measurement are restored). The dependence $`q_0(h)`$ is depicted in Fig.1. For temperatures above the point of ergodicity breaking $`T_0`$, the thermodynamic $`\chi `$, $`q`$ and adiabatic $`\chi _0`$, $`q_0`$ values of the susceptibilities and memory parameters, as well as Eqs.(122b), (122c), coincide, so that Eqs.(122a), (122c) determine the dependencies of $`\chi `$ and $`q`$ on temperature. Taking Eq.(122d) into account gives the temperature of the ergodicity breaking $`T_0(h)`$ as a function of the field $`h`$ (see Fig.2). The peculiarity of the dependence $`T_0(h)`$ is that the ergodicity breaking temperature takes the non–zero value $`T_{00}T_0(h=0)`$ at $`h=0`$. 
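In the scaled units of Eqs.(121), the crossover between the limits (123) and (124) is contained entirely in Eq.(122e), (1 + q₀)q₀² = 2h², which is easy to solve numerically. The sketch below checks both asymptotics (function names are illustrative):

```python
import numpy as np

def q0_of_h(h):
    """Solve (1 + q0) q0^2 = 2 h^2 (Eq. 122e, scaled units) by bisection."""
    target = 2.0*h**2
    # f(0) < 0 and f(hi) > 0 for this bracket, so bisection always converges
    lo, hi = 0.0, max(1.0, (2.0*h**2)**(1.0/3.0) + 1.0)
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if (1.0 + mid)*mid**2 > target:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

# Weak disorder (cf. Eq.(123) in scaled units): q0 ~ sqrt(2) h
h = 1e-4
print(q0_of_h(h) / (np.sqrt(2)*h))            # ratio close to 1

# Strong disorder (cf. Eq.(124) in scaled units): q0 ~ (2 h^2)^(1/3)
h = 1e4
print(q0_of_h(h) / ((2.0*h**2)**(1.0/3.0)))   # ratio close to 1
```

This reproduces the monotonic dependence q₀(h) depicted in Fig.1, linear at small h and h^(2/3) at large h.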
Below the ergodicity breaking temperature $`T_0`$, Eqs.(122a), (122b) give the microscopic values of the memory parameter $`q`$ and the susceptibility $`\chi `$, which differ from the macroscopic one $`\chi _0`$, the latter remaining constant. As a result, we obtain the typical temperature dependencies of the susceptibilities $`\chi `$, $`\chi _0`$ shown in Fig.3. It is seen that, in accordance with Eq.(122a), at $`T<T_0`$ the thermodynamic susceptibility $`\chi 0`$ only if the temperature is above the freezing point $`T_f`$, which is determined by the condition $`\chi /T=\mathrm{}`$, leading to the equation $$(v+w)T_f+T_f\chi +q+(3/4)(T_f\chi +q)^2=\chi ^2,$$ (125) where $`\chi `$, $`q`$ are taken at $`T=T_f`$. The phase diagrams, which depict the ranges of possible thermodynamic states on the plane $`hT`$ for various values of the interaction parameters $`w`$ and $`v`$, are shown in Fig.2. In the limit $`h=0`$ the temperatures of ergodicity breaking and freezing are as follows: $$T_{00}=w\left\{\left(1+\frac{v}{2w}+\frac{1}{12}\frac{\lambda ^2}{w^2}\right)+\left[\left(1+\frac{v}{2w}+\frac{1}{12}\frac{\lambda ^2}{w^2}\right)^2+\frac{1}{2}\frac{\mu ^2}{w^2}\right]^{1/2}\right\}^2,$$ (126a) $$T_f4(v+w)\left(1+\frac{\mu ^2+(2/3)\lambda ^2}{4(v+w)^2}\right),$$ (126b) where units of measurement are restored and the second equality holds for $`\mu ^2,\lambda ^2w^2`$. So, the cubic and quartic anharmonicities result in an increase of both the ergodicity breaking and the freezing temperatures. When the quenched disorder is large, $`q_0,q_0^2wT_0`$, the isothermal susceptibility $`\chi _0`$ in Eq.(122d) is small, and Eqs.(122a), (122c), (122d) provide the estimate $`\chi _02/uT_0`$. Then, for the measured quantities one has: $$T_02^{5/4}\mu (h/\mu )^{1/2},(w/\mu )^2\mu h(\mu /\lambda )^2\mu ;$$ (127a) $$T_02^{1/2}3^{1/3}\lambda (h/\lambda )^{2/3},h(\mu /\lambda )^2\mu ,(w/\lambda )^{3/2}\lambda $$ (127b) for $`\mu ^2\lambda ^2`$ and $`\lambda ^2\mu ^2`$, respectively. 
So, the non–ergodicity domain is extended indefinitely under a strong increase of the quenched disorder. As is shown in Figs.2a,b and Fig.4a,b, the dependencies $`T_0(h)`$, $`T_f(h)`$ are non–monotonic if either $`w<0.5`$ or $`v>1`$. The influence of the interaction parameters $`w`$, $`v`$ and the anharmonicity ratio $`u`$ on the temperature dependence of the susceptibility $`\chi `$ is illustrated in Fig.5. According to Fig.5a, increasing $`w`$ causes a decrease of $`\chi `$ and an increase of the temperatures $`T_0`$, $`T_f`$. The same behaviour is revealed on increasing the parameter $`v`$ (Fig.5b). By contrast, the tendency is opposite under an increase in $`u`$ (see Fig.5c). Finally, let us consider the behaviour of the non–ergodicity $`\mathrm{\Delta }`$ and memory $`q`$ parameters, which are determined by the complete system of equations (122). The corresponding temperature dependencies are shown in Fig.6a,b. In the frozen state, where $`\chi 0`$, the non–ergodicity parameter (122f) depends linearly on temperature because the isothermal susceptibility $`\chi _0`$ is constant. The appearance of a finite value of the thermodynamic susceptibility $`\chi `$ above the freezing point $`T_f`$ results in a step–like decrease of the value $`\mathrm{\Delta }`$. With further growth of temperature, the irreversible response $`\mathrm{\Delta }(T)`$ decays monotonically, taking the zero value at the ergodicity breaking point $`T_0`$ (see Fig.6a). With increasing temperature from 0 to $`T_0`$, the microscopic memory parameter $`q`$ decreases monotonically, taking its minimal value at the ergodicity breaking point $`T_0`$. Above this point $`q(T)`$ increases (see Fig.6b). It is seen that an increase of the quenched disorder extends the temperature domain of non–ergodicity and causes growth of the memory parameter. 
In the spirit of the generalized picture of phase transitions, this can be attributed to the fact that the microscopic memory parameter $`q`$ above the point $`T_0`$ corresponds to a soft mode that transforms into a mode of ergodicity restoring below the temperature $`T_0`$. The non–ergodicity parameter $`\mathrm{\Delta }`$ represents the order parameter. The analytical expressions for the dependencies $`\mathrm{\Delta }(T)`$, $`q(T)`$ can be obtained only near the ergodicity breaking curve $`T_0(h)`$. For $`h=0`$ and $`T_0=T_{00}`$, from Eq.(122a), assuming that $`0<T_{00}TT_{00}`$, $`\chi \chi _{00}\mathrm{\Delta }/uT_{00}`$, $`\mathrm{\Delta }u\chi _{00}T_{00}`$, up to the first order in the small parameters $`\epsilon T/T_{00}1`$ and $`\mathrm{\Delta }(u\chi _{00}T_{00})^1`$, we have, in measured units, $$\mathrm{\Delta }=A_0\epsilon ,A_0\frac{T_{00}}{w}\left(\frac{w}{\mu }\right)^2\frac{1\frac{\lambda ^2}{6w^2}}{1+\left(\frac{\lambda ^2}{2\mu ^2}+\frac{vw}{\mu ^2}\right)\left(\frac{T_{00}}{w}\right)^{1/2}},\epsilon <0;$$ (128a) $$q=Q\epsilon ,Q\frac{4}{3}\frac{T_{00}}{w}\left(\frac{\lambda w}{\mu ^2}\right)^2\frac{1\frac{\lambda ^2}{12w^2}}{1+\frac{\lambda ^2}{2\mu ^2}\left(\frac{T_{00}}{w}\right)^{1/2}},\epsilon >0,$$ (128b) In the case of $`h0`$ the result for the temperature dependence is $$\mathrm{\Delta }=A\epsilon ,A\frac{2}{\lambda ^2\chi _0^2}\left(\frac{1\frac{w}{2}\chi _0^2T_0\frac{\lambda ^2}{12}\chi _0^4T_0^2}{\frac{v}{\lambda ^2\chi _0}+\frac{\mu ^2}{\lambda ^2}+q_0+\frac{1}{2}\chi _0T_0}\right),\mathrm{\Delta },\epsilon <0.$$ (129) Correspondingly, at fixed temperature Eq.(122a) gives, in the linear approximation $`0<q_0qq_0`$: $$\mathrm{\Delta }=B\left(qq_0\right),B^11+\frac{v}{\lambda ^2\chi _0}\left(\frac{\mu ^2}{\lambda ^2}+q_0+\frac{1}{2}\chi _0T_0\right)^1.$$ (130) ## Appendix A For the nilpotent representation, let us rewrite the Lagrangian (10) in the form of a Euclidean field theory $$L=\kappa +\pi ,$$ (131) where the kinetic $`\kappa `$ and potential $`\pi `$ 
energies are $`\kappa =\phi \dot{\eta }\phi ^2/2,`$ (132) $`\pi =(V/\eta )\phi .`$ (133) In order to obtain the nilpotent form (13a) of the kinetic energy (132), we have to determine the operator $`D`$. The most general dependence of the operator $`D`$ on the nilpotent coordinate $`\vartheta `$ is given by the expression $$D=a+b(/\vartheta )+c\vartheta +d\vartheta (/\vartheta ),$$ (134) where the coefficients $`a`$, $`b`$, $`c`$, $`d`$ are unknown operators. Substituting Eqs.(11), (134) into Eq.(13a) and taking into account the properties (12) leads to the expression (132) with the following coefficients: $$a=_t,b=1,c=0,d=2_t,$$ (135) where $`_t/t`$ is the derivative with respect to time. As a result, the operator (134) takes the form (14). It has the property $$D^2=_t^2.$$ (136) Considering the definitions (11), (12), (14), it is easy to see that $`D`$ is a Hermitian operator. Under the infinitesimal transformation $`\delta e^{\epsilon D}1\epsilon D`$ with the parameter $`\epsilon 0`$, the values $`t`$ and $`\vartheta `$ acquire the additions $`\delta t=\epsilon `$, $`\delta \vartheta =\epsilon `$, which differ in sign. Considering the corresponding field addition $`|\delta \varphi _\phi |=\epsilon |D_\phi ||\varphi _\phi |`$, it is convenient to use the matrix form for the nilpotent field (11) and the operator $`D`$: $$|\varphi _\phi |=\left(\genfrac{}{}{0pt}{}{\eta }{\phi }\right),|D_\phi |=\left(\begin{array}{cc}_t& 1\\ 0& _t\end{array}\right),_t\frac{}{t}.$$ (137) According to Eq.(137), the change of the order parameter is proportional to the difference between the rate of change of the order parameter and the fluctuation amplitude, whereas the change of the latter is proportional to its rate with the opposite sign. 
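The closure property (136), and its compatibility with the transformation matrices (140) that connect Eq.(137) to the generator of Eq.(139) below, can be checked with a formal symbol standing for ∂t. Note that the extracted rendering of the matrices does not show relative signs; the sign placement below is one plausible convention, chosen so that D² = ∂t² holds, and is an assumption of this sketch:

```python
import sympy as sp

dt = sp.Symbol('d_t')  # formal time-derivative symbol (commutes with itself)

# Eq.(137) with a relative sign between the diagonal entries, as required by
# the property (136); the sign placement is an assumption of this sketch.
D_phi = sp.Matrix([[dt, 1], [0, -dt]])
assert D_phi**2 == dt**2 * sp.eye(2)

# Eq.(139): generator of the (eta, f) representation, assuming the lower-left
# entry is +d_t^2, again so that (136) holds as stated in the text.
D_f = sp.Matrix([[0, 1], [dt**2, 0]])
assert D_f**2 == dt**2 * sp.eye(2)

# Eq.(140): conjugation by tau_+ maps one generator into the other,
# consistent with the transformation between the fields (137) and (139).
tau_p = sp.Matrix([[1, 0], [dt, 1]])
tau_m = sp.Matrix([[1, 0], [-dt, 1]])   # inverse of tau_p
print((tau_p * D_phi * tau_m).expand() == D_f)  # True
```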
To prove the equivalence of the term (133) in the Lagrangian (131) and the Grassmann potential energy (13b), let us carry out the formal expansion of the thermodynamic potential in powers of the component $`\vartheta \phi `$ of Eq.(11): $$\pi =\left[V(\eta )+\frac{\delta V}{\delta \eta }\phi \vartheta \right]d\vartheta .$$ (138) Here all the terms of powers higher than 1 are omitted according to the nilpotent condition. Using the integration properties (12), we immediately obtain Eq.(133), as required. In the case of the two–component nilpotent field (27), the consideration proceeds by analogy. For brevity, we point out only the differences between (27) and the case (11) considered above. The corresponding infinitesimal transformation $`\delta \epsilon D`$, where the generator $`D`$ is given by Eq.(28), results in the additions $`\delta t=0`$, $`\delta \vartheta =\epsilon `$, $`|\delta \varphi _f|=\epsilon |D_f||\varphi _f|`$, in which the matrix form $$|\varphi _f|=\left(\genfrac{}{}{0pt}{}{\eta }{f}\right),|D_f|=\left(\begin{array}{cc}0& 1\\ _t^2& 0\end{array}\right),_t\frac{}{t}$$ (139) is used. The generator $`D_f`$ has the property (136). It is easily seen that the Lagrangians (10), (23) are invariant under the transformations given by the generators $`D_\phi `$, $`D_f`$, respectively, provided that the infinitesimal parameter $`\epsilon `$ is purely imaginary and the fields $`\eta (𝐫,t)`$, $`\phi (𝐫,t)`$, $`f(𝐫,t)`$ are complex–valued. So, for real fields the two–component representations (11), (27) are just convenient approximations. The matrices of the transformation between the fields (139) and (137) (see Eqs.(29), (32)) take the form $$|\tau _\pm |=\left(\begin{array}{cc}1& 0\\ \pm _t& 1\end{array}\right).$$ (140) Let us now consider the four–component SUSY fields (35), (44). 
Instead of Eq.(136), the corresponding couples of operators (37), (45) satisfy the conditions: $`𝒟^2=\overline{𝒟}^2=0,\{\overline{𝒟},𝒟\}=2_t,[\overline{𝒟},𝒟]^2=(2_t)^2;`$ (141) $`\{𝒟_\phi ,𝒟_f\}=\{\overline{𝒟}_\phi ,\overline{𝒟}_f\}=0,\{\overline{𝒟}_\phi ,𝒟_f\}=_t,\{\overline{𝒟}_f,𝒟_\phi \}=3_t,`$ where the curly and square brackets denote anticommutator and commutator, respectively. The generalized anticommutation rules for the operators $`𝒟^{(\pm )}𝒟(\pm t)`$, $`\overline{𝒟}^{(\pm )}\overline{𝒟}(\pm t)`$ corresponding to the opposite directions of the time $`t`$, read: $`\{𝒟^{(\pm )},𝒟^{()}\}=\{\overline{𝒟}^{(\pm )},\overline{𝒟}^{()}\}=\{\overline{𝒟}_f^{(\pm )},𝒟_f^{()}\}=0,`$ $`\{\overline{𝒟}^{(\pm )},𝒟^{(\pm )}\}=\{\overline{𝒟}_\phi ^{(\pm )},𝒟_\phi ^{()}\}=2_t;`$ $`\{𝒟_\phi ^{(\pm )},𝒟_f^{(\pm )}\}=\{\overline{𝒟}_\phi ^{(\pm )},\overline{𝒟}_f^{(\pm )}\}=0,\{\overline{𝒟}_\phi ^{(\pm )},𝒟_f^{(\pm )}\}=_t,\{\overline{𝒟}_f^{(\pm )},𝒟_\phi ^{(\pm )}\}=3_t;`$ $`\{𝒟_\phi ^{(\pm )},𝒟_f^{()}\}=\{\overline{𝒟}_\phi ^{(\pm )},\overline{𝒟}_f^{()}\}=0,\{\overline{𝒟}_\phi ^{(\pm )},𝒟_f^{()}\}=\{\overline{𝒟}_f^{(\pm )},𝒟_\phi ^{()}\}=\pm _t.`$ (142) In Eqs.(141), (142) the coincident indexes are suppressed. 
The simplest way to prove Eqs.(141), (142) is to introduce the four–rank matrices (see Eqs.(137), (139)): $`|\mathrm{\Phi }_\phi |=\left(\begin{array}{c}\eta \\ \psi \\ \overline{\psi }\\ \phi \end{array}\right),|𝒟_\phi |=\left(\begin{array}{cccc}0& 1& 0& 0\\ 0& 0& 0& 0\\ 2_t& 0& 0& 1\\ 0& 2_t& 0& 0\end{array}\right),|\overline{𝒟}_\phi |=\left(\begin{array}{cccc}0& 0& 1& 0\\ 0& 0& 0& 1\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right);`$ (155) $`|\mathrm{\Phi }_f|=\left(\begin{array}{c}\eta \\ \psi \\ \overline{\psi }\\ f\end{array}\right),|𝒟_f|=\left(\begin{array}{cccc}0& 1& 0& 0\\ 0& 0& 0& 0\\ _t& 0& 0& 1\\ 0& _t& 0& 0\end{array}\right),|\overline{𝒟}_f|=\left(\begin{array}{cccc}0& 0& 1& 0\\ _t& 0& 0& 1\\ 0& 0& 0& 0\\ 0& 0& _t& 0\end{array}\right).`$ (168) The matrices of the transformation between the fields (168) and (155) take the form (cf. Eq.(140)) $$|T_\pm |=\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 1& 0\\ \pm _t& 0& 0& 1\end{array}\right).$$ (169) The infinitesimal transformations $`\delta \overline{\epsilon }𝒟`$, $`\overline{\delta }\overline{𝒟}\epsilon `$ give the following additions: $`\delta _\phi \theta =0,\delta _\phi \overline{\theta }=\overline{\epsilon },\delta _\phi t=2\overline{\epsilon }\theta ,\overline{\delta }_\phi \theta =\epsilon ,\overline{\delta }_\phi \overline{\theta }=0,\overline{\delta }_\phi t=0;`$ $`\delta _f\theta =0,\delta _f\overline{\theta }=\overline{\epsilon },\delta _ft=\overline{\epsilon }\theta ,\overline{\delta }_f\theta =\epsilon ,\overline{\delta }_f\overline{\theta }=0,\overline{\delta }_ft=\overline{\theta }\epsilon ;`$ (170) $`|\delta \mathrm{\Phi }_\phi |=\overline{\epsilon }|𝒟_\phi ||\mathrm{\Phi }_\phi |,|\overline{\delta }\mathrm{\Phi }_\phi |=|\overline{𝒟}_\phi ||\mathrm{\Phi }_\phi |\epsilon ;`$ $`|\delta \mathrm{\Phi }_f|=\overline{\epsilon }|𝒟_f||\mathrm{\Phi }_f|,|\overline{\delta }\mathrm{\Phi }_f|=|\overline{𝒟}_f||\mathrm{\Phi }_f|\epsilon .`$ At last, the equation for the SUSY field (35) 
$$V(\mathrm{\Phi }_\phi )\mathrm{d}^2\theta =\frac{\delta V}{\delta \eta }\phi \overline{\psi }\frac{\delta ^2V}{\delta \eta ^2}\psi $$ (171) is obtained by analogy with Eq.(138) to represent the terms in Eq.(34) that contain the potential $`V\{\eta \}`$ in SUSY form. In the case of the field (44), the multiplier $`\phi `$ must be substituted by $`f`$. ## Appendix B Following the standard field scheme , let us show how the four-component SUSY field (44) is split into a couple of chiral two–component Grassmann conjugated fields $`\mathrm{\Phi }_\pm `$. These SUSY fields are obtained from the initial SUSY field $`\mathrm{\Phi }_f`$ under the following transformations: $$\mathrm{\Phi }_\pm =T_\pm \mathrm{\Phi }_f;T_\pm e^\pm ,\overline{\theta }\theta _t,_t/t.$$ (172) Accordingly, the generators (45) take the form $$𝒟_\pm =T_\pm 𝒟_fT_{}\overline{𝒟}_\pm =T_\pm \overline{𝒟}_fT_{}.$$ (173) Due to the Grassmann nature of the parameter $``$ in the operators $`T_\pm `$, it is convenient to rewrite (173) in the following form: $$𝒟_\pm =𝒟_f\pm [,𝒟_f],\overline{𝒟}_\pm =\overline{𝒟}_f\pm [,\overline{𝒟}_f],$$ (174) where the square brackets denote commutator. In explicit form one has $`𝒟_+=/\overline{\theta }2\theta _t,𝒟_{}=/\overline{\theta };`$ $`\overline{𝒟}_+=/\theta ,\overline{𝒟}_{}=/\theta 2\overline{\theta }_t.`$ (175) Apparently, the operators $`𝒟_+`$, $`\overline{𝒟}_+`$ coincide with the generators $`𝒟_\phi `$, $`\overline{𝒟}_\phi `$, Eqs.(37). According to Eqs.(44), the definitions (172) give $$\mathrm{\Phi }_\pm =\eta +\overline{\theta }\psi +\overline{\psi }\theta \pm \overline{\theta }\theta \left(\dot{\eta }f\right),$$ (176) where the point denotes the derivative with respect to time. The comparison of Eq.(176) with the definition (35) gives the identity $`\mathrm{\Phi }_+\mathrm{\Phi }_\phi `$. 
The action of the operators (175) on Eq.(176) gives $`𝒟_\pm \mathrm{\Phi }_\pm =\psi \theta \left(\dot{\eta }+f\right)+\underset{¯}{2\overline{\theta }\theta \dot{\psi }},`$ $`\overline{𝒟}_{}\mathrm{\Phi }_{}=\overline{\psi }+\overline{\theta }\left(\dot{\eta }f\right)\underset{¯}{2\overline{\theta }\theta \dot{\overline{\psi }}},`$ (177) where the underlined terms concern only the upper indexes of the left–hand parts. The chiral SUSY fields are fixed by the gauge conditions $$𝒟_{}\mathrm{\Phi }_{}=0,\overline{𝒟}_+\mathrm{\Phi }_+=0,$$ (178) which, in accordance with the definitions (175), signify that $`\mathrm{\Phi }_{}`$ and $`\mathrm{\Phi }_+`$ are independent of $`\overline{\theta }`$ and $`\theta `$, respectively. On the other hand, taking into account Eqs.(177), the gauge (178) results in the equations $`\psi \theta \left(f+\dot{\eta }\right)=0,`$ $`\overline{\psi }+\overline{\theta }\left(\dot{\eta }f\right)=0`$ (179) for $`\mathrm{\Phi }_{}`$ and $`\mathrm{\Phi }_+`$, respectively. Substituting Eqs.(179) into Eq.(176), one obtains the final expressions for the chiral SUSY fields: $`\varphi _{}=\eta +\overline{\psi }\theta ,`$ $`\varphi _+=\eta +\overline{\theta }\psi .`$ (180) These equations give the irreducible representations of the SUSY fields (35), (44) under the gauge conditions (178). The chiral field $`\varphi _+(t)`$ corresponds to the positive direction of the time $`t`$, whereas $`\varphi _{}(t)`$ is related to the negative one.
## Appendix C Let us consider the invariance properties of the SUSY action $$S=\left[K(\mathrm{\Phi }(z))+V(\mathrm{\Phi }(z))\right]dz,K(\mathrm{\Phi })\frac{1}{2}(\overline{𝒟}\mathrm{\Phi })(𝒟\mathrm{\Phi }),z\{𝐫,t,\theta ,\overline{\theta }\}$$ (181) under the Grassmann conjugated transformations $$\delta \mathrm{\Phi }=\underset{\alpha }{}\overline{\epsilon }_\alpha 𝒟^{(\alpha )}\mathrm{\Phi },\overline{\delta }\mathrm{\Phi }=\underset{\alpha }{}\overline{𝒟}^{(\alpha )}\mathrm{\Phi }\epsilon _\alpha ,$$ (182) given by the SUSY generators $`𝒟^{(\alpha )}`$, $`\overline{𝒟}^{(\alpha )}`$ which differ in the time $`t`$ and Grassmann coordinates $`\theta `$, $`\overline{\theta }`$. According to Eq.(171), the potential term in Eq.(181) is SUSY invariant if the kernel $`V(\eta )`$ does not depend on the time $`t`$. Up to inessential total time derivatives, the Grassmann conjugated variations of the remaining kinetic term $`\delta K={\displaystyle \frac{1}{2}}\overline{𝒟}\left({\displaystyle \underset{\alpha }{}}\overline{\epsilon }_\alpha 𝒟^{(\alpha )}\mathrm{\Phi }\right)(𝒟\mathrm{\Phi })+{\displaystyle \frac{1}{2}}(\overline{𝒟}\mathrm{\Phi })𝒟\left({\displaystyle \underset{\alpha }{}}\overline{\epsilon }_\alpha 𝒟^{(\alpha )}\mathrm{\Phi }\right),`$ (183) $`\overline{\delta }K={\displaystyle \frac{1}{2}}\overline{𝒟}\left({\displaystyle \underset{\alpha }{}}\overline{𝒟}^{(\alpha )}\mathrm{\Phi }\epsilon _\alpha \right)(𝒟\mathrm{\Phi })+{\displaystyle \frac{1}{2}}(\overline{𝒟}\mathrm{\Phi })𝒟\left({\displaystyle \underset{\alpha }{}}\overline{𝒟}^{(\alpha )}\mathrm{\Phi }\epsilon _\alpha \right)`$ can be rewritten in the form $$\delta K=\frac{1}{2}\underset{\alpha }{}\overline{\epsilon }_\alpha 𝒟^{(\alpha )}\left[(\overline{𝒟}\mathrm{\Phi })(𝒟\mathrm{\Phi })\right],\overline{\delta }K=\frac{1}{2}\underset{\alpha }{}\overline{𝒟}^{(\alpha )}\left[(\overline{𝒟}\mathrm{\Phi })(𝒟\mathrm{\Phi })\right]\epsilon _\alpha $$ (184) provided that the anticommutators 
$`\{𝒟,𝒟^{(\alpha )}\}`$, $`\{\overline{𝒟},𝒟^{(\alpha )}\}`$, $`\{𝒟,\overline{𝒟}^{(\alpha )}\}`$, $`\{\overline{𝒟},\overline{𝒟}^{(\alpha )}\}`$ are either equal to zero or proportional to the derivative with respect to the time $`_t`$. According to Eqs.(141), (142), such conditions are fulfilled only if the SUSY generators $`𝒟^{(\alpha )}`$, $`\overline{𝒟}^{(\alpha )}`$ either coincide with the initial operators $`𝒟`$, $`\overline{𝒟}`$, or are reduced to the transformed operators $`𝒟_\pm `$, $`\overline{𝒟}_\pm `$ determined by equations of the type of (173), or correspond to the opposite time directions $`𝒟_\pm ^{(\pm )}`$, $`\overline{𝒟}_\pm ^{(\pm )}`$. Being reduced to the derivatives with respect to the time $`t`$ and the Grassmann coordinates $`\theta `$, $`\overline{\theta }`$, these operators inserted into Eqs.(184) give, as required, zero for the variations of the corresponding action (181). Among the above–mentioned generators, the following ones $`𝒟_{}^{()}={\displaystyle \frac{}{\overline{\theta }}},\overline{𝒟}_{}^{()}={\displaystyle \frac{}{\theta }}+2\overline{\theta }{\displaystyle \frac{}{t}}`$ (185) and their anticommutator $`\{\overline{𝒟}_{}^{()},𝒟_{}^{()}\}=2_t`$ (see Eqs.(142)) are of special interest to us. The operators (185) are the result of the double action of the transformation $`T_{}`$ on the initial generators $`𝒟_\phi `$, $`\overline{𝒟}_\phi `$, Eqs.(37), which gives the generators $`𝒟_{}`$, $`\overline{𝒟}_{}`$, Eqs.(174), corresponding to the opposite time directions. Therefore, the generators $`𝒟_{}^{()}𝒟_{}(t)`$, $`\overline{𝒟}_{}^{()}\overline{𝒟}_{}(t)`$ given by Eqs.(185) are related to the initial ones $`𝒟_\phi (t)`$, $`\overline{𝒟}_\phi (t)`$ and play a significant role in what follows.
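As a quick consistency check (ours), the anticommutator quoted after Eqs.(185) follows from the elementary Grassmann relations $`\{/\overline{\theta },\overline{\theta }\}=1`$ and $`\{/\overline{\theta },/\theta \}=0`$:

```latex
\left\{\mathcal{D}_-^{(-)},\,\overline{\mathcal{D}}_-^{(-)}\right\}
=\left\{\frac{\partial}{\partial\overline{\theta}},\,
        \frac{\partial}{\partial\theta}\right\}
+2\left\{\frac{\partial}{\partial\overline{\theta}},\,
        \overline{\theta}\right\}\frac{\partial}{\partial t}
= 0 + 2\cdot 1\cdot\partial_t = 2\,\partial_t ,
```

in agreement with Eqs.(142).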
In the standard manner, it is easy to show that the above conditions $`\delta S=0`$, $`\overline{\delta }S=0`$ give rise to the Ward identities $$\underset{i=1}{\overset{n}{∑}}𝒟_i^{(\alpha )}\mathrm{\Gamma }^{(n)}(\{z_i\})=0,\underset{i=1}{\overset{n}{∑}}\overline{𝒟}_i^{(\alpha )}\mathrm{\Gamma }^{(n)}(\{z_i\})=0$$ (186) for SUSY $`n`$–point proper vertices $`\mathrm{\Gamma }^{(n)}`$ such as the 2–point supercorrelator $`C(z_2,z_1)`$, Eq.(53), and the self–energy superfunction $`\mathrm{\Sigma }(z_2,z_1)`$. Obviously, for $`𝒟^{(\alpha )}_t`$ and $`𝒟^{(\alpha )}𝒟_{}^{()}`$, the conditions (186) mean that the above–mentioned supercorrelator depends only on the differences $`t_2-t_1`$ and $`\overline{\theta }_2-\overline{\theta }_1`$: $`C_\phi (z_2,z_1)=S(t_2-t_1)+(\overline{\theta }_2-\overline{\theta }_1)\left[G_+(t_2-t_1)\theta _2-G_{}(t_2-t_1)\theta _1\right]`$ (187) where the space dependence is suppressed for brevity. In accordance with the SUSY field definition (35), one has: $`S(t_2-t_1)=\eta (t_2)\eta (t_1),`$ $`G_+(t_2-t_1)=\phi (t_2)\eta (t_1)\vartheta (t_1-t_2)=\overline{\psi }(t_2)\psi (t_1)\vartheta (t_1-t_2),`$ (188) $`G_{}(t_2-t_1)=\eta (t_2)\phi (t_1)\vartheta (t_2-t_1)=\overline{\psi }(t_1)\psi (t_2)\vartheta (t_2-t_1)`$ where the step function $`\vartheta (t)=1`$ for $`t>0`$ and $`\vartheta (t)=0`$ for $`t<0`$. By virtue of the causality principle, the advanced Green function $`G_+(t_2-t_1)`$, which is the factor before $`\overline{\theta }_1\theta _2`$, vanishes at $`t_1<t_2`$, as required. On the other hand, the retarded Green function $`G_{}(t_2-t_1)=0`$ at $`t_1>t_2`$, being the coefficient of $`\overline{\theta }_2\theta _1`$. Moreover, the symmetry condition $`C(z_2,z_1)=C(z_1,z_2)`$ gives rise to the equations $`S(t_2-t_1)=S(t_1-t_2)`$, $`G_{}(t_2-t_1)=G_+(t_1-t_2)`$. Inserting the operator $`\overline{𝒟}_{}^{()}`$ into the Ward identity (186) results in the equation $`2\dot{S}(t)=G_+(t)-G_{}(t)`$ (189) which is the fluctuation–dissipation relation.
All the above statements hold for the self–energy function $`\mathrm{\Sigma }(z_2,z_1)`$ with the components $`\mathrm{\Sigma }(t_2-t_1)`$, $`\mathrm{\Sigma }_\pm (t_2-t_1)`$ which replace $`S(t_2-t_1)`$, $`G_\pm (t_2-t_1)`$, respectively. It is worthwhile to point out the relations in Eqs.(188), which connect the Bose correlators of the components $`\eta `$, $`\phi `$ with the Fermi ones of the components $`\psi `$, $`\overline{\psi }`$. The above–used Ward identities (186) allow one to obtain these relations as a trivial consequence of the SUSY field definition (35). But such equations can also be obtained more simply. Indeed, the Fermi correlator $`\overline{\psi }\psi `$ is equal to $`(\delta ^2V/\delta \eta ^2)^{-1}`$ in accordance with Eqs.(21), (34). On the other hand, using the susceptibility definition and Eqs.(22), (26), we have $`\eta \phi =\delta \eta /\delta \phi =(\delta \phi /\delta \eta )^{-1}=(\delta ^2V/\delta \eta ^2)^{-1}`$ for the Bose correlator, Q.E.D. Obviously, to pass to the correlator of the two–component field (11), it is necessary to replace the factors $`(\overline{\theta }_2-\overline{\theta }_1)\theta _2`$, $`(\overline{\theta }_2-\overline{\theta }_1)\theta _1`$ in Eq.(187) by the nilpotent coordinates $`\vartheta _2`$, $`\vartheta _1`$, respectively, and to omit the term with $`\overline{\vartheta }_2\vartheta _1`$. The SUSY correlator $`C_f(z_2,z_1)=T_{}(z_2)T_{}(z_1)C_\phi (z_2,z_1)`$ corresponding to the superfields (27) and (44), takes the form $`C_f(z_2,z_1)=S+\overline{\theta }_2\theta _2m_++\overline{\theta }_1\theta _1m_{}-\overline{\theta }_2\theta _1G_{}-\overline{\theta }_1\theta _2G_+,`$ (190) where the arguments $`t_2-t_1`$ are suppressed in the factors $`S`$, $`m_\pm `$, $`G_\pm `$. Moreover, in view of Eq.(22), new functions are introduced (cf.
Eqs.(188)) $`m_+(t)=\eta (t)\vartheta (t)/f_{\mathrm{ext}},m_{}(t)=\eta (t)\vartheta (-t)/f_{\mathrm{ext}},f_{\mathrm{ext}}f`$ (191) to represent the connection between the averaged order parameter $`\eta (t)`$ and the external force $`f_{\mathrm{ext}}f`$ (note that the latter is switched on at time $`t=0`$ and remains constant). The correlators (191) are related to the Green functions as follows: $`G_\pm (t)=m_\pm (t)\pm \dot{S}(t)`$ (192) and possess the symmetry condition $`m_+(t)=m_{}(-t)`$. Figure captions Fig.1 Dependence of the macroscopic memory parameter $`q_0`$ on the quenched disorder intensity $`h`$. Fig.2 Dependencies of the ergodicity breaking temperature $`T_0`$ (solid line) and the freezing temperature $`T_f`$ (thin line) on the quenched disorder intensity $`h`$ ($`u=0.5`$, $`v=0`$) for different values of the effective interaction parameter: a) $`w`$=0.5; b) $`w`$=0.2. Fig.3 Temperature dependencies of the thermodynamic and adiabatic susceptibilities $`\chi `$ and $`\chi _0`$ for: a) different values of the quenched disorder intensity $`h`$ (curves 1, 2 correspond to $`h=0,4`$) at $`w=0.5`$, $`u=0.5`$, $`v=0`$; b) different values of the effective interaction parameter (curves 1, 2 correspond to $`w=0.5,0.2`$) at $`h=4`$, $`u=0.5`$, $`v=0`$. Fig.4 Dependencies of the ergodicity breaking temperature $`T_0`$ on the quenched disorder intensity $`h`$ and: a) effective interaction parameter $`w`$ at $`u=0.5`$, $`v=0`$; b) proper interaction parameter $`v`$ at $`u=1`$, $`w=0.5`$. Fig.5 The shift of the temperature dependencies of the thermodynamic susceptibility $`\chi (T)`$ caused by variation of: a) effective interaction parameter $`w`$ at $`u=0.5`$, $`v=0`$ (curves 1, 2, 3 correspond to $`w=`$0.5, 1, 1.5); b) proper interaction parameter $`v`$ at $`u=0.5`$, $`w=0.5`$ (curves 1, 2, 3 correspond to $`v`$=0, 1, 1.5); c) anharmonicity ratio $`u`$ at $`w=0.5`$, $`v=0`$ (curves 1, 2, 3 correspond to $`u=`$0.5, 1, 1.5).
Fig.6 Temperature dependencies ($`u=w=0.5`$, $`v=0`$) of: a) non–ergodicity parameter $`\mathrm{\Delta }`$; b) microscopic memory parameter $`q`$ (curves 1, 2 correspond to $`h=`$0, 4).
# Maximum Metallic Conductivity in Si-MOS Structures ## Abstract We found that the conductivity of the two-dimensional electron system in Si-MOS structures is limited to a maximum value, $`G_{\mathrm{max}}`$, as either density increases or temperature decreases. This value $`G_{\mathrm{max}}`$ is weakly disorder dependent, ranging from 100 to $`140e^2/h`$ for samples whose mobilities differ by a factor of 4. According to the conventional theory of metals, the conductivity of the two-dimensional carrier system should vanish in the limit of zero temperature. Recently, an unconventional metallic-like temperature dependence of the conductivity was found in two-dimensional (2D) carrier systems in different materials. The effect manifests itself in the exponentially strong rise of the conductivity $`G`$ (by about one order of magnitude in Si-MOS structures, where it is most pronounced) as the temperature decreases below $`0.3E_F/k_B`$. The origin of the effect remains under discussion and is intimately related to the question of the ground state conductivity in the $`T=0`$ limit. The existing experiments are taken at finite temperatures (though much less than $`E_F/k_B`$) and it is not absolutely clear whether or not the observed “metallic-like” temperature behavior of $`G`$ corresponds to the ground state conductivity. Since for the Fermi liquid the only possibility is $`G=0`$, it was suggested that the two-dimensional strongly interacting carrier system can become a perfect metal with infinite conductivity $`G`$ at $`T=0`$, but exhibiting non-Fermi-liquid behavior. It was even suggested that the 2D interacting system could become a superconductor. In order to verify these possibilities, we have extended the measurements to carrier densities about $`100`$ times higher than the critical density $`n_\mathrm{c}`$ at which the exponential decrease of the resistivity sets in.
Our investigations are motivated by the fact that as density increases, the Drude conductivity increases and “disorder” ($`1/k_Fl`$) decreases. From the measurements at high density, we expected to verify whether or not the metallic-like conductivity survives at high $`G`$ values, and to probe the role of Coulomb interaction effects (where the ratio of the Coulomb to Fermi energy decreases proportionally to $`n^{-1/2}`$) and of spin-related effects (which should persist as density increases). We have found that the conductivity in (100) Si-MOS structures shows a *maximum* as a function of carrier density. The maximum value $`G_{\mathrm{max}}`$ ranges from 100 to 140 and is weakly dependent on the mobility of the sample (conductivity throughout this paper is in units of $`e^2/h=1/25813`$ Ohm<sup>-1</sup>, and the resistivity $`\rho =1/G`$). The strong exponential dependence of $`G(T)`$ (with $`dG/dT<0`$) which exists at relatively high temperatures $`T<0.3E_F/k_B`$ was found to persist up to the highest density studied. However, at low temperatures, $`T<0.007E_F/k_B`$ (and at least for high densities), in the vicinity of $`n=n_{\mathrm{max}}`$, this metallic-like dependence transforms into a weak $`\mathrm{ln}T`$ dependence with a positive derivative, $`dG/dT>0`$, thus indicating the onset of a weakly localized state. The ac- and dc-measurements of the conductivity were performed on (100) Si-MOS structures at low dissipated power. Five samples were studied in the density range 0.8 to $`100\times 10^{11}`$ cm<sup>-2</sup>; their relevant parameters are listed in Table 1. In order to adjust the biasing current so as not to destroy the phase coherence in the carrier system, we determined the phase breaking time, $`\tau _\varphi `$, from the weak negative magnetoresistance in low magnetic fields. Measurements were taken in the temperature range 0.29 to 45 K and, partly, 0.018 to 4 K, by slowly sweeping the temperature over several hours.
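As a numeric aside (ours, not from the paper), the conversion between the dimensionless conductance used here and SI units follows directly from the defining constants; a minimal sketch:

```python
# Conductance quantum e^2/h used as the unit of G in the paper.
e = 1.602176634e-19   # elementary charge, C (exact in SI since 2019)
h = 6.62607015e-34    # Planck constant, J s (exact in SI since 2019)

G_unit = e**2 / h          # ≈ 3.874e-5 S
R_unit = 1.0 / G_unit      # ≈ 25812.8 Ohm, the "1/25813 Ohm^-1" of the text

# The reported G_max = 100-140 in these units corresponds, at the upper end, to
G_max_SI = 140 * G_unit    # ≈ 5.4e-3 S, i.e. a sheet resistivity of about 184 Ohm
```

This also shows why the maximum conductance quoted here is deep in the "good metal" regime: it is two orders of magnitude above the conductance quantum.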
The data taken on all five samples were qualitatively similar. A typical density dependence of the conductivity in the “metallic” range, $`n=(6-100)\times 10^{11}`$ cm<sup>-2</sup>, is shown in Fig. 1 for different temperatures, 0.3 to 41 K. The conductivity, $`G`$, first increases with density, reaches a maximum at $`n=(35-43)\times 10^{11}`$ cm<sup>-2</sup>, and then decreases again. Shubnikov-de Haas data taken on a few high mobility samples show the onset of a second frequency at $`n55\times 10^{11}`$ cm<sup>-2</sup>, which is due to population of the second subband. The reversal of the density dependence of the conductivity may be caused by an increase of the scattering rate as $`E_F`$ approaches the bottom of the next subband. Table I shows that the maximum conductivity value is weakly dependent on disorder, $`G=100-140`$ for the studied samples. At the same time, the density values $`n_{\mathrm{max}}`$, corresponding to the maximum conductivity, increase by a factor 2 as the mobility decreases by a factor 4. In Fig. 2, the temperature dependence of the conductivity is shown for high densities, $`(8-80)\times 10^{11}`$ cm<sup>-2</sup>. As density increases, the conductivity first increases (curves 1 to 6), reaches a maximum (curve 6) at a density $`n_{\mathrm{max}}`$ (which is $`32\times 10^{11}`$ cm<sup>-2</sup> for Si-15a), and, finally, decreases with density (curves 7 - 12). This leads to a crossing of the $`G(T)`$ curves taken at different densities $`n>n_{\mathrm{max}}`$. Such a crossing has also been reported to occur for $`p`$-GaAs/AlGaAs in Ref. . However, in our measurements, the $`G(n)`$ curves for different temperatures do not intersect at a single density. In Fig. 2, the triangles depict for each curve the temperature $`T^*=0.007E_F/k_B`$ for the corresponding density.
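For orientation, the Fermi-energy scale behind $`T^*=0.007E_F/k_B`$ can be estimated for a (100) Si inversion layer. The sketch below assumes the standard values $`m^*=0.19m_e`$ and twofold valley degeneracy; these parameters are our assumption, not numbers quoted in the paper:

```python
import math

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837e-31     # kg
k_B  = 1.380649e-23      # J/K

m_eff = 0.19 * m_e       # (100) Si 2DEG effective mass (assumed)
g_s, g_v = 2, 2          # spin and valley degeneracy (assumed)

def fermi_temperature(n_m2):
    """E_F/k_B for a degenerate 2DEG: E_F = 2*pi*hbar^2*n/(g_s*g_v*m_eff)."""
    E_F = 2 * math.pi * hbar**2 * n_m2 / (g_s * g_v * m_eff)
    return E_F / k_B

n = 32e11 * 1e4               # n_max of sample Si-15a, converted from cm^-2 to m^-2
T_F = fermi_temperature(n)    # a few hundred kelvin
T_star = 0.007 * T_F          # of order 1-2 K, matching the kelvin scale of Fig. 2
```

With these assumptions the crossover temperature $`T^*`$ at $`n_{\mathrm{max}}`$ indeed lands in the sub-3 K range where the sign change of $`dG/dT`$ is observed.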
In the region confined between $`T=0.05E_F/k_B`$ and $`0.007E_F/k_B`$, the exponential dependence seems to “saturate”, but in fact it crosses over, below $`T^*=0.007E_F/k_B`$, to a weaker dependence. The “high-temperature behavior” (for $`T>T^*`$) of the conductivity remains metallic-like for all curves in Fig. 2, up to the highest density studied. However, the curves taken for high densities (close to the maximum conductance) at low temperatures clearly show the onset of a localizing $`\mathrm{ln}T`$ dependence with $`dG/dT>0`$. The localizing low temperature dependence is shown in an expanded scale in Fig. 3a. As the temperature is varied, it persists over one order of magnitude and does not saturate at low temperatures. Its slope $`dG/d\mathrm{ln}T0.35`$ is consistent with the conventional theory of weak localization. Since the “low-temperature” localizing $`T`$-dependence develops on the background of the strong exponential increase in conductivity at “high temperatures”, we conclude that the exponential rise in $`G(T)`$ cannot be considered as a proof of metallic conductance, at least for high densities $`nn_c`$. The change of sign of $`dG/dT`$ shown in Fig. 3a for $`T<3`$ K (for $`nn_{\mathrm{max}}`$) is not caused by significant changes in disorder for densities around $`n_{\mathrm{max}}`$. The conductance $`G`$ (which is $`2\times k_Fl`$ in the Drude approximation for the two valley system) is of the order of 100 ($`k_F`$ is the Fermi wave vector and $`l`$ is the mean free path). Also, for the spin-orbit parameter in the chiral model, $`2\mathrm{\Delta }\tau /ℏ4-8`$ holds ($`\mathrm{\Delta }`$ is the zero magnetic field “spin-splitting” at $`E=E_F`$). Therefore, the above parameters seem to be unimportant at $`nn_{\mathrm{max}}`$. The picture is less clear for lower densities, $`n=(1-15)\times 10^{11}`$ cm<sup>-2</sup> (see Figs.
3b and c), where the slope decreases, disappears, and finally changes sign to the negative “delocalizing” one, $`dG/dT<0`$. If the above scenario persisted to much lower temperatures, the conductivity $`G(T)`$ data taken for different densities would cross each other at finite temperatures (but at much lower temperature than shown here). This possibility seems to be unphysical and means that at least part of the data taken at the lowest temperatures (most probably, the lower-density data) does not correspond to the ground state conductivity. One cannot exclude, therefore, that the low-temperature data may be affected by the tail of the strong metallic-like exponential “high-temperature” dependence, extending down to low temperatures. Anyhow, on the basis of the data shown, it seems rather unlikely that the conductivity will grow to infinity in the $`T0`$ limit, both for high as well as for low carrier densities. In order to reach a more definite conclusion, the measurements have to be taken down to temperatures $`TT^*0.007E_F/k_B`$. In summary, we have found that the conductivity value in (100) Si-MOS structures is limited to a finite value, $`G_{\mathrm{max}}140`$, as density or temperature vary. We found that the strong metallic-like increase in the conductivity as $`T`$ decreases (visible at “high temperatures” $`T>0.01E_F/k_B`$) and the “low-temperature” behavior (for $`T<0.01E_F/k_B`$) are rather independent of each other. Despite the observation that the maximum conductivity value is nearly the same for different Si-MOS samples, we do not have evidence that this value is related to a many-body ground state. The fact that the maximum in $`G(T)`$ appears at a finite temperature ($`T^*=0.007E_F/k_B`$) actually indicates a single-particle origin. Such a maximum of $`G`$ could be the result of a superposition of a scattering mechanism and weak localization effects. The behavior of the conductivity at lower temperatures requires further study. V.P.
acknowledges discussions with B. Altshuler, M. Baranov, A. Finkel’stein, V. Kravtsov, S. V. Kravchenko, D. Maslov, A. Mirlin, and I. Suslov. The work was supported by RFBR 97-02-17378, by the Programs “Physics of solid-state nanostructures” and “Statistical physics”, by INTAS, NWO, and by FWF P13439, Austria.
# The classical essence of black hole radiation ## I Introduction The study of black holes from the astrophysical point of view and by astronomers has blossomed in the last decade because of the dramatic increase in the number of black hole candidates from the sole candidate (Cygnus X-1) some 25 years ago. This in turn requires a deeper familiarity with black hole physics, and especially with black hole radiation, for astronomers and classical relativists. Hawking, in his original work on black hole radiation (Ref.), used a quantum field theoretical approach to arrive at this radiation. In the next few sections we will describe the classical essence of this radiation in a language which is free from the usual quantum field theoretic tools and therefore more suitable for astronomers and relativists. Our derivation of Hawking radiation will also establish the close connection between black hole radiation and the existence of an event horizon. ## II Schwarzschild black hole We start with the simplest case of the one-parameter family of black holes, namely the Schwarzschild black hole, which was previously discussed in Ref.. Consider a radial light ray in the Schwarzschild spacetime which propagates from $`r_{in}=2M+ϵ`$ at $`t=t_{in}`$ to the event $`𝒫(t,r)`$ where $`r2M`$ and $`ϵ2M`$. The trajectory can be found using the fact that for light rays $$\mathrm{d}s^2=\left(1-\frac{2M}{r}\right)\mathrm{d}t^2-\frac{1}{1-\frac{2M}{r}}\mathrm{d}r^2-r^2\mathrm{d}\mathrm{\Omega }^2=0,$$ for radial light rays, $`\mathrm{d}\theta =\mathrm{d}\varphi =0`$, we have $$\frac{\mathrm{d}r}{\mathrm{d}t}=1-\frac{2M}{r},$$ $`(1)`$ from which the trajectory with the required initial condition is $$r=r_{in}-2M\mathrm{ln}\left(\frac{r-2M}{2M}\right)+2M\mathrm{ln}\left(\frac{r_{in}-2M}{2M}\right)+t-t_{in}t-t_{in}+2M\mathrm{ln}\left(\frac{ϵ}{2M}\right)$$ $`(2)`$ where the last equality uses $`r2M`$, $`ϵ2M`$. The frequency of a wave will be redshifted as it propagates on this trajectory.
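These relations are easy to verify numerically in geometric units ($`G=c=1`$, $`M=1`$). The following sketch (ours, not part of the paper) checks the exact integral of Eq.(1) and the exponential redshift factor that the trajectory (2) implies (cf. Eq.(4) below):

```python
import math

M = 1.0  # geometric units G = c = 1

def t_of_r(r, r_in):
    """Exact integral of dr/dt = 1 - 2M/r for an outgoing radial null ray."""
    return (r - r_in) + 2*M*math.log((r - 2*M)/(r_in - 2*M))

# finite-difference check that the trajectory satisfies Eq.(1)
r, h = 3.0, 1e-6
drdt = 2*h / (t_of_r(r + h, 2.5) - t_of_r(r - h, 2.5))
# drdt ≈ 1 - 2M/r = 1/3

# with t - t_in ≈ r - 2M ln(eps/2M) for r >> 2M, the factor
# exp(-(t - t_in - r)/4M) appearing in the redshift reduces to sqrt(eps/2M)
eps = 1e-8
redshift = math.exp(-(-2*M*math.log(eps/(2*M)))/(4*M))
# redshift ≈ sqrt(eps/2M) ≈ 7.1e-5
```

The point of the check is that the exponentially small factor depends on the starting point only through $`ϵ`$, which is what drives the thermal spectrum below.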
This redshift is basically due to the fact that the frequency is measured in terms of the proper time $`\tau `$, which flows differently at different points of a stationary spacetime according to the following relation: $$\tau =\frac{1}{c}\sqrt{g_{00}}x^0.$$ $`(3)`$ The frequency $`\mathrm{\Omega }`$ at $`r2M`$ will be related to the frequency $`\mathrm{\Omega }_{in}`$ at $`r=2M+ϵ`$ by $$\mathrm{\Omega }\mathrm{\Omega }_{in}[g_{00}(r=2M+ϵ)]^{1/2}\mathrm{\Omega }_{in}\left(\frac{ϵ}{2M}\right)^{1/2}=\mathrm{\Omega }_{in}\mathrm{exp}\left(-\frac{t-t_{in}-r}{4M}\right)$$ $`(4)`$ If the wave packet, $`\mathrm{\Phi }(r,t)\mathrm{exp}(i\theta (t,r))`$, centered on this null ray has a phase $`\theta (r,t)`$, then the instantaneous frequency is related to the phase by $`(\theta /t)=\mathrm{\Omega }`$. Integrating (4) with respect to $`t`$, we find the relevant wave mode to be $$\mathrm{\Phi }(t,r)\mathrm{exp}i∫\mathrm{\Omega }dt\mathrm{exp}\left[-4Mi\mathrm{\Omega }_{in}\mathrm{exp}\left(-\frac{t-t_{in}-r}{4M}\right)\right]$$ $`(5)`$ (This form of the wave can also be obtained by directly integrating the wave equation in Schwarzschild space with appropriate boundary conditions). Equation (4) shows that, despite being in a static spacetime, the frequency of the wave (measured by an observer at fixed $`r2M`$) depends nontrivially on $`t`$, for a fixed $`t_{in}`$ and $`ϵ`$. Such an observer will not see monochromatic radiation. Therefore an observer using the time coordinate $`t`$ will Fourier decompose these modes with respect to the frequency $`\omega `$ defined using $`t`$ as: $$\mathrm{\Phi }(t,r)=\frac{1}{2\pi }∫_{-∞}^{∞}f(\omega )e^{-i\omega t}d\omega $$ $`(6)`$ where $$f(\omega )=∫_{-∞}^{∞}\mathrm{\Phi }(t,r)e^{i\omega t}dt∫_0^{∞}x^{-4iM\omega -1}\mathrm{exp}(-4Mix\mathrm{\Omega }_{in})dx$$ $`(7)`$ and $`x=\mathrm{exp}\left([-t+t_{in}+r]/4M\right)`$. To evaluate the above integral we rotate the contour to the imaginary axis, i.e.
$`xy=ix`$, $$f(\omega )e^{-2\pi M\omega }∫_0^{i∞}Y^{z-1}e^{-Y}dY$$ $`(8)`$ where $`z=-4iM\omega `$ and $`Y=4M\mathrm{\Omega }_{in}y`$. Using the fact that the integral on the right-hand side of the above relation is one of the representations of the Gamma function, we get the corresponding power spectrum to be $$|f(\omega )|^2(\mathrm{exp}(8\pi M\omega )-1)^{-1}$$ $`(9)`$ where we have used the fact that $`|\mathrm{\Gamma }(ix)|^2=(\pi /x\mathrm{sinh}\pi x)`$. In terms of conventional units the above relation becomes $$|f(\omega )|^2\left(\mathrm{exp}\left(\frac{8\pi GM\omega }{c^3}\right)-1\right)^{-1}\left(\mathrm{exp}\left(\frac{\omega }{\omega _0}\right)-1\right)^{-1}$$ $`(10)`$ where $$\omega _0=\frac{c^3}{8\pi GM}$$ $`(11).`$ As one can see, no $`ℏ`$ appears in the above analysis, and $`\omega _0`$ can be thought of as the characteristic frequency of the problem by a radio astronomer who thinks in terms of frequency. On the other hand, an X-ray or a $`\gamma `$-ray astronomer (who thinks in terms of photons) will introduce the energy $`E=ℏ\omega `$ into the above relation in the following form: $$|f(\omega )|^2\left(\mathrm{exp}\left(\frac{ℏ\omega }{ℏ\omega _0}\right)-1\right)^{-1}\left(\mathrm{exp}\left(\frac{E}{k_BT}\right)-1\right)^{-1}$$ $`(12)`$ which shows that the corresponding power spectrum is Planckian at temperature $$T=\frac{ℏc^3}{8\pi GMk_B}.$$ $`(13)`$ ## III Reissner-Nordstrom black hole The same approach can be used to study the radiation in the space of static charged black holes, which are characterized by the two parameters M and Q. The equation governing the outgoing null radial geodesics in R-N spacetime has the following form $$\frac{\mathrm{d}r}{\mathrm{d}t}=1-\frac{2M}{r}+\frac{Q^2}{r^2}$$ $`(14)`$ In terms of the conventional units the above equation takes the following form $$\frac{\mathrm{d}r}{\mathrm{d}t}=c-\frac{2M}{r}\left(\frac{G}{c}\right)+\frac{Q^2}{r^2}\left(\frac{Gℏ}{c^2}\right).$$ $`(15)`$ The event horizon of the Reissner-Nordstrom black hole is at $`r_+=M+(M^2-Q^2)^{1/2}`$.
Considering a light ray propagating from $`r_{in}=r_++ϵ`$ at $`t=t_{in}`$ to the event $`𝒫(t,r)`$ where $`rr_+`$ and $`ϵr_+`$, we find the trajectory in the following form $$rt-t_{in}+\frac{r_{+}^{2}}{2(M^2-Q^2)^{1/2}}\mathrm{ln}ϵ$$ $`(16)`$ The redshifted frequency $`\mathrm{\Omega }`$ will be related to the frequency at $`r=r_++ϵ`$ by $$\mathrm{\Omega }\mathrm{\Omega }_{in}[g_{00}(r=r_++ϵ)]^{1/2}\mathrm{\Omega }_{in}ϵ^{1/2}=\mathrm{\Omega }_{in}\mathrm{exp}\left(-\frac{t-t_{in}-r}{(M+(M^2-Q^2)^{1/2})^2/(M^2-Q^2)^{1/2}}\right)$$ $`(17)`$ Now if we repeat the analysis of the Schwarzschild case for the R-N spacetime, in exactly the same way, we find that the corresponding power spectrum for a wave packet which has scattered off the R-N black hole and travelled to infinity at late times has the following Planckian form $$|f(\omega )|^2\left(\mathrm{exp}\left[\frac{2\pi [M+(M^2-Q^2)^{1/2}]^2}{(M^2-Q^2)^{1/2}}\omega \right]-1\right)^{-1}$$ $`(18)`$ at temperature $`T=\frac{(M^2-Q^2)^{1/2}}{2\pi [M+(M^2-Q^2)^{1/2}]^2}`$, which is the standard result and reduces to that of the Schwarzschild case when $`Q=0`$. ## IV Hawking radiation of a Kerr black hole In applying the approach of the last two sections to the radiation of Kerr black holes, we should be more careful because, unlike the Schwarzschild and R-N black holes, the event horizon and the infinite redshift surface do not coincide. We will see that in this case the infinite redshift surface acts as a boundary for the outgoing null geodesics originating from inside the ergosphere, on which we should be concerned about the continuity problem. In Kerr spacetime the principal null congruences play the same role as the radial null geodesics in the Schwarzschild and R-N spacetimes, so we consider them in our derivation of the Hawking radiation by Kerr black holes. The equation governing the principal null congruences ($`\theta `$ = const.)
is given by $$\frac{\mathrm{d}r}{\mathrm{d}t}=1-\frac{2M}{r}+\frac{a^2}{r^2}$$ $`(19)`$ If we restrict our attention to the case $`a^2<M^2`$, the above equation can be integrated to give $$t=r+\left(M+\frac{M^2}{(M^2-a^2)^{1/2}}\right)\mathrm{ln}|r-r_+|+\left(M-\frac{M^2}{(M^2-a^2)^{1/2}}\right)\mathrm{ln}|r-r_{}|$$ $`(20)`$ where $$r_\pm =M\pm (M^2-a^2)^{1/2}$$ $`(21)`$ are the event horizons of the Kerr metric. Now, as in the previous sections, we consider a light ray propagating from the point $`r_++ϵ`$ at $`t=t_{in}`$ to the event $`𝒫(r,t)`$ where $`r,tM`$ and $`ϵM`$. Starting from a point very close to the outer event horizon ($`r_++ϵ`$), the trajectory has the following form $$rt-t_{in}+\left(M+\frac{M^2}{(M^2-a^2)^{1/2}}\right)\mathrm{ln}ϵ$$ $`(22)`$ The frequency $`\mathrm{\Omega }`$ at $`r`$ will be related to the frequency $`\mathrm{\Omega }_{in}`$ of a light ray emitted by a locally nonrotating observer (Ref.) at $`r=r_++ϵ`$ (inside the ergosphere) by (see Appendix A) $$\mathrm{\Omega }=\mathrm{\Omega }_{in}\frac{\left(g_{00}-g_{03}^2/g_{33}\right)^{1/2}}{(1+(g_{03}/g_{00})a\mathrm{sin}^2\theta )}\mathrm{\Omega }_{in}ϵ^{1/2}=\mathrm{\Omega }_{in}\mathrm{exp}\left(-\frac{(t-t_{in}-r)(M^2-a^2)^{1/2}}{2(M^2+M(M^2-a^2)^{1/2})}\right)$$ $`(23)`$ Repeating the procedure of the last two sections for the above redshifted frequency, we find the following power spectrum for a wave packet scattered off the Kerr black hole at late times $$|f(\omega )|^2\left(\mathrm{exp}\left[\frac{4\pi [M^2+M(M^2-a^2)^{1/2}]}{(M^2-a^2)^{1/2}}\omega \right]-1\right)^{-1}$$ $`(24)`$ which is Planckian at temperature $$T=\frac{(M^2-a^2)^{1/2}}{4\pi [M^2+M(M^2-a^2)^{1/2}]}.$$ $`(25)`$ This is again the standard result (Ref.) and reduces to (13) for $`a=0`$.
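The three temperatures can be checked against each other numerically; the sketch below (ours) works in geometric units for the limits and restores SI units for a solar-mass estimate:

```python
import math

def T_schw(M):            # Eq.(13) in geometric units: T = 1/(8*pi*M)
    return 1.0/(8*math.pi*M)

def T_RN(M, Q):           # temperature quoted below Eq.(18)
    s = math.sqrt(M*M - Q*Q)
    return s/(2*math.pi*(M + s)**2)

def T_kerr(M, a):         # Eq.(25)
    s = math.sqrt(M*M - a*a)
    return s/(4*math.pi*(M*M + M*s))

# Both reduce to the Schwarzschild value for Q = 0, a = 0,
# and both vanish in the extremal limits Q -> M, a -> M.

# Eq.(25) also agrees with the usual surface-gravity form T = kappa/(2*pi),
# kappa = (r_plus - M)/(r_plus**2 + a**2), since r_plus**2 + a**2 = 2*M*r_plus:
M, a = 1.0, 0.6
r_plus = M + math.sqrt(M*M - a*a)
kappa = (r_plus - M)/(r_plus**2 + a**2)

# Restoring units in Eq.(13) for a solar-mass black hole:
hbar, c, G, kB = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
T_sun = hbar*c**3/(8*math.pi*G*1.989e30*kB)   # ≈ 6.2e-8 K
```

The tiny kelvin-scale result for a solar mass illustrates why the characteristic frequency $`\omega _0`$, rather than the temperature, is the natural quantity for a radio astronomer.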
## V Discussion In this letter we gave a simple derivation of black hole radiation which strips the Hawking process to its bare bones and establishes the following two facts: (i) The key input which leads to the Planckian spectrum is the exponential redshift, given by Eqs. (4), (17) and (23), of modes which scatter off the black hole and travel to infinity at late times; this in turn requires the existence of an event horizon. It is well known that frequencies of outgoing waves at late times in black hole evaporation correspond to super-Planckian energies of the ingoing modes near the horizon. One might ask where the ingoing modes corresponding to the outgoing modes come from. This is where quantum field theory plays its role in black hole radiation. According to quantum field theory, the vacuum is a dynamical entity and space is nowhere free of vacuum fluctuations. The vacuum field fluctuations can be thought of as a superposition of ingoing and outgoing modes. A collapsing star will introduce a mismatch between these virtual modes, causing the appearance of a real particle at infinity. The calculation shows that the energy carried by the radiation is extracted from the black hole (Ref.). What we have done is to mimic the essence of this process by considering a classical mode propagating from near the event horizon to infinity. (ii) The analysis given in the previous sections is entirely classical and no $`ℏ`$ appears anywhere. The mathematics of Hawking evaporation is purely classical and lies in the Fourier transform of an exponentially redshifted wave mode (for a more detailed discussion of classical versus quantum features see Ref.). ## Appendix A : Gravitational redshift by a Kerr black hole In this appendix we derive the gravitational redshift of a light ray emitted from inside the ergosphere and received by a Lorentzian observer at infinity (as given by equation (23) of the text).
The general relation for the redshift between a source and an observer located at events $`𝒫_1`$ and $`𝒫_2`$ in a stationary spacetime is given by (Ref) $$\frac{\omega _{_{𝒫_1}}}{\omega _{_{𝒫_2}}}=\frac{(k_au^a)_{𝒫_1}}{(k_au^a)_{𝒫_2}}$$ $`(A1)`$ where $`k^a`$ is the wave vector and the $`u^a`$s are the 4-velocities of the source and the observer. The numerator and denominator are evaluated at the events $`𝒫_1`$ and $`𝒫_2`$ respectively. One should note that the null geodesic (or equivalently its tangent vector) joining the source and the observer should be continuous over the boundary, which in this case is the infinite redshift surface. The principal null congruences we are considering here indeed satisfy this condition. Since there are no static observers inside the ergosphere, we choose as our source the locally nonrotating observer (Ref.) whose angular velocity in Boyer-Lindquist coordinates is given by (Ref) $$\mathrm{\Omega }=-\frac{g_{03}}{g_{33}}=\frac{2Mra}{(r^2+a^2)^2-\mathrm{\Delta }a^2\mathrm{sin}^2\theta }$$ $`(A2)`$ so the 4-velocities of the source and the static Lorentzian observer are given by (Ref) $$u^a|_S=\frac{1}{(g_{00}+2\mathrm{\Omega }g_{03}+\mathrm{\Omega }^2g_{33})^{1/2}}(1,0,0,\mathrm{\Omega })\quad \mathrm{and}\quad u^a|_{\mathrm{\infty }}=(1/\sqrt{g_{00}},0,0,0)$$ $`(A3)`$ Substituting (A2) and (A3) in (A1) we have $$\omega |_{\mathrm{\infty }}=\omega |_S\left(\frac{k_0|_{\mathrm{\infty }}}{(k_0u^0+k_3u^3)|_S}\right)$$ Using the fact that the frequency measured with respect to the coordinate time, $`k_0`$, is constant and that $`k_3/k_0=-a\mathrm{sin}^2\theta `$ for the principal null congruences (Ref) we have $$\omega |_{\mathrm{\infty }}=\omega |_S\left(\frac{1}{(u^0-a\mathrm{sin}^2\theta u^3)|_S}\right)$$ $`(A4)`$ Now, substituting (A2) and (A3) in (A4), we obtain the following result $$\omega |_{\mathrm{\infty }}=\omega |_S\frac{\left(g_{00}-g_{03}^2/g_{33}\right)^{1/2}}{(1+(g_{03}/g_{00})a\mathrm{sin}^2\theta )}$$ $`(A5)`$ which is the relation used in the text.
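As a numerical sanity check of (A5), one can verify that the redshift factor scales as $`ϵ^{1/2}`$ for an emitter at $`r=r_++ϵ`$, as used in eq. (23). The sketch below is an illustration added here; it writes the standard Boyer-Lindquist metric components in the $`(+---)`$ signature, and the chosen mass, spin, and offsets are arbitrary:

```python
import numpy as np

M, a, theta = 1.0, 0.6, np.pi/2
r_plus = M + np.sqrt(M**2 - a**2)

def redshift_factor(r):
    # standard Boyer-Lindquist components, signature (+---), Sigma = r^2 + a^2 cos^2(theta)
    Sigma = r**2 + a**2*np.cos(theta)**2
    g00 = 1 - 2*M*r/Sigma
    g03 = 2*M*a*r*np.sin(theta)**2/Sigma
    g33 = -((r**2 + a**2) + 2*M*a**2*r*np.sin(theta)**2/Sigma)*np.sin(theta)**2
    # numerator of eq. (A5); since g00*g33 - g03^2 = -Delta sin^2(theta),
    # this is Delta sin^2(theta)/|g33| and hence proportional to epsilon near r_+
    return np.sqrt(g00 - g03**2/g33)

eps1, eps2 = 1e-4, 1e-6
ratio = redshift_factor(r_plus + eps1)/redshift_factor(r_plus + eps2)
# the factor scales as sqrt(epsilon): the ratio should be sqrt(eps1/eps2) = 10
assert abs(ratio - np.sqrt(eps1/eps2)) < 0.05
```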
# Positional Disorder (Random Gaussian Phase Shifts) in the Fully Frustrated Josephson Junction Array (2D XY Model) ## Abstract We consider the effect of positional disorder on a Josephson junction array with an applied magnetic field of $`f=1/2`$ flux quantum per unit cell. This is equivalent to the problem of random Gaussian phase shifts in the fully frustrated 2D XY model. Using simple analytical arguments and numerical simulations, we present evidence that the ground state vortex lattice of the pure model becomes disordered, in the thermodynamic limit, by any finite amount of positional disorder. The stability of vortex lattices to random disorder is a topic of considerable recent interest, motivated by studies of the high temperature superconductors. In two dimensions (2D), periodic arrays of Josephson junctions form a well controlled system for investigating similar issues of vortex fluctuations and disorder. Here we consider the effect of “positional” disorder on the vortex lattice of the fully frustrated Josephson array, with $`f=1/2`$ flux quantum of applied magnetic field per unit cell. Positional disorder was first discussed with respect to the Kosterlitz-Thouless (KT) transition for the $`f=0`$ model in zero magnetic field. Early arguments predicting a reentrant normal phase at low temperatures have been revised by recent works which argue that there is a finite critical disorder strength $`\sigma _c\simeq \sqrt{\pi /8}`$; for $`\sigma <\sigma _c`$ an ordered state persists for $`0\le T\le T_c(\sigma )`$. For the pure $`f=1/2`$ case on a square grid, the ordered state has two broken symmetries: the $`U(1)`$ symmetry (“KT-like” order) associated with superconducting phase coherence, and the $`Z(2)`$ symmetry (“Ising-like” order) associated with the “checkerboard” vortex lattice, in which a vortex sits on every other site.
Previous works have considered the effect of positional disorder on this $`f=1/2`$ model; all have concluded that both Ising-like and KT-like order persist for at least small disorder strengths $`\sigma `$. In this work, however, we present new arguments that suggest that, for $`f=1/2`$, the critical disorder is $`\sigma _c=0`$. The Hamiltonian for the Josephson array is given by the “frustrated” 2D XY model , $$\mathcal{H}[\theta _i]=\underset{i\mu }{\sum }U(\theta _i-\theta _{i+\widehat{\mu }}-A_{i\mu }),$$ (1) where $`i`$ are the sites of a periodic square grid with basis vectors $`\widehat{\mu }=\widehat{x}`$, $`\widehat{y}`$, the sum is over all nearest neighbor (n.n.) bonds $`i,i+\widehat{\mu }`$, and $`\theta _i-\theta _{i+\widehat{\mu }}-A_{i\mu }`$ is the gauge invariant phase difference across the bond, with $`A_{i\mu }=(2\pi /\varphi _0)\int _i^{i+\widehat{\mu }}𝐀\cdot d𝐥`$ the integral of the vector potential. Positional disorder arises from random geometric distortions of the bonds of the grid, resulting in $`A_{i\mu }=A_{i\mu }^{(0)}+\delta A_{i\mu }`$; $`A_{i\mu }^{(0)}`$ is the value in the absence of disorder, and $`\delta A_{i\mu }`$ is the random deviation. We take the $`\delta A_{i\mu }`$ to be independent Gaussian random variables with $$[\delta A_{i\mu }]=0,\quad \mathrm{and}\quad [\delta A_{i\mu }\delta A_{j\nu }]=\sigma ^2\delta _{ij}\delta _{\mu \nu }.$$ (2) $`[\mathrm{\cdots }]`$ denotes an average over the quenched disorder. The positionally disordered array is thus also referred to as the XY model with random Gaussian phase shifts.
When $`U(\varphi )`$ is the Villain function , the Hamiltonian (1) is equivalent to a dual “Coulomb gas” of interacting vortices , $$\mathcal{H}[n_i]=\frac{1}{2}\underset{ij}{\sum }(n_i-f-\delta f_i)G_{ij}(n_j-f-\delta f_j).$$ (3) The sum is over all pairs of dual sites $`i`$, $`j`$, $`n_i`$ is the integer vorticity on site $`i`$, and the interaction $`G_{ij}`$ is the Green’s function for the 2D discrete Laplacian operator, $`\mathrm{\Delta }_{ik}G_{kj}=-2\pi \delta _{ij}`$, where $`\mathrm{\Delta }_{ij}\equiv \delta _{i,j+\widehat{x}}+\delta _{i,j-\widehat{x}}+\delta _{i,j+\widehat{y}}+\delta _{i,j-\widehat{y}}-4\delta _{ij}`$. For large separations, $`G_{ij}\simeq -\mathrm{ln}|𝐫_i-𝐫_j|`$. The $`f_i\equiv f+\delta f_i`$ are $`(1/2\pi )`$ times the circulation of the $`A_{i\mu }`$ around dual site $`i`$; $`f`$ is the average applied flux, while $`\delta f_i`$ is the deviation due to the random $`\delta A_{i\mu }`$, $$\delta f_i=\frac{1}{2\pi }\left[\delta A_{i,x}+\delta A_{i+\widehat{x},y}-\delta A_{i+\widehat{y},x}-\delta A_{i,y}\right].$$ (4) Geometrically distorting a bond increases the flux through the cell on one side of the bond, while reducing the flux through the cell on the opposite side by the same amount. The $`\delta f_i`$ are thus anticorrelated among n.n. sites. Positional disorder is thus the same as random dipole pairs of quenched charges $`\pm \delta f_i`$ . From Eqs. (2) and (4) we get, $$[\delta f_i]=0,\quad \mathrm{and}\quad [\delta f_i\delta f_j]=-\frac{\sigma ^2}{4\pi ^2}\mathrm{\Delta }_{ij}.$$ (5) The Hamiltonian (3) can be rewritten as interacting charges in a one body random potential , $$\mathcal{H}[q_i]=\frac{1}{2}\underset{ij}{\sum }q_iG_{ij}q_j-\underset{i}{\sum }q_iV_i,$$ (6) where $`q_i\equiv n_i-f`$, and the random potential is $`V_i=\sum _jG_{ij}\delta f_j`$. For $`f=1/2`$, $`q_i=\pm 1/2`$. From Eq.
(5), $$[V_i]=0,\quad [V_iV_j]=\underset{k,l}{\sum }G_{ik}[\delta f_k\delta f_l]G_{lj}$$ (7) $$=-\frac{\sigma ^2}{4\pi ^2}\underset{k,l}{\sum }G_{ik}\mathrm{\Delta }_{kl}G_{lj}=\frac{\sigma ^2}{2\pi }G_{ij}.$$ (8) The $`V_i`$ thus have logarithmic long range correlations. We now use an Imry-Ma type argument to estimate the stability of the doubly degenerate checkerboard ground state to the formation of a square domain of side $`L`$. The energy of such an excitation consists of a domain wall term, $`E_d`$, which is present for the pure case, and a pinning term, $`E_p`$, due to the interaction with the random $`V_i`$. $`E_d(L)`$ has the form, $$E_d\simeq aL+c\mathrm{ln}L+d.$$ (9) The first term is the interfacial tension of the domain wall; the second term comes from net charge that builds up at the corners of the domain . Calculating $`E_d(L)`$ numerically for a pure system, we find an excellent fit to Eq. (9), with $`a=0.28`$, $`c=0.15`$, and $`d=0.058`$. By Eq. (8), the average pinning energy of the domain $`𝒟`$, $`[E_p]=2\sum _{i\in 𝒟}q_i[V_i]=0`$, but the variance is, $$[E_p^2]=4\underset{i,j\in 𝒟}{\sum }q_i[V_iV_j]q_j=\frac{4\sigma ^2}{2\pi }\underset{i,j\in 𝒟}{\sum }q_iG_{ij}q_j=\frac{4\sigma ^2}{\pi }E_0,$$ (10) where $`E_0=(\pi /32)L^2`$ is the ground state energy of the checkerboard domain . The root mean square pinning energy is thus, $$[E_p]_{rms}=bL,\quad b=\frac{\sigma }{2\sqrt{2}}\simeq 0.35\sigma .$$ (11) For domains whose energy is lowered by the interaction with $`V_i`$, the typical excitation energy is $`E=E_d-[E_p]_{rms}`$. Eqs. (9) and (11) imply that when $`b>a`$, i.e. when $`\sigma >\sigma _c\simeq 0.8`$, $`E(L)`$ has a maximum at $`L=\xi \simeq c/(b-a)=2\sqrt{2}c/(\sigma -\sigma _c)`$. Domains of size $`L>\xi `$ will lower their energy by increasing in size, and so disorder the system.
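The dipole statistics of Eq. (5), on which the pinning-energy estimate above rests, are easy to check numerically. The following sketch (an illustration added here; the grid size, disorder strength and sample count are arbitrary) samples the Gaussian bond shifts of Eq. (2), builds $`\delta f_i`$ from Eq. (4), and verifies the on-site variance $`\sigma ^2/\pi ^2`$ and the n.n. anticorrelation $`-\sigma ^2/(4\pi ^2)`$:

```python
import numpy as np

rng = np.random.default_rng(1)
L, sigma, nsamp = 16, 0.4, 5000

acc_var = acc_nn = 0.0
for _ in range(nsamp):
    # independent Gaussian bond shifts: dA[0] = x-bonds, dA[1] = y-bonds, eq. (2)
    dA = sigma * rng.standard_normal((2, L, L))
    # circulation around each dual plaquette, eq. (4)
    df = (dA[0] + np.roll(dA[1], -1, axis=0)
          - np.roll(dA[0], -1, axis=1) - dA[1]) / (2*np.pi)
    acc_var += np.mean(df**2)
    acc_nn += np.mean(df * np.roll(df, 1, axis=0))
acc_var /= nsamp
acc_nn /= nsamp

# eq. (5): [df_i^2] = sigma^2/pi^2, n.n. correlation = -sigma^2/(4 pi^2)
assert abs(acc_var - sigma**2/np.pi**2) < 2e-3
assert abs(acc_nn + sigma**2/(4*np.pi**2)) < 2e-3
```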
Thus, one naively expects that when $`\sigma <\sigma _c`$ the system preserves its Ising-like order, but when $`\sigma >\sigma _c`$ the system is disordered into domains of typical size $`\xi `$. However the leading size dependencies of Eqs. (9) and (11), $`E_d\sim [E_p]_{rms}\sim L`$, are exactly the same as found in the 2D n.n. random field Ising model (RFIM). For the RFIM it is known that 2D is the lower critical dimension, that the randomness causes domain walls at $`T=0`$ always to roughen and so acquire an effective negative line tension, and that the critical disorder is $`\sigma _c=0`$, i.e. any amount of disorder, no matter how weak, destroys the Ising-like order of the pure case. By analogy, we suggest that the positionally disordered $`f=1/2`$ 2D XY model similarly has $`\sigma _c=0`$. Our conclusion, that $`[E_p]_{rms}\sim L`$ as in the 2D RFIM, follows from a subtle cancellation between the long range interactions between charges $`q_i`$, and the long range correlations of the random potential $`V_i`$. To check this prediction, we carry out Monte Carlo (MC) simulations of the Hamiltonian (3) with periodic boundary conditions on $`L\times L`$ square grids. Our MC procedure is as follows. One MC excitation attempt consists of the insertion of a neutral $`n=\pm 1`$ vortex pair on n.n. or next n.n. sites, which is accepted or rejected using the usual Metropolis algorithm. $`L^2`$ such attempts we call one MC pass. At each temperature we typically used $`4000`$ MC passes to equilibrate the system, followed by $`128,000`$ MC passes to compute averages. Every $`100`$ passes we attempt a global excitation reversing the sign of all the charges, $`q_i\to -q_i`$. For each disorder realization we cooled down two distinct “replicas”, starting with different random charge configurations and using different random number sequences. In only about $`3\%`$ of the cases did the two replicas fail to give reasonable agreement.
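A minimal version of such a Metropolis sweep for the Coulomb gas (6) can be sketched as follows. This is an illustration written for this text, not the code used for the quoted results: the lattice size, temperature and move set are simplified ($`\sigma =0`$, so $`V_i=0`$, and only n.n. dipoles are inserted), and the final check merely verifies that the incremental energy bookkeeping matches a full recomputation of Eq. (6).

```python
import numpy as np

rng = np.random.default_rng(0)
L, f, T = 8, 0.5, 0.10

# Lattice Green's function: Delta G = -2*pi*delta; the k = 0 mode is set to
# zero, which is allowed for neutral charge configurations.
k = 2*np.pi*np.fft.fftfreq(L)
KX, KY = np.meshgrid(k, k, indexing="ij")
denom = 4 - 2*np.cos(KX) - 2*np.cos(KY)
denom[0, 0] = np.inf
Ghat = 2*np.pi/denom
G = np.real(np.fft.ifft2(Ghat))        # G[dx, dy] on the periodic lattice

def energy(q):
    # E = (1/2) sum_ij q_i G_ij q_j, evaluated in Fourier space
    return 0.5*np.real(np.sum(Ghat*np.abs(np.fft.fft2(q))**2))/L**2

def potential(q, site):
    # sum_j G_{site,j} q_j  (G is even in the displacement)
    return np.sum(np.roll(np.roll(G, site[0], axis=0), site[1], axis=1)*q)

n = np.indices((L, L)).sum(axis=0) % 2  # checkerboard starting configuration
q = n - f                               # neutral: sum_i q_i = 0
E = energy(q)

for _ in range(400):                    # Metropolis: insert neutral n.n. dipoles
    a = rng.integers(0, L, size=2)
    mu = rng.integers(0, 2)
    b = a.copy()
    b[mu] = (b[mu] + 1) % L
    s = 1 if rng.random() < 0.5 else -1  # orientation of the inserted dipole
    dE = (s*(potential(q, tuple(a)) - potential(q, tuple(b)))
          + G[0, 0] - G[tuple((a - b) % L)])
    if dE <= 0 or rng.random() < np.exp(-dE/T):
        q[tuple(a)] += s
        q[tuple(b)] -= s
        E += dE

assert abs(E - energy(q)) < 1e-8   # incremental dE consistent with eq. (6)
```

A full simulation would add the quenched $`V_i`$ term, next n.n. moves, the global $`q_i\to -q_i`$ attempt, and measurement passes.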
To test for Ising-like order we define an order parameter analogously to an Ising antiferromagnet, $$M=\frac{1}{L^2}\underset{i}{\sum }q_i(-1)^{x_i+y_i}.$$ (12) We first consider $`\sigma =0.3`$, smaller than both the naive estimate of $`\sigma _c=0.8`$, and the $`\sigma _c=\sqrt{\pi /8}\simeq 0.63`$ of the $`f=0`$ model. Fig. 1 plots $`[M^2]`$ vs. $`T`$, averaged over $`200`$ disorder realizations, for sizes $`L=10`$, $`14`$ and $`20`$. All curves start to increase from zero near $`T\simeq 0.13`$, which is $`T_c(\sigma =0)`$ of the pure model. However $`[M^2]`$ at low $`T`$ decreases steadily with increasing $`L`$. The reason for this becomes clearer if we consider the histogram of values of $`M^2`$ that occur as we sample the different realizations of disorder. We show such histograms in Figs. 1b-d, for the lowest temperature $`T=0.02`$. As $`L`$ increases, the statistical weight shifts from predominantly ordered systems ($`M^2=1/4`$), to predominantly disordered systems ($`M^2=0`$). Assuming that this trend continues, we expect that as $`L\to \mathrm{\infty }`$, $`[M^2]\to 0`$. To measure the “random field correlation length” $`\xi `$, we consider the vortex correlation function, $$S(𝐤)=\frac{1}{L^2}\underset{i,j}{\sum }e^{i𝐤\cdot (𝐫_i-𝐫_j)}n_in_j.$$ (13) For the pure case, $`S(𝐤)`$ in the ordered phase has singular Bragg peaks at $`𝐊=\pm \pi \widehat{x}\pm \pi \widehat{y}`$. If the vortex lattice is disordered, these peaks will broaden, and their finite width provides a measure of $`\xi `$. Writing $`𝐤=𝐊+\delta 𝐤`$, and assuming a Lorentzian shape for the disorder averaged peak, $`[S(𝐤)]\propto 1/(\delta k^2+\xi ^{-2})`$, we determine $`\xi `$ by fitting to this form for $`\delta k=0`$, and $`\delta k=2\pi /L`$ . In Fig. 2 we show $`\xi `$ vs. $`\sigma `$ at our lowest $`T=0.02`$, for several system sizes $`L`$. Only for our smallest value $`\sigma =0.25`$ does a finite size effect remain. In this case, however, $`\xi `$ decreases as $`L`$ increases.
This is in contrast to the increase of $`\xi `$ with $`L`$ that one would expect if one were approaching a second order transition. This behavior is consistent with that seen in Figs. 1b-d, where as $`L`$ increases, a greater fraction of the disorder realizations result in disordered states. We next fit our results for $`\xi (\sigma )`$ to several possible scaling expressions: (i) $`\xi \sim e^{C/(\sigma -\sigma _c)^2}`$, (ii) $`\xi \sim e^{C/(\sigma -\sigma _c)}`$, and (iii) $`\xi \sim |\sigma -\sigma _c|^{-p}`$. The first has been suggested by Binder for the 2D RFIM. While in Binder’s expression $`\sigma _c=0`$, here we leave it as an arbitrary parameter to be determined from the fit. The second has been suggested for the positionally disordered $`f=0`$ model , in which $`\sigma _c>0`$. The third is the familiar power law form. Using data for only the largest $`L`$ for each $`\sigma `$, the results of these fits are shown in Fig. 2. The value of $`\sigma _c`$ and the $`\chi ^2`$ of the fit for each case is (i) $`\sigma _c=0.0046\pm 0.050`$, $`\chi ^2=67`$; (ii) $`\sigma _c=0.0134\pm 0.055`$, $`\chi ^2=67`$; (iii) $`\sigma _c=0.0013\pm 0.098`$, $`p=2.86\pm 0.84`$, $`\chi ^2=7.6`$. The power law (iii) gives a significantly better fit than (i) or (ii); however, all give $`\sigma _c=0`$ within the estimated error. Given the rather limited range of the data, the above fits should be treated with caution. However they do indicate that the data contain no suggestion of a diverging $`\xi `$ at a finite $`\sigma `$. Coupled with our Imry-Ma argument, we thus find a consistent picture suggesting that $`\sigma _c=0`$ for the $`f=1/2`$ 2D XY model. Returning to the case $`\sigma =0.3`$, where Ising-like order has been lost, we now consider whether the system may still have a finite temperature “spin glass” transition to a disordered but frozen vortex state.
To test for this we measure the self and cross overlaps , $$Q_{\mathrm{self}\alpha }=\frac{1}{L^2}\underset{i}{\sum }n_i^{(\alpha )}(t)n_i^{(\alpha )}(t+\tau )$$ (14) $$Q_{\mathrm{cross}}=\frac{1}{L^2}\underset{i}{\sum }n_i^{(\alpha )}(t)n_i^{(\beta )}(t).$$ (15) $`\alpha `$ and $`\beta `$ index the two independent replicas. For $`\tau `$ sufficiently large we expect $`Q_{\mathrm{self}\mathrm{\hspace{0.17em}1}}=Q_{\mathrm{self}\mathrm{\hspace{0.17em}2}}=Q_{\mathrm{cross}}`$, if the system is well equilibrated. Averaging Eq. (15) over several values of $`\tau \ge 2000`$ to improve our statistics, we plot $`[Q_{\mathrm{self}\mathrm{\hspace{0.17em}1}}]`$, $`[Q_{\mathrm{self}\mathrm{\hspace{0.17em}2}}]`$ and $`[Q_{\mathrm{cross}}]`$ vs. $`T`$ in Fig. 3a. We see that our system is fairly well equilibrated down to the lowest $`T`$ we study. To test for a spin glass transition, we measure the overlap susceptibility, $$\chi _Q=L^2\left\{[Q_{\mathrm{cross}}^2]-[Q_{\mathrm{cross}}]^2\right\},$$ (16) which we plot vs. $`T`$ in Fig. 3b for various system sizes. The peak in $`\chi _Q`$ near $`T\simeq 0.06`$ shows no noticeable increase as $`L`$ increases, thus suggesting that there is no finite temperature spin glass transition. If the vortices are not frozen, but are free to diffuse, one expects that superconducting phase coherence is also destroyed. To explicitly test this we measure the helicity modulus. The Hamiltonian (3) can be viewed as representing the XY model with “fluctuating twist” boundary conditions . Using the method of Ref. , we determine the dependence of the total free energy $`F`$ of the corresponding XY model as a function of the twist $`(\mathrm{\Delta }_x,\mathrm{\Delta }_y)`$ which is applied in a “fixed twist” boundary condition.
We then determine the $`(\mathrm{\Delta }_{x0},\mathrm{\Delta }_{y0})`$ that minimizes $`F`$; the helicity modulus tensor is then the curvature of $`F`$ at the minimizing twist, $`\mathrm{\Upsilon }_{\mu \nu }=\partial ^2F/\partial \mathrm{\Delta }_\mu \partial \mathrm{\Delta }_\nu `$. In Fig. 4a we plot $`\mathrm{\Upsilon }_1`$, the largest of the two eigenvalues of $`\mathrm{\Upsilon }_{\mu \nu }`$, vs. $`T`$, for $`\sigma =0.3`$ and sizes $`L=10,14,20`$. At all $`T`$, $`\mathrm{\Upsilon }_1`$ continues to decrease as $`L`$ increases, giving no suggestion of a finite temperature transition. In Figs. 4b-d we plot histograms of the minimizing twist $`\mathrm{\Delta }_0`$ for the three sizes $`L`$. Note, in choosing our random phase shifts $`\delta A_{i\mu }`$, we impose the constraint $`\sum _i\delta A_{i\mu }=0`$ in order to remove one trivial source of $`\mathrm{\Delta }_0\ne 0`$. We see that the width of the distributions of $`\mathrm{\Delta }_0`$ steadily increases with increasing $`L`$, suggesting that the strength of the random disorder is renormalizing to greater values on larger length scales. To conclude, our results suggest that Ising-like order is destroyed for any finite amount of positional disorder. Further, we found in one specific case that when the Ising-like order vanished, no spin glass order or phase coherence existed either. We speculate that this remains true as well for any finite disorder strength. Although $`\sigma _c=0`$, the finite $`\xi (\sigma )`$ nevertheless can become extremely large for small values of $`\sigma `$. When $`\xi `$ exceeds the size of the experimental or numerical sample, the system will indeed look ordered. We believe this explains previous numerical work on this problem which reported the persistence of Ising-like order at small $`\sigma `$. In the most recent of these works, Cataudella reports at $`\sigma \simeq 0.113`$ a finite $`T_c`$ to an Ising-like ordered state.
The correlation length exponent that he finds is $`\nu \simeq 1.7`$, clearly different from that of the pure model. Using our scaling form (iii) we can estimate that at this value of $`\sigma `$, $`\xi \simeq 120`$, much larger than Cataudella’s largest system size of $`L=36`$. His results may thus be reflecting a crossover region at $`L<\xi `$, rather than a true transition. We thank Prof. Y. Shapir for many valuable discussions. This work has been supported by DOE grant DE-FG02-89ER14017.
# Intermittency and scaling laws for wall bounded turbulence ## Abstract Well defined scaling laws clearly appear in wall bounded turbulence, even very close to the wall, where a distinct violation of the refined Kolmogorov similarity hypothesis (RKSH) occurs together with the simultaneous persistence of scaling laws. A new form of RKSH for the wall region is here proposed in terms of the structure functions of order two which, in physical terms, confirms the prevailing role of the momentum transfer towards the wall in the near wall dynamics. The intermittent behavior of velocity increments in the inertial range of fully developed turbulence has been a subject of renewed interest over the years, starting from the objection that Landau raised to the Kolmogorov theory of 1941 (K41). Since then, any theory of the inertial range cannot avoid considering the effect of the intermittent dissipation of energy on the inertial scales of motion. Under this respect, the Kolmogorov-Obukhov refined similarity hypothesis (RKSH), certainly the most credited, leads to a probability distribution function of velocity increments characterized by the scaling $$<\delta V^p>\sim <ϵ_r^{p/3}>r^{p/3},$$ (1) where $`<ϵ_r^q>`$ denotes the $`q^{th}`$ moment of the dissipation spatially averaged over a volume of characteristic dimension $`r`$ and the brackets indicate ensemble averaging. Taking into account the scaling properties of the dissipation field, $$<ϵ_r^q>\sim r^{\tau (q)},$$ (2) equation (1) implies that the velocity structure function of order $`p`$ is expressed as a power law of the separation with exponent $$\zeta _p=\tau (p/3)+p/3.$$ (3) Here, the anomalous correction, $`\tau (p/3)`$, to the K41 exponent accounts for the intermittency of the velocity increments in the inertial range of homogeneous and isotropic turbulence.
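The relative scaling implied by (1)-(3) can be illustrated with a synthetic, non-intermittent signal. For any self-similar record (here a Brownian-like one, so $`<|\delta V|^p>\sim r^{p/2}`$; absolute values are used, as is customary in practice), the slope of $`\mathrm{ln}<|\delta V|^6>`$ against $`\mathrm{ln}<|\delta V|^3>`$ equals $`2`$ exactly, whatever the actual scaling exponent; anomalous corrections $`\tau (p/3)\ne 0`$ would show up as a departure from this value. A sketch, illustrative only, with arbitrary sample size and separations:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2**19
v = np.cumsum(rng.standard_normal(N))   # Brownian-like signal: <|dV|^p> ~ r^{p/2}

rs = [4, 8, 16, 32, 64, 128]
S3 = np.array([np.mean(np.abs(v[r:] - v[:-r])**3) for r in rs])
S6 = np.array([np.mean(np.abs(v[r:] - v[:-r])**6) for r in rs])

# relative scaling: log S6 vs log S3 is linear with slope zeta_6/zeta_3 = 2
slope = np.polyfit(np.log(S3), np.log(S6), 1)[0]
assert abs(slope - 2.0) < 0.15
```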
A substantial extension of the range of scales where similarity is observed has recently been achieved by assuming, as basic quantity, the third order structure function instead of the separation $`r`$, $$<\delta V^p>\sim \frac{<ϵ_r^{p/3}>}{<ϵ>^{p/3}}<\delta V^3>^{p/3},$$ (4) as suggested by the Kolmogorov equation . A direct consequence of eq. (4) is the existence of an extended self-similarity (ESS) of the generic structure function of order $`p`$ in terms of the third order moment, with exponent $`\zeta _p`$. Since its introduction, the generalized Kolmogorov similarity hypothesis (4) has appeared as a characteristic feature of a vast number of turbulent systems. In the present letter we intend to discuss the issue of intermittency in wall bounded turbulence and its relationship with scaling (ESS) laws, which have been observed even in regions very close to the wall, dominated by quite ordered vortical structures . As shown in fig. (1), we have evidence that intermittency increases moving from the bulk of the fluid towards the wall . In principle, one may attempt to describe this behavior in the framework of the RKSH, in its generalized form (4). Hence the larger intermittency (smaller $`\zeta _p`$) would be provided by an increase of the intermittent fluctuations of $`ϵ_r`$ (larger values of $`|\tau (p)|`$). In such conditions, the anomaly of the scaling exponents would strongly depend on the local flow properties, thus losing any trait of universality. To assess the self-consistency of this approach, in fig. (2) we plot on a logarithmic scale the structure function of order six versus $`<ϵ_r^2><\delta V^3>^2`$. On the basis of the assumed validity of (4), the plot should result in a straight line of slope $`s=1`$, independent of the distance from the wall. This behavior actually emerges near the center of the channel, while in the wall region a quite clear, though small, violation is manifested.
Specifically, for $`y^+=31`$ two different scaling laws appear. The one characterized by slope $`s=1`$ trivially pertains to the dissipative range. The other, with slope $`s=.88`$, which does not satisfy (4), shows a first clear example of failure of the RKSH. The previous discussion may suggest a relationship between the increase of intermittency observed in the near wall region and the simultaneous breaking of the RKSH. In this regard, it seems interesting to investigate the possible existence of a new form of RKSH valid in the near wall region. In fact the RKSH, somehow suggested by the well known “$`4/5`$” Kolmogorov equation (see Frisch ), tells us, in physical terms, that the “energy flux” in the inertial range, represented by the term $`(\delta V_r)^3`$, fluctuates with a probability distribution which is the same as that of $`ϵ_r`$. However, in the case of strong shear, we should expect that a new term, proportional to $`\partial _z<U>(\delta V_r)^2`$, enters the estimate of the energy flux at scale $`r`$. Such a new term indeed appears in the analysis performed for homogeneous shear flows (see for instance Hinze ). If this term becomes dominant, as it may occur for a very large shear, one is led to assume that the fluctuations of the energy flux in the inertial range are proportional to $`(\delta V_r)^2`$, i.e. $`ϵ_r\sim A(r)(\delta V_r)^2`$, with $`A(r)`$ a non fluctuating function of $`r`$. Hence, we may expect that a new form of the RKSH should hold which, in its generalized form, reads as $$<\delta V^p>\sim \frac{<ϵ_r^{p/2}>}{<ϵ>^{p/2}}<\delta V^2>^{p/2}.$$ (5) The above expression of the new RKSH is given in terms of the structure function of order two, without explicit reference to the separation $`r`$, in the same way as the generalized RKSH (4). In the spirit of the extended self similarity, we assume the new form of the RKSH to be valid also in the region very close to the wall, where the shear is certainly prevailing.
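The two similarity forms (4) and (5) weigh the dissipation statistics differently, and the difference can be made explicit in a simple closed-form example. Assuming, for illustration only, a lognormal model for $`ϵ_r`$ (a common parameterization, not one used in the text), the increment flatness built from (4) involves $`<ϵ_r^{4/3}>/<ϵ_r^{2/3}>^2=e^{4s^2/9}`$, while the one built from (5) involves $`<ϵ_r^2>/<ϵ_r>^2=e^{s^2}`$, which is larger for any $`s\ne 0`$; since in such models the variance $`s^2(r)`$ grows as $`r`$ decreases, the wall form is the faster-diverging one.

```python
import numpy as np

def lognormal_moment(q, s):
    # <eps_r^q> for eps_r = exp(g), g ~ N(mu, s^2); mu cancels in flatness ratios
    return np.exp(0.5*q**2*s**2)

s = 1.0
Fb = lognormal_moment(4/3, s)/lognormal_moment(2/3, s)**2   # ratio entering eq. (4)
Fw = lognormal_moment(2.0, s)/lognormal_moment(1.0, s)**2   # ratio entering eq. (5)

assert np.isclose(Fb, np.exp(4*s**2/9))
assert np.isclose(Fw, np.exp(s**2))
assert Fw > Fb   # the wall form amplifies the dissipation intermittency
```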
In order to verify this set of assumptions, we show in fig. (3) a log-log plot of equation (5) for $`p=4`$ at $`y^+=31`$. In the inset, we show for the same plane the compensated plot of both (5) for $`p=4`$ and (4) for $`p=6`$. A quite clear agreement of eq. (5) with the numerical data follows. In principle, the function $`A(r)`$ might be evaluated theoretically starting from the Kolmogorov equation for anisotropic shear flow (e.g. see ). The increased intermittency of the velocity fluctuations near the wall may be estimated by considering how the flatness $`F(r)`$ grows as $`r\to 0`$, with $$F(r)=\frac{<\delta V^4(r)>}{<\delta V^2(r)>^2}.$$ (6) By combining the definition (6) with (4) and (5) we obtain the following expressions in terms of $`ϵ_r`$, $$F_b=\frac{<ϵ_r^{4/3}>}{<ϵ_r^{2/3}>^2},\quad F_w=\frac{<ϵ_r^{4/2}>}{<ϵ_r^{2/2}>^2},$$ (7) which are suitable for the bulk and the near wall region, respectively. As we see from fig. (4), both $`F_b`$ and $`F_w`$ diverge for $`r\to 0`$, indicating intermittent behavior in both cases, if we exclude the smallest separations falling into the dissipative range. Clearly $`F_w`$ diverges faster than $`F_b`$. This result is consistent with the corresponding analysis performed directly in terms of the structure functions of velocity by means of eq. (6) and provides a further evidence of the validity of (5) near the wall. In fact, the application of $`F_b`$ near the wall does not catch the increase of intermittency of the velocity fluctuations (see fig. (4)). On the other hand, the differences in the statistical properties of the dissipation between the bulk and the near wall region are too small to account for the increase of intermittency of the velocity increments near the wall. This is indirectly confirmed by the observed direct scaling (ESS) of the structure functions with $`<\delta V^3>`$, which implies, starting from eq.
(5), $$\widehat{\tau }(p/2)=\widehat{\zeta }_p-\frac{p}{2}\widehat{\zeta }_2,$$ (8) where a hat has been introduced here to denote the scaling exponents with respect to $`<\delta V^3>`$. This distinction was not necessary in the bulk region, where $`\tau \simeq \widehat{\tau }`$. By using expression (8) near the wall and eq. (3) in the bulk region, we obtain that the “intermittency correction” $`\widehat{\tau }(q)`$ turns out to be essentially independent of the distance from the wall, fig. (5). Hence the observed increase of intermittency of the velocity increments seems to be associated more with the structure of the RKSH than with the intermittency of the dissipation. These theoretical findings seem to be confirmed by experimental results in a flat plate boundary layer obtained recently by Ciliberto and coworkers (private communication). We like to emphasize here that, to verify the new RKSH, we selected on purpose the plane closest to the wall where scaling laws still appear. On the contrary, in the bulk region the original RKSH holds. At intermediate planes we expect the scaling exponents to emerge from a complex blending of these two basic behaviors, leading to a continuous variation with the distance from the wall . In conclusion, we have found that a quite evident failure of the RKSH occurs in near wall turbulence, in correspondence with the simultaneous appearance of scaling laws. The new form of the RKSH we have proposed in this letter for the wall region is expressed in terms of the structure function of order two, instead of the structure function of order three as in the original form. This may be seen as a statistical representation of the physical features of the near wall region, which is controlled more by the mechanism of momentum transfer than by the classical energy cascade. ###### Acknowledgements. We acknowledge very useful discussions with L. Biferale, S. Succi and S. Ciliberto.
# Strange form factors of the proton: a new analysis of the 𝜈 (𝜈̄) data of the BNL–734 experiment ## I Introduction After the measurements of the polarized structure function of the proton, $`g_1`$, in deep inelastic scattering, it turned out, rather surprisingly, that the constant $`g_A^s`$, that characterizes the one–nucleon matrix element of the axial strange current, is of magnitude comparable with the corresponding $`g_A^u`$ and $`g_A^d`$ axial constants. A theoretical analysis of deep inelastic data led to the following values for the axial constants: $`g_A^s=-0.10\pm 0.03`$, $`g_A^d=-0.43\pm 0.03`$, $`g_A^u=0.83\pm 0.03`$. In a more recent analysis of the data, the value $`g_A^s=-0.13\pm 0.03`$ was reported. Though it is subject to several assumptions (extrapolation of $`g_1`$ to the $`x=0`$ point, SU(3)<sub>f</sub> symmetry, etc.), the rather large value of $`g_A^s`$ stimulated new experiments on the measurement of deep inelastic scattering of polarized leptons on polarized nucleons and a lot of theoretical work on the subject (see for example ref.). Alternative approaches which allow one to obtain the contribution of the strange quark current to the structure of the nucleon were also developed. Among these, information on strange form factors of the nucleon can be obtained from NC scattering of $`\nu (\overline{\nu })`$ on nucleons and nuclei. Up to now the most detailed investigation of NC $`\nu (\overline{\nu })`$–proton scattering was done in the Brookhaven BNL–734 experiment. From the analysis of the data of this experiment a nonzero value of $`g_A^s`$ was found. This result, however, strongly depends on the value of the axial cutoff mass $`M_A`$. For example, in the paper by Garvey et al., from the fit of the BNL data it was found: $`g_A^s=-0.21\pm 0.10`$ and $`M_A=1.032\pm 0.036`$ GeV.
This fit, however, shows strong correlations between the values of $`g_A^s`$ and $`M_A`$: the data indeed are also compatible with $`g_A^s=0`$, provided one assumes a slightly larger axial cutoff mass, $`M_A=1.086\pm 0.015`$ GeV, which is in agreement with quasielastic neutrino-nucleon data. It is clear that new investigations of NC $`\nu (\overline{\nu })`$–nucleon scattering are necessary in order to draw definite conclusions about the value of $`g_A^s`$. In this paper we calculate the contribution of the strange form factors of the nucleon to the NC over CC neutrino–antineutrino asymmetry and compare our results with the information on it which one can extract from the data of the BNL–734 experiment. In this experiment the following ratios of cross sections were obtained: $$R_\nu =\frac{\sigma _{(\nu p\to \nu p)}}{\sigma _{(\nu n\to \mu ^{-}p)}}=0.153\pm 0.007\pm 0.017$$ (1) $$R_{\overline{\nu }}=\frac{\sigma _{(\overline{\nu }p\to \overline{\nu }p)}}{\sigma _{(\overline{\nu }p\to \mu ^+n)}}=0.218\pm 0.012\pm 0.023$$ (2) $$R=\frac{\sigma _{(\overline{\nu }p\to \overline{\nu }p)}}{\sigma _{(\nu p\to \nu p)}}=0.302\pm 0.019\pm 0.037,$$ (3) where $`\sigma _{\nu (\overline{\nu })}`$ is a total cross section integrated over the incident neutrino (antineutrino) energy and weighted by the $`\nu (\overline{\nu })`$ flux in a way that will be specified below. The first error is statistical and the second is the systematic one. From ref.
we have shown that the measurement of the neutrino–antineutrino asymmetry $$𝒜_p(Q^2)=\frac{\left({\displaystyle \frac{d\sigma }{dQ^2}}\right)_{\nu p\to \nu p}-\left({\displaystyle \frac{d\sigma }{dQ^2}}\right)_{\overline{\nu }p\to \overline{\nu }p}}{\left({\displaystyle \frac{d\sigma }{dQ^2}}\right)_{\nu n\to \mu ^{-}p}-\left({\displaystyle \frac{d\sigma }{dQ^2}}\right)_{\overline{\nu }p\to \mu ^+n}}$$ (4) will allow one to obtain direct model-independent information on the axial ($`F_A^s`$) and magnetic ($`G_M^s`$) strange form factors of the nucleon. Indeed (4) can be rewritten as $$𝒜_p(Q^2)=\frac{1}{4|V_{ud}|^2}\left(1-\frac{F_A^s}{F_A}\right)\left(1-2\mathrm{sin}^2\theta _W\frac{G_M^p}{G_M^3}-\frac{G_M^s}{2G_M^3}\right),$$ (5) where $`F_A`$ is the CC axial form factor and $`G_M^3=(G_M^p-G_M^n)/2`$ the isovector magnetic form factor of the nucleon. Here we will use the ratios (1)–(3) to obtain experimental information on the integral asymmetry. In the next Section we will compare this asymmetry with our theoretical calculation, which includes the contribution of the strange nucleon form factors. The Brookhaven experiment was performed using wide band neutrino and antineutrino beams, with an average energy of about 1.3 GeV. Almost $`80\%`$ of the events were due to quasielastic proton knockout from <sup>12</sup>C nuclei and the remaining $`20\%`$ of events were due to elastic neutrino (antineutrino) scattering on free protons. In order to compare theoretical calculations with the BNL data, one must take into account the energy spectrum of the neutrinos \[$`\varphi _\nu (ϵ_\nu )`$\] and antineutrinos \[$`\varphi _{\overline{\nu }}(ϵ_{\overline{\nu }})`$\].
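As an illustration, the factorized asymmetry (5) can be evaluated numerically. The sketch below is only a rough estimate under assumptions not taken from the paper: standard values for $`\mathrm{sin}^2\theta _W`$, $`|V_{ud}|`$, the nucleon magnetic moments and the CC axial coupling, and a common dipole $`Q^2`$ dependence for strange and non-strange form factors, under which the dipoles cancel in the ratios and the asymmetry becomes a pure number:

```python
# Illustrative inputs (assumed standard values, not quoted from the paper):
SIN2_THETA_W = 0.2312        # weak mixing angle sin^2(theta_W)
V_UD = 0.974                 # CKM matrix element |V_ud|
G_A = 1.26                   # CC axial coupling g_A
MU_P, MU_N = 2.793, -1.913   # proton and neutron magnetic moments

def asymmetry_eq5(g_A_s, mu_s):
    """Eq. (5), assuming a common dipole Q^2 dependence for all form
    factors, so F_A^s/F_A -> g_A^s/g_A and G_M^s/G_M^3 -> mu_s/mu_3
    reduce to Q^2-independent constants."""
    mu_3 = 0.5 * (MU_P - MU_N)               # isovector magnetic moment
    axial = 1.0 - g_A_s / G_A                # (1 - F_A^s / F_A)
    vector = 1.0 - 2.0 * SIN2_THETA_W * MU_P / mu_3 - mu_s / (2.0 * mu_3)
    return axial * vector / (4.0 * V_UD**2)

a_zero = asymmetry_eq5(0.0, 0.0)     # no strangeness
a_neg = asymmetry_eq5(-0.15, 0.0)    # negative axial strange constant
```

With these inputs a negative $`g_A^s`$ enhances the asymmetry, which is precisely the sensitivity exploited in the analysis.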
In the case of elastic scattering, we define a folded differential cross section by: $$\left\langle \frac{d\sigma }{dQ^2}\right\rangle _{\nu (\overline{\nu })p}=\frac{1}{\mathrm{\Phi }_{\nu (\overline{\nu })}}\int _{0.2\mathrm{G}eV}^{5\mathrm{G}eV}𝑑ϵ_{\nu (\overline{\nu })}\left(\frac{d\sigma }{dQ^2}\right)_{\nu (\overline{\nu })p}\varphi _{\nu (\overline{\nu })}(ϵ_{\nu (\overline{\nu })})$$ (6) where $`\mathrm{\Phi }_{\nu (\overline{\nu })}`$ is the total neutrino (antineutrino) flux and the limits on $`ϵ_{\nu (\overline{\nu })}`$ correspond to the experimental conditions. The differential cross sections are given as a function of the squared momentum transfer, $`Q^2`$, which in the case of free protons is directly obtained from the final proton kinetic energy in the laboratory system by the relation $`Q^2=2MT_p`$ ($`M`$ being the proton mass). For scattering off <sup>12</sup>C the authors of ref. obtained the equivalent “free scattering data” by correcting for the Fermi motion and binding energy of the hit nucleon: in this case the $`Q^2`$ given by the above relation must be regarded as an effective momentum transfer squared, around which the quasielastic $`\nu (\overline{\nu })`$ scattering on <sup>12</sup>C occurs. A proper interpretation of the results in terms of scattering on the free nucleon requires a reliable understanding of the effects associated with the nuclear, many–body dynamics in both the initial and final states, as well as with the final state interactions (FSI) between the ejected nucleon and the residual nucleus. We have shown that for neutrino (antineutrino) energies of about 1 GeV and larger these effects are within percentage range for ratios of cross sections.
Note, however, that FSI sizeably reduce ($`\simeq 50\%`$) the separated cross sections with respect to the plane wave impulse approximation (PWIA): indeed FSI take into account the existence of other reaction channels besides the quasielastic one, and only approximately $`50\%`$ of the reaction events correspond to elastic proton knockout. Therefore, when applied to the individual cross sections, the interpretation of the BNL data as corresponding to elastic scattering on “free” protons is not free from ambiguities. In addition to the above differential cross sections, one can define the total folded cross sections by integrating (6) over the (effective) momentum transfer $`Q^2`$: $$\sigma _{\nu (\overline{\nu })p}=\int _{0.5\mathrm{G}eV^2}^{1\mathrm{G}eV^2}𝑑Q^2\left\langle \frac{d\sigma }{dQ^2}\right\rangle _{\nu (\overline{\nu })p},$$ (7) the limits of integration being taken from ref. The neutrino–antineutrino folded integral asymmetry, $`𝒜_p`$, is obtained from the neutral current to charged current ratio of the differences between the total folded neutrino and antineutrino cross sections: $$𝒜_p=\frac{\sigma _{(\nu p\to \nu p)}-\sigma _{(\overline{\nu }p\to \overline{\nu }p)}}{\sigma _{(\nu n\to \mu ^{-}p)}-\sigma _{(\overline{\nu }p\to \mu ^+n)}}.$$ (8) This quantity can be written in terms of the ratios (1)–(3) as follows: $$𝒜_p=\frac{R_\nu (1-R)}{1-RR_\nu /R_{\overline{\nu }}}$$ (9) and from the experimental data we found $$𝒜_p=0.136\pm 0.008(\mathrm{stat})\pm 0.019(\mathrm{syst})$$ (10) where the statistical error has been estimated using standard quadratic error propagation, while for the systematic error we take into account the positive correlation coefficient $`\rho =0.5`$ between the systematic errors for the $`\nu `$ and $`\overline{\nu }`$ cross sections. ## II Results and discussion In this Section we shall compare the experimental values for the ratios (1)–(3) and for the asymmetry (10) with their theoretical evaluation.
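The central value in Eq. (10) can be reproduced directly from the ratios (1)–(3) through Eq. (9). The short sketch below does this, together with a simplified statistical error: an uncorrelated quadratic propagation of the statistical errors only, which is a simplification of the full treatment (the correlated systematic errors are not propagated here):

```python
import math

# BNL-734 ratios, Eqs. (1)-(3): central values and statistical errors
R_nu,  dR_nu  = 0.153, 0.007
R_nub, dR_nub = 0.218, 0.012
R,     dR     = 0.302, 0.019

def integral_asymmetry(r_nu, r_nub, r):
    """Eq. (9): A_p = R_nu (1 - R) / (1 - R R_nu / R_nub)."""
    return r_nu * (1.0 - r) / (1.0 - r * r_nu / r_nub)

A_p = integral_asymmetry(R_nu, R_nub, R)

# Numerical gradient + uncorrelated quadratic propagation of the
# statistical errors only (simplified with respect to the paper).
eps = 1e-7
central = (R_nu, R_nub, R)
errors = (dR_nu, dR_nub, dR)
var = 0.0
for i in range(3):
    shifted = list(central)
    shifted[i] += eps
    grad = (integral_asymmetry(*shifted) - A_p) / eps
    var += (grad * errors[i]) ** 2
dA_stat = math.sqrt(var)
```

This reproduces the central value $`𝒜_p=0.136`$ and a statistical error close to the quoted $`\pm 0.008`$.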
We shall discuss the influence of the strange form factors of the nucleon; moreover, for data obtained from scattering of $`\nu (\overline{\nu })`$ on nuclei, the influence of the nuclear medium will be briefly examined. Let us first discuss the sensitivity of the integral asymmetry to different assumptions. First of all we have considered the effect of folding the elastic $`\nu (\overline{\nu })`$ cross sections with the corresponding fluxes. In Fig. 1 we show the integral asymmetry as a function of the axial strange constant, $`g_A^s`$. The electric and magnetic strange form factors have been taken to be zero. In the case of elastic neutrino (antineutrino)–proton scattering the folded integral asymmetry (solid line) is compared with the “unfolded” integral asymmetry evaluated at a fixed $`\nu (\overline{\nu })`$ energy $`ϵ_{\nu (\overline{\nu })}=1`$ GeV (empty dots). The difference between the two curves is less than 2%, a result which one could have expected from the similarity between the neutrino and antineutrino spectra in the BNL–734 experiment.<sup>*</sup><sup>*</sup>*One could notice that the average energy of the neutrino (antineutrino) spectra of the BNL experiment is about $`ϵ_{\nu (\overline{\nu })}\simeq 1.3`$ GeV; however between 1 and 2 GeV the unfolded asymmetry varies at most by $`0.3\%`$, which makes the fixed value of $`ϵ_{\nu (\overline{\nu })}`$ that we utilize for the unfolded asymmetry irrelevant. Next we compare the results obtained for elastic scattering on protons with those obtained in the impulse approximation (IA) for quasielastic $`\nu (\overline{\nu })`$ scattering on <sup>12</sup>C. Three different approximations are used to describe the nuclear dynamics: first we consider a relativistic Fermi gas (RFG) within the PWIA, namely without distortion of the ejected nucleon wave. For the RFG we show in Fig. 1 both the folded (dashed line) and the unfolded (dotted line) asymmetry.
Further we use a relativistic shell model (RSM), both within PWIA (dot–dashed line) and with inclusion of the FSI of the observed nucleon (three–dot–dashed line); the latter is taken into account through a relativistic optical potential (ROP), which is employed together with the RSM (see refs. and for details of these models). Due to the small effect of the folding procedure, which can be inferred from the elastic and the RFG cases, for the RSM (without and with the final state interaction) only the unfolded asymmetry is shown, again at $`ϵ_{\nu (\overline{\nu })}=1`$ GeV. We notice that quasielastic $`\nu (\overline{\nu })`$–nucleus scattering, treated within the RFG and the RSM (in PWIA), gives results almost identical to the case of elastic $`\nu (\overline{\nu })`$–proton scattering. The effect of FSI, instead, shows up as a reduction of the asymmetry of at most 2%, mainly due to Coulomb effects, as pointed out in ref. Nevertheless the effect of the axial strange constant $`g_A^s`$ on the integral asymmetry remains larger (for $`|g_A^s|\gtrsim 0.05`$) than the effects associated with nuclear models (including FSI) and/or with the folding over the $`\nu (\overline{\nu })`$ spectra. This result justifies previous analyses of the BNL quasielastic data in terms of a Fermi Gas model when ratios of cross sections and/or asymmetries are concerned; indeed for quasielastic processes, ratios of cross sections are basically the same as for the corresponding elastic $`\nu (\overline{\nu })`$–proton processes. However, we recall that FSI are not negligible for the individual cross sections. On the basis of the previous discussion, in what follows we consider only ratios of folded cross sections for elastic scattering on free protons. In Fig. 2 we demonstrate the effects of strangeness for the ratios (1)–(3) and the integral asymmetry (8).
The experimental values for the various quantities are indicated by the shadowed regions: the error band corresponds to one standard deviation, calculated from quadratic propagation of the statistical and systematic errors. We have assumed the usual dipole parameterization for both non–strange and strange form factors, the latter being $`F_A^s(Q^2)=g_A^sG_D^A(Q^2)`$, $`G_M^s(Q^2)=\mu _sG_D^V(Q^2)`$ and $`G_E^s(Q^2)=\rho _s\tau G_D^V(Q^2)`$ ($`\tau =Q^2/4M^2`$), where $`G_D^{V(A)}(Q^2)=(1+Q^2/M_{V(A)}^2)^{-2}`$ and we keep the strengths $`g_A^s`$, $`\mu _s`$ and $`\rho _s`$ as free parameters. We assume the same values for the strange cutoff masses as for the non–strange vector (axial) form factors. We do not discuss here other parameterizations for the $`Q^2`$-dependence of the strange form factors, about which practically nothing is known (see refs.). A decrease of $`G_M^s`$ and $`F_A^s`$ stronger than dipole at high $`Q^2`$ (as suggested by the quark counting rules) would obviously reduce the global effect of strangeness. However it was shown in previous works that the effect of different parameterizations of the strange form factors is very small in the BNL $`Q^2`$ region, of the order of 1–2%. In Fig. 2(a) we show the ratios $`R_\nu `$ and $`R_{\overline{\nu }}`$ versus $`\mu _s`$ for two values of the axial strange constant, $`g_A^s=0,-0.15`$, and three values of the electric strange constant, $`\rho _s=0,\pm 2`$. For the axial cutoff mass we use the value $`M_A=1.032`$ GeV. Both observables are much more sensitive to the axial strange constant $`g_A^s`$ than to $`\rho _s`$: the former gives an effect of the order of $`15\%`$ for $`R_\nu `$ and $`27\%`$ for $`R_{\overline{\nu }}`$. The influence of the magnetic and electric strange form factors, instead, amounts, respectively, to $`8\%`$ ($`R_\nu `$), $`7\%`$ ($`R_{\overline{\nu }}`$) for $`\mu _s`$ and to $`4\%`$ ($`R_\nu `$), $`7\%`$ ($`R_{\overline{\nu }}`$) for $`\rho _s`$.
Note that the roles played by the electric and magnetic strange form factors are similar in the case of antineutrinos, whereas for neutrinos the dependence upon $`\mu _s`$ is clearly stronger. This agrees with the discussion presented in ref. As seen from Fig. 2(a), within the present assumptions for the form factor parameterization and for $`M_A`$, in the rather large range of $`\mu _s`$ and $`\rho _s`$ considered here, a value of the strange axial constant as large in magnitude as $`g_A^s=-0.15`$ is not favoured by the BNL–734 data. Results of the calculation of the ratio $`R`$ and the integral asymmetry $`𝒜_p`$ are shown in Fig. 2(b): the maximum relative change in the ratio $`R`$, obtained in the ranges of strange parameters considered here, amounts to $`10\%`$ for $`g_A^s`$, $`13\%`$ for $`\mu _s`$ and $`5\%`$ for $`\rho _s`$. Both for $`R`$ and $`𝒜_p`$ the effects induced by the axial and magnetic strange form factors are similar. These effects are clearly larger (in $`R`$) than the ones due to the electric strange form factor. Moreover it is worth noticing that the integral asymmetry does not depend at all upon the electric strange form factor, a result already obtained in ref. for the unfolded asymmetry $`𝒜_p(Q^2)`$. The maximum relative change in $`𝒜_p`$, obtained in the ranges of the strange parameters considered here, is fairly sizeable and amounts to $`12\%`$ (for $`g_A^s`$) and $`14\%`$ (for $`\mu _s`$). All the considered values of the strange parameters are compatible with the asymmetry $`𝒜_p`$ within the experimental errors. However for values of $`g_A^s`$ as large in magnitude as $`-0.15`$ the experimental value of $`R`$ favours $`\mu _s\lesssim 0`$. The experimental relative errors for the various ratios and the asymmetry are: $`12\%`$ ($`R_\nu `$ and $`R_{\overline{\nu }}`$), $`14\%`$ ($`R`$) and $`16\%`$ ($`𝒜_p`$). Thus the experimental uncertainties are of the same order as the effects of the strange form factors of the nucleon.
At present the error bands are clearly too large to allow any definite conclusion on the strangeness content of the nucleon: as already pointed out in ref., one needs to considerably reduce the experimental errors. Nevertheless, the results shown in Fig. 2 give indications that may help in the analysis and interpretation of the present and, it is to be hoped, future data. As we have already noticed, the comparison between the theoretical calculations and the experimental data for the ratio $`R_{\overline{\nu }}`$ (Fig. 2a) shows that values of $`|g_A^s|\gtrsim 0.15`$ are clearly disfavoured: moreover a value $`g_A^s=-0.10`$ (see ref.) seems to require negative values of $`\mu _s`$, of the order of $`-0.2`$. Yet the analysis of the other observables is not conclusive, mainly because of the width of the experimental bands, which are compatible with many different choices of the strangeness parameters, $`g_A^s`$, $`\mu _s`$, $`\rho _s`$, including the one which sets all of them to zero. Keeping this in mind and without any claim of definitive evidence, the results seem to favour negative values of the magnetic strange parameter, $`\mu _s`$, if $`|g_A^s|`$ is relatively large, in agreement with the findings of ref. We recall here that a value of the strange magnetic form factor of the nucleon has recently been measured at BATES, with the result $`G_M^s(0.1\mathrm{GeV}^2)=0.23\pm 0.37\pm 0.15\pm 0.19`$. This value is affected by large experimental and theoretical uncertainties (the last error refers to the estimate of radiative corrections), but it is centered around a positive $`\mu _s`$, although it is still compatible with zero or negative values of $`\mu _s`$. At present, neutrino scattering and parity-violating electron scattering experiments appear not to be conclusive as far as $`\mu _s`$ is concerned.
We also notice that, if the P–odd asymmetry measured in the scattering of polarized electrons on nucleons provides more stringent information on the strange magnetic form factor, then future, precise experiments combining the measurement of $`\nu `$– and $`\overline{\nu }`$–proton scattering could allow a determination of the axial strange form factor and of the electric one. The effect of the latter is clearly smaller than that associated with the axial and magnetic strange form factors. This is especially true for $`R_\nu `$, whereas $`R_{\overline{\nu }}`$ shows a non–negligible sensitivity to $`\rho _s`$. Concerning the asymmetry \[lower part of Fig. 2(b)\], it is obviously insensitive to the electric strange form factor, according to its definition \[see formula (5)\]; it is, instead, rather sensitive to $`g_A^s`$ and $`\mu _s`$, but, for the time being, the experimental error band does not allow us to discriminate among the various possible choices of the $`g_A^s,\mu _s`$ values. Finally let us discuss the role played by the axial cutoff mass $`M_A`$. For this purpose we show in Figs. 3(a) and 3(b) the various observables, $`R_\nu `$, $`R_{\overline{\nu }}`$, $`R`$ and $`𝒜_p`$, versus the axial strange constant $`g_A^s`$ for three different choices of $`M_A`$: $`M_A=0.996,1.032`$ and $`1.068`$ GeV, which are, within $`1\sigma `$, the values obtained in fit number IV of ref. The values of the magnetic and electric strange parameters, $`\mu _s`$, $`\rho _s`$, have been fixed to zero. In Fig. 3 we compare the theoretical predictions with the ratios (1)–(3) measured in the BNL–734 experiment, and with the asymmetry (8), with the same error bands shown in Fig. 2. Solid lines correspond to results for $`M_A=1.032`$ GeV, dashed lines to $`M_A=1.068`$ GeV and dot–dashed to $`M_A=0.996`$ GeV.
One can see that by varying $`M_A`$ in the considered range, the ratio $`R_{\overline{\nu }}`$ is changed by $`18\%`$, $`R`$ by $`10\%`$ and $`R_\nu `$ by $`5\%`$; therefore these ratios depend rather strongly on the precise value of $`M_A`$, particularly in the case of antineutrinos. This observation was already pointed out in ref. and stands out more clearly here. Moreover the comparison with the experimental data clearly shows, as in ref., a strong correlation between $`g_A^s`$ and $`M_A`$, which is particularly evident in $`R_{\overline{\nu }}`$: keeping in mind that here the vector strange form factors are set to zero, the results obtained for $`R_{\overline{\nu }}`$ would disfavour, within one standard deviation, a wide range of values for $`g_A^s`$. In particular $`M_A=1.068`$ GeV is only compatible with $`g_A^s`$–values such that $`|g_A^s|\lesssim 0.05`$, $`M_A=1.032`$ GeV with $`|g_A^s|\lesssim 0.1`$, whereas $`M_A=0.996`$ GeV extends the range of allowed $`g_A^s`$–values to $`|g_A^s|\lesssim 0.15`$. It is worth noticing that the other quantities shown in Fig. 3 are much less restrictive on the chosen values for these parameters. In contrast with the ratios (1)–(3), we have found that the neutrino–antineutrino asymmetry is practically independent of the value of the axial cutoff mass $`M_A`$; this fact makes this quantity better suited to determine $`g_A^s`$ independently of the $`Q^2`$ behaviour of the axial form factors. In conclusion, we have thoroughly re–examined the data of the elastic $`\nu (\overline{\nu })`$–proton scattering BNL–734 experiment. We have checked that the effects of the nuclear structure and interactions on the ratios of cross sections and on the asymmetry considered here are negligible. This is an important prerequisite for drawing reliable conclusions on the nucleon strange form factors from this type of experiment.
Although our results go in the same direction as some authors have claimed before, we can state here that the experimental uncertainty is still too large to be conclusive about specific values of the strange form factors of the nucleon. A rather wide range of values of the strange parameters, $`g_A^s`$, $`\mu _s`$ and $`\rho _s`$, is compatible with the BNL–734 data, and more precise measurements are thus needed in order to determine simultaneously the electric, magnetic and axial strange form factors of the nucleon. A crucial uncertainty for this determination comes from the existing errors on the axial cutoff mass $`M_A`$. Notice that from quasielastic $`\nu (\overline{\nu })`$ scattering the value $`M_A=1.09\pm 0.03\pm 0.02`$ was found. Our investigation indicates that one can extract the strange form factors of the nucleon from ratios like $`R_{\overline{\nu }}`$ only if the axial cutoff mass is known with much better accuracy than at present. On the contrary, a high precision measurement of the neutrino–antineutrino asymmetry $`𝒜_p`$ could allow one to determine the axial and vector strange form factors even without precise knowledge of the value of $`M_A`$. We can conclude that the uncertainty of the available data does not allow one to set stringent limits on the strange vector and axial–vector parameters, but future, more precise measurements could make their determination possible in a model-independent way. ### Acknowledgments This work was supported in part by funds provided by DGICYT (Spain) under contract Nos. PB95–0123 and PB95–0533–A, the Junta de Andalucía, in part by Complutense University (Madrid) under project n. PR156/97 and in part by funds provided by the U.S. Department of Energy (D.O.E.) under cooperative research agreement # DF–FC02–94ER40818. S.M.B. acknowledges the support of the Sonderforschungsbereich 375-95 für Astro–Teilchenphysik der Deutschen Forschungsgemeinschaft; C.M.
acknowledges the post–doctoral fellowship under the INFN–MIT “Bruno Rossi” exchange program.
# Adaptive Quantum Homodyne Tomography ## I Introduction The possibility of measuring the quantum state of radiation has received increasing interest in recent years , as it opens perspectives for new kinds of experiments in quantum optics, with the possibility of measuring photon correlations on a sub-picosecond time-scale , characterizing squeezing properties , photon statistics in parametric fluorescence , quantum correlations in down-conversion , nonclassicality of states , and measuring Hamiltonians of nonlinear optical devices . Among the many state reconstruction techniques suggested in the literature , quantum homodyne tomography (QHT) of the radiation field has received much attention , being the only method which has been implemented in quantum optical experiments , and having recently been extended to the estimation of the expectation value of any operator of the field , which makes the method the first universal detector for radiation. On one hand, QHT takes advantage of amplification from the local oscillator in the homodyne detector, avoiding the need for single-photon resolving photodetectors, hence with the possibility of achieving very high quantum efficiency using photodiodes . On the other hand, the method of QHT is very efficient and statistically reliable, and can be implemented on-line with the experiment. In principle, a precise knowledge of the density matrix would require an infinite number of measurements on identical preparations of radiation. However, in real experiments one has only a finite number of data at one's disposal, and thus statistical analysis and error estimation are needed. The purpose of this paper is to analyze the possibility of improving the current QHT technique, in order to minimize statistical errors. We will present a new method that “adapts” the tomographic estimators to a given finite set of data, improving the precision of the tomographic measurement.
Quantum tomography of a single-mode radiation field consists of a set of repeated measurements of the field quadrature $`\widehat{x}_\varphi =\frac{1}{2}(ae^{-i\varphi }+a^{\dagger }e^{i\varphi })`$ at different values of the reference phase $`\varphi `$. The expectation value of a generic operator can be obtained by averaging a suitable kernel function $`R[\widehat{O}](x,\varphi )`$ as follows $`\langle \widehat{O}\rangle \equiv \text{Tr}\left\{\widehat{\varrho }\widehat{O}\right\}={\displaystyle \int _0^\pi }{\displaystyle \frac{d\varphi }{\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}𝑑xp_\eta (x,\varphi )R_\eta [\widehat{O}](x,\varphi ),`$ (1) where $`p_\eta (x,\varphi )`$ denotes the probability distribution of the outcomes $`x`$ for the quadrature $`\widehat{x}_\varphi `$ with quantum efficiency $`\eta `$, and $`R_\eta [\widehat{O}](x,\varphi )`$ is given by $`R_\eta [\widehat{O}](x,\varphi )={\displaystyle \frac{1}{4}}{\displaystyle \int _0^{\mathrm{\infty }}}𝑑r\mathrm{exp}\left[{\displaystyle \frac{1-\eta }{8\eta }}r\right]\text{Tr}\left\{\widehat{O}\mathrm{cos}\left[\sqrt{r}(x-\widehat{x}_\varphi )\right]\right\}.`$ (2) In the following we will focus attention only on the case $`\eta =1`$, and we will drop the subscript $`\eta `$ in the notation. As will appear in the following, the method works equally well for nonunit quantum efficiency, and a detailed numerical analysis versus $`\eta `$ will be given elsewhere. On the basis of identity (1), it follows that the ensemble average $`\langle \widehat{O}\rangle `$ can be experimentally obtained by averaging $`R[\widehat{O}](x,\varphi )`$ over the set of homodyne data, namely $`\langle \widehat{O}\rangle =\overline{R[\widehat{O}]}={\displaystyle \frac{1}{N}}{\displaystyle \underset{i=1}{\overset{N}{\sum }}}R[\widehat{O}](x_i,\varphi _i),`$ (3) $`N`$ being the total number of measurements of the sample. The statistical error of the tomographic measurement in Eq.
(3) can be easily evaluated provided that the corresponding kernel function satisfies the hypothesis of the central limit theorem, which assures that the partial average over a block of data is Gaussian distributed around the global average over all data. In this case, the error is evaluated by dividing the ensemble of data into subensembles, and calculating the r.m.s. deviation of each subensemble mean value with respect to the global average. The estimated value of such a confidence interval is given by $`\delta O={\displaystyle \frac{1}{\sqrt{N}}}\left\{\overline{\mathrm{\Delta }R^2[\widehat{O}]}\right\}^{1/2},`$ (4) where $`\overline{\mathrm{\Delta }R^2[\widehat{O}]}`$ is the variance of the kernel over the tomographic probability $`\overline{\mathrm{\Delta }R^2[\widehat{O}]}={\displaystyle \int _0^\pi }{\displaystyle \frac{d\varphi }{\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}𝑑xp(x,\varphi )R^2[\widehat{O}](x,\varphi )-\left\{{\displaystyle \int _0^\pi }{\displaystyle \frac{d\varphi }{\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}𝑑xp(x,\varphi )R[\widehat{O}](x,\varphi )\right\}^2.`$ (5) Following this scheme, the tomographic precision in determining matrix elements of the density operator $`\widehat{\varrho }`$ has been discussed in , with asymptotic estimations in Ref. , whereas relevant observables $`\widehat{O}`$ have been analyzed in , also in comparison with the corresponding ideal measurement.
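To make the averaging and block-error procedure concrete, the following sketch simulates homodyne data for a coherent state and applies Eqs. (3)–(4) to the photon-number operator, whose kernel at $`\eta =1`$ reduces to $`R[a^{\dagger }a](x)=2x^2-1/2`$ (a standard result, not derived in this excerpt); the state, sample size and number of blocks are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 1.0        # coherent-state amplitude, so <a†a> = |alpha|^2 = 1
N = 100_000        # total number of homodyne outcomes (illustrative)
n_blocks = 100     # subensembles used for the error estimate, Eq. (4)

# For |alpha> the quadrature x_phi is Gaussian with mean
# |alpha| cos(phi) and vacuum variance 1/4 (eta = 1).
phi = rng.uniform(0.0, np.pi, N)
x = rng.normal(alpha * np.cos(phi), 0.5, N)

# Kernel for the number operator at eta = 1: R[a†a](x) = 2 x^2 - 1/2.
kernel = 2.0 * x**2 - 0.5

n_est = kernel.mean()                                  # Eq. (3)
blocks = kernel.reshape(n_blocks, -1).mean(axis=1)     # subensemble means
n_err = blocks.std(ddof=1) / np.sqrt(n_blocks)         # Eq. (4)
```

With these numbers the estimate comes out close to the true value 1, with a confidence interval of order 0.005.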
The crucial point of the method presented in this paper is that the tomographic kernel $`R[\widehat{O}](x,\varphi )`$ is not unique, since there exists a large class of null functions $`F(x,\varphi )`$ that have zero tomographic average for an arbitrary state, namely $`\overline{F}={\displaystyle \int _0^\pi }{\displaystyle \frac{d\varphi }{\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}𝑑xp(x,\varphi )F(x,\varphi )\equiv \mathrm{\hspace{0.33em}0}.`$ (6) Therefore, the addition of null functions to a generic kernel gives a new kernel with the same tomographic average, hence equivalent for the estimation of the same ensemble average $`\langle \widehat{O}\rangle `$. On the other hand, adding null functions modifies the kernel variance, and hence the statistical error on the data. The adaptive tomography method thus consists in optimizing the kernel within the equivalence class, in order to minimize the statistical errors. The paper is structured as follows. In Section II we introduce the classes of null functions that will be used in the paper, and describe the adaptive optimization method in detail. In Section III we apply the adaptive method to the tomography of the density matrix in the photon number representation. In Section IV we analyze the improvement of precision in the tomographic measurement of some relevant field observables. Section V briefly describes the effects of systematic errors on the effectiveness of the method. Finally, Section VI closes the paper by summarizing the main results. ## II Adaptive Tomography The following functions have vanishing tomographic expectation (6) $`G_n^+(x,\varphi )=e^{i(1+n)2\varphi }g_+(xe^{i\varphi })\hspace{1em}G_n^{-}(x,\varphi )=e^{-i(1+n)2\varphi }g_{-}(xe^{-i\varphi }).`$ (7) In Eq. (7) $`n\geq 0`$ and $`g_\pm (z)`$ are analytic functions of $`z`$. The set $`𝒢`$ of null functions defined in Eqs. (7) forms a vector space over the complex numbers, and each class $`𝒢^\pm =\left\{G_n^\pm \right\}`$ separately is closed under multiplication (without inverse).
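The null-function property (6) is easy to check numerically. The sketch below draws simulated homodyne outcomes for a coherent state (an assumed test state, at $`\eta =1`$) and verifies that the sample averages of several functions of the form (7), with the monomial choice $`g_+(z)=z^k`$, are compatible with zero:

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, N = 1.0, 200_000
phi = rng.uniform(0.0, np.pi, N)
# Quadrature outcomes for |alpha>: Gaussian, mean alpha*cos(phi), variance 1/4
x = rng.normal(alpha * np.cos(phi), 0.5, N)

def G_plus(n, k):
    """G_n^+(x, phi) of Eq. (7) with g_+(z) = z^k, i.e.
    G_n^+ = x^k exp(i (k + 2 + 2n) phi)."""
    return x**k * np.exp(1j * (k + 2 + 2 * n) * phi)

# Tomographic (sample) averages: all should vanish up to Monte Carlo noise.
residuals = [abs(G_plus(n, k).mean()) for n in range(3) for k in range(4)]
```

The $`n=0`$ entries correspond to type-I null functions and the $`k=0`$ entries to type-II null functions introduced below.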
In order to prove the vanishing expectation (6) for $`G_n^\pm (x,\varphi )`$ we consider the Taylor expansion of the functions $`g_\pm (xe^{\pm i\varphi })`$ $`g_\pm (xe^{\pm i\varphi })={\displaystyle \underset{k=0}{\overset{\mathrm{\infty }}{\sum }}}c_k^\pm x^ke^{\pm ik\varphi },`$ (8) which allows us to write $`\overline{G_n^\pm }={\displaystyle \int _0^\pi }{\displaystyle \frac{d\varphi }{\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}𝑑xp(x,\varphi )e^{\pm i(1+n)2\varphi }g_\pm (xe^{\pm i\varphi })={\displaystyle \underset{k=0}{\overset{\mathrm{\infty }}{\sum }}}c_k^\pm {\displaystyle \int _0^\pi }{\displaystyle \frac{d\varphi }{\pi }}e^{\pm i(k+2+2n)\varphi }\langle \widehat{x}_\varphi ^k\rangle ,`$ (9) where $`\langle \mathrm{\cdots }\rangle `$ denotes the usual ensemble average. Using the Wilcox decomposition formula one can write $`\widehat{x}_\varphi ^k={\displaystyle \frac{k!}{2^k}}{\displaystyle \underset{p=0}{\overset{[[k/2]]}{\sum }}}{\displaystyle \underset{s=0}{\overset{k-2p}{\sum }}}{\displaystyle \frac{a^{\dagger s}a^{k-2p-s}}{2^pp!s!(k-2p-s)!}}e^{i(2p+2s-k)\varphi },`$ (10) where $`[[x]]`$ denotes the integer part of $`x`$. Eq. (10) together with the identity $`{\displaystyle \int _0^\pi }{\displaystyle \frac{d\varphi }{\pi }}e^{iq\varphi }=\{\begin{array}{cc}0& \text{q even}\hfill \\ 1& \text{q = 0}\hfill \\ \frac{2i}{\pi q}& \text{q odd}\hfill \end{array},`$ (14) prove that $`{\displaystyle \int _0^\pi }{\displaystyle \frac{d\varphi }{\pi }}e^{\pm i(k+2+2n)\varphi }\langle \widehat{x}_\varphi ^k\rangle =0,n\geq 0,k\geq 0,`$ (15) hence $`{\displaystyle \int _0^\pi }{\displaystyle \frac{d\varphi }{\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}𝑑xp(x,\varphi )G_n^\pm (x,\varphi )=0,n\geq 0,`$ (16) namely $`G_n^\pm (x,\varphi )`$ are null functions for $`n\geq 0`$. In the following, we will focus attention on three particular sets of null functions. The type-I null functions are obtained from Eq.
(7) by choosing $`n=0`$ and $`g(xe^{i\varphi })\equiv x^ke^{ik\varphi }`$ for a given $`k`$, and will be denoted by $`F_k^I(x,\varphi )`$, namely $`F_k^I(x,\varphi )=x^ke^{i(k+2)\varphi }\hspace{1em}k=0,1,\mathrm{\dots }.`$ (17) The type-II null functions correspond to the simple choice $`g(xe^{i\varphi })\equiv 1`$, i.e. $`F_n^{II}(\varphi )=e^{i(1+n)2\varphi }\hspace{1em}n=0,1,\mathrm{\dots }.`$ (18) Finally, the type-III null functions are a kind of intermediate choice between the type-I and type-II classes, and are defined as follows $`F_l^{III}(x,\varphi )=x^{k[l]}e^{i(k[l]+2+2n[l])\varphi }\hspace{1em}l=0,1,\mathrm{\dots },`$ (19) where $`k[l]`$ and $`n[l]`$ are given in Table I. In the following we will use the notation $`F_k(x,\varphi )`$, dropping the type index I–III, when the identity under consideration holds for all three types. Let us consider a generic real kernel $`R[\widehat{O}](x,\varphi )`$. By adding $`M`$ null functions while keeping the kernel real, we obtain a new kernel $`K[\widehat{O}](x,\varphi )`$ $`K[\widehat{O}](x,\varphi )=R[\widehat{O}](x,\varphi )+{\displaystyle \underset{k=0}{\overset{M-1}{\sum }}}\mu _kF_k(x,\varphi )+{\displaystyle \underset{k=0}{\overset{M-1}{\sum }}}\mu _k^{*}F_k^{*}(x,\varphi ),`$ (20) where $`F_k(x,\varphi )\in 𝒢^+`$, $`F_k^{*}(x,\varphi )\in 𝒢^{-}`$, and $`\mu _k`$ are complex coefficients. By definition we have $`\overline{K[\widehat{O}]}=\overline{R[\widehat{O}]}`$, whereas the variance of the new kernel $`K[\widehat{O}](x,\varphi )`$ is given by $`\overline{\mathrm{\Delta }K^2[\widehat{O}]}=\overline{\mathrm{\Delta }R^2[\widehat{O}]}+2\left\{{\displaystyle \underset{k,l=0}{\overset{M-1}{\sum }}}\mu _k\mu _l^{*}\overline{F_kF_l^{*}}+{\displaystyle \underset{k=0}{\overset{M-1}{\sum }}}\mu _k\overline{R[\widehat{O}]F_k}+{\displaystyle \underset{k=0}{\overset{M-1}{\sum }}}\mu _k^{*}\overline{R[\widehat{O}]F_k^{*}}\right\}.`$ (21) In deriving the above formula we used the fact that both $`𝒢^+`$ and $`𝒢^{-}`$ are closed under multiplication. The variance of the modified kernel function in Eq.
(21) can be minimized with respect to the coefficients $`\mu _k`$, leading to the linear set of equations $`{\displaystyle \underset{l}{\sum }}\mu _l\overline{F_kF_l^{*}}=-\overline{R[\widehat{O}]F_k^{*}}.`$ (22) It is convenient to rewrite the optimization equations (22) in matrix form as follows $`𝐀\mu =-𝐛,`$ (23) where $`𝐀`$ is the Hermitian $`M\times M`$ matrix $`A_{kl}=\overline{F_kF_l^{*}}={\displaystyle \int _0^\pi }{\displaystyle \frac{d\varphi }{\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}𝑑xp(x,\varphi )F_k(x,\varphi )F_l^{*}(x,\varphi ),`$ (24) and $`𝐛`$ is the complex vector $`b_k=\overline{R[\widehat{O}]F_k^{*}}={\displaystyle \int _0^\pi }{\displaystyle \frac{d\varphi }{\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{+\mathrm{\infty }}}𝑑xp(x,\varphi )R[\widehat{O}](x,\varphi )F_k^{*}(x,\varphi ).`$ (25) Notice that the vector $`𝐛`$ depends on both the kernel $`R[\widehat{O}]`$ and the state $`\widehat{\varrho }`$ under examination, whereas the matrix $`𝐀`$ depends on the state only. By substituting Eq. (22) in Eq. (21) and inverting Eq. (23) we obtain $`\mathrm{\Delta }^2[\widehat{O}]\equiv \overline{\mathrm{\Delta }R^2[\widehat{O}]}-\overline{\mathrm{\Delta }K^2[\widehat{O}]}=2{\displaystyle \underset{k,l=0}{\overset{M-1}{\sum }}}\mu _kA_{kl}\mu _l^{*}=2{\displaystyle \underset{k,l=0}{\overset{M-1}{\sum }}}b_k\left(A^{-1}\right)_{kl}b_l^{*}\geq 0,`$ (26) which expresses the variance decrease in terms of $`𝐀`$ and $`𝐛`$. Let us summarize the optimization procedure for the kernel $`R[\widehat{O}](x,\varphi )`$. After collecting an ensemble of $`N`$ tomographic data, the quantities $`𝐀`$ and $`𝐛`$ are evaluated as tomographic experimental averages. Then, by solving the linear system (23) one obtains the coefficients $`\mu _k`$ which are used to build the optimized kernel $`K[\widehat{O}](x,\varphi )`$.
At this point, the same data set is used to average $`K[\widehat{O}](x,\varphi )`$ and, upon dividing the set into subensembles, the experimental error is evaluated, whose square is now reduced by the quantity $`\mathrm{\Delta }^2[\widehat{O}]/N`$. The actual precision improvement of the tomographic measurement depends both on the state under examination (which affects both $`𝐛`$ and $`𝐀`$) and on the operator $`\widehat{O}`$, whose kernel enters only in the expression of $`𝐛`$. An explicit expression for $`A_{kl}`$ can be obtained by means of Eq.(10), and generally depends on the type of null functions that are involved. For type-II null functions it reduces to the identity matrix, independently of the state $`A_{kl}^{II}=\delta _{kl}\text{type-II null functions},`$ (27) $`\delta _{kl}`$ denoting the Kronecker delta. For type-I null functions one has $`A_{kl}^I={\displaystyle \frac{(k+l)!}{2^{k+l}}}{\displaystyle \underset{p=0}{\overset{min(k,l)}{\sum }}}{\displaystyle \frac{\langle a^{\mathrm{\dagger }(l-p)}\rangle \langle a^{(k-p)}\rangle }{2^pp!(l-p)!(k-p)!}}\text{type-I null functions}.`$ (28) The explicit expression for coherent and Fock states is $`A_{kl}^I`$ $`=`$ $`\alpha ^{k-l}{\displaystyle \frac{(k+l)!}{k!}}\mathrm{\hspace{0.25em}2}^{-(k+2l)}L_l^{k-l}(-2|\alpha |^2)\text{coherent state }|\alpha \rangle (k\ge l)`$ (29) $`A_{kl}^I`$ $`=`$ $`\delta _{kl}{\displaystyle \frac{2^{1-k-n}}{n!\sqrt{\pi }}}{\displaystyle \int _0^{\mathrm{\infty }}}𝑑ye^{-y^2}y^{2k}H_n^2(y)\text{Fock state }|n\rangle ,`$ (30) where $`H_n(x)`$ denotes Hermite polynomials. Notice that for Fock states the matrix is diagonal (which is true also for type-II and type-III null functions). ## III Adaptive tomography of the density matrix In this section we apply the adaptive method to the tomographic measurement of the density matrix in the photon number representation. We evaluate the variance reduction $`\mathrm{\Delta }^2[|n\rangle \langle m|]`$ in Eq. (26) for $`\widehat{O}=|n\rangle \langle m|`$ corresponding to the tomographic measurement of the matrix elements $`\varrho _{nm}=\langle m|\widehat{\varrho }|n\rangle `$.
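The closed form of the type-I matrix can be cross-checked against a Monte Carlo estimate. In the sketch below (my own check; the exponents of Eq. (28), garbled in this copy, are read as $`\langle a^{\dagger (l-p)}\rangle \langle a^{(k-p)}\rangle /(2^pp!(l-p)!(k-p)!)`$, an assumed restoration I verified analytically for low indices):

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo estimate of A_kl = <F_k F_l*> for a coherent state
alpha, N, M = 1.0 + 0.8j, 400_000, 3
phi = rng.uniform(0.0, np.pi, N)
x = rng.normal((alpha * np.exp(-1j * phi)).real, 0.5)
F = np.stack([x**k * np.exp(1j * (k + 2) * phi) for k in range(M)])
A_mc = F @ F.conj().T / N

def A_formula(k, l, a):
    """Eq. (28) evaluated on a coherent state: <a^dag^j> = conj(a)^j."""
    s = sum(np.conj(a) ** (l - p) * a ** (k - p)
            / (2**p * math.factorial(p)
               * math.factorial(l - p) * math.factorial(k - p))
            for p in range(min(k, l) + 1))
    return math.factorial(k + l) / 2 ** (k + l) * s

A_th = np.array([[A_formula(k, l, alpha) for l in range(M)]
                 for k in range(M)])
print(np.max(np.abs(A_mc - A_th)))   # ~ 1/sqrt(N)
```

The agreement (and the exact value $`A_{00}=1`$) gives some confidence in the restored signs.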
We consider the different types of null functions, and calculate $`\mathrm{\Delta }^2[|n\rangle \langle m|]`$ versus the number $`M`$ of added null functions, for coherent states, squeezed vacuum, Fock states, and the ”Schrödinger-cat” like superposition of coherent states given by $`|\psi \rangle ={\displaystyle \frac{1}{\sqrt{2\left(1+\mathrm{exp}(-2|\alpha |^2)\right)}}}\left[|\alpha \rangle +|-\alpha \rangle \right].`$ (31) In order to see the new adaptive method at work, Monte Carlo simulated experiments are presented. Tomographic kernels for the matrix elements in the Fock basis were first presented in Ref. , with the extension to non-unit quantum efficiency in Ref. , and factorization identities for the kernel in Ref. . However, none of the above methods allows for an explicit analytical evaluation of the vector $`𝐛`$ in Eq. (25). For this reason, we compute $`\mathrm{\Delta }^2[|n\rangle \langle m|]`$ numerically, presenting results in terms of the relative variance reduction $`\gamma `$, defined as follows $`\gamma =1-{\displaystyle \frac{\overline{\mathrm{\Delta }K^2[\widehat{O}]}}{\overline{\mathrm{\Delta }R^2[\widehat{O}]}}}={\displaystyle \frac{\mathrm{\Delta }^2[\widehat{O}]}{\overline{\mathrm{\Delta }R^2[\widehat{O}]}}}.`$ (32) A complete removal of fluctuations would correspond to $`\gamma =1`$. ### A Coherent States The adaptive method leads to a significant error reduction for the detection of matrix elements $`\langle m|\widehat{\varrho }|n\rangle `$ of coherent states. Our results indicate that type-I null functions are the most effective, and that the larger the amplitude $`\alpha `$ of the coherent state, the larger the noise reduction. In Fig. 1 numerical results are presented for diagonal elements $`\langle n|\widehat{\varrho }|n\rangle `$ for intensity $`|\alpha |^2=5`$. In Fig. 1(a) the noise reduction $`\gamma `$ is given versus the number $`M`$ of added type-I null functions. One can see that the noise reduction $`\gamma `$ saturates for large $`M`$, and better levels $`\gamma `$ of reduction are achieved for smaller $`n`$. In Fig.
1(b) the noise reduction is reported versus $`n`$ for $`M=30`$. In Fig. 2 we report the results from a Monte Carlo experiment for $`|\alpha |^2=3`$, with the optimization performed with $`M=6`$ null functions. The reduction of statistical errors for low values of $`n`$ is evident. The noise reduction for the off-diagonal matrix elements behaves similarly to that for the diagonal ones, being more effective for low indices. In Fig. 3 the noise reduction $`\gamma `$ versus $`n`$ and $`m`$ of the matrix element $`\langle m|\widehat{\varrho }|n\rangle `$ is plotted for a coherent state with $`|\alpha |^2=10`$, and for the three types of null functions. The type-I null functions are generally more effective, though not uniformly over all indices $`n`$ and $`m`$. ### B Squeezed states and Schrödinger cat states Results for squeezed states and ”cat” superpositions of coherent states are presented in the same subsection, since they behave similarly. This is due to the fact that both states have phase-dependent features, which are reflected in a similar odd-even oscillation in the photon number probability distribution. In Figs. 4 and 5 the noise reduction for both cases is plotted for the three types of null functions, for $`M=10`$. From the plots it is apparent that type-II null functions are now the most effective ones, especially for off-diagonal matrix elements, though the same level of noise reduction for low $`n`$ and $`m`$ can also be obtained using type-I and type-III null functions. In Fig. 6 results from a Monte Carlo simulated adaptive tomography on a squeezed vacuum are reported for $`\langle \widehat{n}\rangle =4`$ and $`M=10`$. Matrix elements before and after optimization can be compared, showing the error reduction at work. ### C Fock states For Fock states the matrix $`𝐀`$ is diagonal for all types of null functions, and therefore the optimization procedure just consists of the evaluation of the vector $`𝐛`$.
The kernels for the matrix elements have the form $`K[|n\rangle \langle m|](x,\varphi )=f_{n,m}(x)\mathrm{exp}(i(n-m)\varphi )`$, where $`f_{n,m}(x)`$ has the parity of $`n-m`$ . This fact, together with the integral (14), makes it straightforward to show that $`b_k^I=b_k^{II}=b_k^{III}=0\mathrm{\forall }k,`$ (33) namely no improvement should be expected for the precision of quantum tomography on Fock states. ## IV Adaptive tomographic measurements of observables The tomographic estimation of the ensemble average $`\langle \widehat{O}\rangle `$ of a radiation operator $`\widehat{O}`$ can be obtained by averaging the kernel $`R[\widehat{O}](x,\varphi )`$ given in Eq. (2). However, Eq. (2) needs a procedure that exploits the null function equivalence, and is given in Ref. . For this reason, for simplicity here we use the Richter formula , which expresses the kernels for the normally ordered moments as follows $`R[a^{\mathrm{\dagger }n}a^m](x;\varphi )=e^{i(m-n)\varphi }{\displaystyle \frac{H_{n+m}(\sqrt{2}x)}{\sqrt{2^{n+m}}\left(\genfrac{}{}{0pt}{}{n+m}{n}\right)}},`$ (34) $`H_n(x)`$ being the Hermite polynomial of order $`n`$. We apply the adaptive method to the tomographic detection of the most relevant observables: intensity, quadrature and complex field amplitude. The optimization method is here particularly useful, as the tomographic detection of these observables using the Richter kernel is very noisy . In contrast to the case of the matrix elements given in Section III, here some analytical evaluations can be carried out. We consider measurements performed on coherent states, squeezed vacuum, Fock states and cat superpositions of coherent states. It turns out that the addition of just a few null functions to the Richter kernels generally results in a large improvement of the tomographic precision, again with the exception of Fock states, where no improvement can be obtained.
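The Richter kernel is easy to evaluate directly. The sketch below (my own check, with the phase of Eq. (34) read as $`e^{i(m-n)\varphi }`$ and $`H_{n+m}`$ the physicists' Hermite polynomial) averages it over simulated coherent-state data and recovers the normally ordered moments:

```python
import math
import numpy as np
from numpy.polynomial import hermite as H

rng = np.random.default_rng(4)

def richter(n, m, x, phi):
    """Kernel whose homodyne average gives <a^dag^n a^m> (Eq. 34)."""
    c = np.zeros(n + m + 1)
    c[n + m] = 1.0                     # select H_{n+m}
    return (np.exp(1j * (m - n) * phi) * H.hermval(np.sqrt(2.0) * x, c)
            / (np.sqrt(2.0 ** (n + m)) * math.comb(n + m, n)))

# Coherent-state homodyne data, quadrature variance 1/4
alpha, N = 2.0 + 1.0j, 400_000
phi = rng.uniform(0.0, np.pi, N)
x = rng.normal((alpha * np.exp(-1j * phi)).real, 0.5)

print(richter(1, 1, x, phi).mean())   # ~ |alpha|^2 = 5
print(richter(0, 1, x, phi).mean())   # ~ alpha
print(richter(0, 2, x, phi).mean())   # ~ alpha^2
```

For $`n=m=1`$ this kernel reduces to $`2x^2-1/2`$, the intensity kernel of the next subsection, and for $`(n,m)=(0,1)`$ to $`2xe^{i\varphi }`$, the field-amplitude kernel.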
### A Intensity The tomographic detection of the intensity is obtained by averaging the kernel $`R[a^{\mathrm{\dagger }}a](x)=2x^2-{\displaystyle \frac{1}{2}}.`$ (35) The vectors $`𝐛`$ needed for the optimization procedure are given by $`b_k^I=\overline{R[a^{\mathrm{\dagger }}a]F_k^{I*}}=\overline{2x^{k+2}e^{-i(k+2)\varphi }}`$ $`=`$ $`{\displaystyle \frac{\langle a^{\mathrm{\dagger }(k+2)}\rangle }{2^{1+k}}}`$ (36) $`b_n^{II}=\overline{R[a^{\mathrm{\dagger }}a]F_n^{II*}}=\overline{2x^2e^{-i(n+1)2\varphi }}`$ $`=`$ $`\{\begin{array}{cc}\frac{\langle a^{\mathrm{\dagger }2}\rangle }{2}& n=0\hfill \\ 0& n\ne 0\hfill \end{array}`$ (39) $`b_l^{III}=\overline{R[a^{\mathrm{\dagger }}a]F_l^{III*}}=\overline{2x^{k[l]+2}e^{-i(k[l]+2+2n[l])\varphi }}`$ $`=`$ $`\{\begin{array}{cc}\frac{\langle a^{\mathrm{\dagger }2}\rangle }{2}& l=0\hfill \\ 0& l\ne 0\hfill \end{array}.`$ (42) From Eqs. (39) and (42) it follows that only $`F_0^I(x,\varphi )`$ and $`F_0^{II}(\varphi )\equiv F_0^{III}(\varphi )\equiv \mathrm{exp}(2i\varphi )`$ are effective in reducing the variance. We solved the optimization equations (23) analytically for type-I null functions, and also in this case it turns out that for all the states here considered only the single null function $`F_0^I(\varphi )`$ is needed, namely one has $`\mu _0=-b_0,\mu _k=0,k\ge 1.`$ (43) The corresponding reduction of variance is easily obtained from Eq. (26), and is given by $`\mathrm{\Delta }^2[a^{\mathrm{\dagger }}a]={\displaystyle \frac{1}{2}}\langle a^2\rangle \langle a^{\mathrm{\dagger }2}\rangle .`$ (44) Actually, $`\mathrm{\Delta }^2[a^{\mathrm{\dagger }}a]`$ can compensate the leading term of the variance of the original Richter kernel , which, in turn, is given by $`\overline{\mathrm{\Delta }R^2[a^{\mathrm{\dagger }}a]}=\widehat{\mathrm{\Delta }n^2}+{\displaystyle \frac{1}{2}}\left[\langle a^2\rangle \langle a^{\mathrm{\dagger }2}\rangle +2\langle a^{\mathrm{\dagger }}a\rangle +1\right].`$ (45) This means that the variance of the optimized kernel $`\overline{\mathrm{\Delta }K^2[a^{\mathrm{\dagger }}a]}`$ becomes much closer to the intrinsic intensity fluctuations $`\widehat{\mathrm{\Delta }n^2}`$ than the original noise $`\overline{\mathrm{\Delta }R^2[a^{\mathrm{\dagger }}a]}`$.
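This single-null-function result can be reproduced numerically. The sketch below (my own illustration, assuming simulated coherent-state data and the variance-minimizing sign $`\mu _0=-b_0`$) adds only $`F_0=e^{2i\varphi }`$ to the intensity kernel and checks the size of the variance reduction:

```python
import numpy as np

rng = np.random.default_rng(5)

# Intensity kernel R = 2x^2 - 1/2 optimized with the single null
# function F_0 = e^{2 i phi}; data for a coherent state |alpha>.
alpha, N = 2.0, 400_000
phi = rng.uniform(0.0, np.pi, N)
x = rng.normal((alpha * np.exp(-1j * phi)).real, 0.5)

R = 2.0 * x**2 - 0.5
F0 = np.exp(2j * phi)
b0 = np.mean(R * F0.conj())       # ~ <a^dag 2>/2 = conj(alpha)^2 / 2
K = R + 2.0 * (-b0 * F0).real     # optimized real kernel

print(R.mean(), K.mean())         # both ~ <n> = |alpha|^2 = 4
print(R.var() - K.var())          # ~ |<a^2>|^2 / 2 = 8, cf. Eq. (44)
```

For $`|\alpha |^2=4`$ the Richter variance is $`\widehat{\mathrm{\Delta }n^2}+\frac{1}{2}(|\alpha |^4+2|\alpha |^2+1)=16.5`$, so a reduction of 8 roughly halves the statistical error squared.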
In order to appreciate such a noise reduction we compare the two noise ratios $`\delta n_R=\sqrt{{\displaystyle \frac{\overline{\mathrm{\Delta }R^2[a^{\mathrm{\dagger }}a]}}{\widehat{\mathrm{\Delta }n^2}}}}\delta n_K=\sqrt{{\displaystyle \frac{\overline{\mathrm{\Delta }K^2[a^{\mathrm{\dagger }}a]}}{\widehat{\mathrm{\Delta }n^2}}}}.`$ (46) For coherent states $`|\alpha \rangle `$ we obtain $`\delta n_R=\sqrt{2+{\displaystyle \frac{1}{2}}\left(|\alpha |^2+{\displaystyle \frac{1}{|\alpha |^2}}\right)}\delta n_K=\sqrt{2+{\displaystyle \frac{1}{2|\alpha |^2}}},`$ (47) that is, from an asymptotically linearly increasing function of $`|\alpha |`$ the ratio becomes a constant, $`\delta n_K\to \sqrt{2}`$. Similar expressions are obtained for other kinds of states: the noise ratio saturates to $`\delta n_K\to \sqrt{3/2}`$ for both squeezed vacuum and cat states. In Fig. 7 results from a Monte Carlo simulation of the tomographic measurement of the intensity on coherent states show the noise reduction obtained when using the optimized kernel. The noise reduction obtained by adding the single null function $`F_0(\varphi )`$ can be easily evaluated also for the generic diagonal moment $`\langle a^{\mathrm{\dagger }n}a^n\rangle `$, using the formula $`e^{-i2\varphi }R[a^{\mathrm{\dagger }n}a^n](x)={\displaystyle \frac{n}{n+1}}R[a^{\mathrm{\dagger }(n+1)}a^{n-1}](x),`$ (48) which leads to $`b_0=\overline{R[a^{\mathrm{\dagger }n}a^n]e^{-i2\varphi }}={\displaystyle \frac{n}{n+1}}\langle a^{\mathrm{\dagger }(n+1)}a^{n-1}\rangle ,`$ (49) namely $`\mathrm{\Delta }^2[a^{\mathrm{\dagger }n}a^n]=2|b_0|^2`$. We just mention that optimizing the kernel $`R[a^{\mathrm{\dagger }2}a^2](x)`$ is useful to improve the detection of the second order correlation function $`g^{(2)}=\langle a^{\mathrm{\dagger }2}a^2\rangle /\langle a^{\mathrm{\dagger }}a\rangle ^2`$. ### B Quadrature The optimization procedure has been tested also on the kernel $`R[\widehat{x}](x,\varphi )=2x\mathrm{cos}\varphi `$, corresponding to the measurement of the quadrature operator $`\widehat{x}=\frac{1}{2}(a+a^{\mathrm{\dagger }})`$. Similarly to the intensity case, the type-II and type-III null functions do not play a role in improving the precision, whereas type-I functions give $`b_k=2^{-(k+1)}\langle a^{\mathrm{\dagger }(1+k)}\rangle `$ in Eq. (23).
In this way the optimization procedure can be carried out analytically also in this case. The results indicate that for coherent states it is enough to add the first null function $`F_0^I(\varphi )`$, whereas for squeezed vacuum and cat states only the odd-index functions $`F_{2s+1}^I(x,\varphi )`$ contribute to the noise reduction. In this case the main term is due to $`F_1^I(x,\varphi )`$, whereas higher order functions improve the variances only by a few percent. For coherent states the variance reduction from $`F_0^I(x,\varphi )`$ is given by $`\mathrm{\Delta }^2[\widehat{x}]={\displaystyle \frac{1}{2}}\langle a^{\mathrm{\dagger }}a\rangle ={\displaystyle \frac{1}{2}}|\alpha |^2,`$ (50) which completely compensates the leading term in the variance of the original Richter kernel $`\overline{\mathrm{\Delta }R^2[\widehat{x}]}=\widehat{\mathrm{\Delta }x^2}+{\displaystyle \frac{1}{2}}\langle a^{\mathrm{\dagger }}a\rangle +{\displaystyle \frac{1}{4}}.`$ (51) For squeezed vacuum and cat states the variance reduction due to $`F_1(x,\varphi )`$ is $`\mathrm{\Delta }^2[\widehat{x}]={\displaystyle \frac{1}{2\left(1-|\langle a\rangle |^2+2\langle a^{\mathrm{\dagger }}a\rangle \right)}}\left[|\langle a\rangle |^2\left(\langle a^2\rangle +\langle a^{\mathrm{\dagger }2}\rangle +{\displaystyle \frac{1}{2}}+\langle a^{\mathrm{\dagger }}a\rangle \right)+|\langle a^2\rangle |^2\right].`$ (52) Upon defining the noise ratio $`\delta x_K`$ in analogy to Eq. (46) $`\delta x_K=\sqrt{{\displaystyle \frac{\overline{\mathrm{\Delta }K^2[\widehat{x}]}}{\widehat{\mathrm{\Delta }x^2}}}},`$ (53) from Eqs. (50) and (52) we get the constant $`\delta x_K=\sqrt{2}`$ for coherent states, independently of $`|\alpha |^2`$, whereas for squeezed vacuum and cat states the noise ratio saturates to $`\delta x_K\to \sqrt{5/4}`$. In Fig. 8 results from a simulated experiment of the tomographic measurement of the quadrature on coherent states are shown for $`|\alpha |^2=3`$. There the histograms of the original Richter kernel and of the optimized kernel are compared. The optimized kernel has a sharper distribution, which is peaked at the mean value $`\langle \widehat{x}\rangle =\sqrt{3}`$.
For this reason, it is quite obvious that the optimized kernel $`K[\widehat{x}](x,\varphi )`$ gives a more precise determination of $`\langle \widehat{x}\rangle `$ than the original kernel $`R[\widehat{x}](x,\varphi )`$. ### C Field amplitude The tomographic kernel for the measurement of the complex field amplitude $`a`$ is given by $`R[a](x,\varphi )=2xe^{i\varphi }`$, and its fluctuations should be compared with those from the ideal measurement of $`a`$, which could be achieved by ideal eight-port or six-port homodyne detection. The optimization procedure depends on the choice of the definition of the statistical error for a complex quantity. If one considers the real or the imaginary part separately, the procedure coincides with the optimization of the precision in independent measurements of two conjugated quadratures. On the other hand, in order to take into account both noises jointly, we minimize the quantity $`\overline{\mathrm{\Delta }_{}K^2[a]}={\displaystyle \frac{1}{2}}\left\{\overline{\left|K[a]\right|^2}-\left|\overline{K[a]}\right|^2\right\},`$ (54) corresponding to the average of the noises for the real and imaginary parts, namely the trace of the noise covariance matrix. Now, the equivalence class of kernel functions is written as follows $`K[a](x,\varphi )=R[a](x,\varphi )+{\displaystyle \underset{p=0}{\overset{M-1}{\sum }}}\mu _pF_p(x,\varphi )+{\displaystyle \underset{p=0}{\overset{M-1}{\sum }}}\nu _pF_p^{*}(x,\varphi ),`$ (55) $`\mu _p`$ and $`\nu _p`$ being two independent sets of complex coefficients. The optimization procedure is similar to the real case, and is reduced to solving the two linear systems $`𝐀\mu =-𝐛,𝐀\nu =-𝐜,`$ (56) where $`𝐜`$ is given by $$c_p=\overline{R[\widehat{O}]F_p}.$$ By inverting Eqs.
(56), one obtains the noise reduction $`\mathrm{\Delta }_{}^2[a]=\overline{\mathrm{\Delta }_{}R^2[a]}-\overline{\mathrm{\Delta }_{}K^2[a]}={\displaystyle \underset{p,q=0}{\overset{M-1}{\sum }}}\left[b_p\left(A^{-1}\right)_{qp}b_q^{*}+c_p\left(A^{-1}\right)_{pq}c_q^{*}\right].`$ (57) Also in the present case it is sufficient to consider only type-I functions. The optimization vector $`𝐛`$ is given by $`b_k=2^{-k}\langle a^{\mathrm{\dagger }(1+k)}\rangle `$. Similarly to the case of the quadrature, the optimization procedure shows that for coherent states only $`F_0^I(\varphi )`$ is needed, whereas for squeezed vacuum and cat states only the odd-index functions $`F_{2s+1}^I(x,\varphi )`$ contribute to the noise reduction, and the main term comes from $`F_1^I(x,\varphi )`$. In this way for coherent states one obtains $`\mathrm{\Delta }_{}^2[a]={\displaystyle \frac{1}{2}}|\alpha |^2,`$ (58) whereas for squeezed vacuum and cat states one has $`\mathrm{\Delta }_{}^2[a]={\displaystyle \frac{1}{2\left(1-|\langle a\rangle |^2+2\langle a^{\mathrm{\dagger }}a\rangle \right)}}\left[|\langle a\rangle |^2\left(\langle a^2\rangle +\langle a^{\mathrm{\dagger }2}\rangle +{\displaystyle \frac{1}{2}}+\langle a^{\mathrm{\dagger }}a\rangle \right)+|\langle a^2\rangle |^2\right].`$ (59) Eqs. (58) and (59) should be compared with the noise figure of the original Richter kernel $`\overline{\mathrm{\Delta }_{}^2R[a]}={\displaystyle \frac{1}{2}}\left[2\langle a^{\mathrm{\dagger }}a\rangle +1-|\langle a\rangle |^2\right],`$ (60) and with the intrinsic noise of a generalized measurement of the amplitude $`\widehat{\mathrm{\Delta }_{}a^2}={\displaystyle \frac{1}{2}}\left[\langle a^{\mathrm{\dagger }}a\rangle +1-|\langle a\rangle |^2\right].`$ (61) The noise ratio thus equals $`\delta a_K=1`$ for coherent states, whereas it saturates to $`\delta a_K\to \sqrt{3/2}`$ for both squeezed vacuum and cat states. Remarkably, for coherent states the heterodyne noise is reached, namely tomographic detection has ideal noise. ## V Effects of systematic errors Throughout this paper the tomographic kernels have been optimized by adding low order null functions. Higher order functions oscillate more rapidly.
Since the method involves only the average of these functions over a small sample of data, fast oscillations in $`\varphi `$ and higher powers of $`x`$ would introduce more noise, and including too many null functions would increase the error instead of reducing it. In Fig. 9 an example of such pathology is given. Another point that should be mentioned is that in the tomographic detection here considered the phase $`\varphi `$ is a random parameter in $`[0,\pi ]`$. A discrete scanning by equally-spaced phases would introduce systematic errors that would mask the benefits of the optimization. Actually, for non-random uniform scanning, the null function $`F_0(\varphi )`$ has no effect when added to phase-independent kernels, whereas the other null functions have a much reduced effect, and obviously do not eliminate the systematic error due to the finite mesh of the deterministic scanning. ## VI Summary and Conclusions In this paper we have presented an adaptive method to optimize tomographic kernels, improving the precision of the tomographic measurement. The method has been analyzed in detail for coherent states, Fock states, squeezed vacuum, and ”Schrödinger-cat” states. With the exception of Fock states, the method generally provides a sizeable reduction of statistical errors. For coherent states the improvement mainly concerns the small-index matrix elements, whereas for squeezed vacuum and cat states also far off-diagonal elements are improved. The error reduction is much more significant for the measurement of intensity, quadrature and field amplitude, where for coherent states, squeezed vacuum, and cat states the ratio between the tomographic noise and the uncertainty of the considered observable saturates for increasing energy. In this case, we can definitely assert that quantum tomography is a quasi-ideal measurement, as it adds only a small amount of noise as compared to ideal detection. ## Acknowledgments We would like to thank Dirk-G.
Welsch, Mohamed Dakna and Nicoletta Sterpi for useful discussions. M. G. A. Paris has been partly supported by the “Francesco Somaini” foundation. This work is part of the INFM contract PRA-1997-CAT.
no-problem/9812/hep-ph9812340.html
ar5iv
text
# Light Gluino Predictions for Jet Cross Sections in Tevatron Run II ## Abstract The CDF inclusive jet transverse energy cross section at $`1.8TeV`$ suggests anomalous behavior at both low and high transverse energies. In addition the scaled ratio of the $`0.63TeV`$ to $`1.8TeV`$ data lies significantly below the standard model prediction and suggests structure not attributable to standard model processes. These anomalies are in line with what would be expected in the light gluino scenario. We perform a unified fit and extrapolate to two TeV to predict the results at run II. preprint: UAHEP9812 November 1998 hep-ph/9812340 The CDF collaboration at Fermilab has published a study of the jet inclusive transverse energy cross section in $`p\overline{p}`$ collisions at $`1.8TeV`$ which suggests the possibility of anomalous behavior in both the low and high transverse energy regions. D0 has not published results in the low transverse energy region but has presented data at high transverse energy which appear to be consistent with either the CDF result or the standard model. The apparent anomaly at high $`E_T`$ seen by CDF could, therefore, be a statistical fluctuation. It has also been suggested that these results are compatible with the standard model if the gluon distribution at high x is appreciably higher than expected on the basis of previous fits. On the other hand, the anomalous behavior observed by CDF in both the low and high $`E_T`$ regions is also consistent with that expected if the gluino of supersymmetry is light (below $`10GeV`$ in mass is sufficient) . Although all direct searches for a light gluino have turned up negative, many indirect indications of such a light color octet parton have been noted. A partial list is contained in the references of .
The measured inclusive cross section at center of mass energy $`\sqrt{s}`$ to produce a jet of transverse energy $`E_T`$ averaged over a certain rapidity interval is theoretically expected to have the form $`d\sigma /dE_T=\alpha _s(\mu )^2s^{-3/2}F(X_T,{\displaystyle \frac{\mathrm{\Lambda }}{\sqrt{s}}},{\displaystyle \frac{m}{\sqrt{s}}})+𝒪(\alpha _s^3)`$ (1) Here $`\mu `$ is the scale parameter, $`X_T=2E_T/\sqrt{s}`$, $`\mathrm{\Lambda }`$ is the QCD dimensional transmutation parameter, and $`m`$ represents any of the masses of the strongly interacting particles in the theory. Taken to all orders the cross section is independent of $`\mu `$ but at finite order the theoretical result depends on $`\mu `$ which must therefore be treated as a parameter of the theory. The CDF best fits correspond to $`\mu =E_T/2`$. At high energy the scaling function F depends only on $`X_T`$ . The CDF data for this cross section compared to the next-to-leading order (NLO) QCD predictions are below unity at low $`E_T`$ and rise dramatically above unity at high $`E_T`$. In the Supersymmetry (SUSY) treatment of , this behavior was attributed to three phenomena. * With a light gluino the strong coupling constant runs more slowly, being higher than in the standard model at high $`\mu `$ and lower at low $`\mu `$. * The production of gluino pairs increases F by a roughly uniform factor of 1.06 for all $`E_T`$ * A squark, if present, will cause a bump in the cross section at about $`m_{\stackrel{~}{Q}}/2`$. The fit of used the CDF suggested value of $`\mu `$ and a value of $`\mathrm{\Lambda }`$ corresponding to $`\alpha _s(M_Z)=0.113`$ and a squark mass of about $`106GeV`$. The theoretical ratio of the SUSY prediction relative to the standard model prediction is relatively insensitive to higher order corrections since both will have roughly equal higher order enhancements. In this work the CTEQ3 parton distribution functions (pdf’s) were used.
In a later study , CDF considered the scaled ratio of the inclusive jet $`E_T`$ cross sections at $`630GeV`$ and $`1.8TeV`$. $`r(X_T)={\displaystyle \frac{s^{3/2}d\sigma /dE_T(\sqrt{s}=630GeV)}{s^{3/2}d\sigma /dE_T(\sqrt{s}=1800GeV)}}`$ (2) Since at both energies $`\sqrt{s}`$ is much greater than the QCD scale parameter $`\mathrm{\Lambda }`$ and all the quark masses of the standard model (except the top quark which contributes negligibly at these energies), the standard model prediction modulo residual corrections from higher order and from scaling violation in the pdf’s is just $`r(X_T)={\displaystyle \frac{\alpha _s^2(\lambda X_T\cdot 0.63TeV/2)}{\alpha _s^2(\lambda X_T\cdot 1.8TeV/2)}}`$ (3) We have assumed here that the appropriate choice of $`\mu `$ is $`\lambda E_T`$ with $`\lambda =1/2`$ being the result of the CDF best fit to the $`1.8TeV`$ data. The full standard model prediction with corrections incorporated seriously overestimates the CDF data. In addition there is a possible structure in r that, if real, might suggest the existence of a strongly interacting particle in the $`100GeV`$ region with a production cross section many times larger than that of top. As always, there is the possibility that the anomaly is due to systematic errors although it would be surprising if such errors induced structure in $`E_T`$. In fact, the D0 experiment does not confirm the existence of structure in r suggesting, therefore, an explanation in terms of systematic errors. Although the systematic errors could easily affect the normalization of the r parameter, it would be surprising if they affected the point-to-point errors. These systematic errors derive primarily from the lower energy (630 GeV) data and hence the existence or non-existence of structure should be definitively resolved by comparing the ratio of the $`2TeV`$ data which will be available beginning in the year 2000 with the $`1.8TeV`$ data.
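The qualitative effect of a light gluino on this ratio can be seen already at one loop. The sketch below is my own illustration (it assumes one-loop running with $`\alpha _s(M_Z)=0.116`$ and $`\lambda =1/2`$ as quoted later in the text, five quark flavours, and the standard one-loop shift $`\beta _0\to \beta _0-2`$ for a single light Majorana color-octet gluino; two-loop running and pdf scaling violations are ignored):

```python
import math

def alpha_s(mu, b0, a_mz=0.116, mz=91.19):
    """One-loop running coupling; b0 is the leading beta coefficient."""
    return a_mz / (1.0 + a_mz * b0 / (2.0 * math.pi) * math.log(mu / mz))

def r(x_t, b0, lam=0.5, s1=630.0, s2=1800.0):
    """Scaling ratio of Eq. (3) at scale mu = lam * E_T = lam * x_t * sqrt(s)/2."""
    return (alpha_s(lam * x_t * s1 / 2.0, b0) /
            alpha_s(lam * x_t * s2 / 2.0, b0)) ** 2

nf = 5
b0_sm = 11.0 - 2.0 * nf / 3.0     # standard QCD, 5 flavours
b0_gluino = b0_sm - 2.0           # one light Majorana gluino octet

for x_t in (0.1, 0.2, 0.3):
    print(x_t, r(x_t, b0_sm), r(x_t, b0_gluino))
# the light-gluino ratio lies uniformly below the standard-model one
```

The slower running (smaller $`\beta _0`$) drags the ratio toward unity at every $`X_T`$, which is the direction the CDF data favor.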
Although the energy step is small, the greatly increased luminosity in run II coupled with the small systematic errors in the $`1.8TeV`$ data should guarantee sufficient sensitivity to settle the question. The features observed by CDF in the scaling ratio are those expected in the light gluino scenario . The slower fall-off of $`\alpha _s`$ predicts that the r parameter should be generally lower than the standard model expectations in agreement with the data. In addition a squark in the $`100GeV`$ mass range would provide a bump in each cross section at roughly fixed $`E_T=m_{\stackrel{~}{Q}}/2`$. This would lead to a dip-bump structure separated by a factor of 1.8/0.63 in $`X_T`$ in qualitative agreement with the CDF data. If the bump had occurred at lower $`X_T`$ than the dip there would have been no possibility of a fit in any model where the structure was attributed to a new particle. Reference provided two fits to the CDF data. The first used the CTEQ3 pdf’s and the scale choice $`\mu =E_T/2`$ with a squark mass of $`130GeV`$. In the CTEQ3 pdf’s there are, of course, no initial state gluinos so the cross section bump derives from the reaction $`qg\to q\stackrel{~}{g}\stackrel{~}{g}`$ (4) with an intermediate squark in the $`q\stackrel{~}{g}`$ channel. The dynamics are such that the initial state gluon splits into two dominantly collinear gluinos one of which interacts with the initial state quark to produce the intermediate squark. Other non-resonant light gluino contributions to the cross section come from the parton level processes $`q\overline{q}\to \stackrel{~}{g}\stackrel{~}{g}`$ (5) $`gg\to \stackrel{~}{g}\stackrel{~}{g}`$ (6) If the gluino is light it should have a pronounced presence in the proton dynamically generated from the gluon splitting discussed above. Two groups have analyzed deep inelastic scattering allowing for a light gluino and presented fits to the gluino pdf as well as modifications of the other pdf’s due to the gluino presence. In ref.
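The dip-bump kinematics follow from simple arithmetic: a bump at fixed $`E_T=m_{\stackrel{~}{Q}}/2`$ sits at $`X_T=m_{\stackrel{~}{Q}}/\sqrt{s}`$, so the two features are separated in $`X_T`$ by exactly the ratio of the collider energies. A quick check (using the 133 GeV squark mass from the fit later in the text):

```python
m_squark = 133.0                 # GeV, the fitted squark mass
for sqrt_s in (630.0, 1800.0):
    x_t = (m_squark / 2.0) / (sqrt_s / 2.0)   # X_T = 2 E_T / sqrt(s)
    print(sqrt_s, round(x_t, 3))              # 0.211 and 0.074

print(1800.0 / 630.0)            # separation factor ~ 2.86 = 1.8/0.63
```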
we compared the scaling violation using the Rückl-Vogt pdf’s with that using the CTEQ3 set. With intrinsic gluinos there are extra contributions to the jet inclusive cross sections from the processes $`g\stackrel{~}{g}\to g\stackrel{~}{g}`$ (7) $`q\stackrel{~}{g}\to q\stackrel{~}{g}`$ (8) $`\stackrel{~}{g}\stackrel{~}{g}\to \stackrel{~}{g}\stackrel{~}{g}`$ (9) $`\stackrel{~}{g}\stackrel{~}{g}\to gg`$ (10) $`\stackrel{~}{g}\stackrel{~}{g}\to q\overline{q}`$ (11) The second process replaces the higher order reaction of Eq. (4) and provides a direct channel pole at the mass of the squark leading to a peak in the transverse energy cross sections. We treat the squark as a resonance in the quark-gluino channel. Each of these reactions of course is subject to higher order corrections but these tend to cancel in the scaling ratio and in the ratio of the SUSY transverse energy cross section to that of the standard model. In this second fit the scale $`\mu `$ was chosen to be the parton-parton CM energy. The purpose of the current work is to return to the inclusive jet transverse energy cross section and seek a combined fit to this plus the scaling curve allowing for intrinsic gluinos in the proton. Fitting both the scaling curve $`r(X_T)`$ and the $`1.8TeV`$ cross section is equivalent to fitting the transverse energy cross section at both $`1.8TeV`$ and $`0.63TeV`$. Using the parameters of this combined best fit we then present the predictions for the $`E_T`$ cross section at $`2TeV`$ CM energy of run II and the scaling curve for $`1.8TeV/2TeV`$. The primary parameters of the combined fit are the scale $`\mu `$, the QCD $`\mathrm{\Lambda }`$ parameter or equivalently $`\alpha _s(M_Z)`$, and the squark mass $`m_{\stackrel{~}{Q}}`$.
We find the optimal values $`\mu =0.6E_T`$ (12) $`\alpha _s(M_Z)=0.116`$ (13) $`m_{\stackrel{~}{Q}}=133GeV`$ (14) In the fit we estimate NLO corrections by the K factor $`1+10\alpha _s(\mu )/\pi `$ and we simulate resolution smearing by increasing the width of the squark by a factor of 2 from its SUSY QCD prediction $`2\alpha _sm_{\stackrel{~}{Q}}/3`$. In addition it is known that the systematic errors in the $`630GeV`$ data form a fairly broad band . We therefore allow the scaling data to float by a uniform factor near unity. The results are presented in figures 1-4. Figure 1 shows the fit to the $`1.8TeV`$ jet inclusive $`E_T`$ cross section averaged over the CDF rapidity range $`0.1<|\eta |<0.7`$. In order to compare with the data of ref. , the light gluino prediction is plotted relative to the QCD prediction given to us in a private communication by the author of that reference. At high $`E_T`$ the fit goes through the lower range of the CDF errors which suggests it is also consistent with the D0 data. The fit qualitatively reproduces the dip at low $`E_T`$ and shows a peak at low $`E_T`$ due to the $`133GeV`$ squark. Figure 2 shows the scaling function $`r(X_T)`$ as given in the light gluino scenario with a $`133GeV`$ squark and as given by the standard model. The data has been moved up by a uniform factor of 1.2 which is consistent with the effect of systematic errors in the $`630GeV`$ data. The height and width of the dip-bump structure is in qualitative agreement with the expectations of the light gluino plus $`133GeV`$ squark model. One might expect that a full simulation including hadronization and detector acceptance could somewhat shift this mass. If a squark exists at $`133GeV`$ it should be apparent in the $`e^+e^{-}`$ annihilation cross section through the quark-squark-gluino final state . The L3 data shows what is possibly an upward statistical fluctuation in the hadronic cross section in the $`130GeV`$ region.
Since the gluino decays are expected to leave very little missing energy, the quark-squark-gluino final state might also explain an apparent surplus in the visible energy cross section at high $`E_{vis}`$ . In addition, a SUSY symmetry breaking scale of $`133GeV`$ would, in the light gluino scenario, predict stop quarks in the region just above the top and could explain some anomalies in the top quark events and lead to an enhancement in the deep inelastic cross section at high $`Q^2`$ and high hadronic mass . If there is indeed a light gluino and a squark in the 100–135 GeV region, a dip-bump structure should also be found at LEP II in the scaling ratio of the inclusive dijet cross section in $`e^+e^{-}`$ annihilation. $`r(M^2/s)={\displaystyle \frac{s^2d\sigma /dM^2(\sqrt{s}=E_1)}{s^2d\sigma /dM^2(\sqrt{s}=E_2)}}`$ (15) where both $`E_1`$ and $`E_2`$ are above the squark mass. Since the squark decays in the present model into quark plus gluino, the excess should be in the four-jet sample but should not appear in the pair production of two high mass states. In figure 3, we show the predictions for the jet inclusive $`E_T`$ cross section in $`p\overline{p}`$ collisions at the energy $`2TeV`$ relative to the standard model expectations. The curve shows a pronounced peak at $`m_{\stackrel{~}{Q}}/2`$ and is generally $`5`$ to $`10\%`$ below unity due to the slower running of $`\alpha _s`$ in the light gluino case and to the scaling violations in the parton distribution functions. In Figure 4 the scaling ratio $`r(X_T)={\displaystyle \frac{s^{3/2}d\sigma /dE_T(\sqrt{s}=1.8TeV)}{s^{3/2}d\sigma /dE_T(\sqrt{s}=2TeV)}}`$ (16) is plotted for the case of light gluino plus $`133GeV`$ squark and for the case of light gluino but no squark present (non-resonant solid line). The dash-dotted curve gives the prediction of the standard model.
Although we have not attempted to estimate hadronization corrections or resolution smearing (apart from doubling the squark width), we expect that Run II will be sensitive to the predicted peaks if they exist and will therefore either discover or rule out a squark in the $`100GeV`$ mass region in conjunction with a light gluino. With additional information on dijet mass and angular distributions , the Run II measurements are sensitive to a light gluino with a squark up to $`1TeV`$. Since most of the value of SUSY would be lost with squarks so high in mass, Run II should definitively settle the question as to whether the light gluino indications including those referenced in are the first signs of SUSY or merely an amazing string of coincidences attributable to systematic errors. ###### Acknowledgements. This work was supported in part by the US Department of Energy under grant no. DE-FG02-96ER-40967. We acknowledge useful conversations on the Fermilab data with I. Terekhov.
# Multicolor Polarimetry of Selected Be Stars: 1995–98 ## 1 INTRODUCTION Be stars are non-supergiant B-type stars whose spectra have, or had at some time, one or more Balmer lines in emission. The mystery of the Be phenomenon is that the emission, which is well understood to originate from a flattened circumstellar envelope or disk, can come and go episodically on time scales of days to decades. This has yet to be explained as a predictable consequence of stellar evolution theory, although many contributing factors have been discussed, including rapid rotation, radiation-driven stellar winds, nonradial pulsation, flarelike magnetic activity, and binary interaction. For the unfamiliar reader, the review of Be stars by Slettebak (1988) will provide an excellent introduction. Recent optical interferometry combined with spectropolarimetry has directly confirmed that the circumstellar envelopes surrounding Be stars are equatorially flattened (Quirrenbach et al. 1997). Given the small observed polarization of only about 1%, Monte Carlo computer simulations of polarization by electron scattering of the starlight in the circumstellar envelope (Wood, Bjorkman, & Bjorkman 1997) constrain it to be an extremely thin disk, with opening angle (half width in latitude) on the order of 3$`\mathrm{°}`$. This example shows that polarimetry is a very useful observational technique for studying physical properties of the envelopes, with the ultimate purpose of understanding their origin. Toward this end it is of great interest to measure polarization variations and the time scales on which they occur, in order to characterize the physical processes involved. This paper is the fourth status report on an ongoing program of annual polarimetric monitoring of a sample of bright northern Be stars begun in 1985 (McDavid 1994 and references therein). 
The program stars are listed in Table 1, with visual magnitudes taken from Hoffleit and Jaschek (1982) and spectral types and $`v\mathrm{sin}i`$ values from Slettebak (1982). A broadband filter system (Table 2) was chosen to extend the time base of continuous systematic observations for the study of long-term variability, taking advantage of earlier work which dates back to the 1950s. After 10 years on the 0.9 m telescope at McDonald Observatory the project was relocated to a new polarimeter on the 0.4 m telescope at Limber Observatory, where more flexible scheduling allows the study of variability on a greater variety of time scales. Advancements in commercially available instrumentation have made it possible to do so without compromising the quality of the data. ## 2 ANYPOL: A GENERIC LINEAR POLARIMETER AnyPol got its name from the fact that it is completely generic and incorporates no new principles of design or construction. In most respects, including the color system as specified by prescription, it is a simplified and miniaturized version of the McDonald Observatory polarimeter (Breger 1979): a rapidly rotating Glan-Taylor prism as an analyzer, followed by a Lyot depolarizer, Johnson/Cousins $`\mathrm{UBVR}I`$ glass filters, a Fabry lens, and an uncooled S-20 photomultiplier tube. It is very compact, with stepper motors and belt drives for separate analyzer and filter wheel modules which fit into a single main head unit. A simple postviewer using a right-angle prism at one position of the filter wheel is adequate for finding and centering, and three aperture sizes are available in a slide mechanism with a pair of LEDs for backlighting. The control system for AnyPol is based on a 66 MHz 486 PC with plugin multichannel analyzer and stepper motor controller cards, interfaced to the polarimeter head through an electronics chassis unit containing the power supplies and microstepping drivers. 
A point-and-click display serves to control all selectable functions and parameters and also shows 10 s updates of the measurement in progress, including a graph of the data buffer. The mathematical details of the data processing were adapted from the control program for the Minipol polarimeter of the University of Arizona (Frecker & Serkowski 1976). On the 0.4 m telescope at Limber Observatory the photon count rates are comparable to those obtained with the 0.9 m telescope at McDonald Observatory with a neutral density filter of 10% transmission which was necessary for the bright stars ($`V=2`$–5) in the Be star monitoring program. Observational error estimates are derived from the repeatability of multiple independent measurements, and they are in general agreement with checks based on the residuals in the fit to the modulated signal and the uncertainty derived from photon counting statistics. Typical errors are on the order of 0.05% in the degree of polarization and 2$`\mathrm{°}`$ in position angle. One of the main sources of error is a slight variation in the speed of the motor driving the analyzer, which becomes significant at the level of a few hundredths of a percent. All observations are corrected for an instrumental polarization on the order of 0.10%, tracked by repeated observations of unpolarized standard stars from the list of Serkowski (1974). The position angle is calibrated by observing polarized standard stars from the list of Hsu & Breger (1982). ## 3 OBSERVATIONS The targets for the annual monitoring program were selected to include Be stars with a variety of different characteristics and with the longest possible history of continuous observation. They fall into summer and winter groups, so the basic observing strategy is to make the polarization measurements during about one week in summer and one week in winter. The only interruption in the project has been the loss of 1994 while the new polarimeter was under construction. 
One individual measurement consists of 3 cycles through all 5 filters with a 200 s integration time on each filter. If there is a bright Moon, a sky cycle is taken to correct for the background polarization. The result for each single filter is taken to be the mean and standard deviation of the 3 integrations in that filter. (The standard deviation is a more conservative error estimate than the standard deviation of the mean, but it may be more realistic because 3 measurements is a very small sample.) During a typical observing run for this project, about 3 to 5 observations of each program star are collected on different nights during the same week. ## 4 ANALYSIS OF VARIABILITY The first goal in analyzing the data is to identify clear cases of variable polarization over the time scales covered by this installment of the project. The data are presented in Tables 3–12, which begin with the month and year of the observing run and the number of measurements in each filter. The $`q`$ and $`u`$ normalized Stokes parameters, the degree of polarization $`p`$, and the polarization position angle $`\theta `$ are all given as the mean and standard deviation for each run. The last column shows the average error in $`p`$ and $`\theta `$ for a single measurement. In addition to the program Be stars, two polarized standard stars were also observed as checks on the stability of the system: 2H Cam (HD 21291, HR 1035, $`V=4.21`$, B9 Ia) in winter and o Sco (HD 147084, HR 6081, $`V=4.54`$, A5 II) in summer. As in previous papers in this series, summary tables (Tables 13–15) were constructed giving the means and standard deviations of the measurements for each star in each filter over the four annual data sets. The quantities in angled brackets are also four-year averages, and the quantities in rows labeled “GAV” for “grand average” are averages over all five filters. 
Since $`p`$ and $`\theta `$ carry a statistical bias, $`q`$ and $`u`$ are more appropriate quantities for evaluating variability (Clarke & Stewart 1986). Nevertheless, it may provide more physical insight to study $`p`$ and $`\theta `$ to see if the variability is mainly in polarization degree or position angle. In fact, with the limited number of measurements at hand, statistical tests must be applied with caution regardless of which set of parameters is used. We can apply various simple 3$`\sigma `$ criteria to conservatively identify variability, as in previous papers in this series. Looking for night-to-night variability, we see that Tables 3–12 show only two cases in which $`dq`$ or $`du`$ is greater than 3$`dp_i`$: $`du^B`$ of $`\gamma `$ Cas in 01/96 and $`du^B`$ of o And in 06/95. Both cases are negligible, since they occur in only one filter and during only one observing run. For year-to-year variability, Tables 13–15 show not a single case in which $`dq`$ or $`du`$ is greater than 3\<$`dp_i`$\>. The conclusion is that we can demonstrate no statistically significant variability in the polarization of any of the program stars over the latest 4-year time period. Since variable polarization was detected in the two previous reports on this project, it seems advisable to search for an explanation by comparing the Limber Observatory system with the system used at McDonald Observatory. For this purpose Table 13 of McDavid (1994) is reproduced here as Table 16 for direct comparison with the present Table 15. These observations of polarized standard stars show very clearly that the two systems match extremely well. The only outstanding difference is the typical precision of a single observation, which is higher in the McDonald system. This is readily understood, since the McDonald estimates were based on theoretical photon counting statistics, while the Limber estimates are based on experimental scatter in repeated measurements. 
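The conversion from the tabulated Stokes parameters to $`p`$ and $`\theta `$, and the conservative 3$`\sigma `$ screen used above, can be sketched as follows. The simple quadrature debiasing shown here is one common first-order recipe and is included only for illustration; the paper itself evaluates variability directly in $`q`$ and $`u`$:

```python
import numpy as np

def stokes_to_p_theta(q, u, dq, du):
    """Convert normalized Stokes parameters to polarization degree p and
    position angle theta (degrees, in [0, 180)).  p carries a positive
    statistical bias; a crude first-order debias subtracts the combined
    error in quadrature, clipping at zero."""
    p_raw = np.hypot(q, u)
    dp = 0.5 * (dq + du)                         # rough combined error
    p_debiased = np.sqrt(np.maximum(p_raw**2 - dp**2, 0.0))
    theta = 0.5 * np.degrees(np.arctan2(u, q)) % 180.0
    return p_debiased, theta

def is_variable_3sigma(values, single_meas_err):
    """Conservative 3-sigma criterion of the text: flag variability only if
    the sample scatter exceeds 3x the single-measurement error."""
    return np.std(values, ddof=1) > 3.0 * single_meas_err
```

With typical errors of 0.05% in $`p`$, a run-to-run scatter below 0.15% would thus never be flagged, which is why the criterion is described as conservative.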
With a larger value for the error in a single observation, the Limber system is sometimes a less sensitive detector of variability, but it may also give more realistic results. Work is underway to make a complete data set available at the Strasbourg astronomical Data Center (CDS), including all of the annual observations from both McDonald Observatory and Limber Observatory published to date in this series of papers. The times will be given in decimal years for better precision than the current specification of month and year. ## 5 DISCUSSION The 3$`\sigma `$ criterion used here and in the previous papers of this series is very conservative, practically guaranteeing the validity of any detections of variability. This project has shown that such unquestionable detections are by no means common. However, as the data base has now grown to cover more than a decade in time, some patterns of long-term variability are beginning to appear, even though they may not have been previously recognizable at the 3$`\sigma `$ level. What follows is a commentary on the behavior of each individual program star, illustrated with q-u plots and graphs of intrinsic polarization as a function of time in all 5 filters over the entire duration of the monitoring program. In each q-u plot the data points are filled circles, the mean is a cross drawn to the size of the average error of a single measurement, the standard deviation is represented by a dotted ellipse centered on the mean, and three times the average error of a single measurement is represented by a solid ellipse centered on the mean. For each star there is one additional q-u plot showing the mean value for each filter. Note that for all the program stars except 48 Lib it is possible to fit the 5 single-filter data points with a straight line which passes close to the origin. This implies that either the interstellar component of the polarization is small or its position angle is nearly the same as that of the intrinsic component. 
In either case, the straight line fit gives a good approximation to the position angle of the intrinsic polarization. Any elongation of the distribution of data points along that general direction in the single-filter q-u plots is good evidence for intrinsic polarization that is variable in degree but constant in position angle, as is commonly expected for polarization caused by electron scattering in an equatorially flattened axisymmetric disk. This technique makes it possible to identify variable intrinsic polarization even when it is too small to meet the 3$`\sigma `$ test. The graphs of polarization degree and position angle as a function of time for each filter show the intrinsic polarization, calculated by vectorially subtracting the interstellar component determined by Poeckert, Bastien, & Landstreet (1979) or by McLean & Brown (1978), using a Serkowski law of the form $`p_{ISi}=p_{max}\mathrm{exp}[-1.15\mathrm{ln}^2(\lambda _{max}/\lambda _i)]`$ with parameters as summarized in Table 17. ### 5.1 Gamma Cas Gamma Cas provides a good example of how the mean values of the polarization in 5 filters can sometimes be nearly collinear in the q-u plane, so that a straight line fit can give a good approximation to the position angle of the intrinsic polarization (see Figure 1, lower right panel). The individual filter plots all show some evidence for elongation of the data point patterns along this direction (especially in $`U`$ and $`R`$), which is good evidence that there is some real low-level variability. Figure 2, however, shows only slight changes from year to year. The polarization of $`\gamma `$ Cas, and therefore the state of its circumstellar envelope, appears to have been mostly stable since this monitoring program began. 
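The interstellar correction described above, a Serkowski-law degree at a fixed interstellar position angle subtracted vectorially in the q-u plane, can be sketched as follows; the parameter values here are placeholders, not the Table 17 values:

```python
import numpy as np

def serkowski(lam, p_max, lam_max, K=1.15):
    """Interstellar polarization degree from the Serkowski law:
    p_IS(lam) = p_max * exp(-K * ln^2(lam_max / lam)).
    Peaks at p_max when lam = lam_max."""
    return p_max * np.exp(-K * np.log(lam_max / lam) ** 2)

def intrinsic_stokes(q_obs, u_obs, lam, p_max, lam_max, theta_is_deg):
    """Vectorially subtract the interstellar component from the observed
    normalized Stokes parameters at wavelength lam (same units as lam_max).
    The factor of 2 converts position angle to the q-u rotation angle."""
    p_is = serkowski(lam, p_max, lam_max)
    two_theta = np.radians(2.0 * theta_is_deg)
    return q_obs - p_is * np.cos(two_theta), u_obs - p_is * np.sin(two_theta)
```

Applying this at each filter's effective wavelength yields the intrinsic $`q`$ and $`u`$ from which the plotted intrinsic degree and position angle follow.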
### 5.2 Phi Per The polarization of $`\varphi `$ Per has been almost certainly variable from year to year, as may be seen in Figure 3, where the data patterns are clearly elongated along the direction indicated by a straight line fit to the filter means. Some sinusoidal tendencies can be seen in Figure 4, especially in the $`R`$ bandpass. Periodogram analysis suggests a period on the order of 11 to 12 years. If the circumstellar disk is tilted with respect to the binary orbital plane, as advanced by Clarke & Bjorkman (1998), it might be expected to precess with a similar period. However, the almost perfectly constant position angle of the polarization argues against this explanation for the periodic polarization. ### 5.3 48 Per With $`v\mathrm{sin}i`$ = 200 km $`\mathrm{s}^{-1}`$, 48 Per is probably viewed at a relatively low inclination of its rotation axis to the line of sight. This is consistent with its relatively small polarization, even though it is a strong H$`\alpha `$ emitter. It is interesting to see from Figure 6 that the position angle shows stronger variability than the degree of polarization, including a hint of a 4–5 year cycle in the $`B`$ filter. A precessing bar-shaped nonuniformity embedded in the disk would be expected to generate this kind of variability. The absence of a preferred direction in the q-u plots (Figure 5) lends further strength to this interpretation. ### 5.4 Zeta Tau Zeta Tau is one of the most highly polarized and strongly variable of all the program stars. A look at Figure 8 shows a slow and steady rise in the degree of polarization with occasional mild outbursts or local maxima, while the position angle remains constant. The filter averages in Figure 7 are very nearly collinear, and the individual filter plots clearly show alignment in that direction. 
Work is in progress on a possible correlation between continuum polarization and $`V/R`$ variations of the H$`\alpha `$ emission line profile of $`\zeta `$ Tau as a test of the theory of “one-armed” density perturbations of the circumstellar disk (Okazaki 1997) and their effects on the polarization. Hopefully the results will place some constraints on the nature of Be disks and the processes leading to their formation. ### 5.5 48 Lib In Figure 9 the intrinsic position angle of the polarization of 48 Lib is poorly determined because the interstellar component is large and has a very different position angle than the intrinsic component. In Figure 10 the interstellar polarization has been removed, resulting in a better straight line fit that passes acceptably close to the origin to approximate the intrinsic position angle. This angle is indeed favored by the elongations of the q-u data sets in the individual filters. Figure 11 shows that the degree of intrinsic polarization is nearly cyclic with a period on the order of 4–5 years, although somewhat noisy due to short-term variations. As in the case of $`\zeta `$ Tau there is a correlation between the polarization period and that of $`V/R`$ in the H$`\alpha `$ emission line, and it is currently being pursued in the same context. ### 5.6 Chi Oph Chi Oph has the lowest $`v\mathrm{sin}i`$ of all the program stars: 140 km $`\mathrm{s}^{-1}`$. It also has one of the smallest degrees of intrinsic polarization, as would be expected if its rotation axis is only slightly inclined to the line of sight. In Figure 12 the fit of the intrinsic position angle line is somewhat weak, and the individual filter plots have only a slight tendency for alignment. The graphs of Figure 13 show how small the intrinsic polarization is and how poorly determined the position angle is as a result. Apart from an interesting low-amplitude ripple in the degree of polarization, there is little evidence for any significant variability. 
### 5.7 Pi Aqr Figure 14 shows what $`>3\sigma `$ variable polarization of a Be star should look like in the q-u plane. The degree of intrinsic polarization of $`\pi `$ Aqr has changed more by far than that of any other star on the program list. The strong alignment evident in the q-u plots indicates that the position angle is very stable. This is also demonstrated by the position angle graphs of Figure 15. When monitoring began in 1985, $`\pi `$ Aqr had nearly the largest (if not the largest) polarization of any Be star in the sky. Now, 13 years later, that polarization has almost completely disappeared. If we could explain exactly what happened, we would be close to understanding the Be phenomenon itself. Did a dynamic disk lose a source of continuous replenishment? If there were such a source, what might have turned it off? Was the disk a static structure? If so, what prompted it to dissipate? Were there any changes in the underlying star? There are still far more questions than answers. ### 5.8 Omicron And Since 1986 the polarization of o And has been increasing almost uniformly except for two or three minor outbursts and dropouts. The relatively small degree of polarization makes the position angle difficult to determine (see Figure 16), but Figure 17 shows good definition of the intrinsic position angle based on fitting to the filter means. The polarization is well-behaved in the single filter q-u plots, which show that there is real variability by their strong alignment to the intrinsic position angle. This gives a good record of the gradual buildup of a polarizing Be envelope or disk. I am very grateful to Michel Breger and Santiago Tapia for introducing me to the basics of astronomical polarimetry. I also thank Paul Krueger for his skillful machine work in transforming AnyPol from line drawings on paper into the reality of metal. Jon Bjorkman’s constructive suggestions as referee helped to clarify this presentation in many ways. 
Fig.1.-Normalized Stokes parameter plots of the polarization of $`\gamma `$ Cas (see text for explanation of the symbols). Fig.2.-Degree and position angle of the intrinsic polarization of $`\gamma `$ Cas. Fig.3.-Normalized Stokes parameter plots of the polarization of $`\varphi `$ Per (see text for explanation of the symbols). Fig.4.-Degree and position angle of the intrinsic polarization of $`\varphi `$ Per. Fig.5.-Normalized Stokes parameter plots of the polarization of 48 Per (see text for explanation of the symbols). Fig.6.-Degree and position angle of the intrinsic polarization of 48 Per. Fig.7.-Normalized Stokes parameter plots of the polarization of $`\zeta `$ Tau (see text for explanation of the symbols). Fig.8.-Degree and position angle of the intrinsic polarization of $`\zeta `$ Tau. Fig.9.-Normalized Stokes parameter plots of the polarization of 48 Lib (see text for explanation of the symbols). Fig.10.-Normalized Stokes parameter plots of the intrinsic polarization of 48 Lib (see Subsection 5.5 for comment). Fig.11.-Degree and position angle of the intrinsic polarization of 48 Lib. Fig.12.-Normalized Stokes parameter plots of the polarization of $`\chi `$ Oph (see text for explanation of the symbols). Fig.13.-Degree and position angle of the intrinsic polarization of $`\chi `$ Oph. Fig.14.-Normalized Stokes parameter plots of the polarization of $`\pi `$ Aqr (see text for explanation of the symbols). Fig.15.-Degree and position angle of the intrinsic polarization of $`\pi `$ Aqr. Fig.16.-Normalized Stokes parameter plots of the polarization of o And (see text for explanation of the symbols). Fig.17.-Degree and position angle of the intrinsic polarization of o And.
# The quantum Heisenberg antiferromagnet on the square lattice ## Abstract The pure-quantum self-consistent harmonic approximation, a semiclassical method based on the path-integral formulation of quantum statistical mechanics, is applied to the study of the thermodynamic behaviour of the quantum Heisenberg antiferromagnet on the square lattice (QHAF). Results for various properties are obtained for different values of the spin and successfully compared with experimental data. We consider the quantum Heisenberg antiferromagnet on the square lattice (QHAF), whose Hamiltonian reads $$\widehat{\mathcal{H}}=J\underset{<\mathrm{𝐢𝐣}>}{\sum }\widehat{𝑺}_𝐢\cdot \widehat{𝑺}_𝐣;$$ (1) $`J`$ is positive, the sum runs over all the pairs $`<\mathrm{𝐢𝐣}>`$ of neighbouring sites on the square lattice, and the quantum operators $`\widehat{𝑺}_𝐢`$ obey the angular momentum commutation relations $`[S_𝐢^\alpha ,S_𝐣^\beta ]=iS_𝐢^\gamma \delta _{\mathrm{𝐢𝐣}}\epsilon ^{\alpha \beta \gamma }`$ with $`|\widehat{𝑺}_𝐢|^2=S(S+1)`$. Several real compounds are well described, as far as their magnetic behaviour is concerned, by this model with $`S=1/2`$ (La<sub>2</sub>CuO<sub>4</sub>, Sr<sub>2</sub>CuO<sub>2</sub>Cl<sub>2</sub>), $`S=1`$ (La<sub>2</sub>NiO<sub>4</sub>, K<sub>2</sub>NiF<sub>4</sub>) and $`S=5/2`$ (KFeF<sub>4</sub>, Rb<sub>2</sub>MnF<sub>4</sub>), and a consequently rich experimental analysis of the subject has been developed in the last ten years. From the theoretical point of view, an equally rich reservoir of results, from both analytical and numerical approaches, is now available; nevertheless, there are still many open questions, and different conclusions have been recently drawn by several authors . To study the QHAF, we have used the pure-quantum self-consistent harmonic approximation (PQSCHA) , which is a semiclassical method based on the path-integral formulation of quantum statistical mechanics. 
Its main feature is that of exactly describing the classical behaviour and fully taking into account the linear part of the quantum contribution to the thermodynamics of the system, so that the self-consistent harmonic approximation is only used to handle the pure-quantum nonlinear contribution. The fundamental goal of the PQSCHA is that of reducing the evaluation of quantum statistical averages to classical-like expressions involving properly renormalized functions, the fundamental one being the effective Hamiltonian $`\mathcal{H}_{\mathrm{eff}}`$. If $`\beta =T^{-1}`$, $`N`$ is the number of lattice sites, $`𝒔_𝐢`$ is a classical vector on the unitary sphere ($`|𝒔_𝐢|=1`$), and $`d^N𝒔`$ indicates the phase-space integral for a classical magnetic system, the quantum statistical average of a physical observable described by the quantum operator $`\widehat{O}`$ turns out to be $`\widehat{𝒪}=(1/𝒵)\int d^N𝒔\stackrel{~}{𝒪}\mathrm{exp}(-\beta \mathcal{H}_{\mathrm{eff}})`$, where $`𝒵=\int d^N𝒔\mathrm{exp}(-\beta \mathcal{H}_{\mathrm{eff}})`$ is the partition function. Both $`\mathcal{H}_{\mathrm{eff}}`$ and $`\stackrel{~}{𝒪}`$ depend on $`T`$ and $`S`$, and the determination of their explicit form, starting from the expression of the original quantum operators, is indeed the core of the method . The effective Hamiltonian for the QHAF, i.e. relative to Eq.(1), is $$\frac{\mathcal{H}_{\mathrm{eff}}}{J\stackrel{~}{S}^2}=\theta ^4\underset{<\mathrm{𝐢𝐣}>}{\sum }𝒔_𝐢\cdot 𝒔_𝐣+𝒢(t),$$ (2) where $`\stackrel{~}{S}=S+1/2`$, $`t=T/J\stackrel{~}{S}^2`$ and $`𝒢(t)`$ is a uniform term that does not affect the evaluation of statistical averages. The renormalization coefficient $`\theta ^2=\theta ^2(t,S)<1`$ is easily evaluated, for any given $`t`$ and $`S`$, by self-consistently solving two coupled equations . 
From Eq.(2) we see that the quantum effects leave the symmetry of the Hamiltonian unchanged and introduce an energy scaling factor $`\theta ^4`$, naturally defining the effective classical temperature $$t_{\mathrm{eff}}=\frac{t}{\theta ^4(t,S)}$$ (3) that will enter all the PQSCHA results. The partition function, for instance, is $`𝒵=\mathrm{exp}[-\beta 𝒢(t)]𝒵_{\mathrm{cl}}(t_{\mathrm{eff}})`$, where $`𝒵_{\mathrm{cl}}(t_{\mathrm{eff}})`$ is the partition function of the classical model at a temperature $`t_{\mathrm{eff}}`$; $`O_{\mathrm{cl}}(t_{\mathrm{eff}})`$ will hereafter mean the value taken by the quantity $`O`$ in the classical Heisenberg antiferromagnet at a temperature $`t_{\mathrm{eff}}`$. The internal energy per site is easily found to be $`u(t)=\theta ^4(t,S)u_{\mathrm{cl}}(t_{\mathrm{eff}})`$, while the correlation functions $`G(𝐫)\equiv \langle \widehat{𝑺}_𝐢\cdot \widehat{𝑺}_{𝐢+𝐫}\rangle `$, with $`𝐢\equiv (i_1,i_2)`$ and $`𝐫\equiv (r_1,r_2)`$ any vector on the square lattice, turn out to be $$G(𝐫,t)=\stackrel{~}{S}^2\theta _𝐫^4G_{\mathrm{cl}}(𝐫,t_{\mathrm{eff}});$$ (4) the renormalization coefficients $`\theta _𝐫^2=\theta _𝐫^2(t,S)`$ are such that $`\theta _𝐫^2`$ does not depend on $`𝐫`$ for large $`|𝐫|`$, and $`\theta _𝐫^2=\theta ^2`$ for $`|𝐫|=1`$. From Eq.(4), we find the PQSCHA expression for the staggered susceptibility $`\chi \equiv \sum _𝐫(-)^{r_1+r_2}G(𝐫,t)/3`$ to be $$\chi =\frac{1}{3}\left[S(S+1)+\stackrel{~}{S}^2\underset{𝐫\ne 0}{\sum }(-)^{r_1+r_2}\theta _𝐫^4G_{\mathrm{cl}}(𝐫,t_{\mathrm{eff}})\right].$$ (5) The PQSCHA result for the correlation length, defined by the asymptotic expression $`G(𝐫)\propto \mathrm{exp}(-|𝐫|/\xi )`$ for large $`|𝐫|`$, is $`\xi (t)=\xi _{\mathrm{cl}}(t_{\mathrm{eff}})`$, meaning that the correlation length of the QHAF at a temperature $`t`$ equals that of its classical counterpart at a temperature $`t_{\mathrm{eff}}`$. 
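Eqs. (3)-(5) amount to a simple classical-mapping recipe. A sketch, with the renormalization coefficients $`\theta ^2`$ and $`\theta _𝐫^2`$ and the classical correlations $`G_{\mathrm{cl}}`$ treated as known inputs (here they would be supplied as hypothetical toy values, not solutions of the actual self-consistency equations):

```python
S = 0.5
S_tilde = S + 0.5                      # S~ = S + 1/2

def t_eff(t, theta2):
    """Effective classical temperature of Eq. (3): t_eff = t / theta^4."""
    return t / theta2**2

def staggered_chi(G_cl, theta_r2, t, theta2):
    """Staggered susceptibility of Eq. (5).  G_cl(te) returns a dict mapping
    lattice vectors (r1, r2) to classical correlations at temperature te;
    theta_r2[(r1, r2)] holds theta_r^2, so its square gives the theta_r^4
    entering the sum.  The r = 0 term contributes S(S+1)."""
    te = t_eff(t, theta2)
    total = S * (S + 1.0)
    for (r1, r2), g in G_cl(te).items():
        if (r1, r2) != (0, 0):
            total += S_tilde**2 * (-1.0)**(r1 + r2) * theta_r2[(r1, r2)]**2 * g
    return total / 3.0
```

In practice the classical correlations come from Monte Carlo data evaluated at $`t_{\mathrm{eff}}`$, as described next.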
Once the problem has been reduced, by the PQSCHA, to a renormalized classical one, the ingredients needed to obtain the temperature and spin dependent thermodynamic properties of the QHAF are the renormalization coefficients $`\theta _𝐫^2(t,S)`$, whose evaluation is a simple matter of a fraction of a second on a standard PC, and the temperature dependence of the corresponding properties of the classical model, typically obtained by classical Monte Carlo simulations . In the following we will focus our attention on the staggered susceptibility and the correlation length, as experimental data for these quantities are available for various compounds. Such compounds are usually characterized by a crystal structure in which the magnetic ions form parallel planes and mainly interact if belonging to the same plane; a weak interplanar interaction is responsible for a low-temperature 3D transition, and it introduces also an anisotropy term. Keimer et al. have shown that in the classical limit, and to one-loop level, the relation between $`\xi `$ in the presence of the anisotropy term, and $`\xi _0`$ of the fully isotropic model, is given by $`\xi =\xi _0/(1-\alpha \xi _0^2)^{1/2}`$, where $`\alpha `$ is a parameter describing the relative strength of anisotropy; following Lee et al. we shall employ the above formula (with some refinements which lead to replacing $`\alpha `$ with its renormalized counterpart $`\alpha _{\mathrm{eff}}`$ ) to compare our PQSCHA results with experimental data. In Figs. 1 and 2 we present our results for the correlation length of the QHAF for $`S=5/2`$, and for the same quantity and the staggered susceptibility for $`S=1`$. For $`S=5/2`$ the experimental data refer to Rb<sub>2</sub>MnF<sub>4</sub> and KFeF<sub>4</sub> : the anisotropy term is seen to be very well described by the approach described above. For $`S=1`$ QMC and experimental data for the two compounds La<sub>2</sub>NiO<sub>4</sub> and K<sub>2</sub>NiF<sub>4</sub> are reported. 
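The one-loop anisotropy correction quoted above is a one-line formula; a sketch, where $`\alpha _{\mathrm{eff}}`$ is treated as a given input rather than computed:

```python
import math

def xi_anisotropic(xi0, alpha_eff):
    """Correlation length with the easy-axis anisotropy correction of
    Keimer et al.: xi = xi0 / sqrt(1 - alpha_eff * xi0**2).
    Valid only while alpha_eff * xi0**2 < 1; xi diverges as the product
    approaches unity, signaling the anisotropy-driven transition."""
    x = alpha_eff * xi0**2
    if x >= 1.0:
        raise ValueError("alpha_eff * xi0**2 >= 1: outside the formula's range")
    return xi0 / math.sqrt(1.0 - x)
```

For $`\alpha _{\mathrm{eff}}\xi _0^2\ll 1`$ the correction is negligible, which is why it matters only at low temperature where $`\xi _0`$ grows large.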
We may thus confidently conclude that the thermodynamic behaviour of the QHAF is properly described by the PQSCHA, i.e. in terms of a renormalized classical Heisenberg antiferromagnet. The easy-axis anisotropy, sometimes crucial to analyse the low-temperature experimental data, has been considered following Keimer et al. , in the PQSCHA framework. Finally, we would like to recall that our results help to shed some light on the reasons for the failure of the theory based on the non-linear $`\sigma `$ model approach , in describing the QHAF for $`S\ge 1`$ in the temperature region where experimental data are available.
# Mean-Field HP Model, Designability and Alpha-Helices in Protein Structures Preprint NCU/CCS-1998-1010; NCHC-phys-1998-1024; NSC-CTS-981001 ## Abstract Analysis of the geometric properties of a mean-field HP model on a square lattice for protein structure shows that structures with a large number of switch backs between surface and core sites are chosen favorably by peptides as unique ground states. Global comparison of model (binary) peptide sequences with concatenated (binary) protein sequences listed in the Protein Data Bank and the Dali Domain Dictionary indicates that the highest correlation occurs between model peptides choosing the favored structures and those portions of protein sequences containing alpha-helices. The three-dimensional structure of proteins is a complex physical and mathematical problem of prime importance in molecular biology, medicine and pharmacology . It is believed that the folding instruction of a protein is encoded in its amino acid sequence, and from model studies much has been learned about protein structure and folding kinetics . Yet much still remains to be understood. This simple fact is already intriguing: the number of possible globular structures for a peptide of typical length - about 300 amino acids - is practically infinite; the number of proteins whose structures are known empirically or hypothetically is more than a hundred thousand and is growing rapidly with time; the number of classes of native protein structures is about five hundred and is believed unlikely to exceed a thousand in the long run . Numerical simulations based on lattice models have shown that structures of exceptionally high designability - those that attract a large number of protein sequences to conform to them - do exist . Why such structures would emerge is however not well understood. 
Protein folding also has an outstanding temporal feature: the initial collapse to globular shape and the formation of $`\alpha `$-helices are completed in less than $`10^{-7}`$ seconds , while the rest of the folding takes up to ten seconds to complete. In this report, based on results from a mean-field lattice model, we observe that structures with high designability are preponderant in a type of substructure that suggests $`\alpha `$-helices in real proteins, and we explain the reason for this phenomenon. This notion is supported by global comparisons of model structural sequences with (binary) sequences constructed from sets of proteins of known structure: the Protein Data Bank (PDB) and the Dali Domain Dictionary (DDD) . Since the mean-field in the model represents the hydrophobic potential that is known to cause the initial collapse of a peptide to a globular shape, the results may explain why the initial collapse and the formation of $`\alpha `$-helices occur essentially simultaneously and rapidly, and are temporally separated from other slower folding processes that are driven by far-neighbor inter-residual interactions. In the minimal model for protein folding, the HP model of Dill et al. , the 20 kinds of amino acids are divided into two types, hydrophobic and polar. This reduces a peptide chain of length $`N`$ to a binary “peptide” $`𝐩=(p_1,p_2,\mathrm{\dots },p_N)`$, where $`p_i=`$0 (1) if the amino acid at the $`i`$th position on the chain is polar (hydrophobic). The structure of a protein is represented by a self-avoiding path compactly embedded on a lattice, and the energy associated with a peptide conforming to a particular structure is computed from the contact energies between the nearest-neighbor residues that are not adjacent along the peptide. 
A set of well-tested contact energies derived from proteins of known structure is the Miyazawa-Jernigan matrix, which is however well approximated by an effective mean-field potential expressing the hydrophobicities of the residues. In the binary form of this approximation the Hamiltonian of the HP model is reduced to that of a mean-field model: $$H(𝐩,𝐬)=-𝐩\cdot 𝐬=\frac{1}{2}(|𝐬-𝐩|^2-𝐩^2-𝐬^2)$$ (1) where $`𝐬=(s_1,s_2,\ldots ,s_N)`$ is a binary “structure” converted from a self-avoiding path with the assignment $`s_i=1`$ (0) if the $`i`$th site is a core (surface) site on the lattice. Empirical observation suggests that protein folding proceeds in two steps: a first stage of fast collapse and formation of alpha-helices (and probably some not properly folded beta-sheets), presumably caused mainly by hydrophobic interactions under polymeric constraints, followed by a second stage of slow annealing, caused by far-neighbor inter-residue interactions, that gives the final native state. Since Eq. (1) is a local, mean-field approximation that leaves out residual - i.e., left over from mean-field averaging - far-neighbor interactions, it can be relied on to account for only the first stage. We denote by $`𝒮`$ the set of all distinct structures $`𝐬`$ on $``$ and by $`𝒫`$ the set of all possible peptides p of length $`N`$. For each p the selection of the s giving the minimum $`H`$ defines a mapping from $`𝒫`$ to $`𝒮`$. There are p’s that are mapped to more than one s, or are mapped to s’s that correspond to more than one self-avoiding path. Such p’s are removed from $`𝒫`$, and their target s’s are removed from the competition for high designability, because a peptide that does not conform to a unique structure at all times is not expected to survive the evolutionary selection process. It has also been shown that not admitting degenerate states in a coarse-grained model is similar to removing peptides that have low foldability in a finer-grained model.
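A minimal numeric check of the mean-field energy and of the Hamming-distance identity in Eq. (1), with the sign convention that hydrophobic-core contacts lower the energy (helper names are ours, for illustration only):

```python
# Mean-field HP energy of Eq. (1) for binary peptide p and structure s.
# p[i] = 1 (0) for a hydrophobic (polar) residue; s[i] = 1 (0) for a
# core (surface) lattice site.
def hp_energy(p, s):
    return -sum(pi * si for pi, si in zip(p, s))   # H = -p.s

def hamming(p, s):
    return sum(pi != si for pi, si in zip(p, s))   # |s - p|^2 for binaries

p = (1, 0, 1, 1, 0, 0)
s = (1, 1, 0, 1, 0, 1)
lhs = hp_energy(p, s)
rhs = (hamming(p, s) - sum(p) - sum(s)) / 2        # (|s-p|^2 - p^2 - s^2)/2
```

For binary vectors $`|𝐬-𝐩|^2`$ is exactly the Hamming distance, which is why the two expressions agree term by term.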
(Many states that are degenerate in the present coarse-grained model would, in a model with higher energy resolution, be states of different symmetries with nearly degenerate energies. The energy landscape for the ground state among these would likely contain deep local minima, and a peptide choosing such a ground state would likely be a poor folder.) The mapping then partitions what remains in $`𝒫`$ into classes, with all the p’s in each class mapped to a single s, whose designability is simply the number of p’s in the class. For this mapping the right-hand side of Eq. (1) reduces to being proportional to $`|𝐬-𝐩|^2`$, the Hamming distance between the two points p and s in an $`N`$-dimensional unit hypercube. The designability of an s is then the number of hypercube vertices lying in the Voronoi polytope around it in this hypercube. Excepting those removed for degeneracy, $`𝒫`$ is just the set of all the vertices of the unit hypercube. In comparison, owing to the constraints of compactness and self-avoidance imposed on paths on $``$, points in $`𝒮`$ are sparsely distributed in the hypercube, so that $`|𝒮|\ll |𝒫|`$. For example, on a $`6\times 6`$ square lattice, the number of elements in $`𝒫`$ (including those to be removed for degeneracies) is $`2^{36}=68719476736`$, while the number in $`𝒮`$ is 30408 (of which only 18213 have no path degeneracy). If the points in $`𝒮`$ were uniformly distributed in the hypercube, the Voronoi polytope around each s would be the same and every s would have the same designability. But owing to boundary effects and geometric constraints imposed on the compact paths on $``$, the distribution of s’s in $`𝒮`$ cannot be uniform; those s’s residing in regions of the hypercube that are of especially low density (in s’s) will then have especially high designability. We now examine how geometric constraints cause the emergence of s’s with especially high designabilities, by first attempting to replace the constraints by a set of explicit algebraic “rules”.
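The mapping and the removal of degenerate peptides can be sketched on a toy example. The three "allowed structures" below are invented for illustration (they are not lattice structures); every length-4 peptide is mapped to its minimum-energy structure, ties are discarded, and designability is the resulting class size:

```python
from itertools import product

# Toy designability computation: map every binary peptide of length 4
# onto the structure minimizing H = -p.s, discard peptides with a
# degenerate ground state, and count the peptides per structure.
structures = [(1, 1, 0, 0), (1, 0, 1, 0), (0, 0, 1, 1)]  # hypothetical

designability = {s: 0 for s in structures}
for p in product((0, 1), repeat=4):
    energies = [-sum(a * b for a, b in zip(p, s)) for s in structures]
    winners = [s for s, e in zip(structures, energies) if e == min(energies)]
    if len(winners) == 1:                 # keep unique ground states only
        designability[winners[0]] += 1
```

In this miniature case 9 of the 16 peptides are degenerate and removed; the remaining 7 are shared unevenly among the three structures, which is the same counting that defines designability on the real lattice.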
Consider a structure in $`𝒮`$ to be a chain of 0’s and 1’s linked by $`N-1`$ links of three types - 0-0, 1-0 or 0-1, and 1-1 - with $`n_{00}`$, $`n_{10}`$ and $`n_{11}`$ being the numbers of such links, respectively. The structure is partitioned by the 1-0 links into $`n_{10}+1`$ “islands” of contiguous 1’s or 0’s. (Peptides in $`𝒫`$ may be similarly described, but the only constraint on any p is that the total number of 0’s and 1’s be $`N`$.) For $``$ being a square lattice with side $`L`$, two of the most important constraining rules are: ($`i`$) a single 0 may only occur at an end of a path; ($`ii`$) an isolated single 1 may only either occur at, or be one 0-island away from, an end of a path. Space does not allow us to give more than one further, relatively simple example (with $`L>4`$): for a path having the pattern $`𝐬=(1\ldots 1)`$ (both ends of the path are 1-sites), $`2n_{00}+n_{10}=8L-8`$ and $`2\le n_{10}\le 4L-12`$. It is in fact extremely difficult, if not impossible, to exhaust the complete set of such rules needed to reduce $`𝒫`$ to $`𝒮`$. For our purpose it suffices to identify a large enough set of rules which reduces $`𝒫`$ to a set $`𝒮^{}`$ that is sufficiently close to $`𝒮`$ for us to understand the origin and characteristics of structures of high designability. In Fig. 1(a) the numbers of elements in $`𝒫`$ (open circles), $`𝒮^{}`$ (solid circles) and $`𝒮`$ (open triangles) on a $`6\times 6`$ lattice are plotted against $`n_{10}`$. The total number of elements under the curve for $`𝒫`$ gives the total number of sites in the hypercube. $`𝒮^{}`$ is slightly greater than $`𝒮`$ but is much smaller than $`𝒫`$. (The boundary of $`𝒮^{}`$ owes its roughness to the incompleteness of the set of rules used to construct it.) It is seen that whereas for $`𝒫`$ the maximum possible value of $`n_{10}`$ is 32, for $`𝒮`$ and $`𝒮^{}`$ the corresponding maximum is much smaller: $`n_{max}=14`$.
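The link counts $`n_{00}`$, $`n_{10}`$, $`n_{11}`$ and the island decomposition used in these rules are easy to compute for any binary string; a sketch (helper names are ours):

```python
# Link-type counts and island count for a binary structure, as used in
# the constraint rules above. n10 counts both 1-0 and 0-1 links.
def link_counts(s):
    n00 = n10 = n11 = 0
    for a, b in zip(s, s[1:]):
        if a == b:
            n00, n11 = n00 + (a == 0), n11 + (a == 1)
        else:
            n10 += 1
    return n00, n10, n11

def n_islands(s):
    # maximal runs of contiguous equal symbols; always n10 + 1
    return 1 + sum(a != b for a, b in zip(s, s[1:]))

s = (1, 1, 0, 0, 1, 1, 0, 0)    # a (1100) repeat of length N = 8
n00, n10, n11 = link_counts(s)
```

Note that the three counts always sum to $`N-1`$, the total number of links, and the island count equals $`n_{10}+1`$ as stated in the text.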
As $`n_{10}`$ approaches $`n_{max}`$ from below, the number of elements in $`𝒮^{}`$ decreases rapidly, whereas that in $`𝒫`$ increases toward a maximum. It happens that in the hypercube the smallest Hamming distance between two structures is approximately proportional to the difference in their respective $`n_{10}`$ numbers. This is evident in Fig. 1(b), where the smallest Hamming distance is plotted against the difference in $`n_{10}`$ for all the pairs among the 30408 binary structures on a $`6\times 6`$ lattice, and is consistent with earlier results in which $`x(p)`$ (the degree of clustering of hydrophobic residues) is analogous to $`n_{10}`$. Since allowed structures with $`n_{10}`$ values close to $`n_{max}`$ live in a region of the hypercube that is also most heavily populated by peptides, it follows that they will on average have large Voronoi polytopes, and hence are most likely to have the highest designabilities. This is substantially borne out by the results shown in Figs. 1(c) and (d), computed for the allowed structures on the $`4\times 7`$ and $`6\times 6`$ lattices, respectively, where average designability is plotted against $`n_{10}`$. The average designability does not peak exactly at $`n_{10}=n_{max}`$ but rather at $`n_{10}`$ values just below $`n_{max}`$. Why this should be so is not yet clearly understood. Structures with maximum $`n_{10}`$ are the most constrained and are very few in number, so that otherwise secondary details might have a larger effect on their designabilities. Preference for large $`n_{10}`$ has also been observed on other 2D and 3D lattices. The relation between high designability and sparse population is further illustrated in Figs. 1(e) and (f), where the number of structures within a Hamming distance $`R_H`$ of a given structure is plotted against the designability of that structure.
In (e), where $`R_H=5`$, it is seen that structures with high designability have far fewer near neighbors than structures with low designability. In (f), where $`R_H=25`$, it is seen that all structures have approximately the same large numbers of near and far neighbors. Now something interesting emerges. A structure with its $`n_{10}`$ (almost) maximized, but not allowed to have single 1’s or 0’s except at its ends (rules ($`i`$) and ($`ii`$)), will have a preponderance of the 4-mer (1100) in its interior, so that large stretches of it will have the form $`(\ldots 11001100\ldots )`$, which suggests the linear structure of $`\alpha `$-helices on a lattice. A corollary is that structures with core-to-surface ratios close to unity are favored by designability. This implies a diameter of approximately 10 residues for an ideal protein, which - since in three dimensions such a diameter corresponds to of order $`10^3`$ residues - is consistent with the typical size of 300 to 1000 amino acids in natural proteins. Note that the selection of structures with maximized $`n_{10}`$ is a consequence of the geometric property of the Hamiltonian (1) in hyperspace and does not depend on the specifics of a lattice. That larger $`n_{10}`$’s are favored is qualitatively consistent with the conclusion drawn from recent studies of folding kinetics that optimal structures are also minimally frustrated.
To see if what we have observed so far has anything to do with real proteins, we compare five sequences, $`𝒫`$<sub>1-5</sub>, each a concatenation of a set of real-protein or ($`6\times 6`$) lattice binary peptides: $`𝒫_1`$, (a) the representative non-redundant 2886 proteins (sequence similarity smaller than 90%) culled from the 9257 entries in the PDB, or (b) the even less redundant set of 1394 entries of protein domains from the DDD, converted to binary sequences on the basis of hydrophobicity; $`𝒫_2`$, the sections of $`𝒫_1`$ that fold into $`\alpha `$-helices; $`𝒫_3`$, the sections of $`𝒫_1`$ that fold into $`\beta `$-sheets; $`𝒫_4`$, the 27006 peptides in $`𝒫`$ mapped to the 15 structures of highest designability; $`𝒫_5`$, the 24134 peptides in $`𝒫`$ mapped to the 1545 structures of lowest designability. Interestingly, the H/P ratios of the five sequences are all very close to 1; the percentage of hydrophobic residues contained in each is respectively $`50.00\%`$, $`49.75\%`$, $`56.18\%`$, $`50.43\%`$ and $`49.05\%`$ for $`𝒫_1`$ (from PDB) through $`𝒫_5`$. Fig. 1(a) shows that neither the distribution of peptides in $`𝒫`$<sub>4</sub> (solid triangles) nor that in $`𝒫`$<sub>5</sub> (open squares) vs. $`n_{10}`$ is random, which corroborates earlier results. In particular, in $`𝒫`$<sub>5</sub> ($`𝒫`$<sub>4</sub>) peptides with larger (smaller) $`n_{10}`$’s are slightly favored over those with smaller (larger) $`n_{10}`$’s. Let $`f_i^{(l)}(m)`$ be the frequency, normalized to that of a sequence with an H/P ratio of unity (if the word has $`n_H`$ H’s and the actual frequency of the word is $`f`$, then the normalized frequency is $`(n_H/n_P)^{n_H}f`$), of the $`m`$th binary word of length $`l`$ occurring in sequence $`𝒫_i`$, and let $`F_i^{(l)}(m)=(f_i^{(l)}(m)-\overline{f}_i^{(l)})/Z`$ be the normalized frequency distribution function, where $`\overline{f}_i^{(l)}=2^{-l}\sum_m f_i^{(l)}(m)`$ is the mean frequency and $`Z=(\sum_m(f_i^{(l)}(m)-\overline{f}_i^{(l)})^2)^{1/2}`$ is the norm.
The relations $`\sum_m F_i^{(l)}(m)=0`$ and $`\sum_m(F_i^{(l)}(m))^2=1`$ hold. The pairwise overlaps $`O_{ij}^{(l)}=\sum_{m=1}^{2^l}F_i^{(l)}(m)F_j^{(l)}(m);i=1,2,3;j=4,5`$, which measure correlations between $`𝒫_i`$ and $`𝒫_j`$, are given for $`l`$=4-14 in Figs. 2(a) and (b), where the real protein sequences used are from the PDB and the DDD, respectively. The two sets of overlaps are qualitatively similar. It is seen that $`𝒫_4`$ ($`𝒫_5`$) is positively (negatively) correlated with $`𝒫_1`$ and $`𝒫_2`$. For all values of $`l`$ the strongest correlation occurs between the model sequence of high designability ($`𝒫_4`$) and the real protein sequence rich in $`\alpha `$-helices ($`𝒫_2`$). The sequence of high designability is poorly correlated with the sequence rich in $`\beta `$-sheets ($`𝒫_3`$) and, as expected, the strongest anti-correlation occurs between the two model sequences of high ($`𝒫_4`$) and low ($`𝒫_5`$) designability. Even though the favoring of surface-core repeats by peptides folding into high-designability structures is most likely not lattice specific (provided the H/P ratio is close to one), the particular choice of the (1100) repeat is characteristic of a square lattice. For instance, on a hexagonal lattice the predominant repeat would more likely be (10) rather than (1100). There is some justification for selecting a square lattice over a hexagonal one, because in real proteins the backbone does not favor small-angle bends. On the other hand, real proteins do not live on lattices, and the equivalent of (10) repeats does occur in real proteins where $`\beta `$-sheets are exposed to solvent. Thus the low correlation between $`𝒫_3`$ and $`𝒫_4`$ is to some extent an artifact of the square lattice, and it may be better to interpret the (1100) repeats on a square lattice as representing $`\alpha `$-type and some $`\beta `$-type repeats (but not the latter’s foldings) in real proteins.
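The normalized frequency distribution and the overlap can be sketched as below. One caveat: we read the prefactor $`(n_H/n_P)^{n_H}`$ as having the sequence-level H and P counts in the base and the word's own number of H's in the exponent; this is our reading of the text, and for the H/P ≈ 1 sequences used there the factor is near unity anyway.

```python
import math
from itertools import product

# Sketch of the word-frequency overlap O_ij^(l) defined above.
def F_dist(seq, l):
    words = [''.join(w) for w in product('01', repeat=l)]
    counts = {w: 0 for w in words}
    for i in range(len(seq) - l + 1):
        counts[seq[i:i + l]] += 1
    r = seq.count('1') / seq.count('0')            # sequence H/P ratio
    f = [r ** w.count('1') * counts[w] for w in words]
    fbar = sum(f) / 2 ** l                         # mean frequency
    z = math.sqrt(sum((x - fbar) ** 2 for x in f)) # the norm Z
    return [(x - fbar) / z for x in f]

def overlap(F1, F2):
    return sum(a * b for a, b in zip(F1, F2))

# Two short made-up binary sequences, just to exercise the machinery.
seq_a = '1100110100111001011100'
seq_b = '1010010110100101101001'
Fa, Fb = F_dist(seq_a, 2), F_dist(seq_b, 2)
```

By construction each $`F`$ sums to zero and has unit norm, so the self-overlap is 1 and the pairwise overlap lies in [-1, 1], exactly like a correlation coefficient.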
Our study suggests that the rough formation of $`\alpha `$-helices and some $`\beta `$-sheets, and the collapse of proteins into globular shapes, are primarily determined by hydrophobicity. Since only the mean-field part of the inter-residue interaction is included in the model, this implies that details of the residual inter-residue interaction, which determine the final shape of the native state, are not important at this stage. It has been pointed out that structures of high designability in a lattice model with a two-letter amino acid alphabet may not be especially designable for larger alphabets. Although the situation may be different on a lattice larger than the $`5\times 5`$ lattice used in that work, it remains to be verified whether our findings persist in finer-grained and more realistic models. If they do, then we can better understand why the formation of $`\alpha `$-helices and the collapse happen on a similar time scale, of the order of $`10^{-7}`$ s, why the formation of $`\beta `$-sheets takes somewhat longer (about $`10^{-6}`$ s), and why these time scales are so much shorter than the time needed to complete the rest of the folding ($`10^{-1}`$ s to $`10`$ s). This scenario is in any case consistent with the finding of a recent statistical analysis of experimental data: local contacts play the key role in fast processes during folding. This work is partly supported by grants NCHC88-CP-A001 to ZYS from the National Center for High-Performance Computing, and NSC87-M-2112-008-002 to HCL and NSC87-M-2112-007-004 to BLH from the National Science Council (ROC). HCL thanks Simon Fraser University for hospitality in the Summer of 1998, during which part of the paper was written.
# NEW NEAR-INFRARED SPECTROSCOPY OF THE HIGH REDSHIFT QUASAR B 1422+231 AT 𝑧 = 3.62

## 1. INTRODUCTION

Since near-infrared (NIR) spectroscopy of high-redshift ($`z>2`$) quasars provides information about their rest-frame optical spectroscopic properties, it is now possible to systematically compare the spectroscopic properties (e.g., excitation, ionization, and chemical abundances) of high-$`z`$ and low-$`z`$ quasars. Although the first NIR spectroscopic observations of high-$`z`$ quasars were made nearly two decades ago (Hyland, Becklin, & Neugebauer 1978; Puetter et al. 1981; Soifer et al. 1981), high-quality NIR spectra have been obtained for only $``$ 10 high-$`z`$ quasars to date (e.g., Carswell et al. 1991; Hill et al. 1993; Elston et al. 1994; Kawara et al. 1996 \[hereafter K96\]; Taniguchi et al. 1997a; Murayama et al. 1998; see also Taniguchi et al. 1997b). However, despite the relatively small number of objects observed, several very interesting properties of high-$`z`$ quasars have emerged. In particular, Hill et al. (1993) and Elston et al. (1994; hereafter ETH) suggested that unusually strong optical Fe II emitters may be common in the high-$`z`$ universe ($`2<z<3.4`$); to date, five out of eight high-$`z`$ quasars observed have very strong optical Fe II emission with EW(Fe II)/EW(H$`\beta `$)$`\gtrsim 1`$ (see Murayama et al. 1998). By comparison, a much smaller fraction of far-infrared (FIR) selected AGN ($`L_{\mathrm{FIR}}\gtrsim 10^{11}L_{\mathrm{\odot }}`$) have strong optical Fe II emission (cf. Lípari et al. 1993). Recently, we obtained NIR spectra of the radio-loud, flat-spectrum, high-$`z`$ quasar B 1422+231 (Patnaik et al. 1992; Lawrence et al. 1992) using the Mayall 4 m telescope at Kitt Peak National Observatory (KPNO) (K96). Although this quasar is at $`z=3.62`$, its optical magnitude is sufficiently bright ($`m_\mathrm{r}`$ = 15.5; Yee & Bechtold 1996), due to gravitational lensing (Patnaik et al. 1992; Lawrence et al.
1992), to allow it to be studied by NIR spectroscopy. The NIR spectra show that the flux ratio $`F(\text{Fe II }\lambda \lambda \text{4434–4684})/F(\mathrm{H}\beta )`$ is much smaller than that of the other high-$`z`$ quasars (K96), and in fact is similar to those of radio-loud, flat-spectrum, low-$`z`$ quasars with “normal” optical Fe II emission. This suggested that high-$`z`$ quasars may exhibit a range of values of $`F(\text{Fe II }\lambda \lambda \text{4434–4684})/F(\mathrm{H}\beta )`$ similar to what has been observed for low-$`z`$ quasars. In order to confirm this result, we have obtained new NIR spectra using the University of Hawaii (UH) 2.2 m telescope. In this paper, we present our new NIR spectroscopy of B 1422+231 and compare it with our previous measurements.

## 2. OBSERVATIONS AND DATA REDUCTION

We observed B 1422+231 on 1996 April 7 (UT) using the K-band spectrograph (KSPEC; Hodapp et al. 1996) at the Cassegrain focus (f/31) of the UH 2.2 m telescope in combination with the UH tip-tilt system (Jim et al. 1998). The cross-dispersed echelle design of KSPEC provided simultaneous coverage of the entire 1–2.5 µm wavelength region. The projected pixel size of the HAWAII 1024 $`\times `$ 1024 array was 0″.167 along the slit and $``$ 5.6 Å at 2 µm along the dispersion direction. We used a 0″.96 wide slit oriented east-west and centered on the intensity peak of component B (Patnaik et al. 1992) of B 1422+231 (see Figure 1). However, comparing the two images given in Figure 1, we find that light from component B makes up only 59 % of the total measured flux, the remaining 41 % being contamination from components A and C. The flux in our spectra is therefore 1.7 times the true flux of component B.
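The factor of 1.7 quoted above is just the reciprocal of component B's share of the aperture flux; a one-line sanity check (the flux value is hypothetical, only the 59 % fraction is from the text):

```python
# Aperture contamination: component B contributes 59% of the measured
# flux, so the measured spectrum overestimates B by 1/0.59 ~ 1.7.
b_fraction = 0.59
overestimate = 1.0 / b_fraction          # the "1.7 times" factor
measured_flux = 2.5e-16                  # hypothetical measured flux
true_b_flux = measured_flux * b_fraction # recover component B alone
```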
Thirty exposures, each of 180 sec integration, taken under photometric conditions, were obtained by shifting the position of the object along the slit at intervals of 5″ between integrations. The total integration time was 5400 sec. An A-type standard star, HD 106965 (Elias et al. 1982), was observed for flux calibration. Another A-type star, HD 136754 (Elias et al. 1982), was also observed before and after observing B 1422+231 to correct for atmospheric absorption. Spectra of an incandescent lamp and an argon lamp were taken for flat-fielding and wavelength calibration, respectively. Typical widths of the spatial profiles of the standard star spectra were $``$ 0″.5 (FWHM) throughout the night. Note that our previous NIR spectroscopy of this quasar at KPNO was obtained using a 1″.44 slit under 1.4 – 2.1 arcsec seeing conditions (K96). Data reduction was performed with IRAF (the Image Reduction and Analysis Facility, distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation) using standard procedures as outlined in Hora & Hodapp (1996). Sky and dark counts were removed by subtracting the average of the preceding and following exposures, and the resulting frame was then divided by a normalized dome flat. The target quasar was not bright enough to trace its position in each frame with sufficient accuracy. Therefore, we first fit the spectral positions of the standard star spectra with a third-order polynomial function which properly traces the echelle spectrum. These fits were then applied to the quasar spectra. Using this procedure, we extracted the quasar spectra with an aperture of 3″, which was determined as the typical width at which the flux falls to $``$ 10 % of the peak along the spatial profile of the standard star.
In order to subtract the sky emission, we used the data just adjacent to the 3″ aperture. The wavelength scale of each extracted spectrum was calibrated to an accuracy of 18 km s<sup>-1</sup> at 1.1 µm, 20 km s<sup>-1</sup> at 1.6 µm, and 19 km s<sup>-1</sup> at 2.0 µm, based on both the argon emission lines of the calibration lamp and the telluric OH emission lines. The spectral resolutions (FWHM) measured from the argon lamp spectra were $``$ 500 km s<sup>-1</sup> at 1.1 µm, $``$ 450 km s<sup>-1</sup> at 1.6 µm, and $``$ 500 km s<sup>-1</sup> at 2.0 µm. Finally, the spectra were median combined in each band. Atmospheric absorption features were removed using the spectra of the A-type star HD 136754, since the near-infrared spectra of A-type stars are nearly featureless apart from hydrogen recombination lines, which makes them well suited to this correction. However, since A-type stars do have hydrogen recombination absorption lines (e.g., the Brackett series in the $`H`$ and $`K`$ bands and the Paschen series in the $`I`$ and $`J`$ bands), we removed these features using Voigt profile fitting before applying the correction. In order to check that this procedure worked, we also applied the same atmospheric correction to the spectra of M-type stars whose data were obtained on the same night. Comparing our corrected spectra of the M-type stars with their published spectra (Lançon & Rocca-Volmerange 1992), we found that our correction procedure works well. Finally, in order to calibrate the flux scale, we used the spectrum of the standard star HD 106965 (A2, $`K`$=7.315) divided by a 9000 K blackbody spectrum, which fits the $`JHKL`$ magnitudes of the standard (Elias et al. 1982) with only 1.2 % deviation. Photometric errors were determined to be $`<`$ 10 % over all observed wavelengths. ## 3.
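The flux-calibration step (dividing the observed standard-star spectrum by a 9000 K blackbody to obtain the instrument response) can be sketched as follows. Only the 9000 K temperature is from the text; the wavelength points and star fluxes below are made up for illustration:

```python
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck_lambda(wl_m, T):
    """Blackbody spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    x = H * C / (wl_m * KB * T)
    return 2.0 * H * C ** 2 / wl_m ** 5 / math.expm1(x)

# Response = observed standard-star spectrum / 9000 K blackbody; a
# target spectrum is then calibrated by dividing by this response.
wavelengths = [1.1e-6, 1.6e-6, 2.0e-6]       # I/H/K-band points, metres
observed_std = [5.0e-13, 3.1e-13, 1.6e-13]   # hypothetical fluxes
bb = [planck_lambda(w, 9000.0) for w in wavelengths]
response = [o / b for o, b in zip(observed_std, bb)]
```

Since a 9000 K blackbody peaks near 0.32 µm, its radiance falls monotonically across the 1–2.5 µm region, which is why the response curve absorbs the instrument signature rather than the stellar one.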
RESULTS AND DISCUSSION Figure 3 shows the spectra of B 1422+231 (red line) in the $`IHK`$ bands together with both our previous measurement at KPNO (blue line; K96) and the Large Bright Quasar Survey (LBQS) composite spectrum shifted to $`z=3.62`$ (black dashed line; Francis et al. 1991). Since the efficiency of KSPEC in the $`J`$ band is not high, we have not used the $`J`$-band data in this paper. Although our new measurement was made with a narrower slit (0″.96 wide) than that used by K96 (1″.44 wide), the $`H`$\- and $`K`$-band fluxes are slightly higher than those of the previous measurement. K96 tried to correct carefully for the effect of seeing on the relative flux calibration among the spectral bands, because they could take only one band spectrum at a time. However, the time variation of the seeing during their observations made it difficult to perform this correction very accurately. Since our new $`I`$\- to $`K`$-band NIR spectra were obtained simultaneously, our new measurement should be more reliable than that of K96. Our new $`I`$-band spectrum clearly detects C III\]$`\lambda `$1909 emission. Further, some unidentified emission features at 2000 Å and 2080 Å, as well as the dip at 2200 Å in the rest frame, which appear in the average spectrum of LBQS quasars (Francis et al. 1991), are also seen. The presence of \[O III\]$`\lambda `$5007 emission has also been confirmed. The continuum flux of component B between 1330 Å and 1380 Å, which was obtained by Impey et al. (1996) 13 months before our observations, was adopted as the continuum level on the short-wavelength side of our spectra. We applied the factor of 1.7 to this continuum level in order to account for the contaminating light from components A and C in our spectra (see previous section).
The continuum level on the long-wavelength side of our spectra was chosen by fitting a power-law continuum plus an Fe II template simultaneously, so as to minimize the residual between 4450 Å and 4750 Å and beyond 5100 Å. We used the Balmer continuum template that was generated to approximate the emission-line strengths seen in 0742+318 (see Figure 3d of Wills et al. 1985). The Fe II emission profile of the very strong Fe II-emitting low-ionization BAL quasar PG 0043+039 (Turnshek et al. 1994) was used as the Fe II template. Because a single template gave a relatively poor fit over the entire wavelength range, two Fe II templates were used: one shortward of 3000 Å and the other longward of 3000 Å. The power-law continuum that we derive is given by $`f_\nu \propto \nu ^{-0.88}`$. This spectral index is steeper than the $`f_\nu \propto \nu ^{-0.54}`$ power law derived in K96. Yee & Bechtold (1996) report that B 1422+231 had become brighter by 0.12 mag during 13 months (see also Kundić et al. 1997), but variability of 0.12 mag would change the spectral index to $`-0.88\pm 0.08`$ at most. Therefore, it is unlikely that the difference in spectral index between K96 and our current work is due to variability inherent in this quasar. Thus, we conclude that the change in the derived spectral index is due to our improved absolute NIR spectrophotometry, made possible by the simultaneous observation of the $`IHK`$ bands with KSPEC. Figure 3 shows the rest-frame spectrum of B 1422+231 with the power law subtracted, together with the best-fit synthetic spectrum composed of Fe II and Balmer continuum emission as well as other broad lines. The fluxes, equivalent widths, and line widths of the detected emission lines are summarized in Table 1 together with the previous measurements of K96. Our new data show that the ratios Fe II(UV)/H$`\beta `$ and Fe II(optical: $`\lambda \lambda `$3500–6000)/H$`\beta `$ are higher than those of K96 by factors of 1.6 and 3.3, respectively.
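The ±0.08 bound on the index change can be reproduced from the 0.12 mag variability: if the flux at one end of the fitted range varies by Δm while the other end stays fixed, the index shifts by Δα = Δm / (2.5 log10(ν1/ν2)). A sketch, where the rest-frame 1350 Å and 5000 Å anchor points are our assumption rather than a value given in the text:

```python
import math

def spectral_index(f1, f2, nu1, nu2):
    """alpha for f_nu ∝ nu^alpha through two (frequency, flux) points."""
    return math.log(f1 / f2) / math.log(nu1 / nu2)

def dalpha_from_dmag(dmag, nu1, nu2):
    """Index shift if the flux at nu1 changes by dmag magnitudes."""
    return dmag / (2.5 * abs(math.log10(nu1 / nu2)))

C = 2.998e8
nu_uv, nu_opt = C / 1350e-10, C / 5000e-10       # assumed anchor points
dalpha = dalpha_from_dmag(0.12, nu_uv, nu_opt)   # ~ 0.08
```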
These differences are mainly due to the adoption of the different power-law continuum described above. However, our new Fe II(optical)/H$`\beta `$ ratio for B 1422+231 is still within the observed range for low-$`z`$, radio-loud quasars (Boroson & Green 1992; see Taniguchi et al. 1997a; Murayama et al. 1998). The \[O III\]$`\lambda `$5007/H$`\beta `$ ratio is nearly the same in the two measurements. We also note that there appears to be evidence for excess emission at $`\lambda <`$ 2100 Å in the rest frame (see the second panel of Figure 3). Our new measurements yield a flux ratio of Fe II(UV)/Fe II(optical) $``$ 8.0. Since Wills et al. (1985) give a range of 4 $`<`$ Fe II(UV)/Fe II(optical) $`<`$ 12 for low-$`z`$ quasars, B 1422+231 is typical of low-$`z`$ quasars in this respect. We also obtain a flux ratio of Fe II(optical: $`\lambda \lambda `$4434–4684)/H$`\beta `$ $``$ 0.53, which is significantly smaller than the range of values ($``$ 1.5 – 2) found by Hill et al. (1993) for quasars with $`z`$ 2 – 2.5. This demonstrates that it is perhaps dangerous to attempt a measurement of the iron abundance using solely the optical Fe II emission features, as suggested by Wills et al. (1985). Finally, we speculate about the formation epoch of the host galaxy of B 1422+231. Our new measurement has confirmed that the Fe II(total)/H$`\beta `$ ratio for B 1422+231 is higher than normal with respect to what is found for low-$`z`$ quasars. Since low-$`z`$ quasars are believed to be associated with the nuclei of massive galaxies, their chemical abundances are expected to be higher than or roughly equal to solar. Therefore, the observed higher Fe II(total)/H$`\beta `$ ratio suggests that the iron abundance of B 1422+231 is at least comparable to the solar value. If this is the case, it is expected that the majority of the iron would come from Type Ia supernovae.
Yoshii, Tsujimoto, & Nomoto (1996) derived $``$ 1.5 Gyr for the lifetime of SN Ia progenitors from an analysis of the O/Fe and Fe/H abundances in solar-neighborhood stars. If the Fe enrichment started 1.5 Gyr after the onset of the first epoch of star formation, the host galaxy of B 1422+231 would have formed at $`z\sim 9`$ or earlier for $`q_0`$ = 0.0 and $`H_0`$ = 75 km s<sup>-1</sup> Mpc<sup>-1</sup> (K96). We are very grateful to the staff of the UH 2.2 m telescope. In particular, we would like to thank Andrew Pickles for his technical support and assistance with the observations. This work was financially supported in part by Grants-in-Aid for Scientific Research (Nos. 07044054 and 09640311) from the Japanese Ministry of Education, Science, Sports, and Culture and by the Foundation for Promotion of Astronomy, Japan. TM acknowledges support from a Research Fellowship of the Japan Society for the Promotion of Science for Young Scientists. This research has made use of the NASA/IPAC Extragalactic Database (NED) and the NASA Astrophysics Data System Abstract Service.

## Comments on continuum fitting

In our spectral fitting procedure, we have used a global continuum, determined from the rest-frame UV through the optical. However, local continua have often been used to measure the optical Fe II/H$`\beta `$ ratio for low-$`z`$ quasars because of the lack of rest-frame UV spectra (e.g., Boroson & Green 1992). Since the adopted continuum affects the measured emission-line fluxes (see Murayama et al. 1998), we examine such differences in the flux measurement for the case of B 1422+231. In Figure 4, we show the spectral fitting results for both the global-continuum and local-continuum cases. The local continuum fit tends to give lower fluxes for the emission lines of interest; the measured fluxes for H$`\gamma `$+\[O III\]$`\lambda `$4363, H$`\beta `$, \[O III\]$`\lambda `$5007, and optical Fe II are given in Table 1.
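The z ~ 9 estimate can be checked with the age–redshift relation of an empty (q0 = 0) universe, where the scale factor grows linearly in time so that t(z) = (1/H0)/(1 + z); a sketch:

```python
# Age of a q0 = 0 (empty) universe at redshift z: t(z) = (1/H0)/(1+z).
KM_PER_MPC = 3.0857e19   # km in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one gigayear

def age_gyr(z, h0=75.0):
    t_hubble = KM_PER_MPC / h0 / SEC_PER_GYR   # 1/H0 in Gyr, ~13.0
    return t_hubble / (1.0 + z)

# Fe-enrichment lead time between formation at z ~ 9 and the observed
# epoch z = 3.62 should be ~1.5 Gyr, the SN Ia progenitor lifetime.
lead_time = age_gyr(3.62) - age_gyr(9.0)
```

With H0 = 75 km s<sup>-1</sup> Mpc<sup>-1</sup> the Hubble time is about 13 Gyr, giving ages of roughly 2.8 Gyr at z = 3.62 and 1.3 Gyr at z = 9, i.e. the required 1.5 Gyr interval.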
Comparing these fluxes with those measured using a global fit to the continuum, we find that the optical Fe II flux based on the local continuum is four times lower than that determined using a global continuum fit, although the \[O III\]$`\lambda `$5007 flux is nearly the same for both the local and global continuum fits. Therefore, we suggest that previous measurements of optical Fe II/H$`\beta `$ ratios for low-$`z`$ quasars and Seyfert nuclei may be underestimated by a factor of a few. We also note that a local continuum fit should be used when one wishes to compare new results directly with previously published values.
# Absence of a Finite-Temperature Melting Transition in the Classical Two-Dimensional One-Component Plasma

## Abstract

Vortices in thin-film superconductors are often modelled as a system of particles interacting via a repulsive logarithmic potential. Arguments are presented to show that the hypothetical (Abrikosov) crystalline state for such particles is unstable at any finite temperature against proliferation of screened disclinations. The correlation length of crystalline order is predicted to grow as $`\sqrt{1/T}`$ as the temperature $`T`$ is reduced to zero, in excellent agreement with our simulations of this two-dimensional system.

PACS numbers: 64.70.-p, 74.60.-w

It has been commonly assumed for many years now that for the physically important case of particles moving in two dimensions and interacting with each other via a repulsive logarithmic potential (a situation sometimes called the two-dimensional one-component plasma problem) one would have the usual phases expected in the KTHNY scenario. This scenario describes two-dimensional melting as a defect-mediated phenomenon (Halperin, Nelson and Young) and is based on ideas of Kosterlitz and Thouless. It supposes that the crystalline phase - a triangular lattice - melts at a continuous transition into an hexatic liquid due to the proliferation of dislocations. The hexatic liquid becomes an ordinary liquid at temperatures which permit the creation of disclinations. This scenario is well established for particles with short-range interactions, but we will argue that particles interacting via a logarithmic potential behave completely differently. For them we can show that the crystalline state is unstable at any temperature against the proliferation of (screened) disclinations, and as a consequence the system stays in the liquid state down to arbitrarily low temperatures.
The ground state of the system is of course crystalline; the correlation length of short-range crystalline order is predicted to grow as $`\sqrt{1/T}`$ as the temperature $`T`$ approaches zero. Our numerical simulations reported here confirm this behavior. The one-component plasma problem is of considerable physical significance as it relates to the thermodynamics of vortices – “the particles” – in thin-film superconductors. For thin enough films the screening length of the intervortex potential may exceed the transverse dimensions of the film, which makes the logarithmic potential an accurate approximation to the intervortex interaction. Most papers on thin-film superconductors assume that the vortices have a freezing transition at low enough temperatures (for a review see Ref. ), although clear experimental evidence for this is lacking. For a contrary view, however, see and references cited therein. For particles interacting with a repulsive potential a device is needed to stop them escaping to infinity. In numerical studies of two-dimensional melting the most commonly used device is periodic boundary conditions. Unfortunately the use of this boundary condition, with either short-range interactions or with the logarithmic interaction, produces an apparently first-order transition between the crystal and liquid states rather than the KTHNY scenario. (For a review of early work on short-range interactions see ; for some more recent work see Refs. or .) This is probably a finite-size effect: studies on systems with over 60,000 particles indicate that the van der Waals loops associated with the apparent first-order transition shrink in these very large systems , . We ourselves have found that placing the particles on the surface of a sphere is very effective for short-range interactions : no van der Waals loops occur with this topology, and the results obtained even with modest numbers of particles are in excellent agreement with expectations based on KTHNY theory.
As a consequence the numerical work which we are reporting in this paper has been carried out for the two-dimensional system represented by the surface of a sphere. The ground-state configuration of the particles on the sphere has to contain at least 12 disclinations (5-fold rings) by Euler’s theorem. We have made extensive studies of these ground states and discovered that for larger systems the disclinations are screened by lines of dislocations , . These defects within the crystalline state seem to overcome the problem of the spurious first-order transition induced by finite-size effects when periodic boundary conditions are employed, and so enable one to get results closer to those obtaining in the thermodynamic limit. It is noteworthy that an early simulation of the one-component plasma on the surface of a sphere did not find a finite-temperature phase transition either, in agreement with our results. The Hamiltonian for particles moving on the surface of the sphere and interacting via a logarithmic potential is $$H=-J\sum _{i<j}\mathrm{ln}(|\mathbf{r}_i-\mathbf{r}_j|/R),$$ (1) where $`\mathbf{r}_i`$ is the position of the $`i`$th particle on the surface of the sphere, $`R`$ is the radius of the sphere and $`J`$ is a measure of the magnitude of the repulsive forces between the particles. The key feature which distinguishes the logarithmic potential from other potentials is that all the stationary states of $`H`$ have zero dipole moment . This was proved by noting that the force on the $`i`$th particle due to all the others must be directed radially for any equilibrium configuration, since otherwise the particle would move along the sphere, so $$\sum _{j\ne i}\frac{\mathbf{r}_i-\mathbf{r}_j}{|\mathbf{r}_i-\mathbf{r}_j|^2}=f_i\mathbf{r}_i.$$ (2) By multiplying both sides by $`\mathbf{r}_i`$ one can show that $`f_i=(N-1)/2R^2`$, where $`N`$ is the total number of particles. By summing Eq. (2) over all $`i`$ and using the fact that $`\mathbf{r}_i-\mathbf{r}_j`$ is antisymmetric in $`i`$ and $`j`$, it follows that the dipole moment, $`\sum _i\mathbf{r}_i`$, is zero.
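As a quick numerical illustration (our own sketch, not the authors' code), one can relax a small number of particles on the unit sphere under the logarithmic repulsion and check both facts derived above: the radial force coefficient equals $`f_i=(N-1)/2R^2`$ for every configuration on the sphere, and the dipole moment vanishes at a stationary point. Here we take $`J=R=1`$ and use simple projected gradient descent.

```python
import numpy as np

# Relax N particles on the unit sphere under H = -sum_{i<j} ln|r_i - r_j|
# (J = R = 1) and verify (i) f_i = (N-1)/2 from Eq. (2), and (ii) that the
# dipole moment of a (near-)stationary configuration is zero.

rng = np.random.default_rng(0)
N = 40
r = rng.normal(size=(N, 3))
r /= np.linalg.norm(r, axis=1, keepdims=True)

def forces(r):
    # F_i = sum_{j != i} (r_i - r_j) / |r_i - r_j|^2  (repulsive, J = 1)
    d = r[:, None, :] - r[None, :, :]
    d2 = np.einsum('ijk,ijk->ij', d, d)
    np.fill_diagonal(d2, np.inf)          # exclude the j = i term
    return np.einsum('ij,ijk->ik', 1.0 / d2, d)

# projected gradient descent: keep only the tangential force component,
# then renormalize back onto the sphere
for _ in range(20000):
    F = forces(r)
    F_tan = F - np.einsum('ik,ik->i', F, r)[:, None] * r
    r = r + 1e-3 * F_tan
    r /= np.linalg.norm(r, axis=1, keepdims=True)

f_i = np.einsum('ik,ik->i', forces(r), r)   # radial coefficients of Eq. (2)
dipole = r.sum(axis=0)
print(f_i.min(), f_i.max(), (N - 1) / 2)    # all three coincide
print(np.linalg.norm(dipole))               # ≈ 0 after relaxation
```

Note that $`f_i=(N-1)/2`$ holds for *any* configuration on the unit sphere (each pair term equals $`1/2`$ there); stationarity is what forces the dipole to vanish.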
No other potential has this feature, and it has important consequences. Our basic contention is that thermally excited screened disclinations will destroy crystalline order for particles interacting with each other via a logarithmic potential. The energy cost of an unscreened disclination is $`O(N)`$ and such a disclination will not be thermally created in a crystalline state. However, disclinations can be “screened” by a cloud of dislocations, and it turns out that the energy cost of such screened disclinations can be much smaller: of $`O(\mathrm{ln}N)`$ for non-logarithmic potentials and $`O(1/N)`$ for the logarithmic potential. The phenomenon of screening of a disclination by dislocations is well known , , and is illustrated in Fig. 1. The figure shows a five-fold coordinated ring – a disclination – screened by five lines of dislocations, where the dislocations are spaced a distance $`cl_0`$ apart; $`l_0`$ is the lattice spacing. The strain field of the central disclination can be largely cancelled by that arising from the lines of dislocations. The resulting strain field from the dislocations along a line may be approximated at large distances by that which arises from a positive and a negative disclination at each end of the line, with a “disclination” charge of size $`q_s=1/c`$ (where $`q_s=+1`$ for a fivefold disclination). If we consider lines of dislocations with $`c=5`$, and five such lines as in Fig. 1, then the strain field of the central disclination is exactly screened away. As shown in Ref. , the contribution of the disclinations at the other ends of the lines can be made arbitrarily small by allowing the spacing of the dislocations to increase with distance $`r`$ from the disclination as $`c(r)=5+(S/l_0)g(r/S)`$ with the condition that $`g(0)=0`$; $`S`$ is the size, or scale, of the screened disclination.
Then the residual charge associated with the screened disclination can be made as small as $`O(l_0/S)`$ but, as we shall show, must be as small as $`O((l_0/S)^2)`$ for the special case of the logarithmic potential. First let us review some features of two-dimensional continuum elasticity theory . Small strains $`u_{ij}(\mathbf{r})`$ are related to the stress field by Hooke’s law, $`\sigma _{ij}=B\delta _{ij}u_{kk}+2\mu (u_{ij}-\delta _{ij}u_{kk}/2)`$, where $`B`$ is the bulk modulus and $`\mu `$ is the shear modulus. In the presence of topological defects it is convenient to introduce the Airy stress function, $`\chi `$, defined by $`\sigma _{ij}=ϵ_{ik}ϵ_{jl}\partial _k\partial _l\chi `$. A fivefold disclination is defined by a change in bond angle of $`2\pi /6`$ when a path encircles the defect. Dislocations are defined by their Burgers vector density field $`\mathbf{b}(\mathbf{r})`$, which for the dislocations in Fig. 1 points perpendicular to the line upon which they lie. The stress field is related to the densities of disclinations $`s(\mathbf{r})`$ and dislocations via $$\frac{1}{Y_2}\nabla ^4\chi =s(\mathbf{r})-ϵ_{ik}\partial _kb_i(\mathbf{r})\equiv \stackrel{~}{s}(\mathbf{r}),$$ (3) where $`Y_2=4B\mu /(B+\mu )`$. For a single disclination at the origin, as in Fig. 1, $`s(\mathbf{r})=(2\pi /6)\delta (\mathbf{r})`$. $`\stackrel{~}{s}(\mathbf{r})`$ can be regarded as a total disclination density, made up of a “free” disclination density $`s(\mathbf{r})`$ and a “polarization” contribution $`-ϵ_{ik}\partial _kb_i`$ from dislocations. The energy of the screened disclination expressed in Fourier space is $$E=\frac{1}{2}Y_2\int \frac{d^2q}{(2\pi )^2}\frac{1}{q^4}\stackrel{~}{s}(\mathbf{q})\stackrel{~}{s}(-\mathbf{q}).$$ (4) The expectation for non-logarithmic interaction potentials is that the screening can at best make the Fourier transform of $`\stackrel{~}{s}`$ take the form $`\stackrel{~}{s}(\mathbf{q})=ql_0f(qS)`$, corresponding to an amount of “disclination charge” within a region of radius $`S`$ around the disclination of $`O(l_0/S)`$. Substituting this form for $`\stackrel{~}{s}(\mathbf{q})`$ into Eq.
(4), one finds that in a system of $`N`$ particles the energy of the screened disclination would be of $`O(\mathrm{ln}N)`$ – which explains why screened disclinations would be unlikely to modify the KTHNY scenario for non-logarithmic interactions. The change in the particle density due to the presence of the topological defects is, when Fourier transformed, given by $$\mathrm{\Delta }\rho (\mathbf{q})=-nu_{ii}(\mathbf{q})=\frac{n}{2B}q^2\chi (\mathbf{q}),$$ (5) where $`n`$ is the number density of the particles. A feature of the logarithmic potential is that for it the bulk modulus $`B`$ is not constant but diverges at small wavevector: $`B(q)=2\pi Jn^2/q^2`$. The shear modulus is well behaved: $`\mu =Jn/8`$. Equations (4) and (5) can still be used if the replacements $`Y_2\to 4\mu `$ and $`B\to B(q)`$ are employed. At small wavevector it then follows for the logarithmic potential (using $`\chi (\mathbf{q})=4\mu \stackrel{~}{s}(\mathbf{q})/q^4`$ from Eq. (3), together with $`B(q)=2\pi Jn^2/q^2`$ and $`\mu =Jn/8`$) that $$\mathrm{\Delta }\rho (\mathbf{q})=\frac{1}{8\pi }\stackrel{~}{s}(\mathbf{q}).$$ (6) Only for the logarithmic case does a finite small-$`q`$ limit exist for the density change $`\mathrm{\Delta }\rho `$ associated with a screened disclination. We now exploit the fact that all stationary states of $`H`$ have vanishing dipole moment to show that for the case of logarithmic interactions between the particles the screening of the disclination is more efficient than for non-logarithmic interactions. The Fourier transform of the particle density is defined by $$\rho (\mathbf{q})=\frac{1}{N}\sum _ie^{i\mathbf{q}\cdot \mathbf{r}_i}.$$ (7) (Formally, as our system is the surface of a sphere rather than a plane, we should use spherical harmonics rather than plane waves, as was done in Ref. , but the distinction is unimportant for our argument.) Consider now a state which differs from the ground state by the presence of a screened disclination of size $`S`$. The density difference $`\mathrm{\Delta }\rho (\mathbf{q})`$ of the two states must vanish as $`q\to 0`$ like $`O(q^2)`$ ; (if one of the states had had a dipole moment then one can see from expanding the exponential in Eq.
(7) that $`\mathrm{\Delta }\rho `$ would have been of $`O(q)`$). Eq. (6) then implies that $`\stackrel{~}{s}(\mathbf{q})=q^2l_0^2f(qS)`$. By Fourier transforming $`\stackrel{~}{s}(\mathbf{q})`$ one can then show that the “disclination charge” within a distance $`S`$ of the center of the disclination is of $`O((l_0/S)^2)`$. Substituting this form for $`\stackrel{~}{s}`$ into Eq. (4), one finds that the energy of the screened disclination is of order $`J(l_0/S)^2`$. By increasing the scale $`S`$ it can be made arbitrarily small. At a temperature $`T`$ a region of linear extent $`\xi `$, where $`J(l_0/\xi )^2=T`$, will be unlikely to contain a disclination, and so $`\xi `$ will be a measure of the short-range crystalline order present in the system at temperature $`T`$; $`\xi `$ diverges as $`\sqrt{1/T}`$ as $`T\to 0`$. This means that by investigating the structure factor numerically one can find from the widths of its peaks the correlation length $`\xi `$, and its temperature dependence will tell us whether the arguments above are valid. We studied, using molecular dynamics (specifically a velocity Verlet algorithm ), a system of $`N`$ particles confined to move on the surface of a sphere and interacting with the logarithmic potential. Reduced units were used, i.e. $`m=k_\mathrm{B}=R=J=1`$, where $`m`$ is the mass of a particle and $`k_\mathrm{B}`$ is the Boltzmann constant. The acceleration $`\mathbf{a}_i`$ of the $`i`$th particle equals $`\mathbf{f}_i/m`$, where $`\mathbf{f}_i`$ is the force produced by the other particles on the $`i`$th particle. After a small time interval $`\delta t`$ the position of the particle will be $`\mathbf{x}_i=\mathbf{r}_i(t)+\mathbf{v}_i(t)\delta t+\frac{1}{2}\mathbf{a}_i(t)\delta t^2`$, where $`\mathbf{v}_i(t)`$ is the velocity of the particle. In general $`\mathbf{x}_i`$ will not lie on the surface of the sphere.
The $`i`$th particle is brought back to the surface by acting on it with a fictitious force $`-2\lambda _i\mathbf{r}_i(t)`$, where $$\lambda _i=\frac{\mathbf{r}_i(t)\cdot \mathbf{x}_i-\sqrt{\left[\mathbf{r}_i(t)\cdot \mathbf{x}_i\right]^2-R^2\left[\left|\mathbf{x}_i\right|^2-R^2\right]}}{R^2\delta t^2}.$$ (8) Then the velocity Verlet algorithm updates the particle positions using the equation $$\mathbf{r}_i(t+\delta t)=\mathbf{x}_i-\lambda _i\mathbf{r}_i(t)\delta t^2.$$ (9) We chose $`\delta t=0.005(mR^2/J)^{1/2}`$. The velocities of the particles were chosen from a Boltzmann distribution appropriate to the temperature $`T`$ and were re-selected at equally spaced time intervals . The system was equilibrated at high temperatures, then the temperature was slowly reduced. For each temperature we determined the structure factor, which is related to the Fourier transform of the pair correlation function $`h(r)`$ by : $$S(q)=1+2\pi \rho R^2\int _0^\pi h(R\theta )\mathrm{sin}\theta \,J_0(qR\theta )\,d\theta ,$$ (10) where $`J_0`$ is the Bessel function of zeroth order. This adaptation of the conventional relation to particles moving on the surface of the sphere is valid provided $`q`$ is of $`O(1)`$ and not of $`O(1/R)`$. Peaks in the structure factor grow as the temperature is reduced, at wavevectors $`q`$ corresponding to the reciprocal lattice vectors $`|\mathbf{G}|`$ of the triangular lattice expected for the ground state. The correlation length, $`\xi `$, is the inverse of the width of the first peak of the structure factor. To determine it, we fitted the first peak to a Lorentzian curve. We studied systems of 1442 and 2252 particles. The simulation time for each temperature was 100,000 $`\delta t`$. In Fig. 2 $`\xi `$ is plotted against $`\sqrt{1/T}`$. The vertical line is drawn where other authors found a first-order melting transition using periodic boundary conditions . The predicted behavior $`\xi \propto \sqrt{1/T}`$ is clearly seen in Fig. 2.
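A minimal sketch (ours, not the authors' code) of the constrained position update of Eqs. (8)–(9): after the free Verlet step to $`\mathbf{x}_i`$, the multiplier $`\lambda _i`$ pulls each particle back onto the sphere along $`\mathbf{r}_i(t)`$, so that $`|\mathbf{r}_i(t+\delta t)|=R`$ holds exactly.

```python
import numpy as np

# One constrained velocity Verlet position update, Eqs. (8)-(9):
# x_i is the free Verlet position; lambda_i (the smaller root of the
# quadratic constraint |x_i - lambda_i r_i dt^2| = R) projects it back.

R, dt = 1.0, 0.005

def constrained_step(r, v, a):
    x = r + v * dt + 0.5 * a * dt**2                      # free Verlet step
    rx = np.einsum('ik,ik->i', r, x)                      # r_i(t) . x_i
    xx = np.einsum('ik,ik->i', x, x)                      # |x_i|^2
    lam = (rx - np.sqrt(rx**2 - R**2 * (xx - R**2))) / (R**2 * dt**2)  # Eq. (8)
    return x - lam[:, None] * r * dt**2                   # Eq. (9)

rng = np.random.default_rng(1)
r = rng.normal(size=(5, 3))
r = R * r / np.linalg.norm(r, axis=1, keepdims=True)      # start on the sphere
v = 0.1 * rng.normal(size=(5, 3))
a = rng.normal(size=(5, 3))

r_new = constrained_step(r, v, a)
print(np.linalg.norm(r_new, axis=1))                      # every norm equals R
```

Taking the smaller root in Eq. (8) makes the correction vanish as $`\delta t\to 0`$, so the constrained trajectory tracks the free one to leading order.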
When the temperature was reduced to $`T=0.01`$ the correlation length for the system of 1442 particles reached the system size and stopped growing as the temperature was reduced further. However, the simulation with 2252 particles indicates that this levelling off is just a finite-size effect. True equilibration in numerical studies of two-dimensional melting phenomena is always problematic and may be the cause of the scatter in Fig. 2. The apparent absence of a finite-temperature phase transition for particles interacting with a logarithmic potential cannot be attributed to the fact that we have done the simulation for the two-dimensional geometry represented by the surface of a sphere. We can demonstrate this by simulating particles interacting with a $`1/r^{12}`$ potential, also moving on the surface of a sphere: a finite-temperature melting transition of KTHNY character is then seen. We had already found indications of such a melting transition , but the numbers of particles used in that reference were rather small. Using the Verlet algorithm described above we are able to simulate much larger systems, e.g. 5882 particles. The calculations for the short-range potentials run faster than with the logarithmic potential, as it is possible to use look-up tables of nearest neighbors. In the KTHNY picture, the correlation length has the following density dependence along an isotherm for a $`1/r^{12}`$ potential : $$\xi (\rho )\propto \mathrm{exp}\left(\frac{b}{\left((\rho _c/\rho )^6-1\right)^\nu }\right),$$ (11) where $`\rho `$ is the density and $`\nu =0.36963\dots `$. In Fig. 3, $`\mathrm{ln}\xi `$ is plotted versus $`\left((1/\rho ^6)-1\right)^{-0.36963}`$. We have assumed that $`\rho _c=1`$, a value obtained by other authors working at the temperature which we used, $`T=1`$. The slope of the straight line is $`1`$, which corresponds well with KTHNY predictions. Note again that finite-size effects cut off the growth of $`\xi `$ when it is of $`O(R)`$.
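The Lorentzian width extraction described above can be sketched as follows (our own illustration on synthetic data, not the authors' analysis code): the reciprocal of a Lorentzian peak is a quadratic in $`q`$, so the fit reduces to linear least squares, and the correlation length is the inverse half-width.

```python
import numpy as np

# Extract xi = 1/kappa from a structure-factor peak by fitting
# S(q) = A / ((q - q0)^2 + kappa^2). Fitting 1/S(q), which is quadratic
# in q, turns this into an ordinary polynomial least-squares problem.

xi_true, q0, A = 25.0, 6.0, 3.0
q = np.linspace(5.0, 7.0, 81)
S = A / ((q - q0) ** 2 + (1.0 / xi_true) ** 2)     # synthetic first peak

# 1/S = q^2/A - (2 q0/A) q + (q0^2 + kappa^2)/A
c2, c1, c0 = np.polyfit(q, 1.0 / S, 2)
kappa = np.sqrt(c0 / c2 - (c1 / (2 * c2)) ** 2)    # recovered half-width
print(1.0 / kappa)                                 # ≈ xi_true = 25
```

With noisy simulation data the same three coefficients would come from the least-squares fit, and the finite-size cutoff shows up as `kappa` saturating near $`1/R`$.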
Thus simulations on the sphere do produce, for a non-logarithmic potential, the expected crystalline phase. In summary, we have shown that thermal excitation of screened disclinations removes the crystalline phase of the vortex system at any non-zero temperature. Numerical simulations have confirmed our prediction that as the temperature is lowered the correlation length of short-range crystalline order should grow as $`\xi \propto \sqrt{1/T}`$. APG would like to acknowledge a grant and financial support from the Dirección General de Investigación Científica y Técnica, project number PB 96/1118, and EPSRC support under grant GR/K53208. We have had useful discussions with H. Bokil.
no-problem/9812/math9812123.html
ar5iv
text
# On random sections of the cube

## 1 Introduction

The principal object in this paper is the expected number of $`j`$-dimensional faces (in short, $`j`$-faces) of a random $`k`$-dimensional central section (in short, $`k`$-section) of the $`n`$-cube $`B_{\infty }^n=[-1,1]^n`$ in $`\mathbb{R}^n`$. We denote this number by $`f(j,k,n)`$. The normalized rotation-invariant measure on the set $`G_{n,k}`$ of all $`k`$-dimensional subspaces of $`\mathbb{R}^n`$ provides the probabilistic framework. Section 2 contains a calculation of the expected number of vertices of a random $`k`$-section of the $`n`$-cube. The result is: $$f(0,k,n)=2^k\left(\genfrac{}{}{0pt}{}{n}{k}\right)\sqrt{\frac{2k}{\pi }}\int _0^{\infty }e^{-kt^2/2}\gamma _{n-k}(tB_{\infty }^{n-k})\,dt,$$ (1) where $`\gamma _{n-k}`$ denotes the $`(n-k)`$-dimensional Gaussian probability measure. In $`\mathrm{\S }3`$ we derive a lower bound for $`f(j,k,n)`$ for every $`1\le j<k<n`$. The main result is: $$\frac{f(0,k-j,n)}{f(j,k,n)}<\sqrt{\frac{2}{\pi }}\left(\frac{j(k-j)}{n-k+j}\right)^{1/2}\int _0^{\infty }e^{-\frac{j(k-j)}{n-k+j}t^2/2}\gamma _j(tB_{\infty }^j)\,dt.$$ The lower bound for $`f(j,k,n)`$ derived from this inequality, combined with (1), leads in some cases to asymptotically best possible results. For example, in $`\mathrm{\S }3`$ we deduce from it the following asymptotic formula, for fixed integers $`1\le l<m`$: $$f(n-m,n-l,n)\sim \frac{(2n)^{m-l}}{(m-l)!},\text{as}n\to \infty .$$ (2) The notation $`a_n\sim b_n`$ means: $`a_n/b_n\to 1`$ as $`n\to \infty `$. (2) can be interpreted as follows: the probability that a random fixed-codimensional subspace of $`\mathbb{R}^n`$ intersects a fixed-codimensional face of the $`n`$-cube tends to $`1`$ as $`n\to \infty `$. The formula (2) itself follows also from the work of Affentranger and Schneider. (See remark 1 of section 3 below).
In , they found a formula for the expected number $`\text{E}(f_j(\mathrm{\Pi }_kP))`$ of $`j`$-faces of an orthogonal projection of an $`n`$-polytope $`P`$ onto a $`k`$-dimensional random subspace. Formula (5) of reads as follows: $$\text{E}(f_j(\mathrm{\Pi }_kP))=f_j(P)-2\sum _{s\ge 0}\sum _{F\in \mathcal{F}_j(P)}\sum _{G\in \mathcal{F}_{k+1+2s}(P)}\beta (F,G)\gamma (G,P).$$ (3) Here $`\mathcal{F}_j(P)`$ denotes the set of $`j`$-faces of $`P`$, and $`f_j(P)=\text{card}\,\mathcal{F}_j(P)`$. $`\beta (F,G)`$ denotes the internal angle (, p. 297) of the face $`G`$ at its face $`F`$, and $`\gamma (G,P)`$ — the external angle (, p. 308) of $`P`$ at its face $`G`$. It is shown in that (3) implies that if $`0\le j<k`$ are fixed integers, then as $`n\to \infty `$, $$\text{E}(f_j(\mathrm{\Pi }_kT^n))\sim \frac{2^k}{\sqrt{k}}\left(\genfrac{}{}{0pt}{}{k}{j+1}\right)\beta (T^j,T^{k-1})(\pi \mathrm{log}n)^{(k-1)/2}.$$ (4) Here $`T^n`$ stands for the regular $`n`$-simplex. In a very recent work, , Böröczky, Jr. and Henk showed that (3) implies the same asymptotic formula (4) also for $`\text{E}(f_j(\mathrm{\Pi }_kB_1^n))`$, where $`B_1^n`$ is the regular cross-polytope. In addition, they found an asymptotic formula for the internal angles $`\beta (T^j,T^{k-1})`$ when $`k/j^2\to \infty `$. Therefore if $`j`$ is fixed, $`k`$ is much larger than $`j^2`$ and $`n`$ much larger than $`k`$, then explicit estimates for $`\text{E}(f_j(\mathrm{\Pi }_kB_1^n))`$ are available. See for more details. Explicit asymptotic formulas for $`\text{E}(f_j(\mathrm{\Pi }_kT^n))`$ were established independently by Vershik and Sporyshev (), when $`j,k`$ are both proportional to $`n`$ and $`n\to \infty `$. A simple duality argument shows that $$\text{E}(f_j(\mathrm{\Pi }_kB_1^n))=f(k-j-1,k,n).$$ Choose $`j=k-1`$ in (4). Applying the result for $`B_1^n`$, one has $$f(0,k,n)=\text{E}(f_{k-1}(\mathrm{\Pi }_kB_1^n))\sim \frac{2^k}{\sqrt{k}}(\pi \mathrm{log}n)^{(k-1)/2},\text{as }n\to \infty .$$ (5) The last asymptotic formula follows also from (1).
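The $`(\pi \mathrm{log}n)^{(k-1)/2}`$ growth in (5) reflects the extreme-value behavior of $`n`$ independent Gaussians, made precise in Lemma 2.3 below. As a quick numerical illustration (ours, not from the paper), the expected maximum of $`n`$ independent $`|g_i|`$, computed from $`F_n(t)=\mathrm{erf}(t/\sqrt{2})^n`$, grows like $`\sqrt{2\mathrm{log}n}`$ — slowly, and with slow convergence of the ratio:

```python
import math

# E max_{1<=i<=n} |g_i| = ∫_0^∞ (1 - F_n(t)) dt with F_n(t) = erf(t/√2)^n;
# computed by Simpson's rule on [0, 15], where the integrand is negligible.
def expected_max(n, m=4000, b=15.0):
    h = b / m
    def g(t):
        return 1.0 - math.erf(t / math.sqrt(2)) ** n
    s = g(0.0) + g(b) + sum((4 if i % 2 else 2) * g(i * h) for i in range(1, m))
    return s * h / 3

for n in (10**2, 10**4, 10**6):
    print(n, expected_max(n), math.sqrt(2 * math.log(n)))
```

The second and third columns approach each other only logarithmically, which is why explicit finite-$`n`$ estimates (rather than pure asymptotics) are of interest here.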
In fact, if $`\{g_i\}_{i=1}^m`$ are independent $`N(0,1)`$ (that is, with mean $`0`$ and variance $`1`$) Gaussian variables, then $`\gamma _m(tB_{\infty }^m)`$ coincides with the probability of the event $`\{\mathrm{max}_{1\le i\le m}|g_i|\le t\}`$. This probabilistic interpretation allows a straightforward evaluation of the asymptotic behavior of the integral in (1), when $`k`$ is fixed and $`n\to \infty `$. Formula (1) also yields information about $`f(0,k,n)`$ for $`k`$ not necessarily fixed. For example, if $`k=n-1`$, then the integral in (1) can be computed, and the result is: $$f(0,n-1,n)=\frac{2^nn}{\pi }\mathrm{arctan}\frac{1}{\sqrt{n-1}}\sim \frac{2^n\sqrt{n}}{\pi }.$$ (6) Particular values of the last formula were computed numerically in . (Table 2). For the expected number of vertices of random sections of fixed co-dimension, we have the following inequality, which is a consequence of (1). $$f(0,n-d,n)\ge \left(\genfrac{}{}{0pt}{}{n}{d}\right)2^n\left(\frac{1}{\pi }\mathrm{arctan}\frac{1}{\sqrt{n-d}}\right)^d,\qquad (d\ge 1).$$ Equality holds for $`d=1`$. To obtain a lower bound for $`f(j,k,n)`$, it turns out to be useful to know an estimate for the Gaussian measure of a cone generated by a section of a face of a cube. In $`\mathrm{\S }3`$ we find such an estimate, by modifying K. Ball’s calculation of the maximal volume of a cube section, based on the Brascamp–Lieb inequality (). Dvoretzky’s theorem on almost Euclidean sections asserts that there exists a function $`k(\epsilon ,n)\ge 1`$, tending to infinity as $`n\to \infty `$ for each fixed $`\epsilon >0`$, such that if $`K`$ is an $`n`$-dimensional centrally symmetric convex body (that is, a convex compact set in $`\mathbb{R}^n`$ with non-empty interior, satisfying $`K=-K`$), and $`\epsilon >0`$, then for each $`1\le k\le k(\epsilon ,n)`$ there exists a $`k`$-dimensional subspace $`X`$, and a linear automorphism $`T`$ of $`X`$, for which $$X\cap B_2^n\subseteq T(X\cap K)\subseteq (1+\epsilon )(X\cap B_2^n),$$ (7) where $`B_2^n`$ denotes the Euclidean unit ball.
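For the smallest interesting case, $`n=3`$, $`k=2`$, formula (6) gives $`f(0,2,3)=(24/\pi )\mathrm{arctan}(1/\sqrt{2})\approx 4.70`$. A Monte Carlo sketch (ours, not from the paper) reproduces this directly: the vertices of a planar central section of $`[-1,1]^3`$ are its intersections with the 12 edges, and a plane with normal $`v`$ crosses an edge exactly when $`v`$ has opposite signs against the two edge endpoints.

```python
import numpy as np

# Monte Carlo check of f(0,2,3) = (24/pi) arctan(1/sqrt(2)) ≈ 4.70:
# draw random planes through the origin (via uniform normal vectors) and
# count, on average, how many of the 12 cube edges each plane crosses.

rng = np.random.default_rng(2)
M = 100_000
v = rng.normal(size=(M, 3))                    # plane normals, uniform direction

# the 12 edges of [-1,1]^3: two coordinates fixed at ±1, one free
edges = []
for free in range(3):
    for s in (-1.0, 1.0):
        for t in (-1.0, 1.0):
            a = np.zeros(3); b = np.zeros(3)
            a[free], b[free] = -1.0, 1.0
            fixed = [i for i in range(3) if i != free]
            a[fixed[0]] = b[fixed[0]] = s
            a[fixed[1]] = b[fixed[1]] = t
            edges.append((a, b))

counts = np.zeros(M)
for a, b in edges:
    # the plane {x : v.x = 0} meets segment [a, b] iff v.a and v.b differ in sign
    counts += ((v @ a) * (v @ b) <= 0)

print(counts.mean(), 24 / np.pi * np.arctan(1 / np.sqrt(2)))  # both ≈ 4.70
```

Since the section polygons here have either 4 or 6 vertices, the mean near 4.70 also reflects the remark made later in the paper that a parallelogram is the more likely outcome.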
The proof of Dvoretzky’s theorem in shows that $`k(\epsilon ,n)\ge c\epsilon ^2|\mathrm{log}\epsilon |^{-1}\mathrm{log}n`$, for some absolute constant $`c>0`$. That proof determined the best possible dependence of $`k`$ on $`n`$. The dependence of $`k`$ on $`\epsilon `$ was improved by Gordon , who discovered another proof of Dvoretzky’s theorem with $`k(\epsilon ,n)\ge c\epsilon ^2\mathrm{log}n`$. Both proofs are probabilistic; they show not only that there exist almost Euclidean sections, but that actually most sections are such. More precisely, if $`X`$ is a random subspace whose dimension does not exceed $`k(\epsilon ,n)`$, then the probability that the section $`X\cap K`$ is $`(1+\epsilon )`$-Euclidean (common terminology for expressing that (7) holds) tends to $`1`$ as $`n\to \infty `$. These facts motivate an investigation of the random $`f`$-vector $`\{f(j,k,n)\}_{j=0}^{k-1}`$, especially since it is well known that every $`k`$-dimensional symmetric polytope that has $`2n`$ facets is affinely equivalent to a $`k`$-section of an $`n`$-cube.

## 2 Vertices

Let $`G_{n,k}`$ denote the set of $`k`$-dimensional subspaces of $`\mathbb{R}^n`$. We will denote its normalized rotation-invariant measure by “Prob”. Recall that this measure is related to the normalized Haar measure $`H`$ of the orthogonal group $`\mathrm{O}(n)`$ by the equality $$\text{Prob }\{X\in B\}=H\{g\in \text{O}(n):g[e_i]_{i=1}^k\in B\},$$ where $`B`$ is a Borel subset of $`G_{n,k}`$ and $`[e_i]_{i=1}^k`$ is the $`k`$-dimensional subspace spanned by the first $`k`$ unit vectors in $`\mathbb{R}^n`$. Fix $`X\in G_{n,k}`$. For each $`0\le j\le k-1`$, the set of $`j`$-faces of the polytope $`X\cap B_{\infty }^n`$ coincides with the set of intersections of $`(n-k+j)`$-faces of $`B_{\infty }^n`$ with $`X`$. Every $`(n-k+j)`$-face of $`B_{\infty }^n`$ has the same probability of being intersected.
Therefore if one particular $`(n-k+j)`$-face $`F_{n-k+j}`$ is fixed, then the expected number of $`j`$-faces of the section $`X\cap B_{\infty }^n`$ is equal to: $$2^{k-j}\left(\genfrac{}{}{0pt}{}{n}{k-j}\right)\text{Prob }\{X\cap F_{n-k+j}\ne \emptyset \}.$$ Let $`C(F_{n-k+j})`$ denote the cone generated by $`F_{n-k+j}`$: $$C(F_{n-k+j})=\bigcup _{x\in F_{n-k+j}}\{tx:t\ge 0\}.$$ Put $`C_1(F_{n-k+j})=C(F_{n-k+j})\cap \mathrm{𝕊}^{n-1}`$. For every subspace $`X`$, $$X\cap F_{n-k+j}\ne \emptyset \iff (X\cap \mathrm{𝕊}^{n-1})\cap C_1(F_{n-k+j})\ne \emptyset .$$ For $`n=0,1,\dots `$ we denote by $`\sigma _n`$ the normalized rotation-invariant measure on the unit sphere $`\mathrm{𝕊}^n`$ in $`\mathbb{R}^{n+1}`$. The next lemma will prove useful for dealing with intersections of subsets of the sphere with random subspaces.

###### Lemma 2.1

Let $`l,m,n`$ be positive integers satisfying $`l+m\ge n-1`$. Suppose that $`A\subseteq \mathrm{𝕊}^m`$ and $`B\subseteq \mathrm{𝕊}^l`$ are Borel subsets. Then for $`p=l+m-n+1`$, $$\int _{\mathrm{O}(n)}\sigma _p(gB\cap A)\,dH(g)=\sigma _l(B)\sigma _m(A).$$ (8) To prove the lemma one observes that for fixed $`A`$ (resp. $`B`$) the integral defines an invariant measure on $`\mathrm{𝕊}^l`$ (resp. $`\mathrm{𝕊}^m`$); the conclusion follows from that. Lemma 2.1 is now applied to $`B=X\cap \mathrm{𝕊}^{n-1}`$, which we denote by $`\mathrm{𝕊}^{k-1}`$, and to $`A=C_1(F_{n-k+j})`$. For $`l=k-1`$ and $`m=n-k+j`$, equality (8) becomes: $$\int _{\mathrm{O}(n)}\sigma _j(g\mathrm{𝕊}^{k-1}\cap A)\,dH(g)=\sigma _{n-k+j}(A).$$ (9) We are ready to compute the expected number of vertices. The Gaussian measure in $`\mathbb{R}^m`$ whose density is $`(2\pi )^{-m/2}\mathrm{exp}(-\sum _{i=1}^mx_i^2/2)`$ is denoted by $`\gamma _m`$.

###### Proposition 2.2

The expected number of vertices of a random $`k`$-dimensional central section of the $`n`$-cube is given by the formula $$f(0,k,n)=2^k\left(\genfrac{}{}{0pt}{}{n}{k}\right)\sqrt{\frac{2k}{\pi }}\int _0^{\infty }e^{-kt^2/2}\gamma _{n-k}(tB_{\infty }^{n-k})\,dt.$$ *Proof*.
For each $`g\in \text{O}(n)`$ we have $$g\mathrm{𝕊}^{k-1}\cap C_1(F_{n-k})=(\text{span}(g\mathrm{𝕊}^{k-1})\cap C(F_{n-k}))\cap \mathrm{𝕊}^{n-1}.$$ For almost every $`g`$ the intersection $`\text{span}(g\mathrm{𝕊}^{k-1})\cap C(F_{n-k})`$ is either the origin itself, or else a ray emanating from the origin. Therefore the intersection $`g\mathrm{𝕊}^{k-1}\cap C_1(F_{n-k})`$ is either empty or a singleton, for almost every $`g`$. Choose $`j=0`$ in (9), with $`A=C_1(F_{n-k})`$. Since the measure $`\sigma _0`$ is concentrated on two points, giving mass $`1/2`$ to each, we deduce from (9) that $$\text{Prob }\{X\cap F_{n-k}\ne \emptyset \}=2\sigma _{n-k}(C_1(F_{n-k})).$$ (10) To compute the r.h.s. of (10), consider an $`(n-k)`$-dimensional cube of edge-length 2 inside $`\mathbb{R}^{n-k+1}`$, at a distance $`\sqrt{k}`$ from the origin, form the cone it generates, and compute the measure of its intersection with the sphere $`\mathrm{𝕊}^{n-k}`$. Invoking polar coordinates we see that $$\sigma _{n-k}(C_1(F_{n-k}))=\gamma _{n-k+1}(C(F_{n-k})).$$ By rotational symmetry of the Gaussian measure we may assume that $`F_{n-k}`$ is specifically the set $`\{x:|x_i|\le 1,\ 1\le i\le n-k,\ x_{n-k+1}=\sqrt{k}\}`$. The intersection of the hyperplane $`\{x_{n-k+1}=t\}`$ with $`C(F_{n-k})`$ is the $`(n-k)`$-dimensional cube $`\frac{t}{\sqrt{k}}B_{\infty }^{n-k}`$. Therefore by Fubini’s theorem $`\gamma _{n-k+1}(C(F_{n-k}))`$ $`=\frac{1}{\sqrt{2\pi }}\int _0^{\infty }e^{-t^2/2}\gamma _{n-k}(\frac{t}{\sqrt{k}}B_{\infty }^{n-k})\,dt`$ $`=\sqrt{\frac{k}{2\pi }}\int _0^{\infty }e^{-kt^2/2}\gamma _{n-k}(tB_{\infty }^{n-k})\,dt.`$ The last equality, together with (10), implies the desired formula. The next lemma points out the precise asymptotic behavior of $`f(0,k,n)`$ when $`k`$ is fixed and $`n\to \infty `$, and also that of $`f(n-m,n-l,n)`$ when $`l,m`$ are fixed and $`n\to \infty `$. (To be used in $`\mathrm{\S }3`$.)
###### Lemma 2.3

Suppose that $`\{\alpha _n\}_{n=1}^{\infty }`$ is a sequence of real numbers that has a positive limit $`\alpha `$. Then as $`n\to \infty `$, $$\int _0^{\infty }e^{-\alpha _nt^2/2}\gamma _n(tB_{\infty }^n)\,dt\sim \mathrm{\Gamma }(\alpha )\frac{\pi ^{\alpha /2}}{\sqrt{2}}\frac{(\mathrm{log}n)^{(\alpha _n-1)/2}}{n^{\alpha _n}},$$ (11) where $`\mathrm{\Gamma }`$ is the Gamma function. *Proof*. Let $`F_n(t)=\text{Prob}\{\mathrm{max}_i|g_i|\le t\}`$, where $`g_1,\dots ,g_n`$ are independent $`N(0,1)`$-Gaussian variables. We have $$\gamma _n(tB_{\infty }^n)=\left(\sqrt{\frac{2}{\pi }}\int _0^te^{-x^2/2}\,dx\right)^n=F_n(t).$$ For $`n>1`$, put $$a_n=\frac{1}{\sqrt{2\mathrm{log}n}},\text{and}b_n=\sqrt{2\mathrm{log}n}-\frac{\mathrm{log}(\pi \mathrm{log}n)}{2\sqrt{2\mathrm{log}n}}.$$ The well-known tail approximation $$\sqrt{\frac{2}{\pi }}\int _t^{\infty }e^{-x^2/2}\,dx=\sqrt{\frac{2}{\pi }}\frac{1+o(1)}{t}e^{-t^2/2}\text{as}t\to \infty ,$$ (12) combined with a simple calculation, implies that $$\underset{n\to \infty }{lim}F_n(a_nx+b_n)=\mathrm{exp}(-e^{-x}),\qquad x\in \mathbb{R}.$$ (13) A change of variables gives: $`\int _0^{\infty }e^{-\alpha _nt^2/2}\gamma _n(tB_{\infty }^n)\,dt=a_n\int _{-b_n/a_n}^{\infty }e^{-\alpha _n(a_nx+b_n)^2/2}F_n(a_nx+b_n)\,dx`$ $`=\frac{\pi ^{\alpha _n/2}}{\sqrt{2}}\frac{(\mathrm{log}n)^{(\alpha _n-1)/2}}{n^{\alpha _n}}e^{o(1)}\int _{-\infty }^{\infty }e^{-x^2o(1)}e^{-\alpha _nx(1-o(1))}F_n(a_nx+b_n)\chi _n(x)\,dx.`$ Here $`\chi _n`$ stands for the characteristic function of the interval $`[-b_n/a_n,\infty )`$. All four terms of the integrand in the last integral are non-negative for each $`x`$. For $`x\ge 0`$ and sufficiently large $`n`$ we have $`e^{-\alpha _nx(1-o(1))}<e^{-\alpha x/2}`$, while the rest of the terms are majorized by $`1`$.
For $`x<0`$ and sufficiently large $`n`$, we have $`F_n(a_nx+b_n)<2\mathrm{exp}(-e^{|x|})`$ and $`e^{-\alpha _nx(1-o(1))}<e^{2\alpha |x|}`$. Thus in both cases, if $`n`$ is sufficiently large, the integrand is dominated by an integrable function. By (13), the integrand converges pointwise to the function $`e^{-\alpha x}\mathrm{exp}(-e^{-x})`$; Lebesgue’s dominated convergence theorem can be applied: $`\underset{n\to \infty }{lim}\int _{-b_n/a_n}^{\infty }e^{-x^2o(1)}e^{-\alpha _nx(1-o(1))}F_n(a_nx+b_n)\,dx`$ $`=\int _{-\infty }^{\infty }e^{-\alpha x}\mathrm{exp}(-e^{-x})\,dx`$ $`=\mathrm{\Gamma }(\alpha ).`$ The proof of Lemma 2.3 is complete. Taking $`\alpha _n\equiv k`$ in Lemma 2.3 and bearing in mind Proposition 2.2 re-proves the following result, which was mentioned in the introduction.

###### Corollary 2.4

For fixed $`k`$, $$f(0,k,n)\sim \frac{2^k}{\sqrt{k}}(\pi \mathrm{log}n)^{(k-1)/2},\text{as}n\to \infty .$$ We turn now to the case of fixed co-dimension. The next result is deduced from Proposition 2.2.

###### Proposition 2.5

For $`d\ge 1`$, $$f(0,n-d,n)\ge \left(\genfrac{}{}{0pt}{}{n}{d}\right)2^n\left(\frac{1}{\pi }\mathrm{arctan}\frac{1}{\sqrt{n-d}}\right)^d,\qquad (d\ge 1).$$ Equality holds for $`d=1`$: $$f(0,n-1,n)=\frac{2^nn}{\pi }\mathrm{arctan}\frac{1}{\sqrt{n-1}}.$$ (14) *Proof*. Consider the probability measure $`d\mu (t)=2\sqrt{\frac{k}{\pi }}e^{-kt^2}dt`$ on the half-line $`[0,\infty )`$.
Put $$\mathrm{\Phi }(t)=\frac{2}{\sqrt{\pi }}\int _0^te^{-x^2}\,dx.$$ Then $$\gamma _{n-k}(tB_{\infty }^{n-k})=\left(\sqrt{\frac{2}{\pi }}\int _0^te^{-x^2/2}\,dx\right)^{n-k}=\mathrm{\Phi }^{n-k}(t/\sqrt{2}).$$ Therefore $`\sqrt{\frac{2k}{\pi }}\int _0^{\infty }e^{-kt^2/2}\gamma _{n-k}(tB_{\infty }^{n-k})\,dt`$ $`=\sqrt{\frac{2k}{\pi }}\int _0^{\infty }e^{-kt^2/2}\mathrm{\Phi }^{n-k}(t/\sqrt{2})\,dt`$ (15) $`=\int _0^{\infty }\mathrm{\Phi }^{n-k}(t)\,d\mu (t)`$ $`\ge \left(\int _0^{\infty }\mathrm{\Phi }(t)\,d\mu (t)\right)^{n-k},`$ the last step by Jensen’s inequality, since $`x\mapsto x^{n-k}`$ is convex on $`[0,\infty )`$. Elementary calculation shows that $$\int _0^{\infty }e^{-kt^2}\mathrm{\Phi }(t)\,dt=\frac{1}{\sqrt{\pi k}}\mathrm{arctan}\frac{1}{\sqrt{k}}.$$ A combination of (15) with Proposition 2.2 gives the desired inequality, after a replacement of $`k`$ by $`n-d`$. Observe that for $`k=n-1`$ (that is, $`d=1`$), there is equality in the inequality of (15). Remarks 1. For $`n=3`$ we get from (14): $`f(0,2,3)=(24/\pi )\mathrm{arctan}\frac{1}{\sqrt{2}}\approx 4.7`$. Therefore a random $`2`$-section of the $`3`$-cube is more likely to be a parallelogram than a hexagon. 2. Bárány and Lovász proved in that (in particular) almost every $`k`$-section of the $`n`$-cube has at least $`2^k`$ vertices. Clearly this is a precise lower bound. For $`k=n-1`$, our result shows that the expected value is asymptotically $`2\sqrt{n}/\pi `$ times the minimal value. 3. The asymptotic behavior of the integral $$\int _0^{\infty }e^{-kt^2/2}\gamma _{n-k}(tB_{\infty }^{n-k})\,dt$$ for fixed $`k`$ and $`n\to \infty `$ was determined in (following ), and was used to prove formula (4) of the introduction. See also . The asymptotic result is basically a corollary of the classical tail approximation of a single $`N(0,1)`$-Gaussian variable. Our approach to the proof of Lemma 2.3 seems to simplify the analysis. 4.
As was indicated in the introduction, we can choose $`\epsilon =\frac{c}{\sqrt{\mathrm{log}n}}`$ for some constant $`c>0`$, and then with high probability a random $`2`$-section of the cube is $`(1+\frac{c}{\sqrt{\mathrm{log}n}})`$-Euclidean. It is well known that among all centrally symmetric polygons having $`2m`$ vertices, the regular $`2m`$-gon minimizes the Banach-Mazur distance to the Euclidean disc; the minimal distance is $`(\mathrm{cos}(\pi /2m))^{-1}`$. Consequently with high probability we have $$(\mathrm{cos}(\pi /2m))^{-1}<1+\frac{c}{\sqrt{\mathrm{log}n}}.$$ Hence most $`2`$-sections of the $`n`$-cube have at least $`C(\mathrm{log}n)^{1/4}`$ vertices, for some positive constant $`C`$. By Corollary $`2.2`$ (after a suitable rearrangement) $$f(0,2,n)=2\sqrt{\pi }\text{E}(\underset{1\le i\le n}{\mathrm{max}}|g_i|),$$ which is of the order of magnitude of $`\sqrt{\mathrm{log}n}`$. Summarizing these observations, we conclude: a typical $`2`$-section of the $`n`$-cube is $`(1+\frac{c}{\sqrt{\mathrm{log}n}})`$-Euclidean, hence it cannot have too few vertices — it has at least $`C(\mathrm{log}n)^{1/4}`$ vertices with probability that tends to $`1`$ as $`n\to \mathrm{\infty }`$. It does not however tend to be a regular polygon, because the expected number of its vertices is too high for that. ## 3 Other faces We now turn to the case $`j\ge 1`$, and prove the following result. ###### Proposition 3.1 For $`j\ge 1`$, the following inequality holds. $$\frac{f(0,k-j,n)}{f(j,k,n)}<\sqrt{\frac{2}{\pi }}\left(\frac{j(k-j)}{n-k+j}\right)^{1/2}\int _0^{\mathrm{\infty }}e^{-\frac{j(k-j)}{n-k+j}t^2/2}\gamma _j(tB_{\mathrm{\infty }}^j)dt.$$ The starting point in the proof of Proposition $`3.1`$ is (9) of Lemma $`2.1`$. Again, we choose $`A=C_1(F_{n-k+j})`$. The random variable $`g\mapsto \sigma _j(g\mathrm{𝕊}^{k-1}\cap A)`$, which is defined on $`\mathrm{O}(n)`$, has values in $`[0,1]`$. 
Hence $$_{\mathrm{O}(n)}\sigma _j(g\mathrm{𝕊}^{k1}A)𝑑H(g)=_0^1H\{g:\sigma _j(g\mathrm{𝕊}^{k1}A)t\}𝑑t.$$ (16) The integrand is non-increasing, and $$H\{g:\sigma _j(g\mathrm{𝕊}^{k1}A)0\}=\text{Prob }\{XF_{nk+j}\mathrm{}\},$$ (17) because the event $`\{g\mathrm{𝕊}^{k1}A\mathrm{}\text{and}\sigma _j(g\mathrm{𝕊}^{k1}A)=0\}`$ has Haar measure zero. Therefore by (9): $`\sigma _{nk+j}(A)`$ $`\text{Prob }\{XF_{nk+j}\mathrm{}\}sup\{t:H\{g:\sigma _j(g\mathrm{𝕊}^{k1}A)t\}>0\}`$ $`\text{Prob }\{XF_{nk+j}\mathrm{}\}sup\{\sigma _j(g\mathrm{𝕊}^{k1}A):g\mathrm{O}(n)\}.`$ Let $$t_{j,k,n}=sup\{\sigma _j(g\mathrm{𝕊}^{k1}A):g\mathrm{O}(n)\}.$$ By (9), $`(15)`$ and $`(16)`$ we get $$\text{Prob}\{XF_{nk+j}\mathrm{}\}\frac{\sigma _{nk+j}(A)}{t_{j,k,n}}.$$ Hence by (10) $$f(j,k,n)2^{kj}\left(\genfrac{}{}{0pt}{}{n}{kj}\right)\frac{\sigma _{nk+j}(A)}{t_{j,k,n}}=\frac{\frac{1}{2}f(0,kj,n)}{t_{j,k,n}}.$$ We must bound $`t_{j,k,n}`$ from above. Since $`A`$ is contained in a half-space, a trivial bound is $`t_{j,k,n}\frac{1}{2}`$. In some cases this bound can be significantly improved. The main lemma in this section is the following. ###### Lemma 3.2 If $`1j<k<n`$, then $$t_{j,k,n}\frac{1}{\sqrt{2\pi }}\left(\frac{j(kj)}{nk+j}\right)^{1/2}_0^{\mathrm{}}e^{\frac{j(kj)}{nk+j}t^2/2}\gamma _j(tB_{\mathrm{}}^j)𝑑t.$$ The next lemma will be used in the proof of Lemma 3.2. ###### Lemma 3.3 Given a positive number $`\tau >0`$, a $`j`$-dimensional subspace $`Y`$ of $`\mathrm{}^m`$ and a point $`y_0Y`$, the following inequality holds. $$\gamma _j((Y\tau B_{\mathrm{}}^m)y_0)\gamma _j(\tau \sqrt{m/j}B_{\mathrm{}}^j).$$ (18) *Proof*. Let $`Q`$ denote the orthogonal projection onto $`Yy_0`$. As usual, $`\{e_i\}_{i=1}^m`$ are the standard unit vectors in $`\mathrm{}^m`$. Put $`u_i=Qe_i/Qe_i`$ if $`Qe_i0`$, and $`u_i=0`$ otherwise; put $`c_i=Qe_i^2`$ and $`\alpha _i=y_0,e_i`$ for $`1im`$. ($`,`$ is the standard scalar product.) 
Then $`Y\tau B_{\mathrm{}}^m`$ $`=\{yY:|y,e_i|\tau i\}`$ $`=\{yY:|yy_0,e_i+y_0,e_i|\tau i\}`$ $`=\{yY:{\displaystyle \frac{\alpha _i\tau }{\sqrt{c_i}}}yy_0,u_i{\displaystyle \frac{\alpha _i+\tau }{\sqrt{c_i}}}\}.`$ Therefore $$(Y\tau B_{\mathrm{}}^m)y_0=\{xYy_0:\frac{\alpha _i\tau }{\sqrt{c_i}}x,u_i\frac{\alpha _i+\tau }{\sqrt{c_i}}\}$$ Now we can imitate K. Ball’s argument from concerning sections of maximal volume. Instead of the Lebesgue measure, we have to consider the Gaussian measure. In $`Yy_0`$, the identity operator can be written as $`_1^mc_iu_iu_i`$. In particular, $$\underset{i=1}{\overset{m}{}}c_i=j,\text{and}x^2=\underset{i=1}{\overset{m}{}}c_ix,u_i^2,xYy_0.$$ Therefore the Gaussian measure in $`Yy_0`$ is equal to $$(2\pi )^{j/2}\mathrm{exp}(\underset{i=1}{\overset{m}{}}c_ix,u_i^2/2)dx.$$ Let $`\chi _i`$ denote the characteristic function of the interval $`[\frac{\alpha _i\tau }{\sqrt{c_i}},\frac{\alpha _i+\tau }{\sqrt{c_i}}]`$. Then, by the above, $`\gamma _j((Y\tau B_{\mathrm{}}^m)y_0)`$ $`=(2\pi )^{j/2}{\displaystyle _{Yy_0}}\left({\displaystyle \underset{i=1}{\overset{m}{}}}\chi _i(x,u_i)e^{c_ix,u_i^2/2}\right)𝑑x`$ (19) $`=(2\pi )^{j/2}{\displaystyle _{Yy_0}}{\displaystyle \underset{i=1}{\overset{m}{}}}(\chi _i(x,u_i)e^{x,u_i^2/2})^{c_i}dx`$ $`(2\pi )^{j/2}{\displaystyle \underset{i=1}{\overset{m}{}}}\left({\displaystyle _{(\alpha _i\tau )/\sqrt{c_i}}^{(\alpha _i+\tau )/\sqrt{c_i}}}e^{s^2/2}𝑑s\right)^{c_i}.`$ The last inequality is a consequence of Brascamp-Lieb’s inequality, which is stated in as follows: LemmaLet $`(u_i)_1^m`$ be a sequence of unit vectors in $`\mathrm{}^n`$ and $`(c_i)_1^m`$ a sequence of positive numbers so that $$\underset{1}{\overset{m}{}}c_iu_iu_i=I_n.$$ For each $`i`$, let $`f_i:\mathrm{}[0,\mathrm{})`$ be integrable. 
Then $$_\mathrm{}^n\underset{i=1}{\overset{m}{}}f_i(u_i,x)^{c_i}dx\underset{i=1}{\overset{m}{}}\left(_{\mathrm{}}f_i\right)^{c_i}.$$ The $`i`$’th integral in the product of $`(19)`$ is not larger than $`_{\tau /\sqrt{c_i}}^{\tau /\sqrt{c_i}}e^{s^2/2}𝑑s`$. Hence the last expression in $`(19)`$ is bounded above by $$(2\pi )^{j/2}\underset{i=1}{\overset{m}{}}\left(2_0^{\tau /\sqrt{c_i}}e^{s^2/2}𝑑s\right)^{c_i},$$ which is maximized when all the $`c_i`$’s are equal. Hence $`\gamma _j((Y\tau B_{\mathrm{}}^m)y_0)`$ $`\left(\sqrt{{\displaystyle \frac{2}{\pi }}}{\displaystyle _0^{\tau \sqrt{m/j}}}e^{s^2/2}𝑑s\right)^j`$ (20) $`=\gamma _j(\tau \sqrt{m/j}B_{\mathrm{}}^j).`$ The proof of Lemma $`3.3`$ is complete. Proof of Lemma 3.2 For $`g\mathrm{O}(n)`$ $`\sigma _j(g\mathrm{𝕊}^{k1}A)`$ $`=\gamma _{j+1}\left(C(F_{nk+j})\text{span}(g\mathrm{𝕊}^{k1})\right)`$ $`=\gamma _{j+1}\left(C[F_{nk+j}\text{span}(g\mathrm{𝕊}^{k1})]\right).`$ The second equality is a consequence of the identity $`C(F_{nk+j})X=C(F_{nk+j}X)`$, which trivially holds for every subspace $`X\mathrm{}^n`$. Fix a subspace $`XG_{n,k}`$ for which the section $`XF_{nk+j}`$ is $`j`$-dimensional; almost every $`XG_{n,k}`$ has this property. Let $`C`$ denote the $`(j+1)`$-dimensional cone generated by $`XF_{nk+j}`$; put $`X_0=\text{span}C`$. By $`M`$ we denote the affine subspace spanned by $`XF_{nk+j}`$, and by $`d`$, its distance from the origin of $`X`$. The Gaussian measure of the cone $`C`$ is computed as follows. Take the unit vector $`\xi X_0`$ which is orthogonal to $`M`$, and for which $`d\xi M`$. For $`t>0`$, put $`W_t=\{xX_0:x,\xi =t\}`$. Observe that $`CW_t=(t/d)(XF_{nk+j})`$. Let $`P`$ denote the orthogonal projection from $`X_0`$ onto $`W_0`$. 
By Fubini’s theorem: $`\gamma _{j+1}(C)`$ $`={\displaystyle \frac{1}{\sqrt{2\pi }}}{\displaystyle _0^{\mathrm{}}}e^{t^2/2}\gamma _j(P(CW_t))𝑑t`$ (21) $`={\displaystyle \frac{1}{\sqrt{2\pi }}}{\displaystyle _0^{\mathrm{}}}e^{t^2/2}\gamma _j(P(t/d)(XF_{nk+j}))𝑑t.`$ Our task is to estimate the expression $`\gamma _j(P\tau (XF_{nk+j}))`$ for every $`\tau >0`$. We will need to discuss Gaussian measures in different subspaces. Whenever $`M`$ is an $`m`$-dimensional subspace of $`\mathrm{}^n`$ and $`qM`$, let $`\mathrm{𝔾}_{M,q}`$ denote the measure $`(2\pi )^{m/2}\mathrm{exp}(xq^2/2)dx`$. In case $`M`$ is an $`m`$-dimensional linear subspace of $`\mathrm{}^n`$ and $`q=0`$ we shall simply write $`\mathrm{𝔾}_{M,0}=\gamma _m`$. If $`T`$ is an isometry of $`\mathrm{}^n`$, then for every Borel subset $`SM`$ we have $$\mathrm{𝔾}_{M,q}(S)=\mathrm{𝔾}_{TM,Tq}(TS).$$ (22) Let us momentarily assume that $`\tau =1`$. Let $`q`$ denote the nearest point of $`M`$ to the origin of $`X`$. Both $`M`$ and the range of the projection $`P`$ are $`j`$-dimensional affine subspaces of $`X_0`$. We have $$P(XF_{nk+j})=(XF_{nk+j})q,$$ hence by (22) $$\mathrm{𝔾}_{M,q}(XF_{nk+j})=\gamma _j(P(XF_{nk+j})).$$ (23) Now let $`L`$ denote the affine subspace spanned by $`F_{nk+j}`$, whose origin $`O_L`$ is taken as the center of the face $`F_{nk+j}`$. (So if $`X`$ passes through the center of $`F_{nk+j}`$, then $`q=O_L`$.) $`M`$ is also a $`j`$-dimensional affine subspace of $`L`$. By (22), $$\mathrm{𝔾}_{M,q}(XF_{nk+j})=\mathrm{𝔾}_{M(qO_L),O_L}\left((XF_{nk+j})(qO_L)\right).$$ Applying the same argument for arbitrary $`\tau >0`$ we conclude that $$\gamma _j(P\tau (XF_{nk+j}))=\mathrm{𝔾}_{\tau M\tau (qO_L),\tau O_L}\left(\tau (XF_{nk+j})\tau (qO_L)\right).$$ (24) We may think of $`\tau L`$ as $`\mathrm{}^{nk+j}`$, of $`\tau F_{nk+j}`$ as $`\tau B_{\mathrm{}}^{nk+j}`$, and of $`\tau (XF_{nk+j})`$ as an affine $`j`$-dimensional section of $`\tau B_{\mathrm{}}^{nk+j}`$. 
Thus for each $`t>0`$ Lemma $`3.3`$ can be used with $`\tau =t/d`$ and $`m=nk+j`$. By the definition of $`d`$, we have $`d\sqrt{kj}`$. Combining (18),(21) and $`(24)`$ we deduce that $`\gamma _{j+1}(C)`$ $`{\displaystyle \frac{1}{\sqrt{2\pi }}}{\displaystyle _0^{\mathrm{}}}e^{t^2/2}\gamma _j\left(t\left({\displaystyle \frac{nk+j}{j(kj)}}\right)^{1/2}B_{\mathrm{}}^j\right)𝑑t`$ $`={\displaystyle \frac{1}{\sqrt{2\pi }}}\left({\displaystyle \frac{j(kj)}{nk+j}}\right)^{1/2}{\displaystyle _0^{\mathrm{}}}\mathrm{exp}({\displaystyle \frac{j(kj)}{nk+j}}t^2/2)\gamma _j(tB_{\mathrm{}}^j)𝑑t.`$ The proof of lemma 3.2 and thus of proposition $`3.1`$ is complete. By using the asymptotic formulas of section $`2`$, namely Lemma $`2.3`$ and Corollary $`2.4`$, we can now prove the following result, which shows that the lower bound for $`f(j,k,n)`$ derived from proposition $`3.1`$ is, in some cases, asymptotically best possible. ###### Corollary 3.4 For fixed integers $`1l<m`$, $$f(nm,nl,n)\frac{(2n)^{ml}}{(ml)!}\text{as}n\mathrm{}.$$ (25) *Proof*. Put $`\alpha _n=(ml)(nm)/(nm+l)`$. By Proposition $`3.1`$, $$\frac{f(0,ml,n)}{f(nm,nl,n)}<\sqrt{\frac{2\alpha _n}{\pi }}_0^{\mathrm{}}e^{\alpha _nt^2/2}\gamma _{nm}(tB_{\mathrm{}}^{nm})𝑑t.$$ (26) Put $`b_n=(\mathrm{log}(nm))^{(\alpha _n1)/2}/(nm)^{\alpha _n}`$ and $`c_n=(\mathrm{log}n)^{(ml1)/2}`$. Let $`d_n`$ denote the right hand side of $`(26)`$, from which we get $$f(nm,nl,n)\frac{b_n}{c_n}>\frac{f(0,ml,n)}{c_n}\frac{b_n}{d_n}.$$ Since $`lim_n\mathrm{}\alpha _n=(ml)`$, Lemma $`2.3`$ implies that $$\underset{n\mathrm{}}{lim}\frac{b_n}{d_n}=\frac{1}{\pi ^{(ml1)/2}\mathrm{\Gamma }(ml)\sqrt{ml}}.$$ Moreover, by Corollary $`2.4`$, $$\underset{n\mathrm{}}{lim}\frac{f(0,ml,n)}{c_n}=\frac{2^{ml}\pi ^{(ml1)/2}}{\sqrt{ml}}.$$ Thus, the sequence $`f(nm,nl,n)\frac{b_n}{c_n}`$ is larger than a sequence that tends to $`2^{ml}/(ml)!`$ as $`n`$ tends to infinity. 
On the other hand we have $`f(nm,nl,n)<2^{ml}\left(\genfrac{}{}{0pt}{}{n}{ml}\right)`$, so $$f(nm,nl,n)\frac{b_n}{c_n}<2^{ml}\left(\genfrac{}{}{0pt}{}{n}{ml}\right)\frac{b_n}{c_n},$$ and since $`b_n/c_nn^{lm}`$, the r.h.s here tends to $`2^{ml}/(ml)!`$. Consequently, $$\underset{n\mathrm{}}{lim}f(nm,nl,n)\frac{b_n}{c_n}=\frac{2^{ml}}{(ml)!}.$$ The required asymptotic formula follows immediately. The proof of Corollary $`3.4`$ is complete. Remarks 1. The previous corollary implies that the number of $`(nm)`$-faces of a random $`(nl)`$-section of the $`n`$-cube tends to concentrate near the value $`2^{ml}\left(\genfrac{}{}{0pt}{}{n}{ml}\right)`$, which bounds it from above. So for example, a typical $`1`$-co-dimensional section of the $`n`$-cube will have $`2no(n)`$ facets as $`n\mathrm{}`$. This result can also be deduced from the identity (3). Indeed, by duality we have $`f(nm,nl,n)=\text{E}(f_{ml1}(\mathrm{\Pi }_{nl}(B_1^n)))`$, and replacing $`T^n`$ by $`B_1^n`$ in the proof of Theorem 2 in , (the details of this replacement appear in ; see the proof of Theorem 1.1 there) we get the previous corollary. 2. According to a remark made in , the number $`f(j,k,n)`$ is equal to the expected number of $`(kj1)`$-faces of the convex hull of $`\pm G_1,\mathrm{},\pm G_n`$, where the $`G_i`$’s are independent copies of a $`k`$-dimensional Gaussian vector. Hence, the results for $`f(0,k,n)`$ can be interpreted as results for the expected number of facets of the convex hull of $`\{\pm G_i\}_1^n`$ in $`\mathrm{}^k`$. For example, we can translate the first remark at the end of section $`2`$ to the following statement: If $`3`$ points in the plane are chosen at random, then their symmetric convex hull is more likely to be a parallelogram than a hexagon. Acknowledgements I thank Itai Benjamini for getting me interested in the subject of this paper, and for several stimulating discussions. I also thank the referees for their useful comments. 
| Yossi Lonke | Current address: | | --- | --- | | Institute of Mathematics | Department of Mathematics | | The Hebrew University of Jerusalem | Case Western Reserve University | | Jerusalem 91904, Israel | Cleveland, Ohio 44106-7058 | | | email: jrl16@po.cwru.edu |
no-problem/9812/nucl-ex9812010.html
# Statistical Exploration of Fragmentation Phase Space and Source Sizes in Nuclear Multifragmentation ## Abstract The multiplicity distributions for individual fragment $`Z`$ values in nuclear multifragmentation are binomial. The extracted maximum value of the multiplicity, $`m_Z`$, is found to depend on $`Z`$ according to $`m_Z=Z_0/Z`$, where $`Z_0`$ is the source size. This is shown to be a strong indication of statistical coverage of fragmentation phase space. The inferred source sizes coincide with those extracted from the analysis of fixed multiplicity charge distributions. Is nuclear multifragmentation a statistical or a dynamical process? Let us consider a fragmentation space defining the number and masses/charges of the produced fragments. The thermal population of each cell can be expressed in terms of suitable Boltzmann factors. The thermal nature of this population can be tested in a variety of ways. A very visual way is the Arrhenius plot where the log of the population is plotted vs. the reciprocal temperature $`1/T`$. The question, however, remains whether, apart from the Boltzmann factor, all the cells of the fragmentation space are uniformly explored. A test of such a uniform filling implies the knowledge of the size (mass/charge) of the source. A new empirical feature observed in many reactions has led us to a way to verify the uniform filling of fragmentation space and to determine simultaneously the source size. It has been shown that the intermediate mass fragment (IMF) multiplicity distribution $`P_n`$ at any given transverse energy $`E_t`$ is empirically given by a binomial distribution $$P_n=\frac{m!}{n!(m-n)!}p^n(1-p)^{m-n}.$$ (1) This implies that fragments are emitted nearly independently of each other, so that the probability $`P_n`$ of observing $`n`$ fragments can be written by combining a single one-fragment emission probability $`p`$ according to Eq. (1). 
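The binomial form (1) is fixed by the two parameters $`m`$ and $`p`$, which can be recovered from a measured multiplicity distribution, for instance by a method-of-moments inversion. A minimal sketch on synthetic data (the parameter values are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fragment multiplicities drawn from Eq. (1) with invented
# "true" parameters (purely illustrative):
m_true, p_true = 12, 0.15
n = rng.binomial(m_true, p_true, size=50000)

# Method-of-moments inversion of Eq. (1): mean = m*p, variance = m*p*(1-p).
mean, var = n.mean(), n.var()
p_hat = 1.0 - var / mean
m_hat = mean / p_hat
print("p ~", round(p_hat, 3), " m ~", round(m_hat, 1))
```

In practice a maximum-likelihood fit of the full distribution (as used for the data discussed below) is more robust than raw moments, but the idea is the same: the extra parameter $`m`$ restores the scale that is lost in the Poisson limit.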
The parameter $`m`$ (the total number of throws) represents the maximum possible number of fragments, which is immediately related to the source size. The simplest statistical equilibrium model of multifragmentation has exactly the structure of Eq. (1). Let us assume that the source is made up of $`m`$ fragments. The “outside” fragments have energy $`ϵ_2`$, and those “inside” have energy $`ϵ_1`$. A generic partition of $`n`$ fragments outside and $`m-n`$ fragments inside has the probability: $$P_n=\frac{m!}{n!(m-n)!}\frac{e^{-(nϵ_2+(m-n)ϵ_1)/T}}{(e^{-ϵ_1/T}+e^{-ϵ_2/T})^m}$$ (2) which leads to Eq. (1) when $$p=\frac{e^{-ϵ_2/T}}{e^{-ϵ_1/T}+e^{-ϵ_2/T}}.$$ (3) Thus, a simple way to obtain the size of the source is to multiply $`m`$ by the fragment size. When the definition of IMF covers a range of atomic numbers (IMF: $`Z_{th}\le Z\le 20`$, with $`Z_{th}`$ equal to 3), one should multiply $`m`$ by a “suitably” averaged $`Z`$. In fact, a dependence of $`m`$ on the lower threshold $`Z_{th}`$ has been found such that $`m(Z_{th})\times Z_{th}\approx \mathrm{constant}`$ . The natural next step is to restrict the fragment definition to a single atomic number $`Z`$. A straightforward generalization of Eq. (1) to the production of fragments with charges 1, 2, …$`Z_0`$ is given by the multinomial distribution $$P=\frac{Z_0!}{n_1!n_2!\mathrm{\cdots }n_{Z_0}!}p_1^{n_1}p_2^{n_2}\mathrm{\cdots }p_{Z_0}^{n_{Z_0}}$$ (4) with $$Z_0=\underset{Z}{\sum }Zn_Z.$$ (5) Here there is no single quantity $`m`$ as in Eq. (1), since the constraint is now on the total charge rather than on the total number of fragments. However, this leads immediately to a simple scaling that must be obeyed if the fragmentation phase space is to be completely explored. Let us first consider, as a familiar example, the fragmentation space spanned by the Euler number partitions. This case is particularly simple because all partitions are unbiased, or equally probable. This would correspond, for the different model defined in Eqs. 
(4) and (5), to setting $`p_i`$=constant. Let us indicate with $`W(Z_0)`$ the statistical weight (number of partitions) associated with the integer or “charge” $`Z_0`$. If we now select the partitions containing at least $`n_Z`$ integers of size $`Z`$, their number is given by the number of partitions of the residue $$P_{n_Z}=W(Z_0-n_ZZ)$$ (6) by definition. Since the “charge” constraint is applied “minimally,” what counts is only the product $`Zn_Z`$, rather than the individual $`Z`$ values. Consequently, the weight of the residue does not change if we substitute $`n_ZZ`$ with $`n_Z^{\prime }Z^{\prime }`$ provided that $`n_ZZ=n_Z^{\prime }Z^{\prime }`$. In other words, $`P_{n_Z,Z}=P_{n_Z^{\prime },Z^{\prime }}`$ if $`n_ZZ=n_Z^{\prime }Z^{\prime }`$. This gives immediately the scaling laws $$n_Z/n_Z^{\prime }=Z^{\prime }/Z$$ (7) or for the extreme value of $`n_Z=m_Z`$, $$m_Z=Z_0/Z.$$ (8) This result follows directly from the uniform exploration of the fragmentation phase space. It amounts to a complete decoupling of all the fragmentation paths. Once the system attains the charge loss of $`q=nZ`$, it is indifferent to how this was attained. Any path $`q=n_iZ_i`$ is equivalent and substitutable. Similarly, for the remaining charge $`Z_0-q`$. This argument works when all partitions are unbiased, as for instance in the Euler number partition mentioned above. However, in the binomial/multinomial distributions of Eqs. (1) and (4), each partition is weighted by the probabilities $`p_i`$ (reflecting $`Q`$-value effects) which may or may not be Boltzmann factors. Yet, the binomial/multinomial analysis elegantly and automatically separates out the $`Q`$-value dependent probability $`p`$ from the parameter $`m_Z`$ which now contains direct information on the accessible fragmentation space and must follow the scaling given by Eq. (8). Thus the $`1/Z`$ scaling is the counterpart of the Arrhenius plot which verifies the thermal (Canonical) population of each fragmentation configuration. 
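In the unbiased (Euler) case the chain of identities Eqs. (6)-(8) can be checked by direct enumeration; a small brute-force sketch (the partition-counting recursion is standard, and the value of $`Z_0`$ is arbitrary):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def npart(q, largest=None):
    """W(q): number of partitions of the integer q into parts <= largest."""
    if largest is None:
        largest = q
    if q == 0:
        return 1
    if q < 0 or largest <= 0:
        return 0
    return npart(q - largest, largest) + npart(q, largest - 1)

def partitions(q, largest=None):
    """Enumerate all partitions of q as non-increasing tuples."""
    largest = q if largest is None else largest
    if q == 0:
        yield ()
        return
    for p in range(min(q, largest), 0, -1):
        for rest in partitions(q - p, p):
            yield (p,) + rest

Z0 = 18
for Z in range(1, Z0 + 1):
    for n in range(Z0 // Z + 1):
        # Partitions of Z0 with at least n parts equal to Z ...
        direct = sum(1 for p in partitions(Z0) if p.count(Z) >= n)
        # ... are in bijection with partitions of the residue, Eq. (6).
        assert direct == npart(Z0 - n * Z)
    # Maximal multiplicity of size-Z parts: m_Z = Z0/Z, Eq. (8).
    assert max(n for n in range(Z0 + 1) if npart(Z0 - n * Z) > 0) == Z0 // Z
print("Eqs. (6)-(8) verified by enumeration for Z0 =", Z0)
```

The bijection behind Eq. (6) is simply removal (or addition) of $`n_Z`$ parts of size $`Z`$, which is why only the product $`n_ZZ`$ matters.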
When the restriction to individual $`Z`$ values is made experimentally , the multiplicity distributions are found to be nearly Poissonian, namely $`mp<<m`$. This introduces interesting simplifications in the analysis and interpretation of the data, but at the cost of a loss of scale. In the Poisson limit the average multiplicity $`n=mp`$ is the only accessible parameter, and the decomposition into $`m`$ and $`p`$ becomes impossible. The recovery of scale for an individual $`Z`$ is highly desirable in view of the possibility that the number of throws $`m_Z`$ (for a single species $`Z`$) might obey the simple scaling of Eq. (8). Thus, we have attempted binomial fits of the multiplicity distributions for individual $`Z`$ values in an effort to extract $`m_Z`$. Fortunately, a number of reactions (<sup>36</sup>Ar+<sup>197</sup>Au at 35 to 110 $`A`$MeV , <sup>129</sup>Xe+<sup>27</sup>Al,<sup>51</sup>V,<sup>nat</sup>Cu, <sup>89</sup>Y, at 50 $`A`$MeV , and <sup>129</sup>Xe+<sup>197</sup>Au at 50 to 60 $`A`$MeV ) have been studied with good $`Z`$ resolution and high statistics. We first consider the asymmetric, intermediate-energy reactions in reverse kinematics exemplified by <sup>129</sup>Xe+<sup>nat</sup>Cu at 50 $`A`$MeV, for which we can expect a single dominant fragment source. Examples of both binomial and Poisson fits to the carbon yield from this reaction are shown in panel a) of Fig. 1. An improvement of the fit by using the binomial expression is observed for large fold numbers. A similar improvement is observed for each $`Z`$ in all reactions listed in this letter. The $`E_t`$ dependence of the parameters $`m_Z`$ from the binomial fits to the multiplicity distributions associated with each fragment atomic number leads to several observations. For each $`Z`$ value, $`m_Z`$ increases to a near constant value with increasing $`E_t`$. We approximate this behavior with the form $`m_Z=m_Z^0\mathrm{tanh}fE_t`$. 
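A fit of the saturation form $`m_Z=m_Z^0\mathrm{tanh}fE_t`$ to extracted $`m_Z(E_t)`$ values can be sketched with a plain two-parameter grid search (synthetic data; the "true" parameters and noise level are invented, and a real analysis would use a proper nonlinear least-squares fit):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic m_Z(E_t) data following the empirical saturation form.
m0_true, f_true = 16.0, 0.004
Et = np.linspace(50, 1000, 20)
mZ = m0_true * np.tanh(f_true * Et) + rng.normal(0.0, 0.2, Et.size)

# Crude grid search over (m_Z^0, f) minimizing the sum of squared residuals.
m0s = np.linspace(10, 20, 201)
fs = np.linspace(0.001, 0.01, 181)
sse = ((mZ[None, None, :] - m0s[:, None, None]
        * np.tanh(fs[None, :, None] * Et[None, None, :]))**2).sum(axis=-1)
i, j = np.unravel_index(np.argmin(sse), sse.shape)
print("m_Z^0 =", m0s[i], " f =", fs[j])
```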
The parameter $`m_Z^0`$ represents the saturation value of $`m_Z`$ for large $`E_t`$ and $`f`$ controls the rise of $`m_Z`$ with increasing $`E_t`$. The solid lines in panel b) of Fig. 1 are the empirical fits to $`m_Z`$ values extracted for lithium and oxygen emission from the reaction <sup>129</sup>Xe+<sup>nat</sup>Cu at 50 $`A`$MeV. The other discontinuous lines are fits to data not shown ($`Z`$=4-7). Furthermore, at all $`E_t`$ values there is an overall decrease of $`m_Z`$ with increasing fragment $`Z`$ value in agreement with the expected scaling $`Zm_Z=Z_0`$. This remarkable dependence is exemplified in panel c) of Fig. 1 and in Fig. 2. By applying the expected scaling ($`Zm_Z`$), all of the fits to the <sup>129</sup>Xe+<sup>nat</sup>Cu data collapse together, resulting in the approximate source “size” as a function of $`E_t`$. A weighted average ($`Z_0`$) of the data over different exit channels, constructed according to $$Z_0(E_t)=\underset{Z}{\sum }Zm_Z(E_t)a_Z,$$ (9) is shown by the symbols in panel c) of Fig. 1. The weight $`a_Z`$ is the standard weight (proportional to the inverse square of the individual errors). A similar behavior is observed in two additional asymmetric reactions <sup>129</sup>Xe+<sup>51</sup>V and <sup>89</sup>Y (see Fig. 1d). The $`E_t`$ dependence of the source size is tantalizing. The source size increases quickly to a saturation value. The fact that $`E_t`$ is related to impact parameter as well as to the total excitation energy may explain the observed features. As a special case of the $`1/Z`$ scaling, the “saturation” $`m_Z`$ values from central collisions of the reverse kinematics reactions (the top 5% of the $`E_t`$ scale, shown by the hatched regions in panel b) of Fig. 1), are shown in Fig. 2. 
The open symbols represent the scaled quantity $`Zm_Z=Z_0`$. The solid lines are weighted averages for the different reactions. The same data in the form $`m_Z/Z_0`$ vs. $`Z`$ are shown by the solid symbols, where the $`1/Z`$ dependence is manifested by the good agreement of the data with the values of $`1/Z`$ (solid line). This striking $`1/Z`$ dependence of the parameter $`m_Z`$ is observed for all asymmetric systems measured. These overall results for asymmetric reactions suggest the dominance of a single source, strongly support the hypothesis of uniform (statistical) exploration of the fragmentation phase space, and lead to the interpretation of $`Z_0=Zm_Z`$ as the source “size.” The $`1/Z`$ scaling is general and should be observed in models used to describe multifragmentation. In the less asymmetric reactions <sup>129</sup>Xe+<sup>197</sup>Au at 50 and 60 $`A`$MeV for which at least two sources are plausible, we shall refer directly to $`Z_0=Zm_Z`$ as the source size, although we shall see that now it depends on the fragment $`Z`$ value as well as on $`E_t`$. The weak decrease of the source size with increasing fragment size $`Z`$, already observable in <sup>129</sup>Xe+<sup>89</sup>Y (Fig. 2), becomes more visible in the case of the <sup>197</sup>Au target. At low fragment $`Z`$ values, the source size $`Z_0`$ is $`\approx 70`$ and it decreases monotonically with increasing fragment size $`Z`$ to a source size $`Z_0`$ of approximately 40-50. This fragment size dependence seems to suggest that for the <sup>89</sup>Y target and, most of all, for the <sup>197</sup>Au target there may be a distribution of sizes, the higher $`Z`$ fragments being emitted preferentially by the smaller source(s). 
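The error-weighted combination of Eq. (9) amounts to an inverse-variance average of the per-$`Z`$ estimates $`Zm_Z`$ of the source size; a minimal sketch (all numbers are invented, chosen to roughly follow $`m_Z=Z_0/Z`$ with $`Z_0=48`$):

```python
import numpy as np

def weighted_source_size(Z, m_Z, sigma_m):
    """Eq. (9): Z0 = sum_Z Z*m_Z*a_Z with normalized inverse-variance
    weights a_Z, where the error on each Z*m_Z estimate is Z*sigma_m."""
    x = np.asarray(Z) * np.asarray(m_Z)          # each entry estimates Z0
    s = np.asarray(Z) * np.asarray(sigma_m)      # its uncertainty
    w = 1.0 / s**2
    return np.sum(w * x) / np.sum(w)

# Hypothetical fitted m_Z values and errors:
Z       = np.array([3, 4, 5, 6, 7, 8])
m_Z     = np.array([16.2, 11.9, 9.5, 8.1, 6.9, 5.9])
sigma_m = np.array([0.8, 0.6, 0.5, 0.4, 0.4, 0.3])
print(round(weighted_source_size(Z, m_Z, sigma_m), 1))
```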
They also give information on the dependence of the source size on bombarding energy. The source size at low fragment $`Z`$ increases from $`Z_0\approx 30`$ to $`Z_0\approx 60`$ as the bombarding energy increases from 35 to 110 $`A`$MeV, while at higher fragment $`Z`$ the source size increases from $`Z_0\approx 20`$ to $`Z_0\approx 40`$. In previous work , it was empirically shown that the observed charge distributions resulting from nuclear multifragmentation obey the following invariant form: $$P_n(Z)\propto \mathrm{exp}\left(-\frac{B(Z)}{\sqrt{E_t}}-cnZ\right)$$ (10) where $`n`$ is the total intermediate mass fragment multiplicity of the event; $`E_t`$ is the total transverse energy; and $`B(Z)`$ is the “barrier” distribution. From thermodynamic considerations and percolation simulations, it was shown that $`c`$ in Eq. (10) vanishes when the gas of IMFs is in equilibrium with a liquid (residue source, or percolating cluster), and assumes a value $`1/Z_0`$ ($`Z_0`$ being the source size) when the source is wholly vaporized . Experimentally, the parameter $`c`$ undergoes an evolution with (transverse) energy from approximately zero to a positive constant . Thus the source size evolves from near infinity (an “infinite” reservoir of fragments) to the actual size of the source. With the exception of the reactions <sup>129</sup>Xe+<sup>27</sup>Al,<sup>51</sup>V, Cu, <sup>89</sup>Y, this limit is attained. Thus, it is possible to plot the value of $`Z_0`$ determined from $`m_Z`$ against that obtained from the $`c`$ parameter (both for the top 5% most central collisions in $`E_t`$). Such a plot is shown in Fig. 4. The result is striking. Not only are the two quantities well correlated, but they also agree quite well in absolute value. This good agreement gives confidence that we have gained direct access to the source size. This source (sources) is specifically the entity that generates the fragments through “chemical equilibrium”. 
It does not contain the pre-equilibrium part which is often incorporated in other source reconstruction methods. In conclusion, we have shown that: a) the binomial (nearly Poissonian) multiplicity distributions for individual fragment atomic numbers permit the extraction of the parameter $`m_Z`$, the number of throws; b) $`m_Z`$ for reactions where a single source is clearly dominant has the form $`m_Z=Z_0/Z`$; c) the $`1/Z`$ dependence is dramatic proof that the fragmentation phase space is statistically explored; d) source size(s) can be extracted and should reflect the region(s) where chemical (as opposed to physical) equilibrium is achieved; e) these source sizes agree with the sizes obtained from the analysis of multiplicity selected charge distributions in the $`E_t`$ range where a single gas phase, or thermodynamic bivariance, prevails. Acknowledgments This work was supported by the Nuclear Physics Division of the US Department of Energy, under contract DE-AC03-76SF00098. One of us (L.B) acknowledges a fellowship from the National Sciences and Engineering Research Council (NSERC), Canada. Present address: Indiana University Cyclotron Facility, 2401 Milo B. Sampson Ln, Bloomington, IN 47408
no-problem/9812/cond-mat9812187.html
# Competition Between Stripes and Pairing in a 𝑡-𝑡'-𝐽 Model ## Abstract As the number of legs $`n`$ of an $`n`$-leg, $`t`$-$`J`$ ladder increases, density matrix renormalization group calculations have shown that the doped state tends to be characterized by a static array of domain walls and that pairing correlations are suppressed. Here we present results for a $`t`$-$`t^{\prime }`$-$`J`$ model in which a diagonal, single particle, next-near-neighbor hopping $`t^{\prime }`$ is introduced. We find that this can suppress the formation of stripes and, for $`t^{\prime }`$ positive, enhance the $`d_{x^2-y^2}`$-like pairing correlations. The effect of $`t^{\prime }>0`$ is to cause the stripes to evaporate into pairs and for $`t^{\prime }<0`$ to evaporate into quasi-particles. Results for $`n=4`$ and 6-leg ladders are discussed. Neutron scattering experiments on La<sub>1.6-x</sub>Nd<sub>0.4</sub>Sr<sub>x</sub>CuO<sub>4</sub> show evidence of a competition between static (quasi-static) stripes and superconductivity. Here the stripes consist of (1,0) domain walls of holes separating $`\pi `$-phase shifted, antiferromagnetic regions. For $`x=0.12`$ ($`x\approx 1/8`$), the intensity of the charge and spin superlattice peaks is largest and $`T_c`$ is less than 5 K. As $`x`$ deviates from this value, the relative intensity of the magnetic superlattice peaks decreases and the superconducting transition temperature $`T_c`$ increases. High field magnetization studies provide evidence that in this material superconductivity can coexist with static (or quasi-static) stripe order. However, the fact that $`T_c`$ is a minimum where the superlattice peaks are most intense suggests that static stripe order competes with superconductivity. We are interested in understanding whether a $`t`$-$`J`$-like model can exhibit this type of behavior. In studies of $`n`$-leg, $`t`$-$`J`$ ladders we have previously found evidence for stripe formation. In particular, for $`n=`$ 3 and 4 legs we have found evidence for both stripes and pairing. 
These systems have open boundary conditions. However, in wider ladders ($`n=6`$ and $`n=8`$) with cylindrical boundary conditions, where the stripes close on themselves rather than having free ends, the stripes appeared to be more static and the pairing correlations were found to be suppressed. This suppression of the pairing correlations was also observed when an external potential was applied to further pin the stripes. If the formation of static stripes could be suppressed, one might hope to find enhanced pairing correlations. It is not clear whether the complete elimination of stripes or only a slight destabilization would be more favorable to pairing correlations. We have been investigating various interaction terms which could destabilize stripes. Here we focus on the effect of a next-near-neighbor diagonal hopping $`t^{\prime }`$. Effective hopping parameters have been evaluated from band structure calculations and finite CuO cluster calculations. For the hole-doped cuprates $`t^{\prime }`$ is found to be negative while for the electron-doped cuprates it is positive. Both $`t^{\prime }`$ and the one-electron hopping $`t^{\prime \prime }`$, which connects next-near-neighbor sites along the (0,1) or (1,0) axis, have been used in $`t`$-$`t^{\prime }`$-$`t^{\prime \prime }`$-$`J`$ models to fit ARPES data. In addition, Lanczos calculations by Tohyama and Maekawa on $`t`$-$`t^{\prime }`$-$`J`$ clusters and Monte Carlo calculations on $`t`$-$`t^{\prime }`$ Hubbard lattices show that $`t^{\prime }>0`$ tends to stabilize the commensurate $`(\pi ,\pi )`$ antiferromagnetic correlations. Recently, exact diagonalization and density-matrix renormalization group (DMRG) calculations on small clusters and four-leg ladders have found that $`t^{\prime }<0`$ destabilizes stripes. Furthermore, it was concluded that a small positive $`t^{\prime }`$ did not destabilize the stripes on these systems. Here we will consider the effect of $`t^{\prime }`$ on both open four-leg and cylindrical six-leg ladders. 
In addition to considering the effect of $`t^{\prime }`$ on stripe stability, we will measure its effect on pairing correlations. We find that stripes are destabilized for either sign of $`t^{\prime }`$, and that pairing is suppressed for $`t^{\prime }<0`$, and enhanced for $`t^{\prime }>0`$. This latter effect is surprising, since superconducting transition temperatures are generally higher for hole doped cuprates ($`t^{\prime }<0`$) than for electron doped ($`t^{\prime }>0`$). The $`t`$-$`t^{\prime }`$-$`J`$ Hamiltonian which we will study has the form $`H=-t{\displaystyle \underset{\langle ij\rangle s}{\sum }}`$ $`(c_{is}^+c_{js}+c_{js}^+c_{is})-t^{\prime }{\displaystyle \underset{\langle ij^{\prime }\rangle s}{\sum }}(c_{is}^+c_{j^{\prime }s}+c_{j^{\prime }s}^+c_{is})`$ (1) $`+J{\displaystyle \underset{\langle ij\rangle }{\sum }}(\stackrel{}{S}_i\stackrel{}{S}_j-{\displaystyle \frac{1}{4}}n_in_j)`$ (2) Here $`\langle ij\rangle `$ are near-neighbor sites, $`\langle ij^{\prime }\rangle `$ are diagonal next-near-neighbor sites, $`\stackrel{}{S}_i=\frac{1}{2}c_{is}^+\stackrel{}{\sigma }_{ss^{\prime }}c_{is^{\prime }}`$, $`n_i=c_{i\uparrow }^+c_{i\uparrow }+c_{i\downarrow }^+c_{i\downarrow }`$, and $`c_{is}^+(c_{is})`$ creates (destroys) an electron of spin $`s`$ at site $`i`$. No double occupancy is allowed. We will use DMRG calculations to explore the charge, spin, and pairing correlations on doped four and six leg ladders. The calculations reported below keep up to 1200 states per block, with truncation errors of about $`10^4`$, and from six to ten finite system sweeps. We have checked the inclusion of $`t^{\prime }`$ in our program by comparing the results for the rung hole density on a $`14\times 4`$ system with the results of Tohyama et al.; precise agreement was found. In a previous study of the 4-leg $`t`$-$`J`$ ladder, we found that four-hole diagonal domain walls formed as the doping increased. In Figure 1(a) and (b) we show the rung density $$n_r(\ell )=\underset{i=1}{\overset{4}{\sum }}n_{\ell i}$$ (3) versus $`\ell `$ for $`J/t=0.35`$ on a $`12\times 4`$ lattice with 8 holes and open boundary conditions. For $`t^{\prime }=0`$, we clearly see the formation of two domain walls, signaled by two broad peaks in $`n_r(\ell )`$.
As $`t^{\prime }/t`$ is varied, one clearly sees that the static domain wall structure is suppressed for either sign of $`t^{\prime }`$. For this same $`12\times 4`$ lattice, we have studied the pair-field correlation function $$D(\ell )=\langle \mathrm{\Delta }_{i+\ell }\mathrm{\Delta }_i^+\rangle $$ (4) with $`\mathrm{\Delta }_i^+`$ a pair creation operator which creates a singlet $`d_{x^2-y^2}`$ pair centered on the $`i^{\mathrm{th}}`$ site of the second leg. Figure 1(b) shows a plot of $`D(\ell )`$ versus $`\ell `$ for the $`12\times 4`$ ladder for $`J/t=0.35`$ with 8 holes and various values of $`t^{\prime }/t`$. As $`t^{\prime }/t`$ initially increases, the pairing correlations are enhanced, but as $`t^{\prime }/t`$ becomes greater than $`0.3`$, they are suppressed. They are suppressed for $`t^{\prime }`$ negative, with very strong suppression occurring for $`t^{\prime }\lesssim -0.2`$. Results for the charge density and spin structure of a $`12\times 6`$ lattice with $`J/t=0.5`$ and 8 holes are shown in Figure 2. Here we have taken cylindrical boundary conditions, i.e. periodic in the $`y`$-direction, open in the $`x`$-direction. In this case, for $`t^{\prime }/t=0`$, the holes form two transverse domains each containing 4 holes. The $`\pi `$-phase shifted antiferromagnetic regions which are separated by these domains are clearly visible in Figure 2 for $`t^{\prime }=0`$. The DMRG calculation has selected a particular spin order, breaking symmetry; as the number of states kept per block increases, the magnitude of this spin order decreases, and the exact ground state would have no net spin on any site. However, here the spin order serves to illustrate the underlying spin correlations in the exact ground state, which we expect to be a superposition of the broken symmetry state rotated to all possible directions. As $`t^{\prime }/t`$ increases, we again see a suppression of the charge order and in addition the $`\pi `$-phase shifted antiferromagnetic regions disappear. This is also true for $`t^{\prime }`$ negative.
For $`t^{\prime }=0.3`$, we see that Néel spin order, without any $`\pi `$ phase shifts, is now the broken symmetry state. As previously noted, Lanczos and Monte Carlo calculations indicated that a positive $`t^{\prime }`$ tended to stabilize the commensurate $`(\pi ,\pi )`$ antiferromagnetic correlations, which is consistent with our results. The rung density shown in Figure 3(a) provides a more quantitative display of the suppression of the charge domain walls. In this case, a finite magnitude of $`t^{\prime }`$ seems to be necessary to substantially reduce the charge density structure. The domain walls in $`L\times 6`$ ladders at $`t^{\prime }=0`$ are stable bound states of two hole pairs, and a finite change in the parameters of the system is needed to break them up. We believe that $`L\times 6`$ cylindrical systems have unusually stable domain walls, and that more generally a smaller value of $`|t^{\prime }|`$ would destabilize the stripes. Here, we see that the stripes are suppressed for $`t^{\prime }=0.2`$, and completely destabilized for $`t^{\prime }=0.3`$. Figures 3(c) and (d) show the pair-field correlations $`D(\ell )`$ versus $`\ell `$ for various values of $`t^{\prime }/t`$ for the $`12\times 6`$ ladder. We see that when the stripes are weakened by a positive $`t^{\prime }`$, pairing correlations are strongly enhanced. The optimal $`t^{\prime }`$ appears to be near $`t^{\prime }=0.2`$. Pairing is once again suppressed for negative $`t^{\prime }`$, even when the domain walls are destabilized. From a weak coupling point of view, our results on the effect of $`t^{\prime }`$ on pairing are surprising. In weak coupling, the effect of $`t^{\prime }<0`$ is to shift the van Hove singularity in the density of states away from half-filling, so that the singularity may occur near the Fermi level in a doped system. Thus, one might have expected to find an enhancement in pairing for $`t^{\prime }<0`$. However, in the $`t`$-$`J`$ model, we find a suppression of the pairing. In strong coupling, one can understand this effect.
Consider a pair of holes, and imagine we fix one hole and let the other hole hop around it. Consider the phase of the wavefunction of the second hole on the four sites next to the first hole. It appears that $`t^{\prime }<0`$ will directly favor a $`+-+-`$ $`d`$-wave phase pattern as the second hole hops around the first, whereas $`t^{\prime }>0`$ would favor the $`s`$-wave pattern $`++++`$. However, the actual phase of a pair is a relative phase between a system with $`N`$ holes and one with $`N+2`$ holes. If one considers a $`2\times 2`$ $`t`$-$`J`$ system, one finds that the 2-hole ground state has $`s`$-wave rotational symmetry, whereas the undoped state has $`d`$-wave rotational symmetry. The $`d`$-wave nature of the pairing comes from the difference in these rotational symmetries. Consequently, $`t^{\prime }<0`$, by suppressing the 2-hole $`++++`$ pattern, actually suppresses $`d`$-wave pairing, while $`t^{\prime }>0`$ can enhance it. Consider the $`2\times 2`$ system. The energy of the undoped system is independent of $`t^{\prime }`$; we find $`E(0)=-3J`$. The energy of the one hole system depends only weakly on $`t^{\prime }`$; for $`t^{\prime }`$ small, we find $`E(1)=-J-\frac{1}{2}(J^2+12t^2+4Jt^{\prime }+4t^{\prime 2})^{1/2}`$. For $`J=0.35`$, $`t=1`$, this varies with $`t^{\prime }`$ as $`E\approx -2.09087-0.1005t^{\prime }`$. The energy of the two-hole system, in contrast, depends strongly on $`t^{\prime }`$: $`E(2)=-J/2-t^{\prime }-\frac{1}{2}(32t^2+(J+2t^{\prime })^2)^{1/2}`$. The pair binding energy is defined as $$E_b=2E(1)-E(2)-E(0).$$ (5) The dependence of the pair binding energy on $`t^{\prime }`$ is dominated by $`E(2)`$, and we find that $`t^{\prime }>0`$ strongly enhances the pair binding. On larger systems, the detailed energetics are more complex, but a similar effect occurs. In Fig. 4, we show the energy per hole of several systems as a function of $`t^{\prime }`$. The systems allow us to compare the stability of paired states, striped states, and states with isolated holes.
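The $`2\times 2`$ plaquette energetics above can be checked numerically. The short Python sketch below evaluates the quoted expressions for $`E(0)`$, $`E(1)`$ and $`E(2)`$ — the overall signs and the 1/2 factor in $`E(2)`$ are our reconstruction of a garbled formula, not guaranteed to match the original — and verifies the quoted linearization of $`E(1)`$ and the enhancement of $`E_b`$ for $`t^{\prime }>0`$.

```python
import math

# 2x2 t-J plaquette energies, J = 0.35, t = 1 as in the text.
# The signs and the 1/2 factor in E2 are our reconstruction of the
# garbled formula, not taken verbatim from the original.
J, t = 0.35, 1.0

def E0():          # undoped plaquette
    return -3.0 * J

def E1(tp):        # one hole
    return -J - 0.5 * math.sqrt(J**2 + 12*t**2 + 4*J*tp + 4*tp**2)

def E2(tp):        # two holes (reconstructed expression)
    return -J/2 - tp - 0.5 * math.sqrt(32*t**2 + (J + 2*tp)**2)

def Eb(tp):        # pair binding energy, eq. (5)
    return 2*E1(tp) - E2(tp) - E0()

# Check the quoted linearization E(1) ~ -2.09087 - 0.1005 t'
slope = (E1(1e-6) - E1(-1e-6)) / 2e-6
print(E1(0.0), slope)      # ~ -2.09087 and ~ -0.1005
print(Eb(0.0), Eb(0.2))    # binding enhanced for t' > 0
```

With these expressions the binding energy indeed grows roughly linearly with $`t^{\prime }`$, because the $`-t^{\prime }`$ term in $`E(2)`$ dominates the weak $`t^{\prime }`$ dependence of $`E(1)`$.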
The first system is a single hole in an $`8\times 8`$ open system, with a staggered antiferromagnetic field of strength 0.1 on the edges to approximate the magnetic coupling to the rest of the system, which is assumed to be undoped. The second system is similar, but has two holes. We plot the energy difference between these systems and the same system without holes, divided by the number of holes. The third system is a $`16\times 6`$ system, with open boundary conditions, and staggered fields of magnitude 0.1 with a $`\pi `$ phase shift applied on the first and last chain. These boundary conditions favor the development of a stripe down the center of the ladder. Then we subtract the energy of an undoped $`16\times 6`$ system, also with staggered fields, but without the phase shift. We expect that finite size effects are not negligible, and these could shift the striped phase curve relative to the other two curves. However, we believe the general trends are reliable. That is, the striped system is lowest in energy near $`t^{\prime }=0`$, but becomes unstable as $`t^{\prime }`$ becomes less than $`-0.1`$, or as $`t^{\prime }`$ increases above a value slightly greater than $`0.0`$. Thus, the striped region is quite narrow as a function of $`t^{\prime }`$. This conclusion differs somewhat from that of Ref. , where it was found that stripes were enhanced for $`0<t^{\prime }<0.2`$, but were suppressed for larger values of $`t^{\prime }`$. For positive $`t^{\prime }`$, the new stable state has pairs of holes, as Tohyama et al. also found for $`t^{\prime }\sim 0.5`$. For $`t^{\prime }<-0.1`$, the near degeneracy between one and two holes indicates that the holes are not bound into pairs: instead, the stripes break up into quasiparticles. These observations are consistent with enhanced pairing correlations for $`t^{\prime }>0`$, and suppressed pairing correlations for $`t^{\prime }<0`$. Note that $`N`$-leg, $`t`$-$`J`$ ladders provide perhaps the simplest models which exhibit many phenomenologically similar characteristics to those observed in the cuprates.
Here, for two different $`t`$-$`t^{\prime }`$-$`J`$ ladders, we find that a diagonal, next-near-neighbor hopping suppresses the formation of static stripes and that for $`t^{\prime }>0`$ this can lead to an enhancement of the $`d_{x^2-y^2}`$ pairing correlations, while for $`t^{\prime }<0`$ we find a suppression of pairing. We thank M.P.A. Fisher, S.A. Kivelson, A. Millis, S. Sachdev, and E. Dagotto for interesting discussions. D.J. Scalapino would like to acknowledge the Aspen Center for Physics where the interplay of stripes and superconductivity in these models was discussed. S.R. White acknowledges support from the NSF under grant # DMR98-70930 and D.J. Scalapino acknowledges support from the NSF under grant # DMR95-27304.
## 1 Introduction Following the publication of the first paper on the advection dominated accretion flows by Narayan and Yi (1994) over one hundred papers appeared on this subject, with ever more sophisticated physics and ever more sophisticated treatment of geometry (cf. Gammie and Popham 1998, and references therein). However, there is no fully two dimensional treatment published so far<sup>*</sup> (<sup>*</sup>published 1998, Acta Astronomica, 48, 667), and there is no clear link between the recent papers and those written about two decades ago on the theory of thick disks (e.g. Jaroszyński, Abramowicz and Paczyński 1980, Paczyński and Wiita 1980, and references therein). The purpose of this paper is to present a toy model of a disk accreting onto a black hole but not radiating, i.e. advection dominated, presented in the spirit of the early 1980s, which seems to be simpler and more transparent than the spirit dominating the late 1990s. ## 2 Thin Pseudo-Newtonian Disk I shall use the pseudo-Newtonian gravitational potential (Paczyński and Wiita 1980): $$\mathrm{\Psi }=-\frac{GM}{R-R_g},\mathrm{\Psi }^{\prime }\equiv \frac{d\mathrm{\Psi }}{dR}=\frac{GM}{(R-R_g)^2},R_g\equiv \frac{2GM}{c^2},$$ $`(1)`$ which has the property that a test particle has a marginally stable orbit at $`R_{ms}=3R_g`$, and a marginally bound orbit at $`R_{mb}=2R_g`$, just as is the case with particles orbiting a Schwarzschild black hole. $`R`$ is the spherical radius. Throughout this paper cylindrical coordinates $`(r,z)`$ will be used, with $`R=\left(r^2+z^2\right)^{1/2}`$. In the following we consider a thin accretion disk fully supported against gravity by the centrifugal acceleration. The disk is in the equatorial plane of the coordinate system.
It follows the motion of test particles on circular orbits, with the rotational velocity $`v(r)`$, angular velocity $`\mathrm{\Omega }(r)`$, specific angular momentum $`j(r)`$, and total specific energy $`e(r)`$ given as $$v=\left(r\mathrm{\Psi }^{\prime }\right)^{1/2}=\left(\frac{GM}{r}\right)^{1/2}\left[\frac{r}{r-r_g}\right],$$ $`(2a)`$ $$\mathrm{\Omega }=\frac{v}{r}=\left(\frac{\mathrm{\Psi }^{\prime }}{r}\right)^{1/2}=\left(\frac{GM}{r^3}\right)^{1/2}\left[\frac{r}{r-r_g}\right],$$ $`(2b)`$ $$j=vr=\left(r^3\mathrm{\Psi }^{\prime }\right)^{1/2}=\left(GMr\right)^{1/2}\left[\frac{r}{r-r_g}\right],$$ $`(2c)`$ $$e=\mathrm{\Psi }+\frac{v^2}{2}=-\left(\frac{GM}{2r}\right)\left[\frac{(r-2r_g)r}{(r-r_g)^2}\right],$$ $`(2d)`$ where we adopted $`r_g\equiv R_g`$. It follows that $$\frac{de}{dr}=\mathrm{\Omega }\frac{dj}{dr}.$$ $`(3)`$ The inner edge of a thin accretion disk is at $$r_{in}=r_{ms}=3r_g,$$ $`(4)`$ where the binding energy and the specific angular momentum reach their minima: $$e_{ms}=-\frac{c^2}{16},$$ $`(5)`$ $$j_{ms}=1.5\times 6^{1/2}\times \frac{GM}{c}\approx 3.674\frac{GM}{c}.$$ $`(6)`$ The matter falls freely into the black hole once it has crossed $`r_{ms}`$, conserving its angular momentum and total energy. The total thin disk luminosity is given with the formula: $$L_d=\dot{M}e_{in}=\dot{M}e_{ms}=\left(-\dot{M}\right)\frac{c^2}{16},(\mathrm{thin}\mathrm{disk}),$$ $`(7)`$ where I adopt a convention that $`\dot{M}<0`$ for the accretion flow. In a steady state accretion of a thin disk the equations of mass and angular momentum conservation may be written as $$\dot{M}=const.$$ $`(8a)`$ $$\dot{J}=\dot{M}j+g=const.$$ $`(8b)`$ where $`g`$ is the torque acting between two adjacent rings in the disk.
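The circular-orbit quantities of eqs. (2)-(6) are easy to verify numerically. The following sketch uses units $`G=M=c=1`$ (so $`r_g=2`$) — the unit choice is ours, not the paper's — and checks the values at $`r_{ms}=3r_g`$ together with the identity of eq. (3).

```python
import math

# Pseudo-Newtonian circular-orbit quantities, eqs. (2a)-(2d),
# in units G = M = c = 1, so r_g = 2.
rg = 2.0

def v(r):     return math.sqrt(1.0/r) * r/(r - rg)
def Omega(r): return math.sqrt(1.0/r**3) * r/(r - rg)
def j(r):     return math.sqrt(r) * r/(r - rg)
def e(r):     return -(1.0/(2*r)) * (r - 2*rg)*r/(r - rg)**2

r_ms = 3*rg
print(e(r_ms))             # -1/16, eq. (5)
print(j(r_ms))             # 1.5*sqrt(6) ~ 3.674, eq. (6)

# eq. (3): de/dr = Omega * dj/dr, checked by central differences
r, h = 10.0, 1e-6
de = (e(r+h) - e(r-h)) / (2*h)
dj = (j(r+h) - j(r-h)) / (2*h)
print(de, Omega(r)*dj)     # the two agree
```

The same functions also confirm the ratio $`v_{mb}^2/v_{ms}^2=8/3`$ quoted later in the discussion.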
With no torque at the inner edge of the disk at $`r_{in}=r_{ms}`$ we have $$g=\left(-\dot{M}\right)\left(j-j_{in}\right)=\left(-\dot{M}\right)\left(j-j_{ms}\right).$$ $`(9)`$ The rate of energy flow in the disk is given as $$\dot{E}=\dot{M}e+g\mathrm{\Omega }.$$ $`(10)`$ This is not constant, as the disk must radiate energy dissipated by viscous stresses in order to remain thin. The energy balance equation may be written as $$2F\times 2\pi r=-\frac{d\dot{E}}{dr}=g\left(-\frac{d\mathrm{\Omega }}{dr}\right)=\left(-\dot{M}\right)\left(j-j_{in}\right)\left(-\frac{d\mathrm{\Omega }}{dr}\right).$$ $`(11)`$ where $`F`$ is the energy radiated per unit disk area from each of its two surfaces. The eq. (11) may be integrated from $`r_{ms}`$ to infinity to obtain the same result as that given with the eq. (7). Note, that disk viscosity was not specified. All we needed were the conservation laws and the assumption of a steady state accretion of a geometrically thin disk. Small geometrical thickness implied that no significant internal energy could be stored within the disk. ## 3 Thick Pseudo-Newtonian Disk I shall consider now a geometrically thick, axisymmetric disk, with the surface given with the relation $`z_s(r)`$, where $`(r,z)`$ are the two cylindrical coordinates, and we have $$R=\left(r^2+z^2\right)^{1/2}.$$ $`(12)`$ The disk is assumed to be in a hydrostatic equilibrium. This implies that the three vectors have to balance at its surface: gravitational acceleration, centrifugal acceleration, and the pressure gradient divided by gas density. This implies that the angular velocity $`\mathrm{\Omega }_s`$ at the disk surface is given as (cf. Jaroszyński et al.
1980, Paczyński and Wiita 1980): $$\mathrm{\Omega }_s^2=\left(\frac{\mathrm{\Psi }_s^{\prime }}{R_s}\right)\left(\frac{z_s}{r}\frac{dz_s}{dr}+1\right),$$ $`(13a)`$ which is equivalent to $$\frac{de_s}{dr}=\mathrm{\Omega }_s\frac{dj_s}{dr},$$ $`(13b)`$ where the subscript ‘s’ indicates that the corresponding quantities are defined at the disk surface. A thick disk has a large radial pressure gradient; it remains in hydrostatic equilibrium for $`r>r_{in}`$, and it has a cusp at $`r_{in}`$. The matter falls freely into the black hole inwards of $`r_{in}`$, which is located between the marginally stable and marginally bound orbits, i.e. $$r_{mb}<r_{in}<r_{ms}$$ $`(14)`$ where a no torque inner boundary condition is applied. I shall not analyze the transition from a sub-sonic accretion flow at $`r>r_{in}`$ to a supersonic flow at $`r<r_{in}`$ (cf. Loska 1982). The total disk luminosity is given with the formula similar to eq. (7): $$L_d=\dot{M}e_{in}=\left(-\dot{M}\right)\frac{GM(r_{in}-2r_g)}{2(r_{in}-r_g)^2},(\mathrm{thick}\mathrm{disk}).$$ $`(15)`$ The specific binding energy at $`r_{in}`$ has the range $$0>e_{in}>-\frac{c^2}{16}\mathrm{for}r_{mb}<r_{in}<r_{ms}.$$ $`(16)`$ Note, that while the total luminosity of a thick disk may be larger than that of a thin disk, the energy radiated per each gram of matter accreted into the black hole is smaller in the thick disk case. ## 4 Thick Advection Dominated Disk A thin accretion disk has to radiate in order to remain thin. If the disk is not radiating away the energy which is generated by viscous stresses then it must be thick in order to accommodate this energy in its interior. Let us consider a disk which is thin for $`r>r_{out}`$, but is thick for $`r_{out}>r>r_{in}`$. Note, that the disk is also thin at its cusp, i.e. at $`r=r_{in}`$. Our new disk radiates energy only where it is thin, i.e. for $`r>r_{out}`$, where the disk surface brightness $`F`$ is given with the eq.
(11), and the total disk luminosity is given by the formula similar to the eq. (7) and (15): $$L_d=\dot{M}e_{in}.$$ $`(17)`$ All this luminosity is radiated by the thin disk at $`r>r_{out}`$, and none is radiated by the thick disk at $`r_{out}>r>r_{in}`$, by assumption. The structure of the thick disk, extending from $`r_{in}`$ to $`r_{out}`$ is of interest for us, as by assumption the disk is 100% advection dominated in this range of radii, i.e. the mass, angular momentum, and energy entering it at $`r_{out}`$ must come out of it at $`r_{in}`$: $$\dot{M}_{in}=\dot{M}_{out}.$$ $`(18a)`$ $$\dot{J}_{in}=\left(\dot{M}j+g\right)_{in}=\left(\dot{M}j+g\right)_{out}=\dot{J}_{out},$$ $`(18b)`$ $$\dot{E}_{in}=\left(\dot{M}e+g\mathrm{\Omega }\right)_{in}=\left(\dot{M}e+g\mathrm{\Omega }\right)_{out}=\dot{E}_{out}$$ $`(18c)`$ with $`j`$ and $`\mathrm{\Omega }`$ having their ‘Keplerian’ values (cf. eqs. 2) at the two ends, as the disk is thin at both ends, and there is no torque at $`r_{in}`$, i.e. $`g_{in}=0`$. The torque at $`r_{out}`$ follows from the conservation of angular momentum (18b): $$g_{out}=\left(-\dot{M}\right)\left(j_{out}-j_{in}\right),$$ $`(19)`$ which, together with the energy conservation law gives us: $$e_{out}-e_{in}=\left(j_{out}-j_{in}\right)\mathrm{\Omega }_{out}.$$ $`(20)`$ Note, that all quantities in the eq. (20) are unique functions of either $`r_{in}`$ or $`r_{out}`$ (cf. eqs. 2), and therefore the eq. (20) gives the relation between $`r_{in}`$ and $`r_{out}`$. If the outer radius of the advection dominated disk is specified, then the inner radius of the disk can be calculated with the eq. (20), and the eq. (17) gives the total luminosity of the thin disk which is assumed to extend from $`r_{out}`$ to infinity. The relation between $`r_{out}`$ and $`r_{in}`$ is shown in Figure 1. The larger is $`r_{out}`$, the smaller is $`r_{in}`$, and the less energy is radiated per one gram of accreted matter.
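Both the thin-disk luminosity integral below eq. (11) and the $`r_{in}(r_{out})`$ relation of eq. (20) can be checked numerically. The sketch below uses our unit choice $`G=M=c=1`$ ($`r_g=2`$, $`\dot{M}=-1`$); the Simpson quadrature and the bisection solver are our illustrative methods, not the paper's.

```python
import math

# Units G = M = c = 1, so r_g = 2 (our choice, not the paper's).
rg, r_ms = 2.0, 6.0

def e(r):      return -(1.0/(2*r)) * (r - 2*rg)*r/(r - rg)**2   # eq. (2d)
def j(r):      return math.sqrt(r) * r/(r - rg)                  # eq. (2c)
def Omega(r):  return math.sqrt(1.0/r**3) * r/(r - rg)           # eq. (2b)
def dOmega(r): return -(3*r - rg) / (2 * r**1.5 * (r - rg)**2)   # dOmega/dr

# Check 1: integrate the thin-disk flux of eq. (11) from r_ms to
# infinity (substitution u = 1/r, composite Simpson) and recover
# the total luminosity of eq. (7), (-Mdot) c^2/16 = 1/16.
def integrand(u):
    r = 1.0/u
    return (j(r) - j(r_ms)) * (-dOmega(r)) / u**2

N, a, b = 200000, 1e-12, 1.0/r_ms
h = (b - a)/N
s = integrand(a) + integrand(b)
for k in range(1, N):
    s += (4 if k % 2 else 2) * integrand(a + k*h)
L_d = s*h/3
print(L_d)                 # ~ 0.0625 = c^2/16

# Check 2: solve eq. (20) for r_in given r_out, by bisection
# between r_mb = 2 r_g and r_ms = 3 r_g.
def r_in(r_out):
    def f(r):
        return e(r_out) - e(r) - (j(r_out) - j(r)) * Omega(r_out)
    lo, hi = 2*rg + 1e-9, 3*rg
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if f(lo)*f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

print(r_in(100*rg)/rg)     # ~ 2.0260 for r_out = 100 r_g
```

For $`r_{out}=100r_g`$ the solver reproduces the value $`r_{in}\approx 2.026r_g`$ quoted for the toy model in the next section.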
The variation of the inner thick disk radius $`r_{in}`$, and the binding energy at the inner thick disk radius $`e_{in}`$ with the outer thick disk radius $`r_{out}`$ is shown in Fig. 1. Notice, that for very large $`r_{out}`$ we have asymptotically $$\frac{r_{in}}{r_g}\approx 2+3\frac{r_g}{r_{out}},\frac{e_{in}}{e_{ms}}\approx 12\frac{r_g}{r_{out}},\mathrm{for}\frac{r_{out}}{r_g}\gg 1.$$ $`(21)`$ We also have $$e_{out}\approx -\frac{c^2}{4}\frac{r_g}{r_{out}},e_{in}\approx 3e_{out}\approx -\frac{3c^2}{4}\frac{r_g}{r_{out}},\mathrm{for}\frac{r_{out}}{r_g}\gg 1.$$ $`(22)`$ So far we used only conservation laws to constrain our thick advective disk. If we want to find the disk shape we must make some additional assumptions. There are two general inequalities which must be satisfied by the matter at the disk surface (cf. Jaroszyński et al. 1980, Paczyński and Wiita 1980): $$\frac{dj_s}{dr}>0,\frac{d\mathrm{\Omega }_s}{dr}<0,$$ $`(23)`$ supplemented with the condition of hydrostatic equilibrium at the thick disk surface, as expressed with the eq. (13b), and the conditions that the disk must be geometrically thin at $`r_{in}`$ and $`r_{out}`$. There is a lot of freedom in choosing a thick disk structure that satisfies all the conditions listed in the previous paragraph. For our toy model we adopted $$j_s=j_{in}\left[1+b\left(\frac{x_s}{x_{in}}-1\right)^a+b\left(\frac{x_s}{x_{in}}-1\right)^{3a}\right]^{1.5/a},$$ $`(24)`$ and $$e_s=e_{in}+\int _{r_{in}}^{r_s}\mathrm{\Omega }_s\frac{dj_s}{dr}dr=e_{in}+\int _{r_{in}}^{r_s}\frac{1}{2r^2}\frac{dj_s^2}{dr}dr.$$ $`(25)`$ Thin disk conditions at $`r_{in}`$ are satisfied automatically with the eqs. (24) and (25), and we have $$j_{in}=\frac{\left(GMr_{in}^3\right)^{1/2}}{r_{in}-r_g},e_{in}=-\frac{GM(r_{in}-2r_g)}{2(r_{in}-r_g)^2}.$$ $`(26)`$ The parameters $`a`$ and $`b`$ have to be adjusted so that thin disk conditions are satisfied at $`r_{out}`$, where the eqs.
(24) and (25) must give $$j_{out}=\frac{\left(GMr_{out}^3\right)^{1/2}}{r_{out}-r_g},e_{out}=-\frac{GM(r_{out}-2r_g)}{2(r_{out}-r_g)^2}.$$ $`(27)`$ As an example a toy disk model with $`r_{out}=100r_g`$ is shown in Fig. 2 and Fig. 3. The inner disk radius $`r_{in}=2.026031\mathrm{}r_g`$ was obtained from the condition given with the eq. (20). The eqs. (27) were satisfied for $`a=1.32048586889\mathrm{}`$ and $`b=0.000000442466655\mathrm{}`$. ## 5 Discussion The shape of our toy disk as shown in Fig. 2 is an artifact of the strong assumptions made in this paper. In particular, a rapid transition from a thin disk to a thick disk at $`r_{out}`$ is a direct consequence of the assumption that 100% of all energy dissipated locally is radiated away locally for $`r>r_{out}`$, while none is radiated away for $`r<r_{out}`$. In any realistic disk there will be partial radiation at all radii, and the variation of the efficiency is likely to change gradually, with no abrupt changes in the disk thickness. While the detailed shape of the thick disk must be uncertain as long as we have no quantitative understanding of disk viscosity, the formation of a thick disk and pushing its inner radius towards the marginally bound orbit is a very general property. It was noticed two decades ago with supercritical accretion disks of Jaroszyński et al. (1980) and Paczyński and Wiita (1980). In our toy model the inability of the disk to radiate energy dissipated within $`r_{out}>r>r_{in}`$ forces the disk to become thick, and pushes its inner cusp towards $`r_{mb}`$, lowering the efficiency with which rest mass is converted into radiation. The farther out the $`r_{out}`$ is assumed to be the closer $`r_{in}`$ is pushed towards the $`r_{mb}`$ in order to lower the efficiency.
While we retain the term ‘advective’ to describe our thick disk, it should be stressed that as we require the disk to be thin at its inner cusp at $`r_{in}`$, there is no advection of any heat into the black hole, as all enthalpy of the thick disk has been used up to press its inner radius towards the $`r_{mb}`$. However, the kinetic energy of thick disk matter at $`r_{in}`$ is much larger than is the kinetic energy of a thin disk, which must have its inner radius located at $`r_{ms}`$. In the pseudo-Newtonian potential the ratio of the two is $`v_{mb}^2/v_{ms}^2=8/3`$ (cf. eq. 2a). There is a somewhat paradoxical aspect of our toy model. Some kind of viscosity has to reduce angular momentum from $`j_{out}`$ at the outer disk boundary to $`j_{in}`$ at its inner boundary, and this must generate either heat or some other form of internal energy which puffs up the disk, and it would seem that this energy has to be advected through $`r_{in}`$ into the black hole. Figure 4 shows the distribution of angular momentum for our thick disk model (solid line), and for the ‘Keplerian’ angular momentum given with the eq. (2c) (dashed line). It is clear that thick disk angular momentum is almost constant for $`r<10r_g`$, and that all dissipation takes place only in the outer parts of the thick disk. All the same the entropy must be higher at $`r_{in}`$ than it is at $`r_{out}`$. However, high entropy does not have to imply high internal energy if the gas density is low at $`r_{in}`$. Obviously, our requirement for the disk to be thin at $`r_{in}`$ is ad hoc, motivated by our desire to have as simple structure as possible. Our toy model may require uninterestingly low accretion rate to satisfy all the conditions that are imposed at its inner boundary: hydrostatic equilibrium, low geometrical thickness, high entropy and low internal energy. It is very likely that at the accretion rate of any interest some of these conditions are broken. 
It is likely that the cusp at $`r_{in}`$ opens up considerably, the speed of sound is not negligible compared to rotational velocity, and the transonic flow carries a non trivial amount of internal energy into the black hole (Loska 1982). However, the conservation laws do not require the advected thermal energy to be large, as demonstrated by our toy model. A generic feature of any thick disk is the formation of a narrow funnel along the rotation axis. It is far from clear how realistic the presence of the funnel is, as some instability might fill it in, and make the accretion nearly spherical near the black hole. On the other hand, if the funnel forms and lasts, it may help collimate a jet-like outflow. Unfortunately, our lack of quantitative understanding of viscous processes in accretion flows makes it impossible to prove what topology of solutions is realistic under which conditions. The main virtue of the toy model presented in this paper is the set of assumptions that was made; this set is very different from the assumptions which are used in the recently booming industry of advection dominated accretion flows. While the assumptions adopted in this paper are ad hoc, so are the assumptions adopted in any thick disk models. In particular, there is nothing ‘natural’ about the popular assumption that the accretion flow is self-similar. Another popular assumption, a constant ‘alpha’ parameter, is ad hoc as well. It is useful to read the old paper by Galeev, Rosner and Vaiana (1979) to realize that the very concept of the ‘alpha’ parameter is ad hoc. Disk properties which depend on any ad hoc assumptions should not be taken seriously. Acknowledgments. It is a pleasure to acknowledge useful comments by the anonymous referee which helped to improve the discussion of the toy model. Dr. J.-P. Lasota pointed out the errors in the original form of eq. (24) and the numerical values of constants ‘a’ and ‘b’; I am very grateful to him for his help. ## REFERENCES * Galeev, A.
A., Rosner, R., and Vaiana, G. S. 1979, Astrophys. J., 229, 318. * Jaroszyński, M., Abramowicz, M. A., and Paczyński, B. 1980, Acta Astron., 30, 1. * Loska, Z. 1982, Acta Astron., 32, 13. * Narayan, R., and Yi, I. 1994, Astrophys. J., 428, L13. * Paczyński, B., and Wiita, P. 1980, Astron. Astrophys., 88, 23. * Popham, R., and Gammie, C. F. 1998, Astrophys. J., 504, 419.
## 1 Introduction Over the last several decades, experiments have revealed a striking generational pattern of quark and charged lepton masses and mixings. In each charged sector there is a strong hierarchy of mass eigenvalues between the three generations, $`m_3\gg m_2\gg m_1`$, and the three angles describing mixing between the left-handed charge 2/3 quarks and charge $`-1/3`$ quarks are all small. The origin of this pattern, and of the precise values of the flavor observables, has been greatly debated, with several diverse approaches and very many competing theories. Despite this diversity, a common theme can be identified: the fermion masses are to be understood in an expansion, in which the leading order term for each charged sector has the form $$m^{(0)}=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& A\end{array}\right).$$ (1) This gives the leading order results $`m_2=m_1=0`$ with $`m_3=A`$, and vanishing mixing angles to the third generation. Indeed, for the charged sectors it has seemed self evident that this is the leading order structure, and the debate has centered on higher order terms in the expansion. The Super-Kamiokande measurements of the atmospheric neutrino fluxes cast considerable doubt on (1) as the correct leading order form, at least in the lepton sector. These measurements are of great importance not just because they provide strong evidence for neutrino masses: they may also fundamentally change our view of the pattern of flavor symmetry breaking. The interpretation of this atmospheric $`\nu `$ flux data in terms of neutrino oscillations implies a large mixing angle ($`\theta >32^{\circ }`$) between $`\nu _\mu `$ and some other neutrino state, which could have large $`\nu _\tau `$ or singlet neutrino components, but only a small $`\nu _e`$ component. Is it possible to reconcile this observation with the leading order form (1)?
This issue is especially important in unified theories, where relations are expected between the textures for the various charged sectors. We are aware of three possible resolutions, each of which can be criticized: * In a three generation theory, with each generation containing a right-handed neutrino, it is possible to write down textures for charged leptons, ($`m_E`$), Dirac neutrino masses, ($`m_{LR}`$), and right-handed Majorana masses, ($`m_{RR}`$), which all reduce to (1) at leading order, but which give a leading order form to $`m_{LL}=m_{LR}m_{RR}^{-1}m_{LR}^T`$ which is very different from (1), and has dominant terms giving $`\theta _{\mu \tau }\sim O(1)`$. However, for this to happen the 23 and 33 entries of $`m_{LL}`$ need to be comparable, and since they arise from different terms in $`m_{LR}`$ and $`m_{RR}`$ the large value for $`\theta _{\mu \tau }`$ appears to be accidental. * In a three generation theory, even if both $`m_E`$ and $`m_{LL}`$ have the leading order form of (1), the 23 entries may not be much smaller than the 33 entries, generating a significant $`\theta _{\mu \tau }`$. For example, in this conventional hierarchical scheme, the ratios of eigenvalues in the charged and neutral sectors suggest charged and neutral contributions to $`\theta _{\mu \tau }`$ of about $`14^{\circ }`$ and $`18^{\circ }`$ respectively. Provided the relative sign is such that these contributions add, large $`\mu `$-$`\tau `$ mixing can result. This is an important observation, because it shows that the conventional picture, where all textures have the leading order form of (1), is not excluded by the Super-Kamiokande data. However, the data does prefer an even bigger angle: the conventional picture is now disfavored.<sup>1</sup> One might try to argue that the conventional picture could even give $`45^{\circ }`$ mixing if the hierarchy between the two $`\mathrm{\Delta }m^2`$ is reduced.
This is permissible if one of the solar neutrino experiments, or the standard solar model, is incorrect. However, in this case the 23 entry in $`m_{LL}`$ is no longer small enough to be considered subleading. * In a theory with more than three light neutrino states it may be that (1) gives the correct leading order neutrino mass terms between the 3 left-handed states, but there is some additional mass term coupling $`\nu _\mu `$ to a light singlet state leading to large mixing between these states. Such schemes are certainly non-minimal, and must answer three questions. Why is there a singlet state? Why is it so light? Why is it coupled to $`\nu _\mu `$ rather than to $`\nu _\tau `$ or $`\nu _e`$? Furthermore, during big bang nucleosynthesis the fourth state is kept in thermal equilibrium by oscillations, and the resulting extra contribution to the energy density is disfavored by observations which allow the primordial abundances of D and <sup>4</sup>He to be inferred. In view of these criticisms of the conventional leading order texture (1), in this paper we study an alternative, straightforward and direct interpretation of the data: in a three generation theory large $`\theta _{\mu \tau }`$ arises because $`m_{LL}`$ and/or $`m_E`$ have a leading order form which differs from (1). In the bulk of this paper, we perform an analysis to find all possible leading order textures for $`(m_E,m_{LL})`$, subject to a simplicity assumption, such that there is a hierarchy of neutrino mass splittings: $`\mathrm{\Delta }m_{23}^2\gg \mathrm{\Delta }m_{12}^2`$ as preferred by atmospheric and solar neutrino data.<sup>2</sup> In , a texture analysis is done to find the possible leading-order forms for $`m_{LL}`$, in the charge-diagonal basis, that have either maximal mixing for $`\theta _{23}`$ alone or maximal mixing for both $`\theta _{23}`$ and $`\theta _{12}`$.
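The seesaw mechanism invoked in the first resolution above can be illustrated with a small numerical toy. In the sketch below — all numbers are hypothetical and chosen purely for illustration — a hierarchical Dirac matrix $`m_{LR}`$ combined with an off-diagonal 2-3 block in $`m_{RR}`$ yields an $`m_{LL}`$ dominated by its 2-3 off-diagonal entry, i.e. large $`\nu _\mu `$-$`\nu _\tau `$ mixing; this is one possible mechanism, not the specific textures of the references.

```python
import numpy as np

# Toy seesaw illustration (all numbers hypothetical): a hierarchical
# Dirac mass m_LR combined with an off-diagonal Majorana 2-3 block in
# m_RR gives m_LL = m_LR m_RR^{-1} m_LR^T whose dominant entries sit
# in the 2-3 off-diagonal, i.e. large nu_mu - nu_tau mixing.
eps, M = 0.05, 1.0e14          # illustrative scales only
m_LR = np.diag([0.0, eps, 1.0])
m_RR = np.array([[M,   0.0, 0.0],
                 [0.0, 0.0, M  ],
                 [0.0, M,   0.0]])

m_LL = m_LR @ np.linalg.inv(m_RR) @ m_LR.T

# Mixing angle of the symmetric 2-3 block
block = m_LL[1:, 1:]
theta = 0.5 * np.arctan2(2*block[0, 1], block[1, 1] - block[0, 0])
print(np.degrees(theta))       # 45 degrees: maximal mu-tau mixing
```

Here the diagonal entries of the 2-3 block of $`m_{LL}`$ vanish, so the mixing is maximal regardless of how hierarchical $`m_{LR}`$ is; whether the resulting mass spectrum is acceptable is a separate question, which is the criticism raised in the text.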
## 2 Texture Analysis: Rules

In theories with three light neutrinos, the leading order, real, diagonal mass matrices consistent with $`\mathrm{\Delta }m_{12}^2\ll \mathrm{\Delta }m_{23}^2`$ are $$\overline{m}_{LL}^I=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& \alpha \end{array}\right)\overline{m}_{LL}^{II}=\left(\begin{array}{ccc}\alpha & 0& 0\\ 0& \alpha & 0\\ 0& 0& 0\end{array}\right)\overline{m}_{LL}^{III}=\left(\begin{array}{ccc}\alpha & 0& 0\\ 0& \alpha & 0\\ 0& 0& \beta \end{array}\right)\overline{m}_{LL}^{IV}=\left(\begin{array}{ccc}\alpha & 0& 0\\ 0& \alpha & 0\\ 0& 0& \alpha \end{array}\right)$$ (2) for the neutrinos, and $$\overline{m}_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& \gamma \end{array}\right)$$ (3) for the charged leptons. In $`\overline{m}_{LL}^{III}`$, $`\alpha `$ and $`\beta `$ are of the same order but not equal. The diagonal mass matrices are related to $`m_{LL}`$ and $`m_E`$, the mass matrices in the flavor basis, by unitary transformations: $$m_{LL}=V_\nu ^{}\overline{m}_{LL}V_\nu ^{}m_E=V_{E_L}\overline{m}_EV_{E_R}^{}$$ (4) The leptonic mixing matrix $`V=V_{E_L}^{}V_\nu `$ then relates the neutrino weak and mass eigenstates according to $$\nu _{e_i}=V_{ij}\nu _j$$ (5) and can be parametrized by $$V=R_{23}(\theta _{23})R_{13}(\theta _{13})\left(\begin{array}{ccc}1& 0& 0\\ 0& e^{i\beta }& 0\\ 0& 0& 1\end{array}\right)R_{12}(\theta _{12})\left(\begin{array}{ccc}1& 0& 0\\ 0& e^{i\alpha _1}& 0\\ 0& 0& e^{i\alpha _2}\end{array}\right).$$ (6) If $`\mathrm{\Delta }m_{23}^2>2\times 10^{-3}`$ eV<sup>2</sup>, then results from the Chooz experiment require $`\theta _{13}<13^{}`$ . In fact, even if $`\mathrm{\Delta }m_{23}^2<2\times 10^{-3}`$ eV<sup>2</sup>, fits to the Super-Kamiokande atmospheric data (for $`\mathrm{\Delta }m_{12}^2\ll \mathrm{\Delta }m_{23}^2`$) alone restrict $`\theta _{13}<20^{}`$ .
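As a quick numerical illustration (a sketch added here, not part of the original analysis; the helper names are ours), the parametrization of the mixing matrix can be built explicitly from rotation matrices. With the 1-3 rotation omitted, the $`V_{e3}`$ element vanishes identically, which is the statement $`\theta _{13}=0`$:

```python
import numpy as np

def R(i, j, theta, n=3):
    """Rotation by theta in the (i, j) plane (0-indexed)."""
    m = np.eye(n, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    m[i, i] = c; m[j, j] = c
    m[i, j] = s; m[j, i] = -s
    return m

def mixing_matrix(theta23, theta12, alpha1=0.0, alpha2=0.0):
    """V = R23 R12 diag(1, e^{i a1}, e^{i a2}): the theta13 = 0 form."""
    phases = np.diag([1.0, np.exp(1j * alpha1), np.exp(1j * alpha2)])
    return R(1, 2, theta23) @ R(0, 1, theta12) @ phases

V = mixing_matrix(np.pi / 4, 0.1)              # maximal atmospheric mixing
assert np.allclose(V @ V.conj().T, np.eye(3))  # unitarity
assert abs(V[0, 2]) < 1e-12                    # theta13 = arcsin|V_e3| = 0
print(abs(V[1, 2])**2)                         # |V_mu3|^2 = sin^2(theta23) ≈ 0.5
```

The angle and phase values above are arbitrary sample inputs, chosen only to exercise the unitarity and $`V_{e3}=0`$ checks.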
In light of these constraints we will assume that the leading order contribution to $`\theta _{13}`$ vanishes, giving $$V=R_{23}(\theta _{23})R_{12}(\theta _{12})\left(\begin{array}{ccc}1& 0& 0\\ 0& e^{i\alpha _1}& 0\\ 0& 0& e^{i\alpha _2}\end{array}\right),$$ (7) with $`\theta _{23}`$ of order unity, as suggested by Super-Kamiokande results. Our aim is to perform a systematic search for leading order leptonic mass matrices $`m_{LL}`$ and $`m_E`$ that have the following features: * Diagonalizing them gives $`\overline{m}_E`$ of (3) for the charged leptons and one of the $`\overline{m}_{LL}`$’s of (2) for the neutrinos. * They produce a leptonic mixing matrix that can be parametrized as in (7), with $`\theta _{23}\sim 1`$. * Their forms offer the hope of a simple explanation in terms of flavor symmetries. Because we are particularly interested in leading order $`m_{LL}`$ and $`m_E`$ that can be simply understood using flavor symmetries, we constrain their forms by allowing only the following exact relations between non-vanishing elements: * They may be equal up to a phase. * They may be related so as to give a vanishing determinant or sub-determinant. The latter class of relations is allowed because, as discussed in , vanishing determinants arise naturally when heavy particles are integrated out, as in the seesaw mechanism. As an illustration of how these rules are used, consider applying a (2-3) rotation, first on $`\overline{m}_{LL}^I`$, and second on $`\overline{m}_{LL}^{II}`$. In the first case we get a neutrino mass matrix of the form $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& \frac{B^2}{A}& B\\ 0& B& A\end{array}\right).$$ (8) Our rules allow this matrix because the relation among elements yields a vanishing sub-determinant. In the second case the transformation gives $$m_{LL}=\left(\begin{array}{ccc}\frac{B^2}{A}+A& 0& 0\\ 0& \frac{B^2}{A}& B\\ 0& B& A\end{array}\right)$$ (9) (ignoring possible phases).
This matrix is not allowed because the relation between the 11, 22, and 33 entries is not essential for the vanishing of any determinant. Cases such as (9) are not excluded because it is impossible to attain them from a theory with a flavor symmetry. Rather, they are excluded for reasons of simplicity: in our judgement it is more difficult to construct such theories, compared with theories for textures with all non-zero entries independent, equal up to a phase, or related to give a vanishing determinant. Given a pairing of leading order ($`m_{LL}`$, $`m_E`$) that satisfies our simplicity requirement, and that has mass eigenvalues consistent with (2) and (3), it is straightforward to determine whether or not $`\theta _{23}\sim 1`$ is satisfied. Unfortunately, the remaining requirement, $`\theta _{13}\approx 0`$, is rendered meaningless by the leading order relation $`m_e=m_\mu =0`$. This is easily seen by rotating the left-handed doublets to diagonalize $`m_{LL}`$, and then rotating the right-handed charged leptons to give $$m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& A\end{array}\right),\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& B\\ 0& 0& A\end{array}\right),\mathrm{or}\left(\begin{array}{ccc}0& 0& C\\ 0& 0& B\\ 0& 0& A\end{array}\right)+\mathrm{perturbations}.$$ (10) We can diagonalize each leading order piece in (10) by applying (at most) a diagonal phase rotation followed by (1-2) and (2-3) rotations. This indicates that, if we ignore the perturbations responsible for the muon mass, we are free to choose $`\theta _{13}=0`$ for any leading order ($`m_{LL}`$, $`m_E`$) pairing. Although it is impossible to use the $`\theta _{13}\approx 0`$ requirement to restrict lepton mass matrices based on leading order considerations alone, it is true that it is easier for some ($`m_{LL}`$, $`m_E`$) pairings than it is for others to add perturbations that give $`\theta _{13}\approx 0`$.
As we will see, some pairings require special relations among the perturbations that seem difficult to understand by symmetry considerations. To exclude these cases we impose a final requirement on our leading order ($`m_{LL}`$, $`m_E`$) pairings: * It must be possible to add to $`m_{LL}`$ and $`m_E`$ perturbations that establish $`\theta _{13}\approx 0`$ and that satisfy the same simplicity requirements already imposed on the leading order entries: non-vanishing perturbations must be either independent, equal up to a phase, or related in a way that gives a vanishing determinant. We require that the perturbations in $`m_E`$ give $`m_\mu \ne 0`$ while preserving $`m_e=0`$. For the case of three nearly degenerate neutrinos we require that the perturbations in $`m_{LL}`$ establish $`\mathrm{\Delta }m_{12}^2\ll \mathrm{\Delta }m_{23}^2`$, and for the remaining cases, where $`\mathrm{\Delta }m_{23}^2\ne 0`$ is established at leading order, we require that the perturbations in $`m_{LL}`$ lift the degeneracy between $`\nu _1`$ and $`\nu _2`$. A simple example will clarify our motives for adding this requirement. Starting with the leading order textures $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& \frac{B^2}{A}& B\\ 0& B& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& E\\ 0& 0& D\end{array}\right),$$ (11) it is easy to find perturbations that satisfy our criteria. For instance, if we add them according to $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& \frac{B^2}{A}& B\\ 0& B& A+ϵ_2\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& E\\ 0& ϵ_1& D\end{array}\right),$$ (12) then in the basis where $`m_{LL}`$ is diagonal we have $$m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& ϵ_{1}^{}{}_{}{}^{}& E^{}\\ 0& ϵ_{2}^{}{}_{}{}^{}& D^{}\end{array}\right),$$ (13) so that $`\theta _{13}\approx 0`$ is indeed satisfied, and the leading order matrices of (11) are allowed.
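The step from (12) to (13) can be verified numerically. The sketch below uses arbitrary illustrative values for $`A`$, $`B`$, $`E`$, $`D`$ and the perturbations (not values from the paper), and checks that the electron row of the rotated charged lepton mass matrix stays zero, so no 1-3 rotation is induced:

```python
import numpy as np

# Leading-order textures (11) with the perturbations of (12); all numbers
# here are arbitrary illustrative values, not fits to data.
A, B, E, D, eps1, eps2 = 1.0, 0.6, 0.3, 1.0, 0.01, 0.02
m_LL = np.array([[0, 0, 0],
                 [0, B**2 / A, B],
                 [0, B, A + eps2]])
m_E = np.array([[0, 0, 0],
                [0, 0, E],
                [0, eps1, D]])

# Diagonalize the (real symmetric) neutrino mass matrix.
L, V = np.linalg.eigh(m_LL)     # eigenvalues ascending; columns of V are eigenvectors

# Rotate the lepton doublets to the neutrino mass basis.
m_E_prime = V.T @ m_E

# The electron row remains zero, so only a (1-2)/(2-3)-free block rotation
# is needed to diagonalize m_E': theta_13 ≈ 0 survives the perturbations.
assert np.allclose(m_E_prime[0, :], 0.0)
assert abs(L[0]) < 1e-9         # the massless state is untouched
```

The key point the assertions capture is that the zero-eigenvalue eigenvector of $`m_{LL}`$ in (12) is exactly the electron direction, so the rotation to the neutrino mass basis never mixes it into the heavy block.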
Things do not work as simply if we instead begin with the leading order pair $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& \frac{B^2}{A}& B\\ 0& B& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& F\\ 0& 0& E\\ 0& 0& D\end{array}\right).$$ (14) After rotating the lepton doublets to diagonalize $`m_{LL}`$ we need the form of $`m_E`$ (including perturbations responsible for the muon mass) to have a perturbation only in the 32 entry: $$m_E=\left(\begin{array}{ccc}0& 0& F\\ 0& 0& E^{}\\ 0& ϵ& D^{}\end{array}\right).$$ (15) Otherwise, after performing (1-2) and (2-3) rotations to diagonalize the leading order piece of $`m_E`$, we are still left with an additional large (1-2) rotation required to diagonalize the perturbations, which induces a large $`\theta _{13}`$. One must therefore require that, in the flavor basis, the perturbations enter the charged lepton mass matrix as in $$m_E=\left(\begin{array}{ccc}0& 0& F\\ 0& ϵB& E\\ 0& ϵA& D\end{array}\right),$$ (16) where $`A`$ and $`B`$ are the masses that appear in $`m_{LL}`$. The non-trivial exact relation required between the perturbations in $`m_E`$ and the leading order entries in $`m_{LL}`$ indicates that, for generic $`A`$ and $`B`$, the textures in (14) do not fulfill our criteria for leading order ($`m_{LL}`$,$`m_E`$). Note, however, that for the special case $`A=B`$, the perturbations in (16) are equal, so that the leading order pairing $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& A& A\\ 0& A& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& F\\ 0& 0& E\\ 0& 0& D\end{array}\right)$$ (17) is allowed by our rules.

## 3 Texture Analysis: Results

The program of our analysis is as follows. First, for each diagonal neutrino and charged lepton mass matrix of (2) and (3), we write down all possible forms for leading order $`m_{LL}`$ and $`m_E`$ in the flavor basis, consistent with our simplicity requirement restricting relations between non-vanishing elements.
For each leading order ($`m_{LL}`$,$`m_E`$) pairing obtained in this way, we then determine whether there are perturbations that satisfy the criteria described in the preceding paragraphs. For example, for the case $`\overline{m}_{LL}=\overline{m}_{LL}^I`$ of equation (2), the possible forms for $`m_{LL}`$ in the flavor basis are $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& A\end{array}\right),m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& \frac{B^2}{A}& B\\ 0& B& A\end{array}\right),\mathrm{and}m_{LL}=\left(\begin{array}{ccc}\frac{C^2}{A}& \frac{BC}{A}& C\\ \frac{BC}{A}& \frac{B^2}{A}& B\\ C& B& A\end{array}\right),$$ (18) and all matrices obtained from these by permuting flavor basis indices. Note that each relation among elements in these matrices leads to a vanishing sub-determinant, and is thus allowed. These forms for $`m_{LL}`$ may be paired with either $$m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& D\end{array}\right),m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& E\\ 0& 0& D\end{array}\right),\mathrm{or}m_E=\left(\begin{array}{ccc}0& 0& F\\ 0& 0& E\\ 0& 0& D\end{array}\right),$$ (19) where only the left-handed charged leptons, and not necessarily the right-handed charged leptons, are in their flavor basis<sup>3</sup><sup>3</sup>3Because we consider forms for $`m_{LL}`$ obtained from those in (18) by permuting flavor basis indices, there is no need to do the same for $`m_E`$. For example, we consider $`m_{LL}=\left(\begin{array}{ccc}\frac{B^2}{A}& 0& B\\ 0& 0& 0\\ B& 0& A\end{array}\right)`$, but not $`m_E=\left(\begin{array}{ccc}0& 0& B\\ 0& 0& 0\\ 0& 0& A\end{array}\right)`$.. Some ($`m_{LL}`$,$`m_E`$) pairings from (18) and (19) are immediately excluded because they do not give $`\theta _{23}1`$, $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& D\end{array}\right)$$ (20) being an obvious example. 
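As an aside, the vanishing sub-determinant structure of the second matrix in (18) is easy to check from its seesaw origin: integrating out a single heavy right-handed neutrino, via the seesaw relation $`m_{LL}=m_{LR}m_{RR}^{-1}m_{LR}^T`$ of eq. (36), produces exactly that texture. The masses below are arbitrary sample values for illustration:

```python
import numpy as np

# Illustrative values only: one heavy right-handed neutrino of mass M,
# coupled to nu_mu and nu_tau with Dirac masses b and a.
M, a, b = 1e3, 10.0, 6.0
m_LR = np.array([[0.0], [b], [a]])   # 3x1 Dirac mass matrix
m_RR = np.array([[M]])               # 1x1 heavy Majorana mass matrix

# Seesaw: m_LL = m_LR m_RR^{-1} m_LR^T  (eq. (36))
m_LL = m_LR @ np.linalg.inv(m_RR) @ m_LR.T

# The 2-3 sub-determinant vanishes identically, and the 22 entry obeys
# the B^2/A relation of the second texture in (18).
sub_det = m_LL[1, 1] * m_LL[2, 2] - m_LL[1, 2] * m_LL[2, 1]
assert abs(sub_det) < 1e-12
assert np.isclose(m_LL[1, 1], m_LL[1, 2]**2 / m_LL[2, 2])
```

Identifying $`B=ab/M`$ and $`A=a^2/M`$ reproduces the 22 entry $`B^2/A=b^2/M`$, which is why the exact relation costs nothing from the model-building point of view.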
Other pairings, like that of (14) for generic $`A`$ and $`B`$, are excluded because it is not possible to add perturbations that satisfy our requirements. Some pairings that do work require exact relations between perturbations, as do the leading order textures in (17), while other pairings, such as the one in (11), can accept independent perturbations. Performing our analysis for each $`\overline{m}_{LL}`$ of equation (2) leads to the pairings listed in Tables 1-4. Tables 1 and 2 list leading order ($`m_{LL}`$, $`m_E`$) pairings that can take perturbations with independent magnitudes; these twelve textures we call “generic.” Tables 3 and 4 contain pairings that instead require exact relations among perturbations, giving a further ten “special” textures. In Tables 1 and 2 we write the possible forms for $`m_E`$ as<sup>4</sup><sup>4</sup>4More precisely, the various possible forms for $`m_E`$ can each be brought into one of these three forms by appropriate rotations of the right-handed charged leptons. The perturbations are taken to have comparable magnitudes, but in each matrix only $`ϵ_1`$ need be non-zero. $$I\left(\begin{array}{ccc}0& 0& ϵ_3\\ 0& ϵ_1& ϵ_2\\ 0& 0& D\end{array}\right),II\left(\begin{array}{ccc}0& 0& ϵ_2\\ 0& ϵ_1& E\\ 0& 0& D\end{array}\right),\mathrm{and}III\left(\begin{array}{ccc}0& 0& F\\ 0& 0& E\\ 0& ϵ_1& D\end{array}\right).$$ (21) In Tables 3 and 4, we instead write $`m_E`$ explicitly and provide an example, for each pairing, of how perturbations can be added to give acceptable masses and mixings. Because for $`m_{LL}`$ there is often considerable freedom in how perturbations can be added, we show only leading order elements of $`m_{LL}`$, unless exact relations among these perturbations are required (as they are in the pairings with degenerate neutrinos in Table 3).
Some of the pairings of leading order $`m_{LL}`$ and $`m_E`$ in these tables lead to equivalent physics; for instance, the masses $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& E\\ 0& 0& D\end{array}\right)$$ (22) are related by a simultaneous (2-3) rotation on both the charged leptons and neutrinos to the combination $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& \frac{B^2}{A}& B\\ 0& B& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& E^{}\\ 0& 0& D^{}\end{array}\right).$$ (23) As a consequence (22) and (23) give the same form for the leptonic mixing matrix and are thus physically indistinguishable. For our purposes, (22) and (23) represent distinct cases because theories that predict the mass matrices of (22) in the flavor basis will be different from those that predict the mass matrices of (23). In other words, the apparent redundancy among some of the pairings of Tables 1 - 4 arises because our rules were implemented with model-building purposes in mind. In fact, some of the leading order ($`m_{LL}`$,$`m_E`$) combinations that at first sight seem to lead to the same physics emerge as less similar once we consider the effects of perturbations. For example, due to the degeneracy of $`\nu _1`$ and $`\nu _2`$, we can find a simultaneous transformation that brings the matrices $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& F\\ 0& 0& E\\ 0& 0& D\end{array}\right)$$ (24) into the forms $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& E\\ 0& 0& D\end{array}\right).$$ (25) However, we know that this degeneracy is lifted by perturbations in $`m_{LL}`$, so that the similarity between (24) and (25) is somewhat artificial. For the matrices in (25), the perturbations alone determine $`\theta _{12}`$, which can turn out to be arbitrarily large or small. 
For the matrices in (24), on the other hand, we generally expect $`\theta _{12}\sim 1`$, barring an unlikely near-cancellation between the (1-2) rotation induced by the perturbations in $`m_{LL}`$ and the (1-2) rotation required to diagonalize $`m_E`$ at leading order. Note that conversely, the physical equivalence we identified between (22) and (23) does not rely on the degeneracy between $`\nu _1`$ and $`\nu _2`$, so that these matrices are on similar footing with regard to their response to perturbations. For each pairing in Tables 1 - 4 we have identified whether, as in (25), the size of $`\theta _{12}`$ is fixed entirely by perturbations, so that no indication is given regarding which solutions to the solar neutrino problem are favored (denoted by “U”), or whether, as in (24), we typically have $`\theta _{12}\sim 1`$, so that large angle solutions are favored (denoted by “LA”). Although we have not listed them explicitly, there are in fact pairings of leading order $`m_{LL}`$ and $`m_E`$ that require small angle MSW solutions to the solar neutrino problem . For example, the pairing $$m_{LL}=\left(\begin{array}{ccc}0& A& 0\\ A& 0& 0\\ 0& 0& 0\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& E\\ 0& 0& E\\ 0& 0& D\end{array}\right)$$ (26) is a special case of one of the combinations in 7) from Table 2<sup>5</sup><sup>5</sup>5We regard mass matrices obtained by setting, for example, $`A=B`$ in matrices from Tables 1 - 4 as special cases, and do not list them independently, even though matrices like $`m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& E\\ 0& 0& D\end{array}\right)`$ and $`m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& D\\ 0& 0& D\end{array}\right)`$ are quite different from a model builder’s perspective, since different symmetries would be required to motivate them., and can only give a small angle MSW solution. Up to this point we have said nothing about complex phases in our matrices.
To ensure that the leading order relation $`\mathrm{\Delta }m_{12}^2=0`$ holds, we must require that in the $`m_{LL}`$’s of pairings 6) and 10), the $`A`$’s and $`B`$’s share the same phase, up to the freedom to send $`\nu _i\to e^{i\alpha _i}\nu _i`$. This means, for instance, that the $`m_{LL}`$ in 6) actually stands for $$m_{LL}=\left(\begin{array}{ccc}Be^{2i\alpha }& Ae^{i(\alpha +\beta )}& 0\\ Ae^{i(\alpha +\beta )}& Be^{2i\beta }& 0\\ 0& 0& 0\end{array}\right),$$ (27) with $`\alpha `$ and $`\beta `$ arbitrary and $`A`$ and $`B`$ real. In all other pairings, the phases of $`A`$-$`F`$ and the various $`ϵ`$’s are independent<sup>6</sup><sup>6</sup>6Moreover, the freedom to send $`\nu _i\to e^{i\alpha _i}\nu _i`$ allows the two $`A`$’s in the $`m_{LL}`$ of 5), for instance, to have different phases..

## 4 Some Special Textures

In this section we discuss specific features of some of the more interesting pairings in Tables 1 - 4.

$`\theta _{23}=\frac{\pi }{4}`$

One simple possibility, consistent with data from Super-Kamiokande, is that the leading order lepton masses give precisely $`\theta _{23}=\frac{\pi }{4}`$. For a neutrino mass matrix that requires no (2-3) rotation, the charged lepton mass matrix $$m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& D\\ 0& 0& D\end{array}\right)$$ (28) gives maximal mixing. Conversely, if the charged lepton mass matrix assumes the form $$m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& D\end{array}\right),$$ (29) then the forms for $`m_{LL}`$ from Tables 1 and 2 that give $`\theta _{23}=\frac{\pi }{4}`$ are $$m_{LL}=\left(\begin{array}{ccc}0& A& A\\ A& 0& 0\\ A& 0& 0\end{array}\right),m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& A& A\\ 0& A& A\end{array}\right),\mathrm{and}m_{LL}=\left(\begin{array}{ccc}0& A& A\\ A& B& B\\ A& B& B\end{array}\right).$$ (30) Other pairings that give maximal mixing require exact relations among perturbations, and can be found in 17) and 18) of Table 3.
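The claim that (28) gives maximal mixing can be checked directly: the left-handed rotation diagonalizing $`m_E`$ is obtained from $`m_Em_E^T`$, and its heavy eigenvector has equal $`\mu `$ and $`\tau `$ components. A minimal sketch, with an arbitrary value for $`D`$:

```python
import numpy as np

# Charged lepton texture (28); D is an arbitrary illustrative mass.
D = 1.0
m_E = np.array([[0, 0, 0],
                [0, 0, D],
                [0, 0, D]])

# The left-handed rotation diagonalizing m_E comes from m_E m_E^T.
w, U = np.linalg.eigh(m_E @ m_E.T)
tau = U[:, np.argmax(w)]      # eigenvector of the heaviest eigenvalue, 2 D^2

# Its mu and tau components are equal in magnitude: theta_23 = pi/4.
theta23 = np.arctan2(abs(tau[1]), abs(tau[2]))
assert np.isclose(theta23, np.pi / 4)
```

With $`m_{LL}`$ already diagonal, this 45-degree left-handed rotation is the whole of the leptonic 2-3 mixing, which is the maximal-mixing statement in the text.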
Neutrinos as hot dark matter

If there exist three light neutrinos whose splittings obey $`\mathrm{\Delta }m_{12}^2\ll \mathrm{\Delta }m_{23}^2\sim 10^{-3}eV^2`$, then for neutrino masses to be cosmologically significant requires a high degree of degeneracy. Furthermore, there is a bound from neutrinoless double $`\beta `$ decay experiments that, in the basis where the charged lepton masses are diagonal, $`m_{LL}^{}{}_{ee}{}^{}<0.5`$ eV . Lowest-order mass matrices that give degenerate neutrinos and $`m_{LL}^{}{}_{ee}{}^{}=0`$ are thus of special interest, as they evade this experimental constraint and allow the neutrino mass scale to be cosmologically relevant. We find two combinations of $`m_{LL}`$ and $`m_E`$ that satisfy these criteria: $$m_{LL}=\left(\begin{array}{ccc}A& 0& 0\\ 0& A& 0\\ 0& 0& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& iE\\ 0& 0& E\\ 0& ϵ& D\end{array}\right)$$ (31) and $$m_{LL}=\left(\begin{array}{ccc}0& A& 0\\ A& 0& 0\\ 0& 0& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& ϵ_2& E\\ 0& ϵ_1& D\end{array}\right).$$ (32) We include perturbations in $`m_E`$ because without them, the element $`m_{LL}^{}{}_{ee}{}^{}`$ is not defined.

Zeroth order lepton mass matrices

We call the contributions to $`m_{LL}`$ and $`m_E`$ that survive in the limit of unbroken flavor symmetry “zeroth order” masses. Consider the case of an abelian flavor symmetry, and suppose that both $`m_{LL}`$ and $`m_E`$ have non-vanishing elements at zeroth order. If in addition the zeroth order form of $`m_E`$ in the flavor basis is invariant under $`\nu _i\leftrightarrow \nu _j`$, then it follows that $`\nu _i`$ and $`\nu _j`$ are not distinguished by the flavor symmetry: if $`\nu _i\nu _ih_u`$ is an allowed operator, then so are $`\nu _i\nu _jh_u`$ and $`\nu _j\nu _jh_u`$. As a consequence it must be true that $`m_{LL}`$ as well is invariant under $`\nu _i\leftrightarrow \nu _j`$, and moreover that the (i-j) space of $`m_{LL}`$ must have either all entries zero, or all entries non-zero.
Following this reasoning, we find that, for abelian flavor symmetries, the only pairings from Tables 1 - 4 that are candidate zeroth order mass matrices are $$m_{LL}=\left(\begin{array}{ccc}0& A& B\\ A& 0& 0\\ B& 0& 0\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& E\\ 0& 0& D\end{array}\right),$$ (33) and $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& \frac{B^2}{A}& B\\ 0& B& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& E\\ 0& 0& D\end{array}\right).$$ (34) Simple seesaw-based models for the combinations in (33) and (34) were described in ; several other models have been based on the $`m_{LL}`$ of (34) .

Democratic mass matrices

The pairing $$m_{LL}=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& A\end{array}\right)m_E=\left(\begin{array}{ccc}0& 0& D\\ 0& 0& D\\ 0& 0& D\end{array}\right),$$ (35) which is a special case ($`D=E=F`$) of one of the combinations in 1), results from a leading order democratic form for the charged lepton mass matrix . A democratic form for the neutrino mass matrix is generically excluded, since it gives too large a value for $`\theta _{13}`$. However, it is allowed with special perturbations, as shown in pairing 16).

## 5 Limitations of Texture Analysis

Some of the requirements imposed on $`m_{LL}`$ and $`m_E`$ in section 2 were motivated by a desire to concentrate on mass matrices that could result most easily from flavor symmetries. One may wonder what we have missed in this regard - are there forms for $`m_{LL}`$ and $`m_E`$ that violate our rules but that nevertheless are plausible as consequences of flavor symmetry?
One reason that examples of such matrices can in fact be found is that if $`m_{LL}`$ arises by the seesaw mechanism , then our rules should really be applied to $`m_{RR}`$ and $`m_{LR}`$, and $`m_{LL}`$ should be derived from these matrices according to $$m_{LL}=m_{LR}m_{RR}^{-1}m_{LR}^T.$$ (36) For example, the matrices $$m_{RR}=\left(\begin{array}{ccc}0& A& A\\ A& A& 0\\ A& 0& 0\end{array}\right)\mathrm{and}m_{LR}=\left(\begin{array}{ccc}0& 0& A\\ 0& A& A\\ A& A& A\end{array}\right)$$ (37) are certainly consistent with our rules, while the resulting mass matrix for the light neutrinos, $$m_{LL}=A\left(\begin{array}{ccc}1& 2& 3\\ 2& 4& 5\\ 3& 5& 6\end{array}\right),$$ (38) clearly is not. Examples like this are not difficult to find, but it does seem to be true that in most simple cases, if $`m_{RR}`$ and $`m_{LR}`$ satisfy our rules, then so does $`m_{LL}`$. Another limitation of our approach is that our rules prohibit matrix elements from being related by factors of $`\frac{1}{2}`$, $`\frac{1}{3}`$, etc., that could arise from Clebsch coefficients associated with the flavor group. For example, in the basis where the charged lepton masses are diagonal, the form $$m_{LL}=A\left(\begin{array}{ccc}0& \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}& \frac{1}{2}& \frac{1}{2}\\ \frac{1}{\sqrt{2}}& \frac{1}{2}& \frac{1}{2}\end{array}\right)$$ (39) allows (for large enough $`A`$) neutrinos to compose a significant fraction of the dark matter in the universe without violating double $`\beta `$ decay constraints . This texture corresponds to the pairing 12) of Table 2, with $`B`$ chosen to be $`\frac{A}{\sqrt{2}}`$, in violation of our rules. On the other hand, the procedure we have used has given us a pairing of $`m_{LL}`$ and $`m_E`$ that leads precisely to the same physics as does (39).
In particular, (32) for the special case $`D=E`$ is identical to (39) as far as the physics is concerned, and we expect that these forms may be easier to motivate with flavor symmetries.

## 6 Conclusions

The Super-Kamiokande data on atmospheric neutrino fluxes suggests that the leading-order fermion mass matrices may not have the conventional form of (1), at least in the lepton sector. What leading order forms for lepton mass matrices are suggested by atmospheric and solar neutrino oscillations? We have derived the complete set of leading order ($`m_{LL}`$,$`m_E`$) pairings consistent with $`\mathrm{\Delta }m_{12}^2\ll \mathrm{\Delta }m_{23}^2`$, $`\theta _{13}\approx 0`$, and $`\theta _{23}\sim 1`$, subject to a simplicity requirement: non-zero entries of a matrix may be equal up to a phase, or may have a precise relationship which leads to a vanishing determinant, otherwise they are independent. This simplicity requirement, motivated by an interest in textures that follow most easily from flavor symmetries, reduces an infinitely large class of matrices to the ($`m_{LL}`$,$`m_E`$) combinations listed in Tables 1 - 4. These combinations are divided into twelve generic cases and ten special cases, according to whether the perturbations involve exact relations. For the twelve generic cases we also give the possible forms for the perturbations responsible for the muon mass. The diverse pairings we have derived lead to a variety of physics. Some give degenerate neutrinos and thus leave considerable freedom in the overall mass scale, while others, with hierarchical masses, fix the mass scale of the heaviest neutrino at $`3\times 10^{-2}`$ eV, according to Super-Kamiokande results. The various pairings also give different predictions for $`\theta _{12}`$, and hence require different resolutions to the solar neutrino problem.
Certainly each of our mass matrices is incomplete, because only by specifying all perturbations can the physics be fully established, yet in our view the approach we have taken offers a simple starting point for considering what mass matrices to aim for in constructing realistic theories of lepton masses.
# Temperature dependence of electric resistance and magnetoresistance of pressed nanocomposites of multilayer nanotubes with the structure of nested cones

## Abstract

Bulk samples of carbon multilayer nanotubes with the structure of nested cones (fishbone structure), suitable for transport measurements, were prepared by compressing under high pressure ($`\sim 25`$ kbar) a nanotube precursor synthesized through thermal decomposition of polyethylene catalyzed by nickel. The structure of the initial nanotube material was studied using high-resolution transmission electron microscopy. In the low-temperature range (4.2–100 K) the electric resistance of the samples changes according to the law $`\mathrm{ln}\rho \propto (T_0/T)^{1/3}`$, where $`T_0\approx 7`$ K. The measured magnetoresistance is quadratic in the magnetic field and linear in the reciprocal temperature. The measurements have been interpreted in terms of two-dimensional variable-range hopping conductivity. It is suggested that the space between the inside and outside walls of nanotubes acts as a two-dimensional conducting medium. Estimates suggest a high value of the density of electron states at the Fermi level of about $`5\times 10^{21}`$ eV<sup>-1</sup>cm<sup>-3</sup>.

thanks: Author for correspondence (tsebro@sci.lebedev.ru).

Investigations of electric transport properties of carbon nanotubes have attracted great attention recently. According to theoretical concepts , an isolated nanotube can be either a metal, a semimetal, or an insulator, depending on such structural parameters as its diameter, chirality, and the number of concentric layers in it. Despite enormous difficulties in measurements of electric parameters of isolated nanotubes or nanotube bundles, several attempts undertaken recently have been successful . The latest published measurements clearly indicate the presence of both metallic and insulating nanotubes in a single set of samples prepared in the same conditions.
The authors emphasized that each multilayer nanotube manifested its specific conducting properties, thus indicating a strong correlation between structural and electric parameters. In this connection, it is interesting to study, in addition to the transport properties of isolated carbon nanotubes, the conducting properties of bulk nanotube materials, in which contacts between nanotubes and/or their sections are randomly distributed. In our previous publication we reported on the conductivity temperature dependence and structure (see also Ref. ) of carbon nanotube films fabricated by evaporating graphite in an electron beam. The data of those experiments were interpreted in terms of a three-dimensional model of hopping conductivity with a Coulomb gap about the Fermi level (the resistivity was described by the law $`\mathrm{ln}\rho \propto (T_0/T)^{1/2}`$). The density of states at the Fermi level for films that contained, as shown by structural investigations, mostly one-layer carbon nanotubes (isolated or assembled in bundles) and had a relatively high conductivity was estimated to be $`g(\mu )\sim 10^{21}`$ eV<sup>-1</sup>cm<sup>-3</sup>. On the other hand, films containing multilayer carbon nanotubes were characterized by fairly large values of resistivity, which changed with temperature according to Mott’s law, $`\mathrm{ln}\rho \propto (T_0/T)^{1/4}`$. In this case, estimates of the density of states, $`g(\mu )\sim 10^{18}`$ eV<sup>-1</sup>cm<sup>-3</sup>, corresponded to $`g(\mu )`$ for amorphous carbon. Amorphous carbon in significant quantities was detected on the outside surfaces of multilayer nanotubes in such films by electron microscopy , and it seems that the conductivity of such films can be attributed to the presence of this amorphous carbon. It is well known that, in addition to one-layer and multilayer nanotubes with walls made of coaxial carbon layers, there are nanotubes whose walls consist of nested truncated cones (these are the so-called fishbone-type structures ).
Such nanocones are usually detected at the ends of carbon nanotubes, but can also exist in the form of independent objects among products of arc discharges in a helium atmosphere , commonly used in synthesizing carbon nanotubes. In our recent work we demonstrated that thermal decomposition of polyethylene with nickel used as a catalyst is a fairly efficient technique for fabrication of large quantities of fishbone nanotubes. This technique allows one to manufacture in a relatively short time considerable quantities (several grams) of fairly homogeneous nanotube material. According to the data of thermal analysis in oxidizing atmosphere, the nickel content in this material is less than 15% by mass. Nickel is present in the material in the form of nanoparticles, which can be eliminated completely by thermal processing of the nanocomposite in vacuum at temperatures of up to 2800 °C. In this paper we present our measurements of electric resistance versus temperature and magnetoresistance of bulk nanocomposite samples fabricated by pressing the initial powder of carbon fishbone nanotubes. The structure of the carbon phase in the initial powder was imaged by a Philips EM 430ST transmission electron microscope of high resolution at an accelerating voltage of 200 kV. These measurements demonstrated that the major part of the initial carbon material was multilayer carbon nanotubes with lengths of several micrometers, outside diameter of 40–50 nm, and internal channel diameter of 9–20 nm. The tubes consisted of almost rectilinear sections with lengths of 100–300 nm turned with respect to one another. Figure 1 shows as an example electron micrographs of the composite nanotube material at (a) low, (b) medium, and (c) high resolution. The analysis of micrographs indicated that the nanotube walls were composed in most cases of 40–65 tapered graphite layers. The taper angle varied along the tubes in the range of 16°–35°. The inside diameter was also variable.
The dimensions and shapes of wider sections of the inside channel corresponded to those of catalytic nickel nanoparticles, which were detected in most cases at the ends of the nanotubes. We observed either so-called bamboo structures (with taper angles of 20° to 25°) or, more frequently, fishbone structures with larger taper angles. Bulk samples that could be used in transport measurements were fabricated by cold pressing of nanotube powder under high ($`\sim `$ 25 kbar) pressure. Samples were shaped as bars with dimensions of $`1\times 2\times 3`$ mm. Contacts for measuring current and voltage across samples were made from a conducting epoxy paste. Note that the samples were fairly strong and their resistivity at room temperature was relatively low: $`\rho (300\mathrm{K})\sim 1\mathrm{\Omega }`$cm. The resistance was measured as a function of temperature down to the liquid-helium temperature in magnetic fields of up to 75 kOe. In all samples under investigation, the resistance changed with temperature most rapidly (about one order of magnitude) in the temperature range between liquid helium and $`\sim `$100 K, and the resistance followed the law $$R(T)=R_0\mathrm{exp}[(T_0/T)^{1/3}],$$ (1) which is typical of variable-range hopping conductivity in two dimensions. Figure 2 shows as an example two curves of $`\mathrm{ln}R`$ vs. $`T^{-1/3}`$ plotted for samples Nos. 14 and 15. It is known that in this case $`T_0`$ in Eq. (1) is given by $$T_0=\frac{13.8}{k_Bg^{\prime }(\mu )a^2},$$ (2) where $`g^{\prime }(\mu )`$ is the two-dimensional density of states at the Fermi level and $`a`$ is the localization length. To the best of our knowledge, this is the first observation of the dependence $`\mathrm{ln}R\propto (T_0/T)^{1/3}`$ in a system with a relatively low resistivity. Another interesting feature of our measurements is the low $`T_0`$ (for example, we found $`T_0`$ = 7.3 K in sample No. 15, and in all tested samples $`T_0`$ was within the interval of 6.5–7.5 K), which directly indicates, in accordance with Eq.
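As an illustration of how Eq. (1) is used in practice (a sketch with synthetic data, not the measured curves of Fig. 2), $`T_0`$ follows from a linear fit of $`\mathrm{ln}R`$ against $`T^{-1/3}`$: the slope of the straight line is $`T_0^{1/3}`$. The numerical values below are assumed for the demonstration.

```python
import numpy as np

def fit_T0(T, R):
    """Fit ln R = ln R0 + (T0/T)^(1/3); return (T0, R0)."""
    x = T ** (-1.0 / 3.0)                 # abscissa of a Fig.-2-style plot
    slope, intercept = np.polyfit(x, np.log(R), 1)
    return slope ** 3, np.exp(intercept)  # slope equals T0^(1/3)

# synthetic data mimicking a sample with T0 = 7.3 K, R0 = 1 Ohm
T = np.linspace(4.2, 100.0, 50)
R = 1.0 * np.exp((7.3 / T) ** (1.0 / 3.0))
T0, R0 = fit_T0(T, R)
print(T0, R0)   # recovers T0 = 7.3 K, R0 = 1.0
```

The same fit applied to digitized experimental data would yield the $`T_0`$ values quoted above for samples Nos. 14 and 15.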
(2), that the density of states at the Fermi level is high. In this connection, it is of interest to measure the magnetoresistance, especially as a function of temperature, since these measurements would allow us to estimate directly the localization length $`a`$ and then derive the two-dimensional density of states $`g^{\prime }(\mu )`$ using Eq. (2). It is known that in systems with variable-range hopping conductivity, the magnetoresistance is positive and (in moderate magnetic fields) is given by the expression $$\mathrm{ln}\left[\frac{\rho (H)}{\rho (0)}\right]=t\left(\frac{a}{\lambda }\right)^4\left(\frac{T_0}{T}\right)^{3/p}\equiv A(T)H^2,$$ (3) where $`\lambda `$ is the magnetic length, $`t`$ is a dimensionless factor of about 0.0025, and $`p=D+1`$ (where $`D`$ is the system dimensionality). Since $`p`$ = 3 holds in the case under consideration, it follows from Eq. (3) that the magnetoresistance at a fixed magnetic field should be inversely proportional to the temperature. An example of magnetoresistance measurements versus magnetic field at $`T`$ = 4.2 K for sample No. 15 is given in Fig. 3. One can see that the magnetoresistance is adequately described by a quadratic function of $`H`$ in the range of moderate magnetic fields, $`H<`$30 kOe, and in higher magnetic fields it tends to a linear function. The behavior of the magnetoresistance in low magnetic fields is especially interesting. As a rule, the magnetoresistance is negative on the section of the curve around zero and becomes positive in fields higher than 7 kOe. As a result, we have a small, broad region of negative magnetoresistance at about 3–4 kOe. Moreover, several additional narrow local minima (see the inset to Fig. 3) are observed superposed on this broad minimum. Note that the peaks in the inset to Fig. 3 are not caused by noise, although their amplitudes are very small.
Experiments with repeated accumulation and averaging of the signal, dedicated to testing the reproducibility of such measurements, were performed (the results obtained by this procedure are the ones plotted in the inset to Fig. 3), and these experiments proved that the curves were reproducible, even after warming the samples to room temperature. It seems that the negative magnetoresistance of the samples and the local minima are due to the discrete structure of the conducting network formed by nanotubes. The broadest minimum in the magnetoresistance at 3–4 kOe is tentatively related to the average cell dimension in the network, and the local minima are ascribed to some additional characteristic dimensions in the random network. When the applied magnetic field reaches a value such that the magnetic flux through a network cell equals the magnetic flux quantum $`hc/e`$, the amplitude of the tunneling between nanotubes increases, which causes a drop in the total resistance of the system. A simple estimate yields a cell dimension of the conducting network of about 120 nm at $`H_{min}\sim 3.5`$ kOe, which seems plausible, given the structure of the nanotube material revealed by electron microscopy. The magnetoresistance of sample No. 15 as a function of temperature under a magnetic field of 75 kOe is plotted in Fig. 4 in terms of $`\mathrm{ln}[R(H)/R(0)]`$ and $`T^{-1}`$. It is clear that the magnetoresistance at low temperatures is reasonably well described by a linear function of $`T^{-1}`$, in accordance with Eq. (3). The localization length derived from these measurements is $`a`$ = 17 nm. Thus, the two-dimensional density of states at the Fermi level estimated using these data and Eq. (2) is $`g^{\prime }(\mu )\sim 7.5\times 10^{15}`$ eV<sup>-1</sup>cm<sup>-2</sup>. Assuming that the space between the inside and outside walls of nanotubes acts as a two-dimensional medium, we can estimate the three-dimensional density of states $`g(\mu )`$ at the Fermi level.
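The cell-size estimate can be reproduced with a few lines of arithmetic: the side of a square network cell threaded by one flux quantum at the field of the broad minimum is $`\sqrt{\mathrm{\Phi }_0/B}`$. This sketch uses the SI flux quantum $`h/e`$ (the Gaussian-units $`hc/e`$ of the text); the field value is the one quoted above.

```python
import math

H_PLANCK = 6.62607015e-34   # Planck constant, J s
E_CHARGE = 1.602176634e-19  # elementary charge, C
PHI0 = H_PLANCK / E_CHARGE  # normal-metal flux quantum h/e, in Wb

B_MIN = 0.35                          # 3.5 kOe expressed in tesla
cell_side_nm = math.sqrt(PHI0 / B_MIN) * 1e9
print(round(cell_side_nm), "nm")      # ~110 nm, consistent with the ~120 nm above
```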
Using the relation $`g^{\prime }(\mu )=g(\mu )d`$, where $`d`$ is the average nanotube wall thickness (in this specific case about 15 nm), we have $`g(\mu )\sim 5\times 10^{21}`$ eV<sup>-1</sup>cm<sup>-3</sup>. It is also interesting to estimate the two-dimensional and three-dimensional densities, $`n_S`$ and $`n_V`$, of current carriers. This can be done using the equation $$n_S=2g^{\prime }(\mu )ϵ_0(T),$$ (4) where $`ϵ_0(T)`$ is the width of the energy band near the Fermi level containing the current carriers that contribute to the hopping conductivity. In the two-dimensional case, this band width is given by the equation $$ϵ_0(T)=\frac{(k_BT)^{2/3}}{[g^{\prime }(\mu )a^2]^{1/3}}.$$ (5) At $`T`$ = 25 K we find from Eqs. (4) and (5) $`n_S\sim 9\times 10^{12}`$ cm<sup>-2</sup> and $`n_V\sim 6\times 10^{19}`$ cm<sup>-3</sup>. Thus, we have interpreted the low-temperature transport measurements of pressed samples of randomly distributed carbon nanotubes with a nested-cone structure in terms of two-dimensional variable-range hopping conductivity. We have assumed that the space between the inside and outside walls of nanotubes acts as a two-dimensional medium. In our previous publication the low-temperature properties of carbon nanotubes were interpreted in terms of the three-dimensional model of hopping conductivity with a Coulomb gap in the density of states near the Fermi level. In both cases, the resistance at low temperatures is described by the law $`\mathrm{ln}\rho \propto (T_0/T)^{1/n}`$ with small $`T_0`$, which implies that these carbon nanotube materials, with their various morphologies, are characterized by very high densities of electron states at the Fermi level, $`\sim 10^{21}`$ eV<sup>-1</sup>cm<sup>-3</sup>, a value typical of metals. This result is important for understanding the fundamental electronic properties of carbon nanotubes and related materials and may also prove quite useful from the viewpoint of practical applications.
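The chain of estimates above can be checked directly from Eqs. (2), (4), and (5); the script below is a sketch using the representative values $`T_0`$ = 7.3 K, $`a`$ = 17 nm, and $`d`$ = 15 nm quoted in the text.

```python
kB = 8.617333262e-5        # Boltzmann constant, eV/K

T0 = 7.3                   # K, from the ln R vs T^(-1/3) fit
a = 17e-7                  # localization length, cm
d = 15e-7                  # average nanotube wall thickness, cm

# Eq. (2): two-dimensional density of states at the Fermi level
g2 = 13.8 / (kB * T0 * a**2)                                # eV^-1 cm^-2
# three-dimensional density of states via g'(mu) = g(mu) * d
g3 = g2 / d                                                 # eV^-1 cm^-3

# Eq. (5): width of the transport band at T = 25 K
T = 25.0
eps0 = (kB * T) ** (2.0 / 3.0) / (g2 * a**2) ** (1.0 / 3.0)  # eV
# Eq. (4): sheet carrier density
nS = 2.0 * g2 * eps0                                         # cm^-2

print(f"g'(mu) ~ {g2:.1e} eV^-1 cm^-2")   # ~7.6e15
print(f"g(mu)  ~ {g3:.1e} eV^-1 cm^-3")   # ~5e21
print(f"n_S    ~ {nS:.1e} cm^-2")         # ~9e12
```

The outputs reproduce, to the quoted precision, the $`g^{\prime }(\mu )`$, $`g(\mu )`$, and $`n_S`$ estimates given in the text.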
The work was supported by the Russian Scientific Technological Program “Topical Issues in Physics of Condensed Media,” branch “Fullerenes and Atomic Clusters” (project No. 96147), and the International Center for Science and Technology (project No. 079). Translation provided by the Russian Editorial office.
# Statistics of photodissociation spectra: nonuniversal properties

## Abstract

We consider the two-point correlation function of the photodissociation cross section in molecules where the fragmentation process is indirect, passing through resonances above the dissociation threshold. In the limit of overlapping resonances, a formula is derived relating this correlation function to the behavior of the corresponding classical system. It is shown that nonuniversal features of the two-point correlation function may have significant experimental manifestations.

The photodissociation of molecules, such as the radicals $`HO_2`$ and $`NO_2`$, is an indirect process comprised of two steps. In the first step, a photon excites the molecule to an energy above the dissociation threshold. Then fragmentation proceeds via energy redistribution within the vibrational degrees of freedom, or tunneling from binding to unbinding energy surfaces of the adiabatic electronic potential. A barrier, which separates quasi-stable states from continuum modes, hinders the immediate dissociation of the excited molecule. On these long-lived resonances, the dynamics of the system is chaotic. Therefore, the photodissociation cross section exhibits complicated behavior as a function of the photon energy, which suggests a statistical analysis of the problem. This approach has been taken recently by Fyodorov and Alhassid, who analyzed correlations of the photodissociation cross section in the framework of random matrix theory. In this framework, one is able to describe the universal properties of the photodissociation cross section, but not the individual imprints of each molecule. It is well known, however, that the statistical properties of quantum chaotic systems bear a simple relation to the underlying classical dynamics.
The main purpose of this paper is to relate the statistical characteristics of the photodissociation cross section to correlation functions associated with the classical dynamics of the molecule. Consider a molecule, in the ground state $`|g\rangle `$, excited by a light pulse to an energy above the dissociation threshold, and let $`\mathcal{H}`$ denote the Hamiltonian of the system on the excited electronic surface. It will be assumed that $`\mathcal{H}`$ represents an open system with several open dissociation channels. The photodissociation cross section of the molecule, in the dipole approximation, is given by $$\sigma (ϵ)=-\eta \text{Im}\langle g|𝒟\frac{1}{ϵ^+-\mathcal{H}}𝒟|g\rangle ,$$ (1) where $`ϵ`$ is the energy measured from the ground state of the molecule and $`ϵ^\pm =ϵ\pm i0`$, $`𝒟=𝐝\widehat{𝐞}`$ is the projection of the electronic dipole moment operator of the molecule, $`𝐝`$, on the polarization, $`\widehat{𝐞}`$, of the absorbed light, and $`\eta =ϵ/c\mathrm{}ϵ_0`$, $`c`$ being the speed of light, and $`ϵ_0`$ the electric permittivity. We focus our attention on the dimensionless two-point correlation function: $$K(\omega )=\frac{\langle \sigma (ϵ-\frac{\mathrm{}\omega }{2})\sigma (ϵ+\frac{\mathrm{}\omega }{2})\rangle }{\langle \sigma (ϵ)\rangle ^2}-1,$$ (2) where $`\langle \mathrm{}\rangle `$ denotes an energy averaging over a classically small energy interval centered at $`ϵ`$ which, nonetheless, includes a large number of resonances. It will be assumed that the excitation energy, $`ϵ`$, is sufficiently high that the mean spacing between the vibrational modes of the molecule is smaller than the energy broadening due to the finite lifetime of the system in the excited states. This is the regime of overlapping resonances. Turning to the classical counterpart of our system, let $`\rho (𝐱)`$ be the initial density distribution of the excited molecule in phase space. Here $`𝐱=(𝐫,𝐩)`$ is a phase space point, $`𝐫`$ and $`𝐩`$ being $`d`$-dimensional vectors of coordinates and conjugate momenta, respectively.
The classical autocorrelation function of $`\rho (𝐱)`$ is defined as $$C(t)=\frac{\langle \rho (𝐱(t))\rho (𝐱)\rangle }{\langle \rho (𝐱)\rangle ^2},$$ (3) where $`𝐱(t)`$ is the end point of the trajectory starting at $`𝐱(0)=𝐱`$ and evolving for a time $`t`$ according to the classical equations of motion: $`\dot{𝐫}=\partial \mathcal{H}/\partial 𝐩`$ and $`\dot{𝐩}=-\partial \mathcal{H}/\partial 𝐫`$. The averaging, $`\langle \mathrm{}\rangle `$, is over all initial points, $`𝐱`$, on the energy shell $`ϵ=\mathcal{H}(𝐱)`$ (see the definition in Eq. (10) below). Another classical correlation function, closely related to $`C(t)`$, is defined by restricting the average in (3) only to those points $`𝐱`$ which lie on the periodic orbits of the system with period $`\tau `$, where $`\tau \geq t`$. This function will be denoted by $`C_\tau (t)`$, and its precise definition will be given later on (see Eq. (23)). The central result of this paper is a formula which expresses the two-point correlation function of the photodissociation cross section, $`K(\omega )`$, in terms of the classical correlation functions $`C(t)`$ and $`C_\tau (t)`$. Namely, within the semiclassical approximation, $`K(\omega )`$ takes the form $`K(\omega )\approx K_1(\omega )+K_2(\omega ),`$ (4) where $`K_1(\omega )={\displaystyle \frac{2}{\pi \beta \mathrm{}}}\text{Re}{\displaystyle \int _0^{\mathrm{\infty }}}𝑑te^{i\omega t}C(t),\text{and}`$ (5) $`K_2(\omega )={\displaystyle \frac{1}{\pi ^2\beta \mathrm{}^2}}\text{Re}{\displaystyle \int _0^{\mathrm{\infty }}}𝑑te^{i\omega t}{\displaystyle \int _{-\frac{t}{2}}^{\frac{t}{2}}}𝑑\tau C_t(\tau ).`$ (6) Here $`\beta =1`$ for Hamiltonians with time reversal symmetry, and $`\beta =2`$ for systems without this symmetry. It is assumed that $`\mathcal{H}`$ has no other discrete symmetry. The above results will be derived by semiclassical methods. The overlap of resonances implies that the behavior of $`\sigma (ϵ)`$ is dominated by short-time dynamics, and therefore the semiclassical approximation is justified.
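To make definition (3) concrete, one can estimate $`C(t)`$ by Monte Carlo for any mixing system; the sketch below uses Arnold's cat map on the unit torus as a stand-in for the chaotic flow on the energy shell, with an assumed Gaussian initial density $`\rho `$ (the map and all parameters are illustrative, not the molecular dynamics discussed here).

```python
import numpy as np

rng = np.random.default_rng(0)

def cat_map(x, y):
    """One step of Arnold's cat map on the unit torus (a standard mixing map)."""
    return (2 * x + y) % 1.0, (x + y) % 1.0

def rho(x, y, s=0.1):
    """Smooth 'initial density' peaked at the center of the torus."""
    return np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / (2 * s ** 2))

# sample phase-space points uniformly (the analogue of the microcanonical average)
N = 200_000
x, y = rng.random(N), rng.random(N)
r0 = rho(x, y)
norm = r0.mean() ** 2

C = []
for t in range(9):
    C.append((rho(x, y) * r0).mean() / norm)  # C(t) = <rho(x(t)) rho(x)> / <rho>^2
    x, y = cat_map(x, y)

print([round(c, 3) for c in C])  # C(0) > 1, decaying toward 1 as correlations die
```

The decay of $`C(t)`$ toward its uncorrelated value is exactly the input that formulae (5-6) require.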
A semiclassical treatment of this problem will also clarify the role of various classical trajectories and help in interpreting formulae (5-6). In the semiclassical limit, the Green function is approximated by a sum of two terms: $`G\approx G_W+G_{osc}`$. The first, known as the Weyl term, is a smooth function of the energy, $`ϵ`$: $$\langle 𝐫^{}|G_W(ϵ^\pm )|𝐫\rangle =\int \frac{d𝐩}{(2\pi \mathrm{})^d}\frac{e^{\frac{i}{\mathrm{}}𝐩(𝐫^{}-𝐫)}}{ϵ^\pm -\mathcal{H}(𝐩,\frac{𝐫^{}+𝐫}{2})}.$$ (7) It is large when $`|𝐫-𝐫^{}|`$ is of the order of the particle wavelength, and therefore represents a local function in space. The second contribution, $`G_{osc}`$, is nonlocal in space but oscillatory in the energy. It is expressed as a sum over classical trajectories: $$\langle 𝐫^{}|G_{osc}(ϵ^\pm )|𝐫\rangle =\underset{\nu }{\sum }A_\nu ^\pm e^{\pm \frac{i}{\mathrm{}}S_\nu (ϵ;𝐫^{},𝐫)},$$ (8) where $`S_\nu (ϵ;𝐫^{},𝐫)`$ is the classical action of the $`\nu `$-th trajectory from $`𝐫`$ to $`𝐫^{}`$ with energy $`ϵ`$, and $`A_\nu ^+=(A_\nu ^{-})^{*}`$ is the corresponding amplitude. This decomposition of the Green function implies that the average, $`\langle \sigma (ϵ)\rangle `$, comes only from the Weyl contribution, $`G_W`$. A straightforward calculation yields $`\langle \sigma (ϵ)\rangle \propto \langle \rho (𝐱)\rangle `$, where $$\rho (𝐱)=\int 𝑑𝐫^{}e^{\frac{i}{\mathrm{}}𝐩𝐫^{}}\langle 𝐫-\frac{𝐫^{}}{2}|\psi \rangle \langle \psi |𝐫+\frac{𝐫^{}}{2}\rangle $$ (9) is the Wigner function of the (unnormalized) state $`|\psi \rangle =𝒟|g\rangle `$. This function is real, and upon small smearing in phase space it yields a positive definite function which can be interpreted as the initial density distribution of the excited molecule. The microcanonical average, $`\langle \mathrm{}\rangle `$, of a general function $`f(𝐱)`$ is defined as $$\langle f(𝐱)\rangle =\int \frac{d𝐱}{(2\pi \mathrm{})^d}\delta (ϵ-\mathcal{H}(𝐱))f(𝐱).$$ (10) Notice that $`f`$ and $`\langle f\rangle `$ do not have the same dimensions. To simplify our notation, from now on we work in units where $`\mathrm{}=1`$.
The correlator of $`\sigma (ϵ)`$ can be written in the form: $`\langle \sigma (ϵ-{\displaystyle \frac{\omega }{2}})\sigma (ϵ+{\displaystyle \frac{\omega }{2}})\rangle ={\displaystyle \frac{\eta ^2}{2\pi ^2}}{\displaystyle \underset{i=1}{\overset{4}{\prod }}\int d𝐫_i\rho (𝐫_1,𝐫_2)\rho (𝐫_3,𝐫_4)}`$ (11) $`\times \text{Re}\langle G(ϵ^{-}-{\displaystyle \frac{\omega }{2}};𝐫_\mathrm{𝟐},𝐫_\mathrm{𝟏})G(ϵ^++{\displaystyle \frac{\omega }{2}};𝐫_\mathrm{𝟒},𝐫_\mathrm{𝟑})\rangle ,`$ (12) where $`\rho (𝐫,𝐫^{})=\langle 𝐫|\psi \rangle \langle \psi |𝐫^{}\rangle `$. Considering the connected part of the correlator, only the oscillatory terms of the Green functions, $`G_{osc}`$, contribute. Since these are strongly oscillating functions of the coordinates, the main contribution to the integral (12) comes from two types of orbits: (a) general orbits with initial and final points far apart, but with $`𝐫_\mathrm{𝟐}\approx 𝐫_\mathrm{𝟑}`$ and $`𝐫_\mathrm{𝟏}\approx 𝐫_\mathrm{𝟒}`$; (b) returning orbits with $`𝐫_\mathrm{𝟏}\approx 𝐫_\mathrm{𝟐}`$ and $`𝐫_\mathrm{𝟑}\approx 𝐫_\mathrm{𝟒}`$. The corresponding contributions to (12) will be denoted by $`I_a`$ and $`I_b`$, respectively. When $`\mathcal{H}`$ is also time reversal symmetric, an additional contribution, equal to $`I_a`$, comes from orbits with $`𝐫_\mathrm{𝟏}\approx 𝐫_\mathrm{𝟑}`$ and $`𝐫_\mathrm{𝟐}\approx 𝐫_\mathrm{𝟒}`$. The calculation of $`I_a`$ involves two main approximations. One is to expand the actions of the Green functions (8) to first order in $`\omega `$, $`𝐫_3-𝐫_2`$, and $`𝐫_4-𝐫_1`$. For this purpose we use the relation $`S_\nu (ϵ\pm \omega /2;𝐫^{}+\delta 𝐫^{},𝐫+\delta 𝐫)\approx S_\nu (ϵ;𝐫^{},𝐫)\pm t_\nu \omega /2+𝐩_\nu ^{}\delta 𝐫^{}-𝐩_\nu \delta 𝐫`$, where $`t_\nu `$ is the time of the orbit, $`𝐩_\nu `$ is the initial momentum of the trajectory (at point $`𝐫`$), and $`𝐩_\nu ^{}`$ is the final momentum (at point $`𝐫^{}`$). The second approximation is to neglect quantum interference effects (weak localization corrections) and employ the “diagonal approximation”.
In this approximation, the double sum over trajectories in the product of Green functions, $`G_{osc}(ϵ^+)G_{osc}(ϵ^{-})=\underset{\nu \nu ^{}}{\sum }A_\nu A_{\nu ^{}}^{*}e^{iS_\nu -iS_{\nu ^{}}}`$, is replaced by a single sum in which one keeps only pairs of trajectories, $`(\nu ,\nu ^{})`$, related by symmetry. Noticing that $`\rho (𝐫_1,𝐫_2)\rho (𝐫_3,𝐫_4)=\rho (𝐫_1,𝐫_4)\rho (𝐫_3,𝐫_2)`$, and integrating over the coordinate differences $`𝐫_3-𝐫_2`$ and $`𝐫_4-𝐫_1`$, one thus obtains: $$I_a\approx \frac{\eta ^2}{2\pi ^2}\text{Re}\int d𝐫d𝐫^{}\underset{\nu }{\sum }|A_\nu |^2\rho (𝐱_\nu ^{})\rho (𝐱_\nu )e^{it_\nu \omega },$$ (13) where $`𝐱_\nu =(𝐫,𝐩_\nu )`$ and $`𝐱_\nu ^{}=(𝐫^{},𝐩_\nu ^{})`$ are the initial and final phase-space coordinates of the $`\nu `$-th trajectory. Now, to evaluate the sum over $`\nu `$, we use the sum rule: $`(2\pi )^{d-1}{\displaystyle \underset{\nu }{\sum }}|A_\nu |^2g(𝐱_\nu ^{},𝐱_\nu ,t_\nu )=`$ (14) $`{\displaystyle \int _0^{\mathrm{\infty }}}𝑑t{\displaystyle \int }𝑑𝐩^{}𝑑𝐩g(𝐱^{},𝐱,t)\delta (ϵ-\mathcal{H}(𝐱))\delta (𝐱^{}-𝐱(t)),`$ (15) where $`g(𝐱^{},𝐱,t)`$ is a general function of the time $`t`$ and of the initial, $`𝐱`$, and final, $`𝐱^{}`$, phase space points. $`𝐱(t)`$ is the phase space coordinate, after time $`t`$, of a trajectory starting from $`𝐱(0)=𝐱`$. Eq. (14) can be proved by performing the integrals on its right hand side. For this purpose it is convenient to use a local coordinate system with one coordinate parallel to the orbit and $`d-1`$ perpendicular to it. Substituting (14) in (13) and integrating over $`𝐱`$ yields $$I_a\approx \frac{2\eta ^2}{(2\pi )^{d+1}}\text{Re}\int _0^{\mathrm{\infty }}𝑑t\int 𝑑𝐱e^{it\omega }\delta (ϵ-\mathcal{H}(𝐱))\rho (𝐱(t))\rho (𝐱).$$ (16) Using the definition of the microcanonical average (10) and the correlation function (3), one obtains $`K_1(\omega )=2I_a/\langle \rho \rangle ^2\beta `$, where $`\beta `$ is the symmetry factor associated with the additional contribution in the case of time reversal symmetry.
To evaluate $`I_b`$, the term associated with returning trajectories, we expand the actions of the Green functions in (12) to first order in $`𝐫_2-𝐫_1`$ and $`𝐫_4-𝐫_3`$. Integrating over these coordinate differences, the result takes the form $`I_b=\eta ^2\text{Re}\left\{I(\omega )I^{}(\omega )\right\}/2\pi ^2`$, where $$I(\omega )=\int 𝑑𝐫\underset{\nu }{\sum }A_\nu \rho (𝐱_\nu )e^{iS_\nu (ϵ+\frac{\omega }{2};𝐫,𝐫)}.$$ (17) Here $`𝐱_\nu =(𝐫,\overline{𝐩}_\nu )`$ is a phase space point corresponding to the $`\nu `$-th returning trajectory at point $`𝐫`$, where $`\overline{𝐩}_\nu =(𝐩_\nu ^{}+𝐩_\nu )/2`$ is the average of the initial, $`𝐩_\nu `$, and final, $`𝐩_\nu ^{}`$, momenta of the trajectory. The integral over $`𝐫`$ is now performed in the stationary phase approximation. The stationary phase condition, $`\partial S_\nu (ϵ;𝐫,𝐫)/\partial 𝐫=𝐩_\nu -𝐩_\nu ^{}=0`$, implies that the initial and final momenta of the orbit are equal; therefore, the main contribution to the integral (17) comes from the vicinities of periodic orbits. The result can be expressed as a sum over periodic orbits: $`I(\omega )=\underset{p}{\sum }A_p\rho _pe^{iS_p(ϵ)+it_p\omega /2}`$, where $`A_p`$, $`t_p`$ and $`S_p`$ are the amplitude, period, and action of the $`p`$-th periodic orbit, respectively. $`\rho _p=\int 𝑑t\rho (𝐱_p(t))`$ is the time integral of $`\rho (𝐱)`$ along the orbit $`𝐱_p(t)`$. Next, we analyze the product $`I(\omega )I^{}(\omega )`$, which in the diagonal approximation is given by $`(2/\beta )\underset{p}{\sum }|A_p|^2\rho _p^2e^{it_p\omega }`$.
Following Eckhardt et al., $`\rho _p^2={\displaystyle \int _0^{t_p}}𝑑\tau {\displaystyle \int _0^{t_p}}𝑑\tau ^{}\rho (𝐱_p(\tau ))\rho (𝐱_p(\tau ^{}))=`$ (18) $`{\displaystyle \int _0^{t_p}}𝑑t{\displaystyle \int _{-\frac{t_p}{2}}^{\frac{t_p}{2}}}𝑑t^{}\rho (𝐱_p(t-{\displaystyle \frac{t^{}}{2}}))\rho (𝐱_p(t+{\displaystyle \frac{t^{}}{2}}))=`$ (19) $`=t_p{\displaystyle \int _{-\frac{t_p}{2}}^{\frac{t_p}{2}}}𝑑t^{}\langle \rho (𝐱(t^{}))\rho (𝐱)\rangle _p,`$ (20) where $`\langle \mathrm{}\rangle _p=t_p^{-1}\int 𝑑t(\mathrm{})`$ denotes the time average along the $`p`$-th periodic orbit. Thus $`I(\omega )I^{}(\omega )=(2/\beta )\underset{p}{\sum }t_p|A_p|^2e^{it_p\omega }\int _{-\frac{t_p}{2}}^{\frac{t_p}{2}}𝑑t^{}\langle \rho (𝐱(t^{}))\rho (𝐱)\rangle _p`$. The sum over the periodic orbits can be converted into an integral over time using the relation $`{\displaystyle \underset{t<t_p<t+\delta t}{\sum }}t_p|A_p|^2\langle \mathrm{}\rangle _p=`$ (21) $`=\delta t{\displaystyle \int 𝑑𝐱\delta [ϵ-\mathcal{H}(𝐱)]\delta [𝐱_{}-𝐱_{}(t)](\mathrm{})},`$ (22) where $`𝐱_{}`$ is a phase space coordinate on the energy shell $`\mathcal{H}(𝐱)=ϵ`$, while $`𝐱_{}(t)`$ is the position of the system after time $`t`$, starting from $`𝐱_{}(0)=𝐱_{}`$. Using (21), the sum over periodic orbits is converted into a time integral which yields formula (6) for $`K_2(\omega )`$, with the correlation function: $`C_\tau (t)={\displaystyle \int 𝑑𝐱\frac{\rho (𝐱(t))\rho (𝐱)}{\langle \rho \rangle ^2}\delta [ϵ-\mathcal{H}(𝐱)]\delta [𝐱_{}-𝐱_{}(\tau )]}.`$ (23) The identification of the classical trajectories which underlie the main contribution to the correlator $`K(\omega )`$ clarifies the origin of the fluctuations in $`\sigma (ϵ)`$. These have two sources: the wave functions of the system and the density of states. $`K_1(\omega )`$, associated with general trajectories, is related to fluctuations in the wave functions, while $`K_2(\omega )`$, coming from the periodic orbits of the system, is related to fluctuations in the density of states. To proceed further, one needs to characterize the behavior of the correlation functions $`C(t)`$ and $`C_\tau (t)`$.
For this purpose we assume that the classical dynamics of the system is characterized by two time scales which are well separated. Such a situation appears, for example, in almost closed chaotic systems. There the long time scale is the typical time for dissociation, while the short time scale is the time it takes for a classical density distribution to relax to the ergodic distribution on the energy shell. We denote by $`\mathcal{H}_0`$ the closed Hamiltonian which controls the dynamics of the system for times shorter than the dissociation time. Under these assumptions $`C(t)\simeq \mathrm{\Delta }C_t(\tau )\simeq e^{-\gamma |t|}/\mathrm{\Delta }`$, where $`\gamma `$ is the dissociation rate, and $`\mathrm{\Delta }`$ is the mean spacing between resonances, i.e. $`(2\pi \mathrm{})^d/\mathrm{\Delta }=\int 𝑑𝐱\delta (ϵ-\mathcal{H}_0(𝐱))`$. We assume $`\mathrm{\Delta }`$ to be approximately constant within the interval of energy averaging, and from now on work in units where $`\mathrm{\Delta }=1`$. If $`\mathrm{\Delta }`$ is not constant, $`\sigma (ϵ)`$ should be unfolded appropriately. An infinite separation between the time scales mentioned above corresponds to the limit of random matrix theory. In this case $`K(\omega )`$ reduces to $$K_u(\gamma ,\omega )=\frac{2}{\beta \pi }\left(\frac{\gamma }{\omega ^2+\gamma ^2}+\frac{1}{2\pi }\frac{\gamma ^2-\omega ^2}{(\gamma ^2+\omega ^2)^2}\right).$$ (24) The first and the second terms of this formula come from $`K_1(\omega )`$ and $`K_2(\omega )`$, respectively. As can be seen, these components correspond to the leading and subleading terms of an expansion in $`1/\gamma `$. Formula (24), describing the correlations in the universal regime, was first derived by Fyodorov and Alhassid using the nonlinear $`\sigma `$-model. Here we confirm their conjecture that for overlapping resonances ($`\gamma >1`$) $`K(\omega )`$ can be derived by semiclassical methods. Yet, the range of validity of formulae (4-6) goes far beyond the universal regime.
They account also for system-specific contributions to $`K(\omega )`$, which might be of the same order as the universal result (24). To be concrete, we focus on the nonuniversal corrections due to the leading term, $`K_1(\omega )`$, and consider a representative behavior of the decay of correlations in open chaotic systems: $$C(t)\simeq e^{-\gamma |t|}+\alpha e^{-\gamma _2|t|}\mathrm{cos}(\omega _2t).$$ (25) Here $`\gamma _2`$ is the real part of the second Ruelle resonance of the classical system, while $`\omega _2`$ is its imaginary part. $`\alpha `$ is a constant of order unity which depends on details of the system and the initial state (9). The situation where $`\omega _2\ne 0`$ usually appears when a specific short periodic orbit has a strong influence on the dynamics of the system, since every typical trajectory stays in its vicinity for a long time. Thus, $`\omega _2\simeq 2\pi /t^{}`$, where $`t^{}`$ is the period of the orbit. Substitution of (25) in (4-6) yields: $$K(\omega )\approx K_u(\gamma ,\omega )+\frac{\alpha }{\pi \beta }\underset{\pm }{\sum }\frac{\gamma _2}{\gamma _2^2+(\omega \pm \omega _2)^2}.$$ (26) This formula is plotted for various parameters in Fig. 1. The solid line represents the universal form (24). The dashed line, corresponding to $`\omega _2=0`$, shows the typical behavior of systems where the decay of correlations is of a diffusive nature. In this case the main deviation from universality appears as an increase of correlations near the peak of $`K(\omega )`$, at $`\omega =0`$. The dotted line corresponds to a nonzero value of $`\omega _2`$, and characterizes the behavior of ballistic systems. Here the nonuniversal contribution is located in the tail of $`K(\omega )`$, near $`\omega =\omega _2`$, where the universal term (24) is already negligible. These plots demonstrate the significance of system-specific contributions to $`K(\omega )`$.
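Formulae (24) and (26) are straightforward to evaluate numerically; the sketch below (with illustrative parameter values, not those of Fig. 1) implements both and shows how a ballistic-type correction adds weight in the tail near $`\omega =\omega _2`$, where the universal part is negligible.

```python
import math

def K_universal(gamma, omega, beta=1):
    """Random-matrix limit, Eq. (24)."""
    g2, w2 = gamma ** 2, omega ** 2
    return (2.0 / (beta * math.pi)) * (
        gamma / (w2 + g2) + (g2 - w2) / (2.0 * math.pi * (g2 + w2) ** 2)
    )

def K_total(gamma, omega, alpha, gamma2, omega2, beta=1):
    """Universal part plus the nonuniversal Lorentzians of Eq. (26)."""
    extra = sum(
        gamma2 / (gamma2 ** 2 + (omega + s * omega2) ** 2) for s in (+1, -1)
    )
    return K_universal(gamma, omega, beta) + alpha * extra / (math.pi * beta)

# universal peak at omega = 0, for gamma = 2 and beta = 1
print(round(K_universal(2.0, 0.0), 4))                  # 0.3436
# in the tail, a correction with omega2 = 5 dominates the universal term
print(K_total(2.0, 5.0, 1.0, 0.5, 5.0) > K_universal(2.0, 5.0))  # True
```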
In conclusion, we derived the semiclassical relation between the two-point correlation function, $`K(\omega )`$, and the classical correlation functions $`C(t)`$ and $`C_\tau (t)`$. The first quantity characterizes the quantum mechanical process of photodissociation, while $`C(t)`$ and $`C_\tau (t)`$ characterize the dynamics of the classical counterpart of the system. In contrast with the two-point correlation function of the density of states, where periodic orbits play the major role, here we have found that the main contribution to $`K(\omega )`$ comes from general orbits. Moreover, the nonuniversal contributions to the correlator $`K(\omega )`$ can be of the same order as the universal result of random matrix theory (their relative magnitude being proportional to $`\gamma /\gamma _2`$). These results can be used to analyze the experimental data of complex molecules, extract information regarding their classical dynamics, and construct effective models for these molecules. I would like to thank Yan Fyodorov for pointing out the problem of the semiclassical derivation of formula (24), and Shmuel Fishman, Raphy Levine, and Nadav Shnerb for useful discussions and comments. This work was initiated at the “Extended Workshop on Disorder, Chaos and Interaction in Mesoscopic System” which took place in Trieste in 1998. I thank the I.C.T.P. for the generous hospitality.
# Anomalous Roughness in Dimer Type Surface Growth

## Abstract

We point out how geometric features affect the scaling properties of non-equilibrium dynamic processes, by means of a model for surface growth where particles can deposit and evaporate only in dimer form, but dissociate on the surface. Pinning valleys (hill tops) develop spontaneously and the surface facets for all growth (evaporation) biases. More intriguingly, the scaling properties of the rough one dimensional equilibrium surface are anomalous. Its width, $`W\propto L^\alpha `$, diverges with system size $`L`$ as $`\alpha =\frac{1}{3}`$ instead of the conventional universal value $`\alpha =\frac{1}{2}`$. This originates from a topological non-local evenness constraint on the surface configurations.

The theory of non-equilibrium dynamic statistical processes has developed rapidly in recent years. Driven systems display intriguing scaling properties and can undergo various types of dynamic phase transitions. Kardar-Parisi-Zhang type surface growth is an example. There, the properties of the depositing (evaporating) particles are not specified, but are implicitly presumed to be those of geometrically featureless monomers. In surface catalysis type processes the geometric shape of the molecules matters. The onset of the catalytic process is associated with a so-called absorbing-state dynamic phase transition. Monomers give rise to directed percolation and dimers to directed Ising type transitions. Subtle geometric features are known to be important in equilibrium crystal surface phase transitions as well. The competition between surface roughening and surface reconstruction depends on topological details of the crystal symmetry. Those determine whether a reconstructed rough phase can exist or not. Geometric features are also important in diffusing particle systems. The shape of diffusing particles introduces conservation laws and leads to anomalous decay of particle density autocorrelations.
Therefore, the natural question arises whether and how the shape of the deposited particles influences the growth and equilibrium properties of crystal surfaces. Consider a crystal built from atoms of type $`X`$. Assume that deposition always takes place in dimer form, $`X_2`$, aligned with the surface. The dimer attaches to 2 horizontal nearest neighbour surface sites. It loses its dimer character after becoming part of the crystal. Evaporation can take place only in $`X_2`$ molecular form, but a different partner is allowed. In this letter we study the one dimensional (1D) version of this process. This can apply to step shapes during step-flow type growth on vicinal surfaces. The adsorbed particles do not diffuse in this version of our model. However, the topological features that drive our results are preserved even when monomer diffusion is allowed but limited to terraces. Jumps across steps are unlikely due to Schwoebel barriers. So our main results do not alter in systems with diffusion. We describe the 1D surface configurations in terms of integer height variables $`h_i=0,\pm 1,\pm 2,\mathrm{}`$. They are subject to the so-called restricted solid-on-solid (RSOS) constraint, $`h_i-h_{i+1}=0,\pm 1`$, and periodic boundary conditions, $`h_{L+i}=h_i`$. The dynamic rule is as follows. First, select at random a bond $`(i,i+1)`$. If the two sites are not at the same height, no evaporation nor deposition takes place. If the two sites are at the same height, deposition of a dimer covering both sites is attempted with probability $`p`$, or evaporation of a dimer with probability $`q=1-p`$. Processes are rejected if they would result in a violation of the RSOS constraint. The first surprise is that the surface always facets during growth and evaporation, although the surface is rough in equilibrium. The second surprise is that the equilibrium surface width $`W\propto L^\alpha `$ scales with an anomalously small exponent $`\alpha =0.29\pm 0.04`$.
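The dynamic rule just described is easy to put into code; the following is a minimal Monte Carlo sketch (our illustration, not the authors' program). A useful check: every accepted move raises or lowers two sites together, so the sum of the heights changes by $`\pm 2`$ and its parity is conserved — the local fingerprint of the dimer constraint.

```python
import random

def simulate_dimer_rsos(L=64, p=0.5, sweeps=500, seed=7):
    """1D RSOS surface with dissociating-dimer deposition/evaporation."""
    rng = random.Random(seed)
    h = [0] * L
    for _ in range(sweeps * L):
        i = rng.randrange(L)
        j = (i + 1) % L
        if h[i] != h[j]:
            continue                        # dimers need two equal-height sites
        dh = 1 if rng.random() < p else -1  # deposit with prob p, else evaporate
        # reject moves that would violate |h_i - h_{i+1}| <= 1 at the outer bonds
        if abs(h[i] + dh - h[i - 1]) <= 1 and abs(h[j] + dh - h[(j + 1) % L]) <= 1:
            h[i] += dh
            h[j] += dh
    return h

h = simulate_dimer_rsos()
mean = sum(h) / len(h)
width2 = sum((x - mean) ** 2 for x in h) / len(h)
print(sum(h) % 2, width2 > 0)   # parity of sum(h) conserved (0); surface is rough
```

Measuring the width of such configurations as a function of $`L`$ at $`p=q=\frac{1}{2}`$ is how the anomalous equilibrium exponent above is obtained.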
The data could be consistent with an even smaller value, due to the strong finite size scaling corrections in Fig. 1(a). 1D surfaces, irrespective of being in equilibrium or in a stationary growing (evaporating) state, display, with only a few very specific exceptions, the universal roughness exponent $`\alpha =\frac{1}{2}`$. The up-down aspect of the steps becomes uncorrelated beyond a definite correlation length, and therefore the surface roughness obeys random walk statistics at large length scales, which implies $`\alpha =\frac{1}{2}`$. Our dissociating dimer deposition process circumvents this universal argument by means of a novel type of non-local topological constraint. The dimer aspect implies that all surface height levels must be occupied by an even number of particles. However, due to the dissociative nature of the dimers that information is not preserved locally. The “evenness” constraint is non-local. At local length scales the surface looks the same as in monomer deposition processes, but the global surface is much less rough. We checked this numerically. Define a window of length $`b`$. The surface roughness scales as $`W\propto b^\alpha `$, with $`\alpha =\frac{1}{2}`$ for $`b\ll L`$, but crosses over to the global finite size scaling exponent $`\alpha \simeq 0.29\pm 0.04`$ for $`b\simeq L`$. We performed a detailed numerical study of the properties of even-visiting random walks, which are globally restricted to visit each site an even number of times. The results, together with analytical scaling arguments, yield the value $`\alpha =\frac{1}{3}`$. This value lies within the numerical error bars in Fig. 1(a) for the dimer deposition model. The details of our random walk study are rather technical and will be presented elsewhere, but the essence can be captured by the following intuitive scaling argument. Consider the even-visiting random walks for the time interval $`0<t<T(=L)`$.
We assign a defect variable to each site, to mark that it has been visited by the random walker an odd/even number of times up to time $`t`$. The even-visiting constraint is satisfied when all defects disappear at time $`T`$. Initially, the random walker does not feel the constraint and diffuses freely for $`t<\tau _{free}\ll T`$. The defects are uniformly spread over a region of size $`\xi \sim \tau _{free}^{1/2}`$. The region then stops spreading and the walker starts to heal the defects. The healing time for a single defect in the region of size $`\xi `$ is of order $`\xi ^2`$. By assuming that the healing process for each defect is independent, we estimate the total healing time $`\tau _{heal}\sim \xi ^{d+2}`$ with spatial dimensionality $`d`$. As $`\tau _{heal}\gg \tau _{free}`$, the total time is dominated by healing, $`T\sim \tau _{heal}`$, and we conclude that $`\xi \sim T^{1/(2+d)}`$, i.e., $`\alpha =1/3`$ for $`d=1`$. The existence of a time scale $`\tau _{free}`$ explains the crossover behavior of the surface roughness in the window length $`b`$. Similarly, we can argue that the surface width scales with $`\alpha =\frac{1}{3}`$ in generalized $`n\ge 2`$ dissociating $`n`$-mer type deposition processes. The dynamic critical exponent $`z`$ at the equilibrium point follows from how the surface width diverges as a function of time, $`W\sim t^\beta `$. We find numerically that $`\beta =0.111\pm 0.002`$, see Fig. 1(b). This suggests the value $`z=3`$, since $`z=\alpha /\beta `$. The equilibrium surface roughness is unstable with respect to growth and evaporation. It facets immediately. This phase transition is second order. The correlation lengths that characterize the faceted structure diverge. Before addressing this issue we need to describe and explain the faceted phase. The valleys in the growing surface are sharp and the hill tops rounded, see Fig. 2. This shape is inverted for $`p<q`$. The faceting is caused by the spontaneous formation of pinning valleys during growth (pinning hill tops during erosion, for $`p<q`$). Consider, for example, dimers deposited on a flat surface for $`p>q`$.
Odd segments between them act as the nuclei of pinning valleys. Such valleys cannot be filled by direct deposition. The only way to get rid of them is by lateral movement of the sub-hills. In finite systems the surface grows in shocks. An initial rough or flat configuration grows fast at first, but pinning valleys appear randomly at all surface heights. The interface develops into a rough faceted structure with many sub-hills and growth almost stops. From here on the surface advances only when sub-hills anneal out by the lateral movement of the pinning valleys. The annealing time of a sub-hill scales exponentially with its size. This exponentially slow healing process leads ultimately to a faceted $`W`$ shape with only two remaining pinning valleys. Their lifetime diverges exponentially with the lattice size. After their demise the surface experiences a growth spurt, and the cycle restarts all over. The mechanism for lateral movement of pinning valleys is the exchange of active bonds between ramps. Active bonds are locations along the ramp where a dimer can deposit or evaporate. Most steps on the ramps are only one or two atomic units wide and therefore dynamically inactive, see Fig. 2. Active bonds move up or down the ramps by deposition or evaporation of dimers. The growth bias $`p>q`$ gives them an upward drift. This must lead to an exponential distribution. Fig. 3(a) shows the logarithm of the active bond distribution, $`\rho (x)`$, versus the horizontal distance $`x`$ from the center of a hill top for various values of $`p>q`$. The straight lines for large $`x`$ confirm the exponential distribution of active bonds along the ramps, $`\rho (x)\sim \mathrm{exp}[-x/\xi _f]`$. We determined this from odd lattice sizes, in particular $`L=257`$, where the surface contains an odd number of pinning valleys and therefore reaches a $`V`$-shaped stationary state in which it remains pinned at all times.
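An exponential profile $`\rho (x)\sim \mathrm{exp}[-x/\xi _f]`$ suggests extracting $`\xi _f`$ as the inverse slope of $`\mathrm{log}\rho `$ versus $`x`$. The sketch below does such a fit on synthetic data with a known $`\xi _f`$ (the profile and noise level are invented for illustration; this is not the measured data of Fig. 3):

```python
import math
import random

def fit_xi(xs, rho):
    """Least-squares slope of log(rho) vs x; for rho(x) ~ exp(-x/xi_f)
    the decay length is xi_f = -1/slope."""
    logs = [math.log(r) for r in rho]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(logs) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, logs))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope

# synthetic profile with xi_f = 12 and mild multiplicative noise
rng = random.Random(0)
xs = list(range(1, 60))
rho = [math.exp(-x / 12.0) * (1 + 0.02 * (rng.random() - 0.5)) for x in xs]
xi = fit_xi(xs, rho)
```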
Every now and then an active bond moves in the opposite direction, against the flow, and reaches the valley bottom. That pinning valley moves by two lateral lattice constants when the active bond jumps across onto the other ramp. The probability for this is very small, and scales exponentially with the ramp length, but it is larger from the lower ramp, and therefore the valley bottom moves in the direction of the lower hill, and actually accelerates, because that lower hill keeps shrinking. Near the rounded hill tops, the surface remains highly active and initially the active bond density does not decrease significantly with $`x`$. This defines a second characteristic length scale, $`\xi _0`$, representing the flatness of the rounded hill tops, see Fig. 2. Surprisingly, both lengths, $`\xi _f`$ and $`\xi _0`$, diverge at $`p=q`$. Fig. 3(b) shows that the curves in Fig. 3(a) collapse according to a single length scaling form $$\rho (ϵ,x)=ϵ^{\beta _\rho }f(ϵ^\nu x)$$ (1) with $`ϵ=p-q`$, a scaling function $`f`$, $`\nu =1.0(1)`$, and $`\beta _\rho =0.0(1)`$. This means that on approach of the $`p=q`$ critical point the hills maintain their shape, in the sense that $`\xi _0`$ and $`\xi _f`$ diverge simultaneously and with the same exponent, $`\xi _0\simeq \xi _f\sim (p-q)^{-1}`$. It is surprising that both length scales of the faceted phase diverge at the equilibrium point. The structure of the rough phase would be much more complex if one of them remained finite. This actually happens in the following generalization of the model. Recently, Alon et al. added to the conventional monomer-type RSOS growth model the constraint that evaporation from flat segments is forbidden. It remains unclear how this can be experimentally implemented, but the interesting aspect of their model is the presence of a roughening transition, belonging to the directed percolation (DP) universality class , and unconventional roughness properties at this transition.
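A scaling collapse like Eq. (1) can be checked mechanically: divide each curve by $`ϵ^{\beta _\rho }`$ and evaluate it against the rescaled abscissa $`u=ϵ^\nu x`$; curves for different $`ϵ`$ must then coincide. A sketch on synthetic profiles with $`\xi _f=1/ϵ`$ (a toy form assumed for illustration, using the measured exponents $`\beta _\rho =0`$, $`\nu =1`$):

```python
import math

def rho(eps, x):
    """Synthetic active-bond profile with xi_f = 1/eps (eps = p - q);
    this stands in for the measured curves of Fig. 3(a)."""
    return math.exp(-x / (1.0 / eps))

beta_rho, nu = 0.0, 1.0

def rescaled(eps, u):
    # the x that maps to rescaled abscissa u, then the rescaled ordinate
    x = u / eps ** nu
    return rho(eps, x) / eps ** beta_rho

# if the single-length form holds, curves at different eps coincide at equal u
us = [0.5 * k for k in range(1, 20)]
mismatch = max(abs(rescaled(0.05, u) - rescaled(0.2, u)) for u in us)
```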
Our model becomes a directed Ising (DI) type generalization of this when we disable digging on flat surface segments. Modify the evaporation probability $`q`$ to $`rq`$ when both neighbours are at the same height as the update pair $`(i,i+1)`$, i.e., $`h_{i-1}=h_i=h_{i+1}=h_{i+2}`$. At $`r=0`$, the no-digging limit, it becomes impossible to dig into the crystal layers beneath the current lowest exposed level. That level itself becomes frozen as well when it fills up completely. Figure 4 shows the phase diagram. The rough equilibrium point broadens into a rough phase (the shaded area). Along the $`DIE`$ phase boundary, the surface growth is zero. Inside the rough phase the surface grows, but slowly. Its scaling properties are complex and obscured numerically by huge corrections to scaling. The surface width $`W`$ seems to grow logarithmically in time for $`L>2^{10}`$, and maybe the stationary state width scales logarithmically as well, but extracting the true scaling properties is a rather hopeless endeavor. The origin of this complexity is easy to pinpoint. The rough surface grows slowly, although the bare coupling constants, $`p<q`$, favour evaporation. It performs a delicate balancing act. The surface erosion picture from $`r=1`$ still holds along faceted ramp segments. There the surface evaporates due to the downward drift velocity of active bonds along the slope, but this is frustrated by the emergence of pinning hill tops. The surface grows at flat surface segments due to an upward pressure created by the reduced digging probability factor $`r`$, but the formation of pinning valleys limits this. Moreover, the non-local evenness constraint is at work as well. Growth and evaporation are dynamically balanced only along the faceting transition line $`DIE`$ (Fig. 4). Everywhere else the rough surface grows slowly. The properties of the two faceted phases confirm the above intuitive picture.
The erosion faceted phase signals the local stability of eroding ramps inside the rough phase, while the growth faceted phase indicates that flat segments persist. We have numerical evidence showing that the active bond characteristic length $`\xi _f`$ of the erosion faceted phase does not diverge along the roughening transition line $`DIE`$ for $`r<1`$. This confirms that eroding ramps remain locally stable. On the other side of the phase diagram, the flatness length scale $`\xi _0`$ of the growth faceted phase does not diverge along the $`p=q`$ roughening line. This confirms the persistence of locally stable flat segments inside the rough phase. To illustrate this, we present some of the details of the latter. Recall that along $`r=1`$, the active bond distribution $`\rho (x)`$ for different $`p`$ collapses onto a single curve (Fig. 3$`(b)`$). For $`r<1`$ this fails. At the transition point $`p=q`$, $`\rho (x)`$ scales algebraically, as $`\rho (x)\sim x^{-1}`$, see Fig. 5$`(a)`$, but only beyond the central flat part of the hills. The flatness length scale $`\xi _0`$ remains finite. Its value varies as $`\xi _0\sim |1-r|^{-\nu }`$, with $`\nu =1.0(1)`$ along the line $`p=q`$. In Fig. 5$`(a)`$, $`\xi _0`$ can be as large as $`\xi _0\simeq 40`$. This explains the poor finite size convergence of the surface roughness inside the rough phase. The scaling properties of the $`p=q`$ faceting transition follow from the behaviour of the active bond density. Numerically, the surface width scales at $`p=q`$ as $`W\sim L`$, as in the faceted phase. From the perspective of the rough phase the faceting transition takes place when the total active bond density, $`\overline{\rho }=\frac{1}{L}\int _0^L\rho (x)\,dx`$, vanishes, because in the faceted phase the $`\xi _0`$ segments of the rounded hill tops are of measure zero compared to the ramp segments. We find $`\overline{\rho }\sim (q-p)^{\beta _\rho }`$ with $`\beta _\rho `$ very close to 1.
At $`p=q`$ itself, the power law $`\rho (x)\sim x^{-1}`$ predicts that $`\overline{\rho }`$ scales with system size as $`\overline{\rho }\sim \mathrm{ln}L/L`$. The numerical data in Fig. 5$`(b)`$ confirm this. Finally, $`\overline{\rho }`$ decays at $`p=q`$ algebraically in time with exponent $`0.32(1)`$. This suggests a dynamic exponent $`z=3.1(2)`$. Erosion below the currently lowest exposed level becomes strictly forbidden at $`r=0`$. The evaporation faceted phase becomes flat, and a directed Ising (DI) type roughening transition takes place at $`p=p_{DI}=0.317(1)`$. Hinrichsen and Odor already documented this. They independently introduced the $`r=0`$ limit of our model. They also report that the surface width scales at $`p_{DI}`$ as $`\sqrt{\mathrm{log}(t)}`$, and inside the rough phase as $`\mathrm{log}(t)`$. In summary, the presence of a non-local topological constraint on equilibrium surface configurations in dissociating dimer type surface growth leads to anomalously reduced surface roughness, with exponent $`\alpha =\frac{1}{3}`$ instead of the conventional value $`\alpha =\frac{1}{2}`$. Moreover, the growing (evaporating) surface is always faceted, due to the spontaneous creation of pinning valleys (hill tops). Under other circumstances, in particular when the digging probability on flat surface segments is suppressed, an intermediate slowly growing rough phase appears with complex scaling properties and strong corrections to scaling, due to the presence of large internal length scales. This research is supported by NSF grant DMR-9700430, by the KOSEF through the SRC program of SNU-CTP, and by the Korea Research Foundation (98-015-D00090).
no-problem/9812/cond-mat9812242.html
# Magnetic and Charge Correlations of the 2-dimensional 𝑡-𝑡'-𝑈 Hubbard model

## I Introduction

The spin rotation invariant slave boson representation is applied to the $`t`$-$`t^{}`$-$`U`$ model. This model is expected to be relevant to the physics of high-temperature superconductors, since it includes a reasonable description of their band structure. It is a good candidate for the description of itinerant ferromagnetism too. Both behaviors are expected to occur in different regions of the phase diagram. Indeed it can be thought of as consisting of three characteristic regions, depending on whether the magnetic fluctuations are strong or not, and whether they are ferromagnetic or anti-ferromagnetic. The size of these regions is tuned by $`t^{}`$. In the non-interacting limit the role of $`t^{}`$ is to shift the van Hove singularity that lies in the middle of the band for $`t^{}=0`$ to the lower band edge for $`t^{}=-t/2`$ or to the upper band edge for $`t^{}=t/2`$. The extension of this physics to the weak coupling regime has been extensively studied by Lin and Hirsch , Bénard et al. , and Lavagna and Stemman . They found that, for large negative $`t^{}`$, the physics is dominated by strong ferromagnetic fluctuations in the low density domain, and by strong antiferromagnetic fluctuations in the vicinity of half-filling. Quantum Monte Carlo (QMC) simulations have been performed too. In particular Veilleux et al. confirmed this behavior, and thus put it on a stronger basis. They also established that the static and uniform magnetic susceptibility goes over a maximum when the system is doped off half-filling. Recently Hlubina et al. studied the same model at densities corresponding to the van Hove singularity, and found that the system is an itinerant ferromagnet for large negative $`t^{}`$, and an itinerant antiferromagnet for small negative $`t^{}`$.
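The shift of the van Hove singularity with $`t^{}`$ can be made concrete with a histogram density of states. The sketch below assumes the dispersion convention $`ϵ_k=-2t(\mathrm{cos}k_x+\mathrm{cos}k_y)-4t^{}\mathrm{cos}k_x\mathrm{cos}k_y`$, chosen so that negative $`t^{}`$ pushes the singularity towards the lower band edge as stated above (sign conventions for $`t^{}`$ vary between papers):

```python
import math

def dos_peak(tp, t=1.0, n=400, bins=80):
    """Histogram density of states of
    eps_k = -2t(cos kx + cos ky) - 4 t' cos kx cos ky
    on an n x n grid; returns the bin-center energy of the most
    populated bin, i.e. the van Hove peak position."""
    es = []
    for ix in range(n):
        kx = 2 * math.pi * ix / n
        for iy in range(n):
            ky = 2 * math.pi * iy / n
            cx, cy = math.cos(kx), math.cos(ky)
            es.append(-2 * t * (cx + cy) - 4 * tp * cx * cy)
    lo, hi = min(es), max(es)
    w = (hi - lo) / bins
    counts = [0] * bins
    for e in es:
        counts[min(int((e - lo) / w), bins - 1)] += 1
    b = counts.index(max(counts))
    return lo + (b + 0.5) * w
```

With this convention, `dos_peak(0.0)` sits at the band center, while `dos_peak(-0.5)` sits at the lower band edge.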
Unfortunately these techniques can only be applied in the weak to intermediate coupling regime, because of the minus sign problem for QMC simulations, and because the RPA is intrinsically a weak coupling approach. For strong coupling one usually resorts to variational methods (for a recent discussion see ). In order to cover the entire parameter range it is tempting to apply the Kotliar and Ruckenstein slave boson approach . It not only proved to yield ground state energies very close to the exact ones, but also very realistic values for the structure factors . Until now little attention has been paid to the charge structure factor. The aim of this brief report is two-fold. First we calculate the $`q`$-dependent magnetic susceptibility in order to determine in which domain of the phase diagram the antiferromagnetic, ferromagnetic and incommensurate fluctuations dominate, both in the intermediate and strong coupling regimes. Second we calculate the charge structure factor. We then show that strong magnetic fluctuations are systematically accompanied by a clear reduction of the charge structure factor, i.e. by a frustration of the charge dynamics. In this model this happens for negative $`t^{}`$, in the hole-doped region. For positive $`t^{}`$, the charge structure factor is enhanced and the magnetic fluctuations suppressed. We obtained this result using the spin-rotation invariant (SRI) slave boson representation of the Hubbard model , and the expression for the spin and charge dynamical susceptibilities we recently derived and applied to the Hubbard model . We note that this representation has been recently revisited by Ziegler et al. . Our expressions for the susceptibilities remain unchanged by their considerations.

## II Formalism

In this work we calculate the spin and charge dynamical susceptibilities of the two dimensional $`t`$-$`t^{}`$-$`U`$ model.
The Hamiltonian reads: $$H=-\sum _{i,j,\sigma }t_{i,j}c_{i\sigma }^{+}c_{j\sigma }+U\sum _in_{i\uparrow }n_{i\downarrow }$$ (1) We consider the case where the hopping integral is $`t_{i,j}=t`$ for nearest neighbors, $`t_{i,j}=t^{}`$ for next-nearest neighbors and $`t_{i,j}=0`$ otherwise. To this aim we apply the spin-rotation invariant (SRI) slave boson formulation of the Hubbard model to the one-loop calculation of the susceptibilities that we applied to the Hubbard model. In this framework the dynamical spin susceptibility is given by $$\chi _s(\stackrel{}{k},\omega )=\frac{\chi _0(\stackrel{}{k},\omega )}{1+A_\stackrel{}{k}\chi _0(\stackrel{}{k},\omega )+A_1\chi _1(\stackrel{}{k},\omega )+A_2[\chi _1^2(\stackrel{}{k},\omega )-\chi _0(\stackrel{}{k},\omega )\chi _2(\stackrel{}{k},\omega )]},$$ (2) where $`\chi _n(\stackrel{}{k},\omega )`$ $`=`$ $`\sum _{\stackrel{}{p},i\omega _n,\sigma }(t_\stackrel{}{p}+t_{\stackrel{}{p}+\stackrel{}{k}})^nG_{0\sigma }(\stackrel{}{p},i\omega _n)G_{0\sigma }(\stackrel{}{p}+\stackrel{}{k},\omega +i\omega _n)(n=0,1,2)`$ (3) In the low frequency regime Eq. (2) has an RPA form, to which it reduces in the weak coupling limit. It nevertheless differs from it in two important respects. First the effective interaction $`A_\stackrel{}{k}`$ does not grow indefinitely as $`U`$ grows, but saturates at a fraction of the average kinetic energy in the strong coupling regime. Second it is $`k`$-dependent. Therefore if a magnetic instability of the paramagnetic phase at a given density develops towards an incommensurate phase, characterized by a wave vector $`\stackrel{}{q}`$, this wave vector will be different from the wave vector $`\stackrel{}{p}`$ at which $`\chi _0(\stackrel{}{p},0)`$ reaches its maximum. Thus at a given density, the wave-vector $`\stackrel{}{q}`$ characterizing the phase towards which the paramagnetic phase can be unstable to, depends on the interaction strength, in contrast to the ordinary RPA.
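The bubbles $`\chi _n`$ of Eq. (3) reduce, for $`n=0`$ and $`\omega =0`$, to the static Lindhard function, which can be evaluated by a direct Brillouin-zone sum. A sketch (Python; the dispersion convention, temperature, and grid size are illustration choices, spin-summed and per lattice site):

```python
import math

def chi0(qx, qy, tp=0.0, t=1.0, mu=0.0, T=0.2, n=64):
    """Static Lindhard susceptibility chi_0(q) for
    eps_k = -2t(cos kx + cos ky) - 4 t' cos kx cos ky
    at temperature T, evaluated on an n x n k-grid."""
    def eps(kx, ky):
        return (-2 * t * (math.cos(kx) + math.cos(ky))
                - 4 * tp * math.cos(kx) * math.cos(ky))

    def f(e):  # Fermi function
        return 1.0 / (math.exp((e - mu) / T) + 1.0)

    s = 0.0
    for ix in range(n):
        kx = 2 * math.pi * ix / n
        for iy in range(n):
            ky = 2 * math.pi * iy / n
            e1, e2 = eps(kx, ky), eps(kx + qx, ky + qy)
            if abs(e2 - e1) < 1e-9:
                fe = f(e1)
                s += fe * (1.0 - fe) / T      # degenerate limit: -df/de
            else:
                s += (f(e1) - f(e2)) / (e2 - e1)
    return 2.0 * s / n ** 2                    # factor 2 for spin
```

At half-filling with $`t^{}=0`$ the nesting condition makes $`\chi _0`$ largest at $`(\pi ,\pi )`$, consistent with the dominance of commensurate antiferromagnetic fluctuations near half-filling discussed below.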
It numerically turns out that the $`\stackrel{}{k}`$-dependence of $`A_\stackrel{}{k}`$ is enhanced by increasing the interaction strength. $`A_\stackrel{}{k}`$ is typically largest for $`k=0`$, and so is $`\chi _1(\stackrel{}{k})`$. The contribution involving $`A_2`$ is smallest, and has little influence on the magnetic properties. The numerous undefined symbols in Eqs. (2-3) can be gathered from Ref. , except for a misprint there: the third line of Eq. (A8) should read: $$\frac{\partial ^2z}{\partial d^2}=\frac{2\sqrt{2}p_0\eta }{1+\delta }\left(2d+x+\frac{6xd^2}{1+\delta }\right).$$ (4)

## III Results

We now proceed to the numerical results. We first calculate the density dependence of the static (but $`\stackrel{}{q}`$-dependent) magnetic susceptibility. In order to magnify the effect of $`t^{}`$, we perform the calculation for $`t^{}=-0.47t`$. Had we chosen $`t^{}=-0.5t`$, then the van Hove singularity would lie right at the lower band edge. For $`U=4t`$ and $`\beta =2`$ we display the density-dependence of $`\chi _s`$ for several $`\stackrel{}{q}`$-vectors in Fig. 1. For these parameters the paramagnetic phase does not show a magnetic instability. At a particular doping the maximum of $`\chi _s`$ (in its $`\stackrel{}{q}`$-dependence) tells us towards which phase an instability will develop. We checked numerically that this really happens at lower temperature. In the vicinity of half-filling $`\chi _s`$ is largest for the commensurate vector $`Q=(\pi ,\pi )`$. In the low-density range $`\chi _s`$ is maximal for $`q=0`$. This range is very large and extends from $`\delta \simeq 0.38`$ to $`\delta =1`$, $`\delta `$ being the hole doping. In this domain the fluctuations are predominantly ferromagnetic, because the system is making use of the van Hove singularity to reduce its free energy. Between these two regimes there is a small window where the instability is towards an incommensurate phase with $`\stackrel{}{q}`$ along the diagonal of the Brillouin zone.
We find that there is a value of the doping, which we denote $`\delta _0`$, beyond which $`\chi _s`$ is largest for $`q=0`$; $`\delta _0`$ decreases with increasing interaction. For $`U=0`$ we found it to be $`\delta _0=0.42`$, while for $`U\simeq 15t`$ $`\delta _0`$ goes to zero. We thus obtain that, for strong coupling, the paramagnetic phase is unstable towards ferromagnetism over the entire doping range. This dependence of $`\delta _0`$ on $`U`$ can be traced back to the $`q`$-dependence of the effective interaction $`A_\stackrel{}{k}`$ entering Eq. (2), as discussed below Eq. (3). It turns out that the $`q`$-dependence is weak for weak coupling, and gets stronger with increasing $`U`$. This plays a crucial role in assessing towards which phase a magnetic instability may develop. We note that this effect is neglected in the usual RPA and in the two-particle self-consistent approach . In those approaches all that matters is the $`q`$-dependence of the bare susceptibility $`\chi _0`$. We note that including $`t^{}`$ changes dramatically the phase diagram as compared to the $`t^{}=0`$ case. In the latter case ferromagnetism may only show up for very strong coupling ($`U\simeq 66t`$) and in a narrow doping region located around $`\delta \simeq 15\%`$ . The influence of $`t^{}`$ on the doping dependence of the uniform susceptibility is displayed in Fig. 2 for $`U=4t`$ and Fig. 3 for $`U=20t`$. For moderate coupling decreasing $`t^{}`$ changes the monotonic behavior of $`\chi _s(\delta )`$ into a non-monotonic one, which is typical of high-$`T_c`$ materials. The height of the maximum increases with $`|t^{}|`$, and its location is shifted towards higher doping. This behavior, as well as the location and the height of the maximum, agree with the QMC data of Veilleux et al.
We thus conclude that the non-monotonic behavior of $`\chi _s`$ for moderate coupling mostly results from band-structure effects. Experimentally the non-monotonic behavior of $`\chi _S`$ has been observed in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, where the maximum is reached for $`x\simeq 0.25`$ ; in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-x</sub> $`\chi _S`$ only increases with increasing hole doping, and one may assume that a maximum is reached for doping values that cannot be reached experimentally. According to Hybertsen et al. , $`t^{}=-0.16t`$ is relevant to La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, in which case our approach yields the location of the maximum of $`\chi _S`$ at $`\delta \simeq 0.20`$. However the dependence of $`\chi _S`$ on $`\delta `$ is too weak to reproduce the experimental data. In the strong coupling regime $`\chi _s`$ has a maximum in its doping-dependence for $`t^{}=0`$ . Decreasing $`t^{}`$ results in an enhancement of the maximum of $`\chi _s`$, and in a shift of it towards larger doping. For large $`|t^{}|`$ its location coincides with the van Hove singularity. Accordingly this non-monotonic behavior of $`\chi _s`$ for strong coupling results from a combination of interaction and band structure effects. We calculated $`\chi _s(\stackrel{}{q})`$ for other values of $`\stackrel{}{q}`$ too. For $`t^{}=-0.47t`$ it turned out that $`\chi _s(\stackrel{}{q}=0)`$ is largest over the entire doping range. Thus raising the interaction leads to a dramatic widening of the ferromagnetic domain. This is in agreement with the variational calculation of Pieri et al. , who investigated in more detail the low density route to ferromagnetism due to Müller-Hartmann . We now turn to the charge structure factor. The latter is calculated according to Eq.(13) and Eq.(17) of Ref. . We recall that the comparison of the slave boson charge structure factor to existing QMC data displays a quantitative agreement.
Here we perform the calculation for finite $`t^{}`$, for $`U=4t`$, $`\beta =8`$ and quarter-filling, and display the result in Fig. 4. As compared to the $`t^{}=0`$ result, decreasing $`t^{}`$ (i.e. $`t^{}/t`$ becoming increasingly negative) substantially suppresses the charge structure factor, especially around $`(0,\pi )`$, but also around $`(\pi ,\pi )`$ for the largest $`|t^{}|`$. This suppression results from a frustration of the charge dynamics. We note that this suppression takes place in the parameter regime where the tendency towards magnetism is strongest. This leads us to propose that this is a general situation: frustrated charge dynamics and magnetic instabilities occur simultaneously in strongly correlated systems. This is well known for the Hubbard model at half-filling, where the charge dynamics is so strongly frustrated that the system becomes insulating and the physics is dominated by strong anti-ferromagnetic fluctuations. Here the effect is less dramatic since the system remains metallic, but clearly noticeable. In the opposite case of a positive $`t^{}`$ (i.e. $`t^{}/t>0`$) the charge structure factor gets essentially flat along the side of the Brillouin zone. It is particularly enhanced in the vicinity of $`(0,\pi )`$. This can be partly understood using a local picture. Assuming that the charges can form either a checkerboard or a stripe pattern, one sees that the number of frustrated bonds is larger for the former pattern. This plays little role if $`t^{}`$ is negative, since one hop along the diagonal costs energy, but an increasingly important one when $`t^{}`$ is increasingly positive. Clearly no such charge ordering occurs here, but tendencies towards such patterns emerge. The frustration effect is absent, and we indeed did not find any sign of a magnetic instability.
The correlation between frustrated charge dynamics and enhanced magnetic fluctuations can be traced back to the relationship between the local susceptibilities and the density $$\frac{1}{\beta }\sum _{\stackrel{}{k},i\nu _n}\left(\chi _S(\stackrel{}{k},i\nu _n)+\chi _c(\stackrel{}{k},i\nu _n)\right)=2n,$$ (5) which follows from the Pauli principle. Indeed if the local charge susceptibility is reduced, the magnetic one is enhanced, and vice versa. In Fig. 5 we display the density dependence of the charge structure factor, for $`U=4t,\beta =8`$ and $`t^{}=-0.47t`$. Upon increasing the density, $`S_c`$ first goes up, until $`\delta \simeq 0.25`$, and then goes down on approaching half-filling. In weak coupling one would expect $`S_c`$ to decrease upon doping, but the opposite behavior holds in a dense strongly correlated system. In summary we studied the charge and magnetic properties of the $`2`$-dimensional $`t`$-$`t^{}`$-$`U`$ model. We found that a negative $`t^{}`$ has a strong influence on the phase diagram, a very large portion of it being dominated by strong ferromagnetic fluctuations. We also showed that frustrated charge dynamics and strong magnetic fluctuations occur simultaneously. We found a tendency toward striped phases only for large positive $`t^{}`$.

## IV Acknowledgments

We thank P. Wölfle for valuable comments and his steady encouragement. We are grateful to T. Kopp, H. Kroha, H. Beck and M. Capezzali for useful discussions. R. F. is grateful for the warm hospitality at the Institut für Theorie der Kondensierten Materie of Karlsruhe University, where part of this work has been done. This work has been supported by the Deutsche Forschungsgemeinschaft through Sonderforschungsbereich 195. One of us (RF) is grateful to the Fonds National Suisse de la Recherche Scientifique for financial support.
no-problem/9812/astro-ph9812270.html
# The Active Corona of HD 35850 (F8 V)

## 1 Introduction

By analogy to the Sun, magnetic heating in the chromospheres, transition regions, and coronae of late-type stars is observed as cooling in the Balmer lines, the Ca II lines, and in UV, FUV, EUV, and soft X-ray continuum and line emission. Results from the Mt. Wilson Ca II H and K survey of single lower main-sequence field stars have provided compelling evidence of Solar-like chromospheric active regions, activity cycles, differential rotation, and occasional Maunder minima in mid-F to early-M stars (e.g., Noyes et al. 1984; Baliunas et al. 1996; Gray & Baliunas 1997; Baliunas et al. 1995). Coronal heating and magnetic dynamos on the Sun and in stars have been discussed extensively in the literature (cf. Haisch & Schmitt 1996). The various flares, microflares, nanoflares, and brightenings observed on the Sun are thought to be manifestations of current sheet reconnection, and much of the coronal heating debate is focussed on whether or not these transient events can account for the radiative output of the solar corona (e.g., Oreshina & Somov 1998). While both continuous (e.g., MHD waves) and stochastic (nanoflaring) processes probably contribute to magnetic coronal heating, the dominant process has yet to be identified. Empirically, coronal emission on active, cool, main-sequence stars is similar to flare emission from solar active regions. For example, Benz & Güdel (1994) find a similar correlation between X-ray luminosity and non-thermal continuum radio luminosity for solar flares and active stars. On the Sun, mildly relativistic electrons accelerated along coronal magnetic field lines produce gyrosynchrotron radio emission.
The presence of persistent but variable nonthermal radio emission from cool stars indicates a continual replenishment of the relativistic electron population while a correlation between coronal and radio emissions suggests a causal relationship between electron acceleration in magnetically confined loops and coronal heating. In the 50,000–200,000 K regime, further evidence of continual flaring comes from the broad emission-line components seen in high-resolution UV spectra of active dwarfs and RS CVn binaries. Because the broad-component line widths are 2–4 times the thermal width and because the broad components are blue-shifted relative to the narrow components, Wood et al. (1996) suggest that the broad components are produced by transition-region explosive events. Two well-studied examples of extreme main-sequence magnetic activity are the Pleiades-age, rapid rotators AB Doradus (K0 V) and EK Draconis (G0 V). Spectro-polarimetric monitoring of AB Dor shows long-lived, cool magnetic spots and latitudinal differential rotation (Donati & Collier Cameron 1997). ROSAT PSPC soft X-ray photometry of AB Dor (Kürster et al. 1997) shows rotationally modulated flare and quiescent emission with the same phase and period (12.4 h) as the photospheric spots. ASCA and EUVE spectra of AB Dor and EK Dra indicate coronal emission-measure distributions, $`\mathrm{Em}(T)`$, with peaks at 5–8 MK and 20–30 MK (Mewe et al. 1996; Güdel et al. 1997). The 5–8 MK component has been observed in other moderately active coronal sources, most notably Capella (Brickhouse, Raymond, & Smith 1995). As has been pointed out by Gehrels & Williams (1993), optically thin plasma tends to accumulate at temperatures where the cooling curve $`\mathrm{\Omega }(T)`$ has a positive slope. 
This is a manifestation of the Parker (1953) & Field (1965) instability: when $`\frac{d\mathrm{\Omega }(T)}{dT}>0`$, plasma cools more efficiently at slightly higher temperatures and less efficiently at slightly lower temperatures. At temperatures where $`\frac{d\mathrm{\Omega }(T)}{dT}<0`$, plasma quickly cools to lower temperatures. For solar-abundance plasmas, $`\frac{d\mathrm{\Omega }(T)}{dT}>0`$ in the ranges 0.7–1.2 MK, 4.5–7 MK, and above 21 MK. Thus, the shape of the cooling curve explains the dip from 2–4 MK and from 9–20 MK seen in many EM distributions and provides some physical justification for two- and three-temperature coronal models. The temperature of the hot component and the relative amount of hot emission measure must reflect a balance between magnetic heating and radiative cooling. For stars with significant emission measure above 20 MK, some mechanism must be super-heating the coronal plasma at fairly regular intervals. For three active, solar-mass stars observed with the ASCA SIS (EK Dra, HN Pegasii, and $`\kappa ^1`$ Ceti in order of decreasing activity), Güdel (1997) has modeled $`\mathrm{Em}(T)`$ by assuming X-rays are produced by an ensemble of flaring and cooling loops. The Güdel (1997) model is a modification of the solar nanoflare model proposed by Kopp & Poletto (1993). While the ASCA data cannot constrain many model parameters (number of loops, loop dimension, and mean magnetic field strength), the models can only reproduce the observed $`\mathrm{Em}(T)`$ provided that the power-law distribution of flare and microflare energies has index $`\alpha \ge 2`$. High S/N spectra of other active dwarfs are needed to test the viability of continuous flaring and to establish the importance of this mechanism as a function of rotation rate and effective temperature. In this paper, we present EUV and X-ray spectroscopy of HD 35850 = HR 1817 (F8 V). HD 35850 was detected serendipitously with EXOSAT (Cutispoto et al.
1991) and follow-up optical spectroscopy has established it as a nearby, single, solar-metallicity Pleiades-age ($`t\sim 10^8`$ yr), rapid rotator (Tagliaferri et al. 1994). We are particularly interested in HD 35850 because it is probably in the extreme state of magnetic activity for single, main-sequence F stars. There is considerable interest in probing the magnetic dynamo in F stars because they possess relatively shallow convection zones. Kim & Demarque estimate that, in a $`10^7`$ yr-old $`1.1`$–$`1.2M_{\odot }`$ star like HD 35850, the convective turnover time is $`\tau _\mathrm{c}\approx 20`$ d. In a young $`1.0M_{\odot }`$ star like EK Dra, $`\tau _\mathrm{c}\approx 40`$ d. ## 2 Observations and Data Analysis HD 35850 was observed by the Extreme Ultraviolet Explorer from 1995 October 23 08:17:07 UT to 1995 October 30 07:20:34 UT. The EUVE Deep Survey/Spectrometer consists of three aligned grazing-incidence telescopes. The telescope beams are intercepted by short-, medium-, and long-wavelength reflection gratings and detected with microchannel-plate detectors (SW, MW, and LW, respectively). The undispersed portions of the three beams are focussed onto the Deep Survey microchannel-plate detector, DS. This way, EUVE obtains simultaneous, time-resolved 75–120 Å broad-band photometry and 70–160 Å, 170–370 Å, and 300–525 Å medium-resolution spectra (cf. Haisch, Bowyer, & Malina 1993). In support of this EUVE observing program, Ca II HK spectra were obtained on 1996 October 2–12. HD 35850 was observed with the ASCA SIS and GIS for $`\sim 18`$ ks on 1995 March 12. The reduction and analysis of these data are described below. ### 2.1 EUVE Deep Survey Data The EUVE Deep Survey/Spectrometer records photon events whenever the source is visible by the satellite, including times when the satellite is passing through the South Atlantic Anomaly (SAA). 
These SAA passages are characterized by increased background levels which lead to significant and poorly characterized loss of telemetry, nicknamed “primbsching”. Correcting for earth occultations, the on-source time was $`\sim 203`$ ks. Correcting for instrument and telemetry dead times, the net exposure time was $`\sim 198`$ ks. The Deep Survey (DS) events were screened at various count-rate and primbsch-correction thresholds. Light curves were generated with bin times ranging from 100 to 2000 s. Source events were extracted from a $`25`$ pixel radius circle centered on the PSF centroid. Background events were extracted from an annulus with inner and outer radii of $`37`$ and $`67`$ pixels, respectively. In Figure 1, we show the background-subtracted, dead-time and primbsch-corrected 75–120 Å DS light curve for HD 35850 using 1000 s bins and including times when primbsching (monitor Det7Q1dpc) was below 25%. The mean DS background-corrected count rate of HD 35850 is 0.20 counts s<sup>-1</sup>. No large flares are evident in the DS light curve: the variability amplitude is approximately 100% and the most significant deviation from the mean count rate is a $`5\sigma `$ peak near MJD 18.45. The light curve, however, shows small to moderate flares throughout the observation. At least 7 events have a peak count rate $`3\sigma `$ above the apparent “quiescent” level of 0.20 counts s<sup>-1</sup>. To test the hypothesis that the observed counts come from a non-variable source, we have determined the cumulative distribution function of the unbinned photon arrival times for the observed data and for simulated data from a constant, 0.2 counts s<sup>-1</sup> source. A two-sample Kolmogorov-Smirnov test statistic (Feigelson & Babu 1992) was calculated using the observed and simulated distributions; the probability that the two data sets were drawn from the same parent population is low ($`P<10^{-9}`$). We conclude that the DS light curve exhibits significant variability. 
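The variability test described above can be sketched with SciPy's two-sample Kolmogorov–Smirnov routine. The event counts, rates, and flare epochs below are illustrative placeholders, not the actual DS data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Simulated arrival times from a constant-rate source over ~200 ks:
# a homogeneous Poisson process has uniformly distributed arrival times.
t_constant = np.sort(rng.uniform(0.0, 2.0e5, 40000))

# Toy "observed" arrival times: the same quiescent rate plus a few small
# flares, mimicking the episodic brightening seen in the DS light curve.
flares = [rng.normal(loc=c, scale=2000.0, size=1500) for c in (3e4, 9e4, 1.5e5)]
t_observed = np.sort(np.concatenate([rng.uniform(0.0, 2.0e5, 36000)] + flares))
t_observed = t_observed[(t_observed > 0.0) & (t_observed < 2.0e5)]

# Two-sample KS test on the unbinned arrival-time distributions;
# a tiny p-value rejects the constant-source hypothesis.
stat, p_value = ks_2samp(t_observed, t_constant)
print(f"KS statistic = {stat:.4f}, p = {p_value:.3g}")
```

With many thousands of events, even a few-percent deviation between the cumulative distributions yields a vanishingly small probability of a constant source.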
We have used the binned DS light curve to compute the discrete Fourier transform (Scargle 1989), shown in Figure 2. There is significant power around 0.98 d and its aliases. For HD 35850, $`v\mathrm{sin}i\approx 50`$ km s<sup>-1</sup> and $`R_{\ast }\approx 1.18R_{\odot }`$ imply $`P_{\mathrm{rot}}/\mathrm{sin}i\approx 1.1`$ d, consistent with the 0.98-d period. However, Halpern & Marshall (1996) report that 0.98 d is a beat period associated with the satellite’s passage through the SAA. For example, during the HD 35850 observation, the Det7Q1dpc monitor shows strong periodic signals at 0.984 d and at 0.0656 d. Similar periods have been seen in other primbsch-corrected light curves (Marshall 1998, private communication). We thus conclude that some or all of the 0.98-d signal seen in the DS light curve results from SAA passages. We note that the only period seen in HD 35850’s power spectrum that cannot be attributed to primbsching is at 1.40 d. Until further observations can be carried out, we tentatively identify 1.40 d as the rotation period of HD 35850 (see §2.2). ### 2.2 McMath-Pierce Optical Spectroscopy Medium-resolution Ca II HK spectra were obtained with the Solar-Stellar Spectrograph at the National Solar Observatory’s McMath-Pierce Telescope on 1996 October 2–12, ten days before the EUVE observation. The spectra around HK are typical of active, late-type dwarfs, with broad absorption troughs and bright, narrow emission cores. To calculate the equivalent width of the H and K emission, we measured the excess emission in the line cores, normalizing by the integrated continuum intensity in a 1 Å box far from line center. In Figure 3 we plot the equivalent width of the Ca II H (dots) and K (asterisks) emission cores. While the equivalent widths are highly variable on short time scales, HD 35850 was only visible for a few hours each night. The limited sampling makes it difficult to estimate $`P_{\mathrm{rot}}`$ with these data alone. 
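The period search described above can be sketched as a plain FFT power spectrum of an evenly binned light curve (a simplification of the Scargle periodogram, which also handles uneven sampling). The rate, amplitude, and noise level below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a ~7-day light curve in 1000-s bins with a 1.40-d rotational
# modulation plus Gaussian noise (all values are illustrative).
dt = 1000.0                      # bin time, s
t = np.arange(0.0, 7 * 86400.0, dt)
p_true = 1.40 * 86400.0          # injected period, s
rate = 0.20 * (1.0 + 0.3 * np.sin(2.0 * np.pi * t / p_true))
rate += rng.normal(0.0, 0.02, t.size)

# Discrete Fourier power spectrum of the mean-subtracted light curve.
freqs = np.fft.rfftfreq(t.size, d=dt)
power = np.abs(np.fft.rfft(rate - rate.mean())) ** 2

# Skip the zero-frequency bin and read off the strongest period.
best = 1 + int(np.argmax(power[1:]))
p_found_days = 1.0 / freqs[best] / 86400.0
print(f"strongest period: {p_found_days:.2f} d")
```

The frequency resolution of a one-week baseline is coarse, so the recovered period lands in the Fourier bin nearest 1.40 d.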
In Figure 4, we have phase-folded the summed H and K equivalent widths using $`P_{\mathrm{rot}}=1.40`$ d as derived from the DS photometry. ### 2.3 EUVE SW and MW Spectrometer Data For the SW, MW, and LW spectroscopic data, the photon event data were screened to eliminate times of high background using the IRAF EUV package. Specifically, data were excluded when the total detector count rate exceeded 40 counts s<sup>-1</sup>. The SW, MW, and LW photon event lists were used to create wavelength-calibrated FITS images and the SW, MW, and LW images were then analyzed within IDL. We note that during the HD 35850 observation, the SW microchannel-plate detector developed a hot spot 8–12 detector pixels from the dispersion axis, close to the location of the Fe XXII 114 Å line. A light curve of the hot spot intensity was used to identify the times when the hot spot flickered; those times were removed from the photon event list. Some hot pixels were still visible in the SW image, so a $`9\times 9`$ pixel box around the hot spot was set to the local background level. Spectra have been extracted from the SW, MW, and LW images using a modified version of the IUE SIPS spectral extraction algorithm (Lenz & Ayres 1992). The optimal extraction routine is specifically tailored to low S/N spectra from photon-counting devices like the EUVE and IUE spectrometers. Briefly, a raw SW, MW, or LW $`2048\times 2048`$ pixel image is trimmed and compressed to $`1024\times 290`$ and 2-pixel Gaussian smoothed. Since the spectra are over-sampled (no less than 7 detector pixels per resolution element), 2-pixel compression and smoothing improves S/N per bin without compromising spectral resolution. Using a predefined background region away from the source spectrum chosen to reduce curved edge effects caused by the EUVE reflection gratings, a background image is generated. A background-subtracted source image and corresponding error image are then created. 
The source image is used to measure the cross-dispersion profile. To perform an optimal extraction, each row in the source image is weighted by the cross-dispersion profile and the source spectrum and corresponding error are extracted using a 7-pixel aperture. No emission lines longward of 365 Å were detected in the LW count spectrum, presumably because of increased interstellar absorption at long wavelengths. Since the lines shortward of 365 Å are measured in the MW spectrum, we did not consider the LW spectrum in our analysis. In Figures 5 and 6 (bottom panels), we show the flux-calibrated SW and MW spectra. #### 2.3.1 IDL Line and Continuum Fitting In order to measure line fluxes, we have constructed a line list using the emissivity lists of Brickhouse et al. (1995), Monsigniori Fossi & Landini (1993), and Mewe et al. (1985), in order of priority. Lines were grouped to match the SW and MW spectral resolution (0.5 and 1 Å, respectively), allowing us to account for emissivity from blended lines. Based on the ionization states of the brightest identified lines, e.g., Fe XV, Fe XVI, and Fe XX, we made a starting guess at the plasma temperature ($`\mathrm{log}T\approx 6.8`$) and assumed a density $`n_\mathrm{e}=10^{11}`$ cm<sup>-3</sup>. The line emissivities, spectrometer effective area curves, and exposure time were used to provide a starting guess for each line strength. The observed spectrum was then fit with multiple Gaussians, varying wavelength ($`\pm `$ one resolution element) and line strength. Lines with fewer than $`2\sigma `$ counts were eliminated from the fit and the procedure was repeated. This way, line counts and upper limits were determined for all high-emissivity Fe IX to Fe XXIV lines. The SW spectrum shows a small but detectable thermal bremsstrahlung continuum. To determine the SW continuum luminosity from 85–135 Å, the continuum bins were fit with two parameters (emission measure and temperature) using the continuum emissivity model of Mewe et al. (1985). 
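The profile-weighted extraction step can be sketched as follows; the image dimensions, profile width, and noise level are toy values, and the weighting shown is the simple uniform-noise case of Horne-style optimal extraction, not the actual SIPS implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy background-subtracted spectrometer image: cross-dispersion rows by
# wavelength columns, with a Gaussian cross-dispersion profile.
n_rows, n_cols = 15, 200
rows = np.arange(n_rows)
profile = np.exp(-0.5 * ((rows - 7.0) / 1.5) ** 2)
profile /= profile.sum()

cols = np.arange(n_cols)
true_spectrum = 50.0 + 30.0 * np.exp(-0.5 * ((cols - 80) / 3.0) ** 2)
image = profile[:, None] * true_spectrum[None, :]
image += rng.normal(0.0, 0.5, image.shape)        # uniform read noise

# Profile-weighted extraction: for uniform noise the optimal per-column
# estimate is sum_i(P_i * D_i) / sum_i(P_i^2).
extracted = (profile[:, None] * image).sum(axis=0) / (profile ** 2).sum()
```

Weighting by the cross-dispersion profile down-weights rows that carry little signal, which is what makes the estimate lower-noise than a straight aperture sum.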
The Hipparcos distance to HD 35850 is 26.8 pc (Perryman et al. 1997). For HD 35850, the 85–135 Å continuum luminosity is $`L_{\mathrm{cont}}=(4.6\pm 0.4)\times 10^{28}`$ ergs s<sup>-1</sup>. In Figs. 5 and 6 (top and middle panels), we show the observed and fit count spectra with line identifications. Detected lines are indicated with dotted lines and upper limits are indicated with dashed lines. In Tables 1 and 2, we list the ion, possible blends, temperature of peak emissivity, laboratory and measured wavelengths, a detection flag, measured line luminosities, and the signal-to-noise ratio for detected lines. The line luminosities in Tables 1 and 2 use $`N_\mathrm{H}=1.7\times 10^{18}`$ cm<sup>-2</sup> (see §2.3.3). In all, 28 distinct lines are detected above $`2\sigma `$ in the SW and MW spectra. #### 2.3.2 Electron-Density and Column-Density Estimates The Fe XV and Fe XVI $`\lambda \lambda 285,335,365`$ line ratios are relatively insensitive to density and provide the best estimate of interstellar $`N_\mathrm{H}`$ in the EUVE bandpass. We find $`N_\mathrm{H}=(1.9\pm 0.4)\times 10^{18}`$ cm<sup>-2</sup>. Two density-sensitive line ratios can be measured accurately from the SW spectrum of HD 35850: Fe XXII $`\lambda \lambda 114,117`$ and Fe XXI $`\lambda \lambda 102,129`$. In Figure 7 we plot the observed SW spectrum (histogram) and fitted lines and continuum (solid line). The density-sensitive lines are indicated with dashed lines. The Fe XXI line ratio is $`0.22\pm 0.05`$. The predicted branching ratio (solid curve) and observed line ratio (cross with 1$`\sigma `$ error bar) are plotted versus density in Figure 8. Fig. 8 suggests that $`\mathrm{log}n_\mathrm{e}<11.6`$ cm<sup>-3</sup>. Consequently, we have chosen $`\mathrm{log}n_\mathrm{e}=11.0`$ cm<sup>-3</sup> for the IDL and SPEX emission-measure analyses. The Fe XXII line ratio is $`0.47\pm 0.14`$, suggesting $`n_\mathrm{e}>10^{13}`$ cm<sup>-3</sup>. 
This ratio is unusually high and the implied density does not agree with the more reliable Fe XXI measurement. The detector hot spot (§ 2.3) may have contributed to the anomalous 114 Å flux. We note that line ratio calculations may have systematic uncertainties of up to 50% (Brickhouse et al. 1995) and that the derived density limit is only approximate. #### 2.3.3 IDL Emission-Measure Analysis The measured line luminosities and upper limits have been used to estimate the coronal emission-measure distribution, $`\mathrm{Em}(T)`$. Previous analyses of EUVE spectra (e.g., Güdel et al. 1997; Mewe et al. 1996) have generally relied on global fits to estimate $`\mathrm{Em}(T)`$. For low S/N spectra like these, we prefer to use the bright lines in Tables 1 and 2 because (i) the emissivities of brighter lines have been fairly well established and (ii) the $`\chi ^2`$ statistic is not dominated by continuum bins. We have excluded the He II $`\lambda 304`$ line from our analysis. On the Sun, the He II Ly$`\alpha `$ line is not a reliable temperature indicator: some, perhaps most, of the He II emission occurs in the chromosphere as a result of back-heating from the corona. We note also that around $`\lambda 255`$, the Fe XXIV line is blended with He II. The combined flux from both lines is listed as an upper limit in Table 2. For the remaining lines, our fitting procedure minimizes $`\chi ^2`$ between the measured and predicted line luminosities by varying the amplitude and shape of $`\mathrm{Em}(\mathrm{log}T)`$ (in $`\mathrm{\Delta }\mathrm{log}T=0.1`$ bins). We use exponential Chebyshev polynomials (Lemen et al. 1989) to describe $`\mathrm{Em}(\mathrm{log}T)`$. In addition to detected emission lines, the undetected Fe X to Fe XIV lines have been used as additional constraints in the non-linear least-squares fitting. The SW continuum luminosity serves to constrain Fe abundance. 
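One way to implement the exponential-Chebyshev parameterization of $`\mathrm{Em}(\mathrm{log}T)`$ (after Lemen et al. 1989) is sketched below with NumPy and SciPy. The emissivity curves, line set, and coefficient values are toy placeholders, not the actual atomic data or fit:

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import least_squares

# log T grid in 0.1-dex bins, mapped onto [-1, 1] for the Chebyshev basis.
logT = np.arange(5.8, 7.8 + 1e-9, 0.1)
x = 2.0 * (logT - logT[0]) / (logT[-1] - logT[0]) - 1.0

def em_model(coeffs):
    """Em(log T) as the exponential of a Chebyshev expansion, which keeps
    the emission measure positive by construction."""
    return np.exp(C.chebval(x, coeffs))

# Toy Gaussian emissivity curves G_l(log T) for five lines peaking at
# different temperatures (real curves come from a plasma emission code).
peaks = np.array([6.2, 6.5, 6.8, 7.1, 7.4])
G = np.exp(-0.5 * ((logT[None, :] - peaks[:, None]) / 0.20) ** 2)

def predicted(coeffs):
    # L_l = sum_T G_l(T) * Em(T): a discrete analogue of the usual integral.
    return G @ em_model(coeffs)

# Fake "measured" luminosities from a known coefficient vector, 5% errors.
c_true = np.array([0.2, 0.4, -0.8, 0.3, 0.1, 0.0, 0.0])   # order-6 polynomial
L_obs = predicted(c_true)
sigma = 0.05 * L_obs

# Chi-square fit: vary the Chebyshev coefficients of Em(log T).
fit = least_squares(lambda c: (predicted(c) - L_obs) / sigma, x0=np.zeros(7))
```

The exponential wrapper is the key design choice: it enforces positivity of the emission measure without explicit constraints in the optimizer.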
Best-fit emission-measure distributions have been derived for a grid of column densities, Fe abundances, and Chebyshev-polynomial orders: $`0.0\le N_\mathrm{H}\le 4.0\times 10^{18}`$ cm<sup>-2</sup>, $`0.1\le Z\le 2.0`$, and $`2\le n\le 11`$, where $`Z`$ is the coronal Fe abundance relative to the solar photospheric value of Anders & Grevesse (1989). We obtain acceptable $`\chi _\nu ^2`$ values for all $`n\ge 6`$ and the resulting $`\mathrm{Em}(T)`$ are characterized by peaks at $`\mathrm{log}T\approx 6.8`$ and $`\mathrm{log}T\approx 7.4`$. The best-fit column densities, abundances, and emission-measure distributions are not a strong function of $`n`$. As a result, we choose $`n=6`$, the lowest polynomial order for which we obtain good fits. Fits using different polynomial orders, column densities, and abundances yield qualitatively similar results: prominent peaks in the emission-measure distribution at $`\mathrm{log}T\approx 6.8`$ and $`\mathrm{log}T\approx 7.4`$, although the sharpness of the high-T peak is not well constrained. The relative strength of the high-T component increases with increasing abundance. The best-fit emission-measure distributions all go to zero at very high temperatures ($`\mathrm{log}T>7.7`$). In Figure 9, we plot $`\mathrm{log}\mathrm{\Delta }\chi ^2`$ as a function of $`N_\mathrm{H}`$ and $`Z`$. The best fit ($`\chi _\nu ^2\approx 1.07`$) is obtained for $`N_\mathrm{H}=1.7\times 10^{18}`$ cm<sup>-2</sup> and $`Z=1.15`$. The 68% and 90% confidence contours are plotted for 9 free parameters ($`Z`$, $`N_\mathrm{H}`$, and $`n=0,\mathrm{\dots },6`$). The EUVE line-to-continuum ratio yields $`Z=1.15_{-0.35}^{+0.75}`$. Obtaining substantially sub-solar abundances ($`Z<0.5`$) would require more than doubling the continuum level (dark solid line in Fig. 7). This is not compatible with the background-subtracted SW spectrum. We have experimented with various background-subtraction algorithms (smoothed, unsmoothed, polynomial fit): all produce bright lines and a weak continuum. 
We find adequate fits for a range of column densities and abundances, and, in particular, we find $`\chi _\nu ^2\approx 1.15`$ for $`N_\mathrm{H}=1.4\times 10^{18}`$ cm<sup>-2</sup> and $`Z=1`$. We adopt these values in Figs. 10–14. In Figure 10, the observed and predicted line luminosities for detected (filled circles) and undetected (open triangles) lines are compared, with a factor of two deviation indicated with dashed lines. Note the anomalously high Fe XXII line flux (see §2.3.2). In Figure 11, we plot $`\mathrm{Em}(\mathrm{log}T)`$ per $`\mathrm{log}T`$ bin of 0.1 dex as derived from the IDL line analysis (dash-dotted line). #### 2.3.4 SPEX Differential Emission-Measure Analysis In order to verify the results of the IDL line analysis, we performed a differential emission measure analysis of the optimally-extracted EUVE SW and MW spectra using the SPEX code (see Kaastra et al. 1992 for a detailed description of the DEM method and the SPEX Collisional Ionization Equilibrium model). For fixed values of $`N_\mathrm{H}`$ and $`Z`$, we derived $`\mathrm{Em}(T)`$ using the SPEX regularization method. In Fig. 11, we plot $`\mathrm{Em}(T)`$ for $`N_\mathrm{H}=1.4\times 10^{18}`$ cm<sup>-2</sup> and $`Z=1`$ (solid line). The IDL and SPEX analyses yield similar emission-measure distributions, with 55–60% of the emission measure in the hotter component above $`\mathrm{log}T=7`$. ### 2.4 ASCA SIS data An 18-ks ASCA exposure of HD 35850 obtained in 1995 has been analyzed by Tagliaferri et al. (1997). We use their reduced ASCA SIS0 spectrum to further constrain the EUVE results. Because the SIS cannot resolve individual emission lines from coronal sources, we cannot use the IDL line analysis method described in §2.3.1. Consequently, we fitted the ASCA SIS0 spectrum using the SPEX DEM code. For the SIS0 fits, $`N_\mathrm{H}`$ was fixed at $`1.4\times 10^{18}`$ cm<sup>-2</sup>. 
The best-fit abundance using the DEM collisional ionization equilibrium (CIE) model is $`Z\approx 0.5`$ with acceptable fits in the range 0.34–0.81. The ASCA upper bound on $`Z`$ is plotted in Fig. 9 as a dashed line. We note that $`Z\approx 0.8`$ is marginally compatible with EUVE and ASCA. To illustrate, the ASCA SIS0 data are shown in Figure 12 (points) with the DEM model spectrum (histogram) for $`Z=1`$. Discrepancies between the SIS data and the $`Z=1`$ model are seen around 1.2 keV and 2.4 keV. These are the spectral features which normally drive the fit towards lower abundances. Brickhouse et al. (1997) have identified a complex of lines from 1.0–1.3 keV from highly excited ($`n>5`$), highly ionized (Fe XVII to Fe XXV) states that are missing from the plasma codes in XSPEC and SPEX. The missing lines may partly explain the large discrepancy around 1.2 keV. In Figure 13, we show the corresponding ASCA SPEX $`\mathrm{Em}(\mathrm{log}T)`$ for $`Z=1`$. Like the EUVE emission-measure distributions, the ASCA $`\mathrm{Em}(T)`$ peaks at $`\mathrm{log}T`$ of 6.7 and 7.4. Finally, we note that the EUVE count rates are higher than expected from the SIS spectrum. Based on the SIS emission-measure analysis, the expected EUVE DS count rate is approximately 0.16 counts s<sup>-1</sup>, indicated in Fig. 1 as a dashed line. HD 35850 appears to have been more active (by $`\sim 25\%`$) during the EUVE observation. ### 2.5 Comparison with Previous Results For HD 35850, we find that the EUVE line-to-continuum ratio indicates approximately solar coronal Fe abundance while a SPEX DEM analysis of the ASCA SIS0 spectrum is consistent with moderately sub-solar abundances. We note that HD 35850’s photospheric Fe abundance is close to solar (Tagliaferri et al. 1994). Analyses of other active coronal sources often find coronal abundances far below the measured photospheric abundances (e.g., AB Dor, Mewe et al. 1996). The ASCA SIS and GIS spectra of HD 35850 were fit by Tagliaferri et al. 
(1997) using a number of multi-temperature plasma models in XSPEC (Arnaud 1996). For example, the best-fit SIS0 MEKAL parameters indicate a two-temperature corona ($`kT_1\approx 0.6`$ keV and $`kT_2\approx 1.2`$ keV) with sub-solar abundances ($`0.12<Z<0.25`$). Tagliaferri et al. (1997) also performed a combined PSPC/SIS/GIS analysis using a 5-ks ROSAT PSPC observation of a field near HD 35850. The combined analysis involved comparing 2-T and 3-T MEKAL and RS models in XSPEC with solar and non-solar abundances and using different cross-detector normalization constraints (see their Table 5 and Figure 3). The XSPEC fits generally favor sub-solar abundance models; the best-fit MEKAL 3-T model yields $`Z=0.34\pm 0.04`$, $`T_1=0.52\pm 0.07`$, $`T_2=0.78_{-0.09}^{+0.15}`$, and $`T_3=1.9_{-0.4}^{+1.0}`$, with $`47\%`$ of the total emission measure in the coolest component. This cool component is the $`\mathrm{log}T=6.8`$ component seen in the EUVE DEM. The two hottest components appear to represent the hotter DEM component. Our DEM analysis of the ASCA SIS spectra yields somewhat higher abundances ($`0.34<Z<0.81`$) than Tagliaferri et al. (1997) and somewhat lower Fe abundance than our EUVE analysis. Systematic differences between ASCA and EUVE analyses have been reported for other bright coronal sources. For example, Brickhouse et al. (1997) obtained simultaneous observations of Capella with ASCA and EUVE in 1996 March. As with previously obtained EUVE spectra of Capella, they find essentially solar photospheric Fe abundance. On the other hand, the plasma codes in XSPEC yield consistently poorer fits to the ASCA SIS spectra of Capella and require sub-solar abundances ($`Z\approx 0.7`$). Note: After the submission of this paper, a preprint of a paper to appear in Astronomy and Astrophysics by Mathioudakis & Mullan (1998) came to our attention which analyzes the EUVE observations of HR 1817=HD 35850. 
This paper and the Mathioudakis & Mullan paper arrive at substantially different conclusions regarding the Fe abundance of HD 35850. Mathioudakis & Mullan (1998) visually compare the observed EUVE SW and MW spectra with synthetic spectra based on the 3-T MEKAL model suggested by Tagliaferri et al. (1997). They conclude that the EUVE spectra are consistent with Tagliaferri et al. (1997) and are inconsistent with any solar-abundance model, particularly because the Fe XIII–Fe XV lines in the MW spectrum are so weak. While it is difficult to visually compare spectra, their observed and synthetic MW spectra appear to be quite different: the predicted lines are weak and the low-Z model appears to over-predict the MW continuum (see Figures 2 and 5 in Mathioudakis & Mullan 1998). Given the limitations of ASCA and EUVE, higher-resolution, higher-S/N AXAF HETG or XMM RGS spectra may be needed to determine HD 35850’s coronal abundance. ## 3 Microflaring The double-peaked EM distribution observed on three active, solar-type stars has been modeled by Güdel (1997) on the basis of a simplified stochastic flare model derived from the solar nanoflare model of Kopp & Poletto (1993). Rather than approaching the EM distribution problem hydrostatically, as is the case when using loop scaling laws, Kopp & Poletto (1993) use a simplified hydrodynamic model to treat a large number of flares. The flare model reduces the loop hydrodynamics to a point model with a chromospheric energy sink. A series of energy pulses of finite duration is fed into the loop. This heating energy is lost by conduction into the chromosphere and by radiation into space. Without flaring, all loops are kept at the same equilibrium temperature dictated by hydrostatic loop scaling laws (Rosner, Tucker, & Vaiana 1978). Therefore, our model starts out at a lower threshold temperature and does not consider cooler loops. 
The salient feature of this model is its phenomenological similarity with more sophisticated hydrodynamic simulations in terms of the emission measure, temperature, and radiation history. The point model cannot treat the more complicated 1-D problem of radiation and conduction occurring between the corona, the transition region, and the chromosphere. The need to simulate a large number of flares requires such a simplified approach. Flares are ignited in each loop randomly distributed in time but satisfying a statistical number distribution in total flare energy $`E`$ above a pre-defined threshold energy. That is, $`\mathrm{d}N/\mathrm{d}E\propto E^{-\alpha }`$ where $`N(E)`$ is the number density of flares in the energy interval $`[E,E+\mathrm{d}E]`$. On the basis of solar and stellar optical flare monitoring, $`1.8<\alpha <2.0`$ has been suggested by Hudson (1991). For consistency with Güdel (1997), we chose a threshold energy $`E>10^{27}`$ ergs. In order to reduce the number of free parameters, all loops have the same height and vary only in their thickness between a minimum of $`6.4\times 10^6`$ cm and a maximum of $`4.3\times 10^9`$ cm to accommodate the range of flare energies. Each run produces a large number of small flares and only a small number of very large flares. Because no very large flares were observed by EUVE, the largest flares in the simulations were eliminated. The total EM distribution is determined by averaging (in time and space) the EM distribution of all loops. Similarly, the light curve is the superposition of coronal radiation leaving the star into $`2\pi `$ steradians, i.e., half the radiative losses are absorbed in the chromosphere, half are radiated into space. We have experimented with various parameters and find that our model reproduces the required coronal luminosity and EM distribution only if we assume compact loops, with a semi-length of $`2.6\times 10^9`$ cm and essentially full surface coverage. 
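The flare-energy statistics assumed here, $`\mathrm{d}N/\mathrm{d}E\propto E^{-\alpha }`$ above a threshold $`E_0`$, can be drawn by inverse-transform sampling. This is a sketch with illustrative parameters, not the actual simulation code:

```python
import numpy as np

rng = np.random.default_rng(7)

alpha = 1.8          # dN/dE ∝ E^(-alpha)
E0 = 1.0e27          # erg, threshold energy

# Inverse-transform sampling of a pure power law above E0:
# for u ~ U(0,1), E = E0 * (1 - u)^(1/(1-alpha)).
u = rng.uniform(size=200000)
E = E0 * (1.0 - u) ** (1.0 / (1.0 - alpha))

# The maximum-likelihood estimator of the index recovers alpha:
# alpha_hat = 1 + N / sum(ln(E/E0)).
alpha_hat = 1.0 + E.size / np.log(E / E0).sum()
print(f"alpha_hat = {alpha_hat:.3f}")
```

Because the distribution is steep, almost all sampled events are microflares near the threshold, which is exactly why the simulated light curves are dominated by low-amplitude variability.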
The simulations provide fairly tight constraints on the power-law index of flare energies ($`\alpha \approx 1.8`$). In this case, the equilibrium temperature is $`6\times 10^6`$ K. Our best-fit microflare model for HD 35850 uses 1100 loops, distributed evenly over the star’s surface, and accommodates $`2.64\times 10^7`$ flares during a typical observation lasting $`5\times 10^5`$ s. The resulting EM distribution is shown in Figure 14. The soft X-ray light curves from these simulations show the same 1–12 hour variability seen in ASCA SIS and EUVE DS light curves. Because the power-law distribution of flares favors small (microflare) events, the simulated light curves also show lower-amplitude, shorter-time scale variability seen in higher-S/N X-ray observations of dMe stars but not discernible for HD 35850. The model EM distribution exhibits the characteristic two-temperature structure with a broad intermediate minimum. However, in this model the EM structure is not just a result of radiative cooling as suggested by Gehrels & Williams (1993). It is a consequence of the balance between flare heating, cooling by conduction, and cooling by radiation. Given our simplistic model, it is noteworthy that the two-temperature structure (or rather the broad minimum around 10 MK) observed on many active normal stars can be explained, in part, by a hydrodynamic property of frequently-flaring loops. ## 4 The Active Corona of HD 35850 We summarize the combined EUVE and ASCA results for HD 35850 in Table 3. The ASCA SIS spectral analysis suggests sub-solar abundances and the EUVE line-to-continuum ratio indicates approximately solar photospheric abundances. Although $`Z\approx 0.8`$ is marginally consistent with the EUVE and ASCA spectra, all the ASCA analyses yield systematically lower Fe abundance than the EUVE Fe line-to-continuum ratio. Assuming photospheric abundances, the corona is characterized by emission measure in a warm component from 5–8 MK and a hot component from 21–30 MK. 
Like other rapidly rotating, Pleiades-age dwarfs, HD 35850 appears to represent an activity extremum for main-sequence solar-type stars. HD 35850’s inverse Rossby number is $`\tau _\mathrm{c}/P_{\mathrm{rot}}\approx 20.7/1.40\approx 15`$ and its X-ray surface flux is $`F_\mathrm{X}\approx 1.8\times 10^7`$ ergs s<sup>-1</sup> cm<sup>-2</sup>. For AB Dor (K0 V), $`F_\mathrm{X}\approx 1.7\times 10^7`$ ergs s<sup>-1</sup> cm<sup>-2</sup> (Hempelmann et al. 1995) and $`\tau _\mathrm{c}/P_{\mathrm{rot}}>120`$. For EK Dra (G0 V), $`F_\mathrm{X}\approx 1.5\times 10^7`$ ergs s<sup>-1</sup> cm<sup>-2</sup> and $`\tau _\mathrm{c}/P_{\mathrm{rot}}\approx 15`$. AB Dor, HD 35850, and EK Dra appear to have saturated or nearly saturated X-ray activity and, despite having more rapid rotation and a deeper convection zone, AB Dor’s X-ray surface flux is the same as HD 35850’s. The EUVE DS light curve of HD 35850 shows about one moderate-amplitude (40% increase) flare per day. The X-ray and EUV variability and the presence of substantial emission measure above 20 MK suggest that some flare-like mechanism must be heating the corona. To test this hypothesis, we have modeled HD 35850’s light curves and $`\mathrm{Em}(T)`$ using the hydrodynamic microflare point model of Güdel (1997). The simulations suggest a power-law index of flare energies $`\alpha \approx 1.8`$. On the Sun, transient brightenings seen in Yohkoh Soft X-ray Telescope images of solar active regions provide a direct measure of the microflaring amplitude and power-law index. Microflaring can provide, at most, 20% of the heating rate required to power the active-region corona (Shimizu 1995). Ofman, Davila, & Shimizu (1996) have shown that the transient brightenings seen by Yohkoh may be a consequence of resonant absorption of global-mode Alfvén waves in coronal loops, excited by random footpoint motions of these loops. 
A more recent analysis of Yohkoh data suggests that a steeper distribution of smaller flares, the so-called nanoflares, may also be present (Shimizu & Tsuneta 1997). Whether the microflares occur as a result of Alfvén waves or magnetic reconnection, they cannot account for the radiative output of the Solar corona. For the rapidly rotating F and G dwarfs, however, the microflare models are able to reproduce the observed luminosity, EM distribution, and variability. To better test microflaring on stars, accurate coronal density and abundance measurements as a function of temperature are needed to better constrain the emission-measure distribution. Photospheric magnetic field strength and filling fractions can further constrain loop models. Also, higher S/N, higher cadence hard and soft X-ray light curves will provide a statistically robust estimate of the distribution of flares. Planned observations of AB Dor with the Advanced X-ray Astrophysics Facility’s High-Energy Transmission Grating Spectrometer will help resolve some of the density and abundance issues discussed in this paper and may lead to more quantitative tests of coronal heating models. MG would like to thank Tom Ayres for providing some of the IDL routines to perform optimal extraction and line fitting. MG made extensive use of the NASA Astrophysics Data System abstract service, the HEASARC database at NASA/GSFC, and the Simbad database at the Centre de Données astronomiques de Strasbourg. The authors would like to thank an anonymous referee for many helpful suggestions. This research was supported under NASA grant NAG-2891 to the University of Colorado.
# Optical Fiber Communications: Group of the Nonlinear Transformations ## Abstract A new method for finding solutions of the nonlinear Schrödinger equation is proposed. A commutative multiplicative group of nonlinear transformations, which operate on stationary localized solutions, enables a consideration of fractal subspaces in the solution space, stability, and deterministic chaos. An increase of the transmission rate in optical fiber communications can be based on new forms of localized stationary solutions, without a significant change of input power. The estimated transmission rate is $`50Gbit/s`$ for certain available soliton transmission systems. The propagation of pulsed light in an optical fiber can be described by the nonlinear Schrödinger equation, $$i\frac{\partial q(\xi ,\tau )}{\partial \xi }+\frac{1}{2}\frac{\partial ^2q(\xi ,\tau )}{\partial \tau ^2}+|q(\xi ,\tau )|^2q(\xi ,\tau )=0,$$ $`(1)`$ where $`q(\xi ,\tau )`$ is a complex envelope function of the effective electric field amplitude and $$\xi \equiv x,\qquad \tau \equiv (t-x\frac{\partial k}{\partial \omega }).$$ $`(2)`$ The higher-order dispersion and the effect of fiber loss are neglected here. We take $$q=q_0e^{i\frac{q_0^2}{2}\xi }y(\tau ),$$ $`(3)`$ where $`y(\tau )`$ is a real function, and get $$y-\frac{1}{q_0^2}\frac{d^2y}{d\tau ^2}-2y^3=0.$$ $`(4)`$ The solution of this equation $$y_0(\tau )=\frac{1}{\mathrm{cosh}q_0\tau }$$ $`(5)`$ describes the optical soliton. Its unchangeable shape is a property that makes it attractive for application to ultra-high-speed optical communications. The equation (1) is completely integrable. The inverse scattering transformation method yields the general solutions of such nonlinear partial differential equations. Our aim is to propose here an alternative approach to the nonlinear Schrödinger equation and to discuss the applicability of the obtained results to optical fiber communications. The equation (4) describes a stationary pulse in an optical fiber. 
We take a localized solution $`y(\tau )`$ of this equation and define the nonlinear operator $`H_{c_1}`$: $$H_{c_1}y=\sum _{j=1}^{\infty }c_jy^j,$$ $`(6)`$ where $`c_j`$ are real coefficients. Does $`H_{c_1}y`$ satisfy the equation (4)? The case $`y=y_0`$ has already been considered, and the answer is positive. Putting $`H_{c_1}y`$ into the equation (4), we find that $`H_{c_1}y`$ is actually a solution of this equation if $$c_{2j}=0,$$ $`(7)`$ while the $`c_{2j+1}`$ satisfy the recursive relation $$c_{2j+1}=\frac{1}{2j(j+1)}\left\{j(2j-1)c_{2j-1}-\sum _{n=2}^{2j}c_{2j+1-n}\sum _{k=1}^{n-1}c_{n-k}c_k\right\},$$ $`(8)`$ where $`c_1`$ is an arbitrary coefficient. Using the relations (6)-(8), with $`c_1=1`$, we get $$H_1y=y.$$ $`(9)`$ In the following text, $`H_{c_1}`$ will mean both the series (6) and the recursion (8) with (7). For a localized $`y(\tau )`$ and a finite $`c_1`$, the convergence of the series (6) can be tested numerically. Our calculations show that $`H_{c_1}y`$ is localized too. Therefore, using different values of $`c_1`$, we are able to obtain uncountably many new localized solutions of the equation (4) from only one known localized solution (fig. 1). In the following text ”the solution” will mean ”the localized solution of the equation (4)”. The precision of the solution values is limited only by the number of calculated coefficients. No solution of a form different from (6) exists. Each pair of solutions $`z(\tau )`$ and $`y(\tau )`$ must be in a relation $`z=H_{c_1}y`$, with a specific value of $`c_1`$: $$c_1=\underset{\tau \to \pm \infty }{lim}\frac{z(\tau )}{y(\tau )}.$$ $`(10)`$ Starting with a solution $`y(\tau )`$ we can construct the complete solution space. There is an analogy to the superposition principle of linear theory. According to the relation (10), a solution is determined by its asymptotics.
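The recursion (7)-(8) is straightforward to implement. The following sketch (our own illustration; all names are ours) builds the odd coefficients for a given $`c_1`$, applies the truncated series to the soliton (5) with $`q_0=1`$, and checks the residual of equation (4) in the wings of the pulse, where the truncated series has safely converged:

```python
import numpy as np

def coefficients(c1, jmax):
    """Odd coefficients c_{2j+1} of H_{c1} from recursion (8); even ones vanish by (7)."""
    c = {1: float(c1)}
    for j in range(1, jmax + 1):
        s = 0.0
        for n in range(2, 2 * j + 1):
            inner = sum(c.get(n - k, 0.0) * c.get(k, 0.0) for k in range(1, n))
            s += c.get(2 * j + 1 - n, 0.0) * inner
        c[2 * j + 1] = (j * (2 * j - 1) * c[2 * j - 1] - s) / (2.0 * j * (j + 1))
    return c

def H(c1, y, jmax=30):
    """Apply the truncated nonlinear transformation H_{c1} to sampled values y."""
    return sum(cm * y**m for m, cm in sorted(coefficients(c1, jmax).items()))

tau = np.linspace(-8.0, 8.0, 4001)
y0 = 1.0 / np.cosh(tau)          # soliton (5) with q0 = 1
z = H(0.5, y0)                   # a candidate new localized solution

# Residual of eq. (4) for z, checked where |y0| < sech(2), i.e. where the
# truncated power series is far inside its region of rapid convergence.
dt = tau[1] - tau[0]
d2z = (z[2:] - 2.0 * z[1:-1] + z[:-2]) / dt**2
resid = z[1:-1] - d2z - 2.0 * z[1:-1] ** 3
wings = np.abs(tau[1:-1]) > 2.0
```

For $`c_1=1`$ the recursion returns vanishing higher coefficients, reproducing (9), and composing two transformations numerically reproduces the group law (12) to truncation accuracy.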
The nonlinear Schrödinger equation has an infinite number of symmetries corresponding to the conserved quantities: total energy, momentum, Hamiltonian, … . We find that there are actually uncountably many conserved quantities. Let us consider the total energy only (for $`H_{c_1}y`$): $$q_0^2\int _{-\infty }^{\infty }(c_1y+c_3y^3+c_5y^5+\mathrm{\dots })d\tau .$$ $`(11)`$ We can choose uncountably many different values of $`c_1`$ and use the relations (7) and (8). The relations (6)-(8) yield $$H_{a_1}H_{b_1}=H_{a_1b_1}.$$ $`(12)`$ Hence $$\{H_{c_1};c_1\ne 0\}$$ $`(13)`$ is a commutative multiplicative group of nonlinear transformations (GNT). The group properties of the GNT originate from the group properties of the real numbers $`c_1\ne 0`$. For example, $$H_{c_1}H_{1/c_1}=H_1.$$ $`(14)`$ For a definite coefficient $`c_1`$ and solution $`y(\tau )`$, we can construct a fractal subspace of the solution space. The fractal subspace covers solutions of the form $$H_{c_1}H_{c_1}\mathrm{\dots }H_{c_1}y.$$ $`(15)`$ In the phase plane, a fractal subspace is represented by a geometrical fractal (fig. 2). For optical fiber communications it is an important question whether small perturbations will destroy the information-carrying pulses. The solution parameters, amplitude (pulse width) and velocity (frequency), are affected by various perturbations: externally produced noise, incoherence of the light source, fiber inhomogeneities, absorption, amplifier noise, soliton interactions… It is an experimental fact that optical solitons (equation (5)) are unlikely to be destroyed by perturbations - they are very robust. We expect that at least some of the new solutions expressed here are actually stable. We are going to consider this problem theoretically, although it will remain open until an experimental verification.
The GNT method enables the following statement: the stability of a solution $`y(\tau )`$ is equivalent to the relation $$\underset{ϵ\to 0}{lim}H_{1+ϵ}y=y.$$ $`(16)`$ The relations (6)-(8) and (16) yield $$y(\tau )\le 1.$$ $`(17)`$ A localized solution of the equation (4) is stable if and only if the relation (17) holds (fig. 1a,b). As for the KdV soliton, the classical argument about the counterbalance between nonlinearity and dispersion is not sufficient to explain the stability. Consideration of the Lyapunov exponent, $$\lambda (c_1)=\underset{j\to \infty }{lim}\frac{1}{j}\mathrm{ln}\left|\frac{dc_{2j+1}}{dc_1}\right|,$$ $`(18)`$ shows that deterministic chaos will appear at close packing of solitons, when $`c_1`$ is large enough (fig. 3a). Deterministic chaos can be expected for $`c_1>2.4`$. Near $`c_1=1`$, stability is exceptional (fig. 3b). New forms of localized stationary solutions of the nonlinear Schrödinger equation enable an increase of the transmission rate in optical fiber communications, without a significant change of input power. Information may be contained in the specific form of the soliton (fig. 1a,b). The known optical soliton, described by (5), is one of many possible stationary pulses. Let us consider an available soliton transmission system. If the fiber core cross-sectional area is $`S=60\mu m^2`$, the carrier wavelength is $`\lambda =1.55\mu m`$, the soliton pulse (equation (5)) width is $`\tau _s=25ps`$, the peak power is $`P_m=2.1mW`$, and the separation between two adjacent solitons is $`3\tau _s`$, then the transmission rate is $`10Gbit/s`$. In the same transmission system, using stable pulses of the form $`H_{c_1}y`$ with different $`c_1`$ (fig. 1a), the transmission rate will be greater. It becomes equal to $`50Gbit/s`$ at a 40-photon resolution of the pulse energies. The new forms of stable solutions (fig. 1b) make possible an increase of the transmission rate in the same system.
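These figures can be checked with some back-of-envelope bookkeeping (a sketch of our own; the bits-per-slot accounting and the factor 2 in the pulse-energy estimate are our assumptions, not the authors' derivation):

```python
# Slot rate, photon budget, and the number of distinguishable pulse
# forms per slot needed to reach 50 Gbit/s in the quoted system.
h, c = 6.62607015e-34, 2.99792458e8   # Planck constant, speed of light (SI)
lam, tau_s, P_m = 1.55e-6, 25e-12, 2.1e-3

slot_rate = 1.0 / (3.0 * tau_s)       # one pulse slot every 3*tau_s (~13 GHz)
photon_energy = h * c / lam           # ~1.3e-19 J per photon at 1.55 um
pulse_energy = 2.0 * P_m * tau_s      # sech^2 pulse energy ~ 2*P_m*tau_s (assumption)
photons_per_pulse = pulse_energy / photon_energy   # of order 10^5-10^6 photons
bits_per_slot = 50e9 / slot_rate      # bits per slot needed for 50 Gbit/s
forms_needed = 2.0 ** bits_per_slot   # distinguishable pulse forms per slot
```

With a 40-photon energy resolution, of order $`10^4`$ energy levels would in principle be distinguishable per pulse, so the dozen or so distinct pulse forms needed per slot do not look unreasonable at this level of estimate.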
In conclusion, we have proposed the GNT method for solving the nonlinear Schrödinger equation. New forms of the stationary localized solutions, useful for an improvement of optical fiber communications, are obtained. The authors would like to thank H.J.S.Dorren for useful discussions of the GNT method. This work was supported by the Soros Fund Open Society (Bosnia and Herzegovina) and the World University Service (Austria).
no-problem/9812/astro-ph9812155.html
ar5iv
text
# Precession of collimated outflows from young stellar objects ## 1. Introduction Most T Tauri stars are observed to be in multiple systems (Mathieu 1994). There is also indirect evidence that the sizes of circumstellar disks contained within binary systems are correlated with the binary separation (Osterloh & Beckwith 1995, Jensen, Mathieu & Fuller 1996). This suggests that binary companions are responsible for limiting the sizes of the discs through tidal truncation (Papaloizou & Pringle 1977, Paczyński 1977). There are indications that, in binaries, the plane of at least one of the circumstellar discs and that of the orbit may not necessarily be aligned. The most striking evidence for such noncoplanarity is given by HST and adaptive optics images of HK Tau (Stapelfeldt et al. 1998, Koresko 1998). Also, observations of molecular outflows in star forming regions commonly show several jets of different orientations emanating from an unresolved region the extent of which can be as small as $`100`$ AU (Davis, Mundt & Eislöffel 1994). These jets are usually believed to originate from a binary in which circumstellar disks are misaligned, and in some of these systems a binary has indeed been resolved. In such cases, we expect tidal interaction to induce the precession of the disk (possibly both disks) which is not in the orbital plane, and thus of any jet it drives. Furthermore, a number of jets seem to be precessing (see below). The above discussion leads us to interpret this precessional motion as being driven by tidal interactions between the disk from which the jet originates and a companion on an inclined orbit. In this Letter we consider several observed systems in the light of this model. In § 2 we review the theory of precessing warped disks. In § 3 we apply it to observed systems. 
We first consider cases where a precessing jet has been observed and calculate the parameters of the binaries in which tidal interactions would produce the observationally inferred precession frequencies. We next study cases where misaligned jets have been observed and, assuming or using the fact that the source of these outflows is a binary, we calculate the precession frequency that would be induced by tidal interactions in the binary and the lengthscale over which the jets should ’wiggle’ or bend as a result of this precessional motion. In § 4 we give a summary and discussion of our results. ## 2. Theory of precessing warped disks We consider a binary system in which the primary and the secondary have masses $`M_p`$ and $`M_s`$, respectively. We suppose that the primary is surrounded by a disk of radius $`R`$ with negligible mass so that precession of the orbital plane can be neglected. The binary orbit is assumed circular with radius $`D`$ and is in a plane with inclination angle $`\delta `$ to the plane of this disk. In general, $`\delta `$ will be evolving with time. However, this evolution, which is not necessarily toward coplanarity, occurs on a long timescale (equal to or larger than the disk viscous timescale) if the warp is not too severe (Papaloizou & Terquem 1995), so that we will here treat $`\delta `$ as constant. This means that although the outer parts of the disk may be driven out of the initial disk plane on a relatively short timescale, the inner parts will retain their orientation with respect to the orbital plane for a timescale comparable to the disk viscous timescale. Since jets are expected to originate from the disk inner parts, their orientation relative to the orbital plane will be determined by that of the disk inner regions. The secular perturbation caused by the companion leads to the precession of the disk about the orbital axis, as in a gyroscope.
The disk is expected to precess as a rigid body if it can communicate with itself through some physical process on a timescale less than the precession period. In non–self gravitating protostellar disks, communication is governed by bending waves (Papaloizou & Lin 1995, Larwood et al. 1996, Terquem 1998). The condition for rigid body precession is then satisfied if $`H/r>\left|\omega _p\right|/\mathrm{\Omega }_0,`$ where $`\omega _p`$ is the (uniform) precession frequency in the disk and $`\mathrm{\Omega }_0`$ is the angular velocity at the disk outer edge (Papaloizou & Terquem 1995). An expression for $`\omega _p`$ has been derived by Papaloizou & Terquem (1995). Here we just give an approximate expression, which we derive by assuming that the disk surface density is uniform and that the rotation is Keplerian: $$\omega _p=-\frac{15}{32}\frac{M_s}{M_p}\left(\frac{R}{D}\right)^3\mathrm{cos}\delta \sqrt{\frac{GM_p}{R^3}},$$ (1) where $`G`$ is the gravitational constant. The assumptions under which this expression is valid (see Terquem 1998 for a summary) are usually satisfied in the case of the relatively wide binaries we will be discussing here and for the values of $`D/R`$ we will be considering. Although we have assumed a circular orbit, an eccentric binary orbit can be considered by replacing $`D`$ by the semi-major axis and multiplying the precession frequency by $`(1-e^2)^{-3/2}`$, where $`e`$ is the eccentricity. On this basis we consider that the discussion presented below should remain valid for $`0<e<1/2.`$ ## 3. Application to particular systems ### 3.1. Expected parameters of binary systems with precessing jets Observations of molecular outflows in star forming regions show in some cases “wiggling” knots (or a helical pattern in projection onto the plane of the sky), which can be interpreted as being the result of the precession of the outflowing jet.
In this section we will assume that such precession is caused by tidal interaction between the disk from which the outflow originates and a companion star in a noncoplanar orbit. In cases where the outflow has lasted many precession periods the implication is that the “wiggling” should be periodic. If we indeed assume that this is the case, the observations give the projected wavelength $`\lambda _{proj}.`$ When the angle $`i`$ between the outflow and the line of sight can be estimated, the actual wavelength $`\lambda =\lambda _{proj}/\mathrm{sin}i`$ can be derived. The precession period is then given by $`T=\lambda /v`$, where $`v`$ is the outflow velocity, and $`\left|\omega _p\right|=2\pi /T.`$ Furthermore, if the outflow is precessing because the disk plane and that of the orbit are misaligned, the angle $`\delta `$ between these two planes is equal to the angle between the central flow axis and the line of maximum deviation of the flow from this axis. This angle can also be observed. In all the cases studied in this section, $`\delta `$ is small enough (10–20$`\mathrm{°}`$) so that we will consider $`\mathrm{cos}\delta =1`$. In this section we will adopt $`M_p=0.5`$ $`M_{\odot }`$. Since we do not know whether the jet which is observed to precess originates from the primary or the secondary, we will consider mass ratios $`M_s/M_p`$ between 0.5 and 2. The lowest (largest) values would correspond to the case where the jet originates from the primary (secondary). These values are typical for pre–main sequence binaries. Finally, the disk from which the outflow originates is expected to have its size truncated by tidal interaction with the companion star in such a way that $`D/R`$ lies between 2 and 4. Since Larwood et al. (1996) have shown that tidal truncation is only marginally affected by lack of coplanarity, in this section we will consider $`2\le D/R\le 4`$.
Assuming a fixed ratio $`R/D,`$ equation (1) can then be used to calculate the disk radius: $$R=\left(\frac{15}{32}\frac{M_s}{M_p}\mathrm{cos}\delta \frac{\sqrt{GM_p}}{\left|\omega _p\right|}\right)^{2/3}\left(\frac{R}{D}\right)^2.$$ (2) Since $`M_p`$ appears with the power $`1/3`$, an uncertainty of a factor 2 over $`M_p`$ is equivalent to an uncertainty of only a factor 1.26 over $`R`$. The main uncertainty over $`R`$ comes through the ratio $`R/D`$. We now consider some particular protostellar systems in the light of the above discussion. Cep E, at a distance of 730 pc, drives two outflows almost perpendicular to each other (Eislöffel et al. 1996), which suggests that this source is a binary. In addition, these authors have interpreted the morphology of one of these jets as due to precession, and they have inferred $`T=400`$ years. Figure 1.a shows a plot of $`D`$ against $`R`$, as calculated from equation (2). We see that $`R`$ lies in the range 1–10 AU while $`D`$ lies in the range 4–20 AU. Binary separation would be 0.”005 to 0.”03, which is not currently resolvable. V1331 Cyg is located at a distance of 550 pc. Visible line emission shows a very faint and diffuse feature in the vicinity of this object, which appears to be a strongly ’wiggling’ jet (Mundt & Eislöffel 1998). The observations give $`\lambda _{proj}`$ ≈ 0.5 pc (a full period is observed). Using $`i`$ ≈ 42$`\mathrm{°}`$ (Mundt & Eislöffel 1998), we derive $`\lambda =0.71`$ pc. Since $`v`$ ≈ 300 km s<sup>-1</sup> (Mundt & Eislöffel 1998), we get $`T=2,300`$ years. Figure 1.b shows a plot of $`D`$ against $`R`$, as calculated from equation (2). We see that $`R`$ lies in the range 3–33 AU while $`D`$ lies in the range 13–66 AU. Binary separation would be 0.”02 to 0.”1. This upper value may be possible to resolve with the VLA or adaptive optics in the near–infrared. RNO 15–FIR, located at a distance of 350 pc, drives a molecular outflow which appears to be ’wiggling’ (Davis et al. 1997).
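The parameter ranges quoted for Cep E are easy to reproduce from equation (2). The sketch below is our own illustration (constants and variable names are ours), evaluating (2) at the extreme corners of the assumed ranges ($`T=400`$ yr, $`M_p=0.5M_{\odot }`$, $`M_s/M_p`$ between 0.5 and 2, $`D/R`$ between 2 and 4, $`\mathrm{cos}\delta =1`$):

```python
import math

GM_SUN = 1.32712440018e20           # m^3 s^-2
AU, YR = 1.495978707e11, 3.15576e7  # m, s

def disk_radius_au(T_years, Mp_sun, q, D_over_R, cos_delta=1.0):
    """Disk radius R (AU) from eq. (2), given the precession period T."""
    omega_p = 2.0 * math.pi / (T_years * YR)
    R = ((15.0 / 32.0) * q * cos_delta
         * math.sqrt(GM_SUN * Mp_sun) / omega_p) ** (2.0 / 3.0) / D_over_R**2
    return R / AU

# Cep E: the two extreme corners of the parameter ranges
R_min = disk_radius_au(400.0, 0.5, q=0.5, D_over_R=4.0)   # smallest disk
R_max = disk_radius_au(400.0, 0.5, q=2.0, D_over_R=2.0)   # largest disk
D_min, D_max = 4.0 * R_min, 2.0 * R_max
```

This recovers $`R`$ ≈ 1–10 AU and $`D`$ ≈ 4–20 AU, as quoted above.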
It is possible to interpret this morphology as due to precession within the uncertainty of the measurement (see Fig. 7 of Davis et al. 1997). From the observations, we derive $`\lambda _{proj}=0.065`$ pc. Assuming $`i=45\mathrm{°}`$ (Cabrit 1989) and $`v=10`$ km s<sup>-1</sup> (Davis et al. 1997), we get $`\lambda =0.092`$ pc and $`T=9,000`$ years. Figure 1.c shows a plot of $`D`$ against $`R`$, as calculated from equation (2). We see that $`R`$ lies in the range 8–82 AU while $`D`$ lies in the range 33–165 AU. Binary separation would be 0.”09 to 0.”47. Such a separation may be possible to resolve with the VLA or adaptive optics in the near-infrared. ### 3.2. Expected precession in binary systems with nonaligned jets In the systems presented in this section, misaligned “binary” jets have been observed. Since it is very unlikely that one single source can drive two jets with very different orientations, it is assumed that each of the outflows originates from its own component of a binary system. In some cases a binary has actually been resolved, in other cases observations only allow us to put an upper limit on the separation of the hypothetical binary. Since the outflows are not parallel, it is probable that the disks which surround these sources are themselves misaligned. Therefore, at least one of these disks is not in the orbital plane and should precess. We evaluate here the precession period $`T`$ and give an estimate of the lengthscale $`\lambda =vT`$ over which the outflows should ’wiggle’ as a result of this precessional motion. Since in general we do not know from which member of the binary each jet originates, we will here again consider mass ratios $`0.5\le M_s/M_p\le 2`$, unless otherwise specified. We will also take $`M_p=0.5`$ $`M_{\odot }`$ unless otherwise specified, and $`\mathrm{cos}\delta =1`$ (the results can be scaled for different values of $`\delta `$ since $`\omega _p\propto \mathrm{cos}\delta `$). T Tau is a binary located in Taurus, at a distance of 140 pc.
Observations show that two almost perpendicular jets originate from this system (Böhm & Solf 1994). A disk of estimated radius $`R`$ ≈ 27–67 AU has been resolved around the visible component, T Tau N (Akeson, Koerner & Jensen 1998). Here we assume that the disk around T Tau N is not in the orbital plane. We take $`D=102`$ AU (Ghez et al. 1991) and for $`R`$ the observational values reported above. We fix $`M_p=0.7`$ $`M_{\odot }.`$ Since we are interested in the precession of the jet emanating from the primary, we consider $`0.5\le M_s/M_p\le 1`$. We then get $`T`$ ≈ 5,000–$`4\times 10^4`$ years. Since $`v=70`$ km s<sup>-1</sup> for the jet emanating from T Tau N (Eislöffel & Mundt 1998), $`\lambda `$ ≈ 0.4–3 pc for this jet. These values of $`\lambda `$ are larger than the scale over which the jet has been observed so far (which is about 0.1 pc), so that a bending rather than ’wiggling’ may be detectable in that case. HH1 VLA 1/2 is located at a distance of 480 pc. The two sources VLA 1 and 2, separated by 1,400 AU, drive the two misaligned jets HH 1–2 and HH 144, respectively (Reipurth et al. 1993). We assume that this system is bound and noncoplanar, and that tidal truncation has operated such that $`2\le D/R\le 4`$ (note however that, in such a young and wide system, $`D/R`$ may actually be significantly larger). We also assume that the angle between the line of sight and the orbital plane is 45$`\mathrm{°}`$, so that $`D`$ ≈ 1,980 AU. Then $`T`$ ≈ $`4\times 10^5`$–$`4\times 10^6`$ years. Since $`v`$ ≈ 200 km s<sup>-1</sup> for both flows (Eislöffel, Mundt & Böhm 1994), we get $`\lambda `$ ≈ 77–870 pc. Since $`T`$ is probably comparable to the age of the system, a bending rather than wiggling of the jet may be expected on a scale of a few pc. We note that ’wiggling’ or bending of the jets has been suggested on the current observed scale, which is about 0.5 pc for both jets (Reipurth et al. 1993).
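As a cross-check (again a sketch of our own, with our constants and names), the T Tau numbers follow directly from equation (1), together with $`\lambda =vT`$:

```python
import math

GM_SUN = 1.32712440018e20                        # m^3 s^-2
AU, YR, PC = 1.495978707e11, 3.15576e7, 3.085677581e16

def precession_period_yr(Mp_sun, q, R_au, D_au, cos_delta=1.0):
    """|2 pi / omega_p| in years, from eq. (1)."""
    omega = ((15.0 / 32.0) * q * (R_au / D_au) ** 3 * cos_delta
             * math.sqrt(GM_SUN * Mp_sun / (R_au * AU) ** 3))
    return 2.0 * math.pi / omega / YR

# T Tau N: shortest period (large disk, q = 1) and longest (small disk, q = 0.5),
# with D = 102 AU, R = 27-67 AU, Mp = 0.7 Msun
T_short = precession_period_yr(0.7, 1.0, 67.0, 102.0)
T_long = precession_period_yr(0.7, 0.5, 27.0, 102.0)
v = 70e3                                         # jet speed, m/s
lam_short, lam_long = (v * T * YR / PC for T in (T_short, T_long))
```

This gives $`T`$ ≈ 5×10³–4×10⁴ yr and $`\lambda `$ ≈ 0.35–2.8 pc, consistent with the ranges quoted above.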
This clearly cannot be due to interaction between VLA 1 and 2, but it may be the sign that this system contains more sources. This is supported by the existence of at least two more outflows with different orientations (Eislöffel et al. 1994). HH 111 IRS: Perpendicular to the HH 111 jet, located at a distance of 480 pc in Orion, is another outflow called HH 121 (Gredel & Reipurth 1993; Davis et al. 1994). We assume that the source of these two outflows is a binary. From the unresolved central source in VLA observations (Rodríguez 1997) we infer an upper limit on the separation $`D`$ of about 0.”1, or 48 AU. By adopting $`D=48`$ AU and $`2\le D/R\le 4`$, we find that $`T`$ ≈ 1,000–8,000 years. Since $`v`$ ≈ 350 km s<sup>-1</sup> for HH 111 (Reipurth, Raga & Heathcote 1992), $`\lambda `$ ≈ 0.5–6 pc for this jet. The lowest of these values is close to the extent over which the jet has been observed so far, which is 0.45 pc in projection (Reipurth 1989). We note that tidal effects in a putative binary system have been invoked by Gredel & Reipurth (1993) as a possible cause of the asymmetry of HH 121. HH 24 SVS63: The region of HH 24, located in Orion at a distance of 480 pc, contains several highly collimated outflows (Eislöffel & Mundt 1997) and a hierarchical system of four or even five young stars. The sources SSV 63E and SSV 63W are separated by about 4,500 AU in projection onto the sky (Davis et al. 1997). SSV 63W is itself a binary. From the images taken by Davis et al. (1997) and provided to us, we have measured its projected separation to be 920 AU. We find that SSV 63E is probably a triple system: the projected separation is 350 AU between SSV 63E A and B and 975 AU between SSV 63E A and C. We will take these projected separations as indicative values of the actual separations. At least two outflows with very different orientations originate from SSV 63E.
These are HH 24 G (Mundt, Ray & Raga 1991) and HH 24 C/E (Solf 1987; Eislöffel & Mundt 1997), which extend over 0.2 and about 1 pc, respectively. SSV 63W is the source of another parsec–scale outflow, HH 24 J (Eislöffel & Mundt 1997). Here again we fix $`2\le D/R\le 4`$. The velocities of the jets, in km s<sup>-1</sup>, are about 140, 180 (if we assume the radial and tangential velocities to be similar), 370 and 50 for HH 24 J, HH 24 G, HH 24 C/E and for the redshifted lobe HH 24 E, respectively (Jones et al. 1987). The different interactions within the two systems then lead to $`\lambda `$ between 5 and 556 pc, which is at least one order of magnitude larger than the extent over which the jets have been observed so far. This indicates that a bending rather than a ’wiggling’ would be more likely to be observed. L1551 IRS5 is located in Taurus, at a distance of 150 pc. It has been suggested that two jets could be emanating from this source (Moriarty–Schieven & Wannier 1991, Pound & Bally 1991). Furthermore, Rodríguez et al. (1998) have shown that this system is a binary with separation ≈ 45 AU and they have resolved two circumstellar discs for which they infer $`R`$ ≈ 10 AU. Their results are also consistent with the presence of two outflows which appear to be misaligned (see their Fig. 2). We take $`D`$ between 45 and 63 AU, corresponding to an angle between the orbital plane and the line of sight in the range 0–45$`\mathrm{°}`$, and we fix $`R=10`$ AU. Then $`T`$ ≈ 4,000–$`5\times 10^4`$ years. Since for the jet which has been unambiguously observed $`v`$ ≈ 200 km s<sup>-1</sup> (Sarcander, Neckel & Elsässer 1985), we derive $`\lambda `$ ≈ 1–10 pc. The projected extent over which the jet has been observed so far is about 1 pc (Moriarty–Schieven & Snell 1988). Therefore, even though a full period may not yet be seen, ’wiggling’ could already appear in the observations. We note that the outflow as observed appears to be very complex, and it may well be seen to precess. ## 4.
Discussion and Summary In this Letter we have considered several protostellar systems where either a precessing jet or at least two misaligned jets have been observed. In the case where a jet is seen to precess (or rather interpreted as precessing), we have assumed it originates from a disk which is tidally perturbed by a companion on an inclined orbit, and we have evaluated the parameters of the binary system. For Cep E, V1331 Cyg and RNO 15–FIR, we found the separation to range from a few AU up to 160 AU and the disk size to be between 1 and 80 AU. These numbers correspond to what is expected in pre–main sequence binaries (see Mathieu 1994 and references therein). We note that larger separations for this range of disk sizes would be associated with longer precession timescales, and thus it would be more difficult to detect the precessional motion over the observed lengthscale of the jet. A bending rather than ’wiggling’ would be expected in that case. In the case where misaligned jets have been observed, we have assumed or used the fact that the source of these outflows is a binary, and we have calculated the precession frequency that would be induced by tidal interactions in the binary and the lengthscale over which the jets should ’wiggle’ as a result of this precessional motion. For T Tau, HH1 VLA 1/2 (which may actually be a hierarchical system) and HH 24 SVS63, it may be possible to detect a bending of the jets rather than ’wiggling’ on the current observed scale of the jets. In HH 111 IRS and L1551 IRS5 (assuming there are indeed two misaligned outflows in this system), ’wiggling’ may be detected on the projected scale (0.5–1 pc) over which the jets have been observed. Our results are consistent with the existence of noncoplanar binary systems in which tidal interactions induce jets to precess. Some of the predictions of this Letter could be tested observationally in the near future. We thank Chris Davis for supplying us promptly with the images of SSV63.
We acknowledge the Isaac Newton Institute for hospitality and support during its programme on the Dynamics of Astrophysical Discs, when this work began. CT is supported by the Center for Star Formation Studies at NASA/Ames Research Center and the University of California at Berkeley and Santa Cruz.
no-problem/9812/astro-ph9812266.html
ar5iv
text
# Stellar populations in the dwarf spheroidal galaxy Leo I ## 1 Introduction Many of the nine dwarf spheroidal galaxies (dSphs) clustering around the Milky Way are known to have had a complicated evolutionary history, as suggested by clear evidence of star formation, continuous or in bursts, over a wide period of time (see, e.g., Mighell 1997 \[Carina\], Beauchamp et al. 1995 \[Fornax\], Lee et al. 1993 \[Leo I\], Mighell & Rich 1996 \[Leo II\], Da Costa 1984 \[Sculptor\], and the comprehensive review by Mateo 1998). For the not-too-distant dSphs, several CMDs that reach the main–sequence turnoff (MSTO) have been published, allowing the analysis of their stellar content. For the more distant galaxies, only very recently has the Hubble Space Telescope been providing the deep CMDs (see, e.g., Mighell & Rich 1996) that are necessary to fully understand the evolutionary history of these faint members of the Local Group (van den Bergh 1994). In this work we present a study of the stellar populations in the dSph Leo I, based on archival Wide Field Planetary Camera 2 (WFPC2) data. This galaxy, discovered by Harrington & Wilson (1950) during the first Palomar sky survey, is thought to be among the most distant satellites of the Milky Way and therefore it plays an important role in determining the mass of our Galaxy (Zaritsky et al. 1989; Zaritsky 1991). On the other hand, the variable star survey carried out by Hodge & Wright (1978) showed an unusually large number of anomalous Cepheids, and the CMD published by Lee et al. (1993 \[L93\]) “…shows no suggestion for any Horizontal Branch typical of other dSphs”.
Furthermore, it should be added that all the published CMDs (see also Fox & Pritchet 1987; Reid & Mould 1991; Demers, Irwin & Gambu 1994 \[DIG\]) suggest that the stars of Leo I have a younger mean age than those of the other dSphs, but the data published so far - even those from the deepest CCD photometry - do not reach the faint magnitudes needed to estimate the stellar age(s) definitively. Section 2 deals with the observations and data reduction. The CMD for Leo I is discussed in Section 3, together with a review of former works on this galaxy. The theoretical scenario used for interpreting the stellar content of Leo I is presented in Section 4, while our original analysis, with a discussion of the resulting distance and age, is given in Section 5. The summary of the main results follows in Section 6. ## 2 Observations and data reduction The data for Leo I have been requested and retrieved electronically from the ESO/ST-ECF archive in Garching (München). The galaxy was observed with the Hubble Space Telescope WFPC2 on 1994 March 5 through the F555W ($`V`$) and F814W ($`I`$) filters. The WFPC2 aperture was centered on the target position $`\alpha _{2000}`$ = $`10^h`$ $`08^m`$ $`26.58^s`$, $`\delta _{2000}`$ = $`12^o`$ $`18^{^{}}`$ $`33.4^{^{\prime \prime }}`$, and eight observations were obtained: three 1900 s plus one 350 s exposures in F555W, and three 1600 s plus one 300 s exposures in F814W. These observations (part of the HST Cycle 4 program GTO/WFC 5350) were placed in the public data archive on 1995 March 5. Corrections to the raw data for bias, dark and flat-field were performed using the standard HST pipeline. Subsequent data reduction for the WF cameras was made using MIDAS routines and the ROMAFOT and DAOPHOT II packages. The next steps were schematically as follows. First, a median filter was applied to each frame, using a ROMAFOT routine, in order to remove cosmic rays from the single frames.
To push the star detection limit to as faint a level as possible, we coadded all images taken through the same filter. Then, we used the deepest coadded $`I`$ frame to search for stellar objects in each chip. All the objects identified in this search were fitted in all the $`I`$ and $`V`$ frames using DAOPHOT II and the hybrid weighted technique described by Cool and King (1995). Two DAOPHOT detection passes were carried out, separated by photometry and subtraction of all stars found in the first pass. Faint stars hidden within the PSF skirts of brighter companions were thereby revealed and added to the list of detected stars to be measured. For each chip and each filter the PSF was built using no fewer than 15 bright and isolated stars in each frame. The single measures were averaged, and an average instrumental magnitude was derived for each object in each colour. A total of 36,634 stars were ultimately detected and measured in the three WF frames. Corrections to a $`0.5^{^{\prime \prime }}`$ aperture were made in each case; we transformed the F814W and F555W instrumental magnitudes into the WFPC2 “ground system” using Eq. 6 of Holtzman et al. (1995). Completeness tests were carried out by adding 5$`\%`$ of the original number of stars for each selected magnitude bin (0.4 mag) to the original coadded F555W and F814W frames. ‘Artificial’ stars were added randomly with the same instrumental color distribution as the real stars detected in the frames. The ‘artificial’ frames were then processed using DAOPHOT II in a manner identical to that applied to the original data. The completeness was finally derived as the ratio $`N_{rec}/N_{add}`$ of the artificial stars generated. We considered as recovered only those stars found at the same spatial position and within the same magnitude bin as the added stars. The results of these tests are given in Table 1, which shows that the 50$`\%`$ completeness level occurs at F555W ≈ 26.0.
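The artificial-star bookkeeping can be illustrated with a toy model (ours: the logistic detection probability and the 200 stars per bin are illustrative stand-ins for actually re-running DAOPHOT II on frames with stars added; only the $`N_{rec}/N_{add}`$ accounting per 0.4 mag bin mirrors the procedure in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_detect(m, m50=26.0, width=0.5):
    """Toy detection probability: 50% at m50, falling off for fainter stars."""
    return 1.0 / (1.0 + np.exp((m - m50) / width))

bin_edges = np.arange(22.0, 27.6, 0.4)       # 0.4 mag bins, as in the text
n_add = 200                                  # artificial stars per bin
completeness = []
for m_lo in bin_edges:
    added = m_lo + 0.4 * rng.random(n_add)   # magnitudes spread inside the bin
    recovered = rng.random(n_add) < p_detect(added)
    completeness.append(recovered.sum() / n_add)  # N_rec / N_add
```

The resulting curve is near unity at bright magnitudes and crosses 50% at the assumed limit, which is the shape summarized by the real Table 1.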
Since we only perform a qualitative analysis, i.e., we do not make comparisons between observed and theoretical luminosity functions, such a procedure is fully satisfactory for the present investigation. The internal errors were also estimated by computing the rms frame-to-frame scatter of the instrumental magnitudes obtained for each star (see Table 2). Recently, the Leo I galaxy has also been studied by Gallart et al. (1999). They chose to normalize their HST photometry to the L93 calibration. According to the quoted authors, the L93 photometry is regarded as reliable because of the large number of calibration stars in their field. With the aim of qualitatively checking our photometry, we constructed the colour distribution histogram of the objects belonging to the clump of stars near the Red Giant Branch with magnitudes in the range 21.5 ≤ $`V`$ ≤ 22.6 mag (see Fig. 1 and the following discussion); a comparison with the same histogram realized by L93 (see their Figure 8) shows that the peaks of the two distributions fall in the same color bin (i.e., 0.80 ≤ $`(V-I)`$ ≤ 0.85). This result makes us quite confident of the reliability of our photometry: differences between our calibration and that of L93 are within the adopted bin width (i.e., $`\mathrm{\Delta }(V-I)=0.05`$). For a detailed star-by-star comparison, we address the interested reader to Gallart et al. (1999). ## 3 CMD of Leo I As already stated, this section deals with the main features of the CMD of Leo I stars as well as with a review of previous works. In our view, this provides the best framework for our results, presented in the following sections. ### 3.1 General morphology The $`V`$-$`(V-I)`$ color-magnitude diagram of Leo I, based on 36,634 stars down to a limiting magnitude of $`V`$ ≈ 27.5 mag ($`I`$ ≈ 26.5 mag), is displayed in Fig. 1. The principal features, which are discussed in detail in the following section, are summarized here. Red Giant Branch.
There is a well-defined red giant branch (RGB) with the tip (TRGB) seen at $`V_{TRGB}=19.4\pm 0.10`$ mag ($`I_{TRGB}=18.0\pm 0.10`$ mag), in agreement with the L93 study. The few stars located above the TRGB are likely to belong to the asymptotic giant branch (see L93 and DIG). The observed color dispersion read at $`V`$=20.0 mag ($`\sim `$ 0.5 mag below the TRGB) is $`\mathrm{\Delta }(V-I)\sim 0.10`$ mag, which is slightly larger than the L93 value (0.08 mag). It is known that the color dispersion along the RGB can arise from photometric errors as well as from metallicity and age spreads. However, at $`V`$=20.0 mag the mean photometric error (see Table 2) is $`\sim `$ 0.015 mag, leaving an intrinsic dispersion of (0.10<sup>2</sup>-0.015<sup>2</sup>)<sup>0.5</sup> $``$ 0.099 mag, which will be discussed in the following.

Horizontal Branch.

As already shown by previous studies, there is no evidence of the ”flat” portion of the horizontal branch (HB) which is typical of Galactic globular clusters and other dSphs. However, the clump of red giant stars seen at $`(V-I)\sim `$ 0.7–0.9 mag and $`21.5\le V\le 22.6`$ mag consists of bona fide central helium-burning stars, even though more massive than those observed in old stellar systems. The sequence of stars with $`20.0\le V\le 21.5`$ and $`0\le (V-I)\le 0.7`$ could represent the more massive tail of these helium-burning stars (see Section 4).

Main Sequence stars.

The most impressive feature in the color-magnitude diagram of Leo I is the MSTO region seen at $`V\sim `$ 22.60$`\pm `$0.20 mag, which is also the luminosity of the faintest helium-burning clump stars (the lower envelope of the HB stars is taken at $`V_{HBLE}=22.60\pm `$0.05). Such a feature would by itself suggest the presence of stars with ages near 1–2 Gyr (see Caputo & Degl’Innocenti 1995). Moreover, there is also a small group of brighter main-sequence stars with $`21.0\le V\le 22.6`$ mag.
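The quadrature subtraction used above for the intrinsic RGB width can be written out explicitly (a minimal sketch; the function name is ours):

```python
import math

def intrinsic_dispersion(observed_width, photometric_error):
    """Subtract the mean photometric error in quadrature from the
    observed color width to estimate the intrinsic dispersion."""
    return math.sqrt(observed_width**2 - photometric_error**2)

# Observed Delta(V-I) = 0.10 mag at V = 20.0 mag, mean error ~0.015 mag:
sigma_intrinsic = intrinsic_dispersion(0.10, 0.015)  # ~0.099 mag
```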
These ”blue stragglers” could be mass-transfer binaries or, if normal main-sequence stars, they might be witnesses of a small population of very young stars. Besides the above clear evidence of a young stellar component, one notices the well-developed subgiant branch extending below the clump of red giant stars. The faint stars at the base of the subgiant branch (BSGB) are seen at $`(V-I)\sim 0.8`$ mag and $`V_{BSGB}=25.00\pm `$ 0.20 mag. The observed difference in magnitude between the TRGB and BSGB stars is $`\mathrm{\Delta }V(BSGB-TRGB)\sim `$ 5.6 mag, which is similar to the values observed in the CMDs of Galactic globular clusters, thus suggesting an old stellar population of $`\sim `$ 10–15 Gyr.

### 3.2 Variable stars

As stated at the very beginning, the most striking difference between Leo I and the other dwarf spheroidals is the lack of RR Lyrae stars and, conversely, the large number of anomalous Cepheids. Hodge & Wright (1978) measured blue magnitude, period and amplitude for 12 variables (with an estimated 75% completeness), while L93 suggested that the 45 stars observed with $`19\le V\le 21.2`$ mag and $`0\le (V-I)\le 0.6`$ mag are anomalous Cepheid candidates. None of the Hodge & Wright (1978) variables is located in our color-magnitude diagram, and for the stars in our CMD seen with $`20.0\le V\le 21.5`$ and $`0\le (V-I)\le 0.6`$ we have no way to confirm the variability.

### 3.3 Metallicity and reddening

There are somewhat conflicting results concerning the mean metallicity of Leo I. Previous estimates based on CMD features vary from \[Fe/H\]=-1.0$`\pm `$0.3 (Reid & Mould 1991) to \[Fe/H\]=-1.6$`\pm `$0.4 \[DIG\] and \[Fe/H\]=-2.1$`\pm `$0.1 \[L93\], depending on the assumed distance modulus, while moderate-resolution spectra of two red giants (Suntzeff 1992, see L93) suggest \[Fe/H\]$`\sim `$-1.8.
On the other hand, we show in Section 4 that the occurrence of a significant number of anomalous Cepheids is by itself a clear indication that Leo I is a metal-poor stellar system with an overall metallicity between $`Z`$=0.0001 and 0.0004 (see also Castellani & Degl’Innocenti 1995; Caputo & Degl’Innocenti 1995; Bono et al. 1997 \[BCSCP\]). As for the reddening, the relatively high galactic latitude of Leo I suggests a low foreground reddening. The blue extinction reported by Burstein & Heiles (1984) is $`A_B`$=0.09 mag. On this basis, following the Cardelli, Clayton & Mathis (1989) relations, we adopt $`E(B-V)`$=0.02 mag and $`E(V-I)`$=0.04 mag.

### 3.4 Distance and Age

Given the lack of HB stars at the RR Lyrae gap, previous estimates of the Leo I distance have mostly used the TRGB \[L93: $`(m-M)_0=22.18\pm `$0.11 mag\], the median magnitude of the red giant clump \[DIG: $`(m-M)_0=21.7\pm `$0.12 mag\] and the carbon stars \[DIG: $`(m-M)_0=21.5\pm `$0.3 mag\]. As discussed early on by DIG, the problem with these distance indicators is that the results depend on the adopted metallicity (see also Cassisi, Castellani & Straniero 1994), in the sense that the deduced distance modulus increases with decreasing metal content. We wish to add that some of the theoretical constraints discussed in this paper suggest a dependence of the TRGB luminosity on age which has to be taken into account when dealing with young stellar populations. As for the age, the absence of a blue horizontal branch, the presence of several carbon stars, and the clumping red giant stars led DIG to suggest an upper age limit of $`\sim `$ 7 Gyr for the dominant stellar population, with no obvious evidence of an older stellar component (their CMD does not reach the main-sequence turnoff). The deeper CCD photometry presented by L93 revealed an increased number of main-sequence stars at $`V\sim `$ 23.5 mag, consistent with the presence of young stars of $`\sim `$ 3 Gyr.
More recently, the L93 measurements have been interpreted by Caputo, Castellani & Degl’Innocenti (1995, 1996) as evidence of even younger stellar populations ($`\sim `$ 2.0–1.5 Gyr).

## 4 Theoretical background

In order to provide a clear and complete reference framework, let us summarize the primary theoretical constraints which are relevant for the present investigation. They are derived from the evolutionary models (both hydrogen and central helium-burning phases) with masses from 0.6$`M_{\odot }`$ to 2.2$`M_{\odot }`$, original helium $`Y_0`$=0.23 and metallicity $`Z`$=0.0001, 0.0004 already presented by Castellani & Degl’Innocenti (1995) and BCSCP. As a first point, we show in Fig. 2 theoretical isochrones with $`Z`$=0.0001 and selected ages, transformed into the observational plane $`M_V`$-$`(V-I)`$ by means of the stellar atmosphere models provided by Castelli, Gratton & Kurucz (1997a,b). Besides the well-known evidence that with increasing age the RGB color becomes redder and the MSTO point becomes fainter, one notices the clear variation of the TRGB luminosity, which fades at lower ages. As shown in the lower panel of Fig. 3, where the predicted absolute magnitude of the TRGB is plotted versus the age, such a behaviour is less pronounced with $`Z`$=0.0004. For the purposes of the present paper, we show in the upper panel of Fig. 3 the corresponding variation of the luminosity at the base of the subgiant branch. From the theoretical isochrones with $`Z`$=0.0001 we derive that at $`M_V`$=-2.0 mag ($`\sim `$ 0.5 mag below the TRGB) the color variation with age is $$(V-I)_{M_V=-2}=1.13+0.11\mathrm{log}t,$$ $`(1)`$ while the $`Z`$=0.0004 isochrones are redder by $`\mathrm{\Delta }(V-I)\sim 0.04`$ mag, at constant age. Such a result implies that the predicted contribution of age to the observed color dispersion along the RGB is significantly larger than previously assumed.
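Since only an age ratio enters a color difference taken from Eq. (1), the unit of $`t`$ cancels there; a minimal numerical sketch (the function names are ours, and expressing $`t`$ in Gyr for the absolute color is our assumption):

```python
import math

# Eq. (1): (V-I) color at M_V = -2 for the Z = 0.0001 isochrones.
def color_at_MV_minus2(t_gyr):
    # Assuming t is expressed in Gyr; for color *differences* the unit cancels.
    return 1.13 + 0.11 * math.log10(t_gyr)

def color_difference(t1_gyr, t2_gyr):
    """Delta(V-I) between two isochrone ages -- independent of the time unit."""
    return 0.11 * math.log10(t2_gyr / t1_gyr)

# Difference between the 3.5 and 15 Gyr isochrones discussed in the text:
d = color_difference(3.5, 15.0)  # ~0.07 mag
```

The same relation, applied to an age spread of 1 to 10 Gyr, gives the $`\sim `$ 0.11 mag RGB color dispersion quoted later in the text.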
As an example, L93 assume a difference $`\mathrm{\Delta }(V-I)`$=0.03 mag between the 3.5 and 15 Gyr isochrones, whereas the present results would give $`\sim `$0.07 mag. On these grounds, one understands that it is necessary to estimate the actual age spread before deriving the metallicity dispersion from the observed RGB width. Passing to the central helium-burning phase, let us recall that for stellar structures experiencing a strong He-flash, i.e. for evolving masses smaller than $`1.0M_{\odot }`$, the age $`t_{fl}`$ and the mass $`M_{c,fl}`$ of the He-core at the RGB tip depend only slightly on the evolutionary mass, but significantly on the chemical composition. In contrast, with increasing stellar mass, these evolutionary parameters depend strongly on both the mass (as a consequence of the changes in the electron degeneracy level inside the core during the RGB evolution) and the chemical composition. The data listed in Table 3 and plotted in Fig. 4 show that, if the mass $`M_{pr}`$ of the star is lower than $`2.2M_{\odot }`$, then both $`t_{fl}`$ and $`M_{c,fl}`$ are decreasing functions of $`M_{pr}`$. The consequences for the subsequent zero-age horizontal branch (ZAHB) are easily understandable (see also Castellani & Degl’Innocenti 1995; Caputo & Degl’Innocenti 1995). With increasing age, the maximum permitted mass $`M_{HB,max}`$ for central He-burning stars ($`M_{HB,max}`$ is equal to the mass of the RGB progenitor $`M_{pr}`$ under the hypothesis of no mass-loss during the RGB phase) decreases, whereas, following the corresponding variation of $`M_{c,fl}`$, the luminosity of the ZAHB model at $`(V-I)_0\sim `$ 0.70 mag tends to increase (see the last column in Table 3). On the other hand, the evolutionary calculations show that the effective temperature of a ZAHB model decreases with increasing mass, reaching the minimum value of $`\mathrm{log}T_e\sim `$3.74 ($`Z`$=0.0001) or $`\sim `$ 3.72 ($`Z`$=0.0004) around 1.0–1.2 $`M_{\odot }`$.
After that, the more massive models with $`Z`$=0.0001 present higher luminosity and larger effective temperature, causing a ZAHB ”turn-over” and the development of an ”upper horizontal branch” (UHB). On the contrary, the models with $`Z`$=0.0004 and mass in the range of 1.3$`M_{\odot }`$ to 1.5$`M_{\odot }`$ are characterized by higher luminosity and roughly constant effective temperature. Consequently, with $`Z`$=0.0004 the ZAHB ”turn-over” occurs beyond 1.5$`M_{\odot }`$. All these features are presented in Figures 5 and 6, where ZAHB sequences (solid line) of stars with the same RGB progenitor (see the labelled $`M_{pr}`$) but having experienced different degrees of mass-loss are displayed. The same figures show the post-ZAHB evolution of the most massive ($`M_{HB,max}`$=$`M_{pr}`$) central He-burning model (dashed line). Note that a further increase of the metallicity up to $`Z`$=0.001 would shift the ZAHB ”turn-over” to masses significantly larger than 2.0$`M_{\odot }`$ (Demarque & Hirshfeld 1975; Hirshfeld 1980). In order to gain an immediate insight into the connection between central helium-burning evolution and radial pulsation, we show in Figures 7a and 7b the evolution of HB models with mass $`M_{HB}`$=$`M_{pr}`$, but with the effective temperature of the model scaled to the red edge of the instability strip (FRE). The location of the FRE, as well as the adopted width of the instability strip, is provided by the pulsational models discussed by BCSCP. The first straightforward result is that He-burning stars with evolutionary mass in the range of $`1.0M_{\odot }`$ to $`1.2M_{\odot }`$ (for $`Z`$=0.0001) or in the range of $`0.8M_{\odot }`$ to $`1.7M_{\odot }`$ (for $`Z`$=0.0004) are confined near the red giant branch, outside the instability strip. Thus, no variable stars are expected within these mass ranges. Moreover, one derives that the lowest mass for the occurrence of massive pulsators brighter than RR Lyrae stars, i.e.
anomalous Cepheids, is 1.3$`M_{\odot }`$ with $`Z=`$0.0001 and 1.8$`M_{\odot }`$ with $`Z`$=0.0004. In terms of age, this means that the anomalous Cepheids should be younger than $`\sim `$ 3 and 1 Gyr, with $`Z`$=0.0001 and $`Z`$=0.0004, respectively (see Table 3). Finally, one may notice that with $`t_{fl}\sim `$ 15 Gyr the evolution of the most massive HB model at $`Z`$=0.0004 ($`M_{HB}=M_{pr}=0.75M_{\odot }`$) is confined to the red side of the RR Lyrae instability strip, whereas with $`Z`$=0.0001 the most massive HB model (0.80$`M_{\odot }`$) evolves within the instability strip. Thus, even with null mass-loss, central He-burning stars with $`Z`$=0.0001 and age $`\sim `$ 15 Gyr are expected to populate the RR Lyrae gap.

## 5 Revising the distance modulus and the age of Leo I

The theoretical isochrones presented in Fig. 2 show that with increasing age the maximum luminosity of RGB stars increases whereas the minimum luminosity of SGB stars decreases. Thus, if composite stellar populations are present in Leo I, then the oldest stars have to be seen at the BSGB and the TRGB. Starting from these simple considerations, we combine in Fig. 8 the theoretical data already shown in Fig. 3 with the observed values $`V_{TRGB}=19.40\pm `$ 0.10 mag and $`V_{BSGB}=25.00\pm `$ 0.20 mag, aiming to check whether these two observables yield a unique solution for the Leo I distance modulus. As a result, we obtain that the apparent distance modulus of Leo I, as given by its oldest stellar component, is $`(m-M)_V`$=22.00$`\pm `$ 0.15 mag. Moreover, the data in Fig. 8 suggest that the age of these stars is in the range of 10.0–15.0 Gyr and 9.0–13.0 Gyr, with $`Z`$=0.0001 and $`Z`$=0.0004, respectively. However, if further observations confirm the lack of RR Lyrae stars, then from the data plotted in Fig. 7a we can add that the oldest stellar component of Leo I cannot be older than $`\sim `$ 10 Gyr, with $`Z`$=0.0001.
We can straightaway check the derived distance modulus $`(m-M)_V`$=22.00$`\pm `$ 0.15 mag by comparing the observed data for the Leo I anomalous Cepheids with the theoretical predictions given by the BCSCP pulsating convective models. Figure 9 shows the period-luminosity diagram for the Leo I variables (the $`B`$-magnitudes from Hodge & Wright (1978) are corrected with $`(m-M)_B`$=22.02, according to the adopted reddening $`E(B-V)`$=0.02 mag). It is quite evident that the predictions conform very well to the observed data, supporting the above distance modulus. The comparison between theoretical isochrones and the CMD of the Leo I stars, corrected with $`(m-M)_V`$=22.00 mag and $`E(V-I)`$=0.04 mag, is displayed in Fig. 10a ($`Z`$=0.0001) and Fig. 10b ($`Z`$=0.0004). As a whole, these figures provide further support to the result that the oldest stars in Leo I were formed $`\sim `$ 10 Gyr or $`\sim `$ 13 Gyr ago (with $`Z`$=0.0001 and 0.0004, respectively). Moreover, the brightest MSTO stars seen at $`V\sim 22.60\pm 0.20`$ mag conform quite well to the 1 Gyr isochrones, while the bright blue stragglers should have even younger ages ($`\sim `$ 700 Myr). The absence of distinct MSTOs, such as those seen in Carina (see Smecker-Hane et al. 1996), gives evidence against episodic bursts and, as a whole, we conclude that Leo I has been forming stars rather continuously, even though at a lower level during the last billion years, from about 10 Gyr or 13 Gyr ago (depending on the adopted metallicity) to at least $`\sim `$ 1 Gyr ago. Our conclusions are not discordant with the results of Gallart et al. (1998), which suggest that Leo I experienced a major increase of star formation from $`\sim `$ 6 to 2 Gyr ago, with some prior episodes lasting 2–3 Gyr and a decreasing activity until 500–200 Myr ago. The derived spread of ages leads \[see Eq. (1)\] to a predicted color dispersion along the RGB of $`\sim `$ 0.11 mag, which is consistent with the intrinsic RGB width ($`\sim `$0.10 mag).
This result seems to exclude a substantial metallicity dispersion among the Leo I stars. On the other hand, it has been shown that the presence of anomalous Cepheids is a clear indication of a young ($`\le `$ 3 Gyr) and metal-poor ($`Z\le `$ 0.0004) stellar population. Thus, we conclude that the actual metallicity spread of Leo I is at most in the range of $`Z`$=0.0001 to $`Z`$=0.0004. It has been shown (see, e.g., Caputo & Degl’Innocenti 1995) that in stellar systems hosting a not-too-old stellar population, the observed star distribution along the RG clump can provide safe constraints on the allowed range of stellar ages. We now adopt a similar approach to investigate how the above age dispersion agrees with the observed clump of central helium-burning stars. To this aim, the CMD of the Leo I stars with $`V\le `$ 23.00 mag is displayed in Fig. 11 together with HB evolutionary tracks with $`Z`$=0.0001 and $`M_{HB}=M_{pr}`$ (the dashed line refers to the model with 2.0$`M_{\odot }`$), adopting the two extreme values of the distance modulus derived from the previous analysis. With $`(m-M)_V`$=21.85 mag (lower panel), the absolute magnitude of the lower envelope is $`M_V^{HBLE}`$=0.75$`\pm `$0.05 mag, suggesting (see Table 3) that the stars at the HBLE are $`\sim `$ 2 Gyr old and have an RGB progenitor with mass $`M_{pr}\simeq 1.4M_{\odot }`$. However, when considering the mass distribution along the corresponding ZAHB locus, the mass of the star at $`(V-I)_0=0.7`$ mag is about 0.8$`M_{\odot }`$, a result which would imply substantial mass-loss during the RGB phase (or at the He-flash). As for the remaining stars forming the clump seen at $`(V-I)\sim `$ 0.7–0.9 mag and $`21.5\le V\le 22.6`$ mag, the comparison with our HB evolutionary models shows that they are matched by the HB evolution of models with mass ($`M_{HB}=M_{pr}`$) equal to 0.9$`M_{\odot }`$ ($`\sim `$ 10 Gyr), 1.0$`M_{\odot }`$ ($`\sim `$ 7 Gyr) and 1.2$`M_{\odot }`$ ($`\sim `$ 4 Gyr).
Similarly, the stars with $`20.0\le V\le 21.5`$ and $`0\le (V-I)\le 0.7`$, which observationally define the UHB, are reasonably fitted by more massive (and younger) HB models from 1.4$`M_{\odot }`$ (2.2 Gyr) up to $`2.0M_{\odot }`$ ($`\sim `$ 0.7 Gyr), assuming no mass-loss. Let us note that we are not neglecting the possibility that a mass-loss phenomenon could affect the progenitors of such HB structures during the RGB evolution. Here we are interested only in the location in the CMD of the most massive (and brightest) HB star for each fixed assumption on the RGB progenitor. All the other less massive ZAHB structures are located at lower luminosity (see the previous discussion and Caputo & Degl’Innocenti 1995). For the same reason, in principle it could be possible that this CMD region is populated by HB structures with still more massive - and therefore younger - RGB progenitors, which have suffered an efficient mass-loss phenomenon during the previous evolutionary phase. Nevertheless, this occurrence does not seem to be supported at all by the comparison between the full CMD and the theoretical isochrones performed in Fig. 10. Adopting $`(m-M)_V`$=22.15 mag (upper panel), the absolute magnitude of the HBLE stars is $`M_V^{HBLE}`$=0.45$`\pm `$0.05 mag, which is consistent with the location of the 10 Gyr old ZAHB (corresponding to the 0.9$`M_{\odot }`$ progenitor) and masses (on the ZAHB) near 0.9$`M_{\odot }`$, thus implying negligible mass-loss. As a whole, for the clumping red giant stars with $`(V-I)\sim `$ 0.7–0.9 mag and $`21.5\le V\le 22.6`$ mag we derive masses (during the central He-burning phase) from 0.9 to 1.3$`M_{\odot }`$ and ages in the range of 10 to 3 Gyr, respectively, assuming no mass-loss for the RGB progenitor. As for the UHB stars with $`20.0\le V\le 21.5`$ and $`0\le (V-I)\le 0.7`$, they appear somewhat brighter than the evolutionary tracks of the most massive (and younger) models, rather supporting the smaller distance modulus.
In passing, we wish to note the fine agreement between the shape of the observed clump of stars and the location of our HB tracks. However, we also notice some discrepancy between the observed data and the theoretical models with $`Z`$=0.0001, due to the blue loop of the evolutionary sequences, which extends to hotter temperatures than the observed color of the clumping stars. As shown in Fig. 12, such a discrepancy is removed if the $`Z`$=0.0004 models are taken into consideration. Adopting $`(m-M)_V`$=21.85 mag (lower panel), the HBLE stars turn out to be $`\sim `$ 2.3 Gyr old, with a mass near 0.75$`M_{\odot }`$, to be compared with $`M_{pr}`$=1.4$`M_{\odot }`$. On the other hand, the remaining clumping red giants with $`(V-I)\sim `$ 0.7–0.9 mag and $`21.5\le V\le 22.6`$ mag agree with the HB evolution of models from 0.80 to 1.4$`M_{\odot }`$ and ages from 13 to 2.3 Gyr, respectively, assuming no mass-loss. Similarly, the brightest stars with $`20.0\le V\le 21.5`$ and $`0\le (V-I)\le 0.7`$ seem to require masses up to 2.2$`M_{\odot }`$ (dotted line), assuming no mass-loss for the progenitor. Adopting $`(m-M)_V`$=22.15 mag (upper panel) yields an absolute magnitude for the HBLE stars somewhat brighter than the 0.80$`M_{\odot }`$ model with an age of $`\sim `$ 13 Gyr (i.e. the maximum age derived from the isochrone fitting), suggesting that the distance modulus of Leo I is not larger than $`(m-M)_V`$=22.00 mag. With such a value, we derive that the HBLE stars have mass 0.80$`M_{\odot }`$ and age $`\sim `$ 13 Gyr, while for the remaining clumping red giant stars we obtain 1.0–1.6$`M_{\odot }`$ and 1–7 Gyr, respectively, assuming no mass-loss. However, also for this metallicity the fit of the UHB stars seems to support the smaller distance modulus. In conclusion, taking into account all the various features of the CMD, our best estimates for the metallicity and the distance modulus of Leo I are $`Z`$=0.0004 and $`(m-M)`$=21.90$`\pm `$0.05 mag.
The resulting ages of the stellar components range from 1 to 13 Gyr, with a few stars as young as $`\sim `$ 700 Myr, as derived from the theoretical isochrones and HB evolutionary models. Finally, the analysis of the clumping red giants seems to suggest that the younger stellar populations (i.e. those with massive RGB progenitors) suffered substantial mass-loss during the RGB phase or at the He-flash. Before closing this analysis, it seems worth mentioning that the recent improvements in the stellar evolutionary models have produced younger ages for the Galactic globular clusters (see, e.g., Cassisi et al. 1998 and references therein). Even though the whole ”improved physics” is the subject of deep investigation (Castellani & Degl’Innocenti 1998), we decided to compute a set of ”new” evolutionary models with $`Z`$=0.0002, taking into account also the inward diffusion of helium and heavy elements. From the new isochrones shown in Figure 13, we obtain a slightly larger distance modulus ($`(m-M)_V`$=22.10$`\pm `$ 0.15 mag) and ages in the range of $`\sim `$ 0.7 to 10 Gyr. As for the He-burning stars, the new models yield that the red giant clumping stars have ages and masses in the range of $`\sim `$ 1 to 10 Gyr and 0.85 to 1.3$`M_{\odot }`$, respectively, while the UHB stars require masses up to 2.2 $`M_{\odot }`$ (see Figure 14).

## 6 Summary

As clearly stated in several works (see the comprehensive review by Mateo (1998) and references therein), the possibility of obtaining a deep insight into the intrinsic evolutionary properties of the main stellar population(s) in dSphs represents a pivotal tool not only for understanding the properties of our nearest extragalactic neighbours but also for improving our knowledge of the Galaxy. Indeed, decoding the evolutionary history of the dSphs in the Local Group sheds light on the formation and evolution of our own Milky Way.
Moreover, a thorough understanding of the evolutionary history of dSphs, and more generally of the Local Group, is a fundamental precondition for understanding the observational features of high-redshift, unresolvable galaxies. However, it is clear that an accurate analysis of the star formation history and a reliable knowledge of the age(s) and metallicity distribution rely on several observational features, as derived from CMDs as accurate as possible, reaching the faintest main-sequence magnitudes. In the present analysis we have adopted this approach in order to investigate the stellar populations of the dSph galaxy Leo I. The most relevant points can be summarized as follows. * By using HST archival data, an accurate photometric investigation has been carried out. This has allowed us to obtain a CMD with more than 36,600 stars which reaches very faint magnitudes ($`V\sim `$ 27.5 mag). The tests performed during the photometric analysis show that our photometry reaches the 50% completeness level at magnitude F555W $`\simeq 26.0`$. * The CMD of Leo I is characterized by a well-defined RGB and by an HB clumped near the RGB. The observed TRGB is seen at $`V`$=19.40$`\pm `$0.10 mag and the HB morphology does not show any evidence for a flat distribution near the RR Lyrae instability strip. Moreover, the well-developed SGB extends below the red giant clump, down to $`V`$=25.00$`\pm `$0.20 mag, and the brightest MSTO is located at $`V\sim 22.60\pm 0.20`$ mag, with a few stars even brighter and bluer. * By adopting a reference theoretical scenario for both hydrogen and central helium-burning stars with $`Z`$=0.0001 and $`Z`$=0.0004, the observed maximum luminosity of the RGB and minimum luminosity of the SGB are used to derive a distance modulus of $`(m-M)_V`$=22.00$`\pm `$0.15 mag. Furthermore, the resulting estimates for the age of the oldest stellar population turn out to be in the range of 10–15 Gyr and 9–13 Gyr with $`Z`$=0.0001 and 0.0004, respectively.
However, when considering also the lack of RR Lyrae stars in Leo I, we conclude that the oldest stellar component in Leo I is at most 10 Gyr old, with $`Z`$=0.0001. * This distance modulus evaluation finds further support in the comparison of the distribution of anomalous Cepheids in the $`<M_B>-\mathrm{log}P`$ plane with the location of the instability strip boundaries, as predicted by convective pulsating models. * By adopting the above distance modulus, the comparison of the Leo I CMD with theoretical isochrones yields that the brightest MSTO stars are consistent with an age of the order of 1 Gyr, for both adopted metallicities. * When all this evidence is accounted for, it is possible to conclude that the star formation process in Leo I started about 10 Gyr or 13 Gyr ago, depending on the adopted metallicity, and stopped about 1 Gyr ago, without any clear evidence for star formation occurring in single episodic bursts. These results appear in satisfactory agreement with the scenario outlined by Mateo (1998, his figure 8b) and Grebel (1998). * Such an age dispersion is consistent with the masses of the central helium-burning stars, which are derived to range from 0.75–0.9$`M_{\odot }`$ to 2.0–2.2$`M_{\odot }`$, depending on the adopted metal content. * The estimated age range of the main stellar components in Leo I provides a consistent explanation for the observed color spread along the RGB, without the need to invoke a substantial metallicity dispersion as earlier suggested by L93. * Finally, the use of the theoretical evolutionary scenario based on updated physical inputs (Cassisi et al. 1998) does not change the results in any remarkable way: the main effects are a slight increase of the distance modulus ($`\sim `$ 0.10 mag) and a slight decrease of the age of the older stellar component ($`\sim `$ 1 Gyr).
We are grateful to an anonymous referee for the pertinence of her/his comments regarding the content and the style of an early draft of this paper, which improved its readability.
# Large extra dimensions and deep-inelastic scattering at HERA

TIFR/TH/98-51, December 1998, hep-ph/9812486

Prakash Mathews<sup>1</sup> (prakash@theory.tifr.res.in), Sreerup Raychaudhuri<sup>2</sup> (sreerup@iris.hecr.tifr.res.in), K. Sridhar<sup>1</sup> (sridhar@theory.tifr.res.in)

1) Department of Theoretical Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Bombay 400 005, India. 2) Department of High Energy Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Bombay 400 005, India.

ABSTRACT In scenarios motivated by string theories, it is possible to have extra Kaluza-Klein dimensions compactified to rather large magnitudes, leading to large effects of gravity at scales down to a TeV. The effect of the spin-2 Kaluza-Klein modes on the deep-inelastic cross-section at HERA is investigated. We find that the data can be used to obtain bounds on the effective low-energy scale, $`M_S`$.

The Standard Model (SM) has proved enormously successful in providing a description of particle physics up to the energy scales probed by current experiments, which are in the region of several hundred GeV. In the SM, however, one assumes that the effects of gravity can be neglected, because the scale where the effects of gravity become large, i.e. the Planck scale ($`M_P=1.2\times 10^{19}`$ GeV), is vastly different from the TeV scale. The separation between the TeV scale and the Planck scale is what manifests itself as the hierarchy problem, whose solution has become one of the foci of the search for the correct physics beyond the SM. This problem is exacerbated in traditional unification scenarios: the scale of grand unification is of the order of $`10^{16}`$ GeV and again implies a huge desert. Further, in spite of the unification scale being so close to the Planck scale, traditional unification models make no reference whatsoever to gravity.
Recent advances in string theory provide indications of a major paradigm shift – in particular, unification of gravity with the other interactions now seems possible in the strongly-coupled limit of string theory called $`M`$-theory . But other interesting effects may also manifest themselves at lower energies: for example, it was pointed out that the fundamental scale of string theory can be as low as a TeV . Of tremendous interest to phenomenology is the possibility that the effects of gravity could become large at very low scales ($`\sim `$ TeV), because of the effects of large extra Kaluza-Klein dimensions in which gravity can propagate . The starting point for such a scenario is a higher-dimensional theory of open and closed strings . The extra dimensions of this theory are then compactified to obtain the effective low-energy theory in 3+1 dimensions, and it is assumed that $`n`$ of these extra dimensions are compactified to a common scale $`R`$ which is relatively large, while the remaining dimensions are compactified to much smaller length scales, of the order of the inverse Planck scale. In such a scenario, the SM particles correspond to open strings, which end on a 3-brane. This implies that the SM particles are localised on this 3-brane and are, therefore, confined to the $`3+1`$-dimensional spacetime. On the other hand, the gravitons (corresponding to closed strings) propagate in the $`4+n`$-dimensional bulk. The relation between the scales in $`4+n`$ dimensions and in $`4`$ dimensions is given by $$M_\mathrm{P}^2=M_S^{n+2}R^n,$$ (1) where $`M_S`$ is the low-energy effective string scale. This equation has the interesting consequence that we can choose $`M_S`$ to be of the order of a TeV and thus get around the hierarchy problem. For such a value of $`M_S`$, it follows that $`R=10^{32/n-19}`$ m, and so we find that $`M_S`$ can be arranged to be a TeV for any value $`n>1`$. Effects of non-Newtonian gravity can then become apparent at these surprisingly low energies.
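As a numerical sketch of Eq. (1), one can solve for the compactification radius and convert from natural units (the function and constant names are ours; the quoted $`R=10^{32/n-19}`$ m is an order-of-magnitude expression, so the values below agree with it only up to factors of a few):

```python
HBARC_GEV_M = 1.9732698e-16  # hbar*c in GeV*m, converts GeV^-1 to metres

def compactification_radius_m(n, M_S_GeV=1.0e3, M_P_GeV=1.2e19):
    """Radius R solving M_P^2 = M_S^(n+2) R^n, returned in metres."""
    R_in_inverse_gev = (M_P_GeV**2 / M_S_GeV**(n + 2)) ** (1.0 / n)
    return R_in_inverse_gev * HBARC_GEV_M

# For M_S = 1 TeV: n = 2 gives R of order a millimetre, while larger n
# pushes R far below the reach of current tests of Newtonian gravity.
R2 = compactification_radius_m(2)  # ~2.4e-3 m
R6 = compactification_radius_m(6)  # ~4.5e-14 m
```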
For example, for $`n=2`$ the compactified dimensions are of the order of 1 mm, just below the experimentally tested region for the validity of Newton’s law of gravitation and within the possible reach of ongoing experiments . In fact, it has been shown that it is possible to construct a phenomenologically viable scenario with large extra dimensions which can survive the existing astrophysical and cosmological constraints. While the lowering of the string scale leads to the nullification of the hierarchy problem, the residual problem is that of stabilising the extra large dimensions. This problem has been recently addressed in some papers . Moreover, the effect of the Kaluza-Klein states on the running of the gauge couplings, i.e. the effect of these states on the beta functions of the theory, has been studied and it has been shown that the unification scale can also be lowered down to scales close to the electroweak scale (efforts to lower the compactification scale have been made earlier in Ref. ; the effects of Kaluza-Klein states on the running of couplings were first investigated in Ref. ). For recent investigations of different aspects of the TeV-scale quantum gravity scenario and related ideas, see Ref. . Below the scale $`M_S`$ the following effective picture emerges: there are the Kaluza-Klein states, in addition to the usual SM particles. The graviton corresponds to a tower of Kaluza-Klein states which contains spin-2, spin-1 and spin-0 excitations. The spin-1 modes do not couple to the energy-momentum tensor, and their couplings to the SM particles in the low-energy effective theory are not important. The scalar modes couple to the trace of the energy-momentum tensor, so they do not couple to massless particles. Other particles related to brane dynamics (for example, the $`Y`$ modes, which are related to the deformation of the brane) have effects which are subleading compared to those of the graviton.
The only states, then, that contribute are the spin-2 Kaluza-Klein states. These correspond to a massless graviton in the $`4+n`$ dimensional theory, but manifest as an infinite tower of massive gravitons in the low-energy effective theory. For graviton momenta smaller than the scale $`M_S`$, the effective description reduces to one where the gravitons in the bulk propagate in the flat background and couple to the SM fields which live on the brane via a (four-dimensional) induced metric $`g_{\mu \nu }`$. Starting from a linearized gravity Lagrangian in $`4+n`$ dimensions, the four-dimensional interactions can be derived after a Kaluza-Klein reduction has been performed. The interaction of the SM particles with the graviton, $`G_{\mu \nu }`$, can be derived from the following Lagrangian: $$\mathcal{L}=\frac{1}{\overline{M}_P}G_{\mu \nu }^{(j)}T^{\mu \nu },$$ (2) where $`j`$ labels the Kaluza-Klein mode and $`\overline{M}_P=M_P/\sqrt{8\pi }`$, and $`T^{\mu \nu }`$ is the energy-momentum tensor. Using the above interaction Lagrangian, the couplings of the graviton modes to the SM particles can be calculated , and used to study the consequences at colliders of this TeV scale effective theory of gravity. In particular, direct searches for graviton production at $`e^+e^-`$, $`p\overline{p}`$ and $`pp`$ colliders, leading to spectacular single photon + missing energy or monojet + missing energy signatures, have been suggested . The virtual effects of graviton exchange in $`e^+e^-\to f\overline{f}`$ and in high-mass dilepton production , and in $`t\overline{t}`$ production at the Tevatron and the LHC have been studied. The bounds on $`M_S`$ obtained from direct searches depend on the number of extra dimensions. Non-observation of the Kaluza-Klein modes yields bounds which are around 500 GeV to 1.2 TeV at LEP2 and around 600 GeV to 750 GeV at the Tevatron (for $`n`$ between 2 and 6) . 
Indirect bounds from virtual graviton exchange in dilepton production at the Tevatron yield a bound of around 950 GeV . Virtual effects in $`t\overline{t}`$ production at the Tevatron yield a bound of about 650 GeV . In view of the fact that the effective Lagrangian given in Eq. 2 is suppressed by $`1/\overline{M}_P`$, it may seem that the effects at colliders will be hopelessly suppressed. However, in the case of real graviton production, the phase space for the Kaluza-Klein modes cancels the dependence on $`\overline{M}_P`$ and, instead, provides a suppression of the order of $`M_S`$. For the case of virtual production, we have to sum over the whole tower of Kaluza-Klein states, and this sum, when properly evaluated, provides the correct order of suppression ($`M_S`$). The summation of time-like propagators and that of space-like propagators yield exactly the same form for the leading terms in the expansion of the sum, and this shows that the low-energy effective theories for the $`s`$- and $`t`$-channels are equivalent. In the present work, we study the effect of virtual graviton exchange on the $`e^+p`$ deep-inelastic scattering cross-section at HERA. The presence of the new couplings from the low-energy effective theory of gravity leads to new $`t`$-channel diagrams with an $`e^+q(\overline{q})`$ or $`e^+g`$ initial state. We use the couplings as given in Refs. 
, and summing over all the graviton modes, we find the following expressions for the cross-sections involving the virtual graviton exchange (in the following we use the notation $`d\widehat{\sigma }^{a(i)}/d\widehat{t}`$ for the process $`e^+i\to e^+i`$, where $`i=q,g`$ is the parton in the initial state, and the process is mediated by the exchange of $`a`$; and the notation $`d\widehat{\sigma }^{ab(i)}/d\widehat{t}`$ for the interference of processes $`e^+i\to e^+i`$ mediated by the exchanges of the virtual particles $`a`$ and $`b`$, respectively): $`{\displaystyle \frac{\mathrm{d}\widehat{\sigma }(e^+q\to e^+q)}{\mathrm{d}\widehat{t}}}`$ $`=`$ $`{\displaystyle \frac{\mathrm{d}\widehat{\sigma }^{\mathrm{SM}}}{\mathrm{d}\widehat{t}}}+{\displaystyle \frac{\mathrm{d}\widehat{\sigma }^{G(q)}}{\mathrm{d}\widehat{t}}}+{\displaystyle \frac{\mathrm{d}\widehat{\sigma }^{\gamma G(q)}}{\mathrm{d}\widehat{t}}}+{\displaystyle \frac{\mathrm{d}\widehat{\sigma }^{ZG(q)}}{\mathrm{d}\widehat{t}}},`$ (3) $`{\displaystyle \frac{\mathrm{d}\widehat{\sigma }^{G(q)}}{\mathrm{d}\widehat{t}}}`$ $`=`$ $`{\displaystyle \frac{\pi \lambda ^2}{32M_S^8}}{\displaystyle \frac{1}{\widehat{s}^2}}\left[32\widehat{u}^4+64\widehat{u}^3\widehat{t}+42\widehat{u}^2\widehat{t}^2+10\widehat{u}\widehat{t}^3+\widehat{t}^4\right],`$ (4) $`{\displaystyle \frac{\mathrm{d}\widehat{\sigma }^{\gamma G(q)}}{\mathrm{d}\widehat{t}}}`$ $`=`$ $`{\displaystyle \frac{\pi \alpha e_q\lambda }{2M_S^4}}{\displaystyle \frac{1}{\widehat{s}^2\widehat{t}}}(2\widehat{u}+\widehat{t})^3,`$ (5) $`{\displaystyle \frac{\mathrm{d}\widehat{\sigma }^{ZG(q)}}{\mathrm{d}\widehat{t}}}`$ $`=`$ $`{\displaystyle \frac{\pi \alpha \lambda }{2\mathrm{sin}^22\theta _wM_S^4}}{\displaystyle \frac{\left[C_V^eC_V^q(2\widehat{u}+\widehat{t})^3+C_A^eC_A^q(6\widehat{u}^2+6\widehat{u}\widehat{t}+\widehat{t}^2)\right]}{\widehat{s}^2(\widehat{t}-m_Z^2)}},`$ (6) $`{\displaystyle \frac{\mathrm{d}\widehat{\sigma }(e^+g\to e^+g)}{\mathrm{d}\widehat{t}}}`$ $`=`$ 
$`{\displaystyle \frac{\mathrm{d}\widehat{\sigma }^{G(g)}}{\mathrm{d}\widehat{t}}}={\displaystyle \frac{\pi \lambda ^2}{2M_S^8}}{\displaystyle \frac{\widehat{u}}{\widehat{s}^2}}\left[2\widehat{u}^3+4\widehat{u}^2\widehat{t}+3\widehat{u}\widehat{t}^2+\widehat{t}^3\right],`$ (7) where $`\lambda `$ is the coupling at the effective scale $`M_S`$ and is expected to be of $`𝒪(1)`$, but its sign is not known a priori, and $`C_V=T_3-2\mathrm{sin}^2\theta _WQ_f`$ and $`C_A=T_3`$ are the usual vector and axial-vector couplings of the fermions to the $`Z`$. In our work we will explore the sensitivity of our results to the choice of the sign of $`\lambda `$. The $`e^+g`$ subprocess is, of course, absent in the SM, and is completely a result of introducing the new interactions. The new interactions also contribute in the $`e^+q`$ channel, where there is an interference between the SM amplitude and the amplitude due to the new physics. The H1 and ZEUS collaborations at HERA have presented their results from their combined 1994-97 runs in terms of the quantity $$R=\frac{d\sigma /dQ_e^2|_{\mathrm{exp}}}{d\sigma /dQ_e^2|_{\mathrm{SM}}},$$ (8) where $`Q_e^2`$ is the momentum-transfer squared constructed from the $`e^+`$ track. In our parton-level simulation, $`Q_e^2=-\widehat{t}`$. We use this observable to compare against $$R=\frac{d\sigma /dQ_e^2|_{\mathrm{SM}+\mathrm{NSM}}}{d\sigma /dQ_e^2|_{\mathrm{SM}}},$$ (9) where NSM denotes non-Standard Model, and use the data to put bounds on the value of $`M_S`$. The cross-section $`d\sigma /dQ_e^2`$ is given as $$\frac{d\sigma (e^+p\to e^++\mathrm{jet})}{dQ_e^2}=\sum _i\int dx\,f_{i/p}(x)\frac{d\widehat{\sigma }}{d\widehat{t}},$$ (10) where $`f_{i/p}`$ denotes the probability of finding a parton $`i`$ in the proton. The sum in Eq. 10 runs over the contributing subprocesses. The results of our numerical evaluation of the cross-section are shown in Figure 1. 
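As an illustration of how these parton-level formulas are evaluated numerically, here is a minimal sketch of Eq. (7) in Python (the function name is ours; we assume massless $`2\to 2`$ kinematics, $`\widehat{u}=-\widehat{s}-\widehat{t}`$):

```python
import math

def dsigma_G_gluon(s_hat, t_hat, M_S, lam=1.0):
    """Graviton-exchange contribution to e+ g -> e+ g, Eq. (7):
    d(sigma)/d(t_hat) = pi*lam^2/(2*M_S^8) * (u_hat/s_hat^2)
                        * [2u^3 + 4u^2 t + 3u t^2 + t^3].
    Sketch only; s_hat, t_hat in GeV^2, M_S in GeV."""
    u_hat = -s_hat - t_hat  # massless 2 -> 2 kinematics
    bracket = (2 * u_hat**3 + 4 * u_hat**2 * t_hat
               + 3 * u_hat * t_hat**2 + t_hat**3)
    return math.pi * lam**2 / (2 * M_S**8) * (u_hat / s_hat**2) * bracket
```

Note that this pure graviton term enters with $`\lambda ^2`$ and is therefore insensitive to the sign of $`\lambda `$; the sign matters only in the interference terms of Eqs. (5) and (6).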
We have plotted $`R`$ as a function of $`Q_e^2`$, as obtained from our calculation, and compared it with each experiment separately. For our computations, we use the same cuts as the two experiments and the CTEQ4 parton densities taken from PDFLIB . The curves represent the 95% C.L. bounds from a $`\chi ^2`$ fit to each set of data. These fits yield the bound on $`M_S`$ to be 543 (436) GeV for $`\lambda =+1(-1)`$ for the H1 data and 567 (485) GeV for $`\lambda =+1(-1)`$ for the ZEUS data. For higher values of $`M_S`$, the non-Standard contribution decreases, so that the curves move closer to $`R=1`$. The data on $`R`$ from the H1 and ZEUS collaborations, in fact, show a deviation from the SM for $`Q_e^2`$ values beyond $`10^4`$ GeV<sup>2</sup>. The errors on the ratio $`R`$ at these large values of $`Q_e^2`$ are large, so this discrepancy with the SM prediction is not very significant statistically. The result of including the non-Standard Model contribution is to improve the $`\chi ^2`$ of our fit to the data. We find that for $`\lambda =-1`$, the theoretical prediction, with the non-Standard Model contribution, fits the data rather well, even accounting for the modest dip in $`R`$ at a value of $`Q_e^2`$ just below $`10^4`$ GeV<sup>2</sup>! This behaviour follows from the fact that the interference term in the quark-initiated sector dominates at relatively low $`Q_e^2`$, and this gives a negative contribution for $`\lambda =-1`$. As one moves to larger $`Q_e^2`$, the gluon-initiated contribution starts to dominate and gives the increase at large $`Q_e^2`$. Given that the experiments do see a discrepancy with the SM, though not a statistically compelling one, the bounds we derive from the data are not as strong as those derived from Tevatron data on dilepton production and $`t\overline{t}`$ production . 
In the event that the HERA experiments improve their data in the large $`Q_e^2`$ region and find good agreement with the SM, the bounds presented in this paper are likely to improve considerably. For example, with the 20-fold increase in luminosity planned for the HERA experiments in the next few years, and assuming that the data are centred around the Standard Model prediction, we estimate that the bounds on $`M_S`$ would go up to around 600 (925) GeV for $`\lambda =+1(-1)`$. We have studied the effect of large extra dimensions and TeV-scale gravity on the deep-inelastic scattering cross-section at HERA. The fits of the theoretical curves to the data yield the value of the effective string scale, $`M_S`$, to be $`>`$ 543 (436) GeV for $`\lambda =+1(-1)`$ for the H1 data and $`>`$ 567 (485) GeV for $`\lambda =+1(-1)`$ for the ZEUS data. The bounds are likely to increase with any improvement in the data, especially at large $`Q_e^2`$.
# FOUR ISSUES IN CORRELATIONS AND FLUCTUATIONS ## 1 Introduction The purpose of this talk, as I understand it, is to introduce briefly the subject of the Session to the participants who are not experts in the field. In trying to do this I shall borrow heavily from my recent summary of the Matrahaza workshop . Since I am now much less constrained by the program of the meeting, however, this account shall reflect more adequately my personal views on the subject. I restrict myself to four issues: (i) Bose-Einstein interference; (ii) Intermittency; (iii) QCD and multiparticle correlations; (iv) Event-by-Event fluctuations. ## 2 Bose-Einstein interference The discussion of correlations in multiparticle production is at present largely dominated by effects of the Bose-Einstein interference. Let me thus start with a brief reminder of what this is all about<sup>1</sup><sup>1</sup>1The physics of Bose-Einstein correlations was recently extensively reviewed by G.Baym . The practical problem we face can be formulated as follows: given a calculation (or a model) which ignores the identity of particles, the question is how to “correct” it in order to take into account the effects of quantum interference (which is a consequence of this identity<sup>2</sup><sup>2</sup>2 It should be understood that this problem is very common in quantum mechanical calculations, as illustrated, e.g., by the evaluation of Feynman diagrams. I would like to thank J.Pisut and K.Zalewski for discussions of this question. ). Let us thus suppose that we have an amplitude for the production of N particles, $`M_N^{(0)}(q)`$ ($`q=q_1,\mathrm{}q_N`$), calculated with the identity of particles being ignored. 
The rules of quantum mechanics tell us that, to take the identity of particles into account, we have to replace $`M_N^{(0)}(q)`$ by a new amplitude $`M_N(q)`$ which is a sum over all permutations of the momenta $`(q_1,\mathrm{}q_N)`$ $$M_N^{(0)}(q)\to M_N(q)\propto \sum _PM_N^{(0)}(q_P).$$ (1) This would be the end of the story if particle production were described by a single matrix element. In general, however, we have to average over parameters which are not measured, and therefore the correct description of the multiparticle final state is achieved in terms of the density matrix $$\rho _N^{(0)}(q,q^{\prime })=\sum _\omega M_N^{(0)}(q,\omega )M_N^{(0)\ast }(q^{\prime },\omega ),$$ (2) rather than in terms of a single production amplitude. The sum in (2) runs over all quantum numbers $`\omega `$ which are not measured in a given situation. $`\rho ^{(0)}(q,q^{\prime })`$ gives all available information about the system in question. At this point it is useful to note that, when transformed into the (mathematically equivalent) Wigner representation $$W_N(\overline{q},x)=\int d(\mathrm{\Delta }q)e^{ix\mathrm{\Delta }q}\rho _N^{(0)}(\overline{q},\mathrm{\Delta }q)$$ (3) ($`\overline{q}=(q+q^{\prime })/2;\mathrm{\Delta }q=q-q^{\prime }`$) it gives information about the distribution of momenta and positions of the particles (see, e.g., for a discussion of this point). Using (1) and (2) one easily arrives at the formula for the corrected (i.e., with the identity of particles taken into account) density matrix $`\rho _N(q,q^{\prime })`$ and one finally obtains the observed multiparticle density $$\mathrm{\Omega }_N(q)=\frac{1}{N!}\sum _{P,P^{\prime }}\rho ^{(0)}(q_P,q_{P^{\prime }})$$ (4) where the sum runs over all permutations $`P`$ and $`P^{\prime }`$ of the momenta $`(q_1,\mathrm{}q_N)`$. The factor $`\frac{1}{N!}`$ appears because the phase space for $`N`$ identical particles is $`N!`$ times smaller than the phase space for $`N`$ non-identical particles. 
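For small $`N`$, the symmetrization of Eq. (4) can be carried out by brute force. The sketch below does this in Python for the independent-production ansatz of Eq. (5) introduced below, using a purely illustrative Gaussian single-particle density matrix (the functional form, the names, and the parameter $`R`$ are our assumptions, not taken from the text):

```python
import math
from itertools import permutations

def rho1(q, qp, R=2.0):
    # Hypothetical Gaussian single-particle density matrix:
    # diagonal ~ exp(-q^2), off-diagonal elements damped on the
    # scale 1/R (R mimics a source size; illustration only).
    qbar, dq = 0.5 * (q + qp), q - qp
    return math.exp(-qbar**2) * math.exp(-0.25 * (R * dq)**2)

def omega(qs):
    # Observed N-particle density, Eq. (4), for a factorized
    # density matrix: (1/N!) * sum over permutations P, P'.
    n = len(qs)
    total = 0.0
    for P in permutations(range(n)):
        for Pp in permutations(range(n)):
            prod = 1.0
            for i in range(n):
                prod *= rho1(qs[P[i]], qs[Pp[i]])
            total += prod
    return total / math.factorial(n)
```

For $`N=2`$ this reduces to $`\rho (q_1,q_1)\rho (q_2,q_2)+\rho (q_1,q_2)\rho (q_2,q_1)`$: at equal momenta one recovers the familiar factor-2 Bose-Einstein enhancement, while for well-separated momenta the result falls back to the product of diagonal elements.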
The formula (4) is in common use<sup>3</sup><sup>3</sup>3Using the symmetry properties of the density matrix, the double sum in (4) can be reduced to a single sum. The factor $`\frac{1}{N!}`$ is then absent. and is the basis of our further discussion. ### 2.1 A theoretical laboratory: independent particle production The case of independent particle production is an attractive theoretical laboratory which, although not expected to describe all details of the data, reveals -nevertheless- some generic features of the problem. This was first recognized by Pratt . In terms of the density matrix, independent production means that the density matrix factorizes into a product of single-particle density matrices $$\rho _N^{(0)}(q,q^{\prime })=\rho ^{(0)}(q_1,q_1^{\prime })\rho ^{(0)}(q_2,q_2^{\prime })\mathrm{}\rho ^{(0)}(q_N,q_N^{\prime })$$ (5) and that the multiplicity distribution is the Poisson one $$P^{(0)}(N)=e^{-\nu }\frac{\nu ^N}{N!}.$$ (6) It turns out that in the case of a Gaussian density matrix the problem can be solved analytically. The main results (valid also in the general case of an arbitrary density matrix ) can be listed as follows. (a) All correlation functions $`K_p(q_1,\mathrm{},q_p)`$ and the single particle distribution $`\mathrm{\Omega }(q)`$ can be expressed in terms of one (hermitian) function $`L(q,q^{\prime })=L^{\ast }(q^{\prime },q)`$ of two momenta: $`\mathrm{\Omega }(q)=L(q,q);K_2(q_1,q_2)=L(q_1,q_2)L(q_2,q_1);`$ $`K_3(q_1,q_2,q_3)=L(q_1,q_2)L(q_2,q_3)L(q_3,q_1)+L(q_1,q_3)L(q_3,q_2)L(q_2,q_1),`$ (7) and analogous formulae for higher correlation functions. (b) At very large phase-space density of particles, the distribution approaches a singular point representing the phenomenon of Bose-Einstein condensation: almost all particles populate the eigenstate of $`\rho ^{(0)}(q,q^{\prime })`$ corresponding to the largest eigenvalue. 
The resulting multiplicity distribution is very broad (almost flat) so that, e.g., the probability of an event with not a single $`\pi ^0`$ produced is non-negligible <sup>4</sup><sup>4</sup>4This effect was also considered in connection with the possible production of the Disoriented Chiral Condensate . The present argument adds another obstacle on the difficult road to observation of DCC. . It should not be surprising if the very restrictive condition of independent production, as expressed by (5,6), is not realized in nature. Nevertheless, the comparison of the relations (7) with data is interesting, since they are a sort of reference point allowing one to judge if the observed multiparticle correlations are “large” or “small” with respect to the observed two-body correlations. The existing evidence leads to rather interesting, although controversial, conclusions. At the Matrahaza meeting, Lorstad demonstrated that there are practically no genuine three-particle correlations<sup>5</sup><sup>5</sup>5The importance of the absence of 3-particle correlations in heavy ion collisions was emphasized already some time ago . in S-Pb collisions at CERN SPS. Since the two-particle correlations are clearly visible, this observation is not easy to reconcile with Eq.(7). It was earlier shown by Eggers et al that the UA1 data are also in contradiction with (7), although in this case the 3-body correlations seem to be too large to satisfy (7). On the other hand, it was shown recently by Arbex et al that the NA22 data agree well with (7). This striking difference between the behaviour of heavy ion and “elementary” collisions is certainly very interesting and deserves further attention. We cannot thus consider the results obtained from (5,6) to be a realistic description of the data. Nevertheless, the main conclusion about the possibility of Bose-Einstein condensation remains an interesting option which is worth serious consideration . 
### 2.2 Monte Carlo simulations In this situation, the practical method to study the effects of BE symmetrization on particle spectra is to implement it into the Monte Carlo codes. A “minimal” method of performing this task was suggested some time ago . The idea is to take an existing code (which reproduces the distribution of particle momenta, i.e., the diagonal elements of the density matrix) and to introduce an ansatz for the off-diagonal elements of the multiparticle density matrix (2)<sup>6</sup><sup>6</sup>6As seen from (3) this corresponds to introducing an - a priori arbitrary - distribution of particle emission points in configuration space.. Each event generated by the MC code is then given a weight which is calculated as the ratio of the symmetrized distribution \[Eq. (4)\] and the unsymmetrized one. In this way the modification of the original spectra is kept at the minimum. A practical realization of this idea (in its simplest version) has been developed by the Cracow group and shall be presented by Fialkowski at this meeting. They propose the unsymmetrized density matrix in the form $$\rho _N^{(0)}(q,q^{\prime })=P_N(\overline{q})\prod _{i=1}^Nw(q_i-q_i^{\prime })$$ (8) where $`P_N(q)`$ is the probability of a given configuration obtained in JETSET and $`w`$ is a Gaussian. This prescription does not modify the diagonal elements of the unsymmetrized density matrix ($`w(0)=1`$) and, moreover, does not introduce any new correlations between emission points of the produced particles (when transformed into Wigner representation, Eq.(4), the product $`w(q_i-q_i^{\prime })`$ becomes the product $`w(x_i)`$). Thus (8) can indeed be considered as a minimal modification of the existing code. The authors find that this prescription represents well the existing data on two-particle correlations and that they can recover the experimental multiplicity distribution by a simple rescaling with the formula $`P(N)\to P(N)cV^N`$, without the necessity of refitting the JETSET parameters. 
They also studied production of pairs of $`W`$ bosons at LEP II and found fairly strong effects of quantum interference. One may note, however, that in the present version of the model the position of a particle emission point is not correlated with its momentum, whereas such a correlation is likely to be present in reality. Consequently, the obtained results may be overestimating the effect. This deficiency is easy to repair, but at the price of introducing more parameters. A more fundamental approach has been pursued for some time by Andersson and Ringner and shall be presented here by Todorova-Nova. It is based on the paper by Andersson and Hoffman . In this case the “uncorrected” matrix element represents the decay of one Lund string which is then symmetrized according to the procedure explained in the introduction. Two-particle correlations are well described and several interesting effects are predicted. Among them: (a) the longitudinal and transverse correlations are expected to be different because they are controlled by two different physical mechanisms; (b) Three-particle correlations are predicted non-vanishing and were actually calculated; (c) $`WW`$ production was studied and no significant mass shift is expected; (d) No multiplicity shift in the $`W`$ decay is predicted. This last conclusion is a consequence of the fact that, in the case of more than one string in the final state, no symmetrization between particles stemming from different strings is performed. This corresponds to the assumption that the strings are created at a very large distance from each other. One thus may expect that in a more realistic treatment some multiplicity shift should be present <sup>7</sup><sup>7</sup>7A contribution to this problem was presented recently by B.Buschbeck et al. . Finally, let me add that in both and the “interconnection effect” (which has a tendency to reduce the multiplicity) is neglected. 
The full phenomenological analysis of the data is therefore certainly more complicated, as we shall hear from de Jong. ### 2.3 Probing the space-time structure Much attention is also devoted to the information one may obtain from the data on quantum interference about the space-time structure of the multiparticle system created in the collision \[cf. Eq. (4)\]. Although such analyses have a somewhat limited scope, as they (i) provide only information about the system at the freeze-out and (ii) require several additional assumptions - they nevertheless give a unique opportunity to investigate this problem. Most of the caveats are thus usually postponed to the future (and better data) and the analysis is carried on. The recently presented investigations were based on the hydrodynamic approach. Some of them were discussed here already during the Session on Heavy Ion Interactions. The main features <sup>8</sup><sup>8</sup>8A more detailed description is given in . are: (i) The shape of the particle emission region is consistent with the in-out scenario of Bjorken ; (ii) The longitudinal size of the “fireball” from which a bulk of particles are emitted is several times larger in heavy ion collisions than in hadron-hadron interactions; (iii) The particle emission process starts rather late in heavy ion collisions (after about 4 fm in S-Pb interactions ), as compared to the elementary collision where it happens immediately after the collision ; (iv) The emission process, once started, does not last for a long time: less than 2 fm for elementary and about 3 fm for S-Pb interactions. These features clearly indicate that a heavy ion collision is indeed followed by creation of a longitudinally expanding “fireball” in which some kind of matter is “boiling” for a considerable time. Once it is sufficiently cooled, however, its decay is rather fast. It is presumably too early to claim that it is formed from the quark-gluon plasma but nevertheless this behaviour is rather suggestive. 
## 3 Intermittency Intermittency postulates scaling of the multiparticle spectra. A rather complete review of the subject is now available , therefore I shall restrict myself to a few remarks expressing my personal view on the progress achieved in the last decade. (i) The scaling hypothesis can be formulated in many ways and, indeed, much work was devoted to improvements and generalizations of the original proposal expressed in terms of the (normalized) factorial moments $$F_q\equiv <n(n-1)\mathrm{}(n-q+1)>/<n>^q\propto (\mathrm{bin\;size})^{-f_q}.$$ (9) The result was an impressive progress in the development of more sophisticated tools which are much better suited for investigation of many, sometimes very detailed, aspects of the problem. Let me particularly emphasize the importance of the correlation integrals, first introduced in . Their use was decisive in improving the accuracy of the data and thus in substantiating the evidence for the effect. (ii) On the experimental side, the analysis of high precision data (particularly those of the NA22 and UA1 experiments) allowed to establish -beyond a reasonable doubt- a close connection between intermittency and Bose-Einstein correlations, as suggested almost immediately after the first experimental evidence for increasing factorial moments. This observation allowed one to understand the scaling of the momentum spectra as a reflection of the -more fundamental- scaling in configuration space . I have the impression that the importance of this fact is not yet fully appreciated. (iii) Recently, a general solution of the model of the multiplicative cascade was obtained . One can thus hope for a significant progress in understanding the scaling phenomenon. (iv) Finally, let me also mention another interesting development, namely the generalization of the notion of scaling by the idea of self-affinity . I think it is an interesting direction to pursue and I hope that more data on the subject shall soon be available. We shall hear more about this from Liu. 
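In practice the average in Eq. (9) is taken over events for a given bin of phase space. A minimal sketch in plain Python (the function name is ours; one bin, one multiplicity entry per event):

```python
def factorial_moment(counts, q):
    # Normalized factorial moment F_q of Eq. (9):
    # <n(n-1)...(n-q+1)> / <n>^q over a sample of bin counts.
    def falling(n, k):
        prod = 1
        for j in range(k):
            prod *= n - j
        return prod
    mean_fact = sum(falling(n, q) for n in counts) / len(counts)
    mean_n = sum(counts) / len(counts)
    return mean_fact / mean_n**q
```

For a Poisson multiplicity distribution $`F_q=1`$; $`F_q>1`$ signals positive correlations (bunching), and intermittency corresponds to a power-law growth of $`F_q`$ as the bin is made smaller.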
The major disappointment I see after all these years is that -in fact- no convincing theoretical basis was found for the phenomenon of intermittency, although it seems to be indeed a universal feature of particle spectra <sup>9</sup><sup>9</sup>9A second-order phase transition was invoked by many authors as a possible explanation. This is certainly a valid idea but it does not explain the universality of the phenomenon.. Does it mean that the simple effect one observes is a purely accidental result of summation of much more complex contributions? Perhaps. Nevertheless, I am convinced that the search for a more fundamental reason for the apparent scaling in particle spectra is worth continuing. ## 4 QCD and multiparticle correlations It is now rather well established that the average multiplicity and single particle spectra are well described by perturbative QCD supplemented with the principle of parton-hadron duality . In my opinion, at present, the real challenge to the idea of parton-hadron duality is to explain the data on differential correlation functions . Indeed, it is hard to understand how the momenta of the produced hadrons can follow so closely the momenta of the created partons that the correlations between them are not washed out<sup>10</sup><sup>10</sup>10This problem is much less serious if one considers only the integrated correlation functions. In this connection, see the discussion at the recent meeting on Correlations and Fluctuations, Matrahaza, June 1998 and contributions to this session by Metzger and Chekanov. . Therefore a non-trivial extension of the principle of parton-hadron duality must be formulated in order to give quantitative meaning to perturbative calculations of multiple production. It was therefore rather reassuring to learn that indeed the predictions of perturbative QCD formulated some time ago are badly violated by the L3 data . On the other hand, the same data are well described by the JETSET code. 
The conclusion is that the hadronization part is not correctly taken into account by the simple (naive?) parton-hadron duality. More about that later in this Session by Mandl. This is not to say that the subject is closed: Ochs pointed out that the tested QCD calculations included several simplifying assumptions (the most important among them seems to be the neglect of energy-momentum conservation) and thus it is not obvious which part of the result is actually responsible for the failure. It is clear, nevertheless, that further work on these lines must seriously address the problem of parton-hadron duality and its range of application. ## 5 Event-by-event analysis Event-by-event analysis clearly emerges as the next logical step in studies of multiparticle fluctuations. The subject is not yet well developed, however, and neither the physics nor the methods are sufficiently understood to define precisely what we are really searching for. Therefore, I can only list a few ideas of potential interest. I am fully aware that some of them may not work and that others, more interesting, may well be proposed in the near future. There are two basic reasons why event-by-event fluctuations attract attention. The first one, more spectacular, is to look for large deviations of some events from the average, with the hope of finding a hitherto unobserved effect. The second, more pragmatic, is to measure the distribution of a quantity defined for a single event and thus obtain additional information, helping to understand the physics of the process. This is well illustrated by the multiplicity distribution, which is the simplest event-by-event analysis one may think of. It has been studied for a long time <sup>11</sup><sup>11</sup>11 Contributions related to multiplicity distributions shall be presented by Hegyi, Ploszajczak and Blazek. and was of great help in understanding the physics of multiparticle production. 
Let me now go to my list: (i) Recently, Stodolsky proposed to study fluctuations in transverse momentum (see also ). The idea is that, if the transverse momentum distribution in an event can be related to its “temperature”, one obtains in this way the distribution of “temperatures” of the events. Now, if thermodynamics is a correct description of the process in question, the fluctuations of temperature can be related to the heat capacity $`C_V`$ of the system : $$\frac{(\mathrm{\Delta }T)^2}{T^2}=\frac{k}{C_V}$$ (10) where $`k`$ is the Boltzmann constant. This obviously may be very helpful in searching for a phase transition. Even far from a phase transition, however, a measurement of this kind can provide a lot of information on (a) the properties of the system in question and (b) whether it is indeed close to thermodynamic equilibrium. To take a simple example: in the case of an ideal gas one has $`E=C_VT`$, where $`E`$ is the energy of the system. We thus obtain $$\frac{(\mathrm{\Delta }T)^2}{T^2}=\frac{kT}{E}.$$ (11) The point is that both the L.H.S. and the R.H.S. of this equation can be measured and thus one may hope to estimate the deviation of the system from the ideal gas approximation. <sup>12</sup><sup>12</sup>12Recently, the first results on temperature fluctuations were presented by NA49 coll. . (ii) Another important issue was raised by Hwa . He pointed out the essential difference between the determination of fractal parameters in the case of dynamical systems and in the case of systems of many particles. In a dynamical system one can generate the time sequence and thus estimate how fast the different trajectories diverge. In the case of multiparticle systems we do not have a time sequence and thus we have to rely on patterns. The question in this case is: how different are the patterns of different events? Hwa proposed to measure the pattern of an event by the factorial moment associated with it. One can then ask the question how this measure fluctuates from event to event. 
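The temperature-fluctuation measure of point (i) can be sketched as follows (a deliberately crude version: we take each event's mean transverse momentum as a proxy for its “temperature”, which is our simplifying assumption, and the function name is ours):

```python
def temperature_fluctuation(event_temps):
    # Relative event-by-event fluctuation (Delta T)^2 / T^2,
    # the left-hand side of Eq. (10); event_temps holds one
    # 'temperature' estimate (e.g. mean p_T) per event.
    m = len(event_temps)
    T = sum(event_temps) / m
    var = sum((t - T)**2 for t in event_temps) / m
    return var / T**2
```

Equating this measured quantity to $`k/C_V`$ (Eq. (10)), or to $`kT/E`$ in the ideal-gas case (Eq. (11)), makes both sides of those relations experimentally accessible.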
Studying moments of this distribution provides a measure of event-to-event fluctuation<sup>13</sup><sup>13</sup>13To study the moments of the factorial moments was suggested already some time ago .. When they are considered as a function of bin size, it is possible to define appropriate fractal dimensions which conveniently summarize the information. For the details the reader is referred to the original paper . I personally feel that this is an important conceptual step in our thinking about the problem, although I am not fully convinced that the proposed measure cannot be improved. (iii) The studies of possible phase transitions in the multiparticle systems produced in high-energy collisions suggest that the fractal behaviour may strongly fluctuate from one event to another. It follows that it is essential to be able to study the fractal behaviour in event-by-event analysis. The feasibility of this program was investigated recently . It seems to be rather promising. (iv) The factorial moments can also be considered as a very sensitive signature for clustering of particles in small bins of momentum phase-space. Indeed, a factorial moment of order $`q`$ is sensitive only to the clusters containing at least $`q`$ particles. This obviously eliminates any background very effectively. This point was recently illustrated by the KLM collaboration analysing the collisions of 160 GeV/A Pb nucleons in emulsion . They found that events may differ drastically in the behaviour of their factorial moments, while no great difference is seen when they are analysed by other methods. (v) The distribution of the HBT radii obtained from individual events has also been recognized for some time as a very interesting object to study. Recently, first data on this subject were presented by the NA49 collaboration . 
Although the statistics are still limited (and the authors themselves do not attach too much meaning to the details of the plot), the results clearly show that the measurement is feasible and we may well hope for some exciting news in the not-too-distant future. ## Acknowledgements I would like to thank N. Antoniou and C. Ktorides for the kind hospitality at Delphi. The encouragement from W. Kittel is highly appreciated. This work was supported in part by the KBN Grant No 2 P03B 086 14.
no-problem/9812/cond-mat9812221.html
ar5iv
text
# Synchronization in a ring of pulsating oscillators with bidirectional couplings ## References L.F. Abbott and C. Van Vreeswijk, "Asynchronous states in networks of pulse-coupled oscillators", Phys. Rev. E 48, 1483-1490. A. Corral, C.J. Pérez, A. Díaz-Guilera, and A. Arenas, "Synchronization in a lattice model of pulse-coupled oscillators", Phys. Rev. Lett. 75, 3697-3700. A. Díaz-Guilera, A. Arenas, A. Corral, and C.J. Pérez, "Stability of spatio-temporal structures in a lattice model of pulse-coupled oscillators", Physica 103D, 419-429. A. Díaz-Guilera, A. Arenas, and C.J. Pérez, "Mechanisms of synchronization and pattern formation in a ring of coupled oscillators", Phys. Rev. E 57, 3820. G. Goldsztein and S.H. Strogatz, Int. J. Bifurcation and Chaos 5, 983. H. Ito and L. Glass, Physica 56D, 84. N. Ikeda, "Model of bidirectional interaction between myocardial pacemakers based on the phase response curve", Biol. Cybern. 43, 157-167. Y. Kuramoto, "Collective synchronization of pulse-coupled oscillators and excitable units", Physica 50D, 15-30. R. Mirollo and S.H. Strogatz, "Synchronization of pulse-coupled biological oscillators", SIAM J. Appl. Math. 50, 1645-1662. C.J. Pérez, A. Corral, A. Díaz-Guilera, K. Christensen, and A. Arenas, "Self-organized criticality and synchronization in lattice models of coupled dynamical systems", Int. J. Mod. Phys. 10, 1111-1151. C.S. Peskin, Mathematical Aspects of Heart Physiology, Courant Institute of Mathematical Sciences, New York University (New York), 268. M. Sousa Vieira, A.J. Lichtenberg, and M.A. Lieberman, Int. J. Bifurcation and Chaos 4, 1563. A. Treves, "Mean-field analysis of neuronal spike dynamics", Network 4, 259-284.
no-problem/9812/cond-mat9812350.html
ar5iv
text
# Observation of p-wave Threshold Law Using Evaporatively Cooled Fermionic Atoms ## Abstract We have measured independently both s-wave and p-wave cross-dimensional thermalization rates for ultracold $`{}_{}{}^{40}K`$ atoms held in a magnetic trap. These measurements reveal that this fermionic isotope has a large positive s-wave triplet scattering length in addition to a low temperature p-wave shape resonance. We have observed directly the p-wave threshold law which, combined with the Fermi statistics, dramatically suppresses elastic collision rates at low temperatures. In addition, we present initial evaporative cooling results that make possible these collision measurements and are a precursor to achieving quantum degeneracy in this neutral, low-density Fermi system. While many examples of quantum degenerate fermionic systems are found in nature (electrons in metals and nucleons in nuclear matter, for example), low density Fermi systems are exceedingly rare. However, techniques similar to those that led to the observation of Bose-Einstein condensation (BEC) in atomic systems can be exploited to realize a quantum degenerate, dilute gas of fermionic atoms. Novel phenomena predicted for this system include the suppression of inelastic collisions , linewidth narrowing through suppression of spontaneous emission , and the possibility of a phase transition to a superfluid-like state at sufficiently low temperatures . Just as was found in the case of BEC in alkali atoms, knowledge of the binary elastic collision cross-sections and interatomic potentials is crucial for experiments on fermionic species. Both evaporative cooling and the prospect of fermionic superfluidity depend on the cold collision parameters. For example, accurate prediction of magnetic-field Feshbach resonances , which could be used to realize Cooper pairing of fermionic atoms, hinges on detailed understanding of the interatomic potentials. 
In this letter, we present measurements of elastic collision cross sections for evaporatively cooled <sup>40</sup>K, including a direct observation of p-wave threshold behavior and the resultant strong suppression of the collision rate. Among the stable fermionic alkali atoms, <sup>40</sup>K yields the greatest range of possibilities for evaporative cooling strategies and interaction studies because of its large atomic spin, F (F=9/2 and F=7/2 hyperfine ground states). In addition to having a large number of spin states that can be held in the usual magnetic traps, potassium has two bosonic isotopes, <sup>39</sup>K and <sup>41</sup>K, which could be used for future studies of mixed boson-fermion dilute gases. Forced evaporative cooling , which has proven essential for achieving quantum degeneracy in bosonic alkali gases, relies on elastic collisions for rethermalization of the trapped atomic gas. However, evaporative cooling strategies for fermionic samples are complicated by the fact that atomic collisions at these low temperatures (100’s of $`\mu `$K and below) are predominantly s-wave in character for bosonic atoms, while the Pauli exclusion principle prohibits s-wave collisions between spin-polarized fermions. Evaporative cooling for fermionic atoms must then proceed either through p-wave collisions or through sympathetic cooling . Since the s-wave and p-wave binary elastic cross sections are not well known for <sup>40</sup>K , we have made measurements of elastic collision rates in a magnetic trap. The Fermi-Dirac quantum statistics of <sup>40</sup>K provide a unique opportunity to observe p-wave collisions directly. Exploiting this fact, we have seen evidence for a p-wave shape resonance and have observed threshold behavior of the p-wave cross section. To our knowledge, this measurement represents the first direct verification of the p-wave threshold law for neutral scatterers. 
We use a double-MOT (magneto-optical trap) apparatus to trap and pre-cool <sup>40</sup>K atoms prior to loading them into a purely magnetic trap. Operation of the MOT’s employs two MOPA (master-oscillator power-amplifier) diode laser systems , each frequency stabilized using the DAVLL (dichroic absorption vapor laser lock) technique, which provides a large frequency range for locking. Also, we have developed an atom source that is enriched with 5$`\%`$ <sup>40</sup>K (whose natural abundance is 0.01$`\%`$). This system allows us to trap any of the three potassium isotopes, and to trap $`10^8`$ <sup>40</sup>K atoms (four orders of magnitude more <sup>40</sup>K atoms than previous efforts ). Immediately before loading the sample into the magnetic trap, the atoms are cooled to approximately 150 $`\mu `$K during a Doppler cooling stage of the MOT in which the trapping light is jumped closer to resonance. Further, an optical pumping pulse transfers the majority of the atoms into magnetically trappable states in the F=9/2 hyperfine ground state. The atoms are then loaded into a cloverleaf magnetic trap and, after an initial evaporative cooling stage, we are left with roughly $`10^7`$ <sup>40</sup>K atoms at 60 $`\mu `$K and a peak density of $`10^9`$ cm<sup>-3</sup>. The cloverleaf magnetic trap provides a cylindrically symmetric harmonic potential, with a characteristic radial frequency $`\nu _r=44\pm 1`$ Hz and an axial frequency $`\nu _z`$ of $`19\pm 1`$ Hz for loading. The lifetime of the atoms in the magnetic trap, limited by collisions with room-temperature background atoms, has an exponential time constant of $`300\pm 50`$ s, giving ample time for thermal relaxation studies as well as for evaporation. The radial frequency and the bias magnetic field can be altered smoothly by changing the current through a pair of Helmholtz bias coils. We determine elastic collision cross sections from measurements of cross-dimensional thermalization rates in the magnetic trap.
The sample is taken out of thermal equilibrium by changing $`\nu _r`$ through a ramp of the bias coil current. For the measurements reported here $`\nu _r`$ lies between 44 and 133 Hz. The change in $`\nu _r`$ occurs adiabatically (slowly compared to the atomic motion in the trap) but much faster than the rate of collisions between atoms. Since the axial frequency is essentially unchanged, energy is added to (or removed from) the cloud in only the radial dimension. Elastic collisions then move energy between the radial and the axial dimensions, and the thermal relaxation is observed by monitoring the time evolution of the cloud’s aspect ratio. To avoid perturbations to the image due to the spatially dependent magnetic fields, the trap is turned off suddenly and the cloud is imaged after 2.7 ms of free expansion. The aspect ratio of the cloud is observed via absorption imaging using a 9.1 $`\mu `$s pulse of light resonant with the 4S<sub>1/2</sub>, F=9/2 to 4P<sub>3/2</sub>, F=11/2 transition. Optical depth is calculated from the image captured on a CCD array and then surface fit to a Gaussian distribution to find the rms cloud size in both the radial and axial dimensions. An example of the cloud evolution following a change in trap potential is shown in Fig. 1. Since the expanded cloud sizes are proportional to the square root of the cloud energy in each dimension, the exponential time constant for the redistribution of energy, $`\tau `$, can be extracted from an appropriate fit to the aspect ratio vs. time. To rule out significant relaxation through trap anharmonicities, we have verified that the relaxation rate $`1/\tau `$ scales linearly with the number of trapped atoms $`N`$.
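The extraction of $`\tau `$ from the aspect-ratio evolution can be sketched as follows (synthetic noiseless data with an assumed $`\tau =2.5`$ s, purely for illustration; the actual analysis fits the measured images):

```python
import math

def fit_relaxation_time(times, ratios, equilibrium_ratio):
    """Fit A(t) = A_eq + (A_0 - A_eq) exp(-t/tau) by regressing
    log|A(t) - A_eq| on t; the slope is -1/tau."""
    xs, ys = [], []
    for t, a in zip(times, ratios):
        d = abs(a - equilibrium_ratio)
        if d > 0.0:
            xs.append(t)
            ys.append(math.log(d))
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope

# Synthetic noiseless aspect-ratio data with an assumed tau of 2.5 s
tau_true = 2.5
ts = [0.25 * i for i in range(40)]
ratios = [1.0 + 0.4 * math.exp(-t / tau_true) for t in ts]
print(fit_relaxation_time(ts, ratios, 1.0))  # 2.5
```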
To obtain the elastic collision cross section $`\sigma `$ from our measurements of thermal relaxation rates, we use the relation: $`1/\tau =\frac{2}{\alpha }n\sigma v`$, where $`n`$ is the density-weighted density of the trapped atoms given by $`\frac{1}{N}\int n(r)^2d^3r`$, $`v`$ is the rms relative velocity between two atoms in the trap, and $`\alpha `$ is the calculated average number of binary collisions per atom required for thermalization. The product $`nv`$ depends on both the size and temperature, T, of the trapped sample. These are measured by observing the expansion of an equilibrated sample after release from the magnetic trap. The rate of expansion yields the temperature, while an extrapolation back to the release time gives the initial sizes. Using the trap potential calculated from the field coil geometry we have checked that the measured initial sizes and temperatures are consistent to within their uncertainties. The mean number of collisions each atom undergoes, $`\alpha `$, during one relaxation time constant was determined from a numerical simulation of the experiment using classical Monte Carlo methods . For a harmonic trapping potential, the relaxation simulation yields $`\alpha _s=2.5`$ for s-wave collisions and $`\alpha _p=4.1`$ for p-wave collisions. The ratio $`\alpha _p/\alpha _s`$ can also be determined analytically through an integration over the angular dependence of scattering. This gives $`\alpha _p/\alpha _s=5/3`$, consistent with the Monte Carlo results. The primary results of this paper are shown in Fig. 2. While ordinarily one cannot measure higher order partial wave contributions to the collision cross section directly, the Fermi-Dirac statistics of <sup>40</sup>K allow us to probe p-wave and s-wave interactions independently. The p-wave cross section $`\sigma _p`$ is determined from measurements using a spin-polarized sample ($`|F=9/2,m_F=9/2>`$ atoms), where s-wave collisions are prohibited by the quantum statistics.
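Inverting the relation $`1/\tau =\frac{2}{\alpha }n\sigma v`$ for the cross section is a one-line computation; the numbers below are illustrative placeholders, not the measured values:

```python
def cross_section_from_relaxation(tau, n_bar, v_rel, alpha):
    """Invert 1/tau = (2/alpha) n sigma v for sigma."""
    return alpha / (2.0 * tau * n_bar * v_rel)

# Illustrative placeholder numbers (CGS), not the measured values:
# tau = 5 s, density-weighted density 1e10 cm^-3, v = 10 cm/s, alpha_s = 2.5
sigma = cross_section_from_relaxation(5.0, 1e10, 10.0, 2.5)
print(sigma)  # 2.5e-12 cm^2
```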
The s-wave cross sections $`\sigma _s`$ are determined from data obtained using a mixture of two spin states, $`|9/2,9/2>`$ and $`|9/2,7/2>`$. The magnitude of the p-wave cross section is surprisingly large and we find that <sup>40</sup>K has a p-wave shape resonance at a collision energy of roughly 280 $`\mu `$K. At temperatures well below the resonant energy (less than 30 $`\mu `$K), a fit to $`\sigma _p`$ vs $`T`$ gives $`\sigma _p\propto T^{2.0\pm 0.3}`$. Thus, we have directly observed the expected threshold behavior $`\sigma _p\propto E^2`$. In contrast, $`\sigma _s`$ exhibits little temperature dependence. With these very different temperature dependencies, the collision rate changes by over two orders of magnitude at our lowest temperatures depending on the spin mixture of the fermionic atom gas. To explore this effect further we measure the thermalization rate vs spin polarization at 9 $`\mu `$K (see Fig. 3). We control the relative populations of $`|9/2,9/2>`$ and $`|9/2,7/2>`$ atoms in a two-component cloud with a microwave field that drives transitions to untrapped spin states in the F=7/2 ground state manifold. The trap bias magnetic field breaks the degeneracy of the hyperfine ground-state splitting (1.286 GHz at zero field ) so that the different spin-states can be removed selectively (see Fig. 3 inset). For the data shown in Fig. 3 the fraction of atoms in the $`|9/2,9/2>`$ state, $`f_{m_F=9/2}\equiv \frac{N_{m_F=9/2}}{N_{m_F=9/2}+N_{m_F=7/2}}`$, was varied smoothly from 70 to 100$`\%`$ by varying the power of an applied microwave field (frequency swept) that removes a portion of the $`|9/2,7/2>`$ atoms. The thermalization of mixed spin-state samples depends on both s-wave and p-wave collisions. The data in Fig.
3 can be fit to a simple model given by: $`1/\tau =n_{1,2}({\displaystyle \frac{2}{\alpha _s}}\sigma _s+{\displaystyle \frac{2}{\alpha _p}}{\displaystyle \frac{\sigma _p}{2}})v+(n_{1,1}+n_{2,2}){\displaystyle \frac{2}{\alpha _p}}\sigma _pv,`$ (1) where $`n_{i,j}`$ is the density-weighted density between two species given by $`n_{i,j}=\frac{1}{N_1+N_2}\int n_i(r)n_j(r)d^3r`$ and the subscripts 1 and 2 stand for the two relevant spin states. Since the magnetic moments of the $`|9/2,9/2>`$ and $`|9/2,7/2>`$ atoms are only slightly different, we make the simplifying assumption that these states have identical spatial profiles in the trap. A fit using the above model with $`\sigma _s`$ and the ratio $`\sigma _p/\sigma _s`$ as free parameters shows good agreement with the data in Fig. 3. In addition to demonstrating the type of control over collision rates that is available in a trapped gas of fermionic atoms, this measurement of $`\sigma _p/\sigma _s`$ at low temperature provides a sensitive constraint on the triplet scattering length. The s-wave cross-sections shown in Fig. 2 were extracted using the above equation; however, at these low temperatures $`\sigma _p`$ is relatively small and the measured thermalization rates are due primarily to s-wave interactions. To compute the scattering cross sections for comparison with these data, we first identify singlet and triplet potassium potentials that have been determined spectroscopically. At large interatomic separation $`R`$ these potentials are matched smoothly to the long-range form of Marinescu et al. . We add an additional correction to these potentials’ inner walls , enabling us to tune the scattering lengths over their entire ranges $`-\infty <a<\infty `$. We set the singlet scattering length’s value at $`a_s=104a_0`$, where $`a_0`$ is the Bohr radius, but leave the triplet scattering length $`a_t`$ as a free parameter to be determined by the experiment.
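Equation (1) can be evaluated directly under the stated identical-profile assumption, in which case $`n_{i,j}=f_if_jn`$ with $`f_i`$ the population fractions. The sketch below (illustrative magnitudes, not the measured ones) shows the collapse of the thermalization rate as the gas is spin-polarized at low temperature, where $`\sigma _p`$ is far below $`\sigma _s`$:

```python
def relaxation_rate(f, nv, sigma_s, sigma_p, alpha_s=2.5, alpha_p=4.1):
    """Eq. (1) under the identical-spatial-profile assumption, so that
    n_{i,j} = f_i f_j n with f_i the population fractions; nv stands for
    the density-weighted density times the rms relative velocity."""
    inter = f * (1.0 - f) * (2.0 / alpha_s * sigma_s
                             + (2.0 / alpha_p) * (sigma_p / 2.0))
    intra = (f * f + (1.0 - f) ** 2) * (2.0 / alpha_p) * sigma_p
    return nv * (inter + intra)

# Illustrative low-temperature magnitudes (sigma_p << sigma_s), not the data:
r_mixed = relaxation_rate(0.7, 1e11, 5e-12, 5e-14)
r_polarized = relaxation_rate(1.0, 1e11, 5e-12, 5e-14)
print(r_mixed / r_polarized)  # the rate collapses as the gas is polarized
```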
We note that the present experiment is relatively insensitive to the value of $`a_s`$ since even the $`|9/2,9/2>+|9/2,7/2>`$ process is strongly triplet-dominated and no singlet resonance occurs near threshold; indeed, varying $`a_s`$ over its range of uncertainty, $`101<a_s<107`$ , does not change the fit. After computing cross sections as a function of collision energy, we determine temperature-dependent cross sections by computing a thermal average over collision events, weighted by the collision energy. Using this type of thermal averaging to account for a temperature-dependent cross section is supported by Monte Carlo studies . To make a fit to the data, we compute $`\chi ^2`$ while floating both $`a_t`$ and a multiplicative factor $`ϵ`$ which scales simultaneously the computed $`\sigma _s(T)`$ and $`\sigma _p(T)`$. This factor is required to accommodate a $`\pm 50\%`$ systematic uncertainty in the experimental determination of absolute cross sections (primarily from $`N`$). Our global best fit occurs for $`a_t=157\pm 20a_0`$ and $`ϵ=1.6`$, with a reduced $`\chi ^2`$ of 3.8; the corresponding cross sections are plotted as lines in Fig. 2. The uncertainty in $`a_t`$ reflects a doubling of the fit $`\chi ^2`$ and includes a $`2a_0`$ uncertainty arising from varying $`C_6`$ over its range $`3600<C_6<4000`$ a.u. . Our nominal potential gives a p-wave shape resonance at $`280\mu `$K in collision energy, with an asymmetric lineshape whose FWHM is $`400\mu `$K. The relatively small uncertainty on the value of $`a_t`$ is attributable to the fact that we can simultaneously fit s-wave and p-wave collision data having little relative uncertainty. The value of $`a_t`$ for <sup>40</sup>K determined here does not agree well with reference , highlighting the importance of low temperature data in determining accurate potentials. 
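The energy-weighted thermal average described here can be sketched numerically; for a threshold-law cross section $`\sigma =cE^2`$ the average is analytic, $`\sigma (T)=6c(kT)^2`$, which both checks the quadrature and reproduces the observed $`\sigma _p\propto T^2`$ scaling (a toy quadrature, not the paper's actual numerics):

```python
import math

def thermal_average(sigma_of_E, kT, n_steps=4000, e_max_factor=30.0):
    """Collision-energy-weighted thermal average,
    <sigma>(T) = Int sigma(E) E exp(-E/kT) dE / Int E exp(-E/kT) dE,
    by the trapezoidal rule (a sketch of the averaging described in the
    text, not the paper's actual numerics)."""
    de = e_max_factor * kT / n_steps
    num = den = 0.0
    for i in range(n_steps + 1):
        e = i * de
        w = (0.5 if i in (0, n_steps) else 1.0) * e * math.exp(-e / kT)
        num += w * sigma_of_E(e)
        den += w
    return num / den

# Check against the analytic result for a threshold-law sigma_p = c E^2:
# <sigma_p>(T) = 6 c (kT)^2, reproducing the observed sigma_p ~ T^2 scaling.
c = 1.0
avg = thermal_average(lambda e: c * e * e, kT=2.0)
print(avg)  # close to 6 * c * 2.0**2 = 24
```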
Our measurement is, however, in good agreement with the value $`a_t=194_{-42}^{+172}`$ obtained recently from an analysis of photoassociation spectroscopy of $`{}_{}{}^{39}K`$ . This agreement between two fundamentally different experiments is very encouraging, and suggests that the potassium scattering lengths are now fairly well determined . Using our new value, we have tabulated in Table 1 the resulting triplet scattering lengths for collisions between the different potassium isotopes. One of the important applications of these thermalization rate measurements is in determining the feasibility of various evaporative cooling strategies for this fermionic atom gas. The large p-wave cross section makes evaporation of a spin-polarized sample possible, but only at $`T\gtrsim 20\mu `$K. To reach lower temperatures, which is likely to be necessary for producing quantum degenerate samples, evaporation of a mixed spin-state sample should work well given the suitably large s-wave elastic collision cross section. Indeed, for the data presented in this letter we vary the temperature of the sample through forced evaporation. To facilitate evaporative cooling, we increase the collision rate by ramping to a $`\nu _r=133\pm 5`$ Hz trap. Evaporation then proceeds using a mixed spin-state sample and applying a microwave field to remove selectively the most energetic atoms (in both spin states). With these initial attempts at evaporative cooling of <sup>40</sup>K we can lower the sample temperature from 100 $`\mu `$K to 5 $`\mu `$K. We see runaway evaporation, where the collision rate in the gas increases as the temperature decreases. While the samples discussed in this work are still far from quantum degeneracy, this initial success and our measurement of a large s-wave elastic collision cross section bode well for future progress toward this goal. This work is supported by the National Institute of Standards and Technology and the National Science Foundation.
The authors would like to express their appreciation for useful discussions with C. Wieman and E. Cornell.
no-problem/9812/astro-ph9812466.html
ar5iv
text
# Origin and Propagation of Fluctuations of Turbulent Magnetic Fields ## 1. Introduction In a highly conductive and strongly turbulent plasma like the solar photosphere, random small-scale deviations of the flow field from mirror symmetry are expected to incessantly generate small magnetic flux loops of random orientation. This is a miniature analogue of the large-scale dynamo process occurring in a globally non-mirror-symmetric flow, for which reason it is known as small-scale dynamo action ( , , ). Owing to the random orientation of the loops, no net large-scale field will arise, but a non-zero mean magnetic energy density $`B^2`$ and mean unsigned flux density $`|B|`$ results. Most of this turbulent flux resides in magnetic structures with scales much smaller than the characteristic scale of the turbulent velocity field (which is the granular scale of $`1000`$ km in the solar case). Traditional Zeeman magnetography is “blind” to these fields, as the net circular polarization of the mixed-polarity small-scale field cancels out in a resolution element. It has long been recognized that the Hanle effect (magnetic depolarization of linearly polarized radiation) may offer a way to detect the turbulent fields ( ). The observational study of turbulent fields has however been hampered by the shortage of observational data, by the lack of a reliable radiative transfer theory for polarized radiation in magnetic fields, and by the fact that besides turbulent fields, the Hanle effect is also due to resolved magnetic elements (network, ephemeral active regions) and to the overlying canopy fields (especially for lines formed higher in the atmosphere). Nevertheless, in recent years important advances have been made in all these areas ( , , , , , , ). One of the most intriguing recent discoveries is the finding of () that the degree of linear polarization $`Q/I`$ shows large-amplitude random variations over the solar disk.
While, as mentioned above, canopy fields, network elements and ephemeral active regions may also contribute to the observed Hanle depolarization, it is still likely that at least part of this observed spatial variation is due to the presence of similar fluctuations in the flux density $`|B|`$ of the turbulent photospheric magnetic field. The presence of fluctuations should not come as a surprise from a theoretical point of view. After all, in the turbulent solar photosphere any physical quantity should show fluctuations, and even variations of an amplitude comparable to the mean value are commonplace. What is more surprising is the spatial scale of the fluctuations. While the observations have a resolution of about $`1\mathrm{"}`$ along the slit, the dominant variations seem to occur on the much larger scale of $`10^4`$ km. Variations on the granular scale are of much smaller amplitude. (See e.g. Fig. 3 in .) The real theoretical challenge is therefore to understand how the dominant scale of fluctuations of the turbulent flux density can be so much larger than the turbulence scale. One popular explanation for the existence of large-scale photospheric structures (such as supergranulation) is that they are the “imprints” of processes going on deeper down in the convective zone, where the characteristic scales (determined by the pressure scale height $`H`$) are larger. Alternatively, it is of course also possible that the large scales are due to some local photospheric process like an inverse cascade. In order to resolve this problem, we need a model for the generation and transport of turbulent magnetic flux that takes into account both the generation and saturation processes constituting the small-scale dynamo and the turbulent transport of $`|B|`$ throughout the underlying convective zone. In the following such a model will be presented. ## 2.
The spectrum of fluctuations: a linear model The evolution equation for the unsigned flux density $`|B|`$ of the turbulent field should read something like $$\partial _t|B|=\underset{\text{turbulent diffusion}}{\underbrace{\nabla \left(\beta \nabla |B|\right)}}+\underset{\begin{array}{c}\text{linear generation}\\ \text{(small-scale dynamo)}\end{array}}{\underbrace{|B|/\tau _+}}-\underset{\begin{array}{c}\text{nonlinear saturation}\\ q>0\end{array}}{\underbrace{(|B|/B_0)^q|B|/\tau _{-}}}$$ (1) As the transport of the highly intermittent magnetic field in a turbulent plasma is due to advection of flux tubes irrespective of their polarity, the transport terms are expected to be identical to those for the signed field $`B`$. For simplicity, here we consider an isotropic turbulent diffusion with diffusivity $`\beta =lv/3`$ (first term on the r.h.s.), $`l`$ and $`v`$ being the correlation length and r.m.s. amplitude of the turbulent velocity field. The linear generation term corresponding to a small-scale dynamo would lead to an exponential growth of a homogeneous $`|B|`$ field to infinity, were it not for a higher-order term leading to the saturation of $`|B|`$ at a finite value $`B_0`$, induced by the curvature force (last term on the r.h.s.). Numerical simulations and closure calculations ( , , , ) show that $`B_0`$ is about an order of magnitude lower than the equipartition flux density: $`B_0\approx B_{\text{eq}}/10`$. As the coefficients $`1/\tau _+`$ and $`1/\tau _{-}`$ are functionals of the turbulent velocity field, they are expected to show considerable fluctuations around their mean values. Hence, the flux density $`|B|`$ determined by the equilibrium of generation and nonlinear saturation processes will also fluctuate around its mean value $`\overline{|B|}\approx B_0`$. It is plausible to assume $$\overline{1/\tau _{-}}=\overline{1/\tau _+}=1/\tau \sim v/l.$$ (2) Furthermore, introducing the notation $`|B|/B_0=1+b`$, it greatly simplifies the treatment if one assumes $`b\ll 1`$.
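Before linearization, the diffusionless fixed point of equation (1) already shows why a small $`q`$ produces large fluctuations: balancing generation against saturation gives $`|B|=B_0(\tau _{-}/\tau _+)^{1/q}`$, so modest fluctuations of the rates are amplified by the exponent $`1/q`$. A small sketch (illustrative numbers):

```python
def equilibrium_flux(b0, tau_plus, tau_minus, q):
    """Fixed point of Eq. (1) without diffusion:
    |B|/tau_+ = (|B|/B0)^q |B|/tau_-  =>  |B| = B0 (tau_-/tau_+)^(1/q)."""
    return b0 * (tau_minus / tau_plus) ** (1.0 / q)

# With q = 0.1 a mere 5% excess in the generation rate 1/tau_+ shifts the
# equilibrium flux density by roughly 67%; the weak nonlinearity amplifies
# fluctuations of the coefficients.
print(equilibrium_flux(1.0, 1.0, 1.0, 0.1))   # 1.0
print(equilibrium_flux(1.0, 0.95, 1.0, 0.1))  # (1/0.95)**10, about 1.67
```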
(In Section 3 we will consider the problem to what extent this simplification affects the results.) This allows us to linearize equation (1). In the case when $`b`$ varies much faster with depth than $`B_0`$ we obtain $$\partial _tb=\nabla \left(\beta \nabla b\right)-qb/\tau +\left(1/\tau _+-1/\tau _{-}\right)^{\prime }$$ (3) Now the second term on the r.h.s. describes the net mean restoring effect of the generation and saturation terms, tending to reduce the fluctuation $`b`$, while the last term is the fluctuation generation term, or forcing term, arising owing to the turbulent fluctuations in the coefficients. We further introduce the (physically plausible) assumption that the Fourier spectrum of this forcing term is dominated by those isotropic modes whose vertical phase is such that at the depth where the pressure scale height equals the inverse of their horizontal wavenumber $`k`$ they are maximal: $$\left(1/\tau _+-1/\tau _{-}\right)^{\prime }=\frac{1}{\tau }\sum _k\widehat{b}_f(k,z)\mathrm{exp}[i(kx+ky+\omega t)]$$ (4) (Note that throughout this paper we take the logarithmic pressure $`\mathrm{ln}P`$ as independent variable in the vertical direction, the depth $`z`$ being just a shorthand notation for a function $`z(\mathrm{ln}P)`$, determined by a convection zone model. $`x`$ and $`y`$ are the horizontal coordinates.) For the solution of equation (3) we take a similar Ansatz: $$b=\sum _k\widehat{b}(k,z)\mathrm{exp}[i(kx+ky+\omega t)]$$ (5) Let us consider one mode only. (Note that in this case $`\widehat{b}`$ can always be considered real, as by virtue of our assumption about the vertical phase of modes $`\widehat{b}(k,z)`$ may be written as $`\widehat{b}_1(k)\mathrm{cos}[\mathrm{ln}P-\mathrm{ln}P_0(k)]`$, and the initial phase in $`\widehat{b}_1`$ can be chosen freely by a displacement of the time axis.) We substitute (4) and (5) into (3), simplify, and take the real part. To simplify the notation, from this point onwards we omit the hats.
With this we arrive at $$-d_z(\beta d_zb)+(2k^2\beta +q/\tau )b=b_f/\tau $$ (6) This equation determines the Fourier amplitude $`b(z,k)`$ of each mode of horizontal wavenumber $`k`$ in the spectrum of turbulent magnetic field fluctuations. Vertical diffusion is now separated in the first term; horizontal diffusion and the restoring force constitute the second and third terms on the l.h.s. The r.h.s. is the Fourier amplitude of the forcing. $`b_f`$ is the fluctuation amplitude produced in time $`\tau `$ by the forcing if other terms were not present. As the forcing is due to the action of the fluctuating flow field on the existing magnetic field, it is plausible to assume $`b_f/\tau \sim B_k/(B_0\tau _k)`$, where $`B_k`$ is the corresponding Fourier amplitude in the spectrum of $`|B|`$, and $`\tau _k(k,z)`$ is the eddy turnover time. This spectrum may be determined from closure calculations, numerical simulations and observations. Its simplest representation is by two power laws joining in a peak at $`k\approx 10/l`$. Thus, we represent the r.h.s. by $$b_f(z,k)/\tau (z,k)=10^{2/3-p}(k/k_0)^p,\qquad k_0\equiv 1/H(z),$$ (7) where $`p=p_1`$ for $`k<10k_0`$ and $`p_2`$ otherwise. (Note that in fact a spectrum with two breaks might be more realistic, as the spectrum of $`\tau `$ is peaked at a lower wavenumber of $`k\approx 1/l`$; our purpose here, however, is just to present a simple example calculation.) On the basis of the observed properties of photospheric magnetic fields and motion, $`p_1\approx 1.5`$ seems to be a realistic choice. The value of $`p_2`$ will be found to be irrelevant to the solution below. For the solution of equation (6) the parameters $`\beta `$ and $`\tau `$ are interpolated from a convective zone model. $`q`$ is evaluated by fitting a solution of equation (1), with the diffusive term and the perturbations of the coefficients neglected, to profiles of $`|B|^2(t)`$ resulting from the closure model of (); this yields $`q\approx 0.1`$.
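Equation (6) is a linear two-point boundary value problem; a finite-difference sketch with constant coefficients and closed (zero-flux) boundaries is given below (the constant $`\beta `$ and $`\tau `$ are toy stand-ins for the convection-zone profiles actually interpolated in the paper). With uniform forcing the solution is uniform, $`b=b_f/(2k^2\beta +q/\tau )`$, which checks the solver:

```python
def solve_mode(nz, dz, beta, k, q, tau, forcing):
    """Solve -d/dz(beta db/dz) + (2 k^2 beta + q/tau) b = forcing on a
    uniform grid with zero-flux (closed) boundaries, via the Thomas
    algorithm. Constant beta and tau are toy stand-ins for the
    convection-zone profiles interpolated in the paper."""
    c0 = beta / dz ** 2
    a_coef = 2.0 * k * k * beta + q / tau
    diag = [2.0 * c0 + a_coef] * nz
    # finite-volume zero-flux end cells have only one neighbour
    diag[0] = c0 + a_coef
    diag[-1] = c0 + a_coef
    rhs = list(forcing)
    # Thomas algorithm; both off-diagonals are -c0
    for i in range(1, nz):
        m = -c0 / diag[i - 1]
        diag[i] -= m * (-c0)
        rhs[i] -= m * rhs[i - 1]
    b = [0.0] * nz
    b[-1] = rhs[-1] / diag[-1]
    for i in range(nz - 2, -1, -1):
        b[i] = (rhs[i] + c0 * b[i + 1]) / diag[i]
    return b

# Uniform forcing check: the solution is b = b_f / (2 k^2 beta + q/tau).
nz, dz, beta, k, q, tau = 50, 0.1, 1.0, 2.0, 0.1, 1.0
b = solve_mode(nz, dz, beta, k, q, tau, [0.5] * nz)
print(b[0])  # 0.5 / 8.1
```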
The boundary conditions may be chosen as closed ($`F\equiv -\beta d_zb=0`$) or open ($`F=vb`$); experimenting has shown that this does not exert a very strong influence on the resulting mode profiles. For the numerical solution of equation (6) we take $`\mathrm{ln}P`$ as independent variable and use a relaxational method. The vertical profiles of some Fourier modes in the spectrum of fluctuations of the turbulent magnetic field are shown in Figure 1. As expected, the modes with larger horizontal scales have their maxima in deeper layers. Computing a large number of modes and plotting their amplitudes near the surface against $`k`$ yields the Fourier spectrum of the fluctuations in the turbulent magnetic field (Fig. 2). Experimenting with the parameters in equation (6) we find that the choice of $`p_2`$ is irrelevant for the spectrum, $`B_0/B_{\text{eq}}`$ determines its amplitude, while $`p_1`$ and $`q`$ determine the shape of the spectrum. A striking feature of the spectra in Figure 2 is that their maxima fall at significantly lower wavenumbers than $`k_0=1/l`$, i.e. the characteristic scale of the fluctuations in turbulent magnetic flux density is much larger than the typical scale of turbulent motions. The physical background of this phenomenon is that the high wavenumber components are more efficiently suppressed by diffusion (the $`2k^2\beta b`$ term in eq. (6)). The wavenumber of the maximum increases with $`q`$ (dashed vs. dash-dotted curves), as a stronger nonlinear saturation reduces the role of the diffusive terms in the equation. On the other hand, the spectral amplitude at even lower wavenumbers depends strongly on the spectral index $`p_1`$ of the forcing (dashed vs. solid curves); clearly, the shallower the forcing spectrum, the more energy is input at the larger scales. ## 3. The effect of nonlinearity The results presented in the previous section were computed under the assumption that the fluctuations of $`|B|`$ have a small relative amplitude.
This assumption is rather dubious in the light of the large Fourier amplitudes found in the model (Fig. 1). In order to have an idea of the extent to which nonlinear effects may modify the linear results, in this section we present an alternative model that calculates the fluctuating field without the assumption of linearity, at the cost of a strong simplification of the geometry: only $`k=0`$ modes are considered. This obviously implies that spatial spectra or correlation lengths cannot be studied; instead, we will compare the temporal autocorrelation of the fluctuations in the linear and nonlinear cases. For $`k=0`$ modes $`d_x=d_y=0`$, so we write equation (1) in the form $$\partial _t|B|=d_z(\beta d_z|B|)+|B|/\tau \left[1-(|B|/B_0)^q\right]+B_f/\tau $$ (8) where the last term corresponds to fluctuation forcing due to the turbulent fluctuations of the velocity field. We model this term as a Gaussian stationary random process with correlation time $`\tau (z)`$, vertical correlation “length” 1 (in $`\mathrm{ln}P`$ units), and a mean displacement of $`0.3B_0`$ over $`\tau `$ . Equation (8) is then integrated numerically starting from an arbitrary perturbed initial state. An example solution is presented in Figure 3. Figure 4 shows the autocorrelation of the fluctuating flux density as a function of the time shift for different cases. It is apparent (dotted vs. dashed curves) that the correlation time is primarily determined by the value of the $`q`$ nonlinearity parameter, lower $`q`$ values corresponding to longer correlation times. This result is a close analogue of the findings of Section 2 with respect to the spatial correlations. On the other hand, switching off the diffusive term in equation (8) (dash-dotted vs. dashed curves) has no significant effect, showing that diffusive quenching of small spatial scales is not the mechanism responsible for the extended correlations here. Replacing equation (8) with its linear equivalent (solid vs.
dashed curves) does not lead to great modifications in the results. This may reassure us to some extent of the reliability of the findings of Section 2 above. ## 4. Conclusion The results of Section 2 now enable us to answer the question posed at the end of the Introduction. Turbulent transport processes indeed result in a dominant scale for the fluctuations of the turbulent magnetic flux density that is an order of magnitude larger than the turbulence scale. The physical mechanism behind this phenomenon is that small-scale fluctuations are more efficiently damped by diffusion, shifting the spectral peak to lower wavenumbers. The low value of the $`q`$ nonlinearity parameter suggested by closure calculations and simulations also favors larger dominant scales. On the other hand, the possibility that deeper structures are “imprinted” on the photospheric pattern can apparently be discarded. Switching off the first term (vertical diffusion) in equation (6) does not lead to any reduction of the dominant scales (dotted vs. dashed curves in Fig. 2; dash-dotted vs. dashed in Fig. 4). All this shows that the observations of strongly varying Hanle depolarization over the solar disk may be understood from a theoretical point of view even if all the depolarization is attributed to turbulent fields. Further advances on both the observational and theoretical side may make more detailed comparisons between the models and the observations possible, thereby offering the prospect of a direct observational diagnostics of the properties of the turbulent dynamo. ### Acknowledgments. This work was funded by the DGES grant no. 95-0028.
# Mirror matter and primordial black holes (December 1998, UM-P-98/62, RCHEP-98/18) ## ACKNOWLEDGMENTS RRV is supported by the Australian Research Council. NFB is supported by the Commonwealth of Australia and the University of Melbourne.
# From Quasars to Extraordinary N-body Problems ## Introduction This meeting is held to honour George Contopoulos for his great contributions to dynamical systems theory and the N-body problem. I shall pay my tribute to him in three parts, * Placing his contributions in the proud history of those who have made major contributions to the N-body problem. * Since nothing but the best is good enough to honour George I present to him a copy of my best paper (Lynden-Bell, 1969) and include here a résumé of its arguments that led to the current theory of quasars. There are questions my paper raised 29 years ago which are still unexplored. * With Prof. Ruth Lynden-Bell (my wife) I present our new extraordinary N-body problems which we solve for all initial conditions. These problems can also be solved in quantum mechanics when the hyper-keplerian potential energy is $$V=-N\stackrel{~}{Z}e^2/r,$$ $`(1)`$ where $$r^2=\underset{i=1}{\overset{N}{\sum }}\frac{m_i}{M}\left(𝐱_i-\overline{𝐱}\right)^2,$$ $`(2)`$ $$M=\underset{i}{\sum }m_i$$ $`(3)`$ i.e., $`r`$ is the mass-weighted-root-mean-square radius of the N-body system. I quote here the energy and degeneracy of the $`n`$th quantum state but we shall publish derivations elsewhere. $$E_n=-\frac{2M\left(N\stackrel{~}{Z}e^2\right)^2}{\hbar ^2\left[2n+3(N-2)\right]^2}$$ $`(4)`$ the degeneracy of this $`N`$ particle state is $$g(n,N)=\frac{\left[n+3(N-2)\right]!}{(n-1)!(3N-4)!}\left[2n+3(N-2)\right],$$ $`(5)`$ for $`N=2`$, $`g`$ reduces to $`n^2`$ and the energy reduces to that of the hydrogen atom for which $`M=m_p+m_e`$. To see this, $`\stackrel{~}{Z}`$ is replaced by $`\frac{1}{2}(m_pm_e)^{1/2}/M`$ when $`r`$ is replaced by the separation of the electron from the proton. We have written the coefficient of the potential energy in the clumsy form $`N\stackrel{~}{Z}e^2`$ so that the analogy to hydrogenic atoms can be readily seen by physicists. 
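The degeneracy formula (5) is easy to check numerically; in particular it reduces to the hydrogenic $`n^2`$ for $`N=2`$. A minimal sketch (the $`N=3`$ values are shown only for illustration):

```python
from math import factorial

def degeneracy(n, N):
    # g(n, N) = [n + 3(N-2)]! (2n + 3(N-2)) / [(n-1)! (3N-4)!]
    return (factorial(n + 3 * (N - 2)) * (2 * n + 3 * (N - 2))
            // (factorial(n - 1) * factorial(3 * N - 4)))

# N = 2 recovers the hydrogenic degeneracy n^2
print([degeneracy(n, 2) for n in range(1, 6)])   # [1, 4, 9, 16, 25]
# first few levels of an N = 3 system
print([degeneracy(n, 3) for n in range(1, 4)])
```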
## 1 Contributions to the N-body problem (excluding Agatha Christie’s) The N-body problem probably started with Newton although Hooke would undoubtedly dispute it since he seems to have conceived the idea independently but had not the mathematical ability to work out its consequences. As Chandrasekhar (1996) has shown, Newton’s Principia (Newton 1687, Cajori 1934) has much to teach us even today (see Lynden-Bell & Nouri-Zonoz, 1998). Recent studies of the Portsmouth papers have shown that Newton developed most of the perturbation theory that was hitherto attributed to the mathematical astronomers of the 18th and 19th centuries. Newton’s method was to store up the momentum generated by perturbations and then deliver it as an impulse that changed the motion from one ellipse to another. This of course gives him the equations for the variations of the orbital elements which are the meat of perturbation theory. My brief résumé of the N-body problem’s history is: | Newton 1687 | Orbit Theory and the general solution of | | --- | --- | | | the first extraordinary N-body problem | | Laplace 1795 | Perturbation Theory for near circular | | | orbits | | Poincaré 1892–99 | Topological Methods | | Whittaker 1913 (1959) | Adelphic Integrals as series | | Contopoulos 1956– | Third Integrals and Chaos | | Kolmogorov-Arnold-Moser | Invariant Tori & Arnold diffusion | To these theoretical studies we must add the numerical computation of the N-body problem and here Aarseth’s name stands out as a persistent pioneer exploring this problem (Aarseth, 1974) although many others have contributed, especially Heggie (1975) through his work on triple interactions. 
Both in globular cluster theory and in dynamical systems Henon’s work stands out for its beauty (Henon, 1961, 1969, 1974) while Antonov (1962) was responsible for a fundamental advance in the understanding of gravitational thermodynamics later popularised and extended to negative specific heats and gravitational phase transitions by the author (Lynden-Bell & Wood, 1968; Lynden-Bell & Lynden-Bell, 1977) and by Thirring (1972). Bettwieser & Sugimoto (1984) were responsible for giving the gravothermal instability a delightful new twist in their discovery of the inverse gravothermal catastrophe that leads to giant thermal oscillations. But let me return to what George Contopoulos taught me at our many contacts since 1961. To set the scene I had written my thesis in 1960 which contained a new derivation of what potentials had local first integrals of the motion besides the energy and the angular momentum about the axis. The main part of the work was the derivation of these different classes of potential while other parts of the thesis contained the time dependent evolution of accretion disks (they got that name only later) and a first attempt to apply Jeans’s (1928) gravitational instability to make a theory of the spiral structure of galaxies. The beliefs of those times are well illustrated by the first edition of Landau & Lifshitz’s book on classical mechanics; either a dynamical system was separable and integrable or it was ergodic (by which was meant that almost all orbits visited all volumes of the phase space accessible under the energy constraint). Having classified the special forms of potential that had local integrals I expected that most other potentials would show ergodic behaviour. 
From the inequality of the $`z`$ and $`R`$ dispersions of the stars in the Galaxy it was clear that there must be an integral other than $`E`$ and $`h`$ for the Milky Way so I had begun trying to fit Eddington (1915) (now called Stäckel) potentials to galaxies. It was quite shattering when at the 1961 IAU general assembly in Berkeley, George Contopoulos (1960) showed that orbits in most smooth potentials behaved as though there were third integrals. Suddenly the special interest of the special potentials fell away — they were not the only systems with 3rd integrals, merely those for which we knew the exact analytical form of those integrals. They seemed now to be mathematical curiosities rather than systems fundamental to the dynamics of real galaxies. Three years later George organised a very instructive IAU symposium (No. 25) at Thessaloniki on the Theory of Orbits in the Solar System and Stellar Systems. Here he brought into contact the celestial mechanics fraternity, with their long history of analytically calculating orbits in the solar system by perturbation theory, with us new boys who were attempting to understand the statistics of the orbits in the more complicated potentials of galaxies; George Contopoulos (1965, 1966) here taught us that many of the problems were common to both fields and showed how fertile it was to bring different communities who knew different things to the same conference – his wide interests have made him especially good at that throughout his life and this 1998 conference is no exception. For brevity, I shall skip contacts at Besançon on the N-body problem where George presented Poisson Bracket series for third integrals and we were introduced to Lie series. In 1973 at Saas-Fee George gave lectures in which he introduced me to the wonders of modern dynamical theory – topological methods, incomplete chaos and the KAM theorem. 
It opened my eyes to so much that was new to me that I retreated to more directly astronomical topics, preferring the contact with astronomy to the uncharted seas revealed by this new alliance between computers and topology. Two years ago at Saltsjöbaden in a conference on the Dynamics of Barred Spirals, George again broke open a new field (Contopoulos, 1997). His invariant dynamical spectra (described also in his contribution here) taught us how to measure and classify chaos, even complete chaos! I have picked out a tiny fraction of George Contopoulos’s work (1975) and mentioned things I learned from our direct contacts. He will no doubt deduce that I am not a very attentive pupil but it would be mean not to mention a lovely paper on the light distributions of elliptical galaxies (Contopoulos, 1956) because it is a beautiful work to which I constantly have to refer my astronomical colleagues! The essence of this paper can be deduced by the following argument. Consider a spherical galaxy with any radial light profile. Now flatten its density distribution by linear contraction along any axis. This contraction can be resolved into one along and one perpendicular to the line of sight. The one along makes no difference while the one perpendicular flattens the circular distribution of observed light into one stratified on similar ellipses. If a further contraction is made along another axis we can apply the same argument again since ellipses contracted along any direction remain ellipses. So we arrive at George’s beautiful theorem that if the density distribution of an elliptical galaxy is stratified on similar concentric ellipsoids then the light seen will be stratified on similar concentric ellipses whatever the orientation of the galaxy to the line of sight. ## 2 Background to the Accretion Disc Theory of Quasars My own best work is “Galactic Nuclei as Collapsed Old Quasars” written in 1969. 
Then the discovery of quasars by Schmidt, using Hazard’s accurate position for one of Ryle’s radio sources, was still recent, and quasars themselves were enigmatic objects, the more so because even the brightest, 3C273 and 3C48, did not seem to be associated with clusters of galaxies. No-one then knew that Michell in his wonderfully percipient paper of 1784 had predicted both giant black holes and how they would be discovered! Even the name black hole only came into general use in 1970! In my 1969 paper I refer to “Schwarzschild throats”. Laplace’s translation of Michell’s work into French (without attribution!) was not common reading among astronomers either. Among the modern works on quasars as accretion discs, priority goes to Salpeter’s fine 1964 letter to the Astrophysical Journal. Turning against the then common view that quasars were not associated with clusters of galaxies, he worked out the consequences of a large black hole moving through a galaxy and accreting according to the Hoyle & Lyttleton formula. He derived the power emitted per unit accretion rate by considering the binding energy of the last stable circular orbit and deduced a number of consequences of such black holes accreting as they wandered through the interstellar gas of a galactic disc. Five years elapsed before I wrote my paper. Originally unaware of Salpeter’s (1964) note I luckily learned of it before the proofs came and so was able to add a sentence and a reference to his work. My aim was to show that the very small nuclei already known in the centres of galaxies were likely to be stars gathered around the giant-black-hole remnants of quasars. At the time, 1969, we already knew that the Optical Violently Variable or OVV quasars could change by a magnitude from one night to the next. 
Geoffrey Burbidge (1958) had been insistent that the giant radio sources needed $`10^{61}`$ ergs in fast electrons and magnetic field, while Ryle (1968) had emphasised that quasars would not be distinguished from such sources by radio measurements. Now $`10^{61}`$ ergs weigh $`\frac{1}{2}\times 10^7`$M<sub>⊙</sub>. If one entertained the idea that these ergs came from nuclear energy then the 1% mass conversion efficiency of nuclear burning means that $`10^9`$M<sub>⊙</sub> are needed. However putting $`10^9`$M<sub>⊙</sub> within the light-variation-time length-scale of 10 light hours gives a gravitational binding energy of $`10^{62}`$ ergs – on such a hypothesis $`10^{62}`$ ergs of gravitational energy would have been lost, all in order to burn $`10^9`$M<sub>⊙</sub> of hydrogen into Helium and thereby get the mere $`10^{61}`$ ergs needed. This shows that in assuming nuclear power we nevertheless conclude that most of the energy comes from gravity. So the nuclear idea is not sensible and we should assume a preponderant gravity power and a somewhat smaller mass $`10^8`$M<sub>⊙</sub>. If conversion of mass into radiation is not 100% efficient quasars must leave behind massive remnants of $`>10^7`$–$`10^8`$M<sub>⊙</sub> and because they have radiated their binding energy they have insufficient energy to re-expand. Since the masses are far beyond the Chandrasekhar limit there are no final resting places other than giant black holes. Turning to the numbers of quasars derived by Sandage and estimating possible lifetimes, I deduced $$\genfrac{}{}{0pt}{}{\mathrm{Number}\mathrm{of}\mathrm{clusters}}{\mathrm{of}\mathrm{galaxies}}<\genfrac{}{}{0pt}{}{\mathrm{Number}\mathrm{of}\mathrm{dead}}{\mathrm{quasars}}<\genfrac{}{}{0pt}{}{\mathrm{Number}\mathrm{of}}{\mathrm{galaxies}}$$ Thus the nearest dead quasar must be nearer than M87 and there may be as many dead quasars as there are massive galaxies. How could we hide dead quasars of $`10^8`$M<sub>⊙</sub> when they still gravitate? 
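The bookkeeping in this energy argument can be verified directly with standard cgs constants (the rounded values below are the only assumptions):

```python
G = 6.674e-8        # cgs gravitational constant
c = 2.998e10        # speed of light, cm/s
Msun = 1.989e33     # solar mass, g

E_needed = 1e61                       # erg in fast electrons and field
mass_equiv = E_needed / c**2 / Msun
print(mass_equiv)                     # ~ 0.5e7 solar masses

M = 1e9 * Msun                        # mass implied by ~1% nuclear efficiency
R = 10 * 3600 * c                     # 10 light hours, in cm
E_grav = G * M**2 / R                 # gravitational binding energy scale
print(E_grav)                         # ~ 10^62 erg, dwarfing the 10^61 erg required
```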
They would naturally be centres of attraction for stars so it is natural to find such a body at the centre of an exceptionally dense region. Galactic Nuclei then became the obvious candidates so I looked at the Galaxy, the Magellanic Clouds, M31, M32, M81, M82, NGC4151, M87, etc. and estimated possible black hole masses from the 1969 data on their nuclei, many of which were due to pioneering work by Merle Walker, see Figure 1. I also drew on the accretion discs of my thesis and, finding the gaseous viscosity too low, I estimated a magnetic viscosity. This was based on the shearing of the disc causing magnetic reconnection and continual flaring above the disc. Indeed I found that the protons got most of the energy as they more readily achieved “runaway”. Particle energies up to $`10^{13}`$eV were readily generated and hard emission would result when this hit the disc material. While the energy was primarily dissipated into such fast cosmic rays they would collide with the disc and heat it to temperatures $`T\propto r^{-3/4}`$ for $`r\gg 2GM_0/c^2`$. Adding together such black body rings of emission, I got the disc spectrum $`S_\nu \propto \nu ^{1/3}\mathrm{exp}(-h\nu /kT_{\mathrm{max}})`$ where $`T_{\mathrm{max}}`$ the maximum temperature in Kelvin was $`6.6\times 10^4F_3^{1/4}M_7^{-1/3}`$; here $`F_3`$ is the mass flux in units of $`10^{-3}`$M<sub>⊙</sub>/yr with $`M_7`$ the black hole’s mass in units of $`10^7`$M<sub>⊙</sub>. I did not estimate how much hard emission would come from the initial collisions of the cosmic rays with the disk but a $`10^{13}`$ eV cosmic ray is certainly capable of emitting hard $`\gamma `$ rays at its first few collisions. Even today, 29 years later, I think this model deserves more attention as a serious rival to the currently popular advection models. 
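The quoted disc spectrum has the characteristic logarithmic slope of 1/3 well below the thermal cutoff, which a short numerical check confirms (the normalisation is arbitrary here; only the spectral shape matters):

```python
import numpy as np

h_over_k = 4.799e-11            # h/k in K*s (cgs)
Tmax = 6.6e4                    # K, the quoted maximum disc temperature

def S(nu):
    # disc spectrum shape: nu^(1/3) rising part with a thermal cutoff
    return nu ** (1.0 / 3.0) * np.exp(-h_over_k * nu / Tmax)

nu1, nu2 = 1e12, 1e13           # far below the cutoff k*Tmax/h ~ 1.4e15 Hz
slope = np.log(S(nu2) / S(nu1)) / np.log(nu2 / nu1)
print(slope)                    # ~ 1/3
```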
The following year Jim Bardeen wrote a particularly fine paper which showed how accretion would spin up a Schwarzschild hole and after a finite mass was accreted leave it growing as a near-limiting Kerr hole of significantly greater efficiency. I gave a paper on these models at the 1970 Vatican Symposium on the nuclei of galaxies and in 1971 re-estimated the luminosity functions of quasars and mini quasars by developing the C<sup>-</sup> method. That year Ekers was stimulated to look with higher radio resolution at the Galactic Centre (Ekers & Lynden-Bell, 1971) and I reviewed the then known data on it with Rees (Lynden-Bell & Rees, 1971). A year or two later attention turned to lower mass black holes with the discovery of many X-ray binaries by the UHURU satellite. Papers by Pringle & Rees (1972) and Shakura & Sunyaev (1973, 1976) applied such ideas on a smaller scale and with Pringle I applied them (1974) to star formation both with and without magnetic fields. In 1978 I introduced the thick Kerr and Schwarzschild vortices in the hope of getting a more natural collimation mechanism than that of Blandford & Rees (1974) but the very narrow jets are still inadequately understood. ## 3 General Exact Solution to an Extraordinary N-body Problem I now return to the N-body problem and the little known fact that in Principia Newton (1687) solved an N-body problem in which every body attracts every other one and he solved it for all initial conditions! His was the first of the class of extraordinary N-body problems which Ruth Lynden-Bell and I have been studying. Newton took the force between two bodies $`i`$ and $`j`$ to be $`F=km_im_j(𝐱_j-𝐱_i)`$. 
To get the total force on particle $`i`$ he summed over $`j`$ and since the $`j=i`$ term is zero we may sum over all $`j`$ to obtain $$𝐅_i=\underset{j}{\sum }𝐅_{ij}=km_iM(\overline{𝐱}-𝐱_i)$$ $`(6)`$ where $`\overline{𝐱}`$ is the position vector of the centre of mass which of course moves uniformly in a straight line and $`M`$ is the total mass of the system. Thus with this linear mass-weighted law, that Newton would never have ascribed to Hooke, the total force on the $`i^{\mathrm{th}}`$ body is directed to the centre of mass and proportional to the distance from it. Therefore Newton found that each body describes a centred ellipse about the centre of mass which itself moves uniformly. This completes Newton’s solution. In his case the potential energy is $$V=\frac{1}{2}k\underset{i<j}{\sum }m_im_j(𝐱_i-𝐱_j)^2=\frac{1}{2}kM\underset{i}{\sum }m_i(𝐱_i-\overline{𝐱})^2=\frac{1}{2}kM^2r^2.$$ $`(7)`$ Generalising some work on statistical mechanics by Ruth Lynden-Bell we were led to consider the dynamics of N-body systems with the more general potential energy $`V=V(r)`$ where $`r`$ is given above (cf. equation (2)). We define a mass weighted radius $`𝐫`$ in 3$`N`$ dimensions by $$𝐫=(\sqrt{\frac{m_1}{M}}\left(𝐱_1-\overline{𝐱}\right),\sqrt{\frac{m_2}{M}}\left(𝐱_2-\overline{𝐱}\right),\mathrm{\dots },\sqrt{\frac{m_N}{M}}\left(𝐱_N-\overline{𝐱}\right)),$$ $`(8)`$ so the first 3 of the 3$`N`$ coordinates tell us where particle 1 is, the next 3 where particle 2 is, etc. Notice that $`|𝐫|`$ is the $`r`$ we defined previously. Equations of motion of the particles in centre of mass coordinates then lead directly to the equation $$M\ddot{𝐫}=-V^{\prime }(r)\widehat{𝐫}$$ $`(9)`$ where $`\widehat{𝐫}=𝐫/r`$ is the unit radial vector in 3$`N`$ space. 
One readily sees that $$r_\alpha \ddot{r}_\beta -r_\beta \ddot{r}_\alpha =0,$$ $`(10)`$ so $$r_\alpha \dot{r}_\beta -r_\beta \dot{r}_\alpha =L_{\alpha \beta }=-L_{\beta \alpha }=\mathrm{const}.$$ $`(11)`$ Furthermore $$L^2=\frac{1}{2}L_{\alpha \beta }L_{\alpha \beta }=\frac{1}{2}\left(r_\alpha \dot{r}_\beta -r_\beta \dot{r}_\alpha \right)\left(r_\alpha \dot{r}_\beta -r_\beta \dot{r}_\alpha \right)=\left[r^2(\dot{𝐫})^2-(𝐫\cdot \dot{𝐫})^2\right]$$ $`(12)`$ where $`L^2`$ is the constant defined by the first equality. The energy in centre of mass coordinates is given therefore by $$\frac{1}{2}M\dot{𝐫}^2+V(r)=E=\frac{1}{2}M(\dot{r}^2+L^2r^{-2})+V(r).$$ $`(13)`$ This determines $`r(t)`$ as a periodic function if $`E<0`$ so there is no violent relaxation in these systems and they vibrate eternally. Differentiating (13) we find $$M(\ddot{r}-L^2r^{-3})=-V^{\prime }(r).$$ $`(14)`$ This is the same equation of motion as that for the central distance to an object in planar motion with angular momentum $`L`$ about a centre of force with potential $`V(r)`$. It is natural to imagine such a planar orbit and to invent an angle $`\varphi `$ such that $`\varphi =0`$ at some pericentre and $$r^2\dot{\varphi }=L,$$ $`(15)`$ we may then imagine an orbit in two dimensional polar coordinates $`r,\varphi `$ and following Newton we shall cling to the geometry by eliminating the time in favour of $`\varphi `$. Now $$\ddot{𝐫}=\frac{d^2}{dt^2}(r\widehat{𝐫})=\frac{d}{dt}\left(\dot{r}\widehat{𝐫}+\frac{L}{r}\frac{d\widehat{𝐫}}{d\varphi }\right)=\ddot{r}\widehat{𝐫}+\frac{L^2}{r^3}\frac{d^2\widehat{𝐫}}{d\varphi ^2}$$ $`(16)`$ where two terms in $`\dot{r}Lr^{-2}d\widehat{𝐫}/d\varphi `$ cancel at the last step. 
Inserting this result into our equation of motion (9) and using (14), we deduce the wonderfully simple equation $$d^2\widehat{𝐫}/d\varphi ^2+\widehat{𝐫}=0,$$ $`(17)`$ whose solution is $$\widehat{𝐫}=𝐀\mathrm{cos}\varphi +𝐁\mathrm{sin}\varphi $$ $`(18)`$ where $`𝐀`$ and $`𝐁`$ are constant 3$`N`$-vectors which obey $`A^2=B^2=1`$ and $`𝐀\cdot 𝐁=0`$ in order that $`\widehat{𝐫}`$ should be a unit vector for all $`\varphi `$. Three further constraints on $`𝐀`$ and $`𝐁`$ follow from the fixed centre of mass. They are detailed in our paper but need not concern us here. We now have the general solution, the centre of mass moves uniformly in a line and the particles pursue orbits about it of the form $$𝐫=r(\varphi )(𝐀\mathrm{cos}\varphi +𝐁\mathrm{sin}\varphi ),$$ $`(19)`$ where $`r(\varphi )`$ is the form of the two dimensional orbit governed by equations (13) and (15). These can be integrated explicitly for the Isochrone potential $`V=-k/(b+s)`$, $`s^2=r^2+b^2`$ and for the Kepler and harmonic oscillator potentials. For the Kepler case $`r(\varphi )=\ell /(1+e\mathrm{cos}\varphi )`$ so the solution is of the pleasing form $$𝐫=\ell (1+e\mathrm{cos}\varphi )^{-1}(𝐀\mathrm{cos}\varphi +𝐁\mathrm{sin}\varphi ).$$ If we concentrate on the particle $`i`$, we find its orbit lies in the plane perpendicular to $`𝐀_i\times 𝐁_i`$ where $`i`$ denotes the three components corresponding to particle $`i`$. Taking $`x,y`$ coordinates in that plane and eliminating $`\varphi `$ we find that the orbit is quadratic. If $`e`$ were zero it would be a central ellipse, while if $`|𝐀_i|`$ and $`|𝐁_i|`$ are equal and orthogonal it gives a Keplerian eccentric ellipse. In the general bound case the ellipse has neither its centre nor its focus at the centre of mass $`r=0`$. These systems obey the equilibrium Virial theorem in the form $`2𝒯-rV^{\prime }=0`$, so for the hyper-keplerian case $`V\propto r^{-1}`$ it takes the more familiar form $`2𝒯+V=0`$. 
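The conservation of $`L_{\alpha \beta }`$, on which the whole construction rests, can be checked by direct integration of equation (9); the sketch below uses the hyper-Kepler potential $`V=-k/r`$ in a 9-dimensional ($`N=3`$) configuration space, with arbitrary units and initial conditions:

```python
import numpy as np

M_tot, kgrav, dim = 1.0, 1.0, 9        # N = 3 bodies -> 9 components
rng = np.random.default_rng(2)
r = rng.normal(size=dim)
rdot = 0.1 * rng.normal(size=dim)

def accel(r):
    # M d^2r/dt^2 = -V'(r) rhat  with V = -k/r
    rad = np.linalg.norm(r)
    return -(kgrav / M_tot) * r / rad ** 3

def L2(r, rdot):
    # L^2 = r^2 (rdot)^2 - (r . rdot)^2, cf. equation (12)
    return (r @ r) * (rdot @ rdot) - (r @ rdot) ** 2

L2_0 = L2(r, rdot)
dt = 1e-4
for _ in range(100_000):               # leapfrog (kick-drift-kick)
    rdot += 0.5 * dt * accel(r)
    r += dt * rdot
    rdot += 0.5 * dt * accel(r)

print(abs(L2(r, rdot) - L2_0))         # conserved up to rounding error
```

Since the kicks are radial and the drifts are along the velocity, the discrete map conserves $`L_{\alpha \beta }`$ exactly, mirroring the analytic result.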
One may work out the microcanonical statistical mechanics and find that $`E=-\frac{3}{2}(N-2)kT`$ so that the heat capacity $`C=-\frac{3}{2}(N-2)k`$ which is clearly negative as for other gravitating systems (Lynden-Bell & Wood, 1968) and black holes. If $`V`$ takes the form $$V=\{\begin{array}{cc}+\mathrm{\infty }\hfill & r<b\hfill \\ -kM^2/r\hfill & b<r<R\hfill \\ +\mathrm{\infty }\hfill & r>R\hfill \end{array}$$ corresponding to a gravitating system which cannot get too small or too big then a Canonical ensemble is possible and the negative specific heat region of the microcanonical ensemble is replaced by a giant first order phase transition as in our earlier model (Lynden-Bell & Lynden-Bell, 1977). ## 3b Generalisation We may extend these extraordinary N-body problems by taking $`V`$ to be of the more general form $$V=V_0(r)+r^{-2}V_2(\widehat{𝐫})$$ the only restriction on the second term being that it scales under expansion as $`r^{-2}`$. Those familiar with separable systems in 3 dimensions will know that for such potentials $`(\frac{1}{2}m𝐡^2+V_2)`$ is constant along an orbit where for that case $`V_2=V_2(\theta ,\varphi )`$ and $`𝐡=𝐫\times 𝐯`$. The generalisation to 3$`N`$ dimensions is the first integral $`\frac{1}{2}ML^2+V_2(\widehat{𝐫})=\frac{1}{2}M\mathcal{L}^2`$ say (note that due to the $`V_2`$ term, $`\mathcal{L}^2`$ does not have to be positive). The energy equation now reads $$E=\frac{1}{2}M\dot{𝐫}^2+V_0+r^{-2}V_2=\frac{1}{2}M(\dot{r}^2+L^2r^{-2})+V_0+r^{-2}V_2=\frac{1}{2}M(\dot{r}^2+\mathcal{L}^2r^{-2})+V_0$$ so the $`r`$ motion pulsates for ever as before. These systems show no violent relaxation in their breathing mode which pulsates (or evolves, if $`E>0`$) independently of the complication of the $`\widehat{𝐫}`$ motion. Since $`V_2(\widehat{𝐫})`$ is still free to choose, that motion can be as complicated as we like to make it. 
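The first integral $`\frac{1}{2}ML^2+V_2(\widehat{𝐫})`$ can likewise be verified numerically; the quadratic form chosen for $`V_2`$ below, the units and the initial conditions are all purely illustrative:

```python
import numpy as np

M, dim = 1.0, 6                       # N = 2 bodies -> 6 components
a = np.array([1.0, 0.5, -0.3, 0.2, 0.0, 0.7])   # fixed direction (arbitrary)

def V2(rhat):
    return (rhat @ a) ** 2            # sample choice; any V2(rhat) would do

def grad_total(r):                    # gradient of V0 + V2(rhat)/rad^2
    rad2 = r @ r
    s = r @ a
    grad_V0 = r / rad2 ** 1.5                      # from V0 = -1/rad
    grad_W = 2 * s * a / rad2 ** 2 - 4 * s ** 2 * r / rad2 ** 3
    return grad_V0 + grad_W

def first_integral(r, rd):
    L2 = (r @ r) * (rd @ rd) - (r @ rd) ** 2
    return 0.5 * M * L2 + V2(r / np.linalg.norm(r))

rng = np.random.default_rng(5)
r = 2.0 * rng.normal(size=dim)
rd = 0.2 * rng.normal(size=dim)
I0 = first_integral(r, rd)
dt = 1e-4
for _ in range(100_000):              # leapfrog in the 6-d configuration space
    rd -= 0.5 * dt * grad_total(r) / M
    r += dt * rd
    rd -= 0.5 * dt * grad_total(r) / M

print(abs(first_integral(r, rd) - I0))   # stays small along the motion
```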
Defining a new time $`\tau `$ by $`d\tau =r^{-2}dt`$ the equations of motion for $`\widehat{𝐫}`$ as a function of $`\tau `$ are totally independent of the $`r`$ motion, having a reduced Lagrangian system of their own in $`\tau `$-time. An interesting case to consider is the statistical mechanics of a “hard cone” gas in which $`V_2`$ is large and repulsive only in very small regions where two particles are nearly in the same direction as seen from the mass centre. This corresponds to the small hard sphere gas so beloved of textbooks. Carrying out that statistical mechanics, which is totally independent of any $`r`$ motion that may be going on, we obtain a new system at equilibrium in its $`\widehat{𝐫}`$ coordinates but pulsating or evolving in $`r`$. We have shown (Lynden-Bell & Lynden-Bell, 1998) this equilibrium to be best described in terms of the peculiar velocity $`𝐯_i`$ relative to a “Hubble flow” $`H(𝐱_i-\overline{𝐱})`$ where $`H=\dot{r}/r`$ that is $$𝐯_i=\dot{𝐱}_i-\dot{\overline{𝐱}}-H(𝐱_i-\overline{𝐱})$$ $$f(𝐯_i,𝐱_i-\overline{𝐱})\propto \mathrm{exp}\left[-\stackrel{~}{\beta }r^2\frac{1}{2}m_iv_i^2-\frac{\stackrel{~}{\beta }r_i^2}{2r^2}\right].$$ Thus the distribution is Maxwell-Boltzmann relative to the mean Hubble flow with a temperature proportional to $`r^{-2}(t)`$ and the profile is gaussian with a dispersion proportional to $`r(t)`$. It is notable that the ‘equilibrium’ of the $`\widehat{𝐫}`$ coordinates is maintained throughout the pulsation just as the Planck distribution of cosmic black-body radiation in the Universe is maintained without interaction during the expansion of the Universe. Thus whether the relaxation to equilibrium of the angular coordinates is longer than or shorter than the pulsation time of $`r`$ is not relevant because ‘equilibrium’ once attained is maintained throughout the pulsation; it does not have to be recreated as each radius $`r`$ is attained.
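The decomposition into a mean Hubble flow plus peculiar velocities has the simple consequence, following directly from the definitions, that the kinetic energy about the centre of mass splits into $`\frac{1}{2}M\dot{r}^2`$ carried by the flow plus the peculiar kinetic energy. A sketch with random data (all values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
Npart = 6
m = rng.uniform(0.5, 2.0, Npart)
M = m.sum()
x = rng.normal(size=(Npart, 3))        # positions
xd = rng.normal(size=(Npart, 3))       # velocities

xbar = (m[:, None] * x).sum(0) / M
xbard = (m[:, None] * xd).sum(0) / M
u, ud = x - xbar, xd - xbard           # centre-of-mass frame

r = np.sqrt((m[:, None] * u ** 2).sum() / M)   # mass-weighted rms radius
rdot = (m[:, None] * u * ud).sum() / (M * r)   # from d(M r^2)/dt = 2 M r rdot
H = rdot / r                                   # the "Hubble" rate

v = ud - H * u                                 # peculiar velocities
ke_pec = 0.5 * (m[:, None] * v ** 2).sum()
ke_cm = 0.5 * (m[:, None] * ud ** 2).sum()
print(ke_pec, ke_cm - 0.5 * M * rdot ** 2)     # equal: the flow carries M*rdot^2/2
```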
# Evolution of spectral parameters during a pre-eclipse dip of Her X-1 Table 2 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html. ## 1 Introduction The study of transient dips in the X-ray lightcurves of X-ray binaries can help to provide insight into the accretion processes in these objects. The dips are thought to be caused by cold material with an appreciable column density of $`10^{23}\text{cm}^{-2}`$ and higher passing through the line of sight to the neutron star. Since the dipping activity occurs primarily around the upper conjunction of the neutron star (i.e., shortly before the X-ray eclipse), this material is most probably due to the “splash” at the location where the accretion stream hits the accretion disk. A recent review on X-ray dip energy spectra has been published by White et al. (1995). X-ray dips of the X-ray binary Her X-1 were first described by Giacconi et al. (1973). Two different kinds of dips are seen: pre-eclipse dips, which occur at orbital phases $`\mathrm{\Phi }_{\mathrm{orb}}=0.7`$–$`1.0`$, and anomalous dips, which are observed at $`\mathrm{\Phi }_{\mathrm{orb}}\approx 0.2`$. During the “Main High” phase of the 35 d cycle of Her X-1, the pre-eclipse dips seem to “march” towards earlier orbital phases (Crosa & Boynton (1980); Giacconi et al. (1973)). Interpretations of the marching behavior have been given in terms of a periodic mass transfer driven by the changing radiation pressure at the inner Lagrangian point (Crosa & Boynton (1980)), and by the splash of the accretion stream onto a warped accretion disk (Schandl (1996)). Note that the observational basis for the “marching behavior” has recently been challenged by Leahy (1997). Previous observational studies dedicated to pre-eclipse dips of Her X-1 have been presented by Ushimaru et al. (1989), Choi et al. (1994), Leahy et al. (1994), and Reynolds & Parmar (1995). 
These authors found that the dip spectra can be modeled by a temporally variable absorption of the non-dip spectrum. In addition to this absorption component, a further weak unabsorbed component is present during the dips, as first seen in Tenma data (Ushimaru et al. (1989)) and verified by later observations. This component has been interpreted as being due to Compton scattering of the primary neutron star radiation by an extended corona into our line of sight. Although the unabsorbed component always contributes to the observed X-ray spectrum, it can only be identified during dips and during the “low state” of Her X-1 (Mihara et al. (1991)). Due to the comparably small effective areas of the earlier instruments, a study of the detailed temporal evolution of the spectral parameters during a dip has not been possible in these observations. Therefore, earlier investigators were forced to combine data obtained during (non-consecutive) time intervals with similar measured count-rates to obtain spectra with a signal-to-noise ratio sufficient for spectral analysis. This approach is not without problems, however, since a strong variation of $`N_\mathrm{H}`$ during the time intervals used for accumulating the spectra can lead to spurious spectral features like an apparent decrease in the flux incident onto the absorbing material (Parmar et al. (1986)). To be able to resolve the structures occurring on short timescales, instruments with a larger effective area and moderate energy resolution are needed. In this paper, therefore, we present results from the analysis of an RXTE observation of Her X-1, focusing on the spectral analysis and the behavior of the lightcurve during a pre-eclipse dip. Results from a previous analysis of these data, focusing on the orbit determination, have been presented by Stelzer et al. (1997). In Sect. 2 we describe our RXTE observation and the data reduction. Sect. 
3 presents our results on the temporal evolution of the column density of the absorbing material throughout the dip, using two different data analysis methods. In Sect. 4 we discuss the temporal behavior of the source in terms of a simple model for individual structures in the lightcurve. We summarize our results in Sect. 5. ## 2 Observations and data analysis Our RXTE observations of Her X-1 were performed on 1996 July 26. We primarily use data obtained with the PCA, the low energy instrument onboard RXTE. The PCA consists of five nearly identical Xe proportional counters with a total effective area of about $`6500\text{cm}^2`$ (Jahoda et al. (1997)). The data extraction has been performed using the RXTE standard data analysis software, ftools 4.0. To avoid contamination of the spectra due to the Earth’s X-ray bright limb, only data measured at source elevations more than $`10\mathrm{°}`$ above the spacecraft horizon were used in the present analysis. Background subtraction was performed with version 1.5 of the background estimator program, using a model where the background is estimated from the rate of Very Large Events in the detector. The diffuse X-ray background and a model for the activation of radioactivity within the detector is added. See Jahoda (1996) for a description of the PCA. For the spectral analysis, version 2.2.1 of the PCA response matrix was used (Jahoda, 1997, priv. comm.). The spectral modeling of data from an observation of the Crab pulsar made available to us by the RXTE PCA and HEXTE teams with this matrix suggests that the matrix is well understood on the 1% level (see Dove et al. (1998) for a discussion of these data). Using this version of the PCA response matrix, the overall description of the Crab continuum is good, with the remaining deviations mainly being below 3 keV and around the Xe K edge at 35 keV. 
To avoid these uncertainties we used data from 3 to 18 keV only, which is sufficient for an analysis of the dips because of the power law nature of the Her X-1 X-ray spectrum and due to the $`E^{-3}`$ dependence of the photoionization cross section (see discussion in Sect. 3.2). To obtain meaningful $`\chi ^2`$ values and to take into account the remaining uncertainties of the matrix, we added a 1% systematic error to all PCA channels. The spectral analysis was performed with XSPEC, version 10.00p (Arnaud (1996)). ## 3 Time evolution of spectral parameters ### 3.1 Dip lightcurves The full lightcurve of the July 26 pre-eclipse dip is shown in Fig. 1. Gaps in the data stream are due to passages through the South Atlantic Anomaly and due to the source being below the spacecraft horizon. The dip ingress, which occurred at orbital phase $`\mathrm{\Phi }_{\mathrm{orb}}=0.75`$, is characterized by a rapid decrease in intensity by a factor of 3 within 80 s. The RXTE observation corresponds to a 35 d phase of $`\mathrm{\Psi }_{35}\approx 0.13`$, that is, the observation took place 4 to 5 days after the Turn-On of the Main High State. From the behavior of the lightcurve in Fig. 1 we estimate that the dip egress takes place close to the end of the observation, when the count rate again reaches the pre-dip level. Under this assumption the duration of the dip was 6.5 hours. In order to describe the time evolution of spectral parameters we use two complementary methods: First, we divide the whole dip lightcurve into segments of 16 s duration and perform spectral fits to each of these segments (Sect. 3.2). Second, we use color-color diagrams to model the time evolution of the column density (Sect. 3.3). ### 3.2 Spectral modeling As was mentioned in Sect. 1, the common explanation for the dips is that of photoabsorption and scattering by foreground material. 
The photon spectrum ($`\mathrm{ph}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{keV}^{-1}`$) resulting from this process can be well described by a partial covering model of the form $`N_{\mathrm{ph}}=E^{-\alpha }\left[I_\mathrm{A}\mathrm{exp}\left(-\left(\sigma _\mathrm{T}kN_\mathrm{H}+\sigma _{\mathrm{bf}}(E)N_\mathrm{H}\right)\right)+I_\mathrm{U}\right]+\text{GAUSS}`$ (1) where $`\sigma _{\mathrm{bf}}(E)`$ is the photoabsorption cross section per hydrogen atom (Morrison & McCammon (1983)), $`\sigma _\mathrm{T}`$ is the Thomson cross section, and $`k=N_\mathrm{e}/N_\mathrm{H}`$ is the number of electrons per hydrogen atom ($`N_\mathrm{e}`$ is the electron column density). In Eq. (1), the continuum emission is modeled as the sum of two power laws, one of which is photo-absorbed by cold matter of column density $`N_\mathrm{H}`$, as well as Thomson scattered out of the line of sight by electrons in the cold material. The second power law is not modified by the absorber, reflecting the assumption that this additional (scattering) component comes from a geometrically much larger, extended region and thus is not affected by the photoabsorption. To this continuum, an iron emission line (described by a Gaussian) is added, which remains unabsorbed to simplify the spectral fitting process. Two-component models similar to that of Eq. (1) have previously been shown to yield a good description of the dip spectra, while other simpler models were found to result in unphysical spectral parameters. As we mentioned in Sect. 2, to avoid response matrix uncertainties and problems with the exponential cutoff and the cyclotron resonance feature we include only data from 3 to 18 keV in our analysis. This approach results in a simpler spectral model than that used by Choi et al. (1994) and Leahy et al. (1994), who included the whole Ginga LAC energy band from 2 to 37 keV in their analysis. 
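To make the role of each term in Eq. (1) concrete, the model can be sketched numerically. In the Python illustration below, the photoabsorption cross section is approximated by a pure $`E^{-3}`$ power law with an illustrative normalization rather than the tabulated Morrison & McCammon (1983) values, and the Gaussian iron line is omitted; it is a rough sketch of the functional form, not the fit model itself:

```python
import numpy as np

def partial_covering(E, I_A, I_U, N_H, alpha=1.06, k=1.21):
    """Sketch of the partial covering continuum of Eq. (1).

    E    : energy grid [keV]
    I_A  : normalization of the absorbed power law
    I_U  : normalization of the unabsorbed (scattered) component
    N_H  : hydrogen column density [cm^-2]

    sigma_bf is a rough E^-3 stand-in for the tabulated cross
    sections; the Gaussian iron line of Eq. (1) is omitted.
    """
    sigma_T = 6.652e-25             # Thomson cross section [cm^2]
    sigma_bf = 2e-22 * E ** -3.0    # illustrative normalization only
    tau = (sigma_T * k + sigma_bf) * N_H
    return E ** -alpha * (I_A * np.exp(-tau) + I_U)
```

At $`N_\mathrm{H}=10^{23}\,\text{cm}^{-2}`$ this reproduces the qualitative behavior exploited in the analysis: the flux near 3 keV is suppressed far more strongly than near 15 keV, while Thomson scattering dims all energies by the same energy-independent factor.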
Since $`\sigma _{\mathrm{bf}}\propto E^{-3}`$, except for the highest values of $`N_\mathrm{H}`$, photoabsorption has virtually no influence on the spectrum above $`\sim `$10 keV, such that the inclusion of data measured up to 18 keV is sufficient for the determination of the continuum strength. As we show in Fig. 2, neither the exponential cut-off nor the cyclotron resonance feature at $`\sim `$35 keV need to be taken into account when restricting the upper energy threshold to 18 keV, as significant deviations between the data and the model are observed only for energies above 18 keV. We, therefore, conclude that between 3 and 18 keV, additional continuum components do not affect the spectrum. When the power law index is held constant at its pre-dip value, our spectral fits of the dip data yield acceptable $`\chi ^2`$ values ($`\chi _{\mathrm{red}}^2\lesssim 1.5`$). Introducing additional freedom by leaving the photon index as a free parameter in the fit does not significantly improve the results. Therefore, we do not have to include high energy data to determine the continuum parameters. In different attempts to fit the 16 s time resolved spectra without explicitly allowing for Thomson scattering of the absorbed component, we found that the absorbed intensity, which was then a free parameter, was strongly anticorrelated with the column density. The relation between $`I_\mathrm{A}`$ and $`N_\mathrm{H}`$ from a fit of a spectral model that does not take account of Thomson scattering is shown in Fig. 3 a. The observed anticorrelation reflects the exponential $`N_\mathrm{H}`$-dependence of the Thomson scattering factor and led us to the conclusion that absorption and Thomson scattering cannot be separated. We, therefore, use the model of Eq. 
(1) and hold the continuum parameters fixed to their measured pre-dip values: a single power law of photon index 1.06 plus an emission line feature from ionized iron at 6.7 keV with width of $`\sigma =0.46`$ keV ($`\chi _{\mathrm{red}}^2=1.1`$ at $`3`$–$`18`$ keV for 30 degrees of freedom). The normalization of the absorbed power law, $`I_\mathrm{A}`$, was also fixed to its pre-dip value, $`I_\mathrm{A}=0.20\,\text{ph}\,\text{cm}^{-2}\,\text{s}^{-1}\,\text{keV}^{-1}`$. Finally, the ratio $`k=N_\mathrm{e}/N_\mathrm{H}`$ was set to $`1.21`$, appropriate for material of solar abundances. We emphasize that fixing the parameters to their normal state values assumes that the intrinsic spectral shape of the source does not change during the dip and that variations of the observed spectrum are due to the varying column density only. This is justified by the apparent constancy of the lightcurve outside the dip. In particular, if $`I_\mathrm{A}`$ is free in the fit, we observe an increase of this parameter for very high $`N_\mathrm{H}`$, which is not clearly systematic (see Fig. 3 b). Such a correlation between $`I_\mathrm{A}`$ and $`N_\mathrm{H}`$ seems to indicate an additional dependence of $`I_\mathrm{A}`$ on the column density (in addition to the Thomson scattering already taken into consideration in the spectral model). Rather than being due to real variations of the absorbed intensity, we consider this relation to be produced artificially by the fitting process: variations of $`I_\mathrm{A}`$ during the dip might come about as compensation for slight mis-estimates of the column density. Any remaining correlation in Fig. 3 b might contain a possible contribution from slight variations in the absorbed continuum. Due to the limited energy resolution of the detector, however, $`N_\mathrm{H}`$ and $`\alpha `$ are strongly correlated. Therefore, a slight real variation is not convincingly separable from the artificial one. 
To summarize, the remaining free parameters of the spectral model are the iron line normalization, $`N_{\mathrm{Fe}}`$, the normalization of the unabsorbed component, $`I_\mathrm{U}`$, and the column density, $`N_\mathrm{H}`$. We divide the dip observation into 16 s intervals and obtain 941 spectra, covering the energy range from 3 to 18 keV. After subtracting the background, which has been modeled on the same 16 s basis, the individual spectra were fitted with the model of Eq. (1). Typical $`\chi _{\mathrm{red}}^2`$ values obtained from these fits are between $`0.5`$ and $`1.5`$ for 37 degrees of freedom, indicating that our simple spectral model is sufficient to describe the data. The temporal behavior of the column density mirrors that of the lightcurve (Figs. 4a and b). This supports the assumption that the underlying cause for both is absorbing material whose presence in the line of sight blocks off the X-ray source and thus leads to a modification of the spectral shape due to energy dependent absorption and energy independent scattering as well as a corresponding reduction in the 3 to 18 keV flux. The highest value measured for the column density is about $`3\times 10^{24}\,\text{cm}^{-2}`$, comparable to that found in previous measurements (Reynolds & Parmar (1995)). The variation of the measured count rate with $`N_\mathrm{H}`$ is shown in Fig. 5 together with the count rates predicted by our partial covering model. The transition from the dip to the normal state is manifested in the break of the slope around 1500 cps. The figure indicates that due to the 3 keV energy threshold of the PCA, the instrument is sensitive only to values of $`N_\mathrm{H}`$ of about $`5\times 10^{22}\,\text{cm}^{-2}`$ and above. 
This also explains why we measure $`N_\mathrm{H}\sim 10^{22}\,\text{cm}^{-2}`$ for the out-of-dip data just before dip-ingress, a value which is rather high compared to the value for absorption by the interstellar medium along the line of sight ($`1.7\times 10^{19}\,\text{cm}^{-2}`$, Mavromatakis (1993)). The normalization of the unabsorbed component, $`I_\mathrm{U}`$, is found to stay almost constant during the whole dip, indicating that the whole variability of Her X-1 during the dip is due to absorption and scattering in the intervening material. The absolute value of $`I_\mathrm{U}`$ is quite small (about 2.5% of $`I_\mathrm{A}`$). The normalization of the iron line, $`N_{\mathrm{Fe}}`$, also shows some decline during phases of high column density. Note, however, that in our spectral model the line feature is not absorbed and scattered. The remaining variation can in principle be explained by partial covering of the line emitting region. ### 3.3 Color-color diagrams As an alternative to spectral fitting, the development of the column density can be visualized by color-color diagrams which show the behavior of broad band X-ray count rates (cf. Leahy (1995) for earlier results from Ginga data). We define four energy bands covering approximately the same range as our spectral analysis (Tab. 1). We then define modified X-ray colors by $`(B_0-B_1)/(B_0+B_1)`$, $`(B_1-B_2)/(B_1+B_2)`$, and $`(B_2-B_3)/(B_2+B_3)`$ where $`B_i`$ is the count rate in band $`i`$. For any given spectral model, a theoretical color can be obtained by folding the spectral model through the detector response matrix. If the only variable parameter in the model is $`N_\mathrm{H}`$, the resulting colors are found to trace characteristic tracks in the color-color diagram. Comparing these tracks with the measured data it is possible to infer the temporal behavior of the column density (see below). As an example, Fig. 
6a displays typical theoretical tracks for two possible spectral models for Her X-1. The dotted track represents a model without an unabsorbed component (called the one-component model henceforth), while the solid line is the track computed for a partial covering model. The form of this partial covering model is identical to Eq. (1) with the exception that the iron line is also absorbed and scattered. We used photoabsorption cross sections from Verner & Yakovlev (1995) and Verner et al. (1996) in the computation of the diagrams. The difference between these cross sections and those from Morrison & McCammon (1983) used in Sect. 3.2 is negligible, though. The typical shape of the tracks is due to the $`E^{-3}`$ proportionality of the absorption cross section $`\sigma _{\mathrm{bf}}`$: for low values of $`N_\mathrm{H}`$ only the lower bands are influenced by the absorbing material, while for high values of $`N_\mathrm{H}`$ all bands are influenced. In both cases the model track starts at the low $`N_\mathrm{H}`$-values in the upper right corner of the diagram marked by the square which describes the situation before the dip. Moving along the track the column density increases. For low values of $`N_\mathrm{H}`$, the tracks of both models are similar since the influence of absorption is negligible. For larger $`N_\mathrm{H}`$ the lower energy bands are increasingly affected by absorption. At a critical value of $`N_\mathrm{H}`$ the unabsorbed component begins to dominate the low energy bands in the partial covering model. Since the unabsorbed component has, by definition, the same shape as the non-dip spectrum, the track turns towards the low-$`N_\mathrm{H}`$ color. In the one-component model, the absence of an unabsorbed component leads to a further decrease in flux in the low energy bands that is only stopped by response matrix and detector background effects. 
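The modified colors defined in Sect. 3.3 are straightforward to compute from the four band count rates; the following sketch (with hypothetical count rates, since the band boundaries of Tab. 1 are not reproduced here) illustrates the convention:

```python
def modified_colors(B0, B1, B2, B3):
    """Modified X-ray colors (B_i - B_{i+1}) / (B_i + B_{i+1})
    from the count rates in the four energy bands of Tab. 1."""
    return ((B0 - B1) / (B0 + B1),
            (B1 - B2) / (B1 + B2),
            (B2 - B3) / (B2 + B3))
```

Increasing absorption suppresses the softer band of each pair first, driving the corresponding color downward, which is why the tracks begin in the upper right corner of the diagrams and move away from it as $`N_\mathrm{H}`$ grows.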
For each of the energy bands defined in Table 1 we generate a background subtracted lightcurve of 32 s resolution and obtain the color-color diagrams shown in Fig. 6b–d. The data line up along a track which is curved similar to the theoretical tracks of Fig. 6a. The accumulation of data points in the upper right corner of the diagrams of Fig. 6b–d consists of the out-of-dip data, where the colors remain constant and the column density is at its lowest value. As noted above, the early turn of the observed tracks in the color-color diagram suggests the presence of an unabsorbed spectral component in the data. To quantify this claim, we compare theoretical tracks from the partial covering model to the data. This is done by varying the relative contributions of $`I_\mathrm{A}`$ and $`I_\mathrm{U}`$ to the total spectrum such that the total normalization of the incident spectrum, $`I_\mathrm{A}+I_\mathrm{U}`$, is kept at its pre-dip value. Except for the normalizations, all other spectral parameters are fixed at their pre-dip values (cf. Sect. 3.2). We define the best fit model to be the model in which the root mean square distance between the track and the data is minimal (cf. Fig. 6). Not surprisingly, the ratio between $`I_\mathrm{A}`$ and $`I_\mathrm{U}`$ found using this method is similar to the average ratio found from spectral fitting, about 3%. Using this best fit model, $`N_\mathrm{H}`$ as a function of time is found by projecting the measured colors onto the track. The projection provides slightly different values of $`N_\mathrm{H}`$ for each color-color diagram examined. Major discrepancies are due to projection onto a wrong part of the model curve in the region where the curve overlaps with itself. To even out these discrepancies we calculated the median of $`N_\mathrm{H}`$ from all three color-color diagrams used in our analysis. 
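The projection of the measured colors onto the best-fit model track can be sketched as a nearest-point search along a discretely sampled track; the helper below is a simplified stand-in for the actual procedure (the function name and the discrete-track representation are illustrative):

```python
import numpy as np

def project_onto_track(track_colors, track_NH, data_colors):
    """Assign each measured color pair the N_H of the nearest point
    on the model track (a simplified nearest-neighbour version of
    the projection described in Sect. 3.3).

    track_colors : (M, 2) model colors sampled along the track
    track_NH     : (M,)   column densities at those track points
    data_colors  : (N, 2) measured colors
    """
    d2 = ((data_colors[:, None, :] - track_colors[None, :, :]) ** 2).sum(axis=2)
    return track_NH[np.argmin(d2, axis=1)]
```

Taking the median of the three resulting $`N_\mathrm{H}`$ series, e.g. with `np.median(..., axis=0)`, then evens out projections onto the wrong branch where the track overlaps with itself.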
The time development of $`N_\mathrm{H}`$ found from the color-color diagrams is in good agreement with the $`N_\mathrm{H}`$ resulting from the spectral fits (Fig. 7). ## 4 Light curves The irregular variations observed in the lightcurve during the dips are generally thought to be due to the cloudy structure of an absorber extending above the disk surface. The location of the material, i.e., its distance from the neutron star, however, is unknown and depends on model assumptions. According to Crosa & Boynton (1980), the obscuring matter is the temporarily thickened disk rim, or, in the modification of this model by Bochkarev (1989) and Bochkarev & Karitskaya (1989), consists of “blobs” in an extended corona above the disk rim. On the other hand, in the coronal wind model of Schandl et al. (1997) the dips are caused by a spray of matter at some inner disk radius where the accretion stream impacts on the disk. Since all models assume that the absorbing matter exists in the form of clumps of material, it is interesting to ask whether these blobs can be seen in the data. The symmetry of the structures in the first half of the lightcurve shown in Fig. 4a suggests that this is indeed the case. We therefore tried to model this part of the dip lightcurve by fitting Gaussians to each of the visibly identified structures. The best fit to the data is displayed in Fig. 8. Information about the parameters of each Gaussian is summarized in Table 2 (available in electronic form only). Note that the structures appear to be quasi-periodic on a time-scale of about $`10^{-3}`$ d ($`\approx `$90 s), which could provide a hint about their nature. A similar 144 s periodicity has previously been reported by Leahy et al. (1992). A study of a larger sample of dips is necessary, however, before any claim on the existence of periodic structures in the dip data can be settled. 
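The decomposition of the lightcurve into Gaussian structures can be reproduced schematically with a standard least-squares fit; the sketch below (Python with SciPy, run on synthetic data since Table 2 is available in electronic form only) shows the model form used:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(t, *p):
    """Sum of Gaussians; p = (A1, t1, w1, A2, t2, w2, ...)."""
    y = np.zeros_like(t)
    for A, t0, w in zip(p[0::3], p[1::3], p[2::3]):
        y += A * np.exp(-0.5 * ((t - t0) / w) ** 2)
    return y

# Synthetic lightcurve with two structures ~90 s apart
# (amplitudes, centers, and widths are illustrative, not Table 2 values)
t = np.arange(0.0, 300.0, 16.0)
y = gaussians(t, 800.0, 100.0, 20.0, 600.0, 190.0, 25.0)
popt, _ = curve_fit(gaussians, t, y, p0=[700, 90, 15, 700, 200, 15])
```

The fitted centers directly give the separations between structures, from which a quasi-period can be read off.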
Assuming that each structure in the lightcurve represents a single cloud of matter which is absorbing X-rays while crossing the line of sight, the width of the Gaussian represents a direct measure of the crossing time for such a cloud. The horizontal extent of the cloud could be derived if the radial distance of these clouds from the neutron star and their velocity were known. To give a rough estimate for the cloud size, we assume that the matter moves on Keplerian orbits close to the outer disk rim (as is favored by the model of Crosa & Boynton (1980)), at a distance of $`2\times 10^{11}`$ cm from the neutron star (Cheng et al. (1995)). The Keplerian velocity at this radius is approximately $`3\times 10^{7}`$ cm/s. Assuming that the clouds are spherical, i.e., assuming that their angular size estimated from the FWHM of the fitted Gaussian corresponds to their radial size, we derive typical cloud diameters on the order of $`10^{8}`$–$`10^{10}`$ cm. Making use of the observed column densities we find a proton density of about $`6\times 10^{14}\,\text{cm}^{-3}`$. On the other hand, the apparent periodicity mentioned above might also indicate that the blobs are very close to the neutron star. Assuming Keplerian motion, the periodicity could indicate orbital radii as small as $`10^9`$ cm (the inner radius of the accretion disk is at $`10^8`$ cm; Horn (1992)) and their densities would be accordingly higher. ## 5 Discussion and Conclusions In this paper we have presented the temporal evolution of spectral parameters of Her X-1 during a pre-eclipse dip. Due to the large effective area of the PCA, the temporal resolution of our data is higher than that of earlier data. 
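The order-of-magnitude cloud estimates of Sect. 4 can be checked directly; in the sketch below, the 1.4 solar-mass neutron star, the 30 s crossing time, and the representative dip column of $`6\times 10^{23}\,\text{cm}^{-2}`$ are our illustrative assumptions, not values taken from the text:

```python
import math

# Order-of-magnitude check of the cloud-size estimate of Sect. 4
G = 6.674e-8                  # gravitational constant [cgs]
M = 1.4 * 1.989e33            # assumed neutron star mass [g]
r = 2e11                      # outer disk radius [cm] (Cheng et al. 1995)
v_kep = math.sqrt(G * M / r)  # Keplerian velocity, ~3e7 cm/s
fwhm = 30.0                   # illustrative cloud crossing time [s]
d = v_kep * fwhm              # cloud diameter, ~1e9 cm
n = 6e23 / d                  # proton density for an assumed N_H ~ 6e23 cm^-2
```

Varying the crossing time over the fitted FWHM range reproduces diameters of order $`10^{8}`$–$`10^{10}`$ cm and densities near $`10^{14}`$–$`10^{15}`$ cm<sup>-3</sup>, consistent with the numbers quoted in Sect. 4.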
Using two different methods we were able to show that a partial covering model in which the column density $`N_\mathrm{H}`$ and the normalization of the unabsorbed continuum $`I_\mathrm{U}`$ are the only time variable spectral components is sufficient to explain the observed spectral and temporal variability. This result is in qualitative agreement with the previous analyses for pre-eclipse dips presented by Ushimaru et al. (1989), Choi et al. (1994), Leahy et al. (1994), and Reynolds & Parmar (1995). Note that models without an unabsorbed component did not result in satisfactory fits, which is also consistent with the low state observations presented by Mihara et al. (1991), where the unabsorbed component was clearly required. As proposed by Choi et al. (1994) from the analysis of the pulsed fraction of the lightcurve, the unabsorbed component is probably due to scattering of radiation in an extended hot electron corona into the line of sight. These authors also show that the interpretation of Ushimaru et al. (1989), who attributed the unabsorbed component to a leaky cold absorber, does not hold. The anticorrelation between $`I_\mathrm{U}`$ and $`N_\mathrm{H}`$ found by Leahy et al. (1994) and Leahy (1997) in Ginga observations has been interpreted by these authors as evidence that the obscuring material also partially obscures the extended corona. Thus, the geometric covering fraction of the obscuring material is assumed to be quite high during episodes of large $`N_\mathrm{H}`$. Our measured values of $`I_\mathrm{U}`$ exhibit some variability (Fig. 4c), but there is no systematic correlation between $`I_\mathrm{U}`$ and $`N_\mathrm{H}`$, nor has such a correlation been seen by Choi et al. (1994). A possible interpretation for this discrepancy is that Leahy et al. (1994) and Leahy (1997) did not include Thomson scattering in their fits. Indeed, when setting $`k=0`$ in Eq. 
(1) and fitting the RXTE data with both normalizations as free parameters, $`I_\mathrm{U}`$ is much more variable and $`I_\mathrm{A}`$ appears to be correlated with $`N_\mathrm{H}`$. In addition, it is generally difficult to distinguish between the absorbed and the unabsorbed component for low values of $`N_\mathrm{H}`$, such that $`I_\mathrm{U}`$ and $`I_\mathrm{A}`$ are easily confused by the fitting routine. We also tried fitting the data with a spectral model in which $`I_\mathrm{U}`$ was held fixed at $`0.005\,\text{ph}\,\text{cm}^{-2}\,\text{s}^{-1}\,\text{keV}^{-1}`$, the average value of the fits of Sect. 3.2. The resulting $`\chi _{\mathrm{red}}^2`$ values from this fit were comparable to those of the fits presented in Sect. 3.2 since the variations in $`I_\mathrm{U}`$ in the latter fit are small enough to be compensated by slight changes in $`N_\mathrm{H}`$ in the former. Furthermore, due to the lower temporal resolution of the previous observations, small variations of $`N_\mathrm{H}`$ could not be resolved in these data. It has been pointed out by Parmar et al. (1986) that this effect might result in a large uncertainty in the determination of $`I_\mathrm{U}`$. As is shown by our fits to structures in the lightcurve (Fig. 8), we are able to resolve and identify individual structures with a temporal resolution of about one minute. Thus we are confident that the investigation presented here is unaffected by these problems. In our analysis of Sect. 3.2 and 3.3 we assumed that $`N_\mathrm{e}/N_\mathrm{H}=1.21`$, i.e., the value appropriate for material of solar composition. Previous investigations of dipping sources, however, hinted at non-solar abundances in most of these systems (Reynolds & Parmar (1995), White et al. (1995)). A direct measurement of the abundance ought to be possible by fitting the RXTE data with a partial covering model in which $`N_\mathrm{e}`$ and $`N_\mathrm{H}`$ are both free parameters. We find that the average $`N_\mathrm{e}/N_\mathrm{H}=1.5`$. 
Thus, our data could be interpreted as pointing towards a metal overabundance. In contrast to this result, a metal underabundance is favored by Reynolds & Parmar (1995)<sup>1</sup><sup>1</sup>1Note that Reynolds & Parmar (1995) assume that for material of cosmic abundance $`N_\mathrm{e}/N_\mathrm{H}=1`$, and call the reciprocal of the measured ratio the “abundance”. This reciprocal should not be confused with the common usage of abundance in astronomy since it ignores that the scattering cross section of an element with nuclear charge number $`Z`$ is $`Z\sigma _\mathrm{T}`$.. This discrepancy might be due to the higher temporal resolution of our data, where small variations of $`N_\mathrm{e}`$ can be traced. Note, however, that our method of determining $`N_\mathrm{e}/N_\mathrm{H}`$ depends crucially on the assumption that the spectrum incident on the absorbing cloud is constant over time and also depends strongly on the value adopted for $`I_\mathrm{A}`$. Although it is very probable that the source is constant (Sect. 3.2), slight variations of $`I_\mathrm{A}`$ cannot be ruled out. Therefore, we decided to use the solar value of 1.21 in our analysis and postpone the detailed study of the abundances until a larger sample of dips has been observed with RXTE. To conclude, photoabsorption and electron scattering of photons out of the line of sight in cold material appear to be solely sufficient for explaining the temporal variability of the observed flux during this pre-eclipse dip of Her X-1. Further observations of a larger sample of pre-eclipse dips are necessary to verify this result. ###### Acknowledgements. We acknowledge W. Heindl, I. Kreykenbohm, and R. Meier for useful discussions on the data analysis. We thank G. Morfill and R. Neuhäuser for granting the first author the time to finish this work. This work has been financed by DARA grant 50 OR 92054.
# The WOMBAT Challenge: A “Hounds and Hares” Exercise for Cosmology ## 1 Introduction Cosmic Microwave Background (CMB) anisotropy observations during the next decade will yield data of unprecedented quality and quantity. Determination of cosmological parameters to the precision that has been forecast (Jungman et al. 1996, Bond, Efstathiou, & Tegmark 1997, Zaldarriaga, Spergel, & Seljak 1997, Eisenstein, Hu, & Tegmark 1998) will require significant advances in analysis techniques to handle the large volume of data, subtract foreground contamination, and account for instrumental systematics. To guarantee accuracy we must ensure that these analysis techniques do not introduce unknown biases into the estimation of cosmological parameters. The Wavelength-Oriented Microwave Background Analysis Team (WOMBAT, http://astro.berkeley.edu/wombat) will produce state-of-the-art simulations of microwave foregrounds, using all available information about the frequency dependence, power spectrum, and spatial distribution of each component. Using the phase information (detailed spatial morphology as opposed to just the power spectrum) of each foreground component offers the possibility of improving upon foreground subtraction techniques that only use the predicted angular power spectrum of the foregrounds to account for their spatial distribution. Most foreground separation techniques rely on assuming that the frequency spectra of the components are constant across the sky, but we will provide information on the spatial variation of each component’s spectral index whenever possible. The most obvious advantage of this approach is that it reflects our actual sky. With the high precision expected from future CMB maps we must test our foreground subtraction techniques on as realistic a sky map as possible. A second advantage is the construction of a common, comprehensive database for all known CMB foregrounds. 
The database will include known uncertainties in the estimation of the foregrounds. Such a database should prove valuable for all groups involved in measuring the CMB and extracting cosmological information from it. Section 2 describes our plans to generate foreground models which include phase information, and Section 3 gives a brief survey of existing subtraction techniques and their limitations. These microwave foreground models provide the perfect starting point for the WOMBAT Challenge, a “hounds and hares” exercise in which we will generate skymaps for various cosmological models and offer them to the cosmology community for analysis without revealing the input parameters. This challenge is similar to the “Mystery CMB Sky Map challenge” posted by our sister collaboration, COMBAT (Cosmic Microwave Background Analysis Tools, http://cfpa.berkeley.edu/group/cmbanalysis), except that our emphasis is on dealing with realistic foregrounds rather than the ability to analyze large data sets. Section 4 describes our plans to conduct this foreground removal challenge. The WOMBAT Challenge promises to shed light on several open questions in CMB data analysis: What are the best foreground subtraction techniques? Will they allow instruments such as MAP and Planck to achieve the precision in $`C_{\mathrm{\ell }}`$ reconstruction which has been advertised, or will the error bars increase significantly due to uncertainties in foreground models? Perhaps most importantly, do some CMB analysis methods produce biased estimates of the radiation power spectrum and/or cosmological parameters? ## 2 Microwave Foregrounds Phase information is now available for Galactic dust and synchrotron and for the brightest radio galaxies, infrared galaxies, and X-ray clusters on the sky. 
By incorporating known information on the spatial distribution of the foreground components and spatial variation in their spectral index, we will greatly improve upon previous highly-idealized foreground models. There are four major expected sources of Galactic foreground emission at microwave frequencies: thermal emission from dust, electric or magnetic dipole emission from spinning dust grains (Draine & Lazarian 1998a,1998b), free-free emission from ionized hydrogen, and synchrotron radiation from electrons accelerated by the Galactic magnetic field. Good spatial templates exist for thermal dust emission (Schlegel, Finkbeiner, & Davis 1998) and synchrotron emission (Haslam et al. 1982), although the $`0.5\mathrm{°}`$ resolution of the Haslam maps means that smaller-scale structure must be simulated. Extrapolation to microwave frequencies is possible using maps which account for spatial variation of the spectra (Finkbeiner, Schlegel, & Davis 1998; Platania et al. 1998). The COMBAT collaboration has recently posted a software package called FORECAST (Foreground and CMB Anisotropy Scan Simulation Tools, http://cfpa.berkeley.edu/group/cmbanalysis/forecast) that displays the expected dust foreground for a given frequency, location, and observing strategy. Our best-fit foreground maps will be added to this user-friendly site in the near future, and this should be a useful resource for planning and simulating CMB anisotropy observations. A spatial template for free-free emission based on observations of H$`\alpha `$ (Smoot 1998, Marcelin et al. 1998) can be created in the near future by combining WHAM observations (Haffner, Reynolds, & Tufte 1998) with the southern celestial hemisphere H-Alpha Sky Survey (McCullough 1998). While it is known that there is an anomalous component of Galactic emission at 15-40 GHz (Kogut et al. 1996, Leitch et al. 1997, de Oliveira-Costa et al. 
1997) which is partially correlated with dust morphology, it is not yet clear whether this is spinning dust grain emission or free-free emission somehow uncorrelated with H$`\alpha `$ observations. In fact, spinning dust grain emission has yet to be observed, so the uncertainties in its amplitude are tremendous. A template for the “anomalous” emission component will undoubtedly have large uncertainties. Three nearly separate categories of galaxies will also generate microwave foreground emission; they are radio-bright galaxies, low-redshift infrared-bright galaxies, and high-redshift infrared-bright galaxies. The level of anisotropy produced by these foregrounds is predicted by Toffolatti et al. (1998) using models of galaxy evolution to produce source counts, and updated models calibrated to recent SCUBA observations are also available (Blain, Ivison, Smail, & Kneib 1998, Scott & White 1998). For the high-redshift galaxies detected by SCUBA, no spatial template is available, so a simulation of these galaxies with realistic clustering will be necessary. Scott & White (1998) and Toffolatti et al. (1998) have used very different estimates of clustering to produce divergent results for its impact, so this issue will need to be looked at more carefully. Upper and lower limits on the anisotropy generated by high-redshift galaxies and as-yet-undiscovered types of point sources are given by Gawiser, Jaffe, & Silk (1998) using recent observations over a wide range of microwave frequencies. Their upper limit of $`\mathrm{\Delta }T/T=10^{-5}`$ for a 10′ beam at 100 GHz is a sobering result; while the real sky would need to conspire against us to produce this much anisotropy it cannot be ruled out at present, and we will need to look for it with direct observations and design analysis techniques that might manage to subtract it. The 5319 brightest low-redshift IR galaxies detected at 60$`\mu `$m are contained in the IRAS 1.2 Jy catalog (Fisher et al. 
1995) and can be extrapolated to 100 GHz with a systematic uncertainty of a factor of a few (Gawiser & Smoot 1997). This method needs to be improved to account for the spectral difference between Ultraluminous Infrared Galaxies and normal spirals. Sokasian, Gawiser, & Smoot (1998) have compiled a catalog of 2200 bright radio sources, 758 of which have been observed at 90 GHz and 309 of which have been observed at frequencies above 200 GHz. They have developed a method to extrapolate radio source spectra which has a factor of two systematic uncertainty at 90 GHz. Radio source variability represents a major challenge for most foreground subtraction techniques, and the information present in this catalog allows one to estimate the mean and variance of the source fluxes as a function of frequency. The secondary CMB anisotropies that occur when the photons of the Cosmic Microwave Background radiation are scattered after the original last-scattering surface can be viewed as a type of foreground contamination. The shape of the blackbody spectrum can be altered through inverse Compton scattering by the thermal Sunyaev-Zel’dovich (SZ) effect (Sunyaev & Zel’dovich 1972). The effective temperature of the blackbody can be shifted locally by a Doppler shift from the peculiar velocity of the scattering medium (the kinetic SZ and Ostriker-Vishniac effects) as well as by passage through nonlinear structure (the Rees-Sciama effect). Simulations have been made of the impact of the SZ effects in large-scale structure (Persi et al. 1995), clusters (Aghanim et al. 1997), groups (Bond & Myers 1996), and reionized patches (Aghanim et al. 1996, Knox, Scoccimarro, & Dodelson 1998, Gruzinov & Hu 1998, Peebles & Juskiewicz 1998). The brightest 200 X-ray clusters are known from the XBACS catalog and can be used to incorporate the locations of the strongest SZ sources (Refregier, Spergel, & Herbig 1998). 
The SZ effect itself is independent of redshift, so it can yield information on clusters at much higher redshift than does X-ray emission. However, nearly all clusters are unresolved at $`10^{\prime}`$ resolution, so higher-redshift clusters occupy less of the beam and therefore their SZ effect is in fact dimmer. In the 4.5$`^{\prime}`$ channels of Planck this will no longer be true, and SZ detection and subtraction becomes more challenging and potentially more fruitful as a probe of cluster abundance at high redshift.

## 3 Reducing Foreground Contamination

Various methods have been proposed for reducing foreground contamination. For point sources, it is possible to mask pixels which represent positive $`5\sigma `$ fluctuations, since such fluctuations are highly unlikely for Gaussian-distributed CMB anisotropy and can be assumed to be caused by point sources. This pixel masking technique can be improved somewhat by filtering (Tegmark & de Oliveira-Costa 1998; see Tenorio et al. 1998 for a different technique using wavelets). Sokasian, Gawiser, & Smoot (1998) demonstrate that using prior information from good source catalogs may allow the masking of pixels which contain sources brighter than the $`1\sigma `$ level of CMB fluctuations and instrument noise. For the 90 GHz MAP channel, this could reduce the residual radio point source contamination by a factor of two, which might significantly reduce systematic errors in cosmological parameter estimation. Galactic foregrounds with well-understood frequency spectra can be projected out of multi-frequency observations on a pixel-by-pixel basis (Dodelson & Kosowsky 1995; Brandt et al. 1994). Prior information in the form of spatial templates can be included in this projection, but uncertainty in the spectral index is a cause for concern. 
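Both the $`5\sigma `$ masking cut and the pixel-by-pixel spectral projection are simple enough to sketch. First, the masking cut on a toy one-dimensional map (pixel counts and source amplitudes are invented for illustration; a real analysis would work on HEALPIX maps and estimate $`\sigma `$ robustly with the sources excluded):

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 10000

# Toy map: unit-variance Gaussian CMB + noise fluctuations, plus a few
# bright point sources injected well above the 5-sigma level.
skymap = rng.standard_normal(npix)
source_pix = np.array([17, 4242, 9001])
skymap[source_pix] += 25.0

# In practice sigma would be estimated robustly (sources excluded);
# the plain standard deviation suffices for this demonstration.
sigma = skymap.std()
mask = skymap > 5.0 * sigma               # positive 5-sigma cut
cleaned = np.where(mask, np.nan, skymap)  # masked pixels dropped from analysis
```

With $`10^4`$ Gaussian pixels, a genuine CMB + noise fluctuation above $`5\sigma `$ is expected well under once per map, so any pixel flagged by the cut is almost certainly a source.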
Perhaps surprisingly, the methods for foreground subtraction which have the greatest level of mathematical sophistication and have been tested most thoroughly ignore the known locations on the sky of some foreground components. The multi-frequency Wiener filtering approach uses assumptions about the spatial power spectra and frequency spectra of the foreground components to perform a separation in spherical harmonic or Fourier space (Tegmark & Efstathiou 1996; Bouchet et al. 1995, 1997, 1998; Knox 1998). However, it does not include any phase information at present. The Fourier-space Maximum Entropy Method (Hobson et al. 1998a) can add phase information on diffuse Galactic foregrounds in small patches of sky but treats extragalactic point sources as an additional source of instrument noise, with good results for simulated Planck data (Hobson et al. 1998b) and worrisome systematic difficulties for simulated MAP data (Jones, Hobson, & Lasenby 1998). Maximum Entropy has not yet been adapted to handle full-sky datasets. Both methods have difficulty if pixels are masked due to strong point source contamination or the spectral indices of the foreground components are not well known (Tegmark 1998). Since residual foreground contamination can increase uncertainties and bias parameter estimation, it is important to reduce it as much as possible. Current analysis methods usually rely on cross-correlating the CMB maps with foreground templates at other frequencies (see de Oliveira-Costa et al. 1998; Jaffe, Finkbeiner, & Bond 1998). It is clearly superior to have region-by-region (or pixel-by-pixel) information on how to extrapolate these templates to the observed frequencies; otherwise this cross-correlation only identifies the emission-weighted average spectral index of the foreground from the template frequency to the observed frequency. 
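The pixel-by-pixel projection of a foreground with a known frequency spectrum can be illustrated with a noiseless two-channel toy model (the frequencies and spectral index are illustrative; real data add instrument noise, which this projection amplifies, and the spectral index is never known exactly):

```python
import numpy as np

rng = np.random.default_rng(1)
npix = 1000

# The CMB is achromatic in thermodynamic temperature units, while the
# foreground scales with frequency as a power law with known index beta.
beta = -3.0                      # synchrotron-like spectral index (illustrative)
nu = np.array([30.0, 90.0])      # two observing frequencies in GHz
g = (nu / nu[0]) ** beta         # foreground frequency scaling per channel

cmb = rng.standard_normal(npix)
fg = 0.5 * rng.standard_normal(npix)
maps = [cmb + g[0] * fg, cmb + g[1] * fg]

# Solve the 2x2 linear system per pixel to project out the foreground:
# m0 = c + g0*f, m1 = c + g1*f  =>  c = (g1*m0 - g0*m1) / (g1 - g0).
cmb_hat = (g[1] * maps[0] - g[0] * maps[1]) / (g[1] - g[0])
```

In the noiseless case the recovery is exact; with noise, the same linear combination propagates and amplifies the channel noise, which is why spectral-index uncertainty and noise weighting dominate the design of real projection schemes.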
Because each foreground has a non-Gaussian spatial distribution, the covariance matrix of its $`a_{\ell m}`$ coefficients is not diagonal, although this has often been assumed. When a known foreground template is subtracted from a CMB map, it is inevitable that the correlation coefficient used for this subtraction will be slightly different from the true value. This expected under- or over-subtraction of each foreground leads to off-diagonal structure in the “noise” covariance matrix of the remaining CMB map, as opposed to the contributions of expected CMB anisotropies and uncorrelated instrument noise, both of which give diagonal contributions to the covariance matrix of the $`a_{\ell m}`$. Thus incomplete foreground subtraction, like $`1/f`$ noise, can introduce non-diagonal correlations into the covariance matrix of the $`a_{\ell m}`$. These correlations complicate the likelihood analysis necessary for parameter estimation (Knox 1998). Having phase information on the brightness and spectral index of foreground emission should reduce inaccuracies in foreground subtraction, and this motivates us to produce the best estimates we can of these quantities along with estimates of their uncertainties.

## 4 The WOMBAT Challenge

Our purpose in conducting a “hounds and hares” exercise is to simulate the process of analyzing microwave skymaps as accurately as possible. In real-world observations the underlying cosmological parameters and the exact amplitudes and spectral indices of the foregrounds are unknown, so Nature is the hare and cosmologists are the hounds. We will make our knowledge of the various foreground components available to the public, and each best-fit foreground map will be accompanied by a map of its uncertainties and a discussion of possible systematic errors. Each simulation of that foreground will be different from the best-fit map, based upon a realization of those uncertainties. 
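The consequence of a slightly mis-estimated subtraction coefficient can be demonstrated with a toy model (all amplitudes here are invented for illustration): the residual map inherits a term proportional to the template, so it remains correlated with the template rather than being statistically independent of it.

```python
import numpy as np

rng = np.random.default_rng(2)
npix = 5000

template = rng.standard_normal(npix)     # foreground template map
alpha_true = 0.8                         # true coupling to the observed band
observed = rng.standard_normal(npix) + alpha_true * template  # CMB + foreground

alpha_fit = 0.7                          # slightly mis-estimated coefficient
residual = observed - alpha_fit * template

# The leftover foreground, (alpha_true - alpha_fit) * template, leaves the
# residual map correlated with the template instead of being white.
corr = np.corrcoef(residual, template)[0, 1]
```

Because the template itself is non-Gaussian on the real sky, this leftover term is what feeds the off-diagonal structure in the covariance matrix of the residual map's harmonic coefficients.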
Very little is known about the spatial locations of high-redshift infrared-bright galaxies and high-redshift SZ-bright clusters, so WOMBAT will provide simulations of these components. The rough characteristics of these high-redshift foreground sources, but not their locations, will be revealed. This simulates the real observing process in a way not achieved by previous foreground simulations. We will release our simulated maps for the community to subtract the foregrounds and extract cosmological information. The WOMBAT Challenge is scheduled to begin on March 1, 1999 and will offer participating groups four months to analyze the skymaps and report their results.<sup>8</sup><sup>8</sup>8see http://astro.berkeley.edu/wombat for timeline, details for participants, and updates We will produce simulations analogous to high-resolution balloon observations (e.g. MAXIMA and BOOMERANG; see Hanany et al. 1998 and de Bernardis & Masi 1998) and to the MAP satellite<sup>9</sup><sup>9</sup>9http://map.gsfc.nasa.gov. This will indicate how close the community is to being able to handle datasets as large as that of MAP (10<sup>6</sup> pixels at 13$`^{\prime}`$ resolution for a full-sky map). Given current computing power, complex algorithms appear necessary for analyzing full-sky MAP datasets (Oh, Spergel, & Hinshaw 1998), although simpler approximations may be possible (e.g. Wandelt, Hivon, & Górski 1998). We plan to use the publicly available HEALPIX package of pixelization and analysis routines<sup>10</sup><sup>10</sup>10http://www.tac.dk/~healpix. We will provide a calibration map of CMB anisotropy with a disclosed angular power spectrum in January 1999 so that participants can test the download procedure and become familiar with HEALPIX. Groups who analyze the Challenge maps will be asked to provide us with a summary of their analysis techniques. 
They may choose to remain anonymous in our comparison of the results but are encouraged to publish their own conclusions based on their participation. One of the biggest challenges in real-world observations is being prepared for surprises, both instrumental and astrophysical (see Scott 1998 for an eloquent discussion). An exercise such as the WOMBAT Challenge is an excellent way to simulate these surprises, and we will include a few in our skymaps. The results of the WOMBAT Challenge will provide estimates of the effectiveness of current techniques of foreground subtraction, power spectrum analysis, and parameter estimation.

## 5 Conclusions

Undoubtedly the most important scientific contribution that WOMBAT will make is the production of realistic full-sky maps of all major microwave foreground components with estimated uncertainties. These maps are needed for foreground subtraction and estimation of residual foreground contamination in present and future CMB anisotropy observations. They will allow instrumental teams to conduct realistic simulations of the observing and data analysis process without needing to assume overly idealized models for the foregrounds. By combining various realizations of these foreground maps within the stated uncertainties with a simulation of the intrinsic CMB anisotropies, we will produce the best simulations so far of the microwave sky. Using these simulations in a “hounds and hares” exercise should test how well the various foreground subtraction and parameter estimation techniques work at present. Existing tests of analysis methods are open to question because they assume idealized foregrounds in analyzing similarly idealized simulations. Data analysis techniques will undoubtedly improve with time, and we hope to reduce the current uncertainty in their efficacy such that follow-up simulations by the instrumental teams themselves can generate confidence in the results of real observations. 
We can test the resilience of CMB analysis methods to surprises such as unexpected foreground amplitude or spectral behavior, correlated instrument noise, and CMB fluctuations from non-Gaussian or non-inflationary models. Cosmologists need to know if such surprises can lead to the misinterpretation of cosmological parameters. In the future, we envision producing time-ordered data, simulating interferometer observations, and adding polarization to our microwave sky simulations. Perhaps the greatest advance we offer is the ability to evaluate the importance of studying the detailed locations of foreground sources. If techniques which ignore this phase information are still successful on our realistic sky maps, that is a significant vote of confidence. Alternatively, it may turn out that techniques which use phase information are needed in order to reduce foreground contamination to a level which does not seriously bias the estimation of cosmological parameters. Combining various techniques may lead to improved foreground subtraction methods, and we hope that a wide variety of techniques will be tested by the participants in the WOMBAT Challenge.

## 6 Acknowledgments

We thank Rob Crittenden (IGLOO) and Kris Górski, Eric Hivon, and Ben Wandelt (HEALPIX) for making pixelization schemes available to the community. We appreciate helpful conversations with Nabila Aghanim, Giancarlo de Gasperis, Mark Krumholz, Alex Refregier, and Philip Stark and gratefully acknowledge the support of NASA LTSA grant #NAG5-6552.