| column   | stringlengths |
|----------|---------------|
| url      | 14–2.42k      |
| text     | 100–1.02M     |
| date     | 19–19         |
| metadata | 1.06k–1.1k    |
http://lemon.cs.elte.hu/egres/open/Rainbow_matchings_in_bipartite_graphs
# Rainbow matchings in bipartite graphs

Given k disjoint matchings in a bipartite graph, a rainbow matching is a matching that contains one edge from each of them. Is it true that any family of k disjoint matchings of size k+1 has a rainbow matching?

## Remarks

The conjecture was suggested by Aharoni and Berger as a generalization of the conjecture of Brualdi [1] and Stein [2] on Latin squares; see the survey [3]. Kotlar and Ziv [4] proved that any family of k disjoint matchings of size $\lfloor \frac{5}{3} k\rfloor$ has a rainbow matching. Clemens and Ehrenmüller [5] improved the bound to $\frac{3}{2}k+o(k)$, and recently Pokrovskiy [6] improved it to $k+o(k)$, which can be regarded as an asymptotic version of the Aharoni-Berger conjecture. Additional results can be found in [7].

The following matroidal generalization is also open:

Conjecture [3]. Let $M_1=(S,r_1)$ and $M_2=(S,r_2)$ be two matroids of rank $k+1$ on $S$, such that $S$ can be partitioned into $k$ common bases $B_1, \dots, B_k$. Then the family $B_1, \dots, B_k$ has a transversal that is independent in both matroids.

There are two related results; the first is a weaker version of the conjecture, while the second is a complementary result (its special case involving bipartite graphs is Drisko's Theorem [8]).

Theorem [9]. Let $M_1=(S,r_1)$ and $M_2=(S,r_2)$ be two matroids of rank $k$ on $S$, such that $S$ can be partitioned into $k$ common bases $B_1, \dots, B_k$. Then the family $B_1, \dots, B_k$ has a partial transversal of size $k-\sqrt{k}$ that is independent in both matroids.

Theorem [10]. Let $M_1=(S,r_1)$ and $M_2=(S,r_2)$ be two matroids of rank $k$ on $S$, such that $S$ can be partitioned into $2k-1$ common bases $B_1, \dots, B_{2k-1}$. Then the family $B_1, \dots, B_{2k-1}$ has a partial transversal of size $k$ that is a common base.

## References

1. R. A. Brualdi and H. J. Ryser, Combinatorial Matrix Theory, Cambridge University Press, Cambridge, UK, 1991.
2. S. K. Stein, Transversals of Latin squares and their generalizations, Pacific J. Math. 59 (1975), 567–575.
3. R. Aharoni, P. Charbit and D. Howard, On a generalization of the Ryser-Brualdi-Stein conjecture.
4. D. Kotlar and R. Ziv, Large matchings in bipartite graphs have a rainbow matching.
5. D. Clemens and J. Ehrenmüller, An improved bound on the sizes of matchings guaranteeing a rainbow matching.
6. A. Pokrovskiy, An approximate version of a conjecture of Aharoni and Berger.
7. J. Barát, A. Gyárfás and G. N. Sárközy, Rainbow matchings in bipartite multigraphs.
8. A. A. Drisko, Transversals in row-Latin rectangles.
9. R. Aharoni, D. Kotlar and R. Ziv, Rainbow sets in the intersection of two matroids.
10. D. Kotlar and R. Ziv, Rainbow sets in the intersection of two matroids: a generalization of results of Drisko and Chappell.
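As a computational companion to the problem statement (my addition, not from the source page; the encoding of matchings as lists of (left, right) pairs and the function name are illustrative), here is a small Python brute-force search for a rainbow matching among k given matchings:

```python
def find_rainbow_matching(matchings):
    """Search for a rainbow matching: one edge from each matching,
    pairwise vertex-disjoint.

    matchings: a list of k matchings; each matching is a list of
    (u, v) edges, with u a left vertex and v a right vertex.
    Returns k pairwise-disjoint edges, one per matching, or None.
    Exponential-time backtracking; intended only for small instances.
    """
    def extend(chosen, used_left, used_right, i):
        if i == len(matchings):
            return chosen
        for (u, v) in matchings[i]:
            if u not in used_left and v not in used_right:
                found = extend(chosen + [(u, v)],
                               used_left | {u}, used_right | {v}, i + 1)
                if found is not None:
                    return found
        return None
    return extend([], set(), set(), 0)


# Two disjoint matchings (k = 2) of size k + 1 = 3 in K_{3,3}:
M1 = [(0, 0), (1, 1), (2, 2)]
M2 = [(0, 1), (1, 2), (2, 0)]
print(find_rainbow_matching([M1, M2]))  # -> [(0, 0), (1, 2)]
```

To experiment with the conjecture itself, one would feed the function k matchings of size k+1 and check that the search never fails.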
2018-01-21 04:43:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.912229597568512, "perplexity": 823.2616333638541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890187.52/warc/CC-MAIN-20180121040927-20180121060927-00758.warc.gz"}
https://www.physicsforums.com/threads/slope-lines-to-y-e-bx.791315/
# Slope lines to y=e^bx

1. Jan 8, 2015

### WWGD

A friend asked me to help him with this; probably simple, but I seem to be missing something obvious: for what values of $b$ is the line $y=10x$ tangent to the curve $C(x)=e^{bx}$? It seems we need $C'(x)=be^{bx} =10$ for some pair $(b,x)$. But this is where it seems hairy: how do we isolate either $b$ or $x$? We can apply $\ln$ to both sides of $C'(x)=10$ to get $bx =\ln(10/b)=\ln 10-\ln b$. Then what?

Last edited: Jan 8, 2015

2. Jan 8, 2015

### Staff: Mentor

For the line to be tangent to the exponential curve, both have to have the same y-value for a given x, and the slope of the tangent to the exponential has to be 10 at that point. To find the point of tangency, you need to solve $10x = e^{bx}$, which can't be solved by ordinary means, although you can use numerical methods to get a close approximation.

3. Jan 8, 2015

### WWGD

Thanks. What kind of numerical methods, though? Maybe using the Lambert W function or something?

4. Jan 8, 2015

### DarthMatter

Let's look at $ax = e^{bx}$: when is $ax$ tangent to $e^{bx}$? Matching slopes gives $\frac{a}{b} = e^{bx}\Rightarrow x=\frac{\ln(\frac{a}{b})}{b}$. Such an x exists as long as $\frac{a}{b}>0$. However, we also need ${a}\cdot x = e^{bx}\Rightarrow {a}\cdot \frac{\ln(\frac{a}{b})}{b}=\frac{a}{b}$, or $\ln(\frac{a}{b}) = 1$.

5. Jan 8, 2015

### WWGD

Ah, nice, so $\frac {a}{b}=\mathrm{e} \implies a= \mathrm{e}\,b$; with $a=10$ this gives $b = 10/\mathrm{e} \approx 3.68$.
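A quick numerical check of this conclusion (a sketch I am adding, not part of the thread; plain Python): with $a = 10$ the condition $a = \mathrm{e}\,b$ gives $b = 10/\mathrm{e}$, and the point of tangency is $x = \ln(a/b)/b = 1/b$.

```python
import math

a = 10.0
b = a / math.e           # tangency condition a = e*b from the thread
x = math.log(a / b) / b  # equal-slope point; here ln(a/b) = 1, so x = 1/b

print(b)                    # 3.6787944...
print(a * x)                # line value at x:  e = 2.7182818...
print(math.exp(b * x))      # curve value at x: also e, so they touch
print(b * math.exp(b * x))  # curve slope at x: 10.0, matching the line
```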
2017-12-12 07:11:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6885607838630676, "perplexity": 539.5196375754508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515309.5/warc/CC-MAIN-20171212060515-20171212080515-00765.warc.gz"}
https://porespy.org/examples/filters/reference/local_thickness.html
# local_thickness

This replaces each voxel with the radius of the largest sphere that would overlap it. This is different from the distance transform, which gives the radius of the largest sphere that could be centered on it.

```python
import matplotlib.pyplot as plt
import numpy as np
import porespy as ps
ps.visualization.set_mpl_style()
```

The arguments and their defaults are:

```python
import inspect
inspect.signature(ps.filters.local_thickness)
```

```
<Signature (im, sizes=25, mode='hybrid', divs=1)>
```

## im

The image can be either 2D or 3D:

```python
im = ps.generators.blobs([200, 200])
lt = ps.filters.local_thickness(im=im)
fig, ax = plt.subplots(1, 1, figsize=[6, 6])
ax.imshow(lt/im, origin='lower', interpolation='none')
ax.axis(False);
```

## sizes

The number of bins to use when drawing spheres, or the actual bins to use. The default is 25, meaning that 25 logarithmically spaced sizes are used, from the maximum of the distance transform down to 1. If an array or list of actual sizes is provided, these are used directly, which can be useful for generating custom bins, for instance with linear or bimodal spacing (a sketch of this appears at the end of this page).

```python
lt = ps.filters.local_thickness(im=im, sizes=5)
fig, ax = plt.subplots(1, 1, figsize=[6, 6])
ax.imshow(lt/im, origin='lower', interpolation='none')
ax.axis(False);
```

## mode

This controls which method is used. The default is 'hybrid', which uses the threshold of a distance transform to perform an erosion, then an fft-based dilation to generate spheres. The other options are 'dt', which uses a distance transform for both steps, and 'mio', which uses morphological operations for both steps. The selected method affects the speed, although this depends on the computer being used. All results should be exactly the same, and this is ensured in a unit test:

```python
lt1 = ps.filters.local_thickness(im=im, mode='hybrid')
lt2 = ps.filters.local_thickness(im=im, mode='dt')
lt3 = ps.filters.local_thickness(im=im, mode='mio')
fig, ax = plt.subplots(1, 3, figsize=[15, 5])
ax[0].imshow(lt1/im, origin='lower', interpolation='none')
ax[0].axis(False)
ax[1].imshow(lt2/im, origin='lower', interpolation='none')
ax[1].axis(False)
ax[2].imshow(lt3/im, origin='lower', interpolation='none')
ax[2].axis(False);
```

```python
print("All elements in the hybrid and dt modes are equal:", np.all(lt1 == lt2))
print("All elements in the hybrid and mio modes are equal:", np.all(lt1 == lt3))
```

```
All elements in the hybrid and dt modes are equal: True
All elements in the hybrid and mio modes are equal: True
```

## divs

This indicates whether the image should be divided into chunks and processed in parallel. An integer indicates equal chunks in all directions; alternatively, a list can be given with the number of chunks to create in each direction. Dask is used behind the scenes to apply the filter to each chunk. The number of cores used is set in porespy.settings, with the default being all cores.

```python
lt1 = ps.filters.local_thickness(im=im, divs=2)
lt2 = ps.filters.local_thickness(im=im, divs=[2, 3])
fig, ax = plt.subplots(1, 2, figsize=[12, 6])
ax[0].imshow(lt1/im, origin='lower', interpolation='none')
ax[0].axis(False)
ax[1].imshow(lt2/im, origin='lower', interpolation='none')
ax[1].axis(False);
```

```python
print("Results are identical regardless of chunk size", np.all(lt1 == lt2))
```

```
Results are identical regardless of chunk size True
```
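To illustrate the custom-bins option for `sizes` mentioned above, here is a short sketch (my addition, not from the original page; the linear spacing and the use of scipy's Euclidean distance transform are assumptions that mirror the described default behaviour):

```python
import numpy as np
import porespy as ps
from scipy.ndimage import distance_transform_edt

im = ps.generators.blobs([200, 200])

# Linearly spaced bins from 1 up to the maximum of the distance transform,
# instead of the default logarithmic spacing.
dt_max = distance_transform_edt(im).max()
sizes = np.unique(np.linspace(1, dt_max, 10).astype(int))

lt = ps.filters.local_thickness(im=im, sizes=sizes)
print(np.unique(lt))  # the nonzero values should come from the supplied bins
```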
2022-05-17 23:33:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.281496524810791, "perplexity": 3412.8417417498604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520936.24/warc/CC-MAIN-20220517225809-20220518015809-00376.warc.gz"}
https://ppr.lse.ac.uk/articles/10.31389/lseppr.75/
Progressivity and Inequality

Abstract

The concept of justice in taxation has been associated with the concepts underlying income inequality and its measurement. I take a fresh look at this connection in the light of recent UK experience and of results in the field of inequality measurement. This is developed in the context of several alternative empirical indicators of economic well-being and alternative approaches to characterising inequality comparisons.

JEL codes: D31, D63

How to Cite: Cowell, F.A., 2022. Progressivity and Inequality. LSE Public Policy Review, 2(4), p.8. DOI: http://doi.org/10.31389/lseppr.75

Submitted on 01 Jun 2022; accepted on 01 Nov 2022; published on 24 Nov 2022.

1 Introduction

The principles guiding tax policy should usually include some conception of distributional justice, and the design of policy should apply these principles in a consistent fashion. Part of the literature on the economics of distributional justice is closely connected to the literature on economic inequality through different threads that focus on different dimensions of inequality, in other words, through different aspects of the justice-in-distribution issue. The main connection involves conventional inequality measurement, but there are also connections to the analysis of income mobility. However, while there might be wide acceptance of the weak proposition that tax progressivity is associated with justice, can tax justice be directly linked with inequality? If so, in what way are changes in inequality linked to changes in the apparent justice of the tax system? The issues involved are illustrated using a broad-brush picture of taxes and benefits in the UK during the first twenty years of this century. What appears to have happened to the progressivity of government intervention in personal incomes in the light of changes in the structure of taxation and benefits? What might have been expected to have happened to inequality?

The paper is organised as follows. In sections 2 and 3 we examine the standard pragmatic and analytical approaches to quantifying progressivity. Section 4 applies this analysis to the recent history of income distribution and redistribution in the UK. Section 5 discusses general issues raised by the analysis and concludes.

2 Theory and principles

There are parallels between approaches to inequality and approaches to tax design. In both fields there is concern for efficiency and equity at the heart of the analysis. In both fields there is a sub-literature that is based on intuition, one that is based on utilitarian welfare analysis and one that makes appeal to an axiomatic method. Some of the early literature on tax design and tax reform shows this: it starts from the premise that the justice of the tax system is appropriately assessed in terms of distributional outcomes and the progressivity of the tax schedules that lead to those outcomes.¹

2.1 Foundations

The connection of tax progressivity with inequality involves two principal elements. The first is the 'income' concept of inequality analysis, which corresponds to the tax base in the taxation literature. According to the Haig-Simons principle [1, 2], every individual should pay tax on total income from whatever source; it is the core of the ability-to-pay criterion [3].
But in practice this is a counsel of perfection: data limitations mean that the ideal income concept is elusive, and a set of pragmatic alternatives is adopted, taking several broad measures of income as alternative practical measures of individual well-being. This is illustrated in the UK example below.

The second element is how the taxation component is incorporated into the distributional analysis. The principal idea is that the tax system, extended to cover also the system of benefits (income support), is codified in laws and codes which determine the amount of taxes to be paid, or benefits received, by each citizen. The net result of this intervention by the state is a quantity given by $t = \tau(y)$, where t is the net tax payment required from a person with pre-intervention income y and τ is a simplified representation of the tax-benefit system as a function of a person's income. The system transforms one income distribution (that of pre-intervention income y) into another income distribution (that of post-intervention income x) according to the formula

$x = y - \tau(y).$

This y-to-x transformation encapsulates the social values and principles underlying the tax-benefit system, where benefits are treated as negative taxes; it suggests that the justice issue should be associated with the properties of τ. One of the most commonly recognised properties of a tax-benefit schedule is progressivity – whether better-off people bear more of the taxes – and there is more than one way of encapsulating this idea, as discussed below.

Illustration: the UK in the early 21st century

An example of the effect of individual interventions by government agencies on the income distribution appears in the summary data published by the UK's Office for National Statistics (ONS). The advantage of these data is that they provide easily accessible information that enables one to get a clear picture of the way key metrics of distribution have changed in the UK, for an interesting range of income definitions. The ONS provides series on the following five income concepts:

1. Original income: essentially market income plus private pensions
2. Gross income: the line above plus public cash benefits (including state pensions)
3. Disposable income: the line above minus direct taxes (including income tax, national insurance and council tax payments)
4. Post-tax income: the line above minus indirect taxes (including value-added tax, alcohol and tobacco duties)
5. Final income: the line above plus public non-cash benefits (including health and education)

A summary of the inequality outcomes for income concepts 1–5, along with consumption, over the early 21st century is shown in Figure 1. As we can see, the trend in inequality, measured by the Gini coefficient, is not dramatic either upwards or downwards, for any of the five income concepts or for consumption expenditure. The issues raised are examined in more detail in section 4 below.

Figure 1 Gini history for five income concepts and expenditure.

Definitions of progressivity

Should tax justice be identified with progressive taxation? Although it makes sense to connect the concepts of progressive taxation and redistributive taxation, a simple identification of the two concepts is not possible because of the multiple definitions of progressivity: 'a progressive tax system should be defined as one where the average rate of taxation increases with income before tax.
The degree of progression, however, is often referred to by politicians and economists with no precise meaning attached to it' [4]. Three different concepts of progression identified in the early literature [4, 5, 6] can be described in relation to pre-intervention income y:²

• average-rate progression: the rate of change of the average tax rate as income changes;
• tax-liability progression: the percentage change in tax liability for a one-percent change in pre-intervention income;
• residual-income progression: the percentage change in post-intervention income for a one-percent change in pre-intervention income.

The role of these concepts is examined in section 3.

2.2 Distributional analysis

The way tax-benefit progressivity is conventionally measured can be shown with two applications of Lorenz-curve analysis.

Tax-liability progression

For a population of n people, write the list of pre-intervention incomes as $y_1, y_2, \dots, y_n$, where the subscripts 1, 2, … are the labels of individual income recipients ordered from low to high. Write the sum of the first i of these incomes as $Y_i$: this means that total income is $Y_n$, and so the income share of the bottom i recipients is $Y_i/Y_n$. Doing the same exercise for net tax payments, the share of the bottom i is $T_i/T_n$, where $T_i$ is the total net taxes paid by the i people with the smallest net tax payments. If the ordering of income recipients by income is the same as the ordering by amounts of net taxes, then these 'share' calculations give the Lorenz curves for income and for tax, as in Figure 2 [7], where the horizontal axis shows i/n (the population shares) and the vertical axis shows $Y_i/Y_n$ or $T_i/T_n$ for three different types of tax schedule. The black 45-degree line represents perfect equality: for all i, the share of income would be exactly the same as the share of the population; it also represents the distribution of tax payments in the case of a uniform lump-sum tax. The red curve is the Lorenz curve for a typical income distribution, plotting the pairs $(i/n, Y_i/Y_n)$; it also represents the distribution of tax payments in the case of a tax that is proportional to income. The green dashed curve shows the distribution of tax payments for the case where there is an exemption level. Because the red curve depicts a proportionate tax system, it represents a zero-progressivity case: tax-Lorenz curves above the red curve are regressive – such as the lump-sum tax depicted by the black line – and tax-Lorenz curves below the red curve are progressive – such as the one depicted by the green dashed curve. Following this intuition, progressivity can be measured using the deviation of the tax-Lorenz curve from the red curve. This gives a standard pragmatic method of measuring tax progressivity in aggregate: the total measure of progressivity is twice the shaded area shown in the picture.³

Figure 2 Lorenz curves of income and tax payments.
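To make this pragmatic measure concrete, here is a minimal Python sketch (my illustration, not from the article; the incomes and the two tax schedules are hypothetical). It computes the Lorenz ordinates for income and for tax payments and approximates twice the area between the two curves, which is roughly zero for a proportional tax and positive for a progressive one:

```python
import numpy as np

def lorenz_ordinates(values):
    """Cumulative shares L_i = (sum of i smallest values) / total."""
    v = np.sort(np.asarray(values, dtype=float))
    return np.cumsum(v) / v.sum()

def progressivity(incomes, taxes):
    """Approximate twice the area between income and tax Lorenz curves.

    Positive when tax payments are distributed more unequally than
    income (progressive), ~0 for a proportional tax, negative if
    regressive. Discrete approximation: 2 * mean(L_income - L_tax)."""
    return 2.0 * np.mean(lorenz_ordinates(incomes) - lorenz_ordinates(taxes))

y = np.array([10_000, 20_000, 40_000, 80_000])      # hypothetical incomes
proportional = 0.2 * y                              # flat 20% tax
with_exemption = np.maximum(0.3 * (y - 15_000), 0)  # 30% above an exemption

print(round(progressivity(y, proportional), 3))    # 0.0
print(round(progressivity(y, with_exemption), 3))  # ~0.18, progressive
```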
Residual-income progression

Tax-liability progression is not ideal for understanding the connection from the policy intervention to distributional outcomes. For this purpose, residual-income progression is straightforward and transparent. 'Residual income' is income after intervention by the tax-benefit system, $x_1, x_2, \dots, x_n$. Intuitive comparisons of the post- and pre-intervention distributions can be made if the ordering of individuals is left unchanged by the intervention. Figure 3 shows the Lorenz curves for the five ONS income concepts mentioned in section 2.1 and gives snapshots of state intervention into household budgets, and the apparent outcomes of different types of intervention, at the beginning and end of the period.

Figure 3 Lorenz curves for five concepts of income.

In this case the distributional summary depicted in Figure 3 appears to reveal several cases of Lorenz dominance:⁴ as one proceeds through the income concepts from 1 (original income) to 5 (final income), the Lorenz curve moves closer to the line of equality, with one exception – the step from line 3 to line 4, accounted for by indirect taxes. It appears that, with this exception, each of the interventions is progressive, and that benefits (cash and non-cash) are more progressive than taxes.⁵

Difficulties

Three qualifications should be mentioned. The first is that several of the income adjustments described above come, in practice, from imputations (based on household characteristics) rather than direct observations (this is done in the case of non-cash benefits). Second, the application of some taxes or benefits may cause reordering among income recipients. So, when comparing post-intervention income distributions with pre-intervention distributions, it is argued that this reordering should be taken into account: instead of representing the distribution of post-intervention income as $x_1, x_2, \dots, x_n$, where the ordering is from low to high according to the x-values, it should be arranged in the order from low to high according to the y-values; but then the resulting curve will be more difficult to interpret – see section 3.2 below. Third, the analysis focuses purely on the mechanical effect of taxes and benefits: to build progressivity considerations into a proper policy analysis, one would have to allow for the responses of people to the tax-benefit system.

3 Tax progression: analytical approach

What is the basis for choosing the standard pragmatic measures or the Lorenz-curve empirics as the appropriate way to consider tax progression? Consider how the progressivity of τ might be characterised in terms of social welfare. This idea was already present in the seminal Feldstein article [8]. It is also evident in the parallel literature on the design of fiscal systems, which drew attention to the issue of "incentives" as a limitation on the possibilities for redistribution [9]. The approach builds on the welfare-based contributions to the inequality literature [10]. Suppose that income recipients are identical in every respect other than their incomes,⁶ and that social welfare depends on the distribution of post-intervention incomes x in a way that is:

• income-regarding, so that, other things being equal, an increase in any person's income increases social welfare;
• equity-regarding, so that, other things being equal, a transfer to a richer person from a poorer person would always reduce welfare (the "transfer principle" [11]).

A basic result for a social-welfare function that is both income-regarding and equity-regarding is this: if distribution x has the same mean income as distribution y, then welfare in x is higher if and only if x Lorenz-dominates y. This means that inequality must be lower in x than in y, whatever inequality index is used [12].

3.1 Tax progressivity and welfare comparisons

A neat result then connects the residual progression of tax-benefit schedules to the concept of Lorenz dominance [4].
Given two tax-benefit schedules τ and τ′ resulting in post-intervention income distributions x and x′, distribution x will Lorenz-dominate x′ as long as τ is more progressive than τ′ at all income levels.⁷ This result can be deepened and extended by considering the relationship amongst the following three propositions, which have a clear practical interpretation:

1. "The average tax rate never decreases with y."
2. "Disposable income x never decreases with y."
3. "Inequality of x is not greater than inequality of y."

These three propositions are linked together in the following simple result:

Theorem: Propositions 1 and 2 jointly hold if and only if proposition 3 holds [13].

This theorem is fundamental to the relationship between the tax-benefit schedule and inequality; it provides insight into the logical consequences of progressivity for distributional justice. Proposition 1 is a "greatest burden on those with the broadest shoulders" property: the tax-benefit schedule τ is designed in such a way that the ratio τ(y)/y never falls as y goes up. Proposition 2 is an "incentive preservation" property: it rules out the marginal tax rate exceeding 100%, a feature that is taken to be essential in good tax design [9]. To make sure that the design of τ cannot increase inequality of post-intervention income (proposition 3), both propositions 1 and 2 must hold.
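A small numeric sketch of this theorem (my illustration, not from the article; the linear tax-benefit schedule and the income values are hypothetical): propositions 1 and 2 are checked directly, and the Gini coefficients confirm proposition 3.

```python
import numpy as np

def gini(v):
    """Gini coefficient via the mean-absolute-difference formula."""
    v = np.asarray(v, dtype=float)
    n = v.size
    return np.abs(v[:, None] - v[None, :]).sum() / (2 * n * n * v.mean())

def tau(y):
    """Hypothetical schedule: a flat benefit of 2,000 combined with a
    30% flat tax, so net tax is 0.3*y - 2000 (negative = net benefit)."""
    return 0.3 * y - 2000.0

y = np.array([5_000, 15_000, 30_000, 60_000, 120_000], dtype=float)  # sorted
t = tau(y)
x = y - t  # post-intervention income

print(np.all(np.diff(t / y) >= 0))  # Proposition 1: ATR never decreases: True
print(np.all(np.diff(x) >= 0))      # Proposition 2: x never decreases: True
print(gini(x) <= gini(y))           # Proposition 3 follows: True
```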
3.2 Extensions

So the Lorenz-dominance relation, introduced as an intuitive idea in section 2.2, is at the heart of the analysis connecting tax progressivity and tax justice, interpreted in terms of inequality reduction. The analysis needs to be developed in two directions that involve distributional-justice questions associated with income taxation; both concern issues that are assumed away when the simple Lorenz-dominance criterion is used.

The effect of income scale

The first of these focuses on the conventional assumption that inequality remains unchanged under proportionate changes in all incomes. It sidesteps the issue of how inequality comparisons should be made at different levels of real income. Suppose Austria has a higher income per head than Belgium, but that the two countries have the same level of inequality. What changes in real income in the two countries would leave this inequality judgment unaltered? The same proportional income growth for everyone in the two countries (relative inequality comparisons)? The same dollar increase for everyone in the two countries (absolute inequality comparisons)?⁸ Or a criterion that is intermediate between the relative-inequality principle and the absolute-inequality principle [14, 15]? If net tax payments are increasing with income, then the insights of section 3.1 carry over in modified form [9, 16].

Horizontal inequity

It is usual to distinguish between "vertical inequity", corresponding to the disparities in incomes that were considered earlier using Lorenz analysis, and "horizontal inequity", corresponding to the idea that government intervention should treat people with the same circumstances in the same way: "equal treatment of equals". In principle, income recipients with the same level of income and the same circumstances should be liable for the same taxes or transfers. In practice, a tax-benefit system can alter the rankings of the people in the income distribution [17, 18], and it is this phenomenon that has been seen as the essential manifestation of horizontal inequity in taxation [19, 20, 21, 22]. It leads to the problem alluded to in section 2.2: if we compare pre- and post-intervention distributions arranging the x-values in the same order as the original y-values, the resulting plot of income cumulations is not a true Lorenz curve and is hard to interpret in terms of inequality. There is no natural way of quantifying degrees of horizontal inequity in a manner similar to that used for vertical inequity (income inequality), nor is there a universally accepted set of criteria. There may be a case for considering other distributional criteria, such as mobility.

Because taxes are complex and taxpayers are diverse, it is unrealistic to expect that the outcome in terms of disposable incomes can be represented as a known determinate function τ. Those with the same income may pay different taxes arising from different behaviour rather than different circumstances. Incomes derived from assets may present valuation problems that can appear to introduce an element of randomness into the relationship between pre-intervention and post-intervention income. The effect of deductions may undermine progressivity [23], and the complexity of the jurisdiction's tax structure may result in unequal knowledge among income recipients of legitimate opportunities to reduce tax liability. Different tax-benefit schedules will be applied to specific population groups with different needs, but the differences in the tax-benefit schedules are unlikely to conform with the adjustments that might be made when designing an equivalence scale on social-welfare grounds [24].

3.3 Overall progressivity

For a complete analysis, a systematic method of aggregating the information about progressivity at each income level is needed. This could be achieved by one of two routes.

Orderings

It is useful to be able to say that, under conditions A, B and C, one tax-benefit schedule displays more progressivity overall than another. One method for doing this might be to extend the pragmatic method in section 2.2 to the ranking of distributions [25]. However, the quantitative measures of progressivity implicit in Figure 2 are arbitrary, with little to choose between them. The pragmatic method of Figure 3 is more promising, since it is directly connected with the welfare-based approaches in section 3.1. Using that, one can construct orderings of distributions in terms of social welfare [26].

Aggregate indices

In inequality analysis it is convenient to aggregate the information in a distribution into a single index of inequality, in addition to using Lorenz curves. Similarly, it would be convenient to have a single index of tax-benefit inequity that captures the impact of taxes and benefits on vertical and horizontal inequity. This approach might seem arbitrary, as with the pragmatic approach to progressivity discussed in section 2.2. If one uses an explicit social-welfare function to evaluate income distributions, then it would seem natural to compute the aggregate effects of a progressive schedule on individual income recipients in terms of social welfare [27]. However, as the discussion of horizontal-inequity issues shows, heterogeneity would present a problem in implementing this appropriately.

Alternatives

A potential alternative to the social-welfare approach to the aggregate measurement problem could be based on other ways of comparing two distributions. The progressivity problem uses the idea of a reference distribution, from which one may quantify the distance from the actual to the reference distribution [28].
In the case of tax progressivity interpreted as residual progression, the reference distribution would be the pre-intervention distribution; the post-intervention distribution is the distribution actually observed. The treatment in the literature of tax-induced reranking has drawn attention to the "mobility" aspects of reranking [19, 20, 29]. It is not self-evident whether this phenomenon is intrinsically good or bad, but it is susceptible of precise measurement using a small number of agreed principles [30]. In applying this to the concept of tax mobility, the same methodology and similar principles can be used as in the study of income mobility over time or between generations.

4 Inequality and progression in the UK

Let us examine the tax-benefit progressivity concepts in the light of recent UK history. The data used are the same as for Figures 1 and 3; they are compiled by the ONS and are now based on the UK's Household Finances Survey.⁹ The focus here is again on the standard ONS income definitions, and income has been equivalised by the ONS using the modified Organisation for Economic Co-operation and Development scale, using as a reference point a two-adult childless household. There are well-known limitations on such data, over and above the incidence assumptions mentioned in section 2.2. First, some of the components of an "ideal" income concept are not included – perhaps the most important of these omitted items is capital gains. Second, because the ONS data are principally based on survey data, it is bound to be the case that quality at the top of the distribution is less satisfactory. To address this, the ONS has introduced into the taxes-and-benefits calculations a top-income adjustment, in order to address survey under-coverage of the highest earners; this is done by using tax-record information.¹⁰ However, the use of tax data in the empirical analysis of income inequality presents the additional associated problem of tax non-compliance, with the consequent underestimation of income in the affected population subgroups. About six per cent of tax liabilities remain uncollected in the UK, although this proportion has remained relatively stable during the period; non-compliance particularly affects incomes in the top two decile groups [31].

Consider how the tax and benefit system has converted original income to final income, both as one combined operation and in the four notional separate stages identified within the ONS data. Using the same ONS data, the elasticity formulas for tax-benefit progressivity in section 2.1 can be estimated using discrete approximations; again, this can be done for the overall impact of the interventions, or for the four notional interventions computed by the ONS. A sketch of such a discrete approximation follows.
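The sketch below is illustrative Python only, not the ONS methodology, and the decile values are invented. Residual-income progression between adjacent deciles is approximated by the ratio of log-differences of post- and pre-intervention income; values below one indicate a progressive intervention:

```python
import numpy as np

def residual_progression(y_deciles, x_deciles):
    """Discrete residual-income progression between adjacent deciles:
    the elasticity of post-intervention income x with respect to
    pre-intervention income y, approximated by dlog(x)/dlog(y).
    Values below 1 indicate a progressive intervention at that point."""
    y = np.asarray(y_deciles, dtype=float)
    x = np.asarray(x_deciles, dtype=float)
    return np.diff(np.log(x)) / np.diff(np.log(y))

# Hypothetical decile means of original and final income (in pounds).
original = np.array([5, 9, 13, 17, 22, 27, 33, 41, 54, 95]) * 1_000
final    = np.array([14, 17, 20, 23, 26, 30, 35, 41, 51, 82]) * 1_000

print(np.round(residual_progression(original, final), 2))  # all below 1
```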
Inequality: snapshot and trends

The picture of redistribution is already evident from the pair of snapshots in Figure 3 above. Without using sophisticated tools, it is clear from each snapshot that the major contributions to the reduction in inequality occur (1) from the Lorenz curve for original income (the outermost curve) to that for gross income, and (2) from the Lorenz curve for post-tax income to that for final income (the innermost curve). Contribution (1) is attributable to cash benefits and contribution (2) to non-cash benefits. This confirms that over the first two decades of the 21st century the bulk of redistribution attributable to government intervention still comes from the effect of benefits rather than taxes.

The picture of trends from Figures 1 (Gini) and 3 (Lorenz) might suggest that little happened to the UK income distribution over the 20-year period, for any of the five ONS income concepts and for consumption. However, these overall views mask some interesting detail. Figure 4 shows that the top decile group's share of original (market) income increased sharply in the final two years of the period; for comparison, the changes in the share of the bottom decile group are also shown on the same diagram. Figure 5 (for final income) shows that this upturn in the share of the top 10 per cent survives all four stages of redistribution recorded in the ONS data.

Figure 4 Changes in original income distribution: detail.

Figure 5 Changes in final income distribution: detail.

Progressivity

The concept used is that of residual progression, introduced in section 2.1. Figure 6 shows the overall progressivity of tax-benefit interventions across the deciles at the beginning, middle and end of the period, and demonstrates that, for most of the deciles, there was an increase over the period, with somewhat greater change in the first half of the period. Again, the overall picture hides interesting heterogeneity, as shown in Figure 7 for the four ONS components of redistribution. The changing pattern over the period 2000–2020 is clear for taxation, both direct and indirect: a reduction in progressivity for most deciles, with a big change coming in the first half of the period and a slight shift back in the second. However, the pattern for benefits – from which most of the redistribution comes – is more nuanced.

Figure 6 Residual progression overall.

Figure 7 Components of residual progression at beginning, middle and end of period.

ATR profiles

As shown in section 3, one of the key issues in understanding the effective progressivity of government interventions is the behaviour of the implied average tax rate ("ATR", where benefits are included as negative taxes). Figure 8 shows the pattern of change for the overall tax-benefit system combined, as an ATR profile across the deciles for each of the years 2000, 2010 and 2020. Figure 9 shows this broken down by individual components of redistribution. It is clear that a major part of the change in redistributive effect overall comes in the second half of the period (Figure 8), especially as concerns the lower decile groups, and that this change has largely been driven by cash benefits (top left of Figure 9).

Figure 8 ATR profiles: 2000–2020.

Figure 9 ATR profiles at each stage of redistribution.

5 Conclusions

Pragmatic approaches and formal analysis can help to inform judgments about tax progressivity and its relation to income inequality. Pragmatic approaches focus on two questions: (1) What is the distribution of tax payments? (2) How much do taxes and benefits shift the Lorenz curve? Both these questions tell us something useful, but more is needed to derive conclusions that are based on the established principles of welfare economics. The second question is key to understanding progressivity and inequality.

A key connection

There is a simple connection between distributional justice and tax progressivity through the basic welfare economics of distributional issues: Lorenz orderings of post-intervention distributions are connected to measures of tax progression.
A sharp conclusion on the social-welfare impact of a tax schedule can be drawn by determining whether it is true that the average tax rate is always increasing and post-intervention income is always increasing with pre-intervention income.

The first problem is that the simple results sweep aside difficulties raised by the heterogeneity of income recipients and the complexity of tax schedules. This has been addressed in the literature on horizontal inequity, covering not only the issue of tax-induced re-rankings in the income distribution but also other issues that are less straightforwardly resolved. They may be suitably addressed by further theoretical and econometric developments on the comparison of pairs of distributions.

The second problem is whether "more progressive" necessarily implies "more just". We know the conditions under which "more progressive" implies "more equal" in terms of outcome, but that is not quite the same thing. To clarify this issue we need to take a position on how, if at all, the criteria for inequality comparisons change when real incomes change ("vertical equity") and whether distributive justice is to be applied ex ante or ex post ("horizontal equity").

Lessons to be learned?

The experience of the UK in the early 21st century reveals that, as in the 20th century, the major redistributive effect comes from the benefit side of government intervention rather than taxation. Even though there appears to have been an increase in the share of the richest towards the end of the period, and even though the outcome in terms of tax changes has been a reduction in the progressivity of the tax system, the cash-benefit component of government intervention has driven the pattern of progression in the opposite direction. This suggests three take-aways for the design of policy:

1. Distributional justice in taxation should be viewed not only in terms of the way taxes are raised but also in terms of the way the proceeds are spent on cash benefits and non-cash benefits. The experience of the UK and many other countries suggests that much of the overall progressivity of government intervention comes from the spending side rather than the revenue-raising side of the intervention activity. It is possible to have a substantial impact on inequality by focusing on the way benefits are structured rather than only worrying about, say, top rates of personal tax.
2. However, on the tax side, a shift from direct taxation (usually a progressive component) to indirect taxation (a regressive or neutral component) will reduce overall progressivity and worsen the distributional outcomes.
3. Both the top and the bottom of the distribution are important for appraising the overall progressivity of the tax-benefit system and its consequences for inequality and distributional justice.

Notes

1. See Feldstein's seminal paper [8], which demonstrates the connections to the analysis of inequality.
2. Musgrave and Thin [5] also discuss a fourth concept, marginal-rate progression, which is less relevant to an approach that focuses on inequality and tax justice.
3. An alternative pragmatic method uses a modified Lorenz diagram with share of income on the horizontal axis and share of net tax paid on the vertical axis [32].
4. Distribution x Lorenz-dominates distribution y if the x-Lorenz curve lies somewhere above, and nowhere below, the y-Lorenz curve [33].
5. The Institute for Fiscal Studies uses a different methodology that computes the impact of indirect taxes as a proportion of expenditure, rather than as a proportion of income as done in the ONS calculations [34]. On this basis they find that indirect taxes are broadly neutral, whereas, on the ONS basis of calculation, indirect taxation is regressive.
6. The difficulty of applying a method of equivalisation to achieve this in practice is considered at the end of section 3.2.
7. The underlying intuition for this result is as follows: if the elasticity of x with respect to y is always below the elasticity of x′ with respect to y, then the ratio $x_i/x_{i-1}$ is always lower for tax-benefit schedule τ than for τ′. This assumes that the tax-benefit system does not re-rank people in the distribution.
8. There are few inequality indices whose implied contour maps allow them to be adapted both as relative and as absolute inequality indices. The Gini coefficient (a relative measure) is one of them: multiply the Gini by mean income and you obtain the absolute Gini. The variance (an absolute measure) is another: take the square root of the variance and divide by the mean and you get the coefficient of variation.

Acknowledgements

I am grateful to Tianxang Zheng and Xueyan Zhang for research assistance. I also wish to thank the editor and reviewers for helpful suggestions.

Competing Interests

The author has no competing interests to declare.

References

1. Haig RM. The Federal Income Tax. New York: Columbia University Press; 1921.
2. Simons HA. Personal Income Taxation. Chicago: University of Chicago Press; 1938.
3. Atkinson AB. Inequality: What Can Be Done? London: Harvard University Press; 2015. DOI: https://doi.org/10.4159/9780674287013
4. Jakobsson U. On the measurement of the degree of progression. Journal of Public Economics. 1976; 5: 161–168. DOI: https://doi.org/10.1016/0047-2727(76)90066-9
5. Musgrave RA, Thin T. Income tax progression, 1929–48. Journal of Political Economy. 1948; 56: 498–514. DOI: https://doi.org/10.1086/256742
6. Pfingsten A. Progressive taxation and redistributive taxation: Different labels for the same product? Social Choice and Welfare. 1988; 5: 235–246. DOI: https://doi.org/10.1007/BF00735764
7. Gerber C, Klemm A, Liu L, Mylonas V. Income tax progressivity: Trends and implications. Oxford Bulletin of Economics and Statistics. 2020; 82: 365–386. DOI: https://doi.org/10.1111/obes.12331
8. Feldstein M. On the theory of tax reform. Journal of Public Economics. 1976; 6: 77–104. DOI: https://doi.org/10.1016/0047-2727(76)90042-6
9. Fei JCH. Equity oriented fiscal programs. Econometrica. 1981; 49: 869–881. DOI: https://doi.org/10.2307/1912507
10. Atkinson AB. On the measurement of inequality. Journal of Economic Theory. 1970; 2: 244–263. DOI: https://doi.org/10.1016/0022-0531(70)90039-6
11. Dalton H. Measurement of the inequality of incomes. The Economic Journal. 1920; 30: 348–361. DOI: https://doi.org/10.2307/2223525
12. Cowell FA. Measuring Inequality. Oxford: Oxford University Press, third edition; 2011. DOI: https://doi.org/10.1093/acprof:osobl/9780199594030.001.0001
13. Eichhorn W, Funke H, Richter WF. Tax progression and inequality of income distribution. Journal of Mathematical Economics. 1984; 13: 127–131. DOI: https://doi.org/10.1016/0304-4068(84)90012-0
14. Besley TJ, Preston IP. Invariance and the axiomatics of income tax progression: A comment. Bulletin of Economic Research. 1988; 40: 159–163. DOI: https://doi.org/10.1111/j.1467-8586.1988.tb00262.x
15. Ebert U. Inequality reducing taxation reconsidered. Research on Economic Inequality. 2010; 18: 131–152. DOI: https://doi.org/10.1108/S1049-2585(2010)0000018009
16. Moyes P. A note on minimally progressive taxation and absolute income inequality. Social Choice and Welfare. 1988; 5: 227–234. DOI: https://doi.org/10.1007/BF00735763
17. Rosen HS. An approach to the study of income, utility, and horizontal equity. The Quarterly Journal of Economics. 1978; 92: 307–322. DOI: https://doi.org/10.2307/1884165
18. Plotnick R. The concept and measurement of horizontal inequity. Journal of Public Economics. 1982; 17: 373–391. DOI: https://doi.org/10.1016/0047-2727(82)90071-8
19. Atkinson AB. Horizontal equity and the distribution of the tax burden. In: Aaron HJ, Boskin MJ (eds.), The Economics of Taxation. Washington, DC: Brookings Institution; 1980; 3–18.
20. King MA. An index of inequality: With applications to horizontal equity and social mobility. Econometrica. 1983; 51: 99–116. DOI: https://doi.org/10.2307/1912250
21. Duclos J-Y. Progressivity, redistribution and equity, with application to the British tax and benefit system. Public Finance. 1993; 48: 350–365.
22. Plotnick R. A measure of horizontal inequity. Review of Economics and Statistics. 1981; 63: 282–288. DOI: https://doi.org/10.2307/1924099
23. Hümbelin O, Farys R. Income redistribution through taxation – how deductions undermine the effect of taxes. Journal of Income Distribution. 2018; 25: 1–35. DOI: https://doi.org/10.25071/1874-6322.40330
24. Aronson JR, Johnson P, Lambert PJ. Redistributive effect and unequal income tax treatment. The Economic Journal. 1994; 104: 262–270. DOI: https://doi.org/10.2307/2234747
25. Dardanoni V, Lambert PJ. Progressivity comparisons. Journal of Public Economics. 2002; 86: 99–122. DOI: https://doi.org/10.1016/S0047-2727(01)00089-5
26. Dardanoni V, Lambert PJ. Horizontal inequity comparisons. Social Choice and Welfare. 2001; 18: 799–816. DOI: https://doi.org/10.1007/s003550000085
27. Kakwani NC, Son HH. Normative measures of tax progressivity: An international comparison. Journal of Economic Inequality. 2021; 19: 185–212. DOI: https://doi.org/10.1007/s10888-020-09463-6
28. Cowell FA, Flachaire E, Bandyopadhyay S. Reference distributions and inequality measurement. Journal of Economic Inequality. 2013; 11: 421–437. DOI: https://doi.org/10.1007/s10888-012-9238-z
29. Kaplow L. Horizontal equity – measures in search of a principle. National Tax Journal. 1989; 42: 139–154. DOI: https://doi.org/10.1086/NTJ41788784
30. Cowell FA, Flachaire E. Measuring mobility. Quantitative Economics. 2018; 9: 865–901. DOI: https://doi.org/10.3982/QE512
31. Advani A. Who does and doesn't pay taxes? Fiscal Studies. 2022; 43: 5–22. DOI: https://doi.org/10.1111/1475-5890.12257
32. Formby JR, Seaks TG, Smith WJ. A comparison of two new measures of tax progressivity. The Economic Journal. 1981; 91: 1015–1019. DOI: https://doi.org/10.2307/2232508
33. Lambert PJ. The Distribution and Redistribution of Income. Manchester: Manchester University Press, third edition; 2002.
34. Bourquin P, Waters T. The effect of taxes and benefits on UK inequality. Briefing Note BN249, Institute for Fiscal Studies; 2019.
2023-01-28 04:15:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4781881272792816, "perplexity": 2089.920026770248}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499470.19/warc/CC-MAIN-20230128023233-20230128053233-00086.warc.gz"}
https://typeset.io/topics/ring-mathematics-27p85vgn
Topic

# Ring (mathematics)

About: Ring (mathematics) is a research topic. Over the lifetime, 19,980 publications have been published within this topic, receiving 233,849 citations. The topic is also known as: ring possibly without identity.

##### Papers

Book, 30 Oct 1997

TL;DR: This book discusses decision problems and complexity over a ring, and complexity aspects of the fundamental theorem of algebra.

Abstract: 1. Introduction. 2. Definitions and First Properties of Computation. 3. Computation over a Ring. 4. Decision Problems and Complexity over a Ring. 5. The Class NP and NP-Complete Problems. 6. Integer Machines. 7. Algebraic Settings for the Problem "P ≠ NP?". 8. Newton's Method. 9. Fundamental Theorem of Algebra: Complexity Aspects. 10. Bezout's Theorem. 11. Condition Numbers and the Loss of Precision of Linear Equations. 12. The Condition Number for Nonlinear Problems. 13. The Condition Number in ?(H(d)). 14. Complexity and the Condition Number. 15. Linear Programming. 16. Deterministic Lower Bounds. 17. Probabilistic Machines. 18. Parallel Computations. 19. Some Separations of Complexity Classes. 20. Weak Machines. 21. Additive Machines. 22. Nonuniform Complexity Classes. 23. Descriptive Complexity. References.

1,542 citations

Book, 01 Jan 1966

Abstract: Introduction. Chapter 1, Preparatory material: 1. Multiplicative sequences; 2. Sheaves; 3. Fibre bundles; 4. Characteristic classes. Chapter 2, The cobordism ring: 5. Pontrjagin numbers; 6. The ring /ss(/Omega) /oplus //Varrho; 7. The cobordism ring Ω; 8. The index of a 4k-dimensional manifold; 9. The virtual index. Chapter 3, The Todd genus: 10. Definition of the Todd genus; 11. The virtual generalised Todd genus; 12. The T-characteristic of a GL(q, C)-bundle; 13. Split manifolds and splitting methods; 14. Multiplicative properties of the Todd genus. Chapter 4, The Riemann-Roch theorem for algebraic manifolds: 15. Cohomology of compact complex manifolds; 16. Further properties of the $\chi_y$ characteristics; 17. The virtual $\chi_y$ characteristics; 18. Some fundamental theorems of Kodaira; 19. The virtual $\chi_y$ characteristics for algebraic manifolds; 20. The Riemann-Roch theorem for algebraic manifolds and complex analytic line bundles; 21. The Riemann-Roch theorem for algebraic manifolds and complex analytic vector bundles. Appendix 1, by R.L.E. Schwarzenberger: 22. Applications of the Riemann-Roch theorem; 23. The Riemann-Roch theorem of Grothendieck; 24. The Grothendieck ring of continuous vector bundles; 25. The Atiyah-Singer index theorem; 26. Integrality theorems for differentiable manifolds. Appendix 2, by A. Borel: A spectral sequence for complex analytic bundles. Bibliography. Index.

1,451 citations

Journal ArticleDOI

1,341 citations

Journal ArticleDOI

Abstract: This is the first of a series of papers dealing with the representation theory of artin algebras, where by an artin algebra we mean an artin ring Λ having the property that its center is an artin ring and Λ is a finitely generated module over its center. The overall purpose of this paper is to develop terminology and background material which will be used in the rest of the papers in the series. While it is undoubtedly true that much of this material can be found in the literature or easily deduced from results already in the literature, the particular development presented here appears to be new and is especially well suited as a foundation for the papers to come.

1,161 citations

Journal ArticleDOI

TL;DR: The vacuum Einstein equations in five dimensions are shown to admit a solution describing a stationary asymptotically flat spacetime regular on and outside an event horizon of topology $S^1 \times S^2$, which describes a rotating "black ring".

Abstract: The vacuum Einstein equations in five dimensions are shown to admit a solution describing a stationary asymptotically flat spacetime regular on and outside an event horizon of topology $S^1 \times S^2$. It describes a rotating "black ring". This is the first example of a stationary asymptotically flat vacuum solution with an event horizon of nonspherical topology. The existence of this solution implies that the uniqueness theorems valid in four dimensions do not have simple five-dimensional generalizations. It is suggested that increasing the spin of a spherical black hole beyond a critical value results in a transition to a black ring, which can have an arbitrarily large angular momentum for a given mass.

1,047 citations

##### Network Information

###### Related Topics (5)

Automorphism: 15.5K papers, 190.6K citations (85% related)
Simple Lie group: 8.3K papers, 204.2K citations (83% related)
Representation theory: 8.6K papers, 266.4K citations (82% related)
Cohomology: 21.5K papers, 389.8K citations (82% related)
Lie conformal algebra: 9.5K papers, 218.9K citations (81% related)

##### Performance

###### Metrics

Number of papers in the topic in previous years:

| Year | Papers |
|------|--------|
| 2022 | 22     |
| 2021 | 983    |
| 2020 | 941    |
| 2019 | 844    |
| 2018 | 836    |
| 2017 | 863    |
2023-02-07 01:38:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6835602521896362, "perplexity": 1433.86661606018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500368.7/warc/CC-MAIN-20230207004322-20230207034322-00225.warc.gz"}
http://mathhelpforum.com/differential-equations/127275-laplace-pde.html
# Math Help - Laplace PDE

1. ## Laplace PDE

The exercise is in the attached file. I've solved it, but I'm not sure my solution is correct. I hope you'll be able to tell me whether it is, or offer any other useful comments. It's mostly in Hebrew, but I think it's possible to understand without translating the whole thing. The exercise reads as follows: "Solve the PDE so that the coefficients of U(x,y) are less than n^(-2)". Here f1, f2 and f3 are C1 (that's a given).

2. Originally Posted by zokomoko

The exercise is in the attached file. I've solved it, but I'm not sure my solution is correct. [...]

From what I saw it looks good, except for one of the new BCs. You wrote

$g_3(y) = f_2(y) - \frac{y}{\pi} f_2(\pi)$

which I believe should be

$g_3(y) = f_3(y) - \frac{y}{\pi} f_2(\pi).$

3. Got it! :-) Thank you very much.
https://us.edugain.com/questions/Let-x-be-a-real-number-What-is-the-minimum-value-of-x-2-4x-3
### Let $x$ be a real number. What is the minimum value of $x^2 - 4x + 3$?

$-1$

1. We are given the quadratic expression $x^2 - 4x + 3$, where $x$ is a real number, and we need to find its minimum value. \begin{align} x^2 - 4x + 3 & = x^2 - 4x + 4 - 1 \\ & = x^2 - 2(2)x + 2^2 - 1 \\ & = (x - 2)^2 - 1 \end{align}
2. Since $x$ is real, $(x - 2)^2 \geq 0$, so $(x - 2)^2 - 1 \geq -1$.
3. Observe that the value of $x^2 - 4x + 3$ is minimum when $(x - 2)^2 = 0$, i.e. $x = 2$. The value of $x^2 - 4x + 3$ at $x = 2$ is $-1$.
4. Hence, the minimum value of $x^2 - 4x + 3$ is $-1$.
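For readers who want a quick sanity check of the algebra above, here is a minimal numeric sketch (added here; it is not part of the original exercise) that scans the quadratic over a fine grid:

```python
# Scan x^2 - 4x + 3 on a grid and report where it is smallest.
def f(x):
    return x**2 - 4*x + 3

xs = [i / 1000 for i in range(-5000, 5001)]  # x from -5 to 5 in steps of 0.001
best = min(xs, key=f)
print(best, f(best))  # prints 2.0 -1.0, matching the worked answer
```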
https://eprints.soton.ac.uk/411788/
University of Southampton Institutional Repository

# The B−L supersymmetric standard model with inverse seesaw at the large hadron collider

Khalil, S. and Moretti, S. (2017) The B−L supersymmetric standard model with inverse seesaw at the large hadron collider. Reports on Progress in Physics, 80 (3), [036201].

Record type: Article

## Abstract

We review the TeV scale $B-L$ extension of the Minimal Supersymmetric Standard Model (BLSSM) where an inverse seesaw mechanism of light neutrino mass generation is naturally implemented and concentrate on its hallmark manifestations at the Large Hadron Collider (LHC).

Accepted/In Press date: 30 November 2016
e-pub ahead of print date: 17 January 2017
Published date: 17 January 2017
Additional Information: 26 pages, 4 tables, 13 figures, to appear in Reports on Progress in Physics
Keywords: hep-ph
Organisations: Theory Group

## Identifiers

Local EPrints ID: 411788
URI: http://eprints.soton.ac.uk/id/eprint/411788
PURE UUID: f9ae600e-0858-4d84-9449-faec82e90a21

## Catalogue record

Date deposited: 26 Jun 2017 16:30

## Contributors

Author: S. Khalil
Author: S. Moretti
https://codewalr.us/index.php?topic=1622.0
[TI-82] Conway's Game of Life

Started by SopaXorzTaker, October 02, 2016, 04:48:53 am

SopaXorzTaker
October 02, 2016, 04:48:53 am (Last Edit: January 15, 2017, 05:47:27 pm by DJ Omnimaga)
Hi! I was probably the first to write an assembly game for the TI-82 in 4 years, and it's my first program in Z80 assembly! Anyway, here's my implementation of Conway's Game of Life! The controls are pretty simple:
- Use [MODE] to pause the simulation to allow you to edit the cells.
- Use [ENTER] to toggle the cell at the cursor.
- Use the arrow keys to move the cursor.
- Use [X,T,$$\theta$$] to single-step the simulation.
- Use [DEL] to reset all the cells.
- Use [CLEAR] to exit the game.
It's on ticalc.org: http://www.ticalc.org/archives/files/fileinfo/468/46801.html. The source code is hosted on GitHub: https://github.com/SopaXorzTaker/ti-life. Runs with CrASH. Feel free to improve the game, and please give feedback!

DJ Omnimaga
October 02, 2016, 02:15:19 pm #1
Nice to see you got your first ASM program done. It's also the first CGOL to use the homescreen. Do you know if it will run on ROM 16.0?

kotu
October 02, 2016, 03:38:07 pm #2
Wish this ran on my calc

SopaXorzTaker
October 02, 2016, 03:39:40 pm #3
Quote from: DJ Omnimaga on October 02, 2016, 02:15:19 pm: "Do you know if it will run on ROM 16.0?"
Yes, I am pretty sure it will, as CrASH supports most ROM versions.

SopaXorzTaker
October 02, 2016, 03:42:39 pm #5
Quote from: kotu on October 02, 2016, 03:38:07 pm: "Wish this ran on my calc"
Ask someone like @Sorunome to port it! It's going to be very easy, if only I had a -83 or -84...

DJ Omnimaga
October 02, 2016, 04:10:12 pm #6
Quote from: SopaXorzTaker on October 02, 2016, 03:39:40 pm: "Yes, I am pretty sure it will, as CrASH supports most ROM versions."
Yeah, but Robot War was for CrASH too and it only ran on ROM 19.0+.

SopaXorzTaker
October 02, 2016, 08:26:26 pm #7
Quote from: DJ Omnimaga on October 02, 2016, 04:10:12 pm: "Yeah, but Robot War was for CrASH too and it only ran on ROM 19.0+."
That's probably because of different ROM calls. My program uses only the common ones, so I am sure it will work for you.

Tander
October 03, 2016, 12:54:06 am #8
How can I get this onto my calc?

Tander
October 03, 2016, 12:59:46 am #9
damn, this is for the 82. I only have an 83+

c4ooo
October 03, 2016, 01:54:00 am #10
* c4ooo brings Tander's attention to the "modify" button
To answer your question though: when you download a .8xp, you can send it to your calc with the proper cable and TI's TI Connect program.

DJ Omnimaga
October 03, 2016, 08:09:32 pm #11
I still remember back when MirageOS had experimental TI-82 emulation, but I don't think anyone ever got it to work. There also used to be some TI-82 shell that played TI-83 games, but I forgot how that worked. IIRC, both required recompiling the programs for the other model, so in other words, it was not true emulation.
Anyway, this game is in ASM, so it would need to be modified in order to run on newer models. BASIC games can work if you copy the code from a TI-82 editor to a TI-83 Plus one, but some commands such as sin might need to be fixed, since the 83 and higher changes sin to sin(.
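For anyone who wants to play with the rules before (or instead of) diving into Z80, here is a minimal sketch of one Game of Life generation in Python. It is purely illustrative and not a port of SopaXorzTaker's assembly code:

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life by one generation.

    `live` is a set of (x, y) coordinates of live cells; unlike the
    calculator's fixed screen, this board is unbounded.
    """
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider, stepped four generations:
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the glider reappears translated one cell diagonally
```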
https://tex.stackexchange.com/questions/659190/2-boxes-on-each-other-with-columns
# 2 boxes on top of each other with \columns

I would like to display 2 boxes, one on top of the other, beside a figure on a template. I am using the \columns environment since it seems to be the most appropriate. For the creation of the boxes, I am using \tikzmarkin. I would also like to display an arrow that goes from the upper box to the lower one. My code gives this result:

This is not exactly what I want, because the two boxes do not have the same dimensions and there is no vertical spacing between them. Also, I tried to \draw an arrow between the 2 \tikzmarkin marks but it didn't work either...

\documentclass[french]{beamer}
%%%%%% ENCODAGE %%%%%%%%%%%
\usepackage[utf8]{inputenc}
%%%%%% TIKZ %%%%%%%%%%%%%%%
\usepackage[beamer,customcolors,norndcorners]{hf-tikz}
%%%%%% OTHERS %%%%%%%%%%%%%
\usepackage{booktabs,calligra}
\usepackage{listings,stackengine}
\author{XXX}
\title{XXX}
\subtitle{XXX}
\institute [XXX] {XXX \\ XXX}
\date{\today}
%%%%%% DEFINITIONS %%%%%%%%%
\def\cmd#1{\texttt{\color{red}\footnotesize $\backslash$#1}}
\def\env#1{\texttt{\color{blue}\footnotesize #1}}
\definecolor{deepblue}{rgb}{0,0,0.5}
\definecolor{deepred}{rgb}{0.6,0,0}
\definecolor{deepgreen}{rgb}{0,0.5,0}
\definecolor{halfgray}{gray}{0.55}
\lstset{
  basicstyle=\ttfamily\small,
  keywordstyle=\bfseries\color{deepblue},
  emphstyle=\ttfamily\color{deepred}, % Custom highlighting style
  stringstyle=\color{deepgreen},
  numbers=left,
  numberstyle=\small\color{halfgray},
  rulesepcolor=\color{red!20!green!20!blue!20},
}
%%%%%% BOX %%%%%%%%%%%%%%%%
\usepackage{fancybox}
\usepackage{varwidth}
\usepackage{subcaption}
\hfsetbordercolor{blue!50!black}
%%%%%% PGFPLOTS %%%%%%%%%%%%
\usepackage{pgfplots}
\definecolor{mygreen}{RGB}{28,172,0} % color values Red, Green, Blue
\definecolor{mylilas}{RGB}{170,55,241}
\definecolor{BgYellow}{HTML}{FFF59C}
\definecolor{FrameYellow}{HTML}{F7A600}
\usetikzlibrary{spy}
\usepgfplotslibrary{fillbetween}
\usetikzlibrary{patterns, matrix, positioning}
\usetikzlibrary{arrows.meta, patterns.meta}
\usepackage[most]{tcolorbox}
\tcbset{highlight math style={enhanced,colframe=red,colback=red!10!white,boxsep=0pt,sharp corners,
  equal height group=C,
  minimum for equal height group=C:1.5cm,
  valign=center,
}}

\begin{document}
\begin{frame}{Les précèdents travaux}
  \begin{columns}
    \begin{column}{.5\linewidth}
      $\displaystyle\tikzmarkin<1->[set fill color=white, set border color=blue!50!black]{a}
      \text{Plusieurs décades observées}
      \tikzmarkend{a}$

      $\displaystyle\tikzmarkin<1->[set fill color=white, set border color=blue!50!black]{b}
      \text{Confirmation de la loi d'échelle}
      \tikzmarkend{b}$
    \end{column}
    \begin{column}{.5\linewidth}
      \begin{figure}
        \phantomsubcaption
        \subfloat{{\includegraphics[height=0.55\textheight,width=\linewidth]{example-image}}}%
      \end{figure}
    \end{column}
  \end{columns}
\end{frame}
\end{document}

What is the most efficient way to do it, please? Thank you.

EDIT: Why doesn't this alternative using \tcolorbox work either?
\documentclass[french]{beamer}
%%%%%% ENCODAGE %%%%%%%%%%%
\usepackage[utf8]{inputenc}
%%%%%% TIKZ %%%%%%%%%%%%%%%
\usepackage[beamer,customcolors,norndcorners]{hf-tikz}
%%%%%% OTHERS %%%%%%%%%%%%%
\usepackage{booktabs,calligra}
\usepackage{listings,stackengine}
\author{XXX}
\title{XXX}
\subtitle{XXX}
\institute [XXX] {XXX \\ XXX}
\date{\today}
%\usepackage{YTU}
%%%%%% DEFINITIONS %%%%%%%%%
\def\cmd#1{\texttt{\color{red}\footnotesize $\backslash$#1}}
\def\env#1{\texttt{\color{blue}\footnotesize #1}}
\definecolor{deepblue}{rgb}{0,0,0.5}
\definecolor{deepred}{rgb}{0.6,0,0}
\definecolor{deepgreen}{rgb}{0,0.5,0}
\definecolor{halfgray}{gray}{0.55}
\lstset{
  basicstyle=\ttfamily\small,
  keywordstyle=\bfseries\color{deepblue},
  emphstyle=\ttfamily\color{deepred}, % Custom highlighting style
  stringstyle=\color{deepgreen},
  numbers=left,
  numberstyle=\small\color{halfgray},
  rulesepcolor=\color{red!20!green!20!blue!20},
}
%%%%%% VIDEO %%%%%%%%%%%%%%
\usepackage{multimedia}
%%%%%% BOX %%%%%%%%%%%%%%%%
\usepackage{fancybox}
\usepackage{varwidth}
\usepackage{subcaption}
\hfsetbordercolor{blue!50!black}
%%%%%% PGFPLOTS %%%%%%%%%%%%
\usepackage{pgfplots}
\definecolor{mygreen}{RGB}{28,172,0} % color values Red, Green, Blue
\definecolor{mylilas}{RGB}{170,55,241}
\definecolor{BgYellow}{HTML}{FFF59C}
\definecolor{FrameYellow}{HTML}{F7A600}
\usetikzlibrary{spy}
\usepgfplotslibrary{fillbetween}
\usetikzlibrary{patterns, matrix, positioning}
\usetikzlibrary{arrows.meta, patterns.meta}
\usepackage[most]{tcolorbox}
\tcbset{highlight math style={enhanced,colframe=red,colback=red!10!white,boxsep=0pt,sharp corners,
  equal height group=C,
  minimum for equal height group=C:1.5cm,
  valign=center,
}}

\begin{document}
\begin{frame}{Les précèdents travaux}
  \begin{columns}
    \begin{column}{.5\linewidth}
      \tcbhighmath[
        tcbox raise=0mm,
        remember as=a,
        colback=blue!10,
        colframe=blue
      ]{
        \text{Plusieurs décades observées}
      }

      \tcbhighmath[
        tcbox raise=0mm,
        remember as=b,
        colback=blue!10,
        colframe=blue,
        overlay={
          \draw[blue,-latex,thick] (a.south) -- (frame.north);
        }
      ]{
        \text{Confirmation de la loi d'échelle}
      }
    \end{column}
    \begin{column}{.5\linewidth}
      \begin{figure}
        \phantomsubcaption
        \subfloat{{\includegraphics[height=0.55\textheight,width=\linewidth]{pic/ComparaisonBerhanu.eps}}}%
      \end{figure}
    \end{column}
  \end{columns}
\end{frame}
\end{document}

Thank you again

• Use tcolorbox with remember as just the same as in tex.stackexchange.com/questions/657268/… or just an ordinary tikz picture – Sep 23 at 8:01
• Ok: I will try. For the moment, I am still struggling to know which one to use, \tikzmarkin or \tcolorbox – Wiss, Sep 23 at 8:04
• Don't use \def.
You can get a better result with \newcommand{\cmd}[1]{\textcolor{red}{\footnotesize\ttfamily\symbol{92}#1}} – Sep 23 at 12:49

Here is a version with tikz:

\documentclass[french]{beamer}
%%%%%% ENCODAGE %%%%%%%%%%%
\usepackage[utf8]{inputenc}
%%%%%% TIKZ %%%%%%%%%%%%%%%
\usepackage[beamer,customcolors,norndcorners]{hf-tikz}
%%%%%% OTHERS %%%%%%%%%%%%%
\usepackage{booktabs,calligra}
\usepackage{listings,stackengine}
\author{XXX}
\title{XXX}
\subtitle{XXX}
\institute [XXX] {XXX \\ XXX}
\date{\today}
%%%%%% DEFINITIONS %%%%%%%%%
\def\cmd#1{\texttt{\color{red}\footnotesize $\backslash$#1}}
\def\env#1{\texttt{\color{blue}\footnotesize #1}}
\definecolor{deepblue}{rgb}{0,0,0.5}
\definecolor{deepred}{rgb}{0.6,0,0}
\definecolor{deepgreen}{rgb}{0,0.5,0}
\definecolor{halfgray}{gray}{0.55}
\lstset{
  basicstyle=\ttfamily\small,
  keywordstyle=\bfseries\color{deepblue},
  emphstyle=\ttfamily\color{deepred}, % Custom highlighting style
  stringstyle=\color{deepgreen},
  numbers=left,
  numberstyle=\small\color{halfgray},
  rulesepcolor=\color{red!20!green!20!blue!20},
}
%%%%%% BOX %%%%%%%%%%%%%%%%
\usepackage{fancybox}
\usepackage{varwidth}
\usepackage{subcaption}
\hfsetbordercolor{blue!50!black}
%%%%%% PGFPLOTS %%%%%%%%%%%%
\usepackage{pgfplots}
\definecolor{mygreen}{RGB}{28,172,0} % color values Red, Green, Blue
\definecolor{mylilas}{RGB}{170,55,241}
\definecolor{BgYellow}{HTML}{FFF59C}
\definecolor{FrameYellow}{HTML}{F7A600}
\usetikzlibrary{spy}
\usepgfplotslibrary{fillbetween}
\usetikzlibrary{patterns, matrix, positioning}
\usetikzlibrary{arrows.meta, patterns.meta}
\usepackage[most]{tcolorbox}
\tcbset{highlight math style={enhanced,colframe=red,colback=red!10!white,boxsep=0pt,sharp corners,
  equal height group=C,
  minimum for equal height group=C:1.5cm,
  valign=center,
}}
\usepackage{tikz}

\begin{document}
\begin{frame}{Les précèdents travaux}
  \begin{columns}
    \begin{column}{.5\linewidth}
      \begin{tikzpicture}
        \node[draw=blue,text width=.95\linewidth,align=center] (a) at (0,0) {Plusieurs décades observées};
        \node[draw=blue,text width=.95\linewidth,align=center] (b) at (0,-3) {Confirmation de la loi d'échelle};
        \draw[->] (a) -- (b);
      \end{tikzpicture}
    \end{column}
    \begin{column}{.5\linewidth}
      \begin{figure}
        \subfloat{{\includegraphics[height=0.55\textheight,width=\linewidth]{example-image}}}%
      \end{figure}
    \end{column}
  \end{columns}
\end{frame}
\end{document}

• Thank you for your help. Is there a way to make the boxes have the same width too? Also, I have edited my original post in order to show you another attempt using \tcolorbox. Unfortunately, I am unable to draw an arrow between the 2 boxes: could you tell me where I am wrong? Thanks – Wiss, Sep 23 at 8:12
• You can set the text width to make them have the same width. As for the non-compilable code fragment you added to your question: the \draw macro does not belong in the normal content of the box. See tex.stackexchange.com/a/657273/36296 where the arrows are drawn. – Sep 23 at 8:17
• Thank you! I was not writing it correctly, so it was entirely my fault... – Wiss, Sep 23 at 8:31
https://www.calculus-online.com/exercise/4444
# Analytical Geometry – Calculate a point of intersection between a line and a plane - Exercise 4444

Exercise

Calculate the intersection point of the line

$$\frac{x+4}{1}=\frac{y-1}{2}=\frac{z+1}{-1}$$

and the plane

$$2x+3y-z=5$$

$$(-3,3,-2)$$
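The source states only the final point, so here is a short derivation added for completeness. Writing the line in parametric form with parameter $t$:

$$x=-4+t,\qquad y=1+2t,\qquad z=-1-t$$

Substituting into the plane equation:

$$2(-4+t)+3(1+2t)-(-1-t)=9t-4=5\quad\Rightarrow\quad t=1$$

so the intersection point is $(-3,3,-2)$, matching the stated answer.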
https://solvedlib.com/n/suppose-the-heights-of-18-vear-oldpproximately-normaiy-oistrouted,7981732
# Suppose the heights of 18-year-old men are approximately normally distributed with mean 63 inches and standard deviation … inches

###### Question:

Suppose the heights of 18-year-old men are approximately normally distributed with mean 63 inches and standard deviation … inches. USE SALT.

(a) What is the probability that an 18-year-old man selected at random is between 68 and 70 inches tall? (Round your answer to four decimal places.)

(b) If a random sample of twenty-eight 18-year-old men is selected, what is the probability that the mean height is between 68 and 70 inches? (Round your answer to four decimal places.)

Compare your answers to parts (a) and (b). Is the probability in part (b) much higher? Why would you expect this?

- The probability in part (b) is much higher because the mean is smaller for the distribution.
- The probability in part (b) is much higher because the mean is larger for the distribution.
- The probability in part (b) is much higher because the standard deviation is larger for the distribution.
- The probability in part (b) is much lower because the standard deviation is smaller for the distribution.
- The probability in part (b) is much higher because the standard deviation is smaller for the distribution.
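For illustration, such probabilities can be computed with scipy as sketched below. The numbers here are placeholders: the mean and standard deviation are garbled in the scraped text (and the printed mean of 63 sits oddly with the part (b)/(c) comparison), so μ = 68 and σ = 3 are assumed values chosen only to make the comparison behave as the answer choices suggest.

```python
from scipy.stats import norm

mu = 68      # ASSUMED mean height (inches); the source value is garbled
sigma = 3.0  # ASSUMED standard deviation (inches); missing from the source
n = 28       # sample size from part (b)

# (a) P(68 < X < 70) for one randomly selected 18-year-old man
p_single = norm.cdf(70, mu, sigma) - norm.cdf(68, mu, sigma)

# (b) the sample mean has standard deviation sigma / sqrt(n)
se = sigma / n**0.5
p_mean = norm.cdf(70, mu, se) - norm.cdf(68, mu, se)

print(round(p_single, 4), round(p_mean, 4))  # part (b) comes out much higher
```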
https://physics.aps.org/articles/v8/s95
Synopsis

# Multiferroic Surprise

Physics 8, s95

Electric and magnetic polarization are spontaneously produced in an unlikely material—one with a highly symmetric crystal structure.

Multiferroics are materials that inherently exhibit both magnetic and electric polarizations, making them highly prized for spintronic and other magnetoelectric applications. Multiferroicity typically derives from asymmetry in a material's crystal structure. A new report, however, finds multiferroic behavior in a so-called cubic perovskite, which is highly symmetric. The team explains their unexpected results as arising from a novel interaction between different magnetic ions.

In a multiferroic material, the magnetic polarization is usually produced by transition metal ions (like iron and nickel), while the electric polarization (or ferroelectric effect) often relies on ions that shift position through interactions with their neighbors. These shifts can produce an electric polarization, but only if the crystal structure lacks an inversion center around which the crystal is symmetric. Cubic crystals have an inversion center, so they are not expected to be multiferroic. But if the magnetization within the cubic material is asymmetric, then it may be possible to magnetically induce a ferroelectric effect.

Youwen Long of the Beijing National Laboratory for Condensed Matter Physics in China and his colleagues investigated a cubic perovskite ($\mathrm{LaMn_3Cr_4O_{12}}$). The material's two transition metal ions, manganese and chromium, exhibit magnetic spin ordering along the same direction in the lattice, and this spin pattern lacks an inversion center. When the researchers lowered the temperature to 50 K (at which point the material becomes fully magnetized), they detected electric polarization, suggesting a connection between the material's magnetism and ferroelectricity. The researchers ruled out previous multiferroic models and instead performed calculations that showed the spin-orbit coupling between the different magnetic ions could produce the electric polarization.

This research is published in Physical Review Letters.

–Michael Schirber
https://socratic.org/questions/how-do-you-find-the-first-3-non-zero-terms-for-the-expansion-of-9-x-2-1-2
# How do you find the first 3 non-zero terms for the expansion of (9+x^2)^(1/2)?

Apr 29, 2017

$(9+x^2)^{1/2} = 3 + \frac{x^2}{6} - \frac{x^4}{216} + \frac{x^6}{3888} - \ldots$

#### Explanation:

The binomial expansion of $(1+x)^n$ is $C_0^n + C_1^n x + C_2^n x^2 + C_3^n x^3 + \ldots + C_n^n x^n$, where $C_r^n = \frac{n(n-1)(n-2)\cdots(n-r+1)}{1\cdot 2\cdot 3\cdots r}$ and $C_0^n = 1$. When $n$ is a fraction, this becomes an infinite series.

Hence $(9+x^2)^{1/2}$ can be written as $9^{1/2}\left(1 + \frac{x^2}{9}\right)^{1/2}$

$= 3\left[1 + \frac{1/2}{1}\left(\frac{x^2}{9}\right) + \frac{\frac{1}{2}\left(\frac{1}{2}-1\right)}{1\cdot 2}\left(\frac{x^2}{9}\right)^2 + \frac{\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)}{1\cdot 2\cdot 3}\left(\frac{x^2}{9}\right)^3 + \ldots\right]$

$= 3\left[1 + \frac{x^2}{18} + \frac{\frac{1}{2}\cdot\left(-\frac{1}{2}\right)}{2}\,\frac{x^4}{81} + \frac{\frac{1}{2}\cdot\left(-\frac{1}{2}\right)\left(-\frac{3}{2}\right)}{6}\,\frac{x^6}{729} + \ldots\right]$

$= 3\left[1 + \frac{x^2}{18} - \frac{x^4}{648} + \frac{x^6}{11664} - \ldots\right]$

$= 3 + \frac{x^2}{6} - \frac{x^4}{216} + \frac{x^6}{3888} - \ldots$
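As a cross-check of the coefficients (added here; not part of the original answer), sympy's series expansion gives the same result:

```python
import sympy as sp

x = sp.symbols('x')
# Expand sqrt(9 + x^2) about x = 0, keeping terms below x^8.
print(sp.series(sp.sqrt(9 + x**2), x, 0, 8))
# -> 3 + x**2/6 - x**4/216 + x**6/3888 + O(x**8)
```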
https://cob.silverchair.com/jeb/article/207/4/683/15006/The-dichotomous-oxyregulatory-behaviour-of-the?searchresult=1
The dual function of appendage movement (food acquisition, ventilation) proved to be the key to explaining the peculiar oxyregulatory repertoire of the planktonic filter feeder Daphnia magna. Short-term hypoxic exposure experiments with normoxia-acclimated animals under varying food concentrations revealed a dichotomous response pattern with a compensatory tachycardia under food-free conditions and a ventilatory compensation prevailing under food-rich conditions. Food-free, normoxic conditions resulted in maximum appendage beating rates (fa) and half-maximum heart rates (fh), which restricted the scope for oxyregulation to the circulatory system. Food-rich conditions (10^5 algal cells ml^-1), on the contrary, had a depressing effect on fa whereas fh increased to 83% of the maximum. In this physiological state, D. magna was able to respond to progressive hypoxia with a compensatory increase in ventilation. A conceptual and mathematical model was developed to analyse the efficiency of ventilatory and circulatory adjustments in improving oxygen transport to tissue. Model predictions showed that an increase in perfusion rate was most effective under both food-free and food-rich conditions in reducing the critical ambient oxygen tension (PO2crit) at which oxygen supply to the tissue started to become impeded. By contrast, a hypothetical increase in ventilation rate had almost no effect on PO2crit under food-free conditions, indicating that appendage movement is driven by nutritive rather than respiratory requirements. However, the model predicted a moderate reduction of PO2crit by hyperventilation under food-rich conditions. Since the regulatory scope for an adjustment in fh was found to be limited in D. magna under these conditions, the increase in ventilation rate is the means of choice for a fed animal to cope with short-term, moderate reductions in ambient oxygen availability. Under long-term and more severe hypoxic conditions, however, the increase in the concentration and oxygen affinity of haemoglobin represents the one and only measure for improving the transport of oxygen from environment to cells.
The underlying physiological mechanisms that allow the animal to maintain oxygen uptake under environmental hypoxia range from short-term adjustments at the systemic level (Paul et al.,1997; Pirow et al.,2001) to long-term changes in the concentration and oxygen-binding characteristics of haemoglobin (Hb; Fox et al., 1951; Kobayashi and Hoshi, 1982; Kobayashi et al.,1988; Zeis et al., 2003a,b). Interestingly, the acute systemic responses to progressive, moderate hypoxia seem to deviate from those of other oxyregulating water breathers. Whereas fish (Dejours, 1981; Randall et al., 1997) or decapod crustaceans (McMahon and Wilkens,1975; Taylor,1976; Dejours and Beekenkamp,1977; Herreid,1980; Wheatly and Taylor,1981) typically increase ventilation while keeping cardiac output(= heart rate × stroke volume) more or less constant, the situation seems to be reversed in D. magna. Recent studies(Paul et al., 1997; Pirow et al., 2001) have shown that the heartbeat accelerates without notable changes in stroke volume(compensatory tachycardia) whereas the movements of the thoracic appendages,whose ventilatory function has been demonstrated experimentally(Pirow et al., 1999a), remain almost constant. It appears that hyperventilation is an inappropriate response for D. magna to compensate for a reduction in PO2amb. This idea is in line with the fact that D. magna is able to increase the concentration of Hb in the haemolymph by more than 10 times when exposed to chronic hypoxia (Kobayashi and Hoshi, 1982). Both responses, the tachycardia and the elevation of Hb concentration, compensate for the reduction in ambient oxygen availability by increasing the oxygen transport capacity of the circulatory system(Pirow et al., 2001; Bäumer et al., 2002). This suggests that the circulatory system rather than the ventilatory system is the limiting and controlling step of the oxygen transport cascade from environment to cell. The reason for the suggested absence of a ventilatory controllability of the oxygen transport cascade could lie in the filter-feeding mode of life. The rhythmical beating of the thoracic appendages has not only a ventilatory function but also serves an important non-respiratory need: food acquisition. The third and fourth limb pairs are equipped with fine-meshed filter combs that enable Daphnia to retain food particles suspended in the ambient medium (Fryer, 1991). Since food particles can be highly diluted in the natural environment, the rate of medium flow required to assure an adequate nutrition could exceed the rate necessary to satisfy the oxygen demand of the animal, as is supposed for other filter feeders such as sponges, lamellibranches and ascidians(Dejours, 1981). The dual function of appendage movement presumably provides the key to explaining the peculiar oxyregulatory behaviour of D. magna. However,if nutritive requirements rather than respiratory needs drive appendage movement under conditions of limited food availability, what controls this activity under excess food conditions? Several studies have shown that Daphnia spp. exhibits close to maximum appendage beating rates when there is little or no food available, whereas high food concentrations(≥104 unicellular algae ml-1) effect a pronounced deceleration of appendage movement(McMahon and Rigler, 1963; Burns, 1968; Porter et al., 1982). 
Since the latter effect is inevitably associated with a reduction in ventilatory power, and since the oxygen demand increases as a consequence of the activation of digestive processes(Lampert, 1986; Bohrer and Lampert, 1988)despite lower energetic expenditures for appendage movement(Philippova and Postnov,1988), it is possible that the oxyregulatory responses exhibited under these conditions deviate from those described so far(Paul et al., 1997; Pirow et al., 2001). The aim of the present paper is to analyse the oxyregulatory repertoire of D. magna and to provide a causal mechanistic explanation for its peculiar oxyregulatory behaviour. These issues were tackled by an experimental approach, in which the systemic responses to declining PO2amb were examined under food-free and food-rich conditions, as well as by the use of a conceptual and mathematical model, which made it possible to predict the efficiency of ventilatory and circulatory adjustments in improving oxygen transport to tissue. ### Animals, experimental conditions and statistical analysis Water fleas (Daphnia magna Straus) were reared under normoxic conditions (80-95% air saturation, 19.5-21.5°C) as described previously(Pirow et al., 2001). Animals examined under food-free (food-deprived group) and food-rich conditions(food-provided group) had body lengths of 2.76±0.17 mm (mean ± s.d.; N=5) and 2.60±0.18 mm (N=11),respectively, with 0-8 parthenogenetic eggs or embryos in the brood chamber. The slight difference in the mean body length of both groups was not statistically significant (unpaired two-tailed t-test: t=-1.69, d.f.=14, P=0.11). The lower number of animals in the food-deprived group was regarded to be sufficient for comparative purposes since similar experiments have already been done before(Paul et al., 1997; Pirow et al., 2001). In order to measure heart (fh) and appendage beating rate (fa), single animals were tethered by gluing their posterior apical spine to a 1-cm-long synthetic brush-hair with adhesive(histoacryl; B. Braun Melsungen AG, Melsungen, Germany). The animal was positioned lateral-side-down with the opposite side of the brush-hair and one of the large antennae glued onto a cover slip. The cover slip with the tethered animal was transferred into a transparent perfusion chamber(Paul et al., 1997) with the head orientated against the direction of the medium flow. The chamber was sealed and placed onto the stage of an inverted video microscope (Zeiss Axiovert 100; Carl Zeiss, Oberkochen, Germany). While keeping the animal under infrared illumination (>780 nm), the frequency of the periodic movements of the heart and the thoracic appendages were automatically determined as described in detail elsewhere (Pirow et al., 2001). The experimental chamber was perfused with culture medium (M4; Elendt and Bias, 1990) at a flow rate of 5 ml min-1. During the experiments, the oxygen tension of the medium was lowered gradually from normoxia (21 kPa) to severe hypoxia(<1.5 kPa) with a duration of five minutes for each step. This time interval was sufficient for the animal to attain a new stable level of fh and fa(Paul et al., 1997; Pirow et al., 2001). Different levels in oxygen tension were obtained by using two computer-driven peristaltic pumps (Gilson Minipuls 3; ABIMED, Langenfeld, Germany) that mix normoxic and anoxic media at different ratios(Freitag et al., 1998). Both media were prepared by equilibration with air or with a gas composed of 99.95%N2 and 0.05% CO2. 
The oxygen tension of the perfusion medium was measured behind the experimental chamber using a polarographic electrode (WTW Oxi 92; Weilheim, Germany). In the experiment with high food concentration, the unicellular green alga Scenedesmus subspicatus was added to both reservoirs at a final concentration of 10^5 cells ml^-1. This concentration had been reported to effect a depression of fa in D. magna (Porter et al., 1982), which was confirmed in a separate experiment (Fig. 1; Table 1). The algal stock solution was prepared by centrifuging the algae at 2000 g (5 min, 4°C) and resuspending the algal pellet in filtered culture medium (cellulose acetate filter; pore size, 0.45 μm). The concentration of algae in the stock solution was determined using a Neubauer counting chamber. The stock solution was then appropriately diluted and kept in complete darkness.

Fig. 1. Effect of increasing food concentrations on (A) appendage beating rate (fa) and (B) heart rate (fh) of D. magna (2.68±0.22 mm long) at normoxic conditions. Data are given as means ± s.d. (N=4 except for 5.6×10^4 cells ml^-1, where N=2). A repeated-measures ANOVA was performed for all food levels except 5.6×10^4 cells ml^-1. Neither the mean fa (F=49.9, groups d.f.=6, remainder d.f.=18, P<0.001) nor the mean fh (F=28.8, groups d.f.=6, remainder d.f.=18, P<0.001) were the same in animals on all seven food levels. The results of multiple comparisons among pairs of means are shown in Table 1.

Table 1. Results of the Tukey multiple comparison testing among the mean appendage beating rates (fa) and the mean heart rates (fh) of the seven food levels

| Food level (cells ml^-1) | Mean fa (min^-1) | Food level (cells ml^-1) | Mean fh (min^-1) |
| --- | --- | --- | --- |
| 1 000 000 | 97 | 1 000 | 178 |
| 100 000 | 129 | 179 | 259 |
| 3 200 | 195 | 32 000 | 262 |
| 10 000 | 212 | 1 000 | 282 |
| 32 000 | 246 | 3 200 | 313 |
| 100 000 | 264 | 10 000 | 339 |
| 1 000 000 | 269 | | |

All seven means of fa and fh, respectively, were arranged in order of increasing magnitude, and the vertical lines beside them represent non-significant sets of means (Sokal and Rohlf, 1995).

All experiments were carried out at 20°C. Animals were allowed to acclimate to the experimental conditions for 50 min before starting the experiment. fh and fa were analysed for the last minute of each step in oxygen tension. Data were expressed as means ± s.d., with N indicating the number of animals examined. For each experiment, in which multiple measurements were made on the same animal under various treatment levels (either food concentration or oxygen partial pressure), differences in mean values (fh and fa) of the different treatment levels were assessed using a repeated-measures analysis of variance (repeated-measures ANOVA; Zar, 1999).
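For readers who want to reproduce this style of analysis, the sketch below shows a repeated-measures ANOVA followed by Tukey comparisons in Python. The numbers are toy values loosely shaped like Fig. 1A rather than the study's data, and note that statsmodels' pairwise_tukeyhsd treats the groups as independent samples, which simplifies the repeated-measures procedure used here.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Toy data: four animals (subjects), each measured at three food levels.
df = pd.DataFrame({
    "animal": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "food":   ["1e3", "1e4", "1e5"] * 4,
    "fa":     [260, 215, 130, 250, 210, 125, 270, 220, 135, 255, 208, 128],
})

# Repeated-measures ANOVA: does mean fa differ among food levels?
print(AnovaRM(df, depvar="fa", subject="animal", within=["food"]).fit())

# Multiple comparisons among the food-level means.
print(pairwise_tukeyhsd(df["fa"], df["food"], alpha=0.05))
```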
Statistical differences were considered significant at P<0.05. In the case of a statistically significant difference, multiple comparisons (Tukey test; Zar, 1999) among pairs of means using an experimentwise error rate of 0.05 were performed to determine between which means differences exist.

### General description of the conceptual model of oxygen transport

Animals with a body size in the millimetre range are distinguished from their larger counterparts by short transport distances from the body surface to the central body regions. Since diffusive processes are effective for short distances only, millimetre-sized animals can rely to a greater extent on diffusion for providing peripheral tissues (i.e. tissues close to the body surface) with oxygen directly from the ventilated or non-ventilated ambient medium, whereas internal convection is, in principle, only needed to deliver oxygen to the more centrally located tissues, which are too distant from the periphery to be sufficiently supplied by diffusion. Such a pathway deviates somewhat from that of the basic vertebrate model (Taylor and Weibel, 1981; Piiper, 1982; Weibel, 1984; Shelton, 1992), where the transport of oxygen from the environment to the cells is thought to occur along a linear sequence of alternating convection and diffusion steps (ventilatory convection, diffusion across the oxygen-permeable integument, circulatory convection, diffusion in the tissue). The proposed model for the diffusive-convective oxygen transport in Daphnia magna incorporates both a ventilatory-circulatory transport of oxygen to the centrally located tissues as well as a diffusive supply of peripheral tissues directly from the respiratory medium (Pirow, 2003). To keep the mathematical formulation of the oxygen transport cascade as simple as possible, the complex body shape of D. magna (Fig. 2A) is reduced to a cylindrical trunk, which is enveloped by a hollow cylinder representing the carapace (Fig. 2B). The carapace consists of an outer and an inner wall that both enclose a haemolymph space, the carapace lacuna. The cylindrical trunk is further assumed to be composed of a peripheral tissue layer, a haemolymph space (trunk lacuna) and a central tissue cylinder (Fig. 2B). The respiratory medium flows through the space between the carapace and the trunk in a posterior direction while oxygen is released both into the carapace lacuna and the trunk. This design takes into account that the feeding current of D. magna is an important pathway for oxygen (Pirow et al., 1999a) and that the inner wall of the carapace is a significant site of oxygen uptake (Pirow et al., 1999b). Similar to the real situation, the medium flow and haemolymph flow in the carapace lacuna are in concurrent orientation to each other. Leaving the carapace lacuna, the oxygen-rich haemolymph enters the haemolymph space of the trunk and flows in an anterior direction while oxygen is released into both tissue compartments. Reaching the anterior position, oxygen-poor haemolymph then re-enters the carapace lacuna. The circulation of haemolymph takes place in a single circuit that, of course, is a simplification compared with the real situation, where the haemolymph flow branches into subcircuits (Pirow et al., 1999b). The oxygen partial pressure of the inspiratory and expiratory medium is denoted by Pin and Pex (kPa), respectively, whereas that of the haemolymph entering and leaving the trunk is denoted by Pa and Pv, respectively (Fig. 2B).

Fig. 2.
(A) Dorsal view of the microcrustacean Daphnia magna showing the medium flow pattern (white arrows) and the circulatory pattern (black arrows). A dorsal piece of the left carapace valve (chequered area) was removed (for details see Pirow et al., 1999b). (B) Conceptual model for oxygen transport in D. magna based on a cylinder-within-a-tube arrangement. Medium flows through the space between the carapace and the trunk in a posterior direction (open arrows) while oxygen is released both into the carapace lacuna and the peripheral tissue layer of the trunk. This tissue layer is supplied with oxygen from the medium and from a truncal haemolymph space by diffusion (broken arrows). Oxygenated haemolymph leaves the double-walled carapace and then enters the truncal haemolymph space (solid arrows). While flowing in an anterior direction, oxygen diffuses from this haemolymph space both into the coaxial tissue cylinder and the cortical tissue layer (broken arrows). Pin, Pex, inspiratory and expiratory oxygen partial pressures, respectively; Pa and Pv, oxygen partial pressures of the haemolymph entering and leaving the trunk, respectively.

### Model assumptions

The following simplifying assumptions are made in the model. (1) Diffusion of oxygen in all compartments (tissue, haemolymph and medium) and across compartment interfaces is only in a radial direction. Axial diffusion is ignored in order to reduce mathematical complexity. (2) Oxygen diffusion across the tissue-medium and medium-haemolymph interfaces is impeded by cuticular barriers of the same permeability. (3) The outer wall of the carapace is assumed to be impermeable to oxygen. (4) Axial convection occurs only in the haemolymph and medium compartments. (5) Convective flows have velocity profiles that are uniform in respect to the radial axis. (6) The mixing of haemolymph leaving the lacunae at the bases of the cylindrical model as well as the re-entrance of the mixed haemolymph into destined lacunae is assumed to occur without a time delay. (7) Haemoglobin as the oxygen carrier in the haemolymph is not considered in order to reduce mathematical complexity. (8) The volume-specific oxygen consumption rate is assumed to be constant throughout the tissue compartments.
### Mathematical formulation and derivation of the numerical solution

Based on the assumptions made in the previous section, the following general oxygen transport equation (Groebe and Thews, 1992) accounts for radial diffusion with axial convection and oxygen consumption: $\ \frac{\partial P}{\partial t}=D\left(\frac{\partial^{2}P}{\partial r^{2}}+\frac{1}{r}\frac{\partial P}{\partial r}\right)-{\nu}\frac{\partial P}{\partial h}-\frac{a}{{\alpha}}.$ 1 This equation derives from Fick's second law of diffusion (Crank, 1975), extended by two terms for convective oxygen transport and oxygen consumption. P (kPa) is the oxygen partial pressure, t (s) and r (mm) are the time and the radial coordinate, respectively, α (nmol mm-3 kPa-1) is the solubility coefficient for oxygen, and a (nmol s-1 mm-3) represents the volume-specific oxygen consumption rate of pure tissue. D (mm2 s-1) is the diffusion coefficient for oxygen, whereas h (mm) and v (mm s-1) represent the axial coordinate and the convective velocity, respectively. The solution of the partial differential equation 1 is approximated by numerical methods, which requires dividing the whole cylindrical body into discrete volume elements. The cylindrical model of height (h0) and radius (r0) is divided in the axial direction into Nax equal intervals of length Δh and in the radial direction into Nrad+1 intervals of length Δr and 0.5Δr, respectively (Fig. 3). This subdivision yields two different kinds of coaxial volume elements: solid and hollow cylinders. As a consequence of this discretization, the radii of the compartment interfaces (e.g. the tissue-haemolymph interface) have to be rounded to multiples of Δr. Since the whole cylindrical body is radially symmetrical, it is sufficient to further consider only that region of the median plane that is covered by 0...h0 and 0...r0 (Fig. 3). In this view, each volume element is represented by a discrete grid point. The axial and radial coordinates of the grid points are (j+0.5)Δh and iΔr, respectively, where the indices j and i are integers with j=0, 1,..., Nax-1 and i=0, 1,..., Nrad. Fig. 3. Subdivision of the cylindrical model of radius r0 and height h0 for numerical analysis. This subdivision yields coaxial cylindrical and hollow-cylindrical volume elements of height Δh with radial extensions being multiples of 0.5Δr. The numerical analysis aims to determine the oxygen partial pressures for the discrete set of grid points (filled circles) representing the volume elements. The axial and radial coordinates of the points are (j+0.5)Δh and iΔr, where the indices j and i are integers with j=0,..., Nax-1 and i=0,..., Nrad. The oxygen partial pressure (Pj,i) of a representative volume element (white rectangle) is affected by diffusive (broken arrows) and convective (solid arrows) exchange processes with the adjacent volume elements. The hatched areas represent those fractions of the respective volume elements that are shifted to the left by convection during the time interval Δt. The white line on the left exemplifies the radial position of a compartment interface that has to be rounded to a multiple of Δr.
By using the Taylor series technique (Faires and Burden, 1993; Crank, 1975) for solving Fick's second law of diffusion, the following explicit finite-difference solution of equation 1 is obtained for all grid points not coinciding with compartment interfaces and excluding the central and outermost grid points at i=0 and i=Nrad, respectively: $\ {\Delta}P_{\mathrm{j,i}}=\frac{D{\Delta}t}{2i({\Delta}r)^{2}}[(2i+1)P_{\mathrm{j,i}+1}-4iP_{\mathrm{j,i}}+(2i-1)P_{\mathrm{j,i}-1}]+\frac{{\nu}{\Delta}t}{{\Delta}h}(P_{\mathrm{k,i}}-P_{\mathrm{j,i}})-\frac{a{\Delta}t}{{\alpha}}.$ 2 This equation makes it possible to calculate the change in oxygen partial pressure (ΔPj,i) at the grid point referenced by j, i during the time interval Δt. Pj,i represents the oxygen partial pressure at the representative grid point at time point t. The respective oxygen partial pressures of the neighbouring points at the radially inferior and superior positions are Pj,i-1 and Pj,i+1 (Fig. 3). Pk,i, relevant only in the case of axially directed convection, represents the oxygen partial pressure of the neighbouring point at the upstream position. The oxygen partial pressure of the downstream neighbouring point does not enter the convection term, because there is no gradient in the axial direction within the volume elements (axial diffusion is ignored). For grid points at the bases of the cylindrical model (at j=0 and j=Nax-1), Pk,i is appropriately replaced by Pin, Pa or Pv. From the general solution given in equation 2, three special cases may easily be derived to describe the oxygen transport in the tissue compartments (D=DT, v=0, α=αT), in the haemolymph compartments (D=DH, v=vHT or v=vHC, α=αH, a=0) and in the medium compartment (D=DM, v=vM, α=αM, a=0). In these special cases, the solubility and diffusion coefficients assume the specific values of the respective compartments (tissue: αT, DT; haemolymph: αH, DH; medium: αM, DM). After having derived the balance equations for the oxygen transport within the different compartments, it remains to describe the oxygen transfer across compartment interfaces as well as the changes in oxygen partial pressure at the central and outermost grid points. For the central grid points (at i=0) belonging to the tissue compartment, the following approximation is derived from Crank (1975): $\ {\Delta}P_{\mathrm{j},0}=4\frac{D_{\mathrm{T}}{\Delta}t}{({\Delta}r)^{2}}(P_{\mathrm{j},1}-P_{\mathrm{j},0})-\frac{a{\Delta}t}{{\alpha}_{\mathrm{T}}},$ 3 where Pj,0 and Pj,1 are the oxygen partial pressures at the grid points referenced by i=0 and i=1, respectively. A somewhat more complicated mathematical formulation is required to calculate the changes in oxygen partial pressure at grid points located at compartment interfaces with an additional diffusion barrier (cuticle).
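Before turning to the interface treatment, the update rule of equation 2 can be made concrete in code. The following Octave/MATLAB sketch is purely illustrative (the paper does not publish its code); the function name and argument names are ours:

```matlab
% interior_update.m -- illustrative implementation of equation 2 for one
% interior grid point (no interface; 0 < i < Nrad). Pji is P(j,i);
% Pjip1/Pjim1 are the radial neighbours P(j,i+1)/P(j,i-1); Pki is the
% axially upstream neighbour P(k,i).
function dP = interior_update(i, Pji, Pjip1, Pjim1, Pki, D, v, a, alpha, dr, dh, dt)
  diffusion   = D*dt/(2*i*dr^2) * ((2*i+1)*Pjip1 - 4*i*Pji + (2*i-1)*Pjim1);
  convection  = v*dt/dh * (Pki - Pji);  % upwind scheme; v = 0 in tissue
  consumption = a*dt/alpha;             % a = 0 in haemolymph and medium
  dP = diffusion + convection - consumption;
end
```

In the full scheme this update would be evaluated for every interior grid point at each time step, with the special cases of equations 3-7 handling the centreline, the interfaces and the outermost points, and the iteration repeated until the quasi-steady-state criterion described below is met.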
In such a case, a grid point referenced by j, i is characterized by two variables, $$P_{\mathrm{j,i}}^{\mathrm{u}}$$ and $$P_{\mathrm{j,i}}^{\mathrm{o}}$$ (kPa), which represent the oxygen partial pressure at the inner and outer side of the infinitesimally thin diffusion barrier. The following example gives the specific solution for the tissue-medium interface with diffusion barrier: $\ {\Delta}P_{\mathrm{{\ }j,i}}^{\mathrm{u}}=\frac{{\Delta}t}{V_{\mathrm{{\ }i}}^{\mathrm{u}}{\alpha}_{\mathrm{T}}}\left[\frac{B_{\mathrm{i}-0.5}D_{\mathrm{T}}{\alpha}_{\mathrm{T}}}{{\Delta}r}(P_{\mathrm{j,i}-1}-P_{\mathrm{{\ }j,i}}^{\mathrm{u}})+B_{\mathrm{i}}g(P_{\mathrm{{\ }j,i}}^{\mathrm{o}}-P_{\mathrm{{\ }j,i}}^{\mathrm{u}})-V_{\mathrm{{\ }i}}^{\mathrm{u}}a\right],$ 4 $\ {\Delta}P_{\mathrm{{\ }j,i}}^{\mathrm{o}}=\frac{{\Delta}t}{V_{\mathrm{{\ }i}}^{\mathrm{o}}{\alpha}_{\mathrm{M}}}\left[\frac{B_{\mathrm{i}+0.5}D_{\mathrm{M}}{\alpha}_{\mathrm{M}}}{{\Delta}r}(P_{\mathrm{j,i}+1}-P_{\mathrm{{\ }j,i}}^{\mathrm{o}})+B_{\mathrm{i}}g(P_{\mathrm{{\ }j,i}}^{\mathrm{u}}-P_{\mathrm{{\ }j,i}}^{\mathrm{o}})+A_{\mathrm{{\ }i}}^{\mathrm{o}}{\nu}_{\mathrm{M}}{\alpha}_{\mathrm{M}}(P_{\mathrm{j}-1,\mathrm{i}}-P_{\mathrm{{\ }j,i}}^{\mathrm{o}})\right].$ 5 The parameters Bi-0.5, Bi, Bi+0.5, $$A_{\mathrm{i}}^{\mathrm{o}}$$, $$V_{\mathrm{{\ }i}}^{\mathrm{u}}$$ and $$V_{\mathrm{{\ }i}}^{\mathrm{o}}$$ are explained graphically in Fig. 4, whereas g (nmol s-1 mm-2 kPa-1) represents the permeability of the cuticular diffusion barrier. The specific solution for the medium-haemolymph interface with diffusion barrier (i.e. the inner carapace wall) may be obtained by appropriately adapting equations 4 and 5. Fig. 4. Oxygen transfer across a compartment interface with an additional diffusion barrier (cuticle). The grid point referenced by the indices i and j is located on an infinitesimally thin diffusion barrier separating the tissue from the medium compartment. This grid point is characterized by two variables, $$P_{\mathrm{{\ }j,i}}^{\mathrm{u}}$$ and $$P_{\mathrm{{\ }j,i}}^{\mathrm{o}}$$, which represent the oxygen partial pressure at the inner and the outer side of the circular diffusion barrier, respectively. To calculate the temporal changes in $$P_{\mathrm{{\ }j,i}}^{\mathrm{u}}$$ and $$P_{\mathrm{{\ }j,i}}^{\mathrm{o}}$$ (see equations 4, 5), the following geometrical parameters are required: the cylindrical wall areas Bi-0.5, Bi and Bi+0.5, the areas of the hollow-cylindrical bases $$A_{\mathrm{i}}^{\mathrm{u}}$$ and $$A_{\mathrm{i}}^{\mathrm{o}}$$, and the hollow-cylindrical volumes $$V_{\mathrm{{\ }i}}^{\mathrm{u}}$$ and $$V_{\mathrm{{\ }i}}^{\mathrm{o}}$$. Three example equations for calculating these parameters are given. vM, flow velocity in the medium compartment; Pj,i-1, Pj,i+1, $$P_{\mathrm{{\ }j}-1,\mathrm{i}}^{\mathrm{o}}$$, oxygen partial pressures of the neighbouring grid points; Δh and Δr, distance between two grid points in axial and radial direction.
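The example equations mentioned in the caption of Fig. 4 were part of the figure itself and are not reproduced above. They can, however, be plausibly reconstructed from standard cylindrical-shell geometry; the following sketch is our reconstruction under that assumption, not the paper's verbatim formulas:

```matlab
% Plausible reconstruction of the geometric factors of Fig. 4 for a grid
% point sitting on an interface at radius i*dr (cylindrical-shell geometry).
i = 70; dr = 0.005; dh = 0.1;                % example grid point and spacings (mm)
B_im05 = 2*pi*(i-0.5)*dr*dh;                 % wall area at radius (i-0.5)*dr
B_i    = 2*pi*i*dr*dh;                       % wall area at i*dr (the barrier)
B_ip05 = 2*pi*(i+0.5)*dr*dh;                 % wall area at (i+0.5)*dr
A_u    = pi*((i*dr)^2 - ((i-0.5)*dr)^2);     % base area of inner half-shell
A_o    = pi*(((i+0.5)*dr)^2 - (i*dr)^2);     % base area of outer half-shell
V_u    = A_u*dh;                             % volume of inner half-shell
V_o    = A_o*dh;                             % volume of outer half-shell
```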
For all compartment interfaces lacking an additional diffusion barrier, only one variable is required to describe the oxygen partial pressure at that location. The following two examples show the specific solutions for the tissue-haemolymph interface (equation 6) and the outermost grid points (at i=Nrad; equation 7): \begin{eqnarray*}&&\ {\Delta}P_{\mathrm{j,i}}=\frac{{\Delta}t}{V_{\mathrm{{\ }i}}^{\mathrm{u}}{\alpha}_{\mathrm{T}}+V_{\mathrm{{\ }i}}^{\mathrm{o}}{\alpha}_{\mathrm{H}}}\left[\frac{B_{\mathrm{i}-0.5}D_{\mathrm{T}}{\alpha}_{\mathrm{T}}}{{\Delta}r}(P_{\mathrm{j,i}-1}-P_{\mathrm{j,i}})+\frac{B_{\mathrm{i}+0.5}D_{\mathrm{H}}{\alpha}_{\mathrm{H}}}{{\Delta}r}(P_{\mathrm{j,i}+1}-P_{\mathrm{j,i}})\right.\ \\&&\left.\ +A_{\mathrm{{\ }i}}^{\mathrm{o}}{\nu}_{\mathrm{HT}}{\alpha}_{\mathrm{H}}(P_{\mathrm{j}+1,\mathrm{i}}-P_{\mathrm{j,i}})-V_{\mathrm{{\ }i}}^{\mathrm{u}}a\right],\end{eqnarray*} 6 $\ {\Delta}P_{\mathrm{j,N}_{\mathrm{rad}}}=\frac{{\Delta}t}{V_{\mathrm{N}_{\mathrm{rad}}}^{\mathrm{u}}}\left[\frac{B_{\mathrm{N}_{\mathrm{rad}}-0.5}D_{\mathrm{H}}}{{\Delta}r}(P_{\mathrm{j,N}_{\mathrm{rad}}-1}-P_{\mathrm{j,N}_{\mathrm{rad}}})+A_{\mathrm{N}_{\mathrm{rad}}}^{\mathrm{u}}v_{\mathrm{HC}}(P_{\mathrm{j}-1,\mathrm{N}_{\mathrm{rad}}}-P_{\mathrm{j,N}_{\mathrm{rad}}})\right].$ 7 The balance equations derived for all grid points were used to calculate (1) the oxygen partial pressure distribution within the model and (2) the total oxygen consumption rate as a function of PO2amb. Solutions were obtained by initially setting the oxygen partial pressure of all grid points to zero and Pin to PO2amb. The numerical calculation was started and continued until quasi steady-state conditions (ΔPj,i < 10^-6 kPa for all grid points) were reached. For Δt, a value equal to or smaller than 0.005 s was chosen, which proved to be adequate to allow the model system to approach steady-state conditions.

### Selection of parameter values

The selection of reasonable parameter values determining the geometrical extensions and functional properties of the model is a tricky step in the modelling process, especially when parameter values are not precisely known or when the geometrical model deviates in some respects from structural or physical reality. Since model parameters can depend on each other, we defined key parameters from which derived parameters were calculated according to functional relationships (Table 2). Following this approach, the radial extensions of all compartments were derived taking the following assumptions into account. (1) Volume (V) and height (h0) of the cylindrical model are 1.12 mm3 and 2.5 mm, respectively, and refer to a 2.5 mm-long D. magna with no brood in the brood chamber (Kobayashi, 1983).
(2) V comprises the tissue and haemolymph compartments. (3) The tissue fraction φ of V is 0.4 (Kobayashi, 1983). (4) The flow cross-sectional area (AM) penetrated perpendicularly by the medium flow is 0.4 mm2. (5) The thickness (Δx) of the carapace lacuna is 0.02 mm. (6) The fraction of total tissue (ξ) allocated to the central tissue cylinder is 0.25. (7) The radial distance (Δr) between two grid points is 0.005 mm.

Table 2. Values and units of parameters used in the model

| Symbol | Value | Unit | Description |
| --- | --- | --- | --- |
| Key (primary) parameters | | | |
| V | 1.12 | mm3 | Body volume (haemolymph and tissue compartments) |
| h0 | 2.5 | mm | Height of the cylindrical model |
| AM | 0.4 | mm2 | Flow cross-sectional area of the medium flow |
| Δx | 0.02 | mm | Thickness of the carapace lacuna |
| φ | 0.4 | | Tissue fraction of body volume |
| ξ | 0.25 | | Fraction of total tissue in the central tissue cylinder |
| a0 | 0.0058 | nmol s-1 mm-3 | Volume-specific O2 consumption rate (whole body) |
| M | 0.8333 | mm3 s-1 | Medium flow rate |
| H | 0.0311 | mm3 s-1 | Perfusion rate |
| αM | 0.0137 | nmol mm-3 kPa-1 | Solubility coefficient for O2 in water |
| αH | 0.0123 | nmol mm-3 kPa-1 | Solubility coefficient for O2 in haemolymph |
| αT | 0.0147 | nmol mm-3 kPa-1 | Solubility coefficient for O2 in tissue |
| DM | 0.0020 | mm2 s-1 | Diffusion coefficient for O2 in water |
| DH | 0.0015 | mm2 s-1 | Diffusion coefficient for O2 in haemolymph |
| DT | 0.0010 | mm2 s-1 | Diffusion coefficient for O2 in tissue |
| g | 0.0010 | nmol s-1 mm-2 kPa-1 | Permeability of the cuticular diffusion barrier |
| Δr | 0.005 | mm | Radial distance between two grid points |
| Δh | 0.1 | mm | Axial distance between two grid points |
| Δt | ≤0.005 | s | Time interval for numerical calculation |
| Derived (secondary) parameters | | | |
| | 0.120 | mm | Radius of the central tissue cylinder |
| | 0.280 | mm | Outer radius of the truncal haemolymph space |
| | 0.350 | mm | Outer radius of the peripheral tissue layer |
| | 0.500 | mm | Outer radius of the medium lacuna |
| r0 | 0.520 | mm | Outer radius of the carapace lacuna |
| a | 0.015 | nmol s-1 mm-3 | Volume-specific O2 consumption rate (pure tissue) |
| vM | 2.09 | mm s-1 | Flow velocity of the medium in the medium lacuna |
| vHC | 0.13 | mm s-1 | Flow velocity of the haemolymph in the carapace lacuna |
| vHT | 0.96 | mm s-1 | Flow velocity of the haemolymph in the trunk lacuna |

Data refer to a 2.5 mm-long, fasting D. magna without embryos in the brood chamber at food-free, normoxic conditions and to 20°C.
Whereas the first three assumptions are easy to comprehend, points 4-7 require a brief justification. The value for AM was derived from an experimental study (Pirow et al., 1999a; value not explicitly stated there). The value for Δx represents a rough estimate, since the thickness of the haemolymph space varies across the carapace (Dahm, 1977; Schultz and Kennedy, 1977; Fryer, 1991). The value of 0.25 assigned to ξ is exactly that fraction of a solid cylinder that is encircled by half of the cylinder radius. This choice ensured that there is a sufficiently great sink for oxygen in the central region of the model, as would be the case if haemolymph and tissue were more homogeneously distributed within the trunk. The value for Δr was chosen to divide the thin haemolymph compartment of the carapace lacuna into four intervals. The influence of these somewhat arbitrarily chosen parameter values on model behaviour is assessed by a sensitivity analysis in the Results. The remaining parameters describing the functional properties of the model refer to 20°C and a 2.5 mm-long animal in the fasting state. The convective flow velocities (vM, vHC, vHT) were derived by dividing the medium flow rate M (mm3 s-1) or the perfusion rate H (mm3 s-1) by the respective flow cross-sectional area (Rouse, 1978; e.g. vM=M/AM). M was calculated from fa (360 min-1; present study) according to functional relationships (Pirow et al., 1999a), whereas H was obtained from stroke volume (Bäumer et al., 2002) and fh (257 min-1; present study). The solubility of oxygen in water (αM) was obtained from Gnaiger and Forstner (1983), taking the salinity of 0.2‰ of the culture medium into account. The solubility of oxygen in haemolymph (αH) was assumed to be that of human plasma (58-85 g protein l-1; Christophorides et al., 1969). Values reported for the oxygen solubility in tissue (αT) vary in the range of 0.0097-0.0169 nmol mm-3 kPa-1 (Grote, 1967; Grote and Thews, 1962; Mahler et al., 1985; Thews, 1960). For the present model, a value of 0.0147 nmol mm-3 kPa-1 was chosen for the tissue compartment. Values reported for the diffusion coefficient for oxygen in water (DM) vary in the range of 0.0017-0.0025 mm2 s-1 (Bartels, 1971; Gertz and Loeschke, 1954; Goldstick and Fatt, 1970; Grote, 1967; Grote and Thews, 1962; Hayduk and Laudie, 1974; Himmelblau, 1964; St-Denis and Fell, 1971). For the present model, a value of 0.0020 mm2 s-1 was chosen. A variety of data also exist for the diffusion coefficient for oxygen in tissue (DT; corrected to 20°C if necessary), such as 0.0008-0.0020 mm2 s-1 for vertebrate skeletal or heart muscle tissue (Bentley et al., 1993; Ellsworth and Pittman, 1984; Grote and Thews, 1962; Homer et al., 1984; Mahler et al., 1985), 0.0015 mm2 s-1 for rat lung tissue (Grote, 1967) and 0.0012 mm2 s-1 for rat grey matter (Thews, 1960). For the present model, a value of 0.0010 mm2 s-1 was chosen for the tissue compartment. For haemolymph, we assumed the diffusion coefficient to be 75% of that in water, because similar relationships (70-85%) were reported for both bovine serum (Gertz and Loeschke, 1954; Yoshida and Ohshima, 1966) and an 8.5% solution of bovine serum albumin (Kreuzer, 1950).
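Two of these functional relationships can be checked directly against Table 2 with a quick Octave/MATLAB computation (values as reported in the text; the small discrepancy is rounding):

```matlab
% Consistency check of two derived parameters against Table 2.
Mdot = 0.8333;  AM = 0.4;      % medium flow rate (mm^3 s^-1), flow area (mm^2)
vM = Mdot/AM                   % = 2.083 mm s^-1 (Table 2 lists 2.09)
DM = 0.0020;                   % diffusion coefficient for O2 in water
DH = 0.75*DM                   % 75% of the water value = 0.0015 mm^2 s^-1
```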
The permeability of the cuticular diffusion barrier (g) was calculated by dividing Krogh's diffusion coefficient for oxygen in chitin (1.27×10^-6 nmol s-1 cm-1 torr-1; Krogh, 1919) by the thickness of the cuticle, for which a value of 0.001 mm was assumed (Pirow et al., 1999b). The volume-specific oxygen consumption rate of pure tissue (a) was derived by dividing the volume-specific oxygen consumption rate of the whole animal (a0) by the tissue fraction of body volume (φ). The value for a0 was obtained by dividing the respiration rate (Y) of 23.6 nmol O2 animal-1 h-1 by body volume (V). Y was calculated according to the allometric equation Y=0.3087X^0.95 (Glazier, 1991) using a dry body mass (X) of 96 μg for a 2.5 mm-long D. magna with no brood in the brood chamber (Kobayashi, 1983).

### Circulatory and ventilatory responses to hypoxia

Exposing normoxia-acclimated D. magna to food-free conditions with a gradual decline in ambient oxygen tension (PO2amb) resulted in a statistically significant increase in heart rate (fh) from 257±16 min-1 (N=5) at normoxia (20.9 kPa) to a maximum of 412±33 min-1 at 3.9 kPa, followed by a deceleration of heart beat below 2.3 kPa (Fig. 5A; Table 3). Appendage beating rate (fa) remained constant at a high level of 321-364 min-1 in the range of PO2amb from 20.9 kPa to 2.3 kPa but decreased significantly with a further reduction in PO2amb (Fig. 5B; Table 3). Fig. 5. Responses in (A,C) heart rate (fh) and (B,D) appendage beating rate (fa) to decreasing ambient oxygen tensions at food-free conditions (left; N=5) and food-rich conditions (right; 10^5 algal cells ml-1; N=11 except for 0.8 kPa, where N=4). Data are given as means ± s.d. A repeated-measures ANOVA was performed for all data. At food-free conditions, the mean fa (F=25.6, groups d.f.=9, remainder d.f.=36, P<0.001) and the mean fh (F=48.0, groups d.f.=6, remainder d.f.=36, P<0.001) were not the same in animals on all 10 oxygen levels. At food-rich conditions, the mean fa (F=3.1, groups d.f.=9, remainder d.f.=36, P=0.025) and the mean fh (F=15.4, groups d.f.=9, remainder d.f.=36, P<0.001) were not the same in animals on all 10 oxygen levels. The results of multiple comparisons among pairs of means are shown in Tables 3, 4. Table 3. At high food concentration (10^5 algal cells ml-1), progressive hypoxia induced responses in fh and fa that deviated from those observed under food-free conditions. During the initial normoxic conditions, food-provided animals had a higher fh (344±62 min-1, N=11; Fig. 5C) than those without food (257±16 min-1, N=5). In response to the progressive reduction in PO2amb, animals of both groups developed a tachycardia that was more pronounced in the food-deprived group.
The larger scope for circulatory adjustment in the food-deprived group resulted from a lower initial fh, because the maximum fh at ∼4 kPa was almost the same in both groups (412±33 min-1 vs 416±31 min-1). The increase in the mean fh of the food-provided group from 344 min-1 at normoxia (20.9 kPa) to 395-416 min-1 at hypoxia (15.3-2.0 kPa) was statistically significant (Table 4). Table 4. For PO2amb values higher than 8 kPa, the fa of food-provided animals was always lower, on average by 93 min-1, than that of animals without food. The depressing effect of the high food concentration on the normoxic fa was, however, not as great as expected from the food modulation experiment (cf. Fig. 5D and Fig. 1B). As in the food-deprived group, the reduction of PO2amb had no effect on the fa of food-provided animals in the range from 20.9 kPa to 6.8 kPa (Fig. 5D; Table 4). However, below 8 kPa, both groups showed diverging changes in mean fa. Whereas the fa of food-provided animals started to rise significantly from 265±48 min-1 at 9.5 kPa to 326±47 min-1 at 3.0 kPa (Table 4), that of the food-deprived group declined non-significantly from 360±50 min-1 at 8.8 kPa to 321±76 min-1 at 2.3 kPa (Table 3). Comparing the shape of the individual response curves for fh and fa with that of the respective mean response curves shown in Fig. 5A-D, there was always a good correspondence between individual and mean curves except for the fa of the food-provided animals. This group showed large interindividual variations in the initial, normoxic fa, ranging from 153 min-1 to 356 min-1, and, as a consequence, nonuniform responses to declining PO2amb (Fig. 6A). All animals that had attained an initial, normoxic fa lower than 260 min-1 exhibited a pronounced hypoxia-induced increase in limb beating activity, whereas those with a rate higher than 300 min-1 were hardly able to further elevate their fa (Fig. 6B). Fig. 6. (A) Individual responses (N=11) in appendage beating rate (fa; solid and broken lines) to decreasing ambient oxygen tensions at high food concentration (10^5 algal cells ml-1). The profiles were shifted vertically for clarity. Eight of 11 animals showed a hyperventilatory response (solid lines). The dotted lines indicate the two reference points, the initial normoxic level at 21 kPa and the hypoxic level of 3 kPa, which were used to assess the hypoxia-induced changes in fa. (B) Magnitude of hypoxia-induced changes in fa in relation to the initial fa prevailing before the start of the hypoxic exposure. The dotted line demarcates positive (open circles) from negative responses (filled circles).

### Efficiency of systemic adjustments predicted by the model

To estimate the efficiency of ventilatory and circulatory adjustments in improving oxygen transport to the tissue, we examined the behaviour of the conceptual model (Fig. 2B)
and determined the critical ambient oxygen tension (PO2crit) at which the rate of oxygen consumption decreased to 99% of the maximum (Fig. 7A). The data that were initially entered into the model referred to a fasting 2.5 mm-long D. magna exposed to food-free, normoxic conditions at 20°C (Table 2). For this physiological state, the numerical evaluation yielded a PO2crit of 10.1 kPa. At this PO2crit, the oxygen partial pressure distribution in the medium lacuna and the carapace lacuna revealed an almost complete equilibration of medium and haemolymph at the posterior part of the model (Fig. 8). The depression in oxygen consumption rate resulted from the formation of an anoxic corner in the anterior part of the central tissue cylinder. The convective contribution of the circulatory system to total oxygen supply above the PO2crit was 32% (grey-shaded area in Fig. 7A). A hypothetical doubling of the medium flow rate (M) had almost no effect on the PO2crit under these conditions (Fig. 7A); the PO2crit decreased by only 0.3 kPa. By contrast, a doubling of the perfusion rate (H) caused a pronounced reduction of PO2crit to 7.9 kPa. Fig. 7. Model predictions revealing the efficiency of ventilatory and circulatory adjustments. Solid curves show the predicted dependencies of the oxygen consumption rate (100% = 24 nmol h-1) upon ambient oxygen tension for the fasting state/food-free conditions (A) and the fed state/food-rich conditions (B). Both states differ from each other in the volume-specific oxygen consumption rate (a0), perfusion rate (H) and ventilation rate (M). H and M represent normoxic values. Vertical lines mark the critical ambient oxygen tensions (PO2crit) at which the rates of oxygen consumption decreased to 99% of the maximum. Below PO2crit, the central tissue cylinder experiences an inadequate supply with oxygen. The overproportional decline in oxygen consumption rate in A and B below 4 kPa and 6 kPa (bold arrows), respectively, indicates the incipient impediment of oxygen provision to the peripheral tissue layer. Horizontal arrows indicate the reductions in PO2crit obtained by hypothetically doubling either H or M. The grey-shaded areas reflect the amounts of oxygen transported by the circulatory system; the remaining white areas below the solid curves are those amounts diffusing from the respiratory medium directly into the peripheral tissue layer.
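How PO2crit is located can be sketched in a few lines; run_model() below is a hypothetical stand-in for the full finite-difference solver, assumed to return the steady-state whole-animal oxygen consumption rate at a given ambient tension (it is not part of the original paper):

```matlab
% Scan ambient O2 tension and find where consumption drops below 99% of max.
P_amb = 20.9:-0.1:0.5;                        % ambient tensions (kPa)
Mo2 = arrayfun(@(p) run_model(p), P_amb);     % hypothetical solver calls
idx = find(Mo2 < 0.99*max(Mo2), 1, 'first');  % first sub-critical point
PO2crit = P_amb(idx)                          % about 10.1 kPa (fasting state)
```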
Fig. 8. Oxygen partial pressure distribution in the median plane of the radially symmetrical model at the critical ambient oxygen tension (PO2crit) of 10.1 kPa (see Fig. 7A). The upper-case letters along the radial axis mark the different compartments, and the vertical lines indicate the compartment interfaces (A, central tissue cylinder; B, truncal haemolymph space; C, peripheral tissue layer; D, medium lacuna; E, carapace lacuna). Note the formation of the anoxic corner in the central tissue cylinder.

To assess the efficiency of systemic adjustments for a fed animal exposed to food-rich, normoxic conditions, new parameter values characterizing this changed physiological state were entered into the model. This state is characterized by an increased metabolic rate (Lampert, 1986; Bohrer and Lampert, 1988), a reduced fa and an elevated fh (cf. Figs 1, 3). The volume-specific, whole-body oxygen consumption rate (a0) was assumed to be 50% higher than that of the fasting state (see Discussion). In addition, M was halved and H was set to 135% of the rate of the fasting state, thus assuming that the observed relative changes in fa and fh translate into the same relative changes in the respective flow rates. For this physiological state, the model yielded a PO2crit of 14.3 kPa (Fig. 7B). A doubling of M reduced the PO2crit by 0.7 kPa, which was more than twice as large as the decrease determined for the fasting state. Nevertheless, the change in H was still most effective, because the doubling of H caused a reduction of PO2crit by 3.0 kPa. To assess the effect of changes in key parameters other than M and H, a sensitivity analysis was performed using the data from Table 2 (referring to the fasting state and food-free, normoxic conditions). The value of each individual key parameter was then either decreased to 50% or increased to 200% of its initial value (all others being equal) and the percentage change in PO2crit was evaluated (Fig. 9). Of all the geometrical parameters tested (h0, AM, Δx, ξ), the sensitivity was highest for h0 (the height of the cylindrical model) and for ξ, which defines the allocation of tissue to the two tissue compartments. Variations in the radial and axial grid intervals (Δr, Δh) caused only minor changes (<1%) in PO2crit. The permeability of the cuticular diffusion barrier (g) did not prove to be a limiting factor of the oxygen transport cascade, whereas changes in the diffusive properties of oxygen in tissue (αT, DT) had a great influence on PO2crit. Of all the parameters affecting the convective and diffusive transport of oxygen in the medium and haemolymph compartments, the solubility coefficient for oxygen in haemolymph (αH) had the greatest impact on PO2crit (Fig. 9).
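The sensitivity analysis itself amounts to a simple loop over the key parameters. The following sketch assumes a hypothetical helper pocrit_of() that re-runs the model for a given parameter set; it is illustrative only, not the authors' code:

```matlab
% Halve and double each key parameter in turn; report the change in PO2crit.
pars = struct('h0',2.5, 'AM',0.4, 'dx',0.02, 'phi',0.4, 'xi',0.25, ...
              'g',0.0010, 'alphaT',0.0147, 'DT',0.0010, 'alphaH',0.0123);
base = pocrit_of(pars);                       % 10.1 kPa for the fasting state
names = fieldnames(pars);
for n = 1:numel(names)
  for factor = [0.5 2.0]
    p = pars;
    p.(names{n}) = factor * p.(names{n});     % all other parameters unchanged
    fprintf('%s x%.1f: %+.1f%% change in PO2crit\n', ...
            names{n}, factor, 100*(pocrit_of(p)/base - 1));
  end
end
```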
Fig. 9. Sensitivity analysis showing the effect of individual parameter changes on the critical ambient oxygen tension (PO2crit). The initial state refers to the fasting state (Table 2) with a PO2crit of 10.1 kPa (100%). The value of each parameter (listed by its symbol along the horizontal axis) was decreased to 50% (white) and increased to 200% (grey) of its initial value (Table 2) while keeping all other parameters unchanged.

### Discussion

The present study revealed that the oxyregulating species Daphnia magna exhibits a greater flexibility in adjusting systemic functions involved in oxygen transport than has previously been assumed (Paul et al., 1997; Pirow et al., 2001). Not only the circulatory system but also the ventilatory system proved to be sensitively tuned to ambient oxygen tension (PO2amb). The extent, however, to which the convective performance of each system could be increased during progressive hypoxia depended on the initial circulatory and ventilatory states, which in turn were largely affected by food availability and the nutritional state of the animal. Under food-free conditions, the moderate reduction in PO2amb did not influence appendage beating rate (fa), which was used as a measure of ventilatory performance. Heart rate (fh), on the contrary, increased by 61%, indicating a large regulatory scope in the circulatory system under these conditions. This cardio-ventilatory response pattern was in line with previous results (Paul et al., 1997; Pirow et al., 2001). A reversed response pattern occurred when the animals had access to food in high concentration. Exposure to food-rich conditions (10^5 algal cells ml-1) instead of food-free conditions resulted in a lower mean initial fa (265 min-1 vs 360 min-1) and a higher mean initial fh (344 min-1 vs 257 min-1). In this physiological state, with an individual fa lower than 260 min-1, D. magna was able to respond to a reduction in PO2amb with a compensatory increase in ventilation. The fact that four out of 11 individuals with an fa above 300 min-1 failed to show this hyperventilatory response suggests that other, unknown factors counteracted the depressing effect of high food concentrations on fa. The hypoxia-induced tachycardia was less pronounced under food-rich conditions than under food-free conditions because of the higher initial fh. These findings clearly demonstrate that the scope for a short-term improvement of oxygen transport shifted from the circulatory to the ventilatory system. The oxyregulatory repertoire of the millimetre-sized D. magna consequently includes a systemic response pattern that is comparable to that of large-sized, physiologically advanced water breathers such as fish (Dejours, 1981; Randall et al., 1997) or decapod crustaceans (Wheatly and Taylor, 1981). The decelerating effect of high food concentrations on fa has been known for quite some time (McMahon and Rigler, 1963; Burns, 1968; Porter et al., 1982).
Above a critical food concentration of 10^4 algal cells ml-1 (Porter et al., 1982), the amount of food retained by the filter combs of the thoracic appendages exceeds the amount that the animal is able to ingest and/or digest (Rigler, 1961). The decrease in fa reduces the imbalance between food supply (the amount of food initially retained) and demand. However, the reduction in fa is obviously not sufficient to remove this imbalance completely, since a higher rate of rejection movements of the postabdominal claw is required to remove superfluous food particles from the filter apparatus (Porter et al., 1982). From the energetic point of view, one would expect a further reduced or intermittent appendage activity, which would make additional rejection movements unnecessary. However, taking the dual function of appendage movement into account, it is likely that the actual fa exhibited under these conditions represents a compromise between the need to reduce energetic expenditures for food collection and the need to satisfy the increased oxygen requirements of the fed animal. Lampert (1986) as well as Bohrer and Lampert (1988) have shown that the respiratory rate (carbon loss per time interval) of well-fed D. magna is more than twice the rate of starving animals. Taking the change in the respiratory quotient from 0.7 (starved) to 1.15 (well-fed; Lampert and Bohrer, 1984) into account, this elevation in respiratory rate corresponds to a 1.4-fold increase in oxygen consumption rate. The higher oxygen demand arises from the digestion and biochemical processing of food (Bohrer and Lampert, 1988; Philippova and Postnov, 1988), and the tissues involved in this physiological task, i.e. the digestive tract and the fat cells as a storage site of reserve mass, are to a great extent located in the central part of the trunk rather than in the peripheral body region. This is important to note because centrally located tissues are more dependent on a convective supply of oxygen via the circulatory system than are those in the periphery of the body, which can rely to a greater extent on a direct diffusive provision of oxygen from the ambient medium. Since the open circulatory system of Daphnia spp. lacks any arteries and capillaries, there is no possibility of redirecting a greater share of blood flow to these metabolically activated tissues, and an improvement of local oxygen supply can only be achieved by an increase in total perfusion rate. This could explain why the initial, normoxic fh of food-provided animals was 34% higher than that of starving animals. Besides this explanation, a circulatory compensation for the reduction in external convection cannot be excluded. In the food-modulation experiment (Fig. 1), we observed in two of four animals a complete stop of appendage movement for 1.5 min after the food concentration had been changed from 10^6 to 0 algal cells ml-1. This behavioural change, which occurred under normoxic conditions, was immediately followed by a sharp increase in fh of 20-24%, and fh remained at this high level until the animals resumed limb beating activity. The experimental findings of the present study suggest that a circulatory adjustment is the most effective measure of hypoxia adaptation in the planktonic filter feeder D. magna, at least under low food concentrations when fa is at a maximum.
The apparent inability to further increase fa under these conditions might result from biomechanical or energetic constraints such as the hydrodynamic resistance of the filter combs and the energetic costs of pumping medium. But even if such constraints did not exist, it is questionable whether an enhancement of ventilatory activity would have any beneficial effect, for example, in enabling the animal to sustain its rate of oxygen uptake at a far lower PO2amb. An experimental test to answer this question is hardly possible. However, since a large amount of physiological information is available for D. magna, this question can be approached theoretically by the use of a conceptual model. Conceptual models have proven to be a valuable complement to experimental approaches and have made it possible to analyse, for example, the transfer characteristics and transport limitations of the different gas exchange organs of vertebrates (Piiper and Scheid, 1975). Contrary to the situation in vertebrates, the millimetre-sized D. magna lacks an arrangement in which the circulatory system links distinct sites for respiratory gas exchange and tissue gas transfer (Taylor and Weibel, 1981; Piiper, 1982; Weibel, 1984; Shelton, 1992). The whole integument of D. magna is, in principle, permeable to respiratory gases, and oxygen is moved from the body surface to the tissues by diffusion and convection alike. We have therefore devised a conceptual model that takes into account a direct diffusive supply of oxygen to the peripheral tissues via the ambient medium and a convective supply to the centrally located tissues via the circulatory system. The predictions made by this model gave support to the hypothesis raised in the Introduction that the circulatory system is the limiting step of the oxygen transport cascade in D. magna. The model analysis showed that an increase in perfusion rate was most effective both under food-free (fasting state) and food-rich (fed state) conditions in reducing the critical ambient oxygen tension (PO2crit) at which oxygen supply to the tissue started to become impeded. By contrast, an increase in ventilation rate had almost no effect on PO2crit under food-free conditions but a moderate effect under food-rich conditions. Since the regulatory scope for an adjustment in heart rate was found to be limited in D. magna under food-rich conditions, an increase in ventilation rate is the means of choice for a fed animal to cope with short-term, moderate hypoxia. The improvement of oxygen supply in the animal by enhancing ventilatory flow may also include the reduction of fluid and diffusive boundary layers, an aspect that was not considered in the present model. Under chronic and more severe hypoxic conditions, however, the increase in the concentration and oxygen affinity of Hb represents the one and only measure for improving the transport of oxygen from environment to cells. In the present model, Hb was not considered as the haemolymph oxygen carrier, in order to reduce mathematical complexity. This might explain why the critical ambient oxygen concentration of 3.9 mg O2 l-1 (PO2crit=7.9 kPa), which was predicted for the fasting state with doubled perfusion rate (Fig. 7A), was higher than the critical oxygen concentrations of 1.3-3.0 mg O2 l-1 reported for filtering and respiration rates of Hb-poor D. magna and D. pulex (Kring and O'Brien, 1976; Heisey and Porter, 1977; Kobayashi and Hoshi, 1984).
The high sensitivity of the model to changes in the solubility coefficient for oxygen in the haemolymph (αH) indicates that Hb can have a pronounced effect on PO2crit, since it enhances both the convective and diffusive transport of oxygen in the haemolymph. The implementation of Hb-mediated oxygen transport is the next step when extending the model, which will then allow us to provide a causal mechanistic explanation for the expression of specific Hb-related characteristics (concentration, oxygen affinity, cooperativity) in animals of a given anatomical disposition and physiological constitution under different environmental conditions.

### List of symbols and abbreviations

- a, volume-specific oxygen consumption rate of pure tissue (nmol s-1 mm-3)
- a0, volume-specific oxygen consumption rate of the whole body (nmol s-1 mm-3)
- $$A_{\mathrm{{\ }i}}^{\mathrm{u}},{\ }A_{\mathrm{{\ }i}}^{\mathrm{o}}$$, areas of hollow-cylindrical bases (mm2)
- AM, flow cross-sectional area perpendicular to medium flow (mm2)
- Bi-0.5, Bi, Bi+0.5, cylindrical wall areas (mm2)
- D, diffusion coefficient for oxygen (mm2 s-1)
- DH, diffusion coefficient for oxygen in haemolymph (mm2 s-1)
- DM, diffusion coefficient for oxygen in medium (mm2 s-1)
- DT, diffusion coefficient for oxygen in tissue (mm2 s-1)
- fa, appendage beating rate (min-1)
- fh, heart rate (min-1)
- g, permeability of the cuticular diffusion barrier (nmol s-1 mm-2 kPa-1)
- h, axial coordinate (mm)
- h0, height of the cylindrical model (mm)
- i, j, k, indices of grid points
- Nax, number of grid points in axial direction
- Nrad, number of grid points in radial direction
- P, oxygen partial pressure (kPa)
- Pa, oxygen partial pressure of the haemolymph entering the trunk lacuna (kPa)
- Pex, oxygen partial pressure of the expiratory medium (kPa)
- Pj,i, oxygen partial pressure at the grid point referenced by j, i (kPa)
- $$P_{\mathrm{{\ }j,i}}^{\mathrm{o}}$$, oxygen partial pressure at the outer side of an infinitesimally thin diffusion barrier (kPa)
- $$P_{\mathrm{{\ }j,i}}^{\mathrm{u}}$$, oxygen partial pressure at the inner side of an infinitesimally thin diffusion barrier (kPa)
- Pin, oxygen partial pressure of the inspiratory medium (kPa)
- PO2amb, ambient oxygen tension (kPa)
- PO2crit, critical ambient oxygen tension (kPa)
- Pv, oxygen partial pressure of the haemolymph leaving the trunk lacuna (kPa)
- H, perfusion rate (mm3 s-1)
- r, radial coordinate (mm)
- r0, radius of the cylindrical model (mm)
- t, time (s)
- v, flow velocity (mm s-1)
- V, volume of the cylindrical model (mm3)
- vHC, flow velocity of the haemolymph in the carapace lacuna (mm s-1)
- vHT, flow velocity of the haemolymph in the trunk lacuna (mm s-1)
- $$V_{\mathrm{{\ }i}}^{\mathrm{u}},{\ }V_{\mathrm{{\ }i}}^{\mathrm{o}}$$, volumes of hollow cylinders (mm3)
- vM, flow velocity of the medium (mm s-1)
- M, medium flow rate (mm3 s-1)
- X, dry body mass (μg)
- Y, respiration rate (nmol animal-1 h-1)
- Δh, distance between two grid points in axial direction (mm)
- ΔPj,i, change in Pj,i during Δt (kPa)
- Δr, distance between two grid points in radial direction (mm)
- Δt, time interval (s)
- Δx, thickness of the carapace lacuna (mm)
- α, solubility coefficient for oxygen (nmol mm-3 kPa-1)
- αH, solubility coefficient for oxygen in haemolymph (nmol mm-3 kPa-1)
- αM, solubility coefficient for oxygen in medium (nmol mm-3 kPa-1)
- αT, solubility coefficient for oxygen in tissue (nmol mm-3 kPa-1)
- φ, tissue fraction of body volume
- ξ, fraction of total tissue allocated to the central tissue cylinder

### References
Bartels, H. (1971). Biological Handbooks: Respiration and Circulation (Table 14, p. 22). Bethesda, MD: Federation of American Societies for Experimental Biology.
Bäumer, C., Pirow, R. and Paul, R. J. (2002). Circulatory oxygen transport in the water flea Daphnia magna. J. Comp. Physiol. B 172, 275-285.
Bentley, T. B., Meng, H. and Pittman, R. N. (1993). Temperature dependence of oxygen diffusion and consumption in mammalian striated muscle. Am. J. Physiol. 264, H1825-H1830.
Bohrer, R. N. and Lampert, W. (1988). Simultaneous measurement of the effect of food concentration on assimilation and respiration in Daphnia magna Straus. Funct. Ecol. 2, 463-471.
Burns, C. W. (1968). Direct observations of mechanisms regulating feeding behavior of Daphnia in lakewater. Int. Revue Ges. Hydrobiol. 53, 83-100.
Christophorides, C., Laasberg, L. H. and Hedley-Whyte, J. (1969). Effect of temperature on solubility of O2 in human plasma. J. Appl. Physiol. 26, 56-60.
Crank, J. (1975). The Mathematics of Diffusion. Oxford: Oxford University Press.
Dahm, E. (1977). Morphologische Untersuchungen an Cladoceren unter besonderer Berücksichtigung der Ultrastruktur des Carapax. Zool. Jb. Anat. 97, 68-126.
Dejours, P. (1981). Principles of Comparative Respiratory Physiology. Amsterdam, New York, Oxford: Elsevier/North-Holland Biomedical Press.
Dejours, P. and Beekenkamp, H. (1977). Crayfish respiration as a function of water oxygenation. Respir. Physiol. 30, 241-251.
Elendt, B.-P. and Bias, W.-R. (1990). Trace nutrient deficiency in Daphnia magna cultured in standard medium for toxicity testing. Effects of the optimization of culture conditions on life history parameters of D. magna. Wat. Res. Biol. 24, 1157-1167.
Ellsworth, M. L. and Pittman, R. N. (1984). Heterogeneity of oxygen diffusion through hamster striated muscles. Am. J. Physiol. 246, H161-H167.
Faires, J. D. and Burden, R. L. (1993). Numerical Methods. Boston: PWS Publishing Company.
Fox, H. M., Gilchrist, B. M. and Phear, E. A. (1951). Functions of haemoglobin in Daphnia. Proc. R. Soc. B 138, 514-528.
Freitag, J. F., Steeger, H.-U., Storz, U. C. and Paul, R. J. (1998). Sublethal impairment of respiratory control in plaice (Pleuronectes platessa) larvae induced by UV-B radiation, determined using a novel biocybernetical approach. Mar. Biol. 132, 1-8.
Fryer, G. (1991). Functional morphology and the adaptive radiation of the Daphniidae (Branchiopoda: Anomopoda). Phil. Trans. R. Soc. Lond. B 331, 1-99.
Gertz, K. H. and Loeschke, H. H. (1954). Bestimmung der Diffusionskoeffizienten von H2, O2, N2 und He in Wasser und Blutserum bei konstant gehaltener Konvektion. Z. Naturforsch. 9b, 1-9.
Glazier, D. S. (1991). Separating the respiration rates of embryos and brooding females of Daphnia magna: implications for the cost of brooding and the allometry of metabolic rate. Limnol. Oceanogr. 36, 354-362.
Gnaiger, E. and Forstner, H. (1983). Polarographic Oxygen Sensors. Berlin, Heidelberg, New York: Springer Verlag.
Goldstick, T. K. and Fatt, I. (1970). Diffusion of oxygen in solutions of blood proteins. Chem. Eng. Progr. Symp. Ser. 66, 101-113.
Grieshaber, M. K., Hardewig, I., Kreuzer, U. and Pörtner, H.-O. (1994). Physiological and metabolic responses to hypoxia in invertebrates. Rev. Physiol. Pharmacol. 125, 63-147.
Groebe, K. and Thews, G. (1992). Basic mechanisms of diffusive and diffusion-related oxygen transport in biological systems: a review. 317, 21-33.
Grote, J. (1967). Die Sauerstoffdiffusionskonstanten im Lungengewebe und Wasser und ihre Temperaturabhängigkeit. Pflügers Arch. Gesamte Physiol. 324, 245-254.
Grote, J. and Thews, G. (1962). Die Bedingungen für die Sauerstoffversorgung des Herzmuskelgewebes. Pflügers Arch. Gesamte Physiol. 276, 142-165.
Hayduk, W. and Laudie, H. (1974). Prediction of diffusion coefficients for nonelectrolytes in dilute aqueous solutions. Am. Inst. Chem. Eng. J. 20, 611-615.
Heisey, D. and Porter, K. G. (1977). The effect of ambient oxygen concentration on filtering rate and respiration rate of Daphnia galeata mendotae and Daphnia magna. Limnol. Oceanogr. 22, 839-845.
Herreid, C. F. (1980). Hypoxia in invertebrates. Comp. Biochem. Physiol. A 67, 311-320.
Himmelblau, D. M. (1964). Diffusion of dissolved gases in liquids. Chem. Rev. 64, 527-550.
Homer, L. D., Shelton, J. B., Dorsey, C. H. and Williams, T. J. (1984). Anisotropic diffusion of oxygen in slices of rat muscle. Am. J. Physiol. 246, R107-R113.
Kobayashi, M. (1983). Estimation of the haemolymph volume in Daphnia magna by haemoglobin determination. Comp. Biochem. Physiol. A 76, 803-805.
Kobayashi, M. and Hoshi, T. (1982). Relationship between the haemoglobin concentration of Daphnia magna and the ambient oxygen concentration. Comp. Biochem. Physiol. A 72, 247-249.
Kobayashi, M. and Hoshi, T. (1984). Analysis of respiratory role of haemoglobin in Daphnia magna. Zool. Sci. 1, 523-532.
Kobayashi, M., Fujiki, M. and Suzuki, T. (1988). Variation in and oxygen-binding properties of Daphnia magna hemoglobin. Physiol. Zool. 61, 415-419.
Kreuzer, F. (1950). Über die Diffusion von Sauerstoff in Serumeiweislösungen verschiedener Konzentrationen. Helv. Physiol. Acta 8, 505-516.
Kring, R. L. and O'Brien, W. J. (1976). Effect of varying oxygen concentrations on the filtering rate of Daphnia pulex. Ecology 57, 808-814.
Krogh, A. (1919). The rate of diffusion of gases through animal tissue, with some remarks on the coefficient of invasion. J. Physiol. 52, 391-408.
Lampert, W. (1986). Response of the respiratory rate of Daphnia magna to changing food conditions. Oecologia (Berlin) 70, 495-501.
Lampert, W. and Bohrer, R. (1984). Effect of food availability on the respiratory quotient of Daphnia magna. Comp. Biochem. Physiol. A 78, 221-223.
Mahler, M., Louy, C., Homsher, E. and Peskoff, A. (1985). Reappraisal of diffusion, solubility, and consumption of oxygen in frog skeletal muscle, with application to muscle energy balance. J. Gen. Physiol. 86, 105-134.
McMahon, B. and Wilkens, J. (1975). Respiratory and circulatory responses to hypoxia in the lobster Homarus americanus. J. Exp. Biol. 62, 637-655.
McMahon, J. W. and Rigler, F. H. (1963). Mechanisms regulating the feeding rate of Daphnia magna Straus. Can. J. Zool. 41, 321-332.
Paul, R. J., Colmorgen, M., Hüller, S., Tyroller, F. and Zinkler, D. (1997). Circulation and respiratory control in millimetre-sized animals (Daphnia magna, Folsomia candida) studied by optical methods. J. Comp. Physiol. B 167, 399-408.
Philippova, T. and Postnov, A. L. (1988). The effect of food quantity on feeding and metabolic expenditure in Cladocera. Int. Rev. Ges. Hydrobiol. 73, 601-615.
Piiper, J. (1982). Respiratory gas exchange at lungs, gills and tissues: mechanisms and adjustments. J. Exp. Biol. 100, 5-22.
Piiper, J. and Scheid, P. (1975). Gas transport efficacy of gills, lungs and skin: theory and experimental data. Respir. Physiol. 23, 209-221.
Pirow, R. (2003). The contribution of haemoglobin to oxygen transport in the microcrustacean Daphnia magna - a conceptual approach. 510, 101-107.
Pirow, R., Bäumer, C. and Paul, R. J. (2001). Benefits of haemoglobin in the cladoceran crustacean Daphnia magna. J. Exp. Biol. 204, 3425-3441.
Pirow, R., Wollinger, F. and Paul, R. J. (1999a). Importance of the feeding current for oxygen uptake in the water flea Daphnia magna. J. Exp. Biol. 202, 553-562.
Pirow, R., Wollinger, F. and Paul, R. J. (1999b). The sites of respiratory gas exchange in the planktonic crustacean Daphnia magna: an in vivo study employing blood haemoglobin as an internal oxygen probe. J. Exp. Biol. 202, 3089-3099.
Porter, K. G., Gerritsen, J. and Orcutt, J. D. (1982). The effect of food concentration on swimming patterns, feeding behavior, assimilation, and respiration by Daphnia. Limnol. Oceanogr. 27, 935-949.
Randall, D., Burggren, W. and French, K. (1997). Eckert Animal Physiology: Mechanisms and Adaptations. New York: W. H. Freeman and Company.
Rigler, F. H. (1961). The relation between the concentration of food and feeding rate of Daphnia magna Straus. Can. J. Zool. 39, 857-868.
Rouse, H. (1978). Elementary Mechanics of Fluids. New York: Dover Publications, Inc.
Schultz, T. W. and Kennedy, J. R. (1977). Analyses of the integument and muscle attachment in Daphnia pulex (Cladocera: Crustacea). J. Submicr. Cytol. 9, 37-51.
Shelton, G. (1992). Model applications in respiratory physiology. In Oxygen Transport in Biological Systems, vol. 51, SEB Seminar Series (ed. S. Egginton and H. F. Ross), pp. 1-44. Cambridge: Cambridge University Press.
Sokal, R. R. and Rohlf, F. J. (1995). Biometry. New York: W. H. Freeman and Company.
St-Denis, C. E. and Fell, C. J. D. (1971). Diffusivity of oxygen in water. Can. J. Chem. Eng. 49, 885.
Taylor, A. (1976). The respiratory responses of Carcinus maenas. J. Exp. Biol. 65, 309-322.
Taylor, C. R. and Weibel, E. R. (1981). Design of the mammalian respiratory system. I. Problem and strategy. Respir. Physiol. 44, 1-10.
Thews, G. (1960). Ein Verfahren zur Bestimmung des O2-Diffusionskoeffizienten, der O2-Leitfähigkeit und des O2-Löslichkeitskoeffizienten im Gehirngewebe. Pflügers Arch. 271, 227-244.
Weibel, E. R. (1984). The Pathway for Oxygen. Cambridge, MA: Harvard University Press.
Wheatly, M. G. and Taylor, E. W. (1981). The effect of progressive hypoxia on heart rate, ventilation, respiratory gas exchange and acid-base status in the crayfish Austropotamobius pallipes. J. Exp. Biol. 92, 125-141.
Yoshida, F. and Ohshima, N. (1966). Diffusivity of oxygen in blood serum. J. Appl. Physiol. 21, 915-919.
Zar, J. H. (1999). Biostatistical Analysis. Upper Saddle River, NJ: Prentice Hall.
Zeis, B., Becher, B., Goldmann, T., Clark, R., Vollmer, E., Bölke, B., Bredebusch, I., Lamkemeyer, T., Pinkhaus, O., Pirow, R. and Paul, R. J. (2003a). Differential haemoglobin gene expression in the crustacean Daphnia magna exposed to different oxygen partial pressures. Biol. Chem. 384, 1133-1145.
Zeis, B., Becher, B., Lamkemeyer, T., Rolf, S., Pirow, R. and Paul, R. J. (2003b). The process of hypoxic induction of Daphnia magna hemoglobin: subunit composition and functional properties. Comp. Biochem. Physiol. B 134, 243-252.
2022-06-27 11:22:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5840381979942322, "perplexity": 6481.27976133477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103331729.20/warc/CC-MAIN-20220627103810-20220627133810-00550.warc.gz"}
http://mathhelpforum.com/pre-calculus/104581-average-rate-change.html
# Thread: average rate of change

1. ## average rate of change

Let $f(x) = 5x^2 + 2x - 3$ and let $x_0 = 1$. The average rate of change of $f$ between $x = 1$ and $x = 1.18$ equals?

2. Originally Posted by samtheman17

Let $f(x) = 5x^2 + 2x - 3$ and let $x_0 = 1$. The average rate of change of $f$ between $x = 1$ and $x = 1.18$ equals?

$f(x) = 5x^2 + 2x - 3$. You have

$f(1) = 5(1)^2 + 2(1) - 3 = 5 + 2 - 3 = 4$

Also

$f(1.18) = 5(1.18)^2 + 2(1.18) - 3 = 6.962 + 2.36 - 3 = 6.322$.

The average rate of change will be given by

$\frac{f(1.18) - f(1)}{1.18 - 1} = \frac{6.322 - 4}{0.18} = \frac{2.322}{0.18} = 12.9$.

3. ok, thank you so much! that really helped (p.s. the answer is 12.9 since 1.18 - 1 is 0.18, not 1)
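For readers who want to verify the arithmetic themselves, here is a minimal Python check of the same computation:

    # Average rate of change of f(x) = 5x^2 + 2x - 3 between x = 1 and x = 1.18.
    def f(x):
        return 5 * x**2 + 2 * x - 3

    x0, x1 = 1.0, 1.18
    print((f(x1) - f(x0)) / (x1 - x0))  # 12.9, up to floating-point rounding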
2017-02-20 07:01:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.989669144153595, "perplexity": 5164.865742915081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170425.26/warc/CC-MAIN-20170219104610-00486-ip-10-171-10-108.ec2.internal.warc.gz"}
https://dsp.stackexchange.com/questions/41291/calculating-the-phase-shift-between-two-signals-based-on-samples
# Calculating the phase shift between two signals based on samples

I am comparing two signals in MATLAB Simulink to find the phase between them. To do this I was inspired by the code found here. I have two vectors of the same size which are collections of samples of the two signals (the sampling is more than fast enough). They are sine signals with mostly the same frequency. To find the phase between them, the code below is used:

    signal_one; % A vector with values of signal 1
    signal_two; % A vector with values of signal 2
    dot_product = dot(signal_one, signal_two);
    norm_product = norm(signal_one)*norm(signal_two);
    phase_shift = acos(dot_product/norm_product); % recover the angle (see the answer below)

This code works satisfactorily, and I am able to get the phase shift from it; I just don't understand why. Can anyone show me why this gives the phase shift? A mathematical demonstration or a reference to an equation would be great.

It's an approximation, as also admitted by your statement of "satisfactory" performance. Here is why.

Let $a[n]=\cos(w_0 n)$ and $b[n]=\cos(w_0 n + \theta)$ be two sequences of size $N$, represented by $1 \times N$ row matrices $\bar{a}$ and $\bar{b}$ in a program (such as Octave or Matlab).

Treating the two sequences $a[n]$ and $b[n]$, which are represented by the row matrices $\bar{a}$ and $\bar{b}$, as components of vectors $\vec{a}$ and $\vec{b}$, we can take advantage of the dot product defined for vectors and (approximately) compute the phase angle $\theta$ between the two sinusoidal sequences.

The geometric evaluation of the dot product between two vectors of the same size is:

$$\vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}| \cos(\phi)$$

and the angle between the two vectors therefore is:

$$\cos(\phi) = \frac{\vec{a}\cdot \vec{b}}{|\vec{a}||\vec{b}|}$$

where $\phi$ is the angle between those vectors, which will turn out to be equal to $\theta$, as shown below.

The dot product between the vectors $\vec{a}$ and $\vec{b}$ can also be computed algebraically from the multiply-accumulate (MAC) sum of their components, which are contained in the elements of the matrices $\bar{a}$ and $\bar{b}$:

$$\vec{a} \cdot \vec{b} = \sum_{i=1}^{N} \bar{a}(i)\bar{b}(i)$$

The product $\bar{a}(i)\bar{b}(i)$ can be shown (using trigonometry) to be equal to the following:

$$\bar{a}(i)\bar{b}(i) = \cos(w_0 i) \cos(w_0 i + \theta) = 0.5\cos(2 w_0 i + \theta) + 0.5 \cos(\theta)$$

and the dot-product summation becomes:

$$\vec{a} \cdot \vec{b} = \sum_{i=1}^{N} \bar{a}(i)\bar{b}(i) = 0.5 N \cos(\theta) + \sum_{i=1}^{N} 0.5 \cos(2 w_0 i + \theta)$$

The absolute values $|\vec{a}|$ and $|\vec{b}|$ of the vectors $\vec{a}$ and $\vec{b}$ can also be computed from the norms of the matrices $\bar{a}$ and $\bar{b}$ as follows:

$$|\vec{a}| = \|\bar{a}\| = \Big( \sum_{i=1}^{N} \bar{a}(i)^2 \Big)^{0.5}, \qquad |\vec{b}| = \|\bar{b}\| = \Big( \sum_{i=1}^{N} \bar{b}(i)^2 \Big)^{0.5}$$

Again, expanding the squared terms results in:

$$|\vec{a}| = \Big( 0.5 \sum_{i=1}^{N} \big(1 + \cos(2 w_0 i)\big) \Big)^{0.5}, \qquad |\vec{b}| = \Big( 0.5 \sum_{i=1}^{N} \big(1 + \cos(2 w_0 i + \theta)\big) \Big)^{0.5}$$

Now, it can easily be shown that the oscillatory cosine summations are small compared to $N$, so that for large $N$ and suitable $w_0$:

$$N+\sum_{i=1}^{N} \cos(2 w_0 i) \approx N , \qquad N+\sum_{i=1}^{N} \cos(2 w_0 i + \theta) \approx N$$

hence $|\vec{a}||\vec{b}| \approx 0.5 N$. Simplifying the above equations gives the angle:

$$\cos(\phi) = \frac{\vec{a}\cdot \vec{b}}{|\vec{a}||\vec{b}|} \approx \frac{ 0.5 N \cos(\theta)}{0.5 N} = \cos(\theta)$$

$$\phi \approx \theta$$

The quality of the approximation depends on the length $N$.
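As a quick numerical sanity check of the derivation above, here is a short Python sketch (using NumPy) that mirrors the MATLAB code in the question; the recovered angle approaches the true $\theta$ as $N$ grows:

    # Estimate the phase shift between two sampled sinusoids from their
    # normalized dot product, as derived in the answer above.
    import numpy as np

    N = 10000          # number of samples; the approximation improves with N
    w0 = 0.1           # angular frequency per sample
    theta = 0.7        # true phase shift in radians
    n = np.arange(N)

    a = np.cos(w0 * n)
    b = np.cos(w0 * n + theta)

    cos_phi = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    print(np.arccos(cos_phi))  # close to 0.7 for large N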
• Thanks for the thorough explanation. But is it self-evident that the phase shift between two sine signals is the same as the angle between two vectors containing (infinite) samples of the signals? It is not evident to me. And do you have a reference for this? – Anderssh May 29 '17 at 13:38
• It's not self-evident. As the whole derivation shows, it's an approximation. – Fat32 May 29 '17 at 15:12
• My problem is not with the fact that it is an approximation, but rather that phase shift = angle between the vectors. How can this be justified? It definitely seems to be so, but I would like to see some reference to support this. Because this is the basis of your whole reasoning. – Anderssh May 30 '17 at 12:00
• No no, it's not the basis of my reasoning. Instead they happen to be equal for sinusoidal sequences, at the end of the computations after making the approximation. They are possibly related but I don't know a reference. The fact is that you can compute the phase shift $\theta$ between two sinusoidal sequences by computing the angle $\phi$ between the two vectors obtained by treating those signals as vectors. I don't have a more in-depth explanation, however. You can find it somewhere in vector space analysis books. – Fat32 May 30 '17 at 14:24

I would say that you just calculate the arccosine of the correlation at zero lag divided by the product of the norms. The phase of a signal implies a harmonic fluctuation at its frequency. The phase shift (in the strict sense) can be obtained for a definite frequency using the DFT of your signals.
2019-05-23 02:15:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.832611620426178, "perplexity": 236.49865527447975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256997.79/warc/CC-MAIN-20190523003453-20190523025453-00370.warc.gz"}
https://physics.stackexchange.com/questions/223832/superposition-of-discrete-level-and-continuum-electron-bound-and-free?noredirect=1
# Superposition of discrete level and continuum: Electron bound and free [duplicate] Superposition between discrete states of a system is widely considered in the literature, but this system, e.g., a $H$ atom, can also have a continuum in its energy spectrum. Can the state of a system be in a superposition of an energy level in the discrete part of the spectrum with one level in the continuum part (or with an interval thereof)? In other words: Can an electron in a $H$ atom be in a superposition of bound to and free from the proton?
2019-11-21 19:11:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6735326647758484, "perplexity": 345.5272771959523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670948.64/warc/CC-MAIN-20191121180800-20191121204800-00082.warc.gz"}
https://stacks.math.columbia.edu/tag/05ZJ
## 75.9 Thickenings

The following terminology may not be completely standard, but it is convenient.

Definition 75.9.1. Thickenings. Let $S$ be a scheme.

1. We say an algebraic space $X'$ is a thickening of an algebraic space $X$ if $X$ is a closed subspace of $X'$ and the associated topological spaces are equal.
2. We say $X'$ is a first order thickening of $X$ if $X$ is a closed subspace of $X'$ and the quasi-coherent sheaf of ideals $\mathcal{I} \subset \mathcal{O}_{X'}$ defining $X$ has square zero.
3. Given two thickenings $X \subset X'$ and $Y \subset Y'$ a morphism of thickenings is a morphism $f' : X' \to Y'$ such that $f(X) \subset Y$, i.e., such that $f'|_ X$ factors through the closed subspace $Y$. In this situation we set $f = f'|_ X : X \to Y$ and we say that $(f, f') : (X \subset X') \to (Y \subset Y')$ is a morphism of thickenings.
4. Let $B$ be an algebraic space. We similarly define thickenings over $B$, and morphisms of thickenings over $B$. This means that the spaces $X, X', Y, Y'$ above are algebraic spaces endowed with a structure morphism to $B$, and that the morphisms $X \to X'$, $Y \to Y'$ and $f' : X' \to Y'$ are morphisms over $B$.

The fundamental equivalence. Note that if $X \subset X'$ is a thickening, then $X \to X'$ is integral and universally bijective. This implies that

75.9.1.1
$$\label{spaces-more-morphisms-equation-equivalence-etale-spaces} X_{spaces, {\acute{e}tale}} = X'_{spaces, {\acute{e}tale}}$$

via the pullback functor, see Theorem 75.8.1. Hence we may think of $\mathcal{O}_{X'}$ as a sheaf on $X_{spaces, {\acute{e}tale}}$. Thus we obtain a canonical equivalence of locally ringed topoi

75.9.1.2
$$\label{spaces-more-morphisms-equation-fundamental-equivalence} (\mathop{\mathit{Sh}}\nolimits (X'_{spaces, {\acute{e}tale}}), \mathcal{O}_{X'}) \cong (\mathop{\mathit{Sh}}\nolimits (X_{spaces, {\acute{e}tale}}), \mathcal{O}_{X'})$$

Below we will frequently combine this with the full faithfulness result of Properties of Spaces, Theorem 65.28.4. For example the closed immersion $i_ X : X \to X'$ corresponds to the surjective map $i_ X^\sharp : \mathcal{O}_{X'} \to \mathcal{O}_ X$.

Let $S$ be a scheme, and let $B$ be an algebraic space over $S$. Let $(f, f') : (X \subset X') \to (Y \subset Y')$ be a morphism of thickenings over $B$. Note that the diagram of continuous functors

$\xymatrix{ X_{spaces, {\acute{e}tale}} & Y_{spaces, {\acute{e}tale}} \ar[l] \\ X'_{spaces, {\acute{e}tale}} \ar[u] & Y'_{spaces, {\acute{e}tale}} \ar[u] \ar[l] }$

is commutative and the vertical arrows are equivalences. Hence $f_{spaces, {\acute{e}tale}}$, $f_{small}$, $f'_{spaces, {\acute{e}tale}}$, and $f'_{small}$ all define the same morphism of topoi. Thus we may think of

$(f')^\sharp : f_{spaces, {\acute{e}tale}}^{-1}\mathcal{O}_{Y'} \longrightarrow \mathcal{O}_{X'}$

as a map of sheaves of $\mathcal{O}_ B$-algebras fitting into the commutative diagram

$\xymatrix{ f_{spaces, {\acute{e}tale}}^{-1}\mathcal{O}_ Y \ar[r]_-{f^\sharp } & \mathcal{O}_ X \\ f_{spaces, {\acute{e}tale}}^{-1}\mathcal{O}_{Y'} \ar[r]^-{(f')^\sharp } \ar[u]^{i_ Y^\sharp } & \mathcal{O}_{X'} \ar[u]_{i_ X^\sharp } }$

Here $i_ X : X \to X'$ and $i_ Y : Y \to Y'$ are the names of the given closed immersions.

Lemma 75.9.2. Let $S$ be a scheme. Let $B$ be an algebraic space over $S$. Let $X \subset X'$ and $Y \subset Y'$ be thickenings of algebraic spaces over $B$. Let $f : X \to Y$ be a morphism of algebraic spaces over $B$.
Given any map of $\mathcal{O}_ B$-algebras $\alpha : f_{spaces, {\acute{e}tale}}^{-1}\mathcal{O}_{Y'} \to \mathcal{O}_{X'}$ such that

$\xymatrix{ f_{spaces, {\acute{e}tale}}^{-1}\mathcal{O}_ Y \ar[r]_-{f^\sharp } & \mathcal{O}_ X \\ f_{spaces, {\acute{e}tale}}^{-1}\mathcal{O}_{Y'} \ar[r]^-\alpha \ar[u]^{i_ Y^\sharp } & \mathcal{O}_{X'} \ar[u]_{i_ X^\sharp } }$

commutes, there exists a unique morphism $(f, f')$ of thickenings over $B$ such that $\alpha = (f')^\sharp$.

Proof. To find $f'$, by Properties of Spaces, Theorem 65.28.4, all we have to do is show that the morphism of ringed topoi

$(f_{spaces, {\acute{e}tale}}, \alpha ) : (\mathop{\mathit{Sh}}\nolimits (X_{spaces, {\acute{e}tale}}), \mathcal{O}_{X'}) \longrightarrow (\mathop{\mathit{Sh}}\nolimits (Y_{spaces, {\acute{e}tale}}), \mathcal{O}_{Y'})$

is a morphism of locally ringed topoi. This follows directly from the definition of morphisms of locally ringed topoi (Modules on Sites, Definition 18.40.9), the fact that $(f, f^\sharp )$ is a morphism of locally ringed topoi (Properties of Spaces, Lemma 65.28.1), that $\alpha$ fits into the given commutative diagram, and the fact that the kernels of $i_ X^\sharp$ and $i_ Y^\sharp$ are locally nilpotent. Finally, the fact that $f' \circ i_ X = i_ Y \circ f$ follows from the commutativity of the diagram and another application of Properties of Spaces, Theorem 65.28.4. We omit the verification that $f'$ is a morphism over $B$. $\square$

Lemma 75.9.3. Let $S$ be a scheme. Let $X \subset X'$ be a thickening of algebraic spaces over $S$. For any open subspace $U \subset X$ there exists a unique open subspace $U' \subset X'$ such that $U = X \times _{X'} U'$.

Proof. Let $U' \to X'$ be the object of $X'_{spaces, {\acute{e}tale}}$ corresponding to the object $U \to X$ of $X_{spaces, {\acute{e}tale}}$ via (75.9.1.1). The morphism $U' \to X'$ is étale and universally injective, hence an open immersion, see Morphisms of Spaces, Lemma 66.51.2. $\square$

Finite order thickenings. Let $i_ X : X \to X'$ be a thickening of algebraic spaces. Any local section of the kernel $\mathcal{I} = \mathop{\mathrm{Ker}}(i_ X^\sharp ) \subset \mathcal{O}_{X'}$ is locally nilpotent. Let us say that $X \subset X'$ is a finite order thickening if the ideal sheaf $\mathcal{I}$ is "globally" nilpotent, i.e., if there exists an $n \geq 0$ such that $\mathcal{I}^{n + 1} = 0$. Technically the class of finite order thickenings $X \subset X'$ is much easier to handle than the general case. Namely, in this case we have a filtration

$0 \subset \mathcal{I}^ n \subset \mathcal{I}^{n - 1} \subset \ldots \subset \mathcal{I} \subset \mathcal{O}_{X'}$

and we see that $X'$ is filtered by closed subspaces

$X = X_0 \subset X_1 \subset \ldots \subset X_ n \subset X_{n + 1} = X'$

such that each pair $X_ i \subset X_{i + 1}$ is a first order thickening. Using simple induction arguments many results proved for first order thickenings can be rephrased as results on finite order thickenings.

Lemma 75.9.4. Let $S$ be a scheme. Let $X \subset X'$ be a thickening of algebraic spaces over $S$. Let $U$ be an affine object of $X_{spaces, {\acute{e}tale}}$. Then $\Gamma (U, \mathcal{O}_{X'}) \to \Gamma (U, \mathcal{O}_ X)$ is surjective, where we think of $\mathcal{O}_{X'}$ as a sheaf on $X_{spaces, {\acute{e}tale}}$ via (75.9.1.2).

Proof. Let $U' \to X'$ be the étale morphism of algebraic spaces such that $U = X \times _{X'} U'$, see Theorem 75.8.1. By Limits of Spaces, Lemma 69.15.1 we see that $U'$ is an affine scheme.
Hence $\Gamma (U, \mathcal{O}_{X'}) = \Gamma (U', \mathcal{O}_{U'}) \to \Gamma (U, \mathcal{O}_ U)$ is surjective as $U \to U'$ is a closed immersion of affine schemes. Below we give a direct proof for finite order thickenings, which is the case most used in practice. $\square$

Proof for finite order thickenings. We may assume that $X \subset X'$ is a first order thickening by the principle explained above. Denote $\mathcal{I}$ the kernel of the surjection $\mathcal{O}_{X'} \to \mathcal{O}_ X$. As $\mathcal{I}$ is a quasi-coherent $\mathcal{O}_{X'}$-module and since $\mathcal{I}^2 = 0$ by the definition of a first order thickening, we may apply Morphisms of Spaces, Lemma 66.14.1 to see that $\mathcal{I}$ is a quasi-coherent $\mathcal{O}_ X$-module. Hence the lemma follows from the long exact cohomology sequence associated to the short exact sequence

$0 \to \mathcal{I} \to \mathcal{O}_{X'} \to \mathcal{O}_ X \to 0$

and the fact that $H^1_{\acute{e}tale}(U, \mathcal{I}) = 0$ as $\mathcal{I}$ is quasi-coherent, see Descent, Proposition 35.9.3 and Cohomology of Schemes, Lemma 30.2.2. $\square$

Lemma 75.9.5. Let $S$ be a scheme. Let $X \subset X'$ be a thickening of algebraic spaces over $S$. If $X$ is (representable by) a scheme, then so is $X'$.

Proof. Note that $X'_{red} = X_{red}$. Hence if $X$ is a scheme, then $X'_{red}$ is a scheme. Thus the result follows from Limits of Spaces, Lemma 69.15.3. Below we give a direct proof for finite order thickenings, which is the case most often used in practice. $\square$

Proof for finite order thickenings. It suffices to prove this when $X'$ is a first order thickening of $X$. By Properties of Spaces, Lemma 65.13.1 there is a largest open subspace of $X'$ which is a scheme. Thus we have to show that every point $x$ of $|X'| = |X|$ is contained in an open subspace of $X'$ which is a scheme. Using Lemma 75.9.3 we may replace $X \subset X'$ by $U \subset U'$ with $x \in U$ and $U$ an affine scheme. Hence we may assume that $X$ is affine. Thus we reduce to the case discussed in the next paragraph.

Assume $X \subset X'$ is a first order thickening where $X$ is an affine scheme. Set $A = \Gamma (X, \mathcal{O}_ X)$ and $A' = \Gamma (X', \mathcal{O}_{X'})$. By Lemma 75.9.4 the map $A' \to A$ is surjective. The kernel $I$ is an ideal of square zero. By Properties of Spaces, Lemma 65.33.1 we obtain a canonical morphism $f : X' \to \mathop{\mathrm{Spec}}(A')$ which fits into the following commutative diagram

$\xymatrix{ X \ar@{=}[d] \ar[r] & X' \ar[d]^ f \\ \mathop{\mathrm{Spec}}(A) \ar[r] & \mathop{\mathrm{Spec}}(A') }$

Because the horizontal arrows are thickenings it is clear that $f$ is universally injective and surjective. Hence it suffices to show that $f$ is étale, since then Morphisms of Spaces, Lemma 66.51.2 will imply that $f$ is an isomorphism.

To prove that $f$ is étale choose an affine scheme $U'$ and an étale morphism $U' \to X'$. It suffices to show that $U' \to X' \to \mathop{\mathrm{Spec}}(A')$ is étale, see Properties of Spaces, Definition 65.16.2. Write $U' = \mathop{\mathrm{Spec}}(B')$. Set $U = X \times _{X'} U'$. Since $U$ is a closed subspace of $U'$, it is a closed subscheme, hence $U = \mathop{\mathrm{Spec}}(B)$ with $B' \to B$ surjective. Denote $J = \mathop{\mathrm{Ker}}(B' \to B)$ and note that $J = \Gamma (U, \mathcal{I})$ where $\mathcal{I} = \mathop{\mathrm{Ker}}(\mathcal{O}_{X'} \to \mathcal{O}_ X)$ on $X_{spaces, {\acute{e}tale}}$ as in the proof of Lemma 75.9.4.
The morphism $U' \to X' \to \mathop{\mathrm{Spec}}(A')$ induces a commutative diagram

$\xymatrix{ 0 \ar[r] & J \ar[r] & B' \ar[r] & B \ar[r] & 0 \\ 0 \ar[r] & I \ar[r] \ar[u] & A' \ar[r] \ar[u] & A \ar[r] \ar[u] & 0 }$

Now, since $\mathcal{I}$ is a quasi-coherent $\mathcal{O}_ X$-module we have $\mathcal{I} = (\widetilde I)^ a$, see Descent, Definition 35.8.2 for notation and Descent, Proposition 35.8.9 for why this is true. Hence we see that $J = I \otimes _ A B$. Finally, note that $A \to B$ is étale since $U \to X$ is étale, being the base change of the étale morphism $U' \to X'$. We conclude that $A' \to B'$ is étale by Algebra, Lemma 10.143.11. $\square$

Lemma 75.9.6. Let $S$ be a scheme. Let $X \subset X'$ be a thickening of algebraic spaces over $S$. The functor

$V' \longmapsto V = X \times _{X'} V'$

defines an equivalence of categories $X'_{\acute{e}tale}\to X_{\acute{e}tale}$.

Proof. The functor $V' \mapsto V$ defines an equivalence of categories $X'_{spaces, {\acute{e}tale}} \to X_{spaces, {\acute{e}tale}}$, see Theorem 75.8.1. Thus it suffices to show that $V$ is a scheme if and only if $V'$ is a scheme. This is the content of Lemma 75.9.5. $\square$

First order thickenings are described as follows.

Lemma 75.9.7. Let $S$ be a scheme. Let $f : X \to B$ be a morphism of algebraic spaces over $S$. Consider a short exact sequence

$0 \to \mathcal{I} \to \mathcal{A} \to \mathcal{O}_ X \to 0$

of sheaves on $X_{\acute{e}tale}$ where $\mathcal{A}$ is a sheaf of $f^{-1}\mathcal{O}_ B$-algebras, $\mathcal{A} \to \mathcal{O}_ X$ is a surjection of sheaves of $f^{-1}\mathcal{O}_ B$-algebras, and $\mathcal{I}$ is its kernel. If

1. $\mathcal{I}$ is an ideal of square zero in $\mathcal{A}$, and
2. $\mathcal{I}$ is quasi-coherent as an $\mathcal{O}_ X$-module

then there exists a first order thickening $X \subset X'$ over $B$ and an isomorphism $\mathcal{O}_{X'} \to \mathcal{A}$ of $f^{-1}\mathcal{O}_ B$-algebras compatible with the surjections to $\mathcal{O}_ X$.

Proof. In this proof we redo some of the arguments used in the proofs of Lemmas 75.9.4 and 75.9.5. We first handle the case $B = S = \mathop{\mathrm{Spec}}(\mathbf{Z})$. Let $U$ be an affine scheme, and let $U \to X$ be étale. Then

$0 \to \mathcal{I}(U) \to \mathcal{A}(U) \to \mathcal{O}_ X(U) \to 0$

is exact as $H^1(U_{\acute{e}tale}, \mathcal{I}) = 0$ since $\mathcal{I}$ is quasi-coherent, see Descent, Proposition 35.9.3 and Cohomology of Schemes, Lemma 30.2.2. If $V \to U$ is a morphism of affine objects of $X_{spaces, {\acute{e}tale}}$ then $\mathcal{I}(V) = \mathcal{I}(U) \otimes _{\mathcal{O}_ X(U)} \mathcal{O}_ X(V)$ since $\mathcal{I}$ is a quasi-coherent $\mathcal{O}_ X$-module, see Descent, Proposition 35.8.9. Hence $\mathcal{A}(U) \to \mathcal{A}(V)$ is an étale ring map, see Algebra, Lemma 10.143.11. Hence we see that

$U \longmapsto U' = \mathop{\mathrm{Spec}}(\mathcal{A}(U))$

is a functor from $X_{affine, {\acute{e}tale}}$ to the category of affine schemes and étale morphisms. In fact, we claim that this functor can be extended to a functor $U \mapsto U'$ on all of $X_{\acute{e}tale}$. To see this, if $U$ is an object of $X_{\acute{e}tale}$, note that

$0 \to \mathcal{I}|_{U_{Zar}} \to \mathcal{A}|_{U_{Zar}} \to \mathcal{O}_ X|_{U_{Zar}} \to 0$

and $\mathcal{I}|_{U_{Zar}}$ is a quasi-coherent sheaf on $U$, see Descent, Proposition 35.9.4. Hence by More on Morphisms, Lemma 37.2.2 we obtain a first order thickening $U \subset U'$ of schemes such that $\mathcal{O}_{U'}$ is isomorphic to $\mathcal{A}|_{U_{Zar}}$.
It is clear that this construction is compatible with the construction for affines above.

Choose a presentation $X = U/R$, see Spaces, Definition 64.9.3, so that $s, t : R \to U$ define an étale equivalence relation. Applying the functor above we obtain an étale equivalence relation $s', t' : R' \to U'$ in schemes. Consider the algebraic space $X' = U'/R'$ (see Spaces, Theorem 64.10.5). The morphism $X = U/R \to U'/R' = X'$ is a first order thickening. Consider $\mathcal{O}_{X'}$ viewed as a sheaf on $X_{\acute{e}tale}$. By construction we have an isomorphism

$\gamma : \mathcal{O}_{X'}|_{U_{\acute{e}tale}} \longrightarrow \mathcal{A}|_{U_{\acute{e}tale}}$

such that $s^{-1}\gamma$ agrees with $t^{-1}\gamma$ on $R_{\acute{e}tale}$. Hence by Properties of Spaces, Lemma 65.18.14 this implies that $\gamma$ comes from a unique isomorphism $\mathcal{O}_{X'} \to \mathcal{A}$ as desired.

To handle the case of a general base algebraic space $B$, we first construct $X'$ as an algebraic space over $\mathbf{Z}$ as above. Then we use the isomorphism $\mathcal{O}_{X'} \to \mathcal{A}$ to define $f^{-1}\mathcal{O}_ B \to \mathcal{O}_{X'}$. According to Lemma 75.9.2 this defines a morphism $X' \to B$ compatible with the given morphism $X \to B$ and we are done. $\square$

Lemma 75.9.8. Let $S$ be a scheme. Let $Y \subset Y'$ be a thickening of algebraic spaces over $S$. Let $X' \to Y'$ be a morphism and set $X = Y \times _{Y'} X'$. Then $(X \subset X') \to (Y \subset Y')$ is a morphism of thickenings. If $Y \subset Y'$ is a first (resp. finite order) thickening, then $X \subset X'$ is a first (resp. finite order) thickening.

Proof. Omitted. $\square$

Lemma 75.9.9. Let $S$ be a scheme. If $X \subset X'$ and $X' \subset X''$ are thickenings of algebraic spaces over $S$, then so is $X \subset X''$.

Proof. Omitted. $\square$

Lemma 75.9.10. The property of being a thickening is fpqc local. Similarly for first order thickenings.

Proof. The statement means the following: Let $S$ be a scheme and let $X \to X'$ be a morphism of algebraic spaces over $S$. Let $\{ g_ i : X'_ i \to X'\}$ be an fpqc covering of algebraic spaces such that the base change $X_ i \to X'_ i$ is a thickening for all $i$. Then $X \to X'$ is a thickening. Since the morphisms $g_ i$ are jointly surjective we conclude that $X \to X'$ is surjective. By Descent on Spaces, Lemma 73.11.17 we conclude that $X \to X'$ is a closed immersion. Thus $X \to X'$ is a thickening. We omit the proof in the case of first order thickenings. $\square$
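As a concrete illustration of Definition 75.9.1 (an added example, not part of the Stacks text itself): the simplest first order thickening is the one given by the dual numbers over a field $k$,

$$X = \mathop{\mathrm{Spec}}(k) \subset X' = \mathop{\mathrm{Spec}}(k[\epsilon ]/(\epsilon ^2)), \qquad \mathcal{I} = (\epsilon ), \qquad \mathcal{I}^2 = 0.$$

Both spaces have the same underlying topological space (a single point) and the ideal sheaf defining the closed subspace has square zero, so this is a first order thickening; it also illustrates Lemma 75.9.5, since this thickening of the scheme $\mathop{\mathrm{Spec}}(k)$ is again a scheme.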
2023-03-24 12:29:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.9950971007347107, "perplexity": 107.29865436210503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00220.warc.gz"}
https://ask.openstack.org/en/questions/1850/revisions/
# Revision history

### Can multiple l3-agent instances run on one host?

Hi all:

When I read the code of the latest quantum master branch, I found that the behavior of "add network to dhcp agent" and "add router to l3 agent" is not the same. Because I don't have the newest installation right now, I came here to ask for help!

"add network to dhcp agent": after some verification, it will directly add a record in the NetworkDhcpAgentBinding table;

"add router to l3 agent": after some verification, the method 'auto_schedule_routers' is called:

    result = self.auto_schedule_routers(context, agent_db.host, router_id)

The parameter 'agent_db.host' means that there are some l3-agent instances on the host (if not, I think this method will not make sense); the code will then pick one that may be different from the agent you wanted to host the router, so strange! I wonder whether it's a bug. Please let me know if I am missing something here.

--Lingxian Kong
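A toy model of the asymmetry described in the question is sketched below. Apart from the two names quoted above (the NetworkDhcpAgentBinding table and auto_schedule_routers), every identifier here is hypothetical and is not actual Quantum/Neutron code:

    # Hypothetical sketch: the DHCP path binds to exactly the requested agent,
    # while the L3 path only passes the agent's host to a scheduler, which may
    # pick a *different* l3-agent running on that host.
    import random
    from dataclasses import dataclass

    @dataclass
    class Agent:
        id: str
        host: str

    agents = [Agent("l3-agent-a", "host1"), Agent("l3-agent-b", "host1")]
    bindings = []  # stand-in for the NetworkDhcpAgentBinding table

    def add_network_to_dhcp_agent(network_id, agent):
        # Direct path: record the binding for exactly the requested agent.
        bindings.append((network_id, agent.id))

    def auto_schedule_routers(host, router_id):
        # Indirect path: any l3-agent on the host may be chosen.
        return random.choice([a for a in agents if a.host == host])

    chosen = auto_schedule_routers("host1", "router-1")
    print(chosen.id)  # may differ from the specific agent the caller had in mind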
2021-02-26 15:39:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24864666163921356, "perplexity": 3176.2127633159107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357929.4/warc/CC-MAIN-20210226145416-20210226175416-00604.warc.gz"}
https://www.physicsforums.com/threads/wave-functional.180190/
# Wave functional

1. Aug 12, 2007

### jostpuur

Is it correct to think that, with a scalar complex Klein-Gordon field, the wave function $$\Psi:\mathbb{R}^3\to\mathbb{C}$$ of one-particle QM is replaced with an analogous wave functional $$\Psi:\mathbb{C}^{\mathbb{R}^3}\to\mathbb{C}$$? Most introductions to QFT don't explain anything like this, but when I've thought about it myself, it seems to be correct. If this is correct for the Klein-Gordon field, then the real problem is the Dirac field. I don't understand what kind of wave functional it could have.

2. Aug 12, 2007

### dextercioby

The classical complex scalar field is a complex function defined on all of Minkowski space. $$\varphi \in C^{\infty}\left(\mathbb{R}^{4},\mathbb{C}\right)$$

3. Aug 12, 2007

### jostpuur

But when we want a quantum mechanical field, we want to have complex amplitudes for all possible configurations of the classical field. That means a wave functional $\Psi[\varphi]$.

4. Aug 12, 2007

### dextercioby

When quantized, such a classical field like the KG field becomes an operator-valued distribution having as a domain a subset of the bosonic/fermionic Fock space. See the first 2 volumes of Reed & Simon for details.

5. Aug 12, 2007

### jostpuur

Making the field and the conjugate field into operators seems analogous to making position and momentum into operators in particle QM. But when position and momentum are made operators, there is also the state, which can be represented with a wave function, and we can have representations of the operators as well. I understand that in QFT we have operators for fields, but shouldn't we also have representations for the states, that is, shouldn't we have these wave functionals? And also actual representations for the operators?

6. Aug 13, 2007

### Demystifier

Yes, QFT can be formulated in that way as well. However, it is not usual in practice, because what one usually measures are not field configurations but positions/momenta/energies of particles.

7. Aug 13, 2007

### dextercioby

The fields are not observable; we're completely free to describe them using any possible representation of the uniparticle Hilbert space. Usually it's chosen $L^2 \left(\mathbb{R}^3, d^3 p\right)$, just like in ordinary QM. Anyway, the most important thing for QFT is to give valid predictions for the observables, and the mathematical means to do it are not that relevant.

8. Aug 14, 2007

### Demystifier

The irony is that the only really observable things are particle positions, which however are not described by a hermitian operator, contradicting one of the cornerstone axioms of quantum theory. So, is quantum field theory a genuine quantum theory?

9. Aug 15, 2007

### jostpuur

Just as the classical position of a point particle is only an expectation value of the position of a quantum mechanical particle, isn't the classical electromagnetic field only an expectation value of the quantum field theoretical electromagnetic field?

Last edited: Aug 15, 2007

10. Aug 17, 2007

### Demystifier

Formally yes, but it is not really consistent for fermionic fields.

11. Aug 17, 2007

### jostpuur

It is precisely the fermionic fields that make me feel like I don't understand what's happening with quantum fields. Unfortunate stuff.

12. Aug 19, 2007

### samalkhaiat

13. Aug 19, 2007

### jostpuur

If I have a function $\Psi:\mathbb{R}^n\to\mathbb{C}$, and vector notation $x_i\in\mathbb{R}$ where $i\in\{1,2,\ldots, n\}$.
Then the partial derivatives are given the notation $$\partial_i\Psi$$ or $$\partial_{x_i}\Psi$$. If I have a functional $\Psi:\mathbb{R}^{\mathbb{R}^3}\to\mathbb{C}$, and a vector notation $\phi_x = \phi(x) \in \mathbb{R}$ where $x\in\mathbb{R}^3$, then I would analogously give the partial derivatives the notation $$\partial_x\Psi$$ or $$\partial_{\phi_x}\Psi$$. Is this the same thing that $$\frac{\delta}{\delta \phi(x)}\Psi$$ means?

Last edited: Aug 19, 2007

14. Aug 19, 2007

### samalkhaiat

15. Aug 19, 2007

### jostpuur

This is very interesting. It could be that this equation is precisely what I'm after, but I don't understand it yet.

16. Aug 19, 2007

### samalkhaiat

17. Aug 19, 2007

### reilly

See the first chapters of Zee's QFT book -- or most books on Solid State physics. Then you will understand that QFT is simply an alternate formulation of standard QM, based on creation and destruction operators -- let's hear it for harmonic oscillators -- such that it's easy to deal with systems in which the numbers of various particles are not fixed. And, if you want to see bread-and-butter QFT, then look at the field of Quantum Optics, Mandel and Wolf's book for example. To get a hold of QFT, you must study both theory AND practice -- neglect of one or the other will get you all messed up.

Regards,
Reilly Atkinson

18. Aug 20, 2007

### Demystifier

Jostpuur, for some standard references on the functional Schrodinger equation for fermionic fields, see Refs. [16,17,18] in my http://xxx.lanl.gov/abs/quant-ph/0302152
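A standard heuristic connects the two notations discussed in post 13: discretize space into small cells of volume $\Delta V$ and treat the field values $\phi_x$ as ordinary coordinates. Then, heuristically,

$$\frac{\delta \Psi}{\delta \phi(x)} \approx \frac{1}{\Delta V}\,\frac{\partial \Psi}{\partial \phi_x}$$

so the functional derivative is the continuum limit of a rescaled partial derivative with respect to the field value at $x$; the factor $1/\Delta V$ is what turns discrete sums $\sum_x \Delta V$ into integrals $\int d^3x$ in expressions such as the functional Schrödinger equation.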
2017-10-22 07:14:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8366230726242065, "perplexity": 810.3856685832765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825147.83/warc/CC-MAIN-20171022060353-20171022080353-00729.warc.gz"}
https://www.orbiter-forum.com/threads/a-contingency-plan-for-fast-return-of-the-u-s-to-space.30939/
# Discussion: A contingency plan for fast return of the U.S. to space.

#### RGClark
##### Mathematician

Why NASA and Congress Spent Four Hours Shouting At Each Other About Russia. April 8, 2014 // 04:17 PM EST

The congressmen kept asking for a short-term contingency plan to return America to space in case of seriously deteriorating U.S./Russia relations, and Bolden kept responding with the three-year plan to have commercial crew flying. But there is a shorter-term plan. BOTH SpaceX and Boeing have said they could be flying crew by next year with funding. So if the congressmen want a shorter-term contingency plan, provide that required extra funding.

At the Humans 2 Mars 2014 conference I asked Bolden about such a contingency plan. It's at about the 15-minute mark in this video:

He responded that SpaceX has not been selected yet as the crew launch provider. OK, then also fund Boeing so they can also return crew to the ISS by 2015.

There has been talk in Congress of only having one crew launch provider. I strongly disagree with that plan. We all saw what can happen when you only have one launch provider and that one goes down, as happened with the shuttle. SpaceX is furthest along so they should be one of the providers. But on the other hand the Boeing capsule would be carried on the Atlas V, which has had a remarkable string of successful launches, which SpaceX is nowhere near to matching yet.

Russian Deputy Prime Minister Dmitry Rogozin mocked the U.S. space sanctions against Russia, saying NASA would need to get a trampoline to get its astronauts to the ISS. This led Elon Musk to state through his twitter account that SpaceX would be revealing its man-rated Dragon 2 at the end of May:

Now, if SpaceX is flying their own crews to LEO in 2015 and there is still a strained relationship between the U.S. and Russia, then it would be extremely embarrassing for NASA to still be paying Russia to ferry NASA astronauts to the ISS when SpaceX will already be flying American crews to LEO. A solution would be for NASA to at least draw up a contingency plan, including cost estimates of how much extra funding it would take, to also take NASA astronauts to the ISS. Then the onus would be on Congress to decide if they want to provide NASA with the extra funding to do so.

Bob Clark

Last edited:

#### Urwumpe
##### Not funny anymore
Donator

Moar Funding = Moar Profit :rofl:

And I seriously doubt that funding alone can accelerate things. NASA did not get to the moon in 8 years by funding alone.

#### fred18
Donator

And I seriously doubt that funding alone can accelerate things. NASA did not get to the moon in 8 years by funding alone.

It's a combination: a big competition environment (or war) and funding. One of the two is not enough; both of them must be present at the same time.

#### boogabooga
##### Bug Crusher

SpaceX is furthest along so they should be one of the providers. But on the other hand the Boeing capsule would be carried on the Atlas V, which has had a remarkable string of successful launches, which SpaceX is nowhere near to matching yet.

Replacing relying on Russia for a spacecraft with relying on Russia for an engine is not a solution. Unfortunately, as I posted in another thread, there is a court injunction now such that the days of the U.S. using the RD-180 look numbered. Don't assume that there still will be an Atlas V when the CST is ready to use it.

#### Urwumpe
##### Not funny anymore
Donator

It's a combination: a big competition environment (or war) and funding.
One of the two is not enough; both of them must be present at the same time.

No, it is also a matter of infrastructure and manpower. You can't just pull a few thousand engineers out of your hat to do the work that a few hundred would otherwise do in ten years. You need to make sure that these engineers will be there when you need them, and you have to utilize those engineers that you already have - and that is something that the government can easily do, but not a company.

#### Donamy
Donator Beta Tester

Can't the Dragon, with very little upgrade, do a manned re-entry?

Last edited:

#### Urwumpe
##### Not funny anymore
Donator

Can't the Dragon, with very little upgrade, do a manned re-entry?

Some more upgrade. You need life support and crew ergonomics (seats, restraints, protection, etc). Likely you also need thermal control. And whether the impact on landing is really suitable for manned landing is another topic. Just landing on water is not automatically soft. Likely you also need crew control interfaces, at least for manual parachute and docking abort and collision avoidance.

#### RGClark
##### Mathematician

Replacing relying on Russia for a spacecraft with relying on Russia for an engine is not a solution. Unfortunately, as I posted in another thread, there is a court injunction now such that the days of the U.S. using the RD-180 look numbered. Don't assume that there still will be an Atlas V when the CST is ready to use it.

While I favor the U.S. producing their own heavy-thrust kerosene engines, nothing beats the proven reliability that the RD-180 has demonstrated.

Bob Clark

---------- Post added at 12:18 PM ---------- Previous post was at 12:11 PM ----------

Moar Funding = Moar Profit :rofl:

And I seriously doubt that funding alone can accelerate things. NASA did not get to the moon in 8 years by funding alone.

"moar" -> more ?

HUGE funding did accelerate it. As a proportion of the U.S. budget, it was ten times higher than today. Imagine what we could do if NASA's budget was $180 billion every year instead of $18 billion. Heck, we'd probably have manned missions to Jupiter by now.

Anyway, SpaceX is claiming they can launch their own crews to LEO by 2015. So what else would NASA need to also send NASA astronauts to the ISS then?

Bob Clark

#### garyw
Moderator Tutorial Publisher

Anyway, SpaceX is claiming they can launch their own crews to LEO by 2015. So what else would NASA need to also send NASA astronauts to the ISS then?

Proof. Quite simply, the SpaceX human-rated Dragon capsule needs a full-scale dress rehearsal. It needs to go to the ISS, spend six months there and then come back to Earth. The launch escape system also needs a thorough test. After that, it should be ready to go.

#### Matias Saibene
##### Well-known member

What a pity that space partnerships are also divided by a conflict that has nothing to do with the space industry. With these damn divisions mankind will never get anywhere. So we are very pleased that our country's space industry has excellent results. See this picture: Waste of tax - Seal of Approval

#### Urwumpe
##### Not funny anymore
Donator

HUGE funding did accelerate it. As a proportion of the U.S. budget, it was ten times higher than today. Imagine what we could do if NASA's budget was $180 billion every year instead of $18 billion. Heck, we'd probably have manned missions to Jupiter by now.

If we had this budget for at least 12 years, or two generations of engineering students. And maybe also learned the cheat codes for the universe.
#### Ghostrider
##### Donator
Donator

"moar" -> more ?

Yeah. As in "moar DAKKA!", 'cuz ain't no such fing as enuff dakka.

#### N_Molson
Donator

Anyway, it's like firemen trying to save a forest that is half on fire... Funny how some politicians react now that they have to face the evidence that ISS access depends on Russian hardware and technology... I say, you should have started your contingency plan right after the Columbia tragedy, guys; now it's a bit late.

#### Ghostrider
##### Donator
Donator

Well, Dmitri Rogozin suggested trampolines... Anyone have an ACME catalogue handy?

#### Andy44
##### owner: Oil Creek Astronautix

Proof. Quite simply, the SpaceX human-rated Dragon capsule needs a full-scale dress rehearsal. It needs to go to the ISS, spend six months there and then come back to Earth. The launch escape system also needs a thorough test. After that, it should be ready to go.

You mean like they "tested" the shuttle before sticking two guys in it and launching it into space?

#### Urwumpe
##### Not funny anymore
Donator

You mean like they "tested" the shuttle before sticking two guys in it and launching it into space?

If the Dragon capsule had to go through the tests that the Space Shuttle did before two astronauts were put into it for its first orbital flight, SpaceX could do its first manned test in 2050. Just as an example and refresher: http://www.nasaspaceflight.com/2012/04/space-shuttle-enterprise-the-orbiter-that-started-it-all/ Also, the initial Space Shuttle tests were not over before STS-9, if you are pedantic.

#### Hlynkacg
##### Aspiring rocket scientist
Tutorial Publisher Donator

I know that there is a certain faction within this forum that still views SpaceX as impudent youngsters who need to get off NASA's lawn, but if their press is to be believed, Dragon's thermal control system is already capable of supporting a crew, so the only real issue that needs to be addressed is the ergonomics (seats and the like). Launch escape is entirely optional, as Young and Crippen already demonstrated.

#### Urwumpe
##### Not funny anymore
Donator

No, let me name seven arguments why launch escape is not optional, but mandatory:

Francis R. Scobee
Michael J. Smith
Ronald McNair
Ellison Onizuka
Judith Resnik
Greg Jarvis
Christa McAuliffe

#### Thunder Chicken
Donator

No, let me name seven arguments why launch escape is not optional, but mandatory: Francis R. Scobee, Michael J. Smith, Ronald McNair, Ellison Onizuka, Judith Resnik, Greg Jarvis, Christa McAuliffe

Vladimir Titov and Gennady Strekalov would also serve as arguments for effective launch escape. http://en.wikipedia.org/wiki/Soyuz_7K-ST_No._16L

#### Andy44
##### owner: Oil Creek Astronautix

No, let me name seven arguments why launch escape is not optional, but mandatory: Francis R. Scobee, Michael J. Smith, Ronald McNair, Ellison Onizuka, Judith Resnik, Greg Jarvis, Christa McAuliffe

You're right. After those poor devils perished, NASA installed a fully functional launch escape system in the shuttle....not.
2022-09-26 02:26:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23972411453723907, "perplexity": 5266.007659860556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00367.warc.gz"}
https://orbilu.uni.lu/simple-search?query=((uid:50026032))
Results 1-13 of 13. Search equation: ((uid:50026032))

1. Measuring Pants. Doan, Nhat Minh; Parlier, Hugo; Tan, Ser Peow. E-print/Working paper (2020). We investigate the terms arising in an identity for hyperbolic surfaces proved by Luo and Tan, namely showing that they vary monotonically in terms of lengths and that they verify certain convexity properties. Using these properties, we deduce two results. As a first application, we show how to deduce a theorem of Thurston which states, in particular for closed hyperbolic surfaces, that if a simple length spectrum "dominates" another, then in fact the two surfaces are isometric. As a second application, we show how to find upper bounds on the number of pairs of pants of bounded length that only depend on the boundary length and the topology of the surface.

2. Delaunay Triangulations of Points on Circles. Despré, Vincent; Devillers, Olivier; Parlier, Hugo et al. E-print/Working paper (2018). Delaunay triangulations of a point set in the Euclidean plane are ubiquitous in a number of computational sciences, including computational geometry. Delaunay triangulations are not well defined as soon as 4 or more points are concyclic but since it is not a generic situation, this difficulty is usually handled by using a (symbolic or explicit) perturbation. As an alternative, we propose to define a canonical triangulation for a set of concyclic points by using a max-min angle characterization of Delaunay triangulations. This point of view leads to a well defined and unique triangulation as long as there are no symmetric quadruples of points. This unique triangulation can be computed in quasi-linear time by a very simple algorithm.

3. The maximum number of systoles for genus two Riemann surfaces with abelian differentials. Judge, Chris; Parlier, Hugo. E-print/Working paper (2017). This article explores the length and number of systoles associated to holomorphic $1$-forms on surfaces. In particular, we show that up to homotopy, there are at most $10$ systolic loops on such a genus two surface and that the bound is realized by a unique translation surface up to homothety. We also provide sharp upper bounds on the number of homotopy classes of systoles for a holomorphic $1$-form with a single zero in terms of the genus.

4. Chromatic numbers for the hyperbolic plane and discrete analogs. Parlier, Hugo; Petit, Camille. E-print/Working paper (2017). We study colorings of the hyperbolic plane, analogously to the Hadwiger-Nelson problem for the Euclidean plane. The idea is to color points using the minimum number of colors such that no two points at distance exactly $d$ are of the same color. The problem depends on $d$ and, following a strategy of Kloeckner, we show linear upper bounds on the necessary number of colors. In parallel, we study the same problem on $q$-regular trees and show analogous results. For both settings, we also consider a variant which consists in replacing $d$ with an interval of distances.

5. Counting curves, and the stable length of currents. Erlandsson, Viveka; Parlier, Hugo; Souto, Juan. E-print/Working paper (2016). Let $\gamma_0$ be a curve on a surface $\Sigma$ of genus $g$ and with $r$ boundary components and let $\pi_1(\Sigma)\curvearrowright X$ be a discrete and cocompact action on some metric space. We study the asymptotic behavior of the number of curves $\gamma$ of type $\gamma_0$ with translation length at most $L$ on $X$. For example, as an application, we derive that for any finite generating set $S$ of $\pi_1(\Sigma)$ the limit $$\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\#\{\gamma\text{ of type }\gamma_0\text{ with }S\text{-translation length}\le L\}$$ exists and is positive. The main new technical tool is that the function which associates to each curve its stable length with respect to the action on $X$ extends to a (unique) continuous and homogeneous function on the space of currents. We prove that this is indeed the case for any action of a torsion free hyperbolic group.

6. Interrogating surface length spectra and quantifying isospectrality. Parlier, Hugo. E-print/Working paper (2016). This article is about inverse spectral problems for hyperbolic surfaces and in particular how length spectra relate to the geometry of the underlying surface. A quantitative answer is given to the following: how many questions do you need to ask a length spectrum to determine it completely? In answering this, a quantitative upper bound is given on the number of isospectral but non-isometric surfaces of a given genus.

7. Geometric filling curves on surfaces. Basmajian, Ara; Parlier, Hugo; Souto, Juan. E-print/Working paper (2016). This note is about a type of quantitative density of closed geodesics on closed hyperbolic surfaces. The main results are upper bounds on the length of the shortest closed geodesic that $\varepsilon$-fills the surface.

8. Short closed geodesics with self-intersections. Erlandsson, Viveka; Parlier, Hugo. E-print/Working paper (2016). Our main point of focus is the set of closed geodesics on hyperbolic surfaces. For any fixed integer $k$, we are interested in the set of all closed geodesics with at least $k$ (but possibly more) self-intersections. Among these, we consider those of minimal length and investigate their self-intersection numbers. We prove that their intersection numbers are upper bounded by a universal linear function in $k$ (which holds for any hyperbolic surface). Moreover, in the presence of cusps, we get bounds which imply that the self-intersection numbers behave asymptotically like $k$ for growing $k$.

9. Distances in domino flip graphs. Parlier, Hugo; Zappa, Samuel. E-print/Working paper (2016). This article is about measuring and visualizing distances between domino tilings. Given two tilings of a simply connected square tiled surface, we're interested in the minimum number of flips between two tilings. Given a certain shape, we're interested in computing the diameters of the flip graphs, meaning the maximal distance between any two of its tilings. Building on work of Thurston and others, we give geometric interpretations of distances which result in formulas for the diameters of the flip graphs of rectangles or Aztec diamonds.

10. Once punctured disks, non-convex polygons, and pointihedra. Parlier, Hugo; Pournin, Lionel. E-print/Working paper (2016). We explore several families of flip-graphs, all related to polygons or punctured polygons. In particular, we consider the topological flip-graphs of once-punctured polygons which, in turn, contain all possible geometric flip-graphs of polygons with a marked point as embedded sub-graphs. Our main focus is on the geometric properties of these graphs and how they relate to one another. In particular, we show that the embeddings between them are strongly convex (or, said otherwise, totally geodesic). We also find bounds on the diameters of these graphs, sometimes using the strongly convex embeddings. Finally, we show how these graphs relate to different polytopes, namely type D associahedra and a family of secondary polytopes which we call pointihedra.

11. Chromatic numbers of hyperbolic surfaces. Parlier, Hugo; Petit, Camille. In Indiana Univ. Math. J. (2016), 65(4), 1401--1423.

12. Filling sets of curves on punctured surfaces. Fanoni, Federica; Parlier, Hugo. In New York J. Math. (2016), 22.

13. Relative shapes of thick subsets of moduli space. Anderson, James W.; Parlier, Hugo; Pettet, Alexandra. In Amer. J. Math. (2016), 138(2), 473--498.
2020-10-27 04:24:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6572802066802979, "perplexity": 829.9706659707596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107893011.54/warc/CC-MAIN-20201027023251-20201027053251-00398.warc.gz"}
https://foodimpacts.org/methods/
## Motivation

In recent decades there has been increased coverage of animal welfare issues, of the health risks of high consumption of animal products, and of the contribution of animal farming to climate change. Multiple high-profile organisations have called for reduced consumption of animal products through reducetarian, vegetarian and vegan diets [1, 2]. Each of these three issues has been studied extensively, and solving them has gained broad support.

A balanced vegan diet has the potential to reduce the negative impacts of all three issues, but campaigning for veganism has not been very successful, and veganism still has a relatively low prevalence of under 10%. For example, in the United Kingdom, where the first vegan society was founded, the prevalence of veganism is estimated at 1.16% by the most recent survey [3]. An alternative to veganism called flexitarianism or reducetarianism has been gaining ground. The idea of these diets is to reduce consumption of animal products by partially replacing them with plant-based foods.

For someone adopting a flexitarian diet, it can be confusing to decide which species to avoid, since the consumption of different species carries different levels of harm on the three scales. For example, poultry has lower health risks and contributes less to climate change than beef, but is considered a poor choice from an animal welfare perspective, due to the poor conditions of the animals and the high number of animals used.

The goal of this tool is to allow the user to specify their relative concern about two issues: animal welfare and climate change. The tool ranks animal species according to the harm induced by their consumption while taking into account the user's values. This ranking can be used to decide which animal products should be replaced with plant-based products.

Rankings based on welfare have been developed previously by various individuals and groups such as Peter Hurford, Brian Tomasik, Charity Entrepreneurship and Dominik Peters. This tool is a minor extension of the work of Dominik Peters that considers emissions in addition to welfare. I want to thank Dominik for kindly providing the data and methodology that he used.

## Animal suffering subscale

To estimate the negative impact on animal welfare, the tool calculates the number of hours animals have spent on a farm in order to produce an amount of food which provides 2000 kcal of energy. For example, Dominik retrieved data on production yields and slaughter age from chicken breeding companies such as Aviagen and Lohmann Tierzucht. I calculated the amount of produce required for 2000 kcal of energy using data from nutritiondata.self.com. In the cases of "dairy cow", "caged hen" and "cage-free hen", the relevant produce is dairy or eggs. Of course, different preparations from the same animal can have varying nutritional value, but for the sake of simplicity one value is used per species. The same energy value was used for caged-hen and cage-free-hen eggs, and likewise for broiler and slow-growing broiler meat.

Let $suffering_s$ designate the suffering subscale score of species $s$, and let $lifespan_s$, $production_s$ and $refweight_s$ designate the average lifespan of an animal in hours, the weight of produce per animal, and the weight of produce required for 2000 kcal of energy. The basic score is the number of hours suffered on a farm to produce 2000 kcal of produce:

$suffering_s = \frac{lifespan_s}{production_s} \times refweight_s$
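As a concrete illustration, here is a minimal Python sketch of the basic score. The species names and all numbers below are made-up placeholders for illustration, not the tool's actual data:

```python
# Illustrative placeholder data, not the tool's actual numbers:
# lifespan in hours on the farm, production in kg of produce per animal,
# refweight in kg of produce that provides 2000 kcal.
SPECIES = {
    "broiler":  {"lifespan": 42 * 24,  "production": 1.8,   "refweight": 1.2},
    "beef_cow": {"lifespan": 550 * 24, "production": 220.0, "refweight": 0.8},
}

def suffering_score(species):
    """Basic score: hours lived on a farm per 2000 kcal of produce."""
    p = SPECIES[species]
    return p["lifespan"] / p["production"] * p["refweight"]

for name in SPECIES:
    print(name, round(suffering_score(name), 1))
```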
There are some issues with this basic approach. For example, our confidence in different species being sentient varies. If we just account for hours lived to produce 2000 kcal, then the harm of smaller species will dominate. But the user of the tool might have low confidence in shrimp being sentient, and we want to account for that. There are arguments in favour of and against brain weighting [4], but if we were to believe that capacity for welfare is linked to brain mass or neuron count, we could use either one to scale the suffering scores. Dominik used Carl Shulman's data on brain mass and neuron count [5]. The user can also choose to apply different functions to the neuron count data. The tool supports linear, logarithmic, square-root and square transformations of neuron counts. Some believe that cognitive abilities increase sublinearly with neuron count, so the default transform is square-root, but the user can choose other transforms or disable brain weighting. When brain weighting is enabled, the hours spent on a farm are scaled by the ratio of the brain mass (or neuron count) of the species to that of a chicken. If $n_s$ and $f$ designate the neuron count of species $s$ and the user-chosen neuron scaling function, then the suffering score can be adjusted by multiplying with the scalar $f(n_s) / f(n_{broiler})$.

It is also likely that different species of farm animals do not suffer equally, due to the different conditions they are raised in. We allow the user to specify their belief in the relative suffering of different species. The default values are from Brian Tomasik [6]. The suffering of a beef cow is set at 1, and the user can specify a species' relative suffering in relation to that of a beef cow.

Different animal products might have different price elasticities. Price elasticity of supply shows the responsiveness of production to a change in price; elasticity of demand shows the responsiveness of demand to a change in price. Cumulative elasticity is the net effect on supply. If someone spares 10 chickens a year by not eating chicken, the actual change could be less than 10: decreased chicken meat prices due to lower demand might motivate someone else to eat more chicken. The user can choose to factor in cumulative elasticity in order to account for this effect. Two sources are provided: the book "Compassion, by the Pound" [7] and the work of the organisation Animal Charity Evaluators [8].

The user can also choose to factor in sleeping time, if they assume that animals do not suffer while sleeping, and liveability, which also factors in animals who die before slaughter.

## Climate change subscale

The climate change subscale measures the CO2-equivalent greenhouse gases produced per kilogram of animal produce. This value is scaled according to the amount of produce required for 2000 kcal. CO2-equivalent emissions data has been collected from lifecycle analyses [9, 10]. Let $emissions_s$ designate the CO2-equivalent gases produced per kilogram of produce of species $s$. The climate subscale score of species $s$ is thus:

$climate_s = emissions_s \times refweight_s$

The elasticity parameters apply both to the suffering and the climate subscales. That means that if elasticity is enabled, then $climate_s$ is multiplied by the cumulative elasticity factor.
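Continuing the sketch above, the adjustments compose multiplicatively with the basic scores. The neuron counts, relative-suffering weights, elasticity factors and emission figures below are illustrative placeholders, not the tool's defaults:

```python
import math

# Placeholder adjustment data (illustrative only, not the tool's defaults).
NEURONS = {"broiler": 2.2e8, "beef_cow": 3.0e9}         # neuron counts
RELATIVE_SUFFERING = {"broiler": 2.0, "beef_cow": 1.0}  # beef cow fixed at 1
ELASTICITY = {"broiler": 0.7, "beef_cow": 0.85}         # cumulative elasticity
EMISSIONS = {"broiler": 6.9, "beef_cow": 27.0}          # kg CO2e per kg of produce

def adjusted_suffering(species, neuron_transform=math.sqrt, use_elasticity=True):
    """Basic score scaled by brain weighting, relative suffering and,
    optionally, cumulative elasticity."""
    score = suffering_score(species)
    # Brain weighting: ratio of transformed neuron counts, chicken = 1.
    score *= neuron_transform(NEURONS[species]) / neuron_transform(NEURONS["broiler"])
    score *= RELATIVE_SUFFERING[species]
    if use_elasticity:
        score *= ELASTICITY[species]
    return score

def climate_score(species, use_elasticity=True):
    """CO2-equivalent emissions per 2000 kcal of produce."""
    score = EMISSIONS[species] * SPECIES[species]["refweight"]
    if use_elasticity:
        score *= ELASTICITY[species]
    return score
```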
Note that only the impact on climate change is considered; there are other negative environmental impacts. Saltwater fishing causes marine pollution, and fish farms cause eutrophication. But since the risk of climate change outweighs the other environmental risks related to the consumption of animal produce, I consider these omissions acceptable.

It is sometimes argued that buying local food is more important than reducing meat consumption. In general, the climate impact of food is dominated by production [11], so this tool does not make a distinction between where animals were farmed. Imported plant-based food tends to have lower emissions than local animal produce. Life cycle analyses of animal products already include transportation, and this is considered sufficient. The tool also does not consider which plant-based foods are substituted for animal products. Plant-based food production in general causes significantly lower emissions [12, 13], but from the perspective of the environment it might make sense to prefer whole foods, because these do not require additional energy-intensive processing.

An issue with using CO2-equivalent greenhouse gas emissions as a measure of warming is that farming different species puts different types of greenhouse gases into the environment. The high impact of ruminants is caused by their methane emissions. While methane warms the atmosphere more than CO2, it is also removed from the atmosphere significantly faster. Climate scientists sometimes use a metric called CO2-equivalent with Global Warming Potential 100, which considers methane to cause 25x as much warming as an equivalent amount of CO2 over a century. Some physicists disagree with this approach [14]. If one is concerned about the effects of warming over thousands of years as opposed to a hundred years, this approach understates the impact of CO2 compared to methane.

## Combined model

A weighted product model is used to combine the subscales. Weighted product models are dimensionless and are used for ranking options when making decisions. Because the scores are dimensionless, they are normalised to the range $[0, 100]$. Let $w_{suffering}$ and $w_{climate}$ designate the suffering and climate weights. The combined score of species $s$ is calculated using:

$harm_s = suffering_s^{w_{suffering}} \cdot climate_s^{w_{climate}}$

A product model ensures that the subscales affect the combined score equivalently: a 1% increase in CO2 emissions changes the combined score by the same amount that a 1% increase in the animal suffering subscale would. Adding weights to the model allows us to change the relative contribution of each subscale to the combined score.
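A sketch of how the subscales could be combined, building on the illustrative functions above. The equal default weights and the normalisation step (scaling so the largest score becomes 100) are assumptions made for illustration:

```python
def normalise(scores):
    """Scale a dict of positive scores so the largest becomes 100."""
    top = max(scores.values())
    return {s: 100.0 * v / top for s, v in scores.items()}

def harm_scores(w_suffering=0.5, w_climate=0.5):
    """Weighted product of the two subscales, normalised to [0, 100]."""
    raw = {s: adjusted_suffering(s) ** w_suffering * climate_score(s) ** w_climate
           for s in SPECIES}
    return normalise(raw)

# Rank species from most to least harmful under equal weights.
for species, harm in sorted(harm_scores().items(), key=lambda kv: -kv[1]):
    print(species, round(harm, 1))
```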
## Why is there no health impacts subscale?

I considered designing a health subscale using data from the Global Burden of Disease (GBD) study, but eventually opted against it. Understanding nutrition is notoriously difficult. It is impossible to conduct trials that assess the long-term health impacts of diets, due to costs and ethical concerns with assigning people to diets with unknown health effects. Because of this, dietary decisions must be made based mostly on observational data, which is not as reliable as randomised controlled trials. Even well-studied questions, such as the impact of saturated fat consumption, have not been fully resolved [15, 16]. GBD data has been aggregated from a large number of sources by a large team. It might be close to the consensus of nutritional science, but even if we have reasons to trust the data, it would be difficult to use due to non-linear effects. If two people decided to eat one less chicken this year, their climate and animal welfare impact would be similar, even if one of them previously ate 10 chickens per year and the other ate a single chicken each year. Positive health effects of reduced meat consumption, on the other hand, have diminishing returns, and the reduction in health risks would depend on the current composition of each person's diet. Adequately planned reducetarian, vegetarian and vegan diets are believed to be healthy based on existing evidence [17], but positive effects over conventional diets are not considered in this tool, due to the uncertainty of the effects and the difficulties of modelling them.

## Limitations

Multiple issues with the method used to estimate suffering are outlined in [18]. The tool can be helpful for making decisions in the face of uncertainty, but it is not a true measure of harm. The model does not consider the suffering of wild animals, which could significantly exceed that of farm animals [19]. There is also no consideration of which plant-based products are substituted for animal products: farming of some plants causes less emissions [12] or less wild animal suffering [20].

The greatest limitation of the tool, in my view, is that setting subscale priorities based on intuitions can be misleading. It would make more sense to compare emissions and harm based on the underlying values which cause us to be concerned about these issues in the first place. If, for example, I am motivated by increased welfare, it would be helpful to estimate the welfare impacts of climate change and of factory farming on a common scale.

## Sources

| Parameter | Source |
| --- | --- |
| Lifespan | Dominik Peters |
| Production | Dominik Peters |
| Sleeping time | Dominik Peters |
| Pre-slaughter mortality | Dominik Peters |
| Neuron count | Dominik Peters |
| Brain mass | Dominik Peters |
| Elasticity factors | Animal Charity Evaluators [8]; Compassion, by the Pound [7] |
| Emissions | Dominik Peters; Cao et al. [10] |

### Food energy

| Animal | Product |
| --- | --- |
| Caged hen | Boiled egg |
| Cage-free hen | Boiled egg |
| Broiler | Cooked breast |
| Slow-growth broiler | Cooked breast |
| Pig | Cooked ground pork |
| Turkey | Cooked meat |
| Beef cow | Cooked ground beef |
| Dairy cow | Whole milk |
| Lamb | Cooked ground lamb |
| Salmon | Cooked salmon |
| Duck | Cooked duck |
| Shrimp | Cooked crustaceans, shrimp |
## References

[1] "Climate Change and Land — IPCC." https://www.ipcc.ch/report/srccl/.

[2] W. Willett, J. Rockström, B. Loken, M. Springmann, T. Lang, S. Vermeulen, T. Garnett, D. Tilman, F. DeClerck, A. Wood, and others, "Food in the Anthropocene: The EAT–Lancet Commission on healthy diets from sustainable food systems," The Lancet, vol. 393, no. 10170, pp. 447–492, 2019.

[3] "Statistics | The Vegan Society." https://www.vegansociety.com/news/media/statistics.

[4] B. Tomasik, "Is Brain Size Morally Relevant?" https://reducing-suffering.org/is-brain-size-morally-relevant/.

[5] C. Shulman, "How are brain mass (and neurons) distributed among humans and the major farmed land animals?" https://reflectivedisequilibrium.blogspot.com/2013/09/how-is-brain-mass-distributed-among.html.

[6] B. Tomasik, "How Much Direct Suffering Is Caused by Various Animal Foods." https://reducing-suffering.org/how-much-direct-suffering-is-caused-by-various-animal-foods/.

[7] F. B. Norwood and J. Lusk, Compassion, by the Pound: The Economics of Farm Animal Welfare. New York: Oxford University Press, 2011.

[8] Animal Charity Evaluators.

[9] K. Hamerschlag and K. Venkat, Meat Eater's Guide to Climate Change + Health: Lifecycle Assessments: Methodology and Results 2011. Environmental Working Group, 2011.

[10] L. Cao, J. S. Diana, G. A. Keoleian, and Q. Lai, "Life cycle assessment of Chinese shrimp farming systems targeted for export and domestic sales," Environmental Science & Technology, vol. 45, no. 15, pp. 6531–6538, 2011.

[11] C. L. Weber and H. S. Matthews, "Food-miles and the relative climate impacts of food choices in the United States." ACS Publications, 2008.

[12] J. Poore and T. Nemecek, "Reducing food's environmental impacts through producers and consumers," Science, vol. 360, no. 6392, pp. 987–992, 2018.

[13] M. Springmann, H. C. J. Godfray, M. Rayner, and P. Scarborough, "Analysis and valuation of the health and climate change cobenefits of dietary change," Proceedings of the National Academy of Sciences, vol. 113, no. 15, pp. 4146–4151, 2016.

[14] M. Allen, "Short-lived promise? The science and policy of cumulative and short-lived climate pollutants," University of Oxford, 2015.

[15] Y. Zhu, Y. Bo, and Y. Liu, "Dietary total fat, fatty acids intake, and risk of cardiovascular disease: A dose-response meta-analysis of cohort studies," Lipids in Health and Disease, vol. 18, no. 1, p. 91, 2019.

[16] L. Hooper, N. Martin, O. F. Jimoh, C. Kirk, E. Foster, and A. S. Abdelhamid, "Reduction in saturated fat intake for cardiovascular disease," Cochrane Database of Systematic Reviews, no. 5, 2020.

[17] W. J. Craig, A. R. Mangels, and others, "Position of the American Dietetic Association: Vegetarian diets," Journal of the American Dietetic Association, vol. 109, no. 7, pp. 1266–1282, 2009.

[18] H. Browning, "If I Could Talk to the Animals: Measuring Subjective Animal Welfare," PhD thesis, College of Arts and Social Sciences, The Australian National University, 2020.

[19] B. Tomasik, "The Importance of Wild-Animal Suffering," Foundational Research Institute, Apr. 2015.

[20] B. Tomasik, "Crop Cultivation and Wild Animals." https://reducing-suffering.org/crop-cultivation-and-wild-animals/.
2021-07-25 09:42:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 18, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4735884666442871, "perplexity": 3684.954988681087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151641.83/warc/CC-MAIN-20210725080735-20210725110735-00071.warc.gz"}
https://gmatclub.com/forum/biologists-working-in-spain-say-that-their-discovery-of-44731-20.html
# Biologists working in Spain say that their discovery of

daagh (Retired Moderator), 06 Jan 2013:

I am afraid there is an impression that a verb is plural when it has no s, and singular when it has an s. This is erroneous. Used with helping verbs, verbs appear only in their base form. The basic structure of verbs in the present tense:

- Base form: sing
- First person singular: I sing
- First person plural: we sing
- Second person singular: you sing
- Second person plural: you sing

In all the above cases, the base form is simply used with the subject, whether singular or plural. Now let's see how the structure changes in the third person:

- Third person singular (he, she, it): he sings, she sings, it sings, Tom sings
- Third person plural (they): they sing

You can see that the base form is used everywhere except in the third person singular, where the s-form is what we acceptably call the singular verb. Now, with a helping verb and a singular subject: I can sing, you can sing, he can sing, she can sing, it can sing. We do not say "I can sings" or "he can sings" just because the subject is singular. With a plural subject: we can sing, you can sing, they can sing, and not "we can sings" or "they can sings". In the given text, "broaden" and "show" are base forms used with the helping verb "may", not plural verbs.

Manager, 07 Jan 2013:

I'm not sure I know the answer, but the explanation from mikemcgarry helped a lot. I would say it is A.

Senior Manager, 01 Feb 2014:

Clear "A".

Intern, 22 Jul 2014:

Biologists working in Spain say that their discovery of teeming life in a highly acidic river may not only broaden the search for life, or for evidence of past life, no other planets but also show that a number of forms of microscopic life can adapt to conditions that scientists have long thought hostile to all but the hardiest bacteria.

A. show that a number of forms of microscopic life can adapt to conditions that scientists have long thought hostile to all but the hardiest bacteria
B. may show that a number of forms of microscopic life is capable of adapting to conditions that scientists have long thought hostile to all bacteria but the hardiest ones (the correct idiom is "not only X but also Y")
C. shows a number of forms of microscopic life to be capable to adapt to conditions that scientists have long thought had been hostile to all but the hardiest bacteria ("not only broaden but also shows" fails; "shows" is the singular form)
D. showing that a number of forms of microscopic life is capable of adapting to conditions that scientists have long thought had been hostile to all but the hardiest bacteria (incorrect use of the idiom "not only X but also Y")
E. showing that a number of forms of microscopic life can adapt to conditions that scientists have long thought hostile to all bacteria but the hardiest (incorrect use of the idiom "not only X but also Y")

daagh (Retired Moderator), 24 Jan 2016:

This apparently intimidating question seems to have a typo that needlessly distracts from the meaning: "no other planets" should read "on other planets". But the typo is largely irrelevant to the outcome. Setting it aside, this is just a play on the correlative conjunction "not only... but also" and parallelism. What appears after "not only" should match what appears after "but also" in structure and logic. So "not only broaden but also show" is a perfect choice; only A survives while the others fall. Now, about "may not only broaden but also shows": do we ever say "may broadens" or "will broadens"? When a verb is used with an auxiliary, we use the base form, even in the present tense. The real meaning here is "may not only broaden but also (may) show". It is a basic mistake to write "may not only broaden... but also shows". Therefore, A passes the grammar test convincingly.

Director, 19 Jul 2016:

It's funny that no one pointed out a huge typo in the sentence: "no other planets" should be read as "on other planets". The verb "broaden" in the non-underlined part, governed by "may", stays in the base form, so to maintain parallelism the correct verb is "show"; "shows" or "showing" would destroy the parallelism. Hence options C, D and E are incorrect. Since the correct idiom is "not only X but also Y", "not only broaden but also may show" is incorrect: "may" is misplaced in B. A is the best choice, with parallel verbs (broaden, show) and the correct idiom "not only broaden but also show".

Manager, 21 Jul 2016:

Between A and C: A is structurally much better. The only reason to choose C would be subject-verb agreement: "their discovery of teeming life... shows", singular subject with singular verb; but what follows "shows a number..." is not well formed, and "may not only broaden but also shows" sounds wrong. A reads "their discovery... may not only broaden but also show that...", which sounds correct, but then there is the apparent subject-verb issue: the subject, "their discovery", is singular, so how can the verbs be plural ("broaden" and "show")? Can someone please shed some light on this? This is a GMAT Prep question. I feel the cleanest way to write this would be "their discovery of teeming life... not only broadens but also shows that..." (removing "may" resolves the issue).

BSchool Forum Moderator, 11 Sep 2016:

daagh, sorry, but I am unable to understand your point that the base form must follow "may". Can you explain it with some examples? Other GC members, can anyone clarify this point?

Manager, 27 Sep 2016:

It's A. "Not only X but also Y", and only A is perfectly parallel.

Manager, 05 Jan 2017:

Straight A. The sentence agrees in subject-verb agreement: "Biologists" is plural and "show" is plural.

Mahmud6 (Senior Manager), 21 Jan 2017:

Can a phrase ("broaden the search for life") be parallel to a clause ("show that a number of forms of microscopic life can adapt")? Consider the following sentence, incorrect due to a parallelism issue: "Bengal-born writer, philosopher, and educator Rabindranath Tagore had the greatest admiration for Mohandas K. Gandhi not only as a person and as a politician, but Tagore was also skeptical of Gandhi's form of nationalism and his conservative opinions about India's cultural traditions."

Mahmud6 (Senior Manager), 29 Jan 2017:

Sorry, I was wrong. After careful review, I have found that "broaden" is a verb, so it is parallel to "show". The sentence in question is correct.

brandon7 (Intern), 07 Aug 2017:

Wait, how is "discovery" not singular? "Discoveries" is the plural version, which is why I picked C, since "shows" is singular.

Director, 07 Aug 2017:

Hello brandon7, let me see if I can help you here. The word "discovery" is indeed singular. However, what we want to focus on is the construction "may not only broaden... but also show". As daagh mentioned, when a verb is used with an auxiliary, we use the base form, even in the present tense; the real meaning here is "may not only broaden but also (may) show". The word "may" plays the crucial role: after "may", the base form of the verb must follow. We never say "may comes"; it is "may come". Not "may works"; it is "may work". Hope this helps!

Manager, 09 Aug 2017:

The answer is A: "not only broaden... but also show". Parallelism is maintained between "broaden" and "show".

Director, 12 Aug 2017:

The phrase "not only... but also..." requires that the words following "but also" have the same format and style as the words following "not only". In the prompt, the second part of the phrase requires a word parallel to "broaden". Only the correct answer, A, provides a parallel verb. It is grammatically incorrect to use a conjugated verb form after a modal verb such as may, might, should or must; we must always use unconjugated forms after them, e.g., "John should QUIT smoking" (not QUITS). The bare infinitive form of a verb is "to + VERB" with the "to" omitted. Modal verbs include may, might, must, can, could, etc. In the construction MODAL VERB + OTHER VERB, the other verb is required to be in its bare infinitive form:

Mary may attend the party.
Mary might attend the party.
Mary must attend the party.
Mary can attend the party.
Mary could attend the party.

In every case, "attend" is the bare infinitive form of "to attend" ("to attend" with the "to" omitted).

OA: "Biologists may... not only broaden... but also show." Here, "broaden" and "show" are the bare infinitive forms of "to broaden" and "to show".
2017-09-25 00:59:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45125696063041687, "perplexity": 5469.606056137211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690268.19/warc/CC-MAIN-20170925002843-20170925022843-00334.warc.gz"}
http://openmdao.readthedocs.io/en/latest/usr-guide/tutorials/paraboloid-tutorial.html
# Paraboloid Tutorial - Simple Optimization Problem

This tutorial will show you how to set up a simple optimization of a paraboloid. You'll create a paraboloid Component (with analytic derivatives), then put it into a Problem and set up an optimizer Driver to minimize an objective function.

Here is the code that defines the paraboloid and then runs it. You can copy this code into a file and run it directly.

```python
from __future__ import print_function

from openmdao.api import IndepVarComp, Component, Problem, Group


class Paraboloid(Component):
    """ Evaluates the equation f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3 """

    def __init__(self):
        super(Paraboloid, self).__init__()

        self.add_param('x', val=0.0)
        self.add_param('y', val=0.0)

        self.add_output('f_xy', shape=1)

    def solve_nonlinear(self, params, unknowns, resids):
        """f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3
        """
        x = params['x']
        y = params['y']

        unknowns['f_xy'] = (x-3.0)**2 + x*y + (y+4.0)**2 - 3.0

    def linearize(self, params, unknowns, resids):
        """ Jacobian for our paraboloid."""
        x = params['x']
        y = params['y']

        J = {}
        J['f_xy', 'x'] = 2.0*x - 6.0 + y
        J['f_xy', 'y'] = 2.0*y + 8.0 + x
        return J


if __name__ == "__main__":
    top = Problem()
    root = top.root = Group()

    root.add('p1', IndepVarComp('x', 3.0))
    root.add('p2', IndepVarComp('y', -4.0))
    root.add('p', Paraboloid())

    root.connect('p1.x', 'p.x')
    root.connect('p2.y', 'p.y')

    top.setup()
    top.run()

    print(top['p.f_xy'])
```

Now we will go through each section and explain how this code works.

## Building the component

```python
from __future__ import print_function

from openmdao.api import IndepVarComp, Component, Problem, Group
```

We need to import some OpenMDAO classes. We also import print_function to ensure compatibility between Python 2.x and 3.x. You don't need that import if you are running Python 3.x.

```python
class Paraboloid(Component):
```

OpenMDAO provides a base class, Component, which you should inherit from to build your own components and wrappers for analysis codes. Components can declare three kinds of variables: parameters, outputs and states. A Component operates on its parameters to compute unknowns, which can be explicit outputs or implicit states. For the Paraboloid Component, we will only be using explicit outputs.

```python
    def __init__(self):
        super(Paraboloid, self).__init__()

        self.add_param('x', val=0.0)
        self.add_param('y', val=0.0)

        self.add_output('f_xy', shape=1)
```

This code defines the input parameters of the Component, x and y, and initializes them to 0.0. These will be design variables which could be used to minimize the output when doing optimization. It also defines the explicit output, f_xy, but only gives it a shape. If shape is 1, the value is initialized to 0.0, a scalar. If shape is any other value, the value of the variable is initialized to numpy.zeros(shape, dtype=float).

```python
    def solve_nonlinear(self, params, unknowns, resids):
        """f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3

        Optimal solution (minimum): x = 6.6667; y = -7.3333
        """
        x = params['x']
        y = params['y']

        unknowns['f_xy'] = (x-3.0)**2 + x*y + (y+4.0)**2 - 3.0
```

The solve_nonlinear method is responsible for calculating outputs for a given set of parameters. The parameters are given in the params dictionary that is passed in to this method. Similarly, the outputs are assigned values using the unknowns dictionary that is passed in.

```python
    def linearize(self, params, unknowns, resids):
        """ Jacobian for our paraboloid."""
        x = params['x']
        y = params['y']

        J = {}
        J['f_xy', 'x'] = 2.0*x - 6.0 + y
        J['f_xy', 'y'] = 2.0*y + 8.0 + x
        return J
```

The linearize method is used to compute analytic partial derivatives of the unknowns with respect to params (partial derivatives in the OpenMDAO context refer to derivatives for a single component by itself).
The returned value, in this case J, should be a dictionary whose keys are tuples of the form ('unknown', 'param') and whose values are n-d arrays or scalars. Just like for solve_nonlinear, the values for the parameters are accessed using dictionary arguments to the function.

The definition of the Paraboloid Component class is now complete. We will now make use of this class to run a model.

## Setting up the model

```python
if __name__ == "__main__":
    top = Problem()
    root = top.root = Group()
```

An instance of an OpenMDAO Problem is always the top object for running a model. Each Problem in OpenMDAO must contain a root Group. A Group is a System that contains other Components or Groups. This code instantiates a Problem object and sets the root to be an empty Group.

```python
    root.add('p1', IndepVarComp('x', 3.0))
    root.add('p2', IndepVarComp('y', -4.0))
```

Now it is time to add components to the empty group. IndepVarComp is a Component that provides the source for a variable which we can later give to a Driver as a design variable to control. We created two IndepVarComps (one for each param on the Paraboloid component), gave them names, and added them to the root Group. The add method takes a name as the first argument and a Component instance as the second argument. The numbers 3.0 and -4.0 are values chosen for each as starting points for the optimizer.

Note: Take care setting the initial values; in some cases, different initial points for the optimization will lead to different results.

```python
    root.add('p', Paraboloid())
```

Then we add the paraboloid using the same syntax as before, giving it the name 'p'.

```python
    root.connect('p1.x', 'p.x')
    root.connect('p2.y', 'p.y')
```

Then we connect up the outputs of the IndepVarComps to the parameters of the Paraboloid. Notice the dotted naming convention used to refer to variables. So, for example, p1 represents the first IndepVarComp that we created to set the value of x, and so we connect that to parameter x of the Paraboloid. Since the Paraboloid is named p and has a parameter x, it is referred to as p.x in the call to the connect method.

Every problem has a Driver, and for most situations we would want to set a Driver for the Problem using code like this:

```python
top.driver = SomeDriver()
```

For this very simple tutorial, we do not need to set a Driver; we will just use the default, built-in driver, which is Driver. (Driver also serves as the base class for all Drivers.) Driver is the simplest driver possible, running a Problem once.

```python
    top.setup()
```

Before we can run our model we need to do some setup. This is done using the setup method on the Problem. This method performs all the setup of vector storage, data transfer, etc., necessary to perform calculations. Calling setup is required before running the model.

```python
    top.run()
```

Now we can run the model using the run method of Problem.

```python
    print(top['p.f_xy'])
```

Finally, we print the output of the Paraboloid Component using the dictionary-style method of accessing variables on the problem instance.

Putting it all together:

```python
top = Problem()
root = top.root = Group()

root.add('p1', IndepVarComp('x', 3.0))
root.add('p2', IndepVarComp('y', -4.0))
root.add('p', Paraboloid())

root.connect('p1.x', 'p.x')
root.connect('p2.y', 'p.y')

top.setup()
top.run()

print(top['p.f_xy'])
```

The output should look like this:

```
-15.0
```

The IndepVarComp component is used to define a source for an unconnected param that we want to use as an independent variable that can be declared as a design variable for a driver. In our case, we want to optimize the Paraboloid model, finding values for 'x' and 'y' that minimize the output 'f_xy'. Sometimes we just want to run our component once to see the result.
Similarly, sometimes we have params that will be constant through our optimization and thus don't need to be design variables. In either of these cases, the IndepVarComp is not required, and we can build our model while leaving those parameters unconnected. All unconnected params use their default value as the initial value. You can set the values of any unconnected params the same way as any other variables, by doing the following (the `promotes` argument, implied by the promoted names discussed below, is restored here):

```python
top = Problem()
root = top.root = Group()

root.add('p', Paraboloid(), promotes=['x', 'y'])

top.setup()

# Set values for x and y
top['x'] = 5.0
top['y'] = 2.0

top.run()

print(top['p.f_xy'])
```

This can only be done after setup is called. Note that the promoted names 'x' and 'y' are used.

The new output should look like this:

```
47.0
```

## Optimization of the Paraboloid

Now that we have the paraboloid model set up, let's do a simple unconstrained optimization. Let's find the minimum point on the Paraboloid over the variables x and y. This requires the addition of just a few more lines. First, we need to import the optimizer.

```python
from openmdao.api import ScipyOptimizer
```

The main optimizer built into OpenMDAO is a wrapper around Scipy's minimize function. OpenMDAO supports 9 of the optimizers built into minimize. The ones that will be most frequently used are SLSQP and COBYLA, since they are the only two in the minimize package that support constraints. We will use SLSQP because it supports OpenMDAO-supplied gradients.

The driver setup below restores the add_param and add_objective calls that the following explanation describes:

```python
top = Problem()
root = top.root = Group()

# Initial value of x and y set in the IndepVarComp.
root.add('p1', IndepVarComp('x', 3.0))
root.add('p2', IndepVarComp('y', -4.0))
root.add('p', Paraboloid())

root.connect('p1.x', 'p.x')
root.connect('p2.y', 'p.y')

top.driver = ScipyOptimizer()
top.driver.options['optimizer'] = 'SLSQP'

top.driver.add_param('p1.x', low=-50, high=50)
top.driver.add_param('p2.y', low=-50, high=50)
top.driver.add_objective('p.f_xy')

top.setup()

# You can also specify initial values post-setup
top['p1.x'] = 3.0
top['p2.y'] = -4.0

top.run()

print('\n')
print('Minimum of %f found at (%f, %f)' % (top['p.f_xy'], top['p.x'], top['p.y']))
```

Every driver has an options dictionary which contains important settings for the driver. These settings tell ScipyOptimizer which optimization method to use, so here we select 'SLSQP'. For all optimizers, you can specify a convergence tolerance 'tol' and a maximum number of iterations 'maxiter'.

Next, we select the parameters the optimizer will drive by calling add_param and giving it the IndepVarComp unknowns that we have created. We also set high and low bounds for this problem. It is not required to set these (they default to -1e99 and 1e99, respectively), but it is generally a good idea.

Finally, we add the objective. You can use any unknown in your model as the objective.

Once we have called setup on the model, we can specify the initial conditions for the design variables just like we did with unconnected params.

Since SLSQP is a gradient-based optimizer, OpenMDAO will call the linearize method on the Paraboloid while calculating the total gradient of the objective with respect to the two design variables. This is done automatically.

Finally, we made a change to the print statement so that we can print the objective and the parameters. This time, we get the value by keying into the problem instance ('top') with the full variable path to the quantities we want to see. This is equivalent to what was shown in the first tutorial.

Putting this all together, when we run the model, we get output that looks like this (note that the optimizer may print some things before this, depending on settings):

```
Minimum of -27.333333 found at (6.666667, -7.333333)
```

## Optimization of the Paraboloid with a Constraint

Finally, let's take this optimization problem and add a constraint to it.
Our constraint takes the form of an inequality we want to satisfy: x - y >= 15. First, we need to add one more import to the beginning of our model.

```python
from openmdao.api import ExecComp
```

We'll use an ExecComp to represent our constraint in the model. An ExecComp is a shortcut that lets us easily create a component that defines a simple expression for us. The listing below restores the component and driver calls that the explanation following it describes:

```python
top = Problem()
root = top.root = Group()

root.add('p1', IndepVarComp('x', 3.0))
root.add('p2', IndepVarComp('y', -4.0))
root.add('p', Paraboloid())

# Constraint Equation
root.add('con', ExecComp('c = x - y'))

root.connect('p1.x', 'p.x')
root.connect('p2.y', 'p.y')
root.connect('p.x', 'con.x')
root.connect('p.y', 'con.y')

top.driver = ScipyOptimizer()
top.driver.options['optimizer'] = 'SLSQP'

top.driver.add_param('p1.x', low=-50, high=50)
top.driver.add_param('p2.y', low=-50, high=50)
top.driver.add_objective('p.f_xy')
top.driver.add_constraint('con.c', lower=15.0)

top.setup()
top.run()

print('\n')
print('Minimum of %f found at (%f, %f)' % (top['p.f_xy'], top['p.x'], top['p.y']))
```

Here, we added an ExecComp named 'con' to represent part of our constraint inequality. Our constraint is "x - y >= 15", so we have created an ExecComp that evaluates the expression "x - y" and places the result into the unknown 'con.c'. To complete the definition of the constraint, we also need to connect our 'con' expression to 'x' and 'y' on the paraboloid. Finally, we need to tell the driver to use the unknown "con.c" as a constraint, using the add_constraint method. This method takes the name of the variable and an "upper" or "lower" bound. Here we give it a lower bound of 15, which completes the inequality constraint "x - y >= 15".

OpenMDAO also supports the specification of double-sided constraints, so if you wanted to constrain x - y to lie in a band between 15 and 16, which is "16 > x - y > 15", you would just do the following:

```python
top.driver.add_constraint('con.c', lower=15.0, upper=16.0)
```

So now, putting it all together, we can run the model and get this:

```
Minimum of -27.083333 found at (7.166667, -7.833333)
```

A new optimum is found because the original one was infeasible (i.e., that design point violated the constraint equation).
https://science.sciencemag.org/content/282/5393/r-samples
Random Samples. Science, 20 Nov 1998: Vol. 282, Issue 5393, p. 1409

1. Music as Food for the Brain

Music can lift the spirit—and rewire the brain. Two studies presented in Los Angeles last week at the annual meeting of the Society for Neuroscience show surprising effects of music on the cerebellum, as well as intriguing new functions for this basic brain structure.

The cerebellum controls balance and muscle coordination in all vertebrates. But when neuroscientists Lawrence Parsons and Peter Fox of the University of Texas Health Science Center in San Antonio ran positron emission tomography scans on eight conductors as they listened to a Bach chorale while following the score, they found that the cerebellum is also involved in interpreting rhythm. When the rhythm was altered to differ from the score, the conductors' cerebellar blood flow surged, indicating that an unexpected sensory, nonmotor function was occurring in the region.

The cerebellum's ear for music doesn't end there. According to results presented by neurologist Gottfried Schlaug of the Beth Israel Deaconess Medical Center in Boston, it also responds to training. To test whether years of playing an instrument altered the brain, Schlaug and his colleagues compared cerebellum volume in 90 musicians and nonmusicians. The cerebella of male musicians, they found, were 5% larger than those of male nonmusicians, suggesting that many years of precise finger exercise had stimulated extra nerve growth. It appears that "processing of music is much more distributed than one would expect from simple anatomy," says auditory physiologist Hubert Dinse at Ruhr University in Bochum, Germany.

Music is a tonic for another brain region, too. In a study of 60 female college students described in the 12 November Nature, scientists at the Chinese University of Hong Kong found that those who studied music as children remembered "significantly more" after a list of words was read to them than did those with no music training. This, they said, fits with earlier research showing that among musicians, the planum temporale, a brain region involved in language, is enlarged.

2. Making e Easy

Any study of exponential growth—from bacterial populations to interest rates—depends on a fundamental constant called e. Because this number (often rounded to 2.718) can't be expressed as a fraction, scientists must estimate it with an approximate formula. Now a self-taught inventor and a meteorology professor describe in the October issue of Mathematical Intelligencer several new formulas for e and use them to calculate it to thousands of decimal places with a desktop computer.

For both bankers and bugs, e describes a basic limit to exponential growth. For example, if you invested $1 at 100% interest, compounded monthly, you would have $2.61 in a year. If the interest were compounded every 30 seconds, you would end with about a dime more. No matter how frequently you earned interest, you could never take home more than e multiplied by the number of dollars you first deposited.

Harlan Brothers and John Knox, a meteorologist at Valparaiso University in Indiana, derived their first approximation by averaging a simple textbook formula, (1 + 1/n)^n, that slightly underestimates e, with another, (1 - 1/n)^(-n), that slightly overestimates it. This doubled the number of correct decimal places (the higher the n, the more decimal places can be achieved). With further tinkering, they were able to improve the accuracy sixfold.
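A quick numerical sketch of the averaging idea (a minimal illustration, not from the article; the choice n = 1000 is arbitrary):

import math

n = 1000.0
under = (1 + 1/n)**n         # ~2.716924, too small by roughly e/(2n)
over  = (1 - 1/n)**(-n)      # ~2.719642, too large by roughly the same amount
avg   = (under + over) / 2   # ~2.718283, the O(1/n) errors cancel
print(under, over, avg, math.e)

The two raw formulas are correct to about 3 decimal places at n = 1000, while their average is correct to about 6, consistent with the "doubling" claim.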
The new formulas would require too much computer memory to challenge the most accurate estimate of e, which is already known to 50 million decimal places, says numerical analyst Simon Plouffe of Hydro-Quebec in Montreal, holder of several numerical computation records. That doesn't worry Knox, who says "What we've done is bring mathematics back to the people," by demonstrating that amateurs can find fresh ways of representing e. "I'd like college math teachers to add it to the curriculum" to show students that the textbooks don't always have the last word.

3. From ESO to NRAO

Riccardo Giacconi, director-general of the European Southern Observatory (ESO) for the past 6 years, will move to Washington, D.C., next July to assume the presidency of Associated Universities Inc., which, with the National Science Foundation, manages the National Radio Astronomy Observatory (NRAO). His duties will include presiding over the start-up of the world's largest fully steerable radio telescope, in Green Bank, West Virginia, late next year.

4. PNAS's Flexible Embargo

The Proceedings of the National Academy of Sciences (PNAS) last week decided to lift an embargo and distribute an uncorrected manuscript so reporters could have access on the same day that a paper on a similar topic appeared in Science—a sign, some observers say, of stresses that are undermining the embargo system (Science, 30 October, p. 860). Knowing that a paper on natural substances that block blood vessel formation, or angiogenesis inhibitors, was coming out in Science (13 November, p. 1324), molecular biologist Jun Liu at the Massachusetts Institute of Technology "suggested it would be appropriate" to let his paper out early even though it's not due for publication until January, explains academy press officer Susan Turner-Lowe. "We couldn't see the logic" of withholding it, she says—although she admits that "it would drive us crazy" if PNAS decided to distribute every draft paper at an author's request.
https://zbmath.org/?q=an:1058.34012
## On multi-point boundary value problems for linear ordinary differential equations with singularities. (English) Zbl 1058.34012

The authors investigate the singular linear differential equation $u^{(n)}= \sum_{i=1}^n p_i(t)u^{(i-1)}+q(t) \tag{1}$ on $[a,b]\subset \mathbb{R}$, where the functions $p_i$ and $q$ can have singularities at $t=a$, $t=b$ and $t=t_0\in (a,b)$. This means that $p_i$ and $q$ are not integrable on $[a,b]$. Equation (1) is studied with the boundary conditions $u^{(i-1)}(t_0)=0 \text{ for } 1\leq i\leq n-1,\;\sum_{j=1}^{n-n_1}\alpha_{1j}u^{(j-1)}(t_{1j}) + \sum_{j=1}^{n-n_2}\alpha_{2j}u^{(j-1)}(t_{2j})=0, \tag{2}$ or $u^{(i-1)}(a)=0\text{ for }1\leq i\leq n-1,\;\sum_{j=1}^{n-n_0}\alpha_{j}u^{(j-1)}(t_{j})=0, \tag{3}$ where $t_{1j}, t_{2j}, t_j$ are certain interior points of $(a,b)$.

The authors introduce the Fredholm property for these problems, which means that the unique solvability of the corresponding homogeneous problem implies the unique solvability of the nonhomogeneous problem for every $q$ that is weight-integrable on $[a,b]$. Then, for the solvability of a problem having the Fredholm property, it suffices to show that the corresponding homogeneous problem has only the trivial solution. In this way, the authors prove the main theorems on the existence of a unique solution of (1),(2) and of (1),(3). Examples verifying the optimality of the conditions in various corollaries are shown as well.

### MSC:

34B10 Nonlocal and multipoint boundary value problems for ordinary differential equations
34B05 Linear boundary value problems for ordinary differential equations
34B16 Singular nonlinear boundary value problems for ordinary differential equations
http://www.tutorcircle.com/distance-from-a-point-to-a-line-t8IUp.html
# Distance From A Point To A Line

The distance between two points is given by D = √(dp² + dq²), where D is the distance, dp is the difference between the x-coordinates of the points, and dq is the difference between the y-coordinates of the points. Whenever we have the coordinates of two points, we can use this formula to find the distance between them; equivalently, the length of the line segment joining the two points is the distance between them.

For two points with coordinates (p1, q1) and (p2, q2), the Pythagorean theorem gives the distance:

D = √((p2 - p1)² + (q2 - q1)²)

For two points in three dimensions, (p1, q1, r1) and (p2, q2, r2), the distance is:

D = √((p2 - p1)² + (q2 - q1)² + (r2 - r1)²)

In general, the distance between two points p and q with n coordinates each is:

D = |p - q| = √(∑ from a = 1 to n of (p_a - q_a)²)

Example: suppose two points have coordinates (5, 8) and (-9, -3); find the distance between them. Here p1 = 5, p2 = -9, q1 = 8, q2 = -3. Substituting into the formula:

D = √((-9 - 5)² + (-3 - 8)²) = √((-14)² + (-11)²) = √(196 + 121) = √317 ≈ 17.80

So the distance between the two points is about 17.80. (Note that this is the distance between two points; the distance from a point to a line is a different quantity, given below.)

## Using Trigonometry

There are several problems in mathematics that involve the use of trigonometry for finding quantities such as the height of a tower, a pole, or another vertical distance, angles of elevation and depression, and horizontal distances. Trigonometry can be categorized as: 1. Core 2. Plane 3. Spherical 4. Analytic

## Using Two Line Equations

A line is a straight one-dimensional figure. Linear relationships between variables can be represented by drawing a line. Using two line equations, we can find out whether two lines are parallel to each other or not: if two lines lying in a plane yield an intersection point when their equations are solved together (i.e., values of the unknown variables exist), then the lines are not parallel.

## When the Line is Horizontal and Vertical

A vertical line is a line whose x-coordinate stays the same while the y-coordinate changes; it goes straight up and down, parallel to the y-axis of the coordinate plane. All points on the line have the same x-coordinate. The slope of a vertical line is undefined.
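For completeness, here is the standard point-to-line formula itself, which the discussion above never states. The distance from a point (x0, y0) to the line ax + by + c = 0 is:

D = |a·x0 + b·y0 + c| / √(a² + b²)

For example, the distance from the point (5, 8) to the line x - y + 1 = 0 (a line chosen purely for illustration) is |5 - 8 + 1| / √(1² + (-1)²) = 2/√2 = √2 ≈ 1.41.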
https://nbviewer.jupyter.org/github/QuantumBFS/tutorials/blob/gh-pages/dev/generated/quick-start/1.prepare-ghz-state/main.ipynb
# Prepare Greenberger–Horne–Zeilinger state with Quantum Circuit

First, you have to use this package in Julia.

In [ ]:
using Yao

Now, we just define the circuit according to the circuit diagram (the image itself is not reproduced here):

In [ ]:
circuit = chain(
    4,
    put(1=>X),
    repeat(H, 2:4),
    control(2, 1=>X),
    control(4, 3=>X),
    control(3, 1=>X),
    control(4, 3=>X),
    repeat(H, 1:4),
)

Let me explain what happens here.

## Put single qubit gate X at location 1

We have an X gate applied to the first qubit. We need to tell Yao to put this gate on the first qubit by

In [ ]:
put(4, 1=>X)

We use Julia's Pair to denote the gate and its location in the circuit; for a two-qubit gate, you could also use a tuple of locations:

In [ ]:
put(4, (1, 2)=>swap(2, 1, 2))

But wait, why is there no 4 in the definition above? This is because all the functions in Yao that require the number of qubits as their first argument can be lazy (curried), letting other constructors infer the total number of qubits later, e.g.

In [ ]:
put(1=>X)

which will return a lambda that asks for a single argument n.

In [ ]:
put(1=>X)(4)

## Apply the same gate at different locations

Next, we should put Hadamard gates on all locations except the 1st qubit. We provide repeat to apply the same block repeatedly; repeat can take an iterator of desired locations and, like put, we can leave the total number of qubits to be inferred.

In [ ]:
repeat(H, 2:4)

## Define control gates

In Yao, we can define controlled gates by feeding a gate to control:

In [ ]:
control(4, 2, 1=>X)

Like many others, you can leave out the total number of qubits and let it be inferred later. This will create a ControlBlock.

In [ ]:
control(2, 1=>X)

## Compose the parts together

The concept of a block in Yao basically just means a quantum operator. Since the quantum circuit itself is a quantum operator, we can create a quantum circuit by composing its parts. Here, we use chain to chain each part together; a chain of quantum operators means applying each operator one by one in the chain. This will create a ChainBlock.

In [ ]:
circuit = chain(
    4,
    put(1=>X),
    repeat(H, 2:4),
    control(2, 1=>X),
    control(4, 3=>X),
    control(3, 1=>X),
    control(4, 3=>X),
    repeat(H, 1:4),
)

You can check its type with typeof

In [ ]:
typeof(circuit)

## Construct GHZ state from 00...00

For simulation, we provide a built-in register type called ArrayReg; we will use this simulated register in this example. First, let's create |00⋯00⟩. You can create it with zero_state

In [ ]:
zero_state(4)

Or we also provide bit string literals to create arbitrary eigenstates

In [ ]:
ArrayReg(bit"0000")

They will both create a register with Julia's built-in Array as storage.

## Feed Registers to Circuits

Circuits can be applied to registers with apply!

In [ ]:
apply!(zero_state(4), circuit)

or you can use the pipe operator |> when you want to chain several operations together. Here we measure the state right after the circuit, 1000 times:

In [ ]:
results = zero_state(4) |> circuit |> r->measure(r, nshots=1000)

using StatsBase, Plots
hist = fit(Histogram, Int.(results), 0:16)
bar(hist.edges[1] .- 0.5, hist.weights, legend=:none)

The GHZ state will collapse to |0000⟩ or |1111⟩.

This notebook was generated using Literate.jl.
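Since the cells above are shown unexecuted, a quick way to convince yourself of the two-outcome structure without plotting is to tally the shots. This sketch is an addition to the notebook, reusing only calls that already appear above plus base Julia:

In [ ]:
results = zero_state(4) |> circuit |> r->measure(r, nshots=1000)
counts = Dict{Int,Int}()
for r in Int.(results)
    counts[r] = get(counts, r, 0) + 1
end
counts   # expect only the keys 0 (|0000⟩) and 15 (|1111⟩), roughly 500 shots each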
https://comresult.com/disadvantages-of-ibsp/d1a247-into-function-is-also-called
# Into function is also called

In the notation f : A → B, the set A is called the domain and B the codomain of the function f.

Onto function (surjective function): a function f from a set A to a set B is an onto function if each element of B is the image of some element of A; that is, if for each b ∈ B there exists at least one a ∈ A such that f(a) = b. A function that is not onto (one whose image is a proper subset of its codomain) is called an into function. A one-to-one function is also called an injection, and a function that is both injective and surjective is called a bijection.

Example: let A = {1, 2, 3}, B = {4, 5} and let f = {(1, 4), (2, 5), (3, 5)}. Both 4 and 5 occur as values of f, so f is an onto function; it is not one-to-one, since f(2) = f(3) = 5.

If f is a bijection, then the function g is called the inverse function of f, denoted f⁻¹, if for every element y of B, g(y) = x, where f(x) = y. Note that such an x is unique for each y because f is a bijection; indeed, f is bijective if and only if it admits an inverse function.

In programming terms: the values that are sent into a function are called arguments, the special variables that hold copies of the function arguments are called parameters, and the result that a function gives back is called its return value.
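To make the onto and one-to-one checks concrete, here is a minimal Python sketch, assuming a finite function represented as a dict from domain elements to values:

def is_onto(f, codomain):
    # onto (surjective): every codomain element occurs as a value
    return set(f.values()) == set(codomain)

def is_one_to_one(f):
    # injective: no two domain elements share a value
    return len(set(f.values())) == len(f)

f = {1: 4, 2: 5, 3: 5}
print(is_onto(f, {4, 5}))    # True: f is onto
print(is_one_to_one(f))      # False: f(2) == f(3)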
https://scholars.bgu.ac.il/display/n6399367
## On spaces of bounded $q$-variation in dimension $N$

Abstract: Motivated by the formula, due to Bourgain, Brezis and Mironescu, \begin{equation*} \lim_{\varepsilon\to 0^+} \int_\Omega\int_\Omega \frac{|u(x)-u(y)|^q}{|x-y|^q}\,\rho_\varepsilon(x-y)\,dx\,dy=K_{q,N}\|\nabla u\|_{L^{q}}^q\,, \end{equation*} that characterizes the functions in $L^q$ that belong to $W^{1,q}$ (for $q>1$) and $BV$ (for $q=1$), respectively, we study what happens when one replaces the denominator in the expression above by $|x-y|$. It turns out that, for $q>1$, the resulting expression gives rise to a new space that we denote by $BV^q(\Omega)$. We show, among other things, that $BV^q(\Omega)$ contains both the spaces $BV(\Omega)\cap L^\infty(\Omega)$ and $W^{1/q,q}(\Omega)$. We also present applications of this space to the study of singular perturbation problems of Aviles-Giga type.

Publication date: January 1, 2017
https://plainmath.net/2571/difference-probability-distribution-continuous-probability-distribution
# What is the difference between a discrete probability distribution and a continuous probability distribution? Give your own example of each.

Jozlyn

A discrete distribution is one in which the data can only take on certain values, for example integers. A continuous distribution is one in which data can take on any value within a specified range (which may be infinite).

For a discrete distribution, probabilities can be assigned to the individual values in the distribution; for example, "the probability that the web page will have 12 clicks in an hour is 0.15." In contrast, a continuous distribution has an infinite number of possible values, and the probability associated with any particular single value of a continuous distribution is zero. Therefore, continuous distributions are normally described in terms of a probability density, which can be converted into the probability that a value will fall within a certain range.
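To make the contrast concrete, here is a minimal Python sketch; the fair die and the uniform variable are illustrative choices, not taken from the answer above:

from fractions import Fraction

# Discrete: a fair die. Each single value carries positive probability.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}
print(pmf[3])                 # P(X = 3) = 1/6
print(sum(pmf.values()))      # total probability is 1

# Continuous: U uniform on [0, 1], with density f(u) = 1.
# A single point has probability 0; only intervals carry probability.
a, b = 0.25, 0.75
print(b - a)                  # P(a <= U <= b) = integral of 1 du from a to b = 0.5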
http://www.mzan.com/article/40271402-how-to-detect-if-user-cancel-auto-renewable-subscriptions-during-the-free-trial.shtml
How to detect if a user cancels an auto-renewable subscription during the free trial period?

Apple's documentation does not seem to mention this. So if a user cancels an auto-renewable subscription purchased during the free trial period, how do we detect that? In the App Store receipt JSON there is this field: is_trial_period. But I think this is an indication of whether the free trial period is over. The only thing I can think of is NSBundle.mainBundle().appStoreReceiptURL?.path, and if this is nil then that will indicate the user has not subscribed or has canceled within the free trial period. But for sandbox testing, there is no way to cancel during the free trial period, so this scenario cannot be tested. Does anyone have solid knowledge of this?
https://socratic.org/questions/how-do-you-graph-2sinx-2
# How do you graph -2sinx+2?

$-2\sin x + 2$ can be graphed by starting with $\sin x$. The $-2$ tells us two things: (a) the amplitude is 2, and (b) the graph of $\sin x$ is reflected over the x-axis. Finally, the $+2$ shifts the entire graph vertically upward 2 units, so the curve oscillates between $y = 0$ and $y = 4$ with its period unchanged at $2\pi$.
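One way to sketch a full cycle is to mark the five key points: at $x = 0$, $y = -2\sin 0 + 2 = 2$; at $x = \pi/2$, $y = -2 + 2 = 0$; at $x = \pi$, $y = 2$; at $x = 3\pi/2$, $y = 2 + 2 = 4$; and at $x = 2\pi$, $y = 2$.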
http://tex.stackexchange.com/questions/30919/equation-reference-with-cleveref/30922
# Equation-reference with cleveref

I am using cleveref, which seems to be working fine, except for my equation references, which are referenced as ??. My equation is:

\begin{equation}
\label{ex2eqn}
Y_{t} = \epsilon_t + 1.8Y_{t-1} + 0.8Y_{t-2} + 0.2Y_{t-6} - 0.36Y_{t-7} + 0.16Y_{t-8}
\end{equation}

and my reference is

This transformation is done by inserting $Y_{t-1}$ into \Cref{ex2eqn}, then $Y_{t-2}$ and so on.

A minimal working example is:

\documentclass[11pt]{article}
\usepackage{amsmath}
\usepackage[thmmarks]{ntheorem}
\usepackage{cleveref}
\begin{document}
\begin{equation}
\label{ex2eqn}
e=mc^2
\end{equation}
A reference: \Cref{ex2eqn}.
\end{document}

• If I supplement your code snippets to a full example, I can't comprehend the problem. Please provide a minimal example that clearly reproduces the described behaviour. – Thorsten Donig Oct 8 '11 at 12:55
• You need to compile twice to have the references set. Did you do that? – Andro Oct 8 '11 at 13:19
• I did. I am sorry I did not supply a minimal working example in the beginning. Now that I have, it seems the problem is rather odd, and has something to do with the combination of thmmarks and amsmath... – utdiscant Oct 8 '11 at 13:22

Load ntheorem with the amsmath (compatibility) package option.

\documentclass[11pt]{article}
\usepackage{amsmath}
\usepackage[amsmath,thmmarks]{ntheorem}
\usepackage{cleveref}
\begin{document}
\begin{equation}
\label{ex2eqn}
e=mc^2
\end{equation}
A reference: \Cref{ex2eqn}.
\end{document}

• I found that loading the ntheorem package (without the additional amsmath option) before the amsmath package also fixed it. – cmhughes Oct 8 '11 at 15:44
• @cmhughes: I doubt that this won't break other things. See section 3.2.1 of the ntheorem manual. – lockstep Oct 8 '11 at 15:47
• Thanks for the reference! – cmhughes Oct 8 '11 at 15:51
http://mathoverflow.net/questions/130447/collapsing-successor-of-singular
# Collapsing successor of singular

Let $\lambda$ be a singular cardinal. Is it consistent that there is a forcing of size $\lambda^+$ that collapses $\lambda^+$ while preserving all cardinals below $\lambda$? (Note that even without the size requirement this implies a failure of the Jensen covering property, so such a forcing does not necessarily exist.)

Theorem. Suppose $\kappa$ is a regular uncountable cardinal and $|P|\leq \kappa$. Then $\Vdash_P \operatorname{cf}(\kappa)=|\kappa|$.

Isn't the stationary tower forcing much larger than $\lambda^+$? – Monroe Eskew May 13 '13 at 15:32
https://math.stackexchange.com/questions/925506/success-in-maths-soft-question/925516
# Success in maths (soft question)

I would really like to hear from any professional mathematicians who didn't just sail through their university education. If one looks at the pages of many of today's mathematicians, one finds that they usually aced their university exams (often coming top of the year at prestigious universities etc). This can be disheartening (at times) for someone who hasn't followed that path. Is there actually a non-negligible hope for such people? Thank you.

• Not a prof mathematician, so I won't post this as an answer. There is a huge difference between: 1) a person who is able to do well in an exam under time pressure, where there is a defined answer and it is clear what skills are needed to answer the question; 2) a good research mathematician, who is someone able to tackle a question that has never been tackled before; where a defined answer may not even exist; where you may end up learning an entire new branch of mathematics just to try an approach. Often there is a correlation between these two different skills, but this is not always the case – Mathmo123 Sep 9 '14 at 22:42
• I'm not particularly great at playing the guitar, but I do enjoy playing. However, I don't plan on making a career out of it either. – Jonny Sep 9 '14 at 22:48
• @gragas What good does it do to tell the OP to start acing his tests? Why should he feel good/bad or anything from the result of some test? – Rustyn Sep 9 '14 at 22:55
• @Rustyn I would say I didn't mean to be so harsh, but I did. He specifically said that it "can be disheartening for someone who hasn't followed this path [acing exams]," which implies he doesn't ace exams and it's disheartening for him. Maybe if he dedicated more time to his classes, and started acing the exams, he wouldn't feel so disheartened. This is just a thought. – user168210 Sep 9 '14 at 23:02
• @gragas You mean, "She specifically said..." – user174896 Sep 9 '14 at 23:06

I didn't always ace my exams in school. Being a mathematician has nothing to do with test scores. My advice is to make mathematics personal. It's not about anybody else but you. In my opinion, it's best to compete with yourself. Try and have your own relationship with mathematics, understand it in a way that is your own, and, as always, engage mathematics because it gives you great joy.

• @Mathmo123 No. You're implying that a mathematician is good if and only if they're bad at taking tests, which is false. A more accurate statement would be "a good mathematician is not necessarily a good test taker". – beep-boop Sep 9 '14 at 23:05
• @alexqwx is pedantry like that really necessary? I don't think I implied that at all, and nor is your correction what I mean to say. My intention is: a good mathematician is not necessarily a good test taker and a good test taker is not necessarily a good mathematician. They are 2 completely different skills – Mathmo123 Sep 9 '14 at 23:09
• @alexqwx Let $A = \{1,2,3\}$ and $B = \{2,3,4\}$. Then $A \ne B$ is a true statement. It does not however mean that $x \in A \iff x \not\in B$. I don't know what you're trying to prove here. – Mathmo123 Sep 9 '14 at 23:23
• When I read Mathmo's comment, I understood it the way he says he meant it. – Dave Sep 10 '14 at 0:16
• @alexqwx Good mathematician $\not \Rightarrow$ Good test taker. This is what you were seeking I imagine? – user142198 Sep 10 '14 at 3:30

1) Don't worry about other people.
Chances are, the successful people you refer to got that way from thinking about their work and not by focusing on the performance of others. I often found myself falling into this trap as an undergraduate. I'd say about half of what should have been math time was spent worrying about my relative abilities. Not coincidentally, my absolute abilities improved dramatically when I focused on work alone. It's not easy, but it's worth it. (Also, for every unimpeded genius you find, there's another mathematician, just as successful, with a rockier path.)

2) Don't be afraid to do mathematical work outside of pure mathematics.

You may feel comfortable in a computational field outside of pure mathematics. I studied pure math for my B.A. and M.A. but then switched to an applied field for my PhD. I sense less competition in the applied fields because of the large, messy and probably insoluble problems applied scientists all face. I still get to do lots of rewarding and advanced mathematics, without what could be called the sanctimonious attitudes of some people in the pure domain. Since my switch, I've felt much better about my abilities. Judging by feedback, my abilities have improved as a result. Maybe this would work for you too.

• Totally agree with you MRicci, great answer. +1 – Rustyn Sep 9 '14 at 22:49
• Plenty of mathematicians get published in the likes of IEEE PAMI or Automatica. There are quite a few open engineering problems waiting for an applied mathematician to solve. – Damien Sep 10 '14 at 9:31

I wanted to post the excerpt below somewhere because I think many will find it interesting and, as far as I can tell, it seems to be nowhere on the internet (see my comments here). After several minutes of searching, this question wound up being the closest fit that I was able to find. What follows are the italicized (except for 9 words) introductory comments to Part 3. Mathematics in Isaac Asimov's 1969 book Opus 100 (amazon.com page and Wikipedia page)—pages 89−91 in my 1969 Dell paperback version and, I think, pages 87−90 in the 1969 Houghton Mifflin hardback version. Asimov has three more personal commentaries in Part 3 (pages 94−95, 102−104, 113−115 in my paperback version), but those other commentaries veer away from the main topic of what follows.

WHEN I WAS IN GRADE SCHOOL, I had an occasional feeling that I might be a mathematician when I grew up. I loved the math classes because they seemed so easy. As soon as I got my math book at the beginning of a new school term, I raced through it from beginning to end, found it all beautifully clear and simple, and then breezed through the course without trouble.

It is, in fact, the beauty of mathematics, as opposed to almost any other branch of knowledge, that it contains so little unrelated and miscellaneous factual material one must memorize. Oh, there are a few definitions and axioms, some terminology—but everything else is deduction. And, if you have a feel for it, the deduction is all obvious, or becomes obvious as soon as it is once pointed out. As long as this holds true, mathematics is not only a breeze, it is an exciting intellectual adventure that has few peers.

But then, sooner or later (except for a few transcendent geniuses), there comes a point when the breeze turns into a cold and needle-spray storm blast. For some it comes quite early in the game: long division, fractions, proportions, something shows up which turns out to be no longer obvious no matter how carefully it is explained.
You may get to understand it but only by constant concentration; it never becomes obvious. And at that point mathematics ceases to be fun. When there is a prolonged delay in meeting that barrier, you feel lucky, but are you? The longer the delay, the greater the trauma when you do meet the barrier and smash into it. I went right through high school, for instance, without finding the barrier. Math was always easy, always fun, always an “A-subject” that required no studying. To be sure, I might have had a hint there was something wrong. My high school was Boys High School of Brooklyn and in the days when I attended (1932 to 1935) it was renowned throughout the city for the skill and valor of its math team. Yet I was not a member of the math team. I had a dim idea that the boys on the math team could do mathematics I had never heard of, and that the problems they faced and solved were far beyond me. I took care of that little bit of unpleasantness, however, by studiously refraining from giving it any thought, on the theory (very widespread among people generally) that a difficulty ignored is a difficulty resolved. At Columbia I took up analytical geometry and differential calculus and, while I recognized a certain unaccustomed intellectual friction heating up my mind somewhat, I still managed to get my A’s. It was when I went on to integral calculus that the dam broke. To my horror, I found that I had to study; that I had to go over a point several times and that even then it remained unclear; that I had to sweat away over the homework problems and sometimes either had to leave them unsolved or, worse still, worked them out incorrectly. And in the end, in the second semester of the year course, I got (oh shame!) a B. I had, in short, reached my own particular impassable barrier, and I met that situation with a most vigorous and effective course of procedure—I never took another math course. Oh, I’ve picked up some additional facets of mathematics on my own since then, but the old glow was gone. It was never the shining gold of “Of course” anymore, only the dubiously polished pewter of “I think I see it.” Fortunately, a barrier at integral calculus is quite a high one. There is plenty of room beneath it within which to run and jump, and I have therefore been able to write books on mathematics. I merely had to remember to keep this side of integral calculus. In June, 1958, Austin Olney of Houghton Mifflin (whose acquaintance I had first made the year before and whose suggestion is responsible for this book you are holding) asked me to write a book on mathematics for youngsters. I presume he thought I was an accomplished mathematician and I, for my part, did not see my way clear to disabusing him. (I suppose he is disabused now, though.) I agreed readily (with one reservation which I shall come to in due course) and proceeded to write a book called Realm of Numbers which was as far to the safe side of integral calculus as possible. In fact, it was about elementary arithmetic, to begin with, and it was not until the second chapter that I as much as got to Arabic numerals, and not until the fourth chapter that I got to fractions. However, by the end of the book I was talking about imaginary numbers, hyperimaginary numbers, and transfinite numbers—and that was the real purpose of the book. In going from counting to transfiinites [sic], I followed such a careful and gradual plan that it never stopped seeming easy. 
Anyway, here’s part of a chapter from the book, rather early on, while I am still reveling in the simplest matters, but trying to get across the rather subtle point of the importance of zero. • I am not sure those "simplest matters" are simplest to explain, but Asimov is probably a far better explainer than I. Thanks for posting. Minor point: I do not have access to the text in question, but should the "go" in "I go (oh shame!) a B." perhaps be "got"? – tomsmeding Feb 10 at 12:22 • @tomsmeding: Yes, "I go (oh shame!) a B" should be "I got (oh shame!) a B". I'm surprised at this, because I carefully transcribed (to an MS Word document prior to pasting in an answer window) and then very carefully proofed the result nearly word-by-word by placing my finger in the screen and checking with my print copy, which was a 160% photocopy blow-up of those 3 pages to lessen the chance of error. BTW, those in the U.S. should realize that a 'B' back then was NOT the same as a 'B' now (grade inflation). I suspect 10% to 15% probably got an A, and maybe 10% to 20% more got a B. – Dave L. Renfro Feb 10 at 16:06 Rob Kirby may be the example you are looking for (https://www.simonsfoundation.org/science_lives_video/robion-kirby/): In 1963, rising mathematical star John Milnor set forth a list of what he considered the seven hardest and most important problems in the nascent field of geometric topology. Just five years later, no fewer than four of those problems had been laid to rest, largely through the efforts of a young mathematics professor whose entry into mathematics research had seemed anything but auspicious. Described by colleagues and students as “slow,” “non-threatening,” and “deliberate,” Robion Kirby had followed a desultory path through higher education, marked by failed exams, lost fellowships and recommendations that he go study somewhere else. Yet just three years out of graduate school, he pulled off a mathematical coup, one that helped define the future of his field. “I sometimes felt like the Virgin Mary,” Kirby says. “How could this happen to me?” Another example (coincidentally also in topology) is Stephen Smale: https://en.wikipedia.org/wiki/Stephen_Smale Smale entered the University of Michigan in 1948.[3][4] Initially, he was a good student, placing into an honors calculus sequence taught by Bob Thrall and earning himself A's. However, his sophomore and junior years were marred with mediocre grades, mostly Bs, Cs and even an F in nuclear physics. However, with some luck, Smale was accepted as a graduate student at the University of Michigan's mathematics department. Yet again, Smale performed poorly in his first years, earning a C average as a graduate student. It was only when the department chair, Hildebrandt, threatened to kick Smale out that he began to work hard.[5] Smale finally earned his Ph.D. in 1957, under Raoul Bott.
2021-06-17 02:45:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5838613510131836, "perplexity": 1021.0442314105036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626465.55/warc/CC-MAIN-20210617011001-20210617041001-00509.warc.gz"}
https://terrytao.wordpress.com/2013/05/
You are currently browsing the monthly archive for May 2013. A finite group ${G=(G,\cdot)}$ is said to be a Frobenius group if there is a non-trivial subgroup ${H}$ of ${G}$ (known as the Frobenius complement of ${G}$) such that the conjugates ${gHg^{-1}}$ of ${H}$ are “as disjoint as possible” in the sense that ${H \cap gHg^{-1} = \{1\}}$ whenever ${g \not \in H}$. This gives a decomposition $\displaystyle G = \bigcup_{gH \in G/H} (gHg^{-1} \backslash \{1\}) \cup K \ \ \ \ \ (1)$ where the Frobenius kernel ${K}$ of ${G}$ is defined as the identity element ${1}$ together with all the non-identity elements that are not conjugate to any element of ${H}$. Taking cardinalities, we conclude that $\displaystyle |G| = \frac{|G|}{|H|} (|H| - 1) + |K|$ and hence $\displaystyle |H| |K| = |G|. \ \ \ \ \ (2)$ A remarkable theorem of Frobenius gives an unexpected amount of structure on ${K}$ and hence on ${G}$: Theorem 1 (Frobenius’ theorem) Let ${G}$ be a Frobenius group with Frobenius complement ${H}$ and Frobenius kernel ${K}$. Then ${K}$ is a normal subgroup of ${G}$, and hence (by (2) and the disjointness of ${H}$ and ${K}$ outside the identity) ${G}$ is the semidirect product ${K \rtimes H}$ of ${H}$ and ${K}$. I discussed Frobenius’ theorem and its proof in this recent blog post. This proof uses the theory of characters on a finite group ${G}$, in particular relying on the fact that a character on a subgroup ${H}$ can induce a character on ${G}$, which can then be decomposed into irreducible characters with natural number coefficients. Remarkably, even though a century has passed since Frobenius’ original argument, there is no proof known of this theorem which avoids character theory entirely; there are elementary proofs known when the complement ${H}$ has even order or when ${H}$ is solvable (we review both of these cases below the fold), which by the Feit-Thompson theorem does cover all the cases, but the proof of the Feit-Thompson theorem involves plenty of character theory (and also relies on Theorem 1). (The answers to this MathOverflow question give a good overview of the current state of affairs.) I have been playing around recently with the problem of finding a character-free proof of Frobenius’ theorem. I didn’t succeed in obtaining a completely elementary proof, but I did find an argument which replaces character theory (which can be viewed as coming from the representation theory of the non-commutative group algebra ${{\bf C} G \equiv L^2(G)}$) with the Fourier analysis of class functions (i.e. the representation theory of the centre ${Z({\bf C} G) \equiv L^2(G)^G}$ of the group algebra), thus replacing non-commutative representation theory by commutative representation theory. This is not a particularly radical departure from the existing proofs of Frobenius’ theorem, but it did seem to be a new proof which was technically “character-free” (even if it was not all that far from character-based in spirit), so I thought I would record it here. The main ideas are as follows. The space ${L^2(G)^G}$ of class functions can be viewed as a commutative algebra with respect to the convolution operation ${*}$; as the regular representation is unitary and faithful, this algebra contains no nilpotent elements. As such, (Gelfand-style) Fourier analysis suggests that one can analyse this algebra through the idempotents: class functions ${\phi}$ such that ${\phi*\phi = \phi}$. 
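(Two quick examples of such idempotents, added here as an illustration rather than taken from the post, using the normalisation ${f*g(x) = {\bf E}_{y \in G} f(y) g(y^{-1} x)}$ under which irreducible characters satisfy ${\chi * \chi' = \delta_{\chi \chi'} \chi/\chi(1)}$: the constant function ${1}$, coming from the trivial character, is an idempotent, and so is ${\sum_\chi \chi(1) \chi = |G| 1_{\{1\}}}$, which by column orthogonality is the unit of the convolution algebra.)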
In terms of characters, idempotents are nothing more than sums of the form ${\sum_{\chi \in \Sigma} \chi(1) \chi}$ for various collections ${\Sigma}$ of characters, but we can perform a fair amount of analysis on idempotents directly without recourse to characters. In particular, it turns out that idempotents enjoy some important integrality properties that can be established without invoking characters: for instance, by taking traces one can check that ${\phi(1)}$ is a natural number, and more generally we will show that ${{\bf E}_{(a,b) \in S} {\bf E}_{x \in G} \phi( a x b^{-1} x^{-1} )}$ is a natural number whenever ${S}$ is a subgroup of ${G \times G}$ (see Corollary 4 below). For instance, the quantity $\displaystyle \hbox{rank}(\phi) := {\bf E}_{a \in G} {\bf E}_{x \in G} \phi(a xa^{-1} x^{-1})$ is a natural number which we will call the rank of ${\phi}$ (as it is also the linear rank of the transformation ${f \mapsto f*\phi}$ on ${L^2(G)}$). In the case that ${G}$ is a Frobenius group with kernel ${K}$, the above integrality properties can be used after some elementary manipulations to establish that for any idempotent ${\phi}$, the quantity $\displaystyle \frac{1}{|G|} \sum_{a \in K} {\bf E}_{x \in G} \phi( axa^{-1}x^{-1} ) - \frac{1}{|G| |K|} \sum_{a,b \in K} \phi(ab^{-1}) \ \ \ \ \ (3)$ is an integer. On the other hand, one can also show by elementary means that this quantity lies between ${0}$ and ${\hbox{rank}(\phi)}$. These two facts are not strong enough on their own to impose much further structure on ${\phi}$, unless one restricts attention to minimal idempotents ${\phi}$. In this case spectral theory (or Gelfand theory, or the fundamental theorem of algebra) tells us that ${\phi}$ has rank one, and then the integrality gap comes into play and forces the quantity (3) to always be either zero or one. This can be used to imply that the convolution action of every minimal idempotent ${\phi}$ either preserves ${\frac{|G|}{|K|} 1_K}$ or annihilates it, which makes ${\frac{|G|}{|K|} 1_K}$ itself an idempotent, which makes ${K}$ normal. Vitaly Bergelson, Tamar Ziegler, and I have just uploaded to the arXiv our joint paper “Multiple recurrence and convergence results associated to ${{\bf F}_{p}^{\omega}}$-actions“. This paper is primarily concerned with limit formulae in the theory of multiple recurrence in ergodic theory. Perhaps the most basic formula of this type is the mean ergodic theorem, which (among other things) asserts that if ${(X,{\mathcal X}, \mu,T)}$ is a measure-preserving ${{\bf Z}}$-system (which, in this post, means that ${(X,{\mathcal X}, \mu)}$ is a probability space and ${T: X \mapsto X}$ is measure-preserving and invertible, thus giving an action ${(T^n)_{n \in {\bf Z}}}$ of the integers), and ${f,g \in L^2(X,{\mathcal X}, \mu)}$ are functions, and ${X}$ is ergodic (which means that ${L^2(X,{\mathcal X}, \mu)}$ contains no ${T}$-invariant functions other than the constants (up to almost everywhere equivalence, of course)), then the average $\displaystyle \frac{1}{N} \sum_{n=1}^N \int_X f(x) g(T^n x)\ d\mu \ \ \ \ \ (1)$ converges as ${N \rightarrow \infty}$ to the expression $\displaystyle (\int_X f(x)\ d\mu) (\int_X g(x)\ d\mu);$ see e.g. this previous blog post. 
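(For a concrete instance of the mean ergodic theorem formula above, added as an illustration: take ${X = {\bf R}/{\bf Z}}$ with Lebesgue measure, the ergodic irrational rotation ${Tx = x+\alpha}$, and ${f(x) = e^{2\pi i x}}$, ${g(x) = e^{-2\pi i x}}$. Then ${\int_X f(x) g(T^n x)\ d\mu = e^{-2\pi i n \alpha}}$, and the averages ${\frac{1}{N} \sum_{n=1}^N e^{-2\pi i n\alpha}}$ converge to ${0}$ by summing the geometric series, matching ${(\int_X f\ d\mu)(\int_X g\ d\mu) = 0 \cdot 0 = 0}$.)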
Informally, one can interpret this limit formula as an equidistribution result: if ${x}$ is drawn at random from ${X}$ (using the probability measure ${\mu}$), and ${n}$ is drawn at random from ${\{1,\ldots,N\}}$ for some large ${N}$, then the pair ${(x, T^n x)}$ becomes uniformly distributed in the product space ${X \times X}$ (using product measure ${\mu \times \mu}$) in the limit as ${N \rightarrow \infty}$. If we allow ${(X,\mu)}$ to be non-ergodic, then we still have a limit formula, but it is a bit more complicated. Let ${{\mathcal X}^T}$ be the ${T}$-invariant measurable sets in ${{\mathcal X}}$; the ${{\bf Z}}$-system ${(X, {\mathcal X}^T, \mu, T)}$ can then be viewed as a factor of the original system ${(X, {\mathcal X}, \mu, T)}$, which is equivalent (in the sense of measure-preserving systems) to a trivial system ${(Z_0, {\mathcal Z}_0, \mu_{Z_0}, 1)}$ (known as the invariant factor) in which the shift is trivial. There is then a projection map ${\pi_0: X \rightarrow Z_0}$ to the invariant factor which is a factor map, and the average (1) converges in the limit to the expression $\displaystyle \int_{Z_0} (\pi_0)_* f(z) (\pi_0)_* g(z)\ d\mu_{Z_0}(x), \ \ \ \ \ (2)$ where ${(\pi_0)_*: L^2(X,{\mathcal X},\mu) \rightarrow L^2(Z_0,{\mathcal Z}_0,\mu_{Z_0})}$ is the pushforward map associated to the map ${\pi_0: X \rightarrow Z_0}$; see e.g. this previous blog post. We can interpret this as an equidistribution result. If ${(x,T^n x)}$ is a pair as before, then we no longer expect complete equidistribution in ${X \times X}$ in the non-ergodic case, because there are now non-trivial constraints relating ${x}$ with ${T^n x}$; indeed, for any ${T}$-invariant function ${f: X \rightarrow {\bf C}}$, we have the constraint ${f(x) = f(T^n x)}$; putting all these constraints together we see that ${\pi_0(x) = \pi_0(T^n x)}$ (for almost every ${x}$, at least). The limit (2) can be viewed as an assertion that this constraint ${\pi_0(x) = \pi_0(T^n x)}$ is in some sense the “only” constraint between ${x}$ and ${T^n x}$, and that the pair ${(x,T^n x)}$ is uniformly distributed relative to this constraint. Limit formulae are known for multiple ergodic averages as well, although the statement becomes more complicated. For instance, consider the expression $\displaystyle \frac{1}{N} \sum_{n=1}^N \int_X f(x) g(T^n x) h(T^{2n} x)\ d\mu \ \ \ \ \ (3)$ for three functions ${f,g,h \in L^\infty(X, {\mathcal X}, \mu)}$; this is analogous to the combinatorial task of counting length three progressions in various sets. For simplicity we assume the system ${(X,{\mathcal X},\mu,T)}$ to be ergodic. Naively one might expect this limit to then converge to $\displaystyle (\int_X f\ d\mu) (\int_X g\ d\mu) (\int_X h\ d\mu)$ which would roughly speaking correspond to an assertion that the triplet ${(x,T^n x, T^{2n} x)}$ is asymptotically equidistributed in ${X \times X \times X}$. However, even in the ergodic case there can be additional constraints on this triplet that cannot be seen at the level of the individual pairs ${(x,T^n x)}$, ${(x, T^{2n} x)}$. The key obstruction here is that of eigenfunctions of the shift ${T: X \rightarrow X}$, that is to say non-trivial functions ${f: X \rightarrow S^1}$ that obey the eigenfunction equation ${Tf = \lambda f}$ almost everywhere for some constant (or ${T}$-invariant) ${\lambda}$. Each such eigenfunction generates a constraint $\displaystyle f(x) \overline{f(T^n x)}^2 f(T^{2n} x) = 1 \ \ \ \ \ (4)$ tying together ${x}$, ${T^n x}$, and ${T^{2n} x}$. 
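(To see where (4) comes from, here is a short verification added for clarity: iterating the eigenfunction equation gives ${f(T^n x) = \lambda^n f(x)}$, and since ${f}$ takes values in ${S^1}$ one also has ${|\lambda| = 1}$. Hence $\displaystyle f(x) \overline{f(T^n x)}^2 f(T^{2n} x) = \overline{\lambda}^{2n} \lambda^{2n} f(x) \overline{f(x)}^2 f(x) = |f(x)|^4 = 1.$ The exponents ${1,-2,1}$ form the second difference pattern, which is why eigenfunctions are the relevant obstruction at this level.)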
However, it turns out that these are in some sense the only constraints on ${x,T^n x, T^{2n} x}$ that are relevant for the limit (3). More precisely, if one sets ${{\mathcal X}_1}$ to be the sub-algebra of ${{\mathcal X}}$ generated by the eigenfunctions of ${T}$, then it turns out that the factor ${(X, {\mathcal X}_1, \mu, T)}$ is isomorphic to a shift system ${(Z_1, {\mathcal Z}_1, \mu_{Z_1}, x \mapsto x+\alpha)}$ known as the Kronecker factor, for some compact abelian group ${Z_1 = (Z_1,+)}$ and some (irrational) shift ${\alpha \in Z_1}$; the factor map ${\pi_1: X \rightarrow Z_1}$ pushes eigenfunctions forward to (affine) characters on ${Z_1}$. It is then known that the limit of (3) is $\displaystyle \int_\Sigma (\pi_1)_* f(x_1) (\pi_1)_* g(x_2) (\pi_1)_* h(x_3)\ d\mu_\Sigma$ where ${\Sigma \subset Z_1^3}$ is the closed subgroup $\displaystyle \Sigma = \{ (x_1,x_2,x_3) \in Z_1^3: x_1-2x_2+x_3=0 \}$ and ${\mu_\Sigma}$ is the Haar probability measure on ${\Sigma}$; see this previous blog post. The equation ${x_1-2x_2+x_3=0}$ defining ${\Sigma}$ corresponds to the constraint (4) mentioned earlier. Among other things, this limit formula implies Roth’s theorem, which in the context of ergodic theory is the assertion that the limit (or at least the limit inferior) of (3) is positive when ${f=g=h}$ is non-negative and not identically vanishing. If one considers a quadruple average $\displaystyle \frac{1}{N} \sum_{n=1}^N \int_X f(x) g(T^n x) h(T^{2n} x) k(T^{3n} x)\ d\mu \ \ \ \ \ (5)$ (analogous to counting length four progressions) then the situation becomes more complicated still, even in the ergodic case. In addition to the (linear) eigenfunctions that already showed up in the computation of the triple average (3), a new type of constraint also arises from quadratic eigenfunctions ${f: X \rightarrow S^1}$, which obey an eigenfunction equation ${Tf = \lambda f}$ in which ${\lambda}$ is no longer constant, but is now a linear eigenfunction. For such functions, ${f(T^n x)}$ behaves quadratically in ${n}$, and one can verify the existence of a constraint $\displaystyle f(x) \overline{f(T^n x)}^3 f(T^{2n} x)^3 \overline{f(T^{3n} x)} = 1 \ \ \ \ \ (6)$ between ${x}$, ${T^n x}$, ${T^{2n} x}$, and ${T^{3n} x}$ that is not detected at the triple average level. As it turns out, this is not the only type of constraint relevant for (5); there is a more general class of constraint involving two-step nilsystems which we will not detail here, but see e.g. this previous blog post for more discussion. Nevertheless there is still a similar limit formula to previous examples, involving a special factor ${(Z_2, {\mathcal Z}_2, \mu_{Z_2}, S)}$ which turns out to be an inverse limit of two-step nilsystems; this limit theorem can be extracted from the structural theory in this paper of Host and Kra combined with a limit formula for nilsystems obtained by Lesigne, but will not be reproduced here. The pattern continues to higher averages (and higher step nilsystems); this was first done explicitly by Ziegler, and can also in principle be extracted from the structural theory of Host-Kra combined with nilsystem equidistribution results of Leibman. These sorts of limit formulae can lead to various recurrence results refining Roth’s theorem in various ways; see this paper of Bergelson, Host, and Kra for some examples of this. 
The above discussion was concerned with ${{\bf Z}}$-systems, but one can adapt much of the theory to measure-preserving ${G}$-systems for other discrete countable abelian groups ${G}$, in which one now has a family ${(T_g)_{g \in G}}$ of shifts indexed by ${G}$ rather than a single shift, obeying the compatibility relation ${T_{g+h}=T_g T_h}$. The role of the intervals ${\{1,\ldots,N\}}$ in this more general setting is replaced by that of Folner sequences. For arbitrary countable abelian ${G}$, the theory for double averages (1) and triple limits (3) is essentially identical to the ${{\bf Z}}$-system case. But when one turns to quadruple and higher limits, the situation becomes more complicated (and, for arbitrary ${G}$, still not fully understood). However one model case which is now well understood is the finite field case when ${G = {\bf F}_p^\omega = \bigcup_{n=1}^\infty {\bf F}_p^n}$ is an infinite-dimensional vector space over a finite field ${{\bf F}_p}$ (with the finite subspaces ${{\bf F}_p^n}$ then being a good choice for the Folner sequence). Here, the analogue of the structural theory of Host and Kra was worked out by Vitaly, Tamar, and myself in these previous papers (treating the high characteristic and low characteristic cases respectively). In the finite field setting, it turns out that nilsystems no longer appear, and one only needs to deal with linear, quadratic, and higher order eigenfunctions (known collectively as phase polynomials). It is then natural to look for a limit formula that asserts, roughly speaking, that if ${x}$ is drawn at random from a ${{\bf F}_p^\omega}$-system and ${n}$ drawn randomly from a large subspace of ${{\bf F}_p^\omega}$, then the only constraints between ${x, T^n x, \ldots, T^{(p-1)n} x}$ are those that arise from phase polynomials. The main theorem of this paper is to establish this limit formula (which, again, is a little complicated to state explicitly and will not be done here). In particular, we establish for the first time that the limit actually exists (a result which, for ${{\bf Z}}$-systems, was one of the main results of this paper of Host and Kra). As a consequence, we can recover finite field analogues of most of the results of Bergelson-Host-Kra, though interestingly some of the counterexamples demonstrating sharpness of their results for ${{\bf Z}}$-systems (based on Behrend set constructions) do not seem to be present in the finite field setting (cf. this previous blog post on the cap set problem). In particular, we are able to largely settle the question of when one has a Khintchine-type theorem that asserts that for any measurable set ${A}$ in an ergodic ${{\bf F}_p^\omega}$-system and any ${\epsilon>0}$, one has $\displaystyle \mu( T_{c_1 n} A \cap \ldots \cap T_{c_k n} A ) > \mu(A)^k - \epsilon$ for a syndetic set of ${n}$, where ${c_1,\ldots,c_k \in {\bf F}_p}$ are distinct residue classes. 
It turns out that Khintchine-type theorems always hold for ${k=1,2,3}$ (and for ${k=1,2}$ ergodicity is not required), and for ${k=4}$ it holds whenever ${c_1,c_2,c_3,c_4}$ form a parallelogram, but not otherwise (though the counterexample here was such a painful computation that we ended up removing it from the paper, and may end up putting it online somewhere instead), and for larger ${k}$ we could show that the Khintchine property failed for generic choices of ${c_1,\ldots,c_k}$, though the problem of determining exactly the tuples for which the Khintchine property failed looked to be rather messy and we did not completely settle it. [This guest post is authored by Ingrid Daubechies, who is the current president of the International Mathematical Union, and (as she describes below) is heavily involved in planning for a next-generation digital mathematical library that can go beyond the current network of preprint servers (such as the arXiv), journal web pages, article databases (such as MathSciNet), individual author web pages, and general web search engines to create a more integrated and useful mathematical resource. I have lightly edited the post for this blog, mostly by adding additional hyperlinks. – T.] This guest blog entry concerns the many roles a World Digital Mathematical Library (WDML) could play for the mathematical community worldwide. We seek input to help sketch how a WDML could be so much more than just a huge collection of digitally available mathematical documents. If this is of interest to you, please read on! The “we” seeking input are the Committee on Electronic Information and Communication (CEIC) of the International Mathematical Union (IMU), and a special committee of the US National Research Council (NRC), charged by the Sloan Foundation to look into this matter. In the US, mathematicians may know the Sloan Foundation best for the prestigious early-career fellowships it awards annually, but the foundation plays a prominent role in other disciplines as well. For instance, the Sloan Digital Sky Survey (SDSS) has had a profound impact on astronomy, serving researchers in many more ways than even its ambitious original setup foresaw. The report being commissioned by the Sloan Foundation from the NRC study group could possibly be the basis for an equally ambitious program funded by the Sloan Foundation for a WDML with the potential to change the practice of mathematical research as profoundly as the SDSS did in astronomy. But to get there, we must formulate a vision that, like the original SDSS proposal, imagines at least some of those impacts. The members of the NRC committee are extremely knowledgeable, and have been picked judiciously so as to span collectively a wide range of expertise and connections. As president of the IMU, I was asked to co-chair this committee, together with Clifford Lynch, of the Coalition for Networked Information. Peter Olver, chair of the IMU’s CEIC, is also a member of the committee. But each of us is at least a quarter century older than the originators of MathOverflow or the ArXiv when they started. We need you, internet-savvy, imaginative, social-networking, young mathematicians to help us formulate the vision that may inspire the creation of a truly revolutionary WDML! Some history first. Several years ago, an international initiative was started to create a World Digital Mathematical Library. The website for this library, hosted by the IMU, is now mostly a “ghost” website — nothing has been posted there for the last seven years. 
[It does provide useful links, however, to many sites that continue to be updated, such as the European Mathematical Information Service, which in turn links to many interesting journals, books and other websites featuring electronically available mathematical publications. So it is still worth exploring …] Many of the efforts towards building (parts of) the WDML as originally envisaged have had to grapple with business interests, copyright agreements, search obstructions, metadata secrecy, … and many an enterprising, idealistic effort has been slowly ground down by this. We are still dealing with these frustrations — as witnessed by, e.g., the CostofKnowledge initiative. They are real, important issues, and will need to be addressed. Suppose that ${G = (G,\cdot)}$ is a finite group of even order, thus ${|G|}$ is a multiple of two. By Cauchy’s theorem, this implies that ${G}$ contains an involution: an element ${g}$ in ${G}$ of order two. (Indeed, if no such involution existed, then ${G}$ would be partitioned into doubletons ${\{g,g^{-1}\}}$ together with the identity, so that ${|G|}$ would be odd, a contradiction.) Of course, groups of odd order have no involutions ${g}$, thanks to Lagrange’s theorem (since ${G}$ cannot split into doubletons ${\{ h, hg \}}$). The classical Brauer-Fowler theorem asserts that if a group ${G}$ has many involutions, then it must have a large non-trivial subgroup: Theorem 1 (Brauer-Fowler theorem) Let ${G}$ be a finite group with at least ${|G|/n}$ involutions for some ${n > 1}$. Then ${G}$ contains a proper subgroup ${H}$ of index at most ${n^2}$. This theorem (which is Theorem 2F in the original paper of Brauer and Fowler, who in fact manage to sharpen ${n^2}$ slightly to ${n(n+2)/2}$) has a number of quick corollaries which are also referred to as “the” Brauer-Fowler theorem. For instance, if ${g}$ is an involution of a group ${G}$, and the centraliser ${C_G(g) := \{ h \in G: gh = hg\}}$ has order ${n}$, then clearly ${n \geq 2}$ (as ${C_G(g)}$ contains ${1}$ and ${g}$) and the conjugacy class ${\{ aga^{-1}: a \in G \}}$ has order ${|G|/n}$ (since the map ${a \mapsto aga^{-1}}$ has preimages that are cosets of ${C_G(g)}$). Every conjugate of an involution is again an involution, so by the Brauer-Fowler theorem ${G}$ contains a subgroup of order at least ${\max( n, |G|/n^2)}$. In particular, we can conclude that every group ${G}$ of even order contains a proper subgroup of order at least ${|G|^{1/3}}$. Another corollary is that the size of a simple group of even order can be controlled by the size of a centraliser of one of its involutions: Corollary 2 (Brauer-Fowler theorem) Let ${G}$ be a finite simple group with an involution ${g}$, and suppose that ${C_G(g)}$ has order ${n}$. Then ${G}$ has order at most ${(n^2)!}$. Indeed, by the previous discussion ${G}$ has a proper subgroup ${H}$ of index less than ${n^2}$, which then gives a non-trivial permutation action of ${G}$ on the coset space ${G/H}$. The kernel of this action is a proper normal subgroup of ${G}$ and is thus trivial, so the action is faithful, and the claim follows. If one assumes the Feit-Thompson theorem that all groups of odd order are solvable, then Corollary 2 suggests a strategy (first proposed by Brauer himself in 1954) to prove the classification of finite simple groups (CFSG) by induction on the order of the group. Namely, assume for contradiction that the CFSG failed, so that there is a counterexample ${G}$ of minimal order ${|G|}$ to the classification. 
This is a non-abelian finite simple group; by the Feit-Thompson theorem, it has even order and thus has at least one involution ${g}$. Take such an involution and consider its centraliser ${C_G(g)}$; this is a proper subgroup of ${G}$ of some order ${n < |G|}$. As ${G}$ is a minimal counterexample to the classification, one can in principle describe ${C_G(g)}$ in terms of the CFSG by factoring the group into simple components (via a composition series) and applying the CFSG to each such component. Now, the “only” thing left to do is to verify, for each isomorphism class of ${C_G(g)}$, that all the possible simple groups ${G}$ that could have this type of group as a centraliser of an involution obey the CFSG; Corollary 2 tells us that for each such isomorphism class for ${C_G(g)}$, there are only finitely many ${G}$ that could generate this class for one of its centralisers, so this task should be doable in principle for any given isomorphism class for ${C_G(g)}$. That’s all one needs to do to prove the classification of finite simple groups! Needless to say, this program turns out to be far more difficult than the above summary suggests, and the actual proof of the CFSG does not quite proceed along these lines. However, a significant portion of the argument is based on a generalisation of this strategy, in which the concept of a centraliser of an involution is replaced by the more general notion of a normaliser of a ${p}$-group, and one studies not just a single normaliser but rather the entire family of such normalisers and how they interact with each other (and in particular, which normalisers of ${p}$-groups commute with each other), motivated in part by the theory of Tits buildings for Lie groups which dictates a very specific type of interaction structure between these ${p}$-groups in the key case when ${G}$ is a (sufficiently high rank) finite simple group of Lie type over a field of characteristic ${p}$. See the text of Aschbacher, Lyons, Smith, and Solomon for a more detailed description of this strategy. The Brauer-Fowler theorem can be proven by a nice application of character theory, of the type discussed in this recent blog post, ultimately based on analysing the alternating tensor power of representations; I reproduce a version of this argument (taken from this text of Isaacs) below the fold. (The original argument of Brauer and Fowler is more combinatorial in nature.) However, I wanted to record a variant of the argument that relies not on the fine properties of characters, but on the cruder theory of quasirandomness for groups, the modern study of which was initiated by Gowers, and is discussed for instance in this previous post. It gives the following slightly weaker version of Corollary 2: Corollary 3 (Weak Brauer-Fowler theorem) Let ${G}$ be a finite simple group with an involution ${g}$, and suppose that ${C_G(g)}$ has order ${n}$. Then ${G}$ can be identified with a subgroup of the unitary group ${U_{4n^3}({\bf C})}$. One can get an upper bound on ${|G|}$ from this corollary using Jordan’s theorem, but the resulting bound is a bit weaker than that in Corollary 2 (and the best bounds on Jordan’s theorem require the CFSG!). Proof: Let ${A}$ be the set of all involutions in ${G}$, then as discussed above ${|A| \geq |G|/n}$. 
We may assume that ${G}$ has no non-trivial unitary representation of dimension less than ${4n^3}$ (since such representations are automatically faithful by the simplicity of ${G}$); thus, in the language of quasirandomness, ${G}$ is ${4n^3}$-quasirandom, and is also non-abelian. We have the basic convolution estimate $\displaystyle \|1_A * 1_A * 1_A - \frac{|A|^3}{|G|} \|_{\ell^\infty(G)} \leq (4n^3)^{-1/2} |G|^{1/2} |A|^{3/2}$ (see Exercise 10 from this previous blog post). In particular, $\displaystyle 1_A * 1_A * 1_A(1) \geq \frac{|A|^3}{|G|} - (4n^3)^{-1/2} |G|^{1/2} |A|^{3/2} \geq \frac{1}{2n^3} |G|^2$ and so there are at least ${|G|^2/2n^3}$ pairs ${(g,h) \in A \times A}$ such that ${gh \in A^{-1} = A}$, i.e. involutions ${g,h}$ whose product is also an involution. But any such involutions necessarily commute, since $\displaystyle g (gh) h = g^2 h^2 = 1 = (gh)^2 = g (hg) h.$ Thus there are at least ${|G|^2/2n^3}$ pairs ${(g,h) \in G \times G}$ of non-identity elements that commute, so by the pigeonhole principle there is a non-identity ${g \in G}$ whose centraliser ${C_G(g)}$ has order at least ${|G|/2n^3}$. This centraliser cannot be all of ${G}$ since this would make ${g}$ central which contradicts the non-abelian simple nature of ${G}$. But then the quasiregular representation of ${G}$ on ${G/C_G(g)}$ has dimension at most ${2n^3}$, contradicting the quasirandomness. $\Box$
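The counting facts used above (the number of involutions in a group, and the orbit-stabiliser relation between a conjugacy class and a centraliser) are easy to check by brute force in a small example. The following short Python sketch, added purely as an illustration and not part of the original post, verifies them in ${S_4}$:

```python
# Brute-force check in S_4 of the counting identities used above;
# permutations are tuples p with p[i] = image of i.
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(4)))   # S_4, of order 24
e = tuple(range(4))
involutions = [g for g in G if g != e and compose(g, g) == e]
print(len(involutions))            # 9 involutions in S_4

g = involutions[0]                 # a transposition
centraliser = [h for h in G if compose(g, h) == compose(h, g)]
conj_class = {compose(compose(a, g), inverse(a)) for a in G}
# |conjugacy class| * |centraliser| = |G|, as in the discussion above
assert len(conj_class) * len(centraliser) == len(G)
```

Here the transposition has a centraliser of order ${4}$ and a conjugacy class of size ${6}$, so the product is ${24 = |S_4|}$ as expected.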
2016-05-03 12:28:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 289, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.878808319568634, "perplexity": 271.47661912719224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121534.33/warc/CC-MAIN-20160428161521-00014-ip-10-239-7-51.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/64731-areas-definate-integrals-print.html
# areas and definite integrals • December 12th 2008, 07:13 PM gixxer areas and definite integrals I'm just curious how to solve something like S dx/4x+6 from x=1 to x=6 or S du/8u^1/2. I understand how to do the differentiation but I dont know what happens to the dx or du during the process. I'm trying to study for a final and this is driving me crazy. Thank you so much. Greg • December 12th 2008, 07:28 PM Chris L T521 Quote: Originally Posted by gixxer I'm just curious how to solve something like S dx/4x+6 from x=1 to x=6 or S du/8u^1/2. I understand how to do the differentiation but I dont know what happens to the dx or du during the process. I'm trying to study for a final and this is driving me crazy. Thank you so much. Greg For the first one, $\int_1^6\frac{\,dx}{4x+6}$, use a substitution $z=4x+6\implies \,dz=4\,dx\implies\tfrac{1}{4}\,dz=\,dx$. You can also convert the limits of integration as well. $z(1)=10$ and $z(6)=30$. Thus, $\int_1^6\frac{\,dx}{4x+6}$ is the same thing as $\tfrac{1}{4}\int_{10}^{30}\frac{\,dz}{z}$. The second integral is a bit easier to evaluate. Can you take this one from here? For the second one, $\int\frac{\,du}{8u^{\frac{1}{2}}}=\tfrac{1}{8}\int u^{-\frac{1}{2}}\,du$. From here, apply the power rule of integration: $\int x^{n}\,dx=\frac{x^{n+1}}{n+1}+C$ (since this is indefinite integration). Can you take it from here? Does this make sense? • December 12th 2008, 08:38 PM gixxer Thank you for all your help it made me understand this unit a lot better. You got me thinking correctly again. One more question... https://webwork3.asu.edu/webwork2_fi...afca8dd651.png Would this be solved by substitution? • December 12th 2008, 08:48 PM Chop Suey Yes. Try $u = e^{2x}-2x$ • December 12th 2008, 09:15 PM Pi314 Yeah it uses substitution as previously stated, I actually was doing a problem like that earlier today. Apparently any anti-deriv. problem that requires multiplication and/or division needs to be solved via subs. So far the examples I've come across follow this idea. And at times you're not left with the "proper" dx so you might have to "break" one of the functions. for instance, for the anti. derivative of (8x+8^(2x))(x^2+e^(2x)) dx you have to take out the 8 after making the second portion "u" and then you'll be left with (x^2+e^(2x)) dx and you can continue to solve the problem normally. • December 13th 2008, 01:53 AM Prove It Quote: Originally Posted by Pi314 Yeah it uses substitution as previously stated, I actually was doing a problem like that earlier today. Apparently any anti-deriv. problem that requires multiplication and/or division needs to be solved via subs. So far the examples I've come across follow this idea. And at times you're not left with the "proper" dx so you might have to "break" one of the functions. for instance, for the anti. derivative of (8x+8^(2x))(x^2+e^(2x)) dx you have to take out the 8 after making the second portion "u" and then you'll be left with (x^2+e^(2x)) dx and you can continue to solve the problem normally. Yes that's right... So in the one above... $\int_0^4{(e^{2x}-2x)^5 (e^{2x}-1)\,dx}$ as stated earlier, you'd have to use the substitution $u = e^{2x} - 2x$. This would mean $\frac{du}{dx} = 2e^{2x} - 2 = 2(e^{2x} - 1)$. So you would need to "break" the original function so that you get $\frac{du}{dx}$ as a factor... 
$\int_0^4{(e^{2x}-2x)^5 (e^{2x}-1)\,dx}=\frac{1}{2}\int_0^4{(e^{2x}-2x)^5 2(e^{2x}-1)\,dx}$ $= \frac{1}{2}\int_{x=0}^{x=4}{u^5 \frac{du}{dx}\,dx}$ $= \frac{1}{2}\int_{x=0}^{x=4}{u^5 \, du}$ $= \frac{1}{2}\left[\frac{1}{6}u^6\right]_{x=0}^{x=4}$ $= \frac{1}{12}[(e^{2x}-2x)^6]_0^4$ $= \frac{1}{12}[(e^8 - 8)^6 - (e^0 - 0)^6]$ $= \frac{1}{12}[(e^8 - 8)^6 - 1]$. • December 13th 2008, 11:38 AM GaloisTheory1 Quote: Originally Posted by gixxer I'm just curious how to solve something like S dx/4x+6 from x=1 to x=6 or S du/8u^1/2. I understand how to do the differentiation but I dont know what happens to the dx or du during the process. I'm trying to study for a final and this is driving me crazy. Thank you so much. Greg 1. $\int_1^6 \frac{dx}{4x}+6 = \int_1^6 \frac{1}{4}\cdot\frac{dx}{x}+6=[\frac{1}{4}\cdot\ln{x}+6x]|_1^6$ 2. $\int 8 \cdot u^{1/2}=8 \cdot \frac{2}{3}u^{\frac{3}{2}}+C$ • December 13th 2008, 01:04 PM mr fantastic Quote: Originally Posted by GaloisTheory1 1. $\int_1^6 \frac{dx}{4x}+6 = \int_1^6 \frac{1}{4}\cdot\frac{dx}{x}+6=[\frac{1}{4}\cdot\ln{\color{red} |} x {\color{red} |} +6x]|_1^6$ 2. $\int 8 \cdot u^{1/2} \, {\color{red} du}=8 \cdot \frac{2}{3}u^{\frac{3}{2}}+C$ Some minor corrections (in red).
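For anyone who wants to double-check the first integral in this thread numerically, here is a small Python sketch (an added illustration; it assumes SciPy is available, which the thread itself does not mention):

```python
# Numerical check of the definite integral of 1/(4x+6) from 1 to 6.
from math import log
from scipy.integrate import quad

value, _err = quad(lambda x: 1.0 / (4 * x + 6), 1, 6)
exact = 0.25 * log(30 / 10)   # (1/4) ln z evaluated from z = 10 to z = 30
print(value, exact)           # both come out to (1/4) ln 3 ~ 0.2747
```

Both values agree, confirming the substitution worked out above.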
2015-07-31 09:31:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8842900991439819, "perplexity": 735.5923506749357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988065.26/warc/CC-MAIN-20150728002308-00110-ip-10-236-191-2.ec2.internal.warc.gz"}
http://openstudy.com/updates/514c702ae4b05e69bfad1629
## onegirl Find the linear approximation to f(x) at x = x_0. Graph the function and its linear approximation. f(x) = sin 3x, x_0 = 0 1. onegirl 2. satellite73 this is the same as "find the equation of the line tangent to the graph of $$y=\sin(3x)$$ at $$(0,0)$$" 3. satellite73 take the derivative, replace $$x$$ by 0 for the slope (you will get 3) and then use the point slope formula 4. satellite73 since the point is $$(0,0)$$ the equation of the line will be $$y=3x$$ 5. onegirl ok so find the derivative of sin 3x right? 6. satellite73 yes 7. onegirl Ok so I got 3cos(3x) 8. satellite73 yes now evaluate at $$x=0$$ to find your slope 9. onegirl ok I got 3 10. satellite73 yes 11. satellite73 that is the slope of your line 12. onegirl okay so now I have to use the point slope formula? 13. satellite73 yes 14. satellite73 but the point is $$(0,0)$$ so it is just $$y=3x$$ 15. onegirl okay so what do I do next? 16. satellite73 you are done 17. onegirl okay 18. electrokid linear approximation of f(x) at x=x_0: $f(x)\approx f(x_0)+(x-x_0)f'(x_0)$ 19. electrokid you can use this way too. (notice the resemblance to the equation of a line)
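A quick numerical sanity check of this thread's answer (an added sketch, not part of the original discussion): near $x = 0$ the tangent line $y = 3x$ should agree with $\sin(3x)$ up to an error of order $x^3$.

```python
# Compare f(x) = sin(3x) with its linear approximation L(x) = 3x near 0.
from math import sin

for x in (0.1, 0.05, 0.01):
    print(x, sin(3 * x), 3 * x)   # the two values agree to first order in x
```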
2014-08-28 03:08:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6446865200996399, "perplexity": 5194.036665683798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830074.72/warc/CC-MAIN-20140820021350-00029-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.weimengtw.com/or/482
# Maximizing the 1-norm: using binary variables to alleviate non-convexity

Provide the standard citation for YALMIP

```
@inproceedings{Lofberg2004,
  address = {Taipei, Taiwan},
  author = {L{\"{o}}fberg, J.},
  booktitle = {In Proceedings of the CACSD Conference},
  title = {YALMIP : A Toolbox for Modeling and Optimization in MATLAB},
  year = {2004}
}
```

which is shown at https://yalmip.github.io/reference/lofberg2004/ Then perhaps you can also reference the relevant YALMIP wiki page for your problem, namely YALMIP Logics and integer-programming representations.
2020-10-22 15:30:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9259448051452637, "perplexity": 11706.812596899697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879673.14/warc/CC-MAIN-20201022141106-20201022171106-00310.warc.gz"}
http://physics.stackexchange.com/questions/48151/question-on-inflation-as-a-phase-transition/48461
# Question on inflation as a phase transition I have just finished watching the following video http://www.youtube.com/watch?v=beQ9fZ0jVdE where Laughlin, Gross and some students discuss, among other things, inflation. The following question is raised: Is inflation a phase transition (e.g. of the geometry of space time)? - The inflating universe can for example be described by the FLRW metric $$d\tau^2 = dt^2-a(t)^2(dx^2 + dy^2 + dz^2)$$ The scale factor $a(t)$, which describes the expansion, is obtained from the appropriate Friedmann equation which contains only the vacuum energy $\rho_0$ as source of gravity $$\frac{\dot{a}}{a} = \sqrt{\frac{8\pi G \rho_0}{3}} = H$$ with the exponential solution $$a = K e^{H t}$$ For inflation with this exponential expansion to occur, the Hubble constant $H$ and therefore $\rho_0$ have to be constant. During an inflationary phase it is assumed that the most important contribution to the vacuum energy is due to the potential energy $V(\phi)$ of the scalar inflaton field $\phi$, which has the Lagrangian density $$\frac{\dot{\phi}^2}{2} - \frac{(\nabla\phi)^2}{2} - V(\phi)$$ So an inflationary period itself, for example the inflation of the early universe, can not be described by a phase transition; the potential energy density of the inflaton field is assumed to be about constant, sitting in the left local minimum in the figure below. However the end of this early inflation, which can be described by a jump to the lower value of the potential energy density (and much slower inflation) we observe today, corresponds to a phase transition during which the difference in the potential energy density of the inflaton field between the left and the right local minimum is released as some kind of latent heat (reheating) and converted to kinetic energy and finally to particles and seeds of the galaxies we observe today. More details about how the end of the early inflation can be described as a phase transition can be found for example in this paper. As said above the inflaton field is assumed to be a scalar field; plenty of scalar fields occur for example for different physical reasons in string theory, some of them are described here or here. Some people are trying to reconstruct the potential of the inflaton field from observations. - Thank you for the reply. By the way: what is the physical origin of the inflaton field? Why does the potential have this particular form? – Hamurabi Jan 6 '13 at 14:24 Hi @Hamurabi, I am not deep enough in inflationary cosmology unfortunately, but from what I have heard there are many different physical mechanisms considered that could give rise to an appropriate inflaton field, and I am not sure if there exist theoretical models which can exactly reproduce such a potential of the inflaton. An alternative explanation I have heard about, for why the early faster inflation (corresponding to a larger vacuum energy density) could have slowed down to the slower present inflation (corresponding to a very small vacuum energy density), is tunneling between two such vacua. – Dilaton Jan 7 '13 at 23:46 Maybe you could ask the two interesting questions of your comment separately to get specific answers focused on each of them? – Dilaton Jan 7 '13 at 23:47
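As a small numerical illustration of the exponential solution above (an added sketch in arbitrary units with $H = K = 1$, not part of the original answer), one can integrate $\dot{a} = H a$ step by step and compare with $K e^{Ht}$:

```python
# Forward Euler integration of the Friedmann equation da/dt = H * a.
from math import exp

H, K = 1.0, 1.0
a, t, dt = K, 0.0, 1e-4
while t < 1.0:
    a += H * a * dt            # one Euler step
    t += dt
print(a, K * exp(H * t))       # both ~ e = 2.71828..., up to O(dt) error
```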
2014-08-31 10:34:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7811691761016846, "perplexity": 383.79113475343956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500959239.73/warc/CC-MAIN-20140820021559-00317-ip-10-180-136-8.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/15928/how-to-compare-powers-without-calculating
How to Compare powers without calculating? Is there any rule for powers so that I can compare which one is greater without actually calculating? For example 54^53 and 53^54 23^26 and 26^23 3^4 and 4^3 (very simple but how without actually calculating) - Is it always $a^b$ vs $b^a$? – Aryabhata Dec 30 '10 at 20:18 In my case (GRE preparation), yes it is. – LifeH2O Dec 31 '10 at 18:23 If $a\gt b\gt e$, then $b^a\gt a^b$. To see this, take logs. You want to compare $a \ln b$ with $b \ln a$, or equivalently (dividing both by $ab$) $\frac{\ln b}{b}$ with $\frac{\ln a}{a}$. The function $\frac{\ln x}{x}$ is decreasing for $x \gt e$, so the smaller of the two bases gives the larger value; informally, $\ln$ rises slowly, so the larger multiplier wins. - +1, that's a nice explanation, but what could be said for general $a^b$ and $c^d$? – Quixotic Dec 31 '10 at 10:32 @Debanjan: Comparing logs still makes it easier, but there is no simple answer. Note that given b>d and a, you can find c large enough that c^d>a^b. Sometimes you can still estimate the ratio of logs using whatever you know, like ln 2=.69, or ln 3=1.1 – Ross Millikan Dec 31 '10 at 14:54 if b>d and a>c then a^b>c^d – LifeH2O Dec 31 '10 at 18:32 Problem 99 on the Euler Project site asks you to find the largest of a list of these: projecteuler.net/index.php?section=problems&id=99 – Ross Millikan Jan 1 '11 at 1:08 How would one check that Project Euler question without calculating? – LifeH2O Jan 1 '11 at 14:11
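This is also exactly how one would check the Project Euler problem mentioned above without computing any huge powers: compare $b\ln a$ with $d\ln c$ instead of $a^b$ with $c^d$. A minimal Python sketch (added for illustration):

```python
# Decide which of a**b and c**d is larger by comparing logarithms.
from math import log

def bigger(a, b, c, d):
    """Return the label of the larger of a**b and c**d."""
    return "a^b" if b * log(a) > d * log(c) else "c^d"

print(bigger(54, 53, 53, 54))   # c^d: 53^54 wins, since 54 > 53 > e
print(bigger(23, 26, 26, 23))   # a^b: 23^26 wins
print(bigger(3, 4, 4, 3))       # a^b: 3^4 = 81 beats 4^3 = 64
```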
2016-06-25 21:33:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8866113424301147, "perplexity": 1920.5339096517525}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00052-ip-10-164-35-72.ec2.internal.warc.gz"}
http://finmath.net/finmath-lib/apidocs/net/finmath/integration/package-summary.html
finMath lib documentation Package net.finmath.integration Provides algorithms for numerical integration and wrappers to libraries with algorithms for numerical integration. Author: Christian Fries
2018-12-14 23:32:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8928701281547546, "perplexity": 12080.480453327016}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826530.72/warc/CC-MAIN-20181214232243-20181215014243-00639.warc.gz"}
https://mathoverflow.net/questions/264979/what-is-this-symmetric-simplex-category-concretely
# What is this symmetric simplex category, concretely? Let $\Delta_+$ denote the category of finite ordinal numbers with monotonic maps (the subscript indicates that $0$ is included, so this is the augmented simplex category). This has a monoidal structure (given by the sum), which is not symmetric. But we can make it symmetric in a universal way, see here for the general procedure. Let us denote this symmetric monoidal category by $(\Delta_+)_{\mathrm{sym}}$. Question. What is a more "concrete" symmetric monoidal category which is equivalent to $(\Delta_+)_{\mathrm{sym}}$? Notice that this is not the symmetric monoidal category $\mathcal{F}$ of finite sets. Whereas $(\Delta_+)_{\mathrm{sym}}$ classifies algebra objects in symmetric monoidal categories, $\mathcal{F}$ classifies commutative algebra objects in symmetric monoidal categories. Hence, there will be a strong symmetric monoidal functor $(\Delta_+)_{\mathrm{sym}} \to \mathcal{F}$, which is essentially surjective, but not fully faithful. $\Delta_+$ is the monoidal category generated from the associative operad, considered as a non-symmetric operad. Similarly, $(\Delta_+)_{{\rm sym}}$ is the symmetric monoidal category generated from the associative operad, this time considered as a symmetric operad. This category can then be explicitly described as the category whose objects are finite sets and such that the morphisms from $I$ to $J$ are given by maps $f:I \to J$ together with, for each $j \in J$, a choice of a linear order on $f^{-1}(j)$. The symmetric monoidal structure is given by disjoint union. • @HeinrichD, Exactly. Maybe I should have recalled the general construction. Given a (single colored) symmetric operad $P$, the symmetric monoidal category generated from $P$ has as objects the finite sets and the morphisms from $I$ to $J$ are given by maps $f: I \to J$ together with a choice, for each $j \in J$, of a multi-operation $\varphi \in P(f^{-1}(j))$ (here we think of the underlying data of $P$ as a functor from the groupoid of finite sets to sets) . Composition is then defined using the composition structure of the operad. – Yonatan Harpaz Mar 19 '17 at 7:42
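To make the composition law in this description concrete, here is a short Python sketch (an added illustration using a hypothetical encoding, not taken from the answer above): a morphism is stored by recording, for each point of the target, the ordered tuple of its preimages, and composition concatenates these linear orders along the order of the outer morphism, exactly as composition in the associative operad dictates.

```python
# Morphisms I -> J in this category: a map f together with a linear order
# on each fibre, encoded as {j: ordered tuple of preimages of j}.
def compose(g_fibres, f_fibres):
    """Compose I --f--> J --g--> K; the order on the fibre over k is the
    concatenation of the f-fibres taken along the g-order over k."""
    return {k: tuple(i for j in js for i in f_fibres[j])
            for k, js in g_fibres.items()}

# Example with I = {0,1,2}, J = {'a','b'}, K = {'*'}:
f = {'a': (2, 0), 'b': (1,)}   # 0, 2 map to a (ordered 2 before 0), 1 to b
g = {'*': ('b', 'a')}          # a, b map to * (ordered b before a)
print(compose(g, f))           # {'*': (1, 2, 0)}
```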
2019-04-23 11:01:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9684567451477051, "perplexity": 151.0562768224211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578596571.63/warc/CC-MAIN-20190423094921-20190423120921-00427.warc.gz"}
https://www.greenemath.com/College_Algebra/145/Fundamental-Theorem-of-AlgebraPracticeTest.html
### About The Fundamental Theorem of Algebra: The fundamental theorem of algebra tells us that a polynomial of degree n will have n complex solutions, although some of these solutions may be repeated. Test Objectives • Demonstrate the ability to write a polynomial function given certain conditions Fundamental Theorem of Algebra Practice Test: #1: Instructions: Write a polynomial function of degree 3 that satisfies the given conditions. $$a)\hspace{.2em}f(2)=-30$$ $$\text{Zeros}: 5, 4, -3$$ #2: Instructions: Write a polynomial function of degree 3 that satisfies the given conditions. $$a)\hspace{.2em}f(1)=-8$$ $$\text{Zeros}: 2 \hspace{.25em}\text{multiplicity}\hspace{.25em}2, 5$$ #3: Instructions: Write a polynomial function of degree 3 that satisfies the given conditions. $$a)\hspace{.2em}f(2)=135$$ $$\text{Zeros}: -1 \hspace{.25em}\text{multiplicity}\hspace{.25em}3$$ #4: Instructions: Write a polynomial function of degree 3 that satisfies the given conditions. $$a)\hspace{.2em}f(3)=20$$ $$\text{Zeros}: -7, -1, 2$$ #5: Instructions: Write a polynomial function of degree 3 that satisfies the given conditions. $$a)\hspace{.2em}f(-1)=-16$$ $$\text{Zeros}: 3 \hspace{.25em}\text{multiplicity}\hspace{.25em}2, -\frac{1}{2}$$ Written Solutions: #1: Solutions: $$a)\hspace{.2em}f(x)=-x^3 + 6x^2 + 7x - 60$$ #2: Solutions: $$a)\hspace{.2em}f(x)=2x^3 - 18x^2 + 48x - 40$$ #3: Solutions: $$a)\hspace{.2em}f(x)=5x^3 + 15x^2 + 15x + 5$$ #4: Solutions: $$a)\hspace{.2em}f(x)=\frac{1}{2}x^3 + 3x^2 - \frac{9}{2}x - 7$$ #5: Solutions: $$a)\hspace{.2em}f(x)=2x^3 - 11x^2 + 12x + 9$$
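Each of these problems can be checked mechanically: write $f(x)=a(x-r_1)(x-r_2)(x-r_3)$ with the given zeros (repeating a zero according to its multiplicity) and solve for $a$ from the extra condition. A short SymPy sketch for problem #1 (an added illustration, assuming SymPy is available):

```python
# Build a*(x-5)(x-4)(x+3) and determine a from the condition f(2) = -30.
from sympy import symbols, expand, solve

x, a = symbols('x a')
p = a * (x - 5) * (x - 4) * (x + 3)
a_val = solve(p.subs(x, 2) + 30, a)[0]   # f(2) + 30 = 0  gives  a = -1
print(expand(p.subs(a, a_val)))          # -x**3 + 6*x**2 + 7*x - 60
```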
2022-12-05 04:52:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6819136738777161, "perplexity": 762.2041707897764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00138.warc.gz"}
https://ask.cloudbase.it/question/1298/let-user-set-password-without-auto-generated-password-on-initial-boot/?comment=1317
I cannot figure out how to configure the image to simply ask the user to set the password, and not have a default password (like in the one provided at https://cloudbase.it/windows-cloud-im...). I saw this, http://ask.cloudbase.it/question/865/..., and then the CLI method nova get-password <instance> <ssh_private_key> to get the default password. But I just want the user to provide a password on their first login.

Answer: Do sysprep with OOBE and Generalize and shutdown.

Answer: Hello, you need to set the following in the cloudbase-init configuration file:

[DEFAULT]
first_logon_behaviour=always

Thanks,

Comments:

I followed the instructions here, http://docs.openstack.org/image-guide/windows-image.html. At what point in that process do I need to change that setting? I just executed the MSI file. I'll have to change this on the generated image manually I guess? (e.g. regenerate and change it on the image)? ( 2016-07-01 18:25:24 +0300 )

You just need to change the cloudbase-init configuration file, which is located at: C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf\cloudbase-init.conf ( 2016-07-01 19:04:19 +0300 )

As long as you change the image before it boots it's fine, you don't need to fully regenerate the image. ( 2016-07-01 19:46:39 +0300 )

How can you change it before it boots? Is it possible to extract the qcow2 image and change it? ( 2016-07-01 20:00:34 +0300 )

On Linux, with qemu-img you can convert it to raw, then with losetup, kpartx and mount you mount it, and then change the file content. ( 2016-07-01 21:42:34 +0300 )

Doesn't seem to work. I added "first_logon_behavior=always" to the cloudbase-init.conf and to cloudbase-init-unattended.conf. I took this text from a booted image that was changed in the OS, which still made me decrypt the password. [DEFAULT] first_logon_behaviour=always inject_user_password=false ( 2016-07-02 01:08:52 +0300 )

The only thing I want the user to do is type in a password. I'm going to try and redo the entire image to see if that works. ( 2016-07-02 01:13:35 +0300 )

I recreated the image and still the same issue. It prompts me to input the current password (which I have to decrypt) and then change the pass. The config has first_logon_behaviour=always inside of cloudbase-init.conf inside the folder path of C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf ( 2016-07-05 17:05:34 +0300 )

Answer: First of all, please try with first_logon_behaviour=always instead of first_logon_behavior=always (it's missing a 'u'). If the problem is still present, could you send us the cloudbase-init log so we can get an overview of the context of this issue? It would also be useful for us to know which version of the operating system you are trying this on. Thanks for using cloudbase-init, and sorry for this issue, Alex

Comment: I fixed this but the issue still persists. It is still asking for a password when I boot the image. I'm trying this on Windows Server 2012 R2. Where can I get the cloudbase-init log? Would this be on the instance? ( 2016-08-01 16:38:41 +0300 )
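For reference, the offline-edit procedure sketched in the comments might look like the following on a Linux host. This is only an illustrative sketch: the image name, loop device, and partition number are assumptions and will differ for your image, and mounting the Windows NTFS partition may additionally require ntfs-3g.

qemu-img convert -O raw windows.qcow2 windows.raw        # qcow2 -> raw (image name assumed)
sudo losetup -f --show windows.raw                       # attach; prints e.g. /dev/loop0
sudo kpartx -av /dev/loop0                               # map partitions to /dev/mapper/loop0pN
sudo mount /dev/mapper/loop0p2 /mnt                      # the Windows partition number varies
# edit /mnt/Program Files/Cloudbase Solutions/Cloudbase-Init/conf/cloudbase-init.conf
sudo umount /mnt && sudo kpartx -dv /dev/loop0 && sudo losetup -d /dev/loop0
qemu-img convert -O qcow2 windows.raw windows-edited.qcow2   # back to qcow2 if needed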
2020-09-20 01:21:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18046315014362335, "perplexity": 4053.127188179595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400193087.0/warc/CC-MAIN-20200920000137-20200920030137-00592.warc.gz"}
http://accessphysiotherapy.mhmedical.com/content.aspx?bookid=965&sectionid=53599844
Chapter 6

### Outline

• The Elbow Joint: Structure; Movements
• Muscles of the Elbow and Radioulnar Joints: Location; Characteristics and Functions of Individual Muscles
• Muscular Analysis of the Fundamental Movements of the Forearm: Flexion; Extension; Pronation; Supination
• The Wrist and Hand: Structure of the Wrist (Radiocarpal) Joint; Movements of the Hand at the Wrist Joint; Structure and Movements of the Midcarpal and Intercarpal Joints; Structure of the Carpometacarpal and Intermetacarpal Joints; Movements of the Carpometacarpal Joint of the Thumb; Movements of the Carpometacarpal and Intermetacarpal Joints of the Fingers; Structure of the Metacarpophalangeal Joints; Movements of the Metacarpophalangeal Joints of the Four Fingers; Movements of the Metacarpophalangeal Joints of the Thumb; The Interphalangeal Joints
• Muscles of the Wrist and Hand: Location; Characteristics and Functions of Muscles
• Muscular Analysis of the Fundamental Movements of the Wrist, Fingers, and Thumb: The Wrist; The Fingers; The Thumb; The Thumb Metacarpal; The Thumb Phalanges; Length of Long Finger Muscles Relative to Range of Motion in Wrist and Fingers; Using the Hands for Grasping
• Common Injuries of the Forearm, Elbow, Wrist, and Fingers: Fractures of the Forearm; Elbow Dislocation and Fracture; Sprained or Strained Wrist; Carpal Tunnel Syndrome; Avulsion Fracture; Epicondylitis
• Laboratory Experiences

### Objectives

At the conclusion of this chapter, the student should be able to:

1. Name, locate, and describe the structure and ligamentous reinforcements of the articulations of the elbow, forearm, wrist, and hand.
2. Name and demonstrate the movements possible in the joints of the elbow, forearm, wrist, and hand regardless of the starting position.
3. Name and locate the muscles and muscle groups of the elbow, forearm, wrist, and hand, and name their primary actions as agonists, stabilizers, neutralizers, or antagonists.
4. Analyze the fundamental movements of the forearm, hand, and fingers with respect to joint and muscle actions.
5. Describe the common athletic injuries of the forearm, elbow, wrist, and fingers.
6. Perform an anatomical analysis of the elbow, forearm, wrist, and hand in a motor skill.

In much the same way that the shoulder girdle's cooperation with the shoulder joint contributes to the wide range of motion available to the hand, the cooperative movements of the elbow, radioulnar, and wrist joints contribute to the versatility and precision of its movements. Although the hand is intrinsically skillful, its usefulness is greatly impaired when anything interferes with the motions of the forearm or wrist. Injury to any one of the joints involved makes this painfully obvious to the sufferer.

### The Elbow Joint

#### Structure

The elbow is far more complex than the simple hinge joint that it appears to be. The two bones of the forearm attach to the humerus in totally different ways. The humeroulnar joint is indeed a true hinge joint, but the humeroradial joint is far from it. It has been classified as an ...
2017-01-22 14:04:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23911696672439575, "perplexity": 7425.95039586378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00449-ip-10-171-10-70.ec2.internal.warc.gz"}
https://scikit-criteria.quatrope.org/en/latest/tutorial/sufdom.html
# Dominance and satisfaction analysis (AKA filters)¶ This tutorial provides a practical overview of how to use scikit-criteria for satisfaction and dominance analysis, as well as the creation of filters for data cleaning. ## Case¶ In order to decide to purchase a series of bonds, a company studied five candidate investments: PE, JN, AA, FX, MM and GN. The finance department decides to consider the following criteria for selection. selection: 1. ROE: Return percentage. Sense of optimality, $$Maximize$$. 2. CAP: Market capitalization. Sense of optimality, $$Maximize$$. 3. RI: Risk. Sense of optimality, $$Minimize$$. The full decision matrix [1]: import skcriteria as skc dm = skc.mkdm( matrix=[ [7, 5, 35], [5, 4, 26], [5, 6, 28], [3, 4, 36], [1, 7, 30], [5, 8, 30], ], objectives=[max, max, min], alternatives=["PE", "JN", "AA", "FX", "MM", "FN"], criteria=["ROE", "CAP", "RI"], ) dm [1]: ROE[▲ 1.0] CAP[▲ 1.0] RI[▼ 1.0] PE 7 5 35 JN 5 4 26 AA 5 6 28 FX 3 4 36 MM 1 7 30 FN 5 8 30 6 Alternatives x 3 Criteria ## Satisfaction analysis¶ It is reasonable to think that any decision-maker would want to set “satisfaction thresholds” for each criterion, in such a way that alternatives that do not exceed the thresholds in any criterion are eliminated. The basic idea was proposed in the work of “A Behavioral Model of Rational Choice and presents the definition of “aspiration levels” and are set a priori by the decision maker. For our example we will assume that the decision-maker only accepts alternatives with $$ROE >= 2%$$. For this analysis we will need the skcriteria.preprocessing.filters module . [2]: from skcriteria.preprocessing import filters The filters are transformers and works as follows: • At the moment of construction they are provided with a dict that as a key has the name of a criterion, and as a value the condition to be satisfied. • Optionally it receives a parameter ignore_missing_criteria which if it is set to False (default value) fails any attempt to transform an decision matrix that does not have any of the criteria. • For an alternative not to be eliminated the alternative has to pass all filter conditions. The simplest filter consists of instances of the class filters.Filters, which as a value of the configuration dict, accepts functions that are applied to the corresponding criteria and returns a mask where the True values denote the alternatives that we want to keep. To write the function that filters the alternatives where $ROE >= 2. [3]: def roe_filter(v): return v >= 2 # criteria are numpy.ndarray flt = filters.Filter({"ROE": roe_filter}) flt [3]: Filter(criteria_filters={'ROE': <function roe_filter at 0x7fd716405160>}, ignore_missing_criteria=False) However, scikit-criteria offers a simpler collection of filters that implements the most common operations of equality, inequality and inclusion a set. In our case we are interested in the FilterGE class, where GE stands for Greater or Equal. So the filter would be defined as [4]: flt = filters.FilterGE({"ROE": 2}) flt [4]: FilterGE(criteria_filters={'ROE': 2}, ignore_missing_criteria=False) The way to apply the filter to a DecisionMatrix, is like any other transformer: [5]: dmf = flt.transform(dm) dmf [5]: ROE[▲ 1.0] CAP[▲ 1.0] RI[▼ 1.0] PE 7 5 35 JN 5 4 26 AA 5 6 28 FX 3 4 36 FN 5 8 30 5 Alternatives x 3 Criteria As can be seen, we eliminated the alternative MM which did not comply with an $$ROE >= 2$$. 
If, on the other hand (to give an example), we would like to keep only the alternatives with $$ROE > 3$$ and $$CAP > 4$$ (using the original matrix), we can use the filter FilterGT, where GT stands for Greater Than.

[6]: filters.FilterGT({"ROE": 3, "CAP": 4}).transform(dm)

[6]:     ROE[▲ 1.0]  CAP[▲ 1.0]  RI[▼ 1.0]
     PE  7           5           35
     AA  5           6           28
     FN  5           8           30

     3 Alternatives x 3 Criteria

Note: If it is necessary to filter the alternatives by two separate conditions, a pipeline can be used. An example of this can be seen below, where we combine a satisficing and a dominance filter.

The complete list of filters implemented by Scikit-Criteria is:

• filters.Filter: Filter alternatives according to the value of a criterion using arbitrary functions.
  filters.Filter({"criterion": lambda v: v > 1})
• filters.FilterGT: Filter Greater Than ($$>$$).
  filters.FilterGT({"criterion": 1})
• filters.FilterGE: Filter Greater or Equal than ($$\geq$$).
  filters.FilterGE({"criterion": 2})
• filters.FilterLT: Filter Less Than ($$<$$).
  filters.FilterLT({"criterion": 1})
• filters.FilterLE: Filter Less or Equal than ($$\leq$$).
  filters.FilterLE({"criterion": 2})
• filters.FilterEQ: Filter Equal ($$==$$).
  filters.FilterEQ({"criterion": 1})
• filters.FilterNE: Filter Not-Equal than ($$\neq$$).
  filters.FilterNE({"criterion": 2})
• filters.FilterIn: Filter if the value is in a set ($$\in$$).
  filters.FilterIn({"criterion": [1, 2, 3]})
• filters.FilterNotIn: Filter if the value is not in a set ($$\notin$$).
  filters.FilterNotIn({"criterion": [1, 2, 3]})

## Dominance

An alternative $$A_0$$ is said to dominate an alternative $$A_1$$ ($$A_0 \succeq A_1$$) if $$A_0$$ is at least as good as $$A_1$$ on all criteria and strictly better on at least one criterion. $$A_0$$ strictly dominates $$A_1$$ ($$A_0 \succ A_1$$) if $$A_0$$ is better than $$A_1$$ on all criteria.

Under this same train of thought, an alternative that dominates all others is called a "dominant alternative". If there is a dominant alternative, it is undoubtedly the best choice, as long as a full ranking is not required.

On the other hand, an alternative is dominated if there exists at least one other alternative that dominates it. If a dominated alternative exists and a complete ranking is not desired, it should be removed from the set of decision alternatives. Generally, only the non-dominated ("efficient") alternatives are of interest.

### Scikit-Criteria dominance analysis

Scikit-criteria provides a number of tools for evaluating dominant and dominated alternatives through the attribute DecisionMatrix.dominance.

For example, we can access all the dominated alternatives by using the dominated method:

[7]: dmf.dominance.dominated()

[7]: PE    False
     JN    False
     AA    False
     FX    True
     FN    False
     dtype: bool

This shows that FX is a dominated alternative. In addition, if we want to know which are the strictly dominated alternatives, we need to provide the strict parameter to the method:

[8]: dmf.dominance.dominated(strict=True)

[8]: PE    False
     JN    False
     AA    False
     FX    True
     FN    False
     dtype: bool

It can be seen that FX is strictly dominated by at least one other alternative. If we want to find out which alternatives dominate FX, we can take one of two paths:

1. List all the dominators/strict dominators of FX using dominators_of().

[9]: dmf.dominance.dominators_of("FX", strict=True)

[9]: array(['PE', 'AA', 'FN'], dtype=object)

2. Use dominance()/dominance.dominance() to see the full relationship between all alternatives.
[10]: dmf.dominance(strict=True)  # equivalent to dmf.dominance.dominance()

[10]:     PE     JN     AA     FX     FN
      PE  False  False  False  True   False
      JN  False  False  False  False  False
      AA  False  False  False  True   False
      FX  False  False  False  False  False
      FN  False  False  False  True   False

The result of the method is a DataFrame that has a True value in each cell where the row alternative dominates the column alternative.

If this matrix is very large we can, for example, visualize it in the form of a heatmap using the seaborn library:

[11]: import seaborn as sns
      sns.heatmap(dmf.dominance.dominance(strict=True))

[11]: <AxesSubplot:>

Finally, we can see how each of the dominators of FX compares against FX using compare():

[12]: for dominant in dmf.dominance.dominators_of("FX"):
          display(dmf.dominance.compare(dominant, "FX"))

              Criteria              Performance
              ROE    CAP    RI
      PE      True   True   True   3
      FX      False  False  False  0
      Equals  False  False  False  0

              Criteria              Performance
              ROE    CAP    RI
      JN      True   False  True   2
      FX      False  False  False  0
      Equals  False  True   False  1

              Criteria              Performance
              ROE    CAP    RI
      AA      True   True   True   3
      FX      False  False  False  0
      Equals  False  False  False  0

              Criteria              Performance
              ROE    CAP    RI
      FN      True   True   True   3
      FX      False  False  False  0
      Equals  False  False  False  0

### Filter non-dominated alternatives

Finally, skcriteria offers a way to keep only the non-dominated alternatives; it accepts a parameter specifying whether strict dominance should be evaluated.

[13]: flt = filters.FilterNonDominated(strict=True)
      flt

[13]: FilterNonDominated(strict=True)

[14]: flt.transform(dmf)

[14]:     ROE[▲ 1.0]  CAP[▲ 1.0]  RI[▼ 1.0]
      PE  7           5           35
      JN  5           4           26
      AA  5           6           28
      FN  5           8           30

      4 Alternatives x 3 Criteria

## Full experiment

We can finally create a complete MCDA experiment that takes the satisfaction and dominance analysis into account. The complete experiment has the following steps:

1. Eliminate alternatives that do not yield at least 2% ($$ROE \geq 2$$).
2. Eliminate dominated alternatives.
3. Convert all criteria to maximize.
4. Scale the weights by their total sum.
5. Scale the matrix by the vector modulus.
6. Apply TOPSIS.

The most convenient way to do this is to use a pipeline.

[15]: from skcriteria.preprocessing import scalers, invert_objectives
      from skcriteria.pipeline import mkpipe

      pipe = mkpipe(
          filters.FilterGE({"ROE": 2}),
          filters.FilterNonDominated(strict=True),
          invert_objectives.NegateMinimize(),
          scalers.SumScaler(target="weights"),
          scalers.VectorScaler(target="matrix"),
          TOPSIS(),
      )
      pipe

[15]: SKCPipeline(steps=[('filterge', FilterGE(criteria_filters={'ROE': 2}, ignore_missing_criteria=False)), ('filternondominated', FilterNonDominated(strict=True)), ('negateminimize', NegateMinimize()), ('sumscaler', SumScaler(target='weights')), ('vectorscaler', VectorScaler(target='matrix')), ('topsis', TOPSIS(metric='euclidean'))])

(Note: TOPSIS is not imported in this cell; it presumably comes from an earlier part of the tutorial, in this version of scikit-criteria via the skcriteria.madm.similarity module.)

We now apply the pipeline to the original data:

[16]: pipe.evaluate(dm)

[16]:        PE  JN  AA  FN
      Rank   3   4   2   1
      Method: TOPSIS

[17]: import datetime as dt
      import skcriteria

      print("Scikit-Criteria version:", skcriteria.VERSION)
      print("Running datetime:", dt.datetime.now())

      Scikit-Criteria version: 0.7
      Running datetime: 2022-05-07 03:51:22.283023
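A closing design note (added here, not in the original notebook): since none of the steps above hold fitted state, the assembled pipeline can be reused as-is on any decision matrix with the same three criteria. A minimal sketch with a hypothetical extra candidate ZZ, whose values are invented for illustration:

dm_extended = skc.mkdm(
    matrix=[
        [7, 5, 35],
        [5, 4, 26],
        [5, 6, 28],
        [3, 4, 36],
        [1, 7, 30],
        [5, 8, 30],
        [6, 7, 25],  # ZZ: hypothetical candidate
    ],
    objectives=[max, max, min],
    alternatives=["PE", "JN", "AA", "FX", "MM", "FN", "ZZ"],
    criteria=["ROE", "CAP", "RI"],
)
print(pipe.evaluate(dm_extended))  # same filtering + TOPSIS, now including ZZ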
2022-06-26 18:20:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38581085205078125, "perplexity": 5906.043132026455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271763.15/warc/CC-MAIN-20220626161834-20220626191834-00525.warc.gz"}
https://stacks.math.columbia.edu/tag/011R
Lemma 12.18.2. Let $(A, E, \alpha, f, g)$ be an exact couple in an abelian category $\mathcal{A}$. Set

1. $d = g \circ f : E \to E$ so that $d \circ d = 0$,
2. $E' = \mathop{\mathrm{Ker}}(d)/\mathop{\mathrm{Im}}(d)$,
3. $A' = \mathop{\mathrm{Im}}(\alpha)$,
4. $\alpha' : A' \to A'$ induced by $\alpha$,
5. $f' : E' \to A'$ induced by $f$,
6. $g' : A' \to E'$ induced by "$g \circ \alpha^{-1}$".

Then we have

1. $\mathop{\mathrm{Ker}}(d) = f^{-1}(\mathop{\mathrm{Ker}}(g)) = f^{-1}(\mathop{\mathrm{Im}}(\alpha))$,
2. $\mathop{\mathrm{Im}}(d) = g(\mathop{\mathrm{Im}}(f)) = g(\mathop{\mathrm{Ker}}(\alpha))$,
3. $(A', E', \alpha', f', g')$ is an exact couple.

Proof. Omitted. $\square$
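Since the proof is omitted in the source, here is a brief sketch of parts (1) and (2) (an editorial addition, not part of the original tag), using only exactness of the couple, i.e. $\mathop{\mathrm{Ker}}(g) = \mathop{\mathrm{Im}}(\alpha)$ and $\mathop{\mathrm{Ker}}(\alpha) = \mathop{\mathrm{Im}}(f)$:
\[
\mathop{\mathrm{Ker}}(d) = \mathop{\mathrm{Ker}}(g \circ f) = f^{-1}(\mathop{\mathrm{Ker}}(g)) = f^{-1}(\mathop{\mathrm{Im}}(\alpha)),
\qquad
\mathop{\mathrm{Im}}(d) = (g \circ f)(E) = g(\mathop{\mathrm{Im}}(f)) = g(\mathop{\mathrm{Ker}}(\alpha)),
\]
where the last step in each chain substitutes the corresponding exactness identity.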
2019-07-16 04:32:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.5802474617958069, "perplexity": 1645.6346010511961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524502.23/warc/CC-MAIN-20190716035206-20190716061206-00393.warc.gz"}
https://www.darwinproject.ac.uk/letter/?docId=letters/DCP-LETT-9626.xml;query=darwin;brand=default;hit.rank=3
# From W. C. Marshall   5 September [1874]1

Derwent Island | Keswick
Saturday Sep 5th.

Dear Mr. Darwin

I am sending you by tomorrow's post some more leaves of Pinguicula wh. have seeds on them for the most part. I also enclose a list from wh. you will see that 79 per cent of the leaves I have examined had insects on them.2 I have counted the remains of insects wh. had apparently been some time on the leaf & many small things wh. I cd. not have recognised as insects without the aid of a magnifying glass. I have also counted in several small spiders. The insects were for the most part small gnats & aphides, but there seemed to be a great variety, I have found a few beetles, but no moths.

The observations have been made during an exceptionally rainy week, with an average daily rainfall of $\frac{3}{4}$ of an inch!!

Pinguicula Vulgaris grows in wet places on mountain slopes, & has as far as I have observed a partiality for running water. The following are some of the more conspicuous plants tt. grow with it—

Parnassia Palustris
Drosera Rotundifolia
Saxifraga aizoides
Anagallis tenella
Erica tetralix3

With regard to the secretion from insects, I can not trace it; I observe fluid on the leaves, generally in the chanel formed by the edge; but whether this is a secretion of the plant, or from the insect, or merely rain water lodged, I can not tell; but it is certainly sticky, & therefore if rain water must have disolved some of the viscid matter of the points on the leaves.4

I have observed tt. the leaves are not unfrequently eaten as if by slugs. Also I have no doubt you have noticed tt. there is a tendency in the leaves to curl tightly over entrapped insects that get near the edge, & the same applies to seeds.5

I have noticed brown patches & in one or two cases holes under insect remains, I supose this is the result of over manuring, I have noticed the same effect on grass; I mean the excrement of animals kills grass where it lies but forms luxuriant growth of grass round.

I have, I fear, put my remarks in a rambling & inconvenient form. If there is anything I have not answered distinctly, please get Horace to write me a note about it6

Believe me | yrs. very truly | William C. Marshall

## CD annotations

1.1 tomorrow's] del ink; 'todays' added ink
1.2 list from] '(alluded to in previous letter)' interl ink
2.3 The observations … inch!! 2.4] scored ink
5.2 the same … seeds.] double underl ink

## Footnotes

The year is established by the relationship between this letter and the letter from W. C. Marshall, 30 August [1874]. Marshall had sent leaves of Pinguicula (butterwort) on 30 August 1874 (see letter from W. C. Marshall, 30 August [1874]). CD had written that he would be interested in seeing a list Marshall was compiling, noting the number of insects found on Pinguicula leaves and the proportion of leaves with insects (letter to W. C. Marshall, [after 30 August 1874]). The list has not been found; see, however, Insectivorous plants, p. 370. Parnassia palustris is marsh grass of Parnassus; Drosera rotundifolia is round-leaved or common sundew; Saxifraga aizoides is evergreen saxifrage; Anagallis tenella is bog pimpernel; Erica tetralix is the cross-leaved heath. Only a draft of the letter to W. C. Marshall, [after 30 August 1874], has been found; it did not mention tracing the source of the sticky secretion. CD suspected that Pinguicula could digest seeds as well as insects (see letter to W. T. Thiselton Dyer, 9 June 1874).
Horace Darwin was a friend of Marshall; the two were students at Trinity College, Cambridge, at the same time (Freeman 1978).

## Bibliography

Freeman, Richard Broke. 1978. Charles Darwin: a companion. Folkestone, Kent: William Dawson & Sons. Hamden, Conn.: Archon Books, Shoe String Press.

Insectivorous plants. By Charles Darwin. London: John Murray. 1875.

## Summary

Sends Pinguicula vulgaris leaves with seeds on them, together with his observations on proportion of leaves with insects on them.

## Letter details

Letter no.: DCP-LETT-9626
From: William Cecil (Bill) Marshall
To: Charles Robert Darwin
Sent from: Derwent Island
Source of text: DAR 58.1: 128–9
Physical description: 4pp †
2020-08-09 21:17:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7201513648033142, "perplexity": 6534.771791448752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738573.99/warc/CC-MAIN-20200809192123-20200809222123-00153.warc.gz"}
https://www.flyingcoloursmaths.co.uk/the-mathematical-ninja-and-the-cube-root-of-13/
# The Mathematical Ninja and the Cube Root of 13

A physicist. A calculator. The Mathematical Ninja's face - what could be seen of it - was more snarl than feature. It's quite tricky to hiss something that doesn't have any sibilant consonants, but they hissed all the same: "The cube root of 13? You don't need a calculator for that."

The student, mindful of the previous episode involving an argument about whether Mars was 5cm away because the calculator said so, surreptitiously returned the machine to the bag. "Yes, sensei. Sorry sensei. I suppose it's between 2 and 3? A bit less than halfway?"

At least the student was showing willing.

"The cube root of 13.5 is $\frac{3}{\sqrt[3]{2}}$, and the cube root of 2 is about $\frac{5}{4}$, so $\frac{12}{5} = 2.4$ is an overestimate."

The student, wisely, kept the thought "of course, everyone knows the cube root of 2" from flashing across his face.

"You can also do it with logarithms. $\ln(12) \approx 1.38 + 1.10 = 2.48$, and $\ln(13)$ is a twelfth - call it 0.08 - more than that, or 2.56. A third of that is 0.85, give or take."

Not a mutter from the student.

"And $e^{0.85}$… well, there are several ways to approach that. It's roughly the geometric mean of 2 and $e$, which I'd approximate with the arithmetic mean and call 2.35 or so. Let's go with that."

The student just nodded, and made a note to check it on the calculator later.

## Colin

Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove.
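For the student's later calculator check, here is the arithmetic spelled out (an added footnote, not Colin's):
\[
2.4^3 = 13.824 > 13, \qquad 2.35^3 \approx 12.978 < 13,
\]
so the true value is bracketed by the two estimates; indeed $\sqrt[3]{13} = 2.3513\ldots$, impressively close to the Ninja's 2.35.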
2020-12-05 14:56:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6444620490074158, "perplexity": 2112.889301604295}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747887.95/warc/CC-MAIN-20201205135106-20201205165106-00653.warc.gz"}
https://www.tbi.univie.ac.at/RNA/tutorial/
## RNA Web Services

This tutorial aims to give a basic introduction to using the command line programs in the ViennaRNA Package in a UNIX-like (Linux) environment. Of course, some of you may ask "Why are there no friendly graphical user interfaces?". Well, there are some, especially in the form of web services. If a few simple structure predictions are all you want to do, there are several useful sites for doing RNA structure analysis available on the web. Indeed, many of the tasks described below can be performed using various web servers.

### Useful Web Services

• Michael Zuker's mfold server computes (sub)optimal structures and hybridization for DNA and RNA sequences with many options. Mfold Website
• BiBiServ, several small services, e.g. pseudo-knot prediction pknotsRG, bi-stable structures paRNAss, alignment RNAforester, visualization RNAmovies, suboptimal structures RNAshapes. Bielefeld Bioinformatics Service
• The ViennaRNA Server offers web access to many tools of the ViennaRNA Package, e.g. RNAfold, RNAalifold, RNAinverse and soon RNAz. ViennaRNA Webservices
• several specialized servers

Web servers are also a good starting point for novice users since they provide a more intuitive interface. Moreover, the ViennaRNA Server will return the equivalent command line invocation for each request, making the transition from web services to locally installed software easier. On the other hand, web servers are not ideal for analyzing many or very long sequences, and usually they offer only a few often-used tasks. Much the same is true for point-and-click graphical interfaces. Command line tools, on the other hand, are ideally suited for automating repetitive tasks. They can even be combined in pipes to process the results of one program with another, or they can be used in parallel, running tens or hundreds of tasks simultaneously on a cluster of PCs. You can try some of these web services in parallel to the exercises below.

## Get started

### Typographical Conventions

• Constant width font is used for program names, variable names and other literal text like input and output in the terminal window.
• Lines starting with a $ within a literal text block are commands. You should type the text following the $ into your terminal window, finishing by hitting the Enter-key. (The $ signifies the command line prompt, which may look different on your system.)
• All other lines within a literal text block are the output from the command you just typed.

### Data Files

Data files containing the sequences used in the examples below are shipped with this tutorial.

### Terminal, Command line and Editor

• You can get a terminal by moving your mouse-pointer to an empty spot of your desktop, clicking the right mouse-button and choosing "Open Terminal" from the pull-down menu.
• You can run commands in the terminal by typing them next to the command line prompt (usually something like $) followed by hitting the Enter-key.

  $ date
  Tue Jul 7 14:30:25 CEST 2015

• To get more information about a command, type man followed by the command name and hit the Enter-key. Leave the man pages by pressing the q-key.

  $ man date

• Redirect a command's input and output using the following special characters:
  '|' pipes the stdout of one command to the stdin of the next
  '<' redirects a file to stdin
  '>' redirects stdout to a file
  Here, stdout stands for standard output, which you can normally see in the terminal. stdin is its counterpart, the standard input.
The character '|' allows you to pipe the standard output of one program directly as standard input into another program; hence, the programs are chained together.

Below you'll find a list of some useful core commands available in any Linux terminal.

Command   Description
pwd       displays the path to the current working directory
cd        changes the working directory (initially your "HOME")
ls        lists files and directories in the current (or a specified) directory
mkdir     creates a directory
rm        removes a file (add option -r for deleting a folder)
less      shows file(s) one page at a time
echo      prints string(s) to standard output
wc        prints the number of newlines, words and bytes in a specified file

For more information regarding these commands, append --help to the program call, like this:

$ rm --help

Try a few commands on your own, e.g.

$ ls > file_list
$ less file_list
$ rm file_list
$ ls | less

Here the stdout from the ls command was written to a file called file_list. The next command shows the content of file_list. We quit less by pressing the q-key and remove the file. ls | less pipes the output into the less program without writing it to a file.

Now we create our working directory, including subfolders, and our first sequence file using the commands we just learned. Keep in mind that you should create a sensible structure so you can find your data easily. First find out which directory you are in by typing

$ pwd

It should look similar to

/home/YOURUSER

To make sure that you are in the correct directory, type (~ is the shortcut for the home directory)

$ cd ~

Now create a new folder in your home directory:

$ mkdir -p ~/Tutorial/Data
$ cd ~/Tutorial/Data
$ echo ATGAAGATGA > BAZ.seq

Here we created two new folders in our HOME, Tutorial and a subfolder called Data, then we jumped to the Data folder and wrote a short DNA sequence to the BAZ.seq file. For further processing we need an RNA sequence instead of a DNA sequence, so we need to replace the T by a U by executing the following command using sed (the stream editor):

$ sed -i 's/T/U/g' BAZ.seq

The program is called via sed, -i tells sed to edit the existing file in place (in this case BAZ.seq), s stands for "substitute T by U", and g tells sed to replace all occurring T's in the file globally. When we look at our file using less we should see our new sequence "AUGAAGAUGA":

$ less BAZ.seq

### Installing Software from Source

Many bioinformatics programs are available only as source code that has to be compiled and installed. We'll demonstrate the standard way to install programs from source using the ViennaRNA Package.

Get the ViennaRNA Package

You can either get the required package precompiled, depending on which operating system you run (precompiled packages are available for distinct distributions like Fedora, Arch Linux, Debian, Ubuntu, Windows), or you compile the source code yourself. Here we are compiling the programs ourselves. Have a look at the file INSTALL distributed with the ViennaRNA Package for more detail, or read the online documentation.

The instructions for building the source code are:

1. Go to your Tutorial folder and create a directory:

   $ cd ..
   $ mkdir downloads
   $ cd downloads

2. Download the ViennaRNA Package from http://www.tbi.univie.ac.at/RNA/index.html and save it into the newly created directory.

3. Unpack the gzipped tar archive by running (replace [2.1.9] with the actual version number):

   $ tar -zxf ViennaRNA-[2.1.9].tar.gz
4. List the content of the directory:

   $ ls -F
   ViennaRNA-[2.1.9]/ ViennaRNA-[2.1.9].tar.gz

### Build the ViennaRNA Package

The installation location can be controlled through options to the configure script. E.g., to change the default installation location to the directory VRP in your $HOME/Tutorial directory, use the --prefix option so the build system knows that the target directory has changed.

1. To configure and build the package just run the following commands:

   $ cd ViennaRNA-[2.1.9]
   $ mkdir -p ~/Tutorial/Progs/VRP
   $ ./configure --prefix=$HOME/Tutorial/Progs/VRP
   $ make
   $ make install

   You already know the cd and mkdir commands. ./configure checks whether all dependencies are fulfilled and exits the script if some major requirements are missing. If all is OK, it creates the Makefile, which is then used by make to build the package and by make install to install it.

2. To install the ViennaRNA package system-wide (only for people with superuser privileges, which we are NOT!) run:

   $ ./configure
   $ make
   $ make install

You find the installed files in

1. $HOME/Tutorial/Progs/VRP/bin (programs)
2. $HOME/Tutorial/Progs/VRP/share/ViennaRNA/bin (perl scripts)

Wherever you installed the main programs of the ViennaRNA Package, make sure the path to the executables shows up in your PATH environment variable. To check the contents of the PATH environment variable, simply run

$ echo $PATH

For easier handling we now create a folder containing all our binaries as well as perl scripts and copy them into a common folder:

$ cd ~/Tutorial/Progs/
$ cp VRP/share/ViennaRNA/bin/* .

Now you can show the contents of the folder using the command ls. Also copy the binaries from the VRP/bin folder. In the next step we add the path of the directory to the PATH environment variable (e.g. use pwd to find it), so we don't need to write the whole path every time we call a program:

$ export PATH=${HOME}/Tutorial/Progs:${PATH}

Note that this is only a temporary solution. If you want the path to be permanently added, you need to add the line above to the config file of your shell environment. Typically bash is the standard, so you need to add the export line above to the .bashrc in your home directory. To reload the contents of .bashrc, type

$ source ~/.bashrc

or close the current terminal and open it again. (Remember, this works only for the bash shell.) To check that everything worked out, find which executable is used:

$ which RNAfold

The shown path should point to $HOME/Tutorial/Progs/. Finally, try to get a brief description of a program, e.g.

$ RNAfold --help

If this doesn't work, reread the steps described above more carefully.

### What's in the ViennaRNA Package

The core of the ViennaRNA Package is formed by a collection of routines for the prediction and comparison of RNA secondary structures. These routines can be accessed through stand-alone programs, such as RNAfold, RNAdistance etc., which should be sufficient for most users. For those who wish to develop their own programs, a library which can be linked to your own code is provided.
The base directory

• Make a directory listing of downloads/ViennaRNA-2.1.9/:

  $ ls -F ~/Tutorial/downloads/ViennaRNA-2.1.9/
  acinclude.m4   config.h.in     H/                 Makefile      Readseq/
  aclocal.m4     config.log      INSTALL            Makefile.am   RNAforester/
  AUTHORS        config.status*  INSTALL.configure  Makefile.in   RNAlib2.pc
  ChangeLog      config.sub*     install-sh*        man/          RNAlib2.pc.in
  Cluster/       configure*      interfaces/        misc/         stamp-h1
  compile*       configure.ac    Kinfold/           missing*      THANKS
  config/        COPYING         lib/               NEWS          Utils/
  config.guess*  depcomp*        libsvm-2.91/       Progs/        config.h
  doc/           m4/             README

You now see the contents of the ViennaRNA-2.1.9 folder. Directories are marked by a "/" and the "*" indicates executable files. The Makefile contains the rules to compile the code, the Progs folder holds all the binaries except Kinfold and RNAforester, and Utils contains the perl scripts. configure handles distinct options for installation and creation of the Makefile. INSTALL covers installation instructions and the README file contains information about the ViennaRNA Package.

Which programs are available?

RNA2Dfold     Compute coarse grained energy landscape of representative sample structures
RNAaliduplex  Predict conserved RNA-RNA interactions between two alignments
RNAalifold    Calculate secondary structures for a set of aligned RNA sequences
RNAcofold     Calculate secondary structures of two RNAs with dimerization
RNAdistance   Calculate distances between RNA secondary structures
RNAduplex     Compute the structure upon hybridization of two RNA strands
RNAeval       Evaluate free energy of RNA sequences with given secondary structure
RNAfold       Calculate minimum free energy secondary structures and partition function of RNAs
RNAheat       Calculate the specific heat (melting curve) of an RNA sequence
RNAinverse    Find RNA sequences with given secondary structure (sequence design)
RNALalifold   Calculate locally stable secondary structures for a set of aligned RNAs
RNALfold      Calculate locally stable secondary structures of long RNAs
RNApaln       RNA alignment based on sequence base pairing propensities
RNApdist      Calculate distances between thermodynamic RNA secondary structures ensembles
RNAparconv    Convert energy parameter files from ViennaRNA 1.8 to 2 format
RNAPKplex     Predict RNA secondary structures including pseudoknots
RNAplex       Find targets of a query RNA
RNAplfold     Calculate average pair probabilities for locally stable secondary structures
RNAplot       Draw and markup RNA secondary structures in PostScript, SVG, or GML
RNApvmin      Find a vector of perturbation energies which may further be used to constrain folding
RNAsnoop      Find targets of a query H/ACA snoRNA
RNAsubopt     Calculate suboptimal secondary structures of RNAs
RNAup         Calculate the thermodynamics of RNA-RNA interactions
Kinfold       simulates the stochastic folding kinetics of RNA sequences into secondary structures
RNAforester   compare RNA secondary structures via forest alignment

Which Utilities are available?
b2ct          converts dot-bracket notation to Zuker's mfold '.ct' file format
b2mt.pl       converts dot-bracket notation to x y values
cmount.pl     generates colored mountain plot
coloraln.pl   colorize an alirna.ps file
colorrna.pl   colorize a secondary structure with reliability annotation
ct2b.pl       converts Zuker's mfold '.ct' file format to dot-bracket notation
dpzoom.pl     extract a portion of a dot plot
mountain.pl   generates mountain plot
popt          extract Zuker's p-optimal folds from subopt output
refold.pl     refold using consensus structure as constraint
relplot.pl    add reliability information to an RNA secondary structure plot
rotate_ss.pl  rotate the coordinates of an RNA secondary structure plot
switch.pl     describes RNA sequences that exhibit two almost equally stable structures

All programs that are shipped with the ViennaRNA Package provide some documentation in the form of "man pages". In UNIX-like environments, these manual pages can be viewed using the man command after successfully installing the ViennaRNA Package:

$ man RNAalifold

Alternatively, an online version of the manual pages is available at https://www.tbi.univie.ac.at/RNA/documentation.html#programs. Note that the MANPATH environment variable needs to be updated if the ViennaRNA Package has been installed in a non-standard path.

There is also helpful documentation in the folder of the ViennaRNA Package: /Tutorial/downloads/ViennaRNA-2.1.9/doc/RNAlib-2.1.9.pdf

Most Perl scripts carry embedded documentation that is displayed by typing

$ perldoc coloraln.pl

in the folder where the script is located. All scripts and programs give short usage instructions when called with the -h command line option (e.g. RNAalifold -h).

### The Input File Format

RNA sequences come in a variety of formats. The sequence format used throughout the ViennaRNA Package is very simple. A sequence file contains one or more sequences. Each sequence must appear as a single line in the file without embedded white space. A sequence may be preceded by a special line starting with the '>' character followed by a sequence name. This name will be used by the programs in the ViennaRNA Package as the basename for the PostScript output files for this sequence. Note that this is almost the fasta sequence format, except that no line-breaks are allowed within a sequence, while the header line is optional.

The following programs provide full fasta support: RNAfold, RNAsubopt, RNAcofold, RNAPKplex, RNALfold, RNAplfold, RNAeval, RNAplot, RNAheat

## Structure Prediction on single Sequences

### The Program RNAfold

Our first task will be to do a structure prediction using RNAfold. This should get you familiar with the input and output format as well as the graphical output produced. RNAfold reads RNA sequences from stdin, calculates their minimum free energy (MFE) structure, and prints the MFE structure in dot-bracket notation and its free energy to stdout. If the -p option is set, it also computes the partition function and the base pairing probability matrix, and additionally prints the free energy of the thermodynamic ensemble, the frequency of the MFE structure in the ensemble and the ensemble diversity to stdout. Another useful option is --MEA, which also reports the maximum expected accuracy structure, but remember that this needs more CPU time than a run without --MEA.

MFE structure of a single sequence

1. Use a text editor (emacs, vi, nano, gedit) to prepare an input file by pasting the text below and save it under the name test.seq in your Data folder.

   > test
   CUACGGCGCGGCGCCCUUGGCGA
2. Compute the best (MFE) structure for this sequence:

   $ RNAfold < test.seq
   CUACGGCGCGGCGCCCUUGGCGA
   ...........((((...)))). ( -5.00)

The last line of the text output contains the predicted MFE structure in dot-bracket notation and its free energy in kcal/mol. A dot in the dot-bracket notation represents an unpaired position, while a base pair (i, j) is represented by a pair of matching parentheses at position i and j.

RNAfold created a file named test_ss.ps. The filename is taken from the fasta header; if there's no header, the output is simply called rna.ps. Let's take a look at the output file with gv, a PostScript viewer. The & at the end starts the program in the background.

$ gv test_ss.ps &

Compare the dot-bracket notation to the PostScript drawing shown in the file test_ss.ps.

The calculation above does not tell us whether the predicted structure is the only possibility or not, so let's look at the equilibrium ensemble instead.

Predicting equilibrium pair probabilities

1. Run RNAfold -p --MEA to compute the partition function and pair probabilities as well as the maximum expected accuracy.
2. Look at the generated PostScript files test_ss.ps and test_dp.ps.

   $ RNAfold -p --MEA < test.seq
   CUACGGCGCGGCGCCCUUGGCGA
   ...........((((...)))). ( -5.00)
   ....{,{{...||||...)}}}. [ -5.72]
   ....................... { 0.00 d=4.66}
   ......((...))((...))... { 2.90 MEA=14.79}
   frequency of mfe structure in ensemble 0.311796; ensemble diversity 6.36

Here the last four lines are new compared to the text output without the -p --MEA options. The partition function is a rough measure for the well-definedness of the MFE structure. The third line shows a condensed representation of the pair probabilities of each nucleotide, similar to the dot-bracket notation, followed by the ensemble free energy (-kT ln(Z)) in kcal/mol. The next two lines represent the centroid structure with its free energy and its distance to the ensemble, and the MEA structure. The last line shows the frequency of the MFE structure in the ensemble of secondary structures and the diversity of the ensemble. In the condensed notation, "." denotes bases that are essentially unpaired, "," weakly paired, "|" strongly paired without preference, and "{", "}", "(", ")" denote bases that are weakly (>33%) or strongly (>66%) paired in upstream or downstream direction, respectively. Note that the MFE structure is adopted with only 31% probability; the diversity is also very high for such a short sequence.

For rotating the secondary structure plot there is a useful tool called rotate_ss.pl included in the ViennaRNA Package. Just read the perldoc for this tool to learn how to handle the rotation, and use that information to get your secondary structure into a vertical position.

$ perldoc rotate_ss.pl

Secondary Structure plot and Dot plot

The "dot plot" in test_dp.ps shows the pair probabilities within the equilibrium ensemble as an $n \times n$ matrix, and is an excellent way to visualize structural alternatives. A square at row $i$ and column $j$ indicates a base pair. The area of a square in the upper right half of the matrix is proportional to the probability of the base pair $(i,j)$ within the equilibrium ensemble. The lower left half shows all pairs belonging to the MFE structure. While the MFE consists of a single helix, several different helices are visualized in the pair probabilities.
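As a quick aside (an addition to this tutorial, not part of the original text): the pair probabilities drawn in the dot plot can also be extracted as plain numbers. This sketch assumes the standard encoding of RNAfold dot plot files, where each line of the form "i j v ubox" stores the square root of the pair probability in v; verify this against your own dp.ps file before relying on it:

$ awk '$4 == "ubox" { printf "%4d %4d %8.6f\n", $1, $2, $3*$3 }' test_dp.ps | sort -k3,3nr | head

This prints the ten most probable base pairs, squaring v to recover the probability.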
Next, let's use the relplot utility to visualize which parts of a predicted MFE structure are well-defined and thus more reliable. Let's also use a real example for a change and produce yet another representation of the predicted structure, the mountain plot.

Mountain and Reliability plot

Fold the 5S rRNA sequence and visualize the structure (the file 5S.seq is shipped with the tutorial):

$ RNAfold -p < 5S.seq
$ mountain.pl 5S_dp.ps | xmgrace -pipe
$ relplot.pl 5S_ss.ps 5S_dp.ps > 5S_rss.ps

A mountain plot is especially useful for long sequences where conventional structure drawings become terribly cluttered. It is an xy-diagram plotting the number of base pairs enclosing a sequence position versus the position. The Perl script mountain.pl transforms a dot plot into the mountain plot coordinates, which can be visualized with any xy-plotting program, e.g. xmgrace. The resulting plot shows three curves: two mountain plots derived from the MFE structure (red) and the pairing probabilities (black), and a positional entropy curve (green). Well-defined regions are identified by low entropy. By superimposing several mountain plots, structures can easily be compared.

The perl script relplot.pl adds reliability information to an RNA secondary structure plot in the form of color annotation. The script computes a well-definedness measure we call "positional entropy" ($S(i) = -\sum p_{ij} \log(p_{ij})$ for those who want to know the details) and encodes it as color hue, ranging from red (low entropy, well-defined) via green to blue and violet (high entropy, ill-defined). In the example above, two helices of the 5S RNA are well-defined (red) and indeed predicted correctly; the left arm is not quite correct and disordered. For the figure above we had to rotate and mirror the structure plot, e.g.

$ rotate_ss.pl -a 180 -m 5S_rss.ps > 5S_rot.ps

You can manually add annotation to structure drawings using the RNAplot program (for information see the man page). Here's a somewhat complicated example:

$ RNAfold < 5S.seq > 5S.fold
$ RNAplot --pre "76 107 82 102 GREEN BFmark 44 49 0.8 0.8 0.8 Fomark \
  1 15 8 RED omark 80 cmark 80 -0.23 -1.2 (pos80) Label 90 95 BLUE Fomark" < 5S.fold
$ gv 5S_ss.ps

RNAplot is a very useful tool to color plots. The --pre option adds PostScript code to color distinct regions of your molecule. There are some predefined statements with different options for annotations, listed below:

i cmark               draws circle around base i
i j c gmark           draw basepair i,j with c counter examples in grey
i j lw rgb omark      stroke segment i...j with linewidth lw and color (rgb)
i j rgb Fomark        fill segment i...j with color (rgb)
i j k l rgb BFmark    fill block between pairs i,j and k,l with color (rgb)
i dx dy (text) Label  adds a text label with an offset dx and dy relative to base i

Predefined color options are BLACK, RED, GREEN, BLUE, WHITE, but you can also replace them with a standard RGB triple (e.g. 0 .5 .8 for light blue).

To see what exactly the alternative structures of our sequence are, we need to predict suboptimal structures.

SHAPE directed RNA folding

In order to further improve the quality of secondary structure predictions, mapping experiments like SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) can be used to experimentally determine the pairing status for each nucleotide. In addition to thermodynamics-based secondary structure predictions, RNAfold supports the incorporation of this additional experimental data as soft constraints.
If you want to use SHAPE data to guide the folding process, please make sure that your experimental data is present in a text file, where each line stores three white-space separated columns containing the position, the abbreviation and the normalized SHAPE reactivity for a certain nucleotide:

1 G 0.134
2 C 0.044
3 C 0.057
4 G 0.114
5 U 0.094
... ... ...
71 C 0.035
72 G 0.909
73 C 0.224
74 C 0.529
75 A 1.475

The second column, which holds the nucleotide abbreviation, is optional. If it is present, the data will be used to perform a cross check against the provided input sequence. Missing SHAPE reactivities for certain positions can be indicated by omitting the reactivity column or the whole line. Negative reactivities will be treated as missing. Once the SHAPE file is ready, it can be used to constrain folding:

$ RNAfold --shape=rna.shape --shapeMethod=D < rna.seq

### The Program RNApvmin

The program RNApvmin reads an RNA sequence from stdin and uses an iterative minimization process to calculate a perturbation vector that minimizes the discrepancies between predicted pairing probabilities and observed pairing probabilities (deduced from given SHAPE reactivities). The experimental SHAPE data has to be present in the file format described above. The application will write the calculated vector of perturbation energies to stdout, while the progress of the minimization process is written to stderr. The resulting perturbation vector can be interpreted directly and gives useful insights into the discrepancies between thermodynamic prediction and experimentally determined pairing status. In addition, the perturbation energies can be used to constrain folding with RNAfold:

$ RNApvmin rna.shape < rna.seq > vector.csv
$ RNAfold --shape=vector.csv --shapeMethod=W < rna.seq

The perturbation vector file uses the same file format as the SHAPE data file. Instead of SHAPE reactivities, the raw perturbation energies will be stored in the last column. Since the energy model is only adjusted when necessary, the calculated perturbation energies may be used for the interpretation of the secondary structure prediction: they indicate which positions require major energy model adjustments in order to yield a prediction result close to the experimental data. High perturbation energies for just a few nucleotides may indicate the occurrence of features which are not explicitly handled by the energy model, such as posttranscriptional modifications and intermolecular interactions.

### The Program RNAsubopt

RNAsubopt calculates all suboptimal secondary structures within a given energy range above the MFE structure. Be careful, the number of structures returned grows exponentially with both sequence length and energy range.

Suboptimal folding

• Generate all suboptimal structures within a certain energy range from the MFE, specified by the -e option:

  $ RNAsubopt -e 1 -s < test.seq
  CUACGGCGCGGCGCCCUUGGCGA -500 100
  ...........((((...)))). -5.00
  ....((((...))))........ -4.80
  (((.((((...))))..)))... -4.20
  ...((.((.((...)).)).)). -4.10

The text output shows an energy-sorted list (option -s) of all secondary structures within 1 kcal/mol of the MFE structure. Our sequence actually has a ground state structure (-5.00) and three structures within 1 kcal/mol range. MFE folding alone gives no indication that there are actually several plausible structures. Remember that RNAsubopt cannot automatically plot structures; you can use the tool RNAplot for that.
Note that you CANNOT simply pipe the output of RNAsubopt to RNAplot using

$ RNAsubopt < test.seq | RNAplot

You need to manually create a file for each structure you want to plot. Here, for example, we created a new file named suboptstructure.txt:

> suboptstructure-4.20
CUACGGCGCGGCGCCCUUGGCGA
(((.((((...))))..)))...

The fasta header is optional, but useful (without it the output file will be named rna.ps). The next two lines contain the sequence and the suboptimal structure you want to plot; in this case we plotted the structure with the folding energy of -4.20. Then plot it with

$ RNAplot < suboptstructure.txt

Note that the number of suboptimal structures grows exponentially with sequence length and therefore this approach is only tractable for sequences with less than 100 nt. To keep the number of suboptimal structures manageable, the option --noLP can be used, forcing RNAsubopt to produce only structures without isolated base pairs. While RNAsubopt produces all structures within an energy range, mfold produces only a few, hopefully representative, structures. Try folding the sequence on the mfold server at http://mfold.rna.albany.edu/?q=mfold.

Sometimes you want to get information about unusual properties of the Boltzmann ensemble (the sum of all RNA structures possible) for which no specialized program exists. For example, you may want to know the fraction of a bacterial mRNA in the Boltzmann ensemble where the Shine-Dalgarno (SD) sequence is unpaired. If the SD sequence is concealed by secondary structure, the translation efficiency is reduced. In such cases you can resort to drawing a representative sample of structures from the Boltzmann ensemble by using the option -p. Then you can simply count how many structures in the sample possess the feature you are looking for. This number divided by the size of your sample gives you the desired fraction. The following example calculates the fraction of structures in the ensemble that have bases 6 to 8 unpaired.

Sampling the Boltzmann Ensemble
1. Draw a sample of size 10,000 from the Boltzmann ensemble
2. Calculate the desired property by using a perl script

$ RNAsubopt -p 10000 < test.seq > tt
$ perl -nle '$h++ if substr($_,5,3) eq "..."; END {print $h/$.}' tt
0.391960803919608

A far better way to calculate this property is to use RNAfold -p to get the ensemble free energy, which is related to the partition function via $F = -RT\ln(Q)$, for the unconstrained case ($F_u$) and the constrained case ($F_c$), where the three bases are not allowed to form base pairs (use option -C), and evaluate $p_c = \exp((F_u - F_c)/RT)$ to get the desired probability. So let's do the calculation using RNAfold.

$ RNAfold -p
Input string (upper or lower case); @ to quit
....,....1....,....2....,....3....,....4....,....5....,....6....,....7....,....8
CUACGGCGCGGCGCCCUUGGCGA
length = 23
CUACGGCGCGGCGCCCUUGGCGA
...........((((...)))).
minimum free energy = -5.00 kcal/mol
....{,{{...||||...)}}}.
free energy of ensemble = -5.72 kcal/mol
....................... {  0.00 d=4.66}
frequency of mfe structure in ensemble 0.311796; ensemble diversity 6.36

Now we have calculated the ensemble free energy over all structures ($F_u$); in the next step we have to calculate it for the constrained case ($F_c$). The following notation has to be used for defining the constraint:

1. | : paired with another base
2. . : no constraint at all
3. x : base must not pair
4. < : base i is paired with a base j<i
5. > : base i is paired with a base j>i
6. matching brackets ( ): base i pairs base j

So our constraint should look like this:

.....xxx...............

Next, call the application with the following command and provide the sequence and constraint we just created.

$ RNAfold -p -C

The output should look like this:

length = 23
CUACGGCGCGGCGCCCUUGGCGA
...........((((...)))).
minimum free energy = -5.00 kcal/mol
...........((((...)))).
free energy of ensemble = -5.14 kcal/mol
...........((((...)))). { -5.00 d=0.42}
frequency of mfe structure in ensemble 0.792925; ensemble diversity 0.79

Afterwards, evaluate the desired probability according to the formula given before, e.g. with a simple perl script:

$ perl -e 'print exp(-(5.72-5.14)/(0.00198*310.15))."\n"'

You can see that there is a slight difference between the RNAsubopt run with 10,000 samples and the RNAfold run including all structures.

## RNA folding kinetics
RNA folding kinetics describes the dynamical process of how an RNA molecule approaches its unique folded, biologically active conformation (often referred to as the native state) starting from an initial ensemble of disordered conformations, e.g. the unfolded open chain. The key to resolving the dynamical behavior of a folding RNA chain lies in understanding the ways in which the molecule explores its astronomically large free energy landscape, a rugged and complex hyper-surface established by all the feasible base pairing patterns an RNA sequence can form. The challenge is to understand how the interplay of formation and break-up of base pairing interactions along the RNA chain can lead to an efficient search in the energy landscape which reaches the native state of the molecule on a biologically meaningful time scale.

### RNA2Dfold
RNA2Dfold is a tool for computing the MFE structure, partition function and representative sample structures of $\kappa,\lambda$ neighborhoods, and it projects the high-dimensional energy landscape of an RNA into two dimensions. Therefore a sequence and two user-defined reference structures are expected by the program. For each of the resulting distance classes, the MFE representative, the Boltzmann probabilities and the Gibbs free energy are computed. Additionally, representative suboptimal secondary structures from each partition can be calculated.

$ RNA2Dfold -p < 2dfold.inp > 2dfold.out

The output file 2dfold.out should look like below; check it out using less.

CGUCAGCUGGGAUGCCAGCCUGCCCCGAAAGGGGCUUGGCGUUUUGGUUGUUGAUUCAACGAUCAC
((((((((((....)))))..(((((....))))).)))))...(((((((((...))))))))). (-30.40)
((((((((((....)))))..(((((....))))).)))))...(((((((((...))))))))). (-30.40)
.................................................................. ( 0.00)
free energy of ensemble = -31.15 kcal/mol
k  l  P(neighborhood)  P(MFE in neighborhood)  P(MFE in ensemble)  MFE     E_gibbs  MFE-structure
0  24  0.29435909      1.00000000              0.29435892          -30.40  -30.40   ((((((((((....)))))..(((((....))))).)))))...(((((((((...))))))))).
1  23  0.17076902      0.47069889              0.08038083          -29.60  -30.06   ((((((((((....)))))..(((((....))))).)))))....((((((((...))))))))..
2  22  0.03575448      0.37731068              0.01349056          -28.50  -29.10   ((((.(((((....)))))..(((((....)))))..))))....((((((((...))))))))..
2  24  0.00531223      0.42621709              0.00226416          -27.40  -27.93   ((((((((((....))))...(((((....)))))))))))...(((((((((...))))))))).
3  21  0.00398349      0.29701636              0.00118316          -27.00  -27.75   .(((.(((((....)))))..(((((....)))))..))).....((((((((...))))))))..
3  23  0.00233909      0.26432372              0.00061828          -26.60  -27.42   ((((((((((....))))...(((((....)))))))))))....((((((((...))))))))..
[...]

For visualizing the output, the ViennaRNA Package includes two scripts, 2Dlandscape_pf.gri and 2Dlandscape_mfe.gri, located in VRP/share/ViennaRNA/. gri (a language for scientific graphics programming) is needed to create a colored PostScript plot. We use the partition function script to show the free energies of the distance classes (graph below, left):

$ gri ../Progs/VRP/share/ViennaRNA/2Dlandscape_pf.gri 2dfold.out

Compare the output file with the colored plot and determine the MFE minima with corresponding distance classes. For easier comparison, the output file of RNA2Dfold can be sorted by a simple sort command. For further information regarding sort use the --help option.

$ sort -k6 -n 2dfold.out > sort.out

Now we choose the structure with the lowest energy besides our start structure, replace the open chain structure from our old input with that structure, and repeat the steps above with our new values:
• run RNA2Dfold
• plot it using 2Dlandscape_pf.gri

The new projection (right graph) shows the two major local minima, which are separated by 39 bp (red dots in the figure below); both are likely to be populated with high probability. The landscape gives an estimate of the energy barrier separating the two minima (about -20 kcal/mol). The red dots mark the distance from the open chain to the MFE structure and the distance from the 2nd best structure to the MFE, respectively. Note that the red dots were manually added to the image afterwards, so don't panic if you don't see them in your gri output.

### barriers & treekin
The following assumes you have the barriers and treekin programs installed. If not, the current release can be found at http://www.tbi.univie.ac.at/RNA/Barriers/. Installation proceeds as shown for the ViennaRNA Package in section 2.4. One problem that often occurs during treekin installation is the dependency on the blas and lapack packages, which is not carefully checked. For further information regarding the barriers and treekin programs also see the website.

A short recap on how to install/compile a program
1. Get the barriers source from http://www.tbi.univie.ac.at/RNA/Barriers/
2. extract the archive and go to the directory
$ tar -xzf Barriers-1.5.2.tar.gz
$ cd Barriers-1.5.2
3. use the --prefix option to install in your Progs directory
$ ./configure --prefix=$HOME/Tutorial/Progs/barriers-1.5.2
4. make install
$ make
$ make install

Now barriers is ready to use. Apply the same steps to install treekin. Note: Copy the barriers and treekin binaries to your bin folder or add the path to your PATH variable.

Calculate the Barrier Tree
$ echo UCCACGGCUGUUAGUGGAUAACGGC | RNAsubopt --noLP -s -e 10 > barseq.sub
$ barriers -G RNA-noLP --bsize --rates < barseq.sub > barseq.bar

You can restrict the number of local minima using the barriers command-line option --max followed by a number. The option -G RNA-noLP instructs barriers that the input consists of RNA secondary structures without isolated base pairs. --bsize adds the size of the gradient basins and --rates tells barriers to compute rates between macro states/basins for use with treekin. Another useful option is --minh to print only minima with a barrier > dE. Look at the output file with less -S barseq.bar. Use the arrow keys to navigate.

UCCACGGCUGUUAGUGGAUAACGGC
1 (((((........)))))....... -6.90 0 10.00 115 0 -7.354207 23 -7.012023
2 ......(((((((.....))))))) -6.80 1 9.30 32 58 -6.828221 38 -6.828218
3 (((...(((...))))))....... -0.80 1 0.90 1 10 -0.800000 9 -1.075516
4 ....((..((((....)))).)).. -0.80 1 2.70 5 37 -0.973593 11 -0.996226
5 ......................... 0.00 1 0.40 1 14 -0.000000 26 -0.612908
6 ......(((....((.....))))) 0.60 2 0.40 1 22 0.600000 3 0.573278
7 ......((((((....)))...))) 1.00 1 1.50 1 95 1.000000 2 0.948187
8 .((....((......)).....)). 1.40 1 0.30 1 30 1.400000 2 1.228342

The first row holds the input sequence; the successive rows list the local minima, ascending in energy. The meaning of the columns is as follows:
1. label (number) of the local minimum (1=MFE)
2. structure of the minimum
3. free energy of the minimum
4. label of the deeper local minimum the current minimum merges with (note that the MFE has no deeper local minimum to merge with)
5. height of the energy barrier to the local minimum it merges with
6. number of structures in the basin we merge with
7. number of the basin which we merge to
8. free energy of the basin
9. number of structures in this basin using gradient walk
10. gradient basin (consisting of all structures where a gradient walk ends in the minimum)

barriers produced two additional files: the PostScript file tree.eps, which represents the basic information of the barseq.bar file visually (look at the file, e.g. gv tree.eps), and a text file rates.out, which holds the matrix of transition probabilities between the local minima.

Simulating the Folding Kinetics
The program treekin is used to simulate the evolution over time of the population densities of local minima, starting from an initial population density distribution $p_0$ (given on the command line) and the transition rate matrix in the file rates.out.

$ treekin -m I --p0 5=1 < barseq.bar | xmgrace -log x -nxy -

The simulation starts with all the population density in the open chain (local minimum 5, see barseq.bar). Over time the population density of this state decays (yellow curve) and other local minima get populated. The simulation ends with the population densities of the thermodynamic equilibrium, in which the MFE (black curve) and local minimum 2 (red curve) are the only ones populated. (Look at the dot plot of the sequence created with RNAsubopt and RNAfold!)

## Sequence Design

### The Program RNAinverse
RNAinverse searches for sequences folding into a predefined structure, thereby inverting the folding algorithm. Input consists of the target structure (in dot-bracket notation) and a starting sequence, which is optional. Lower case characters in the start sequence indicate fixed positions, i.e. they can be used to add sequence constraints. 'N's in the starting sequence will be replaced by a random nucleotide. For each search, the best sequence found and its Hamming distance to the start sequence are printed to stdout. If the search was unsuccessful, a structure distance to the target is appended. By default the program stops as soon as it finds a sequence that has the target as MFE structure. The option -Fp switches RNAinverse to the partition function mode, where the probability of the target structure $\exp(-E(S)/RT)/Q$ is maximized. This tends to produce sequences with a more well-defined structure. This probability is written in parentheses after the found sequence and Hamming distance. With the option -R you can specify how often the search should be repeated.

Sequence Design
1. Prepare an input file inv.in containing the target structure and sequence constraints:
(((.(((....))).)))
NNNgNNNNNNNNNNaNNN
2. Design sequences using RNAinverse

$ RNAinverse < inv.in
GGUgUUGGAUCCGAaACC   5

$ RNAinverse -R5 -Fp < inv.in
GGUgUGAACCCUCGaACC   5
GGCgCCCUUUUGGGaGCC  12  (0.967418)
CUCgAUCUCACGAUaGGG   6
GGCgCCCGAAAGGGaGCC  13  (0.967548)
GUUgAGCCCAUGCUaAGC   6
GGCgCCCUUAUGGGaGCC  10  (0.967418)
CGGgUGUUGUGACAaCCG   5
GCGgGUCGAAAGGCaCGC  12  (0.925482)
GCCgUAUCCGGGUGaGGC   6
GGCgCCCUUUUGGGaGCC  13  (0.967418)

The output consists of the calculated sequence and the number of mutations needed to get the MFE structure from the start sequence (start sequence not shown). Additionally, with partition function folding (-Fp) set, a second output line gives a further refinement, so that the ensemble prefers the MFE and folds into the given structure with a distinct probability, shown in parentheses.

Another useful program for inverse folding is RNA Designer, see http://www.rnasoft.ca/. RNA Designer takes a secondary structure description as input and returns an RNA strand that is likely to fold into the given secondary structure. The sequence design application of the ViennaRNA Design Webservices, see http://nibiru.tbi.univie.ac.at/rnadesign/index.html, uses a different approach, allowing for more than one secondary structure as input. For more detail read the online documentation and the next section of this tutorial.

### switch.pl
The switch.pl script can be used to design bi-stable structures, i.e. structures with two almost equally good foldings. For two given structures there are always a lot of sequences compatible with both structures. If both structures are reasonably stable, you can find sequences where both target structures have almost equal energy and all other structures have much higher energies. Combined with RNAsubopt, barriers and treekin, this is a very useful tool for designing RNA switches. The input requires two structures in dot-bracket notation, and additionally you can add a sequence. It is also possible to calculate the switching function at two different temperatures with the options -T and -T2.

Designing a Switch
Now we try to create an RNA switch using switch.pl. First we create our input file, then invoke the program using ten optimization runs (-n 10) and do not allow lonely pairs. Write the output to switch.out.

switch.in:
((((((((......))))))))....((((((((.......))))))))
((((((((((((((((((........)))))))))))))))))).....

$ switch.pl -n 10 --noLP < switch.in > switch.out

switch.out should look similar to this; the first block represents our bi-stable structures in random order, the second block shows the resulting sequences ordered by their score.

$ less switch.out
GGGUGGACGUUUCGGUCCAUCCUUACGGACUGGGGCGUUUACCUAGUCC 0.9656
CAUUUGGCUUGUGUGUCGAAUGGCCCCGGUACGUAGGCUAAAUGUACCG 1.2319
GGGGGGUGCGUUCACACCCCUCAUUUGGUGUGGAUGUGCUUUCUACACU 1.1554
[...]
the resulting sequences are:
CAUUUGGCUUGUGUGUCGAAUGGCCCCGGUACGUAGGCUAAAUGUACCG 1.2319
GGGGGGUGCGUUCACACCCCUCAUUUGGUGUGGAUGUGCUUUCUACACU 1.1554
CGGGUUGUAACUGGAUAGCCUGGAAACUGUUUGGUUGUAAUCCGAACAG 1.0956
[...]

Given all 10 suggestions in our switch.out, we select the one with the best score with some command line tools, use it as an RNAsubopt input file and build up the barrier tree:

$ tail -10 switch.out | awk '{print($1)}' | head -n 1 > subopt.in
$ RNAsubopt --noLP -s -e 25 < subopt.in > subopt.out
$ barriers -G RNA-noLP --bsize --rates --minh 2 --max 30 < subopt.out > barriers.out

tail -10 cuts the last 10 lines from the switch.out file and pipes them into an awk script.
The function print($1) echoes only the first column, and this is piped into the head program, where the first line, which equals the best scored sequence, is taken and written into subopt.in. Then RNAsubopt is called to process our sequence and write the output to another file, which is the input for the barriers calculation.

Below you find an example of the barrier tree calculation above done with the right settings (connected root) on the left side, and with a wrong RNAsubopt -e value on the right. Keep in mind that switch.pl performs a stochastic search and the output sequences are different every time, because there are a lot of sequences which fit the structures and switch.pl calculates a new one each run. Simply try it to make sure.

left: barrier tree as it should look like, all branches connected to the main root
right: disconnected tree due to a too low energy range (-e) parameter set in RNAsubopt

Be careful to set the range -e high enough, otherwise we get a problem when calculating the kinetics using treekin. Every branch should be somehow connected to the main root of the tree. Try -e 20 and -e 30 to see the difference in the trees and choose the optimal value. By using --max 30 we shorten our tree to focus only on the lowest minima. We then select a branch preferably outside of the two main branches, here branch 30 (may differ from your own calculation). Look at the barrier tree to find the best branch to start from and replace 30 by the branch you would choose. Now use treekin to plot the concentration kinetics and think about the graph you just created.

$ treekin -m I --p0 30=1 < barriers.out > treekin.out
$ xmgrace -log x -nxy treekin.out

The graph could look like the one below; remember, every time you use switch.pl it can give you different sequences, so the output varies too. Here is the one from the example.

## RNA-RNA Interactions
A common problem is the prediction of binding sites between two RNAs, as in the case of miRNA-mRNA interactions. The following tools of the ViennaRNA Package can be used to calculate base pairing probabilities.

### The Program RNAcofold
RNAcofold works much like RNAfold but uses two RNA sequences as input, which are then allowed to form a dimer structure. In the input, the two RNA sequences should be concatenated using the '&' character as separator. As in RNAfold, the -p option can be used to compute the partition function and base pairing probabilities. Since dimer formation is concentration dependent, RNAcofold can be used to compute equilibrium concentrations for all five monomer and (homo/hetero)-dimer species, given input concentrations for the monomers (see the man page for details).

Two Sequences one Structure
1. Prepare a sequence file (t.seq) for input that looks like this:
>t
GCGCUUCGCCGCGCGCC&GCGCUUCGCCGCGCGCA
2. Compute the MFE and the ensemble properties
3. Look at the generated PostScript files t_ss.ps and t_dp.ps

$ RNAcofold -p < t.seq
>t
GCGCUUCGCCGCGCGCC&GCGCUUCGCCGCGCGCA
((((..((..((((...&))))..))..))))... (-17.70)
((((..{(,.((((,,.&))))..}),.)))),,. [-18.26]
frequency of mfe structure in ensemble 0.401754 , delta G binding= -3.95

Secondary Structure Plot and Dot Plot
In the dot plot a cross marks the chain break between the two concatenated sequences.

### Concentration Dependency
Cofolding is an intermolecular process; therefore, whether duplex formation will actually occur is concentration dependent. Trivially, if one of the molecules is not present, no dimers are going to be formed.
The partition functions of the molecules give us the equilibrium constants:

$$K_{AB} = \frac{[AB]}{[A][B]} = \frac{Z_{AB}}{Z_A Z_B}$$

With these and mass conservation, the equilibrium concentrations of homodimers, heterodimers and monomers can be computed as a function of the start concentrations of the two molecules. This is most easily done by creating a file with the initial concentrations of molecules $A$ and $B$ in two columns:

[a_1] (mol/l)   [b_1] (mol/l)
[a_2] (mol/l)   [b_2] (mol/l)
...
[a_n] (mol/l)   [b_n] (mol/l)

Concentration Dependency
1. Prepare a concentration file for input with this little perl script:

$ perl -e '$c=1e-07; do {print "$c\t$c\n"; $c*=1.71;} while $c<0.2' > concfile

This script creates a file with values from 1e-07 to just below 0.2, with 1.71-fold steps in between. For convenience, the concentration of molecule A is the same as the concentration of molecule B in each row. This will facilitate visualization of the results.

2. Compute the MFE, the ensemble properties and the concentration dependency of hybridization:

$ RNAcofold -f concfile < t.seq > cofold.out

3. Look at the generated output with

$ less cofold.out
[...]
Free Energies:
AB          AA          BB          A          B
-18.261023  -17.562553  -18.274376  -7.017902  -7.290237

Initial concentrations       relative Equilibrium concentrations
A       B        AB       AA       BB       A        B
1e-07   1e-07    0.00003  0.00002  0.00002  0.49994  0.49993
[...]

The five different free energies are printed out first, followed by a list of all the equilibrium concentrations, where the first two columns denote the initial (absolute) concentrations of molecules $A$ and $B$, respectively. The next five columns denote the equilibrium concentrations of dimers and monomers, relative to the total particle number. (Hence, the concentrations don't add up to one, except in the case where no dimers are built; if you want to know the fraction of particles in a dimer, you have to take the relative dimer concentrations times 2.) Since the relative concentrations of the species depend on two independent values, the initial concentration of A as well as the initial concentration of B, it is not trivial to visualize the results. For this reason we used the same concentration for A and for B. Another possibility would be to keep the initial concentration of one molecule constant. As an example we show the following plot for t.seq. Now we use some command line tools to render our plot. We use tail -n +11 to show all lines starting with line 11 (lines 1-10 are cut) and pipe it into an awk command, which prints every column but the first from our input. This is then piped to xmgrace. With -log x -nxy - we tell it to plot the x axis in logarithmic scale and to read the data file in X Y1 Y2 ... format.

$ tail -n +11 cofold.out | awk '{print $2, $3, $4, $5, $6, $7}' | xmgrace -log x -nxy -

Concentration Dependency plot ($\Delta G_{\text{binding}} = -5.01$ kcal/mol, sequences: GCGCUUCGCCGCGCGCG&GCGCUUCGCCGCGCGCG)

Since the two sequences are almost identical, the monomer and homo-dimer concentrations behave very similarly. In this example, at a concentration of about 1 mmol/l, 50% of the molecule is still in monomer form.

### Finding potential binding sites with RNAduplex
If the sequences are very long (many kb), RNAcofold is too slow to be useful.
The RNAduplex program is a fast alternative that works by predicting only intermolecular base pairs. It's almost as fast as simple sequence alignment, but much more accurate than a BLAST search. The example below searches the 3' UTR of an mRNA for a miRNA binding site.

Binding site prediction with RNAduplex
The file duplex.seq contains the 3' UTR of NM_024615 and the microRNA mir-145.

$ RNAduplex < duplex.seq
>NM_024615
>hsa-miR-145
.(((((.(((...((((((((((.&)))))))))))))))))).  34,57 : 1,19 (-21.90)

The most favorable binding has an interaction energy of -21.90 kcal/mol and pairs up positions 34-57 of the UTR with positions 1-19 of the miRNA. RNAduplex can also produce alternative binding sites; e.g. running RNAduplex -e 10 would list all binding sites within 10 kcal/mol of the best one.

Since RNAduplex forms only intermolecular pairs, it neglects the competition between intramolecular folding and hybridization. Thus, it is recommended to use RNAduplex as a pre-filter and analyse good RNAduplex hits additionally with RNAcofold or RNAup. Using the example above, running RNAup will yield:

$ RNAup -b < duplex.seq
>NM_024615
>hsa-miR-145
(((((((&)))))))  50,56 : 1,7 (-8.41 = -9.50 + 0.69 + 0.40)
GCUGGAU&GUCCAGU
RNAup output in file: hsa-miR-145_NM_024615_w25_u1.out

The free energy of the duplex is -9.50 kcal/mol and shows a discrepancy to the structure and energy value computed by RNAduplex (differences may arise from the fact that RNAup computes partition functions rather than optimal structures). However, the total free energy of binding is less favorable (-8.41 kcal/mol), since it includes the energetic penalty for opening the binding site on the mRNA (0.69 kcal/mol) and the miRNA (0.40 kcal/mol). The -b option includes the probability of unpaired regions in both RNAs. You can also run RNAcofold on the example to see the complete structure after hybridization (neither RNAduplex nor RNAup produce structure drawings). Note, however, that the input format for RNAcofold is different. An input file suitable for RNAcofold has to be created from the duplex.seq file first (use any text editor).

As a more difficult example, let's look at the interaction of the bacterial small RNA RybB and its target mRNA ompN. First we'll try predicting the binding site using RNAduplex:

$ RNAduplex < RybB.seq
>RybB
>ompN
.((((..((((((.(((....((((((((..(((((.((..((.((....((((..(((((((((((..((((((&.))))))..))))))).)))).....))))....)).)).)).))).))..))))........))))..))).)))))).)))). 5,79 : 80,164 (-34.60)

Note that the predicted structure spans almost the full length of the RybB small RNA. Compare the predicted interaction to the structures predicted for RybB and ompN alone, and ask yourself whether the predicted interaction is indeed plausible. Below, the structure of ompN is shown on the left and RybB on the right side; the respective binding regions predicted by RNAduplex are marked in red.

GCCAC-----TGCTTTTCTTTGATGTCCCCATTTT-GTGGA-------GC-CCATCAACCCCGCCATTTCGGTT---CAAG-GTTGGTGGGTTTTTT
||| |||| |||||| ||| ||||| |||| || ||| || || || |||| |||| || ||| |||||| -40.30
AGGTCAAACAACGGC-AGAAACAATATT--TAAAGTCGCCGCACACGACGCGGTCGTCGGT-CGTCTCGGCCCTACTGTTCACGGTTATGAAAAGAAACC-3'

Compare the RNAduplex prediction with the interaction predicted by RNAcofold, RNAup and the handcrafted prediction you see above.

## Consensus Structure Prediction
Sequence co-variations are a direct consequence of RNA base pairing rules and can be deduced from alignments.
RNA helices normally contain only 6 out of the 16 possible combinations: the Watson-Crick pairs GC, CG, AU, UA, and the somewhat weaker wobble pairs GU and UG. Mutations in helical regions therefore have to be correlated. In particular, we often find "compensatory mutations" where a mutation on one side of the helix is compensated by a second mutation on the other side, e.g. a C·G pair changes into a U·A pair. Mutations where only one pairing partner changes (such as C·G to U·G) are termed "consistent mutations".

### The Program RNAalifold
RNAalifold generalizes the folding algorithm for sequence alignments, treating the entire alignment as a single "generalized sequence". To assign an energy to a structure on such a generalized sequence, the energy is simply averaged over all sequences in the alignment. This average energy is augmented by a covariance term that assigns a bonus or penalty to every possible base pair $(i,j)$ based on the sequence variation in columns $i$ and $j$ of the alignment. Compensatory mutations are a strong indication of structural conservation, while consistent mutations provide a weaker signal. The covariance term used by RNAalifold therefore assigns a bonus of 1 kcal/mol to each consistent and 2 kcal/mol to each compensatory mutation. Sequences that cannot form a standard base pair incur a penalty of $-1$ kcal/mol. Thus, for every possible consensus pair between two columns $i$ and $j$ of the alignment, a covariance score $C_{ij}$ is computed by counting the fraction of sequence pairs exhibiting consistent and compensatory mutations, as well as the fraction of sequences that are inconsistent with the pair. The weight of the covariance term relative to the normal energy function, as well as the penalty for inconsistent mutations, can be changed via command line parameters.

Apart from the covariance term, the folding algorithm in RNAalifold is essentially the same as for single sequence folding. In particular, folding an alignment containing just one sequence will give the same result as single sequence folding using RNAfold. For $N$ sequences of length $n$ the required CPU time scales as $\mathcal{O}(N\cdot n^2 + n^3)$, while memory requirements grow as the square of the sequence length. Thus RNAalifold is in general faster than folding each sequence individually. The main advantage, however, is that the accuracy of consensus structure predictions is generally much higher than for single sequence folding, where typically only between 40% and 70% of the base pairs are predicted correctly. Apart from the prediction of MFE structures, RNAalifold also implements an algorithm to compute the partition function over all possible (consensus) structures and the thermodynamic equilibrium probability for each possible pair. These base pairing probabilities are useful to see structural alternatives, and to distinguish well-defined regions, where the predicted structure is most likely correct, from ambiguous regions.

As a first example we'll produce a consensus structure prediction for the following four tRNA sequences.
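(A worked micro-example of the counting described above, before we turn to the tRNAs; the tally is our own illustration, not output of the program, and the program's exact weighting may differ. Suppose columns $i$ and $j$ contain the pairs GC, GC, GU and AU in four sequences. Among the $\binom{4}{2}=6$ sequence pairs, the GC/GC pair is identical and contributes nothing, the two GC/GU pairs and the GU/AU pair each differ in one pairing partner (consistent, weight 1), and the two GC/AU pairs differ in both partners (compensatory, weight 2). All four sequences can form a valid pair, so no inconsistency penalty applies, and the covariance contribution averages to

$$C_{ij} \;\propto\; \frac{0 + 1 + 1 + 2 + 2 + 1}{6} \;=\; \frac{7}{6}.)$$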
$ cat four.seq
>M10740 Yeast-PHE
GCGGAUUUAGCUCAGUUGGGAGAGCGCCAGACUGAAGAUUUGGAGGUCCUGUGUUCGAUCCACAGAAUUCGCA
>K00349 Drosophila-PHE
GCCGAAAUAGCUCAGUUGGGAGAGCGUUAGACUGAAGAUCUAAAGGUCCCCGGUUCAAUCCCGGGUUUCGGCA
>K00283 Halobacterium volcanii Lys-tRNA-1
GGGCCGGUAGCUCAUUUAGGCAGAGCGUCUGACUCUUAAUCAGACGGUCGCGUGUUCGAAUCGCGUCCGGCCCA
>AF346993
CAGAGUGUAGCUUAACACAAAGCACCCAACUUACACUUAGGAGAUUUCAACUUAACUUGACCGCUCUGA

RNAalifold uses aligned sequences as input. Thus, our first step will be to align the sequences. We use clustalw2 in this example, since it's one of the most widely used alignment programs and has been shown to work well on structural RNAs. Other alignment programs can be used (including programs that attempt to do structural alignment of RNAs), but the resulting multiple sequence alignment must be in Clustal format. Get clustalw2 and install it as you did with the other packages: http://www.clustal.org/clustal2

Consensus Structure from related Sequences
1. Prepare a sequence file (use the file four.seq and copy it to your working directory)
2. Align the sequences
3. Compute the consensus structure from the alignment
4. Inspect the output files alifold.out, alirna.ps, alidot.ps
5. For comparison fold the sequences individually using RNAfold

$ clustalw2 four.seq > four.out

clustalw2 creates two more output files, four.aln and four.dnd. For RNAalifold you need the .aln file.

$ RNAalifold -p four.aln
$ RNAfold -p < four.seq

RNAalifold output:

__GCCGAUGUAGCUCAGUUGGG_AGAGCGCCAGACUGAAAAUCAGAAGGUCCCGUGUUCAAUCCACGGAUCCGGCA__
..(((((((..((((.........)))).(((((.......))))).....(((((.......))))))))))))...
minimum free energy = -15.12 kcal/mol (-13.70 + -1.43)
..(((((({..((((.........)))).(((((.......))))).....(((((.......)))))}))))))...
free energy of ensemble = -15.75 kcal/mol
frequency of mfe structure in ensemble 0.361603
..(((((((..((((.........)))).(((((.......))))).....(((((.......))))))))))))... -15.20 {-13.70 + -1.50}

RNAfold output:

>M10740 Yeast-PHE
GCGGAUUUAGCUCAGUUGGGAGAGCGCCAGACUGAAGAUUUGGAGGUCCUGUGUUCGAUCCACAGAAUUCGCA
((((((((........((((.((((((..((((...........))))..))))))..))))..)))))))). (-21.60)
((((((({...,,.{,((((.((((((..((((...........))))..))))))..))))),)))))))). [-23.20]
((((((((.........(((.((((((..((((...........))))..))))))..)))...)))))))). {-20.00 d=9.63}
frequency of mfe structure in ensemble 0.0744065; ensemble diversity 15.35
>K00349 Drosophila-PHE
[...]

The output contains a consensus sequence and the consensus structure in dot-bracket notation. The consensus structure has an energy of $-15.12$ kcal/mol, which in turn consists of the average free energy of the structure, $-13.70$ kcal/mol, and the covariance term, $-1.43$ kcal/mol. The strongly negative covariance term shows that there must be a fair number of consistent and compensatory mutations; in contrast to the average free energy, however, it is not meaningful in the biophysical sense. Compare the predicted consensus structure with the structures predicted for the individual sequences using RNAfold. How often is the correct "clover-leaf" shape predicted? For better visualization, a structure annotated alignment or a color annotated structure drawing can be generated by using the --aln and --color options of RNAalifold.
$ RNAalifold --color --aln four.aln
$ gv aln.ps &
$ gv alirna.ps &

RNAalifold Output Files

4 sequence; length of alignment 78
alifold output
 6 72 0 99.8% 0.007 GC:2 GU:1 AU:1
33 43 0 98.9% 0.033 GC:2 GU:1 AU:1
31 45 0 99.0% 0.030 CG:3 UA:1
15 25 0 98.9% 0.045 CG:3 UA:1
 5 73 1 99.7% 0.008 CG:2 GC:1
13 27 0 99.1% 0.042 CG:4
14 26 0 99.1% 0.042 UA:4
 4 74 1 99.5% 0.015 CG:3
[...]

The last output file produced by RNAalifold -p, named alifold.out, is a plain text file with detailed information on all plausible base pairs, sorted by the likelihood of the pair. In the example above we see that the pair $(6,72)$ has no inconsistent sequences, is predicted almost with probability 1, and occurs as a GC pair in two sequences, a GU pair in one, and an AU pair in another.

RNAalifold automatically produces a drawing of the consensus structure in PostScript format and writes it to the file "alirna.ps". In the structure graph, consistent and compensatory mutations are marked by a circle around the variable base(s), i.e. pairs where one pairing partner is encircled exhibit consistent mutations, whereas pairs supported by compensatory mutations have both bases marked. Pairs that cannot be formed by some of the sequences are shown gray instead of black. In the example given, many pairs show such inconsistencies. This is because one of the sequences (AF346993) is not aligned well by clustalw. Note that subsequent calls to RNAalifold will overwrite any existing output files alirna.ps (alidot.ps, alifold.out) in the current directory. Be sure to rename any files you want to keep.

Structure predictions for the individual sequences
The consensus structure computed by RNAalifold will contain only pairs that can be formed by most of the sequences. The structures of the individual sequences will typically have additional base pairs that are not part of the consensus structure. Moreover, ncRNAs may exhibit a highly conserved core structure while other regions are more variable. It may therefore be desirable to produce structure predictions for one particular sequence, while still using covariance information from other sequences. This can be accomplished by first computing the consensus structure for all sequences using RNAalifold, then folding individual sequences using RNAfold -C with the consensus structure as a constraint. In constraint folding mode, RNAfold -C allows only base pairs to form which are compatible with the constraint structure. The resulting structure typically contains most of the constraint (the consensus structure) plus some additional pairs that are specific for this sequence.

Refolding Individual Sequences
The refold.pl script (find it in the Progs folder) removes gaps and maps the consensus structure to each individual sequence.

$ RNAalifold RNaseP.aln > RNaseP.alifold
$ gv alirna.ps
$ refold.pl RNaseP.aln RNaseP.alifold | head -3 > RNaseP.cfold
$ RNAfold -C --noLP < RNaseP.cfold > RNaseP.refold
$ gv E-coli_ss.ps

If you compare the refolded structure (E-coli_ss.ps) with the structure you get by simply folding the E. coli sequence in the RNaseP.seq file (RNAfold --noLP), you find a clear rearrangement. In cases where constrained folding results in a structure that is very different from the consensus, or if the energy from constrained folding is much worse than from unconstrained folding, this may indicate that the sequence in question does not really share a common structure with the rest of the alignment or is misaligned.
One should then either remove or re-align that sequence and recompute the consensus structure. Note that since RNase P forms sizable pseudo-knots, a perfect prediction is impossible in this case.

## Structural Alignments

### Manually correcting Alignments
As the tRNA example above demonstrates, sequence alignments are often unsuitable as a basis for determining consensus structures. As a last resort, one may always try manually correcting an alignment. Sequence editors that are structure-aware may help in this task. In particular the SARSE editor, http://sarse.kvl.dk/, and the ralee-mode for emacs, http://personalpages.manchester.ac.uk/staff/sam.griffiths-jones/software/ralee/, are useful. After downloading the ralee files, extract them and put them in a folder called ~/Tutorial/Progs/ralee. Now read the 00README file and follow the instructions. If you don't find a ".emacs" file in your home directory, execute the following command to copy it from the Data directory:

$ cp Data/dot.emacs ~/

Next, try correcting the ClustalW generated alignment (four.aln) from the example above. For this we first have to convert it to the Stockholm format. Fortunately the formats are similar. Make a copy of the file, add the correct header line and the consensus structure from RNAalifold:

$ cp four.aln four.stk
$ emacs four.stk
.....
$ cat four.stk

The final alignment should look like:

# STOCKHOLM 1.0
K00349       --GCCGAAAUAGCUCAGUUGGG-AGAGCGUUAGACUGAAGAUCUAAAGGUCCCCGGUUCAAUCCCGGGUUUCGGCA--
K00283       GGGCCG--GUAGCUCAUUUAGGCAGAGCGUCUGACUCUUAAUCAGACGGUCGCGUGUUCGAAUC--GCGUCCGGCCCA
M10740       --GCGGAUUUAGCUCAGUUGGG-AGAGCGCCAGACUGAAGAUUUGGAGGUCCUGUGUUCGAUCCACAGAAUUCGCA--
AF346993     --CAGAGUGUAGCUUAAC---ACAAAGCACCCAACUUACACUUAGGAGAUUUCAACUUAACUUGACCGCUCUGA----
#=GC SS_cons ..(((((((..((((.........)))).(((((.......))))).....(((((.......))))))))))))...
//

Now use the functions under the edit menu to improve the alignment; the coloring by structure should help to highlight misaligned positions.

### Automatic structural alignments
Next, we'll compute alignments using two structural alignment programs: LocARNA and T-Coffee. LocARNA is an implementation of the Sankoff algorithm for simultaneous folding and alignment (i.e. it will generate both alignment and consensus structure). T-Coffee uses a progressive alignment algorithm. Download LocARNA from http://www.bioinf.uni-freiburg.de/Software/LocARNA/, extract and install it in your Progs folder, and optionally add it to your PATH variable or copy it into the corresponding directory. Both programs can read the fasta file four.seq.

$ mlocarna --alifold-consensus-dp four.seq
[...]
M10740     GCGGAUUUAGCUCAGUUGGG-AGAGCGCCAGACUGAAGAUUUGGAGGUCCUGUGUUCGAUCCACAGAAUUCGCA
K00349     GCCGAAAUAGCUCAGUUGGG-AGAGCGUUAGACUGAAGAUCUAAAGGUCCCCGGUUCAAUCCCGGGUUUCGGCA
K00283     GGGCCGGUAGCUCAUUUAGGCAGAGCGUCUGACUCUUAAUCAGACGGUCGCGUGUUCGAAUCGCGUCCGGCCCA
AF346993   CAGAGUGUAGCUUAAC---ACAAAGCACCCAACUUACACUUAGGAGAUU-UCAACUUAA-CUUGACCGCUCUGA
alifold    (((((((..((((.........)))).(((((.......))))).....(((((.......)))))))))))). (-52.53 = -21.58 + -30.95)

Install T-Coffee
Get T-Coffee from the github page https://github.com/cbcrg/tcoffee. Detailed download and installation instructions can be found in the README.md there.
Go to the downloads directory and build the program by typing:

$ cd Tutorial/downloads
$ git clone git@github.com:cbcrg/tcoffee.git tcoffee
$ cd tcoffee/compile/
$ make t_coffee
$ cp t_coffee ~/Tutorial/Progs/

Afterwards align four.seq using t_coffee and compare the output with the one given by LocARNA:

$ t_coffee four.seq > t_coffee.out

[t_coffee.out]
CLUSTAL FORMAT for T-COFFEE 20150925_14:18 [http://www.tcoffee.org] [MODE: ], CPU=0.00 sec, SCORE=739, Nseq=4, Len=74

M10740     GCGGAUUUAGCUCAGUU-GGGAGAGCGCCAGACUGAAGAUUUGGAGGUCC
K00349     GCCGAAAUAGCUCAGUU-GGGAGAGCGUUAGACUGAAGAUCUAAAGGUCC
K00283     GGGCCGGUAGCUCAUUUAGGCAGAGCGUCUGACUCUUAAUCAGACGGUCG
AF346993   CAGAGUGUAGCUUAAC---ACAAAGCACCCAACUUACACUUAGGAGAUUU
           ***** * * *** *** * * *

M10740     UGUGUUCGAUCCACAGAAUUCGCA
K00349     CCGGUUCAAUCCCGGGUUUCGGCA
K00283     CGUGUUCGAAUCGCGUCCGGCCCA
AF346993   CAACUUAACUUGACCG--CUCUGA
           ** *

Use RNAalifold to predict structures for all your alignments (ClustalW, handcrafted, T-Coffee, and LocARNA) and compare them. The handcrafted and LocARNA alignments should be essentially perfect. Other interesting approaches to structural alignment include CMfinder, dynalign, and stemloc.

## Noncoding RNA gene prediction
Prediction of ncRNAs is still a challenging problem in bioinformatics. Unlike protein coding genes, ncRNAs do not have any statistically significant features in their primary sequences that could be used for reliable prediction. A large class of ncRNAs, however, depends on a defined secondary structure for its function. As a consequence, evolutionarily conserved secondary structures can be used as a characteristic signal to detect ncRNAs. All currently available programs for de novo prediction make use of this principle and are therefore, by construction, limited to structured RNAs.

Programs to predict structural RNAs
• QRNA (Eddy & Rivas, 2001)
• ddbRNA (di Bernardo, Down & Hubbard, 2003)
• MSARi (Coventry, Kleitman & Berger, 2004)
• AlifoldZ (Washietl & Hofacker, 2004)
• RNAz (Washietl, Hofacker & Stadler, 2005)
• EvoFold (Pedersen et al, 2006)

### QRNA
QRNA analyzes pairwise alignments for characteristic patterns of evolution. An alignment is scored by three probabilistic models: (i) position independent, (ii) coding, (iii) RNA. The position independent and the coding models are pair hidden Markov models; the RNA model is a pair stochastic context-free grammar. First, QRNA calculates the prior probability that, given a model, the alignment is observed. Second, it calculates the posterior probability that, given an alignment, it has been generated by one of the three models. The posterior probabilities are compared to the position independent background model and a "winner" is found. QRNA reads pairwise alignments in MFASTA format (i.e. FASTA format with gaps).

Three competing models in QRNA

Installing and basic usage of QRNA
• Use the files in qrna-2.0.3d.tar.gz located in the Data/programs folder shipped with the tutorial
• don't forget to set the QRNADB environment variable (e.g. export QRNADB=$HOME/Tutorial/Data/programs/qrna-2.0.3d/lib/) and add it to your .bashrc
• follow the instructions in the INSTALL document and make the binaries
• create the directory ~/Tutorial/Progs/qrna, move the binaries located in the src/ sub-directory into this folder and add it to your PATH in .bashrc (e.g. export PATH=${HOME}/Tutorial/Progs:${PATH}:${HOME}/Tutorial/Progs/qrna)
• first read the help text (option -h).
• for advanced use of QRNA read the userguide.pdf shipped with the package (in the documentation folder)
• -a tells QRNA to print the alignment

$ eqrna -h
$ eqrna -a Data/qrna/tRNA.fa
$ eqrna -a Data/qrna/coding.fa
[...]
Divergence time (variable): 0.214132 0.208107 0.203995
[alignment ID = 72.37 MUT = 23.68 GAP = 3.95]
length alignment: 76 (id=72.37) (mut=23.68) (gap=3.95)
posX: 0-75 [0-72](73) -- (0.18 0.30 0.36 0.16)
posY: 0-75 [0-75](76) -- (0.14 0.34 0.37 0.14)
DA0780 GGGCTCGTAGCTCAGCT.GGAAGAGCGCGGCGTTTGCAACGCCGAGGCCT
DA0940 GGGCCGGTAGCTCAGCCTGGGAGAGCGTCGGCTTTGCAAGCCGAAGGCCC
DA0780 GGGGTTCAAATCCCCACGGGTCCA..
DA0940 CGGGTTCGAATCCCGGCCGGTCCACC
[...]

### AlifoldZ
AlifoldZ is based on an old hypothesis: functional RNAs are thermodynamically more stable than expected by chance. This hypothesis can be statistically tested by calculating $z$-scores: calculate the MFE $m$ of the native RNA and the mean $\mu$ and standard deviation $\sigma$ of the background distribution of a large number of random (shuffled) RNAs. The normalized $z$-score $z = (m-\mu)/\sigma$ expresses how many standard deviations the native RNA is more stable than random sequences. Unfortunately, most ncRNAs are not significantly more stable than the background. See for example the distribution of $z$-scores of some tRNAs, where the overlap of real (green bars) and shuffled (dashed line) tRNAs is relatively high.

z-score distribution of tRNAs

AlifoldZ calculates $z$-scores for consensus structures folded by RNAalifold. This significantly improves the detection performance compared to single sequence folding.

z-score distribution of tRNA consensus folds

Installation and basic usage of AlifoldZ
• Use the tarball alifoldz_adopted.tar.gz located in the Data/programs/ folder shipped with the tutorial
• Copy the files into your Progs directory (it's just one single Perl script, which needs RNAfold and RNAalifold, plus an important perl module located in the Math subdirectory)
$ cp -r alifoldz.pl Math/ ~/Tutorial/Progs/
• add the perl module to your PERL5LIB variable in the .bashrc
$ export PERL5LIB=$HOME/Tutorial/Progs/:$PERL5LIB
• test the tool
$ alifoldz.pl -h
$ alifoldz.pl < Data/alifoldz/miRNA.aln
$ alifoldz.pl -w 120 -x 100 < Data/alifoldz/unknown.aln

### RNAz
(Note: this part of the tutorial is based on the RNAz 1.0 release, which has been obsolete for quite a while, and still awaits an update for the current version.)

AlifoldZ has some shortcomings that limit its usefulness in practice: the $z$-scores are not deterministic, i.e. you get a different score each time you run AlifoldZ. To get stable $z$-scores you need to sample a large number of random alignments, which is computationally expensive. Moreover, AlifoldZ is extremely sensitive to alignment errors. The program RNAz overcomes these problems by using a different approach to assess a multiple sequence alignment for significant RNA structures. It is based on two key innovations: (i) the structure conservation index (SCI) to measure structural conservation in an alignment and (ii) $z$-scores that are calculated by regression without sampling. Both measures are combined into an overall score that is used to classify an alignment as "structured RNA" or "other".

The structure conservation index
• The structure conservation index is an easy way to normalize an RNAalifold consensus MFE.
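A sketch of the normalization (this is our reading of the RNAz literature; treat the exact definition as an assumption and consult the RNAz manual): the consensus MFE $E_A$ computed by RNAalifold on the alignment is divided by the mean MFE $\overline{E}$ of the individual sequences,

$$\mathrm{SCI} = \frac{E_A}{\overline{E}}.$$

An SCI close to 0 indicates that RNAalifold finds no common structure, an SCI around 1 means the consensus structure is about as stable as the single-sequence structures, and values above 1 can occur when compensatory mutations add covariance bonus energies.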
z-score regression
• The mean $\mu$ and standard deviation $\sigma$ of random samples of a given sequence are functions of the length and the base composition: $\mu, \sigma\left(\mathrm{length}, \frac{GC}{AT}, \frac{G}{C}, \frac{A}{T}\right)$
• It is therefore possible to calculate $z$-scores by solving this 5-dimensional regression problem.

SVM Classification
• A support vector machine learning algorithm is used to classify an alignment based on $z$-score and structure conservation index.

Installation of RNAz
Installation is done according to the instructions used by the ViennaRNA Package. Just use the --prefix option as mentioned earlier and add the PATH to .bashrc.
• RNAz is available at: http://www.tbi.univie.ac.at/~wash/RNAz
• The package includes the core program RNAz in ISO C, a set of helper programs in Perl, and an extensive manual.

Basic usage of RNAz
(The example files ship with the RNAz package; the commands below also work with RNAz 2, although this text has not yet been adapted to it.)
• RNAz reads one or more multiple sequence alignments in clustalw or MAF format.

$ RNAz --help
$ RNAz tRNA.aln
$ RNAz --both-strands --predict-strand tRNA.maf

Advanced usage of RNAz
• RNAz is limited to a maximum alignment length of 400 columns and a maximum number of 6 sequences. To process larger alignments a set of Perl helper scripts is used.
• Selecting one or more subsets of sequences from an alignment with more than 6 sequences:

$ rnazSelectSeqs.pl miRNA.maf | RNAz
$ rnazSelectSeqs.pl --num-seqs=4 --num-samples=3 miRNA.maf | RNAz

• Scoring long alignments in overlapping windows:

$ rnazWindow.pl --window=120 --slide=40 unknown.aln \
  | RNAz --both-strands

### Large scale screens
The RNAz package provides a set of Perl scripts that implement a complete analysis pipeline suitable for medium to large scale screens of genomic data.

General procedure
1. Obtain or create multiple sequence alignments in MAF format
2. Run them through the RNAz pipeline

Examples in this tutorial
1. Align the Epstein-Barr virus genome (Acc.no: NC_007605) to two related primate viruses (Acc.nos: NC_004367, NC_006146) using multiz and run it through the RNAz pipeline.
2. Analyze a snoRNA cluster in the human genome for conserved RNA structures: download pre-computed alignments from the UCSC genome browser and run them through the RNAz pipeline.

Example a: Preparation of data
• multiz and blastz are available here: http://www.bx.psu.edu/miller_lab/
• Download the viral genomes in FASTA format and reformat the header strictly according to the rules given in the multiz documentation (http://www.bx.psu.edu/miller_lab/dist/tba_howto.pdf), e.g.:
• You have to edit the multiz Makefile and replace CFLAGS = -Wall -Wextra -Werror with CFLAGS = -Wall -Wextra #-Werror and then simply use the make command to compile both programs.

>NC_007605:genome:1:+:149696
AGAATTCGTCTTGCTCTATTCACCCTTACTTTTCTTCTTGCCCGTTCTCTTTCTTAGTAT
GAATCCAGTATGCCTGCCTGTAATTGTTGCGCCCTACCTCTTTTGGCTGGCGGCTATTGC
CGCCTCGTGTTTCACGGCCTCAGTTAGTACCGTTGTGACCGCCACCGGCTTGGCCCTCTC
ACTTCTACTCTTGGCAGCAGTGGCCAGCTCATATGCCGCTGCACAAAGGAAACTGCTGAC
ACCGGTGACAGTGCTTACTGCGGTTGTCACTTGTGAGTACACACGCACCATTTACAATGC
ATGATGTTCGTGAGATTGATCTGTCTCTAACAGTTCACTTCCTCTGCTTTTCTCCTCAGT
CTTTGCAATTTGCCTAACATGGAGGATTGAGGACCCACCTTTTAATTCTCTTCTGTTTGC
[...]

Example a: Aligning viral genomes
• To get a multiple alignment, a phylogenetic tree and the following three steps are necessary:
1. Run blastz each vs. each
2. Combine blastz results to multiple sequence alignments
3. Project raw alignments to a reference sequence.
• The corresponding commands:

$ all_bz - "((NC_007605 NC_006146) NC_004367)" | bash
$ tba "((NC_007605 NC_006146) NC_004367)" *.sing.maf raw-tba.maf
$ maf_project raw-tba.maf NC_007605 > final.maf

• Note: The tree is given in NEWICK-like format with blanks instead of commas. The sequence data files must be named exactly like the names in this tree and in the FASTA headers.

Example a: Running the pipeline I
• First the alignments are filtered and sliced into overlapping windows:

$ rnazWindow.pl < final.maf > windows.maf

• RNAz is run on these windows:

$ RNAz --both-strands --show-gaps --cutoff=0.5 windows.maf > rnaz.out

• Overlapping hits are combined to "loci" and visualized on a web site:

$ rnazCluster.pl --html rnaz.out > results.dat

Example a: Running the pipeline II
• The predicted hits are compared with the available annotation of the genome:

$ rnazAnnotate.pl --bed annotation.bed results.dat > results_annotated.dat

• The results file is formatted into an HTML overview page:

$ rnazIndex.pl --html results_annotated.dat > results/index.html

Example a: Statistics on the results
• rnazIndex.pl can be used to generate a BED formatted annotation file, which can be analyzed using rnazBEDstats.pl (after sorting, in case the input alignments were unsorted):

$ rnazIndex.pl --bed results.dat | rnazBEDsort.pl | rnazBEDstats.pl

• rnazFilter.pl can be used to filter the results by different criteria. In this case it gives us all loci with P > 0.9:

$ rnazFilter.pl "P>0.9" results.dat | rnazIndex.pl --bed | rnazBEDsort.pl | rnazBEDstats.pl

• To get an estimate of the (statistical) false positives, one can repeat the complete screen with randomized alignments:

$ rnazRandomizeAln final.maf > random.maf

Example b: Obtaining pre-computed alignments from UCSC
• Go to the UCSC genome browser (http://genome.ucsc.edu) and go to "Tables". Download "multiz17" alignments in MAF format for the region chr11:93103000-93108000.

Example b: Running the pipeline
• The Perl scripts are run in the same order as in example a:

$ rnazWindow.pl --min-seqs=4 region.maf > windows.maf
$ RNAz --both-strands --show-gaps --cutoff=0.5 windows.maf > rnaz.out
$ rnazCluster.pl --html rnaz.out > results.dat
$ rnazAnnotate.pl --bed annotation.bed results.dat > results_annotated.dat
$ rnazIndex.pl --html results_annotated.dat > results/index.html

• The results can be exported as a UCSC BED file which can be displayed in the genome browser:

$ rnazIndex.pl --bed --ucsc results.dat > prediction.bed

Example b: Visualizing the results on the genome browser
• Upload the BED file as "Custom Track"...
• ...and have a look at the results.
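To round off example a, the false-positive control mentioned above could be chained like this; the exact chaining is our sketch (only commands already introduced are used, and the file names random_windows.maf, rnaz_random.out and results_random.dat are hypothetical):

$ rnazRandomizeAln final.maf > random.maf
$ rnazWindow.pl < random.maf > random_windows.maf
$ RNAz --both-strands --show-gaps --cutoff=0.5 random_windows.maf > rnaz_random.out
$ rnazCluster.pl rnaz_random.out > results_random.dat
$ rnazIndex.pl --bed results_random.dat | rnazBEDsort.pl | rnazBEDstats.pl

Comparing the number of loci reported for the shuffled screen with the real screen gives a rough empirical estimate of the false discovery rate.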
2023-02-01 15:42:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 68, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5497146844863892, "perplexity": 5161.362622302008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00261.warc.gz"}
https://yutsumura.com/find-all-values-of-x-so-that-a-matrix-is-singular/linear-algebra-eye-catch3/
# linear-algebra-eye-catch3

• The Sum of Cosine Squared in an Inner Product Space
Let $\mathbf{v}$ be a vector in an inner product space $V$ over $\R$. Suppose that $\{\mathbf{u}_1, \dots, \mathbf{u}_n\}$ is an orthonormal basis of $V$. Let $\theta_i$ be the angle between $\mathbf{v}$ and $\mathbf{u}_i$ for $i=1,\dots, n$. Prove that $\cos […]

• Every Group of Order 24 Has a Normal Subgroup of Order 4 or 8
Prove that every group of order 24 has a normal subgroup of order 4 or 8. Proof. Let $G$ be a group of order 24. Note that $24=2^3\cdot 3$. Let $P$ be a Sylow 2-subgroup of $G$. Then $|P|=8$. Consider the action of the group $G$ on […]

• Galois Group of the Polynomial $x^p-2$
Let $p \in \Z$ be a prime number. Then describe the elements of the Galois group of the polynomial $x^p-2$. Solution. The roots of the polynomial $x^p-2$ are $\sqrt[p]{2}\zeta^k$, $k=0,1, \dots, p-1$, where $\sqrt[p]{2}$ is a real $p$-th root of $2$ and $\zeta$ […]

• Subspace Spanned by Trigonometric Functions $\sin^2(x)$ and $\cos^2(x)$
Let $C[-2\pi, 2\pi]$ be the vector space of all real-valued continuous functions defined on the interval $[-2\pi, 2\pi]$. Consider the subspace $W=\Span\{\sin^2(x), \cos^2(x)\}$ spanned by the functions $\sin^2(x)$ and $\cos^2(x)$. (a) Prove that the set $B=\{\sin^2(x), \cos^2(x)\}$ […]

• Determine Conditions on Scalars so that the Set of Vectors is Linearly Dependent
Determine conditions on the scalars $a, b$ so that the following set $S$ of vectors is linearly dependent: $S=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$, where $\mathbf{v}_1=\begin{bmatrix} 1 \\ 3 \\ 1 \end{bmatrix}$, […]

• If the Images of Vectors are Linearly Independent, then They Are Linearly Independent
Let $T: \R^n \to \R^m$ be a linear transformation. Suppose that $S=\{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k\}$ is a subset of $\R^n$ such that $\{T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k) \}$ is a linearly independent subset of $\R^m$. Prove that the set $S$ […]
2019-08-20 14:08:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9985904693603516, "perplexity": 43.901096913366416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315544.11/warc/CC-MAIN-20190820133527-20190820155527-00393.warc.gz"}
https://www.homebuiltairplanes.com/forums/threads/tube-o-matic.32422/
# Tube-O-Matic Discussion in 'Workshop Tips and Secrets / Tools' started by FritzW, Oct 10, 2019 at 11:44 PM. 1. Oct 10, 2019 at 11:44 PM ### FritzW #### Well-Known Member Joined: Jan 31, 2011 Messages: 3,529 Likes Received: 3,140 Location: Las Cruces, NM A (mostly) 3D printed rotary 4th axis with a pass-through collet that can handle 2 1/2" diameter tubes. >>The biggest cheap Chinese eBay 4th-axis setups can only take (through the chuck) < 3/4" tubes.<< It's not ready to print yet, I still have some details to sort out, but all the heavy CAD work is done. Here it is on a CNCRouterParts CRP4896, but it would work on any CNC machine (I used the CRP4896 because they let you download the model and it's the machine I have in the garage). Just to be clear: these are the parts I'm talking about... The stepper motor is mounted separate from the collet. If the "steady rests" don't provide enough tension on the cog belt (not shown), then I'll have to come up with some sort of tensioner, no biggy. Everything that holds the tube is 3D printed. The printed parts will cost about $20 and take 2 or 3 days to print. The collet holder (off white) will fit all the collets (yellow) up to 2 1/2". The red handwheel is the collet nut. The red part in the steady rest has a surface for a hose clamp. I'll have to print different ones for different diameter tubes, but the mount (white) will fit all of them. Hopefully the V groove will remove any slop in the tube. Cutaway view. Yeah, yeah... I screwed up the threads on the collet, easy fix. I'm sure I'll have to adjust the fit inside the steady rest. But this is a NEMA 34 motor (big) turning at 4 or 5 RPM, I'm not too worried about it. The goal of this thing is to cope and drill thin wall tubing faster and more accurately than I could by hand. It's NOT for making mechanical heart valves for hummingbirds. proppastie and pictsidhe like this. 2. Oct 11, 2019 at 12:56 AM ### Victor Bravo #### Well-Known Member Joined: Jul 30, 2014 Messages: 5,968 Likes Received: 4,786 Location: KWHP, Los Angeles CA, USA That's PFC (pretty dang cool). I'll bet that even using modest resolution printed parts it could drill holes repeatedly within .015", which means that you could do "matched hole" on something simple within welding range and within cleco range. I'm going to post a separate idea on another (separate) new thread that is relevant to this type of construction. See if you can design and print something to make this work... stay tuned... 3. Oct 11, 2019 at 3:12 AM ### ScaleBirdsScott #### Well-Known Member Joined: Feb 10, 2015 Messages: 995 Likes Received: 637 Location: Uncasville, CT I feel like one could build a more industrial-level version of this quite easily with mostly off-the-shelf parts, and still relatively cheaply. Some cheap bearing blocks, some 80/20, etc. Seems it could easily also be a good standalone device. 4. Oct 11, 2019 at 4:12 AM ### Hot Wings #### Well-Known Member (HBA Supporter) Joined: Nov 14, 2009 Messages: 6,304 Likes Received: 2,247 Location: Rocky Mountains If Fritz modified his collet to also be the X-axis hold - and automated the clamping with a second stepper or a pneumatic cylinder, swash plate style - the tube could be power fed as needed. Is there open source software for tube miter G-code? 5.
Oct 11, 2019 at 4:18 AM ### FritzW #### Well-Known Member Joined: Jan 31, 2011 Messages: 3,529 Likes Received: 3,140 Location: Las Cruces, NM Actually, print resolution wouldn't have much effect on the accuracy and repeatability. A lot of other things would, though. To keep from flexing the steady rests, coping the end of an .049 wall tube might take 3 or 4 passes, but it would probably be within just a few thousandths. A heck of a lot more accurate and repeatable than a tape measure, Sharpie marker, a hacksaw and files. Drilling would be as accurate as the base machine would be without the 4th axis (IF everything was *aligned properly, ...but that would apply the same to a $10,000 4th axis). *a 1/10 of a degree misalignment between the machine axis and the tube, over the 8' length of the tube, could really screw things up (applies to any machine). No doubt. This wouldn't be a full blown industrial production machine. But I think it would knock out a single airplane (or two or three) a whole lot faster and more accurately than it could be done by hand. The real benefit/purpose/goal of this idea is that anyone who wants to build this airplane can download the files and print a major portion of the machine that will do all the tedious, PITA stuff for them. For the purpose of building this (TBD) TnG airplane, it still needs the ability to cut out the gussets. But I think it could be the perfect add-on to the MPCNC mentioned at the beginning of the other thread. 6. Oct 11, 2019 at 4:38 AM ### ScaleBirdsScott #### Well-Known Member Joined: Feb 10, 2015 Messages: 995 Likes Received: 637 Location: Uncasville, CT My only concern is that being tabletop limits you to doing a few feet at a time. One could hypothetically make all the gussets and ribs on, say, a 2x2 foot Shapeoko or similar unit. But if you want to do a full 10 ft tube at once, you might want a separate unit. Meanwhile that narrow, long machine can easily fit out of the way 90% of the time. 7. Oct 11, 2019 at 4:52 AM ### FritzW #### Well-Known Member Joined: Jan 31, 2011 Messages: 3,529 Likes Received: 3,140 Location: Las Cruces, NM Now there's an idea! I could put a V shoulder on the collet and use something like the steady rest mount to hold it (or live a little and put some bearings in it). Then I could use much lower friction steady rests. ...and it would solve the issue of the belt tension flexing the tube. You can do it right in SolidWorks. Just split the tube and "unroll" it (in SW). Use that as a flat pattern for your DXF. The CNC machine thinks it's cutting a flat pattern that's the width of the circumference of the tube. ...hard to explain, very easy to do. Put another way: when a 1" diameter tube turns 1 revolution, the machine just thinks it moved 3.14159" along the Y axis. 8. Oct 11, 2019 at 9:43 AM #### Well-Known Member Joined: Jan 27, 2012 Messages: 942 Likes Received: 264 Location: Glendale, CA Well, I have darn near the same tube coping design for the Skylite. I shelved it because I can design the tool but not the software to cut it. Fritz, is there open source code out there already to cope tubing? If so, I already have the Skylite fuselage in SolidWorks with all the tubes coped. I was going to give up and go with Vr3, but if the code existed I would give it a try. Hoping for good news on this... Oh, and I just sent my resignation to my day job an hour ago to focus on a consulting gig. Once that's done, I will have more time to devote to airplane work, semi full time, starting in February... 9.
Oct 11, 2019 at 12:54 PM ### Hot Wings #### Well-Known Member (HBA Supporter) Joined: Nov 14, 2009 Messages: 6,304 Likes Received: 2,247 Location: Rocky Mountains "We" can do it in SW, but what about those that don't have SW? With the free EAA SW I suppose this really isn't a problem. Edit: Since the new "Y" axis only thinks in degrees or rads, we will still have to either shrink/expand the flat wrap along the Y axis in SW, pull the curve ordinates out of SW and run them through an Excel sheet, or come up with a macro for the G-code modification. Shrinking/converting the Y axis to Pi width in SW is probably the easiest............ Laying awake this morning I was pondering how to keep the rotational index of the tube as it gets fed through the collet for the next cut. Anything clamped external to the tube, like a typical muffler shop tube bender, has the potential to snag on the various feed mechanism bits. Maybe a miniature internal ball mechanism like a slip/skid, or other switch that closes only at level, could be used with a software re-index for the second cut? Last edited: Oct 11, 2019 at 1:03 PM 10. Oct 11, 2019 at 8:20 PM ### Victor Bravo #### Well-Known Member Joined: Jul 30, 2014 Messages: 5,968 Likes Received: 4,786 Location: KWHP, Los Angeles CA, USA Excellent, congratulations! Take a deep breath, realize that you've got a unique skill and plenty of brain power, and push the throttle all the way forward. I'm already printing flyers that say "I knew him before he was a billionaire!" 11. Oct 11, 2019 at 8:39 PM #### Well-Known Member Joined: Jan 27, 2012 Messages: 942 Likes Received: 264 Location: Glendale, CA Hello Fritz, To keep the rotational axis, I planned to drill a small hole at each end on the machine prior to the CNC coping. However, I could do it with just one hole and use that hole to fixture and clock the next coping operation. The hole has some benefits: it will let the tube breathe during welding to avoid pops and splatters, and once the fuselage is welded you just weld the hole shut. For an aluminum tube and gusset version, it might make sense to add a thin pencil line down the tube and have a simple pointer on the side of the machine that you clock the tube to and set zero for each coping job on each end. If you want to get fancy you could use a sensor to find that line with a homing routine. Or clamp a homing flag onto the tube first and use an Omron through-beam sensor as the homing sensor. However, this is like using a chainsaw to cut bread. The analog version of a thin Sharpie line and a pointer, or a hole in the tube, will get the job done. As for flattening the cope profile, I have not tried that and am unsure how to approach it. I have seen software that makes templates, and Rainbow Aviation had a way to do it with their coping guides. Any thoughts on the process and getting it to G-code? 12. Oct 11, 2019 at 8:55 PM ### Hot Wings #### Well-Known Member (HBA Supporter) Joined: Nov 14, 2009 Messages: 6,304 Likes Received: 2,247 Location: Rocky Mountains It is a sheet metal tool. Slit the tube with a token width cut - like .02mm - down the full length. Then under the sheet metal tab pick 'flatten'. 13. Oct 11, 2019 at 9:25 PM ### Hephaestus #### Well-Known Member Joined: Jun 25, 2014 Messages: 1,125 Likes Received: 244 Location: YMM https://www.thingiverse.com/thing:624625 3-jaw printed chuck on both ends; if you run a T-nut style bed your alignment will be pretty darn close... Simple spur gear direct to the A-axis stepper (beats another belt drive). There's some great A-axis stuff out there to copy. Using lathe-style chucks makes a lot more sense to me. 14.
Oct 11, 2019 at 9:37 PM ### Hephaestus #### Well-Known Member Joined: Jun 25, 2014 Messages: 1,125 Likes Received: 244 Location: YMM On the software side... I've got an old 720k/1.44 floppy here with an MS-DOS program to do exactly this. It's out in storage; I'll see if I can find it and something to read the **** disk. Lol You enter the specs for the 2 tubes, select orientation (centered or flush to edge) and angle, and it kicks out G-code for the cut. 15. Oct 11, 2019 at 9:59 PM ### FritzW #### Well-Known Member Joined: Jan 31, 2011 Messages: 3,529 Likes Received: 3,140 Location: Las Cruces, NM There are plenty of software packages out there that'll take a 3D file and generate 4th-axis (round) G-code; unfortunately I don't know much about them except what I've seen on YouTube (tons of info on YouTube and the CNC forums). With the software I've got right now (SW, CamBam and Mach3), the easiest way I know to cope and drill a tube is to create a flat pattern like HW said in post #12 and connect the 4th (A) axis to the Y axis terminals on the breakout board. ...I'll go do a quick print screen. 16. Oct 11, 2019 at 10:23 PM ### Hephaestus #### Well-Known Member Joined: Jun 25, 2014 Messages: 1,125 Likes Received: 244 Location: YMM https://www.amazon.com/Sunwin-4th-Axis-Router-Rotational-Tailstock/dp/B00E3OM4LM This is my A-axis. I use it for pretty useless stuff so far, like laser etching coffee mugs. But I like the design: mounting it at the edge of the table and replacing the tailstock with another 3-jaw means you could effectively run full lengths of tubing through it fairly accurately and not be as restricted by the machine dimensions. As long as you kept it aligned vertically you could slide the length through for individual cuts or full length ('hold down tabs' to snip at the end might be a great idea to keep the integrity during the cut). 17. Oct 11, 2019 at 11:13 PM ### FritzW #### Well-Known Member Joined: Jan 31, 2011 Messages: 3,529 Likes Received: 3,140 Location: Las Cruces, NM The reason behind the 3D printed one (besides being fun to mess with) is to have a pass-through chuck that can take the size of tubes you'd need in a t/g airplane. 18. Oct 11, 2019 at 11:32 PM ### Hephaestus #### Well-Known Member Joined: Jun 25, 2014 Messages: 1,125 Likes Received: 244 Location: YMM That's why I linked the 3D printed 3-jaw chuck. Scale it, find a suitable bearing, add some herringbone gears... Pretty sure that one's original source is over on GrabCAD. Using an external permanently fixed A axis will be easier than an internally fixed version, wouldn't you agree? We could be even smarter and drive both ends of it with a NEMA 17. I linked the one I own because it's a good example of how it's done on high-end mills. There's less to go wrong with that arrangement than the layout you have. Wasn't criticism, just pointing out another way that may be mechanically better. 19. Oct 11, 2019 at 11:53 PM ### FritzW #### Well-Known Member Joined: Jan 31, 2011 Messages: 3,529 Likes Received: 3,140 Location: Las Cruces, NM FWIW, this is how I do it; there are no doubt better ways... Open the tube part. Find a line on the tube that misses any holes (it's not a problem if it doesn't, it just adds a few steps). Draw a wedge on a plane perpendicular to the tube. Using a wedge keeps the edges of the flat pattern 90 degrees to the surface. Set the back of the wedge to something really small, like 0.0005". Extrude-cut the wedge down the whole tube. Self explanatory, ...maybe. Hit the Flatten button to unroll the tube. You have to hit "Normal to" the surface you want or the next step will go wonky. Save as a DXF.
Showing hidden edges probably doesn't matter on something simple like this, but it's a good habit to get into. Bring the DXF up in your CAM software and you're good to go. Last edited: Oct 12, 2019 at 2:48 AM 20. Oct 12, 2019 at 12:26 AM Joined: Jan 31, 2011 Messages: 3,529
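The flat-pattern trick FritzW and Hot Wings describe (one tube revolution corresponds to a Y move of pi times the diameter, i.e. the circumference) is easy to sanity-check numerically. A minimal sketch; the helper name and numbers are mine, not from the thread:

```python
# Sketch of the unrolled-tube mapping discussed in the thread (illustrative
# helper, not from the thread). CAM treats the unrolled tube as a flat sheet
# whose width is the circumference, so a flat-pattern Y move corresponds to
# an A-axis rotation.
import math

def y_to_degrees(y_inches, tube_dia_inches):
    """Convert a flat-pattern Y distance to an A-axis rotation in degrees."""
    circumference = math.pi * tube_dia_inches
    return (y_inches / circumference) * 360.0

# One full revolution of a 1" diameter tube "moves" pi = 3.14159" in Y:
print(y_to_degrees(math.pi, 1.0))   # 360.0
```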
2019-10-14 10:21:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33410534262657166, "perplexity": 3270.911543702183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653216.3/warc/CC-MAIN-20191014101303-20191014124303-00389.warc.gz"}
http://mathhelpforum.com/calculus/62883-line-integral-question.html
Line Integral Question Let C be the straight path from (0,0) to (5,5) and let F = (y - x + 2)i + (sin(y - x) - 2)j: a) At each point of C, what angle does F make with a tangent vector to C? b) Evaluate
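The post is cut off after part (b), but part (a) can be worked from what is given. A sketch (the parametrization is my own choice, not stated in the post): on C we have y = x, so

$$\mathbf{F}\big|_{y=x} = (0+2)\,\mathbf{i} + (\sin 0 - 2)\,\mathbf{j} = 2\,\mathbf{i} - 2\,\mathbf{j},$$

while a tangent vector to C, parametrized as $\mathbf{r}(t) = (t, t)$ for $0 \le t \le 5$, is $\mathbf{r}'(t) = \mathbf{i} + \mathbf{j}$. Then

$$\mathbf{F} \cdot \mathbf{r}' = 2 - 2 = 0,$$

so F is perpendicular to C at every point: the angle is $\pi/2$. (If part (b) asks for the line integral of F along C, this orthogonality would make it zero, but the question text is truncated.)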
2015-03-06 18:24:40
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8152013421058655, "perplexity": 1595.2955266580934}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936469305.48/warc/CC-MAIN-20150226074109-00247-ip-10-28-5-156.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/330402/what-is-an-example-of-a-transformation-on-a-posterior-distribution-such-that-the
# What is an example of a transformation on a posterior distribution such that the MAP estimate will be non-invariant? Suppose that we have a posterior distribution $p(\theta\mid y)$ and we wish to define a transformation on $\theta$ such that $\phi = f(\theta)$. I know that generally such transformations will not affect the MLE, as it is on the data space, but will result in non-invariance of the Maximum A Posteriori (MAP) estimate, as it is a function of the parameter space. I am wondering if there is an example to illustrate this? Thanks. Finding the MAP means solving the program $$\hat\theta^\text{MAP}=\arg\max_\theta p(\theta|x)=\arg\max_\theta p(\theta)\mathfrak{f}(x|\theta)$$ Assuming the transform $\phi=f(\theta)$ is bijective with inverse $\theta(\phi)$ and differentiable, the posterior distribution of $\phi$ is $$q(\phi|x)\propto\mathfrak{f}(x|\theta(\phi))p(\theta(\phi))\times\left|\dfrac{\text{d}\theta(\phi)}{\text{d}\phi}\right|$$ by the Jacobian formula. Hence, if the Jacobian $$\left|\dfrac{\text{d}\theta(\phi)}{\text{d}\phi}\right|$$ is not constant in $\phi$, there is no reason for $$\hat\phi^\text{MAP}=\arg\max_\phi q(\phi|x)$$ to satisfy $$\hat\phi^\text{MAP}=f(\hat\theta^\text{MAP})$$ For instance, if $\theta$ is a real parameter and $$f(\theta)=1\big/\left(1+\exp\{-\theta\}\right)$$ then $$\theta(\phi)=\log\{\phi/[1-\phi]\}$$ and the Jacobian is $$\left|\dfrac{\text{d}\theta(\phi)}{\text{d}\phi}\right|=1\big/\phi[1-\phi]$$ Due to its explosive behaviour at $0$ and $1$, the posterior distribution in $\phi$ will likely have a MAP more extreme than $f(\hat\theta^\text{MAP})$. To borrow an example from my book, The Bayesian Choice, the MAP associated with $$p(\theta)\propto\exp\{-|\theta|\}\qquad\mathfrak{f}(x|\theta)\propto[1+(x-\theta)^2]^{-1}$$ is always $$\hat\theta^\text{MAP}=\arg\max_\theta p(\theta|x)=0$$ notwithstanding the value of the (single) observation $x$. Now, if we switch to the logistic reparameterisation $f(\theta)=1\big/\left(1+\exp\{-\theta\}\right)$, then the MAP estimator in $\phi$ maximises $$-\left|\ln\dfrac{\phi}{1-\phi}\right|-\ln\left[1+\left(x-\ln\frac{\phi}{1-\phi}\right)^2\right]-\ln\left[\phi(1-\phi)\right]$$ which is not systematically maximised at $\phi_0=1/2$, as shown by a graph where $x$ ranges from $0.5$ to $2.5$ and the maxima $\hat\phi^\text{MAP}$ drift rightwards (the figure is not reproduced here). [Note: I have written a few blog entries on MAP estimators and their "dangers", in connection with this issue. The first difficulty being the dependence on the dominating measure.] Here is a little specific illustration of Xi'an's general result. Consider the classical beta-binomial model, for which the posterior $\theta|y$ is beta distributed with, say, parameters $\alpha$ and $\beta$. Then, by properties of the beta distribution, the MAP (the mode of the beta distribution) is $$\hat\theta^\text{MAP}=\frac{\alpha-1}{\alpha+\beta-2}$$ If we consider the odds $\phi=\theta/(1-\theta)$, we find (see the list of related distributions for the beta distribution) that their posterior follows a beta prime distribution, which has mode $$\hat\phi^\text{MAP}=\frac{\alpha-1}{\beta+1}$$ Now, $$f(\hat\theta^\text{MAP})=\frac{\frac{\alpha-1}{\alpha+\beta-2}}{1-\frac{\alpha-1}{\alpha+\beta-2}}=\frac{\alpha-1}{\beta-1}\neq\frac{\alpha-1}{\beta+1}=\hat\phi^\text{MAP}$$ EDIT: This lack of invariance does not seem to be restricted to the MAP. Consider the posterior mean, another prominent Bayesian estimator.
In the present example, the posterior mean for $\theta$ is $$\hat\theta^\text{mean}=\frac{\alpha}{\alpha+\beta}$$ with $$f(\hat\theta^\text{mean})=\frac{\alpha}{\beta}$$ which is not equal to the mean of the beta prime distribution, $$\frac{\alpha}{\beta-1}$$
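A quick numerical check of the beta-binomial illustration above; this is a sketch, and the parameter values $\alpha = 5$, $\beta = 3$ are my own choice, not from the answer:

```python
# Numerical check of the beta-binomial example: theta | y ~ Beta(alpha, b),
# phi = theta / (1 - theta). Parameter values are illustrative.
import numpy as np
from scipy.stats import beta

alpha, b = 5.0, 3.0

theta_map = (alpha - 1) / (alpha + b - 2)     # mode of Beta(alpha, b) = 2/3
f_theta_map = theta_map / (1 - theta_map)     # (alpha-1)/(b-1) = 2.0

# Posterior density of phi via the change-of-variables (Jacobian) formula:
# theta = phi/(1+phi), so |d theta / d phi| = 1/(1+phi)^2.
phi = np.linspace(1e-6, 10, 200_001)
q = beta.pdf(phi / (1 + phi), alpha, b) / (1 + phi) ** 2

phi_map = phi[np.argmax(q)]
print(f_theta_map)   # 2.0  = f(theta_MAP)
print(phi_map)       # ~1.0 = (alpha-1)/(b+1), the beta prime mode
```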
2020-01-28 07:09:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132760763168335, "perplexity": 238.96738944804983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251776516.99/warc/CC-MAIN-20200128060946-20200128090946-00076.warc.gz"}
http://www.reference.com/browse/wiki/Expectation-maximization_algorithm
# Expectation-maximization algorithm An expectation-maximization (EM) algorithm is used in statistics for finding maximum likelihood estimates of parameters in probabilistic models, where the model depends on unobserved latent variables. EM alternates between performing an expectation (E) step, which computes an expectation of the likelihood by including the latent variables as if they were observed, and a maximization (M) step, which computes the maximum likelihood estimates of the parameters by maximizing the expected likelihood found in the E step. The parameters found in the M step are then used to begin another E step, and the process is repeated. ## History The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin in the Journal of the Royal Statistical Society. They pointed out that the method had been "proposed many times in special circumstances" by other authors, but the 1977 paper generalized the method and developed the theory behind it. ## Applications EM is frequently used for data clustering in machine learning and computer vision. In natural language processing, two prominent instances of the algorithm are the Baum-Welch algorithm (also known as forward-backward) and the inside-outside algorithm for unsupervised induction of probabilistic context-free grammars. In psychometrics, EM is almost indispensable for estimating item parameters and latent abilities of item response theory models. With its ability to deal with missing data and unobserved latent variables, EM is also becoming a useful tool for pricing and managing the risk of a portfolio. The EM algorithm (and its faster variant OS-EM) is also widely used in medical image reconstruction, especially in Positron Emission Tomography and Single Photon Emission Computed Tomography. See below for other, faster variants of EM. ## Demonstrations and activities Various 1D, 2D and 3D demonstrations of EM together with mixture modeling are provided as part of the paired SOCR activities and applets. These applets and activities show empirically the properties of the EM algorithm for parameter estimation in diverse settings. A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those utilising conjugate gradient and modified Newton-Raphson techniques. Additionally, EM can be utilised with constrained estimation techniques. ## EM in a nutshell EM is an iterative technique for estimating the value of some unknown quantity, given the values of some correlated, known quantity. The approach is to first assume that the quantity is represented as a value in some parameterized probability distribution (the popular application is a mixture of Gaussians; hence the example below). The EM procedure, then, is: 1. Initialize the distribution parameters 2. Repeat until convergence: 1. E-step: estimate the [E]xpected value of the unknown variables, given the current parameter estimate 2. M-step: re-estimate the distribution parameters to [M]aximize the likelihood of the data, given the expected estimates of the unknown variables Both steps hide a lot of detail, and how they look depends on which distribution you choose, how many parameters there are, and how complicated the missing values are. Two other caveats: First, the algorithm will run from any choice of initial parameters, but in practice a poor choice can lead to a bad estimate. Second, convergence, though guaranteed, may take a long time.
In practice, if the values of the missing variables and parameters do not change significantly between two iterations, the algorithm terminates. A simple application is filling in missing values in a column of a database. Assume you know about 50% of the values in a column, but the remaining values are corrupt or missing. For simplicity, assume that the data is normally distributed with unit variance. Then the only parameter that must be computed is the mean. Conveniently, the E-step is simple: the expected value of each missing entry is the current mean. But imputing the missing entries changes the overall mean of the data, and so the estimate can be improved. This continues for a few iterations. An example: Initialization Step: Data: [4, 10, ?, ?] Initial mean value: 0 New Data: [4, 10, 0, 0] Step 1: New Mean: 3.5 New Data: [4, 10, 3.5, 3.5] Step 2: New Mean: 5.25 New Data: [4, 10, 5.25, 5.25] Step 3: New Mean: 6.125 New Data: [4, 10, 6.125, 6.125] Step 4: New Mean: 6.5625 New Data: [4, 10, 6.5625, 6.5625] Step 5: New Mean: 6.78125 New Data: [4, 10, 6.78125, 6.78125] Result: New Mean: 6.890625 From here you can see the value slowly converging toward 7. For this simple model (a univariate normal distribution with unit variance), it can be seen that substituting the average of the known values is the best answer. For more complex models, there is no easy way to find the best answer, and the EM algorithm is a very popular approach for estimating it. In the mixture-of-Gaussians demonstration below, we have a collection of multi-dimensional objects. We assume that each individual data object was generated from one of K Gaussians. If we knew which Gaussian each object came from, then estimating the parameters would be easy (using maximum likelihood estimation techniques). Instead, we must first compute the expected value that the object came from each Gaussian (E-step) and then estimate the parameters given these expected assignments (M-step). ## Specification of the EM procedure Let $\mathbf{y}$ denote incomplete data consisting of values of observable variables, and let $\mathbf{z}$ denote the missing data. Together, $\mathbf{y}$ and $\mathbf{z}$ form the complete data. $\mathbf{z}$ can either be actual missing measurements or a hidden variable that would make the problem easier if its value were known. For instance, in mixture models the likelihood formula would be much more convenient if the mixture components that "generated" the samples were known (see the example below). Let $p$ be the joint probability density function (continuous case) or probability mass function (discrete case) of the complete data, with parameters given by the vector $\theta$: $p(\mathbf{y}, \mathbf{z} \mid \theta)$. This function can also be considered as the complete-data likelihood; that is, it can be thought of as a function of $\theta$. Further, note that the conditional distribution of the missing data given the observed data can be expressed as $$p(\mathbf{z} \mid \mathbf{y}, \theta) = \frac{p(\mathbf{y}, \mathbf{z} \mid \theta)}{p(\mathbf{y} \mid \theta)} = \frac{p(\mathbf{y} \mid \mathbf{z}, \theta)\, p(\mathbf{z} \mid \theta)}{\int p(\mathbf{y} \mid \hat{\mathbf{z}}, \theta)\, p(\hat{\mathbf{z}} \mid \theta)\, \mathrm{d}\hat{\mathbf{z}}}$$ by using Bayes' rule and the law of total probability.
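The imputation loop above is short enough to write out. A minimal sketch (my own illustration, not from the original article):

```python
# EM for the mean of a unit-variance normal with missing entries, as in the
# [4, 10, ?, ?] example above (illustration only).
import numpy as np

def em_mean(known, n_missing, mean0=0.0, iters=20):
    """E-step: impute missing values with the current mean.
    M-step: recompute the mean over the completed data."""
    mean = mean0
    for _ in range(iters):
        completed = np.concatenate([known, np.full(n_missing, mean)])
        mean = completed.mean()
    return mean

print(em_mean(np.array([4.0, 10.0]), n_missing=2))  # converges toward 7.0
```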
(This formulation only requires knowledge of the observation likelihood given the unobservable data, $p(\mathbf{y} \mid \mathbf{z}, \theta)$, and the probability of the unobservable data, $p(\mathbf{z} \mid \theta)$.) An EM algorithm iteratively improves an initial estimate $\theta_0$ by constructing new estimates $\theta_1, \theta_2,$ and so on. An individual re-estimation step that derives $\theta_{n+1}$ from $\theta_n$ takes the following form: $$\theta_{n+1} = \arg\max_{\theta} Q(\theta)$$ where $Q(\theta)$ is the expected value of the log-likelihood. In other words, we do not know the complete data, so we cannot say what the exact value of the likelihood is; but given the data that we do know (the $\mathbf{y}$'s), we can find a posteriori estimates of the probabilities for the various values of the unknown $\mathbf{z}$'s. For each set of $\mathbf{z}$'s there is a likelihood value for $\theta$, and we can thus calculate an expected value of the likelihood with the given values of the $\mathbf{y}$'s (which depends on the previously assumed value of $\theta$, because this value influenced the probabilities of the $\mathbf{z}$'s). $Q$ is given by $$Q(\theta) = \sum_{\mathbf{z}} p(\mathbf{z} \mid \mathbf{y}, \theta_n)\, \log p(\mathbf{y}, \mathbf{z} \mid \theta)$$ or, more generally, $$Q(\theta) = E_{\mathbf{z}}\!\left[\log p(\mathbf{y}, \mathbf{z} \mid \theta) \,\Big|\, \mathbf{y}\right]$$ where it is understood that this denotes the conditional expectation of $\log p(\mathbf{y}, \mathbf{z} \mid \theta)$, taken with respect to the conditional distribution of $\mathbf{z}$ with $\theta$ fixed at $\theta_n$. (The log of the likelihood is often used instead of the true likelihood because it leads to easier formulas, but still attains its maximum at the same point as the likelihood.) In other words, $\theta_{n+1}$ is the value that maximizes (M) the conditional expectation (E) of the complete-data log-likelihood given the observed variables under the previous parameter value. The expectation $Q(\theta)$ in the continuous case would be given by $$Q(\theta) = E_{\mathbf{z}}\!\left[\log p(\mathbf{y}, \mathbf{z} \mid \theta) \,\Big|\, \mathbf{y}\right] = \int_{-\infty}^{\infty} p(\mathbf{z} \mid \mathbf{y}, \theta_n)\, \log p(\mathbf{y}, \mathbf{z} \mid \theta)\, \mathrm{d}\mathbf{z}$$ Speaking of an expectation (E) step is a bit of a misnomer. What is calculated in the first step are the fixed, data-dependent parameters of the function $Q$. Once the parameters of $Q$ are known, it is fully determined and is maximized in the second (M) step of an EM algorithm. The origin of the name comes from the fact that in the paper by Dempster, Laird and Rubin, they first discuss a less general problem in which the probability distribution is of the exponential family, and in that case the so-called E step consists of finding the expected values of certain sufficient statistics of the complete data. ### Properties Part of the reason for the popularity of EM algorithms is that, as can be shown, an EM iteration does not decrease the observed-data likelihood function. However, there is no guarantee that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm will converge to a local maximum (or saddle point) of the observed-data likelihood function, depending on starting values. There are a variety of heuristic approaches for escaping a local maximum, such as using several different random initial estimates $\theta_0$, or applying simulated annealing.
EM is particularly useful when maximum likelihood estimation of a complete data model is easy. If closed-form estimators exist, the M step is often trivial. A classic example is maximum likelihood estimation of a finite mixture of Gaussians, where each component of the mixture can be estimated trivially if the mixing distribution is known. "Expectation-maximization" is a description of a class of related algorithms, not a specific algorithm; EM is a recipe or meta-algorithm which is used to devise particular algorithms. The Baum-Welch algorithm is an example of an EM algorithm applied to hidden Markov models. Another example is the EM algorithm for fitting a mixture density model. An EM algorithm can also find maximum a posteriori (MAP) estimates, by performing MAP estimation in the M step rather than maximum likelihood. There are other methods for finding maximum likelihood estimates, such as gradient descent, conjugate gradient, or variations of the Gauss-Newton method. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function. ## Incremental versions The classic EM procedure is to replace both $Q$ and $\theta$ with their optimal possible (argmax) values at each iteration. However, it can be shown (see Neal & Hinton, 1999) that simply finding $Q$ and $\theta$ to give some improvement over their current values will also ensure successful convergence. For example, to improve $Q$, we could restrict the space of possible functions to a computationally simple distribution such as a factorial distribution, $$Q = \prod_i Q_i.$$ Thus at each E step we compute the variational approximation of $Q$. To improve $\theta$, we could use any hill-climbing method, and not worry about finding the optimal $\theta$, just some improvement. This method is also known as Generalized EM (GEM). ## Relation to variational Bayes methods EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for $\theta$ (either a maximum likelihood estimate or a posterior mode). We may want a fully Bayesian version of this, giving a probability distribution over $\theta$ as well as over the latent variables. In fact the Bayesian approach to inference is simply to treat $\theta$ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If we use the factorized $Q$ approximation as described above (variational Bayes), we may iterate over each latent variable (now including $\theta$) and optimize them one at a time. There are now $k$ steps per iteration, where $k$ is the number of latent variables. For graphical models this is easy to do, as each variable's new $Q$ depends only on its Markov blanket, so local message passing can be used for efficient inference. ## Example: Gaussian Mixture Assume that a sample of $m$ vectors (or scalars) $\mathbf{y}_1, \dots, \mathbf{y}_m$, where $\mathbf{y}_j \in \mathbb{R}^D$, is drawn from one of $n$ Gaussian distributions. Let $z_j \in \{1, 2, \ldots, n\}$ denote which Gaussian $\mathbf{y}_j$ is from.
The probability that a particular $\mathbf{y}$ comes from the $i$-th $D$-dimensional Gaussian is $$P(\mathbf{y} \mid z=i, \theta) = \mathcal{N}(\mu_i, \sigma_i) = (2\pi)^{-D/2}\, \left|\sigma_i\right|^{-1/2} \exp\!\left(-\frac{1}{2}(\mathbf{y} - \mu_i)^T \sigma_i^{-1} (\mathbf{y} - \mu_i)\right)$$ Our task is to estimate the unknown parameters $\theta = \left\{\mu_1, \dots, \mu_n, \sigma_1, \dots, \sigma_n, P(z=1), \dots, P(z=n)\right\}$, that is, the mean and covariance of each Gaussian and the probability of each Gaussian being drawn for any given point. (Actually it is not clear that we should allow the covariances to take any value, because then the maximum likelihood may be unbounded as one centers a particular Gaussian on a particular data point and decreases its standard deviation toward zero!) ### E-step Estimation of the unobserved $z$'s (which Gaussian is used), conditioned on the observations, using the values from the last maximization step: $$p(z_j = i \mid \mathbf{y}_j, \theta_t) = \frac{p(z_j = i, \mathbf{y}_j \mid \theta_t)}{p(\mathbf{y}_j \mid \theta_t)} = \frac{p(\mathbf{y}_j \mid z_j = i, \theta_t)\, p(z = i \mid \theta_t)}{\sum_{k=1}^n p(\mathbf{y}_j \mid z_j = k, \theta_t)\, p(z = k \mid \theta_t)}$$ ### M-step We now want to maximize the expected log-likelihood of the joint event: \begin{align} Q(\theta) &= E_{z}\!\left[\ln \prod_{j=1}^m p(\mathbf{y}_j, \mathbf{z} \mid \theta) \,\Big|\, \mathbf{y}_j\right] = \sum_{j=1}^m E_{z}\!\left[\ln p(\mathbf{y}_j, \mathbf{z} \mid \theta) \,\Big|\, \mathbf{y}_j\right] \\ &= \sum_{j=1}^m \sum_{i=1}^n p(z_j = i \mid \mathbf{y}_j, \theta_t)\, \ln p(z_j = i, \mathbf{y}_j \mid \theta) \end{align} If we expand the probability of the joint event, we get $$Q(\theta) = \sum_{j=1}^m \sum_{i=1}^n p(z_j = i \mid \mathbf{y}_j, \theta_t)\, \ln\!\left(p(\mathbf{y}_j \mid z_j = i, \theta)\, p(z_j = i \mid \theta)\right)$$ We have the constraint $$\sum_{i=1}^{n} p(z_j = i \mid \theta) = 1$$ If we add a Lagrange multiplier and expand the pdf, we get \begin{align} \mathcal{L}(\theta) = {}& \sum_{j=1}^m \sum_{i=1}^n p(z_j = i \mid \mathbf{y}_j, \theta_t) \left(-\frac{D}{2}\ln(2\pi) - \frac{1}{2}\ln\left|\sigma_i\right| - \frac{1}{2}(\mathbf{y}_j - \mu_i)^T \sigma_i^{-1} (\mathbf{y}_j - \mu_i) + \ln p(z = i \mid \theta)\right) \\ & - \lambda\left(\sum_{i=1}^{n} p(z = i \mid \theta) - 1\right) \end{align} To find the new estimate $\theta_{t+1}$, we find a maximum where $\frac{\partial \mathcal{L}(\theta)}{\partial \theta} = 0$.
New estimate for the mean (using some differentiation rules from matrix calculus): \begin{align} \frac{\partial \mathcal{L}(\theta)}{\partial \mu_i} &= \sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t) \left(-\frac{\partial}{\partial \mu_i}\frac{1}{2}(\mathbf{y}_j - \mu_i)^T \sigma_i^{-1} (\mathbf{y}_j - \mu_i)\right) \\ &= \sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t)\, \sigma_i^{-1}(\mathbf{y}_j - \mu_i) = 0 \end{align} which gives $$\mu_i = \frac{\sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t)\, \mathbf{y}_j}{\sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t)}$$ New estimate for the covariance: \begin{align} \frac{\partial \mathcal{L}(\theta)}{\partial \sigma_i} &= \sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t) \left(-\frac{1}{2}\sigma_i^{-T} + \frac{1}{2}\sigma_i^{-T}(\mathbf{y}_j - \mu_i)(\mathbf{y}_j - \mu_i)^T \sigma_i^{-T}\right) = 0 \end{align} which gives $$\sigma_i = \frac{\sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t)\, (\mathbf{y}_j - \mu_i)(\mathbf{y}_j - \mu_i)^T}{\sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t)}$$ New estimate for the class probability: $$\frac{\partial \mathcal{L}(\theta)}{\partial p(z = i \mid \theta)} = \sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t)\, \frac{1}{p(z = i \mid \theta)} - \lambda = 0$$ so that $$p(z = i \mid \theta) = \frac{1}{\lambda}\sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t)$$ Inserting this into the constraint: $$\sum_{i=1}^{n} p(z = i \mid \theta) = \sum_{i=1}^{n}\frac{1}{\lambda}\sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t) = 1 \quad\Rightarrow\quad \lambda = \sum_{i=1}^{n}\sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t) = m$$ since the posterior probabilities sum to one over $i$ for each $j$. Inserting $\lambda$ into our estimate: $$p(z = i \mid \theta) = \frac{1}{\lambda}\sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t) = \frac{1}{m}\sum_{j=1}^m p(z_j = i \mid \mathbf{y}_j, \theta_t)$$ These estimates now become our $\theta_{t+1}$, to be used in the next estimation step.
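To make the derived updates concrete, here is a compact one-dimensional sketch (my own illustration; variable names mirror the formulas above, with resp[j, i] playing the role of $p(z_j = i \mid \mathbf{y}_j, \theta_t)$):

```python
# Minimal 1-D Gaussian-mixture EM following the updates derived above
# (illustration only, not production code).
import numpy as np

def em_gmm_1d(y, n_components, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    m = len(y)
    mu = rng.choice(y, size=n_components)          # initial means
    var = np.full(n_components, y.var())           # initial variances
    pi = np.full(n_components, 1.0 / n_components) # initial p(z = i)
    for _ in range(iters):
        # E-step: responsibilities p(z_j = i | y_j, theta_t)
        dens = np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: the closed-form updates derived above
        nk = resp.sum(axis=0)                      # effective counts
        mu = (resp * y[:, None]).sum(axis=0) / nk
        var = (resp * (y[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / m                                # p(z = i) = (1/m) * sum_j resp
    return mu, var, pi

rng = np.random.default_rng(42)
y = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
print(em_gmm_1d(y, 2))   # means near 0 and 5, weights near 0.5
```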
2013-12-13 03:20:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 58, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8521848320960999, "perplexity": 2219.8461017943173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164836485/warc/CC-MAIN-20131204134716-00000-ip-10-33-133-15.ec2.internal.warc.gz"}
http://scipy.github.io/devdocs/generated/scipy.sparse.dia_matrix.html
# scipy.sparse.dia_matrix

class scipy.sparse.dia_matrix(arg1, shape=None, dtype=None, copy=False)[source]

Sparse matrix with DIAgonal storage.

This can be instantiated in several ways:

- dia_matrix(D): with a dense matrix
- dia_matrix(S): with another sparse matrix S (equivalent to S.todia())
- dia_matrix((M, N), [dtype]): to construct an empty matrix with shape (M, N); dtype is optional, defaulting to dtype='d'
- dia_matrix((data, offsets), shape=(M, N)): where data[k,:] stores the diagonal entries for diagonal offsets[k] (see example below)

Notes

Sparse matrices can be used in arithmetic operations: they support addition, subtraction, multiplication, division, and matrix power.

Examples

>>> import numpy as np
>>> from scipy.sparse import dia_matrix
>>> dia_matrix((3, 4), dtype=np.int8).toarray()
array([[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]], dtype=int8)

>>> data = np.array([[1, 2, 3, 4]]).repeat(3, axis=0)
>>> offsets = np.array([0, -1, 2])
>>> dia_matrix((data, offsets), shape=(4, 4)).toarray()
array([[1, 0, 3, 0],
       [1, 2, 0, 4],
       [0, 2, 3, 0],
       [0, 0, 3, 4]])

Attributes

- shape: Get shape of a matrix.
- nnz: Number of stored values, including explicit zeros.
- dtype: (dtype) Data type of the matrix
- ndim: (int) Number of dimensions (this is always 2)
- data: DIA format data array of the matrix
- offsets: DIA format offset array of the matrix

Methods

- arcsin(): Element-wise arcsin.
- arcsinh(): Element-wise arcsinh.
- arctan(): Element-wise arctan.
- arctanh(): Element-wise arctanh.
- asformat(format): Return this matrix in a given sparse format
- asfptype(): Upcast matrix to a floating point format (if necessary)
- astype(t): Cast the matrix elements to a specified type.
- ceil(): Element-wise ceil.
- conj(): Element-wise complex conjugation.
- conjugate(): Element-wise complex conjugation.
- copy(): Returns a copy of this matrix.
- count_nonzero(): Number of non-zero entries, equivalent to np.count_nonzero(a.toarray())
- deg2rad(): Element-wise deg2rad.
- diagonal(): Returns the main diagonal of the matrix
- dot(other): Ordinary dot product
- expm1(): Element-wise expm1.
- floor(): Element-wise floor.
- getH(): Return the Hermitian transpose of this matrix.
- get_shape(): Get shape of a matrix.
- getcol(j): Returns a copy of column j of the matrix, as an (m x 1) sparse matrix (column vector).
- getformat(): Format of a matrix representation as a string.
- getmaxprint(): Maximum number of elements to display when printed.
- getnnz([axis]): Number of stored values, including explicit zeros.
- getrow(i): Returns a copy of row i of the matrix, as a (1 x n) sparse matrix (row vector).
- log1p(): Element-wise log1p.
- maximum(other): Element-wise maximum between this and another matrix.
- mean([axis, dtype, out]): Compute the arithmetic mean along the specified axis.
- minimum(other): Element-wise minimum between this and another matrix.
- multiply(other): Point-wise multiplication by another matrix
- nonzero(): nonzero indices
- power(n[, dtype]): This function performs element-wise power.
- rad2deg(): Element-wise rad2deg.
- reshape(shape[, order]): Gives a new shape to a sparse matrix without changing its data.
- rint(): Element-wise rint.
- set_shape(shape): See reshape.
- setdiag(values[, k]): Set diagonal or off-diagonal elements of the array.
- sign(): Element-wise sign.
- sin(): Element-wise sin.
- sinh(): Element-wise sinh.
- sqrt(): Element-wise sqrt.
- sum([axis, dtype, out]): Sum the matrix elements over a given axis.
- tan(): Element-wise tan.
- tanh(): Element-wise tanh.
- toarray([order, out]): Return a dense ndarray representation of this matrix.
- tobsr([blocksize, copy]): Convert this matrix to Block Sparse Row format.
- tocoo([copy]): Convert this matrix to COOrdinate format.
- tocsc([copy]): Convert this matrix to Compressed Sparse Column format.
- tocsr([copy]): Convert this matrix to Compressed Sparse Row format.
- todense([order, out]): Return a dense matrix representation of this matrix.
- todia([copy]): Convert this matrix to sparse DIAgonal format.
- todok([copy]): Convert this matrix to Dictionary Of Keys format.
- tolil([copy]): Convert this matrix to LInked List format.
- transpose([axes, copy]): Reverses the dimensions of the sparse matrix.
- trunc(): Element-wise trunc.
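A small end-to-end sketch using a few of the methods listed above (my own illustration, not from the SciPy reference page):

```python
# Build a tridiagonal matrix in DIA format and exercise toarray, dot,
# tocsr and getrow (illustration only).
import numpy as np
from scipy.sparse import dia_matrix

n = 5
data = np.array([-1.0 * np.ones(n),   # subdiagonal (offset -1)
                  2.0 * np.ones(n),   # main diagonal (offset 0)
                 -1.0 * np.ones(n)])  # superdiagonal (offset 1)
offsets = np.array([-1, 0, 1])
A = dia_matrix((data, offsets), shape=(n, n))

print(A.toarray())           # dense view of tridiag(-1, 2, -1)
x = np.ones(n)
print(A.dot(x))              # matrix-vector product: [1, 0, 0, 0, 1]
B = A.tocsr()                # convert for efficient row operations
print(B.getrow(0).toarray())
```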
2017-02-22 15:53:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4583780765533447, "perplexity": 8324.038688268347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00581-ip-10-171-10-108.ec2.internal.warc.gz"}
https://dsp.stackexchange.com/questions/30609/sinus-generation-and-its-fft-in-radians/30610
# Sine generation and its FFT in radians I'm trying to generate a sine signal and display its FFT in radians, but I meet several issues that I can't understand. Please find my script below: clear all; close all; clc; fs = 10*pi; ts = 1/fs; t = 0:ts:2*pi; sine = cos(10*t); plot(t/pi,sine) sinefft = fft(sine); subplot(2,1,1) plot(abs(sinefft)) subplot(2,1,2) plot(abs(fftshift(sinefft))) Results 1. Why on the first plot does the FFT show that the cosine has a frequency of 11 radians? 2. How to shift the FFT to achieve a frequency domain around 0? I will be thankful for your explanations. Maks 1. Why on the first plot does the FFT show that the cosine has a frequency of 11 radians? The array index number of the cosine peak is 11. The frequency is different. Matlab indexes from 1, so the FFT bin number will be 10 (as we have a 0 bin in FFTs). The frequency of that bin will be $$\frac{2 \pi (11 - 1) }{N } = \frac{2 \pi \cdot 10}{ 198} = \frac{10\pi}{99}$$ assuming your FFT length was 198. Note that this will be a normalized frequency between 0 and $2\pi$ (or, if you prefer, $-\pi$ and $+\pi$). 2. How to shift the FFT to achieve a frequency domain around 0? Your use of fftshift does this in the second plot. However, you need to set the $x$ axis by doing something like: N = length(sinefft); delta_omega = 2*pi/N; % rad/sample between FFT bins omega = -pi:delta_omega:(pi - delta_omega) plot(omega, abs(fftshift(sinefft))); Minor Correction I previously had this frequency variable as f; it's actually omega. How to calculate now what frequency in Hz this signal has? So the normalized angular frequency in radians per sample is omega. That ranges from $-\pi$ to $+\pi$ (modulo some end effects). The normalized frequency in "Hz" (cycles per sample) is f. This will be omega/2/pi. This ranges from -0.5 to +0.5. The frequency in terms of the sampling frequency will be f_true = omega / pi / 2 * fs (where fs is the sampling frequency). • Great! That clears up a lot. But I have one more question, because I'm new to the radian domain (it was always unclear to me, but I've made it my goal to understand the relation between w and f): How do I calculate now what frequency in Hz this signal has? Is it f = w/(2*pi)? – Maks Piechota May 5 '16 at 18:02 • @MaksPiechota: Check out my update; I made a mistake earlier, and have tried to address the question in your comment. – Peter K. May 5 '16 at 18:16 • Ok, many thanks for your explanation. But why delta_f = 2*pi/N? I see it works, but I don't understand why it is independent of the sampling frequency? – Maks Piechota May 5 '16 at 18:39 • @MaksPiechota Once you've sampled, we don't really care about the sampling frequency as far as DFTs or filtering are concerned. We just have data in an array, indexed by an integer. We do care about the sampling frequency when we want to relate it to the "analog" world, just not once we're working with numbers in an array. For the samples in the array, we only really care about normalized frequency (from 0 to 1, or -0.5 to +0.5, or in radians per sample from $-\pi$ to $+\pi$). – Peter K. May 5 '16 at 18:42 • Ok. So in the numerator we have $2\pi$ because this is the range that we want to have on our FFT x axis? – Maks Piechota May 5 '16 at 19:00
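For readers who want to replay the bookkeeping, here is a hedged NumPy equivalent (the thread itself uses MATLAB; with 0-based indexing the peak lands at bin 10 rather than array index 11):

```python
# NumPy re-check of the bin-to-frequency formula discussed above
# (illustration only).
import numpy as np

fs = 10 * np.pi
ts = 1 / fs
t = np.arange(0, 2 * np.pi, ts)     # N = 198 samples here
x = np.cos(10 * t)

X = np.abs(np.fft.fft(x))
N = len(x)
k = np.argmax(X[: N // 2])          # peak bin in the positive half -> 10

omega = 2 * np.pi * k / N           # normalized frequency, rad/sample
print(k, omega, omega * fs)         # omega * fs ~ 10 rad/s, as constructed
```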
2020-08-09 09:26:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6397761106491089, "perplexity": 1446.9160166668576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738523.63/warc/CC-MAIN-20200809073133-20200809103133-00360.warc.gz"}
https://chemistry.stackexchange.com/questions/89037/molarity-of-silveri-nitrate-solution-after-the-reaction-with-magnesium/89040
# Molarity of silver(I) nitrate solution after the reaction with magnesium [closed] $\pu{3 g}$ of $\ce{Mg}$ are placed in $\pu{500 mL}$ of $\pu{0.625 M}$ $\ce{AgNO3}$. When the reaction is complete, what is the molarity of the $\ce{AgNO3}$ solution? I'm thinking you need to use $M_1V_1=M_2V_2$, but I'm stuck on how to use it. ## closed as off-topic by Mithoron, airhuff, John Snow, Jon Custer, Tyberius Jan 24 '18 at 20:04 • Try to think of what the definitions of M and V are; what do the subscripts mean? Units are always a good indicator. – John Snow Jan 24 '18 at 6:42 Whenever possible, start with writing down the equation(s) for the chemical reactions taking place in the system, and don't use random formulas. Magnesium, as the more active metal, is going to reduce silver in the solution, thus lowering its molarity: $$\ce{\overset{0}{Mg} (s) + 2\overset{+1}{Ag}NO3 (aq) -> \overset{+2}{Mg}(NO3)2 (aq) + 2\overset{0}{Ag} (s)}$$ The final concentration $c_2(\ce{AgNO3})$ is pretty much defined by the silver(I) nitrate remaining when magnesium is depleted in the reaction: $$c_2(\ce{AgNO3}) = c_1(\ce{AgNO3}) - \Delta c(\ce{AgNO3})\tag{1}$$ $c_1(\ce{AgNO3})$ is the known initial concentration; $V$ is the volume; $\Delta c(\ce{AgNO3})$ is the change in concentration: $$\Delta c(\ce{AgNO3}) = \frac{\Delta n(\ce{AgNO3})}{V}\tag{2}$$ where the unknown amount of silver nitrate $\Delta n(\ce{AgNO3})$ can be found from the stoichiometry of the reaction (assuming the reaction is complete and irreversible) and the mass and molar mass of magnesium, $m(\ce{Mg})$ and $M(\ce{Mg})$, respectively: $$\Delta n(\ce{AgNO3}) = 2n(\ce{Mg}) = \frac{2m(\ce{Mg})}{M(\ce{Mg})}\tag{3}$$ At this point we can rewrite (1) using (2) and (3), since all the variables are known: $$c_2(\ce{AgNO3}) = c_1(\ce{AgNO3}) - \frac{2m(\ce{Mg})}{VM(\ce{Mg})} = \pu{0.625 M} - \frac{2\cdot\pu{3 g}}{\pu{0.500 L}\cdot\pu{24 g mol-1}} = \pu{0.125 M}$$
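A quick check of the arithmetic, including that magnesium really is the limiting reagent (illustrative; the molar mass is rounded to 24 g/mol as in the answer above):

```python
# Sanity check of the molarity calculation (illustration only).
m_mg, M_mg = 3.0, 24.0      # g, g/mol (rounded molar mass of Mg)
V, c1 = 0.500, 0.625        # L, mol/L

n_mg = m_mg / M_mg          # 0.125 mol Mg
dn_ag = 2 * n_mg            # 0.25 mol AgNO3 consumed (1 Mg : 2 Ag+)
assert dn_ag < c1 * V       # 0.25 < 0.3125 mol available, so Mg is depleted

c2 = c1 - dn_ag / V
print(c2)                   # 0.125 mol/L
```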
2019-10-17 05:19:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8299484848976135, "perplexity": 1080.8788137328665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672723.50/warc/CC-MAIN-20191017045957-20191017073457-00186.warc.gz"}
https://mathoverflow.net/questions/216113/sums-of-unique-squares
# Sums of unique squares Let $\mathbb{N}$ denote the positive integers and let $S = \{n^2: n\in \mathbb{N}\}$. For any positive integer $k$ we define $$\text{sq}(k) = |\{F\subseteq S: F\neq \emptyset, F\text{ is finite and } k = \sum_{n\in F} n\}|.$$ Questions: 1. Is the function $\text{sq}:\mathbb{N}\to \mathbb{N}\cup\{0\}$ surjective? 2. Is there $m_0>1$ such that $\text{sq}^{-1}(\{m\})$ is infinite for all $m\geq m_0$? • This is A033461 in the OEIS. – Charles Sep 3 '15 at 17:56 The generating function is $$\sum_{n\geq 0} \text{sq}(n) z^n = \prod_{k\geq 1} (1+z^{k^2}).$$ Using complex integration you can derive an asymptotic formula for $\text{sq}(n)$ from this. It involves quite some work, but the path is well described in Andrews, The theory of partitions, chapter 6. You will arrive at something like $\text{sq}(n)\sim e^{n^{1/3}}$ times some minor terms; hence $\text{sq}$ is ultimately increasing at a pretty fast rate, and in particular skips values, so $\text{sq}$ is not surjective. For $\text{sq}^{-1}$ you can either derive an asymptotic series or compute the proportion of all partitions not containing the summand $1$ to find that $\text{sq}$ is strictly increasing from some point onwards. Hence you will most likely obtain that $\text{sq}^{-1}(\{m\})$ is infinite if and only if $m=1$.
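The coefficients of the generating function are easy to generate by dynamic programming if you want to experiment. A sketch (this counts the empty set at $n = 0$, matching OEIS A033461, whereas the question's $\text{sq}$ excludes it):

```python
# Coefficients of prod_{k>=1} (1 + z^(k^2)) up to z^N, i.e. the number of
# partitions of n into distinct squares (illustration only).
N = 50
sq = [0] * (N + 1)
sq[0] = 1
k = 1
while k * k <= N:
    s = k * k
    for n in range(N, s - 1, -1):   # descending: each square used at most once
        sq[n] += sq[n - s]
    k += 1

print(sq[:30])
# e.g. sq[25] = 2, from 25 = 25 and 25 = 9 + 16
```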
2020-01-24 12:03:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9795972108840942, "perplexity": 107.55617658801408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250619323.41/warc/CC-MAIN-20200124100832-20200124125832-00534.warc.gz"}
https://brilliant.org/discussions/thread/evaluating-functions/
# Evaluating Functions

We evaluate a function at a particular value. If the function $$f(x) = x^3 + x + 1$$ is evaluated at $$x = -2$$, we denote this as $$f(-2)$$. In this case, that requires replacing every instance of $$x$$ in the function's definition with the value to be evaluated, like this: $f(-2) = (-2)^3 + (-2) + 1 = -8 - 2 + 1 = -9.$

Note by Arron Kau 2 years, 5 months ago
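For readers who like to check such evaluations mechanically, a two-liner in Python (my addition, not part of the note) reproduces the result:

```python
# Evaluate f(x) = x^3 + x + 1 at x = -2 and confirm the worked value of -9.
f = lambda x: x**3 + x + 1
print(f(-2))  # -> -9
```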
2017-01-19 13:05:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9732029438018799, "perplexity": 348.68308331525526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00044-ip-10-171-10-70.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-use-the-product-rule-to-find-the-derivative-of
# How do you use the Product Rule to find the derivative of ? Sep 26, 2015 See the explanation. #### Explanation: $h \left(x\right) = f \left(x\right) \cdot g \left(x\right)$ $h ' \left(x\right) = f ' \left(x\right) \cdot g \left(x\right) + f \left(x\right) \cdot g ' \left(x\right)$ Example: $h = \left({x}^{4} - 5 {x}^{2} + 1\right) {e}^{x}$ $f = {x}^{4} - 5 {x}^{2} + 1 \implies f ' = 4 {x}^{3} - 10 x$ $g = {e}^{x} \implies g ' = {e}^{x}$ $h ' = \left(4 {x}^{3} - 10 x\right) {e}^{x} + \left({x}^{4} - 5 {x}^{2} + 1\right) {e}^{x}$ $h ' = \left({x}^{4} + 4 {x}^{3} - 5 {x}^{2} - 10 x + 1\right) {e}^{x}$
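As a quick sanity check (my addition, assuming SymPy is available; not part of the original answer), the result can be verified symbolically:

```python
# Symbolic check of the worked product-rule example.
import sympy as sp

x = sp.symbols('x')
h = (x**4 - 5*x**2 + 1) * sp.exp(x)
claimed = (x**4 + 4*x**3 - 5*x**2 - 10*x + 1) * sp.exp(x)
print(sp.simplify(sp.diff(h, x) - claimed))  # -> 0, so the answer checks out
```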
2019-10-14 22:39:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.709352433681488, "perplexity": 1215.5713122600832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655554.2/warc/CC-MAIN-20191014223147-20191015010647-00462.warc.gz"}
https://www.physicsforums.com/threads/the-born-series-expansion.572266/
# The Born series expansion

1. Jan 30, 2012

### PineApple2

$$|\psi> = (1+G_0V+\ldots)|\psi_0>$$ Now when I want to move to the spatial representation, the textbook asserts I should get $$\psi(\vec{r})=\psi_0(\vec{r}) + \int dV' G_0(\vec{r},\vec{r'}) V(\vec{r'})\psi_0(\vec{r'})+\ldots$$ by operating with $<\vec{r}|$ from the left. However I don't know how to get the 2nd term (the integral). I tried to insert a complete basis like this: $$<\vec{r}|G_0V|\psi_0> = \int d^3r'<\vec{r}|G_0|\vec{r'}><\vec{r'}|V|\psi_0>$$ however I don't know how to get $V(\vec{r'})$ from the second bracketed term. Any help? By the way: is there a "nicer" way to write 'bra' and 'ket' in this forum? Last edited: Jan 30, 2012

2. Jan 30, 2012

### vanhees71

Introduce another unit operator in terms of the completeness relation for the position eigenbasis. Then you use $$\langle \vec{x}' |V(\hat{\vec{x}})|\vec{x}'' \rangle = V(\vec{x}'') \langle \vec{x}'| \vec{x}'' \rangle=V(\vec{x}') \delta^{(3)}(\vec{x}'-\vec{x}'').$$ Then one of the integrals from the completeness relations can be used to get rid of the $\delta$ distribution, and you arrive at Born's series in the position representation as given by your textbook.

3. Jan 30, 2012

### PineApple2

I see. And then $V(\vec{x'})$ can be taken out, since $|\vec{x'}>$ are its eigenstates, so it comes out as a scalar. Thanks!
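Spelled out (my paraphrase of vanhees71's hint, not a quote from the thread), inserting two position-space resolutions of the identity gives

$$\langle \vec{r}|G_0 V|\psi_0\rangle = \int d^3r' \int d^3r''\, \langle \vec{r}|G_0|\vec{r}\,'\rangle \langle \vec{r}\,'|V|\vec{r}\,''\rangle \langle \vec{r}\,''|\psi_0\rangle = \int d^3r'\, G_0(\vec{r},\vec{r}\,')\, V(\vec{r}\,')\, \psi_0(\vec{r}\,'),$$

where the $\delta^{(3)}(\vec{r}\,'-\vec{r}\,'')$ coming from the matrix element of $V$ has eaten the $d^3r''$ integral.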
2019-01-19 05:55:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9142501950263977, "perplexity": 596.9459118453952}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662690.13/warc/CC-MAIN-20190119054606-20190119080606-00416.warc.gz"}
https://www.x-mol.com/paper/1317595772810924032
Physical Review Letters ( IF 8.385 ) Pub Date: 2020-10-16, DOI: 10.1103/physrevlett.125.167201 Ganesh Pokharel; Hasitha Suriya Arachchige; Travis J. Williams; Andrew F. May; Randy S. Fishman; Gabriele Sala; Stuart Calder; Georg Ehlers; David S. Parker; Tao Hong; Andrew Wildes; David Mandrus; Joseph A. M. Paddison; Andrew D. Christianson We present a comprehensive neutron scattering study of the breathing pyrochlore magnet ${\mathrm{LiGaCr}}_{4}{\mathrm{S}}_{8}$. We observe an unconventional magnetic excitation spectrum with a separation of high- and low-energy spin dynamics in the correlated paramagnetic regime above a spin-freezing transition at 12(2) K. By fitting to magnetic diffuse-scattering data, we parametrize the spin Hamiltonian. We find that interactions are ferromagnetic within the large and small tetrahedra of the breathing pyrochlore lattice, but antiferromagnetic further-neighbor interactions are also essential to explain our data, in qualitative agreement with density-functional-theory predictions [Ghosh et al., npj Quantum Mater. 4, 63 (2019)]. We explain the origin of geometrical frustration in ${\mathrm{LiGaCr}}_{4}{\mathrm{S}}_{8}$ in terms of net antiferromagnetic coupling between emergent tetrahedral spin clusters that occupy a face-centered-cubic lattice. Our results provide insight into the emergence of frustration in the presence of strong further-neighbor couplings, and a blueprint for the determination of magnetic interactions in classical spin liquids.
2020-10-27 09:36:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4808623492717743, "perplexity": 6968.87340586855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107893845.76/warc/CC-MAIN-20201027082056-20201027112056-00225.warc.gz"}
http://theasueclub.com/tpserk/1ab364-graph-theory-and-competition-law
# graph theory and competition law

Problem: https://code.google.com/codejam/contest/635101/dashboard#s=p0
Solution: https://gist.github.com/micahstairs/ad5abc0f6b94f8eb6aa4
Thanks for watching! -Micah

We will discuss only a few important types of graphs in this chapter. Graph theory is the study of graphs. A basic graph of a 3-cycle. Handbook of Graph Theory, Second Edition. Graph theory is the name for the discipline concerned with the study of graphs: constructing, exploring, visualizing, and understanding them. Since dom(T) is the complement of the competition graph of the tournament formed by reversing the arcs of T, complementary results are obtained for the competition graph of a tournament. This paper briefly describes the problem of representing the competition graph as an intersection graph of boxes (k-dimensional rectangles representing ecological niches) in Euclidean k-space and then discusses the class of graphs which arise as competition graphs … Graph Theory 1 [Programming Competition Problems]. Vertices x and y dominate a tournament T if for all vertices z ≠ x, y, either x beats z or y beats z. Any scenario in which one wishes to examine the structure of a network of connected objects is potentially a problem for graph theory. In the domain of mathematics and computer science, graph theory is the study of graphs and concerns the relationships among edges and vertices. Some Definitions and Theorems. History of Graph Theory: graph theory started with the "Seven Bridges of Königsberg". The amount of flow on an edge cannot exceed the capacity of the edge. Elementary Graph Properties: Degrees and Degree Sequences. A stimulating excursion into pure mathematics aimed at "the mathematically traumatized," but great fun for mathematical hobbyists and serious mathematicians as well. Theorem 1. The competition number of a graph is $\min\{0,\ \epsilon(G) - |V(G)| + 2\}$. This theorem will be more clear when the application of linear algebra to competition graphs is discussed. In graph theory, a flow network (also known as a transportation network) is a directed graph where each edge has a capacity and each edge receives a flow. In working as an investigator and later consulting with them, it became clear that collecting and establishing pivot relationships could greatly help with reducing both n and t. It arose from a problem in genetics posed by Seymour Benzer.
The Fiftieth Southeastern International Conference on Combinatorics, Graph Theory, and Computing (SEICCGTC) will be held March 4-8, 2019 in the Student Union at Florida Atlantic University in Boca Raton, FL. If D is an acyclic digraph, its competition graph is an undirected graph with the same vertex set and an edge between vertices x and y if there is a vertex a so that (x, a) and (y, a) are both arcs of D. If G is any graph, G together with sufficiently many isolated vertices is a competition graph, and the competition number of G is the smallest number of such isolated vertices. Once the graph is populated with data, graph theory calculations make it easy to figure out how many degrees of separation there are between … For example, consider the graph in figure 1 and its resilience with respect to connectivity. While this is not a characterization, it does lead to considerable information about dom(T). However, people regularly lie in their daily lives [1], and such deceit begins as early as two years of age [2]! Although extensive behavioral research has examined deception in children and adults for nearly a century [3, 4], only recently have researchers begun to examine the neural basis of deceptive behaviors. The emergence of competition has forced regulatory authorities to abandon their traditional reliance on rate regulation in favor of a new approach known as access regulation. Theorem 1 essentially ended the discussion on competition graphs themselves, but also led … Solution – Let us suppose that such an arrangement is possible. Graph theory and graph modeling. It is a popular subject having applications in computer science, information technology, biosciences, mathematics, and linguistics to … Combinatorics - Graph theory: a graph G consists of a non-empty set of elements V(G) and a subset E(G) of the set of unordered pairs of distinct elements of V(G). Preface and Introduction to Graph Theory. Let dom(T) be the graph on the vertices of T with edges between pairs of vertices that dominate T. We show that dom(T) is either an odd cycle with possible pendant vertices or a forest of caterpillars. Honesty is a highly valued virtue in all cultures of the world. Subgraphs.
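To make the definition concrete, here is a small Python sketch (my illustration; it is not from any of the excerpted papers) that computes the competition graph of a digraph given as a list of arcs:

```python
# Minimal sketch: build the competition graph of a digraph D.
# Vertices u, v are adjacent iff they share a common out-neighbour,
# i.e. a "common prey" a with arcs (u, a) and (v, a) in D.
from itertools import combinations

def competition_graph(arcs):
    preds = {}                       # vertex -> set of its predecessors
    for u, a in arcs:
        preds.setdefault(a, set()).add(u)
    edges = set()
    for hunters in preds.values():   # all predators of one common prey
        for u, v in combinations(sorted(hunters), 2):
            edges.add((u, v))
    return edges

# Acyclic example: 1 and 2 both prey on 3, so {1, 2} becomes an edge.
print(competition_graph([(1, 3), (2, 3), (3, 4)]))  # -> {(1, 2)}
```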
Prove the Involution Law (Law 10) using basic definitions. \(\displaystyle A \cup (B - … Resilience in Graph Theory: Definition. While this is not a characterization, it does lead to considerable information about dom(T). Graphs are a mathematical representation of a network used to model pairwise relations between objects. In this work we present a simple and fast computational method, the visibility algorithm, that converts a time series into a graph. The constructed graph inherits several properties of the series in its structure. There are various types of graphs depending upon the number of vertices, number of edges, interconnectivity, and their overall structure. Introduction to Graph Theory, Richard J. Trudeau. … between competition and monopoly was, in a fundamental sense, inappropriate to begin with, and that the merging of the concepts in a theory of monopolistic competition, while representing a profound improvement over the simplicity of the older classification, and giving microeconomics a new vitality almost comparable to that … Thereby, periodic series convert into regular graphs, and random series do so into random graphs. Niche graphs and mixed pair graphs of tournaments. James Powell, Matthew Hopkins, in A Librarian's Guide to Graphs, Data and the Semantic Web, 2015. As Ochoa and Glick argued, in comparing competing theories, it is difficult to single out the variables that represent each theory, and one should begin by evaluating the most typical representation of each theory. The methods recur, however, and the way to learn them is to work on problems. Some History of Graph Theory and Its Branches is discussed. Is it possible to connect them with wires so that each telephone is connected with exactly 7 others? Graph theory, complex systems, network neutrality, open access, telecommunications, natural monopoly, ruinous competition, network economic effects, vertical exclusion, cable modem, digital subscriber lines, DSL, transaction costs. Journal of Competition Law & Economics, March 2012, Stanford Law and Economics Olin Working Paper No. … GRAPH THEORY. If D = (V, A) is a digraph, its competition graph (with loops) CG_l(D) has the vertex set V, and {u, v} ⊆ V is an edge of CG_l(D) if and only if there is a vertex w ∈ V such that (u, w), (v, w) ∈ A.
In CG_l(D), loops {v} are allowed only if v is the only predecessor of a certain vertex w ∈ V. Graph theory is the branch of mathematics concerned with networks of points connected by lines. The main campus is located three miles from the Atlantic Ocean, on an 850-acre site in Boca Raton, south of Palm Beach and north of Fort Lauderdale and Miami. It took a hundred years before the second important contribution, by Kirchhoff [139], was made for the analysis of electrical networks. Graph theory is the study of mathematical objects known as graphs, which consist of vertices (or nodes) connected by edges. Early in our research we were inspired by law enforcement linkboards like the one below. The competition graphs of oriented complete bipartite graphs. Since dom(T) is the complement of the competition graph of the tournament formed by reversing the arcs of T, … Prove the Identity Law (Law 4) with a membership table. The competition hypergraphs of doubly partial orders. Competition can be defined independently by using a food web for the ecosystem, and this notion of competition gives rise to a competition graph. This can be viewed as a graph in which telephones are represented using vertices and wires using the edges. Sudakov and Vu (2008) have proposed the most concrete definition of resilience in graph theory: if graph G has property P, what is the minimum number of edges that need to be removed so that G no longer has P? The subject had its beginnings in recreational math problems, but it has grown into a significant area of mathematical research, with applications in chemistry, social sciences, and computer science. Graphs, Multi-Graphs, Simple Graphs. Problem 1 – There are 25 telephones in Geeksland.
2021-07-29 14:11:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3010149896144867, "perplexity": 1275.1171937686977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153860.57/warc/CC-MAIN-20210729140649-20210729170649-00063.warc.gz"}
https://acm.sdut.edu.cn/onlinejudge2/index.php/Home/Index/problemdetail/pid/1827.html
### Maya Calendar Time Limit: 1000 ms Memory Limit: 10000 KiB #### Problem Description During his last sabbatical, professor M. A. Ya made a surprising discovery about the old Maya calendar. From an old knotted message, the professor discovered that the Maya civilization used a 365 day long year, called Haab, which had 19 months. Each of the first 18 months was 20 days long, and the names of the months were pop, no, zip, zotz, tzec, xul, yoxkin, mol, chen, yax, zac, ceh, mac, kankin, muan, pax, koyab, cumhu. Instead of having names, the days of the months were denoted by numbers from 0 to 19. The last month of Haab was called uayet and had 5 days denoted by numbers 0, 1, 2, 3, 4. The Maya believed that this month was unlucky: the court of justice was not in session, trade stopped, people did not even sweep the floor. For religious purposes, the Maya used another calendar in which the year was called Tzolkin (holy year). The year was divided into thirteen periods, each 20 days long. Each day was denoted by a pair consisting of a number and the name of the day. They used 20 names: imix, ik, akbal, kan, chicchan, cimi, manik, lamat, muluk, ok, chuen, eb, ben, ix, mem, cib, caban, eznab, canac, ahau and 13 numbers; both in cycles. Notice that each day has an unambiguous description. For example, at the beginning of the year the days were described as follows: 1 imix, 2 ik, 3 akbal, 4 kan, 5 chicchan, 6 cimi, 7 manik, 8 lamat, 9 muluk, 10 ok, 11 chuen, 12 eb, 13 ben, 1 ix, 2 mem, 3 cib, 4 caban, 5 eznab, 6 canac, 7 ahau, and again in the next period 8 imix, 9 ik, 10 akbal … Years (both Haab and Tzolkin) were denoted by numbers 0, 1, …, where the number 0 was the beginning of the world. Thus, the first day was: • Haab: 0. pop 0 • Tzolkin: 1 imix 0 Help professor M. A. Ya and write a program for him to convert dates from the Haab calendar to the Tzolkin calendar. #### Input The date in Haab is given in the following format: NumberOfTheDay. Month Year The first line of the input file contains the number of the input dates in the file. The next n lines contain n dates in the Haab calendar format, each on a separate line. The year is smaller than 5000. #### Output The date in Tzolkin should be in the following format: Number NameOfTheDay Year The first line of the output file contains the number of the output dates. In the next n lines, there are dates in the Tzolkin calendar format, in the order corresponding to the input dates. #### Sample Input 3 10. zac 0 0. pop 0 10. zac 1995 #### Sample Output 3 3 chuen 0 1 imix 0 9 cimi 2801 #### Source Central European Regional Contest
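A compact Python sketch of one possible solution (my outline, not an official reference solution) follows directly from the day-counting idea: both calendars start at day 0 of year 0, so it suffices to count total days.

```python
# Convert a Haab date to the Tzolkin date by counting the days elapsed
# since the beginning of the world.
HAAB = ["pop", "no", "zip", "zotz", "tzec", "xul", "yoxkin", "mol",
        "chen", "yax", "zac", "ceh", "mac", "kankin", "muan", "pax",
        "koyab", "cumhu", "uayet"]
TZOLKIN = ["imix", "ik", "akbal", "kan", "chicchan", "cimi", "manik",
           "lamat", "muluk", "ok", "chuen", "eb", "ben", "ix", "mem",
           "cib", "caban", "eznab", "canac", "ahau"]

def haab_to_tzolkin(day, month, year):
    total = year * 365 + HAAB.index(month) * 20 + day  # days since day 0
    return f"{total % 13 + 1} {TZOLKIN[total % 20]} {total // 260}"

print(haab_to_tzolkin(10, "zac", 0))     # -> 3 chuen 0
print(haab_to_tzolkin(0, "pop", 0))      # -> 1 imix 0
print(haab_to_tzolkin(10, "zac", 1995))  # -> 9 cimi 2801
```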
2019-12-08 13:23:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42362233996391296, "perplexity": 8685.367046273757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540510336.29/warc/CC-MAIN-20191208122818-20191208150818-00512.warc.gz"}
https://pos.sissa.it/256/101/
Volume 256 - 34th annual International Symposium on Lattice Field Theory (LATTICE2016) - Hadron Spectroscopy and Interactions Lambda-Nucleon and Sigma-Nucleon interactions from lattice QCD with physical masses H. Nemura,* S. Aoki, T. Doi, S. Gongyo, T. Hatsuda, Y. Ikeda, T. Inoue, T. Iritani, N. Ishii, T. Miyamoto, K. Murano, K. Sasaki *corresponding author Full text: pdf Pre-published on: 2017 March 02 Published on: 2017 March 24 Abstract We present our recent study on baryon-baryon ($BB$) interactions from lattice QCD with almost physical quark masses corresponding to $(m_\pi,m_K)\approx(146,525)$ MeV and large volume $(La)^4=(96a)^4\approx$ (8.1 fm)$^4$. In order to make better use of large scale computer resources, a large number of $BB$ interactions from $NN$ to $\Xi\Xi$ are calculated simultaneously. In this report, we focus on the strangeness $S=-1$ channels of the hyperon interactions by means of HAL QCD method. The coupled-channel HAL QCD method is briefly outlined. The snapshots of central and tensor potentials in $^1S_0$ and $^3S_1-^3D_1$ channels are presented for $\Lambda N$, $\Sigma N$ (both the isospin $I=1/2, 3/2$) and their coupled-channel systems. DOI: https://doi.org/10.22323/1.256.0101 Open Access
2018-06-20 15:27:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4664565622806549, "perplexity": 7259.894004555721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863650.42/warc/CC-MAIN-20180620143814-20180620163814-00304.warc.gz"}
http://ulsites.ul.ie/macsi/adaptive-numerical-methods-stochastic-systems-non-globally-lipschitz-coefficients
# Adaptive numerical methods for stochastic systems with non-globally Lipschitz coefficients The Department of Mathematics and Statistics at the University of Limerick invites you to a seminar by Dr Conall Kelly, University College Cork. Title: Adaptive numerical methods for stochastic systems with non-globally Lipschitz coefficients Abstract: We present numerical schemes with adaptive timestepping for stochastic differential equation (SDE) models with non-Lipschitz coefficients. Such SDEs may be stiff, possibly via a linear operator in the drift and/or from the perturbation of nonlinear structures under discretisation. They arise, for example, in a financial setting from certain volatility models, and in a biological setting from models of neuronal activity. We prove that a semi-implicit Euler-Maruyama scheme with an adaptive timestepping strategy of this kind is strongly convergent with order of convergence $1/2$ for equations with one-sided Lipschitz drift and globally Lipschitz diffusion, and with order of convergence $(1-\varepsilon)/2$ for equations with non-globally Lipschitz coefficients which together satisfy a Khasminskii-type monotone condition, where $\varepsilon\to 0$ as the number of available finite SDE moments increases without bound. Numerically, we compare adaptive methods to several strongly convergent fixed-step methods, including a fully drift-implicit method, explicit taming-type methods and a truncated method. Our results indicate that the adaptive semi-implicit method is well suited as a general-purpose solver. This is joint work with Prof. Gabriel Lord, Heriot-Watt University, Edinburgh, UK. This seminar will take place on Tuesday 1st May, at 2 p.m. in Room A2-002. If you have any questions regarding this seminar, please direct them to Iain Moyles (061 233726, iain.moyles@ul.ie). A full list of upcoming seminars can be found at http://www.ulsites.ul.ie/macsi/node/48011 Supported by Science Foundation Ireland funding, MACSI - the Mathematics Applications Consortium for Science and Industry (www.macsi.ul.ie), centred at the University of Limerick, is dedicated to the mathematical modelling and solution of problems which arise in science, engineering and industry in Ireland.
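To illustrate the flavour of adaptive timestepping (a deliberately crude sketch of the general idea, not the semi-implicit scheme analysed in the talk), consider an explicit Euler-Maruyama step whose size shrinks where the drift is large:

```python
# Crude adaptive Euler-Maruyama for dX = -X^3 dt + dW: take smaller steps
# where |drift| is large, i.e. where the cubic nonlinearity is "stiff".
import math
import random

def adaptive_em(x0, T, dt_max, dt_min):
    t, x = 0.0, x0
    while t < T:
        drift = -x**3
        dt = max(dt_min, min(dt_max, dt_max / (1.0 + abs(drift))))
        dt = min(dt, T - t)                  # do not overshoot the horizon
        dW = random.gauss(0.0, math.sqrt(dt))
        x += drift * dt + dW
        t += dt
    return x

random.seed(1)
print(adaptive_em(x0=2.0, T=1.0, dt_max=0.1, dt_min=1e-4))
```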
2018-07-21 11:41:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3782908618450165, "perplexity": 1382.6498871650726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592523.89/warc/CC-MAIN-20180721110117-20180721130117-00040.warc.gz"}
https://batmakumba.com/sample-certificate-of-discrepancy-in-name/sample-certificate-of-discrepancy-in-name-fabd/
# Sample Certificate Of Discrepancy In Name Fabd
2019-10-22 18:41:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8713794350624084, "perplexity": 4293.196915544807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987823061.83/warc/CC-MAIN-20191022182744-20191022210244-00011.warc.gz"}
https://nschoe.com/articles/2015-02-14-Lets-Have-Fun-With-WebRTC-Part-1.html
# Let's Have Fun With WebRTC! - Part 1

## Introduction

### So, what's the deal?

Today we are going to build an audio/video communication app with WebRTC, and the best part is that it will be used from the browser: no additional software, no additional plugins. WebRTC is a new technology that allows peer-to-peer (multimedia) communications. It is not intended for browsers in particular, but it is true that the simplest (read "easiest to use") APIs are implemented in Javascript, and thus we will use these.

### What does WebRTC bring to the table, and what can we do with it?

In this world today, all we can hear about is "The Cloud". While it has some advantages (data is synchronized across all our devices, it is backed up in case of failure, etc.) it does have some drawbacks (data is hosted by private companies that can analyze and sell it, these big servers are the target of criminal attacks, etc.). WebRTC is not a replacement for the Cloud, it has nothing to do with it, but it brings some interesting features:

- Fast: WebRTC uses UDP as the transport protocol, so it is intended to be quick. Besides, it is implemented using only native Javascript (and HTML5) APIs, so forget the load of Flash Player…
- Peer-to-peer: WebRTC is designed to be peer-to-peer, so data transits from your computer to your peer's. That's it. There is no central (and privately-owned) server that can intercept your data.
- Secured: WebRTC makes it mandatory for all payloads to be encrypted. You simply cannot initiate a WebRTC call without your data being encrypted. That is an important aspect that I really like.
- Media- and real-time-enforced: WebRTC is designed to be a real-time protocol (it uses RTP on top of UDP) and has been designed to handle audio/video streams of data. For instance, the browsers' implementations have built-in adaptive bitrate streaming: the quality and the compression of audio and video are altered on the run to compensate for variations in connection strength (rather than having your connection hang up, you simply get a decrease in quality until your communication gets back to its top quality).

#### What are we going to do anyway?

Together, we will build some interesting stuff. I'll explain how to do it along the way and I'll introduce the underlying concepts as we need/meet them.

- For this article, we will build a Skype-like application, right in the browser: no more ads, and no more spying. This is gonna be fun!
- For the next article, we will build a file-sharing application: this will allow you to share any file with any of your friends without uploading your file to a server. Pretty handy: you don't want the photos from your last night out to end up on 9gag, do you?

### Are we good to go?

Well… ready when you are! Just go grab a bottle of Coke, go buy some candies (sugar my friend, sugar…) and listen to some good music (may I suggest Led Zeppelin, Kashmir?) and then we are good to go!

## Let's do This: Skype-like Application in Browser!

### A Little Word on the Workflow

You can find the code for that article on my github repo. Here is what we are going to need to do to build our application:

- Acquire audio and/or video media streams: if we want to transmit video and audio, we first need to acquire them, and remember: we are in the web browser, and we don't want things like Flash or Java plugins.
- Set up a signaling channel: I will describe what it is, what it means and why we need this.
Remember for now that it allows the peers to negotiate the parameters of the connection.

- Peer discovery: We will use the ICE framework with a STUN server to discover our public IP and gather candidates. Same thing: I will explain what that means, but basically, it has to do with the fact that you don't know where your peer is on the Internet.
- Create and send an offer: WebRTC jargon here. In an offer, the caller lists a number of parameters for the connection.
- Receive the offer and create & send the answer: the other peer receives the previously created offer and does the same (except this is called an answer this time) and sends it.
- Receive the answer and start transmitting audio/video/raw data: This is where the real peer-to-peer starts. Previously, the answer and the offer were transmitted using the signalling channel.

These are basically the steps we will follow. We will learn together the underlying technologies used by WebRTC (not everything is new in WebRTC) and detail some aspects of the protocols.

Bonus point: writing a project-based paper would not be the same without some code examples. So along the articles I will post & comment on samples of code, but you will find the whole code here.

### Acquire Audio and Video Media Streams

Remember our workflow from earlier? The very first thing we need to do to transmit our pretty face & voice is to acquire the streams. Sounds easy, but until recently, you had to rely on either Adobe Flash or Java (or Microsoft Silverlight?). A new, neat Javascript API that integrates well with HTML5 and WebRTC helps us now: the MediaStream and MediaCapture API. Let's describe this API briefly first, so that we know what we are dealing with.

#### Description of the MediaStream API

This API describes a stream of video or audio data. It contains methods and callbacks to create the streams, manipulate them and use them with other APIs (including the WebRTC API of course). A MediaStream object can be empty or contain several MediaStreamTracks. A MediaStreamTrack is what you can expect from the name: a track, like an audio track from a CD; except that it can be either a video or an audio track (described by the kind attribute). What is really neat is that all MediaStreamTracks inside a MediaStream are synchronized: this proves very useful to keep video and voice synced. Quite naturally, a MediaStreamTrack can be manipulated and queried. We denote a number of interesting attributes:

- muted: self-explanatory. Unless there is some weirdness with WebRTC: a muted MediaStreamTrack doesn't stop transmitting data, it "just" transmits meaningless data; we'll come back to this later.
- remote: says if the track comes from (one of) our peer(s) or from us.

Each MediaStreamTrack contains one or more channels (for instance, an audio track might contain a channel for the left speaker, one channel for the right speaker, etc.). This is all just for documentation, because in our use case (web browser), we'll get video from the webcam and audio from the microphone. We won't be dealing with several microphones, so it will all be transparent for us and handled by WebRTC.

A MediaStream has an input and an output. The input depends on how you got the stream (local file, webcam, microphone, …) and the output will typically be an HTML5 <audio> or <video> for the remote streams (you display the remote stream in your web page) or a WebRTC RTCPeerConnection (you send your stream to your peer), though you can in theory record the streams to files.

#### How to Actually Capture That Stream Now?
Okay, so to capture the video from the webcam and/or the audio from the microphone, we will use the getUserMedia() function. It takes three parameters:

- A MediaStreamConstraints object, which instructs the browser what resolution, format, quality, etc. it must query
- A success callback, which will be called if the capture succeeds (the user granted permission and the device works correctly)
- A failure callback, called if the user denies access to their devices or if another problem occurs.

You will find the API documentation here if you want to know every settings combination you can make. Careful: do check the browsers' compatibility, because for example, as we speak, Firefox doesn't respect the resolution constraints for the webcam…

The idea is to create a set of MediaStreamConstraints to describe what we want to acquire, then call getUserMedia() with it. If it succeeds, the success callback will be called and the MediaStream will be passed as an argument. If it fails, the error callback is called. We will test that right now: acquire the video and audio and display it back in an HTML5 <video> element.

First, the (most basic) HTML5 document. It just contains the <video> element that will contain the stream from our webcam.

```html
<!doctype html>
<html>
<meta charset="UTF-8">
<title>Mirrored view of webcam</title>

<body>
    <h1>Mirrored View with getUserMedia()</h1>
    <video id="localVideo" autoplay style="border: 1px solid black;"></video>

    <script src="1_mirror.js"></script>
</body>
</html>
```

It is worth noting the autoplay attribute of the <video>. This is important: without it, the video won't start after being attached in Javascript. I once spent quite some time looking for a problem in my Javascript code, just because I had forgotten that attribute - you can omit it if you use localVideo.play() directly in Javascript.

Then comes the Javascript part, which uses the API:

```javascript
// Makes it portable across all browsers until the API is standardized and well-supported.
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

/* Build our constraints, here we keep it simple: yes to audio and video.
   But we might be more specific:
   var constraints = {
       mandatory: {
           height: {min: 720}
       },
       optional: [
           {frameRate: 60},
           {facingMode: "user"}
       ]
   };
   But then again: be careful, the browsers don't (yet?) follow the documentation
   on that one... that is a shame. Currently Chrome follows the standards much
   more closely than Firefox does. */
var constraints = {
    video: true,
    audio: true
};

var localVideo = document.getElementById('localVideo');

// This is the actual call, passing the constraints and the callbacks as parameters.
navigator.getUserMedia(constraints, captureOK, captureKO);

/* Success callback: when the capture succeeds, we create an "ObjectURL" from the
   stream and assign it as the source of the <video> element. This is where you need
   to do 'localVideo.play()' if you did not use the "autoplay" attribute in the
   HTML5 <video> element. */
function captureOK(stream) {
    console.log('capture was ok');
    localVideo.src = window.URL.createObjectURL(stream);
}

/* Error callback: in case the capture failed. Be careful of the message; under
   Firefox for instance, it can display PERMISSION_DENIED when it cannot satisfy
   the constraints. */
function captureKO(err) {
    console.log('Capture failed with error: ' + err);
    alert('Capture failed with error: ' + err);
}
```

And voilà! We have it now.
If you try this at home (don't forget that you can clone the github repo to get the code) you should see your pretty face in the web page. Talk or pass your finger over your microphone to verify that the sound is correctly captured. What we have here is already something, although it doesn't look like it. A couple of months / a year ago, just doing this was impossible: you had to rely on external plugins (Java, Flash), so this is a neat API. Another bonus point: this is compatible with Android devices, so your web app can be used from a mobile! The code is available here.

Chrome Users: If you are using Chrome or Chromium, you might want to read this. Chrome is quite picky when it comes to webcam access: it seems to deny access by default on non-HTTPS hosts. So it is very likely the previous (and following) code will fail for you. Don't panic. Just look for the red-crossed camera icon on the right of your URL bar, as presented here:

Click on it (it should read something like "Always block camera access" or similar) and check "Ask permission for webcam" (or similar). At that point, it will still not work: you have to refresh the page (F5 / CMD + R).

Now there is another behavior that I noticed with Chromium (& Chrome), which might happen following the previous step: when you deny webcam access once, Chromium tends to "remember" it. So you will have a pop-up saying the webcam access failed next time you try the application, even though it presents you with the Allow and Deny buttons. What you have to do for this to work again is the following:

- validate the pop-up telling you the webcam failed
- click on Allow to allow access to your webcam; at that point, nothing will happen, which is "normal"
- refresh the page; now you should be asked for the webcam again, click Allow and now the webcam should be working.

Don't hesitate to try that again if Chrome fails on these examples. I tested it: it works in both Firefox and Chrome.

Just for fun, and even if this deviates from the WebRTC topic, let me show you how to add a few lines of code to this basic application to let you grab the picture at the click of a button (or the stroke of a key) and manipulate it. That way you can have your custom "photo booth" application to grab pictures from your webcam, apply a couple of filters and save them to disk. Handy if you don't want to start a dedicated program for that (like Cheese on Linux). You can safely skip this section if you want to focus on WebRTC, jump here to pass.

This is actually very easy to do: we capture the video (we are not interested in audio here) with getUserMedia() and attach the stream to a <video> element. We will have an event on a button (or capture the enter key press) that will grab the image currently displayed in the <video> and draw it into a <canvas>. For simplicity's sake, we will use the right click > save as option to save the image to disk. We will draw the picture on the canvas thanks to the canvas API's drawImage() function. Let's take a look at how it's done!

First, the html:

```html
<!doctype html>
<html>
<meta charset="UTF-8">
<title>Simple Photo Booth</title>
<link rel="stylesheet" type="text/css" href="1_1_photo_booth.css">

<body>
    <h1>Simple Photo Booth</h1>
    <p>
        <em><strong>Instructions</strong></em>: <br>
        Press spacebar or click the capture button to record the picture.
        Then use right click > save as to save the picture on disk. <br>
        Easy !
    </p>

    <!-- The HTML video element, to display the webcam stream in real time -->
    <video id="localVideo" autoplay></video>

    <!-- The canvas on which we will display the picture -->
    <canvas id="canvas" width="640" height="480"></canvas>

    <!-- The button to capture the picture, spacebar can also be used -->
    <button id="captureBtn" disabled>capture!</button>

    <script src="1_1_photo_booth.js"></script>
</body>
</html>
```

Nothing special here: we have our <video> element, as seen previously, to display the webcam stream, plus a <canvas> element to receive the captured picture and a <button> to trigger the capture.

Note about canvas size: there is a small yet important thing to know about canvas sizes. There are width and height HTML attributes and width and height CSS properties: they are different. The HTML attributes define the size of the image (the data). We set it to 640x480, which means the image will have 640 columns and 480 rows of pixels. And as with every DOM element, you can define the size at which you want to display this element; here we chose 640 pixels for the width and 480 pixels for the height. Which is quite logical, but you can totally display a 640x480 image at a different size: if bigger, the image will be slightly blurred; if smaller, the image will look a bit "enhanced". Well, you know what I mean. But please remember that the HTML width and height serve a different purpose than the CSS properties (this is important, because if you don't set the right HTML values, you may display only a fraction of the captured image!).

Then comes the very minimalistic CSS:

```css
body {
    width: 1300px;
    margin: auto;
}

video, canvas {
    /* So that we can have the video and the image next to each other */
    display: inline-block;
    vertical-align: top;
    border: 1px solid black;

    /* Set the size this time, just for alignment */
    width: 640px;
    height: 480px;
}

button {
    display: block;
    width: 1000px;
    height: 200px;
    margin: auto;
    font-size: 4em;
    text-transform: uppercase;
}
```

Nothing particular here… The body part is just to center the page horizontally. Read the previous paragraph for the width and height (last reminder: you can omit the CSS width and height properties, the canvas will grow to match the correct size, but you have to set width and height in the html file).

And last but not least, the Javascript:

```javascript
// Same as previously: make the call portable across browsers
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

// We don't need audio for the photo booth
var constraints = {
    video: true,
    audio: false
};

var localVideo = document.getElementById('localVideo');
var canvas = document.getElementById('canvas');

// Get the canvas context from which we can extract data
var ctx = canvas.getContext("2d");

// Actual call to capture media
navigator.getUserMedia(constraints, captureOK, captureKO);

// Start our application
function captureOK(stream) {
    console.log('capture was ok');

    // Attach the stream
    localVideo.src = window.URL.createObjectURL(stream);

    // Register an event on the enter key
    document.addEventListener('keypress', function (evt) {
        // 13 : enter key
        if (13 == evt.keyCode) {
            recordImage();
        }
    });

    // Register the event on the button, then enable it
    document.getElementById('captureBtn').addEventListener('click', recordImage);
    document.getElementById('captureBtn').disabled = false;
}

function captureKO(err) {
    console.log('Capture failed with error: ' + err);
    alert('Capture failed with error: ' + err);
}

// Our function to record an image in the canvas
function recordImage(evt) {
    console.log("event: ");
    ctx.drawImage(localVideo, 0, 0);
}
```

The comments are pretty self-explanatory.
Back to the script: we get the canvas’ context, which allows us to draw on it. We use the drawImage function, which is the magic line here: it first takes the image to display, here our <video> element; then, if you only use three parameters, the second and third parameters are the destination coordinates in our <canvas>. To illustrate my previous comments about the width and the height: the x coordinate ranges from 0 to the HTML width (and you guessed it for y), independently of the CSS properties.

Back to Javascript: we simply define two event listeners, to catch the click on the <button> and the press of the enter key. And that’s it, we’re done. That closes the small detour I took to explain that photo booth app. The code is available here.

### Set up a Signalling Channel

Okay, so we have acquired our webcam stream, good. Now what? The goal is to transmit it to our peer, right? So we need to contact him; in WebRTC terminology, you say you call him. But before that, we need to tell our peer’s client a set of parameters: which audio and video codecs we are going to use to encode our stream, for instance; he certainly needs to know that in order to be able to understand it. But how do we do that? How do we contact our peer? We are not yet in a conversation with him! That is where the signalling channel comes in. As its name implies, it is used to signal to the other peer that we want to call him, what kind of data we will transmit, which codecs are used, our IP address, etc.

#### Okay, so How do we do it? What Does the Doc Say About That?

Well, that’s one interesting point: the documentation leaves it to us. Really. When I first read books and articles about WebRTC, they said that “the signalling channel is up to you”, but what does that mean concretely? It is only after playing with it that I understood: it is really up to us. We could just shout out the parameters to our peer and he would write them down manually and feed the Javascript object by hand; you can send them via email, you can write them on paper, etc. All of this is no joke: it works. But this is dumb, as you might guess: your peer would have to type them manually, and believe me, once you have seen them, you won’t want to write them by hand… So we do need some kind of automation, a much cleverer solution than email. What we need is a way to transmit text data and bind events from the Javascript, so that we can automate this. There are several candidates: you can use HTTP with AJAX or long polling. What I like to use, though, is WebSockets. The WebSockets API is very well supported and easy to use from the browser, and you will see that the WebRTC API is very similar to the WebSockets API (it was designed to be almost transparent). For the backend part, you can use a PHP implementation, or a C one, or anything you want, really. For simplicity’s sake, and because I really love it, I will use Haskell.

#### Wait whut?! Aren’t You Talking About a Central Server?

Well… yes. I admit. But I did not lie to you: WebRTC is peer-to-peer, the actual call will be peer-to-peer. But you have to understand that some parameters need to be transmitted, and you need a support for that; a bit like you need to give someone your phone number face to face the first time, before you can be called. And on the Internet, the simplest way to do this is to host a central server with a well-known IP address (or better: a domain name).
So what we will do is connect to that signalling channel, transmit the parameters needed to initiate the call, and as soon as we can, we will make the (peer-to-peer, secured, etc.) WebRTC call. At that point, we can purely and simply close our connection to the signalling channel.

While the documentation leaves it to us to implement the signalling channel of our choice, it does impose the format for transmitting the parameters. JSON was a candidate, but SDP is what was retained. At least for now (the documentation is evolving quite rapidly). Ready? Let’s do this!

#### Some Words about SDP

In your WebRTC applications, you are very likely to have to examine SDP in your browser console, so I am going to describe it quickly for you. Oh, by the way, SDP means “Session Description Protocol”, so it has literally been designed for this. SDP files are a list of lines that take the form k=<value>. The key k is a single character, the <value> is a UTF-8 string. There cannot be a space on either side of the =. Here is a list of common keys you will encounter in your SDP debugging sessions (yay!):

| k= | example | explanation |
|----|---------|-------------|
| v= | 0 | version number, for now it must be 0 |
| o= | 579453792423642384 2 IN IP4 127.0.0.1 | session_id (uniquely identifies the session), session_version (counts the number of exchanges between the two peers), IN (the network is the INternet), IP4 (IP version 4), 127.0.0.1 (IP address of the sender, us) |
| s= | - | session name: it is mandatory, with at least one UTF-8 character. You will typically see a dash here |
| t= | 0 0 | start and end time of the session, 0 means the session is valid forever |
| a= | | the a attribute is the most common key, it is the “generic” key. It is application-specific: this is where we pass attributes |
| a= | a=group:BUNDLE audio video | means that we will transmit both audio and video data |
| a= | a=rtcp:34069 IN IP4 129.56.34.223 | specifies the IP address and port on which RTCP will be used |
| a= | a=candidate:4022866446 1 udp 2113937151 192.168.0.197 36768 typ host generation 0 | this line you will see often. It describes a candidate; we will see what that means later, but remember that this is a network interface from which data can be transmitted (eth0, wlan0, etc.) + a protocol (here you can see “udp”) + a port number |
| a= | a=rtpmap:100 VP8/90000 | the “rtpmap” parameter gives information about the payload. It says that the payload is of type 100 (see the RTP payloads list), that the codec for that payload is VP8 (thus this is video payload) and that 90000 is the clock rate |
| a= | a=sendrecv | specifies that we will both send and receive. This line is useful because it allows you to check whether your negotiation happened correctly or not |

There are many other keys, you can check them here.

#### Alright, That Was Boring. What Now?

Now we need to think. We need a strategy to map the peers to each other on our channel. Let’s go back to how we will use our Skype-like application:

- Alice wants to call Bob.
- She connects to the signalling channel server.
- Bob does the same.

Now, how can Alice give Bob the parameters? Because there might be lots of other users connected to that signalling server, and you don’t want to talk to the wrong person. This is not WebRTC now, it’s pure thinking: a strategy. I am going to show you one implementation. It will be very simple, not scalable and not secure. There, I said it. My purpose is not to show you how to build a good signalling channel: this is an entirely different topic. There exist several (more or less good) solutions for signalling, and you can perfectly well use them: it is (almost) application-independent.
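Before looking at my implementation, here is the overall exchange we are aiming for, summed up as a plain-text sketch (the names and the exact message shapes are illustrative; the details follow right below):

```
Alice                      signalling server                      Bob
  |---- {"nickname": ...} ------>|                                 |
  |                              |<-------- {"nickname": ...} -----|
  |<------------ userlist ------ | ------ userlist --------------->|
  |---- {target: Bob, sdp} ----->|                                 |
  |                              |------ (relayed offer) --------->|
  |                              |<----- {target: Alice, sdp} -----|
  |<--- (relayed answer) --------|                                 |
  |                                                                |
  |<=========== peer-to-peer, encrypted WebRTC call ==============>|
```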
Here I present a simple WebSocket-based signalling server that I implemented in Haskell, thanks to Jasper Van der Jeugt’s WebSockets Haskell library. It works as follows:

- it maintains a list of currently connected users
- when you go to the server’s home page (which I will show you in a short moment), you are asked for a nickname; once you have entered it, it is sent to the server
- when the server receives a new nickname, it sends every connected user (you included) the list of all connected users (see: not scalable :-))
- you can then click on a user’s name to initiate the call. At that point, we are back at WebRTC programming.
- then, when your browser generates the SDP, the candidates (I’m explaining this in a minute) and any other piece of textual data, you will send them to the server, JSON-formatted, with a field indicating the nickname of the peer you are trying to reach.
- the server will simply relay that textual data to the intended peer, and after a couple of exchanges (that we will describe, of course), the call will either fail or happen. At that point, we will close the connection to the server, for two reasons:
  - to show you that the signalling channel is indeed only used during signalling, and that your connection is truly peer-to-peer
  - (lazy alert!) it will prevent us from implementing a system to check whether a user is currently in a call and thus cannot be called (though it would be fairly easy).

I wrote the signalling server code fairly quickly. The goal was to have something working so we could focus on the WebRTC part. Don’t judge me on that code :-) I am including the code here for the curious, but this is not the core of this post. First, the list of imports:

```haskell
{-
   This example is *greatly* inspired from jaspervdj's github example
   of his WebSockets Haskell package:
   https://github.com/jaspervdj/websockets/blob/master/example/server.lhs
   Thanks to him for providing understandable documentation and example.
-}

{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)
import qualified Network.WebSockets as WS
import Control.Concurrent (MVar, newMVar, modifyMVar_, modifyMVar, readMVar)
import Data.Aeson
import Control.Monad (mzero, forever, forM_)
import Control.Applicative ((<$>), (<*>))
import Control.Monad.IO.Class (liftIO)
import Control.Exception (finally)
```

Nothing fancy here. I chose the Network.WebSockets library because it is well documented and I had previously worked with it. We use Data.Aeson to deal with JSON data, but really, it is just a matter of parsing: the server itself will be content-agnostic. Again, this is a simple version.
Then we define some types, mainly for signature clarity:

```haskell
-- For simplicity, a client is just his username plus his connection
type Client = (Text, WS.Connection)

-- The server will simply keep a list of connected users
type ServerState = [Client]

-- Simple type to define a nickname
data Nickname = Nickname Text deriving (Show, Eq)

-- Define JSON instances for Nickname
instance ToJSON Nickname where
    toJSON (Nickname nick) = object ["nickname" .= nick]

instance FromJSON Nickname where
    parseJSON (Object n) = Nickname <$> n .: "nickname"
    parseJSON _          = mzero

-- Simple type that defines the user list, to send to the user
data UserList = UserList [Text] deriving (Show, Eq)

instance ToJSON UserList where
    toJSON (UserList xs) = object ["userlist" .= xs]

instance FromJSON UserList where
    parseJSON (Object o) = UserList <$> o .: "userlist"
    parseJSON _          = mzero

-- Simple type to define an SDP message; used only to provide a JSON instance
-- carrying the user it should be sent to
data SDP = SDP
    { sdp    :: Text
    , target :: Text
    } deriving (Show, Eq)

instance ToJSON SDP where
    toJSON (SDP s t) = object ["sdp" .= s, "target" .= t]

instance FromJSON SDP where
    parseJSON (Object o) = SDP <$> o .: "sdp" <*> o .: "target"
    parseJSON _          = mzero
```

The Client type is just for the server to keep track of the list of connected users. The ServerState is just the list of Clients. In a more detailed implementation, you could be tempted to keep more information about a particular user, but I suggest you don’t: WebRTC is about privacy (too) and you should really keep it light. The Nickname type is simply here to provide a JSON instance: when the client connects to the signalling server, the first thing he does is send his nickname, so he can be added to the contact list. The UserList data type is built the same way: when a new user connects, after sending his nickname, he receives the whole contact list; this type is here for that. Lastly, we will wrap (client-side) all SDP contents within a JSON instance, with a “target” and an “sdp” field. The “target” is for the server to know whom to relay the data to.

We then include some handy functions, mostly for clarity:

```haskell
-- Initially, the server is empty
emptyServerState :: ServerState
emptyServerState = []

-- Get the number of connected users
numUsers :: ServerState -> Int
numUsers = length

-- Check if a user is connected
isUserConnected :: Client -> ServerState -> Bool
isUserConnected client = any ((== fst client) . fst)

-- Return the connection of the user whose nickname is the parameter
getConnection :: Text -> ServerState -> Maybe WS.Connection
getConnection _ [] = Nothing
getConnection nick (x:xs)
    | nick == fst x = Just (snd x)
    | otherwise     = getConnection nick xs

-- Add a user if he is not already connected
addUser :: Client -> ServerState -> Either ServerState ServerState
addUser client state
    | isUserConnected client state = Left state
    | otherwise                    = Right $ client : state

-- Remove a user from the server
removeUser :: Client -> ServerState -> ServerState
removeUser client = filter ((/= fst client) . fst)

-- Our main function: create a new, empty server state and spawn the websocket server
main :: IO ()
main = do
    putStrLn "===== .: WebSocket basic signalling server for WebRTC :. ====="
    state <- newMVar emptyServerState
    WS.runServer "0.0.0.0" 4444 $ application state
```

The main function simply creates a new, empty state (an empty list, really), which it wraps in an MVar, so that there are no concurrent modifications. It then spawns the WebSockets server.
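Concretely, the JSON payloads these Aeson instances accept or produce look like this on the wire (the values are of course illustrative):

```
{"nickname": "alice"}                               <- sent by a client right after connecting
{"userlist": ["alice", "bob"]}                      <- pushed by the server to every client
{"sdp": "<stringified payload>", "target": "bob"}   <- relayed as-is to the target peer
```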
Now, the application function, which acts on incoming connections:

```haskell
-- Application that will do the signalling
application :: MVar ServerState -> WS.ServerApp
application state pending = do
    -- Accept the connection
    conn <- WS.acceptRequest pending
    putStrLn "New connection!"

    -- We expect the client to send his nickname as a first message
    msg <- WS.receiveData conn -- the JSON payload, as a lazy ByteString

    case decode msg of
        Just (Nickname nick) -> flip finally (disconnect (nick, conn)) $ do
            liftIO $ modifyMVar_ state $ \s ->
                case addUser (nick, conn) s of
                    Right newState -> do
                        putStrLn $ "New user added, now "
                                   ++ (show . numUsers $ newState) ++ " connected."
                        return newState
                    Left oldState -> do
                        putStrLn "User is already connected!"
                        return oldState
            handleUser conn state nick
        _ -> do
            putStrLn "Wrong data received."
            WS.sendClose conn ("" :: Text)
  where
    disconnect c = do
        putStrLn $ "Disconnecting user " ++ show (fst c)
        liftIO $ modifyMVar_ state $ \s -> do
            let newState = removeUser c s
            pushUserList newState
            return newState
```

So here, we accept all incoming connections. This is not very secure; you could check the incoming peer and do other security checks, but here this is sufficient. Our server expects the client to send his nickname in a JSON-formatted payload right when the connection opens. This is what receiveData and case decode msg of are about. If we receive the nickname, we add it to the state (with the MVar mechanism). We then pass control to the handleUser function, which handles a client/connection until it fails or closes. Let’s see what handleUser does, together with the small pushUserList helper it relies on (a few lines that simply broadcast the user list to everybody):

```haskell
-- Send the (JSON-formatted) list of connected users to everybody
pushUserList :: ServerState -> IO ()
pushUserList users = do
    let list = encode (UserList (map fst users))
    forM_ users $ \(_, conn) -> WS.sendTextData conn list

-- Process incoming messages from the user
handleUser :: WS.Connection -> MVar ServerState -> Text -> IO ()
handleUser conn state nick = do
    -- Upon new connection, send the new list to every connected user
    users <- liftIO $ readMVar state
    pushUserList users

    -- Then, process incoming messages from that client
    forever $ do
        msg <- WS.receiveData conn
        let json = decode msg :: Maybe SDP
        case json of
            Just s -> do
                let who = target s
                users' <- liftIO $ readMVar state
                let c = getConnection who users'
                case c of
                    -- If we found the user, relay the SDP
                    Just co -> WS.sendTextData co (sdp s)
                    -- If we did not find the user, close the connection
                    -- (very poor error handling)
                    Nothing -> WS.sendClose conn ("" :: Text)
            Nothing -> putStrLn "Did not get SDP"
```

As we can see from the first two lines, the very first thing it does is send the user the contact list, so that you know who is connected, and thus whom you can call. It then loops on receiving messages. In our (simple) implementation, we don’t allow any other payload to transit between the peers, only the SDP thing. The goal is to keep the connection to the signalling server as short as possible: clients connect, they negotiate the call parameters and initiate the call. Then they disconnect from the server and everything happens peer-to-peer, encrypted.

Really, all that code about the signalling server can be summed up in this piece of code:

```haskell
case c of
    -- If we found the user, relay the SDP
    Just co -> WS.sendTextData co (sdp s)
```

All the signalling server does is relay the data from peer A to peer B, that data being SDP parameters. All the other pieces of code around that are just establishing connections and finding a way to map the peers. This is all.

Note: in every sensible signalling server implementation, all communications should be encrypted.
In our case, we should use https for serving the web page, and use the encrypted wss WebSockets protocol rather than ws (that last ‘s’ stands for “secure”, in case you wondered). Again, here I did not bother using encryption, because this is simply a test server.

### Peer Discovery

Alright, back to “real” WebRTC stuff now. Let’s take a look at what we have:

- We have acquired our media stream: the (live) video from the webcam.
- We have set up (or chosen, if you don’t want to code it yourself) a signalling server to negotiate parameters with our peer.

Then we have a problem: we have a way of exchanging information with our peer, but we don’t know where he is. And that is the major problem with peer-to-peer. It is easy to reach a central server thanks to static IP addresses and DNS. But a peer’s IP address is likely to change over time, and you don’t know on which port he is listening. That is easily solved: we went to all the trouble of setting up a signalling server precisely so the peers could exchange this kind of information, so just ask him and he will tell you. Yes, that is (almost) true. Now consider this: you (probably) don’t know your own IP address, so you can’t (yet) tell the other peer. Most of the time, you will be behind a router. So you have a local IP address: your Internet packets go to your router, and only your router is visible from the outside (only it has a public IP address).

#### STUN server

This is where a STUN server comes in useful. STUN means “Session Traversal Utilities for NAT”. Before you start telling me that it is yet another central server (because it is, indeed), let me tell you how minimal (yet crucial) a role it has. A STUN server has only one purpose: tell you your IP address. If you ever had to Google “what is my ip address” and click on the first link so that the website can tell you your IP address, well, you’ve done the manual equivalent of a STUN request.

It works as follows: when you issue a STUN request, it travels from your computer to your router. Your router then updates its NAT table and forwards the request to the STUN server you requested. On its side, the STUN server sees an incoming connection; it has access to the sender’s (your) IP address and port, which it simply echoes back. On the way back, the response reaches your router, which looks at its NAT table and “remembers” that the incoming packet should be routed to your computer (and not your sister’s computer, your smartphone or your printer, which are all on the same network). We now have a path to the exterior, and we know it.

I have good news: STUN servers are so lightweight and consume so few resources that Google keeps a public STUN server that you can use for your applications. The address is: stun:stun.l.google.com:19302. Let’s see now how we can use ICE in Javascript.

#### The ICE framework

ICE stands for “Interactive Connectivity Establishment”. It is a framework that helps establish a peer-to-peer connection. You actually have several ways to contact your peer: if he is on the same network, the data packets should transit through that network directly; it would be useless to go out on the Internet and come back. Besides, you might have several network interfaces: two wired connections (eth0 and eth1) and one wireless connection (wlan0), for instance. Which one would you connect with, then? This is what the “ICE Agent” is here for. And this is the notion of “ICE candidates”: formally, a “candidate” is an (IP address, port number) pair.
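To make this a bit more concrete before we dissect the API, here is a minimal, hedged sketch (browser prefixes omitted, and using Google’s public STUN server mentioned above) of a connection logging candidates as the ICE agent finds them:

```javascript
// Minimal sketch: create a connection configured with Google's public STUN
// server, and watch the candidates as the ICE agent discovers them
var pc = new RTCPeerConnection ({ "iceServers": [{ "url": "stun:stun.l.google.com:19302" }] });

pc.onicecandidate = function (evt) {
    if (evt.candidate) {
        // Each candidate is essentially an (IP address, port, protocol) proposal
        console.log ("candidate found: " + evt.candidate.candidate);
    }
};
// Note: nothing will be logged until the negotiation is actually started
// (with setLocalDescription(), covered below): the agent is lazy.
```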
The Javascript ICE agent is well implemented and very easy to use: it generates an onicecandidate event every time it discovers a candidate. This is very neat: just listen for that event and you can send the candidate to your peer.

#### The RTCPeerConnection API

Ah! Now we are getting somewhere. This API is the core of WebRTC: it is the real connection to the outside. This is the object that represents your connection to your peer, and it contains the ICE Agent that we just saw. Let’s describe a few of its aspects. Attention: this part is important, as here lies the majority of your code. We are dealing directly with WebRTC here, so pay attention and get ready: this is getting interesting.

It is instantiated with: var pc = new RTCPeerConnection();. You can pass a parameter containing the addresses of your STUN and TURN servers (scroll down a few lines for a quick word about TURN servers). Some useful properties of the RTCPeerConnection object:

- iceConnectionState: tells you how your connection is doing at the moment. Whenever it changes, an iceconnectionstatechange event is supposed to be emitted. The value is an RTCIceConnectionState and can be:
  - new: just created, waiting for candidates to become available
  - checking: the agent has at least one candidate, but still no valid connection
  - connected: the agent has found a valid connection and established it. It will continue checking for better candidates.
  - completed: all candidates have been checked and the best one is currently in use; the connection is established and valid
  - failed: the agent was not able to find a valid candidate
  - disconnected: when a connection is established and used, periodic “liveness” checks are issued to monitor the network. The status is disconnected when one of these checks fails. It can be temporary: sometimes the network is simply messed up for a second.
  - closed: the ICE agent has shut down
- iceGatheringState: tells you where the agent is with respect to candidate gathering:
  - new: just created, nothing has been done yet
  - gathering: the agent is currently gathering candidates
  - complete: the agent has finished gathering all candidates

That was about ICE. You can (but you don’t have to) monitor these events. They give you information on what’s going on. Note that when I say “is supposed to”, it means that the documentation says so, but some browsers don’t do it. Be careful to test individual events and functions before relying on them.

Then come some events for building your WebRTC application:

- onaddstream: called when your remote peer adds a stream (audio, video, or both) to the connection. You have to listen for this event if you want to be able to display your peer’s webcam.
- ondatachannel: same thing, but when your peer adds a data channel. Remember that WebRTC is about transmitting real-time video and audio streams (the webcam in our case)? Well, you can also transmit any raw data you want; this is done through an RTCDataChannel.
- onicecandidate: called when your ICE agent finds a new candidate. You have to listen to that event so that you can grab these candidates and send them to your peer.

These were the main events to listen for. Check the documentation for the others. And now the methods; you will have to call them to build your application:

- createOffer(): the caller calls this function. It is the starting point of any WebRTC application. It generates the very first SDP.
- createAnswer(): well… that’s the same, but the callee calls this function.
- setLocalDescription(): called by both peers, when you generate your description (with one of the two previous methods).
- setRemoteDescription(): again called by both peers, when you receive your peer’s description.
- addIceCandidate(): this is the function you call when you receive a candidate from your peer, to make that candidate available to your ICE agent.
- addStream(): when your multimedia stream becomes available (you accepted the webcam use, in our case), you have to add it to your RTCPeerConnection so that it triggers the onaddstream event on your peer’s side (and so he can access and display it).

Important: this is quite honestly hidden too deep in the documentation, but I’ll say it here: the whole ICE process (gathering candidates, etc.) will not begin before you have called setLocalDescription(). You might spend an awful lot of time trying to debug your WebRTC application with console.log([insert insulting debugging messages here]), and the answer might just be that: setLocalDescription() is not called (at the right time). So keep that in mind.

### Some Code Now, Please

Is this the moment? The one? Yes it is! Enjoy!

#### The Main Page

It is basically the same page as the previous one; we will only add a section to display the two videos (ours and our peer’s). And for convenience, we will add a bit of Javascript to hide the signalling stuff once we have established the connection. It looks like this:

```html
<!doctype html>
<html>
<meta charset="UTF-8">
<title>Signalling Server</title>
<link rel="stylesheet" type="text/css" href="2_skype-like.css">
<body>
    <!-- This section is visible only during signalling -->
    <section id="signalling">
        <h1>Signalling Server</h1>
        <p>
            <strong id="nickname"></strong>, you are now
            <span id="status"><strong class="red">disconnected!</strong></span>
            <br>
            <strong id="calling_status"></strong>
        </p>
        <h2>List of currently connected users</h2>
        <ul id="userlist">
        </ul>
    </section>

    <!-- This section is visible only during a call -->
    <section id="oncall">
        <div class="video_frame">
            <!-- Be careful to use 'autoplay'! -->
            <video id="localVideo" autoplay></video>
            <br>
            <span>Local Video</span>
        </div>
        <div class="video_frame">
            <!-- Be careful to use 'autoplay'! -->
            <video id="remoteVideo" autoplay></video>
            <br>
            <span>Remote Video</span>
        </div>
    </section>

    <script src="2_skype-like.js"></script>
</body>
</html>
```

So, nothing too fancy here. We have the same thing as previously to display the list of currently connected users, and the new part is the block with id oncall. We included two <video> tags to display both our video and our peer’s, Skype-like! Please note that the <video> tags have the autoplay attribute. This prevents us from forgetting to call video.play() in Javascript, which can cause headaches. Some web devs are against HTML videos autoplaying, and I’m among them: how annoying it is to visit a page and have some video start playing somewhere. But in this case, I believe it makes sense.

I’ll include the css below, but there’s very little in it:

```css
.red {
    color: red;
}

.green {
    color: green;
}

.video_frame {
    text-align: center;
    display: inline-block;
    vertical-align: top;
}

video {
    border: 1px solid black;
}

#oncall {
    display: none;
}

#calling_status {
    font-size: 1.2em;
    color: blue;
}
```

Okay, the real, interesting part is the Javascript. We wrap everything in an event handler that waits for the DOM to be loaded. It roughly corresponds to jQuery’s main function, although the latter performs more checks. I found it a bit overkill to depend on jQuery for our simple application, so I won’t use it.
document.addEventListener("DOMContentLoaded", function(event) { // Everything (Javascript-related) will be placed here, from now on } So we begin by defining some variables that we will use all along: var nickname = prompt("Enter a name for the contact list"); if (nickname === null || nickname === "") { alert ("You must enter a name, to be identified on the server"); return; } // Will hold our peer's nickname var peer = null; // Here we are using Google's public STUN server, and no TURN server var ice = { "iceServers": [ }; var pc = null; // This variable will hold the RTCPeerConnection document.getElementById('nickname').innerHTML = nickname; var constraints = { video: "true", audio: "true" }; // Prevent us to receive another call or make another call while already in one var isInCall = false; // Specify if we have to create offers or answers var isCaller = false; var receivedOffer = null; The nickname is important, because this is the name that will be sent to the server, to maintain a contact list. Warning: as you can see here, I only check if the user actually entered a nickname, but I don’t check for some kind of format validity nor unicity in the server. It is obvious that you should perform both of these checks in any serious application. We then define a configuration variable, ice, which holds the host information about the STUN server that we will use. As you can see this is Google’s public STUN server. This is fine for tests. But I advise that you use one of your own for production (besides, there are several easy-to-implement solutions; a STUN server is really not much). The variable pc is our entry point to manipulate WebRTC. This is our socket/handle to send and talk to the other peer. And then we define a set of contraints for requesting the media, here, I’ll keep it simple by using both audio and video (again, Skype-like). You can obviously play a bit with these settings, but keep in mind that Firefox and Chrome treat constraints diffently. By the time I am writing this article, Chrome has been updated and now complies with the documentation and will try to honour your constraints, but Firefox still doesn’t. To be clear, Firefox is okay with true and false, but you can’t chose the video’s width and height. Then isCaller will specify if we are the one who initiated the call or if we received the call. This has some importance in the the order in which we call functions. At last, receivedOffer will contain the offer sent by the caller (this is used when you are the callee), you’ll see in a minute why. The next piece of code is temporary (hopefully). As of now, WebRTC is still a new technology and browser manufacturers still use prefixed functions rather than the names given by the documentation. So the next few lines are here to make the calls portable accross browsers: // For portability's sake navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia; window.RTCPeerConnection = window.RTCPeerConnection || window.mozRTCPeerConnection || window.webkitRTCPeerConnection; window.RTCSessionDescription = window.RTCSessionDescription || window.mozRTCSessionDescription || window.webkitRTCSessionDescription; window.RTCIceCandidate = window.RTCIceCandidate || window.mozRTCIceCandidate || window.webkitRTCIceCandidate; • getUserMedia is to request media (video and sound) to the user, we’ve talked about this earlier. • RTCPeerConnection is to create the WebRTC connection with the other peer. 
- RTCSessionDescription is to handle the remote and local session descriptions; we’ll see it in use in a moment.
- RTCIceCandidate is used when dealing with the ICE Agent, to wrap the candidates we exchange with our peer.

From now on, rather than explaining the code in the order it appears in the file, I’ll describe it the way I built it, following the logic. I find it makes much more sense and helps to focus. At the end of the article, I’ll provide a link to the code in the repo, so you don’t have to worry about the order. So, first, let’s open a WebSockets connection to our signalling server:

```javascript
// Open a connection to our server
var socket = new WebSocket('ws://192.168.1.35:4444');

// Display an error message if the socket fails to open
socket.onerror = function(err) {
    alert("Failed to open connection with WebSockets server.\nEither the server is down or your connection is out.");
    return;
};

// Provide visual feedback to the user when he is disconnected
socket.onclose = function (evt) {
    document.getElementById("status").innerHTML = "<strong class=\"red\">disconnected!</strong>";
};

// When the connection is opened, the server expects us to send our nickname right away
socket.onopen = function() {
    document.getElementById("status").innerHTML = "<strong class=\"green\">connected!</strong>";
    socket.send (JSON.stringify ({"nickname": nickname}));
};
```

As you are probably aware, WebSockets URLs begin with ws://. The onerror handler fires if the connection is refused; the onclose event is fired when the connection is closed at some point. Note that onerror also triggers onclose. I simply display visual feedback when we are disconnected. When the connection opens successfully, the server expects us to send our nickname immediately; this is what is done here.

So we have our connection opened to the server and we have sent our nickname; we get registered and the server sends us (and every connected user) the contact list. So this is the first piece of code we’ll write for receiving messages:

```javascript
// Parse message, JSON is used for all message transfers
try {
    var dat = JSON.parse (msg.data);
} catch(e) {
    console.log ("ERROR - Received wrong-formatted message from server:\n" + e);
    socket.close();
    isInCall = false;
    isCaller = false;
    return;
}

// Process userlist: display the names in the contact list
if (dat.userlist) {
    var l = dat.userlist;
    var domContent = "";

    // Add each user to the list and register a callback function to initiate the call
    l.forEach (function (elem) {
        // Filter out our name from the list: we don't want to call ourselves!
        if (elem !== nickname) {
            domContent += "<li><button onclick='navigator.callUser(\"" + elem + "\");'>" + elem + "</button></li>";
        }
    });

    // Add the generated user list to the DOM
    document.getElementById("userlist").innerHTML = domContent;
}
```

We use JSON for all data exchanges, so the first thing we do when we receive a message is try to parse it (this is done in the try/catch block). The first test in the onmessage handler is whether we received the userlist from the server. This user list quite simply contains the nicknames of everyone currently connected (including ours, hence the condition to exclude ourselves from the list); we then build an HTML list (<ul>) in which we add the contacts as buttons. On each contact button, the navigator.callUser() callback is bound.
So the logical next step is to see what this callUser() function does:

```javascript
// Initiate a call to a user
navigator.callUser = function (who) {
    document.getElementById('calling_status').innerHTML = "Calling " + who + " ...";
    isCaller = true;
    peer = who;
    startConv();
};
```

One particular thing I want to emphasize is that in our design, the page (hence the Javascript code) is the same for the caller and the callee, but the functions to call (and especially their order) are different for the caller and the callee. For this to work, and to write beautiful, elegant code, we will make extensive use of functions and of conditions on whether we are the caller (checked with isCaller) or not.

So what do we do here? First, we simply give the user some visual feedback that something is happening, by writing “Calling XXX…”. Then, since we are the one making the call, we set the boolean isCaller to true (it is false by default). After that, we register our peer’s name in the peer variable, which is available globally: we do that because we will need it later. And then we call startConv(), the function which will, obviously, start the conversation.

Let’s take a look at that function. It (like many others) will actually be called by both peers, so we have to check whether we are calling it because we are initiating a call or because we are answering one:

```javascript
// Start a call (caller) or accept a call (callee)
function startConv() {
    if (isCaller) {
        console.log ("Initiating call...");
    } else {
        console.log ("Answering call...");
    }

    // First thing to do is acquire the media stream
    navigator.getUserMedia (constraints, onMediaSuccess, onMediaError);
}; // end of 'startConv()'
```

What we need to do to start a conversation is create a channel between the peers (the RTCPeerConnection), acquire the media and transmit it.

WebRTC dirt here: it turns out that for your WebRTC application to work, you have to add your media stream (with pc.addStream()) BEFORE setting your local description (and idem for the callee). I call this dirty because I have yet to find a good explanation of why this is needed, and because I don’t recall the documentation ever specifying it…

Anyway, back to our code sample. The first few lines are just debugging stuff; you can omit them. They simply output on the console whether we are making a call or answering one. As usual, it might be a good idea to display visual (or audio) feedback to the user, like a phone ringing or a picture of a shaking phone: universal signals that we are placing a call. The last line is where the real fun begins: since I’ve told you we have to add the stream before doing anything else, we call getUserMedia() to get the media stream. In case of error, onMediaError() is called:

```javascript
function onMediaError (err) {
    alert ("Media was denied access: " + err);
    document.getElementById('calling_status').innerHTML = "";
    socket.close();
    isCaller = false;
    isInCall = false;
    return;
};
```

which simply notifies the user that access to the webcam was denied (or that it failed for whatever other reason). When it (hopefully) succeeds, onMediaSuccess is called. Again, this process of acquiring the stream is a task common to the caller and the callee; this is why:

1. we define an external function, called onMediaSuccess, and use it as a callback rather than using an anonymous function
2. inside this callback, we check on isCaller.
So here is the caller part of onMediaSuccess:

```javascript
function onMediaSuccess (mediaStream) {
    // Hide the contact list and show the screens
    document.getElementById("signalling").style.display = "none";
    document.getElementById("oncall").style.display = "block";

    // Display our video on our screen
    document.getElementById("localVideo").src = URL.createObjectURL(mediaStream);

    // Create the RTCPeerConnection...
    pc = new window.RTCPeerConnection (ice);
    // ...and add the stream to it: the stream must be added to the
    // RTCPeerConnection **before** creating the offer
    pc.addStream (mediaStream);

    // Register our callbacks
    pc.onaddstream = onStreamAdded;
    pc.onicecandidate = onIceCandidate;

    if (isCaller) {
        // Calling 'createOffer()' will trigger the ICE gathering process
        pc.createOffer (function (offerSDP) {
            pc.setLocalDescription (new RTCSessionDescription (offerSDP),
                function () {
                    console.log ("Set local description");
                },
                function () {
                    console.log ("Failed to set up local description");
                });
        }, function (err) {
            console.log ("Could not build the offer");
        }, constraints);
    }
    // (the 'else' branch, for the callee, is described later)
```

As we saw in the mirror example, the success callback is passed the MediaStream object. First, there are a couple of tasks common to both the caller and the callee: we hide the HTML part that displayed the user list (since we are in a call, we won’t be calling someone else, so we might as well hide it) and show the “on call” part of the window, the one with the two screens. Then, since we now have our local stream: our pretty face (or your sister under the shower, if you have a wireless webcam… and you’re a pervert; don’t do that, by the way), we can just show it. This step was already discussed in the mirror example, so there should not be anything new.

It is now time to instantiate our RTCPeerConnection and store it in the globally available pc variable. Remember our STUN server’s address (stored in the ice variable)? Well, if you want to use it, you pass it as a parameter. Okay, now that the RTCPeerConnection is created, the very first thing we do is add our stream (since WebRTC silently requires it); this is done with pc.addStream(). Then we define the callbacks for the pc.onaddstream and pc.onicecandidate events. The former will be triggered when our peer itself calls pc.addStream(), and the latter is fired whenever our ICE Agent gathers a candidate.

If you’re like me and usually write a few lines of code at a time (with debugging console.log() calls) and then run them to see what’s happening in the console: do not stop here. I tried it, and obviously spent some time trying not to bang my head against the wall. I believe I said it earlier, but nothing will happen before we have called pc.setLocalDescription()! Now see how ironic this is? In order to get some results, you would be tempted to create your RTCPeerConnection, then register the callbacks and call setLocalDescription(). You would have something on the console then, but it would eventually fail, because you need to add your stream beforehand.

Now comes the caller/callee-dependent part. We are the caller in this case, so what we need to do now is create the offer; this is done with pc.createOffer(). Since the recent WebRTC 1.0 review, the API has slightly changed. Most of the functions now take a success and an error callback (which are mandatory; well, strictly speaking they are not yet, but calling these functions without the callbacks makes the browser display a warning in the console, telling you that soon it will result in an error, thus breaking your code.
This is quite new actually, so you might be surprised if you have already read some WebRTC code examples before and did not see the callbacks). In addition to the success and error callbacks, some functions take an additional MediaConstraints object as a third parameter. Quite frankly, I don’t really understand why, since the constraints are already available inside the RTCPeerConnection. The function createOffer() follows that rule. Our error callback is simply a log on the console. The success callback is passed the offer that was just successfully created. If you want to inspect it, you can call console.log (JSON.stringify (offerSDP)). You will see that it is a JSON-formatted object which contains two fields: “type”, which can be “offer” or “answer”, and “sdp”, which contains the actual SDP information, namely the IP address + port of your interface, the codecs available, etc. It would be pretty small for now, since we haven’t gathered any candidates yet.

It is time to actually start the whole WebRTC procedure by calling setLocalDescription(). You need to give it an RTCSessionDescription, which you can build inline from the SDP offer we have just created. This function, too, comes with success and error callbacks (which we only use here to notify the user on the console).

If you read some WebRTC examples elsewhere, you might see people sending the offer in the success callback of setLocalDescription(). This is called “trickle ICE”. Let me quickly explain the difference. What we are going to do in this application can be summarized like this:

1. create the offer and set it as the local description (this will trigger the ICE Agent, which will start gathering candidates)
2. monitor this ICE Agent; when it is done gathering candidates, we send our peer our local description, which will contain everything in one batch: the codecs we support, the media we are ready to transmit and receive, as well as all our candidates (remember, a candidate is bound to a network interface, and contains a protocol (UDP or TCP), an IP address, a port number, etc.)
3. our peer then receives that offer, inspects it, creates its answer from it and sends it back to us; the conversation then begins with the best parameters that can be supported by both peers.

This is textbook WebRTC. This is what the documentation specifies and this is what is supported by all browsers. But this is not the fastest way to do WebRTC: you have to wait for the ICE Agent to finish gathering all candidates before you can send your offer, and then you need to wait for your peer to gather all of his own candidates. This takes time. What you might see elsewhere is “trickle ICE”. The principle is different:

1. create the offer, set it as the local description and immediately send that almost empty offer to our peer.
2. when our peer receives the offer, he does the same and immediately sends his answer.
3. during that time, the ICE Agent will have started gathering candidates; every time a candidate is found, we send it to our peer. This new candidate is inspected by the WebRTC engine, and if it is better, the communication is transparently switched to use it (for instance, if the first one was TCP, with pretty long delays and thus bad quality, and the new candidate uses UDP, which is faster, the conversation then switches to this new UDP candidate); otherwise it stays the same.

People who use trickle ICE usually send their offer as soon as it is created, in the success callback.
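For the curious, here is roughly what the trickle variant could look like with our signalling envelope. This is only a sketch, not what our application does; in particular, the “candidate” field is a name I made up for this example:

```javascript
// Trickle ICE sketch: push each candidate as soon as it is found,
// instead of waiting for iceGatheringState to become "complete"
pc.onicecandidate = function (evt) {
    if (evt.candidate) {
        var candidateToSend = JSON.stringify ({ "from": nickname, "candidate": evt.candidate });
        socket.send (JSON.stringify ({ "target": peer, "sdp": candidateToSend }));
    }
};

// And on the receiving side, hand each candidate over to the local ICE agent:
// if (dat.candidate) {
//     pc.addIceCandidate (new RTCIceCandidate (dat.candidate));
// }
```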
This trickle ICE procedure has some benefits: the conversation can start immediately, usually in low quality / low bitrate, and can then be enhanced when better candidates are found. The downside is that this is not defined in the documentation, so browsers don’t have to implement trickle ICE. Before some recent updates, I believe Firefox did not support trickle ICE; I am unsure now. So for this article, I prefer to stick to the documentation: this is why we don’t do anything in the success callback.

Okay, so what now? We have covered everything in that onMediaSuccess() callback. Well, since we called setLocalDescription(), the ICE Agent has started gathering candidates, and every time it finds one, it calls our onIceCandidate() callback. Let’s inspect it:

```javascript
function onIceCandidate (evt) {
    // Wait for all candidates to be gathered, and send our offer to our peer
    if (evt.target.iceGatheringState === "complete") {
        console.log ("ICE Gathering complete, sending SDP to peer.");

        // Haven't found a way to use a one-line condition to substitute "offer" and "answer"
        if (isCaller) {
            var offerToSend = JSON.stringify ({
                "from": nickname,
                "offer": pc.localDescription
            });
            socket.send (JSON.stringify ({"target": peer, "sdp": offerToSend}));
            console.log ("Sent our offer");
        }
        // (the callee part of this function is described later)
    }
}
```

As said previously, we don’t use trickle ICE here, so we need to wait for the ICE Agent to finish gathering all the candidates. The state of the gathering process, if you recall from an earlier paragraph, is given by the iceGatheringState property of the ICE Agent. The callback is given an Event (here evt), so in order to access the ICE Agent we need to go through target, which is the object responsible for the event. I mention it here because it is easy to forget it, write evt.iceGatheringState, and get insulted by Javascript. So we simply wait for the last candidate to be gathered, which is indicated by the state being complete.

That gathering process is performed by the callee too, so here is the time to check whether we are the caller or the callee. As the caller, we send our offer. Remember how we built our signalling server? We send the server a JSON object which contains exactly two fields: “target”, to say whom we send our data to, and “sdp”, which contains our data. I admit now that “sdp” is ill-chosen: we don’t always send SDP, but it is just a matter of naming. We build our data; since it will be transmitted to our peer, we need to JSON-format it too (the signalling server simply fetches whatever is inside the “sdp” field and sends it to our peer, so what is inside the “sdp” field must itself be JSON-formatted). In that data, we indicate that it comes from us with the “from” field (this is actually the first time the callee learns that he is being called, and by whom), plus the actual data, which is an SDP offer (really SDP this time). This offer is obtained from our RTCPeerConnection, through the localDescription attribute. Now what happens? Well, it is time to go check what happens on our peer’s side.
He will receive our offer, so let’s inspect a second chunk of code from the onmessage WebSockets handler (remember, the first one was to check whether we received the userlist from the server):

```javascript
if (dat.userlist) {
    // We already did that part, this is just to remind the context
}
// If the message is from a peer
else if (dat.from) {
    // When we receive the first message from a peer, we consider ourselves in a call
    if (!isInCall) {
        isInCall = true;
        peer = dat.from;
        document.getElementById('calling_status').innerHTML = peer + " is calling...";
    }

    if (dat.offer) {
        receivedOffer = dat.offer;
        startConv();
    }
    else if (dat.answer) {
        // We will see that in a few moments
    }
}
// Otherwise, this is an error
else {
    socket.close();
    isInCall = false;
    isCaller = false;
    return;
}
```

If the message doesn’t contain the userlist attribute (which would indicate it comes from the signalling server), it must contain the from attribute, which indicates it comes from a peer; this is our else if. Otherwise, we simply fail. Well, the first thing we do is store our peer’s name! We make it available in the peer variable and set the isInCall boolean to true. Then… visual feedback, so we know we are being called.

Okay, time to pay attention here. We just received an offer from our peer, so we have to start the whole WebRTC machinery (remember: we are on the callee’s side now). But the same rule applies: we need to add our stream to the RTCPeerConnection before we do anything else. This is why we store the offer we just received in the global receivedOffer variable. Then we call startConv(), like we did when the caller initiated the call. If you remember, that function is not much different for the caller and the callee; we just had a test to print “Answering call…” rather than “Initiating call…”.

```javascript
function onMediaSuccess (mediaStream) {
    // [...] common part: display our local video, then
    // create the RTCPeerConnection and add the stream to it
    pc = new window.RTCPeerConnection (ice);
    // Stream must be added to the RTCPeerConnection **before** creating the answer
    pc.addStream (mediaStream);
    pc.onaddstream = onStreamAdded;
    pc.onicecandidate = onIceCandidate;

    if (isCaller) {
        // We already saw that part for the caller
    } else {
        pc.setRemoteDescription (new RTCSessionDescription (receivedOffer), function () {
            pc.createAnswer (function (answerSDP) {
                pc.setLocalDescription (new RTCSessionDescription (answerSDP),
                    function () {
                        console.log ("Set local description");
                    },
                    function () {
                        console.log ("Failed to set up local description");
                    });
            }, function (err) {
                console.log ("Could not build the answer");
            }, constraints);
        }, function () {
            console.log ("Failed to set up remote description");
        });
    }
}; // end of 'onMediaSuccess()'
```

The first part is common, we already saw it: we have the user’s webcam stream available, we created our RTCPeerConnection, we added our stream to it (ah! now that will trigger the onaddstream event on the caller’s side; I’ll come back to this in a minute) and we registered our callbacks for the ICE Agent.

Now, what happens in that else case? Well, now that we have added our stream to the RTCPeerConnection, we can set our remote description. Just to be sure everybody follows: we are on the callee’s side now, so the description we just received from the caller (which was his local description) is, for us, the remote description. Similarly, our local description will become his remote description once we have sent it, right? Okay, now that this is clarified, let’s move on. setRemoteDescription(), as you can see, takes the same kind of success and error callbacks. In its success callback, we can do something now that we are on the callee’s side: it is time to create our SDP answer, done with pc.createAnswer().
Exactly as seen previously, if you inspect the object passed to the success callback, you will find the “type” field set to “answer” and an almost empty “sdp” field. Inside that callback (yes, it’s Inception here: we are inside setRemoteDescription()’s success callback, we called createAnswer() and we are now inside its own success callback, and now there will be a third success callback…), with our answer successfully created, we need to set our own local description. Phew!

What now? Before continuing with the callee, let’s not forget that since we called pc.addStream(), the onaddstream event has been triggered on the caller’s side. Let’s take a quick look:

```javascript
function onStreamAdded (evt) {
    console.log ("Remote stream received");
    document.getElementById("remoteVideo").src = URL.createObjectURL(evt.stream);
}; // end of 'onStreamAdded()'
```

This event handler is common to both the caller and the callee, and it is quite simple: we take the media stream, transform it into something that can be plugged into an HTML <video> src attribute, and it’s done. (Don’t forget to call play() if you omitted the autoplay attribute.) Oh, and by the way: on the callee’s side, the onaddstream event was fired as soon as we called setRemoteDescription(); this is what actually connected our RTCPeerConnection with the caller’s.

Okay, back to the callee. Since we called setLocalDescription(), the callee’s ICE Agent started to work; let’s take a look at the callee’s part of the ICE callback:

```javascript
function onIceCandidate (evt) {
    // Wait for all candidates to be gathered, and send our answer to our peer
    if (evt.target.iceGatheringState === "complete") {
        console.log ("ICE Gathering complete, sending SDP to peer.");

        // Haven't found a way to use a one-line condition to substitute "offer" and "answer"
        if (isCaller) {
            // Already covered in the caller part
        } else {
            var answerToSend = JSON.stringify ({
                "from": nickname,
                "answer": pc.localDescription
            });
            socket.send (JSON.stringify ({"target": peer, "sdp": answerToSend}));
            console.log ("Sent our answer");

            // Once we sent our answer, our part is finished
            // and we can log out from the signalling server
            socket.close();
        }
    }
};
```

This should come as no surprise: it is exactly the same code as we used for the caller, except that we now send a JSON object with an “answer” field. If you followed closely when I was talking about the generated SDP, which contains a “type” field whose value is either “offer” or “answer”, you will have noted that we could simplify this code by creating only one JSON object and calling the field “data” or “sdpData” rather than “offer” and “answer”. Then, in the onmessage() handler, the test would not be performed directly on dat.offer/dat.answer but rather on dat.sdpData.type. Well done if you picked that up. Oh, and don’t think “Yeah, I would’ve picked it”: either you did, or you did not.

Last but not least: once the callee has sent his answer, there is nothing else to be transmitted through the signalling server, so we just disconnect from it. From now on (well, after a small remaining step on the caller’s side), everything will be transmitted peer-to-peer, with WebRTC (so, securely). Now it’s time to go see that onmessage() handler in the caller, because we just received an answer!
```javascript
// Process incoming messages from the server; can be the user list or messages from another peer
socket.onmessage = function (msg) {
    // Parse message, JSON is used for all message transfers
    try {
        var dat = JSON.parse (msg.data);
    } catch(e) {
        // error: the message is not JSON-formatted
    }

    // Process userlist: display the names in the contact list
    if (dat.userlist) {
        // we already saw the case for when we receive the user list
    }
    // If the message is from a peer
    else if (dat.from) {
        // When we receive the first message from a peer, we consider ourselves in a call
        if (!isInCall) {
            // not relevant here
        }

        if (dat.offer) {
            // this is the callee part
        }
        else if (dat.answer) {
            pc.setRemoteDescription (new RTCSessionDescription (dat.answer),
                function () {
                    console.log ("Set remote description - handshake complete.");
                    // As we are now in a call, log out from the signalling server
                    socket.close();
                },
                function () {
                    console.log ("Failed to set remote description - handshake failed.");
                });
        }
    }
    // Otherwise, this is an error
    else {
        // error if not one of the two cases that we handle
    }
}; // end of 'socket.onmessage()'
```

It should also come as no surprise: we receive an answer from our callee, and it contains the SDP that we need to use in setRemoteDescription(). Once this is done, the WebRTC handshake is officially completed! Same as for the callee: we have no business anymore with the signalling server, so we might as well close the connection.

Side note: during tests, while examining the console, I noted that "Set remote description - handshake complete." was logged not once but several times on the caller’s console, while several "Sent our answer" messages (from onIceCandidate()) were logged on the callee’s console. So for some reason that I have yet to understand, the callee sends its answer several times, and the caller thus receives it several times too. I’m not sure why. I’ll try to come back to you when I have found the reason.

## Conclusion

You should now have a basic, Skype-like application in your browser. It comes with comparable bitrate and quality (especially since the new WebRTC 1.0 review, where H.264 can be used as a video codec; VP8 was the only one before). The media is truly peer-to-peer, so your media data flows from your computer to your peer’s, with no proxy (see the following paragraph on that note). I hope you enjoyed reading the article, and that I was clear enough. I am aware this is a pretty long article, but I did not want to simply give three lines of code and “voilà!”. I wanted to give you a sense of what is happening under the hood, and more importantly I wanted to write down the tricky parts, like the fact that you have to add the stream to the RTCPeerConnection before calling set{Local,Remote}Description(), because this is often omitted in the code examples that you might have seen on the web. You can find the whole code for the Skype-like application here.

### What we did

In this simple application, we used the most talked-about feature of WebRTC: peer-to-peer media communication. We created a simple Skype-like application to transmit video and audio data between two peers. We used WebSockets as the signalling channel; remember that the WebRTC documentation voluntarily does not provide/impose a technology for the signalling channel. I used WebSockets because I am quite familiar with them, they work well, and the WebRTC API was built to look like the WebSockets API. As a quick reminder, we used a STUN server to discover our public IP address and ensure NAT traversal.
Our ICE Agent took care of gathering network candidates. We notified the person whom we wanted to call via a common signalling channel, through which we negotiated the parameters necessary to initiate a call, and then we issued that WebRTC call. After the WebRTC handshake was completed, we cut the connection with the signalling channel to rely only on WebRTC.

### Note about TURN servers

Google's statistics say that, even when using STUN servers to ensure NAT traversal, about 8% of WebRTC connections fail. This is unlikely to happen on your home network, but it is most likely in companies where tight security is kept on the network, which prevents the NAT from being bypassed. For this scenario, WebRTC provides an answer: a TURN server (yes, you read that right, T-U-R-N, not STUN). A TURN server is a centralized server which acts as a relay between the two peers. When they can't establish contact over a UDP connection, the two peers establish a TCP connection with the TURN server, which relays data between the two. To use that feature, you have to include the TURN server's IP address, port number and credential information in the ICE parameter that is passed when constructing the RTCPeerConnection object (a short sketch appears below). While that solution has the advantage of ensuring a connection, it has the disadvantage that it is no longer a truly peer-to-peer connection, since the TURN server acts as a relay. I did not cover it in this article, but it is something you might want to read up on. There are several companies on the web that will provide you with credentials to use for TURN servers, but you will most likely have to pay; while a STUN server is pretty easy to implement and can support a lot of connections (it's really nothing more than a "ping" server), so you will find some public ones, a TURN server is another story. It has to implement all WebRTC capabilities and will fail to support heavy loads: we are talking about continuous video and audio streaming. You can't have public TURN servers because with more than a few users connected simultaneously, they would break down.

### A note about multi-cast

WebRTC is truly an amazing technology, but it has one "problem". Well, it is not exactly a problem, but it is one thing you need to know before thinking you can conquer the world. What we did is one-to-one media conversation. If you try it, you'll see that it works pretty well. If you and your peer have a pretty decent connection, you should get good quality. But don't forget that uplinks are generally much slower (often by an order of magnitude) than downlinks. So if you want to stream your full HD webcam to your peer, you'd better have a decent uplink! Now consider multi-cast (i.e. one-to-many, many-to-one or many-to-many), say a conversation with three people: A, B and C. One of the very good aspects of WebRTC, if you remember, is encryption: everything (including media) is encrypted with your peer's key. So when you send your stream to your peer, only he can decipher it. If anyone else took a look at the data, it would look scrambled. This is good for privacy. This is bad for multi-casting. When A and B are in business and C joins the conversation, C has to go through a new negotiation with A. A new key will be negotiated, which means that A will send its stream twice: once for B and once for C. The same obviously goes for B and C. This means the already weak uplink will be divided by two, since it has to carry data for both B and C. Now you follow me?
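Here is the TURN configuration sketch promised above (my hedged illustration, not the post's actual code; the server URLs and credentials are placeholders, and I am using the modern "urls" syntax):

```javascript
// Passing STUN and TURN servers to the browser's ICE machinery at
// construction time. When UDP hole punching fails, the TURN entry
// lets the connection fall back to a relayed path.
var pc = new RTCPeerConnection({
    iceServers: [
        { urls: "stun:stun.example.org" },
        { urls: "turn:turn.example.org:3478",
          username: "someUser",
          credential: "someSecret" }
    ]
});
```

Now, back to the multi-cast problem.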
Past 3 users, it is almost impossible to do multi-cast with WebRTC as is. This is all due to the fact that our uplinks are pretty bad while our downlinks are way better. (To put numbers on it: with a 2 Mbit/s uplink and two remote peers, each encrypted copy of your stream gets roughly 1 Mbit/s; with four peers, roughly 0.5 Mbit/s.) For your information, there is a solution. I will most likely talk about it in a later post, but I'll give you some hints: it is called a WebRTC Gateway. Remember when I said our home uplinks were bad while our downlinks were good? Well, guess whose uplinks are good? Servers, of course. Servers send us data, so they use their good uplinks and we use our good downlinks. To be quick, a WebRTC Gateway is a server (software) that can speak WebRTC, running on a server (hardware). When you want to multi-cast, say make a conversation between A, B and C, each peer connects to this WebRTC Gateway via classic WebRTC. Each of them sends their stream (via their uplink) once to the server (gateway). That gateway is in charge of duplicating the data and streaming it back to A, B and C. This way, each peer only uploads its stream once, and gets the other peers' streams. Let's face it: you won't be able to multi-cast with 100 peers streaming Full HD, not yet: your downlink is good but it is not infinite :-). Anyway, more on this later. Hope you enjoyed! February 14, 2015
https://math.stackexchange.com/questions/1828719/high-degree-b%C3%A9zier-curve-for-curve-fitting/1830152
# High Degree Bézier Curve For Curve Fitting I have the feeling that I'm way out of my element here, and that maybe this question will be obvious to most of you. Nonetheless, here goes: I have an example set of 22 two-dimensional points, ordered in increasing x-axis value: 319.48067 -219.040581 323.411004 -221.767389 326.375842 -222.245532 327.8193 -224.307961 330.315608 -225.26134 331.952721 -228.313128 334.289559 -228.254367 335.988759 -231.239028 339.392045 -229.993712 340.882416 -232.052757 343.045085 -234.498472 345.091425 -235.021075 347.066267 -236.365868 348.887618 -238.0629 350.364466 -241.289831 352.211338 -242.140001 353.641035 -245.225141 355.346446 -246.093982 356.474918 -249.797807 357.66939 -252.040193 357.700936 -255.611868 358.127355 -260.326559 I want to smooth out this shape by creating a Bézier curve with the 2 end points and the 20 internal points as the control points. I then want to sample the curve at equally spaced parametric 't' values to achieve the curve fit. I've written a C++ routine to calculate the Bézier curve values, and the output seems very wrong to me. For the following 20 parametric 't' values: 0.047619047619 0.095238095238 0.142857142857 0.190476190476 0.238095238095 0.285714285714 0.333333333333 0.380952380952 0.428571428571 0.476190476190 0.523809523810 0.571428571429 0.619047619048 0.666666666667 0.714285714286 0.761904761905 0.809523809524 0.857142857143 0.904761904762 0.952380952381 I get the following 20 locations on the curve: 172.548459,-118.213783 118.690938,-81.208733 100.189584,-68.513146 94.453547,-64.601268 93.078197,-63.668437 93.107059,-63.666879 93.558920,-63.928764 94.124242,-64.262521 94.707939,-64.625647 95.280613,-65.014698 95.833185,-65.433873 96.363957,-65.888688 96.877984,-66.386296 97.400090,-66.944260 98.025710,-67.629717 99.090008,-68.689441 101.684547,-70.933598 109.123610,-76.804231 130.846232,-93.207709 192.234365,-138.650327 This doesn't look right to me, and it certainly doesn't achieve a curve fit. My question is: is my Bézier curve calculation incorrect, or is my understanding of what a Bézier curve should look like incorrect? Please see the following two images of the original shape only plus the outputted Bézier curve locations: • Bezier curve is always bounded by the convex hull (or XY bounding box) of its control points. So, the fact that your curve goes beyond the range of x and y of the control points means that you most likely have some bugs computing points on the Bezier curve. – fang Jun 16 '16 at 23:27 • Thanks fang! You are correct, I looked more closely and determined that I was calculating the binomial coefficient incorrectly. It's working properly now. I don't know how to mark your comment as the correct answer... any help would be much appreciated! – user2062604 Jun 17 '16 at 15:32 • I will repost my comment as an answer with a little bit more information. – fang Jun 17 '16 at 19:59
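For reference, here is a minimal sketch of the evaluation with the binomial coefficient computed correctly, which was the bug identified in the comments (in Python rather than the asker's C++; math.comb needs Python 3.8+):

```python
from math import comb  # correct binomial coefficients

def bezier_point(points, t):
    """Evaluate a Bezier curve at parameter t, where the n+1 entries
    of `points` are the (x, y) control points of a degree-n curve."""
    n = len(points) - 1
    x = y = 0.0
    for i, (px, py) in enumerate(points):
        # Bernstein basis: C(n, i) * (1 - t)^(n - i) * t^i
        b = comb(n, i) * (1.0 - t) ** (n - i) * t ** i
        x += b * px
        y += b * py
    return x, y
```

Every point produced this way stays inside the convex hull of the control points, which gives a quick sanity check for the kind of bug described above.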
https://zbmath.org/?q=an:1153.60058
## Asymptotics of Plancherel measures for the infinite-dimensional unitary group. (English) Zbl 1153.60058

Summary: We study a two-dimensional family of probability measures on infinite Gelfand-Tsetlin schemes induced by a distinguished family of extreme characters of the infinite-dimensional unitary group. These measures are unitary group analogs of the well-known Plancherel measures for symmetric groups. We show that any measure from our family defines a determinantal point process on $$\mathbb Z_+ \times \mathbb Z$$, and we prove that in appropriate scaling limits, such processes converge to two different extensions of the discrete sine process as well as to the extended Airy and Pearcey processes.

### MSC:

60K40 Other physical applications of random processes
60G55 Point processes (e.g., Poisson, Cox, Hawkes processes)
82B31 Stochastic methods applied to problems in equilibrium statistical mechanics
https://as595.github.io/frdeepcnn/
The simplest (and perhaps most naive) method of using images for classification would be to take the value of every pixel as a feature and feed them into a machine learning algorithm. One problem that would quickly become obvious is the number of hyper-parameters that would need to be trained by the algorithm. For example, if we built a standard neural net to classify an image then we would have to unfold the image into a single vector, i.e. if the image had dimensions of 50 pixels x 50 pixels and it was a standard RGB (3 colour) image, then the input layer of our neural network would have a size of 50x50x3 = 7500. If our image was 500 x 500 pixels then it would be 750,000…

Neural net

One of the very useful things about convolutional neural networks (CNNs) is that they don't require us to flatten our input data, and we can maintain the dimensionality of our input all the way through. In order to do this, CNNs are not composed of individual neurons but instead are composed of functional layers, which have learnable weights associated with them.

#### Layers

The typical layers you will find in a convolutional neural network are:

Input Layer This is the input data sample. If this is a normal image it could have dimensions of 50x50x3, where the image has 50 pixels on a side and the 3 refers to the RGB channels. If the input was a spectral image from a telescope, it might have dimensions 50x50x128, where the image has 50 pixels on a side and 128 spectral (frequency) channels.

Convolutional Layer Convolutional layers have height and width dimensions that are the same as their input (see note below) and a depth that is equal to the number of convolutional filters they apply. For example, the RGB image with dimensions of 50x50x3 could be fed into a convolutional layer that used 6 filters, each with dimensions of 5x5. The output would be 50x50x6. Note that there is no multiplication by the number of channels in the input image: convolutional layers apply their filters to each channel separately and then sum the results. From the viewpoint of a convolutional layer, all of the input channels in the data sample are interchangeable and equally weighted.

Note: when the convolutional layer is applied to a data sample there is usually an option to implement padding or not. Padding provides additional zero-valued pixels around the border of the input image, which allows the output to have the same height x width dimensions as the input. If padding is not applied then the output is smaller than the input.

Activation Layer The purpose of the activation layer is to introduce non-linearity into the network. The most popular activation function is the ReLU (rectified linear unit), which applies the thresholding function max(0, x), where x is the output from the convolutional layer. Convolutional layers are always followed by activation layers.

ReLU activation function

Pooling Layer Pooling layers reduce the volume of hyper-parameters in the CNN by downsampling the data at various stages in the network. Typical examples include the max-pooling layer, which selects the maximum-valued output within a user-defined area, or the average-pooling layer, which takes the average over a user-defined area.

Max Pooling

Fully Connected Layer Fully-connected layers have a pre-defined number of neurons, which are connected to all the outputs from the previous layer. These layers operate like a normal neural network.

Output Layer The output layer is a fully-connected layer that has the same size as the number of target classes.
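To make the shape bookkeeping above concrete, here is a small sketch (in PyTorch, which this post uses later; my illustration, not the post's code) reproducing the 50x50x3 to 50x50x6 example. Note how padding=2 keeps the height and width unchanged for a 5x5 filter:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 50, 50)          # one RGB image, 50x50 (NCHW layout)

conv = nn.Conv2d(3, 6, kernel_size=5, padding=2)  # 6 filters, 5x5, padded
pool = nn.MaxPool2d(2, 2)                         # 2x2 max-pooling

print(conv(x).shape)        # torch.Size([1, 6, 50, 50]) - padding preserves H, W
print(pool(conv(x)).shape)  # torch.Size([1, 6, 25, 25]) - pooling downsamples
```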
The architecture of a CNN refers to the order in which these layers are arranged and the dimensionality of each layer. All of the layers apart from the input and output layers are referred to as hidden layers. They're not really hidden if you're the one building the CNN in the first place, but if you're a user who just wants to classify an image, all you'll see are the input and output - the rest is hidden from you, hence the name.

#### Back-propagation

Back-propagation is a recursive application of the chain rule that allows us to calculate the gradient at each point in a neural network, in order to update the parameters of the network (i.e. the weights) and optimize a defined loss function. Both the loss function itself and the optimization algorithm are typically defined by the user.

#### LeNet/AlexNet/ResNet

So now you know what each layer does, how do you decide what order to place them in to build your CNN? The simple answer is that there is no good answer, and trial and error is often used as a solution. There are some considerations to bear in mind, though: for example, the depth of your network (i.e. the number of layers) is usually related to the volume of information you have, and vice versa. Very deep CNNs require extremely large training datasets. However, if you're just getting started you should probably begin with one of the well-known and well-tested architectures that already exist in the literature. So what's out there?

LeNet Named after Yann LeCun, who developed the first successful applications of convolutional networks in the 1990s.

AlexNet Not very different to LeNet, but the first CNN to stack multiple convolutional layers before adding a pooling layer.

GoogLeNet Includes the Inception Module that dramatically reduced the number of parameters in the network (4M, compared to AlexNet with 60M). Uses average-pooling layers instead of fully-connected layers.

VGGNet Showed that the depth of the network is a critical component for good performance. An extremely homogeneous architecture that only performs 3x3 convolutions and 2x2 pooling from the beginning to the end.

ResNet This architecture introduced skip connections and used batch normalization after the activation layers. Like GoogLeNet, ResNet removes the fully-connected layers. ResNets are currently the default best-option CNN, although opinions are divided on the efficacy of batch normalization.

If you want to know how to use one of these pre-defined architectures, see the network definition in the Python example below. The other nice thing about using inherited architectures is that there are often pre-trained models available for you to use. This is important because training your CNN is the most computationally expensive part of using one; applying the model to new test data costs almost nothing in comparison. Pre-trained models can be useful for many reasons, one of which is that you may not have access to the (possibly huge) dataset that was used for the training. Another reason is that the weights in the convolution layers that define the convolutional filters are often pretty agnostic to the actual images that they've been trained to classify. For example, the convolutional filters that you might use to solve the cat/mouse classification problem are equally appropriate for separating images of star-forming galaxies from those of active galactic nuclei.
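To make that re-use concrete, here is a hedged sketch of what it can look like in PyTorch (using torchvision's pre-trained ResNet-18 purely for illustration; this is not part of the original post):

```python
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(pretrained=True)   # convolutional weights learned on ImageNet

for param in model.parameters():           # freeze all the pre-trained weights
    param.requires_grad = False

# swap in a new output layer sized for two classes; only its (fresh)
# weights will be updated during training
model.fc = nn.Linear(model.fc.in_features, 2)
```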
This approach, freezing the convolutional layer weights and only re-training the weights in the fully-connected layers, is an example of transfer learning.

### CNNs in Python

There are a variety of different ways to construct CNNs in Python. Popular options include the tensorflow library, the keras library and the PyTorch library. Here I'm going to use PyTorch, which I find to be the most straightforward and intuitive option for constructing networks. In this toy example, I'm going to use data from the VLA radio telescope to train a CNN to identify a radio galaxy (active galactic nucleus, or AGN) as Fanaroff-Riley Type I or Fanaroff-Riley Type II, which is a morphological classification that astronomers typically do by eye.

Fanaroff-Riley Classification

To start with, let's import some standard libraries. We'll use these later.

import matplotlib.pyplot as plt
import numpy as np

Then we can start to import different tools from the PyTorch library:

import torch
import torchvision
import torchvision.transforms as transforms
from torchsummary import summary
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

I'm going to be using a training dataset of radio galaxies, with images from the FIRST radio survey. This training set has been made into a PyTorch dataset, so it's super easy to use.

from FRDEEP import FRDEEPF

#### Define the architecture

The first thing we can do is to define the architecture of our CNN. This architecture has two convolutional layers, each with a ReLU activation function, followed by a max-pooling layer; these layers are then followed by two fully-connected layers, each with a ReLU activation function, and a final output layer. There's no padding implemented in the convolutional layers, so the outputs have smaller height and width than the input. The difference in size is the dimension of the convolutional filter minus one. For example, if we used a convolutional filter with dimensions 5x5 and an input image with a height of 150 pixels, the output would be 2 pixels smaller at the top and 2 pixels smaller at the bottom - a total of 5 - 1 = 4 pixels smaller - so the output would have a height of 146 pixels.

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 34 * 34, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 2)  # 2 output classes: FRI and FRII

    def forward(self, x):
        # conv1 output width: input_width - (kernel_size - 1) => 150 - (5-1) = 146
        # pool 1 output width: int(input_width/2) => 73
        x = self.pool(F.relu(self.conv1(x)))
        # conv2 output width: input_width - (kernel_size - 1) => 73 - (5-1) = 69
        # pool 2 output width: int(input_width/2) => 34
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 34 * 34)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

Let's look at what we've got in this architecture. The order of the layers is defined in the forward function, which is the forward pass through the network. Basically it goes: CONV-RELU-POOL CONV-RELU-POOL FC-RELU FC-RELU and then the final FC-layer is the output layer. If we instantiate the class, we can then visualise a summary of the different layers in the network:

net = Net()
summary(net,(1,150,150))

Now we've defined the network architecture, we can think about the data we want to use.
The PyTorch library requires the images to be in tensor format, so when we read in the FRDEEPF dataset we need to transform the data from PIL image format into tensor format. The other thing we're going to do is to normalise each image. Here I'm normalising the pixel amplitudes in each image using a mean of 0.5 and a standard deviation of 0.5. These transformations will be applied to every image that we import into our network.

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5],[0.5])])

Now let's define where those images are. To do this I'm using the FRDEEPF Python class, which automatically downloads the dataset if it's not already available and imports it in a format compatible with the PyTorch library functions. The dataset is already batched into train and test subsets, and we wrap each subset in a data loader (the batch sizes here are illustrative values):

trainset = FRDEEPF(root='./FIRST_data', train=True, download=True, transform=transform)
testset = FRDEEPF(root='./FIRST_data', train=False, download=True, transform=transform)

batch_size_train = 16  # illustrative value
batch_size_test = 16   # illustrative value
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size_train, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size_test, shuffle=False)

The target classes in the dataset are labelled numerically, but we can assign names to each of those numerical labels:

classes = ('FRI', 'FRII')

We're going to take a look at a batch of images from the dataset. To display them we'll need a special function, because the data are normalised (see the transforms above) and they are in tensor format, so we need to undo that:

def imshow(img):
    # unnormalize
    img = img / 2 + 0.5
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

Now let's grab the example data, taking the next iteration of the training dataset using the PyTorch data loader:

# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()

…and display it:

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size_train)))

#### Train the CNN

To train the CNN we need to define (1) our choice of loss function, and (2) our choice of optimisation algorithm. As previously mentioned, one of the most popular loss functions is the Cross Entropy Loss. A perfect classifier would have zero cross entropy loss. Statistically, minimising the cross-entropy is equivalent to maximising the likelihood. The Adagrad optimizer is a variant of stochastic gradient descent. Its major improvement over standard SGD is that it is able to vary the learning rate on a parameter-by-parameter basis. The learning rate ("lr") that is passed to the PyTorch Adagrad library function is an initial guess at the average learning rate (the value below is illustrative).

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adagrad(net.parameters(), lr=0.01)  # lr value is illustrative

Now all of these things are defined, we can train the CNN. To do this we loop over the dataset multiple times, each time using the previous loop's optimized parameters as the new starting point. The process is simple:

• zero the accumulated gradients from the previous batch,
• pass a batch of data through the network to obtain a predicted output,
• evaluate the loss criterion based on that output and the known true labels for the batch,
• use back-propagation to propagate that loss backwards through the network,
• update the parameters.
nepoch = 10  # number of epochs
print_num = 50

for epoch in range(nepoch):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):

        # get the inputs
        inputs, labels = data

        # zero the parameter gradients (they accumulate between batches otherwise)
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % print_num == (print_num-1):    # print every 50 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / print_num))
            running_loss = 0.0

print('Finished Training')

We can now test the trained machine learning model using the test dataset, which we have reserved and haven't used until now:

correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 50 test images: %d %%' % (100 * correct / total))

Accuracy of the network on the 50 test images: 80 %

class_correct = list(0. for i in range(len(classes)))
class_total = list(0. for i in range(len(classes)))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(batch_size_test):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(len(classes)):
    print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))

Accuracy of FRI : 86 %
Accuracy of FRII : 75 %
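As a final check, here is a hedged sketch of using the trained network on a single image (my addition, not from the original post):

```python
# classify one test image with the trained network
with torch.no_grad():
    image, label = testset[0]             # one (tensor, label) pair
    output = net(image.unsqueeze(0))      # add a batch dimension: (1, 1, 150, 150)
    _, predicted = torch.max(output, 1)
    print('true: %s  predicted: %s' % (classes[label], classes[predicted.item()]))
```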
http://www.physicsforums.com/showthread.php?t=521024
Help !! How can I find acute Angle theta?? by krautkramer Tags: acute, angle, theta

P: 25 How do I find the acute angle theta when tan 63° = cot θ? Could anyone help me find the answer to this query? I need the steps too, as I am sooooo poor in mathematics. The final answer for theta should be in degrees. Thanks in advance

P: 25 1. The problem statement, all variables and given/known data Help !! How can I find acute Angle theta?? How do I find the acute angle theta when tan 63° = cot θ? I need the steps too. The final answer for theta should be in degrees. Thanks in advance 2. Relevant equations I don't know; if I knew these things, I would have been the head of 'NASA' :) 3. The attempt at a solution

HW Helper P: 4,525 First step: instead of cot, write it as tan - you are probably more comfortable with tan. If you don't know what cot is, ask google.

HW Helper P: 2,316 Quote by krautkramer: How do I find the acute angle theta when tan 63° = cot θ? ... tan is short for tangent; cot is short for co-tangent, which is short for complementary-tangent. In the same way, sin and cos are short for sine and complementary-sine. In the case of sine and cosine, the sine of an angle and the complementary-sine of the complementary angle are equal in value. E.g. 41 degrees and 49 degrees are complementary [they add up to 90]: compare sin 41 to cos 49 - or, if you prefer, sin 49 to cos 41. The relationship between tan and cot is the same.

P: 25 Hi, thank you friends for the help, but it is not adequate to solve my problem. I know simple mathematics like sin/cos = tan and cos/sin = cot, but I am not able to find a solution for my critical problem. I will repeat the query once again... Hope you can help me.... Determine the acute angle θ when tan 63° = cot θ.

P: 25 Nobody here to help me out...

P: 68 It's simple. Use the given equation to obtain tan(63) = cot$\theta$, so arccot(tan(63)) = $\theta$. Solving, $\theta$ = 27.

P: 25 Quote by thebiggerbang: It's simple. ... Hey, what is this 'arccot'? And how can I find the answer of 27 degrees - by calculator or any other means?

HW Helper P: 2,316 Quote by krautkramer: I will repeat the query once again... Are you allowed to use a calculator when you do this question?

P: 25 Quote by thebiggerbang: It's simple. ... Yes of course... You are right and the answer is 27 degrees. It was simple for you, but for me it is a herculean task... I still don't know how to find it using the Windows scientific calculator

P: 25 Quote by PeterO: Are you allowed to use a calculator when you do this question?
Yes of course... I think so... Otherwise no one would pass the final exam...

HW Helper P: 2,316 Quote by krautkramer: I will repeat the query once again... Try the following: write down an angle, e.g. 17. Take tan 17, use 1/x (the reciprocal) on the answer, then do inverse tan and write down that answer. Repeat, starting with an angle different to 17 degrees. Notice anything?

HW Helper P: 2,316 Quote by krautkramer: Yes of course... I think so... In that case, try each of the options in the cot function. If your calculator doesn't have a cot function, use the tan function followed by the 1/x function. [You had best take tan 63 to start with and write that answer down for reference.]

P: 25 Quote by PeterO: In that case, try each of the options in the cot function... Hoooorahhh, got it! Today I learned a lot of mathematics... Thanks and 1000 hugs to sonic generation and the other friends.... My examination body should allow us to use the internet during exams. http://answers.yahoo.com/question/in...1201353AAYQeXG Determine the acute angle θ when tan 63° = cot θ. tan 63° = cot θ, i.e. cot θ = 1.96261, so tan θ = 1/1.96261 and θ = tan⁻¹(1/1.96261). Ans = 27°

HW Helper P: 2,316 Quote by krautkramer: Hoooorahhh, got it! ... The information / expression you were really holding out for - and we were holding back on - is tan θ = cot(90 − θ), or cot θ = tan(90 − θ). Similarly, sin θ = cos(90 − θ), or cos θ = sin(90 − θ). So tan 63 = cot(90 − 63) = cot 27. That is the complementary angle stuff I was trying to lead you to.

P: 84 Notice that tan(θ) = cot(90 − θ) for any θ.

Math Emeritus Sci Advisor Thanks PF Gold P: 38,706 This was posted in both "Precalculus Homework" and "General Mathematics" so I am merging the two threads.
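For completeness, a quick numerical check of the complementary-angle identity the thread converges on (a Python sketch, added for illustration; not part of the original thread):

```python
import math

theta = 90 - 63                            # complementary angle, in degrees
lhs = math.tan(math.radians(63))           # tan 63 degrees
rhs = 1.0 / math.tan(math.radians(theta))  # cot 27 degrees = 1 / tan 27 degrees
print(theta, lhs, rhs)                     # 27 1.9626... 1.9626...
```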
https://www.abbdoonung.com/q4gw3/definition-of-partial-differentiation-14c49e
# definition of partial differentiation When applying partial differentiation it is very important to keep in mind, which symbol is the variable and which ones are the constants. For clarity, I've put parentheses around the parts of the function that are not considered constant in each calculation (x expressions when the partial is with respect to x, and y expressions when the partial is with respect to y). Log in or sign up to add this lesson to a Custom Course. Illustrated definition of Partial Derivative: The rate of change of a multi-variable function when all but one variable is held fixed. study Find the critical points and the tangent planes to the points. Take this quiz to test your knowledge! The partial derivative of a multivariable function with respect to a given variable, is just the usual derivative with respect to that variable, but regarding all other variables as constants. Menu. Let's call east the positive x direction, and north the positive y direction. If you know how to take a derivative, then you can take partial derivatives. See more. What Does “Auld Lang Syne” Actually Mean? In contrast to the abstract nature of the theory behind it, the practical technique of differentiation can be carried out by purely algebraic manipulations, using three basic derivatives, four A Partial Differential Equation commonly denoted as PDE is a differential equation containing partial derivatives of the dependent variable (one or more) with more than one independent variable. If you know how to take a derivative, then you can take partial derivatives. v = (x*y)/(x - y) Definition: partial derivatives. English loves putting words together to make new ones. The partial derivative of a function f ( x, y) at the origin is illustrated by the red line that is tangent to the graph of f in the x direction. Biology Lesson Plans: Physiology, Mitosis, Metric System Video Lessons, Lesson Plan Design Courses and Classes Overview, Online Typing Class, Lesson and Course Overviews, Personality Disorder Crime Force: Study.com Academy Sneak Peek. To calculate a partial derivative with respect to a given variable, treat all the other variables as constants and use the usual differentiation rules. The partial derivative with respect to a given variable, say x, is defined as taking the derivative of f as if it were a function of x while regarding the other variables, y, z, etc., as constants. Example partial derivative by limit definintion. 1. To unlock this lesson you must be a Study.com Member. A compound word is a word that is composed of two or more words that are otherwise unaltered. Partial differentiation builds with the use of concepts of ordinary differentiation. The work is shown below. ... Of or being operations or sequences of operations, such as differentiation and integration, when applied to only one of several variables at a time. Select a subject to preview related courses: Find the partial derivatives with respect to x and y for the following function. This would correspond to a positive value for the partial derivative with respect to x evaluated at the point (a, b). Create your account. Notice the partial derivative notation ∂/∂x in the first line? So, the critical points are obtained by solving the first partial derivatives equal to zero. The temperature at the point (1, 2, 2) is 200 ^{\circ} . ... Vector-valued functions differentiation Get 3 of 4 questions to level up! Decisions Revisited: Why Did You Choose a Public or Private College? 
The partial derivative ∂ f ∂ x ( 0, 0) is the slope of the red line. The more steeply f increases at a given point x = a, the larger the value of f '(a). The function f can be reinterpreted as a family of functions of one variable indexed by the other variables: That's really all there is to it! | 1 flashcard set{{course.flashcardSetCoun > 1 ? That monstrosity of a second term, x^5 y^2 tan(x + 3y), is considered a constant in this problem (so its derivative is simply 0) because the variable z does not show up in it. 's' : ''}}. In the story above, there are 3 independent variables, distance (x), height (h) and time (t), so I used partial differentiation. Learn. “Affect” vs. “Effect”: Use The Correct Word Every Time. Partial differentiation is needed if you have more than one independent variable. Now let's explore what the partial derivatives are good for. Partial differentiation definition, the process of finding one of the partial derivatives of a function of several variables. Let's find the partial derivatives of z = f(x, y) = x^2 sin(y). We will give the formal definition of the partial derivative as well as the standard notations and how to compute them in practice (i.e. Advantages of Self-Paced Distance Learning, Hittite Inventions & Technological Achievements, Ordovician-Silurian Mass Extinction: Causes, Evidence & Species, English Renaissance Theatre: Characteristics & Significance, Property Ownership & Conveyance Issues in Georgia, Communicating with Diverse Audiences in Adult-Gerontology Nursing, High School Assignment - Turning Point in World History Analytical Essay, Quiz & Worksheet - Texas Native American Facts, Quiz & Worksheet - Applying Postulates & Theorems in Math, Quiz & Worksheet - Function of a LAN Card, Flashcards - Real Estate Marketing Basics, Flashcards - Promotional Marketing in Real Estate, Elementary Science Worksheets and Printables, Classroom Management Strategies | Classroom Rules & Procedures, CM Planning & Organizing Exam Study Guide - Certified Manager, MTTC Physics (019): Practice & Study Guide, Ohio Assessments for Educators - Biology (007): Practice & Study Guide, Intro to Business Syllabus Resource & Lesson Plans, Immune System: Innate and Adaptive Systems, Quiz & Worksheet - Mineral Absorption, Retention, & Availability, Quiz & Worksheet - Theory of Organizational Commitment, Quiz & Worksheet - Gender Discrimination Laws & Workplaces, Quiz & Worksheet - Preparing for Careers in the Engineering Field, Diminishing Marginal Utility: Definition, Principle & Examples, Illinois Common Core Social Studies Standards. First of all , what is the goal differentiation? The wire frame represents a surface, the graph of a function z=f(x,y), and the blue dot represents a point (a,b,f(a,b)).The colored curves are "cross sections" -- the points on the surface where x=a (green) and y=b (blue). All other trademarks and copyrights are the property of their respective owners. In this section we will the idea of partial derivatives. ∂f/∂x measures the rate of change of f in the direction of x, and similarly for ∂f/∂y, ∂f/∂z, etc. I tried partially differentiating both sides with respect to y and then with respect to x. Partial Derivative Definition Calories consumed and calories burned have an impact on our weight. Let $$f(x,y)$$ be a function of two variables. Recall from calculus, the derivative f '(x) of a single-variable function y = f(x) measures the rate at which the y-values change as x is increased. A few examples and applications will also be given. 
Basics Of Partial Differentiation Basics of Partial Differentiation In mathematics, sometimes the function depends on two or more than two variables. So what happens when there is more than one variable? © copyright 2003-2020 Study.com. TOPIC 1 : FUNCTIONS OF SEVERAL VARIABLES 1.1 PARTIAL DIFFERENTIATION The definition of partial di ↵ erentiation: The partial derivative of z (x, y) with respect to x and y is defined as @ z @ x = z x = lim Δ x-! Partial derivatives are the mathematical tools used to measure increase or decrease with respect to a particular direction of travel. This is a question from my notes. Section 7.3 Partial Differentiation. 2. imaginable degree, area of Let z^3 = xz + y. 2. To obtain the partial derivative of the function f(x,y) with respect to x, we will differentiate with respect to x, while treating y as constant. Anyone can earn On the other hand, if you turned north instead, it may be that you can descend into a valley. This function has two independent variables, x and y, so we will compute two partial derivatives, one with respect to each variable. The picture to the left is intended to show you the geometric interpretation of the partial derivative. (Unfortunately, there are special cases where calculating the partial derivatives is hard.) {{courseNav.course.mDynamicIntFields.lessonCount}} lessons process of finding a function that outputs the rate of change of one variable with respect to another variable x 1. Remember, all of the usual rules and formulas for finding derivatives still apply - the only new thing here is that one or more variables must be considered constant. without the use of the definition). Sciences, Culinary Arts and Personal Parametric velocity and speed Get 3 of 4 questions to level up! The simple PDE is given by; ∂u/∂x (x,y) = 0 The above relation implies that the function u(x,y) is independent of x which is the reduced form of partial differential equation formulastate… Let f(x, y) be a function of the two variables x and y. Partial Differentiation (Introduction) 2. Let f(x,y) = x + y + \frac{1}{x} + \frac{1}{y} . credit by exam that is accepted by over 1,500 colleges and universities. Differentiation, in mathematics, process of finding the derivative, or rate of change, of a function. Shaun is currently an Assistant Professor of Mathematics at Valdosta State University as well as an independent private tutor. (geometrically) Finding the tangent at a point of a curve,(2 dimensional) But this is in 2 dimensions. If you're seeing this message, it means we're having trouble loading external resources on our website. What Is An Em Dash And How Do You Use It? Simona received her PhD in Applied Mathematics in 2010 and is a college professor teaching undergraduate mathematics courses. The tangent plane to (0,0) is z = 0 ,and the tangent plane to (-1,1), (1,-1) is z = 4, by substituting in the function z, the coordinates of the critical points. Why Do “Left” And “Right” Mean Liberal And Conservative? Using the difference quotient to calculate the partial derivative with respect to x Visit the College Algebra: Help and Review page to learn more. This would give a negative value for the partial derivative with respect to y evaluated at (a, b). How Do I Use Study.com's Assign Lesson Feature? The partial derivative with respect to a given variable, say x, is defined as As you will see if you can do derivatives of functions of one variable you won’t have much of an issue with partial derivatives. 
Obviously, for a function of one variable, its partial derivative is the same as the ordinary derivative. The derivative of a function of a single variable tells us how quickly the value of the function changes as the value of the independent variable changes. For a function of several variables, the partial derivative with respect to a given variable, say $x$, measures the rate of change of the function in the direction of $x$ while the other variables are held fixed: you treat all the other variables as constants and use the usual differentiation rules. The one thing you have to keep in mind while differentiating is which symbol is the variable and which ones are the constants. In the two-variable case $z = f(x, y)$, the defining limits are

$$\frac{\partial z}{\partial x} = z_x = \lim_{\Delta x \to 0} \frac{z(x+\Delta x,\, y) - z(x, y)}{\Delta x}, \qquad \frac{\partial z}{\partial y} = z_y = \lim_{\Delta y \to 0} \frac{z(x,\, y+\Delta y) - z(x, y)}{\Delta y}.$$

A partial derivative definition example: Calories consumed and Calories burned both have an impact on our weight, and the partial derivative with respect to one of them isolates its effect while the other is held fixed.

The geometric meaning of the partial derivative with respect to $x$ is the slope of the tangent line to the curve $f(x, k)$, where $k$ is constant. Imagine you're an avid hiker trekking over some rough terrain with lots of hills and valleys, with east the positive $x$ direction and north the positive $y$ direction: the partial derivative with respect to $x$ is the steepness of the terrain as you walk east. The larger the value of $f_x(a)$, the more steeply $f$ increases at $a$ in that direction; a negative value for the partial derivative means the function decreases there. If the tangent plane to the surface at a given point $(a, b)$ is horizontal, then $(a, b)$ is a critical point of the function. To find relative maxima and minima of a function of several variables, set the partial derivatives equal to zero and solve the system of equations obtained from $f_x = 0$ and $f_y = 0$. Second and higher-order partial derivatives are found in much the same way as higher-order ordinary derivatives, by differentiating again with respect to either variable, and the mixed second partial derivatives are symmetric.

Typical exercises: find the partial derivatives $f_x$ and $f_y$ of $f(x, y) = x^3 + y + 100$; find all the critical points of $f(x, y) = x^2 + y^2 + x^2 y + 4$; and find the Jacobian given $x = e^u \cos v$, $y = e^u \sin v$.
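Working the critical-point exercise above in full for $f(x, y) = x^2 + y^2 + x^2 y + 4$:

$$f_x = 2x + 2xy = 2x(1 + y) = 0, \qquad f_y = 2y + x^2 = 0.$$

The first equation gives $x = 0$ or $y = -1$. If $x = 0$, the second equation forces $y = 0$; if $y = -1$, it forces $x^2 = 2$, i.e. $x = \pm\sqrt{2}$. The critical points are therefore $(0, 0)$, $(\sqrt{2}, -1)$ and $(-\sqrt{2}, -1)$.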
2021-02-26 04:19:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5544653534889221, "perplexity": 777.4679885896509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356140.5/warc/CC-MAIN-20210226030728-20210226060728-00113.warc.gz"}
https://community.plm.automation.siemens.com/t5/3D-Simulation-NX-Nastran-Forum/Hyperelastic-material-modeling-capability/td-p/155795
# Hyperelastic material modeling capability

N/A

Hello,

I have a question regarding hyperelastic material modeling capability in NX DESKTOP. Does NX DESKTOP support MATHP for higher-order hyperelastic material models? After I compiled the input file with the MATHP deck and submitted it for solution, the f06 file lists that some entries of MATHP are not supported. Does that mean that I cannot use a higher-order hyperelastic material model?

Regards,

Henrylou

2 REPLIES

# Re: Hyperelastic material modeling capability

N/A

Hello Henry,

I assume that you inserted the MATHP cards by a manual edit, since currently NX does not have support for hyperelastic materials. This is OK and you should still be able to run and post-process results. I wonder what solution sequence you are using. NX Nastran only uses the MATHP for nonlinear solutions 106 and 601. If you tried using it for SOL 101, then you might get the error you are seeing. If that is not the problem, would it be possible to post your MATHP input?

Mark

"henry" wrote:
>
> Hello,
>
> I have a question regarding hyperelastic material modeling capability in NX DESKTOP.
> Does NX DESKTOP support MATHP for higher order hyperelastic material model? After
> I compiled the input file with MATHP deck and submitted for solution, f06 file lists
> that some entries of MATHP are not supported. Does that mean that I cannot use higher
> order hyperelastic material model?
>
> Regards,
>
> Henrylou
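By way of illustration (this is not from the original thread), a hand-edited MATHP entry usually sits in a nonlinear deck roughly like the sketch below. All IDs and material constants are placeholders, and the MATHP field layout (MID, A10, A01, D1, ... on the first line) should be double-checked against the NX Nastran Quick Reference Guide for your release:

```
$ Illustrative SOL 106 skeleton with a manually inserted MATHP entry.
$ Placeholder IDs and values only -- not recommendations.
SOL 106
CEND
TITLE = hyperelastic check
SUBCASE 1
  NLPARM = 10
  SPC = 1
  LOAD = 100
BEGIN BULK
NLPARM,10,10
$ MATHP: MID, A10, A01, D1 (Mooney-Rivlin constants plus a volumetric term)
MATHP,7,0.8,0.2,1000.
$ ... hyperelastic elements/properties referencing MID 7, loads, SPCs ...
ENDDATA
```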
2018-02-20 00:08:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8457052111625671, "perplexity": 11179.246077855436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812855.47/warc/CC-MAIN-20180219231024-20180220011024-00777.warc.gz"}
https://matheducators.stackexchange.com/questions/8356/a-text-based-program-to-draw-geometric-figures/12256
# A text-based program to draw geometric figures

I found this question on another forum. Are there any hints on how one could make pictures for geometric problems by coding? I mean, if someone has a handicap in his hands and can't use a mouse very well, but would like to learn to draw pictures like this and this by coding. What would be the easiest method to produce such pictures? Python? Some LaTeX package?

One possibility is to use GeoGebra. I use it for making any geometrical drawings I need, especially if I want to show them in public. It can be used via text, too, but I've only used it via the graphical interface. One benefit is that even in text mode this is WYSIWYG, so you don't have to compile to see what you'll get.

There are indeed several LaTeX packages available for this. I have experience with only one - TikZ. The best impression of what it can do is probably the corresponding TeXample page. If you want to learn it, at least the introductory chapters of the manual that comes along with the package are rather good. While looking them up I noticed that there's also a shorter introduction document in there now. I haven't worked with that one, but its contents at least include something about tangency, which is probably close to what you might be interested in.

There's also (at least) PSTricks - but I've never worked with that one. It seems to have more links to PostScript, so I'm a bit sceptical of its use in general circumstances...

Back in school we used the programming language Logo, which comes along with a graphical toolkit (turtle graphics). That should also work - but I haven't used it since school. Skimming over the Wikipedia articles, it seems to have a lot of different implementations, sometimes standalone from Logo, and sometimes with extensions for 3D graphics.

• I'm a fan of the tkz-euclide package for tikz. I've found it easier and more intuitive for creating geometric figures than vanilla tikz. The only downside is that the documentation is in French, but the commands are English and it's not difficult to figure it out. – michael Jul 8 '15 at 18:21
• Ha, for just an instant I believed you to be Wrath of Seth. – Vandermonde Sep 23 '16 at 5:19

GeoGebra seems nice, but for the figures you posted it looks as if metapost is the tool of choice. What I don't like about GeoGebra is that you need to log in, at least when I try to install the Linux version and just start it. TikZ is nice, but for my taste it's better to build the figures apart from the document you want to include them in. There are many other tools for this. See the old troff resource for example. But the troff pic macros are better used for simpler diagrams and don't integrate that well with the LaTeX rendering process.

• I have used only the Windows version of GeoGebra, but there is no login for that version. Also, the OP has two figures: for which one do you think metapost is better? – Rory Daulton Jul 6 '15 at 10:43
• I can't say much about GeoGebra, as I don't want to log in. But metapost has ways to calculate the intersection points automatically. I think GeoGebra might have similar features, but I find it hard to see the text examples. Metapost is a very concise language to handle coordinates and basic geometric figures. AND: Learning about how Donald E. Knuth thinks about computer languages means learning from one of the best. AND if you installed LaTeX there is a good chance that you already have metapost. – ikrabbe Jul 6 '15 at 10:48

For completeness, I suggest here two other graphics languages.
The first one is a powerful graphics language with a C++-like syntax named Asymptote. It can be easily interfaced with LaTeX and provides commands which are suitable for drawing geometric figures (e.g. commands for finding intersections between curves). However, it's not for the faint of heart, and the learning curve can be steep. Suggested for an experienced programmer.

The second is a much simpler graphics language developed by Brian Kernighan (in this paper you can find an introduction to it) called PIC. Dwight Aplevich developed a PIC interpreter called DPIC, which comes with a few extensions to the original language. PIC is not as powerful as Asymptote (e.g. it lacks commands for finding intersections), but it has a much simpler syntax.

I second GeoGebra. It's free, runs in a browser and is quite slick. I haven't worked out how to script it properly yet, but there must be a way. Also, Scratch can draw in a Logo/turtle-graphics style.

The tool GCLC by Predrag Janicic offers a textual language to describe geometric figures. The tool can export to TikZ and PSTricks. The language is described in the following paper: https://link.springer.com/article/10.1007/s10817-009-9135-8
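To give a flavour of the text-only workflow these answers describe, here is a minimal TikZ example (my own illustration, not from the thread) that draws a triangle with labelled vertices and a median whose endpoint is computed by TikZ itself; compile with pdflatex:

```latex
\documentclass[tikz]{standalone}
\usetikzlibrary{calc} % needed for the midpoint syntax ($(B)!0.5!(C)$)
\begin{document}
\begin{tikzpicture}
  % the vertices are the only coordinates typed by hand
  \coordinate [label=below left:$A$]  (A) at (0,0);
  \coordinate [label=below right:$B$] (B) at (4,0);
  \coordinate [label=above:$C$]       (C) at (1,3);
  \draw (A) -- (B) -- (C) -- cycle;   % the triangle
  \draw (A) -- ($(B)!0.5!(C)$);       % median from A; midpoint computed by TikZ
\end{tikzpicture}
\end{document}
```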
2020-08-05 08:12:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5725197792053223, "perplexity": 957.9909744198211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735916.91/warc/CC-MAIN-20200805065524-20200805095524-00066.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/cpaa.2010.9.397
# American Institute of Mathematical Sciences

March 2010, 9(2): 397-411. doi: 10.3934/cpaa.2010.9.397

## Fast rate of dead core for fast diffusion equation with strong absorption

1 College of Mathematics and Physics, Chongqing University, Chongqing, 400044, China
2 College of Mathematics and Physics, Chongqing University, Chongqing, 400044; School of Mathematics and Statistics, Southwest University, Chongqing, 400715, China
3 Department of Mathematics, Sichuan Normal University, Chengdu, 610066, China

Received January 2009 | Revised July 2009 | Published December 2009

This paper deals with the dead core problem for the fast diffusion equation with strong absorption and positive boundary values. We prove that the dead core rate is faster than the one given by the corresponding ODE, which is contrary to the known results for the related extinction, quenching and blow-up problems. Moreover, we find the dead core rate is quite unstable: the ODE rate can be recovered if the absorption term is replaced by $-a(t,x)u^p$ for a suitable bounded, uniformly positive function $a(t,x)$. As an application of the above results, some new and relatively simple examples of fast blow-up are provided, and a phenomenon of strong sensitivity to gradient perturbations is exhibited. Furthermore, the blow-up rate is found to depend on a constant in the perturbation term, and sharp estimates are also obtained for the profile of dead core and blow-up.

Citation: Chunlai Mu, Jun Zhou, Yuhuan Li. Fast rate of dead core for fast diffusion equation with strong absorption. Communications on Pure & Applied Analysis, 2010, 9 (2): 397-411. doi: 10.3934/cpaa.2010.9.397
2020-11-25 19:25:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5667082071304321, "perplexity": 6196.983010754544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141184123.9/warc/CC-MAIN-20201125183823-20201125213823-00381.warc.gz"}
https://www.rdocumentation.org/packages/refund/versions/0.1-23/topics/lf.vd
refund (version 0.1-23)

# lf.vd: Construct a VDFR regression term

## Description

This function defines a variable-domain functional regression term for inclusion in a gam formula (or bam, gamm, or gamm4::gamm) as constructed by pfr. These are functional predictors for which each function is observed over a domain of different width. The default term is $$1/T_i\int_0^{T_i}X_i(t)\beta(t,T_i)dt$$, where $$X_i(t)$$ is a functional predictor of length $$T_i$$ and $$\beta(t,T_i)$$ is an unknown bivariate coefficient function. Various domain transformations are available, such as lagging or domain-standardizing the coordinates, or parameterizing the interactions; these often result in improved model fit. Basis choice is fully customizable using the options of s and te.

## Usage

lf.vd(
  X,
  argvals = seq(0, 1, l = ncol(X)),
  vd = NULL,
  integration = c("simpson", "trapezoidal", "riemann"),
  L = NULL,
  basistype = c("s", "te", "t2"),
  transform = NULL,
  mp = TRUE,
  ...
)

## Arguments

X: matrix containing variable-domain functions. Should be $$N \times J$$, where $$N$$ is the number of subjects and $$J$$ is the maximum number of time points per subject. Most rows will have NA values in the right-most columns, corresponding to unobserved time points.

argvals: indices of evaluation of X, i.e. $$(t_{i1},\ldots,t_{iJ})$$ for subject $$i$$. May be entered either as a length-J vector or as an N by J matrix. Indices may be unequally spaced. Entering a matrix allows for different observation times for each subject.

vd: vector of values containing the variable-domain widths ($$T_i$$ above). Defaults to the argvals value corresponding to the last non-NA element of $$X_i(t)$$.

integration: method used for numerical integration. Defaults to "simpson" (Simpson's rule) for calculating the entries in L. Alternatively, and for non-equidistant grids, "trapezoidal" or "riemann".

L: an optional N by ncol(argvals) matrix giving the weights for the numerical integration over t. If present, overrides integration.

basistype: character string indicating the type of bivariate basis used. Options include "s" (the default), "te", and "t2", which correspond to mgcv::s, mgcv::te, and mgcv::t2.

transform: character string indicating an optional basis transformation; see Details for options.

mp: for transform=="linear" or transform=="quadratic", TRUE to use multiple penalties for the smooth (one for each marginal basis). If FALSE, penalties are concatenated into a single block-diagonal penalty matrix (with one smoothing parameter).

...: optional arguments for basis and penalization to be passed to the function indicated by basistype. These could include, for example, "bs", "k", "m", etc. See te or s for details.

## Value

a list with the following entries

call: a call to s or te, using the appropriately constructed weight matrices

data: data used by the call, which includes the matrices indicated by argname, Tindname, and LXname

L: the matrix of weights used for the integration

argname: the name used for the argvals variable in the formula used by mgcv::gam

Tindname: the name used for the Tind variable in the formula used by mgcv::gam

LXname: the name of the by variable used by s or te in the formula for mgcv::gam

## Details

The variable-domain functional regression model uses the term $$\frac1{T_i}\int_0^{T_i}X_i(t)\beta(t,T_i)dt$$ to incorporate a functional predictor with subject-specific domain width. This term imposes a smooth (nonparametric) interaction between $$t$$ and $$T_i$$.
The domain of the coefficient function is the triangular (or trapezoidal) surface defined by $$\{(t,T_i): 0\le t\le T_i\}$$. The default basis uses bivariate thin-plate regression splines.

Different basis transformations can result in different properties; see Gellar, et al. (2014) for a more complete description. We make five basis transformations easily accessible using the transform argument. This argument is a character string that can take one of the following values:

1. "lagged": transforms argvals to argvals - vd
2. "standardized": transforms argvals to argvals/vd, and then rescales vd linearly so it ranges from 0 to 1
3. "linear": first transforms the domain as in "standardized", then parameterizes the interaction with "vd" to be linear
4. "quadratic": first transforms the domain as in "standardized", then parameterizes the interaction with "vd" to be quadratic
5. "noInteraction": first transforms the domain as in "standardized", then reduces the bivariate basis to univariate, with no effect of vd. This would be equivalent to using lf on the domain-standardized predictor functions.

The practical effect of using the "lagged" basis is to increase smoothness along the right (diagonal) edge of the resultant estimate. The practical effect of using a "standardized" basis is to allow for greater smoothness at high values of $$T_i$$ compared to lower values.

These basis transformations rely on the basis constructors available in the mgcvTrans package. For more specific control over the transformations, you can use bs="dt" and/or bs="pi"; see smooth.construct.dt.smooth.spec or smooth.construct.pi.smooth.spec for an explanation of the options (entered through the xt argument of lf.vd/s).

Note that tensor product bases are only recommended when a standardized transformation is used. Without this transformation, just under half of the "knots" used to define the basis will fall outside the range of the data and have no data available to estimate them. The penalty allows the corresponding coefficients to be estimated, but results may be unstable.

## References

Gellar, Jonathan E., Elizabeth Colantuoni, Dale M. Needham, and Ciprian M. Crainiceanu. Variable-Domain Functional Regression for Modeling ICU Data. Journal of the American Statistical Association, 109(508):1425-1439, 2014.

## See Also

pfr, lf, mgcv's linear.functional.terms.
## Examples

# NOT RUN {
data(sofa)
fit.vd1 <- pfr(death ~ lf.vd(SOFA) + age + los,
               family="binomial", data=sofa)
fit.vd2 <- pfr(death ~ lf.vd(SOFA, transform="lagged") + age + los,
               family="binomial", data=sofa)
fit.vd3 <- pfr(death ~ lf.vd(SOFA, transform="standardized") + age + los,
               family="binomial", data=sofa)
fit.vd4 <- pfr(death ~ lf.vd(SOFA, transform="standardized", basistype="te") + age + los,
               family="binomial", data=sofa)
fit.vd5 <- pfr(death ~ lf.vd(SOFA, transform="linear", bs="ps") + age + los,
               family="binomial", data=sofa)
fit.vd6 <- pfr(death ~ lf.vd(SOFA, transform="quadratic", bs="ps") + age + los,
               family="binomial", data=sofa)
fit.vd7 <- pfr(death ~ lf.vd(SOFA, transform="noInteraction", bs="ps") + age + los,
               family="binomial", data=sofa)

ests <- lapply(1:7, function(i) {
  c.i <- coef(get(paste0("fit.vd", i)), n=173, n2=173)
  c.i[(c.i$SOFA.arg <= c.i$SOFA.vd), ]
})

# Try plotting for each i
i <- 1
lims <- c(-2, 8)
if (requireNamespace("ggplot2", quietly = TRUE) &
    requireNamespace("RColorBrewer", quietly = TRUE)) {
  est <- ests[[i]]
  est$value[est$value < lims[1]] <- lims[1]
  est$value[est$value > lims[2]] <- lims[2]
  ggplot2::ggplot(est, ggplot2::aes(SOFA.arg, SOFA.vd)) +
    ggplot2::geom_tile(ggplot2::aes(colour=value, fill=value)) +
    # illustrative completion of the truncated example: a diverging colour
    # scale over the clipped range
    ggplot2::scale_fill_gradientn(limits=lims,
        colours=rev(RColorBrewer::brewer.pal(11, "Spectral"))) +
    ggplot2::scale_colour_gradientn(limits=lims,
        colours=rev(RColorBrewer::brewer.pal(11, "Spectral")))
}
# }
2021-04-12 15:51:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6680298447608948, "perplexity": 6213.513394283561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067870.12/warc/CC-MAIN-20210412144351-20210412174351-00406.warc.gz"}
https://www.arunl.com/research/
# Research

## Synthesis of Incremental Regions of Attraction

Formal methods in control design for nonlinear systems are often focused on the stabilization of an equilibrium. Modern robotics applications, however, require notions of stability to be defined when tracking reference trajectories. The idea behind an incremental region of attraction is that as long as an attractive region can be defined around any trajectory within some operating regime, all initial conditions of the system that start within the region converge to the trajectory asymptotically or exponentially. The synthesis of the region involves the verification of an incremental Lyapunov function using a branch-and-cut approach, as shown below. Once a suitable incremental region of attraction is found, the attractive region around any trajectory can be efficiently computed. For example, the figure below shows the attractive region around a swing-up trajectory of a torque-limited inverted pendulum. The blue lines indicate simulations started randomly within the attractive region. Some of the randomized simulations plotted above are animated in the following GIF. This is currently unpublished work; please watch this space for more information.

## Contraction Theory-Based $\mathcal{L}_1$-Adaptive Control

Contraction theory is a tool to analyze incremental stability properties of nonlinear systems and constructively design tracking controllers that stabilize the system around any trajectory. We provide synthesis procedures to compute robust control contraction metrics that minimize the $\mathcal{L}_\infty$-gain from the disturbances to the tracking error. Even better performance can be achieved by augmenting the contraction theory-based setup with the $\mathcal{L}_1$-adaptive control architecture to compensate for the disturbances within the control channel. We provide certificates in the form of tubes whose width is adjustable based on the adaptive controller parameters. Furthermore, they can be incorporated into motion planners to provide paths that are safe with respect to the disturbances, as shown below. The figure illustrates the paths of 10 unicycle systems with randomized initializations that remain within the tube and avoid collisions with the gray obstacles. While the tubes are tunable based on tracking requirements, their adjustment relies on a trade-off with the inherent robustness of the system to model inaccuracies. We further show that the performance of the contraction theory-based $\mathcal{L}_1$-adaptive control architecture can be improved by learning any unmodeled dynamics in the system, without sacrificing robustness. The GIF below illustrates the improvement in the performance of a vehicle traversing a race track in the form of tighter tubes. In the video below, we show the performance of an $\mathcal{L}_1$-adaptive control augmentation to the tracking control of a quadrotor by injecting disturbances in several scenarios.

[1] Equal contribution

## Fast Collision Detection for Trajectories

Collision checking is a computational bottleneck in motion planning problems. While there exist several fast methods for exact collision checking between convex objects, collision checking for trajectories is generally more ad hoc: trajectories are typically checked pointwise up to some resolution. We provide fast methods to check for collisions for absolutely continuous curves. In the video below, we describe the procedure at a high level and show an example of how such methods may be employed in sampling-based planning.
The collision detection extends, as a high-probability result, to situations where only a probabilistic model of the motion of an obstacle is available. The following figure shows two paths (green and blue), one of which has a higher probability of collision with an obstacle than the other. The motion of the obstacle is predicted using a Gaussian process model based on past data and probabilistic intention.
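As a toy illustration of this probabilistic setting, the sketch below estimates a path's collision probability against a Gaussian-predicted obstacle position by naive Monte-Carlo sampling. This is not the method from the work described above: unlike the exact and high-probability approaches, it checks the path only at finitely many points, and every name in it is invented for the example.

```python
import numpy as np

def collision_probability(path_points, obs_mean, obs_cov, obs_radius,
                          n_samples=10_000, rng=None):
    """Naive Monte-Carlo estimate of P(path comes within obs_radius of the
    obstacle), with the obstacle position drawn from a Gaussian prediction.
    A toy stand-in for an analytic high-probability bound."""
    rng = np.random.default_rng(rng)
    samples = rng.multivariate_normal(obs_mean, obs_cov, size=n_samples)
    # distance from each sampled obstacle position to the nearest path point
    d = np.linalg.norm(path_points[None, :, :] - samples[:, None, :], axis=2)
    return np.mean(d.min(axis=1) < obs_radius)

# compare two candidate paths against the same predicted obstacle
t = np.linspace(0.0, 1.0, 50)
path_a = np.stack([t, 0.2 * np.ones_like(t)], axis=1)   # passes close by
path_b = np.stack([t, 1.5 * np.ones_like(t)], axis=1)   # gives a wide berth
mean, cov = np.array([0.5, 0.0]), 0.05 * np.eye(2)
print(collision_probability(path_a, mean, cov, obs_radius=0.3))
print(collision_probability(path_b, mean, cov, obs_radius=0.3))
```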
2022-05-23 10:52:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5988703370094299, "perplexity": 645.6289048546794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558015.52/warc/CC-MAIN-20220523101705-20220523131705-00275.warc.gz"}
https://adreasnow.com/Undergrad/Research/2018%20-%20Summer/Week%203/21.1/
# Monday 21/1/2018

### Theory

I've finally sucked it up and started researching/drawing up some mechanisms, and I'm relatively confident with this amination [2] ([fig:amination]). Other mechanisms are proving more difficult to pin down though. While I can find a mechanism for the oxidation [1] (1 $$\ce{->}$$ 2), it's specific to hexenes. I can find evidence that the process works for aromatic compounds [3], but can't find a mechanism to prove it. The reduction (2 $$\ce{->}$$ 3) is also proving difficult, and in trying to figure out the mechanism by hand, I keep getting stumped by the lack of water or acid that would otherwise contribute protons to the solution.

Figure [fig:amination]: converting an anhydride into an imide.

### THF distillation

After the weekend, the THF that was distilled off was combined with fresh benzophenone and fresh sodium, and put back on the still to be refluxed.

### Synthesis of 1 (AS03) (attempt 3)

Coming back after the weekend, nothing had crystallised out of the flask, so more toluene was added and the mixture was dried with the rotary evaporator. The crude leftover solution was recombined with minimal DCM and left to slowly crystallise overnight.

1. Tabatabaeian, K.; Mamaghani, M.; Mahmoodi, N. O.; Khorshidi, A. Ultrasonic-Assisted Ruthenium-Catalyzed Oxidation of Aromatic and Heteroaromatic Compounds. Catal. Commun. 2008, 9 (3), 416–420. https://doi.org/10.1016/j.catcom.2007.07.024
2. Suraru, S. L.; Würthner, F. Strategies for the Synthesis of Functional Naphthalene Diimides. Angew. Chemie - Int. Ed. 2014, 53 (29), 7428–7448. https://doi.org/10.1002/anie.201309746
3. Yao, Z. S.; Wei, X. Y.; Lv, J.; Liu, F. J.; Huang, Y. G.; Xu, J. J.; Chen, F. J.; Huang, Y.; Li, Y.; Lu, Y.; et al. Oxidation of Shenfu Coal with RuO4 and NaOCl. Energy and Fuels 2010, 24 (3), 1801–1808. https://doi.org/10.1021/ef9012505
2023-03-26 16:26:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8605949878692627, "perplexity": 14636.539203609445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945473.69/warc/CC-MAIN-20230326142035-20230326172035-00219.warc.gz"}
http://www.gradesaver.com/textbooks/science/physics/physics-principles-with-applications-7th-edition/chapter-18-electric-currents-questions-page-520/9
## Physics: Principles with Applications (7th Edition) The resistance of the block can be calculated by $R=\frac{\rho L}{A}$. a. To minimize resistance, we want a small length and a large cross-sectional area. We do this by connecting wires to the 2 faces with the largest dimensions, 2a by 3a. This maximizes the area and minimizes the length. b. Similarly, to maximize resistance, we want a long length and a small cross-sectional area. This is accomplished by connecting the wires to the 2 faces with the smallest dimensions, a by 2a, which minimizes the area and maximizes the length.
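Making this concrete with $R=\frac{\rho L}{A}$ for a block with edge lengths $a$, $2a$ and $3a$:

$$R_{min} = \frac{\rho\, a}{(2a)(3a)} = \frac{\rho}{6a}, \qquad R_{max} = \frac{\rho\,(3a)}{(a)(2a)} = \frac{3\rho}{2a},$$

so the largest achievable resistance is $9$ times the smallest.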
2017-06-27 04:09:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8904595375061035, "perplexity": 526.0046036778999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320915.38/warc/CC-MAIN-20170627032130-20170627052130-00413.warc.gz"}
https://lara.epfl.ch/w/sav08/definition_of_set_constraints?rev=1429630213&do=diff
==== Syntax of Set Constraints ====

\begin{equation*}
\begin{array}{l}
S ::= v \mid S \cap S \mid S \cup S \mid S \setminus S \mid f(S, \ldots, S) \mid f^{-i}(S) \\
F ::= S \subseteq S \mid F \land F
\end{array}
\end{equation*}

where
  * $v$ - set variable

=== Example1 ===

\begin{equation*}
f(\{a,f(b,b)\},\{b,c\}) = \{ f(a,b), f(a,c), f(f(b,b),b), f(f(b,b),c) \}
\end{equation*}

=== Example2 ===

\begin{equation*}
f^{-2}(\{a, f(a,b), f(f(a,a),f(c,c))\}) = \{b, f(c,c)\}
\end{equation*}

=== Example3 ===

What is the least solution of the constraints
\begin{equation*}
\begin{array}{l}
a \subseteq S \\
f(f(S)) \subseteq S
\end{array}
\end{equation*}
where $a$ is a constant and $f$ is a unary function symbol?

=== Example4 ===

What is the least solution of the constraints
\begin{equation*}
\begin{array}{l}
a \subseteq S \\
red(black(S)) \subseteq S
\end{array}
\end{equation*}
where $a$ is a constant and $red$, $black$ are unary function symbols?

What does the least solution of these constraints represent (where $S$, $P$ are set variables):
\begin{equation*}
\begin{array}{l}
a_1 \subseteq P \\
\ \vdots \\
a_n \subseteq P \\
P \subseteq S \\
not(P) \subseteq S \\
and(S,S) \subseteq S \\
or(S,S) \subseteq S
\end{array}
\end{equation*}

We were able to talk about "the least solution" because the previous examples can be rewritten into the form
\begin{equation*}
(S_1,\ldots,S_n) = F(S_1,\ldots,S_n)
\end{equation*}
where $F : (2^{S})^n \to (2^{S})^n$ is a $\sqcup$-morphism (and therefore monotonic and $\omega$-continuous). The least solution can therefore be computed by fixpoint iteration (but it may contain infinite sets).

The previous example can be rewritten as
\begin{equation*}
\begin{array}{l}
P = P \cup a_1 \cup \ldots \cup a_n \\
S = S \cup P \cup not(P) \cup and(S,S) \cup or(S,S)
\end{array}
\end{equation*}

Is every set expression in set constraints monotonic? ++|Set difference is not monotonic in the second argument.++

Let us simplify this expression:
\begin{equation*}
f^{-1}(f(S,T))
\end{equation*}
++|
\begin{equation*}
= f^{-1}(\{ f(s,t) \mid s \in S,\ t \in T \})
\end{equation*}
++
++|
\begin{equation*}
= \{ s_1 \in S \mid \exists t_1 \in T \}
\end{equation*}
++
++|
\begin{equation*}
= \left\{ \begin{array}{rl}
S, & \mbox{if } T \neq \emptyset \\
\emptyset, & \mbox{otherwise}
\end{array} \right.
\end{equation*}
++

Consider conditional constraints of the form
\begin{equation*}
T \neq \emptyset \rightarrow S_1 \subseteq S_2
\end{equation*}
We can transform such a constraint into
\begin{equation*}
f(S_1,T) \subseteq f(S_2,T)
\end{equation*}
for some binary function symbol $f$. Thus, we can introduce such conditional constraints without changing expressive power or decidability.
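To make the fixpoint-iteration remark concrete, here is a small illustrative Python sketch (an addition, not part of the original page). It computes a depth-bounded approximation of the least solution of Example3's constraints $a \subseteq S$ and $f(f(S)) \subseteq S$, encoding ground terms as nested tuples:

```python
# Depth-bounded fixpoint iteration for the constraints  a <= S,  f(f(S)) <= S.
# The true least solution {a, f(f(a)), f(f(f(f(a)))), ...} is infinite, so we
# truncate terms at a fixed depth; the iteration then reaches a fixed point.

def depth(t):
    """Nesting depth of a term: 'a' has depth 0, ('f', t) has depth(t) + 1."""
    return 0 if t == "a" else 1 + depth(t[1])

def step(S):
    """One application of the constraints: add a, and f(f(t)) for each t in S."""
    return S | {"a"} | {("f", ("f", t)) for t in S}

def least_solution(max_depth=8):
    S = set()
    while True:
        T = {t for t in step(S) if depth(t) <= max_depth}
        if T == S:          # fixed point reached
            return S
        S = T

# every term in the output applies f an even number of times to a
print(sorted(least_solution(), key=depth))
```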
2019-11-18 15:22:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9204354286193848, "perplexity": 9088.09424540452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669795.59/warc/CC-MAIN-20191118131311-20191118155311-00262.warc.gz"}
https://www.gamedev.net/forums/topic/360616-c-code-correctness/
# C++ code correctness

This topic is 4498 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

UPDATED!!!

CONTEXT: After visiting this thread in the General Programming forums (Master C++) and reading many of the threads at Guru of the Week, I've seen a light.

PROBLEM: The light told me to rewrite all of my old code in an object-oriented, standards-compliant fashion. Coincidentally, on the same day I read those articles, I was having a hard time figuring out how to make the event handling mechanics of my OpenGL GUI work properly (indeed, it was only bad design's fault). So, I began rewriting the most basic classes I use in all my projects ("rectangle" and "status").
Please, see the code below: // Is it a good idea to put everything into the header (since this class is so small)? /** @file mi_rectangle.hpp * @author Alex F. Orlando (aka Majin) * @date 2005/11/25 - 2005/11/27 * @brief Rectangle class header file. */ #ifndef MI_RECTANGLE_HPP #define MI_RECTANGLE_HPP namespace MI { /** @class C_Rectangle * @brief 2D rectangle class used to delimit the coordinates of GUI elements * (set their bounds), as well as to move them and check for colisions. * The origin of the coordinates system is at bottom-left. Negative values * of width and height are not allowed, so some modifications may be * necessary at run-time depending on the user input (for example, if the * provided right coordinate is smaller than the left coordinate, these * two values will be swapped. */ class C_Rectangle { private: float left; ///< Left coordinate float right; ///< Right coordinate float bottom; ///< Bottom coordinate float top; ///< Top coordinate float width; ///< Width float height; ///< Height public: /****************/ /* Constructors */ /****************/ /** @brief Default constructor */ C_Rectangle() : left( 0.0f ), right( 0.0f ), bottom( 0.0f ), top( 0.0f ) {} /** @brief Detailed constructor * @param L Left coordinate * @param R Right coordinate * @param B Bottom coordinate * @param T Top coordinate */ C_Rectangle( float L, float R, float B, float T ) { setCoordinates( L, R, B, T ); } assert /*******************/ /* Acessor methods */ /*******************/ /** @brief Gets left coordinate * @return Left coordinate */ float getLeft() const { return left; } /** @brief Sets left coordinate * @param newLeft New left coordinate */ void setLeft( float newLeft ) { if ( newLeft <= right ) { left = newLeft; } else { left = right; right = newLeft; } width = right - left; } /** @brief Gets right coordinate * @return Right coordinate */ float getRight() const { return right; } /** @brief Sets right coordinate * @param newRight New right coordinate */ void setRight( float newRight ) { if ( newRight >= left ) { right = newRight; } else { right = left; left = newRight; } width = right - left; } /** @brief Gets bottom coordinate * @return Bottom coordinate */ float getBottom() const { return bottom; } /** @brief Sets bottom coordinate * @param newBottom New bottom coordinate */ void setBottom( float newBottom ) { if ( newBottom <= top ) { bottom = newBottom; } else { bottom = top; top = newBottom; } height = top - bottom; } /** @brief Gets top coordinate */ float getTop() const { } /** @brief Sets top coordinate * @param newTop New top coordinate */ void setTop( float newTop ) { if ( newTop >= bottom ) { top = newTop; } else { top = bottom; bottom = newTop; } height = top - bottom; } /** @brief Gets width * @return Width */ float getWidth() const { return width; } /** @brief Sets width * @param newWidth New width */ void setWidth( float newWidth ) { // Be sure that the new width is not negative if ( width >= 0.0f ) { width = newWidth; right = left + width; } } /** @brief Gets height * @return Height */ float getHeight() const { return height; } /** @brief Sets height * @param newHeight New height */ void setHeight( float newHeight ) { // Be sure that the new height is not negative if ( newHeight >= 0.0f ) { height = newHeight; top = bottom + height; } } /*****************/ /* Other methods */ /*****************/ /** @brief Sets left and bottom coordinates (automated right and top adjustment) * @param newLeft New left coordinate * @param newBottom New bottom coordinate */ void 
setCoordinates( float newLeft, float newBottom ) { left = newLeft; right = left + width; bottom = newBottom; top = bottom + height; } /** @brief Sets left, right, bottom and top coordinates (complete set) * @param L Left coordinate * @param R Right coordinate * @param B Bottom coordinate * @param T Top coordinate */ void setCoordinates( float L, float R, float B, float T ) { left = L; if ( R >= L ) { right = R; } else { left = R; right = L; } bottom = B; if ( T >= B ) { top = T; } else { bottom = T; top = B; } width = right - left; height = top - bottom; } /** @brief Translates the given distances * @param x Horizontal distance * @param y Vertical distance */ void translate( float x, float y ) { left += x; right = left + width; bottom += y; top = bottom + height; } /** @brief Checks whether or not contains the point at the specified location * @param x X coordinate * @param y Y coordinate * @return True if contains the point at the specified location */ bool containsPoint( float x, float y ) const { return (x > left) && (x < right) && (y > bottom) && (y < top); } }; } // End namespace #endif /** @file mi_status.hpp * @author Alex F. Orlando (aka Majin) * @date 2005/11/24 - 2005/11/27 * @brief Status class header file. */ #ifndef MI_STATUS_HPP #define MI_STATUS_HPP // Includes STL string header file #include <string> namespace MI { /** @enum E_StatusCode * @brief Status code enumeration used to simplify status' access. */ enum E_StatusCode { SC_NONE, ///< No error found (success) SC_UNACCESSIBLE, ///< Unaccessible file }; /** @class C_Status * @brief Status class used to keep track of objects' status (in most of the * cases, error codes and their descriptions). The user don't have direct * access to the description variable since that is directly dependent of * the current code, thus automatically defined by the class */ class C_Status { E_StatusCode code; public: /****************/ /* Constructors */ /****************/ /** @brief Default constructor */ C_Status() { setCode( SC_NONE ); } /** @brief Detailed constructor * @param newCode New code * @param newDetails New details description */ C_Status( E_StatusCode newCode, const std::string& newDetails ) { setCode( newCode ); setDetails( newDetails ); } /*******************/ /* Acessor methods */ /*******************/ /** @brief Gets code * @return Code */ E_StatusCode getCode() const { return code; } /** @brief Sets code * @param newCode New code */ void setCode( E_StatusCode newCode ) { code = newCode; } /** @brief Gets description * @return Description */ const std::string getDescription() const { std::string description; switch ( code ) { case SC_NONE: description = "No error found (success)"; break; case SC_UNACCESSIBLE: description = "Unaccessible file"; break; break; description = "Bad attributes (width, height etc.)"; break; break; default: description = "Unknown error code"; } return description; } /** @brief Gets details * @return Details */ const std::string getDetails() const { return details; } /** @brief Sets details * @param newDetails New details */ void setDetails( const std::string& newDetails ) { details = newDetails; } /*****************/ /* Other methods */ /*****************/ /** @brief Gets most verbose description possible * @return Most verbose description possible */ const std::string getVerbose() const { std::string temp = getDescription() + ": " + details; return temp; } /** @brief Sets most verbose description possible * @param newCode New code * @param newDetails New details */ void setVerbose( E_StatusCode 
                     const std::string& newDetails )
    {
        setCode( newCode );
        setDetails( newDetails );
    }
};

} // End namespace

#endif

QUESTION: Could you tell me if things are acceptable in the code above, and give me some advice on how to improve it? Everything from "temporary objects avoidance", "const-correctness", "code legibility", "proper documentation" (using some of Doxygen's accepted comment syntax) and "object-oriented paradigm" (encapsulation etc.) will be accepted. Thanks a lot, guys!

[Edited by - MajinMusashi on November 27, 2005 3:59:01 PM]

##### Share on other sites

For the rectangle class:

* You may want to consider renaming the member variables somehow to indicate that they're member variables. A lot of people use an m_ prefix to indicate member variables; I personally use an _ suffix to indicate non-public member variables. Not a big deal for a rectangle class, but for more complex classes this kind of practice often comes in handy.
* Consider using member initialization lists instead of calling member functions to initialize the rectangle.
* Consider assert()ing on bad values instead of the current check-and-sets.
* Your operator==() is dangerous. Floating point comparisons with == often don't do what you wish. Consider comparing the absolute value of the difference of the coordinates against a tolerance value.
* You don't actually need to define the copy constructor or assignment operator; the defaults will work fine for this class.
* You don't actually need all six member variables; consider scrapping a couple.

For the status class:

* Member variable names again.
* You don't need to define a copy constructor or assignment operator here either.
* External include guards are evil (and in this case non-portable); don't use them.
* The description member variable is pretty pointless; consider doing the description lookup when the description field is requested, not during construction.
* setCode() and setDetails() seem somewhat dangerous to be part of the public interface. It would seem to make more sense for your status objects to be constructed with full knowledge of their state and then treated as value objects.
* Are two status objects really equal if their details differ?

##### Share on other sites

A lot of the documentation in the rectangle example seems completely pointless. It provides no information beyond what's either already in the function name, or couldn't become part of the function name. There's no reason to define a copy constructor or an assignment operator for this class... the defaults will function as expected. The equality operator relies on floating point equality, which is less than useful. Haven't really looked over the other example, but is there a reason the enumeration isn't a part of the class?

CM

##### Share on other sites

Quote: Original post by SiCrane
For the rectangle class:
* You may want to consider renaming the member variables somehow to indicate that they're member variables. A lot of people use an m_ prefix to indicate member variables; I personally use an _ suffix to indicate non-public member variables. Not a big deal for a rectangle class, but for more complex classes this kind of practice often comes in handy.

This is a matter of choice, but I really don't like this approach (as the guy who wrote "How to Write Unmaintainable Code", hehe).

Quote: * Consider using member initialization lists instead of calling member functions to initialize the rectangle.
I agree with this only in the default constructor (done!), because the detailed constructor's parameters must be validated by the "setCoordinates" method.

Quote: * Consider assert()ing on bad values instead of the current check-and-sets.

Not agreed, since this check is useful at execution time, i.e. in my OpenGL GUI when the user is resizing a button.

Quote: * Your operator==() is dangerous. Floating point comparisons with == often don't do what you wish. Consider comparing the absolute value of the difference of the coordinates against a tolerance value.

Agreed (I've already had trouble with this)! In the past, I've used a tolerance value of 0.00001 (don't remember the source though). Do you think it's worth it in the rectangle's case? Well, actually I don't :)

Quote: * You don't actually need to define the copy constructor or assignment operator; the defaults will work fine for this class.

Agreed! Done!

Quote: * You don't actually need all six member variables; consider scrapping a couple.

Do you mean to get rid of "width" and "height" and calculate them on demand?

Quote: For the status class:
* Member variable names again.

Again, not agreed.

Quote: * You don't need to define a copy constructor or assignment operator here either.

Agreed! Done!

Quote: * External include guards are evil (and in this case non-portable); don't use them.

Agreed (and done!) in the non-portability case (especially regarding MS VC++ and STL), but why would that be evil when using my own files?

Quote: * The description member variable is pretty pointless; consider doing the description lookup when the description field is requested, not during construction.

Agreed! Done!

Quote: * setCode() and setDetails() seem somewhat dangerous to be part of the public interface. It would seem to make more sense for your status objects to be constructed with full knowledge of their state and then treated as value objects.

Could you give an example?

Quote: * Are two status objects really equal if their details differ?

##### Share on other sites

Quote: Original post by Conner McCloud
A lot of the documentation in the rectangle example seems completely pointless. It provides no information beyond what's either already in the function name, or couldn't become part of the function name.

Agreed, but what should I say about a variable named "width" that is not redundant?

Quote: There's no reason to define a copy constructor or an assignment operator for this class... the defaults will function as expected. The equality operator relies on floating point equality, which is less than useful.

Agreed! Done!

Quote: Haven't really looked over the other example, but is there a reason the enumeration isn't a part of the class? CM

No, there is not! Can you point me to how to make it?

##### Share on other sites

Quote: Original post by MajinMusashi
Quote: Original post by Conner McCloud
A lot of the documentation in the rectangle example seems completely pointless. It provides no information beyond what's either already in the function name, or couldn't become part of the function name.
Agreed, but what should I say about a variable named "width" that is not redundant?

I would say nothing at all. I also wouldn't comment on the getLeft, getRight, etc. functions, except whatever is necessary to ensure they get listed in the external documentation. The set function documentation would be expanded to explain what changes they make... i.e., swapping left and right if necessary to ensure left <= right.

Quote: Not agreed, since this check is useful at execution time, i.e.
in my OpenGL GUI when the user is resizing a button.

How important is it that left <= right at all times? What happens if left > right? Personally, I would define a rectangle as just two coordinates, with no spatial constraints on them at all. Then your set functions become trivial [if not removed outright in favor of public data], with some slight extra complexity for things like drawing the rectangle. But you're in a better position than I to know how much extra complexity this would entail.

Quote: Do you mean to get rid of "width" and "height" and calculate them on demand?

Or get rid of one of the coordinates, and calculate them using the width and height.

CM

##### Share on other sites

Quote: I would say nothing at all. I also wouldn't comment on the getLeft, getRight, etc. functions, except whatever is necessary to ensure they get listed in the external documentation. The set function documentation would be expanded to explain what changes they make... i.e., swapping left and right if necessary to ensure left <= right.

Well, it would be very strange if, in some parts of the Doxygen-generated documentation, some methods were commented and some weren't. Agreed about the expanded explanation of the "set" methods.

Quote: Or get rid of one of the coordinates, and calculate them using the width and height.

The coordinates are very important to the drawing methods, so I think I'll get rid of "width" and "height". Thanks a lot! Still accepting suggestions :)
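Two of the questions above were left open: "Could you give an example?" (about treating the status as a value object) and "Can you point me to how to make it?" (about moving the enumeration into the class). Below is a minimal sketch of both ideas together, in the C++ style of the thread. It only illustrates the suggestions and is not code from the thread.

// Sketch: the enum now lives inside the class (referred to from outside
// as C_Status::SC_NONE etc.), and the object receives its complete state
// at construction time; there are no public setters to mutate it later.
#include <string>

namespace MI
{
    class C_Status
    {
    public:
        /// Status code enumeration, now scoped inside the class
        enum E_StatusCode { SC_NONE, SC_UNACCESSIBLE, SC_BAD_ATTRIBUTES };

        /// The only way to establish the state is at construction
        C_Status( E_StatusCode newCode, const std::string& newDetails = "" )
            : code( newCode ), details( newDetails ) {}

        E_StatusCode getCode() const { return code; }
        const std::string& getDetails() const { return details; }

    private:
        E_StatusCode code;     ///< Status code
        std::string details;   ///< Details description
    };

} // End namespace

// Usage (the file name is made up for the example):
// MI::C_Status status( MI::C_Status::SC_UNACCESSIBLE, "mi_gui.cfg" );

Because the object can no longer change after construction, any function receiving a C_Status can rely on its code and details belonging together, which is the "value object" point made above.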
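For the floating-point comparison and the "scrap a couple of members" suggestions, one possible shape (again only a sketch; the tolerance constant and the helper name are mine): store left/bottom/width/height, derive right/top on demand, and compare with a tolerance instead of a raw ==.

// Sketch: four stored values instead of six, and an approximate equality
// test in place of an exact floating-point comparison.
#include <cmath>

namespace MI
{
    const float TOLERANCE = 1e-5f;   // tune this to the units of your GUI

    /// True if a and b differ by no more than TOLERANCE
    inline bool nearlyEqual( float a, float b )
    {
        return std::fabs( a - b ) <= TOLERANCE;
    }

    class C_Rectangle
    {
        float left;    ///< Left coordinate
        float bottom;  ///< Bottom coordinate
        float width;   ///< Width (kept non-negative)
        float height;  ///< Height (kept non-negative)

    public:
        C_Rectangle( float newLeft, float newBottom,
                     float newWidth, float newHeight )
            : left( newLeft ), bottom( newBottom ),
              width( newWidth ), height( newHeight ) {}

        // right and top are now derived, not stored
        float getRight() const { return left + width; }
        float getTop() const   { return bottom + height; }

        /// Tolerance-based comparison instead of raw floating-point ==
        bool operator==( const C_Rectangle& other ) const
        {
            return nearlyEqual( left, other.left )
                && nearlyEqual( bottom, other.bottom )
                && nearlyEqual( width, other.width )
                && nearlyEqual( height, other.height );
        }
    };

} // End namespace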
http://www.scienceforums.net/leaderboard/
1. ## Strange Senior Members 59 13540
Moderators 55 37050
3. ## Area54 Senior Members 40 253
4. ## iNow Senior Members 39 17356

## Popular Content

Showing content with the highest reputation since 07/24/17 in all areas

1. 6 points

2. 5 points

## What are some scientific alternatives to the theory of evolution?

This would be very cool. Unfortunately, here in the USA, I don't see that happening any decade soon, since there would be parental concern about the radioactivity as well as (unreasonable) religious concerns.

* The discussion reminds me of a point I read once that if Darwin had been wrong, the discovery of DNA and how it works would have destroyed his theory and we would have been seeking an alternate mechanism. Instead, the discovery of DNA led to a refinement of Darwin's theory into the even more robust Evolutionary Theory. It is humbling to think of how Darwin, through observation and deduction, derived the process of natural selection and developed his theory, and that we have only needed to refine it a bit in spite of more than a century of amazing discoveries. I used to believe in evolution but I no longer do after studying it – now I understand and accept it. I also now see three groups: those who believe in evolution, those who don't believe in evolution – but neither of those groups understand it – and those who understand and accept evolution. I have never met someone who understands evolution but doesn't accept it – all who claim to be such people show that they don't actually understand evolution and really they just don't believe in evolution.

3. 4 points

## An Idea That I Have

Thank you so much for your input! I was absolutely hoping for a response like this. I knew that there were inconsistencies, and I am glad that you pointed out and explained each inconsistency and flaw with this idea. When I say "independent research" I was mainly reading up on various reputable sources from various university websites that have certain topics explained by lecturers posted online. I also have been listening to some physicists on youtube talk as well, but that can get a little sketchy, cause it's youtube after all. And then the chargeless and massless part I learned, cause I did not know that photons were massless and chargeless, but I did have "light" in mind because I knew that light was massless but didn't think that it was chargeless, but we are all wrong sometimes! I really am looking to get into this kind of field one way or another, but again, I gotta take baby steps. This was just an idea that I had that I wanted to bounce off already established laws and theories before I start doing anything serious with it! Thank you for that congrats! I like to try to stay humble with my ideas because I don't like to get cocky with this kind of stuff. I am completely unqualified and I really wanted to dive into the world of physics to try and better prove or disprove any ideas I may have. I do agree that I should get into some books and attend some courses, but I gotta stick to my game design school so I don't fail it and miss out on a great opportunity like a B.S. in Game Design, you know? Both of you guys really helped me understand a lot! Thank you for your guys' inputs!

4. 4 points

## What are good, scientific sources about human vaccines ?

Attenuated strains are simply viruses that have very low virulence in the target organism. They are produced either by selection or, in some cases, targeted mutagenesis.
They are typically propagated in vitro, and quality control is quite complex and includes testing of the cultures (to ensure that they are still pure and only contain the virus), animal testing using susceptible models, and so on. The theoretical risk is that somehow, attenuated strains may acquire further mutations that turn them virulent again. The only example I can think of where it actually has happened is the case of polio vaccines, with something in the order of 20ish outbreaks in a decade throughout the world (interestingly, typically in regions where vaccination was inadequate). However, in order to allow transmission, the vaccinated individual would first have to become sick. But again, these are incredibly rare incidents, far outweighed by the risk of actual chance infection, especially in populations with low vaccination rates.

5. 4 points

6. 3 points

## The Nature of Time and the Essence of Relativity

Posting the same drivel on other forums does not constitute "publishing". But if you are unable to get this crap into a peer reviewed journal, then I suppose spamming it all over the Internet is the best you can do.

7. 3 points

## The Impeachment of Trump?

There is a way to address the fact that violence from anyone is inappropriate without ignoring the fact that there was a major white supremacist rally during which someone protesting against it was killed by a white supremacist who intentionally drove his car into a crowd; ignoring that is what Trump did in his initial remarks. He then made a statement addressing those concerns days later, which it later came out he was essentially forced to do over his own protestations by his staff, complained when the press coverage of his handling remained negative even after he made the statement everyone said he should have made in the first place, and then the day after that he effectively repudiated everything he'd said the day before and went back to "Both Sides Are Bad." Trump's statement that there was violence on both sides is true in the same way that it is true that there were Jews who committed crimes against Germans in the 1930s: it might technically be true, but it is completely irrelevant in light of the larger discussion of what was going on and useful only as a tool to draw false equivalencies and distract from larger problems. Even if there was no violence at all from anyone on either side, the rally that happened would not be ok. Legal, but not ok, and the fact that deadly violence was used by that side on top of it makes any attempt to draw an equivalence between the two morally bankrupt in the extreme. A statement being literally true does not mean that it is not also Trojan-horsing a lie by implication, nor does it render the statement immune from criticism.

8. 3 points

## The Official JOKES SECTION :)

Moontanman, that reminded me of Bo Burnham's song, "From God's Perspective - You're Not Going to Heaven". ROFL!!!

9. 3 points

## Welcome to the upgraded SFN

Just found something easy and useful. To quote extracts of a post: highlight the desired section and a little "quote this" label shows up. Click that, and you then have a named and time-stamped quote in your input box.

10. 3 points

## Typesetting equations with LaTeX: updated

After a recent update to our forum software, typesetting equations on SFN has changed a little bit.
Although we are still using LaTeX, for a variety of reasons we've elected to shift over from our custom-written LaTeX generator to the excellent MathJax library, which will take your equations from post text and render them in your browser. Much as before, the idea is that in your post you surround equations with special characters, and MathJax will convert the contained text into an equation for you. There are two types of equation that you can typeset:

Inline math is displayed in the flow of a sentence, such as $$y= x^2$$. This example was produced by using the text $$y=x^2$$. Note that we do not support \$ signs as most LaTeX users would be familiar with, since this occurs too frequently in text.

Display math breaks up a paragraph and can be used for typesetting larger equations such as $y = \int f(x) dx.$ The text then picks up afterwards. This example was produced by using the text $y = \int f(x) dx$, which we note is exactly what one would type in a usual LaTeX document.

For reference, the old guide is still available and has a number of useful examples for those getting started.

Finally, please note that for legacy posts, the old tags will still continue to work and these will display equations as inline. However, it's likely that older posts may look different to the way that they did before.

11. 3 points

## Vis_viva ? (Dark Energy)

So you're going to rewrite physics and mathematics just because you're too lazy to learn. Got it. Thanks for wasting our time.

12. 3 points

## How do I come to terms with death?

As it happens, my father-in-law passed away two weeks ago and we had a celebration this past weekend in his honor. I bring this up because I was so impressed by the way he came to terms with his own end of life. He was 91 and recently diagnosed with a cancer that would end his life within months. His response was to ponder it for a moment, state that he didn't want to pursue any treatment, and then ask what time the baseball game was on. In order to make sure he looked his best, he went out and got a haircut. He planned his memorial mass, a luncheon (including the menu, venue, and guest list), and paid up front for the party, which included an open bar. About 150 people showed up. He scheduled the festivities for two weeks after his death so as to not cause anyone to have to make sudden travel plans. He also declined to have a funeral procession to the cemetery, as he thought they were disruptive to traffic. The eulogy he wrote for himself was short and sweet because he said he knew it could be difficult for the person who has to read it. He went out with a smile and treated his last day no differently than he treated any other. I don't know how he did it, but his example of how to approach death is what I now aspire to.

13. 3 points

## Myths - Immortality through thought

There is so much of this guff about, you have to develop a filter, or you will waste a significant chunk of your life debunking rubbish. Surely there is enough in what you have written to convince anyone that it's rubbish? You need to turn off your curiosity at the very first clue, because it's too valuable to waste on these con-artists. The time you waste on it could be spent on genuine science, which is equally amazing, and has the added bonus of being true and in line with the real world.

14. 3 points

## How do I get a simple compact list of what's going on?

Apparently every website now needs to be like a social network. Soon you'll be able to see pictures of every meal swansont has.
I know, I don't understand it either. To make it a bit easier, I added an "Unread Topics" link under My Activity Streams, which takes you to see all unread topics, condenses the list, and has links to take you immediately to the first unread post. Is that more helpful to you? I'll be sure to add a separate "Swansont's Lunch" feed later.

15. 3 points

## Newton's gravity equation

Imagine 2 satellites orbiting the earth very close to each other. Do you have any problem with that? (You need Newton's laws to understand this.) Now imagine that one satellite is bigger than the other, but both still orbiting very close to each other. Do you have any problem with that? (You need Newton's laws to understand this.) Now assume the biggest one is hollow, and the smaller one is completely inside the other. Both are doing their own orbit, which happens to be the same. Do you have any problem with that? (You need Newton's laws to understand this.) So the smaller satellite is 'weightless' in relation to the big satellite: it does not bounce off any wall of the bigger satellite. Do you have any problem with that? (You need Newton's laws to understand this.) Now suppose the bigger satellite is filled with air, and the smaller one is an astronaut... Do you have any problem with that? (You don't need Newton's laws to understand this.)

16. 3 points

## Banned/Suspended Users

GeniusIsDisruptive has been banned. Was too little genius and way too much disruptive.

17. 3 points

## Newton's gravity equation

I'm not sure if the OP is still around to see this, or if he'll even bother to read and try to understand it, but: Have you ever ridden in a fast moving car that reaches a dip in the road, and you get the "leaving your stomach behind" feeling? This is just a lesser magnitude example of what is happening for the astronauts. At that moment, you are just slightly "lighter" with respect to the car than you are normally, and this is what your stomach is reacting to. If you were to be traveling so fast that the car's wheels left the road for some time, you would feel "weightless" and objects would "float" around in the car. This is because your "weight" is a function of gravity pulling down on you and the car, but the ground pushes back on the car and through it to you. When the car leaves the ground, both you and the car are still subject to the same force of gravity as before, but are both free to respond to it equally and are both in free fall. There is no force pushing up on you to resist gravity, and you are "weightless". The "Vomit Comet", which is a plane designed to simulate this effect for longer than you could achieve in the car, does it by flying in an arc that follows such a free-fall path. Again, while following this arc, you, and objects in the plane, float around in a weightless condition, even though the force of gravity on you has not let up one bit. The ISS is in a state of continuous free fall. While near the surface of the Earth and at normal speeds such a free-fall path will always end up intersecting with the ground eventually, at the altitude and speed of the ISS, as it curves towards the ground, the ground curves away also, due to the spherical shape of the Earth. It basically ends up in a state of constantly falling towards the Earth, but it keeps missing it.
Since both the ISS and astronauts are following the same free-fall path, the astronauts end up in the same weightless condition as in the car in the air or the Vomit Comet during the arc, even though the force of gravity acting on them is not much weaker than it would be at the surface.

18. 3 points

## Religions influence on Science

A word isn't necessarily exactly the same as any of its synonyms: else why would the English language have such a diverse pool of words to choose from? I agree that faith and trust both require us to believe something. The subtlety here is that faith compels us to believe something regardless of the evidence. Science asks us to believe something so long as it accords with the evidence.

19. 3 points

## The meaning of life

I thought everybody knew by now that the answer to Life, the Universe and Everything is 42.

20. 2 points

This doesn't make sense to me. If people are starving, it's because they can't afford food or because food isn't available. If it's the former, then they weren't customers in the first place, so there is no lost business. If it's the latter, then there is no business to be lost. You need to make contraception available, too. That's often a religion-induced issue. But that would happen even if they improved the water supply on their own, so I don't see how you can hang that on charity. This sounds similar to the common "poor people are lazy" propaganda, along with branding them as stupid, to boot. It's bogus.

21. 2 points

22. 2 points

## Reconciling science and religion

Apologies, I obviously missed it. But now I'm more confused: you insist that Christians must follow the bad bits (stoning an adulterer), and presumably the good bits (turning the other cheek - forgiving the adulterer in this case), but insist they must follow both to be classed as a 'true' Christian. Your definition leads to the absurdity that there are no Christians on this planet, since it is impossible to do both of these contradictory things. If even the Pope is not a Christian by your definition, you've got to start questioning your definition, surely?

23. 2 points

## Lefty-Science Privilege

I don't know if I agree with your premise about a la carte beliefs being in conflict with categories of belief like, for instance, not believing in the resurrection disqualifying one from being a Christian. All beliefs are to one degree or another a la carte. You will be hard pressed to find two people whose beliefs are exactly identical on a wide range of topics. That disagreement means that there is rarely a set of canonical beliefs that fall under a single label which are not disagreed on at any point by people who take that label for themselves. As such, applying labels to any set of beliefs is not a practice of objective classification but really one of taxonomy. It is grouping non-identical things into categories of likeness. And any such taxonomic classification is going to, to some degree, be arbitrary and subjective, with some blurriness on the edges. To describe yourself as a religious person, do you need to share all of your religious beliefs with every other person who describes themselves as religious? Of course not. Similarly, I do not think that one must share all of their Christian beliefs with other Christians in order to qualify as a Christian.
Just like you cannot reasonably expect a person to defend the existence of Krishna just because they describe themselves as religious, I do not think that you can hold everyone who describes themselves as Christian to defend every belief that is common among some groups of Christians. Nor must someone who is a scientist defend every belief that is common in the scientific community. Nor must a conservative defend the beliefs and actions of every other conservative, nor liberals those of liberals. Such labels are names of convenience, not accurate descriptions of all of a person's beliefs. If a person decides to take a label on as part of their identity, then it is incumbent upon them to stake out where they differ from popular perceptions of the beliefs commonly associated with that identity, but they do not necessarily have to subscribe to every single one of those beliefs in order to retain that label for themselves, and expecting them to put up a universal defense of those beliefs or else admit that they don't really qualify for the label is generally unreasonable. A simple statement of "I am an X who does not agree with Y belief" should be enough.

24. 2 points

## Oxygen levels in the Triassic

I found the paper to be an accurate representation of what I have read from other sources. Toward the end of the Carboniferous and the formation of Pangaea, combined with our fourth ice age at that time, the atmospheric oxygen levels began to decline. When that ice age ended approximately 270 million years ago, atmospheric oxygen levels were already down to ~18%, with atmospheric carbon dioxide levels between 250 and 350 ppmv. That is also when temperatures began to rise, reaching between 35°C and 40°C. Contrary to popular belief, there were three main extinction events between 270 and 250 million years ago, and each of those three Permian extinction events (spaced between 9 and 11 million years apart) was larger than the extinction event that killed the dinosaurs. When the Siberian Traps began erupting 248 million years ago, the particulates in the atmosphere helped cool off the planet and increased carbon dioxide levels significantly (by as much as 1,200 ppmv according to some sources), but atmospheric oxygen levels would take longer to increase. While atmospheric oxygen levels have certainly determined the size of arthropods, I am not sure the same thing applies to reptiles or dinosaurs. We have evidence of large arthropods during the Carboniferous, when atmospheric oxygen levels were as high as 33%. Even during the Silurian and Devonian, when oxygen levels spiked to 24%, there were 3 meter long arthropods. The atmospheric oxygen levels would drop again towards the end of the Devonian, which is also when the giant eurypterids went extinct. What seems to matter more, with regard to the size of reptiles, dinosaurs, and even mammals, is the amount of space they have, not the amount of oxygen. Reptiles, dinosaurs, and mammals get bigger and bigger the more space they are given, irrespective of the amount of atmospheric oxygen that is available. During the Triassic the massive Pangaea continent was beginning to break up, but the super-continents of Gondwanaland and Laurasia still existed, so they could start getting large during this period. When Gondwanaland and Laurasia started to break apart into South America, Africa, North America, and Eurasia about 100 million years ago, it also ended the reign of the large sauropods.
Life on land would never again get that large, and it had nothing to do with atmospheric oxygen levels. See also "The Silurian-Devonian: How An Oxygen Spike Allowed The First Conquest of Land", from The National Academies of Sciences, Engineering, and Medicine, Chapter 5 - https://www.nap.edu/read/11630/chapter/7

25. 2 points

## Holographic Universe Hijack (from Quantum Entanglement ?)

There is something interesting that points to a causal relationship between energy (mass) and entanglement. "In stark contrast to transport experiments, absorption of a single photon leads to an abrupt change in the system Hamiltonian and a quantum quench of Kondo correlations. By inferring the characteristic power law exponents from the experimental absorption line-shapes, we find a unique signature of the quench in the form of an Anderson orthogonality catastrophe, originating from a vanishing overlap between the initial and final many-body wave-functions. We also show that the power-law exponents that determine the degree of orthogonality can be tuned by applying an external magnetic field which gradually turns the Kondo correlations off." https://arxiv.org/pdf/1102.3982.pdf

This wiki concerns electronic correlation, interesting stuff. https://en.wikipedia.org/wiki/Electronic_correlation#Mathematical_viewpoint

Maybe, if it's true that the breaking of entanglement creates energy or mass, then that might be related to the way the observable universe is created... but it's a very big 'IF'. According to big bang cosmology there was a thermal equilibrium. Regions which today are out of causal contact were once in equilibrium with each other... https://arxiv.org/pdf/1205.1584.pdf

Matter is any substance that has mass and takes up space. Mass is a property of a physical body. It is the measure of an object's resistance to acceleration (a change in its state of motion) when a net force is applied.

26. 2 points

## Humans are Gods creating more powerful Gods, that too, shall likely create more powerful Gods, that too shall likely create more powerful Gods... etc

Because we don't have a toilet?

27. 2 points

## The Impeachment of Trump?

You don't kill bad ideas by silencing them. You kill bad ideas by addressing them. These insecure, ignorant, inferiority-complex-ridden knuckle draggers should be allowed to share their views and march (and only a tiny insignificant few people are claiming otherwise). Likewise, people who wish to defend more forward-looking inclusive values should be allowed to share their views and march in response. The scales will tip. The haters will retreat to the shadows. They won't be extinguished, but their embers contained. Let them proudly display their ignorance, and let us proudly display why it has no place in a modern society.

28. 2 points

## The Impeachment of Trump?

You are passively defending the Nazis by pretending there were other groups of non-bigots there whose focus was preserving history and by insisting both sides were violent. Only ONE SIDE brought torches, shields, and helmets. Only ONE SIDE drove a car into a crowd of people. Only ONE SIDE killed someone. Only ONE SIDE is recognized as a hate group and on DHS domestic terror watch lists.

29. 2 points

## The Impeachment of Trump?

"I can't remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you're saying that the most compelling thing you can say for your position is that it's not literally illegal to express." https://xkcd.com/1357/

30.
2 points

## Quantum Entanglement ?

They don't behave as separate particles until you measure them; they are described by a wavefunction. The degrees of freedom depend on the properties of the entangled particles.

31. 2 points

## Genetic Drift - A Population Size Which Would Be Small Enough

It really depends on selective force. Assuming no selection, the number of generations required to fix an allele is: E(T) = -4Ne [p ln p + (1-p) ln (1-p)], with Ne being the effective population size and p the allele frequency. The latter is important, as in a large population p approaches the expected statistical distribution (i.e. 50:50 in case of two alleles), but with declining population sampling error occurs and you may have a skewed distribution, which is effectively what the drift is. As you can see, the time is maximized for p=0.5 and drops off if we see it skewed. However, if we add selection to the mix, it gets more complicated, depending on how it acts on the population. It could accelerate the effect if the particular allele is positively selected (and by how much depends on the selective force each generation). However, selection could go in the opposite direction (i.e. favoring the allele with lower frequency). In that case the balance of the two competing events would determine the fixation rate.

32. 2 points

## Do you think the religion of Islam will ever be respected in a planet where the majority of people are non-Muslims?

I don't know. But if Islam and only Islam was the sole determinant, then we would expect to see all Muslim countries and populations adopt the death penalty for apostasy. We don't see that; therefore there are other factors involved. Which is not to say Islamic doctrine does or does not contribute, only that if it does contribute it is one factor among others.

33. 2 points

## Outliers in small sample

What are the error characteristics of your measurement system? You have 6 data points. Are they all supposed to be a repeat measurement of the others? With the information you've posted and only 6 data points, it's impossible to give any good advice (with the possible exception of: repeat the experiment some more).

34. 2 points

## Today I Learned

Today I learned about the hyrax. Although you can't tell from the picture, this rodent-like creature is actually more closely related to elephants!

35. 2 points

## Scientific research on sudden paranormal ability

Seeing veins is neither paranormal nor psychic. Phlebotomists do it every day. It's also possible your visual system is a bit borked and you're just more sensitive to the near-infrared side of the visual spectrum. Machines like those described at the following link would allow you to run tests to see if this is anything more than a delusion: https://www.cnet.com/news/near-infrared-makes-veins-easier-to-find/ Who knows, though. If you keep using meth, soon we'll be able to see gaps where your teeth used to be, tears where joy used to be, and potentially even graves where life used to be.

36. 2 points

37. 2 points

## Today I Learned

Cats trained as spies, DrmDoc? Interesting but unworkable, I'm afraid. I don't see a problem with the black tux, Omega Seamaster or even the Walther PPK. But shifting gears in the Aston Martin would be a problem.

38. 2 points

## How do I come to terms with death?

Some great answers were already given. Yes, it might seem hard, but would you refuse to go to the movies, just because you know the movie will end? And that after a year you might not remember much of it?
I could give an answer in a Buddhist spirit: first realise that everything is changing, and therefore also will come to an end. Secondly, realise that you, as an autonomous, independent object, do not exist. You are the sum of all your biological and biographical factors, which include decisions you made yourself. Thirdly, realise that you are not the only one: every conscious being is in this situation. If you can really feel this, it can increase your compassion with other beings. When you feel this, and can act accordingly, such questions will not bother you anymore. Read this comic for another view on this. I am glad to hear this from you.

39. 2 points

## Producing a hierarchy of human life

Mike, you seem to be entirely missing the point. You have not demonstrated that any part of your hierarchy is valid. You just keep repeating what it is and what you believe about it. That is about as useful as a chess set in a rugby game. If you cannot at least make a serious attempt to justify your claims, I shall have to conclude you are just trolling, and I really will be out of here.

40. 2 points

## How do I come to terms with death?

Resignation is the answer. Whenever something bothers you, there are two, and only two, options: 1) do something about it; 2) accept it and move on. Since death is inevitable, there is nothing you can do about it, so just forget about it and go on enjoying your life. You can of course do some things to prolong life, such as not smoking and eating healthy, but those fall under category 1 (unless you choose to forget about that too). (There is of course a third option, which is often taken: fear, anger, resentment, jealousy, despair... none of these leads to nice places.) I think your feeling describes nicely why religion is so popular.

41. 2 points

## Newton's gravity equation

No. The given radius gives an orbital speed of 7673.556779 m/sec. At that speed and radius, the centripetal force would be 7673.556779^2 x 50 / 6.771e6 = 434.8211... N. The force of gravity would be 3.987e14 x 50 / (6.771e6)^2 = 434.8211... N. They are equal to the same degree of accuracy we have for the given parameters. The 3.987e14 is the "gravitational parameter" for the Earth, which is equal to GM, where M is the mass of the Earth. This gives a more accurate answer, as we know it to a better accuracy than we know either the gravitational constant or the mass of the Earth separately.

42. 2 points

## Welcome to the upgraded SFN

You want to click the little dot or star next to the title of each thread, which takes you to the first unread post in that thread. It's not particularly well labeled. Also, under the Activity Tab, My Activity Streams, there should be an Unread Topics stream which is condensed and easier to follow. I wonder if I can adjust the defaults.

43. 2 points

## Transuranic elements in a star

There is a rich treasure trove out here. First, this 2008 paper acknowledges Gorierly's proposal, but attributes the high speed particles, which generate the short-lived isotopes, not to the actions of the magnetic field, but to a companion neutron star. I find this unconvincing, but that is based on my ignorance of the ease or difficulty of detecting neutron stars. I imagine that if there were a neutron star companion, it should have been detectable by now. No subsequent papers citing this one discuss the NS hypothesis further. It seems that HD101065 is not the only star with transuranic content in the photosphere. HD465 is also anomalous. This is mentioned in passing in a couple of papers.
I have not yet tracked down the original research. In an earlier post I mistakenly stated the half-life of Technetium was 17 hours. I should have referenced the half-life of 145Pm as 17.7 years. (How time flies when you are having fun.)

44. 2 points

## Maths vs Belief

Cladking - why do you insist on trying to explain the scientific method and modern scientific thinking when you clearly have no idea, and have had this shown to you on multiple occasions? A scientific model is an attempt to provide a rigorous and self-consistent mathematical and theoretical system which produces results which mirror those we find through empirical observation. An experiment does not reflect reality - it is reality. Models do impart understanding - that is, they do unless you are using some mad definition - which is quite possible. They might not impart understanding of some platonic underlying schema - but then nothing does. You do realise that many scientific models are created before any specific data really become available - the scientists extrapolate from more general data and other models, guess, and create maths which looks beautiful and fitting. They then work out how this would be manifest in experiment. They then go looking for 27 kilometres of superhard vacuum to smash protons together.

45. 2 points

## WTF!

Doesn't the "point" of nukes depend entirely on who you ask in what context?

46. 2 points

## Quantum mechanic cognitive dissidence dissipates.

! Moderator Note: Bringing up speculations in someone else's thread is hijacking, a rules violation. (And, should one be tempted to do so, responding to a modnote is off-topic.)

47. 2 points

Just to elaborate on the answer already given. Gases are just collections of fast moving particles bouncing off of each other. The pressure of a gas is a result of the collisions of those particles. If you were to have a container, divided in two, with a vacuum on one side and a gas under pressure on the other, the pressure on the dividing wall by the gas is the result of the constant collisions of those fast moving particles with the wall. If you cut a hole in the wall, the vacuum does not "suck" the gas from its side of the container; rather, the particles that would have collided with the wall where the hole is have nothing to stop them from passing through to the other side, so they will cross over into the vacuum side. Eventually (how long depends on the size of the hole) there will be just as many particles on one side as the other. Of course, now the same number of particles will be in a larger volume with a larger inner surface area. While this will not change the force of collision for any given particle with the wall, it does decrease the frequency of collisions with the wall. This, in turn, decreases the net pressure felt by the wall. Now the Earth doesn't have a wall to hold its atmosphere, but here is how gravity does the trick. Take a ball and throw it upwards. It leaves your hand at a certain speed, slows as it climbs, and then eventually comes to a stop and falls back down. Air molecules are no different. They are bouncing off of each other at high speeds, but in order to escape into space they have to climb just like the ball, and just like the ball they lose speed as they do. Eventually they lose all their speed and can climb no further. Now if they had started with enough speed (known as escape velocity), they could have kept going, as gravity gets weaker the higher you go, and it would never be able to rob them of their last bit of velocity.
However, the typical speeds for air molecules are much lower than this required speed (except in the case of light elements like helium and hydrogen; the Earth has a more difficult time holding on to these). The speeds of the air molecules are a function of their temperature. With hotter gases the molecules have faster speeds. If you were to heat the Earth considerably, you could cause it to lose its atmosphere. (Going back to hydrogen and helium, the reason the outer gas giants' atmospheres contain them is both because the planets are large and have strong gravity and because they are further from the Sun and thus cooler. The Sun, on the other hand, which is mostly made of hydrogen, is very hot, but is also very massive, on the order of 1000 times the mass of Jupiter, and it only needs its gravity to hold together. In fact, if it weren't for the heat generated by fusion at its core, its gravity would compress it even smaller than it is.) Attempting to invoke buoyancy as an explanation for the effects we attribute to gravity is a bit of a non-starter, as buoyancy relies on gravity to work. It works on the comparison of different weights and volumes of the object and fluid involved, and you can't have different weights without gravity. It also fails to explain why our space probes follow trajectories influenced by the gravity of the various bodies in the solar system while traveling through a vacuum, which has no buoyant effect. Nor does it explain why objects still fall in a vacuum chamber, such as shown here:

48. 2 points

## The meaning of life

What is best in life? To crush your enemies, to see them driven before you, and to hear the lamentations of their women.

49. 2 points

## Why does our reality behave like a 3D game?

I didn't really understand your question. And shouldn't it be the other way round, that virtual games imitate (or should I say recreate) reality?

50. 2 points

## Newton's gravity equation

Searching for Cavendish on Wikipedia (which is what I assume you mean) brings up several possible pages. Only one of which looks like it might be relevant: https://en.wikipedia.org/wiki/Henry_Cavendish The words "2 micrograms" do not appear, and there is only a passing reference to the force of gravity. Perhaps you could be a little more specific than "here are some numbers I read somewhere or made up". OK. I guess this is what you are referring to: https://en.wikipedia.org/wiki/Cavendish_experiment#The_experiment So that summarises how the necessary accuracy was achieved. What is your question? And where does your value of the measurement uncertainty being "one milligram" come from? Is that what you think the likely error in the mass of the balls was? Which would be an error of 1 part in 158,000 or 0.0006%. That sounds impressive. Do you have a source for this?
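As a cross-check on the numbers quoted in the "Newton's gravity equation" post above (my own rearrangement of that post's rounded values: gravitational parameter 3.987e14 m^3/s^2, a 50 kg satellite, orbital radius 6.771e6 m; nothing new is assumed beyond them):

$v = \sqrt{\mu / r} = \sqrt{\frac{3.987 \times 10^{14}}{6.771 \times 10^{6}}} \approx 7673.6 \text{ m/s}$

$F_c = \frac{m v^2}{r} = \frac{50 \times 7673.6^2}{6.771 \times 10^{6}} \approx 434.8 \text{ N}, \qquad F_g = \frac{\mu m}{r^2} = \frac{3.987 \times 10^{14} \times 50}{(6.771 \times 10^{6})^2} \approx 434.8 \text{ N}$

The two forces agree, which is exactly the condition for a circular orbit. The same parameter also gives the escape velocity mentioned in the atmosphere post: $v_{esc} = \sqrt{2\mu / R} \approx 11.2 \text{ km/s}$ for a surface radius $R \approx 6.371 \times 10^{6}$ m, far above the typical thermal speeds of air molecules.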
https://math.stackexchange.com/questions/2435167/homework-problem-on-absolute-relative-error?noredirect=1
# Homework Problem on Absolute + Relative Error [closed]

Let $f \in C[a,b]$ be a function whose derivative exists on $(a,b)$. Suppose $f$ is to be evaluated at $x_0$ in $(a,b)$, but instead of computing the actual value $f(x_0)$, the approximate value, $\tilde{f}(x_0)$, is the actual value of $f$ at $x_0 + \epsilon$, that is, $\tilde{f}(x_0) = f(x_0 + \epsilon)$.

a) Use the Mean Value Theorem to estimate the absolute error $|f(x_0) - \tilde{f}(x_0)|$ and the relative error $|f(x_0) - \tilde{f}(x_0)|/|f(x_0)|$, assuming that $f(x_0) \neq 0$.

b) If $\epsilon = 5 \cdot 10^{-6}$ and $x_0 = 1$, find bounds for the absolute and relative errors for

i. $f(x) = e^x$

ii. $f(x) = \sin x$

c) Repeat part (b) with $\epsilon = (5 \cdot 10^{-6}) x_0$ and $x_0 = 10$

# My work

### Part a

There is some $c \in [x_0, x_0 + \epsilon]$ where $f'(c) = \frac{f(x_0 + \epsilon) - f(x_0)}{\epsilon} = \frac{\tilde{f}(x_0) - f(x_0)}{\epsilon}$

Absolute Error: $|f(x_0) - \tilde{f}(x_0)| = |f'(c) \cdot \epsilon|$

Relative Error: $\frac{|f(x_0) - \tilde{f}(x_0)|}{|f(x_0)|} = \frac{|f'(c) \cdot \epsilon|}{|f(x_0)|}$

(Is this right? I'm suspicious this isn't the answer expected.)

### Part b

Here, I simply calculate the absolute/relative error directly. I'm suspicious because I'm not using part (a) and I'm not calculating "bounds"; I'm calculating error values.

i. $f(x) = e^x$

Absolute Error: $|f(1 + 5 \cdot 10^{-6}) - f(1)| = |e^{1 + 5 \cdot 10^{-6}} - e| \approx 1.359 \cdot 10^{-5}$

Relative Error: $|e^{1 + 5 \cdot 10^{-6}} - e|/e \approx 5.000 \cdot 10^{-6}$

ii. $f(x) = \sin x$

Absolute Error: $|f(1 + 5 \cdot 10^{-6}) - f(1)| = |\sin (1 + 5 \cdot 10^{-6}) - \sin 1| \approx 2.702 \cdot 10^{-6}$

Relative Error: $\frac{|\sin (1 + 5 \cdot 10^{-6}) - \sin 1|}{\sin 1} \approx 3.210 \cdot 10^{-6}$

### Part c

Similar to (b), now with $\epsilon = (5 \cdot 10^{-6}) \cdot 10 = 5 \cdot 10^{-5}$.

i. $f(x) = e^x$

Absolute Error: $|f(10 + 5 \cdot 10^{-5}) - f(10)| = |e^{10 + 5 \cdot 10^{-5}} - e^{10}| \approx 1.101$

Relative Error: $|e^{10 + 5 \cdot 10^{-5}} - e^{10}|/e^{10} \approx 5.000 \cdot 10^{-5}$

ii. $f(x) = \sin x$

Absolute Error: $|f(10 + 5 \cdot 10^{-5}) - f(10)| = |\sin (10 + 5 \cdot 10^{-5}) - \sin 10| \approx 4.195 \cdot 10^{-5}$

Relative Error: $\frac{|\sin (10 + 5 \cdot 10^{-5}) - \sin 10|}{|\sin 10|} \approx 7.712 \cdot 10^{-5}$

• Whilst it's excellent that you've put in so much effort, your post is too broad as it asks too many questions. – Shaun Sep 18 '17 at 22:19
• @Shaun Actually, he only asks one question: Is this right? ... Everything after that should not be part of the question, as there are no questions. @OP: I suggest you edit this question so that only your problem in part a) is asked. Then try to do b) and c) again. And if there are problems, open another question. – P. Siehr Sep 19 '17 at 12:58

Since that is the only question you asked, I will focus my answer on it.

(Is this right? I'm suspicious this isn't the answer expected.)

What you did is correct, but I would write the result slightly differently. Writing $\tilde{x}_0 = x_0 + \epsilon$ for the perturbed argument, so that $|\epsilon| = |x_0 - \tilde{x}_0|$: \begin{align*}|f(x_0) - \tilde{f}(x_0)| &= |f'(c) \cdot \epsilon| \\ &= |f'(c)|\,|x_0-\tilde{x}_0| \leqslant \max_{c \in (a,b)}|f'(c)|\,|x_0-\tilde{x}_0|.\end{align*} Now you have an estimate of the form $$\text{"absolute error in the result"} \leqslant \text{constant} \cdot \text{"absolute error in the data"},$$ which is better in the sense that you explicitly see the connection between both errors. Also, you don't know the value of $c$, because the Mean Value Theorem only states "there exists a $c$…". So the only thing you can evaluate numerically is $\underset{c}{\max}…$.
For the relative error we can do the same: \begin{align*}\frac{|f(x_0) - \tilde{f}(x_0)|}{|f(x_0)|} &= \frac{|f'(c) \cdot \epsilon|}{|f(x_0)|} \\ &= |f'(c)| \frac{|x_0|}{|f(x_0)|}\frac{|x_0-\tilde{x}_0|}{|x_0|} \\& \leqslant \max_{c \in (a,b)}|f'(c)| \frac{|x_0|}{|f(x_0)|}\frac{|x_0-\tilde{x}_0|}{|x_0|}.\end{align*}

I think this answer also answers your unasked questions about b) and c). It is nice to see that this factor very much resembles the condition number $$\kappa(x)=\frac{\partial f(x)}{\partial x}\,\frac{x}{f(x)}.$$ In fact, the condition number is a concept that describes how the relative error in the data connects to the relative error in the result (see here). The reason why we get a slightly different result here is that we used the Mean Value Theorem, while the condition number uses Taylor's expansion.
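The estimate from part (a) is also easy to verify numerically. Here is a small sketch (my own code, not part of the question or the answer) comparing the actual error with the Mean Value Theorem bound for $f(x) = e^x$, $x_0 = 1$, $\epsilon = 5 \cdot 10^{-6}$:

// Compares |f(x0 + eps) - f(x0)| with the MVT bound  max|f'(c)| * eps,
// where for f(x) = e^x on [x0, x0 + eps] the maximum of |f'(c)| = e^c
// is attained at c = x0 + eps.
#include <cmath>
#include <cstdio>

int main()
{
    const double x0  = 1.0;
    const double eps = 5e-6;

    const double actual = std::fabs( std::exp( x0 + eps ) - std::exp( x0 ) );
    const double bound  = std::exp( x0 + eps ) * eps;

    std::printf( "actual absolute error: %.6e\n", actual );  // ~1.359e-05
    std::printf( "MVT bound:             %.6e\n", bound  );  // ~1.359e-05
    std::printf( "relative error:        %.6e\n",
                 actual / std::exp( x0 ) );                  // ~5.000e-06
    return 0;
}

The bound and the actual error agree to several digits here because $f'$ barely changes on an interval of length $5 \cdot 10^{-6}$.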
https://ls.gsusigmanu.org/10794-do-telescopes-harm-while-observing-sun-through-them.html
# Do telescopes harm while observing sun through them?

I don't know whether we can observe the Sun through a telescope. But if we can, then will it harm our eyes? And if such telescopes exist, what is their configuration?

Observing the Sun through a telescope is very dangerous, whatever the telescope you use, if you don't use the appropriate tools. A telescope is basically a light collector: its purpose is to collect all the light that arrives on its primary mirror and focus it at a point. You may have already tried to make the Sun's light converge through a little magnifying glass (which works quite similarly to a telescope). If you make it converge on something like paper, it will simply burn it. This can even be the cause of big fires (for example, a glass bottle in a dry forest, concentrating the Sun's light on leaves). Here is an example to convince you: http://www.youtube.com/watch?v=N7nJ3wIxt3o

A telescope works the same, except that the collecting surface is usually bigger, so it will be more effective at burning… your eye, for example (astronomers make the joke that you can observe the Sun with a telescope twice in your life: once with your left eye, once with your right eye). Or, for bigger telescopes, it could even start a fire, or burn instruments…

To observe the Sun, you will need a proper filter, which you usually put "before" the telescope. You may also observe it indirectly with a small telescope by projecting the image of the Sun. You can see examples here: http://www.skyandtelescope.com/observing/objects/sun/Observing_the_Sun.html

## The very DANGEROUS use of vintage eyepiece sun filters.

One of the reasons why these filters crack from the heat is that they are usually held too tight in the cell, so the difference in the expansion of the glass vs the metal is causing the failure. The usual mode of failure is that you have been observing the Sun for a number of minutes or longer, and that is when they crack. Having a solar filter mounted in a black cell is also not the greatest piece of optical engineering. If you run the experiments, be sure to run one where the filter is mounted on the end of the diagonal. Also measure where the focal plane is in relationship to the end of the eyepiece you use. With many of the simple eyepiece designs that came with vintage 60mm refractors, the longer the focal length of the eyepiece, the farther the focal plane of the scope was from the end of the eyepiece when it was in focus. That put the filter in a position in the optical path where the energy was spread out. The shorter focal length eyepieces come to focus where the focal plane of the scope is very close to the end of the eyepiece, putting the focused image of the Sun on the filter. I'm wondering if the cause of failure of these filters is that one first was observing the Sun with a low powered eyepiece and then wanted to take a closer look at a sunspot and switched to higher magnification, and that put the filter in a position where the Sun was focused on the filter.

### #28 Joe Cepleur

The little Sun filter was designed for use with a 60mm objective (or smaller) and you overloaded it with the light from a 100mm objective! Seriously, it was subjected to 2/3rds more heat; there really is a lot of energy in sunlight.

Not 2/3 more, but nearly three times as much, because heat is proportional to the area of the objective.
### #30 Joe Cepleur

> I used one of my vintage sun filters this past evening, close to sunset, with a 20mm Huygens eyepiece in my 60mm Celestron/Vixen Cometron (910mm focal length). Nice view, green, did not view for long.

Seems a shame to risk blindness when safe solar filters covering the entire objective are readily available, but none of us can tell others what to do. I am sure we are all glad that you enjoyed a fine view and came to no harm. Still, such good fortune does not change the thrust of this thread, nor our responsibility to educate the public on the safe use of classic telescopes. Many people used this type of solar filter back in the day. Had it been 100% dangerous 100% of the time, no one would have used it. The problem is analogous to wearing seat belts. People will say, "No need, I'm not going far." Looking simply at the facts, the equations of kinetic energy have no factor representing the total time or total distance traveled prior to the crash. Only the velocity at impact matters, with consequences that may be severe. In the case of this old style of solar filter, blind is blind, regardless of how many successful viewings one has had in the past, or how fine or long the view immediately prior.

### #31 Rich (RLTYS)

> Not the first time I have pushed things too far! I'm going to set up the 60mm again today and run some further tests with additional sun filters I have. I will report back the results.

I hope the filters you are testing are NOT the old-fashioned dark filters. You were lucky the first time; you may not be so lucky the next time.

### #35 Preston Smith

DAVIDG made a very good point (as always) that the sun filters did not have thermal expansion capabilities. A Ramsden eyepiece had an old solar filter mounted between the two elements of the eyepiece? And mounted in a manner to allow for some thermal expansion? And only used on a 60mm scope? The filter would not be near the focal point. Do any of you have reports of the lenses in your old eyepieces cracking from solar observing? I know that more complex eyepieces can be damaged, but I have never heard of the two-lens eyepieces (Ramsdens, Huygens) being damaged.

### #36 DAVIDG

Eye damage is serious stuff. I've been an amateur astronomer for over 30 years and have been observing the Sun all that time. I had a filter crack on me while I was observing the Sun and looking into the eyepiece when I was a kid. Since then I have had my eye examined many times and told the doctors what had happened, and they could find nothing wrong with my eye. I did not have any loss of vision or any problem after the filter cracked. I'm still lucky to have 20/20 vision in both eyes today. Maybe I got real lucky, but over the years I have read that when these filters crack one is going to suffer instant blindness. While this is very serious, I would like to know if anyone has FIRST hand knowledge of eye damage caused by one of these filters cracking. I know that people have suffered eye damage from looking at eclipses, but that is different. I do scientific research for a living and I like to see the data and separate the facts from the fiction. On a side note, I have seen full-aperture solar filters fall off telescopes and have seen the wind blow them loose, so one needs to be sure that they are mounted very well.
I have witnessed first hand when a full-aperture filter came loose, and it wasn't more than 3 seconds later that smoke was coming out of the telescope; and a few months ago there was a newspaper article I read where a telescope was left unattended during the day and caused a deck to catch fire. While I am not defending the use of these screw-on type filters, if they crack, you're letting light in through the crack, whereas if you lose a full-aperture filter you have much more energy at the eyepiece. This is why I built a telescope designed for white-light solar observing, with many built-in safety features. If any part fails, it fails safe and no image can be formed at the eyepiece.

### #37 Jon Marinello

I would like to hear more about the design of the telescope you built for white-light solar observing, with all the built-in safety features. Can you show us pictures with lots of close-ups of the features along with a detailed description? I might want to build one. Perhaps this is already documented, and in that case can you just point us to that?

### #39 trainsktg

> I would like to hear more about the design of the telescope you built for white-light solar observing, with all the built-in safety features. Can you show us pictures with lots of close-ups of the features along with a detailed description? I might want to build one. Perhaps this is already documented, and in that case can you just point us to that?

There is a decent tutorial on how to design a dedicated solar scope on pages 43 and 44 of Sam Brown's 'All About Telescopes'. It isn't a step-by-step tutorial, as you will need to use the various tables to calculate the strength of the image based on the type of telescope you are designing, and then the appropriate filtration or attenuation to knock the image down to safe levels. I have yet to run across another book anywhere that discusses the construction of a dedicated white-light reflector. Mine is a 4.25" f13.

### #40 DAVIDG

> I would like to hear more about the design of the telescope you built for white-light solar observing, with all the built-in safety features. Can you show us pictures with lots of close-ups of the features along with a detailed description? I might want to build one. Perhaps this is already documented, and in that case can you just point us to that?

## RELATED ARTICLES

The video, which he warns could be hard to watch for squeamish viewers, shows Thompson holding the pig's eye up to a telescope that he says is directed towards the sun. After about 20 seconds under the lens of the scope, the eyeball flashes and starts to smoke. 'I've got to tell you, the smell is pretty grim,' Thompson says.

The video then shows the results: the pig's eye has a burn directly on the cornea of the lens. Thompson then dissects the eye to check on damage to the retina. After cutting away to it, he finds a brown patch, which he says could be damage from the sun. At the end of the video, Thompson adds that if the eye had been human, there would have been serious damage to the eye of the person.
Scientists have already been warning people to avoid looking directly at the sun during the eclipse, and NASA has said the only safe way to look at the uneclipsed or partially eclipsed sun is through special solar filters such as eclipse glasses. Thompson's video, which was posted in April 2016 as a warning ahead of a different celestial event, should deter people from even considering watching the eclipse through a telescope lens.

### WHAT HAPPENS DURING AN ECLIPSE

During a total solar eclipse, the moon completely blocks the face of the sun, NASA explains. This reveals the 'pearly white halo' of the sun's corona – its outer atmosphere, which is invisible to the naked eye at all other times. For this phenomenon to take place, the moon and the sun must be perfectly aligned, allowing the moon to appear as though it's the exact size of the sun.

'A total eclipse is a dance with three partners: the moon, the sun and Earth,' said Richard Vondrak, a lunar scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. 'It can only happen when there is an exquisite alignment of the moon and the sun in our sky.'

According to eclipse chaser Michael Aisner, the math of this actually taking place is 'boggling.' 'The moon is 400 times smaller in diameter than the sun, and the sun is 400 times further away,' he explained in an eclipse-viewing tip list shared with Dailymail.com. 'Bingo – a perfect fit, even though the sun is 100 million miles farther away than the moon.'

## The safe way to observe the sun through a telescope

If you are interested in the Sun or wish to see a solar eclipse, you will need to block at least 99% of its light to avoid damaging your eyes. First of all, you have to make sure you are using the right filters. Objects such as smoked glass, candy wrappers, or compact discs are not safe filters, because although they block most of the Sun's light, harmful radiation can still get through and hurt your retina. Remember to avoid filters that cover only the eyepiece end, because the sunlight will destroy the filter, along with your eyes.

The best way to go is a filter specially made for telescopes, such as a sheet of solar-filter material. This type of filter covers the entire front end of the telescope. If the telescope is bigger than the filter, you must cover the aperture with some sort of mask and place the filter in the middle hole. These solar filters are usually made out of metal-coated glass or Mylar and can block at least 99% of sunlight. We must emphasize that you must place these filters on the front end of the telescope. If you put them at the eyepiece, these filters can be burned, or they can crack from the intense heat of the Sun, and of course damage your eyes permanently, even causing blindness.

Using these filters, you will be able to see the Sun's surface in a pale yellow, blue, or orange color, depending on the type of filter you have. You can stare at the Sun without any worries for as long as you like, since there are no risks for your eyes with these types of filters (when properly designed and installed, of course). You should also make sure the filter is attached securely so that it won't fall off while you're gazing at the Sun. This advice works just as well with binoculars.

A better way to see the Sun in detail, although pricey, is with small refractors that have built-in interference filters. You also can buy these filters separately and attach them to your refractor's tube.
This equipment allows you to view different layers of the Sun, such as the hydrogen-alpha line at 656.3 nanometers and the calcium-K line at 393.3 nanometers. These features cannot be seen in white light and are exciting to gaze at, even when the Sun is not eclipsed or transited.

## Worthy Telescopes, Astronomy, and… Beer!

Here in central Oregon, beer is a big deal. There are something like 22 craft breweries in Bend (where I live), along with another 4-5 in nearby towns. Worthy Brewing is one of the newer breweries; it opened in 2013 in a brand new facility located at the edge of town on the east side. The owner built a first-class brewery that included a restaurant and a hops garden. Last year they started construction to expand the facility, and I noted an interesting new addition: a large circular turret topped with an Ash Dome. Last week, I finally got a chance to visit the new observatory attached to the pub, and wow, they didn't skimp on anything. It's an absolutely first-class facility of the kind that you might expect to find at a university or planetarium! The view of the sky and surrounding horizon from the third-floor observatory is spectacular.

They installed a 16" RC scope (I wasn't familiar with the manufacturer) on a large Paramount equatorial head. The scope also features a 4" TeleVue refractor for wide-field views. There are three floors of rooms below the telescope level with videos about astronomy, photos, and artwork. The telescope is operated by docents from the nearby observatory in Sunriver, and they are quite willing to move the scope around the sky to look at different objects. The only limitation that I see with this whole thing is that the scope is still located under the edge of the light dome from town and it's surrounded by a lot of ground lighting, so the sky isn't very dark. Still, I'm totally impressed that the owner of Worthy invested so much (about $250,000) to build this facility and to promote astronomy. Worthy is actively promoting the dark sky initiative, and you can see their web page on that effort here: http://www.worthybre. s-see-the-stars. It is certainly worth checking out if you are ever in the area -- just check the hours. You can see the telescope on the Worthy website here: http://www.worthybrewing.com/the-dome.html. Other pages on the site have more information on the facility. It is unbelievable to see a small-town brew pub doing such good things for astronomy, so tip a bottle of Worthy to the sky. As they say at Worthy, "Drink up, dream on."

## Appropriate telescope cover for outdoor use

I would like to leave my telescope on the back covered patio. Can anyone suggest the best material I can use? I live in Peoria, Arizona, so it gets kinda hot here. Also, would leaving it outside like that harm it? Thank you in advance for your help.

### #2 Jim Davis

Look into Telegizmo's 365 covers.

### #3 Stacyjo1962

If you want to go the DIY route, you could fabricate one out of material that is appropriate for your climate. Telegizmo makes an outstanding cover, however. (I've seen them used at star parties, but don't have one myself.)

### #4 Kevdog

I have a Telegizmo 365 for my C11. It is VERY well made and I trust the scope outside in all weather. It'll keep both the sun and rain off and is strong enough for the winds and dust in the monsoon storms. (I'm just 30 miles NE of you!)

### #5 Tom Polakis

As you know, you have to worry far more about it deteriorating in the sun than in the rain.
It looks like you got a recommendation for a Telegizmo 365 from a fellow desert dweller, so maybe that's the way to go. Another option is the ScopeCoat sold by AstroSystems. I noticed a lot of these around the field at this past year's Texas Star Party. Things may have changed since I last stored my scope under a cover through summers in Arizona, but back in the day, the Mylar covers would flake away, while the ScopeCoat-style material was more robust. One thing that I think we can all agree on is that prolonged covering with a tarp that you buy at a department store is a bad idea. Those things do not stand up to prolonged exposure to the sun very well at all.

### #6 mike89t

Hi Everyone,

> I would like to leave my telescope on the back covered patio. Can anyone suggest the best material I can use? I live in Peoria, Arizona, so it gets kinda hot here. Also, would leaving it outside like that harm it? Thank you in advance for your help.

I would be worried about all of the dust and wind from our summer monsoons. It could get knocked over, or dust could get blown up into it.

### #7 Alex McConahay

You may want to consider a simple roll-off covering -- the OUTHOUSE design. I have an example of mine at my website, but there are plenty of others.

### #8 Ken Watts

I'm right around the corner in Sun City. I also keep my scope on the patio, which is screened in. I cover it with an old sheet. Keeps the dust off, and being screened in, the worst monsoon winds are greatly attenuated, so no problem there. Many years ago I kept a scope on the back patio in Tempe and also used a sheet, but had to bungee it at the bottom around the pier.

### #9 Arizona-Ken

I keep my AST-CG5 mount outside under the patio here in Scottsdale. I use a canvas painter's tarp; although not waterproof, it is great to keep the dust off of the mount. Home Despot or Lowes for 10-15 bucks.

### #10 MalVeauX

My telescope & mount live outside here in Florida permanently on a concrete pier. I use a Telegizmos 365 cover as well. I got mine from Astronomics, a big one to cover all my scopes depending on which one I leave out there mounted on my Sirius, since we get a discount there as a CloudyNights member. It's great stuff, made the same way things are made for satellites. Mine has been out in the hurricane-tossing rain and weather here in Florida for a bit now and is holding up great. It helps keep things at ambient temperature instead of heating up, and it blocks everything. I button mine up with bungee cords onto my concrete pier. The pier is finished with waterproofing sealer to help slow down moisture transfer from the rock to the air inside. I keep Eva-Dry dehumidifier units hanging in there on the mount while it's buttoned up to manage humidity. Works great!

## Going Beyond

A GOTO telescope system opens the doors to much more than simple stargazing. You might decide to observe all 110 objects in Charles Messier's famous list of deep-sky objects, or hunt down the hundreds of other objects he omitted. And while your telescope keeps the target in view, you can practice sketching objects at the eyepiece. (Use a red LED headlamp to let your eyes stay adjusted to the dark.) Using mobile astronomy apps or online resources, you can research which eye-catching double stars are visible, and GOTO them. Or you could try your hand at variable star observing, where you estimate the brightness of particular stars from time to time and submit your observations to the American Association of Variable Star Observers. The sky's the limit!
In our next mobile astronomy column, we'll cover controlling your GOTO telescope mount with your PC and operating the mount hands-free using your smartphone or tablet. I'll also talk about a gadget that grants GOTO capability to binoculars and manual telescopes. In the meantime, keep looking up!

Editor's note: Chris Vaughan is an astronomy public outreach and education specialist, and operator of the historic 1.88-meter David Dunlap Observatory telescope. You can reach him via email, and follow him on Twitter as @astrogeoguy, as well as on Facebook and Tumblr.

## Phoebe Waterman Haas Public Observatory

An Explainer facilitates safe solar observing with a hydrogen-alpha telescope at the Public Observatory.

Location: Outside on the National Air and Space Museum's southeast terrace, near the corner of Independence Avenue and 4th St. SW, Washington, D.C.

### About our Telescopes

Our largest telescope is a 16-inch Boller & Chivens telescope, purchased in 1967 by Harvard College Observatory. It is named the Cook Memorial Telescope in memory of Chester Sheldon Cook, a long-time member of the Amateur Telescope Makers of Boston. The telescope was used by generations of students at the Harvard-Smithsonian Oak Ridge Observatory in Harvard, Massachusetts. The Cook Memorial Telescope was loaned, and later donated by Harvard, to the Museum as the primary telescope in the Phoebe Waterman Haas Public Observatory. Our favorite objects to observe with the Cook Memorial Telescope are planets and double stars.

We also have several solar telescopes that allow observers to safely view the Sun in different types of light. Our white-light telescopes show us a view of the Sun's surface. Our hydrogen-alpha (red light) and calcium-K (purple light) telescopes show us the Sun's atmosphere.

### About Phoebe Waterman Haas

Phoebe Waterman Haas received her doctorate in astronomy from the University of California, Berkeley, in 1913, one of the first American women to earn such a degree. She also studied at the historic Lick Observatory near San Jose, California. She is believed to be the first woman to directly use the Lick telescope, which with its 36" lens was one of the largest telescopes in the world at that time. The observatory was named for her in recognition of a $6 million donation from the Thomas W. Haas Foundation to establish an endowment for the Museum's Public Observatory Program. It is the largest donation ever given to the Museum for science education programming. Phoebe Haas was the grandmother of the foundation's president, Thomas W. Haas.

What can visitors do in the Observatory? When the weather is clear, visitors will be able to look through telescopes to see the Sun (safely), the planet Venus, or the Moon (when available), guided by our staff of astronomy educators. Visitors can also participate in hands-on, interactive activities to learn more about astronomy and telescopes.

What if it is raining or cloudy? During overcast or rainy weather, or when it is extremely hot or cold, the Observatory will be closed to the public. If the Observatory is closed due to weather, join us at Discovery Stations inside the Explore the Universe or Exploring the Planets galleries for astronomy activities.

Is the Observatory ever open at night? The Public Observatory is typically open for stargazing once or twice each month, weather permitting. Check the Museum's calendar of events for our next stargazing event.

Can you really do astronomy in the daytime? Yes!
Visitors will be able to observe craters on the Moon, the phases of Venus, and features on the Sun (through our safe solar filters).

Can I volunteer at the Observatory? Yes! The Observatory periodically accepts applications. Please note that volunteering at the Observatory is only at our Washington, DC location.

Is the Observatory wheelchair accessible? Yes. The terrace and observatory dome are both wheelchair accessible. The main telescope is fully accessible as well, thanks to an extended eyepiece that can accommodate any height or viewing angle.

### What happens at a Friday Public Viewing Session at Cline Observatory?

If the sky is clear, we open the facility to allow visitors to view objects through the telescope in the dome and additional telescopes set up on the observing pad. You look through the telescopes with your own eyes. There is no official program – it is simply a viewing session. We show various astronomical objects through the telescopes, point out constellations and other objects in the sky, and answer questions. Visitors are free to come and go as they please – you don't have to arrive at the beginning or stay until the end. Typical turnout for our sessions is about 40-50 visitors per night, though session attendance has varied from 1-2 to nearly 300.

### What time do the Friday sessions start? How long do they last?

During March-October, we start as darkness falls. Typically, this is 30-40 minutes after sunset. Since the time of sunset changes steadily throughout the year, our session start times will also change (you can determine the sunset time for a particular date). In June-July we don't start until nearly 9, but in late October we start around 7. During November-March we start at 7:00 pm. Sessions usually last about two hours. If we still have a big crowd or enthusiastic visitors at the scheduled ending time, we usually extend our session time. If sky conditions worsen and we can't see any more objects, we will end the session early.

### How much does it cost? Are reservations necessary?

All our events are free and open to anyone with an interest in astronomy. No reservations are necessary.

### How will I know if the session is being held, or if it is canceled due to weather?

If it is raining or completely cloudy, the session will not be held. If the weather is uncertain, the best way to check session status is through the observatory's Twitter page (@gtccastro: https://twitter.com/GTCCASTRO). Decisions about cancellation are usually made by the official start time. If it is cloudy at the start and weather sources indicate that it might stay that way for a while, then we will close for the night.

### Are there special rules for observatory visits?

There are a few restrictions – GTCC is a tobacco-free campus, so smoking is not allowed. Food and drink are not allowed near the telescopes, so please do not bring them into the dome area. Campus rules do not allow pets on campus. North Carolina General Statutes prohibit the possession of alcohol, drugs, and weapons on campus. The dome and outside observing areas are kept relatively dark. In order to provide optimal conditions for viewing, it is important to preserve night vision. Please do not use bright lights near the telescopes (bright phone screens, flashlights, flash photography, etc.).

### Is the observatory heated in the winter?

No.
Since the dome opens to the night air, any heat from inside will escape, and the warm air convecting through the dome opening creates unsteady conditions that distort the views of the objects we observe. Therefore, our policy is to try to keep the temperature of the interior of the dome approximately the same as outside. So be sure to dress for the conditions.

### Are the observing activities appropriate for young children?

The observatory is family-friendly. Observing the Moon and planets will be an exciting experience for a child with an interest in science. Very young kids are fascinated by the Moon, but tend to be overwhelmed by the observatory experience.

Note to readers: As of late 2010, the facility is now known as the San Pedro Valley Observatory. It is no longer a B&B; it is just an observatory. Sara Brown writes and says "the new owners are keeping with Dr. Vega's vision of specialty sessions but are also getting updated equipment and offering new classes in things such as CCD, etc." I wish Sara and the new observatory the best of luck. The original article appears below, unaltered except for the contact information at the end. Thanks for reading, -Ed

Imagine, if you will, an astronomy bed and breakfast, under pristine dark skies, with an indoor/outdoor observatory stocked with equipment, friendly service, near-luxury accommodations, where you can observe all night to your heart's content, only to have a faithful servant close up while you stumble into bed a few doors away. You don't have to imagine, because there's a place like that a short plane ride away. Sound like heaven on earth?

Like many, I have looked at the ads for the Astronomer's Inn (and its New Mexico counterpart, the Star Hill Inn) for years, wondering what it would be like to spend a few uninterrupted days thinking about nothing but astronomy. In April 1999, I decided to find out for myself. Along with three other club members, I traveled out to the Arizona desert for a long weekend of observing and relaxing. I expected outstanding observing conditions and not much else. What I got was a pleasant surprise.

Walking in the front door, I was confronted with a long, large building, extremely well furnished (almost over-furnished) and well equipped for astronomy. The complex is modestly sized but feels massive, due to its layout - every room seems to lead into every other room - and the variety of its decor. In fact, one of your first tasks upon arriving is to learn your way around the building. I expected a simple, rustic retreat. Boy, was I wrong! Within minutes, I found myself happily installed in the Egyptian Room, complete with marble jacuzzi, satellite TV, and a walk-in marble shower with solid brass fixtures and inlays on the floor. Other rooms include an ivy-themed room, and a room with lots of Star Wars-like toys and a planetarium, complete with a dome in the ceiling. To put it mildly, I was impressed before I even got to the observatory.

There are two places to go observing. You can use the scopes in the open-air observatory (many reflectors and refractors, and two 12" LX200 SCTs) or you can purchase time on the 20" f/10 Maksutov in the dome. The latter is done through hired help (they don't want you messing around with the custom-built Mak, and I understand why). Also, if you bring your own equipment, you can set up on the patio outside. The relative humidity hovered around 9% when I was there, so dew isn't a factor.
Finally, if it's cloudy out (not likely), there are mountains of astronomy-related books, videos, magazines, toys, and a large-screen television to keep you busy.

The observatory, roof closed

As the sun set the first evening, I knew I was in for something special. At sunset, there was no color at all on the horizon, something I almost never see from damp New Hampshire skies. Also, the darkness descends on you like a cool black blanket. On this first night, I had what may well be the most productive observing session of my life. Working on my Herschel 400, I bagged 65 objects before midnight with the 12" Meade Starfinder Dob in the observatory. What's even more impressive is the type of objects I was observing. I cleaned up on ALL 50 Herschel objects -- 49 (mostly dim) galaxies and one globular -- in Virgo, in less than 3 hours.

On the second day, I was given a tour of the mirror-making facility at the University of Arizona (thanks to Howard for taking time out of his busy day!). We saw 6.5-meter and 8-meter mirrors, in various stages of completion.

The second night, fellow club member Dave had purchased time on the Maksutov. Mars was nearing opposition, and he wanted to do some CCD work. However, the skies looked cloudy, so we called it off and set out for a leisurely dinner. As the sun set, I peeked out of the restaurant window and noticed some clearing in the west. We jumped in the van and high-tailed it back to the observatory. The second night was even clearer than the first. I had not intended to do much "serious" work on my Herschel 400, but after a couple of hours talking and playing around with other people's scopes, the conditions were too good to pass up.

Big Mak Attack - the 20" Maksutov

Of the 400 deep-sky objects on the Herschel list, a disproportionately large number of them (158, or 39%) lie in the galaxy-rich spring sky around Virgo, Ursa Major, Canes Venatici, Leo, and Coma. This poses two difficulties for the observer. First, you have to locate the objects. Then, you have to identify them. This can be tough -- in some fields, there may be several galaxies crowding the FOV. Still, all 44 Ursa Major galaxies were logged in less than an hour, which is an amazing rate (at least for me). I also "cleaned up" the areas in Canes Venatici and Leo Minor.

Want to know how clear the conditions were? Halfway through the night, I set some time aside to look for NGC 6118 in Serpens, which is roundly considered the most difficult Herschel object of them all. It's a relatively large galaxy with almost no surface brightness. Many experienced observers go their whole lives without seeing it. On this night, I found NGC 6118 in about thirty seconds. It looks a little bit like M33 does in my TeleVue Ranger under modest light pollution. At the end of the second night, I had logged 67 new Herschel objects, another new record for me.

The third night was, incredibly, even clearer than the first two. Dave made another attempt to work with the Maksutov. I peeked in the dome to see how he was doing. I saw the CCD camera attached to the focuser and lots of computers and cables rigged to the scope, so I decided not to bother him. As it turned out, I never did get to look through that Maksutov. On the third night, I decided to slow down my hectic observing pace. There were a number of novices visiting the Inn (including a local reporter), and I spent some time showing them some of the brighter spring objects. Still, after they left, I wound up logging 42 Herschel objects.
Early the next morning, watching the sun rise over the desert, I started recording my observations from the past three days. It had taken me nearly 6 months to log the first 157 objects on my Herschel survey. In the past three nights alone, I had logged another 174. Apparently, observing is a lot like buying a house -- the most important factors are location, location, and location.

View from the upper parking lot

During my three days at the Inn, it began to dawn on me just how single-mindedly astronomy-oriented the place really is. Everywhere you turn, there is some piece of astronomy-related paraphernalia lying about. The bookshelves are stocked full of astronomy and space-related books. The magazine stands in the halls and the bathrooms are filled with old S&T and Astronomy back issues. Any guess as to what's in the video library? The whole experience rather smacks of overkill. However, I happen to LOVE overkill, and I couldn't have been happier.

The Astronomer's Inn offers dynamite observing conditions, superb accommodations, friendly service (wait until you see what they do for breakfast around here), and is an excellent value. It's technically an "astronomy bed and breakfast," but to me, it feels more like a really nice house with a lot of telescopes in it. I will be back. In fact, we were already planning our next visit on the drive back to the airport. Highly recommended. Don't you have some vacation time coming?

Astronomer's Inn Hots:
• Beautifully clear skies, reliable seeing
• Luxurious accommodations
• There's a 20" Maksutov in the building!
• Attentive staff, reasonable rates

Astronomer's Inn Nots:
• Far away from East Coast patrons
• Non-astronomy-minded significant others might be bored

The Verdict:
• Get drunk on astronomy at the Astronomer's Inn

Equipment List:
Computerized 20" f/10 Maksutov-Cassegrain (fee required)
14.5" f/5 Newtonian
12" f/4.8 Newtonian
12" LX200 SCT (two)
8" f/6 Newtonian
6" f/24 Maksutov
6" f/6 refractor
Tons of accessories, binoculars, charts, tools, etc.
2021-10-18 16:35:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24667556583881378, "perplexity": 1731.2314843666122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00678.warc.gz"}
https://pos.sissa.it/358/401/
Volume 358 - 36th International Cosmic Ray Conference (ICRC2019) - CRI - Cosmic Ray Indirect

First measurements with prototype radio antennas for the IceTop detector array

M. Renschler* on behalf of the IceCube Collaboration

Full text: pdf
Pre-published on: July 22, 2019
Published on: July 02, 2021

Abstract

Extending large-scale air-shower arrays with radio antennas can increase the detector's performance, as the radio emission by cosmic-ray air showers provides an additional measurement of the electromagnetic component. Instrumenting the IceCube surface detector IceTop with radio detectors as well as with new particle detectors in a hybrid approach will enhance the measurement and reconstruction accuracy and allow for the characterization of highly inclined air showers. This will enable a better understanding of the atmospheric background for the in-ice neutrino measurements. It also opens the opportunity for new science cases, e.g. the search for PeV gamma rays from the Galactic Center, which is visible from the IceCube site year-round at an inclination of 61$^{\circ}$. Adding to several scintillator particle detectors already running at the South Pole, two prototype radio antennas were deployed at the IceCube site in January 2019, using the same DAQ system as the scintillators. The antennas serve as a test setup for a future deployment of radio antennas extending the scintillator array planned inside the IceTop footprint. In these proceedings, the antennas considered for deployment and the hybrid DAQ system processing the signals of the particle and radio detectors are introduced. First measurement results at the South Pole are presented, and future plans for a full hybrid particle and radio detector array inside the IceTop footprint are shown.

DOI: https://doi.org/10.22323/1.358.0401
2021-09-19 07:15:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26151517033576965, "perplexity": 2806.0669722429743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056752.16/warc/CC-MAIN-20210919065755-20210919095755-00394.warc.gz"}
https://www.physicsforums.com/threads/spring-force-and-mass.869592/
# Spring Force and mass

## Homework Statement

A 5.3 kg mass hangs vertically from a spring with spring constant 720 N/m. The mass is lifted upward and released. Calculate the force and acceleration of the mass when the spring is compressed by 0.36 m.

Note: I already solved for the acceleration and got the correct answer, a = 58.70566038 m/s^2. I tried to solve for the spring force, but I got the wrong answer and I'm not sure what I did wrong. m = 5.3 kg, k = 720 N/m, Δx = 0.36 m. I used Fnet = ma.

## The Attempt at a Solution

Fnet = ma
ma = -Fg - Fx
(5.3)(58.70566038) = -mg - Fx
311.14 = -(5.3)(9.8) - Fx
311.14 = -51.94 - Fx
311.14 + 51.94 = -Fx
363.08 = -Fx
-363.08 N = Fx
363.08 N [down] = Fx

The correct answer is 310 N [down]. Can someone tell me what I did wrong? Also, I'm confused about whether I put the negative signs in front of the correct variables (Fg and Fx both point downward in this problem, so I put a negative sign in front of both). Should I put a negative sign in front of Fnet too, since Fnet points downward in this case? Are you even supposed to put negative signs in front of variables based on direction when doing calculations?

## Answers and Replies

It looks like you have some serious fundamental misunderstandings. Firstly, how did you manage to find the acceleration of the mass without knowing the net force acting on the mass? What does Newton's Second Law tell us about how the net force acting on an object is related to that object's acceleration? To answer your other question: yes, you should keep track of your signs; you'll end up with the wrong answer if you don't. In fact, your value of Fx is wrong because you're not using the right signs. Subtracting versus adding a quantity makes a big difference, so yes, they are important. While we're at it, what force does Fx represent and why are you solving for it? The problem asks for the mass's acceleration, not the force by the spring.

haruspex (Science Advisor, Homework Helper, Gold Member, 2020 Award):

Are you sure it is asking for the spring force, not the net force on the mass? If it does want the spring force, you have a sign wrong.

> Calculate the force and acceleration of the mass when the spring is compressed by 0.36 m. Note: I already solved for acceleration and got the correct answer, a = 58.70566038 m/s^2. I tried to solve for the spring force but I got the wrong answer and I'm not sure what I did wrong.

I wonder how you got the acceleration of the mass, as usually when one draws a free-body diagram the net force gets calculated and finally the acceleration is equated with F/mass. Mark the underlined part of the question: ...when the spring is compressed... The spring force will oppose compression and the gravitational pull will be downward (opposed to compression), but the motion is upward; so calculate the net force, which will be the sum of the above two but acting opposed to the displacement (an effective deceleration), and if you divide the net force by the mass you get the acceleration. A look at the numbers gives the impression that you will get the correct answer.

> Are you sure it is asking for the spring force, not the net force on the mass? If it does want the spring force, you have a sign wrong.

It wants the spring force.
> I wonder how you got the acceleration of the mass, as usually when one draws a free-body diagram the net force gets calculated and finally the acceleration is equated with F/mass. ...if you divide the net force by the mass you get the acceleration. A look at the numbers gives the impression that you will get the correct answer.

I did:
Fnet = ma
ma = -mg - k(Δx)
I knew all the values except the acceleration, so I just used algebra to solve for it.

> It looks like you have some serious fundamental misunderstandings. Firstly, how did you manage to find the acceleration of the mass without knowing the net force acting on the mass? ...The problem asks for the mass's acceleration, not the force by the spring.

It asks for 2 things: acceleration and the spring force (Fx). Here's how I found the acceleration:
ma = -mg - k(Δx)
I knew all the above values except a, so I just used algebra to solve. Also, what mistake did I make with my negative signs?

> I knew all the values except the acceleration, so I just used algebra to solve for it.

Therefore you also knew the force, as the mass was known. By Newton's laws, if you know the acceleration and the mass then you know the force. So what value of force did you get?

> Therefore you also knew the force, as the mass was known. By Newton's laws, if you know the acceleration and the mass then you know the force. So what value of force did you get?

Fnet is 311.14 N, which I showed in my calculations in the original post.

> The correct answer is 310 N [down]. Can someone tell me what I did wrong? Also, I'm confused about whether I put the negative signs in front of the correct variables... Are you even supposed to put negative signs in front of variables based on direction when doing calculations?

So you are getting an approximately correct answer. Regarding the negative sign: in force equations, as they are vectors, one has to use a sign convention. Suppose the body is going up and you are writing mass × acceleration = the net force; then the forces in the direction of motion may be taken as positive and those opposed to the motion as negative. The sign of the acceleration after the calculation can tell you whether the net force is helping the motion or otherwise, i.e. retarding the motion.
So the question you posed is perhaps asking for the net force rather than the spring force; the wording of the question suggests that they want the net force.

> It asks for 2 things: acceleration and the spring force (Fx). Here's how I found the acceleration: ma = -mg - k(Δx). I knew all the above values except a, so I just used algebra to solve. Also, what mistake did I make with my negative signs?

What you posted says to find the force and acceleration of the mass. The two answers you provided agree with the values for both the force and the acceleration of the mass (F = -210 N and a = -58). It also makes more sense that they'd ask for the force and then the acceleration of the mass (in that order), as what you posted suggests. Unless you're leaving something out, why do you think you need to find the spring force? It's impossible to help you if you don't post the entire problem exactly the way it was worded. Do note that -211.14 N is the same thing as 210 N (downward) once you take into account that the problem uses only two significant figures, so your final answer should only contain two significant figures.

That aside, if you wanted to find the spring force, there's no need to work backward from the net force on the object. The spring force is given by Hooke's law and is independent of the acceleration of the mass and gravity. It only depends on how much the spring is compressed, which you know:
$$F_s=-kx$$

> So you are getting an approximately correct answer. Regarding the negative sign: in force equations, as they are vectors, one has to use a sign convention...

It just asks for "force", and since the lesson was about the spring force, I assumed that that's what they wanted. Also, the only 2 forces acting on the mass are the spring force and gravity, so it makes sense that they'd ask for the spring force.

> What you posted says to find the force and acceleration of the mass... It's impossible to help you if you don't post the entire problem exactly the way it was worded.

The question I posted is exactly as written in the textbook.
> So you are getting an approximately correct answer. Regarding the negative sign: in force equations, as they are vectors, one has to use a sign convention. Suppose the body is going up and you are writing mass × acceleration = the net force; then the forces in the direction of motion may be taken as positive and those opposed to the motion as negative. The sign of the acceleration after the calculation can tell you whether the net force is helping the motion or otherwise, i.e. retarding the motion. So the question you posed is perhaps asking for the net force rather than the spring force...

So, I should put a negative sign in front of the acceleration too?

> The question I posted is exactly as written in the textbook.

I didn't get F = 210 N, I got 363.08 N.

> So, I should put a negative sign in front of the acceleration too?

You do not have to put in a sign; it is the result of your calculation that the acceleration comes out as a negative number of m/s^2, and this carries the meaning that it is in a direction opposite to the motion.

> You do not have to put in a sign; it is the result of your calculation that the acceleration comes out as a negative number of m/s^2, and this carries the meaning that it is in a direction opposite to the motion.

Oh ok, so then I put the negative signs in the correct places, but still got the wrong answer.

> The correct answer is 310 N [down]. Can someone tell me what I did wrong?

> Oh ok, so then I put the negative signs in the correct places, but still got the wrong answer.

No, you got 311 N for the net force, and it is close to the 310 that was expected. It is the spring force pushing downward because of the compression of the spring; there was no term of 210 N in the discussion, and 363 is the wrong answer.

Why does spring force = net force? Or does the question just want us to solve for the net force?

> The question I posted is exactly as written in the textbook.

The problem asks you to find the net force acting on the mass and the mass's acceleration. The answers given are for the force and acceleration of the mass. I don't understand why you think you're finding the spring force. Even then, you can't find the correct value for it, because you aren't using the acceleration you claimed to have found, which is a claim I'm now very skeptical of. In fact, you had to use the spring force in order to find the acceleration, so the fact that you're having trouble finding it is concerning.

> Why does spring force = net force? Or does the question just want us to solve for the net force?

I do not know whether you have done simple harmonic motion with a mass hanging on a spring; I quote a simple way to look at it: the spring is normally supporting the mass by some stretching (an increase in length); any disturbance leads to oscillations, and if it is during the compression stage, the net downward force will be provided by the spring, as all the time it is holding the mass.

> ...if it is during the compression stage, the net downward force will be provided by the spring, as all the time it is holding the mass.

Got ahead of myself. This is not correct. The net downward force is not provided by the spring alone. For a hanging system, the forces acting on the mass are the gravitational force and the spring force.
If the spring is compressed, the spring force is downward and so is the gravitational force. Therefore, the net force in the downward direction is Fs + Fg, which also happens to be the net force acting on the mass.

You are right about the force mg acting downward, but in oscillatory motion the equilibrium point gets shifted and the initial condition supports the mass. See:

Assume a mass suspended from a vertical spring of spring constant k. In equilibrium the spring is stretched a distance x0 = mg/k. If the mass is displaced from the equilibrium position downward and the spring is stretched an additional distance x, then the total force on the mass is mg - k(x0 + x) = -kx, directed towards the equilibrium position. If the mass is displaced upward by a distance x, then the total force on the mass is mg - k(x0 - x) = kx, directed towards the equilibrium position. The mass will execute simple harmonic motion. The angular frequency ω = √(k/m) is the same for the mass oscillating on the spring in a vertical or horizontal position. The equilibrium length of the spring about which it oscillates is different for the vertical position and the horizontal position. (Ref: http://labman.phys.utk.edu/phys135/modules/m9/oscillations.htm)

> The problem asks you to find the net force acting on the mass and the mass's acceleration... In fact, you had to use the spring force in order to find the acceleration, so the fact that you're having trouble finding it is concerning.

Well, I did find the acceleration first and I got the correct answer, so I don't understand why that's hard to believe.
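For readers trying to reproduce the numbers in this thread, here is a minimal MATLAB sketch of the sign bookkeeping (downward taken as negative; the values come from the problem statement, and the variable names are mine):

```matlab
m  = 5.3;        % mass [kg]
g  = 9.8;        % gravitational acceleration [m/s^2]
k  = 720;        % spring constant [N/m]
dx = 0.36;       % compression of the spring [m]

Fs   = -k*dx;    % spring force: pushes DOWN while compressed, -259.2 N
Fg   = -m*g;     % weight: always down, -51.94 N
Fnet = Fs + Fg;  % net force: -311.14 N, i.e. about 310 N downward
a    = Fnet/m;   % acceleration: -58.71 m/s^2, matching the thread's value
```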
2021-04-20 16:29:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6280620098114014, "perplexity": 374.4041090260993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039476006.77/warc/CC-MAIN-20210420152755-20210420182755-00042.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-8-coordinate-geometry-and-linear-systems-8-4-writing-equations-of-lines-problem-set-8-4-page-360/33
## Elementary Algebra

Slope: $-2$
Y-Intercept: $-5$
See image for graph.

The $\text{slope-intercept}$ form of a linear equation is $y=mx+b$, where $m$ is the $\text{slope}$ and $b$ is the $y$-$\text{intercept}$. The given equation is $y=-2x-5$. Since $-2$ takes the place of $m$, it is the $\text{slope}$. Since $-5$ takes the place of $b$, it is the $y$-$\text{intercept}$. To graph this line, we make sure that it crosses the $y$-axis at $y=-5$ and that the line goes down $2$ units every time it goes $1$ unit to the right.
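Since the referenced graph image is not reproduced here, a minimal MATLAB sketch of the described line (this is my own illustration, not part of the textbook solution):

```matlab
x = linspace(-5, 2, 100);     % a convenient range of x values
y = -2*x - 5;                 % slope -2, y-intercept -5
plot(x, y, 'b-'); grid on; hold on
plot(0, -5, 'ro')             % the line crosses the y-axis at (0, -5)
xlabel('x'); ylabel('y'); title('y = -2x - 5')
```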
2018-09-19 20:49:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9693187475204468, "perplexity": 133.46905088317314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156305.13/warc/CC-MAIN-20180919200547-20180919220547-00167.warc.gz"}
https://robotics.stackexchange.com/questions/7803/gyroscope-how-can-i-remove-low-frequency-component-with-a-high-pass-filter-onl
# Gyroscope - How can I remove a low frequency component with a high pass filter only?

I'm using Matlab to suppress low-frequency components with a high-pass filter.

Objective

• Filter angular velocity measurements affected by high-frequency noise and bias in order to get the best estimate of the angular position.

The output when the gyroscope is still looks like this.

First Approach

The easiest way to remove the baseline is to subtract the average, which can be achieved in Matlab with one line of code:

yFilt = y - mean(y)

Second Approach

We can design a high-pass filter to attenuate low-frequency components. If we analyze the frequency components of the signal, we will see one peak at low frequency and "infinitely" small components at all frequencies due to noise. With a second-order Butterworth filter with normalized cutoff frequency Wn = 0.2 we will get what we are looking for.

Filtered data

Tilting the Gyro

When we tilt the gyroscope the situation changes. With a sampling frequency of 300 Hz we get the following plot. The first half of the DFT is shown below on a normalized scale. You can find the sample.mat file here.

The first approach works great. I would like to apply the second one to this particular case, but here there are other low-frequency components that make the job harder. How can I apply the second approach, based on the high-pass filter, to remove the bias?

EDIT 2

How can we filter this signal to remove the bias while keeping the angular velocity information (from the 110th to the 300th sample) intact? If gyroscopes have the bias problem only when they are not experiencing any rotation, then the offset is present only in the first ~110 samples. If the above hypothesis is correct, maybe if we apply high-pass filtering only in the first 110 samples and deactivate the filter during rotations of the gyro, the estimated angular position will be more accurate.

• I don't understand. You made a Butterworth filter, and it removes the DC offset well. What is the problem you are having? – Chuck Aug 4 '15 at 23:45
• Also, you may want to check whatever you're doing for simulation. You look like you average about 750 m/s2 for about 30 seconds - so your sensor is going about 0.5*750*900 = 337.5 km/hr. This might be correct if you're designing aircraft, but it's worth consideration if you're not. – Chuck Aug 4 '15 at 23:53
• Are you looking for a band-pass filter? Are you looking to take the derivative of the signal? You either need to say in technical terms what you want or provide some clues as to what you are doing and what exact result you want. If you are looking for a band-pass filter, I would go for a difference-of-Gaussians (DoG) filter. If you want the derivative, then do the difference equations followed by some smoothing if you like. There are some very interesting free DSP books on the Internet. – SeanOCVN Aug 5 '15 at 2:27
• @Chuck Comment 1: Hi Chuck, thanks for your reply. The high-pass filter works well in steady state, but I would like to use it when the gyroscope rotates. As you can see from the last figure, in the spectrum of the signal there are low-frequency components that make the job harder. I would like to apply a high-pass filter to remove the offset in the last situation. – UserK Aug 5 '15 at 12:09
• On Stack Exchange, it is better to edit your question to add information requested in comments, rather than adding more comments. Comments are for helping to improve questions and answers, and are distracting, so we try to keep them to a minimum.
If all of the information needed to answer the question is contained within it, the comments can be tidied up (deleted). – Mark Booth Aug 5 '15 at 12:17

:UPDATED: This is the last update I'm going to make to this answer, because I feel like you are repeatedly redefining your question.

1. You have already designed a Butterworth filter that removes the DC offset, but your question title is "How can I remove low frequency component with a high pass filter". What is the problem you are having?

2. You ask, in what was the 5th edit of your question (though you call it "Edit 2"), how you can remove the bias while keeping the signal intact. If you have to do it with a filter (which I DO NOT RECOMMEND), then set the cutoff frequency of your filter an order of magnitude (one decade) below your lowest frequency of interest. Refer to a Bode plot of your filter to see how your filter will modify your signal.

3. I don't recommend a filter because a bias is called "bias" and not "noise". This is because a bias is a systematic error - a DC bias will exist in all samples, so you cannot "turn on" a filter when you think you are stationary and then "turn off" the filter when you think you are moving - the DC bias will exist in all signals. This is why I have suggested you calibrate your sensor, which is what you should be doing with every sensor you connect to your system anyway.

4. You have not provided any information about your testing or application, but if what you say at the end of the current version of your question is correct - maybe if we apply high pass filtering only in the first 110 samples and deactivate the filter during rotations of the gyro, the estimated angular position will be more accurate - then this would imply that you know, in advance, when the gyro is rotating and when it is not. If this is the case, you can remove all of the DC bias accurately, with very little computational cost, by calculating the signal mean during periods when the gyro is stationary and subtracting that mean from the signal when the gyro is in motion.

5. As BendingUnit22 pointed out in the comments, the sample data set you have provided is not representative of the scenario in which you intend to use the filter, which is making it needlessly difficult for everyone trying to contribute to this problem to divine what you're after.

6. I have to assume your insistence on using a filter is academic, because it is more difficult to implement, more computationally expensive, and more likely to skew the data you want to use.

:Original Content Below: I've moved from a comment to an answer in the hopes that I might clarify/point out some problems you might be having.

1. A DC offset in a sensor like an accelerometer or a gyroscope is called a bias - it's the output of the sensor when the input is zero.

2. For devices like this, you would actually do a calibration process where you would use your line of code yFilt = y - mean(y), but you could more appropriately call it biasOffset = y - mean(y). Once you've done the initial calibration you store biasOffset in nonvolatile memory and use that to adjust future input samples.

1. Be careful using a high pass filter when your frequencies of interest are low. In your Figure 3, you have a close approximation of 3/4 of a sine wave starting at 110 seconds and running to 150 seconds. That is, it looks like your signal has a period of about (150-110)/(3/4) = 53.3 seconds, corresponding to a frequency of (1/53.3) = 0.0188 Hz, or 0.118 rad/s.
Bear in mind that a filter's "cutoff" frequency doesn't actually cut off all frequency content above or below the designated frequency; rather, the cutoff frequency is generally the half-power (-3 dB) point. Here's a decent discussion of filters (page 9); it's from a signal processing/aliasing perspective, but still worth reading. What I'm trying to say is that you could wind up significantly skewing your desired signal if you choose a cutoff frequency that is anywhere near (within an order of magnitude of) your desired signal frequency.

2. Be careful with units. You caught it! You changed the units on Figure 2 from $m/s^2$ to deg/s - it makes a big difference! Where last night it looked like your sensor was accelerating up to 335 km/hr, now it looks like it's turning at 1 rpm. That's a big difference!

:EDIT: You have since changed the units on Figure 2 (x-axis) from seconds to samples, which, at 300 Hz, means you've changed the time scale from 5 minutes to 1 second. Again, a huge difference, but now this means that your 3/4 sine wave-looking section occurs in 40 samples, not 40 seconds, which at 300 Hz means it happens in 0.1333 seconds. Then, a full sine wave happens in a period of 0.1777 seconds, for a frequency of 5.6257 Hz. Note, however, that you are using normalized frequency on your last figure, and your signal frequency, 5.6257 Hz, divided by your sampling frequency, 300 Hz, gives you 0.0188. This again shows that the low frequency signal I'm assuming you're trying to eliminate is, in fact, the signal you want.

As a correction, I had previously stated that you didn't provide time data, but in fact you had provided the sampling frequency, which means that you have indirectly provided time data, given the number of samples in the test data you provided.

• And those 0.0188 Hz are the low frequency spikes in the DFT. @UserK: "As you can see from the last Figure, in the spectrum of the signal there are low frequency components that make the job harder." The low frequency components are your actual signal. They don't make the job harder. – Bending Unit 22 Aug 5 '15 at 14:22
• @Chuck I've updated the plot in Figure 2 and corrected the units. Fig 2 represents 1 second of data. The partial sinusoid was generated by a fast movement (~ [4;8] Hz) – UserK Aug 5 '15 at 15:31
• @UserK so the image and data provided aren't representative of the actual data/situation. I don't really understand what the question is then. Do you have data that is more closely related to your use case? – Bending Unit 22 Aug 5 '15 at 15:46
• You do it the way I explained in the answer above - you use your first method with a calibration test. Turn the system on, let it "warm up" for a few (maybe 5) minutes, then start recording sensor output while you leave the gyro stationary. Record for some adequate period of time (a minute or two is probably sufficient), then find the mean of the gyro output while the gyro is stationary. This is your bias. This procedure is called a calibration. Use the bias you determine from the calibration to adjust your sample readings for all readings in the future. No filter needed. – Chuck Aug 5 '15 at 17:03
• I tend to agree with all the points highlighted by Chuck. Let me add one more thing: filtering the gyro has the significant downside of introducing latency in the final response you will use to account for rotations, whereas the gyro is purposely meant to provide very fast measurements.
– Ugo Pattacini Aug 6 '15 at 11:55

Please refer to the article "Keeping a Good Attitude: A Quaternion-Based Orientation Filter for IMUs and MARGs". The authors use a low-pass filter while the gyro is stationary to estimate the gyro bias, and then subtract it from the original signal. They do that only when the gyro is stationary. However, I recommend removing the bias when the gyro is stationary by subtracting the mean value. The method of estimating and subtracting the bias only when the gyro is stationary, based on a low-pass filter, is called online gyro compensation.
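To make the two approaches in this thread concrete, here is a minimal sketch in Python (the question uses Matlab, but scipy's butter() follows the same normalized-Wn convention); the synthetic trace, the 0.5 Hz cutoff, and all variable names are illustrative assumptions, not the poster's actual data or code:

```python
# Sketch of the two bias-removal approaches discussed above.
# Assumed, not from the post: synthetic data, 0.5 Hz cutoff, 5.6 Hz motion.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 300.0                          # sampling frequency from the question (Hz)
rng = np.random.default_rng(0)

# Synthetic stand-in for the gyro trace: ~110 stationary samples with a
# constant bias, then about 3/4 of a sine wave for the fast rotation.
n_still, n_total = 110, 300
y = 0.8 + 0.05 * rng.standard_normal(n_total)            # bias + white noise
y[n_still:n_still + 40] += np.sin(2 * np.pi * 5.6 * np.arange(40) / fs)

# Approach 1 (recommended in the answers): calibrate while stationary,
# then subtract that constant offset from every later sample.
bias_estimate = y[:n_still].mean()
y_calibrated = y - bias_estimate

# Approach 2: 2nd-order Butterworth high-pass. scipy normalizes Wn to the
# Nyquist frequency (fs / 2), the same convention as Matlab's butter().
# Keep the cutoff about a decade below the 5.6 Hz signal of interest.
cutoff_hz = 0.5
b, a = butter(2, cutoff_hz / (fs / 2), btype="highpass")
y_filtered = filtfilt(b, a, y)      # zero-phase filtering, no added lag
```

As the first answer argues, the calibration route is cheaper and avoids skewing the rotation signal the way a high-pass filter can near its cutoff.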
2021-07-25 09:42:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45322301983833313, "perplexity": 602.904107121075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151641.83/warc/CC-MAIN-20210725080735-20210725110735-00632.warc.gz"}
https://schochastics.github.io/netrankr/reference/neighborhood_inclusion.html
Calculates the neighborhood-inclusion preorder of an undirected graph.

neighborhood_inclusion(g)

Arguments

g: An igraph object

Value

The neighborhood-inclusion preorder of g as a matrix object. P[u,v]=1 if $$N(u)\subseteq N[v]$$.

Details

Neighborhood-inclusion is defined as $$N(u)\subseteq N[v]$$, where $$N(u)$$ is the neighborhood of $$u$$ and $$N[v]=N(v)\cup \lbrace v\rbrace$$ is the closed neighborhood of $$v$$. $$N(u) \subseteq N[v]$$ implies that $$c(u) \leq c(v)$$, where $$c$$ is a centrality index based on a specific path algebra. Indices falling into this category are closeness (and variants), betweenness (and variants) as well as many walk-based indices (eigenvector and subgraph centrality, total communicability, ...).

References

Schoch, D. and Brandes, U., 2016. Re-conceptualizing centrality in social networks. European Journal of Applied Mathematics 27(6), 971-985.

Brandes, U., Heine, M., Müller, J. and Ortmann, M., 2017. Positional Dominance: Concepts and Algorithms. Conference on Algorithms and Discrete Applied Mathematics, 60-71.

Examples

require(igraph)

# the neighborhood inclusion preorder of a star graph is complete
g <- graph.star(5, 'undirected')
P <- neighborhood_inclusion(g)
comparable_pairs(P)
#> [1] 1

# the same holds for threshold graphs
tg <- threshold_graph(50, 0.1)
P <- neighborhood_inclusion(tg)
comparable_pairs(P)
#> [1] 1

# standard centrality indices preserve neighborhood-inclusion
g <- graph.empty(n = 11, directed = FALSE)
is_preserved(P, degree(g))
#> [1] TRUE
is_preserved(P, closeness(g))
#> [1] TRUE
is_preserved(P, betweenness(g))
#> [1] TRUE
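To make the definition in the Details section concrete, here is a small illustrative sketch in Python with networkx (the package itself is R; the function below is a hypothetical translation, not part of netrankr):

```python
# Illustrative Python translation of the neighborhood-inclusion check:
# P[u, v] = 1 iff N(u) is a subset of N[v] = N(v) union {v}.
import networkx as nx
import numpy as np

def neighborhood_inclusion(G: nx.Graph) -> np.ndarray:
    nodes = list(G.nodes())
    n = len(nodes)
    P = np.zeros((n, n), dtype=int)
    for i, u in enumerate(nodes):
        N_u = set(G.neighbors(u))
        for j, v in enumerate(nodes):
            if u == v:
                continue
            closed_N_v = set(G.neighbors(v)) | {v}
            if N_u <= closed_N_v:       # set inclusion N(u) <= N[v]
                P[i, j] = 1
    return P

# Star graph: each leaf's neighborhood is just {hub}, which lies inside
# every closed neighborhood, so every pair of vertices is comparable --
# matching comparable_pairs(P) == 1 in the R example above.
G = nx.star_graph(4)    # 5 vertices: one hub, four leaves
P = neighborhood_inclusion(G)
```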
2017-10-18 18:28:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8638715744018555, "perplexity": 8518.80082013833}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823067.51/warc/CC-MAIN-20171018180631-20171018200631-00805.warc.gz"}
http://www.physicsforums.com/showpost.php?p=2515553&postcount=4
I want to understand intuitively why LCM(a,b) = ab/GCF(a,b).

Quote by statdad: I'll use a = 84, b = 96. The prime factorizations of these are \begin{align*} a & = 2^2 \cdot 3\cdot 7 \\ b & = 2^5 \cdot 3 \end{align*} It's easy to see that the LCM of these numbers is $$2^5 \cdot 3 \cdot 7$$ and, further, that the GCF is $$2^2 \cdot 3$$.

Somehow it is not easy for me to see why the LCM of the two numbers is $$2^5 \cdot 3 \cdot 7$$. Can you please tell me how you can tell the LCM from the prime factors of the numbers?
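A worked note on the rule being asked about (added for clarity; it is not part of the original thread): for each prime $p$, let $\alpha_p$ and $\beta_p$ be its exponents in $a$ and $b$. A common multiple must contain $p$ at least $\max(\alpha_p,\beta_p)$ times, and a common factor can contain it at most $\min(\alpha_p,\beta_p)$ times, so

$$\mathrm{LCM}(a,b)=\prod_p p^{\max(\alpha_p,\beta_p)}, \qquad \mathrm{GCF}(a,b)=\prod_p p^{\min(\alpha_p,\beta_p)}$$

Since $\max(\alpha,\beta)+\min(\alpha,\beta)=\alpha+\beta$ for every prime, multiplying the two products gives $\mathrm{LCM}(a,b)\cdot\mathrm{GCF}(a,b)=\prod_p p^{\alpha_p+\beta_p}=ab$. For a = 84 and b = 96: the exponents of 2 are 2 and 5, those of 3 are 1 and 1, and those of 7 are 1 and 0, so LCM $=2^5\cdot 3\cdot 7=672$ and GCF $=2^2\cdot 3=12$, with $672\cdot 12=8064=84\cdot 96$.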
2014-07-29 02:44:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9991891980171204, "perplexity": 144.67242714809404}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510264575.30/warc/CC-MAIN-20140728011744-00285-ip-10-146-231-18.ec2.internal.warc.gz"}
https://gameradvocate.com/post/can_x_ray_pass_through_glass
# Can X Ray Pass Through Glass

Depending on how big the glass shard is, you'll notice recurrent sharp pain at the site as you move or are touched. As far as your body is concerned, glass is inert, so it won't change, discolor or dissolve into your system over time.

Pencil graphite has organic binders holding it together that the body can digest, or at least break down into tiny bits, making it easy to eliminate. Diamond cannot be digested and must pass through the entire GI tract to be eliminated from the body. It can probably be purchased aftermarket from an auto body or glass shop.

You may have heard that you can't get a sunburn through glass, but that doesn't mean glass blocks all ultraviolet, or UV, light. UVC is completely absorbed by Earth's atmosphere, so it doesn't pose a risk to your health. UV light from the sun and man-made sources is mainly in the UVA and UVB range. A houseplant moved from a sunny window to the outdoors can develop leaf burn; this happens because the plant was unaccustomed to the higher levels of UVA found outside, compared with inside a sunny window. For example, most sunglasses made from glass are coated, so they block both UVA and UVB. The laminated glass of automobile windshields offers some (not total) protection against UVA. Automotive glass used for side and rear windows ordinarily does not protect against UVA exposure. Tinting glass reduces the amount of both visible light and UVA transmitted through it.

In a fluorescent bulb, electricity excites a gas, which emits UV light. Your actual exposure depends on how close you sit to the light, the type of product that is used, and how long you are exposed. You can reduce exposure by increasing your distance from the fluorescent fixture or wearing sunscreen. Sometimes the lights are made using special high-temperature glass (which at least filters UVB) or doped quartz (to block UV). UV exposure from a pure quartz lamp can be reduced by using a diffuser (a lampshade) to spread out the light or by increasing your distance from the bulb.

Even if we focus only on a particular kind of radiation, the effects it can produce are many, depending on the elements with which it interacts and on how they are combined (for example, whether they are free atoms, or bound together forming molecules or solids). We can say that the intensity of a beam of monochromatic waves that encounters an object of width L drops exponentially as it crosses that object, $$I(x)=I_0\,e^{-\eta x}$$ where x is the distance the beam has covered, and $$\eta$$ is called the absorption coefficient, which encodes the mechanism of interaction itself, so it is different for X-rays and visible light. However, X-rays are thousands of times more energetic than visible light, due to the energy-frequency relation $$E=h\nu=h\frac{c}{\lambda}$$ So the shorter the wavelength, the higher the energy of the photon.

From a benefit perspective, glass containers, jars, bottles and other vessels are a solid (no pun intended) choice for a variety of food and nonfood products. To prevent the potential food safety hazard of glass shards before it becomes a direct consumer problem, advanced inspection of products packaged in glass is crucial and is an integral part of a manufacturer's HACCP plan.
The QuadView, as its name suggests, provides four-view detection for a more comprehensive inspection of containers up to 12" tall and 6" in diameter, overcoming potential blind spots. As with other Eagle systems, the QuadView also performs inline quality checks, including fill level inspection and package integrity, among other assessments and analyses. When you realize how much time and money it can save you in the long run, there really is no reason not to employ this tactic in the food packaging industry. Using a proper inspection process to detect any possible contaminants is the best way to ensure the quality of your products and the safety and satisfaction of your customers.

Glass is used quite commonly in many industries, including food packaging, and for good reason: glass is the only type of packaging that rates the FDA's highest safety ranking, "GRAS," or generally recognized as safe. When you compare this to the potential for chemical contamination seen with other packaging materials such as plastic, it's easy to see the difference. In the manufacturing and packaging process, there is quite a bit of stress put on individual items and, consequently, a high risk of breakage, which can result in glass shards contaminating food products.

Generally, radiation with wavelengths much shorter than visible light has enough energy to strip electrons from atoms. In general, the shorter the wavelength, the greater the danger to living things.

TL;DR (Too Long; Didn't Read)

The most dangerous frequencies of electromagnetic energy are X-rays, gamma rays, ultraviolet light and microwaves. X-rays, gamma rays and UV light can damage living tissues, and microwaves can cook them. These waves are smaller than an atom and can pass through most materials as sunlight passes through glass. Although X-rays have many beneficial applications, using them requires caution, since exposure can cause blindness, cancer and other injuries. Because it has longer wavelengths than X-rays, UV causes less damage to tissue, but even so, it is still not completely safe. Nuclear processes in atoms produce gamma rays, which have more energy and greater penetrating power than X-rays. Food producers use gamma ray devices to kill mold, germs, and parasites in fruits and vegetables. Cell phones and other gadgets emit microwaves, although they are generally considered too weak to affect living tissue.

When doctors need to get a better look at what's going on in their patients' bodies, they will often refer them to receive some type of diagnostic imaging. Unfortunately, the thought of having these tests done can often make patients anxious, but it's important to remember that diagnostic imaging is typically non-invasive and painless. At the time, Röntgen was exploring how electrical rays could pass from an induction coil through a glass tube. This discovery proved that the internal parts of our body can be seen without needing invasive and risky surgery to do so. It was eventually recognized that frequent exposure to X-rays could be harmful, but today special measures are taken to protect the patient and doctor, and prevent complications. Today, digital radiography has several advantages over traditional film/screen X-rays, including less radiation, image quality for accurate diagnosis, and quicker results.
Ultrasound is the technology, or the "eyes" if you will, for helping doctors get a closer look to make an accurate diagnosis. During an ultrasound exam, a probe called a transducer is placed directly on the skin or inside the body. The ultrasound images are produced based on the reflection of the waves off the body structures. The strength or amplitude of the sound signal and the time it takes for the wave to travel through the body provide the information necessary to produce an image.

An MRI uses a powerful magnetic field combined with specific radio frequencies to create detailed images of internal body structures with the aid of a sophisticated computing system. MRI scans can also help your doctor gain a better understanding of your joints, cartilage, bone, and soft tissues in a way that other tests cannot.

Computed Tomography technology was originally created for taking detailed pictures of the brain. Today, CT scans are used to take pictures of internal organs, bones, soft tissue, and blood vessels.
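Returning to the attenuation law $$I(x)=I_0 e^{-\eta x}$$ quoted earlier: a short illustrative calculation (the absorption coefficients below are assumed placeholder values, not measured data for real glass) shows how strongly the transmitted fraction depends on the absorption coefficient:

```python
# Exponential attenuation I(x) = I_0 * exp(-eta * x), as quoted above.
# The eta values are assumed placeholders, not measured coefficients.
import math

def transmitted_fraction(eta_per_cm: float, thickness_cm: float) -> float:
    """Fraction of incident beam intensity surviving a slab."""
    return math.exp(-eta_per_cm * thickness_cm)

# The same 0.5 cm slab passes different fractions of two beams because
# eta depends on both the material and the radiation's wavelength.
for label, eta in [("beam A (assumed eta = 0.04 /cm)", 0.04),
                   ("beam B (assumed eta = 0.9 /cm)", 0.9)]:
    print(f"{label}: {transmitted_fraction(eta, 0.5):.1%} transmitted")
```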
2021-10-28 20:23:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3306984603404999, "perplexity": 1685.253366576863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588526.57/warc/CC-MAIN-20211028193601-20211028223601-00109.warc.gz"}
https://tiu.edu.iq/scholarship-offers/
Tishk International University (TIU) offers discounts on tuition fees for the academic year (2020 – 2021) for the following departments. If you are looking for quality education at the top-ranked private university, you shouldn't miss this opportunity. You have the chance to study at TIU and get a scholarship in one of the departments mentioned below. Check the requirements…

Tishk International University Departmental Discounts for Academic Year (2020 – 2021)

Engineering Faculty:

1- Students who apply for the Civil Engineering Department:
· If the percentage degree is more than 90%, she/he will get a 50% scholarship.

2- Students who apply for the Surveying and Geomatics & Mechatronics Departments:
· If the percentage degree is more than 85%, she/he will get a 60% scholarship.
· If the percentage degree is between 80 – 85%, she/he will get a 50% scholarship.

Education Faculty:

3- Students who apply for the Mathematics Education Department:
· If the percentage degree is more than 70%, she/he will get a 100% scholarship.
· If the percentage degree is between 60 – 69%, she/he will get an 80% scholarship.

4- Students who apply for the Biology Education Department:
· If the percentage degree is more than 70%, she/he will get a 100% scholarship.
· If the percentage degree is between 60 – 69%, she/he will get an 80% scholarship.

5- Students who apply for the Physics Education Department:
· If the percentage degree is more than 70%, she/he will get a 100% scholarship.
· If the percentage degree is between 60 – 69%, she/he will get an 80% scholarship.

6- Students who apply for the Computer Education Department:
· She/he will get an 80% scholarship.

7- Students who apply for the ELT Department:
· If the percentage degree is more than 80%, she/he will get a 50% scholarship.

Administrative Sciences and Economics Faculty:

8- Students who apply for the International Relations and Diplomacy Department:
· If the percentage degree is more than 90%, she/he will get an 80% scholarship.
· If the percentage degree is between 80 – 89%, she/he will get a 50% scholarship.
· If the percentage degree is between 70 – 79%, she/he will get a 40% scholarship.
· If the percentage degree is between 60 – 69%, she/he will get a 20% scholarship.

9- Students who apply for the Business Management Department:
· If the percentage degree is more than 90%, she/he will get an 80% scholarship.
· If the percentage degree is between 80 – 89%, she/he will get a 50% scholarship.
· If the percentage degree is between 70 – 79%, she/he will get a 30% scholarship.

10- Students who apply for the Accounting Department:
· If the percentage degree is more than 80%, she/he will get an 80% scholarship.
· If the percentage degree is between 70 – 79%, she/he will get a 50% scholarship.
· If the percentage degree is between 60 – 69%, she/he will get a 30% scholarship.

11- Students who apply for the Banking and Finance Department:
· If the percentage degree is more than 80%, she/he will get a 100% scholarship.
· If the percentage degree is between 55 – 79%, she/he will get an 80% scholarship.

12- Students who apply for the Tourism Department:
· She/he will get an 80% scholarship.

Law Faculty:

13- Students who apply for the Law Department:
· If the percentage degree is more than 90%, she/he will get a 70% scholarship.
· If the percentage degree is between 80 – 89%, she/he will get a 50% scholarship.
· If the percentage degree is between 70 – 79%, she/he will get a 30% scholarship.

14- Any student who graduated from an English-curriculum high school will get a 20% scholarship in all departments, except the Dentistry and Pharmacy Departments.

Note: If a student qualifies for more than one scholarship, s/he has to choose one of them.
2020-12-02 19:22:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8211407661437988, "perplexity": 2104.8717000083548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141715252.96/warc/CC-MAIN-20201202175113-20201202205113-00668.warc.gz"}
https://www.math.ntnu.no/conservation/2003/039.html
### Renormalized Entropy Solutions for Quasilinear Anisotropic Degenerate Parabolic Equations

Mostafa Bendahmane and Kenneth H. Karlsen

Abstract: We prove the well-posedness (existence and uniqueness) of renormalized entropy solutions to the Cauchy problem for quasilinear anisotropic degenerate parabolic equations with $L^1$ data. This paper complements the work by Chen and Perthame, who developed a pure $L^1$ theory based on the notion of kinetic solutions.

Paper: Available as PDF (264 Mbytes).

Author(s):
Mostafa Bendahmane, <mostafab@math.uio.no>
Kenneth H. Karlsen, <kennethk@mi.uib.no>

Publishing information: Revised version submitted on June 10, 2003.
2017-12-16 03:33:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7418183088302612, "perplexity": 3885.957199262587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581053.56/warc/CC-MAIN-20171216030243-20171216052243-00523.warc.gz"}
https://brilliant.org/problems/2015-countdown-problem-12-sums-of-consecutive/
# 2015 Countdown Problem 12: Sums of consecutive numbers

How many integers between 2 and 2015 inclusive cannot be expressed as a sum of at least two consecutive positive integers?
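A brute-force check, added here as an illustration and not part of the original problem page: a number n is a sum of k >= 2 consecutive positive integers starting at a exactly when n = k*a + k(k-1)/2 for some positive integer a, which fails precisely when n is a power of 2. A short Python sketch confirms the count:

```python
# Count n in [2, 2015] that are NOT a sum of >= 2 consecutive positive
# integers. n = k*a + k*(k-1)/2 must have a positive integer solution a.
def expressible(n: int) -> bool:
    k = 2
    while k * (k + 1) // 2 <= n:            # smallest possible sum for this k
        rem = n - k * (k - 1) // 2          # equals k*a if a solution exists
        if rem > 0 and rem % k == 0:
            return True
        k += 1
    return False

count = sum(1 for n in range(2, 2016) if not expressible(n))
print(count)  # prints 10: the powers of 2 from 2 up to 1024
```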
2017-07-26 01:00:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9039217233657837, "perplexity": 323.959078090241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425737.60/warc/CC-MAIN-20170726002333-20170726022333-00390.warc.gz"}